# Tripping and laminar–turbulent transition:
Implementation in RANS-EVM
N. Tabatabaei1,2∗, G. Fahland3, A. Stroh3, D. Gatti3, B. Frohnapfel3
M. Atzori1,2, R. Vinuesa1,2, and P. Schlatter1,2
1 SimEx/FLOW, Engineering Mechanics, KTH Royal Institute of Technology,
Stockholm, Sweden
2 Swedish e-Science Research Centre (SeRC), Stockholm, Sweden
3 Institute of Fluid Mechanics, Karlsruhe Institute of Technology (KIT),
Karlsruhe, Germany
[email protected]
Abstract
Fundamental fluid-mechanics studies and many engineering developments are
based on tripped cases. Therefore, it is essential for CFD simulations to
replicate the same forced transition in spite of the availability of advanced
transition modelling. In the last decade, both direct numerical and large-eddy simulations (DNS and LES) have included tripping methods to avoid the need for modeling the complex mechanisms associated with the natural transition process, an approach we would like to bring over to Reynolds-averaged Navier–Stokes (RANS) turbulence models. This paper investigates the necessity and applications of numerical tripping despite the developments in numerical modeling of natural transition. The second goal of this paper is to
assess a technique to implement tripping in eddy-viscosity models (EVM) for
RANS. A recent turbulence-generation approach, denoted the turbulence-injection method ($k$I), is evaluated and investigated through different test
cases ranging from a turbulent boundary layer on a flat plate to the three-
dimensional (3D) flow over a wing section. The desired tripping is achieved at
the target location and the simulation results compare favourably with the
reference results (DNS, LES and measured data). With the application of the
model, the challenging transition region can be minimised in a simulation, and
consequently more reliable results are obtained.
## 1 Introduction
Modeling the laminar–turbulent transition is still a challenging subject,
especially for engineering computational fluid dynamics (CFD). The exact
placement of the laminar–turbulent transition has a significant effect on
relevant characteristics of the boundary layer and aerodynamics, such as drag,
heat transfer and flow separation on e.g. wings and turbine blades. For
instance, the inaccuracy in prediction of the transition onset can result in
larger separation regions near the wing trailing edge. Such limitations of CFD
simulations increase the discrepancy between experimental and numerical data
in the design processes.
Tripping, which fixes the transition position, has been implemented in wind-
tunnel experiments to promote early transition to turbulence in the boundary
layer for the past 70 years, because it makes the transition independent of
the local condition of the free-stream. In the first part of the paper in
section 2, the applications of the tripping technique are discussed, in order
to describe why one needs a tripping model rather than a model to simulate
different transition mechanisms. To bring tripping to practical applications,
there is a demand to assess the implementation of tripping mechanisms with
Reynolds-averaged Navier–Stokes (RANS) approach which can also serve as a
design tool. The laminar–turbulent transition models are discussed in section
3, focusing on RANS-EVM.
With the goal of replicating the same transition point as in the experiments
(and corresponding resolved simulations), and removing the uncertainty in the
numerical model of forced transition, this study investigates a numerical
approach that mimics the effect of a turbulence trip. Tripping in experiments is not always implemented at the natural transition point; rather, the target point of forced transition may shift for different reasons, such as flow control.
Therefore, a flexible numerical approach is required to control the flow
condition, as it has already been investigated in LES and DNS. The present
work considers a method to implement tripping with RANS, as well as its assessment against wind-tunnel experiments and similar tripping approaches in LES and DNS. The methodology and the results are described in
sections 3 and 4, respectively. The ultimate goal of the project is to develop
a numerical–simulation approach which can represent the complex experimental
setup and measurements in a typical wind tunnel, reduce the uncertainty in
design of a setup, and thus increase the fidelity of a campaign. This motivates including three-dimensional (3D) setups as part of this research,
such as complete wind-tunnel setups, in the framework of a _virtual wind
tunnel_ [insert_1].
## 2 Tripping applications and models
Due to the variety of transition types and the complexity of modeling the
different mechanisms in numerical simulations, one alternative is to start
with a correct boundary layer at a certain Reynolds number ($Re_{\theta}$),
using fully turbulent boundary-layer data as inflow, e.g. from a direct
numerical simulation, DNS. In this way, there is no need to go through any
transition model, _i.e._ the flow is turbulent throughout the computational
domain. However, such data are not always available. Another possibility is to
skip the whole transition process by tripping the boundary layer. In this way,
transition is forced at a prescribed and meaningful location, rather than the
natural transition process.
Additionally, the action of forcing transition from a laminar to a turbulent
boundary layer (BL) is common in wind-tunnel testing to eliminate the later
transition caused by testing at reduced Reynolds numbers ($Re$) [scale]. For
example, at low $Re$ and for an airfoil at stall-angle, it is essential to
ensure that the transition occurs before the laminar flow separation. BL trips
are traditionally also used in scale-model testing to aid in scaling the flow characteristics: since duplicating full-scale $Re$ is not feasible in the wind tunnel, the BLs that develop on wind-tunnel models do not correspond to those on full-scale vehicles. For
this purpose, tripping devices are placed on models to hasten the BL
transition from laminar to turbulent flow. In this way, the characteristics
which are sensitive to the condition of the BL are more accurately simulated
in tripped cases [whytripping], since the transition is fixed independently of the local free-stream condition, which may differ from case to case.
Furthermore, the key idea of passive techniques is to trip the BL to re-
energize the flow so that the flow remains attached [RANStrip]. The skin
friction, and consequently the drag, change due to the shift of the transition
position from one test to another, which defines the turbulent portion of the
model. The different transition onset in adverse-pressure-gradient (APG) flow
can also result in larger separation regions farther downstream, with the
corresponding impact on aerodynamic performance [nargesPhD]. In addition, it
is sometimes possible to duplicate the relative thickness of the full-scale
turbulent BL at certain locations on the wind-tunnel model by fixing
transition at the proper location [Blackwell1969PreliminarySO].
Most studies focusing on the physics of turbulent BLs employ tripping to
promote early and robust transition to turbulence [erm_joubert_1991, RamisAd,
sanmigue]. For instance, different tripping devices were studied, in the
context of a flat-plate BL [head_bandyopadhyay_1981, ExpTrip2017]. Roughness
elements were also used to force transition in order to eliminate the
transitional effects [ExpTrip2019, bypass]. The wide and continuous
application of tripping in wind tunnels [recentTrp] motivated the use also in
numerical methods to model similar effects in order to replicate the
experimental data. Therefore, tripping methods evolved beside the numerical
transition models that attempted to model the natural transition process.
Referring to the variety of transition types and the complexity of modeling
the complex physical interactions and mechanisms leading to transition,
tripping models have the advantage of simplicity, which results in a much
lower uncertainty. For instance, various tripping strategies were assessed
over a flat plate by DNS [schlatter_orlu_2012] and were later implemented in
large-eddy simulation (LES) for a 2D airfoil [LEStrip].
## 3 Laminar–turbulent transition and tripping in EVM
Among eddy-viscosity RANS models (EVM), the one-equation turbulence model of
Spalart–Allmaras (SA) contains a trip term [SA]. These authors used the word ‘trip’ to mean that the transition point is either imposed by an actual trip, or natural but obtained from a separate method [SA]. The trip version of the SA
model, named SA-Ia, is rarely used, because the model is most often
employed for fully-turbulent (FT) applications [SAla]. Its trip term was found
to be inadequate to force transition at a specified location, specifically for
hypersonic flows [SAtrip]. The $k$-$\omega$ SST model, the most common two-equation EVM, was developed in 1994 [SST_BSL]. The formulations of both
turbulence transport equations ($k$ and $\omega$) are based on the features of
a fully turbulent BL, so the initial laminar region, and consequently the transition part, are not modeled accurately. Such a formulation induces an early turbulent-viscosity buildup (as if, _e.g._, there were a surface roughness) and therefore produces FT flow over a region that is laminar in the physical model, leading to an over-estimation of the drag [FoulingRoughness, nargesPhD].
RANS-based BL transition algorithms have been broadly considered in the literature for several decades [kklmod, kklomega]. Most commonly, transition models consist of two main parts: 1. Define the laminar, transition and FT regions,
_i.e._ the intermittency ($\gamma$) distribution. This can be done via two,
one, or even zero transport equations [ansysGuidetheory, Menter,
Langtry2006ACT]; 2. Apply the modifications into the turbulence model, _i.e._
in the $k$ and $\omega$ equations, which is referred to as ‘coupling with
$k-\omega$ SST’ model. The currently available transition models in RANS are
typically based on empirical correlations, but are not specifically aimed at
representing the physical mechanisms in the transition process. As discussed
by Langtry [Langtry2006ACT], they do not attempt to model the physics of the
transition process (unlike _e.g._ turbulence models), but form a framework for
the implementation of transition correlations into general purpose CFD
methods. They are basically designed to cover the standard ‘bypass
transition’, as well as flows in low free-stream turbulence environments
(since the transition location is correlated with the free-stream turbulence
intensity, based on laboratory data) [ansysGuidetheory]. Besides such models becoming unstable (in terms of convergence) [FoulingRoughness], it was
observed that setting a lower value for the free-stream turbulence in the CFD
simulations would result in a later transition prediction than observed in the
physical model [RANShyb]. Apart from the pros and cons of such approaches in
simulating the transition process, recent studies show that there is potential
for uncertainty or error in simulating the forced transition case with RANS
models [FoulingRoughness]. In order to initiate transition at the same location
as the experimental data, zigzag tapes were used as the model, but the
uncertainties inevitably appeared even in the calculations of the integrated
parameters, _e.g._ the total power of the whole turbine rotor. Certainly, it
is more challenging when a point-to-point comparison is intended, _e.g._ in
chordwise $C_{p}$ distribution. Similarly, turbulence tripping was implemented
in RANS using a specific type of obstacle in the geometry, which caused flow
disturbances that facilitated the transition from laminar to turbulent flow
[RANStrip]. Although a sudden jump in the local pressure was achieved, a
spurious small vortex emerged downstream of the obstacle. As argued in section 2, tripping is a required feature in RANS models, yet on the numerical side we found no suitable model to implement it. In the following sections we discuss a tripping method for the $k$-$\omega$ SST model.
## 4 Methodology
We start by describing a turbulence-generation mechanism, which was recently
adopted by Fahland [Gerog] for the RANS simulation of the flow around an
airfoil. We denote this as ‘injection method’ ($k$I), and it is based on
directly modifying the turbulent kinetic energy, $k$ at the target trip point.
This technique serves as an efficient tripping and the results are in
agreement with the other methods described in Ref. [tripping_3].
Note that the injection of extra $k$ effectively promotes transition at the
position or shortly downstream of it. The modelled transport equation for $k$ may be written as
$\frac{\partial(\rho k)}{\partial t}+\frac{\partial(\rho u_{j}k)}{\partial x_{j}}=P_{k}-D_{k}+\frac{\partial}{\partial x_{j}}\left[(\mu+\sigma_{k}\mu_{t})\frac{\partial k}{\partial x_{j}}\right]+S_{k}\ ,$ (1)
where $P_{k}$ and $D_{k}$ denote the production and dissipation terms
respectively [SST_BSL]. The coefficients $\mu$ and ${\mu}_{t}$ denote the
dynamic viscosity and turbulent dynamic viscosity, respectively, and the
corresponding term (the third term on the right-hand side) refers to the
diffusion of $k$. The last term on the right-hand side, $S_{k}$, is included
to account for the sources of turbulent kinetic energy. The $k$I approach is
based on adding a local $S_{k}$ at the target trip point so that the flow
becomes FT immediately, in the same way as in the experimental tripping.
Furthermore, the standard $k-\omega$ SST model leads to an early transition so
that the BL becomes FT even before the physical transition position. In order
to ensure the turbulence model does not lead to a premature deviation from the
laminar solution, the value of $k$ can be set to zero in the domain just upstream of the tripping. This constraint avoids an over-estimation of
$k$ in the laminar region, which is anyway calculated from the FT equations in
the $k-\omega$ SST model.
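As a concrete illustration of how the $S_{k}$ injection and the upstream $k=0$ constraint act together, the following toy model solves a one-dimensional advection–diffusion surrogate of Eq. (1) with a localized source at the trip location. This is only a sketch with invented parameters: it omits the production and dissipation terms $P_{k}$ and $D_{k}$ and is not the actual $k$-$\omega$ SST implementation.

```python
import numpy as np

# Toy 1D surrogate of the k-transport equation (1): advection plus
# diffusion plus a localized injection source S_k at the trip point,
# with k clamped to zero upstream of the trip (the laminar constraint).
# All parameters are illustrative; P_k and D_k are omitted.

nx = 200
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
u, nu_eff = 1.0, 1e-3                 # advection speed, effective diffusivity
x_trip = 0.3                          # target trip location
S_k = np.where(np.abs(x - x_trip) < 2 * dx, 50.0, 0.0)   # local source

k = np.zeros(nx)
dt = 0.4 * min(dx / u, dx**2 / (2.0 * nu_eff))
for _ in range(20000):                # march to a steady state
    adv = -u * np.diff(k, prepend=k[0]) / dx                      # upwind
    dif = nu_eff * np.diff(k, n=2, prepend=k[0], append=k[-1]) / dx**2
    k += dt * (adv + dif + S_k)
    k[x < x_trip - 2 * dx] = 0.0      # enforce k = 0 upstream of the trip
```

At steady state, $k$ is zero upstream of the trip and finite downstream of it, mimicking an immediate switch to a turbulent state at the injection point.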
Figure 1: Illustration of the injection–tripping procedure in various flows
relevant for the present work. See text for more details.
Note that the value of $S_{k}$, the injection magnitude, should be large
enough to raise the local skin-friction coefficient $C_{f}$ at the tripping
point. An advantage of the $k$I method is the simplicity of the
implementation, because the source terms can typically be externally modified
without changing the core of the solver. Fig. 1 illustrates how the injection
area is specified for a flat plate and a wing model, similar to tripping in
the experiments. The laminar region, _i.e._ the $k=0$ area, is defined to
implement the turbulence constraint, which was defined at the beginning of
this section. The depth of the injection area is suggested to be approximately equal to the momentum thickness ${\theta}$ at the tripping location [Gerog], since the trip should be located inside the BL region. The $k$I region in the bottom of Fig. 1 shows the injection section, and the injection bar assigned on the wing is shown in the top of Fig. 1.
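A sketch of how such an injection region could be specified in practice is given below: the trip strip sits at the target streamwise location, and its wall-normal depth is set to the local momentum thickness $\theta$ computed from the laminar profile at that station. The sine-shaped profile, the dimensions and all names are invented for illustration and do not correspond to the actual solver setup.

```python
import numpy as np

# Specify a flat-plate injection region whose depth equals the local
# momentum thickness theta, as suggested in the text. The sine-shaped
# laminar profile and all dimensions are illustrative placeholders.

def momentum_thickness(y, eta):
    """theta = integral of eta * (1 - eta) dy, with eta = u / U_inf."""
    dy = y[1] - y[0]
    return float(np.sum(eta * (1.0 - eta)) * dy)

delta = 0.01                            # boundary-layer thickness [m]
y = np.linspace(0.0, 0.05, 500)
eta = np.where(y < delta, np.sin(0.5 * np.pi * y / delta), 1.0)
theta = momentum_thickness(y, eta)      # depth of the injection strip

x_trip, strip_len = 0.30, 0.005         # trip location and streamwise extent

def in_injection_region(x_pos, y_pos):
    """True for cells that receive the extra source S_k."""
    return (x_trip <= x_pos <= x_trip + strip_len) and (y_pos <= theta)
```

In a finite-volume solver, a mask like `in_injection_region` would select the cells in which the extra source term is added to the $k$ equation.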
The assessment includes three test cases. We consider the Minimum-Turbulence-
Level (MTL) wind tunnel at KTH Royal Institute of Technology. The NACA4412
profile is the reference airfoil selected for this study. As the base
turbulence model, a two-equation EVM is considered: $k$-$\omega$ SST. OpenFOAM is used as the CFD solver.
## 5 Results
The results are shown in terms of $C_{f}$, the normalized wall-shear stress. The shape factor $H_{12}$ is also plotted as an indication
of the boundary-layer development, since it is the ratio of the displacement
to momentum thicknesses. For the mid-height section of the wing, the chordwise
$C_{p}$ distribution is plotted, where the static pressure $p$ is non-
dimensionalized with the dynamic pressure $P_{d}$. Three test cases are
discussed in the following. Two main features are intended to ensure proper
tripping: i) a sufficiently sharp $C_{f}$ increase, which is ii) immediate at
the intended tripping location.
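The quantities used in these comparisons follow directly from their standard definitions: $C_{f}=\tau_{w}/(\tfrac{1}{2}\rho U_{\infty}^{2})$, $H_{12}=\delta^{*}/\theta$ and $C_{p}=(p-p_{\infty})/P_{d}$. A minimal sketch of their evaluation, using an artificial linear velocity profile and example pressure values (all numbers are illustrative, not from the present simulations):

```python
import numpy as np

# Evaluate C_f, H_12 and C_p from their definitions. The linear
# velocity profile and the pressure values are illustrative only.

rho, mu, u_inf = 1.2, 1.8e-5, 10.0      # air-like properties
delta = 0.01                            # boundary-layer thickness [m]
y = np.linspace(0.0, 0.02, 2000)
dy = y[1] - y[0]
u = u_inf * np.minimum(y / delta, 1.0)  # crude linear profile

tau_w = mu * (u[1] - u[0]) / dy         # wall shear stress
c_f = tau_w / (0.5 * rho * u_inf**2)    # skin-friction coefficient

eta = u / u_inf
delta_star = np.sum(1.0 - eta) * dy     # displacement thickness
theta = np.sum(eta * (1.0 - eta)) * dy  # momentum thickness
h12 = delta_star / theta                # shape factor (3 for a linear profile)

p, p_inf = 101200.0, 101325.0           # example static pressures [Pa]
c_p = (p - p_inf) / (0.5 * rho * u_inf**2)
```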
I. Flat plate with zero pressure gradient (ZPG): In Fig. 2, two scenarios are
tested to assess the method efficiency: early and delayed tripping. First the
flow is tripped immediately after the domain inlet as an ‘early-trip’.
Conversely, in the case denoted as ‘delayed trip’, the simulation keeps the
flow laminar for some distance before it is tripped, see Ref. [tripping_3]. It
is shown that boundary-layer development can be controlled with this tripping
method, which results in an adaptive laminar–turbulent transition.
(a)
(b)
Figure 2: Injection Tripping for ZPG flat plate, where we show the skin-
friction coefficient $C_{f}$ and the shape factor $H_{12}$. Fully-turbulent
baselines are based on Refs. [DNS_FT_2010, LES2014FT,
SelfConsist_Monkewitz2007]; ‘FTI’ denotes the fully-turbulent profile at inflow from DNS; ‘BI’ denotes the Blasius inflow. $Re_{\theta}$ and $C_{f}$ are shown in log scale, while the $H_{12}$ plot is in linear scale.
To evaluate the $k$I tripping in RANS, in this part we focus on the
low-$Re_{\theta}$ trends, which were studied in Ref. [schlatter_orlu_2012] via
the use of different tripping parameters in DNS, see Fig. 3. The resulting
turbulent flow with $k$I tripping quickly adapts to the canonical form of the
turbulent BL, with shorter development length than the non-optimal tripping as
studied by DNS.
Figure 3: Comparison of the injection methods with various tripping parameters
implemented in DNS [schlatter_orlu_2012]. FT baseline is according to Ref.
[Osterlund8624] (oil-film fit 1999).
II. 2D airfoil: Tripping with RANS has been implemented for an isolated
airfoil (a NACA4412 wing section in free-flight conditions) and compared with
a well-resolved LES of the same case [RicardADD] tripped with the method in
Ref. [schlatter_orlu_2012]. Skin-friction plots are in very good agreement, as
observed in Figure 4(a). The $k$I tripping (RANS-$k$I) is applied to an
airfoil in a wind tunnel and the results are in agreement with another
tripping method, described in Ref. [tripping_3]. The standard k-$\omega$ SST
is denoted as RANS.
(a) Injection tripping: well-resolved LES [LES2014FT] vs RANS-$k$I
(b) Wind–tunnel tripping is compared to the reference data, denoted as
RANS-$\gamma$-${k}$I in Ref. [tripping_3].
Figure 4: Skin-friction coefficient $C_{f}$ for 2D cases at two angles of attack (AOA): (a) isolated airfoil at AOA $=5^{\circ}$; (b) airfoil in a wind tunnel at AOA $=11^{\circ}$.
III. 3D wing in a wind tunnel: A 3D RANS simulation of a wing at $11^{\circ}$ angle of attack is performed with the same tripping location as in the wind-tunnel experiment. The qualitative velocity contours at several selected
sections are illustrated as well as the chordwise $C_{p}$ distribution (Figure
5(a)). At such a high angle of attack, 3D RANS results in a lower suction
compared to experiments, while excellent agreement is achieved with the proposed tripping technique (Fig. 5(b)). Similarly good agreement is also
observed for $C_{f}$ (not shown here).
(a) Velocity-magnitude contours
(b) Pressure coefficient, $C_{p}$
Figure 5: Tripping on a wing placed in a wind tunnel.
## 6 Conclusions and outlook
An adaptive method for forced laminar–turbulent transition is assessed in this
paper. Different applications of this tripping are discussed in an effort to
replicate the tripped experimental tests, which is the specific purpose of
this research. The implementation of laminar–turbulent tripping is assessed in
a RANS–EVM turbulence model with the purpose of developing more reliable
aerodynamic simulations, in which the uncertain (and ultimately unnecessary)
modeling of the transition process is avoided. Two main features are intended for the numerical tripping in the $k$-$\omega$ SST model, in accordance with the function of experimental trip devices: first, transition onset at the exact target trip location; and second, a short development length.
results from the turbulence-injection ($k$I) method show a fair agreement with
DNS and LES tripping approaches and experimental data. The tripping technique
in 3D RANS simulation improves the results significantly so that a very good
agreement between the experimental data and the 3D RANS is achieved.
Therefore, the proposed tripping method is a potential approach to replicate experimentally measured data from a real wind tunnel. This opens the way for faithful predictions of wind-tunnel experiments using RANS.
Financial support provided by the Knut and Alice Wallenberg Foundation is
gratefully acknowledged. The computations were enabled by resources provided
by the Swedish National Infrastructure for Computing (SNIC) at PDC and HPC2N.
# The Role of Functional Programming in Management and Orchestration of
Virtualized Network Resources††thanks: Supported by ERASMUS+ project “Focusing
Education on Composability, Comprehensibility and Correctness of Working
Software”, no. 2017-1-SK01-KA203-035402 and the research project “Reliability
and Safety in Complex Software Systems: From Empirical Principles towards
Theoretical Models in View of Industrial Applications (RELYSOFT)” no.
IP-2019-04-4216 funded by the Croatian Science Foundation.
Part I. System structure for Complex Systems and Design Principles
Tihana Galinac Grbac, Juraj Dobrila University of Pula, Zagrebačka 30, HR-52100 Pula, Croatia. ORCID: 0000-0002-4351-4082. [email protected]
###### Abstract
This is part I of the follow-up lecture notes of the lectures given by the authors at the _Three “CO” (Composability, Comprehensibility, Correctness)_ Winter School held in Košice, Slovakia, in January 2018, and the Summer School held in Budapest, Hungary, in June 2019. In this part we explain the role of the functional programming paradigm in the management of complex software systems, and how functional programming concepts play an important role in the design of such systems. A key prerequisite for implementing functional programming concepts is a properly designed system structure that follows well-defined design principles and rules; the main goal of this lecture is therefore to introduce students to proper system modeling. Furthermore, we explain how new emerging technologies are designed in such a way that they enforce the development of systems that comply with design rules inspired by functional programming. This is extremely important in view of the current network evolution and virtualization concepts, which will require many functional programming concepts in network services and functions, as will be discussed in part II of these lecture notes.
These notes provide an introduction to the subject, with the goal of
explaining the problems and the principles, methods and techniques used for
their solution. The worked examples and exercises serve students as the
teaching material, from which they can learn how to use design principles to model effective system structures. Here we focus on students' understanding of the importance of effective system structures for the coordination of development and management processes that are driven by business goals and further evolution.
###### Keywords:
Network Function Virtualization · Management and orchestration · Complex software systems · OpenStack platform.
## 1 Introduction
During the last decade of working with undergraduate and graduate students of
computing at the Faculty of Engineering, my main reflection on their project work is the lack of understanding of what a complex software system is, what and where the problems with complex software systems are, and why we need, and how to define and design, effective software system structures. Students lack a basic understanding of system modelling concepts in general.
On the other hand, from my decade of industrial experience within Ericsson R&D, I am aware of the importance of these particular skills for software engineers, since the majority of the software industry is increasingly facing the complexity of software systems. The main problem is that students are usually not exposed to the design and development of large complex systems; usual student projects are smaller software products built in teams of at most 1-5 students over at most 4 months during a semester. These lecture notes are intended for the final year of master's or PhD study programs, with the main aim of giving students a better understanding of the complexity of modelling and building complex software systems. Therefore, in this lecture I relate theoretical definitions not only to examples of complex software systems but also to examples of other complex systems from everyday life, so that students may more easily grasp the main point of this lecture.
Students are asked to think about example systems and to apply the design principles to these examples. By doing this we engage students in complexity thinking and in reasoning about causal complexity, promoting discussions around the main learnings. A similar teaching approach has been used in [11]. Students are given a task before each part of the course, and student solutions are discussed at the end of each part. Throughout the practical cases we ask students to construct their own view of the content learned, applied to concrete cases of complex software systems as well as complex systems in other contexts, with reflections on the main learnings.
The setting of these lectures is within the theory of complex systems, in particular complex software systems and telecommunication networks. Hence, the lectures start with a gentle introduction to the theory, carefully positioning the considered problems and challenges within the current evolution of software systems and networks. Throughout the lecture numerous examples are provided and discussed with students. Moreover, students are asked at the beginning of the lecture to take paper and pencil and perform exercises along the course. The course is divided into the following main parts:
* •
Introduction to complex software systems, definition of complex system and
challenging aspects of their management.
* •
System design principles.
* •
Technologies enforcing design principles.
* •
Reflections on practical examples.
Firstly, we introduce the term complex system and relate it to complex systems theory. A complex system is defined as a system composed of a large set of components that interact with each other to accomplish a specified system functionality. However, the global properties of the system, observed while it executes these functionalities, cannot be deduced or predicted from the local properties of the system components. This is exactly the behaviour observed in mission-critical and large-scale software systems that concurrently serve numerous users. Here we introduce the telecommunication network and the mobile switching centre as examples of complex software systems and discuss their properties in relation to the standard complex-system definition. Students are asked to think of an example of a complex system from everyday life.
Then we discuss the main challenges in evolving these complex systems. These challenges are mostly related to delivering further versions of such systems while keeping their global properties under control. The biggest problems arise when we cannot predict the consequences of introducing changes within the system, and when we cannot predict its reaction to environmental conditions. Management of these systems becomes extremely costly, requires a lot of expertise, responds slowly, and loses its competitive business power. At the same time, as the system grows in the number of functionalities, it loses efficiency: the number of interactions within the system grows exponentially, faults become harder to locate, and errors become easier to make. Global system properties such as reliability and availability become seriously affected. New paradigms are also needed to accommodate business needs and provide users with greater flexibility in the use of resources on demand, and new ideas to increase system scalability are needed. We continue to discuss the challenges of system management using the same examples already introduced at the beginning of this lecture.
The main obstacle here is the human inability to cope with such complexity. The main tool used to reason about such systems is the system structure. Not only do development projects use the system structure to define work packages, timetables, deliverables and all project activities, but the human organisation within a software development company also often reflects the structure of the software system being developed. Consequently, human interactions usually also mirror system procedures and system behaviour. Thus, the quality of the system structure may be critical for business success. Hence, we introduce the main design principles used to successfully define system structure, which have already been identified as crucial for coping with system complexity in many other fields. These principles enable easier system management. Here, we introduce students to the main system design principles:
* •
modularity,
* •
abstraction,
* •
layering, and
* •
hierarchy.
Modular systems are those that can easily be divided into a number of components. In the case of complex software systems, a functional system decomposition is followed. System functions are then abstracted and, as such, provided as services to their users; note that system operation is now organised in a service-requesting and service-providing fashion. The set of functions provided within the system is organised into a set of layers. Similar functions are grouped together within one layer, which may be developed and replaced independently of the other system layers. In addition, communication rules among these functions restrict the number of possible interactions by introducing a hierarchy among the system layers. Here we exercise with students their view on possible reflections of these principles in the already discussed examples of complex systems.
Although the four main design principles are well known, there is still a need for technologies that directly or indirectly enforce their use. We introduce the main technologies that were developed to enforce the correct implementation of the aforementioned design principles, such as client–server architecture, service orientation and virtualisation, which are currently widely used within complex software systems and networks. Here, we explain the new challenges arising while implementing these technologies and show students how to deal with them using the available programming techniques.
Finally, in the last section we open a discussion on the new challenges arising with the introduction of the 5G (fifth-generation) network. Throughout the course we examine all the theory on the example of a telecommunication network, focusing on it as a complex system. At this point students should be ready to understand the main ideas driving network evolution and how the challenges of complex systems have been addressed. Here we introduce, at a high level, the novelties and challenges of the 5G network release and discuss the vision and ideas for approaching them. At the same time, this final discussion serves as an introduction to the next lecture, on design principles in network evolution and the role of functional programming at the network level.
These notes provide an introduction to the subject, with the goal of
explaining the problems and the principles, methods and techniques used for
their solution. The worked examples and exercises serve as teaching material
from which students can learn how to use functional programming to effectively
and efficiently coordinate the management and design of complex systems.
The methods and techniques explained in these lecture notes already exist, and
we claim no originality in that sense. The purpose of these notes is to serve
as teaching material for these methods. Our contribution is the discussion of
this topic on the example of the telecommunication network, drawing on
industrial experience in developing these systems, on previous experience
lecturing on this topic to master-level students of computing, and on a
preparatory course provided to newcomers in industry.
## 2 Complex Software Systems
In this section we introduce the basic terms and concepts used throughout this
lecture. First we define complex systems, drawing on the theory of complex
systems, and complex software systems, and we introduce our examples for
discussion. We also ask students to propose their own ideas of a complex
software system. Furthermore, we discuss the challenges that arise while
developing such complex systems, and we ask students to discuss the challenges
of their own examples as well and to provide their viewpoint on the potential
challenges.
### 2.1 Systems become more and more complex
Software systems are built with the aim of accomplishing some particular
end-user need. In the last few decades, the number of functions in which
software has replaced the human being has grown continuously. Furthermore,
software systems support humans in more and more complex tasks. As a result,
software systems are becoming more and more complex, and new concepts are
needed to cope with their management in order to keep their users satisfied.
It is worth noting that these systems are usually developed evolutionarily,
following a product-line concept, as is the case for example in the car
industry: the Volkswagen Golf models 1, 2, 3, … evolve in a sequence of
releases, and each version has an improved engine but also a number of
additional features.
These complex systems usually involve a number of levels of abstraction. The
system is modelled as a number of interacting system components, each
specialised for some function. System requirements are defined at the global
system level and involve a definition of the expected system functioning.
However, these global system requirements (functional and non-functional)
require intervention and implementation at the low system levels. So, for
every new requirement, the global system functioning has to be decomposed into
a low-level system design in order to identify and implement the low-level
details and interactions needed for the new requirement to function. As system
complexity grows, it becomes harder to keep the details in the global system
view that is crucial for understanding global system behaviour and its
interaction with the low-level design details. Moreover, it is very hard to
understand how changes in low-level design details may affect global system
properties. This is exactly the main characteristic defining a complex system.
There are a number of definitions of complex systems; in [3], a complex system
is a system with a number of levels of abstraction and with no clear links
between local and global system properties.
Usually during system design there are numerous possibilities for how to
design the system at the low level in order to accomplish a global-level
requirement. Good engineering practice is to identify candidate solutions and,
based on argued reasoning, to select the best design among the possible
solutions. However, the number of possible candidate solutions grows
exponentially with system complexity. In complex systems management we lack
tools and methods for mathematical modelling, and the only possible approach
is simulation of system behaviour. Systems may be nested within other systems,
and their function may then play a critical role in the functioning of systems
of systems. Management of such complex systems becomes a challenging task.
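The exponential growth of candidate solutions mentioned above can be made concrete with a small calculation. Assuming, for illustration, that each of the n components of a system has k independent design alternatives, the number of distinct whole-system designs is k to the power n:

```python
# Illustration of the combinatorial explosion of candidate designs:
# with n components and k design alternatives per component, there are
# k ** n distinct whole-system candidates to reason about.
def candidate_designs(n_components, alternatives_per_component):
    return alternatives_per_component ** n_components

print(candidate_designs(5, 3))   # 243 candidates for a small system
print(candidate_designs(20, 3))  # 3486784401 for a modestly larger one
```

Even these modest, assumed numbers show why exhaustive comparison of designs quickly becomes infeasible and why simulation and structured reasoning are needed instead.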
### 2.2 Quality of complex software systems
Complex systems did not evolve accidentally. Huge effort is invested in
developing these systems, requiring a great deal of programming and functional
expertise. There must be great interest in the system's functionality, mostly
serving numerous users, that leads to further system evolution and makes these
systems grow into complex systems. They are usually developed in a sequence of
projects over decades, by hundreds or thousands of developers and technical
experts. These systems mostly perform tasks of crucial importance to the
community (examples are in telecommunications, defence and health) for a very
large number of end users (everybody uses them directly or indirectly). In
such systems, reliability and safety become of crucial importance.
Reliability is defined according to the IEEE standard glossary [7] as the
ability of a system or component to perform its required functions under
stated conditions for a specified period of time. This particular requirement
has no single specific implementation reflection. However, any complex system
has to implement a number of technologies and follow numerous strict,
technology-dependent rules and procedures to successfully deliver this
requirement. Also, numerous verification activities during the system
lifecycle, which are very costly, are devoted to its fulfilment. This
requirement is closely related to the system's availability within a specified
period of time. A system may be unavailable because of numerous implementation
limitations such as system failures and congestion. These limitations mostly
arise from unintentional human mistakes, from human inability to cope with
system complexity, and from communication issues when numerous developers
worldwide implement system parts that should work together in harmony to
deliver the specified functions and functionalities to end users. When a
system becomes complex, it is hard to oversee all the possible implications of
an implementation on system functioning. Things get very complicated when a
number of simultaneously active processes, for example in a threaded system,
use the same resources, leading for example to corrupted data or files or to
irregular interface behaviour.
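The shared-resource problem mentioned above can be illustrated with a deterministic sketch of a lost update, the classic hazard when concurrent processes use the same data without coordination. The interleaving is written out by hand here so the outcome is reproducible; the account-balance scenario is an illustrative assumption:

```python
# Deterministic illustration of a lost update: two interleaved "processes"
# both read a shared value before either writes back, so one update is lost.
balance = 100

# process A and process B both read the shared value first...
read_a = balance
read_b = balance
# ...then both write back their own result: B's write overwrites A's update
balance = read_a + 10   # process A deposits 10
balance = read_b - 30   # process B withdraws 30, unaware of A's deposit
print(balance)  # 70, although 100 + 10 - 30 = 80 was intended
```

In a real threaded system this interleaving happens nondeterministically, which is exactly why such corruption is so hard to reproduce and diagnose.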
Here, in this lecture, we focus on well-known design principles and concepts
related to proper system structuring, which simplify reasoning about and
managing the system in order to minimise the probability of introducing system
malfunctions.
### 2.3 Software system structure
When a complex system is constructed, there are numerous possibilities for how
to design and implement it. The way a system is built limits or enables its
further evolution and maintenance. For building large-scale complex systems
that provide complex functionalities, functional system decomposition is one
logical solution.
For such complex systems, the communication tool among all the parties
involved is of crucial importance. The system structure is the main instrument
for engineering these systems and for connecting the global and local system
views. Its main purpose is to enable humans to reason about and manage the
implementation of the functionality the system provides to its users. The
system structure is also a very important communication tool among all parties
involved in the software lifecycle. It may also influence product
documentation, and it may limit the product-selling process and the company's
business opportunities.
A system implements a number of system functionalities for its users. A system
in operation accomplishes a functionality through the interaction of a number
of system functions. Complex systems usually follow functional system
decomposition: efficient systems define a structure of system functions that
may serve a variety of system functionalities. Therefore, keeping track of all
possible side effects of a function change on the variety of system
functionalities becomes more and more complex and expensive. This is where the
functional programming paradigm becomes extremely important as system
complexity grows: we tend to treat program execution, while operating a system
functionality, as the evaluation of mathematical functions, without
influencing the global system state and without keeping mutable data across
system functions.
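The functional style described above can be contrasted with a state-mutating style in a few lines. The call-log example is an illustrative assumption; the point is that the pure version's result depends only on its inputs, so a change to it cannot silently affect other system functions:

```python
# Contrast between a state-mutating and a purely functional style.

call_log = []

def log_call_impure(number):
    # impure: mutates shared global state that other functions may depend on
    call_log.append(number)
    return len(call_log)

def log_call_pure(log, number):
    # pure: returns a new log instead of mutating the old one
    return log + [number]

log1 = log_call_pure([], "070-123")
log2 = log_call_pure(log1, "070-456")
print(log1)  # ['070-123']  -- unchanged by the second call
print(log2)  # ['070-123', '070-456']
```

Because `log_call_pure` never touches state outside its arguments, its side effects on other system functionalities are, by construction, none, which is exactly the property that keeps reasoning tractable as complexity grows.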
The principles of writing clean code and of function design are to write
short, well-named and nicely organised functions [8]. In this lecture our
focus of interest is exactly the system structure, that is, the logical scheme
connecting a global system property (a system functionality) to the low-level
system design (system functions). We focus on design principles for
engineering complex system structures. Note that these principles become a
necessity when facing system complexity. Here we want to describe the
importance of a well-designed system structure for system management, further
development and reaching business targets. We also want to describe how to
develop effective system structures capable of dealing successfully with
growing system complexity and challenging business needs.
The system structure provides a blueprint of the system modules (usually
implementing specific functions), their relations, and the properties of the
modules and relations. It is a term related to the descriptive documentation
of the real implemented system. A proper system structure uses hierarchy to
logically represent and manage module roles and the roles of their
relationships. Hierarchy decomposes system modules and relationships into
several distinct system layers, where each layer has its own role in system
functioning.
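Such a hierarchical blueprint can be checked mechanically. As a sketch, assume the illustrative rule that a module may only call modules in the same or a lower layer; the module and layer names below are invented for the example:

```python
# Sketch of validating a system structure against a layering rule:
# a module may call only modules in the same or a lower layer.
# Layer and module names are illustrative assumptions.

LAYER = {"ui": 3, "service": 2, "storage": 1}
MODULE_LAYER = {"web_form": "ui", "billing": "service", "database": "storage"}

def call_allowed(caller, callee):
    # downward (or same-layer) calls respect the hierarchy; upward calls do not
    return LAYER[MODULE_LAYER[caller]] >= LAYER[MODULE_LAYER[callee]]

print(call_allowed("web_form", "billing"))   # True: downward call
print(call_allowed("database", "web_form"))  # False: upward call breaks the hierarchy
```

A check like this turns the structural documentation into something enforceable, so violations of the intended layering are caught before they accumulate.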
Currently we lack a systematic approach to engineering complex system
structures. However, software systems engineering theory has developed design
principles that guide software engineers while developing such systems.
Furthermore, architectural technologies have been developed that promote the
use of these design principles. In contrast to the system structure, a system
architecture is a metaphor, similar to the architecture of a building [13];
its main purpose is to provide a system blueprint for development-project
purposes, where we need to derive an overview of the project tasks necessary
to implement new requirements. Further in this lecture we address the main
design principles and reflect on selected technologies that support their
implementation. Our intention is to discuss the relevance of these
technologies from the perspective of system design principles. This
understanding will be of crucial importance for Part II of these lectures,
where we extend the functional system structuring concepts to network
modelling, management and orchestration.
### 2.4 Software organisations developing complex systems are complex too
From a software engineering perspective, the software organisations
responsible for the development and maintenance of these complex systems are
usually very complex themselves, globally distributed, and involve a large
number of developers in the software development lifecycle. From a business
perspective, there are a number of parties interested in these systems, and
software development usually involves a number of stakeholders in the
requirements elicitation and system definition phases, sometimes with
contradicting requests. The key element for developing such complex systems is
concise and structured communication among all involved parties, leading to
simple solutions. Software engineering tools, methods, practices and
principles are of crucial importance in supporting complex system development
processes.
For example, when a functionality of a system in operation experiences a
failure, there are serious consequences for the owner of the system and its
numerous users. On the other hand, when a system functionality fails, it may
be hard to identify the fault location within the system implementation. The
link between the malfunctioning global system property and the low-level
system design has to be identified. However, this link is not always clear and
may involve conflicts in the interaction among components at the low-level
design. Furthermore, considering that components may be developed by different
organisational units that are usually globally distributed, the speed of
finding a solution to the experienced system failure may depend on the
communication tools the global organisation uses. The cost of the system being
'out of order' in this scenario is seriously impacted by these tools.
Complex software systems usually have numerous markets and applications.
Therefore, it may become challenging to maintain all the system versions
running in different markets or in different applications. For example, when a
system has many variants, each tailored to national regulations, managing all
the configuration versions may become a challenging task if the system is not
modular. Furthermore, when a failure occurs, the fault-mapping process becomes
extremely expensive across all system variants. Organisations that own
monolithic, in-house-developed complex systems have to reconsider their
business goals. Considerable specialist knowledge may be invested in their
development that is the proprietary right of the owning organisation. On the
other hand, business drivers such as cost of ownership and cost of change may
force an organisation to open the part of its product that interacts with
system functions developed by other organisations, by using open standards.
Thus, organisations are forced to introduce a competitive approach not only at
system boundaries but also within the system's internal structure.
### 2.5 Software engineering is not a mature discipline
Current knowledge in engineering complex systems, that is, the concepts and
theories of how to design and maintain these systems, is still not mature
enough to successfully cope with such complexity. That is why the software
engineering science behind engineering complex systems is very active, trying
to develop new theories that can explain fundamental system behaviour, to find
adequate models for the interconnection of global and local system properties,
and to find new concepts for better complex systems management. We need this
understanding to reason better about these systems and thus enable their
further evolution. Moreover, it is fundamental to understand system behaviour
over time, not just to observe individual events. With this knowledge we may
be able to better engineer autonomous and self-management system principles,
which may bring complex systems into an equilibrium state directed towards
business goals. Another aspect of system engineering is to understand system
change with respect to introduced time delays; introduced time delays may
seriously damage the healthy functioning of the system [9].
Let us compare the available knowledge in mechanical engineering for
engineering vehicles with the available knowledge in engineering software
systems. We can model and predict changes in vehicle functioning while
changing its local properties, for example the piston dimensions. On the other
hand, we cannot model and predict complex software system behaviour: if we
change a single line of code within a complex software system, we cannot
predict the consequences it may introduce in system operation.
### 2.6 Exercises
Exercises for students to assess what they have learned from this chapter:

1. Define the term complex system.
2. What are the essential characteristics of complex software systems?
3. Define the reliability aspect of software systems and discuss reliability requirements in relation to growing system complexity.
4. Define software structure.
5. What challenges does a development organisation face when developing complex software systems?
6. What do we mean when we say that engineering complex software systems is not a mature discipline?
Exercises that require students to reflect on major issues and to assess their understanding of complex system modelling challenges:

1. First, imagine an example of a complex system from your environment. It does not have to be an existing computer system or software implementation; it can be anything you can imagine that fits the definition of a complex system. Elaborate, in the sense of the complex system definition, on how your example can be categorised as a complex system.
2. Think about its possible users and the functionalities the system performs. Write down at least three different kinds of users and at least five system functionalities.
3. If we had to engineer such a system, can you imagine the components/functions needed for the system to perform? Depict a high-level system structure with which you can explain the functioning of the five functionalities mentioned previously.
4. List the main non-functional requirements your system may have.
5. Can you imagine how big this system might be? How many developers would be needed? Can you think of a structure for a human organisation able to develop such a product?
6. Describe the challenges the system might face when introducing new kinds of users, new functionalities, or massive traffic.
Note that these questions are used in the classroom to engage students in
complex systems thinking and reasoning. Students are asked to answer the
questions on paper; the teacher then collects the papers and starts a group
discussion for each question. In my experience, the biggest challenge for
students is to think about system structure, organisation and the challenges
an organisation may face evolving these systems. The discussion is therefore
followed by the examples of complex systems provided in the following section.
## 3 Examples of complex systems
In this section we provide three examples of systems that we may consider
complex.
First, we provide the example of the human body, a complex system from nature.
This system is not built by humans, but the body's structure is used when we
want to understand its behaviour, with medical purposes as the main
motivation; in that sense we may consider medical science as the engineering
of the human body. Here we want to illustrate how complex a system may become,
and how the number of functionalities and the level of autonomy across a
number of contexts extremely increase its complexity. Since our focus is on
complex software systems, we continue the discussion with programming the
human body and switch to the example of a humanoid robot.
The second example is the vehicle. Vehicles are mechanical systems built by
humans, so a high level of engineering was already applied during their
development. In contrast to the human body, all behavioural processes are
described by solid mathematical and physical models that can be used to deduce
local from global system behaviour. Engineering these systems is not
considered as complex as in the case of the human body: in medical science we
still lack adequate models of the global–local interactions needed to
understand human body behaviour.
The third example is one of the largest and most complex technical systems
humans have engineered: the global telecommunication network. The majority of
people across the globe use its functionalities in some way to connect with
other people. Furthermore, every day we witness the development of new
technologies, which find their applications within technical systems that are
all interconnected via the telecommunication network. Therefore, the
telecommunication network has become one of the key enablers for the
development of societies. Its great importance led to its rapid development
and rapidly increasing complexity. Its main constructive ingredient is
software, and as software complexity grew, new innovations were needed in the
structural approach. A number of telecommunication principles were introduced
to organise and structure telecommunication functions within the network and
within the telecommunication software. This is why this example is the main
leading example across this lecture.
In the sequel we discuss the aforementioned exercises, keeping these three
examples of complex systems in mind.
### 3.1 Example 1. Human body and humanoid robot.
Figure 1: Human body, modular structure of system functions.
Let us take the human body as an analogy for a complex system. Humans in their
everyday life perform a number of functionalities: they speak, sing, walk, run
and write, and they perform a number of complex tasks by combining these basic
functionalities. This outside view of the human is presented to the
environment. On the other hand, from the inside view, the human body performs
a number of body functions in which the body organs play a vital role. The
human body has a very clear structure composed of organs, and each organ has
its unique role in performing a human body function; examples of organs are
the heart, liver, brain and skin, see Fig. 1. Within the human body we have
system structure at different levels of abstraction (e.g. the cell), and it is
very complex to reason about the processes executing across different levels
of abstraction. Furthermore, there exist communication channels among the
human body organs that are mandatory for the body's functioning. The body's
communication system may be considered to be the nervous system, which is
responsible for transferring information across the body, connecting the
central brain with peripheral elements, and coordinating actions and sensory
information.
In the sense of the complex system definition, there is a huge gap in the
understanding of the human body's global functioning and its relation to local
organ functioning. There may be numerous root causes of organ malfunctions
that affect human body functionality; for example, the inability to speak may
be rooted in some specific malfunction of the tongue, the nervous system, a
brain function, etc. Furthermore, medical treatment may have numerous side
effects that we are not able to predict and control.
In the example of the humanoid robot, let us suppose that we have to program
the software for the robot mechanics, and that our robot will be a soccer
player. Its main functionalities would be to walk, to direct a ball in a
desired direction, to coordinate with other robots and within the field, and
to recover from unexpected events such as walking on a non-perfectly flat
surface. A robot capable of playing a soccer game has to implement a number of
drivers for all the sensory systems needed to interact with its environment.
Furthermore, it is implemented as a distributed control system in which
various controllers control all the mechanics needed for its movement. In
order to fulfil its required functionalities, this robot must have a number of
different functions; some functions that may be implemented are control
functions for kinematics, coordination of sensory functions, navigation,
localisation, etc. In this example, numerous effects from the environment may
affect robot behaviour, and the number of possible scenarios may increase, so
the management of all robot functions and the related coordination and control
functions may become complex. This means that it may become very hard to
isolate the events that cause a robot malfunction. A solid robot design would
also imply a modular robot system with automatic detection and configuration
of new sensory systems, new functions and new platforms [4].
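The modular robot design described above can be sketched in a few lines: new sensory systems register themselves with the robot, which then coordinates them without being changed itself. The class, sensor names and readings below are illustrative assumptions, not a real robot API:

```python
# Sketch of a modular robot: sensors are plugged in at runtime via
# registration, so adding a new sensory system does not change the robot code.
# All names and readings are illustrative assumptions.

class Robot:
    def __init__(self):
        self.sensors = {}

    def register_sensor(self, name, read_fn):
        # automatic detection/configuration is reduced here to registration
        self.sensors[name] = read_fn

    def sense(self):
        # coordinate all currently registered sensors uniformly
        return {name: read() for name, read in self.sensors.items()}

robot = Robot()
robot.register_sensor("camera", lambda: "ball at (2, 3)")
robot.register_sensor("gyroscope", lambda: "tilt 1.5 deg")
print(robot.sense())
```

The point of the sketch is the isolation it buys: a malfunction in one registered sensor can be traced to that sensor's function alone, rather than to the robot's coordination logic.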
### 3.2 Example 2. Vehicle.
Another example of a modular system, from mechanical engineering, is the
vehicle. It is assembled from a number of parts, each performing a function
needed in the vehicle's functionality. In the context of a software system,
the analogy to human organs or car parts are entities, modules and system
units. Although there exists a whole theory of how to build a functioning
vehicle and how to structure it, in software engineering there is a lack of
theories on how to build complex software systems. Here we may observe the
difference from a complex system.
The system's modules communicate with each other to accomplish a particular
system functionality. For these communication purposes, physical connections
are established between the system's parts, and these physical links are used
to transfer the information between system modules that is needed to
accomplish a particular system functionality. In the analogy to the human
body, these physical links are the nerves and the communication system is the
nervous system; in the mechanical sense of a car, they are a number of
mechanical or electrical transmitters. Modern complex software systems are
built over the network, and system modules communicate over the
telecommunication network. Therefore, network and software engineering
theories interleave in building modern complex software systems.
### 3.3 Example 3. Mobile Switching Centre within telecommunication network
architecture.
The telecommunication network is a set of distributed computer systems,
involving hardware and its related software, interconnected by links. The
network serves numerous users, who connect to it via various terminals that
are geographically distributed, have diverse communication needs, and are
developed by numerous producers. These systems are all developed and
engineered by humans. However, the complexity of this system is much higher
than in the vehicle example. The network has to integrate a variety of
terminals developed on different platforms, thus acting as an interconnection
among various technologies, industries and equipment producers. That is why
clear and open standards have a vital role in further network evolution.
The telecommunication network has evolved in a sequence of releases. Its
evolution is standardised in various telecommunication standards. This was
important to enable interworking among equipment of various producers, but
also to open competition among network equipment suppliers. Initially, the
network was built for voice traffic; it was further extended to carry data,
video and multimedia traffic with very different transport needs, offering a
variety of services. Another huge network revolution was the introduction of
mobile users. Because of all that, the network became too complex, and during
its evolution a number of structural changes were introduced, forcing a
redesign of the network architecture. These structural changes always
introduced, at some network abstraction layer, the design principles that we
present here. All these structural changes were followed by standardisation
bodies. Here we reflect on the work within the 3rd Generation Partnership
Project (3GPP), which covers cellular telecommunications technologies,
including radio access, the core network and service capabilities, providing a
complete system description for mobile telecommunications. An excellent
overview of 5G mobile communications technologies is presented in [12]. From
the 3GPP specifications one can observe the mobile network architecture
evolution across the releases 2G, 3G, 4G and finally 5G. We also explain its
implementation with concrete examples.
The main functionalities introduced in the 3GPP evolution steps are the following:
* 2G: The mobile core network is introduced for GSM (Global System for Mobile
Communications) users and the voice-based services offered by the GSM network.
The main network functions are located in Mobile Switching Centres (MSC), Home
Location Registers (HLR) and Visitor Location Registers (VLR).
* 2.5G: The mobile core network is extended for GPRS (General Packet Radio
Service) users and the data-based services offered in the GPRS network.
* 3G: The mobile core network is extended for UMTS (Universal Mobile
Telecommunication System) users and integrated voice, data, video and
multimedia traffic (combinations of the aforementioned traffic). Integration
of GSM, GPRS and UMTS services within the core network is achieved in the IP
Multimedia Subsystem (IMS). Details of the IMS system architecture and the
main design principles and technologies used may be found in [10].
* 4G introduces the concept of Long Term Evolution (LTE), mainly concerning a
new core network architecture redesigned to enable rapid evolution by
introducing common mobility management functions for all users and
packet-based transport for all services.
* 5G introduces a new network management architecture in which rapid network
softwarisation forces service orientation by offering all network resources as
services to all network users. As stated in [12], 5G will create the
conditions where wireless connectivity becomes a necessity in a huge number of
applications.
Along this evolution, the mobile core network architecture has been
restructured following the design principles that we present in the following
section. In this lecture we explain the application of the design principles
during mobile core network evolution. For that purpose we focus on the design
of a central network function, the switching of mobile subscribers. We use the
example of the complex software system Mobile Switching Centre (MSC) and its
Ericsson implementation on the AXE platform. Note that switching relates to
the establishment and release of a connection across the network between two
end users, connected to that network, who want to communicate. Ericsson's MSC
product was implemented on the Ericsson proprietary platform AXE [2], a
40-year-old in-house developed platform.
From the beginning, the product was developed in the traditional monolithic
fashion. That means that all node functions were developed in-house by
Ericsson, on the Ericsson proprietary AXE platform [5]. Most node functions
are implemented in software, and the majority of the software is written in
Ericsson's internal Programming Language for Exchanges (PLEX), a
special-purpose concurrent language for real-time applications, from which
Erlang evolved. The system structure followed the implementation view, where
modules performed specified implementation functions. As the product matured,
the number of functions grew, and high expertise was needed to further develop
and maintain the product. Adding new functionality to an already mature
product became very inefficient and costly.
The AXE-based MSC has evolved over more than fifteen releases, has several
million lines of code, and is developed in a globally distributed organisation
involving more than ten development units spread across the globe [15, 5]. For
each project release, several hundred or even thousands of software engineers
are involved in its development. The product has requirements to handle more
than one million users concurrently, with the very strict reliability
requirements discussed in Section 2.2. The product should be able to minimise
the delays caused by serving numerous users concurrently. Furthermore, the
switching equipment has to have a high level of availability, which is
directly connected to the system architecture and software structure. The
software is structured into functional blocks (functions like callee and
caller number analysis, charging, switching, etc.) with clearly defined
interfaces.
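The functional-block structure mentioned above can be sketched as follows. The block names (number analysis, charging) follow the text, but their internal logic, the tariff and the signatures are invented for illustration; real MSC blocks are written in PLEX, not Python:

```python
# Sketch of functional blocks with clearly defined interfaces: each block does
# one call-handling step, and blocks communicate only via declared signatures.
# Block logic, prefix rule and tariff are illustrative assumptions.

def analyse_number(dialled):
    # number-analysis block: classify the dialled number
    return {"number": dialled, "international": dialled.startswith("00")}

def charge(call_info, seconds, rate_per_min=0.10):
    # charging block: depends only on the analysis block's declared output
    factor = 2 if call_info["international"] else 1
    return round(seconds / 60 * rate_per_min * factor, 2)

info = analyse_number("0046123456")
print(charge(info, seconds=120))  # 0.4
```

Because the charging block sees only the dictionary the analysis block declares, either block can be reworked or replaced without touching the other, which is the availability and maintainability argument behind the block structure.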
### 3.4 Exercises
Exercises for students to assess what they have learned from this chapter:

1. What are the main differences among the examples of complex systems provided in this chapter?
2. Order the examples of complex systems by level of complexity, from the least complex to the most complex. Explain your ordering criteria.
Exercises that require students to reflect on the examples of complex systems
and discuss the implications of complexity:
1. Let us consider a Mobile Switching Centre node, i.e. the serving node for
switching mobile calls. This system can handle one million users
simultaneously. Imagine that the system experiences a failure that requires a
restart, or that the system is not modular and adding new functionality
requires a restart. All mobile services, calls and SMS messages would be
discarded. If we want to measure the cost of the loss for the operator owning
this system, we may multiply the cost of a call per minute by the number of
active users and by the number of minutes the system was out of service.
Please calculate the operator's loss for 1 minute out of service at 50% load.
Note that an MSC system is very complex and a system restart may take hours.
The main purpose of this example is to raise students' awareness of the
importance of flexible and reliable system operation, and of how important it
may become in complex systems that serve large numbers of users.
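The back-of-the-envelope estimate described in the exercise can be sketched as follows; the per-minute call revenue is a hypothetical figure chosen only for illustration, not an actual operator tariff.

```python
# Sketch of the operator-loss estimate from the exercise above.
# RATE_PER_MINUTE is a hypothetical assumption, not a real tariff.
CAPACITY = 1_000_000        # users the MSC can serve concurrently
LOAD = 0.5                  # 50% load at the moment of the outage
RATE_PER_MINUTE = 0.05      # assumed revenue per user-minute (currency units)
OUTAGE_MINUTES = 1

active_users = int(CAPACITY * LOAD)
loss = active_users * RATE_PER_MINUTE * OUTAGE_MINUTES
print(f"Estimated loss for {OUTAGE_MINUTES} minute(s): {loss:,.2f}")
```

Even with these modest assumed numbers, a one-minute outage costs tens of thousands of currency units, which illustrates why restarts of such nodes are so expensive for the operator.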
## 4 Design Principles
Whenever a complex logical problem has to be engineered, some typical patterns
arise in all computing fields, as is the case, for example, in logic and
programming languages, software engineering, networking, database
architecture, etc. Specialists from different fields have experienced the same
problems when facing complexity, and have arrived at solutions that are
grounded in the same concepts, though perhaps implemented differently in the
various fields. Therefore, we define these common concepts here in the form of
design principles: modularity, abstraction, layering and hierarchy. These
principles are explained in detail in every serious textbook on system design
principles [16, 14].
### 4.1 Modularity
The most frequently used approach when dealing with complexity is the division
of the system into a set of smaller components that are simpler to understand
and independent of each other. This concept is used in virtually all fields,
and we call it modularity. In complex software systems we use functional
system decomposition, in which the system is decomposed into a set of
independent functional units, called system modules, that interact with each
other to accomplish the system's functionalities. Thus, we have a relation
between the global system level, at which we describe the functionalities the
system is able to perform, and the local system level, at which we describe
the structure of the system modules able to perform specific functions. A
system functionality is achieved as the interworking of a set of system
functions.
The benefit of system modularity lies not only in a better understanding of
system functioning, gained by travelling between the global and local system
views, but also in easier collaborative design and expanded business
opportunities. Modular systems may be developed by globally distributed
organisations, and responsibility for the system may be shared among system
modules and their development organisations in different countries. Also,
some modules may easily be given to third parties for further evolution. In
this way, the decreased cost of development has to be carefully balanced
against the impact on system quality.
### 4.2 Abstraction
Abstraction is a term closely tied to the modularity concept explained above.
It is related to introducing communication rules within the system, by means
of standard interfaces among system modules and the separation of the
interface from the module's internal details. Calling an interface standard
means that everyone using that interface follows the same rules of operation.
Thus, we have two or more independent modules connected by the same interface,
which may evolve independently while remaining interconnected through standard
interfaces. The idea is to abstract the function of each module through its
interface. In other words, an interacting module does not have to know the
implementation details of the other component, and their interaction is
achieved through the exchange of a standard set of messages and data.
The additional benefits that arise from abstraction in a modular system are
numerous, such as easier system evolution, and inherent and autonomous failure
prevention and diagnosis.
We can divide a software system in a number of different ways, but the best
division is the one in which we can use abstraction. For example, in
object-oriented programming some languages implement the concept of
inheritance, which encourages programmers to make their programs not only
modular but abstract as well. This means, for example, that different
geometrical figures that may be drawn on a GUI, such as a triangle, square or
circle, inherit from the same class, called Figure, which is an excellent
candidate for an abstract class [17]. Thus, the functions draw and erase may
be defined on the Figure object type, while the exact geometrical object
(triangle, square, circle) is selected during program execution, when needed.
The separation between the implementations of the functions draw and erase in
the inherited classes triangle, square and circle is achieved through the
definition of the abstract class Figure. This is very important for limiting
the propagation of a fault and its effects: when a module exhibits a fault, it
means that the module does not meet its abstract interface specification.
Furthermore, here we allow various objects with different internal
implementations to share the same external interface. This is the well-known
concept of polymorphism in object-oriented programming.
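The Figure example above can be sketched in Python, which is used here purely for illustration (the chapter itself does not prescribe a language); `render` is a hypothetical helper that works only against the abstract interface:

```python
from abc import ABC, abstractmethod

class Figure(ABC):
    """Abstract class: defines the interface shared by all figures."""
    @abstractmethod
    def draw(self) -> str: ...
    @abstractmethod
    def erase(self) -> str: ...

class Triangle(Figure):
    def draw(self) -> str: return "drawing a triangle"
    def erase(self) -> str: return "erasing a triangle"

class Square(Figure):
    def draw(self) -> str: return "drawing a square"
    def erase(self) -> str: return "erasing a square"

class Circle(Figure):
    def draw(self) -> str: return "drawing a circle"
    def erase(self) -> str: return "erasing a circle"

def render(figures):
    # The caller knows only the abstract Figure interface; the concrete
    # implementation is selected at run time (polymorphism).
    return [f.draw() for f in figures]
```

A call such as `render([Triangle(), Circle()])` dispatches to the concrete `draw` implementations at run time, which is exactly the polymorphism described in the text.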
The design and implementation of standard interfaces is about defining a set
of rules for information exchange between two interacting entities. All
possible interactions have to be designed in advance, but all disallowed and
irregular interactions also have to be considered, and proper actions have to
be defined so that they lead to regular system behaviour. A module
implementation has to take care of unexpected behaviour as well and avoid
system failure by securing a proper system reaction. For example, if an
unexpected message or datum is received through the standard interface, the
component experiencing this condition has to perform a regular process
termination while remembering to release all resources it no longer needs. In
concurrent systems, if one process fails, we want to keep all the other
processes active. When a failure occurs, the troubleshooting process of
identifying the fault location is much easier in a modular system than in a
non-modular one. A system may be divided into components without complying
with the modularity principles of module independence and function
abstraction; in that case there is nothing to guide or control the low-level
implementations, and a fault may be located anywhere in the system. The role
of standard interfaces is to enforce well-defined design rules for
implementing interfaces within the system. Tracing the messages exchanged over
these standard interfaces along the execution paths that lead to a failure
helps in the fault-isolation process and may identify the module that did not
comply with the standard interface rules. Moreover, every module has to
correctly implement the regular use cases of a standard interface, but must
also have defined actions for irregular use-case scenarios. Thus, standard and
well-defined interfaces help in the fault-prevention process.
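As a sketch of the defensive behaviour described above, the fragment below accepts only a declared set of messages and terminates the interaction in a controlled way on anything else; the message names and the resource list are illustrative assumptions, not part of any real interface.

```python
# Sketch of a module that accepts only a well-defined set of messages over
# its interface and reacts in a controlled way to irregular input.
VALID_MESSAGES = {"connect", "transfer", "disconnect"}

def handle_message(message: dict, resources: list) -> str:
    msg_type = message.get("type")
    if msg_type not in VALID_MESSAGES:
        # Unexpected message: terminate this interaction in a regular way,
        # releasing held resources instead of failing the whole system.
        resources.clear()
        return "terminated"
    return f"handled {msg_type}"
```

The point is that the irregular case is an explicitly designed path, not an accident: the module releases its resources and ends the interaction, so the fault cannot propagate beyond the interface.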
Furthermore, modules that have a clear function within the system may easily
be changed and updated without affecting each other, as long as strict
interface rules are followed. Thus, the same system may be developed
collaboratively, and responsibility for the system modules may be distributed
across the globe. In evolving systems there may be a number of system versions
implemented, in operation, and used by numerous users. The impact of
development on one system version that shares components with a previous
version is made invisible to all previous versions, as long as strict
backward-compatibility rules for the protocols on the interface are followed.
Thus, during system evolution, the impacts of new or improved system
functionality are easily localised and implemented: only the modules that need
intervention are opened and improved. A change is simple as long as it is kept
within the module, without changing the interface. Bigger changes that require
intervention on the interface necessarily impact all components that use that
interface. However, the interface is also subject to versioning, and all new
changes have to comply with strict rules to maintain backward compatibility
with all components that use the interface but are not affected by the change,
i.e. that do not implement the new functionality. In some cases a change on
the interface may impact all interacting units, but in that case the
modularisation was not performed correctly. Note that strict rules exist for
how to change modules and interfaces in modular system design if we want to
benefit from the modularity concept.
### 4.3 Layering
Figure 2: Layering communication functions: the Open Systems Interconnection
(OSI) Model.
In dealing with complexity we usually also use layering. It assumes the
grouping of functions and related information into distinct layers of
abstraction that communicate through standard interfaces, thereby limiting the
rules of interaction among the various functions. The need for layering in
communicating computer systems was recognised in the early 1970s, when various
computer manufacturers developed their own products that needed to
communicate. To avoid a situation in which each manufacturer develops its own
communication rules, the standards body, the International Organization for
Standardization, developed the Open Systems Interconnection (OSI) Model, which
made it possible for systems produced by different manufacturers to
communicate with each other. An example of a similar standardisation effort is
the international standard IEC 60906-1, which defines power plugs and
socket-outlets intended for use across national borders. Thanks to these
efforts, today we do not need a specific plug for each country while
travelling, except for some particular countries, for which we need specific
adaptors in order to use equipment produced for the European market. The OSI
communication model was developed with a similar aim.
The OSI Model introduces a layering of a system's functions when it
communicates with other systems. The OSI Model specifies what information is
carried by each layer without describing its implementation details. Thus, the
data flow may be followed across the layers of the computers involved in the
communication. The model is depicted in Figure 2. It consists of seven
functional layers. Each functional layer has defined rules for communicating
with its peer at the same functional layer in the other computer. These layers
are the physical, data link, network, transport, session, presentation and
application layers. Thus we obtain a layering of communication within the
network of physical computers, and the rules of communication are defined by
various standard protocols at each functional layer. These protocols define
the horizontal communication in a computer network. Figure 2 shows how each
layer adds its communication overhead to the data transmitted between the two
computers. Within the computer network there exists a communication network at
each horizontal functional layer. In contrast, vertical communication, the
communication between the layers, remains within one computer and may
therefore be governed by the manufacturer's internal rules.
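The per-layer overhead shown in Figure 2 can be mimicked with a toy encapsulation sketch; the bracketed strings stand in for real protocol headers:

```python
# Toy sketch of per-layer encapsulation: on the way down, each layer prepends
# its own header (the "overhead" in Figure 2); on the way up, the peer layer
# strips exactly that header. Layer names follow the OSI model.
LAYERS = ["application", "presentation", "session",
          "transport", "network", "data-link", "physical"]

def send(payload: str) -> str:
    for layer in LAYERS:               # layer 7 down to layer 1
        payload = f"[{layer}]{payload}"
    return payload                     # what travels on the wire

def receive(frame: str) -> str:
    for layer in reversed(LAYERS):     # layer 1 up to layer 7
        header = f"[{layer}]"
        assert frame.startswith(header), "malformed frame"
        frame = frame[len(header):]
    return frame
```

On the wire the physical-layer header ends up outermost, and `receive(send(msg))` returns the original message, mirroring the down-then-up flow between the two communicating computers.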
However, a proper design and implementation of the functional layers within a
system should aim to make the layers independent of one another. Then each
layer may be changed and reused within the computer architecture without
affecting the other layers. Such a design involves the introduction of
services within the computer, so that layer functions may be offered to other
layers as services. The service interface should be made clear and independent
of other services, and pure functional layering should make the layer's user
unaware of the service implementation details. If we are able to introduce
standard rules into the communication among the functional layers, we will
also be able to use equipment from different producers at each functional
communication layer. Therefore, further network evolution involves splitting
communication architectures and often introducing standard vertical protocols.
For example, in the 4G mobile network a new architecture was proposed, based
on the idea of splitting the network into three planes: the application,
control and resource planes. The Gateway Control Protocol, GCP
(https://www.itu.int/rec/T-REC-H.248.1-201303-I/en), was introduced between
the control and resource layers. The protocol is used to manipulate and
control the physical resources placed in the resource plane that are needed in
mobile end-to-end communications. Another example is the introduction of
Software Defined Networking in the 5G network architecture [12] and of the
OpenFlow logical switch (https://www.opennetworking.org) to control network
resources in the network plane. The two network layerings occurred at
different abstraction layers: the layering in 4G was based on the
virtualisation of the network's physical resources, while the layering in 5G
was based on the virtualisation of network functions.
For a better understanding of the OSI communication model we use the postal
analogy provided in the excellent student book [15], in which two project
managers from different countries communicate with each other on a secret
project using the land-post infrastructure. The OSI functional layers are
compared to the functional layers involved in the communication path when the
postal service is used to transfer the messages.
Postal-office analogy. Let us suppose that two project managers work on the
same secret project, each managing a team in his own office. In the example
given in [15], project manager Erik is Swedish, working in an office in Luleå,
and the other, Louis, is French, working in Göteborg. In a real
telecommunication network, the project managers are applications that
communicate with each other using the telecommunication network, which in our
analogy is represented by the standard land post.
Since the project is secret, the project managers agreed to communicate in
English, to use the standard land post for message exchange and, because of
the secrecy, to encrypt their messages. Also, their secretaries exchanged
addresses and letter formats for the communication. These agreements are
equivalent to the per-layer agreements on the protocols to use at each
horizontal layer; in our analogy they represent the peer protocols between the
functions of the same layer in different nodes. Thus, we have the project
managers and the translators at the seventh layer, the crypt-experts at the
sixth layer and the secretaries at layer 5.
The communication path from project manager Erik in Luleå to project manager
Louis in Göteborg passes through all seven OSI layers once in the office in
Luleå and again through all seven OSI layers in the office in Göteborg. When
project manager Erik writes a letter in Swedish, he gives this letter to the
translation office to translate it into the agreed communication language (in
this case English). The letter is then transferred to the crypt-expert, who
encrypts it using the encryption agreed for the communication with Louis in
Göteborg. Once the message is encrypted, the letter is transferred to the
secretary, who prepares it for transmission through the land post. This means
that the secretary puts the letter into an envelope and addresses it to Louis
(via his secretary) in Göteborg. When the letter has its addressed envelope,
it is ready to be passed to the local postal office. The local postal office
in our analogy is the entry point to the postal network. Here the packaging of
this letter into postal parcels travelling to the same destination begins: the
letter for Louis is now packed together with all other letters destined for
the Göteborg postal office. This function is equivalent to the layer-4
functions. At layers 3 and 2 the group of letters destined for the Göteborg
postal office is packed into a separate parcel, which is marked with the
address of the postal office in Göteborg. The parcel is then handed over to
the transporter. Finally, the transporter transfers this parcel to the
Göteborg postal office using the public transport infrastructure. When the
parcel is received at the post office in Göteborg, the reverse process of
passing through layers 1 to 7 starts. First, the parcel is opened and the
letters are regrouped by their local destinations in Göteborg. Thus Louis'
letter is now grouped with all other letters addressed to Louis' office that
have been received from various postal offices. This function is equivalent to
layers 2 and 3 at the other communicating party. This group of letters,
including Louis' letter, is given to the postman, who delivers the letters to
Louis' project office. The postmaster employed in Louis' project office
delivers the letter to Louis' secretary (layer 4). At layer 5 the secretary
signs for the received letter, notices that the message is encrypted, and
transfers the letter to the crypt-expert. The crypt-expert decrypts the
message and passes it to the English–French translator (layer 6). Finally, the
letter reaches Louis.
We used this example to better explain the layering of functions and
communication protocols in computer networks. There, the standard defines
seven distinct layers of communication, into which the communication functions
are grouped and organised. Furthermore, communication is allowed only between
neighbouring layers: it flows vertically from layer 7 to layer 1 in one
computer, is then transmitted over the physical wires to the other
communicating computer, and there flows from layer 1 back up to layer 7.
Finally, the dialogue takes place between the applications located on top of
the seven OSI layers in the two communicating parties/computers.
Figure 3: Example 3: the MSC in the signalling network of the 3G architecture
(3GPP standard).
Figure 3 presents an example of signalling in the mobile core network
architecture of the 3GPP standard. By signalling we mean all the communication
needed prior to call establishment and after call release. In the figure, all
links are labelled with the protocol stacks used for communication between the
network nodes. In this particular example we depict the case in which users
from the GSM and UMTS networks (BSS, UTRAN) communicate, over the core network
nodes MSC and GMSC and using Media Gateway functions for the physical
resources, with users located in one of the traditional networks (ISDN, PSTN,
PLMN). The protocol stacks follow the OSI communication model, and the
functions organised into distinct layers correspond to the functions of the
equivalent OSI layers. Note that the signalling communication is achieved at
the fourth layer of the OSI model.
### 4.4 Hierarchy
Another concept for managing system complexity is to further restrict the
possible communication interactions by organising the system into a tree-like
hierarchy. Communication is allowed only between modules of the same layer or
with modules of the layer directly above or below, so the communication
possibilities are significantly reduced. Furthermore, in hierarchical systems
we may differentiate not only among communication types but also among the
roles the endpoint modules take in the communication. For example, we may
distinguish between peer-to-peer and client–server communication. In
peer-to-peer communication both interacting modules are equal, both may
initiate communication and exchange information, and both modules are aware of
each other during their whole lifecycle. In client–server communication, on
the other hand, the client side always requests some service from the server
side, while the server side is unaware of possible clients until a service
request is received. Communication may be initiated only from the client side,
and the server serves numerous client requests. These differences in
communication types are also reflected in the benefits of separating the
system into distinct layers. System layering with hierarchy allows the
specialisation of layers and their functions, and introduces a hierarchy of
functions within the system.
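A minimal sketch of the client–server asymmetry described above (the service names are illustrative): the server reacts only to incoming requests and knows nothing about its clients, while the client always initiates.

```python
# Minimal client-server sketch: the server holds no knowledge of its
# clients and only reacts to incoming requests; the client initiates.
class Server:
    def __init__(self):
        # Illustrative services offered by this server.
        self.services = {"time": lambda: "12:00", "echo": lambda: "echo"}

    def serve(self, request: str) -> str:
        # The server sees only the request, never the client's identity.
        handler = self.services.get(request)
        return handler() if handler else "error: unknown service"

class Client:
    def __init__(self, server: Server):
        self.server = server

    def request(self, service: str) -> str:
        return self.server.serve(service)   # the client always initiates
```

Note the asymmetry: the `Client` holds a reference to its server, but the `Server` holds no client state at all, which is exactly the unawareness described in the text.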
### 4.5 Exercises
Exercises to assess what students have learned from this chapter:
1. What are the four main design principles?
2. Define the term abstraction as a design concept for structuring complex
systems and provide an example. Discuss its benefits.
3. Define the term layering as a design concept for structuring complex
systems and provide an example. Discuss its benefits.
4. Define the term hierarchy as a design concept for structuring complex
systems and provide an example. Discuss its benefits.
Exercises that require students to reflect on applying the design principles
in practice:
1. Depict the structure of the complex system that you imagined as part of the
exercise at the end of the first section. Use the system design principles.
2. Discuss each design principle you used to structure your system.
3. Imagine how your system will evolve in the future.
4. Discuss the challenges a software organisation may face in further evolving
such a system.
5. Can you think of adding more abstractions to the system structure you
depicted? Discuss the benefits.
## 5 Technologies that promote system design principles
The new era of technology is devoted to Information and Communication
Technology (ICT). Numerous efforts have been invested in research and
technology innovations aiming to provide efficient and effective ways to share
information among distant parties that communicate with each other in a timely
manner, where these parties are not necessarily human. New technological
advances in the telecommunication and computing industries are driven by this
same aim. From the telecommunication perspective, we are witnessing a
revolution: the complete core network infrastructure has been redesigned to
cope with the new challenges of carrying massive traffic, with very diverse
information content, for fixed and mobile users, across various geographic and
application domains.
Recent technological advances have actively promoted the design principles
mentioned above. Here we introduce the basic technologies that we will use in
the following section, which deals with the management functions of virtual
network resources. The key new technological trends that have seriously
affected the way systems are developed are client–server architectures, system
virtualisation technologies and service orientation. Understanding these three
technologies is a prerequisite for the introduction to the new
telecommunication era involving Network Virtualisation Functions, which is the
main subject of this lecture.
### 5.1 Restricting communication
When a system has a modular architecture, this just means that the system may
be decomposed into a number of functional modules. However, the number of
possible communication interactions is still unlimited, and it grows
exponentially as the system becomes more complex; the propagation of faults in
such a system is unpredictable and left without any control. A further way to
introduce more control into the system's operation is to introduce hierarchy
into the communication processes and more restrictions into the communication
rules. This means that the design rules are not only documented as guiding
principles for designing interfaces, but the possible actions over an
interface are also restricted by means of specialised technologies that force
interface designers and programmers to follow the rules and thereby minimise
fault propagation within the system. Furthermore, changes to the communication
processes should be clearly defined in advance, controlled, and implemented
independently by the affected parties. There are two possible communication
types: through protocols and through interfaces. Protocols are used, for
example, in a telecommunication network among peer users that communicate, and
a protocol is represented by a set of defined rules for the information
exchanged among the communicating parties. A well-defined protocol has a
defined set of messages, a set of data exchanged by those messages, a state
behaviour or set of allowed message sequences, a compatibility mechanism, and
a timer mechanism. These are the main elements of all standardised protocols.
An interface, on the other side, represents a shared boundary (medium) for
information exchange. The client–server architecture uses interfaces.
Introducing a client–server architecture into the communication among system
modules means organising the modules within the system as clients and servers.
The communication among the modules is thus restricted, and the system
behaviour is represented as a series of events that can easily be tracked to
identify the root causes of improper system behaviour. First, the clients can
ask servers for a specific service only by using messages. There are no direct
procedure calls between clients, so implementation errors cannot spread freely
through the system; errors may propagate exclusively through messages.
Moreover, malicious processes cannot affect the code in the system's modules
directly: they can be introduced only through messages, and each module in the
communication is then responsible for verifying the correctness of the
messages it receives. Furthermore, if a client is not satisfied with a
service, it may request the service elsewhere, as long as it uses the standard
interface. Functions become standardised through the use of standard
interfaces, and open standards are the key to promoting the further
standardisation of system functions. The best design principle is to design an
interface under the assumption that the client and the server reside on
different hardware resources within different physical machines.
With a well-defined communication interface, each module can be designed and
developed separately, without knowing the implementation details of the
others. Each module is implemented as if it were intended to run on its own
physical hardware machine. This programming approach introduces overhead and
only weakly exploits the benefits of functions coexisting on the same hardware
resources. In client–server communication the client requests a service from
the server. This communication is stateless: there are no common global states
between client and server and no shared in-memory data structures (such as a
shared stack), which makes the client and server state machines independent.
Furthermore, the client does not have to trust the server, and all data and
messages received can be verified locally by the client. Also, if there is no
response within a reasonable time, the client may decide to ask another server
for the service or take a proper recovery action. In this way clients are
forced to be self-protective, i.e. in the case of unexpected behaviour from
the message-oriented interface they can perform regular recovery procedures
(e.g. release all related resources before terminating the active process, or
select another server in order to complete their function successfully).
Furthermore, by using public interfaces, competition among programmers is
encouraged, so that the best implementation may win.
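The self-protective, stateless client described above can be sketched as follows. The servers are plain callables standing in for remote endpoints, and for simplicity the sketch measures the response time after the call returns; a real client would abort the wait instead:

```python
import time

def slow_server():
    time.sleep(0.2)         # simulates a server that responds too late
    return "late reply"

def fast_server():
    return "ok"

def request_with_fallback(servers, timeout: float = 0.1) -> str:
    # Stateless, self-protective client: verify each reply locally and,
    # if a server misses the deadline, fall back to the next one.
    for server in servers:
        start = time.monotonic()
        reply = server()
        if time.monotonic() - start <= timeout and reply:
            return reply    # arrived in time and locally verified
        # otherwise: treat this server as failed and try the next
    return "all servers failed"
```

Because the client keeps no shared state with any server, abandoning one server and retrying with another requires no coordination at all.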
### 5.2 Service orientation
Service orientation is a concept in which a system is composed of services,
implemented as Web services, that communicate via open standards. These
services are dynamically discovered and used on demand [16].
Figure 4: Complex systems composed of services.
There are already many software solutions available in various application
domains (medicine, pharmacy, ecology, agronomy, and many others). However, in
most cases this software has been developed traditionally, in a vertical and
monolithic fashion. In Figure 4 the vertical ellipses represent software
applications developed for each application domain in the traditional
monolithic manner, each tailored to the specifics of its application. These
specifics are governed by the particular equipment used in the application
domain and are described by industry standards that are often inaccessible to
the general public or protected as the intellectual property of the individual
equipment manufacturers. A variety of specialist knowledge is integrated into
each such software application; in the figure it is represented by circles.
The specialist knowledge (in the form of software) remains locked in the
vertical branches of the application domain, even though this
knowledge/software could be reused in other domains as well (e.g. signal
denoising). Due to the rigorously monolithic approach to development and the
lack of modularity in the software, it is almost impossible to reuse this
software/knowledge elsewhere. Furthermore, such monolithic technology limits
functional decomposition, so it may be difficult to decompose the system into
separate software functions; even when this is possible, it would be difficult
to integrate these separate functions into software applications from other
application domains, owing to the diversity of the software technologies,
industry standards, programming languages and platforms on which they are
built. Such traditional software production implies that the user purchases
the software, installs it on their computer, and keeps it working by
installing patches for any errors or by being forced to purchase a new version
of the software. Such an approach implies ownership of the software and
responsibility for new versions and evolution, making it much more difficult
to maintain. Finally, the biggest concern here is the propagation of faults
throughout the system: when a failure occurs, the fault is hard to locate, and
the whole system has to be replaced when introducing changes.
The modular architecture and the reusability of already written and tested
software improved significantly with the object-oriented development paradigm.
For example, companies in one of the largest domains of the software industry,
the development of entertainment applications (software games), achieve
significant savings through an object-oriented approach. A character in a game
is represented by an object: the character's characteristics are described by
the attributes of that object, while its behaviours are described in its
functions. The same character can then be reused in a number of other games.
This saves money, because the software (code and related documentation) does
not have to be developed from scratch and, more importantly, it has already
been tested. However, the interfaces are not standardised, and it is difficult
for these objects to be reused on other platforms and integrated with other
programming languages.
A step further was the component-based development approach, which made the
greatest contribution to interface standardisation. The idea behind component
development was to develop software components as separate products,
implementing specific functions, that could be offered on the store shelf. By
assembling such components, more complex software products can be built.
However, the fundamental problem of closed industrial standards remained
unsolved and made it very difficult to mix such finished components built on
heterogeneous technologies. Furthermore, although the components were easy to
change and the system was modular, there were still big concerns about
maintaining and upgrading such systems.
Service-oriented computing is a paradigm for the development of complex
software applications that seeks to move computing from industrial,
closed-world software production to the open world, by using software services
independent of proprietary technologies. It is an architectural style for
building software applications that promotes loose coupling between
components, so that they can be reused and work within a distributed systems
architecture. Special emphasis is given to:
* the transition from the world of closed industry standards to the world of
open standards,
* the shift from product manufacturing to the use of software services.
In the traditional software development paradigm, all software is stored on
the personal computer from which it is executed, or individual packages are
additionally downloaded during execution and run within a software application
that is already installed. In the new service-oriented paradigm, pieces of
software are stored on a distributed network and are called dynamically, on
user (client) demand, to perform a specific task. When calling such a service,
no exact name, number or address of the service is used; the service is called
based solely on a description of the client's needs. The network offers the
service that most closely corresponds to that description at a given moment.
This kind of software delivery is reminiscent of customer service, which is
why we call these software functions services. Each simple software service is
conceived as a software implementation of a certain function, packaged in such
a way that it has a formal and documented interface and is autonomous and
loosely coupled, i.e. it does not depend on how other software services and
their implementations are executed. Any simple software service can be located
and retrieved based on open communication norms and mechanisms that are
already implemented in the telecommunications network.
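The description-based lookup sketched above can be illustrated with a toy registry. This is a minimal, hypothetical sketch (the service names and the string-similarity matching are illustrative only, not any real UDDI mechanism): the client states a need, and the registry returns the service whose documented description matches it best.

```python
# Toy sketch of description-based service discovery: the client never
# addresses a service by name or number, only by describing its need.
from difflib import SequenceMatcher

# A toy "registry": formal, documented descriptions of available services.
registry = {
    "convert_currency": "converts an amount between two currencies",
    "send_invoice": "creates and emails an invoice to a customer",
    "weather_report": "returns the current weather for a given city",
}

def discover(need: str) -> str:
    """Return the service whose description best matches the stated need."""
    score = lambda name: SequenceMatcher(
        None, need.lower(), registry[name].lower()
    ).ratio()
    return max(registry, key=score)

print(discover("I need the weather in a city"))  # -> "weather_report"
```

A real SOA registry would match against structured WSDL metadata rather than raw text, but the control flow (describe, discover, bind, invoke) is the same.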
The basic difference in this new development paradigm (SOA) is that we do not
have all the software on our computer before runtime. Parts of the software
are invoked during runtime, and the network makes sure that the user is best
served at all times. Maintaining and installing new software versions is no
longer a headache for application users. This is presented at the top of
Figure 4. Since the customer is served on a dynamic basis, the network is able
to offer the service that best addresses user needs. Similarly, service
descriptions may also contain other information on service-quality needs.
Software billing also becomes dynamic with the use of online services. Note
that the basic prerequisite here is a network that serves the user and the
interaction between the user and the network.
Much of the work in this new paradigm of software development has been
transferred to the network, such as storage, retrieval and integration of
software in real time, as needed by software users. This is why a completely
new concept was devised for the network, jointly proposed by OASIS
(Organization for the Advancement of Structured Information Standards) and the
W3C (World Wide Web Consortium). This new concept is defined using the
well-known Service Oriented Architecture (SOA) presented in Figure 5. This
architecture is based on publicly available and open Internet protocol
standards, so it is increasingly cost-effective and simple to implement. The
basis of this paradigm is the Web service, together with open standards such
as XML (eXtensible Markup Language) and SOAP (Simple Object Access Protocol),
a simple protocol used to exchange messages between services. The WSDL (Web
Services Description Language) is defined to describe the services; it
specifies the interfaces through which the services are accessed. A further
standard prescribes the compilation and organization of the Universal
Description, Discovery and Integration (UDDI) registry. For the purpose of
building software applications as compositions of Web services, the formal
language WS-BPEL (Web Services Business Process Execution Language) is
defined. SOA products have been built in many widely accepted industrial
frameworks and also as part of virtualisation environments, e.g. OpenStack.
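To make the message-exchange standards above concrete: a SOAP request is simply a structured XML message. The sketch below builds a minimal SOAP 1.1 envelope with the Python standard library; the operation name, parameters and service namespace are hypothetical examples, and no WSDL validation is performed.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_envelope(operation, params, service_ns="http://example.com/service"):
    """Build a minimal SOAP 1.1 request envelope (hand-rolled, schema-free)."""
    ET.register_namespace("soap", SOAP_NS)
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    # The operation and its parameters live in the service's own namespace.
    op = ET.SubElement(body, f"{{{service_ns}}}{operation}")
    for name, value in params.items():
        ET.SubElement(op, f"{{{service_ns}}}{name}").text = str(value)
    return ET.tostring(env, encoding="unicode")

msg = soap_envelope("GetQuote", {"symbol": "ABC"})
print(msg)
```

In practice a SOAP toolkit generates such envelopes automatically from the WSDL description of the service; the point here is only that the wire format is open, textual XML rather than a proprietary binary protocol.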
Figure 5: Service Oriented Architecture (SOA).
There are numerous ongoing research efforts on the analysis of services
offered in the network and the evaluation of the quality, reliability and
security of large, distributed systems based on service-oriented computing.
Specifically, in service-oriented computing it is very important to know what
the quality, reliability and security of the service provided to the user will
be, and how a particular Web service will affect the complex software system
as a whole. Research is focused on new mechanisms and autonomous monitoring
methods and algorithms that will be used for smart network management, with
the aim of achieving reliable, secure and high-quality service systems. The
ultimate goal is full automation, which means that in the future, when
matching software systems to the services available, the network will be able
to determine for itself which version of a particular service is most
appropriate and to evaluate the properties of the overall system.
In mobile network evolution, the introduction of the IMS system into the core
network has been implemented using a client–server architecture, with IMS
service offerings exposed as Web services over REST-style APIs [10], aiming to
link IMS to the Web world and to secure IMS services offered over the
Internet, thus making them accessible to billions of users. However, since the
SOA architecture was undergoing a long standardisation process, the REST style
was initially chosen as simpler to implement and easier to use, thanks to the
numerous available technologies. Further network evolution is integrating SOA
for all network functions [12].
### 5.3 Virtualisation technologies
Let us first define what we mean by virtualisation. We can say that we are
introducing a new abstraction layer. This means separating the upper-layer
functions from the lower-layer functions through a new virtualisation layer,
which maps the logic of the upper layer onto the lower layer. As already
mentioned for hierarchical layering, there are strict rules on implementing
the communication interface among layers: only neighbouring layers may
communicate with one another, and only the upper layer may request services
from the lower layer. With such separation, the two layers become independent:
you can change, duplicate, remove or otherwise modify either layer without
having to impact the other. A client–server type of communication is used on
the interface, introducing additional restrictions that limit error
propagation between the layers.
Figure 6: Virtualisation concept of managing logical connections.
Figure 6 presents the logical decomposition of physical instances in software
execution. In the classical case, a logical instance represents a physical
resource instance. This connection is fixed, and there is no sharing of a
physical resource instance among various logical instances. When multiple
calls are running in such an architecture, each logical instance is connected
to its own physical instance. Such an architecture is inefficient because,
when a call is inactive, the physical resource is left unused. Thus,
virtualisation is a concept usually applied when we want to gain capacity
efficiency and maximal reuse of resources.
As a telecommunications example, we may observe the evolution of the switching
function for telephone calls within a telecommunication exchange. The first
switches were physical boards with a number of connectors, where each
connector represented one physical wire to one telephone end user, and a human
worker manually interconnected two physical connectors on the switching board.
When a telephone user subscribed to the telecommunication service, they were
given a physical connector on the switching board in the telecommunication
exchange. At the call establishment phase, the caller communicated with the
human worker at the switching board and asked for a connection to the callee
(the person being called). The worker then made a physical link between the
caller's and callee's connectors on the board. Once the physical link was
established between the caller's and callee's telephones, they could start a
conversation on that physical resource. Figure 7 shows human workers at the
switching board in the telephone exchange in Maribor, Slovenia (1957).
Figure 7: First telephone exchange in Maribor, Slovenia 1957.
The next step in the evolution of the switching board was the idea of
multiplexing a number of telephone users over one physical link using a
time-sharing approach. This concept is called time-division multiplexing
(TDM). The idea involves the definition of a virtual circuit, represented by
the time slot each telephone user is assigned at subscription to the telephone
service, called a communication channel. In this phase, each telephone user
gets their own virtual channel, represented as a logical instance at the
logical layer, and a set of users shares one physical wire. This approach
introduces sharing of physical resources, and there is a need for reliable
management functions. At this stage, the management of call-control processes
was still completely under the control of the logical layer, so errors could
easily propagate across multiple call processes.
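The slot assignment described above can be sketched as a toy model (illustrative only, not any real exchange's logic): each subscriber owns a fixed slot in a repeating frame, so many virtual channels share one physical wire.

```python
# Toy model of time-division multiplexing: each subscriber is assigned a
# fixed time slot (a virtual channel) on one shared physical wire.

def build_frame(subscribers, samples):
    """One TDM frame: slot i carries the current sample of subscriber i."""
    return [samples[s] for s in subscribers]

def demultiplex(frame, subscribers):
    """Recover each subscriber's sample from its assigned slot."""
    return {s: frame[i] for i, s in enumerate(subscribers)}

subscribers = ["alice", "bob", "carol"]            # slots 0, 1, 2 on one wire
samples = {"alice": 0.1, "bob": -0.4, "carol": 0.7}

frame = build_frame(subscribers, samples)          # what travels on the wire
assert demultiplex(frame, subscribers) == samples  # each channel recovered
```

The key property is visible in the code: the wire (`frame`) is shared, while the slot index plays the role of the logical instance that identifies each virtual channel.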
The introduction of an additional virtualisation layer separates the
management of call-control functions from physical resource-control functions.
All functions related to the regular usage of physical resources are left to
the resource-layer management and are separated from the functions related to
the regular use of logical processes. This separation of call control from
resource-control logic involves the definition of strict rules using
client–server communication principles. Thus, the call-control and
resource-control processes are forced to be managed independently. The common
name for the new virtualisation layer that is responsible for managing virtual
resources is the hypervisor. There exist different implementations of
hypervisors, e.g. in container technology and virtual-machine technology; the
main difference lies in the level of virtualisation involved. Hardware and
software resources may be virtualized at various functional layers, i.e. the
hardware level, operating-system level, library-support level and application
level [6]. Virtual machines follow a resource-virtualisation approach in which
each virtual instance has its own operating system, and fault-tolerance and
security mechanisms are fully under the control of each virtual instance.
Containers, on the other hand, implement a softer version of virtualisation,
providing isolation of containers on top of a common underlying operating
system shared by all containers running within the same hypervisor. The right
level of virtualisation has to be found for each application: the
virtual-machine approach is much more expensive in terms of resource usage,
but it provides a more secure resource-control mechanism. For some
applications container technology may be sufficient; this depends on the
security requirements of the particular application.
Just as in the virtualisation of a communication wire, where multiple virtual
channels are multiplexed onto the same wire, the same virtualisation concept
is applied to computer architecture, where multiple virtual machines are
multiplexed onto the same physical hardware machine.
## 6 Redesigning the complex system structure
In our example of the Ericsson-based MSC node, the above-mentioned design
principles were introduced with the help of key technologies: client–server
architecture, service orientation and virtualisation. There is published
material explaining in detail the modularity concepts introduced and the
node's internal software architecture [5]. These concepts were reused from the
networking example, where each function within the network architecture is
defined through the set of services the node offers and may be implemented on
a separate hardware node and by different vendors. The communication
interfaces and protocols are therefore well specified for their interaction in
achieving the network functionalities; their interaction is governed by these
communication protocols without any knowledge of their internal structure.
Such a network architecture enables the definition of autonomous service
functions that communicate over well-defined interfaces.
Firstly, virtualisation was introduced to enable the separation of
application-layer functionalities from the resource-layer functionalities.
Application modules are defined to package application-layer functionality
that can be sold to customers and thus increase the value of the MSC product.
On the other side, there is the application-platform layer, where the
non-application-specific (or application-independent) physical or logical
functions, resource-specific functions and resource-management functions are
located. One example is the switching function, which is implemented as a
service within the application platform. Services are grouped into modules,
and the platform is responsible for coordinating common resources. Design
principles were introduced specifying that every module within the system
provides its services to other modules or external users over an interface or
protocol; thus, a standard set of communication rules was introduced. The
interface between the application modules and the application-platform layer
is called the application platform service interface (APSI) and contains a set
of service specifications that are provided over that interface. It is a
client–server interface and is independent of the service users: it does not
depend on user implementation and configuration specifics. These service
specifications describe the services and the ways to access each specific
service.
Furthermore, the product structure is documented for the purpose of easier
management. The products in the product structure are categorized and
hierarchically layered. In the case of the MSC node, the product hierarchy
follows the OSI layer hierarchy, as presented in Figure 4.3. This structure is
represented through a product numbering scheme. The benefits of this kind of
product management are numerous and relate to collaborative product
management: the strict numbering scheme also implies relationships among
modules that are interrelated in a specific functionality and product-control
level, and implies any special service agreements. Interfaces also become a
part of the product portfolio. A set of standards is defined relating to the
procedures and recovery actions needed when interfaces and modules are
changed.
Such an architecture enabled easier product marketing and the production of a
variety of MSC node configurations reusing the same set of software modules.
The strict definition of interfaces restricts fault propagation. Furthermore,
independent changes can happen at the application layer and at the
application-platform layer.
### 6.1 Exercises
Exercises for students to assess learning from the chapter just read:
1. 1.
What are the benefits of introducing new abstractions in the system?
2. 2.
When and how do we identify the need for system restructuring?
Exercises for students that require reflecting on the case study:
1. 1.
What do you think may be the limitations and obstacles when introducing new
abstractions? See, for example, [5].
## 7 Network evolution and further role of functional programming
The main goal during network evolution is to enable different technologies and
vendors to access the network infrastructure and to provide their
interconnection effectively and with minimal use of resources. The network
provides a shared resource for all interconnected parties. The new generation
of networks introduces two concepts: Software Defined Networking and Network
Function Virtualisation. With these two new concepts, the network opens its
resources to be used and configured by its end users in an on-demand fashion.
Furthermore, the network introduces self-organisation and autonomic
network-management functions [1]. Along with these new concepts, existing
complex systems have to re-engineer their internal structure so that they can
provide as many of their functions as possible in an as-a-service fashion. The
use of open standards is promoted by the network infrastructure. New business
models will revolutionise the future telecom business.
In the second part of this lecture we introduce these new technologies and
provide reflections on system design principles. We discuss autonomic
(networking) design principles for the management functions of network
operation. Furthermore, we define network design principles that will drive
future innovation within the network. Finally, in this new generation of
networks, a shift will be made from system programming to network programming.
Here as well, the functional programming approach will be enforced.
## References
* [1] Agoulmine, N.: Autonomic Network Management Principles: From Concepts to Applications. Academic Press, Inc., Orlando, FL, USA, 1st edn. (2016)
  * [2] Armstrong, J.: A history of Erlang. In: Proceedings of the Third ACM SIGPLAN Conference on History of Programming Languages, pp. 6–1–6–26. HOPL III, Association for Computing Machinery, New York, NY, USA (2007). https://doi.org/10.1145/1238844.1238850
* [3] Barabási, A.L.: Network science. Cambridge University Press, 1st edn. (2016)
* [4] Elkady, A., Joy, J., Sobh, T., Valavanis, K.: A structured approach for modular design in robotics and automation environments. Journal of Intelligent and Robotic Systems 72, 5–19 (05 2012). https://doi.org/10.1007/s10846-012-9798-y
* [5] Enderin, M., LeCorney, D., Lindberg, M., Lundqvist, T.: AXE 810—The evolution continues. Ericsson Review 78(1), 10–23 (2001)
* [6] Hwang, K., Dongarra, J., Fox, G.C.: Distributed and Cloud Computing: From Parallel Processing to the Internet of Things. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1st edn. (2011)
  * [7] IEEE: IEEE Standard Glossary of Software Engineering Terminology (1990)
* [8] Martin, R.C.: Clean Code: A Handbook of Agile Software Craftsmanship. Prentice Hall PTR, Upper Saddle River, NJ, USA, 1 edn. (2008)
* [9] Meadows, D.: Thinking in systems. Chelsea Green Publishing (2008)
* [10] Noldus, R., Olsson, U., Mulligan, C., Fikouras, I., Ryde, A., Stille, M.: IMS Application Developer’s Handbook: Creating and Deploying Innovative IMS Applications. Academic Press, Inc., USA, 1st edn. (2016)
* [11] Orange, V.: Teaching About Supercomplexity in Interaction, pp. 107–125. Springer International Publishing, Cham (2019)
* [12] Osseiran, A., Monserrat, J.F., Marsch, P.: 5G Mobile and Wireless Communications Technology. Cambridge University Press, New York, NY, USA, 1st edn. (2016)
* [13] Perry, D.E., Wolf, A.L.: Foundations for the study of software architecture. SIGSOFT Softw. Eng. Notes 17(4), 40–52 (Oct 1992)
* [14] Saltzer, J.H., Kaashoek, M.F.: Principles of Computer System Design: An Introduction. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA (2009)
* [15] Telecom, E.: Understanding Telecommunications, Vol 1. Ericsson Telecom, Telia, and Studentlitteratur, Lund, Sweden, 1st edn. (1998)
* [16] Vliet, H.v.: Software Engineering: Principles and Practice. Wiley Publishing, 3rd edn. (2008)
* [17] Weisfeld, M.: The Object-Oriented Thought Process. Addison-Wesley Professional, 3rd edn. (2008)
# AA3DNet: Attention Augmented Real Time 3D Object Detection
Abhinav Sagar
Vellore Institute of Technology
Vellore, India
[email protected]
###### Abstract
In this work, we address the problem of 3D object detection from point-cloud
data in real time. For autonomous vehicles, it is very important for the
perception component to detect real-world objects with both high accuracy and
fast inference. We propose a novel neural network architecture, along with
training and optimization details, for detecting 3D objects from point-cloud
data. We present the anchor design along with the custom loss functions used
in this work. A combination of spatial and channel-wise attention modules is
used. We use the KITTI 3D bird's-eye-view dataset for benchmarking and
validating our results. Our method surpasses the previous state of the art in
this domain in terms of both average precision and speed, running at >30 FPS.
Finally, we present an ablation study to demonstrate that the performance of
our network generalizes. This makes it a feasible option for deployment in
real-time applications like self-driving cars.
## 1 Introduction
A lot of work has been done in 2D object detection using convolutional neural
networks. Object detection algorithms can be broadly grouped into the
following two types:
1. Single-stage detectors - YOLO (Redmon et al., 2016), SSD (Liu et al.,
2016).
2. Two-stage detectors - RCNN (Girshick et al., 2014), Fast RCNN (Girshick,
2015), Faster RCNN (Ren et al., 2015).
The difference between the two is that in two-stage detectors, the first stage
uses region-proposal networks to generate regions of interest, and the second
stage uses these regions of interest for object classification and
bounding-box regression. These detectors are proven to achieve better accuracy
than one-stage architectures, but at the cost of greater computational burden
and inference time. A single-stage detector, on the other hand, uses the input
image to directly learn the class-wise probabilities and bounding-box
coordinates. These architectures thus treat object detection as a simple
regression problem and are faster but less accurate.
There has also been a lot of work on 3D object detection. Some approaches are
camera-based, using either monocular or stereo images; work has also been done
fusing depth information from RGB-D images taken from the camera. The main
problem with camera-based approaches is the low accuracy achieved. Lidar data
has therefore proven to be a better alternative, achieving higher accuracy and
thus safety, which is a primary concern for self-driving cars. The challenge
with lidar data is that it comes in the form of point clouds with millions of
points, which increases the computational cost and processing time.
Point-cloud data can be represented in many ways, of which the main one is the
3D voxel grid. Monocular 3D object detection, by contrast, is a difficult
problem due to the loss of depth information in 2D image planes. Recent
networks have been proposed that first predict pixel-level depth and convert
the monocular image to a 3D point-cloud representation. Although these methods
achieve good performance, they introduce an additional expensive computational
cost for predicting high-resolution depth maps from images, making them
challenging to deploy in real-time settings like self-driving cars.
In this work, our approach uses only the bird's-eye view for 3D object
detection in real time. The context of our work is self-driving cars, but it
can be deployed in other settings as well. To validate our work, we benchmark
our results on the publicly available KITTI 3D dataset (Geiger et al., 2012).
We use spatial and channel attention modules in one branch to find where the
informative parts of the image are and which features in the image are
meaningful, respectively. The second branch locates the 2D bounding-box
coordinates, while the third branch computes the deviations between the
predicted and actual coordinates. The individual features are summed to give
the refined 3D bounding-box coordinates. For the evaluation metric, we use the
class-wise average precision. Our work beats the previous state-of-the-art
approaches for 3D object detection while also running at greater than 30 FPS.
We also present the learning and optimization aspects along with an ablation
study of this approach, and show how it could potentially be generalized to
other real-world settings.
A sample of the predicted 3D detections on the KITTI validation dataset is
shown in Figure 1:
Figure 1: 3D detection from the KITTI validation dataset projected onto an
image
## 2 Related Work
Recently there has been a surge of papers on 3D object detection from various
kinds of data, such as lidar and stereo. VOTE 3D (Qi et al., 2019) uses a
sliding window on a 3D voxel grid to detect objects; the pre-trained model is
then fed to an SVM classifier. VeloFCN (Li et al., 2017) projects 3D
point-cloud data to a perspective front view to obtain a 2D depth map, and
objects are detected by running a convolutional neural network on the depth
map. The MV3D (Qi et al., 2018) architecture uses a similar approach,
combining the features extracted from multiple views such as the front view,
bird's-eye view and camera view; these extracted features are passed through a
CNN to detect 3D objects.
PointNet (Qi et al., 2017) proposed an end-to-end classification neural
network that directly takes a point cloud as input, without any preprocessing,
and outputs class scores. (Zhou and Tuzel, 2018) subdivides the point cloud
into 3D voxels and then transforms the points within each voxel into a
trainable feature vector that characterizes the shape information of the
contained points; the representation vectors for the voxels are stacked
together and passed to a region-proposal network to detect the objects. (Chen
et al., 2020a) proposed an anchor-free method based on the firing of hotspots.
(Ge et al., 2020) used an anchor-free one-stage network for 3D object
detection. Pairwise spatial relationships of features were used for monocular
3D object detection in (Chen et al., 2020c). A learnable depth-guided
convolution was used to tackle the monocular 3D object detection problem (Ding
et al., 2020).
A triple attention module was used in (Liu et al., 2020) for 3D object
detection from point clouds. A comprehensive study of the localization errors
involved in detecting 3D objects was presented in (Ma et al., 2021). New
voting algorithms were independently proposed for improving the robustness of
3D object detectors in (Qi et al., 2020) and (Xie et al., 2020). (Zhou et al.,
2020) used an end-to-end learnable network with multi-view feature fusion from
lidar data; (Vora et al., 2020) similarly used a sequential fusion approach. A
more general method, taking into account the different shapes and sizes of the
objects present in an image, was proposed by (Zhang et al., 2021). Both the 3D
object detection and the tracking problem were tackled with a single network
(Yin et al., 2021).
We summarize our main contributions as follows:
• A novel approach using spatial and channel attention mechanisms to
simultaneously detect and regress 3D bounding boxes over all the objects
present in the image.
• A thorough analysis of the backbone, optimization, anchors and loss
functions used in our network, which is end-to-end trainable.
• Evaluation on the KITTI dataset shows we outperform all previous
state-of-the-art methods in terms of average precision while running at >30
FPS.
## 3 Model
### 3.1 Dataset
For this work, we have used the KITTI dataset (Geiger et al., 2012), which
contains lidar data taken from a sensor mounted on the front of the car. Since
the data contains millions of points and is of quite high resolution,
processing is a challenge, especially in real-world situations. The task is to
detect and regress a bounding box for 3D objects in real time. The dataset has
7481 training point clouds and 7518 test point clouds, comprising a total of
labelled objects. Object detection performance is measured through average
precision and IoU (intersection over union), with a threshold of 0.7 for the
car class. The KITTI 3D object benchmark provides 3D bounding boxes for object
classes including cars, vans, trucks, pedestrians and cyclists, labelled
manually in 3D point clouds on the basis of information from the camera. KITTI
also provides three detection evaluation levels: easy, moderate and hard,
according to the object size, occlusion state and truncation level. The
minimum pixel height for easy objects is 40 px, which approximately
corresponds to vehicles within 28 m; for moderate and hard levels it is 25 px,
corresponding to a minimum distance of 47 m.
### 3.2 Problem Definition
Given an RGB image and the corresponding camera parameters, our goal is to
classify and localize the objects of interest in 3D space. Each object is
represented by its category, 2D bounding box $B_{2D}$, and 3D bounding box
$B_{3D}$. $B_{2D}$ is represented by its center $c_{i}$ = $[x_{0},y_{0}]_{2D}$
and size $[h_{0},w_{0}]_{2D}$ in the image plane, while $B_{3D}$ is defined by
its center $[x,y,z]_{3D}$, size $[h,w,l]_{3D}$ and heading angle $\gamma$ in
the 3D world space.
The 3D bounding box $B_{3D}$ is the final goal of prediction. The first task
is 2D object detection in which the goal is to predict the 2D bounding box
$B_{2D}$ of the object and its class. $B_{2D}=(w_{2D},h_{2D},u_{b},v_{b})$
where $(w_{2D},h_{2D})$ indicates the size of $B_{2D}$ and $(u_{b},v_{b})$
represents the center of $B_{2D}$ on the image plane.
### 3.3 Spatial Attention Module
The spatial attention module is used for capturing the spatial dependencies of
the feature maps. The spatial attention (SA) module used in our network is
defined below:
$f_{{SA}}(x)=f_{sigmoid}\left({W}_{2}\left(f_{{ReLU}}\left({W}_{1}(x)\right)\right)\right)$
(1)
where $W_{1}$ and $W_{2}$ denote the first and second $1\times 1$ convolution
layers respectively, $x$ denotes the input data, $f_{sigmoid}$ denotes the
sigmoid function, and $f_{ReLU}$ denotes the ReLU activation function.
The spatial attention module used in this work is shown in Figure 2:
Figure 2: Illustration of our spatial attention module
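Equation (1) can be sketched directly in NumPy. This is a minimal, framework-free sketch under stated assumptions: the $1\times 1$ convolutions are implemented as channel-mixing matrix multiplies, and the channel sizes are hypothetical (the paper does not specify them).

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(x, w1, w2):
    """Equation (1): f_SA(x) = sigmoid(W2(ReLU(W1 x))).

    x:  feature map of shape (C, H, W)
    w1: (C_mid, C) weights of the first 1x1 convolution
    w2: (1, C_mid) weights of the second 1x1 convolution
    Returns a (1, H, W) spatial attention map with values in (0, 1).
    """
    # A 1x1 convolution is just a matrix multiply over the channel axis.
    h = relu(np.einsum("mc,chw->mhw", w1, x))
    return sigmoid(np.einsum("om,mhw->ohw", w2, h))

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8, 8))      # toy feature map (channels 16)
w1 = rng.standard_normal((4, 16)) * 0.1  # hypothetical weight shapes
w2 = rng.standard_normal((1, 4)) * 0.1
att = spatial_attention(x, w1, w2)
assert att.shape == (1, 8, 8)
```

The output is a per-pixel weight map: multiplying it element-wise into the feature map emphasizes informative spatial locations.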
### 3.4 Channel Attention Module
The channel attention module is used for extracting high level multi-scale
semantic information. The channel attention (CA) module used in our network is
defined below:
$f_{{CA}}(x)=f_{sigmoid}({W}_{2}(f_{{ReLU}}({W}_{1}f_{{AvgPool}}^{1}(x))))$
(2)
where $W_{1}$ and $W_{2}$ denote the first and second $1\times 1$ convolution
layers, $x$ denotes the input data, $f^{1}_{AvgPool}$ denotes the global
average pooling function, $f_{sigmoid}$ denotes the sigmoid function, and
$f_{ReLU}$ denotes the ReLU activation function.
The channel attention module used in this work is shown in Figure 3:
Figure 3: Illustration of our channel attention module
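Analogously, Equation (2) reduces the feature map to a per-channel statistic before the two $1\times 1$ convolutions. A minimal NumPy sketch, again with hypothetical channel sizes:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """Equation (2): f_CA(x) = sigmoid(W2(ReLU(W1 AvgPool(x)))).

    x:  feature map of shape (C, H, W)
    w1: (C_mid, C) and w2: (C, C_mid) weights of the 1x1 convolutions
    Returns per-channel attention weights of shape (C,).
    """
    pooled = x.mean(axis=(1, 2))          # global average pooling -> (C,)
    return sigmoid(w2 @ relu(w1 @ pooled))

rng = np.random.default_rng(1)
x = rng.standard_normal((16, 8, 8))
w1 = rng.standard_normal((4, 16)) * 0.1   # hypothetical bottleneck size
w2 = rng.standard_normal((16, 4)) * 0.1
weights = channel_attention(x, w1, w2)
assert weights.shape == (16,)
```

Broadcasting these weights over the spatial dimensions re-scales each channel, letting the network emphasize the semantically meaningful feature channels.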
### 3.5 Network Architecture
We divide the point-cloud data into 3D voxel grid cells. Our CNN backbone
takes as input the voxelized image and outputs a feature vector; we use ResNet
as the backbone of our network. Residual blocks are used for locating the 2D
bounding-box coordinates, which are then propagated to an RoI Align operator
and on to a fully connected layer. In parallel, spatial and channel attention
mechanisms are used to find where the informative parts of the image are and
which features in the image are meaningful. The individual features are
summed, and the result is in turn summed with the first block to produce the
3D bounding-box coordinates. In parallel, a third block uses RoI Align and
fully connected layers to find the deviations between the actual and predicted
coordinates. Anchors are used in these delta blocks to adjust the coordinates
according to the size and shape of the detected object. This block is
learnable, improving its parameters in every iteration. The learned deviations
are finally summed with the 3D bounding-box coordinates to give the refined 3D
bounding-box coordinates.
The residual blocks are made up of a fully connected layer followed by a
non-linear activation function, ReLU in this case, and a batch normalization
layer. These layers transform each point in the voxel into a point-wise
feature vector. An element-wise max-pooling layer is also used, which extracts
the maximum of the neighbouring values covered by the filter as it is applied
across the feature map; this operation produces the locally aggregated
features. A point-wise concatenation operator then concatenates each
point-wise feature vector with the locally aggregated features. Our detector
regresses 7 parameters in total: three for the offset of the center
coordinates, three for the offset of the dimensions, and one for the offset of
the rotation angle. The network architecture is shown in Figure 4:
Figure 4: Illustration of our network architecture. SA denotes spatial
attention module, CA denotes channel attention module, FC denotes fully
connected layer and + denotes summation operator.
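The per-voxel feature extraction just described (fully connected layer, ReLU, element-wise max-pooling, and point-wise concatenation) can be sketched as follows; the weights and feature sizes are arbitrary placeholders, and batch normalization is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def pointwise_block(points, W, b):
    """Per-point FC + ReLU (batch norm omitted), element-wise max-pooling
    over the points, and point-wise concatenation with the aggregated
    feature, mirroring the block described above."""
    feat = np.maximum(points @ W + b, 0.0)   # point-wise feature vectors
    local = feat.max(axis=0)                 # locally aggregated feature
    return np.concatenate([feat, np.broadcast_to(local, feat.shape)], axis=1)

pts = rng.normal(size=(5, 4))                # 5 points: (x, y, z, intensity)
W, b = rng.normal(size=(4, 8)), np.zeros(8)  # illustrative weights
out = pointwise_block(pts, W, b)             # shape (5, 16)
```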
## 4 Experiments
### 4.1 Anchors
Anchors are very important for efficient object detection. They encode prior beliefs about the detected object: its size, its position in the image, its pose, its orientation, and so on. Using anchors of multiple shapes and sizes makes training more stable and also reduces the computational burden and runtime of the model. We chose two anchors for each class, as shown in Tables 1, 2 and 3 respectively:
Table 1: Car anchors Height(m) | Width(m) | Length(m) | Rotation(Theta)
---|---|---|---
1.6 | 1.6 | 4 | 0
1.6 | 1.6 | 4 | 90
Table 2: Pedestrian anchors Height(m) | Width(m) | Length(m) | Rotation(Theta)
---|---|---|---
1.7 | 0.5 | 0.7 | 0
1.7 | 0.5 | 0.7 | 90
Table 3: Cyclist anchors Height(m) | Width(m) | Length(m) | Rotation(Theta)
---|---|---|---
1.6 | 0.7 | 2 | 0
1.6 | 0.7 | 2 | 90
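For illustration, the anchor configuration of Tables 1-3 (two anchors per class, differing only in yaw) can be generated as below; `make_anchors` is a hypothetical helper, not the paper's code.

```python
import math

# (height, width, length) per class, following the first rows of Tables 1-3;
# each class gets two anchors that differ only in yaw (0 and 90 degrees).
ANCHOR_DIMS = {
    "Car":        (1.6, 1.6, 4.0),
    "Pedestrian": (1.7, 0.5, 0.7),
    "Cyclist":    (1.6, 0.7, 2.0),
}

def make_anchors(cx, cy, cz):
    """Place both anchors of every class at one BEV grid location."""
    anchors = []
    for cls, (h, w, l) in ANCHOR_DIMS.items():
        for theta in (0.0, math.pi / 2):
            anchors.append((cls, cx, cy, cz, l, h, w, theta))
    return anchors
```

In a full detector this is repeated for every cell of the BEV feature map.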
### 4.2 Loss Functions
A vector $s=(x,y,z,l,h,w,\theta)$ represents the 3D bounding-box center coordinates, length, height, width and yaw, respectively. The geometric relations between these parameters are given below, where the subscript $s$ denotes the ground-truth box and $a$ the anchor. The localization regression targets between ground truth and anchors are defined by Equations 3-10:
$\Delta x=\frac{x_{s}-x_{a}}{\sqrt{l_{a}^{2}+w_{a}^{2}}}$ (3)
$\Delta z_{b}=z_{s}-\frac{h_{s}}{2}-z_{a}+\frac{h_{a}}{2}$ (4)
$\Delta y=\frac{y_{s}-y_{a}}{\sqrt{l_{a}^{2}+w_{a}^{2}}}$ (5)
$\Delta z_{t}=z_{s}+\frac{h_{s}}{2}-z_{a}-\frac{h_{a}}{2}$ (6)
$\Delta l=\log\frac{l_{s}}{l_{a}}$ (7)
$\Delta w=\log\frac{w_{s}}{w_{a}}$ (8)
$\Delta\zeta=\left|\sin\left(\theta_{s}-\theta_{a}\right)\right|$ (9)
$\Delta\eta=\cos\left(\theta_{s}-\theta_{a}\right)$ (10)
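A direct transcription of the regression targets in Equations 3-10 might look like the sketch below, interpreting the denominator $\sqrt{l^2+w^2}$ as the anchor's BEV diagonal (the usual convention; an assumption here).

```python
import numpy as np

def encode_box(gt, anchor):
    """Regression targets of Eqs. (3)-(10); gt and anchor are dicts with
    keys x, y, z, l, w, h, theta. d is the anchor's BEV diagonal."""
    d = np.sqrt(anchor["l"] ** 2 + anchor["w"] ** 2)
    zb = (gt["z"] - gt["h"] / 2) - (anchor["z"] - anchor["h"] / 2)  # Eq. (4)
    zt = (gt["z"] + gt["h"] / 2) - (anchor["z"] + anchor["h"] / 2)  # Eq. (6)
    return {
        "dx": (gt["x"] - anchor["x"]) / d,                          # Eq. (3)
        "dy": (gt["y"] - anchor["y"]) / d,                          # Eq. (5)
        "dzb": zb, "dzt": zt,
        "dl": np.log(gt["l"] / anchor["l"]),                        # Eq. (7)
        "dw": np.log(gt["w"] / anchor["w"]),                        # Eq. (8)
        "dzeta": abs(np.sin(gt["theta"] - anchor["theta"])),        # Eq. (9)
        "deta": np.cos(gt["theta"] - anchor["theta"]),              # Eq. (10)
    }
```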
Since the sine-based angle localization loss of Equation 9 cannot distinguish bounding boxes whose headings are flipped by $\pi$, we additionally use a softmax classification loss on the heading direction for both positive and negative anchors. For object classification we use the focal loss, shown for positive and negative anchors in Equations 11 and 12 respectively:
$\mathcal{L}_{pos}=-\alpha_{a}\left(1-p^{a}\right)^{\gamma}\log p^{a}$ (11)
$\mathcal{L}_{neg}=-\left(1-\alpha_{a}\right)\left(p^{a}\right)^{\gamma}\log\left(1-p^{a}\right)$ (12)
We use Intersection over Union (IoU) to match anchors to the ground truth: anchors with an IoU above 0.60 are treated as positive, while those below 0.45 are treated as negative. We use a binary cross-entropy loss for detection and a variant of the Huber loss for regression.
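The anchor assignment rule above (positive above 0.60 IoU, negative below 0.45, ignored in between) can be sketched with a simple axis-aligned IoU; a real implementation would use rotated-box IoU, omitted here for brevity.

```python
def iou_2d(a, b):
    """Axis-aligned BEV IoU between boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def assign(anchor, gt, pos_thr=0.60, neg_thr=0.45):
    """Label an anchor positive/negative/ignored by the stated thresholds."""
    v = iou_2d(anchor, gt)
    return "pos" if v > pos_thr else ("neg" if v < neg_thr else "ignore")
```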
Let $i$ and $j$ index the positive and negative anchors, and let $p$ denote the sigmoid output of the classification network. Let $pos$ represent the positive regression anchors and $neg$ the negative regression anchors. The individual loss terms are given by Equations 13-15:
$L_{1}=\frac{1}{N}\sum_{i}L_{pos}\left(p_{i}^{pos},1\right)$ (13)
$L_{2}=\frac{1}{N}\sum_{j}L_{neg}\left(p_{j}^{neg},0\right)$ (14)
$L_{3}=\frac{1}{N}\sum_{k}\left(L_{r}\left(l,l^{*}\right)+L\left(h,h^{*}\right)+L_{c}\left(w,w^{*}\right)\right)$
(15)
The overall loss function is shown in Equation 16:
$L_{total}=\alpha L_{1}+\beta L_{2}+\gamma L_{3}$ (16)
Here $\alpha$, $\beta$ and $\gamma$ are the hyper-parameters with values set
as 0.5, 0.5 and 1.0 respectively.
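A minimal sketch of the focal loss of Equations 11-12 and the weighted total loss of Equation 16 follows; the values $\alpha_a=0.25$ and $\gamma=2$ are common defaults from the focal-loss literature, assumed here rather than taken from the paper.

```python
import numpy as np

def focal_loss(p, target, alpha=0.25, gamma=2.0):
    """Focal loss for one anchor; p is the sigmoid score.
    alpha/gamma are assumed defaults, not the paper's values."""
    if target == 1:  # positive anchor, Eq. (11)
        return -alpha * (1 - p) ** gamma * np.log(p)
    # negative anchor, Eq. (12)
    return -(1 - alpha) * p ** gamma * np.log(1 - p)

def total_loss(l1, l2, l3, a=0.5, b=0.5, g=1.0):
    """Eq. (16) with the stated weights alpha=0.5, beta=0.5, gamma=1.0."""
    return a * l1 + b * l2 + g * l3
```

The $(1-p)^\gamma$ factor down-weights easy, already-confident anchors so training focuses on hard examples.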
### 4.3 Evaluation Metrics
We use Average Precision with 40 recall positions ($AP_{40}$) under three difficulty settings (easy, moderate and hard). We report the performance on the Car, Pedestrian and Cyclist categories, with default IoU thresholds of 0.7, 0.5 and 0.5 for these three categories respectively. Each manually annotated object is assigned to the easy, moderate or hard level according to its occlusion, truncation and box height. The metrics used most extensively in the literature are average precision (AP) on the Car class for bird's-eye-view (BEV) and 3D boxes at 0.5/0.7 IoU thresholds. We present both $AP_{11}$ and $AP_{40}$ results to enable comprehensive comparisons with prior work.
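The $AP_{40}$ metric averages the interpolated precision at 40 equally spaced recall positions (recall 0 excluded). A simplified sketch:

```python
import numpy as np

def average_precision(precisions, recalls, n_points=40):
    """AP_N: mean interpolated precision at N recall positions,
    a simplified version of the KITTI-style metric described above."""
    precisions = np.asarray(precisions, float)
    recalls = np.asarray(recalls, float)
    samples = np.linspace(1.0 / n_points, 1.0, n_points)  # skip recall 0
    ap = 0.0
    for r in samples:
        mask = recalls >= r
        # interpolated precision: best precision at recall >= r
        ap += precisions[mask].max() if mask.any() else 0.0
    return ap / n_points
```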
### 4.4 Implementation Details
We train our model on a GTX 1080Ti GPU with a batch size of 16 for 100 epochs.
We use Adam optimizer with an initial learning rate of 0.001, and decay it by
ten times at every 100 epochs. The weight decay is set to 0.0001. We use Non-
Maximum Suppression (NMS) on center detection results. We use 3D bounding
boxes score of center detection as the confidence of predicted results. We
discard predictions with confidence value less than 0.1. All input images are
padded to the same size of $384\times 1280$. The prediction head of the
backbone consists of one $3\times 3\times 256$ conv layer, BatchNorm, ReLU,
and $1\times 1\times op$ conv layer where $op$ is the output size.
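The NMS step applied to the center detection results can be sketched with greedy suppression on axis-aligned boxes; the IoU threshold of 0.5 is an assumed value, not taken from the paper.

```python
def nms(boxes, scores, iou_thr=0.5):
    """Greedy NMS on axis-aligned boxes (x1, y1, x2, y2).
    Returns indices of kept boxes, highest score first."""
    def iou(a, b):
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        ua = ((a[2] - a[0]) * (a[3] - a[1])
              + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / ua if ua > 0 else 0.0
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= iou_thr]
    return keep
```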
## 5 Results
We report results for the Car category on the KITTI test set in Table 4. Overall, our method achieves superior results over previous methods. Compared with methods that use extra data, our network still obtains comparable performance, which further demonstrates the effectiveness of our model. Our method is also much faster than most existing methods, allowing real-time inference, which is important in the context of autonomous driving.
Table 4: Quantitative results for the Car category on the KITTI test set, evaluated by $AP_{3D}$. “Extra” lists the additional information required by each method. We divide existing methods into two groups according to whether they utilize extra information, and sort them within each group by performance on the moderate level of the test set. The three sets of Easy, Mod and Hard denote Val $AP_{11}$, Val $AP_{40}$ and Test $AP_{40}$, respectively. Method | Extra | Time(ms) | $Easy_{1}$ | $Mod_{1}$ | $Hard_{1}$ | $Easy_{2}$ | $Mod_{2}$ | $Hard_{2}$ | $Easy_{3}$ | $Mod_{3}$ | $Hard_{3}$
---|---|---|---|---|---|---|---|---|---|---|---
MonoPSR | depth, LiDAR | 120 | 12.75 | 11.48 | 8.59 | - | - | - | 10.76 | 7.25 | 5.85
UR3D | depth | 120 | 28.05 | 18.76 | 16.55 | 23.24 | 13.35 | 10.15 | 15.58 | 8.61 | 6.00
AM3D | depth | - | 32.23 | 21.09 | 17.26 | 28.31 | 15.76 | 12.24 | 16.50 | 10.74 | 9.52
PatchNet | depth | - | 35.10 | 22.00 | 19.60 | 31.60 | 16.80 | 13.80 | 15.68 | 11.12 | 10.17
DA-3Ddet | depth, LiDAR | - | 33.40 | 24.00 | 19.90 | - | - | - | 16.80 | 11.50 | 8.90
D4LCN | depth | - | 26.97 | 21.71 | 18.22 | 22.32 | 16.20 | 12.30 | 16.65 | 11.72 | 9.51
Kinem3D | multi-frames | 120 | - | - | - | 19.76 | 14.10 | 10.47 | 19.07 | 12.72 | 9.17
FQNet | - | - | 5.98 | 5.50 | 4.75 | - | - | - | 2.77 | 1.51 | 1.01
MonoGRNet | - | 60 | 13.88 | 10.19 | 7.62 | - | - | - | 9.61 | 5.74 | 4.25
MonoDIS | - | 100 | 18.05 | 14.98 | 13.42 | - | - | - | 10.37 | 7.94 | 6.40
M3D-RPN | - | 160 | 20.27 | 17.06 | 15.21 | 14.53 | 11.07 | 8.65 | 14.76 | 9.71 | 7.42
MonoPair | - | 57 | - | - | - | 16.28 | 12.30 | 10.42 | 13.04 | 9.99 | 8.65
RTM3D | - | 55 | 20.77 | 16.86 | 16.63 | - | - | - | 14.41 | 10.34 | 8.77
Movi3D | - | 45 | - | - | - | 14.28 | 11.13 | 9.68 | 15.19 | 10.90 | 9.26
Zhang et al. (2021) | - | 35 | 28.17 | 21.92 | 19.07 | 23.64 | 17.51 | 14.83 | 19.94 | 13.89 | 12.07
AA3DNet | - | 26 | 30.22 | 22.54 | 18.38 | 24.01 | 17.81 | 14.31 | 21.62 | 14.90 | 11.82
We present our model’s performance on the KITTI validation set in Table 5. Our
approach shows better performance consistency between the validation set and
test set. This indicates that our method has better generalization ability,
which is important in autonomous driving.
Table 5: Performance of the Car category on the KITTI validation set. Methods are ranked by the moderate setting (as on the KITTI leaderboard). We highlight the best results in bold. The four sets of Easy, Mod and Hard denote $3D_{IOU}$=0.7, $BEV_{IOU}$=0.7, $3D_{IOU}$=0.5 and $BEV_{IOU}$=0.5, respectively. Method | $Easy_{1}$ | $Mod_{1}$ | $Hard_{1}$ | $Easy_{2}$ | $Mod_{2}$ | $Hard_{2}$ | $Easy_{3}$ | $Mod_{3}$ | $Hard_{3}$ | $Easy_{4}$ | $Mod_{4}$ | $Hard_{4}$
---|---|---|---|---|---|---|---|---|---|---|---|---
CenterNet | 0.60 | 0.66 | 0.77 | 3.46 | 3.31 | 3.21 | 20.00 | 17.50 | 15.57 | 34.36 | 27.91 | 24.65
MonoGRNet | 11.90 | 7.56 | 5.76 | 19.72 | 12.81 | 10.15 | 47.59 | 32.28 | 25.50 | 48.53 | 35.94 | 28.59
MonoDIS | 11.06 | 7.60 | 6.37 | 18.45 | 12.58 | 10.66 | - | - | - | - | - | -
M3D-RPN | 14.53 | 11.07 | 8.65 | 20.85 | 15.62 | 11.88 | 48.53 | 35.94 | 28.59 | 53.35 | 39.60 | 31.76
MonoPair | 16.28 | 12.30 | 10.42 | 24.12 | 18.17 | 15.76 | 55.38 | 42.39 | 37.99 | 61.06 | 47.63 | 41.92
(Ma et al., 2021) | 17.45 | 13.66 | 11.68 | 24.97 | 19.33 | 17.01 | 55.41 | 43.42 | 37.81 | 60.73 | 46.87 | 41.89
AA3DNet | 18.06 | 14.27 | 11.51 | 25.68 | 19.83 | 16.64 | 57.24 | 44.90 | 37.15 | 62.18 | 47.55 | 41.24
Our results are considerably better than those of the previous state-of-the-art approaches.
### 5.1 Average Precision
The ideal value of both precision and recall is 1. Since perfect values are unattainable in practice, the closer these metrics are to 1, the better the model performs. There is usually a trade-off between precision and recall: optimizing for precision reduces recall, and improving recall reduces precision, so the task is to balance the two and choose the operating threshold accordingly. Average precision is the average of the precision values sampled at various recall thresholds. The precision-recall curves for 3D object detection for the three classes (Cars, Pedestrians and Cyclists) and the three difficulty categories (easy, moderate and hard) are shown in Figure 5. The closer a curve is to (1,1), the higher the performance of the model.
Figure 5: Precision-recall curve for 3D detection in a) Cars b) Pedestrian c)
Cyclists.
Finally, we present qualitative 3D object detection results on the KITTI validation set in Figure 6. The ground-truth bounding boxes are shown in blue and the predicted bounding boxes in orange.
Figure 6: Predicted 3D bounding boxes are drawn in orange, while ground truths
are in blue.
Note that our model is based only on LiDAR data. For better visualization, the 3D bounding boxes are projected onto the bird's-eye view and onto the images.
### 5.2 Ablation Study
The results obtained with different backbones, compared on the average-precision metric, are shown in Table 6:
Table 6: Ablation study of different backbone networks on $AP_{3D}$ (IoU=0.3). Backbone Network | Easy | Moderate | Hard
---|---|---|---
VGG16 | 53.68 | 41.45 | 34.08
InceptionV3 | 54.32 | 41.60 | 34.66
DenseNet169 | 54.26 | 40.04 | 35.06
ResNet50 | 56.16 | 42.61 | 35.36
The best results are achieved using ResNet50 as the backbone on our network.
A study of the effect of including the channel and spatial attention modules, measured on the average-precision metric, is shown in Table 7:
Table 7: Ablation study using variations of spatial and channel attention modules on $AP_{3D}$ (IoU=0.3). Attention Module | Easy | Moderate | Hard
---|---|---|---
No attention | 53.59 | 40.06 | 32.18
Only SA | 55.05 | 42.06 | 34.58
Only CA | 55.51 | 40.49 | 34.46
Both | 56.16 | 42.61 | 35.36
The best results are achieved using both spatial and channel attention modules
in our network.
A study of the individual loss terms used while training our network, measured on the average-precision metric, is shown in Table 8:
Table 8: Ablation study using individual loss function terms on $AP_{3D}$ (IoU=0.3). $L_{1}$ | $L_{2}$ | $L_{3}$ | Easy | Moderate | Hard
---|---|---|---|---|---
$\times$ | $\times$ | $\checkmark$ | 44.50 | 32.33 | 29.10
$\checkmark$ | $\checkmark$ | $\times$ | 52.72 | 40.59 | 33.71
$\checkmark$ | $\checkmark$ | $\checkmark$ | 56.16 | 42.61 | 35.36
The best results are achieved when all three loss terms, i.e. $L_{1}$, $L_{2}$ and $L_{3}$, are combined.
## 6 Conclusions
In this paper, we proposed a real-time 3D object detection network that applies spatial and channel attention mechanisms to LiDAR point-cloud data. For computational efficiency, our architecture is a single-stage network operating on a bird's-eye-view representation. We evaluated our network on the KITTI benchmark dataset and showed that our approach outperforms previous state-of-the-art methods, using class-wise average precision as the evaluation metric. The model runs at more than 30 FPS and can therefore be used in autonomous-driving applications, where safety is a major challenge. In the future, we would be interested in studying attention mechanisms in the context of 3D semantic segmentation.
#### Acknowledgments
We would like to thank Nvidia for providing the GPUs.
## References
* Chen et al. (2020a) Q. Chen, L. Sun, Z. Wang, K. Jia, and A. Yuille. Object as hotspots: An anchor-free 3d object detection approach via firing of hotspots. In _European Conference on Computer Vision_ , pages 68–84. Springer, 2020a.
* Chen et al. (2016) X. Chen, K. Kundu, Z. Zhang, H. Ma, S. Fidler, and R. Urtasun. Monocular 3d object detection for autonomous driving. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pages 2147–2156, 2016.
* Chen et al. (2017) X. Chen, H. Ma, J. Wan, B. Li, and T. Xia. Multi-view 3d object detection network for autonomous driving. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pages 1907–1915, 2017.
* Chen et al. (2020b) Y. Chen, S. Liu, X. Shen, and J. Jia. Dsgn: Deep stereo geometry network for 3d object detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 12536–12545, 2020b.
* Chen et al. (2020c) Y. Chen, L. Tai, K. Sun, and M. Li. Monopair: Monocular 3d object detection using pairwise spatial relationships. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 12093–12102, 2020c.
* Dai et al. (2016) J. Dai, Y. Li, K. He, and J. Sun. R-fcn: Object detection via region-based fully convolutional networks. In _Advances in neural information processing systems_ , pages 379–387, 2016.
* Ding et al. (2020) M. Ding, Y. Huo, H. Yi, Z. Wang, J. Shi, Z. Lu, and P. Luo. Learning depth-guided convolutions for monocular 3d object detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops_ , pages 1000–1001, 2020.
* Engelcke et al. (2017) M. Engelcke, D. Rao, D. Z. Wang, C. H. Tong, and I. Posner. Vote3deep: Fast object detection in 3d point clouds using efficient convolutional neural networks. In _2017 IEEE International Conference on Robotics and Automation (ICRA)_ , pages 1355–1361. IEEE, 2017.
* Ge et al. (2020) R. Ge, Z. Ding, Y. Hu, Y. Wang, S. Chen, L. Huang, and Y. Li. Afdet: Anchor free one stage 3d object detection. _arXiv preprint arXiv:2006.12671_ , 2020.
* Geiger et al. (2012) A. Geiger, P. Lenz, and R. Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In _2012 IEEE Conference on Computer Vision and Pattern Recognition_ , pages 3354–3361. IEEE, 2012.
* Girshick (2015) R. Girshick. Fast r-cnn. In _Proceedings of the IEEE international conference on computer vision_ , pages 1440–1448, 2015.
* Girshick et al. (2014) R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 580–587, 2014.
* He et al. (2020) C. He, H. Zeng, J. Huang, X.-S. Hua, and L. Zhang. Structure aware single-stage 3d object detection from point cloud. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 11873–11882, 2020.
* Huang et al. (2020) T. Huang, Z. Liu, X. Chen, and X. Bai. Epnet: Enhancing point features with image semantics for 3d object detection. In _European Conference on Computer Vision_ , pages 35–52. Springer, 2020.
* Ku et al. (2018) J. Ku, M. Mozifian, J. Lee, A. Harakeh, and S. L. Waslander. Joint 3d proposal generation and object detection from view aggregation. In _2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , pages 1–8. IEEE, 2018.
* Li et al. (2016) B. Li, T. Zhang, and T. Xia. Vehicle detection from 3d lidar using fully convolutional network. _arXiv preprint arXiv:1608.07916_ , 2016.
* Liang et al. (2019) M. Liang, B. Yang, Y. Chen, R. Hu, and R. Urtasun. Multi-task multi-sensor fusion for 3d object detection. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pages 7345–7353, 2019.
* Lin et al. (2017a) T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 2117–2125, 2017a.
* Lin et al. (2017b) T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár. Focal loss for dense object detection. In _Proceedings of the IEEE international conference on computer vision_ , pages 2980–2988, 2017b.
* Liu et al. (2016) W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. Ssd: Single shot multibox detector. In _European conference on computer vision_ , pages 21–37. Springer, 2016.
* Liu et al. (2020) Z. Liu, X. Zhao, T. Huang, R. Hu, Y. Zhou, and X. Bai. Tanet: Robust 3d object detection from point clouds with triple attention. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 34, pages 11677–11684, 2020.
* Ma et al. (2021) X. Ma, Y. Zhang, D. Xu, D. Zhou, S. Yi, H. Li, and W. Ouyang. Delving into localization errors for monocular 3d object detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 4721–4730, 2021.
* Peng et al. (2021) L. Peng, F. Liu, Z. Yu, S. Yan, D. Deng, and D. Cai. Lidar point cloud guided monocular 3d object detection. _arXiv preprint arXiv:2104.09035_ , 2021.
* Qi et al. (2017) C. R. Qi, H. Su, K. Mo, and L. J. Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 652–660, 2017.
* Qi et al. (2018) C. R. Qi, W. Liu, C. Wu, H. Su, and L. J. Guibas. Frustum pointnets for 3d object detection from rgb-d data. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 918–927, 2018.
* Qi et al. (2019) C. R. Qi, O. Litany, K. He, and L. J. Guibas. Deep hough voting for 3d object detection in point clouds. In _Proceedings of the IEEE International Conference on Computer Vision_ , pages 9277–9286, 2019.
* Qi et al. (2020) C. R. Qi, X. Chen, O. Litany, and L. J. Guibas. Imvotenet: Boosting 3d object detection in point clouds with image votes. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , pages 4404–4413, 2020.
* Qian et al. (2020) R. Qian, D. Garg, Y. Wang, Y. You, S. Belongie, B. Hariharan, M. Campbell, K. Q. Weinberger, and W.-L. Chao. End-to-end pseudo-lidar for image-based 3d object detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 5881–5890, 2020.
* Qin et al. (2021) Z. Qin, J. Wang, and Y. Lu. Monogrnet: A general framework for monocular 3d object detection. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 2021.
* Redmon et al. (2016) J. Redmon, S. Divvala, R. Girshick, and A. Farhadi. You only look once: Unified, real-time object detection. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 779–788, 2016.
* Ren et al. (2015) S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In _Advances in neural information processing systems_ , pages 91–99, 2015.
* Sagar (2021) A. Sagar. Dmsanet: Dual multi scale attention network. _arXiv preprint arXiv:2106.08382_ , 2021.
* Sagar and Soundrapandiyan (2020) A. Sagar and R. Soundrapandiyan. Semantic segmentation with multi scale spatial attention for self driving cars. _arXiv preprint arXiv:2007.12685_ , 2020.
* Shi et al. (2019) S. Shi, X. Wang, and H. Li. Pointrcnn: 3d object proposal generation and detection from point cloud. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pages 770–779, 2019.
* Shi et al. (2020a) S. Shi, C. Guo, L. Jiang, Z. Wang, J. Shi, X. Wang, and H. Li. Pv-rcnn: Point-voxel feature set abstraction for 3d object detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 10529–10538, 2020a.
* Shi et al. (2020b) S. Shi, Z. Wang, J. Shi, X. Wang, and H. Li. From points to parts: 3d object detection from point cloud with part-aware and part-aggregation network. _IEEE transactions on pattern analysis and machine intelligence_ , 2020b.
* Vora et al. (2020) S. Vora, A. H. Lang, B. Helou, and O. Beijbom. Pointpainting: Sequential fusion for 3d object detection. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , pages 4604–4612, 2020.
* Xie et al. (2020) Q. Xie, Y.-K. Lai, J. Wu, Z. Wang, Y. Zhang, K. Xu, and J. Wang. Mlcvnet: Multi-level context votenet for 3d object detection. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , pages 10447–10456, 2020.
* Yang et al. (2018) B. Yang, W. Luo, and R. Urtasun. Pixor: Real-time 3d object detection from point clouds. In _Proceedings of the IEEE conference on Computer Vision and Pattern Recognition_ , pages 7652–7660, 2018.
* Ye et al. (2020) M. Ye, S. Xu, and T. Cao. Hvnet: Hybrid voxel network for lidar based 3d object detection. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , pages 1631–1640, 2020.
* Yin et al. (2021) T. Yin, X. Zhou, and P. Krahenbuhl. Center-based 3d object detection and tracking. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 11784–11793, 2021.
* Zhang et al. (2021) Y. Zhang, J. Lu, and J. Zhou. Objects are different: Flexible monocular 3d object detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 3289–3298, 2021.
* Zhou and Tuzel (2018) Y. Zhou and O. Tuzel. Voxelnet: End-to-end learning for point cloud based 3d object detection. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pages 4490–4499, 2018.
* Zhou et al. (2020) Y. Zhou, P. Sun, Y. Zhang, D. Anguelov, J. Gao, T. Ouyang, J. Guo, J. Ngiam, and V. Vasudevan. End-to-end multi-view fusion for 3d object detection in lidar point clouds. In _Conference on Robot Learning_ , pages 923–932. PMLR, 2020.
# Non-minimal coupling inspires the Dirac cosmological model
H. [email protected], H.
[email protected], A. H. [email protected],
U. K. [email protected] 1 Research Institute for Astronomy and
Astrophysics of Maragha (RIAAM), University of Maragheh, P.O. Box 55136-553,
Maragheh, Iran
2 Physics Department, Faculty of Sciences, University of Sistan and
Baluchestan, Zahedan, Iran
3 Department of Mathematics, Institute of Applied Sciences and Humanities, GLA
University, Mathura-281406, Uttar Pradesh, India
###### Abstract
In the framework of the generalized Rastall theory (GRT), we study the ability of a non-minimal coupling between geometry and the matter fields to provide a setting that allows for a variable $G$ during the cosmic evolution. In this regard, the compatibility of this theory with the Dirac hypothesis on the variation of $G$ is investigated, and the possibility of obtaining the current accelerated universe is also addressed. In summary, our study indicates that, in GRT, given the $G$ profile one may find the corresponding non-minimal coupling between the energy source and geometry, and vice versa, in a way compatible with the current accelerated universe.
## I Introduction
The idea that $G$ (the Newtonian gravitational coupling) has probably taken different values during the cosmic evolution has many motivations. It began with Dirac's proposal dir1 ; dir2 ; dir3, which states that the ubiquitousness of certain large dimensionless numbers (LDNs), arising in combinations of physical constants and cosmological quantities WZE, was not a coincidence but the outcome of an underlying relationship between them BTB. In his proposal, Dirac pointed out that the electrical force between the proton and electron within a hydrogen atom, $F_{e}=e^{2}/4\pi\epsilon_{0}r^{2}$, is about 40 orders of magnitude greater than their gravitational force $F_{G}=Gm_{p}m_{e}/r^{2}$, i.e.,
$\displaystyle{\rm
LN}_{1}=\frac{F_{e}}{F_{G}}=\frac{e^{2}}{4\pi\epsilon_{0}Gm_{p}m_{e}}\approx
10^{40},$ (1)
where $m_{e}$, $e$, $m_{p}$, $\epsilon_{0}$ and $G$ are the electron mass and charge, the proton mass, the vacuum permittivity and the gravitational constant, respectively. On the other hand, the ratio of the age of the universe to the time for light to traverse the classical electron radius is also nearly of the same size, i.e.,
$\displaystyle{\rm LN}_{2}=\frac{t}{e^{2}/4\pi\epsilon_{0}m_{e}c^{3}}\approx
10^{40}.$ (2)
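As a quick numerical sanity check (not part of the original argument), both large numbers can be evaluated with CODATA constants; the age of the universe used for $t$ is an assumed input.

```python
import math

# CODATA values (SI units); t is the assumed current age of the universe.
e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
G = 6.67430e-11            # gravitational constant, m^3 kg^-1 s^-2
m_p = 1.67262192369e-27    # proton mass, kg
m_e = 9.1093837015e-31     # electron mass, kg
c = 2.99792458e8           # speed of light, m/s
t = 13.8e9 * 3.156e7       # age of the universe, s

LN1 = e**2 / (4 * math.pi * eps0 * G * m_p * m_e)      # Eq. (1)
LN2 = t / (e**2 / (4 * math.pi * eps0 * m_e * c**3))   # Eq. (2)
# Both ratios land near 10^39-10^40, the coincidence behind Dirac's proposal.
```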
Dirac then suggested that these two quantities are in fact equal. As a result of such a relationship, some of the fundamental constants cannot remain constant forever, since ${\rm LN}_{2}$ grows with the age of the universe. According to Dirac's hypothesis, the atomic parameters do not change with time, and thus $G$ should change inversely with time, i.e., $G\propto t^{-1}$ CHK ; see also DIRACREV for recent reviews. Since its advent, this idea has led to interesting implications within theoretical physics and has attracted a great deal of attention during the past decades ras2 ; vin ; sab ; bap ; bee ; wu ; deg ; bar1 ; bar2 ; bar3 ; mans ; gaz ; clif ; bro ; sol ; uza1 ; uza2 ; smo ; fri ; les. Moreover, it can even help to justify baryogenesis les and the current and primordial accelerated phases of the universe ell, and can support the de Sitter spacetime uza1 ; uza2.
In Newtonian gravity one is allowed to write an explicit time variation of $G$ without the need to satisfy any further constraint. The situation is different in GR, where additional constraints must be met. Consider the Einstein field equation $G^{\mu}_{\,\nu}=8\pi GT^{\mu}_{\,\,\nu}$ with the assumption $G=G(t)$ and $c\equiv 1$. If one takes the covariant divergence of this equation, the left-hand side vanishes as a result of the Bianchi identity. Then, if the ordinary energy-momentum conservation law (OCL) is assumed to hold, i.e., $T^{\mu}_{\,\,\nu;\mu}=0$, one finds that $G$ must be constant with respect to the spacetime coordinates, i.e., $\partial G/\partial x^{\mu}=0$ everywhere. In this respect, GR does not allow for any variation of the gravitational coupling $G$, owing to the fact that the Einstein tensor is divergence-free and the divergence of the energy-momentum tensor is also zero. Hence, in the light of Dirac's proposal, some modification of the GR field equations is essential, because if we simply let $G$ be a variable then the OCL is violated CanutoAdams. In this respect, the effects of a varying $G$ can be investigated only through modified field equations together with modified conservation laws. From these arguments, one may intuitively imagine that a varying $G$ could contribute as a new degree of freedom within the OCL. Since in GR $G$ quantifies the mutual relation between geometry and the matter fields, the variation of $G$ together with the violation of the OCL may be considered a signal that another relation between geometry and matter fields exists, one which connects their changes to each other. There are, however, modifications of GR with a varying $G$ that respect the OCL, such as the Brans-Dicke theory, in which the dynamical scalar field $\phi$ can be considered the origin of the gravitational coupling, so that $G$ varies as $G\propto\frac{1}{\phi}$ 11 ; 12 ; 13 ; 14 ; bar2.
The OCL, one of the cornerstones of GR pois, is not respected in all modified gravity theories; for example, it is broken in theories with a non-minimal curvature-matter coupling od1 ; all ; koi ; bert ; hark ; car ; boh ; ras1 ; mor1 ; mora. Rastall gravity is a pioneering theory in this area ras1, consistent with various observations li ; raw1 ; raw2 ; raw3 ; maj ; arb ; rah1 ; rah2 ; rah3 ; mor2 ; man ; ortiz2020 ; shabooni2020, and its corresponding cosmology avoids the age and entropy problems that arise in the framework of standard cosmology fab. In fact, this theory can even provide a better platform for describing the matter-dominated era than the Einstein theory raw2. A generalized form of this theory allows us to relate the current and primordial accelerated phases of the universe to the ability of spacetime to couple with the energy-momentum sources filling the background, and in fact introduces this coupling as a candidate for dark energy and the inflaton field mor1.
In addition to inflationary models powered by varying-$G$ theories ell, there are other models that describe inflation without invoking an inflaton field jos ; mor1 ; wat ; gam. In Ref. mor1 it has been shown that, while the existence of an interaction between geometry and the matter fields may model the primordial and current inflationary eras, it does not necessarily lead to the breakdown of the OCL. In fact, if geometry has the ability to couple non-minimally with the matter fields, then this ability may support the primordial inflationary era and the current accelerated phase mor1. To obtain these results, the authors focus on the special case $T^{\mu\nu}_{\ \ ;\mu}=0$ and find the form of the non-minimal coupling in each cosmic era. The study of various non-minimal couplings can at least familiarize us with their consequences and properties, which may finally lead to a better understanding of spacetime and thus to better predictions about its behavior and nature. In GRT, cosmological scenarios jos ; mor1 ; das ; lin demonstrate the power of the non-minimal coupling in $i$) providing both singular and non-singular universes, $ii$) describing the particle-production process, $iii$) avoiding the coincidence problem, and $iv$) playing the role of dark energy (unlike the original Rastall theory mor1 ; batis). In this regard, it has also been shown thermodynamically that the ambiguity in defining energy, and some of its outcomes which may lead to a generalization of the OCL (or, equivalently, its breakdown), could make the cosmos dark mor3 ; mor4. Since in Rastall gravity the gravitational coupling is a constant, albeit one different from those of GR and Newtonian gravity (NG) mor5 ; ras1, Rastall theory (and indeed a mutual non-minimal coupling between geometry and matter fields in the Rastall manner) cannot provide a theoretical basis for the probable variation of $G$ during the cosmic evolution. These points will be revisited in more detail in the next section.
Motivated by the above arguments, it is reasonable to $i$) examine the ability of a non-minimal coupling between geometry and matter fields to produce a non-constant $G$, and $ii$) study the consequences of a non-constant $G$ in the framework of Rastall theory. The latter question has been addressed by some authors in Ref. ref, by combining the Rastall and Brans-Dicke theories with each other. In that study, however, the changes in $G$ are not originated by the Rastall theory itself, meaning that the first question is still unsolved and debatable. We therefore focus on GRT to describe the compatibility of a non-minimal coupling with Dirac's idea on the evolution of $G$. Indeed, we aim to show that, at least phenomenologically, a non-minimal coupling may itself change $G$ and play the role of dark energy.
The present work is then arranged as follows. In Sects. II and III, a brief
review of the Rastall theory and its generalization mor1 is provided, and some
of their predictions about the variations of $G$ are addressed. Sect.
IV includes our survey on the possibility of explaining a well-known Dirac
cosmological model, previously introduced by other authors, within the
framework of GRT. To show that a non-minimal coupling can simultaneously
satisfy the Dirac hypothesis and describe the cosmic evolution, a new model is
also introduced in Sect. V. Sect. VI is devoted to concluding remarks. Here,
we use units in which $c=\hbar=1$.
## II Rastall theory and a model for varying $G$
Originally, P. Rastall argued that the OCL may not be valid in a curved
spacetime, leading to ras1
$\displaystyle T^{\mu\nu}_{\ \ ;\mu}\neq 0,$ (3)
in non-flat spacetimes. From the mathematical point of view,
$T^{\mu\nu}_{\ \ ;\mu}$ is a rank-one tensor (vector) field, written as
$T^{\mu\nu}_{\ \ ;\mu}=Q^{,\nu}$, where $Q$ is an unknown scalar function to
be determined from other parts of physics, mathematics, and observations ras1 .
Since $Q$ is a scalar and the Rastall hypothesis admits the violation of OCL
in a curved spacetime (where the Ricci scalar is not always zero), the Ricci
scalar $R$ can be considered a suitable candidate for $Q$, and thus ras1
$\displaystyle T^{\mu\nu}_{\ \ ;\mu}=\lambda^{\prime}R^{;\nu},$ (4)
where $\lambda^{\prime}$ is called the Rastall constant parameter. Using the
Bianchi identity, it is easy to get
$\displaystyle
G_{\mu\nu}+\kappa^{\prime}\lambda^{\prime}g_{\mu\nu}R=\kappa^{\prime}T_{\mu\nu},$
(5)
where $\kappa^{\prime}$ is a constant ras1 called the Rastall gravitational
coupling constant. Applying the Newtonian limit to this result, we obtain ras1
$\displaystyle\frac{\kappa^{\prime}}{4\kappa^{\prime}\lambda^{\prime}-1}\left(3\kappa^{\prime}\lambda^{\prime}-\frac{1}{2}\right)=\kappa_{G},$
(6)
where $\kappa_{G}\equiv 4\pi G$. Hence, since $\kappa^{\prime}$ and
$\lambda^{\prime}$ are constants, $G$ should also be a constant (the current
value of $G$, namely $G_{0}$, is a proper option, leading to
$\kappa_{G}\equiv\kappa_{G_{0}}=4\pi G_{0}$). We therefore conclude that,
since the left-hand side of Eq. (6) is a constant, a mutual non-minimal
interaction between the geometry and matter fields within the framework of the
original version of Rastall gravity does not support varying-$G$ theories.
Eq. (6) also reveals that the Rastall gravitational coupling constant
($\kappa^{\prime}$) differs from that of GR ($2\kappa_{G}=8\pi G$); the two
are equal only if $\lambda^{\prime}=0$.
It is also useful to note that one may use Eq. (5) in order to introduce the
generalized energy-momentum tensor
$\Theta_{\mu\nu}=T_{\mu\nu}-(\kappa^{\prime}\lambda^{\prime})/(4\kappa^{\prime}\lambda^{\prime}-1)Tg_{\mu\nu}$
which finally leads to the GR counterpart form of the Rastall field equations,
given as $G_{\mu\nu}=\kappa^{\prime}\Theta_{\mu\nu}$. In this manner, although
the obtained field equations are similar to those of GR, their solutions for
$T_{\mu\nu}$ differ in general from those of GR mor4 ; dar , a result
confirmed by various observational data (see, e.g., li ; mor2 ; dar and
references therein).
One can also generalize the Rastall theory by considering
$\lambda^{\prime}\rightarrow\lambda$, where $\lambda$ is a varying parameter.
Therefore Eq. (4) is extended as follows mor1
$\displaystyle T^{\mu\nu}_{\ \ \ ;\mu}=\left(\lambda R\right)^{;\nu},$ (7)
which finally leads to
$\displaystyle G_{\mu\nu}+\kappa\lambda g_{\mu\nu}R=\kappa T_{\mu\nu},$ (8)
where $\kappa$ is again a constant but $\lambda$ can change over time. Using
the trace of Eq. (8), one can also rewrite this equation as
$G_{\mu\nu}+\tau Tg_{\mu\nu}=\kappa T_{\mu\nu},$ (9)
in which
$\tau=\frac{\kappa^{2}\lambda}{4\kappa\lambda-1}.$ (10)
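The passage from Eq. (8) to Eqs. (9) and (10) follows from eliminating $R$ via the trace of Eq. (8); a brief sketch of the step:

```latex
g^{\mu\nu}\left(G_{\mu\nu}+\kappa\lambda g_{\mu\nu}R\right)=\kappa g^{\mu\nu}T_{\mu\nu}
\;\Longrightarrow\; -R+4\kappa\lambda R=\kappa T
\;\Longrightarrow\; R=\frac{\kappa T}{4\kappa\lambda-1},
```

so that substituting $R$ back into Eq. (8) yields Eq. (9) with the coefficient $\tau=\kappa^{2}\lambda/(4\kappa\lambda-1)$ of Eq. (10).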
Now, since $\kappa$ is constant, the covariant derivative of Eq. (9) leads to
$\tau^{,\nu}T+\tau T^{,\nu}=\kappa T^{\mu\nu}_{\ \ ;\mu},$ (11)
meaning that, as long as $\tau$ is not constant (or equivalently, $\lambda$ is
not constant), the non-minimal coupling affects the evolution of the
energy-momentum source, and vice versa, even if the OCL is respected mor1 .
Therefore, unlike in the original Rastall theory, the OCL can be met in this
framework even in the presence of a non-minimal coupling. In this regard, it
has been shown that, in the framework of Eq. (8), even if the OCL is met, the
accelerated universe can be explained under the shadow of $\lambda$ without
resorting to a dark energy source mor1 .
Now, considering the Newtonian limit (ignoring the pressure of $T_{\mu\nu}$
and utilizing the relation $R_{00}=\nabla^{2}\phi$, in which $\phi$ denotes
the Newtonian potential mor6 ), one can easily find
$\displaystyle\frac{\kappa}{4\kappa\lambda-1}\left(3\kappa\lambda-\frac{1}{2}\right)=\kappa_{G}.$
(12)
Due to the similarity of Eqs. (8) and (5), one could expect that the Newtonian
limit of field equations (8) is obtainable by replacing $\kappa^{\prime}$ and
$\lambda^{\prime}$ with $\kappa$ and $\lambda$, respectively, in Eq. (6). Eq.
(12) also indicates that $G$ (or equally $\kappa_{G}$) does not necessarily
remain constant in this theory. Therefore, this generalization of Rastall
theory provides a basis for theories including a varying $G$ dir1 ; dir2 ; vin
; sab ; bap ; bee ; wu ; mans ; gaz ; bar1 ; bar2 ; bar3 ; clif ; uza1 ; uza2
; smo ; fri ; les . In fact, this equation tells us that a non-minimal coupling
between the geometry and matter fields can make $G$ variable mot , meaning
that such a coupling can be considered a theoretical basis for varying-$G$
theories.
## III Newtonian limit, a model for running $G$, and the value of $\kappa$
Now, using Eq. (12), and following Ref. mor1 , in which
$\kappa\lambda\equiv\beta=[4+\theta(1+z)^{3}]^{-1}$, where $\theta$ is an
unknown constant and $z$ denotes the redshift, one can obtain
$\displaystyle\kappa_{G}=\frac{\kappa}{2}\left[1-\frac{2}{\theta(1+z)^{3}}\right],$
(13)
finally leading to
$\displaystyle\kappa_{G}=\frac{\kappa}{2}\left[1-\frac{2}{\theta}\right]\equiv\kappa_{G_{0}},$
(14)
and
$\displaystyle\kappa_{G}=\frac{\kappa}{2},$ (15)
for $z\rightarrow 0$ and $z\rightarrow\infty$, respectively. Based on Ref.
mor1 , whenever $0<\theta\leq 1/2$ (leading to $\beta>0$), the current
accelerated universe is explainable in the presence of OCL, and without
considering a dark energy-like source. Moreover, the expression
$\beta=[4+\theta(1+z)^{3}]^{-1}$ holds in both the matter-dominated era (MDE)
and the current accelerated universe mor1 . Hence, Eq. (15) can be considered
as the value of $G$ at the beginning of the MDE, whereas the value of
$\kappa$ is obtainable from Eq. (14),
$\displaystyle\kappa=\frac{8\pi G_{0}}{1-\frac{2}{\theta}},$ (16)
which, combined with Eq. (15), shows that $\kappa$, and thus $\kappa_{G}$, are
negative at the beginning of the MDE. Therefore, in the model proposed in Ref.
mor1 , which still respects the OCL in the framework of Eq. (8), $G$ is not
always positive during the cosmic evolution. Negative values of $\kappa$
provide a setting for baryonic matter to support traversable wormholes in the
Rastall framework mor5 . Moreover, in the framework of GRT, it has been shown
that negative values of $\kappa$ could have their own effects on matter
perturbations and the formation of structures in the large-scale universe
AHH2020 . In
this regard, overdense and underdense regions in the universe could form
periodically so that both large scale structures and voids could form as the
universe evolves from MDE to present time. Also, emergence of structures in a
class of alternative theories of gravity has been reported in Lohiya1996 ,
where the authors considered a non-minimally coupled scalar field in addition
to an induced negative gravitational constant and studied structure formation
with repulsive gravitation on the large scale. In the framework of general
scalar-tensor theories, a cosmological mechanism has been proposed in which it
is possible for $G$ to change sign from a positive branch (attracting) to a
negative branch (repulsive gravity) and vice versa Nunez2019 . It is also
worth mentioning that negative values of $G$ have previously been reported in
some other approaches studying the variations of $G$ bar2 ; uza1 ; uza2 .
Besides the effects of repulsive gravity (represented by a universal negative
coupling) on the evolution of perturbations and the formation of structures,
the study of the possible consequences of $\kappa<0$ for the stability of the
model is of particular importance. In this regard, from the viewpoint of
perturbative analysis, the existence of a repulsive-gravity phase in the
evolution of the universe could lead to growing modes with respect to scalar
perturbations, thereby producing large inhomogeneities. Hence, a repulsive
phase may destroy homogeneity, and in this sense it may be unstable
Batista2001 . In Star1981 ,
it has been discussed that a transition from a positive gravitational coupling
$G$ to a negative one results in an instability, in such a way that small
deviations from isotropy and homogeneity within the gravitational field will
grow unboundedly, leading to a true cosmological singularity at the boundary
between gravity and antigravity. Also, investigating the classical stability of
the model through dynamical system approach is of long-standing interest and
significance. Work along this line has been carried out for a class of GRT
models Lin2020 , where the authors have shown that the eventual fate of the
universe ends in late-time attractors which are classically stable. However,
investigating these issues for the present model needs a deeper and more
careful analysis, and future studies will be reported elsewhere. Finally, we
note that, since $\dot{G}$ does not decrease with time for $0<\theta\leq 1/2$
($\dot{G}>0$ in this case), this model does not respect Dirac's hypothesis
claiming that $G$ should decrease as a function of time dir1 ; dir2 ; vin ;
bap . Hence, more comprehensive non-minimal couplings are needed to
accommodate the Dirac hypothesis and, simultaneously, to model the cosmic
evolution without invoking a mysterious fluid (dark energy).
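As an illustration of the model of Eqs. (13)-(16), the redshift dependence of $G$ can be evaluated numerically. The small helper below (function name hypothetical) computes $G(z)/G_{0}=[1-2/(\theta(1+z)^{3})]/[1-2/\theta]$, showing that $G$ decreases with increasing $z$ (i.e., $\dot{G}>0$) and becomes negative at early times for $0<\theta\leq 1/2$:

```python
def g_over_g0(z, theta):
    """G(z)/G0 from Eqs. (13)-(14):
    kappa_G(z) = (kappa/2)[1 - 2/(theta (1+z)^3)], normalized by its z = 0 value."""
    return (1.0 - 2.0 / (theta * (1.0 + z) ** 3)) / (1.0 - 2.0 / theta)

# For theta = 1/2: G/G0 = 1 today, negative in the high-z (early MDE) limit
print(g_over_g0(0.0, 0.5))     # 1.0
print(g_over_g0(1000.0, 0.5))  # ~ -1/3
```

The high-redshift limit reproduces the sign flip discussed above: $G/G_{0}\rightarrow 1/(1-2/\theta)<0$ for $0<\theta\leq 1/2$.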
### III.1 Another possibility
In Ref. das , choosing $\lambda=(1+d_{0}H)/[3\kappa(w+1)]$, in which $w\equiv
p/\rho$ (where $p$ and $\rho$ denote the pressure and energy density of the
cosmic fluid, respectively), it has been shown that non-singular cosmic
evolution is obtainable in GRT. In this case $d_{0}$ is a free parameter, and
some outcomes of this proposal in various cosmic eras have also been studied
in Ref. das . Accepting this proposal, adopting the choice $\kappa=8\pi G_{0}$,
and also assuming $G(H_{0})=G_{0}$ (which helps us in finding $d_{0}$), one
easily reaches
$\displaystyle G(H)=G_{0}\frac{3(1-w)H_{0}-6H}{(1-3w)H_{0}-4H},$ (17)
where $H_{0}$ is the current value of $H$ and use has been made of Eq. (12).
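As a quick consistency check (function name hypothetical), Eq. (17) indeed returns $G(H_{0})=G_{0}$ for any $w\neq-1$, since both the numerator and the denominator reduce to $-3(1+w)H_{0}$ at $H=H_{0}$:

```python
def g_of_h(h, h0, w, g0=1.0):
    """G(H) of Eq. (17): G = G0 [3(1-w)H0 - 6H] / [(1-3w)H0 - 4H]."""
    return g0 * (3.0 * (1.0 - w) * h0 - 6.0 * h) / ((1.0 - 3.0 * w) * h0 - 4.0 * h)

# Normalization G(H0) = G0 holds independently of w (except w = -1)
for w in (0.0, 1.0 / 3.0, 0.5):
    assert abs(g_of_h(67.66, 67.66, w) - 1.0) < 1e-12
```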
## IV Dirac cosmological model
As in the present model there is no evolution equation for the variation of
$G$, which is promoted to a dynamical field, one then has to impose a suitable
ansatz on the behavior of this parameter. Based on the Dirac hypothesis, $G$
should decrease with time, i.e., $G\propto t^{-1}$ CHK . In general, in order
to preserve the Dirac hypothesis, one may consider $G=G_{0}f$, in which $f$ is
a decreasing function of time dir1 ; dir2 ; vin ; bap ; clif . Now, combining
Eq. (12) with $\kappa=8\pi G_{0}\alpha$ raw2 , along with Eqs. (7) and (8) for
a flat FLRW universe, one finds
$\displaystyle\gamma\equiv\lambda\kappa=\frac{f-\alpha}{4f-6\alpha},$ (18)
$\displaystyle
3\int(\rho+p)\frac{da}{a}=\frac{1}{2\alpha}\Big{[}(f-3\alpha)\rho-3(f-\alpha)p\Big{]},$
$\displaystyle H^{2}=\frac{1}{6}\Big{[}(3\alpha-f)\rho+3(f-\alpha)p\Big{]},$
$\displaystyle
q=-1-\frac{\dot{H}}{H^{2}}=-1+\frac{3\alpha(\rho+p)}{\rho(3\alpha-f)+3(f-\alpha)p},$
whenever a fluid with energy density $\rho$ and pressure $p$ fills the
background. We note that $\gamma$ is a varying parameter, $q$ and $a$ denote
the deceleration parameter and the scale factor, respectively, and we have
also assumed $8\pi G_{0}=1$.
Figure 1: The evolution of $q$ and the state parameter $w$ versus $z$ for
$H(z=0)=67$ dom . Upper panels correspond to case ($i$) and lower panels to
case ($ii$) discussed in Sect. IV. The model parameters used to draw the
curves of $w$ are the same as those of the $q$ diagrams.
The case with $f=a^{-n}$ leads to a decreasing function of time whenever $n>0$
gaz ; smo . In this manner, assuming $w\equiv p/\rho=0$, together with using
Eqs. (18), one easily finds $q=(3\alpha-1)^{-1}$, and
$\rho=\rho_{0}a^{n}(1-3\alpha a)^{-(n+2)/n}$, where $\rho_{0}$ is the
integration constant. These results indicate that, in the limit
$a\rightarrow 1$, the obtained pressureless fluid can accelerate the universe
expansion with $q\leq-1/2$ for $-1/3\leq\alpha<1/3$. Consequently, the
non-minimal coupling
$\gamma=[(1+z)^{n}-\alpha]/[4(1+z)^{n}-6\alpha]$ allows $G$ to vary as
$G=G_{0}(1+z)^{n}$ gaz , where we used the relation $1+z=1/a$. It is also easy
to see that the universe described by this model has begun from a primary
inflationary phase ($q=-1$) corresponding to the $a\rightarrow 0$ point. In
fact, in this limit, we also have $\gamma=1/4$, a value that supports an
inflationary phase for even an empty universe mor1 .
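A small numeric sketch of the $f=a^{-n}$ case (function name hypothetical): the coupling $\gamma=[(1+z)^{n}-\alpha]/[4(1+z)^{n}-6\alpha]$ tends to the inflationary value $1/4$ as $a\rightarrow 0$ (i.e., $z\rightarrow\infty$):

```python
def gamma_coupling(z, n, alpha):
    """Non-minimal coupling of Sect. IV: gamma = (f - alpha)/(4f - 6 alpha), f = (1+z)^n."""
    f = (1.0 + z) ** n
    return (f - alpha) / (4.0 * f - 6.0 * alpha)

# gamma -> 1/4 in the early-universe limit a -> 0 (z -> infinity), for n > 0
print(gamma_coupling(1.0e6, 2, 0.3))  # ~ 0.25
```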
Now, let us consider two more comprehensive cases, i.e., $i$)
$p=k\rho^{1+1/m}$, where $m$ and $k$ are unknown constants to be evaluated
later, and $ii$) $p=\sigma\rho/(a-b\rho)-c\rho^{2}$, in which $\sigma$, $a$,
$b$ and $c$ are unknown coefficients. In this manner, as is obvious from
Fig. 1, a proper behavior is obtainable for the cosmos. Here, $w\equiv p/\rho$
denotes the equation of state of the cosmic fluids. Depending on the values of
the unknown parameters, the universe can also experience a transition at
$z_{t}$, which can even take values smaller than $1$. Clearly, both fluids
behave as dark energy sources, and hence the corresponding non-minimal
coupling cannot itself be considered a dark energy source.
## V A new proposal for $\lambda$ parameter
Now, let us consider a flat FRW universe filled with a pressureless fluid with
energy density $\rho$, when $\lambda R=\zeta H^{n}$, in which $\zeta$ and $n$
are unknown constants. In this manner, the $\lambda$ parameter takes the form
$\displaystyle\lambda=\zeta\frac{H^{n}}{R}=\frac{\zeta}{6}\frac{H^{n}}{\dot{H}+2H^{2}},$
(19)
whence, the corresponding Friedmann equations read
$\displaystyle H^{2}-\frac{\kappa\zeta}{3}H^{n}=\frac{\kappa}{3}\rho,$
$\displaystyle H^{2}+\frac{2}{3}\dot{H}-\frac{\kappa\zeta}{3}H^{n}=0.$ (20)
Defining $\Omega=8\pi G\rho/3H^{2}$, with $\Omega_{0}$ denoting its current
value macq , the evolution of $q$ and $G/G_{0}$ is plotted in Fig. 2. For the
employed parameters, the transition redshift ($z_{t}$) lies within the range
$0.4\leq z_{t}\leq 0.88$. The sensitivity of the diagrams to the values of
$\Omega_{0}$ and $H_{0}$ is weak compared with that to $\zeta$, $n$, and
$\kappa$. Indeed, although we only consider a baryonic source for the current
density parameter, $\Omega_{0}=0.049$ macq , and $H_{0}=67.66$ agh , the
obtained behaviors are also achievable for other candidates for $\Omega$
(such as dark matter) and also for the other values of $H_{0}$ reported in the
literature. Hence, a suitable behavior of $q$ is obtainable by considering
only the baryonic content of the universe, meaning that the $\zeta H^{n}$ term
may play the role of the unknown parts (dark components) of the cosmos. The
Dirac hypothesis is also respected during the cosmic evolution. Remarkably,
$G$ will take negative values in the future, meaning that gravity will become
repulsive, which further speeds up the universe expansion rate, i.e., $q$
decreases. All of this happens under the shadow of the non-minimal coupling
$\lambda$, which varies during the evolution of the universe. In Fig. 3,
$H(z)$ far and the distance modulus ama are plotted for both the $\Lambda$CDM
model and our model.
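From the second of Eqs. (20), $\dot{H}=(\kappa\zeta/2)H^{n}-(3/2)H^{2}$, so the deceleration parameter has the closed form $q=1/2-(\kappa\zeta/2)H^{n-2}$; a minimal sketch (function name hypothetical):

```python
def deceleration(h, kappa, zeta, n):
    """q = -1 - Hdot/H^2, with Hdot = (kappa zeta / 2) H^n - (3/2) H^2
    from the second of Eqs. (20); this reduces to q = 1/2 - (kappa zeta / 2) H^(n-2)."""
    return 0.5 - 0.5 * kappa * zeta * h ** (n - 2)

# zeta = 0 recovers the standard matter-dominated value q = 1/2
print(deceleration(67.66, 1.0, 0.0, 1))  # 0.5
```

This makes explicit that it is the $\zeta H^{n}$ term alone that drives $q$ below its matter-dominated value.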
Figure 2: The evolution of $q$ and $G/G_{0}$ assuming $w=0$, for the case
discussed in Sec. V. The diagrams for $G/G_{0}$ are plotted using the same
model parameters as those of the $q$ diagrams.
Figure 3: The evolution of $H(z)$ and $\mu(z)$ whenever $w=0$, for the case
discussed in Sec. V. The same values of parameters as in Fig. 2 have been
used. The black dashed lines show $H(z)$ and $\mu(z)$ for the $\Lambda$CDM
model.
The negative value of $G$ is the direct result of the assumed $\lambda$, and
changes in the values of model parameters do not affect this result. There are
also other works that predict negative values for $G$ bar2 ; uza1 ; uza2 .
Theoretically, our model shows that a non-minimal coupling between geometry
and matter fields can accelerate the universe expansion and has the ability to
satisfy the Dirac hypothesis.
## VI Concluding remarks
After addressing some properties of cosmological models previously introduced
in the framework of GRT mor1 ; das , the implications of GRT for obtaining a
varying $G$ have been studied by considering the Newtonian limit of the field
equations. Thereafter, following a realization of the Dirac hypothesis
introduced in gaz ; smo , the non-minimal coupling required to support the
Dirac model was also obtained. Our results show that the dark sectors of the
cosmos can be unified into one cosmic fluid, which behaves as a pressureless
fluid in the high-redshift limit and also accelerates the universe in line
with the current observations (Fig. 1). We also proposed a non-minimal
coupling (Sec. V) which can play the role of the dark side of the cosmos while
satisfying the Dirac hypothesis. Indeed, the present study addresses a deep
connection between non-minimal coupling (between the matter fields and
geometry) and the idea of a variable $G$. This translates into saying that one
may find the footprints of a non-minimal coupling between the matter fields
and geometry from the observationally confirmed profile of $G$, and conversely.
Although, relying on the Rastall hypothesis on the relation between changes in
the spacetime curvature and the violation of OCL, we only focused on the
implications of the violation of OCL in cosmology and its connection with the
Dirac hypothesis, the OCL violation can also be allowed due to quantum
considerations, such as the uncertainty principle, and in the framework of
unimodular gravity, producing significant cosmological outcomes jos . Indeed,
even in the framework of GR, and thanks to the Bianchi identity, OCL is
violated as the result of the existence of a non-constant $G$. In summary, it
was our goal to address $i$) a probable connection between the Dirac
hypothesis and non-minimal couplings, and simultaneously, $ii$) the ability of
such couplings to be responsible for the unknown parts (dark sides) of the
cosmos. Therefore, such couplings need to be further studied from both the
theoretical and observational viewpoints.
Finally, we would like to mention that, though Rastall gravity and its
generalizations provide interesting results, cosmological models based on this
theory need to be accurately tested by observations. In the present model, we
tried to explore the theoretical consequences of a varying-$G$ cosmology based
on GRT and also briefly examined observational aspects of the theory. However,
a full observational treatment of the present model, e.g., in light of
Akarsu2020 , remains to be done, and work along this line can be considered an
interesting subject for future studies and developments.
## References
* (1) Dirac P. A. M., Nature 139 (1937), 323
* (2) Dirac P. A. M., Proc. Roy. Soc. London, Ser. A 165 (1938), 199.
* (3) Dirac P. A. M., Nature, 139, (1937), 1001.
* (4) Weyl H. Ann. Phys., 59, 129 (1919);
Zwicky F., Phys. Rev., 55, 726 (1939);
Eddington A. S., The Mathematical Theory of Relativity, Cambridge University
Press, London (1923).
* (5) Barrow J. D., Tipler F. J., The Anthropic Cosmological Principle, Oxford University Press, Oxford, (1986);
J. D. Barrow, Varying G and Other Constants, In: S$\acute{a}$nchez N.,
Zichichi A. (eds) Current Topics in Astrofundamental Physics: Primordial
Cosmology. NATO ASI Series (Series C: Mathematical and Physical Sciences), vol
511. Springer, Dordrecht (1998).
* (6) Chandrasekhar S., Nature 139 (1937), 757;
Kothari D. S., Nature, 142 (1938), 354.
* (7) S. Ray, U. Mukhopadhyay, S. Ray, A. Bhattacharjee, Int. Journal Mod. Phys. D 28 (2019), 1930014.
* (8) Rastall P., Can. J. Phys. 54 (1976), 66
* (9) Vinti J. P., Celestial Mechanics 16 (1977), 391
* (10) De Sabbata V., Acta Cosmologica Zesz. 9 (1980), 63
* (11) Baptista J. P., Batista A. B., Fabris J. C., Revista Brasileira de Fisica. 14 (1984), 208
* (12) Beesham A., Int. J. Theo. Phys. 25 (1986), 1295
* (13) Wu Y. S., Wang Z., Phys. Rev. Lett. 57 (1986), 16
* (14) Degl’Innocenti S. et al., A&A 312 (1996), 345
* (15) Barrow J. D., Mon. Not. R. Astron. Soc. 282 (1996), 1397
* (16) Barrow J. D., 1997, arXiv:gr-qc/9711084.
* (17) Barrow J. D., The Constants of Nature, (Vintage Books, London, 2002)
* (18) Mansouri R., Nasseri F., Khorrami M., Phys. Lett. A 259 (1999), 194
* (19) Gaztañaga E. et al., Phys. Rev. D 65 (2001), 023506
* (20) Clifton T., Mota D., Barrow J. D, Mon. Not. R. Astron. Soc. 358 (2005), 601
* (21) Bronnikov K. A., Kononogov S. A., Metrologia 43 (2006), 1
* (22) Solà J., J. Phys. A: Math. Theor. 41 (2008), 164066
* (23) Uzan J. P., Rev. Mod. Phys. 75 (2003), 403
* (24) Uzan J. P., Liv. Rev. Relativ. 14 (2011), 2
* (25) Smolin L., Class. Quantum Grav. 33 (2016), 025011
* (26) Fritzsch H., Solà J., Nunes R. C., Eur. Phys. J. C 77 (2017), 193
* (27) Leszczyńska K., Da̧browski M. P., Denkiewicz T., Eur. Phys. J. C 79 (2019), 222
* (28) Ellis G. F. R., Maartens R., Maccallum M. A. H., The Constants of Nature, (Cambridge University Press, UK, 2012).
* (29) Canuto, V., Adams, P. J., Hsieh, S. H., Tsiang, E., Phy. Rev. D 16, 6 (1977);
Wesson, P., Goodson, R. E., Observ. 101, 105 (1981).
* (30) C. Brans, R. H. Dicke, Phys. Rev. 124, 925 (1961).
* (31) R. H. Dicke, Phys. Rev. 125, 2163 (1962).
* (32) R. H. Dicke, Rev. Mod. Phys. 29, 355 (1957).
* (33) R. H. Dicke, Nature 192, 440 (1961).
* (34) Poisson E., A Relativist’s Toolkit, (Cambridge University Press, UK, 2004).
* (35) Nojiri S., Odintsov S. D., Phys. Lett. B 599 (2004), 137
* (36) Allemandi G. et al., Phys. Rev. D 72 (2005), 063505
* (37) Koivisto T., Class. Quant. Grav. 23 (2006), 4289.
* (38) Bertolami O. et al., Phys. Rev. D 75 (2007), 104016.
* (39) Harko T., Lobo F. S. N., Galaxies 2 (2014), 410.
* (40) Carloni S., Phys. Lett. B 766 (2017), 55.
* (41) Boehmer C. G., Carloni S., Phys. Rev. D 98 (2018), 024054.
* (42) Rastall P., Phys. Rev. D 6 (1972), 3357.
* (43) Moradpour H. et al., The European Physical Journal C, 77 (2017), 259.
* (44) De Moraes W. A. G., Santos A. F., Gen. Relativ. Gravit. 51 (2019), 167.
* (45) Li R. et al., Mon. Not. R. Astron. Soc. 486 (2019), 2407.
* (46) Al-Rawaf A. S., Taha O. M., Phys. Lett. B 366 (1996), 69.
* (47) Al-Rawaf A. S., Taha O. M., Gen. Relat. Gravit. 28 (1996), 935.
* (48) Al-Rawaf A. S., Int. J. Mod. Phys. D 14 (2005), 1941.
* (49) Majernik V., Gen. Relat. Gravit. 35 (2003), 1007.
* (50) Arbab A. I., J. Cosmol. Astropart. Phys. 05 (2003), 008.
* (51) Abdel-Rahman A. M. M., Astrophys. Space. Sci. 278 (2001), 383.
* (52) Abdel-Rahman A. M. M., Hashim M. H. A., Astrophys. Space. Sci. 298 (2005), 519.
* (53) Abdel-Rahman A. M. M., Riad I. F., Astron. J. 134 (2007), 1931.
* (54) Moradpour H. et al., Phys. Rev. D. 96 (2017), 123504.
* (55) Manna T., Rahaman F., Mondal M., Mod. Phys. Lett. A 35 (2020), 2050034.
* (56) S. K. Maurya and F. T.-Ortiz, Phys. Dark Univ. 29 (2020), 100577.
* (57) H. Shabani and A. H. Ziaie, Europhysics Letters 129, (2020) 20004.
* (58) Fabris J. C., Kerner R., Tossa J., Int. J. Mod. Phys. D 9 (2000), 111.
* (59) Josset T., Perez A., Phys. Rev. Lett. 118 (2017), 021102.
* (60) Watson S. et al., J. Cosmol. Astropart. Phys. 07 (2017), 11.
* (61) Gamboa J. et al., Phys. Rev. D 96 (2017), 083534.
* (62) Das D., Dutta S., Chakraborty S., Eur. Phys. J. C 78 (2018), 810.
* (63) Lin K., Qian W. L., Eur. Phys. J. C 80 (2020), 561.
* (64) C. E. M. Batista, M. H. Daouda, J. C. Fabris, O. F. Piattella, D. C. Rodrigues, Phys. Rev. D 85, (2012), 084008.
* (65) Moradpour H. et al., Mod. Phys. Lett. A 32 (2017), 1750078
* (66) Moradpour H. et al., Adv. High Energy Phys. 2018 (2018), 7124730
* (67) Moradpour H., Sadeghnezhad N., Hendi S. H., Can. J. Phys. 95 (2017), 1257
* (68) T. R. P. Carames et al., Eur. Phys. J. C 74 (2014), 3145.
* (69) Darabi F. et al., Eur. Phys. J. C 78 (2018), 25
* (70) Moradpour H. et al., Mod. Phys. Lett. A 33 (2019), 1950096
* (71) Mota C.E., et al., arXiv:2007.01968.
* (72) A. H. Ziaie, H. Moradpour, H. Shabani, Eur. Phys. J. Plus 135 (2020), 916.
* (73) D. Lohiya, A. Batra, S. Mehra, S. Mahajan and A. Mukherjee, Astron. Astrophys. Trans. 14 (1997), 199.
* (74) I. Ayuso, J. P. Mimoso and N. J. Nunes, Galaxies, 7 (2019), 38.
* (75) A. B. Batista, J. C. Fabris and S. V. B. Goncalves, Class. Quant. Grav. 18 (2001), 1389.
* (76) A. A. Starobinskij, Pisma v Astronomicheskii Zhurnal 7 (1981), 67; English translation: Soviet Astronomy Letters 7 (1981), 36.
* (77) K. Lin and W.-L. Qian, Eur. Phys. J. C 80 (2020), 561.
* (78) Domínguez A. et al., Astrophys. J. 885 (2019), 137.
* (79) Macquart J. et al., Nature 581 (2020), 391.
* (80) Farooq O. et al., Astrophys. J. 835 (2017), 26.
* (81) Aghanim N. et al., A&A 641 (2020), A6.
* (82) Amanullah et al., Astrophys. J. 716 (2010), 712.
* (83) O. Akarsu, N. Katirci, S. Kumar, R. C. Nunes, B. Ozturk and S. Sharma, Eur. Phys. J. C 80 (2020), 1050.
# Physics-informed graph neural Galerkin networks: A unified framework for
solving PDE-governed forward and inverse problems
Han Gao Matthew J. Zahr Jian-Xun Wang Department of Aerospace and
Mechanical Engineering, University of Notre Dame, Notre Dame, IN
###### Abstract
Despite the great promise of the physics-informed neural networks (PINNs) in
solving forward and inverse problems, several technical challenges are present
as roadblocks for more complex and realistic applications. First, most
existing PINNs are based on point-wise formulation with fully-connected
networks to learn continuous functions, which suffer from poor scalability and
hard boundary enforcement. Second, the infinite search space over-complicates
the non-convex optimization for network training. Third, although the
convolutional neural network (CNN)-based discrete learning can significantly
improve training efficiency, CNNs struggle to handle irregular geometries with
unstructured meshes. To properly address these challenges, we present a novel
discrete PINN framework based on graph convolutional network (GCN) and
variational structure of PDE to solve forward and inverse partial differential
equations (PDEs) in a unified manner. The use of a piecewise polynomial basis
can reduce the dimension of search space and facilitate training and
convergence. Without the need of tuning penalty parameters in classic PINNs,
the proposed method can strictly impose boundary conditions and assimilate
sparse data in both forward and inverse settings. The flexibility of GCNs is
leveraged for irregular geometries with unstructured meshes. The effectiveness
and merit of the proposed method are demonstrated over a variety of forward
and inverse computational mechanics problems governed by both linear and
nonlinear PDEs.
###### keywords:
Partial differential equations , Inverse problem , Physics-informed machine
learning , Graph convolutional neural networks , Mechanics
††journal: Elsevier
## 1 Introduction
Partial differential equations (PDEs) play an important role in engineering
applications, since most of the physics governing natural or man-made complex
systems is described by PDEs. However, finding solutions to most PDEs is a
challenging problem, which may involve sophisticated numerical techniques and
can be time-consuming, particularly for scenarios where parameters or
initial/boundary conditions are partially known. Most recently, physics-
informed neural networks (PINNs) [1], as a new paradigm for solving both
forward and inverse PDEs, has attracted increasing attention due to its great
flexibility and simplicity compared to classic numerical methods. The general
idea of PINNs is to approximate the PDE solutions with deep neural networks,
whose loss functions are formulated as a combination of PDE residuals and data
mismatch. This unique loss formulation enables physics-informed training that
leverages the information from both physics equations and sparse observation
data.
Based on how the differential operators of the PDE residuals are constructed
using neural networks, PINNs can be classified into two categories: continuous
and discrete. The continuous PINNs usually employ fully-connected (FC) neural
networks to approximate the continuous solution function $f(\mathbf{x},t)$
with respect to spatiotemporal coordinates $(\mathbf{x},t)$ in a point-wise
manner, where the spatial and temporal derivative terms are computed using
automatic differentiation (AD) techniques [2]. The continuous PINNs are
undergoing a renaissance since the recent impressive contributions made by
Raissi et al. [1] on the development of the continuous FC-PINN for solving
forward and inverse PDEs. Their merit and effectiveness have been demonstrated
over a plethora of scientific applications in many areas [3, 4, 5, 6, 7]. For
instance, in fluid applications, PINNs have been used for fast surrogate
modeling of idealized vascular flow problems in a forward parametric setting
without training labels [8]. Moreover, PINNs have also been formulated in an
inverse modeling setting to extract unobservable information (e.g., blood flow
velocity) from observable data (e.g., concentration data) in cardiovascular
problems [9, 10, 11, 12]. Jin et al. [13] applied FC-PINNs to solve Navier-
Stokes equations, ranging from laminar to turbulent regimes, while Mao et al.
[14] further showed their effectiveness on high-speed flow problems. Recently,
NVIDIA developed a scalable implementation SimNet based on continuous PINNs
and applied it to solve various multiphysics problems with massive GPU
parallelization [15].
Despite the enormous success and rapid developments thanks to their great
flexibility, the current continuous PINNs still have some limitations. First,
they suffer from high training costs, since the point-wise formulation
requires a huge amount of AD computations over vast numbers of collocation
points in a high-dimensional spatiotemporal (and parameter) domain [16, 17].
Second, it is
challenging to formulate a strict enforcement of initial/boundary conditions
(IC/BCs) for continuous PINNs, which has been demonstrated to be effective in
finding correct unique PDE solutions, especially when labeled data is very
scarce or absent [8]. Although a distance-based particular solution can be
introduced to strictly impose IC/BCs on a few simple 2-D domains using either
specifically designed algebraic expressions or low-capacity neural networks
[8, 18], this approach has not shown effectiveness on complex geometries in
real-world applications. To reduce training costs and enable efficient
learning, discrete PINNs that leverage convolution operations and numerical
discretizations have begun to spur interest due to their better efficiency
and scalability [19, 20]. Specifically, convolutional neural networks (CNNs)
are often used in discrete PINNs to directly learn the entire spatiotemporal
solution fields end to end, and all the derivative terms of the
physics-informed loss are calculated based on numerical discretization instead
of point-wise AD. For instance, Zhu et al. [19] developed a physics-constrained
convolutional encoder-decoder to solve high-dimensional elliptic PDEs, and
Geneva et al. [21] further extended this framework to dynamic hyperbolic PDEs
with parametric initial conditions. Zhang et al. [22] presented a physics-
guided CNN for seismic response modeling, and also explored a similar idea
with a recurrent neural network (RNN) for the metamodeling of nonlinear
structures [22]. Wandel et al. [23] recently proposed a data-free fluid
surrogate based on an autoregressive U-net in a parametric setting. In the
aforementioned works,
the computational domains are regular and discretized by uniform grids, where
PDE residuals are calculated by finite difference (FD) methods. This is
because the FD-based CNNs are fundamentally rooted in structured Cartesian
grids of rectangular domains. Besides FD-based PINN, finite volume (FV)
discretization has also been utilized to construct the PDE-based loss function
to solve steady fluid problems, which, however, is still restricted to
rectangular domains due to intrinsic limitations of classic convolution
operations [24]. To enable physics-informed CNNs to solve parametric PDEs on
irregular domains with unstructured grids, Gao et al. [20] proposed a
geometry-adaptive physics-informed CNN, PhyGeoNet, which embeds a pre-computed
coordinate mapping into the classic CNN structure. Although the effectiveness
of PhyGeoNet has been demonstrated on simple irregular domains, handling
general complex geometries remains challenging.
Motivated by existing challenges, we propose a novel discrete PINN framework
to handle irregular domains with unstructured grids based on generalized
convolution operations. Namely, the convolution operations are directly
performed on unstructured mesh data, which can be seen as discrete non-
Euclidean manifolds, i.e., graphs. Moreover, the construction of the PDE-informed
graph convolutional network (GCN) structure is inspired by finite element (FE)
method [25, 26], another classic numerical discretization technique that
possesses many advantages for physics-informed learning. First, thanks to the
variational (weak) formulation of the PDE residuals, Neumann boundary
conditions are naturally incorporated into the governing equations, and the
order of the differential operators is effectively reduced by integration by
parts, which largely mitigates the learning complexity.
Moreover, the massive number of collocation points required by strong-form
PINNs can be replaced by a relatively small number of quadrature points, which
could considerably reduce the training cost. The variational (weak)
formulation has been recently developed for continuous PINNs and notable
superiority has been shown over strong-form PINNs [27, 28, 29, 30, 31, 32]. In
these variational continuous PINNs, a point-wise fully-connected neural
network is usually built as the trial basis, combined with polynomial test
functions, to formulate the variational forms in Petrov-Galerkin fashion. Due
to the black-box nature of the deep neural networks, accurate quadrature rules
are difficult to construct, which leads to additional error associated with
variational crimes. Moreover, the essential BCs cannot be imposed in a hard
manner due to the point-wise formulation. The FEM-Net proposed by Yao et al.
[33] is an FE-based discrete PINN, in which an FE-based convolution is
developed to build variational PDE residuals for CNNs. However, this method
relies on a linearity assumption and is still limited to rectangular domains
due to the classic CNN backbone.
In this work, we propose an innovative discrete PINN framework based on a graph
convolutional network and the variational structure of the PDE to solve forward and
inverse PDEs in a unified manner. Specifically, the novel contributions are
summarized as follows:
1.
We introduce the graph convolution operation into physics-informed learning to
fully leverage the power of FE-based discretization for irregular domains with
unstructured meshes. Unlike state-of-the-art discrete PINNs based on classic
CNNs, the proposed approach does not need rasterization as it can directly
handle unstructured meshes with simplex/quadrilateral elements, as a
traditional FE solver does.
2.
A set of finite-dimensional polynomial basis functions is used to reconstruct
the full-field predictions from the output nodal solution graph in a Galerkin
formulation; the search space is thus significantly reduced, which facilitates
training. Moreover, since both the test and trial functions are based on
standard polynomials, the variational integrals can be computed accurately
using Gaussian quadrature.
3.
The proposed PINN is designed to exactly satisfy essential boundary
conditions, avoiding the penalty-coefficient tuning required by the soft BC
enforcement used in most PINNs.
4.
A new data assimilation scheme is proposed to strictly enforce observation
data.
## 2 Methodology
### 2.1 Overview
Consider a physical system in a bounded domain ($\Omega\subset\mathbb{R}^{d}$)
governed by a set of nonlinear, steady parameterized PDEs in the generic
discretized form,
${\bm{R}}({\bm{U}}({\boldsymbol{\mu}});{\boldsymbol{\mu}})=0,$ (1)
where ${\boldsymbol{\mu}}\in\mathbb{R}^{N_{{\boldsymbol{\mu}}}}$ is the PDE
parameter vector,
${\bm{U}}:\mathbb{R}^{N_{{\boldsymbol{\mu}}}}\rightarrow\mathbb{R}^{N_{{\bm{U}}}}$
is the discrete parameter-dependent state vector implicitly defined as the
solution of (1), and
${\bm{R}}:\mathbb{R}^{N_{{\bm{U}}}}\times\mathbb{R}^{N_{{\boldsymbol{\mu}}}}\rightarrow\mathbb{R}^{N_{{\bm{U}}}}$
represents the discretized PDE operator. The set of PDEs is subject to
boundary conditions (BCs), which are defined on the boundary $\partial\Omega$
of the domain. In this work, we present an innovative physics-informed graph
neural Galerkin network (PI-GGN) to establish a solution approach for such
PDE-governed system in both forward and inverse settings. In the forward
problem, we aim to obtain the solution ${\bm{U}}$ given known BCs and
parameters ${\boldsymbol{\mu}}$; in the inverse setting, the system is
solved when the BCs and parameters ${\boldsymbol{\mu}}$ are only partially
known but sparse observations of the state are available. In the proposed
framework, a GCN is devised to learn nodal solutions of the state on a set of
unstructured grids. The PDE residuals in the physics-informed loss function
are reconstructed based on the continuous Galerkin method. Essential BCs of
the system are imposed in a hard manner and additional data can be assimilated
to solve the forward and inverse problems simultaneously. Each component of
the proposed method will be detailed in the following subsections.
### 2.2 Graph convolutional neural network for unstructured data
There has been growing interest in applying GCN for scientific machine
learning problems because of its great flexibility in dealing with
unstructured data. Excellent performance of the graph-based learning has been
reported in modeling various computational mechanics problems through classic
data-driven training [34, 35, 36, 37, 38, 39]. In general, by defining
convolution operations for non-Euclidean space, GCNs generalize CNN-type
constructions to graph data. The capability of modeling dependencies between
nodes of a graph is the key that enables GCNs to handle unstructured mesh data
with any arbitrary boundaries.
Figure 1: An example of a GCN, where the input and output graphs have 3 nodes
and edges and share the same adjacency matrix ($\mathcal{N}(1)=\{2,3\}$,
$\mathcal{N}(2)=\{1,3\}$, $\mathcal{N}(3)=\{1,2\}$). The input feature is the
coordinate of each node (${\bm{f}}_{i}^{(\mathrm{in})}=x_{i}$), while the
output feature is the nodal solution vector
(${\bm{f}}_{i}^{(\mathrm{out})}=u_{i}(x_{i})$).
As shown in Fig. 1, a graph consists of nodes and edges, where each node is
defined by its feature vector ${\bm{f}}$, and its relations to other nodes are
described by edges. The neighborhood $\mathcal{N}(\cdot)$ of a node refers to
the set of adjacent nodes connected to that node via edges. Therefore, a
mesh with unstructured grids and corresponding nodal PDE solutions can be
naturally described as graphs. Similar to CNN-based discrete PINN [20], a GCN
is built to model the discretized solution fields
${\bm{U}}(\bar{\boldsymbol{\mu}})\approx\hat{{\bm{U}}}({\boldsymbol{\Theta}}^{*})$,
where ${\boldsymbol{\Theta}}^{*}$ are trained parameters of the GCN for graph
convolutions for the parameter $\bar{\boldsymbol{\mu}}$.
###### Remark.
In general, the input feature vector of the GCN can be any spatially varying
field discretized by the mesh, owing to the universal approximation capacity
of deep neural networks. In this work, the GCN takes an input graph in which
each node is associated with its spatial coordinates in the mesh, and outputs
the discretized solution fields as an output graph, where each node contains
the corresponding nodal solution vector.
Similar to CNNs, the output solution graph is obtained by applying multiple
graph convolution operations on the input layer, sequentially updating nodal
features via a message passing function, which can be written in a generic
form,
${\bm{f}}_{i}^{(l)}=\gamma^{(l)}({\bm{f}}_{i}^{(l-1)},\square^{(l)}_{j\in\mathcal{N}(i)}\Psi^{(l)}({\bm{f}}_{i}^{(l-1)},{\bm{f}}_{j}^{(l-1)})),$
(2)
where $i$ denotes the $i^{\mathrm{th}}$ node, $(l)$ denotes the
$l^{\mathrm{th}}$ layer, $\gamma$ and $\Psi$ are differentiable non-linear
functions, and $\square$
denotes a differentiable, permutation-invariant function (e.g., summation,
mean, or maximum). The feature vectors are represented by
${\bm{f}}_{i}^{(l)}\in\mathbb{R}^{N_{{\bm{f}}^{(l)}}}$ and
${\bm{f}}_{i}^{(l-1)}\in\mathbb{R}^{N_{{\bm{f}}^{(l-1)}}}$, where
$N_{{\bm{f}}^{(l-1)}}$ and $N_{{\bm{f}}^{(l)}}$ are feature dimensions in
$(l-1)^{\mathrm{th}}$ and $l^{\mathrm{th}}$ layers, respectively. For
implementation simplicity, all the nodal features are usually concatenated and
flattened as a larger vector ${\bm{X}}$. The information of edge connection is
stored in a sparse matrix ${\bm{A}}$, known as the adjacency matrix. In this
work, the GCN is constructed based on the Chebyshev spectral graph convolution
operator [40], which is derived from the spectral convolution theorem [41],
where Chebyshev polynomials are introduced to avoid expensive eigen-
decomposition. Specifically, the message passing function of Chebyshev graph
convolution can be written as,
${\bm{X}}^{(l)}=\mathrm{ReLU}\left(\sum_{k=1}^{K}{\bm{Z}}^{(l-1,k)}\cdot{\boldsymbol{\Theta}}^{(l-1,k)}+{\bm{b}}^{(l-1)}\right),$
(3)
where ${\boldsymbol{\Theta}}^{(l-1,k)}$ are trainable parameters for the
$k^{\mathrm{th}}$ basis in the $(l-1)^{\mathrm{th}}$ layer, ${\bm{b}}^{(l-1)}$
is an additive trainable bias vector, and the $k^{\mathrm{th}}$ basis
${\bm{Z}}^{(l-1,k)}$ is calculated recursively as follows,
$\begin{split}&{\bm{Z}}^{(l-1,1)}={\bm{X}}^{(l-1)},\\\
&{\bm{Z}}^{(l-1,2)}=\hat{{\bm{L}}}\cdot{\bm{X}}^{(l-1)},\\\
&{\bm{Z}}^{(l-1,k)}=2\hat{{\bm{L}}}\cdot{\bm{Z}}^{(l-1,k-1)}-{\bm{Z}}^{(l-1,k-2)},\\\
\end{split}$ (4)
and
$\begin{split}&\hat{{\bm{L}}}={\bm{L}}-{\bm{I}}\\\
&{\bm{L}}={\bm{I}}-{\bm{D}}^{-\frac{1}{2}}{\bm{A}}{\bm{D}}^{-\frac{1}{2}}\end{split}$
(5)
where ${\bm{I}}$ is an identity matrix and ${\bm{D}}$ represents the degree
matrix of the graph. The rectified linear unit (ReLU) [42] is chosen as the
nonlinear activation function, and the polynomial order $K$ is set to $10$ in
this work.
### 2.3 Variational PDE-informed loss function
The loss function is built from the PDE residuals (Eq. 1), such that the
conservation laws are utilized to inform/drive the GCN training. The generic
PDE for steady-state scenarios can be re-written as,
$\nabla\cdot F(u,\nabla u;{\boldsymbol{\mu}})=S(u,\nabla
u;{\boldsymbol{\mu}})\quad\text{in }\Omega,$ (6)
where $u:\Omega\rightarrow\mathbb{R}^{N_{c}}$ is the solution variable,
$F:\mathbb{R}^{N_{c}}\rightarrow\mathbb{R}^{N_{c}\times d}$ is the flux
function, $S:\mathbb{R}^{N_{c}}\rightarrow\mathbb{R}^{N_{c}}$ is the source
term, and $\nabla:=(\partial_{x_{1}},...,\partial_{x_{d}})$ denotes the
gradient operator defined in the physical domain. Equation 6 can represent a
wide range of static PDEs such as Poisson equation, linear elasticity
equations, and Navier-Stokes equations.
#### 2.3.1 Weak formulation of PDE residuals
For continuous FC-PINNs, the derivative terms for constructing the PDE-
informed loss function are obtained by AD in a point-wise manner, and the
FCNN, as a continuous trial function, searches an infinite-dimensional
solution space. This infinite search space over-complicates the non-convex
optimization of the network training, and a massive number of collocation
points is usually required. In this work, we use a piecewise polynomial basis
to reduce the dimension of the search space and facilitate physics-informed
training/convergence. Specifically, the conservation laws (Eq. 6) are
discretized using a nodal continuous Galerkin method, and the trial space
$\mathcal{V}_{h}^{p}$ is constructed by continuous piecewise polynomial basis
functions
$\mathcal{V}_{h}^{p}=\big\{v\in[\mathcal{H}^{1}(\Omega)]^{N_{c}}\;\big|\;v|_{K}\in[\mathcal{P}_{p}(K)]^{N_{c}},\;\forall
K\in\mathcal{E}_{h}\big\},$ (7)
where $\mathcal{H}^{1}(\Omega)$ is the Sobolev space of functions whose weak
derivatives up to order one are square integrable, $\mathcal{P}_{p}(K)$ is the
space of polynomial functions of degree up to $p$ defined on the element $K$,
and $\mathcal{E}_{h}$ is the finite element mesh. The test space is set to be
the same as the trial space $\mathcal{V}_{h}^{p}$ and the solution
$u_{h}\in\mathcal{V}_{h}^{p}$ satisfies the weak formulation of the PDEs for
any test function $\omega_{h}\in\mathcal{V}_{h}^{p}$,
$\int_{\partial\Omega}\omega_{h}\cdot F(u_{h},\nabla
u_{h};{\boldsymbol{\mu}})n\,dS-\int_{\Omega}\nabla\omega_{h}:F(u_{h},\nabla
u_{h};{\boldsymbol{\mu}})\,dV=\int_{\Omega}\omega_{h}\cdot S(u_{h},\nabla
u_{h};{\boldsymbol{\mu}})\,dV.$ (8)
We introduce a basis ${\boldsymbol{\Phi}}(x)\in\mathbb{R}^{N_{\bm{U}}\times
N_{c}}$ for $\mathcal{V}_{h}^{p}$ to express the test variables as
$\omega_{h}(x)={\boldsymbol{\Phi}}(x)^{T}\tilde{\bm{W}}$, where
$\tilde{\bm{W}}\in\mathbb{R}^{N_{\bm{U}}}$ are the coefficients of the test
variable in the basis, which leads to an equivalent version of the Galerkin
form
$\int_{\partial\Omega}{\boldsymbol{\Phi}}\cdot F\Big{(}u_{h},\nabla
u_{h};{\boldsymbol{\mu}}\Big{)}n\,dS-\int_{\Omega}\nabla{\boldsymbol{\Phi}}:F\Big{(}u_{h},\nabla
u_{h};{\boldsymbol{\mu}}\Big{)}\,dV-\int_{\Omega}{\boldsymbol{\Phi}}\cdot
S\Big{(}u_{h},\nabla u_{h};{\boldsymbol{\mu}}\Big{)}\,dV=0.$ (9)
using the arbitrariness of the test-function coefficients. We convert this to
residual form by introducing
$\{(\beta_{i}^{v},\tilde{x}_{i}^{v})\}^{N_{qv}}_{i=1}$ and
$\{(\beta_{i}^{s},\tilde{x}_{i}^{s})\}^{N_{qs}}_{i=1}$ as the quadrature
weights and points for the integrals over $\Omega$ and $\partial\Omega$,
respectively, to define the residual as
$\begin{split}{\bm{R}}(\tilde{\bm{U}};{\boldsymbol{\mu}})=&\sum_{i=1}^{N_{qs}}\beta^{s}_{i}{\boldsymbol{\Phi}}(\tilde{x}^{s}_{i})\cdot
F\Big{(}\tilde{u}_{h}(\tilde{x}_{i}^{s};\tilde{\bm{U}}),\nabla\tilde{u}_{h}(\tilde{x}_{i}^{s};\tilde{\bm{U}});{\boldsymbol{\mu}}\Big{)}n-\\\
&\sum_{i=1}^{N_{qv}}\beta^{v}_{i}\nabla{\boldsymbol{\Phi}}(\tilde{x}^{v}_{i}):F\Big{(}\tilde{u}_{h}(\tilde{x}_{i}^{v};\tilde{\bm{U}}),\nabla\tilde{u}_{h}(\tilde{x}_{i}^{v};\tilde{\bm{U}});{\boldsymbol{\mu}}\Big{)}-\\\
&\sum_{i=1}^{N_{qv}}\beta^{v}_{i}{\boldsymbol{\Phi}}(\tilde{x}^{v}_{i})\cdot
S\Big{(}\tilde{u}_{h}(\tilde{x}_{i}^{v};\tilde{\bm{U}}),\nabla\tilde{u}_{h}(\tilde{x}_{i}^{v};\tilde{\bm{U}});{\boldsymbol{\mu}}\Big{)},\end{split}$
(10)
where
$\tilde{u}_{h}:\Omega\times\mathbb{R}^{N_{\bm{U}}}\rightarrow\mathbb{R}^{N_{c}}$
is the continuous representation in $\mathcal{V}_{h}^{p}$ of the discrete
state vector, i.e.,
$\tilde{u}_{h}(x;\tilde{\bm{U}})={\boldsymbol{\Phi}}(x)^{T}\tilde{\bm{U}}.$
(11)
The surface and volume quadrature coefficients ($\beta^{s}$ and $\beta^{v}$)
are stored as constant tensors and remain unchanged during the network
training. The matrix of basis functions ${\boldsymbol{\Phi}}$ is evaluated at
the limited number of quadrature points and can be pre-computed as constant
tensors
(${\boldsymbol{\Phi}}(\tilde{x}^{v}),{\boldsymbol{\Phi}}(\tilde{x}^{s}),\nabla{\boldsymbol{\Phi}}(\tilde{x}^{v}),\nabla{\boldsymbol{\Phi}}(\tilde{x}^{s})$).
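To make the assembly in Eq. 10 concrete, the following minimal 1-D sketch (our own simplification, with a linear hat-function basis, two-point Gaussian quadrature, and a hypothetical mesh size) evaluates the quadrature-based residual for $-u''=f$ on $[0,1]$ with homogeneous Dirichlet BCs; inserting the exact nodal solution makes the residual at the unconstrained DOFs vanish:

```python
import numpy as np

# Quadrature-based Galerkin residual (cf. Eq. 10), 1-D model problem -u'' = f.
def galerkin_residual(U, f, n_el):
    h = 1.0 / n_el
    nodes = np.linspace(0.0, 1.0, n_el + 1)
    gq = np.array([-1.0, 1.0]) / np.sqrt(3.0)   # 2-point Gauss rule on [-1, 1]
    R = np.zeros(n_el + 1)
    for e in range(n_el):                       # loop over elements K
        x0, x1 = nodes[e], nodes[e + 1]
        for xi in gq:
            x = 0.5 * (x0 + x1) + 0.5 * h * xi  # physical quadrature point
            wq = 0.5 * h                        # quadrature weight (beta^v)
            phi = np.array([(x1 - x) / h, (x - x0) / h])   # hat basis Phi at x
            dphi = np.array([-1.0 / h, 1.0 / h])           # gradient of Phi
            du = dphi @ U[e:e + 2]              # u_h' at the quadrature point
            # volume terms analogous to Eq. 10: stiffness minus load
            R[e:e + 2] += wq * (dphi * du - phi * f(x))
    return R[1:-1]                              # interior (unconstrained) DOFs

n_el = 8
x = np.linspace(0.0, 1.0, n_el + 1)
U_exact = 0.5 * x * (1.0 - x)                   # exact solution for f = 1
R = galerkin_residual(U_exact, lambda x: 1.0, n_el)
print(np.abs(R).max())                          # ~ 0: residual of exact solution
```

The multi-dimensional case in Eq. 10 follows the same pattern, with the surface term added and the pre-computed basis tensors replacing the inline `phi`/`dphi`.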
The variational formulation of the PDE residual (eq. 10) will be used to
define the physics-informed loss function for the GCN. Namely, the nodal
solution vector $\tilde{{\bm{U}}}$ is learned by the GCN as the output graph
$\hat{{\bm{U}}}({\boldsymbol{\Theta}})$, which takes the coordinates
(${\boldsymbol{\chi}}$) as the input graph. When the PDE parameters
${\boldsymbol{\mu}}$ are unknown, they can be treated as trainable parameters
and updated along with the network parameters ${\boldsymbol{\Theta}}$. Both
the flux and source functions ($F,S$) are differentiable, so gradient
information can be propagated from their outputs to their inputs. Table 1
summarizes these notations.
Notations | Description | Treatment in PI-GGN
---|---|---
${\boldsymbol{\mu}}$ | PDE parameters | Constant (if known) or trainable (if unknown)
$\beta_{i}^{v}$, $\beta_{i}^{s}$ | Quadrature weights | Constant tensors
${\boldsymbol{\Phi}}(\cdot)$ | Basis functions | Constant tensors
$F$, $S$ | Flux and source functions | Differentiable functions
$\hat{{\bm{U}}}$ | Nodal solutions | Output graph of the GCN
${\boldsymbol{\chi}}$ | Nodal coordinates | Input graph of the GCN
Table 1: Summary of notation
#### 2.3.2 Essential boundary conditions enforcement
We apply static condensation to (10) by restricting it to the unconstrained
degrees of freedom, i.e., those not constrained by essential BCs, which yields
${\bm{R}}_{u}({\bm{U}}_{u}({\boldsymbol{\mu}}),{\bm{U}}_{e};{\boldsymbol{\mu}})=0,$
(12)
where ${\bm{U}}_{e}$ are the known values of the essential boundary conditions
and ${\bm{U}}_{u}({\boldsymbol{\mu}})$ are the components of
${\bm{U}}({\boldsymbol{\mu}})$ corresponding to the unconstrained degrees of
freedom. In the neural network setting, we enforce the essential boundary
conditions strongly by partitioning the degrees of freedom into unconstrained
(unknown) and constrained (known) degrees of freedom as
$\hat{\bm{U}}({\boldsymbol{\Theta}})=(\hat{\bm{U}}_{u}({\boldsymbol{\Theta}})^{T},\hat{\bm{U}}_{c}^{T})^{T}$
and defining the constrained degrees of freedom using the known value of the
essential BCs, i.e., $\hat{\bm{U}}_{c}={\bm{U}}_{e}$, and the unconstrained
degrees of freedom by minimizing the physics-informed loss function
$\mathcal{L}_{\mathrm{f}}({\boldsymbol{\Theta}};{\boldsymbol{\mu}})=\left\|{\bm{R}}_{u}\left(\hat{{\bm{U}}}_{u}({\boldsymbol{\Theta}}),{\bm{U}}_{e};{\boldsymbol{\mu}}\right)\right\|_{2}.$
(13)
In this formulation, the essential boundary conditions are satisfied
automatically by construction, in contrast to continuous FC-PINNs, which
define the FCNN as a point-wise solution function and therefore pose
challenges for hard boundary enforcement.
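A minimal sketch of this hard enforcement (variable names are illustrative): the constrained DOFs are overwritten with the known boundary values before the residual is evaluated, and the loss is taken over the unconstrained DOFs only, so Eq. 13 is minimized while the essential BCs hold exactly:

```python
import numpy as np

# Hard essential-BC enforcement (Eqs. 12-13): the network output is only used
# at unconstrained DOFs; constrained DOFs are overwritten with the known
# boundary values U_e, so the BCs hold for any network parameters.
def assemble_solution(U_net, bc_mask, bc_values):
    """U_net: raw network output; bc_mask: True where a DOF carries an essential BC."""
    return np.where(bc_mask, bc_values, U_net)        # U_c = U_e exactly

def physics_loss(residual_fn, U, bc_mask):
    return np.linalg.norm(residual_fn(U)[~bc_mask])   # ||R_u||_2, Eq. 13

# toy example: 5 DOFs, first and last on the boundary with u = 0
U_net = np.array([0.7, 0.4, 0.6, 0.4, -0.3])          # hypothetical GCN output
bc_mask = np.array([True, False, False, False, True])
U = assemble_solution(U_net, bc_mask, 0.0)
print(U[0], U[-1])                       # 0.0 0.0, regardless of the network
loss = physics_loss(lambda U: U - 0.5, U, bc_mask)    # dummy residual function
print(round(float(loss), 3))
```

In PI-GGN, `residual_fn` is the assembled Galerkin residual of Eq. 10 rather than this dummy placeholder.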
### 2.4 Unifying forward and inverse solutions
The GCN can be trained based on the physics-informed loss function defined in
Eq. 13 by solving the following optimization problem without labels,
${\boldsymbol{\Theta}}^{*}=\underset{{\boldsymbol{\Theta}}}{\arg\min}~{}\mathcal{L}_{\mathrm{f}}({\boldsymbol{\Theta}};\bar{\boldsymbol{\mu}})$
(14)
where ${\boldsymbol{\Theta}}^{*}$ denotes the optimal network parameters and
$\bar{\boldsymbol{\mu}}$ are the known PDE parameters; the trained GCN then
provides the forward PDE solution (_forward solution_). However, in many cases, some
physical parameters such as material properties, inlet velocity, and Reynolds
number, are not available, while sparse observation data (labels)
${\bm{U}}_{o}$ can be obtained, which can be assimilated to infer the unknown
parameters (_inverse solution_). In previous PINN approaches, the inverse
problem is solved by assimilating the data ${\bm{U}}_{o}$ in a soft manner,
where the physics-informed loss is augmented with a data-loss component;
namely, the following optimization is formulated,
$({\boldsymbol{\Theta}}^{*},{\boldsymbol{\mu}}^{*})=\underset{{\boldsymbol{\Theta}},{\boldsymbol{\mu}}}{\arg\min}~{}\mathcal{L}_{\mathrm{f}}({\boldsymbol{\Theta}};{\boldsymbol{\mu}})+\lambda\underbrace{\left\|\mathcal{{\bm{F}}}^{s2o}\left(\hat{{\bm{U}}}({\boldsymbol{\Theta}})\right)-{\bm{U}}_{o}\right\|_{2}}_{\text{data
loss:}\ \mathcal{L}^{d}},$ (15)
where $\mathcal{{\bm{F}}}^{s2o}$ represents the state-to-observable map and
$\lambda$ is the penalty parameter. Properly tuning the penalty weight
$\lambda$ is critical to the convergence, which is, however, challenging and
often conducted empirically [16]. Here we introduce a novel approach to
assimilate observation data and infer unknown parameters without the need for
hyperparameter tuning. Specifically, the observation data are strictly imposed
by constructing the GCN output as
$\mathcal{{\bm{F}}}^{s2o}\left(\hat{{\bm{U}}}({\boldsymbol{\Theta}})\right)={\bm{U}}_{o}.$
(16)
Therefore, the unknown parameters ${\boldsymbol{\mu}}$ and boundary
conditions can be obtained simultaneously with the PDE solution
$\hat{{\bm{U}}}_{u}$ by solving the following constrained optimization
problem,
$({\boldsymbol{\Theta}}^{*},{\boldsymbol{\mu}}^{*})=\underset{{\boldsymbol{\Theta}},{\boldsymbol{\mu}}}{\arg\min}~{}\mathcal{L}_{\mathrm{f}}({\boldsymbol{\Theta}};{\boldsymbol{\mu}}),\quad\text{subject
to:}\quad\mathcal{{\bm{F}}}^{s2o}\left(\hat{{\bm{U}}}({\boldsymbol{\Theta}})\right)={\bm{U}}_{o}.$
(17)
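As a toy illustration of Eq. 17 (our own construction, not a case from the paper), consider $u'(x)=\mu$ on $[0,1]$ with $u(0)=0$, unknown $\mu$, and one observation $u(1)=2$. The BC and the observation are imposed exactly by eliminating those DOFs; for this linear problem the residual minimization reduces to a small linear solve, whereas in PI-GGN the same minimization is carried out by gradient-based training of the GCN and $\mu$:

```python
import numpy as np

# Toy instance of the constrained inverse solve (Eq. 17): u'(x) = mu on [0, 1],
# u(0) = 0, one observation u(1) = 2 (so the true mu = 2). The BC and the
# observation are hard-enforced by elimination; the remaining unknowns
# z = [u_1, u_2, u_3, mu] satisfy the residual equations
# R_i = (u[i+1] - u[i])/h - mu = 0, assembled here as a linear system.
n, h = 5, 0.25
u0, u_obs = 0.0, 2.0                      # hard-enforced BC and observation
c = 1.0 / h
A = np.array([[ c, 0., 0., -1.],
              [-c,  c, 0., -1.],
              [0., -c,  c, -1.],
              [0., 0., -c, -1.]])
b = np.array([u0 * c, 0.0, 0.0, -u_obs * c])
z = np.linalg.solve(A, b)                 # interior DOFs and inferred parameter
print(z[:3])                              # interior u: [0.5, 1.0, 1.5]
print(z[-1])                              # inferred mu: 2.0
```

Because the data enter as an exact constraint rather than a penalty term, no weight $\lambda$ has to be tuned.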
## 3 Numerical experiments
We demonstrate the proposed physics-informed graph Galerkin neural network
(PI-GGN) on a variety of computational mechanics problems in both forward and
inverse settings. Specifically, Poisson equations, linear elasticity
equations, and Navier-Stokes equations with known or unknown BCs/parameters
are investigated here to demonstrate the effectiveness of the proposed method.
Moreover, we also compare two different ways of assimilating sparse
observation data and show the advantage of strictly enforcing data for
parameter/field inversion. For all cases, the GCN architecture remains the
same, with the dimensions of the nodal feature vectors in the hidden graph
layers fixed as $[32,64,128,256,128,64,32]$. The relative error metric $e$ is
defined as,
$e=\frac{||\hat{{\bm{U}}}({\boldsymbol{\Theta}}^{*})-{\bm{U}}(\bar{\boldsymbol{\mu}})||_{2}}{||{\bm{U}}(\bar{\boldsymbol{\mu}})||_{2}},$
(18)
where ${\boldsymbol{\Theta}}^{*}$ are the optimal trained parameters for the
parameter configuration $\bar{\boldsymbol{\mu}}$.
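In code, the metric of Eq. 18 is a one-line helper over the flattened nodal solution vectors (the sample values below are illustrative):

```python
import numpy as np

# Relative L2 error metric of Eq. 18.
def relative_error(U_pred, U_ref):
    return np.linalg.norm(U_pred - U_ref) / np.linalg.norm(U_ref)

U_ref = np.array([1.0, 2.0, 2.0])        # reference nodal solution (illustrative)
U_pred = np.array([1.0, 2.0, 2.3])       # hypothetical prediction
print(relative_error(U_pred, U_ref))     # 0.1
```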
### 3.1 Poisson equation
We start from a 2-D Poisson equation with homogeneous Dirichlet BCs,
$\begin{split}&f+\Delta u=0\quad\text{in }\Omega,\quad\\\ &u=0\quad\text{on
}\partial\Omega,\end{split}$ (19)
where $u$ is the primary variable, $f$ is the source term, and $\Delta$
denotes the Laplacian operator.
#### 3.1.1 Forward solution of diffusion field
We first consider the forward problem, where the source term $f$ is given
($f=1$) over a unit square domain (Figs. 2a and 2b). Four quadrilateral
elements are used to discretize the domain, with a $3^{\mathrm{rd}}$-order
polynomial basis for the solution and domain transformation.
(a) PI-GGN (b) FEM
(c) PI-GGN (d) Analytical
Figure 2: PI-GGN forward solutions of the diffusion field $u$ on the (a)
square and (c) circular disks, compared against corresponding FEM or
analytical solutions, where the relative prediction error of the PI-GGN is
$e=5\times 10^{-3}$ on the square domain and $e=5\times 10^{-4}$ on the
circular disk, respectively.
As a result, the graph has only 49 nodal points, which is far fewer than the
number of collocation points for a typical point-wise FC-PINN. The contour of
the PI-GGN prediction is in good agreement with the FEM reference, and the
relative error is $e=0.5\%$, though a slight under-estimation near the
boundary is observed. In Fig. 2c, the same PDE is solved on a unit
circular domain, where the analytical solution exists (Fig. 2d),
$u(x,y)=\frac{1-x^{2}-y^{2}}{4}.$ (20)
In PI-GGN, the number of elements remains the same, while the order of the
polynomial basis is set as two and thus 25 nodal points are used to construct
the graph. We can see the PI-GGN forward solution is almost identical to the
analytical reference and the relative prediction error $e$ is only $0.05\%$.
This simple test case demonstrates that the graph-based discrete PINN can
easily handle non-rectangular domains with unstructured meshes, which have
posed challenges for standard FD-based CNN architectures, where special
treatment such as rasterization or coordinate transformation is required [20],
complicating the implementation and convergence.
#### 3.1.2 Inverse solution of unknown source term
The real power of the PI-GGN is to solve the forward and inverse problems
simultaneously by assimilating additional state observations. For example,
when the source term is not given, the PI-GGN can assimilate sparse data to
solve for the diffusion field while inferring the unknown source term in a
unified manner. Here we assume the constant source term $f=2$ is unknown
and observation of $u$ is available only at one point as shown in Fig. 3a. We
use two ways to assimilate the data and solve the inverse problem: one is to
assimilate the data by adding a data loss as a penalty term (Eq. 15), with
the hyperparameter chosen as $\lambda=1000$; the other is to assimilate the
data strictly based on Eq. 17. As shown in Fig. 3b, the inferred source terms
from
both approaches converge to the ground truth and the forward solution of the
$u$ field is also obtained _simultaneously_. Overall, the prediction errors of
the unknown source term and diffusion field are less than $1\%$.
(a) PI-GGN prediction (b) Value of inferred constant source (c) Error of
predicted $u$ field
Figure 3: PI-GGN inverse solutions of the source term $f$ obtained by
assimilating observed diffusion data (black dots) using (1) the penalty
method and (2) hard enforcement, compared against the ground truth, together
with the errors of the field predictions by soft and hard data assimilation.
### 3.2 Linear elasticity equations
Next, we consider problems governed by linear elasticity equations,
$\begin{split}\nabla\cdot\sigma=0&\quad\text{in }\Omega,\\\ \sigma\cdot
n=t&\quad\text{on }\partial\Omega^{N},\\\ u=u^{D}&\quad\text{on
}\partial\Omega^{D},\end{split}$ (21)
where $u:\Omega\rightarrow\mathbb{R}^{d}$ is the displacement vector,
$\sigma:\Omega\rightarrow\mathbb{R}^{d\times d}$ is the stress tensor defined
as $\sigma_{ij}=\lambda u_{kk}\delta_{ij}+\mu(u_{i,j}+u_{j,i})$,
$n:\partial\Omega\rightarrow\mathbb{R}^{d}$ is the unit normal vector on the
boundary, $t:\partial\Omega^{N}\rightarrow\mathbb{R}^{d}$ is the applied
traction force, $u^{D}:\partial\Omega^{D}\rightarrow\mathbb{R}^{d}$ is the
essential boundary condition, and $\lambda$ and $\mu$ are the constant Lamé
parameters. For each variable component $u_{i}$, a sub-GCN is constructed for
the prediction.
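The stress definition above can be evaluated directly; the displacement-gradient value below is a hypothetical example for illustration:

```python
import numpy as np

# Isotropic stress sigma_ij = lam * u_{k,k} * delta_ij + mu * (u_{i,j} + u_{j,i})
# from Eq. 21, evaluated for a sample 2-D displacement gradient.
def stress(grad_u, lam, mu):
    """grad_u[i, j] = du_i/dx_j; returns the stress tensor sigma."""
    eye = np.eye(grad_u.shape[0])
    return lam * np.trace(grad_u) * eye + mu * (grad_u + grad_u.T)

grad_u = np.array([[0.10, 0.00],
                   [0.00, -0.05]])        # simple diagonal strain state
sigma = stress(grad_u, lam=1.0, mu=1.0)
print(sigma)                              # [[0.25, 0.], [0., -0.05]]
```

Inside the physics-informed loss, this map plays the role of the flux function $F$ for Eq. 21, so its differentiability lets gradients flow back to the displacement DOFs.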
#### 3.2.1 Forward solution of displacement field
(a) PI-GGN (b) FEM
Figure 4: PI-GGN forward solutions of the displacement field $u$, compared
against corresponding FEM reference, where the relative prediction error of
the PI-GGN is $e=1\times 10^{-2}$.
First, we solve the forward problem in a unit square domain. To discretize
this domain, four quadrilateral elements are used, and the polynomial basis
for the solution and domain transformation is set to second order, resulting
in a graph with $25$ nodes. The Lamé parameters are set as $\lambda=1$ and
$\mu=1$. The
essential boundary condition $u=[0,0]$ is prescribed on the left side ($x=0$)
and the natural boundary condition $t=[0.5,0]$ is imposed on the right side.
Fig. 4 shows that the PI-GGN forward solution of the displacement field agrees
with the FEM reference very well.
(a) PI-GGN solution (b) FEM solution
Figure 5: PI-GGN forward solutions of the displacement field $u$, compared
against corresponding FEM reference, where the relative prediction error of
the PI-GGN is $e=5\times 10^{-3}$.
Then we investigate an irregular domain, a rectangle with a notch, with the
same Lamé parameters. The domain is discretized by $55$ simplex elements with
a 1st-order polynomial basis for the solution and domain transformation. The
essential boundary condition $u^{D}=[0,0]$ is imposed on the left boundary
($x=-0.4$) and the natural boundary condition $t_{1}=0.5$ is prescribed on
the right boundary. As mentioned above, no special treatment is needed for
the PI-GGN to handle irregular geometries with a simplex mesh. Fig. 5 shows
that the forward solution by the PI-GGN is very accurate compared to the FEM
reference.
(a) PI-GGN
(b) FEM
Figure 6: PI-GGN forward solutions of the displacement field $u$, compared
against corresponding FEM reference, where the relative prediction error of
the PI-GGN is $e=5\times 10^{-2}$.
Lastly, we consider a 3-D domain. Specifically, the deformation of a 3-D
hollow cylinder is solved by the PI-GGN. The essential boundary conditions
$u^{D}=[0,0,0]$ are imposed at the left surface, the Neumann boundary
conditions $t=-n$ are prescribed at the inner surface of the cylinder
($x^{2}+y^{2}=1$), and $t=[0,0,-0.25]$ is imposed at the right surface. A
second-order polynomial basis is used, and the mesh consists of $40$
hexahedral elements with $440$ nodal points. The Lamé parameters are set as
$\lambda=0.73$ and $\mu=0.376$. The forward solution of the displacement by
the PI-GGN agrees with the FEM reference reasonably well, though the PI-GGN
slightly over-predicts the displacement at the right end of the cylinder
(Fig. 6).
#### 3.2.2 Inverse solution of unknown material properties
Next, we solve an inverse problem governed by the linear elasticity equations
(Eq. 21). The Lamé parameters ($\lambda$ and $\mu$) are assumed to be unknown,
whose true values are set as $\lambda=\mu=1$.
(a) PI-GGN prediction (b) Errors of predicted $u$ field
(c) $\lambda$ (d) $\mu$
Figure 7: PI-GGN inverse solutions of the Lamé parameters obtained by
assimilating observed displacement data (black dots) using (1) the penalty
method and (2) the hard enforcement approach, compared against the ground
truth, together with the errors of the field predictions by soft and hard
data assimilation.
The displacement field is observed at five randomly selected points shown in
Fig. 7(a). The entire field of the displacement is obtained via PI-GGN and the
Lamé parameters can be inferred accurately as well (Figs. 7c and 7d). The
relative error of the PI-GGN-predicted displacement field is $0.005$ when
assimilating the data in a hard manner, which is slightly lower than that of
the penalty method ($e=0.01$), as shown in Fig. 7b.
### 3.3 Navier-Stokes equations
In the last test case, we study forward and inverse problems governed by the
steady incompressible Navier-Stokes (NS) equations, which are more
challenging due to their strong nonlinearity. The steady NS equations model
viscous fluid flow with constant density and can be expressed as,
$\begin{split}(v\cdot\nabla)v-\nu\Delta v+\nabla p=0,\quad\nabla\cdot
v=0\quad\text{in }\Omega,\\\ v=v^{D}\quad\text{on }\partial\Omega^{D},\\\
\nu(n\cdot\nabla)v-pn=0\quad\text{on }\partial\Omega^{N},\end{split}$ (22)
where $v:\Omega\rightarrow\mathbb{R}^{d}$ is the velocity vector,
$p:\Omega\rightarrow\mathbb{R}$ is the pressure, $\nu$ is the viscosity of
the fluid, and $n:\partial\Omega\rightarrow\mathbb{R}^{d}$ is the unit outward
normal vector to the boundary. The solution variable vector is denoted by
$u=[v_{1},v_{2},p]$. The viscosity is set as $\nu=0.01$. For stability
reasons, a mixed element approximation is adopted [43]. A separate sub-net is
constructed for prediction of each of the solution variables $v_{1}$, $v_{2}$
and $p$.
#### 3.3.1 Forward solution of velocity and pressure fields
Figure 8: PI-GGN forward solutions of the velocity magnitude and pressure
fields for the lid-driven cavity, compared against the corresponding FEM
reference (panels: (a) PI-GGN $|u|$, (b) FEM $|u|$, (c) PI-GGN $p$, (d) FEM
$p$). The relative error is $8.7\times 10^{-3}$ for the velocity prediction
and $1.95\times 10^{-2}$ for the pressure prediction.
First, we test the proposed approach on a classic flow problem, lid-driven
cavity flow, defined on a square domain. The lid is placed on the top edge and
moves rightward ($v_{1}=1,v_{2}=0$). The remaining three edges are set as no-
slip walls ($v_{1}=v_{2}=0$). The domain is discretized by $100$ quadrilateral
elements. The numbers of collocation points for velocity and pressure fields
are $441$ and $121$, respectively. The contours of the forward solutions of
velocity and pressure by PI-GGN are in good agreement with the corresponding
FEM reference, as shown in Fig. 8. The relative prediction errors are on the
order of $1\%$ or below. It is worth noting that over $10000$ collocation
points were needed to achieve the same level of accuracy with AD-based
FC-PINNs [16, 17].
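The relative errors quoted above and in the figure captions are presumably computed as relative $L_2$ norms against the FEM reference; the exact norm is not stated in the text, so the following is a minimal sketch under that assumption:

```python
import numpy as np

def relative_l2_error(pred, ref):
    """Relative L2 error ||pred - ref|| / ||ref||, a common way to compare
    network predictions against an FEM reference solution."""
    pred = np.asarray(pred, dtype=float)
    ref = np.asarray(ref, dtype=float)
    return np.linalg.norm(pred - ref) / np.linalg.norm(ref)

# Toy check: a uniform 1% perturbation gives a relative error of 0.01.
ref = np.ones(441)            # e.g. velocity values at 441 collocation points
pred = 1.01 * ref
print(round(relative_l2_error(pred, ref), 6))  # → 0.01
```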
Figure 9: PI-GGN forward solutions of the velocity magnitude and pressure
fields for the idealized stenosis, compared against the corresponding FEM
reference (panels: (a) PI-GGN $|u|$, (b) FEM $|u|$, (c) PI-GGN $p$, (d) FEM
$p$). The relative error is $4.4\times 10^{-3}$ for the velocity prediction
and $1.8\times 10^{-2}$ for the pressure prediction.
We also test PI-GGN by solving the fluid flow in an idealized stenosis, where
the inlet velocity is set as $v^{D}=[0,1]$ at the bottom ($y=0$) and a no-
traction boundary condition is prescribed at the outlet on the top.
The same finite-element setting as in the lid-driven cavity problem is used.
Similarly, both the velocity and pressure fields are solved accurately, and
the PI-GGN predictions agree well with the FEM reference.
#### 3.3.2 Inverse solution of unknown inlet velocity field and unobserved
pressure field
Lastly, we consider an inverse problem governed by the NS equations.
Figure 10: PI-GGN inverse solutions of the inlet velocity field by
assimilating observed velocity data at 19 randomly selected points. Panels
show (a) the predicted $|u|$, (b) the reference $|u|$, (c) the predicted $p$,
(d) the reference $p$, and (e) the inlet profiles inferred by the penalty
method and the hard enforcement method against the true profile. The relative
error of the inferred inlet profile is $e=0.4$ with the soft penalty method
and $e=0.04$ with the hard enforcement approach.
In particular, the inlet velocity field is assumed unknown and is inferred by
assimilating sparse velocity observation data, as shown in Fig. 10b. The true
inlet has a parabolic profile, as shown in Fig. 10e. The functional form of
the profile is not predefined in solving the inverse problem; that is, the
dimension of the inversion equals the number of degrees of freedom of the
inlet, which is more than 20. By assimilating velocity observation data at
sparse locations, our proposed method can accurately infer the unknown inlet
velocity profile and also recover the entire velocity and pressure fields
very well. However, the inlet inferred by the penalty-based data assimilation
approach is notably less accurate and deviates from the ground truth. Despite
using the same penalty coefficient as in the previous cases, the inference
performance deteriorates significantly. The proposed way of assimilating data
strictly avoids hyperparameter tuning and is more robust.
## 4 Conclusion
In this paper, a novel discrete PINN framework is proposed for solving both
forward and inverse problems governed by PDEs in a unified manner. Built upon
the combination of graph convolutional networks (GCNs) and a Galerkin
variational formulation of the physics-informed loss function, the proposed
PINN can naturally handle irregular domains with unstructured meshes, and
training is efficient because the polynomial basis reduces the search space.
Thanks to the hard enforcement of boundary conditions and sparse observation
data, the proposed method does not require tuning penalty parameters and is
more robust. Numerical results from several forward and inverse problems
governed by linear and nonlinear PDEs demonstrate the effectiveness of the
proposed method. Furthermore, the authors believe this work contributes to a
healthy combination of scientific deep learning and classic numerical
techniques rather than isolating them from each other.
## Compliance with Ethical Standards
Conflict of Interest: The authors declare that they have no conflict of
interest.
## Acknowledgment
The authors would like to acknowledge the funds from National Science
Foundation under award numbers CMMI-1934300 and OAC-2047127 (JXW, HG), the Air
Force Office of Scientific Research (AFOSR) under award number
FA9550-20-1-0236 (MZ), and startup funds from the College of Engineering at
University of Notre Dame in supporting this study.
## References
* [1] M. Raissi, P. Perdikaris, G. E. Karniadakis, Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, Journal of Computational Physics 378 (2019) 686–707.
  * [2] A. G. Baydin, B. A. Pearlmutter, A. A. Radul, J. M. Siskind, Automatic differentiation in machine learning: a survey, Journal of Machine Learning Research 18.
* [3] Y. Chen, L. Lu, G. E. Karniadakis, L. Dal Negro, Physics-informed neural networks for inverse problems in nano-optics and metamaterials, Optics express 28 (8) (2020) 11618–11633.
* [4] L. Lu, X. Meng, Z. Mao, G. E. Karniadakis, Deepxde: A deep learning library for solving differential equations, SIAM Review 63 (1) (2021) 208–228.
* [5] C. Rao, H. Sun, Y. Liu, Physics-informed deep learning for computational elastodynamics without labeled data, Journal of Engineering Mechanics 147 (8) (2021) 04021043.
* [6] Z. Chen, Y. Liu, H. Sun, Deep learning of physical laws from scarce data, arXiv preprint arXiv:2005.03448.
* [7] F. Sahli Costabal, Y. Yang, P. Perdikaris, D. E. Hurtado, E. Kuhl, Physics-informed neural networks for cardiac activation mapping, Frontiers in Physics 8 (2020) 42.
* [8] L. Sun, H. Gao, S. Pan, J.-X. Wang, Surrogate modeling for fluid flows based on physics-constrained deep learning without simulation data, Computer Methods in Applied Mechanics and Engineering 361 (2020) 112732.
* [9] M. Raissi, A. Yazdani, G. E. Karniadakis, Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations, Science 367 (6481) (2020) 1026–1030.
  * [10] G. Kissas, Y. Yang, E. Hwuang, W. R. Witschey, J. A. Detre, P. Perdikaris, Machine learning in cardiovascular flows modeling: Predicting arterial blood pressure from non-invasive 4d flow mri data using physics-informed neural networks, Computer Methods in Applied Mechanics and Engineering 358 (2020) 112623.
* [11] S. Cai, H. Li, F. Zheng, F. Kong, M. Dao, G. E. Karniadakis, S. Suresh, Artificial intelligence velocimetry and microaneurysm-on-a-chip for three-dimensional analysis of blood flow in physiology and disease, Proceedings of the National Academy of Sciences 118 (13).
* [12] A. Arzani, J.-X. Wang, R. M. D’Souza, Uncovering near-wall blood flow from sparse data with physics-informed neural networks, Physics of Fluids.
* [13] X. Jin, S. Cai, H. Li, G. E. Karniadakis, NSFnets (navier-stokes flow nets): Physics-informed neural networks for the incompressible navier-stokes equations, Journal of Computational Physics 426 (2021) 109951.
* [14] Z. Mao, A. D. Jagtap, G. E. Karniadakis, Physics-informed neural networks for high-speed flows, Computer Methods in Applied Mechanics and Engineering 360 (2020) 112789.
  * [15] O. Hennigh, S. Narasimhan, M. A. Nabian, A. Subramaniam, K. Tangsali, M. Rietmann, J. d. A. Ferrandis, W. Byeon, Z. Fang, S. Choudhry, NVIDIA SimNet: an AI-accelerated multi-physics simulation framework, arXiv preprint arXiv:2012.07938.
* [16] S. Wang, Y. Teng, P. Perdikaris, Understanding and mitigating gradient pathologies in physics-informed neural networks, arXiv preprint arXiv:2001.04536.
* [17] A. D. Jagtap, E. Kharazmi, G. E. Karniadakis, Conservative physics-informed neural networks on discrete domains for conservation laws: Applications to forward and inverse problems, Computer Methods in Applied Mechanics and Engineering 365 (2020) 113028.
* [18] J. Berg, K. Nyström, A unified deep artificial neural network approach to partial differential equations in complex geometries, Neurocomputing 317 (2018) 28–41.
* [19] Y. Zhu, N. Zabaras, P.-S. Koutsourelakis, P. Perdikaris, Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data, Journal of Computational Physics 394 (2019) 56–81.
* [20] H. Gao, L. Sun, J.-X. Wang, Phygeonet: Physics-informed geometry-adaptive convolutional neural networks for solving parameterized steady-state pdes on irregular domain, Journal of Computational Physics (2020) 110079.
* [21] N. Geneva, N. Zabaras, Modeling the dynamics of pde systems with physics-constrained deep auto-regressive networks, Journal of Computational Physics 403 (2020) 109056.
* [22] R. Zhang, Y. Liu, H. Sun, Physics-informed multi-lstm networks for metamodeling of nonlinear structures, Computer Methods in Applied Mechanics and Engineering 369 (2020) 113226.
* [23] N. Wandel, M. Weinmann, R. Klein, Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions, Physics of Fluids 33 (4) (2021) 047117.
* [24] R. Ranade, C. Hill, J. Pathak, Discretizationnet: A machine-learning based solver for navier–stokes equations using finite volume discretization, Computer Methods in Applied Mechanics and Engineering 378 (2021) 113722.
* [25] T. J. Hughes, The finite element method: linear static and dynamic finite element analysis, Courier Corporation, 2012.
* [26] C. A. Duarte, J. T. Oden, H-p clouds—an h-p meshless method, Numerical Methods for Partial Differential Equations: An International Journal 12 (6) (1996) 673–705.
* [27] E. Weinan, B. Yu, The deep ritz method: a deep learning-based numerical algorithm for solving variational problems, Communications in Mathematics and Statistics 6 (1) (2018) 1–12.
* [28] E. Samaniego, C. Anitescu, S. Goswami, V. M. Nguyen-Thanh, H. Guo, K. Hamdia, X. Zhuang, T. Rabczuk, An energy approach to the solution of partial differential equations in computational mechanics via machine learning: Concepts, implementation and applications, Computer Methods in Applied Mechanics and Engineering 362 (2020) 112790.
  * [29] Y. Zang, G. Bao, X. Ye, H. Zhou, Weak adversarial networks for high-dimensional partial differential equations, Journal of Computational Physics 411 (2020) 109409.
* [30] E. Kharazmi, Z. Zhang, G. E. Karniadakis, Variational physics-informed neural networks for solving partial differential equations, arXiv preprint arXiv:1912.00873.
* [31] E. Kharazmi, Z. Zhang, G. E. Karniadakis, hp-vpinns: Variational physics-informed neural networks with domain decomposition, Computer Methods in Applied Mechanics and Engineering 374 (2021) 113547.
* [32] R. Khodayi-Mehr, M. Zavlanos, Varnet: Variational neural networks for the solution of partial differential equations, in: Learning for Dynamics and Control, PMLR, 2020, pp. 298–307.
* [33] H. Yao, Y. Gao, Y. Liu, Fea-net: A physics-guided data-driven model for efficient mechanical response prediction, Computer Methods in Applied Mechanics and Engineering 363 (2020) 112892.
* [34] A. Sanchez-Gonzalez, N. Heess, J. T. Springenberg, J. Merel, M. Riedmiller, R. Hadsell, P. Battaglia, Graph networks as learnable physics engines for inference and control, in: International Conference on Machine Learning, PMLR, 2018, pp. 4470–4479.
* [35] P. W. Battaglia, R. Pascanu, M. Lai, D. Rezende, K. Kavukcuoglu, Interaction networks for learning about objects, relations and physics, arXiv preprint arXiv:1612.00222.
* [36] R. Maulik, P. Balaprakash, Site-specific graph neural network for predicting protonation energy of oxygenate molecules, arXiv preprint arXiv:2001.03136.
* [37] T. Pfaff, M. Fortunato, A. Sanchez-Gonzalez, P. W. Battaglia, Learning mesh-based simulation with graph networks, arXiv preprint arXiv:2010.03409.
* [38] A. Sanchez-Gonzalez, J. Godwin, T. Pfaff, R. Ying, J. Leskovec, P. Battaglia, Learning to simulate complex physics with graph networks, in: International Conference on Machine Learning, PMLR, 2020, pp. 8459–8468.
* [39] Z. Li, N. Kovachki, K. Azizzadenesheli, B. Liu, K. Bhattacharya, A. Stuart, A. Anandkumar, Multipole graph neural operator for parametric partial differential equations, arXiv preprint arXiv:2006.09535.
* [40] M. Defferrard, X. Bresson, P. Vandergheynst, Convolutional neural networks on graphs with fast localized spectral filtering, Advances in neural information processing systems 29 (2016) 3844–3852.
  * [41] E. Brian Davies, G. Gladwell, J. Leydold, P. Stadler, Discrete nodal domain theorems, Linear Algebra and its Applications 336 (1-3) (2001) 51–60.
* [42] V. Nair, G. E. Hinton, Rectified linear units improve restricted boltzmann machines, in: ICML, 2010.
* [43] P. Letallec, A mixed finite element approximation of the navier-stokes equations, Numerische Mathematik 35 (4) (1980) 381–404.
Prabhat Kumar Jha, Tata Institute of Fundamental Research, Mumbai, [email protected], https://orcid.org/0000-0001-6225-9147
Subject classification: Theory of computation — Logic and verification; Theory of computation — Verification by model checking; Mathematics of computing — Ordinary differential equations.
Funding: We acknowledge support of the Department of Atomic Energy, Government of India, under project no. RTI4001.
###### Acknowledgements.
I want to thank my MSc thesis advisors S. Akshay and Piyush Srivastava for introducing me to the Skolem problem and encouraging me to write this paper.
# Cosine and Computation
Prabhat Kumar Jha
###### Abstract
We are interested in the decision problem $\exists?t\in\mathbb{N},\cos
t\theta=c$, where $\cos\theta$ and $c$ are algebraic numbers. We call this the
$\cos t\theta$ problem. This is an exploration of Diophantine equations with
analytic functions. Polynomials, exponentials with real base and the cosine
function are closely related to the decision problem
$\exists?t\in\mathbb{N},u^{T}M^{t}v=0$, where
$u,v\in\mathbb{Q}^{n},M\in\mathbb{Q}^{n\times n}$. This problem is known
as the "Skolem problem" and is useful in the verification of linear systems.
Its decidability remains unknown. Single-variable Diophantine equations with
an exponential function with real algebraic base and the $\cos t\theta$
function with $\theta$ a rational multiple of $\pi$ are decidable. This idea
is central in proving the decidability of the Skolem problem when the
eigenvalues of $M$ are roots of real numbers. The main difficulty with the
cases where the eigenvalues are not roots of reals is that even for small
orders, decidability requires transcendental number theory, which does not
scale to higher orders. We provide a first attempt to overcome this by giving
a PTIME algorithm for $\cos t\theta$ when $\theta$ is not a rational multiple
of $\pi$, without using techniques from transcendental number theory.
One of the main difficulties with Diophantine equations is that tools from
calculus cannot be used, since the domain of the variable is $\mathbb{N}$.
We also provide an attempt to overcome this by reducing the Skolem problem to
solving a one-variable equation over the reals (involving polynomials,
exponentials with real bases and the $\cos t\theta$ function, with $t$
ranging over the reals and $\theta\in[0,\pi]$).
###### keywords:
Matrix, Orbit, Subspace, Reachability, Verification, Recurrence, Linear,
Continuization, Cosine
## 1 Introduction
Reachability problems are a special type of verification problem which ask
whether a given system can ever reach a given configuration. If the behaviour
of the system is deterministic and can be described as a function of time,
then the problem reduces to solving an equation. One such example is
discrete-time linear dynamical systems, whose behaviour can be described as a
linear recurring sequence. The problem of checking the existence of a $0$ in
a given linear recurring sequence (LRS) is known as the Skolem problem, and
the decidability of this problem remains unknown. Finding the existence of a
$0$ in such sequences over $\mathbb{Q}$ reduces to finding the existence of
solutions of exponential Diophantine equations over algebraic numbers. The
Skolem problem is in $NP^{RP}$ when the eigenvalues of the given LRS are
roots of real numbers [1]. Hence, the challenge is to solve the cases when
the eigenvalues are not roots of reals, and the only known decidability
results are for orders 2 and 3, using Baker's method of linear forms in
logarithms [8]. Here we study a basic case of exponential Diophantine
equations, which we call the "$\cos t\theta$ problem".
Given real algebraic numbers $\cos\theta$ and $c$ between $-1$ and $1$, the
$\cos t\theta$ problem asks whether there is a natural number $t$ such that
$\cos t\theta=c$. This is a special case of the Skolem problem of order 3
over algebraic numbers and hence is known to be decidable, but the bounds
obtained by Baker's method are exponential, which yields $NP^{RP}$
complexity.
Given a square matrix $M$, a vector $u$ and an affine subspace $W$, the affine
subspace reachability problem asks whether there is a natural number $t$ such
that $M^{t}u\in W$. The Orbit problem is the $0$-dimensional case of the
affine subspace reachability problem and is known to be decidable in P [4, 5].
Our first contribution is a polynomial-time algorithm for the $\cos t\theta$
problem, obtained by reducing it to the Orbit problem over $\mathbb{Q}$. Our
reduction uses some facts from algebraic number theory, and the algorithm for
the Orbit problem also uses algebraic number theory and some properties of
matrices; hence we do not need transcendental number theory. The Skolem
problem is equivalent to the affine subspace reachability problem under
polynomial reductions. Our method is the first application of the Orbit
problem to solve a non-trivial case of the Skolem problem.
We then consider two generalizations of the $\cos t\theta$ problem: the
$r^{t}\cos t\theta$ problem and the $\Sigma\cos t\theta_{i}$ problem. These
problems are special cases of the Skolem problem over algebraic numbers. The
first is known to be decidable using Baker's method, as it is of order 3, but
an efficient algorithm or a better lower bound remains unknown. Another
possible approach is via the affine subspace reachability problem of
dimension 1 [2], but that also requires Baker's method, and the gap between
the known upper bound and lower bound remains unchanged. The second is not
known to be decidable; only the special case when the $\theta_{i}$ are
rational multiples of $\pi$ is known to be $NP$-complete [1].
Our second contribution is a polynomial-time reduction of
$\exists?t\in\mathbb{N},\Sigma\cos t\theta_{i}=c$ to
$\exists?t\in\mathbb{R},\Sigma\cos t\theta_{i}=c$. The same technique shows
that the Skolem problem can be reduced to $\exists?t\in\mathbb{R},\Sigma
r_{i}^{t}p_{i}(t)\cos t\theta_{i}=c$, where $r_{i}$ is an algebraic number
and $p_{i}$ is a polynomial. This is a special case of the one-variable
restriction of the extension of the theory of the real numbers with the
cosine and power functions. The unrestricted case is known to be undecidable,
as solving Diophantine equations with 4 or more variables is undecidable.
We go through some preliminaries of computation with algebraic numbers in
Section 2; specifically, we see how to represent algebraic numbers and the
complexity of basic operations, and we also cover the basics of linear
recurring sequences and the Orbit problem. The algorithm for the $\cos
t\theta$ problem and its analysis are presented in Section 3. In Section 4,
extensions of the $\cos t\theta$ problem are studied. The second contribution
is provided in Section 5. Finally, we give conclusions and open problems in
Section 6.
## 2 Preliminaries
In this section, we go through some preliminaries of computation with
algebraic numbers and linear recurring sequences. We begin with computation
with algebraic numbers, referring to Cohen [3] for this topic.
### 2.1 Computation with Algebraic Numbers
In the study of matrices over $\mathbb{Q}$, the eigenvalues come from a
subfield of $\mathbb{C}$ known as the algebraic numbers. Since discrete-time
linear systems are represented using matrices, and eigenvalues are essential
to studying properties of matrices, we are interested in algebraic numbers.
We begin by defining algebraic numbers:
###### Definition 2.1 (Algebraic Numbers).
A complex number $\alpha$ is said to be an algebraic number if there is a
polynomial $p\in\mathbb{Z}[x]$ such that $p(\alpha)=0$. There is a unique
such polynomial of minimal degree whose coefficients have greatest common
divisor 1, called the minimal polynomial of $\alpha$. The degree $D(\alpha)$
is the degree of the minimal polynomial of $\alpha$. The height $H(\alpha)$
is the maximum absolute value of a coefficient of the minimal polynomial of
$\alpha$. The roots of the minimal polynomial of $\alpha$ are called the
Galois conjugates of $\alpha$. The norm $\mathcal{N}(\alpha)$ is the product
of the Galois conjugates of $\alpha$. If the leading coefficient of the
minimal polynomial is 1, then $\alpha$ is said to be an algebraic integer.
$\mathbb{A}$ denotes the set of all algebraic numbers and
$\mathcal{O}_{\mathbb{A}}$ denotes the set of all algebraic integers.
Now that we have defined algebraic numbers, for the purpose of computation we
need to represent them. We represent integers as binary strings and rational
numbers as pairs of integers. The canonical representation of an algebraic
number is defined below.
###### Definition 2.2.
The canonical representation of an algebraic number $\alpha$ is a tuple
$(P,x,y,r)$ where $P$ is the minimal polynomial of $\alpha$ and
$x,y,r\in\mathbb{Q}$ are such that $\alpha$ lies in the circle centered at
$x+\iota y$ with radius $r$; $x,y,r$ are chosen to distinguish $\alpha$ from
its Galois conjugates.
Note that the canonical representation is not unique, but it is still
straightforward to check the equality of two algebraic numbers. We also need
to make sure that $x,y,r$ do not have a very large representation, as that
can increase the complexity. The theorem below guarantees this.
###### Theorem 2.3.
[7] If two conjugates $\alpha_{i},\alpha_{j}$ of $\alpha$ are not equal then
$|\alpha_{i}-\alpha_{j}|>\frac{\sqrt{6}}{d^{\frac{(d+1)}{2}}H^{d-1}}$.
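The separation bound of Theorem 2.3 is straightforward to evaluate; a minimal sketch, where the polynomial $x^{2}-2$ is our own illustrative example (degree $d=2$, height $H=2$):

```python
from math import sqrt

def separation_bound(d, H):
    """Lower bound on the distance between distinct conjugates of an
    algebraic number of degree d and height H, as in Theorem 2.3."""
    return sqrt(6) / (d ** ((d + 1) / 2) * H ** (d - 1))

# Example: x^2 - 2 has d = 2, H = 2; its roots +/- sqrt(2) are 2*sqrt(2)
# apart, comfortably above the guaranteed bound (~0.433).
b = separation_bound(2, 2)
print(b < 2 * sqrt(2))  # → True
```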
While the canonical representation is common in the literature, we need
another representation in order to use algebraic numbers as matrices. This
uses some applications of the LLL algorithm, and we will use the following
results directly without proof; see Section 2.6 of Cohen [3] for details.
###### Theorem 2.4.
There is a polynomial time algorithm which takes
$z_{1},z_{2},...,z_{k},z\in\mathbb{A}$ as input and outputs whether $z$ is a
$\mathbb{Q}$-linear combination of $z_{1},z_{2},...,z_{k}$. In case of
positive answer it also outputs the coefficients.
Theorem 2.4 provides a way to write an algebraic number as a
$\mathbb{Q}$-vector in a suitable number field. Using this and elementary
linear algebra, one can compute the matrix of multiplication by an algebraic
number in polynomial time. In this vector representation, basic operations
such as addition, multiplication and division are all computable in
polynomial time. This fact is important for our main result in Section 3.
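As a concrete instance of this vector representation (our own illustrative example; the general construction via Theorem 2.4 and LLL is not reproduced here), consider $\mathbb{Q}(\sqrt{2})$ with basis $\{1,\sqrt{2}\}$:

```python
from fractions import Fraction

# In Q(sqrt(2)) with basis {1, sqrt(2)}, an element a + b*sqrt(2) is the
# vector (a, b).  Multiplication by sqrt(2) sends a + b*sqrt(2) to
# 2b + a*sqrt(2), i.e. it is the Q-linear map with matrix M below.
M = [[Fraction(0), Fraction(2)],
     [Fraction(1), Fraction(0)]]

def apply(M, v):
    """Exact matrix-vector product over Q."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# (1 + sqrt(2)) * sqrt(2) = 2 + sqrt(2): vector (1, 1) maps to (2, 1).
print(apply(M, [Fraction(1), Fraction(1)]))  # → [Fraction(2, 1), Fraction(1, 1)]
```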
### 2.2 From Skolem problem to Diophantine equations
In this section, we will go through the basics of linear recurring sequences
in order to understand the Skolem problem.
###### Definition 2.5 (Linear Recurring Sequences).
A linear recurring sequence (LRS) of order $k$ over a ring $R$ is a sequence
of elements of $R$ which satisfies $\forall t>k,\ a_{t}=\Sigma_{1\leq i\leq
k}c_{i}a_{t-i}$, where $c_{i}\in R$.
An LRS of order $k$ is determined by its first $k$ terms, as the remaining
terms can be computed deterministically using the recurrence relation. We are
interested in the cases when $R$ is one of $\mathbb{Z}$, $\mathbb{Q}$ and
$\mathbb{A}$. There is an interesting result about LRSs over fields of
characteristic 0, concerning the zeros of an LRS.
###### Theorem 2.6 (Skolem-Mahler-Lech).
[6] The set of zeros of an LRS over a field of characteristic $0$ is a union
of a finite set and finitely many arithmetic progressions.
The known proof of Theorem 2.6 uses $p$-adic methods and proof by
contradiction, and is non-constructive: it does not provide an algorithm to
check whether a given LRS has a 0. The problem of deciding the existence of a
0 is known as the Skolem problem. The following folklore claim gives another
characterization of LRSs in terms of matrices.
###### Claim 1.
Given a $k\times k$ matrix $M$ and $k$-dimensional vectors $u$ and $v$, the
sequence $a_{t}=u^{T}M^{t}v$ is an LRS. Conversely, given any LRS
$\\{a_{t}\\}$, there exist a matrix $M$ and vectors $u$ and $v$ such that
$a_{k+t}=u^{T}M^{t}v$, where $k$ is the order of the LRS.
###### Proof 2.7.
Consider the characteristic polynomial of $M$, say
$x^{d}-\Sigma_{i=1}^{d}a_{i}x^{d-i}$. The Cayley-Hamilton theorem implies
that $M^{d}=\Sigma_{i=1}^{d}a_{i}M^{d-i}$. Multiplying both sides by
$M^{t-d}$, we get $M^{t}=\Sigma_{i=1}^{d}a_{i}M^{t-i}$. Multiplying by the
vectors $u$ and $v$ and using linearity, we get
$u^{T}M^{t}v=\Sigma_{i=1}^{d}a_{i}u^{T}M^{t-i}v$. From Definition 2.5, it
follows that $u^{T}M^{t}v$ is an LRS.
Conversely, let the first $k$ terms be $a_{1},...,a_{k}$ and the recurrence
be $a_{t}=\Sigma_{i=1}^{k}c_{i}a_{t-i}$. Let $M$ be:
$\begin{bmatrix}c_{1}&c_{2}&...&c_{k-1}&c_{k}\\\
&\mathbf{I}_{k-1}&&&\mathbf{0}\end{bmatrix}$
where $\mathbf{I}_{k-1}$ is identity matrix of order $k-1$ and $\mathbf{0}$ is
a column matrix of size $k-1$ with $0$ as all of its entries. Let
$u=(1,0,...,0)^{T}$ and $v=(a_{k},a_{k-1},...,a_{1})^{T}$. Now using induction
we can verify that $u^{T}M^{t}v=a_{k+t}$.
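The companion-matrix construction in the proof can be checked on a familiar example; the sketch below (the Fibonacci numbers, our own choice of LRS) verifies $u^{T}M^{t}v=a_{k+t}$ numerically:

```python
# Sketch of Claim 1 for the Fibonacci sequence a_t = a_{t-1} + a_{t-2}
# (order k = 2, first terms a_1 = a_2 = 1): the companion matrix below
# satisfies u^T M^t v = a_{k+t}.

def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

M = [[1, 1],      # recurrence coefficients c_1, c_2 in the first row
     [1, 0]]      # shifted identity I_{k-1} and a zero column below
u = [1, 0]
v = [1, 1]        # (a_k, ..., a_1)

fib = [1, 1]
for _ in range(10):
    fib.append(fib[-1] + fib[-2])

w = v
for t in range(1, 9):
    w = mat_vec(M, w)                                   # w = M^t v
    assert sum(a * b for a, b in zip(u, w)) == fib[t + 1]  # a_{k+t}
print("companion-matrix form matches the recurrence")
```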
We now have another version of the Skolem problem: checking whether
$u^{T}M^{t}v=0$ for some $t$. When the LRS is over the rational or algebraic
numbers, the equation $u^{T}M^{t}v=0$ has a closed form, which can be
obtained using the Jordan canonical form and properties of matrix
multiplication. If the eigenvalues are $\lambda_{1},...,\lambda_{m}$, then
the closed-form equation is of the form
$\Sigma_{i=1}^{m}p_{i}(t)\lambda_{i}^{t}=0$, where $p_{i}\in\mathbb{Z}[x]$.
When the LRS is over the rational numbers, the eigenvalues occur in conjugate
pairs and are algebraic, so we get the equation
$\Sigma_{i=1}^{m}p_{i}(t)r_{i}^{t}\cos t\theta_{i}=0$, where
$p_{i}\in\mathbb{Z}[x]$, $r_{i},\cos\theta_{i}\in\mathbb{R}\cap\mathbb{A}$
and $|\cos\theta_{i}|\leq 1$. We state this as the following lemma:
###### Lemma 2.8.
The Skolem problem over the rational numbers (or integers) can be reduced to
solving the equation $\Sigma_{i=1}^{m}p_{i}(t)r_{i}^{t}\cos t\theta_{i}=0$,
where $p_{i}\in\mathbb{Z}[x]$,
$r_{i},\cos\theta_{i}\in\mathbb{R}\cap\mathbb{A}$ and $\theta_{i}\in[0,\pi]$.
We will use this form of Skolem problem throughout this paper.
### 2.3 Affine Subspace Reachability Problem
The affine subspace reachability problem asks whether a given linear system
reaches a given affine subspace after some number of steps. We define this
problem precisely here.
###### Definition 2.9.
Input: $M\in\mathbb{Q}^{k\times k},v\in\mathbb{Q}^{k}$ and an affine subspace
$W$ described using linear equations it satisfies.
Output: “Yes” if there is a $t\in\mathbb{N}$ such that $M^{t}v\in W$; “No”
otherwise.
Affine subspaces are defined using equations of the form $u^{T}x=c$. It is
immediate that the Skolem problem is a special case of the affine subspace
reachability problem. An interesting folklore converse is that the affine
subspace reachability problem is polynomial-time reducible to the Skolem
problem, which we now sketch.
###### Theorem 2.10 (Reduction to Skolem Problem (Folklore)).
Affine subspace reachability problem is polynomial time reducible to Skolem
problem.
###### Proof 2.11 (Proof-Sketch).
If Skolem problem is decidable then we can also compute the zero set
explicitly. Affine subspace reachability problem is like finding intersection
of solution sets of equations of type $u^{T}M^{t}v=0$. Intersection of
arithmetic progressions can be computed using chinese remainder theorem.
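The Chinese-remainder step in the proof sketch can be illustrated as follows (a naive scan rather than an optimized extended-Euclid solver; the example progressions are our own):

```python
from math import gcd

def intersect_aps(r1, m1, r2, m2):
    """Intersect the arithmetic progressions {r1 + k*m1} and {r2 + k*m2}
    via the generalized Chinese remainder theorem.  Returns (r, m) with the
    intersection equal to {r + k*m}, or None if it is empty."""
    g = gcd(m1, m2)
    if (r2 - r1) % g != 0:
        return None          # the two congruences are incompatible
    lcm = m1 // g * m2
    # Scan one progression for a common element; fine for a sketch, a real
    # solver would use the extended Euclidean algorithm instead.
    r = r1
    while r % m2 != r2 % m2:
        r += m1
    return r % lcm, lcm

print(intersect_aps(2, 6, 5, 9))  # → (14, 18): {2, 8, 14, ...} ∩ {5, 14, 23, ...}
```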
The Orbit problem is the $0$-dimensional affine subspace reachability problem.
###### Definition 2.12 (Orbit problem).
Input: $M\in\mathbb{Q}^{k\times k},u,v\in\mathbb{Q}^{k}$ Output: “Yes” if
there is a $t\in\mathbb{N}$ such that $M^{t}u=v$; “No” otherwise.
This problem is known to be decidable in PTIME; the techniques used come from
algebraic number theory. The link between the Skolem problem and the Orbit
problem was also hinted at in [4].
###### Theorem 2.13 (Complexity of Orbit problem).
[4, 5] The Orbit problem is in P.
## 3 The $\cos t\theta$ Problem
In this section we will provide a polynomial time reduction from the $\cos
t\theta$ problem to the Orbit problem. We first prove that the sequence
$a_{t}=\cos t\theta$ is an LRS.
###### Theorem 3.1 ($\cos t\theta$ is an LRS).
The sequence $a_{t}=\cos t\theta$ satisfies a linear recurrence relation over
$\mathbb{Q}$.
###### Proof 3.2.
Let $z=\cos\theta+\iota\sin\theta$ where $\sin\theta=\sqrt{1-\cos^{2}\theta}$.
Consider the minimal polynomial of $z$,
$c_{0}x^{k}-\Sigma_{i=1}^{k}c_{i}x^{k-i}$. This implies
$z^{k}=\Sigma_{i=1}^{k}(c_{i}/c_{0})z^{k-i}$. Multiplying both sides by
$z^{t-k}$, we get $z^{t}=\Sigma_{i=1}^{k}(c_{i}/c_{0})z^{t-i}$. Taking real
parts of both sides and using De Moivre's identity,
$\cos t\theta=\Sigma_{i=1}^{k}(c_{i}/c_{0})\cos(t-i)\theta$. We get a linear
recurrence relation over $\mathbb{Q}$.
The eigenvalues of this LRS are exactly the conjugates of $z$. The equation
$\cos t\theta=c$ is an affine subspace reachability problem of co-dimension
1; it can also be viewed as a Skolem problem of order 3 over algebraic
numbers. We now give a reduction from this problem to the Orbit problem.
### 3.1 $\cos t\theta$ is in PTIME
The reduction exploits the fact that multiplication with algebraic numbers is
a $\mathbb{Q}$-linear transformation.
###### Theorem 3.3 ($\cos t\theta$ problem is in P).
Given real algebraic numbers $\alpha=\cos\theta,c$ such that $|\alpha|\leq
1,|c|<1$, there is a polynomial time algorithm to check the existence of a
natural number $t$ such that $\cos t\theta=c$.
We provide the following algorithm.
1. Compute $z=\alpha+\iota\sqrt{1-\alpha^{2}}$.
2. Check if $c\pm\iota\sqrt{1-c^{2}}\in\mathbb{Q}(z)$. If both checks give a negative answer, return "No"; otherwise compute the coordinates of the target vectors (those among $c\pm\iota\sqrt{1-c^{2}}$ which are in $\mathbb{Q}(z)$) and go to the next step.
3. Compute the multiplication matrix $M$ for $z$.
4. Solve $M^{t}\mathbf{1}=v$ for all target vectors $v$.
5. Return the $OR$ of the outputs.
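For intuition, the pipeline can be mimicked with exact rational arithmetic, replacing the polynomial-time Orbit-problem oracle of step 4 with a bounded brute-force search (so this sketch only semi-decides within the search range; $\cos\theta=3/5$ and the target $c=-7/25$ are our own illustrative choices):

```python
from fractions import Fraction

# Hedged sketch: instead of the polynomial-time Orbit algorithm of [4, 5],
# brute-force z^t over a small range, using exact arithmetic in Q(i) for
# z = cos(theta) + i sin(theta) with cos(theta) = 3/5 and target c = -7/25.

def cmul(a, b):
    """(re, im) product with exact Fractions."""
    return (a[0] * b[0] - a[1] * b[1], a[0] * b[1] + a[1] * b[0])

z = (Fraction(3, 5), Fraction(4, 5))
c = Fraction(-7, 25)
s = Fraction(24, 25)                 # sqrt(1 - c^2), which happens to be rational
targets = {(c, s), (c, -s)}          # the two candidates c +/- i*sqrt(1 - c^2)

w, hit = (Fraction(1), Fraction(0)), None
for t in range(1, 50):
    w = cmul(w, z)                   # w = z^t, exactly
    if w in targets:
        hit = t
        break
print(hit)  # → 2, since cos(2*theta) = 2*(3/5)^2 - 1 = -7/25
```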
Below we provide a proof of Theorem 3.3.
###### Proof 3.4.
As $z=\cos\theta+\iota\sin\theta$ using De’Moivere’s identity, $z^{t}=\cos
t\theta+\iota\sin t\theta$. If $\cos t\theta=c$ then $\sin t\theta$ is either
$\sqrt{1-c^{2}}$ or $-\sqrt{1-c^{2}}$. We can consider both the cases. So we
need to check $\exists?t\in\mathbb{N}z^{t}=c+\iota\sqrt{1-c^{2}}$ or
$\exists?t\in\mathbb{N}z^{t}=c-\iota\sqrt{1-c^{2}}$. Step 2 checks if both of
these are not in $\mathbb{Q}(z)$, since $z^{t}\in\mathbb{Q}$, so we only need
to check for those targets which are in $\mathbb{Q}(z)$. This condition can be
checked in polynomial time as mentioned in Theorem 2.4. Theorem 2.4 also gives
coordinate for the case when it is in $\mathbb{Q}(z)$. Multiplication with $z$
is a linear transformation over $\mathbb{Q}(z)$ which is a vector space over
$\mathbb{Q}$. We can compute this matrix in polynomial time as mentioned in
Theorem 2.4. Now the problem to check
$\exists?t\in\mathbb{N}z^{t}=c+\iota\sqrt{1-c^{2}}$ or $\exists
t\in\mathbb{N}z^{t}=c-\iota\sqrt{1-c^{2}}$ is same as checking
$\exists?t\in\mathbb{N}M^{t}=v$ where $v$ is the vector representation for
$c\pm\iota\sqrt{1-c^{2}}$. This is an instance of Orbit problem. The reduction
is in polynomial time as all the required computation are done in polynomial
time and number of steps is constant. Using Theorem 2.13, we get that $\cos
t\theta$ problem is in P.
## 4 Extensions of $\cos t\theta$ Problem
The $\cos t\theta$ problem is a natural problem from the point of view of
Diophantine equations with trigonometric functions. This immediately suggests
inquiry into extensions of the $\cos t\theta$ problem. We will look into two
specific extensions: the first arises from the exponential function, while the
second arises from summation of LRSs.
We begin with the first extension.
### 4.1 The $r^{t}\cos t\theta$ problem
Given an algebraic number $z$ and a real algebraic number $c$, checking the
existence of $t$ such that $\mathrm{Re}(z^{t})=c$ is the motivation for this
extension. This can also be written as $r^{t}\cos t\theta=c$ where $r=|z|$ and
$\theta=\arg(z)$. We call this problem the “$r^{t}\cos t\theta$ problem”. Like
the $\cos t\theta$ problem, the $r^{t}\cos t\theta$ problem is also a case of
the Skolem problem of order 3 over algebraic numbers. Hence this problem is
also known to be decidable in $NP^{RP}$.
###### Theorem 4.1 (Polynomial time restrictions of $r^{t}\cos t\theta$
problem).
For the following conditions the $r^{t}\cos t\theta$ problem is in P:
1. $r\leq 1$;
2. $z$ has a $\mathbb{Q}$-conjugate with absolute value less than or equal to 1;
3. $r=\frac{\alpha}{\beta}$ and $\cos\theta=\frac{\gamma}{\delta}$, where $\alpha,\beta,\gamma,\delta\in\mathcal{O}_{\mathbb{A}}$ are such that the ideal generated by $\alpha$ has a prime factor which does not divide the ideal generated by $\delta$.
###### Proof 4.2 (Proof-sketch).
1. The $\cos t\theta$ problem is the special case $r=1$. The case $r<1$ is
also decidable in polynomial time, since after
$\left\lceil\frac{\log|c|}{\log|r|}\right\rceil$ steps we have $|r^{t}\cos
t\theta|<|c|$ at every later step.
2. Using the Galois transformation $z\mapsto\gamma$ we can convert
$z^{t}+\overline{z}^{t}=c$ into $\gamma^{t}+\overline{\gamma}^{t}=d$, where
$\gamma$ is a conjugate with $|\gamma|\leq 1$. This is then the same as the
previous case.
3. Using the valuation with respect to such a prime factor, the valuation of
$z^{t}$ is monotonically increasing in $t$, which gives a bound since the
valuation of $c$ is fixed.
The gap between upper and lower bounds remains, as these cases are not
exhaustive.
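Case 1 of the sketch admits a direct bounded search. The snippet below is a floating-point illustration on made-up inputs (an exact implementation would compare algebraic numbers symbolically): beyond $T=\left\lceil\frac{\log|c|}{\log r}\right\rceil$ we have $|r^{t}\cos t\theta|\leq r^{t}<|c|$, so only finitely many $t$ need checking.

```python
import math

# Floating-point sketch of case 1 (r < 1) with hypothetical inputs.
# Since |r^t cos(t*theta)| <= r^t, no solution exists once r^t < |c|,
# so it suffices to search t up to T = ceil(log|c| / log r).
# The tolerance stands in for exact algebraic-number comparison.
def search(r, cos_theta, c, tol=1e-12):
    theta = math.acos(cos_theta)
    T = math.ceil(math.log(abs(c)) / math.log(r))
    for t in range(T + 1):
        if abs(r**t * math.cos(t * theta) - c) < tol:
            return t
    return None

# r = 1/2, cos(theta) = 1/3, and c = r^2 cos(2*theta) = -7/36,
# so t = 2 is a solution and is found within the bound.
assert search(0.5, 1 / 3, -7 / 36) == 2
print("solution found at t = 2")
```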
### 4.2 The $\Sigma\cos t\theta_{i}$ problem
Another way to extend this problem is by extending the order. This problem
asks whether $\exists t\in\mathbb{N}$ such that $\Sigma_{i=1}^{i=k}c_{i}\cos
t\theta_{i}=0$. We call this the “$\Sigma\cos t\theta_{i}$ problem”. This
problem is not known to be decidable. The $\cos t\theta$ problem is a special
case of this problem. This problem is known to be NP-hard [1]. Even the
restriction of this problem in which the $\theta_{i}$s are rational multiples
of $\pi$ is known to be NP-complete. The only case for which we know
decidability with non-degenerate $\theta_{i}$ is the $\cos t\theta$ problem.
We conjecture the following.
###### Conjecture 4.3.
The $\Sigma\cos t\theta_{i}$ problem is decidable only if the Skolem problem
is decidable.
## 5 Continuization and Computation
In this section we take a few steps towards converting the Skolem problem to
its analytical version, using continuization. The motivation behind this is
the fact that equations of the form $r^{t}p(t)\cos t\theta=c$ can be solved
easily for $t\in\mathbb{R}$ where $\theta\in[0,\pi]$. We state the following
theorem as a first step towards continuization of the Skolem problem.
###### Proposition 5.1.
If $\exists?t\in\mathbb{R},\Sigma\cos t\theta_{i}=0$ is decidable then
$\exists?t\in\mathbb{N},\Sigma\cos t\theta_{i}=0$ is also decidable.
###### Proof 5.2.
We use the sum-of-squares method together with the fact that $\cos 2\pi t=1$
characterises the integers:
$\exists t\in\mathbb{N},\Sigma c_{i}\cos t\theta_{i}=c\iff\exists
t\in\mathbb{R},((\Sigma c_{i}\cos t\theta_{i})-c)^{2}+(\cos 2\pi t-1)^{2}=0$.
(Since the left-hand side of the inner equation is even in $t$, a negative
integer solution can always be replaced by a non-negative one.)
If we expand the square we get multiplicative terms, for example $\cos
t\theta_{i}\cos t\theta_{j}$; using the identity $\cos(A+B)+\cos(A-B)=2\cos
A\cos B$, we can convert them into additive cosine terms, so the equation on
the right-hand side is also of the desired form $\Sigma c_{i}\cos
t\theta_{i}=c^{\prime}$, and we get the reduction.
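The equivalence in the proof can be probed numerically; the example below is only an illustration (with $k=1$ and an instance constructed to have the solution $t=5$), not a decision procedure:

```python
import math

# Numerical illustration of the sum-of-squares continuization: the
# real-variable function f(t) = (cos(t*theta) - c)^2 + (cos(2*pi*t) - 1)^2
# vanishes exactly at natural-number solutions of cos(t*theta) = c.
theta = math.acos(1 / 3)
c = math.cos(5 * theta)          # constructed so that t = 5 is a solution

def f(t):
    return (math.cos(t * theta) - c) ** 2 + (math.cos(2 * math.pi * t) - 1) ** 2

assert f(5) < 1e-18                                  # zero at the solution
assert all(f(t) > 1e-6 for t in [4, 4.5, 5.3, 6, 7])  # positive elsewhere
print("f vanishes only at integer solutions")
```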
This can be extended to the Skolem problem as well; we omit the proof, as it
is very similar to the previous one.
###### Proposition 5.3.
If $\exists?t\in\mathbb{R},\Sigma p_{i}(t)r_{i}^{t}\cos t\theta_{i}=0$ is
decidable then $\exists?t\in\mathbb{N},\Sigma p_{i}(t)r_{i}^{t}\cos
t\theta_{i}=0$ is also decidable.
Note that this is a different problem from the continuous-time Skolem problem,
as the bases of the exponentials are algebraic numbers. However, this
technique of continuization may be extended to the membership problem for
$P$-recursive sequences and other sequences. We conjecture two statements:
one about $P$-recursive sequences and the other about computation in general.
###### Conjecture 5.4.
The membership problem for $P$-recursive sequences is reducible to solving one
variable equation over reals.
###### Conjecture 5.5.
The one-variable fragment of extensions of the theory of the reals by
first-order axiomatizable functions is decidable.
Conjecture 5.5 implies decidability of the Skolem problem and of the
membership problem for $P$-recursive sequences. If this conjecture is false,
then we get an extension of the reals which is undecidable, and that would
also have great implications for the theory of computation.
## 6 Conclusion
In this paper we presented small steps towards some challenges in finding an
algorithm for the Skolem problem. The first step is to overcome the use of
transcendental number theory, as it does not scale well. This goal is
partially achieved, since the $r^{t}\cos t\theta$ problem still requires it
for some of the cases. The absence of matching lower bounds makes it
interesting to explore lower bounds for the $r^{t}\cos t\theta$ problem.
Our second contribution is in the direction of continuization of computation.
The key idea is to interpolate sequences by well-behaved functions over the
reals and then to think of the original problem as a problem about these
functions. This can be particularly useful because there is an abundance of
real-analytic tools for finding roots of functions.
We made three conjectures in this paper. The first conjecture is interesting
as it asserts that the Skolem problem is hard only in the cases where the
effective eigenvalues lie on the unit circle but are not roots of unity. This
conjecture seems plausible as all the challenges in solving the Skolem problem
also remain for the $\Sigma\cos t\theta_{i}$ problem.
The other two conjectures are about the power of continuization in general.
The theory of the real closed field with the cosine function is known to be
undecidable, but its restriction to the one-variable case is an interesting
unexplored problem. Our last conjecture is about the weakness of the
one-variable fragment of extensions of the theory of the reals.
## References
* [1] S. Akshay, Nikhil Balaji, and Nikhil Vyas. Complexity of restricted variants of skolem and related problems. In MFCS, 2017.
* [2] Ventsislav Chonev, Joël Ouaknine, and James Worrell. On the complexity of the orbit problem. J. ACM, 63(3):23:1–23:18, June 2016. URL: http://doi.acm.org/10.1145/2857050, doi:10.1145/2857050.
* [3] Henri Cohen. A Course in Computational Algebraic Number Theory. Springer Publishing Company, Incorporated, 2010.
* [4] R. Kannan and R. J. Lipton. Polynomial-time algorithm for the orbit problem. J. ACM, 33(4):808–821, August 1986. URL: http://doi.acm.org/10.1145/6490.6496, doi:10.1145/6490.6496.
* [5] Ravindran Kannan and Richard J. Lipton. The orbit problem is decidable. In Proceedings of the Twelfth Annual ACM Symposium on Theory of Computing, STOC ’80, pages 252–261, New York, NY, USA, 1980. ACM. URL: http://doi.acm.org/10.1145/800141.804673, doi:10.1145/800141.804673.
* [6] Christer Lech. A note on recurring series. Ark. Mat., 2(5):417–421, 08 1953. doi:10.1007/BF02590997.
* [7] M. Mignotte. Some Useful Bounds, pages 259–263. Springer Vienna, Vienna, 1983. doi:10.1007/978-3-7091-7551-4_16.
* [8] T.N. Shorey, R. Tijdeman, and M. Mignotte. The distance between terms of an algebraic recurrence sequence. J. Reine Angew. Math., 1984(349):63–76, 1984. URL: https://doi.org/10.1515/crll.1984.349.63, doi:10.1515/crll.1984.349.63.
# Enhanced Meta-Displays Using Advanced Phase-Change Materials
Omid Hemmatyar1,† Sajjad Abdollahramezani1,† Ioannis Zeimpekis2 Sergey
Lepeshov3 Alex Krasnok4 Asir Intisar Khan5 Kathryn M. Neilson5 Christian
Teichrib6 Tyler Brown1 Eric Pop5 Daniel W. Hewak2 Matthias Wuttig6 Andrea
Alù4,7 Otto L. Muskens8 Ali Adibi1 1School of Electrical and Computer
Engineering, Georgia Institute of Technology, 778 Atlantic Drive NW, Atlanta,
Georgia 30332-0250, US 2Zepler Institute, Faculty of Engineering and Physical
Sciences, University of Southampton, SO17 1BJ Southampton, United Kingdom
3ITMO University, St. Petersburg 197101, Russia 4Photonics Initiative,
Advanced Science Research Center, City University of New York, New York, New
York 10031, United States 5Department of Electrical Engineering, Department
of Materials Science and Engineering, Precourt Institute for Energy, Stanford
University, Stanford, California 94305, United States 6Physikalisches
Institut IA, RWTH Aachen, Sommerfeldstrasse 14, 52074 Aachen, Germany
7Physics Program, Graduate Center, City University of New York, New York, New
York 10016, United States 8Physics and Astronomy, Faculty of Engineering and
Physical Sciences, University of Southampton, SO17 1BJ Southampton, United
Kingdom, †These authors contributed equally to this work.
(August 27, 2024)
###### Abstract
Structural colors generated due to light scattering from static all-dielectric
metasurfaces have successfully enabled high-resolution, high-saturation, and
wide-gamut color printing applications. Despite recent advances, most
demonstrations of these structure-dependent colors lack post-fabrication
tunability. This hinders their applicability for front-end dynamic display
technologies. Phase-change materials (PCMs), with significant contrast of
their optical properties between their amorphous and crystalline states, have
demonstrated promising potential in reconfigurable nanophotonics. Herein, we
leverage tunable all-dielectric reflective metasurfaces made of newly emerged
classes of low-loss optical PCMs, i.e., antimony trisulphide (Sb2S3) and
antimony triselenide (Sb2Se3), with superb characteristics to realize
switchable, high-saturation, high-efficiency and high-resolution dynamic meta-
pixels. Exploiting polarization-sensitive building blocks, the presented meta-
pixel can generate two different colors when illuminated by either one of two
orthogonally polarized incident beams. Such degrees of freedom (i.e., material
phase and polarization state) enable a single reconfigurable metasurface with
fixed geometrical parameters to generate four distinct wide-gamut colors. We
experimentally demonstrate, for the first time, an electrically-driven micro-
scale display through the integration of phase-change metasurfaces with an on-
chip heater formed by transparent conductive oxide. Our experimental findings
enable a versatile platform suitable for a wide range of applications,
including tunable full-color printing, enhanced dynamic displays, information
encryption, and anti-counterfeiting.
dynamic metasurfaces, phase-change materials, nanophotonics, structural colors
## Introduction
In the past decades, absorption and emission of light from organic dyes and
chemical pigments have been the most common color generation mechanisms in
color-imaging and display devices [1]. Nevertheless, there are still several
challenges with the developed technologies, such as environmental hazards,
vulnerability to high-intensity light, and limited scalability to smaller
pixel sizes. In order to address these issues, structural colors have emerged
as compelling alternatives. Structural colors are observed in numerous natural
species, whose bright features arise from light scattering and interference in
micro/nanostructured patterns of their skins or scales [2]. Inspired by nature
and enabled by recent advancement in nanofabrication, artificial structural
colors generated via a resonant interaction between incident white light and
miniaturized building blocks in optical metasurfaces [3, 4, 5, 6], i.e.,
arrays of subwavelength patterned nanostructures, have gained great attention
in recent years. In this context, plasmonic metasurfaces made of gold, silver
and aluminum nanostructures have been extensively used to generate structural
colors based on plasmon resonances [7]. Despite their versatility, the broad
and weak plasmon resonances, imposed by the significant inherent ohmic loss of
the constituent metallic materials, result in low color saturation and purity
[8].
Figure 1: Working principle of a polarization-encoded dynamic display composed
of phase-change meta-pixels. a,b, Schematic representation of a reflective
display consisting of phase-change meta-pixels. Each meta-pixel is a
metasurface formed by a periodic arrangement of Sb2S3 nanopillars shown in
(b), which can generate four different colors; two colors for each
polarization attributed to the amorphous and crystalline phases of Sb2S3 (i.e.
A-Sb2S3 and C-Sb2S3, respectively) nanopillars or two colors for each Sb2S3
phase (corresponding to x-polarized and y-polarized incident white light). For
all metasurfaces, the height ($h$) of the nanopillars is fixed while their
periodicity in x- and y-directions (i.e., $p_{x}$ and $p_{y}$, respectively)
change to generate different colors. The major and minor axes of the
nanopillars in x- and y-directions are proportional to the corresponding
periodicities in those directions with a constant aspect ratio, i.e.
$d_{x,y}=\alpha\,p_{x,y}$, in which $\alpha$ is constant. The colors shown in
(a) correspond to Sb2S3 metasurfaces with (i) $p_{x}=p_{y}=310$ nm (for green)
and $p_{x}=p_{y}=390$ nm (for red), (ii) $p_{x}=310$ nm, $p_{y}=390$ nm, and
(iii) $p_{x}=390$ nm and $p_{y}=310$ nm, with $\alpha=0.6$ and $h=120$ nm.
c-f, Multipolar decomposition analysis: c,e, Calculated normalized scattering
cross-sections and simulated reflectance (R) spectrum of a Sb2S3 metasurface
with geometrical parameters of $p_{x,y}=310$ nm, $d_{x,y}=0.6\,p_{x,y}$, and
$h=120$ nm for the (c) amorphous and (e) crystalline phases. The constructive
interference between the electric dipole (ED) and magnetic dipole (MD) modes
at $\lambda_{a}=560$ nm ($\lambda_{c}=652$ nm) boosts the backward scattering
intensity, and in turn, results in a reflectance peak in the case of A-Sb2S3
(C-Sb2S3). d,f, Normalized magnetic field intensity with arrow surface of
electric field (top panel), and normalized electric field intensity with arrow
surface of magnetic field (bottom panel) for the metasurfaces in b(i) at (d)
$\lambda_{a}=560$ nm and (f) $\lambda_{c}=652$ nm, respectively.
To meet the challenges associated with plasmonic metasurfaces, recently, all-
dielectric metasurfaces made of high-refractive-index materials supporting
Mie-type resonances with electric dipole (ED) and magnetic dipole (MD) modes
have been used for generating a full range of vivid and highly saturated
structural colors desired for high-resolution display technologies [9, 10, 11,
12]. However, these colors are fixed-by-design and cannot be tuned since the
geometrical parameters of passive all-dielectric metasurfaces cannot be
changed after fabrication. In order to enable active display applications, a
real-time color tunability is essential.
To realize high-resolution structural color tunability in metasurfaces,
several modulation techniques have been proposed. Some examples are liquid
crystals in conjunction with plasmonic nanoantennae [13, 14], utilizing
mechanically stretchable substrates integrated with plasmonic [15] and
dielectric [16] nanoscatterers, changing the refractive index of the medium
surrounding the nanostructures [17], modifying the optical properties of the
constituent magnesium-based nano-cavities of a hybrid plasmonic-dielectric
platform via a chemical reaction [18], and changing the polarization state of
incident light [19]. Despite impressive advancements, these approaches can
hardly meet the requirements for lightweight, flexible, durable, high-
resolution, high-speed, and cost-effective dynamic color displays with high
color contrast and saturation, multiple stable colors, and high refreshing
rates.
To overcome the existing shortcomings, chalcogenide phase-change materials
(PCMs) [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34], with
optical properties (e.g., refractive index) that can be strongly modified upon
applying an external stimulus (optical, electrical, or thermal), have been
successfully used as tunable materials for color switching [35, 36, 37, 38,
39]. The advantages of PCM-based color-switching techniques over other
counterparts originate from unique electrical and optical features of PCMs
including nonvolatility, high index contrast, fast reversible switching speeds
(10s-100s nanoseconds) between two stable phases, high durability (up to
$10^{12}$ cycles), notable scalability (down to nm-scale sizes), good thermal
stability (up to several hundred degrees), and compatibility with CMOS
fabrication technology [22]. Considering these unique features, single [35,
36] or multiple ultrathin films [37] made of germanium antimony telluride (GST
in short) and germanium telluride (GeTe) alloys in a multistack configuration
with other dielectric and/or metallic films have been utilized for color
switching [38]. In spite of the unparalleled properties of PCMs, these
demonstrations suffer from the high absorption loss of GST and GeTe within the
visible wavelength range, which results in low-quality-factor (low-Q)
reflectance resonances. This, in turn, yields colors with low saturation, low
color value (i.e., the reflectance value at the resonance peak) and purity in
both amorphous and crystalline states of these PCMs.
To address these challenges, here we systematically design and experimentally
demonstrate an actively tunable platform for color displays comprising all-
dielectric metasurfaces formed by a geometrical arrangement of phase-change
nanoellipsoids. We leverage a less explored class of PCMs, i.e., antimony
trisulphide (Sb2S3) and antimony triselenide (Sb2Se3), exhibiting low-loss
property in the visible spectral range [40, 41, 42, 43, 44, 45]. Due to their
high refractive indices, these materials support strong Mie-type ED and MD
resonances. The sensitivity of these modes to the refractive index enables
high-resolution (up to $\sim$80,000 dots per inch (dpi)) phase-transition-based
color switching with high saturation and purity [11]. Moreover, owing to the
polarization-sensitivity of the constituent asymmetric PCM nanopillars, we can
encode two different colors into two mutually orthogonal polarization states
of the incident light. This results in the realization of a display with fixed
geometrical parameters that can generate four different colors upon a
transition in the structural state of the constituent PCM. Finally, the integration of an
electrically controlled transparent heater with the polarization-encoded
phase-change meta-pixels, reported for the first time in this work, enables
real-time reconfiguration for applications ranging from tunable full-color
printing and displays, information encryption, and anticounterfeiting to
wearable screens and electronic papers.
## Results and Discussion
Figure 1a demonstrates the operation principle of a dynamic display formed by
phase-change meta-pixels. Each meta-pixel is composed of a periodic array of
rectangular unit cells, with different periodicity along x- and y-directions
(i.e., $p_{x}$ and $p_{y}$ in Fig. 1b (ii)), containing asymmetric elliptical
Sb2S3 nanopillars on top of a glass substrate. The major and minor axes of the
Sb2S3 nanopillars are proportional to the periodicity of the unit cell in the
corresponding directions, i.e., $d_{x,y}=\alpha\,p_{x,y}$, in which $\alpha$
is fixed between 0 and 1. The height of the nanopillars ($h$) is kept constant
for fabrication convenience. The reflected color upon normally incident
x-polarized white light can change by varying the geometrical parameters of
the elliptical amorphous-Sb2S3 (A-Sb2S3) nanopillars or equivalently those of
the unit cell (see the bottom-left display in Fig. 1a). Upon phase transition,
crystalline-Sb2S3 (C-Sb2S3) nanopillars with the same geometrical parameters
and under the same illumination conditions generate colors that are different
from those generated by their A-Sb2S3 counterparts (compare the top and bottom
displays in Fig. 1a). This phase-change color switching is attributed to the
refractive index change of the constituent Sb2S3 meta-pixels upon transition
between amorphous and crystalline phases.
To reveal the switching mechanism, we performed the multipole decomposition
analysis of the scattering spectrum of a Sb2S3 meta-pixel under white light
illumination, as shown in Figs. 1c-f (see Supporting Information Note I and
Fig. S1 for more details). The analysis shows negligible contribution of the
higher-order moments so that the optical response of the unit cell is governed
by the electric dipole (ED) and magnetic dipole (MD) moments. In fact, the
strong coupling of these refractive-index-dependent ED and MD moments
excited inside the Sb2S3 nanopillars with the directly reflected light yields
the resonances in the reflectance spectra shown in Figs. 1c,e. Therefore, the
spectral position of these resonances, or equivalently the generated color by
a meta-pixel under a specific polarized light, is determined by the refractive
index of the Sb2S3 nanopillars. In addition to the color switching mechanism
described above, the asymmetric nature of nanopillars can enable a
polarization-based color switching in which one meta-pixel can generate
different colors upon white light illumination with different polarization
states (i.e., x- to y-polarization) in each phase of Sb2S3 (compare the left
and right displays in Fig. 1a). Therefore, one meta-pixel with fixed
geometrical parameters can generate four different colors owing to the phase-
change-tunability and polarization-sensitivity of the constituent Sb2S3
nanopillars.
Figure 2: Simulation results and experimental characterization of
polarization-encoded dynamic phase-change meta-pixels. a,b, Simulated (a) and
experimental (b) reflectance spectra for the polarization-insensitive A-Sb2S3
(solid lines) and C-Sb2S3 (dashed lines) metasurfaces as well as their
corresponding colors and SEM images for different periodicity
($p_{x}=p_{y}=p$). The curves are displaced vertically for better visibility
and comparison. The diameter of Sb2S3 nanopillars varies as $d=0.65\,p$. The
sharp resonances observed in (a) and (b) are attributed to the interference
between ED and MD modes inside the Sb2S3 nanopillars as shown in Figs.
1(c)-(f), causing the spectral position of these resonances to become
refractive-index-dependent. Therefore, a red-shift is observed upon phase transition of
Sb2S3. c, Corresponding CIE 1931 chromaticity coordinates of the reflectance
spectra shown in (a,b) for A-Sb2S3 (black circles) and C-Sb2S3 (white
squares). d,e, The color palettes for the fabricated (d) A-Sb2S3 and (e)
C-Sb2S3 meta-pixels considering different periodicity in x- and y-directions
($p_{x}$ and $p_{y}$, respectively) varying with 20 nm increments. f, SEM
images showing magnified bird’s eye views of three meta-pixels indicated by
dashed boxes in (d,e). The scale bar in (f) is 500 nm. The height of the Sb2S3
nanopillars is fixed at $h=120$ nm. The images of color pixels are captured
through a $2.5\times$ objective lens with numerical aperture (NA) of 0.075.
To study the effect of material phase on the generated color, we first
consider polarization-insensitive meta-pixels with a square lattice
($p_{x}=p_{y}=p$) and circular ($d_{x}=d_{y}=d$) Sb2S3 nanopillars. We set the
geometrical parameters as $h=120$ nm and $d=0.65\,p$ while we vary the
periodicity of the unit cell from $p=290$ nm to $p=450$ nm, with a step of 20
nm, to cover a wide range of possible colors. The corresponding reflectance
spectra obtained from full-wave simulations (see Methods), and in turn, the
generated colors, are shown in Fig. 2a. The method used for obtaining the
colors associated with each reflectance spectrum is detailed in Supporting
Information Note II and Fig. S3. As shown in Fig. 2a, by increasing $p$, the
spectral positions of the resonance peaks for both the amorphous (solid lines)
and crystalline (dashed lines) states red-shift. Moreover, switching the phase
of the material shifts the resonance peak to longer wavelengths. This is due
to the positive refractive index contrast ($\Delta
n_{\textrm{Sb}_{2}\textrm{S}_{3}}=n_{\textrm{C-Sb}_{2}\textrm{S}_{3}}-n_{\textrm{A-Sb}_{2}\textrm{S}_{3}}>0$)
within the visible wavelength range (see Supplementary Fig. S2c). The higher
absorption loss of C-Sb2S3 means that the nanopillars in the crystalline state
(dashed lines in Fig. 2a) do not support resonances as strong and sharp as
those of A-Sb2S3 (solid lines).
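The reflectance-to-color bookkeeping can be sketched as follows. This is our own crude stand-in, not the paper's pipeline: the Gaussian lobes only roughly approximate the CIE 1931 color-matching functions, a flat illuminant is assumed, and the toy spectrum merely mimics a resonance peak near 560 nm.

```python
import math

# Crude sketch: reflectance spectrum -> CIE 1931 chromaticity (x, y).
# The Gaussian lobes below are a rough approximation (our assumption)
# of the CIE color-matching functions xbar, ybar, zbar.
def g(lam, mu, s):
    return math.exp(-0.5 * ((lam - mu) / s) ** 2)

def cmf(lam):  # approximate xbar, ybar, zbar at wavelength lam (nm)
    xbar = 1.056 * g(lam, 599, 38) + 0.362 * g(lam, 442, 16)
    ybar = 1.014 * g(lam, 556, 46)
    zbar = 1.839 * g(lam, 452, 22)
    return xbar, ybar, zbar

def xy_from_reflectance(R, lams):
    # Integrate R against the CMFs under a flat illuminant, then normalize.
    X = Y = Z = 0.0
    for lam, r in zip(lams, R):
        xbar, ybar, zbar = cmf(lam)
        X += r * xbar
        Y += r * ybar
        Z += r * zbar
    s = X + Y + Z
    return X / s, Y / s

lams = list(range(400, 701, 5))
# toy spectrum: a reflectance peak near 560 nm on a weak background,
# loosely mimicking the A-Sb2S3 resonance of Fig. 1c
R = [0.1 + 0.8 * g(lam, 560, 25) for lam in lams]
x, y = xy_from_reflectance(R, lams)
print(f"CIE x = {x:.3f}, y = {y:.3f}")  # lands in the green-yellow region
```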
To demonstrate the validity of our approach, we fabricated and characterized
$50\times 50$ $\mu$m2 Sb2S3 meta-pixels with the same design parameters as
those in Fig. 2a (see Methods for fabrication and characterization details).
The measured reflectance spectra, the associated colors observed under the
microscope, and magnified top-view scanning electron micrographs (SEMs) of the
fabricated meta-pixels are shown in Fig. 2b, demonstrating an overall good
agreement with the simulation results. To qualitatively analyze the
performance of the presented color generation/switching mechanism in terms of
saturation maintenance and hue variation, we display the generated colors in
the amorphous (black circles) and crystalline (white squares) phases in the
same International Commission on Illumination (CIE) 1931 chromaticity
coordinates in Fig. 2c. While for greenish and reddish colors both simulation
and experimental results demonstrate high saturation values (i.e., markers
close to the edge of the gamut), the purplish colors could not be produced in
the experiments. We attribute this to undesired secondary peaks observed in
the reflectance spectra in Fig. 2b for $p>390$ nm due to fabrication
imperfections. A thorough quantitative study of the color gamut coverage,
saturation, and hue for Sb2S3 meta-pixels and another low-loss PCM (i.e.,
Sb2Se3) is presented in Supplementary Note III and Figs. S4 and S5.
Figure 3: Dynamic displays enabled by polarization-sensitive Sb2S3 meta-
pixels. a, Reproduction of the image of The Cheshire Cat by A-Sb2S3 and
C-Sb2S3 meta-pixels. For the case of A-Sb2S3, switching the polarization of
incident white light changes the generated color throughout the image. This
phenomenon is also observed for incident y-polarized light upon
crystallization of A-Sb2S3. Under x-polarization, however, all parts of The
Cheshire Cat body in the amorphous phase vanish except its teeth and eyes upon
switching to the crystalline phase. This is also the case when changing the
polarization of the incident white light from y- to the x-direction in the
crystalline phase. b, The SEM image of the fabricated array of Sb2S3
nanopillars associated with the face of The Cheshire Cat indicated by the blue
dashed box shown in (a). The magnified SEM image shown in the inset
demonstrates that a meta-pixel containing only four Sb2S3 nanopillars is
capable of generating the desired color justifying the high-resolution nature
of the presented color-printing approach. c, Encryption of two images (i.e.,
Georgia Tech logo and symbol) into a display containing an engineered
arrangement of Sb2S3 meta-pixels. One image can be switched to another either
by changing the polarization of incident light in each phase, or by changing
the phase of the Sb2S3 meta-pixels under the same polarization. The latter is
the first experimental demonstration of encryption of two totally different
images into the phase of the constituent material of meta-pixels. d, The SEM
image of the fabricated Sb2S3 meta-pixels corresponding to the blue dashed box
shown in (c). The design strategy and geometrical parameters of different
parts of the images shown in (a-d) is explained in Supplementary Figs. S15-17.
The images in (a) and (c) are captured through $10\times$ (NA = 0.3) and
$2.5\times$ (NA = 0.075) objective lenses, respectively. Figure 4:
Electrically driven dynamic color display device integrating a transparent
heater. a,b, The bright-field microscope images of the electrically tunable
color palettes comprising $50\times 50$ $\mu$m2 (a) A-Sb2S3 and (b) C-Sb2S3
meta-pixels fabricated on a glass substrate and encapsulated by a 150 nm-thick
film of SiO2. The transparent heater is formed by fabrication of a 50 nm-thick
ITO bridge connecting Au probing pads at the two ends on top of the SiO2 film.
The geometrical parameters of the Sb2S3 meta-pixels shown in (a) and (b) are
similar to those used in Fig. 2(d,e). The images are captured using a
$2.5\times$ objective (NA = 0.075). c, Simulated stationary temperature map in
the cross section of Sb2S3 meta-pixels in the course of applying a 27 V
electrical signal to the Au probing pads. The uniform heat distribution across
the palettes ensures realization of large-scale displays with selective
controllability of the material phase of all meta-pixels. The scale bars are
100 $\mu$m.
In order to add polarization sensitivity to our color-switching approach, from
now on, we also consider elliptical nanopillars in asymmetric unit cells with
different periodicity in the x- and y-directions, i.e., $p_{x}$ and $p_{y}$,
as shown in Fig. 1b. By varying $p_{x}$ and $p_{y}$ from 290 nm to 510 nm with
a 20-nm increment and a fixed ratio with respect to the major and minor axes
of the nanopillars (i.e., $d_{x,y}=0.65\,p_{x,y}$), we fabricate the color
palettes shown in Figs. 4d,e captured under x-polarized illumination.
Measurement under y-polarization yields the same pattern flipped in $p_{x}$
and $p_{y}$ (results not shown here). The magnified bird’s eye view of three
meta-pixels indicated by dashed boxes in Fig. 4d,e are displayed in Fig. 4f.
The simulated palettes as well as a detailed analysis on the polarization-
based and phase-change-based color switching approaches in the presented
platform considering both Sb2S3 and Sb2Se3 meta-pixels are provided in
Supplementary Note IV and Figs. S6-9. In addition to the ratio used in Fig.
2d,e (i.e., $\alpha=0.65$), we fabricate palettes of meta-pixels with other
ratios of 0.45 and 0.55 and plot the captured images in Supplementary Fig.
S10. Moreover, based on the simulation results in Supplementary Fig. S8, we
design and fabricate palettes of Sb2Se3 meta-pixels with different ratios and
display their microscopic images in Fig. S11. The sensitivity of the generated
colors to the polarization angle, incident angle, and different design
parameters of the meta-pixels is analyzed in Supplementary Notes V (Fig.
S12), VI (Fig. S13), and VII (Fig. S14), respectively.
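As a concrete illustration of the sweep described above, the following Python snippet (our sketch, not the authors' design code; the function name `design_grid` is ours) enumerates the $(p_x,p_y)$ combinations and the corresponding pillar axes:

```python
# Sketch (not the authors' design code): enumerate the meta-pixel design grid
# described in the text -- periods p_x, p_y swept from 290 nm to 510 nm in
# 20-nm steps, with elliptical-pillar axes d_{x,y} = alpha * p_{x,y}.
ALPHA = 0.65  # axis-to-period ratio used for Figs. 4d,e

def design_grid(alpha=ALPHA, p_min=290, p_max=510, step=20):
    """Return all designs as (p_x, p_y, d_x, d_y) tuples, in nm."""
    periods = range(p_min, p_max + 1, step)
    return [(px, py, alpha * px, alpha * py) for px in periods for py in periods]

grid = design_grid()
print(len(grid))  # 12 periods per axis -> 144 meta-pixel designs
```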
The color switching enabled by the phase transition of Sb2S3 and polarization
of the incident light can be employed for implementation of a dynamic display.
According to the color palettes in Figs. 4d,e, under x- (y-) polarization, a
column (row) of different colors in the amorphous phase can be mapped onto a
column (row) of relatively similar colors in the crystalline phase. We
leverage this unique feature of the Sb2S3 meta-pixels for switching off some
parts of an image while maintaining the colors of the remaining parts. As an
illustrative example, the image of The Cheshire Cat is generated by A-Sb2S3
meta-pixels illuminated with x-polarized white light as shown in Fig. 3a (i).
Upon phase transition to C-Sb2S3, all parts of the body vanish, but the
grin and eyes remain (see Fig. 3a (ii)). The same occurs when changing
the polarization state from y to x in the crystalline phase. On the other
hand, altering the polarization in amorphous phase as well as switching the
material phase under y-polarization result only in a variation of colors in
the components of the image. The SEM image of the fabricated array of Sb2S3
nanopillars associated with the face of The Cheshire Cat indicated by the blue
dashed box shown in Fig. 3a is displayed in Fig. 3b. The magnified SEM image
shown in the inset demonstrates that a meta-pixel containing only four Sb2S3
nanopillars can generate the desired color, attesting to the high-resolution
nature of the presented approach. The geometrical parameters of the meta-pixels used
for the generation of Figs. 3a,b are tabulated in Supplementary Fig. S15.
Another interesting characteristic of our platform is that it offers two degrees
of freedom, i.e., phase-change-based and polarization-based color switching, at
the same time. In fact, it is possible to darken (brighten) some parts of an image
using the polarization-based control, while brightening (darkening) other
parts using phase-change control of the meta-pixels. We benefit from this
capability to encrypt two different images (i.e., Georgia Tech logo and
symbol) into a display containing an array of Sb2S3 meta-pixels as shown in
Fig. 3c. One image can be switched to another one either by altering the
polarization in each material phase, or by changing the phase of the Sb2S3
meta-pixels under a fixed polarization. While the former has been reported in
previous works, the latter, to the best of our knowledge, is the first
demonstration of encryption of two totally different images into the phase of
the constituent materials of a meta-pixel. The SEM image of the fabricated
Sb2S3 meta-pixels corresponding to the blue dashed box in Fig. 3c is
demonstrated in Fig. 3d. The design strategy of the meta-pixels used for
generating Figs. 3c,d is provided in Supplementary Fig. S16. Moreover, we
demonstrate other examples of dynamic displays using Sb2Se3 meta-pixels in
Supplementary Fig. S17. For the case of Sb2Se3, we demonstrate the encoding
and decoding of four different images into the A-Sb2Se3 (ON-state) and
C-Sb2Se3 (OFF-state), respectively, under x- and y-polarizations. These
capabilities can be used in many applications such as information coding,
cryptography, high-density optical data storage, security encryption, and 3D
displays.
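The addressing scheme behind these displays can be summarized in a toy model: material phase and incident polarization act as two independent switching knobs, giving up to four addressable states per meta-pixel. The snippet below is a hypothetical illustration (the state and image labels are ours, not the paper's software):

```python
# Toy model (hypothetical; state and image labels are ours): the material
# phase and the incident polarization act as two independent switching bits,
# so an array of meta-pixels can address up to four stored images.
STATES = [("amorphous", "x"), ("amorphous", "y"),
          ("crystalline", "x"), ("crystalline", "y")]

def make_display(images):
    """Assign one image to each (phase, polarization) state."""
    assert len(images) == len(STATES)
    return dict(zip(STATES, images))

display = make_display(["logo", "symbol", "image_3", "image_4"])
# Flipping either knob alone selects a different stored image:
print(display[("amorphous", "x")], "->", display[("crystalline", "x")])
```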
In all experiments shown in Figs. 1-3, the phase-transition in our Sb2S3 meta-
pixels is performed by using a bulky heater for a relatively long annealing
time (see Methods for details). Though laser pulses can be used to expedite
the conversion process [45], the on-chip integration of high-power fast lasers
is challenging if not impossible. This hinders the applicability of our
approach for on-demand compact, high-resolution, fast, and on-chip displays. To
promote the presented approach to a practical paradigm, the Sb2S3 meta-pixels
must be switched electrically. Recently, electrical switching of PCMs based on
Joule heating has been successfully demonstrated using metal micro-heaters
[27, 33, 34]. However, none of these platforms is suitable for structural
color generation due to the excessive loss of their constituent materials in
the visible range. Thanks to their reduced optical loss, micro-heaters formed
from transparent conductive oxides hold promise for enabling next-generation
dynamic structural colors.
As a proof-of-concept demonstration, we leverage an indium tin oxide (ITO)
heater to electrically reconfigure the phase-change meta-pixels
without compromising the quality of the generated colors. To this end, the
fabricated palettes in Figs. 4d,e are first encapsulated by a SiO2 layer
followed by fabrication of a 50 nm-thick ITO bridge
connecting two gold (Au) probing pads at the two ends on top of the SiO2 film
(see Figs. 4a,b and Methods for fabrication details). The electro-thermal
simulation in Fig. 4c illustrates that a fairly uniform heat distribution can
be realized across the whole area of the display upon applying the voltage
pulse, ensuring simultaneous and uniform conversion of all palettes. Such a
Joule heating platform offers precise electrical control of the
intermediate phases of PCMs (beyond fully amorphous and fully crystalline),
which is critical for the realization of multicolor displays, a key attribute
of our approach. We further investigate the potential of ITO-based
micro-heaters for reversible switching of colors in Supplementary Figs. S19-S22.
## Conclusion
In summary, we demonstrated a new platform for generating and switching
high-efficiency, high-saturation, and wide-gamut structural colors using
switchable meta-pixels based on metasurfaces made of the low-loss and
less-explored PCMs Sb2S3 and Sb2Se3. Upon the nonvolatile phase
transition of the constituent PCM, the generated color in the amorphous phase
switches to a distinctive stable color in the crystalline phase. In addition,
the properly designed asymmetric characteristics of elliptical nanopillars
enable polarization-based color switching. Combining these two tuning
mechanisms, we systematically designed a single-layer meta-pixel capable of
producing four different colors. This can be extended to the realization of
multi-color artificial images by gradually changing the crystallinity of the
constituent PCMs and/or the incident polarization angle. We also showed that
by engineering the arrangement of PCM-based nanopillars, features like image
switching, ON/OFF switching, and color shading can be realized. More
interestingly, we experimentally demonstrated, for the first time, an
electrically driven micro-scale display by integrating an optically
transparent heater into our color platform without compromising the color
quality. We
believe that this research provides a significant step towards the realization
and commercialization of compact metaphotonic devices for applications like
full-color dynamic displays, information storage, image encryption, and anti-
counterfeiting.
## Acknowledgements
The work was primarily funded by the Office of Naval Research (ONR)
(N00014-18-1-2055, Dr. B. Bennett) and by the Air Force Office of Scientific
Research MURI program. The support of the UK’s Engineering and Physical
Science Research Centre is gratefully acknowledged, through ChAMP–Chalcogenide
Advanced Manufacturing Partnership (EP/M015130/1). The Stanford authors
acknowledge partial support from the Stanford Graduate Fellowship, from the
Nonvolatile Memory Technology Research Initiative (NMTRI), and from Draper
Labs. This work was performed in part at the Georgia Tech Institute for
Electronics and Nanotechnology (IEN), a member of the National Nanotechnology
Coordinated Infrastructure (NNCI), which is supported by NSF (ECCS1542174).
## Disclosures
The authors declare no conflicts of interest.
## Methods
Sample fabrication. The fabrication flow for the Sb2S3 metasurface and
integrated transparent heater of the meta-display is illustrated in Fig. S23.
A Sb2S3 film of nominally 130 nm thickness is first sputtered on a cleaned
fused silica substrate from a stoichiometric target followed by the deposition
of a 15-nm thick ZnS:SiO2 film serving as a protective layer to prevent
oxidation and elemental loss of Sb2S3 during subsequent heating steps. Next,
the sample is coated with a layer of hydrogen silsesquioxane (HSQ) negative
e-beam resist and a thin water-soluble conductive layer of ESpacer to hamper
the charge accumulation during the writing process. E-beam lithography is then
performed to define the nanopillar pattern in each 50$\times$50 $\mu$m$^{2}$
meta-pixel. After washing out the ESpacer using DI water, the exposed resist is
developed by immersing it in a bath of 25% tetramethylammonium
hydroxide (TMAH) and subsequently rinsing with gently flowing DI water.
Inductively coupled plasma reactive ion etching (ICP-RIE) is performed with a gas mixture of
Ar:CF4 at an etching rate of $\sim$75 nm/min to form the nanostructure
patterns. The etching is conducted in two 1-min cycles with a
sufficiently long cooldown break in between. Right after the etching, a 15 nm
protective layer of SiO2 is grown on the sample using atomic layer deposition
(ALD) at 100 °C, which is low enough to prevent the crystallization of Sb2S3.
To convert the material state of Sb2S3 to the crystalline phase, the sample is
annealed at 270 °C for 10 min in a chamber filled with ultrahigh-purity Ar
gas.
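As a quick sanity check of the etch step described above (our arithmetic, not part of the published recipe), the total stack thickness divided by the quoted etch rate is consistent with the two 1-min cycles:

```python
# Back-of-envelope check (our arithmetic, not the published recipe card):
# two 1-min ICP-RIE cycles at ~75 nm/min should etch through the 15 nm
# ZnS:SiO2 cap plus the 130 nm Sb2S3 film.
ETCH_RATE_NM_PER_MIN = 75.0   # approximate rate quoted in the text
STACK_NM = 15.0 + 130.0       # cap + Sb2S3 thickness

required_min = STACK_NM / ETCH_RATE_NM_PER_MIN
print(round(required_min, 2))  # ~1.93 min < 2 min of total etch time
```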
To realize the electrically-driven display, the fabricated sample (excluded
from the annealing process) is first transferred to the ALD system to deposit
a 200-nm thick layer of thermal SiO2 as a supporting substrate for the
integrated heater. After defining the pattern of the ITO bridge in the
polymethyl methacrylate (PMMA)-coated sample using e-beam lithography, a 50
nm-thick layer of ITO is deposited by the RF-magnetron sputtering from an
indium oxide/tin oxide (In2O3/SnO2 with 90/10 wt %) target in an argon/oxygen
plasma atmosphere. The prolonged nature of the deposition facilitates the
crystallization of ITO necessary for the formation of a uniform conductive
layer enabling spatially consistent heat generation. After the lift-off
process, to further enhance the electrical conductivity of the ITO film, post-
deposition annealing under a mild flow of oxygen, which also reduces the
optical loss of ITO, is conducted at 200 °C for 30 min. This temperature is
low enough to preclude crystallization of as-deposited Sb2S3. Two Au/Ti
(250/20 nm) electrodes are formed at the two ends of the ITO bridge through
subsequent e-beam lithography and e-beam evaporation processes. After the
lift-off process, in the final step, a 100 nm layer of SiO2 is grown to
prevent the failure of the heater caused by the electric breakdown of the air
at the sharp corners of the device. To fully transform the Sb2S3 phase from
amorphous to crystalline via Joule heating, a sufficiently long (1 min)
32 V pulse is applied to the integrated heater using a source
measurement unit (Keithley 2614B).
Optical measurements. To investigate the optical response of the fabricated
meta-displays, bright-field optical imaging and reflection spectra
measurements of the color palettes are conducted. Optical images are captured
using a conventional upright bright-field reflection microscope (Nikon ECLIPSE
L200, Nikon Inc.) equipped with a high-definition color camera head (DS-Fi2)
and a 50 W halogen lamp light source. To observe different colors of The
Cheshire Cat and Georgia Tech logo and symbol images under different
polarization states of incident white light, the corresponding images are
magnified with a 10$\times$ objective lens (NA = 0.3) and a 2.5$\times$
objective lens (NA = 0.075), respectively, under illumination of polarized
light in both orthogonal directions. The optical spectra ($\lambda$ = 450-850
nm) are measured in reflection mode using a home-built microscope set-up
equipped with a 75 W broadband xenon source (Newport) and a UV-visible-near
infrared (NIR) spectrometer (USB 2000+, Ocean Optics Inc.). The polarized
light illuminates a color palette at normal incidence through an achromatic
10$\times$ objective lens (NA = 0.25); the reflected light is collected through
the same objective and directed to the spectrometer and a CCD camera. The measured
reflectance spectra are normalized to the reflected light from an aluminum-
coated mirror. All measurements are carried out at room temperature ($\sim$ 25
°C).
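The mirror normalization described above can be sketched as follows (the spectra are made up; only the normalization step reflects the text):

```python
import numpy as np

# Minimal sketch of the normalization step: measured counts from a palette
# are divided by counts from the aluminum-coated reference mirror to obtain
# reflectance. All spectra below are made up for illustration.
wavelength_nm = np.linspace(450, 850, 5)               # measurement window
sample_counts = np.array([120.0, 300.0, 480.0, 260.0, 150.0])
mirror_counts = np.array([400.0, 500.0, 600.0, 520.0, 430.0])

reflectance = sample_counts / mirror_counts            # dimensionless
print(reflectance)
```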
Numerical simulations. The full-wave simulations of the reflectance spectra of
the metasurfaces are performed using the commercial software Lumerical
Solutions based on the finite-difference time-domain (FDTD) technique.
Periodic boundary conditions are used in the x- and y-directions to mimic the
periodicity, while perfectly matched layers are used in the z-direction (top
and bottom boundaries) to model the free space. The refractive index of the glass
substrate is set at 1.46 for the entire wavelength range. The dispersive
optical constants of PCMs obtained from spectroscopic ellipsometry
measurements shown in Supplementary Fig. S2 are incorporated into simulations.
Electro-thermal simulations. A three-dimensional finite element method (FEM)
simulation is performed in the software package COMSOL Multiphysics to
simulate the Joule heating and heat dissipation effects in the electrified
hybrid display. In our simulations, we consider certain assumptions and
boundary conditions to mimic the experimental conditions. The multiphysics
problem is solved by coupling an Electric Currents (ec) module to a
Heat Transfer in Solids (ht) physics model. Material properties used for fused
silica, Ti, Au, and ITO are adopted from the available references [46, 47].
The electrical conductivity of ITO obtained from the four-point probe
measurement is set at $1.42\times10^{4}$ S/m. The thermal conductivity, density,
and heat capacity of Sb2S3 are 1.16 W/(m·K), 4600 kg/m$^{3}$, and 120 J/(mol·K),
respectively [45]. The ec module is applied to the ITO bridge and electrodes.
Electric insulation is assigned to all boundaries except for the two end faces
of the bridge, where a normal current density and an electric ground are applied.
The ht physics model is assigned to all domains. A convective cooling
boundary condition with an ambient temperature of 20 °C and a heat transfer
coefficient of 5 W/(m$^{2}$·K) is used at the top and bottom surfaces. An open
boundary condition is applied to the walls of the substrate in the lateral directions.
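For intuition about the Joule-heating budget, the following back-of-envelope sketch estimates the bridge resistance and dissipated power from the quoted ITO conductivity and film thickness. The bridge length and width are our assumptions, not values from the paper:

```python
# Back-of-envelope Joule-heating estimate (our sketch). Only the ITO
# conductivity and film thickness come from the text; the bridge length and
# width below are assumed values for illustration.
SIGMA_ITO = 1.42e4   # S/m, four-point-probe value quoted above
T_ITO = 50e-9        # m, ITO film thickness
L_BRIDGE = 1.0e-3    # m, assumed bridge length
W_BRIDGE = 0.5e-3    # m, assumed bridge width

R_bridge = L_BRIDGE / (SIGMA_ITO * W_BRIDGE * T_ITO)  # bridge resistance
P_pulse = 32.0**2 / R_bridge                          # power during the 32 V pulse
print(round(R_bridge), "ohm,", round(P_pulse, 2), "W")
```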
## References
* Daqiqeh Rezaei _et al._ [2020] S. Daqiqeh Rezaei, Z. Dong, J. You En Chan, J. Trisno, R. J. H. Ng, Q. Ruan, C.-W. Qiu, N. A. Mortensen, and J. K. Yang, Nanophotonic structural colors, ACS Photonics (2020).
* Vukusic _et al._ [1999] P. Vukusic, J. Sambles, C. Lawrence, and R. Wootton, Quantified interference and diffraction in single morpho butterfly scales, Proceedings of the Royal Society of London. Series B: Biological Sciences 266, 1403 (1999).
* Yu _et al._ [2011] N. Yu, P. Genevet, M. A. Kats, F. Aieta, J.-P. Tetienne, F. Capasso, and Z. Gaburro, Light propagation with phase discontinuities: generalized laws of reflection and refraction, Science 334, 333 (2011).
* Krasnok _et al._ [2012] A. E. Krasnok, A. E. Miroshnichenko, P. A. Belov, and Y. S. Kivshar, All-dielectric optical nanoantennas, Optics Express 20, 20599 (2012).
* Decker _et al._ [2015] M. Decker, I. Staude, M. Falkner, J. Dominguez, D. N. Neshev, I. Brener, T. Pertsch, and Y. S. Kivshar, High-efficiency dielectric huygens’ surfaces, Advanced Optical Materials 3, 813 (2015).
* Kuznetsov _et al._ [2016] A. I. Kuznetsov, A. E. Miroshnichenko, M. L. Brongersma, Y. S. Kivshar, and B. Luk’yanchuk, Optically resonant dielectric nanostructures, Science 354 (2016).
* Duan _et al._ [2017] X. Duan, S. Kamin, and N. Liu, Dynamic plasmonic colour display, Nature Communications 8, 1 (2017).
* Kristensen _et al._ [2016] A. Kristensen, J. K. Yang, S. I. Bozhevolnyi, S. Link, P. Nordlander, N. J. Halas, and N. A. Mortensen, Plasmonic colour generation, Nature Reviews Materials 2, 1 (2016).
* Zhu _et al._ [2017] X. Zhu, W. Yan, U. Levy, N. A. Mortensen, and A. Kristensen, Resonant laser printing of structural colors on high-index dielectric metasurfaces, Science Advances 3, e1602487 (2017).
* Yang _et al._ [2019] B. Yang, W. Liu, Z. Li, H. Cheng, D.-Y. Choi, S. Chen, and J. Tian, Ultrahighly saturated structural colors enhanced by multipolar-modulated metasurfaces, Nano Letters 19, 4221 (2019).
* Hemmatyar _et al._ [2019] O. Hemmatyar, S. Abdollahramezani, Y. Kiarashinejad, M. Zandehshahvar, and A. Adibi, Full color generation with fano-type resonant hfo 2 nanopillars designed by a deep-learning approach, Nanoscale 11, 21266 (2019).
* Yang _et al._ [2020] W. Yang, S. Xiao, Q. Song, Y. Liu, Y. Wu, S. Wang, J. Yu, J. Han, and D.-P. Tsai, All-dielectric metasurface for high-performance structural color, Nature Communications 11, 1 (2020).
* Franklin _et al._ [2015] D. Franklin, Y. Chen, A. Vazquez-Guardado, S. Modak, J. Boroumand, D. Xu, S.-T. Wu, and D. Chanda, Polarization-independent actively tunable colour generation on imprinted plasmonic surfaces, Nature Communications 6, 1 (2015).
* Olson _et al._ [2016] J. Olson, A. Manjavacas, T. Basu, D. Huang, A. E. Schlather, B. Zheng, N. J. Halas, P. Nordlander, and S. Link, High chromaticity aluminum plasmonic pixels for active liquid crystal displays, ACS Nano 10, 1108 (2016).
* Tseng _et al._ [2017] M. L. Tseng, J. Yang, M. Semmlinger, C. Zhang, P. Nordlander, and N. J. Halas, Two-dimensional active tuning of an aluminum plasmonic array for full-spectrum response, Nano Letters 17, 6034 (2017).
* Gutruf _et al._ [2016] P. Gutruf, C. Zou, W. Withayachumnankul, M. Bhaskaran, S. Sriram, and C. Fumeaux, Mechanically tunable dielectric resonator metasurfaces at visible frequencies, ACS Nano 10, 133 (2016).
* King _et al._ [2015] N. S. King, L. Liu, X. Yang, B. Cerjan, H. O. Everitt, P. Nordlander, and N. J. Halas, Fano resonant aluminum nanoclusters for plasmonic colorimetric sensing, ACS Nano 9, 10628 (2015).
* Chen _et al._ [2017] Y. Chen, X. Duan, M. Matuschek, Y. Zhou, F. Neubrech, H. Duan, and N. Liu, Dynamic color displays using stepwise cavity resonators, Nano Letters 17, 5555 (2017).
* Yang _et al._ [2018] B. Yang, W. Liu, Z. Li, H. Cheng, S. Chen, and J. Tian, Polarization-sensitive structural colors with hue-and-saturation tuning based on all-dielectric nanopixels, Advanced Optical Materials 6, 1701009 (2018).
* Wuttig _et al._ [2017] M. Wuttig, H. Bhaskaran, and T. Taubner, Phase-change materials for non-volatile photonic applications, Nature Photonics 11, 465 (2017).
* Ding _et al._ [2019] F. Ding, Y. Yang, and S. I. Bozhevolnyi, Dynamic metasurfaces using phase-change chalcogenides, Advanced Optical Materials 7, 1801709 (2019).
* Abdollahramezani _et al._ [2020] S. Abdollahramezani, O. Hemmatyar, H. Taghinejad, A. Krasnok, Y. Kiarashinejad, M. Zandehshahvar, A. Alù, and A. Adibi, Tunable nanophotonics enabled by chalcogenide phase-change materials, Nanophotonics 9, 1189 (2020).
* Gholipour _et al._ [2013] B. Gholipour, J. Zhang, K. F. MacDonald, D. W. Hewak, and N. I. Zheludev, An all-optical, non-volatile, bidirectional, phase-change meta-switch, Advanced Materials 25, 3050 (2013).
* Zhang _et al._ [2019] Y. Zhang, J. B. Chou, J. Li, H. Li, Q. Du, A. Yadav, S. Zhou, M. Y. Shalaginov, Z. Fang, H. Zhong, _et al._ , Broadband transparent optical phase change materials for high-performance nonvolatile photonics, Nature Communications 10, 1 (2019).
* Taghinejad _et al._ [2021] H. Taghinejad, S. Abdollahramezani, A. A. Eftekhar, T. Fan, A. H. Hosseinnia, O. Hemmatyar, A. E. Dorche, A. Gallmon, and A. Adibi, Ito-based microheaters for reversible multi-stage switching of phase-change materials: towards miniaturized beyond-binary reconfigurable integrated photonics, Optics Express 29, 20449 (2021).
* Ríos _et al._ [2015] C. Ríos, M. Stegmaier, P. Hosseini, D. Wang, T. Scherer, C. D. Wright, H. Bhaskaran, and W. H. Pernice, Integrated all-photonic non-volatile multi-level memory, Nature Photonics 9, 725 (2015).
* Abdollahramezani _et al._ [2021a] S. Abdollahramezani, O. Hemmatyar, M. Taghinejad, H. Taghinejad, A. Krasnok, A. A. Eftekhar, C. Teichrib, S. Deshmukh, M. El-Sayed, E. Pop, _et al._ , Electrically driven programmable phase-change meta-switch reaching 80% efficiency, arXiv preprint arXiv:2104.10381 (2021a).
* Tian _et al._ [2019] J. Tian, H. Luo, Y. Yang, F. Ding, Y. Qu, D. Zhao, M. Qiu, and S. I. Bozhevolnyi, Active control of anapole states by structuring the phase-change alloy ge 2 sb 2 te 5, Nature Communications 10, 1 (2019).
* Michel _et al._ [2019] A.-K. U. Michel, A. Heßler, S. Meyer, J. Pries, Y. Yu, T. Kalix, M. Lewin, J. Hanss, A. De Rose, T. W. Maß, _et al._ , Advanced optical programming of individual meta-atoms beyond the effective medium approach, Advanced Materials 31, 1901033 (2019).
* Abdollahramezani _et al._ [2021b] S. Abdollahramezani, O. Hemmatyar, M. Taghinejad, H. Taghinejad, Y. Kiarashinejad, M. Zandehshahvar, T. Fan, S. Deshmukh, A. A. Eftekhar, W. Cai, _et al._ , Dynamic hybrid metasurfaces, Nano Letters 21, 1238 (2021b).
* Wu _et al._ [2021] C. Wu, H. Yu, S. Lee, R. Peng, I. Takeuchi, and M. Li, Programmable phase-change metasurfaces on waveguides for multimode photonic convolutional neural network, Nature Communications 12, 1 (2021).
* Zheng _et al._ [2020] J. Zheng, Z. Fang, C. Wu, S. Zhu, P. Xu, J. K. Doylend, S. Deshmukh, E. Pop, S. Dunham, M. Li, _et al._ , Nonvolatile electrically reconfigurable integrated photonic switch enabled by a silicon pin diode heater, Advanced Materials 32, 2001218 (2020).
* Zhang _et al._ [2021] Y. Zhang, C. Fowler, J. Liang, B. Azhar, M. Y. Shalaginov, S. Deckoff-Jones, S. An, J. B. Chou, C. M. Roberts, V. Liberman, _et al._ , Electrically reconfigurable non-volatile metasurface using low-loss optical phase-change material, Nature Nanotechnology 16, 661 (2021).
* Wang _et al._ [2021] Y. Wang, P. Landreman, D. Schoen, K. Okabe, A. Marshall, U. Celano, H.-S. P. Wong, J. Park, and M. L. Brongersma, Electrical tuning of phase-change antennas and metasurfaces, Nature Nanotechnology 16, 667 (2021).
* Hosseini _et al._ [2014] P. Hosseini, C. D. Wright, and H. Bhaskaran, An optoelectronic framework enabled by low-dimensional phase-change films, Nature 511, 206 (2014).
* Tao _et al._ [2020] S. Tao, Q. Li, J. Wang, X. Wang, J. Cai, S. Li, W. Xu, K. Zhang, and C. Hu, Phase change materials for nonvolatile, solid-state reflective displays: From new structural design rules to enhanced color-changing performance, Advanced Optical Materials 8, 2000062 (2020).
* Yoo _et al._ [2016] S. Yoo, T. Gwon, T. Eom, S. Kim, and C. S. Hwang, Multicolor changeable optical coating by adopting multiple layers of ultrathin phase change material film, ACS Photonics 3, 1265 (2016).
* Carrillo _et al._ [2019] S. G.-C. Carrillo, L. Trimby, Y.-Y. Au, V. K. Nagareddy, G. Rodriguez-Hernandez, P. Hosseini, C. Ríos, H. Bhaskaran, and C. D. Wright, A nonvolatile phase-change metamaterial color display, Advanced Optical Materials 7, 1801782 (2019).
* de Galarreta _et al._ [2020] C. R. de Galarreta, I. Sinev, A. M. Alexeev, P. Trofimov, K. Ladutenko, S. G.-C. Carrillo, E. Gemo, A. Baldycheva, J. Bertolotti, and C. D. Wright, Reconfigurable multilevel control of hybrid all-dielectric phase-change metasurfaces, Optica 7, 476 (2020).
* Ghosh and Varma [1979] C. Ghosh and B. Varma, Optical properties of amorphous and crystalline sb2s3 thin films, Thin solid films 60, 61 (1979).
* Chen _et al._ [2015] C. Chen, W. Li, Y. Zhou, C. Chen, M. Luo, X. Liu, K. Zeng, B. Yang, C. Zhang, J. Han, _et al._ , Optical properties of amorphous and polycrystalline sb2se3 thin films prepared by thermal evaporation, Applied Physics Letters 107, 043905 (2015).
* Dong _et al._ [2019] W. Dong, H. Liu, J. K. Behera, L. Lu, R. J. Ng, K. V. Sreekanth, X. Zhou, J. K. Yang, and R. E. Simpson, Wide bandgap phase change material tuned visible photonics, Advanced Functional Materials 29, 1806181 (2019).
* Delaney _et al._ [2020] M. Delaney, I. Zeimpekis, D. Lawson, D. W. Hewak, and O. L. Muskens, A new family of ultralow loss reversible phase-change materials for photonic integrated circuits: Sb2s3 and sb2se3, Advanced Functional Materials , 2002447 (2020).
* Delaney _et al._ [2021] M. Delaney, I. Zeimpekis, H. Du, X. Yan, M. Banakar, D. J. Thomson, D. W. Hewak, and O. L. Muskens, Nonvolatile programmable silicon photonics using an ultralow-loss sb2se3 phase change material, Science Advances 7, eabg3500 (2021).
* Liu _et al._ [2020] H. Liu, W. Dong, H. Wang, L. Lu, Q. Ruan, Y. S. Tan, R. E. Simpson, and J. K. Yang, Rewritable color nanoprints in antimony trisulfide films, Science Advances 6, eabb7171 (2020).
* Lide [2004] D. R. Lide, _CRC handbook of chemistry and physics_ , Vol. 85 (CRC press, 2004).
* Rios _et al._ [2018] C. Rios, M. Stegmaier, Z. Cheng, N. Youngblood, C. D. Wright, W. H. Pernice, and H. Bhaskaran, Controlled switching of phase-change materials by evanescent-field coupling in integrated photonics, Optical Materials Express 8, 2455 (2018).
* Jackson [1999] J. D. Jackson, Classical electrodynamics (1999).
* Grahn _et al._ [2012] P. Grahn, A. Shevchenko, and M. Kaivola, Electromagnetic multipole theory for optical nanomaterials, New Journal of Physics 14, 093033 (2012).
* Rezaei _et al._ [2019] S. D. Rezaei, R. J. Hong Ng, Z. Dong, J. Ho, E. H. Koay, S. Ramakrishna, and J. K. Yang, Wide-gamut plasmonic color palettes with constant subwavelength resolution, ACS nano 13, 3580 (2019).
## Supplementary Information
## I Multipolar decomposition
Electromagnetic properties of the nanoparticles in the arrays are numerically
studied using the commercial software CST Microwave Studio™. In the
canonical basis, we perform a multipole expansion of the scattered field of the
hybrid nanoparticles into vector spherical harmonics, which form a complete
and orthogonal basis allowing the unique expansion of any vectorial field. To
calculate the electric ($a_{E}(l,m)$) and magnetic ($a_{M}(l,m)$) spherical multipole
coefficients, we project the scattered electric field $\mathbf{E}_{sca}$, sampled on a
spherical surface enclosing the nanoparticles and centered at the symmetry point
of the nanodisc, onto vector spherical harmonics according to the following
relations [48, 49]:
$\begin{split}a_{E}(l,m)=&\frac{(-i)^{l+1}kR}{h_{l}^{(1)}(kR)E_{0}\sqrt{\pi(2l+1)(l+1)l}}\\
&\int_{0}^{2\pi}\int_{0}^{\pi}Y^{*}_{lm}(\theta,\phi)\,\mathbf{r}\cdot\mathbf{E}_{sca}(\mathbf{r})\sin\theta\,d\theta\,d\phi,\end{split}$ (S.1)
$\begin{split}a_{M}(l,m)=&\frac{(-i)^{l}kR}{h_{l}^{(1)}(kR)E_{0}\sqrt{\pi(2l+1)}}\\
&\int_{0}^{2\pi}\int_{0}^{\pi}\mathbf{X}^{*}_{lm}(\theta,\phi)\cdot\mathbf{E}_{sca}(\mathbf{r})\sin\theta\,d\theta\,d\phi,\end{split}$ (S.2)
where $R$ is the radius of the enclosing sphere, $k$ is the wavenumber,
$h_{l}^{(1)}$ is the spherical Hankel function of the first kind (outgoing-wave
asymptotics), $E_{0}$ is the amplitude of the incident wave, and $Y^{*}_{lm}$
and $\mathbf{X}^{*}_{lm}$ are the scalar and vector spherical harmonics, respectively. The
integers $l$ and $m$ describe the order of the multipole (dipole, quadrupole,
…) and the amount of the z-component of angular momentum that is carried per
photon, respectively. Due to the azimuthal symmetry of the nanoparticles under
normal excitation, the amplitude of the scattering coefficients with opposite
$m$ indices are identical, i.e., $a_{E,M}(l,m)=a_{E,M}(l,-m)$.
## II Color generation
To obtain the generated colors, the International Commission on Illumination
(CIE) XYZ tristimulus values corresponding to the reflection spectra are
calculated as [11]:
$\displaystyle X=\frac{1}{k}\int{I(\lambda)R(\lambda)\bar{x}(\lambda)d\lambda},\quad Y=\frac{1}{k}\int{I(\lambda)R(\lambda)\bar{y}(\lambda)d\lambda},\quad Z=\frac{1}{k}\int{I(\lambda)R(\lambda)\bar{z}(\lambda)d\lambda},$ (S.3)
where $k$ is the normalization factor, $I(\lambda)$ is the energy distribution of
the reference light; $R(\lambda)$ is the reflection spectrum obtained from the
designed metasurface under illumination; and $\bar{x}(\lambda)$,
$\bar{y}(\lambda)$, and $\bar{z}(\lambda)$ are the CIE 1931 standard color-
matching functions (see Figure S2a). These tristimulus values are then
normalized as $x=X/(X+Y+Z)$ and $y=Y/(X+Y+Z)$, which fall between 0 and 1, to
represent the colors in the CIE 1931 chromaticity diagram.
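The calculation in Eq. (S.3) together with the chromaticity normalization can be sketched numerically as below. The three-sample color-matching values and reflectance spectrum are placeholders; a real computation uses the tabulated CIE 1931 functions on a fine wavelength grid, and the normalization factor $k$ cancels in $x$ and $y$:

```python
import numpy as np

# Numerical sketch of Eq. (S.3) plus the chromaticity normalization.
# The three-point matching functions and reflectance are placeholders;
# the normalization factor k cancels in x and y and is omitted.
lam = np.array([450.0, 550.0, 650.0])   # wavelength samples, nm
dlam = 100.0                             # grid spacing, nm
I = np.ones_like(lam)                    # flat reference illuminant I(lambda)
Rspec = np.array([0.2, 0.9, 0.3])        # example reflectance R(lambda)
xbar = np.array([0.34, 0.43, 0.28])      # placeholder color-matching functions
ybar = np.array([0.04, 0.99, 0.11])
zbar = np.array([1.77, 0.01, 0.00])

X = np.sum(I * Rspec * xbar) * dlam
Y = np.sum(I * Rspec * ybar) * dlam
Z = np.sum(I * Rspec * zbar) * dlam
x, y = X / (X + Y + Z), Y / (X + Y + Z)  # CIE 1931 chromaticity coordinates
print(round(x, 3), round(y, 3))
```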
## III Quantitative analysis on color gamut coverage, saturation maintenance
and hue variation
As shown in Fig. S4d-f, by increasing $p$, the spectral positions of the
reflectance resonances for both the amorphous (solid lines) and crystalline
(dashed lines) states red-shift. The spectral position of each of these
resonances depends on the refractive index of the constituent phase-
change material (PCM) owing to the interference between electric dipole (ED)
and magnetic dipole (MD) modes inside
the PCM nanopillars, as will be discussed later. Therefore, by switching the
state of the nanopillars from amorphous to crystalline, the central
wavelengths of the resonances red-shift in the cases of Sb2S3 and Sb2Se3 (see
Fig. S4d,e), and blue-shift in the case of GeSe3 (see Fig. S4f) because
$\Delta n_{\textrm{Sb}_{2}\textrm{S}_{3}}$, $\Delta
n_{\textrm{Sb}_{2}\textrm{Se}_{3}}>0$, while $\Delta n_{\textrm{GeSe}_{3}}<0$
(with $\Delta n=n_{\textrm{C-PCM}}-n_{\textrm{A-PCM}}$) within the visible
wavelength range (see the refractive indices in Fig. S2a,b). The actual shifts
of the resonance wavelengths are
$|\Delta\lambda_{\textrm{Sb}_{2}\textrm{S}_{3}}|<180$ nm,
$|\Delta\lambda_{\textrm{Sb}_{2}\textrm{Se}_{3}}|<200$ nm, and
$|\Delta\lambda_{\textrm{GeSe}_{3}}|<70$ nm (see Fig. S4d-f). The relative
strength of the wavelength shifts in these PCMs, i.e.
$|\Delta\lambda_{\textrm{Sb}_{2}\textrm{Se}_{3}}|>|\Delta\lambda_{\textrm{Sb}_{2}\textrm{S}_{3}}|>|\Delta\lambda_{\textrm{GeSe}_{3}}|$,
is attributed to the relative strength of the change in the real part of their
refractive indices upon the phase transition between amorphous and
crystalline, i.e. $|\Delta n_{\textrm{Sb}_{2}\textrm{Se}_{3}}|>|\Delta
n_{\textrm{Sb}_{2}\textrm{S}_{3}}|>|\Delta n_{\textrm{GeSe}_{3}}|$, as shown
in Supplementary Fig. S1. On the other hand, the sharpness of the reflectance
resonances in Fig. S4d-f mainly depends on the PCM extinction coefficients
shown as the dashed curves in Fig. S2a,b. In the case of Sb2S3 nanopillars,
the high-efficiency resonances (i.e., those with high reflectance value at the
resonance peak) in the low-loss amorphous phase are damped upon the transition
to the crystalline phase with higher absorption loss (compare solid and dashed
curves in Fig. S4d). This high absorption loss arises for both amorphous and
crystalline Sb2Se3 nanopillars, resulting in relatively low-efficiency
reflectance resonances (see Fig. S4e). In contrast, GeSe3 nanopillars remain
very low-loss across the entire visible range for both the amorphous and
crystalline phases, yielding high-efficiency resonances in both cases (see
Fig. S4f).
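The sign rule discussed above (red-shift for $\Delta n > 0$, blue-shift for $\Delta n < 0$) can be stated compactly; the $\Delta n$ magnitudes below are illustrative only:

```python
# Compact statement of the sign rule above: upon crystallization the
# resonance shift follows the sign of Delta n = n_C - n_A. The signs match
# the text; the magnitudes passed in are illustrative only.
def shift_direction(delta_n):
    """Red-shift for Delta n > 0, blue-shift for Delta n < 0."""
    return "red-shift" if delta_n > 0 else "blue-shift"

print(shift_direction(+0.5))   # Sb2S3 and Sb2Se3: Delta n > 0
print(shift_direction(-0.3))   # GeSe3: Delta n < 0
```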
For a quantitative comparison between the presented three PCMs in terms of
color generation/switching, Figs. 4g-i show the generated colors in the
amorphous (black circles) and crystalline (white squares) phases in the same
International Commission on Illumination (CIE) 1931 chromaticity coordinates
for the three PCMs in top panels, and their corresponding hue and saturation
values for amorphous (solid-circle lines) and crystalline (dashed-square
curves) phases in the bottom panels. The approach for calculating the CIE XYZ
tristimulus values of the reflectance spectra and their corresponding hue and
saturation values is given in the Supplementary Information. In terms of
the color gamut coverage, the calculated color gamut area for A-Sb2Se3
(C-Sb2Se3) is around 98.3% (43.4%) of the standard RGB (sRGB) and 72.9%
(32.2%) of the Adobe RGB, from Fig. 4g. The color gamut area for the case of
A-Sb2Se3 (C-Sb2Se3) is around 70.1% (33.3%) of the sRGB, and 52% (24.7%) of
the Adobe RGB (from Figure 4h). For the case of A-GeSe3 (C-GeSe3), a full-
range of colors with gamut area of 57.8% (90.8%) of the sRGB, and 42.9%
(67.3%) of the Adobe RGB can be obtained (from Figure 4i). Therefore, in terms
of color gamut area, Sb2S3 and GeSe3 have almost the same performance, yet
better than Sb2Se3. Moreover, these results show that our all-dielectric PCM-
based metasurfaces can generate a color gamut wider than that of state-of-the-
art plasmonic colors ($\sim 45\%$ of sRGB [50]) in the A-Sb2S3, A-Sb2Se3 and
A/C-GeSe3 cases.
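The pipeline used throughout this section (reflectance spectrum to CIE XYZ to chromaticity, hue, and saturation; detailed in Supporting Information Note I) can be sketched in a few lines. This is only an illustrative approximation, not the authors' exact procedure: it assumes an equal-energy illuminant, replaces the tabulated CIE 1931 color-matching functions with the piecewise-Gaussian fits of Wyman et al., and derives hue and saturation from linear (non-gamma-corrected) sRGB.

```python
import math
import colorsys

def cmf_xyz(lam):
    """Piecewise-Gaussian fit to the CIE 1931 color-matching functions
    (Wyman-Sloan-Shirley approximation; adequate for illustration only)."""
    def g(x, mu, s1, s2):
        s = s1 if x < mu else s2
        return math.exp(-0.5 * ((x - mu) / s) ** 2)
    xb = (1.056 * g(lam, 599.8, 37.9, 31.0) + 0.362 * g(lam, 442.0, 16.0, 26.7)
          - 0.065 * g(lam, 501.1, 20.4, 26.2))
    yb = 0.821 * g(lam, 568.8, 46.9, 40.5) + 0.286 * g(lam, 530.9, 16.3, 31.1)
    zb = 1.217 * g(lam, 437.0, 11.8, 36.0) + 0.681 * g(lam, 459.0, 26.0, 13.8)
    return xb, yb, zb

def reflectance_to_hue_sat(reflectance, lam_min=400.0, lam_max=800.0, n=401):
    """Integrate R(lambda) against the CMFs (equal-energy illuminant assumed),
    then XYZ -> linear sRGB -> HSV. Returns (hue in degrees, saturation, (x, y))."""
    X = Y = Z = 0.0
    for i in range(n):
        lam = lam_min + (lam_max - lam_min) * i / (n - 1)
        xb, yb, zb = cmf_xyz(lam)
        R = reflectance(lam)
        X += R * xb; Y += R * yb; Z += R * zb
    x_chroma, y_chroma = X / (X + Y + Z), Y / (X + Y + Z)  # CIE 1931 (x, y)
    # XYZ -> linear sRGB (standard D65 matrix), clipped to non-negative values
    r = max(0.0,  3.2406 * X - 1.5372 * Y - 0.4986 * Z)
    g = max(0.0, -0.9689 * X + 1.8758 * Y + 0.0415 * Z)
    b = max(0.0,  0.0557 * X - 0.2040 * Y + 1.0570 * Z)
    m = max(r, g, b) or 1.0
    hue, sat, _ = colorsys.rgb_to_hsv(r / m, g / m, b / m)
    return 360.0 * hue, sat, (x_chroma, y_chroma)

# A narrow reflectance resonance at 550 nm should read as a saturated green.
peak = lambda lam: math.exp(-0.5 * ((lam - 550.0) / 10.0) ** 2)
hue, sat, (cx, cy) = reflectance_to_hue_sat(peak)
```

For the 550-nm resonance this yields a hue near $120^{\circ}$ (green) with saturation close to 1, consistent with the rule stated below that narrower resonances give more saturated colors.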
In the RGB color-mixing model, the hue (H) is defined as the proportion of the
dominant wavelength (resonance wavelength in this case) with respect to other
wavelengths in the reflected light and is independent of the intensity of the
light. It simply indicates the “perceived color” seen by the human eye and ranges
from $0^{\circ}$ to $360^{\circ}$, in which $0^{\circ}$ (and $360^{\circ}$),
$120^{\circ}$ and $240^{\circ}$ represent pure red, pure green, and pure blue,
respectively (See Supplementary Fig. S3 for more details). The saturation, on
the other hand, is defined as the ratio of the intensity of the reflected
light at the resonance wavelength (associated to the perceived color) to that
of the incident white light, simply indicating the purity of a color and
ranging from 0% to 100%. Considering this definition, the narrower the
bandwidth of the reflectance resonance, the higher the saturation of the
generated color. In the context of color switching between two phases, the
performance target is to achieve highly saturated colors in both phases
with a maximum hue variation
($\Delta\textrm{H}=\textrm{H}_{\textrm{C-PCM}}-\textrm{H}_{\textrm{A-PCM}}$)
upon switching. To analyze the performance of the presented phase-transition-
based color-switching approach, the hue and saturation values of the simulated
colors in Fig. S4d-f are plotted in the bottom panels in Fig. S4g-i for both
amorphous (solid-circle lines) and crystalline (dashed-square curves) phases
of the PCMs. In terms of saturation preservation upon phase transition, GeSe3
shows high-saturation values for both amorphous and crystalline cases (due to
sharp reflectance resonances), while Sb2S3 shows highly saturated colors only
in the amorphous phase. Sb2Se3, however, demonstrates a median level of
saturation values in both amorphous and crystalline phases due to the wide
reflectance resonances. With regards to hue variation, the hues of the
generated colors in Sb2S3 and GeSe3 cases change by varying $p$ in both
amorphous and crystalline states while maintaining
$\Delta\textrm{H}<80^{\circ}$ upon phase transition. One may use this feature
to switch the color of each pixel of an image individually, with each pixel
being an Sb2S3 or GeSe3 metasurface formed by an array of as few as 5$\times$5
or 6$\times$6 nanopillars [11]. In the case of Sb2Se3, however, by changing
$p$, all the varying hue values in the amorphous phase switch to an almost
fixed hue in the crystalline phase. Using this property, one can turn off all
the pixels of an image on a display comprising Sb2Se3 metasurfaces (i.e.,
pixels) by switching the phase of the Sb2Se3 nanopillars from the amorphous
state (ON-state) to the crystalline state (OFF-state). This is a unique
feature that is absent in other approaches in previous works, e.g., the
polarization-sensitive color-switching approach [19].
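Since hue is periodic over $360^{\circ}$, the hue variation $\Delta\textrm{H}$ between the two phases is best evaluated on the color circle; otherwise a switch from $350^{\circ}$ to $10^{\circ}$ would register as $340^{\circ}$ instead of $20^{\circ}$. A minimal helper, written here as an illustrative convention rather than code from the paper:

```python
def delta_hue(h_a: float, h_c: float) -> float:
    """Minimal circular difference (degrees) between the amorphous-phase
    and crystalline-phase hues, each given in [0, 360)."""
    d = abs(h_c - h_a) % 360.0
    return min(d, 360.0 - d)

# Wrap-around example: 350 deg -> 10 deg is a 20 deg hue change, not 340 deg.
dh = delta_hue(350.0, 10.0)
```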
To provide a comparison between Sb2S3, Sb2Se3, and GeSe3 metasurfaces for
color switching applications, a spider chart is shown in Fig. S5. The figure
of merit (FOM) is defined as the maximum variation of the hue over the
refractive-index change upon the phase transition between amorphous and
crystalline, i.e., $|\Delta\textrm{Hue}|/|\Delta n|$, where $\Delta
n=n_{\textrm{A}}(\lambda_{\textrm{A}})-n_{\textrm{C}}(\lambda_{\textrm{C}})$,
with $n_{\textrm{A}}(\lambda_{\textrm{A}})$ and
$n_{\textrm{C}}(\lambda_{\textrm{C}})$ being the index of refraction in the
amorphous and crystalline phases and at the corresponding resonance
wavelengths ($\lambda_{\textrm{A}},\lambda_{\textrm{C}}$), respectively. While
high FOM is desirable, the saturation and value (i.e. the reflectance value at
the resonance peak) of the colors in both amorphous and crystalline phases
should be as high as possible. Considering all these performance measures,
GeSe3 demonstrates superior properties over Sb2S3 and Sb2Se3 when switching
from a color associated with a reflectance spectrum with a resonance peak at
$\lambda=600$ nm (chosen as the middle wavelength in the visible range from
400 nm to 800 nm) in the amorphous phase, to another color in the crystalline
phase.
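The FOM defined above reduces to a one-line computation once the hues and the refractive indices at the resonance wavelengths are known. The numbers in the usage line below are purely illustrative placeholders, not values from the paper:

```python
def fom(hue_a_deg, hue_c_deg, n_a, n_c):
    """FOM = |Delta Hue| / |Delta n|, with the hue difference taken on the
    360-degree color circle and Delta n = n_A(lambda_A) - n_C(lambda_C)."""
    d = abs(hue_c_deg - hue_a_deg) % 360.0
    delta_hue = min(d, 360.0 - d)
    return delta_hue / abs(n_a - n_c)

# Hypothetical example: an 80-degree hue swing for a refractive-index
# contrast of 0.9 gives a FOM of about 88.9 degrees per index unit.
example = fom(120.0, 200.0, 3.2, 4.1)
```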
## IV Polarization-sensitive dynamic color generation
To add the polarization-sensitivity to our color-switching approach, we also
consider elliptical nanopillars in asymmetric unit cells with different
periodicities in the x- and y-directions, i.e., $p_{x}$ and $p_{y}$ (Fig. 1b). By
varying $p_{x}$ and $p_{y}$ with a fixed ratio with respect to the major and
minor axes of the nanopillars (i.e., $d_{x,y}=\alpha\,p_{x,y}$), we generate
the color palettes shown in Fig. S6a,d,g, for the case of Sb2S3 ($p_{x,y}$
range from 310 nm to 470 nm with 40-nm increments), Sb2Se3 ($p_{x,y}$ range
from 200 nm to 400 nm with 50-nm increments), and GeSe3 ($p_{x,y}$ range from
270 nm to 430 nm with 40-nm increments), respectively (see Supplementary
Figs. S7-S9 for the full color palettes). In each figure, the top (bottom) panels
show the colors generated by the x-polarized (y-polarized) incident white
light for amorphous (left panels) and crystalline (right panels) cases. While
Sb2S3 and GeSe3 metasurfaces can each generate a full palette when both
amorphous and crystalline phases are considered (see Fig. S6a and S6g, respectively), Sb2Se3
metasurfaces cannot generate bluish colors (see Fig. S6d). This stems from the
high optical loss of Sb2Se3 within the blue range of the visible wavelengths
(see Fig. S2a,b). It is also clear that the y-polarization palettes can be
obtained by transposing the x-polarization palettes, i.e., replacing each (j,i)
element with the corresponding (i,j) element. However, this is not the case for
amorphous and crystalline palettes in Fig. S6a,d,g since the crystalline
palettes contain completely different colors from those in the amorphous
palettes. This shows the advantage of using PCMs as the number of colors in
the phase-transition-based color-switching approach is twice as many as those
in the polarization-based approach.
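The transposition relation stated above, where each (j,i) element of the x-polarization palette becomes the (i,j) element of the y-polarization palette, can be expressed directly by treating a palette as a nested list of color entries. This is a sketch of the bookkeeping only, not of the optical simulation:

```python
def transpose_palette(palette):
    """Swap (j, i) <-> (i, j): the y-polarization palette follows from the
    x-polarization one because swapping p_x and p_y mirrors the unit cell."""
    return [list(row) for row in zip(*palette)]

# Tiny 2x2 example with placeholder color labels.
pal_x = [["red", "gold"], ["teal", "blue"]]
pal_y = transpose_palette(pal_x)
```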
To analyze the effect of polarization-sensitivity in both amorphous and
crystalline cases on the reflected colors, we select five metasurfaces for
each PCM with geometrical parameters in the dashed boxes in Fig. S6a,d,g, and
plot the corresponding simulated reflectance spectra with their hue and
saturation values in the inset in Fig. S6b,e,h, respectively. It is seen that
by increasing $p_{y}$ in each box, the central wavelength of the reflectance
resonances does not experience a considerable shift for the x-polarization
(see the top panels in Fig. S6b,e,h). This leads to almost unchanged hue
values for the corresponding colors, which in turn results in a limited
trajectory in the corresponding color gamuts shown in the top panels of Fig.
S6c,f,i in which black circles (white squares) represent the colors in
amorphous (crystalline) phase. In contrast, it is observed that increasing
$p_{y}$ results in a tangible redshift in the reflectance spectra for the
y-polarization for all PCMs (see the bottom panels in Fig. S6b,e,h). This
redshift results in a relatively large hue change in all cases, except
C-Sb2Se3, as the corresponding color gamuts in the bottom panels of Fig.
S6c,f,i demonstrate. The simulated full color palettes as well as their
corresponding gamuts are provided in Figs. S7-9. Based on these simulation
results, we designed and fabricated palettes of Sb2S3 and Sb2Se3 meta-pixels
with different diameter-to-period ratios, and we show their corresponding
microscope images in Figs. S10 and S11, respectively.
Finally, in the Supplementary Note V, we show that by continuously varying the
incident polarization angle ($\varphi$) one can enable dynamic color tuning
(See Figure S12).
## V Sensitivity to the incident polarization angle
To analyze the effect of the variation of the incident polarization angle
($\varphi$) on the reflected colors, we select one metasurface for each type
of PCMs with geometrical parameters shown in Fig. S12a,d,g and change
$\varphi$ from $0^{\circ}$ (y-polarization) to $90^{\circ}$ (x-polarization).
The reflectance spectra of these metasurfaces for $\varphi=0^{\circ}$ and
$\varphi=90^{\circ}$ for both amorphous and crystalline states are plotted in
Fig. S12c,f,i. In both amorphous and crystalline states, a resonance shift of
at least 100 nm is observed, which enables us to dynamically tune the
reflected colors by varying the incident polarization angle. This
polarization-based color tunability is demonstrated in the colors in Fig.
S12b,e,h, which are generated through varying $\varphi$ from $0^{\circ}$ to
$90^{\circ}$ in a step of $15^{\circ}$. The colors in Fig. S12b,e,h and their
corresponding CIE diagrams show that using Sb2S3 and GeSe3 metasurfaces, one
can tune the colors from green to reddish purple to blue, while Sb2Se3 can
enable color tuning from dark green to red to purple.
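A simple projection model suggests why intermediate polarization angles interpolate between the two principal-axis responses: decomposing the incident field onto the x- and y-axes gives $R(\varphi)\approx R_{x}\sin^{2}\varphi+R_{y}\cos^{2}\varphi$, with $\varphi=0^{\circ}$ being the y-polarization. This neglects cross-polarized and coherent scattering terms, so it is only a rough sketch of the full-wave result, not the model used in the paper:

```python
import math

def reflectance_at_angle(r_x, r_y, phi_deg):
    """Incoherent projection model: mix the two principal-axis reflectance
    spectra with sin^2/cos^2 weights (phi = 0 is y-pol, phi = 90 is x-pol).
    Cross-polarized and coherent terms are deliberately ignored."""
    c2 = math.cos(math.radians(phi_deg)) ** 2
    return [rx * (1.0 - c2) + ry * c2 for rx, ry in zip(r_x, r_y)]

# Two toy spectra sampled at a few wavelengths (placeholder values).
r_x = [0.9, 0.1, 0.4]
r_y = [0.2, 0.8, 0.4]
mid = reflectance_at_angle(r_x, r_y, 45.0)  # halfway mix of the two spectra
```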
## VI Sensitivity to the incident angle
To analyze the effect of the incident angle ($\theta$) on the reflectance
spectrum of a metasurface (Fig. 1b), we select a metasurface with Sb2S3
nanopillars, as shown in Fig. S13a,b, and vary the angle of the incident light
from $\theta=0^{\circ}$ to $\theta=30^{\circ}$. Fig. S13c,d show the
reflection spectra for amorphous Sb2S3, and Fig. S13e,f show the results of
crystalline Sb2S3, with TE- and TM-polarized light, respectively. In the case
of TE-polarized light incident on amorphous Sb2S3 (Fig. S13c), the incident
angle has a small impact on the reflection spectrum. The intensity of the
reflected light is reduced by 20% as $\theta$ approaches $5^{\circ}$, but the
reflection spectra do not experience any redshift. The spectra resulting from
crystalline Sb2S3 are redshifted by more than 100 nm and are less intense
compared to the amorphous case, but they likewise remain largely unaffected by
the variation of the incident angle.
In the case of TM-polarized light on amorphous Sb2S3, a much greater
dependence on $\theta$ is observed from Fig. S13d,f. As $\theta$ increases,
two effects can be seen in these figures: 1) the initial peak observed at
$\theta=0^{\circ}$ begins to lose intensity and experiences a redshift, and 2)
a new peak forms and becomes more pronounced; both effects set in as $\theta$
goes beyond $5^{\circ}$. When Sb2S3 is crystalline in this case, no considerable
changes are observed for $0^{\circ}<\theta<20^{\circ}$ after which the peak
redshifts by about 100 nm at $\theta=30^{\circ}$. In addition, for
$\theta>20^{\circ}$, the second peak that was observable in the amorphous case
is not seen in the crystalline case. These results are not surprising; the
reflectance of these metasurfaces is largely due to ED and MD resonances that
are supported by the nanopillar structures, and the ED resonances are the
dominating resonances seen in the reflectance spectra. Since the component of
the electric field parallel to the top surface of the Sb2S3 nanopillars does
not change for obliquely incident TE-polarized light (see Fig. S13a), the
incident angle should not have a major effect on the output spectra.
Conversely, since this component of the electric field does change for
obliquely incident TM-polarized light (see Fig. S13b), a greater impact of
varying $\theta$ on the resulting spectra is expected.
## VII Influence of different design parameters
We also analyze the effects that the physical dimensions of the nanopillars
have on the reflection spectrum. Fig. S14a-c show the reflectance spectra of
an Sb2S3 array of nanopillars (Fig. 1b) with varying heights ($h$), periods
($p$), and diameters ($d$). A control case is chosen with $h=120$ nm, $p=390$
nm, and $d=0.6\,p$. Fig. S14a shows the effect of varying $h$ from 100 nm to
400 nm in the control case: only a few values of $h$ yield sharp reflectance
resonances. Increasing the height past 100 nm
causes a redshift in the reflection and a severe broadening of the reflection
spectrum, until it decreases around $h=300$ nm and ultimately disappears
around $h=400$ nm. Also, around $h=200$ nm, another reflection appears in the
spectrum. Increasing $h$ beyond this point causes a redshift without the same
severe broadening.
Fig. S14b shows the effect of varying $p$ from 200 nm to 500 nm in the control
case. Fig. S14b shows that increasing $p$ causes a redshift in the reflection
spectrum throughout this test case. Also, the reflected spectrum narrows by
increasing $p$ from around $p=200$ nm to around $p=400$ nm. Fig. S14c shows
the effect of varying $d$ from $d=150$ nm to $d=350$ nm: increasing $d$ from
150 nm causes a redshift in the resulting spectrum, and this peak weakens for
$d>250$ nm. However, another peak starts to appear around $d>250$ nm and
persists at larger values of $d$ in this range. This new peak
does not experience a red shift with an increase in $d$, but another,
narrower, peak starts to appear with the increase in $d$. The change from
amorphous to crystalline Sb2S3 has a nearly uniform effect in all these cases.
The phase change to crystalline severely decreases the reflectivity of the
metasurface and causes a redshift at the same time.
Figure S1: Multipolar decomposition analysis. a,b, Multipolar decomposition of
scattering cross-section in terms of electric dipole (ED, the dotted lines)
and magnetic dipole (MD, the dashed lines) for the case of a periodic array
of (a) amorphous and (b) crystalline Sb2S3 nanopillars with $h=120$ nm,
$d=0.6\,p$ in a lattice with varying periodicity of $p$ on top of a SiO2
substrate. The reflectance (R) response for each case is plotted in solid
lines. Figure S2: Optical characteristics of low-loss phase-change materials.
a,b, Real (solid lines) and imaginary (dashed lines) parts of the refractive
index of Sb2S3, Sb2Se3, and GeSe3 for (a) amorphous (A) and (b) crystalline
(C) phases. c, The absolute value of the change in the refractive index (solid
lines, $\Delta n=|n_{\textrm{C-PCM}}-n_{\textrm{A-PCM}}|$) and the extinction
coefficient (dashed lines, $\Delta k=|k_{\textrm{C-PCM}}-k_{\textrm{A-PCM}}|$)
versus the wavelength upon the transition between amorphous and crystalline
phase-states for Sb2S3, Sb2Se3 and GeSe3. Figure S3: Color generation and
characteristics. a, CIE 1931 standard color-matching functions. b, HSV color
solid cylinder [11]. Figure S4: Color switching enabled by
phase-transition of the PCM nanopillars. a-c, Schematic and geometrical
parameters of a unit cell of a polarization-insensitive PCM metasurface made
of (a) Sb2S3, (b) Sb2Se3 and (c) GeSe3 circular nanopillars with a fixed height
$h$. The periodicity of the unit cell in both x and y directions is $p$, and
the diameter of the nanopillars is $d=\alpha\,p$ with $\alpha$ being a
constant. d-f, Simulated reflectance spectra for the amorphous (solid lines)
and crystalline (dashed lines) phases and their corresponding colors for
different periodicities ($p$). The PCM in (d), (e), and (f) is Sb2S3, Sb2Se3,
and GeSe3, respectively. The curves for different values of $p$ are displaced vertically
for better visibility and comparison. The sharp resonances observed in (d-f)
are attributed to the interference between ED and MD modes inside the PCM
nanopillars. Upon the PCM phase transition, a red-shift of
$|\Delta\lambda_{\textrm{Sb}_{2}\textrm{S}_{3}}|>180$ nm and
$|\Delta\lambda_{\textrm{Sb}_{2}\textrm{Se}_{3}}|>200$ nm is observed for the
case of (d) Sb2S3 and (e) Sb2Se3, respectively, while a blue-shift of
$|\Delta\lambda_{\textrm{Ge}\textrm{Se}_{3}}|<70$ nm is observed for the case
of (f) GeSe3. g-i, Corresponding CIE 1931 chromaticity coordinates of the
reflectance spectra, and the hue and saturation values of the colors shown in
(d-f) for amorphous (black circles in the top panel and circle-solid line in
bottom panel) and crystalline (white squares in the top panel and square-
dashed line in the bottom panel) phases of the corresponding PCMs in (d-f).
Figure S5: Comparison of low-loss PCMs for color switching applications. A
spider chart that compares Sb2S3, Sb2Se3 and GeSe3 in terms of FOM (defined as
the maximum (max) of $|\Delta\textrm{Hue}|/|\Delta n|$ in which $\Delta
n=n_{\textrm{A}}(\lambda_{\textrm{A}})-n_{\textrm{C}}(\lambda_{\textrm{C}})$),
maximum saturation and maximum value (i.e. the reflectance value at the
resonance peak) in amorphous and crystalline phases at
$\lambda_{\textrm{A}}=600$ nm. Figure S6: Multiple color generation enabled by
phase-transition-based and polarization-based color switching mechanisms.
a,d,g, Generated color palettes considering different periodicities in x- and
y-directions ($p_{x}$ and $p_{y}$, respectively) for (a) Sb2S3 ($\alpha=0.6$
and $h=120$ nm), (d) Sb2Se3 ($\alpha=0.55$ and $h=120$ nm), and (g) GeSe3
($\alpha=0.55$ and $h=250$ nm). $p_{x}$ and $p_{y}$ in (a), (d) and (g) vary
with 40 nm, 50 nm, and 40 nm increments, respectively. b,e,h, Reflectance
spectra of the colors indicated by the dashed rectangular boxes shown in the
corresponding color palette in (a,d,g), respectively, with the values of hue
and saturation (sat.) in the inset. c,f,i, Corresponding color gamuts for
amorphous (black circles) and crystalline (white squares) phases of the
corresponding PCM in (a,d,g), respectively. In each figure, the upper (lower)
panel represents the results related to x-polarization (y-polarization).
Figure S7: Dynamic color generation by Sb2S3 meta-pixels. a,b, The color
palettes and c, d, corresponding CIE 1931 chromaticity diagrams generated by
Sb2S3 metasurfaces in (a, c) amorphous and (b, d) crystalline phase-states
under x-polarized normally incident white light. The lattice periodicities in
x- and y-directions vary from $p_{\textrm{x,y}}=310$ nm to
$p_{\textrm{x,y}}=470$ nm with a step of 20 nm while the diameter of the
nanopillars changes as $d_{\textrm{x,y}}=0.6\,p_{\textrm{x,y}}$, and the
height of the nanopillars is fixed at $h=120$ nm. Figure S8: Dynamic color
generation by Sb2Se3 meta-pixels. a,b, The color palettes and c, d,
corresponding CIE 1931 chromaticity diagrams generated by Sb2Se3 metasurfaces
in (a, c) amorphous and (b, d) crystalline phase-states under x-polarized
normally incident white light. The lattice periodicities in x- and
y-directions vary from $p_{\textrm{x,y}}=200$ nm to $p_{\textrm{x,y}}=400$ nm
with a step of 25 nm while the diameter of the nanopillars changes as
$d_{\textrm{x,y}}=0.55\,p_{\textrm{x,y}}$, and the height of the nanopillars
is fixed at $h=120$ nm. Figure S9: Dynamic color generation by GeSe3 meta-
pixels. a,b, The color palettes and c, d, corresponding CIE 1931 chromaticity
diagrams generated by GeSe3 metasurfaces in (a, c) amorphous and (b, d)
crystalline phase-states under x-polarized normally incident white light. The
lattice periodicities in x- and y-directions vary from $p_{\textrm{x,y}}=270$
nm to $p_{\textrm{x,y}}=430$ nm with a step of 20 nm while the diameter of the
nanopillars changes as $d_{\textrm{x,y}}=0.55\,p_{\textrm{x,y}}$, and the
height of the nanopillars is fixed at $h=250$ nm. Figure S10: Experimental
color palettes of Sb2S3 meta-pixels a,b,c, A-Sb2S3 (left) and C-Sb2S3 (right)
meta-pixels considering different periodicities in x- and y- directions
($p_{x}$ and $p_{y}$, respectively) varying with 20 nm increments while the
diameter of the nanopillars changes as (a)
$d_{\textrm{x,y}}=0.65\,p_{\textrm{x,y}}$, (b)
$d_{\textrm{x,y}}=0.55\,p_{\textrm{x,y}}$, (c)
$d_{\textrm{x,y}}=0.45\,p_{\textrm{x,y}}$, and the height of the nanopillars
is fixed at $h=120$ nm. Figure S11: Experimental color palettes of Sb2Se3
meta-pixels a,b, A-Sb2Se3 (left) and C-Sb2Se3 (right) meta-pixels considering
different periodicities in x- and y- directions ($p_{x}$ and $p_{y}$,
respectively) varying with 20 nm increments while the diameter of the
nanopillars changes as (a) $d_{\textrm{x,y}}=0.55\,p_{\textrm{x,y}}$, (b)
$d_{\textrm{x,y}}=0.65\,p_{\textrm{x,y}}$, and the height of the nanopillars
is fixed at $h=120$ nm. Sb2Se3 is sputtered in a magnetron sputtering system
using 30 W radio frequency (RF) power at a deposition pressure of 4 mTorr and
Ar flow of 30 sccm. The deposition rate for Sb2Se3 is $\sim$1 nm/min. Before
deposition, the chamber base pressure is maintained at $\sim 10^{-7}$ Torr.
Additionally, the samples are capped with 15 nm of SiO2 sputtered in situ, to
prevent oxidation during later characterization. As an aside, several pre- and
post-deposition treatments of the sputtering chamber are performed for
selenide deposition. These include cleaning the chamber followed by annealing
and O2 plasma cleaning. Figure S12: Polarization-based continuous color-
switching enabled by rotating the incident polarization angle. The asymmetric
unit cells of the polarization-sensitive metasurface with the optimized design
parameters are shown in a, d, and g, respectively, with their corresponding
variation of colors with polarization angle $\varphi$ and their color gamuts
shown in b, e, and h, respectively. The simulated reflectance spectra from c,
Sb2S3, f, Sb2Se3, and i, GeSe3 metasurfaces for x-polarization
($\varphi=90^{\circ}$) and y-polarization ($\varphi=0^{\circ}$). The
reflection-mode color response varies from reddish purple to the yellowish
green for A-Sb2S3, bluish purple to the reddish orange for C-Sb2S3, red to
dark green for A-Sb2Se3, red purple to brown for C-Sb2Se3, purple to green for
A-GeSe3, and red purple to blue for C-GeSe3. Figure S13: Analysis of the
sensitivity to the angle of incidence. The structure used in the study of the
angle sensitivity of a Sb2S3 metasurface for the case of obliquely incident
plane waves of white light for a, TE and b, TM polarizations, respectively.
c,d,e,f, The simulated reflection spectra of the metasurface, showing the
incident angle (degrees) versus wavelength (nm) for: (c) amorphous phase and
TE polarization, (d) amorphous phase and TM polarization, (e) crystalline
phase and TE polarization, (f) crystalline phase and TM polarization. Figure
S14: Analysis of the effect of different design parameters. Simulated
reflection spectrum of the Sb2S3 metasurface in Figure 2a versus: a, the
height of the constituent nanopillars, i.e., $h$, while other parameters are
fixed at $p=390$ nm and $d=0.6\,p$ for (top) amorphous and (bottom)
crystalline phases; b, period of the unit cell, i.e., $p$, with $d=0.6\,p$ and
$h=120$ nm for (top) amorphous and (bottom) crystalline phases; c, diameter of
the constituent nanopillars, i.e., $d$, with $p=390$ nm and $h=120$ nm for
(top) amorphous and (bottom) crystalline phases. Figure S15: Design strategy
for generating the dynamic image of the Cheshire Cat. The geometrical
parameters of the Sb2S3 metasurfaces used for producing each pixel of the
image of the Cheshire Cat shown in Fig. 3a in the main text
($d_{\textrm{x,y}}=0.65\,p_{\textrm{x,y}}$ and $h=120$ nm in Fig. 1b). Figure
S16: Design strategy for encryption of four different images into the phase
and polarization of Sb2S3 meta-pixels. a,c, Phase-transition-based switching
between two different images. The colors are generated by four different
metasurfaces consisting of Sb2S3 nanopillars (Fig. 1b) with periodicities
reported in the table, diameters $d_{\textrm{x,y}}=0.65\,p_{\textrm{x,y}}$,
and a fixed height $h=120$ nm. b, The definition of different zones in each
image. Figure S17: Design strategy for encryption of four different images
into the phase and polarization of Sb2Se3 and GeSe3 meta-pixels. a,
Polarization-based switching between two different images. The colors are
generated by four different metasurfaces consisting of GeSe3 nanopillars (Fig.
S3b) with periodicities reported in the table, diameters of
$d_{\textrm{x,y}}=0.55\,p_{\textrm{x,y}}$, and a fixed height $h=250$ nm. b,
The definition of different zones in each image. c, Phase-transition-based
switching between the ON-state (amorphous) and the OFF-state (crystalline).
The colors are generated by four different metasurfaces consisting of Sb2Se3
nanopillars with periodicities reported in the table, diameters
$d_{\textrm{x,y}}=0.55\,p_{\textrm{x,y}}$, and a fixed height $h=120$ nm.
Figure S18: Design strategy for encryption of four different images into the
phase and polarization of Sb2Se3 and GeSe3 meta-pixels. a, Polarization-based
switching between two different images. The colors are generated by four
different metasurfaces consisting of GeSe3 nanopillars (Fig. S3b) with
periodicities reported in the table, diameters
$d_{\textrm{x,y}}=0.55\,p_{\textrm{x,y}}$, and a fixed height $h=250$ nm. b,
The definition of different zones in each image. c, Phase-transition-based
switching between the ON-state (amorphous) and the OFF-state (crystalline).
The colors are generated by four different metasurfaces consisting of Sb2Se3
nanopillars with periodicities reported in the table, diameters
$d_{\textrm{x,y}}=0.55\,p_{\textrm{x,y}}$, and a fixed height $h=120$ nm.
Figure S19: Electrical conversion of color palettes of Sb2S3 meta-pixels using
ITO heater. a,b, A-Sb2S3 (left) and C-Sb2S3 (right) meta-pixels observed from
(a) top and (b) bottom of the sample considering different periodicities in x-
and y- directions ($p_{x}$ and $p_{y}$, respectively) varying with 20 nm
increments while the diameter of the Sb2S3 nanopillars changes as
$d_{\textrm{x,y}}=0.45\,p_{\textrm{x,y}}$, and the height of the nanopillars
is fixed at $h=120$ nm. The scale bars are 100 $\mu$m. Figure S20: Electrical
conversion of color palettes of Sb2S3 meta-pixels using ITO heater. a,b,
A-Sb2S3 (left) and C-Sb2S3 (right) meta-pixels observed from (a) top and (b)
bottom of the sample considering different periodicities in x- and y-
directions ($p_{x}$ and $p_{y}$, respectively) varying with 20 nm increments
while the diameter of the Sb2S3 nanopillars changes as
$d_{\textrm{x,y}}=0.55\,p_{\textrm{x,y}}$, and the height of the nanopillars
is fixed at $h=120$ nm. The scale bars are 100 $\mu$m. Figure S21: Electrical
conversion of color palettes of Sb2S3 meta-pixels using ITO heater. a,b,
A-Sb2S3 (left) and C-Sb2S3 (right) meta-pixels observed from (a) top and (b)
bottom of the sample considering different periodicities in x- and y-
directions ($p_{x}$ and $p_{y}$, respectively) varying with 20 nm increments
while the diameter of the Sb2S3 nanopillars changes as
$d_{\textrm{x,y}}=0.65\,p_{\textrm{x,y}}$, and the height of the nanopillars
is fixed at $h=120$ nm. The scale bars are 100 $\mu$m. Figure S22: Electrical
conversion of a Sb2S3 meta-pixel using ITO micro-heater. a-d, Microscope
images of (a,b) 100$\times$100 $\mu$m$^{2}$, and (c,d) 10$\times$10
$\mu$m$^{2}$ micro-heaters with 50$\times$50 $\mu$m$^{2}$ and 5$\times$5
$\mu$m$^{2}$ meta-pixels at the
center, respectively. e, Simulated temperature distribution in the cross-
section of the meta-pixel in (d) at the end of a 7 V pulse with 15 $\mu$s
duration. f, Real-time temperature profile at the center of the meta-pixel
upon applying the re-amorphization pulse to the micro-heater. Figure S23:
Fabrication process. Figure S24: Optical characterization setup. Figure S25:
Characterization of the anisotropic C-Sb2S3 crystals. a-d, Optical images of a
film of (a) A-Sb2S3 and (b-d) C-Sb2S3 under a microscope with (b) unpolarized,
(c) x- and (d) y-polarized incident white light. The crystallized regions at
(i) and (ii) switch from greenish to brownish colors going from x- to
y-polarization, while the colors in regions (iii) and (iv) change from
brownish to greenish, and the colors in areas (v) and (vi) remain almost unchanged.
The dark particle at the center of the images is used as the marker for
positioning. Figure S26: Characterization of the anisotropic C-Sb2Se3
crystals. a-d, Optical images of a film of (a) A-Sb2Se3 and (b-d) C-Sb2Se3 under a
microscope with (b) unpolarized, (c) x- and (d) y-polarized incident white
light. The dark particle at the center of the images is used as the marker for
positioning.
# Gravity Effects on Hawking Radiation from Charged Black Strings in Rastall
Theory
Riasat Ali ([email protected]), Department of Mathematics, GC University Faisalabad, Layyah Campus, Layyah-31200, Pakistan
Rimsha Babar ([email protected]), Division of Science and Technology, University of Education, Township, Lahore-54590, Pakistan
Muhammad Asgher ([email protected]), Department of Mathematics, The Islamia University of Bahawalpur, Bahawalpur-63100, Pakistan
Syed Asif Ali Shah ([email protected]), Department of Mathematics and Statistics, The University of Lahore, 1-Km Raiwind Road, Sultan Town, Lahore-54000, Pakistan
###### Abstract
The Rastall theory of gravity is a generalization of Einstein's theory in
which the conservation law of the energy-momentum tensor is modified. In our
work, we compute the charged black string solution in the background of
Rastall theory by applying the Newman-Janis approach, and we then study its
thermodynamical property (i.e., the Hawking temperature). Furthermore, we
investigate the graphical representation of Hawking
temperature via event horizon to check the stability conditions of charged
black strings under the influence of Rastall theory. Moreover, we examine the
modified Hawking temperature for charged black strings in Rastall theory by
taking into account the quantum gravity effects. We also discuss the physical
state of charged black strings under the effects of quantum gravity and the
spin parameter (which appears in the charged black string solution due to the
Rastall theory).
Black strings; Rastall theory; Newman-Janis algorithm; Hawking temperature.
## I Introduction
General relativity (GR) theory of Einstein, which is assumed to be the most
interesting and simplest gravity theory, obeys the conservation law of energy-
momentum tensor. Although, since its establishment researchers are looking for
different gravity theories and several modified gravity theories have been
developed. In this campaign, Rastall 1 ; 2 introduced a very interesting
potential modification of GR theory, which does not obey the standard
conservation law of the energy-momentum tensor (i.e., $T^{uv}_{;u}=0$).
Instead, a non-minimal coupling of the matter field to the space-time geometry
is introduced in the form
$T^{v}_{u;v}=\lambda R_{,u},$ (1)
here $\lambda$ represents the coupling parameter and describes the deviation
from GR. The spherically symmetric static charged as well as uncharged black
hole (BH) metric in the context of perfect fluid surrounded by Rastall gravity
theory have been analyzed 3 . Additionally, some interest has been committed
to provide the static spherically symmetric solutions of the gravitational
field equations in the background of Rastall gravity, which incorporate BH,
wormhole, and neutron-star solutions 3a ; 3b . The Reissner-Nordström BH
metric solution with cosmological constant in Rastall gravity theory has been
studied 4 .
Spallucci and Smailagic 5 analyzed regular BH solutions in the context of
Rastall gravity, concluding that a regular (singularity-free) BH solution
exists in the presence of exotic matter. Rotating BH solutions surrounded by
perfect fluid matter in Rastall theory have been analyzed 6 ; 7 . Static
spherically symmetric regular BH metrics in the generalized Rastall gravity
have been investigated 8 , where electromagnetically neutral BH solutions and
their general properties were also examined.
Rastall gravity is a generalized theory of gravity that incorporates a
coupling between geometry and matter. According to Visser, Rastall gravity is
equivalent to Einstein gravity 8a , whereas Darabi and his colleagues 9b
reached the opposite conclusion, arguing that the two theories are not
equivalent. The rotating BH solution obtained by applying the
Demiański-Newman-Janis algorithm to the electrically charged BH surrounded by
quintessence in Rastall gravity has been analyzed 9 . Furthermore, the BH mass
and the thermodynamical properties (Hawking temperature, heat capacity and
electromagnetic potential) were examined from the horizon equation.
Moradpour et al. analyzed conformally flat BH solutions in Rastall gravity, as
well as non-singular BH solutions in the background of modified Rastall
gravity 10 . Hawking radiation depends only on the BH geometry: for different
types of emitted particles one arrives at the same result. Yale 20 studied the
Hawking temperature for all particle types (fermions, scalars and spin-$1$
bosons) using the tunneling method. For static symmetric BHs, the Hawking
temperature can be derived 20 from the following formula
$T_{H}=\frac{f^{\prime}(r_{+})}{4\pi}.$ (2)
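As a quick sanity check on Eq. (2), applying it to the Schwarzschild lapse function $f(r)=1-2M/r$ (a standard textbook example, not part of this derivation) reproduces the well-known result $T_{H}=1/(8\pi M)$. A short sympy sketch:

```python
import sympy as sp

r, M = sp.symbols('r M', positive=True)

# Schwarzschild lapse function f(r) = 1 - 2M/r (illustrative example)
f = 1 - 2*M/r
r_plus = sp.solve(sp.Eq(f, 0), r)[0]            # horizon at r_+ = 2M

# Hawking temperature from Eq. (2): T_H = f'(r_+) / (4*pi)
T_H = sp.diff(f, r).subs(r, r_plus) / (4*sp.pi)

print(sp.simplify(T_H))   # -> 1/(8*pi*M)
```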
The Hawking radiation phenomenon has been investigated for a variety of BHs in
order to calculate their Hawking temperatures 11 -Sakalli:2015nza , including
cases in which quantum gravity effects are taken into account. By considering
generalized uncertainty principle (GUP) effects, it is feasible to study the
quantum-corrected thermodynamical properties of BHs 20a . The GUP provides
high-energy corrections to BH thermodynamics, which point to the possibility
of a minimal length in a quantum theory of gravity. The modified fundamental
commutation relation can be described as
$[x_{\mu},p_{\nu}]=i\hbar\delta_{\mu\nu}[1+\alpha p^{2}]$ 20b .
The expression of GUP can be defined as
$\Delta x\Delta p\geq\frac{\hbar}{2}\left[1+\alpha(\Delta p)^{2}\right],$ (3)
where $\alpha=\frac{\alpha_{0}}{M_{p}^{2}}$, with $\alpha_{0}<10^{5}$ a
dimensionless parameter and $M_{p}$ the Planck mass. Here $x_{\mu}$ and
$p_{\mu}$ denote the modified position and momentum operators, respectively,
given as
$x_{\mu}=x_{0\mu},~{}p_{\mu}=p_{0\mu}\left(1+\alpha p^{2}_{0\mu}\right),$ where
$x_{0\mu}$ and $p_{0\mu}$ are the standard position and momentum operators,
which satisfy the standard commutation relation
$[x_{0\mu},p_{0\nu}]=i\hbar\delta_{\mu\nu}$. We choose values of $\alpha$
that satisfy the GUP relation 20c . For the corrected Hawking temperature, we
retain only first-order terms in $\alpha$ in our calculation. The GUP has been
utilized for various BHs in the literature 16 -19 .
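The minimal length implied by Eq. (3) can be made explicit: saturating the inequality and minimising $\Delta x$ over $\Delta p$ gives $\Delta x_{\min}=\hbar\sqrt{\alpha}$. A small sympy sketch of this standard computation:

```python
import sympy as sp

dp, alpha, hbar = sp.symbols('Delta_p alpha hbar', positive=True)

# Saturate the GUP, Eq. (3): Delta_x = (hbar/2)(1 + alpha*Delta_p**2)/Delta_p
dx = sp.Rational(1, 2) * hbar * (1 + alpha*dp**2) / dp

# Minimise over Delta_p: the critical point gives the minimal length
crit = sp.solve(sp.diff(dx, dp), dp)[0]         # Delta_p = 1/sqrt(alpha)
dx_min = sp.simplify(dx.subs(dp, crit))         # -> hbar*sqrt(alpha)

print(crit, dx_min)
```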
The main aim of this article is to study the charged black strings solution in
the background of Rastall theory and to compare our results with previous
literature. This paper is arranged in the following way: In Sec. II, we derive
a charged black strings solution in the context of Rastall theory and also
investigate the Hawking temperature for the charged black strings. Section III
provides the graphical explanation of Hawking temperature via event horizon
and states the stability condition of charged black strings under the Rastall
theory effects. Section IV analyzes the modified Hawking temperature for
charged black strings in the Rastall theory. Section V discusses the effects
of quantum gravity and the Rastall parameter on charged black strings with the
help of graphical interpretation. Finally, Sec. VI presents the summary and
conclusions.
## II Charged Black Strings Solution in the Context of Rastall Theory
By applying the Demiański-Newman-Janis algorithm, a spin (rotation) parameter
$a$ can be introduced into a spherically symmetric solution; this provides an
extension of the Newman-Janis algorithm. Rastall gravity modifies the
conservation law of energy-momentum according to Eq. (1). In this theory, the
modified Einstein field equations can be written as jj
$G_{uv}+\kappa\lambda g_{uv}=\kappa\tilde{T}_{uv},$ (4)
where $\kappa=8\pi G_{N}$ represents the gravitational coupling of the Rastall
gravity and $G_{N}$ stands for Newton's gravitational constant.
Here, we derive a metric for charged static and stationary black strings in
the background of Rastall theory by considering the Newman-Janis algorithm.
Moreover, we investigate the Hawking temperature for the corresponding BH
metric. For this purpose, we consider the charged static black strings
solution 21
$ds^{2}=-F(r)dt^{2}+\frac{1}{F(r)}dr^{2}+r^{2}d\theta^{2}+r^{2}\beta^{2}dy^{2},$
(5)
where
$F(r)=\beta^{2}r^{2}-\frac{b}{\beta r}+\frac{c^{2}}{\beta^{2}r^{2}},$
and
$\beta^{2}=-\frac{\bigwedge}{3},~{}~{}~{}b=4MG,~{}~{}~{}c^{2}=4q^{2}G,$
where the parameters $M$ and $q$ denote the ADM mass per unit length and the
black string charge, respectively. Moreover, $\bigwedge$ denotes the
cosmological constant, while $b$ and $c$ are arbitrary parameters.
Setting $F(r)=\beta^{2}r^{2}-\frac{b}{\beta
r}+\frac{c^{2}}{\beta^{2}r^{2}}=0$, the event horizon can be evaluated in the
given form 22
$r_{+}=\frac{S^{\frac{1}{2}}b^{\frac{1}{3}}+2^{\frac{1}{2}}[S^{2}-4p^{2}-S]^{\frac{1}{4}}}{2\beta},$
(6)
where
$\displaystyle S$ $\displaystyle=$
$\displaystyle\left[0.5+0.5\left(1-\frac{256p^{6}}{27}\right)^{\frac{1}{2}}\right]^{\frac{1}{3}}+\left[0.5-0.5\left(1-\frac{256p^{6}}{27}\right)^{\frac{1}{2}}\right]^{\frac{1}{3}},$
$\displaystyle p^{2}$ $\displaystyle=$
$\displaystyle\frac{1}{b^{\frac{4}{3}}}c^{2}.$
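As a numerical cross-check of Eq. (6), the outer horizon can also be obtained directly as the largest root of $F(r)=0$. The sketch below uses sample parameter values $\beta=1$, $b=4$, $c^{2}=1$ (assumptions chosen only for illustration) and a simple bisection:

```python
# Locate the outer event horizon as the largest root of
# F(r) = beta^2 r^2 - b/(beta r) + c^2/(beta^2 r^2) = 0  (cf. Eq. (5)).
# The parameter values below are assumptions chosen for illustration.
beta, b, c2 = 1.0, 4.0, 1.0

def F(r):
    return beta**2 * r**2 - b / (beta * r) + c2 / (beta**2 * r**2)

# F(1) = -2 < 0 and F(2) = 2.25 > 0, so the outer root lies in (1, 2);
# bisect until the bracket is negligibly small
lo, hi = 1.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if F(mid) < 0:
        lo = mid
    else:
        hi = mid

r_plus = 0.5 * (lo + hi)
print(r_plus)   # -> ~1.493
```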
In order to analyze the charged black string solution in the context of
Rastall theory, we first consider a transformation of the black string metric
Eq. (5) from coordinates $(t,r,\theta,y)$ to $(u,r,\theta,y)$,
$\displaystyle du=dt-\frac{dr}{F(r)},$ (7)
under which Eq. (5) becomes
$ds^{2}=-F(r)du^{2}-2dudr+r^{2}d\theta^{2}+r^{2}\beta^{2}dy^{2}.$ (8)
The components of the inverse metric can be given as
$g^{ur}=g^{ru}=-1,~{}~{}g^{rr}=F,~{}~{}g^{\theta\theta}=\frac{1}{r^{2}},~{}~{}g^{yy}=\frac{1}{r^{2}\beta^{2}}.$
(9)
The inverse metric in the frame of null tetrad can be expressed as
$\displaystyle
g^{\mu\nu}=-l^{\nu}n^{\mu}-l^{\mu}n^{\nu}+m^{\mu}\bar{m}^{\nu}+m^{\nu}\bar{m}^{\mu}.$
(10)
The corresponding elements for null tetrad can be defined in the form
$\displaystyle l^{\mu}$ $\displaystyle=$
$\displaystyle\delta_{r}^{\mu},~{}~{}~{}n^{\mu}=\delta_{u}^{\mu}-\frac{1}{2}F\delta_{r}^{\mu},$
$\displaystyle m^{\mu}$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2}r}\delta_{\theta}^{\mu}+\frac{i}{\sqrt{2}r\beta}\delta_{y}^{\mu},$
$\displaystyle\bar{m}^{\mu}$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2}r}\delta_{\theta}^{\mu}-\frac{i}{\sqrt{2}r\beta}\delta_{y}^{\mu},$
At any point of the black string metric, the null tetrad vectors satisfy the
relations
$l_{\mu}l^{\mu}=n_{\mu}n^{\mu}=m_{\mu}m^{\mu}=l_{\mu}m^{\mu}=n_{\mu}m^{\mu}=0,$
and
$l_{\mu}n^{\mu}=-m_{\mu}\bar{m}^{\mu}=1.$
In the $(u,r)$ plane, the coordinate transformation can be defined as
$\displaystyle u$ $\displaystyle\rightarrow$ $\displaystyle u-ia\cos\theta,$
$\displaystyle r$ $\displaystyle\rightarrow$ $\displaystyle r+ia\cos\theta,$
Moreover, we analyze the following transformations
$F(r)\rightarrow f(r,a,\theta),$ (11)
and
$r^{2}+a^{2}\cos^{2}\theta=\Sigma^{2}.$ (12)
In the $(u,r)$ plane, the null vectors take the form
$\displaystyle l^{\mu}$ $\displaystyle=$
$\displaystyle\delta_{r}^{\mu},~{}~{}~{}n^{\mu}=\delta_{u}^{\mu}-\frac{1}{2}f\delta_{r}^{\mu},$
$\displaystyle m^{\mu}$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2}r}\left(\delta_{\theta}^{\mu}+ia\beta(\delta_{u}^{\mu}-\delta_{r}^{\mu})+\frac{i}{\beta}\delta_{y}^{\mu}\right),$
$\displaystyle\bar{m}^{\mu}$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2}r}\left(\delta_{\theta}^{\mu}-ia\beta(\delta_{u}^{\mu}-\delta_{r}^{\mu})-\frac{i}{\beta}\delta_{y}^{\mu}\right).$
According to the null tetrad definition, the non-zero components of the
inverse metric $g^{\mu\nu}$ in the $(u,r,\theta,y)$ coordinates can be derived as
$\displaystyle g^{uu}$ $\displaystyle=$
$\displaystyle\frac{a^{2}\beta^{2}}{\Sigma^{2}},~{}~{}~{}g^{ur}=g^{ru}=-1-\frac{a^{2}\beta^{2}}{\Sigma^{2}},~{}~{}~{}g^{rr}=f(r,\theta)+\frac{a^{2}\beta^{2}}{\Sigma^{2}},~{}~{}~{}$
$\displaystyle g^{yy}$ $\displaystyle=$
$\displaystyle\frac{1}{\Sigma^{2}\beta^{2}},~{}~{}~{}g^{uy}=g^{yu}=\frac{a}{\Sigma^{2}},~{}~{}~{}g^{ry}=g^{yr}=-\frac{a}{\Sigma^{2}},~{}~{}~{}g^{\theta\theta}=\frac{1}{\Sigma^{2}},$
here
$f(r,\theta)=\frac{\beta^{2}r^{4}-\frac{4Mr}{\beta}+\frac{4q^{2}}{\beta^{2}}}{\Sigma^{2}}.$
(13)
Furthermore, we apply a coordinate transformation from $(u,r,\theta,y)$ back
to $(t,r,\theta,y)$ coordinates of the form
$du=dt+\Lambda(r)dr,~{}~{}~{}dy=dy+h(r)dr,$ (14)
here
$\displaystyle\Lambda(r)$ $\displaystyle=$
$\displaystyle\frac{r^{2}+a^{2}}{r^{2}F+a^{2}},$ $\displaystyle h(r)$
$\displaystyle=$ $\displaystyle-\frac{a}{r^{2}F+a^{2}}.$
Finally, we compute the black strings metric in the background of Rastall
theory under $(t,r,\theta,y)$ coordinates in the following form
$\displaystyle ds^{2}$ $\displaystyle=$
$\displaystyle-\left(\frac{\beta^{2}r^{4}-\frac{4Mr}{\beta}+\frac{4q^{2}}{\beta^{2}}}{\Sigma^{2}}\right)dt^{2}-2a\beta^{2}\left(1-\frac{\beta^{2}r^{4}-\frac{4Mr}{\beta}+\frac{4q^{2}}{\beta^{2}}}{\Sigma^{2}}\right)dtdy+\frac{\Sigma^{2}}{\Delta_{r}}dr^{2}$
(15) $\displaystyle+$
$\displaystyle\Sigma^{2}d\theta^{2}+\frac{a^{2}\left[\Sigma^{4}+a^{2}(-4q^{2}+(4Mr-r^{4}\beta^{3}+2\beta\Sigma^{2})\beta)\right]}{\Sigma^{2}}dy^{2},$
where
$\Delta_{r}=\beta^{2}r^{4}-\frac{4Mr}{\beta}+\frac{4q^{2}}{\beta^{2}}.$
The Hawking temperature has been commonly computed from this generalized
formula in the previous literature. Using Eq. (2), the Hawking temperature can
be evaluated as
$T_{H}=\frac{2\beta^{4}r^{5}_{+}+4M\beta
r^{2}_{+}-8r_{+}q^{2}+a^{2}(4\beta^{4}r^{3}_{+}-4M\beta)}{4\pi\beta^{2}(r^{2}_{+}+a^{2})^{2}}.$
(16)
The temperature $T_{H}$ depends on the cosmological constant $\bigwedge$
(i.e., $\beta^{2}=-\bigwedge/3$), the spin parameter $a$, the black string
mass $M$ and the black string charge $q$. It is worth mentioning that for
$a=0$ we recover the Hawking temperature of charged black strings 21 , which
is independent of the spin parameter:
$T_{H}=\frac{1}{4\pi}\left[2\beta^{2}r_{+}+\frac{4M}{\beta
r^{2}_{+}}-\frac{8q^{2}}{\beta^{2}r^{3}_{+}}\right].$ (17)
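The consistency of Eqs. (16) and (17) in the $a\rightarrow 0$ limit can be verified symbolically. A small sympy check:

```python
import sympy as sp

r, M, q, beta, a = sp.symbols('r M q beta a', positive=True)

# Eq. (16): Hawking temperature of the rotating charged black string
# (here r stands for the horizon radius r_+)
T16 = (2*beta**4*r**5 + 4*M*beta*r**2 - 8*r*q**2
       + a**2*(4*beta**4*r**3 - 4*M*beta)) / (4*sp.pi*beta**2*(r**2 + a**2)**2)

# Eq. (17): temperature of the static (a = 0) charged black string
T17 = (2*beta**2*r + 4*M/(beta*r**2) - 8*q**2/(beta**2*r**3)) / (4*sp.pi)

# The a -> 0 limit of Eq. (16) reproduces Eq. (17)
print(sp.simplify(T16.subs(a, 0) - T17))   # -> 0
```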
## III Graphical Analysis
This section presents the graphical behavior of $T_{H}$ with respect to the
horizon $r_{+}$. We discuss the physical significance of the graphs under the
influence of the spin parameter and study the stability of the corresponding
charged black strings. According to Hawking's phenomenon, as the temperature
increases and more radiation is emitted, the BH radius decreases; this
behavior indicates BH stability.
In Fig. 1: (i) shows the behavior of $T_{H}$ for fixed $M=100$,
$\beta=-0.001$, $a=9$ and varying values of the BH charge $q$. Note that the
temperature $T_{H}$ slowly decreases with increasing $r_{+}$ in the range
$0\leq r_{+}\leq 8$. This behavior indicates the stability of the BH.
In (ii), one can observe the behavior of $T_{H}$ for fixed $M=200$,
$\beta=-0.0005$, $q=0.1$ and varying values of the spin parameter $a$. Here,
$T_{H}$ decreases exponentially as $r_{+}$ increases. Moreover, in the range
$3.1<r_{+}<3.3$, the temperature remains the same for the various values of
$a$. The physical behavior of $T_{H}$ in the range $0\leq r_{+}\leq 5$
guarantees the stable condition of the BH.
Figure 1: Hawking temperature $T_{H}$ versus event horizon $r_{+}$.
## IV Corrected Temperature for Charged Black Strings in Rastall Theory
This section analyzes the quantum gravity effects on the Hawking temperature
of charged black strings in the Rastall theory for massive vector particles.
To this end, we write Eq. (15) in the following form
$\displaystyle ds^{2}$ $\displaystyle=$ $\displaystyle-
Adt^{2}+Bdr^{2}+Cd\theta^{2}+Ddy^{2}+2Edtdy,$ (18)
where
$\displaystyle A$ $\displaystyle=$
$\displaystyle\left(\frac{\beta^{2}r^{4}-\frac{4Mr}{\beta}+\frac{4q^{2}}{\beta^{2}}}{\Sigma^{2}}\right),~{}~{}D=\frac{a^{2}\left[\Sigma^{4}+a^{2}(-4q^{2}+(4Mr-r^{4}\beta^{3}+2\beta\Sigma^{2})\beta)\right]}{\Sigma^{2}},$
$\displaystyle B$ $\displaystyle=$
$\displaystyle\frac{\Sigma^{2}}{\Delta_{r}},~{}~{}~{}~{}~{}~{}~{}~{}~{}C=\Sigma^{2},~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}E=-a\beta^{2}\left(1-\frac{\beta^{2}r^{4}-\frac{4Mr}{\beta}+\frac{4q^{2}}{\beta^{2}}}{\Sigma^{2}}\right).$
The equation of wave motion is defined as 19
$\displaystyle\partial_{\mu}(\sqrt{-g}\chi^{\nu\mu})+\sqrt{-g}\frac{m^{2}}{\hbar^{2}}\chi^{\nu}+\sqrt{-g}\frac{i}{\hbar}A_{\mu}\chi^{\nu\mu}+\sqrt{-g}\frac{i}{\hbar}eF^{\nu\mu}\chi_{\mu}+\alpha\hbar^{2}\partial_{0}\partial_{0}\partial_{0}(\sqrt{-g}g^{00}\chi^{0\nu})$
$\displaystyle-\alpha\hbar^{2}\partial_{i}\partial_{i}\partial_{i}(\sqrt{-g}g^{ii}\chi^{i\nu})=0,$
(19)
where $g$ is the determinant of the metric tensor, $\chi^{\nu\mu}$
represents the anti-symmetric tensor and $m$ is the particle mass, with
$\displaystyle\chi_{\nu\mu}$ $\displaystyle=$
$\displaystyle(1-\alpha{\hbar^{2}\partial_{\nu}^{2}})\partial_{\nu}\chi_{\mu}-(1-\alpha{\hbar^{2}\partial_{\mu}^{2}})\partial_{\mu}\chi_{\nu}+(1-\alpha{\hbar^{2}\partial_{\nu}^{2}})\frac{i}{\hbar}eA_{\nu}\chi_{\mu}-(1-\alpha{\hbar^{2}}\partial_{\nu}^{2})\frac{i}{\hbar}eA_{\mu}\chi_{\nu},$
$\displaystyle F_{\nu\mu}$ $\displaystyle=$
$\displaystyle\nabla_{\nu}A_{\mu}-\nabla_{\mu}A_{\nu},$
where $\alpha$, $A_{\mu}$, $e$ and $\nabla_{\mu}$ represent the dimensionless
positive (GUP) parameter, the vector potential, the particle charge and the
covariant derivative, respectively. The non-zero components of the
anti-symmetric tensor can be computed as
$\displaystyle\chi^{0}=\frac{-D\chi_{0}+E\chi_{3}}{AD+E^{2}},~{}~{}~{}\chi^{1}=\frac{1}{B}\chi_{1},~{}~{}~{}\chi^{2}=\frac{1}{C}\chi_{2},~{}~{}~{}\chi^{3}=\frac{E\chi_{0}+A\chi_{3}}{AD+E^{2}},~{}~{}\chi^{12}=\frac{1}{BC}\chi_{12},~{}\chi^{13}=\frac{1}{BAD+E^{2}}\chi_{13},$
$\displaystyle\chi^{01}=\frac{-D\chi_{01}+E\chi_{13}}{B(AD+E^{2})},~{}~{}~{}\chi^{02}=\frac{-D\chi_{02}}{C(AD+E^{2})},~{}~{}~{}\chi^{03}=\frac{(-AD+A^{2})\chi_{03}}{(AD+E^{2})^{2}},~{}~{}\chi^{23}=\frac{E\chi_{02}+A\chi_{23}}{C(AD+E^{2})},$
The WKB approximation is defined as
$\chi_{\nu}=c_{\nu}\exp\left[\frac{i}{\hbar}\Theta(t,r,\theta,\phi)\right],$
(20)
where
$\Theta(t,r,\theta,\phi)=\Theta_{0}(t,r,\theta,\phi)+{\hbar}\Theta_{1}(t,r,\theta,\phi)+{\hbar}^{2}\Theta_{2}(t,r,\theta,\phi)+....$
(21)
Neglecting the higher-order terms and substituting all the values into
Eq. (19), we obtain the set of wave equations
$\displaystyle+$
$\displaystyle\frac{D}{B(AD+E^{2})}\Big{[}c_{1}(\partial_{0}\Theta_{0})(\partial_{1}\Theta_{0})+\alpha
c_{1}(\partial_{0}\Theta_{0})^{3}(\partial_{1}\Theta_{0})-c_{0}(\partial_{1}\Theta_{0})^{2}-\alpha
c_{0}(\partial_{1}\Theta_{0})^{4}+c_{1}eA_{0}(\partial_{1}\Theta_{0})$
$\displaystyle+$ $\displaystyle c_{1}\alpha
eA_{0}(\partial_{0}\Theta_{0})^{2}(\partial_{1}\Theta_{0})\Big{]}-\frac{E}{B(AD+E^{2})}\Big{[}c_{3}(\partial_{1}\Theta_{0})^{2}+\alpha
c_{3}(\partial_{1}\Theta_{0})^{4}-c_{1}(\partial_{1}\Theta_{0})(\partial_{3}\Theta_{0})-\alpha
c_{1}(\partial_{1}\Theta_{0})(\partial_{3}\Theta_{0})^{2}\Big{]}$
$\displaystyle+$
$\displaystyle\frac{D}{C(AD+E^{2})}\Big{[}c_{2}(\partial_{0}\Theta_{0})(\partial_{2}\Theta_{0})+\alpha
c_{2}(\partial_{0}\Theta_{0})^{3}(\partial_{2}\Theta_{0})-c_{0}(\partial_{2}\Theta_{0})^{2}-\alpha
c_{0}(\partial_{2}\Theta_{0})^{4}+c_{2}eA_{0}(\partial_{2}\Theta_{0})$
$\displaystyle+$ $\displaystyle
c_{2}eA_{0}\alpha(\partial_{0}\Theta_{0})^{2}(\partial_{1}\Theta_{0})\Big{]}+\frac{AD}{(AD+E^{2})^{2}}\Big{[}c_{3}(\partial_{0}\Theta_{0})(\partial_{3}\Theta_{0})+\alpha
c_{3}(\partial_{0}\Theta_{0})^{3}(\partial_{3}\Theta_{0})-c_{0}(\partial_{3}\Theta_{0})^{2}$
$\displaystyle-$ $\displaystyle\alpha
c_{0}(\partial_{3}\Theta_{0})^{4}+c_{3}eA_{0}(\partial_{3}\Theta_{0})+c_{3}eA_{0}(\partial_{0}\Theta_{0})^{2}(\partial_{3}\Theta_{0})\Big{]}-m^{2}\frac{\tilde{Dc_{0}}-\tilde{Ec_{3}}}{(AD+E^{2})}=0,$
$\displaystyle-$
$\displaystyle\frac{D}{B(AD+E^{2})}\Big{[}c_{1}(\partial_{0}\Theta_{0})^{2}+\alpha
c_{1}(\partial_{0}\Theta_{0})^{4}-c_{0}(\partial_{0}\Theta_{0})(\partial_{1}\Theta_{0})-\alpha
c_{0}(\partial_{0}\Theta_{0})(\partial_{1}\Theta_{0})^{3}+c_{1}eA_{0}(\partial_{0}\Theta_{0})$
$\displaystyle+$ $\displaystyle\alpha
c_{1}eA_{0}(\partial_{0}\Theta_{0})^{3}\Big{]}+\frac{E}{B(AD+E^{2})}\Big{[}c_{3}(\partial_{0}\Theta_{0})(\partial_{1}\Theta_{0})+\alpha
c_{3}(\partial_{0}\Theta_{0})(\partial_{1}\Theta_{0})^{3}-c_{1}(\partial_{0}\Theta_{0})(\partial_{3}\Theta_{0})-\alpha
c_{1}(\partial_{0}\Theta_{0})(\partial_{3}\Theta_{0})^{3}\Big{]}$
$\displaystyle+$
$\displaystyle\frac{1}{BC}\Big{[}c_{2}(\partial_{1}\Theta_{0})(\partial_{2}\Theta_{0})+\alpha
c_{2}(\partial_{1}\Theta_{0})(\partial_{2}\Theta_{0})^{3}-c_{1}(\partial_{2}\Theta_{0})^{2}-\alpha
c_{1}(\partial_{2}\Theta_{0})^{4}\Big{]}+\frac{1}{B(AD+E^{2})}\Big{[}c_{3}(\partial_{1}\Theta_{0})(\partial_{3}\Theta_{0})+\alpha
c_{3}$ $\displaystyle\times$
$\displaystyle(\partial_{1}\Theta_{0})(\partial_{3}\Theta_{0})^{3}-c_{1}(\partial_{3}\Theta_{0})^{2}-\alpha
c_{1}(\partial_{3}\Theta_{0})^{4}\Big{]}+\frac{eA_{0}D}{B(AD+E^{2})}\Big{[}c_{1}(\partial_{0}\Theta_{0})+\alpha
c_{1}(\partial_{0}\Theta_{0})^{3}-c_{0}(\partial_{1}\Theta_{0})-\alpha
c_{0}(\partial_{1}\Theta_{0})^{3}$ $\displaystyle+$ $\displaystyle
eA_{0}c_{1}+\alpha
c_{1}eA_{0}(\partial_{0}\Theta_{0})^{2})\Big{]}+\frac{eA_{0}E}{B(AD+E^{2})}\Big{[}c_{3}(\partial_{1}\Theta_{0})+\alpha
c_{3}(\partial_{1}\Theta_{0})^{3}-c_{1}(\partial_{3}\Theta_{0})-\alpha
c_{1}(\partial_{1}\Theta_{0})^{3}\Big{]}-\frac{m^{2}c_{1}}{B}=0,$ (23)
$\displaystyle+$
$\displaystyle\frac{D}{C(AD+E^{2})}\Big{[}c_{2}(\partial_{0}\Theta_{0})^{2}+\alpha
c_{2}(\partial_{0}\Theta_{0})^{4}-c_{0}(\partial_{0}\Theta_{0})(\partial_{2}\Theta_{0})-\alpha
c_{0}(\partial_{0}\Theta_{0})(\partial_{2}\Theta_{0})^{3}+c_{2}eA_{0}(\partial_{0}\Theta_{0})+\alpha
c_{2}eA_{0}(\partial_{0}\Theta_{0})^{3}\Big{]}$ $\displaystyle+$
$\displaystyle\frac{1}{BC}\Big{[}c_{2}(\partial_{1}\Theta_{0})^{2}+\alpha
c_{2}(\partial_{1}\Theta_{0})^{4}-c_{1}(\partial_{1}\Theta_{0})(\partial_{2}\Theta_{0})-\alpha
c_{1}(\partial_{1}\Theta_{0})(\partial_{2}\Theta_{0})^{3}\Big{]}-\frac{E}{C(AD+E^{2})}\Big{[}c_{2}(\partial_{0}\Theta_{0})(\partial_{3}\Theta_{0})$
$\displaystyle+$ $\displaystyle\alpha
c_{2}(\partial_{0}\Theta_{0})^{3}(\partial_{3}\Theta_{0})-c_{0}(\partial_{0}\Theta_{0})(\partial_{3}\Theta_{0})-\alpha
c_{0}(\partial_{0}\Theta_{0})^{3}(\partial_{3}\Theta_{0})+c_{2}eA_{0}(\partial_{3}\Theta_{0})+\alpha
c_{2}eA_{0}(\partial_{3}\Theta_{0})^{3}\Big{]}$ $\displaystyle+$
$\displaystyle\frac{A}{C(AD+E^{2})}\Big{[}c_{3}(\partial_{2}\Theta_{0})(\partial_{3}\Theta_{0})+\alpha
c_{3}(\partial_{2}\Theta_{0})^{3}(\partial_{3}\Theta_{0})-c_{2}(\partial_{3}\Theta_{0})^{2}-\alpha
c_{2}(\partial_{3}\Theta_{0})^{4}\Big{]}-\frac{m^{2}c_{2}}{C}$
$\displaystyle+$
$\displaystyle\frac{eA_{0}D}{C(AD+E^{2})}\Big{[}c_{2}(\partial_{0}\Theta_{0})+\alpha
c_{2}(\partial_{0}\Theta_{0})^{3}-c_{0}(\partial_{2}\Theta_{0})-\alpha
c_{0}(\partial_{2}\Theta_{0})^{3}+c_{2}eA_{0}+c_{2}\alpha
eA_{0}(\partial_{0}\Theta_{0})^{2}\Big{]}=0,$ $\displaystyle+$
$\displaystyle\frac{(AD)-A^{2}}{(AD+E^{2})^{2}}\Big{[}c_{3}(\partial_{0}\Theta_{0})^{2}+\alpha
c_{3}(\partial_{0}\Theta_{0})^{4}-c_{0}(\partial_{0}\Theta_{0})(\partial_{3}\Theta_{0})-\alpha
c_{0}(\partial_{0}\Theta_{0})(\partial_{3}\Theta_{0})^{3}+{eA_{0}c_{3}}(\partial_{0}\Theta_{0})$
$\displaystyle+$ $\displaystyle\alpha
c_{3}eA_{0}(\partial_{0}\Theta_{0})^{3}\Big{]}-\frac{D}{C(AD+E^{2})}\Big{[}c_{3}(\partial_{1}\Theta_{0})^{2}+\alpha
c_{3}(\partial_{1}\Theta_{0})^{4}-c_{1}(\partial_{1}\Theta_{0})(\partial_{3}\Theta_{0})-\alpha
c_{1}(\partial_{1}\Theta_{0})(\partial_{3}\Theta_{0})^{3}\Big{]}$
$\displaystyle-$
$\displaystyle\frac{E}{C(AD+E^{2})}\Big{[}c_{2}(\partial_{0}\Theta_{0})(\partial_{2}\Theta_{0})+\alpha
c_{2}(\partial_{0}\Theta_{0})^{3}(\partial_{2}\Theta_{0})-c_{0}(\partial_{2}\Theta_{0})^{2}+\alpha
c_{0}(\partial_{2}\Theta_{0})^{4}+{eA_{0}c_{2}}(\partial_{2}\Theta_{0})+\alpha
c_{2}eA_{0}$ $\displaystyle\times$
$\displaystyle(\partial_{0}\Theta_{0})^{2}(\partial_{2}\Theta_{0})\Big{]}-\frac{eA_{0}A}{C(AD+E^{2})}\Big{[}c_{3}(\partial_{2}\Theta_{0})^{2}+\alpha
c_{3}(\partial_{2}\Theta_{0})^{4}-c_{2}(\partial_{2}\Theta_{0})(\partial_{3}\Theta_{0})-\alpha
c_{2}(\partial_{0}\Theta_{0})(\partial_{3}\Theta_{0})^{3}\Big{]}$
$\displaystyle+$
$\displaystyle\frac{eA_{0}(AD)-A^{2}}{(AD+E^{2})^{2}}\Big{[}c_{3}(\partial_{0}\Theta_{0})+\alpha
c_{3}(\partial_{0}\Theta_{0})^{3}-c_{0}(\partial_{3}\Theta_{0})-\alpha
c_{0}(\partial_{3}\Theta_{0})^{3}+c_{3}eA_{0}+\alpha
eA_{0}(\partial_{0}\Theta_{0})^{2}\Big{]}$ $\displaystyle-$
$\displaystyle\frac{m^{2}(Ec_{0}-Ac_{3}}{(AD+E^{2})}=0.$ (25)
Using the separation of variables technique, we can choose
$\Theta_{0}=-\acute{E}t+W(r)+J\phi+\nu(\theta),$ (26)
where $\acute{E}=(E-j\omega)$, $E$ denotes the energy of the particle and $J$
represents the particle's angular momentum corresponding to the angle $\phi$.
After substituting Eq. (26) into the set of wave equations, we obtain a
$4\times 4$ matrix
$\mathcal{Z}(c_{0},c_{1},c_{2},c_{3})^{T}=0,$
whose components are given as follows:
$\displaystyle\mathcal{Z}_{00}$ $\displaystyle=$
$\displaystyle\frac{\tilde{-D}}{B(AD+E^{2})}\Big{[}W_{1}^{2}+\alpha
W_{1}^{4}\Big{]}-\frac{D}{C(AD+E^{2})}\Big{[}J^{2}+\alpha
J^{4}\Big{]},-\frac{AD}{(AD+E^{2})^{2}}\Big{[}\nu_{1}^{2}+\alpha\nu_{1}^{4}\Big{]}-\frac{m^{2}D}{(AD+E^{2})},$
$\displaystyle\mathcal{Z}_{01}$ $\displaystyle=$
$\displaystyle\frac{\tilde{-D}}{B(AD+E^{2})}\Big{[}\acute{E}+\alpha\acute{E}^{3}+eA_{0}+\alpha
eA_{0}\acute{E}^{2}\Big{]}W_{1}+\frac{E}{B(AD+E^{2})}+\Big{[}\nu_{1}+\alpha\nu_{1}^{3}\Big{]},$
$\displaystyle\mathcal{Z}_{02}$ $\displaystyle=$
$\displaystyle\frac{\tilde{-D}}{C(AD+E^{2})}\Big{[}\acute{E}+\alpha\acute{E}^{3}-eA_{0}-\alpha
eA_{0}\acute{E}^{2}\Big{]}J,$ $\displaystyle\mathcal{Z}_{03}$ $\displaystyle=$
$\displaystyle\frac{\tilde{-E}}{B(AD+E^{2})}\Big{[}W_{1}^{2}+\alpha
W_{1}^{4}\Big{]}-\frac{AD}{C(AD+E^{2})^{2}}\Big{[}\acute{E}+\alpha\acute{E}^{3}-eA_{0}-\alpha
eA_{0}\acute{E}^{2}\Big{]}\nu_{1}+\frac{m^{2}E}{(AD+E^{2})^{2}},$
$\displaystyle\mathcal{Z}_{11}$ $\displaystyle=$
$\displaystyle\frac{\tilde{-D}}{B(AD+E^{2})}\Big{[}\acute{E}^{2}+\alpha\acute{E}^{4}-eA_{0}\acute{E}-\alpha
eA_{0}\acute{E}W_{1}^{2}\Big{]}+\frac{E}{B(AD+E^{2})}-\frac{m^{2}}{B}$
$\displaystyle+$
$\displaystyle\Big{[}\nu_{1}+\alpha\nu_{1}^{3}\Big{]}\acute{E}-\frac{1}{BC}\Big{[}J^{2}+\alpha
J^{4}\Big{]}-\frac{1}{B(AD+E^{2})}\Big{[}\nu_{1}+\alpha\nu_{1}^{3}\Big{]}+\frac{eA_{0}E}{B(AD+E^{2})}\Big{[}\nu_{1}+\alpha\nu_{1}^{3}\Big{]}$
$\displaystyle-$
$\displaystyle\frac{eA_{0}D}{B(AD+E^{2})}\Big{[}\acute{E}+\alpha\acute{E}^{3}-eA_{0}-\alpha
eA_{0}\acute{E}^{2}\Big{]},~{}~{}~{}~{}~{}~{}\mathcal{Z}_{12}=\frac{1}{BC}[W_{1}+\alpha
W_{1}^{3}]J,$ $\displaystyle\mathcal{Z}_{13}$ $\displaystyle=$
$\displaystyle\frac{\tilde{-E}}{B(AD+E^{2})}\Big{[}W_{1}+\alpha
W_{1}^{3}\Big{]}\acute{E}+\frac{1}{B(AD+E^{2})^{2}}\Big{[}W_{1}+\alpha
W_{1}^{3}\Big{]}\nu_{1}+\frac{EeA_{0}}{B(AD+E^{2})}\Big{[}W_{1}+\alpha
W_{1}^{3}\Big{]},$ $\displaystyle\mathcal{Z}_{22}$ $\displaystyle=$
$\displaystyle\frac{D}{C(AD+E^{2})}\Big{[}\acute{E}^{2}+\alpha\acute{E}^{4}-eA_{0}\acute{E}-\alpha
eA_{0}\acute{E}\Big{]}-\frac{1}{BC}-\frac{m^{2}}{C}$ $\displaystyle-$
$\displaystyle\frac{A}{C(AD+E^{2})}\Big{[}\nu_{1}^{2}+\alpha\nu_{1}^{4}\Big{]}-\frac{eA_{0}D}{C(AD+E^{2})}\Big{[}\acute{E}+\alpha\acute{E}^{3}-eA_{0}-\alpha
eA_{0}\acute{E}^{2}\Big{]}$ $\displaystyle+$
$\displaystyle\frac{E}{C(AD+E^{2})}\Big{[}\acute{E}+\alpha\acute{E}^{3}-eA_{0}-\alpha
eA_{0}\acute{E}^{2}\Big{]}\nu_{1},$ $\displaystyle\mathcal{Z}_{23}$
$\displaystyle=$ $\displaystyle\frac{A}{C(AD+E^{2})}\Big{[}J+\alpha
J^{3}\Big{]}\nu_{1},~{}~{}~{}~{}~{}~{}\mathcal{Z}_{31}=\frac{1}{B(AD+E^{2})}\Big{[}\nu_{1}+\alpha\nu_{1}^{3}\Big{]}W_{1},$
$\displaystyle\mathcal{Z}_{33}$ $\displaystyle=$
$\displaystyle\frac{(AD-\tilde{A^{2}})}{(AD+E^{2})}\Big{[}\acute{E}^{2}+\alpha\acute{E}^{4}-eA_{0}\acute{E}-\alpha
eA_{0}\acute{E}^{3}\Big{]}-\frac{1}{B(AD+E^{2})}\Big{[}W_{1}^{2}+\alpha
W_{1}^{4}\Big{]}$ $\displaystyle-$
$\displaystyle\frac{A}{C(AD+E^{2})}\Big{[}J^{2}+\alpha
J^{4}\Big{]}-\frac{m^{2}A}{(AD+E^{2})}-\frac{eA_{0}(AD-\tilde{A^{2}})}{(AD+E^{2})}\Big{[}\acute{E}+\alpha\acute{E}^{3}-eA_{0}\acute{E}^{2}\Big{]},$
where $J=\partial_{\phi}\Theta_{0}$, $W_{1}=\partial_{r}{\Theta_{0}}$ and
$\nu_{1}=\partial_{\theta}{\Theta_{0}}$. For a non-trivial solution, we set
the determinant of $\mathcal{Z}$ equal to zero and obtain
$\displaystyle ImW^{\pm}$ $\displaystyle=$
$\displaystyle\pm\int\sqrt{\frac{(\acute{E}-eA_{0})^{2}+X_{1}\Big{[}1+\alpha\frac{X_{2}}{X_{1}}\Big{]}}{(AD+E^{2})/BD}}dr,$
(27) $\displaystyle=$
$\displaystyle\pm\pi\frac{(\acute{E}-eA_{0})+\Big{[}1+\alpha\Xi\Big{]}}{2\kappa(r_{+})},$
where
$\displaystyle X_{1}$ $\displaystyle=$
$\displaystyle\frac{BE}{(AD+E^{2})}\Big{[}\acute{E}-eA_{0}\Big{]}\nu_{1}+\frac{AB}{(AD+E^{2})}\nu_{1}^{2}-Bm^{2},$
$\displaystyle X_{2}$ $\displaystyle=$
$\displaystyle\frac{BD}{(AD+E^{2})}\Big{[}\acute{E}^{4}-2eA_{0}\acute{E}^{3}+(eA_{0})^{2}\acute{E}^{2}\Big{]}-\frac{AB}{(AD+E^{2})}\nu_{1}^{4}-W_{1}^{4}$
$\displaystyle+$
$\displaystyle\frac{BE}{C(AD+E^{2})}\Big{[}\acute{E}^{3}-eA_{0}\acute{E}^{2}\Big{]}\nu_{1}.$
The tunneling probability for charged vector particles can be given as
$\Gamma=\frac{\Gamma_{\textmd{emission}}}{\Gamma_{\textmd{absorption}}}=\exp\left[{-2\pi}\frac{(\acute{E}-eA_{0})}{\kappa(r_{+})}\right]\Big{[}1+\alpha\Xi\Big{]}.$
(28)
where
$\kappa(r_{+})=\frac{2\beta^{4}r^{5}_{+}+4M\beta r^{2}_{+}-8r_{+}q^{2}+a^{2}(4\beta^{4}r^{3}_{+}-4M\beta)}{2\beta^{2}(r^{2}_{+}+a^{2})^{2}}.$
(29)
The modified Hawking temperature can be derived after expanding the series
$\Big{[}1+\alpha\Xi\Big{]}$ and by using the Boltzmann factor
$\Gamma_{B}=\exp\left[(\acute{E}-eA_{0})/T^{\prime}_{H}\right]$ as
$T^{\prime}_{H}\cong\frac{2\beta^{4}r^{5}_{+}+4M\beta r^{2}_{+}-8r_{+}q^{2}+a^{2}(4\beta^{4}r^{3}_{+}-4M\beta)}{4\pi\beta^{2}(r^{2}_{+}+a^{2})^{2}}\Big{[}1-\alpha\Xi\Big{]}.$
(30)
The modified Hawking temperature of charged black strings depends upon the
quantum gravity parameter $\alpha$, mass $M$, charge $q$, spin parameter $a$
and cosmological constant $\bigwedge$ (i.e., $\beta^{2}=-\bigwedge/3$). In the
absence of the quantum gravity parameter ($\alpha=0$), we recover the
temperature of Eq. (16).
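Writing the corrected temperature of Eq. (30) as $T^{\prime}_{H}=T_{H}[1-\alpha\Xi]$, the recovery of Eq. (16) at $\alpha=0$ and the sign of the first-order correction can be checked symbolically (a sketch, using the same symbols as above):

```python
import sympy as sp

r, M, q, beta, a, alpha, Xi = sp.symbols('r M q beta a alpha Xi', positive=True)

# Uncorrected Hawking temperature, Eq. (16) (r stands for r_+)
T_H = (2*beta**4*r**5 + 4*M*beta*r**2 - 8*r*q**2
       + a**2*(4*beta**4*r**3 - 4*M*beta)) / (4*sp.pi*beta**2*(r**2 + a**2)**2)

# GUP-corrected temperature, Eq. (30): T'_H = T_H * (1 - alpha*Xi)
T_corr = T_H * (1 - alpha*Xi)

# alpha = 0 switches the quantum-gravity correction off, recovering Eq. (16)
print(sp.simplify(T_corr.subs(alpha, 0) - T_H))   # -> 0

# The first-order correction lowers the temperature whenever T_H > 0
print(sp.simplify(T_H - T_corr))
```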
## V Graphical Analysis
This section presents the graphical behavior of the modified temperature of
charged black strings in Rastall theory. We examine the effects of the quantum
gravity parameter $\alpha$ and the spin parameter $a$ (which appears due to
Rastall theory) on charged black strings, and discuss the physical
significance of the plots, which depict the behavior of $T^{\prime}_{H}$ with
respect to the horizon $r_{+}$.
In Fig. 2: (i) indicates the behavior of $T^{\prime}_{H}$ for fixed values of
$M=100,\beta=0.01,q=1,a=9,\Xi=10$ and various values of the quantum gravity
parameter $\alpha$ in the range $0\leq r_{+}\leq 8$. One can observe that
$T^{\prime}_{H}$ decreases for increasing values of $r_{+}$. This is physical
behavior and represents the stable condition of charged black strings under
the influence of the quantum gravity parameter at high temperature.
(ii) shows the behavior of $T^{\prime}_{H}$ for fixed
$M=100,\beta=0.01,q=8,\alpha=500,\Xi=10$ and various values of the spin
parameter $a$. One can see that $T^{\prime}_{H}$ first increases very slowly,
reaches a maximum at a very high value, and eventually falls off to an
asymptotically flat form with $T^{\prime}_{H}\rightarrow 0$ as
$r_{+}\rightarrow\infty$. This is a purely stable and physical form of charged
black strings under the effects of the spin parameter and quantum gravity. It
is notable that as we increase the spin parameter, the temperature decreases.
Moreover, the very high $T^{\prime}_{H}$ at non-zero $r_{+}$ indicates a BH
remnant.
Figure 2: Hawking temperature $T^{\prime}_{H}$ versus event horizon $r_{+}$.
## VI Summary and Discussion
In this work, we investigated the charged black string solution in the context
of Rastall theory by applying the Newman-Janis algorithm. Taking the limit of
vanishing spin parameter $(a\rightarrow 0)$ in Eq. (15), we obtained the black
string solution of general relativity without Rastall theory. The charged
black string solution in Rastall theory is quite different from the BH
solution in the general theory of relativity.
The Hawking temperature $T_{H}$ depends on the cosmological constant
$\bigwedge$, the spin parameter $a$, the black string mass $M$ and the black
string charge $q$. It is worth mentioning that for $a=0$ we recovered the
Hawking temperature of charged black strings 21 , which is independent of the
spin parameter. We note that the back-reaction effects of the emitted
particles on the black string geometry, as well as self-gravitating effects,
have been neglected in evaluating the Hawking temperature. The Hawking
radiation from the charged black strings involves particles of different
spins (spin down, spin up or zero spin); in this procedure, the Hawking
temperature is associated with the spin parameter and the geometry of the
charged black strings. From the graphical interpretation of the temperature
$T_{H}$ with respect to the horizon $r_{+}$, we conclude that the charged
black string solution under the influence of Rastall theory remains stable for
various values of the charge and spin parameter. Furthermore, we examined the
quantum gravity effects for charged black strings in Rastall theory and
derived the modified Hawking temperature. We also discussed the stable and
physical form of charged black strings under the effects of quantum gravity
and the spin parameter. The spin parameter, which appears due to Rastall
theory in the charged black string solution, causes a reduction in
temperature. Hawking's phenomenon shows that with the emission of more
radiation the BH radius shrinks, and we observe a BH remnant at very high
temperature with a non-zero horizon. This behavior appears in all plots and
guarantees the stable form of charged black strings. The conclusion still
holds if the background charged black string geometry is more general.
## References
* (1) P. Rastall, Phys. Rev. D 6, 3357(1972).
* (2) P. Rastall, Can. J. Phys. 54, 66(1976).
* (3) Y. Heydarzade, F. Darabi, Phys. Lett. B 771, 365(2017).
* (4) A. M. Oliveira, H. E. S. Velten, J. C. Fabris, L. Casarini, Phys. Rev. D 92, 044020(2015).
* (5) H. Moradpour, N. Sadeghnezhad, Can. J. Phys. 95, 1257(2017).
* (6) Y. Heydarzade, H. Moradpour, F. Darabi, Can. J. Phys. 95, 1253(2017).
* (7) E. Spallucci, A. Smailagic, Int. J. Mod. Phys. D 27, 1850003(2018).
* (8) R. Kumar, S. G. Ghosh, Eur. Phys. J. C 78, 750(2018).
* (9) Z. Xu, X. Hou, X. Gong, J. Wang, Eur. Phys. J. C 78, 513(2018).
* (10) K. Lin, W. L. Qian, Chinese Phys. C 43, 083106(2019).
* (11) M. Visser, Phys. Lett. B 782, 83(2018).
* (12) F. Darabi, H. Moradpour, I. Licata, Y. Heydarzade, C. Corda, Eur. Phys. J. C 78, 25(2018).
* (13) M. F. A. R. Sakti, A. Suroso, F. P. Zen, Ann. Phys. 413, 168062(2020).
* (14) H. Moradpour, Y. Heydarzade, C. Corda, A. H. Ziaie, S. Ghaffari, Mod. Phys. Lett. A 34, 1950304(2019).
* (15) A. Yale, Phys. Lett. B 697, 398(2011).
* (16) W. Javed, G. Abbas, R. Ali, Eur. Phys. J. C 77, 296(2017).
* (17) W. Javed, R. Babar, Adv. High Energy Phys. 2019, 2759641(2019); ibid. Chinese Journal of Phys. 61, 138(2019); Proceedings of the 15th Marcel Grossmann Meeting, http://robot.icranet.org:8080/store/l380.pdf; ibid. Punjab University Journal of Mathematics 52, 6(2020).
* (18) W. Javed, R. Babar, A. Övgün, Mod. Phys. Lett. A 34, 1950057(2019).
* (19) R. Babar, W. Javed, A. Övgün, Mod. Phys. Lett. A 35, 2050104(2020).
* (20) M. Sharif, W. Javed, Can. J. Phys. 90, 903(2012); ibid. Gen. Relativ. Gravit. 45, 1051(2013); ibid. Can. J. Phys. 91, 43(2013); ibid. J. Exp. Theor. Phys. 115, 782(2012); ibid. Proceedings of the 3rd Galileo–Xu Guangqi Meeting, Int. J. Mod. Phys.: Conference Series, 23, 271(2013);ibid. Proceedings of the 13th Marcel Grossmann Meeting (Stockholm, 2012), World Scientific, 3, 1950(2015).
* (21) M. Sharif, W. Javed, Eur. Phys. J. C 72, 1997(2012).
* (22) M. Sharif, W. Javed, J. Korean Phys. Soc. 57, 217(2010).
* (23) A. Övgün, K. Jusufi, Eur. Phys. J. Plus. 132, 298(2017).
* (24) X. Q. Li, G.R. Chen, Phys. Lett. B 751, 34(2015).
* (25) W. Javed, R. Ali, G. Abbas, Can. J. Phys. 97, 176(2018).
* (26) A. Övgün, W. Javed, R. Ali, Adv. High Energy Phys. 2018, 11(2018).
* (27) A. Övgün, Int. J. Theor. Phys. 55, 2919(2016).
* (28) A. Övgün, K. Jusufi, Eur. Phys. J. Plus 132, 298(2017).
* (29) K. Jusufi, A. Ovgun, G. Apostolovska, Adv. High Energy Phys. 2017, 8798657(2017).
* (30) R. Casadio, P. Nicolini, R. da Rocha, Class. Quantum Grav. 35, 185001(2018).
* (31) S. Kanzi, I. Sakalli, Nucl. Phys. B 946, 114703(2019).
* (32) Y. K. Meitei, T. I. Singh, I. A. Meitei, Turk. J. Phys. 44, 373(2020).
* (33) I. Sakalli, A. Övgün, K. Jusufi, Astrophys Space Sci. 361, 330(2016).
* (34) G. Gecim, Y. Sucu, Phys. Lett. B 773, 391(2017).
* (35) I. Sakalli, A. Övgün, General Relativity and Gravitation 48, 1(2016).
* (36) A. Övgün, I. Sakalli, Int. J. Theor. Phys. 57, 322(2018).
* (37) G. Abbas, M. R. Shahzad, Chinese J. Phys. 63, 1(2020).
* (38) A. Övgün, I. Sakalli, J. Saavedra, C. Leiva, Mod. Phys. Lett. A 35, 2050163(2020).
* (39) X. C. Cai, Y. G. Miao, Phys. Rev. D 101, 104023(2020).
* (40) I. Sakalli, A. Ovgun, J. Exp. Theor. Phys. 121, 404(2015).
* (41) J. M. Bardeen, in Conference Proceedings of GR5 (Tbilisi, URSS, 1968), p. 174.
* (42) D. Y. Chen, H. W. Wu, H. T. Yang, J. Cosmol. Astropart. Phys. 03, 036(2014).
* (43) D. Chen, H. Wu, H. Yang, Adv. High Energy Phys. 2013, 432412(2013).
* (44) R. Ali, K. Bamba, S. A. A. Shah, Symmetry 11, 631(2019).
* (45) R. Ali, K. Bamba, M. Asgher, M. F. Malik, S. A. A. Shah, Symmetry 12, 1165(2020).
* (46) R. Ali, K. Bamba, M. Asgher, S. A. A. Shah, Int. J. Mod. Phys. D 30, 2150002(2021).
* (47) R. Ali, M. Asgher, M. F. Malik, Mod. Phys. Lett. A 35, 2050225(2020).
* (48) W. Javed, R. Ali, R. Babar, A. Övgün, Eur. Phys. J. Plus 134, 511(2019); ibid. Chinese Phys. C 44, 015104(2020).
* (49) M. F. A. R. Sakti, A. Suroso, F. P. Zen, Annals of Phys. 413, 168062(2020).
* (50) K. Jusufi, A. Övgün, Astrophys Space Sci. 361, 207(2016).
* (51) H. Gohar, K. Saifullah, Astrophys Space Sci. 343, 181(2013).
|
arxiv-papers
| 2021-07-26T12:30:43 |
2024-09-04T03:07:18.549192
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Riasat Ali, Rimsha Babar, Muhammad Asgher, Syed Asif Ali Shah",
"submitter": "Riasat Ali",
"url": "https://arxiv.org/abs/2107.12163"
}
|
2107.12177
|
# Regularity of the Radon-Nikodym Derivative of a Convolution of Orbital
Measures on Noncompact Symmetric Spaces
Boudjemâa Anchouche Department of Mathematics, College of Science, Kuwait
University, P. O. Box 5969, 13060 Safat, Kuwait [email protected]
###### Abstract.
Let $G/K$ be a Riemannian symmetric space of noncompact type, and let
$\nu_{a_{j}}$, $j=1,...,r$ be some orbital measures on $G$ (see the definition
below). The aim of this paper is to study the $L^{2}$-regularity (resp.
$C^{k}$-smoothness) of the Radon-Nikodym derivative of the convolution
$\nu_{a_{1}}\ast...\ast\nu_{a_{r}}$ with respect to a fixed left Haar measure
$\mu_{G}$ on $G$. As a consequence of a result of Ragozin, [11], we prove that
if $r\geq\,\max_{1\leq i\leq s}\dim{G_{i}}/K_{i}$, then
$\nu_{a_{1}}\ast...\ast\nu_{a_{r}}$ is absolutely continuous with respect to
$\mu_{G}$, i.e., $d\big{(}\nu_{a_{1}}\ast...\ast\nu_{a_{r}}\big{)}/d\mu_{G}$
is in $L^{1}(G)$, where $G_{i}/K_{i}$, $i=1,...,s$, are the irreducible
components in the de Rham decomposition of $G/K$. More precisely, we
prove that $d\big{(}\nu_{a_{1}}\ast...\ast\nu_{a_{r}}\big{)}/d\mu_{G}$ is in
$L^{2}(G)$ (resp. $C^{k}\left(G\right)$) for $r\geq\max_{1\leq i\leq
s}\dim\left({G_{i}}/{K_{i}}\right)+1$ (resp. $r\geq\max_{1\leq i\leq
s}\dim\left({G_{i}}/{K_{i}}\right)+k+1$). The case of a compact symmetric
space of rank one was considered in [2] and [3], and the case of a complex
Grassmannian was considered in [1].
###### Key words and phrases:
Convolution of Orbital Measures, Radon-Nikodym Derivative, Symmetric Spaces of
Noncompact Type
###### 2010 Mathematics Subject Classification:
43A85, 28C10, 43A77, 43A90, 53C35
###### Contents
1. 1 Introduction
2. 2 Some Preliminary Results
3. 3 Spherical Transform of the Density Function
4. 4 $L^{2}$-regularity of the Radon-Nikodym derivative
5. 5 $C^{k}$-regularity of the Radon-Nikodym derivative
6. 6 Case of an Arbitrary Symmetric Space of Noncompact Type
1 This paper was uploaded on ResearchGate in August 2018, DOI:
10.13140/RG.2.2.34657.97122
## 1\. Introduction
Let $G$ be a real, connected, noncompact semisimple Lie group with finite
center, and $K$ a maximal compact subgroup of $G$, hence $G/K$ is a symmetric
space of noncompact type. Unless otherwise stated, we assume in all that
follows that $G/K$ is irreducible. Let $a_{1}$, $\cdots$, $a_{r}$ be points in
$G-N_{G}\left(K\right)$, where $N_{G}\left(K\right)$ is the normalizer of $K$
in $G$. For each integer $j$, $1\leq j\leq r$, let
${\mathscr{I}}_{a_{j}}(f)=\int_{K}\int_{K}f(k_{1}a_{j}k_{2})d\mu_{K}(k_{1})d\mu_{K}(k_{2})$
where $f$ is a continuous function with compact support in $G$ and $\mu_{K}$ a
normalized Haar measure on $K$.
By the Riesz representation theorem (see [5]), there corresponds to the
linear functional ${\mathscr{I}}_{a_{j}}$ a Borel measure on $G$, which will
be denoted by $\nu_{a_{j}}$, such that
${\mathscr{I}}_{a_{j}}(f)=\int_{G}f(g)d\nu_{a_{j}}(g).$
Since
${\mathscr{I}}_{a_{j}}={\mathscr{I}}_{k_{1}a_{j}k_{2}},$
for any $k_{1},k_{2}$ in $K$, the measure $\nu_{a_{j}}$ is $K$-bi-invariant.
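For concreteness, the orbital measure $\nu_{a_{j}}$ can equivalently be described as a pushforward; this is a standard reformulation of the defining integral, recorded here for the reader's convenience:

```latex
% \nu_{a_j} is the image of the product measure \mu_K \times \mu_K
% under the map (k_1, k_2) \mapsto k_1 a_j k_2, so it is a probability
% measure supported on the double coset K a_j K:
\nu_{a_{j}}
  = \big( (k_{1},k_{2}) \mapsto k_{1} a_{j} k_{2} \big)_{*}
    \left( \mu_{K} \times \mu_{K} \right),
\qquad
\operatorname{supp}\big(\nu_{a_{j}}\big) = K a_{j} K .
```

In particular $\nu_{a_{j}}(G)=1$, since $\mu_{K}$ is normalized.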
The aim of this paper is to study the regularity of the Radon-Nikodym
derivative of the convolution $\nu_{a_{1}}\ast...\ast\nu_{a_{r}}$ with respect
to a fixed Haar measure $\mu_{\mathsf{G}}$ of $G$. More precisely, the aim is
to prove the following
###### Theorem (Main Theorem).
Let $G/K$ be an irreducible symmetric space of noncompact type, $a_{1}$, …,
$a_{r}$ points in $G-N_{G}(K)$, $\nu_{a_{1}}$,…,$\nu_{a_{r}}$ be the
associated orbital measures.
1. (1)
If
$r\geq\dim\,G/K+1,$
then
$\frac{d\left(\nu_{a_{1}}\ast...\ast\nu_{a_{r}}\right)}{d\mu_{G}}\in
L^{2}\left(G\right).$
2. (2)
If
$r\geq\dim\,G/K+k+1,$
then
$\frac{d\left(\nu_{a_{1}}\ast...\ast\nu_{a_{r}}\right)}{d\mu_{G}}\in
C^{k}\left(G\right).$
The paper is organized as follows: Section 2 consists of some preliminary
results. In section 3, we state some results on spherical Transform of the
density function. In section 4, we investigate the $L^{2}$-regularity of the
Radon-Nikodym derivative. In section 5 we study the smoothness of the Radon-
Nikodym derivative. In section 6 we consider the case of a reducible symmetric
space of noncompact type.
## 2\. Some Preliminary Results
Let $G$ be a real, connected, noncompact semisimple Lie group with finite
center, and $K$ a maximal compact subgroup of $G$. Fix a Cartan involution
$\theta$ and let ${\mathfrak{g}}={\mathfrak{k}}\oplus{\mathfrak{p}}$ be the
corresponding Cartan decomposition of the Lie algebra ${\mathfrak{g}}$ of $G$,
where ${\mathfrak{k}}$ is the Lie algebra of $K$ and $\mathfrak{p}$ is the
orthogonal complement of $\mathfrak{k}$ with respect to the Killing form of
$\mathfrak{g}$. It is well known that $G/K$ has a structure of a symmetric
space of noncompact type, where the metric is induced from the restriction of
the Killing form of ${\mathfrak{g}}$ to ${\mathfrak{p}}$. Let $\mathfrak{a}$
be a maximal abelian subspace of $\mathfrak{p}$, ${\mathfrak{a}}^{*}$ its
dual, and ${\mathfrak{a}}_{\mathbb{C}}^{*}$ the complexification of
${\mathfrak{a}}^{*}$, i.e., ${\mathfrak{a}}_{\mathbb{C}}^{*}$ is the set of
linear $\mathbb{R}$ forms on ${\mathfrak{a}}$ with values in $\mathbb{C}$. The
dimension of $\mathfrak{a}$ is independent of the choice of a Cartan
decomposition and $\dim\mathfrak{a}$ is called the rank of the symmetric space
$G/K$ and denoted by $l=\operatorname{rank(G\,/\,K)}$.
The Killing form $B$ of $\mathfrak{g}$ is non-degenerate on $\mathfrak{a}$, so
it induces an isomorphism between $\mathfrak{a}$ and ${\mathfrak{a}}^{*}$. The
extension of the inner product on $\mathfrak{a}$ induced by the Killing form
to ${\mathfrak{a}}_{\mathbb{C}}^{*}$ will be denoted also by
$\left\langle.,.\right\rangle$. For an element
$\lambda\in{\mathfrak{a}}_{\mathbb{C}}^{*}$, we denote by $H_{\lambda}$ the
corresponding element in ${\mathfrak{a}}_{\mathbb{C}}$, i.e., by the Riesz
representation theorem, there exists a unique $H_{\lambda}$ in
${\mathfrak{a}}_{\mathbb{C}}$ such that
$\lambda(H)=\left\langle H,H_{\lambda}\right\rangle$, for all
$H\in{\mathfrak{a}}_{\mathbb{C}}$. Under this correspondence,
${\mathfrak{a}}^{*}$ corresponds to $\mathfrak{a}$. We transfer the inner
product defined on ${\mathfrak{a}}_{\mathbb{C}}$ to an inner product on
${\mathfrak{a}}_{\mathbb{C}}^{*}$, via
$\left\langle\lambda,\mu\right\rangle:=\left\langle
H_{\lambda},H_{\mu}\right\rangle$.
For $\alpha$ in ${\mathfrak{a}}^{*}$, we put
${\mathfrak{g}}_{\alpha}=\Big{\\{}X\in{\mathfrak{g}}\mid\operatorname{ad}(H)X=\alpha(H)X,\text{
for all }H\text{ in }{\mathfrak{a}}\Big{\\}}.$
A nonzero element $\alpha$ in ${\mathfrak{a}}^{*}$ is called a restricted root
if ${\mathfrak{g}}_{\alpha}\neq 0$. So we have
${\mathfrak{g}}={\mathfrak{g}}_{0}\oplus\sum_{\alpha\in\Sigma}{\mathfrak{g}}_{\alpha},$
where
${\mathfrak{g}}_{0}=\Big{\\{}X\in{\mathfrak{g}}\mid\operatorname{ad}(H)X=0,\text{
for all }H\text{ in }{\mathfrak{a}}\Big{\\}}.$
Denote by $\Sigma$ the set of restricted roots on $\mathfrak{a}$, and let
${\mathfrak{a}}^{{}^{\prime}}=\Big{\\{}X\in{\mathfrak{a}}\mid\alpha\left(X\right)\neq
0,\text{ for all }\alpha\text{ in }\Sigma\Big{\\}}.$
The connected components of ${\mathfrak{a}}^{{}^{\prime}}$, which are open
convex sets, are called Weyl chambers. Let $M^{\prime}$ (resp. $M$) be the
normalizer (resp. centralizer) of $A$ in $K$. Then the group
$\mathcal{W}=M^{\prime}/M$, called the Weyl group, is a finite group acting
transitively on the set of Weyl chambers. The induced action of $\mathcal{W}$
on $\mathfrak{a}^{*}$ is given by $w\lambda(H)=\lambda(w^{-1}H)$.
Fix a connected component $\mathfrak{a}^{+}$ of $\mathfrak{a}^{\prime}$, and
call it a positive Weyl chamber, and let
$\Sigma^{+}=\Big{\\{}\alpha\in\Sigma\mid\alpha\left(X\right)>0,\text{ for all
}X\text{ in }{\mathfrak{a}^{+}}\Big{\\}},\text{ and
}\,\,\Sigma^{-}=\Big{\\{}-\alpha\mid\alpha\in\Sigma^{+}\Big{\\}}.$
The set $\Sigma^{+}$ (resp. $\Sigma^{-}$) is called the set of positive (resp.
negative) restricted roots with respect to the Weyl chamber
${\mathfrak{a}}^{+}$. For $\alpha$ in $\Sigma$, we put
$m_{\alpha}=\dim{\mathfrak{g}}_{\alpha}$, and let
$\varrho=\frac{1}{2}\operatorname{Tr}{\operatorname{ad}_{\mathfrak{n}}}_{\mid\mathfrak{a}}=\frac{1}{2}\sum_{\alpha\in\Sigma^{+}}m_{\alpha}\alpha,\hskip
14.22636pt{}\mathfrak{n}=\sum_{\alpha\in\Sigma^{+}}{\mathfrak{g}}_{\alpha}.$
Let $A,K$, and $N$ be Lie subgroups of $G$ with Lie algebras $\mathfrak{a}$,
$\mathfrak{k}$, and $\mathfrak{n}$. Then we have the Iwasawa decomposition
$G=KAN=K\exp\left(\mathfrak{a}\right)N.$
The Iwasawa projection
$H:G\longrightarrow\mathfrak{a}$
is the map which to each $g$ in $G$ associates the unique element $H(g)$ in
$\mathfrak{a}$ such that $g\in K\exp(H(g))N$. Each element $g$ in $G$ can be
written,
$g=k\left(g\right)\exp\big{(}H\left(g\right)\big{)}n\left(g\right),$
where $k\left(g\right)\in K$, and $n\left(g\right)\in N$. We also have the
Cartan decomposition
$G=KAK,$
or more precisely, the decomposition
$G=K\overline{\exp(\mathfrak{a}^{+})}K,$
where $\overline{\exp(\mathfrak{a}^{+})}$ is the closure of
$\exp(\mathfrak{a}^{+})$. So every $K$-bi-invariant function can be considered
as a function on $A$ or a function on $\overline{\exp(\mathfrak{a}^{+})}$. The
following result of Harish-Chandra gives a characterization of spherical
functions on $G$.
###### Theorem 2.1.
[8, Harish-Chandra] The spherical functions on $G$ are parametrized by
$\mathfrak{a}^{*}_{\mathbb{C}}$. More precisely, to each $\lambda$ in
${\mathfrak{a}}_{\mathbb{C}}^{*}$ there corresponds a spherical function
$\varphi_{\lambda}$ on $G$ given by
$\varphi_{\lambda}\left(g\right)=\int_{K}\operatorname{e}^{(\sqrt{-1}\lambda-\varrho)\left(H(gk)\right)}d\mu_{K}\left(k\right).$
Moreover, $\varphi_{\lambda}=\varphi_{\nu}$ if and only if $\lambda$ and $\nu$
are in the same orbit under the action of the Weyl group, i.e., there exists
$s$ in the Weyl group $\mathcal{W}$ such that $\lambda=s\nu$.
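As an illustration (not needed in the sequel), in the rank-one case $G=SL(2,\mathbb{R})$, $K=SO(2)$, with $G/K$ the hyperbolic plane, the spherical functions reduce, up to the normalization of the metric and the identification $\mathfrak{a}^{*}\simeq\mathbb{R}$, to Legendre functions of the first kind:

```latex
% For a_t = diag(e^{t/2}, e^{-t/2}), the integral formula of Theorem 2.1
% evaluates (with a suitable normalization) to a conical Legendre function:
\varphi_{\lambda}(a_{t})
  = \int_{K}
      \operatorname{e}^{(\sqrt{-1}\lambda-\varrho)\left(H(a_{t}k)\right)}
      d\mu_{K}(k)
  = P_{-\frac{1}{2}+\sqrt{-1}\lambda}\big(\cosh t\big).
```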
Let $L^{1}(K\diagdown G\diagup K)$ be the space of $K$-bi-invariant $L^{1}$
functions on $G$. The Harish-Chandra transform of a function $f$ in
$L^{1}(K\diagdown G\diagup K)$, denoted by ${\mathcal{H}}(f)$, is given by
${\mathcal{H}}(f)\left(\lambda\right)=\int_{G}f\left(g\right)\varphi_{\lambda}\left(g^{-1}\right)d\mu_{G}(g).$
It is known that if $\lambda$ is in $\mathfrak{a}^{*}$, then the $(G,K)$
spherical function $\varphi_{\lambda}$ is positive definite, hence bounded,
[12], Corollary 11.5.11 & Proposition 8.4.2 (i). Therefore
${\mathcal{H}}(f)(\lambda)$ is well defined for $f$ in $L^{1}(G)$ and
$\lambda$ in $\mathfrak{a}^{*}$. The Harish-Chandra transform gives a map from
$L^{1}\left(K\diagdown G\diagup K\right)$ to the $\mathcal{W}$-invariant
functions on ${\mathfrak{a}}^{*}$.
Let $B(K\diagdown G\diagup K)$ be the space of all linear combinations of
continuous positive definite $K$-bi-invariant functions
$f:G\longrightarrow\mathbb{C}$. Then we have the following
###### Theorem 2.2.
[12, Theorem 11.5.26, (Plancherel Theorem)] For $f\in B(K\diagdown G\diagup
K)\,\cap\,L^{1}(K\diagdown G\diagup K)$,
$\int_{G}\left|f\left(g\right)\right|^{2}d\mu_{G}\left(g\right)={\frac{1}{\left|\mathcal{W}\right|}}\int_{\mathfrak{a}^{*}}\left|{\mathcal{H}}(f)\left(\lambda\right)\right|^{2}\left|\operatorname{c}\left(\lambda\right)\right|^{-2}d\,\lambda$
where $\left|\mathcal{W}\right|$ is the number of elements of the Weyl group
$\mathcal{W}$, $\operatorname{c}$ is the Harish-Chandra function.
The following inversion formula for the spherical transform is also needed in
this paper.
###### Theorem 2.3.
[12, Theorem $11.5.26$, (Inversion Formula)] For $f\in B(K\diagdown G\diagup
K)\,\cap\,L^{1}(K\diagdown G\diagup K)$,
$f\left(g\right)={\frac{1}{\left|\mathcal{W}\right|}}\int_{\mathfrak{a}^{*}}{\mathcal{H}}(f)\left(\lambda\right)\varphi_{\lambda}\left(g\right)\left|\operatorname{c}\left(\lambda\right)\right|^{-2}d\,\lambda$
where $\left|\mathcal{W}\right|$ is the number of elements of the Weyl group
$\mathcal{W}$, $\operatorname{c}$ is the Harish-Chandra function.
## 3\. Spherical Transform of the Density Function
The notations are as in section 2. Let $\varphi_{\lambda}$ be the spherical
function for the Gelfand pair $\left(G,K\right)$ corresponding to $\lambda$ in
${\mathfrak{a}}^{*}$. As was mentioned above, the spherical, or Harish-
Chandra, transform of a function $f$ in $L^{1}\left(G\right)$ is defined by
${\mathcal{H}}(f)\left(\lambda\right)=\int_{G}f\left(g\right)\varphi_{\lambda}\left(g^{-1}\right)d\mu_{G}(g).$
We define the spherical transform of a compactly supported measure $\mu$ by
${\mathcal{H}}(\mu)\left(\lambda\right)=\int_{G}\varphi_{\lambda}\left(g^{-1}\right)d\mu(g).$
It is clear that if $\mu$ is absolutely continuous with respect to a fixed
left Haar measure $\mu_{G}$ of $G$, i.e., $d\,\mu=fd\,\mu_{G}$, then
(1) ${\mathcal{H}}(\mu)={\mathcal{H}}(f).$
To simplify the notation, we denote $\nu_{a_{j}}$ simply by $\nu_{j}$ and
hence, denote the convolution $\nu_{a_{1}}\ast...\ast\nu_{a_{r}}$ by
$\nu_{1}\ast...\ast\nu_{r}$.
###### Proposition 3.1.
${\mathcal{H}\big{(}\nu_{1}\ast...\ast\nu_{r}\big{)}(\lambda)}=\prod_{i=1}^{r}\varphi_{\lambda}(a_{i}^{-1}).$
###### Proof.
Let $r=1$. Then
$\displaystyle{\mathcal{H}}\big{(}\nu_{1}\big{)}(\lambda)$
$\displaystyle=\int_{G}\varphi_{\lambda}(g^{-1})d\nu_{1}(g)$
$\displaystyle=\int_{K}\int_{K}\varphi_{\lambda}\big{(}(k_{1}a_{1}k_{2})^{-1}\big{)}d\mu_{K}(k_{1})d\mu_{K}(k_{2}).$
$\displaystyle=\varphi_{\lambda}(a_{1}^{-1})\,\,\,(\text{ since
$\varphi_{\lambda}$ is $K$-bi-invariant and $\mu_{K}(K)=1$}).$
Consider the case $r=2$, i.e., the spherical transform of $\nu_{1}\ast\nu_{2}$.
$\displaystyle{\mathcal{H}}\big{(}\nu_{1}\ast\nu_{2}\big{)}(\lambda)$
$\displaystyle=\int_{G}\varphi_{\lambda}(g^{-1})d\left(\nu_{1}\ast\nu_{2}\right)(g)$
$\displaystyle=\int_{G}\int_{G}\varphi_{\lambda}(g_{2}^{-1}g_{1}^{-1})d\nu_{1}(g_{1})d\nu_{2}(g_{2})$
$\displaystyle=\int_{G}\left(\int_{K}\int_{K}\varphi_{\lambda}(g_{2}^{-1}k_{2}^{-1}a_{1}^{-1}k_{1}^{-1})d\mu_{K}(k_{1})d\mu_{K}(k_{2})\right)d\nu_{2}(g_{2})$
(2)
$\displaystyle=\int_{G}\left(\int_{K}\int_{K}\varphi_{\lambda}(g_{2}^{-1}k_{2}^{-1}a_{1}^{-1})d\mu_{K}(k_{1})d\mu_{K}(k_{2})\right)d\nu_{2}(g_{2})$
(3)
$\displaystyle=\varphi_{\lambda}(a_{1}^{-1})\int_{G}\varphi_{\lambda}(g_{2}^{-1})d\nu_{2}(g_{2})$
(4)
$\displaystyle=\varphi_{\lambda}(a_{1}^{-1})\int_{K}\int_{K}\varphi_{\lambda}(k_{2}^{-1}a_{2}^{-1}k_{1}^{-1})d\mu_{K}(k_{1})d\mu_{K}(k_{2})$
(5)
$\displaystyle=\varphi_{\lambda}(a_{1}^{-1})\varphi_{\lambda}(a_{2}^{-1}).$
To get $(\ref{123b})$ from $(\ref{123a})$, we used the fact that
$\varphi_{\lambda}$ satisfies
$\int_{K}\varphi_{\lambda}\left(xky\right)d\mu_{K}(k)=\varphi_{\lambda}(x)\varphi_{\lambda}(y),$
and to get $(\ref{123d})$ from $(\ref{123c})$, we used the fact
$\varphi_{\lambda}$ is $K$-bi-invariant.
The argument goes by induction for arbitrary $r$. ∎
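For completeness, the induction step can be written out: assuming the formula holds for $r-1$ factors, the same manipulations as in the case $r=2$ (the functional equation of $\varphi_{\lambda}$ together with its $K$-bi-invariance) give

```latex
\mathcal{H}\big(\nu_{1}\ast\dots\ast\nu_{r}\big)(\lambda)
  = \int_{G}\int_{G}
      \varphi_{\lambda}\big(g_{2}^{-1}g_{1}^{-1}\big)\,
      d\big(\nu_{1}\ast\dots\ast\nu_{r-1}\big)(g_{1})\,
      d\nu_{r}(g_{2})
  % the inner g_1-integral factorizes by the functional equation:
  = \Bigg(\prod_{i=1}^{r-1}\varphi_{\lambda}\big(a_{i}^{-1}\big)\Bigg)
      \int_{G}\varphi_{\lambda}\big(g_{2}^{-1}\big)\,d\nu_{r}(g_{2})
  = \prod_{i=1}^{r}\varphi_{\lambda}\big(a_{i}^{-1}\big).
```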
It is easy to see that the measures $\nu_{{1}},...,\nu_{{r}}$ are supported on
$Ka_{1}K,...,$ $Ka_{r}K$ and, from [1], we know that the measure
$\nu_{{1}}\ast...\ast\nu_{{r}}$ is absolutely continuous with respect to the
Haar measure of the group $G$ if and only if the set $Ka_{1}K\dots Ka_{r}K$
has non-empty interior.
Suppose that $G/K$ is an irreducible symmetric space; then the linear
isotropy representation of $K$ on the tangent space
$T_{eK}\left(G/K\right)\simeq{\mathfrak{g}}/{\mathfrak{k}}$ is irreducible and
nontrivial. Hence, by Theorem 2.5 in [11], if $r\geq\dim G/K$, then
$\nu_{1}\ast...\ast\nu_{r}$ is absolutely continuous with respect to
$\mu_{G}$. If we denote by $\varrho_{a_{1},\cdots,a_{r}}$ the Radon-Nikodym
derivative of $\nu_{1}\ast...\ast\nu_{r}$ with respect to the Haar measure
$\mu_{G}$ of $G$, then
$\varrho_{a_{1},\cdots,a_{r}}=\frac{d\left(\nu_{1}\ast...\ast\nu_{r}\right)}{d\mu_{G}}\in
L^{1}(G).$
From what was said above, the function $\varrho_{a_{1},\cdots,a_{r}}$ is
$K$-bi-invariant, i.e.,
$\varrho_{a_{1},\cdots,a_{r}}\left(k_{1}gk_{2}\right)=\varrho_{a_{1},\cdots,a_{r}}\left(g\right),\,\,\text{
for all $k_{1}$ and $k_{2}$ in $K$.}$
Moreover, from
$\operatorname{supp}\big{(}\varrho_{a_{1},\cdots,a_{r}}\big{)}=\operatorname{supp}\bigg{(}\nu_{1}\ast...\ast\nu_{r}\bigg{)}=Ka_{1}K...Ka_{r}K,$
we see that $\varrho_{a_{1},\cdots,a_{r}}$ is compactly supported. In what
follows, we will denote by $L^{p}\left({K\diagdown G\diagup K}\right)$ the
space of $K$-bi-invariant functions which are in $L^{p}(G)$. Hence we have the
following
###### Proposition 3.2.
If $r\geq\dim G\diagup K$, then $\nu_{1}\ast...\ast\nu_{r}$ is absolutely
continuous with respect to $\mu_{G}$ and its density function
$\varrho_{a_{1},\cdots,a_{r}}$ is in $L^{1}\left({K\diagdown G\diagup
K}\right)$.
## 4\. $L^{2}$-regularity of the Radon-Nikodym derivative
###### Theorem 4.1.
Let $a_{i}=\exp(H_{i})$, where $H_{i}$ is in $\mathfrak{a}^{+}$ for
$i=1,...,r$. If $r\geq\dim\,G/K+1$, then $\varrho_{a_{1},\cdots,a_{r}}$ is in
$L^{2}\left(G\right)$.
To prove the theorem we need some preparatory results.
###### Lemma 4.1.
There exists a positive constant $c$ such that for all $y$ in
$\mathfrak{a}^{*}$, $y\neq 0$, there exists $i$, $1\leq i\leq r$, such that
(6) $\left|\left<\frac{y}{\left\|y\right\|},\alpha_{i}\right>\right|\geq c.$
###### Proof.
Suppose that the inequality (6) is not true, then there exists a sequence
$(y_{p})_{p\geq 1}$ in $\mathfrak{a}^{*}$, $y_{p}\neq 0,\forall p\geq 1$, such
that for all $i$, $1\leq i\leq r$
$\left|\left<\frac{y_{p}}{\left\|y_{p}\right\|},\alpha_{i}\right>\right|<\frac{1}{p},\,\forall\,p\geq
1.$
Since the sequence $\frac{y_{p}}{\left\|y_{p}\right\|}$ is in the unit sphere
$\mathbb{S}^{l-1}$ in $\mathfrak{a}^{*}$, after extracting a subsequence, if
necessary, we can assume without loss of generality that the sequence
$\frac{y_{p}}{\left\|y_{p}\right\|}$ converges to $u\in\mathbb{S}^{l-1}$.
Hence
$\left<u,\alpha_{i}\right>=0,\,\text{ for all }i,\,1\leq i\leq r.$
Since $\\{\alpha_{i}\\}_{i=1}^{r}$ is a basis for $\mathfrak{a}^{*}$, there
exist real numbers $c_{i}$, $i=1,...,r$ such that
$u=\sum_{i=1}^{r}c_{i}\alpha_{i}.$
Hence
$\left\|u\right\|^{2}=\sum_{i=1}^{r}c_{i}\left<u,\alpha_{i}\right>=0.$
A contradiction, since $u\in\mathbb{S}^{l-1}$. Therefore, the inequality (6)
is established. ∎
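A small numerical illustration of the lemma (the basis $\alpha_{1}=(1,0)$, $\alpha_{2}=(\tfrac{1}{2},1)$ in $\mathbb{R}^{2}\simeq\mathfrak{a}^{*}$ is a hypothetical example, not taken from the paper): since the $\alpha_{i}$ span the space, the maximum of the pairings stays bounded away from $0$ on the unit sphere.

```python
import math

# Hypothetical basis of R^2 standing in for {alpha_i} in a^*
alphas = [(1.0, 0.0), (0.5, 1.0)]

def max_pairing(theta):
    # max_i |<y, alpha_i>| for the unit vector y = (cos theta, sin theta)
    y = (math.cos(theta), math.sin(theta))
    return max(abs(y[0] * a[0] + y[1] * a[1]) for a in alphas)

# Sample the unit circle densely; the infimum stays positive because the
# alpha_i span R^2, so <y, alpha_i> = 0 for all i would force y = 0.
c = min(max_pairing(2 * math.pi * k / 100000) for k in range(100000))
print(c > 0.5)
```

Here $c$ plays the role of the constant in inequality (6).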
Let $\left(a_{j}^{-1}\right)_{j}$ be a sequence of points contained in a
compact subset $\mathcal{C}$ of $A$, such that $a_{j}\not\in N_{G}(K)$ for
all $j$. For an element $w$ of $\mathcal{W}$, we put
$\Sigma^{+}_{w}(\mathcal{C})=\bigg{\\{}\alpha\in\Sigma^{+}\mid
w\alpha\big{(}\log a\big{)}\neq 0\,\,\text{ for all
}a\in\mathcal{C}\bigg{\\}}.$
The following result is implicit in [6].
###### Proposition 4.1.
For each $j$, there exists a positive constant $C(a_{j})$ such that for all
$\lambda\in\mathfrak{a}^{*}$,
$\left|\varphi_{\lambda}\left(a_{j}^{-1}\right)\right|\leq
C(a_{j})\sum_{w\in\mathcal{W}}\prod_{\alpha\in\Sigma^{+}_{w}(\mathcal{C})}\bigg{(}1+\left|\left<\lambda,\alpha\right>\right|\bigg{)}^{-\frac{1}{2}m_{\alpha}}.$
###### Proof.
For $\lambda$ in $\mathfrak{a}^{*}$, take
$F_{a,H_{\lambda}}(k)=\operatorname{e}^{\sqrt{-1}\left<H(ak),H_{\lambda}\right>}$,
where $H_{\lambda}$ is defined by $\lambda(H)=\left<H_{\lambda},H\right>$ for
all $H$ in $\mathfrak{a}$, and let
$g(k)=\operatorname{e}^{-\rho\left(H(ak)\right)}$. Then, by Theorem 2.1, we
have
$\varphi_{\lambda}(a)=\int_{K}\operatorname{e}^{\left(\sqrt{-1}\lambda-\rho\right)H(ak)}dk=\int_{K}\operatorname{e}^{\sqrt{-1}F_{a,H_{\lambda}}(k)}g(k)dk.$
The proposition follows from [6, Theorem 11.1]. ∎
As a consequence of Proposition 4.1 we have the following:
###### Corollary 4.1.
Let $a_{i}$, $i=1,...,r$ be elements of $A$ such that
$\Sigma^{+}_{w}(\mathcal{C})=\Sigma^{+}$ for all $w$ in $\mathcal{W}$. Then
there exist positive constants
$\widetilde{C(a_{j})}=\left|\mathcal{W}\right|C(a_{j})$, with $C(a_{j})$ as in
Proposition 4.1, such that
$\left|\varphi_{\lambda}\left(a_{j}^{-1}\right)\right|\leq\widetilde{C(a_{j})}\prod_{\alpha\in\Sigma^{+}}\bigg{(}1+\left|\left<\lambda,\alpha\right>\right|\bigg{)}^{-\frac{1}{2}m_{\alpha}}.$
###### Lemma 4.2.
[12, Lemma 9.3.3] $B(K\diagdown G\diagup K)\cap L^{1}(K\diagdown G\diagup K)$
is dense in $L^{1}(K\diagdown\,G\,\diagup K)$.
Since $\varrho_{a_{1},\cdots,a_{r}}$ is in $L^{1}(K\diagdown\,G\,\diagup K)$,
by Lemma 4.2, there exists a sequence
$\left(\varrho_{a_{1},\cdots,a_{r}}^{j}\right)_{j}$ in $B(K\diagdown G\diagup
K)\cap L^{1}(K\diagdown G\diagup K)$ converging in $L^{1}$ to
$\varrho_{a_{1},\cdots,a_{r}}$. Since $\varrho_{a_{1},\cdots,a_{r}}$ has
compact support, we can assume that each $\varrho_{a_{1},\cdots,a_{r}}^{j}$
has compact support. Then, we have the following
###### Proposition 4.2.
$\lim_{j\rightarrow\infty}{\mathcal{H}\big{(}\varrho^{j}_{a_{1},\cdots,a_{r}}\big{)}(\lambda)}={\mathcal{H}\big{(}\nu_{1}\ast...\ast\nu_{r}\big{)}(\lambda)}$.
###### Proof.
Since
${\mathcal{H}\big{(}\varrho^{j}_{a_{1},\cdots,a_{r}}\big{)}(\lambda)}=\int_{G}\varrho^{j}_{a_{1},\cdots,a_{r}}\left(g\right)\varphi_{\lambda}\left(g^{-1}\right)d\mu_{G}(g),$
and since $\varphi_{\lambda}$ is bounded for $\lambda$ in $\mathfrak{a}^{*}$,
we get
$\displaystyle\left|{\mathcal{H}\big{(}\varrho^{j}_{a_{1},\cdots,a_{r}}\big{)}(\lambda)}-{\mathcal{H}\big{(}\nu_{1}\ast...\ast\nu_{r}\big{)}(\lambda)}\right|$
$\displaystyle=\left|\int_{G}\bigg{(}\varrho^{j}_{a_{1},\cdots,a_{r}}\left(g\right)-\varrho_{a_{1},\cdots,a_{r}}\left(g\right)\bigg{)}\varphi_{\lambda}\left(g^{-1}\right)d\mu_{G}(g)\right|$
$\displaystyle\leq
c\int_{G}\left|\bigg{(}\varrho^{j}_{a_{1},\cdots,a_{r}}\left(g\right)-\varrho_{a_{1},\cdots,a_{r}}\left(g\right)\bigg{)}\right|d\mu_{G}(g).$
The proposition follows from
$\bigints_{G}\left|\bigg{(}\varrho^{j}_{a_{1},\cdots,a_{r}}\left(g\right)-\varrho_{a_{1},\cdots,a_{r}}\left(g\right)\bigg{)}\right|d\mu_{G}(g)\longrightarrow
0$. ∎
###### Proof of Theorem 4.1.
The Plancherel theorem (Theorem 2.2) applied to $\varrho_{a_{1},\cdots,a_{r}}^{j}$ gives
(7)
$\int_{G}\left|\varrho_{a_{1},\cdots,a_{r}}^{j}\left(g\right)\right|^{2}d\mu_{G}\left(g\right)={\frac{1}{\left|\mathcal{W}\right|}}\int_{\mathfrak{a}^{*}}\left|{\mathcal{H}}(\varrho_{a_{1},\cdots,a_{r}}^{j})\left(\lambda\right)\right|^{2}\left|\operatorname{c}\left(\lambda\right)\right|^{-2}d\,\lambda.$
Combining Proposition 3.1 and Proposition 4.2, we get
$\left|{\mathcal{H}}(\varrho_{a_{1},\cdots,a_{r}}^{j})\left(\lambda\right)\right|^{2}\left|\operatorname{c}\left(\lambda\right)\right|^{-2}\longrightarrow\left|\prod_{i=1}^{r}\varphi_{\lambda}(a_{i}^{-1})\right|^{2}\left|\operatorname{c}\left(\lambda\right)\right|^{-2}.$
By Proposition 7.2 in [9], there exists a positive constant $C$ such that
(8) $\left|\operatorname{c}\left(\lambda\right)\right|^{-2}\leq
C\big{(}1+\left\|\lambda\right\|\big{)}^{n-l},$
for all $\lambda\in\mathfrak{a}^{*}$, where $n=\dim\,G/K$ and
$l=\operatorname{rank(G\,/\,K)}$. Combining (8), Lemma 4.1, and Corollary 4.1,
we get
$\left|\prod_{i=1}^{r}\varphi_{\lambda}(a_{i}^{-1})\right|^{2}\left|\operatorname{c}\left(\lambda\right)\right|^{-2}\leq
C\frac{\big{(}1+\left\|\lambda\right\|\big{)}^{n-l}}{\big{(}1+c\left\|\lambda\right\|\big{)}^{r}},$
where $C$ is a positive constant, and $c$ is the constant which appears in
$\left(\ref{ineq-import}\right)$. Hence
$\left|\prod_{i=1}^{r}\varphi_{\lambda}(a_{i}^{-1})\right|^{2}\left|\operatorname{c}\left(\lambda\right)\right|^{-2}\text{
is in }L^{1}(\mathfrak{a}^{*})\text{ for }r>n.$
Moreover, since $\varrho_{a_{1},\cdots,a_{r}}^{j}$ is of compact support, by
the Paley-Wiener Theorem for spherical functions on semisimple Lie groups
([7], Theorem 3.5), for each positive integer $N$, there exists a constant
$C_{N,j}$ such that
$\left|{\mathcal{H}\big{(}\varrho^{j}_{a_{1},\cdots,a_{r}}\big{)}(\lambda)}\right|\leq
C_{N,j}\left(1+\left\|\lambda\right\|\right)^{-N},\text{ for arbitrary
$\lambda$ in $\mathfrak{a}^{*}$}.$
Then, we can choose $N$ such that
$\left(1+\left\|\lambda\right\|\right)^{-2N}\left|\operatorname{c}\left(\lambda\right)\right|^{-2}\text{
is in }L^{1}(\mathfrak{a}^{*}).$
As a consequence of Proposition 4.2, without loss of generality, we can take
the sequence $\left(C_{N,j}\right)_{j}$ to be uniformly bounded, i.e., there
exists a positive constant $C$ such that $C_{N,j}\leq C$ for all $j$. Then by
the Lebesgue dominated convergence theorem, for $r>n$, we have
(9)
$\lim_{j\longrightarrow\infty}\int_{\mathfrak{a}^{*}}\left|{\mathcal{H}}(\varrho_{a_{1},\cdots,a_{r}}^{j})\left(\lambda\right)\right|^{2}\left|\operatorname{c}\left(\lambda\right)\right|^{-2}d\,\lambda=\int_{\mathfrak{a}^{*}}\left|\prod_{i=1}^{r}\varphi_{\lambda}(a_{i}^{-1})\right|^{2}\left|\operatorname{c}\left(\lambda\right)\right|^{-2}d\,\lambda.$
Using (9) and Fatou’s Lemma, we get
(10)
$\displaystyle\int_{G}\left|\varrho_{a_{1},\cdots,a_{r}}(g)\right|^{2}d\mu_{G}(g)$
$\displaystyle\leq\liminf_{j\rightarrow\infty}\int_{G}\left|\varrho^{j}_{a_{1},\cdots,a_{r}}(g)\right|^{2}d\mu_{G}(g)$
(11)
$\displaystyle\leq\frac{1}{\left|\mathcal{W}\right|}\bigintssss_{{\mathfrak{a}}^{*}}\left(\left|\operatorname{c}(\lambda)\right|^{-1}\prod_{i=1}^{r}\left|\varphi_{\lambda}(a_{i}^{-1})\right|\right)^{2}d\lambda.$
Combining Corollary 4.1 and the estimate (8), we get
$\displaystyle\bigintssss_{G}\left|\varrho_{a_{1},\cdots,a_{r}}(g)\right|^{2}d\mu_{G}(g)$
$\displaystyle\leq\frac{1}{\left|\mathcal{W}\right|}\bigg{(}\prod_{i=1}^{r}\left(\widetilde{C(a_{i})}\right)\bigg{)}^{2}{\bigintss_{{\mathfrak{a}}^{*}}}\big{(}1+\left\|\lambda\right\|\big{)}^{n-l}\Bigg{(}\prod_{\alpha\in\Sigma^{+}}\bigg{(}1+\left|\left<\lambda,\alpha\right>\right|\bigg{)}^{-m(\alpha)}\Bigg{)}^{r}d\lambda$
$\displaystyle\leq
C(a)\bigintssss_{{\mathfrak{a}}^{*}}\big{(}1+\left\|\lambda\right\|\big{)}^{n-l}\prod_{\alpha\in\Sigma^{+}}\bigg{(}1+\left|\left<\lambda,\alpha\right>\right|\bigg{)}^{-rm(\alpha)}\,d\lambda,$
where
$C(a)=C\left(a_{1},...,a_{r}\right)=\frac{1}{\left|\mathcal{W}\right|}\bigg{(}\prod_{i=1}^{r}\left(\widetilde{C(a_{i})}\right)\bigg{)}^{2}$
is a constant which depends only on the points $a_{1},...,a_{r}$.
From the inequality (6), we deduce that
$\bigintssss_{{\mathfrak{a}}^{*}}\big{(}1+\left\|\lambda\right\|\big{)}^{n-l}\prod_{\alpha\in\Sigma^{+}}\bigg{(}1+\left|\left<\lambda,\alpha\right>\right|\bigg{)}^{-r\,m_{\alpha}}\,d\lambda$
$=\bigintss_{0}^{\infty}t^{l-1}\Bigg{(}\bigintss_{\mathbb{S}^{l-1}}\frac{\big{(}1+\left\|t\xi\right\|\big{)}^{n-l}}{\prod_{\alpha\in\Sigma^{+}}\bigg{(}1+\left|\left<t\xi,\alpha\right>\right|\bigg{)}^{r\,m_{\alpha}}}d\sigma(\xi)\Bigg{)}d\,t$
$=\bigintss_{0}^{\infty}t^{l-1}(1+t)^{n-l}\Bigg{(}\bigintss_{\mathbb{S}^{l-1}}\frac{d\sigma(\xi)}{\prod_{\alpha\in\Sigma^{+}}\bigg{(}1+t\left|\left<\xi,\alpha\right>\right|\bigg{)}^{r\,m_{\alpha}}}\Bigg{)}d\,t$
$\leq
C\bigintss_{0}^{\infty}\frac{t^{l-1}(1+t)^{n-l}}{(1+ct)^{r\min\,{m_{\alpha_{i}}}}}d\,t\leq
C\bigintss_{0}^{\infty}\frac{t^{l-1}\,(1+t)^{n-l}}{(1+ct)^{r}}d\,t,$
where $d\,\sigma$ is the induced Lebesgue measure on the unit sphere
$\mathbb{S}^{l-1}$. The Theorem follows from the fact that the integral
$\bigintss_{0}^{\infty}\frac{t^{l-1}(1+t)^{n-l}}{(1+ct)^{r}}d\,t$
is convergent for
$r>n.$
∎
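As a numerical sanity check of this last convergence claim (illustrative only; the values $l=2$, $n=5$, $c=0.5$ are hypothetical sample parameters, not taken from the paper), one can compare the tail of the integral for $r>n$, which stabilizes, with the logarithmically growing tail at the borderline $r=n$:

```python
import math

def integrand(t, l, n, c, r):
    # t^{l-1} (1+t)^{n-l} / (1+ct)^r, the radial integrand from the proof;
    # at infinity it behaves like a constant times t^{n-1-r}
    return t ** (l - 1) * (1 + t) ** (n - l) / (1 + c * t) ** r

def tail(l, n, c, r, upper):
    # crude log-spaced midpoint rule for the tail integral over [1, upper];
    # convergence of the full integral is decided by this tail, since the
    # integrand is continuous (hence bounded) on [0, 1]
    steps = 100000
    a, b = 0.0, math.log(upper)  # substitute s = log t
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        t = math.exp(a + (i + 0.5) * h)
        total += integrand(t, l, n, c, r) * t * h  # dt = t ds
    return total

l, n, c = 2, 5, 0.5
# r > n: the tail stabilizes as the cutoff grows (convergent integral)
conv_1e4 = tail(l, n, c, r=n + 2, upper=1e4)
conv_1e6 = tail(l, n, c, r=n + 2, upper=1e6)
# r = n: the tail keeps growing like log(upper) (divergent integral)
div_1e4 = tail(l, n, c, r=n, upper=1e4)
div_1e6 = tail(l, n, c, r=n, upper=1e6)
print(abs(conv_1e6 - conv_1e4) < 1e-3, div_1e6 - div_1e4 > 1.0)
```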
## 5\. $C^{k}$-regularity of the Radon-Nikodym derivative
The aim of this section is to prove part $(2)$ of the Main Theorem. As a
consequence of Lemma 4.2, we have the following
###### Proposition 5.1.
Let $X$ be an element of $\mathfrak{g}$. If $r>n+1$, then
$X\varrho^{j}_{a_{1},\cdots,a_{r}}(g)\longrightarrow
X\varrho_{a_{1},\cdots,a_{r}}(g).$
###### Proof.
Applying the inversion formula for the spherical transform, Theorem 2.3, to
the function $\varrho_{a_{1},\cdots,a_{r}}^{j}$, we get
$\displaystyle\varrho^{j}_{a_{1},\cdots,a_{r}}(g)$
$\displaystyle=c\,\bigintssss_{{\mathfrak{a}}^{*}}{\mathcal{H}\big{(}\varrho^{j}_{a_{1},\cdots,a_{r}}\big{)}(\lambda)}\varphi_{\lambda}\left(g\right)\left|\operatorname{c}\left(\lambda\right)\right|^{-2}d\,\lambda,$
where $c$ is a constant independent of the function
$\varrho^{j}_{a_{1},\cdots,a_{r}}$.
Hence, for $X$ in $\mathfrak{g}$, we have,
$\displaystyle X\varrho^{j}_{a_{1},\cdots,a_{r}}(g)$
$\displaystyle=\frac{d}{dt}\varrho^{j}_{a_{1},\cdots,a_{r}}\big{(}g\exp(tX)\big{)}_{\mid
t=0}$
$\displaystyle=c\,\frac{d}{dt}\bigg{(}\bigintssss_{{\mathfrak{a}}^{*}}{\mathcal{H}\big{(}\varrho^{j}_{a_{1},\cdots,a_{r}}\big{)}(\lambda)}\varphi_{\lambda}\left(g\exp(tX)\right)\left|\operatorname{c}\left(\lambda\right)\right|^{-2}d\,\lambda\bigg{)}_{\mid\,t=0}$
$\displaystyle=c\,\bigintssss_{{\mathfrak{a}}^{*}}{\mathcal{H}\big{(}\varrho^{j}_{a_{1},\cdots,a_{r}}\big{)}(\lambda)}\frac{d}{dt}\bigg{(}\varphi_{\lambda}\left(g\exp(tX)\right)\bigg{)}_{\mid\,t=0}\left|\operatorname{c}\left(\lambda\right)\right|^{-2}d\,\lambda.$
The justification of the differentiation under the integral sign follows the
same argument as in Section 4 and uses the Gangolli-Varadarajan estimate (see
[4], Proposition 3, (vi)).
From Lemma 4.2, we know that
${{\mathcal{H}\big{(}\varrho^{j}_{a_{1},\cdots,a_{r}}\big{)}(\lambda)}\frac{d}{dt}\bigg{(}\varphi_{\lambda}\left(g\exp(tX)\right)\bigg{)}_{\mid\,t=0}\left|\operatorname{c}\left(\lambda\right)\right|^{-2}\rightarrow{\mathcal{H}\big{(}\varrho_{a_{1},\cdots,a_{r}}\big{)}(\lambda)}\frac{d}{dt}\bigg{(}\varphi_{\lambda}\left(g\exp(tX)\right)\bigg{)}_{\mid\,t=0}\left|\operatorname{c}\left(\lambda\right)\right|^{-2}}.$
Moreover, as a consequence of Proposition 3.1 we get
$\displaystyle\mathcal{H}\big{(}\varrho_{a_{1},\cdots,a_{r}}\big{)}(\lambda)\frac{d}{dt}\bigg{(}\varphi_{\lambda}\left(g\exp(tX)\right)\bigg{)}_{\mid\,t=0}\left|\operatorname{c}\left(\lambda\right)\right|^{-2}$
$\displaystyle=\prod_{i=1}^{r}\varphi_{\lambda}(a_{i}^{-1})\frac{d}{dt}\bigg{(}\varphi_{\lambda}\left(g\exp(tX)\right)\bigg{)}_{\mid\,t=0}\left|\operatorname{c}\left(\lambda\right)\right|^{-2}.$
With the help of the Gangolli-Varadarajan estimate (see [4], Proposition 3,
(vi)), we get
$\left|\prod_{i=1}^{r}\varphi_{s\xi}(a_{i}^{-1})\frac{d}{dt}\bigg{(}\varphi_{s\xi}\left(g\exp(tX)\right)\bigg{)}_{\mid\,t=0}\left|\operatorname{c}\left(s\xi\right)\right|^{-2}\right|\leq
C\,\frac{s^{l-1}(1+s)(1+s)^{n-l}}{(1+cs)^{r}},$
where $C>0$ is independent of $g$. Hence, for $r>n+1$, the function
$\prod_{i=1}^{r}\varphi_{s\xi}(a_{i}^{-1})\frac{d}{dt}\bigg{(}\varphi_{s\xi}\left(g\exp(tX)\right)\bigg{)}_{\mid\,t=0}\left|\operatorname{c}\left(s\xi\right)\right|^{-2}$
belongs to $L^{1}(\mathfrak{a}^{*})$.
The Proposition follows from the Paley-Wiener Theorem for spherical functions
on semisimple Lie groups ([7], Theorem 3.5) and the Lebesgue dominated
convergence theorem. ∎
Let $r>n+1$. Using Proposition 5.1, by passing to the limit, we get
$\displaystyle X\varrho_{a_{1},\cdots,a_{r}}(g)$
$\displaystyle=c\bigintssss_{{\mathfrak{a}}^{*}}{\mathcal{H}\big{(}\nu_{1}\ast...\ast\nu_{r}\big{)}(\lambda)}\frac{d}{dt}\bigg{(}\varphi_{\lambda}\left(g\exp(tX)\right)\bigg{)}_{\mid\,t=0}\left|\operatorname{c}\left(\lambda\right)\right|^{-2}d\,\lambda,$
$\displaystyle=c\,\bigintssss_{{\mathfrak{a}}^{*}}\prod_{i=1}^{r}\varphi_{\lambda}(a_{i}^{-1})\frac{d}{dt}\bigg{(}\varphi_{\lambda}\left(g\exp(tX)\right)\bigg{)}_{\mid\,t=0}\left|\operatorname{c}\left(\lambda\right)\right|^{-2}d\,\lambda\quad\big{(}\text{by Proposition 3.1}\big{)},$
$\displaystyle=c\,\int_{0}^{\infty}\int_{\mathbb{S}^{l-1}}\prod_{i=1}^{r}\varphi_{s\xi}(a_{i}^{-1})\frac{d}{dt}\bigg{(}\varphi_{s\xi}\left(g\exp(tX)\right)\bigg{)}_{\mid\,t=0}\left|\operatorname{c}\left(s\xi\right)\right|^{-2}\,d\sigma(\xi)\,ds.$
Since the function
$g\longmapsto\prod_{i=1}^{r}\varphi_{s\xi}(a_{i}^{-1})\frac{d}{dt}\bigg{(}\varphi_{s\xi}\left(g\exp(tX)\right)\bigg{)}_{\mid\,t=0}\left|\operatorname{c}\left(s\xi\right)\right|^{-2}$
is infinitely differentiable (see [4, Proposition 3]) and
$\left|\prod_{i=1}^{r}\varphi_{s\xi}(a_{i}^{-1})\frac{d}{dt}\bigg{(}\varphi_{s\xi}\left(g\exp(tX)\right)\bigg{)}_{\mid\,t=0}\left|\operatorname{c}\left(s\xi\right)\right|^{-2}\right|\leq
C\,\frac{s^{l-1}(1+s)(1+s)^{n-l}}{(1+cs)^{r}},$
where $C>0$ is independent of $g$, the differentiation under integral sign
theorem implies that if $r>n+1$, then
$\varrho_{a_{1},\cdots,a_{r}}\in C^{1}(G).$
We conclude by iteration that if $r>n+k$, then
$\varrho_{a_{1},\cdots,a_{r}}\in C^{k}(G).$
## 6\. Case of an Arbitrary Symmetric Space of Noncompact Type
Suppose that $G/K$ is an arbitrary symmetric space of noncompact type, not
necessarily irreducible. Then by [10], each irreducible factor in the de Rham
decomposition of $G/K$ is again a Riemannian symmetric space of the noncompact
type. Hence we can write
(12) $G/K=G_{1}/K_{1}\times\cdots\times G_{s}/K_{s},$
where $G_{i}/K_{i}$, $i=1,...,s$ are irreducible symmetric spaces of
noncompact type. Fix left Haar measures $\mu_{G_{i}}$ on $G_{i}$, $i=1,...,s$.
Then
$\mu_{G}=\mu_{G_{1}}\times\cdots\times\mu_{G_{s}}$
is a left Haar measure on $G$.
Let
$a_{i}=\left(a_{i}^{1},\cdots,a_{i}^{s}\right)$
be an element of $G$ and assume that $a_{i}^{j}\not\in N_{G_{j}}(K_{j})$,
where $N_{G_{j}}(K_{j})$ is the normalizer of $K_{j}$ in $G_{j}$,
$j=1,\cdots,s$. Then it can be seen that the measures $\nu_{a_{i}}$, defined
in section 1, can be written as
$\nu_{a_{i}}=\nu_{a_{i}^{1}}\times\cdots\times\nu_{a_{i}^{s}},$
where $\nu_{a_{i}^{j}}$ is the measure corresponding to the linear functional
${\mathscr{I}}_{a_{i}^{j}}(f)=\int_{K_{j}}\int_{K_{j}}f(k_{1}a_{i}^{j}k_{2})d\mu_{K_{j}}(k_{1})d\mu_{K_{j}}(k_{2}),$
where $\mu_{K_{j}}$ is a fixed Haar measure on $K_{j}$, and $f$ is a
continuous function with compact support on $G_{j}$.
Applying Ragozin’s result to each component of the de Rham decomposition of
$G/K$, we deduce that if
$r\geq\max_{1\leq i\leq s}{\dim G_{i}/K_{i}},$
then $\nu_{a_{1}^{j}}\ast...\ast\nu_{a_{r}^{j}}$ is absolutely continuous with
respect to $\mu_{G_{j}}$ for $j=1,\cdots,s$. If we denote by
$\varrho_{a_{1}^{j},\cdots,a_{r}^{j}}$ the Radon-Nikodym derivative of
$\nu_{a_{1}^{j}}\ast...\ast\nu_{a_{r}^{j}}$ with respect to the Haar measure
$\mu_{G_{j}}$, then
$\varrho_{a_{1}^{j},\cdots,a_{r}^{j}}=\frac{d\left(\nu_{a_{1}^{j}}\ast...\ast\nu_{a_{r}^{j}}\right)}{d\mu_{G_{j}}}\in
L^{1}(G_{j}).$
Since
$\nu_{a_{1}}\ast...\ast\nu_{a_{r}}=\left(\nu_{a_{1}^{1}}\ast...\ast\nu_{a_{r}^{1}}\right)\times\cdots\times\left(\nu_{a_{1}^{s}}\ast...\ast\nu_{a_{r}^{s}}\right),$
we deduce that if $r\geq\max_{1\leq i\leq s}{\dim G_{i}/K_{i}}$ then the
measure $\nu_{a_{1}}\ast...\ast\nu_{a_{r}}$ is absolutely continuous with
respect to $\mu_{G}=\mu_{G_{1}}\times\cdots\times\mu_{G_{s}}$. If we denote by
$\varrho_{a_{1},\cdots,a_{r}}$ the Radon-Nikodym derivative of
$\nu_{a_{1}}\ast...\ast\nu_{a_{r}}$ with respect to $\mu_{G}$, then
$\displaystyle\varrho_{a_{1},\cdots,a_{r}}\left(x_{1},\cdots,x_{s}\right)$
$\displaystyle=\frac{d\left(\nu_{a_{1}}\ast...\ast\nu_{a_{r}}\right)}{d\mu_{G}}$
$\displaystyle=\frac{d\bigg{(}\left(\nu_{a_{1}^{1}}\ast...\ast\nu_{a_{r}^{1}}\right)\times\cdots\times\left(\nu_{a_{1}^{s}}\ast...\ast\nu_{a_{r}^{s}}\right)\bigg{)}}{d\left(\mu_{G_{1}}\times\cdots\times\mu_{G_{s}}\right)}$
$\displaystyle=\frac{d\left(\nu_{a_{1}^{1}}\ast...\ast\nu_{a_{r}^{1}}\right)}{d\mu_{G_{1}}}\left(x_{1}\right)\cdots\frac{d\left(\nu_{a_{1}^{s}}\ast...\ast\nu_{a_{r}^{s}}\right)}{d\mu_{G_{s}}}\left(x_{s}\right)$
$\displaystyle=\varrho_{a_{1}^{1},\cdots,a_{r}^{1}}(x_{1})\cdots\varrho_{a_{1}^{s},\cdots,a_{r}^{s}}(x_{s})\in
L^{1}(G).$
Hence
(13)
$\displaystyle\int_{G}\left|\varrho_{a_{1},\cdots,a_{r}}\right|^{2}d\,\mu_{G}$
$\displaystyle=\prod_{i=1}^{s}\int_{G_{i}}\left|\varrho_{a_{1}^{i},\cdots,a_{r}^{i}}\right|^{2}d\,\mu_{G_{i}}.$
As a consequence of (13) and part (1) of the Main Theorem, we
deduce that if
$r>\max_{1\leq i\leq s}{\dim G_{i}/K_{i}},$
then
$\varrho_{a_{1},\cdots,a_{r}}\in L^{2}(G).$
Similarly, applying part (2) of the Main Theorem, we deduce that if
$r>\max_{1\leq i\leq s}{\dim G_{i}/K_{i}}+k,$
then
$\varrho_{a_{1},\cdots,a_{r}}\in C^{k}(G).$
###### Acknowledgment.
It is a great pleasure for me to thank Kaïs Ammari for helpful suggestions and
his careful reading of the paper. I also thank Mahmoud Al-Hashami for pointing
out some misprints in an earlier version of the paper.
## References
  * [1] M. Al-Hashami and B. Anchouche, Convolution of Orbital Measures on Complex Grassmannians, Journal of Lie Theory, 28 (2018), 695–710.
  * [2] B. Anchouche, S. K. Gupta and A. Plagne, Orbital Measures on $SU(2)/SO(2)$, Monatshefte für Mathematik, 178(4) (2015), 493–520.
  * [3] B. Anchouche and S. Gupta, Smoothness of the Radon-Nikodym Derivative of a Convolution of Orbital Measures on Compact Symmetric Spaces of Rank One, Asian Journal of Mathematics, 22 (2018), 211–222.
  * [4] J. P. Anker, The Spherical Fourier Transform of Rapidly Decreasing Functions. A Simple Proof of a Characterization due to Harish-Chandra, Helgason, Trombi, and Varadarajan, Journal of Functional Analysis, 96 (1991), 331–349.
  * [5] J. B. Conway, A Course in Functional Analysis, Graduate Texts in Mathematics 96, Springer-Verlag, 1990.
  * [6] J. J. Duistermaat, J. A. C. Kolk and V. S. Varadarajan, Functions, Flows and Oscillatory Integrals on Flag Manifolds and Conjugacy Classes in Real Semisimple Lie Groups, Compositio Mathematica, 49 (1983), 309–398.
  * [7] R. Gangolli, On the Plancherel Formula and the Paley-Wiener Theorem for Spherical Functions on Semisimple Lie Groups, Annals of Mathematics, Second Series, 93 (1971), 150–165.
  * [8] R. Gangolli and V. S. Varadarajan, Harmonic Analysis of Spherical Functions on Real Reductive Groups, Ergebnisse der Mathematik und ihrer Grenzgebiete 101, Springer-Verlag, 1988.
  * [9] S. Helgason, Groups and Geometric Analysis, Mathematical Surveys and Monographs 83, American Mathematical Society, 2002.
  * [10] T. Ochiai, Transformation Groups on Riemannian Symmetric Spaces, J. Differential Geometry, 3 (1969), 231–236.
  * [11] D. L. Ragozin, Zonal Measures on Isotropy Irreducible Homogeneous Spaces, Journal of Functional Analysis, 17 (1974), 355–376.
  * [12] J. A. Wolf, Harmonic Analysis on Commutative Spaces, Springer-Verlag.
|
arxiv-papers
| 2021-07-22T19:56:41 |
2024-09-04T03:07:18.562045
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Boudjemaa Anchouche",
"submitter": "Boudjemaa Anchouche",
"url": "https://arxiv.org/abs/2107.12177"
}
|
2107.12188
|
# An integrated whispering-gallery-mode resonator for solid-state coherent
quantum photonics
Arianne Brooks, Xiao-Liu Chu ([email protected]), Zhe Liu, Rüdiger Schott,
Arne Ludwig, Andreas D. Wieck, Leonardo Midolo, Peter Lodahl, and Nir
Rotenberg
###### Abstract
Tailored photonic cavities allow enhancing light-matter interaction ultimately
to create a fully coherent quantum interface. Here, we report on an integrated
microdisk cavity containing self-assembled quantum dots to coherently route
photons between different access waveguides. We measure a Purcell factor of
$F_{exp}=6.9\pm 0.9$ for a cavity quality factor of about 10,000, allowing us
to observe clear signatures of coherent scattering of photons by the quantum
dots. We show how this integrated system can coherently re-route photons
between the drop and bus ports, and how this routing is controlled by detuning
the quantum dot and resonator, or through the strength of the excitation beam,
where a critical photon number less than one photon per lifetime is required.
We discuss the strengths and limitations of this approach, focusing on how the
coherent scattering and single-photon nonlinearity can be used to increase the
efficiency of quantum devices such as routers or Bell-state analyzers.
###### keywords:
Quantum nanophotonics, quantum dots, resonators
Center for Hybrid Quantum Networks (Hy-Q), Niels Bohr Institute, University of
Copenhagen, Blegdamsvej 17, DK-2100 Copenhagen, Denmark
Lehrstuhl für Angewandte Festkörperphysik, Ruhr-Universität Bochum,
Universitätsstrasse 150, D-44780 Bochum, Germany
Present address: MRC London Institute of Medical Sciences, Du Cane Road,
London, W12 0NN, United Kingdom
Present address: Department of Physics, Engineering Physics & Astronomy, 64
Bader Lane, Queen’s University, Kingston, Ontario, Canada K7L 3N6
## 1 Introduction
Photonic resonators enhance light-matter interactions, and have played a
crucial role in quantum optical experiments over the past several decades.
Resonators such as photonic crystal cavities 1 or whispering gallery mode
resonators 2 have been fabricated on photonic chips, leading to pioneering
demonstrations of strong light-matter coupling of single atoms 3 and quantum
dots (QDs) 4, 5, or an increase in the coherent interaction between photons
and single organic molecules 6. Whispering gallery mode resonators also
support chiral quantum interactions 7, 8, where photons are emitted or
scattered unidirectionally, enabling non-reciprocal photonic elements
constructed with single emitters such as optical circulators 9, isolators 10
and atom-photon SWAP gates 11.
Here, we create an integrated photonic circuit consisting of a microdisk
resonator with embedded self-assembled QDs, access waveguides and grating
couplers, as shown in Fig. 1a. The enhancement provided by the resonator
lessens the effect of decoherence mechanisms 12, most notably spectral
diffusion, enabling the observation of coherent scattering of photons from the
QD and leading to a coherent switching of photons between the bus and drop
ports. This stands in contrast to earlier demonstrations where QDs embedded in
a photonic crystal cavity could modulate the transmission across a single
channel coupled to the cavity 13, 14. Cryogenic spectroscopy and time-resolved
measurements in conjunction with quantum optical theory allow us to quantify
the effect of the resonators on the QD, and to explore the response of the QD-
resonator detuning and excitation strengths on the photon routing.
## 2 Integrated microdisk resonators
We fabricate GaAs disk-shaped cavities that support whispering gallery modes
that are optically addressed via evanescently coupled single mode waveguides,
as shown in Figure 1a). In the present sample no electrical contacts were
implemented, which otherwise have been shown to efficiently overcome QD
broadening due to electrostatic charge fluctuations 15. However, electrically
contacted samples may increase absorption losses and fabrication
complexity 16, such that an alternative strategy using Purcell enhancement to
reduce the influence of noise processes is a favorable approach. The high
intrinsic quantum efficiency of QDs means that any non-radiative processes can
be neglected (c.f. Supplementary Information) 17. Consequently, we need only
consider radiative decay, which occurs with rates $\gamma_{\mathrm{cav}}$ and
$\gamma_{\mathrm{leak}}$ into the resonator modes and free-space,
respectively, as depicted in Figure 1b).
Figure 1: a) Scanning electron microscope image of the integrated disk cavity,
showing the excitation, bus and drop ports. b) Zoom-in of the disk resonator
and access waveguide, with a schematic of the QD position as indicated. The QD
acts as a two-level system that emits into the cavity with rate
$\gamma_{\mathrm{cav}}$ and into free-space with rate
$\gamma_{\mathrm{leak}}$, while the cavity loss rate $\kappa$ arises as a
combination of scattering into free-space with rate $\kappa_{\mathrm{0}}$ and
coupling into the access waveguides with rate $\kappa_{\mathrm{g}}$. The QD
couples to the optical modes of the resonator, which are calculated using
finite element methods and shown in (c).
The disk resonators are fabricated with a 3.5 $\mu$m radius, chosen because a
$\approx 1.1$ $\mu$m support pillar remains after the disk and waveguides are
under-etched (dark region in Figures 1a and b), and because finite element
simulations (COMSOL Multiphysics) reveal negligible bending losses. In fact,
for the first two radial modes, as shown in Figure 1c, we find intrinsic
quality factors (Q-factor) limited only by the computational accuracy
$\left(Q_{\mathrm{theory}}\approx 10^{13}\right)$. This value is well above
typically reported values of $Q=10^{5}$ for QD-based GaAs resonators 18, 19,
20 limited by surface roughness and gap state related surface absorption 21.
However, these effects can be decreased by employing surface passivation
techniques, resulting in ultrahigh Q-factor resonators $\left(Q\geq
10^{6}\right)$ 22. In our case, a further reduction due to coupling between
the resonator and the access waveguides is expected. From the field
distributions, we calculate the effective mode volumes 23 of the first and
second radial modes $V_{\mathrm{eff}}^{\left(1\right)}\approx
18(\lambda/n)^{3}$ and $V_{\mathrm{eff}}^{\left(2\right)}\approx
22(\lambda/n)^{3}$ (c.f. Supplementary Information).
To characterize the integrated photonic resonator, we use an optimized grating
coupler 24 to launch light from a tunable continuous-wave laser through the
access waveguides and into the disk. The access waveguide is single mode at
the 940 nm emission wavelength of the QDs, and is tapered to a width of 220 nm
in the vicinity of the resonator to improve coupling to the cavity mode.
Additionally, in a series of different structures, the gap between the disk
and waveguides is varied between 40 and 160 nm, in steps of 30 nm, to
determine the critical coupling geometry.
Working at cryogenic temperatures, we scan the excitation laser frequency over
a 13 THz bandwidth and record the outcoupled intensity transmitted through
both the bus and drop ports, as shown in Figure 1a. Exemplary spectra are
shown in Figure 2a, here for a gap size of 100 nm. In the bus port spectrum,
which has been normalized to a highly dispersive background (all dips are
shown; see Supplementary Information for raw data), we observe sharp dips at
the WGM frequencies where the disk couples light from one access waveguide to
another. Since the different resonator modes couple to the access waveguides
with different efficiencies, the depths of the dips $\Delta T$ vary. As
expected, the dips in the bus port spectrum are well correlated with peaks in
the drop port spectrum, and the different resonance orders can be determined
by the measured free spectral range. Furthermore, the QDs in the cavity are
excited non-resonantly using a Ti:Sapphire laser at 810 nm (Tsunami) and the
emission collected on a spectrometer. Note the strong emission enhancement
when the QDs are on resonance with a cavity mode.
These measurements are repeated for all structures, fitting each cavity
resonance with a Lorentzian function to determine its width $\kappa$, allowing
us to deduce the loaded Q-factor
$Q_{\mathrm{exp}}=\omega_{\mathrm{c}}/\kappa$, where $\omega_{\mathrm{c}}$ is
the central resonance frequency. A collection of $Q_{\mathrm{exp}}$ values are
shown in a histogram in Figure 2b), where a mean of 10600 $\pm$ 4700 is
obtained and the largest average $\bar{Q}_{\mathrm{exp}}$ is measured for 1st
order modes ( $\bar{Q}^{\left(1\right)}=$13600 $\pm$ 5400 vs
$\bar{Q}^{\left(2\right)}=$9300 $\pm$ 4900).
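The extraction step described above can be sketched as follows. The synthetic trace, noise level, and initial guesses below are our own illustrative choices, not the measured device data:

```python
# Sketch of the Q-factor extraction: fit a Lorentzian dip to a synthetic
# bus-port transmission trace and take Q = omega_c / kappa.  All trace
# parameters here are illustrative, not the measured values.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian_dip(df, f0, kappa, depth, offset):
    """Transmission dip of FWHM `kappa` centred at detuning `f0` (GHz)."""
    return offset - depth * (kappa / 2) ** 2 / ((df - f0) ** 2 + (kappa / 2) ** 2)

df = np.linspace(-200, 200, 2001)            # detuning from resonance (GHz)
rng = np.random.default_rng(0)
data = lorentzian_dip(df, 0.0, 32.0, 0.6, 1.0) \
       + 0.01 * rng.standard_normal(df.size)

popt, _ = curve_fit(lorentzian_dip, df, data, p0=(5.0, 50.0, 0.5, 1.0))
kappa_fit = abs(popt[1])                     # GHz
f0_abs = 319e12                              # assumed absolute resonance (Hz)
Q = f0_abs / (kappa_fit * 1e9)
print(f"fitted kappa/2pi = {kappa_fit:.1f} GHz -> loaded Q = {Q:.0f}")
```

With these assumed numbers the recovered Q lands near $10^{4}$, the order of magnitude reported for the devices.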
Figure 2: a) Exemplary bus (purple) and drop (green) port transmission
intensity as a function of frequency, measured by scanning the laser and
collecting light from respective port. Signatures of the optical resonances
are clearly visible in both, and their separations agree well with the
calculated free spectral range of the disk. For comparison, the emission
spectrum for QDs excited non-resonantly and measured through the drop port is
also presented (orange), showing strong emission enhancement when the QDs are
on resonance. The first-order resonance that is coupled to a QD is marked with
the red dashed box. b) Histogram of first-order, second-order and all mode
Q-factors extracted from the bus port data for all structures, results in a
mean Q in excess of 10,000. c) Dependence of $\bar{Q}$ (left axis) and average
$\Delta T$ on the structure gap width. Error bars represent the statistical
variance, while solid lines are theoretical fits from Eqs. 1 and 2.
To determine the optimal, critically coupled configuration, we consider the
gap-width dependence of both $\bar{Q}$ and change in transmission $\Delta T$,
cf. Figure 2c. Qualitatively, as the gap size increases, leading to a weaker
coupling between the resonator and access waveguides, the signature of the
coupling $\Delta T$ decreases with a corresponding increase in $\bar{Q}$. This
trend agrees well with the theoretical prediction (solid curves) for the
loaded ring resonator 25,
$\displaystyle\Delta T$
$\displaystyle=1-\left[T_{cc}+(1-T_{cc})\left(\frac{1-\kappa_{g}}{1+\kappa_{g}}\right)^{2}\right],$
(1) $\displaystyle\frac{1}{Q_{exp}}$
$\displaystyle=\frac{1}{Q_{int}}\left(1+\kappa_{g}\right),$ (2)
where $\kappa_{\mathrm{g}}=\kappa_{\mathrm{g0}}e^{-\xi g}$ characterizes the
coupling rate between the cavity and access waveguides, $\xi$ is the
characteristic length constant, and $g$ is the gap size. In these equations,
$T_{\mathrm{cc}}$ is the transmission at critical coupling, while
$Q_{\mathrm{int}}$ is the intrinsic Q-factor of the resonator (i.e. in the
absence of the access waveguides). From modelling the data, we find
$\bar{Q}_{\mathrm{int}}=(2.3\pm 0.1)\times 10^{4}$ and a critical coupling gap
size of $\sim 64\pm 10$ nm, well within the reach of modern nanofabrication
techniques.
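A minimal evaluation of this model is sketched below. $Q_{\mathrm{int}}=2.3\times 10^{4}$ is the fitted value from the text, while $\kappa_{\mathrm{g0}}$, $\xi$ and $T_{\mathrm{cc}}$ are assumed here, chosen only so that critical coupling ($\kappa_{\mathrm{g}}=1$) falls near the reported $\sim$64 nm gap:

```python
# Minimal evaluation of the loaded-resonator model, Eqs. (1)-(2).
# kappa_g0, xi and T_cc are assumptions; Q_int is the fitted value from
# the text.  xi is chosen so that kappa_g(64 nm) = 1 (critical coupling).
import numpy as np

KAPPA_G0 = 2.0
XI = np.log(2.0) / 64.0                      # 1/nm (assumed)

def kappa_g(g):
    """Dimensionless cavity-waveguide coupling for gap g in nm."""
    return KAPPA_G0 * np.exp(-XI * g)

def delta_T(g, T_cc=0.95):                   # Eq. (1)
    return 1 - (T_cc + (1 - T_cc) * ((1 - kappa_g(g)) / (1 + kappa_g(g))) ** 2)

def Q_loaded(g, Q_int=2.3e4):                # Eq. (2)
    return Q_int / (1 + kappa_g(g))

for g in (40, 70, 100, 130, 160):            # the fabricated gap sizes (nm)
    print(f"g = {g:3d} nm: dT = {delta_T(g):.3f}, Q = {Q_loaded(g):7.0f}")
```

The sweep reproduces the qualitative trend in Figure 2c: $\Delta T$ peaks at the critical gap while $\bar{Q}$ rises monotonically with gap size.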
## 3 Resonant scattering from a quantum dot
Figure 3: a) Low-power, frequency-dependent intensity as a function of cavity-
laser detuning recorded both from the drop port (green) and bus port (purple),
taken at 7K when the QD and cavity are nearly on resonance. Fits to theory
(orange curves, Eq. 4) enable extracting parameters such as the QD and cavity
linewidths, here 2.8 GHz and 36.6 GHz, respectively. b) Lifetime measurement
for the QD on resonance with the cavity (blue) and in a bulk sample (red) with
corresponding fits. c) Power-dependent change in transmission of the QD of
(a), showing a clear decrease in extinction for higher incident photon fluxes.
Also shown is the theoretical transmission (see main text) for the QD-cavity
system accounting only for dephasing (black) and also for spectral diffusion
(orange). For the latter, a critical photon number of 0.9 photons per lifetime
is found (orange dashed line). For comparison, the predicted saturation curve
for a QD in a waveguide (i.e. with no Purcell enhancement) is shown (purple).
We now turn to the QDs embedded within the disks and study how the resonators
alter the light-matter interaction. Figure 3a) shows typical drop (left axis,
green) and bus (right axis, purple) intensity as a function of cavity-laser
detuning, taken on a sample with a 100 nm gap at 7 K and at 5 $\mu$W
excitation power. In this figure, we see a clear signature of the coherent
interaction between photons and the QD (highlighted by dashed lines in Fig.
3a), resulting in a re-routing of the photons between the bus and drop ports,
at a QD-cavity detuning of $\delta=0.02\kappa$. In the drop port, we observe a
clear extinction by the QD of the transmission that is indicative of
interference between the photons scattered by the QD and the incoming probe
field 26, 15. Similarly, we observe a peak in the bus port intensity at the
same location, as additional photons are scattered into this channel by the
QD.
To accurately model the frequency response presented in Figure 3a we first
require knowledge of the emitter decay rate, which is the sum of decay rates
into free-space $\gamma_{\mathrm{leak}}$ and the cavity mode
$\gamma_{\mathrm{cav}}$. We therefore measure the QD lifetime in both bulk
GaAs and when coupled to the microdisk, presenting exemplary results in Figure
3b. Bulk (red data) measurements are well-fitted by a single-exponential decay
with an average value of $\gamma_{\mathrm{bulk}}$ = (0.63 $\pm$ 0.07) ns-1,
corresponding to the natural linewidth of $\gamma_{\mathrm{bulk}}/2\pi$ = (0.1
$\pm$ 0.01) GHz. In contrast, a double-exponential is needed to fit the cavity
enhanced lifetime measurement (blue data), which we attribute to the different
coupling of the two, orthogonally polarized QD transition dipoles to the
cavity. Here, one dipole is well coupled to the cavity and hence has a fast
decay rate
$\gamma_{\mathrm{fast}}=\gamma_{\mathrm{cav}}+\gamma_{\mathrm{leak}}=(4.97\pm
0.08$) ns-1 ((0.79 $\pm$ 0.01) GHz linewidth), while the other is weakly
coupled with a decay rate $\gamma_{\mathrm{slow}}=(0.83\pm 0.01$) ns-1 ((0.31
$\pm$ 0.002) GHz linewidth). By comparing the decay rate of the well-coupled
transition $\gamma_{\mathrm{fast}}$ to that of bulk $\gamma_{\mathrm{bulk}}$,
we find a lifetime enhancement of 7.9 due to the cavity. While it is likely
that embedding the QD in the microdisk suppresses emission into free-space,
relative to an emitter in the bulk, in what follows we assume that
$\gamma_{\mathrm{leak}}\approx\gamma_{\mathrm{bulk}}$ as is done in
literature27, which means that we extract lower-bounds on the Purcell factor
and the coupling efficiency of our system. Finally, we take the pure-dephasing
rate for the QD embedded in the microdisks and at temperatures ranging from 6
- 12 K to be $\gamma_{\mathrm{dp}}/2\pi=0.01$ GHz, as reported in literature
28.
Having determined $\gamma_{\mathrm{cav}}$, $\gamma_{\mathrm{leak}}$ and
$\gamma_{\mathrm{dp}}$, we repeat the spectral measurements such as those
presented in Figure 3a, increasing the excitation laser power. For the drop port
(green data), for example, the transmitted intensity is
$T_{\mathrm{drop}}=\eta\left|t_{\mathrm{drop}}\right|^{2}$, where $\eta$
accounts for the incidence photon flux and the cavity-mediated coupling
efficiency between the bus and drop port waveguides. An analytic form of the
transmission coefficient, including coherent scattering from the QD, is known
to be 29, 30
$\displaystyle t_{\mathrm{drop}}=t_{0}[-1$
$\displaystyle+\frac{f}{(1+S)(f+(1+\frac{2i\Delta\omega}{(\gamma_{\mathrm{leak}}+2\gamma_{\mathrm{dp}})})(1+i\frac{\Delta\omega+\delta}{(\kappa/2)}))}],$
(3)
where $\Delta\omega=\omega_{\mathrm{laser}}-\omega_{\mathrm{QD}}$ is the laser
detuning to the QD resonance,
$f=\gamma_{\mathrm{cav}}/\left(\gamma_{\mathrm{leak}}+2\gamma_{\mathrm{dp}}\right)$,
$t_{0}=1/\left[1+i\left(\Delta\omega+\delta\right)/\left(\kappa/2\right)\right]$
and $S$ is the saturation parameter that accounts for the incident power (see
Supplementary Information for relationship of $S$ to input power and photon
number per lifetime, and the corresponding $t_{\mathrm{bus}}$). Spectral
diffusion in the system results in ‘wandering’ of the QD resonance, which can
be modelled by a convolution of the transmission with a Gaussian with
linewidth $\sigma_{\mathrm{sd}}$:31
$T_{\mathrm{drop,conv}}=|t_{\mathrm{drop}}|^{2}*P(\sigma_{\mathrm{sd}}),$ (4)
where
$P(\sigma_{\mathrm{sd}})=\frac{1}{\sqrt{2\pi}\sigma_{\mathrm{sd}}}\mathrm{exp}\bigg{(}-\frac{1}{2}\big{(}\frac{\Delta\omega-\delta}{\sigma_{\mathrm{sd}}}\big{)}^{2}\bigg{)}.$
(5)
As can be seen in Figure 3a, the frequency response is well reproduced by the
theory.
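Eqs. (3)-(5) can be evaluated directly. The sketch below uses the fitted parameters quoted in the text; the frequency grid, the discrete convolution, and $\eta=1$ are our own choices:

```python
# Sketch of the drop-port response, Eqs. (3)-(5), with the fitted parameters
# quoted in the text.  All rates are linear frequencies in GHz; eta = 1.
import numpy as np

g_leak, g_dp = 0.10, 0.01               # gamma_leak/2pi, gamma_dp/2pi
g_cav = 0.69                            # (gamma_fast - gamma_bulk)/2pi
kappa, S, sigma_sd = 36.6, 1.5, 0.6
delta = 0.02 * kappa                    # QD-cavity detuning

dw = np.linspace(-8, 8, 3201)           # laser-QD detuning (GHz)
G = g_leak + 2 * g_dp
f = g_cav / G
t0 = 1 / (1 + 1j * (dw + delta) / (kappa / 2))
t_drop = t0 * (-1 + f / ((1 + S) * (f + (1 + 2j * dw / G)
                                      * (1 + 1j * (dw + delta) / (kappa / 2)))))
T = np.abs(t_drop) ** 2                 # eta = 1

# Eq. (5): spectral diffusion as a normalised Gaussian convolution
gauss = np.exp(-0.5 * ((dw - delta) / sigma_sd) ** 2)
T_conv = np.convolve(T, gauss / gauss.sum(), mode="same")

band = np.abs(dw) < 5                   # stay clear of convolution edge effects
i_min = np.argmin(np.where(band, T_conv, np.inf))
depth = 1 - T_conv[i_min] / T_conv[band].max()
print(f"dip at {dw[i_min]:+.2f} GHz, relative depth {100*depth:.0f}%")
```

The convolution visibly fills in the coherent dip, which is why spectral diffusion limits the measured extinction.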
In practice, bus and drop port frequency-resolved data at different excitation
powers are simultaneously fit with $\delta$, $\omega_{QD}$, $\kappa$, $S$ and
$\sigma_{\mathrm{sd}}$ as free parameters, noting that $\sigma_{\mathrm{sd}}$
is temperature dependent (see Figure 6 in the Supplementary Information). For
the 5 $\mu$W data presented in Figure 3a, we find S = 1.5 $\pm$ 0.2
(corresponding to 1.4 $\pm$ 0.2 photons per lifetime), a QD-cavity detuning of
$\delta=0.02\kappa$ where $\kappa/2\pi=36.6\pm 2$ GHz and spectral diffusion
of $\sigma_{\mathrm{sd}}/2\pi=0.6\pm 0.1$ GHz. We also find a coherent
extinction of photons in the drop port ($\Delta I_{\mathrm{drop}}$) of (-24
$\pm$ 4)$\%$, as those photons are re-routed back into the bus port ($\Delta
I_{\mathrm{bus}}$) by the QD. The measured ratio of
$\sigma_{\mathrm{sd}}/\gamma_{\mathrm{cav}}\approx$0.87 is a factor of 4
better than what has been achieved in slow-light photonic crystals with QDs
that are not electrically contacted 32, where a peak extinction of $8\%$ was
observed.
The routing can be controlled either through the QD-cavity detuning or by
varying the intensity of the incident photon stream. We first demonstrate the
latter, presenting the fraction of photons re-routed from the drop to bus
ports as a function of the incident photon flux per lifetime in Figure 3c.
Here, the extinction measurements (symbols) are compared with three
theoretical predictions (using Eq. 3 above and the fitted parameters): the QD-
resonator with broadening due to pure dephasing and spectral diffusion (orange
curve), with pure dephasing only (as for an electrically contacted sample,
black curve), or a QD in a waveguide (i.e. no emission enhancement, purple
curve). The measurements are well reproduced when both pure dephasing and
spectral diffusion are accounted for, and for this detuning
$\left(\delta=0.02\kappa\right)$ a critical number of photons per lifetime of
$n_{c}=0.94\pm 0.2$ and maximum extinction of $\Delta
I_{\mathrm{drop,max}}=(-53\pm 4)\%$ in the limit of low power ($S=0$) were
found. For comparison, the maximum extinction realizable for a QD without
Purcell enhancement is (-$15\pm 4$) %. For an electrically contacted QD
system, where $\sigma_{sd}=0$, the critical photon number is expected to
decrease to $n_{c}=0.3\pm 0.02$, with a maximum $\Delta
I_{\mathrm{drop,max}}=(-98\pm 2)\%$ being achievable (black curve) (see
Supplementary Information). These results benchmark the conditions for
coherent routing of photons between the bus and drop ports at the single-
photon level.
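The power dependence follows from Eq. (3) evaluated on resonance. The sketch below sweeps the saturation parameter $S$ with spectral diffusion switched off, using the fitted parameters from the text; the mapping from $S$ to photons per lifetime is given in the Supplementary Information and is not reproduced here:

```python
# On-resonance extinction from Eq. (3) with sigma_sd = 0, swept over the
# saturation parameter S.  Parameter values are the fitted ones from the
# text; rates are linear frequencies in GHz.
g_leak, g_dp, g_cav = 0.10, 0.01, 0.69
kappa = 36.6
delta = 0.02 * kappa
f = g_cav / (g_leak + 2 * g_dp)

def dI_drop(S):
    """Relative change of the drop-port signal at the QD resonance (dw = 0)."""
    bracket = -1 + f / ((1 + S) * (f + (1 + 1j * delta / (kappa / 2))))
    return abs(bracket) ** 2 - 1        # |t_drop|^2 / |t_0|^2 - 1

for S in (0.0, 0.5, 1.5, 5.0):
    print(f"S = {S:3.1f}: dI_drop = {100 * dI_drop(S):+.0f}%")
```

At $S=0$ this recovers the quoted dephasing-only low-power limit of about $-98\%$, and the extinction weakens monotonically as the drive saturates the transition.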
Figure 4: (a) The QD lifetime as a function of QD-cavity detuning (in units of
$\kappa$), with both the decay rate (left) and emission enhancement (right).
The QD is temperature-tuned to the cavity resonance at 8K. Decay rates of both
the well- (fast, purple) and weakly-coupled (slow, orange) dipoles are shown,
and are compared to bulk emission rates (dashed line, shaded region depicts
variance). Extinction as the QD is tuned across the cavity resonance, showing
the maximum extinction $\Delta I_{\mathrm{drop,max}}$ at (b) the QD resonance
and (c) the cavity resonance.
The photon routing can also be controlled by changing the QD-cavity detuning,
which we do by varying the sample temperature, hence tuning the QD through the
cavity resonance from -0.3$\kappa$ to 0.09$\kappa$ (Figure 4 in Supplementary
Information). As the QD is scanned through the resonance,
$\gamma_{\mathrm{fast}}$ increases as can be seen in Figure 4a (purple
symbols) and peaks at a maximum decay rate of $\gamma_{\mathrm{fast}}$ = (5.11
$\pm$ 0.08) $\mathrm{ns}^{-1}$ at 8K ($\delta=-0.04\kappa$), corresponding to
an 8-fold emission enhancement. In contrast, the weakly coupled transition
decay rate (orange symbols) remains constant and near the bulk decay rate
(shaded region).
We observe a similar trend in photon re-routing efficiency, shown in Figure
4b; as the QD becomes resonant with the microdisk, the maximum $\Delta
I_{\mathrm{drop,max}}$ at the QD resonance ($\omega_{\mathrm{QD}}$) becomes
increasingly negative (left axis, green symbols) while the maximum $\Delta
I_{\mathrm{bus,max}}$ increases (right axis, purple symbols), as more photons
are re-routed from the drop to bus port by the emitter, in good agreement with
the theoretical calculations (solid curves). The predicted increase in routing
efficiency for positive detunings is due to the decrease in spectral diffusion
at lower temperatures (c.f. Supplementary Information). For our system, a maximum of $(23\pm 3)\%$ of the photons are re-routed between the ports, although for a similar but electrically contacted resonator ($\sigma_{\mathrm{sd}}$ = 0 and $S=1.5$), we predict that up to $(56\pm 3)\%$ of the photons can be re-routed (dashed curves).
Figure 4c demonstrates how our system can be used as a coherent photon router
in practice, showing the fraction of photons scattered out of the drop port
(left axis, green symbols) and into the bus port (right axis, purple symbols)
for photons on resonance with the cavity ($\omega_{\mathrm{cav}}$) as a
function of the QD-cavity detuning. Here, we observe that a detuning of the QD
of $0.39\kappa$ (requiring a temperature change of only 6 K) is sufficient to
completely turn off the router, corresponding to a shift of $5.1$ QD
linewidths. For an electrically contacted resonator, the fraction of photons
scattered into the bus port increases to $(56\pm 3)\%$ (dashed curve), where
the intensity of the incoming photon stream adds an additional control knob
that increases this value to $(97\pm 2)\%$ (dotted curve). Instead of
temperature tuning of the QD, it is also possible to achieve similar control
electronically with a contacted sample 33 or even all-optically 26.
## 4 Discussion
Enhancing the quantum light-matter interactions simultaneously increases the
coupling of photons to desired modes and the coherence of the emission, with
implications for a host of quantum technologies. We define this enhancement in
terms of the Purcell factor $F$, which quantifies the change to the radiative
emission rate, such that $\gamma_{\mathrm{cav}}=F\gamma_{\mathrm{bulk}}$.
Experimentally, we find $F_{\mathrm{exp}}=6.9\pm 0.9$ (c.f. Figure 4a), which can be compared to the predicted value 34, 30
$F_{\mathrm{ideal}}=\frac{3}{4\pi^{2}}\left(\frac{\lambda}{n}\right)^{3}\frac{Q_{\mathrm{exp}}}{V_{\mathrm{eff}}},$
(6)
where for our resonator $V_{\mathrm{eff}}\approx$ 18$(\lambda/n)^{3}$ and $Q_{\mathrm{exp}}$ = 8900 $\pm$ 100. The resulting $F_{\mathrm{ideal}}$ = 38 $\pm$ 1 is larger than measured experimentally due to spatial mismatch of the
QD relative to the field maximum of the optical mode. Deterministic positioning 35, 36 can address this issue.
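Equation 6 can be evaluated directly with the numbers quoted above ($Q_{\mathrm{exp}}$ = 8900 and $V_{\mathrm{eff}}\approx$ 18$(\lambda/n)^{3}$ are taken from the text); a minimal sketch:

```python
import math

def purcell_ideal(Q, V_eff):
    """Ideal Purcell factor of Eq. (6); V_eff in units of (lambda/n)^3,
    so the (lambda/n)^3 prefactor cancels."""
    return 3.0 / (4.0 * math.pi**2) * Q / V_eff

# Values quoted in the text: Q_exp = 8900 and V_eff ~ 18 (lambda/n)^3
F_ideal = purcell_ideal(8900, 18)
print(f"F_ideal = {F_ideal:.0f}")  # ~38, matching the quoted value
```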
Increasing $F$ not only increases the emission rate, but also decreases the
relative effect of decoherence mechanisms such as pure dephasing or spectral
diffusion, with clear implications for bright sources of indistinguishable
single photons 37. For our system, the dominant source of decoherence is
spectral diffusion that, in bulk ($F$=1), results in a ratio of
$\sigma_{\mathrm{sd}}/F\gamma_{\mathrm{bulk}}=$ 6. In Figure 5a, we display how this ratio decreases as $F$ increases, noting that a moderate $F\geq 6$ suffices to reach a unity ratio. A further two orders-of-magnitude reduction in
decoherence is obtainable in the absence of spectral diffusion, motivating the
use of electrically contacted resonators.
Figure 5: Effect of the Purcell enhancement on system parameters. a) As $F$
increases, so too does the emitter-resonator coupling efficiency $\beta$ (left
axis, solid). Conversely, an increase in the emission rate
$\gamma_{\mathrm{cav}}=F\gamma_{\mathrm{bulk}}$ decreases the relative effect
of the spectral diffusion $\sigma_{\mathrm{sd}}$ (right axis, solid) and
dephasing $\gamma_{\mathrm{dp}}$ (right axis, dashed) to the system
decoherence. b) The maximum drop port extinction (left) and critical photon
number $n_{c}$ (right) achievable as a function of $F$, both with (solid) and
without (dotted) spectral diffusion.
A QD-microdisk resonator system is not limited to acting as a source of
quantum light states, but can also be used for their control or processing. As
an example, this system can act as a Bell-state analyzer, a key element of a
quantum optical network 38, either in a standard cavity-QED configuration 39
or as a passive, nonlinear scatterer 40. For cavity-QED, the Purcell factor
can be re-expressed in terms of the QD-cavity coupling strength $g=\sqrt{F_{\mathrm{exp}}\kappa\gamma_{\mathrm{bulk}}}/2$. The resulting $g/2\pi=2.5\pm 0.3$ GHz, which can be used to write the cooperativity of the
system 41
$C=4\left|g\right|^{2}/\left[\kappa\gamma_{\mathrm{bulk}}\right]=6.9\pm 0.9$.
Given the success rate of a cavity-QED based analyzer of $1-1/C$, we expect
our modest $F_{\mathrm{exp}}=6.9$ device to succeed $86\pm 11\%$ of the time.
On the other hand, the success of a Bell-state analyzer based on passive,
coherent scattering from the QD depends on the emitter-waveguide coupling
efficiency $\beta$ 40. By expressing the (lower-bound) $\beta$-factor as
$\beta=\frac{\gamma_{\mathrm{cav}}}{\gamma_{\mathrm{cav}}+\gamma_{\mathrm{leak}}}=\frac{F}{F+1},$
(7)
we find an experimental $\beta_{\mathrm{exp}}\approx 0.87\pm 0.01$, which
increases to $\beta_{\mathrm{ideal}}\approx 0.97$ for an optimally positioned
QD. For this scheme, the success rate scales as $\left(2\beta-1\right)/\beta$,
showing that near-perfect operation should be possible with our system.
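The two success rates can be compared numerically; a short sketch using the measured $F=6.9$ (with $C\approx F$ for this system), noting that with $\beta=F/(F+1)$ the passive-scheme rate $(2\beta-1)/\beta$ algebraically reduces to $1-1/F$:

```python
def cavity_qed_success(C):
    """Success rate 1 - 1/C of the cavity-QED Bell-state analyzer."""
    return 1.0 - 1.0 / C

def passive_success(beta):
    """Success rate (2*beta - 1)/beta of the passive-scattering scheme."""
    return (2.0 * beta - 1.0) / beta

F = 6.9                 # measured Purcell factor; C ~ F for this system
beta = F / (F + 1.0)    # lower-bound coupling efficiency, Eq. (7)

print(f"cavity-QED: {cavity_qed_success(F):.2f}")  # ~0.86
print(f"passive:    {passive_success(beta):.2f}")  # also ~0.86, since (2b-1)/b = 1 - 1/F
```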
Finally, as discussed in the main text, the QD-microdisk resonator can
function as a coherent router, where the re-routing of photons between the
drop and bus ports can be controlled either through the intensity of the
incident photon stream or the QD-microdisk detuning. In Figure 5b we present
the dependence of the maximum change in drop port intensity (i.e. re-routing
efficiency, left axis) on the Purcell factor in the cases where the emitter
suffers from both pure dephasing and spectral diffusion (solid curve) or just
the former (dotted curve). Even for the non-contacted systems, we expect a re-
routing efficiency in excess of $80\%$ for moderate enhancements of $F\approx
20$, while for an electrically contacted sample near-perfect routing is
predicted already at $F\approx 10$. We predict similar dependencies for the
critical photon number (Figure 5b, right axis), where for an on-resonance
emitter we observe that moderate Purcell factors are sufficient to overcome
spectral diffusion for single-photon nonlinearities.
## 5 Conclusions
In summary, we present an integrated whispering gallery mode resonator system
for on-chip quantum photonics based on single self-assembled QDs. For such a
system, which can be easily integrated with other photonic components,
Q-factors in excess of 20,000 are observed, enhancing emission into the
desired optical modes to simultaneously achieve a high coupling efficiency
$\beta>0.85$ and compensate for the majority of the decoherence mechanisms.
Using this platform, we demonstrate coherent re-routing of photons between the
drop and bus ports, observing a peak efficiency of $(42\pm 4)\%$ that is
expected to increase to $(53\pm 4)\%$ at $S=0$ and to $(98\pm 2)\%$ ($S=0$ and $\sigma_{\mathrm{sd}}=0$) with electrical gating 33. We show control over
this routing using both temperature tuning and via the excitation intensity,
with the latter requiring a critical photon number of only 0.94 photons per
lifetime. Altogether, our platform enables coherent light-matter scattering 31
and efficient quantum optical nonlinearities 32, 42 at the single-photon
level, two key functionalities of solid-state quantum technologies 41.
The authors gratefully acknowledge financial support from Danmarks
Grundforskningsfond (DNRF 139, Hy-Q Center for Hybrid Quantum Networks) and
the European Union’s Horizon 2020 research and innovation programme under
grant agreement No. 824140 (TOCHA, H2020-FETPROACT-01-2018). A.D.W. and A.L. gratefully acknowledge support from DFG-TRR160 and BMBF - Q.Link.X 16KIS0867.
The supplementary information containing the theoretical models and fabrication details can be found below.
## 6 Supplementary Information
### 6.1 Raw bus port scans
We use a tunable continuous-wave (cw) laser (Toptica CTL) to study the sample by
coupling the laser into the waveguide mode via the shallow etch gratings. We
scan the laser frequency over a 13 GHz range and monitor the intensity that is
outcoupled at the bus port with a single-photon detector. An example of the
raw data is shown in Fig. 6, corresponding to Fig. 2a in the main manuscript.
A large background oscillation due to the frequency-dependent grating response is visible. To make the transmission dips more comparable, they are each normalized to the background count rate at their frequency position, allowing us to obtain the dip transmission $\Delta T$.
Figure 6: Exemplary transmission intensity as a function of frequency, using a
resonant CTL laser and measuring the emission through the bus port.
Transmission dips corresponding to cavity resonances are clearly visible, and
their separations agree well with the calculated FSR of the disk.
### 6.2 Estimating the Purcell factor and coupling strength
The theoretical model in this paper is based on reference 29. We start by considering a single indium arsenide (InAs) quantum dot (QD) in bulk gallium arsenide (GaAs). It decays with a rate $\gamma_{\mathrm{bulk}}=\gamma_{0}+\gamma^{\prime}$, corresponding to radiative decay and decay into non-radiative channels, respectively 43. When the emitter is placed in a cavity, its overall decay rate is modified and is thus given by:
$\gamma_{\mathrm{tot}}=\gamma_{\mathrm{cav}}+\gamma^{\prime}+\gamma_{\mathrm{leak}}=F\gamma_{0}+\gamma^{\prime}+\gamma_{\mathrm{leak}},$
(8)
where we consider decay into the cavity mode, non-radiative channels and non-
cavity modes respectively. For InAs QDs in bulk GaAs, quantum efficiencies
($\mathrm{QE}_{\mathrm{bulk}}$) $>0.9$ are routinely reported 44, 17. By coupling the QDs to a cavity, the enhanced radiative decay rate further increases the quantum efficiency $\mathrm{QE}_{\mathrm{cavity}}$ as follows:
$\mathrm{QE}_{\mathrm{cavity}}=\frac{F\gamma_{0}+\gamma_{\mathrm{leak}}}{F\gamma_{0}+\gamma_{\mathrm{leak}}+\gamma^{\prime}}=\frac{F\gamma_{0}+\gamma_{0}}{F\gamma_{0}+\gamma_{0}+(1-\mathrm{QE}_{\mathrm{bulk}})\gamma_{\mathrm{bulk}}}=\frac{\mathrm{QE}_{\mathrm{bulk}}(F+1)}{\mathrm{QE}_{\mathrm{bulk}}\cdot
F+1},$ (9)
where we have used the approximation
$\gamma_{\mathrm{leak}}=\gamma_{\mathrm{0}}$, as has been reported elsewhere
27. The variation in $\mathrm{QE}_{\mathrm{cavity}}$ as a function of Purcell enhancement $F$ is displayed in Fig. 7, evaluated with $\mathrm{QE}_{\mathrm{bulk}}$ = 0.9 at $F=1$. Here the $\mathrm{QE}_{\mathrm{cavity}}$ increases above 0.99 for $F>6.3$, suggesting that the non-radiative decay rate is small and hence justifying the approximation $\gamma^{\prime}\approx 0$.
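A minimal sketch of Eq. (9), using $\mathrm{QE}_{\mathrm{bulk}}$ = 0.9 as above (the helper name `qe_cavity` is ours):

```python
def qe_cavity(F, qe_bulk=0.9):
    """Cavity-enhanced quantum efficiency of Eq. (9), under the
    approximation gamma_leak = gamma_0."""
    return qe_bulk * (F + 1.0) / (qe_bulk * F + 1.0)

# QE_cavity rises monotonically with the Purcell enhancement F
for F in (0.0, 1.0, 6.3, 38.0):
    print(F, round(qe_cavity(F), 3))
```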
Figure 7: Effect of the Purcell enhancement on system parameters. As $F$
increases, so too does the emitter quantum efficiency (left axis, dashed) and
the emitter-resonator coupling efficiency $\beta$ (left axis, solid).
Conversely, an increase in the emission rate
$\gamma_{\mathrm{cav}}=F\gamma_{\mathrm{bulk}}$ decreases the relative effect
of the spectral diffusion $\sigma_{\mathrm{sd}}$ (right axis, solid) and
dephasing $\gamma_{\mathrm{dp}}$ (right axis, dashed) to the system
decoherence.
By using these approximations, we can also write
$\gamma_{\mathrm{tot}}\approx(F+1)\gamma_{0}$ and the coupling efficiency as
30:
$\beta=\frac{F\gamma_{0}}{\gamma_{\mathrm{tot}}}\approx\frac{F}{(F+1)}$ (10)
To obtain the Purcell factor experimentally, we measure the lifetime of the QD
situated in the cavity and compare it to the average lifetime measured for QDs
in bulk GaAs. A cavity-enhanced decay rate allows us to express:
$\frac{\tau_{\mathrm{bulk}}}{\tau_{\mathrm{tot}}}=\frac{\gamma_{\mathrm{cav}}+\gamma^{\prime}+\gamma_{\mathrm{leak}}}{\gamma_{0}+\gamma^{\prime}}\approx
F+1$ (11)
Given the measured lifetimes, we obtain an $F_{\mathrm{exp}}=6.9\pm 0.9$, with which we can further estimate the QD-cavity coupling strength as follows 30:
$g=\frac{\sqrt{F_{exp}\kappa\gamma_{\mathrm{bulk}}}}{2}$ (12)
For our system, we obtain $\{g,\kappa,\gamma_{\mathrm{bulk}}\}/2\pi=\{2.5,36.6,0.1\}$ GHz.
Subsequently, we estimate the cooperativity $C$ using the following formula:
$C=\frac{4g^{2}}{\kappa\gamma_{\mathrm{bulk}}}\approx F_{\mathrm{exp}}=6.9\pm
0.9$ (13)
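The chain from measured lifetimes to $g$ and $C$ (Eqs. 11-13) can be sketched as follows, with the rates quoted above entered as frequencies (the $/2\pi$ values) in GHz:

```python
import math

def purcell_from_lifetimes(tau_bulk, tau_tot):
    """Eq. (11): tau_bulk/tau_tot ~ F + 1, so F = tau_bulk/tau_tot - 1."""
    return tau_bulk / tau_tot - 1.0

def coupling_strength(F, kappa, gamma_bulk):
    """Eq. (12): g = sqrt(F*kappa*gamma_bulk)/2, all rates in the same units."""
    return math.sqrt(F * kappa * gamma_bulk) / 2.0

def cooperativity(g, kappa, gamma_bulk):
    """Eq. (13): C = 4 g^2 / (kappa * gamma_bulk)."""
    return 4.0 * g**2 / (kappa * gamma_bulk)

# Rates quoted in the text, as frequencies in GHz
F, kappa, gamma_bulk = 6.9, 36.6, 0.1
g = coupling_strength(F, kappa, gamma_bulk)
print(f"g/2pi = {g:.1f} GHz")                             # ~2.5 GHz
print(f"C = {cooperativity(g, kappa, gamma_bulk):.1f}")   # ~6.9, i.e. C ~ F
```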
### 6.3 Coherent interaction of the QD-cavity system
To study the coherent interaction of the QD-cavity system, we consider a two-
level system placed in a cavity and coupled to a waveguide, as depicted in
Fig. 1 in the main text. We measure the transmission $T_{\mathrm{bus}}$ in the
bus port and $T_{\mathrm{drop}}$ in the drop port. We start with the rate
equations of the QD-cavity system, which are found in the literature 29:
$\dot{s}=-i\Delta\omega
s-\frac{\gamma_{\mathrm{cav}}}{2}\frac{Q}{Q_{0}}\left[t_{0}+\frac{1}{f}\right]s+i\frac{Q}{Q_{0}}\sqrt{\frac{\gamma_{\mathrm{cav}}}{2}}(2s_{z})b_{in}t_{0}$
(14)
$\dot{s_{z}}=-\gamma_{\mathrm{cav}}\frac{Q}{Q_{0}}\left[\Re{(t_{0})}+\frac{1}{f}\right]\left(s_{z}+\frac{1}{2}\right)+\sqrt{\frac{\gamma_{\mathrm{cav}}}{2}}\frac{Q}{Q_{0}}\left[is^{*}b_{in}t_{0}+c.c.\right]$
(15)
$b_{t}=-\frac{Q}{Q_{0}}t_{0}b_{in}-i\frac{Q}{Q_{0}}\sqrt{\frac{\gamma_{\mathrm{cav}}}{2}}t_{0}s,$
(16)
where $b_{t}$ is the outgoing field into the drop port, $b_{\mathrm{in}}$ is
the incoming field amplitude, $Q$ is the total quality factor including
coupling to leaky modes, $Q_{0}$ is the quality factor of the cavity mode,
$t_{0}$ is the transmission of an empty cavity,
$\Delta\omega=\omega_{laser}-\omega_{QD}$ is the frequency detuning of the
drive field, $\kappa$ is the cavity linewidth, and $\delta$ is the detuning of
the cavity resonance with respect to the QD resonance. The atomic operators
are $S_{z}=\frac{1}{2}(\ket{e}\bra{e}-\ket{g}\bra{g})$ and
$S_{-}=\ket{g}\bra{e}$ and in the above equations we are considering the
expectation values, $s=\langle S_{-}\rangle$, $s_{z}=\langle S_{z}\rangle$,
$b_{t}=\langle b_{t}\rangle$ and $b_{in}=\langle b_{\mathrm{in}}\rangle$.
Here, $t_{0}$ is the bare cavity response in the absence of an emitter:
$t_{0}=\frac{1}{1+i\frac{Q}{Q_{0}}\frac{\Delta\omega+\delta}{(\kappa/2)}}$
(17)
Additionally, the parameter $f$ compares the decay rate into the cavity to all other decay rates:
$f=\frac{\gamma_{\mathrm{cav}}}{\gamma_{\mathrm{leak}}+2\gamma_{\mathrm{dp}}}.$
The steady-state solutions for $s$ and $s_{z}$ can be found from Eqs. 14-15 by setting $\dot{s}=0$ and $\dot{s_{z}}=0$, which results in:
$s=-\frac{2i\sqrt{\frac{2}{\gamma_{\mathrm{cav}}}}s_{z}b_{in}t_{0}}{t_{0}+\frac{1}{f}+\frac{2i\Delta\omega}{\gamma_{\mathrm{cav}}}\frac{Q_{0}}{Q}}$ (18)
$s_{z}=-\frac{1}{2}\frac{1}{1+\frac{|b_{in}|^{2}}{P_{c}}},$ (19)
$P_{c}=\frac{\gamma_{\mathrm{cav}}}{4|t_{0}|^{2}}\left[\frac{1}{f^{2}}+\frac{t_{0}+t_{0}^{*}}{f}+\frac{2i\Delta\omega}{\gamma_{\mathrm{cav}}}\frac{Q_{0}}{Q}(-t_{0}+t_{0}^{*})+\left(\frac{2\Delta\omega}{\gamma_{\mathrm{cav}}}\frac{Q_{0}}{Q}\right)^{2}+|t_{0}|^{2}\right].$
(20)
where $P_{c}$ is the critical power to reach $s_{z}=-1/4$ and scales like the
number of photons per second. Here we can use the following expressions to
simplify $P_{c}$:
$t_{0}+t_{0}^{*}=2\Re{(t_{0})}$ (21)
$t_{0}+t_{0}^{*}=2|t_{0}|^{2}$ (22)
$-t_{0}+t_{0}^{*}=2i\frac{Q}{Q_{0}}\frac{\Delta\omega+\delta}{(\kappa/2)}|t_{0}|^{2}$ (23)
$\frac{1}{|t_{0}|^{2}}=1+\left(\frac{Q}{Q_{0}}\frac{\Delta\omega+\delta}{(\kappa/2)}\right)^{2}$ (24)
This allows us to obtain the following expression for the critical power
$P_{c}$:
$P_{c}=\frac{\gamma_{\mathrm{cav}}}{4}\left[\left(1+\frac{1}{f}\right)^{2}+\left(\frac{Q}{Q_{0}}\frac{\Delta\omega+\delta}{f(\kappa/2)}\right)^{2}-\frac{4\Delta\omega}{\gamma_{\mathrm{cav}}}\frac{\Delta\omega+\delta}{(\kappa/2)}+\left(\frac{Q_{0}}{Q}\frac{2\Delta\omega}{\gamma_{\mathrm{cav}}}\right)^{2}+\left(\frac{2\Delta\omega}{\gamma_{\mathrm{cav}}}\frac{\Delta\omega+\delta}{(\kappa/2)}\right)^{2}\right].$
(25)
In the limit of $\gamma_{\mathrm{leak}},\gamma_{\mathrm{dp}}\rightarrow 0$ and
$Q_{0}=Q$, the above expression can be simplified to:
$P_{c}=\frac{\gamma_{\mathrm{cav}}}{4}\left[\left(\frac{2\Delta\omega}{\gamma_{\mathrm{cav}}}\right)^{2}+\left(\frac{2\Delta\omega}{\gamma_{\mathrm{cav}}}\frac{\Delta\omega+\delta}{(\kappa/2)}-1\right)^{2}\right].$
(26)
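Equations 25 and 26 can be checked numerically; the sketch below uses hypothetical rates and our own helper names, and recovers both the $f\rightarrow\infty$, $Q=Q_{0}$ limit and the ideal on-resonance critical photon number of $1/4$:

```python
def P_c_general(dw, delta, gamma_cav, kappa, f, Q_ratio=1.0):
    """Critical power, Eq. (25); Q_ratio = Q/Q0."""
    x = 2.0 * dw / gamma_cav
    y = (dw + delta) / (kappa / 2.0)
    return gamma_cav / 4.0 * ((1.0 + 1.0 / f)**2
                              + (Q_ratio * y / f)**2
                              - 2.0 * x * y
                              + (x / Q_ratio)**2
                              + (x * y)**2)

def P_c_ideal(dw, delta, gamma_cav, kappa):
    """Eq. (26): the limit gamma_leak, gamma_dp -> 0 (f -> infinity), Q = Q0."""
    x = 2.0 * dw / gamma_cav
    return gamma_cav / 4.0 * (x**2 + (x * (dw + delta) / (kappa / 2.0) - 1.0)**2)

# On resonance the ideal critical power is gamma_cav/4, i.e. a critical
# photon number n_c = P_c/gamma_tot -> 1/4 when gamma_tot -> gamma_cav
gamma_cav, kappa = 1.0, 50.0        # hypothetical rates
n_c = P_c_ideal(0.0, 0.0, gamma_cav, kappa) / gamma_cav
print(n_c)  # 0.25
```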
The atomic coherence $s$ therefore becomes:
$s=i\frac{Q}{Q_{0}}\sqrt{\frac{\gamma_{\mathrm{cav}}}{2}}b_{in}t_{0}\frac{1}{1+S}\frac{1}{i\Delta\omega+\frac{Q}{Q_{0}}\frac{\gamma_{\mathrm{cav}}}{2}(t_{0}+\frac{1}{f})},$
(27)
where $S=\alpha|b_{in}|^{2}/P_{c}$ is the saturation parameter. We have also
introduced a coupling efficiency $\alpha$ that relates the incoming light to
the light that reaches the cavity, such that for an ideal lossless system
$\alpha=1$. $S$ can hence be expressed as:
$S=\frac{\alpha n_{in}}{n_{c}},$ (28)
where $n_{in}=|b_{in}|^{2}/\gamma_{\mathrm{tot}}$ and
$n_{c}=P_{c}/\gamma_{\mathrm{tot}}$ are the number of incident photons per
lifetime and the critical number of photons per lifetime to reach S = 1,
respectively. In our work, we are considering a "leaky" cavity where the
emitter in the cavity couples to leaky modes with decay rate
$\gamma_{\mathrm{leak}}$ and experiences pure dephasing with the rate
$\gamma_{\mathrm{dp}}$. Assuming coupling to the cavity via the waveguides
only, we obtain the following transmission coefficient
$t_{\mathrm{drop}}=b_{t}/b_{\mathrm{in}}$:
$t_{\mathrm{drop}}=\frac{b_{t}}{b_{in}}=t_{0}\left[-1+\frac{f}{\left(1+S\right)\left(f+\left(1+\frac{2i\Delta\omega}{\gamma_{\mathrm{leak}}+2\gamma_{\mathrm{dp}}}\right)\left(1+i\frac{\Delta\omega+\delta}{(\kappa/2)}\right)\right)}\right]$
(29)
$t_{\mathrm{bus}}=1+t_{\mathrm{drop}}$ (30)
This gives the steady-state cavity transmittivity in the drop and bus ports:
$T_{\mathrm{drop}}=\chi|t_{\mathrm{drop}}|^{2}$ and
$T_{\mathrm{bus}}=|t_{\mathrm{bus}}|^{2}$, where $\chi$ accounts for the total
unnormalized count rate. Since the transmission by the QD depends on the
incoming power, the first step in our fitting procedure is to use Eq. 29 to
fit the drop and bus port spectra at 7 K whilst varying the power. We use the knowledge of the QD lifetime in the cavity and hence obtain the cavity
linewidth $\kappa$, QD spectral diffusion $\sigma_{\mathrm{sd}}$, detuning
$\delta$ and saturation parameter $S$. In Fig. 8a), we show the QD extinction
for different powers together with its theoretical fit and observe
experimentally that the coherent extinction by the QD on resonance with the
cavity decreases as the power increases. Both the theoretical drop and bus
port extinction $I_{\mathrm{drop}}$ and $I_{\mathrm{bus}}$, excluding spectral
diffusion (pink and black solid lines), are displayed. Their inverse relation shows that the incoming photons are routed either to the bus port or to the drop port, and that the ratio can be controlled by the incoming photon flux impinging on a single QD in the cavity, enabling its use as a photon switch.
This analysis also allows us to obtain the critical photon number $n_{c}$ = 0.94 at 7 K at a detuning of $0.02\kappa$. In the absence of spectral diffusion and on resonance (but including dephasing), $n_{c}$ = 0.3, close to the ideal value of 0.25.
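The transmission model of Eqs. 29-30 used in this fitting procedure can be sketched numerically; the rates below are hypothetical, $Q=Q_{0}$ is assumed, and on resonance the result is consistent with the simplified form $t_{\mathrm{drop}}=t_{0}[-1+f/((1+f)(1+S))]$ of Eq. 31:

```python
def t0(dw, delta, kappa):
    """Bare-cavity response, Eq. (17), with Q = Q0."""
    return 1.0 / (1.0 + 1j * (dw + delta) / (kappa / 2.0))

def t_drop(dw, delta, kappa, gamma_cav, gamma_leak, gamma_dp, S):
    """Drop-port transmission coefficient, Eq. (29), with Q = Q0."""
    f = gamma_cav / (gamma_leak + 2.0 * gamma_dp)
    qd_term = (1.0 + 2j * dw / (gamma_leak + 2.0 * gamma_dp)) \
              * (1.0 + 1j * (dw + delta) / (kappa / 2.0))
    return t0(dw, delta, kappa) * (-1.0 + f / ((1.0 + S) * (f + qd_term)))

# Hypothetical rates in GHz: resonant QD at low power (S = 0)
td = t_drop(0.0, 0.0, 36.6, 0.7, 0.1, 0.05, 0.0)
T_drop = abs(td)**2        # drop-port transmittivity (up to the factor chi)
T_bus = abs(1.0 + td)**2   # Eq. (30): t_bus = 1 + t_drop
print(round(T_drop, 3), round(T_bus, 3))
```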
Figure 8: a) Power-dependent extinction of the QD of Fig. 3a in the main
manuscript, showing a clear decrease in extinction for higher powers (green
data points). Also shown are the theoretical $I_{\mathrm{drop}}$ and
$I_{\mathrm{bus}}$ for the QD-cavity system accounting only for dephasing
(black and pink respectively) and also for spectral diffusion (orange and
blue). For the latter, a critical photon number of 0.94 photons per lifetime
is found (orange dashed line). b) The variation in critical photon number
$n_{c}$ as a function of QD-cavity detuning $\delta$.
Using the knowledge of $S$, we are further able to fit the temperature-tuned data where the QD is moved through the cavity resonance, as displayed in Fig. 9.
Figure 9: Temperature tuning of the QD through the cavity resonance at a) 6 K, b) 8 K, c) 10 K and d) 12 K. The solid curve is the theoretical fit.
For reference, the power-dependent transmission coefficient is further
simplified when the emitter is resonant with the cavity,
$\Delta\omega=\delta=0$:
$t_{\mathrm{drop}}=t_{0}\left[-1+\frac{f}{(1+f)(1+S)}\right].$ (31)
### 6.4 Critical coupling
The proximity of the waveguide to the cavity results in a gap-dependent quality factor $\bar{Q}$, which increases as the gap between the cavity and the waveguide is widened. Following the formalism in reference 25, the on-resonance transmission dip $\Delta T$ can be expressed as:
$\Delta T=1-\left[T_{cc}+(1-T_{cc})\left(\frac{1-\kappa_{g}}{1+\kappa_{g}}\right)^{2}\right]$ (32)
$\frac{1}{Q_{exp}}=\frac{1}{Q_{int}}\left(1+\kappa_{g}\right)$ (33)
where $\kappa_{g}=\kappa_{g0}e^{-\xi g}$ is the coupling rate between the cavity and the access waveguides, $\xi$ is the characteristic decay constant of the coupling and $g$ is the gap size. In these expressions, $T_{cc}$ is the transmission at
critical coupling, while $Q_{int}$ is the intrinsic quality factor of the
resonator in the absence of the access waveguides.
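A sketch of Eqs. 32-33; the parameter values below ($\kappa_{g0}$, $\xi$, $T_{cc}$, $Q_{int}$) are illustrative, not fitted values:

```python
import math

def delta_T(gap, kappa_g0, xi, T_cc):
    """On-resonance transmission dip, Eq. (32), for a cavity-waveguide gap."""
    kg = kappa_g0 * math.exp(-xi * gap)
    return 1.0 - (T_cc + (1.0 - T_cc) * ((1.0 - kg) / (1.0 + kg))**2)

def Q_loaded(gap, Q_int, kappa_g0, xi):
    """Loaded quality factor from Eq. (33): 1/Q_exp = (1 + kappa_g)/Q_int."""
    kg = kappa_g0 * math.exp(-xi * gap)
    return Q_int / (1.0 + kg)

# Hypothetical parameters: gap in nm, xi in 1/nm; the dip is deepest at
# critical coupling (kappa_g = 1), while Q_exp recovers Q_int for wide gaps
for gap in (100, 200, 300):
    print(gap, round(delta_T(gap, 2.0, 0.01, 0.05), 3),
          round(Q_loaded(gap, 20000, 2.0, 0.01)))
```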
### 6.5 Background subtraction in data
The cavity resonance as shown in Fig. 3 in the main text is spectrally
situated in the vicinity of another cavity mode, increasing the count rate on
one side of the cavity mode, as depicted in Fig. 10a. To better fit our data,
we include the second cavity mode in the fitting analysis, which is depicted
in Fig. 10b where all power spectra are fitted simultaneously along with the
second resonance. In order to model the power saturation curve given by Eq. 29
at $S=0$, we subtract the additional counts from the second cavity mode using
the double-cavity fit, which results in fits as shown in Fig. 10c-g. This data is subsequently fitted with Eq. 29 convolved with a Gaussian to include spectral diffusion, with which we also find the parameters $\alpha$, $\delta$ and $\kappa$. Using Eq. 25 (assuming $Q=Q_{0}$) and Eq. 28 we are able to convert
input power to photons per lifetime and obtain Fig. 10h. Each data point in
Fig. 10h is equal to the minimum within the QD dip of the corresponding data.
The errors are primarily due to the dark count noise on our single photon
detectors.
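The Gaussian convolution used in this fit can be sketched as a direct numerical convolution; the dip shape and widths below are hypothetical:

```python
import math

def gaussian(x, sigma):
    """Normalised Gaussian modelling the spectral-diffusion distribution."""
    return math.exp(-x**2 / (2.0 * sigma**2)) / (sigma * math.sqrt(2.0 * math.pi))

def convolve_with_diffusion(spectrum, freqs, sigma_sd):
    """Numerically convolve a spectrum sampled on a uniform frequency grid
    with a Gaussian of width sigma_sd."""
    df = freqs[1] - freqs[0]
    return [sum(s * gaussian(f - fp, sigma_sd) * df
                for fp, s in zip(freqs, spectrum))
            for f in freqs]

# Hypothetical example: a narrow Lorentzian dip broadened by diffusion,
# which makes the dip shallower and wider
freqs = [0.05 * i - 12.5 for i in range(501)]
dip = [1.0 - 0.8 / (1.0 + (f / 0.25)**2) for f in freqs]
broadened = convolve_with_diffusion(dip, freqs, 1.0)
```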
Figure 10: a) The spectra showing both the cavity resonance containing a QD
(red box) and the neighbouring resonance (purple box). b) We fit the raw data
by varying the power, including the existence of the second resonance. c)-g) The fitted data sets at different powers, where the second resonance has been subtracted and the data normalised. All 5 spectra are fitted together. h) The minimum data point for each power is plotted against the saturation curve obtained from the power fit.
### 6.6 Temperature dependent spectral diffusion
In our experiment, the QD is tuned across the cavity resonance by controlling
the sample temperature and we denote the cavity-emitter detuning $\delta$.
This is taken into account in the theoretical calculations displayed in Figure
4 in the main text, where the spectral diffusion $\sigma_{\mathrm{sd}}$ varies
with temperature. The inset in Fig. 11 shows that both $\delta$ and $\sigma_{\mathrm{sd}}$ scale linearly with temperature. This allows us to
fit a linear relation between $\delta$ and $\sigma_{\mathrm{sd}}$, which we
use to extract the amount of temperature-dependent spectral diffusion
experienced by the QD.
This linear relation of the spectral diffusion $\sigma_{sd}$ is taken into account in Fig. 4 in the main text, where the theoretical extinction of the QD is expected to be at a maximum when the QD is on resonance with the cavity, as depicted in the inset of Fig. 12. Due to the $\sigma_{sd}$ contribution, which decreases for lower temperatures, the extinction $I_{\mathrm{drop}}$ also scales linearly, as can be seen in Fig. 12.
Figure 11: The amount of spectral diffusion depends on the cavity-QD detuning $\delta$ and follows a linear trend in this regime. Inset: This relation comes about through temperature tuning, on which both the spectral diffusion $\sigma_{sd}$ and the cavity-QD detuning $\delta$ depend.
Figure 12: QD extinction $I_{\mathrm{drop}}$ with (orange dashed line) and without (orange solid line) spectral diffusion $\sigma_{sd}$. Since $\sigma_{sd}$ varies with temperature and is the main source of decoherence in our experiment, the maximum $I_{\mathrm{drop}}$ is not at cavity-emitter detuning $\delta=0$. Inset: Normalised QD extinction without spectral diffusion at saturation $S=1.5$ is expected to be maximized on resonance ($\delta=0$). The QD extinction is normalised to the bare cavity.
### 6.7 Mode profile and volume calculations
A geometry representing the disc structures discussed in this paper was defined and meshed in COMSOL Multiphysics 5.1. Using the Electromagnetic Waves, Frequency Domain interface with rotational symmetry, we searched for eigenfrequencies of the disc in the 910-940 nm range and were hence able to simulate the first- and higher-order modes of the structure.
All variables are calculated using COMSOL with the Electromagnetic Waves, Frequency Domain physics package in order to obtain the final effective mode volumes of the various modes discussed in the main manuscript, using n = 3.46 for GaAs. In order to obtain a sufficiently small convergence error in the simulations, the geometry was meshed with a maximum element size of 36 nm (corresponding to $\frac{1}{5}$ of the height of the disc), while the pedestal and the surrounding air were meshed with a maximum element size of 230 nm (corresponding to $\frac{1}{4}$ of the central wavelength within the simulation). Finally, a perfectly matched layer (PML) enclosing the geometry and air is added as the outer boundary of the setup. This procedure results in a convergence error $<10^{-7}$.
We follow the approach presented in 8 but adapt it for a linear dipole. Using the knowledge of how the counter-propagating modes are related, and evaluating the mode volume with the dipole placed at the field maximum, we obtain the minimum mode volume for a lossy structure in cylindrical coordinates:
$V=\frac{\pi\int\int drdz\cdot
r\cdot[\epsilon(-E_{r}^{2}-E_{z}^{2}+E_{\phi}^{2})-\mu(H_{r}^{2}+H_{z}^{2}-H_{\phi}^{2})]}{2\epsilon_{0}n^{2}[(\mathrm{max}(E_{r})]^{2}},$
(34)
where $E$ is the electric field while $H$ is the magnetic field of the mode.
The factor $n$ is the refractive index, $\epsilon$ is the permittivity of the
material while $\epsilon_{0}$ is the vacuum permittivity, and $\mu$ is the
permeability of the material.
The normalized mode profile is presented in the main text. In Fig. 13 we show the contributions from the various components of the mode and note that the radial component is by far the dominant one, as would be expected.
Figure 13: Spatial components of the first order electric field normalized to
the absolute maximal field strength.
### 6.8 Fabrication
Disc resonators used throughout this study were fabricated on an undoped GaAs (160 nm)/AlGaAs (1150 nm) wafer embedded with InAs quantum dots. The devices were patterned using Electron Beam Lithography (EBL) on a layer of resist (ZEP520). Our smallest design features are 40 nm, which is larger than the EBL alignment error of $\approx$ 30 nm. The shallow-etched gratings were etched using Reactive Ion Etching (RIE), followed by an
Inductively Coupled Plasma (ICP) etching to a depth of 800 nm. In the final
step, the remaining structures were under-etched to fully suspend the
waveguides and the periphery of the discs ($\approx 3\mu$m) using hydrofluoric
acid (HF, 10%), which is depicted in Fig 14. The under-etching step sets a
lower limit on the size of the disc. Following the under-etching, the sample
was dried using a Critical Point Dryer (CPD).
Figure 14: Three main steps in the fabrication of microdisc resonators. a) Shallow etching of a large square to a target depth of around 100 nm, determined by reflectivity measurements performed during the RIE process. The small grating patterns obtain a final etch depth of around 50-60 nm. b) Another layer is etched using a deep ICP etch to a target depth of around 800 nm, well within the AlGaAs layer. c) A solution of 10% HF acid allows us to under-etch through the deep trenches. An under-etch of 50 seconds produces approximately a $3\,\mu$m long undercut.
## References
* Akahane et al. 2003 Akahane, Y.; Asano, T.; Song, B.-S.; Noda, S. High-Q photonic nanocavity in a two-dimensional photonic crystal. _Nature_ 2003, _425_ , 944–947
* Armani et al. 2003 Armani, D. K.; Kippenberg, T. J.; Spillane, S. M.; Vahala, K. J. Ultra-high-Q toroid microcavity on a chip. _Nature_ 2003, _421_ , 925–928
* Aoki et al. 2006 Aoki, T.; Dayan, B.; Wilcut, E.; Bowen, W. P.; Parkins, A. S.; Kippenberg, T. J.; Vahala, K. J.; Kimble, H. J. Observation of strong coupling between one atom and a monolithic microresonator. _Nature_ 2006, _443_ , 671–674
* Hennessy et al. 2007 Hennessy, K.; Badolato, A.; Winger, M.; Gerace, D.; Atatüre, M.; Gulde, S.; Fält, S.; Hu, E. L.; Imamoğlu, A. Quantum nature of a strongly coupled single quantum dot–cavity system. _Nature_ 2007, _445_ , 896–899
* Loo et al. 2010 Loo, V.; Lanco, L.; Lemaître, A.; Sagnes, I.; Krebs, O.; Voisin, P.; Senellart, P. Quantum dot-cavity strong-coupling regime measured through coherent reflection spectroscopy in a very high-Q micropillar. _Appl. Phys. Lett._ 2010, _97_ , 241110
* Wang et al. 2019 Wang, D.; Kelkar, H.; Martin-Cano, D.; Rattenbacher, D.; Shkarin, A.; Utikal, T.; Götzinger, S.; Sandoghdar, V. Turning a molecule into a coherent two-level quantum system. _Nature Physics_ 2019, _15_ , 483–489
* Lodahl et al. 2017 Lodahl, P.; Mahmoodian, S.; Stobbe, S.; Rauschenbeutel, A.; Schneeweiss, P.; Volz, J.; Pichler, H.; Zoller, P. Chiral quantum optics. _Nature_ 2017, _541_ , 473–480
* Martin-Cano et al. 2019 Martin-Cano, D.; Haakh, H. R.; Rotenberg, N. Chiral Emission into Nanophotonic Resonators. _ACS Photonics_ 2019, _6_ , 961–966
* Hilico et al. 2016 Hilico, M. S. A.; Will, E.; Volz, J.; Rauschenbeutel, A. Quantum optical circulator controlled by a single chirally coupled atom. _Science_ 2016, _354_ , 1577–1580
# Coexistence of isospin $I=0$ and $I=1$ pairings in asymmetric nuclear matter
Y.-J. Yan, X.-L. Shang ([email protected]), J.-M. Dong, and W. Zuo
Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China
School of Nuclear Science and Technology, University of Chinese Academy of Sciences, Beijing 100049, China
CAS Key Laboratory of High Precision Nuclear Spectroscopy, Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China
###### Abstract
The coexistence of neutron-neutron (n-n), proton-proton (p-p), and neutron-proton (n-p) pairings is investigated by adopting an effective density-dependent contact pairing potential. In isospin-asymmetric nuclear matter, these three types of pairings can coexist only if the n-p pairing is stronger than the n-n and p-p pairings. In addition, the existence of n-n and p-p pairs might enhance the n-p pairing in asymmetric nuclear matter when the n-p pairing strength is significantly stronger than the n-n and p-p ones. Conversely, the n-p pairing is reduced by the n-n and p-p pairs when the n-p pairing interaction approaches the n-n and p-p ones.
###### pacs:
21.60.De, 21.45.Ff, 21.65.Cd, 21.30.Fe
## I INTRODUCTION
The importance of pairing correlations in nuclear systems was realized very early bohr . In finite nuclei, neutron-neutron (n-n) and proton-proton (p-p) pairing effects manifest themselves in several nuclear properties such as deformation, moments of inertia, alignments, and mass systematics nnimp1 ; nnimp2 ; nnimp3 . In extended systems, nuclear pairing is expected to occur in the dense matter inside neutron stars ns1 ; ns2 . This pairing is crucial for understanding various phenomena in neutron-star physics, from the cooling of newborn stars ns3 ; ns4 to the afterburst relaxation in X-ray transients ns5 , as well as glitches ns6 . However, far less attention has been paid to the isospin-singlet pairing, i.e., the neutron-proton (n-p) pairing. Recently, it was suggested that the isospin-singlet pairing may be crucial for understanding some nuclear-structure issues, such as the Gamow-Teller transition npimp1 ; npimp2 . In addition, when the spin and isospin degrees of freedom are considered, nuclear Cooper pairs exhibit very interesting inner structures huang .
It is well known that pair correlations depend crucially on the pairing interaction near the Fermi surface. Because neutrons and protons share the same Fermi energy in symmetric nuclear matter, n-n (p-p) pairs compete intensely with n-p pairs, and in general the most energetically favored pairing excludes the others. Most investigations of nuclear pairing focus on a single pairing structure, i.e., either the n-n (p-p) or the n-p pair only shen ; nn1 ; nn2 ; sh2 ; sdbaldo ; np1 ; np2 ; np3 ; np4 ; sh3 . Nevertheless, coexistence may emerge in special cases such as isospin-asymmetric nuclear matter. In a neutron-rich system, although the isospin-singlet n-p pairing may be favored, the excess neutrons can also form isospin-triplet n-n pairs coexisting with it, and the two can influence each other. Furthermore, nuclei far from the beta-stability line, i.e., exotic nuclei, can be produced in heavy-ion collisions (HIC), which have been proposed as a laboratory for the dynamic evolution of the superfluid state of nuclear matter lab1 . New aspects of pairing could appear in these exotic nuclei with regard to isospin asymmetry, one of which might be the interplay between n-n and n-p pairings owing to the significant overlap of proton and neutron orbits huang ; lab2 .
In Ref. huang , the coexistence of isospin $I=1$ and $I=0$ pairings was considered to study the inner phase structure and phase transition at low density, where the BCS-BEC crossover occurs. The results indicate that including the $I=1$ channel significantly alters the phase structure and the phase-transition properties. In nuclear matter, another concern is the interplay between the $I=1$ and $I=0$ pairings. Motivated by this, in order to investigate the coexistence of the n-n, p-p, and n-p pairings in asymmetric nuclear matter with an effective contact pairing interaction, we employ the extended Nambu-Gorkov propagator, which includes the isospin-triplet n-n and p-p pairings and the isospin-singlet n-p pairing.
The paper is organized as follows: in Sec. II, we briefly derive the gap equations and thermodynamics and introduce the adopted effective pairing interaction. The numerical results and discussion are presented in Sec. III, where the coexistence of the three types of pairings is compared with the single-pairing case at a given density. Finally, a summary and conclusions are given in Sec. IV.
## II Formalism
The Nambu-Gorkov propagator at finite temperatures, including the n-n, n-p,
and p-p pairings huang , is expressed as:
$$G=\begin{pmatrix}i\omega_{\upsilon}-\varepsilon_{n}&0&\Delta_{np}&\Delta_{nn}\\ 0&i\omega_{\upsilon}-\varepsilon_{p}&\Delta_{pp}&-\Delta_{np}\\ \Delta_{np}&\Delta_{pp}&i\omega_{\upsilon}+\varepsilon_{p}&0\\ \Delta_{nn}&-\Delta_{np}&0&i\omega_{\upsilon}+\varepsilon_{n}\end{pmatrix}^{-1},\qquad(1)$$
where $\omega_{\upsilon}=(2\upsilon+1)\pi k_{B}T$ with $\upsilon\in\mathbb{Z}$
represents the Matsubara frequencies.
$\varepsilon_{n/p}=\textbf{p}^{2}/(2m)-\mu_{n/p}$ is the single particle
(s.p.) energy with chemical potential $\mu_{n/p}$. In addition, $\Delta_{nn}$,
$\Delta_{pp}$, and $\Delta_{np}$ are the isospin-triplet n-n, isospin-triplet
p-p, and isospin-singlet n-p pairing gaps, respectively.
### II.1 Gap equations
The neutron-proton anomalous propagator, which corresponds to $G_{13}$, reads
$$F_{np}^{\dagger}(\omega_{\upsilon},\textbf{p})=\frac{-\Delta_{np}\big[(i\omega_{\upsilon})^{2}+i\omega_{\upsilon}(\varepsilon_{n}-\varepsilon_{p})-\varepsilon_{n}\varepsilon_{p}-\Delta_{np}^{2}-\Delta_{nn}\Delta_{pp}\big]}{\big[(i\omega_{\upsilon})^{2}-E_{+}^{2}\big]\big[(i\omega_{\upsilon})^{2}-E_{-}^{2}\big]}=\frac{-\Delta_{np}\big\{(i\omega_{\upsilon})^{2}-\varepsilon_{+}^{2}-2i\omega_{\upsilon}\delta\mu+2\delta\mu^{2}+\frac{(\Delta_{nn}-\Delta_{pp})^{2}}{2}\big\}}{\big[(i\omega_{\upsilon})^{2}-E_{+}^{2}\big]\big[(i\omega_{\upsilon})^{2}-E_{-}^{2}\big]},\qquad(2)$$
where
$E_{\pm}=\sqrt{\varepsilon_{+}^{2}\pm\sqrt{\varepsilon_{-}^{4}+\varepsilon_{\Delta}^{4}}}$
is the quasi-particle energy in the condensate with the definition
$\varepsilon_{\Delta}^{4}=\Delta_{np}^{2}[(\varepsilon_{n}-\varepsilon_{p})^{2}+(\Delta_{nn}-\Delta_{pp})^{2}]$
and
$2\varepsilon_{\pm}^{2}=\varepsilon_{n}^{2}+\Delta_{nn}^{2}+\Delta_{np}^{2}\pm(\varepsilon_{p}^{2}+\Delta_{pp}^{2}+\Delta_{np}^{2})$.
$\delta\mu=(\varepsilon_{p}-\varepsilon_{n})/2=(\mu_{n}-\mu_{p})/2$ represents the Fermi-surface mismatch. The summation over the Matsubara frequencies provides the density matrix of particles in the condensate, i.e., the n-p pairing probabilities,
$$\nu_{np}(\textbf{p})=-\frac{\Delta_{np}}{2}\Big\{\Big[\frac{1-2f(E_{+})}{2E_{+}}+\frac{1-2f(E_{-})}{2E_{-}}\Big]+\frac{2\delta\mu^{2}+\frac{(\Delta_{nn}-\Delta_{pp})^{2}}{2}}{\sqrt{\varepsilon_{-}^{4}+\varepsilon_{\Delta}^{4}}}\Big[\frac{1-2f(E_{+})}{2E_{+}}-\frac{1-2f(E_{-})}{2E_{-}}\Big]\Big\}.\qquad(3)$$
Here $f(E)=[1+\exp(\frac{E}{k_{B}T})]^{-1}$ is the well-known Fermi-Dirac
distribution function under a temperature $T$. Accordingly, the n-p gap
equation is expressed as
$$\Delta_{np}=\int\frac{d\textbf{p}}{(2\pi)^{3}}V_{np}\frac{\Delta_{np}}{2}\Big\{\Big[\frac{1-2f(E_{+})}{2E_{+}}+\frac{1-2f(E_{-})}{2E_{-}}\Big]+\frac{2\delta\mu^{2}+\frac{(\Delta_{nn}-\Delta_{pp})^{2}}{2}}{\sqrt{\varepsilon_{-}^{4}+\varepsilon_{\Delta}^{4}}}\Big[\frac{1-2f(E_{+})}{2E_{+}}-\frac{1-2f(E_{-})}{2E_{-}}\Big]\Big\}.\qquad(4)$$
In the absence of the n-n and p-p pairings, the quasi-particle energy
$E_{\pm}$ becomes
$E_{\pm}=\sqrt{[(\varepsilon_{n}+\varepsilon_{p})/2]^{2}+\Delta_{np}^{2}}\pm\delta\mu=E_{\Delta}\pm\delta\mu$,
and the gap equation is reduced to a more familiar form for the n-p pairing in
asymmetric nuclear matter:
$$\Delta_{np}=\int\frac{d\textbf{p}}{(2\pi)^{3}}V_{np}\frac{\Delta_{np}\big[1-f(E_{+})-f(E_{-})\big]}{2E_{\Delta}}.\qquad(5)$$
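As a numerical sanity check, the quasi-particle energies $E_{\pm}$ defined above can be evaluated directly from the definitions of $\varepsilon_{\pm}^{2}$ and $\varepsilon_{\Delta}^{4}$; in the limit $\Delta_{nn}=\Delta_{pp}=0$ they must reduce to $E_{\Delta}\pm\delta\mu$ as stated. A minimal Python sketch (the function and variable names are our own, not from any published code):

```python
import math

def quasiparticle_energies(eps_n, eps_p, d_nn, d_pp, d_np):
    """Quasi-particle energies E_± for coexisting n-n, p-p and n-p pairing.

    Implements the definitions in the text:
      2*eps_pm^2 = eps_n^2 + d_nn^2 + d_np^2 ± (eps_p^2 + d_pp^2 + d_np^2)
      eps_D^4    = d_np^2 * ((eps_n - eps_p)^2 + (d_nn - d_pp)^2)
      E_±        = sqrt(eps_+^2 ± sqrt(eps_-^4 + eps_D^4))
    """
    eps_plus2 = 0.5 * (eps_n**2 + d_nn**2 + d_np**2 + eps_p**2 + d_pp**2 + d_np**2)
    eps_minus2 = 0.5 * (eps_n**2 + d_nn**2 - eps_p**2 - d_pp**2)
    eps_delta4 = d_np**2 * ((eps_n - eps_p)**2 + (d_nn - d_pp)**2)
    root = math.sqrt(eps_minus2**2 + eps_delta4)
    return math.sqrt(eps_plus2 + root), math.sqrt(eps_plus2 - root)
```

For example, with $\varepsilon_{n}=-0.5$, $\varepsilon_{p}=0.5$, $\Delta_{np}=1$ and vanishing $I=1$ gaps, one recovers $E_{\Delta}\pm\delta\mu=1\pm 0.5$.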
Similarly, the n-n and p-p pairing gaps are respectively expressed as
$$\Delta_{nn}=\int\frac{d\textbf{p}}{(2\pi)^{3}}V_{nn}\frac{\Delta_{nn}}{2}\Big\{\Big[\frac{1-2f(E_{+})}{2E_{+}}+\frac{1-2f(E_{-})}{2E_{-}}\Big]+\frac{\varepsilon_{-}^{2}+\Delta_{np}^{2}\big(1-\frac{\Delta_{pp}}{\Delta_{nn}}\big)}{\sqrt{\varepsilon_{-}^{4}+\varepsilon_{\Delta}^{4}}}\Big[\frac{1-2f(E_{+})}{2E_{+}}-\frac{1-2f(E_{-})}{2E_{-}}\Big]\Big\},\qquad(6)$$
and
$$\Delta_{pp}=\int\frac{d\textbf{p}}{(2\pi)^{3}}V_{pp}\frac{\Delta_{pp}}{2}\Big\{\Big[\frac{1-2f(E_{+})}{2E_{+}}+\frac{1-2f(E_{-})}{2E_{-}}\Big]-\frac{\varepsilon_{-}^{2}+\Delta_{np}^{2}\big(\frac{\Delta_{nn}}{\Delta_{pp}}-1\big)}{\sqrt{\varepsilon_{-}^{4}+\varepsilon_{\Delta}^{4}}}\Big[\frac{1-2f(E_{+})}{2E_{+}}-\frac{1-2f(E_{-})}{2E_{-}}\Big]\Big\}.\qquad(7)$$
The occupation numbers, corresponding to the matrix elements $G_{11}$ and
$G_{22}$, can be calculated by
$$n_{n}=\frac{1}{2}-\frac{\varepsilon_{n}}{2}\Big[\frac{1-2f(E_{+})}{2E_{+}}+\frac{1-2f(E_{-})}{2E_{-}}\Big]-\frac{\varepsilon_{-}^{2}\varepsilon_{n}-2\delta\mu\Delta_{np}^{2}}{2\sqrt{\varepsilon_{-}^{4}+\varepsilon_{\Delta}^{4}}}\Big[\frac{1-2f(E_{+})}{2E_{+}}-\frac{1-2f(E_{-})}{2E_{-}}\Big]\qquad(8)$$
and
$$n_{p}=\frac{1}{2}-\frac{\varepsilon_{p}}{2}\Big[\frac{1-2f(E_{+})}{2E_{+}}+\frac{1-2f(E_{-})}{2E_{-}}\Big]+\frac{\varepsilon_{-}^{2}\varepsilon_{p}-2\delta\mu\Delta_{np}^{2}}{2\sqrt{\varepsilon_{-}^{4}+\varepsilon_{\Delta}^{4}}}\Big[\frac{1-2f(E_{+})}{2E_{+}}-\frac{1-2f(E_{-})}{2E_{-}}\Big].\qquad(9)$$
The neutron and proton densities are respectively defined as
$$\rho_{n}=2\int\frac{d\textbf{p}}{(2\pi)^{3}}n_{n},\qquad\rho_{p}=2\int\frac{d\textbf{p}}{(2\pi)^{3}}n_{p}.\qquad(10)$$
Notably, the n-n, p-p, and n-p pairing gaps couple to each other. For asymmetric nuclear matter at fixed neutron and proton densities, the gap equations (4), (6), and (7) must be solved self-consistently with the density equations (10) at given densities and temperatures.
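Such coupled gap equations are typically solved by fixed-point iteration. As an illustrative sketch only (not the actual solver of this work), the following iterates a single zero-temperature gap equation with a constant density of states $N_{0}$ and energy cutoff $\omega_{c}$, a simplified stand-in for the self-consistency loop over Eqs. (4), (6), (7) and (10); for this toy model the closed-form solution $\Delta=\omega_{c}/\sinh[1/(gN_{0})]$ is known and can be used to verify the loop:

```python
import math

def solve_gap(g, n0, omega_c, tol=1e-10, max_iter=10000):
    """Fixed-point solution of the T=0 gap equation
        1 = g * n0 * \int_0^{omega_c} d(eps) / sqrt(eps^2 + Delta^2),
    iterated as Delta_{k+1} = g * n0 * \int d(eps) Delta_k / sqrt(eps^2 + Delta_k^2).
    A midpoint rule discretizes the energy integral."""
    delta = 1.0                       # arbitrary nonzero starting guess
    n_mesh = 2000
    de = omega_c / n_mesh
    eps = [(i + 0.5) * de for i in range(n_mesh)]
    for _ in range(max_iter):
        new = g * n0 * sum(delta / math.sqrt(e * e + delta * delta) for e in eps) * de
        if abs(new - delta) < tol:
            return new
        delta = new
    return delta
```

The realistic problem replaces the single equation by the coupled set (4), (6), (7), updating $(\Delta_{nn},\Delta_{pp},\Delta_{np},\mu_{n},\mu_{p})$ together until the densities (10) are also reproduced.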
### II.2 Pairing interaction
In principle, the nucleon-nucleon pairing interaction in nuclear matter
originates from the attractive component of the bare two-body potential and
the three-body force, and this pairing interaction is modified by the nuclear
medium, such as the polarization effect sup2 ; sup3 ; sup4 ; sup5 ; scr1 ;
scr2 ; ulbd . In this research, to obtain qualitative conclusions from the
coexistence of n-n, p-p, and n-p pairs, we adopt the density-dependent contact interaction developed by Garrido et al. ddci to model the pairing potential. For uniform nuclear matter, the potential takes the form
$$V_{I}(\textbf{r},\textbf{r}^{\prime})=g_{I}\delta(\textbf{r}-\textbf{r}^{\prime}),\qquad(11)$$
with the effective coupling constant
$$g_{I}=v_{I}\big[1-\eta_{I}(\rho_{I}/\rho_{0})^{\gamma_{I}}\big].\qquad(12)$$
Here, $v_{I}$, $\eta_{I}$, and $\gamma_{I}$ are adjustable parameters and $I=0,1$ denotes the total isospin of the pair. For the n-n (p-p) pairing, $\rho_{I}=\rho_{n}$ ($\rho_{I}=\rho_{p}$), and for the n-p pairing, $\rho_{I}=\rho_{n}+\rho_{p}$. $\rho_{0}=0.17\ \text{fm}^{-3}$ represents the saturation density. Taking suitable values of the parameters, the pairing gap $\Delta(k_{F})$ can be reproduced as a function of the Fermi momentum $k_{F}=(3\pi^{2}\rho_{I})^{1/3}$ in the channel $L=0$, $I=1$, $S=0$ (n-n and p-p) and $k_{F}=(3\pi^{2}\rho_{I}/2)^{1/3}$ in the channel $L=0$, $I=0$, $S=1$ (n-p). We would like to emphasize that there is also a kind of n-p pairing in the channel $L=0$, $I=1$, $S=0$ for symmetric nuclear matter. In this channel, the n-p pairing force is approximately the same as the n-n or p-p pairing force. As will be discussed in Sec. III, even a minor asymmetry will destroy the n-p pairing in this channel. Therefore, the $I=1$ pairings only represent neutron-neutron and proton-proton pairings hereafter.
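For concreteness, the effective coupling $g_{I}=v_{I}[1-\eta_{I}(\rho_{I}/\rho_{0})^{\gamma_{I}}]$ and the channel-dependent Fermi momenta can be coded in a few lines. This is our own illustration; the parameter values passed in below are placeholders, not the calibrated ones of Fig. 1:

```python
import math

def coupling_constant(rho_i, v_i, eta_i, gamma_i, rho0=0.17):
    """Effective coupling g_I = v_I * [1 - eta_I * (rho_I / rho0)^gamma_I].
    Densities in fm^-3; v_i, eta_i, gamma_i are adjustable parameters."""
    return v_i * (1.0 - eta_i * (rho_i / rho0) ** gamma_i)

def fermi_momentum(rho_i, isospin):
    """k_F = (3 pi^2 rho_I)^(1/3) in the I=1 channel (rho_I is the
    like-nucleon density) and (3 pi^2 rho_I / 2)^(1/3) in the I=0
    channel (rho_I is the total density)."""
    factor = 1.0 if isospin == 1 else 0.5
    return (3.0 * math.pi ** 2 * rho_i * factor) ** (1.0 / 3.0)
```

Note that the $I=0$ channel at total density $\rho$ and the $I=1$ channel at like-nucleon density $\rho/2$ give the same $k_{F}$, as the two definitions require.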
Figure 1: The density-dependent contact pairing interaction with parameters calibrated to the calculated pairing gaps. The dots represent the pairing gaps from Refs. shen ; sdbaldo , whereas the lines correspond to the calculation with the effective pairing interaction. The left (right) panel relates to the isospin-triplet (singlet) channel.
In addition to the polarization effect, the self-energy effect of the medium quenches the pairing gaps shen ; sh2 . Because the self-energy effect on nuclear pairing remains an open question in asymmetric nuclear matter, we adopt the pairing gaps calculated within Hartree-Fock approaches shen ; sdbaldo to calibrate the parameters presented in Fig. 1. It should be noted that the self-energy sh2 and polarization ulbd effects should be included to obtain a more reliable pairing interaction. As is well known, an energy cutoff is required for a contact interaction to avoid the ultraviolet divergence; here, we fix the cutoff at approximately $80$ MeV in both cases. The left (right) panel of Fig. 1 corresponds to the $I=1$ ($I=0$) pairings.
### II.3 Thermodynamics
Now, we are in a position to determine the key thermodynamic quantities.
Because the occupation of the quasi-particle states is given by the Fermi-
Dirac distribution function, the entropy of the system is obtained from
$$S=-2k_{B}\sum_{\textbf{p}}\sum_{i}\big[f(E_{i})\ln f(E_{i})+\overline{f}(E_{i})\ln\overline{f}(E_{i})\big],\qquad(13)$$
where $\overline{f}(E_{i})=1-f(E_{i})$ and $i=\pm$. The internal energy of the
superfluid state is expressed as
$$U=2\sum_{\textbf{p}}\big[\varepsilon_{n}n_{n}+\varepsilon_{p}n_{p}\big]+\sum_{\textbf{p}}\big[g_{nn}\nu_{nn}^{2}+g_{pp}\nu_{pp}^{2}+2g_{np}\nu_{np}^{2}\big].\qquad(14)$$
The factor $2$ corresponds to the spin summation. The first term of Eq. (14) contains the kinetic energy of the quasi-particles as a function of the pairing gaps and chemical potentials. The BCS mean-field interaction among the particles in the condensate is embodied in the second term of Eq. (14). It
should be noted that for asymmetric nuclear matter, the n-n and p-p pairing
interactions can be different, i.e., $g_{nn}\neq g_{pp}$, owing to
$\rho_{n}\neq\rho_{p}$. Accordingly, the thermodynamic potential can be given
as
$$\Omega=U-TS.\qquad(15)$$
Once the contact pairing interaction is adopted, the pairing gap is momentum
independent. Therefore, the thermodynamic potential can be obtained in a
simple form:
$$\Omega=\frac{2\Delta_{np}^{2}}{g_{np}}+\frac{\Delta_{nn}^{2}}{g_{nn}}+\frac{\Delta_{pp}^{2}}{g_{pp}}+\int\frac{d\textbf{p}}{(2\pi)^{3}}\Big\{\varepsilon_{n}+\varepsilon_{p}-\sum_{i=\pm}\big[E_{i}+2k_{B}T\ln\big(1+e^{-E_{i}/(k_{B}T)}\big)\big]\Big\}.\qquad(16)$$
Here, we have used the identity
$f(\omega)\ln f(\omega)+\overline{f}(\omega)\ln\overline{f}(\omega)=-\frac{\omega}{k_{B}T}f(\omega)-\ln\big(1+e^{-\omega/(k_{B}T)}\big)$.
The gap equations (4), (6), and (7) and the densities of Eq. (10) can be
equivalently expressed as
$$\frac{\partial\Omega}{\partial\Delta_{np}}=0,\quad\frac{\partial\Omega}{\partial\Delta_{nn}}=0,\quad\frac{\partial\Omega}{\partial\Delta_{pp}}=0,\qquad\rho_{n}=-\frac{\partial\Omega}{\partial\mu_{n}},\quad\rho_{p}=-\frac{\partial\Omega}{\partial\mu_{p}}.\qquad(17)$$
It should be noted that the solution of these equations corresponds to the
global minimum of the free energy $F=\Omega+\mu_{n}\rho_{n}+\mu_{p}\rho_{p}$,
which is the essential quantity that describes the thermodynamics of
asymmetric nuclear matter.
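A quick numerical check of the Fermi-Dirac identity $f\ln f+\overline{f}\ln\overline{f}=-\frac{\omega}{k_{B}T}f(\omega)-\ln(1+e^{-\omega/(k_{B}T)})$ used in deriving the explicit form of $\Omega$ (our own verification sketch):

```python
import math

def entropy_term(omega, kT):
    # direct evaluation of f ln f + (1 - f) ln(1 - f) for Fermi-Dirac f(omega)
    f = 1.0 / (1.0 + math.exp(omega / kT))
    return f * math.log(f) + (1.0 - f) * math.log(1.0 - f)

def entropy_term_closed(omega, kT):
    # closed form: -(omega / kT) * f(omega) - ln(1 + exp(-omega / kT))
    f = 1.0 / (1.0 + math.exp(omega / kT))
    return -(omega / kT) * f - math.log(1.0 + math.exp(-omega / kT))
```

Both expressions agree to machine precision for positive and negative $\omega$, which is what allows the entropy sum in $\Omega=U-TS$ to be absorbed into the logarithmic term of the explicit thermodynamic potential.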
## III RESULTS AND DISCUSSION
The numerical calculations in this study focus on the coexistence of three
different types of pairs in isospin asymmetric nuclear matter with total
density $\rho=\rho_{n}+\rho_{p}$ and isospin asymmetry
$\beta=(\rho_{n}-\rho_{p})/\rho$. We adopt the effective contact pairing interaction at zero temperature. Fig. 2 illustrates the pairing gaps as a function of the asymmetry $\beta$ at the total density $\rho=0.068\ \text{fm}^{-3}$, at which both the $I=1$ and $I=0$ pairing interactions are most attractive.
The thick lines correspond to the results of the coexistence of three types of
pairings, which include $\Delta_{nn}\neq 0,\Delta_{np}\neq 0,\Delta_{pp}\neq
0$. In the symmetric matter, neutrons and protons share the same Fermi
surface, i.e., $k_{Fn}=k_{Fp}=k_{F}$, and the region near the Fermi surface
contributes dominantly to the pairing gaps. Two neutrons and two protons near the Fermi surface can form an n-n pair and a p-p pair, or two n-p pairs. Because the n-p pairing strength is significantly stronger than that of n-n and p-p, the nucleons prefer to form n-p pairs instead of n-n (p-p) pairs. Equivalently, the n-p pairing severely suppresses the n-n and p-p pairings for $\beta=0$. As illustrated in Fig. 2, the n-n (p-p) gap disappears in the symmetric case. In
asymmetric nuclear matter, the dominant region, which contributes
significantly to the n-n (p-p) pairing gap, is located at the neutron (proton)
Fermi momentum $k_{Fn}$ ($k_{Fp}$), whereas the region for n-p pairing is
between $k_{Fp}$ and $k_{Fn}$ (the average Fermi surface related to the
average chemical potential of neutrons and protons). The split between neutron
and proton Fermi surfaces separates the dominant regions for n-n, p-p, and n-p
pairings, which enables the n-n and p-p pairing. This discrepancy between $k_{Fp}$ and $k_{Fn}$ grows with increasing isospin asymmetry; therefore, the n-n and p-p pairing gaps increase with $\beta$.
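The growth of the Fermi-surface split with $\beta$ is easy to quantify with $k_{F,n/p}=(3\pi^{2}\rho_{n/p})^{1/3}$; the small helper below is our own illustration:

```python
import math

def fermi_momenta(rho, beta):
    """Neutron and proton Fermi momenta (fm^-1) for a total density
    rho (fm^-3) and isospin asymmetry beta = (rho_n - rho_p) / rho."""
    rho_n = 0.5 * rho * (1.0 + beta)
    rho_p = 0.5 * rho * (1.0 - beta)
    k_fn = (3.0 * math.pi ** 2 * rho_n) ** (1.0 / 3.0)
    k_fp = (3.0 * math.pi ** 2 * rho_p) ** (1.0 / 3.0)
    return k_fn, k_fp
```

At $\rho=0.068\ \text{fm}^{-3}$ the two momenta coincide for $\beta=0$ and split monotonically as $\beta$ grows, separating the dominant pairing regions as described above.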
Figure 2: (Color online) The n-n, p-p, n-p pairing gaps as a function of the
isospin asymmetry $\beta$, at the total density $\rho=0.068\ \text{fm}^{-3}$.
The thick and thin lines correspond to the coexistence of three types of
pairings and single pairings, respectively. The dashed, short-dashed, and
solid lines are related to the n-n, p-p, and n-p pairings, respectively.
In addition, the results for single pairing, i.e., $\Delta_{nn}\neq 0$ with $\Delta_{pp}=\Delta_{np}=0$; $\Delta_{np}\neq 0$ with $\Delta_{nn}=\Delta_{pp}=0$; or $\Delta_{pp}\neq 0$ with $\Delta_{nn}=\Delta_{np}=0$, are depicted as thin lines
in Fig. 2 for comparison. Owing to the suppression from the mismatched Fermi
surfaces, n-p pairing gaps decrease with $\beta$ and disappear at certain
asymmetries for both the single pairing and the coexistence of three types of
pairings. In the calculation of the coexistence of three types of pairings,
$\Delta_{nn}$ and $\Delta_{pp}$ coincide with the results obtained from the
single-pairing calculation when the n-p pairing vanishes. In fact, if $\Delta_{np}=0$, the coupled equations (17) degenerate into two groups of completely independent equations: the gap equation for $\Delta_{nn}$ with the neutron density and the gap equation for $\Delta_{pp}$ with the proton density.
Compared to single pairing, the critical isospin asymmetry, where
$\Delta_{np}$ vanishes, is enhanced by the existence of n-n and p-p pairs, as
demonstrated in Fig. 2. Unfortunately, this conclusion cannot be considered as
definite, as the effective pairing interaction is simply obtained from the
pairing gaps under the Hartree-Fock approximation. In addition, the effective
n-p pairing interaction can be significantly reduced by the nucleon-nucleon
correlation beyond the Hartree-Fock approaches sh2 . Owing to the complexity
of the nuclear many-body medium effects, the exact effective pairing
interaction remais an open problem. To eliminate the uncertainty of the
effective pairing strength, we adjust the effective neutron-proton pairing
interaction artificially to obtain the qualitative conclusion. The results
obtained are presented in Fig. 3. The solid and dashed lines correspond to the
results obtained from the coexistence of the three types of pairings and the
single pairing, respectively. For the effective interaction obtained from Ref.
sdbaldo , $g_{np}/g_{nn}=1.3837$. If we reduce the n-p pairing strength
$g_{np}$, the enhancement of the n-p pairing from the existence of the n-n
(p-p) pairs is reduced. When $g_{np}/g_{nn}$ is under a certain value, the
existence of n-n (p-p) pairing might suppress the n-p pairing eventually. An
interesting property is that if $g_{np}\simeq g_{nn}$, $\Delta_{np}$ decreases
rapidly with $\beta$. As mentioned in Sec. II (B), the channel $L=0$, $I=1$,
$S=0$ embodies n-n, p-p, and n-p pairings, and the pairing interactions are
approximately the same for the asymmetric case. A negligible asymmetry can
destroy the n-p pairing in the $L=0$, $I=1$, $S=0$ channel. Therefore, in
general, the $I=1$ pairing solely refers to the n-n and p-p pairings.
Figure 3: The n-p pairing gaps as a function of isospin asymmetry at total
density $\rho=0.068\ \text{fm}^{-3}$ for different n-p pairing strengths,
$g_{np}/g_{nn}=1.3837$, $1.2$, $1.12$, $1.01$. The solid and dashed lines
correspond to the coexistence of three types of pairings and single pairing,
respectively.
One straightforward way to understand the enhancement of n-p pairing from the
existing n-n and p-p pairs is to investigate the n-p pairing probabilities
near the average Fermi surface (related to the average chemical potentials of
the neutron and proton). The results are depicted in Fig. 4 for the total density $\rho=0.068\ \text{fm}^{-3}$ and isospin asymmetry $\beta=0.3$, with the n-p pairing strength set to $g_{np}/g_{nn}=1.3837$. For the single n-p pairing, pairing is forbidden in a window around the average Fermi surface owing to the absence of protons there. Once the n-n and p-p pairings are included, the dispersion of the neutron and proton Fermi surfaces provides kinematical phase space near the average Fermi surface for the occurrence of n-p pairing. This is the positive mechanism by which the existence of n-n and p-p pairs enhances the n-p pairing.
Figure 4: The n-p pairing probabilities as a function of $k$ near the average
Fermi surface with the total density $\rho=0.068\ \text{fm}^{-3}$ and isospin
asymmetry $\beta=0.3$. Here, $k=p/\hbar$ is the wave number. The pairing
strength is set to be $g_{np}/g_{nn}=1.3837$. The solid and dashed lines
correspond to the coexistence of three types of pairings and single pairing,
respectively.
Another effect of the existence of n-n and p-p pairs is that an n-n pair and a p-p pair must be broken up to form two n-p pairs. Only when the pairing energy of the n-n and p-p pairs is smaller than that of two n-p pairs can the existence of n-n and p-p pairs enhance the n-p pairing. The pairing energy is directly related to the pairing strength. As presented in Fig. 5, when the n-p pairing strength is insufficient, the n-p pairing probability is suppressed significantly by the n-n and p-p pairs.
Figure 5: The pairing probabilities vs $k$ near the average Fermi surface at
the total density $\rho=0.068\ \text{fm}^{-3}$ with isospin asymmetry
$\beta=0.15$. Here $k=p/\hbar$ is the wave number. The dashed, short-dashed, and solid lines correspond to the n-n, p-p, and n-p pairings, respectively. The n-p pairing strength $g_{np}/g_{nn}$ is set to $1.3837$ ($1.12$) in the left (right) panel.
In the calculations of this study, the temperature is set to zero. For asymmetric nuclear matter, however, finite temperature also disperses the neutron and proton Fermi surfaces: at low temperature this reduces the suppression caused by the Fermi-surface mismatch, whereas at high temperature thermal effects destroy all types of pairings. Once the temperature is included, both the enhancing and reducing effects of the n-n and p-p pairings on the n-p pairing should be weakened.
In finite nuclei, the n-p pairing might be suppressed by the strong spin-orbit
splitting sp1 ; sp2 . However, in nuclei where the spin-splitting becomes
small, the coexistence of the three types of pairings may occur. Understanding
the enhancing and reducing effects on the n-p pairing owing to the existence
of n-n and p-p pairings could be beneficial in elucidating the n-p pairing in
$N\approx
Z$ nuclei. For asymmetric nuclei, the interplay between n-n and n-p pairings
might be the same as that in asymmetric nuclear matter.
## IV SUMMARY
In this study, we investigated the coexistence of n-n, p-p, and n-p pairings
in isospin-asymmetric nuclear matter with an effective density-dependent
contact pairing interaction. The three types of pairings cannot coexist in
symmetric nuclear matter: only n-p pairs survive when the n-p pairing strength
is stronger than that of the n-n and p-p pairs, whereas the n-n and p-p pairs
are preferred if the n-n and p-p pairing interactions become strong. In
contrast, n-n, p-p, and n-p pairs can coexist in isospin-asymmetric nuclear
matter when the n-p pairing interaction is stronger than that of the n-n and
p-p pairs.
Compared to the single pairing calculation (gap equation with only one kind of
nucleon pair), the results indicate two effects of the existence of n-n and
p-p pairs. On the one hand, the existence of n-n and p-p pairs can disperse
the neutron and proton Fermi surfaces, which increases the phase-space overlap
between neutrons and protons and eventually enhances the n-p pairing near the
average Fermi surface. This positive mechanism can reduce the suppression
owing to the mismatched Fermi surface of neutrons and protons in the isospin
asymmetric nuclear matter. On the other hand, a n-n pair and a p-p pair should
be broken up to form two n-p pairs. In this process, the pairing interaction
plays a crucial role. The final results are determined by these two effects.
In isospin asymmetric nuclear matter, the existence of n-n and p-p pairs can
enhance the n-p pairing when the n-p pairing strength is significantly
stronger than that of n-n and p-p pairs. However, the existence of n-n and p-p
pairs would reduce the n-p pairing probability when the n-p pairing
interaction decreases in strength. Moreover, when the n-p pairing strength
becomes comparable to that of the n-n and p-p pairs, the n-p pairing rapidly
disappears with increasing isospin asymmetry.
In this paper, the gap solution is required only to be thermodynamically
stable. The Cooper-pair momentum should also be included in the future to
avoid dynamical instability ins1 ; ins2 . In addition, in future works, the
pairing interaction should be calibrated to the pairing gaps, including the
polarization correction and the correlation effect. As a prospect, this
interesting coexistence of the three types of pairings should also be applied
to studies of pairing correlations in finite nuclei.
## Acknowledgments
This work is supported by the National Natural Science Foundation of China
(Nos. 11975282, 11775276, 11435014, and 11505241), the Strategic Priority
Research Program of the Chinese Academy of Sciences (Grant No. XDB34000000),
and the Youth Innovation Promotion Association of the Chinese Academy of
Sciences (Grant Nos. Y2021414 and Y201871).
## References
* (1) A. Bohr, B. R. Mottelson, and D. Pines, Phys. Rev. 110, 936 (1958).
* (2) D. Brink and R. Broglia, Nuclear Superfluidity: Pairing in Finite Systems (Cambridge University Press, Cambridge, 2005).
* (3) R. A. Broglia, V. V. Zelevinsky (Eds.), 50 years of BCS, (World Science Pub., 2012).
* (4) D. J. Dean and M. Hjorth-Jensen, Rev. Mod. Phys. 75 607 (2003).
* (5) A. B. Migdal, Zh. Eksp. Teor. Fiz. 37 249 (1959).
* (6) G. Baym, C. Pethick, D. Pines, Nature 224 673 (1969).
* (7) J. M. Lattimer, K. A. Van Riper, M. Prakash, M. Prakash, ApJ 425, 802 (1994).
* (8) S. Burrello, M. Colonna, and F. Matera Phys. Rev. C 94, 012801(R) (2016).
* (9) D. Page, S. Reddy, Neutron Star crust. edited by C. Bertulani and J. Piekarewicz, Nova Science Publ., 281 (2012).
* (10) J. Piekarewicz, F.J. Fattoyev, C.J. Horowitz, Phys. Rev. C 90, 015803 (2014).
* (11) C. L. Bai, H. Sagawa, M. Sasano, T. Uesaka, K. Hagino, H.Q. Zhang, X. Z. Zhang, and F.R. Xu, Phys. Lett. B 719 116 (2013).
* (12) K. Kaneko, Y. Sun, and T. Mizusaki, Phys. Rev. C 97, 054326 (2018).
* (13) S. J. Mao, X. G. Huang, and P. F. Zhuang, Phys. Rev. C 79, 034304 (2009).
* (14) Caiwan Shen, U. Lombardo, P. Schuck, W. Zuo, and N. Sandulescu, Phys. Rev. C 67 061302 (R) (2003).
* (15) U. Lombardo, P. Schuck, and W. Zuo, Phys. Rev. C 64 021301 (R) (2001).
* (16) J. M. Dong, U. Lombardo, and W. Zuo, Phys. Rev. C 87 062801 (R) (2013).
* (17) X.-H. Fan, X.-L. Shang, J.-M. Dong, and W. Zuo, Phys. Rev. C 99, 065804 (2019).
* (18) M. Baldo, U. Lombardo, H. -J. Schulze, and Zuo Wei, Phys. Rev. C 66 054304 (2002).
* (19) U. Lombardo, H.-J. Schulze, and W. Zuo, Phys. Rev. C 59, 2927 (1999).
* (20) X. L. Shang, and W. Zuo, Phys. Rev. C 88, 025806 (2013).
* (21) X. L. Shang, P. Wang, P. Yin, and W. Zuo, J. Phys. G 42, 055105 (2015).
* (22) P. Bożek, Phys. Rev. C 62 054316 (2000).
* (23) X. Meng, S. S. Zhang, L. Gio, L. S. Geng, and L. G. Cao, Phys. Rev. C 102 064322 (2020).
* (24) M. Baldo, U. Lombardo and P. Schuck, Phys. Rev. C 52 975 (1995).
* (25) U. Lombardo, C. W. Shen, H. -J. Schulze, and W. Zuo, Int. J. Mod. Phys. E 14 513 (2005).
* (26) J. Wambach, T.L. Ainsworth, D. Pines, Nucl. Phys. A 555 128 (1993).
* (27) H.-J. Schulze, J. Cugnon, A. Lejeune, M. Baldo, and U. Lombardo, Phys. Lett. B 375 1 (1996).
* (28) H.-J. Schulze, A. Polls, and A. Ramos, Phys. Rev. C 63 044310 (2001).
* (29) C. W. Shen, U. Lombardo, and P. Schuck, Phys. Rev. C 71, 054301 (2005).
* (30) L. G. Cao, U. Lombardo, and P. Schuck, Phys. Rev. C 74, 064301 (2006).
* (31) S. S. Zhang, L. G. Cao, U. Lombardo and P. Schuck, Phys. Rev. C 93 044329 (2016).
* (32) Wenmei Guo, U. Lombardo, and P. Schuck, Phys. Rev. C 99, 014310 (2019).
* (33) E. Garrido _et al._, Phys. Rev. C 60, 064312 (1999); 63, 037304 (2001).
* (34) G. F. Bertsch, and Y. Luo, Phys. Rev. C 81, 064320 (2010).
* (35) H. Sagawa, C. L. Bai, and G. Colò, Phys. Scr. 91, 083011 (2016).
* (36) I. M. Khalatnikov, Pis'ma Zh. Eksp. Teor. Fiz. 17, 534 (1973) [JETP Lett. 17, 386 (1973)]; V. P. Mineev, Zh. Eksp. Teor. Fiz. 67, 263 (1974) [Sov. Phys. JETP 40, 132 (1974)]
* (37) L. Y. He, M. Jin, and P. F. Zhuang, Phys. Rev. B 73 214527 (2006).
|
arxiv-papers
| 2021-07-26T13:14:14 |
2024-09-04T03:07:18.610683
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Yi-Jun Yan, Xin-Le Shang, Jian-Min Dong, and Wei Zuo",
"submitter": "Xinle Shang",
"url": "https://arxiv.org/abs/2107.12197"
}
|
2107.12201
|
July, 2021
AdS (super)projectors in three dimensions and partial masslessness
Daniel Hutchings, Sergei M. Kuzenko and Michael Ponds
Department of Physics M013, The University of Western Australia
35 Stirling Highway, Crawley W.A. 6009, Australia
Email: [email protected], [email protected],
[email protected]
We derive the transverse projection operators for fields with arbitrary
integer and half-integer spin on three-dimensional anti-de Sitter space, AdS3.
The projectors are constructed in terms of the quadratic Casimir operators of
the isometry group $\mathsf{SO}(2,2)$ of AdS3. Their poles are demonstrated to
correspond to (partially) massless fields. As an application, we make use of
the projectors to recast the conformal and topologically massive higher-spin
actions in AdS3 into a manifestly gauge-invariant and factorised form. We also
propose operators which isolate the component of a field that is transverse
and carries a definite helicity. Such fields correspond to irreducible
representations of $\mathsf{SO}(2,2)$. Our results are then extended to the
case of ${\cal N}=1$ AdS3 supersymmetry.
###### Contents
1. 1 Introduction
2. 2 Transverse projectors in AdS3
1. 2.1 On-shell fields
2. 2.2 Spin projection operators
1. 2.2.1 Bosonic case
2. 2.2.2 Fermionic case
3. 2.3 Helicity projectors
4. 2.4 Longitudinal projectors and lower-spin extractors
5. 2.5 Linearised higher-spin Cotton tensors
6. 2.6 Results in Minkowski space
3. 3 Transverse superprojectors in AdS3|2
1. 3.1 On-shell superfields
2. 3.2 Superspin projection operators
3. 3.3 Longitudinal projectors
4. 3.4 Linearised higher-spin super-Cotton tensors
4. 4 Conclusion
5. A Notation and conventions
6. B Generating function formalism
## 1 Introduction
The spin projection operators, or transverse and traceless (TT) spin-$s$
projectors, were first derived in four-dimensional (4d) Minkowski space
$\mathbb{M}^{4}$ by Behrends and Fronsdal [1, 2]. Given a symmetric tensor
field on $\mathbb{M}^{4}$ that obeys the Klein-Gordon equation, it decomposes
into a sum of constrained fields describing irreducible representations of the
Poincaré group with varying spin. The Behrends-Fronsdal projectors allow one
to extract the component of this decomposition corresponding to the
representation with the highest spin. Many applications for the TT projectors
have been found within the landscape of high energy physics. For example, they
played a crucial role in the original formulation of conformal higher-spin
gauge actions [3].
Since the work of [1, 2], the spin projection operators have been generalised
to diverse dimensions and symmetry groups. In the case of $\mathbb{M}^{d}$,
the TT projectors were first derived by Segal [4] (see also [5, 6, 7, 8]) in
the bosonic case and later by Isaev and Podoinitsyn [8] for half-integer
spins. In four dimensions, the projection operators in ${\cal N}=1$ Minkowski
superspace, $\mathbb{M}^{4|4}$, were introduced by Salam and Strathdee [9] in
the case of a scalar superfield, and by Sokatchev [10] for superfields of
arbitrary rank. The superprojectors derived in [10] were formulated in terms of
Casimir operators. A few years later Rittenberg and Sokatchev [11] made use of
a similar method to construct the superprojectors in ${\cal N}$-extended
Minkowski superspace $\mathbb{M}^{4|4{\cal N}}$ (see also [12]). An
alternative powerful construction of the superprojectors in
$\mathbb{M}^{4|4{\cal N}}$ was given in [13, 14].111This approach has found
numerous applications, e.g. the derivation of gauge-invariant actions [15,
16]. Recently, the superprojectors in three-dimensional ${\cal N}$-extended
Minkowski superspace, ${\mathbb{M}}^{3|2{\cal N}}$, were derived in Ref. [17],
which built upon the earlier work of [18].
It is of interest to construct spin projection operators for fields on
(anti-)de Sitter space, (A)dS. In particular, in order to describe irreducible
representations of the AdSd isometry algebra, $\mathfrak{so}(d-1,2)$, fields
on AdSd must satisfy certain differential constraints involving the Lorentz-
covariant derivative ${\cal D}_{a}$ for AdSd. Since both dS and AdS spaces
have non-vanishing curvature, the construction of the TT projectors proves to
be technically challenging. However, recent progress has been made in [20,
19], where the (super)projectors in AdS4 were derived. The next logical step
is to derive the TT (super)projectors in AdSd. In this work we consider the
case $d=3$, which serves as a starting point for this program.
This paper is organised as follows. In section 2.1, we begin by reviewing on-
shell fields in AdS3 and the corresponding irreducible representations of
$\mathfrak{so}(2,2)$ which they furnish. In section 2.2, we derive the spin
projection operators for fields of arbitrary rank. More specifically, let us
denote by ${\cal V}_{(n)}$ the space of totally symmetric rank-$n$ spinor
fields
$\phi_{\alpha(n)}:=\phi_{\alpha_{1}\dots\alpha_{n}}=\phi_{(\alpha_{1}\dots\alpha_{n})}$
on AdS3. For any integer $n\geq 2$, we derive the rank-$n$ spin projection
operator, $\Pi^{\perp}_{[n]}$, which is defined by its action on ${\cal
V}_{(n)}$ according to the rule:
$\displaystyle\Pi^{\perp}_{[n]}:{\cal V}_{(n)}\longrightarrow{\cal
V}_{(n)}~{},\qquad\phi_{\alpha(n)}\longmapsto\Pi^{\perp}_{[n]}\phi_{\alpha(n)}~{}=:\phi^{\perp}_{\alpha(n)}~{}.$
(1.1)
For fixed $n$, this operator is defined by the following properties:
1. 1.
Idempotence: $\Pi^{\perp}_{[n]}$ is a projector in the sense that it squares
to itself,
$\Pi^{\perp}_{[n]}\Pi^{\perp}_{[n]}=\Pi^{\perp}_{[n]}~{}.$ (1.2a)
2. 2.
Transversality: $\Pi^{\perp}_{[n]}$ maps $\phi_{\alpha(n)}$ to a transverse
field,
${\cal D}^{\beta(2)}\phi^{\perp}_{\beta(2)\alpha(n-2)}=0~{}.$ (1.2b)
3. 3.
Surjectivity: Every transverse field belongs to the image of
$\Pi^{\perp}_{[n]}$,
$\mathcal{D}^{\beta(2)}\psi_{\beta(2)\alpha(n-2)}=0~{}\quad\implies\quad\Pi^{\perp}_{[n]}\psi_{\alpha(n)}=\psi_{\alpha(n)}~{}.$
(1.2c)
In other words, $\Pi^{\perp}_{[n]}$ acts as the identity operator on the space
of transverse fields.
Any operator satisfying all three of these properties may be considered to be
an AdS3 analogue of the Behrends-Fronsdal projector.222We refer to any
operator satisfying properties (1.2a), (1.2b) and (1.2c) as a spin projection
operator. In section 2.2 we show that, under an additional assumption, such an
operator is unique. In general, operators satisfying properties (1.2a) and
(1.2b) will be called transverse projectors. The latter are sometimes referred
to as TT projectors, which is a slight abuse of terminology, since in vector
notation the field $\phi_{\alpha(n)}$ is already traceless. However, the field
$\phi^{\perp}_{\alpha(n)}$ will correspond to a reducible representation of
$\mathfrak{so}(2,2)$. In order to isolate the component describing an
irreducible representation, it is necessary to bisect the projectors according
to $\Pi^{\perp}_{[n]}=\mathbb{P}_{[n]}^{(+)}+\mathbb{P}_{[n]}^{(-)}$. The
operator $\mathbb{P}_{[n]}^{(\pm)}$ is a helicity projector since it satisfies
the properties333Whilst $\mathbb{P}_{[n]}^{(\pm)}$ satisfies the properties
(1.2a) and (1.2b), it does not satisfy (1.2c). (1.2a) and (1.2b) and selects
the component of $\phi_{\alpha(n)}$ carrying the definite value
$\pm\frac{n}{2}$ of helicity. They are constructed in section 2.3. In section
2.4 we make use of the orthogonal complement of $\Pi^{\perp}_{[n]}$ to
decompose an unconstrained field $\phi_{\alpha(n)}$ into a sum of transverse
fields $\phi^{\perp}_{\alpha(n-2j)}$ where $0\leq j\leq\lfloor n/2\rfloor$. We
then provide an operator ${\mathbb{S}}^{\perp}_{\alpha(n-2j)}$ which extracts
the field $\phi^{\perp}_{\alpha(n-2j)}$ from this decomposition.
Making use of these projection operators, we derive a number of interesting
and non-trivial results. In particular, in section 2 we show that all
information about (partially) massless fields is encoded in the poles of the
transverse projectors. The novelty of our approach is that all projectors are
derived in terms of the quadratic Casimir operators of $\mathfrak{so}(2,2)$.
This allows us to recast the AdS3 higher-spin Cotton tensors and their
corresponding conformal actions into a manifestly gauge-invariant and
factorised form. Similar results are provided for new topologically massive
(NTM) spin-$s$ gauge models, which are of order $2s$ in derivatives, where $s$
is a positive (half-)integer. In the case when $s$ is an integer, it is
possible to construct NTM models of order $2s-1$. In $\mathbb{M}^{3}$ such
models were recently proposed in [21]; here we extend them to AdS3. The above
results are detailed in section 2.5. Finally, in section 2.6 we study the flat
limit of these results, and obtain new realisations for the spin projection
operators, the helicity projectors and the conformal higher-spin actions in
$\mathbb{M}^{3}$.
In section 3, we extend some of these results to the case of ${\cal N}=1$ AdS3
supersymmetry. Alongside concluding comments, new realisations of the
Behrends-Fronsdal projectors in $\mathbb{M}^{4}$, expressed in terms of the
Casimir operators of the 4d Poincaré algebra, are given in section 4. The main
body is accompanied by two technical appendices. Appendix A summarises our
conventions and notation. We review the generating function formalism in
Appendix B, which is a convenient framework used in deriving the non-
supersymmetric results of section 2.
Our findings in this paper can be viewed as generalisations of the earlier
results in AdS4 [20, 19] and AdS3 [22], which in turn were based on the
structure of (super)projectors in Minkowski (super)space [18, 17]. Throughout
this work we make use of the convention
$\displaystyle
U_{\alpha(n)}V_{\alpha(m)}=U_{(\alpha_{1}...\alpha_{n}}V_{\alpha_{n+1}...\alpha_{n+m)}}~{}.$
(1.3)
## 2 Transverse projectors in AdS3
The geometry of AdS3 is described by the Lorentz covariant derivative,
$\displaystyle{\cal
D}_{a}=e_{a}{}^{m}\partial_{m}+\frac{1}{2}\omega_{a}{}^{bc}M_{bc}=e_{a}{}^{m}\partial_{m}+\frac{1}{2}\omega_{a}{}^{\beta\gamma}M_{\beta\gamma}~{},$
(2.1)
which satisfies the commutation relation
$[{\cal D}_{a},{\cal D}_{b}]=-4{\cal
S}^{2}M_{ab}\quad\Longleftrightarrow\quad\ [{\cal D}_{\alpha\beta},{\cal
D}_{\gamma\delta}]=4{\cal
S}^{2}\Big{(}\varepsilon_{\gamma(\alpha}M_{\beta)\delta}+\varepsilon_{\delta(\alpha}M_{\beta)\gamma}\Big{)}~{}.$
(2.2)
Here $e_{a}{}^{m}$ is the inverse vielbein, $\omega_{a}{}^{bc}$ is the Lorentz
connection and the parameter ${\cal S}$ is related to the scalar curvature $R$
via $R=-24{\cal S}^{2}$. The Lorentz generators with vector ($M_{ab}=-M_{ba}$)
and spinor ($M_{\alpha\beta}=M_{\beta\alpha}$) indices are defined in appendix
A. In our subsequent analysis, we will make use of the quadratic Casimir
operators of the AdS3 isometry algebra
$\mathfrak{so}(2,2)=\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{sl}(2,\mathbb{R})$,
for which we choose (see, e.g., [23])
$\displaystyle\mathcal{F}:$
$\displaystyle=\mathcal{D}^{\alpha\beta}M_{\alpha\beta}~{},$
$\displaystyle[\mathcal{F},\mathcal{D}_{\alpha\beta}]=0~{},$ (2.3a)
$\displaystyle\mathcal{Q}:$
$\displaystyle=\Box-2\mathcal{S}^{2}M^{\alpha\beta}M_{\alpha\beta}~{},\qquad$
$\displaystyle[\mathcal{Q},\mathcal{D}_{\alpha\beta}]=0~{}.$ (2.3b)
Here $\Box:={\cal D}^{a}{\cal D}_{a}=-\frac{1}{2}{\cal D}^{\alpha\beta}{\cal
D}_{\alpha\beta}$ is the d’Alembert operator in AdS3. The operators ${\cal F}$
and ${\cal Q}$ are related to each other as follows
$\displaystyle\mathcal{F}^{2}\phi_{\alpha(n)}=n^{2}\big{[}\mathcal{Q}-(n-2)(n+2)\mathcal{S}^{2}\big{]}\phi_{\alpha(n)}+n(n-1)\mathcal{D}_{\alpha(2)}\mathcal{D}^{\beta(2)}\phi_{\beta(2)\alpha(n-2)}~{},$
(2.4)
for an arbitrary symmetric rank-$n$ spinor field $\phi_{\alpha(n)}$. The
structure
$\mathcal{D}_{\alpha(2)}\mathcal{D}^{\beta(2)}\phi_{\beta(2)\alpha(n-2)}$ in
(2.4) is not defined for the cases $n=0$ and $n=1$. However, it is multiplied
by $n(n-1)$ which vanishes for these cases.
### 2.1 On-shell fields
In any irreducible representation of the AdS3 isometry group
$\mathsf{SO}(2,2)$, the Casimir operators ${\cal F}$ and ${\cal Q}$ must be
multiples of the identity operator. Therefore, in accordance with (2.4), one
is led to consider on-shell fields of the type
$\displaystyle{\cal D}^{\beta(2)}\phi_{\beta(2)\alpha(n-2)}$ $\displaystyle=$
$\displaystyle 0~{},$ (2.5a)
$\displaystyle\big{(}\mathcal{F}-\mu\big{)}\phi_{\alpha(n)}$ $\displaystyle=$
$\displaystyle 0~{},$ (2.5b)
for some real mass parameter $\mu$.
Unitary representations of the Lie algebra $\mathfrak{so}(2,2)$ may be
realised in terms of the on-shell fields (2.5) for certain values of $\mu$. As
is well known (see, e.g., [24, 25] and references therein), the irreducible
unitary representations of $\mathfrak{so}(2,2)$ are denoted $D(E_{0},s)$,
where $E_{0}$ is the minimal energy (in units of $\mathcal{S}$), $s$ the
helicity and $|s|$ is the spin. In this paper we are interested in only those
representations carrying integer or half-integer spin with $|s|\geq 1$ and,
consequently, the allowed values of $s$ are $s=\pm 1,\pm\frac{3}{2},\pm
2,\dots$ . In order for the representation $D(E_{0},s)$ to be unitary, the
inequality $E_{0}\geq|s|$, known as the unitarity bound, must be satisfied.
The representation $D(E_{0},s)\equiv D(E_{0},\sigma|s|)$, with $\sigma:=\pm
1$, may be realised on the space of symmetric rank-$n$ spinor fields
$\phi_{\alpha(n)}$ satisfying the following differential constraints:
$\displaystyle{\cal D}^{\beta(2)}\phi_{\beta(2)\alpha(n-2)}$ $\displaystyle=$
$\displaystyle 0~{},$ (2.6a) $\displaystyle{\cal
D}_{(\alpha_{1}}{}^{\beta}\phi_{\alpha_{2}...\alpha_{n})\beta}$
$\displaystyle=$ $\displaystyle\sigma\frac{\rho}{n}\phi_{\alpha(n)}~{}.$
(2.6b)
Here the integer $n\geq 2$ is related to $s$ via $n=2|s|$. The real parameter
$\rho\geq 0$, which carries mass dimension one, is called the pseudo-mass and
is related to $E_{0}$ through
$\displaystyle E_{0}=1+\frac{\rho}{2n\mathcal{S}}~{}.$ (2.7)
In terms of $\rho$ and $n$, the unitarity bound reads $\rho\geq
n(n-2)\mathcal{S}$. With this in mind, we will label the representations using
$\rho$ in place of $E_{0}$, and use the notation
${\mathfrak{D}}(\rho,\sigma\frac{n}{2})$.
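The equivalence of the two forms of the unitarity bound follows directly from (2.7) with $|s|=n/2$. As a quick numerical sketch (not part of the paper; the function name `E0` is ours and $\mathcal{S}$ is set to $1$), one can verify that $E_{0}\geq|s|$ holds exactly when $\rho\geq n(n-2)\mathcal{S}$:

```python
from fractions import Fraction

S = Fraction(1)  # AdS scale parameter, set to 1 for this check

def E0(rho, n):
    # minimal energy, eq. (2.7): E0 = 1 + rho / (2 n S)
    return 1 + Fraction(rho) / (2 * n * S)

# E0 >= |s| = n/2 should be equivalent to rho >= n(n-2) S
for n in range(2, 12):
    for rho in range(0, 150):
        assert (E0(rho, n) >= Fraction(n, 2)) == (rho >= n * (n - 2) * S)
```

Exact rational arithmetic (`fractions.Fraction`) is used so that the comparison with the bound is not affected by floating-point rounding.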
The equations (2.6) were introduced in [25]. In the flat-space limit, these
equations reduce to those proposed in [26, 27].
The first-order equation (2.6b) is equivalent to (2.5b) with $\mu=\sigma\rho$.
Any field $\phi_{\alpha(n)}$ satisfying both constraints (2.6a) and (2.6b), is
an eigenvector of the Casimir operator $\mathcal{Q}$,
$\displaystyle\big{(}\mathcal{Q}-m^{2}\big{)}\phi_{\alpha(n)}=0~{},\qquad
m^{2}:=(\rho/n)^{2}+(n-2)(n+2)\mathcal{S}^{2}~{}.$ (2.8)
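This eigenvalue equation follows in one step from (2.4): on a transverse field the last term of (2.4) drops out, while (2.6b) implies $\mathcal{F}\phi_{\alpha(n)}=\sigma\rho\,\phi_{\alpha(n)}$ with $\sigma^{2}=1$, hence

```latex
\rho^{2}\,\phi_{\alpha(n)}
  = \mathcal{F}^{2}\phi_{\alpha(n)}
  = n^{2}\big[\mathcal{Q}-(n-2)(n+2)\mathcal{S}^{2}\big]\phi_{\alpha(n)}
\quad\Longrightarrow\quad
\mathcal{Q}\,\phi_{\alpha(n)}
  = \Big[(\rho/n)^{2}+(n-2)(n+2)\mathcal{S}^{2}\Big]\phi_{\alpha(n)}~.
```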
In place of (2.6a) and (2.6b), one may instead consider tensor fields
$\phi_{\alpha(n)}$ constrained by the equations (2.6a) and (2.8),
$\displaystyle{\cal D}^{\beta(2)}\phi_{\beta(2)\alpha(n-2)}$ $\displaystyle=$
$\displaystyle 0~{},$ (2.9a)
$\displaystyle\big{(}\mathcal{Q}-m^{2}\big{)}\phi_{\alpha(n)}$
$\displaystyle=$ $\displaystyle 0~{}.$ (2.9b)
In this case, the equation (2.4) becomes
$\displaystyle\big{(}\mathcal{F}-\rho\big{)}\big{(}\mathcal{F}+\rho\big{)}\phi_{\alpha(n)}=0~{}.$
(2.10)
It follows that such a $\phi_{\alpha(n)}$ furnishes the reducible
representation
${\mathfrak{D}}\Big{(}\rho,-\frac{n}{2}\Big{)}\oplus{\mathfrak{D}}\Big{(}\rho,\frac{n}{2}\Big{)}~{}.$
(2.11)
It may be shown that when the pseudo-mass takes on any of the special values
$\rho\equiv\rho_{(t,n)}=n(n-2t){\cal S}~{},\qquad 1\leq t\leq\lfloor
n/2\rfloor~{},$ (2.12)
then the representation ${\mathfrak{D}}(\rho,\sigma\frac{n}{2})$, with either
sign for $\sigma$, shortens. At the field-theoretic level, this is manifested
by the appearance of a depth-$t$ gauge symmetry
$\displaystyle\delta_{\zeta}\phi^{(t)}_{\alpha(n)}=\big{(}\mathcal{D}_{\alpha(2)}\big{)}^{t}\zeta_{\alpha(n-2t)}~{},$
(2.13)
under which the system of equations (2.6), with $\rho$ given by (2.12) and
$\sigma$ arbitrary, is invariant.444This is true when the gauge parameter
satisfies conditions analogous to (2.6), see [22] for the details. A field
which satisfies the constraints (2.9a) and (2.8), and has pseudo-mass (2.12),
will be said to be partially-massless with depth $t$ and denoted by
$\phi^{(t)}_{\alpha(n)}$.555Partially massless fields have been studied in
diverse dimensions for over 35 years, see e.g. [28, 29, 30, 31, 32] for some
of the earlier works. For the field $\phi^{(t)}_{\alpha(n)}$ the second order
equation (2.8) takes the form
$\big{(}{\cal
Q}-\tau_{(t,n)}\mathcal{S}^{2}\big{)}\phi^{(t)}_{\alpha(n)}=0~{},\qquad\tau_{(t,n)}=\big{[}2n(n-2t)+4(t-1)(t+1)\big{]}~{},$
(2.14)
where the parameters $\tau_{(t,n)}$ are known as the partially massless
values. For $t>1$, the pseudo-mass $\rho_{(t,n)}$, eq. (2.12), violates the
unitarity bound and hence the partially massless representations are non-
unitary.
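The partially massless values $\tau_{(t,n)}$ in (2.14) are consistent with the mass-shell relation (2.8): inserting $\rho_{(t,n)}=n(n-2t)\mathcal{S}$ into $m^{2}=(\rho/n)^{2}+(n-2)(n+2)\mathcal{S}^{2}$ must reproduce $\tau_{(t,n)}\mathcal{S}^{2}$, and for $t>1$ the pseudo-mass must violate the bound $\rho\geq n(n-2)\mathcal{S}$. A minimal integer check (our own sketch; $\mathcal{S}^{2}$ is scaled out and the function names are ours):

```python
def tau(t, n):
    # partially massless values, eq. (2.14)
    return 2 * n * (n - 2 * t) + 4 * (t - 1) * (t + 1)

def m2_over_S2(rho_over_S, n):
    # eq. (2.8) with S^2 factored out: m^2/S^2 = (rho/(nS))^2 + (n-2)(n+2)
    return (rho_over_S // n) ** 2 + (n - 2) * (n + 2)

for n in range(2, 20):
    for t in range(1, n // 2 + 1):
        rho_over_S = n * (n - 2 * t)  # eq. (2.12)
        # pseudo-mass (2.12) plugged into (2.8) reproduces tau of (2.14)
        assert m2_over_S2(rho_over_S, n) == tau(t, n)
        # the unitarity bound rho >= n(n-2)S is saturated at t=1, violated for t>1
        assert (rho_over_S >= n * (n - 2)) == (t == 1)
```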
### 2.2 Spin projection operators
Given a tensor field $\phi_{\alpha(n)}$ on AdS3, the spin projection operator
$\Pi^{\perp}_{[n]}$ with the defining properties (1.2), selects the component
$\phi^{\perp}_{\alpha(n)}$ of $\phi_{\alpha(n)}$ which is transverse. If, in
addition, $\phi_{\alpha(n)}$ satisfies the second order mass-shell equation
(2.8), then $\Pi^{\perp}_{[n]}\phi_{\alpha(n)}$ furnishes the reducible
representation
${\mathfrak{D}}(\rho,-\frac{n}{2})\oplus{\mathfrak{D}}(\rho,\frac{n}{2})$ of
$\mathfrak{so}(2,2)$.
In this section we derive the spin projection operators $\Pi^{\perp}_{[n]}$.
For this purpose it is convenient to make use of the generating function
formalism, which is described in appendix B. In this framework, the properties
(1.2a) and (1.2b) take the following form:
$\Pi^{\perp}_{[n]}\Pi^{\perp}_{[n]}\phi_{(n)}=\Pi^{\perp}_{[n]}\phi_{(n)}~{},\qquad\mathcal{D}_{(-2)}\Pi^{\perp}_{[n]}{\phi}_{(n)}=0~{}.\qquad$
(2.15)
It is necessary to separately analyse the cases with $n$ even and $n$ odd.
#### 2.2.1 Bosonic case
We will begin by studying the bosonic case, $n=2s$, for integer $s\geq 1$. Let
us introduce the differential operator $\mathbb{T}_{[2s]}$ of order $2s$ in
derivatives666When the upper bound in a product is less than the lower bound,
we define the result to be unity.
$\mathbb{T}_{[2s]}=\sum_{j=0}^{s}2^{2j}s\frac{(s+j-1)!}{(s-j)!}\prod_{t=1}^{j}\big{(}\mathcal{Q}-\tau_{(s-t+1,2s)}\mathcal{S}^{2}\big{)}\mathcal{D}_{(2)}^{s-j}\mathcal{D}_{(-2)}^{s-j}~{}.$
(2.16)
Here $\tau_{(t,n)}$ denotes the partially massless values (2.14). We refer the
reader to appendix B for an explanation of the other notation. Given an
arbitrary field $\phi_{(2s)}\in\mathcal{V}_{(2s)}$, using (B.3b) one may show
that this operator maps it to a transverse field
$\mathcal{D}_{(-2)}\mathbb{T}_{[2s]}{\phi}_{(2s)}=0~{}.$ (2.17)
However, it is not a projector on $\mathcal{V}_{(2s)}$ since it does not
square to itself,
$\mathbb{T}_{[2s]}\mathbb{T}_{[2s]}{\phi}_{(2s)}=2^{2s-1}(2s)!\prod_{t=1}^{s}\big{(}\mathcal{Q}-\tau_{(t,2s)}\mathcal{S}^{2}\big{)}\mathbb{T}_{[2s]}{\phi}_{(2s)}~{}.$
(2.18)
To prove this identity, we observe that only the $j=s$ term of the sum in
(2.16) survives when $\mathbb{T}_{[2s]}$ acts on a transverse field such as
$\mathbb{T}_{[2s]}{\phi}_{(2s)}$.
To obtain a projector, we define the following dimensionless operator
$\widehat{\Pi}^{\perp}_{[2s]}:=\Big{[}2^{2s-1}(2s)!\prod_{t=1}^{s}\big{(}\mathcal{Q}-\tau_{(t,2s)}\mathcal{S}^{2}\big{)}\Big{]}^{-1}\mathbb{T}_{[2s]}~{}.$
(2.19)
On $\mathcal{V}_{(2s)}$ it inherits its transversality from
$\mathbb{T}_{[2s]}$, and is idempotent by virtue of (2.18). In a fashion
similar to the proof of (2.18), it may also be shown that
$\widehat{\Pi}^{\perp}_{[2s]}$ acts as the identity on the space of
rank-$(2s)$ transverse fields. Thus, $\widehat{\Pi}^{\perp}_{[2s]}$ satisfies
the properties (1.2) and is hence the spin projection operator on
$\mathcal{V}_{(2s)}$. Making the indices explicit, the latter reads
$\displaystyle\widehat{\Pi}^{\perp}_{[2s]}\phi_{\alpha(2s)}$ $\displaystyle=$
$\displaystyle\Big{[}\prod_{t=1}^{s}\big{(}\mathcal{Q}-\tau_{(t,2s)}\mathcal{S}^{2}\big{)}\Big{]}^{-1}\sum_{j=0}^{s}2^{2j-2s}\frac{2s}{s+j}\binom{s+j}{2j}$
(2.20)
$\displaystyle\times\prod_{t=1}^{j}\big{(}\mathcal{Q}-\tau_{(s-t+1,2s)}\mathcal{S}^{2}\big{)}\mathcal{D}_{\alpha(2)}^{s-j}\big{(}\mathcal{D}^{\beta(2)}\big{)}^{s-j}\phi_{\alpha(2j)\beta(2s-2j)}~{}.$
It is possible to construct a spin projection operator solely in terms of the
two quadratic Casimir operators (2.3). To this end, we introduce the operator
$\displaystyle\Pi^{\perp}_{[2s]}=\frac{1}{2^{2s-1}(2s)!}\prod_{j=1}^{s}\frac{\Big{(}{\cal
F}^{2}-4(j-1)^{2}\big{(}{\cal Q}-4j(j-2){\cal
S}^{2}\big{)}\Big{)}}{\big{(}\mathcal{Q}-\tau_{(j,2s)}\mathcal{S}^{2}\big{)}}~{}.$
(2.21)
Let us show that (2.21) satisfies the three defining properties (1.2) on
$\mathcal{V}_{(2s)}$. Given an arbitrary transverse field $\psi_{\alpha(2s)}$,
$\mathcal{D}_{(-2)}\psi_{(2s)}=0$, using (2.4) one may show that
$\displaystyle\prod_{j=1}^{s}\Big{(}{\cal F}^{2}-4(j-1)^{2}\big{(}{\cal
Q}-4j(j-2){\cal S}^{2}\big{)}\Big{)}\psi_{(2s)}$
$\displaystyle=2^{2s-1}(2s)!\prod_{j=1}^{s}\Big{(}{\cal Q}-\tau_{(j,2s)}{\cal
S}^{2}\Big{)}\psi_{(2s)}~{}.$ (2.22)
It follows that ${\Pi}^{\perp}_{[2s]}$ acts as the identity on the space of
transverse fields,
$\displaystyle\mathcal{D}_{(-2)}\psi_{(2s)}=0\quad\implies\quad\Pi^{\perp}_{[2s]}\psi_{(2s)}=\psi_{(2s)}~{}.$
(2.23)
Next, the image of any unconstrained field $\phi_{(2s)}$ under
$\Pi^{\perp}_{[2s]}$ is transverse, which follows elegantly from (B.3c)
${\cal D}_{(-2)}\Pi^{\perp}_{[2s]}\phi_{(2s)}=\Pi^{\perp}_{[2s]}{\cal
D}_{(-2)}\phi_{(2s)}\propto\mathcal{D}_{(2)}^{s}\mathcal{D}_{(-2)}^{s+1}\phi_{(2s)}=0~{}.$
(2.24)
Finally, using (2.23) and (2.24) one can show that $\Pi^{\perp}_{[2s]}$
squares to itself
$\displaystyle\Pi^{\perp}_{[2s]}\Pi^{\perp}_{[2s]}\phi_{(2s)}=\Pi^{\perp}_{[2s]}\phi_{(2s)}~{}.$
(2.25)
Thus $\Pi^{\perp}_{[2s]}$ satisfies (1.2a), (1.2b) and (1.2c) and can also be
identified as a spin projector.
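Once $\mathcal{F}^{2}$ is eliminated via (2.4) (the $\mathcal{D}$-term drops on transverse fields), the key identity (2.22) becomes a polynomial identity in $\mathcal{Q}$ and $\mathcal{S}^{2}$. The following sketch (ours, not from the paper) evaluates both sides at integer sample points; since both are degree-$s$ polynomials, exact agreement on a grid is strong evidence for the identity:

```python
from math import factorial

def both_sides(s, Q, S2):
    # On transverse rank-2s fields, (2.4) gives F^2 = n^2 [Q - (n-2)(n+2) S^2], n = 2s.
    n = 2 * s
    F2 = n * n * (Q - (n - 2) * (n + 2) * S2)
    lhs = 1
    for j in range(1, s + 1):
        # left-hand side of (2.22)
        lhs *= F2 - 4 * (j - 1) ** 2 * (Q - 4 * j * (j - 2) * S2)
    tau = lambda t: 2 * n * (n - 2 * t) + 4 * (t - 1) * (t + 1)  # eq. (2.14)
    rhs = 2 ** (2 * s - 1) * factorial(2 * s)
    for j in range(1, s + 1):
        # right-hand side of (2.22): poles of the projector (2.21)
        rhs *= Q - tau(j) * S2
    return lhs, rhs

# exact integer agreement on a grid of sample points
for s in range(1, 6):
    for Q in range(-6, 7):
        for S2 in range(-3, 4):
            lhs, rhs = both_sides(s, Q, S2)
            assert lhs == rhs
```

For $s=2$, for instance, the left-hand side factorises as $16(\mathcal{Q}-12\mathcal{S}^{2})\cdot 12(\mathcal{Q}-16\mathcal{S}^{2})$, matching $2^{3}\,4!\,(\mathcal{Q}-\tau_{(1,4)}\mathcal{S}^{2})(\mathcal{Q}-\tau_{(2,4)}\mathcal{S}^{2})$ term by term.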
Although it is not immediately apparent, the two projectors
$\widehat{\Pi}^{\perp}_{[2s]}$ and $\Pi^{\perp}_{[2s]}$ actually coincide.
Indeed, an operator satisfying the three properties (1.2), and which commutes
with ${\cal D}_{a}$, must be unique. Let us explain why this is so. Take an
arbitrary $\phi_{(2s)}$ and act on it first with
$\widehat{\Pi}^{\perp}_{[2s]}$ and then with $\Pi^{\perp}_{[2s]}$. Since
$\widehat{\Pi}^{\perp}_{[2s]}\phi_{(2s)}$ is transverse, and
$\Pi^{\perp}_{[2s]}$ acts as the identity on this space, we have
$\displaystyle\Pi^{\perp}_{[2s]}\widehat{\Pi}^{\perp}_{[2s]}\phi_{(2s)}=\widehat{\Pi}^{\perp}_{[2s]}\phi_{(2s)}~{}.$
(2.26)
Next, we perform the same operation but in the opposite order,
$\displaystyle\widehat{\Pi}^{\perp}_{[2s]}\Pi^{\perp}_{[2s]}\phi_{(2s)}=\Pi^{\perp}_{[2s]}\phi_{(2s)}~{},$
(2.27)
and subtract (2.26) from (2.27). Using the fact that $\Pi^{\perp}_{[2s]}$ is
composed solely from Casimir operators, and hence commutes with
$\widehat{\Pi}^{\perp}_{[2s]}$, it follows that on $\mathcal{V}_{(2s)}$ the
two are equal to one another,
$\widehat{\Pi}^{\perp}_{[2s]}\phi_{(2s)}=\Pi^{\perp}_{[2s]}\phi_{(2s)}~{}.$
(2.28)
So far our analysis of the spin projection operators
$\widehat{\Pi}_{[2s]}^{\perp}$ and $\Pi_{[2s]}^{\perp}$ has been restricted to
the linear space $\mathcal{V}_{(2s)}$. However, for fixed $s$, the operator
$\Pi_{[2s]}^{\perp}$ given by eq. (2.21) is also defined to act on the linear
spaces $\mathcal{V}_{(2s^{\prime})}$ with $s^{\prime}<s$. In fact, making use
of (2.4) and (B.3c), it is possible to show that the following holds true
$\Pi^{\perp}_{[2s]}\phi_{(2s^{\prime})}=0~{},\qquad 1\leq s^{\prime}\leq
s-1~{}.$ (2.29)
This important identity states that $\Pi^{\perp}_{[2s]}$ annihilates any
lower-rank field $\phi_{\alpha(2s^{\prime})}\in\mathcal{V}_{(2s^{\prime})}$.
It should be mentioned that $\Pi^{\perp}_{[2s]}$ does not annihilate lower-
rank fermionic fields $\phi_{\alpha(2s^{\prime}+1)}$. When acting on ${\cal
V}_{(2s^{\prime})}$, the two operators $\widehat{\Pi}_{[2s]}^{\perp}$ and
$\Pi_{[2s]}^{\perp}$ are no longer equal to each other, and in particular
$\widehat{\Pi}_{[2s]}^{\perp}\phi_{(2s^{\prime})}\neq 0$. It is for this
reason that we will continue to use different notation for the two operators.
It follows from (2.21) that the poles of $\Pi^{\perp}_{[2s]}$ correspond to
the partially massless values $\tau_{(j,2s)}$ defined by (2.14).
#### 2.2.2 Fermionic case
We now turn our attention to the fermionic case, $n=2s+1$, for integers $s\geq
1$. Let us introduce the differential operator $\mathbb{T}_{[2s+1]}$ of order
$2s$ in derivatives
$\mathbb{T}_{[2s+1]}=\sum_{j=0}^{s}2^{2j}\frac{(s+j)!}{(s-j)!}\prod_{t=1}^{j}\big{(}\mathcal{Q}-\tau_{(s-t+1,2s+1)}\mathcal{S}^{2}\big{)}\mathcal{D}_{(2)}^{s-j}\mathcal{D}_{(-2)}^{s-j}~{}.$
(2.30)
Here $\tau_{(t,n)}$ are the partially massless values (2.14). The operator
$\mathbb{T}_{[2s+1]}$ maps $\phi_{(2s+1)}$ to a transverse field
$\mathcal{D}_{(-2)}\mathbb{T}_{[2s+1]}{\phi}_{(2s+1)}=0~{}.$ (2.31)
However, this operator does not square to itself on ${\cal V}_{(2s+1)}$
$\mathbb{T}_{[2s+1]}\mathbb{T}_{[2s+1]}\phi_{(2s+1)}=2^{2s}(2s)!\prod_{t=1}^{s}\big{(}\mathcal{Q}-\tau_{(t,2s+1)}\mathcal{S}^{2}\big{)}\mathbb{T}_{[2s+1]}\phi_{(2s+1)}~{}.$
(2.32)
As a result, one can immediately define the dimensionless operator
$\widehat{\Pi}^{\perp}_{[2s+1]}:=\Big{[}2^{2s}(2s)!\prod_{t=1}^{s}\big{(}\mathcal{Q}-\tau_{(t,2s+1)}\mathcal{S}^{2}\big{)}\Big{]}^{-1}~{}\mathbb{T}_{[2s+1]}~{},$
(2.33)
which is a transverse projector by construction. Following a derivation
similar to that of (2.32), it can be shown that the operator
$\widehat{\Pi}^{\perp}_{[2s+1]}$ acts like the identity on the space of
transverse fields. Hence, the operator $\widehat{\Pi}^{\perp}_{[2s+1]}$
satisfies properties (1.2), and is thus a spin projection operator on ${\cal
V}_{(2s+1)}$. Converting (2.33) to spinor notation yields
$\displaystyle\widehat{\Pi}^{\perp}_{[2s+1]}\phi_{\alpha(2s+1)}$
$\displaystyle=$
$\displaystyle\Big{[}\prod_{t=1}^{s}\big{(}\mathcal{Q}-\tau_{(t,2s+1)}\mathcal{S}^{2}\big{)}\Big{]}^{-1}\sum_{j=0}^{s}2^{2j-2s}\frac{2s+1}{2j+1}\binom{s+j}{2j}~{}$
(2.34)
$\displaystyle\times\prod_{t=1}^{j}\big{(}\mathcal{Q}-\tau_{(s-t+1,2s+1)}\mathcal{S}^{2}\big{)}\mathcal{D}_{\alpha(2)}^{s-j}\big{(}\mathcal{D}^{\beta(2)}\big{)}^{s-j}\phi_{\alpha(2j+1)\beta(2s-2j)}~{}.$
As in the bosonic case, one can construct a fermionic projector purely in
terms of the quadratic Casimir operators (2.3). Let us introduce the operator
$\displaystyle{\Pi}^{\perp}_{[2s+1]}=\frac{1}{2^{2s}(2s)!}\prod_{j=1}^{s}\frac{\Big{(}{\cal
F}^{2}-(2j-1)^{2}\big{(}{\cal Q}-(2j-3)(2j+1){\cal
S}^{2}\big{)}\Big{)}}{\big{(}\mathcal{Q}-\tau_{(j,2s+1)}\mathcal{S}^{2}\big{)}}~{}.$
(2.35)
We wish to show that (2.35) indeed satisfies the properties (1.2) on ${\cal
V}_{(2s+1)}$. Given an arbitrary transverse field $\psi_{(2s+1)}$, using (2.4)
one can derive the identity
$\displaystyle\prod_{j=1}^{s}\Big{(}{\cal
F}^{2}-\big{(}2j-1\big{)}^{2}\big{(}{\cal Q}-(2j-3)(2j+1){\cal
S}^{2}\big{)}\Big{)}\psi_{(2s+1)}$ (2.36)
$\displaystyle=2^{2s}(2s)!\prod_{j=1}^{s}\Big{(}{\cal Q}-\tau_{(j,2s+1)}{\cal
S}^{2}\Big{)}\psi_{(2s+1)}~{}.$
It follows that ${\Pi}^{\perp}_{[2s+1]}$ acts like the identity on the space
of transverse fields
${\cal
D}_{(-2)}\psi_{(2s+1)}=0\quad\Longrightarrow\quad{\Pi}^{\perp}_{[2s+1]}\psi_{(2s+1)}=\psi_{(2s+1)}~{}.$
(2.37)
By making use of (B.3c), one can show that the operator
${\Pi}^{\perp}_{[2s+1]}$ maps $\phi_{(2s+1)}$ to a transverse field
${\cal
D}_{(-2)}{\Pi}^{\perp}_{[2s+1]}\phi_{(2s+1)}={\Pi}^{\perp}_{[2s+1]}{\cal
D}_{(-2)}\phi_{(2s+1)}\propto\mathcal{D}_{(2)}^{s}\mathcal{D}_{(-2)}^{s+1}\phi_{(2s+1)}=0~{}.$
(2.38)
Finally, using (2.37) in conjunction with (2.38), one can show that
${\Pi}^{\perp}_{[2s+1]}$ is idempotent
$\displaystyle{\Pi}^{\perp}_{[2s+1]}{\Pi}^{\perp}_{[2s+1]}\phi_{(2s+1)}={\Pi}^{\perp}_{[2s+1]}\phi_{(2s+1)}~{}.$
(2.39a)
Hence, ${\Pi}^{\perp}_{[2s+1]}$ satisfies (1.2), and can thus be classified as
a spin projector on AdS3.
In a similar fashion to the bosonic case, it may be shown that
$\widehat{\Pi}^{\perp}_{[2s+1]}$ and ${\Pi}^{\perp}_{[2s+1]}$ are equivalent
on ${\cal V}_{(2s+1)}$,
$\widehat{\Pi}^{\perp}_{[2s+1]}\phi_{(2s+1)}={\Pi}^{\perp}_{[2s+1]}\phi_{(2s+1)}~{}.$
(2.40)
Stepping away from ${\cal V}_{(2s+1)}$, one can show that for fixed $s$, the
projector $\Pi_{[2s+1]}^{\perp}$ annihilates any lower-rank field
$\phi_{(2s^{\prime}+1)}\in\mathcal{V}_{(2s^{\prime}+1)}$
$\Pi^{\perp}_{[2s+1]}\phi_{(2s^{\prime}+1)}=0~{},\qquad 1\leq s^{\prime}\leq
s-1~{}.$ (2.41)
The two operators $\widehat{\Pi}_{[2s+1]}^{\perp}$ and $\Pi_{[2s+1]}^{\perp}$
are not equivalent on $\mathcal{V}_{(2s^{\prime}+1)}$. We remark that
$\Pi^{\perp}_{[2s+1]}$ does not annihilate lower-rank bosonic fields
$\phi_{\alpha(2s^{\prime}+2)}$.
It follows from (2.35) that the poles of $\Pi^{\perp}_{[2s+1]}$ correspond to
the partially massless values $\tau_{(j,2s+1)}$ defined by (2.14).
An important property of the projectors (2.21) and (2.35) is that they are
symmetric operators, that is
$\displaystyle\int\text{d}^{3}x\,e\,\psi^{\alpha(n)}\Pi^{\perp}_{[n]}\phi_{\alpha(n)}=\int\text{d}^{3}x\,e\,\phi^{\alpha(n)}\Pi^{\perp}_{[n]}\psi_{\alpha(n)}~{},\qquad
e^{-1}:=\text{det}(e_{a}{}^{m})~{},$ (2.42)
for arbitrary well-behaved fields $\psi_{\alpha(n)}$ and $\phi_{\alpha(n)}$.
### 2.3 Helicity projectors
As previously mentioned, given a rank-$n$ field $\phi_{\alpha(n)}$ satisfying
the mass-shell equation (2.8), its projection
$\Pi_{[n]}^{\perp}\phi_{\alpha(n)}$ furnishes the reducible representation
${\mathfrak{D}}(\rho,-\frac{n}{2})\oplus{\mathfrak{D}}(\rho,\frac{n}{2})$. In
particular, representations with both signs of helicity $\pm\frac{n}{2}$
appear in this decomposition.
In order to isolate the component of $\phi_{\alpha(n)}$ describing an
irreducible representation of $\mathfrak{so}(2,2)$, it is necessary to split
the spin projection operators $\Pi_{[n]}^{\perp}$ according to
$\displaystyle\Pi_{[n]}^{\perp}=\mathbb{P}^{(+)}_{[n]}+\mathbb{P}^{(-)}_{[n]}~{}.$
(2.43)
Each of the helicity projectors $\mathbb{P}^{(\pm)}_{[n]}$ should satisfy
(1.2a) and (1.2b). In addition, they should project out the component of
$\phi_{\alpha(n)}$ carrying a single value of helicity. The last two
requirements are equivalent to the equations
$\displaystyle\mathcal{D}^{\beta(2)}\phi^{(\pm)}_{\beta(2)\alpha(n-2)}$
$\displaystyle=0~{},$ (2.44a)
$\displaystyle\big{(}\mathcal{F}\mp\rho\big{)}\phi^{(\pm)}_{\alpha(n)}$
$\displaystyle=0~{},$ (2.44b)
where we have denoted
$\phi^{(\pm)}_{\alpha(n)}:=\mathbb{P}^{(\pm)}_{[n]}\phi_{\alpha(n)}$. It
follows that $\phi^{(\pm)}_{\alpha(n)}$ furnishes the irreducible
representation ${\mathfrak{D}}(\rho,\pm\frac{n}{2})$.
It is not difficult to show that the following operators satisfy these
requirements
$\mathbb{P}^{(\pm)}_{[n]}:=\frac{1}{2}\bigg{(}\mathds{1}\pm\frac{\mathcal{F}}{n\sqrt{\mathcal{Q}-(n+2)(n-2)\mathcal{S}^{2}}}\bigg{)}{\Pi}^{\perp}_{[n]}~{}.$
(2.45)
Here ${\Pi}^{\perp}_{[n]}$ are the spin projectors written in terms of the Casimir
operators, given by (2.21) and (2.35) in the bosonic and fermionic cases
respectively. Of course, on ${\cal V}_{(n)}$, one could instead represent them
in their alternative forms (2.19) and (2.33).
Using the defining features of ${\Pi}^{\perp}_{[n]}$, it can be shown that the
operators $\mathbb{P}^{(+)}_{[n]}$ and $\mathbb{P}^{(-)}_{[n]}$ are orthogonal
projectors when restricted to $\mathcal{V}_{(n)}$:
$\mathbb{P}^{(\pm)}_{[n]}\mathbb{P}^{(\pm)}_{[n]}=\mathbb{P}^{(\pm)}_{[n]}~{},\qquad\mathbb{P}^{(\pm)}_{[n]}\mathbb{P}^{(\mp)}_{[n]}=0~{}.$
(2.46)
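As a quick consistency check (not spelled out in the text), note that on the image of ${\Pi}^{\perp}_{[n]}$ one has $\mathcal{F}^{2}=n^{2}\big{(}\mathcal{Q}-(n+2)(n-2)\mathcal{S}^{2}\big{)}$, in accordance with (2.47) below. Squaring (2.45) then yields
$\mathbb{P}^{(\pm)}_{[n]}\mathbb{P}^{(\pm)}_{[n]}=\frac{1}{4}\bigg{(}\mathds{1}\pm\frac{2\mathcal{F}}{n\sqrt{\mathcal{Q}-(n+2)(n-2)\mathcal{S}^{2}}}+\frac{\mathcal{F}^{2}}{n^{2}\big{(}\mathcal{Q}-(n+2)(n-2)\mathcal{S}^{2}\big{)}}\bigg{)}{\Pi}^{\perp}_{[n]}=\mathbb{P}^{(\pm)}_{[n]}~{},$
while in the mixed products the $\mathds{1}$ and $\mathcal{F}^{2}$ contributions cancel against each other, so that $\mathbb{P}^{(\pm)}_{[n]}\mathbb{P}^{(\mp)}_{[n]}=0$.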
It is also clear that (2.45) projects onto the transverse subspace of
$\mathcal{V}_{(n)}$; it inherits this property from ${\Pi}^{\perp}_{[n]}$.
Moreover, the off-shell field $\phi^{(\pm)}_{\alpha(n)}$ satisfies the constraint
$\Big{(}\mathcal{F}\mp
n\sqrt{\mathcal{Q}-(n-2)(n+2)\mathcal{S}^{2}}\Big{)}\phi^{(\pm)}_{\alpha(n)}=0~{}.$
(2.47)
If $\phi^{(\pm)}_{\alpha(n)}$ is on the mass-shell, eq. (2.8), then (2.47)
reduces to (2.44b).
### 2.4 Longitudinal projectors and lower-spin extractors
In this section we study the operator $\Pi^{\parallel}_{[n]}$, which is the
complement of $\Pi^{\perp}_{[n]}$,
$\Pi^{\parallel}_{[n]}:=\mathds{1}-\Pi^{\perp}_{[n]}~{}.$ (2.48)
By construction, the two operators $\Pi^{\perp}_{[n]}$ and
$\Pi^{\parallel}_{[n]}$ resolve the identity,
$\mathds{1}=\Pi^{\parallel}_{[n]}+\Pi^{\perp}_{[n]}$, and form an orthogonal
set of projectors
$\displaystyle\Pi^{\perp}_{[n]}\Pi^{\perp}_{[n]}$
$\displaystyle=\Pi^{\perp}_{[n]}~{},\qquad\Pi^{\parallel}_{[n]}\Pi^{\parallel}_{[n]}=\Pi^{\parallel}_{[n]}~{},$
(2.49a) $\displaystyle\Pi^{\parallel}_{[n]}\Pi^{\perp}_{[n]}$
$\displaystyle=0~{},\qquad~{}~{}~{}\phantom{.}\Pi^{\perp}_{[n]}\Pi^{\parallel}_{[n]}=0~{}.$
(2.49b)
Moreover, it can be shown that $\Pi^{\parallel}_{[n]}$ projects a field
$\phi_{\alpha(n)}$ onto its longitudinal component. A rank-$n$ field
$\psi_{\alpha(n)}$ is said to be longitudinal if there exists a rank-$(n-2)$
field $\psi_{\alpha(n-2)}$ such that $\psi_{\alpha(n)}$ may be expressed as
$\psi_{\alpha(n)}=\mathcal{D}_{\alpha(2)}\psi_{\alpha(n-2)}$. Such fields are
also sometimes referred to as being pure gauge. Therefore, we find that
$\displaystyle\phi^{\parallel}_{\alpha(n)}:=\Pi^{\parallel}_{[n]}\phi_{\alpha(n)}=\mathcal{D}_{\alpha(2)}\phi_{\alpha(n-2)}~{},$
(2.50)
for some unconstrained field $\phi_{\alpha(n-2)}$. For $\phi_{\alpha(n)}$ off-
shell, $\phi_{\alpha(n-2)}$ will be non-local in general. For example, in the
case of a vector field $\phi_{a}$, we have
$\phi^{\parallel}_{a}=\mathcal{D}_{a}\phi$ where
$\phi=\frac{1}{\mathcal{Q}}\mathcal{D}^{a}\phi_{a}$.
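In the flat-space limit, and for the vector case just described, this transverse/longitudinal split is the familiar momentum-space decomposition of a vector field. As a sanity sketch (our own check, not part of the derivation), the projector algebra can be verified symbolically with sympy, using $\eta=\mathrm{diag}(1,-1,-1)$:

```python
import sympy as sp

# Momentum p^a and Minkowski metric eta_ab = diag(1, -1, -1)
p0, p1, p2 = sp.symbols('p0 p1 p2')
p = sp.Matrix([p0, p1, p2])           # p^a (index up)
eta = sp.diag(1, -1, -1)
p_dn = eta * p                        # p_a (index down)
psq = (p_dn.T * p)[0, 0]              # p^a p_a

# Mixed-index projectors: (P_par)^a_b = p^a p_b / p^2,  P_perp = 1 - P_par
P_par = (p * p_dn.T) / psq
P_perp = sp.eye(3) - P_par

zero = sp.zeros(3, 3)
simp = lambda m: m.applyfunc(sp.cancel)   # entries are rational in p

assert simp(P_par * P_par - P_par) == zero        # idempotent
assert simp(P_perp * P_perp - P_perp) == zero     # idempotent
assert simp(P_par * P_perp) == zero               # orthogonal
assert P_par + P_perp == sp.eye(3)                # resolve the identity

# Transversality: p_a (P_perp phi)^a = 0 for an arbitrary vector phi^a
phi = sp.Matrix(sp.symbols('f0 f1 f2'))
assert sp.cancel((p_dn.T * P_perp * phi)[0, 0]) == 0
```

The same algebraic structure, with $p_a p_b/p^2$ replaced by the appropriate differential operators, underlies the curved-space decomposition (2.51).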
Using the fact that $\Pi^{\perp}_{[n]}$ and $\Pi^{\parallel}_{[n]}$ resolve
the identity, one can decompose an arbitrary field $\phi_{\alpha(n)}$ as
follows
$\phi_{\alpha(n)}=\phi^{\perp}_{\alpha(n)}+{\cal
D}_{\alpha(2)}\phi_{\alpha(n-2)}~{}.$ (2.51)
Here $\phi^{\perp}_{\alpha(n)}$ is transverse and $\phi_{\alpha(n-2)}$ is
unconstrained. Repeating this process iteratively, we obtain the following
decomposition
$\displaystyle\phi_{\alpha(n)}$ $\displaystyle=$
$\displaystyle\sum_{j=0}^{\lfloor n/2\rfloor}\big{(}{\cal
D}_{\alpha(2)}\big{)}^{j}\phi^{\perp}_{\alpha(n-2j)}~{}.$ (2.52)
Here each of the fields $\phi^{\perp}_{\alpha(n-2j)}$ is transverse, except
of course $\phi^{\perp}$ and $\phi^{\perp}_{\alpha}$. We note that, using
(2.43), one may take the decomposition (2.52) a step further and split each
term into irreducible components which are transverse and carry positive or
negative helicity,
$\displaystyle\phi_{\alpha(n)}=\sum_{j=0}^{\lfloor n/2\rfloor}\big{(}{\cal
D}_{\alpha(2)}\big{)}^{j}\Big{(}\phi^{(+)}_{\alpha(n-2j)}+\phi^{(-)}_{\alpha(n-2j)}\Big{)}~{}.$
(2.53)
Making use of the projectors (2.21) and (2.35) and their corresponding
properties, one can construct operators which extract the component
$\phi_{\alpha(n-2j)}^{\perp}$ from the decomposition (2.52), where $1\leq
j\leq\lfloor n/2\rfloor$. In particular, we find that the spin
$\frac{1}{2}(n-2j)$ component may be extracted via
$\displaystyle\phi_{\alpha(n)}\mapsto\phi^{\perp}_{\alpha(n-2j)}=\big{(}\mathbb{S}_{[n-2j]}^{\perp}\phi\big{)}_{\alpha(n-2j)}\equiv\mathbb{S}_{\alpha(n-2j)}^{\perp}(\phi)~{},$
(2.54)
where we have defined
$\displaystyle\mathbb{S}_{\alpha(n-2j)}^{\perp}(\phi)$
$\displaystyle=\frac{(-1)^{j}}{2^{2j}}\binom{n}{j}\prod_{k=1}^{j}\big{(}{\cal
Q}-\tau_{(k,n-2j+2k)}{\cal S}^{2}\big{)}^{-1}\Pi^{\perp}_{[n-2j]}\big{(}{\cal
D}^{\beta(2)}\big{)}^{j}\phi_{\alpha(n-2j)\beta(2j)}~{}.$ (2.55)
From this expression, it is clear that
$\mathbb{S}_{\alpha(n-2j)}^{\perp}(\phi)$ is transverse,
$0={\cal D}^{\beta(2)}\mathbb{S}_{\beta(2)\alpha(n-2j-2)}^{\perp}(\phi)~{}.$
(2.56)
Therefore it is appropriate to call $\mathbb{S}_{[n-2j]}^{\perp}$ the
transverse spin $\frac{1}{2}(n-2j)$ extractor. It is not a projector, since it
is dimensionful and reduces the rank of the field on which it acts.
Let $\psi_{\alpha(n)}$ be some longitudinal field,
$\psi_{\alpha(n)}=\mathcal{D}_{\alpha(2)}\zeta_{\alpha(n-2)}$; we do not
assume it to be in the image of $\Pi^{\parallel}_{[n]}$. However, since
$\Pi_{[n]}^{\perp}$ commutes with $\mathcal{D}_{\alpha(2)}$ and annihilates
all lower-rank fields, eq. (2.29), it follows that it also annihilates any
rank-$n$ longitudinal field (this also implies that
$\widehat{\Pi}^{\perp}_{[n]}\psi_{\alpha(n)}=0$, since
$\widehat{\Pi}^{\perp}_{[n]}$ and $\Pi^{\perp}_{[n]}$ coincide on
$\mathcal{V}_{(n)}$):
$\displaystyle\psi_{\alpha(n)}=\mathcal{D}_{\alpha(2)}\zeta_{\alpha(n-2)}\qquad\implies\qquad\Pi^{\perp}_{[n]}\psi_{\alpha(n)}=0~{}.$
(2.57)
As a consequence, given two integers $m,n$ satisfying $2\leq m\leq n$, it
immediately follows that $\Pi^{\parallel}_{[n]}$ acts as the identity operator
on the space of rank-$m$ longitudinal fields $\psi_{\alpha(m)}$,
$\displaystyle\psi_{\alpha(m)}=\mathcal{D}_{\alpha(2)}\psi_{\alpha(m-2)}\qquad\implies\qquad\Pi^{\parallel}_{[m+2s]}\psi_{\alpha(m)}=\psi_{\alpha(m)}~{},$
(2.58)
with $s$ a non-negative integer. These properties will be useful in section
2.5.
Decompositions similar to (2.51) are well known in the literature (usually
they are stated without derivation) and are used in the framework of path-
integral quantisation; see e.g. [33]. Making use of the projectors allows one
to reconstruct $\phi^{\perp}_{\alpha(n)}$ and $\phi_{\alpha(n-2)}$ from
$\phi_{\alpha(n)}$. Quite often such decompositions are given in vector
notation in terms of a symmetric field $\varphi_{a_{1}\dots
a_{s}}=\varphi_{(a_{1}\dots a_{s})}$ subject to the double traceless
constraint $\varphi_{a_{1}\dots a_{s-4}bc}{}^{bc}=0$ (Fronsdal’s field [34]).
The decomposition in AdS3 reads [33]
$\displaystyle\varphi_{a_{1}\dots a_{s}}=\varphi^{\rm TT}_{a_{1}\dots
a_{s}}+\eta_{(a_{1}a_{2}}\widetilde{\varphi}_{a_{3}\dots a_{s})}+{\cal
D}_{(a_{1}}\zeta_{a_{2}\dots a_{s})}~{},\qquad{\cal D}^{b}\varphi^{\rm
TT}_{ba_{1}\dots a_{s-1}}=0~{},$ (2.59)
where $\varphi^{\rm TT}_{a_{1}\dots a_{s}}$, $\widetilde{\varphi}_{a_{1}\dots
a_{s-2}}$ and $\zeta_{a_{1}\dots a_{s-1}}$ are symmetric and traceless. This
decomposition for a symmetric second-rank tensor field,
$\varphi_{ab}=\varphi_{ba}$, in a curved four-dimensional space was introduced
long ago [35, 36, 37, 38]. In this paper we consider only symmetric traceless
fields $\varphi_{a_{1}\dots a_{s}}$ satisfying the constraint
$\varphi_{a_{1}\dots a_{s-2}b}{}^{b}=0$. In this case,
$\widetilde{\varphi}_{a_{1}\dots a_{s-2}}$ in the decomposition (2.59) is
given by
$\displaystyle\widetilde{\varphi}_{a_{1}\dots a_{s-2}}=-\frac{s-1}{2s-1}{\cal
D}^{b}\zeta_{a_{1}\dots a_{s-2}b}~{}.$ (2.60)
### 2.5 Linearised higher-spin Cotton tensors
Further applications of spin projection operators can be found in modern
conformal higher-spin theories. In particular, we will show that the spin
projectors can be used to obtain new realisations of the linearised higher-
spin Cotton tensors, which were recently derived in [22]. For integer $n\geq
2$, the higher-spin bosonic and fermionic Cotton tensors
$\mathfrak{C}_{\alpha(n)}(h)$ take the respective closed forms
$\displaystyle\mathfrak{C}_{\alpha(2s)}(h)=\frac{1}{2^{2s-1}}\sum_{j=0}^{s-1}2^{2j+1}\binom{s+j}{2j+1}\prod_{t=1}^{j}\Big{(}{\cal Q}-\tau_{(s-t,2s)}{\cal S}^{2}\Big{)}{\cal D}_{\alpha(2)}^{s-j-1}{\cal D}_{\alpha}{}^{\beta}\big{(}{\cal D}^{\beta(2)}\big{)}^{s-j-1}h_{\alpha(2j+1)\beta(2s-2j-1)}~{},$ (2.61a)
$\displaystyle\mathfrak{C}_{\alpha(2s+1)}(h)=\frac{1}{2^{2s}}\sum_{j=0}^{s}2^{2j}\binom{s+j}{2j}\frac{(2s+1)}{(2j+1)}\prod_{t=1}^{j}\Big{(}{\cal Q}-\tau_{(s-t+1,2s+1)}{\cal S}^{2}\Big{)}{\cal D}_{\alpha(2)}^{s-j}\big{(}{\cal D}^{\beta(2)}\big{)}^{s-j}h_{\alpha(2j+1)\beta(2s-2j)}~{}.$ (2.61b)
The Cotton tensors are primary descendents of the conformal gauge field
$h_{\alpha(n)}$, which is a real field defined modulo gauge transformations of
the form
$\delta_{\zeta}h_{\alpha(n)}={\cal D}_{\alpha(2)}\zeta_{\alpha(n-2)}~{},$
(2.62)
for some real unconstrained gauge parameter $\zeta_{\alpha(n-2)}$. The Cotton
tensors (2.61) are characterised by the properties:
1. 1.
$\mathfrak{C}_{\alpha(n)}(h)$ is transverse
${\cal D}^{\beta\gamma}\mathfrak{C}_{\beta\gamma\alpha(n-2)}(h)=0~{}.$ (2.63a)
2. 2.
$\mathfrak{C}_{\alpha(n)}(h)$ is gauge-invariant
$\mathfrak{C}_{\alpha(n)}(\delta_{\zeta}h)=0~{}.$ (2.63b)
Making use of the bosonic (2.19) and fermionic (2.33) spin projectors
$\widehat{\Pi}^{\perp}_{[n]}$, we see that the higher-spin Cotton tensors
(2.61) can be recast into the simple form:
$\displaystyle\mathfrak{C}_{\alpha(2s)}(h)$ $\displaystyle=$
$\displaystyle\frac{1}{2s}\prod_{t=1}^{s-1}\big{(}\mathcal{Q}-\tau_{(t,2s)}\mathcal{S}^{2}\big{)}\mathcal{F}\widehat{\Pi}^{\perp}_{[2s]}h_{\alpha(2s)}~{},$
(2.64a) $\displaystyle\mathfrak{C}_{\alpha(2s+1)}(h)$ $\displaystyle=$
$\displaystyle\prod_{t=1}^{s}\big{(}\mathcal{Q}-\tau_{(t,2s+1)}\mathcal{S}^{2}\big{)}\widehat{\Pi}^{\perp}_{[2s+1]}h_{\alpha(2s+1)}~{}.$
(2.64b)
The identity ${\cal F}{\cal D}_{(-2)}^{s}\phi_{\alpha(2s)}=0$ proves useful in
deriving (2.64a). In the flat-space limit, ${\cal S}\rightarrow 0$, (2.64)
coincides with the closed-form expressions of $\mathfrak{C}_{\alpha(n)}(h)$
given in [39, 40] (these can be shown to be equivalent to the Cotton tensors
derived in [41, 42]). Moreover, we can make use of the equivalent family
of projectors $\Pi^{\perp}_{[n]}$ to recast $\mathfrak{C}_{\alpha(n)}(h)$
purely in terms of the quadratic Casimir operators (2.3). Explicitly, they
read
$\displaystyle\mathfrak{C}_{\alpha(2s)}(h)$ $\displaystyle=$
$\displaystyle\frac{{\cal F}}{2^{2s-1}(2s-1)!}\prod_{j=1}^{s-1}\Big{(}{\cal
F}^{2}-4j^{2}\big{(}{\cal Q}-4(j-1)(j+1){\cal
S}^{2}\big{)}\Big{)}h_{\alpha(2s)}~{},$ (2.65a)
$\displaystyle\mathfrak{C}_{\alpha(2s+1)}(h)$ $\displaystyle=$
$\displaystyle\frac{1}{2^{2s}(2s)!}\prod_{j=0}^{s-1}\Big{(}{\cal
F}^{2}-(2j+1)^{2}\big{(}{\cal Q}-(2j-1)(2j+3){\cal
S}^{2}\big{)}\Big{)}h_{\alpha(2s+1)}~{}.$ (2.65b)
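To illustrate, truncating the products in (2.65) (with the convention that empty products equal unity) gives the lowest cases; these low-spin expressions are our instantiations of the general formulas rather than statements from the text:
$\mathfrak{C}_{\alpha(2)}(h)=\frac{1}{2}{\cal F}h_{\alpha(2)}~{},\qquad\mathfrak{C}_{\alpha(3)}(h)=\frac{1}{8}\big{(}{\cal F}^{2}-{\cal Q}-3{\cal S}^{2}\big{)}h_{\alpha(3)}~{},\qquad\mathfrak{C}_{\alpha(4)}(h)=\frac{1}{48}{\cal F}\big{(}{\cal F}^{2}-4{\cal Q}\big{)}h_{\alpha(4)}~{},$
so that the $n=2$ (spin-1) Cotton tensor is first order in derivatives, as expected in three dimensions.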
There are many advantages to expressing the Cotton tensors in terms of spin
projection operators. Firstly, in both (2.64) and (2.65), the properties of
(i) transversality (2.63a) and (ii) gauge invariance (2.63b) are manifest, as
a consequence of the projector properties (1.2b) and (2.57) respectively.
Using this gauge freedom, one may impose the transverse gauge condition on
$h_{\alpha(n)}$,
$\displaystyle h_{\alpha(n)}\equiv h^{\rm T}_{\alpha(n)}~{},\qquad
0=\mathcal{D}^{\beta(2)}h^{\rm T}_{\beta(2)\alpha(n-2)}~{}.$ (2.66)
On account of (1.2c), in this gauge the Cotton tensors become manifestly
factorised into products of second-order differential operators involving all
of the partial masses,
$\displaystyle\mathfrak{C}_{\alpha(2s)}(h^{\rm T})$ $\displaystyle=$
$\displaystyle\frac{1}{2s}\prod_{t=1}^{s-1}\big{(}\mathcal{Q}-\tau_{(t,2s)}\mathcal{S}^{2}\big{)}\mathcal{F}h^{\rm
T}_{\alpha(2s)}~{},$ (2.67a) $\displaystyle\mathfrak{C}_{\alpha(2s+1)}(h^{\rm
T})$ $\displaystyle=$
$\displaystyle\prod_{t=1}^{s}\big{(}\mathcal{Q}-\tau_{(t,2s+1)}\mathcal{S}^{2}\big{)}h^{\rm
T}_{\alpha(2s+1)}~{}.$ (2.67b)
This property was observed in [22] without the use of projectors. An
interesting feature of the new realisation (2.65), which was not observed in
[22], is that the Cotton tensors are manifestly factorised in terms of second-
order differential operators without having to enter the transverse gauge.
By virtue of the above observations, it follows that the conformal higher-spin
action [43, 44]
$\displaystyle S_{\text{CHS}}^{(n)}[h]=\frac{\text{i}^{n}}{2^{\lceil
n/2\rceil+1}}\int\text{d}^{3}x\,e\,h^{\alpha(n)}\mathfrak{C}_{\alpha(n)}(h)$
(2.68)
is manifestly gauge invariant and factorised when
$\mathfrak{C}_{\alpha(n)}(h)$ is expressed as in (2.65).
Analogous factorised expressions can be given for the so-called new
topologically massive (NTM) models. For bosonic fields they were first
introduced in [45] in Minkowski space. Extensions of these models to fields
with half-integer spin were proposed in [43], where their generalisations to
an AdS background were also given. These models are formulated solely in terms
of the gauge prepotentials $h_{\alpha(n)}$ and the associated Cotton tensors
$\mathfrak{C}_{\alpha(n)}(h)$. Given an integer $n\geq 2$, the gauge-invariant
NTM action for the field $h_{\alpha(n)}$ given in [43] is
$\displaystyle S_{\text{NTM}}^{(n)}[h]=\frac{\text{i}^{n}}{2^{\lceil
n/2\rceil+1}}\frac{1}{\rho}\int\text{d}^{3}x\,e\,h^{\alpha(n)}\big{(}\mathcal{F}-\sigma\rho\big{)}\mathfrak{C}_{\alpha(n)}(h)~{},$
(2.69)
where $\rho$ is some positive mass parameter and $\sigma:=\pm 1$. Making use
of the representation (2.65) leads to a manifestly gauge invariant and
factorised form for the action (2.69). The equation of motion obtained by
varying (2.69) with respect to the field $h^{\alpha(n)}$ is
$\displaystyle
0=\big{(}\mathcal{F}-\sigma\rho\big{)}\mathfrak{C}_{\alpha(n)}(h)~{}.$ (2.70)
By analysing (2.70), it can be shown that, on-shell, the action (2.69)
describes a propagating mode with pseudo-mass $\rho$, spin $n/2$ and helicity
$\sigma n/2$, provided that $\rho\neq\rho_{(t,2s)}$. For the case $\rho=\rho_{(t,2s)}$,
the model describes only pure gauge degrees of freedom.
Recently, a new variant of the NTM model for bosonic fields in
$\mathbb{M}^{3}$ was proposed in [21]. This model also does not require
auxiliary fields, but is of order $2s-1$ in derivatives, whereas those given
in [45] are of order $2s$. Given an integer $s\geq 1$, the actions of [21] may
be readily extended to AdS3 as follows
$\displaystyle\widetilde{S}_{\text{NTM}}^{(2s)}[h]=\int{\rm
d}^{3}x\,e\,h^{\alpha(2s)}\big{(}{\cal
F}-\sigma\rho\big{)}\mathfrak{W}_{\alpha(2s)}(h)~{},$ (2.71)
where $\rho$ is a positive mass parameter, $\sigma:=\pm 1$, and
$\mathfrak{W}_{\alpha(2s)}(h)$ is the field strength,
$\mathfrak{W}_{\alpha(2s)}(h):=\prod_{t=1}^{s-1}\big{(}\mathcal{Q}-\tau_{(t,2s)}\mathcal{S}^{2}\big{)}{\Pi}^{\perp}_{[2s]}h_{\alpha(2s)}~{}.$
(2.72)
Due to the properties of $\Pi^{\perp}_{[2s]}$, the action (2.71) is manifestly
gauge invariant and factorised. The descendent $\mathfrak{W}_{\alpha(2s)}(h)$
may be obtained from $\mathfrak{C}_{\alpha(2s)}(h)$ by stripping off
$\mathcal{F}$:
$\displaystyle\mathfrak{C}_{\alpha(2s)}(h)=\frac{1}{2s}\mathcal{F}\mathfrak{W}_{\alpha(2s)}(h)~{}.$
(2.73)
A similar construction does not appear to be possible in the fermionic case.
The equation of motion obtained by varying (2.71) with respect to the field
$h^{\alpha(2s)}$ is
$\displaystyle 0=({\cal F}-\sigma\rho)\mathfrak{W}_{\alpha(2s)}(h)~{}.$ (2.74)
By analysing (2.74), it can be shown that on-shell, the model (2.71) has the
same particle content as the NTM model (2.69).
### 2.6 Results in Minkowski space
In this section we study the flat-space limit of various results derived in
section 2. Of particular interest are the transverse projectors, which are
constructed in terms of the Casimir operators of $\mathfrak{so}(2,2)$. In this
limit we obtain novel realisations of the transverse projectors on
$\mathbb{M}^{3}$ which did not appear in [8, 18]. They are expressed in terms
of the quadratic Casimir operators of the three-dimensional Poincaré algebra
$\mathfrak{iso}(2,1)$,
$\displaystyle\Box$
$\displaystyle:=\partial^{a}\partial_{a}=-\frac{1}{2}\partial^{\alpha\beta}\partial_{\alpha\beta}~{},$
(2.75a) $\displaystyle\mathcal{W}$
$\displaystyle:=\partial^{\alpha\beta}M_{\alpha\beta}~{},$
$\displaystyle[{\cal W},\partial_{\alpha\beta}]=0~{}.$ (2.75b)
Here $\partial_{\alpha\beta}$ are the partial derivatives of $\mathbb{M}^{3}$
and $\mathcal{W}$ is the Pauli-Lubanski pseudo-scalar. We recall that an
irreducible representation of $\mathfrak{iso}(2,1)$ with mass $\rho$ and
helicity $\sigma n/2$ may be realised on the space of totally symmetric
rank-$n$ spinor fields $\phi_{\alpha(n)}$ satisfying the differential
equations
$\displaystyle\partial^{\beta(2)}\phi_{\beta(2)\alpha(n-2)}$
$\displaystyle=0~{},$ (2.76a) $\displaystyle\big{(}\mathcal{W}-\sigma
n\rho\big{)}\phi_{\alpha(n)}$ $\displaystyle=0~{},$ (2.76b)
where $\sigma=\pm 1$. These equations are equivalent to those given in [26,
27]. We are concerned only with representations carrying (half-)integer spin.
By taking the limit ${\cal S}\rightarrow 0$ of the corresponding AdS3
expressions given above, one may obtain the following results in Minkowski
space:
* •
The bosonic (2.21) and fermionic (2.35) spin projection operators reduce to
$\displaystyle{\cal P}^{\perp}_{[2s]}$ $\displaystyle=$
$\displaystyle\frac{1}{2^{2s-1}(2s)!\Box^{s}}\prod_{j=0}^{s-1}\Big{(}{\cal
W}^{2}-(2j)^{2}\Box\Big{)}~{},$ (2.77a) $\displaystyle{\cal
P}^{\perp}_{[2s+1]}$ $\displaystyle=$
$\displaystyle\frac{1}{2^{2s}(2s)!\Box^{s}}\prod_{j=0}^{s-1}\Big{(}{\cal
W}^{2}-(2j+1)^{2}\Box\Big{)}~{}.$ (2.77b)
* •
The orthogonal helicity projectors (2.45) reduce to
$\mathds{P}^{(\pm)}_{[n]}=\frac{1}{2}\bigg{(}\mathds{1}\pm\frac{{\cal
W}}{n\sqrt{\Box}}\bigg{)}{\cal P}^{\perp}_{[n]}~{}.$ (2.78)
From (2.47) it follows that the field
$\phi_{\alpha(n)}^{(\pm)}:=\mathds{P}^{(\pm)}_{[n]}\phi_{\alpha(n)}$ satisfies
$\big{(}{\cal W}\mp n\sqrt{\Box}\big{)}\phi_{\alpha(n)}^{(\pm)}=0~{}.$ (2.79)
For a $\phi_{\alpha(n)}$ lying on the mass shell,
$\big{(}\Box-\rho^{2}\big{)}\phi_{\alpha(n)}=0$, this reduces to (2.76b).
* •
The transverse spin $\frac{1}{2}(n-2j)$ extractors (2.55), where $1\leq
j\leq\lfloor n/2\rfloor$, are given by
$\displaystyle\mathds{S}_{\alpha(n-2j)}^{\perp}(\phi)$
$\displaystyle=\frac{(-1)^{j}}{2^{2j}}\binom{n}{j}\frac{1}{\Box^{j}}{\cal
P}^{\perp}_{[n-2j]}\big{(}\partial^{\beta(2)}\big{)}^{j}\phi_{\alpha(n-2j)\beta(2j)}~{}.$
(2.80)
* •
The new realisations for the higher-spin Cotton tensors (2.65) become
$\displaystyle{\cal C}_{\alpha(2s)}(h)$ $\displaystyle=$
$\displaystyle\frac{{\cal W}}{2^{2s-1}(2s-1)!}\prod_{j=1}^{s-1}\Big{(}{\cal
W}^{2}-(2j)^{2}\Box\Big{)}h_{\alpha(2s)}~{},$ (2.81a) $\displaystyle{\cal
C}_{\alpha(2s+1)}(h)$ $\displaystyle=$
$\displaystyle\frac{1}{2^{2s}(2s)!}\prod_{j=0}^{s-1}\Big{(}{\cal
W}^{2}-(2j+1)^{2}\Box\Big{)}h_{\alpha(2s+1)}~{}.$ (2.81b)
It may be shown that each of these expressions is equivalent to the
corresponding one given in [18], except for the lower-spin extractors, which
were not discussed there.
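The overall normalisations in (2.77) can be understood as follows: on an on-shell field (2.76) one has ${\cal W}\rightarrow\sigma n\rho$ and $\Box\rightarrow\rho^{2}$, so the operator products evaluate to exactly $2^{2s-1}(2s)!\,\rho^{2s}$ and $2^{2s}(2s)!\,\rho^{2s}$, and ${\cal P}^{\perp}_{[n]}$ acts as the identity there. A short script (our own check, not from the text) confirms the two underlying product identities:

```python
from math import factorial, prod

def bosonic_product(s):
    # Eigenvalue of the operator product in (2.77a) on an on-shell field
    # (2.76) with n = 2s: there W -> 2*s*rho and Box -> rho^2 (set rho = 1).
    return prod((2*s)**2 - (2*j)**2 for j in range(s))

def fermionic_product(s):
    # Same for (2.77b) with n = 2s + 1: W -> (2s+1)*rho, Box -> rho^2.
    return prod((2*s + 1)**2 - (2*j + 1)**2 for j in range(s))

# The normalisations 2^(2s-1)(2s)! and 2^(2s)(2s)! in (2.77) are exactly
# these products, so P_perp acts as the identity on on-shell fields.
for s in range(1, 8):
    assert bosonic_product(s) == 2**(2*s - 1) * factorial(2*s)
    assert fermionic_product(s) == 2**(2*s) * factorial(2*s)
```

The same cancellation is what makes (2.79) reduce to the flat-space equation (2.76b) on the mass shell.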
## 3 Transverse superprojectors in AdS3|2
In this section, we derive the superprojectors in ${\cal N}=1$ AdS superspace,
AdS3|2, and explore several of their applications. We remind the reader that
AdS3|2 is the maximally supersymmetric solution of three-dimensional ${\cal
N}=1$ AdS supergravity [14].
We begin by reviewing the geometric structure of AdS3|2, as presented in [46],
which is described in terms of its covariant derivatives (we use the same
notation for the vector covariant derivative in AdS3 and in AdS3|2, in the
hope that no confusion arises)
${\cal D}_{A}=({\cal D}_{a},{\cal
D}_{\alpha})=E_{A}{}^{M}{\partial}_{M}+\frac{1}{2}\Omega_{A}{}^{bc}M_{bc}~{}.$
(3.1)
Here $E_{A}{}^{M}$ is the inverse supervielbein and $\Omega_{A}{}^{bc}$ the
Lorentz connection. The covariant derivatives obey the following
(anti-)commutation relations (in vector notation, the relations (3.2b) take
the form $[{\cal D}_{a},{\cal D}_{\beta}]={\cal
S}(\gamma_{a})_{\beta}{}^{\gamma}{\cal D}_{\gamma}$ and $[{\cal D}_{a},{\cal
D}_{b}]=-4{\cal S}^{2}M_{ab}$):
$\{{\cal D}_{\alpha},{\cal D}_{\beta}\}=2{\rm i}{\cal D}_{\alpha\beta}-4{\rm i}{\cal S}M_{\alpha\beta}~{},$ (3.2a)
$[{\cal D}_{\alpha\beta},{\cal D}_{\gamma}]=-2{\cal S}\varepsilon_{\gamma(\alpha}{\cal D}_{\beta)}~{},\qquad[{\cal D}_{\alpha\beta},{\cal D}_{\gamma\delta}]=4{\cal S}^{2}\Big{(}\varepsilon_{\gamma(\alpha}M_{\beta)\delta}+\varepsilon_{\delta(\alpha}M_{\beta)\gamma}\Big{)}~{},$
(3.2b)
where ${\cal S}\neq 0$ is a real constant parameter which determines the
curvature of AdS3|2.
We list several identities which prove indispensable for calculations:
$\displaystyle{\cal D}_{\alpha}{\cal D}_{\beta}$ $\displaystyle=$
$\displaystyle{\rm i}{\cal D}_{\alpha\beta}-2{\rm i}{\cal
S}M_{\alpha\beta}+\frac{1}{2}\varepsilon_{\alpha\beta}{\cal D}^{2}~{},$ (3.3a)
$\displaystyle{\cal D}^{\beta}{\cal D}_{\alpha}{\cal D}_{\beta}$
$\displaystyle=$ $\displaystyle 4{\rm i}{\cal S}{\cal
D}_{\alpha}~{},\quad\{{\cal D}^{2},{\cal D}_{\alpha}\}=4{\rm i}{\cal S}{\cal
D}_{\alpha}~{},$ (3.3b) $\displaystyle{\cal D}^{2}{\cal D}_{\alpha}$
$\displaystyle=$ $\displaystyle 2{\rm i}{\cal S}{\cal D}_{\alpha}+2{\rm
i}{\cal D}_{\alpha\beta}{\cal D}^{\beta}-4{\rm i}{\cal S}{\cal
D}^{\beta}M_{\alpha\beta}~{},$ (3.3c) $\displaystyle\qquad\ [{\cal
D}_{\alpha}{\cal D}_{\beta},{\cal D}^{2}]$ $\displaystyle=$ $\displaystyle
0\quad\Longrightarrow\quad\ [{\cal D}_{\alpha\beta},{\cal D}^{2}]=0~{},$
(3.3d)
where we have denoted ${\cal D}^{2}={\cal D}^{\alpha}{\cal D}_{\alpha}$. These
relations can be derived from the algebra of covariant derivatives (3.2).
Crucial to our analysis are two independent Casimir operators of the ${\cal
N}=1$ AdS3 isometry supergroup
$\text{OSp}(1|2;\mathbb{R})\times\text{SL}(2,\mathbb{R})$. They are [22, 43]
$\displaystyle\mathbb{Q}:=-\frac{1}{4}{\cal D}^{2}{\cal D}^{2}+{\rm i}{\cal S}{\cal D}^{2}~{},\qquad[\mathbb{Q},{\cal D}_{A}]=0~{},$ (3.4a)
$\displaystyle\mathbb{F}:=-\frac{{\rm i}}{2}{\cal D}^{2}+2{\cal D}^{\alpha\beta}M_{\alpha\beta}~{},\qquad[\mathbb{F},{\cal D}_{A}]=0~{}.$ (3.4b)
Making use of the identity
$\displaystyle-\frac{1}{4}{\cal D}^{2}{\cal D}^{2}$ $\displaystyle=$
$\displaystyle\Box-2{\rm i}{\cal S}{\cal D}^{2}+2{\cal S}{\cal
D}^{\alpha\beta}M_{\alpha\beta}-2{\cal
S}^{2}M^{\alpha\beta}M_{\alpha\beta}~{},$ (3.5)
allows us to express $\mathbb{Q}$ in terms of the d’Alembert operator
$\Box={\cal D}^{a}{\cal D}_{a}$. The operators $\mathbb{Q}$ and $\mathbb{F}$
are related to each other as follows
$\displaystyle\mathbb{F}^{2}\Phi_{\alpha(n)}$ $\displaystyle=$
$\displaystyle\Big{(}(2n+1)^{2}\mathbb{Q}+(2n+1)(2n^{2}+2n-1){\rm i}{\cal
S}{\cal D}^{2}+4n^{2}(n+2)^{2}{\cal S}^{2}\Big{)}\Phi_{\alpha(n)}$ (3.6)
$\displaystyle+4(2n^{2}+n-2){\rm i}{\cal S}{\cal D}_{\alpha}{\cal
D}^{\beta}\Phi_{\beta\alpha(n-1)}-4{\rm i}n{\cal D}_{\alpha\beta}{\cal
D}^{\beta}{\cal D}^{\gamma}\Phi_{\gamma\alpha(n-1)}~{}$
$\displaystyle+4n(n-1){\cal D}_{\alpha(2)}{\cal
D}^{\beta(2)}\Phi_{\beta(2)\alpha(n-2)}~{},$
for an arbitrary symmetric rank-$n$ spinor superfield $\Phi_{\alpha(n)}$.
### 3.1 On-shell superfields
We begin by reviewing aspects of on-shell superfields in AdS3|2, as presented
in [22]. Given an integer $n\geq 1$, the real symmetric superfield
$\Phi_{\alpha(n)}$ is said to be on-shell if it satisfies the two constraints
$\displaystyle 0$ $\displaystyle=$ $\displaystyle{\cal
D}^{\beta}\Phi_{\beta\alpha(n-1)}~{},$ (3.7a) $\displaystyle 0$
$\displaystyle=$ $\displaystyle\big{(}\mathbb{F}-\sigma
M\big{)}\Phi_{\alpha(n)}~{},$ (3.7b)
where $\sigma:=\pm 1$ and $M\geq 0$ is a real parameter of unit mass
dimension. Such a field furnishes an irreducible representation of the ${\cal
N}=1$ AdS3 superalgebra
$\mathfrak{osp}(1|2;{\mathbb{R}})\oplus\mathfrak{sl}(2,{\mathbb{R}})$, which
we denote as $\mathfrak{S}(M,\sigma\frac{n}{2})$. It can be shown that the
representation $\mathfrak{S}(M,\sigma\frac{n}{2})$ decomposes into two
irreducible representations of $\mathfrak{so}(2,2)$,
$\mathfrak{S}\Big{(}M,\sigma\frac{n}{2}\Big{)}=\mathfrak{D}\Big{(}\rho_{A},\sigma_{A}\frac{n}{2}\Big{)}\oplus\mathfrak{D}\Big{(}\rho_{B},\sigma_{B}\frac{n+1}{2}\Big{)}~{}.$
(3.8)
Here, the pseudo-masses are given by
$\displaystyle\rho_{A}=\frac{n}{2n+1}\Big{|}\sigma M-(n+2){\cal
S}\Big{|}~{},\qquad\rho_{B}=\frac{n+1}{2n+1}\Big{|}\sigma M+(n-1){\cal
S}\Big{|}~{},$ (3.9)
and the corresponding signs of the superhelicities are
$\displaystyle\sigma_{A}$ $\displaystyle=$ $\displaystyle\frac{\sigma
M-(n+2){\cal S}}{\big{|}\sigma M-(n+2){\cal
S}\big{|}}~{},\qquad\sigma_{B}=\frac{\sigma M+(n-1){\cal S}}{\big{|}\sigma
M+(n-1){\cal S}\big{|}}~{}.$ (3.10)
The representation $\mathfrak{S}(M,\sigma\frac{n}{2})$ is unitary if the
parameter $M$ obeys the unitarity bound $M\geq 2(n-1)(n+1){\cal S}$. This
bound ensures that both representations appearing in the decomposition (3.8)
are unitary.
A superfield satisfying the first condition (3.7a) is said to be transverse.
Any transverse superfield may be shown to satisfy the following relation
$-\frac{{\rm i}}{2}{\cal D}^{2}\Phi_{\alpha(n)}={\cal
D}_{(\alpha_{1}}{}^{\beta}\Phi_{\alpha_{2}...\alpha_{n})\beta}+(n+2){\cal
S}\Phi_{\alpha(n)}~{}.$ (3.11)
If a transverse superfield also satisfies (3.7b), we say that it carries
pseudo-mass $M$, superspin $n/2$ and superhelicity
$\frac{1}{2}(n+\frac{1}{2})\sigma$. From (3.11) it follows that an on-shell
superfield (3.7) satisfies
$-\frac{{\rm i}}{2}{\cal D}^{2}\Phi_{\alpha(n)}=\frac{1}{2n+1}\Big{(}\sigma
M+2n(n+2){\cal S}\Big{)}\Phi_{\alpha(n)}~{},$ (3.12)
and hence the second-order mass-shell equation
$\displaystyle 0=\big{(}{\mathbb{Q}}-\lambda^{2}\big{)}\Phi_{\alpha(n)}~{},$ (3.13a)
$\displaystyle\lambda^{2}:=\frac{1}{(2n+1)^{2}}\big{[}\sigma M+2n(n+2){\cal S}\big{]}\big{[}\sigma M+2(n-1)(n+1){\cal S}\big{]}~{}.$ (3.13b)
The equations (3.7a) and (3.12) were introduced in [47]. On the other hand,
one may instead consider a superfield $\Phi_{\alpha(n)}$ satisfying (3.7a) and
(3.13a). In this case, using the identity (3.6), one can show that (3.13a)
becomes
$\displaystyle 0=\Big{(}\mathbb{F}-\sigma_{(-)}|M_{(-)}|\Big{)}\Big{(}\mathbb{F}-\sigma_{(+)}|M_{(+)}|\Big{)}~{},$ (3.14)
where we have defined $\sigma_{(\pm)}=\text{sgn}(M_{(\pm)})$ and
$\displaystyle M_{(\pm)}:=-(2n^{2}+2n-1){\cal S}\pm(2n+1)\sqrt{\lambda^{2}+{\cal S}^{2}}~{}.$ (3.15)
It follows that such a field furnishes the reducible representation
$\mathfrak{S}\Big{(}|M_{(-)}|,\sigma_{(-)}\frac{n}{2}\Big{)}\oplus\mathfrak{S}\Big{(}|M_{(+)}|,\sigma_{(+)}\frac{n}{2}\Big{)}~{}.$
(3.16)
In AdS3|2 there exist two distinct types of on-shell partially massless
superfields [22], which are distinguished by the sign $\sigma$ of their
superhelicity. More specifically, they are described by an on-shell superfield
(3.7) whose pseudo-mass and parameter $\sigma$ assume the special combinations
$\displaystyle\sigma=+1~{},\qquad M\equiv M^{(+)}_{(t,n)}=2\big{[}n(n-2t+1)-(t-1)\big{]}{\cal S}~{},\qquad 1\leq t\leq\lfloor n/2\rfloor~{},$ (3.17a)
$\displaystyle\sigma=-1~{},\qquad M\equiv M^{(-)}_{(t,n)}=2\big{[}n(n-2t)-(t+1)\big{]}{\cal S}~{},\qquad 0\leq t\leq\lceil n/2\rceil-1~{}.$ (3.17b)
The integer $t$ is called the (super)depth and the corresponding
supermultiplets are denoted by $\Phi^{(t,+)}_{\alpha(n)}$ and
$\Phi^{(t,-)}_{\alpha(n)}$ respectively. Their second order equations (3.13)
take the form
$0=\big{(}{\mathbb{Q}}-\lambda_{(t,n)}^{(+)}{\cal S}^{2}\big{)}\Phi_{\alpha(n)}^{(t,+)}~{},\quad 0=\big{(}{\mathbb{Q}}-\lambda_{(t,n)}^{(-)}{\cal S}^{2}\big{)}\Phi_{\alpha(n)}^{(t,-)}~{},$ (3.18)
where we have introduced the partially massless values
$\lambda_{(t,n)}^{(+)}=4(n-t)(n-t+1)~{},\qquad\lambda_{(t,n)}^{(-)}=4t(t+1)~{}.$
(3.19)
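For orientation, the lowest nontrivial case $n=2$ admits exactly one partially massless point for each sign of the superhelicity (a worked specialisation of (3.17) and (3.19), added here for illustration):

```latex
% n = 2 specialisation of (3.17) and (3.19)
\sigma=+1:\quad t=1\,,\quad M^{(+)}_{(1,2)}=4\,{\cal S}\,,\quad
\lambda^{(+)}_{(1,2)}=8
\quad\Longrightarrow\quad 0=\big(\mathbb{Q}-8\,{\cal S}^{2}\big)\Phi^{(1,+)}_{\alpha(2)}\,,
\\[4pt]
\sigma=-1:\quad t=0\,,\quad M^{(-)}_{(0,2)}=6\,{\cal S}\,,\quad
\lambda^{(-)}_{(0,2)}=0
\quad\Longrightarrow\quad 0=\mathbb{Q}\,\Phi^{(0,-)}_{\alpha(2)}\,.
```

The $t=0$, $\sigma=-1$ case reproduces the expected massless-type second-order equation.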
The gauge symmetry associated with positive and negative superhelicity
partially massless superfields of depth-$t$ is
$\displaystyle\delta_{\Lambda}\Phi^{(t,+)}_{\alpha(n)}=\big{(}\mathcal{D}_{\alpha(2)}\big{)}^{t}\Lambda_{\alpha(n-2t)}~{},\qquad 1\leq t\leq\lfloor n/2\rfloor~{},$ (3.20a)
$\displaystyle\delta_{\Lambda}\Phi^{(t,-)}_{\alpha(n)}={\rm i}^{n}\big{(}\mathcal{D}_{\alpha(2)}\big{)}^{t}\mathcal{D}_{\alpha}\Lambda_{\alpha(n-2t-1)}~{},\qquad 0\leq t\leq\lceil n/2\rceil-1~{}.$ (3.20b)
In particular, the system of equations (3.7) and (3.17) is invariant under
these transformations for an on-shell real gauge parameter.
### 3.2 Superspin projection operators
We wish to find supersymmetric generalisations of the spin projection
operators in AdS3 which were computed in section 2. More precisely, let us
denote by $\mathds{V}_{(n)}$ the space of totally symmetric rank-$n$
superfields $\Phi_{\alpha(n)}$ on AdS3|2. For any integer $n\geq 1$, we define the rank-$n$ superspin projection operator $\mbox{\boldmath$\Pi$}^{\perp}_{[n]}$ (its four-dimensional analogue was recently given in [19]) to act on
$\mathds{V}_{(n)}$ by the rule
$\displaystyle\mbox{\boldmath$\Pi$}^{\perp}_{[n]}:\mathds{V}_{(n)}\longrightarrow\mathds{V}_{(n)}~{},\qquad\Phi_{\alpha(n)}\longmapsto\mbox{\boldmath$\Pi$}^{\perp}_{[n]}\Phi_{\alpha(n)}~{}=:\Phi^{\perp}_{\alpha(n)}~{},$
(3.21)
which satisfies the following properties:
1.
$\mbox{\boldmath$\Pi$}^{\perp}_{[n]}$ is idempotent,
$\mbox{\boldmath$\Pi$}^{\perp}_{[n]}\mbox{\boldmath$\Pi$}^{\perp}_{[n]}=\mbox{\boldmath$\Pi$}^{\perp}_{[n]}~{}.$
(3.22a)
2.
$\mbox{\boldmath$\Pi$}^{\perp}_{[n]}$ maps $\Phi_{\alpha(n)}$ to a transverse
superfield,
${\cal D}^{\beta}\Phi^{\perp}_{\beta\alpha(n-1)}=0~{}.$ (3.22b)
3.
Every transverse superfield $\Psi_{\alpha(n)}$ belongs to the image of
$\mbox{\boldmath$\Pi$}^{\perp}_{[n]}$,
$\mathcal{D}^{\beta}\Psi_{\beta\alpha(n-1)}=0~{}\quad\implies\quad\mbox{\boldmath$\Pi$}^{\perp}_{[n]}\Psi_{\alpha(n)}=\Psi_{\alpha(n)}~{}.$
(3.22c)
In other words, the superprojector $\mbox{\boldmath$\Pi$}^{\perp}_{[n]}$ maps
$\Phi_{\alpha(n)}$ to a supermultiplet with the properties of a conserved
supercurrent.
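The three defining properties (3.22) mirror those of the familiar bosonic transverse projector. As a simple numerical sanity check of this projector structure (our flat-space, spin-1 momentum-space illustration, not the superprojector itself), one can verify idempotence, transversality and the identity property directly:

```python
import numpy as np

# Flat 4d Minkowski metric, signature (-,+,+,+)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# A generic non-null momentum p^a
p = np.array([2.0, 0.3, -1.1, 0.7])
p_dn = eta @ p                 # p_a
p2 = p @ eta @ p               # p^2, nonzero off the light cone

# Mixed-index spin-1 projector  Pi^a_b = delta^a_b - p^a p_b / p^2
Pi = np.eye(4) - np.outer(p, p_dn) / p2

# 1. idempotence: Pi Pi = Pi
assert np.allclose(Pi @ Pi, Pi)

# 2. transversality: p_a Pi^a_b = 0
assert np.allclose(p_dn @ Pi, 0.0)

# 3. identity on transverse vectors: p_a v^a = 0 implies Pi v = v
v = np.array([0.0, 1.0, 0.0, 0.0])
v = v - (p_dn @ v) / p2 * p    # make v transverse to p
assert np.allclose(p_dn @ v, 0.0)
assert np.allclose(Pi @ v, v)
print("spin-1 projector satisfies all three defining properties")
```

The superspace construction below realises the same three properties with $\mathcal{D}^{\beta}$ playing the role of $p_{a}$.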
To obtain a superprojector, we introduce the operator
$\Delta^{\alpha}{}_{\beta}$ [43]
$\Delta^{\alpha}{}_{\beta}:=-\frac{{\rm i}}{2}{\cal D}^{\alpha}{\cal D}_{\beta}-2{\cal S}\delta^{\alpha}{}_{\beta}~{},\quad{\cal D}^{\beta}\Delta^{\alpha}{}_{\beta}=\Delta^{\alpha}{}_{\beta}{\cal D}_{\alpha}=0~{},$ (3.23)
and its corresponding extensions [22]
$\Delta^{\alpha}_{[j]}{}_{\beta}:=-\frac{{\rm i}}{2}{\cal D}^{\alpha}{\cal D}_{\beta}-2j{\cal S}\delta^{\alpha}{}_{\beta}~{}.$ (3.24)
Note that for the case $j=1$, (3.24) coincides with (3.23). It can be shown
that the operator (3.24) has the following properties
$\displaystyle[\Delta^{\alpha_{1}}_{[j]}{}_{\beta_{1}},\Delta^{\alpha_{2}}_{[k]}{}_{\beta_{2}}]=\varepsilon_{\beta_{1}\beta_{2}}{\cal S}\big{(}{\cal D}^{\alpha(2)}-{\cal S}M^{\alpha(2)}\big{)}-\varepsilon^{\alpha_{1}\alpha_{2}}{\cal S}\big{(}{\cal D}_{\beta(2)}-{\cal S}M_{\beta(2)}\big{)}~{},$ (3.25a)
$\displaystyle\varepsilon^{\beta_{1}\beta_{2}}\Delta^{\alpha_{1}}_{[j]}{}_{\beta_{1}}\Delta^{\alpha_{2}}_{[j+1]}{}_{\beta_{2}}=-j\varepsilon^{\alpha_{1}\alpha_{2}}{\cal S}\big{(}{\rm i}{\cal D}^{2}+4(j+1){\cal S}^{2}\big{)}~{},$ (3.25b)
$\displaystyle\varepsilon_{\alpha_{1}\alpha_{2}}\Delta^{\alpha_{1}}_{[j+1]}{}_{\beta_{1}}\Delta^{\alpha_{2}}_{[j]}{}_{\beta_{2}}=j\varepsilon_{\beta_{1}\beta_{2}}{\cal S}\big{(}{\rm i}{\cal D}^{2}+4(j+1){\cal S}^{2}\big{)}~{},$ (3.25c)
$\displaystyle\Delta^{\beta}_{[j]}{}_{\alpha}\Delta^{\gamma}_{[k]}{}_{\beta}=-\frac{{\rm i}}{2}{\cal D}^{2}\Delta^{\gamma}_{[1]}{}_{\alpha}+(j+k-1){\rm i}{\cal S}{\cal D}^{\gamma}{\cal D}_{\alpha}+4jk{\cal S}^{2}\delta_{\alpha}{}^{\gamma}~{},$ (3.25d)
$\displaystyle[\Delta^{\alpha}_{[j]}{}_{\beta},{\cal D}^{2}]=0~{},$ (3.25e)
for arbitrary integers $j$ and $k$.
Let us define the operator $\mathbb{T}_{[n]}$, which acts on
$\mathds{V}_{(n)}$ by the rule
$\mathbb{T}_{[n]}\Phi_{\alpha(n)}\equiv\mathbb{T}_{\alpha(n)}(\Phi)=\Delta^{\beta_{1}}_{[1]}{}_{(\alpha_{1}}\Delta^{\beta_{2}}_{[2]}{}_{\alpha_{2}}\cdots\Delta^{\beta_{n}}_{[n]}{}_{\alpha_{n})}\Phi_{\beta(n)}~{}.$
(3.26)
This operator maps $\Phi_{\alpha(n)}$ to a transverse superfield
${\cal D}^{\beta}\mathbb{T}_{\beta\alpha(n-1)}(\Phi)=0~{}.$ (3.27)
To see this, one needs to open the symmetrisation in (3.26)
$\displaystyle{\cal D}^{\beta}\mathbb{T}_{\beta\alpha(n-1)}(\Phi)={\cal D}^{\gamma}\Delta^{\beta_{1}}_{[1]}{}_{(\gamma}\Delta^{\beta_{2}}_{[2]}{}_{\alpha_{1}}\cdots\Delta^{\beta_{n}}_{[n]}{}_{\alpha_{n-1})}\Phi_{\beta(n)}~{}$ (3.28)
$\displaystyle\propto{\cal D}^{\gamma}\big{(}\Delta^{\beta_{1}}_{[1]}{}_{\gamma}\Delta^{\beta_{2}}_{[2]}{}_{\alpha_{1}}\cdots\Delta^{\beta_{n}}_{[n]}{}_{\alpha_{n-1}}+(n!-1)~{}\text{permutations}\big{)}\Phi_{\beta(n)}~{}.$
By making use of (3.25b), it can be shown that the remaining $(n!-1)$ terms
can be expressed in the same form as the first. Then transversality follows
immediately as a consequence of property (3.23). However, $\mathbb{T}_{[n]}$
does not square to itself on $\mathds{V}_{(n)}$
$\displaystyle\mathbb{T}_{[n]}\mathbb{T}_{[n]}\Phi_{\alpha(n)}=\frac{1}{(2n+1)^{n}}\prod_{t=0}^{\lceil n/2\rceil-1}\big{(}\mathbb{F}+M^{(-)}_{(t,n)}\big{)}\prod_{t=1}^{\lfloor n/2\rfloor}\big{(}\mathbb{F}-M^{(+)}_{(t,n)}\big{)}\mathbb{T}_{[n]}\Phi_{\alpha(n)}~{},$ (3.29)
where $M^{(\pm)}_{(t,n)}$ denotes the pseudo-masses associated with a
partially massless superfield (3.17). We can immediately introduce the
dimensionless operator
$\displaystyle\mbox{\boldmath$\Pi$}^{\perp}_{[n]}\Phi_{\alpha(n)}:=(2n+1)^{n}\bigg{[}\prod_{t=0}^{\lceil n/2\rceil-1}\big{(}\mathbb{F}+M^{(-)}_{(t,n)}\big{)}\prod_{t=1}^{\lfloor n/2\rfloor}\big{(}\mathbb{F}-M^{(+)}_{(t,n)}\big{)}\bigg{]}^{-1}\mathbb{T}_{[n]}\Phi_{\alpha(n)}~{},$ (3.30)
which is idempotent and transverse by construction. In addition, it can be
shown that the operator $\mbox{\boldmath$\Pi$}^{\perp}_{[n]}$ acts as the
identity on the space of transverse superfields (3.22c). Hence,
$\mbox{\boldmath$\Pi$}^{\perp}_{[n]}$ satisfies properties (3.22) and can be
identified as a rank-$n$ superprojector on AdS3|2.
An alternative form of the superprojector
$\mbox{\boldmath$\Pi$}^{\perp}_{[n]}$ can be derived, which instead makes
contact with the Casimir operator $\mathbb{Q}$. Let us introduce the
dimensionless operator
$\displaystyle\widehat{\mbox{\boldmath$\Pi$}}{}^{\perp}_{[n]}\Phi_{\alpha(n)}=\bigg{[}\prod_{t=0}^{n-1}\big{(}\mathbb{Q}+{\rm i}t{\cal S}{\cal D}^{2}\big{)}\bigg{]}^{-1}\widehat{\Delta}^{\beta_{1}}_{[1]}{}_{(\alpha_{1}}\widehat{\Delta}^{\beta_{2}}_{[2]}{}_{\alpha_{2}}\cdots\widehat{\Delta}^{\beta_{n}}_{[n]}{}_{\alpha_{n})}\Phi_{\beta(n)}~{},$ (3.31)
where the operator $\widehat{\Delta}^{\beta}_{[j]}{}_{\alpha}$ is defined as
$\widehat{\Delta}^{\beta}_{[j]}{}_{\alpha}:=-\frac{{\rm i}}{2}{\cal D}^{2}{\Delta}^{\beta}_{[j]}{}_{\alpha}~{}.$ (3.32)
In the flat superspace limit,
$\widehat{\mbox{\boldmath$\Pi$}}{}^{\perp}_{[n]}$ coincides with the
superprojector derived in [17]. Making use of the properties of
$\mbox{\boldmath$\Pi$}^{\perp}_{[n]}$ and the identity
$-\frac{{\rm i}}{2}{\cal D}^{2}\Psi_{\alpha(n)}=\frac{1}{2n+1}\big{(}\mathbb{F}+2n(n+2){\cal S}\big{)}\Psi_{\alpha(n)}~{},$ (3.33)
where $\Psi_{\alpha(n)}$ is an arbitrary transverse superfield, it can be
shown that $\widehat{\mbox{\boldmath$\Pi$}}{}^{\perp}_{[n]}\Phi_{\alpha(n)}$
satisfies properties (3.22) and is also a superprojector on AdS3|2. Using a proof analogous to the one employed in section 2.2 to show the coincidence of the two bosonic projectors, it can be shown that ${\mbox{\boldmath$\Pi$}}^{\perp}_{[n]}$ and $\widehat{\mbox{\boldmath$\Pi$}}{}^{\perp}_{[n]}$ are indeed equivalent.
So far, we have been unable to obtain an expression for ${\mbox{\boldmath$\Pi$}}^{\perp}_{[n]}$ which is purely in terms of the Casimir operators $\mathbb{F}$ and $\mathbb{Q}$.
We recall that in the non-supersymmetric case, one starts with a field
$\phi_{\alpha(n)}$ lying on the mass-shell (2.9b) and its projection
$\Pi^{\perp}_{[n]}\phi_{\alpha(n)}$ furnishes the reducible representation
(2.11). A single irreducible representation from the decomposition (2.11) can
be singled out via application of the helicity projectors (2.45). The
significance of the condition (2.9b) is that it allows one to resolve the
poles in both types of projectors.
In the supersymmetric case, the equation analogous to (2.9b) which
$\Phi_{\alpha(n)}$ should satisfy is (3.13a). Upon application of
$\mbox{\boldmath$\Pi$}^{\perp}_{[n]}$ on such a $\Phi_{\alpha(n)}$, one
obtains the reducible representation (3.16). However, it appears that the
imposition of (3.13a) does not allow one to resolve the poles of the
superprojector in either of the forms (3.30) or (3.31). Therefore, rather than imposing (3.13a), one must start with a superfield $\Phi_{\alpha(n)}$ obeying the first-order constraint (3.7b), which does allow for resolution of the poles. In this case, after application of $\mbox{\boldmath$\Pi$}^{\perp}_{[n]}$, the superfield $\Phi_{\alpha(n)}$ already corresponds to an irreducible representation with fixed superhelicity, which removes the need for superhelicity projectors. Thus, it suffices to
provide only the superspin projection operators
$\mbox{\boldmath$\Pi$}^{\perp}_{[n]}$.
### 3.3 Longitudinal projectors
For $n\geq 1$, let us define the orthogonal complement of $\mbox{\boldmath$\Pi$}^{\perp}_{[n]}$ acting on $\Phi_{\alpha(n)}$ by the rule
$\mbox{\boldmath$\Pi$}^{\parallel}_{[n]}\Phi_{\alpha(n)}=\big{(}\mathds{1}-\mbox{\boldmath$\Pi$}^{\perp}_{[n]}\big{)}\Phi_{\alpha(n)}~{}.$
(3.34)
By construction, the operators $\mbox{\boldmath$\Pi$}^{\perp}_{[n]}$ and
$\mbox{\boldmath$\Pi$}^{\parallel}_{[n]}$ resolve the identity,
$\mathds{1}=\mbox{\boldmath$\Pi$}^{\parallel}_{[n]}+\mbox{\boldmath$\Pi$}^{\perp}_{[n]}$,
and are orthogonal projectors
$\mbox{\boldmath$\Pi$}^{\perp}_{[n]}\mbox{\boldmath$\Pi$}^{\perp}_{[n]}=\mbox{\boldmath$\Pi$}^{\perp}_{[n]}~{},\qquad\mbox{\boldmath$\Pi$}^{\parallel}_{[n]}\mbox{\boldmath$\Pi$}^{\parallel}_{[n]}=\mbox{\boldmath$\Pi$}^{\parallel}_{[n]}~{},\qquad\mbox{\boldmath$\Pi$}^{\parallel}_{[n]}\mbox{\boldmath$\Pi$}^{\perp}_{[n]}=\mbox{\boldmath$\Pi$}^{\perp}_{[n]}\mbox{\boldmath$\Pi$}^{\parallel}_{[n]}=0~{}.$
(3.35)
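The algebra (3.35) can likewise be checked numerically on the flat-space spin-1 analogue (again our momentum-space illustration, not the superspace operators themselves):

```python
import numpy as np

# Flat momentum-space spin-1 analogue of the pair (Pi_perp, Pi_par)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
p = np.array([2.0, 0.3, -1.1, 0.7])           # generic non-null momentum p^a
p_dn, p2 = eta @ p, p @ eta @ p               # p_a and p^2

Pi_perp = np.eye(4) - np.outer(p, p_dn) / p2  # transverse projector
Pi_par = np.eye(4) - Pi_perp                  # longitudinal complement

# Resolution of the identity and orthogonality, cf. (3.35)
assert np.allclose(Pi_perp + Pi_par, np.eye(4))
assert np.allclose(Pi_perp @ Pi_perp, Pi_perp)
assert np.allclose(Pi_par @ Pi_par, Pi_par)
assert np.allclose(Pi_perp @ Pi_par, 0.0)
assert np.allclose(Pi_par @ Pi_perp, 0.0)

# Pi_par extracts the pure-gradient (longitudinal) part: Pi_par v is along p^a
v = np.array([0.5, -1.0, 2.0, 0.1])
assert np.allclose(Pi_par @ v, (p_dn @ v) / p2 * p)
print("projector pair resolves the identity and is orthogonal")
```

In superspace, the role of the pure-gradient part is played by the longitudinal superfields $\Psi_{\alpha(n)}={\rm i}^{n}{\cal D}_{\alpha}\Psi_{\alpha(n-1)}$ discussed below.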
It can be shown that $\mbox{\boldmath$\Pi$}^{\parallel}_{[n]}$ extracts the
longitudinal component of a superfield $\Phi_{\alpha(n)}$. A rank-$n$
superfield $\Psi_{\alpha(n)}$ is said to be longitudinal if there exists a
rank-$(n-1)$ superfield $\Psi_{\alpha(n-1)}$ such that $\Psi_{\alpha(n)}$ can
be expressed as $\Psi_{\alpha(n)}={\rm i}^{n}{\cal
D}_{\alpha}\Psi_{\alpha(n-1)}$. Thus, we find
$\mbox{\boldmath$\Pi$}^{\parallel}_{[n]}\Phi_{\alpha(n)}={\rm i}^{n}{\cal D}_{\alpha}\Phi_{\alpha(n-1)}~{},$ (3.36)
for some unconstrained real superfield $\Phi_{\alpha(n-1)}$. In order to see
this, it proves beneficial to make use of the superprojector
$\widehat{\mbox{\boldmath$\Pi$}}{}^{\perp}_{[n]}$, and express the operator
$\widehat{\Delta}^{\beta}_{[j]}{}_{\alpha}$ in the form
$\widehat{\Delta}^{\beta}_{[j]}{}_{\alpha}:=-\frac{1}{4}{\cal D}_{\alpha}{\cal D}^{\beta}{\cal D}^{2}+\big{(}\mathbb{Q}+{\rm i}(j-1){\cal S}{\cal D}^{2}\big{)}\delta_{\alpha}{}^{\beta}~{}.$ (3.37)
Using the fact that $\mbox{\boldmath$\Pi$}^{\parallel}_{[n]}$ and
$\mbox{\boldmath$\Pi$}^{\perp}_{[n]}$ resolve the identity, it follows that
one can decompose any superfield $\Phi_{\alpha(n)}$ in the following manner
$\Phi_{\alpha(n)}=\Phi^{\perp}_{\alpha(n)}+{\rm i}^{n}{\cal D}_{\alpha}\Phi_{\alpha(n-1)}~{}.$ (3.38)
Here, $\Phi^{\perp}_{\alpha(n)}$ is transverse and $\Phi_{\alpha(n-1)}$ is
unconstrained. Repeating this prescription iteratively yields the
decomposition
$\displaystyle\Phi_{\alpha(n)}=\sum_{j=0}^{\lfloor n/2\rfloor}\big{(}{\cal D}_{\alpha(2)}\big{)}^{j}\Phi^{\perp}_{\alpha(n-2j)}+{\rm i}^{n}\sum_{j=0}^{\lceil n/2\rceil-1}\big{(}{\cal D}_{\alpha(2)}\big{)}^{j}{\cal D}_{\alpha}\Phi^{\perp}_{\alpha(n-2j-1)}~{}.$ (3.39a)
Here, the real superfields $\Phi^{\perp}_{\alpha(n-2j)}$ and $\Phi^{\perp}_{\alpha(n-2j-1)}$ are transverse, except for the rank-zero field $\Phi^{\perp}$, on which the transversality condition is vacuous.
It can be shown that the superprojector $\mbox{\boldmath$\Pi$}^{\perp}_{[n]}$
annihilates any longitudinal superfield. Indeed, let us consider the action of
$\mbox{\boldmath$\Pi$}^{\perp}_{[n]}$ on a superfield $\Psi_{\alpha(n)}={\rm
i}^{n}{\cal D}_{\alpha}\Lambda_{\alpha(n-1)}$. Opening the symmetrisation
present in $\mbox{\boldmath$\Pi$}^{\perp}_{[n]}$ gives
$\displaystyle\mbox{\boldmath$\Pi$}^{\perp}_{[n]}\Psi_{\alpha(n)}={\rm i}^{n}\Delta^{\beta_{1}}_{[1]}{}_{(\alpha_{1}}\Delta^{\beta_{2}}_{[2]}{}_{\alpha_{2}}\cdots\Delta^{\beta_{n}}_{[n]}{}_{\alpha_{n})}{\cal D}_{(\beta_{1}}\Lambda_{\beta_{2}...\beta_{n})}~{}$
$\displaystyle=\frac{{\rm i}^{n}}{n!}\Delta^{\beta_{1}}_{[n]}{}_{(\alpha_{1}}\Delta^{\beta_{2}}_{[n-1]}{}_{\alpha_{2}}\cdots\Delta^{\beta_{n}}_{[1]}{}_{\alpha_{n})}\big{(}{\cal D}_{\beta_{n}}\Lambda_{\beta_{1}...\beta_{n-1}}+(n!-1)~{}\text{permutations}\big{)}~{}.$ (3.40)
Note that we have made use of the identity (3.25a) to rearrange the operators
$\Delta^{\beta}_{[j]}{}_{\alpha}$. Making use of the relation (3.25c) allows
us to express the other $(n!-1)$ permutations in the same form as the first.
Then due to the property (3.23), it follows that
$\Psi_{\alpha(n)}={\rm i}^{n}{\cal
D}_{\alpha}\Lambda_{\alpha(n-1)}\qquad\implies\qquad\mbox{\boldmath$\Pi$}^{\perp}_{[n]}\Psi_{\alpha(n)}=0~{}.$
(3.41)
Consequently, the operator $\mbox{\boldmath$\Pi$}^{\parallel}_{[n]}$ acts as
unity on the space of rank-$n$ longitudinal superfields $\Psi_{\alpha(n)}$
$\Psi_{\alpha(n)}={\rm i}^{n}{\cal
D}_{\alpha}\Lambda_{\alpha(n-1)}\qquad\implies\qquad\mbox{\boldmath$\Pi$}^{\parallel}_{[n]}\Psi_{\alpha(n)}=\Psi_{\alpha(n)}~{}.$
(3.42)
### 3.4 Linearised higher-spin super-Cotton tensors
In this section, we make use of the rank-$n$ superprojector to study the
properties of superconformal higher-spin (SCHS) theories. In particular, we
will make use of $\mbox{\boldmath$\Pi$}^{\perp}_{[n]}$ to construct the
higher-spin super-Cotton tensors in AdS3|2, which were recently derived in
[22]. The super-Cotton tensors $\mathfrak{W}_{\alpha(n)}(H)$ were shown to
take the explicit form
$\mathfrak{W}_{\alpha(n)}(H)=\Delta^{\beta_{1}}_{[1]}{}_{(\alpha_{1}}\Delta^{\beta_{2}}_{[2]}{}_{\alpha_{2}}\cdots\Delta^{\beta_{n}}_{[n]}{}_{\alpha_{n})}H_{\beta(n)}~{},$
(3.43)
which is a real primary descendant of the SCHS superfield $H_{\alpha(n)}$. The latter is defined modulo gauge transformations of the form
$\delta_{\Lambda}H_{\alpha(n)}={\rm i}^{n}{\cal D}_{\alpha}\Lambda_{\alpha(n-1)}~{},$ (3.44)
where the gauge parameter $\Lambda_{\alpha(n-1)}$ is a real unconstrained
superfield. The super-Cotton tensor (3.43) satisfies the defining properties:
(i) it is transverse
${\cal D}^{\beta}\mathfrak{W}_{\beta\alpha(n-1)}(H)=0~{};$ (3.45a) and
(ii) it is invariant under the gauge transformations (3.44)
$\mathfrak{W}_{\alpha(n)}(\delta_{\Lambda}H)=0~{}.$ (3.45b)
The superprojectors (3.30) can be used to recast the super-Cotton tensors
(3.43) in the simple form
$\mathfrak{W}_{\alpha(n)}(H)=\frac{1}{(2n+1)^{n}}\prod_{t=0}^{\lceil n/2\rceil-1}\big{(}\mathbb{F}+M^{(-)}_{(t,n)}\big{)}\prod_{t=1}^{\lfloor n/2\rfloor}\big{(}\mathbb{F}-M^{(+)}_{(t,n)}\big{)}\mbox{\boldmath$\Pi$}^{\perp}_{[n]}H_{\alpha(n)}~{},$ (3.46)
where $M^{(\pm)}_{(t,n)}$ denotes the partial pseudo-masses (3.17). In the
flat superspace limit, ${\cal S}\rightarrow 0$, the super-Cotton tensor (3.46)
reduces to those given in [39, 48]. Expressing $\mathfrak{W}_{\alpha(n)}(H)$
in the form (3.46) is beneficial for the following reasons: (i) transversality
of $\mathfrak{W}_{\alpha(n)}(H)$ is manifest on account of property (3.27);
(ii) gauge invariance is also manifest as a consequence of (3.41); and (iii)
in the transverse gauge
$H_{\alpha(n)}\equiv H^{\text{T}}_{\alpha(n)}~{},\qquad{\cal D}^{\beta}H^{\text{T}}_{\beta\alpha(n-1)}=0~{},$ (3.47)
it follows from (3.22c) that $\mathfrak{W}_{\alpha(n)}(H)$ factorises as
follows
$\mathfrak{W}_{\alpha(n)}(H^{\text{T}})=\frac{1}{(2n+1)^{n}}\prod_{t=0}^{\lceil n/2\rceil-1}\big{(}\mathbb{F}+M^{(-)}_{(t,n)}\big{)}\prod_{t=1}^{\lfloor n/2\rfloor}\big{(}\mathbb{F}-M^{(+)}_{(t,n)}\big{)}H^{\text{T}}_{\alpha(n)}~{}.$ (3.48)
From the above observations, it follows that the action [43, 44] for the
superconformal higher-spin prepotential $H_{\alpha(n)}$
$\displaystyle\mathbb{S}^{(n)}_{\rm SCHS}[H]=-\frac{{\rm i}^{n}}{2^{\left\lfloor{n/2}\right\rfloor+1}}\int{\rm d}^{3}x\,{\rm d}^{2}\theta\,E\,H^{\alpha(n)}\mathfrak{W}_{\alpha(n)}(H)~{},\qquad E^{-1}={\rm Ber}(E_{A}{}^{M})~{},$ (3.49)
is manifestly gauge-invariant. In the transverse gauge (3.47), the kinetic
operator in (3.49) factorises into wave operators associated with partially
massless superfields of all depths, in accordance with (3.48).
## 4 Conclusion
Given a maximally symmetric spacetime, the unitary irreducible representations
of its isometry algebra may be realised on the space of tensor fields
satisfying certain differential constraints. The purpose of a spin projection
operator is to take an unconstrained field, which describes a multiplet of
irreducible representations, and return the component corresponding to the
irreducible representation with maximal spin. (In three dimensions, in order to single out an irreducible representation, one needs to bisect the spin projector into helicity projectors.) In this paper we have derived the
spin projection operators for fields of arbitrary rank on AdS3 space and their
extensions to $\mathcal{N}=1$ AdS superspace. We leave generalisations of our
results to the $(p,q)$ AdS superspaces [46] with ${\cal N}=p+q>1$ for future
work.
Making use of the (super)spin projection operators, we obtained new
representations for the linearised higher-spin (super)Cotton tensors and the
corresponding (super)conformal actions in AdS3. The significance of these new
realisations is that the following properties are each made manifest: (i)
gauge invariance; (ii) transversality; and (iii) factorisation. We also showed that the poles of the (super)projectors are intimately related to partially massless (super)fields. This property was first established in the case of
AdS4 (super)space in [20, 19], and appears to be a universal feature of the
(super)projectors. It would be interesting to verify this in the case of AdSd
with $d>4$.
As compared with previous approaches in AdS4 (super)space [20, 19], a novel
feature of the spin projectors derived here is that they are formulated
entirely in terms of Casimir operators of the AdS3 algebra. (We were not able to obtain expressions for the superspin projection operators in AdS3|2 which involve only Casimir operators.) Studying their zero curvature limit has
allowed us to obtain new realisations of the spin projection operators in $3d$
Minkowski space in terms of only the Pauli-Lubanski scalar and the momentum
squared operator. This idea may be straightforwardly applied to the case of
4$d$ Minkowski space to derive new realisations of the Behrends-Fronsdal
projectors.
In particular, let us define the square of the Pauli-Lubanski vector,
$\displaystyle\mathbb{W}^{2}=\mathbb{W}^{a}\mathbb{W}_{a}~{},\qquad\mathbb{W}_{a}:=-\frac{1}{2}\varepsilon_{abcd}M^{bc}\partial^{d}~{}.$
(4.1)
On the field $\phi_{\alpha(m){\dot{\alpha}}(n)}$ of Lorentz type
$(\frac{m}{2},\frac{n}{2})$, it may be shown that $\mathbb{W}^{2}$ assumes the
form (see, e.g. [49])
$\displaystyle\mathbb{W}^{2}\phi_{\alpha(m){\dot{\alpha}}(n)}=s(s+1)\Box\phi_{\alpha(m){\dot{\alpha}}(n)}+mn\partial_{\alpha{\dot{\alpha}}}\partial^{\beta{\dot{\beta}}}\phi_{\alpha(m-1)\beta{\dot{\alpha}}(n-1){\dot{\beta}}}~{},$
(4.2)
where we have defined $s:=\frac{1}{2}(m+n)$. On any transverse field
$\psi_{\alpha(m){\dot{\alpha}}(n)}$ this reduces to
$\big{(}\mathbb{W}^{2}-s(s+1)\Box\big{)}\psi_{\alpha(m){\dot{\alpha}}(n)}=0$.
It is possible to express the Behrends-Fronsdal spin projection operators
$\Pi^{\perp}_{(m,n)}$ solely in terms of the Casimir operators
$\mathbb{W}^{2}$ and $\Box$ of the $4d$ Poincaré algebra as follows (these expressions may be easily converted to vector or four-component notation):
$\displaystyle\Pi^{\perp}_{(m,n)}\phi_{\alpha(m){\dot{\alpha}}(n)}=\frac{m!}{(m+n)!n!}\frac{1}{\Box^{n}}\prod_{j=0}^{n-1}\Big{(}\mathbb{W}^{2}-(s-j)(s-j-1)\Box\Big{)}\phi_{\alpha(m){\dot{\alpha}}(n)}$ (4.3a)
$\displaystyle=\frac{n!}{(m+n)!m!}\frac{1}{\Box^{m}}\prod_{j=0}^{m-1}\Big{(}\mathbb{W}^{2}-(s-j)(s-j-1)\Box\Big{)}\phi_{\alpha(m){\dot{\alpha}}(n)}~{}.$ (4.3b)
The operators $\Pi^{\perp}_{(m,n)}$ satisfy the four-dimensional analogues of the properties (1.2).
In a similar fashion, it should be possible to obtain new realisations for the
AdS4 spin projection operators of [20] in terms of the Casimir operators of
the algebra $\mathfrak{so}(3,2)$. In this case, $\Box$ should be replaced with
the quadratic Casimir operator
$\displaystyle\mathbb{Q}:=\Box_{\text{AdS}}-\mathcal{S}^{2}\big{(}M^{2}+\bar{M}^{2}\big{)}~{},\qquad
M^{2}:=M^{\alpha\beta}M_{\alpha\beta}~{},\quad\bar{M}^{2}:=\bar{M}^{{\dot{\alpha}}{\dot{\beta}}}\bar{M}_{{\dot{\alpha}}{\dot{\beta}}}~{}.$
(4.4)
Finally, the role of $\mathbb{W}^{2}$ will be played by the quartic Casimir operator $\mathbb{W}^{2}_{\text{AdS}}$. Here we use the convention $\big{[}\mathcal{D}_{\alpha{\dot{\alpha}}},\mathcal{D}_{\beta{\dot{\beta}}}\big{]}=-2\mathcal{S}^{2}\big{(}\varepsilon_{\alpha\beta}\bar{M}_{{\dot{\alpha}}{\dot{\beta}}}+\varepsilon_{{\dot{\alpha}}{\dot{\beta}}}M_{\alpha\beta}\big{)}$, where $\mathcal{S}^{2}$ is related to the AdS4 scalar curvature via $R=-12\mathcal{S}^{2}$.
$\displaystyle\mathbb{W}^{2}_{\text{AdS}}:=-\frac{1}{2}\big{(}\mathbb{Q}+2\mathcal{S}^{2}\big{)}\big{(}M^{2}+\bar{M}^{2}\big{)}+\mathcal{D}^{\alpha{\dot{\alpha}}}\mathcal{D}^{\beta{\dot{\beta}}}M_{\alpha\beta}\bar{M}_{{\dot{\alpha}}{\dot{\beta}}}-\frac{1}{4}\mathcal{S}^{2}\big{(}M^{2}M^{2}+\bar{M}^{2}\bar{M}^{2}+6M^{2}\bar{M}^{2}\big{)}~{}.$ (4.5)
Both operators commute with the AdS4 covariant derivative
$\big{[}\mathbb{Q},\mathcal{D}_{\alpha{\dot{\alpha}}}\big{]}=\big{[}\mathbb{W}^{2}_{\text{AdS}},\mathcal{D}_{\alpha{\dot{\alpha}}}\big{]}=0$.
Note added in proof:
When $m=n=s$, the spin projection operator (4.3) takes the form
$\displaystyle\Pi^{\perp}_{(s,s)}\equiv\Pi^{\perp}_{(s)}=\frac{1}{\Box^{s}(2s)!}\prod_{j=0}^{s-1}\Big{(}\mathbb{W}^{2}-j(j+1)\Box\Big{)}~{}.$
(4.6)
In this case, it may be shown that $\Pi^{\perp}_{(s)}$ annihilates any field
$\phi_{\alpha(s^{\prime}){\dot{\alpha}}(s^{\prime})}$ of lower rank:
$\displaystyle\Pi_{(s)}^{\perp}\phi_{\alpha(s^{\prime}){\dot{\alpha}}(s^{\prime})}=0~{},\qquad
s^{\prime}<s~{}.$ (4.7)
Let us comment on the implications of (4.7) for fields with vectorial indices.
Consider a field $\mbox{\boldmath$h$}_{a_{1}\dots a_{s}}$ which is totally
symmetric in its vector indices and has a non-zero trace
$\displaystyle\mbox{\boldmath$h$}_{a_{1}\dots a_{s}}=\mbox{\boldmath$h$}_{(a_{1}\dots a_{s})}\equiv\mbox{\boldmath$h$}_{a(s)}~{},\qquad\eta^{bc}\mbox{\boldmath$h$}_{bca(s-2)}\neq 0~{},$ (4.8)
where $\eta_{ab}=\text{diag}(-1,1,1,1)$. Upon converting to $4d$ two-component spinor notation (see e.g. [49] for the details), $\mbox{\boldmath$h$}_{a(s)}$
decomposes into irreducible $\mathsf{SL}(2,\mathbb{C})$ fields as follows
$\displaystyle\mbox{\boldmath$h$}_{\alpha_{1}{\dot{\alpha}}_{1},\dots,\alpha_{s}{\dot{\alpha}}_{s}}:=(\sigma^{a_{1}})_{\alpha_{1}{\dot{\alpha}}_{1}}\cdots(\sigma^{a_{s}})_{\alpha_{s}{\dot{\alpha}}_{s}}\mbox{\boldmath$h$}_{a_{1}\dots a_{s}}=h_{\alpha(s){\dot{\alpha}}(s)}+\cdots~{}.$ (4.9)
Here $h_{\alpha(s){\dot{\alpha}}(s)}$ is associated with the traceless part of
$\mbox{\boldmath$h$}_{a(s)}$, whilst the $+\cdots$ represent lower-rank fields
$h_{\alpha(s^{\prime}){\dot{\alpha}}(s^{\prime})}$ associated with the trace
of $\mbox{\boldmath$h$}_{a(s)}$. From (4.7) it follows that the operator
$\Pi^{\perp}_{(s)}$ selects the transverse and traceless (TT) component of
$\mbox{\boldmath$h$}_{a(s)}$,
$\displaystyle\partial^{b}h^{\text{TT}}_{ba(s-1)}=0~{},\qquad\eta^{bc}h^{\text{TT}}_{bca(s-2)}=0~{},\qquad
h^{\text{TT}}_{a(s)}:=\Pi^{\perp}_{(s)}\mbox{\boldmath$h$}_{a(s)}~{}.$ (4.10)
Therefore, the spin-$s$ projection operator (4.6) is a TT projector when
acting on a rank-$s$ field which is symmetric and traceful in its vectorial
indices. Similar conclusions hold in the three-dimensional case. This is
because the spin projection operators (2.21) and (2.35) in AdS3 (and hence
also those in $\mathbb{M}^{3}$ given by eq. (2.77)) satisfy a property
analogous to (4.7), as pointed out in eqs. (2.29) and (2.41).
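The TT statement (4.10) is straightforward to test numerically. The following sketch is our own construction (Euclidean signature is used purely for convenience, and the rank-2 TT projector is built from the transverse projector $\theta_{ab}=\delta_{ab}-p_{a}p_{b}/p^{2}$ in the standard way); it confirms transversality, tracelessness and idempotence on a generic symmetric, traceful field:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
p = rng.normal(size=d)
p2 = p @ p
theta = np.eye(d) - np.outer(p, p) / p2   # transverse projector; trace = d - 1

# Rank-2 TT projector (Euclidean signature for simplicity), cf. (4.10)
sym = 0.5 * (np.einsum('ac,bd->abcd', theta, theta)
             + np.einsum('ad,bc->abcd', theta, theta))
trace_part = np.einsum('ab,cd->abcd', theta, theta) / (d - 1)
P = sym - trace_part

# Apply to a generic symmetric, traceful rank-2 field h_{ab}
h = rng.normal(size=(d, d))
h = h + h.T
h_tt = np.einsum('abcd,cd->ab', P, h)

assert np.allclose(p @ h_tt, 0.0)                            # transverse
assert np.isclose(np.trace(h_tt), 0.0)                       # traceless
assert np.allclose(np.einsum('abcd,cd->ab', P, h_tt), h_tt)  # idempotent
print("TT projector checks pass")
```

The subtraction of the trace part with coefficient $1/(d-1)$ reflects the fact that $\theta_{ab}$ has trace $d-1$, which is what makes the projector both traceless and idempotent.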
Acknowledgements:
The work of DH is supported by the Jean Rogerson Postgraduate Scholarship and
an Australian Government Research Training Program Scholarship. The work of
SMK is supported in part by the Australian Research Council, project No.
DP200101944. The work of MP is supported by the Hackett Postgraduate
Scholarship UWA, under the Australian Government Research Training Program.
## Appendix A Notation and conventions
We follow the notation and conventions adopted in [50]. In particular, the
Minkowski metric is $\eta_{ab}=\mbox{diag}(-1,1,1)$. The spinor indices are
raised and lowered using the $\rm SL(2,{\mathbb{R}})$ invariant tensors
$\displaystyle\varepsilon_{\alpha\beta}=\begin{pmatrix}0&-1\\ 1&0\end{pmatrix}~{},\qquad\varepsilon^{\alpha\beta}=\begin{pmatrix}0&1\\ -1&0\end{pmatrix}~{},\qquad\varepsilon^{\alpha\gamma}\varepsilon_{\gamma\beta}=\delta^{\alpha}_{\beta}$ (A.5)
by the standard rule:
$\displaystyle\psi^{\alpha}=\varepsilon^{\alpha\beta}\psi_{\beta}~{},\qquad\psi_{\alpha}=\varepsilon_{\alpha\beta}\psi^{\beta}~{}.$
(A.6)
We make use of real gamma-matrices,
$\gamma_{a}:=\big{(}(\gamma_{a})_{\alpha}{}^{\beta}\big{)}$, which obey the
algebra
$\gamma_{a}\gamma_{b}=\eta_{ab}{\mathbbm{1}}+\varepsilon_{abc}\gamma^{c}~{},$
(A.7)
where the Levi-Civita tensor is normalised as
$\varepsilon^{012}=-\varepsilon_{012}=1$. A three-vector $V_{a}$ can equivalently be described by a symmetric second-rank spinor $V_{\alpha\beta}$ defined as
$\displaystyle V_{\alpha\beta}:=(\gamma^{a})_{\alpha\beta}V_{a}=V_{\beta\alpha}~{},\qquad V_{a}=-\frac{1}{2}(\gamma_{a})^{\alpha\beta}V_{\alpha\beta}~{}.$ (A.8)
Any antisymmetric tensor $F_{ab}=-F_{ba}$ is Hodge-dual to a three-vector
$F_{a}$, specifically
$\displaystyle F_{a}=\frac{1}{2}\varepsilon_{abc}F^{bc}~{},\qquad
F_{ab}=-\varepsilon_{abc}F^{c}~{}.$ (A.9)
Then, the symmetric spinor $F_{\alpha\beta}=F_{\beta\alpha}$, which is
associated with $F_{a}$, can equivalently be defined in terms of $F_{ab}$:
$\displaystyle F_{\alpha\beta}:=(\gamma^{a})_{\alpha\beta}F_{a}=\frac{1}{2}(\gamma^{a})_{\alpha\beta}\varepsilon_{abc}F^{bc}~{}.$ (A.10)
These three algebraic objects, $F_{a}$, $F_{ab}$ and $F_{\alpha\beta}$, are in one-to-one correspondence with one another, $F_{a}\leftrightarrow F_{ab}\leftrightarrow F_{\alpha\beta}$. The corresponding inner products are related as follows:
$\displaystyle-F^{a}G_{a}=\frac{1}{2}F^{ab}G_{ab}=\frac{1}{2}F^{\alpha\beta}G_{\alpha\beta}~{}.$
(A.11)
The Lorentz generators with two vector indices ($M_{ab}=-M_{ba}$), one vector
index ($M_{a}$) and two spinor indices ($M_{\alpha\beta}=M_{\beta\alpha}$) are
related to each other by the rules: $M_{a}=\frac{1}{2}\varepsilon_{abc}M^{bc}$
and $M_{\alpha\beta}=(\gamma^{a})_{\alpha\beta}M_{a}$. These generators act on
a vector $V_{c}$ and a spinor $\Psi_{\gamma}$ as follows:
$\displaystyle M_{ab}V_{c}=2\eta_{c[a}V_{b]}~{},\qquad M_{\alpha\beta}\Psi_{\gamma}=\varepsilon_{\gamma(\alpha}\Psi_{\beta)}~{}.$ (A.12)
The following identities hold:
$\displaystyle M_{\alpha_{1}}{}^{\beta}\Phi_{\beta\alpha_{2}...\alpha_{n}}=-\frac{1}{2}(n+2)\Phi_{\alpha(n)}~{},$ (A.13a)
$\displaystyle M^{\beta\gamma}M_{\beta\gamma}\Phi_{\alpha(n)}=-\frac{1}{2}n(n+2)\Phi_{\alpha(n)}~{}.$ (A.13b)
## Appendix B Generating function formalism
We employ the generating function formalism which was developed in [22].
Within this framework, a one-to-one correspondence between a homogeneous polynomial $\phi_{(n)}(\Upsilon)$ of degree $n$ and a rank-$n$ spinor field $\phi_{\alpha(n)}$ is established via the rule
$\phi_{(n)}(\Upsilon):=\Upsilon^{\alpha_{1}}\cdots\Upsilon^{\alpha_{n}}\phi_{\alpha(n)}~{}.$
(B.1)
Here, we have introduced the commuting real auxiliary variables
$\Upsilon^{\alpha}$, which are inert under the action of the Lorentz
generators $M_{\alpha\beta}$.
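The correspondence (B.1) is straightforward to realise symbolically. Below is a minimal sketch for $n=2$ (our illustration; spinor index placement is suppressed, and components are recovered via $\phi_{\alpha(n)}=\frac{1}{n!}\,\partial_{\alpha_{1}}\cdots\partial_{\alpha_{n}}\phi_{(n)}$):

```python
import sympy as sp

# Commuting auxiliary variables Upsilon^1, Upsilon^2
u1, u2 = sp.symbols('Upsilon1 Upsilon2')
U = {1: u1, 2: u2}

# A symmetric rank-2 spinor field phi_{alpha beta}
a, b, c = sp.symbols('a b c')
phi = {(1, 1): a, (1, 2): b, (2, 1): b, (2, 2): c}

# Generating polynomial phi_(2)(Upsilon) = Upsilon^a Upsilon^b phi_{ab}, cf. (B.1)
poly = sp.expand(sum(U[i] * U[j] * phi[(i, j)] for i in (1, 2) for j in (1, 2)))

# Components are recovered as phi_{ab} = (1/2!) d/dU^a d/dU^b phi_(2)
for (i, j), val in phi.items():
    recovered = sp.diff(poly, U[i], U[j]) / sp.factorial(2)
    assert sp.simplify(recovered - val) == 0
print("all rank-2 components recovered from the generating polynomial")
```

The same loop generalises to arbitrary rank with an $n!$-fold derivative, which is the practical content of the index-free formalism used in the remainder of this appendix.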
Making use of the auxiliary fields $\Upsilon^{\alpha}$, and their
corresponding partial derivatives,
$\partial_{\beta}:=\frac{\partial}{\partial\Upsilon^{\beta}}$, we can realise the AdS3 derivatives as index-free operators on the space of homogeneous polynomials of degree $n$. We introduce the differential operators which shift the degree of homogeneity by $+2$, $0$ and $-2$, respectively:
${\cal D}_{(2)}:=\Upsilon^{\alpha}\Upsilon^{\beta}{\cal D}_{\alpha\beta}~{},\quad{\cal D}_{(0)}:=\Upsilon^{\alpha}{\cal D}_{\alpha}{}^{\beta}\partial_{\beta}~{},\quad{\cal D}_{(-2)}:={\cal D}^{\alpha\beta}\partial_{\alpha}\partial_{\beta}~{}.$ (B.2)
Note that the action of ${\cal D}_{(0)}$ is equivalent to that of the Casimir
operator ${\cal F}$.
Making use of the algebra (2.2), one can derive the important identities
$\displaystyle\big{[}{\cal D}_{(2)},{\cal D}^{\phantom{.}t}_{(-2)}\big{]}\phi_{(n)}=4t(n-t+2)\big{(}{\cal Q}-\tau_{(t,n+2)}{\cal S}^{2}\big{)}{\cal D}_{(-2)}^{t-1}\phi_{(n)}~{},$ (B.3a)
$\displaystyle\big{[}{\cal D}_{(-2)},{\cal D}^{\phantom{.}t}_{(2)}\big{]}\phi_{(n)}=-4t(n+t)\big{(}{\cal Q}-\tau_{(t,n+2t)}{\cal S}^{2}\big{)}{\cal D}_{(2)}^{t-1}\phi_{(n)}~{},$ (B.3b)
$\displaystyle{\cal D}^{t}_{(2)}{\cal D}^{t}_{(-2)}\phi_{(n)}=\prod_{j=0}^{t-1}\Big{(}{\cal F}^{2}-\big{(}n-2j\big{)}^{2}\big{(}{\cal Q}-(n-2j-2)(n-2j+2){\cal S}^{2}\big{)}\Big{)}\phi_{(n)}~{},$ (B.3c)
via induction on $t$. Here ${\cal Q}$ and ${\cal F}$ are the quadratic Casimir
operators (2.3) and $\tau_{(t,n)}$ are the partially massless values (2.14).
## References
* [1] R. E. Behrends and C. Fronsdal, “Fermi decay of higher spin particles,” Phys. Rev. 106, no.2, 345 (1957).
* [2] C. Fronsdal, “On the theory of higher spin fields,” Nuovo Cim. 9, 416 (1958).
* [3] E. S. Fradkin and A. A. Tseytlin, “Conformal supergravity,” Phys. Rept. 119, 233-362 (1985).
* [4] A. Y. Segal, “Conformal higher spin theory,” Nucl. Phys. B 664, 59-130 (2003) [arXiv:hep-th/0207212 [hep-th]].
* [5] D. Francia, J. Mourad and A. Sagnotti, “Current exchanges and unconstrained higher spins,” Nucl. Phys. B 773, 203-237 (2007) [arXiv:hep-th/0701163 [hep-th]].
* [6] D. Ponomarev and A. A. Tseytlin, “On quantum corrections in higher-spin theory in flat space,” JHEP 05, 184 (2016) [arXiv:1603.06273 [hep-th]].
* [7] R. Bonezzi, “Induced action for conformal higher spins from worldline path integrals,” Universe 3, no.3, 64 (2017) [arXiv:1709.00850 [hep-th]].
* [8] A. P. Isaev and M. A. Podoinitsyn, “Two-spinor description of massive particles and relativistic spin projection operators,” Nucl. Phys. B 929, 452-484 (2018) [arXiv:1712.00833 [hep-th]].
* [9] A. Salam and J. A. Strathdee, “On superfields and Fermi-Bose symmetry,” Phys. Rev. D 11, 1521 (1975).
* [10] E. Sokatchev, “Projection operators and supplementary conditions for superfields with an arbitrary spin,” Nucl. Phys. B 99, 96 (1975).
* [11] V. Rittenberg and E. Sokatchev, “Decomposition of extended superfields into irreducible representations of supersymmetry,” Nucl. Phys. B 193, 477-501 (1981).
* [12] E. Sokatchev, “Irreducibility conditions for extended superfields,” Phys. Lett. B 104, 38-40 (1981).
* [13] W. Siegel and S. J. Gates, Jr., “Superprojectors,” Nucl. Phys. B 189, 295-316 (1981).
* [14] S. J. Gates Jr., M. T. Grisaru, M. Roček and W. Siegel, Superspace, or One Thousand and One Lessons in Supersymmetry, Benjamin/Cummings (Reading, MA), 1983, hep-th/0108200.
* [15] S. J. Gates Jr. and W. Siegel, “(3/2, 1) superfield of O(2) supergravity,” Nucl. Phys. B 164, 484 (1980).
* [16] S. J. Gates Jr., S. M. Kuzenko and J. Phillips, “The off-shell (3/2,2) supermultiplets revisited,” Phys. Lett. B 576, 97 (2003) [arXiv:hep-th/0306288].
* [17] E. I. Buchbinder, D. Hutchings, J. Hutomo and S. M. Kuzenko, “Linearised actions for $\mathcal{N}$-extended (higher-spin) superconformal gravity,” JHEP 08, 077 (2019) [arXiv:1905.12476 [hep-th]].
* [18] E. I. Buchbinder, S. M. Kuzenko, J. La Fontaine and M. Ponds, “Spin projection operators and higher-spin Cotton tensors in three dimensions,” Phys. Lett. B 790, 389 (2019) [arXiv:1812.05331 [hep-th]].
* [19] E. I. Buchbinder, D. Hutchings, S. M. Kuzenko and M. Ponds, “AdS superprojectors,” JHEP 04, 074 (2021) [arXiv:2101.05524 [hep-th]].
* [20] S. M. Kuzenko and M. Ponds, “Spin projection operators in (A)dS and partial masslessness,” Phys. Lett. B 800 (2020), 135128 [arXiv:1910.10440 [hep-th]].
* [21] D. Dalmazi and A. L. R. d. Santos, “On higher spin analogues of linearized topologically massive gravity and linearized “new massive gravity”,” [arXiv:2107.08879 [hep-th]].
* [22] S. M. Kuzenko and M. Ponds, “Higher-spin Cotton tensors and massive gauge-invariant actions in AdS3,” JHEP 05, 275 (2021) [arXiv:2103.11673 [hep-th]].
* [23] N. Boulanger, D. Ponomarev, E. Sezgin and P. Sundell, “New unfolded higher spin systems in $AdS_{3}$,” Class. Quant. Grav. 32, no.15, 155002 (2015) [arXiv:1412.8209 [hep-th]].
* [24] S. Deger, A. Kaya, E. Sezgin and P. Sundell, “Spectrum of D = 6, N=4b supergravity on AdS in three-dimensions x S**3,” Nucl. Phys. B 536, 110 (1998) [hep-th/9804166].
* [25] E. A. Bergshoeff, O. Hohm, J. Rosseel, E. Sezgin and P. K. Townsend, “On critical massive (super)gravity in adS3,” J. Phys. Conf. Ser. 314, 012009 (2011) [arXiv:1011.1153 [hep-th]].
* [26] I. V. Gorbunov, S. M. Kuzenko and S. L. Lyakhovich, “On the minimal model of anyons,” Int. J. Mod. Phys. A 12, 4199 (1997) [hep-th/9607114].
* [27] I. V. Tyutin and M. A. Vasiliev, “Lagrangian formulation of irreducible massive fields of arbitrary spin in (2+1) dimensions,” Teor. Mat. Fiz. 113N1, 45 (1997) [Theor. Math. Phys. 113, 1244 (1997)] [hep-th/9704132].
* [28] S. Deser and R. I. Nepomechie, “Anomalous propagation of gauge fields in conformally flat spaces,” Phys. Lett. 132B, 321 (1983).
* [29] A. Higuchi, “Symmetric tensor spherical harmonics on the $N$ sphere and their application to the de Sitter group SO($N$,1),” J. Math. Phys. 28, 1553 (1987).
* [30] S. Deser and A. Waldron, “Partial masslessness of higher spins in (A)dS,” Nucl. Phys. B 607, 577 (2001) [hep-th/0103198].
* [31] Y. M. Zinoviev, “On massive high spin particles in AdS,” [arXiv:hep-th/0108192 [hep-th]].
* [32] R. R. Metsaev, “Gauge invariant formulation of massive totally symmetric fermionic fields in (A)dS space,” Phys. Lett. B 643, 205 (2006) [hep-th/0609029].
* [33] M. R. Gaberdiel, R. Gopakumar and A. Saha, “Quantum $W$-symmetry in $AdS_{3}$,” JHEP 02, 004 (2011) [arXiv:1009.6087 [hep-th]].
* [34] C. Fronsdal, “Singletons and massless, integral-spin fields on de Sitter space,” Phys. Rev. D 20, 848 (1979).
* [35] S. Deser, “Covariant decomposition and the gravitational Cauchy problem,” Ann. Inst. H. Poincare Phys. Theor. 7, 149-188 (1967).
* [36] J. W. York, Jr., “Conformally invariant orthogonal decomposition of symmetric tensors on Riemannian manifolds and the initial value problem of general relativity,” J. Math. Phys. 14, 456-464 (1973).
* [37] J. W. York, Jr., “Covariant decompositions of symmetric tensors in the theory of gravitation,” Ann. Inst. H. Poincare Phys. Theor. 21, 319-332 (1974).
* [38] G. W. Gibbons and M. J. Perry, “Quantizing gravitational instantons,” Nucl. Phys. B 146, 90-108 (1978).
* [39] S. M. Kuzenko, “Higher spin super-Cotton tensors and generalisations of the linear–chiral duality in three dimensions,” Phys. Lett. B 763, 308 (2016) [arXiv:1606.08624 [hep-th]].
* [40] C. N. Pope and P. K. Townsend, “Conformal higher spin in (2+1)-dimensions,” Phys. Lett. B 225, 245 (1989).
* [41] M. Henneaux, S. Hörtner and A. Leonard, “Higher spin conformal geometry in three dimensions and prepotentials for higher spin gauge fields,” JHEP 01, 073 (2016) [arXiv:1511.07389 [hep-th]].
* [42] M. Henneaux, V. Lekeu, A. Leonard, J. Matulich and S. Prohazka, “Three-dimensional conformal geometry and prepotentials for four-dimensional fermionic higher-spin fields,” JHEP 11, 156 (2018) [arXiv:1810.04457 [hep-th]].
* [43] S. M. Kuzenko and M. Ponds, “Topologically massive higher spin gauge theories,” JHEP 10, 160 (2018) [arXiv:1806.06643 [hep-th]].
* [44] S. M. Kuzenko and M. Ponds, “Conformal geometry and (super)conformal higher-spin gauge theories,” JHEP 05, 113 (2019) [arXiv:1902.08010 [hep-th]].
* [45] E. A. Bergshoeff, M. Kovacevic, J. Rosseel, P. K. Townsend and Y. Yin, “A spin-4 analog of 3D massive gravity,” Class. Quant. Grav. 28, 245007 (2011) [arXiv:1109.0382 [hep-th]].
* [46] S. M. Kuzenko, U. Lindström and G. Tartaglino-Mazzucchelli, “Three-dimensional (p,q) AdS superspaces and matter couplings,” JHEP 08, 024 (2012) [arXiv:1205.4622 [hep-th]].
* [47] S. M. Kuzenko, J. Novak and G. Tartaglino-Mazzucchelli, “Higher derivative couplings and massive supergravity in three dimensions,” JHEP 1509, 081 (2015) [arXiv:1506.09063 [hep-th]].
* [48] S. M. Kuzenko and M. Tsulaia, “Off-shell massive N=1 supermultiplets in three dimensions,” Nucl. Phys. B 914, 160-200 (2017) [arXiv:1609.06910 [hep-th]].
* [49] I. L. Buchbinder and S. M. Kuzenko, Ideas and Methods of Supersymmetry and Supergravity or a Walk Through Superspace, IOP, Bristol, 1995 (Revised Edition: 1998).
* [50] S. M. Kuzenko, U. Lindström and G. Tartaglino-Mazzucchelli, “Off-shell supergravity-matter couplings in three dimensions,” JHEP 03, 120 (2011) [arXiv:1101.4013 [hep-th]].
# Revisiting Negation in Neural Machine Translation
Gongbo Tang1 Philipp Rönchen1 Rico Sennrich2,3 Joakim Nivre1
1Department of Linguistics and Philology, Uppsala University
2Department of Computational Linguistics, University of Zurich
3School of Informatics, University of Edinburgh
firstname.lastname@{lingfil.uu.se, ed.ac.uk}
###### Abstract
In this paper, we evaluate the translation of negation both automatically and
manually, in English–German (EN–DE) and English–Chinese (EN–ZH). We show that
the ability of neural machine translation (NMT) models to translate negation
has improved with deeper and more advanced networks, although the performance
varies between language pairs and translation directions. The accuracy of
manual evaluation in EN$\rightarrow$DE, DE$\rightarrow$EN, EN$\rightarrow$ZH,
and ZH$\rightarrow$EN is 95.7%, 94.8%, 93.4%, and 91.7%, respectively. In
addition, we show that under-translation is the most significant error type in
NMT, which contrasts with the more diverse error profile previously observed
for statistical machine translation. To better understand the root of the
under-translation of negation, we study the model’s information flow and
training data. While our information flow analysis does not reveal any
deficiencies that could be used to detect or fix the under-translation of
negation, we find that negation is often rephrased during training, which
could make it more difficult for the model to learn a reliable link between
source and target negation. We finally conduct intrinsic analysis and
extrinsic probing tasks on negation, showing that NMT models can distinguish
negation and non-negation tokens very well and encode a lot of information
about negation in hidden states but nevertheless leave room for improvement.
## 1 Introduction
Negation is an important linguistic phenomenon in machine translation, as
errors in translating negation may change the meaning of source sentences
completely. There are many studies on negation in statistical machine
translation (SMT) Collins et al. (2005); Li et al. (2009); Wetzel and Bond
(2012); Baker et al. (2012); Fancellu and Webber (2014, 2015), but studies on
negation in neural machine translation (NMT) are quite limited and results are
partly conflicting. For example, Bentivogli et al. (2016) find that negation
is still challenging, whereas Bojar et al. (2018) show that NMT models almost
make no mistakes on negation using 130 sentences with negation from three
language pairs as the evaluation set. Hence, it is still not clear how well
NMT models perform on the translation of negation.
In this paper, we present both automatic and manual evaluation of negation in
NMT, in English–German (EN–DE) and English–Chinese (EN–ZH). The automatic
evaluation is based on contrastive translation pairs and studies translation
from English into German/Chinese (EN$\rightarrow$DE/ZH). The manual evaluation
targets translation in all four translation directions. We find that the
modeling of negation in NMT has improved with deeper and more advanced
networks. The contrastive evaluation shows that deleting negation from
references is more confusing to NMT models compared to inserting negation into
references. For the manual evaluation, NMT models make fewer mistakes on
negation in EN–DE, than in EN–ZH, and there are more errors on negation in
DE/ZH$\rightarrow$EN than in EN$\rightarrow$DE/ZH. Moreover, under-translation
is the most prominent error type in three out of four directions.
The black-box nature of neural networks makes it hard to interpret how NMT
models handle the translation of negation. In Ding et al. (2017), neither
attention weights nor layer-wise relevance propagation (LRP) can explain why
negation is under-translated. We are interested in whether the information
about negation is not well passed to the decoder. Thus, we investigate the
negation information flow in NMT models by raw attention weights and attention
flow Abnar and Zuidema (2020). We demonstrate that the under-translation of
cues is not caused simply by a lack of negation information transferred to the
decoder. We further explore the mismatch between source and target sentences —
negation cues appearing only on the source side or only on the target side. We
find that roughly 17.4% of the ZH–EN training data contains such mismatches.
These mismatches could confuse NMT models and make learning harder. We
suggest distilling or filtering the training data by removing sentence pairs
with mismatches to make learning easier. In addition, we conduct intrinsic
analysis and extrinsic probing tasks, to explore how much information about
negation has been learned by NMT models. The intrinsic analysis based on
cosine similarity shows that NMT models can distinguish negation and non-
negation tokens very well. The probing results on negation detection reveal
that NMT can encode a lot of information about negation in hidden states but
still leaves much room for improvement. Moreover, encoder hidden states
capture more information about negation than decoder hidden states.
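The filtering suggestion above can be sketched as follows. This is a minimal illustration, not the procedure used in the paper: the cue inventories are simplified stand-ins, and real filtering would need to handle affixal negation and segmentation.

```python
# Minimal sketch of filtering out source/target negation-cue mismatches.
# The cue lists are simplified stand-ins, not full inventories.
EN_CUES = {"not", "no", "never", "n't", "without"}
ZH_CUES = {"bu", "mei", "wu", "fei", "bie"}

def has_negation(tokens, cues):
    return any(tok in cues for tok in tokens)

def filter_mismatches(pairs):
    """Keep only pairs where source and target agree on the presence of negation."""
    kept = []
    for src, tgt in pairs:
        src_neg = has_negation(src.lower().split(), EN_CUES)
        tgt_neg = has_negation(tgt.split(), ZH_CUES)
        if src_neg == tgt_neg:
            kept.append((src, tgt))
    return kept

pairs = [
    ("I do not know .", "wo bu zhi dao ."),        # negated on both sides: kept
    ("I have no idea .", "wo wan quan bu dong ."),  # negated on both sides: kept
    ("It is impossible .", "zhe bu ke neng ."),     # mismatch (affixal negation): dropped
]
print(filter_mismatches(pairs))
```

Note that the third pair is a false mismatch caused by affixal negation ("impossible"), which is exactly why naive cue matching overestimates the mismatch rate.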
## 2 Related Work
### 2.1 Negation in MT
Fancellu and Webber (2015) conduct a detailed manual error analysis and
consider three categories of errors, deletion, insertion, and reordering. They
find that negation scope is most challenging and reordering is the most
frequent error type in SMT. Here we study the performance of NMT models on
translating negation.
Bentivogli et al. (2016) and Beyer et al. (2017) find that NMT is superior to
SMT in translating negation. Bentivogli et al. (2016) observe that correctly
placing the German negation cue nicht during translation is a challenge for
NMT models, since its position is determined by the focus of negation, which
must itself be detected correctly. Bojar et al. (2018) evaluate MT models on negation,
translating from English into Czech, German, and Polish, using 61, 36, and 33
sentences with negation as the test sets. They find that NMT models make
almost no mistakes on negation compared to SMT – the NMT models make only two
mistakes in the English–Czech test set. In this paper, we will conduct manual
evaluation on four directions with larger evaluation sets, to get a more
comprehensive picture of the performance on translating negation.
Sennrich (2017) evaluates subword-level and character-level NMT models on the
polarity set of LingEval97 and finds that negation is still a challenge for
NMT, via scoring contrastive translation pairs. More specifically, the
deletion of negation cues causes more errors. Ataman et al. (2019) show that
character-level models perform better than subword-level models on negation.
Instead, we evaluate NMT models with different neural networks to learn their
abilities to translate negation, by scoring contrastive translation pairs.
Ding et al. (2017) find that neither attention weights nor LRP can explain
under-translation errors on a negation instance. Thus understanding the
mechanism of dealing with negation is still a challenge for NMT. Most
recently, Hossain et al. (2020) study the translation of negation on 17
translation directions. They show that negation is still a challenge to NMT
models and find that there are fewer negation related errors when the language
is similar to English, with respect to the typology of negation. In our work,
we conduct both automatic and manual evaluation on negation, and explore the
information flow of negation to answer whether under-translation errors are
caused by a lack of negation information transferred to the decoder.
### 2.2 Negation in Other Areas of NLP
Negation projection is the task of projecting negations from one language to
another language, which can alleviate the workload of annotating negation. Liu
et al. (2018) find that using word alignment to project negation does not help
the annotation process. They also provide the NegPar corpus, an EN–ZH parallel
corpus annotated for negation. Here we apply probing classifiers to directly
generate negation annotations on Chinese using hidden states.
Negation detection is the task of recognizing negation tokens, which can
estimate the ability of a model to learn negation. Fancellu et al. (2018)
utilize LSTMs, dependency LSTMs, and graph convolutional networks (GCN) to
detect negation scope, using part-of-speech tags, dependency tags, negation
cues as features. Recently the pre-trained contextualized representations have
been widely used in various NLP tasks. Khandelwal and Sawant (2020) employ
BERT Devlin et al. (2019) for negation detection, including negation cue
detection, scope detection and event detection. Sergeeva et al. (2019) apply
ELMo Peters et al. (2018) and BERT to negation scope detection and achieve new
state-of-the-art results on two negation data sets. Instead of pursuing better
results, here we aim to probe how much information about negation has been
encoded in hidden states in a negation detection task.
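A probing classifier of the kind described above can be sketched as a logistic regression trained on frozen representations. The "hidden states" below are synthetic vectors with a planted negation signal, purely to make the sketch self-contained; in practice they would be encoder or decoder states.

```python
import numpy as np

# Sketch of a diagnostic probe: logistic regression on frozen "hidden states"
# predicting whether a token is a negation cue. States here are synthetic.
rng = np.random.default_rng(0)
dim, n = 16, 200
labels = rng.integers(0, 2, size=n)              # 1 = negation-cue token
signal = rng.normal(size=dim)                    # planted "negation direction"
states = rng.normal(size=(n, dim)) + np.outer(labels * 2.0 - 1.0, signal)

def train_probe(X, y, lr=0.1, steps=500):
    """Plain gradient descent on the logistic loss; weights start at zero."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

w, b = train_probe(states, labels)
acc = np.mean(((states @ w + b) > 0).astype(int) == labels)
print(f"probe accuracy: {acc:.2f}")
```

High probe accuracy is then read as evidence that the representation encodes information about negation, which is the logic of the extrinsic probing tasks described above.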
## 3 Background
### 3.1 Negation
Negation in text generally has four components: cues, events, scope, and
focuses. The cues are the words expressing negation. An event is the lexical
component that a cue directly refers to. The scope is the part of the meaning
that is negated and the focus is the most explicitly negated part of the scope
Huddleston and Pullum (2002); Morante and Daelemans (2012).
NegPar is a parallel EN–ZH corpus annotated for negation. The English part is
based on ConanDoyle-neg Morante and Daelemans (2012), a collection of four
Sherlock Holmes stories. Some scope-related phenomena are re-annotated for
consistency. The annotations are extended onto its Chinese translations. Here
are two annotation examples:
English: There was **no** response.
Chinese: **mei** you ren da ying.
(no have people answer reply.)
In these examples, **no** and **mei**, marked in bold, are the cues; response and
da ying are the events; the remaining words belong to the negation
scope.
negation focuses are not annotated. Table 1 shows detailed statistics of
NegPar. Note that a negation instance may not have all three components.
Moreover, not all parallel sentence pairs have negation in both source and
target sentences. For more details, please refer to Liu et al. (2018).
Due to the lack of parallel data annotated for negation, most of the negated
sentences in the previous studies are selected randomly. In NegPar, not only
negation cues, but also events and scope are annotated, which is beneficial for
evaluating NMT models on negation and exploring the ability of NMT models to
translate negation.
### 3.2 Contrastive Translation Pairs
| | Train | Dev | Test | Total
---|---|---|---|---|---
English | Cue | 984 | 173 | 264 | 1,421
| Event | 616 | 122 | 173 | 911
| Scope | 887 | 168 | 249 | 1,304
Chinese | Cue | 1,209 | 231 | 339 | 1,779
| Event | 756 | 163 | 250 | 1,169
| Scope | 1,160 | 227 | 338 | 1,725
Table 1: Statistics of negation components in NegPar.
Deletion | Insertion
---|---
deleting nicht (not) | inserting nicht
replacing kein (no) with ein (a) | replacing ein with kein
deleting un- | inserting un-
Table 2: Six ways to reverse the polarity of sentences from the polarity
category of LingEval97.
Since we evaluate NMT models explicitly on negation, BLEU Papineni et al.
(2002) as a metric of measuring overall translation quality is not helpful. We
conduct the targeted evaluation with contrastive test sets in which human
reference translations are paired with one or more contrastive variants, where
a specific type of error is introduced automatically.
NMT models are conditional language models that assign a probability $P(T|S)$
to a target sentence $T$ given a source sentence $S$. If a model assigns
a higher probability to the correct target sentence than to a contrastive
variant that contains an error, we consider it as a correct decision. The
accuracy of a model on such a test set is the percentage of cases where the
correct target sentence is scored higher than all contrastive variants.
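The scoring procedure can be sketched as follows. The model scorer is replaced here by a hard-coded lookup of fabricated log-probabilities, so only the accuracy computation itself is illustrative; in practice `score` would call the NMT model's forced-decoding log-likelihood.

```python
# Toy sketch of contrastive-pair evaluation. `score` stands in for an NMT
# model's log-probability log P(T|S); here it is a fabricated lookup table.
def score(source, target, logprobs):
    return logprobs[(source, target)]

def contrastive_accuracy(examples, logprobs):
    """An example is (source, reference, [contrastive variants]).
    A decision is correct iff the reference outscores every variant."""
    correct = 0
    for src, ref, variants in examples:
        ref_score = score(src, ref, logprobs)
        if all(ref_score > score(src, v, logprobs) for v in variants):
            correct += 1
    return correct / len(examples)

logprobs = {
    ("he did not come", "er kam nicht"): -1.2,
    ("he did not come", "er kam"): -3.5,       # negation deleted: correctly dispreferred
    ("she came", "sie kam"): -0.8,
    ("she came", "sie kam nicht"): -0.7,       # negation inserted, wrongly preferred
}
examples = [
    ("he did not come", "er kam nicht", ["er kam"]),
    ("she came", "sie kam", ["sie kam nicht"]),
]
print(contrastive_accuracy(examples, logprobs))  # 0.5: one of two decisions correct
```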
LingEval97 Sennrich (2017) has over 97,000 EN$\rightarrow$DE contrastive
translation pairs featuring different linguistic phenomena. In this paper, we
focus on the polarity category which is related to negation and consists of
26,803 instances. For contrastive variants, the polarity of translations are
reversed by inserting or deleting negation cues. Table 2 illustrates how the
polarity is reversed.
### 3.3 Attention Flow
In Transformer models, the hidden state of each token becomes more
contextualized as we move to higher layers. Thus, the raw attention weights
do not reflect the actual attention paid to the input tokens.
Recently, Abnar and Zuidema (2020) have proposed attention flow to approximate
the information flow. Attention flow considers not only the attention weights
to the previous layer but also to all the lower layers. Formally, in the self-
attention networks, given a directed graph $G=(V,E)$, where $V$ is the set of
nodes, and $E$ is the set of edges; each hidden state or word embedding from
different layers is a node; the attention weight is the value of an edge.
Given a source node $s$ and a target node $t$, the attention flow is the flow
between $s$ and $t$, where the flow along each edge must not exceed its
capacity, and the incoming flow must equal the outgoing flow at every
intermediate node on a path from $s$ to $t$. A maximum-flow algorithm
is applied to find this flow in the resulting network.
In short, the attention flow utilizes the minimum value of the attention
weights in each path, and also employs the residual connections of attention
weights. They find that the patterns of attention flow get more distinctive in
higher layers compared to the raw attention. Moreover, attention flow yields
higher correlations with the importance scores of input tokens obtained by the
input gradients, compared to using the raw attention weights. Abnar and
Zuidema (2020) explore the attention flow of the encoder self-attention in the
case of pre-trained language models. Here we compute the attention flow from
decoder layers to source word embeddings, in the context of NMT.
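A minimal, pure-Python illustration of this computation on a toy two-layer attention graph is given below. The Edmonds–Karp max-flow implementation and the attention weights are our own stand-ins, not the authors' code; real attention matrices would come from a trained Transformer.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp maximum flow on a dict-of-dicts capacity graph."""
    res = {u: dict(vs) for u, vs in capacity.items()}   # residual capacities
    nodes = set(res)
    for vs in capacity.values():
        nodes |= set(vs)
    for u in nodes:
        res.setdefault(u, {})
    flow = 0.0
    while True:
        parent = {s: None}                              # BFS for an augmenting path
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 1e-12 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        path, v = [], t                                 # recover path, find bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck
            res[v][u] = res[v].get(u, 0.0) + bottleneck
        flow += bottleneck

# Toy 2-layer, 2-token self-attention; nodes are "L{layer}_{token}" and the
# capacity of an edge from a layer-l node to a layer-(l-1) node is the
# attention weight (made-up numbers).
att = {
    1: [[0.9, 0.1], [0.4, 0.6]],  # layer 1 attends over the embeddings (layer 0)
    2: [[0.5, 0.5], [0.2, 0.8]],  # layer 2 attends over layer-1 states
}
cap = {}
for layer, weights in att.items():
    for i, row in enumerate(weights):
        cap[f"L{layer}_{i}"] = {f"L{layer - 1}_{j}": w for j, w in enumerate(row)}

# Flow from the top node L2_0 down to input token L0_0 is 0.5 + 0.4 = 0.9,
# whereas the single raw-attention path contributes only 0.5 * 0.9 = 0.45.
print(max_flow(cap, "L2_0", "L0_0"))
```

The example shows why attention flow can attribute more importance to an input token than any single chain of raw attention weights: flow is accumulated over all paths, each limited by its bottleneck edge.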
## 4 Evaluation
In this section, we present the results of both automatic and manual
evaluation on negation in EN–DE and EN–ZH, to get a more comprehensive picture
of the performance on translating negation.
### 4.1 NMT Models
We use the Sockeye Hieber et al. (2017) toolkit to train NMT models. For
EN$\rightarrow$DE, we train RNN-, CNN-, and Transformer-based models,
following the settings provided by Tang et al. (2018). For the other
directions, we only train Transformer models. Table 3 shows the more detailed
settings.
Neural network depth | 8/6 (EN–DE/ZH)
---|---
Kernel size of CNNs | 3
Trans. Att. head | 8
Learning rate (initial) | 2e-04
Embedding&hidden unit size | 512
Mini-batch size (token) | 4,096
Dropout (Trans./RNN&CNN) | 0.1/0.2
RNN encoder | 1 biLSTM + 6 uniLSTM
Optimizer | Adam (Kingma and Ba, 2015)
Checkpoint frequency | 4,000
Label smoothing | 0.1
Early stopping | 32
Table 3: Settings for training NMT models.
EN$\rightarrow$DE | DE$\rightarrow$EN | EN$\rightarrow$ZH | ZH$\rightarrow$EN
---|---|---|---
RNN | CNN | Trans. | Trans. | Trans. | Trans.
25.2 | 25.3 | 27.6 | 34.3 | 33.9 | 23.5
Table 4: BLEU scores of NMT models with different architectures on the test
sets (newstest2017). Trans. is short for Transformer.
The training data is from the WMT17 shared task Bojar et al.
(2017) (http://www.statmt.org/wmt17/translation-task.html). There are about
5.9 million and 24.7 million sentence pairs in the training set of EN–DE and
EN–ZH, respectively, after preprocessing with Moses scripts. Note that the
training data on EN–ZH is from the official preprocessed data
(http://data.statmt.org/wmt18/translation-task/preprocessed/zh-en/). The
Chinese segmentation is based on Jieba (https://github.com/fxsjy/jieba). We
learn a joint BPE model with 32K subword units Sennrich et al. (2016) for
EN–DE, and two BPE models with 32K subword units for Chinese and English,
respectively. We employ the single model that has the best perplexity on the
validation set for the evaluation, without any ensembles. Table 4 shows the
BLEU scores of the trained NMT models on newstest2017, which are computed by
sacrebleu Post (2018) (https://github.com/mjpost/sacrebleu).
Since these NMT models are trained on single sentences, feeding an input
with multiple sentences into them is likely to yield an incomplete
translation. To avoid such errors, we feed each sentence with negation cues
into the NMT models individually for the manual evaluation.
Figure 1: Performance of NMT models on scoring contrastive translations, in
EN$\rightarrow$DE, using the polarity category of LingEval97. The first three
groups are on negation deletion, deleting nicht, kein and affixes, while the
last three groups are on negation insertion.
### 4.2 Automatic Evaluation
For the automatic evaluation, we let NMT models score contrastive translation
pairs, in EN$\rightarrow$DE and EN$\rightarrow$ZH.
#### 4.2.1 EN$\rightarrow$DE
Sennrich (2017) has evaluated subword-level and character-level RNN-based
models. Here we evaluate NMT models with different architectures, RNN-, CNN-,
and Transformer-based models. The test set is the polarity category of
LingEval97. Figure 1 displays the accuracy of NMT models.
Our NMT models are superior to the models in Sennrich (2017), except that CNN
is inferior in the group nicht_del. Generally, we see that the performance on
negation is getting better with the evolution of NMT models, with the
Transformer consistently scoring best, and substantially better (by up to 8
percentage points) than the shallow RNN Sennrich (2017). The accuracy of the
Transformer varies from 93.2% to 99.8%, depending on the group, which we
consider quite strong.
It is interesting that NMT models make fewer mistakes when inserting negation
cues into the reference compared to deleting negation cues from the reference,
which means that positive contrastive variants are more confusing to NMT
models. This is consistent with the results in Fancellu and Webber (2015),
that SMT models make more errors when generating positive sentences than
generating negative sentences, in terms of insertion/deletion errors. We will
explore under-translation errors in the following sections.
#### 4.2.2 EN$\rightarrow$ZH
Following the polarity category in LingEval97, we create a contrastive
evaluation set for negation on EN$\rightarrow$ZH, using the development and
test sets from the WMT shared translation tasks
2017–2020 (https://github.com/tanggongbo/negation-evaluation-nmt). The
contrastive evaluation set also has two sub-categories: negation deletion and
negation insertion. We first select the five most popular Chinese negation
cues – “bu”, “mei”, “wu”, “fei”, and “bie”. Then, we manually delete the
negation cue from the reference or insert a negation cue into the reference,
without affecting the grammaticality. The negation deletion and negation
insertion categories have 2,005 and 3,062 instances with contrastive
translations, respectively.
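The mechanical core of the two sub-categories can be sketched as follows. The actual contrastive set was edited manually to preserve grammaticality, so this purely string-level version is only an illustration of the deletion and insertion operations.

```python
# Sketch of building contrastive variants in the style of the EN->ZH set:
# delete a negation cue from a (pre-segmented) reference, or insert one.
ZH_CUES = {"bu", "mei", "wu", "fei", "bie"}

def delete_cue(tokens):
    """Negation-deletion variant: drop the first negation cue, if any."""
    for i, tok in enumerate(tokens):
        if tok in ZH_CUES:
            return tokens[:i] + tokens[i + 1:]
    return None  # no cue present, so no deletion variant

def insert_cue(tokens, cue="bu", position=0):
    """Negation-insertion variant: add a cue at a chosen position."""
    return tokens[:position] + [cue] + tokens[position:]

ref = "ta bu zhi dao".split()                          # "he does not know"
print(delete_cue(ref))                                 # ['ta', 'zhi', 'dao']
print(insert_cue("ta zhi dao".split(), position=1))    # ['ta', 'bu', 'zhi', 'dao']
```

Each variant then gets paired with the reference and scored by the model as in Section 3.2.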
As Transformer models are superior to RNN- and CNN-based models, here we only
evaluate Transformer models. The accuracy on negation deletion and negation
insertion categories is 92.1% and 99.0%, respectively. We can see that
Transformer models perform quite well on EN$\rightarrow$ZH, but not as well as
on EN$\rightarrow$DE. In accordance with the finding in EN$\rightarrow$DE,
the Transformer models in EN$\rightarrow$ZH also perform worse on the
negation deletion category.
Category | Description
---|---
Correct | cues are translated into cues correctly
Rephrased | cues are translated correctly but not into a cue
Reordered | cues are translated but modify wrong constituents (incorrect scope/focus)
Incorrect | cues are translated but the event is translated incorrectly or the meaning is reversed
Dropped | cues are not translated at all
Table 5: Descriptions of the five translation categories.
| Correct | Rephrased | Reordered | Incorrect | Dropped | Accuracy
---|---|---|---|---|---|---
EN$\rightarrow$DE | 258 (92.8%) | 8 (2.9%) | 2 (0.7%) | 3 (1.1%) | 7 (2.5%) | 95.7%
DE$\rightarrow$EN | 232 (92.8%) | 5 (2.0%) | 2 (0.8%) | 11 (4.4%) | 0 (0.0%) | 94.8%
EN$\rightarrow$ZH | 393 (90.0%) | 15 (3.4%) | 3 (0.7%) | 10 (2.3%) | 16 (3.7%) | 93.4%
ZH$\rightarrow$EN | 451 (80.1%) | 65 (11.6%) | 3 (0.5%) | 21 (3.7%) | 23 (4.1%) | 91.7%
Table 6: Manual evaluation results in EN–DE and EN–ZH. Accuracy is the sum of
correct and rephrased.
### 4.3 Manual Evaluation
We have evaluated NMT models on negation with contrastive translation pairs.
However, scoring contrastive translation pairs is not the same as evaluating
the translations directly. The contrastive translations only insert or delete
a negation cue compared to the references, which is quite different from the
generation of NMT models. In addition, the automatic evaluation only gives us
the general performance on negation without any details on how negation is
translated. Thus, we further conduct manual evaluation on EN–DE and EN–ZH.
Due to the lack of parallel data annotated for negation, most of the negated
sentences in previous studies have no annotations and are selected randomly.
In NegPar, not only negation cues, but also events and scope are annotated,
which is beneficial for evaluating NMT models on negation and exploring the
ability of NMT models to learn negation. These annotations allow us to
evaluate negation from the perspectives of cues, events, and scope, rather
than negation cues only. Thus, for EN–ZH, we conduct the manual evaluation
based on NegPar, using both the development set and the test set. For EN–DE,
we evaluate 250 sentences with negation cues that are randomly selected from
LingEval97 in each direction.
Given the strong performance of Transformer models in the automatic
evaluation, we focus on this architecture for the manual evaluation. We
classify the translations of negation into five categories: Correct,
Rephrased, Reordered, Incorrect, and Dropped, depending on whether the cue,
event and the scope are translated correctly. More detailed descriptions are
provided in Table 5.
Table 6 gives the absolute frequency and percentage of each translation
category in all the translation directions
(https://github.com/tanggongbo/negation-evaluation-nmt provides the details).
The accuracy of translating negation is the sum of correct and
rephrased, and the accuracy in EN$\rightarrow$DE, DE$\rightarrow$EN,
EN$\rightarrow$ZH, and ZH$\rightarrow$EN is 95.7%, 94.8%, 93.4%, and 91.7%,
respectively. We can see that NMT models perform better at translating
negation in DE–EN than in ZH–EN. In addition, under-translation errors are the
main errors in three out of four directions while reordering errors only
account for less than 1% in all directions. This contrasts with the results
reported for SMT by Fancellu and Webber (2015), where reordering was a more
severe problem than under-translation. This is plausible because NMT models
are conditional language models and make fewer word-order errors than SMT
models (Bentivogli et al., 2016), which in turn yields fewer reordering errors
when translating negation. The main error types with respect to negation have
thus shifted from SMT to NMT.
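The accuracy computation described above (Correct plus Rephrased over all instances) can be sketched as follows; the category counts here are purely illustrative, not the paper's data:

```python
from collections import Counter

def negation_accuracy(labels):
    """Accuracy = share of negation instances judged Correct or Rephrased."""
    counts = Counter(labels)
    return (counts["Correct"] + counts["Rephrased"]) / sum(counts.values())

# Hypothetical annotator labels for 100 negation instances
labels = ["Correct"] * 90 + ["Rephrased"] * 5 + ["Dropped"] * 3 + ["Incorrect"] * 2
print(round(negation_accuracy(labels), 3))  # 0.95
```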
#### 4.3.1 EN–DE
As Table 6 shows, most translations fall into the Correct category. The
accuracy in EN$\rightarrow$DE is 0.9% higher than in DE$\rightarrow$EN. In
EN$\rightarrow$DE, 2.5% of negation cues are not translated, while all
negation cues are translated in DE$\rightarrow$EN. However, there are more
sentences in which the negation event is translated incorrectly in
DE$\rightarrow$EN. Compared to Bojar et al. (2018), our evaluation results for
EN–DE are 4.3% lower. One possible reason for the difference is that our
evaluation is based on a larger data set; another is that we also consider the
translation of negation events and scope.
#### 4.3.2 EN–ZH
Similar to the results in EN–DE, the accuracy in translating from English is
greater than in translating into English. The accuracy in ZH$\rightarrow$EN is
1.7% lower than in EN$\rightarrow$ZH. More instances of negation are rephrased
in ZH$\rightarrow$EN, with no negation cue appearing in the translation, and
the NMT model in ZH$\rightarrow$EN also makes more under-translation errors.
Category | Source | Translation | Reference
---|---|---|---
Correct | would do him no harm | bu hui shang hai ta (not able to harm him) | dui ta bu hui you shen me hai chu (to him no able have any harm)
Rephrased | bu xi fei yong (no spare expense use) | able to spend enough money | spare no expense
Reordered | yi ge xing qi bu jian mian (a week no meet) | no one could meet for a week | be invisible for a week
Incorrect | spare no expense | bu yao hua qian mai (not spend money to buy) | bu xi fei yong (not spare expense)
Dropped | bu xing, Mo li luo zhi dao le (not fortunate, Murillo know truth already) | fortunately, Murillo knew that | Unhappily , Murillo heard of
Table 7: Translation examples (segments) from different categories. These
segments are a subset of negation scope. The word in bold in the source is the
cue. Words with dashed lines below are correct translations and words with
wavy lines below are incorrect translations.
Table 7 further provides some translation examples. In the category Rephrased,
negation cues are not directly translated into negation cues. Instead, the
negation is paraphrased in a positive translation. In the Rephrased example,
although there is no cue in the translation, the meaning is paraphrased by
translating bu xi (no spare) into spend. In the Reordered example, the cue bu
in the source is supposed to modify jian (meet), but the translation of the
cue is placed before one, modifying the subject one instead of meet. In
addition, even though the negation cues are translated, the negation events
could be translated incorrectly, which can also have a severe impact on the
translation. For the fourth example, there is a cue in the translation but
spare in the source is translated into spend, which reverses the meaning
completely. For the last example, the cue bu (no) is skipped and only the
event xing (fortunate) gets translated.
We further check the under-translation errors of negation cues and find that
some of them are caused by multi-word expressions (idioms), especially when
translating Chinese into English. For example, wu (no) in wu_bing_shen_yin (no
disease groan cry) is not translated. Fancellu and Webber (2015) have shown
that cues are not under-translated in SMT if they are separate units.
Following this observation, we segment these words into separate characters
and feed the input into the NMT models again. This fixes a few errors: wu (no)
in wu_bing_shen_yin is now translated, but the second bu (not) in bu_gao_bu_ai
(not tall not short) is still dropped. Note that we change the segmentation
only at inference time, which is sub-optimal; our aim is merely to show that
segmentation can also cause under-translation errors.
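The character-level re-segmentation applied at inference time can be sketched as follows; the idiom list and example tokens (here the hanzi for wu_bing_shen_yin) are illustrative:

```python
def split_idioms(tokens, idioms):
    """Re-segment listed multi-character tokens into single characters."""
    out = []
    for tok in tokens:
        if tok in idioms:
            out.extend(tok)  # iterating a string yields one character per unit
        else:
            out.append(tok)
    return out

# e.g. the four-character idiom wu_bing_shen_yin, written in hanzi:
idioms = {"无病呻吟"}
print(split_idioms(["他", "无病呻吟"], idioms))  # ['他', '无', '病', '呻', '吟']
```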
## 5 Interpretation
There are few studies on interpreting NMT models with respect to negation.
Since Table 6 has shown that NMT models in EN–ZH suffer from more errors on
negation, and since NegPar provides annotations of negation, we focus on
interpreting NMT models in EN–ZH. NMT models consist of several components and
we are interested in the information flow of negation to answer whether the
under-translation is caused by not passing enough negation information to
decoders, as well as exploring the ability of NMT models to learn negation.
### 5.1 Under-Translation Errors
Under-translation is the most frequent error type in our evaluation. If a
negation cue is not translated by NMT models, either the negation information
is not passed to the decoder properly, or the decoder does not utilize such
information for negation generation. We employ raw attention weights and
attention flow to explore the information flow.
#### 5.1.1 Attention Distribution
Encoder-decoder attention weights can be viewed as the degree of contribution
to the current word prediction. They have been utilized to locate unknown
words and to estimate the confidence of translations (Jean et al., 2015;
Gulcehre et al., 2016; Rikters and Fishel, 2017). However, previous studies
have found that attention weights cannot explain the under-translation of
negation cues (Ding et al., 2017). In this section, we first focus on the
under-translated negation cues, checking the negation information that is
passed to the decoder by the encoder-decoder attention. We compare the
attention weights paid to negation cues, when they are under-translated and
when they are translated into reference translations.
We extract attention distributions from each attention layer when translating
sentences from the development set. Each attention layer has multiple heads
and we average the attention weights from all the heads (we also used the
maximum weight across heads, in case negation is modeled by a single head and
averaging would be misleading, and reached the same conclusion). We
utilize constrained decoding (Post and Vilar, 2018) to generate reference
translations to get gold attention distribution. We find that source negation
cues attract much less attention compared to when they are translated into
references. Thus, we hypothesize that sufficient information about negation
has not been passed to the decoder, and we can utilize the attention
distribution to detect under-translated cues.
Now we further explore the attention distribution of under-translated and
correctly translated cues, without using the gold attention distribution. We
compute the Spearman correlation ($\rho$) between the weights and categories.
If $|\rho|$ is close to $1$, then categories have a high correlation with
attention weights. However, the largest $|\rho|$ in EN$\rightarrow$ZH and
ZH$\rightarrow$EN is 0.15 and 0.23, respectively, which means that there is
almost no correlation between attention weights and categories. Inspecting the
weights, we find that the weights paid to correctly translated cues range from
0.01 to 0.68, a range that covers most of the weights paid to dropped cues.
This means that we cannot detect under-translated cues from raw attention
weights.
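The head-averaging and rank-correlation analysis above can be sketched as follows. This is a pure-NumPy stand-in (`scipy.stats.spearmanr` would be the usual choice); the per-cue weights and category labels are illustrative:

```python
import numpy as np

def rankdata(x):
    """Ranks with ties averaged, as used in the Spearman correlation."""
    x = np.asarray(x, float)
    order = np.argsort(x)
    ranks = np.empty(len(x))
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and x[order[j + 1]] == x[order[i]]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2 + 1  # average rank for the tie group
        i = j + 1
    return ranks

def spearman(x, y):
    rx = rankdata(x) - rankdata(x).mean()
    ry = rankdata(y) - rankdata(y).mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

# Attention weight paid to each cue, averaged over heads (attn.mean(axis=head_dim)),
# versus its translation category -- all numbers hypothetical:
weights = [0.01, 0.30, 0.68, 0.05, 0.40]
dropped = [1, 0, 0, 1, 0]  # 1 = under-translated cue, 0 = correctly translated
rho = spearman(weights, dropped)
```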
Since raw attention weights in the Transformer do not reflect the actual
attention to input tokens, in the next section we apply attention flow, which
has been shown to correlate better with input gradients, to measure the flow
of negation information.
#### 5.1.2 Attention Flow
We compute the attention flow to negation cues belonging to different groups;
the input nodes are the hidden states from decoder layers; the output node is
the word embedding of the negation cue. We utilize the maximum attention flow
from the decoder to represent the attention flow to each source cue, and
report the average value of all the attention flow. Table 8 shows the
attention flow values from different decoder layers to source cues, and the
absolute value of Spearman correlation ($\rho$) between attention flow and the
cue’s category. The attention flow values range from 0.70 to 0.91 for all the
cues, which means that most of the cue information has been passed to the
decoder and that the under-translation is not caused by not passing negation
information to the decoder.
Layer | Group | Attention flow (EN$\rightarrow$ZH) | $|\rho|$ (EN$\rightarrow$ZH) | Attention flow (ZH$\rightarrow$EN) | $|\rho|$ (ZH$\rightarrow$EN)
---|---|---|---|---|---
2 | ✓ | 0.89 | 0.04 | 0.80 | 0.15
2 | ✗ | 0.90 | | 0.70 |
4 | ✓ | 0.89 | 0.06 | 0.85 | 0.08
4 | ✗ | 0.91 | | 0.84 |
6 | ✓ | 0.77 | 0.06 | 0.82 | 0.07
6 | ✗ | 0.78 | | 0.72 |
Table 8: Attention flow values from different decoder layers to source cues,
and the absolute value of Spearman correlation ($\rho$) between attention flow
and the cue’s category. ✓ represents the correctly translated cues and ✗
represents the under-translated cues.
In addition, the attention flow values in Dropped and Correct are almost the
same in EN$\rightarrow$ZH and the correlation is smaller than 0.1. In
ZH$\rightarrow$EN, the attention flow is more distinct in the two cue groups,
but the correlation values are still smaller than 0.15. Compared to raw
attention weights, attention flow can provide more accurate information flow
to the decoder, but neither raw attention weights nor attention flow exhibit
any correlation between under-translation and the amount of negation
information passed to the decoder.
Our analysis indicates that under-translation of negation cues may still occur
even though there is information flow from the source negation cue to the
decoder. This indicates that methods to manipulate the attention flow, such as
coverage models or context gates (Tu et al., 2016, 2017) may not be sufficient
to force the model to produce negation cues. Our results also indicate that
under-translation of negation cues may not be easily detectable via an
analysis of attention.
#### 5.1.3 Training Data Considerations
Figure 2: Cosine similarity between negation cues and events, scope, and non-
negation tokens in ZH–EN, using hidden states from different layers. ENC$i$
represents hidden states from the $i$th encoder layer and DEC6 denotes hidden
states from the 6th decoder layer.
To further investigate why a model would fail to learn the seemingly simple
correspondence (in the language pairs under consideration) between source and
target side negation cues, we turn to an analysis of the parallel training
data. Our manual analysis of the test sets has shown a sizeable amount (2–11%)
of rephrasing where the translation of a negation is correct, but avoids
grammatical negation. We hypothesize that such training examples could weaken
the link between grammatical negation cues in the source and target, and
favour their under-translation.
EN \ ZH | has_cue | no_cue
---|---|---
has_cue | 2.60M (10.5%) | 0.15M (0.6%)
no_cue | 4.16M (16.8%) | 17.84M (72.1%)
Table 9: Statistics of sentence pairs with and without cues in ZH–EN,
including absolute number and ratio. “M” is short for million. Numbers in bold
denote sentence pairs with cue-mismatch.
We perform an automatic estimate of cue-matches and cue-mismatches between
source and target in the training data, based on a short list of negation
words (English: no, non, not, 't, nothing, without, none, never, neither;
Chinese: bu, mei, wu, fei, bie, wei, fou, wu). Table 9 displays the number of
cue-match and cue-mismatch sentence pairs.
17.4% of the sentence pairs have a cue-mismatch, predominantly in
ZH$\rightarrow$EN, which agrees with the high amount of rephrasing we observed
in our manual evaluation (Table 6). (Note that this is only a rough
approximation, meant to demonstrate the sizeable amount of mismatched training
data rather than its exact distribution: of 100 randomly selected sentence
pairs that we checked manually, 30% were classified incorrectly, because
English words with negative prefixes/suffixes, such as unknown, are ignored,
while any Chinese word containing a negative character, such as nan fei (South
Africa), is counted as negative.) Such cue-mismatch sentence pairs, mixed with
cue-match pairs, can make learning harder and cause under-translation errors
when there is no paraphrase to compensate for the dropped negation cue.
Thus, one possible solution is to distill or filter training data to remove
cue-mismatch sentence pairs to make the learning easier.
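The word-list heuristic described above can be sketched as follows; the English cue list is from our footnote, while the mapping of the romanized Chinese characters back to hanzi is our assumption:

```python
# English cue words from the footnote; "n't" covers clitic negation like "don't"
EN_CUES = {"no", "non", "not", "'t", "nothing", "without", "none", "never", "neither"}
# bu, mei, wu, fei, bie, wei, fou, wu -- hanzi mapping assumed for illustration
ZH_CUES = set("不没无非别未否勿")

def has_en_cue(tokens):
    return any(t.lower() in EN_CUES or t.lower().endswith("n't") for t in tokens)

def has_zh_cue(sentence):
    return any(ch in ZH_CUES for ch in sentence)

def cue_mismatch(en_tokens, zh_sentence):
    """True when exactly one side of the pair contains a negation cue."""
    return has_en_cue(en_tokens) != has_zh_cue(zh_sentence)

print(cue_mismatch(["I", "will", "not", "go"], "我会去"))    # True
print(cue_mismatch(["I", "will", "not", "go"], "我不会去"))  # False
```

As the footnote notes, this over- and under-counts (e.g. English negative affixes, Chinese words that merely contain a negative character), but it suffices for a corpus-level estimate.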
### 5.2 Intrinsic Investigation
We are also interested in exploring whether NMT models can distinguish
negation from non-negation tokens, and therefore conduct an intrinsic
investigation of hidden states by computing the cosine similarity between
tokens with different negation tags. Since NMT models can translate most
negation instances correctly, we hypothesize that the hidden states are
capable of distinguishing negation from non-negation tokens. We investigate
hidden states from both encoders and decoders. As the hidden state in the last
decoder layer is used for predicting the translation, we only explore the
decoder hidden states at the $6$th layer. We use $Sim_{ce}$ to represent the
cosine similarity between negation cues and negation events, $Sim_{cs}$ to
represent the cosine similarity between negation cues and tokens belonging to
negation scope, and $Sim_{co}$ to represent the cosine similarity between
negation cues and non-negation tokens. We simply use the mean representation
for tokens that are segmented into subwords.
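The similarity computation described above can be sketched as follows; the hidden states and tags are illustrative stand-ins for real encoder/decoder states:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def subword_state(subword_states):
    """Mean representation for a token segmented into subwords."""
    return np.mean(subword_states, axis=0)

def mean_similarity(states, tags, tag_a, tag_b):
    """Average cosine similarity between states tagged tag_a and tag_b,
    e.g. Sim_ce = mean_similarity(states, tags, 'cue', 'event')."""
    A = [h for h, t in zip(states, tags) if t == tag_a]
    B = [h for h, t in zip(states, tags) if t == tag_b]
    return sum(cosine(a, b) for a in A for b in B) / (len(A) * len(B))

states = [np.array([1.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
tags = ["cue", "event", "other"]
print(mean_similarity(states, tags, "cue", "event"))  # 1.0
```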
Figure 2 shows the cosine similarity between negation cues and events, scope,
and non-negation tokens, using hidden states from encoders and decoders.
$Sim_{ce}$ is substantially higher than $Sim_{cs}$, and $Sim_{cs}$ is higher
than $Sim_{co}$. This result reveals that negation events are closer to
negation cues compared to tokens belonging to the negation scope. We can also
infer that NMT models can tell negation and non-negation tokens apart as
$Sim_{co}$ is distinctly lower than $Sim_{ce}$ and $Sim_{cs}$. However, even
the highest $Sim_{ce}$ is only around 0.5, which means that the
representations of negation components are quite different.
| Cues | Scope | Events
---|---|---|---
Data | Model | P | R | F1 | P | R | F1 | P | R | F1
Dev | Liu et al. (2018) | 0.490 | 0.420 | 0.450 | 0.640 | 0.440 | 0.500 | 0.400 | 0.270 | 0.320
ENC | 0.915 | 0.665 | 0.770 | 0.814 | 0.530 | 0.642 | 0.598 | 0.335 | 0.429
DEC | 0.754 | 0.488 | 0.592 | 0.738 | 0.489 | 0.588 | 0.487 | 0.272 | 0.348
Test | Liu et al. (2018) | 0.478 | 0.382 | 0.425 | 0.583 | 0.312 | 0.406 | 0.338 | 0.180 | 0.235
ENC | 0.892 | 0.581 | 0.704 | 0.743 | 0.496 | 0.595 | 0.496 | 0.285 | 0.362
DEC | 0.686 | 0.362 | 0.474 | 0.656 | 0.456 | 0.538 | 0.470 | 0.225 | 0.304
Table 10: Precision (P), recall (R), and F1 scores of the negation projection
tasks in EN$\rightarrow$ZH, using NMT hidden states, compared with the
word-alignment-based method of Liu et al. (2018). ENC denotes the hidden
states from the 1st encoder layer in cue projection and from the 6th encoder
layer in scope/event projection. DEC denotes the hidden states from the 6th
decoder layer.
In the encoder, $Sim_{ce}$, $Sim_{cs}$, and $Sim_{co}$ follow the same trend:
similarity increases in upper layers. This also suggests that negation cues
interact not only with events and scope but also with non-negation tokens.
Compared to the negation representations from encoders, the negation
representations from decoders are less distinct because they are closer to
each other. $Sim_{ce}$, $Sim_{cs}$ and $Sim_{co}$ are higher when using the
hidden states from the $6$th decoder layer (DEC6) than when using the $6$th
encoder layer (ENC6). We attribute this to the fact that hidden states in
decoders are more contextualized because they consider contextual information
from both the source and the target.
### 5.3 Probing NMT Models on Negation
We have shown that NMT models can distinguish negation and non-negation tokens
in the previous section, but how much information about negation has been
captured by NMT models is still unclear. In this section we will investigate
the ability to model negation in an extrinsic way, i.e., probing hidden states
on negation in a negation projection task Liu et al. (2018) and a negation
detection task Fancellu et al. (2018). In the negation projection task,
instead of projecting English negation annotations to Chinese translations
using word alignment, we use probing classifiers trained on Chinese to
directly generate the negation annotations. In the negation detection task in
English, we employ simple classifiers, rather than specifically designed
models, to classify each token. In brief, given a hidden state, we train
classifiers to predict its negation tag: cue, event, scope, or others.
#### 5.3.1 Settings
The probing task on negation cues is a binary classification task with the
output space $\\{\textit{cue},\textit{others}\\}$, while the classifiers for
event and scope solve tri-class classification tasks with the output space
$\\{\textit{cue},\textit{event/scope},\textit{others}\\}$, because predicting
only event/scope is too challenging for these classifiers.
The probing classifiers in this section are feed-forward neural networks
(MLPs) with a single hidden layer and ReLU activation. The size of the hidden
layer is set to 512 and we use the Adam optimizer. The
classifiers are trained using cross-entropy loss. Each classifier is trained
on the training set for 100 epochs and tuned on the development set. We select
the model that performs best (F1 score) on the development set and apply it to
the test set. In addition, we train 5 times with different seeds for each
classifier and report average results. We use precision, recall, and F1 score
as evaluation metrics.
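The probing classifier above can be sketched in miniature as follows: a one-hidden-layer ReLU MLP trained with cross-entropy. This stand-in uses plain NumPy, small dimensions, full-batch gradient descent instead of Adam, and synthetic data in place of real NMT hidden states:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, n_cls, n = 8, 32, 3, 300        # paper: d_hid = 512, d_in = model dim
X = rng.normal(size=(n, d_in))               # stand-in for NMT hidden states
y = (X[:, 0] > 0.5).astype(int) + (X[:, 1] > 0.5).astype(int)  # toy tags 0..2

W1 = rng.normal(scale=0.1, size=(d_in, d_hid)); b1 = np.zeros(d_hid)
W2 = rng.normal(scale=0.1, size=(d_hid, n_cls)); b2 = np.zeros(n_cls)

def forward(X):
    h = np.maximum(X @ W1 + b1, 0.0)                   # ReLU hidden layer
    z = h @ W2 + b2
    p = np.exp(z - z.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)         # softmax probabilities

def loss(p):
    return float(-np.log(p[np.arange(n), y] + 1e-12).mean())  # cross-entropy

loss0 = loss(forward(X)[1])
for _ in range(300):                                   # plain gradient descent
    h, p = forward(X)
    g = (p - np.eye(n_cls)[y]) / n                     # d loss / d logits
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ W2.T) * (h > 0)                          # back-prop through ReLU
    gW1, gb1 = X.T @ gh, gh.sum(0)
    W2 -= 0.3 * gW2; b2 -= 0.3 * gb2; W1 -= 0.3 * gW1; b1 -= 0.3 * gb1
print(loss(forward(X)[1]) < loss0)  # True: training reduces the loss
```

In the paper the classifiers are trained with Adam for 100 epochs on real hidden states and selected by development-set F1; this sketch only demonstrates the architecture and objective.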
Figure 3: Results (%) on negation scope detection in English. MLP is the
probing classifier; GCN is a graph convolutional network; D-LSTM is a
bidirectional dependency LSTM.

Figure 4: Results on negation cue/scope detection in ZH–EN, using encoder
hidden states from sentences where the negation is correctly translated
(Correct) and incorrectly translated (Incorrect).
#### 5.3.2 Negation Projection
Table 10 shows the projection results of negation cues, scope, and events, on
both development and test sets. ENC/DEC refers to using hidden states from
encoders or decoders. ENC achieves the best result on all the negation
projection tasks and is significantly better than the word alignment based
method in Liu et al. (2018). ENC also performs better than DEC, which means
that negation is better modeled in encoder hidden states than in decoder
hidden states.
Figure 5: F1 scores of the negation projection tasks, on the development set,
using hidden states from different encoder layers.
In addition, we investigate hidden states from different encoder layers.
Figure 5 shows the F1 scores on the development set, using hidden states from
different encoder layers. We can see that hidden states from lower layers
perform better in negation cue projection, while hidden states from upper
layers are better in negation event/scope projection. One possible explanation
is that negation cues in upper layers are fused with other negation
information, which confuses the classifier, whereas negation events and scope
in upper layers interact more with negation cues and non-negation tokens,
which makes them more distinctive.
#### 5.3.3 Negation Scope Detection
Figure 3 shows the results of the negation scope detection task; we report
only the best-performing setting, which uses encoder hidden states. The MLP
classifier achieves 74.31% precision, 75.14% recall, and 74.72% F1 (using
hidden states from the 6th encoder layer; hidden states from other encoder
layers and from decoders gave similar results as in the negation projection
task), which is distinctly inferior to the other two models. However, the
methods from Fancellu et al. (2018) are specifically
designed for negation scope detection and add extra information (negation
cues, POS tags) to supervise the model, while the MLP classifier is designed
to jointly predict negation cues as well, only using hidden states. We can
conclude that some information about negation scope is well encoded in hidden
states, but there is still room for improvement.
#### 5.3.4 Incorrectly Translated Sentences
We further probe encoder hidden states from correctly and incorrectly
translated sentences on negation cues and scope, to explore the quality of
hidden states from incorrectly translated sentences. Note that we do not
consider the under-translated cues. Figure 4 exhibits the performance of
negation detection on cues and scope. Correct represents hidden states from
correctly translated sentences and Incorrect stands for hidden states from
incorrectly translated sentences. Incorrect performs worse than Correct,
especially on the negation cue detection task, which confirms the
effectiveness of using probing tasks to explore the information about negation
in hidden states.
## 6 Conclusion
In this paper, we have explored the ability of NMT models to translate
negation through evaluation and interpretation. The accuracy of manual
evaluation in EN$\rightarrow$DE, DE$\rightarrow$EN, EN$\rightarrow$ZH, and
ZH$\rightarrow$EN is 95.7%, 94.8%, 93.4%, and 91.7%, respectively. The
contrastive evaluation shows that deleting a negation cue from references is
more confusing to NMT models than inserting a negation cue into references,
which indicates that NMT models have a bias against sentences with negation.
We show that NMT models make fewer mistakes in EN–DE than in EN–ZH. Moreover,
there are more errors in DE/ZH$\rightarrow$EN than in EN$\rightarrow$DE/ZH.
We also have investigated the information flow of negation by computing the
attention weights and attention flow. We demonstrate that the negation
information has been well passed to the decoder, and that there is no
correlation between the amount of negation information transferred and whether
the cues are under-translated or not. Thus, we consider attempts to detect or
even fix under-translation of cues via an analysis or manipulation of the
attention flow to have little promise. However, our analysis of the training
data shows that negation is often rephrased, leading to cue mismatches which
could confuse NMT models. This suggests that distilling or filtering training
data to make grammatical negation more consistent between source and target
could reduce this under-translation problem.
In addition, we show that NMT models can distinguish negation and non-negation
tokens very well, and NMT models can encode substantial information about
negation in hidden states but nevertheless leave room for improvement.
Moreover, encoder hidden states capture more information about negation than
decoder hidden states; negation cues are better modeled in lower encoder
layers while negation events and tokens belonging to negation scope are better
modeled in higher encoder layers.
Overall, we show that the modeling of negation in NMT has improved with the
evolution of NMT – with deeper and more advanced networks; the performance on
translating negation varies between language pairs and directions. We also
find that the main error types on negation have shifted from SMT to NMT –
under-translation is the most frequent error type in NMT while other error
types such as reordering were equally or more prominent in SMT.
We conduct evaluations only on EN–DE and EN–ZH, and German, Chinese, and
English express negation in rather similar ways. In the future, it will be
interesting to explore languages with different characteristics regarding
negation, such as Italian, Spanish, and Portuguese, where double negation is
very common.
## Acknowledgments
We thank all reviewers, action editors (Mauro Cettolo and Chris Quirk) for
their valuable and insightful comments. We also thank Qianchu Liu for
providing the NegPar data set. We acknowledge the computational resources
provided by CSC in Helsinki and Sigma2 in Oslo through NeIC-NLPL
(www.nlpl.eu). GT was mainly funded by the Chinese Scholarship Council (No.
201607110016).
## References
* Abnar and Zuidema (2020) Samira Abnar and Willem Zuidema. 2020. Quantifying Attention Flow in Transformers. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 4190–4197, Online. Association for Computational Linguistics.
* Ataman et al. (2019) Duygu Ataman, Orhan Firat, Mattia A. Di Gangi, Marcello Federico, and Alexandra Birch. 2019. On the Importance of Word Boundaries in Character-Level Neural Machine Translation. In _Proceedings of the 3rd Workshop on Neural Generation and Translation_ , pages 187–193, Hong Kong, China. Association for Computational Linguistics.
* Baker et al. (2012) Kathryn Baker, Michael Bloodgood, Bonnie J. Dorr, Chris Callison-Burch, Nathaniel W. Filardo, Christine Piatko, Lori Levin, and Scott Miller. 2012. Modality and Negation in SIMT Use of Modality and Negation in Semantically-Informed Syntactic MT. _Computational Linguistics_ , 38(2):411–438.
* Bentivogli et al. (2016) Luisa Bentivogli, Arianna Bisazza, Mauro Cettolo, and Marcello Federico. 2016. Neural versus Phrase-Based Machine Translation Quality: A Case Study. In _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_ , pages 257–267, Austin, USA. Association for Computational Linguistics.
* Beyer et al. (2017) Anne Beyer, Vivien Macketanz, Aljoscha Burchardt, and Philip Williams. 2017. Can Out-of-the-Box NMT Beat a Domain-Trained Moses on Technical Data? In _The 20th Annual Conference of the European Association for Machine Translation (EAMT)_ , pages 41–46, Prague, Czech Republic. Charles University.
* Bojar et al. (2017) Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Raphael Rubino, Lucia Specia, and Marco Turchi. 2017. Findings of the 2017 Conference on Machine Translation (WMT17). In _Proceedings of the Second Conference on Machine Translation_ , pages 169–214, Copenhagen, Denmark. Association for Computational Linguistics.
* Bojar et al. (2018) Ondřej Bojar, Philip Williams, David Mareček, Martin Popel, Rudolf Rosa, Josef Jon, and Michal Kašpar. 2018. Final Report on Employing Semantic Role Labelling and Shallow Proxies for Negation and Fidelity Checking in MT. Technical report, The University of Edinburgh.
* Collins et al. (2005) Michael Collins, Philipp Koehn, and Ivona Kučerová. 2005. Clause Restructuring for Statistical Machine Translation. In _Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics)_ , pages 531–540, Ann Arbor, USA. Association for Computational Linguistics.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, USA. Association for Computational Linguistics.
* Ding et al. (2017) Yanzhuo Ding, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Visualizing and Understanding Neural Machine Translation. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1150–1159, Vancouver, Canada. Association for Computational Linguistics.
* Fancellu et al. (2018) Federico Fancellu, Adam Lopez, and Bonnie Webber. 2018. Neural Networks for Cross-lingual Negation Scope Detection. _CoRR_ , cs.CL/1810.02156v1.
* Fancellu and Webber (2014) Federico Fancellu and Bonnie Webber. 2014. Applying the Semantics of Negation to SMT Through N-best List Re-ranking. In _Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics_ , pages 598–606, Gothenburg, Sweden. Association for Computational Linguistics.
* Fancellu and Webber (2015) Federico Fancellu and Bonnie Webber. 2015. Translating Negation: A Manual Error Analysis. In _Proceedings of the Second Workshop on Extra-Propositional Aspects of Meaning in Computational Semantics (ExProM 2015)_ , pages 2–11, Denver, USA. Association for Computational Linguistics.
* Gulcehre et al. (2016) Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016\. Pointing the Unknown Words. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 140–149, Berlin, Germany. Association for Computational Linguistics.
* Hieber et al. (2017) Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2017. Sockeye: A Toolkit for Neural Machine Translation. _CoRR_ , cs.CL/1712.05690v1.
* Hossain et al. (2020) Md Mosharaf Hossain, Antonios Anastasopoulos, Eduardo Blanco, and Alexis Palmer. 2020. It’s not a Non-Issue: Negation as a Source of Error in Machine Translation. In _Findings of the Association for Computational Linguistics: EMNLP 2020_ , pages 3869–3885, Online. Association for Computational Linguistics.
* Huddleston and Pullum (2002) Rodney Huddleston and Geoffrey K. Pullum. 2002. _The Cambridge Grammar of the English Language_. Cambridge University Press, Cambridge, UK.
* Jean et al. (2015) Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On Using Very Large Target Vocabulary for Neural Machine Translation. In _Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 1–10, Beijing, China. Association for Computational Linguistics.
# Discovery of multiple p-mode pulsation frequencies in the roAp star, HD
86181
Fangfei Shi1,2, Donald W. Kurtz3,4, Daniel L. Holdsworth4, Hideyuki Saio5,
Margarida S. Cunha6, Huawei Zhang1,2, Jianning Fu7, G. Handler8
1Department of Astronomy, School of Physics, Peking University, Beijing
100871, P. R. China
2Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing
100871, P. R. China
3Centre for Space Research, Physics Department, North-West University,
Mahikeng 2745, South Africa
4Jeremiah Horrocks Institute, University of Central Lancashire, Preston PR1
2HE, UK
5Astronomical Institute, Graduate School of Science, Tohoku University, Sendai
980-8578, Japan
6Instituto de Astrofísica e Ciências do Espaço, Universidade do Porto, CAUP,
Rua das Estrelas, PT4150-762 Porto, Portugal
7Department of Astronomy, Beijing Normal University, Beijing 100875, P. R.
China
8Nicolaus Copernicus Astronomical Center, Polish Academy of Sciences, ul.
Bartycka 18, 00-716, Warsaw, Poland
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
We report the frequency analysis of a known roAp star, HD 86181 (TIC
469246567), with new inferences from TESS data. We derive the rotation
frequency to be $\nu_{\rm rot}=0.48753\pm 0.00001$ d-1. The pulsation
frequency spectrum is rich, consisting of two doublets and one quintuplet,
which we interpret to be oblique pulsation multiplets from consecutive, high-
overtone dipole, quadrupole and dipole modes. The central frequency of the
quintuplet is $232.7701$ d-1 (2.694 mHz). The phases of the sidelobes, the
pulsation phase modulation, and a spherical harmonic decomposition all show
that the quadrupole mode is distorted. Following the oblique pulsator model,
we calculate the rotation inclination, $i$, and magnetic obliquity, $\beta$,
of this star, which provide detailed information about the pulsation geometry.
The $i$ and $\beta$ derived from the best fit of the pulsation amplitude and
phase modulation to a theoretical model, including the magnetic field effect,
slightly differ from those calculated for a pure quadrupole, indicating the
contributions from $\ell=4,6,8,...$ are small. Non-adiabatic models with
different envelope convection conditions and physics configurations were
considered for this star. It is shown that models with envelope convection
almost fully suppressed can explain the excitation at the observed pulsation
frequencies.
###### keywords:
stars: oscillations – stars: variables – stars: individual: HD 86181 (TIC
469246567; V437 Car) – stars: chemically peculiar – techniques: photometric –
asteroseismology
## 1 Introduction
The Ap (chemically peculiar A-type) stars have non-uniform distributions of
chemical abundances on their surfaces and strong magnetic fields. These
magnetic fields suppress surface convection that then leads to element
stratification. For some heavy elements, such as Eu, Sr and Si, the radiation
pressure can lift them up to the surface against gravity leading to many
absorption features. These elemental overabundances occur in spots, making Ap
stars obliquely rotating variable stars of a class known as $\alpha^{2}$ CVn
stars (Pyper, 1969).
Some cool Ap stars exhibit high-overtone, low-degree pressure pulsation modes
with periods between 4.7 and 24 min (frequencies in the range $55.8-300$ d-1,
i.e. $0.6-3.5$ mHz; Holdsworth et al., 2021) and photometric amplitudes up to
0.018 mag in Johnson $B$ (Cunha et al., 2019; Kochukhov, 2009; Smalley et al.,
2015). They are called rapidly oscillating Ap (roAp) stars. Some of these
stars show both rotation features with periods of days to decades, and
pulsation features in their light curves.
Stibbs (1950) developed the oblique rotator model of the Ap stars, which
accounts for the magnetic, spectral, and light variations observed in Ap
stars. Following this model, Kurtz (1982) introduced the oblique pulsator
model, which was generalized with the effects of both the magnetic field and
rotation taken into account (Kurtz, 1982; Dziembowski & Goode, 1985;
Shibahashi & Takata, 1993; Takata & Shibahashi, 1994, 1995; Saio & Gautschy,
2004; Bigot & Dziembowski, 2002; Bigot & Kurtz, 2011). According to this
model, the pulsation axis is misaligned with the rotation axis, and generally
closely aligned to the magnetic axis. When the star rotates, the viewing
aspect of the pulsation modes varies along the line of sight, leading to
apparent amplitude and phase modulation. This modulation can provide
information on the geometry of observed pulsations, hence mode identification,
which is necessary for asteroseismic inference with forward modelling.
Since the first roAp stars were discovered by Kurtz (1982), 88 roAp stars have
been found (Smalley et al., 2015; Hey et al., 2019; Cunha et al., 2019;
Balona, Holdsworth & Cunha, 2019; Holdsworth et al., 2021). Asteroseismology
is a useful method to diagnose stellar structure and interior physics from the
evidence of surface pulsations (Cunha, Fernandes & Monteiro, 2003). Progress
of this research for roAp stars has been hindered by the relatively small
number of known stars, and because their rapid pulsation requires dedicated
observations and high accuracy to detect the small pulsation amplitudes (Hey
et al., 2019; Cunha et al., 2019; Balona, Holdsworth & Cunha, 2019).
The space telescopes Kepler and TESS (Transiting Exoplanet Survey
Satellite) provide an opportunity to detect oscillations well below the
amplitude threshold of ground-based observations. Both Kepler and TESS have
short cadence (2 min for TESS and 58.89 s for Kepler) observations, but Kepler
only observed 512 stars in this mode during each observing ‘quarter’. However,
the standard long cadence sampling frequency of the Kepler 30-min observations
is generally too low for studying the pulsation of roAp stars in detail.
Murphy, Shibahashi & Kurtz (2013) showed that the Nyquist ambiguity in the
long-cadence data can be resolved as a result of the barycentric corrections applied to
Kepler time stamps, and Hey et al. (2019) discovered 6 roAp candidates through
this method. Compared to the Kepler 58.89-s observations, TESS is observing
many more stars with 2-min observations with sufficiently long time bases to
detect pulsations. Up to now, 21 new roAp stars have been found from just TESS
sectors 1 to 13 (Cunha et al., 2019; Balona, Holdsworth & Cunha, 2019;
Holdsworth et al., 2021).
Before the TESS observations of our target, HD 86181, Kurtz & Martinez (1994)
discovered it to be a roAp star from 4.85 hr of ground-based data. They
reported the star to have a pulsation period of 6.2 min and with an amplitude
of 0.35 mmag through a Johnson $B$ filter. That period corresponds to a
frequency of 2.688 mHz, or 232.26 d-1. No further detailed studies of the
pulsations in HD 86181 have been published.
Parameters for this star are listed in Table 1. The effective temperature was
estimated using the Strömgren photometric indices extracted from the
catalogue of Hauck & Mermilliod (1998) and the calibrations in the TEMPLOGG
code (Rogers, 1995), which were developed based on the work of Moon & Dworetsky
(1985) and Napiwotzki, Schoenberner & Wenske (1993). Since no convincing
uncertainty is given by this method, we indicate, instead, the range of
$T_{\rm eff}$ values published in the literature.
The luminosity was calculated through the relation $-2.5\log
L=M_{G}+BC_{G}(\rm T_{eff})-M_{bol,\odot}$, where $BC_{G}(\rm T_{eff})$ is a
temperature dependent bolometric correction defined in Andrae et al. (2018),
and the uncertainty of BC (Bolometric Correction) is 0.13, based on a
comparison with Ap data that is described in some detail in Cunha et al.
(2019). While the uncertainty derived in Cunha et al. (2019) was based on a
comparison of Ap-star measurements with the empirical $BC_{V}$ calibration by
Flower (1996) and, thus, the consistency of using it with the $BC_{G}$ values
derived from the calibration of Andrae et al. (2018) may be questionable, it
provides a more conservative result than the uncertainty derived from Andrae
et al. (2018), which does not account for the stars’ peculiarities. The
extinction in the $G$ band used to calculate $M_{G}$ here was taken from
Anders et al. (2019), with an uncertainty of 0.2, the value indicated in
Figure 20 of Anders et al. (2019). The parallax was taken from Gaia eDR3
(Gaia Collaboration, 2020). The adopted $M_{\rm bol,\odot}$ is 4.74, as
defined by IAU Resolution 2015 B2
(https://www.iau.org/static/resolutions/IAU2015_English.pdf).
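The luminosity calculation above can be sketched in Python as follows. This is an illustration, not the paper's pipeline; in particular, the numerical $BC_{G}$ value in the example call is an assumed placeholder, not the actual Andrae et al. (2018) calibration.

```python
import math

# Sketch of the luminosity estimate: -2.5 log L = M_G + BC_G - M_bol,sun.
def luminosity_lsun(parallax_mas, g_mag, a_g, bc_g, m_bol_sun=4.74):
    """L/Lsun from a Gaia parallax, apparent G magnitude, extinction A_G
    and bolometric correction BC_G."""
    dist_pc = 1000.0 / parallax_mas                        # distance in pc
    m_g_abs = g_mag - a_g - 5.0 * math.log10(dist_pc) + 5.0  # absolute G mag
    return 10.0 ** (-0.4 * (m_g_abs + bc_g - m_bol_sun))

# Table 1 values for HD 86181; BC_G = 0.06 is assumed for illustration only.
L = luminosity_lsun(parallax_mas=4.15, g_mag=9.341, a_g=0.1, bc_g=0.06)
```

With these inputs the function returns roughly 8.7 L⊙, consistent with the tabulated $8.8\pm 1.9$ L⊙ given the assumed bolometric correction.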
Table 1: Parameters of HD 86181.
Apparent $G$ magnitude | $9.341\pm 0.003$ | Gaia Collaboration (2020)
---|---|---
Extinction in $G$ band | $0.1\pm 0.2$ | Anders et al. (2019)
Spectral type | F0 Sr | Renson & Manfroid (2009)
Parallax (mas) | $4.15\pm 0.01$ | Gaia Collaboration (2020)
Distance (pc) | $241.0\pm 0.6$ | derived from parallax
$b-y$ | 0.175 | Perry (1991)
$m_{1}$ | 0.245 | Perry (1991)
$c_{1}$ | 0.702 | Perry (1991)
Hβ | 2.804 | Perry (1991)
$T_{\rm eff}$(K) | $7750$; [7240-7910] | This work∗; Literature+
Luminosity (L⊙) | $8.8\pm 1.9$ | Andrae et al. (2018)
Mean longitudinal | $536\pm 75$ | Bagnulo et al. (2015)
magnetic field (G) | |
* •
∗ based on Rogers 1995
* •
+ Trifonov et al. (2020), Anders et al. (2019), Mathys, Kharchenko & Hubrig (1996)
## 2 TESS observations
HD 86181 was observed by TESS in sectors 9 and 10 in 2-min cadence. The data
have a time span of 51.76 d with a centre point in time of $t_{0}={\rm
BJD}~{}2458569.80077$, and comprise 33832 data points after some outliers were
removed. The standard PDC SAP (pre-search data conditioning simple aperture
photometry) fluxes provided by MAST (Mikulski Archive for Space Telescopes)
were used and normalised by dividing by the median flux separately for each
sector. Relative magnitudes were then calculated from the processed fluxes,
giving the light curve shown in the top panel of Fig. 1.
There are obvious rotational variations from spots, as is typical of the
magnetic Ap stars. Within the oblique rotator model, the double wave nature of
the rotational variations suggests that two principal spots with enhanced
brightness on the stellar surface are seen. The high frequency pulsation
cannot be seen in this figure at this resolution.
## 3 Frequency analysis
### 3.1 Rotation frequency analysis
Before we conducted a detailed analysis of the rotation frequency of HD 86181,
we first measured the rotation frequency with a coarse Discrete Fourier
Transform (DFT; Kurtz, 1985) so that we could bin the data over exactly one
rotation cycle. This allowed us to assess the instrumental variation, which we
subsequently fit with a polynomial and removed from the original light curve.
We then calculated a DFT with a finer frequency grid, as shown in Fig. 2, to
measure the stellar rotation frequency. The low frequencies dominate in the
spectrum, so we zoom in to both the low frequency range (second panel) and
high frequency range (third panel). From the amplitude spectrum at low
frequency, the rotational harmonics are clearly seen. Although the highest
peak is at a frequency of 0.97 d-1, considering the phase plot, we derive the
rotation frequency to be around 0.48 d-1. Because the variation is a double
wave, the second harmonic has the highest amplitude.
A linear least-squares fit was calculated to find the best amplitudes and
phases of the rotation frequency and its 4 visible harmonics, followed by a
non-linear least-squares fit to optimise the results. The rotation frequency
is derived to be $\nu_{\rm rot}=0.48753\pm 0.00001$ d-1 ($P_{\rm rot}=2.05116\pm
0.00004$ d) by dividing the frequency of the second harmonic, which has the
highest amplitude and hence the best signal-to-noise ratio, by two. Besides
the rotation frequency and its harmonics, some signal remains in the low
frequency range, probably because the instrumental variation was not removed
completely. These signals were removed prior to the non-linear least-squares
fits for better estimates of the uncertainties. The uncertainties were derived
following Montgomery & O'Donoghue (1999). The rotation period is among the
shortest of the known roAp stars, after HD 43226 (Cunha et al., 2019), HD
216641 (Cunha et al., 2019), and HD 6532 (Kurtz et al., 1996a, b), which have
similarly short rotation periods of $P_{\rm rot}=1.71441$ d, $P_{\rm
rot}=1.876660$ d, and $P_{\rm rot}=1.944973$ d, respectively.
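The rotation-frequency measurement can be sketched with a simple discrete Fourier transform on synthetic data; the cadence, amplitudes, noise level and frequency grid below are illustrative assumptions, not the actual TESS reduction.

```python
import numpy as np

def amplitude_spectrum(t, y, freqs):
    """DFT-style amplitude (same units as y) at each trial frequency (d^-1)."""
    amps = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        arg = 2.0 * np.pi * f * t
        c = np.mean(y * np.cos(arg))
        s = np.mean(y * np.sin(arg))
        amps[k] = 2.0 * np.hypot(c, s)
    return amps

# Synthetic double-wave rotation curve: the second harmonic dominates,
# as for HD 86181 (amplitudes in mmag, loosely following Table 2).
rng = np.random.default_rng(1)
t = np.arange(0.0, 51.76, 10.0 / 1440.0)       # 10-min sampling, for speed
nu_rot = 0.48753
y = (3.0 * np.cos(2 * np.pi * nu_rot * t)
     + 7.3 * np.cos(2 * np.pi * 2 * nu_rot * t)
     + rng.normal(0.0, 0.5, t.size))
freqs = np.arange(0.05, 3.0, 0.002)
amps = amplitude_spectrum(t, y, freqs)
peak = freqs[np.argmax(amps)]                  # highest peak is 2 * nu_rot
rot = peak / 2.0                               # divide by two, as in the text
```

Because the double wave makes the second harmonic the strongest peak, the recovered `rot` lands within a grid step of the injected rotation frequency.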
Figure 1: Top: The light curve of HD 86181 showing the rotational variations.
Bottom: Phase folded light curve of HD 86181, folded on the rotation period of
2.05116 d; two rotation cycles are shown for clarity. The data are from TESS
sectors 9 and 10. The time zero-point, BJD 2458569.26128, is the time of
pulsation maximum. The phases are binned every 0.001 phase bin.
Figure 2: The frequency spectrum of HD 86181. Top: The amplitude spectrum of
the S9–10 data out to 300 d-1. The rotational frequencies at low frequency
dominate. The pulsation frequencies centred on 232.2 d-1 are difficult to see
at this scale. Second: The low frequency rotational harmonics. Third: the
pulsation frequencies for the high-pass filtered data. Bottom: The frequency
spectrum after the frequencies in Table 2 have been removed. The red
horizontal lines are 4 times the noise level. The top x-axis is the
corresponding frequency in $\mu$Hz.
### 3.2 The pulsations
To study the pulsations, a high-pass filter was used to remove the rotational
light variations, any remaining instrumental artefacts and other low
frequencies. The high-pass filter was a simple consecutive pre-whitening of
low frequency peaks extracted by Fourier analysis until the noise level was
reached in the frequency range $0-6$ d-1. The third panel in Fig. 2 shows the
amplitude spectrum for the high-pass filtered data around the high-frequency
variability. By inspection it can be seen that there is a central quintuplet
and two doublets, one at higher and another at lower frequency than the
quintuplet. After removing these three groups of frequencies, five singlets
still remain (see the bottom panel of Fig. 2). However, their frequencies are
similar to the quintuplet and two doublets within the uncertainties. These may
be caused by amplitude or frequency modulation over the time span of the data
set, 51.76 d.
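The pre-whitening high-pass filter can be sketched as below on synthetic data; the frequency grid, stopping criterion and signal amplitudes are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def peak_amp(t, y, freqs):
    """Amplitude spectrum on a trial-frequency grid (simple DFT estimator)."""
    amps = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        arg = 2 * np.pi * f * t
        amps[k] = 2 * np.hypot(np.mean(y * np.cos(arg)),
                               np.mean(y * np.sin(arg)))
    return amps

def high_pass_prewhiten(t, y, f_max=6.0, n_max=12, snr_stop=4.0):
    """Repeatedly subtract the strongest sinusoid below f_max until the
    highest remaining peak is under snr_stop times the median level."""
    y = y.copy()
    freqs = np.arange(0.05, f_max, 0.01)
    for _ in range(n_max):
        amps = peak_amp(t, y, freqs)
        k = np.argmax(amps)
        if amps[k] < snr_stop * np.median(amps):
            break                       # noise level reached in 0-6 d^-1
        arg = 2 * np.pi * freqs[k] * t
        X = np.column_stack([np.cos(arg), np.sin(arg)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        y -= X @ coef                   # least-squares sinusoid removal
    return y

# Synthetic light curve: rotation signal plus a weak roAp pulsation (mmag).
rng = np.random.default_rng(2)
t = np.arange(0.0, 51.76, 2.0 / 1440.0)        # 2-min cadence
y = (3.0 * np.cos(2 * np.pi * 0.48753 * t)
     + 0.27 * np.cos(2 * np.pi * 232.7701 * t)
     + rng.normal(0.0, 0.5, t.size))
clean = high_pass_prewhiten(t, y)
```

After the filter, the low-frequency rotation signal is suppressed while the high-frequency pulsation amplitude is left essentially untouched.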
To test this, we removed the doublets and singlets from the light curve and
fitted $\nu_{1}$ to sections of the data that are exactly one rotation cycle
long and calculated the amplitude and phase. Fig. 3 shows there is amplitude
and phase variability with time. By choosing sections of data exactly one
rotation cycle long, the amplitude and phase variations due to oblique
pulsation were averaged out.
If the frequency were stable, there would be no phase variations. As the data
were fitted with the function $\Delta m=A\cos(\nu(t-t_{0})+\phi)$, the
frequency and phase terms are inextricably intertwined (see the section 5.3.2
in Holdsworth et al., 2014), thus a change in one can be interpreted as a
change in the other. Therefore, although we show a change in the phase in Fig.
3, the change could be in the frequency. Such variability is common in roAp
stars studied with high precision data (Holdsworth, 2021).
Figure 3: The pulsation amplitude and phase variations of HD 86181 for the
dominant quadrupole mode. Top: pulsation amplitude variations as a function of
time. Bottom: pulsation phase variations as a function of time.
As in the analysis of rotation frequency, linear and non-linear least squares
fits were used to get optimised results of frequencies, amplitudes and phases.
The non-linear least squares fit results are shown in Table 2. Within the
uncertainties, the sidelobes of the quintuplet are exactly split by the
rotation frequency. In addition to the quintuplet, there are two doublets that
are split by 2$\nu_{\rm rot}$; these are the sidelobes of two dipole pulsation
frequencies that are labeled as $\nu_{2}$ and $\nu_{3}$. For a pure dipole or
quadrupole pulsation, the oblique pulsator model requires that the sidelobes
are split by exactly the rotation frequency of the star, and that the phases
of all components are equal at the time of pulsation maximum. To test this,
the frequency of the quintuplet sidelobes were fixed to be equally spaced by
the rotation frequency, and the zero-point in time was chosen such that the
phases of the first pair of sidelobes are the same, then a linear least
squares fit was applied to the data, with the results shown in Table 3. The
phases of the quintuplet sidelobes are not equal within the uncertainties,
which indicates this star pulsates in a distorted quadrupole mode.
Table 2: A non-linear least squares fit of the frequency multiplets for HD 86181. The zero point for the phases is $t_{0}={\rm BJD}~{}2458569.26128$.

| frequency | amplitude | phase
---|---|---|---
| d-1 | mmag | radians
| | $\pm 0.007$ |
$\nu_{rot}$ | $0.48765\pm 0.00003$ | 2.970 | $5.829\pm 0.003$
$2\nu_{rot}$ | $0.97506\pm 0.00001$ | 7.296 | $2.776\pm 0.001$
$3\nu_{rot}$ | $1.46233\pm 0.00013$ | 0.732 | $0.312\pm 0.013$
$4\nu_{rot}$ | $1.95043\pm 0.00013$ | 0.761 | $6.177\pm 0.013$
$6\nu_{rot}$ | $2.92585\pm 0.00050$ | 0.190 | $0.215\pm 0.049$
$\nu_{2}-\nu_{\rm rot}$ | $229.6162\pm 0.0012$ | 0.059 | $0.28\pm 0.17$
$\nu_{2}+\nu_{\rm rot}$ | $230.5897\pm 0.0014$ | 0.050 | $0.49\pm 0.20$
$\nu_{1}-2\nu_{\rm rot}$ | $231.7947\pm 0.0008$ | 0.091 | $6.00\pm 0.11$
$\nu_{1}-\nu_{\rm rot}$ | $232.2853\pm 0.0013$ | 0.055 | $6.27\pm 0.18$
$\nu_{1}$ | $232.7701\pm 0.0003$ | 0.273 | $6.24\pm 0.04$
$\nu_{1}+\nu_{\rm rot}$ | $233.2587\pm 0.0011$ | 0.062 | $6.17\pm 0.16$
$\nu_{1}+2\nu_{\rm rot}$ | $233.7438\pm 0.0008$ | 0.080 | $0.14\pm 0.12$
$\nu_{3}-\nu_{\rm rot}$ | $235.2495\pm 0.0010$ | 0.071 | $6.11\pm 0.14$
$\nu_{3}+\nu_{\rm rot}$ | $236.2261\pm 0.0012$ | 0.063 | $5.94\pm 0.16$
Table 3: A least squares fit of the frequency multiplets for HD 86181, where the frequency splitting of the rotational sidelobes has been forced to be exactly the rotation frequency. The zero point for the phases, $t_{0}={\rm BJD}~{}2458569.26128$, has been chosen to be a time when the first two orbital sidelobes of the quintuplet have equal phase.

| frequency | amplitude | phase
---|---|---|---
| d-1 | mmag | radians
| | $\pm 0.007$ |
$\nu_{2}-\nu_{\rm rot}$ | 229.6162 | 0.058 | $0.25\pm 0.11$
$\nu_{2}+\nu_{\rm rot}$ | 230.5913 | 0.049 | $0.40\pm 0.14$
$\nu_{1}-2\nu_{\rm rot}$ | 231.7950 | 0.091 | $-0.26\pm 0.07$
$\nu_{1}-\nu_{\rm rot}$ | 232.2826 | 0.053 | $-0.13\pm 0.12$
$\nu_{1}$ | 232.7701 | 0.273 | $-0.05\pm 0.02$
$\nu_{1}+\nu_{\rm rot}$ | 233.2576 | 0.061 | $-0.13\pm 0.11$
$\nu_{1}+2\nu_{\rm rot}$ | 233.7452 | 0.080 | $0.14\pm 0.08$
$\nu_{3}-\nu_{\rm rot}$ | 235.2494 | 0.071 | $-0.13\pm 0.09$
$\nu_{3}+\nu_{\rm rot}$ | 236.2245 | 0.063 | $-0.11\pm 0.11$
We also investigated the impact of the spots on the pulsations. From the
second panel of Fig. 1, the rotational variation caused by the spots amounts
to 20 ppt peak-to-peak. We therefore expect the spots to modulate the
pulsation by a similar factor of 0.02 of the pulsation amplitude, which is at
the $\mu$mag level, well below the noise. The effect of spots on the pulsation
amplitude is therefore negligible.
Finally, harmonics of the pulsation frequencies were also searched for beyond
the Nyquist frequency, $\nu_{Ny}=359.804$ d-1. Only three similar alias groups
centred at $2\nu_{Ny}-\nu_{1}$, $2\nu_{Ny}-\nu_{2}$ and $2\nu_{Ny}-\nu_{3}$
were found, with no evidence of harmonics of pulsation frequencies.
### 3.3 Pulsation amplitude and phase modulation
To study the rotation modulation of the pulsation amplitudes and phases, the
light curve was divided into 217 segments each containing 50 pulsation cycles,
thus each segment had a time span of 0.21 d, or 0.1 of a rotation cycle. Linear
least-squares fitting was applied to these segments at fixed frequency,
$\nu_{1}=232.7701$ d-1, to calculate the pulsation amplitude and phase as a
function of rotation phase. Fig. 4 shows these modulations along with the
rotation light variations for comparison.
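The segment fitting at fixed frequency can be sketched as follows; the synthetic modulation envelope and noise level are illustrative assumptions.

```python
import numpy as np

def segment_amp_phase(t, y, nu, seg_len, t0=None):
    """Fit y = a cos(2 pi nu (t - t0)) + b sin(...) by linear least squares
    in consecutive segments of length seg_len (days); return segment
    midtimes, amplitudes sqrt(a^2 + b^2) and phases in the
    A cos(arg + phi) convention."""
    t0 = t[0] if t0 is None else t0
    mids, amps, phases = [], [], []
    for start in np.arange(t[0], t[-1], seg_len):
        m = (t >= start) & (t < start + seg_len)
        if m.sum() < 10:
            continue
        arg = 2 * np.pi * nu * (t[m] - t0)
        X = np.column_stack([np.cos(arg), np.sin(arg)])
        (a, b), *_ = np.linalg.lstsq(X, y[m], rcond=None)
        mids.append(start + seg_len / 2)
        amps.append(np.hypot(a, b))
        phases.append(np.arctan2(-b, a))
    return np.array(mids), np.array(amps), np.array(phases)

# Synthetic obliquely modulated pulsation: the amplitude swings between
# 0.1 and 0.3 mmag over the 2.05-d rotation (values are illustrative).
rng = np.random.default_rng(3)
t = np.arange(0.0, 51.76, 2.0 / 1440.0)
envelope = 0.2 + 0.1 * np.cos(2 * np.pi * 0.48753 * t)
y = envelope * np.cos(2 * np.pi * 232.7701 * t) + rng.normal(0.0, 0.05, t.size)
mids, amps, phases = segment_amp_phase(t, y, 232.7701, seg_len=0.21)
```

The recovered per-segment amplitudes trace the injected envelope; with 0.21-d segments the averaging over 0.1 of a rotation cycle flattens the extrema by only about 2 per cent.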
Figure 4: The pulsation modulation for pulsation frequency $\nu_{1}$ of HD
86181. Top: The phase folded rotation light curve. Middle: pulsation amplitude
variations as a function of rotation phase. Amplitude points with $1\sigma$
errors greater than 0.12 mmag are not plotted here. Bottom: pulsation phase
variations as a function of rotation phase. Phase points with $1\sigma$ errors
greater than 0.8 rad are not plotted here. The red lines are theoretical
amplitude modulation modelled following Kurtz (1992) with the components from
Table 4. The blue line was calculated based on an oblique pure quadrupole mode
(see section 4). Two rotation cycles are shown. The time zero-point is
$t_{0}={\rm BJD}~{}2458569.26128$.
The maxima of the pulsation amplitude depend on the aspect of the pulsation
axis, while the light extrema depend on the spots. The difference between the
occurrences of the extrema of the pulsation amplitude and the rotational light
variations indicates the position of spots relative to the pulsation axis. In
many Ap stars the surface positions where spots form – particularly for the
rare earth elements – are related to the magnetic field. In the case that the
spots are centred on the pulsation axis which is also fixed close to the
magnetic axis, the rotation phase of pulsation maximum coincides with, or is
near to, the rotation phase of the light extrema. As Handler et al. (2006)
showed for HD 99563, the maximum of pulsation amplitude coincides with the
maximum of rotation light in red filters, and the minimum in blue filters. The
antiphase variations in blue and red filters are related to the flux
redistribution from UV to optical caused by line blocking (Leckrone, 1973).
For HD 86181, pulsation amplitude maximum coincides with the secondary maximum
of the light curve, and after half a cycle, the secondary maximum of pulsation
amplitude coincides with the maximum of the light curve. For a pure quadrupole
pulsator, the intrinsic pulsation amplitude peaks at both pulsation poles and
at the equator. The pulsation maximum at the poles is twice that at the
equator, but with inverse phase. We assume the maximum of the pulsation
amplitude is generated at the pole, and the secondary maximum at the equator.
This assumption is verified with the oblique pulsator model below.
At rotation phase 0, which we chose to be the time of pulsation maximum for
the quadrupole mode, we see that the spots show the secondary rotational light
maximum. In contrast, for another roAp star with a quadrupole mode, KIC
10685175, the maximum of the pulsation amplitude coincides with the minimum of
the rotational light (Shi et al., 2020; see Fig. 5).
Figure 5: Same as Fig. 4 for KIC 10685175. The time zero-point is $t_{0}={\rm
BJD}~{}2458711.21931$.
The pulsation phase as a function of rotation does not show the $\pi$-rad
phase reversal expected at the times of amplitude minima for an undistorted
mode, although the pulsation phase shows bumps at those times.
This then argues for a distorted quadrupole mode, and also is similar to what
is observed in other roAp stars with well-studied quadrupole modes
(Holdsworth, Saio & Kurtz, 2019; Holdsworth et al., 2018b, c, a, 2014; Kurtz
et al., 1996b; Holdsworth et al., 2016).
We also checked the pulsation amplitude and phase modulations of the two
central frequencies ($\nu_{2}=230.1038$ d-1 and $\nu_{3}=235.7370$ d-1) of
the dipole modes, as seen in Fig. 6. However, because of the low amplitudes,
the modulation curves are quite scattered, especially the phase modulation
curve. The two dipole modes show similar behaviour: the pulsation amplitude
reaches primary and secondary maximum at rotation phases 0 and 0.5,
respectively, the same as the quadrupole mode. The pulsation phase variations
have large errors, hence $\pi$-rad pulsation phase changes at rotation phases
0.25 and 0.75 – typical behaviour for dipole modes – are neither ruled out,
nor supported by the plots in Fig. 6.
Figure 6: Top panel: The pulsation amplitude (left) and phase (right)
modulation of the dipole central frequency $\nu_{2}=230.1038$ d-1. Bottom
panel: The pulsation amplitude (left) and phase (right) modulation of the
dipole central frequency $\nu_{3}=235.7370$ d-1. Phase points with $1\sigma$
errors greater than 1.0 rad are not plotted here. The red lines are
theoretical amplitude modulation modelled following Kurtz (1992) with the
components from Table 4. The time zero-point is $t_{0}={\rm
BJD}~{}2458569.26128$.
## 4 Oblique pulsator model
The oblique pulsator model describes the pulsation pattern of an oblique
pulsator and only considers the surface geometry of non-radial pulsation
modes. However, some spectroscopic observations (e.g. Kochukhov 2006;
Freyhammer et al. 2009) and simulations (e.g. Khomenko & Kochukhov 2009) have
shown that properties of pulsations change rapidly with height in the stellar
atmosphere and modes are substantially distorted by the magnetic field. Sousa
& Cunha (2011) and Quitral-Manosalva, Cunha & Kochukhov (2018a) have also
studied these effects extensively from a theoretical perspective.
Recently, TESS observations of HD 6532 and HD 80316 (Holdsworth et al., 2021)
have shown changes in multiplet structure compared to the earlier ground-based
$B$ observations. The TESS filter is broad-band white-to-
red, which probes to a different depth in the stellar atmosphere than the $B$
filter. These new observations show the complexity of roAp pulsations and
importance of the vertical dimension. Nevertheless, the oblique pulsator model
still allows us a simple first look at the geometry of the pulsation modes.
For a normal quadrupole pulsator, the ratio of the sidelobes to the central
peak can be calculated with eqns 8 and 10 from Kurtz (1992):
$\frac{A_{+1}+A_{-1}}{A_{0}}=\frac{12\sin\beta\cos\beta\sin i\cos i}{(3\cos^{2}\beta-1)(3\sin^{2}i-1)}$ (1)
and
$\frac{A_{+2}+A_{-2}}{A_{0}}=\frac{3\sin^{2}\beta\sin^{2}i}{(3\cos^{2}\beta-1)(3\sin^{2}i-1)}.$ (2)
Dividing the two equations leads to a standard constraint for oblique
pulsators with quadrupole modes:
$\tan i\tan\beta=4\frac{A_{+2}+A_{-2}}{A_{+1}+A_{-1}}.$ (3)
We can calculate the rotation inclination $i$ and magnetic obliquity $\beta$
of a quadrupole pulsator. Although this relation applies in the case of a pure
quadrupole mode, the results can provide us some information about the
geometry of the mode in HD 86181 for the pure case.
The determination of $\tan i\tan\beta$ for a dipole mode is similar to that
shown in eqn 3, but it is not possible to constrain $i$ and $\beta$
independently. However, for a normal quadrupole mode, eqns 1 and 2 provide two
equations in two unknowns, allowing us nearly uniquely to derive values for
$i$ and $\beta$. From eqns 1 and 3 we find $i=84\pm 3^{\circ}$, $\beta=30\pm
3^{\circ}$, or vice versa, for HD 86181. The uncertainties were calculated
through MCMC fits. With $i$, together with the rotation period and the
estimated radius, $v\sin i=6.5$ km s-1 can be derived. Although there is no
published $v\sin i$ value for comparison, this value is reasonable for a roAp
star.
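As a quick numerical check of eqn 3, the forced-fit sidelobe amplitudes from Table 3 (each $\pm 0.007$ mmag) can be plugged in directly; the derived geometry should satisfy the pure-quadrupole constraint to within the amplitude uncertainties.

```python
import math

# Sidelobe amplitudes of the quadrupole quintuplet from Table 3 (mmag).
A_m2, A_m1, A_p1, A_p2 = 0.091, 0.053, 0.061, 0.080

# Eqn (3): tan(i) tan(beta) = 4 (A_+2 + A_-2) / (A_+1 + A_-1)
tan_i_tan_beta = 4.0 * (A_p2 + A_m2) / (A_p1 + A_m1)

# The derived geometry i = 84 deg, beta = 30 deg (interchangeable in this
# constraint alone) should roughly reproduce the ratio; the residual
# reflects the distortion of the mode.
i, beta = math.radians(84.0), math.radians(30.0)
check = math.tan(i) * math.tan(beta)
rel_diff = abs(check - tan_i_tan_beta) / tan_i_tan_beta
```

Here the amplitude ratio gives $\tan i\tan\beta=6.0$, while $\tan 84^{\circ}\tan 30^{\circ}\approx 5.5$, i.e. agreement at the $\sim$10 per cent level.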
For an axisymmetric quadrupole mode, the pulsation amplitude at the poles is
twice that at the equator and in antiphase. Maximum pulsation amplitude for
the angles determined above comes when $i-\beta=54^{\circ}$. Since the surface
nodes for an $\ell=2,m=0$ quadrupole lie at co-latitudes $\pm 54.7^{\circ}$,
at the time of pulsation maximum the pole is inclined $i-\beta=54^{\circ}$ to
the line of sight, one surface node is tangent to the lower limb of the star,
and the other surface node is over the top limb. Hence we are seeing only
the pulsation polar cap at that time. Half a rotation later, the pole is
inclined by $i+\beta=114^{\circ}$; i.e., the pole we were seeing is now on the
other side of the star. The second pole has come into view, but is at poorer
viewing aspect, being inclined $66^{\circ}$ to the line of sight. That then
puts one of the surface nodes close to the line of sight, i.e.
$66-54=12^{\circ}$. Hence much of the visible hemisphere is dominated by the
equatorial region. Figure 7 shows schematically this geometry at four rotation
phases.
Figure 7: Schematic diagram of the viewing geometry of the quadrupole mode of
HD 86181 through one rotation cycle. Red dots indicate the pulsation poles,
and blue dashed lines indicate the surface nodes at co-latitudes $\pm
54.7^{\circ}$.
For a pure oblique quadrupole mode, the pulsation amplitude distribution on
the surface, $A_{\theta}$, is proportional to $\frac{1}{2}(3\cos^{2}\theta-1)$
where $\theta$ is co-latitude, the angle to the poles. With knowledge of the
rotational inclination, $i$, and magnetic obliquity, $\beta$, we can calculate
an integral to obtain the pulsation amplitude at any time during a rotation
cycle. Numerically, the surface of the star is divided into a grid, and the
pulsation amplitude of each grid cell is calculated from this formula. With
$i$, $\beta$ and the rotation angle at a given time $t$, $2\pi\nu_{\rm rot}t$,
we know which grid cells are visible, and also the projection of each cell. The
integrated, projected surface pulsation amplitude can then be derived. The
limb-darkening model for TESS (Claret, 2018) is
used here. The results are shown as the blue curves in Fig. 4. The maximum of
the integral pulsation amplitude is fixed to be the same as the one derived
from the model (red line). Since the calculation considers the pulsation
as a pure quadrupole mode, the difference between the blue and red lines shows
the contribution from the radial and dipole components.
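The integration just described can be sketched numerically. The following is a minimal illustration, not the code used here: the quadrupole pattern $\frac{1}{2}(3\cos^{2}\theta-1)$ is evaluated on a grid over the sphere, each visible cell is weighted by projected area and a linear limb-darkening law, and the weighted mean is taken at each rotation phase. The linear coefficient $u=0.6$ is a generic placeholder, not the Claret (2018) TESS value, and the normalization is arbitrary.

```python
import numpy as np

def disc_integrated_amplitude(i_deg, beta_deg, phases, n=200, u=0.6):
    """Signed disc-averaged amplitude of a pure quadrupole at each rotation phase."""
    i, beta = np.radians(i_deg), np.radians(beta_deg)
    # Grid over the sphere in the rotation frame (z = rotation axis).
    th, ph = np.meshgrid(np.linspace(0.0, np.pi, n),
                         np.linspace(0.0, 2.0 * np.pi, 2 * n))
    x, y, z = np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th)
    # Line of sight in the x-z plane, inclined i to the rotation axis.
    mu = x * np.sin(i) + z * np.cos(i)            # projection factor
    vis = mu > 0.0                                # visible hemisphere only
    # Weight: area element * projection * linear limb darkening.
    w = np.sin(th) * mu * (1.0 - u * (1.0 - mu))
    out = []
    for rot in 2.0 * np.pi * np.asarray(phases, dtype=float):
        # Pulsation pole, inclined beta to the rotation axis, carried by rotation.
        px, py, pz = (np.sin(beta) * np.cos(rot),
                      np.sin(beta) * np.sin(rot),
                      np.cos(beta))
        cos_tm = x * px + y * py + z * pz          # cos(angle to pulsation pole)
        amp = 0.5 * (3.0 * cos_tm ** 2 - 1.0)      # quadrupole pattern P2
        out.append(np.sum(amp[vis] * w[vis]) / np.sum(w[vis]))
    return np.array(out)

phases = np.linspace(0.0, 1.0, 101)
a = disc_integrated_amplitude(84.0, 30.0, phases)
```

The resulting curve can then be rescaled so that its maximum matches the mode amplitude, as is done for the blue curves in Fig. 4.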
## 5 Spherical harmonic decomposition
Using the technique of Kurtz (1992), the quintuplet for HD 86181 can be
decomposed into a spherical harmonic series. This model is also based on the
oblique pulsator model. Although this model has some caveats, estimates of
pulsation amplitudes at particular phases, and of pulsation amplitude ratios,
can be made easily.
The decomposition was done using the frequencies, amplitudes and phases from
Table 3. In order to interpret the two maximum pulsation amplitudes, we
calculated the decomposition with the time zero point $t_{0}={\rm
BJD}~{}2458569.26128$. The results are shown in Table 4.
Table 4: Results of the spherical harmonic decomposition (with the time zero point $t_{0}={\rm BJD}~{}2458569.26128$) of the quadrupole mode in HD 86181 for $i=84^{\circ}$ and $\beta=30^{\circ}$. $\ell$ | $A^{(\ell)}_{-2}$ (mmag) | $A^{(\ell)}_{-1}$ (mmag) | $A^{(\ell)}_{0}$ (mmag) | $A^{(\ell)}_{+1}$ (mmag) | $A^{(\ell)}_{+2}$ (mmag) | $\phi$ (rad)
---|---|---|---|---|---|---
2 | 0.093 | 0.060 | $-0.278$ | 0.056 | 0.081 | $-0.322$
1 | | 0.015 | 0.005 | 0.014 | | 1.962
0 | | | 0.542 | | | $-0.204$
In recent works we have corrected a small error in the decomposition code:
the original code, used to calculate the decomposition of HD 6532 (Kurtz et
al., 1996b) and several other stars, miscoded equations (8) and (10) of Kurtz
(1992).
The decomposition components of HD 86181 show that at phase 0, the dipole
$\ell=1$ component contributes only 0.034 mmag to the quadrupole mode – almost
nothing compared to the strong radial contribution – which means that the
polar amplitude is increased and the equatorial amplitude is reduced compared
to a pure quadrupole mode. These results verify the assumption that the
pulsation amplitude maximum comes from the poles, with the secondary maximum
from the equator.
As an example, we estimate the pulsation amplitude maximum at phase 0.
According to the eqns (20), (21) and (22) in Kurtz (1992), at phase 0, the
pulsation amplitude is
$A=\sqrt{(\sum\limits_{\ell=0}^{2}\sum\limits_{m=-\ell}^{\ell}A^{\ell}_{m}\cos{\phi^{\ell}})^{2}+(\sum\limits_{\ell=0}^{2}\sum\limits_{m=-\ell}^{\ell}A^{\ell}_{m}\sin{\phi^{\ell}})^{2}}$.
The amplitude of the dipole mode ($A^{\ell=1}_{m}$) is negligible, and the
quadrupole and radial components have similar phases, meaning that
$\phi^{\ell=2}$ and $\phi^{\ell=0}$ can be considered the same, so the
components add at the time of amplitude maximum. Therefore, the pulsation
amplitude is
$A=0.542+0.093+0.060-0.278+0.056+0.081\approx 0.554$ mmag, which fits the pulsation
phase plot well. Of course, the decomposition technique was designed to fit
the data, so it is not a surprise that it does. This discussion is to give a
mental picture of why this is so. More precisely, a fit of all three spherical
harmonic components, taking into account the exact phases seen in Table 4,
gives the fit shown in Fig. 4 as the red curves.
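As a check, the formula above can be evaluated directly with the Table 4 values (amplitudes in mmag, phases in radians). This is a sketch of the calculation, not the fitting code itself; keeping the exact tabulated phases, rather than treating them as equal, lowers the result by a few hundredths of a mmag relative to the simple in-phase sum.

```python
import math

# Evaluate A = sqrt((sum A_m^l cos phi^l)^2 + (sum A_m^l sin phi^l)^2)
# with the amplitudes and phases of Table 4 (mmag, radians).
components = [
    (0.542, -0.204),                                   # l = 0 (radial)
    (0.015 + 0.005 + 0.014, 1.962),                    # l = 1 (negligible)
    (0.093 + 0.060 - 0.278 + 0.056 + 0.081, -0.322),   # l = 2 (quadrupole)
]
x = sum(a * math.cos(p) for a, p in components)
y = sum(a * math.sin(p) for a, p in components)
A = math.hypot(x, y)   # pulsation amplitude at rotation phase zero, in mmag
```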
In addition to the quintuplet for HD 86181, there are two doublets. With the
$i$ and $\beta$ in section 7, we derive $\tan i\tan\beta=4.76$. For dipoles
that gives
$\frac{A_{+1}+A_{-1}}{A_{0}}=4.76.$ (4)
We therefore expect to see triplets with very small central components at
$\nu_{2}$ and $\nu_{3}$, with amplitudes only about 0.03 mmag, which is at the
detection limit for these data. This supports the identification of $\nu_{2}$
and $\nu_{3}$ as dipole modes, and it is therefore no surprise that we see
doublets separated by twice the rotation frequency.
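Equation 4 is a quick numerical consequence of the oblique pulsator model; a one-line check with the $(\beta,i)=(40^{\circ},80^{\circ})$ adopted in Section 7:

```python
import math

# Sidelobe-to-central amplitude ratio for an oblique dipole:
# (A_{+1} + A_{-1}) / A_0 = tan(i) tan(beta).
i, beta = math.radians(80.0), math.radians(40.0)
ratio = math.tan(i) * math.tan(beta)
```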
## 6 The large separation and acoustic cut-off frequency
The large separation, $\Delta\nu$, is the separation in frequency of modes of
the same degree and consecutive radial orders, and is proportional to the
square-root of the mean density of the star, i.e.,
$\Delta\nu\propto\sqrt{\rho}$ (e.g. Gabriel et al. 1985). This relation was
developed for the frequencies of high-order, acoustic, adiabatic, non-radial
oscillations (Tassoul, 1980, 1990). Since roAp pulsations are in the
asymptotic regime, the relation is also applicable here. If the pulsation
modes, or at least their relative radial orders, are identified, the large
separation can, in principle, be determined.
To calculate the large separation, stellar radius and mass are required. We
estimate the radius of HD 86181 from $L=4{\pi}{\sigma}R^{2}T_{\rm eff}^{4}$,
and its mass from $M/{\rm M}_{\odot}=(L/{\rm L}_{\odot})^{1/4}$ (derived from
stellar homology relations; see, e.g., Eddington 1924) with the luminosity in
Table 1. We find $R=1.65$ R⊙, $M=1.72$ M⊙, and $\log g=4.19$ (cgs) for HD
86181. Although the mass is obtained from a rough scaling relation, it is
adequate for estimating the large separation and the cut-off frequency in this
section.
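These scaling relations are straightforward to evaluate. A short sketch with $L=8.69\,{\rm L}_{\odot}$ and $T_{\rm eff}=7750$ K; the solar values $T_{{\rm eff},\odot}=5772$ K and $\log g_{\odot}=4.438$ are assumptions of this sketch, and the resulting $\log g$ depends on that calibration:

```python
import math

# Radius from L = 4 pi sigma R^2 T_eff^4, mass from the homology
# relation M ~ L^(1/4), and log g from g ~ M / R^2 (solar units).
L, Teff = 8.69, 7750.0                     # luminosity (L_sun), T_eff (K)
R = math.sqrt(L) * (5772.0 / Teff) ** 2    # radius in R_sun
M = L ** 0.25                              # mass in M_sun
logg = 4.438 + math.log10(M / R ** 2)      # cgs
```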
With the knowledge that the doublets we see are the result of dipole modes
with undetected central peaks, we are able to derive the mode frequencies to
be $\nu_{2}=230.103$ d$^{-1}$ and $\nu_{3}=235.737$ d$^{-1}$ by taking the average of
the two sets of sidelobes. That then gives the mode frequency separations to
be $\nu_{1}-\nu_{2}=2.668$ d$^{-1}$ = 30.87 $\mu$Hz, and $\nu_{3}-\nu_{1}=2.967$
d$^{-1}$ = 34.32 $\mu$Hz. Using the radius, mass and $\log g$ estimated above and
the value of the solar large frequency separation
$\Delta\nu_{\odot}=134.88\,\mu$Hz (Huber et al., 2011), through
$\Delta\nu\propto\sqrt{\frac{g}{R}}$, we estimate $\Delta\nu/2=38.2$ $\mu$Hz,
which is consistent with the observations.
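The conversions and the scaling estimate can be checked numerically (1 d$^{-1}$ = $10^{6}/86400$ $\mu$Hz). The sketch below uses the mean-density form $\Delta\nu\propto\sqrt{M/R^{3}}$ with the $R$ and $M$ estimated above; it gives $\Delta\nu/2\approx 42$ $\mu$Hz, close to the 38.2 $\mu$Hz quoted in the text, with the small difference reflecting the adopted gravity.

```python
import math

# Observed mode-frequency separations, converted from d^-1 to microHz.
to_uHz = 1e6 / 86400.0
d21 = (232.7701 - 230.103) * to_uHz    # nu_1 - nu_2, ~30.9 microHz
d31 = (235.737 - 232.7701) * to_uHz    # nu_3 - nu_1, ~34.3 microHz

# Large-separation estimate from the mean-density scaling, with
# Delta nu (sun) = 134.88 microHz (Huber et al. 2011).
R, M = 1.65, 1.72
dnu = 134.88 * math.sqrt(M / R ** 3)
half_dnu = dnu / 2.0
```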
In roAp stars part of the pulsation mode energy can be refracted back into the
star by the influence of the magnetic field, even when the frequency of the
mode is above the acoustic cut-off frequency, $\nu_{ac}$ (Sousa & Cunha, 2008;
Quitral-Manosalva, Cunha & Kochukhov, 2018b). Therefore, there is no reason to
assume that very high frequency modes will not be observed in these pulsators.
Nevertheless, theory predicts that the excitation by the opacity mechanism
takes place in a frequency range that is close to, but does not exceed the
cut-off frequency and, thus, that an alternative excitation mechanism would be
required to excite modes of yet higher frequencies (Cunha et al., 2013). It is
therefore of interest to estimate the cut-off frequency in HD 86181 based on
the star’s global properties. Using the mass, radius and the effective
temperature in solar values in Table 1, and the scaling relation
$\nu_{ac}\propto g/\sqrt{T_{\rm eff}}$ (Brown et al., 1991) with
$\nu_{ac,\odot}=5.55$ mHz (Fossat et al., 1992), we find that in HD 86181
$\nu_{ac}\approx 3.03$ mHz, which is slightly larger than the observed mode
frequencies, around 2.73 mHz.
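This estimate follows directly from the scaling relation; the solar $T_{\rm eff}=5777$ K used below is an assumption of this sketch:

```python
import math

# nu_ac ~ g / sqrt(T_eff), scaled from nu_ac(sun) = 5.55 mHz
# (Fossat et al. 1992), with g ~ M / R^2 in solar units.
M, R, Teff = 1.72, 1.65, 7750.0
nu_ac = 5.55 * (M / R ** 2) / math.sqrt(Teff / 5777.0)   # mHz

# Highest observed mode frequency, converted from d^-1 to mHz.
nu_obs = 235.737 / 86.4
```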
## 7 Modelling oblique quadrupole pulsations distorted by dipole magnetic
fields
In this section, we present comparisons of the observed amplitude and phase
modulations of HD 86181 with a quadrupole pulsation calculated by the method
of Saio (2005) including the effect of a dipole magnetic field. We assume that
the pulsations in roAp stars are axisymmetric with the pulsation axis aligned
with the axis of the dipole magnetic field. The strength of the field is
denoted by $B_{\rm p}$, the magnetic field strength at the poles.
In the presence of a magnetic field, the pulsation frequency is modified only
slightly (see Fig. 8), while the eigenfunction is distorted considerably
because the magnetic effect generates $\ell=0,4,6,\ldots$ components of
spherical harmonics in addition to the main $\ell=2$ component. (We have
included twelve components; i.e., up to $\ell=22$.) The eigenfunction gives
pulsation amplitude and phase on each point on the surface as a function of
the angle from the magnetic (or pulsation) axis. The amplitude/phase
distribution can be converted to observational amplitude/phase modulation as a
function of rotation phase (see Saio & Gautschy 2004 for details) for a set of
$(\beta,i)$. The method of comparison is also discussed in Shi et al. (2020).
According to the estimated luminosity range, we selected some models on the
1.65, 1.68, and 1.70 M⊙ evolutionary tracks as indicated by triangles in the
HR diagram of Fig. 9, in which the initial composition $(X,Z)=(0.70,0.02)$ is
adopted, while the helium abundance is assumed to be depleted to 0.01 (mass
fraction) in the layers above the second helium ionisation zone (polar model
in Balmforth et al. 2001). For each stellar model we first find, without
including a magnetic field, a quadrupole mode with a pulsation frequency
close to $\nu_{1}=232.77\,{\rm d}^{-1}$. Then, we re-calculate the quadrupole
mode by taking into account the effect of an assumed dipole magnetic field of
$B_{\rm p}$.
For each case, an appropriate set of $(\beta,i)$ is determined by fitting the
amplitude modulation of HD 86181. Then, the phase modulation is compared with
the observations. Generally, for most assumed values of $B_{\rm p}$, the
obliquity and inclination angles $(\beta,i)$ can be determined easily by
fitting the predicted amplitude modulation with the observations, while the
theoretical phase modulation tends to be very small except for a certain range
of $B_{\rm p}$. Fig. 8 shows how theoretical phase modulations change with
changing $B_{\rm p}$ for a 1.68 M⊙ model. In this model, $6.5\lesssim B_{\rm
p}$/kG $\lesssim 8$ gives phase modulations that are comparable with the
observed ones. The required $B_{p}$ tends to be smaller in more massive models
because the mean density of the envelope is smaller in more massive stars.
Filled triangles in Fig. 9 indicate the loci of models whose amplitude and
phase modulations agree with the observed ones of HD 86181; agreement occurs
if $B_{\rm p}\sim 9.0$–$6.0$ kG is assumed for the stellar masses of 1.65,
1.68 and 1.70 M⊙, respectively. Among them, the three red triangles denote the models
whose large frequency separations agree with that of HD 86181. We have chosen
the 1.68-M⊙ model as the best model because the luminosity agrees with our
derived value better than the luminosity of the 1.70-M⊙ model does. However,
$\log T_{\rm eff}=3.859$ ($T_{\rm eff}=7230$ K) of the most appropriate fit
model is somewhat lower than 7750 K listed in Table 1. This $T_{\rm eff}$
value is closer to $T_{\rm eff}=7320$ K obtained by McDonald, Zijlstra & Boyer
(2012) from a comparison of the SED with model atmospheres, and to $T_{\rm
eff}=7205$ K obtained by Masana, Jordi & Ribas (2006) from 2MASS photometry.
Fig. 10 compares amplitudes of the rotational sidelobes (top), amplitude
(middle) and phase (bottom) modulations between the best model with $B_{\rm
p}=7.0$ kG and HD 86181. By fitting the amplitude modulation, we find
$(\beta,i)$ = ($40^{\circ}$,$80^{\circ}$), each with an uncertainty of $5^{\circ}$. The
$(\beta,i)$ given by the magnetically distorted model are only slightly
different from the pure quadrupole pulsator model: $i$ given by the distorted
model is consistent with the pure quadrupole pulsator model within the
1$\sigma$, while $\beta$ is consistent within 2$\sigma$. The range of the
phase modulation of the quadrupole model is small, which can be attributed to
contributions from $\ell=4,6,8,\ldots$
The dipole mode frequencies just above and below the quadrupole mode of the
best-fitting model are 235.51 and 229.92 d$^{-1}$, respectively, at $B_{\rm p}=7.0$
kG, which yield a large frequency spacing of 5.59 d$^{-1}$ (or 64.7 $\mu$Hz), in
agreement with the observed large frequency spacing, $\nu_{3}-\nu_{2}=5.63$ d$^{-1}$
(or 65.2 $\mu$Hz). (The frequency spacing of this model at $B_{\rm p}=0$ is
5.39 d$^{-1}$, or 62.4 $\mu$Hz.)
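The quoted $\mu$Hz values follow from the d$^{-1}$ spacings by simple unit conversion (1 d$^{-1}$ = $10^{6}/86400$ $\mu$Hz):

```python
# Model and observed large frequency spacings, d^-1 -> microHz.
to_uHz = 1e6 / 86400.0
model_spacing = (235.51 - 229.92) * to_uHz      # 5.59 d^-1
obs_spacing = (235.7361 - 230.1028) * to_uHz    # 5.63 d^-1
```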
For HD 42659, another roAp star pulsating in a distorted mode (Holdsworth,
Saio & Kurtz, 2019), the distorted model predicted the polar magnetic field
strength to be 0.8 kG by assuming that the star pulsates in a quadrupole mode.
That result was consistent with the measured mean longitudinal magnetic field,
$\langle B_{l}\rangle=0.4$ kG (Kochukhov & Bagnulo, 2006; Hubrig et al.,
2006). However, the polar magnetic field strength predicted by our model for
HD 86181, $B_{p}=7.0$ kG, is significantly larger than the measured mean
longitudinal magnetic field, $\langle B_{l}\rangle=0.54$ kG (Bagnulo et al.,
2015). The cause of the difference is not clear. It could be a depth effect;
i.e., the magnetic field required in our model refers to the strength in the
hydrogen-rich envelope, while the measured magnetic field corresponds to the
strength in the outermost superficial layers. Also, there are some aspects
that are not considered in the model, such as the effects of surface spots.
Figure 8: Phase modulations (solid black lines) obtained by assuming various
strengths of magnetic fields for the quadrupole mode in the same model shown
in Fig. 10. Red dots are observed phase modulations of HD 86181, while dashed
magenta lines are the same as the one in the bottom panel of Fig. 10, which
are obtained from the oblique pulsator model of Kurtz (1992) (red lines in
Fig. 4). For all cases, $(\beta,i)=(40^{\circ},80^{\circ})$ are adopted, for
which the theoretical amplitude modulations are consistent with that of HD
86181, while $B_{\rm p}$ in the range 6.5–8 kG (e.g. 7 kG; Fig. 10) gives phase
modulation comparable with the observed one (see also Fig. 10).
Figure 9: Loci of roAp stars on the HR diagram with some evolutionary tracks
with initial composition of $(X,Z)=(0.70,0.02)$. The number along the ZAMS of
each track indicates the stellar mass in solar units. HD 86181 is shown in red
and other distorted quadrupole pulsators are shown in blue for comparison.
(J1940 is not shown because its location is very close to J1640.) Triangles on
the 1.70, 1.68 and 1.65 M⊙ tracks indicate the loci of models for which
pulsation amplitude and phase modulations are calculated; filled (open)
triangles indicate models whose phase modulations can (cannot) be fitted with
the HD 86181 phase modulation. Red filled triangles indicate models which have
large frequency spacings similar to the observed one. Parameters of roAp stars
other than HD 86181 are adopted from Holdsworth et al. (2018b).
Figure 10: The amplitude
spectrum of rotational sidelobes (top panel) and amplitude (middle
panel)/phase (bottom panel) modulations of the quadrupole pulsation mode of HD
86181 are shown by red lines or dots. Dashed magenta lines (middle and bottom
panels) are obtained from the oblique pulsator model of Kurtz (1992) (red line
in Fig. 4). Black lines show the results of a best model of 1.68 M⊙ with
$B_{\rm p}=7$ kG, for which parameters are shown on the top of the diagram.
## 8 Driving of pulsations
The driving of pulsations in roAp stars is still a matter of debate. Non-
adiabatic pulsation calculations, assuming that envelope convection is
suppressed by the magnetic field at least in some angular region around the
magnetic pole, have been reasonably successful in explaining the driving of
most oscillations observed in roAp stars through the opacity mechanism acting
on the hydrogen ionization region (Balmforth et al., 2001; Cunha, 2002). The
same model also predicts that very high frequencies may be excited by the
turbulent pressure mechanism, a fact that has been suggested to explain the
pulsation frequencies observed in the roAp star $\alpha$ Cir (Cunha et al.,
2013). In this section we adopt the models discussed in these earlier works to
perform theoretical non-adiabatic pulsation calculations for HD 86181.
The analysis follows closely that presented by Cunha et al. (2013). In short,
the equilibrium model is derived from the matching of two spherically
symmetric models, one with envelope convection suppressed (the polar model)
and the other with convection treated according to a non-local mixing length
prescription (Spiegel, 1963; Gough, 1977a) (the equatorial model). It takes as
input the stellar mass, luminosity, effective temperature, chemical
composition (hydrogen, $X$, and helium, $Y$, mass fractions) and the
parameters associated with convection. The atmosphere is described by a
$T-\tau$ relation, which can be chosen amongst different options, with the
minimum optical depth, $\tau_{\rm min}$, being an additional input parameter.
Finally, helium settling can also be considered both in the polar and in the
equatorial regions, following a parametrized description with the surface
helium abundance in each region being additional input parameters.
The stability analysis is performed in each region separately and can consider
two different options for the surface boundary condition applied at the
minimum optical depth, namely, one that guarantees a full reflection of the
mode and one that allows waves with frequencies above the acoustic cut-off
frequency to propagate. In the equatorial model, the final non-adiabatic
solutions are computed using a non-local, time-dependent mixing-length
treatment of convection (Gough, 1977b; Balmforth, 1992). The results from the
non-adiabatic analysis in each region can then be combined to derive the
growth rates of modes in the model where convection is assumed to be
suppressed only in some angular region around the magnetic pole (the composite
model). Further details on the models can be found in Balmforth et al. (2001)
and references therein.
For each set of ($M$,$L$,$T_{\rm eff}$), four different physics configurations
were considered by varying different input parameters identified in previous
works as having significant impact on the stability results, namely: the
minimum optical depth, the outer boundary condition, and the amount of surface
helium. Table 5 summarizes the options in each case. Other parameters and
physics not mentioned here were fixed following the options adopted in
Balmforth et al. (2001).
Table 5: Modelling parameters for the cases illustrated in Fig. 11, all computed with $M=1.72\,{\rm M}_{\odot}$, $L=8.69\,{\rm L}_{\odot}$, $Y=0.278$, $X=0.705$. Model | Polar $Y_{\rm surf}$ | Equatorial $Y_{\rm surf}$ | $\tau_{\rm min}$ | Boundary condition | Symbols in Fig. 11
---|---|---|---|---|---
A | 0.01 | 0.278 | $3.5\times 10^{-5}$ | Reflective | circles
B | 0.01 | 0.278 | $3.5\times 10^{-4}$ | Reflective | squares
C | 0.01 | 0.278 | $3.5\times 10^{-5}$ | Transmissive | upward triangles
D | 0.1 | 0.1 | $3.5\times 10^{-5}$ | Reflective | rightward triangles
Fig. 11 shows an example of the results from the stability analysis in blue
and red, for polar and equatorial models, respectively, adopting the effective
temperature and luminosity in Table 1. Here we plot the relative growth rates
$\eta/\omega$ as a function of the cyclic pulsation frequency $\nu$, where
$\eta$ and $\omega$ are the imaginary and real parts of the angular
eigenfrequency, respectively, and a positive growth rate indicates the mode is
intrinsically unstable, thus excited. From the red symbols in the figure we
can see that all modes are stable in the equatorial model, independently of
the physics configuration adopted. In the polar models (blue symbols), a few
modes have positive growth rates at frequencies from $\sim$ 2.1 mHz up to
$\sim$ 2.7 mHz, depending on the physics considered. The range of excited
frequencies scales approximately with the square root of the mean density
(Cunha, 2002; Cunha et al., 2013). Given the uncertainty on the radius of the
star, one can thus confidently conclude that the region where the oscillation
frequencies are observed is within the range where the polar models predict
instability. Despite this, the growth rates on these polar models are one
order of magnitude smaller than the growth rates of the corresponding modes in
the equatorial model (in absolute value). This means that envelope convection
needs to be almost fully suppressed in order for these modes to be unstable in
the composite model (cf. figure 4 of Balmforth et al., 2001) and, thus,
explain the observations.
Figure 11: Normalized growth rates for polar (blue) and equatorial (red
symbols) regions as a function of the cyclic frequency $\nu=\omega/2\pi$.
Excited modes have positive growth rates. Different symbol shapes represent
the different modelling parameters used: circles are for model A in Table 5,
while squares, upward triangles and rightward triangles are for models B, C
and D, respectively. Zero growth rate is indicated by the horizontal dashed line
and the green shadowed region marks the range of observed frequencies.
## 9 Discussion and conclusions
We analysed HD 86181 with TESS data, and confirm it to be a roAp star. The
rotation frequency is derived to be $\nu_{\rm rot}=0.48765\pm 0.00003$ d$^{-1}$
($P_{\rm rot}=2.0507\pm 0.0001$ d). The pulsation frequency spectrum is rich,
consisting of one doublet, one quintuplet and another doublet. The central
frequency of the quintuplet is 232.7701 d$^{-1}$ (2.694 mHz). The two doublets
are very likely the sidelobes of two triplets whose central frequencies have
amplitudes too small to be observed. With this interpretation, we calculate
the two central frequencies of the triplets to be 230.1028 d$^{-1}$ (2.663 mHz)
and 235.7361 d$^{-1}$ (2.728 mHz).
Pulsation amplitude and phase were calculated as a function of rotation phase
and shown to be modulated. Two maxima can be seen in the rotational light
curve, which indicates that we see two primary spots in the TESS pass-band,
but the spot geometry is complex and further work is needed to construct the
chemical and magnetic maps of this star.
We calculated the rotation inclination, $i$, and magnetic obliquity, $\beta$,
for HD 86181, which provided detailed information of the geometry and we used
those values with a spherical harmonic decomposition to better understand the
pulsation geometry and the distortion from a pure quadrupole mode.
Models considering the dipole magnetic field distortion were calculated and
compared with the observed amplitude and phase modulation. The best fit model
gives $B_{\rm p}=7.0$ kG and $(\beta,i)=(40^{\circ},80^{\circ})$. The
$(\beta,i)$ given by the magnetically distorted model are only slightly
different from those of the pure quadrupole pulsator model, with differences
of (25 per cent, 5 per cent) for $(\beta,i)$, respectively. Also, the
difference from the phase modulation of the quadrupole model is small, which
can be attributed to higher-degree components, $\ell=4,6,8,\ldots$. The
pulsation frequency and the
large frequency spacing given by this model are comparable with the
observation.
To explain the driving mechanism of this star, two non-adiabatic models were
constructed for HD 86181, one with envelope convection suppressed (the polar
model) and another considering convection (the equatorial model). We find that
the polar model predicts the excitation of modes in the observed range.
The rich pulsation frequency spectrum allowed us to study the large frequency
separation, $\Delta\nu$. The $\Delta\nu$ derived from $g$ and $R$ is
consistent with the observed value. The acoustic cut-off frequency,
$\nu_{ac}$, of this star is larger than the observed mode frequencies.
## References
* Anders et al. (2019) Anders F. et al., 2019, A&A, 628, A94
* Andrae et al. (2018) Andrae R. et al., 2018, A&A, 616, A8
* Bagnulo et al. (2015) Bagnulo S., Fossati L., Landstreet J. D., Izzo C., 2015, A&A, 583, A115
* Balmforth (1992) Balmforth N. J., 1992, MNRAS, 255, 603
* Balmforth et al. (2001) Balmforth N. J., Cunha M. S., Dolez N., Gough D. O., Vauclair S., 2001, MNRAS, 323, 362
* Balona, Holdsworth & Cunha (2019) Balona L. A., Holdsworth D. L., Cunha M. S., 2019, MNRAS, 487, 2117
* Bigot & Dziembowski (2002) Bigot L., Dziembowski W. A., 2002, A&A, 391, 235
* Bigot & Kurtz (2011) Bigot L., Kurtz D. W., 2011, A&A, 536, A73
* Brown et al. (1991) Brown T. M., Gilliland R. L., Noyes R. W., Ramsey L. W., 1991, ApJ, 368, 599
* Claret (2018) Claret A., 2018, A&A, 618, A20
* Cunha (2002) Cunha M. S., 2002, MNRAS, 333, 47
* Cunha et al. (2013) Cunha M. S., Alentiev D., Brandão I. M., Perraut K., 2013, MNRAS, 436, 1639
* Cunha et al. (2019) Cunha M. S. et al., 2019, MNRAS, 487, 3523
* Cunha, Fernandes & Monteiro (2003) Cunha M. S., Fernandes J. M. M. B., Monteiro M. J. P. F. G., 2003, MNRAS, 343, 831
* Dziembowski & Goode (1985) Dziembowski W., Goode P. R., 1985, ApJ, 296, L27
* Eddington (1924) Eddington A. S., 1924, Nature, 113, 786
* Flower (1996) Flower P. J., 1996, ApJ, 469, 355
* Fossat et al. (1992) Fossat E. et al., 1992, A&A, 266, 532
* Freyhammer et al. (2009) Freyhammer L. M., Kurtz D. W., Elkin V. G., Mathys G., Savanov I., Zima W., Shibahashi H., Sekiguchi K., 2009, MNRAS, 396, 325
* Gabriel et al. (1985) Gabriel M., Noels A., Scuflaire R., Mathys G., 1985, A&A, 143, 206
* Gaia Collaboration (2020) Gaia Collaboration, 2020, VizieR Online Data Catalog, I/350
* Gaia Collaboration et al. (2020) Gaia Collaboration, Brown A. G. A., Vallenari A., Prusti T., de Bruijne J. H. J., Babusiaux C., Biermann M., 2020, arXiv e-prints, arXiv:2012.01533
* Gough (1977a) Gough D., 1977a, The current state of stellar mixing-length theory, Spiegel E. A., Zahn J. P., eds., Vol. 71, pp. 15–56
* Gough (1977b) Gough D. O., 1977b, ApJ, 214, 196
* Handler et al. (2006) Handler G. et al., 2006, MNRAS, 366, 257
* Hauck & Mermilliod (1998) Hauck B., Mermilliod M., 1998, A&AS, 129, 431
* Hey et al. (2019) Hey D. R. et al., 2019, MNRAS, 488, 18
* Holdsworth (2021) Holdsworth D. L., 2021, Frontiers in Astronomy and Space Sciences, 8, 31
* Holdsworth et al. (2021) Holdsworth D. L. et al., 2021, arXiv e-prints, arXiv:2105.13274
* Holdsworth et al. (2018a) —, 2018a, MNRAS, 473, 91
* Holdsworth et al. (2016) Holdsworth D. L., Kurtz D. W., Smalley B., Saio H., Handler G., Murphy S. J., Lehmann H., 2016, MNRAS, 462, 876
* Holdsworth et al. (2018b) Holdsworth D. L., Saio H., Bowman D. M., Kurtz D. W., Sefako R. R., Joyce M., Lambert T., Smalley B., 2018b, MNRAS, 476, 601
* Holdsworth, Saio & Kurtz (2019) Holdsworth D. L., Saio H., Kurtz D. W., 2019, MNRAS, 489, 4063
* Holdsworth et al. (2018c) Holdsworth D. L., Saio H., Sefako R. R., Bowman D. M., 2018c, MNRAS, 480, 2405
* Holdsworth et al. (2014) Holdsworth D. L., Smalley B., Kurtz D. W., Southworth J., Cunha M. S., Clubb K. I., 2014, MNRAS, 443, 2049
* Huber et al. (2011) Huber D. et al., 2011, ApJ, 743, 143
* Hubrig et al. (2006) Hubrig S., North P., Schöller M., Mathys G., 2006, Astronomische Nachrichten, 327, 289
* Khomenko & Kochukhov (2009) Khomenko E., Kochukhov O., 2009, ApJ, 704, 1218
* Kochukhov (2006) Kochukhov O., 2006, A&A, 446, 1051
* Kochukhov (2009) —, 2009, Communications in Asteroseismology, 159, 61
* Kochukhov & Bagnulo (2006) Kochukhov O., Bagnulo S., 2006, A&A, 450, 763
* Kurtz (1982) Kurtz D. W., 1982, MNRAS, 200, 807
* Kurtz (1985) —, 1985, MNRAS, 213, 773
* Kurtz (1992) —, 1992, MNRAS, 259, 701
* Kurtz et al. (1996a) Kurtz D. W., Marang F., van Wyk F., Roberts G., 1996a, MNRAS, 280, 1
* Kurtz & Martinez (1994) Kurtz D. W., Martinez P., 1994, Information Bulletin on Variable Stars, 4013, 1
* Kurtz et al. (1996b) Kurtz D. W., Martinez P., Koen C., Sullivan D. J., 1996b, MNRAS, 281, 883
* Leckrone (1973) Leckrone D. S., 1973, ApJ, 185, 577
* Masana, Jordi & Ribas (2006) Masana E., Jordi C., Ribas I., 2006, A&A, 450, 735
* Mathys, Kharchenko & Hubrig (1996) Mathys G., Kharchenko N., Hubrig S., 1996, A&A, 311, 901
* McDonald, Zijlstra & Boyer (2012) McDonald I., Zijlstra A. A., Boyer M. L., 2012, MNRAS, 427, 343
* Montgomery & O'Donoghue (1999) Montgomery M. H., O'Donoghue D., 1999, Delta Scuti Star Newsletter, 13, 28
* Moon & Dworetsky (1985) Moon T. T., Dworetsky M. M., 1985, MNRAS, 217, 305
* Murphy, Shibahashi & Kurtz (2013) Murphy S. J., Shibahashi H., Kurtz D. W., 2013, MNRAS, 430, 2986
* Napiwotzki, Schoenberner & Wenske (1993) Napiwotzki R., Schoenberner D., Wenske V., 1993, A&A, 268, 653
* Perry (1991) Perry C. L., 1991, PASP, 103, 494
* Pyper (1969) Pyper D. M., 1969, ApJS, 18, 347
* Quitral-Manosalva, Cunha & Kochukhov (2018a) Quitral-Manosalva P., Cunha M. S., Kochukhov O., 2018a, MNRAS, 480, 1676
* Quitral-Manosalva, Cunha & Kochukhov (2018b) —, 2018b, MNRAS, 480, 1676
* Renson & Manfroid (2009) Renson P., Manfroid J., 2009, A&A, 498, 961
* Saio (2005) Saio H., 2005, MNRAS, 360, 1022
* Saio & Gautschy (2004) Saio H., Gautschy A., 2004, MNRAS, 350, 485
* Shi et al. (2020) Shi F., Kurtz D., Saio H., Fu J., Zhang H., 2020, ApJ, 901, 15
* Shibahashi & Takata (1993) Shibahashi H., Takata M., 1993, PASJ, 45, 617
* Smalley et al. (2015) Smalley B. et al., 2015, MNRAS, 452, 3334
* Sousa & Cunha (2011) Sousa J. C., Cunha M. S., 2011, MNRAS, 414, 2576
* Sousa & Cunha (2008) Sousa S. G., Cunha M. S., 2008, MNRAS, 386, 531
* Spiegel (1963) Spiegel E. A., 1963, ApJ, 138, 216
* Stibbs (1950) Stibbs D. W. N., 1950, MNRAS, 110, 395
* Takata & Shibahashi (1994) Takata M., Shibahashi H., 1994, PASJ, 46, 301
* Takata & Shibahashi (1995) —, 1995, PASJ, 47, 219
* Tassoul (1980) Tassoul M., 1980, ApJS, 43, 469
* Tassoul (1990) —, 1990, ApJ, 358, 313
* Trifonov et al. (2020) Trifonov T., Tal-Or L., Zechmeister M., Kaminski A., Zucker S., Mazeh T., 2020, A&A, 636, A74
## Acknowledgements
This work was funded by the National Key R\&D Program of China under grant
No.2019YFA0405504 and the National Natural Science Foundation of China (NSFC)
under grants No. 11973001, No. 11833002, No. 12090040 and No. 12090042. This
work includes data collected by the TESS mission. Funding for the TESS mission
is provided by the NASA Explorer Program. M. S. Cunha is supported by national
funds through FCT in the form of a work contract and through the research
grants UIDB/04434/2020, UIDP/04434/2020 and PTDC/FIS-AST/30389/2017, and by
FEDER - Fundo Europeu de Desenvolvimento Regional through COMPETE2020 -
Programa Operacional Competitividade e Internacionalização (grant:
POCI-01-0145-FEDER-030389). Daniel L. Holdsworth acknowledges financial
support from the Science and Technology Facilities Council (STFC) via grant
ST/M000877/1. Gerald Handler gratefully acknowledges funding through NCN grant
2015/18/A/ST9/00578. We thank the anonymous referee for a thorough,
knowledgeable review that improved this paper.
## Data Availability
The data underlying this article will be shared on reasonable request to the
corresponding author.
|
arxiv-papers
| 2021-07-26T13:21:50 |
2024-09-04T03:07:18.661465
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Fangfei Shi, Donald W. Kurtz, Daniel L. Holdsworth, Hideyuki Saio,\n Margarida S. Cunha, Huawei Zhang, Jianning Fu, G. Handler",
"submitter": "Fangfei Shi",
"url": "https://arxiv.org/abs/2107.12204"
}
|
2107.12208
|
# Local Quantum State Marking
Samrat Sen School of Physics, IISER Thiruvananthapuram, Vithura, Kerala
695551, India. Edwin Peter Lobo School of Physics, IISER Thiruvananthapuram,
Vithura, Kerala 695551, India. Sahil Gopalkrishna Naik School of Physics,
IISER Thiruvananthapuram, Vithura, Kerala 695551, India. Ram Krishna Patra
School of Physics, IISER Thiruvananthapuram, Vithura, Kerala 695551, India.
Tathagata Gupta Physics and Applied Mathematics Unit, Indian Statistical
Institute, 203 B.T. Road, Kolkata 700108, India. Subhendu B. Ghosh Physics
and Applied Mathematics Unit, Indian Statistical Institute, 203 B.T. Road,
Kolkata 700108, India. Sutapa Saha Physics and Applied Mathematics Unit,
Indian Statistical Institute, 203 B.T. Road, Kolkata 700108, India. Mir
Alimuddin School of Physics, IISER Thiruvananthapuram, Vithura, Kerala
695551, India. Tamal Guha Department of Computer Science, The University of
Hong Kong, Pokfulam road 999077, Hong Kong. Some Sankar Bhattacharya
International Centre for Theory of Quantum Technologies (ICTQT), University of
Gdańsk, Bażyńskiego 8, 80-309 Gdańsk, Poland. Manik Banik School of Physics,
IISER Thiruvananthapuram, Vithura, Kerala 695551, India.
###### Abstract
We propose the task of local state marking (LSM), where some multipartite
quantum states chosen randomly from a known set of states are distributed
among spatially separated parties without revealing the identities of the
individual states. The collaborative aim of the parties is to correctly mark
the identities of states under the restriction that they can perform only
local quantum operations (LO) on their respective subsystems and can
communicate with each other classically (CC) – popularly known as the
operational paradigm of LOCC. While mutually orthogonal states can always be
marked exactly under global operations, this is in general not the case under
LOCC. We show that the LSM task is distinct from the vastly explored task of
local state distinguishability (LSD) – perfect LSD always implies perfect LSM,
whereas we establish that the converse does not hold in general. We also
explore entanglement assisted marking of states that are otherwise locally
unmarkable and report an intriguing entanglement assisted catalytic LSM
phenomenon.
## I Introduction
The discrimination task, wherein the aim is to distinguish among physical or
mathematical objects such as states, processes, circuits, or probability
distributions, is one of the rudimentary steps appearing in information-
processing protocols, statistical inference, and hypothesis testing Shannon48
; Lehmann05 . Distinct objects, or perfectly distinguishable states of a
system, can be used to store information in a way that assures unambiguous
readability. Information protocols in the quantum world Wiesner83 ;
Bennett84 ; Ekert91 ; Bennett92 ; Bennett92(1) ; Bennett93 , however, are
governed by rules that are fundamentally different from our classical
worldview. For instance, classical information encoded in non-orthogonal
quantum states, either pure or mixed, cannot be perfectly decoded since the
no-cloning theorem Wootters82 ( more generally the no-broadcasting theorem
Barnum96 ) puts restriction on their perfect discrimination. Such a constraint
is strictly quantum (more precisely, non-classical Barnum07 ; Banik19 ) in
nature as pure classical states are always perfectly distinguishable Self1 .
While a set of mutually orthogonal quantum states can always be distinguished
perfectly, interesting situations arise for multipartite quantum systems when
discriminating operations among the spatially separated parties holding
different subsystems are limited to local quantum operation assisted with
classical communication (LOCC). This constitutes the framework for the problem
of local state discrimination (LSD) Bennett99 ; Walgate00 ; Ghosh01 ;
Walgate02 ; Ghosh04 ; Horodecki03 ; Watrous05 ; Hayashi06 . During the last
two decades LSD has been studied in great detail resulting in a plethora of
interesting conclusions Bennett99(1) ; DiVincenzo03 ; Niset06 ; Duan07 ;
Calsamiglia10 ; Bandyopadhyay11 ; Chitambar14 ; Halder18 ; Demianowicz18 ;
Halder19 ; Halder19(1) ; Agrawal19 ; Rout19 ; Bhattacharya20 ; Banik20 ;
Rout20 and it also finds applications in useful tasks Terhal01 ; DiVincenzo02
; Eggeling02 ; Markham08 ; Matthews09 . Apart from LSD and more general
quantum state discrimination problems Helstrom69 ; Holevo73 ; Yuen75 , several
other discrimination tasks, e.g., channel/sub-channel discrimination, process
discrimination, circuit discrimination, have been studied during the recent
past Chiribella08 ; Piani09 ; Chiribella12 ; Hirche21 that subsequently
motivate several novel information protocols Pirandola19 ; Takagi19 ;
Takagi19(1) ; Chiribella21 ; Bhattacharya21 . In this paper, we introduce a
novel variant of the discrimination task, which we call local state marking (LSM).
A subset of states chosen randomly from a known set of multipartite states is
provided to spatially separated parties without revealing the identities of
the individual states. The aim is to mark the identities of the states under
the operational paradigm of LOCC. For a given set of multipartite states
$\mathcal{S}$ one can, in fact, define a class of discrimination tasks denoted
by $m$-LSM, where $1\leq m\leq|\mathcal{S}|$ and $|\mathcal{S}|$ is the
cardinality of the set $\mathcal{S}$; $1$-LSM corresponds to the task of LSD,
and the $|\mathcal{S}|$-LSM task we will denote simply as LSM. It turns out
that the task of LSM is distinct from the task of LSD. In particular, we show
that local distinguishability of an arbitrary set of states always implies
local markability, but the converse does not always hold true. We provide an
example of mutually orthogonal states that are locally markable but not
locally distinguishable. We then provide examples of orthogonal states that
can neither be distinguished nor marked perfectly under local operations. Some
generic implications between $m$-LSM and $m^{\prime}$-LSM tasks are also
analyzed when $m\neq m^{\prime}$. We then study entanglement assisted local
marking of states where additional entanglement is provided as a resource to
mark the states that are otherwise locally unmarkable. There we report
an intriguing entanglement assisted catalytic LSM phenomenon: a locally
unmarkable set of states can be perfectly marked when additional entanglement
is supplied as a resource. Interestingly, the entanglement is returned (either
partially or completely) once the marking task is done.
## II Notations and preliminaries
A quantum system prepared in a pure state is represented by a vector
$\ket{\psi}\in\mathcal{H}$, where $\mathcal{H}$ is the Hilbert space
associated with the system. Throughout this work we will consider finite
dimensional quantum systems and consequently $\mathcal{H}$ will be isomorphic
to some complex Euclidean space $\mathbb{C}^{d}$. An $N$-partite quantum
system is associated with the tensor product Hilbert space
$\bigotimes_{i=1}^{N}\mathbb{C}^{d_{i}}_{A_{i}}$, where
$\mathbb{C}^{d_{i}}_{A_{i}}$ is the Hilbert space of the $i^{th}$ subsystem
held by the $i^{th}$ party Self2 ; Carcassi21 . A state
$\ket{\psi}_{A_{1}\cdots
A_{N}}\in\bigotimes_{i=1}^{N}\mathbb{C}^{d_{i}}_{A_{i}}$ is called separable
across $\mathcal{A}$ vs $\mathcal{A}^{\mathsf{C}}$ cut if it is of the form
$\ket{\psi}_{A_{1}\cdots
A_{N}}=\ket{\psi}_{\mathcal{A}}\otimes\ket{\psi}_{\mathcal{A}^{\mathsf{C}}}$,
where
$\ket{\psi}_{\mathcal{A}}\in\bigotimes_{i|A_{i}\in\mathcal{A}}\mathbb{C}^{d_{i}}_{A_{i}}$
and
$\ket{\psi}_{\mathcal{A}^{\mathsf{C}}}\in\bigotimes_{i|A_{i}\in\mathcal{A}^{\mathsf{C}}}\mathbb{C}^{d_{i}}_{A_{i}}$
with $\mathcal{A}$ being a proper nonempty subset of
$\mathbb{A}\equiv\\{A_{1},\cdots,A_{N}\\}$ and
$\mathcal{A}^{\mathsf{C}}\equiv\mathbb{A}\setminus\mathcal{A}$. A multiparty
state $\ket{\psi}_{A_{1}\cdots A_{N}}$ is fully separable if it is separable
across all possible bipartite cuts, i.e., $\ket{\psi}_{A_{1}\cdots
A_{N}}=\bigotimes_{i=1}^{N}\ket{\psi}_{A_{i}}$ with
$\ket{\psi}_{A_{i}}\in\mathbb{C}^{d_{i}}_{A_{i}}$. For the sake of notational
brevity, we will avoid the party index when there is no confusion. A set of
quantum states is perfectly distinguishable whenever its elements are pairwise
orthogonal. Moreover, in accordance with the no-cloning theorem Wootters82 ,
pairwise orthogonality is also the necessary requirement for perfect
distinguishability.
In the multipartite scenario, when different parts of the quantum systems are
held by spatially separated parties, the class of operations LOCC captures the
‘distant lab’ paradigm. Although it is extremely hard to characterize
structure of LOCC operations Chitambar14(1) , this restricted paradigm plays
crucial role to understand the resource of quantum entanglement and it
constitutes the scenario for the task of $m$-LSM.
###### Definition 1.
[$m$-LSM] $m$ states chosen randomly from a known set of pairwise
orthogonal $N$-party quantum states
$\mathcal{S}\equiv\left\\{\ket{\psi_{j}}\leavevmode\nobreak\
|\leavevmode\nobreak\
\langle\psi_{i}|\psi_{j}\rangle=\delta_{ij}\right\\}\subset\bigotimes_{i=1}^{N}\mathbb{C}^{d_{i}}_{A_{i}}$
are distributed among spatially separated parties without revealing the
identity of each state. The $m$-LSM task is to perfectly identify/mark each of
the states under the operational paradigm of LOCC.
In Definition 1, $m$ can take values from $1$ to $|\mathcal{S}|$ and
accordingly they constitute different discrimination tasks (see Fig. 1). The
task of $1$-LSM is better known as LSD, which has been explored in great detail
during the last two decades Bennett99 ; Walgate00 ; Ghosh01 ; Walgate02 ;
Ghosh04 ; Horodecki03 ; Watrous05 ; Hayashi06 ; Bennett99(1) ; DiVincenzo03 ;
Niset06 ; Duan07 ; Calsamiglia10 ; Bandyopadhyay11 ; Chitambar14 ; Halder18 ;
Demianowicz18 ; Halder19 ; Halder19(1) ; Agrawal19 ; Rout19 ; Bhattacharya20 ;
Banik20 ; Rout20 . The problem of LSD has also been studied with ensembles
containing non-orthogonal states Peres91 ; Chitambar13 . Similarly, Definition
1 can also be generalized for such ensembles. In that case the quantity of
interest will be the difference between maximum success probabilities of the
corresponding marking task under global and local operations, respectively.
Figure 1: (Color online) The task of $m$-LSM is illustrated for the bipartite
scenario. $m$ states chosen randomly from a set of $K$ states are distributed
between spatially separated Alice and Bob without revealing the identities of
the individual states. They have to identify the indices $i_{1},\cdots,i_{m}$
using LOCC. In this particular example the indices are identified to be
$(i_{1}=3,i_{2}=5,\cdots,i_{m}=1)$. The special case of $m=1$ corresponds to
the task of LSD.
## III Results
We will start the technical part of this article by establishing some generic
results.
###### Lemma 1.
For a set of multipartite states $\mathcal{S}$, perfect
$(|\mathcal{S}|-2)$-LSM always implies perfect LSM.
###### Proof.
Perfect $(|\mathcal{S}|-2)$-LSM of the set $\mathcal{S}$ implies that given
arbitrary $(|\mathcal{S}|-2)$ states from the set, they can be marked locally.
So we are left with two more states to identify locally. According to a
standard result by Walgate et al., any two multipartite pure orthogonal states
can be distinguished locally Walgate00 , which proves our claim. ∎
While the proof of Lemma 1 follows straightforwardly from the result of
Walgate et al., we next establish a rather nontrivial result.
###### Theorem 1.
For a set of multipartite states $\mathcal{S}$, perfect LSD (i.e. $1$-LSM)
always implies perfect LSM (i.e. $|\mathcal{S}|$-LSM).
###### Proof.
Let the set of states
$\mathcal{S}_{K}\equiv\left\\{\ket{\psi_{1}},\cdots,\ket{\psi_{K}}\right\\}\subset\bigotimes_{i=1}^{N}\mathbb{C}^{d_{i}}_{A_{i}}:=\mathcal{H}$
be locally distinguishable. The problem of LSM for the set $\mathcal{S}_{K}$
can be reformulated as an LSD problem of the set of states
$\mathcal{S}_{\mathcal{P}[\\{K\\}]}\equiv\left\\{\mathcal{P}\left(\otimes_{i=1}^{K}\ket{\psi_{i}}\right)\right\\}\subset\mathcal{H}^{\otimes
K}$, where
$\left\\{\mathcal{P}\left(\otimes_{i=1}^{K}\ket{\psi_{i}}\right)\right\\}$
denotes the set of tensor product states generated through permutations of the
indices $\\{1,\cdots,K\\}$. For instance,
$\mathcal{S}_{\mathcal{P}[\\{3\\}]}:=\left\\{\mathcal{P}\left(\otimes_{i=1}^{3}\ket{\psi_{i}}\right)\right\\}\equiv\\{\ket{\psi_{1}\psi_{2}\psi_{3}},$
$\ket{\psi_{1}\psi_{3}\psi_{2}},\leavevmode\nobreak\
\ket{\psi_{2}\psi_{3}\psi_{1}},\leavevmode\nobreak\
\ket{\psi_{2}\psi_{1}\psi_{3}},\leavevmode\nobreak\
\ket{\psi_{3}\psi_{2}\psi_{1}},\leavevmode\nobreak\
\ket{\psi_{3}\psi_{1}\psi_{2}}\\}$, where $\ket{x\leavevmode\nobreak\
y\leavevmode\nobreak\ z}:=\ket{x}\otimes\ket{y}\otimes\ket{z}$. The states in
$\mathcal{S}_{\mathcal{P}[\\{K\\}]}$ can be expressed group-wise as follows,
$\displaystyle\mathcal{G}_{l}:=\ket{\psi_{l}}\otimes\mathcal{S}_{\mathcal{P}[\\{K\\}\setminus
l]}\equiv\ket{\psi_{l}}\otimes\left\\{\mathcal{P}\left(\otimes_{i\neq
l}\ket{\psi_{i}}\right)\right\\},$
where $l\in\\{1,\cdots,K\\}$. Clearly, the groups $\mathcal{G}_{l}$ make
disjoint partitions of the set $\mathcal{S}_{\mathcal{P}[\\{K\\}]}$, i.e.,
$\mathcal{S}_{\mathcal{P}[\\{K\\}]}\equiv\bigcup_{l=1}^{K}\mathcal{G}_{l}$
s.t. $\mathcal{G}_{l}\cap\mathcal{G}_{l^{\prime}}=\emptyset$ whenever $l\neq
l^{\prime}$. Since the states in $\mathcal{S}_{K}$ are locally
distinguishable, by local operations on the first part of the tensor product
states in $\mathcal{S}_{\mathcal{P}[\\{K\\}]}$ we can know with certainty in
which of the above groups the given state lies. If the group turns out to be
$\mathcal{G}_{l^{\star}}$ (i.e., if the index $l$ has been identified to be
$l^{*}$), the given state $\ket{\psi_{l^{\star}}}\otimes(\cdots)$ evolves to
$\ket{\psi^{\prime}_{l^{\star}}}\otimes(\cdots)$ due to the LOCC protocol,
where the term within the brackets remains unchanged and hence further LOCC
protocols can be applied to it. The group of states
$\mathcal{G}_{l^{\star}}=\ket{\psi^{\prime}_{l^{\star}}}\otimes\mathcal{S}_{\mathcal{P}[\\{K\\}\setminus{l^{\star}}]}$
can be further partitioned into disjoint subsets as,
$\displaystyle\mathcal{G}_{l^{\star}}\equiv\bigcup\mathcal{G}_{l^{\star},m}\leavevmode\nobreak\
\leavevmode\nobreak\ \mbox{s.t.}\leavevmode\nobreak\ \leavevmode\nobreak\
\mathcal{G}_{l^{\star},m}\cap\mathcal{G}_{l^{\star},m^{\prime}}=\emptyset\leavevmode\nobreak\
\forall\leavevmode\nobreak\ m\neq m^{\prime},$
$\displaystyle\mbox{where}\leavevmode\nobreak\
\mathcal{G}_{l^{\star},m}:=\ket{\psi^{\prime}_{l^{\star}}}\otimes\ket{\psi_{m}}\otimes\mathcal{S}_{\mathcal{P}[\\{K\\}\setminus\\{l^{\star},m\\}]},$
and $m,m^{\prime}\in\\{1,\cdots,K\\}\setminus l^{\star}$. Since any subset of
a locally distinguishable set of states is also locally distinguishable, the
identity of the index $m$ can be known perfectly by applying some local
protocol on the $\ket{\psi_{m}}$ part of the given state. As before, the
remaining parts of the state will not change. We can continue this process
till we completely determine the identity of the state in
$\mathcal{S}_{\mathcal{P}[\\{K\\}]}$ which in turn marks the state in
$\mathcal{S}_{K}$. This completes the proof. ∎
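The permutation construction in the proof can be made concrete with a short script. This is an illustrative sketch, not part of the original argument: integer labels stand in for the states $\ket{\psi_i}$, and the group-wise partition $\mathcal{G}_l$ corresponds to fixing the first slot of each ordering.

```python
from itertools import permutations

# Labels 1..K stand in for the states |psi_1>, ..., |psi_K>.
K = 3
S_P = set(permutations(range(1, K + 1)))  # S_P[{K}]: all K! orderings
assert len(S_P) == 6

# Group-wise partition G_l: orderings whose first slot is |psi_l>.
groups = {l: {p for p in S_P if p[0] == l} for l in range(1, K + 1)}

# The groups are disjoint and cover S_P[{K}]; each has (K-1)! elements.
assert all(len(groups[l]) == 2 for l in groups)
assert set().union(*groups.values()) == S_P
print(sorted(groups[1]))  # [(1, 2, 3), (1, 3, 2)]
```

Identifying the first index via LSD narrows the candidate set from $K!$ orderings to a single group of $(K-1)!$, and the argument then recurses on the remaining slots.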
While Theorem 1 deals with the implications between two extreme cases,
particularly establishing $1$-LSM $\implies|\mathcal{S}|$-LSM, the following
corollaries establish a few more nontrivial implications among generic $m$-LSM
tasks.
###### Corollary 1.
For a set of multipartite states $\mathcal{S}$, perfect $m$-LSM always implies
perfect $m^{\prime}$-LSM, where $1\leq m\leq
m^{\prime}(:=nm)\leq|\mathcal{S}|$ with $n\in\mathbb{N}$.
###### Proof.
Given that a set
$\mathcal{S}_{K}\equiv\left\\{\ket{\psi_{1}},\cdots,\ket{\psi_{K}}\right\\}\subset\bigotimes_{i=1}^{N}\mathbb{C}^{d_{i}}_{A_{i}}$
admits perfect $m$-LSM, we are to prove that it admits perfect
$m^{\prime}$-LSM, where $m^{\prime}=nm$ with $n\in\mathbb{N}$. Intuitively, the proof goes as follows.
Let $\mathcal{L}_{m}$ be the local protocol that successfully completes the
$m$-LSM task for the set $\mathcal{S}_{K}$. For the $m^{\prime}$-LSM task, we
divide the set of $m^{\prime}$ states into $n$ arbitrary disjoint sets each
containing $m$ states. Treating each of these $n$ sets independently, we can
mark them locally by following the protocol $\mathcal{L}_{m}$. Thus, by
successively applying the protocol $\mathcal{L}_{m}$ we can construct the
local protocol $\mathcal{L}_{m^{\prime}}$ for the $m^{\prime}$-LSM task.
We can also reformulate this as an LSD task as was done in Theorem ${\bf 1}$.
We begin by noting that from the set
$\mathcal{S}_{K}\equiv\left\\{\ket{\psi_{1}},\cdots,\ket{\psi_{K}}\right\\}\subset\bigotimes_{i=1}^{N}\mathbb{C}^{d_{i}}_{A_{i}}$
one can choose $m$ states in $\prescript{K}{}{C}_{m}$ different ways. Let us
denote each such choice of states by the set $\mathcal{S}_{m}^{j}$, where
$j\in\\{1,\cdots,\prescript{K}{}{C}_{m}\\}$. Therefore, the $m$-LSM problem of
$\mathcal{S}_{K}$ can be reformulated as the LSD problem of the set of states
$\displaystyle\mathcal{S}_{\prescript{K}{}{C}_{m}\times m!}$
$\displaystyle\equiv\bigcup_{j=1}^{\prescript{K}{}{C}_{m}}\mathcal{S}_{\mathcal{P}[\\{m\\}]}^{j}$
$\displaystyle\mbox{s.t.}\leavevmode\nobreak\ \leavevmode\nobreak\
\mathcal{S}_{\mathcal{P}[\\{m\\}]}^{j}$
$\displaystyle\bigcap\mathcal{S}_{\mathcal{P}[\\{m\\}]}^{j^{\prime}}=\emptyset\leavevmode\nobreak\
\mbox{for}\leavevmode\nobreak\ j\neq j^{\prime},$
where $\mathcal{S}_{\mathcal{P}[\\{m\\}]}^{j}$ is defined similarly as in
Theorem ${\bf 1}$. We are given that perfect $m$-LSM of the set
$\mathcal{S}_{K}$ is possible, i.e., there exists a local protocol
$\mathcal{L}_{m}$ that perfectly distinguishes the states in
$\mathcal{S}_{\prescript{K}{}{C}_{m}\times m!}$. While considering the
$m^{\prime}$-LSM problem, or equivalently, the LSD problem of the set
$\mathcal{S}_{\prescript{K}{}{C}_{m^{\prime}}\times m^{\prime}!}$, the states
in $\mathcal{S}_{\mathcal{P}[\\{m^{\prime}\\}]}^{j}$ can be expressed group-
wise as
$\mathcal{G}^{j}_{l_{1},...,l_{m}}:=\ket{\psi_{l_{1}},...,\psi_{l_{m}}}\otimes\mathcal{S}_{\mathcal{P}[\\{m^{\prime}\\}\setminus{\\{l_{1},...,l_{m}\\}}]}^{j}$
for each value of $j$. Thus the groups $\mathcal{G}^{j}_{l_{1},...,l_{m}}$
make a disjoint partition of the set
$\mathcal{S}_{\prescript{K}{}{C}_{m^{\prime}}\times m^{\prime}!}$. Since
$\mathcal{S}_{K}$ is $m$-LSM, by performing local operations on the first
$m$-parts of the tensor product states in
$\mathcal{S}_{\prescript{K}{}{C}_{m^{\prime}}\times m^{\prime}!}$ we can fix
the indices $l_{1},..,l_{m}$ of $\mathcal{G}^{j}_{l_{1},...,l_{m}}$. If
$l_{1},..,l_{m}$ is identified to be $l^{*}_{1},..,l^{*}_{m}$ then we know
with certainty that the given state lies in
$\bigcup\limits_{j}\mathcal{G}^{j}_{l^{*}_{1},...,l^{*}_{m}}$ and the given
state is identified to be of the form
$\ket{\psi_{l^{*}_{1}},...,\psi_{l^{*}_{m}}}\otimes(...)$ and evolves to
$\ket{\psi^{\prime}_{l^{*}_{1}},...,\psi^{\prime}_{l^{*}_{m}}}\otimes(...)$
after the protocol has been performed, where the terms in the brackets remain
unchanged and hence further protocols can be performed on that part. The
groups
$\mathcal{G}^{j}_{l^{*}_{1},...,l^{*}_{m}}=\ket{\psi^{\prime}_{l^{*}_{1}},...,\psi^{\prime}_{l^{*}_{m}}}\otimes\mathcal{S}_{\mathcal{P}[\\{m^{\prime}\\}\setminus{\\{l_{1},...,l_{m}}\\}]}^{j}$
can be further partitioned into disjoint subsets as
$\mathcal{G}^{j}_{l^{*}_{1},...,l^{*}_{m}}=\bigcup\limits_{t_{1},\cdots,t_{m}}\mathcal{G}^{j}_{l^{*}_{1},...,l^{*}_{m},t_{1},...,t_{m}}$
for each value of $j$, where
$\mathcal{G}^{j}_{l^{*}_{1},...,l^{*}_{m},t_{1},...,t_{m}}\equiv\ket{\psi^{\prime}_{l^{*}_{1}},...,\psi^{\prime}_{l^{*}_{m}}}\otimes\ket{\psi_{t_{1}},...,\psi_{t_{m}}}\otimes\mathcal{S}_{\mathcal{P}[\\{m^{\prime}\\}\setminus{\\{l^{*}_{1},...,l^{*}_{m},t_{1},...,t_{m}}\\}]}^{j}$.
Since $\mathcal{S}_{K}$ is $m$-LSM, any subset of $\mathcal{S}_{K}$ is also
$m$-LSM. Hence we can further fix the indices $t_{1},..,t_{m}$ by performing
local operations on the second $m$-parts of the tensor product states in
$\bigcup\limits_{j}\mathcal{G}^{j}_{l^{*}_{1},...,l^{*}_{m}}$. We continue
this process $n$ times, which will completely fix all the $nm=m^{\prime}$
indices. This will also fix the index $j$ since we have completely
distinguished the state. This completes the proof. For the special case of
$m=1$, LSD (i.e., $1$-LSM) implies perfect $m$-LSM with $1\leq
m\leq|\mathcal{S}|$. ∎
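As a small sanity check on this reformulation (again an illustrative sketch), the number of ordered choices of $m$ states out of $K$, i.e. the size $\binom{K}{m}\,m! = K!/(K-m)!$ of the set $\mathcal{S}_{\prescript{K}{}{C}_{m}\times m!}$, can be verified by direct enumeration:

```python
from itertools import permutations
from math import comb, factorial

# Ordered selections of m states out of K labels: C(K, m) * m! = K!/(K-m)!
K, m = 4, 2
ordered = list(permutations(range(1, K + 1), m))
assert len(ordered) == comb(K, m) * factorial(m) == factorial(K) // factorial(K - m)
print(len(ordered))  # 12
```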
###### Corollary 2.
For a set of multipartite states $\mathcal{S}$ containing only product states,
perfect $m$-LSM always implies perfect $m^{\prime}$-LSM, where $1\leq m\leq
m^{\prime}\leq|\mathcal{S}|$.
###### Proof.
It is sufficient to show that $m$-LSM implies $(m+1)$-LSM for any set
$\mathcal{S}$ containing product states only. Given $(m+1)$ states to be
marked, we begin by marking the first $m$ states using the protocol for
$m$-LSM. During this process the first $m$ states are destroyed; but since
their identities have been determined, we can locally recreate them. Note that
one can locally create any set of multipartite states whose identities are
known if and only if the set contains product states only. We now run the
protocol for $m$-LSM once again, this time on the last $m$ states. Thus all
$(m+1)$ states have been identified, which completes the proof. A
reformulation of this proof in terms of the LSD problem is straightforward and
follows in a similar fashion to the proof of Corollary ${\bf 1}$. ∎
We note in passing that $m$-LSM does not trivially imply $(m-1)$-LSM for
product states. In the $m$-LSM task, $m$ states are accessible to the parties
in order to mark their identities. For $(m-1)$-LSM, however, the number of
accessible states reduces to $(m-1)$ and no trivial inference can be drawn.
Although we were unable to prove whether $m$-LSM implies $(m-1)$-LSM for
product states, if this were the case the consequences would be very
interesting: using the contrapositive of the statement, one would obtain a
protocol for constructing locally indistinguishable states in higher
dimensions starting from states such as Bennett’s UPBs, for which perfect LSD
is not possible.
Our next result, however, establishes that the converse statement of Theorem 1
does not hold in general. We show this by providing an explicit example of a
set of entangled states, known to be locally indistinguishable, for which the
task of LSM is nevertheless possible, with a substantial amount of surplus
entanglement shared between the distant parties at the end of the protocol.
###### Theorem 2.
Perfect LSM of a given set of states $\mathcal{S}$ does not necessarily imply
perfect LSD of $\mathcal{S}$.
###### Proof.
The proof is constructive. We provide a set of pairwise orthogonal states that
can be perfectly marked under LOCC but that does not allow perfect local
distinguishability. To this aim, consider the set of states
$\mathcal{X}_{4}\equiv\\{\ket{\chi_{i}}\\}_{i=1}^{4}\subset\mathbb{C}^{4}_{A}\otimes\mathbb{C}^{4}_{B}$
shared between Alice and Bob, where
$\displaystyle\ket{\chi_{1}}$
$\displaystyle:=\ket{\phi^{+}}_{A_{1}B_{1}}\otimes\ket{\phi^{+}}_{A_{2}B_{2}},\leavevmode\nobreak\
\ket{\chi_{2}}:=\ket{\phi^{-}}_{A_{1}B_{1}}\otimes\ket{\phi^{-}}_{A_{2}B_{2}},$
$\displaystyle\ket{\chi_{3}}$
$\displaystyle:=\ket{\psi^{+}}_{A_{1}B_{1}}\otimes\ket{\phi^{-}}_{A_{2}B_{2}},\leavevmode\nobreak\
\ket{\chi_{4}}:=\ket{\psi^{-}}_{A_{1}B_{1}}\otimes\ket{\phi^{-}}_{A_{2}B_{2}},$
with $\displaystyle\leavevmode\nobreak\ \leavevmode\nobreak\
\ket{\phi^{\pm}}:=\frac{\ket{00}\pm\ket{11}}{\sqrt{2}},\leavevmode\nobreak\
\leavevmode\nobreak\ \leavevmode\nobreak\
\ket{\psi^{\pm}}:=\frac{\ket{01}\pm\ket{10}}{\sqrt{2}},$
and $A_{1},A_{2}$ subsystems are with Alice while $B_{1},B_{2}$ are with Bob.
The part of $\ket{\chi_{i}}$ indexed with $A_{1}B_{1}$ we will call the first
part and the part with index $A_{2}B_{2}$ will be the second part. For LSM,
Alice and Bob are provided the state
$\ket{\chi_{p}}\otimes\ket{\chi_{q}}\otimes\ket{\chi_{r}}\otimes\ket{\chi_{s}}\in\left(\mathbb{C}^{4}_{A}\otimes\mathbb{C}^{4}_{B}\right)^{\otimes
4}$, without specifying the indices $p,q,r,s\in\\{1,\cdots,4\\}$ and $p,q,r,s$
are all distinct. Their collaborative aim is to identify the indices where the
collaboration is restricted to LOCC. It turns out that there exists a local
strategy that marks the states exactly (detailed protocol is provided in
Appendix A). Now, local indistinguishability of the set $\mathcal{X}_{4}$
follows from the results of Yu et al. Yu12 , who proved that
states in $\mathcal{X}_{4}$ cannot be distinguished perfectly by any PPT POVM,
a larger class of operations that strictly contains all LOCC operations. This
completes the proof. ∎
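The pairwise orthogonality of the set $\mathcal{X}_4$ is easy to verify numerically; the sketch below, using NumPy, is purely illustrative (the local marking protocol itself is given in Appendix A). The Gram matrix of the four states is the identity, and this is independent of how the subsystems are regrouped between Alice and Bob.

```python
import numpy as np

# Bell states as vectors in C^4 (computational basis |00>, |01>, |10>, |11>).
phi_p = np.array([1, 0, 0, 1]) / np.sqrt(2)
phi_m = np.array([1, 0, 0, -1]) / np.sqrt(2)
psi_p = np.array([0, 1, 1, 0]) / np.sqrt(2)
psi_m = np.array([0, 1, -1, 0]) / np.sqrt(2)

# chi_i built in the A1B1 (x) A2B2 factorization; the Gram matrix is
# unaffected by regrouping the subsystems between Alice and Bob.
chi = [np.kron(phi_p, phi_p), np.kron(phi_m, phi_m),
       np.kron(psi_p, phi_m), np.kron(psi_m, phi_m)]

gram = np.array([[a @ b for b in chi] for a in chi])
assert np.allclose(gram, np.eye(4))  # the four states are orthonormal
```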
It turns out that at the end of the local marking strategy described in
Theorem 2, $3$ ebits of entanglement (on average) remain between Alice and Bob
(see Appendix A). However, the optimality of the protocol in terms of retained
entanglement remains an open problem.
So far we have considered sets that are locally markable. Our next result
provides an example of a set of mutually orthogonal states that cannot be
marked perfectly under LOCC.
###### Proposition 1.
The two qubit Bell basis
$\mathcal{B}_{4}\equiv\\{\ket{b_{1}}:=\ket{\phi^{+}},\ket{b_{2}}:=\ket{\phi^{-}},\ket{b_{3}}:=\ket{\psi^{+}},\ket{b_{4}}:=\ket{\psi^{-}}\\}\subset\mathbb{C}^{2}\otimes\mathbb{C}^{2}$
is locally unmarkable.
###### Proof.
LSM of $\mathcal{B}_{4}$ is equivalent to LSD of the set
$\mathcal{B}_{\mathcal{P}[\\{4\\}]}$ that contains $24$ pairwise orthogonal
maximally entangled states in $\mathbb{C}^{16}\otimes\mathbb{C}^{16}$.
Therefore, the desired thesis follows from the fact that $n$ pairwise
orthogonal maximally entangled states in $\mathbb{C}^{d}\otimes\mathbb{C}^{d}$
cannot be perfectly distinguished locally whenever $n>d$ Hayashi06 . ∎
Furthermore, in accordance with Lemma 1, the set $\mathcal{B}_{4}$ does not
allow perfect $2$-LSM and it also straightforwardly follows that perfect
$3$-LSM of $\mathcal{B}_{4}$ is impossible (in fact, for any set
$\mathcal{S}$, $(|\mathcal{S}|-1)$-LSM always implies $|\mathcal{S}|$-LSM). A
generalization of Proposition 1 follows by a similar argument.
###### Proposition 2.
Consider any set of maximally entangled states
$\mathcal{B}_{K}(d):=\left\\{\ket{b_{i}}\leavevmode\nobreak\
|\leavevmode\nobreak\ \langle
b_{i}|b_{j}\rangle=\delta_{ij}\right\\}_{i=1}^{K}\subset\mathbb{C}^{d}\otimes\mathbb{C}^{d}$.
The set is locally unmarkable whenever $K!>d^{K}$.
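The criterion of Proposition 2 is straightforward to evaluate; the helper `locally_unmarkable` below is a hypothetical name introduced only for illustration. For the Bell basis $\mathcal{B}_4$ it recovers Proposition 1 ($4!=24>2^{4}=16$), while for $\mathcal{B}_3$ it is inconclusive ($3!=6<2^{3}=8$), which is why Proposition 4 below requires a separate argument.

```python
from math import factorial

def locally_unmarkable(K, d):
    # LSM of K maximally entangled states in C^d (x) C^d reduces to LSD of
    # K! maximally entangled states in C^(d^K) (x) C^(d^K); such a set is
    # locally indistinguishable whenever K! > d^K (Hayashi06 criterion).
    return factorial(K) > d ** K

assert locally_unmarkable(4, 2)       # Bell basis B_4: 24 > 16
assert not locally_unmarkable(3, 2)   # B_3: 6 < 8, criterion inconclusive
```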
We now move on to the possibility of entanglement assisted marking of states
that are otherwise locally unmarkable. It might happen that, given $\delta$
ebits of entanglement, some LSM task can be performed exactly that is
otherwise impossible locally, with $\epsilon$ ebits of entanglement left at
the end of the protocol. Such a protocol we will call a $(\delta,\epsilon)$
entanglement catalytic protocol, and $(\delta-\epsilon)$ quantifies the amount
of entanglement consumed to accomplish the given LSM task.
Recall that, given $1$ ebit of entanglement as an additional resource, the
two-qubit Bell basis can be distinguished perfectly: one of the parties
teleports their part of the unknown Bell state to the other party, who then
performs a Bell-basis measurement to identify the state. Furthermore, it is
known that $1$ ebit of entanglement is the necessary resource for perfect
discrimination of the two-qubit Bell basis Ghosh01 . Coming to the question of
entanglement assisted marking of the set $\mathcal{B}_{4}$, we obtain the
following result.
###### Proposition 3.
There exists a $(2,1)$ entanglement catalytic perfect protocol for LSM of the
set $\mathcal{B}_{4}$.
###### Proof.
LSM of the set $\mathcal{B}_{4}$ is equivalent to LSD of the set
$\mathcal{B}_{\mathcal{P}[\\{4\\}]}$ containing states of the form
$\ket{b_{p}}\otimes\ket{b_{q}}\otimes\ket{b_{r}}\otimes\ket{b_{s}}$ with
$p,q,r,s\in\\{1,\cdots,4\\}\leavevmode\nobreak\ \&\leavevmode\nobreak\
p,q,r,s$ are distinct. Suppose some supplier provides two EPR states for
discriminating the set $\mathcal{B}_{\mathcal{P}[\\{4\\}]}$. Using the
teleportation protocol, Alice and Bob can determine any two of the indices
$p,q,r,s$; say they identify $p$ and $q$. Then the value of $r$
has only two possibilities and the result of Walgate et al. ensures that this
value can be known exactly under LOCC Walgate00 . While determining the value
of $r$, the entanglement of the state $\ket{b_{r}}$ gets destroyed. However,
at the end of the protocol, entanglement of the state $\ket{b_{s}}$ remains
intact and its identity is also known. Therefore, $1$ ebit of entanglement can
be returned to the supplier, so the protocol consumes $1$ ebit of entanglement
in the catalytic sense. ∎
Once again, we are not certain about the optimality of the protocol in
Proposition 3 in terms of resource consumption, and we leave the question open
for further research. One can obtain a more exotic example of the entanglement
catalytic local
marking phenomenon. To this aim we first prove the following result.
###### Proposition 4.
Any three Bell states of the two-qubit system are unmarkable under one-way
LOCC.
###### Proof.
Consider a set of maximally entangled states
$\\{\ket{\phi_{k}}\\}_{k=1}^{n}\subset\mathbb{C}^{d}\otimes\mathbb{C}^{d}$
that can be obtained from
$\ket{\phi_{0}(d)}:=\frac{1}{\sqrt{d}}\sum_{i=0}^{d-1}\ket{ii}$ by applying
some unitary on one part of the system, i.e.,
$\ket{\phi_{k}}=(U_{k}\otimes\mathbb{I})\ket{\phi_{0}(d)}$. According to a
criterion, conjectured in Ghosh04 and subsequently derived in
Bandyopadhyay11(0) , the states can be discriminated under one-way LOCC if and
only if there exists a $\ket{\psi}\in\mathbb{C}^{d}$, such that
$\langle\psi_{i}|\psi_{j}\rangle=\delta_{ij},\forall i,j\in\\{1,2,\cdots,n\\}$,
where $\ket{\psi_{k}}=U_{k}\ket{\psi}$. In our case, without loss of
generality we can consider
$\mathcal{B}_{3}\equiv\\{\ket{b_{1}}:=\ket{\phi^{+}},\ket{b_{2}}:=\ket{\phi^{-}},\ket{b_{3}}=\ket{\psi^{+}}\\}$,
such that
$\mathcal{B}_{\mathcal{P}[\\{3\\}]}\equiv\\{\ket{\phi_{k}}\\}_{k=1}^{6}\subset\mathbb{C}^{8}\otimes\mathbb{C}^{8}$,
where
$\displaystyle\ket{b_{1}b_{2}b_{3}}:=\ket{\phi_{1}}=(U_{1}\otimes\mathbb{I}_{8})\ket{\phi_{0}(8)}=([\mathbb{I}_{2}\otimes\sigma_{z}\otimes\sigma_{x}]\otimes\mathbb{I}_{8})\ket{\phi_{0}(8)},$
$\displaystyle\ket{b_{1}b_{3}b_{2}}:=\ket{\phi_{2}}=(U_{2}\otimes\mathbb{I}_{8})\ket{\phi_{0}(8)}=([\mathbb{I}_{2}\otimes\sigma_{x}\otimes\sigma_{z}]\otimes\mathbb{I}_{8})\ket{\phi_{0}(8)},$
$\displaystyle\ket{b_{2}b_{3}b_{1}}:=\ket{\phi_{3}}=(U_{3}\otimes\mathbb{I}_{8})\ket{\phi_{0}(8)}=([\sigma_{z}\otimes\sigma_{x}\otimes\mathbb{I}_{2}]\otimes\mathbb{I}_{8})\ket{\phi_{0}(8)},$
$\displaystyle\ket{b_{2}b_{1}b_{3}}:=\ket{\phi_{4}}=(U_{4}\otimes\mathbb{I}_{8})\ket{\phi_{0}(8)}=([\sigma_{z}\otimes\mathbb{I}_{2}\otimes\sigma_{x}]\otimes\mathbb{I}_{8})\ket{\phi_{0}(8)},$
$\displaystyle\ket{b_{3}b_{1}b_{2}}:=\ket{\phi_{5}}=(U_{5}\otimes\mathbb{I}_{8})\ket{\phi_{0}(8)}=([\sigma_{x}\otimes\mathbb{I}_{2}\otimes\sigma_{z}]\otimes\mathbb{I}_{8})\ket{\phi_{0}(8)},$
$\displaystyle\ket{b_{3}b_{2}b_{1}}:=\ket{\phi_{6}}=(U_{6}\otimes\mathbb{I}_{8})\ket{\phi_{0}(8)}=([\sigma_{x}\otimes\sigma_{z}\otimes\mathbb{I}_{2}]\otimes\mathbb{I}_{8})\ket{\phi_{0}(8)}.$
Here $\ket{\phi_{0}(8)}:=\ket{\phi^{+}}^{\otimes
3}\in\mathbb{C}^{8}\otimes\mathbb{C}^{8}$. Now, consider an arbitrary quantum
state $\ket{\chi}:=\sum_{i=0}^{7}a_{i}\ket{i}\in\mathbb{C}^{8}$, where
$a_{i}\in\mathbb{C}\leavevmode\nobreak\ \&\leavevmode\nobreak\
\sum_{i=0}^{7}|a_{i}|^{2}=1$. Thus the condition for distinguishability of the
set $\mathcal{B}_{\mathcal{P}[\\{3\\}]}$ under one-way LOCC turns out to be
$\langle\psi_{i}|\psi_{j}\rangle=\delta_{ij}$, where
$\ket{\psi_{k}}:=U_{k}\ket{\chi}$. It boils down to a numerical exercise to
show that the aforesaid condition is not satisfied for any
$\ket{\chi}\in\mathbb{C}^{8}$. ∎
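Part of this numerical exercise can be reproduced with a short NumPy sketch (illustrative only): it verifies that the six states $\ket{\phi_k}$ are pairwise orthogonal, via $\langle\phi_i|\phi_j\rangle=\mathrm{Tr}(U_i^{\dagger}U_j)/8$, and that the one-way criterion fails for one trial state, $\ket{\chi}=\ket{000}$. Ruling out every $\ket{\chi}\in\mathbb{C}^{8}$, as the proof requires, needs an exhaustive search or optimization not shown here.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sz = np.array([[1, 0], [0, -1]])

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# The six unitaries U_1..U_6 from the proof of Proposition 4.
U = [kron3(I2, sz, sx), kron3(I2, sx, sz), kron3(sz, sx, I2),
     kron3(sz, I2, sx), kron3(sx, I2, sz), kron3(sx, sz, I2)]

# <phi_i|phi_j> = Tr(U_i^dag U_j)/8: the six states are pairwise orthogonal.
overlaps = np.array([[np.trace(Ui.conj().T @ Uj) / 8 for Uj in U] for Ui in U])
assert np.allclose(overlaps, np.eye(6))

# The one-way LOCC criterion fails for the trial state |chi> = |000>:
chi = np.zeros(8); chi[0] = 1.0
G = np.array([[chi @ (Ui.conj().T @ Uj) @ chi for Uj in U] for Ui in U])
assert not np.allclose(G, np.eye(6))  # off-diagonal overlaps survive
```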
While Proposition 4 proves impossibility of LSM of the set $\mathcal{B}_{3}$
under one-way LOCC, we have the following stronger result if we consider
$2$-LSM of the set.
###### Corollary 3.
Perfect $2$-LSM of the set $\mathcal{B}_{3}$ is not possible even under two-way LOCC protocols.
The proof follows from the fact that $6$ mutually orthogonal maximally entangled states in
$(\mathbb{C}^{4})^{\otimes 2}$ are not locally distinguishable. Moving on to
the question of entanglement-assisted discrimination of the set
$\mathcal{B}_{3}$, it has been established that perfect discrimination
requires $1$ ebit of entanglement Bandyopadhyay15 . Regarding entanglement-assisted
marking of $\mathcal{B}_{3}$ we have the following result.
###### Proposition 5.
There exists a $(1,1)$ entanglement catalytic protocol for perfect LSM of the
set $\mathcal{B}_{3}$.
###### Proof.
Let Alice and Bob have $1$-ebit entanglement (received from some supplier) to
distinguish the state $\ket{b_{p}}\otimes\ket{b_{q}}\otimes\ket{b_{r}}$, where
$p,q,r\in\{1,2,3\}$ are
distinct. Using the teleportation scheme they can identify one of the indices
(say) $p$. Then, using the method of Walgate et al. Walgate00 , they identify
the remaining two indices. At the end of this protocol $1$-ebit entanglement
remains with Alice and Bob which they can return to the supplier. So, in a
catalytic sense, the protocol consumes $0$-ebit of entanglement. ∎
Note that the protocol in Proposition 5 involves two-way CC. If the
teleportation step is from Alice to Bob and thus requires CC from Alice to
Bob, then the Walgate step requires CC from Bob to Alice. The question remains
open whether there exists some local protocol with two-way CC that perfectly
marks the set $\mathcal{B}_{3}$ without involving entanglement even in the
catalytic sense.
## IV Discussions
To further highlight the implication of the results from the previous section,
a few comments are in order. Although both the problems of LSM and LSD stem
from a common notion of state identification, the present work strives to
point out a subtle difference between them. To elaborate on this difference, one
can consider the following three-party information-theoretic task.
Let us suppose three parties Alice, Bob and Charlie are spatially separated.
Charlie shares quantum transmission lines with both Alice and Bob, but Alice
and Bob are restricted to classical communication between themselves only.
Charlie would like to communicate a classical message to both Alice and Bob.
But to do that he is provided with an ensemble of $n$ orthogonal bi-partite
states of local dimension $d$ which are not locally distinguishable. A
justification for communicating in this way is to avoid the message being
decoded by non-communicating eavesdroppers between Charlie-Alice and Charlie-
Bob.
Now Charlie can provide Alice and Bob multiple copies of the unknown state
from the ensemble, so that they can perform perfect LSD. Let us suppose $k$
copies are necessary for perfect LSD. Thus Charlie could communicate to Alice
and Bob $\log n$ bits by sending $k$ qudits, i.e. $\frac{\log n}{k}$ bits per qudit.
Alternatively, Charlie can provide Alice and Bob states from the ensemble
corresponding to the LSM task, i.e. an ensemble of size $n!$. Possibility of
perfect LSM of this ensemble under LOCC will result in a communication of
$\log n!$ bits by sending $n$ qudits, i.e. $\frac{\log n!}{n}$ bits per qudit.
To compare the average communication per qudit, let us consider the ensemble
in Theorem 2 of the main text. The ensemble $\mathcal{X}_{4}$ of $4$
orthogonal states with local dimension $4$ is given to Charlie. This ensemble
does not allow perfect LSD (according to Theorem 2), but $2$ copies of the
unknown state are sufficient for perfect LSD. So the average communication per
ququad is $\frac{\log 4}{2}=1$ bit. On the other hand, perfect LSM of this
ensemble (as in Theorem 2) implies an average communication per ququad of
$\frac{\log 4!}{4}=\frac{\log 24}{4}=\frac{3+\log 3}{4}$ bits, which is greater
than the average communication for the protocol based on multi-copy LSD. In
this sense, LSM is more economical than the conventional multi-copy LSD.
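The two rates can be reproduced with a few lines of Python (a simple arithmetic check of the comparison above):

```python
from math import factorial, log2

def lsd_rate(n, k):
    """Bits per qudit when k copies of the unknown state enable perfect LSD."""
    return log2(n) / k

def lsm_rate(n):
    """Bits per qudit when perfect LSM of the n! ordered products is possible:
    log n! bits are communicated by sending n qudits."""
    return log2(factorial(n)) / n

# The ensemble X_4 of Theorem 2: n = 4 states, k = 2 copies for perfect LSD.
print(lsd_rate(4, 2), lsm_rate(4))  # 1.0 vs (3 + log2 3)/4 ≈ 1.146
```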
Proposition 4 is also interesting from a different perspective. It is known
that any set of $d+1$ mutually orthogonal $d\otimes d$ maximally entangled
states is locally indistinguishable Hayashi06 . But the answer to the same
question for smaller sets $(<d+1)$ is known only in a few cases. Although the
result of Walgate et al. ensures local distinguishability of any two maximally
entangled states in $2\otimes 2$, and Nathanson later proved that any three
mutually orthogonal $3\otimes 3$ maximally entangled states are locally
distinguishable Nathanson05 , the authors in Ghosh04 ; Yu12 ; Singal17
provide examples of $4$ maximally entangled states in $4\otimes 4$ that are
not locally distinguishable. In Ref. Bandyopadhyay11 one can find an example
of $4$ maximally entangled states in $5\otimes 5$ as well as an example of $5$
maximally entangled states in $6\otimes 6$ that cannot be perfectly
distinguished under one-way LOCC. In a similar spirit, the set
$\mathcal{B}_{\mathcal{P}[\{3\}]}$ constitutes an example of $6$ maximally
entangled states in $8\otimes 8$ that cannot be distinguished under one-way
LOCC.
## V Conclusions
We have proposed a novel class of discrimination tasks, namely the $m$-LSM
task, which goes beyond the much-explored task of local state discrimination.
The present study unravels several curious and intricate features of the
proposed task.
Although Lemma 1, Corollaries 1-2, and Theorems 1-2 unveil some general features
of the local state marking task, and Propositions 1-5 report some interesting
consequences for specific sets of states, the present work leaves
open a number of important questions and possibilities for further study. In
the following we summarize some of these. First, it is important to resolve
the question of optimal resource consumption for the local state marking task with
and without catalysts, as mentioned in the discussions after Theorem 2 and
Proposition 3, respectively. Second, all the ensembles considered in the
present work consist of bipartite entangled states. Except Corollary 2, the
present work does not provide much insight for the local state marking of
ensembles containing only product states. Does there exist such a product
ensemble that cannot be marked locally? If yes, would it imply a stronger
notion of nonlocality without entanglement Bennett99 ? In the recent past, this
phenomenon of nonlocality without entanglement has been studied in the
generalized probabilistic theory framework Bhattacharya20 . It might be
interesting to extend the study of LSM in this framework. Third, in the same
spirit of multipartite LSD Hayashi06 , exploring LSM for multipartite systems
might unveil new features of LOCC as well as of multipartite entanglement.
Finally, local indistinguishability has also been shown to have practical
implications in cryptographic primitives such as data hiding and secret
sharing. It would be quite interesting to find such novel applications for the
LSM task introduced here.
Acknowledgment: R.K.P. acknowledges support from the CSIR Project No.
09/997(0079)/2020-EMR-I. T.G. was supported by the Hong Kong Research Grant
Council through Grant No. 17300918 and through the Senior Research Fellowship
Scheme SRFS2021-7S02. S.S.B. acknowledges partial support by the Foundation
for Polish Science (IRAP project, ICTQT, Contract No. MAB/2018/5, co-financed
by EU within Smart Growth Operational Programme). M.A. and M.B. acknowledge
support through the research grant of INSPIRE Faculty fellowship from the
Department of Science and Technology, Government of India. M.B. acknowledges
funding from the National Mission in Interdisciplinary Cyber-Physical systems
from the Department of Science and Technology through the I-HUB Quantum
Technology Foundation (Grant No. I-HUB/PDF/2021-22/008) and the start-up
research grant from SERB, Department of Science and Technology (Grant No.
SRG/2021/000267).
## Appendix A Detailed proof of Theorem 2
###### Proof.
Here we show that the set
$\mathcal{X}_{4}\equiv\{\ket{\chi_{i}}\}_{i=1}^{4}\subset\mathbb{C}^{4}_{A}\otimes\mathbb{C}^{4}_{B}$
allows perfect LSM (more particularly, $4$-LSM), where
$\displaystyle\ket{\chi_{1}}:=\ket{\phi^{+}}_{A_{1}B_{1}}\otimes\ket{\phi^{+}}_{A_{2}B_{2}},\quad\ket{\chi_{2}}:=\ket{\phi^{-}}_{A_{1}B_{1}}\otimes\ket{\phi^{-}}_{A_{2}B_{2}},$
$\displaystyle\ket{\chi_{3}}:=\ket{\psi^{+}}_{A_{1}B_{1}}\otimes\ket{\phi^{-}}_{A_{2}B_{2}},\quad\ket{\chi_{4}}:=\ket{\psi^{-}}_{A_{1}B_{1}}\otimes\ket{\phi^{-}}_{A_{2}B_{2}},$
with $\ket{\phi^{\pm}}:=\frac{\ket{00}\pm\ket{11}}{\sqrt{2}}$ and
$\ket{\psi^{\pm}}:=\frac{\ket{01}\pm\ket{10}}{\sqrt{2}}.$
The $A_{1},A_{2}$ subsystems are with Alice while $B_{1},B_{2}$ are with Bob.
As already mentioned, the part of the state $\ket{\chi_{i}}$ indexed with
$A_{1}B_{1}$ will be called the first part and the part with index
$A_{2}B_{2}$ will be called the second part. The set $\mathcal{X}_{4}$ can be
thought of as the union of two disjoint sets of states
$\displaystyle\mathcal{X}_{4}=G_{1}\cup G_{2}\quad\&\quad G_{1}\cap G_{2}=\emptyset,$
$\displaystyle\mbox{where}\quad G_{1}:=\{\ket{\chi_{1}},\ket{\chi_{2}}\},\quad G_{2}:=\{\ket{\chi_{3}},\ket{\chi_{4}}\}.$
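This grouping can be checked numerically (a minimal NumPy sketch, illustrative only): the four states are mutually orthogonal, and $G_{1}$ and $G_{2}$ are separated by the correlation of local Pauli-Z outcomes on the first part, which is precisely what Step-1 of the protocol below exploits.

```python
import numpy as np

z0, z1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
phi_p = (np.kron(z0, z0) + np.kron(z1, z1)) / np.sqrt(2)
phi_m = (np.kron(z0, z0) - np.kron(z1, z1)) / np.sqrt(2)
psi_p = (np.kron(z0, z1) + np.kron(z1, z0)) / np.sqrt(2)
psi_m = (np.kron(z0, z1) - np.kron(z1, z0)) / np.sqrt(2)

# X_4, written in the A1 B1 A2 B2 ordering (first part ⊗ second part).
chis = [np.kron(phi_p, phi_p), np.kron(phi_m, phi_m),
        np.kron(psi_p, phi_m), np.kron(psi_m, phi_m)]

# Mutual orthogonality of the four states.
gram = np.array([[np.vdot(a, b) for b in chis] for a in chis])

# Step-1 observable: Pauli-Z on A1 and on B1. Its expectation value is +1
# on G1 (correlated outcomes) and -1 on G2 (anti-correlated outcomes).
Z = np.diag([1.0, -1.0])
ZZ_first = np.kron(np.kron(Z, Z), np.eye(4))
corr = [float(np.vdot(c, ZZ_first @ c)) for c in chis]
print(np.allclose(gram, np.eye(4)), corr)  # True, [+1, +1, -1, -1]
```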
The LSM task can be considered as identifying the indices
$p,q,r,s\in\{1,\cdots,4\}$ in the state
$\ket{\Sigma}:=\ket{\chi_{p}}\otimes\ket{\chi_{q}}\otimes\ket{\chi_{r}}\otimes\ket{\chi_{s}}\in\left(\mathbb{C}^{4}_{A}\otimes\mathbb{C}^{4}_{B}\right)^{\otimes
4}$ locally, where $p,q,r,s$ are distinct. Note that the state $\ket{\Sigma}$
is a composition (tensor product) of four different states and we will call
$\ket{\chi_{p}}$ the first state, $\ket{\chi_{q}}$ the second state and so on
(of course, the indices $p,q,\cdots$ are not known and the aim is to identify
them locally). The local marking strategy of Alice and Bob goes as follows:
Figure 2: (Color online) Flow chart of the
protocol if correlated outcomes are obtained in Step-1. The number written in
each branch indicates the probability of occurrence of that branch. The
average amount of entanglement left in this case is $\left[\frac{1}{2}\times
3+\frac{1}{2}\left\{\frac{1}{3}\times 4+\frac{2}{3}\left(\frac{1}{2}\times
3+\frac{1}{2}\times 2\right)\right\}\right]=3$ ebits.
Step-1: Both Alice and Bob perform the Pauli-Z measurement on the first part
of the first state (i.e., the state $\ket{\chi_{p}}$). If Alice and Bob both
obtain the same outcome, i.e., correlated (C) outcomes, then they conclude
that $\ket{\chi_{p}}\in G_{1}$, whereas anti-correlated (AC) outcomes imply
$\ket{\chi_{p}}\in G_{2}$. Depending on the result obtained in Step-1 they
determine their protocol for the next step. For instance, if they obtain
correlated outcomes then their protocol is as discussed below.
Figure 3: (Color online) Flow chart of the protocol if anti-correlated
outcomes are obtained in Step-1. The average amount of entanglement
left in this case is $\left[\frac{1}{3}\times
4+\frac{2}{3}\left\{\frac{1}{2}\times 2+\frac{1}{2}\times
3\right\}\right]=3$ ebits.
Step-2: Knowing that $\ket{\chi_{p}}\in G_{1}$, both Alice and Bob perform
Pauli-X measurement on the second part of $\ket{\chi_{p}}$. Correlated
outcomes imply that the first state is $\ket{\chi_{1}}$ (i.e., $p=1$), else it
is $\ket{\chi_{2}}$ (i.e., $p=2$). Accordingly, two different branches open up
at the next step.
Step-3 :[Case-I] $p=1$ in Step-2 implies that the second part of all the
states $\ket{\chi_{q}},\ket{\chi_{r}}$ and $\ket{\chi_{s}}$ is $\ket{\phi^{-}}$. Using the second
part of the second state (i.e., $\ket{\chi_{q}}$) Alice and Bob follow the
teleportation protocol (TP) to prepare the first part of $\ket{\chi_{q}}$ at
Alice’s laboratory. Alice now performs the Bell basis measurement (BM) on the
first part of $\ket{\chi_{q}}$ and depending upon the measurement outcome
marks the state exactly.
Step-4 :[Case-I] Since two states $\ket{\chi_{p}}$ and $\ket{\chi_{q}}$ are
marked exactly (in this case $p=1$ and $q=2$, $3$, or $4$), the
result of Walgate et al. Walgate00 allows us to mark the state
$\ket{\chi_{r}}$ by a local protocol on the first part of the state (in the
flow charts of Figures 2 and 3 we call this the Walgate Protocol and denote it
as WP). The remaining state $\ket{\chi_{s}}$ is immediately marked as the set
$\mathcal{X}_{4}$ is known. For the sake of completeness we list the different
possibilities and the corresponding Walgate Protocols:
* •
$p=1$ (in Step-2) and $q=2$ (in Step-3 [Case- I]): Both Alice and Bob perform
the Pauli-X measurement on the first part of $\ket{\chi_{r}}$. Correlated
outcomes imply $r=3$ and $s=4$. Anti-correlated outcomes imply $r=4$ and
$s=3$.
* •
$p=1$ (in Step-2) and $q=3$ (in Step-3 [Case- I]): Both Alice and Bob perform
the Pauli-Z measurement on the first part of $\ket{\chi_{r}}$. Correlated
outcomes imply $r=2$ and $s=4$. Anti-correlated outcomes imply $r=4$ and
$s=2$.
* •
$p=1$ (in Step-2) and $q=4$ (in Step-3 [Case- I]): Both Alice and Bob perform
the Pauli-Z measurement on the first part of $\ket{\chi_{r}}$. Correlated
outcomes imply $r=2$ and $s=3$. Anti-correlated outcomes imply $r=3$ and
$s=2$.
Note that the entanglement of $\ket{\chi_{p}}$, $\ket{\chi_{q}}$, and the
first part of $\ket{\chi_{r}}$ gets destroyed in the protocol, whereas the
entanglement of $\ket{\chi_{s}}$ and the second part of $\ket{\chi_{r}}$
remains intact. So, whatever the outcome of BM at Step-3, the protocol ends
with $3$-ebit entanglement that can be used as a resource.
Step-3 :[Case-II] Let Step-2 yield the conclusion that $p=2$. Then, both Alice
and Bob perform the Pauli-Z measurement on the first part of the second state
(i.e., the state $\ket{\chi_{q}}$).
* •
If correlated outcomes are obtained then $q=1$.
* •
If anti-correlated outcomes are obtained then $q=3$ or $4$.
Step-4 :[Case-II] If correlated outcomes are obtained in Step-3 [Case-II], then
we have $p=2$ and $q=1$. Again, the result of Walgate et al. ensures that
local marking of $\ket{\chi_{r}}$ is possible by a local protocol on the
first part of the state, and accordingly the remaining state $\ket{\chi_{s}}$
is also marked. This leaves us with $4$ ebits of entanglement at the end of the
protocol – $1$ ebit each in $\ket{\chi_{q}}$ and $\ket{\chi_{r}}$, and
$2$ ebits in $\ket{\chi_{s}}$.
If anti-correlated outcomes are obtained in Step-3 [Case-II] then Alice and Bob
know that the second part of $\ket{\chi_{q}}$ is $\ket{\phi^{-}}$. Utilizing
this $\ket{\phi^{-}}$ they teleport and prepare the first part of
$\ket{\chi_{r}}$ at Alice’s laboratory. Alice performs the Bell basis
measurement (BM) on the first part of $\ket{\chi_{r}}$ and marks the state
exactly.
* •
If $\ket{\chi_{r}}$ is identified as $\ket{\chi_{4}}$ then we have
$p=2,q=3,r=4,s=1$. If $\ket{\chi_{r}}$ is identified as $\ket{\chi_{3}}$ then
we have $p=2,q=4,r=3,s=1$. In both these cases we are left with $3$ ebits of
entanglement.
* •
If the state $\ket{\chi_{r}}$ is identified as $\ket{\chi_{1}}$, then we have
$p=2$ and $r=1$. WP allows us to mark the state $\ket{\chi_{s}}$ by a local
protocol on its first part. Therefore we have either $p=2,q=4,r=1,s=3$ or
$p=2,q=3,r=1,s=4$. Both these cases leave us with $2$ ebits of entanglement.
So far, we have discussed the protocol if we obtain correlated outcomes in
Step-1. The protocol is summarized in the flow-chart shown in Figure 2.
However, to complete the proof we need to analyze the case if anti-correlated
outcomes are obtained in Step-1. The corresponding flow-chart is shown in
Figure 3. From the flow charts it follows straightforwardly that on average
$3$ ebits $\left[\frac{1}{2}(3+3)\right]$ of entanglement are left at the end of
the protocol. ∎
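The branch averages quoted in the captions of Figures 2 and 3 can be reproduced with exact rational arithmetic (a simple bookkeeping check of the flow charts):

```python
from fractions import Fraction as F

# Figure 2 (correlated outcomes in Step-1): branches weighted by their
# probabilities, each leaving the stated number of ebits.
avg_corr = F(1, 2) * 3 + F(1, 2) * (F(1, 3) * 4
           + F(2, 3) * (F(1, 2) * 3 + F(1, 2) * 2))

# Figure 3 (anti-correlated outcomes in Step-1).
avg_anti = F(1, 3) * 4 + F(2, 3) * (F(1, 2) * 2 + F(1, 2) * 3)

# Overall: the two Step-1 outcomes occur with probability 1/2 each.
overall = F(1, 2) * (avg_corr + avg_anti)
print(avg_corr, avg_anti, overall)  # 3 3 3
```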
## References
* (1) C. E. Shannon; A mathematical theory of communication, Bell Syst. Tech. J. 27, 379 (1948).
* (2) E.L. Lehmann and J.P. Romano; Testing Statistical Hypotheses, Springer (2005).
* (3) S. Wiesner; Conjugate coding, ACM SIGACT News 15, 78 (1983).
* (4) C. H. Bennett and G. Brassard, in Proceedings of IEEE International Conference on Computers, Systems, and Signal Processing (IEEE, New York, 1984).
* (5) A. K. Ekert; Quantum cryptography based on Bell’s theorem, Phys. Rev. Lett. 67, 661 (1991).
* (6) C. H. Bennett, G. Brassard, and N. D. Mermin; Quantum cryptography without Bell’s theorem, Phys. Rev. Lett. 68, 557 (1992).
* (7) C. H. Bennett and S. J. Wiesner; Communication via one- and two-particle operators on Einstein-Podolsky-Rosen states, Phys. Rev. Lett. 69, 2881 (1992).
* (8) C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, and W. K. Wootters; Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen channels, Phys. Rev. Lett. 70, 1895 (1993).
* (9) W. Wootters and W. Zurek; A Single Quantum Cannot be Cloned, Nature 299, 802 (1982).
* (10) H. Barnum, C. M. Caves, C. A. Fuchs, R. Jozsa, and B. Schumacher; Noncommuting Mixed States Cannot Be Broadcast, Phys. Rev. Lett. 76, 2818 (1996).
* (11) H. Barnum, J. Barrett, M. Leifer, and A. Wilce; Generalized No-Broadcasting Theorem, Phys. Rev. Lett. 99, 240501 (2007).
* (12) M. Banik, S. Saha, T. Guha, S. Agrawal, S. S. Bhattacharya, A. Roy, and A. S. Majumdar; Constraining the state space in any physical theory with the principle of information symmetry, Phys. Rev. A 100, 060101(R) (2019).
* (13) State space of a classical system having finite number of perfectly distinguishable states is described by some simplex embedded in some $\mathbb{R}^{d}$ where extreme points of the simplex correspond to the pure states. On the other hand, distributions on phase space represents mixed state while delta distributions, i.e., the phase space points correspond to pure states that are unaccountably many in numbers but perfectly distinguishable at least in principle.
* (14) C. H. Bennett, D. P. DiVincenzo, C. A. Fuchs, T. Mor, E. Rains, P. W. Shor, J. A. Smolin, and W. K. Wootters; Quantum nonlocality without entanglement, Phys. Rev. A 59, 1070 (1999).
* (15) J. Walgate, A. J. Short, L. Hardy, and V. Vedral; Local Distinguishability of Multipartite Orthogonal Quantum States, Phys. Rev. Lett. 85, 4972 (2000).
* (16) S. Ghosh, G. Kar, A. Roy, A. Sen(De), and U. Sen; Distinguishability of Bell States, Phys. Rev. Lett. 87, 277902 (2001).
* (17) J. Walgate and L. Hardy; Nonlocality, Asymmetry, and Distinguishing Bipartite States, Phys. Rev. Lett. 89, 147901 (2002).
* (18) S. Ghosh, G. Kar, A. Roy, and D. Sarkar; Distinguishability of maximally entangled states, Phys. Rev. A 70, 022304 (2004).
* (19) M. Horodecki, A. Sen(De), U. Sen, and K. Horodecki; Local Indistinguishability: More Nonlocality with Less Entanglement, Phys. Rev. Lett. 90, 047902 (2003).
* (20) J. Watrous; Bipartite Subspaces Having No Bases Distinguishable by Local Operations and Classical Communication, Phys. Rev. Lett. 95, 080505 (2005).
* (21) M. Hayashi, D. Markham, M. Murao, M. Owari, and S. Virmani; Bounds on Multipartite Entangled Orthogonal State Discrimination Using Local Operations and Classical Communication, Phys. Rev. Lett. 96, 040501 (2006).
* (22) C. H. Bennett, D. P. DiVincenzo, T. Mor, P. W. Shor, J. A. Smolin, and B. M. Terhal; Unextendible Product Bases and Bound Entanglement, Phys. Rev. Lett. 82, 5385 (1999).
* (23) D. P. DiVincenzo, T. Mor, P. W. Shor, J. A. Smolin, B. M. Terhal; Unextendible Product Bases, Uncompletable Product Bases and Bound Entanglement, Comm. Math. Phys. 238, 379 (2003).
* (24) J. Niset and N. J. Cerf; Multipartite nonlocality without entanglement in many dimensions, Phys. Rev. A 74, 052103 (2006).
* (25) R. Duan, Y. Feng, Z. Ji, and M. Ying; Distinguishing Arbitrary Multipartite Basis Unambiguously Using Local Operations and Classical Communication, Phys. Rev. Lett. 98, 230502 (2007).
* (26) J. Calsamiglia, J. I. de Vicente, R. Muñoz-Tapia, and E. Bagan; Local Discrimination of Mixed States, Phys. Rev. Lett. 105, 080504 (2010).
* (27) S. Bandyopadhyay; More Nonlocality with Less Purity, Phys. Rev. Lett. 106, 210402 (2011).
* (28) E. Chitambar, R. Duan, and Min-Hsiu Hsieh; When do Local Operations and Classical Communication Suffice for Two-Qubit State Discrimination? IEEE Trans. Inform. Theory 60, 1549 (2014).
* (29) S. Halder; Several nonlocal sets of multipartite pure orthogonal product states, Phys. Rev. A 98, 022303 (2018).
* (30) M. Demianowicz and R. Augusiak; From unextendible product bases to genuinely entangled subspaces, Phys. Rev. A 98, 012313 (2018).
* (31) S. Halder, M. Banik, S. Agrawal, and S. Bandyopadhyay; Strong Quantum Nonlocality without Entanglement, Phys. Rev. Lett. 122, 040403 (2019).
* (32) S. Halder, M. Banik, and S. Ghosh; Family of bound entangled states on the boundary of the Peres set, Phys. Rev. A 99, 062329 (2019).
* (33) S. Agrawal, S. Halder, and M. Banik; Genuinely entangled subspace with all-encompassing distillable entanglement across every bipartition, Phys. Rev. A 99, 032335 (2019).
* (34) S. Rout, A. G. Maity, A. Mukherjee, S. Halder, and M. Banik; Genuinely nonlocal product bases: Classification and entanglement-assisted discrimination, Phys. Rev. A 100, 032321 (2019).
* (35) S. S. Bhattacharya, S. Saha, T. Guha, and M. Banik; Nonlocality without entanglement: Quantum theory and beyond, Phys. Rev. Research 2, 012068(R) (2020).
* (36) M. Banik, T. Guha, M. Alimuddin, G. Kar, S. Halder, S. S. Bhattacharya; Multicopy Adaptive Local Discrimination: Strongest Possible Two-Qubit Nonlocal Bases, Phys. Rev. Lett. 126, 210505 (2021)
* (37) S. Rout, A. G. Maity, A. Mukherjee, S. Halder, and M. Banik; Local State Discrimination and Ordering of Multipartite Entangled States, arXiv:1910.14308.
* (38) B. M. Terhal, D. P. DiVincenzo, and D. W. Leung; Hiding bits in Bell states, Phys. Rev. Lett. 86, 5807 (2001).
* (39) D. P. DiVincenzo, D. W. Leung, and B. M. Terhal; Quantum data hiding, IEEE Trans. Inf. Theory 48, 580 (2002).
* (40) T. Eggeling and R. F. Werner; Hiding Classical Data in Multipartite Quantum States, Phys. Rev. Lett. 89, 097905 (2002).
* (41) D. Markham and B. C. Sanders; Graph states for quantum secret sharing, Phys. Rev. A 78, 042309 (2008).
* (42) W. Matthews, S. Wehner, and A. Winter; Distinguishability of quantum states under restricted families of measurements with an application to quantum data hiding, Commun. Math. Phys. 291, 813 (2009).
* (43) C. W. Helstrom; Quantum Detection and Estimation Theory, J. Stat. Phys. 1, 231 (1969).
* (44) A. S. Holevo; Statistical decision theory for quantum systems, J. Multivar. Anal. 3, 337 (1973).
* (45) H. Yuen, R. Kennedy, and M. Lax; Optimum testing of multiple hypotheses in quantum detection theory, IEEE Trans. Inf. Theory 21, 125 (1975).
* (46) G. Chiribella, G. M. D’Ariano, and P. Perinotti; Quantum Circuit Architecture, Phys. Rev. Lett. 101, 060401 (2008).
* (47) M. Piani and J. Watrous; All Entangled States are Useful for Channel Discrimination, Phys. Rev. Lett. 102, 250501 (2009).
* (48) G. Chiribella; Perfect discrimination of no-signalling channels via quantum superposition of causal structures, Phys. Rev. A 86, 040301 (2012).
* (49) C. Hirche; Quantum Network Discrimination, arXiv:2103.02404.
* (50) S. Pirandola, R. Laurenza, C. Lupo, and J. L. Pereira; Fundamental limits to quantum channel discrimination, npj Quantum Information 5, 50 (2019).
* (51) R. Takagi, B. Regula, K. Bu, Zi-Wen Liu, and G. Adesso; Operational Advantage of Quantum Resources in Subchannel Discrimination, Phys. Rev. Lett. 122, 140402 (2019).
* (52) R. Takagi and B. Regula; General Resource Theories in Quantum Mechanics and Beyond: Operational Characterization via Discrimination Tasks, Phys. Rev. X 9, 031053 (2019).
* (53) G. Chiribella, M. Banik, S. S. Bhattacharya, T. Guha, M. Alimuddin, A. Roy, S. Saha, S. Agrawal, and G. Kar; Indefinite causal order enables perfect quantum communication with zero capacity channels, New J. Phys. 23, 033039 (2021).
* (54) S. S. Bhattacharya, A. G. Maity, T. Guha, G. Chiribella, and M. Banik; Random-Receiver Quantum Communication, PRX Quantum 2, 020350 (2021).
* (55) While formulation of quantum mechanics assume this tensor product structure as a postulate, in a recent letter Carcassi21 Carcassi et al. have derived the tensor product rule from the state postulate and from the measurement postulate starting with a natural definition of a composite system as a set containing the component systems.
* (56) G. Carcassi, L. Maccone, and C. A. Aidala; Four Postulates of Quantum Mechanics Are Three, Phys. Rev. Lett. 126, 110402 (2021).
* (57) E. Chitambar, D. Leung, L. Mancinska, M. Ozols, and A. Winter; Everything You Always Wanted to Know About LOCC (But Were Afraid to Ask), Commun. Math. Phys. 328, 303 (2014).
* (58) A. Peres and W. K. Wootters; Optimal detection of quantum information, Phys. Rev. Lett. 66, 1119 (1991).
* (59) E. Chitambar and Min-Hsiu Hsieh; Revisiting the optimal detection of quantum information, Phys. Rev. A 88, 020302(R) (2013).
* (60) N. Yu, R. Duan, and M. Ying; Four Locally Indistinguishable Ququad-Ququad Orthogonal Maximally Entangled States, Phys. Rev. Lett. 109, 020506 (2012).
* (61) S. Bandyopadhyay, S. Ghosh, and G. Kar; LOCC distinguishability of unilaterally transformable quantum states, New J. Phys. 13, 123013 (2011).
* (62) S. Bandyopadhyay, A. Cosentino, N. Johnston, V. Russo, J. Watrous, N. Yu; Limitations on separable measurements by convex optimization, IEEE Trans. Inf. Theory. 61, 3593 (2015).
* (63) M. Nathanson; Distinguishing bipartite orthogonal states using LOCC: Best and worst cases, J. Math. Phys. 46, 062103 (2005).
* (64) T. Singal, R. Rahaman, S. Ghosh, and G. Kar; Necessary condition for local distinguishability of maximally entangled states: Beyond orthogonality preservation, Phys. Rev. A 96, 042314 (2017).
# Raw Differentiable Architecture Search
for Speech Deepfake and Spoofing Detection
###### Abstract
End-to-end approaches to anti-spoofing, especially those which operate
directly upon the raw signal, are starting to be competitive with their more
traditional counterparts. Until recently, all such approaches considered only
the learning of network parameters; the network architecture was still hand-crafted.
The architecture itself, however, can also be learned. Described in this paper is
our attempt to learn automatically the network architecture of a speech
deepfake and spoofing detection solution, while jointly optimising other
network components and parameters, such as the first convolutional layer which
operates on raw signal inputs. The resulting raw differentiable architecture
search system delivers a tandem detection cost function score of 0.0517 for
the ASVspoof 2019 logical access database, a result which is among the best
single-system results reported to date.
## 1 Introduction
End-to-end (E2E) solutions are attracting growing attention across a broad
range of speech processing tasks [1, 2, 3]. In contrast to the more common
approach whereby front-end feature extraction and the back-end classifier or
network are separately optimised, E2E solutions allow for pre-processing and
post-processing components to be combined within a single network. With both
components being encapsulated within a single model, front-end and back-end
components can be jointly optimised. In this case the front-end might have a
better chance of capturing more discriminative information for the task at
hand [4, 5, 6], whereas the back-end might be able to function more
effectively upon the information to produce more reliable scores.
Many solutions to anti-spoofing for automatic speaker verification have
focused upon the design of deep neural network (DNN) based back-end
classifiers. Most combine fixed, hand-crafted features, usually in the form of
some spectro-temporal decomposition [7, 8], with a convolutional neural
network (CNN) to learn higher-level representations. The literature shows that
the use of specially designed network modules [9, 10, 11] and loss functions
[12, 13, 14] generally leads to better performing models. Still, their
potential is fundamentally dependent upon the information captured in the
initial features; information lost in initial feature extraction cannot be
recovered. Several works have also shown that the performance of a given model
can vary substantially when fed with different features [9, 10, 12]. These
observations point toward the importance of learning and optimising not just
the higher-level representation, but also the initial features, in unison with
the classifier.
E2E solutions have been a focus of our research group for some time [15].
Fundamental to this pursuit is operation upon the raw signal. A recent attempt
[5] adopted the RawNet2 architecture [16, 17]. Using a bank of sinc-shaped
filters, it operates directly upon the raw audio waveform through time-domain
convolution, with the remaining network components being optimised in the
usual way. Results show that systems that use automatically learned features
are competitive and complementary to systems that use hand crafted features.
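The sinc-shaped filter idea can be sketched in a few lines (a minimal NumPy illustration, not the actual RawNet2 front-end: the filter count, kernel length, band-edge initialisation and windowing below are all illustrative assumptions):

```python
import numpy as np

def sinc_filter_bank(n_filters=4, kernel_size=129, sample_rate=16000):
    """Band-pass FIR filters h(t) = 2 f2 sinc(2 f2 t) - 2 f1 sinc(2 f1 t),
    Hamming-windowed; band edges here are spaced linearly for simplicity."""
    t = (np.arange(kernel_size) - kernel_size // 2) / sample_rate
    edges = np.linspace(30.0, sample_rate / 2 - 100.0, n_filters + 1)
    window = np.hamming(kernel_size)
    bank = [(2 * f2 * np.sinc(2 * f2 * t) - 2 * f1 * np.sinc(2 * f1 * t)) * window
            for f1, f2 in zip(edges[:-1], edges[1:])]
    return np.stack(bank)  # shape (n_filters, kernel_size)

# Time-domain convolution of each filter with a raw waveform; in a trained
# system the band edges f1, f2 would be the optimised parameters.
bank = sinc_filter_bank()
wave = np.random.default_rng(0).normal(size=16000)  # 1 s of stand-in "audio"
features = np.stack([np.convolve(wave, h, mode="valid") for h in bank])
```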
While these findings are encouraging, improvements to performance are perhaps
only modest. Despite the emphasis upon the E2E learning of both features and
classifier, one aspect of our model remains hand-crafted [5]. This is also the
case for every E2E solution proposed thus far [4, 6, 16]; the network
_parameters_ are learned, but the network _architecture_ is still hand-
crafted.
We have hence explored automatic approaches to learn the network architecture
as well. Our first attempt [18] was based upon a specific variant of
differentiable architecture search [19] known as partially-connected
differentiable architecture search (PC-DARTS) [20]. Architecture search is
performed using a pair of core network components referred to as cells. Cells
are defined by both architecture parameters and network parameters, both of
which are jointly optimised during the first of two stages referred to as the
_architecture search_ stage.
We showed [18] that PC-DARTS learns more compact models that are nonetheless
competitive with the state of the art. As the very first attempt to harness
the power of differentiable architecture search for anti-spoofing, this work
was performed with hand-crafted features. Our latest work has hence sought to
combine architecture search with fully E2E learning. In this paper, we present
Raw PC-DARTS. It is the first E2E speech deepfake and spoofing detection
solution which operates directly upon the raw waveform while allowing for the
joint optimisation of both the network architecture and network parameters.
The remainder of the paper is organised as follows. Section 2 introduces the
related works. The proposed system is described in Section 3. Reported in
Sections 4 and 5 are our experiments and results. Our conclusions are reported
in Section 6.
## 2 Related works
In this section we introduce the two stages of DARTS-based neural architecture
search (NAS) solutions [19, 20, 21], namely the architecture search stage using
partial connections [20] and the train-from-scratch stage.
The architecture search stage aims to determine a base component or building
block upon which the full model is constructed. This base component is
referred to as a cell. The term _architecture_ refers to the configuration of
nodes and interconnections within the cell.
Figure 1: An illustration of architecture search: (a) a neural cell with $N=5$
nodes; (b) an illustration of the candidate operations performed on each edge
that are optimised during architecture search; (c) resulting optimised cell
with $2$ inputs to each intermediate node.
As shown in Fig. 1, each cell has a pair of inputs:
$\mathbf{x}^{\left(1\right)}$ and $\mathbf{x}^{\left(2\right)}$. Cells have a
single output, denoted by $\mathbf{x}^{\left(N\right)}$ ($N=5$ in Fig. 1).
Nodes in between the inputs and output are referred to as intermediate nodes
($\mathbf{x}^{\left(3\right)}$ and $\mathbf{x}^{\left(4\right)}$ in Fig. 1).
Architecture search involves the selection of candidate operations $o$ from
search space $\mathcal{O}$ (solid coloured lines). Operations between
intermediate nodes and the output are fixed to concatenation operations (solid
black lines). Each intermediate node is calculated according to:
$\mathbf{x}^{\left(j\right)}=\sum_{i<j}o^{\left(i,j\right)}\left(\mathbf{x}^{\left(i\right)}\right)$
(1)
where $o^{\left(i,j\right)}$ is the operation performed on edge $(i,j)$
connecting $\mathbf{x}^{\left(i\right)}$ to $\mathbf{x}^{\left(j\right)}$.
During the architecture search stage, the full set of candidate operations is
active, with each operation assigned a weight $\alpha_{o}^{\left(i,j\right)}$. The
operation performed on edge $(i,j)$ is then defined as:
$\bar{o}^{\left(i,j\right)}\left(\mathbf{x}^{\left(i\right)}\right)=\sum_{o\in\mathcal{O}}\frac{\exp\left(\alpha_{o}^{\left(i,j\right)}\right)}{\sum_{o^{\prime}\in\mathcal{O}}\exp\left(\alpha_{o^{\prime}}^{\left(i,j\right)}\right)}\,o\left(\mathbf{x}^{\left(i\right)}\right)$
(2)
When architecture search is complete, only the single operation with the
highest weight $\alpha_{o}^{\left(i,j\right)}$ is retained. All other
operations are discarded; their weights are set to zero.
Because the set of operation weights
$\boldsymbol{\alpha}=\{\alpha^{\left(i,j\right)}\}$ is learnable, the
search process is a bi-level optimisation problem. We seek to determine the
weight parameters $\boldsymbol{\alpha}$ which minimise the validation loss
$L_{val}$, while the set of network parameters $\boldsymbol{\omega}$ is
determined by minimising the training loss
$L_{train}(\boldsymbol{\omega},\boldsymbol{\alpha})$:
$\displaystyle\min_{\boldsymbol{\alpha}}L_{val}(\boldsymbol{\omega}^{*},\boldsymbol{\alpha})$
(3)
$\displaystyle\text{s.t.}\;\;\boldsymbol{\omega}^{*}=\underset{\boldsymbol{\omega}}{\operatorname{argmin}}\;L_{train}(\boldsymbol{\omega},\boldsymbol{\alpha})$
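In its first-order approximation, the bi-level problem in Eq. (3) is solved by alternating gradient steps on the network parameters (training loss) and the architecture parameters (validation loss). The following toy sketch, with an invented one-parameter "network" weight $w$ and "architecture" weight $a$ on a model $y = a\,w\,x$, illustrates the alternating loop only; real DARTS applies the same pattern to full networks.

```python
# First-order DARTS-style alternating optimisation on a toy model
# y = a * w * x: "a" plays the role of an architecture weight and
# "w" a network weight, each updated on its own data split.

def grad_w(w, a, x, t):   # d/dw of (a*w*x - t)^2
    return 2.0 * (a * w * x - t) * a * x

def grad_a(w, a, x, t):   # d/da of (a*w*x - t)^2
    return 2.0 * (a * w * x - t) * w * x

w, a = 0.2, 0.3
x_tr, t_tr = 1.0, 1.0     # toy training example
x_va, t_va = 2.0, 2.0     # toy validation example
lr = 0.05
for _ in range(500):
    w -= lr * grad_w(w, a, x_tr, t_tr)   # network step on L_train
    a -= lr * grad_a(w, a, x_va, t_va)   # architecture step on L_val
# both losses are minimised when a * w = 1
```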
The bi-level optimisation process is demanding in terms of GPU memory and
computation. Partial channel connections [20] were proposed as a solution to
improve efficiency, reducing demands on both computation and memory. A binary
masking operator $\mathbf{S}^{\left(i,j\right)}$ is used in partially
connected (PC) DARTS in order to reduce the complexity of (2). The number of
active channels in $\mathbf{x}^{\left(i\right)}$ is reduced through either
selection (marked as $\mathbf{S}^{\left(i,j\right)}=1$) or masking (marked as
$\mathbf{S}^{\left(i,j\right)}=0$) according to:
$\bar{o}^{\left(i,j\right)}\left(\mathbf{x}^{\left(i\right)}\right)=\sum_{o\in\mathcal{O}}\frac{\exp\left(\alpha_{o}^{\left(i,j\right)}\right)}{\sum_{o^{\prime}\in\mathcal{O}}\exp\left(\alpha_{o^{\prime}}^{\left(i,j\right)}\right)}\,o\left(\mathbf{S}^{\left(i,j\right)}\odot\mathbf{x}^{\left(i\right)}\right)+\left(1-\mathbf{S}^{\left(i,j\right)}\right)\odot\mathbf{x}^{\left(i\right)}$
(4)
where $\odot$ denotes element-wise multiplication. In practice, only a
fraction $1/K_{C}$ of the channels in $\mathbf{x}^{\left(i\right)}$ is selected.
The factor $K_{C}$ is set as a hyper-parameter and acts to trade off
performance (smaller $K_{C}$) for efficiency (larger $K_{C}$).
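A minimal sketch of the partial channel connection in Eq. (4): a random subset of $1/K_C$ of the channels passes through the mixed operation while the remaining channels are copied through unchanged. The toy operations and equal architecture weights below are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def partial_channel_op(x, ops, alpha, K_C=2):
    """Sketch of Eq. (4): apply the mixed operation to 1/K_C of the
    channels (mask S = 1) and pass the rest through unchanged."""
    C = x.shape[0]
    S = np.zeros(C, dtype=bool)
    S[rng.choice(C, C // K_C, replace=False)] = True
    w = np.exp(alpha)
    w = w / w.sum()                       # softmax over operation weights
    mixed = sum(wi * op(x[S]) for wi, op in zip(w, ops))
    out = x.copy()                        # unselected channels: identity
    out[S] = mixed
    return out, S

ops = [lambda x: x, lambda x: 0.0 * x]    # toy ops: identity and "zero"
alpha = np.array([0.0, 0.0])              # equal weights for illustration
x = np.ones((8, 16))                      # (channels, time)
out, S = partial_channel_op(x, ops, alpha, K_C=2)
```

With equal weights, the selected channels become the average of the two operations (0.5 here) while the masked-out channels are untouched, which is exactly the efficiency trade-off: only half the channels flow through the expensive mixed operation.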
After architecture search, the cells are stacked multiple times (Fig. 2), in a
similar fashion to a ResNet architecture, to produce a deeper, more complex
model before being further optimised.
Figure 2: An illustration of train from scratch stage: normal cells (blue) and
reduction cells (yellow) are stacked to form a deeper network.
## 3 Raw PC-DARTS
In this section, we describe the proposed Raw PC-DARTS approach. The model
structure is detailed in Table LABEL:tab:model_structure. We describe the bank
of front-end sinc filters, the application of filter masking, the
modifications made to the back-end classifier design and base cell
architecture, embedding extraction and the loss function.
Table 1: The proposed network structure. Each cell receives the outputs of its two preceding cells/layers. Conv($k$, $s$, $c$) denotes a convolutional operation with kernel size $k$, stride $s$ and $c$ output channels. BN refers to batch normalisation.

Layer | Operations (input: 64000 samples) | Output shape
---|---|---
Sinc Filters | Conv(128, 1, 64), Maxpooling(3), BN & LeakyReLU | (21290, 64)
Conv_1 | Conv(3, 2, 64), BN & LeakyReLU | (10645, 64)
Normal Cells | {BN & LeakyReLU, Operations, Maxpooling(2)} $\times 2$ | (2661, 256)
Expand Cell | BN & LeakyReLU, Operations, Maxpooling(2) | (1330, 512)
Normal Cells | {BN & LeakyReLU, Operations, Maxpooling(2)} $\times 2$ | (332, 512)
Expand Cell | BN & LeakyReLU, Operations, Maxpooling(2) | (166, 1024)
Normal Cells | {BN & LeakyReLU, Operations, Maxpooling(2)} $\times 2$ | (41, 1024)
GRU | GRU(1024) | (1024)
Embedding | FC(1024) | (1024)
Output Score | P2SActivationLayer(2) | (2)
### 3.1 Sinc filters and masking
The input waveform is fixed to a duration of 4 seconds ($16000\times 4$
samples) either by concatenation or truncation of source audio data. Feature
extraction is performed using a set of $C$ sinc filters [1]. Each filter
performs time-domain convolution upon the input waveform. The impulse response
of each filter is defined according to:
$g[n,f_{1},f_{2}]=2f_{2}sinc(2\pi f_{2}n)-2f_{1}sinc(2\pi f_{1}n)$ (5)
where $f_{1}$ and $f_{2}$ are the cut-in and cut-off frequencies, and
$sinc(x)=sin(x)/x$ is the sinc function. The cut-in and cut-off frequencies
can be initialised according to any given frequency scale. Both $f_{1}$ and
$f_{2}$ are learnable model parameters, though we consider both learnable and
fixed configurations.
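Eq. (5) can be instantiated directly. The sketch below builds one band-pass impulse response with assumed cut-in/cut-off frequencies and filter length; the actual SincNet layer [1] additionally windows the response and learns $f_1$ and $f_2$ by backpropagation.

```python
import numpy as np

def sinc(x):
    # sinc(x) = sin(x)/x, with the removable singularity at x = 0 set to 1
    return np.where(x == 0, 1.0, np.sin(x) / np.where(x == 0, 1.0, x))

def bandpass_sinc(n, f1, f2):
    """Impulse response of Eq. (5): the difference of two low-pass sinc
    filters with cut-in f1 and cut-off f2 (frequencies normalised by
    the sampling rate)."""
    return 2 * f2 * sinc(2 * np.pi * f2 * n) - 2 * f1 * sinc(2 * np.pi * f1 * n)

L = 129                                   # assumed odd filter length
n = np.arange(L) - (L - 1) / 2            # symmetric sample indices
g = bandpass_sinc(n, f1=0.05, f2=0.15)    # example band (assumed values)
```

The response is symmetric about its centre, and its centre tap equals $2f_2 - 2f_1$, the pass-band width in normalised frequency.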
Filter masking is applied to mask a number of the sinc filters. This is akin
to channel drop-out [22, 23] and frequency masking [13, 24, 25] and acts to
encourage the learning of better generalised representations. In practice,
sinc filters in the range of $[C_{1},C_{2})$ are set to zero (masked), where
$C_{1}$ is the first masked filter selected at random and $C_{2}=C_{1}+f$. The
number of masked filters $f$ is chosen from a uniform distribution $[0,F)$,
where $F$ is a pre-defined maximum value. After $f$ is generated, $C_{1}$ is
then chosen from a uniform distribution $[0,C-f)$.
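The masking procedure described above amounts to the following sampling sketch: draw the band width $f$, then the starting index $C_1$, and zero the contiguous band. The uniform draws follow the text; the all-ones "filters" are placeholders for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def mask_filters(filters, F=16):
    """Zero a random contiguous band of sinc filters:
    f ~ U[0, F), then C1 ~ U[0, C - f)."""
    C = filters.shape[0]
    f = int(rng.integers(0, F))           # number of filters to mask
    C1 = int(rng.integers(0, C - f))      # first masked filter
    masked = filters.copy()
    masked[C1:C1 + f] = 0.0
    return masked, C1, f

filters = np.ones((64, 129))              # C = 64 filters of length 129
masked, C1, f = mask_filters(filters)
```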
### 3.2 Search space and cell architectures
In contrast to the approach described in [18] where input features can be seen
as a 2D image, operations in Raw PC-DARTS are performed directly upon the raw
time-domain waveform. Thus, the search space $\mathcal{O}$ is designed based
on 1D convolutional operations, which includes: standard convolution and
dilated convolution with kernel size {3, 5}; max pooling and average pooling
with kernel size {3}; skip connections; no connections.
The original DARTS approach searches for the architectures of two types of
cells, namely a normal cell and a reduction cell. The model is formed by
stacking these cells sequentially, with the reduction cells being placed at
$\frac{1}{3}$ and $\frac{2}{3}$ of the total network depth. While the normal
cell preserves the feature map dimension, the reduction cell reduces the
dimension by one-half, while the number of channels is doubled. A global
average pooling layer is then used after the stacked network to extract
embeddings.
This stacked cell design works well for spectro-temporal representations since
their dimensions are close to those used typically in image classification
tasks to which DARTS was first applied [26, 27]. For speech classification
tasks and for solutions that operate upon raw inputs, however, the feature
dimension remains large at the stacked cell output and the use of global
pooling will result in the substantial loss of information. While a larger
number of reduction cells can be added manually to help reduce the feature
dimension, this would defeat the purpose of searching the architecture
automatically. The introduction of each additional reduction cell also doubles
the number of channels, which in turn increases prohibitively both
computational complexity as well as demands upon GPU memory.
To address this problem in Raw PC-DARTS, we apply maxpooling to each cell
output to reduce the feature dimension by one-half. This simple, yet efficient
solution helps the model to learn a more compact, high-level representation,
without increasing the number of channels, thereby reducing computational
complexity and demands upon GPU memory. An added benefit is that the same
architecture depth and initial number of channels can be used for both
architecture search as well as train from scratch stages. The so-called _depth
gap_ [21, 28] is therefore avoided, where the searched operations may not fit
the deeper network in the second stage due to the depth mismatch between
architecture search and train from scratch stages. Thus, the cells used in Raw
PC-DARTS are referred to as a _normal_ cell and an _expand_ cell. Both cells
halve the input feature dimension, whereas only the expand cell doubles the
number of channels. Expand cells are placed at the same network depth as
reduction cells in the original DARTS approach.
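The maxpooling applied to each cell output is ordinary kernel-2, stride-2 pooling along the time axis, which halves the feature dimension without touching the channel count. A minimal sketch (not the authors' code):

```python
import numpy as np

def maxpool1d(x, k=2):
    """Kernel-k, stride-k max pooling along the last (time) axis,
    used here to halve the feature dimension at every cell output."""
    T = (x.shape[-1] // k) * k            # drop any trailing remainder
    return x[..., :T].reshape(*x.shape[:-1], T // k, k).max(axis=-1)

x = np.arange(12.0).reshape(1, 12)        # (channels, time)
y = maxpool1d(x)                          # time halved: 12 -> 6
```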
### 3.3 Embedding extraction and loss function
Frame-level representations produced by the final cell are fed to a gated
recurrent unit (GRU) layer to obtain utterance-level representations. These
representations are then fed to a fully connected layer which extracts the
embedding. We use mean-square error (MSE) for P2SGrad [12] as the loss
function. An activation layer is first applied to calculate the cosine
distance $\cos\theta$ between the input embedding and the class weight. As in
[29], this step is hyper-parameter-free, which reduces the sensitivity of
margin-based softmax towards its scale and angular margin parameter settings,
thus giving relatively consistent results. The network loss is the MSE between
$\cos\theta$ and the target class label. Scores used for performance
evaluation are $\cos\theta$ for the bona fide class.
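A minimal sketch of the scoring and loss computation described above: cosine similarity between an embedding and per-class weight vectors, followed by an MSE against the one-hot target. The embeddings and weights here are toy values, and the full P2SGrad formulation [12, 29] also prescribes the gradient form, which this sketch omits.

```python
import numpy as np

def cos_scores(emb, W):
    """Cosine similarity between an embedding and each class weight
    vector (the hyper-parameter-free activation layer)."""
    e = emb / np.linalg.norm(emb)
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    return Wn @ e

def p2sgrad_mse(emb, W, label):
    """MSE between cos(theta) and the one-hot class target."""
    cos = cos_scores(emb, W)
    target = np.eye(W.shape[0])[label]
    return np.mean((cos - target) ** 2), cos

# toy example: 2 classes (bona fide, spoof), 2-dimensional embeddings
W = np.array([[1.0, 0.0], [0.0, 1.0]])    # per-class weight vectors
emb = np.array([0.9, 0.1])                # toy bona fide embedding
loss, cos = p2sgrad_mse(emb, W, label=0)
score = cos[0]                            # bona fide cosine used as score
```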
## 4 Experiments
### 4.1 Database and metrics
All experiments were performed using the ASVspoof 2019 Logical Access (LA)
database [30] which comprises three independent partitions: train, development
and evaluation. Each partition is used in the same way reported in [18].
During architecture search, network parameters are updated using 50% of the
bona fide utterances and 50% of the spoofed utterances in the training
partition. Remaining data is used to update architecture parameters. The cell
architectures are selected from those which give the best classification
accuracy for the full development partition. During the train from scratch
stage, all network parameters, except those of the first convolutional layer,
are updated using the full training partition and the best model is selected
according to that which gives the best classification accuracy for the full
development partition. We report results according to two different metrics:
the pooled minimum normalised tandem detection cost function (min-tDCF) [31];
the pooled equal error rate (EER).
### 4.2 Implementation details
We experimented with 3 different sinc filter frequency scales: Mel, inverse-
Mel and linear [5]. We tested two settings in each case, namely _fixed_ and
_learnable_. Fixed scales are set and left unchanged for both architecture
search and train from scratch stages. Learnable scales are initialised in the
same way, but the configuration is updated during architecture search. They
are then fixed and left unchanged during the train from scratch stage. We also
tested a randomly initialised, learnable convolution block, denoted Conv_0, in
place of sinc filters. The kernel size, stride and number of output channels
of the Conv_0 system are set to the same values as for the systems that use
sinc filters. The maximum number of masked filters is set to $F=16$.
Following [18], the number of nodes in each cell is fixed to $N=7$ and the
number of intermediate node inputs is fixed to 2. Models comprise 8 cells (6
normal cells and 2 expand cells) with $C=64$ initial channels in both stages.
During architecture search, we perform 30 epochs of training. In the first 10
designated warm-up epochs, only network parameters are updated. Both
architecture parameters and network parameters are updated in the subsequent
20 epochs. In all cases, the batch size is set to 14 and learning is performed
using Adam optimisation. Architecture parameters are updated using a learning
rate of 6e-4 and a weight decay of 0.001. Network parameters are updated using
a learning rate of 5e-5. Partial channel selection is performed with a value
of $K_{C}=2$. During the train from scratch stage, all models are trained for
100 epochs with a batch size of 32. The initial learning rate of 5e-5 is
annealed down to 2e-5 following a cosine schedule.
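The cosine annealing of the learning rate from 5e-5 down to 2e-5 over the 100 train-from-scratch epochs can be written explicitly; this is the standard cosine schedule, and the authors' exact implementation may differ in details such as per-step versus per-epoch updates.

```python
import math

def cosine_lr(epoch, total=100, lr_max=5e-5, lr_min=2e-5):
    """Cosine schedule annealing lr_max -> lr_min over `total` epochs
    (values taken from Section 4.2)."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * epoch / total))

lrs = [cosine_lr(e) for e in range(101)]  # epoch 0 .. 100
```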
All models reported in this paper are trained once with the same random seed
on a single NVIDIA GeForce RTX 3090 GPU. Architecture search takes
approximately 21.5 hours, whereas the train from scratch process takes
approximately 9.5 hours. Results are reproducible with the same random seed
and GPU environment using the implementation available online at
https://github.com/eurecom-asp/raw-pc-darts-anti-spoofing.
## 5 Results
First we report a set of experiments which assess the performance of Raw PC-
DARTS when using different first layer sinc filter scales. Next, we present a
comparison of performance to existing state-of-the-art solutions. Finally, we
present an analysis of generalisability in terms of performance stability
across different spoofing attacks.
### 5.1 Raw PC-DARTS with different sinc scales
Table 2 shows results in terms of both the min t-DCF and EER for the ASVspoof
2019 LA evaluation partition. Results are shown for four different sinc scale
configurations: Mel; inverse-Mel; linear and with randomly initialised,
learnable convolution blocks — Conv_0. With the exception of Conv_0, results
in each case are shown for both fixed and learnable configurations.
The lowest min t-DCF of 0.0517 (EER of 1.77%) is obtained using fixed Mel
scale sinc filters. For both inverse-Mel and linear scales, learnable
configurations give better results than fixed configurations, with the
second-best result, a min t-DCF of 0.0583 (EER of 2.10%), being achieved using
a linear scale. While the Conv_0 system achieves a respectable EER of 2.49%, the min
t-DCF of 0.0733 is notably worse than that of the better performing
configurations.
The cell architectures for the best configuration (Mel-Fixed) are illustrated
in Fig. 3. We observed that, even though architecture parameters are randomly
initialised, after several warm-up epochs, those for dilated convolution
operations tend to dominate. This may indicated that, compared to other
candidate operations within the search space, dilated convolutions contribute
more to representation learning when applied to raw waveforms. Dilated
convolutions act to increase the receptive field [6, 32, 33]. The use of
greater contextual information then helps to improve performance.
Table 2: min t-DCF and EER results for the ASVspoof 2019 LA database, evaluation partition. Results are shown for different Raw PC-DARTS setups using different first-layer sinc scale initialisations.

Type | min-tDCF (Fixed) | EER (Fixed) | min-tDCF (Learnable) | EER (Learnable)
---|---|---|---|---
Mel | 0.0517 | 1.77 | 0.0899 | 3.62
Inverse-Mel | 0.0700 | 3.25 | 0.0655 | 2.80
Linear | 0.0926 | 3.29 | 0.0583 | 2.10
Conv_0 | $\times$ | $\times$ | 0.0733 | 2.49
(a) Normal cell
(b) Expand cell
Figure 3: An illustration of the normal (a) and expand (b) cells produced by
the architecture search stage for the Mel-Fixed Raw PC-DARTS configuration.
### 5.2 Comparison to competing systems
Table 3: A performance comparison between the proposed models and competing state-of-the-art systems reported in the literature. Results for the ASVspoof 2019 LA evaluation partition.

Systems | Features | min-tDCF | EER | Params | Worst attack | Worst EER
---|---|---|---|---|---|---
Res-TSSDNet [6] | waveform | 0.0482 | 1.64 | 0.35M | A17 | 6.01
Raw PC-DARTS Mel-F | waveform | 0.0517 | 1.77 | 24.48M | A08 | 4.96
ResNet18-LCML-FM [13] | LFB | 0.0520 | 1.81 | - | A17 | 6.19
LCNN-LSTM-sum [12] | LFCC | 0.0524 | 1.92 | 0.28M | A17 | 9.24
Capsule Network [34] | LFCC | 0.0538 | 1.97 | 0.30M | A17 | 3.76
Raw PC-DARTS Linear-L | waveform | 0.0583 | 2.10 | 24.40M | A08 | 6.23
ResNet18-OC-Softmax [14] | LFCC | 0.0590 | 2.19 | - | A17 | 9.22
Res2Net [10] | CQT | 0.0743 | 2.50 | 0.96M | - | -
ResNet18-AM-Softmax [14] | LFCC | 0.0820 | 3.26 | - | A17 | 13.45
ResNet18-GAT-T [11] | LFB | 0.0894 | 4.71 | - | A17 | 28.02
ResNet18-GAT-S [11] | LFB | 0.0914 | 4.48 | - | A17 | 21.74
PC-DARTS [18] | LFCC | 0.0914 | 4.96 | 7.51M | A17 | 30.20
RawNet2 [5] | waveform | 0.1294 | 4.66 | 25.43M | A18 | 16.30
Table 3 shows a comparison of results for the two best-performing Raw PC-DARTS
systems to those of the top-performing systems reported in the literature.
(The numbers of learnable parameters and the decomposed EER results for
Res-TSSDNet and LCNN-LSTM-sum were obtained using open-source code available
online; those for Capsule Network were provided by the authors of [34], and
those for ResNet18-GAT and RawNet2 were provided by the authors of [5, 11].) Among
the illustrated systems, four operate upon raw inputs, including the top two
systems, the first of which is the Res-TSSDNet system reported in [6] and the
second of which is the proposed Raw PC-DARTS. The fourth system which operates
on the raw waveform is the RawNet2 system reported in [5]. It also uses a
first layer of sinc filters, GRU and fully connected layer for embedding
extraction.
These results point toward the competitiveness of solutions that operate upon
the raw waveform, and also show that solutions whose cell architectures are
learned automatically can perform as well as, or better than, those that are
hand-crafted.
### 5.3 Complexity
The number of network parameters for the systems illustrated in Table 3 is
shown in column 5 (where such numbers are available). The two best Raw PC-
DARTS architectures have in excess of 24M parameters. For the Mel-Fixed
configuration, 77% (18.89M) of the learnable network parameters correspond to
GRU layers, whereas only 18% (4.52M) correspond to the stacked cells.
RawNet2 system, which also uses a GRU, has over 25M parameters. Other systems
have far fewer parameters, including the top Res-TSSDNet system which has
0.35M parameters. It uses ResNet-style 1D convolution blocks and 3 FC layers,
without GRUs. The use of dilated convolutions helps to control network
complexity while increasing the receptive field [6]. Though the LCNN-LSTM-sum
system uses two bidirectional LSTM layers, which is normally computationally
expensive, use of a hidden size of 48 nonetheless means that the complexity is
the lowest of all illustrated systems. The additional complexity of the Raw
PC-DARTS architecture is currently a limitation in the approach, yet a
compromise that might be acceptable given that learning and optimisation is a
one-step process requiring comparatively little human effort.
### 5.4 Worst case scenario
Generalisation has been a focus of anti-spoofing research since the inception
the ASVspoof initiative. It is well known that even top-performing systems can
struggle to detect the full range of spoofing attacks [35]. There is hence
interest in minimising not just pooled performance, but also that for the so-
called _worst case scenario_ which, for the ASVspoof 2019 LA database, is
generally the infamous A17 attack.
The worst case attack and corresponding EER for each system is shown in
columns 6 and 7 of Table 3. Here we see a distinct advantage of systems that
operate upon raw inputs. The Res-TSSDNet [6] and both Raw PC-DARTS systems
have among the lowest worst-case EERs. This observation indicates that the
waveform based systems can capture discriminative artefacts that are missed by
systems that use hand-crafted inputs. Were an adversary to discover the
attacks to which a system is most vulnerable and exploit only attacks of this
nature, then the Raw PC-DARTS countermeasures would offer the second-best
protection among all competing systems.
## 6 Conclusion
In this paper, we proposed an end-to-end differentiable architecture search
approach to speech deepfake and spoofing detection, named Raw PC-DARTS. We
show that the components of a deep network model, including pre-processing
operations, network architecture and parameters, can all be learned
automatically from raw waveform inputs and that the resulting system is
competitive with the state of the art.
While the best performance is obtained using a fixed front-end, rather than
with a learnable configuration, the latter is only marginally behind, while
both systems give among the best performance reported to date for the ASVspoof
2019 logical access database. The use of gated recurrent units means that the
resulting models are, however, substantially more complex than competing
systems and may exhibit some redundancies. While it may be possible to reduce
redundancy, and while the results reported in the paper are the first to show
the genuine potential of learned architectures, further work to tackle
complexity is required if they are to be competitive when computational
capacity is limited and is a design criterion, e.g. for embedded applications. One
avenue for future research in this direction is to evaluate the replacement of
gated recurrent units, with a number of parameters in the millions, with
concatenated fully connected layers with orders of magnitude fewer parameters.
We also observe that the Raw PC-DARTS solution generalises better to unseen
forms of spoofing attack than its hand-crafted counterparts. Performance
for the worst case A17 attack is notably better than that for competing
systems. We are currently working to understand what information or cues
missed by handcrafted solutions are captured successfully by fully learned
solutions. With answers to these questions, we may be able to combine the
benefits of both in order to improve reliability further while also protecting
complexity.
## 7 Acknowledgements
This work is supported by the TReSPAsS-ETN project funded from the European
Union’s Horizon 2020 research and innovation programme under the Marie
Skłodowska-Curie grant agreement No.860813. It is also supported by the
ExTENSoR project funded by the French Agence Nationale de la Recherche (ANR).
## References
* [1] M. Ravanelli and Y. Bengio, “Speaker recognition from raw waveform with sincnet,” IEEE Signal Processing Letters, pp. 1021–1028, 2018.
* [2] Y. Luo and N. Mesgarani, “Conv-TasNet: Surpassing ideal time–frequency magnitude masking for speech separation,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 27, no. 8, pp. 1256–1266, 2019.
* [3] D. Peter, W. Roth, and F. Pernkopf, “End-to-end keyword spotting using neural architecture search and quantization,” arXiv preprint arXiv:2104.06666, 2021.
* [4] H. Dinkel, N. Chen, Y. Qian, and K. Yu, “End-to-end spoofing detection with raw waveform CLDNNS,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017, pp. 4860–4864.
* [5] H. Tak, J. Patino, M. Todisco, A. Nautsch, N. Evans, and A. Larcher, “End-to-end anti-spoofing with RawNet2,” in Proc. ICASSP, 2021, pp. 6369–6373.
* [6] G. Hua, A. B.-j. Teoh, and H. Zhang, “Towards end-to-end synthetic speech detection,” IEEE Signal Processing Letters, 2021.
* [7] M. Todisco, H. Delgado, and N. Evans, “A new feature for automatic speaker verification anti-spoofing: Constant Q cepstral coefficients,” in Proc. Speaker Odyssey, 2016, pp. 283–290.
* [8] M. Todisco, X. Wang, V. Vestman, M. Sahidullah, H. Delgado, A. Nautsch, et al., “ASVspoof 2019: Future horizons in spoofed and fake audio detection,” in Proc. INTERSPEECH, 2019, pp. 1008–1012.
* [9] G. Lavrentyeva, S. Novoselov, A. Tseren, M. Volkova, A. Gorlanov, and A. Kozlov, “STC antispoofing systems for the ASVspoof2019 challenge,” in Proc. INTERSPEECH, 2019, pp. 1033–1037.
* [10] X. Li, N. Li, C. Weng, X. Liu, D. Su, D. Yu, and H. Meng, “Replay and synthetic speech detection with Res2Net architecture,” in Proc. ICASSP, 2021, pp. 6354–6358.
* [11] H. Tak, J.-w. Jung, J. Patino, M. Todisco, and N. Evans, “Graph attention networks for anti-spoofing,” Proc. INTERSPEECH, 2021.
* [12] X. Wang and J. Yamagishi, “A comparative study on recent neural spoofing countermeasures for synthetic speech detection,” Proc. INTERSPEECH, 2021.
* [13] T. Chen, A. Kumar, P. Nagarsheth, G. Sivaraman, and E. Khoury, “Generalization of audio deepfake detection,” in Proc. Speaker Odyssey, 2020, pp. 1–5.
* [14] Y. Zhang, F. Jiang, and Z. Duan, “One-class learning towards synthetic voice spoofing detection,” IEEE Signal Processing Letters, vol. 28, pp. 937–941, 2021.
* [15] G. Valenti, H. Delgado, M. Todisco, N. Evans, and L. Pilati, “An end-to-end spoofing countermeasure for automatic speaker verification using evolving recurrent neural networks,” in Proc. Speaker Odyssey, 2018, pp. 288–295.
* [16] J.-w. Jung, H.-s. Heo, J.-h. Kim, H.-j. Shim, and H.-j. Yu, “Rawnet: Advanced end-to-end deep neural network using raw waveforms for text-independent speaker verification,” in Proc. INTERSPEECH, 2019, pp. 1268–1272.
* [17] J.-w. Jung, S.-b. Kim, H.-j. Shim, J.-h. Kim, and H.-j. Yu, “Improved RawNet with feature map scaling for text-independent speaker verification using raw waveforms,” in Proc. INTERSPEECH, 2020, pp. 1496–1500.
* [18] W. Ge, M. Panariello, J. Patino, M. Todisco, and N. Evans, “Partially-connected differentiable architecture search for deepfake and spoofing detection,” in Proc. INTERSPEECH, 2021.
* [19] H. Liu, K. Simonyan, and Y. Yang, “DARTS: Differentiable architecture search,” in Proc. ICML 2019, 2019.
* [20] Y. Xu, L. Xie, X. Zhang, X. Chen, G. Qi, Q. Tian, and H. Xiong, “PC-DARTS: Partial channel connections for memory-efficient architecture search,” 8th International Conference on Learning Representations, ICLR, 2020.
* [21] X. Chen, L. Xie, J. Wu, and Q. Tian, “Progressive differentiable architecture search: Bridging the depth gap between search and evaluation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 1294–1303.
* [22] S. Cai, Y. Shu, G. Chen, B. C. Ooi, W. Wang, and M. Zhang, “Effective and efficient dropout for deep convolutional neural networks,” arXiv preprint arXiv:1904.03392, 2019.
* [23] S. Hou and Z. Wang, “Weighted channel dropout for regularization of deep convolutional neural network,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2019, vol. 33, pp. 8425–8432.
* [24] D. S. Park, W. Chan, Y. Zhang, C.-C. Chiu, B. Zoph, E. D. Cubuk, and Q. V. Le, “SpecAugment: A simple data augmentation method for automatic speech recognition,” in Proc. INTERSPEECH, 2019.
* [25] H. Wang, Y. Zou, and W. Wang, “SpecAugment++: A hidden space data augmentation method for acoustic scene classification,” Proc. INTERSPEECH, 2021.
* [26] J. Deng, W. Dong, R. Socher, L.-j. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 248–255.
* [27] A. Krizhevsky, G. Hinton, et al., “Learning multiple layers of features from tiny images,” 2009.
* [28] A. Yang, P. M. Esperança, and F. M. Carlucci, “NAS evaluation is frustratingly hard,” in International Conference on Learning Representations, 2020.
* [29] X. Zhang, R. Zhao, J. Yan, M. Gao, Y. Qiao, X. Wang, and H. Li, “P2sgrad: Refined gradients for optimizing deep face models,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 9906–9914.
* [30] X. Wang, J. Yamagishi, M. Todisco, H. Delgado, A. Nautsch, N. Evans, M. Sahidullah, V. Vestman, T. Kinnunen, K. A. Lee, et al., “ASVspoof 2019: A large-scale public database of synthesized, converted and replayed speech,” Computer Speech & Language, vol. 64, pp. 101114, 2020.
* [31] T. Kinnunen, H. Delgado, N. Evans, K. A. Lee, V. Vestman, A. Nautsch, M. Todisco, X. Wang, M. Sahidullah, J. Yamagishi, and D. A. Reynolds, “Tandem assessment of spoofing countermeasures and automatic speaker verification: Fundamentals,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 28, pp. 2195–2210, 2020.
* [32] K. Tan, J. Chen, and D. Wang, “Gated residual networks with dilated convolutions for supervised speech separation,” in Proc. ICASSP, 2018.
* [33] F. Yu and V. Koltun, “Multi-scale context aggregation by dilated convolutions,” in 4th International Conference on Learning Representations, ICLR, 2016.
* [34] A. Luo, E. Li, Y. Liu, X. Kang, and Z. J. Wang, “A capsule network based approach for detection of audio spoofing attacks,” in Proc. ICASSP, 2021.
* [35] A. Nautsch, X. Wang, N. Evans, T. Kinnunen, V. Vestman, M. Todisco, H. Delgado, M. Sahidullah, J. Yamagishi, and K. A. Lee, “ASVspoof 2019: spoofing countermeasures for the detection of synthesized, converted and replayed speech,” IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 3, no. 2, pp. 252–265, 2021.
# Channel-wise Topology Refinement Graph Convolution for Skeleton-Based Action
Recognition
Yuxin Chen1,2, Ziqi Zhang1,2, Chunfeng Yuan1, Bing Li1, Ying Deng4, Weiming
Hu1,3
1NLPR, Institute of Automation, Chinese Academy of Sciences
2School of Artificial Intelligence, University of Chinese Academy of Sciences
3CAS Center for Excellence in Brain Science and Intelligence Technology
4School of Aeronautical Manufacturing Engineering, Nanchang Hangkong
University
[email protected], {ziqi.zhang,cfyuan,bli,wmhu}@nlpr.ia.ac.cn
Corresponding author.
###### Abstract
Graph convolutional networks (GCNs) have been widely used and achieved
remarkable results in skeleton-based action recognition. In GCNs, graph
topology dominates feature aggregation and therefore is the key to extracting
representative features. In this work, we propose a novel Channel-wise
Topology Refinement Graph Convolution (CTR-GC) to dynamically learn different
topologies and effectively aggregate joint features in different channels for
skeleton-based action recognition. The proposed CTR-GC models channel-wise
topologies through learning a shared topology as a generic prior for all
channels and refining it with channel-specific correlations for each channel.
Our refinement method introduces few extra parameters and significantly
reduces the difficulty of modeling channel-wise topologies. Furthermore, via
reformulating graph convolutions into a unified form, we find that CTR-GC
relaxes strict constraints of graph convolutions, leading to stronger
representation capability. Combining CTR-GC with temporal modeling modules, we
develop a powerful graph convolutional network named CTR-GCN which notably
outperforms state-of-the-art methods on the NTU RGB+D, NTU RGB+D 120, and NW-
UCLA datasets.111 https://github.com/Uason-Chen/CTR-GCN.
## 1 Introduction
Human action recognition is an important task with various applications
ranging from human-robot interaction to video surveillance. In recent years,
skeleton-based human action recognition has attracted much attention due to
the development of depth sensors and its robustness against complicated
backgrounds.
Figure 1: Channel-wise topology refinement. Lines of different colors
correspond to topologies in different channels and the thickness of lines
indicates the correlation strength between joints.
Early deep-learning-based methods treat human joints as a set of independent
features and organize them into a feature sequence or a pseudo-image, which is
fed into RNNs or CNNs to predict action labels. However, these methods
overlook the inherent correlations between joints, which reveal the human body
topology and carry important information about the skeleton. Yan _et al_. [32]
first modeled correlations between human joints with graphs and applied GCNs
along with temporal convolutions to extract motion features. However, the
manually defined topology they employ can hardly model relationships between
joints that are not physically connected, which limits the representation
capability of GCNs. To boost the power of GCNs, recent approaches [24, 35, 34]
adaptively learn the topology of the human skeleton through attention or
other mechanisms. They use a single topology for all channels, which forces
GCNs to aggregate features with the same topology in different channels and thus
aggregate features with the same topology in different channels and thus
limits the flexibility of feature extraction. Since different channels
represent different types of motion features and correlations between joints
under different motion features are not always the same, it’s not optimal to
use one shared topology. Cheng _et al_. [3] set individual parameterized
topologies for channel groups. However, the topologies of different groups are
learned independently and the model becomes too heavy when setting channel-
wise parameterized topologies, which increases the difficulty of optimization
and hinders effective modeling of channel-wise topologies. Moreover,
parameterized topologies remain the same for all samples, which is unable to
model sample-dependent correlations.
In this paper, we propose a channel-wise topology refinement graph convolution
which models channel-wise topology dynamically and effectively. Instead of
learning topologies of different channels independently, CTR-GC learns
channel-wise topologies in a refinement way. Specifically, CTR-GC learns a
shared topology and channel-specific correlations simultaneously. The shared
topology is a parameterized adjacency matrix that serves as topological priors
for all channels and provides generic correlations between vertices. The
channel-specific correlations are dynamically inferred for each sample and
they capture subtle relationships between vertices within each channel. By
refining the shared topology with channel-specific correlations, CTR-GC
obtains channel-wise topologies (illustrated in Figure 1). Our refinement
method avoids modeling the topology of each channel independently and
introduces few extra parameters, which significantly reduces the difficulty of
modeling channel-wise topologies. Moreover, by reformulating four
categories of graph convolutions into a unified form, we verify that the
proposed CTR-GC essentially relaxes the strict constraints of other categories
of graph convolutions and improves representation capability.
Combining CTR-GC with temporal modeling modules, we construct a powerful graph
convolutional network named CTR-GCN for skeleton-based action recognition.
Extensive experimental results on NTU RGB+D, NTU RGB+D 120, and NW-UCLA show
that (1) our CTR-GC significantly outperforms other graph convolutions
proposed for skeleton-based action recognition with comparable parameters and
computation cost; (2) Our CTR-GCN exceeds state-of-the-art methods notably on
all three datasets.
Our contributions are summarized as follows:
* •
We propose a channel-wise topology refinement graph convolution which
dynamically models channel-wise topologies via a refinement approach, leading
to flexible and effective correlation modeling.
* •
We mathematically unify the form of existing graph convolutions in skeleton-
based action recognition and find that CTR-GC relaxes constraints of other
graph convolutions, providing more powerful graph modeling capability.
* •
The extensive experimental results highlight the benefits of channel-wise
topology and the refinement method. The proposed CTR-GCN outperforms state-of-
the-art methods significantly on three skeleton-based action recognition
benchmarks.
## 2 Related Work
### 2.1 Graph Convolutional Networks
Convolutional Neural Networks (CNNs) have achieved remarkable results in
processing Euclidean data like images. To process non-Euclidean data like
graphs, there is an increasing interest in developing Graph Convolutional
Networks (GCNs). GCNs are often categorized as spectral methods and spatial
methods. Spectral methods conduct convolution on spectral domain [1, 5, 11].
However, they depend on the Laplacian eigenbasis, which is tied to the graph
structure, and can thus only be applied to graphs with the same structure. Spatial
methods define convolutions directly on the graph [7, 21, 29]. One of the
challenges of spatial methods is to handle different sized neighborhoods.
Among different GCN variants, the GCN proposed by Kipf _et al_. [11] is widely
adapted to various tasks due to its simplicity. The feature update rule in
[11] consists of two steps: (1) Transform features into high-level
representations; and (2) Aggregate features according to graph topology. Our
work adopts the same feature update rule.
### 2.2 GCN-based Skeleton Action Recognition
GCNs have been successfully adopted to skeleton-based action recognition [20,
24, 32, 34, 36, 27] and most of them follow the feature update rule of [11].
Due to the importance of topology (namely vertex connection relationship) in
GCN, many GCN-based methods focus on topology modeling. According to the
difference of topology, GCN-based methods can be categorized as follows: (1)
According to whether the topology is dynamically adjusted during inference,
GCN-based methods can be classified into static methods and dynamic methods.
(2) According to whether the topology is shared across different channels,
GCN-based methods can be classified into topology-shared methods and topology-
non-shared methods.
Figure 2: Framework of the proposed channel-wise topology refinement graph
convolution. The channel-wise topology modeling refines the trainable shared
topology with inferred channel-specific correlations. The feature
transformation aims at transforming input features into high-level
representations. Eventually, the output feature is obtained by channel-wise
aggregation.
Static / Dynamic Methods. For static methods, the topologies of GCNs keep
fixed during inference. Yan _et al_. [32] proposed an ST-GCN which predefines
topology according to human body structure and the topology is fixed in both
training and testing phase. Liu _et al_. [20] and Huang _et al_. [9]
introduced multi-scale graph topologies to GCNs to enable multi-range joint
relationship modeling. For dynamic methods, the topologies of GCNs are
dynamically inferred during inference. Li _et al_. [15] proposed an A-links
inference module to capture action-specific correlations. Shi _et al_. [24]
and Zhang _et al_. [35] enhanced topology learning with self-attention
mechanism, which models correlation between two joints given corresponding
features. These methods infer correlations between two joints with local
features. Ye _et al_. [34] proposed a Dynamic GCN, where contextual features
of all joints are incorporated to learn correlations between any pairs of
joints. Compared with static methods, dynamic methods have stronger
generalization ability due to dynamic topologies.
Topology-shared / Topology-non-shared Methods. For topology-shared methods,
the static or dynamic topologies are shared in all channels. These methods
force GCNs to aggregate features in different channels with the same topology,
limiting the upper bound of model performance. Most GCN-based methods follow
topology-shared manner, including aforementioned static methods [9, 20, 32]
and dynamic methods [15, 24, 34, 35]. Topology-non-shared methods use
different topologies in different channels or channel groups, which naturally
overcome limitations of topology-shared methods. Cheng _et al_. [3] proposed a
DC-GCN which sets individual parameterized topologies for different channel
groups. However, the DC-GCN faces difficulty of optimization caused by
excessive parameters when setting channel-wise topologies. To the best of our
knowledge, topology-non-shared graph convolutions are rarely explored in
skeleton-based action recognition, and this work is the first to model dynamic
channel-wise topologies. Note that our method also belongs to the dynamic
methods, because topologies are dynamically inferred during inference.
## 3 Method
In this section, we first define related notations and formulate conventional
graph convolution. Then we elaborate our Channel-wise Topology Refinement
Graph Convolution (CTR-GC) and mathematically analyze the representation
capability of CTR-GC and other graph convolutions. Finally, we introduce the
structure of our CTR-GCN.
### 3.1 Preliminaries
Notations. A human skeleton is represented as a graph with joints as vertices
and bones as edges. The graph is denoted as $\mathcal{G=(V,E,X)}$, where
$\mathcal{V}=\\{v_{1},v_{2},\ldots,v_{N}\\}$ is the set of $N$ vertices.
$\mathcal{E}$ is the edge set, which is formulated as an adjacency matrix
$\mathbf{A}\in\mathbb{R}^{N\times N}$ and its element $a_{ij}$ reflects the
correlation strength between $v_{i}$ and $v_{j}$. The neighborhood of $v_{i}$
is represented as $\mathcal{N}(v_{i})=\\{v_{j}|a_{ij}\neq 0\\}$. $\mathcal{X}$
is the feature set of $N$ vertices, which is represented as a matrix
$\mathbf{X}\in\mathbb{R}^{N\times C}$ and $v_{i}$’s feature is represented as
$\mathbf{x_{i}}\in\mathbb{R}^{C}$.
Topology-shared Graph Convolution. A typical topology-shared graph
convolution uses the weight $\mathbf{W}$ for feature transformation and
aggregates the representations of $v_{i}$'s neighbor vertices through $a_{ij}$ to
update its representation $\mathbf{z_{i}}$, which is formulated as
$\vspace{-0.2cm}\mathbf{z_{i}}=\sum_{v_{j}\in\mathcal{N}(v_{i})}a_{ij}\mathbf{x_{j}}\mathbf{W}$
(1)
For static methods, $a_{ij}$ is defined manually or set as trainable
parameter. For dynamic methods, $a_{ij}$ is usually generated by the model
depending on the input sample.
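As a minimal numerical sketch of Equation 1 (NumPy, with made-up sizes; the authors' implementation is in PyTorch), the topology-shared update for all joints at once is just two matrix products: aggregate with the topology $\mathbf{A}$, transform with the shared weight $\mathbf{W}$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, C, C_out = 5, 8, 16          # joints, input channels, output channels (made up)

A = rng.random((N, N))          # adjacency matrix (topology), shared by all channels
X = rng.random((N, C))          # per-joint input features
W = rng.random((C, C_out))      # shared feature-transformation weight

# Equation 1 for all joints at once: z_i = sum_j a_ij x_j W
Z = A @ (X @ W)                 # (N, C_out)

# element-wise check against the per-joint sum of Equation 1
z0 = sum(A[0, j] * (X[j] @ W) for j in range(N))
assert np.allclose(Z[0], z0)
```

The two steps of [11] are visible here: `X @ W` transforms features, and the left-multiplication by `A` aggregates them according to the topology.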
### 3.2 Channel-wise Topology Refinement Graph Convolution
The general framework of our CTR-GC is shown in Figure 2. We first transform
input features into high-level features, then dynamically infer channel-wise
topologies to capture pairwise correlations between input sample’s joints
under different types of motion features, and aggregate features in each
channel with corresponding topology to get the final output. Specifically, our
CTR-GC contains three parts: (1) Feature transformation which is done by
transformation function $\mathcal{T}(\cdot)$; (2) Channel-wise topology
modeling which consists of correlation modeling function $\mathcal{M}(\cdot)$
and refinement function $\mathcal{R}(\cdot)$; (3) Channel-wise aggregation
which is completed by aggregation function $\mathcal{A}(\cdot)$. Given the
input feature $\mathbf{X}\in\mathbb{R}^{N\times C}$, the output
$\mathbf{Z}\in\mathbb{R}^{N\times C^{\prime}}$ of CTR-GC is formulated as
$\vspace{-0.1cm}\mathbf{Z}=\mathcal{A}\big{(}\mathcal{T}(\mathbf{X}),\mathcal{R}(\mathcal{M}(\mathbf{X}),\mathbf{A})\big{)},$
(2)
where $\mathbf{A}\in\mathbb{R}^{N\times N}$ is the learnable shared topology.
Next, we introduce these three parts in detail.
Feature Transformation. As shown in the orange block in Figure 2, feature
transformation aims at transforming input features into high-level
representations via $\mathcal{T}(\cdot)$. We adopt a simple linear
transformation here, as in the topology-shared graph convolution, which is
formulated as
$\vspace{-0.3cm}\mathbf{\widetilde{X}}=\mathcal{T}(\mathbf{X})=\mathbf{XW},$
(3)
where $\mathbf{\widetilde{X}}\in\mathbb{R}^{N\times C^{\prime}}$ is the
transformed feature and $\mathbf{W}\in\mathbb{R}^{C\times C^{\prime}}$ is the
weight matrix. Note that other transformations can also be used, _e.g_.,
multi-layer perceptron.
Channel-wise Topology Modeling. The channel-wise topology modeling is shown in
the blue block in Figure 2. The adjacency matrix $\mathbf{A}$ is used as the
shared topology for all channels and is learned through backpropagation. Moreover, we learn
channel-specific correlations $\mathbf{Q}\in\mathbb{R}^{N\times N\times
C^{\prime}}$ to model specific relationships between vertices in $C^{\prime}$
channels. Then the channel-wise topologies $\mathbf{R}\in\mathbb{R}^{N\times
N\times C^{\prime}}$ are obtained by refining the shared topology $\mathbf{A}$
with $\mathbf{Q}$.
Specifically, we first employ correlation modeling function
$\mathcal{M}(\cdot)$ to model channel-wise correlations between vertices. To
reduce computation cost, we utilize linear transformations $\phi$ and $\psi$
to reduce feature dimension before sending input features into
$\mathcal{M}(\cdot)$. Given a pair of vertices $(v_{i},v_{j})$ and their
corresponding features $(\mathbf{x_{i}},\mathbf{x_{j}})$, we design two simple
yet effective correlation modeling functions. The first correlation modeling
function $\mathcal{M}_{1}(\cdot)$ is formulated as
$\vspace{-0.16cm}\mathcal{M}_{1}(\psi(\mathbf{x_{i}}),\phi(\mathbf{x_{j}}))=\sigma(\psi(\mathbf{x_{i}})-\phi(\mathbf{x_{j}})),$
(4)
where $\sigma(\cdot)$ is an activation function. $\mathcal{M}_{1}(\cdot)$
essentially calculates distances between $\psi(\mathbf{x_{i}})$ and
$\phi(\mathbf{x_{j}})$ along channel dimension and utilizes the nonlinear
transformations of these distances as channel-specific topological
relationship between $v_{i}$ and $v_{j}$. The second correlation modeling
function $\mathcal{M}_{2}(\cdot)$ is formulated as
$\vspace{-0.16cm}\mathcal{M}_{2}(\psi(\mathbf{x_{i}}),\phi(\mathbf{x_{j}}))=MLP(\psi(\mathbf{x_{i}})||\phi(\mathbf{x_{j}})),$
(5)
where $||$ denotes concatenation and MLP is a multi-layer perceptron. We
utilize MLP here due to its powerful fitting capability.
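The pairwise subtraction in Equation 4 can be computed for all joint pairs at once with broadcasting. A NumPy sketch (made-up sizes; Tanh is used as $\sigma$, the choice of the final model in Section 4.3):

```python
import numpy as np

rng = np.random.default_rng(1)
N, c_r = 5, 4                                # joints, reduced channel dimension (made up)

psi_x = rng.random((N, c_r))                 # psi(x_i), already dimension-reduced
phi_x = rng.random((N, c_r))                 # phi(x_j)

# Equation 4: channel-wise distances for every joint pair (i, j), via broadcasting
M1 = np.tanh(psi_x[:, None, :] - phi_x[None, :, :])   # shape (N, N, c_r)

assert M1.shape == (N, N, c_r)
# M1 is generally asymmetric, i.e. q_ij != q_ji, as noted for Q below
assert not np.allclose(M1, M1.transpose(1, 0, 2))
```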
Based on the correlation modeling function, the channel-specific correlations
$\mathbf{Q}\in\mathbb{R}^{N\times N\times C^{\prime}}$ are obtained by
employing linear transformation $\xi$ to raise the channel dimension, which is
formulated as
$\vspace{-0.16cm}\mathbf{q_{ij}}=\xi\Big{(}\mathcal{M}\big{(}\psi(\mathbf{x_{i}}),\phi(\mathbf{x_{j}})\big{)}\Big{)},\
i,j\in\\{1,2,\cdots,N\\},$ (6)
where $\mathbf{q_{ij}}\in\mathbb{R}^{C^{\prime}}$ is a vector in $\mathbf{Q}$
and reflects the channel-specific topological relationship between $v_{i}$ and
$v_{j}$. Note that $\mathbf{Q}$ is not forced to be symmetric, _i.e_.,
$\mathbf{q_{ij}}\neq\mathbf{q_{ji}}$, which increases the flexibility of
correlation modeling.
Eventually, the channel-wise topologies $\mathbf{R}\in\mathbb{R}^{N\times
N\times C^{\prime}}$ are obtained by refining the shared topology $\mathbf{A}$
with channel-specific correlations $\mathbf{Q}$:
$\vspace{-0.16cm}\mathbf{R}=\mathcal{R}(\mathbf{Q},\mathbf{A})=\mathbf{A}+\alpha\cdot\mathbf{Q},$
(7)
where $\alpha$ is a trainable scalar to adjust the intensity of refinement.
The addition is conducted in a broadcast way where $\mathbf{A}$ is added to
each channel of $\alpha\times\mathbf{Q}$.
Channel-wise Aggregation. Given the refined channel-wise topologies
$\mathbf{R}$ and high-level features $\mathbf{\widetilde{X}}$, CTR-GC
aggregates features in a channel-wise manner. Specifically, CTR-GC constructs
a channel-graph for each channel with corresponding refined topology
$\mathbf{R_{c}}\in\mathbb{R}^{N\times N}$ and feature
$\mathbf{\tilde{x}_{:,c}}\in\mathbb{R}^{N\times 1}$, where $\mathbf{R_{c}}$
and $\mathbf{\tilde{x}_{:,c}}$ are taken from the $c$-th channel
of $\mathbf{R}$ and $\mathbf{\widetilde{X}}$, respectively
($c\in\\{1,\cdots,C^{\prime}\\}$). Each channel-graph reflects relationships
of vertices under a certain type of motion feature. Consequently, feature
aggregation is performed on each channel-graph, and the final output
$\mathbf{Z}$ is obtained by concatenating the output features of all channel-
graphs, which is formulated as
$\vspace{-0.15cm}\mathbf{Z}=\mathcal{A}(\mathbf{\widetilde{X},R})=[\mathbf{R_{1}}\mathbf{\tilde{x}_{:,1}}||\mathbf{R_{2}}\mathbf{\tilde{x}_{:,2}}||\cdots||\mathbf{R_{C^{\prime}}}\mathbf{\tilde{x}_{:,C^{\prime}}}],$
(8)
where $||$ denotes concatenation. During the whole process, the inference
of channel-specific correlations $\mathbf{Q}$ relies on input samples as shown
in Equation 6. Therefore, the proposed CTR-GC is a dynamic graph convolution
and it adaptively varies with different input samples.
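Putting Equations 3, 6, 7, and 8 together, one CTR-GC forward pass can be sketched in NumPy as follows (random weights and made-up sizes stand in for trained parameters; the real model operates on skeleton sequences in PyTorch, and the names `Wpsi`, `Wphi`, `Wxi` are illustrative stand-ins for $\psi$, $\phi$, $\xi$):

```python
import numpy as np

rng = np.random.default_rng(2)
N, C, C_out, c_r = 5, 8, 16, 4   # joints, in/out channels, reduced dim (made up)
alpha = 0.5                      # refinement intensity (trainable scalar in the model)

X = rng.random((N, C))
W = rng.random((C, C_out))                                # T: Equation 3
Wpsi, Wphi = rng.random((C, c_r)), rng.random((C, c_r))   # psi, phi (reduce dim)
Wxi = rng.random((c_r, C_out))                            # xi: raises dim, Equation 6
A = rng.random((N, N))                                    # learnable shared topology

X_t = X @ W                                                   # transformed features
M = np.tanh((X @ Wpsi)[:, None, :] - (X @ Wphi)[None, :, :])  # Equation 4, (N, N, c_r)
Q = M @ Wxi                                                   # Equation 6, (N, N, C_out)
R = A[:, :, None] + alpha * Q                                 # Equation 7, broadcast add

# Equation 8: aggregate each channel c with its own topology R[:, :, c]
Z = np.stack([R[:, :, c] @ X_t[:, c] for c in range(C_out)], axis=1)
assert Z.shape == (N, C_out)
```

Because `Q` is computed from the input `X`, rerunning this with a different sample yields different channel-wise topologies `R`, which is exactly the dynamic behaviour described above.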
### 3.3 Analysis of Graph Convolutions
We analyze the representation capability of different graph convolutions by
reformulating them into a unified form and comparing them with dynamic
convolution [2, 33] employed in CNNs.
We first recall dynamic convolution which enhances vanilla convolution with
dynamic weights. In dynamic convolution, each neighbor pixel $p_{j}$ of the
center pixel $p_{i}$ has a corresponding weight in the convolution kernel, and
the weight can be dynamically adjusted according to different input samples,
which makes the dynamic convolution have strong representation ability. The
dynamic convolution can be formulated as
$\vspace{-0.15cm}\mathbf{z_{i}^{k}}=\sum_{p_{j}\in\mathcal{N}(p_{i})}\mathbf{x_{j}^{k}}\mathbf{W_{j}^{k}},$
(9)
where $\mathbf{k}$ indicates the index of input sample. $\mathbf{x_{j}^{k}}$
and $\mathbf{z_{i}^{k}}$ are the input feature of $p_{j}$ and the output
feature of $p_{i}$ of the $\mathbf{k}$-th sample. $\mathbf{W_{j}^{k}}$ is the
dynamic weight.
Due to the irregular structure of the graph, the correspondence between
neighbor vertices and weights is difficult to establish. Thus, graph
convolutions (GCs) degrade convolution weights into adjacency weights (_i.e_.,
topology) and weights shared in the neighborhood. However, sharing weights in
the neighborhood limits representation capability of GCs. To analyze the gap
of representation ability between different GCs and dynamic convolution, we
integrate adjacency weights and weights shared in the neighborhood into a
generalized weight matrix $\mathbf{E^{k}_{ij}}$. Namely, we formulate all GCs
in the form of
$\mathbf{z_{i}^{k}}=\sum_{v_{j}\in\mathcal{N}(v_{i})}\mathbf{x_{j}^{k}}\mathbf{E^{k}_{ij}}$
where $\mathbf{E^{k}_{ij}}$ is the generalized weight. We classify GCs into
the four categories mentioned above.
Static Topology-shared GCs. In static topology-shared GCs, the topologies keep
fixed for different samples and are shared across all channels, which can be
formulated as
$\mathbf{z_{i}^{k}}=\sum_{v_{j}\in\mathcal{N}(v_{i})}a_{ij}\mathbf{x_{j}^{k}}\mathbf{W}=\sum_{v_{j}\in\mathcal{N}(v_{i})}\mathbf{x_{j}^{k}}(a_{ij}\mathbf{W}),$
(10)
where $a_{ij}\mathbf{W}$ is the generalized weight of static topology-shared
GC. From Equation 9 and 10, it can be seen that the difference between dynamic
convolution and static topology-shared GC lies in their (generalized) weights.
Specifically, the weight of dynamic convolution $\mathbf{W_{j}^{k}}$ is
individual for each $j$ and $k$, while the generalized weights of static
topology-shared GC are subject to the following constraints:
Constraint 1: $\mathbf{E^{k_{1}}_{ij}}$ and $\mathbf{E^{k_{2}}_{ij}}$ are
forced to be the same.
Constraint 2: $\mathbf{E^{k}_{ij_{1}}}$ and $\mathbf{E^{k}_{ij_{2}}}$ differ
by a scaling factor.
Note that $\mathbf{k_{1}},\mathbf{k_{2}}$ are different sample indices and
$\mathbf{j_{1}},\mathbf{j_{2}}$ are different neighbor vertex indices. These
constraints cause the gap of representation ability between static topology-
shared GCs and dynamic convolutions. Note that we concentrate on the
neighborhood rooted at $v_{i}$ and do not consider the change of $v_{i}$ for
simplicity.
Dynamic topology-shared GCs. Compared with static topology-shared GCs, the
dynamic ones infer topologies dynamically and thus have better generalization
ability. The formulation of dynamic topology-shared GCs is
$\vspace{-0.15cm}\mathbf{z_{i}^{k}}=\sum_{v_{j}\in\mathcal{N}(v_{i})}a_{ij}^{k}\mathbf{x_{j}^{k}}\mathbf{W}=\sum_{v_{j}\in\mathcal{N}(v_{i})}\mathbf{x_{j}^{k}}(a_{ij}^{k}\mathbf{W}),$
(11)
where $a_{ij}^{k}$ is the dynamic topological relationship between $v_{i}$ and
$v_{j}$ and depends on the input sample. It can be seen that the generalized
weights of dynamic topology-shared GCs still suffer from Constraint 2 but
relax Constraint 1 into the following constraint:
Constraint 3: $\mathbf{E^{k_{1}}_{ij}}$, $\mathbf{E^{k_{2}}_{ij}}$ differ by a
scaling factor.
Static topology-non-shared GCs. This kind of GC utilizes different topologies
for different channels (or channel groups). Here we only analyze static GCs with
channel-wise topologies because this is the most general form of static
topology-non-shared GCs and can degenerate into the others, _e.g_., static
group-wise-topology GCs. The specific formulation is
$\displaystyle\mathbf{z_{i}^{k}}$
$\displaystyle=\sum_{v_{j}\in\mathcal{N}(v_{i})}\mathbf{p_{ij}}\odot(\mathbf{x_{j}^{k}}\mathbf{W})$
(12)
$\displaystyle=\sum_{v_{j}\in\mathcal{N}(v_{i})}\mathbf{x_{j}^{k}}\big{(}[p_{ij1}\mathbf{w_{:,1}},\cdots,p_{ijC^{\prime}}\mathbf{w_{:,C^{\prime}}}]\big{)},\vspace{-0.5cm}$
(13)
where $\odot$ is element-wise multiplication and
$\mathbf{p_{ij}}\in\mathbb{R}^{C^{\prime}}$ is channel-wise topological
relationship between $v_{i}$, $v_{j}$. $p_{ijc}$ is the $c$-th element of
$\mathbf{p_{ij}}$. $\mathbf{w_{:,c}}$ is the $c$-th column of $\mathbf{W}$.
(We omit the derivation of Equation 12 and 13 for clarity. The details can be
found in the supplementary materials.) From Equation 13, we observe that
generalized weights of this kind of GCs suffer from Constraint 1 due to static
topology but relax Constraint 2 into the following constraint:
Constraint 4: Different corresponding columns of $\mathbf{E^{k}_{ij_{1}}}$ and
$\mathbf{E^{k}_{ij_{2}}}$ differ by different scaling factors.
Dynamic topology-non-shared GCs. The only difference between static and
dynamic topology-non-shared GCs is that the latter infer non-shared topologies
dynamically; thus dynamic topology-non-shared GCs can be formulated as
$\vspace{-0.15cm}\mathbf{z_{i}^{k}}=\sum_{v_{j}\in\mathcal{N}(v_{i})}\mathbf{x_{j}^{k}}\big{(}[r_{ij1}^{k}\mathbf{w_{:,1}},\cdots,r_{ijC^{\prime}}^{k}\mathbf{w_{:,C^{\prime}}}]\big{)},$
(14)
where $r_{ijc}^{k}$ is the $\mathbf{k}$-th sample’s dynamic topological
relationship between $v_{i}$, $v_{j}$ in the $c$-th channel. Obviously,
generalized weights of dynamic topology-non-shared graph convolution relax
both Constraint 1 and 2. Specifically, it relaxes Constraint 2 into Constraint
4 and relaxes Constraint 1 into the following constraint:
Constraint 5: Different corresponding columns of $\mathbf{E^{k_{1}}_{ij}}$ and
$\mathbf{E^{k_{2}}_{ij}}$ differ by different scaling factors.
Non-shared topology | Dynamic topology | C1 | C2 | C3 | C4 | C5 | Instance
---|---|---|---|---|---|---|---
✗ | ✗ | ✓ | ✓ | | | | ST-GC [32]
✗ | ✓ | | ✓ | ✓ | | | AGC [24], Dy-GC [34]
✓ | ✗ | ✓ | | | ✓ | | DC-GC [3]
✓ | ✓ | | | | ✓ | ✓ | CTR-GC (ours)
Table 1: Constraints on different categories of graph convolutions and
corresponding instances. The columns C1–C5 correspond to the five constraints.
Red, green, and blue respectively indicate relatively high, mid, and low
constraint strength.
We summarize the different categories of graph convolutions and their
constraints in Table 1. It can be seen that dynamic topology-non-shared GC is the least
constrained. Our CTR-GC belongs to dynamic topology-non-shared GC and Equation
8 can be reformulated to Equation 14, indicating that theoretically CTR-GC has
stronger representation capability than previous graph convolutions [3, 24,
32, 34]. The specific reformulation is shown in supplemental materials.
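The claim that Equation 8 is an instance of the generalized form $\mathbf{z_{i}^{k}}=\sum_{j}\mathbf{x_{j}^{k}}\mathbf{E^{k}_{ij}}$, with the $c$-th column of $\mathbf{E^{k}_{ij}}$ equal to $r_{ijc}\mathbf{w_{:,c}}$, can be checked numerically. A NumPy sketch with random tensors (sizes made up):

```python
import numpy as np

rng = np.random.default_rng(3)
N, C, C_out = 4, 6, 8

X = rng.random((N, C))
W = rng.random((C, C_out))
R = rng.random((N, N, C_out))        # channel-wise topologies r_ijc

Xt = X @ W                           # transformed features (Equation 3)
# Equation 8: channel-wise aggregation, channel c uses topology R[:, :, c]
Z8 = np.stack([R[:, :, c] @ Xt[:, c] for c in range(C_out)], axis=1)

# Equation 14: z_i = sum_j x_j E_ij with E_ij[:, c] = r_ijc * W[:, c]
Z14 = np.zeros((N, C_out))
for i in range(N):
    for j in range(N):
        E_ij = R[i, j][None, :] * W  # (C, C_out): column c scaled by r_ijc
        Z14[i] += X[j] @ E_ij

assert np.allclose(Z8, Z14)          # the two formulations agree
```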
### 3.4 Model Architecture
Based on CTR-GC, we construct a powerful graph convolutional network CTR-GCN
for skeleton-based action recognition. We set the neighborhood of each joint
as the entire human skeleton graph, which is proved to be more effective in
this task by previous work [4, 24]. The entire network consists of ten basic
blocks, followed by a global average pooling and a softmax classifier to
predict action labels. The numbers of channels for the ten blocks are
64-64-64-64-128-128-128-256-256-256. The temporal dimension is halved at the 5th
and 8th blocks by strided temporal convolutions. The basic block of our CTR-
GCN is shown in Figure 3 (a). Each block mainly consists of a spatial modeling
module, a temporal modeling module and residual connections.
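The block schedule above can be written down as a small config sketch (layer internals omitted; the 64-frame input length is the NTU setting from Section 4.2):

```python
# Channel widths and temporal strides of the ten CTR-GCN blocks as stated above.
channels = [64, 64, 64, 64, 128, 128, 128, 256, 256, 256]
strides = [2 if b in (5, 8) else 1 for b in range(1, 11)]  # halve T at blocks 5, 8

T = 64                       # NTU samples are resized to 64 frames (Section 4.2)
for s in strides:
    T //= s

assert T == 16               # temporal length after all ten blocks
assert len(channels) == 10 and channels[4] == 128 and channels[7] == 256
```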
Spatial Modeling. In a spatial modeling module, we use three CTR-GCs in
parallel to extract correlations between human joints and sum up their results
as output. For clarity, an instance of CTR-GC with $\mathcal{M}_{1}(\cdot)$ is
illustrated in Figure 3 (b). Our CTR-GC is designed to extract features of a
graph with input feature $\mathbf{X}\in\mathbb{R}^{N\times C}$. To adopt CTR-
GC to a skeleton graph sequence $\mathbf{S}\in\mathbb{R}^{T\times N\times C}$,
we pool $\mathbf{S}$ along temporal dimension and use pooled features to infer
channel-wise topologies. Specifically, CTR-GC first utilizes $\phi$ and $\psi$
with reduction rate $r$ to extract compact representations. Then temporal
pooling is used to aggregate temporal features. After that, CTR-GC conducts
pair-wise subtraction and activation following Equation 4. The channel
dimension of activation is then raised with $\xi$ to obtain channel-specific
correlations, which are used to refine the shared topology $\mathbf{A}$ to
obtain channel-wise topologies. Eventually, channel-wise aggregation
(implemented by batch matrix multiplication) is conducted in each skeleton
graph to obtain the output representation $\mathbf{S^{o}}$.
Temporal Modeling. To model actions with different duration, we design a
multi-scale temporal modeling module following [20]. The main difference is
that we use fewer branches, since too many branches slow down inference
speed. As shown in Figure 3 (a), this module contains four branches, each
containing a $1\times 1$ convolution to reduce channel dimension. The first
three branches contain two temporal convolutions with different dilations and
one MaxPool respectively following $1\times 1$ convolution. The results of
four branches are concatenated to obtain the output.
Figure 3: (a) The basic block of our CTR-GCN. (b) CTR-GC with correlation
modeling function $\mathcal{M}_{1}(\cdot)$ or $\mathcal{M}_{2}(\cdot)$.
## 4 Experiments
### 4.1 Datasets
NTU RGB+D. NTU RGB+D [22] is a large-scale human action recognition dataset
containing 56,880 skeleton action sequences. The action samples are performed
by 40 volunteers and categorized into 60 classes. Each sample contains an
action and is guaranteed to have at most 2 subjects, which is captured by
three Microsoft Kinect v2 cameras from different views concurrently. The
authors of this dataset recommend two benchmarks: (1) cross-subject (X-sub):
training data comes from 20 subjects, and testing data comes from the other 20
subjects. (2) cross-view (X-view): training data comes from camera views 2 and
3, and testing data comes from camera view 1.
NTU RGB+D 120. NTU RGB+D 120 [17] is currently the largest dataset with 3D
joint annotations for human action recognition; it extends NTU RGB+D with
an additional 57,367 skeleton sequences over 60 extra action classes. In total,
113,945 samples over 120 classes are performed by 106 volunteers and captured
with three camera views. This dataset contains 32 setups, each denoting a
specific location and background. The authors of this dataset recommend two
benchmarks: (1) cross-subject (X-sub): training data comes from 53 subjects,
and testing data comes from the other 53 subjects. (2) cross-setup (X-setup):
training data comes from samples with even setup IDs, and testing data comes
from samples with odd setup IDs.
Northwestern-UCLA. Northwestern-UCLA dataset [31] is captured by three Kinect
cameras simultaneously from multiple viewpoints. It contains 1494 video clips
covering 10 action categories. Each action is performed by 10 different
subjects. We follow the same evaluation protocol in [31]: training data from
the first two cameras, and testing data from the other camera.
### 4.2 Implementation Details
All experiments are conducted on one RTX 2080 TI GPU with the PyTorch deep
learning framework. Our models are trained with SGD with momentum 0.9 and weight
decay 0.0004. The number of training epochs is set to 65, and a warmup strategy [8] is
used in the first 5 epochs to make the training procedure more stable.
The learning rate is set to 0.1 and decays by a factor of 0.1 at epochs 35 and 55.
For NTU RGB+D and NTU RGB+D 120, the batch size is 64, each sample is resized
to 64 frames, and we adopt the data pre-processing in [35]. For Northwestern-
UCLA, the batch size is 16, and we adopt the data pre-processing in [4].
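A sketch of the resulting learning-rate schedule (the linear warmup ramp is an assumption; the text only states that the warmup strategy of [8] is used for the first 5 epochs):

```python
def learning_rate(epoch, base_lr=0.1, warmup_epochs=5, milestones=(35, 55), gamma=0.1):
    """Step schedule with warmup, per Section 4.2. Linear ramp is assumed."""
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs  # assumed linear warmup
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma                               # decay by 0.1 at epochs 35, 55
    return lr

assert abs(learning_rate(0) - 0.02) < 1e-12    # warmup start
assert abs(learning_rate(10) - 0.1) < 1e-12    # full rate after warmup
assert abs(learning_rate(40) - 0.01) < 1e-12   # after first decay
assert abs(learning_rate(60) - 0.001) < 1e-12  # after second decay
```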
### 4.3 Ablation Study
In this section, we analyze the proposed channel-wise topology refinement
graph convolution and its configuration on the X-sub benchmark of the NTU
RGB+D 120 dataset.
Effectiveness of CTR-GC.
Methods | Param. | Acc (%)
---|---|---
Baseline | 1.22M | 83.4
+2 CTR-GC | 1.26M | 84.2 ↑0.8
+5 CTR-GC | 1.35M | 84.7 ↑1.3
CTR-GCN w/o Q | 1.22M | 83.7 ↑0.3
CTR-GCN w/o A | 1.46M | 84.0 ↑0.6
CTR-GCN | 1.46M | 84.9 ↑1.5
Table 2: Comparisons of accuracies when adding CTR-GCs gradually and removing
A or Q from CTR-GCN.
We employ ST-GCN [32] as the baseline, which belongs to static topology-shared
graph convolution and the topology is untrainable. We further add residual
connections in ST-GCN as our basic block and replace its temporal convolution
with temporal modeling module described in Section 3.4 for fair comparison.
The experimental results are shown in Table 2. First, we gradually replace GCs
with CTR-GCs (shown in Figure 3 (b) and $r=8$) in the baseline. We observe
that accuracies increase steadily and the accuracy is substantially improved
when all GCs are replaced by CTR-GCs (CTR-GCN), which validates the
effectiveness of CTR-GC.
Then we validate the effects of the shared topology A and the channel-specific
correlations Q by removing each of them from CTR-GCN in turn. CTR-GCN
w/o Q shares a trainable topology across different channels. We observe that
its performance drops by 1.2% compared with CTR-GCN, indicating the importance of
modeling channel-wise topologies. The performance of CTR-GCN w/o A drops by 0.9%,
confirming that it is hard to model an individual topology for each channel
directly, and that topology refinement provides an effective way to solve this
problem.
Configuration Exploration.
Methods | $\boldsymbol{\mathcal{M}}$ | $\boldsymbol{r}$ | $\boldsymbol{\sigma}$ | Param. | Acc (%)
---|---|---|---|---|---
Baseline | - | - | - | 1.21M | 83.4
A | $\mathcal{M}_{1}^{+}$ | 8 | Tanh | 1.46M | 84.9↑1.5
B | $\mathcal{M}_{1}$ | 8 | Tanh | 1.46M | 84.9↑1.5
C | $\mathcal{M}_{2}$ | 8 | Tanh | 1.48M | 84.8↑1.4
D | $\mathcal{M}_{1}$ | 4 | Tanh | 1.69M | 84.8↑1.4
E | $\mathcal{M}_{1}$ | 16 | Tanh | 1.34M | 84.5↑1.1
F | $\mathcal{M}_{1}$ | 8 | Sig | 1.46M | 84.6↑1.2
G | $\mathcal{M}_{1}$ | 8 | ReLU | 1.46M | 84.8↑1.4
Table 3: Comparisons of the validation accuracy of CTR-GC with different
settings.
We explore different configurations of CTR-GC, including the choice of the correlation modeling function $\mathcal{M}$, the reduction rate $r$ of $\phi$ and $\psi$, and the activation function $\sigma$ of the correlation modeling function. As shown in Table 3, models under all configurations outperform the baseline, confirming the robustness of CTR-GC. (1) Comparing models A, B and C, we find that models with different correlation modeling functions all achieve good performance, which indicates that channel-wise topology refinement is a generic idea compatible with many different correlation modeling functions ($\mathcal{M}_{1}^{+}$ replaces the subtraction in $\mathcal{M}_{1}$ with addition). (2) Comparing models B, D and E, models with $r=4,8$ (models D, B) achieve better results, and the model with $r=8$ (model B) performs slightly better with fewer parameters. Model E with $r=16$ performs worse because too few channels are used in the correlation modeling function, which is insufficient to model channel-specific correlations effectively. (3) Comparing models B, F and G, Sigmoid and ReLU perform worse than Tanh; we argue that the non-negative outputs of Sigmoid and ReLU constrain the flexibility of correlation modeling.
Considering performance and efficiency, we choose model B as our final model.
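The channel-wise refinement explored above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the released implementation: the temporal axis and the feature-transform branch are omitted, the output channel count is kept equal to $C$ for simplicity, and the names (`W_phi`, `W_psi`, `W_xi`, `alpha`) are placeholders for the learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
N, C, Cr = 5, 16, 2                  # joints, channels, reduced channels (C // r with r = 8)

X = rng.standard_normal((N, C))      # per-joint input features
A = rng.standard_normal((N, N))      # shared (trainable) topology
W_phi = rng.standard_normal((C, Cr)) # phi: channel-reduction transform (placeholder weights)
W_psi = rng.standard_normal((C, Cr)) # psi: channel-reduction transform (placeholder weights)
W_xi = rng.standard_normal((Cr, C))  # xi: raise correlations back to C channels
alpha = 0.5                          # refinement strength (placeholder for the trainable scalar)

# M1(x_i, x_j) = tanh(phi(x_i) - psi(x_j)): pairwise, channel-wise correlations Q
Q = np.tanh((X @ W_phi)[:, None, :] - (X @ W_psi)[None, :, :])  # (N, N, Cr)
Q = Q @ W_xi                                                    # (N, N, C)

# Channel-wise refined topologies: R_c = A + alpha * Q_c
R = A[:, :, None] + alpha * Q                                   # (N, N, C)

# Per-channel aggregation: z[:, c] = R[:, :, c] @ X[:, c]
Z = np.einsum('ijc,jc->ic', R, X)
print(Z.shape)  # (5, 16)
```

The Tanh nonlinearity matches configuration B (the final model) in Table 3; swapping it for Sigmoid or ReLU reproduces configurations F and G.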
### 4.4 Comparison with Other GCs
Non-shared | Dynamic | Methods | Param. | FLOPs | Acc (%)
---|---|---|---|---|---
✗ | ✗ | ST-GC [32] | 1.22M | ~1.65G | 83.4
✗ | ✓ | AGC [24] | 1.55M | ~2.11G | 83.9
✗ | ✓ | Dy-GC [34] | 1.73M | ~1.66G | 83.9
✓ | ✗ | DC-GC [3] | 1.51M | ~1.65G | 84.2
✓ | ✗ | DC-GC*[3] | 3.37M | ~1.65G | 84.0
✓ | ✓ | CTR-GC | 1.46M | ~1.97G | 84.9
Table 4: Comparisons of CTR-GC with other graph convolutions. The first two
columns show the categories of graph convolutions.
To validate the effectiveness of our CTR-GC, we compare the performance, parameter count and computation cost of CTR-GC against other graph convolutions in Table 4. Specifically, we keep the backbone of the baseline model and only replace the graph convolutions for a fair comparison. Note that DC-GC splits channels into 16 groups and sets a trainable adjacency matrix for each group, while DC-GC* sets a trainable adjacency matrix for each channel. From Table 4, we observe that (1) on the whole, topology-non-shared methods achieve better performance than topology-shared methods, and dynamic methods perform better than static methods, indicating the importance of modeling non-shared and dynamic topologies; (2) compared with DC-GC, DC-GC* performs worse while having far more parameters, confirming that it is not effective to model channel-wise topologies with parameterized adjacency matrices alone; (3) CTR-GC outperforms DC-GC* by 0.9%, showing that our refinement approach is effective for modeling channel-wise topologies. Moreover, CTR-GC introduces few extra parameters and little extra computation compared with other graph convolutions.
Figure 4: (a) The shared topology. (b) and (c) The refined channel-wise
topologies of different channels.
### 4.5 Visualization of Learned Topologies
We illustrate the shared topology and the refined channel-wise topologies of an action sample, “typing on the keyboard”, in Figure 4. Values close to 0 indicate weak relationships between joints, and larger values indicate stronger ones. We observe that (1) the shared topology differs from the refined channel-wise topologies, indicating that our method can effectively refine the shared topology; (2) the refined channel-wise topologies differ from each other, demonstrating that our method can learn individual topologies for different channels depending on their specific motion features; (3) some correlations are consistently strong across all channels, indicating that these joint pairs are strongly relevant in general, _e.g_., the correlation between the left elbow and the left-hand tip (blue square in the green box), and the correlation between the left-hand tip and the left wrist (red square in the green box). This is reasonable for “typing on the keyboard”, where the main motion happens at the hands.
### 4.6 Comparison with the State-of-the-Art
Methods | X-Sub (%) | X-Set (%)
---|---|---
ST-LSTM[18] | 55.7 | 57.9
GCA-LSTM[19] | 61.2 | 63.3
RotClips+MTCNN[10] | 62.2 | 61.8
SGN[35] | 79.2 | 81.5
2s-AGCN[24] | 82.9 | 84.9
Shift-GCN[4] | 85.9 | 87.6
DC-GCN+ADG[3] | 86.5 | 88.1
MS-G3D[20] | 86.9 | 88.4
PA-ResGCN-B19 [26] | 87.3 | 88.3
Dynamic GCN [34] | 87.3 | 88.6
CTR-GCN (Bone Only) | 85.7 | 87.5
CTR-GCN (Joint+Bone) | 88.7 | 90.1
CTR-GCN | 88.9 | 90.6
Table 5: Classification accuracy comparison against state-of-the-art methods on the NTU RGB+D 120 dataset.
Methods | X-Sub (%) | X-View (%)
---|---|---
Ind-RNN[16] | 81.8 | 88.0
HCN[14] | 86.5 | 91.1
ST-GCN[32] | 81.5 | 88.3
2s-AGCN[24] | 88.5 | 95.1
SGN[35] | 89.0 | 94.5
AGC-LSTM[25] | 89.2 | 95.0
DGNN[23] | 89.9 | 96.1
Shift-GCN[4] | 90.7 | 96.5
DC-GCN+ADG[3] | 90.8 | 96.6
PA-ResGCN-B19 [26] | 90.9 | 96.0
DDGCN[12] | 91.1 | 97.1
Dynamic GCN[34] | 91.5 | 96.0
MS-G3D[20] | 91.5 | 96.2
CTR-GCN | 92.4 | 96.8
Table 6: Classification accuracy comparison against state-of-the-art methods on the NTU RGB+D dataset.
Methods | Top-1 (%)
---|---
Lie Group[28] | 74.2
Actionlet ensemble[30] | 76.0
HBRNN-L[6] | 78.5
Ensemble TS-LSTM[13] | 89.2
AGC-LSTM[25] | 93.3
Shift-GCN[4] | 94.6
DC-GCN+ADG[3] | 95.3
CTR-GCN | 96.5
Table 7: Classification accuracy comparison against state-of-the-art methods
on the Northwestern-UCLA dataset.
Many state-of-the-art methods employ a multi-stream fusion framework. We adopt the same framework as [4, 34] for a fair comparison. Specifically, we fuse the results of four modalities, _i.e_., joint, bone, joint motion, and bone motion.
We compare our models with the state-of-the-art methods on NTU RGB+D 120, NTU RGB+D and NW-UCLA in Tables 5, 6 and 7 respectively. On all three datasets, our method outperforms the existing methods under nearly all evaluation benchmarks. On NTU-RGB+D 120, our model with joint-bone fusion already achieves state-of-the-art performance, and the full CTR-GCN outperforms the current state of the art, Dynamic GCN [34], by 1.6% and 2.0% on the two benchmarks respectively. Notably, our method is the first to model channel-wise topologies dynamically, which proves very effective for skeleton-based action recognition.
## 5 Conclusion
In this work, we present a novel channel-wise topology refinement graph convolution (CTR-GC) for skeleton-based action recognition. CTR-GC learns channel-wise topologies through refinement, which yields powerful correlation modeling capability. Both mathematical analysis and experimental results demonstrate that CTR-GC has stronger representation capability than other graph convolutions. The proposed CTR-GCN outperforms state-of-the-art methods on three datasets.
Acknowledgment This work is supported by the National Key R&D Plan
(No.2018YFC0823003), Beijing Natural Science Foundation (No.L182058), the
Natural Science Foundation of China (No.61972397,62036011,61721004), the Key
Research Program of Frontier Sciences, CAS (No.QYZDJ-SSW-JSC040), National
Natural Science Foundation of China (No.U2033210).
## References
* [1] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203, 2013.
* [2] Yinpeng Chen, Xiyang Dai, Mengchen Liu, Dongdong Chen, Lu Yuan, and Zicheng Liu. Dynamic convolution: Attention over convolution kernels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11030–11039, 2020.
* [3] Ke Cheng, Yifan Zhang, Congqi Cao, Lei Shi, Jian Cheng, and Hanqing Lu. Decoupling gcn with dropgraph module for skeleton-based action recognition. In Proceedings of the European Conference on Computer Vision (ECCV), 2020.
* [4] Ke Cheng, Yifan Zhang, Xiangyu He, Weihan Chen, Jian Cheng, and Hanqing Lu. Skeleton-based action recognition with shift graph convolutional network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 183–192, 2020.
* [5] Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in neural information processing systems, pages 3844–3852, 2016.
* [6] Yong Du, Wei Wang, and Liang Wang. Hierarchical recurrent neural network for skeleton based action recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1110–1118, 2015.
* [7] David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in neural information processing systems, pages 2224–2232, 2015.
* [8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
* [9] Zhen Huang, Xu Shen, Xinmei Tian, Houqiang Li, Jianqiang Huang, and Xian-Sheng Hua. Spatio-temporal inception graph convolutional networks for skeleton-based action recognition. In Proceedings of the 28th ACM International Conference on Multimedia, pages 2122–2130, 2020.
* [10] Qiuhong Ke, Mohammed Bennamoun, Senjian An, Ferdous Sohel, and Farid Boussaid. Learning clip representations for skeleton-based 3d action recognition. IEEE Transactions on Image Processing, 27(6):2842–2855, 2018.
* [11] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
* [12] Matthew Korban and Xin Li. Ddgcn: A dynamic directed graph convolutional network for action recognition. In European Conference on Computer Vision, pages 761–776. Springer, 2020.
* [13] Inwoong Lee, Doyoung Kim, Seoungyoon Kang, and Sanghoon Lee. Ensemble deep learning for skeleton-based action recognition using temporal sliding lstm networks. In Proceedings of the IEEE international conference on computer vision, pages 1012–1020, 2017.
* [14] Chao Li, Qiaoyong Zhong, Di Xie, and Shiliang Pu. Co-occurrence feature learning from skeleton data for action recognition and detection with hierarchical aggregation. arXiv preprint arXiv:1804.06055, 2018.
* [15] Maosen Li, Siheng Chen, Xu Chen, Ya Zhang, Yanfeng Wang, and Qi Tian. Actional-structural graph convolutional networks for skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3595–3603, 2019.
* [16] Shuai Li, Wanqing Li, Chris Cook, Ce Zhu, and Yanbo Gao. Independently recurrent neural network (indrnn): Building a longer and deeper rnn. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5457–5466, 2018.
  * [17] Jun Liu, Amir Shahroudy, Mauricio Lisboa Perez, Gang Wang, Ling-Yu Duan, and Alex Kot Chichung. Ntu rgb+d 120: A large-scale benchmark for 3d human activity understanding. IEEE transactions on pattern analysis and machine intelligence, 2019.
* [18] Jun Liu, Amir Shahroudy, Dong Xu, and Gang Wang. Spatio-temporal lstm with trust gates for 3d human action recognition. In European conference on computer vision, pages 816–833. Springer, 2016.
* [19] Jun Liu, Gang Wang, Ling-Yu Duan, Kamila Abdiyeva, and Alex C Kot. Skeleton-based human action recognition with global context-aware attention lstm networks. IEEE Transactions on Image Processing, 27(4):1586–1599, 2017.
* [20] Ziyu Liu, Hongwen Zhang, Zhenghao Chen, Zhiyong Wang, and Wanli Ouyang. Disentangling and unifying graph convolutions for skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 143–152, 2020.
* [21] Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In International conference on machine learning, pages 2014–2023, 2016.
  * [22] Amir Shahroudy, Jun Liu, Tian-Tsong Ng, and Gang Wang. Ntu rgb+d: A large scale dataset for 3d human activity analysis. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1010–1019, 2016.
* [23] Lei Shi, Yifan Zhang, Jian Cheng, and Hanqing Lu. Skeleton-based action recognition with directed graph neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7912–7921, 2019.
* [24] Lei Shi, Yifan Zhang, Jian Cheng, and Hanqing Lu. Two-stream adaptive graph convolutional networks for skeleton-based action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 12026–12035, 2019.
* [25] Chenyang Si, Wentao Chen, Wei Wang, Liang Wang, and Tieniu Tan. An attention enhanced graph convolutional lstm network for skeleton-based action recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1227–1236, 2019.
* [26] Yi-Fan Song, Zhang Zhang, Caifeng Shan, and Liang Wang. Stronger, faster and more explainable: A graph convolutional baseline for skeleton-based action recognition. In Proceedings of the 28th ACM International Conference on Multimedia, pages 1625–1633, 2020.
* [27] Yansong Tang, Yi Tian, Jiwen Lu, Peiyang Li, and Jie Zhou. Deep progressive reinforcement learning for skeleton-based action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5323–5332, 2018.
* [28] Vivek Veeriah, Naifan Zhuang, and Guo-Jun Qi. Differential recurrent neural networks for action recognition. In Proceedings of the IEEE international conference on computer vision, pages 4041–4049, 2015.
* [29] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
* [30] Jiang Wang, Zicheng Liu, Ying Wu, and Junsong Yuan. Learning actionlet ensemble for 3d human action recognition. IEEE transactions on pattern analysis and machine intelligence, 36(5):914–927, 2013.
* [31] Jiang Wang, Xiaohan Nie, Yin Xia, Ying Wu, and Song-Chun Zhu. Cross-view action modeling, learning and recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2649–2656, 2014.
* [32] Sijie Yan, Yuanjun Xiong, and Dahua Lin. Spatial temporal graph convolutional networks for skeleton-based action recognition. arXiv preprint arXiv:1801.07455, 2018.
* [33] Brandon Yang, Gabriel Bender, Quoc V. Le, and Jiquan Ngiam. Condconv: Conditionally parameterized convolutions for efficient inference. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 1305–1316, 2019.
* [34] Fanfan Ye, Shiliang Pu, Qiaoyong Zhong, Chao Li, Di Xie, and Huiming Tang. Dynamic gcn: Context-enriched topology learning for skeleton-based action recognition. In Proceedings of the 28th ACM International Conference on Multimedia, pages 55–63, 2020.
* [35] Pengfei Zhang, Cuiling Lan, Wenjun Zeng, Junliang Xing, Jianru Xue, and Nanning Zheng. Semantics-guided neural networks for efficient skeleton-based human action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1112–1121, 2020.
* [36] Rui Zhao, Kang Wang, Hui Su, and Qiang Ji. Bayesian graph convolution lstm for skeleton based action recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6882–6892, 2019.
## Supplemental Materials for Channel-wise Topology Refinement Graph
Convolution for Skeleton-Based Action Recognition
This supplemental material includes details on formula derivations, architecture settings, additional visualizations and further ablation studies. Specifically, we give the derivation from Equation 12 to 13 and from Equation 8 to 14. We then show the detailed architecture of CTR-GCN, including the input size, output size and specific hyperparameters of each block. Moreover, we visualize shared topologies and channel-specific correlations. Finally, we conduct ablation studies on the effect of the number of CTR-GCs per block and of the temporal convolution, and analyze the performance of different graph convolutions on hard classes.
## Formula Derivation
We first give the derivation from Equation 12 to 13. Equation 12 is
$\mathbf{z^{k}_{i}}=\sum_{v_{j}\in\mathcal{N}(v_{i})}\mathbf{p_{ij}}\odot(\mathbf{x^{k}_{j}W}),$
(15)
where $\mathbf{z^{k}_{i}}\in\mathbb{R}^{1\times C^{\prime}}$ is the output
feature of $v_{i}$ and $\mathbf{p_{ij}}\in\mathbb{R}^{1\times C^{\prime}}$ is
the channel-wise relationship between $v_{i}$ and $v_{j}$.
$\mathbf{x^{k}_{j}}\in\mathbb{R}^{1\times C}$ is the input feature of $v_{j}$
and $\mathbf{W}\in\mathbb{R}^{C\times C^{\prime}}$ is the weight matrix. The
$c$-th element of $\mathbf{z^{k}_{i}}$ is formulated as
$\displaystyle z^{k}_{ic}$
$\displaystyle=\sum_{v_{j}\in\mathcal{N}(v_{i})}p_{ijc}\mathbf{(x^{k}_{j}W)_{c}}=\sum_{v_{j}\in\mathcal{N}(v_{i})}p_{ijc}\mathbf{(x^{k}_{j}w_{:,c})}$
$\displaystyle=\sum_{v_{j}\in\mathcal{N}(v_{i})}\mathbf{x^{k}_{j}}(p_{ijc}\mathbf{w_{:,c}}),$
(16)
where $p_{ijc}$ is the $c$-th element of $\mathbf{p_{ij}}$.
$\mathbf{(x^{k}_{j}W)_{c}}\in\mathbb{R}^{1}$ is the $c$-th element of
$\mathbf{x^{k}_{j}W}$ and $\mathbf{w_{:,c}}\in\mathbb{R}^{C\times 1}$ is the
$c$-th column of $\mathbf{W}$. Therefore, $\mathbf{z^{k}_{i}}$ can be
formulated as
$\displaystyle\mathbf{z^{k}_{i}}$
$\displaystyle=\left[\begin{array}[]{c}\sum_{v_{j}\in\mathcal{N}(v_{i})}\mathbf{x^{k}_{j}}(p_{ij1}\mathbf{w_{:,1}})\\\
\vdots\\\
\sum_{v_{j}\in\mathcal{N}(v_{i})}\mathbf{x^{k}_{j}}(p_{ijC^{\prime}}\mathbf{w_{:,C^{\prime}}})\end{array}\right]^{T}$
(20)
$\displaystyle=\sum_{v_{j}\in\mathcal{N}(v_{i})}\mathbf{x^{k}_{j}}([p_{ij1}\mathbf{w_{:,1}},\cdots,p_{ijC^{\prime}}\mathbf{w_{:,C^{\prime}}}]),$
(21)
which is the same as Equation 13.
Then we give the derivation from Equation 8 to 14. Adding the sample index $k$ to Equation 8 gives
$\mathbf{Z^{k}}=[\mathbf{R^{k}_{1}\tilde{x}^{k}_{:,1}}||\mathbf{R^{k}_{2}\tilde{x}^{k}_{:,2}}||\cdots||\mathbf{R^{k}_{C^{\prime}}\tilde{x}^{k}_{:,C^{\prime}}}].$
(22)
The $c$-th column of $\mathbf{Z^{k}}\in\mathbb{R}^{N\times C^{\prime}}$ can be
formulated as
$\mathbf{z^{k}_{:,c}}=\mathbf{R^{k}_{c}\tilde{x}^{k}_{:,c}}=\mathbf{R^{k}_{c}}\mathbf{(X^{k}W)_{:,c}}=\mathbf{R^{k}_{c}}\mathbf{(X^{k}w_{:,c})},$
(23)
where $\mathbf{X^{k}}\in\mathbb{R}^{N\times C}$ is the input feature. The
$i$-th element of $\mathbf{z^{k}_{:,c}}$, _i.e_., the $c$-th element of
$v_{i}$’s output feature, is
$\displaystyle z^{k}_{ic}$
$\displaystyle=\mathbf{r^{k}_{i,:,c}}\mathbf{(X^{k}w_{:,c})}=\sum_{v_{j}\in\mathcal{N}(v_{i})}r^{k}_{ijc}\mathbf{(x^{k}_{j}w_{:,c})}$
$\displaystyle=\sum_{v_{j}\in\mathcal{N}(v_{i})}\mathbf{x^{k}_{j}}(r^{k}_{ijc}\mathbf{w_{:,c}}),$
(24)
where $\mathbf{r^{k}_{i,:,c}}\in\mathbb{R}^{1\times N}$ is the $i$-th row of
$\mathbf{R^{k}_{c}}\in\mathbb{R}^{N\times N}$. Equation 24 has the same form
as Equation 16, so it can be reformulated analogously to Equation 21:
$\mathbf{z^{k}_{i}}=\sum_{v_{j}\in\mathcal{N}(v_{i})}\mathbf{x^{k}_{j}}([r^{k}_{ij1}\mathbf{w_{:,1}},\cdots,r^{k}_{ijC^{\prime}}\mathbf{w_{:,C^{\prime}}}]).$
(25)
Equation 25 is the same as Equation 14, _i.e_., Equation 8 can be reformulated
as Equation 14.
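The equivalence of the per-channel form (Equation 22) and the node-wise form (Equation 25) can also be checked numerically. The sketch below assumes, for simplicity, that every node is in every other node's neighborhood (a full adjacency); the random matrices are stand-ins for the learned refined topologies and weights.

```python
import numpy as np

rng = np.random.default_rng(0)
N, C, Cp = 4, 5, 3                   # nodes, input channels, output channels
X = rng.standard_normal((N, C))      # input features X^k
W = rng.standard_normal((C, Cp))     # weight matrix W
R = rng.standard_normal((Cp, N, N))  # one N x N refined topology R_c per output channel

# Equation 22 (per-channel form): Z[:, c] = R_c (X W)[:, c]
Xt = X @ W
Z_chan = np.stack([R[c] @ Xt[:, c] for c in range(Cp)], axis=1)

# Equation 25 (node-wise form): z_i = sum_j x_j [r_ij1 w_:,1, ..., r_ijC' w_:,C']
Z_node = np.zeros((N, Cp))
for i in range(N):
    for j in range(N):
        Wij = W * R[:, i, j][None, :]   # scale column c of W by r_ijc
        Z_node[i] += X[j] @ Wij

assert np.allclose(Z_chan, Z_node)      # the two forms agree
```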
Layers | Output Sizes | Hyperparameters
---|---|---
Basic Block 1 | M$\times$T$\times$N | $\begin{bmatrix}\text{SM: 3, C}\\\ \text{TM: C, C, 1}\end{bmatrix}$
Basic Block 2 | M$\times$T$\times$N | $\begin{bmatrix}\text{SM: C, C}\\\ \text{ TM: C, C, 1}\end{bmatrix}$
Basic Block 3 | M$\times$T$\times$N | $\begin{bmatrix}\text{SM: C, C}\\\ \text{ TM: C, C, 1}\end{bmatrix}$
Basic Block 4 | M$\times$T$\times$N | $\begin{bmatrix}\text{SM: C, C}\\\ \text{TM: C, C, 1}\end{bmatrix}$
Basic Block 5 | M$\times$$\frac{\text{T}}{\text{2}}\times$N | $\begin{bmatrix}\text{SM: C, 2C}\\\ \text{TM: 2C, 2C, 2}\end{bmatrix}$
Basic Block 6 | M$\times$$\frac{\text{T}}{\text{2}}\times$N | $\begin{bmatrix}\text{SM: 2C, 2C }\\\ \text{TM: 2C, 2C, 1}\end{bmatrix}$
Basic Block 7 | M$\times$$\frac{\text{T}}{\text{2}}\times$N | $\begin{bmatrix}\text{SM: 2C, 2C }\\\ \text{TM: 2C, 2C, 1}\end{bmatrix}$
Basic Block 8 | M$\times$$\frac{\text{T}}{\text{4}}\times$N | $\begin{bmatrix}\text{SM: 2C, 4C}\\\ \text{TM: 4C, 4C, 2}\end{bmatrix}$
Basic Block 9 | M$\times$$\frac{\text{T}}{\text{4}}\times$N | $\begin{bmatrix}\text{SM: 4C, 4C }\\\ \text{TM: 4C, 4C, 1}\end{bmatrix}$
Basic Block 10 | M$\times$$\frac{\text{T}}{\text{4}}\times$N | $\begin{bmatrix}\text{SM: 4C, 4C}\\\ \text{TM: 4C, 4C, 1}\end{bmatrix}$
Classification | 1$\times$1$\times$1 | $\begin{bmatrix}\text{global average pool }\\\ \text{$n_{c}$-d fc}\\\ \text{softmax}\end{bmatrix}$
Table 8: Detailed architecture of CTR-GCN. M, T, and N refer to the number of
people, the length, and the number of joints of input sequences. “SM” and “TM”
indicate the spatial modeling module and temporal modeling module
respectively. The two numbers after SM are the input channel and output
channel of SM. The three numbers after TM are the input channel, output
channel and temporal stride. $n_{c}$ is the number of action classes.
## Detailed Architecture
The detailed architecture of the proposed CTR-GCN is shown in Table 8. CTR-GCN
contains ten basic blocks and a classification layer, which consists of a
global average pooling, a fully connected layer and a softmax operation. M
refers to the number of people in the sequences, which is set to 2, 2, and 1
for NTU RGB+D, NTU RGB+D 120, and NW-UCLA respectively. Within a sample, the M
skeleton sequences are processed independently by the ten basic blocks and
then average-pooled by the classification layer to obtain the final score. T
and N refer to the length and the number of joints of the input skeleton
sequences, which are {64, 25}, {64, 25} and {52, 20} for NTU-RGB+D, NTU-RGB+D
120, and NW-UCLA respectively. C is the basic channel number, set to 64 for
CTR-GCN. “SM” and “TM” indicate the spatial modeling module and temporal
modeling module respectively. The two numbers after SM are the input and
output channels of SM. The three numbers after TM are the input channel,
output channel and temporal stride. In Basic Blocks 5 and 8, the strides of
the convolutions in the temporal modeling module (TM) are set to 2 to halve
the temporal dimension. $n_{c}$ is the number of action classes, which is 60,
120, and 10 for NTU-RGB+D, NTU-RGB+D 120, and NW-UCLA respectively.
## Visualization
Figure 5: Visualization of the shared topologies and channel-specific
correlations. The green lines show the natural connections of human skeleton.
The intensity of red lines indicates the connection strength of correlations.
As shown in Figure 5, we visualize the shared topologies and channel-specific correlations of our CTR-GCN. The input sample belongs to “typing on a keyboard”. It can be seen that (1) the shared topologies in the three layers tend to be coarse and dense, capturing global features for recognizing actions; (2) the channel-specific correlations vary across channels, indicating that our CTR-GCN models individual joint relationships under different types of motion features; (3) most channel-specific correlations focus on the two hands, capturing the subtle hand interactions that are helpful for recognizing “typing on a keyboard”.
## Ablation Study
Number | Param. | Acc (%)
---|---|---
3(CTR-GCN) | 1.46M | 84.9
1 | 0.85M | 84.3 ↓0.5
2 | 1.15M | 84.7 ↓0.2
4 | 1.76M | 85.2 ↑0.3
5 | 2.07M | 85.4 ↑0.5
6 | 2.37M | 85.0 ↑0.1
Table 9: Comparisons of model performances with different number of CTR-GCs.
Effect of the number of CTR-GCs. In CTR-GCN, we use three CTR-GCs for a fair comparison with other methods (e.g., AGCN, MS-G3D), which mostly use three or more GCs to increase model capacity. To verify the effect of the number of CTR-GCs on our method, we test models with 1 to 6 CTR-GCs. As shown in Table 9, accuracy first increases with model capacity but drops at 6 CTR-GCs, which may be caused by overfitting.
Temporal Modeling | Acc (%)
---|---
Temporal Conv(CTR-GCN) | 84.9
Temporal Pooling | 72.8 ↓12.1
Table 10: Comparisons of model performances with different temporal modeling methods.
Effect of temporal convolutions. It is common practice to use (multi-scale) temporal convolutions for temporal modeling in skeleton-based action recognition. To validate the effect of temporal convolutions, we instead use global average pooling for temporal modeling. As shown in Table 10, the performance drops from 84.9% to 72.8%, probably because pooling loses too much temporal information to extract the joints’ trajectory features effectively.
Figure 6: Comparison of classification accuracy of different graph
convolutions on hard action classes.
Performance on hard classes. We further analyze the performance of different graph convolutions on hard classes of NTU-RGB+D 120, _i.e_., “staple book”, “count money”, “play with phone”, “cut nails”, “playing magic cube” and “open bottle”. These actions mainly involve subtle interactions between fingers, making them difficult to recognize correctly. As shown in Figure 6, CTR-GC outperforms the other graph convolutions on all of these classes. In particular, CTR-GC exceeds the other methods by at least 7.03% and 4.36% on “cut nails” and “open bottle” respectively, showing that, compared with other GCs, our CTR-GC can effectively extract features of subtle interactions and classify them more accurately.
# Effective Capacity Analysis of HARQ-enabled D2D Communication in Multi-Tier
Cellular Networks
Syed Waqas Haider Shah, Student Member, IEEE, M. Mahboob Ur Rahman, Member,
IEEE, Adnan Noor Mian, Member, IEEE, Octavia A. Dobre, Fellow, IEEE, and Jon
Crowcroft, Fellow, IEEE
###### Abstract
This work presents a statistical quality-of-service (QoS) analysis of a block-
fading device-to-device (D2D) link in a multi-tier cellular network that
consists of a macro-BS ($BS_{{}_{MC}}$) and a micro-BS ($BS_{{}_{mC}}$), both
operating in full-duplex (FD) mode. For the D2D link under consideration, we
first formulate the mode selection problem (whereby the D2D pair could
communicate directly, through the $BS_{{}_{mC}}$, or through the
$BS_{{}_{MC}}$) as a ternary hypothesis testing problem. Next, to compute the
effective capacity (EC) of the given D2D link, we assume that channel state
information (CSI) is not available at the transmit D2D node, which therefore
transmits at a fixed rate $r$ with a fixed power. This allows us to model the
D2D link as a six-state Markov system. We consider both overlay and underlay
modes for the D2D link. Moreover, to improve the throughput of the D2D link,
we assume that the D2D pair utilizes two special automatic repeat request
(ARQ) schemes, i.e., Hybrid-ARQ (HARQ) and truncated HARQ. Furthermore, we
consider two distinct queue models at the transmit D2D node, based upon how it
responds to a decoding failure at the receive D2D node. Eventually, we provide
closed-form expressions for the EC of both the HARQ-enabled and the
truncated-HARQ-enabled D2D link, under both queue models. Observing that the
EC is a quasi-concave function of $r$, we further maximize the EC by searching
for the optimal rate via the gradient-descent method. Simulation results
provide the following insights: i) the EC decreases with an increase in the
QoS exponent; ii) the EC of the D2D link improves when HARQ is employed; iii)
the EC increases with the quality of the self-interference cancellation
techniques used at $BS_{{}_{mC}}$ and $BS_{{}_{MC}}$ in FD mode.
###### Index Terms:
Effective capacity, D2D communication, retransmission, automatic repeat
request, hybrid-ARQ, quality-of-service.
Copyright (c) 2015 IEEE. Personal use of this material is permitted. However,
permission to use this material for any other purposes must be obtained from
the IEEE by sending a request to [email protected]. Syed Waqas Haider
Shah and Jon Crowcroft are with the Computer Lab, University of Cambridge, 15
JJ Thomson Avenue, Cambridge, UK CB3 0FD ({sw920,
jon.crowcroft}@cl.cam.ac.uk). Syed Waqas Haider Shah is also with the
Electrical Engineering Department, Information Technology University, Lahore
54000, Pakistan ([email protected]). Muhammad Mahboob Ur Rahman and
Adnan Noor Mian are with the Electrical Engineering Department, Information
Technology University, Lahore 54000, Pakistan ({mahboob.rahman,
adnan.noor}@itu.edu.pk). Octavia A. Dobre is with the Department of Electrical
and Computer Engineering, Memorial University, St. John’s, NL A1B 3X5, Canada
([email protected]).
## I Introduction
In wireless communication, reliability is considered one of the key
performance indicators for data transmission. It becomes even more critical
with the emergence of mission-critical and delay-sensitive communication
paradigms and their applications, such as video streaming, online gaming, and
augmented reality. These paradigms strive to accommodate services with
ultra-reliable and low-latency requirements. The quality of the wireless
channel, which is determined by shadowing, multi-path fading, and inter-user
and inter-channel interference, limits the achievable reliability. Prior
knowledge of the channel conditions at the transmitter plays an important role
in achieving the required reliability. When the transmitter has perfect
channel state information (CSI) before transmission, it can adjust its
transmission power and rate to the channel conditions, and the optimal
performance of the channel can be achieved [1]. However, in practice, perfect
knowledge of the CSI at the transmitter is hard to acquire due to rapidly
changing wireless channel conditions (slow and fast fading). Therefore,
practical wireless systems use block-fading channel models, in which pilot
bits transmitted at the start of each fading/time block approximate the fading
process of the channel, and this fading process is assumed to remain constant
for the entire block. However, if the block length is long or the pathloss
changes rapidly, this approximation does not hold over the entire fading/time
block.
Device-to-device (D2D) communication, on the other hand, is a type of
communication with opportunistic channel allocation [2]. A D2D device
transmits data either in direct-D2D mode, using overlay (orthogonal channel
allocation) or underlay (non-orthogonal channel allocation) settings, or in
cellular mode (relaying through the base station) [3]. The opportunistic
nature of channel allocation in the D2D paradigm makes it hard (or sometimes
infeasible) to acquire CSI at a transmitting D2D device [4, 5]. Data
transmission without prior knowledge of the CSI at the transmitting device
increases the packet drop ratio under rapidly changing channel conditions.
Multiple techniques can be used to ensure reliability, such as shortening the
fading/time block (to allow more retransmissions) or reducing the packet size.
In particular, automatic repeat request (ARQ) and hybrid-ARQ (HARQ) schemes
were proposed to enhance the reliability of the communication channel when CSI
is not available at the transmitter prior to transmission.
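As rough intuition for the retransmission-based schemes just mentioned, the sketch below simulates a plain stop-and-wait ARQ link. It assumes i.i.d. per-attempt error probability `p_err` (a simplification; the paper considers block-fading channels), under which the mean number of transmissions per packet is 1/(1−p), so throughput falls off sharply as the channel deteriorates.

```python
import random

def arq_attempts(p_err, rng):
    """Number of transmissions until the first ACK (geometric distribution)."""
    n = 1
    while rng.random() < p_err:   # each attempt fails independently w.p. p_err
        n += 1
    return n

rng = random.Random(0)
for p in (0.1, 0.5, 0.9):
    tries = [arq_attempts(p, rng) for _ in range(20000)]
    avg = sum(tries) / len(tries)
    # analytic expectation is 1 / (1 - p): ~1.11, 2.0, 10.0 transmissions/packet
    print(f"p_err={p}: avg transmissions {avg:.2f}, relative throughput {1/avg:.2f}")
```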
In the ARQ retransmission scheme, parity bits are added to the transmitted
packets for error detection (ED) at the receiver. If the receiver detects an
error, it sends a negative acknowledgment (NACK) over an error-free feedback
link, and the transmitter retransmits the packet. Retransmission of the same
packet continues until the transmitter receives a positive ACK. A major
drawback of the ARQ scheme is that the throughput does not remain constant;
instead, it falls rapidly as the channel conditions deteriorate (due to the
high retransmission frequency). To enhance the performance of ARQ and reduce
the retransmission frequency, another scheme, known as HARQ, combines forward
error correction (FEC) codes with ARQ [6]. HARQ is generally used in two
settings, namely type-I HARQ and type-II HARQ. In the former, packets are
encoded with ED and FEC codes before transmission, and the receiver tries to
correct errors using these codes when an erroneous packet is received (instead
of sending a NACK right away). Retransmission of the same packet is only
requested when the receiver fails to decode it. In the latter, the transmitter
first sends data and ED codes only. If the receiver fails to decode the
received packet, it sends a NACK, and the transmitter then sends the ED and
FEC codes in the second transmission attempt. If the packet still has an
error, the receiver combines the information received in both transmissions
for error correction [7]; this is known as chase combining or soft combining.
The transmitter keeps sending the same parity bits (ED and FEC codes) in each
retransmission attempt. If the transmitter instead sends different parity bits
every time it receives a NACK, the scheme is known as type-III HARQ (also
known as incremental redundancy) [8]. Overall, HARQ provides better throughput
and reliability than ARQ. In the case of D2D communication, HARQ can be used
in both direct-D2D and cellular-D2D modes. In the direct-D2D mode, a D2D
receiver sends the ACK/NACK directly to the transmitter in either overlay or
underlay settings depending upon the allocated channel. In the cellular-mode,
the D2D receiver first sends ACK/NACK to the base station (BS), which the BS
then relays to the D2D transmitter. The cellular mode allows HARQ to reuse the
existing downlink and uplink channels with minimal changes, at the cost of
additional overhead and possibly a longer delay in feedback. Throughput
analysis is one of the most common tools used to measure the performance of
the retransmission schemes. However, for D2D communication or other delay-
sensitive wireless applications, throughput analysis may not provide the
required delay guarantees. Moreover, throughput varies with varying channel
conditions and drops quickly when the channel conditions deteriorate.
In delay-sensitive wireless applications, it is desirable to have system
throughput subject to given quality-of-service (QoS) requirements. The
Effective Capacity (EC) is an analytical tool to find the maximum constant
arrival rate that can be supported by the time-varying channel conditions
while satisfying the statistical QoS guarantees imposed at the transmitter’s
queue [9]. It provides statistical QoS guarantees for throughput in terms of
delay bounds. The EC has been used for various wireless channels, including
cognitive radios [10], two-hop wireless channels [11], D2D [12], licensed-
unlicensed interoperable D2D [13], MIMO wireless networks [14], and underwater
acoustic channels [15]. More recently, the EC analysis has also been performed
for different retransmission schemes [16, 17, 18]. However, to the best of the
authors’ knowledge, this is the first study which provides the EC analysis of
HARQ-enabled D2D communication in multi-tier future cellular networks.
More specifically, this work provides the following contributions:
- We formulate a mode selection mechanism for D2D communication in multi-tier cellular networks as a ternary hypothesis testing problem and compute the corresponding error and correct-detection probabilities (Section III). This mechanism selects a communication mode among the three available modes (direct-D2D, micro-cell D2D, and macro-cell D2D) based on the pathloss measurements of the transmission link.
- We perform the EC analysis of HARQ-enabled D2D communication in multi-tier cellular networks, and we analyze the impact of the mode selection mechanism on the EC of HARQ-enabled multi-tier D2D communication. We assume that the CSI is not available at the transmit D2D node; hence, it transmits at a fixed rate with a fixed power. This allows us to model the D2D link as a six-state Markov system. We then perform the Markov chain modeling of the D2D link in both overlay and underlay settings.
- We provide the EC analysis of HARQ-enabled D2D communication for two distinct queue models at the transmit D2D node, based upon how it responds to a decoding failure at the receive D2D node. We provide closed-form expressions for the EC of the HARQ-enabled D2D link under both queue models.
- We propose a special case of truncated HARQ-enabled D2D communication in which a transmitting device transmits a packet only twice. It transmits in underlay settings in the first attempt, and if the receiver fails to decode the received packet successfully, it retransmits the same packet in overlay settings. If the receiver fails to decode the packet in the second transmission attempt, the transmitting device either drops the packet or lowers the transmission priority of the packet (depending on the queue model in use). We then perform the EC analysis and provide closed-form expressions for the EC of truncated HARQ-enabled D2D communication under both queue models.
- Lastly, we provide closed-form expressions for the optimal transmission rates for our proposed case of truncated HARQ-enabled D2D communication under both queue models.
The remainder of this paper is organized as follows. Section II presents the
system model for our proposed multi-tier D2D communication and some background
knowledge of EC, full-duplex, and ARQ. Section III introduces the mode
selection mechanism for the proposed model. Section IV provides the EC
analysis. Sections IV-A and IV-B describe the Markov chain modelling for the
proposed ternary hypothesis testing (THT) problem. Sections IV-C and IV-D
present the EC of HARQ-enabled multi-tier D2D and of the truncated HARQ case
of multi-tier D2D, respectively. Section V provides a detailed numerical
investigation using simulation results. Finally, the paper concludes in
Section VI.
## II System Model and Background
### II-A System Model
We consider a two-tier cellular network scenario in which a micro-cell (mC) BS
is deployed in a coverage region of a macro-cell (MC) BS, as shown in Fig. 1.
In a 5G multi-tier network architecture, MC-BS and mC-BS usually operate on
lower frequencies and higher millimeter-wave frequencies, respectively [19].
Therefore, they do not experience inter-tier interference (in scenarios where all the tiers of a multi-tier network architecture use the same frequency spectrum, one needs to consider inter-tier interference when calculating the respective channel capacities [20]). MC-BS provides low-rate
connectivity to a large number of users in a wide coverage area. On the other
hand, mC-BS provides high data rate connectivity to a small number of users in
a limited coverage area. In two-tier cellular networks, a D2D transmitting
device can communicate with its receiver in three possible communication
modes. It can either communicate directly (direct-D2D mode) or by relaying its
data through MC-BS (MC-D2D mode) or mC-BS (mC-D2D mode), as shown in Fig. 1.
It can also use either underlay (reusing the cellular user’s resources) or
overlay (using orthogonal resource blocks) settings for data transmission
based on the network conditions. This problem of selecting a communication
mode from the available ones is known as mode selection [12].
Figure 1: System model: D2D communication in multi-tier cellular networks.
$D_{T}$ communicates with $D_{R}$ in direct-D2D mode (shown as blue dotted
arrows), mC-D2D mode (shown as red arrows), or MC-D2D mode (shown as black
arrows); solid and dotted arrows show the uplink and downlink transmissions,
respectively.
We also make the following assumptions for our analysis: i) direct-D2D,
mC-D2D, and MC-D2D channels are block-fading channels that have Rayleigh
distribution, and fading remains constant for each block, changing
independently between blocks; ii) both the mC-BS and MC-BS use decode-and-
forward operation to relay data to $D_{R}$ in mC-D2D and MC-D2D communication
modes; iii) both mC-BS and MC-BS operate in full-duplex mode [21], so we use
the residual self-interference (SI) as a factor of noise in our analysis.
### II-B Background
#### Effective Capacity (EC)
EC is the maximum constant arrival rate that can be supported by the time-
varying channel while satisfying the statistical QoS guarantees imposed as
delay constraints at the transmitter’s queue. It is defined in terms of the log moment generating function (MGF) of the cumulative channel service process [9]:
$EC=-\frac{\Lambda(-\theta)}{\theta}=-\lim_{t\to\infty}\frac{1}{\theta t}\log\operatorname{\mathbb{E}}\big[e^{-\theta\sum_{k=1}^{t}s(k)}\big]$ (1)
where $s(k)$ is the channel service process in slot $k$,
$\operatorname{\mathbb{E}}[.]$ is the expectation operator, and $\theta$ is
the QoS exponent. $\theta\to\infty$ ($\theta\to 0$) refers to delay-sensitive
(delay-tolerant) communication.
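For intuition, the following minimal sketch evaluates (1) in the special case of an i.i.d. ON/OFF service process (transmit $r$ bits with probability $p_{on}$, else $0$), for which the limit collapses to a single-slot expectation, $EC=-\frac{1}{\theta}\log\operatorname{\mathbb{E}}[e^{-\theta s}]$. This special case and all parameter values are illustrative, not the paper's general model.

```python
import math

def effective_capacity(theta, r, p_on):
    """EC of an i.i.d. ON/OFF service process: s(k)=r w.p. p_on, else 0.

    For i.i.d. s(k), eq. (1) reduces to EC = -(1/theta) * log E[e^{-theta*s}].
    """
    return -math.log(p_on * math.exp(-theta * r) + (1.0 - p_on)) / theta

r, p_on = 2.0, 0.9  # rate and ON probability (illustrative values)
for theta in (1e-3, 1.0, 10.0):
    print(f"theta={theta:g}  EC={effective_capacity(theta, r, p_on):.4f}")
# theta -> 0 recovers the mean service rate p_on*r (delay-tolerant);
# large theta drives the EC toward 0 (delay-sensitive).
```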
#### Full-Duplex Communication
In full-duplex communication, nodes can transmit and receive at the same frequency and at the same time; therefore, the communication link can theoretically achieve double the throughput. In a full-duplex system, the transmit signal interferes with the received signal, thus introducing SI. Generally, the
SI cancellation is performed in two stages. In the first stage, passive
cancellation techniques, such as antenna-separation and antenna-shielding are
used [22]. In the second stage, active cancellation techniques, which can be
digital or analog, are used [23, 24, 25]. However, a complete SI cancellation
is impossible in practical full-duplex systems [26]. Therefore, a residual SI
can still be experienced at the transmit node even after employing these
cancellation techniques. To this end, we use the residual SI in our analysis
as a factor of noise.
#### Automatic Repeat Request (ARQ)
In ARQ, parity bits are added to the transmitted packets for error detection
at the receiver node. If the receiver node detects an error, it sends a NACK,
and the transmitter retransmits the packet. There are also some variants of
ARQ, such as go-back-N, stop-and-wait, and selective repeat. In HARQ, FEC
codes are also added along with parity bits to the transmitted packet [27]. In
this protocol, the receiver node first tries to remove the error using the FEC
codes when an erroneous packet is received, rather than sending NACK right
away. Retransmission of the packet continues until the receiver node
successfully decodes the received packet. HARQ is generally used in three
different settings, explained in Section I. Additionally, if an upper limit is
set for the packet’s retransmission attempts, it is called truncated HARQ
[28]. Network coding can also be used to enhance the performance of HARQ in
wireless broadcasting and multi-user networks, such as network-coded HARQ (NC-
HARQ) [29] and network-turbo-coding based HARQ [30]. Basic NC-HARQ protocols
may increase the computational complexity and delay, which can be avoided using
low-complexity turbo coding techniques [31]. Moreover, to enhance the
throughput of NC-HARQ even further, adaptive random network coding (ARNC) can
be used [32]. It adaptively encodes multiple packets with the highest priority
in each time slot.
## III Mode Selection
The problem of mode selection at the transmit device $D_{T}$ is essentially that of choosing the best transmission path among a set of candidate paths. For the considered system model, mode selection implies a selection between the direct path ($D_{T}\to D_{R}$), the path via the micro-BS ($D_{T}\to BS_{mC}\to D_{R}$), and the path via the macro-BS ($D_{T}\to BS_{MC}\to D_{R}$). Mode selection is traditionally feature-based, whereby the features of the candidate channels (e.g., received signal strength, instantaneous CSI, statistical CSI, instantaneous signal-to-noise ratio, etc.) are utilized to select the most suitable channel for transmission during the upcoming uplink slot. Furthermore, since the acquisition of instantaneous CSI is quite demanding, this work performs mode selection based upon statistical CSI (i.e., pathloss) only. Statistical CSI is a suitable sole feature for mode selection because it varies slowly in the wireless channel and, once estimated, remains valid for multiple seconds. Instantaneous CSI, on the other hand, changes quickly due to small-scale fading (even in a stationary wireless channel, small-scale fading needs to be estimated multiple times per second), and the overhead associated with channel estimation for instantaneous CSI is also large due to the channel training. In our system model, $BS_{{}_{MC}}$ performs the mode selection
mechanism. Specifically, during time slot $k$, the pathloss for all the three
candidate channels ($D_{T}\to D_{R}$, $D_{T}\to BS_{{}_{mC}}$, and $D_{T}\to
BS_{{}_{MC}}$) is measured by $D_{R}$, $BS_{{}_{mC}}$, and $BS_{{}_{MC}}$,
respectively (see Appendix A). All the three pathloss measurements reach
$BS_{{}_{MC}}$, which performs mode selection for the upcoming time slot
($k+1$ time slot). In short, $BS_{{}_{MC}}$ does the mode selection for time
slot $k+1$ based upon the pathloss measurements of the current time slot (time
slot $k$). Thus, by mode selection, $BS_{{}_{MC}}$ chooses the communication
link with the smallest estimated pathloss and then conveys this information to
$D_{T}$ through a downlink control channel. Further, because the proposed mode
selection problem selects a communication mode based on the estimated pathloss
measurements, we provide a step-by-step procedure for pathloss estimation in
Appendix A.
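A minimal sketch of this selection rule follows, using the Gaussian model of the pathloss measurements (Appendix A) and illustrative dB values; the true pathloss triple below matches the example given later in this section.

```python
import random

MODES = ("direct-D2D", "mC-D2D", "MC-D2D")

def select_mode(true_pathloss_db, sigma_db, rng):
    """Return the mode with the smallest noisy pathloss measurement.

    true_pathloss_db = (L_d, L_mC, L_MC); each measurement is modelled
    as Gaussian around the true value with std sigma_db (Appendix A).
    """
    estimates = [L + rng.gauss(0.0, sigma_db) for L in true_pathloss_db]
    return MODES[min(range(3), key=estimates.__getitem__)]

rng = random.Random(1)
picks = [select_mode((90.7, 80.9, 85.4), 1.0, rng) for _ in range(10_000)]
# With the mC link 4.5 dB better than its nearest rival and sigma = 1 dB,
# the measurement noise almost never flips the decision.
print("mC-D2D selected in", picks.count("mC-D2D") / len(picks), "of slots")
```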
### III-A Ternary Hypothesis Testing (THT)
The mode selection problem is formulated as the following THT problem.
$\begin{cases}H_{0}:&\text{direct-D2D mode ($D_{T}\to D_{R}$)}\\\
H_{1}:&\text{micro cell (mC)-D2D mode ($D_{T}\to BS_{{}_{mC}}\to D_{R}$)}\\\
H_{2}:&\text{macro-cell (MC)-D2D mode ($D_{T}\to BS_{{}_{MC}}\to
D_{R}$).}\end{cases}$ (2)
where hypotheses $H_{0}$, $H_{1}$, and $H_{2}$ state that communication via the direct link, via the micro-BS, and via the macro-BS, respectively, is most suitable for transmission during the upcoming slot.
Let $L_{d}$, $L_{mC}$, $L_{MC}$ represent the true pathloss of $D_{T}\to
D_{R}$, $D_{T}\to BS_{mC}$, and $D_{T}\to BS_{MC}$ links, respectively.
Moreover, let $\widehat{L}_{d}$, $\widehat{L}_{mC}$, $\widehat{L}_{MC}$
represent the noisy measurement of $L_{d}$, $L_{mC}$, $L_{MC}$. Appendix A
provides a step-by-step procedure for calculating the noisy measurement of
pathloss for all the three candidate links. According to the mode selection
problem, the direct-D2D mode will be selected when the estimated pathloss of
$D_{T}\to D_{R}$ ($\widehat{L}_{d}$) link is the smallest among the estimated
pathlosses of the candidate links. Similarly, mC-D2D and MC-D2D modes will be
selected when the estimated pathloss of $D_{T}\to BS_{mC}$
($\widehat{L}_{mC}$), and $D_{T}\to BS_{MC}$ ($\widehat{L}_{MC}$) link is the
smallest, respectively. Now, the THT problem in (2) can be recast as
follows:
$\begin{cases}H_{0}:&\widehat{L}_{d}=\min\big{\\{}\widehat{L}_{d},\widehat{L}_{{}_{mC}},\widehat{L}_{{}_{MC}}\big{\\}}\\\
H_{1}:&\widehat{L}_{{}_{mC}}=\min\big{\\{}\widehat{L}_{d},\widehat{L}_{{}_{mC}},\widehat{L}_{{}_{MC}}\big{\\}}\\\
H_{2}:&\widehat{L}_{{}_{MC}}=\min\big{\\{}\widehat{L}_{d},\widehat{L}_{{}_{mC}},\widehat{L}_{{}_{MC}}\big{\\}},\end{cases}$
(3)
where $\widehat{L}_{d}\sim\mathcal{N}(L_{d},\sigma^{2})$,
$\widehat{L}_{{}_{mC}}\sim\mathcal{N}(L_{{}_{mC}},\sigma^{2})$, and
$\widehat{L}_{{}_{MC}}\sim\mathcal{N}(L_{{}_{MC}},\sigma^{2})$ are the
probability distribution of the noisy measurement of pathloss in direct-D2D,
mC-D2D, and MC-D2D modes, respectively (see Appendix A). From eq. (3), we can
see that $H_{0}$ will be selected when the noisy measurement of the pathloss
of the $D_{T}\to D_{R}$ link ($\widehat{L}_{d}$) is the smallest. Similarly, $H_{1}$ and $H_{2}$ will be selected when $\widehat{L}_{{}_{mC}}$ and
$\widehat{L}_{{}_{MC}}$ are the smallest among the candidate links’
pathlosses, respectively.
Let $\mathbf{l}=[L_{d},L_{mC},L_{MC}]^{T}$. Also, let
$\mathbf{l}^{(s)}=\text{sort}(\mathbf{l})$, where sort(.) operator sorts the
elements of a vector in ascending order. Let
$\mathbf{l}^{(s)}=[L_{A},L_{B},L_{C}]^{T}$; thus, $L_{A}<L_{B}<L_{C}$ (see
Fig. 2). In other words, $\mathbf{l}$ and $\mathbf{l}^{(s)}$ are $3\times 1$ vectors containing the unsorted and sorted pathlosses of the three candidate links, respectively. Then, the following holds:
$\hat{L}_{A}\sim\mathcal{N}(L_{A},\sigma^{2})$,
$\hat{L}_{B}\sim\mathcal{N}(L_{B},\sigma^{2})$,
$\hat{L}_{C}\sim\mathcal{N}(L_{C},\sigma^{2})$, where $\hat{L}_{A}$,
$\hat{L}_{B}$, and $\hat{L}_{C}$ denote the noisy measurements of $L_{A}$,
$L_{B}$, and $L_{C}$, respectively. Then, the THT problem for the sorted
pathlosses can be formulated as the following two log-likelihood ratio tests
(LLRT) (see section 3.2 of [33]):
$\displaystyle\log_{e}(f_{\widehat{L}_{A}}(\widehat{l}_{A}))\underset{H_{A}}{\overset{H_{B}}{\gtrless}}\log_{e}(f_{\widehat{L}_{B}}(\widehat{l}_{B}))$
(4a)
$\displaystyle\log_{e}(f_{\widehat{L}_{B}}(\widehat{l}_{B}))\underset{H_{B}}{\overset{H_{C}}{\gtrless}}\log_{e}(f_{\widehat{L}_{C}}(\widehat{l}_{C})),$
(4b)
where $f_{X}(x)$ represents the probability density function (pdf) of the random variable $X$. (4a) states that when the pdf of the estimated pathloss $\widehat{L}_{A}$ is smaller than the pdf of the estimated pathloss $\widehat{L}_{B}$, $H_{A}$ will be selected, and vice-versa. Similarly, (4b) states that when the pdf of the estimated pathloss $\widehat{L}_{B}$ is smaller than the pdf of the estimated pathloss $\widehat{L}_{C}$, $H_{B}$ will be selected, and vice-versa.
Figure 2: The pdfs $f(\widehat{L}_{A})$, $f(\widehat{L}_{B})$, and
$f(\widehat{L}_{C})$: $C_{A,B}$ and $C_{B,C}$ are the decision thresholds;
$L_{A}$, $L_{B}$, and $L_{C}$ are the true (but ordered) pathloss values of
the three candidate links.
### III-B Performance of THT
We evaluate the performance of THT by using the correct-detection and error
probabilities. Let $C_{A,B}$ and $C_{B,C}$ represent the decision thresholds
(see Fig. 2). Then, the three probabilities of correct-detection are given as:
$\displaystyle\begin{split}P_{{}_{d,A}}&=\mathbb{P}(\widehat{L}_{A}<C_{A,B})\\\
&=1-Q\big{(}\frac{C_{A,B}-L_{A}}{\sigma}\big{)}\end{split}$ (5a)
$\displaystyle\begin{split}P_{{}_{d,B}}&=\mathbb{P}(C_{A,B}<\widehat{L}_{B}<C_{B,C})\\\
&=Q\big{(}\frac{C_{A,B}-L_{B}}{\sigma}\big{)}-Q\big{(}\frac{C_{B,C}-L_{B}}{\sigma}\big{)}\end{split}$
(5b)
$\displaystyle\begin{split}P_{{}_{d,C}}&=\mathbb{P}(\widehat{L}_{C}>C_{B,C})\\\
&=Q\big{(}\frac{C_{B,C}-L_{C}}{\sigma}\big{)},\end{split}$ (5c)
where $Q(x)=\frac{1}{\sqrt{2\pi}}\int_{x}^{\infty}e^{-\frac{t^{2}}{2}}dt$ is the complementary cumulative distribution function (CCDF) of a standard normal random variable. (5a), (5b), and (5c) represent the correct-detection probabilities of selecting $H_{A}$, $H_{B}$, and $H_{C}$, respectively. More specifically, $P_{{}_{d,A}}$ is the probability that the estimate of the smallest pathloss in the sorted pathloss vector ($\mathbf{l}^{(s)}$) falls below $C_{A,B}$ (the threshold between the pdfs of $\widehat{L}_{A}$ and $\widehat{L}_{B}$); in other words, it is the probability that the mode selection mechanism selects $H_{A}$ when $L_{A}$ is indeed the smallest pathloss. Similarly, $P_{{}_{d,B}}$ is the probability that the mode selection mechanism selects $H_{B}$, i.e., that $\widehat{L}_{B}$ falls between the thresholds $C_{A,B}$ and $C_{B,C}$. Lastly, $P_{{}_{d,C}}$ is the probability that the mode selection mechanism selects $H_{C}$, i.e., that $\widehat{L}_{C}$ exceeds $C_{B,C}$. Moreover, the THT mechanism also incurs three kinds of errors, i.e., $P_{{}_{e,A}}=1-P_{{}_{d,A}}$, $P_{{}_{e,B}}=1-P_{{}_{d,B}}$, and $P_{{}_{e,C}}=1-P_{{}_{d,C}}$.
So far, we have computed the error and correct-detection probabilities for the ordered/sorted pathloss values ($L_{A}$, $L_{B}$, and $L_{C}$). However, the actual hypotheses are based on the unsorted pathloss values. To this end, a relation needs to be established between the error and correct-detection probabilities of the sorted pathloss values and those of the unsorted pathloss values. Let $P_{{}_{d,H_{0}}}$ ($P_{{}_{e,H_{0}}}$) represent the correct-detection (error) probability for selecting the direct-D2D mode. $P_{{}_{d,H_{0}}}$ is the probability that the direct-D2D link was the best (its pathloss was the smallest among all three pathlosses) and the mode selection mechanism also detects the direct-D2D link, whereas $P_{{}_{e,H_{0}}}$ is the probability that the direct-D2D link was the best but the mode selection mechanism makes an error and selects either the mC-D2D or the MC-D2D link for packet transmission. Similarly, $P_{{}_{d,H_{1}}}$ ($P_{{}_{e,H_{1}}}$) and $P_{{}_{d,H_{2}}}$ ($P_{{}_{e,H_{2}}}$) represent the correct-detection (error) probabilities for selecting the mC-D2D and MC-D2D modes,
respectively.[^3] Further, to understand the relation between the error and correct-detection probabilities given in (5) and the error and correct-detection probabilities of the actual hypotheses ($H_{0}$, $H_{1}$, and $H_{2}$), we provide the following example.

[^3]: Ideally, the mode selection mechanism should be based on the true pathloss values of the candidate communication links. However, according to fundamental principles of statistical inference, the true pathloss can never be measured (since the received signal itself is corrupted by additive white Gaussian noise and channel fading). Therefore, the mode selection mechanism is based on the estimated pathloss values of the three communication links. These pathloss measurements come with some (specifically, Gaussian) uncertainty, as shown in Appendix A, and because of this the mode selection will never be error-free. In other words, the uncertainty in the pathloss measurements introduces the errors. Thus, the errors can never be made zero, but the hypothesis-testing mechanism computes the thresholds such that these errors are minimized.
#### Example
Let $L_{d}=90.7$, $L_{{}_{mC}}=80.9$, and $L_{{}_{MC}}=85.4$. Thus,
$\mathbf{l}=[90.7,80.9,85.4]^{T}$. Then,
$\mathbf{l}^{(s)}=\text{sort}(\mathbf{l})=[80.9,85.4,90.7]^{T}$. Thus, $L_{A}=80.9$,
$L_{B}=85.4$, and $L_{C}=90.7$. Furthermore, let $\sigma=1$. Then,
$\widehat{L}_{d}\sim\mathcal{N}(L_{d},\sigma^{2})$. Thus,
$\widehat{L}_{d}\sim\mathcal{N}(90.7,1)$. Similarly,
$\widehat{L}_{{}_{mC}}\sim\mathcal{N}(80.9,1)$ and
$\widehat{L}_{{}_{MC}}\sim\mathcal{N}(85.4,1)$. For measurements
$\widehat{L}_{A}$, $\widehat{L}_{B}$, and $\widehat{L}_{C}$ of sorted pathloss
values, we could write: $\widehat{L}_{A}\sim\mathcal{N}(L_{A},\sigma^{2})$.
Thus, $\widehat{L}_{A}\sim\mathcal{N}(80.9,1)$. Similarly,
$\widehat{L}_{B}\sim\mathcal{N}(85.4,1)$ and
$\widehat{L}_{C}\sim\mathcal{N}(90.7,1)$. Then, the correct-detection
probabilities are $P_{{}_{d,A}}=1-Q(2.25)=0.988$,
$P_{{}_{d,B}}=Q(-2.25)-Q(2.5)=0.981$, and $P_{{}_{d,C}}=Q(-2.8)=0.997$. Next,
recall the following mapping due to the sort operation: $L_{A}=L_{{}_{mC}}$,
$L_{B}=L_{{}_{MC}}$, and $L_{C}=L_{d}$. Thus,
$P_{{}_{d,H_{1}}}=P_{{}_{d,A}}=0.988$, $P_{{}_{d,H_{2}}}=P_{{}_{d,B}}=0.981$,
and $P_{{}_{d,H_{0}}}=P_{{}_{d,C}}=0.997$. Similarly,
$P_{{}_{e,H_{1}}}=P_{{}_{e,A}}=0.012$, $P_{{}_{e,H_{2}}}=P_{{}_{e,B}}=0.019$,
and $P_{{}_{e,H_{0}}}=P_{{}_{e,C}}=0.003$.
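The probabilities in (5) can be checked numerically. The sketch below uses the example's pathloss values and assumes the decision thresholds lie at the midpoints between adjacent sorted pathlosses, which is where two equal-variance Gaussian pdfs intersect; this is a modelling assumption of the sketch, and the thresholds obtained from the LLRT in (4) may differ slightly, as may the resulting rounded values.

```python
import math

def Q(x):
    """CCDF of the standard normal, Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def detection_probs(L_sorted, sigma):
    """Correct-detection probabilities of eq. (5).

    L_sorted = (L_A, L_B, L_C) in ascending order. Thresholds are taken
    at the midpoints of adjacent pathloss values (assumption: this is
    where two equal-variance Gaussian pdfs cross).
    """
    L_A, L_B, L_C = L_sorted
    C_AB = 0.5 * (L_A + L_B)
    C_BC = 0.5 * (L_B + L_C)
    P_dA = 1.0 - Q((C_AB - L_A) / sigma)                      # (5a)
    P_dB = Q((C_AB - L_B) / sigma) - Q((C_BC - L_B) / sigma)  # (5b)
    P_dC = Q((C_BC - L_C) / sigma)                            # (5c)
    return P_dA, P_dB, P_dC

P_dA, P_dB, P_dC = detection_probs((80.9, 85.4, 90.7), sigma=1.0)
print(f"P_d,A={P_dA:.3f}  P_d,B={P_dB:.3f}  P_d,C={P_dC:.3f}")
```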
Using the error and correct-detection probabilities of hypothesis $H_{0}$,
$H_{1}$, and $H_{2}$, one can measure the performance of the mode selection
mechanism. Next, we perform the statistical QoS analysis for HARQ-enabled D2D
communication and observe the impact of mode selection on the analysis.
## IV Effective Capacity Analysis
In our analysis, we consider that $D_{T}$ is unaware of the CSI prior to the
transmission; therefore, it transmits using a fixed transmit power $\bar{P}$
at a fixed rate $r$ (bits/sec). Consequently, for each of the three hypotheses
(direct-D2D, mC-D2D, and MC-D2D modes), the D2D link is considered ON when the
instantaneous channel capacity of the link is greater than the fixed
transmission rate of $D_{T}$; otherwise, the D2D link is considered in the OFF
condition. In summary, due to the mode selection and the non-availability of CSI at the transmitter (CSIT), one can model the D2D link as a Markovian
process. Below, we describe the details of the Markov chain modelling of the
D2D link for the overlay scenario and the underlay scenario, followed by the
EC analysis of HARQ-enabled D2D communication.
### IV-A Markov Chain Modelling of Overlay-D2D
Let us consider $C_{d}(k)$, $C_{{}_{mC}}(k)$, and $C_{{}_{MC}}(k)$ as the
instantaneous channel capacities, during time slot $k$, of the direct-D2D,
mC-D2D, and MC-D2D links, respectively. When $r<C_{d}(k)$, $r<C_{{}_{mC}}(k)$,
and $r<C_{{}_{MC}}(k)$, the direct-D2D, mC-D2D, and MC-D2D links,
respectively, transmit $r$ bits/sec; thus, they are considered as being in the
ON state. On the other hand, when $r>C_{d}(k)$, $r>C_{{}_{mC}}(k)$, and
$r>C_{{}_{MC}}(k)$, the direct-D2D, mC-D2D, and MC-D2D links, respectively,
transmit $0$ bits/sec; thus, they are considered as being in the OFF state.
This leads to the six-state Markovian process shown in Table I.

TABLE I: Markov Chain Representation of the Six States.

| State | Description | Notation | Action |
|---|---|---|---|
| $s_{1}$ | Direct-D2D mode is selected and the link is ON | $H_{0}$ & $r<C_{d}(k)$ | decoding successful at $D_{R}$, $r$ bits received |
| $s_{2}$ | Direct-D2D mode is selected and the link is OFF | $H_{0}$ & $r>C_{d}(k)$ | decoding failure at $D_{R}$, $0$ bits received |
| $s_{3}$ | mC-D2D mode is selected and the link is ON | $H_{1}$ & $r<C_{{}_{mC}}(k)$ | decoding successful at $D_{R}$, $r$ bits received |
| $s_{4}$ | mC-D2D mode is selected and the link is OFF | $H_{1}$ & $r>C_{{}_{mC}}(k)$ | decoding failure at $D_{R}$, $0$ bits received |
| $s_{5}$ | MC-D2D mode is selected and the link is ON | $H_{2}$ & $r<C_{{}_{MC}}(k)$ | decoding successful at $D_{R}$, $r$ bits received |
| $s_{6}$ | MC-D2D mode is selected and the link is OFF | $H_{2}$ & $r>C_{{}_{MC}}(k)$ | decoding failure at $D_{R}$, $0$ bits received |
The instantaneous channel capacity of the direct-D2D link is,
${C^{o}_{d}}(k)=B\log_{2}\bigg{(}1+\frac{\bar{P}Z_{d}(k)}{L_{d}(k)N_{0}}\bigg{)}=B\log_{2}\big{(}1+\gamma_{d}(k)\big{)}$
(6)
where $Z_{d}(k)$ and $L_{d}(k)$ represent the channel coefficient and the pathloss of the direct-D2D link in time slot $k$, respectively, $B$ represents the bandwidth allocated to the transmit D2D node, and $\gamma_{d}(k)$ represents the signal-to-noise ratio (SNR) of the direct-D2D link in time slot $k$. Before finding the instantaneous channel capacity for the mC-D2D link, we note that $BS_{{}_{mC}}$ operates in full-duplex mode. Therefore, to find its channel capacity, we state the following Proposition 1.
###### Proposition 1.
The instantaneous channel capacity of full-duplex enabled mC-D2D link
($D_{T}\to BS_{mC}\to D_{R}$) in overlay settings is,
${C^{o}_{{}_{mC}}}(k)=B\log_{2}\big{(}1+\gamma_{{}_{mC}}(k)\big{)}$
where
$\gamma_{{}_{mC}}(k)=\min\big{\\{}\gamma^{{}^{mC}}_{{}_{ul}}(k),\gamma^{{}^{mC}}_{{}_{dl}}(k)\big{\\}}$
is the net-SNR of the mC-D2D link, and $\gamma^{{}^{mC}}_{{}_{ul}}(k)$ and
$\gamma^{{}^{mC}}_{{}_{dl}}(k)$ are the SNRs of the uplink and the downlink
channels of mC-D2D mode, respectively.
###### Proof.
Given in Appendix B. ∎
Similarly, one can find the instantaneous channel capacity for full-duplex
enabled MC-D2D link ($D_{T}\to BS_{MC}\to D_{R}$) in overlay settings
($C^{o}_{{}_{MC}}(k)$) by following the similar steps given in Appendix B.
Consequently, it turns out to be
$C^{o}_{{}_{MC}}(k)=B\log_{2}\big{(}1+\gamma_{{}_{MC}}(k)\big{)},$ (7)
where
$\gamma_{{}_{MC}}(k)=\min\big{\\{}\gamma^{{}^{MC}}_{{}_{ul}}(k),\gamma^{{}^{MC}}_{{}_{dl}}(k)\big{\\}}$
is the net-SNR of the MC-D2D link, and $\gamma^{{}^{MC}}_{{}_{ul}}(k)$ and
$\gamma^{{}^{MC}}_{{}_{dl}}(k)$ are the SNRs of the uplink and the downlink
channels of MC-D2D mode, respectively.
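Numerically, the capacity of a relayed link is limited by its weaker hop, since per Proposition 1 the net SNR is the minimum of the uplink and downlink SNRs. A small sketch of (7) under illustrative bandwidth and SNR values:

```python
import math

def relay_capacity(B, snr_ul, snr_dl):
    """Instantaneous capacity of a decode-and-forward two-hop link.

    Per Proposition 1, the net SNR of the relayed (mC-D2D / MC-D2D)
    link is the minimum of the uplink and downlink SNRs.
    """
    gamma = min(snr_ul, snr_dl)
    return B * math.log2(1.0 + gamma)

B = 1e6  # Hz (illustrative)
# The downlink (SNR = 3) is the bottleneck: capacity = B * log2(1 + 3).
print(relay_capacity(B, snr_ul=15.0, snr_dl=3.0), "bits/s")
```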
Next, we find the state transition probabilities for states
$s_{1},s_{2},s_{3},s_{4},s_{5}$, and $s_{6}$, as shown in Table I. Let
$p_{i,j}=[\mathbf{P}_{o}]_{i,j}$ be the transition probability from state $i$
to state $j$, with $\mathbf{P}_{o}$ as the transition probability matrix for
overlay-D2D. Due to the block-fading nature of the channel, a state change for the D2D link occurs in every time block. Now, we calculate the state transition probabilities for the Markov chain model (note that the transition probability into each state depends upon two factors: the decision of the mode selection problem and the condition on the transmission rate), starting with the following:
$p_{1,1}=\mathbb{P}\big{\\{}H_{0}(k)\;\&\;r<C^{o}_{d}(k)\big{|}H_{0}(k-1)\;\&\;r<C^{o}_{d}(k-1)\big{\\}}.$
(8)
The condition on the transmission rate can also be translated into a lower bound on the SNR of the transmission link, i.e., the SNR must exceed a minimum required value. This is shown in the following:
$p_{1,1}=\mathbb{P}\big{\\{}H_{0}(k)\;\&\;\gamma_{d}(k)>\gamma_{req}\big{|}H_{0}(k-1)\;\&\;\gamma_{d}(k-1)>\gamma_{req}\big{\\}},$
(9)
where $\gamma_{req}=2^{r/B}-1$. Because the mode selection process is
independent of the fading process $\\{\gamma_{d}\\}_{k}$, we can write:
$p_{1,1}=\mathbb{P}\big{\\{}H_{0}(k)\big{|}H_{0}(k-1)\big{\\}}\mathbb{P}\big{\\{}\gamma_{d}(k)>\gamma_{req}\big{|}\gamma_{d}(k-1)>\gamma_{req}\big{\\}}.$
(10)
Moreover, we note that the fading process $\\{\gamma_{d}\\}_{k}$ as well as
the mode selection process are memoryless (because these processes change
independently between time slots). Specifically,
$\mathbb{P}(H_{0}(k)|H_{y}(k-1))=\mathbb{P}(H_{0}(k))$ for $y\in\\{0,1,2\\}$,
and $\mathbb{P}(\gamma_{d}(k)|\gamma_{d}(k-1))=\mathbb{P}(\gamma_{d}(k))$.
Therefore,
$p_{1,1}=\mathbb{P}\big{(}H_{0}(k)\big{)}\mathbb{P}\big{(}\gamma_{d}(k)>\gamma_{req}\big{)},$
(11)
where
$\mathbb{P}(H_{0}(k))=\mathbb{P}(H_{0}|H_{0})+\mathbb{P}(H_{0}|H_{1})+\mathbb{P}(H_{0}|H_{2})$,
and $\mathbb{P}(H_{0}|H_{0})=P_{{}_{d,H_{0}}}$. Because the SNR
$\gamma_{d}(k)$ is exponentially distributed,
$\mathbb{P}(\gamma_{d}(k)>\gamma_{req})=1-\mathbb{P}(\gamma_{d}(k)<\gamma_{req})=e^{-\gamma_{req}/\operatorname{\mathbb{E}}(\gamma_{d}(k))}$,
where $\operatorname{\mathbb{E}}(\gamma_{d}(k))=\frac{\bar{P}}{L_{d}N_{0}}$.
Now, one can see that the transition probability $p_{1,1}$ does not depend on
the original state. Therefore, $p_{i,1}=p_{1}$. Similarly,
$\begin{split}p_{i,2}&=p_{2}=\mathbb{P}\big{(}H_{0}(k)\big{)}\mathbb{P}\big{(}\gamma_{d}(k)<\gamma_{req}\big{)}\\\
p_{i,3}&=p_{3}=\mathbb{P}\big{(}H_{1}(k)\big{)}\mathbb{P}\big{(}\gamma_{{}_{mC}}(k)>\gamma_{req}\big{)}\\\
p_{i,4}&=p_{4}=\mathbb{P}\big{(}H_{1}(k)\big{)}\mathbb{P}\big{(}\gamma_{{}_{mC}}(k)<\gamma_{req}\big{)}\\\
p_{i,5}&=p_{5}=\mathbb{P}\big{(}H_{2}(k)\big{)}\mathbb{P}\big{(}\gamma_{{}_{MC}}(k)>\gamma_{req}\big{)}\\\
p_{i,6}&=p_{6}=\mathbb{P}\big{(}H_{2}(k)\big{)}\mathbb{P}\big{(}\gamma_{{}_{MC}}(k)<\gamma_{req}\big{)},\end{split}$
(12)
where
$\mathbb{P}(\gamma_{d}(k)<\gamma_{req})=1-e^{-\gamma_{req}/\operatorname{\mathbb{E}}(\gamma_{d}(k))}$.
$\mathbb{P}(H_{1}(k))=\mathbb{P}(H_{1}|H_{0})+\mathbb{P}(H_{1}|H_{1})+\mathbb{P}(H_{1}|H_{2})$,
where $\mathbb{P}(H_{1}|H_{1})=P_{{}_{d,H_{1}}}$. Similarly,
$\mathbb{P}(H_{2}(k))=\mathbb{P}(H_{2}|H_{0})+\mathbb{P}(H_{2}|H_{1})+\mathbb{P}(H_{2}|H_{2})$,
where $\mathbb{P}(H_{2}|H_{2})=P_{{}_{d,H_{2}}}$. Note that
$\gamma_{{}_{mC}}(k)$ and $\gamma_{{}_{MC}}(k)$ are also exponentially
distributed random variables (R.V.) (because the minimum of two exponentially
distributed R.V.s is also an exponential R.V.). Thus,
$\mathbb{P}(\gamma_{{}_{mC}}(k)>\gamma_{req})=e^{-\gamma_{req}/\operatorname{\mathbb{E}}(\gamma_{{}_{mC}}(k))}$,
where
$\operatorname{\mathbb{E}}(\gamma_{{}_{mC}}(k))=\frac{\operatorname{\mathbb{E}}[\gamma^{{}^{mC}}_{{}_{ul}}]\operatorname{\mathbb{E}}[\gamma^{{}^{mC}}_{{}_{dl}}]}{\operatorname{\mathbb{E}}[\gamma^{{}^{mC}}_{{}_{ul}}]+\operatorname{\mathbb{E}}[\gamma^{{}^{mC}}_{{}_{dl}}]}$,
with
$\operatorname{\mathbb{E}}[\gamma^{{}^{mC}}_{{}_{ul}}]=\frac{\bar{P}}{1+\bar{\alpha}\bar{P}_{{}_{mC}}^{\beta}}$
and
$\operatorname{\mathbb{E}}[\gamma^{{}^{mC}}_{{}_{dl}}]=\frac{\bar{P}_{{}_{mC}}}{L^{{}^{mC}}_{{}_{dl}}(k)N_{0}}$.
Finally,
$\mathbb{P}(\gamma_{{}_{mC}}(k)<\gamma_{req})=1-e^{-\gamma_{req}/\operatorname{\mathbb{E}}(\gamma_{{}_{mC}}(k))}$.
Similarly, one can find $\mathbb{P}(\gamma_{{}_{MC}}(k)>\gamma_{req})$ and
$\mathbb{P}(\gamma_{{}_{MC}}(k)<\gamma_{req})$ using the same framework; they
turn out to be
$e^{-\gamma_{req}/\operatorname{\mathbb{E}}(\gamma_{{}_{MC}}(k))}$ and
$1-e^{-\gamma_{req}/\operatorname{\mathbb{E}}(\gamma_{{}_{MC}}(k))}$,
respectively. With this, each row of $\mathbf{P}_{o}$ becomes:
$\mathbf{p}_{o,i}=[p_{o,1},p_{o,2},p_{o,3},p_{o,4},p_{o,5},p_{o,6}]$. Note
that, due to identical rows, $\mathbf{P}_{o}$ has rank 1.
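The six overlay transition probabilities and the rank-1 structure of $\mathbf{P}_{o}$ can be checked numerically. A minimal sketch with placeholder mode probabilities and mean SNRs (all values are illustrative, not from the paper):

```python
import numpy as np

# Placeholder mode probabilities P(H0), P(H1), P(H2) (must sum to 1)
# and mean SNRs E[gamma] per mode -- illustrative values only.
P_H = np.array([0.5, 0.3, 0.2])
mean_snr = np.array([8.0, 5.0, 6.0])  # E[gamma_d], E[gamma_mC], E[gamma_MC]
gamma_req = 2.0

# For an exponentially distributed SNR: P(gamma > g) = exp(-g / E[gamma]).
p_on = np.exp(-gamma_req / mean_snr)

# p1..p6 from (11)-(12): ON/OFF probability pair for each mode.
p = np.empty(6)
p[0::2] = P_H * p_on        # p1, p3, p5: the link supports gamma_req
p[1::2] = P_H * (1 - p_on)  # p2, p4, p6: outage

# Identical rows give the rank-1 transition matrix P_o.
P_o = np.tile(p, (6, 1))
```

Each row sums to one because the ON/OFF pairs partition each mode's probability mass.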
###### Remark.
The mC-D2D and MC-D2D modes transfer data from $D_{T}$ to $D_{R}$ using a two-
hop communication link. This implies two queues in the network; one at $D_{T}$
and the other at the BS. However, this work assumes that both BSs
($BS_{{}_{mC}}$ and $BS_{{}_{MC}}$) have infinite-sized queues and perfect
CSI ($Z_{dl}^{{}^{mC}}$ and $Z_{dl}^{{}^{MC}}$), and that their average
transmit powers ($\bar{P}_{{}_{mC}}$ and $\bar{P}_{{}_{MC}}$) are greater
than the average transmit power of $D_{T}$ ($\bar{P}$). Therefore, queue
overflow does not occur at either BS.
### IV-B Markov Chain Modelling of Underlay-D2D
In the underlay-D2D scenario, $D_{T}$ and $D_{R}$ reuse the cellular user’s
resources; hence, they experience interference from $U_{T}$. Therefore, to
compute the channel capacities $C^{u}_{d}(k)$, $C^{u}_{{}_{mC}}(k)$, and
$C^{u}_{{}_{MC}}(k)$, we calculate the signal-to-interference-and-noise ratio
(SINR) in each communication mode, which is defined as $\Gamma_{d}(k)$,
$\Gamma_{{}_{mC}}(k)$, and $\Gamma_{{}_{MC}}(k)$. The SINR for the direct-D2D
mode can be calculated as:
$\Gamma_{d}(k)=\frac{\bar{P}Z_{d}(k)/L_{d}}{I_{d}+N_{0}}$, where
$I_{d}=\frac{\bar{P}_{U_{T}}Z_{U_{T},D_{R}}}{L_{U_{T},D_{R}}}$.
Here, $\bar{P}_{U_{T}}$ is the average transmit power of $U_{T}$, and
$L_{U_{T},D_{R}}$ and $Z_{U_{T},D_{R}}$ are the pathloss and channel
coefficient between $U_{T}$ and $D_{R}$, respectively. The SINRs of the UL and DL of the mC-D2D mode can
be written as
$\Gamma^{{}^{mC}}_{{}_{ul}}(k)=\frac{\bar{P}Z^{{}^{mC}}_{{}_{ul}}(k)/L^{{}^{mC}}_{{}_{ul}}}{I^{{}^{mC}}_{{}_{ul}}+N_{0}+\alpha\bar{P}_{{}_{mC}}^{\beta}}$
and
$\Gamma^{{}^{mC}}_{{}_{dl}}(k)=\frac{\bar{P}_{{}_{mC}}Z^{{}^{mC}}_{{}_{dl}}(k)/L^{{}^{mC}}_{{}_{dl}}(k)}{I_{d}+N_{0}}$,
where
$I^{{}^{mC}}_{{}_{ul}}=\frac{\bar{P}_{U_{T}}Z_{U_{T},mC}(k)}{L_{U_{T},mC}}$.
Here, $Z_{U_{T},mC}(k)$ and $L_{U_{T},mC}(k)$ represent the channel
coefficient and pathloss between $U_{T}$ and $BS_{{}_{mC}}$ in time slot $k$,
respectively. Similarly, the SINRs on UL and DL in MC-D2D mode are
$\Gamma^{{}^{MC}}_{{}_{ul}}(k)=\frac{\bar{P}Z^{{}^{MC}}_{{}_{ul}}(k)/L^{{}^{MC}}_{{}_{ul}}}{I^{{}^{MC}}_{{}_{ul}}+N_{0}+\alpha\bar{P}_{{}_{MC}}^{\beta}}$
and
$\Gamma^{{}^{MC}}_{{}_{dl}}(k)=\frac{\bar{P}_{{}_{MC}}Z^{{}^{MC}}_{{}_{dl}}(k)/L^{{}^{MC}}_{{}_{dl}}(k)}{I_{d}+N_{0}}$,
respectively, where
$I^{{}^{MC}}_{{}_{ul}}=\frac{\bar{P}_{U_{T}}Z_{U_{T},MC}(k)}{L_{U_{T},MC}}$.
Note that the underlay scenario requires re-computation of the six
probabilities given in (11) and (12). To do so, we consider an
interference-limited scenario whereby, by neglecting noise, we obtain
signal-to-interference ratio (SIR)
expressions for all three communication modes. For the case of direct-D2D
mode, the SIR expression would be: $\Upsilon_{d}=\frac{\Psi_{d}}{I_{d}}$,
where $\Psi_{d}=\frac{\bar{P}Z_{d}(k)}{L_{d}}$. Observe that
$\Psi_{d}\sim\exp(\frac{L_{d}}{\bar{P}})$ and
$I_{d}\sim\exp(\frac{L_{U_{T},D_{R}}}{\bar{P_{U_{T}}}})$. Then, the outage
probability for the direct-D2D mode becomes
$\begin{split}\mathbb{P}(\Upsilon_{d}(k)<\gamma_{req})&=\frac{\gamma_{{}_{req}}\,L_{d}/\bar{P}}{L_{d}/\bar{P}+L_{U_{T},D_{R}}/\bar{P}_{U_{T}}}\\\
&=\frac{L_{d}\gamma_{{}_{req}}\bar{P}_{U_{T}}}{L_{d}\bar{P}_{U_{T}}+L_{U_{T},D_{R}}\bar{P}}.\end{split}$
(13)
Similarly, the complementary probability $\mathbb{P}(\Upsilon_{d}(k)>\gamma_{req})$ becomes
$\mathbb{P}(\Upsilon_{d}(k)>\gamma_{req})=1-\frac{L_{d}\gamma_{{}_{req}}P_{U_{T}}}{L_{d}P_{U_{T}}+L_{U_{T},D_{R}}\bar{P}}.$
(14)
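The SIR behaviour underlying (13)–(14), a ratio of two independent exponential variates, can be sanity-checked by Monte-Carlo simulation. A sketch with illustrative pathloss and power values (placeholders, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Illustrative pathlosses and average powers (linear scale) -- these
# numbers are placeholders, not values from the paper.
L_d, L_utdr = 4.0, 2.0   # L_d and L_{U_T, D_R}
P_bar, P_ut = 1.0, 1.0   # \bar{P} and \bar{P}_{U_T}

# Psi_d ~ Exp with mean P_bar / L_d; I_d ~ Exp with mean P_ut / L_utdr.
psi = rng.exponential(scale=P_bar / L_d, size=n)
intf = rng.exponential(scale=P_ut / L_utdr, size=n)
sir = psi / intf

# Empirical outage P(SIR < gamma_req): a valid probability that must be
# non-decreasing in the threshold gamma_req.
thresholds = [0.5, 1.0, 2.0]
outage = [float(np.mean(sir < g)) for g in thresholds]
```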
The probabilities in (13) and (14) allow us to compute $p_{u,1}$ and
$p_{u,2}$. Now, for the mC-D2D mode, let
$\Upsilon_{{}_{mC}}=\min\\{\Upsilon^{{}^{mC}}_{{}_{ul}},\Upsilon^{{}^{mC}}_{{}_{dl}}\\}$,
where
$\Upsilon^{{}^{mC}}_{{}_{ul}}=\Psi_{{}_{ul}}^{{}^{mC}}/I_{{}_{ul}}^{{}^{mC}}$
and $\Upsilon^{{}^{mC}}_{{}_{dl}}=\Psi_{{}_{dl}}^{{}^{mC}}/I_{d}$ are the SIR
expressions for $D_{T}\to BS_{{}_{mC}}$ and $BS_{{}_{mC}}\to D_{R}$ links,
respectively, and where
$\Psi_{{}_{ul}}^{{}^{mC}}=\bar{P}Z_{{}_{ul}}^{{}^{mC}}/L_{{}_{ul}}^{{}^{mC}}$
and
$\Psi_{{}_{dl}}^{{}^{mC}}=\bar{P}_{{}_{mC}}Z_{{}_{dl}}^{{}^{mC}}/L_{{}_{dl}}^{{}^{mC}}$.
Also, observe that
$\Psi_{{}_{ul}}^{{}^{mC}}\sim\exp(L_{{}_{ul}}^{{}^{mC}}/\bar{P})$,
$\Psi_{{}_{dl}}^{{}^{mC}}\sim\exp(L_{{}_{dl}}^{{}^{mC}}/\bar{P}_{{}_{mC}})$, and
$I_{{}_{ul}}^{{}^{mC}}\sim\exp(L_{{}_{U_{T},mC}}/\bar{P}_{{}_{mC}})$. Because
$\Upsilon_{{}_{ul}}^{{}^{mC}}$ and $\Upsilon_{{}_{dl}}^{{}^{mC}}$ are
independent R.V., the outage probability for mC-D2D mode becomes
$\begin{split}&\mathbb{P}(\Upsilon_{{}_{mC}}(k)<\gamma_{{}_{req}})\\\
&=\frac{\gamma_{{}_{req}}\,L_{{}_{ul}}^{{}^{mC}}/\bar{P}}{L_{{}_{ul}}^{{}^{mC}}/\bar{P}+L_{{}_{dl}}^{{}^{mC}}/\bar{P}_{{}_{mC}}}+\frac{\gamma_{{}_{req}}\,L_{{}_{U_{T},mC}}/\bar{P}_{{}_{mC}}}{L_{{}_{U_{T},mC}}/\bar{P}_{{}_{mC}}+L_{U_{T},D_{R}}/\bar{P}_{U_{T}}}\\\
&-\frac{\gamma_{{}_{req}}\,L_{{}_{ul}}^{{}^{mC}}/\bar{P}}{L_{{}_{ul}}^{{}^{mC}}/\bar{P}+L_{{}_{dl}}^{{}^{mC}}/\bar{P}_{{}_{mC}}}\times\frac{\gamma_{{}_{req}}\,L_{{}_{U_{T},mC}}/\bar{P}_{{}_{mC}}}{L_{{}_{U_{T},mC}}/\bar{P}_{{}_{mC}}+L_{U_{T},D_{R}}/\bar{P}_{U_{T}}}\\\
&=\frac{\gamma_{{}_{req}}\big{[}L^{{}^{mC}}_{{}_{ul}}\bar{P}_{{}_{mC}}(-\gamma_{{}_{req}}\bar{P}_{U_{T}}+\bar{P}_{{}_{mC}}+2\bar{P}_{U_{T}})+L_{{}_{dl}}^{{}^{mC}}\bar{P}\bar{P}_{U_{T}}\big{]}}{(\bar{P}_{U_{T}}+\bar{P}_{{}_{mC}})(L_{{}_{dl}}^{{}^{mC}}\bar{P}+L_{{}_{ul}}^{{}^{mC}}\bar{P}_{{}_{mC}})}.\end{split}$
(15)
Similarly, the probability
$\mathbb{P}(\Upsilon_{{}_{mC}}(k)>\gamma_{{}_{req}})$ becomes
$\begin{split}&\mathbb{P}(\Upsilon_{{}_{mC}}(k)>\gamma_{{}_{req}})\\\
&=1-\frac{\gamma_{{}_{req}}\big{[}L^{{}^{mC}}_{{}_{ul}}\bar{P}_{{}_{mC}}(-\gamma_{{}_{req}}\bar{P}_{U_{T}}+\bar{P}_{{}_{mC}}+2\bar{P}_{U_{T}})+L_{{}_{dl}}^{{}^{mC}}\bar{P}\bar{P}_{U_{T}}\big{]}}{(\bar{P}_{U_{T}}+\bar{P}_{{}_{mC}})(L_{{}_{dl}}^{{}^{mC}}\bar{P}+L_{{}_{ul}}^{{}^{mC}}\bar{P}_{{}_{mC}})}.\end{split}$
(16)
The probabilities in (15) and (16) allow us to compute $p_{u,3}$ and
$p_{u,4}$. For the MC-D2D mode, let
$\Upsilon_{{}_{MC}}=\min\\{\Upsilon^{{}^{MC}}_{{}_{ul}},\Upsilon^{{}^{MC}}_{{}_{dl}}\\}$,
where
$\Upsilon^{{}^{MC}}_{{}_{ul}}=\Psi_{{}_{ul}}^{{}^{MC}}/I_{{}_{ul}}^{{}^{MC}}$
and $\Upsilon^{{}^{MC}}_{{}_{dl}}=\Psi_{{}_{dl}}^{{}^{MC}}/I_{d}$ are the SIR
expressions for $D_{T}\to BS_{{}_{MC}}$ and $BS_{{}_{MC}}\to D_{R}$ links,
respectively. Here,
$\Psi_{{}_{ul}}^{{}^{MC}}=\bar{P}Z_{{}_{ul}}^{{}^{MC}}/L_{{}_{ul}}^{{}^{MC}}$
and
$\Psi_{{}_{dl}}^{{}^{MC}}=\bar{P}_{{}_{MC}}Z_{{}_{dl}}^{{}^{MC}}/L_{{}_{dl}}^{{}^{MC}}$.
Also, observe that
$\Psi_{{}_{ul}}^{{}^{MC}}\sim\exp(L_{{}_{ul}}^{{}^{MC}}/\bar{P})$,
$\Psi_{{}_{dl}}^{{}^{MC}}\sim\exp(L_{{}_{dl}}^{{}^{MC}}/\bar{P}_{{}_{MC}})$, and
$I_{{}_{ul}}^{{}^{MC}}\sim\exp(L_{{}_{U_{T},MC}}/\bar{P}_{{}_{MC}})$. Similar
to the case of the mC-D2D mode, $\Upsilon^{{}^{MC}}_{{}_{ul}}$ and
$\Upsilon^{{}^{MC}}_{{}_{dl}}$ are independent random variables; therefore,
$\mathbb{P}(\Upsilon_{{}_{MC}}(k)<\gamma_{{}_{req}})$ and
$\mathbb{P}(\Upsilon_{{}_{MC}}(k)>\gamma_{{}_{req}})$ can be calculated by
substituting $L_{{}_{ul}}^{{}^{MC}}$, $L_{{}_{dl}}^{{}^{MC}}$, and
$\bar{P}_{{}_{MC}}$ into (15) and (16), respectively. These probabilities will
then allow us to compute $p_{u,5}$ and $p_{u,6}$. By using the probabilities
found above, we can find the state transition probability matrix for underlay
D2D ($\mathbf{P}_{u}$). Similar to $\mathbf{P}_{o}$, $\mathbf{P}_{u}$ is also
of rank 1, with each row
$\mathbf{p}_{u,i}=[p_{u,1},p_{u,2},p_{u,3},p_{u,4},p_{u,5},p_{u,6}]$.
### IV-C Effective Capacity of HARQ-enabled D2D
In our analysis, we use HARQ for retransmission of the packet. In HARQ, each
data packet is encoded into $M$ codeword blocks, where $M$ defines the maximum
number of allowed retransmissions of a packet and is adjustable
according to the reliability and delay requirements of the system [34]. Let us
consider a transmission period $T$ containing $M$ codewords/fading blocks,
with $l$ as the size of each fading block. In each transmission period, a
codeword is transmitted; if $D_{R}$ decodes the codeword successfully, it
sends an ACK, and the transmission period ends. Contrarily, if decoding fails
at $D_{R}$, a NACK is sent to $D_{T}$; then, $D_{T}$ retransmits the packet
with a new set of parity bits (codeword). This process continues until the
packet is decoded successfully at $D_{R}$ or until the maximum limit of the
retransmissions ($M$) is reached. Note that in HARQ, when $D_{R}$ decodes the
received packet at the $m^{\text{th}}$ retransmission attempt (using $m$
codewords), it means that $m-1$ trials have finished
unsuccessfully. If $D_{R}$ fails to decode a packet on the $M^{\text{th}}$
retransmission attempt, an outage occurs. At that point, $D_{T}$ has two
options: either delete that packet from the queue or reduce the priority of
that packet and transmit the next packet with the highest priority. In the
second option, the failed packet will then be transmitted when its priority
becomes highest. We model this scenario with two queue models. In
model 1 ($n_{1}$), if a packet is not successfully decoded by $D_{R}$ even
after the deadline occurs ($M$ unsuccessful attempts), then the
packet’s priority is reduced and the packet possessing the highest priority is
transmitted in the following transmission period. In model 2 ($n_{2}$), the
packet is deleted from $D_{T}$’s queue if not successfully decoded by $D_{R}$
after $M$ retransmission attempts.
The EC of HARQ-enabled D2D communication under the assumption of constant
arrival ($a$) and transmission rates ($r$), given the QoS exponent $\theta$
and the specified retransmission constraint $M$, is given as follows [16],
$EC^{{}^{\text{HARQ}}}_{n_{j}}=\frac{-1}{\theta}\log_{e}(\lambda_{n_{j}}^{+}),$
(17)
where $\lambda_{n_{j}}^{+}=$
$\max\\{|\lambda_{1,n_{j}}|,|\lambda_{2,n_{j}}|,\dots,|\lambda_{M,n_{j}}|\\}$
is the spectral radius of $\mathbf{B}_{n_{j}}$ and $j\in\\{1,2\\}$.
$\mathbf{B}_{n_{j}}$ is a block-companion matrix of size $M\times M$ and is
defined as,
$\mathbf{B}_{n_{j}}=\begin{bmatrix}b_{1,n_{j}}&b_{2,n_{j}}&\dots&b_{M-1,n_{j}}&b_{M,n_{j}}\\\
1&0&\dots&0&0\\\ 0&1&\dots&0&0\\\ \vdots&\vdots&\ddots&\vdots&\vdots\\\
0&0&\dots&1&0\end{bmatrix}.$ (18)
To find the entries of the matrix $\mathbf{B}_{n_{j}}$, first we have to find
the decoding error and successful decoding probabilities at $D_{R}$ in each
queue model. According to the finite block length coding rate model [35], the
decoding error probability of the $m^{\text{th}}$ transmission attempt in
direct-D2D mode ($\zeta^{d}_{m}(Z)$), mC-D2D mode ($\zeta^{{}^{mC}}_{m}(Z)$),
and MC-D2D mode ($\zeta^{{}^{MC}}_{m}(Z)$) can be written as [17]
$\displaystyle\zeta^{d}_{m}(Z)$
$\displaystyle=Q\bigg{(}\frac{\sum\limits_{k=1}^{m}\log_{2}(1+\gamma_{d}(k))+{\log(ml)/l}-r}{\log_{2}e\sqrt{\sum\limits_{k=1}^{m}\frac{(2+\gamma_{d}(k))\gamma_{d}(k)}{l(\gamma_{d}(k)+1)^{2}}}}\bigg{)}$
(19a) $\displaystyle\zeta^{{}^{mC}}_{m}(Z)$
$\displaystyle=Q\bigg{(}\frac{\sum\limits_{k=1}^{m}\log_{2}(1+\gamma_{{}_{mC}}(k))+{\log(ml)/l}-r}{\log_{2}e\sqrt{\sum\limits_{k=1}^{m}\frac{(2+\gamma_{{}_{mC}}(k))\gamma_{{}_{mC}}(k)}{l(\gamma_{{}_{mC}}(k)+1)^{2}}}}\bigg{)}$
(19b) $\displaystyle\zeta^{{}^{MC}}_{m}(Z)$
$\displaystyle=Q\bigg{(}\frac{\sum\limits_{k=1}^{m}\log_{2}(1+\gamma_{{}_{MC}}(k))+{\log(ml)/l}-r}{\log_{2}e\sqrt{\sum\limits_{k=1}^{m}\frac{(2+\gamma_{{}_{MC}}(k))\gamma_{{}_{MC}}(k)}{l(\gamma_{{}_{MC}}(k)+1)^{2}}}}\bigg{)}.$
(19c)
Here, $\gamma_{d}(k)$, $\gamma_{{}_{mC}}(k)$, and $\gamma_{{}_{MC}}(k)$ are
the SNRs of the direct-D2D, mC-D2D, and MC-D2D modes, respectively. Let us
define $P_{t,\nu,n_{j}}$ as the probability that $\nu$ packets are removed
from $D_{T}$’s queue under queue model $j$ in time period $t$.
We know from the deadline constraint that $1\leq t\leq M$ and
$\nu\in\\{0,1\\}$ (considering that only one packet is being transmitted in
one transmission period).
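The normal-approximation decoding-error expression (19) can be evaluated directly. A minimal sketch following its structure, assuming a Gaussian Q-function via `erfc` and illustrative SNR, rate, and blocklength values (not taken from the paper):

```python
import math

def Q(x: float) -> float:
    # Gaussian tail probability Q(x) = P(N(0,1) > x).
    return 0.5 * math.erfc(x / math.sqrt(2))

def decoding_error(snrs, r, l):
    """Normal-approximation decoding error after m = len(snrs) HARQ
    rounds, following the structure of (19); inputs are illustrative."""
    m = len(snrs)
    cap = sum(math.log2(1 + g) for g in snrs)                   # accumulated rate
    disp = sum((2 + g) * g / (l * (1 + g) ** 2) for g in snrs)  # dispersion sum
    num = cap + math.log(m * l) / l - r
    den = math.log2(math.e) * math.sqrt(disp)
    return Q(num / den)

# Accumulating a second round at the same SNR must reduce the error.
e1 = decoding_error([2.0], r=2.0, l=100)
e2 = decoding_error([2.0, 2.0], r=2.0, l=100)
```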
#### Queue Model 1 $(n_{1})$
In $n_{1}$, $\nu=0$ when outage occurs ($t=M$); therefore, we can say that
$P_{t,0,n_{1}}$ is the probability that no successful decoding happens at
$D_{R}$ when $M$ is reached. Contrarily, $P_{t,1,n_{1}}$ represents the
probability that a transmission period ended successfully in the
$t^{\text{th}}$ time block. From here, we have the following:
$P_{t,0,n_{1}}=\begin{cases}\left.\begin{aligned} 0,\quad&t<M\\\
\varepsilon_{d},\quad&t=M\end{aligned}\;\right\\}\quad\text{direct-D2D
mode}\\\ \left.\begin{aligned} 0,\quad&t<M\\\
\varepsilon_{{}_{mC}},\quad&t=M\end{aligned}\;\right\\}\quad\text{mC-D2D
mode}\\\ \left.\begin{aligned} 0,\quad&t<M\\\
\varepsilon_{{}_{MC}},\quad&t=M\end{aligned}\;\right\\}\quad\text{MC-D2D
mode}\end{cases}$ (20)
where $\varepsilon_{d}$, $\varepsilon_{{}_{mC}}$, and $\varepsilon_{{}_{MC}}$
are the outage probabilities in direct-D2D, mC-D2D, and MC-D2D modes,
respectively. These outage probabilities can be defined as
$\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{d}}_{M}]$,
$\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{mC}}_{M}]$, and
$\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{MC}}_{M}]$, respectively. The
probability that $D_{R}$ successfully decodes the packet in $t^{\text{th}}$
time block is equal to the probability of $D_{R}$ decoding the packet within
$t$ time blocks minus the probability of $D_{R}$ decoding the packet within
$t-1$ time blocks. Therefore, $P_{t,1,n_{1}}$ can be defined as
$P_{t,1,n_{1}}=\begin{cases}\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{d}}_{t-1}]-\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{d}}_{t}],&\text{direct-D2D
mode}\\\
\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{mC}}_{t-1}]-\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{mC}}_{t}],&\text{mC-D2D
mode}\\\
\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{MC}}_{t-1}]-\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{MC}}_{t}],&\text{MC-D2D
mode}\end{cases}.$ (21)
#### Queue Model 2 $(n_{2})$
In $n_{2}$, $\nu=1$ because a packet surely leaves $D_{T}$’s queue at the
end of each transmission period: either it is successfully decoded at
$D_{R}$, or it is dropped from $D_{T}$’s queue when $M$ is reached. In the
$n_{2}$ model, $t<M$ corresponds
to the successful transmission of the packet, as it also did in the $n_{1}$
model. In the $n_{2}$ model, $t=M$ corresponds to two cases. The first case is
when $D_{R}$ decodes the packet successfully in the $M^{\text{th}}$ time
block. The second case is when an outage occurs, consequently dropping the
packet from $D_{T}$’s queue. Therefore, we have the following cases
$P_{t,1,n_{2}}=\begin{cases}\left.\begin{aligned}
\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{d}}_{t-1}]-\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{d}}_{t}],\quad&t<M\\\
\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{d}}_{M-1}],\quad&t=M\end{aligned}\;\right\\}\quad\text{direct-D2D
mode}\\\ \left.\begin{aligned}
\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{mC}}_{t-1}]-\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{mC}}_{t}],\quad&t<M\\\
\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{mC}}_{M-1}],\quad&t=M\end{aligned}\;\right\\}\quad\text{mC-D2D
mode}\\\ \left.\begin{aligned}
\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{MC}}_{t-1}]-\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{MC}}_{t}],\quad&t<M\\\
\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{MC}}_{M-1}],\quad&t=M\end{aligned}\;\right\\}\quad\text{MC-D2D
mode}\\\ \end{cases}$ (22)
For the case of $t=M$, we use
$P_{t,1,n_{2}}=\operatorname{\mathbb{E}}_{z}[\zeta_{M-1}]-\operatorname{\mathbb{E}}_{z}[\zeta_{M}]+\varepsilon$,
where $\operatorname{\mathbb{E}}_{z}[\zeta_{M}]=\varepsilon$.
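A quick sanity check on the removal probabilities of both queue models: over one transmission period, (20)–(22) must telescope to one. A sketch with a stub, monotonically decreasing error sequence (illustrative numbers only):

```python
# Stub expected error probabilities E_z[zeta_t] for t = 0..M, with
# zeta_0 = 1 (nothing is decoded before any transmission) and a
# monotonically decreasing tail -- illustrative numbers only.
M = 4
zeta = [1.0, 0.6, 0.3, 0.12, 0.05]

# Queue model n1: success in block t has probability zeta[t-1] - zeta[t];
# the outage event (priority reduced, nu = 0) has probability zeta[M].
total_n1 = sum(zeta[t - 1] - zeta[t] for t in range(1, M + 1)) + zeta[M]

# Queue model n2: for t < M as above; at t = M the packet leaves the
# queue either way, so the removal probability is zeta[M-1].
total_n2 = sum(zeta[t - 1] - zeta[t] for t in range(1, M)) + zeta[M - 1]
```

Both sums telescope to exactly one, confirming that (20)–(22) define valid probability distributions over a transmission period.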
Now, to find the entries of the block companion matrix $\mathbf{B}_{n_{j}}$,
we utilize the results from (20), (21), and (22); consequently, we obtain the
following
$b_{k,n_{j}}=\begin{cases}\mathbf{q}_{{}_{1}}\mathbf{\Phi}(-\theta)\mathbf{p}_{i},&k=1\\\
\mathbf{q}_{{}_{2}}\mathbf{\Phi}(-\theta)\mathbf{p}_{i},&2\leq k\leq M-1\\\
\mathbf{q}_{{}_{3}}\mathbf{\Phi}(-\theta)\mathbf{p}_{i}+\varepsilon_{{}_{ac}},&k=M\;\text{and}\;j=1\\\
\mathbf{q}_{{}_{4}}\mathbf{\Phi}(-\theta)\mathbf{p}_{i},&k=M\;\text{and}\;j=2,\end{cases}$
(23)
where
$\mathbf{q}_{{}_{1}}=\big{[}1-\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{d}}_{1}],1,1-\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{mC}}_{1}],1,1-\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{MC}}_{1}],1\big{]}$,
$\mathbf{q}_{{}_{2}}=\big{[}\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{d}}_{k-1}]-\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{d}}_{k}],1,\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{mC}}_{k-1}]-\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{mC}}_{k}],1,\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{MC}}_{k-1}]-\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{MC}}_{k}],1\big{]}$,
$\mathbf{q}_{{}_{3}}=\big{[}\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{d}}_{M-1}]-\varepsilon_{d},1,\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{mC}}_{M-1}]-\varepsilon_{{}_{mC}},1,\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{MC}}_{M-1}]-\varepsilon_{{}_{MC}},1\big{]}$,
and
$\mathbf{q}_{{}_{4}}=\big{[}\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{d}}_{M-1}],1,\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{mC}}_{M-1}],1,\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{MC}}_{M-1}],1\big{]}$.
$\varepsilon_{{}_{ac}}=\varepsilon_{d}+\varepsilon_{{}_{mC}}+\varepsilon_{{}_{MC}}$
is the accumulative outage probability, $\mathbf{p}_{i}$ is a vector
containing all the state transition probabilities (due to unit rank),
$\mathbf{p}_{i}=[p_{1},p_{2},p_{3},p_{4},p_{5},p_{6}]$, and
$\mathbf{\Phi}(\theta)$ is the diagonal matrix containing the MGF of the
processes in the six states ($s_{1}$, $s_{2}$, $s_{3}$, $s_{4}$, $s_{5}$,
$s_{6}$). Because $S(k)=r$ for states $s_{1}$, $s_{3}$, and $s_{5}$ (ON
states) and $S(k)=0$ for states $s_{2}$, $s_{4}$, and $s_{6}$ (OFF states),
the MGFs for states $s_{1}$, $s_{2}$, $s_{3}$, $s_{4}$, $s_{5}$,
and $s_{6}$ become $e^{lr\theta}$, 1, $e^{lr\theta}$, 1, $e^{lr\theta}$, and
1, respectively. Thus, $\mathbf{\Phi}(-\theta)$ can be expressed as
$\mathbf{\Phi}(-\theta)=\text{diag}[e^{-lr\theta},1,e^{-lr\theta},1,e^{-lr\theta},1]$.
By substituting these values in (23) and by setting a limit on the packet
retransmissions, one can find entries of the block companion matrix
$\mathbf{B}_{n_{j}}$. Further, by calculating the spectral radius of $\mathbf{B}_{n_{j}}$, one
can find the EC of HARQ-enabled D2D communication for both queue models. Next,
we investigate a special case of HARQ by adjusting the retransmission limit to
2, and provide its statistical QoS analysis.
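The pipeline (23) → (18) → (17) can be sketched numerically. In the sketch below, the entries of `b` are stubs standing in for the $b_{k,n_{j}}$ of (23), since evaluating them requires the transition probabilities and decoding-error expectations:

```python
import numpy as np

# Illustrative inputs -- the entries of b stand in for the b_{k,n_j}
# of (23), which would require the transition probabilities and
# decoding-error expectations to evaluate.
M, theta = 4, 0.05
b = np.array([0.55, 0.2, 0.12, 0.08])

# Block-companion matrix (18): first row b, identity below it.
B = np.zeros((M, M))
B[0, :] = b
B[1:, :-1] = np.eye(M - 1)

# Spectral radius, then the EC from (17).
lam = float(np.max(np.abs(np.linalg.eigvals(B))))
EC = -np.log(lam) / theta
```

With the first-row entries summing to less than one, the spectral radius stays below one and the resulting EC is positive.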
Figure 3: Flow diagram of truncated HARQ-enabled D2D communication.
### IV-D Effective Capacity of Truncated HARQ-enabled D2D
In this subsection, we discuss a special case of HARQ (truncated HARQ [28])
and also provide a closed-form expression for the EC of truncated HARQ-enabled
D2D communication. We restrict the maximum number of packet transmissions in a
transmission period to its lowest value, which is $M=2$. In this case, $D_{T}$
first transmits a packet using underlay settings by reusing the cellular
user’s channel. If the packet fails to be decoded at $D_{R}$, then the packet
is retransmitted using overlay settings in the same transmission period, as
shown in Fig. 3. This way, we can achieve higher reliability while utilizing fewer
network resources. For $M=2$, the block companion matrix $\mathbf{B}_{n_{j}}$
would become
$\mathbf{B}_{n_{j}}=\begin{bmatrix}b_{1,n_{j}}&b_{2,n_{j}}\\\
1&0\end{bmatrix},$ (24)
and the corresponding characteristic equation is
$\lambda_{n_{j}}^{2}-\lambda_{n_{j}}(b_{1,n_{j}})-b_{2,n_{j}}=0$, with the
largest positive root for queue model ${n_{j}}$
$\lambda_{n_{j}}^{+}=\frac{1}{2}\bigg{(}b_{1,n_{j}}+\sqrt{(b_{1,n_{j}})^{2}+4(b_{2,n_{j}})}\bigg{)}.$
(25)
Now, to find the EC expressions for queue models $n_{1}$ and $n_{2}$, we have
to find the largest positive roots of the corresponding block companion
matrices. For the largest positive root for queue model $n_{1}$
($\lambda_{n_{1}}^{+}$), we have the following Lemma 1.
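The closed-form root (25) can be cross-checked against a direct eigenvalue computation of the $2\times 2$ companion matrix (stub entries, illustrative only):

```python
import numpy as np

# Stub entries for M = 2 -- illustrative only.
b1, b2 = 0.6, 0.25
B = np.array([[b1, b2],
              [1.0, 0.0]])

# Closed form (25) for the largest positive root vs. direct eigenvalues.
lam_closed = 0.5 * (b1 + np.sqrt(b1 ** 2 + 4 * b2))
lam_eig = float(np.max(np.abs(np.linalg.eigvals(B))))
```

The two agree to machine precision, since (25) is the positive root of the characteristic equation $\lambda^{2}-b_{1}\lambda-b_{2}=0$.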
###### Lemma 1.
The largest positive root of the block companion matrix for queue model
$n_{1}$ is given as,
$\begin{split}\lambda_{n_{1}}^{+}&=\frac{1}{2}\biggl{(}\big{(}e^{-lr\theta}\big{[}\varphi\big{]}+p^{\text{off}}_{u}\big{)}+\\\
&\sqrt{\big{(}e^{-lr\theta}\big{[}\varphi\big{]}+p^{\text{off}}_{u}\big{)}^{2}+4\big{(}e^{-lr\theta}\big{[}\vartheta\big{]}+p^{\text{off}}_{o}+\varepsilon_{{}_{ac}}\big{)}}\biggl{)}.\end{split}$
where
$\varphi=p_{u,1}\big{(}\alpha_{d}\big{)}+p_{u,3}\big{(}\alpha_{{}_{mC}}\big{)}+p_{u,5}\big{(}\alpha_{{}_{MC}}\big{)}$,
$\vartheta=p_{o,1}\big{(}\beta_{d}\big{)}+p_{o,3}\big{(}\beta_{{}_{mC}}\big{)}+p_{o,5}\big{(}\beta_{{}_{MC}}\big{)}$,
$p^{\text{off}}_{u}=p_{u,2}+p_{u,4}+p_{u,6}$, and
$p^{\text{off}}_{o}=p_{o,2}+p_{o,4}+p_{o,6}$.
###### Proof.
Given in Appendix C. ∎
By using results from Lemma 1 and solving (17), we can find the closed-form
expression for the EC of truncated HARQ-enabled D2D for queue model $n_{1}$,
which is
$\begin{split}EC^{{}^{\text{HARQ}}}_{n_{1}}&=\frac{-1}{\theta}\log_{e}\bigg{\\{}\frac{1}{2}\biggl{(}\big{(}e^{-lr\theta}\big{[}\varphi\big{]}+p^{\text{off}}_{u}\big{)}+\\\
&\sqrt{\big{(}e^{-lr\theta}\big{[}\varphi\big{]}+p^{\text{off}}_{u}\big{)}^{2}+4\big{(}e^{-lr\theta}\big{[}\vartheta\big{]}+p^{\text{off}}_{o}+\varepsilon_{{}_{ac}}\big{)}}\biggl{)}\bigg{\\}}.\end{split}$
(26)
Similarly, for queue model $n_{2}$, the expression for the largest positive
root ($\lambda_{n_{2}}^{+}$) can be found by using the following Lemma 2.
###### Lemma 2.
The largest positive root of the block companion matrix for queue model
$n_{2}$ is given as,
$\begin{split}\lambda_{n_{2}}^{+}=\frac{1}{2}\bigg{(}\big{(}e^{-lr\theta}&\big{[}\varphi\big{]}+p^{\text{off}}_{u}\big{)}+\\\
&\sqrt{\big{(}e^{-lr\theta}\big{[}\varphi\big{]}+p^{\text{off}}_{u}\big{)}^{2}+4e^{-lr\theta}\big{[}\varrho\big{]}+p^{\text{off}}_{o}}\bigg{)}.\end{split}$
where
$\varrho=p_{o,1}\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{d}}_{o,1}]+p_{o,3}\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{mC}}_{o,1}]+p_{o,5}\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{MC}}_{o,1}]$.
###### Proof.
Given in Appendix D. ∎
Now, by using results from Lemma 2 and solving (17), we can find the closed-
form expression for the EC of truncated HARQ-enabled D2D for the queue model
$n_{2}$, which is
$\begin{split}EC^{{}^{\text{HARQ}}}_{n_{2}}&=\frac{-1}{\theta}\log_{e}\bigg{\\{}\frac{1}{2}\bigg{(}\big{(}e^{-lr\theta}\big{[}\varphi\big{]}+p^{\text{off}}_{u}\big{)}+\\\
&\sqrt{\big{(}e^{-lr\theta}\big{[}\varphi\big{]}+p^{\text{off}}_{u}\big{)}^{2}+4e^{-lr\theta}\big{[}\varrho\big{]}+p^{\text{off}}_{o}}\bigg{)}\bigg{\\}}.\end{split}$
(27)
We provide numerical investigations and insights into these EC expressions
for both queue models in Section V.
### IV-E Optimal Transmission Rate
As discussed above, we assume that CSIT is not available; therefore, the
transmitting device sends data using a fixed transmission rate. To achieve the
maximum EC, it is essential to transmit data using an optimal transmission
rate. Therefore, in this section, we find the optimized transmission rates for
the $n_{1}$ and $n_{2}$ models that maximize the EC in the respective queue models.
These optimal transmission rates can be written as
$r_{n_{j}}^{*}=\arg\max_{r_{n_{j}}>0}EC^{{}^{\text{HARQ}}}_{n_{j}}$. For
the $n_{1}$ model, it becomes
$\begin{split}r_{n_{1}}^{*}&=\arg\max_{r_{n_{1}}>0}\frac{-1}{\theta}\log_{e}\bigg{\\{}\frac{1}{2}\biggl{(}\big{(}e^{-lr_{n_{1}}\theta}\big{[}\varphi\big{]}+p^{\text{off}}_{u}\big{)}+\\\
&\sqrt{\big{(}e^{-lr_{n_{1}}\theta}\big{[}\varphi\big{]}+p^{\text{off}}_{u}\big{)}^{2}+4\big{(}e^{-lr_{n_{1}}\theta}\big{[}\vartheta\big{]}+p^{\text{off}}_{o}+\varepsilon_{{}_{ac}}\big{)}}\biggl{)}\bigg{\\}}.\end{split}$
(28)
Equivalently, we can write
$\begin{split}r_{n_{1}}^{*}&=\arg\min_{r_{n_{1}}>0}\bigg{\\{}\big{(}e^{-lr_{n_{1}}\theta}\big{[}\varphi\big{]}+p^{\text{off}}_{u}\big{)}+\\\
&\sqrt{\big{(}e^{-lr_{n_{1}}\theta}\big{[}\varphi\big{]}+p^{\text{off}}_{u}\big{)}^{2}+4\big{(}e^{-lr_{n_{1}}\theta}\big{[}\vartheta\big{]}+p^{\text{off}}_{o}+\varepsilon_{{}_{ac}}\big{)}}\bigg{\\}}.\end{split}$
(29)
From Table I, we can see that the transmission is only possible in states
$s_{1}$, $s_{3}$, and $s_{5}$ and that no transmission occurs during states
$s_{2}$, $s_{4}$, and $s_{6}$. Therefore, the transition probabilities
$p_{2}$, $p_{4}$, and $p_{6}$, in both overlay and underlay scenarios, are
irrelevant when optimizing (29) with respect to $r_{n_{1}}$. By discarding the
irrelevant terms, the final optimization problem becomes
$\begin{split}r_{n_{1}}^{*}=\arg\min_{r_{n_{1}}>0}\bigg{\\{}&e^{-lr_{n_{1}}\theta}[\varphi]+\\\
&\sqrt{\big{(}e^{-lr_{n_{1}}\theta}[\varphi]\big{)}^{2}+4\big{(}e^{-lr_{n_{1}}\theta}[\vartheta]+\varepsilon_{{}_{ac}}\big{)}}\bigg{\\}}.\end{split}$
(30)
Let
$F=e^{-lr_{n_{1}}\theta}[\varphi]+\sqrt{(e^{-lr_{n_{1}}\theta}[\varphi])^{2}+4(e^{-lr_{n_{1}}\theta}[\vartheta]+\varepsilon_{{}_{ac}})}$
be the cost function. Because $F$ is a convex function [36], we can find its
closed-form by taking the derivative with respect to $r_{n_{1}}$. By taking
the derivative of $F$ and by employing the chain rule and the sum/difference
rule, we obtain the following result
$\frac{\partial F}{\partial r_{n_{1}}}=-l\theta
e^{-lr_{n_{1}}\theta}[\varphi]-\frac{l\theta
e^{-2lr_{n_{1}}\theta}([\varphi]^{2}+2[\vartheta]e^{lr_{n_{1}}\theta})}{\sqrt{(e^{-lr_{n_{1}}\theta}[\varphi])^{2}+4(e^{-lr_{n_{1}}\theta}[\vartheta]+\varepsilon_{{}_{ac}})}}.$
(31)
Now, to find the closed-form expression, we set $\frac{\partial F}{\partial
r_{n_{1}}}=0$. Consequently, we obtain
$l\theta e^{-lr_{n_{1}}\theta}[\varphi]=-\frac{l\theta
e^{-2lr_{n_{1}}\theta}([\varphi]^{2}+2[\vartheta]e^{lr_{n_{1}}\theta})}{\sqrt{(e^{-lr_{n_{1}}\theta}[\varphi])^{2}+4(e^{-lr_{n_{1}}\theta}[\vartheta]+\varepsilon_{{}_{ac}})}}.$
(32)
Solving (32) for $r_{n_{1}}$ in closed form is computationally prohibitive.
Therefore, we employ
the iterative gradient-descent (GD) method to determine the optimal
transmission rate $r_{n_{1}}^{*}$. To control the convergence of the GD
method, we have the following rule
$r_{n_{1}}(x)=r_{n_{1}}(x-1)-\Omega\nabla\big{|}_{r_{n_{1}}(x)},$ (33)
where $\Omega$ is the step-size, $x$ is the iteration index, and $\nabla$ is
the gradient of $F$. This gradient can be written as $\nabla=\frac{\partial
F}{\partial r_{n_{1}}}$ and is given in (31).
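The update rule (33) is standard gradient descent. Because the paper's cost $F$ couples $r_{n_{1}}$ to the decoding-error terms through $[\varphi]$, $[\vartheta]$, and $\varepsilon_{{}_{ac}}$, the sketch below demonstrates the rule on a simple stand-in convex cost with the same small-$r$/large-$r$ trade-off; all constants are illustrative:

```python
import math

# Stand-in convex cost f(r) = exp(-a r) + b r with a unique minimiser
# r* = ln(a/b)/a; a and b are illustrative constants, not the paper's
# phi / vartheta / eps_ac terms.
a, b = 0.5, 0.1

def grad(r: float) -> float:
    return -a * math.exp(-a * r) + b

# Iterative update rule (33): r(x) = r(x-1) - Omega * grad.
r, omega = 1.0, 0.5
for _ in range(2000):
    r -= omega * grad(r)

r_star = math.log(a / b) / a  # analytic minimiser for comparison
```

With a small enough step-size the iterates converge to the analytic minimiser, which is the behaviour the convergence rule (33) controls through $\Omega$.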
Similarly, for the $n_{2}$ model, the optimized transmission rate can be
calculated using the following expression:
$r_{n_{2}}^{*}=\arg\min_{r_{n_{2}}>0}\bigg{(}e^{-lr_{n_{2}}\theta}\big{[}\varphi\big{]}+\sqrt{\big{(}e^{-lr_{n_{2}}\theta}\big{[}\varphi\big{]}\big{)}^{2}+4\big{(}e^{-lr_{n_{2}}\theta}\big{[}\varrho\big{]}\big{)}}\bigg{)}.$
(34)
To solve (34) and to find the optimal value of $r_{n_{2}}$, one can follow the
same procedure used for the $n_{1}$ model.
###### Remark.
In our system model, the MC-BS performs the mode selection mechanism (to find
the best mode for D2D communication) and executes the GD algorithm (to compute
the optimal yet fixed transmission rates). The MC-BS then communicates the
outcome of the mode selection and the optimal transmission rate to $D_{T}$
through the downlink control channel. Moreover, the mode selection and the
optimal transmission rates have to be recomputed every time the pathloss of
the D2D link changes (due to the D2D users’ mobility). The MC-BS performs these
tasks because we assume that it has adequate resources to execute the GD
algorithm. It also keeps track of the D2D users’ mobility to decide when to
recompute the optimal transmission rates.
## V Numerical Results
In this section, we further investigate the EC of HARQ-enabled D2D
communication and the impact of mode selection on the performance of the D2D
link, and we provide simulation results to support our analysis.
### V-A Simulation Setup
We consider an MC of radius 500 m and an mC of radius 100 m in the MC’s
coverage area. Two pairs of user equipment are positioned in the coverage area
of the mC using uniform distribution. One pair is referred to as the D2D pair
($D_{T}$ and $D_{R}$) and the other as the cellular user pair ($U_{T}$ and
$U_{R}$). We use the pathloss as the sole feature for mode selection, with the
following pathloss model [37]: $L(d)=128.1+37.6\log_{10}(d)$. We use power
class 1 devices at the transmitter and receiver, with their average transmit
power set to 27 dBm. The average transmit powers of the MC-BS and mC-BS are 47
dBm and 37 dBm, respectively. We assume that the channels $D_{T}\to D_{R}$,
$D_{T}\to BS_{{}_{mC}}\to D_{R}$, and $D_{T}\to BS_{{}_{MC}}\to D_{R}$ are
Rayleigh fading channels and follow independent distributions.
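The setup above can be collected into a few constants. A sketch, assuming the pathloss model from [37] takes $d$ in kilometres (an assumption; the unit is not stated explicitly in the text):

```python
import math

def pathloss_db(d_km: float) -> float:
    # Pathloss model from [37]: L(d) = 128.1 + 37.6 log10(d); d is
    # assumed here to be in kilometres (the unit is not stated).
    return 128.1 + 37.6 * math.log10(d_km)

def dbm_to_watt(p_dbm: float) -> float:
    return 10 ** ((p_dbm - 30) / 10)

# Average transmit powers from the setup in Section V-A.
P_D2D = dbm_to_watt(27)  # power class 1 devices, ~0.5 W
P_mC = dbm_to_watt(37)   # mC-BS, ~5 W
P_MC = dbm_to_watt(47)   # MC-BS, ~50 W
```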
### V-B Simulation Results
Figure 4: $EC^{{}^{\text{HARQ}}}$ is a quasi-concave function of $r$; an
exhaustive search to find the optimal $r$ for different values of QoS exponent
$\theta$ ($M=2$).
Fig. 4 presents an exhaustive search to determine the optimal value of the
fixed transmission rate with a constant arrival rate. It can be seen that the
EC of truncated HARQ-enabled D2D is a quasi-concave function of $r$ and that a
globally optimal value of $r$ ($r_{n_{1}}^{*}=r_{n_{2}}^{*}=29$) exists that
maximizes the EC. This is because a too-large $r$ introduces a significant
outage probability, so that many packets are dropped due to the deadline
constraint, whereas a too-small $r$ forces the departure rate down as well. In
short, for large $r$, the
decoding error probability is the bottleneck, and for small $r$, low departure
rate is the bottleneck. From the figure, we can also see the impact of using
different queue models. For instance, the $n_{1}$ queue model provides a
higher EC at the optimal value of $r$ than does the $n_{2}$ queue model. This
is because, in the $n_{2}$ model, an unsuccessful packet is discarded when the
deadline is reached, whereas in the $n_{1}$ model, the packet’s transmission
priority is reduced rather than the packet being discarded when it remains
unsuccessful after the deadline. Moreover, one can also see
the impact of imposing strict QoS constraints on the EC; for instance, a lower
EC is achieved at the optimal $r$ when stricter QoS constraints are imposed at
$D_{T}$’s queue.
Figure 5: The EC vs the QoS exponent $\theta$: a comparison of HARQ-enabled
D2D communication with traditional D2D communication.
Next, we investigate the effect of the QoS exponent on the EC of our proposed
system model. Fig. 5 shows that the EC is a decreasing function of $\theta$.
Specifically, the EC decreases exponentially fast for lower values of
$\theta$. For higher values of $\theta$, this rate of decrease slows down and
ultimately reaches zero when $\theta$ approaches 1. It also shows that our
proposed scheme of truncated HARQ-enabled D2D outperforms other D2D schemes,
such as overlay and underlay D2D. However, this gain over other D2D schemes
decreases as stricter QoS constraints are imposed at $D_{T}$’s queue.
Moreover, we also observe a significant performance loss when the finite
blocklength ($l$) increases. This is because we consider a block-fading
channel model; in such models, when the length of the fading block increases,
the effect of slow-fading plays an important role. This occurs because slow-
fading makes a strong attenuation last for a long time in delay-sensitive
networks operating under statistical QoS constraints. This attenuation then
causes an increase in the buffer overflow probability, which affects the
performance of the system and results in reduced EC. Additionally, the results
also show that the $n_{1}$ model with a large blocklength ($l=1000$) still
outperforms the $n_{2}$ model with a small blocklength ($l=100$). This shows
the efficacy of the $n_{1}$ model over the $n_{2}$ model in terms of
performance, but at the cost of more resources. (Note that the $n_{1}$ model
requires comparatively more resources than the $n_{2}$ model: in the $n_{1}$
model, a packet is not discarded even after the retransmission deadline is
reached, whereas in the $n_{2}$ model, a packet is discarded once the
retransmission deadline is reached, which in our case occurs after two
unsuccessful attempts. This poses an extra burden on the resources available
for D2D communication.)
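The monotone decrease of the EC in $\theta$ can be checked with the same kind of hypothetical ON/OFF service model; the rate $r$ and outage probability below are illustrative values, not taken from the paper.

```python
import math

def effective_capacity(theta, r=5.0, eps=0.1):
    # Effective capacity of an ON/OFF service: the link delivers r bits
    # per block with probability 1-eps and 0 bits (outage) with
    # probability eps.  EC(theta) = -(1/theta) * ln E[exp(-theta*service)]
    return -math.log(eps + (1.0 - eps) * math.exp(-theta * r)) / theta

thetas = [10 ** (-k / 4) for k in range(12, -1, -1)]  # 1e-3 ... 1
ecs = [effective_capacity(t) for t in thetas]
# EC decreases monotonically as the QoS exponent grows ...
assert all(a >= b for a, b in zip(ecs, ecs[1:]))
# ... starting near the mean service rate (1-eps)*r under loose QoS.
assert abs(ecs[0] - (1 - 0.1) * 5.0) < 0.1
```

The decrease is fastest for small $\theta$, where the EC drops away from the mean service rate, matching the trend of Fig. 5.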
Figure 6: Impact of the mode selection mechanism on the EC of HARQ-enabled D2D
system: The EC of truncated HARQ-enabled D2D vs the standard deviation of the
estimation error for $n_{1}$ and $n_{2}$ queue models.
Fig. 6 presents the impact of our proposed mode selection on the EC of
truncated HARQ-enabled D2D communication. The EC decreases initially with an
increase in the standard deviation of the estimation error ($\sigma$) of
pathloss measurements, and it becomes stable for $\sigma\geq 5$. This occurs
because the EC decreases as the quality of the pathloss estimation decreases.
This trend shows a strong impact of the proposed mode selection on the EC of
the truncated HARQ-enabled D2D communication. Additionally, we observe that
the impact of the quality of the pathloss estimation is significantly higher
when strict QoS constraints are imposed and when the $n_{1}$ queue model is
used. We also observe that although the $n_{1}$ model provides better EC, the
impact of the quality of pathloss estimation is higher on the $n_{1}$ model
compared to the $n_{2}$ model.
Figure 7: Impact of half-duplex and full-duplex relaying on the EC of HARQ-
enabled D2D communication: EC of truncated HARQ-enabled D2D vs the quality of
the SI cancellation techniques.
Last but not least, we investigate the impact of half-duplex and full-duplex
relaying (in the mC-D2D and MC-D2D modes) on the EC of truncated HARQ-enabled
D2D communication, as shown in Fig. 7. We observe that the EC increases with
the quality of the SI cancellation techniques ($\beta$). When $\beta$
approaches 1, SI cancellation at the relay node (mC-BS and MC-BS) becomes
perfect, and consequently, the EC of full-duplex relaying exceeds that of
half-duplex relaying. This is because D2D communication in half-duplex mode
consumes two time-slots, so the half-duplex channel capacity is multiplied by
a factor of 1/2. On the other hand, D2D communication in full-duplex mode
utilizes only one time-slot, which is why it can (theoretically) achieve
double the throughput with perfect SI cancellation. Moreover, one can also see
the impact of the QoS exponent ($\theta$) and the finite blocklength ($l$) on
the EC of full-duplex truncated HARQ-enabled D2D communication: the EC
decreases as either $\theta$ or $l$ increases.
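The half-duplex/full-duplex trade-off admits a compact numerical check. The sketch below abstracts the residual SI into a single SI-to-noise term (the concrete $\alpha$, $\beta$ mapping is left out) and contrasts the 1/2 prelog of half-duplex with full-duplex under perfect and poor SI cancellation.

```python
import math

def hd_capacity(snr):
    # Half-duplex two-hop relaying: two time slots -> prelog factor 1/2.
    return 0.5 * math.log2(1.0 + snr)

def fd_capacity(snr, si_to_noise):
    # Full-duplex relaying: one time slot, but residual self-interference
    # adds to the noise floor at the relay.
    return math.log2(1.0 + snr / (1.0 + si_to_noise))

snr = 10 ** (20 / 10)  # 20 dB
# Perfect SI cancellation: FD achieves exactly double the HD throughput.
assert fd_capacity(snr, 0.0) == 2 * hd_capacity(snr)
# Strong residual SI: the FD advantage disappears.
assert fd_capacity(snr, 200.0) < hd_capacity(snr)
```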
## VI Conclusion and Future Directions
In this work, we have investigated the effects of using the HARQ protocol on
the EC of buffer-aided D2D communication in multi-tier cellular networks. We
have also performed the ternary hypothesis testing-based mode selection for
D2D in two-tier cellular networks and have analyzed its impact on the EC of
HARQ-enabled D2D communication. We have considered two different queue models
at the transmitting device. In case of an outage, the transmitting device in
the second model discards the packet, whereas in the first model it reduces
the packet’s priority rather than discarding it. We have also extended our
analysis to both overlay and underlay D2D settings. Additionally, we have
proposed a special case of truncated HARQ for D2D communication in which the
transmitting device transmits in underlay settings in the first transmission
attempt; if the receiver does not successfully decode the packet, the
transmitter retransmits it in overlay settings in the second attempt. Through
simulation results, we have observed that our proposed truncated HARQ protocol
achieves an almost three-fold EC enhancement over D2D communication without
any retransmission protocol. Moreover, the first queue model provides a better
EC than the second queue model, but at the expense of extra bandwidth.
Future work will study the impact of different HARQ variants on the EC of D2D
communication. Moreover, this analysis can also be extended to scenarios when
multiple D2D pairs are present in the network. In that case, it will be quite
intriguing to investigate the impact of network and channel coding on the HARQ
retransmission schemes.
## Appendix A Pathloss Estimation
The pathloss estimation has three phases, explained as follows.
* •
Transmission Phase: In this phase, $D_{T}$ transmits $m$ symbols on
all the candidate communication links ($D_{T}\to D_{R}$, $D_{T}\to
BS_{{}_{mC}}$, and $D_{T}\to BS_{{}_{MC}}$) using fixed transmission power
$P_{T}$. The signal received at the respective receiver ($D_{R}$,
$BS_{{}_{mC}}$, and $BS_{{}_{MC}}$) can be calculated as follows:
$\begin{split}y_{{}_{D_{R}}}&=\sqrt{P_{T}}\>L_{d}\>Z_{d}\>x+n_{{}_{d}}\\\
y_{{}_{mC}}&=\sqrt{P_{T}}\>L_{{}_{mC}}\>Z^{{}^{mC}}_{ul}\>x+n_{{}_{mBS}}\\\
y_{{}_{MC}}&=\sqrt{P_{T}}\>L_{{}_{MC}}\>Z^{{}^{MC}}_{ul}\>x+n_{{}_{MBS}},\end{split}$
(35)
where $L_{d}(Z_{d})$, $L_{{}_{mC}}(Z_{{}_{mC}})$, and
$L_{{}_{MC}}(Z_{{}_{MC}})$ are the pathlosses (channel coefficients) between
$D_{T}\to D_{R}$, $D_{T}\to BS_{{}_{mC}}$, and $D_{T}\to BS_{{}_{MC}}$,
respectively, as shown in Fig. 8. $x$ is the transmitted signal and
$n_{{}_{d}}$, $n_{{}_{mBS}}$, $n_{{}_{MBS}}$ represent the noise of the
respective channel. The noise of each channel follows the zero-mean complex
Gaussian distribution, therefore,
$n_{{}_{d}}\sim\mathcal{CN}(0,\sigma_{{}_{d}}^{2})$,
$n_{{}_{mBS}}\sim\mathcal{CN}(0,\sigma_{{}_{mBS}}^{2})$, and
$n_{{}_{MBS}}\sim\mathcal{CN}(0,\sigma_{{}_{MBS}}^{2})$. We consider that the
wireless channels of all three links follow a complex Gaussian distribution
with zero mean and unit variance ($Z_{d}\sim\mathcal{CN}(0,1)$,
$Z^{{}^{mC}}_{{}_{ul}}\sim\mathcal{CN}(0,1)$, and
$Z^{{}^{MC}}_{{}_{ul}}\sim\mathcal{CN}(0,1)$). Therefore, the received signals
at all the receivers also follow complex Gaussian distributions:
$y_{{}_{D_{R}}}\sim\mathcal{CN}(0,\sigma_{{}_{D_{R}}})$,
$y_{{}_{mC}}\sim\mathcal{CN}(0,\sigma_{{}_{mC}})$, and
$y_{{}_{MC}}\sim\mathcal{CN}(0,\sigma_{{}_{MC}})$. To find the variance of the
received signal, we assume $x\in\mathbb{C}$ and $|x|=1$; then
$\sigma_{{}_{D_{R}}}=P_{T}\>L_{{}_{d}}^{2}+\sigma_{{}_{d}}^{2}$,
$\sigma_{{}_{mC}}=P_{T}\>L_{{}_{mC}}^{2}+\sigma_{{}_{mBS}}^{2}$, and
$\sigma_{{}_{MC}}=P_{T}\>L_{{}_{MC}}^{2}+\sigma_{{}_{MBS}}^{2}$.
Figure 8: Pathloss estimation by transmission; solid black arrows represent
uplink data signaling, red dotted arrows represent uplink and downlink control
signaling.
* •
Pathloss Estimation Phase: In this phase, every receiver estimates the
pathloss of the respective communication link and then conveys it to
$BS_{{}_{MC}}$ on the uplink control channel, which then performs the mode
selection mechanism. The noisy measurement of pathloss at $D_{R}$,
$BS_{{}_{mC}}$, and $BS_{{}_{MC}}$ can be calculated as follows:
$\begin{split}\widehat{L}_{d}=\frac{\widehat{P}_{{}_{R,D_{R}}}}{P_{{}_{T}}},\
&\text{where}\
\widehat{P}_{{}_{R,D_{R}}}=\frac{1}{m}\sum_{i=1}^{m}|y_{{}_{D_{R}}}(i)|^{2}\\\
\widehat{L}_{{}_{mC}}=\frac{\widehat{P}_{{}_{R,mC}}}{P_{{}_{T}}},\
&\text{where}\
\widehat{P}_{{}_{R,mC}}=\frac{1}{m}\sum_{i=1}^{m}|y_{{}_{mC}}(i)|^{2}\\\
\widehat{L}_{{}_{MC}}=\frac{\widehat{P}_{{}_{R,MC}}}{P_{{}_{T}}},\
&\text{where}\
\widehat{P}_{{}_{R,MC}}=\frac{1}{m}\sum_{i=1}^{m}|y_{{}_{MC}}(i)|^{2}.\end{split}$
(36)
$\widehat{P}_{{}_{R,D_{R}}}$, $\widehat{P}_{{}_{R,mC}}$, and
$\widehat{P}_{{}_{R,MC}}$ represent the estimated values of received power at
$D_{R}$, $BS_{{}_{mC}}$, and $BS_{{}_{MC}}$, respectively. We know that
$|y_{{}_{D_{R}}}|$, $|y_{{}_{mC}}|$, and $|y_{{}_{MC}}|$ follow Rayleigh
distributions and $|y_{{}_{D_{R}}}|^{2}$, $|y_{{}_{mC}}|^{2}$, and
$|y_{{}_{MC}}|^{2}$ follow exponential distributions. Therefore, by invoking
the Central Limit Theorem and for large $m$, $\widehat{P}_{{}_{R,D_{R}}}$,
$\widehat{P}_{{}_{R,mC}}$, and $\widehat{P}_{{}_{R,MC}}$ follow Gaussian
distributions. Similarly, $\widehat{L}_{d}$, $\widehat{L}_{{}_{mC}}$, and
$\widehat{L}_{{}_{MC}}$ also follow Gaussian distributions.
* •
Mode Selection Phase: In this phase, $BS_{{}_{MC}}$ obtains $\widehat{L}_{d}$,
$\widehat{L}_{{}_{mC}}$, and $\widehat{L}_{{}_{MC}}$ for all the three
candidate links via the uplink control channel. Then, it computes
$\min\big{\\{}\widehat{L}_{d},\widehat{L}_{{}_{mC}},\widehat{L}_{{}_{MC}}\big{\\}}$
and announces the active link via the downlink control channel to $D_{T}$.
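The three phases above can be sketched end-to-end in a small Monte-Carlo simulation. The pathloss coefficients of the three candidate links below are hypothetical; the estimator follows (35) and (36), and the mode-selection phase applies the min rule literally.

```python
import math
import random

def estimate_pathloss(L, P_T=1.0, noise_var=0.01, m=10_000):
    # Transmission phase (eq. 35): y(i) = sqrt(P_T)*L*Z(i)*x(i) + n(i),
    # with |x| = 1, Z ~ CN(0, 1) and n ~ CN(0, noise_var).
    # Estimation phase (eq. 36): average the received power over the
    # m symbols, then L_hat = P_hat_R / P_T.
    p_hat = 0.0
    for _ in range(m):
        z = complex(random.gauss(0, math.sqrt(0.5)),
                    random.gauss(0, math.sqrt(0.5)))
        n = complex(random.gauss(0, math.sqrt(noise_var / 2)),
                    random.gauss(0, math.sqrt(noise_var / 2)))
        y = math.sqrt(P_T) * L * z + n
        p_hat += abs(y) ** 2
    return (p_hat / m) / P_T

random.seed(0)
# Hypothetical pathloss coefficients of the three candidate links.
links = {"D2D": 0.2, "mC-D2D": 0.5, "MC-D2D": 0.8}
estimates = {name: estimate_pathloss(L) for name, L in links.items()}
# Mode-selection phase: the macro-cell BS announces the link with the
# minimum estimated pathloss as the active link.
active = min(estimates, key=estimates.get)
assert active == "D2D"
```

With $m=10{,}000$ symbols, each $\widehat{L}$ concentrates tightly around its mean $L^{2}+\sigma^{2}/P_{T}$, as the CLT argument above predicts.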
## Appendix B Proof of Proposition 1
The mC-D2D link is a two-hop wireless link consisting of an uplink and a
downlink channel. Therefore, the end-to-end channel capacity of the mC-D2D
link is capped by the minimum of the uplink and the downlink channel
capacities [38]. It can be written as,
${C^{o}_{{}_{mC}}}(k)=\min\\{C^{{}^{mC}}_{{}_{ul}}(k),C^{{}^{mC}}_{{}_{dl}}(k)\\}$
(37)
where $C^{{}^{mC}}_{{}_{ul}}(k)$ and $C^{{}^{mC}}_{{}_{dl}}(k)$ represent the
instantaneous channel capacities of the uplink and the downlink channels of
the mC-D2D mode, respectively. We assume that $BS_{{}_{mC}}$ operates in full-
duplex mode. To cancel the self-interference caused by the simultaneous
transmission and reception at $BS_{{}_{mC}}$, the BS utilizes the digital and
analog SI cancellation techniques (see Section II-B). However, we note that in
practical full-duplex systems, it is almost impossible to perfectly cancel out
the effects of SI. Therefore, we incorporate residual SI as a factor of noise
at the BS. Due to this, the instantaneous channel capacity of the uplink
becomes,
$C^{{}^{mC}}_{{}_{ul}}(k)=B\log_{2}\bigg{(}1+\frac{\bar{P}Z^{{}^{mC}}_{{}_{ul}}(k)}{L^{{}^{mC}}_{{}_{ul}}(k)N_{0}+\alpha\bar{P}_{{}_{mC}}^{\beta}}\bigg{)}.$
(38)
where $\bar{P}_{{}_{mC}}$ represents the average transmit power of
$BS_{{}_{mC}}$, and $Z^{{}^{mC}}_{{}_{ul}}(k)$ and $L^{{}^{mC}}_{{}_{ul}}(k)$
represent the channel coefficient and the pathloss between $D_{T}$ and
$BS_{{}_{mC}}$, respectively. $\alpha\bar{P}_{{}_{mC}}^{\beta}$ represents the
residual SI, where $\alpha$ and $\beta$ ($0\leq\beta\leq 1$) are constants
that reflect the quality of the SI cancellation techniques employed at
$BS_{{}_{mC}}$. By simplifying the denominator of (38), we can find the
SI-to-noise ratio for full-duplex relaying at $BS_{{}_{mC}}$, which is
$\bar{\alpha}\bar{P}_{{}_{mC}}^{\beta}$, where
$\bar{\alpha}=\alpha/\big{(}L^{{}^{mC}}_{{}_{ul}}(k)N_{0}\big{)}$. Next, to
find the instantaneous channel capacity of the downlink of the mC-D2D mode, we
assume that the receiver node operates in half-duplex mode; thus, it does not
experience SI. Therefore, the instantaneous channel capacity of the downlink
becomes,
$C^{{}^{mC}}_{{}_{dl}}(k)=B\log_{2}\bigg{(}1+\frac{\bar{P}_{{}_{mC}}Z^{{}^{mC}}_{{}_{dl}}(k)}{L^{{}^{mC}}_{{}_{dl}}(k)N_{0}}\bigg{)}.$
(39)
where $Z^{{}^{mC}}_{{}_{dl}}(k)$ and $L^{{}^{mC}}_{{}_{dl}}(k)$ are the
channel coefficient and the pathloss between $BS_{{}_{mC}}$ and $D_{R}$,
respectively. Now, by substituting (38) and (39) in (37), and after some
simplification steps, the end-to-end instantaneous channel capacity of the
mC-D2D link becomes,
$C^{o}_{{}_{mC}}(k)=\min\bigg{\\{}B\log_{2}\big{(}1+\gamma^{{}^{mC}}_{{}_{ul}}(k)\big{)},B\log_{2}\big{(}1+\gamma^{{}^{mC}}_{{}_{dl}}(k)\big{)}\bigg{\\}}.$
(40)
where
$\gamma^{{}^{mC}}_{{}_{ul}}(k)=\bar{P}Z^{{}^{mC}}_{{}_{ul}}(k)\big{/}\big{(}L^{{}^{mC}}_{{}_{ul}}(k)N_{0}\big{(}1+\bar{\alpha}\bar{P}_{{}_{mC}}^{\beta}\big{)}\big{)}$
and
$\gamma^{{}^{mC}}_{{}_{dl}}(k)=\bar{P}_{{}_{mC}}Z^{{}^{mC}}_{{}_{dl}}(k)\big{/}\big{(}L^{{}^{mC}}_{{}_{dl}}(k)N_{0}\big{)}$
are the SNRs of the uplink and the downlink channels, respectively. Since we
use the Shannon channel capacity, in which the only variable affecting the
capacity is the SNR of the transmission channel, the net SNR of the mC-D2D
link is the minimum of the uplink and downlink SNRs. Due to this, (40)
becomes,
$C^{o}_{{}_{mC}}(k)=B\log_{2}\big{(}1+\gamma_{{}_{mC}}(k)\big{)}.$ (41)
where
$\gamma_{{}_{mC}}(k)=\min\big{\\{}\gamma^{{}^{mC}}_{{}_{ul}}(k),\gamma^{{}^{mC}}_{{}_{dl}}(k)\big{\\}}$
is the net SNR of the mC-D2D link.
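The derivation above reduces to two facts: residual SI enters the uplink SNR as extra noise (38), and the min over per-hop capacities collapses to the capacity of the minimum per-hop SNR because $\log_{2}(1+x)$ is increasing (40)-(41). A small sketch with hypothetical parameter values:

```python
import math

def mc_d2d_capacity(gamma_ul, gamma_dl, B=180e3):
    # End-to-end capacity of the two-hop mC-D2D link: capped by the
    # weaker hop (eq. 37), which collapses to the capacity of the
    # minimum per-hop SNR (eqs. 40-41).
    return B * math.log2(1.0 + min(gamma_ul, gamma_dl))

def uplink_snr(P_bar, z_ul, L_ul, N0, alpha, beta, P_mc):
    # Uplink SNR at the full-duplex BS: the residual self-interference
    # alpha * P_mc**beta is treated as additional noise (eq. 38).
    return (P_bar * z_ul) / (L_ul * N0 + alpha * P_mc ** beta)

# Sanity check of the min-SNR collapse (eq. 40 -> eq. 41):
g_ul, g_dl, B = 50.0, 12.0, 180e3
assert mc_d2d_capacity(g_ul, g_dl, B) == min(
    B * math.log2(1 + g_ul), B * math.log2(1 + g_dl)
)
# Hypothetical numbers: weaker residual SI (smaller alpha) raises the
# uplink SNR and can never reduce the end-to-end capacity.
g_good = uplink_snr(0.5, 1.0, 1e6, 1e-9, 1e-6, 0.5, 5.0)
g_bad = uplink_snr(0.5, 1.0, 1e6, 1e-9, 1e-3, 0.5, 5.0)
assert g_good > g_bad
assert mc_d2d_capacity(g_good, g_dl) >= mc_d2d_capacity(g_bad, g_dl)
```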
## Appendix C Proof of Lemma 1
The block-companion matrix for queue model $n_{1}$ can be derived from (24),
which becomes,
$\mathbf{B}_{n_{1}}=\begin{bmatrix}b_{1,n_{1}}&b_{2,n_{1}}\\\
1&0\end{bmatrix}.$ (42)
By solving the characteristic equation of (42), the largest positive root
comes out to be,
$\lambda_{n_{1}}^{+}=\frac{1}{2}\bigg{(}b_{1,n_{1}}+\sqrt{(b_{1,n_{1}})^{2}+4(b_{2,n_{1}})}\bigg{)}.$
To solve (43), we have to find $b_{1,n_{1}}$ and $b_{2,n_{1}}$. From (23),
$b_{1,n_{1}}$ becomes,
$b_{1,n_{1}}=\mathbf{q}_{{}_{1}}\mathbf{\Phi}(-\theta)\mathbf{p}_{u,i}.$ (44)
Note that, in our proposed system, the first transmit attempt uses underlay
settings. Therefore, to find $\mathbf{q}_{{}_{1}}$, one has to use
$\Gamma_{d}(k)$, $\Gamma_{{}_{mC}}(k)$, and $\Gamma_{{}_{MC}}(k)$ in (19) to
find $\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{d}}_{u,1}]$,
$\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{mC}}_{u,1}]$, and
$\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{MC}}_{u,1}]$, respectively. Due to
this fact, $\mathbf{q}_{{}_{1}}$ becomes
$\big{[}1-\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{d}}_{u,1}],1,1-\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{mC}}_{u,1}],1,1-\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{MC}}_{u,1}],1\big{]}$.
Now, by substituting $\mathbf{q}_{{}_{1}}$,
$\mathbf{\Phi}(-\theta)=\text{diag}[e^{-lr\theta},1,e^{-lr\theta},1,e^{-lr\theta},1]$,
and $\mathbf{p}_{u,i}=[p_{u,1},p_{u,2},p_{u,3},p_{u,4},p_{u,5},p_{u,6}]$ in
(44), and after some simplification steps, $b_{1,n_{1}}$ becomes,
$\begin{split}b_{1,n_{1}}=&(1-\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{d}}_{u,1}])e^{-lr\theta}p_{u,1}+p_{u,2}+(1-\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{mC}}_{u,1}])e^{-lr\theta}p_{u,3}\\\
&+p_{u,4}+(1-\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{MC}}_{u,1}])e^{-lr\theta}p_{u,5}+p_{u,6}\\\
&=e^{-lr\theta}\bigg{[}p_{u,1}\big{(}\alpha_{{}_{d}}\big{)}+p_{u,3}\big{(}\alpha_{{}_{mC}}\big{)}+p_{u,5}\big{(}\alpha_{{}_{MC}}\big{)}\bigg{]}+p^{\text{off}}_{u}.\end{split}$
(45)
where $\alpha_{d}=1-\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{d}}_{u,1}]$;
$\alpha_{{}_{mC}}=1-\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{mC}}_{u,1}]$;
$\alpha_{{}_{MC}}=1-\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{MC}}_{u,1}]$; and
$p^{\text{off}}_{u}=p_{u,2}+p_{u,4}+p_{u,6}$, which is the sum of
probabilities in OFF states for the underlay scenario.
Similarly, from (23), $b_{2,n_{1}}$ becomes,
$b_{2,n_{1}}=\mathbf{q}_{{}_{3}}\mathbf{\Phi}(-\theta)\mathbf{p}_{o,i}+\varepsilon_{{}_{ac}}.$
(46)
Note that, for the second transmit attempt, the transmit D2D node uses overlay
settings for packet transmission. Therefore, to find $\mathbf{q}_{{}_{3}}$,
$\varepsilon_{d}$, $\varepsilon_{{}_{mC}}$, and $\varepsilon_{{}_{MC}}$, one
should use $\gamma_{d}(k)$, $\gamma_{{}_{mC}}(k)$, and $\gamma_{{}_{MC}}(k)$
in (19). Due to this fact, $\mathbf{q}_{{}_{3}}$ becomes
$\big{[}\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{d}}_{o,1}]-\varepsilon_{d},1,\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{mC}}_{o,1}]-\varepsilon_{{}_{mC}},1,\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{MC}}_{o,1}]-\varepsilon_{{}_{MC}},1\big{]}$.
Now, by substituting $\mathbf{q}_{{}_{3}}$, $\mathbf{\Phi}(-\theta)$, and
$\mathbf{p}_{o,i}=[p_{o,1},p_{o,2},p_{o,3},p_{o,4},p_{o,5},p_{o,6}]$ in (46),
and after some simplification steps, $b_{2,n_{1}}$ becomes,
$\begin{split}b_{2,n_{1}}=&(\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{d}}_{o,1}]-\varepsilon_{d})e^{-lr\theta}p_{o,1}+p_{o,2}\\\
&+(\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{mC}}_{o,1}]-\varepsilon_{{}_{mC}})e^{-lr\theta}p_{o,3}+p_{o,4}\\\
&+(\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{MC}}_{o,1}]-\varepsilon_{{}_{MC}})e^{-lr\theta}p_{o,5}+p_{o,6}+\varepsilon_{{}_{ac}}\\\
&=e^{-lr\theta}\bigg{[}p_{o,1}\big{(}\beta_{d}\big{)}+p_{o,3}\big{(}\beta_{{}_{mC}}\big{)}+p_{o,5}\big{(}\beta_{{}_{MC}}\big{)}\bigg{]}+p^{\text{off}}_{o}+\varepsilon_{{}_{ac}}.\end{split}$
(47)
where
$\beta_{d}=\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{d}}_{o,1}]-\varepsilon_{d}$;
$\beta_{{}_{mC}}=\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{mC}}_{o,1}]-\varepsilon_{{}_{mC}}$;
$\beta_{{}_{MC}}=\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{MC}}_{o,1}]-\varepsilon_{{}_{MC}}$;
and $p^{\text{off}}_{o}=p_{o,2}+p_{o,4}+p_{o,6}$, which is the sum of
probabilities in OFF states for the overlay scenario. One can find
$\varepsilon_{{}_{ac}}$ by calculating $\varepsilon_{d}$,
$\varepsilon_{{}_{mC}}$, and $\varepsilon_{{}_{MC}}$ by substituting $m=2$
into (19a), (19b), and (19c), respectively.
Now, to find $\lambda_{n_{1}}^{+}$, we substitute the results from (45) and
(47) into (43). After some simplification steps, the final expression for
$\lambda_{n_{1}}^{+}$ becomes,
$\begin{split}\lambda_{n_{1}}^{+}&=\frac{1}{2}\biggl{(}\big{(}e^{-lr\theta}\big{[}\varphi\big{]}+p^{\text{off}}_{u}\big{)}+\\\
&\sqrt{\big{(}e^{-lr\theta}\big{[}\varphi\big{]}+p^{\text{off}}_{u}\big{)}^{2}+4\big{(}e^{-lr\theta}\big{[}\vartheta\big{]}+p^{\text{off}}_{o}+\varepsilon_{{}_{ac}}\big{)}}\biggl{)},\end{split}$
(48)
where
$\varphi=p_{u,1}\big{(}\alpha_{d}\big{)}+p_{u,3}\big{(}\alpha_{{}_{mC}}\big{)}+p_{u,5}\big{(}\alpha_{{}_{MC}}\big{)}$
and
$\vartheta=p_{o,1}\big{(}\beta_{d}\big{)}+p_{o,3}\big{(}\beta_{{}_{mC}}\big{)}+p_{o,5}\big{(}\beta_{{}_{MC}}\big{)}$.
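The step from (42) to (43) is the quadratic formula applied to the characteristic polynomial $\lambda^{2}-b_{1,n_{1}}\lambda-b_{2,n_{1}}=0$ of the block-companion matrix. A quick numerical check with hypothetical coefficients:

```python
import math

def lambda_plus(b1, b2):
    # Largest positive root of the characteristic polynomial of the 2x2
    # block-companion matrix [[b1, b2], [1, 0]], i.e. of
    # lambda**2 - b1*lambda - b2 = 0 (eq. 43).
    return 0.5 * (b1 + math.sqrt(b1 ** 2 + 4 * b2))

# Illustrative (hypothetical) coefficients in (0, 1), as produced by
# eqs. (45) and (47): probabilities scaled by exp(-l*r*theta) factors.
b1, b2 = 0.6, 0.3
lam = lambda_plus(b1, b2)
# It is indeed a root: the characteristic polynomial vanishes ...
assert abs(lam ** 2 - b1 * lam - b2) < 1e-9
# ... and it dominates the other root (b1 - sqrt(b1**2 + 4*b2)) / 2.
other = 0.5 * (b1 - math.sqrt(b1 ** 2 + 4 * b2))
assert lam > abs(other)
```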
## Appendix D Proof of Lemma 2
The block-companion matrix for queue model $n_{2}$ can be derived from (24),
which becomes,
$\mathbf{B}_{n_{2}}=\begin{bmatrix}b_{1,n_{2}}&b_{2,n_{2}}\\\
1&0\end{bmatrix}.$ (49)
We note that the first transmit attempt in both of the queue models uses
underlay settings. Moreover, both queue models respond the same when they
receive acknowledgment (either positive or negative) of the first transmit
attempt, as shown in Fig. 3. Therefore, $b_{1,n_{2}}=b_{1,n_{1}}$. The
expression for the second transmit attempt $b_{2,n_{2}}$ can be derived from
(23), which becomes
$b_{2,n_{2}}=\mathbf{q}_{{}_{4}}\mathbf{\Phi}(-\theta)\mathbf{p}_{o,i}.$ (50)
Similar to the $n_{1}$ queue model, the second transmit attempt in the
$n_{2}$ model also uses overlay settings for packet transmission. Therefore,
to find
$\mathbf{q}_{{}_{4}}$, one has to use $\gamma_{d}(k)$, $\gamma_{{}_{mC}}(k)$,
and $\gamma_{{}_{MC}}(k)$ in (19a), (19b), and (19c), respectively. Due to
this fact, $\mathbf{q}_{{}_{4}}$ becomes
$\big{[}\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{d}}_{o,1}],1,\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{mC}}_{o,1}],1,\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{MC}}_{o,1}],1\big{]}$.
Now, by substituting $\mathbf{q}_{{}_{4}}$, $\mathbf{\Phi}(-\theta)$, and
$\mathbf{p}_{o,i}$ in (50), and after some simplification steps, $b_{2,n_{2}}$
becomes,
$\begin{split}b_{2,n_{2}}&=e^{-lr\theta}\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{d}}_{o,1}]p_{o,1}+p_{o,2}+e^{-lr\theta}\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{mC}}_{o,1}]p_{o,3}+p_{o,4}\\\
&+e^{-lr\theta}\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{MC}}_{o,1}]p_{o,5}+p_{o,6}\\\
&=e^{-lr\theta}\big{(}p_{o,1}\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{d}}_{o,1}]+p_{o,3}\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{mC}}_{o,1}]+p_{o,5}\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{MC}}_{o,1}]\big{)}+p^{\text{off}}_{o}.\end{split}$
(51)
Now, to find the largest positive root for the case of $n_{2}$
($\lambda_{n_{2}}^{+}$), we substitute $b_{1,n_{2}}$ and $b_{2,n_{2}}$ into
(43), and after some simplification steps, the final expression becomes,
$\begin{split}\lambda_{n_{2}}^{+}=\frac{1}{2}\bigg{(}\big{(}e^{-lr\theta}&\big{[}\varphi\big{]}+p^{\text{off}}_{u}\big{)}+\\\
&\sqrt{\big{(}e^{-lr\theta}\big{[}\varphi\big{]}+p^{\text{off}}_{u}\big{)}^{2}+4\big{(}e^{-lr\theta}\big{[}\varrho\big{]}+p^{\text{off}}_{o}\big{)}}\bigg{)}\end{split}$
(52)
where
$\varrho=p_{o,1}\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{d}}_{o,1}]+p_{o,3}\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{mC}}_{o,1}]+p_{o,5}\operatorname{\mathbb{E}}_{z}[\zeta^{{}^{MC}}_{o,1}]$.
## References
* [1] J. Gross, “Scheduling with outdated CSI: Effective service capacities of optimistic vs. pessimistic policies,” in _Proc. IEEE 20th International Workshop on Quality of Service_. IEEE, 2012, pp. 1–9.
* [2] F. Tang, Z. M. Fadlullah, N. Kato, F. Ono, and R. Miura, “AC-POCA: Anti coordination game based partially overlapping channels assignment in combined UAV and D2D-based networks,” _IEEE Transactions on Vehicular Technology_ , vol. 67, no. 2, pp. 1672–1683, Feb. 2018.
* [3] N. Zhao, X. Liu, Y. Chen, S. Zhang, Z. Li, B. Chen, and M.-S. Alouini, “Caching D2D connections in small-cell networks,” _IEEE Transactions on Vehicular Technology_ , vol. 67, no. 12, pp. 12 326–12 338, Dec. 2018.
* [4] X. Liu, Z. Li, N. Zhao, W. Meng, G. Gui, Y. Chen, and F. Adachi, “Transceiver design and multihop D2D for UAV IoT coverage in disasters,” _IEEE Internet of Things Journal_ , vol. 6, no. 2, pp. 1803–1815, Apr. 2018.
* [5] J. Liu, H. Nishiyama, N. Kato, and J. Guo, “On the outage probability of device-to-device-communication-enabled multi channel cellular networks: An RSS-threshold-based perspective,” _IEEE Journal on Selected Areas in Communications_ , vol. 34, no. 1, pp. 163–175, Jan. 2016.
* [6] D. Kim, B. C. Jung, H. Lee, D. K. Sung, and H. Yoon, “Optimal modulation and coding scheme selection in cellular networks with hybrid-ARQ error control,” _IEEE Transactions on Wireless Communications_ , vol. 7, no. 12, pp. 5195–5201, Dec. 2008.
* [7] M. E. Burich _et al._ , “A cross layer analysis of HARQ protocols in wireless networks,” Master’s thesis, Universidade Tecnológica Federal do Paraná, 2017.
* [8] W. Yafeng, Z. Lei, and Y. Dacheng, “Performance analysis of type III HARQ with turbo codes,” in _Proc. IEEE Semiannual Vehicular Technology Conference, 2003. VTC 2003-Spring._ , vol. 4. IEEE, 2003, pp. 2740–2744.
* [9] D. Wu and R. Negi, “Effective capacity: a wireless link model for support of quality of service,” _IEEE Transactions on Wireless Communications_ , vol. 2, no. 4, pp. 630–643, Jul. 2003.
* [10] L. Musavian, S. Aïssa, and S. Lambotharan, “Effective capacity for interference and delay constrained cognitive radio relay channels,” _IEEE Transactions on Wireless Communications_ , vol. 9, no. 5, pp. 1698–1707, May 2010.
* [11] D. Qiao, M. C. Gursoy, and S. Velipasalar, “Effective capacity of two-hop wireless communication systems,” _IEEE Transactions on Information Theory_ , vol. 59, no. 2, pp. 873–885, Feb. 2012.
* [12] S. W. H. Shah, M. M. U. Rahman, A. N. Mian, A. Imran, S. Mumtaz, and O. A. Dobre, “On the impact of mode selection on effective capacity of device-to-device communication,” _IEEE Wireless Communications Letters_ , vol. 8, no. 3, pp. 945–948, Jun. 2019.
* [13] S. W. H. Shah, A. N. Mian, and J. Crowcroft, “Statistical QoS guarantees for licensed-unlicensed spectrum interoperable D2D communication,” _IEEE Access_ , vol. 8, pp. 27 277–27 290, Jan. 2020.
* [14] W. Cheng, X. Zhang, and H. Zhang, “QoS-aware power allocations for maximizing effective capacity over virtual-MIMO wireless networks,” _IEEE Journal on Selected Areas in Communications_ , vol. 31, no. 10, pp. 2043–2057, Oct. 2013.
* [15] W. Aman, Z. Haider, S. W. H. Shah, M. M. U. Rahman, and O. A. Dobre, “On the effective capacity of an underwater acoustic channel under impersonation attack,” in _Proc. IEEE International Conference on Communications (ICC)_ , 2020, pp. 1–7.
* [16] P. Larsson, J. Gross, H. Al-Zubaidy, L. K. Rasmussen, and M. Skoglund, “Effective capacity of retransmission schemes: A recurrence relation approach,” _IEEE Transactions on Communications_ , vol. 64, no. 11, pp. 4817–4835, Nov. 2016.
* [17] Y. Hu, Y. Li, M. C. Gursoy, S. Velipasalar, and A. Schmeink, “Throughput analysis of low-latency IoT systems with QoS constraints and finite blocklength codes,” _IEEE Transactions on Vehicular Technology_ , vol. 69, no. 3, pp. 3093–3104, Mar. 2020.
* [18] Y. Li, M. C. Gursoy, and S. Velipasalar, “Throughput of hybrid-ARQ chase combining with on-off Markov arrivals under QoS constraints,” in _Proc. IEEE Global Communications Conference (GLOBECOM)_ , 2016, pp. 1–6.
* [19] N. Panwar, S. Sharma, and A. K. Singh, “A survey on 5G: The next generation of mobile communication,” _Physical Communication_ , vol. 18, pp. 64–84, Nov. 2016.
* [20] E. Hossain, M. Rasti, H. Tabassum, and A. Abdelnasser, “Evolution toward 5G multi-tier cellular wireless networks: An interference management perspective,” _IEEE Wireless Communications_ , vol. 21, no. 3, pp. 118–127, Jun. 2014.
* [21] M. O. Al-Kadri, Y. Deng, A. Aijaz, and A. Nallanathan, “Full-duplex small cells for next generation heterogeneous cellular networks: A case study of outage and rate coverage analysis,” _IEEE Access_ , vol. 5, pp. 8025–8038, May 2017.
* [22] E. Everett, A. Sahai, and A. Sabharwal, “Passive self-interference suppression for full-duplex infrastructure nodes,” _IEEE Transactions on Wireless Communications_ , vol. 13, no. 2, pp. 680–694, Feb. 2014.
* [23] M. Elsayed, A. A. A. El-Banna, O. A. Dobre, W. Shiu, and P. Wang, “Low complexity neural network structures for self-interference cancellation in full-duplex radio,” _IEEE Communications Letters_ , vol. 25, no. 1, pp. 181–185, Jan. 2020.
* [24] A. T. Le, X. Huang, Y. J. Guo, _et al._ , “Beam-based analog self-interference cancellation in full-duplex MIMO systems,” _IEEE Transactions on Wireless Communications_ , vol. 19, no. 4, pp. 2460–2471, Apr. 2020.
* [25] E. Ahmed and A. M. Eltawil, “All-digital self-interference cancellation technique for full-duplex systems,” _IEEE Transactions on Wireless Communications_ , vol. 14, no. 7, pp. 3519–3532, Jul. 2015.
* [26] M. Jain, J. I. Choi, T. Kim, D. Bharadia, S. Seth, K. Srinivasan, P. Levis, S. Katti, and P. Sinha, “Practical, real-time, full duplex wireless,” in _Proc. of the ACM International Conference on Mobile Computing and Networking (MobiCom)_ , 2011, pp. 301–312.
* [27] B. Zhao and M. C. Valenti, “Practical relay networks: a generalization of hybrid-ARQ,” _IEEE Journal on Selected Areas in Communications_ , vol. 23, no. 1, pp. 7–18, Jan. 2005.
* [28] E. Malkamaki and H. Leib, “Performance of truncated type-II hybrid ARQ schemes with noisy feedback over block fading channels,” _IEEE Transactions on Communications_ , vol. 48, no. 9, pp. 1477–1487, Sep. 2000.
* [29] Z. Ahmad, I. Ahmad, D. J. Love, and B. Smida, “Analysis of two-unicast network-coded hybrid-ARQ with unreliable feedback,” _IEEE Transactions on Vehicular Technology_ , vol. 67, no. 11, pp. 10 871–10 885, 2018.
* [30] K. Xu, W. Ma, L. Zhu, Y. Xu, Y. Gao, D. Zhang, and W. Xie, “NTC-HARQ: Network–turbo-coding based HARQ protocol for wireless broadcasting system,” _IEEE Transactions on Vehicular Technology_ , vol. 64, no. 10, pp. 4633–4644, 2014.
* [31] H. Chen, R. G. Maunder, and L. Hanzo, “A survey and tutorial on low-complexity turbo coding techniques and a holistic hybrid ARQ design example,” _IEEE Communications Surveys & Tutorials_, vol. 15, no. 4, pp. 1546–1566, 2013.
* [32] G. Hu, K. Xu, and Y. Xu, “ARNC multicasting of HDCP data for cooperative mobile devices with dual interfaces,” _IEEE Communications Letters_ , vol. 21, no. 11, pp. 2504–2507, 2017.
* [33] U. Madhow, _Fundamentals of digital communication_. Cambridge University Press, 2008.
* [34] A. Yadav, M. Goonewardena, W. Ajib, O. A. Dobre, and H. Elbiaze, “Energy management for energy harvesting wireless sensors with adaptive retransmission,” _IEEE Transactions on Communications_ , vol. 65, no. 12, pp. 5487–5498, Dec. 2017.
* [35] Y. Polyanskiy, H. V. Poor, and S. Verdú, “Dispersion of Gaussian channels,” in _Proc. IEEE International Symposium on Information Theory_ , 2009, pp. 2204–2208.
* [36] Y. A. Brychkov, “On some properties of the Marcum Q-function,” _Integral Transforms and Special Functions_ , vol. 23, no. 3, pp. 177–182, 2012.
* [37] S. W. H. Shah, A. N. Mian, S. Mumtaz, and J. Crowcroft, “System capacity analysis for ultra-dense multi-tier future cellular networks,” _IEEE Access_ , vol. 7, pp. 50 503–50 512, Apr. 2019.
* [38] G. Farhadi and N. C. Beaulieu, “On the ergodic capacity of multi-hop wireless relaying systems,” _IEEE Transactions on Wireless Communications_ , vol. 8, no. 5, pp. 2286–2291, May 2009.
arXiv: https://arxiv.org/abs/2107.12217
Authors: Syed Waqas Haider Shah, Muhammad Mahboob-ur-Rahman, Adnan Noor Mian, Octavia A. Dobre, and Jon Crowcroft
License: Creative Commons Attribution 4.0 (https://creativecommons.org/licenses/by/4.0/)
Submitted: 2021-07-26

arXiv: 2107.12221
# Thermal resonance in cancer
Umberto Lucia 1,a & Giulia Grisolia 1,b
1 Dipartimento Energia “Galileo Ferraris”, Politecnico di Torino
Corso Duca degli Abruzzi 24, 10129 Torino, Italy
a [email protected]
b [email protected]
###### Abstract
At the end of the second decade of the 20th century, Warburg showed that
cancer cells exhibit a fermentative respiration process, related to a
metabolic injury. Here, we develop an analysis of the cell process based on
its heat outflow, with the aim of controlling cancer progression. Engineering
thermodynamics represents a powerful approach for this analysis, and we
introduce its methods to biosystems, in relation to heat outflow and its
control. Cells regulate their metabolism by energy and ion flows, and the
heat flux is controlled by the convective interaction with their environment.
We introduce the characteristic frequency of a biosystem, its
biothermodynamic characteristic frequency, which can be evaluated by a
classical heat-transfer approach. Resonance forces the natural behaviour of a
system; here, we introduce it in order to control the fluxes through the
cancer cell membrane, and thereby the cellular metabolic processes and,
consequently, the energy available to the cancer for its growth. The result
obtained in some experiments is that the cancer growth rate can be reduced.
Keywords: Cancer; Irreversibility; ELF; Thermal resonance; Entropy.
## 1 Introduction
Complex systems are non-linear dynamical systems, composed of interacting
subsystems, able to adapt to external perturbations from their environment
[1]. The physics and chemistry of complex systems arose as an evolving
interdisciplinary science from the theory of dynamical systems in the 1960s
[2]. They allow us to model many phenomena, such as self-replicating
structures, non-equilibrium pattern formation, and fluid dynamics, but also
cancer growth [3, 4].
In the biological and medical sciences, evolution is treated as a strategy of
life at the level of the organism [5], based on an interplay of genetic
variation and phenotypic selection [6]: genes, and their variants, are
selected in relation to their ability to encode functions useful for organism
survival [7]. This last consideration is particularly true for cancer; indeed,
cancer has been modelled as an adaptive system, based on natural selection,
which allows any single cancer cell to become independent of its neighbours
[8].
Recent progress in complex-systems approaches to cancer has pointed out that
cancer is a complex adaptive system [9, 2]: heterogeneous clonal expansion,
replicative immortality, patterns of longevity, rewired metabolic pathways,
altered reactive oxygen species, evasion of death signals, metastatic
invasion, etc. are all indications of its complex adaptive nature [8]. Indeed,
the fundamental properties of complex systems, and of cancer in particular,
are non-linearity, emergence, self-organization, internal interconnection,
etc. [3, 10]. In particular [5]:
* •
Cells behave as agents: they are a set of active elements which interact in a
selective way;
* •
Cancer cells activate genes that are turned off in normal tissue, in order to
improve the characteristics useful for survival: this mechanism generates rules;
* •
Only the cells with similar adaptive mutations can survive: the components of
the system gather together in relation to their similar abilities;
* •
Cancer behaviour is non-linear;
* •
Genetic instability allows cancer to adapt easily and to expand.
Consequently, a new viewpoint emerges in which organisms are modelled as
highly regulated, complex, dynamic systems in a meta-stable state around
homeostatic levels [11]. This meta-stability is the net result of
fluctuations, amplifications, rhythms, networks and feedback cycles [11, 12,
13], due to the continuous oscillations of living systems between order and
chaos, which promote survival. In relation to oscillations, the phenomenon of
resonance is well known in physics: any system presents a proper oscillation
frequency, and it can be forced into vibration if excited by a wave
(mechanical or electromagnetic) at frequencies close to its resonant one [14].
From a thermodynamic viewpoint, a cell is an open system, able to convert its
metabolic energy into mechanical and chemical work. The metabolic energy can
be modelled as the heat inflow of a thermodynamic system. Consequently, cells
can be modelled as thermodynamic engines, which convert part of the inflowing
heat into work [15]. In this context, normal and cancer cells present two
different cellular metabolisms [16]:
* •
The Krebs cycle: a series of chemical reactions used by all aerobic organisms
to release stored energy through the oxidation of acetyl-CoA derived from
carbohydrates, fats, and proteins;
* •
The Warburg cycle: a form of modified cellular metabolism found in cancer
cells, which tend to favour a specialised fermentation over the aerobic
respiration pathway that most other cells of the body prefer.
Indeed, in 1931 the Nobel laureate Otto Warburg showed that cancer cells,
compared with normal ones, follow a different respiration pathway,
characterized by glucose fermentation even when there is no lack of oxygen,
and highlighted how this variation of metabolism was caused by a metabolic
injury [17]. Furthermore, the cytoplasmic pH of cells, and that of the
extracellular environment, are directly linked to the cell membrane potential
[18]. Comparing the polarization of quiescent cells with that of
differentiated ones, the latter are hyperpolarized [19].
Any cell, as a thermodynamic engine, must release heat towards its environment
[15, 20, 21], so we expect the two different cycles to present two different
heat outflows through the cell membrane. In order to model this process, we
can consider an equivalent electric circuit for the cell membrane. In an
electric circuit, both transient and resonant phenomena can occur; so, we
consider the possible equivalent behaviour in the heat transfer from the cell
to its environment.
In this paper, we analyse the resonant heat transfer through the cancer cell
membrane, and show how low-frequency electromagnetic waves can influence it,
with a consequent decrease in cancer growth.
## 2 Materials and Methods
Warburg identified the metabolic injury in cancer, pointing out the important
role played by energy conversion in biosystems [17]. Cellular biochemical
reactions convert external metabolites, considered as an inflow of energy
(the inflowing heat of a direct thermodynamic cycle), into work (cell
replication, protein synthesis, DNA and RNA transcription and translation,
etc.) and a wasted heat outflow towards the cell environment [15, 20]. Cells
exchange energy and matter through their membrane [22], driven by endogenous
electric fields [23].
The living cell membrane is a lipid bilayer that separates the cytoplasm from
the external environment. In the membrane, some proteins act as channels,
across which the inflows and outflows of mass and ions occur. It is usual to
model the cell membrane as an electric RC circuit (Figure 1) [24].
Figure 1: Electric analogy of a cell membrane. The cell membrane can be
considered as a parallel RC circuit [24].
This kind of circuit presents both transient and resonant behaviour, depending
on whether a step or a harmonic signal is applied [25, 26]. The transient
behaviour of the circuit can be obtained from the current that flows across
the resistor of resistance $R$ during the charge and the discharge of the
capacitor [25, 26]:
$i(t)=\frac{V_{0}}{R}\,e^{-t/\tau_{el}}$ (1)
where $i(t)$ is the current, $V_{0}$ is the electric potential applied to the
capacitor, $R$ is the electric resistance, and $\tau_{el}=RC$ is the
characteristic time of the system. This characteristic time, related to the
transient electric phenomenon, is also associated with a resonant frequency;
indeed [25, 26]:
$\nu_{el}=\frac{1}{2\pi\,\tau_{el}}$ (2)
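The quantities in Eqs. (1) and (2) are straightforward to evaluate numerically. Below is a minimal Python sketch; the membrane resistance and capacitance values are purely illustrative placeholders, not measured cell parameters.

```python
import math

def rc_characteristic(R, C):
    """Return the characteristic time tau_el = R*C of the parallel RC
    membrane model and the associated frequency nu_el = 1/(2*pi*tau_el)."""
    tau = R * C
    return tau, 1.0 / (2.0 * math.pi * tau)

def discharge_current(t, V0, R, tau):
    """Transient current i(t) = (V0/R) * exp(-t/tau) of Eq. (1)."""
    return (V0 / R) * math.exp(-t / tau)

# Illustrative (hypothetical) values: R = 100 Mohm, C = 100 pF.
tau_el, nu_el = rc_characteristic(R=1e8, C=1e-10)  # tau_el = 10 ms
```

At $t=\tau_{el}$ the current has decayed to $1/e$ of its initial value, which is the usual operational definition of the characteristic time.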
For the heat transfer across the membrane, we can consider a lumped
thermo-kinetic model. The cell exchanges heat power with its environment,
recalling that the heat flux is related to its metabolism. This heat outflow
occurs by convection with the fluids surrounding the cell, and we have [27]:
$\dot{Q}=\rho_{cell}Vc_{cell}\frac{dT_{cell}}{dt}=\alpha A(T_{cell}-T_{env})$
(3)
where $\dot{Q}$ is the heat power exchanged by convection, $\rho_{cell}$ is
the cell mass density, $V$ is the volume of the cell, $c_{cell}$ is the
specific heat of the cell, $T_{cell}$ is the cell temperature, $\alpha$ is the
coefficient of convection, $A$ is the surface area of the cell, which varies
during the phases of the development of the cell itself, and
$T_{cell}-T_{env}$ is the temperature difference between the cell temperature
and the environment temperature. As usually done in heat transfer, it is
possible to obtain the characteristic time $\tau_{th}$ for the thermal
transient [28]:
$\tau_{th}=\frac{\rho_{cell}c_{cell}}{\alpha}\,\frac{V}{A}$ (4)
In analogy with the circuit model of the cell membrane, we expect that there
exists a resonant effect with frequency $\nu_{th}\approx 1/\tau_{th}$, under
the hypothesis that $\nu_{el}=\nu_{th}$, since the electric circuit is only a
theoretical model of the cell membrane.
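To make Eq. (4) concrete, the sketch below evaluates $\tau_{th}$ for a spherical cell, for which $V/A=r/3$. All numerical values are placeholders chosen only to show the mechanics of the calculation; they are not the cell-line-specific parameters used in [29], so the resulting frequency should not be compared with Table 1.

```python
def tau_thermal(rho, c, alpha, radius):
    """Thermal characteristic time of Eq. (4) for a spherical cell:
    tau_th = (rho * c / alpha) * (V / A), with V/A = r/3 for a sphere."""
    return (rho * c / alpha) * (radius / 3.0)

# Placeholder, water-like parameters (assumptions, not measured values):
rho = 1000.0   # cell mass density [kg/m^3]
c = 4186.0     # specific heat [J/(kg K)]
alpha = 1e4    # convection coefficient [W/(m^2 K)], hypothetical
r = 10e-6      # cell radius [m]

tau_th = tau_thermal(rho, c, alpha, r)
nu_th = 1.0 / tau_th   # resonant-frequency estimate, nu_th ~ 1/tau_th
```

The volume-to-area ratio $V/A$ is the only geometric input, which is why it plays a central role in the discussion of the results below.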
So, if we irradiate the cancer with an electromagnetic wave at this resonance
frequency $\nu_{th}$, we expect to force the heat outflow from the cancer cell
to the environment. Indeed, the heat power outflow of the equivalent electric
circuit is:
$\dot{Q}=RI_{M}^{2}\,\sin^{2}\left(2\pi\,\nu_{th}t\right)$ (5)
where $I_{M}$ is the maximum value of the electric current in the equivalent
circuit. At the resonant state, the heat outflow reaches the maximum value
obtainable. Consequently, the cancer cell has less energy available for some
biochemical processes, such as differentiation, with a resulting decrease of
its growth, because the increased heat outflow makes the cancer cell less
hyperpolarized, as can be shown by considering the Nernst equation for the
cell membrane [15]:
$\Delta\phi=\Delta G-2.3\frac{R_{u}T_{env}}{F}\,\Delta\text{pH}=\Delta H-\dot{Q}\,\tau_{th}-2.3\frac{R_{u}T_{env}}{F}\,\Delta\text{pH}$ (6)
where $\phi$ is the cell membrane electric potential, $H$ is the enthalpy,
$R_{u}$ is the universal gas constant, $F$ is the Faraday constant, and pH is
the potential of hydrogen; we have used the fact that the heat power outflow
is $\dot{Q}=T_{env}\Sigma$, where $\Sigma$ is the entropy production rate in
the environment, and that the heat is $Q=\dot{Q}\tau_{th}$. In order to test
this prediction, we have performed experiments, which confirm these results.
## 3 Results
Following the second law of thermodynamics, all biochemical processes require
energy, and any energy conversion process generates outflows of energy. Thus,
it is possible to analyse the behaviour of the cell system with an
engineering-thermodynamics approach, considering the energy and mass
balances. In cancer cells, alterations of some processes related to energy and
ion channelling have been shown, reducing their proliferation control. Heat
transfer through the cell membrane can be described by a lumped thermo-kinetic
biophysical model. So, we can analyse the cell as a black box, the usual
approach in engineering thermodynamics, considering all the internal
biochemical reactions of the cell as the causes of the wasted heat outflow.
The fundamental physical quantities that control the biochemical reactions
can, in turn, be controlled through the heat transfer. Indeed, an
electromagnetic wave at the thermal resonant frequency can force the heat
transfer, with a related change of the membrane electric potential and of the
pH, conditioning the biochemical reactions and forcing them towards a normal
behaviour.
The experimental proof of these theoretical results is shown in Table 1. It is
possible to point out that:
* •
Electromagnetic waves at the thermal cell resonant frequency reduce the growth
rate of the cancer;
* •
The phenomenon is selective with respect to the frequency used, as it must be
for a resonant process.
Table 1: Growth variation of some cancer cell lines after exposure to the calculated resonant frequencies [29, 30].
Cell line | Frequency [Hz] | Growth variation [%]
---|---|---
A375P | $31$ | $-15$
HT-29 | $24$ | $-19$
GTL16 | $14$ | $-24$
MCF7 | $5$ | $-22$
MDA-MB-231 | $6$ | $-18$
SKBR3 | $8$ | $-18$
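Reading Table 1 together with the model of Section 2, the relation $\nu_{th}\approx 1/\tau_{th}$ gives the characteristic time implied by each applied frequency. A minimal sketch of this inversion (frequencies copied from Table 1):

```python
# Frequencies [Hz] from Table 1 and the implied characteristic
# times tau_th = 1/nu_th [s], using nu_th ~ 1/tau_th from Section 2.
frequencies_hz = {
    "A375P": 31.0, "HT-29": 24.0, "GTL16": 14.0,
    "MCF7": 5.0, "MDA-MB-231": 6.0, "SKBR3": 8.0,
}
tau_th_s = {line: 1.0 / nu for line, nu in frequencies_hz.items()}
```

The implied times range from tens of milliseconds (A375P) to a fifth of a second (MCF7), i.e. the exciting waves lie in the ELF band used in the experiments.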
## 4 Discussion and Conclusions
The temperature difference between the inside and the outside of any living
cell is fundamental for cell life, because the associated heat flow
contributes to the entropy variation, with a related reorganisation of the
cell itself. The heat outflow, and the related entropy production, are caused
by the biochemical and biophysical processes inside the cell.
In this paper, we have developed the analysis of the thermal resonance of the
cell membrane in relation to the heat exchanged by convection. The results
obtained highlight the role of the cell volume-to-area ratio in the control of
the heat fluxes, with particular regard to the thermal resonant state of the
living cell.
We have pointed out the existence of a proper response time of any cell line
to the heat exchange, as expected for resonant phenomena. This time is related
to the cell volume-to-area ratio, a geometrical parameter fundamental for
considerations on the fluxes and on the variation of the cell membrane
electric potential.
Here, we have improved our previous results [31, 32, 33] by focusing our
analysis on the equivalent electric-circuit model of the membrane. This is a
fundamental result, because it links our usual entropic analysis to the
accepted membrane model in the literature. In this way, we can explain the
experimental results by combining the entropic analysis, developed in our
previous papers, with the electric model of the membrane, not considered
before this paper. The results obtained with these different approaches
converge to the same experimental findings.
## References
* [1] J. Ladyman and K. Wiesner, What Is a Complex System?, Yale University Press, New Haven, 2020.
* [2] A. Uthamacumaran, A Review of Complex Systems Approaches to Cancer Networks, Complex Systems 20, 779–835 (2020).
* [3] S. Wolfram, A New Kind of Science, Wolfram Media Inc., Champaign, 2002.
* [4] Y. Jiao and S. Torquato, Emergent Behaviors from a Cellular Automaton Model for Invasive Tumor Growth in Heterogeneous Microenvironments, PLoS Computational Biology 7, e1002314 (2011).
* [5] K. J. Pienta, Modeling Cancer as A Complex Adaptive System: Genetic Instability and Evolution, in Complex Systems Science in Biomedicine, edited by T. S. Deisboeck and J. Y. Kresh, pages 537–556, Springer, Boston, 2006.
* [6] M. Radman, I. Matic, and F. Taddei, Evolution of evolvability, Annals of the New York Academy Sciences 870, 146–155 (1999).
* [7] M. Greaves, Cancer causation: the Darwinian downside of past success?, The Lancet Oncology 3, 244–251 (2002).
* [8] D. Hanahan and R. A. Weinberg, Hallmarks of Cancer: The Next Generation, Cell 144, 646–674 (2011).
* [9] E. D. Schwab and K. J. Pienta, Cancer as a complex adaptive system, Medical Hypothesis 47, 235–241 (1996).
* [10] C. Gros, Complex and Adaptive Dynamical Systems: A Primer, Springer, Heidelberg, 2011.
* [11] P. Bellavite, S. Lussignoli, M. L. Semizzi, R. Ortolani, and A. Signorini, The similia principle: From cellular models to regulation of homeostasis, British Homoeopathic journal 86, 73–85 (1997).
* [12] P. Bellavite, G. Andrioli, S. Lussignoli, A. Signorini, R. Ortolani, and A. Conforti, A scientific reappraisal of the “Principle of Similarity”, Medical Hypotheses 49, 203–212 (1997).
* [13] F. Cramer, Chaos and Order. The Complex Structure of Living Systems, VCH, Weinheim, 1993.
* [14] R. Feynman, R. Leighton, and M. Sands, The Feynman Lectures on Physics, Volume I, Addison Wesley, Reading, 2005.
* [15] A. Katchalsky and O. Kedem, Thermodynamics of Flow Processes in Biological Systems, Biophysical Journal 2, 53–78 (1962).
* [16] D. Voet and J. G. Voet, Biochemistry (3rd ed.), John Wiley & Sons, New York, 2004.
* [17] O. Warburg, F. Wind, and E. Negelein, The metabolism of tumors in the body, Journal of General Physiology 8, 519 (1927).
* [18] R. Binggeli and I. L. Cameron, Cellular potentials of normal and cancerous fibroblasts and hepatocytes, Cancer Research 40, 1830 (1980).
* [19] A. Becchetti, Ion channels and transporters in cancer. 1. Ion channels and cell proliferation in cancer, American Journal of Physiology-Cell Physiology 301, C255 (2011).
* [20] E. Schrödinger, What’s life? The Physical Aspect of the Living Cell, Cambridge University Press, Cambridge, 1944.
* [21] H. B. Callen, Thermodynamics, Wiley, New York, 1960.
* [22] S. R. Caplan and A. Essig, Bioenergetics and Linear Nonequilibrium Thermodynamics. The Steady State, Harvard University Press, Cambridge, 1983.
* [23] C. Bustamante, Y. R. Chemla, N. R. Forde, and D. Izhaky, Mechanical processes in biochemistry, Annual Review of Biochemistry 73, 705 (2004).
* [24] D. Johnston and S. M.-S. Wu, Foundations of Cellular Neurophysiology, MIT Press, Cambridge, 1994.
* [25] P. Horowitz and W. Hill, The Art of Electronics, Cambridge University Press, Cambridge, 2015.
* [26] E. M. Purcell and D. J. Morin, Electricity and Magnetism, Cambridge University Press, Cambridge, 2013.
* [27] A. Bejan, Shape and Structure, from Engineering to Nature, Cambridge University Press, Cambridge, 2000.
* [28] A. Bejan, Heat Transfer, Wiley, New York, 2011.
* [29] U. Lucia, G. Grisolia, A. Ponzetto, and F. Silvagno, An engineering thermodynamic approach to select the electromagnetic wave effective on cell growth, Journal of Theoretical Biology 429, 181–189 (2017).
* [30] L. Bergandi, U. Lucia, G. Grisolia, R. Granata, I. Gesmundo, A. Ponzetto, E. Paolucci, R. Borchiellini, E. Ghigo, and F. Silvagno, The extremely low frequency electromagnetic stimulation selective for cancer cells elicits growth arrest through a metabolic shift, Biochimica et Biophysica Acta 1866, 1389–1397 (2019).
* [31] U. Lucia, G. Grisolia, A. Ponzetto, L. Bergandi, and F. Silvagno, Thermomagnetic resonance affects cancer growth and motility: Thermomagnetic resonance and cancer, Royal Society Open Science 7, 200299 (2020).
* [32] U. Lucia and G. Grisolia, Resonance in thermal fluxes through cancer membrane, AAPP Atti della Accademia Peloritana dei Pericolanti 98, SC11–SC16 (2020).
* [33] U. Lucia and G. Grisolia, Thermal resonance and cell behavior, Entropy 22, 774 (2020).
|
arxiv-papers
| 2021-07-26T13:59:40 |
2024-09-04T03:07:18.740912
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Umberto Lucia, Giulia Grisolia",
"submitter": "Umberto Lucia Prof.",
"url": "https://arxiv.org/abs/2107.12221"
}
|
2107.12224
|
# Local2Global: Scaling global representation learning on graphs via local
training
Lucas G. S. Jeub The Alan Turing Institute [email protected] , Giovanni
Colavizza University of Amsterdam [email protected] , Xiaowen Dong
University of Oxford [email protected] , Marya Bazzi University of
WarwickThe Alan Turing Institute [email protected] and Mihai
Cucuringu University of OxfordThe Alan Turing Institute
[email protected]
###### Abstract.
We propose a decentralised “local2global” approach to graph representation
learning, which can a priori be used to scale any embedding technique. Our
local2global approach proceeds by first dividing the input graph into
overlapping subgraphs (or “patches”) and training local representations for
each patch independently. In a second step, we combine the local
representations into a globally consistent representation by estimating the
set of rigid motions that best align the local representations using
information from the patch overlaps, via group synchronization. A key
distinguishing feature of local2global relative to existing work is that
patches are trained independently without the need for the often costly
parameter synchronisation during distributed training. This allows
local2global to scale to large-scale industrial applications, where the input
graph may not even fit into memory and may be stored in a distributed manner.
Preliminary results on medium-scale data sets (up to $\sim$7K nodes and
$\sim$200K edges) are promising, with a graph reconstruction performance for
local2global that is comparable to that of globally trained embeddings. A
thorough evaluation of local2global on large scale data and applications to
downstream tasks, such as node classification and link prediction, constitutes
ongoing work.
scalable graph embedding, distributed training, group synchronization
## 1. Introduction
The application of deep learning on graphs, or Graph Neural Networks (GNNs),
has recently gained considerable attention. Among the significant open
challenges in this area of research is the question of scalability.
Cornerstone techniques such as Graph Convolutional Networks (GCNs) (Kipf and
Welling, 2017) make the training dependent on the neighborhood of any given
node. Since in many real-world graphs the number of neighbors grows
exponentially with the number of hops taken, the scalability of such methods
is a significant challenge. In recent years, several techniques have been
proposed to make GCNs more scalable, including layer-wise sampling (Hamilton
et al., 2018) and subgraph sampling (Chiang et al., 2019) approaches (see
section 2).
We contribute to this line of work by proposing a decentralised divide-and-
conquer approach to improve the scalability of network embedding techniques.
Our “local2global” approach proceeds by first dividing the network into
overlapping subgraphs (or “patches”) and training separate local node
embeddings for each patch (local in the sense that each patch is embedded into
its own local coordinate system). The resulting local patch node embeddings
are then transformed into a global node embedding (i.e. all nodes embedded
into a single global coordinate system) by estimating a rigid motion applied
to each patch using the As-Synchronized-As-Possible (ASAP) algorithm
(Cucuringu et al., 2012b, a). A key distinguishing feature of this
“decentralised” approach is that we can train the different patch embeddings
separately, without the need to keep parameters synchronised. The benefit of
local2global is threefold: (1) it is highly parallelisable as each patch is
trained independently; (2) it can be used in privacy-preserving applications
and federated learning setups, where frequent communication between devices is
often a limiting factor (Kairouz and McMahan, 2021), or “decentralized”
organizations, where one needs to simultaneously consider data sets from
different departments; (3) it can reflect varying structure across a graph
through asynchronous parameter learning. Another important advantage of our
local2global approach is that it can be directly applied to improve the
scalability of a large variety of network embedding techniques (Goyal and
Ferrara, 2018), unlike most of the existing approaches reviewed in section 2
which are restricted to GCNs.
## 2. Related work
The key scalability problems for GCNs only concern deep architectures where we
have $l$ nested GCN layers. In particular, a single-layer GCN is easy to train
in a scalable manner using mini-batch stochastic gradient descent (SGD). For
simplicity, assume that we have a fixed feature dimension $d$, i.e.,
$d_{\text{in}}=d_{\text{out}}=d$ for all layers. The original GCN paper (Kipf
and Welling, 2017) uses full-batch gradient descent to train the model which
entails the computation of the gradient for all nodes before updating the
model parameters. This is efficient in terms of time complexity per epoch
($O(lmd+lnd^{2})$) where $n$ is the number of nodes and $m$ is the number of
edges. However, it requires storing all the intermediate embeddings and thus
has memory complexity $O(lnd+ld^{2})$. Further, as there is only a single
parameter update per epoch, convergence tends to be slow.
The problem with applying vanilla mini-batch SGD (where we only compute the
gradient for a sample of nodes, i.e., the batch) to a deep GCN model is that
the embedding of the nodes in the final layer depends on the embedding of all
the neighbours of the nodes in the previous layer and so on iteratively.
Therefore the time complexity for a single mini-batch update approaches that
for a full-batch update as the number of layers increases, unless the network
has disconnected components. There are mainly three families of methods (Chen
et al., 2020; Chiang et al., 2019) that have been proposed to make mini-batch
SGD training more efficient for GCNs.
Layer-wise sampling.:
The idea behind layer-wise sampling is to sample a set of nodes for each layer
of the nested GCN model and compute the embedding for sampled nodes in a given
layer only based on embeddings of sampled nodes in the previous layer rather
than considering all the neighbours as would be the case for vanilla SGD. This
seems to have first been used by GraphSAGE (Hamilton et al., 2018), where a
fixed number of neighbours is sampled for each node at each layer. However,
this results in a computational complexity that is exponential in the number
of layers and also redundant computations as the same intermediate nodes may
be sampled starting from different nodes in the batch. Later methods avoid the
exponential complexity by first sampling a fixed number of nodes for each
layer either independently (FastGCN (Chen et al., 2018a)) or conditional on
being connected to sampled nodes in the previous layer (LADIES (Zou et al.,
2019)) and reusing embeddings. Both methods use importance sampling to correct
for bias introduced by non-uniform node-sampling distributions. Also notable
is (Chen et al., 2018b), which uses variance reduction techniques to
effectively train a GCN model using neighbourhood sampling as in GraphSAGE
with only 2 neighbours per node. However, this is achieved by storing hidden
embeddings for all nodes in all layers and thus has the same memory complexity
as full-batch training.
Linear model.:
Linear models remove the non-linearities between the different GCN layers
which means that the model can be expressed as a single-layer GCN with a more
complicated convolution operator and hence trained efficiently using mini-
batch SGD. Common choices for the convolution operator are powers of the
normalised adjacency matrix (Wu et al., 2019) and variants of personalised
Page-Rank (PPR) matrices (Busch et al., 2020; Chen et al., 2020; Bojchevski et
al., 2020; Klicpera et al., 2019). Another variant of this approach is (Frasca
et al., 2020), which proposes combining different convolution operators in a
wide rather than deep architecture. There are different variants of the linear
model architecture, depending on whether the non-linear feature transformation
is applied before or after the propagation (see (Busch et al., 2020) for a
discussion), leading to predict-propagate and propagate-predict architectures
respectively. The advantage of the propagate-predict architecture is that one
can pre-compute the propagated node features (e.g., using an efficient push-
based algorithm (Chen et al., 2020)) which can make training highly scalable.
The disadvantage is that this will densify sparse features which can make
training harder (Bojchevski et al., 2020). However, the results from (Busch et
al., 2020) suggest that there is usually not much difference in prediction
performance between these options (or the combined architecture where
trainable transformations are applied before and after propagation).
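As a concrete illustration of the propagate-predict idea, the numpy sketch below pre-computes $\hat{\bm{A}}^{k}\bm{X}$ with the symmetrically normalised adjacency (with self-loops), after which any scalable linear classifier can be trained on the propagated features with mini-batch SGD. This is a generic sketch in the spirit of (Wu et al., 2019), not code from the works cited above.

```python
import numpy as np

def propagate_features(A, X, k=2):
    """Pre-compute hat(A)^k @ X, where hat(A) = D^{-1/2} (A + I) D^{-1/2}
    is the symmetrically normalised adjacency with self-loops."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    P = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    for _ in range(k):
        X = P @ X           # one sparse-friendly propagation step
    return X

# Toy example: 4-node path graph, one-hot node features.
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
Z = propagate_features(A, np.eye(4), k=2)  # features after 2 hops
```

Because $\hat{\bm{A}}^{k}\bm{X}$ is computed once up front, each subsequent SGD step touches only the feature rows in the batch, which is what makes the propagate-predict architecture scalable.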
Subgraph sampling.:
Subgraph sampling techniques (Zeng et al., 2019; Chiang et al., 2019; Zeng et
al., 2020) construct batches by sampling an induced subgraph of the full
graph. In particular, for subgraph sampling methods, the sampled nodes in each
layer of the model in a batch are the same. In practice, subgraph sampling
seems to outperform layer-wise sampling (Chen et al., 2020). GraphSAINT (Zeng
et al., 2020), which uses a random-walk sampler with an importance sampling
correction similar to (Chen et al., 2018a; Zou et al., 2019), seems to have
the best performance so far. Our local2global approach shares similarities
with subgraph sampling, most notably ClusterGCN (Chiang et al., 2019), which
uses graph clustering techniques to sample the batches. The key distinguishing
feature of our approach is that we train independent models for each patch
whereas for ClusterGCN, model parameters have to be kept in sync for different
batches, which hinders fully distributed training and its associated key
benefits (see section 1).
## 3. LOCAL2GLOBAL algorithm
The key idea behind the local2global approach to graph embedding is to embed
different parts of a graph independently by splitting the graph into
overlapping “patches” and then stitching the patch node embeddings together to
obtain a global node embedding. The stitching of the patch node embeddings
proceeds by estimating the rotations/reflections and translations for the
embedding patches that best aligns them based on the overlapping nodes.
Consider a graph $G(V,E)$ with node set $V$ and edge set $E$. The input for
the local2global algorithm is a patch graph $G_{p}(\mathcal{P},E_{p})$, where
each node (i.e., a “patch”) of the patch graph is a subset of $V$ and each
patch $P_{k}\in\mathcal{P}$ is associated with an embedding
$\bm{X}^{(k)}\in\mathbb{R}^{|P_{k}|\times d}$. We require that the set of
patches $\mathcal{P}=\\{P_{k}\\}_{k=1}^{p}$ is a cover of the node set $V$
(i.e., $\bigcup_{k=1}^{p}P_{k}=V$), and that the patch embeddings all have the
same dimension $d$. We further assume that the patch graph is connected and
that the patch edges satisfy the minimum overlap condition
$\\{P_{i},P_{j}\\}\in E_{p}\implies|P_{i}\cap P_{j}|\geq d+1$. Note that a
pair of patches that satisfies the minimum overlap condition is not
necessarily connected in the patch graph.
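The input conditions above (the patches cover $V$, and each patch edge has an overlap of at least $d+1$ nodes) are cheap to verify. Below is a minimal sketch with an illustrative toy patch decomposition; connectivity of the patch graph, also required, is not checked here.

```python
def valid_patch_graph(patches, patch_edges, n_nodes, d):
    """Check the local2global input conditions: the patches cover the
    node set {0, ..., n_nodes-1} and every patch edge {P_i, P_j}
    satisfies the minimum overlap condition |P_i & P_j| >= d + 1."""
    covers = set().union(*patches) == set(range(n_nodes))
    overlaps_ok = all(len(patches[i] & patches[j]) >= d + 1
                      for i, j in patch_edges)
    return covers and overlaps_ok

# Toy example: 6 nodes, embedding dimension d = 1, two overlapping patches.
patches = [frozenset({0, 1, 2, 3}), frozenset({2, 3, 4, 5})]
ok = valid_patch_graph(patches, [(0, 1)], n_nodes=6, d=1)  # overlap {2, 3}
```

The same toy decomposition fails for d = 2, since the overlap then needs at least 3 shared nodes.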
The local2global algorithm for aligning the patch embeddings proceeds in two
stages and is an evolution of the approach in (Cucuringu et al., 2012b, a). We
assume that each patch embedding $\bm{X}^{(k)}$ is a perturbed part of an
underlying global node embedding $\bm{X}$, where the perturbation is composed
of reflection ($Z_{2}$), rotation (SO($d$)), translation ($\mathbb{R}^{d}$),
and noise. The goal is to estimate the transformation applied to each patch
using only pairwise noisy measurements of the relative transformation for
pairs of connected patches. In the first stage, we estimate the orthogonal
transformation to apply to each patch embedding, using a variant of the
eigenvector synchronisation method (Singer, 2011; Cucuringu et al., 2012b, a).
In the second stage, we estimate the patch translations by solving a least-
squares problem. Note that unlike (Cucuringu et al., 2012b, a), we solve for
translations at the patch level rather than solving a least squares problem
for the node coordinates. This means that the computational cost for computing
the patch alignment is independent of the size of the original network and
depends only on the amount of patch overlap, the number of patches and the
embedding dimension.
### 3.1. Eigenvector synchronisation over orthogonal transformations
We assume that to each patch $P_{i}$, there corresponds an unknown group
element $S_{i}\in O(d)\simeq Z_{2}\times SO(d)$ (represented by a $d\times d$
orthogonal matrix), and for each pair of connected patches $(P_{i},P_{j})\in
E_{p}$ we have a noisy proxy for $S_{i}S_{j}^{-1}$, which is precisely the
setup of the group synchronization problem.
For a pair of connected patches $P_{i},P_{j}\in\mathcal{P}$ such that
$\\{P_{i},P_{j}\\}\in E_{p}$, we can estimate the relative rotation/reflection
by applying the method from (Horn et al., 1988) to their overlap, as
$|P_{i}\cap P_{j}|\geq d+1$ (note that the rotation/reflection can be
estimated without knowing the relative translation). Thus, we can construct a
block matrix $\bm{R}$ where $\bm{R}_{ij}$ is the $d\times d$ orthogonal matrix
representing the estimated relative transformation from patch $P_{j}$ to patch
$P_{i}$ if $\\{P_{i},P_{j}\\}\in E_{p}$ and $\bm{R}_{ij}=\bm{0}$ otherwise,
such that $\bm{R}_{ij}\approx\bm{S}_{i}\bm{S}_{j}^{T}$ for connected patches.
In the noise-free case, we have the consistency equations
$\bm{S}_{i}=\bm{R}_{ij}\bm{S}_{j}$ for all $i,j$ such that
$\\{P_{i},P_{j}\\}\in E_{p}$. We can combine the consistency equations for all
neighbours of a patch to get
(1)
$\bm{S}_{i}=\sum_{j}\bm{M}_{ij}\bm{S}_{j},\qquad\bm{M}_{ij}=\frac{w_{ij}\bm{R}_{ij}}{\sum_{j}w_{ij}},$
where we use $w_{ij}=|P_{i}\cap P_{j}|$ to weight the contributions as we
expect a larger overlap to give a more robust estimate of the relative
transformation. We can write eq. 1 as $\bm{S}=\bm{M}\bm{S}$, where
$\bm{S}=(\bm{S}_{1},\ldots,\bm{S}_{p})^{T}$ is a $pd\times d$ block-matrix and
$\bm{M}$ is a $pd\times pd$ block-matrix. Thus, in the noise-free case, the
columns of $\bm{S}$ are eigenvectors of $\bm{M}$ with eigenvalue 1. Hence,
following (Cucuringu et al., 2012b, a), we can use the $d$ leading
eigenvectors of $\bm{M}$ as the basis for estimating the transformations
(while $\bm{M}$ is not symmetric, it is similar to a symmetric matrix and thus
admits a basis of real, orthogonal eigenvectors). Let
$\bm{U}=(\bm{U}_{1},\ldots,\bm{U}_{p})^{T}$ be the $pd\times d$ matrix whose
columns are the $d$ leading eigenvectors of $\bm{M}$, where $\bm{U}_{i}$ is
the $d\times d$ block of $\bm{U}$ corresponding to patch $P_{i}$. We obtain
the estimate $\hat{\bm{S}}_{i}$ of $\bm{S}_{i}$ by finding the nearest
orthogonal transformation to $\bm{U}_{i}$ using an SVD (Horn et al., 1988),
and hence the estimated rotation-synchronised embedding of patch $P_{i}$ is
$\hat{\bm{X}}^{(i)}=\bm{X}^{(i)}\hat{\bm{S}}_{i}^{T}$.
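The eigenvector synchronisation step can be sketched in NumPy as follows. The function name and data layout are our own; the code exploits the fact that $\bm{M}=\bm{D}^{-1}\bm{A}$ (with $\bm{A}$ the symmetric weighted block matrix and $\bm{D}$ the block-row weight totals) is similar to the symmetric matrix $\bm{D}^{-1/2}\bm{A}\bm{D}^{-1/2}$, so a symmetric eigensolver can be used.

```python
import numpy as np

def synchronise_rotations(R_blocks, weights, d, p):
    """Eigenvector synchronisation over O(d): a minimal sketch of Sec. 3.1.

    R_blocks maps an ordered pair (i, j) of connected patches to the d x d
    orthogonal estimate of S_i S_j^T; weights maps (i, j) to w_ij = |P_i n P_j|.
    Both are assumed symmetric: R_ji = R_ij^T and w_ji = w_ij.
    """
    A = np.zeros((p * d, p * d))       # symmetric weighted block matrix
    deg = np.zeros(p)                  # total weight of each block row
    for (i, j), Rij in R_blocks.items():
        A[i*d:(i+1)*d, j*d:(j+1)*d] = weights[(i, j)] * Rij
        deg[i] += weights[(i, j)]
    # M = D^{-1} A is similar to the symmetric B = D^{-1/2} A D^{-1/2}
    s = np.repeat(1.0 / np.sqrt(deg), d)
    B = s[:, None] * A * s[None, :]
    _, vecs = np.linalg.eigh(B)
    U = s[:, None] * vecs[:, -d:]      # d leading eigenvectors of M
    S_hat = []
    for i in range(p):
        # nearest orthogonal matrix to the i-th block (polar factor via SVD)
        u, _, vt = np.linalg.svd(U[i*d:(i+1)*d, :])
        S_hat.append(u @ vt)
    return S_hat
```

In the noise-free case this recovers the transformations exactly, up to the unavoidable global gauge (a common orthogonal factor multiplying all estimates).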
### 3.2. Synchronisation over translations
After synchronising the rotation of the patches, we can estimate the
translations by solving a least squares problem. Let
$\hat{\bm{X}}_{i}^{(k)}\in\mathbb{R}^{d}$ be the (rotation-synchronised)
embedding of node $i$ in patch $P_{k}$ ($\hat{\bm{X}}_{i}^{(k)}$ is only
defined if $i\in P_{k}$). Let $\bm{T}_{k}\in\mathbb{R}^{d}$ be the translation
of patch $k$, then in the noise-free case we have the consistency equations
(2)
$\hat{\bm{X}}_{i}^{(k)}+\bm{T}_{k}=\hat{\bm{X}}_{i}^{(l)}+\bm{T}_{l},\qquad
i\in P_{k}\cap P_{l}.$
We can combine the conditions in eq. 2 for each edge in the patch graph to
obtain
(3) $\bm{B}\bm{T}=\bm{C},\qquad\bm{C}_{(P_{k},P_{l})}=\frac{\sum_{i\in
P_{k}\cap P_{l}}\hat{\bm{X}}_{i}^{(k)}-\hat{\bm{X}}_{i}^{(l)}}{|P_{k}\cap
P_{l}|},$
where $\bm{T}\in\mathbb{R}^{|\mathcal{P}|\times d}$ is the matrix such that
the $k$th row of $\bm{T}$ is the translation $\bm{T}_{k}$ of patch $P_{k}$ and
$\bm{B}\in\\{-1,1\\}^{|E_{p}|\times|\mathcal{P}|}$ is the incidence matrix of
the patch graph with entries
$\bm{B}_{(P_{k},P_{l}),j}=\delta_{lj}-\delta_{kj}$, where $\delta_{ij}$
denotes the Kronecker delta. Equation 3 defines an overdetermined linear
system that has the true patch translations as a solution in the noise-free
case. In the practical case of noisy patch embeddings, we can instead solve
eq. 3 in the least-squares sense
(4) $\hat{\bm{T}}=\operatorname*{arg\,min}_{\bm{T}\in\mathbb{R}^{p\times
d}}\left\|\bm{B}\bm{T}-\bm{C}\right\|_{2}^{2}.$
We estimate the aligned node embedding $\bar{\bm{X}}$ in a final step using
the centroid of the aligned patch embeddings of a node, i.e.,
$\bar{\bm{X}}_{i}=\frac{\sum_{\\{P_{k}\in\mathcal{P}\colon i\in
P_{k}\\}}\hat{\bm{X}}_{i}^{(k)}+\hat{\bm{T}}_{k}}{|\\{P_{k}\in\mathcal{P}\colon
i\in P_{k}\\}|}.$
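The translation step of eqs. 2-4 amounts to building the incidence matrix $\bm{B}$ and the overlap-averaged difference matrix $\bm{C}$, then solving a small least-squares problem. Below is a minimal sketch; the function name and input layout are our own.

```python
import numpy as np

def synchronise_translations(patches, embeddings, patch_edges, d):
    """Least-squares translation synchronisation (eqs. 2-4), a sketch.

    patches: list of node-id sets; embeddings: list of dicts mapping a node id
    to its rotation-synchronised coordinate in R^d; patch_edges: list of (k, l)
    pairs. Returns the p x d matrix of patch translations (up to a global shift).
    """
    p = len(patches)
    B = np.zeros((len(patch_edges), p))
    C = np.zeros((len(patch_edges), d))
    for e, (k, l) in enumerate(patch_edges):
        B[e, l], B[e, k] = 1.0, -1.0   # incidence row for edge {P_k, P_l}
        overlap = patches[k] & patches[l]
        # average of X_i^(k) - X_i^(l) over the overlap, as in eq. 3
        C[e] = np.mean([embeddings[k][i] - embeddings[l][i] for i in overlap],
                       axis=0)
    # overdetermined system B T = C, solved in the least-squares sense (eq. 4)
    T, *_ = np.linalg.lstsq(B, C, rcond=None)
    return T
```

The final node embedding is then the centroid of $\hat{\bm{X}}_{i}^{(k)}+\hat{\bm{T}}_{k}$ over all patches containing node $i$, as in the last equation above.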
### 3.3. Scalability of the local2global algorithm
The patch alignment step of local2global is highly scalable and does not
directly depend on the size of the input data. The cost for computing the
matrix $\bm{M}$ is $O(|E_{p}|od^{2})$, where $o$ is the average overlap between
connected patches (typically $o\sim d$), and the cost for computing the matrix
$\bm{C}$ is $O(|E_{p}|od)$. Both operations are trivially parallelisable over patch
edges. The translation problem can be solved with an iterative least-squares
solver with a per-iteration complexity of $O(|E_{p}|d)$. The limiting step for
local2global is usually the synchronisation over orthogonal transformations
which requires finding $d$ eigenvectors of a $d|\mathcal{P}|\times
d|\mathcal{P}|$ sparse matrix with $|E_{p}|d^{2}$ non-zero entries for a per-
iteration complexity of $O(|E_{p}|d^{3})$. This means that in the typical
scenario where we want to keep the patch size constant, the patch alignment
scales almost linearly with the number of nodes in the dataset, as we can
ensure that the patch graph remains sparse, such that $|E_{p}|$ scales almost
linearly with the number of patches. The $O(|E_{p}|d^{3})$ scaling puts some
limitations on the embedding dimension attainable with the local2global
approach, though, as we can see from the experiments in section 4.4, it
remains feasible for reasonably high embedding dimension.
The preprocessing to divide the network into patches scales as $O(m)$. The
speed-up attainable due to training patches in parallel depends on the
oversampling ratio (i.e., the total number of edges in all patches divided by
the number of edges in the original graph). As seen in section 4.4, we achieve
good results with moderate oversampling ratios.
## 4\. Experiments
### 4.1. Data sets
We consider two data sets to test the viability of the local2global approach
to graph embeddings, the Cora citation data set from (Yang et al., 2016) and
the Amazon photo data set from (Shchur et al., 2019). We consider only nodes
and edges in the largest connected component (LCC). We show some statistics of
the data sets in table 1.
| | nodes in LCC | edges in LCC | features |
|---|---|---|---|
| Cora | $2485$ | $10\,138$ | $1433$ |
| Amazon photo | $7487$ | $238\,086$ | $745$ |

Table 1. Data sets
Input: $G_{p}(\mathcal{P},E_{p})$, $G(V,E)$, target patch degree $k$
Result: sparsified patch graph $G_{p}(\mathcal{P},\tilde{E}_{p})$
foreach _$\\{P_{i},P_{j}\\}\in E_{p}$_ do
Compute conductance weight $c_{ij}$ between $P_{i}$ and $P_{j}$;
foreach _$\\{P_{i},P_{j}\\}\in E_{p}$_ do
Compute effective resistance $r_{ij}$ between $P_{i}$ and $P_{j}$ in
$G_{p}(\mathcal{P},E_{p},c)$ using the algorithm of (Spielman and Srivastava,
2011);
Let $w_{ij}=r_{ij}c_{ij}$;
Initialize $\tilde{E}_{p}$ with a maximum spanning tree of
$G_{p}(\mathcal{P},E_{p},w)$;
Sample the remaining $(k-1)p+1$ edges from $E_{p}\setminus\tilde{E}_{p}$
without replacement and add them to $\tilde{E}_{p}$, where edge
$\\{P_{i},P_{j}\\}$ is sampled with probability proportional to $w_{ij}$;
return $G_{p}(\mathcal{P},\tilde{E}_{p})$
Algorithm 1 Sparsify patch graph
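Algorithm 1 can be sketched on a small patch graph as follows. This is our own simplified transcription: effective resistances are computed from a dense Laplacian pseudoinverse rather than the near-linear solver of (Spielman and Srivastava, 2011), the maximum spanning tree is built with a simple Prim-style greedy loop, and the number of extra sampled edges is set to reach mean degree $k$.

```python
import numpy as np

def sparsify_patch_graph(p, edges, c, k, rng=None):
    """Sketch of Algorithm 1 (dense pseudoinverse; small graphs only).

    edges: list of (i, j) pairs; c: dict {(i, j): conductance weight};
    k: target mean degree. Returns the retained edge list.
    """
    rng = np.random.default_rng() if rng is None else rng
    # effective resistance r_ij = (e_i - e_j)^T L^+ (e_i - e_j)
    L = np.zeros((p, p))
    for (i, j) in edges:
        L[i, i] += c[(i, j)]; L[j, j] += c[(i, j)]
        L[i, j] -= c[(i, j)]; L[j, i] -= c[(i, j)]
    Lp = np.linalg.pinv(L)
    w = {(i, j): (Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]) * c[(i, j)]
         for (i, j) in edges}
    # initialise with a maximum spanning tree of the w-weighted patch graph
    tree, reached = [], {0}
    while len(reached) < p:
        cand = [e for e in edges if (e[0] in reached) ^ (e[1] in reached)]
        best = max(cand, key=lambda e: w[e])
        tree.append(best); reached.update(best)
    # sample the remaining edges without replacement, proportionally to w
    rest = [e for e in edges if e not in tree]
    n_extra = min(len(rest), max(0, k * p // 2 - len(tree)))
    probs = np.array([w[e] for e in rest]); probs /= probs.sum()
    idx = rng.choice(len(rest), size=n_extra, replace=False, p=probs)
    return tree + [rest[i] for i in idx]
```

Seeding the result with a spanning tree guarantees that the sparsified patch graph stays connected, which the alignment steps require.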
Input: $\mathcal{C}$, $E_{p}$, $G(V,E)$, min overlap $l$, max overlap $u$
Result: Overlapping patches $\mathcal{P}$
Initialise $\mathcal{P}=\mathcal{C}$;
Define the neighbourhood of a set of nodes $U$ as $N(U)=\\{v\in V\colon\exists u\in U,\\{u,v\\}\in E\\}$;
foreach _$P_{i}\in\mathcal{P}$_ do
foreach _$P_{j}$ s.t. $\\{P_{i},P_{j}\\}\in E_{p}$_ do
Let $F=N(C_{i})\cap C_{j}$;
while _$|P_{i}\cap C_{j}| <l/2$_ do
if _$|F|+|P_{i}\cap C_{j}| >u/2$_ then
reduce $F$ by sampling uniformly at random such that $|F|=u/2-|P_{i}\cap
C_{j}|$;
Let $P_{i}=P_{i}\cup F$;
Let $F=(N(F)\cap C_{j})\setminus P_{i}$;
return _$\mathcal{P}$_
Algorithm 2 Create overlapping patches
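A minimal Python transcription of Algorithm 2 is sketched below. It is our own reading of the pseudocode: we add a guard against an empty frontier (which would otherwise loop forever when a cluster pair cannot reach the requested overlap), and the function name is hypothetical.

```python
import random

def create_overlapping_patches(clusters, patch_edges, adj, l, u):
    """Sketch of Algorithm 2: grow each cluster into its neighbouring
    clusters until every connected pair of patches overlaps in >= l nodes,
    capping the contribution from either side at u/2.

    clusters: list of node sets; adj: dict {node -> set of neighbours};
    l, u: min/max overlap. Returns the list of patches.
    """
    patches = [set(C) for C in clusters]

    def N(U):  # neighbourhood of a node set U in G
        return set().union(*(adj[i] for i in U)) - U

    for (i, j) in patch_edges:
        for a, b in ((i, j), (j, i)):       # expand both ends of the edge
            F = N(clusters[a]) & clusters[b]
            while len(patches[a] & clusters[b]) < l / 2 and F:
                if len(F) + len(patches[a] & clusters[b]) > u / 2:
                    # trim the frontier uniformly at random to respect u/2
                    size = int(u / 2 - len(patches[a] & clusters[b]))
                    F = set(random.sample(sorted(F), size))
                patches[a] |= F
                F = (N(F) & clusters[b]) - patches[a]
    return patches
```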
### 4.2. Patch graph construction
The first step in the local2global embedding pipeline is to divide the network
$G(V,E)$ into overlapping patches. In some federated-learning applications,
the network may already be partitioned and some or all of the following steps
may be skipped provided the resulting patch graph is connected and satisfies
the minimum overlap condition for the desired embedding dimension. Otherwise,
we proceed by first partitioning the network into non-overlapping clusters and
then enlarging clusters to create overlapping patches. This two-step process
makes it easier to ensure that the patch overlaps satisfy the conditions of the
local2global algorithm without introducing excessive overlaps than it would be
with a clustering algorithm that produces overlapping clusters directly. We
use the following pipeline to create the patches:
* •
Partition the network into $p$ non-overlapping clusters
$\mathcal{C}=\\{C_{k}\\}_{k=1}^{p}$ such that $|C_{k}|\geq\frac{d+1}{2}$ for
all $k$. We use METIS (Karypis and Kumar, 1998) to cluster the networks for
the experiments in section 4.4. However, for very large networks, more
scalable clustering algorithms such as FENNEL (Tsourakakis et al., 2014) could
be used.
* •
Initialize the patches to $\mathcal{P}=\mathcal{C}$ and define the patch graph
$G_{p}(\mathcal{P},E_{p})$, where $\\{P_{i},P_{j}\\}\in E_{p}$ iff there exist
nodes $i\in P_{i}$ and $j\in P_{j}$ such that $\\{i,j\\}\in E$. (Note that if
$G$ is connected, $G_{p}$ is also connected.)
* •
Sparsify the patch graph $G_{p}$ to have mean degree $k$ using algorithm 1
adapted from the effective-resistance sampling algorithm of (Spielman and
Srivastava, 2011).
* •
Expand the patches to create the desired patch overlaps. We define a lower
bound $l\geq d+1$ and upper bound $u$ for the desired patch overlaps and use
algorithm 2 to expand the patches such that $|P_{i}\cap P_{j}|\geq l$ for all
$\\{P_{i},P_{j}\\}\in E_{p}$.
For Cora, we split the network into 10 patches and sparsify the patch graph to
a target mean degree $k=4$. We set the lower bound for the overlap to $l=129$
and upper bound to $u=256$. For Amazon photo, we split the network into 20
patches and sparsify the patch graph to a target mean degree of $k=5$. We set
the lower bound for the overlap to $l=256$ and the upper bound to $u=512$.
### 4.3. Embedding model
As embedding method we consider the variational graph auto-encoder (VGAE)
architecture of (Kipf and Welling, 2016). We use the Adam optimizer (Kingma
and Ba, 2015) for training with learning rate set to 0.01 for Cora and 0.001
for Amazon photo and train all models for 200 epochs. We set the hidden
dimension of the models to $2\times d$ for Cora and to $4\times d$ for Amazon
photo where $d$ is the embedding dimension.
### 4.4. Results
(a) Cora
(b) Amazon photo
Figure 1. AUC network reconstruction score as function of embedding dimension
using full data or stitched patch embeddings for (a) Cora and (b) Amazon photo.
As a first test case for the viability of the local2global approach, we
consider a network reconstruction task. We train the models using all edges in
the largest connected component and compare three training scenarios:
* full: Model trained on the full data.
* l2g: Separate models trained on the subgraph induced by each patch and stitched using the local2global algorithm.
* no-trans: Same training as l2g, but node embeddings are obtained by taking the centroid over the patch embeddings that contain the node, without applying the alignment transformations.
We evaluate the network reconstruction error using the AUC scores based on all
edges in the largest connected component as positive examples and the same
number of randomly sampled non-edges as negative examples. We train the models
for 200 epochs using full-batch gradient descent. We show the results in fig.
1. For ‘full’, we report the best result out of 10 training runs. For ‘l2g’
and ‘no-trans’, we first identify the best model out of 10 training runs on
each patch and report the results for stitching the best models.
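The evaluation protocol can be sketched as follows; the rank-based AUC formula and the inner-product decoder are standard choices, and the function name is our own.

```python
import numpy as np

def reconstruction_auc(Z, edges, rng=None):
    """AUC reconstruction score as described in Sec. 4.4: edges are positives,
    an equal number of sampled non-edges are negatives, and a pair is scored
    by the inner product of its embeddings (a common VGAE decoder)."""
    rng = np.random.default_rng() if rng is None else rng
    n = Z.shape[0]
    edge_set = {tuple(sorted(e)) for e in edges}
    neg = []
    while len(neg) < len(edge_set):          # sample non-edges as negatives
        i, j = rng.integers(0, n, size=2)
        if i != j and tuple(sorted((i, j))) not in edge_set:
            neg.append((i, j))
    pos_scores = [Z[i] @ Z[j] for i, j in edge_set]
    neg_scores = [Z[i] @ Z[j] for i, j in neg]
    scores = np.concatenate([pos_scores, neg_scores])
    labels = np.r_[np.ones(len(pos_scores)), np.zeros(len(neg_scores))]
    # AUC via the Mann-Whitney rank statistic
    order = np.argsort(scores)
    ranks = np.empty(len(scores)); ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum(); n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```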
Overall, the gap between the results for ‘l2g’ and ‘full’ is small and
essentially vanishes for higher embedding dimensions. The aligned ‘l2g’
embeddings consistently outperform the unaligned ‘no-trans’ baseline.
## 5\. Conclusion
In this work, we introduced a framework that can significantly improve the
computational scalability of generic graph embedding methods, rendering them
scalable to real-world applications that involve massive graphs, potentially
with millions or even billions of nodes. At the heart of our pipeline is the
local2global algorithm, a divide-and-conquer approach that first decomposes
the input graph into overlapping clusters (using one’s method of choice),
computes entirely local embeddings via the preferred embedding method, for
each resulting cluster (exclusively using information available at the nodes
within the cluster), and finally stitches the resulting local embeddings into
a globally consistent embedding, using established machinery from the group
synchronization literature.
Our preliminary results on medium-scale data sets are promising and achieve
comparable accuracy on graph reconstruction as globally trained VGAE
embeddings. Our ongoing work consists of two key steps. The first is to further
demonstrate the scalability benefits of local2global on large-scale data sets
using a variety of embedding techniques and downstream tasks by comparing with
state-of-the-art synchronised subgraph sampling methods, as well as exploring
the trade-off between parallelisability and embedding quality as a function of
patch size and overlap. The second is to demonstrate particular benefits of
locality and asynchronous parameter learning. These have clear advantages for
privacy preserving and federated learning setups. It would also be
particularly interesting to assess the extent to which this local2global
approach can outperform global methods. The intuition and hope in this
direction stems from the fact that asynchronous locality can be construed as a
regularizer (much like sub-sampling, and similar to dropout) and could
potentially lead to better generalization and alleviate the oversmoothing
issues of deep GCNs, as observed in (Chiang et al., 2019).
## References
* Bojchevski et al. (2020) Aleksandar Bojchevski, Johannes Klicpera, Bryan Perozzi, Amol Kapoor, Martin Blais, Benedek Rózemberczki, Michal Lukasik, and Stephan Günnemann. 2020\. Scaling Graph Neural Networks with Approximate PageRank. In _Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_ _(KDD ’20)_. ACM, USA, 2464–2473.
* Busch et al. (2020) Julian Busch, Jiaxing Pi, and Thomas Seidl. 2020. PushNet: Efficient and Adaptive Neural Message Passing. In _Proceedings of the 24th European Conference on Artificial Intelligence_ _(ECAI 2020)_. IOS Press, The Netherlands, 1039–1046.
* Chen et al. (2018a) Jie Chen, Tengfei Ma, and Cao Xiao. 2018a. FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling. In _Proceedings of the 6th International Conference on Learning Representations_ _(ICLR 2018)_.
* Chen et al. (2018b) Jianfei Chen, Jun Zhu, and Le Song. 2018b. Stochastic Training of Graph Convolutional Networks with Variance Reduction. In _Proceedings of the 35th International Conference on Machine Learning_ _(PMLR, Vol. 80)_. PMLR, 942–950.
* Chen et al. (2020) Ming Chen, Zhewei Wei, Bolin Ding, Yaliang Li, Ye Yuan, Xiaoyong Du, and Ji-Rong Wen. 2020. Scalable Graph Neural Networks via Bidirectional Propagation. In _Advances in Neural Information Processing Systems_ _(NeurIPS 2020, Vol. 33)_. Curran Associates, Inc., USA, 14556–14566.
* Chiang et al. (2019) Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, and Cho-Jui Hsieh. 2019\. Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks. In _Proceedings of the 25th ACM SIGKDD Intl. Conference on Knowledge Discovery & Data Mining_ _(KDD ’19)_. ACM, USA, 257–266.
* Cucuringu et al. (2012a) Mihai Cucuringu, Yaron Lipman, and Amit Singer. 2012a. Sensor network localization by eigenvector synchronization over the euclidean group. _ACM Transactions on Sensor Networks_ 8, 3 (2012), 1–42.
* Cucuringu et al. (2012b) Mihai Cucuringu, Amit Singer, and David Cowburn. 2012b. Eigenvector synchronization, graph rigidity and the molecule problem. _Information and Inference_ 1, 1 (2012), 21–67.
* Frasca et al. (2020) Fabrizio Frasca, Emanuele Rossi, Davide Eynard, Ben Chamberlain, Michael Bronstein, and Federico Monti. 2020\. SIGN: Scalable Inception Graph Neural Networks. arXiv:2004.11198 [cs.LG]
* Goyal and Ferrara (2018) Palash Goyal and Emilio Ferrara. 2018. Graph embedding techniques, applications, and performance: A survey. _Knowledge-Based Systems_ 151 (2018), 78–94.
* Hamilton et al. (2018) William L. Hamilton, Rex Ying, and Jure Leskovec. 2018\. Inductive Representation Learning on Large Graphs. In _Advances in Neural Information Processing Systems_ _(NIPS ’17, Vol. 31)_. Curran Associates, Inc., USA, 1025–1035.
* Horn et al. (1988) Berthold K. P. Horn, Hugh M. Hilden, and Shahriar Negahdaripour. 1988. Closed-form solution of absolute orientation using orthonormal matrices. _Journal of the Optical Society of America A_ 5, 7 (1988), 1127–1135.
* Kairouz and McMahan (2021) Peter Kairouz and H. Brendan McMahan (Eds.). 2021\. Advances and Open Problems in Federated Learning. _Foundations and Trends in Machine Learning_ 14, 1 (2021).
* Karypis and Kumar (1998) George Karypis and Vipin Kumar. 1998. A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs. _SIAM Journal on Scientific Computing_ 20, 1 (1998), 359–392.
* Kingma and Ba (2015) Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A Method for Stochastic Optimization. In _Proceedings of the 3rd International Conference on Learning Representations_ _(ICLR 2015)_. arXiv:1412.6980 [cs.LG]
* Kipf and Welling (2016) Thomas N. Kipf and Max Welling. 2016. Variational Graph Auto-Encoders. Bayesian Deep Learning Workshop (NIPS 2016). arXiv:1611.07308 [stat.ML]
* Kipf and Welling (2017) Thomas N. Kipf and Max Welling. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In _Proceedings of the 5th International Conference on Learning Representations_ _(ICLR 2017)_. arXiv:1609.02907 [cs.LG]
* Klicpera et al. (2019) Johannes Klicpera, Aleksandar Bojchevski, and Stephan Günnemann. 2019. Predict then Propagate: Graph Neural Networks meet Personalized PageRank. In _Proceedings of the 7th International Conference on Learning Representations_ _(ICLR 2019)_. arXiv:1810.05997 [cs.LG]
* Shchur et al. (2019) Oleksandr Shchur, Maximilian Mumme, Aleksandar Bojchevski, and Stephan Günnemann. 2019\. Pitfalls of Graph Neural Network Evaluation. arXiv:1811.05868 [cs.LG]
* Singer (2011) Amit Singer. 2011\. Angular synchronization by eigenvectors and semidefinite programming. _Applied and Computational Harmonic Analysis_ 30, 1 (2011), 20–36.
* Spielman and Srivastava (2011) Daniel A Spielman and Nikhil Srivastava. 2011. Graph sparsification by effective resistances. _SIAM J. Comput._ 40, 6 (2011), 1913–1926.
* Tsourakakis et al. (2014) Charalampos Tsourakakis, Christos Gkantsidis, Bozidar Radunovic, and Milan Vojnovic. 2014. FENNEL: Streaming Graph Partitioning for Massive Scale Graphs. In _Proceedings of the 7th ACM international conference on Web search and data mining_ _(WSDM ’14)_. ACM, USA, 333–342.
* Wu et al. (2019) Felix Wu, Tianyi Zhang, Amauri Holanda de Souza Jr., Christopher Fifty, Tao Yu, and Kilian Q. Weinberger. 2019. Simplifying Graph Convolutional Networks. In _Proceedings of the 36th International Conference on Machine Learning_ _(PMLR, Vol. 97)_. PMLR, 6861–6871.
* Yang et al. (2016) Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. 2016\. Revisiting Semi-Supervised Learning with Graph Embeddings. In _Proceedings of the 33rd International Conference on Machine Learning_ _(PMLR, Vol. 48)_. PMLR, 40–48.
* Zeng et al. (2019) Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal. Kannan, and Viktor Prasanna. 2019\. Accurate, Efficient and Scalable Graph Embedding. In _2019 IEEE International Parallel and Distributed Processing Symposium_ _(IPDPS)_. IEEE, USA, 462–471.
* Zeng et al. (2020) Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, and Viktor Prasanna. 2020\. GraphSAINT: Graph Sampling Based Inductive Learning Method. In _Proceedings of the 8th Intl. Conference on Learning Representations_ _(ICLR 2020)_.
* Zou et al. (2019) Difan Zou, Ziniu Hu, Yewen Wang, Song Jiang, Yizhou Sun, and Quanquan Gu. 2019\. Layer-Dependent Importance Sampling for Training Deep and Large Graph Convolutional Networks. In _Advances in Neural Information Processing Systems_ _(NeurIPS 2019, Vol. 32)_. Curran Associates, Inc., USA.
|
arxiv-papers
| 2021-07-26T14:08:31 |
2024-09-04T03:07:18.751407
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Lucas G. S. Jeub, Giovanni Colavizza, Xiaowen Dong, Marya Bazzi, Mihai\n Cucuringu",
"submitter": "Lucas G. S. Jeub",
"url": "https://arxiv.org/abs/2107.12224"
}
|
2107.12225
|
# Long-wavelength fluctuations and dimensionality crossover in confined
liquids
Jing Yang Division of Physics and Applied Physics, School of Physical and
Mathematical Sciences, Nanyang Technological University, Singapore 637371,
Singapore Yan-Wei Li Division of Physics and Applied Physics, School of
Physical and Mathematical Sciences, Nanyang Technological University,
Singapore 637371, Singapore Massimo Pica Ciamarra [email protected]
Division of Physics and Applied Physics, School of Physical and Mathematical
Sciences, Nanyang Technological University, Singapore 637371, Singapore
CNR–SPIN, Dipartimento di Scienze Fisiche, Università di Napoli Federico II,
I-80126, Napoli, Italy
###### Abstract
The phase behavior of liquids confined in a slit geometry does not reveal a
crossover from a three- to a two-dimensional behavior as the gap size
decreases. Indeed, the prototypical two-dimensional hexatic phase only occurs
in liquids confined to a monolayer. Here, we demonstrate that the
dimensionality crossover is apparent in the lateral size dependence of the
relaxation dynamics of confined liquids, developing a Debye model for the
density of vibrational states of confined systems and performing extensive
numerical simulations. In confined systems, Mermin-Wagner fluctuations enhance
the amplitude of vibrational motion or Debye-Waller factor by a quantity
scaling as the inverse gap width and proportional to the logarithm of the
aspect ratio, as a clear signature of a two-dimensional behaviour. As the
temperature or lateral system size increases, the crossover to a size-
independent relaxation dynamics occurs when structural relaxation takes place
before the vibrational modes with the longest wavelength develop.
## I Introduction
The phase behaviour and dynamics of liquids confined in slit geometries are
affected by the competition of several length scales. Indeed, for a liquid
confined in a slit of dimension $L\times L\times H$, the lateral length $L$
and gap width $H\ll L$ interfere with bulk-liquid length scales, such as the
typical distance between the particles, $a_{0}=\rho^{-1/3}$, and the
structural correlation length, $\xi_{\rm bulk}\simeq 10$, e.g., as estimated
from the decay of the radial distribution function [1, 2, 3]. The competition
between $H$ and $\xi_{\rm bulk}$ induces a cascade of confinement-induced
ordering transitions [4, 5, 6, 2], and a solid like behaviour interpreted as a
signal of a first-order transition [7, 8] or, more recently [9, 10, 11], as a
continuous glass transition. For molecular liquids in very narrow
confinements, length scales associated with the anisotropic molecular
structure [1, 12, 13, 14] and the details of the interaction between the
molecules and the confining walls also play a role.
The rich and system-dependent phase behaviour of confined systems makes it
difficult to rationalize the crossover from three to two dimensions by focusing
on its gap-size dependence. Indeed, the hexatic phase, a phase with
short-ranged translational order and long-ranged bond-orientational order that
only occurs in two-dimensional systems, has only been reported for $H\simeq
a_{0}$ in Lennard-Jones systems [15]. In this extremely confined limit, the
occurrence of a two-dimensional behaviour is in line with the observed
decoupling of the lateral and transverse degrees of freedom [16, 17].
The size dependence of the relaxation dynamics of confined liquids offers an
alternative and unexplored approach to investigate the dimensionality
crossover. Indeed, two-dimensional systems differ from their three-dimensional
counterpart because Mermin-Wagner [18] long-wavelength (LW) fluctuations make
their relaxation dynamics size dependent [19, 20, 21, 22, 23, 24, 25]. This
alternative approach is also convenient as Mermin-Wagner fluctuations are
always present in two-dimensional systems; conversely, not all systems exhibit
qualitative differences between their two- and three-dimensional phase
behaviour [26, 27, 28].
In this paper, we demonstrate that confined systems have a relaxation dynamics
depending on the lateral size $L$, as two-dimensional ones, and rationalize
the dimensionality crossover clarifying how this $L$ dependence varies with
the gap width $H$ and relaxation time. We find that, in the solid regime,
confinement enhances the asymptotic value of the mean-square displacement, or
Debye-Waller factor, by a factor scaling as $(1/H)\ln(L/H)$. A similar
enhancement of the mean square displacement occurs in the liquid phase.
Liquids, however, exhibit a dimensionality crossover as size effects vanish
above a characteristic $H$-independent system size fixed by sound velocity and
relaxation time. We further clarify that our predictions apply to both
molecular and colloidal liquids through the investigation of experimentally
relevant confinement settings.
## II Debye’s DOS in confinement
We develop a Debye-like model for the vibrational density of states (DOS) of
confined amorphous solids to rationalize the size dependence of their
dynamical properties. In confinement, the length scales $L$ and $H$ and the
transverse sound velocity $c_{s}$ fix two characteristic frequencies,
$\omega_{\rm L}=2\pi c_{s}/L$ and $\omega_{\rm H}=2\pi c_{s}/H$. $\omega_{\rm
L}$ is the smallest possible phonon frequency. The physical role of
$\omega_{\rm H}$ is understood considering that phonons with
$\omega<\omega_{\rm H}$, which have a wavelength larger than $H$, do not fit
along the transverse direction. Hence $\omega_{\rm H}$ separates the spectrum
into a low-frequency region, $\omega_{\rm L}<\omega<\omega_{\rm H}$, where
excitations are essentially two-dimensional, and a high-frequency region,
$\omega_{\rm H}<\omega<\omega_{\rm D}$, with $\omega_{\rm D}$ the Debye
frequency, where excitations are three-dimensional. In the Debye
approximation, the density of states is
$D(\omega)=\left\\{\begin{aligned} &c~{}\frac{\omega}{\omega_{\rm
D}^{2}}&&{\omega_{\rm L}\leq\omega\leq\omega_{\rm H}}\\\
&c~{}\frac{\omega^{2}}{\omega_{\rm H}\omega_{\rm D}^{2}}&&{\omega_{\rm
H}\leq\omega\leq\omega_{\rm D}},\end{aligned}\right.$ (1)
with $c$ a dimensionless normalization constant,
$c^{-1}=\frac{1}{2}\left[\left(\frac{\omega_{\rm H}}{\omega_{\rm
D}}\right)^{2}-\left(\frac{\omega_{\rm L}}{\omega_{\rm
D}}\right)^{2}\right]+\frac{1}{3}\left[\frac{\omega_{\rm D}}{\omega_{\rm
H}}-\left(\frac{\omega_{\rm H}}{\omega_{\rm D}}\right)^{2}\right].$ (2)
$D(\omega)$ is schematically illustrated in Fig. 1(a). We remark that we have
restricted the above investigation to the transverse modes, which are of
greater relevance to our purposes as having a smaller frequency. The
longitudinal modes can be similarly described.
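The piecewise DOS of Eq. 1 is easy to check numerically. In the sketch below, frequencies are measured in units where $2\pi c_{s}=1$ (an arbitrary choice of ours), and $c$ is fixed directly by the requirement that $D(\omega)$ integrates to one.

```python
import numpy as np

def debye_dos(omega, w_L, w_H, w_D):
    """Piecewise Debye DOS of a confined solid, Eq. 1 (transverse modes only),
    with c fixed by normalisation of D(omega) over [w_L, w_D]."""
    c = 1.0 / (0.5 * ((w_H / w_D) ** 2 - (w_L / w_D) ** 2)
               + (w_D / w_H - (w_H / w_D) ** 2) / 3.0)
    low = c * omega / w_D ** 2                  # 2D-like band, w_L <= w <= w_H
    high = c * omega ** 2 / (w_H * w_D ** 2)    # 3D-like band, w_H <= w <= w_D
    D = np.where(omega <= w_H, low, high)
    return np.where((omega >= w_L) & (omega <= w_D), D, 0.0)

# frequencies in units of 2*pi*c_s, for L = 80, H = 10, lambda_D = 1
w_L, w_H, w_D = 1 / 80, 1 / 10, 1.0
omega = np.linspace(w_L, w_D, 400001)
D = debye_dos(omega, w_L, w_H, w_D)
total = np.sum(0.5 * (D[1:] + D[:-1]) * np.diff(omega))  # trapezoid rule
```

By construction the two branches match at $\omega_{\rm H}$, and the numerical integral of the DOS is one.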
The vibrational density of states allows us to evaluate the asymptotic value
of the mean square displacement, or the Debye-Waller factor, averaging the
contributions $k_{B}T/m\omega^{2}$ of the different modes. To highlight the
dependence on the different length scales involved, we write $\omega_{\rm
D}=2\pi c_{s}/\lambda_{\rm D}$, finding
${\rm DW}=\frac{k_{B}T}{m\omega_{\rm
D}^{2}}\frac{\ln\left(\frac{L}{H}\right)+\frac{H}{\lambda_{\rm
D}}-1}{\frac{1}{2}\left[\left(\frac{\lambda_{\rm
D}}{H}\right)^{2}-\left(\frac{\lambda_{\rm
D}}{L}\right)^{2}\right]+\frac{1}{3}\left[\frac{H}{\lambda_{\rm
D}}-\left(\frac{\lambda_{\rm D}}{H}\right)^{2}\right]}.$ (3)
The three dimensional limit, ${\rm DW_{3D}}\simeq\frac{3k_{B}T}{m\omega_{\rm
D}^{2}}$, and the two-dimensional one, ${\rm
DW_{2D}}=\frac{2k_{B}T}{m\omega_{\rm D}^{2}}\ln\left(\frac{L}{H}\right)$, are
recovered for $H\to L\gg\lambda_{\rm D}$ and for $H\to\lambda_{\rm D}\ll L$,
respectively.
In quasi-2D systems, $L\gg H\gg\lambda_{D}$, Eq. 3 is approximated by
${\rm DW}\simeq{\rm DW_{3D}}\left[1+\frac{\lambda_{\rm
D}}{H}\left(\ln\left(\frac{L}{H}\right)-1\right)\right].$ (4)
Hence, we predict that in confined systems the DW grows logarithmically with
$L$, as in 2D, with a slope decreasing as $1/H$. We remark here that, as long
as $H\gg\lambda_{\rm D}$, the DW factor grows as $H$ decreases at constant
$L$, e.g., as the system becomes more confined. This occurs because, as $H$
decreases, a larger fraction of the phonon spectrum becomes effectively two-
dimensional.
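As a numerical illustration of Eqs. 3 and 4, the sketch below evaluates the full Debye-Waller expression and compares it with the quasi-2D approximation; units and parameter values are our own choices.

```python
import numpy as np

def debye_waller(L, H, lam_D, kT_over_m=1.0, c_s=1.0):
    """Debye-Waller factor of Eq. 3 (transverse modes only), with
    omega_D = 2*pi*c_s/lambda_D; a direct transcription of the formula."""
    w_D = 2 * np.pi * c_s / lam_D
    num = np.log(L / H) + H / lam_D - 1
    den = 0.5 * ((lam_D / H) ** 2 - (lam_D / L) ** 2) \
        + (H / lam_D - (lam_D / H) ** 2) / 3.0
    return (kT_over_m / w_D ** 2) * num / den

# quasi-2D regime L >> H >> lambda_D: compare with Eq. 4,
# DW ~ DW_3D [1 + (lambda_D/H)(ln(L/H) - 1)], DW_3D = 3 k_B T / (m w_D^2)
dw = debye_waller(320.0, 10.0, 1.0)
dw_approx = (3.0 / (2 * np.pi) ** 2) * (1 + (1 / 10.0) * (np.log(32.0) - 1))
```

For $L=320$, $H=10$, $\lambda_{\rm D}=1$ the two expressions agree to well within one percent, and at fixed $L$ the factor indeed grows as $H$ decreases.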
## III Numerical details
We validate our theoretical prediction, and explore the effect of confinement
on the liquid phase, via extensive molecular dynamics simulations [29] of the
standard A:B 80:20 Kob-Andersen (KA) Lennard-Jones (LJ) mixture [30], where
particles interact via the potential
$V_{\alpha\beta}\left({r}\right)=4\epsilon_{\alpha\beta}\left[\left(\frac{\sigma_{\alpha\beta}}{r}\right)^{12}-\left(\frac{\sigma_{\alpha\beta}}{r}\right)^{6}+{C}_{\alpha\beta}\right]$,
and $\epsilon_{AB}=1.5\epsilon_{AA}$, $\epsilon_{BB}=0.5\epsilon_{AA}$,
$\sigma_{AB}=0.8\sigma_{AA}$, $\sigma_{BB}=0.88\sigma_{AA}$,
$\alpha,\beta\in\left\\{{A,B}\right\\}$. The potential is truncated at
$r_{c}=2.5\sigma_{\alpha\beta}$, and $C_{\alpha\beta}$ enforces $V(r_{c})=0$.
The mass of the particles $m$, $\epsilon_{AA}$, and $\sigma_{AA}$ are our unit
of mass, energy and distance, respectively. We first thermalize the system in
the NPT ensemble, at $P=1.0$, allowing the box size to vary only in the
lateral dimensions. Production runs are then performed in the NVE ensemble.
The number of particles depends on $L$ and $H$, and varies between $10^{3}$
and $10^{6}$. We average the dynamical data over at least four
independent runs.
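The truncated-and-shifted KA pair potential described above can be sketched as follows; the dictionary layout and function name are ours, while the parameter values are those given in the text.

```python
import numpy as np

# Kob-Andersen parameters, in units of epsilon_AA and sigma_AA
EPS = {('A', 'A'): 1.0, ('A', 'B'): 1.5, ('B', 'A'): 1.5, ('B', 'B'): 0.5}
SIG = {('A', 'A'): 1.0, ('A', 'B'): 0.8, ('B', 'A'): 0.8, ('B', 'B'): 0.88}

def ka_potential(r, a, b):
    """Truncated-and-shifted KA Lennard-Jones pair potential: the constant
    C_ab enforces V(r_c) = 0 at r_c = 2.5 sigma_ab."""
    eps, sig = EPS[(a, b)], SIG[(a, b)]
    rc = 2.5 * sig
    lj = lambda x: (sig / x) ** 12 - (sig / x) ** 6
    C = -lj(rc)                  # shift so the potential vanishes at the cutoff
    return np.where(r < rc, 4 * eps * (lj(r) + C), 0.0)
```

Note that the shift makes the potential continuous at $r_{c}$ but leaves a small force discontinuity there, which is why smoothed variants are sometimes used when computing vibrational spectra.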
We monitor the relaxation dynamics studying the mean square displacement,
$\langle\Delta r^{2}(t)\rangle=\frac{1}{N}\sum\Delta\mathbf{r}_{i}^{2}(t)$,
where $\Delta\mathbf{r}_{i}$ is the displacement of particle $i$ at time $t$,
and the self-scattering function,
$F_{s}\left(k,t\right)=\frac{1}{N}\left<\sum_{j=1}^{N}e^{i\mathbf{k}\cdot\Delta\mathbf{r}_{j}\left(t\right)}\right>$,
factor of bulk systems, where $\mathbf{k}$ is the wavevector of the first peak of the static structure
factor of bulk systems. The relaxation time $\tau$ is defined by
$F_{s}\left(k,\tau\right)=1/e$. We further investigate the dynamics using the
cage-relative mean square displacement and self-scattering function [20, 21,
22, 31]. These are defined as above, with the displacement of particle $i$
replaced by its cage-relative counterpart, $\Delta_{\rm
CR}\mathbf{r}_{i}=\Delta\mathbf{r}_{i}-\frac{1}{n_{i}}\sum_{j}\Delta\mathbf{r}_{j}$,
where the sum is over all neighbors of particle $i$ at time $t=0$. We identify
the neighbors via the Voronoi construction.
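The observables above reduce to a few lines of post-processing on an unwrapped trajectory. The sketch below is our own; neighbour lists are assumed to be given (e.g., from a Voronoi construction), and only the real part of the self-scattering function is computed.

```python
import numpy as np

def msd_and_fs(traj, k_vec):
    """Mean-square displacement and self-intermediate scattering function
    from an unwrapped trajectory traj of shape (n_frames, N, dim); k_vec is
    the wavevector of the first peak of the bulk static structure factor."""
    disp = traj - traj[0]                        # displacements from t = 0
    msd = np.mean(np.sum(disp ** 2, axis=-1), axis=1)
    fs = np.mean(np.cos(disp @ k_vec), axis=1)   # Re <exp(i k . dr)>
    return msd, fs

def cage_relative(traj, neighbours):
    """Cage-relative positions: subtract from each displacement the mean
    displacement of the particle's t = 0 neighbours (list of index lists)."""
    disp = traj - traj[0]
    cage = np.stack([disp[:, nb, :].mean(axis=1) for nb in neighbours], axis=1)
    return traj[0] + (disp - cage)
```

By construction, a uniform drift of the whole system (a long-wavelength fluctuation in the extreme) contributes to the ordinary MSD but cancels exactly in the cage-relative displacement.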
Figure 1: (a) Schematic illustration of the Debye’s density of states of
quasi-2D systems, Eq. 1. (b) Low-frequency cumulative density of states of
confined solids with lateral length $L=80$ and different gap sizes $H$. (c)
The data in ${\bf b}$ collapse when plotted vs. $\omega H$ and vertically
scaled, for $H>\xi_{\rm bulk}\simeq 10$. (d) Mean square displacement at
$T=0.005$ and $H=10$, for different $L$ values. (e) The asymptotic DW factor
grows logarithmically with the lateral size $L$, with a slope scaling as $1/H$
(inset). Errors are smaller than the symbol size. Figure 2: The dependence
of the average density on the gap width, at $T=0.35$, when periodic boundary
conditions are used in the confining direction. Confinement does not strongly
influence the average density, in the range of gap widths we have considered.
The radial distribution function, shown in the inset for $H=20$ and $L=20$ at
$T=0.35$, approaches one at $\xi_{\rm bulk}/2\simeq 5$.
We consider three different confinement approaches. First, we use periodic
boundary conditions in the confining direction, an approach useful both to
avoid layering and to compare with the theoretical predictions. When using
this approach, the density is essentially constant,
$\rho=1.1775(5)$, as we illustrate in Fig. 2(a). In the figure we also show
that, for larger $H$ values representative of the bulk limit, the radial
distribution function becomes constant for $r\simeq\xi_{\rm bulk}/2\simeq 5$.
Secondly, we confine the system between flat walls. In this case, the
interaction between particles of type $i=A,B$ and the walls is given by a LJ
potential with energy scale $\epsilon_{ii}$ and length scale $\sigma_{ii}$,
truncated at its minimum. In the presence of flat walls, the density
decreases appreciably with $H$, and layering occurs, as shown in Fig. 2b.
Finally, we perform simulations of systems confined between rough walls. In
this case, we first thermalize large samples at the desired pressure,
using periodic boundary conditions in all directions, and then freeze the
positions of all particles whose height is outside the interval $[0:H]$. When
using rough walls, we work at fixed density rather than at fixed pressure.
## IV Confined amorphous solids
We study the density of states of confined amorphous solid configurations
generated by minimizing the energy of configurations equilibrated at low
temperature. We fix the pressure of these low-temperature configurations to
$P=1$ by adjusting the lateral size, which slightly fluctuates around $L=80$.
We considered several $H$ values, so that the number of particles ranges from
$36000$ to $150000$. We further use periodic boundary conditions in all
spatial directions to prevent structural inhomogeneities due to layering,
hence allowing for a more transparent comparison with the theoretical
predictions. The effect of walls is discussed in Sec. VI.
Figure 3: Long-wavelength fluctuations in confined amorphous solids. (a) Mean
square displacement, and (b), self-scattering function, at three different
values of the temperature. We fix $H=10$ and show, at each temperature,
results for $10\leq L\leq 320$. (c) The relaxation time decreases as the
lateral size increases, while the cage-relative relaxation time is
$L$-independent. (d) The relaxation time decreases as the gap-size decreases,
particularly for $H\leq\xi_{\rm bulk}$, while the cage-relative relaxation
time is $H$-independent. The relaxation times in (c) and (d) are divided by
their respective values at $L=10$ and at $H=30$, to facilitate their
comparison. In (c) and (d), errors are smaller than the symbol size.
We evaluate the low-frequency end of the vibrational spectrum of the generated
energy minima via the direct diagonalization of their Hessian matrix. To
compare the numerical results with our theoretical prediction of Eq. 1,
schematically illustrated in Fig. 1(a), we focus on the frequency dependence
of the cumulative distribution $C(\omega)=\int_{0}^{\omega}D(\omega')d\omega'$. Due to the
large lateral size of our systems [32], we observe gaps at low frequency, as
predicted by linear elasticity (we have verified that these gaps are not an
artefact of the discontinuity of the force at the cutoff distance [39], as
they persist when the interaction potential is appropriately smoothed). Figure
1(b) also demonstrates that $C(\omega)/\omega^{2}$ is constant at small
frequencies, and increases above an $H$-dependent crossover frequency which,
according to Eq. 1, should scale as $\omega_{H}\propto c_{s}/H$. Indeed, when
plotted versus $\omega H$, and vertically scaled, the data collapse up to
their crossover point, as we illustrate in Fig. 1(c). The figure also supports
the $\omega^{2}$ to $\omega^{3}$ crossover for the cumulative distribution
suggested by the theoretical model.
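The bookkeeping of this analysis can be sketched as follows: diagonalize the Hessian, convert eigenvalues to frequencies, and accumulate the sorted spectrum into $C(\omega)$. Here a small random positive-definite matrix stands in for the true Hessian of an energy minimum, so only the pipeline, not the physics, is illustrated:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the Hessian of an energy minimum: a random symmetric
# positive-definite matrix (a real study would assemble the analytic
# Hessian of the pair potential, typically as a sparse matrix).
n = 300
A = rng.normal(size=(n, n))
hess = A @ A.T / n + 0.1 * np.eye(n)

evals = np.linalg.eigvalsh(hess)              # eigenvalues lambda = omega^2
omega_sorted = np.sort(np.sqrt(np.clip(evals, 0.0, None)))

# Cumulative distribution C(omega): fraction of modes below omega.
C = np.arange(1, omega_sorted.size + 1) / omega_sorted.size

# The quantity inspected in Fig. 1(b): C(omega) / omega^2, which for an
# elastic continuum plateaus at low frequency and grows past omega_H.
ratio = C / omega_sorted**2
print(ratio[:5])
```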
We remark that the data collapse of Fig. 1(c) breaks down for small $H$. To
rationalize this observation, we investigate in Fig. 2 the gap size dependence
of the density and the radial correlation function of a low-temperature solid
configuration. We observe that the density is almost $H$ independent, for
$H\geq 5$, and that the radial correlation function approaches the ideal gas
limit at $r\simeq 5$. This allows us to estimate the structural correlation
length of the bulk solid, $\xi_{\rm bulk}\simeq 10$. We thus understand that,
in Fig. 1(c), no collapse occurs for small $H$ as confinement interferes with
the structural correlation length of the system.
We further validate our theoretical prediction for the dependence of the DW
factor of amorphous solids on the relevant length scales $L$ and $H$, Eq. 4,
performing simulations at a low-temperature value at which structural
relaxation is negligible. In this limit, the mean-square displacement
approaches a constant DW value at long times, as illustrated in Fig. 1(d) for
$H=10$. Figure 1(e) shows that this limiting DW factor grows as the logarithm
of the lateral size $L$, with a slope scaling as $1/H$, in agreement with the
predictions of Eq. 4.
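The fit of Fig. 1(e) amounts to a linear regression of the DW factor against $\ln L$. A minimal sketch on synthetic data obeying the predicted form (the constants below are invented for illustration):

```python
import numpy as np

# Synthetic DW factors following the predicted form of Eq. 4:
# DW(L, H) = DW_bulk + (b / H) * ln L, with illustrative constants.
b, DW_bulk, H = 0.30, 0.05, 10.0
L = np.array([10.0, 20.0, 40.0, 80.0, 160.0, 320.0])
dw = DW_bulk + (b / H) * np.log(L)

# Fit DW versus ln L; the slope should recover b / H.
slope, intercept = np.polyfit(np.log(L), dw, 1)
print(slope)  # ≈ b / H = 0.03
```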
Figure 4: Dimensionality crossover in confined liquids. (a) The mean square
displacement exhibits a crossover between two different regimes at a time
$t_{\rm LW}\simeq 0.3L$ (dash-dotted line). Dashed lines are polynomial fits
used to estimate the mean square displacement at the crossover time (circles).
Data are for $T=0.35$, and different $L$ values. (b) The mean square
displacement at the crossover time grows faster than $\ln L$ (open symbols),
above a characteristic $T$ dependent lateral system size. When this occurs,
structural relaxation rather than LWs dominate the diffusivity, and hence the
system has a 3D-like behaviour. (c) State points with an effective two-
dimensional behaviour according to the analysis in (b) are illustrated as
open circles. Diamonds, conversely, identify those having a three-dimensional
behaviour. Stars correspond to the prediction of Eq. 5, $L=\alpha
c_{s}\tau_{CR}(T)$, with $\alpha\simeq 0.018$. The interpolating solid line is
a guide to the eye. All panels refer to $H=10$. Supplemental Fig. S4 shows
that the results are insensitive to changes in the gap width.
## V Confined liquids
Having ascertained that LWs influence the behaviour of confined solids, we now
demonstrate that they similarly affect the relaxation dynamics of quasi-2D
supercooled liquids. To this end, we investigate the size and temperature
dependence of the mean square displacement and self-scattering function at the
wave vector of the peak of the static structure factor of bulk systems.
Figures 3(a) and (b) show that the transient solid-like response revealed by
the mean square displacement and the self-scattering function becomes less
apparent as the system size decreases. This size dependence is more apparent
at low temperature, where the transient solid-like behaviour is manifest.
We prove that this observed size dependence originates from LW fluctuations by
comparing the $L$ dependence of the relaxation time $\tau$ and of the cage-
relative (CR) relaxation time $\tau_{\rm CR}$. Cage-relative quantities,
indeed, are insensitive to collective particle displacements and hence filter
out the effect of LWs [20, 21, 22]. In Fig. 3(c), we observe that, while the
standard relaxation time decreases logarithmically with $L$, the CR one is $L$
independent. These results closely parallel those observed in strictly two-
dimensional systems [19, 20, 21, 22, 23, 24, 25] and demonstrate that LW
fluctuations sensibly affect the structural relaxation dynamics of confined
liquids.
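Cage-relative quantities are built by subtracting, from each particle's displacement, the mean displacement of its initial neighbours. A minimal sketch (no periodic images; the neighbour cutoff and the lattice test configuration are illustrative choices, not the paper's):

```python
import numpy as np

def cage_relative_disp(r0, rt, cutoff=1.1):
    """Displacements with the mean displacement of each particle's
    initial neighbours (within `cutoff` at t=0) subtracted, which
    filters out collective long-wavelength motion."""
    disp = rt - r0
    d = np.linalg.norm(r0[:, None, :] - r0[None, :, :], axis=-1)
    cr = np.empty_like(disp)
    for i in range(len(r0)):
        nbr = (d[i] < cutoff) & (d[i] > 0.0)
        cage = disp[nbr].mean(axis=0) if nbr.any() else 0.0
        cr[i] = disp[i] - cage
    return cr

# Particles on a cubic lattice (spacing 1), so every particle has
# nearest neighbours within the cutoff. A rigid uniform translation
# (an extreme long-wavelength mode) is removed entirely.
g = np.arange(6, dtype=float)
r0 = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T   # (216, 3)
rt = r0 + np.array([0.5, -0.2, 0.1])                   # pure collective shift
cr = cage_relative_disp(r0, rt)
print(np.abs(cr).max())  # ≈ 0 up to round-off
```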
In Fig. 3(d), we further show that the relaxation time $\tau$ decreases as the
gap width is reduced and a larger fraction of the vibrational spectrum becomes
effectively two-dimensional. This dynamical speed-up is particularly relevant
for $H<\xi_{\rm bulk}$, indicating that the structural changes induced by such
strong confinement promote LW fluctuations. This is consistent with the
observation of a significant increment in the density of low-frequency modes
for $H=5$, in Fig. 1(b). The gap independence of the cage-relative relaxation
time, also illustrated in Fig. 3(d), confirms our interpretation, namely that
the $H$-induced speed-up originates from LW fluctuations.
Figure 5: Long-wavelength fluctuations in slit geometries. (a) The transverse
mean-square-displacement and (b) the self-intermediate scattering function for
supercooled liquids with various transverse length scales $10\leq L\leq 320$
at the same perpendicular length scale $H=10$. (c) Width dependence of the
relaxation time, and of the cage-relative relaxation time, for an $L=40$ system.
Errors are smaller than the symbol size. The inset is a schematic diagram of
the confining geometry.
We quantitatively investigate the dimensionality crossover focusing on the
mean square displacement, $\langle\Delta r^{2}(t)\rangle$. In the solid phase,
$\langle\Delta r^{2}(t)\rangle$ approaches an asymptotic DW factor value on a
time scale $t_{\rm LW}\propto\omega_{L}^{-1}\propto L$. The asymptotic value
of the DW factor grows as $\ln L/f(L)$, with $f(L)$ a slowly increasing
function of $L$, corresponding to the denominator of Eq. 3. In the liquid
phase, therefore, we expect a crossover in the time dependence of the mean
square displacement at a time $t_{\rm LW}$. Figures 4(a) and 4(b) demonstrate
that such a crossover occurs at $t_{\rm LW}\simeq 0.3L$, for $T=0.35$. Similar
crossovers occur at the same $t_{\rm LW}$ at all temperatures.
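The estimation used in Fig. 4(a) can be sketched as follows: fit the mean square displacement in log–log space with a low-order polynomial and evaluate the fit at $t_{\rm LW}\simeq 0.3L$. The MSD curve below is a synthetic stand-in, not the paper's data:

```python
import numpy as np

# Synthetic MSD(t) with a growth-to-plateau shape (illustrative only).
t = np.logspace(-2, 3, 60)
msd = 0.03 * t / (1.0 + t)          # saturates near 0.03 at long times

# Fit log MSD versus log t with a low-order polynomial and read off
# the MSD at the crossover time t_LW ≈ 0.3 L.
L = 40.0
t_lw = 0.3 * L
coef = np.polyfit(np.log(t), np.log(msd), deg=4)
msd_at_crossover = np.exp(np.polyval(coef, np.log(t_lw)))
print(msd_at_crossover)
```

Repeating this for a range of $L$ values at fixed $T$ yields the $\langle\Delta r^{2}(t_{\rm LW})\rangle/\ln L$ curves analysed next.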
When LW fluctuations dominate the dynamics, as in the solid phase,
$\langle\Delta r^{2}(t_{\rm LW})\rangle/\ln L\propto 1/f(L)$ decreases
with $L$. We therefore assume LW fluctuations to become negligible at $L$
values at which $\langle\Delta r^{2}(t_{\rm LW})\rangle$ grows faster than
$\ln L$. When this occurs, irreversible relaxation events rather than large-
amplitude oscillations dominate the diffusivity. In Fig. 4(b) we indeed
observe that $\langle\Delta r^{2}(t_{\rm LW})\rangle/\ln L$ is not monotonic
in $L$, decreasing with $L$ when LWs are relevant (solid symbols), and
increasing when they are not (open symbols). This behavior allows us to
identify crossover $L$ values, which we have verified not to depend on the gap
width. This study leads to the $L$-$T$ diagram of Fig. 4(c). The system-size
dependent dynamics characteristic of two-dimensional behaviour occurs at low
temperature and small lateral size and disappears as either the lateral length
or the temperature increase. We remark that, while this diagram does not depend
on the confinement width $H$, size effects gradually fade away as $1/H$, as in
the solid phase, and hence are no longer appreciable at large $H$.
We exploit the size independence of the cage-relative relaxation time to
rationalize this observed dimensionality crossover. Indeed, vibrational
excitations cannot persist longer than the cage-relative relaxation time,
since on this time scale the structure of the system changes appreciably, as
particles change neighbours. Since the vibrational modes influencing the structural
relaxation dynamics are those that have time to develop, we expect the
crossover between a two-dimensional size-dependent relaxation dynamics and a
three-dimensional size-independent relaxation dynamics to occur at
$\frac{L}{\tau_{\rm CR}(T)}=\alpha c_{s},$ (5)
with $c_{s}$ being the transverse sound velocity and $\alpha$ being a
constant. In other words, size effects disappear for $L>\alpha c_{s}\tau_{\rm
CR}(T)$, as the system relaxes before the lowest size-dependent mode develops.
This theoretical prediction describes the data of Fig. 4(c) well.
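Eq. 5 gives a simple criterion for the dimensionality crossover, which can be sketched directly; the inputs below are illustrative numbers, with only $\alpha\simeq 0.018$ taken from the text:

```python
# Classify a state point as effectively 2D or 3D via Eq. 5:
# size effects persist while L < alpha * c_s * tau_CR(T).
def behaves_two_dimensional(L, tau_cr, c_s, alpha=0.018):
    """alpha = 0.018 is the fitted constant quoted in the text;
    c_s is the transverse sound velocity."""
    return L < alpha * c_s * tau_cr

# Illustrative numbers (not from the paper's data tables): a slow,
# small system stays 2D-like; a fast, large one behaves 3D-like.
print(behaves_two_dimensional(L=40, tau_cr=1e4, c_s=4.0))   # True
print(behaves_two_dimensional(L=320, tau_cr=1e2, c_s=4.0))  # False
```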
## VI Effect of smooth and rough walls
Our theoretical analysis and numerical simulations demonstrate that LW
fluctuations affect the dynamics of confined liquids. However, so far we have
described simulations obtained using periodic boundary conditions in all
spatial directions; one might wonder, therefore, whether LWs also play a role
in the experimentally relevant setup of liquids confined between two parallel
walls at a separation $H$. To address this question, we investigate the
relaxation dynamics of the KA LJ binary mixture confined between two
atomically-smooth flat walls. Since the walls prevent diffusion along the
transverse direction, we focus on particle motion in the lateral directions,
effectively defining two-dimensional mean-square displacement and self-
scattering function. We find that, under wall confinement, the relaxation
dynamics has the typical size dependence induced by LW fluctuations, the
caging regime becoming less apparent as $L$ increases, as we illustrate in
Figs. 5(a) and 5(b).
Figure 6: Dependence of the average density on the gap width, at $T=0.35$,
for a system confined in between flat walls at a separation $H$. The density
decreases as the gap width decreases. Flat walls, furthermore, induce
layering, as we illustrate in the inset by plotting the density at a distance
$h$ from a confining wall. Figure 7: Mean-square displacement (a) and self-
scattering function (b) of systems confined between rough walls, at $H=2$ and
$H=4$. These quantities are evaluated focusing on the behavior of the central
layer of particles. The relaxation dynamics does not depend on the lateral
length, or system size $N$, indicating that the rough walls kill the LW
fluctuations.
The structural changes induced by the walls, however, strongly affect the
relaxation dynamics, as evidenced by the $H$ dependence of the standard and CR
relaxation times, which we illustrate in Fig. 5(c). For $H\geq\xi_{\rm bulk}$,
both relaxation times decrease as the system becomes more confined; this is,
we believe, the combined effect of layering and the reduction in the average
density induced by the confinement, which we illustrate in Fig. 6.
Importantly, we observe in Fig. 5(c) that for $H\leq\xi_{\rm bulk}$, while the
relaxation time decreases as the gap width is reduced, the cage-relative
relaxation time sharply increases. This increase in the CR relaxation time is in
qualitative agreement with the many previous investigations reporting an
increase in the viscosity of molecular liquids under confinement [1, 34, 35,
9, 11]. Indeed, we remind the reader that viscosity and cage-relative
relaxation time are related [36, 25]. This observed decoupling demonstrates
that smooth walls do not kill the LWs, but rather make their effect more
apparent.
While smooth walls do not kill LWs, rough walls strongly suppress them.
Indeed, we show in Fig. 7 that the relaxation dynamics of liquids confined
between rough walls does not depend on the lateral system size. We remark that
for very large gap widths the effect of the boundary should become negligible,
and hence LW fluctuations should play a role. Since the influence of LWs on
the dynamics scales as $1/H$, however, their effect in this large-$H$ limit
may not be easily appreciated. We expect variations [13] in the roughness of
the confining walls and in the wall–liquid interaction potential to affect
the observed phenomenology only quantitatively.
## VII Conclusions and experimental relevance
The confinement-induced enhancement of the DW factor described by Eq. 3 is an
equilibrium property not affected by the underlying microscopic dynamics,
equally valid for molecular and colloidal solids. In the supercooled regime,
the signatures of LW fluctuations conversely depend on how much the system
moves along the phase-space directions of the low-frequency modes before
particles rearrange. Since the size of this displacement depends on the
microscopic dynamics and is smaller if the system moves diffusively rather
than ballistically, we expect the influence of confinement to be more relevant
at the molecular scale rather than at the colloidal scale. Nevertheless, we
remind the reader that LWs are observed in experiments [22, 25, 23] and
simulations [25] of two-dimensional colloidal systems; our predictions
concerning the role of LW fluctuations in confined systems therefore apply to
both molecular and colloidal systems.
For the effect of LW fluctuations to be experimentally visible, however, the
roughness scale of the confining walls must be smaller than the size of the
particles. Rough walls, indeed, affect the motion in the lateral dimensions
and kill the LW fluctuations, as we have shown in Fig. 7. The requirement of
smooth confining walls is not a technical limitation. Walls that are de facto
flat at the molecular scale exist [10], and it is undoubtedly possible to
confine large colloidal particles between walls that are flat at the particle
scale. In colloidal experiments, however, one should ascertain that no
particles stick irreversibly to the walls, effectively making them rough,
e.g., as observed in Refs. [37, 38]. Hence our predictions are experimentally
testable both in confined molecular liquids, e.g., comparing the size
dependence of the viscosity and of structural relaxation time, and in confined
colloidal systems, comparing, e.g. the standard and cage-relative relaxation
times.
Our results show that confined systems exhibit a gradual dimensionality
crossover controlled by the gap width and the temperature, which is
appreciable when investigating the lateral size dependence of the dynamics.
The physics of confined liquids is thus richer than previously realised. These
findings might be relevant to a variety of applications involving micro- and
nanofluidics, e.g., lab-on-a-chip devices, where particles flow in confined
geometries.
###### Acknowledgements.
We acknowledge support from the Singapore Ministry of Education through the
Academic Research Fund Tier RG 86/19(S), Singapore, and the National
Supercomputing Centre Singapore (NSCC) for the computational resources.
## References
* Granick [1991] S. Granick, Motions and relaxations of confined liquids, Science 253, 1374 (1991).
* Mandal _et al._ [2014] S. Mandal, S. Lang, M. Gross, M. Oettel, D. Raabe, T. Franosch, and F. Varnik, Multiple reentrant glass transitions in confined hard-sphere glasses, Nature Communications 5, 1 (2014).
* Zhang and Kob [2020] Z. Zhang and W. Kob, Revealing the three-dimensional structure of liquids using four-point correlation functions, Proceedings of the National Academy of Sciences 117, 14032 (2020).
* Schmidt and Löwen [1996] M. Schmidt and H. Löwen, Freezing between Two and Three Dimensions, Phys. Rev. Lett. 76, 4552 (1996).
* Löwen [2009] H. Löwen, Twenty years of confined colloids: from confinement-induced freezing to giant breathing, Journal of Physics: Condensed Matter 21, 474203 (2009).
* Cummings _et al._ [2010] P. T. Cummings, H. Docherty, C. R. Iacovella, and J. K. Singh, Phase transitions in nanoconfined fluids: The evidence from simulation and theory, AIChE Journal 56, 842 (2010).
* Klein and Kumacheva [1995] J. Klein and E. Kumacheva, Confinement-induced phase transitions in simple liquids, Science 269, 816 (1995).
* Klein and Kumacheva [1998] J. Klein and E. Kumacheva, Simple liquids confined to molecularly thin layers. i. confinement-induced liquid-to-solid phase transitions, The Journal of chemical physics 108, 6996 (1998).
* Demirel and Granick [2001] A. L. Demirel and S. Granick, Origins of solidification when a simple molecular fluid is confined between two plates, Journal of Chemical Physics 115, 1498 (2001).
* Zhu and Granick [2004] Y. Zhu and S. Granick, Superlubricity: A paradox about confined fluids resolved, Physical review letters 93, 096101 (2004).
* Kienle and Kuhl [2016] D. F. Kienle and T. L. Kuhl, Density and Phase State of a Confined Nonpolar Fluid, Phys.Rev. Lett. 117, 036101 (2016).
* Jabbarzadeh [2016] A. Jabbarzadeh, Friction anisotropy in confined alkanes: linear and branched molecules, Tribology International 97, 108 (2016).
* Jabbarzadeh _et al._ [2006] A. Jabbarzadeh, P. Harrowell, and R. Tanner, Low friction lubrication between amorphous walls: Unraveling the contributions of surface roughness and in-plane disorder, The Journal of chemical physics 125, 034703 (2006).
* Jabbarzadeh _et al._ [2007] A. Jabbarzadeh, P. Harrowell, and R. Tanner, Crystal bridges, tetratic order, and elusive equilibria: The role of structure in lubrication films (2007).
* Gribova _et al._ [2011] N. Gribova, A. Arnold, T. Schilling, and C. Holm, How close to two dimensions does a lennard-jones system need to be to produce a hexatic phase?, The Journal of chemical physics 135, 054514 (2011).
* Franosch _et al._ [2012] T. Franosch, S. Lang, and R. Schilling, Fluids in extreme confinement, Physical Review Letters 109, 240601 (2012).
* Mandal and Franosch [2017] S. Mandal and T. Franosch, Diverging time scale in the dimensional crossover for liquids in strong confinement, Physical Review Letters 118, 065901 (2017).
* Mermin and Wagner [1966] N. D. Mermin and H. Wagner, Absence of ferromagnetism or antiferromagnetism in one-or two-dimensional isotropic heisenberg models, Physical Review Letters 17, 1133 (1966).
* Flenner and Szamel [2015] E. Flenner and G. Szamel, Fundamental differences between glassy dynamics in two and three dimensions, Nature communications 6, 1 (2015).
* Shiba _et al._ [2016] H. Shiba, Y. Yamada, T. Kawasaki, and K. Kim, Unveiling dimensionality dependence of glassy dynamics: 2d infinite fluctuation eclipses inherent structural relaxation, Physical review letters 117, 245701 (2016).
* Illing _et al._ [2017] B. Illing, S. Fritschi, H. Kaiser, C. L. Klix, G. Maret, and P. Keim, Mermin–wagner fluctuations in 2d amorphous solids, Proceedings of the National Academy of Sciences 114, 1856 (2017).
* Vivek _et al._ [2017] S. Vivek, C. P. Kelleher, P. M. Chaikin, and E. R. Weeks, Long-wavelength fluctuations and the glass transition in two dimensions and three dimensions, Proceedings of the National Academy of Sciences 114, 1850 (2017).
* Zhang and Cheng [2019] B. Zhang and X. Cheng, Long-wavelength fluctuations and static correlations in quasi-2d colloidal suspensions, Soft matter 15, 4087 (2019).
* Shiba _et al._ [2019] H. Shiba, T. Kawasaki, and K. Kim, Local density fluctuation governs the divergence of viscosity underlying elastic and hydrodynamic anomalies in a 2d glass-forming liquid, Physical Review Letters 123, 265501 (2019).
* Li _et al._ [2019] Y.-W. Li, C. K. Mishra, Z.-Y. Sun, K. Zhao, T. G. Mason, R. Ganapathy, and M. P. Ciamarra, Long-wavelength fluctuations and anomalous dynamics in 2-dimensional liquids, Proceedings of the National Academy of Sciences 116, 22977 (2019).
* Bernard and Krauth [2011] E. P. Bernard and W. Krauth, Two-Step Melting in Two Dimensions: First-Order Liquid-Hexatic Transition, Phys. Rev. Lett. 107, 155704 (2011).
* Anderson _et al._ [2017] J. A. Anderson, J. Antonaglia, J. A. Millan, M. Engel, and S. C. Glotzer, Shape and Symmetry Determine Two-Dimensional Melting Transitions of Hard Regular Polygons, Physical Review X 7, 021001 (2017).
* Li and Ciamarra [2020] Y.-W. Li and M. P. Ciamarra, Attraction tames two-dimensional melting: from continuous to discontinuous transitions, Physical Review Letters 124, 218002 (2020).
* Plimpton [1995] S. Plimpton, Fast parallel algorithms for short-range molecular dynamics, Journal of computational physics 117, 1 (1995).
* Kob and Andersen [1994] W. Kob and H. C. Andersen, Scaling behavior in the $\beta$-relaxation regime of a supercooled lennard-jones mixture, Physical review letters 73, 1376 (1994).
* Tong and Tanaka [2018] H. Tong and H. Tanaka, Revealing Hidden Structural Order Controlling Both Fast and Slow Glassy Dynamics in Supercooled Liquids, Physical Review X 8, 011041 (2018).
* Tanguy _et al._ [2002] A. Tanguy, J. Wittmer, F. Leonforte, and J.-L. Barrat, Continuum limit of amorphous elastic bodies: A finite-size study of low-frequency harmonic vibrations, Physical Review B 66, 174205 (2002).
* Note [1] We have verified that these gaps are not an artefact of the discontinuity of the force at the cutoff distance [39], as they persist when the interaction potential is appropriately smoothed.
* Hu _et al._ [1991] H.-W. Hu, G. A. Carson, and S. Granick, Relaxation Time of Confined Liquids under Shear, Phys. Rev. Lett. 66, 2758 (1991).
* Demirel and Granick [1996] A. L. Demirel and S. Granick, Glasslike Transition of a Confined Simple Fluid, Phys. Rev. Lett. 77, 2261 (1996).
* Flenner and Szamel [2019] E. Flenner and G. Szamel, Viscoelastic shear stress relaxation in two-dimensional glass–forming liquids, Proceedings of the National Academy of Sciences of the United States of America 116, 2015 (2019).
* Nugent _et al._ [2007] C. R. Nugent, K. V. Edmond, H. N. Patel, and E. R. Weeks, Colloidal Glass Transition Observed in Confinement, Phys. Rev. Lett. 99, 025702 (2007).
* Edmond _et al._ [2012] K. V. Edmond, C. R. Nugent, and E. R. Weeks, Influence of confinement on dynamical heterogeneities in dense colloidal samples, Phys. Rev. E 85, 41401 (2012).
* Shimada _et al._ [2018] M. Shimada, H. Mizuno, and A. Ikeda, Anomalous vibrational properties in the continuum limit of glasses, Physical Review E 97, 22609 (2018).
University Juraj Dobrila of Pula, Zagrebačka 30, HR-52100 Pula, Croatia
([email protected]); Ericsson Nikola Tesla, Krapinska 45, HR-10000 Zagreb,
Croatia ([email protected])
# The Role of Functional Programming in Management and Orchestration of
Virtualized Network Resources
Supported by the ERASMUS+ project “Focusing Education on Composability,
Comprehensibility and Correctness of Working Software”, no.
2017-1-SK01-KA203-035402, and the research project “Reliability and Safety in
Complex Software Systems: From Empirical Principles towards Theoretical Models
in View of Industrial Applications (RELYSOFT)”, no. IP-2019-04-4216, funded by
the Croatian Science Foundation.
Part II. Network Evolution and Design Principles
Tihana Galinac Grbac (ORCID 0000-0002-4351-4082) and Nikola Domazet
###### Abstract
This is part II of the follow-up lecture notes of the lectures given by the
authors at the _Three “CO” (Composability, Comprehensibility, Correctness)_
Winter School held in Košice, Slovakia, in January 2018, and Summer School
held in Budapest, Hungary, in June 2019. In this part we explain the recent
network evolution and the concept of virtualization, focusing on the
management and orchestration of virtualized network resources. Network
Functions Virtualization (NFV) is a new paradigm for changing the way networks
are built and operated. Decoupling software implementation from network
resources through a virtualization layer introduces a need for developing sets
of NFV management and orchestration (MANO) functions. We discuss how this new
point of view is highly inspired by functional programming concepts. We
provide examples and exercises on OpenStack virtual technology, and also
discuss challenges and problems inspired by the telecommunication industry.
The focus is on the reliable operation of the management and orchestration
functions of virtualized resources.
These notes provide an introduction to the subject, with the goal of
explaining the necessity for new knowledge and skills in the area of network
programming. We introduce students to the main problems and to the network
design principles, methods and techniques used for their solution. The worked
examples and exercises serve as teaching material, from which students can
learn how to use functional programming to effectively and efficiently
coordinate management and orchestration functions in distributed complex
systems using NFV.
###### Keywords:
Network Function Virtualization, Management and orchestration, Complex
software systems, OpenStack platform.
## 1 Introduction
This lecture, part II, belongs to a lecture series on the role of functional
programming in the management and orchestration of virtualized network
resources. In part I of these follow-up lecture notes of the lectures given by
the authors at the _Three “CO” (Composability, Comprehensibility,
Correctness)_ Winter School held in Košice, Slovakia, in January 2018, we
discussed the structure of complex systems and their design principles. We
provided an introduction to the theory of complex software systems, reflecting
on examples from the telecommunication network and carefully positioning the
considered problems imposed by network evolution and the continuous increase
in complexity. Furthermore, we discussed the main system design principles
proposed to cope with complexity, such as modularity, abstraction, layering
and hierarchy. Since these are very generic recommendations for designing such
complex systems, we further explained in detail the main paradigms that
enforce these principles, namely service orientation and virtualisation.
Virtualization is a paradigm frequently used in the management of complex
software systems. It introduces a new abstraction layer, a virtual edition of
a system layer and its functions, which avoids dependencies between system
layers.
In this lecture we go one step further and discuss network evolution and
design principles. We introduce new concepts that are cornerstones of future
network evolution and are based on virtualisation and service orientation:
Network Functions Virtualization (NFV) and Software Defined Networking (SDN).
NFV decouples network functions from physical network resources through a new
virtualization layer [8], thus avoiding dependencies among them. However, it
introduces the need for developing sets of NFV management and orchestration
(MANO) functions. Further in this lecture, we describe the new challenges
arising from the implementation point of view and show students how to use
programming techniques for coordinating the management and orchestration
functions of virtualized network resources operating in distributed
environments.
The problems and challenges of coordinating management and orchestration
functions are addressed using the OpenStack platform [12]. It is an
open-source cloud operating system that integrates a collection of software
modules necessary to provide a layered cloud-computing model. Such technology
is necessary for dealing with problems arising from the virtualization
paradigm in current networks, and students who understand the solutions in
OpenStack will be able to transfer their knowledge to other existing
technologies with the same or similar purpose.
These notes provide an introduction to the subject, with the goal of
explaining the problems and the principles, methods and techniques used for
their solution. The worked examples and exercises serve as teaching material,
from which students can learn how the use of functional programming may result
in the effective and efficient coordination of management and orchestration
functions in distributed complex systems using NFV.
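To make the functional-programming angle concrete, here is a small illustrative sketch (not an actual OpenStack or NFV-MANO API; all step names and returned values are invented) in which orchestration steps are modelled as pure state-to-state functions composed with a fold:

```python
from functools import reduce

# Each orchestration step is a pure function: it takes the current
# deployment state and returns a new one, never mutating its input.
def allocate_network(state):
    return {**state, "network": "net-0"}

def spawn_vnf(state):
    return {**state, "vnf": f"vnf-on-{state['network']}"}

def register_service(state):
    return {**state, "service": "up"}

def orchestrate(initial, steps):
    """Left-to-right composition of orchestration steps (a fold),
    making the workflow declarative, testable and easy to replay."""
    return reduce(lambda st, step: step(st), steps, initial)

state = orchestrate({}, [allocate_network, spawn_vnf, register_service])
print(state)  # {'network': 'net-0', 'vnf': 'vnf-on-net-0', 'service': 'up'}
```

Because the steps are side-effect free, retries and rollbacks reduce to re-running or truncating the list of steps, which is one reason this style is attractive for reliable MANO coordination.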
The methods and techniques explained in these lecture notes, and applied to
the problems of management and orchestration of network virtualization,
already exist, and we claim no originality in that sense. The purpose of these
notes is to serve as teaching material for these methods.
The challenges arising from the new network paradigms, as well as their
solutions, are illustrated through practical examples using OpenStack virtual
technology and inspired by the problems from the telecommunication industry.
The course is divided into the following main parts:
* •
Background, reflecting on the key lessons from the previous lectures: the
definition of complex systems, the challenging aspects of their management,
system design principles, and the technologies enforcing these principles.
* •
New network technologies that drive network evolution, such as Cloud
Computing, Network Functions Virtualisation and Software Defined Networking.
* •
Management and orchestration of virtualized resources and network design
principles.
* •
Introduction to the OpenStack platform.
* •
Reflections on practical examples.
The main learning outcomes of these lectures are: to introduce virtualisation
as one of the design principles for building modern complex systems; to
explain the need for automated management and orchestration (MANO) functions
in virtualized environments; to understand the challenges posed by unreliable
MANO functions in virtualized environments; and, finally, to understand how
well-formalized virtualisation may help to improve reliable operation in
network environments.
## 2 Background
Nowadays, all software systems and, more generally, everything is getting
interconnected over the Internet-based telecommunication network. This
distributed network interconnects various peripheral systems at the edge of
the network, spanning a variety of application domains. The number of edge
systems and their applications is growing continuously, forcing the current
core network to increase its capacity. Current networks are already very
complex, and their management has become extremely expensive and inefficient.
Therefore, new innovations are needed to enable the simplification of network
management and use. System reliability and safety are of ultimate importance
for an ever-growing range of applications and services. Note that in the
telecommunication network, services are provided to users by distributed and
complex systems operating in coordination. Reliability is defined as the
continuity of system functionality and service. Safety is defined as the
non-occurrence of catastrophic consequences for the environment due to
unreliable system operation.
The main problem in current research and practice is that we lack adequate
mathematical models that provide a better understanding of the underlying
causes of such complex system behaviour, and that can model the global system
properties that generate reliable and safe behaviour of modern software
systems of ever-increasing complexity [7]. Network and system engineering
principles have to be redesigned to accommodate these innovations. The current
Software and Systems Engineering knowledge base has to be revised in light of
new challenges [10, 4]. Furthermore, leading software companies (e.g. Google)
have recognized these properties as a vital specialization of software and
systems engineering research, focusing on the reliability and maintainability
of large, complex software systems and networks [3, 14]. This knowledge is
recognized as important for the next generation of software and system
engineers specialising as network programmers. Hence, these lectures are set
within the theory of complex systems, in particular complex software systems
and their role within telecommunication networks.
When building a complex system, there are numerous possibilities for
structuring it. The way a system is built limits or enables its further
evolution and maintenance. Furthermore, in building large-scale complex
systems that provide complex functionalities, functional system composition
emerges as the logical solution. This is especially the case for the complex
software systems present in the telecommunication network, which is
continuously evolving, introducing ever more complex system functionalities,
and whose evolution follows precise standards and recommendations described
and regulated by numerous standards bodies. In fact, all these standards and
recommendations define system functionalities that are achieved by
implementing a number of system functions. Thus, the functional system
decomposition is already driven by the numerous standards bodies.
We provided an introduction to this topic in the first part of these lecture
notes, _Part I. System structure for complex systems and design principles_,
which we prepared as follow-up lecture notes for the lectures given by the
authors at the _Three “CO” (Composability, Comprehensibility, Correctness)_
Winter School held in Košice, Slovakia, in January 2018, and the Summer School
held in Budapest, Hungary, in June 2019. Therefore, in the sequel we only
briefly recap the main learnings and basic understanding needed to easily
follow the advanced topics presented in this lecture.
In the previous lecture, we first started with a relevant definition of a
complex system from complex systems theory [2] and applied this definition to
complex software systems. A complex software system is a system in which there
exist a number of levels of abstraction, and in which it is impossible to
derive simple rules relating the local system properties that describe
component behaviour to the global properties of the system (such as, for
example, reliability and safety). This behaviour is observed in large-scale
software systems, such as mission-critical systems that were developed
evolutionarily and that are usually very sensitive to reliability and security
requirements. These systems are typically developed in a sequence of projects
and releases, involving several hundreds or even thousands of software and
system engineers distributed around the globe; the resulting product exceeds
several thousand lines of code and concurrently serves millions of users in
collaboration with similarly complex systems in a distributed network
environment. Many network nodes within the telecommunication network share
these challenges. In the previous lecture we focused on and interpreted these
challenges for the mobile switching node, the central node for switching
mobile subscribers within the telecommunication core network.
Here, the main problem arises from the fact that humans develop these systems,
and as the systems grow, the human inability to cope with such complexity is
recognised as one of the main obstacles to their further evolution. The main
tool used to manage such software systems is the system structure, which
logically decomposes a complex system into the set of components needed to
accomplish the system functionalities. Such a system structure is used to
reason about and manage the system implementation while providing a connection
between local and global system properties, but, more importantly, it provides
a communication tool among all parties involved in the development of such
systems. Efficient systems use a functional decomposition in which one
function may serve a variety of system functionalities. In such a system, the
side effects of changing system functions while implementing new and upgrading
existing functionalities must be kept under control, since the propagation of
implementation effects or failures across a variety of system functions may
become very costly and time-consuming. In this context, the functional
programming paradigm is receiving increased attention. The main idea is to
treat program execution, while operating a system functionality, as the
evaluation of mathematical functions, without influencing the global system
state and without sharing mutable data across system functions. However, this
idea is not easy to achieve in such systems.
There are numerous candidate structures for building such systems, and the
global system behaviour and quality may be seriously influenced by the
selected solution. To succeed as far as possible in the aim stated above, we
introduced the four main system design principles: modularity, abstraction,
layering and hierarchy. Modularity means building systems as sets of smaller
components that are independent of each other. Abstraction relates to the
design of system interfaces and of communication between system components;
the main idea is to design standard interfaces among components that are
clearly separated from the components’ internal implementation details. The
components are further organized into a hierarchical layered structure, where
components with similar functionality are grouped together within a system
layer, communication follows strict hierarchical rules, and only neighbouring
layers may communicate. In the previous lecture we provided an overview of the
standard Open Systems Interconnection Model (OSI Model), which defines a
hierarchical layering of the system functions involved in communication with
other systems. The development of this standard has promoted better
interconnection among equipment from different providers and across national
borders and regulations.
During the network’s evolution there is continuous growth in the number of
network users, in the variety of technologies connected to the network, and in
the services the network offers. The core telecommunication network is
continuously evolving, finding new ways to cope with new requirements such as
massive traffic with diverse information content and a variety of different
users, mobile and fixed, interconnected across geographic and application
domains. The key technological trends implemented in the modern
telecommunication network are inspired by two main ideas: virtualisation and
service orientation.
These ideas have been built into the telecommunication network from the very
beginning. The main motivation for virtualizing physical resources came with
the first idea of building a common telecommunication infrastructure that
provides services to subscribers and is shared among them. In the previous
lecture we provided a detailed description of how the multiplexing of a number
of subscribers onto one common physical wire was introduced. Multiplexing
subscribers involved the first abstraction of a physical resource into its
software representation. In order to implement reliable management of the
shared resources, a proper virtualisation function has to be developed. The
concept of service orientation has likewise long been present within the
network. However, with network evolution, service orientation is moving from a
manual process to a software-supported process. In the modern
telecommunication network, users request services dynamically, whenever they
need them, and the network satisfies these needs by executing user requests in
a fully service-oriented computing paradigm. Moreover, the network functions
provide services to one another in a service-oriented fashion. Both of these
concepts have introduced numerous benefits, such as increased capacity and
rapid innovation.
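The multiplexing of subscribers over one shared wire, recalled above, can be
sketched as a simple time-division multiplexer. The channel contents below are
purely illustrative; the point is that each subscriber keeps a software
abstraction of a channel while the physical wire is shared:

```python
# Minimal sketch (illustrative): time-division multiplexing abstracts one
# physical wire into per-subscriber software channels.

def multiplex(channels):
    """Interleave one unit from each subscriber channel into wire slots."""
    frames = []
    for slots in zip(*channels):      # one frame = one slot per subscriber
        frames.extend(slots)
    return frames

def demultiplex(frames, n_channels):
    """Recover each subscriber's channel from its fixed slot position."""
    return [frames[i::n_channels] for i in range(n_channels)]

wire = multiplex([["a1", "a2"], ["b1", "b2"]])
print(wire)                  # ['a1', 'b1', 'a2', 'b2']
print(demultiplex(wire, 2))  # [['a1', 'a2'], ['b1', 'b2']]
```

Each subscriber sees only its own channel; the sharing of the wire is hidden
behind the multiplexing function, which is exactly the abstraction step from
physical resource to software representation.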
## 3 Network evolution
Telecommunication networks continuously evolve in generations, implementing
new concepts that help accomplish their main goal: to allow the
interconnection of various technologies from various vendors while keeping a
reasonable balance between cost and performance. Telecommunication networks
are used by different classes of users, utilizing different technologies,
sometimes with very specific service demands. In such cases, the process of
network configuration and management becomes very expensive as well as time-
and resource-consuming. Efficient scaling of network resources, enabling
innovation, introducing new services, and energy-efficient solutions are very
hard to implement. The main problem network operators face today is how to
effectively and efficiently manage a high diversity of users and technologies
while at the same time achieving capital efficiency and flexibility for
improvements. Recent work has focused on the development of new network
architectures that allow operators to architect their networks more
efficiently. In the sequel we introduce the main ingredients of the new
network architecture defined for the fifth-generation (5G) network.
### 3.1 Cloud Computing Platforms
There is a growing need and interest in consuming computing and storage
resources from third-party vendors following the as-a-service principle. For
software development companies, service orientation increases opportunities
for specialisation while leaving hardware management operations outside their
business. On the other side, vendor companies can specialize in the hardware
management business. Therefore, there is a business-driven need for open and
well-standardized Application Programming Interfaces (APIs) over which
hardware vendors may offer their services to application service providers,
see Figure 1.
Figure 1: OpenStack virtualisation of network resources
The new paradigm of abstracting the resource plane requires huge
standardisation efforts for the cloud platform. An operating system has to be
developed for managing distributed hardware and related software resources and
offering them as services over a well-standardised set of APIs. Note that this
is the key difference between a distributed system and a cloud system. Users
may access cloud resources from a single interface point (e.g. using a
command-line interface or a graphical user interface) and use the resources on
demand via well-standardised APIs. In traditional distributed system
architectures, all network resources were physical nodes with communication
software installed for use on that single piece of physical hardware. This
paradigm has changed: communication software is now independent of the
physical hardware and can be installed on any network node using a standard
set of APIs. This is the main reason why telecommunication systems are
progressively moving into virtualized cloud environments.
With the aim of speeding up this standardisation process for the cloud
platform, numerous initiatives have been established. OpenStack is one such
project, established jointly by NASA and Rackspace, intended to provide an
open-source cloud platform alternative compatible with Amazon Elastic Compute
Cloud (EC2). Furthermore, it should provide runtime, reliable and massive
scalability of resources with a simple design. Numerous experts from around
the globe and from various industries have contributed to the project. Today,
OpenStack is widely accepted as an innovation platform in the cloud platform
industry [13, 9]. In this lecture, all our examples are provided on OpenStack,
with the intention of demonstrating management functions and their operation
in virtual environments. We selected the open-source computing platform
OpenStack to make the exercises easy to execute for a wide community,
especially targeting graduate students at the Master’s level of a Computing
curriculum.
### 3.2 Network Function Virtualisation and Software-Defined Networking
The term network functions virtualisation (NFV) refers to abstracting physical
networking equipment and the related behaviour by creating software
representations of network elements and network operations (including memory
and storage resources). In other words, NFV provides a network service that is
decoupled from the physical hardware yet offers a feature set identical and
consistent with its hardware counterpart. Thus, network functions (hardware
and software) are redesigned and offered as a service, following the on-demand
principle, independently of the physical hardware. NFV aims to define virtual
network technologies that allow operators to implement, within their network
offerings, different technologies for which a dedicated and specialized device
used to be needed, by using common industry-standard information technology
(IT) such as servers, switches and storage.
The general framework around the implementation of the NFV concept, defined in
[6], consists of the following main layers:
* •
Network functions virtualization infrastructure (NFVI) is the layer hosting
generic COTS-based hardware components such as storage, compute and network
hardware.
* •
Virtualized network functions (VNFs) is the layer of functions implemented
solely in software, reaping the benefits of software products: an easy scaling
process, simple and fast deployment over multiple hardware platforms, the
combination of virtual instances on the same hardware, and the automation of
these processes together with licensing.
* •
Management and orchestration functions (MANO), which need to be developed for
managing virtual instances and implementing their autonomous operation, as we
discuss further in this lecture. For this purpose a dedicated working group
has been established within the European Telecommunications Standards
Institute (ETSI).
Software-Defined Networking (SDN) is a new networking paradigm which
introduces additional abstractions in networks by separating the data and
control planes of networking devices. It assumes that the control plane is
able to use standardized vertical interfaces to dynamically reconfigure the
data-plane flows based on a global network policy. Therefore, many network
functions can easily be virtualized using common servers and simple data-plane
networking devices.
The invention of the SDN architecture is motivated by the fact that
traditional networking technologies are inadequate to scale to the levels
required by today’s telecommunication networks. These limits are mainly caused
by the complexity of network control and management functions and their
distributed implementation logic. Distributed logic works well in medium-sized
networks, but in today’s large and fast-expanding network scenarios it becomes
inefficient and too complex to manage and coordinate at scale [15]. Therefore,
a centralized network solution is needed. The main characteristics that such a
solution should provide are:
* •
Network management should be driven by general network objectives; low-level
device performance issues should be separated out and considered at a lower
level of abstraction.
* •
A global network view should be built and maintained for a comprehensive
understanding of the network’s complexity at a higher abstraction level,
including its topology, traffic, and events.
* •
Devices at the lower level should be controllable through a standardized
interface, so that they can be programmed and their behaviour changed on the
fly, based on actual network demands and governed from the global network
view.
The fundamental bases of Software-Defined Networking are the separation of the
control and data planes; simplified SDN devices (forwarding devices without
complex distributed management protocols, managed from the control plane);
centralized control (all network management is centralized at the control
plane, which manages the data-plane SDN devices with the help of an open
standard); network automation and virtualisation; and network openness. The
Open Networking Foundation (ONF) [11] was established in 2011 by major network
operators to promote the adoption of SDN through the development of open
standards. The open standards under consideration are OpenFlow and the
OpenFlow Configuration and Management Protocol, both used to communicate
control decisions from the control plane to the data plane. The main idea
behind SDN is to provide a programmable network. The main challenges in
SDN-based networks are latency, scale, high availability and security. Latency
may be affected by the introduction of a central processing function, which
may introduce delays because of the numerous requests it has to process. Since
the number of users and devices connected to a network is continuously
growing, scale in this centralised paradigm may be limited by the processing
power of the central function. Also, the central function has to be very
reliable and highly available so as not to represent a single point of failure
for the whole network; therefore, mechanisms for high redundancy in processing
and data storage may be required. Finally, a central point of control may be a
serious target for security attacks.
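The separation of control and data planes described above can be sketched in a
few lines. The class and attribute names, match fields and port numbers below
are illustrative (this is not OpenFlow); the point is that the switch holds
only a dumb match-action table, while all decisions come from the controller’s
global view:

```python
# Minimal sketch (illustrative, not OpenFlow): a controller installs flow
# rules into simple data-plane devices through a standardized interface.

class Switch:
    """Data-plane device: a match-action table, no distributed logic."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}              # destination address -> output port

    def install_flow(self, dst, port):    # the standardized vertical interface
        self.flow_table[dst] = port

    def forward(self, dst):
        # Unknown destinations return None (would be escalated to the controller).
        return self.flow_table.get(dst)

class Controller:
    """Control plane: derives per-switch rules from a global policy."""
    def __init__(self):
        self.switches = {}

    def register(self, switch):
        self.switches[switch.name] = switch

    def apply_policy(self, policy):
        # policy: {switch_name: {dst: port}} computed from the global view
        for sw_name, flows in policy.items():
            for dst, port in flows.items():
                self.switches[sw_name].install_flow(dst, port)

ctrl = Controller()
s1 = Switch("s1")
ctrl.register(s1)
ctrl.apply_policy({"s1": {"10.0.0.2": 2}})
print(s1.forward("10.0.0.2"))   # port chosen centrally by the controller
```

Reprogramming the network on the fly then amounts to calling `apply_policy`
again with a new rule set, without touching the devices themselves.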
## 4 Management and orchestration of virtualized network resources
As already stated in Sect. 2, systems are getting more and more complex. The
same is happening with telecommunication networks. Networks are transforming
from a classical distributed set of interworking nodes into modern distributed
interworking functions and services. The management of such a complex system
becomes very expensive, demanding greater expertise and more highly skilled
network management personnel, and the consequences of the actions performed
are unpredictable. Note that in modern networked complex systems the functions
are implemented in different functional blocks, as parts of different complex
systems, and we need new knowledge to accomplish the reliable operation of the
management and orchestration functions operating in these distributed
environments. Therefore, one of the recognized strategies in the evolving
telecommunication network is the path towards autonomy and self-management.
Recent research efforts are devoted to innovation in this field. There is a
need for effective mechanisms to automate the network, so that it may
automatically adapt its configuration to new traffic demands, introduce
network flexibility, and autonomously adapt to new technologies and new vendor
equipment. These research efforts are driven by the idea of autonomic
computing [15] and further involve research on autonomic communication,
autonomic networks, autonomic network management and self-managed networks.
The final level of system autonomy is the level at which humans only specify
the business policies and objectives that govern the system, while
self-management following these policies is left to the system.
Self–management mainly means:
* •
self–configuration
* •
self–healing
* •
self–optimisation
* •
self–protection
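A common way to structure such autonomic behaviour is a monitor–analyse–plan–
execute loop. The following is a minimal self-healing sketch in that spirit;
the instance names and states are illustrative and this is not an OpenStack
API:

```python
# Minimal self-healing sketch (illustrative names, not an OpenStack API):
# a monitor-analyse-plan-execute loop restarts instances reported as failed.

def monitor(instances):
    """Collect the observed state of each managed instance."""
    return dict(instances)

def analyse(observed):
    """Detect deviations from the desired state (here: 'ACTIVE')."""
    return [name for name, state in observed.items() if state != "ACTIVE"]

def plan(failed):
    """Decide on corrective actions for each detected deviation."""
    return [("restart", name) for name in failed]

def execute(actions, instances):
    """Apply the planned actions back to the managed resources."""
    for action, name in actions:
        if action == "restart":
            instances[name] = "ACTIVE"

instances = {"vm1": "ACTIVE", "vm2": "ERROR"}
execute(plan(analyse(monitor(instances))), instances)
print(instances)  # vm2 has been healed back to ACTIVE
```

In a real self-managing network, this loop would run continuously, and the
`plan` step would be driven by the operator’s business policies rather than a
hard-coded rule.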
In such new networks, the concept of programming software localized within one
node, with industry-closed standards and solutions for network functions, is
replaced by the concept of programming software for open network functions.
The necessity of a new profession, the network developer, is evident. In this
new world of network programming we are starting to develop network design
principles. In the next section we open the discussion on that topic.
### 4.1 Design principles for implementing autonomic behavior
Autonomic behaviour has been developed in many other fields, and some general
design principles have been recognized across all of them. With respect to
network heterogeneity, scalability and distribution, the same principles may
also be valid for networks. Here we briefly introduce these principles from
[1] to motivate students to think about their implementation within the
examples provided in Section 6 of this lecture.
* •
Living systems inspired design
* •
Policy based design
* •
Context awareness design
* •
Self–similarity
* •
Adaptive design
* •
Knowledge based design
Living-systems-inspired design is a perspective on system design that takes
its inspiration from the functioning of living systems. There are many
self-management mechanisms in the functioning of living systems and in their
interaction with the environment, and those ideas are taken as motivation for
autonomy design. These concepts are mostly driven by survival instinct and
collective behaviour. Survival instinct relates to a system’s tendency to
return to its original equilibrium state. Collective behaviour refers to
spontaneous system reactions that may be derived from collective movement.
Note that there is a huge knowledge base derived from observing individuals
with respect to collective behaviour (for example in Twitter and Facebook
applications), and it sometimes happens that an individual piece of
information moves collective behaviour into some particular state.
Policy-based design uses predefined rules that govern the behaviour of the
system. This design principle has already been implemented widely across the
network. However, it does not eliminate human interaction with the system.
Context-awareness design relates to the ability of the system to characterize
a situation or environment and, based on historic behaviour, to decide how to
adapt to new conditions. This principle has already been implemented within
the computing and networking fields; the numerous environment-sensing case
studies are one example.
The self-similarity design principle relates to the characteristic that the
system organization persists as the system scales, thus guaranteeing its
global properties. This characteristic also means that global system
properties emerge solely from low-level interactions, so that low-level
interactions do not interfere with global ones.
Adaptive design relates to the ability of the system to adapt its inner
behaviour in reaction to various environmental conditions. Such a system is
able to learn from its operational experience and react accordingly, adapting
its actions based on the information collected and the knowledge gained.
Knowledge-based design concerns finding the best design for the
knowledge-gathering process. Since the systems are complex, there are numerous
possibilities when selecting the appropriate set of properties to measure,
designing appropriate data-collection procedures, and using appropriate
artificial intelligence models to build an appropriate knowledge base. This
design is linked to the formulation of appropriate business goals.
### 4.2 Current state
Networks are already highly developed, and introducing automation (by
excluding the human) into network management is not an easy, one-step process.
The first step in automation is to virtualise the complex infrastructure and
provide virtualized network resources. Furthermore, real-time management and
orchestration functions have to be developed that operate on these virtual
network resources. As already mentioned, telecommunication network functions
are currently being progressively redesigned (virtualized) so that they can be
offered over the cloud. In this process every network resource gets its own
virtual image, so that it may be reinstalled, activated or deactivated as
required by network reconfiguration or scaling demands. To automate the
installation processes of this complex infrastructure, scripts are written and
then called for execution whenever needed during dynamic network management
activities. These scripts are written in classical programming languages such
as Python. Note that real-time management and orchestration of network
functions should avoid overlapping management and orchestration processes over
the same physical network resource pool. Again, a functional-programming-like
approach is of ultimate importance here to secure reliable and safe network
management and orchestration operations.
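The requirement that two management or orchestration processes never overlap
on the same resource pool can be sketched with a simple mutual-exclusion
guard. The pool, its capacity and the allocation sizes below are illustrative,
not an OpenStack API:

```python
# Minimal sketch (illustrative): serializing orchestration actions on a
# shared resource pool so that concurrent processes never overlap on it.
import threading

class ResourcePool:
    def __init__(self, capacity):
        self.free = capacity
        self._lock = threading.Lock()    # one orchestration action at a time

    def allocate(self, amount):
        with self._lock:                 # check-and-update is atomic
            if self.free >= amount:
                self.free -= amount
                return True
            return False                 # pool exhausted; action rejected

pool = ResourcePool(capacity=4)
results = []
threads = [threading.Thread(target=lambda: results.append(pool.allocate(1)))
           for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results.count(True), pool.free)    # exactly 4 allocations succeed
```

Without the lock, two concurrent processes could both observe `free >= amount`
and over-allocate the pool; the guard is the minimal mechanism that the text
above asks orchestration operations to respect.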
## 5 OpenStack
OpenStack is a software platform whose main functionality is to provide
distributed resources and infrastructure to its users following the
’as-a-service’ paradigm. Furthermore, OpenStack is a modular platform: it is
designed as a set of standardized units, each serving a specific purpose,
which may be used as needed or may be optional in an OpenStack deployment.
These units provide services to OpenStack users and to other OpenStack units
through standardised Application Programming Interfaces (APIs). Table 1 lists
the services, the names of the corresponding projects, and short descriptions
of their main functions. OpenStack was designed around three logical tiers:
Network, Control and Compute [13]. The Compute tier takes over all the logic
needed as the hypervisor of the virtual resources; for example, it implements
the agents and services that handle virtual machines. All communication among
OpenStack services and with OpenStack users is provided through API services,
a web interface, a database, and a message bus. Numerous services have been
implemented so far, and a detailed list can be found on the official OpenStack
web page and documentation [12]. In Table 1 we list just the group of
services, each specialized for a specific purpose, that we will also use in
the examples of Section 6, where we present how they operate together within a
running OpenStack environment. Furthermore, OpenStack offers communication
through a web interface called Horizon, or the dashboard. The OpenStack
conceptual architecture, available from [12], is presented in Figure 2, which
depicts the interaction among the OpenStack services mentioned in Table 1. For
communication, the MySQL, MariaDB and PostgreSQL databases and the RabbitMQ,
Qpid, and ActiveMQ message buses may be used.
Table 1: OpenStack services and projects Projects | Services | Short description
---|---|---
Horizon | Dashboard | Web interface for using OpenStack services and manipulating with virtual resources.
Keystone | Identity service | Authentication and authorisation functions.
Glance | Image service | Image Management services.
Neutron | Networking service | Provides services for networking of OpenStack resources to external network.
Nova | Compute service | Lifecycle management of virtual resources.
Cinder | Block storage service | Provides persistent storage functionality to virtual resources.
Swift | Object storage service | Data management over RESTful and HTTP based API’s implementing fault tolerant mechanisms for data replication and scaling.
Ceilometer | Telemetry services | Collecting measurements and monitoring of resources.
Heat | Orchestration service | Coordination of multiple virtual resources within one service provided to a user.
Figure 2: OpenStack conceptual architecture. Source: www.openstack.org
### 5.1 Graphical user interface for manipulating virtual resources
Horizon is the project defined within the OpenStack environment for managing
virtual resources over a graphical web user interface. A screenshot of the
Horizon GUI, called the dashboard, is presented in Figure 3. The dashboard is
the OpenStack component that exposes a set of OpenStack services through the
user interface. OpenStack users are thus given the possibility to manipulate
virtual resources via visual commands provided in the web interface. In the
background, the graphical user interface implements service calls to the APIs
of all officially supported services included within OpenStack. Note that
OpenStack also provides programmable access to its services over the APIs,
which we describe in the sequel. In the exercises we will focus more on
programmable access.
Figure 3: Horizon graphical user interface
### 5.2 Authentication and authorisation functions
Authentication and authorisation of user access to cloud computing resources
in OpenStack are managed through the Keystone service. The objects that may be
subject to Keystone management operations are users, tenants, roles, instances
(from the catalog of services) and networks (endpoints of the virtual
resources running in the OpenStack environment).
All objects must be assigned to tenants. The name ’tenant’ is used on the
command line, while in the dashboard a tenant is referred to as a project. A
role has to be defined for each object assigned to a tenant; its purpose is to
restrict the actions each object can perform. Even an administrator must have
a defined role and must be assigned to a tenant. The actions enabled for each
role may be specified in special policy documents, the
`/etc/PROJECT/policy.json` files.
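The role-based restriction described above can be sketched as a mapping from
actions to permitted roles, in the spirit of the `policy.json` files. The
action names and roles below are illustrative, and this is not Keystone’s
actual policy engine:

```python
# Minimal sketch (illustrative; not Keystone's real policy engine): a policy
# maps each action to the set of roles allowed to perform it.
policy = {
    "compute:create": {"admin", "member"},
    "compute:delete": {"admin"},
}

def is_authorized(user_roles, action):
    """Allow the action if the user holds at least one permitted role."""
    allowed = policy.get(action, set())   # unknown actions: nobody is allowed
    return bool(allowed & set(user_roles))

print(is_authorized(["member"], "compute:create"))  # True
print(is_authorized(["member"], "compute:delete"))  # False
```

Checking every request against such a table is what lets the administrator’s
own privileges be constrained in exactly the same way as any other user’s.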
Keystone maintains a service register, or service catalog, of the services
offered by the components within OpenStack. When a component is deployed
within an OpenStack cluster, it should be registered in this service catalog.
The service catalog contains a list of service names and the related
endpoints; a service endpoint is the URL granted to that component within the
OpenStack cluster. The main benefit of the service catalog is that a user only
needs to know the Keystone address and the name of the service he or she wants
to access. The Keystone service is then responsible for verifying the user’s
authentication and, based on the user’s role, for verifying whether the user
is authorised to access the service. A user never accesses OpenStack services
directly; access always goes through the Keystone service. Another important
aspect of maintaining the service catalog is that it keeps users independent
of the local OpenStack implementation, so that changes in endpoints are not
propagated to all users. That is, when a service changes its implementation
and is deployed at another endpoint, the end users do not need to be informed
about it; service users obtain the correct service endpoint address by asking
the Keystone service at the moment the service is needed.
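The decoupling that the service catalog provides can be sketched as a simple
name-to-endpoint registry. The service names and URLs below are illustrative,
not a real deployment:

```python
# Minimal sketch (illustrative names and URLs): a service catalog decouples
# users from endpoints, as Keystone's catalog does for OpenStack services.
catalog = {}

def register(service_name, endpoint_url):
    """Called when a component is deployed into the cluster."""
    catalog[service_name] = endpoint_url

def lookup(service_name):
    """Users only know the service name; the catalog returns the endpoint."""
    return catalog[service_name]

register("compute", "http://10.0.0.5:8774/v2.1")
# The service is redeployed at a new endpoint; its users are unaffected,
# because they look the endpoint up just in time:
register("compute", "http://10.0.0.7:8774/v2.1")
print(lookup("compute"))
```

Because the lookup happens at the moment the service is needed, redeploying a
component only requires updating the catalog entry, never the clients.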
### 5.3 Management of disk images
Glance is the component within OpenStack whose main function is to manage disk
images. For quick deployment of virtual resources, a pre-installed disk image
may be used to boot from. Glance maintains the register of these disk images,
which are cached on the compute node during the instantiation of virtual
resources and then copied to the ephemeral virtual-resource disk location.
These images have an operating system installed, but secure identity elements
such as the Secure Shell (SSH) host key and the network-device MAC address
have been removed, which makes the images generic and easily transferable to
any number of virtual machines without the risk of interleaving processes
among them. This host-specific information is transferred at system boot
within a cloud-init script.
Disk images may also be made for specific purposes. For example, if there is a
repeated need for a specific web service, then the pre-installed disk image
may also contain a pre-installation of that web service, so that the
deployment process for a number of instances may be fully automated and
faster. Numerous tools are available for creating such disk images with a
separate cloud-init script, for example appliance-creator, Oz, and many
others.
### 5.4 Network management functions
The main function of the Neutron component is network management; it offers
its users Networking as a Service (NaaS) functionality. This function is
needed to configure virtual resources to operate within a virtual network
environment. OpenStack uses the Open vSwitch plugin to enable software-defined
networking of the networking infrastructure, and it provides a number of APIs
and related services for its management. These include the connection of
virtual instances to isolated virtual networks, virtual routers, the
interconnection of virtual networks via virtual routers, and connection to
external networks via external gateways attached to virtual routers. Thus,
users may configure their own virtual network appliances, which are
interconnected with the external network. Neutron can manage multiple network
appliances.
Each instance may be associated to private or public network and is assigned
private and public IP address range. Private or fixed IP address is assigned
to an instance during its creation and is active during instance lifetime. On
the other hand, an public IP address or floating IP address is not dependent
of instance lifetime and it may be associated to an instance when the instance
is made available for public and disassociated when instance is removed from
public. Network Address Translation (NAT) transverse between public and
private address spaces during communication flow between these two networks.
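As an illustration in the Heat template notation used later in this tutorial, such a virtual network topology might be declared as follows (a sketch; the resource names, the CIDR and the external network name public are our own assumptions):

```yaml
resources:
  my_net1:
    type: OS::Neutron::Net
  my_subnet1:
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: my_net1 }
      cidr: 10.0.0.0/24        # hypothetical fixed (private) address range
  my_router:
    type: OS::Neutron::Router
    properties:
      external_gateway_info: { network: public }   # gateway towards the external network
  my_router_if:
    type: OS::Neutron::RouterInterface
    properties:
      router_id: { get_resource: my_router }
      subnet_id: { get_resource: my_subnet1 }
```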
### 5.5 Management of virtual instances
Nova is the component responsible for instance management. This includes the management of flavors, key pairs, instances, floating IPs and security groups. Flavors define the amount of resources that are allocated to an instance. Before an instance can be launched, authentication of the user must be performed. An authenticated user uses a key pair (SSH pair) and a security group to create its virtual instances. It can use its own SSH key or one generated by the system. SSH key pairs are not new in the OpenStack environment; the principle is reused from Linux. When a virtual instance is deployed, a public key is placed in the authorized_keys file, and the running instance can then be accessed over an SSH connection without a password. A security group is a firewall at the cloud-infrastructure layer that must be opened to allow connections to a virtual instance. By default, virtual instances belonging to the same security group may communicate with each other, while rules must be specified for Internet Control Message Protocol (ICMP), SSH and other connections from outside the security group.
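A sketch of how such a key pair and security group could be declared in a Heat template (the names and rules are our own illustration, not taken from the text):

```yaml
resources:
  my_key1:
    type: OS::Nova::KeyPair
    properties:
      name: my_key1
      save_private_key: true   # let the system generate the SSH pair
  ssh_secgroup:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        - protocol: icmp       # allow ping from outside the group
        - protocol: tcp        # allow SSH connections on port 22
          port_range_min: 22
          port_range_max: 22
```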
### 5.6 Management of persistent memory
Cinder is the component for management of block storage. It is used whenever persistent storage is needed that does not depend on the instance lifetime. Note that the disk space associated with an instance at its creation is destroyed at its termination. This is not the case for block storage. Block storage may be requested by users on demand and may be presented to a running instance. It is also used for storing snapshots of block volumes, or of instances, that are needed for instance boot.
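A minimal sketch of requesting a block volume and presenting it to a running instance in a Heat template (my_instance is assumed to be an OS::Nova::Server defined elsewhere; names and sizes are hypothetical):

```yaml
resources:
  data_volume:
    type: OS::Cinder::Volume
    properties:
      size: 10                 # GB of persistent block storage
  data_volume_attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      volume_id: { get_resource: data_volume }
      instance_uuid: { get_resource: my_instance }  # hypothetical server resource
```

The volume outlives the instance: terminating my_instance destroys its ephemeral disk but leaves data_volume intact.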
### 5.7 Management of object storage
Swift is the object-storage management component. In contrast to block storage, files and containers are stored without any metadata and are transferred from an instance to the object store using client–server communication with minimal overhead to the operating system.
### 5.8 Performance measurement functions
The component within OpenStack that is responsible for monitoring OpenStack resources and collecting resource measurements is called Ceilometer. Originally it was designed for billing purposes, but it later acquired the more generic purpose of taking care of all telemetry within OpenStack. This includes observation of instance behaviour, availability and performance, as well as alarm setting. A very important application of the Ceilometer measurement system and its alarms is the autoscaling of OpenStack resources at runtime.
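A sketch of how such an alarm could be declared in a Heat template (the thresholds are arbitrary, and scale_up_policy is a hypothetical OS::Heat::ScalingPolicy resource defined elsewhere):

```yaml
resources:
  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 600              # evaluate over 10-minute windows
      evaluation_periods: 1
      comparison_operator: gt
      threshold: 80            # fire when average CPU utilisation exceeds 80 %
      alarm_actions:
        - { get_attr: [scale_up_policy, alarm_url] }  # hypothetical scaling policy
```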
### 5.9 Orchestration functions
OpenStack has a special component responsible for orchestration of its resources. When multiple resources are intended to be used for a specific purpose and by the same user, these resources have to be interconnected and tied together, so that all operations available for regular OpenStack instances may also be performed on this 'orchestrated' instance. For this purpose, within the Heat component of OpenStack, a template file may be used to specify the resources that need to be orchestrated, their order and mutual dependencies, and the data that needs to be transferred among them. Heat is also compatible with the Amazon Web Services (AWS) CloudFormation template language.
## 6 Examples
These exercises were developed for the Software Engineering Management course within the Computer Science master study programme, and are available at http://tania.unipu.hr/~tgalinac/OpenStack_Vjezbe-UPI.pdf. The source files for the examples that follow can be accessed from GitHub at https://github.com/nikoladom91/CEFP2019.
### 6.1 Example 1
Heat is the main project in the OpenStack Orchestration program. It allows
deployment of resources on an OpenStack platform using templates. Heat
supports various template formats and the format we will be using in this
tutorial is the HOT (Heat Orchestration Template) format written as YAML
files.
The HOT files are executed by the Heat service and provide the blueprint for the deployment we want to achieve. A resource or group of resources created during a HOT deployment is referred to as a stack. We will use the following examples to describe the particulars of writing a HOT template and to show how orchestration can be used.
###### Example 1
heat_template_version: 2013-05-23
description: Simple template to deploy a single compute instance
resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      image: ubuntu_cloud14
      flavor: m1.small
      key_name: my_key1
      networks:
        - network: my_net1
In Example 1 we use a basic template to explain the minimum required
information for writing a functional template. We will go over the specific
parts and describe their purpose.
The heat_template_version key is required in every template and it describes
what version of HOT the template is written in. The description is optional
and is usually used to describe the purpose and function of the template.
The resources section describes the resources the template will create and configure during the deployment. At least one resource per template is required. Each resource must have a type specified; this is used to deploy a specific OpenStack resource such as a virtual machine, a Nova network, a security group, etc. The list of available resource types for OpenStack version Mitaka can be found on the web page https://docs.openstack.org/heat/mitaka/template_guide/openstack.html. The available resources differ somewhat between OpenStack versions, so the correct version must be referenced when looking them up.
Resources might require properties that contain the information needed for their successful deployment. Some properties under the properties section are mandatory, while others are optional. The properties for a resource are
described under its type. Example 1 deploys a stack containing a single VM
with hard-coded property values. The resource is identified as “my_instance”
and is of type “OS::Nova::Server”. Its properties describe what image and
flavor will be used in the VM deployment, what security key will be provided
to the OS and to what neutron network the vNIC of the VM will be connected.
All the input resources used as properties need to be defined beforehand or
the deployment of the stack will not be successful. Example 1 is not meant to
be deployed, although it would deploy successfully. We will go over deploying
a template after introducing Example 2.
### 6.2 Example 2
###### Example 2
heat_template_version: 2013-05-23
description: Simple template to deploy a single compute instance
parameters:
  image:
    type: string
    label: Image name or ID
    description: Image to be used for compute instance
    default: ubuntu_cloud14
  flavor:
    type: string
    label: Flavor
    description: Type of instance (flavor) to be used
    default: m1.small
  key:
    type: string
    label: Key name
    description: Name of key-pair to be used for compute instance
    default: my_key1
  private_network:
    type: string
    label: Private network name or ID
    description: Network to attach instance to.
    default: my_net1
resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
      key_name: { get_param: key }
      networks:
        - network: { get_param: private_network }
outputs:
  instance_ip:
    description: IP address of the instance
    value: { get_attr: [my_instance, first_address] }
To allow for the deployment of multiple stacks using the same template, input
is needed. The optional parameters section is used to allow input. Unlike
resources that represent an OpenStack resource entity, like a VM or a network,
parameters represent certain values that are passed to the stack on
deployment. Specific parameters are named, similar to specific resources, and
are described by attributes. The type attribute is the only mandatory
attribute and it defines the type of the value that the parameter represents.
The label and description attributes are the human-readable parameter name and description, and the default attribute defines the value that the parameter takes if no other value is given. There are more optional attributes that are not covered in this example.
The resource property uses an input parameter with the syntax
"<property name>: { get_param: <parameter name> }".
Upon deployment the resource property will assume the value of the specified
parameter. This allows the user to deploy a HOT multiple times with different
input parameters and create unique stacks. The stacks may share the same
blueprint but are separate entities with potentially different
functionalities. The outputs section allows for specifying output parameters
available to users once the template has been deployed. We will see its use in
later examples. Here we use it to output the IP of the VM we created as the
parameter instance_ip. The resource attribute value is retrieved with the
syntax
"{ get_attr: [<resource name>, <attribute name>] }".
This is used to retrieve resource attributes generated during deployment that
can be used as outputs of the stack or as inputs for other resources. Example
2 deploys a stack similar to Example 1 but, unlike Example 1, it can be passed
different values for its deployment. If no new values are given, the specified
default values will be used and the stacks from Example 1 and Example 2 will
functionally be the same. They will still be separate entities, as different UUIDs (Universally Unique Identifiers) will be generated for the created resources. By providing different input parameters, VMs with, among other things, different images can be created, yielding functionally different resources.
###### Example 3
...
resources:
  rng:
    type: OS::Heat::RandomString
    properties:
      length: 4
      sequence: digits
  inst_simple:
    type: OS::Nova::Server
    properties:
      ...
      user_data_format: RAW
      user_data: |
        #!/bin/sh
        echo "Hello, World!" >> hello.txt
  inst_advanced:
    type: OS::Nova::Server
    properties:
      ...
      user_data_format: RAW
      user_data:
        str_replace:
          params:
            __name__: { get_param: name }
            __rnum__: { get_attr: [rng, value] }
          template: |
            #!/bin/sh
            echo "Hello, my name is __name__. Here is a random number: __rnum__." >> hello.txt
To automate certain procedures, users can pass blobs of data that the VM can access through the metadata service or the config drive. VMs that employ services like cloud-init can use the data in various ways. The blob of data is defined in the resource property "user_data". If given without additional attributes, the value of user_data is passed as-is. If given the params and template attributes, each target string defined under params is replaced, within the text under template, with its defined value. Example 3 replaces the "__name__" string with the parameter name, and the "__rnum__" string with a randomly generated number.
Here we can see the use of the get_attr method, where a value of one resource is used within another resource. In this case, a resource that, when deployed, represents a randomly generated number is created. The value of that resource is then used as an input for the data blob passed to the VM.
When deployed, the Example 3 HOT will generate a random number and instantiate two VMs. If the image used to instantiate a VM includes the cloud-init service, that VM will execute the shell commands given in the user data as the root user. The inst_simple VM will generate a hello.txt file in the / directory containing the "Hello, World!" string. The inst_advanced VM creates the same file, with the difference that the string within it contains the parameter name given as a HOT input and a randomly generated number.
### 6.3 Example 4
HOT allows for the use of nested templates. This is done by defining the resource type as a path to a different HOT file. It can be given as a path in the local environment from where the heat command is issued, or as an http/https link to a .yaml page accessible online containing the relevant HOT. When a nested HOT resource is defined, the input parameters are passed to that HOT through the resource properties. The output parameters of the nested HOT are accessible as resource attributes in the parent HOT.
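A sketch of a parent template using nested HOT files (the file names and the instance_ip attribute are our own illustration; instance_ip mirrors the output parameter defined in Example 2):

```yaml
resources:
  db_server:
    type: mysql_server.yaml        # local path; an http/https URL also works
    properties:                    # passed as input parameters to the nested HOT
      flavor: m1.small
  web_server:
    type: wordpress_server.yaml
    properties:
      db_host: { get_attr: [db_server, instance_ip] }  # output of the nested HOT
```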
When executing more complicated deployments with custom code given as user data, Heat cannot natively know whether the given code has executed correctly. The VM is deployed, and Heat continues deploying other resources. Whether or not the code in the user data was successfully executed, or how long it took, is not taken into account. If other resources depend on the successful execution of the user-data code, a waiting mechanism needs to be implemented.
Heat provides two resource types for the waiting mechanism: OS::Heat::WaitCondition and OS::Heat::WaitConditionHandle. The OS::Heat::WaitCondition resource defines the waiting conditions. Its timeout property defines how long the execution will wait for the HOT to complete before it is declared a failed execution. The count property defines how many confirmation signals are expected before the execution is considered successful. The handle property needs a link to the OS::Heat::WaitConditionHandle resource; that link is given with the get_resource method.
The OS::Heat::WaitConditionHandle resource is used to register the confirmation signals sent from the execution. It does this by defining an address which, when curled with the appropriate information, registers a confirmation signal. This curl command is inserted into the user-data code at the point where we want the confirmation signal to be sent. There can be multiple signals sent, each of which counts towards satisfying the count condition of the OS::Heat::WaitCondition resource.
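A minimal sketch of the waiting mechanism (the resource names are our own; the MySQL installation steps are elided):

```yaml
resources:
  wait_handle:
    type: OS::Heat::WaitConditionHandle
  wait_condition:
    type: OS::Heat::WaitCondition
    properties:
      handle: { get_resource: wait_handle }
      count: 1                  # one confirmation signal is expected
      timeout: 600              # seconds to wait before declaring failure
  mysql_server:
    type: OS::Nova::Server
    properties:
      # ... image, flavor, networks ...
      user_data_format: RAW
      user_data:
        str_replace:
          params:
            wc_notify: { get_attr: [wait_handle, curl_cli] }
          template: |
            #!/bin/sh
            # ... install and configure MySQL ...
            wc_notify --data-binary '{"status": "SUCCESS"}'
```

Resources that depend on wait_condition (e.g. the Wordpress server) are not created until the signal arrives or the timeout expires.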
Example 4 depicts a HOT which deploys two interdependent VMs. The first VM is a MySQL server; it is automatically configured during its initialization and is fully functional when deployed. The second VM is a Wordpress server that uses the MySQL database as its backend. As the Wordpress server requires the MySQL database to be accessible during its initialization, the MySQL server employs the waiting service. The Wordpress VM initialization is therefore not started before the MySQL resource is deployed, as it requires some of the MySQL stack's output attributes as its input parameters.
Each VM is started within a standalone HOT file, and both files are used as nested templates within the Example 4 script.
###### Stack deployment
A resource or group of resources created during a HOT deployment is referred to as a stack. Here we describe how to deploy a stack using the Example 2 template. The deployment can be done via the Horizon web GUI or over the command-line interface (CLI). The command below, executed in the CLI, deploys the template from Example 2; the template is fetched from GitHub, and the input parameters are passed as key=value pairs under the --parameters argument:
heat stack-create -u <template URL> --parameters "image=ubuntu_cloud14;key=my_key1" <stack name>
To list all deployed Heat stacks, the stack-list command can be used as shown below.
heat stack-list
Once we know the UUID of a specific stack, we can see its status and details with the stack-show command as shown below.
heat stack-show <stack UUID>
## 7 Use Case: Virtualisation of Mobile Switching Centre
There are huge industry efforts to virtualise network functions that were developed in a closed industry-product fashion. Some of the network products are older than forty years and are still active nodes within the current telecommunication network. One example is the Ericsson Mobile Switching Centre (MSC) node that was used as an example in Part I of this lecture series.
The Mobile Switching Centre implements communication switching functions, such as call set-up, release, and routing. It also performs other duties, including routing SMS messages, conference calls, fax, and service billing, as well as interfacing with other networks, such as the public switched telephone network (PSTN). This network function was actively developed during the 2G/3G network generations. More information about this switching function can be found on the 3GPP standards website (www.3gpp.org).
This product has a large installed base and is still used extensively in many operator networks. It is estimated that operators will use 2G/3G networks as fallbacks for a long time to come, so it was decided to virtualise the MSC to make it more compatible with modern environments.
Numerous benefits of virtualising this function have been identified. For instance, the virtual appliance of the MSC function can be deployed and redeployed faster, and thus sold more quickly, as only software is needed for deployment. Both the product and the environment are scalable. Capacity increase is very simple: the capacity of the product is increased by allocating more resources to the VMs or by deploying additional VMs, while a capacity increase of the infrastructure itself only requires adding more servers to the data centre. From this it may be concluded that virtualisation enables multiple products to run on the same data centre, allowing the operator more freedom in resource management. In other words, the same data centre can be used for multiple products, network functions and other virtualised instances, eliminating the need for hardware dedicated to each application domain.
Despite the numerous benefits that virtualisation of the MSC network function may bring, there are also numerous potential problems that may arise along the way. In the case of the Ericsson MSC, the product has been developed in an evolutionary fashion for more than forty years and, as such, has grown in complexity. The product has numerous functions that enable its long life, but these functions were implemented in a way that relies heavily on hardware, in order to satisfy very high reliability and safety requirements. To achieve hardware-independent behaviour, the product has to be redesigned. Since the product is very complex because of the number of functions implemented, this would require a great deal of expertise and cost.
Another very important aspect to understand is that the mobile switching function, which serves real-time services such as telephone calls, has very high reliability requirements, usually higher than is the case for the standard resources that are being virtualised. Securing reliable operation of such a virtualised MSC requires an additional layer that enforces this requirement. Therefore, Ericsson started developing a new project, its own proprietary network function virtualisation infrastructure called the Ericsson Cloud Execution Environment (CEE) [5]. The product is developed by taking OpenStack as a base, with proprietary solutions incorporated to increase the service reliability of the virtualised products run on it. In the Ericsson MSC, not only was the software switching function tied to hardware, but this special-purpose hardware was also implemented with the special requirement of being reliable; its reliability is much higher than is the case for standard equipment. An additional solution is therefore to create a dedicated data centre for virtual network function purposes with high performance demands. There are other ongoing open-source initiatives to produce a highly available OpenStack solution, such as OPNFV Doctor, OpenStack Freezer and OpenStack Masakari. All of these solutions work on monitor, detect and correct approaches. However, the implementation of the above-stated design principles has yet to be devised and deployed within these solutions.
## 8 Discussion and Conclusion
From the very beginning, the telecommunication network has been built with the main aim of separating the management of network cables and switches into a distinct business which provides connection services to its users. At its core, the switching board and network cables implement the multiplexing idea. With the help of the switching board, the same user can be involved in a number of connections (i.e., calls from the subscriber's perspective or processes from the processor's perspective) on the time-sharing principle. This multiplexing principle has been widely applied to every resource consumed in the network. During the network evolution, calls/processes have been multiplexed over each network cable, over the processors in switching nodes, over the memory in shared-memory devices, etc. In the ongoing evolution step, processes are multiplexed over shared network resources (not node resources), and even network functions are considered network resources that users share on the time-sharing principle.
The above-mentioned multiplexing, or time sharing, of common resources and their provision as a service is implemented by adding new abstraction layers and a new virtualisation layer, which introduce the need for new management functions securing safe and reliable switching of users onto these common resources.
The speciality of switching, or multiplexing, functions lies in their high requirements on fast and reliable management. Since common resources are shared among users on the time-sharing principle, every lost time slot directly causes inefficiency and loss of money. On the other hand, the services provided to each user must be safe and reliable, so that the user does not sense other users using the same shared resource.
In all these evolutionary steps, specific switching programming languages were used. The essence of functional programming is the ability to have functions that, for a given input, always generate the same output. Thus, these functions can easily be formally verified using mathematical logic. This is especially important in complex systems that require high safety and reliable operation. Although in complex time-sharing systems it may be difficult to achieve purely functional programs, any good programmer should strive to make these programs as functional as possible.
In the telecom world, there are plenty of programming languages present in the switching domain. Over time these languages have evolved, such that functional programming languages, like Erlang, have taken dominance in this area. From the system verification point of view, testers are used to working on a sequence of execution statements so that they can easily follow the program execution. In a purely functional world, however, failures would be minimised by proper formal methods; hence, in the fault-mining process, travelling across huge state machines would be avoided. Therefore, in principle, the more functional our code is, the less verification effort is needed.
As we have seen, complex software tends to become even more complex. Many software products started without the functional programming paradigm and have become so complex that it would be too expensive, and almost impossible, to redesign them in a functional programming fashion. However, new developments, especially those in which new abstractions are added and old source code is easily separated from the new code, should aim to move as much as possible towards the functional paradigm. As we can see, evolution is just the addition of new abstractions and new management functions responsible for managing these virtual resources, and the implementation of these abstractions would be easier with purely functional code.
In these Part II lectures, as well as in Part I, we went through the network evolution from the design-principles and technology perspective. In Part I we introduced the main definition of a complex system and discussed the challenges of managing such systems. We introduced generic design principles for structuring software systems, such as modularity, abstraction, layering and hierarchy, in order to make them easier to manage. Furthermore, we introduced service orientation and virtualisation technologies that are used as tools for implementing these principles. At the end of Part I, we discussed a case study reporting experiences in redesigning an existing complex software product along these design principles.
In this Part II, as a continuation of the previous lecture, we introduced the new evolutionary changes that are currently being implemented within networks: Network Function Virtualisation and Software Defined Networks. These two concepts can be viewed simply as adding new virtualisation layers on network resources (hardware and software functions) and introducing more service orientation and computation for each of the above-mentioned network resources. Therefore, in addition to the design principles stated in the Part I lectures, which relate to structuring complex software, in Part II we introduced design principles for implementing autonomic network behaviour. To familiarise students with these new technological changes, we provided examples implementing simple network applications over the OpenStack platform while observing the aforementioned design principles for autonomic behaviour. Furthermore, we discussed an example of implementing a complex software product as a network application capable of running over the OpenStack platform, along with the benefits and problems that may arise in doing so. Finally, we concluded with reflections on the role of functional programming in such complex networked environments.
## References
* [1] Agoulmine, N.: Autonomic Network Management Principles: From Concepts to Applications. Academic Press, Inc., USA, 1st edn. (2016)
* [2] Barabási, A.L.: Network science. Cambridge University Press, 1st edn. (2016)
* [3] Beyer, B., Jones, C., Petoff, J., Murphy, N.R.: Site Reliability Engineering: How Google Runs Production Systems. O’Reilly Media, Inc., 1st edn. (2016)
* [4] Denning, P.J.: Software quality. Commun. ACM 59(9), 23–25 (2016). https://doi.org/10.1145/2971327
* [5] Ericsson CM–HA. Ericsson (2020), http://cqr.committees.comsoc.org/files/2017/03/04-Kelly_Krick.pdf, accessed Nov 11, 2020
* [6] ETSI Industry Specification Group (ISG) NFV: ETSI GS NFV-MAN 001 v1.1.1: Network Functions Virtualisation (NFV); Management and Orchestration. European Telecommunications Standards Institute (ETSI) (2014), https://www.etsi.org/deliver/etsi_gs/NFV-MAN/001_099/001/01.01.01_60/gs_NFV-MAN001v010101p.pdf, accessed July 1, 2018
* [7] Ganchev, I., van der Mei, R.D., van den Berg, H. (eds.): State of the Art and Research Challenges in the Area of Autonomous Control for a Reliable Internet of Services, pp. 1–22. Springer International Publishing, Cham (2018). https://doi.org/10.1007/978-3-319-90415-3_1
* [8] Han, B., Gopalakrishnan, V., Ji, L., Lee, S.: Network function virtualization: Challenges and opportunities for innovations. IEEE Communications Magazine 53(2), 90–97 (2015)
* [9] Jackson, K.: OpenStack Cloud Computing Cookbook. Packt Publishing (2012)
* [10] Mangey Ram, J.P.D. (ed.): Tools and Techniques in Software Reliability Modeling, pp. 281–295. Academic Press (2019)
* [11] Open Networking Foundation. Open Networking Foundation (2018), https://opennetworking.org/, accessed July 1, 2018
* [12] OpenStack Cloud Software. OpenStack Foundation (2018), www.openstack.org, accessed July 1, 2018
* [13] Radez, D.: OpenStack Essentials. Packt Publishing (2015)
* [14] Sloss, B.T., Nukala, S., Rau, V.: Metrics that matter. Commun. ACM 62(4), 88 (2019). https://doi.org/10.1145/3303874
* [15] Sterritt, R., Bustard, D.: Autonomic computing - a means of achieving dependability? In: 10th IEEE International Conference and Workshop on the Engineering of Computer-Based Systems, 2003. Proceedings. pp. 247–251 (2003)
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Tihana Galinac Grbac and Nikola Domazet",
"submitter": "Tihana Galinac Grbac",
"url": "https://arxiv.org/abs/2107.12227"
}
2107.12229
# Precision cosmology and the stiff-amplified gravitational-wave background
from inflation: NANOGrav, Advanced LIGO-Virgo and the Hubble tension
Bohua Li (corresponding author) and Paul R. Shapiro
###### Abstract
The recent NANOGrav finding of a common-spectrum process has invited
interpretations as possible evidence of a primordial stochastic gravitational-
wave background (SGWB) stronger than predicted by standard inflation +
$\Lambda$CDM. Such an SGWB would contribute an extra radiation component to
the background Universe which may affect its expansion history. As such, it
may help alleviate the current Hubble tension, a novel connection between
gravitational waves and cosmology. We demonstrate this by considering a
cosmological model, the “standard inflation + stiff amplification” scenario,
with two components added to the base-$\Lambda$CDM model: a stiff component
($w\equiv p/\rho=1$) and the primordial SGWB. Previously, we showed that even
for _standard_ inflation, the SGWB may be detectable at the high frequencies
probed by laser interferometers, if it is amplified by a possible early stiff
era after reheating. Models that boost the SGWB enough to cause significant
_backreaction_, however, must still preserve the well-measured radiation-
matter equality, respecting the demands of precision cosmology. For that, we
calculate the fully-coupled evolution of the SGWB and expansion history,
sampling parameter space (tensor-to-scalar ratio, reheating temperature and
temperature at stiff-to-radiation equality). We then perform a joint analysis
of the NANOGrav results and latest upper bounds from _Planck_, big bang
nucleosynthesis and Advanced LIGO-Virgo, to constrain the model. The resulting
blue-tilted, stiff-amplified SGWB is still too small to explain the NANOGrav
results. However, if someday, Advanced LIGO-Virgo detects the SGWB, our model
can explain it within standard inflation (_without_ requiring an initial
spectral tilt). Meanwhile, this model may bring current high-$z$ measurements
of the Hubble constant within $3.4\sigma$ of the low-$z$ measurements by SH0ES
(from $4.4\sigma$) and within $2.6\sigma$ of those by H0LiCOW (from
$3.1\sigma$), reducing the tension.
## 1 Introduction
A stochastic gravitational-wave background (SGWB) from primordial tensor
fluctuations is generically produced in the inflationary paradigm [1, 2, 3].
Once deemed too small to be detected, this primordial SGWB is now possibly
within reach of various experiments, from the cosmic microwave background
(CMB) to gravitational-wave (GW) interferometers, over a wide range of
frequencies [4, 5]. It may even contribute significantly enough to the energy
content of the Universe as to affect the expansion history, with possible
observable consequences beyond its direct detection [e.g., 6, 7]. These direct
and indirect probes of the primordial SGWB can, therefore, potentially reveal
important information about inflation and other physical processes in the
early Universe, which are otherwise poorly understood [e.g., 8, 9].
In fact, even before inflation was proposed, Grishchuk realized that in an
expanding Universe, significant _parametric amplification_ can occur, not only
for classical gravitational waves (GWs), but even for quantum fluctuations of
the vacuum [10, 11]. It requires that (1) modes spend time outside the Hubble
radius (i.e., the background Universe expands more rapidly than GWs vary in
time), when (2) the Universe is not radiation-dominated (RD). When both
conditions are met, GWs, or tensor fluctuations, will be amplified relative to
the “adiabatic” solution (for which $h\propto 1/a$ for modes always well-
inside the Hubble radius).
The inflationary paradigm [12, 13, 14] naturally provides such a period that
enables parametric amplification and production of macroscopic GWs from
initial vacuum fluctuations. When tensor modes are stretched well outside the
Hubble radius, their amplitudes become time-independent, or “frozen” [1, 15].
These amplitudes define the primordial tensor power spectrum. For standard
single-field, slow-roll inflation, their distribution is nearly Gaussian, with
a nearly scale-invariant power spectrum that satisfies the consistency
relation [16, 17]. After inflation ends, tensor modes start to reenter the
Hubble radius and each, thereafter, evolves according to the adiabatic
solution, redshifting like radiation. Together, all modes that reentered and
remain inside the Hubble radius constitute the primordial SGWB.
The parametric amplification regime for a given mode spans its Hubble exit and
reentry [18]. (In this paper, “Hubble exit/reentry” refers to the times at
which a mode exits/reenters the Hubble volume, when its wavelength passes
above or below the Hubble radius, respectively.) While all modes of interest
exit during inflation, different modes can reenter during post-inflationary
eras with different equations of state (EoS). This actually leads to another
kind of amplification/attenuation of the primordial SGWB, as we describe
below, in terms of the departure of the amplitudes of modes at a given time
after their reentries, _relative_ to those if the EoS of the Universe during
their reentries were radiation-like ($w\equiv\bar{p}/\bar{\rho}=1/3$).
Our observed Universe must undergo a standard RD era which begins no later
than big bang nucleosynthesis (BBN) and ends at radiation-matter equality. For
nearly scale-invariant initial conditions, the contribution to the present-day
SGWB energy spectrum, $\Omega_{\rm GW}(f)\equiv\mathrm{d}\,\Omega_{\rm
GW}/\mathrm{d}\ln f$, by modes that reentered during this RD era is nearly
frequency-independent. This results in a spectrum with a long “plateau” [19].
Henceforth, we shall use the term _amplification_ to refer,
not to the parametric amplification effect described above, but rather to the
amplification of a mode at a given time _after_ it reenters, _relative to this
plateau_ associated with Hubble reentries that take place during the RD era.
For modes of longer wavelengths that reenter during the matter-dominated (MD)
era ($w=0$) which follows the RD era, $\Omega_{\rm GW}(f)$ is amplified
relative to the plateau, since the time-dependence of the Hubble parameter
then differs from that of an RD Universe [20, 21]. On the other hand, for
modes with short enough wavelength to reenter before BBN, the possibility
exists for amplification, too, since the expansion history or, equivalently,
the EoS of the Universe during this period is poorly constrained and may also
depart from $w=1/3$. In fact, for these higher-frequency modes,
Giovannini [22] considered the interesting case in which $\Omega_{\rm GW}(f)$
is amplified relative to the plateau by an early phase whose EoS is stiffer
than radiation (i.e., $w>1/3$). This possibility has subsequently been studied
by many authors [23, 24, 25, 26, 27, 28, 29, 7, 30].
In a previous paper [7, hereafter LSR17], we investigated this amplification
effect in the particular context of complex scalar field dark matter (SFDM)
made up of charged ultralight bosons [31]. If all cosmological dark matter
consists of SFDM (the $\Lambda$SFDM model), the Universe would be dominated at
early times by the stiff phase of SFDM ($w_{\rm SF}=1$), before the standard
RD era. The stiff phase of a scalar field is also known as the “kination”
phase [32], since the energy density of the SFDM is dominated by the kinetic
energy. LSR17 showed that this early stiff-SFDM-dominated era ($w=1$; “stiff
era” for short) indeed amplifies the high-frequency part of $\Omega_{\rm
GW}(f)$ relative to the plateau value. This amplified SGWB may contribute a
non-negligible radiation component to the total energy density, which boosts
the expansion rate during the RD era. Meanwhile, this same effect results in a
blue tilt in $\Omega_{\rm GW}(f)$, which may even make direct detection of the
SGWB possible at high frequencies by current laser interferometer experiments,
e.g., Advanced LIGO-Virgo [33, 34] and LISA [35]. Therefore, the stiff-era
amplification effect (henceforth, “stiff amplification”) encourages multi-
wavelength searches for the primordial SGWB using different GW probes [e.g., 36,
37]. In this paper, we again focus on the stiff-amplified primordial SGWB, in
a more general context, not limited to that involving SFDM.
Besides the CMB and laser interferometers, pulsar-timing array (PTA)
observations can probe the SGWB by searching for correlated timing deviations
in millisecond pulsars induced by the SGWB [38, 39]. Recently, the North
American Nanohertz Observatory for Gravitational Waves (NANOGrav) reported
strong evidence for a stochastic common-spectrum process in their 12.5 yr
pulsar-timing data set [40] with a high amplitude ($h_{c}\sim 10^{-15}$ at
$f_{\rm yr}=1$ yr$^{-1}$). Though it has not been confirmed as an SGWB detection
yet, many interpretations in this direction have flourished since then.
Possible SGWB sources include a cosmic population of supermassive black hole
binaries [41], cosmic strings [42, 43, 44], phase transitions [45, 46, 47,
48], the primordial SGWB with a large initial blue tilt from non-standard
inflationary scenarios (relaxing the consistency relation) [49, 50] and others
[e.g., 51, 52].
In this paper, we are, however, interested in the _secondary_ blue tilt in the
primordial SGWB produced by stiff amplification, within the _standard_
inflationary scenario which _preserves_ the consistency relation (henceforth,
the “standard inflation + stiff amplification” scenario). The case of stiff
amplification ($w=1$) is the one that maximizes the possible secondary blue
tilt that results for modes that reentered when the EoS of the Universe has
$w>1/3$. Thus, the first part of this paper is dedicated to the question of
whether stiff-amplified primordial SGWB can explain the high common-spectrum
amplitude reported by NANOGrav. To this end, we consider a cosmological model
with two components in addition to those of the base-$\Lambda$CDM model [53]: a stiff
component and the primordial SGWB. In our model, when inflation ends, there is
an extended phase of reheating with a matter-like EoS ($w=0$) [54, 55]. When
reheating ends, the Universe is assumed to be dominated by the stiff component
and remains so until the onset of the RD era. In order to constrain our model
parameters, we perform a joint analysis of the latest observational results
from the CMB, BBN, NANOGrav and Advanced LIGO-Virgo’s third observing run (O3)
[56].
Our analysis has a novel feature: we self-consistently include the
backreaction of the SGWB on the background expansion rate, as we did in LSR17.
Although noted before [e.g., 57, 22], this backreaction effect is
unfortunately often neglected when modelling the SGWB. Nevertheless, as stated
above, the stiff-amplified SGWB can contribute a non-negligible (percent
level) radiation component to the total energy density during the RD era. This
will, in turn, not only affect the evolution of tensor modes, but also other
observables, e.g., radiation-matter equality and the CMB damping tail.
Therefore, a _precise_ analysis of the primordial SGWB ought to account for
its coupling with the background expansion history.
In the meantime, the well-known Hubble tension [e.g., 58, 59] also motivates
our treatment. The present-day Hubble constant, $H_{0}$, now measured at
better than $3\%$ precision by several experiments, shows a discrepancy
$(>3\sigma)$ between its value measured by the CMB [53] and that by the
distance ladder or time delays of lensed quasars in the nearby Universe [60,
61]. With respect to the aforementioned radiation-matter equality, one way to
alleviate the Hubble tension is to exploit the $H_{0}-N_{\rm eff}$ degeneracy:
the redshift of this equality can be kept constant by increasing the value of
$H_{0}$ and the effective number of relativistic species at the same time [62,
63, 64]. Our model implements this $H_{0}-N_{\rm eff}$ degeneracy, boosting
$H_{0}$ in accordance with the additional radiation-like SGWB contribution,
while the coupled evolution of the Hubble parameter and tensor modes is
properly taken into account. Thus, the second part of this paper is dedicated
to the implication of current constraints on the primordial SGWB for the
Hubble tension. We investigate the extent to which the stiff-amplified SGWB
can bring the value of $H_{0}$ from the CMB into agreement with those from
local measurements.
The paper is organized as follows. In section 2, we demonstrate the stiff
amplification effect on the primordial SGWB and introduce our model. In
section 3, we discuss all current measurements and upper bounds on the
primordial SGWB, for each of several probes in turn. In section 4, we combine
these probes in a joint analysis and derive the constraints on the “standard
inflation + stiff amplification” scenario that result. The implication of
these results for the Hubble tension is explored in section 5. We conclude in
section 6.
## 2 Stiff amplification of the primordial SGWB
In this paper, we consider the primordial tensor perturbations with respect to
a flat FLRW background metric, so the short-wave, weak-field limit is
manifestly satisfied for the GWs described by these tensor modes (see appendix
A). We can write down the perturbed metric in the transverse-traceless gauge
(the “TT gauge”) [57, 10],
$\mathrm{d}s^{2}=c^{2}\,\mathrm{d}t^{2}-a^{2}(t)(\delta_{ij}+h_{ij})\mathrm{d}x^{i}\mathrm{d}x^{j}$,
where $\sum_{i}\partial_{i}h_{ij}=0$ and $\sum_{i}h_{ii}=0$.
In section 2.1, we review the basic equations concerning the primordial SGWB
from inflation, and its amplification by a post-inflationary stiff era. In
section 2.2, we present our cosmological model for the “standard inflation +
stiff amplification” scenario, which self-consistently includes the stiff-
amplified primordial SGWB.
### 2.1 Basic equations
Primordial tensor perturbations can be expanded in Fourier space [e.g., 65],
$\begin{split}h_{ij}(t,\vec{x})&=\sum_{\rm
P=+,\times}\int_{-\infty}^{+\infty}\mathrm{d}f\int\mathrm{d}^{2}\hat{k}\,h^{\rm
P}(t,f,\hat{k})\,e^{i\vec{k}\cdot\vec{x}}\,\epsilon^{\rm
P}_{ij}(\hat{k}),\end{split}$ (2.1)
where $f$ is the comoving frequency, $\hat{k}$ is a unit vector,
$\vec{k}\equiv 2\pi f\hat{k}/c$, and $\epsilon^{\rm P}_{ij}$ are the
polarization tensors for the $+$ and $\times$ states. In our convention,
$\sum_{i,j}\epsilon^{\rm P}_{ij}(\hat{k})\epsilon^{\rm
P^{\prime}}_{ij}(\hat{k})=2\,\delta_{\rm PP^{\prime}}$. The reality of $h_{ij}$ implies $h^{\rm P}(t,-f,\hat{k})=(h^{\rm P}(t,f,\hat{k}))^{*}$.
When a mode is well-inside the Hubble radius, $h^{\rm P}(t,f,\hat{k})\propto
e^{-2\pi if\eta}/a$, where $\eta$ is the conformal time,
$\mathrm{d}\eta\equiv\mathrm{d}t/a$. (Such a mode is then essentially a plane
wave on time scales much shorter than the Hubble time; it is said to satisfy
the “high-frequency” limit, in addition to the short-wave limit [66].)
For an isotropic, stationary and Gaussian SGWB, the most straightforward
observable is the two-point correlation function. In Fourier space, it is
defined as
$\langle(h^{\rm P}(t,f,\hat{k}))^{*}\,h^{\rm
P^{\prime}}(t,f^{\prime},\hat{k}^{\prime})\rangle\equiv\frac{1}{2}S_{h}(f)\frac{\delta_{\rm
D}(f-f^{\prime})}{2}\frac{\delta^{\mathcal{S}^{2}}_{\rm
D}(\hat{k}-\hat{k}^{\prime})}{4\pi}\delta_{\rm PP^{\prime}},$ (2.2)
where $\delta^{\mathcal{S}^{2}}_{\rm D}$ is the Dirac function on the two-
sphere and $S_{h}(f)$ is the one-sided power spectral density of the SGWB
[67]. $S_{h}(f)$ is related to the characteristic amplitude/strain of the
SGWB, $h_{c}(f)$, and the (dimensionless) tensor power spectrum,
$\Delta^{2}_{h}(f)$, by $fS_{h}(f)=h_{c}^{2}(f)=\Delta^{2}_{h}(f)/2$ [e.g.,
27].
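These relations, combined with eq. (2.4) evaluated today ($a=1$, $H=H_{0}$) and $\Delta^{2}_{h}=2h_{c}^{2}$, give the standard conversion $\Omega_{\rm GW}(f)=2\pi^{2}f^{2}h_{c}^{2}(f)/(3H_{0}^{2})$. A minimal Python sketch of this conversion follows; the input amplitude $h_{c}\sim 10^{-15}$ at $f_{\rm yr}$ is only the illustrative NANOGrav-scale value quoted in the introduction:

```python
import math

# Relations from the text: f*S_h(f) = h_c(f)^2 = Delta_h^2(f)/2, and
# (from eq. [2.4] at a = 1, H = H0) Omega_GW = 2*pi^2 * f^2 * h_c^2 / (3*H0^2).

H0 = 67.66 * 1.0e3 / 3.0857e22   # Planck 2018 H0 adopted in the text, in s^-1
f_yr = 1.0 / (365.25 * 86400.0)  # PTA reference frequency, 1 yr^-1 in Hz

def omega_gw_from_strain(h_c, f, H0=H0):
    """Present-day GW energy fraction per log frequency from the strain h_c(f)."""
    return 2.0 * math.pi**2 * f**2 * h_c**2 / (3.0 * H0**2)

# NANOGrav-scale amplitude quoted in the introduction: h_c ~ 1e-15 at f_yr
print(omega_gw_from_strain(1.0e-15, f_yr))  # of order 1e-9
```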
The primordial SGWB is characterized by its power spectrum at an initial time,
$\Delta^{2}_{h,\rm i}(f)\equiv\Delta^{2}_{h}(t_{\rm i},f)$, and the tensor
transfer function, $T_{h}(t,f)\equiv h^{\rm P}(t,f,\hat{k})/h^{\rm P}(t_{\rm
i},f,\hat{k})$. Standard single-field, slow-roll inflation predicts a nearly
scale-invariant initial power spectrum for tensor modes, $\Delta^{2}_{h,\rm
i}(f)=A_{\rm t}(f/f_{*})^{n_{\rm t}}$. Here $A_{\rm t}$ is the tensor
amplitude, $r\equiv A_{\rm t}/A_{\rm s}$ defines the tensor-to-scalar ratio,
and the tensor spectral index satisfies the consistency relation, $n_{\rm
t}=-r/8$. Following the convention of _Planck_ , the pivot scale is chosen as
$k_{*}=2\pi f_{*}/c=0.05$ Mpc$^{-1}$ [68]. As for the transfer function, its
evolution follows from the wave equation (A.2),
$\ddot{T}_{h}+\frac{3\dot{a}}{a}\dot{T}_{h}+\left(\frac{2\pi
f}{a}\right)^{2}T_{h}=0,$ (2.3)
where the overdot denotes the derivative with respect to the cosmic time, $t$,
and we have omitted the arguments $(t,f)$ for brevity. For any mode that has
undergone inflation, its amplitude is frozen while it is well-outside the
Hubble radius, so that $\Delta^{2}_{h}(t,f)\simeq\Delta^{2}_{h,\rm i}(f)$,
$T_{h}\simeq 1$ and $\dot{T}_{h}\simeq 0$. After Hubble reentry, the transfer
function for all modes asymptotically evolves as $T_{h}\propto 1/a$ (the
adiabatic solution). However, their relative amplitudes (frequency dependence)
at a given time are subject to the EoS of the Universe at the reentry of each
mode, allowing for _stiff amplification_. We will recap this effect in what
follows.
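To see this freeze-out and subsequent adiabatic decay explicitly, eq. (2.3) can be integrated numerically for a single mode. The sketch below (a toy check, not the paper's solver) works in conformal time in a purely radiation-dominated background, where $a\propto\eta$ and the exact solution with frozen super-Hubble initial conditions is $T_{h}=\sin(k\eta)/(k\eta)$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Tensor wave equation (2.3) rewritten in conformal time eta (d eta = dt/a):
#   T'' + 2*(a'/a)*T' + k^2 * T = 0,  with k = 2*pi*f/c the comoving wavenumber.
# In a radiation-dominated background a ∝ eta, so a'/a = 1/eta and the exact
# solution with frozen super-Hubble ICs is T = sin(k*eta)/(k*eta).

k = 1.0  # comoving wavenumber, arbitrary units

def rhs(eta, y):
    T, dT = y
    return [dT, -2.0 * dT / eta - k**2 * T]

eta = np.linspace(1e-3, 20.0, 2000)   # from far outside to well inside horizon
sol = solve_ivp(rhs, (eta[0], eta[-1]), [1.0, 0.0],
                t_eval=eta, rtol=1e-10, atol=1e-12)

exact = np.sin(k * eta) / (k * eta)
err = np.max(np.abs(sol.y[0] - exact))
print(err)  # tiny: the mode stays frozen (T ~ 1) while super-Hubble, then
            # oscillates with an envelope decaying as 1/a after reentry
```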
For a given mode of frequency $f$, its sub-Hubble solution for $\Omega_{\rm
GW}(f)$ can be approximated as (using eq. [A.6])
$\Omega_{\rm GW}(a,f)\simeq\frac{(2\pi f)^{2}\,\Delta^{2}_{h,\rm
i}(f)\,T_{h}^{2}}{12\,a^{2}H^{2}}\propto\Delta^{2}_{h,\rm
i}(f)\left(\frac{2\pi f}{aH}\right)^{2}\left(\frac{a_{f}}{a}\right)^{2},$
(2.4)
where $H\equiv\dot{a}/a$ is the Hubble parameter and $a_{f}\,H(a_{f})\equiv
2\pi f$ defines the scale factor at its Hubble reentry (cf. eq. [58] in
LSR17). As mentioned in the introduction, modes that reentered during the RD
era correspond to a plateau in $\Omega_{\rm GW}(f)$ for standard inflation.
By contrast, an early stiff era gives rise to a blue-tilted spectral shape
in $\Omega_{\rm GW}(f)$. Such a stiff era has been proposed in a variety of
physical scenarios [e.g., 69, 70], many of which involve a scalar field dominated by
its kinetic energy [71, 72, 32, 73, 23, 74]. In LSR17, the stiff era is due to
the stiff phase of SFDM, interposed between reheating and the RD era. We
illustrate this stiff phase by an example $\Lambda$SFDM universe in appendix
B. For a mode that reentered during the stiff era, we showed in section
III.B.3 of LSR17 that its Hubble reentry happens later than it would if the
Universe were RD all the time. In other words, $a_{f,\rm stiff}>a_{f,\rm rad}$
for the value of $a_{f}$ appearing in eq. (2.4). Therefore, eq. (2.4) shows
that for such a mode, the value of $\Omega_{\rm GW}(a,f)$ in the sub-Hubble
limit ($a\gg a_{f}$) is greater than it would be if the Hubble reentry
happened in the RD era (i.e., the plateau value). In this way, the primordial
SGWB is amplified for modes that reenter during the stiff era, relative to the
plateau.
### 2.2 Stiff-amplified SGWB: self-consistent model for precision cosmology
Stiff amplification causes a secondary blue tilt in $\Omega_{\rm GW}(a,f)$
evaluated at late times when all modes of interest are in the sub-Hubble
limit. Whereas any pre-RD era with an EoS stiffer than radiation would
generically lead to a blue tilt (whose spectral index depends on the EoS), we
will only consider a stiff era ($w=1$) in this paper, in order to maximize the
amplification. Then, for modes that reentered during the stiff era,
$\Omega_{\rm GW}(f)\propto f$, $h_{c}(f)\propto f^{-1/2}$ (see eq. [A.7]). On
the other hand, an extended period of reheating, with a matter-like EoS
$(w=0)$, precedes the stiff era, as mentioned above. For modes that reentered
during reheating, $\Omega_{\rm GW}(f)\propto f^{-2}$. Therefore, in the
“standard inflation + stiff amplification” scenario, the combined effect of
reheating and the stiff era introduces an excess in the spectrum of
$\Omega_{\rm GW}(f)$ relative to the plateau associated with the standard RD
era, which appears as a triangle (in logarithmic scales; see LSR17). This
triangle peaks at $f_{\rm re}$, which corresponds to the mode that reentered
at the end of reheating, characterized by $T_{\rm re}$, the reheating
temperature.
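The slopes quoted here are special cases of a standard result: for modes that reenter while the background EoS is $w$, $\Omega_{\rm GW}(f)\propto f^{2(3w-1)/(3w+1)}$ (this formula is not stated in the text; it is the general expression consistent with the three regimes above). A quick sanity check:

```python
from fractions import Fraction

# Spectral index of Omega_GW(f) for modes that reentered the Hubble radius
# while the background EoS was w: n = 2*(3w - 1)/(3w + 1).  This general
# formula is an assumption here, checked against the three cases in the text.

def omega_gw_tilt(w):
    w = Fraction(w)
    return 2 * (3 * w - 1) / (3 * w + 1)

print(omega_gw_tilt(1))              # stiff era:  Omega_GW ∝ f^(+1)
print(omega_gw_tilt(0))              # reheating:  Omega_GW ∝ f^(-2)
print(omega_gw_tilt(Fraction(1, 3))) # RD plateau: Omega_GW ∝ f^0
```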
To account for stiff amplification, we consider a cosmological model which
contains a stiff component ($w_{\rm s}=1$) and the primordial SGWB, in
addition to all the base-$\Lambda$CDM components. (In this paper, we assume
that neutrinos are _massless_, so our base-$\Lambda$CDM model is slightly
simpler than that adopted by _Planck_; on the other hand, our model does
account for the thermal history of the early Universe, e.g., neutrino
decoupling and electron-positron annihilation.) When reheating ends,
virtually all of the energy density is assumed to go into the stiff component.
Thereafter, the energy density of the stiff component evolves as $\rho_{\rm
s}\propto a^{-6}$ and dominates the total energy density of the early Universe
between the end of reheating and the end of the stiff era. The latter endpoint
is defined as the moment of equality between the energy density of the stiff
component and that of the radiation components, parameterized by the
temperature at this equality, $T_{\rm sr}$. Therefore, apart from the
base-$\Lambda$CDM parameters, our model has three parameters: $r$, $T_{\rm
re}$ and $T_{\rm sr}$. As we shall describe below, the model requires us to
solve a set of coupled, integro-differential equations for each set of model
parameters.
To solve for tensor transfer functions, we apply the dynamical system
approach. For a given mode with comoving frequency $f$, the following
dynamical variables can be defined:
$\zeta_{f}\equiv\ln\frac{2\pi f}{aH},\quad
x_{f}\equiv\frac{\dot{T}_{h}}{H},\quad y_{f}\equiv\frac{2\pi f}{aH}\,T_{h}.$
(2.5)
By construction, $T_{h}=y_{f}/e^{\zeta_{f}}$. The wave equation (2.3) can then be
rearranged into the following dynamical system:
$\displaystyle\zeta_{f}^{\prime}$ $\displaystyle=\frac{3}{2}\sigma-1,$ (2.6a)
$\displaystyle x_{f}^{\prime}$
$\displaystyle=-3x_{f}+\frac{3}{2}\sigma\,x_{f}-e^{\zeta_{f}}y_{f},$ (2.6b)
$\displaystyle y_{f}^{\prime}$
$\displaystyle=-y_{f}+\frac{3}{2}\sigma\,y_{f}+e^{\zeta_{f}}x_{f},$ (2.6c)
where the prime denotes the derivative with respect to the number of
$e$-foldings, $N\equiv\ln{a}$ ($\mathrm{d}N=H\,\mathrm{d}t$), and
$\sigma\equiv-\frac{2\dot{H}}{3H^{2}}=\left(\frac{\rho+p}{\rho}\right)_{\rm
tot}=\frac{\sum_{i}(\rho_{i}+p_{i})}{\rho_{\rm tot}}=\Omega_{\rm
m}+\frac{4}{3}\,\Omega_{\rm r}+2\,\Omega_{\rm s}+\Omega_{\rm GW}+\Pi_{\rm
GW},$ (2.7)
where $\Omega_{\rm GW}$ and $\Pi_{\rm GW}$ are defined in eqs. (A.4) and
(A.5), and $\Omega_{\rm m}$, $\Omega_{\rm r}$ and $\Omega_{\rm s}$ are the
energy fractions of matter (CDM+baryons), radiation (photons+massless
neutrinos) and the stiff component, respectively. Note that $\sigma$ is
related to the EoS of the Universe by $\sigma=1+w$. Therefore, the evolution
of each tensor mode is coupled to the expansion history of the background
Universe via $\sigma$.
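As an illustration (a toy limit, not the paper's full coupled solver), the system (2.6) can be integrated for a single mode with $\sigma$ held fixed at its RD value, $\sigma=4/3$, and the backreaction in eq. (2.7) neglected. In that limit $e^{\zeta_{f}}=k\eta$ (units with $c=1$), so the recovered $T_{h}=y_{f}/e^{\zeta_{f}}$ should match the exact RD solution $\sin(k\eta)/(k\eta)$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dynamical system (2.6) for one tensor mode, with the background EoS held
# fixed at radiation domination (sigma = 1 + w = 4/3) and backreaction on
# sigma neglected -- a toy version of the full coupled problem in the paper.

sigma = 4.0 / 3.0

def rhs(N, u):
    zeta, x, y = u
    ez = np.exp(zeta)
    return [1.5 * sigma - 1.0,                      # eq. (2.6a)
            -3.0 * x + 1.5 * sigma * x - ez * y,    # eq. (2.6b)
            -y + 1.5 * sigma * y + ez * x]          # eq. (2.6c)

zeta0 = np.log(1e-3)              # mode starts far outside the Hubble radius
u0 = [zeta0, 0.0, np.exp(zeta0)]  # frozen ICs: T_h = 1, dT_h/dt = 0
N_end = np.log(20.0) - zeta0      # evolve until 2*pi*f/(aH) = 20
sol = solve_ivp(rhs, (0.0, N_end), u0, rtol=1e-10, atol=1e-13)

zeta, x, y = sol.y[:, -1]
T_h = y / np.exp(zeta)
# In exact RD, exp(zeta) = k*eta, so T_h should equal sin(k*eta)/(k*eta):
print(T_h, np.sin(np.exp(zeta)) / np.exp(zeta))  # the two agree closely
```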
| Model | I | II | III | IV | V | VI ($\Lambda$CDM) | VII ($\Lambda$CDM) |
|---|---|---|---|---|---|---|---|
| $r$ | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 |
| $T_{\rm re}$ (GeV) | 400 | $10^{5}$ | $2.5\times 10^{5}$ | $10^{7}$ | $10^{7}$ | $10^{7}$ | $10^{12}$ |
| $T_{\rm sr}$ (GeV) | $9\times 10^{-3}$ | 2.2 | 2.2 | 88 | $10^{4}$ | N/A | N/A |
| $\Delta N_{\rm eff,\,BBN}$ | 0.44 | 0.06 | 0.37 | 0.37 | $<10^{-4}$ | 0 | 0 |
| $\Delta N_{\rm eff,\,late}$ | 0.06 | 0.06 | 0.37 | 0.37 | $<10^{-4}$ | 0 | 0 |
| $\log_{10}\,h_{c}(f_{\rm yr})$ | $-17.14$ | $-18.22$ | $-18.22$ | $-18.33$ | $-18.34$ | $-18.34$ | $-18.34$ |
| $\log_{10}\,\Omega_{\rm ref,LIGO}$ | $-10.32$ | $-6.70$ | $-6.67$ | $-8.30$ | $-10.33$ | $-19.99$ | $-15.60$ |
Table 1: Example models with different model input parameters $(r,T_{\rm
re},T_{\rm sr})$. For each model, values of the observable quantities,
$\left(\Delta N_{\rm eff,\,BBN},\,\Delta N_{\rm eff,\,late},\,h_{c}(f_{\rm
yr}),\,\Omega_{\rm ref,LIGO}\right)$, derived from the numerical solutions to
the dynamical system described in the text for those parameters, are listed
here as well. These observables will be discussed in section 3.

Figure 1:
_Left panel_: Evolution of the energy density, $\rho_{i}$, of each component
in our self-consistent model ($i=$ stiff, radiation+SGWB, matter, or
$\Lambda$), for Model I in table 1. Vertical dashed lines indicate the scale
factors of stiff-to-radiation and radiation-to-matter equalities,
respectively. The grey band indicates the duration of BBN. _Right panel_:
Evolution of $\sigma=-2\dot{H}/3H^{2}$ for Model I as a function of the number
of $e$-foldings, $N=\ln a$. $N_{\rm re}$ indicates the end of reheating, after
which the Universe enters the stiff era. The dip in the curve during BBN is
due to the process of electron-positron annihilation.
For illustrative purposes, we have solved the equations above for several
example models, with parameters listed in table 1. While we will say more
about our numerical method below, it is useful to present these examples
first, in order to anticipate the general behavior of the solutions in the
discussion which follows. The left panel of figure 1 shows the energy density
evolution of each component for Model I. The time evolution of $\sigma$ is
illustrated in the right panel of figure 1, for Model I in table 1. In order
for amplification to take place, $\sigma\neq 4/3$ is required. As mentioned in
the introduction, when stiff amplification ($\sigma=2$) of the primordial SGWB
occurs, the coupling between the radiation-like SGWB and the background metric
may cause significant backreaction from the SGWB on the Hubble parameter.
To account self-consistently for this backreaction, we must solve the coupled
dynamical system of eqs. (2.6) and (2.7) for each frequency, for any given set
of model parameters, $(r,T_{\rm re},T_{\rm sr})$. Our method of solution is
described in appendix C. Ordinarily, the solution of these coupled equations
would be subject to boundary conditions at the present, fixed by the
observational values adopted for $\Omega_{\text{m,0}}$ and $H_{0}$ (where
$\Omega_{\Lambda,0}=1-\Omega_{\text{m,0}}$ for a flat FLRW Universe). However,
observations of the CMB and baryon acoustic oscillations (BAO) also fix the
value of the redshift of radiation-matter equality, $z_{\text{eq}}$, to an
exquisite precision [e.g., 53]. Since the SGWB adds an extra radiation
component to the background energy density, we must ensure that our solution
yields the observed $z_{\text{eq}}$, despite this. In so doing, we encounter
the degeneracy between the value of $H_{0}$ measured by the CMB and BAO and
the boost to the radiation energy density by the SGWB, allowed by the
requirement that $z_{\text{eq}}$ is fixed. (In fact, it is the value of
$z_{\text{eq}}$, which determines the size of the sound horizon, that the CMB
and BAO data are mostly sensitive to, rather than $\Omega_{\text{m,0}}$ and
$H_{0}$.) As we shall show, by the end of the stiff era, the contribution of
the SGWB to the background energy density reaches an asymptotic value,
relative to that of the other radiation components. This asymptotic $\rho_{\rm
GW}$ (which thereafter evolves as $\rho_{\rm GW}\propto a^{-4}$) can be
represented by a constant value of $\Delta N_{\rm eff}$, the effective number
of extra relativistic species. $\Delta N_{\rm eff}\equiv N_{\rm eff}-N_{\rm
eff,0}$, where $N_{\rm eff,0}=3.046$ corresponds to three Standard Model
neutrinos [75]. As a result, we are able to utilize the $H_{0}-N_{\rm eff}$
degeneracy for which the value of $z_{\rm eq}$ is preserved, to determine the
boundary conditions in our solutions, as follows. While $H_{0}$ and $N_{\rm
eff}$ can both vary, we keep $\Omega_{\rm m,0}$ and $\Omega_{\rm
r,0}+\Omega_{\rm GW,0}$ fixed, thus fixing $z_{\rm eq}$ and $z_{\rm m\Lambda}$
(the redshift of matter-$\Lambda$ equality). In our model, then, the
$H_{0}-N_{\rm eff}$ degeneracy is stated as
$\frac{H_{0}}{H_{0,\rm\Lambda CDM}}=\sqrt{1+\mathcal{C}\,\Delta N_{\rm
eff}},\qquad\mathcal{C}\equiv\frac{\frac{7}{8}\left(\frac{4}{11}\right)^{4/3}}{1+\frac{7}{8}\left(\frac{4}{11}\right)^{4/3}N_{\rm
eff,0}},$ (2.8)
where $H_{0,\rm\Lambda CDM}$ is the value in the base-$\Lambda$CDM model (for
which $\Delta N_{\rm eff}=0$). Thus, our model can actually help alleviate the
Hubble tension by boosting $H_{0}$.
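Numerically, eq. (2.8) gives $\mathcal{C}\approx 0.134$. A short sketch, using the _Planck_ 2018 value $H_{0,\rm\Lambda CDM}=67.66$ km s$^{-1}$ Mpc$^{-1}$ adopted in this paper and the late-time $\Delta N_{\rm eff}\approx 0.37$ of Model III:

```python
# Numeric illustration of the H0 - N_eff degeneracy in eq. (2.8), with the
# Planck 2018 values adopted in the text and the Delta N_eff of Model III.

N_eff0 = 3.046
coef = (7.0 / 8.0) * (4.0 / 11.0) ** (4.0 / 3.0)
C = coef / (1.0 + coef * N_eff0)

H0_LCDM = 67.66   # km/s/Mpc, Planck 2018 base-LCDM value
dNeff = 0.37      # late-time Delta N_eff of Model III (table 1)
H0 = H0_LCDM * (1.0 + C * dNeff) ** 0.5
print(C, H0)  # C ~ 0.134; H0 is boosted by ~2.5%, to ~69.3 km/s/Mpc
```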
Figure 2 illustrates the necessity of our treatment for the backreaction of
the SGWB. The left panel shows that for Model III in table 1, the extra
radiation due to the stiff-amplified SGWB can indeed cause a $\approx 3\%$
increase in the Hubble parameter during the RD era ($\Delta N_{\rm eff}\approx
0.37$). The right panel shows that the backreaction of this SGWB would lead to
a shift of $z_{\rm eq}$ more than 6$\sigma$ away from its value measured by
_Planck_ using the base-$\Lambda$CDM model [53], if $\Omega_{\rm m,0}$ and
$H_{0}$ were both fixed at the $\Lambda$CDM best-fit values. In short,
precision cosmology requires that the simultaneous backreaction of the
primordial SGWB on the background expansion history be self-consistently taken
into account throughout its evolution. We have confirmed that our treatment
meets this requirement with a precision $\sim 10^{-3}$ (cf. appendix C), for
all viable model parameters, $(r,T_{\rm re},T_{\rm sr})$.
Figure 2: _Left panel_: Fractional difference between the value of the
Hubble parameter for Model III in table 1 and that with the SGWB contribution
subtracted off, $H^{2}_{\rm non-GW}\equiv H^{2}-8\pi G\,\rho_{\rm GW}/3c^{2}$.
The decrease of the curve during BBN is due to the process of electron-
positron annihilation. _Right panel_: Shift of radiation-matter equality due
to the SGWB backreaction (from $z_{\rm eq}$ to $z_{\rm eq}^{\prime}$), if
$\Omega_{\rm m,0}$ and $H_{0}$ were both fixed at the $\Lambda$CDM best-fit
values. The solid orange line is from Model III. Dash-dotted lines are from
the base-$\Lambda$CDM model. The grey band indicates the 68% confidence
interval of $z_{\rm eq}$ from _Planck_’s measurements.
Our results for the present-day SGWB energy spectra, $\Omega_{\rm GW}(f)$, for
the example models in table 1, are shown in figure 3. These models are chosen
so as to illustrate the dependence of the spectral shape on the model
parameters. To begin with, they all share a plateau in $\Omega_{\rm GW}(f)$ of
the same height since they assume the same value of $r$. Models I – V all
display the blue tilt and the triangle-shaped spectrum at high frequencies due
to stiff amplification. (The spectral shape of $\Omega_{\rm GW}(f)$ in
figure 3 can be compared with those in figures 8–10 of LSR17 for the
$\Lambda$SFDM model, which is a particular physical realization of our general
model here and thus yields the same spectral shape for $\Omega_{\rm GW}(f)$.)
Model I has the lowest value of $T_{\rm sr}$ and thus the highest amplitude at
$f_{\rm yr}\equiv 1$ yr$^{-1}$, the reference frequency for PTAs. This example also
shows that when the end of the stiff era slightly overlaps BBN, the value of
$N_{\rm eff}$ at BBN can be different from that at late times. Models I and II
have different values of $T_{\rm re}$ and $T_{\rm sr}$ but the same “area”
under the triangle in $\Omega_{\rm GW}(f)$ (as if the same triangle slides
along the plateau), which manifests itself in the equal values of $N_{\rm
eff}$ for these two models at late times. Models II and III have the same
value of $T_{\rm sr}$, so their stiff eras end at the same time. As a result,
their blue-tilted parts of $\Omega_{\rm GW}(f)$ lie on top of each other,
joining the plateau at the same frequency. Models III and IV have the same values of
$N_{\rm eff}$ at late times – the highest value of all the models. Models IV,
V and VI share the same reheating temperature, but their peak frequencies (at
$f_{\rm re}$ for each model) are different, reflecting the different
dependence of their scale factors on time. Models VI and VII are examples of
the base-$\Lambda$CDM model.
Throughout this paper, we adopt the following cosmological parameters from the
_Planck_ 2018 results (TT,TE,EE+lowE+lensing+BAO) [53]: $\Omega_{\rm
m,0}=0.3111$, $z_{\rm eq}=3387$, $H_{0,\rm\Lambda CDM}=67.66$ ${\rm
km\,s}^{-1}\,{\rm Mpc}^{-1}$, $A_{\rm s}=2.105\times 10^{-9}$.
Figure 3: Present-day SGWB energy spectra for all example models in table 1.
Vertical dashed lines indicate the representative frequencies of each SGWB
probe, $f_{*}$, $f_{\rm yr}$ and $f_{\rm LIGO}$ for the CMB, PTA and LIGO-
Virgo, respectively. The 5%–95% confidence interval of the common-spectrum
amplitude reported by the 12.5 yr NANOGrav results (labeled “NG12”) is displayed
[40], along with the 95% upper limit from BKP [68] and that from Advanced
LIGO-Virgo O3 [56].
## 3 Current measurements and upper bounds on the primordial SGWB
In this section, we present all the current measurements and upper bounds on
the primordial SGWB from direct probes (CMB, PTA, laser interferometry) and
indirect probes (BBN and late-Universe cosmology). They are altogether
illustrated in figure 3. The constraint on our model parameters, $(r,T_{\rm
re},T_{\rm sr})$, from each probe is examined in sections 3.1 through 3.4.
### 3.1 CMB temperature and polarization
The primordial SGWB can leave an observable imprint on the CMB temperature and
polarization anisotropy [76, 77, 78]. In particular, detection of the CMB
$B$-mode polarization around $\ell\sim 100$ would be a convincing signature of
the primordial SGWB. Currently, BICEP2/Keck Array+_Planck_ (BKP) only provides
an upper bound on the tensor-to-scalar ratio, $r<0.061$ at $95\%$ confidence
level (CL) [68]. This upper bound directly applies to our model, too, since
the stiff era does not affect long-wavelength modes that reentered around
recombination. In the future, CMB-S4 experiments will continuously seek to
measure the primordial SGWB from inflation [79].
### 3.2 NANOGrav results
Figure 4: Characteristic strain, $h_{c}$, of the stiff-amplified primordial
SGWB today at $f_{\rm yr}$ in our model, presented as the three-view
projections with respect to the model parameters, $(r,T_{\rm re},T_{\rm sr})$.
The cross-sectional planar slices of the 3-D space of model parameters chosen
in each view are color-coded, and the grey region represents parameters
entirely excluded by the observational constraints. The vertical black line
indicates the 95% CL upper limit on $r$ from BKP [68]. The magenta bars on the
color box (labeled by “NG12”) indicate the 5%–95% confidence interval of the
common-spectrum amplitude, $h_{c}(f_{\rm yr})$, from the 12.5 yr NANOGrav
results [40].
PTA observations measure the times of arrival (“ToAs”) of radio pulses from
millisecond pulsars. Those ToAs can be modulated by an SGWB permeating the
spacetime between the pulsar and the earth. In fact, the existence of an SGWB
would be manifested in the timing-residual cross-power spectral density (cf.
eq. [2] in [40]) as a time-correlated, common-spectrum stochastic process
across all pulsar-earth pairs, with quadrupolar spatial correlations between
pulsars (i.e., the Hellings & Downs curve [80]). In PTA analysis, the
characteristic strain of an SGWB is usually modeled as a power law,
$h_{c}(f)=A_{\rm CP}\,(f/f_{\rm yr})^{\alpha}$.
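For concreteness, this power-law strain model can be evaluated numerically and converted to a present-day energy-density spectrum via the standard relation $\Omega_{\rm GW}(f)=\frac{2\pi^{2}}{3H_{0}^{2}}f^{2}h_{c}^{2}(f)$. The following minimal sketch (illustrative only, not part of any PTA analysis pipeline; the $H_{0}$ value is an assumption) shows the conversion:

```python
import math

F_YR = 1.0 / (365.25 * 24 * 3600)  # reference frequency f_yr = 1/yr, in Hz

def h_c(f, A_cp, alpha):
    """Power-law characteristic strain, h_c(f) = A_CP * (f/f_yr)^alpha."""
    return A_cp * (f / F_YR) ** alpha

def omega_gw(f, A_cp, alpha, H0_km_s_Mpc=67.4):
    """Standard conversion Omega_GW(f) = (2 pi^2 / 3 H0^2) f^2 h_c(f)^2.

    H0 is converted from km/s/Mpc to s^-1; the default value is an
    illustrative assumption, not a number from this paper."""
    H0 = H0_km_s_Mpc * 1.0e3 / 3.0857e22  # 1 Mpc = 3.0857e22 m
    return (2.0 * math.pi**2 / (3.0 * H0**2)) * f**2 * h_c(f, A_cp, alpha) ** 2

# Example: the lower end of the quoted 5%-95% amplitude interval,
# with the alpha = -1/2 slope expected for the stiff-amplified SGWB.
print(omega_gw(F_YR, 1.75e-15, -0.5))
```

By construction, $h_c(f_{\rm yr})=A_{\rm CP}$, so the amplitude parameter is read off directly at the reference frequency.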
NANOGrav recently discovered a time-correlated, stochastic process with a
common amplitude and spectral index in their 12.5 yr data set. However, there
is little evidence for the quadrupolar spatial correlations in this common-
spectrum process that would be required to identify it as an SGWB. Hence, the
NANOGrav results are still inconclusive as a GW detection and, in the
meantime, have yet to be confirmed by other PTAs. Nevertheless, despite its
uncertainty, this reported common-spectrum process has incited many attempts
to explain it in terms of the SGWB.
In our model, the present amplitude of $h_{c}$, or, equivalently, $\Omega_{\rm
GW}(f)$, at frequencies near $f_{\rm yr}$ can be higher than in the
$\Lambda$CDM model _only if_ the corresponding modes have experienced stiff
amplification. For example, as shown in figure 3, these modes lie within the
blue-tilted part of the SGWB spectrum for Model I, so their amplitudes at
$f_{\rm yr}$ are higher than the $\Lambda$CDM-plateau value. Here, we sample
our model parameters, $(r,T_{\rm re},T_{\rm sr})$, throughout the entire
parameter space, to calculate the value of $h_{c}(f_{\rm yr})$ of the
primordial SGWB for all model parameters of interest. Our results are shown in
figure 4 (note that the $T_{\rm sr}$ axis is inverted in all figures in this
paper that show results across the range of parameter
space).777Throughout our analysis, we do not sample the grey region in
parameter space displayed in figures 4 – 7 for computational efficiency,
because models in this region result in too much extra radiation energy
density from the stiff-amplified SGWB, and are thus firmly excluded by late-
Universe $N_{\rm eff}$ bounds (cf. section 3.4). We compare our results with
the $A_{\rm CP}$ posterior reported by NANOGrav [40], for which the 5%–95%
confidence interval is $(1.75$–$3.83)\times 10^{-15}$ in the case of the blue-
tilted spectral slope predicted for the stiff-amplified SGWB
($\alpha=-1/2$).888These values we quote are different from those in the
fiducial model in NANOGrav’s analysis, because the latter assumes
$\alpha=-2/3$, as expected for the SGWB from unresolved mergers of
supermassive black-hole binaries. Figure 4 shows that the amplitude of the
SGWB at $f_{\rm yr}$ in the “standard inflation + stiff amplification”
scenario, as constrained by other observations, is too small to explain the
common-spectrum process in the 12.5 yr NANOGrav data set.999Our result here
that the amplitude of the stiff-amplified SGWB spectrum at $f_{\rm yr}$ is
constrained to be far below the NANOGrav results is qualitatively consistent
with the argument in [81], based upon applying the BBN constraint (which we
shall discuss in section 3.4) to limit how late the stiff era can end. While
the model in [81] differs from ours (e.g., it posits a stiff era that
immediately follows inflation, with no standard reheating process), this
consistency reflects the fact that the numerically computed example GW spectra
in [81] share the spectral feature of ours for modes whose Hubble reentry
occurs during the stiff era, namely a blue tilt of $\Omega_{\rm GW}(f)\propto f$.
### 3.3 Advanced LIGO-Virgo
Figure 5: Present-day energy density fraction per logarithmic frequency of
the stiff-amplified primordial SGWB, $\Omega_{\rm GW}(f)$, at the reference
frequency $f_{\rm LIGO}=25$ Hz in our model, presented as the three-view
projections with respect to the model parameters, $(r,T_{\rm re},T_{\rm sr})$.
The cross-sectional planar slice of the 3-D space of parameters shown in each
view is color-coded, and the grey region is entirely excluded by the
observational constraints. The vertical black line indicates the 95% CL upper
limit on $r$ from BKP [68]. The magenta curves indicate the 95% CL upper limit
on $\Omega_{\rm ref,\,LIGO}$ from the Advanced LIGO-Virgo O3 results [56].
Laser interferometers like the Advanced LIGO-Virgo network can directly detect
SGWBs by cross-correlating data from different detectors [e.g., 82]. Recently,
the LIGO Scientific Collaboration and Virgo Collaboration published results of
a search for an isotropic SGWB using data from their first three observing
runs (O1, O2 and O3) [56]. While the cross-correlation spectrum from data does
not show evidence for an SGWB signal, a new upper limit is placed on the
present-day SGWB energy spectrum, modeled as a power law, $\Omega_{\rm
GW}(f)=\Omega_{\rm ref,\,LIGO}\,(f/f_{\rm LIGO})^{\alpha_{\rm LIGO}}$. The
reference frequency is chosen to be $f_{\rm LIGO}=25$ Hz.
We again calculate the value of $\Omega_{\rm ref,\,LIGO}$ in our model,
sampling the model parameters $(r,T_{\rm re},T_{\rm sr})$, as shown in figure
5. Since the stiff-amplified SGWB in our model has a triangle-shaped spectrum
(i.e., $\Omega_{\rm GW}(f)$ is a broken power law), it does not always have a
fixed spectral index across the LIGO-Virgo frequency range, 20–1726 Hz.
Therefore, we compare our results with the _marginalized_ 95% CL upper limit
from the O3 analysis, $\Omega_{\rm ref,\,LIGO}<6.6\times 10^{-9}$, obtained by
integration over $\alpha_{\rm LIGO}$. Figure 5 displays this upper limit.
### 3.4 Integral bounds: BBN, CMB+BAO
The primordial SGWB can also be probed indirectly, e.g., via the light-
element abundances from BBN, the CMB, and the large-scale structure of the
Universe. These cosmological probes provide what is known as _integral bounds_
on the SGWB, since the observables in each probe are (indirectly) affected by
the integration of $\Omega_{\rm GW}(f)$ over a wide range of frequencies. In
the following, we will examine all such current probes, classifying them
according to the epoch in the expansion history of the Universe to which each
probe is sensitive.
#### Early-Universe cosmology: big bang nucleosynthesis.
Standard BBN predicts certain relic abundances for light elements like
helium-4 and deuterium (see [83] for a brief review). These abundances are
sensitive to the cosmology of the background Universe during BBN (when $T\sim
10^{9}$ K), in particular the baryon-to-photon ratio and the expansion rate
then. Therefore, one can infer related cosmological parameters, namely the
baryon density, $\Omega_{\rm b,0}h^{2}$ (where we use $h$ here to mean the
Hubble constant in units of 100 ${\rm km\,s}^{-1}\,{\rm Mpc}^{-1}$), and the
effective number of relativistic species at that time, $N_{\rm eff,\,BBN}$, by
combining observations of the primordial ${}^{4}{\rm He}$ and D abundances with theoretical
BBN calculations [e.g., 84]. We, henceforth, use $N_{\rm eff,\,BBN}$ to denote
its value during BBN, in order to distinguish it from the value in the late
Universe, $N_{\rm eff,\,late}$ (which affects the CMB and BAO). We note that,
in our discussion in section 2.2 of the asymptotic $\Delta N_{\rm eff}$
associated with $\rho_{\text{GW}}$, we were actually referring to this latter
$N_{\rm eff,\,late}$. By contrast, the value of $N_{\rm eff,\,BBN}$ in our
model may have contributions from _both_ the primordial SGWB and the stiff
component, the latter because it increases the expansion rate of the Universe
relative to the rate for a standard RD Universe with three neutrino species,
even though it does not, itself, evolve like a radiation-like component. As a
result, the constraint on $N_{\rm eff,\,BBN}$ can be translated into
constraints on the _sum_ of the stiff-amplified primordial SGWB and the stiff
component (rather than on the SGWB alone) in our model, and thus on the model
parameters.
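For the SGWB part of this contribution, the standard mapping from a frequency-integrated GW energy density to an effective number of extra neutrino species is $\Delta N_{\rm eff}=\rho_{\rm GW}/\rho_{\nu}^{(1)}$ with $\rho_{\nu}^{(1)}=\frac{7}{8}\left(\frac{4}{11}\right)^{4/3}\rho_{\gamma}$. A minimal numerical sketch (the trapezoidal integration and the example flat spectrum are illustrative; $h^{2}\Omega_{\gamma}\approx 2.47\times 10^{-5}$ is the standard present-day photon density):

```python
import math

H2_OMEGA_GAMMA = 2.47e-5  # present-day photon density, h^2 * Omega_gamma

def delta_n_eff(h2_omega_gw_integral):
    """Delta N_eff from the frequency-integrated SGWB density,
    h^2 * Integral[ Omega_GW(f) dln f ], using one neutrino species'
    share of the photon density, (7/8)(4/11)^(4/3) * rho_gamma."""
    one_neutrino = (7.0 / 8.0) * (4.0 / 11.0) ** (4.0 / 3.0) * H2_OMEGA_GAMMA
    return h2_omega_gw_integral / one_neutrino

def log_integral(freqs, omegas):
    """Trapezoidal integral of Omega_GW(f) over ln f."""
    total = 0.0
    for i in range(len(freqs) - 1):
        dlnf = math.log(freqs[i + 1] / freqs[i])
        total += 0.5 * (omegas[i] + omegas[i + 1]) * dlnf
    return total

# Illustrative flat spectrum h^2 Omega_GW = 1e-6 spanning ~14 decades:
freqs = [10.0 ** k for k in range(-10, 5)]
print(delta_n_eff(log_integral(freqs, [1e-6] * len(freqs))))
```

As the text notes, in this model the stiff component itself also raises the BBN-era expansion rate, so the $N_{\rm eff,\,BBN}$ bound constrains the sum of both contributions, not the SGWB integral alone.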
In this paper, we quote the 95% CL bounds on $N_{\rm eff,\,BBN}$, marginalized
over $\Omega_{\rm b,0}h^{2}$, obtained from combining measurements of the
primordial ${}^{4}{\rm He}$ mass fraction, $Y_{\rm P}$, and the primordial deuterium
abundance, $({\rm D/H})_{\rm P}$. For the $Y_{\rm P}$ measurement, our
baseline is from the data compilation of [85] (A15), but we also quote the
bounds from [86] (I14). For the $({\rm D/H})_{\rm P}$ measurement, we
reference the results from [87] (C14).101010We are aware of the more recent
measurements of $({\rm D/H})_{\rm P}$ [e.g., 88]. However, we quote the result
from C14 in this paper for the sake of comparison, because only this result
has been combined with I14. Moreover, the value of $N_{\rm eff,\,BBN}$ is
mainly constrained by the $Y_{\rm P}$ measurement and only mildly dependent on
$({\rm D/H})_{\rm P}$ [64]. The combined observational bounds on $N_{\rm
eff,\,BBN}$ are presented as follows:
$\displaystyle N_{\rm eff,\,BBN}=2.90\,^{+0.58}_{-0.54}\quad(95\%,~{\rm A15+C14}),$ (3.1a)
$\displaystyle N_{\rm eff,\,BBN}=3.58\pm 0.40\quad(95\%,~{\rm I14+C14}).$ (3.1b)
The discrepancy between them is due to the moderate tension between the
$Y_{\rm P}$ measurements from A15 and I14, which is still under debate. It is
worth noting that the lower bound from I14+C14 slightly disfavors the standard
value $N_{\rm eff,0}=3.046$.
We have calculated the value of $\Delta N_{\rm eff,\,BBN}$ in our model for
each choice of model parameters. The results are shown in figure 6, where the
95% CL upper limits from each of the two combined observational constraints
presented in eq. (3.1) are also displayed.
Figure 6: Effective number of extra relativistic species during BBN, $\Delta
N_{\rm eff,\,BBN}$, in our model, presented as the three-view projections with
respect to the model parameters, $(r,T_{\rm re},T_{\rm sr})$. The cross-
sectional planar slices of the 3-D space of model parameters chosen in each
view are color-coded, and the grey region is entirely excluded by the
observational constraints. The vertical black line indicates the 95% CL upper
limit on $r$ from BKP [68]. The magenta curves indicate the 95% CL upper limit
on $\Delta N_{\rm eff,\,BBN}$ from combining the $Y_{\rm P}$ and $({\rm
D/H})_{\rm P}$ measurements of A15 and C14 (eq. [3.1a], our baseline). The
cyan curves indicate the 95% CL upper limit from I14+C14 (eq. [3.1b]).
#### Late-Universe cosmology: radiation-matter equality and CMB damping tail.
Figure 7: Effective number of extra relativistic species in the late
Universe, $\Delta N_{\rm eff,\,late}$, in our model, presented as the three-
view projections with respect to the model parameters, $(r,T_{\rm re},T_{\rm
sr})$. The cross-sectional planar slices of the 3-D space of model parameters
chosen in each view are color-coded, and the grey region is entirely excluded
by the observational constraints. The vertical black line indicates the 95% CL
upper limit on $r$ from BKP [68]. The magenta curves indicate the 95% CL upper
limit on $\Delta N_{\rm eff,\,late}$ from combining the CMB and BAO data with
a prior for $Y_{\rm P}$ (eq. [3.2a], our baseline). The cyan curves indicate
the 95% CL upper limit from CMB+BAO only (eq. [3.2b]).
Extra radiation components that survive through the late Universe, like the
stiff-amplified primordial SGWB, may significantly contribute to the value of
$N_{\rm eff,\,late}$. In our model, unlike the case of $N_{\rm eff,\,BBN}$
discussed above, the SGWB is the only contribution to $\Delta N_{\rm
eff,\,late}$, since the stiff component is negligible in the late Universe.
Therefore, constraints on $N_{\rm eff,\,late}$ from the CMB and BAO can be
translated into constraints on the stiff-amplified primordial SGWB, and thus
on our model parameters. Before we discuss these late-Universe constraints, it
is important to (1) point out that a general analysis should distinguish them
from the BBN constraints above (which are only concerned with the early-
Universe cosmology), and (2) clarify which kind of analysis is suitable for
this purpose.
Currently, the CMB+BAO data sets are compatible with primordial element
abundance data inasmuch as they can be jointly fitted by an extension to the
base-$\Lambda$CDM model with a fixed, time-independent $N_{\rm eff}$ which is
common to both the early epoch of BBN and the late times probed by the CMB and
BAO observations (i.e. by assuming $\Delta N_{\rm eff,\,BBN}=\Delta N_{\rm
eff,\,late}$, henceforth “$\Lambda$CDM+$N_{\rm eff}$”) [53]. While many
analyses are based on this assumption [e.g., 64], the concordance between BBN
and the late-Universe cosmology, however, does _not_ demand that these two
values of $N_{\rm eff}$ be equal. As a matter of fact, viable cosmological
models like ours allow for different values of $N_{\rm eff}$ at BBN and at
late times (cf. Model I in table 1). In light of such models, it is therefore
more general and favorable to constrain $N_{\rm eff,\,BBN}$ and $N_{\rm
eff,\,late}$ separately. For the latter, the analysis should only involve
physical processes that directly determine the late-Universe observables,
independent of BBN, in a clean way. We carefully examine those physical
processes in the following:
* (1)
The physical size of the sound horizon depends on the duration of the RD era.
Thus, the angular scales of the sound horizon measured by both the CMB and BAO
(at different redshifts) are sensitive to the value of $z_{\rm eq}$.
* (2)
The early Integrated Sachs-Wolfe (ISW) effect refers to the enhancement of the
CMB temperature anisotropies due to the time-variation of gravitational
potentials after recombination, when the Universe had not yet fully
transitioned from RD to MD. In particular, the relative heights of the first
three peaks in the CMB temperature power spectrum are sensitive to the value
of $z_{\rm eq}$ via the early ISW effect [e.g., 89].
* (3)
On even smaller scales, the CMB temperature anisotropies are damped by photon
diffusion, an effect known as Silk damping. The slope of the damping tail in
the CMB power spectrum reflects the amount of Silk damping [63]. Since it
depends on both the expansion rate at recombination and the number density of
free electrons, the CMB damping tail measurements are thus sensitive to both
$N_{\rm eff,\,late}$ and $Y_{\rm P}$.
The $z_{\rm eq}$ measurements based on the first two physical processes above
are still subject to the $H_{0}-N_{\rm eff}$ degeneracy, as described in
section 2.2. This degeneracy can, however, be broken by the CMB damping tail
measurements, because they provide additional information that enables
constraining $N_{\rm eff,\,late}$ on its own. Therefore, one can constrain
$N_{\rm eff,\,late}$ independently of information involving any early-Universe
process (e.g., a BBN determination), by fitting the CMB+BAO data with an
extended $\Lambda$CDM model which allows _both_ $N_{\rm eff,\,late}$ and
$Y_{\rm P}$ to vary freely (henceforth, “$\Lambda$CDM+$N_{\rm eff}$+$Y_{\rm
P}$”). In this paper, we quote the $N_{\rm eff,\,late}$ constraints from such
an analysis provided by the _Planck_ 2018 results (CMB+BAO).111111 In fact, the
analysis that completely suits our purpose should use the prior on $N_{\rm
eff,\,late}$ adapted for our model, whereas the _Planck_ analysis was based on
a conservative flat prior. As a proof of principle, however, we quote the
_Planck_ results in this paper. We leave the full analysis for our model for a
future work. Optionally, the $Y_{\rm P}$ value from helium abundance
measurements can be additionally combined as a prior in the analysis. This
provides a tighter constraint on $N_{\rm eff,\,late}$ while remaining independent
of BBN [53]. We choose this constraint as our baseline in the paper.
The $N_{\rm eff,\,late}$ constraints from _Planck_ are quoted as follows:
$\displaystyle N_{\rm eff,\,late}=2.99\,^{+0.43}_{-0.40}\quad(95\%,~{\rm with}~Y_{\rm P}~{\rm prior~[A15]}),$ (3.2a)
$\displaystyle N_{\rm eff,\,late}=2.97\,^{+0.58}_{-0.54}\quad(95\%,~{\rm without}~Y_{\rm P}~{\rm prior}).$ (3.2b)
Both bounds are consistent with the standard value $N_{\rm eff,0}=3.046$.
We have calculated the value of $\Delta N_{\rm eff,\,late}$ in our model for
each choice of model parameters. The results are shown in figure 7, where both
95% CL upper limits in eq. (3.2) are displayed.
## 4 Results: joint constraints on standard inflation + stiff amplification
In this section, we combine the constraints on our model parameters,
$(r,T_{\rm re},T_{\rm sr})$, from all the probes of the primordial SGWB
described in the previous section, to obtain the _joint_ constraints on the
“standard inflation + stiff amplification” scenario. We summarize these joint
constraints in figure 8, which we shall now discuss.
As already described in section 3.2 and shown by figure 4, the amplitude of
$h_{c}(f_{\rm yr})$ for the stiff-amplified SGWB in our model, while enhanced
with respect to that _without_ stiff amplification, in the base-$\Lambda$CDM
model, is still much lower than the common-spectrum amplitude from the
NANOGrav 12.5 yr data set. This is because the model parameters
required to amplify the primordial SGWB at this frequency to the level of the
NANOGrav signal would result in excessively large values of $\Delta N_{\rm
eff,\,late}$, well above its current 95% CL upper bound from observations. In
fact, the combined constraints from all other probes indicate that the
difference in $h_{c}(f_{\rm yr})$ between our model predictions and the
NANOGrav results is more than two orders of magnitude. Therefore, the
“standard inflation + stiff amplification” scenario cannot explain the common-
spectrum process reported by NANOGrav. If the latter were indeed due to an
SGWB, astrophysical sources are more likely to be the origin.
Figure 8: Three-dimensional view of the constraints on the “standard
inflation + stiff amplification” scenario in its parameter space, $(r,T_{\rm
re},T_{\rm sr})$. _Panels (a)–(c)_ : constraints from the LIGO-Virgo O3
results, the $N_{\rm eff,\,BBN}$ measurements and the $N_{\rm eff,\,late}$
measurements, respectively. For each probe, the constraints are visualized as
the isosurface and contours for the corresponding 95% CL upper limit. In
panels (b) and (c), the isosurfaces are from our baseline constraints on
$N_{\rm eff,\,BBN}$ and $N_{\rm eff,\,late}$, eqs. (3.1a) and (3.2a),
respectively. In panels (a)–(c), the color-coded planar cross-sections are the
same as those in the leftmost panels of figures 5–7, respectively. In each
panel, the three thick magenta curves are the same as those magenta curves
shown in the three views of the corresponding figure (among figures 5–7), and
the grey vertical plane (with black borders in the figure) indicates the 95%
CL upper limit on $r$ from BKP [68]. _Panel (d)_ : Overall 95% CL allowed
range of our model parameters obtained by combining all the constraints in
panels (a)–(c), indicated by the light green volume. We identify the three
regimes of this overall allowed range according to the dominant probe in each
regime, (i) $N_{\rm eff,\,BBN}$, (ii) LIGO-Virgo and (iii) $N_{\rm
eff,\,late}$ (cf. table 2). The 95% CL bound in regime (ii) is manifested as
the “waterfall”-like surface in the figure.
In that case, as long as our model predictions remain below the NANOGrav
results, the model can still be constrained by other probes of the primordial SGWB, e.g., laser
interferometric experiments and indirect probes. The joint constraints on our
model parameters from these bounds are shown in figure 8. It displays three-
dimensional views of the 95% CL constraints in this parameter space, first as
required to satisfy each observational constraint separately, for the
constraints from the O3 data of the Advanced LIGO-Virgo, the $N_{\rm
eff,\,BBN}$ measurements, and the $N_{\rm eff,\,late}$ measurements,
respectively,121212For the $N_{\rm eff,\,BBN}$ and $N_{\rm eff,\,late}$
measurements, we only plot the isosurfaces of the tighter, baseline
constraints, namely, eqs. (3.1a) and (3.2a), respectively. and then with a
view of the range of parameters allowed by _all three_ of those constraints —
the 95% CL _joint_ constraint. Unfortunately, since not all the likelihood
data from these measurements are publicly available, we cannot yet perform a
full Bayesian joint analysis to obtain the posteriors for our model
parameters. Instead, our joint analysis here simply combines the 95% CL
constraints on our model parameters from each probe, i.e., combining the
isosurfaces in panels (a)–(c) of figure 8. The resulting 95% CL allowed range
of $(r,T_{\rm re},T_{\rm sr})$ is indicated by the light green volume in panel
(d).
To describe the features of this overall allowed range, we first note that
there must be a lower bound on $T_{\rm re}$ to allow BBN to occur, $T_{\rm
re}\gtrsim 4$ MeV, and there is an upper bound on $r$ from the CMB, $r<0.061$
(95% CL). For fixed $r$ and $T_{\rm re}$, a lower value of $T_{\rm sr}$ (i.e.,
larger $\Omega_{\rm s,0}$) implies longer duration of the stiff era and thus
higher degree of stiff amplification of the primordial SGWB. Therefore, there
must be a lower bound on $T_{\rm sr}$ for given values of $r$ and $T_{\rm
re}$. As a matter of fact, this bound is described by the _top_ surface of the
allowed region in panel (d) of figure 8, the 95% CL _lower_ limit on $T_{\rm
sr}$ (as a reminder, the $T_{\rm sr}$ axis is inverted in this figure).
Regime | Range of $T_{\rm re}$ | Lower limit on $T_{\rm sr}$ (95% CL) | Dominant probe
---|---|---|---
(i) | $4\times 10^{-3}\lesssim T_{\rm re}/{\rm GeV}\lesssim 10^{3}$ | $T_{\rm sr}>8.3\times 10^{-3}$ GeV | $N_{\rm eff,\,BBN}$
(ii) | $10^{3}\lesssim T_{\rm re}/{\rm GeV}\lesssim 10^{6}$ | Indicated by the “waterfall” surface in panel (d) of figure 8 | $\Omega_{\rm ref,\,LIGO}$
(iii) | $T_{\rm re}/{\rm GeV}\gtrsim 10^{6}$ | $\log_{10}\frac{T_{\rm sr}}{\rm GeV}>\frac{1}{2}\log_{10}r+\log_{10}\frac{T_{\rm re}}{\rm GeV}-4.4$ | $N_{\rm eff,\,late}$
Table 2: Overall 95% CL allowed range of our model parameters $(r,T_{\rm
re},T_{\rm sr})$, described as the 95% CL lower limit on $T_{\rm sr}$ for
given values of $r$ and $T_{\rm re}$ (i.e., the top surface of the allowed
region in panel [d] of figure 8). The overall allowed range has $r<0.061$ (95%
CL) from BKP [68].
Using this description, we find that the parameter range allowed by the joint
constraints can be characterized by dividing it into three regimes, according
to which observational probe of our model dominates in each regime. They can
be roughly parameterized by the range of $T_{\rm re}$. The results are laid
out in table 2 and labeled in panel (d) of figure 8. We can describe these
three regimes as follows, itemized by the regime number:
* (i)
This regime has the lowest values of $T_{\rm re}$. The dominant constraint on
$T_{\rm sr}$ is from $N_{\rm eff,\,BBN}$ (cf. the discussion on the $N_{\rm
eff,\,BBN}$ constraint in section 3.4, where we explain that in our model, not
only the SGWB but also the stiff component can contribute to $N_{\rm
eff,\,BBN}$ significantly). In fact, the lower limit on $T_{\rm sr}$ in this
regime is roughly manifested as a horizontal plane in the parameter space,
insensitive to the values of $r$ and $T_{\rm re}$. It reflects the fact that
in this regime, the SGWB is unimportant and our model is mainly constrained by
the requirement that the stiff-to-radiation transition must finish early
enough, so that the stiff component alone, for which $\rho_{\rm s}\propto
a^{-6}$, does not boost the expansion rate during BBN beyond the observational
constraints.
* (ii)
This regime has the intermediate range of $T_{\rm re}$. The dominant probe is
the LIGO-Virgo measurements, since the frequency of the peak in $\Omega_{\rm
GW}(f)$ in our model, $f_{\rm re}$ — which corresponds to the mode that
reentered the Hubble radius at the end of reheating (the beginning of the
stiff era) — lies near $f_{\rm LIGO}=25$ Hz in this regime. The lower limit on $T_{\rm
sr}$ for given values of $r$ and $T_{\rm re}$ in this regime is manifested as
the “waterfall” surface shown in panel (d) of figure 8.
* (iii)
This regime has the highest values of $T_{\rm re}$ and the dominant constraint
is from $N_{\rm eff,\,late}$, which amounts to an upper bound on the area
under the triangle in $\Omega_{\rm GW}(f)$ (cf. figure 3). Correspondingly,
the lower limit on $T_{\rm sr}$ for given $r$ and $T_{\rm re}$ in this regime
roughly appears as a plane in the parameter space, whose equation is specified
in table 2.
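As a small numerical aid, the regime-(iii) plane quoted in table 2, $\log_{10}(T_{\rm sr}/{\rm GeV})>\frac{1}{2}\log_{10}r+\log_{10}(T_{\rm re}/{\rm GeV})-4.4$, can be evaluated directly; the example parameter values below are illustrative:

```python
import math

def tsr_lower_limit_gev(r, T_re_gev):
    """95% CL lower limit on T_sr in regime (iii), from table 2:
    log10(T_sr/GeV) > 0.5*log10(r) + log10(T_re/GeV) - 4.4."""
    return 10.0 ** (0.5 * math.log10(r) + math.log10(T_re_gev) - 4.4)

# Example: r = 0.01, T_re = 1e8 GeV (inside regime iii, T_re >~ 1e6 GeV).
print(tsr_lower_limit_gev(0.01, 1e8))  # ~ 10^2.6 GeV
```

Since the limit scales as $\sqrt{r}\,T_{\rm re}$, lowering $r$ by two decades relaxes the bound on $T_{\rm sr}$ by one decade at fixed $T_{\rm re}$.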
By contrast with our model predictions for the PTA frequency range, which fall
far short of the reported NANOGrav signal for our allowed model parameters,
there is no such gap between our model and the current upper limits from _all
other_ observational probes. As a result, if the NANOGrav signal holds up over
time and is confirmed, then these other probes (i.e., LIGO-Virgo, $N_{\rm
eff,\,BBN}$ and $N_{\rm eff,\,late}$) may be more likely to detect the
“standard inflation + stiff amplification” scenario than PTA. For example, for
the results presented here, within the range of model parameters allowed by
the joint analysis, any future detection by LIGO-Virgo of the primordial SGWB,
consistent with its current O3 95% CL upper limit, can be explained by our
model. If so, then our model would provide an explanation within standard
inflation which does not require an initial spectral tilt.
Each of these probes will be continually upgraded in sensitivity (e.g., CMB-S4 [79], LIGO
A+131313https://dcc.ligo.org/LIGO-T1800042/public), and a comparison among
those sensitivities would be required to determine which probe will provide
the first evidence of this early-Universe scenario.141414 See also [90, 91]
for more discussions on the detectability of stiff SGWB spectra by LIGO and
its upgrades. Those authors studied similar stiff SGWB spectra from standard
inflation and discussed the dependence of its detectability on the sensitivity
of GW detectors, but did not calculate the case with an extended phase of
reheating prior to the stiff era (as in our model). If detections were made by
multiple probes, all consistent with the predictions of “standard inflation +
stiff amplification”, this would constitute smoking-gun evidence in favor
of the model.
## 5 Implications for the Hubble tension
New observational results continue to increase the significance of the tension
between $H_{0}$ measurements from CMB+BAO and those from the nearby Universe.
This has motivated growing interest in alternatives to the base-$\Lambda$CDM
model [e.g., 61]. Here we examine the possibility of reconciling/alleviating
the Hubble tension in our “standard inflation + stiff amplification” scenario,
by the presence of the stiff-amplified primordial SGWB as an extra radiation
component. In the presence of this extra radiation component, current
measurements of $z_{\rm eq}$ (at $\sim 0.6\%$ precision) then drive us to
incorporate the $H_{0}-N_{\rm eff}$ degeneracy [64] in our analysis, expressed
as eq. (2.8) in our model. We note that the $H_{0}-N_{\rm eff}$ degeneracy is
concerned with $N_{\rm eff,\,late}$, the late-Universe value of $N_{\rm eff}$,
independent of $N_{\rm eff,\,BBN}$.
Figure 9: Hubble tension implications of the “standard inflation + stiff
amplification” scenario. The tension is illustrated by the discrepancy between
the $H_{0}$ measurements from the nearby Universe, including SH0ES [60] and
H0LiCOW [61], and those from the CMB and BAO, including BAO+BBN [64] and
CMB+BAO [53]. The latter two analyses are both based on the
$\Lambda$CDM+$N_{\rm eff}$ model, and the solid ellipses are their 95% CL
contours, respectively. All the shaded regions represent the 68% CL ranges for
the respective measurements. With respect to our model, the $H_{0}-N_{\rm
eff}$ degeneracy relation, eq. (2.8), is shown as the green line. The vertical
dashed line indicates the standard value, $N_{\rm eff,0}=3.046$. The vertical
magenta and cyan lines are the 95% CL upper limits of $N_{\rm eff,\,late}$
adopted in our model, eq. (3.2), given by the CMB+BAO analysis based on the
$\Lambda$CDM+$N_{\rm eff}$+$Y_{\rm P}$ model, with and without a prior for
$Y_{\rm P}$. Our model thus lies on the segment of the $H_{0}-N_{\rm eff}$
degeneracy relation between the black point and the open circles.
The resulting impact of our model on the Hubble tension is illustrated in
figure 9. Figure 9 shows this $H_{0}-N_{\rm eff}$ degeneracy relation in our
model, along with current observational determinations of $H_{0}$, from
measurements of the distance ladder by the SH0ES collaboration
($H_{0}=74.03\pm 1.42$ ${\rm km\,s}^{-1}\,{\rm Mpc}^{-1}$) [60], time-delay cosmography by the
H0LiCOW collaboration ($H_{0}=73.3\,^{+1.7}_{-1.8}$ ${\rm km\,s}^{-1}\,{\rm Mpc}^{-1}$) [61], BAO+BBN
[64] and CMB+BAO [53]. The last two measurements are both based on the
“$\Lambda$CDM+$N_{\rm eff}$” model, and both involve a BBN calculation which
assumes $N_{\rm eff,\,BBN}=N_{\rm eff,\,late}$. In contrast, our analysis is
free from these assumptions, as described in section 3.4. Since the
$H_{0}-N_{\rm eff}$ degeneracy is only concerned with $N_{\rm eff,\,late}$, we
adopt the BBN-independent upper bound on $N_{\rm eff,\,late}$ from the CMB+BAO
analysis provided by _Planck_ (eq. [3.2]), based on the “$\Lambda$CDM+$N_{\rm
eff}$+$Y_{\rm P}$” model [53]. Therefore, combining eq. (2.8) with the
_Planck_ fits, we obtain the corresponding upper limits on $H_{0}$ for our
model, as follows:
$\displaystyle H_{0}\leq 69.34~\mathrm{km\,s}^{-1}\,\mathrm{Mpc}^{-1}\quad(95\%,~{\rm with}~Y_{\rm P}~{\rm prior~[A15]}),$ (5.1a)
$\displaystyle H_{0}\leq 69.91~\mathrm{km\,s}^{-1}\,\mathrm{Mpc}^{-1}\quad(95\%,~{\rm without}~Y_{\rm P}~{\rm prior}).$ (5.1b)
These values are indicated by the open circles in figure 9. They occur for
model parameters corresponding to the current 95% CL upper limits of $N_{\rm
eff,\,late}$, and thus to the boundary surface for regime (iii) described
above (cf. figure 8 and table 2).
Meanwhile, for these current observational limits on $N_{\rm eff,\,late}$, our
model may be able to reduce the discrepancy between measurements of $H_{0}$ by
CMB+BAO and SH0ES from $4.4\sigma$ to $3.8\sigma$ for the baseline upper limit
(eq. [3.2a]), and that between CMB+BAO and the H0LiCOW measurement from
$3.1\sigma$ to $2.8\sigma$. In addition, if we take the more relaxed upper
limit, eq. (3.2b), our model may further bring the $H_{0}$ from CMB+BAO to
within $3.4\sigma$ of the SH0ES measurement, and within $2.6\sigma$ of the
H0LiCOW measurement.
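The tension figures quoted here follow from the usual Gaussian metric, $|H_{0}^{(1)}-H_{0}^{(2)}|/\sqrt{\sigma_{1}^{2}+\sigma_{2}^{2}}$. A minimal sketch; the CMB+BAO central value and uncertainty in the example call are placeholders, not numbers from this paper:

```python
import math

def tension_sigma(x1, sig1, x2, sig2):
    """Gaussian tension between two independent measurements, in sigma."""
    return abs(x1 - x2) / math.sqrt(sig1**2 + sig2**2)

# SH0ES: H0 = 74.03 +/- 1.42 (68% CL), vs. a hypothetical CMB+BAO
# determination (central value and error are illustrative assumptions):
print(tension_sigma(74.03, 1.42, 67.7, 0.6))
```

Raising the CMB+BAO-allowed $H_{0}$ along the $H_{0}$–$N_{\rm eff}$ degeneracy line, as in eq. (5.1), shrinks the numerator of this metric, which is the mechanism behind the quoted reductions.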
## 6 Summary and discussion
The recent NANOGrav 12.5 yr results have invited many interpretation attempts
involving a primordial stochastic gravitational-wave background. All those
attempts are based upon non-standard early-Universe scenarios which can
predict stronger amplitudes in the PTA frequency range than that from standard
inflation + $\Lambda$CDM. While it is understood by many authors that such an
SGWB contributes an extra radiation component to the background Universe,
which may therefore affect its expansion history, we here investigate the
possibility that this extra radiation may then help alleviate the current
Hubble tension, thus drawing a novel connection between gravitational waves
and cosmology.
We demonstrate this by considering a cosmological model, the “standard
inflation + stiff amplification” scenario, with two components added to the
base-$\Lambda$CDM model: a stiff component and the primordial SGWB. In our
model, an early stiff era (with $w=1$) arises, when the stiff component
dominates, between the end of an extended period of reheating and the standard
RD era. Unlike some other suggestions to explain the NANOGrav signal by
postulating nonstandard inflation with a blue-tilted primordial spectrum for
the tensor modes responsible for the SGWB, our model does not require us to
_depart_ from standard inflation so as to _tilt_ the primordial spectrum. In
our case, the primordial spectrum is nearly untilted, but a _secondary_ blue
tilt results, instead, from the _stiff amplification_ caused by the stiff era.
Clarifying its distinction from parametric amplification, we revisit this
stiff-amplification effect on the primordial SGWB under the general scenario
considered here, which has three parameters, $(r,T_{\rm re},T_{\rm sr})$. The
secondary blue tilt in the stiff-amplified primordial SGWB is manifested in
its present-day energy spectrum as $\Omega_{\rm GW}(f)\propto f$, for the
frequency range of modes that reentered the Hubble radius during the stiff
era. In this paper, we address the questions of whether such a blue tilt may
explain the NANOGrav results and to what extent the stiff-amplified primordial
SGWB can reduce the Hubble tension. Along the way, we make predictions for
other direct and indirect observables of the SGWB, as well.
In doing so, we develop a new method to include the _backreaction_ of the SGWB
on the background expansion rate _self-consistently_. In fact, we point out
that any GW analysis based on a model that can significantly boost the SGWB,
like ours, must account for its backreaction on the background Universe, in
order to preserve the well-measured redshift of radiation-matter equality as
precision cosmology demands. For that, we solve the fully-coupled dynamical
system of the SGWB and expansion history. We update its boundary conditions by
boosting the Hubble constant in accordance with the extra radiation associated
with the SGWB. In so doing, we utilize the $H_{0}-N_{\rm eff}$ degeneracy
which preserves $z_{\rm eq}$.
We then sample the three-dimensional parameter space, $(r,T_{\rm re},T_{\rm
sr})$, to perform a joint analysis of the NANOGrav results and the latest
upper bounds from _Planck_ , big bang nucleosynthesis and Advanced LIGO-Virgo,
to constrain the model. We find that the resulting blue-tilted, stiff-
amplified SGWB is still too small to explain the common-spectrum amplitude
reported by NANOGrav (by at least two orders of magnitude), when constrained
by current upper limits of the other observables: $\Omega_{\rm ref,\,LIGO}$,
$N_{\rm eff,\,BBN}$ and $N_{\rm eff,\,late}$. The latter together provide
joint constraints on our model parameters. We find that the parameter range
allowed by the joint constraints can be characterized by dividing it into
three regimes, according to which observational probe of our model dominates
in each regime.
While we have shown that, for its allowed parameters, the maximum amplitude of
the predicted primordial SGWB for the “standard inflation + stiff
amplification” scenario in the PTA frequency range is far smaller than the
reported NANOGrav signal, there is no such gap between our model predictions
and the current upper limits on the SGWB from _all other_ observational
probes. In the future, therefore, even if the NANOGrav signal is confirmed and
too large to be the _primordial_ SGWB predicted here, these other probes
(i.e., LIGO-Virgo, $N_{\rm eff,\,BBN}$ and $N_{\rm eff,\,late}$) may still be
able to detect our predicted SGWB. For example, for model parameters which
satisfy the constraints derived here from our joint analysis, any future
detection of the primordial SGWB by LIGO-Virgo that is consistent with the
current O3 95% CL upper limit can be explained by our model. If so, then this
would be an explanation within standard inflation which does not require an
initial spectral tilt.
As the sensitivity of these probes increases in the future (e.g., CMB-S4, LIGO
A+), the chances of detecting the primordial SGWB will increase, as well.
Which probe is likely to provide the first evidence of this early-Universe
scenario will depend upon the relative improvements among their sensitivities
over time. If detections occur for multiple probes, each consistent with the
predictions of “standard inflation + stiff amplification”, then this would be
smoking-gun evidence in favor of the model.
With regard to the Hubble tension, we have shown that the “standard inflation
+ stiff amplification” scenario may reduce the discrepancy between the
measurement of $H_{0}$ by CMB+BAO for the baseline upper limit (eq. [3.2a])
and that by SH0ES, from $4.4\sigma$ to $3.8\sigma$, and that by H0LiCOW, from
$3.1\sigma$ to $2.8\sigma$. Moreover, according to our analysis, if we take
the more relaxed upper limit, eq. (3.2b), instead, then our model can bring
the value of $H_{0}$ derived from CMB+BAO to within $3.4\sigma$ of its value
from the SH0ES measurement, and within $2.6\sigma$ of its value from the
H0LiCOW measurement. Hence, our results demonstrate that, while existing
attempts to reconcile the Hubble tension often appeal to an extra relic
radiation component, the primordial SGWB as generally present in the current
cosmological paradigm can provide a favorable candidate for such extra
radiation which may at least partially reduce the tension. In fact, given the
unknown expansion history of the Universe between the end of inflation and
BBN, non-negligible extra radiation is a natural consequence of the primordial
SGWB produced within the standard inflationary paradigm, caused by stiff
amplification, without further new ingredients, e.g., dark photons or early
dark energy.
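The $\sigma$-values quoted above follow from the standard Gaussian combination of two independent measurements, $n_\sigma = |H_{0,a}-H_{0,b}|/\sqrt{\sigma_a^2+\sigma_b^2}$. A minimal check of this arithmetic, assuming the commonly quoted central values and uncertainties $67.4\pm 0.5$ (CMB+BAO) and $74.03\pm 1.42$ (SH0ES) km/s/Mpc, which are not restated in this section:

```python
def tension_sigma(h0_a, err_a, h0_b, err_b):
    """Discrepancy between two independent H0 measurements,
    in units of the combined (quadrature) standard deviation."""
    return abs(h0_a - h0_b) / (err_a**2 + err_b**2) ** 0.5

# With the assumed numbers this reproduces the baseline 4.4 sigma:
baseline = tension_sigma(67.4, 0.5, 74.03, 1.42)
```

The reduction to $3.8\sigma$ then corresponds to shifting the CMB+BAO value upward via the $H_{0}-N_{\rm eff}$ degeneracy, as described in the text.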
Our results for the “standard inflation + stiff amplification” scenario can
also be contrasted with those for models recently proposed to explain the
NANOGrav results as the primordial SGWB from _nonstandard_ inflation. In these
nonstandard inflation models, the consistency relation between $r$ and the
spectral index $n_{\rm t}$ is relaxed, to allow the primordial spectrum to
have a large _initial_ blue tilt [e.g., 49, 50]. In our model, by contrast,
the inflationary consistency relation is obeyed, and, therefore, the
primordial spectrum is nearly flat. We have shown that the SGWB from
_standard_ inflation must be well below the reported NANOGrav amplitude, even
after stiff amplification, while the nonstandard inflationary models can
_match_ the level of that amplitude only by postulating a large enough
_initial_ blue tilt. As a result, the latter models often need to place limits
on the impact of reheating, either by restricting $T_{\text{re}}$ to be small
(i.e., $\lesssim 10^{3}$ GeV) or else by invoking some nonstandard process;
otherwise, the blue tilt required to match the NANOGrav amplitude would be so
large as to violate the current BBN constraints. Since our stiff-amplified
SGWB does not rise to the level of the NANOGrav results anyway, our model is
not restricted to low values of $T_{\rm re}$.
## Appendix A Primordial SGWB: the short-wave, weak-field limit
The classical description of GWs is based on a clean separation of
perturbations from the background metric [66, 92, 93]. When this separation
holds in length scales, one has the “short-wave” limit: all
wavelengths of interest are much smaller than the typical curvature radius of the
background. If the perturbations are additionally small (the “weak-field” limit),
one can expand the Ricci tensor around its background value. In this
expansion, the first-order term, $R^{(1)}_{\mu\nu}$, yields the equation of
motion of the GWs (the wave equation) and the second-order term,
$R^{(2)}_{\mu\nu}$, yields their effective stress-energy tensor. The latter is
defined as
$T^{\rm GW}_{\mu\nu}=-\frac{c^{4}}{8\pi
G}\bigg{\langle}R^{(2)}_{\mu\nu}(\gamma)-\frac{1}{2}\bar{g}_{\mu\nu}R^{(2)}(\gamma)\bigg{\rangle}_{\rm
3D}$ (A.1)
where $\bar{g}_{\mu\nu}$ is the background metric,
$R^{(2)}\equiv\bar{g}^{\mu\nu}R^{(2)}_{\mu\nu}$ and $\langle\dots\rangle_{\rm
3D}$ denotes the spatial average over a scale greater than all modes of
interest.
Primordial tensor perturbations over a flat FLRW background naturally satisfy
the short-wave, weak-field limit. Thus, in the TT gauge, the wave equation for
these perturbations takes the following standard form (we consider no
sources with anisotropic stress in this paper):
$\ddot{h}_{ij}+\frac{3\dot{a}}{a}\dot{h}_{ij}-\frac{c^{2}}{a^{2}}\nabla^{2}h_{ij}=0.$
(A.2)
Also, $T^{\rm GW}_{\mu\nu}$ turns out to be homogeneous since it is a
spatially-averaged quantity by definition. For tensor fluctuations produced by
inflation, the spatial average is equal to the ensemble average
$\langle\dots\rangle$ according to the ergodic theorem. Thus, primordial GWs
from inflation constitute a _stochastic background_.
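The evolution governed by eq. (A.2) can be checked numerically. The sketch below (illustrative, not from the paper) integrates the equation for a single Fourier mode in a radiation-dominated background, where $a\propto\eta$ in conformal time and the equation reduces to $h'' + (2/\eta)\,h' + k^{2}h = 0$; the known analytic transfer function in that case is $\sin(k\eta)/(k\eta)$, which the integration reproduces. The function name and step count are assumptions.

```python
import numpy as np

def tensor_mode_rd(k, eta_max, n_steps=20000):
    """Integrate the tensor wave equation (A.2) for one Fourier mode in a
    radiation-dominated background.  With a proportional to eta (conformal
    time) the equation reduces to h'' + (2/eta) h' + k^2 h = 0; super-Hubble
    initial conditions are h = 1, h' = 0."""
    def rhs(eta, h, hp):
        return hp, -(2.0 / eta) * hp - k * k * h

    eta = np.linspace(0.01 / k, eta_max, n_steps)  # start well outside the Hubble radius
    d = eta[1] - eta[0]
    h, hp, out = 1.0, 0.0, np.empty(n_steps)
    for i, e in enumerate(eta):
        out[i] = h
        # classical fourth-order Runge-Kutta step
        k1h, k1p = rhs(e, h, hp)
        k2h, k2p = rhs(e + d / 2, h + d / 2 * k1h, hp + d / 2 * k1p)
        k3h, k3p = rhs(e + d / 2, h + d / 2 * k2h, hp + d / 2 * k2p)
        k4h, k4p = rhs(e + d, h + d * k3h, hp + d * k3p)
        h += d / 6 * (k1h + 2 * k2h + 2 * k3h + k4h)
        hp += d / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
    return eta, out
```

The late-time envelope of the numerical solution decays as $1/a\propto 1/\eta$, which is the sub-Hubble redshifting behavior invoked in eq. (A.6).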
The energy density and pressure of this SGWB can be explicitly written as
$\begin{split}\rho_{\rm GW}&\equiv T_{00}^{\rm GW}=\frac{c^{2}}{32\pi
G}\sum_{ij}\bigg{\langle}\frac{1}{2}(\dot{h}_{ij})^{2}+\frac{c^{2}}{2a^{2}}(\nabla
h_{ij})^{2}+\frac{4\dot{a}}{a}\dot{h}_{ij}h_{ij}\bigg{\rangle},\\\ p_{\rm
GW}&\equiv\frac{1}{3a^{2}}(T_{11}^{\rm GW}+T_{22}^{\rm GW}+T_{33}^{\rm
GW})=\frac{c^{2}}{32\pi
G}\,\frac{1}{3}\sum_{ij}\bigg{\langle}-\frac{5}{2}(\dot{h}_{ij})^{2}+\frac{7c^{2}}{2a^{2}}(\nabla
h_{ij})^{2}\bigg{\rangle}.\end{split}$ (A.3)
Therefore, moving into Fourier space, the dimensionless energy and pressure
spectra can be written as
$\begin{split}\Omega_{\rm GW}(a,f)&\equiv\frac{8\pi
G}{3H^{2}c^{2}}\cdot\frac{\mathrm{d}\,\rho_{\rm GW}}{\mathrm{d}\,\ln
f}=\frac{\Delta^{2}_{h,\rm
i}(f)}{24H^{2}}\left(\dot{T}_{h}^{2}+\left(\frac{2\pi
f}{a}\right)^{2}T_{h}^{2}+\frac{8\dot{a}}{a}\dot{T}_{h}T_{h}\right),\\\
\Pi_{\rm GW}(a,f)&\equiv\frac{8\pi
G}{3H^{2}c^{2}}\cdot\frac{\mathrm{d}\,p_{\rm GW}}{\mathrm{d}\,\ln
f}=\frac{\Delta^{2}_{h,\rm
i}(f)}{72H^{2}}\left(-5\dot{T}_{h}^{2}+7\left(\frac{2\pi
f}{a}\right)^{2}T_{h}^{2}\right).\end{split}$ (A.4)
The inverse (integral) relations read
$\begin{split}\Omega_{\rm GW}(a)&=\int_{0}^{+\infty}\Omega_{\rm
GW}(a,f)\,\mathrm{d}\ln f,\qquad\rho_{\rm GW}(a)=\frac{3H^{2}c^{2}}{8\pi
G}\,\Omega_{\rm GW}(a)\\\ \Pi_{\rm GW}(a)&=\int_{0}^{+\infty}\Pi_{\rm
GW}(a,f)\,\mathrm{d}\ln f,\qquad p_{\rm GW}(a)=\frac{3H^{2}c^{2}}{8\pi
G}\,\Pi_{\rm GW}(a).\end{split}$ (A.5)
For modes well-inside the Hubble radius $(2\pi f/aH\gg 1)$, the high-frequency
limit is satisfied in addition to the short-wave limit. In this case, the
adiabatic solution for plane waves reads $T_{h}\propto\cos{(2\pi f\eta)}/a$.
It oscillates much more rapidly than the Universe expands. Thus, only time-
averaged values over several oscillations, $\langle\dots\rangle_{t}$, can be
measurable in practice. We then have $\dot{T}^{2}_{h}\simeq(2\pi
f/a)^{2}\,T_{h}^{2}$ (the time-averaging notation for sub-Hubble solutions is
omitted throughout the paper for brevity). This yields
$\Omega_{\rm GW}(a,f)\simeq 3\Pi_{\rm GW}(a,f)\simeq\frac{\Delta^{2}_{h,\rm
i}(f)\,\dot{T}^{2}_{h}}{12H^{2}}\simeq\frac{(2\pi f)^{2}\,\Delta^{2}_{h,\rm
i}(f)\,T_{h}^{2}}{12\,a^{2}H^{2}},$ (A.6)
showing that sub-Hubble modes indeed evolve like radiation, $w(a,f)=1/3$. The
last equality above also implies the following relation in the sub-Hubble
limit:
$\Omega_{\rm GW}(a,f)\simeq\frac{(2\pi
f)^{2}}{12\,a^{2}H^{2}}\Delta^{2}_{h}(a,f)=\frac{2\pi^{2}f^{2}}{3\,a^{2}H^{2}}h_{c}^{2}(a,f).$
(A.7)
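Evaluated today ($a=1$, $H=H_{0}$), eq. (A.7) gives a direct conversion between the characteristic strain and the energy spectrum. A minimal sketch, assuming $H_{0}=67.4$ km/s/Mpc purely for illustration (the paper itself varies $H_{0}$ via the $H_{0}-N_{\rm eff}$ degeneracy):

```python
import math

# Assumed fiducial value, for illustration only.
H0 = 67.4 * 1.0e3 / 3.0857e22  # 67.4 km/s/Mpc converted to s^-1

def omega_gw_from_strain(h_c, f):
    """Present-day energy spectrum from characteristic strain, eq. (A.7)
    with a = 1 and H = H0:  Omega_GW(f) = 2 pi^2 f^2 h_c^2 / (3 H0^2).
    h_c is dimensionless; f is in Hz."""
    return 2.0 * math.pi**2 * f**2 * h_c**2 / (3.0 * H0**2)
```

For example, a characteristic strain of $10^{-15}$ at $f=10^{-8}$ Hz maps to $\Omega_{\rm GW}\sim 10^{-10}$, which illustrates why nanohertz strain measurements probe such small energy fractions.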
## Appendix B Illustrative example of an early stiff era: $\Lambda$SFDM
universe
Figure 10: Energy density evolution of all the components in an example
$\Lambda$SFDM universe [LSR17]. Its particle parameters are
$\lambda/(mc^{2})^{2}=1\times 10^{-18}~{\rm eV}^{-1}\,{\rm cm}^{3}$ and $m=8\times
10^{-21}$ eV$/c^{2}$, where the former describes the strength of the repulsive
quartic self-interaction of SFDM. _Panel (a)_ : Energy densities of SFDM,
radiation, baryons and the cosmological constant. _Panel (b)_ : Energy density
of the (stiff-amplified) primordial SGWB. _Panel (c)_ : Energy density
fractions of all the components.
Figure 10 shows the energy density evolution of all the components in an
example $\Lambda$SFDM universe, for which the cosmological dark matter is
composed of ultralight ($m\sim 10^{-22}$ eV$/c^{2}$) bosonic particles in a
Bose-Einstein condensate [31, LSR17]. This dark matter model is described by a
complex scalar field and thus known as scalar field dark matter (SFDM). Complex
SFDM is a variant of fuzzy dark matter (which is otherwise described by a
real scalar field). It generically undergoes a kination, or stiff, phase as the
earliest stage of its dynamical evolution. As a result, the primordial SGWB
from inflation is subject to stiff amplification and may then cause
significant backreaction on the background Universe.
The example model shown in figure 10 has a repulsive quartic self-interaction,
which causes the radiation-like phase of SFDM as shown in panel (a). The
radiation-like SFDM contributes yet another extra radiation component to the
critical density of the Universe, manifested as the corresponding plateau in
panel (c). Later on, SFDM transitions into the matter-like phase and becomes
dark matter, responsible for cosmological structure formation.
## Appendix C Numerical scheme: approximation by model with constant $\Delta
N_{\rm eff}$
As described in section 2.2, we must solve the coupled dynamical system of
eqs. (2.6) and (2.7) for each frequency, for each set of model parameters,
$(r,T_{\rm re},T_{\rm sr})$. (For each parameter set, we solve the
dynamical system for a sample of comoving frequencies, $\{f_{i}\}$, chosen
so as to resolve the spectrum $\Omega_{\text{GW}}(f)$ as a function of $f$;
in practice this required about 50 frequencies, spaced more closely around
the frequencies of the modes that entered the Hubble radius at
the transitions between epochs, when the EoS of the Universe changed, e.g., at
$T_{\text{re}}$.) It is necessary, in fact, to solve the system of equations
for _all_ frequencies at once, since it is a set of integro-differential
equations, in which $\Omega_{\rm GW}$ and $\Pi_{\rm GW}$ in eq. (2.7) are both
integrals over all frequencies. An iterative solution is required, in that
case, since the integrated quantities at a given time are not known until the
solution is known for each frequency and at all times. Moreover, the finite-
difference scheme must contend with the requirement of resolving the high-
frequency oscillatory behavior of the solution in time, which requires many
small steps. Even if we only integrate the dynamical system exactly during the
Hubble reentry for each mode (for about 10 $e$-foldings) and stitch that
solution with its analytical super-Hubble and sub-Hubble asymptotes, the total
solution to the system consisting of all frequencies can be costly for a
single set of model parameters alone. In addition, in order to constrain our
model parameters by comparing the solutions for different parameters with
observational constraints, we must sample a large grid of representative
points in the three-dimensional parameter space, $(r,T_{\rm re},T_{\rm sr})$,
and find the solution to the coupled equations _for each point_.
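The frequency sampling described above (roughly 50 points, denser near the reentry frequencies of the epoch transitions) can be built, for example, by overlaying local refinements on a coarse logarithmic grid. The function below is an illustrative construction only; its name, parameters, and refinement widths are assumptions, not the paper's actual grid.

```python
import numpy as np

def frequency_grid(f_min, f_max, f_transitions, n_base=30, n_local=10, width_dex=0.5):
    """Log-spaced frequency sample densified near the frequencies of modes
    that reentered the Hubble radius at epoch transitions (e.g. at T_re,
    T_sr).  Returns a sorted, de-duplicated array within [f_min, f_max]."""
    base = np.geomspace(f_min, f_max, n_base)          # coarse background grid
    local = [np.geomspace(ft * 10**-width_dex, ft * 10**width_dex, n_local)
             for ft in f_transitions]                  # refinement near each transition
    grid = np.unique(np.concatenate([base, *local]))   # merge and sort
    return grid[(grid >= f_min) & (grid <= f_max)]
```

Clustering points where the equation of state changes is what resolves the spectral breaks in $\Omega_{\rm GW}(f)$ without paying for fine sampling over the full twenty-odd decades of frequency.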
Faced with these computational challenges, we have developed an efficient
numerical scheme to solve the coupled system. It takes advantage of the fact
that in the exact solution for cases with significant stiff amplification, the
contribution of the SGWB to the background energy density reaches an
asymptotic value by the end of the stiff era, i.e., a constant fraction of
that of other radiation components. As a result, $\rho_{\rm GW}$ can be
represented in terms of a constant value of $\Delta N_{\rm eff}$. During the
stiff era itself, the expansion history is not sensitive to this value,
except that it determines when the stiff era ends and the Universe becomes RD.
As such, if we knew what that asymptotic value of $\Delta N_{\rm eff}$ was
going to be, we could approximate the entire expansion history and therefore
the evolution of $\sigma$ (cf. eq. [2.7]) quite well, by adopting this
asymptotic $\Delta N_{\rm eff}$ and assuming that it is constant from the
beginning of the integration forward in time. Since the evolution of $\sigma$
solely determines the transfer of tensor modes, as eq. (2.6) implies, this
method would yield a good approximate solution to the coupled equations.
Unfortunately, we do not know this asymptotic $\Delta N_{\rm eff}$ in advance
of solving the coupled equations. However, we can approach this value
iteratively, if we have an initial guess for the value of $\Delta N_{\rm
eff}$. In the end, this approximation enables us to produce computationally-
efficient solutions, with negligible differences from the exact solutions.
Figure 11: _Left panel_ : Fractional difference between the exact solution
and the approximate model, in terms of $\sigma$, for Model III in table 1.
Vertical dashed lines indicate the scale factors of stiff-to-radiation and
radiation-to-matter equalities, respectively. The decrease of the curve during
BBN is due to the process of electron-positron annihilation. _Right panel_ :
Evolution of $\Delta N_{\rm eff}$ as a function of the number of $e$-foldings,
$N$, for both the exact solution and the approximate model. Vertical dashed
lines indicate the end of reheating, stiff-to-radiation equality and
radiation-to-matter equality, respectively. In both panels, the grey band
indicates the duration of BBN.
In what follows, we describe this approximate model for the treatment of the
backreaction of the SGWB (introduced in section 2.2) and
justify our use of it. First, we explain why the value of $\Delta N_{\rm eff}$
due to $\rho_{\rm GW}$ is asymptotically constant in cases with significant
stiff amplification. The degree of stiff amplification depends on the duration
of the stiff era. If the latter spans more than a few $e$-foldings, the
frequency range of modes that reentered the Hubble radius during the stiff era
will extend more than a few orders of magnitude, accordingly. In this case,
combining eq. (A.5) and the spectrum of the stiff-amplified SGWB ($\Omega_{\rm
GW}(f)\propto f$), one can show that $\rho_{\rm GW}$ must be dominated by
high-frequency modes that reentered at the beginning of the stiff era (cf.
figure 3). Since the tensor modes within a fixed frequency range evolve like
radiation in the sub-Hubble limit (cf. eq. [A.6]), the overall stiff-amplified
primordial SGWB can therefore be well approximated by a radiation component
with a constant $\Delta N_{\rm eff}$, shortly after the onset of the stiff
era. As a result, we can replace eq. (2.7) in the exact dynamical system by
the following approximation:
$\sigma=-\frac{2\dot{H}}{3H^{2}}=\Omega_{\rm m}+\frac{4}{3}\,\Omega_{\rm
r}+2\,\Omega_{\rm s}+\frac{4}{3}\,\Omega_{\rm er},$ (C.1)
where $\Omega_{\rm er}$ is the energy fraction of this extra radiation
component. The simultaneous coupling between the primordial SGWB and the
background Universe can thus be approximated by the system of eqs. (2.6) and
(C.1), which is more efficient to solve than the exact system.
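Once the density parameters are specified, eq. (C.1) is cheap to evaluate, since $\sigma=\sum_i(1+w_i)\,\Omega_i$ with $w=0,\,1/3,\,1,\,1/3,\,-1$ for matter, radiation, the stiff component, the extra radiation and $\Lambda$. The sketch below uses illustrative parameter values (not the paper's fits) and recovers the limits $\sigma\to 2$ (stiff domination), $4/3$ (RD), $1$ (MD) and $0$ ($\Lambda$ domination).

```python
def sigma(a, om_m=0.31, om_r=9e-5, om_s=1e-18, om_er=1e-5):
    """Deceleration measure sigma = -2 Hdot / (3 H^2) from eq. (C.1), for a
    flat background with matter, radiation, a stiff fluid (w = 1), an
    extra-radiation term, and a cosmological constant filling the remainder.
    Density parameters are illustrative, not fitted values."""
    om_l = 1.0 - om_m - om_r - om_s - om_er   # flatness closes the budget
    rho = {  # energy densities in units of today's critical density
        "m":  om_m  * a**-3,
        "r":  om_r  * a**-4,
        "s":  om_s  * a**-6,   # stiff fluid redshifts fastest
        "er": om_er * a**-4,
        "l":  om_l,
    }
    tot = sum(rho.values())    # equals H^2 / H0^2
    # sigma = sum_i (1 + w_i) Omega_i; the Lambda term (w = -1) drops out
    return (rho["m"] + 4.0 / 3.0 * (rho["r"] + rho["er"]) + 2.0 * rho["s"]) / tot
```

This is exactly the quantity whose evolution, per eq. (2.6), controls the transfer of the tensor modes, so a fast $\sigma(a)$ is what makes the approximate system cheap.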
We solve this approximate system iteratively with an update on its present-day
boundary conditions for each iteration. Particularly, we calculate the
asymptotic value of $\Delta N_{\rm eff}$ associated with $\rho_{\rm GW}$ from
the last iteration, and then update the value of $H_{0}$ as part of the
boundary conditions for the next iteration, using the $H_{0}-N_{\rm eff}$
degeneracy relation, eq. (2.8). As a reminder, the sum $\Omega_{\rm
r,0}+\Omega_{\rm er,0}$ that appears as a parameter in eq. (C.1) is fixed in
our treatment so as to fix $z_{\rm eq}$ (cf. section 2.2). It needs no update
between iterations, therefore. We adopt the following convergence criterion
for the iterative scheme: the fractional difference between the asymptotic
values of $\Delta N_{\rm eff}$ from consecutive iterations must be less than
$10^{-3}$. Fortunately, only a few iterations are required either to
converge or to reach the conclusion that the adopted model parameters must be
excluded (i.e., the grey region in figures 4–7).
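The iteration just described has the shape of a simple fixed-point loop. The sketch below is schematic only: the expensive solve of eqs. (2.6) and (C.1), with $H_{0}$ updated through the $H_{0}-N_{\rm eff}$ degeneracy, is replaced by a hypothetical stand-in contraction whose fixed point is set to Model III's converged value, $\Delta N_{\rm eff}\approx 0.37$, purely to exercise the loop and the $10^{-3}$ stopping rule.

```python
import math

def solve_with_fixed_dneff(dneff_guess):
    """Stand-in for solving eqs. (2.6) + (C.1) with a constant Delta N_eff,
    returning the asymptotic Delta N_eff of the resulting SGWB.  This is a
    hypothetical contraction mapping with fixed point 0.37 (Model III's
    value), not the real solver."""
    return 0.37 + 0.05 * math.tanh(dneff_guess - 0.37)

def iterate_dneff(solver, dneff0=0.0, tol=1e-3, max_iter=20):
    """Fixed-point iteration of appendix C: re-solve with the updated
    Delta N_eff until its fractional change drops below tol."""
    old = solver(dneff0)
    for n in range(1, max_iter + 1):
        new = solver(old)
        if abs(new - old) / max(abs(old), 1e-12) < tol:
            return new, n
        old = new
    raise RuntimeError("no convergence: model parameters likely excluded")
```

With a contraction this mild, the loop terminates in a handful of iterations, consistent with the behavior reported in the text.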
Justification of our numerical scheme is illustrated in figure 11, using Model
III in table 1 as an example. The left panel demonstrates the consistency
between the exact model (with the primordial SGWB) and the corresponding
approximate model (with its converged value of the asymptotic $\Delta N_{\rm
eff}\approx 0.37$), in terms of $\sigma$. It shows that their relative
difference in $\sigma$ is less than $10^{-5}$ throughout the expansion
history. The right panel displays the evolution of $\Delta N_{\rm eff}$ in
both models. It confirms that whenever the backreaction is important, that is,
during the RD era, the value of $\Delta N_{\rm eff}$ from the approximate
model agrees with that from the exact model. In summary, the self-consistent
expansion history for the exact model can be faithfully mimicked by that from
the computationally-efficient, approximate model.
## Acknowledgments
BL acknowledges support from the National Key R&D Program of
China (grant No. 2018YFA0404502), the NSFC (grant No. 11821303), and the National SKA
Program of China (grant No. 2020SKA0110401). We thank Aaron Zimmerman, Kejia
Lee, Xingjiang Zhu and Paulo Montero-Camacho for valuable comments and
discussions, and thank the anonymous referee for constructive suggestions.
#### Note added.
After we submitted the paper, several recent works were brought to our
attention. In [94], a triangle-shaped SGWB energy spectrum similar to ours is
found; it is also due to the stiff-amplification effect, caused by the
kination phase of a scalar field. [95] realized that a full numerical solution
requires solving an integro-differential equation and developed an
_iterative_ algorithm with the same rationale as our solution here. Concerning
the Hubble tension, [96] also studied the possibility of reducing the
discrepancy with extra radiation species, deriving an $H_{0}-N_{\rm eff}$
degeneracy relation different from our eq. (2.8), based on a data-driven
view. Other attempts to resolve the Hubble tension include [97], which suggested
that current Type Ia supernova data may imply an evolutionary trend that
reduces the tension when extrapolated to the redshift of recombination.
## References
* [1] A.A. Starobinskiǐ, _Spectrum of relic gravitational radiation and the early state of the universe_ , _Soviet Journal of Experimental and Theoretical Physics Letters_ 30 (1979) 682.
* [2] V.A. Rubakov, M.V. Sazhin and A.V. Veryaskin, _Graviton creation in the inflationary universe and the grand unification scale_ , _Physics Letters B_ 115 (1982) 189.
* [3] L.F. Abbott and M.B. Wise, _Constraints on generalized inflationary cosmologies_ , _Nuclear Physics B_ 244 (1984) 541.
* [4] M. Maggiore, _Gravitational wave experiments and early universe cosmology_ , _Physics Reports_ 331 (2000) 283 [gr-qc/9909001].
* [5] P.D. Lasky, C.M.F. Mingarelli, T.L. Smith, J.T. Giblin, E. Thrane, D.J. Reardon et al., _Gravitational-Wave Cosmology across 29 Decades in Frequency_ , _Physical Review X_ 6 (2016) 011035 [1511.05994].
* [6] V.F. Shvartsman, _Density of relict particles with zero rest mass in the universe._ , _Soviet Journal of Experimental and Theoretical Physics Letters_ 9 (1969) 184.
* [7] B. Li, P.R. Shapiro and T. Rindler-Daller, _Bose-Einstein-condensed scalar field dark matter and the gravitational wave background from inflation: New cosmological constraints and its detectability by LIGO_ , _Phys. Rev. D_ 96 (2017) 063505 [1611.07961].
* [8] V.F. Mukhanov, H.A. Feldman and R.H. Brandenberger, _Theory of cosmological perturbations_ , _Physics Reports_ 215 (1992) 203.
* [9] S. Kuroyanagi, T. Chiba and T. Takahashi, _Probing the Universe through the stochastic gravitational wave background_ , _JCAP_ 2018 (2018) 038 [1807.00786].
* [10] L.P. Grishchuk, _Amplification of gravitational waves in an isotropic universe_ , _Zhurnal Eksperimentalnoi i Teoreticheskoi Fiziki_ 67 (1974) 825.
* [11] L.P. Grishchuk, _Graviton Creation in the Early Universe_ , in _Eighth Texas Symposium on Relativistic Astrophysics_ , M.D. Papagiannis, ed., vol. 302, p. 439, Dec., 1977, DOI.
* [12] A.A. Starobinsky, _A new type of isotropic cosmological models without singularity_ , _Physics Letters B_ 91 (1980) 99.
* [13] A.H. Guth, _Inflationary universe: A possible solution to the horizon and flatness problems_ , _Phys. Rev. D_ 23 (1981) 347.
* [14] A.D. Linde, _A new inflationary universe scenario: A possible solution of the horizon, flatness, homogeneity, isotropy and primordial monopole problems_ , _Physics Letters B_ 108 (1982) 389.
* [15] S. Weinberg, _Adiabatic modes in cosmology_ , _Phys. Rev. D_ 67 (2003) 123504 [astro-ph/0302326].
* [16] R.L. Davis, H.M. Hodges, G.F. Smoot, P.J. Steinhardt and M.S. Turner, _Cosmic microwave background probes models of inflation_ , _Phys. Rev. Lett._ 69 (1992) 1856 [astro-ph/9207001].
* [17] A.R. Liddle and D.H. Lyth, _COBE, gravitational waves, inflation and extended inflation_ , _Physics Letters B_ 291 (1992) 391 [astro-ph/9208007].
* [18] L.P. Grishchuk, _Quantum effects in cosmology_ , _Classical and Quantum Gravity_ 10 (1993) 2449 [gr-qc/9302036].
* [19] M.S. Turner, _Detectability of inflation-produced gravitational waves_ , _Phys. Rev. D_ 55 (1997) R435 [astro-ph/9607066].
* [20] L.P. Grishchuk and I.V. Sidorov, _LETTER TO THE EDITOR: On the quantum state of relic gravitons_ , _Classical and Quantum Gravity_ 6 (1989) L161.
* [21] L.P. Grishchuk and Y.V. Sidorov, _Squeezed quantum states of relic gravitons and primordial density fluctuations_ , _Phys. Rev. D_ 42 (1990) 3413.
* [22] M. Giovannini, _Gravitational wave constraints on post-inflationary phases stiffer than radiation_ , _Phys. Rev. D_ 58 (1998) 083504 [hep-ph/9806329].
* [23] P.J.E. Peebles and A. Vilenkin, _Quintessential inflation_ , _Phys. Rev. D_ 59 (1999) 063505 [astro-ph/9810509].
* [24] M. Giovannini, _Spikes in the relic graviton background from quintessential inflation_ , _Classical and Quantum Gravity_ 16 (1999) 2905 [hep-ph/9903263].
* [25] M. Giovannini, _Production and detection of relic gravitons in quintessential inflationary models_ , _Phys. Rev. D_ 60 (1999) 123511 [astro-ph/9903004].
  * [26] M. Giovannini, _Stochastic backgrounds of relic gravitons, T$\Lambda$CDM paradigm and the stiff ages_, _Physics Letters B_ 668 (2008) 44 [0807.1914].
* [27] L.A. Boyle and P.J. Steinhardt, _Probing the early universe with inflationary gravitational waves_ , _Phys. Rev. D_ 77 (2008) 063504 [astro-ph/0512014].
* [28] L.A. Boyle and A. Buonanno, _Relating gravitational wave constraints from primordial nucleosynthesis, pulsar timing, laser interferometers, and the CMB: Implications for the early universe_ , _Phys. Rev. D_ 78 (2008) 043531 [0708.2279].
* [29] S. Kuroyanagi, K. Nakayama and S. Saito, _Prospects for determination of thermal history after inflation with future gravitational wave detectors_ , _Phys. Rev. D_ 84 (2011) 123513 [1110.4169].
* [30] D.G. Figueroa and E.H. Tanin, _Ability of LIGO and LISA to probe the equation of state of the early Universe_ , _JCAP_ 2019 (2019) 011 [1905.11960].
* [31] B. Li, T. Rindler-Daller and P.R. Shapiro, _Cosmological constraints on Bose-Einstein-condensed scalar field dark matter_ , _Phys. Rev. D_ 89 (2014) 083536 [1310.6061].
* [32] M. Joyce, _Electroweak baryogenesis and the expansion rate of the Universe_ , _Phys. Rev. D_ 55 (1997) 1875 [hep-ph/9606223].
* [33] LIGO Scientific Collaboration, J. Aasi, B.P. Abbott, R. Abbott, T. Abbott, M.R. Abernathy et al., _Advanced LIGO_ , _Classical and Quantum Gravity_ 32 (2015) 074001 [1411.4547].
* [34] F. Acernese, M. Agathos, K. Agatsuma, D. Aisa, N. Allemandou, A. Allocca et al., _Advanced Virgo: a second-generation interferometric gravitational wave detector_ , _Classical and Quantum Gravity_ 32 (2015) 024001 [1408.3978].
* [35] P. Amaro-Seoane, H. Audley, S. Babak, J. Baker, E. Barausse, P. Bender et al., _Laser Interferometer Space Antenna_ , _arXiv e-prints_ (2017) arXiv:1702.00786 [1702.00786].
* [36] T.L. Smith, M. Kamionkowski and A. Cooray, _Direct detection of the inflationary gravitational-wave background_ , _Phys. Rev. D_ 73 (2006) 023504 [astro-ph/0506422].
* [37] P.D. Meerburg, R. Hložek, B. Hadzhiyska and J. Meyers, _Multiwavelength constraints on the inflationary consistency relation_ , _Phys. Rev. D_ 91 (2015) 103505 [1502.00302].
* [38] S. Detweiler, _Pulsar timing measurements and the search for gravitational waves_ , _Astrophys. J._ 234 (1979) 1100.
* [39] S. Burke-Spolaor, S.R. Taylor, M. Charisi, T. Dolch, J.S. Hazboun, A.M. Holgado et al., _The astrophysics of nanohertz gravitational waves_ , _A $\&$A Review_ 27 (2019) 5 [1811.08826].
* [40] Z. Arzoumanian, P.T. Baker, H. Blumer, B. Bécsy, A. Brazier, P.R. Brook et al., _The NANOGrav 12.5 yr Data Set: Search for an Isotropic Stochastic Gravitational-wave Background_ , _Astrophys. J. Lett._ 905 (2020) L34 [2009.04496].
* [41] H. Middleton, A. Sesana, S. Chen, A. Vecchio, W. Del Pozzo and P.A. Rosado, _Massive black hole binary systems and the NANOGrav 12.5 yr results_ , _Mon. Not. Roy. Astron. Soc._ 502 (2021) L99 [2011.01246].
* [42] J. Ellis and M. Lewicki, _Cosmic String Interpretation of NANOGrav Pulsar Timing Data_ , _Phys. Rev. Lett._ 126 (2021) 041304 [2009.06555].
* [43] S. Blasi, V. Brdar and K. Schmitz, _Has NANOGrav Found First Evidence for Cosmic Strings?_ , _Phys. Rev. Lett._ 126 (2021) 041305 [2009.06607].
* [44] N. Ramberg and L. Visinelli, _QCD axion and gravitational waves in light of NANOGrav results_ , _Phys. Rev. D_ 103 (2021) 063031 [2012.06882].
* [45] Z. Arzoumanian, P.T. Baker, H. Blumer, B. Bécsy, A. Brazier, P.R. Brook et al., _Searching For Gravitational Waves From Cosmological Phase Transitions With The NANOGrav 12.5-year dataset_ , _arXiv e-prints_ (2021) arXiv:2104.13930 [2104.13930].
* [46] Y. Nakai, M. Suzuki, F. Takahashi and M. Yamada, _Gravitational waves and dark radiation from dark phase transition: Connecting NANOGrav pulsar timing data and hubble tension_ , _Physics Letters B_ 816 (2021) 136238 [2009.09754].
* [47] A. Addazi, Y.-F. Cai, Q. Gan, A. Marciano and K. Zeng, _NANOGrav results and Dark First Order Phase Transitions_ , _arXiv e-prints_ (2020) arXiv:2009.10327 [2009.10327].
# Belief Propagation as Diffusion
Olivier Peltre
[email protected]
Université d’Artois, Faculté Jean Perrin (LML)
Rue Jean Souvraz 62307 LENS CEDEX
(2021)
###### Abstract
We introduce novel belief propagation algorithms to estimate the marginals of
a high dimensional probability distribution. They involve natural
(co)homological constructions relevant for a localised description of
statistical systems.
## Introduction
Message-passing algorithms such as belief propagation (BP) are parallel
computing schemes that try to estimate the marginals of a high dimensional
probability distribution. They are used in various areas involving the
statistics of a large number of interacting random variables, such as
computational thermodynamics [5, 10], artificial intelligence [11, 21, 15],
computer vision [18] and communications processing [3, 4].
We have shown the existence of a non-linear correspondence between BP
algorithms and discrete integrators of a new form of continuous-time diffusion
equations on belief networks [13, 14]. Practical contributions include (a)
regularised BP algorithms for any time step or diffusivity111 This coefficient
$\varepsilon$ would appear as an exponent of messages in the usual
multiplicative writing of BP equations. Diffusivity relates energy density
gradients to heat fluxes in physics, as in
$\vec{\varphi}=-\varepsilon\cdot\vec{\nabla}(u)$. coefficient
$0<\varepsilon<1$, and (b) a canonical Bethe diffusion flux that regularises
GBP messages by new Möbius inversion formulas in degree 1222 Generalised
belief propagation = BP on hypergraphs, see [21] for the algorithm. Our
algorithm 2 exponentiates their messages $m_{\alpha\beta}$ by the coefficients
$c_{\alpha}\in\mathbb{Z}$ appearing in the Bethe-Kikuchi local approximation
of free energy. .
The purpose of this text is to describe the structure of belief networks as
concisely as possible, with the geometric operations that appear in our
rewriting of BP equations. An open-source Python implementation, hosted on
GitHub at opeltre/topos, was also used to conduct benchmarks showing the
importance of choosing $\varepsilon<1$.
In the following, we denote by:
* $-$
$\Omega=\\{i,j,k,\dots\\}$ a finite set of indices (e.g. atoms, neurons,
pixels, bits …)
* $-$
$x_{i}$ the microstate of atom $i$, valued in a finite set $E_{i}$
* $-$
$x_{\Omega}$ the microstate of the global system, valued in
$E_{\Omega}=\prod_{i\in\Omega}E_{i}$
The statistical state of the system is described by a probability distribution
$p_{\Omega}$ on $E_{\Omega}$. We write
$\Delta_{\Omega}=\mathrm{Prob}(E_{\Omega})$ for the convex space of
statistical states.
## 1 Graphical Models
###### Definition 1.1.
A hypergraph $(\Omega,K)$ is a set of vertices $\Omega$ and a set of faces333
Also called hyperedges, or regions. A graph is a hypergraph with only
hyperedges of cardinality 2. A simplicial complex is a hypergraph such that
any subset of a face is also a face. A lattice is a hypergraph closed under
$\cap$ and $\cup$. We shall mostly be interested in semi-lattices, closed only
under intersection, of which simplicial complexes are a special case.
$K\subseteq\mathcal{P}(\Omega)$.
Let us denote by $x_{\alpha}$ the microstate of a face
$\alpha\subseteq\Omega$, valued in $E_{\alpha}=\prod_{i\in\alpha}E_{i}$.
For every $\beta\subseteq\alpha$ in $\mathcal{P}(\Omega)$, we have a canonical
projection or restriction444 The contravariant functor
$E:\mathcal{P}(\Omega)^{op}\to\mathbf{Set}$ of microstates defines a sheaf of
sets over $\Omega$. map:
$\pi^{\beta\alpha}:E_{\alpha}\to E_{\beta}$
We simply write $x_{\beta}$ for the restriction of $x_{\alpha}$ to a subface
$\beta$ of $\alpha$.
###### Definition 1.2.
A graphical model $p_{\Omega}\in\Delta_{\Omega}$ on the hypergraph
$(\Omega,K)$ is a positive probability distribution on $E_{\Omega}$ that
factorises as a product of positive local factors over faces:
$p_{\Omega}(x_{\Omega})=\frac{1}{Z_{\Omega}}\prod_{\alpha\in
K}f_{\alpha}(x_{\alpha})=\frac{1}{Z_{\Omega}}\operatorname{e}^{-\sum_{\alpha}h_{\alpha}(x_{\alpha})}$
We denote by $\Delta_{K}\subseteq\Delta_{\Omega}$ the subspace of graphical
models on $(\Omega,K)$.
Fig 1. Graphical model $p_{ijkl}(x_{ijkl})=f_{ijk}(x_{ijk})\cdot
f_{ikl}(x_{ikl})\cdot f_{jkl}(x_{jkl})$ with its factor graph representation
(middle) on a simplicial complex $K$ formed by joining 3 triangles at a common
vertex, called the 2-horn $\Lambda^{2}$ of the 3-simplex (left). The situation
is equivalent when $K$ is a three-fold covering of $\Omega$ by intersecting
regions $\alpha,\alpha^{\prime},\alpha^{\prime\prime}$ (right).
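The factorisation of Definition 1.2 can be written down concretely for the model of Fig. 1, with four binary atoms and one positive factor per triangle. The following minimal numpy sketch (factor values are arbitrary random data, not taken from the paper) builds the joint Gibbs density and its partition function by tensor contraction:

```python
import numpy as np

rng = np.random.default_rng(0)
# Four binary atoms i, j, k, l and one positive factor per face of Fig. 1
f_ijk = rng.random((2, 2, 2)) + 0.1
f_ikl = rng.random((2, 2, 2)) + 0.1
f_jkl = rng.random((2, 2, 2)) + 0.1

# Unnormalised Gibbs density on E_Omega = {0,1}^4, then its partition function
p = np.einsum('ijk,ikl,jkl->ijkl', f_ijk, f_ikl, f_jkl)
Z_omega = p.sum()
p /= Z_omega  # graphical model p_Omega in Delta_K
```

The exponential cost alluded to below is visible here: the joint tensor already has $2^{|\Omega|}$ entries, which is exactly what message-passing avoids materialising.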
A graphical model $p_{\Omega}$ for $(\Omega,K)$ is also called Gibbs state of
the associated energy function or hamiltonian
$H_{\Omega}:E_{\Omega}\to\mathbb{R}$:
$H_{\Omega}(x_{\Omega})=\sum_{\alpha\in K}h_{\alpha}(x_{\alpha})$
The normalisation factor of the Gibbs density $\operatorname{e}^{-H_{\Omega}}$
is computed by the partition function
$Z_{\Omega}=\sum_{x_{\Omega}}\operatorname{e}^{-H_{\Omega}(x_{\Omega})}$. The
free energy $F_{\Omega}=-\ln Z_{\Omega}$ and partition function generate most
relevant statistical quantities in their derivatives555 Letting $\mu_{H}$
denote the image by $H$ of the counting measure on microstates,
$Z^{\theta}_{\Omega}=\int_{\lambda\in\mathbb{R}}\operatorname{e}^{-\theta\lambda}\mu_{H}(d\lambda)$
is the Laplace transform of $\mu_{H}$ with respect to inverse temperature
$\theta=1/{k_{B}T}$. In [14] we more generally consider free energy as a
functional $A_{\Omega}\to\mathbb{R}$ whose differential at $H_{\Omega}\in
A_{\Omega}$ is the Gibbs state $p_{\Omega}\in A_{\Omega}^{*}$. They are,
however, not computable in practice, as the sum over microstates scales
exponentially with the number of atoms.
Message-passing algorithms rely on local structures induced by $K$ to estimate
marginals, providing an efficient alternative [5, 10] to Markov Chain
Monte Carlo methods such as Hinton’s contrastive divergence algorithm, commonly
used for training restricted Boltzmann machines [17, 15]. They are also
related to local variational principles involved in the estimation of
$F_{\Omega}$ [21, 13, 14] by Bethe approximation [2, 5].
We showed in [14] that message-passing explores a subspace of potentials
$(u_{\alpha})$ related to equivalent factorisations of $p_{\Omega}$, until an
associated collection of local probabilities $(q_{\alpha})$ is consistent. Two
fundamental operations constraining this non-linear correspondence are
introduced below. They consist of a differential $d$ associated to a
consistency constraint, and its adjoint boundary $\delta=d^{*}$ enforcing a
dual energy conservation constraint. These operators relate graphical models
to a statistical (co)homology theory, in addition to generating the BP
equations.
## 2 Marginal Consistency
In the following, we suppose given a hypergraph $(\Omega,K)$ closed under
intersection:
$\alpha\cap\beta\in K\quad\mathrm{\quad for\;all\quad}\quad\alpha,\beta\in K$
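This closure assumption is easy to enforce programmatically: starting from the maximal regions, repeatedly add pairwise intersections until the face set is stable. A small Python sketch (the helper name is ours, not the topos API), run on the three triangles of Fig. 1:

```python
from itertools import combinations

def intersection_closure(faces):
    """Close a set of faces under pairwise intersection (empty face excluded)."""
    K = set(map(frozenset, faces))
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(K), 2):
            cap = a & b
            if cap and cap not in K:
                K.add(cap)
                changed = True
    return K

# The three triangles of Fig. 1 generate the 2-horn together with
# its three edges ik, jk, kl and the central vertex k: 7 faces in total
K = intersection_closure([{'i', 'j', 'k'}, {'i', 'k', 'l'}, {'j', 'k', 'l'}])
```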
We denote by $\Delta_{\alpha}$ the space of probability distributions on
$E_{\alpha}$ for all $\alpha\in K$. Given a graphical model
$p_{\Omega}\in\Delta_{\Omega}$, the purpose of belief propagation algorithms
is to efficiently approximate the collection of true marginals
$p_{\alpha}\in\Delta_{\alpha}$ for $\alpha\in K$ by local beliefs
$q_{\alpha}\in\Delta_{\alpha}$, in a space $\Delta_{0}$ of dimension typically
much smaller than $\Delta_{\Omega}$.
###### Definition 2.1.
We call belief over $(\Omega,K)$ a collection $q\in\Delta_{0}$ of local
probabilities over faces, where:
$\Delta_{0}=\prod_{\alpha\in K}\Delta_{\alpha}$
###### Definition 2.2.
For every $\beta\subseteq\alpha$ the marginal or partial integration map
$\Sigma^{\beta\alpha}:\Delta_{\alpha}\to\Delta_{\beta}$ is defined by:
$\Sigma^{\beta\alpha}q_{\alpha}(x_{\beta})=\sum_{y\in
E_{\alpha\setminus\beta}}q_{\alpha}(x_{\beta},y)$
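The marginal map of Definition 2.2 is just a tensor contraction over the axes of the atoms in $\alpha\setminus\beta$. A minimal numpy sketch (the function name is illustrative, not the topos API):

```python
import numpy as np

def marginalise(q_alpha, alpha, beta):
    """Partial integration Sigma^{beta alpha}: sum out the atoms of alpha not in beta.

    alpha and beta are tuples of atom names indexing the tensor axes of q_alpha.
    """
    axes = tuple(i for i, a in enumerate(alpha) if a not in beta)
    return q_alpha.sum(axis=axes)

q_ijk = np.random.default_rng(1).random((2, 2, 2))
q_ijk /= q_ijk.sum()                                      # a belief on face (i, j, k)
q_ik = marginalise(q_ijk, ('i', 'j', 'k'), ('i', 'k'))    # sums out atom j
```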
###### Definition 2.3.
Consistent beliefs span the convex subset $\Gamma\subseteq\Delta_{0}$ defined
by marginal consistency constraints666 Equivalently, $\Gamma$ is the
projective limit of the functor $\Delta:K^{op}\to\mathbf{Top}$ defined by
local probabilities and marginal projections, or space of global sections of
the sheaf of topological spaces $\Delta$ over $(\Omega,K)$. :
$q_{\beta}=\Sigma^{\beta\alpha}(q_{\alpha})\quad\mathrm{\quad
for\;all\quad}\;\beta\subseteq\alpha$
The true marginals $(p_{\alpha})\in\Delta_{0}$ of a global density
$p_{\Omega}\in\Delta_{\Omega}$ are always consistent. However their symbolic
definition $p_{\alpha}=\Sigma^{\alpha\Omega}p_{\Omega}$ involves a sum over
fibers of $E_{\Omega\setminus\alpha}$, not tractable in practice. Message-
passing algorithms instead explore a parameterised family of beliefs
$q\in\Delta_{0}$ until meeting the consistency constraint surface
$\Gamma\subseteq\Delta_{0}$.
Let us denote by $A_{\alpha}^{*}$ the space of linear measures on $E_{\alpha}$
for all $\alpha\subseteq\Omega$, and by:
$\Sigma^{\beta\alpha}:A^{*}_{\alpha}\to A^{*}_{\beta}$
the partial integration map.
###### Definition 2.4.
We call $n$-density over $(\Omega,K)$ an element $\lambda\in A_{n}^{*}$ of
local measures indexed by ordered chains of faces, where:
$A_{n}^{*}=\prod_{\alpha_{0}\supset\dots\supset\alpha_{n}}A_{\alpha_{n}}^{*}$
The marginal consistency constraints are expressed by a differential
operator777 Cohomology sequences of this kind were considered by Grothendieck
and Verdier [19], see also [8]. $d$ on the graded vector space
$A_{\bullet}^{*}=\prod_{n}A_{n}^{*}$ of densities over $(\Omega,K)$:
$A_{0}^{*}\xrightarrow{\;d\;}A_{1}^{*}\xrightarrow{\;d\;}\dots\xrightarrow{\;d\;}A_{n}^{*}$
###### Definition 2.5.
The differential $d:A_{0}^{*}\to A_{1}^{*}$ acts on a density
$(\lambda_{\alpha})\in A_{0}^{*}$ by:
$d(\lambda)_{\alpha\beta}=\lambda_{\beta}-\Sigma^{\beta\alpha}\lambda_{\alpha}$
Consistent densities $\lambda\in[A_{0}^{*}]$ satisfy $d\lambda=0$, and are
called $0$-cocycles.
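Definition 2.5 can be computed componentwise, one residual tensor per ordered pair $\beta\subset\alpha$. In the sketch below (names are ours, not the topos API), the true marginals of a joint density form a $0$-cocycle, so $d$ vanishes on them:

```python
import numpy as np

def d(lam, chains):
    """Differential d : A_0* -> A_1*, one component per ordered pair beta < alpha.

    lam maps each face (a tuple of atom names) to a measure tensor; chains
    lists the pairs (alpha, beta) with beta a strict subface of alpha.
    """
    out = {}
    for alpha, beta in chains:
        axes = tuple(i for i, a in enumerate(alpha) if a not in beta)
        out[(alpha, beta)] = lam[beta] - lam[alpha].sum(axis=axes)
    return out

# True marginals of a joint density are consistent: d(lam) vanishes
q = np.random.default_rng(1).random((2, 2))
lam = {('i', 'j'): q, ('i',): q.sum(axis=1)}
residual = d(lam, [(('i', 'j'), ('i',))])
```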
The space of consistent beliefs $\Gamma\subseteq[A_{0}^{*}]$ is the
intersection of $\operatorname{Ker}(d)$ with $\Delta_{0}\subseteq A_{0}^{*}$.
True marginals define a convex map $\Delta_{\Omega}\to\Gamma$, restriction888
Note the image of $\Delta_{\Omega}$ inside $\Gamma$ can be a strict convex
polytope of $\Gamma$, and consistent $q\in\Gamma$ do not always admit a
positive preimage $q_{\Omega}\in\Delta_{\Omega}$ [20, 1]. of a linear
surjection $A^{*}_{\Omega}\to[A_{0}^{*}]$. Since consistent beliefs
$q\in\Gamma$ act as substitutes for global distributions
$p_{\Omega}\in\Delta_{\Omega}$, marginal diffusion iterates over a smooth
subspace of $\Delta_{0}$, diffeomorphic to the space of equivalent
parameterisations of a graphical model $p_{\Omega}$, until eventually reaching
$\Gamma$.
## 3 Energy Conservation
Graphical models parameterise a low dimensional subspace of $\Delta_{\Omega}$,
but definition 1.2 is not injective in the local factors $f_{\alpha}$ or local
potentials $u_{\alpha}=-\ln f_{\alpha}$. The fibers of this parameterisation
can be described linearly at the level of potentials, and correspond to
homology classes of the codifferential operator $\delta=d^{*}$.
We denote by $A_{\alpha}$ the algebra of real functions on $E_{\alpha}$ for
all $\alpha\subseteq\Omega$, and by:
$j_{\alpha\beta}:A_{\alpha}\to A_{\beta}$
the natural extension999 Functions on $E_{\beta}=\prod_{j\in\beta}E_{j}$ can
be viewed as functions on $E_{\alpha}=\prod_{i\in\alpha}E_{i}$ that do not
depend on the state $x_{i}$ for $i\in\alpha\setminus\beta$. Therefore $A_{\beta}$
is essentially a subspace of $A_{\alpha}$ and $j_{\alpha\beta}$ an inclusion.
of functions pulled from $E_{\beta}$ to $E_{\alpha}$ by the restriction
$x_{\alpha}\mapsto x_{\beta}$.
###### Definition 3.1.
We let $\delta=d^{*}$ denote the adjoint of $d$, defined by duality:
$A_{0}\xleftarrow{\;\delta\;}A_{1}\xleftarrow{\;\delta\;}\dots\xleftarrow{\;\delta\;}A_{n}$
###### Proposition 3.2.
The divergence $\delta:A_{1}\to A_{0}$ dual of $d:A_{0}^{*}\to A_{1}^{*}$,
acts on $\varphi\in A_{1}$ by:
$\delta(\varphi)_{\beta}=\sum_{\alpha\supseteq\beta}\varphi_{\alpha\beta}-\sum_{\gamma\subseteq\beta}j_{\beta\gamma}\varphi_{\beta\gamma}$
###### Proof.
Let $\lambda\in A_{0}^{*}$ and $\varphi\in A_{1}$. The duality bracket
$A_{0}^{*}\otimes A_{0}\to\mathbb{R}$ is naturally defined by sum of local
duality brackets $A_{\beta}^{*}\otimes A_{\beta}\to\mathbb{R}$, which
correspond to integration of local measures against observables:
$\langle\,\lambda\,|\,\delta\varphi\,\rangle=\sum_{\beta\in
K}\langle\,\lambda_{\beta}\,|\,\delta\varphi_{\beta}\,\rangle=\sum_{\beta\in
K}\sum_{x_{\beta}\in
E_{\beta}}\lambda_{\beta}(x_{\beta})\delta\varphi_{\beta}(x_{\beta})$
Substituting with the expression of $\delta\varphi$ we get101010 In this
substitution, we simply wrote $\varphi_{\beta\gamma}(x_{\gamma})$ for
$j_{\beta\gamma}(\varphi_{\beta\gamma})(x_{\beta})$, as
$j_{\beta\gamma}:A_{\gamma}\to A_{\beta}$ is an inclusion. :
$\begin{split}\langle\,\lambda\,|\,\delta\varphi\,\rangle&=\sum_{\beta\in
K}\>\sum_{x_{\beta}\in
E_{\beta}}\lambda_{\beta}(x_{\beta})\Big{(}\sum_{\alpha\supseteq\beta}\varphi_{\alpha\beta}(x_{\beta})-\sum_{\gamma\subseteq\beta}\varphi_{\beta\gamma}(x_{\gamma})\Big{)}\\\\[8.00003pt]
&=\sum_{\alpha\supseteq\beta}\>\sum_{x_{\beta}\in
E_{\beta}}\varphi_{\alpha\beta}(x_{\beta})\lambda_{\beta}(x_{\beta})-\sum_{\beta\supseteq\gamma}\>\sum_{x_{\gamma}\in
E_{\gamma}}\varphi_{\beta\gamma}(x_{\gamma})\sum_{y\in
E_{\beta\setminus\gamma}}\lambda_{\beta}(x_{\gamma},y)\end{split}$
The factorisation of the rightmost sum by $\varphi_{\beta\gamma}(x_{\gamma})$
reflects the duality of $\Sigma^{\beta\alpha}$ with $j_{\beta\gamma}$.
Relabeling summation indices $\beta\supseteq\gamma$ as $\alpha\supseteq\beta$,
we finally get:
$\sum_{\alpha\supseteq\beta}\langle\,\lambda_{\beta}\,|\,\varphi_{\alpha\beta}\,\rangle-\sum_{\beta\supseteq\gamma}\langle\,\Sigma^{\gamma\beta}\lambda_{\beta}\,|\,\varphi_{\beta\gamma}\,\rangle=\sum_{\alpha\supseteq\beta}\langle\,\lambda_{\beta}-\Sigma^{\beta\alpha}\lambda_{\alpha}\,|\,\varphi_{\alpha\beta}\,\rangle\\\
$
So that
$\langle\,\lambda\,|\,\delta\varphi\,\rangle=\langle\,d\lambda\,|\,\varphi\,\rangle$
for all $\lambda\in A_{0}^{*}$ and all $\varphi\in A_{1}$. ∎
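The adjunction $\langle\,\lambda\,|\,\delta\varphi\,\rangle=\langle\,d\lambda\,|\,\varphi\,\rangle$ is easy to verify numerically on the smallest non-trivial chain $\alpha=(i,j)\supset\beta=(i)$, where $\delta(\varphi)_{\beta}$ receives the flux and $\delta(\varphi)_{\alpha}$ loses its extension. A sketch with arbitrary random data (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
# One chain (alpha, beta) with alpha = (i, j), beta = (i,); binary atoms
lam = {('i', 'j'): rng.random((2, 2)), ('i',): rng.random(2)}
phi = rng.random(2)                      # flux phi_{alpha beta}, a function on E_beta

# d(lam)_{alpha beta} = lam_beta - marginal of lam_alpha
d_lam = lam[('i',)] - lam[('i', 'j')].sum(axis=1)

# delta(phi)_beta = +phi ; delta(phi)_alpha = -(extension of phi to E_alpha)
delta_phi = {('i',): phi,
             ('i', 'j'): -phi[:, None] * np.ones((2, 2))}

# Adjunction: <lambda | delta phi> = <d lambda | phi>
lhs = sum((lam[a] * delta_phi[a]).sum() for a in lam)
rhs = (d_lam * phi).sum()
```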
Consider the total energy map $\zeta_{\Omega}:A_{0}\to A_{\Omega}$ defined by:
$\zeta_{\Omega}(u)=\sum_{\alpha\in K}u_{\alpha}$
We have left injections $j_{\Omega\alpha}$ implicit, viewing each
$A_{\alpha}\subseteq A_{\Omega}$ as a subalgebra of $A_{\Omega}$. Denoting by
$A_{K}\subseteq A_{\Omega}$ the image of $\zeta_{\Omega}$, a graphical model
$p_{\Omega}\in\Delta_{K}$ is then associated to $u\in A_{0}$ by normalising
the Gibbs density $\operatorname{e}^{-\zeta_{\Omega}(u)}$, as in 1.2.
###### Theorem 3.3.
For all $u,u^{\prime}\in A_{0}$ the following are equivalent [14, Chapter 5]:
* $-$
conservation of total energy
$\sum_{\alpha}u^{\prime}_{\alpha}=\sum_{\alpha}u_{\alpha}$ in $A_{\Omega}$,
* $-$
there exists $\varphi\in A_{1}$ such that $u^{\prime}=u+\delta\varphi$ in
$A_{0}$.
Theorem 3.3 states that $\operatorname{Ker}(\zeta_{\Omega})$ coincides with
the image of the divergence $\delta A_{1}\subseteq A_{0}$. The subspace of
total energies $\operatorname{Im}(\zeta_{\Omega})\simeq
A_{0}/\operatorname{Ker}(\zeta_{\Omega})$ is therefore isomorphic to the
quotient $[A_{0}]=A_{0}/\delta A_{1}$, formed by homology classes of
potentials $[u]=u+\delta A_{1}\subseteq A_{0}$. Global observables of
$A_{K}\subseteq A_{\Omega}$ can thus be represented by equivalence classes of
local potentials in $[A_{0}]$, homology under $\delta$ giving a local
characterisation for the fibers of $\zeta_{\Omega}$.
## 4 Diffusions
The local approach to the marginal estimation problem, given
$p_{\Omega}=\frac{1}{Z_{\Omega}}\operatorname{e}^{-H_{\Omega}}$, consists of
using a low dimensional map $A_{0}\to\Delta_{0}$ as substitute for the high
dimensional parameterisation $A_{\Omega}\to\Delta_{\Omega}$, until parameters
$u\in A_{0}$ define a consistent belief $q\in\Gamma$ whose components
$q_{\alpha}\in\Delta_{\alpha}$ estimate the true marginals $p_{\alpha}$ of
$p_{\Omega}$.
$\begin{array}{ccccc}
\Delta_{\Omega} & \longrightarrow & \Gamma & \hookrightarrow & \Delta_{0}\\
\uparrow & & \uparrow & & \uparrow\\
A_{\Omega} & \hookleftarrow & [A_{0}] & \twoheadleftarrow & A_{0}
\end{array}$
Assume the hamiltonian is defined by $H_{\Omega}=\sum_{\alpha}h_{\alpha}$ for
given $h\in A_{0}$. According to theorem 3.3, parameters $u\in A_{0}$ will
define the same total energy if and only if:
$u=h+\delta\varphi$
for some heat flux $\varphi\in A_{1}$. The energy conservation
constraint $[u]=[h]$ therefore restricts parameters to fibers of the bottom-
right arrow in the above diagram. The rightmost arrow $A_{0}\to\Delta_{0}$ is
given by the equations:
$q_{\alpha}=\frac{1}{Z_{\alpha}}\operatorname{e}^{-U_{\alpha}}\quad\mathrm{\quad
where\quad}\quad U_{\alpha}=\sum_{\beta\subseteq\alpha}u_{\beta}$ (1)
The image of $[h]$ in $\Delta_{0}$ is a smooth non-linear manifold of
$\Delta_{0}\subseteq A_{0}^{*}$, which may intersect the convex polytope
$\Gamma=\operatorname{Ker}(d)\cap\Delta_{0}$ of consistent beliefs an unknown
number of times. Such consistent beliefs in $\Gamma\subseteq\Delta_{0}$ are
the fixed points of belief propagation algorithms. The central dashed vertical
arrow therefore represents what they try to compute, although no privileged
$q\in\Gamma$ may be defined from $[h]\in[A_{0}]$ in general.
###### Definition 4.1.
Given a flux functional $\Phi:A_{0}\to A_{1}$, we call diffusion associated to
$\Phi$ the vector field $\delta\Phi$ on $A_{0}$ defined by:
$\frac{du}{dt}=\delta\Phi(u)$ (2)
Letting $q\in\Delta_{0}$ be defined by (1), we say that $\Phi$ is consistent
if $q\in\Gamma\Rightarrow\Phi(u)=0$, and that $\Phi$ is faithful if it is
consistent and $\Phi(u)=0\Rightarrow q\in\Gamma$.
Consistent flux functionals $\Phi$ are constructed by composition with two
remarkable operators $\zeta:A_{0}\to A_{0}$, mapping potentials to local
hamiltonians $u\mapsto U$, and $\mathpzc{D}:A_{0}\to A_{1}$, a non-linear
analog of the differential $d:A_{0}^{*}\to A_{1}^{*}$, measuring inconsistency
of the local beliefs defined by $U\mapsto q$ in (1). The definition of
$\mathpzc{D}$ involves a conditional form of free energy
$\mathbb{F}^{\beta\alpha}:A_{\alpha}\to A_{\beta}$, which generates
conditional expectation maps with respect to local beliefs by
differentiation111111 The tangent map of $\mathpzc{D}$ in turn yields
differential operators $\nabla_{q}:A_{0}\to A_{1}\to\dots$ for all
$q\in\Gamma$, whose kernels characterise tangent fibers ${\rm T}_{q}\Gamma$
pulled by the non-linear parameterisation (1), see [14, Chapter 6]. .
###### Definition 4.2.
We call effective energy the smooth map
$\mathbb{F}^{\beta\alpha}:A_{\alpha}\to A_{\beta}$ defined by:
$\mathbb{F}^{\beta\alpha}(U_{\alpha}\;|\;x_{\beta})=-\ln\sum_{y\in
E_{\alpha\setminus\beta}}\operatorname{e}^{-U_{\alpha}(x_{\beta},y)}$
and effective energy gradient the smooth map $\mathpzc{D}:A_{0}\to A_{1}$
defined by:
$\mathpzc{D}(U)_{\alpha\beta}=U_{\beta}-\mathbb{F}^{\beta\alpha}(U_{\alpha})$
Letting $q=\operatorname{e}^{-U}$ denote local Gibbs densities, note that
$q\in\Gamma\Leftrightarrow\mathpzc{D}(U)=0$ by:
$\mathpzc{D}(U)_{\alpha\beta}=\ln\bigg{[}\>\frac{\Sigma^{\beta\alpha}q_{\alpha}}{q_{\beta}}\>\bigg{]}$
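Both maps of Definition 4.2 amount to a stabilised log-sum-exp; the sketch below (illustrative names, not the topos API) computes $\mathpzc{D}(U)_{\alpha\beta}$ on a single chain and checks the identity with the local Gibbs densities stated above:

```python
import numpy as np

def effective_energy(U_alpha, axes):
    """F^{beta alpha}(U_alpha | x_beta) = -ln sum_y exp(-U_alpha(x_beta, y)),
    the sum running over the tensor axes of the atoms in alpha minus beta."""
    m = U_alpha.min(axis=axes, keepdims=True)          # stabilised log-sum-exp
    s = np.exp(-(U_alpha - m)).sum(axis=axes)
    return np.squeeze(m, axis=axes) - np.log(s)

rng = np.random.default_rng(3)
U_ij, U_i = rng.random((2, 2)), rng.random(2)

# Effective energy gradient D(U)_{alpha beta} = U_beta - F^{beta alpha}(U_alpha)
D = U_i - effective_energy(U_ij, axes=(1,))

# The same quantity through the local Gibbs densities q = exp(-U)
q_ij, q_i = np.exp(-U_ij), np.exp(-U_i)
identity = np.log(q_ij.sum(axis=1) / q_i)
```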
The map $u\mapsto U$ is a fundamental automorphism $\zeta$ of $A_{0}$,
inherited from the partial order structure of $K$. Möbius inversion formulas
define its inverse $\mu=\zeta^{-1}$ [16, 7, 14]. We have extended $\zeta$ and
$\mu$ to automorphisms on the full complex $A_{\bullet}$ in [14, Chapter 3],
in particular, $\zeta$ and $\mu$ also act naturally on $A_{1}$.
###### Definition 4.3.
The zeta transform $\zeta:A_{0}\to A_{0}$ is defined by:
$\zeta(u)_{\alpha}=\sum_{\beta\subseteq\alpha}u_{\beta}$
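On the poset of faces, $\zeta$ is a subset sum and its Möbius inverse $\mu=\zeta^{-1}$ follows by recursion from the smallest faces up. A toy Python version over scalar potentials (the actual operators act on tensors in $A_{\alpha}$; the face set below is the intersection-closed 2-horn of Fig. 1):

```python
import numpy as np

# Faces of the closed hypergraph; scalar potentials for simplicity
K = [frozenset(s) for s in ('ijk', 'ikl', 'jkl', 'ik', 'jk', 'kl', 'k')]
rng = np.random.default_rng(5)
u = {a: rng.random() for a in K}

def zeta(u):
    """zeta(u)_alpha = sum of u_beta over faces beta contained in alpha."""
    return {a: sum(u[b] for b in u if b <= a) for a in u}

def mobius(U):
    """Inverse of zeta, computed recursively from the smallest faces up."""
    out = {}
    for a in sorted(U, key=len):
        out[a] = U[a] - sum(out[b] for b in out if b < a)
    return out

U = zeta(u)  # local hamiltonians
```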
The flux functional $\Phi=-\mathpzc{D}\circ\zeta$ is consistent and faithful
[14], meaning that $\delta\Phi$ is stationary at $u\in A_{0}$ if and only if the
associated beliefs $q\in\Delta_{0}$ are consistent. This flux functional
yields the GBP equations of algorithm A (up to the normalisation step of line
3, which keeps beliefs normalised). It may however not be optimal.
We propose another flux functional $\phi=-\mu\circ\mathpzc{D}\circ\zeta$ by
degree-1 Möbius inversion on heat fluxes in algorithm B. It is remarkable that
the associated diffusion $\delta\phi$ involves only the coefficients
$c_{\alpha}\in\mathbb{Z}$ originally used by Bethe [2] to estimate the free
energy of statistical systems close to their critical temperature. These
coefficients also appear in the cluster variational problem [5, 9, 12] on free
energy, solved by fixed points of belief propagation and diffusion algorithms
[14, 21].
It remains open whether fixed points of Bethe diffusion are always consistent.
We were only able to prove this in a neighbourhood of the consistent manifold,
a property we called local faithfulness of the Bethe diffusion flux $\phi$,
see [14, Chapter 5]. Faithfulness proofs are non-trivial and we conjecture the
global faithfulness of $\phi$.
###### Definition 4.4.
The Bethe numbers $(c_{\alpha})\in\mathbb{Z}^{K}$ are uniquely defined by the
equations:
$\sum_{\alpha\supseteq\beta}c_{\alpha}=1\mathrm{\quad for\>\;all\quad}\beta\in
K$
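On an intersection-closed $K$, these equations determine the $c_{\alpha}$ recursively from the maximal faces down, since $c_{\beta}=1-\sum_{\alpha\supset\beta}c_{\alpha}$. A small sketch on the faces of the 2-horn of Fig. 1 (the triangles get $c=1$, the shared edges $c=-1$, the central vertex $c=1$):

```python
K = [frozenset(s) for s in ('ijk', 'ikl', 'jkl', 'ik', 'jk', 'kl', 'k')]

def bethe_numbers(K):
    """Solve sum_{alpha >= beta} c_alpha = 1, largest faces first."""
    c = {}
    for a in sorted(K, key=len, reverse=True):
        c[a] = 1 - sum(c[b] for b in c if b > a)
    return c

c = bethe_numbers(K)
```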
Algorithms. GBP and Bethe Diffusions121212 Note the normalisation operation
$U_{\alpha}\leftarrow U_{\alpha}+\ln Z_{\alpha}$ line 3 in A. It is replaced
by line 4 in B, which takes care of harmonising normalisation factors by
eliminating redundancies in $\Phi$. The arrows $U_{\alpha}\leftarrow\dots$
suggest $\tt map$ operations that may be efficiently parallelised through
asynchronous streams, by locality of the associated operators
$\zeta,\mathpzc{D},\delta\dots$. Each stream performs local operations over
tensors in $A_{\alpha}$, whose dimensions depend on the cardinality of local
configuration spaces $E_{\alpha}=\prod_{i\in\alpha}E_{i}$. .
Input: potential ${\tt u}\in A_{0}$; diffusivity $\varepsilon>0$; number of
iterations $\tt n_{it}$.
Output: belief ${\tt q}\in\Delta_{0}$.
A. ${\rm GBP}$ $\varepsilon$-diffusion
1: for $\tt i=0\dots n_{it}$ do
2: $\quad\tt U_{\alpha}\leftarrow\zeta(u)_{\alpha}$
3: $\quad\tt U_{\alpha}\leftarrow U_{\alpha}+\ln\Sigma\operatorname{e}^{-U_{\alpha}}$
4: $\quad\Phi_{\alpha\beta}\tt\leftarrow-\mathpzc{D}(U)_{\alpha\beta}$
5: $\quad{\tt u_{\alpha}\leftarrow u_{\alpha}}+\varepsilon\cdot\delta(\Phi)_{\alpha}$
6: end for
7: $\tt q_{\alpha}\leftarrow\operatorname{e}^{-U_{\alpha}}$
8: return ${\tt q}$
B. Bethe $\varepsilon$-diffusion
1: for $\tt i=0\dots n_{it}$ do
2: $\quad\tt U_{\alpha}\leftarrow\zeta(u)_{\alpha}$
3: $\quad\Phi_{\alpha\beta}\tt\leftarrow-\mathpzc{D}(U)_{\alpha\beta}$
4: $\quad\phi_{\alpha\beta}\leftarrow{\tt c_{\alpha}}\cdot\Phi_{\alpha\beta}$
5: $\quad\tt u_{\alpha}\leftarrow u_{\alpha}+\varepsilon\cdot\delta(\phi)_{\alpha}$
6: end for
7: $\tt q_{\alpha}\leftarrow\operatorname{e}^{-U_{\alpha}}$
8: return ${\tt q}$
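As a sanity check, algorithm B can be run end-to-end on the smallest intersection-closed example, a single chain $\alpha=(i,j)\supset\beta=(i)$ with binary atoms, where $\zeta$, $\mathpzc{D}$ and $\delta$ each reduce to one operation. The numpy sketch below is ours, not the topos implementation; on this two-face poset the Bethe numbers are $c_{ij}=1$, $c_{i}=0$:

```python
import numpy as np

rng = np.random.default_rng(6)
u = {'ij': rng.standard_normal((2, 2)), 'i': rng.standard_normal(2)}
c = {'ij': 1, 'i': 0}                 # Bethe numbers of this two-face poset
eps, n_it = 0.5, 10

for _ in range(n_it):
    # zeta transform: U_alpha = sum of potentials u_beta over beta <= alpha
    U = {'ij': u['ij'] + u['i'][:, None], 'i': u['i']}
    # flux Phi = -D(U): effective energy of U_ij minus U_i
    F = -np.log(np.exp(-U['ij']).sum(axis=1))
    phi = c['ij'] * (F - U['i'])      # degree-1 Moebius step of algorithm B
    # divergence delta(phi): beta receives the flux, alpha loses its extension
    u['i'] = u['i'] + eps * phi
    u['ij'] = u['ij'] - eps * phi[:, None]

U = {'ij': u['ij'] + u['i'][:, None], 'i': u['i']}
q = {a: np.exp(-U[a]) / np.exp(-U[a]).sum() for a in U}
```

On this toy poset the total energy $U_{ij}$ is invariant and the inconsistency $\mathpzc{D}(U)$ contracts by a factor $(1-\varepsilon)$ per iteration, so after a few steps the marginal of $q_{ij}$ agrees with $q_{i}$.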
Both algorithms are discrete Euler integrators, with time step $\varepsilon$,
of diffusion equations of the form (2), for two different flux functionals.
Generalised belief propagation (GBP) is usually expressed multiplicatively for
$\varepsilon=1$ in terms of beliefs
$q_{\alpha}=\frac{1}{Z_{\alpha}}\operatorname{e}^{-U_{\alpha}}$ and messages
$m_{\alpha\beta}=\operatorname{e}^{-\varphi_{\alpha\beta}}$. A choice of
$\varepsilon<1$ would appear as an exponent in the product of messages by this
substitution. This is different from damping techniques [6] and has not been
previously considered to our knowledge.
Bethe numbers $c_{\alpha}$ would also appear as exponents of messages in the
multiplicative formulation of algorithm B. The combinatorial regularisation
offered by Bethe numbers stabilises divergent oscillations in non-constant
directions on hypergraphs, improving convergence of GBP diffusion at higher
diffusivities. When $K$ is a graph, the two algorithms are actually
equivalent, so that Bethe numbers only regularise normalisation factors in the
degree $\geq 2$ case.
Fig 2. Convergence of GBP and Bethe diffusions for different values of
diffusivity $0<\varepsilon<1$ and energy scales on the 2-horn, depicted in
figure 1. Both diffusions almost surely diverge for diffusivities
$\varepsilon\geq 1$, so that the usual GBP algorithm is not represented in
this table.
Figure 2 shows the results of experiments conducted on the simplest hypergraph
$K$ for which GBP does not always converge to the unique solution $q\in[u]\cap
A_{0}^{\Gamma}$: the horn $\Lambda^{2}$ depicted in figure 1. Initial
potentials $u\in A_{0}$ were normally sampled according to
$h_{\alpha}(x_{\alpha})\sim\frac{1}{T}{\cal N}(0,1)$ at different temperatures
or energy scales $T>0$. For each value of $T$ and each fixed diffusivity
$\varepsilon>0$, the GBP and Bethe diffusion algorithms were run on random
initial conditions for ${\tt n_{it}}=10$ iterations. Consistency of the
returned beliefs, if any, was assessed via the effective gradient $\Phi$ to
produce the represented decay ratios. Diffusivity was then increased until the
drop in Bethe diffusion convergence, which occurs significantly later than for
GBP diffusion but still before $\varepsilon$ reaches 1, reflecting the
importance of using finer integrators than the usual $\varepsilon=1$ belief
propagation algorithms.
The discretised diffusion $(1+\varepsilon\delta\Phi)^{n}$ may be compared to
the approximate integration of $\exp(-n\varepsilon x)$ as $(1-\varepsilon
x)^{n}$, which should only be done under the constraint $\varepsilon|x|<1$.
Assuming all eigenvalues of the linearised diffusion flow $\delta\Phi_{*}$ are
negative (as is the case in the neighbourhood of a stable potential), one
should still ensure $\varepsilon|\delta\Phi_{*}|<1$ to confidently estimate
the large time asymptotics of diffusion as
$\exp(n\varepsilon\delta\Phi)\simeq(1+\varepsilon\delta\Phi)^{n}$ and reach
$\Gamma$.
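The stability constraint above can be checked numerically. The following sketch (plain NumPy; the scalar $a$ is an illustrative stand-in for an eigenvalue of $-\delta\Phi_{*}$) compares the discrete iterate $(1-\varepsilon a)^{n}$ with the exact decay $\exp(-n\varepsilon a)$ inside and outside the regime $\varepsilon|a|<1$.

```python
import numpy as np

# Toy check of the stability constraint: for the scalar flow x' = -a*x,
# the exact decay after n steps of size eps is exp(-n*eps*a), while the
# explicit Euler iterate is (1 - eps*a)**n.  The iterate tracks the
# exponential only while eps*|a| < 1, mirroring the diffusivity bound
# discussed above.
def euler_vs_exact(a, eps, n):
    exact = np.exp(-n * eps * a)
    discrete = (1.0 - eps * a) ** n
    return discrete, exact

# Stable regime: eps*a = 0.1, the iterate stays close to the exact decay.
d, e = euler_vs_exact(a=1.0, eps=0.1, n=50)
assert abs(d - e) < 0.05

# Unstable regime: eps*a = 2 > 1, the iterate oscillates with unit
# magnitude instead of decaying to ~0.
d_bad, e_bad = euler_vs_exact(a=2.0, eps=1.0, n=50)
assert abs(d_bad) >= 1.0 and e_bad < 1e-10
```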
An open-source Python implementation of the above algorithms is available on
GitHub at opeltre/topos; it includes the (co)chain complex $A_{\bullet}(K)$
for arbitrary hypergraphs $K$, Bethe numbers, the Bethe entropy and free
energy functionals, and other operations for designing marginal estimation
algorithms.
## References
* [1] S. Abramsky and A. Brandenburger, The Sheaf-theoretic structure of non-locality and contextuality, New Journal of Physics, 13 (2011).
* [2] H. A. Bethe and W. L. Bragg, Statistical Theory of Superlattices, Proceedings of the Royal Society of London. Series A - Mathematical and Physical Sciences, 150 (1935), pp. 552–575.
* [3] R. G. Gallager, Low-Density Parity-Check Codes, MIT Press, 1963.
* [4] C. Jego and W. J. Gross, Turbo Decoding of Product Codes Using Adaptive Belief Propagation, IEEE Transactions on Communications, 57 (2009).
* [5] R. Kikuchi, A Theory of Cooperative Phenomena, Phys. Rev., 81 (1951), pp. 988–1003.
* [6] C. Knoll and F. Pernkopf, On Loopy Belief Propagation – Local Stability Analysis for Non-Vanishing Fields, in Uncertainty in Artificial Intelligence, 2017.
* [7] T. Leinster, The Euler Characteristic of a Category, Documenta Mathematica, 13 (2008), pp. 21–49.
  * [8] I. Moerdijk, Classifying Spaces and Classifying Topoi, Springer, 1995.
* [9] T. Morita, Cluster Variation Method of Cooperative Phenomena and its Generalization I, Journal of the Physical Society of Japan, 12 (1957), pp. 753–755.
* [10] M. Mézard and A. Montanari, Information, Physics and Computation, Oxford University Press, 2009.
  * [11] J. Pearl, Reverend Bayes on Inference Engines: A Distributed Hierarchical Approach, in AAAI-82 Proceedings, 1982.
  * [12] A. Pelizzola, Cluster variation method in statistical physics and probabilistic graphical models, Journal of Physics A: Mathematical and General, 38 (2005).
* [13] O. Peltre, A Homological Approach to Belief Propagation and Bethe Approximations, in Geometric Science of Information, 4th International Conference GSI 2019, Springer, 2019.
  * [14] O. Peltre, Message-Passing Algorithms and Homology, PhD preprint, arXiv:2009.11631, 2020.
* [15] W. Ping and A. Ihler, Belief Propagation in Conditional RBMs for Structured Prediction, in Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, vol. 54, 2017, pp. 1141–1149.
* [16] G.-C. Rota, On the Foundations of Combinatorial Theory - I. Theory of Möbius Functions, Z. Warscheinlichkeitstheorie, 2 (1964), pp. 340–368.
* [17] R. Salakhutdinov and G. Hinton, Deep Boltzmann Machines, in Proceedings of the Twelth International Conference on Artificial Intelligence and Statistics, D. van Dyk and M. Welling, eds., vol. 5 of Proceedings of Machine Learning Research, 2009, pp. 448–455.
* [18] J. Sun, N.-N. Zheng, and H.-Y. Shum, Stereo Matching Using Belief Propagation, IEEE Transactions on Pattern Analysis and Machine Intelligence, 25 (2003).
* [19] J.-L. Verdier and A. Grothendieck, V: Cohomologie dans les Topos, SGA-4, 2 (1972).
* [20] N. Vorob’ev, Consistent Families of Measures and their Extensions, Theory of Probability and its Applications, 7 (1962), pp. 147–164.
* [21] J. Yedidia, W. Freeman, and Y. Weiss, Constructing Free Energy Approximations and Generalized Belief Propagation Algorithms, IEEE Transactions on Information Theory, 51 (2005), pp. 2282–2312.
|
arxiv-papers
| 2021-07-26T14:17:26 |
2024-09-04T03:07:18.814787
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Olivier Peltre",
"submitter": "Olivier Peltre",
"url": "https://arxiv.org/abs/2107.12230"
}
|
2107.12232
|
# Sub-second Temporal Magnetic Field Microscopy Using Quantum Defects in
Diamond
Madhur Parashar Information Processing Laboratory, Department of Electronics
and Electrical Communication Engineering, Indian Institute of Technology
Kharagpur, Kharagpur, West Bengal, India -721302 School of Medical Science
and Technology, Indian Institute of Technology Kharagpur, Kharagpur, West
Bengal, India - 721302 Anuj Bathla Department of Electrical Engineering,
Indian Institute of Technology Bombay, Mumbai, Maharashtra, India-400076
Centre for Research in Nanotechnology and Science, Indian Institute of
Technology Bombay, Mumbai, Maharashtra, India-400076 Dasika Shishir
Department of Electrical Engineering, Indian Institute of Technology Bombay,
Mumbai, Maharashtra, India-400076 Alok Gokhale Department of Electrical
Engineering, Indian Institute of Technology Bombay, Mumbai, Maharashtra,
India-400076 Sharba Bandyopadhyay Information Processing Laboratory,
Department of Electronics and Electrical Communication Engineering, Indian
Institute of Technology Kharagpur, Kharagpur, West Bengal, India -721302
Kasturi Saha Department of Electrical Engineering, Indian Institute of
Technology Bombay, Mumbai, Maharashtra, India-400076 [email protected]
###### Abstract
Wide field-of-view magnetic field microscopy has been realised by probing
shifts in optically detected magnetic resonance (ODMR) spectrum of Nitrogen
Vacancy (NV) defect centers in diamond. However, these widefield diamond NV
magnetometers require few to several minutes of acquisition to get a single
magnetic field image, rendering the technique temporally static in it’s
current form. This limitation prevents application of diamond NV magnetometers
to novel imaging of dynamically varying microscale magnetic field processes.
Here, we show that the magnetic field imaging frame rate can be significantly
enhanced by performing lock-in detection of NV photo-luminescence (PL),
simultaneously over multiple pixels of a lock-in camera. A detailed protocol
for synchronization of frequency modulated PL of NV centers with fast camera
frame demodulation at frequencies of a few kilohertz, has been experimentally
demonstrated. This experimental technique allows magnetic field imaging of
sub-second varying microscale currents in planar microcoils with imaging frame
rates in the range of 50 to 200 frames per second (fps). Our work demonstrates
that widefield per-pixel lock-in detection of frequency modulated NV ODMR
enables dynamic magnetic field microscopy.
## 1 Introduction
The past decade has seen a revolution in high-resolution diffraction-limited
microscale and wide field-of-view magnetometry based on optically detected
magnetic resonance (ODMR) imaging of Nitrogen Vacancy (NV) defect centers in
diamond [1, 2, 3, 4, 5, 6]. These room-temperature ultra-sensitive diamond NV
magnetometers [7, 8, 9, 10] have enabled a new class of magnetic field
microscopy, for example - probing magnetic particles in living cells [3, 11],
imaging fluid-like current flow in graphene [6, 12], microscopy of novel
quantum materials [13] and rapidly evolving other applications [14, 15, 16,
17]. In diamond NV-based widefield magnetic field (WMF) imaging, red photo-
luminescence (PL) emitted from a microscale volume of NV centers is collected
and imaged on to a conventional scientific CMOS or CCD camera. Microwave (MW)
resonant frequencies applied to NV centers create changes in NV fluorescence
and the precise estimation or tracking of these resonant MW frequencies yields
a 2D microscale magnetic field map. The changes in magnetic field experienced
by small microscopic volumes of NVs in the diamond crystal get mapped to
corresponding pixels on the camera pixel array. However, magnetic field images
acquired by this method have remained temporally static in nature, demanding
a few to several minutes of acquisition time for each image frame [4, 11, 6].
Inherently low NV ensemble resonance contrast and division of informative NV
light onto thousands to millions of pixels significantly decrease per-pixel
signal-to-noise ratio (SNR) and consequently the magnetic field sensitivity.
NV imaging frame rate for DC to low-frequency magnetometry is fundamentally
limited by the NV’s optical re-polarization rate i.e. $\sim$
$1\text{\,}\mathrm{MHz}$. However, practical SNR bounds have limited imaging
frame rates to primarily static magnetic field maps. Development of high-
spatially-resolved and high-frame-rate imaging capabilities will enable new
applications of NV centers to investigate processes like vortex dynamics in
superconductors [18], estimating fluctuating magnetic fields from quantum
materials [13], magnetic nano-particle motion in living cells [11, 19] and
imaging mammalian action potential associated magnetic fields [20, 21, 22,
23].
Detection of weak signals embedded in noise hinges on smart techniques such as
the lock-in amplification method, wherein a near-DC or slowly varying signal,
mainly submerged in $1/f$ noise, can be periodically modulated and filtered
from a narrow band while the noise spanning a large bandwidth can be
eliminated leading to significant improvement in signal-to-noise ratio. Pico-
Newton scale resolution in atomic force microscopy [24] and high sensitivity
magnetometry in SQUIDs and atomic magnetometers [25] are testament to this
detection methodology. With the advent of lock-in cameras [26], parallel per-
pixel lock-in detection of optical light can be performed over many pixels. In
contrast to conventional cameras, the lock-in cameras require synchronized
external triggers to perform light integration over specific time windows for
each pixel. Intensity measured during these externally timed windows can be
used to subtract DC components and estimate the frequency content of the
optical signal. With these high frame rate lock-in cameras, new improvements
have been observed in techniques where light can be frequency or phase
modulated, e.g., deep tissue optical coherence tomography (OCT) [27] and
ultrasound-modulated OCT [28] and other avenues [29, 30]. NV’s emitted light
can be frequency modulated by microwave control of NV resonance [31].
Frequency modulated optically detected magnetic resonance (fm-ODMR) schemes
for NVs have been used for real-time single point (SP) bulk magnetometry [32,
33, 20, 34, 22], where total emitted NV light is collected onto a single
photodetector and also for boosting DC-magnetic field sensitivity. A prior
work on camera review [35] has also suggested potential application of high-
frame rate lock-in camera to perform real-time NV imaging.
In this work, we demonstrate a novel per-pixel lock-in detection protocol that
enables dynamic millisecond scale magnetic field imaging in wide-field using
NV centers in diamond. The paper describes a procedure for synchronizing
camera frames of a commercial lock-in camera (Heliotis Helicam C3 [36]) with
NV microwave modulation to obtain fm-ODMR across thousands of pixels. Post
calibration of noise statistics and magnetic field sensitivity across
different pixels, we measured a median
$731\text{\,}\mathrm{n}\mathrm{T}\mathrm{/}\sqrt{\mathrm{Hz}}$ sensitivity per
pixel. To demonstrate spatially and temporally resolved magnetometry, we
perform imaging of microscale magnetic fields produced by current flow in two
different samples fabricated using e-beam lithography: first, a
$10\text{\,}\mathrm{\SIUnitSymbolMicro m}$ track width gold (Au) microwire
with a $90\text{\,}\mathrm{\SIUnitSymbolDegree}$ bend and second, a square-
spiral planar microcoil of $10\text{\,}\mathrm{\SIUnitSymbolMicro m}$ track
width and full dimensions of $100\text{\,}\mathrm{\SIUnitSymbolMicro m}$
$\times$ $125\text{\,}\mathrm{\SIUnitSymbolMicro m}$. We show dynamic
widefield magnetic field images obtained by probing periodically varying
current flow in the above samples at near 1 Hz, 20 Hz and 50 Hz magnetic field
variations. Multi-pixel fluorescence time traces, scaled to magnetic field
values by NV resonance parameters, show expected magnetic field tracking.
These sub-second temporal magnetic field images are enabled by fast NV imaging
frame rates of 50 to 200 frames per second (fps). To further demonstrate a
general application of temporally varying magnetic fields, we show
millisecond-scale magnetic field images of current flow in the microcoil from
an arbitrary current waveform of varying amplitude and rapid inversion of
current direction where the entire event duration is
$\approx$$150\text{\,}\mathrm{ms}$. We discuss the coupling of imaging frame
rates and per-pixel SNR to the NV’s modulation frequency and the number of
signal averaging cycles. Our experimental results demonstrate that frequency-
locked widefield imaging of NV emitted light enables dynamic widefield
magnetic field imaging at frame rates ranging from 50 to 200 fps. Recent work
towards dynamic NV widefield imaging [37, 38] employs more advanced microwave
pulse sequences based on double-quantum protocols to significantly reduce the
heterogeneity in resonant frequencies across the imaging field of view, which
enables high-sensitivity magnetic field imaging. In contrast, our results
demonstrate high imaging frame rates with a relatively simple protocol using a
single resonant MW frequency; the approach is potentially relevant for a wide
variety of NV-based imaging applications and could be further improved by
homogenising the resonant frequencies across the field of view. The scope of
the work demonstrated in this paper is not limited
to just imaging single crystalline diamonds, but can also be extended to
perform improved temporal imaging of nanodiamonds in cellular environments
[39, 40, 41].
## 2 Experimental Methods
### 2.1 Magnetic Resonance in Nitrogen Vacancy Defects in Diamond
Negatively charged nitrogen-vacancy defect centers consist of a substitutional
nitrogen atom adjacent to a lattice vacancy in the diamond crystal, carrying
an overall negative charge. Due to their unique electronic properties [9],
these centers are sensitive to changes in the external environment such as
magnetic field, electric field, strain and temperature. The
ground state is a spin-triplet with $m_{s}=0$ and a doubly degenerate
$m_{s}=+1$ and $m_{s}=-1$ in the absence of magnetic field with a zero field
splitting of $2.87\text{\,}\mathrm{GHz}$. The degeneracy of $m_{s}=+1$ and
$m_{s}=-1$ is lifted by Zeeman splitting in the presence of an external
magnetic field. Transitions to the excited state are spin-conserving;
however, the relaxation from the excited triplet state takes two paths: a
radiative spin-conserving path and a non-radiative decay via intersystem
crossings (ISCs). The
radiative decay produces broadband red photo-luminescence with the zero-phonon
line centered at $637\text{\,}\mathrm{nm}$. The non-radiative ISCs are highly
spin-selective towards the $m_{s}=0$ spin sublevel. Therefore, continuous
optical excitation leads to electron spin polarization. Neglecting the
hyperfine interaction between the nuclear spin of the nitrogen atom and the NV’s
electronic spin, the ground state NV Hamiltonian is given by
$H=hDS_{z}^{2}+hE\left(S_{x}^{2}-S_{y}^{2}\right)+g\mu_{B}B\cdot S,$ (1)
where $h$ is Planck’s constant, $D$ is the zero-field splitting, $\mu_{B}$ is
the Bohr magneton, $g$ is the NV electron g-factor, $E$ is the strain- and
electric-field-induced splitting parameter, and the last term is the Zeeman
term, with $B$ the externally applied magnetic field. $S_{x},S_{y},S_{z}$ are
the spin-1 operators. In the weak-field regime where
$\mathrm{B}_{\perp}\ll\mathrm{B}_{\|}$, the electron spin resonance
frequencies are given by
$\nu_{\pm}\left(B_{NV}\right)=D\pm\sqrt{\left(\frac{g\mu_{B}}{h}B_{NV}\right)^{2}+E^{2}}$
(2)
where, $B_{NV}$ is the component of the applied field parallel to the NV axis.
For cases where applied bias field is high enough to neglect the $E$ term, the
electron spin resonance frequencies vary linearly with the applied magnetic
field. Such a regime is ideal for sensitive magnetometry with diamond NV
centers.
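As a quick numerical check of Eq. (2), the sketch below evaluates the two resonance frequencies as a function of the axial field; the g-factor and the strain parameter $E$ are illustrative values assumed here, not the experimental calibration.

```python
import numpy as np

# Numerical check of Eq. (2).  SI constants; the g-factor and the strain
# parameter E are illustrative assumptions, not the paper's calibration.
h = 6.62607015e-34       # Planck constant (J*s)
mu_B = 9.2740100783e-24  # Bohr magneton (J/T)
g = 2.003                # NV electron g-factor (approximate)
D = 2.870e9              # zero-field splitting (Hz)
E = 5.0e6                # strain/electric-field splitting (Hz, assumed)

def resonances(B_nv):
    """Return (nu_minus, nu_plus) in Hz for the axial field B_nv in tesla."""
    zeeman = g * mu_B * B_nv / h
    split = np.sqrt(zeeman**2 + E**2)
    return D - split, D + split

# At 1 mT bias the Zeeman term (~28 MHz) dominates E, so the resonances
# shift nearly linearly at ~28 GHz/T (i.e. 2.8 MHz/G).
nu_m, nu_p = resonances(1e-3)
```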
Figure 1: Schematic of the experimental setup and protocol for data
acquisition: (a) Schematic describing the experimental setup for single
photodiode diamond NV magnetometry (SP) or widefield per pixel lock-in diamond
nitrogen-vacancy magnetometry (WMF) (b) Illustration explaining generation of
frequency modulated NV emitted red light by applying frequency shift key type
microwave resonant frequencies. The applied microwave resonant frequencies
shuttle between $\omega$ and $\omega-\omega_{dev}$ in sync with square wave
waveform of frequency $\omega_{mod}$. When the microwave frequencies are
resonant, the emitted NV red light is frequency modulated at $\omega_{mod}$
(c) Pulse protocol to control and synchronize the demodulation of internal
camera frames with modulation frequency of optical signal to obtain lock-in
in-phase (I) and quadrature (Q) images. A green laser illumination at 532-nm
is continuously on and frequency shift key microwave (MW) waveform with
modulation $\omega_{mod}$ is applied. Lock-in camera external trigger pulses,
controlling internal frame acquisition timings, are provided at
$2\omega_{mod}$, synced with MW modulation, where they define 4 quarters for
light integration $S_{1}S_{2}S_{3}S_{4}$. These four quarters of light
integration allow in-phase ($S_{1}-S_{3}$) and quadrature ($S_{2}-S_{4}$)
estimation of the optical signal and are averaged over $N$ cycles to give a
single pair of in-phase and quadrature images (IQ Frame).
### 2.2 Experimental Setup
Fig. 1(a) is an illustration of the experimental setup used to perform diamond
NV magnetometry. A non-resonant green light excitation at
$532\text{\,}\mathrm{nm}$ (Sprout Laser) is used to illuminate NV centers via
a $100\times$ objective (Olympus, MLPNFLN series). The excitation beam is
focused on the back focal plane of the objective to obtain $\sim$
$200\text{\,}\mathrm{\SIUnitSymbolMicro m}$ diameter spot size on the NV
layer. Optical power impinging the objective back aperture is
$\sim$$1.5\text{\,}\mathrm{W}$. We use an isotopically pure diamond crystal
(procured from Element Six) of lateral dimensions
$4.5\text{\,}\mathrm{mm}$$\times$ $4.5\text{\,}\mathrm{mm}$ and
$500\text{\,}\mathrm{\SIUnitSymbolMicro m}$ thick with a thin
$1\text{\,}\mathrm{\SIUnitSymbolMicro m}$ NV- implanted layer of 1-2 ppm NV-
concentration. The emitted light from NV centers is collected via the same
objective, filtered to select the red light (above $567\text{\,}\mathrm{nm}$)
and reject the green excitation light at $532\text{\,}\mathrm{nm}$ using a notch
stop filter (SEMROCK NF03-532E-25). The collected light is focused onto a
widefield lock-in camera (Heliotis Helicam C3) to perform widefield
magnetometry. The diamond sample is mounted on a microwave loop PCB, and
associated microwave electronics are used to deliver amplified microwave
frequencies in the range $\sim$ 2.5-3.2 $\text{\,}\mathrm{GHz}$. The applied
microwave frequencies follow frequency shift keying waveforms with square-wave
envelopes. The camera imaging frames are synchronized with the microwave
modulation with specific pulse sequences generated by a high-speed TTL pulse
generator card (SpinCore PulseBlaster ESR-PRO $500\text{\,}\mathrm{MHz}$).
Samarium-Cobalt (Sm-Co) ring magnets are used for applying a bias magnetic
field and have not been shown in the experimental schematic. Two microscale
conductive samples, a $\sim$$10\text{\,}\mathrm{\SIUnitSymbolMicro m}$ track-
width microwire and a $\sim$$10\text{\,}\mathrm{\SIUnitSymbolMicro m}$ track
width planar spiral microcoil, were fabricated on two independent silicon
substrates. Sample patterning was done using e-beam lithography followed by
$100\text{\,}\mathrm{nm}$ thick deposition of Ti/Au. Typical resistances of
these structures were found to be $\sim$
$400\text{\,}\mathrm{\SIUnitSymbolOhm}$. These samples are mounted on a custom
PCB and wire-bonded to supply the drive voltage. The entire assembly was then
glued with Loctite (cyanoacrylate glue) to the diamond crystal. Due to the
back-focal-plane focusing, the $100\times$ objective used in this work suffered
damage at the center of the imaging field of view (FOV) because of the high
optical intensity localized in a very small area. Consequently, a small number
of pixels in the FOV center have zero or minimal ODMR response and can be
observed as a small blank hole in all magnetometry related images (for example
see Fig. 3(a) and Fig. 4 (c),(f),(g)).
### 2.3 NV Frequency Modulation and Synchronization of Lock-in Camera
The generation of modulated NV light with frequency $\omega_{mod}$ in the fm-
ODMR protocol is shown via the schematic shown in Fig. 1(b). In a frequency
shift keying waveform, two microwave frequencies $\omega$ and
$\omega-\omega_{dev}$ are delivered via the MW resonator, where they shuttle
between each other with the square wave waveform of frequency $\omega_{mod}$.
For each MW frequency, the NV fluorescence settles to a steady-state value,
given by the NV’s resonance curve at the applied MW frequency. To measure the
amplitude of modulated NV PL, we perform lock-in detection of the collected
light at reference frequency $\omega_{mod}$. For the rest of the article, by
referring to the ’modulation frequency’ of NVs, we also mean the ’reference
frequency’ of the lock-in camera.
To synchronize the applied MW waveform with the camera’s internal frames, an
external reference signal of $2\omega_{mod}$, carefully synced to MW
modulation reference at $\omega_{mod}$, is provided to the camera’s external
trigger input, see Fig. 1(c). This TTL signal, of twice the modulation
frequency, defines the four quarter periods of the sensor light integration
whose values are denoted by $S_{1},S_{2},S_{3},S_{4}$. The in-phase signal is
$S_{1}-S_{3}$ and the quadrature signal $S_{2}-S_{4}$. Additionally, as shown
in the schematic Fig. 1(c), each cycle of demodulation is internally averaged
$N$ times to provide a pair of 2D images containing in-phase (I) and
quadrature (Q) values for each pixel. Therefore, to get a single 2D IQ image,
the total time is $N/(2\omega_{mod})$, which sets the imaging frame rate.
Further, since the NV signal scales with $\omega_{mod}$, different imaging
frame rates have different SNR, as discussed later. The lock-in camera is
limited to frame rates of $3.2\text{\,}\mathrm{kHz}$ and a maximum
$250\text{\,}\mathrm{kHz}$ signal demodulation.
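The four-quarter accumulation described above can be emulated in a few lines. This is a toy model of the demodulation arithmetic (I $=S_{1}-S_{3}$, Q $=S_{2}-S_{4}$, averaged over $N$ cycles), not the camera's API, and all signal parameters below are invented for illustration.

```python
import numpy as np

# Toy model of the four-quarter demodulation (not the camera API): each
# modulation period is split into quarters S1..S4; I = S1 - S3 and
# Q = S2 - S4 are accumulated and averaged over N cycles.
def iq_demodulate(signal, samples_per_period, n_cycles):
    q = samples_per_period // 4
    I = Q = 0.0
    for c in range(n_cycles):
        p = signal[c * samples_per_period:(c + 1) * samples_per_period]
        S1, S2, S3, S4 = (p[i * q:(i + 1) * q].sum() for i in range(4))
        I += S1 - S3
        Q += S2 - S4
    return I / n_cycles, Q / n_cycles

# DC offset plus a square wave at the modulation frequency: the DC level
# cancels exactly in I and Q; with this phase alignment the square wave
# spans quarters S1 and S2, so the modulation shows up in both outputs.
spp, N = 100, 8
t = np.arange(spp * N)
mod = np.where((t % spp) < spp // 2, 1.0, -1.0)  # square-wave reference
signal = 10.0 + 0.5 * mod                        # DC + modulated PL
I, Q = iq_demodulate(signal, spp, N)             # I = Q = 25.0 here
```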
## 3 Results
### 3.1 Optically Detected Magnetic Resonance of Multiple Pixels
Figure 2: Frequency modulated optically detected magnetic resonance spectrum
(ODMR) of multiple pixels: (a) A 2D array of $300\times 300$ pixels have been
concatenated into a 1D array of pixels and their magnetic resonance responses
have been color coded. We observe 8 NV resonant frequencies across multiple
pixels, with each resonance feature further split into two peaks due to $^{15}$N
hyperfine transitions. (b) Three randomly chosen pixels are used to
demonstrate individual pixel ODMR response. The baseline of the pixels,
centered at 0, has been shifted to represent them in the same plot. (c)
Example pixel ODMR response data recorded at $6.25\text{\,}\mathrm{kHz}$
modulation frequency and 122 frame averaging cycles. Each red dot represents
data at a single microwave frequency and the black curve represent non-linear
Lorentzian-derivative curve fit (d) Example pixel ODMR response data recorded
at $8.33\text{\,}\mathrm{kHz}$ modulation frequency and 82 frame averaging
cycles. Each red dot represents data at a single microwave frequency and the
black curve represents non-linear Lorentzian-derivative curve-fit. Reduced
ODMR zero-crossing slope can be observed at faster modulation frequencies.
Optically detected resonance spectrum of an ensemble of NV centers
corresponding to each pixel on the Helicam C3 Array is shown in Fig. 2(a). A
2D array of camera pixels have been concatenated into a 1D vector of pixels
and their lock-in ODMR response across multiple microwave excitation
frequencies have been color-coded. Three randomly selected pixel’s individual
ODMR traces have been shown in Fig. 2(b). For each pixel, the NV response
curve can be described by a Lorentzian function,
$f(\omega)=A\left[1-\frac{C}{\left[1+\left(\frac{\omega-\omega_{0}}{\Gamma}\right)^{2}\right]}\right],$
(3)
where $A,C,\Gamma,\omega,\omega_{0}$ denote baseline PL, contrast, the
linewidth of resonance, applied MW frequency, and resonant MW frequency of the
NV center, respectively. The lock-in signal is proportional to the derivative of
the NV ODMR response curve given in Eq. (3). The derivative of the response
curve with an added baseline term was used to fit the lock-in ODMR response,
with examples shown in Fig. 2(c) and (d). To highlight the importance of NV
modulation frequency and frame averaging, two examples of ODMR traces (Fig.
2(c) and Fig. 2(d)) acquired at different NV modulation frequencies
($6.25\text{\,}\mathrm{kHz}$ and $8.33\text{\,}\mathrm{kHz}$) and frame
averaging cycles (122 cycles and 82 cycles respectively) are shown. The slope
at the zero-crossing point of the fm-ODMR response curve along with the noise
floor are critical factors that determine the magnetic field sensitivity of
individual pixels. In agreement with previous studies [31], we observe reduced
zero-crossing slope at higher modulation frequency due to reduced NV
interaction time with the resonant microwave frequencies, oscillating between
$\omega$ and $\omega-\omega_{dev}$. The camera pixel readout noise grows with
the square root of the number of demodulation cycles (Helicam C3 datasheet). This
factor introduces a trade-off between the NV response signal and the noise
floor for different parameters. Further, the imaging frame rate depends on the
ratio of the modulation frequency to the number of averaging cycles (see
Methods, Helicam C3 synchronization), and hence is coupled to the SNR of the
NV’s ODMR
response.
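The per-pixel fitting step can be sketched as follows: a derivative-of-Lorentzian model (the derivative of Eq. (3) with an added baseline) is fitted to a synthetic fm-ODMR trace using scipy.optimize.curve_fit. All parameter values are synthetic, chosen only to mimic the scales involved; frequencies are expressed in MHz for numerical conditioning.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the per-pixel fit.  The model is the derivative of the
# Lorentzian in Eq. (3) with an added baseline b; amp collapses the
# product A*C, which the lock-in signal cannot separate on its own.
def lorentzian_derivative(w, amp, gamma, w0, b):
    x = (w - w0) / gamma
    return b - amp * 2.0 * x / (gamma * (1.0 + x**2) ** 2)

rng = np.random.default_rng(0)
w = np.linspace(2850.0, 2890.0, 400)                  # MW sweep (MHz)
y = lorentzian_derivative(w, 0.02, 8.0, 2870.0, 0.0)  # synthetic trace
y += rng.normal(0.0, 1e-5, w.size)                    # per-pixel noise

popt, _ = curve_fit(lorentzian_derivative, w, y,
                    p0=(0.01, 5.0, 2868.0, 0.0))
w0_fit = popt[2]  # fitted resonant frequency -> one pixel of the field map
```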
### 3.2 Magnetic Field Sensitivity and Static Imaging
Figure 3: Per-pixel sensitivity: (a) Measured 2D map of sensitivity of all
responsive pixels. Due to the Gaussian nature of the beam spot, the SNR drops
in the outer periphery of the field-of-view(FOV). Pixels with sensitivity
better than $3\text{\,}\mathrm{\SIUnitSymbolMicro}\mathrm{T}\mathrm{/}\sqrt{\mathrm{Hz}}$
have been included. Some pixels at the center
of FOV are non-responsive due to a damage in the objective. (b) Histogram of
sensitivity of all responding pixels, with median sensitivity of
$731\text{\,}\mathrm{n}\mathrm{T}\mathrm{/}\sqrt{Hz}$.
As evident from the two example ODMR traces at different acquisition rates,
the noise statistics and fm-ODMR signal of pixels can vary significantly with
varying image acquisition parameters. Typically, the sensitivity of a sensor
is defined as the ratio of the measurement uncertainty to the maximum slope,
i.e. the operating point of the sensor where the smallest perturbation in the
input creates the maximal change in the output.
Specifically, for fm-ODMR the slope is maximum at the zero-crossing of the
lock-in output, also corresponding to the resonant frequency of NV centers.
Therefore, the magnetic field sensitivity is defined as:
$\eta=\frac{\sigma\sqrt{\tau}}{\left.\frac{dV_{\text{lock
}}}{df}\right|_{V_{\text{lock }}=0,f=\omega_{\text{res }}}}$ (4)
where $\sigma$ is the standard deviation of the measurement (voltage for the
lock-in amplifier, or arbitrary units for the camera), $\tau$ is the
measurement time of the signal, and $f$ is the applied MW frequency. The
denominator denotes the slope at the resonant frequency $\omega_{res}$.
To acquire the $\sigma$ for individual pixels, sixty imaging frames were
acquired and the mean and standard deviation of each pixel’s intensity were
recorded. Example noise spectra of WMF pixels as a function of lock-in
modulation frequency are shown in Supplementary Fig. S1(b),(c), along with a
typical $1/f$ noise spectrum of a single-photodiode (SP) lock-in measurement
(Supplementary Fig. S1(a)). The WMF $\sigma$ spectrum for most
pixels remained approximately flat, as compared to the SP $\sigma$ spectrum,
between modulation frequencies of 3-100 $\text{\,}\mathrm{kHz}$, with mean
value of 1.95 units (out of 10-bit 1024 point scale) for all pixels in Fig.
S1(c). Since the minimum possible camera modulation frequency is
$2.2\text{\,}\mathrm{kHz}$, most of the low-frequency noise is eliminated in
the WMF noise spectrum. For WMF imaging experiments
$\tau=(1/\omega_{mod})\,n_{cyc}$, where $n_{cyc}$ is the number of
frame-averaging cycles. To measure the zero-crossing slope, an ODMR spectrum is
measured with a frequency resolution of $100\text{\,}\mathrm{kHz}$. The
slope at zero-crossing for each pixel is then obtained by non-linear curve
fitting and the corresponding 2D sensitivity map is shown in Fig. 3(a),
depicting a spatial variation of pixel sensitivity by evaluating Eq. (4) for
each pixel. The pixel response mimics the excitation profile. As expected,
pixels with high response fall within the central region of the FOV, and
pixels with low or no response fall towards the outer periphery of the NV PL
intensity profile. The distribution of per-pixel sensitivity is shown in
Fig.3(b) where a median pixel sensitivity of
$731\text{\,}\mathrm{n}\mathrm{T}\mathrm{/}\sqrt{\mathrm{Hz}}$ is observed.
Only pixels with sensitivity better than
$3\text{\,}\mathrm{\SIUnitSymbolMicro}\mathrm{T}\mathrm{/}\sqrt{\mathrm{Hz}}$
have been considered due to the low ODMR response at the outer periphery of
the beam. Additionally, before the curve fitting for each pixel, a selection
threshold was applied to select pixels with a minimum threshold level of fm-
ODMR response (see Supplementary notes, per-pixel raw data processing) and
only the responding pixels were further analyzed.
Figure 4: Static magnetic field images of the microwire and the microcoil
sample: (a) Color microscope image of the U shaped microwire sample. Microwire
track width is $10\text{\,}\mathrm{\SIUnitSymbolMicro m}$. Scale Bar
$200\text{\,}\mathrm{\SIUnitSymbolMicro m}$. Inset shows the
$90\text{\,}\mathrm{\SIUnitSymbolDegree}$ bend feature which has been imaged.
(b) Simulation of single NV-axis magnetic field map of the
$90\text{\,}\mathrm{\SIUnitSymbolDegree}$ bend feature of the microwire, at a
standoff $13\text{\,}\mathrm{\SIUnitSymbolMicro m}$ and current
$2.4\text{\,}\mathrm{mA}$. Scale bar $40\text{\,}\mathrm{\SIUnitSymbolMicro
m}$. Black square indicates the approximate NV magnetic field imaging field of
view location. (c) Experimentally measured magnetic field image of the
microwire with $2.4\text{\,}\mathrm{mA}$ current flow, about the same NV axis
as shown in simulation. Scale bar $27\text{\,}\mathrm{\SIUnitSymbolMicro m}$.
(d) Color microscope image of the microcoil sample with metal track width
$10\text{\,}\mathrm{\SIUnitSymbolMicro m}$ and overall dimensions
$100\text{\,}\mathrm{\SIUnitSymbolMicro m}$$\times$
$125\text{\,}\mathrm{\SIUnitSymbolMicro m}$ . Scale bar
$50\text{\,}\mathrm{\SIUnitSymbolMicro m}$. (e) Simulation of the single NV-
axis magnetic field map of the microcoil, at standoff
$14\text{\,}\mathrm{\SIUnitSymbolMicro m}$ and
$500\text{\,}\mathrm{\SIUnitSymbolMicro A}$ current flow. Sample geometry,
translucent gray lines, has been scaled to simulation field image and overlaid
for easy comprehension of the current flow path. Scale Bar
$40\text{\,}\mathrm{\SIUnitSymbolMicro m}$. (f) Experimentally obtained single
axis magnetic field image of the microcoil for positive direction
$500\text{\,}\mathrm{\SIUnitSymbolMicro A}$ current flow, about the same NV
axis as shown in simulation. Scale Bar $34\text{\,}\mathrm{\SIUnitSymbolMicro
m}$. (g) Experimentally obtained single axis magnetic field image of the
microcoil for negative direction $500\text{\,}\mathrm{\SIUnitSymbolMicro A}$
current flow, about the same NV axis as shown in simulation. Scale Bar
$34\text{\,}\mathrm{\SIUnitSymbolMicro m}$.
Spatial and temporal resolutions are inherently coupled in diamond NV
microscopy. We verify magnetic field image formation with static acquisition
(5-10 minutes) for two microscale samples, one
$10\text{\,}\mathrm{\SIUnitSymbolMicro m}$ track width microwire and one
spiral microcoil of $10\text{\,}\mathrm{\SIUnitSymbolMicro m}$ track width and
overall dimensions of $100\text{\,}\mathrm{\SIUnitSymbolMicro m}$$\times$
$125\text{\,}\mathrm{\SIUnitSymbolMicro m}$ , as described earlier in methods.
The two sample images are shown in Fig. 4(a) and Fig. 4(d). Magnetic field
images, projected onto a single NV axis, of these samples were formed by
$100\text{\,}\mathrm{kHz}$ step size sampling of the NV resonance, non-linear parameter fits for individual pixels, and subsequent determination of a map of resonant frequencies over the 2D array of pixels. The resonant frequency maps of these samples were acquired with the DC current both on and off, and subtracted to extract the sample magnetic field from the linear shifts in the resonant
frequencies. Single NV axis magnetic field images of these samples (Fig. 4(c)
microwire and Fig. 4(f),(g) microcoil) were in agreement with simulated
magnetic field images obtained using COMSOL Multiphysics (Fig. 4(b) for the microwire and Fig. 4(e) for the microcoil) at an
estimated standoff of $\sim$ $13\text{\,}\mathrm{\SIUnitSymbolMicro m}$ for
the microwire and $\sim$$14\text{\,}\mathrm{\SIUnitSymbolMicro m}$ for the
microcoil.
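The static-imaging pipeline just described — stepped sampling of the NV resonance, a non-linear fit per pixel, and subtraction of current-on and current-off resonance maps — can be sketched as below. This is a minimal NumPy/SciPy illustration under stated assumptions (a Lorentzian dip model, frequencies in kHz); the function names are hypothetical and not from the actual analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

GAMMA = 28.0  # kHz per microtesla, the NV gyromagnetic ratio used in the text

def lorentzian(f, f0, width, contrast, offset):
    """Lorentzian dip model for a single NV resonance (assumed line shape)."""
    return offset - contrast / (1.0 + ((f - f0) / width) ** 2)

def resonance_map(freqs_khz, stack):
    """Fit a Lorentzian per pixel; stack has shape (n_freqs, ny, nx).
    Returns a (ny, nx) map of fitted resonance centres in kHz."""
    n_f, ny, nx = stack.shape
    f0_map = np.zeros((ny, nx))
    for iy in range(ny):
        for ix in range(nx):
            trace = stack[:, iy, ix]
            # crude initial guess: dip position, width, depth, baseline
            guess = (freqs_khz[np.argmin(trace)], 500.0,
                     trace.max() - trace.min(), trace.max())
            popt, _ = curve_fit(lorentzian, freqs_khz, trace, p0=guess)
            f0_map[iy, ix] = popt[0]
    return f0_map

def field_image(f0_on, f0_off):
    """Current-on minus current-off resonance maps -> field in microtesla."""
    return (f0_on - f0_off) / GAMMA
```

Subtracting the two fitted maps cancels pixel-to-pixel heterogeneity (strain, bias-field gradients), leaving only the current-induced shift.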
The $\textbf{B}_{NV}$ field is measured by resonant frequency shifts on either
side of a reference bias field resonant frequency. Therefore, on inverting the
current direction in the sample we observed an inverted contrast in the
magnetic field image of the microcoil sample as shown in Fig. 4(f) for
arbitrarily defined positive current and Fig. 4(g) for negative current, which
further affirms that the magnetic field images obtained are from the
microscale current flow in the sample. Additionally, static acquisition allows
for quantification of the field of view, the spatial resolution of the imaging
setup and the effective magnification. The imaging field of view, with
sufficient NV resonance SNR, is $\sim$ $150\text{\,}\mathrm{\SIUnitSymbolMicro
m}$ $\times$ $150\text{\,}\mathrm{\SIUnitSymbolMicro m}$ (Fig. 3) and is
limited by the excitation beam spot size on the NV layer and the total optical
power of the Gaussian excitation $532\text{\,}\mathrm{nm}$ beam in our
experimental setup (with $\sim$$1.5\text{\,}\mathrm{W}$ entering the objective
back aperture). We estimated the spatial resolution to be
$1.7\text{\,}\mathrm{\SIUnitSymbolMicro m}$ per camera pixel (see
Supplementary note for pixel resolution estimation method) during microcoil
measurements and $1.33\text{\,}\mathrm{\SIUnitSymbolMicro m}$ per camera pixel
during microwire measurements. Spatial resolution slightly differs in the two
measurements due to minor differences in positioning of a focusing plano-
convex lens in the red emitted light collection path to incorporate a larger
field of view for the microcoil. Consequently, corresponding effective
magnifications were 30X for microwire measurements and 23.5X for microcoil
measurements in our widefield microscope. While we acquire only single NV axis
magnetic field static and dynamic images in this study, we show that the
microcoil sample’s vector magnetic field can be reliably reconstructed
(Supplementary Fig. S2) from single NV axis magnetic field images by well
established Fourier reconstruction methods [42, 4].
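The Fourier reconstruction referenced above [42, 4] can be sketched in a few lines. This is a hedged NumPy illustration assuming a planar measurement with all current sources below the measurement plane, so that the in-plane components are related to $B_z$ in k-space by standard upward-continuation factors; the function name and grid conventions are mine, not from the original analysis.

```python
import numpy as np

def reconstruct_vector(b_nv, u, dx):
    """Reconstruct (Bx, By, Bz) from a single-axis map b_nv measured along
    unit vector u = (ux, uy, uz), assuming all sources lie below the
    measurement plane. dx is the pixel size."""
    ny, nx = b_nv.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    K = np.hypot(KX, KY)
    K[0, 0] = 1.0  # avoid division by zero; DC component handled below
    # In k-space: bx = -i kx/k * bz and by = -i ky/k * bz for a half-space,
    # so the projection along u is b_nv = [uz - i (ux kx + uy ky)/k] * bz.
    factor = u[2] - 1j * (u[0] * KX + u[1] * KY) / K
    bz_k = np.fft.fft2(b_nv) / factor
    bz_k[0, 0] = 0.0  # zero-mean assumption for the unrecoverable DC term
    bx_k = -1j * KX / K * bz_k
    by_k = -1j * KY / K * bz_k
    to_real = lambda a: np.real(np.fft.ifft2(a))
    return to_real(bx_k), to_real(by_k), to_real(bz_k)
```

Projecting the reconstructed vector back along u recovers the mean-subtracted input map, a useful self-consistency check on the implementation.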
### 3.3 Dynamic Widefield Magnetic Imaging
In this section we describe the acquisition of millisecond scale widefield
magnetic field images. To perform real-time imaging, the applied microwave
frequency is fixed to a specific NV resonant frequency along one NV axis. An
externally applied magnetic field causes a linear shift in pixel intensity,
proportional to the zero-crossing NV slope. Therefore, tracking the pixel
intensities, scaled by the slope, gives a measure of the external magnetic
field fluctuation along the chosen NV axis corresponding to each pixel. The
time-dependent magnetic field can be estimated from
$B(t)=\frac{v(t)-v_{o}}{\left.\frac{dV_{\text{lock
}}}{df}\right|_{V_{\text{lock }}=0,f=\omega_{\text{res }}}}\gamma$ (5)
where $v(t)$ is the lock-in pixel intensity, $v_{o}$ is a fixed offset
baseline of the pixel and $\gamma=$
$28\text{\,}\mathrm{kHz}\text{\,}{\mathrm{\SIUnitSymbolMicro T}}^{-1}$ is the
gyromagnetic ratio. The zero-crossing slope scale factor is independently determined for each pixel in the imaging window. Individual
pixels are heterogeneous in their resonant frequencies due to small deviations arising from local crystal strain, a non-uniform bias magnetic field, and temperature variations across the excitation volume of the diamond sample. Therefore, we select the median resonant frequency from the distribution of resonant frequencies in the imaging window for widefield magnetic field tracking.
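The per-pixel conversion of Eq. (5) amounts to a frequency shift (intensity change divided by the calibrated zero-crossing slope, giving kHz) divided by $\gamma$ to obtain field. A minimal NumPy sketch, with hypothetical function names and the slope expressed in intensity units per kHz:

```python
import numpy as np

GAMMA = 28.0  # kHz per microtesla (gyromagnetic ratio quoted in the text)

def field_trace(v, v0, slope_per_khz):
    """Convert a lock-in pixel intensity trace v(t) into magnetic field
    (microtesla): intensity change over the calibrated zero-crossing slope
    gives a frequency shift in kHz; dividing by GAMMA gives field."""
    return (np.asarray(v) - v0) / (slope_per_khz * GAMMA)

def field_movie(frames, baseline, slope_map):
    """Apply the per-pixel conversion to a (n_t, ny, nx) frame stack,
    using per-pixel baselines and slope calibrations."""
    return (frames - baseline) / (slope_map * GAMMA)
```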
We demonstrate temporal magnetic field imaging examples for both samples, the
microwire and the microcoil, at different magnetic field variations and
imaging frame rates. The current flow in these samples is controlled by an
arbitrary waveform analog voltage generator (NIDAQ PCIe-6363, Analog output)
and the applied voltage waveform is triggered in synchronization with camera
frame acquisition (see Fig. 1). A low peak current level of
$500\text{\,}\mathrm{\SIUnitSymbolMicro A}$ was chosen for temporal field
imaging demonstration of both samples to keep peak magnetic field values below
$\sim$ $6\text{\,}\mathrm{\SIUnitSymbolMicro T}$ in the entire FOV, at the
given sample-standoff (see static imaging section and Fig. 4).
Figure 5: Temporal imaging of 1.26 Hz magnetic field variation at 78 frames
per second of the $90\text{\,}\mathrm{\SIUnitSymbolDegree}$ bend feature of
the microwire sample: (a) Magnetic field frames at single time-points
(averaged $n=15$ iterations) showing alternating field image contrast with
reversal in current direction. No voltage applied for a baseline time of
$0.5\text{\,}\mathrm{s}$, first frame selected from baseline window. A
periodic square wave voltage waveform of alternating polarity was applied
after the baseline time, at $1.26\text{\,}\mathrm{Hz}$ periodicity and peak
current $500\text{\,}\mathrm{\SIUnitSymbolMicro A}$. Exact magnetic field frame time-points are shown above each image. Scale bar $27\text{\,}\mathrm{\SIUnitSymbolMicro m}$. (b) Full temporal time traces of
example pixels showing magnetic field tracking in time. Location of example
pixels, labelled P1 to P4 in the field of view, are shown in the magnetic field images. For each pixel, the magnetic field trace versus time tracks the applied magnetic field, with faded gray lines showing single-iteration traces and solid black lines showing the mean ($n=15$) trace for that pixel. Amplitude spectral densities of single-pixel field traces are shown on the left, with pixel Fourier spectra in blue and the applied-voltage Fourier spectrum in gray. The applied-voltage spectral density is scaled by a constant to compare spectral content with the pixel Fourier spectra. Since the pixels track the magnetic field, peaks in the pixel Fourier spectra match peaks in the Fourier spectrum of the applied voltage, occurring at the $1.26\text{\,}\mathrm{Hz}$ field variation and its odd harmonics.
Microwire imaging, $1.26\text{\,}\mathrm{Hz}$ sample field variation, 78 fps
NV acquisition: Dynamic magnetic field imaging was performed on the microwire sample, where the acquisition rate of magnetic field frames was set to 78 fps and
a $1.26\text{\,}\mathrm{Hz}$ periodic square bipolar voltage waveform was
applied to the microwire. A peak current of
$500\text{\,}\mathrm{\SIUnitSymbolMicro A}$ produced a peak magnetic field of around $5\text{\,}\mathrm{\SIUnitSymbolMicro T}$ in the imaging FOV. Fig. 5(a)
shows example single magnetic field frames (iterations $n=15$) at selected
time-points demonstrating temporally varying, alternating field magnetic image
contrast due to periodic changes in current polarity. A few example pixels, P1 to P4 (see Fig. 5(b)), were selected to show the full temporal response of individual pixels. Single-iteration time traces (Fig. 5(b), faded gray traces from all $n=15$ iterations) and mean time traces ($n=15$, Fig. 5(b), black solid traces) of individual pixels track the applied magnetic field waveform.
Fourier spectra of these pixel time traces contain peaks at the applied magnetic field variation of $1.26\text{\,}\mathrm{Hz}$ and its odd harmonics, as expected. Example pixels P2 and P4 were selected at locations perpendicular to the current path near pixels P1 and P3. Accordingly, the P2 and P4 time traces are $\sim$ $90\text{\,}\mathrm{\SIUnitSymbolDegree}$ phase shifted relative to the P1 and P3 traces, in line with the expected spatial magnetic field profile of the microwire.
To the best of our knowledge, this demonstrates the first real-time tracking of microscale magnetic fields that faithfully reconstructs the frequency and
phase of the applied field. A link to the video file of this imaging dataset
has been provided in the supplementary section.
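The expected odd-harmonic structure of the pixel spectra can be checked with a short synthetic sketch: a 1.26 Hz bipolar square wave sampled at the 78 fps frame rate used here shows the fundamental and odd harmonics but essentially no even harmonics. This is an illustration with synthetic data, not the measured traces, and the function name is mine.

```python
import numpy as np

def pixel_spectrum(trace, fs):
    """One-sided amplitude spectrum of a mean-subtracted pixel trace."""
    v = trace - trace.mean()
    n = len(v)
    spec = np.abs(np.fft.rfft(v)) * 2.0 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spec

# Synthetic 1.26 Hz bipolar square wave, sampled at 78 fps for 50 s
fs, f0, T = 78.0, 1.26, 50.0
t = np.arange(0, T, 1.0 / fs)
trace = np.sign(np.sin(2 * np.pi * f0 * t))
freqs, spec = pixel_spectrum(trace, fs)
peak = freqs[1:][np.argmax(spec[1:])]  # dominant peak, skipping the DC bin
```

The dominant peak sits at the 1.26 Hz fundamental, and the bin at $3f_0$ carries far more power than the bin at $2f_0$, mirroring the odd-harmonic pattern seen in Fig. 5(b).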
Figure 6: Temporal imaging of $17.9\text{\,}\mathrm{Hz}$ magnetic field
variation at 78 frames per second of the microcoil sample: (a) Magnetic field
frames at single time points (averaged $n=15$ iterations) showing alternating
field image contrast with reversal in current direction. No voltage applied
for a baseline time of $0.5\text{\,}\mathrm{s}$, first frame selected from
baseline window. A periodic square wave voltage waveform of alternating
polarity was applied after the baseline time, at $17.9\text{\,}\mathrm{Hz}$
periodicity and peak current $500\text{\,}\mathrm{\SIUnitSymbolMicro A}$.
Exact magnetic field frame time points are shown above each image. Scale bar $34\text{\,}\mathrm{\SIUnitSymbolMicro m}$. (b) Full temporal time
traces of example pixels showing magnetic field tracking in time. Location of
example pixels, labelled P1-P4 in the field of view, are shown in the magnetic field images. For each pixel, the magnetic field trace versus time tracks the applied magnetic field, with faded gray lines showing single-iteration traces and solid black lines showing the mean ($n=15$) trace for the pixel. Amplitude spectral densities of single-pixel field traces are shown on the left, with pixel Fourier spectra in blue and the applied-voltage Fourier spectrum in gray. The applied-voltage spectral density is scaled by a constant to compare spectral content with the pixel Fourier spectra. Since the pixels track the magnetic field, the prominent peak in the pixel Fourier spectra matches the peak in the Fourier spectrum of the applied voltage, both occurring at the $17.9\text{\,}\mathrm{Hz}$ field variation.
Microcoil imaging, $17.98\text{\,}\mathrm{Hz}$ sample field variation, 78 fps
NV acquisition: Dynamic magnetic field imaging was performed on the planar
microcoil sample, where the NV acquisition rate was set to 78 fps and a
$17.98\text{\,}\mathrm{Hz}$ periodic square voltage waveform was applied to
the microcoil. Results of the microcoil imaging (Fig. 6) are organized similarly to the microwire temporal imaging results. Microscale magnetic field profiles of the microcoil are spatially resolved in single sub-second magnetic field frames (Fig. 6(a), 12 ms per frame, $n=15$). Magnetic
field time traces of example pixels have been shown in Fig. 6(b) and example
pixel locations on the microcoil images are marked in Fig. 6(a). Fourier spectra of these pixels show a peak at the frequency of the applied $17.98\text{\,}\mathrm{Hz}$ periodic magnetic field waveform. These results demonstrate that spatially intricate field profiles, in this case multiple current flow paths separated by $\sim$$7\text{\,}\mathrm{\SIUnitSymbolMicro m}$, can be resolved in millisecond-scale snapshots of magnetic field images. A link to the video file of this imaging
dataset has been provided in the supplementary section.
Temporal imaging data for the microcoil at a similar magnetic field variation ($18.9\text{\,}\mathrm{Hz}$) but a higher NV acquisition rate of 208 fps are shown in the supplementary section (Supplementary Fig. S3). Microcoil magnetic
field features are spatially-resolved with reduced SNR and the first odd
harmonic of $18.9\text{\,}\mathrm{Hz}$ field variation is also observed in the
Fourier spectra of individual pixel responses. A higher magnetic field variation of $41.52\text{\,}\mathrm{Hz}$ applied to the microcoil at a 208 fps acquisition rate is also shown (Supplementary Fig. S4). Additionally, for completeness, we show dynamics in the microwire sample at a similar magnetic field variation ($16.3\text{\,}\mathrm{Hz}$) and a 78 fps NV acquisition rate.
Supplementary Fig. S5 shows spatially resolved magnetic field images of the
microwire and expected magnetic field tracking in individual pixel responses.
Figure 7: Temporal imaging of an arbitrary millisecond scale magnetic field
variation at 208 frames per second of the microcoil sample: (a) Applied
current profile to the microcoil sample. The main waveform signature lasts for
less than $150\text{\,}\mathrm{ms}$. Vertical blue lines indicate time-points for which single magnetic field image frames are shown. (b) Example magnetic field frames at selected time-points (averaged $n=15$ iterations). Magnetic field images are Gaussian-smoothed with a 4.5$\sigma$ filter. The applied current profile is reflected in the series of
spatially resolved magnetic field images of the microcoil. Magnetic field
image at $413\text{\,}\mathrm{ms}$ is observed to faithfully capture the fast
inversion of current polarity. Scale bar
$34\text{\,}\mathrm{\SIUnitSymbolMicro m}$.
Arbitrary waveform dynamics: Here, we show millisecond scale widefield
magnetometry for a generalized arbitrary waveform signal (Fig. 7a) with a
rapid inversion of current polarity and where the total fluctuation event
lasts less than 150 ms. The Fourier spectrum of this applied waveform contains energy at frequencies up to $100\text{\,}\mathrm{Hz}$. Therefore, to sufficiently sample the magnetic field profile, the number of demodulation cycles was reduced to reach an NV acquisition rate of 208 fps. Fig. 7(b) shows selected
single magnetic field frames ($4.8\text{\,}\mathrm{ms}$ NV acquisition time
per frame, $n=15$ iterations) that show expected microcoil field temporal
profile in response to the applied waveform. Notably, field frame at
$413\text{\,}\mathrm{ms}$ captures the point of near-zero magnetic field
profile when the current rapidly switches polarity within
$\sim$$30\text{\,}\mathrm{ms}$. Since the magnitude of the peak negative current is higher than that of the peak positive current, microcoil features are more prominent in the field frame at $427\text{\,}\mathrm{ms}$ than in the field frame at $394\text{\,}\mathrm{ms}$. Magnetic field images for this case have been Gaussian-smoothed with a $4.5\sigma$ filter to remove the additional noise incurred at higher imaging frame rates. A link to the
video file of this imaging dataset has been provided in the supplementary
section.
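The smoothing step applied to these high-frame-rate frames can be sketched as follows, interpreting the $4.5\sigma$ filter as a Gaussian kernel with $\sigma = 4.5$ pixels (an assumption on my part) and using SciPy's `gaussian_filter` for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_frames(frames, sigma=4.5):
    """Gaussian-smooth each (ny, nx) field frame in a (n_t, ny, nx) stack,
    suppressing the pixel noise incurred at high imaging frame rates."""
    return np.stack([gaussian_filter(f, sigma=sigma) for f in frames])
```

Smoothing each frame independently preserves the temporal resolution of the stack while trading a little spatial resolution for a substantially lower per-pixel noise floor.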
Further, we observed high-frequency noise in the lock-in camera pixel response, which could be reduced by appropriate filtering techniques such as Bayesian filtering to further enhance the imaging SNR. To the best of our
knowledge, the acquired single-axis widefield magnetic field images constitute
a novel demonstration of real-time millisecond scale widefield magnetic field
microscopy. Improved temporal resolution is primarily enabled by pixel noise-
rejection at higher lock-in frequencies, high imaging frame rates offered by
the lock-in camera and ability to synchronize modulation of NV emitted light
with lock-in camera frame integration timings. At high imaging frame rates,
the SNR is primarily limited by the NV’s emitted fluorescence rate from the
diamond sample, and not by the lock-in camera demodulation rates. Therefore, the temporal imaging enhancement demonstrated in this work is expected to improve at least one- to two-fold with optimized optical and microwave excitation power of the NV ensemble and, further, by the use of state-of-the-art ion-irradiated, high-density nitrogen-vacancy diamond samples.
Imaging speed and sensitivity trade-off: Finally, we discuss the interplay of
four key parameters of WMF imaging method, namely, the imaging frame rate $I$,
the mean per-pixel sensitivity $\eta$, the NV modulation frequency
$\omega_{mod}$ and the number of frame averaging cycles $n_{cyc}$. A
phenomenological understanding of the coupling of parameters will be useful in
deciding the trade-off. To maximize the imaging frame rate $I\propto\omega_{mod}/n_{cyc}$, we need to modulate the NVs faster (increase $\omega_{mod}$) and average over fewer internal frames (decrease $n_{cyc}$). Increasing $\omega_{mod}$ decreases the zero-crossing NV slope while the noise, $\sigma$, remains mostly constant; therefore, the sensitivity $\eta$ worsens at higher $\omega_{mod}$ for fixed $n_{cyc}$. Increasing $n_{cyc}$ has a more subtle effect on $\eta$, since the camera readout noise $\sigma$ increases with $n_{cyc}$ but the NV signal strength also improves. Therefore, a multi-parameter optimization is required for
understanding the trade-offs and zone of best performance for the sensor for a
given specific application.
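As an illustration only, such a multi-parameter optimization can be explored with a toy grid search. The scalings below (slope $\propto 1/\omega_{mod}$, signal $\propto n_{cyc}$, readout noise $\propto\sqrt{n_{cyc}}$) are assumptions made for the sketch, not measured characteristics of the setup.

```python
import numpy as np

def frame_rate(w_mod, n_cyc):
    """Imaging frame rate scaling I ~ w_mod / n_cyc (as in the text)."""
    return w_mod / n_cyc

def sensitivity(w_mod, n_cyc, c=1.0):
    """Toy per-pixel sensitivity (smaller is better). Assumed scalings:
    slope ~ 1/w_mod, signal ~ n_cyc, readout noise ~ sqrt(n_cyc)."""
    slope = 1.0 / w_mod
    snr = n_cyc / np.sqrt(n_cyc)
    return c / (slope * snr)

def best_settings(w_mods, n_cycs, eta_max):
    """Grid search: fastest frame rate whose sensitivity stays acceptable."""
    best = None
    for w in w_mods:
        for n in n_cycs:
            if sensitivity(w, n) <= eta_max:
                if best is None or frame_rate(w, n) > frame_rate(*best):
                    best = (w, n)
    return best
```

Even this crude model reproduces the qualitative tension described above: the fastest admissible setting is never simply the largest $\omega_{mod}$, because the sensitivity constraint pushes back.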
In summary, we have developed a novel widefield magnetic field microscope
capable of probing dynamically varying microscale magnetic field features at
tunable imaging frame rates of 50-200 frames per second. Millisecond to sub-
second magnetic field images have been demonstrated for a planar microcoil
sample with detailed microscale features, consisting of multiple current flow
paths separated by $\sim$ $7\text{\,}\mathrm{\SIUnitSymbolMicro m}$, current
flow track width $10\text{\,}\mathrm{\SIUnitSymbolMicro m}$ and multiple
$90\text{\,}\mathrm{\SIUnitSymbolDegree}$ turns in the current flow path.
While maintaining microscale spatial resolution, individual pixels in the
imaging FOV have been shown to track applied magnetic fields in time with
correct amplitude and phase for both periodic current waveforms and a short $150\text{\,}\mathrm{ms}$ arbitrary current waveform. The frequency spectra of individual pixels reveal a near-exact match to the frequency spectra of the applied periodic current waveforms. Further, the NV imaging speed enhancements have
been shown for small magnetic fields, typically less than
$\sim$$6\text{\,}\mathrm{\SIUnitSymbolMicro T}$ in the entire FOV. Therefore,
to the best of our knowledge, the widefield per-pixel lock-in method proposed
here marks significant improvement over conventional ODMR imaging, where few
to several minutes of averaging time is required to obtain a single magnetic
field image of similar microscale spatial resolution.
## 4 Conclusion and Outlook
In this work, we have developed and demonstrated an experimental technique to perform real-time widefield magnetic field imaging using NV centers in diamond. The per-pixel SNR is significantly enhanced using lock-in detection techniques implemented on a commercial lock-in camera, which allows simultaneous demodulation of multiple pixels. While previous diamond NV based magnetometers have required acquisition times of several minutes per frame, to the best of our knowledge we demonstrate, for the first time, spatio-temporal imaging of magnetic field dynamics around 1-40 $\text{\,}\mathrm{Hz}$ at imaging speeds of 50-200 fps. The fm-ODMR protocol used in this demonstration
is easy to implement, demanding only frequency modulated NV-PL and microsecond
digital pulses that control camera frame demodulation. We expect temporal
imaging SNR and imaging FOV shown in our work to significantly improve with
increase in optical excitation of NV centers and with application of state-of-
art higher NV density diamond crystals. Additionally, the spatio-temporal
resolution is expected to improve in future with the use of higher NV
concentration diamond samples and improved coherence time. We emphasize that
while we operate the camera at demodulation of 6.25-8.33
$\text{\,}\mathrm{kHz}$ and imaging frame rates of $\sim\,50-200$ fps, the
demonstration is primarily limited by the low NV fluorescence and not by
maximum achievable lock-in modulation rates (possible up to
$250\text{\,}\mathrm{kHz}$) and imaging frame rates (maximum possible 3200
fps) for the camera used here. Other lock-in cameras [43, 44] are expected to
offer similar high frame rate advantages. We are aware of a similar
independent preprint submission by Webb et al. [45] where the authors
demonstrate an application of widefield lock-in detection to enhance imaging
speed of diamond NV magnetometry. Both our work and their work, with
differences in experimental implementation, show that widefield lock-in
detection enables sub-second magnetic field microscopy using NV defect centers
in diamond, in contrast to conventional static diamond NV magnetic field
microscopy.
## References
* [1] Steinert, S. _et al._ High sensitivity magnetic imaging using an array of spins in diamond. _Rev. Sci. Instrum._ 81, 043705, DOI: 10.1063/1.3385689 (2010).
* [2] Pham, L. M. _et al._ Magnetic field imaging with nitrogen-vacancy ensembles. _New J. Phys._ 13, 045021, DOI: 10.1088/1367-2630/13/4/045021 (2011).
* [3] Le Sage, D. _et al._ Optical magnetic imaging of living cells. _Nature_ 496, 486–489 (2013).
* [4] Glenn, D. R. _et al._ Micrometer-scale magnetic imaging of geological samples using a quantum diamond microscope. _Geochemistry, Geophysics, Geosystems_ 18, 3254–3267 (2017).
* [5] Levine, E. V. _et al._ Principles and techniques of the quantum diamond microscope. _Nanophotonics_ 8, 1945–1973, DOI: doi:10.1515/nanoph-2019-0209 (2019).
* [6] Tetienne, J.-P. _et al._ Quantum imaging of current flow in graphene. _Science advances_ 3, e1602429 (2017).
* [7] Wolf, T. _et al._ Subpicotesla diamond magnetometry. _Phys. Rev. X_ 5, 041001, DOI: 10.1103/PhysRevX.5.041001 (2015).
* [8] Barry, J. F. _et al._ Optical magnetic detection of single-neuron action potentials using quantum defects in diamond. _Proceedings of the National Academy of Sciences_ 113, 14133–14138, DOI: 10.1073/pnas.1601513113 (2016).
* [9] Rondin, L. _et al._ Magnetometry with nitrogen-vacancy defects in diamond. _Reports on progress in physics_ 77, 056503 (2014).
* [10] Petrini, G. _et al._ Is a quantum biosensing revolution approaching? perspectives in nv-assisted current and thermal biosensing in living cells. _Advanced Quantum Technologies_ 3, 2000066, DOI: https://doi.org/10.1002/qute.202000066 (2020).
* [11] Davis, H. C. _et al._ Mapping the microscale origins of magnetic resonance image contrast with subcellular diamond magnetometry. _Nature communications_ 9, 1–9 (2018).
* [12] Ku, M. J. H. _et al._ Imaging viscous flow of the dirac fluid in graphene. _Nature_ 583, 537–541, DOI: 10.1038/s41586-020-2507-2 (2020).
* [13] Casola, F., van der Sar, T. & Yacoby, A. Probing condensed matter physics with magnetometry based on nitrogen-vacancy centres in diamond. _Nat. Rev. Mater_ 3, 17088, DOI: 10.1038/natrevmats.2017.88 (2018).
* [14] Turner, M. J. _et al._ Magnetic field fingerprinting of integrated-circuit activity with a quantum diamond microscope. _Physical Review Applied_ 14, 014097 (2020).
* [15] Mizuno, K., Ishiwata, H., Masuyama, Y., Iwasaki, T. & Hatano, M. Simultaneous wide-field imaging of phase and magnitude of ac magnetic signal using diamond quantum magnetometry. _Scientific Reports_ 10, 1–10 (2020).
* [16] Lillie, S. E. _et al._ Imaging graphene field-effect transistors on diamond using nitrogen-vacancy microscopy. _Phys. Rev. Applied_ 12, 024018, DOI: 10.1103/PhysRevApplied.12.024018 (2019).
* [17] Broadway, D. A. _et al._ Spatial mapping of band bending in semiconductor devices using in situ quantum sensors. _Nature Electronics_ 1, 502–507, DOI: 10.1038/s41928-018-0130-0 (2018).
* [18] Lillie, S. E. _et al._ Laser modulation of superconductivity in a cryogenic wide-field nitrogen-vacancy microscope. _Nano Letters_ 20, 1855–1861, DOI: 10.1021/acs.nanolett.9b05071 (2020).
* [19] Mahmoudi, M. _et al._ Magnetic resonance imaging tracking of stem cells in vivo using iron oxide nanoparticles as a tool for the advancement of clinical regenerative medicine. _Chemical Reviews_ 111, 253–280, DOI: 10.1021/cr1001832 (2011).
* [20] Barry, J. F. _et al._ Optical magnetic detection of single-neuron action potentials using quantum defects in diamond. _Proceedings of the National Academy of Sciences_ 113, 14133–14138 (2016).
* [21] Price, J. C., Mesquita-Ribeiro, R., Dajas-Bailador, F. & Mather, M. L. Widefield, spatiotemporal mapping of spontaneous activity of mouse cultured neuronal networks using quantum diamond sensors. _Frontiers in Physics_ 8, 255, DOI: 10.3389/fphy.2020.00255 (2020).
* [22] Webb, J. L. _et al._ Detection of biological signals from a live mammalian muscle using an early stage diamond quantum sensor. _Scientific Reports_ 11, 2412, DOI: 10.1038/s41598-021-81828-x (2021).
* [23] Parashar, M., Saha, K. & Bandyopadhyay, S. Axon hillock currents enable single-neuron-resolved 3d reconstruction using diamond nitrogen-vacancy magnetometry. _Communications Physics_ 3, 174, DOI: 10.1038/s42005-020-00439-6 (2020).
* [24] Schlierf, M., Berkemeier, F. & Rief, M. Direct observation of active protein folding using lock-in force spectroscopy. _Biophysical Journal_ 93, 3989–3998, DOI: https://doi.org/10.1529/biophysj.107.114397 (2007).
* [25] Shah, V., Knappe, S., Schwindt, P. D. D. & Kitching, J. Subpicotesla atomic magnetometry with a microfabricated vapour cell. _Nature Photonics_ 1, 649–652, DOI: 10.1038/nphoton.2007.201 (2007).
* [26] Beer, S. & Seitz, P. Real-time tomographic imaging without x-rays: a smart pixel array with massively parallel signal processing for real-time optical coherence tomography performing close to the physical limits. In _Research in Microelectronics and Electronics, 2005 PhD_ , vol. 2, 135–138, DOI: 10.1109/RME.2005.1542955 (2005).
* [27] Changhuei Yang, A. M. & Alford, J. Systems and methods for quasi-ballistic photon optical coherence tomography in diffusive scattering media using a lock-in camera detector. US Patent US10881300B2 (2021).
* [28] Liu, Y., Shen, Y., Ma, C., Shi, J. & Wang, L. V. Lock-in camera based heterodyne holography for ultrasound-modulated optical tomography inside dynamic scattering media. _Applied physics letters_ 108, 231106 (2016).
* [29] Meier, A. H. & Roesgen, T. Imaging laser doppler velocimetry. _Experiments in Fluids_ 52, 1017–1026, DOI: 10.1007/s00348-011-1192-1 (2012).
* [30] Sinclair, L. C., Cossel, K. C., Coffey, T., Ye, J. & Cornell, E. A. Frequency comb velocity-modulation spectroscopy. _Phys. Rev. Lett._ 107, 093002, DOI: 10.1103/PhysRevLett.107.093002 (2011).
* [31] Schoenfeld, R. S. & Harneit, W. Real time magnetic field sensing and imaging using a single spin in diamond. _Physical review letters_ 106, 030802 (2011).
* [32] Schloss, J. M., Barry, J. F., Turner, M. J. & Walsworth, R. L. Simultaneous broadband vector magnetometry using solid-state spins. _Physical Review Applied_ 10, 034044 (2018).
* [33] Clevenson, H. _et al._ Robust high-dynamic-range vector magnetometry with nitrogen-vacancy centers in diamond. _Applied Physics Letters_ 112, 252406 (2018).
* [34] Webb, J. L. _et al._ Nanotesla sensitivity magnetic field sensing using a compact diamond nitrogen-vacancy magnetometer. _Applied Physics Letters_ 114, 231103 (2019).
* [35] Wojciechowski, A. M. _et al._ Contributed review: Camera-limits for wide-field magnetic resonance imaging with a nitrogen-vacancy spin sensor. _Review of Scientific Instruments_ 89, 031501 (2018).
* [36] Heliotis heliCam™ C3 – lock-in camera.
* [37] Kazi, Z. _et al._ Wide-field dynamic magnetic microscopy using double-double quantum driving of a diamond defect ensemble. _Physical Review Applied_ 15, 054032 (2021).
* [38] Hart, C. A. _et al._ $\mathrm{N}$-$v$–diamond magnetic microscopy using a double quantum 4-ramsey protocol. _Phys. Rev. Applied_ 15, 044020, DOI: 10.1103/PhysRevApplied.15.044020 (2021).
* [39] Kucsko, G. _et al._ Nanometre-scale thermometry in a living cell. _Nature_ 500, 54–58, DOI: 10.1038/nature12373 (2013).
* [40] Fujiwara, M. _et al._ Real-time estimation of the optically detected magnetic resonance shift in diamond quantum thermometry toward biological applications. _Phys. Rev. Research_ 2, 043415, DOI: 10.1103/PhysRevResearch.2.043415 (2020).
* [41] Fujiwara, M. _et al._ Real-time nanodiamond thermometry probing in vivo thermogenic responses. _Science Advances_ 6, DOI: 10.1126/sciadv.aba9636 (2020).
* [42] Lima, E. A. & Weiss, B. P. Obtaining vector magnetic field maps from single-component measurements of geological samples. _Journal of Geophysical Research_ 114 (2009).
* [43] Cao, H. T., Brown, D. D., Veitch, P. J. & Ottaway, D. J. Optical lock-in camera for gravitational wave detectors. _Opt. Express_ 28, 14405–14413, DOI: 10.1364/OE.384754 (2020).
* [44] Changhuei Yang, J. A., Adam Marblestone & Wentz, C. System and method for simultaneously detecting phase modulated optical signals. US Patents US10016137B1 (2018).
* [45] Webb, J. L. _et al._ High speed microcircuit and synthetic biosignal widefield imaging using nitrogen vacancies in diamond (2021). 2107.14156.
## Acknowledgements
K.S. acknowledges financial support from IIT Bombay seed grant number
17IRCCSG009, DST Inspire Faculty Fellowship - DST/ INSPIRE/04/2016/002284,
SERB EMR grant Number EMR/2016/007420 and Asian Office of Aerospace Research
and Development (AOARD) R$\&$D grant No. FA2386-19-1-4042. K.S. acknowledges
the support and usage of fabrication facilities in the IIT Bombay
Nanofabrication facility via the NNetra project sponsored by Department of
Science and Technology (DST) and Ministry of Electronics and Information
Technology (MEITY), India. This work was also supported by the DBT/Wellcome
Trust India Alliance Fellowship IA/I/11/2500270 awarded to S.B.. M.P. thanks
MHRD, India for Institute Fellowship and Prime Minister’s Research Fellowship
(PMRF). The authors thank Heliotis Helicam technical assistance, especially
Istvan Biro, for his help in camera synchronization. K.S. acknowledges the
contribution of Aditya Malusare and Parth Jatakia during the initial
experimental setup. The authors thank Dr. Siddharth Tallur for allowing the
usage of SR860 Lockin amplifier for single-photodiode experiments and also
thank Prof. Pradeep Sarin, Prof. Kantimay Dasgupta and Bikas C.Barik for
access and help in wire bonding the micro-fabricated samples. The authors note
that the work has been provisionally filed under the Indian Patent Act with
application number:202121010532.
## Author contributions statement
M.P, S.B, and K.S conceived the idea. M.P and K.S designed the experimental
setup. M.P constructed the experimental setup, wrote custom software for
experiment control and data acquisition and performed all primary experiments
and data analysis. A.B designed and performed micro-fabrication of the
microwire and the microcoil sample. A.B simulated magnetic field profiles of
the samples. D.S., A.B and A.G assisted data collection, designed and
characterized microwave loop PCB. D.S, A.B and S.B contributed key ideas to
experiments and data analysis. M.P and K.S. wrote the manuscript in discussion
with S.B . All authors reviewed and approved of the manuscript. K.S supervised
all aspects of the work.
## Competing Interests
The authors declare no competing interests.
## S1 Supplementary information
### S1.1 Supplementary notes
Normalization of amplitude spectral density: The amplitude spectral density (ASD)
of the magnetic field trace of an individual pixel is defined as the square root of
the one-sided power spectral density (PSD). The one-sided PSD $S(f)$ of a pixel time
series $X(t)$ is normalized such that the area under the PSD curve from $0$ to
$\frac{f_{s}}{2}$ equals the variance $\sigma_{X}^{2}$ of
the mean-subtracted pixel time trace, where $f_{s}$ denotes the sampling frequency
(frames per second).
$\sigma_{X}^{2}=\int_{0}^{\frac{f_{s}}{2}}S(f)\,\mathrm{d}f$ (6)
The above normalization was implemented with MATLAB built-in functions. For a
time-series data vector $V$, the ASD vector is given by
% centered Fourier spectrum of the time series
fourier_vector = fftshift(fft(V));
% one-sided ASD: the factor sqrt(2) folds the negative-frequency half into the positive one
ASD = sqrt(2) * abs(fourier_vector) / sqrt(length(fourier_vector));
The right half of the ASD vector represents the amplitude spectral density from
frequency $0$ to $f_{s}/2$.
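As an illustrative cross-check of this normalization (a NumPy sketch, not the code used for the paper; the 78 fps frame rate and 1.26 Hz test tone are borrowed from the imaging experiments), the integral of the one-sided PSD can be compared against the trace variance:

```python
import numpy as np

def one_sided_asd(x, fs):
    """One-sided amplitude spectral density, normalized so that the
    integral of the PSD from 0 to fs/2 equals the variance of x."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                            # mean-subtracted pixel trace
    n = len(x)
    spec = np.fft.rfft(x)
    psd = 2.0 * np.abs(spec) ** 2 / (n * fs)    # one-sided PSD
    return np.sqrt(psd)

# Parseval-style check of the normalization in Eq. (6)
fs = 78.0                                  # camera frame rate (fps)
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 1.26 * t)           # 1.26 Hz test tone
asd = one_sided_asd(x, fs)
df = fs / len(x)                           # frequency bin width
print(np.isclose((asd ** 2).sum() * df, x.var(), rtol=1e-2))
```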
Determination of single pixel spatial resolution and effective magnification:
In the experimental setup, the entire assembly of sample, diamond crystal and
microwave resonator was mounted on a motorized XYZ stage and the excitation
beam was kept fixed. The motorized stage coordinates are accurate to within
$100\text{\,}\mathrm{nm}$. Magnetic field images of the microcoil
sample were acquired at slightly shifted ($\sim$
$20\text{\,}\mathrm{\SIUnitSymbolMicro m}$) locations in X and Y motor
coordinates. For each change in motor coordinates, the corresponding number of
pixel shifts was noted for a sharp feature in the magnetic field image of the
microcoil or the microwire sample. The measurements were repeated several
times and the per-pixel spatial resolution was evaluated to be
$1.33\text{\,}\mathrm{\SIUnitSymbolMicro m}$ per pixel during the microwire
measurements and $1.7\text{\,}\mathrm{\SIUnitSymbolMicro m}$ per pixel during
the microcoil measurements. As mentioned in the main text, the per-pixel
resolution differs between the two sets of sample measurements due to a slight
change in the positioning of a focusing plano-convex lens in the excitation-
fluorescence collection path of the widefield microscope. The lock-in camera
real pixel size is $40\text{\,}\mathrm{\SIUnitSymbolMicro m}$, which yields an
effective magnification of $30\times$ for the microwire measurements and
$23.5\times$ for the microcoil measurements.
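The arithmetic behind these numbers can be made explicit (the motor-shift and pixel-shift counts below are hypothetical, chosen only to reproduce the quoted resolutions; only the $40\text{\,}\mathrm{\SIUnitSymbolMicro m}$ pixel size comes from the text):

```python
CAMERA_PIXEL_UM = 40.0   # lock-in camera real pixel size (µm)

def per_pixel_resolution_um(motor_shift_um, pixel_shift):
    """Spatial extent imaged onto one camera pixel: motor displacement
    divided by the observed shift (in pixels) of a sharp field feature."""
    return motor_shift_um / pixel_shift

def effective_magnification(resolution_um_per_px):
    return CAMERA_PIXEL_UM / resolution_um_per_px

# hypothetical example: a 20 µm motor move shifting a feature by 15 pixels
res_wire = per_pixel_resolution_um(20.0, 15)        # ≈ 1.33 µm per pixel
res_coil = per_pixel_resolution_um(20.4, 12)        # = 1.70 µm per pixel
print(round(effective_magnification(res_wire), 1))  # 30.0 (microwire)
print(round(effective_magnification(res_coil), 1))  # 23.5 (microcoil)
```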
Per-pixel raw-data processing: Additional details on how the raw data are
measured, processed, and analyzed to obtain time-dependent magnetic field maps.
1.
Before dynamic magnetic field tracking, we acquire a widefield lock-in ODMR
spectrum of a single NV resonant feature at a microwave frequency step size
of $100\text{\,}\mathrm{kHz}$. The resonant feature is selected on the basis
of high signal response and linearity at the NV zero-crossing point, i.e. the NV
resonant frequency. This selection determines the NV axis along which the
magnetic field sensing will be performed.
2.
The informative red light emitted from the NV centers spans a limited area of
the CMOS array ($300\times 300$ pixels). Further, the NV ODMR signal differs
between pixels due to the Gaussian nature of the optical illumination, spatial
non-uniformity of the applied microwave field, and the limited spot size of the
excitation beam. Therefore, it is important to select responding pixels. First
we create an average response template by taking the mean ODMR response of all
pixels, responding and non-responding alike. Since a large number of pixels
are responsive in the ODMR data, the template carries the average ODMR feature.
The template is normalized to unit norm and the unit-norm responses of all
pixels are correlated with the template via the dot product. Pixels with
projection values higher than a set threshold are selected for further
processing. This threshold was kept low, at $10^{-4}$, to reject only extremely
low-SNR pixels. Additional selection of high-response pixels occurs in the
subsequent non-linear curve-fitting step.
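The template-projection selection above can be sketched as follows (a schematic NumPy version; the threshold value comes from the text, while the toy lineshape and array sizes are ours):

```python
import numpy as np

def select_responding_pixels(odmr, threshold=1e-4):
    """odmr: (n_pixels, n_freqs) array of per-pixel lock-in ODMR responses.
    Build a unit-norm mean template, project each unit-norm pixel response
    onto it, and keep pixels whose projection exceeds the threshold."""
    template = odmr.mean(axis=0)
    template = template / np.linalg.norm(template)
    norms = np.linalg.norm(odmr, axis=1, keepdims=True)
    norms[norms == 0] = 1.0            # leave all-zero pixels at projection 0
    projections = (odmr / norms) @ template
    return projections > threshold

# toy example: 3 pixels share the ODMR lineshape, 1 is dead (all zeros)
f = np.linspace(-1, 1, 50)
line = f / (f**2 + 0.1)**2             # derivative-Lorentzian-like shape
odmr = np.vstack([1.0 * line, 0.5 * line, 0.2 * line, np.zeros_like(f)])
print(select_responding_pixels(odmr))   # [ True  True  True False]
```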
3.
Non-linear curve fitting is performed to fit the derivative of a sum of two
Lorentzian profiles separated by $3.05\text{\,}\mathrm{MHz}$ to each pixel
response selected in the previous step. The MATLAB function `lsqcurvefit` is
used to perform Levenberg-Marquardt non-linear fitting for each pixel ODMR
response.
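The fit model can be sketched as follows (the lineshape and the $3.05\text{\,}\mathrm{MHz}$ splitting come from the text; the sweep range, linewidth, and the crude grid search are illustrative stand-ins for the actual `lsqcurvefit` call):

```python
import numpy as np

def dlorentz_pair(f, f0, amp, width, split=3.05):
    """Derivative of a sum of two Lorentzians of equal width (MHz),
    centered at f0 - split/2 and f0 + split/2."""
    def dlor(x):
        return -2.0 * amp * width**2 * x / (x**2 + width**2) ** 2
    return dlor(f - f0 - split / 2) + dlor(f - f0 + split / 2)

# synthetic pixel response and a crude grid search for the resonance f0
f = np.linspace(2860.0, 2880.0, 401)          # MHz, hypothetical sweep
true_f0 = 2871.3
y = dlorentz_pair(f, true_f0, 1.0, 0.8)
grid = np.linspace(2865.0, 2875.0, 2001)      # 5 kHz grid steps
sse = [np.sum((y - dlorentz_pair(f, g, 1.0, 0.8)) ** 2) for g in grid]
f0_hat = grid[int(np.argmin(sse))]
print(abs(f0_hat - true_f0) < 0.01)           # True
```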
4.
A histogram of the distribution of resonant frequencies of individual pixels is
analyzed. Pixels with artefacts or low ODMR response result in incorrect ODMR
curve fits and have resonant frequencies that differ widely, by as much as
gigahertz, from the median resonant frequency. On the contrary, all pixels
with sufficient SNR have resonant frequencies clustered in a small
'continuous' band near the median resonant frequency. This resonant frequency
range is visually inspected and provides the bounds for the color axis of the
resonant frequency maps. This simple bound removes pixels with wrong curve
fits and pixels with low ODMR response, and adjusts the dynamic range of the
color axis of the 2D resonant frequency maps.
5.
The scaling of the raw per-pixel PL intensity data to magnetic field time
traces was done as described in the main text.
6.
In temporal imaging datasets, pixels have an offset value ranging from 0 to
1024 (10-bit scale), mostly centered in the range of 500–600. Also, when a
single resonant frequency is applied, heterogeneous pixels might show
different baseline PL values. A small baseline time window of about
$250$–$500\text{\,}\mathrm{ms}$, during which no voltage was applied to the
sample, was acquired, and the mean pixel value during the baseline window was
subtracted from the entire time trace of the pixel. Therefore, all pixels were
centered at 0 at the beginning of temporal magnetic field tracking. Further,
the built-in MATLAB function `detrend` was applied to the time trace of each
pixel to remove linear drifts in the magnetic field tracking data.
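The baseline subtraction and detrending of the last step can be sketched as follows (a NumPy stand-in for the MATLAB workflow; the frame rate, baseline length, and toy drift are illustrative):

```python
import numpy as np

def baseline_and_detrend(trace, fs, baseline_s=0.25):
    """Subtract the mean of the no-voltage baseline window, then remove a
    linear drift by least squares (mimicking MATLAB's detrend)."""
    n0 = max(1, int(baseline_s * fs))
    y = np.asarray(trace, dtype=float) - np.mean(trace[:n0])
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)
    return y - (slope * t + intercept)

# toy trace: camera offset ~550 counts plus a slow linear drift and a signal
fs = 78.0
t = np.arange(780) / fs                    # 10 s of frames
raw = 550.0 + 0.3 * t + np.sin(2 * np.pi * 1.26 * t)
clean = baseline_and_detrend(raw, fs)
# after detrending, a refitted line has essentially zero slope
print(abs(np.polyfit(np.arange(len(clean)), clean, 1)[0]) < 1e-6)
```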
### S1.2 Supplementary Videos
Links to the videos of the imaging datasets in the main text are provided
below.
1.
Video1: Imaging video file of data shown in the main text Fig. 5. Scale bar
$27\text{\,}\mathrm{\SIUnitSymbolMicro m}$. Microwire imaging,
$1.26\text{\,}\mathrm{Hz}$, 78 fps NV acquisition. Video link.
2.
Video2: Imaging video file of data shown in the main text Fig. 6. Scale bar
$34\text{\,}\mathrm{\SIUnitSymbolMicro m}$. Microcoil imaging,
$17.9\text{\,}\mathrm{Hz}$, 78 fps NV acquisition. Video link.
3.
Video3: Imaging video file of data shown in the main text Fig. 7. Scale bar
$34\text{\,}\mathrm{\SIUnitSymbolMicro m}$. Arbitrary waveform dynamics. In
this video, each time frame was filtered with the MATLAB function
`filloutliers`, primarily to reduce noise in the low-SNR pixels at the edges of
the FOV. Outliers were identified as pixels more than 3 scaled median absolute
deviations away from the frame median and were filled with the nearby local
median value. Video link.
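The outlier filtering can be approximated as follows (a NumPy sketch of the 3-scaled-MAD criterion with local-median filling; MATLAB's `filloutliers` is the function actually used):

```python
import numpy as np

def fill_outliers(frame, n_mad=3.0):
    """Flag pixels more than n_mad scaled MADs from the frame median and
    replace them with the median of their 3x3 neighborhood."""
    frame = np.asarray(frame, dtype=float)
    med = np.median(frame)
    mad = 1.4826 * np.median(np.abs(frame - med))   # scaled MAD
    outliers = np.abs(frame - med) > n_mad * mad
    # 3x3 local median via edge-padded shifted stacks
    pad = np.pad(frame, 1, mode="edge")
    h, w = frame.shape
    stack = np.stack([pad[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    local_median = np.median(stack, axis=0)
    out = frame.copy()
    out[outliers] = local_median[outliers]
    return out

# toy frame: smooth background with one hot pixel
frame = np.ones((9, 9))
frame[4, 4] = 100.0
print(fill_outliers(frame)[4, 4])   # 1.0
```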
### S1.3 Supplementary figures
Figure S1: Experimental noise spectrum of the widefield imaging setup: (a) Noise
spectrum of a reference single-photodiode (SP) magnetometry setup across
different diamond NV modulation frequencies (the same as the lock-in amplifier
reference). This spectrum was measured with all experimental conditions
identical to the ODMR experiments except that the applied microwave excitation
was off. The low-pass filter time constant was set to $100\text{\,}\mathrm{ms}$
during the measurement. (b) Mean noise measured for 15 randomly chosen pixels
of the lock-in camera across different modulation frequencies. Similar to panel
(a), experimental conditions were the same as in the widefield ODMR experiments
except that the microwave excitation was off. Units reflect the 10-bit
(1024-level) scale of the camera output. (c) Mean standard deviation of 10000
randomly chosen pixels of the lock-in camera versus diamond NV modulation
frequency. The data in panels (b) and (c) were obtained from the same set of
camera lock-in intensity frames ($n=20$ frames collected at each modulation
frequency). We note that the minimum camera lock-in frequency is
$2.2\text{\,}\mathrm{kHz}$, and therefore the high noise at lower frequencies
is not observed, unlike in the SP measurements of panel (a). Figure S2: Reconstruction of
all three orthogonal B-field axes from a single-NV-axis magnetic field image:
(a) Simulated magnetic field profiles of the microcoil sample at a sample
standoff of $14\text{\,}\mathrm{\SIUnitSymbolMicro m}$ and current magnitude
$500\text{\,}\mathrm{\SIUnitSymbolMicro A}$. The simulated single-NV-axis
projection image is shown for the same NV axis about which the widefield ODMR
was acquired. Scale bar $40\text{\,}\mathrm{\SIUnitSymbolMicro m}$. (b)
Experimentally obtained static magnetic field image of the microcoil current
flow, acquired about a single NV resonance peak with relatively higher magnetic
field sensitivity. The orthogonal components of the magnetic field were
reconstructed from the single-NV-axis magnetic image, assuming a source-free
sensor plane and using Fourier inversion techniques. Scale bar
$34\text{\,}\mathrm{\SIUnitSymbolMicro m}$. Figure S3: Temporal imaging of
$18.9\text{\,}\mathrm{Hz}$ magnetic field variation at 208 frames per second
for the microcoil sample: (a) Magnetic field frames at single time-points
(averaged over $n=15$ iterations) showing alternating field image contrast with
reversal of the current direction. No voltage was applied for a baseline time
of $0.25\text{\,}\mathrm{s}$; the first frame is selected from the baseline
window. A periodic square-wave voltage of alternating polarity was applied
after the baseline time, at $18.9\text{\,}\mathrm{Hz}$ periodicity and peak
current $500\text{\,}\mathrm{\SIUnitSymbolMicro A}$. Exact magnetic field frame
time-points are shown above each image. Scale bar
$34\text{\,}\mathrm{\SIUnitSymbolMicro m}$. (b) Full time traces of example
pixels showing magnetic field tracking in time. The locations of the example
pixels, labelled P1–P4, are shown in the magnetic field images. For each pixel,
the magnetic field trace versus time shows tracking of the applied magnetic
field, with faded gray lines as single-iteration traces and solid black lines
showing the mean ($n=15$) magnetic field trace for the pixel. Amplitude
spectral densities of the single-pixel field traces are shown on the left, with
the pixel Fourier spectra in blue and the applied-voltage Fourier spectrum in
gray. The applied-voltage spectral density is scaled by a constant to compare
its spectral content with the pixel Fourier spectra. Since the pixels track the
magnetic field, peaks in the pixel Fourier spectra match the peaks in the
Fourier spectrum of the applied voltage, occurring at the magnetic field
variation frequency of $18.9\text{\,}\mathrm{Hz}$ and its odd harmonics.
Figure S4: Temporal imaging of $41.52\text{\,}\mathrm{Hz}$
magnetic field variation at 208 frames per second for the microcoil sample: (a)
Magnetic field frames at single time-points (averaged over $n=15$ iterations)
showing alternating field image contrast with reversal of the current
direction. No voltage was applied for a baseline time of
$0.25\text{\,}\mathrm{s}$; the first frame is selected from the baseline
window. A periodic square-wave voltage of alternating polarity was applied
after the baseline time, at $41.52\text{\,}\mathrm{Hz}$ periodicity and peak
current $500\text{\,}\mathrm{\SIUnitSymbolMicro A}$. Exact magnetic field frame
time-points are shown above each image. Scale bar
$34\text{\,}\mathrm{\SIUnitSymbolMicro m}$. (b) Full time traces of example
pixels showing magnetic field tracking in time. The locations of the example
pixels, labelled P1–P4, are shown in the magnetic field images. For each pixel,
the magnetic field trace versus time shows tracking of the applied magnetic
field, with faded gray lines as single-iteration traces and solid black lines
showing the mean ($n=15$) magnetic field trace for the pixel. Amplitude
spectral densities of the single-pixel field traces are shown on the left, with
the pixel Fourier spectra in blue and the applied-voltage Fourier spectrum in
gray. The applied-voltage spectral density is scaled by a constant to compare
its spectral content with the pixel Fourier spectra. Since the pixels track the
magnetic field, peaks in the pixel Fourier spectra match the peaks in the
Fourier spectrum of the applied voltage, with the peak occurring at the
magnetic field variation rate of $41.52\text{\,}\mathrm{Hz}$. Figure S5: Temporal imaging of
$16.3\text{\,}\mathrm{Hz}$ magnetic field variation at 78 frames per second for
the $90\text{\,}\mathrm{\SIUnitSymbolDegree}$-bend microwire sample: (a)
Magnetic field frames at single time-points (averaged over $n=15$ iterations)
showing alternating field image contrast with reversal of the current
direction. No voltage was applied for a baseline time of
$0.5\text{\,}\mathrm{s}$; the first frame is selected from the baseline window.
A periodic square-wave voltage of alternating polarity was applied after the
baseline time, at $16.3\text{\,}\mathrm{Hz}$ periodicity and peak current
$500\text{\,}\mathrm{\SIUnitSymbolMicro A}$. Exact magnetic field frame
time-points are shown above each image. Scale bar
$27\text{\,}\mathrm{\SIUnitSymbolMicro m}$. (b) Full time traces of example
pixels showing magnetic field tracking in time. The locations of the example
pixels, labelled P1–P4, are shown in the magnetic field images. For each pixel,
the magnetic field trace versus time shows tracking of the applied magnetic
field, with faded gray lines as single-iteration traces and solid black lines
showing the mean ($n=15$) magnetic field trace for the pixel. Amplitude
spectral densities of the single-pixel field traces are shown on the left, with
the pixel Fourier spectra in blue and the applied-voltage Fourier spectrum in
gray. The applied-voltage spectral density is scaled by a constant to compare
its spectral content with the pixel Fourier spectra. Since the pixels track the
magnetic field, peaks in the pixel Fourier spectra match the peaks in the
Fourier spectrum of the applied voltage, with each pixel peak occurring at the
magnetic field variation frequency of $16.3\text{\,}\mathrm{Hz}$.
# Proof of universality in one-dimensional few-body systems including
anisotropic interactions
Lucas Happ [email protected] Institut für Quantenphysik and Center for
Integrated Quantum Science and Technology ($\rm IQ$ST), Universität Ulm,
D-89069 Ulm, Germany Maxim A. Efremov Institut für Quantenphysik and Center
for Integrated Quantum Science and Technology ($\rm IQ$ST), Universität Ulm,
D-89069 Ulm, Germany Institute of Quantum Technologies, German Aerospace
Center (DLR), D-89069 Ulm, Germany
###### Abstract
We provide an analytical proof of universality for bound states in one-
dimensional systems of two and three particles, valid for short-range
interactions with negative or vanishing integral over space. The proof is
performed in the limit of weak pair-interactions and covers both binding
energies and wave functions. Moreover, in this limit the results are formally
shown to converge to the respective ones found in the case of the zero-range
contact interaction.
## I Introduction
In contrast to purely attractive potentials, which are ubiquitous in quantum
physics, interactions whose attractive and repulsive parts cancel each other
are only scarcely discussed. Nevertheless, the latter allow for bound states
[1], and interest in such potentials ramped up within the last years with the
ability to realize them in systems of ultracold dipoles [2]. This is supported
by recent analysis for these potentials on the formation of few- and many-body
bound states in arrays of one-dimensional tubes [3] or in terms of beyond-
mean-field contributions in reduced dimensions [4]. In two spatial dimensions
the scattering properties [5, 6] have been studied as well as the universality
of weakly-bound two-body states [7, 8, 6] in the experimentally relevant
weakly-interacting regime. Here, universality means that the bound states
become independent of the details of the interparticle interaction.
In this Letter we study a two- and three-body system of two components,
confined to one spatial dimension. We consider only short-range interactions
$v(\xi)\equiv v_{0}f(\xi)$ with magnitude $v_{0}$ and shape $f(\xi)$ between
distinguishable particles, and none between identical ones. Within these
systems we are interested in the universal behavior [9] and consider the
weakly-interacting limit $v_{0}\to 0$, which implies [1, 10] a weakly-bound
two-body ground state. Moreover, we allow for anisotropic features in the
interactions which are often present in physical systems.
Within an analytical calculation we prove that in this weakly-interacting
limit, interactions of both negative (type I), $\int\mathrm{d}\xi\,v(\xi)<0$,
and vanishing (type II), $\int\mathrm{d}\xi\,f(\xi)=0$, integral over space
lead to the same universal behavior. Our proof is performed for two- and
three-body systems alike, by employing the corresponding integral equations in
momentum space. The demonstrated universality is not restricted to the binding
energies alone, but also includes the corresponding wave functions carrying
the full information about the few-body system. In particular, we show that
the universal limits are those obtained for a zero-range contact interaction.
The energies of three-body bound states $\mathcal{E}_{0,n}$ can then
approximately be expressed as
$\mathcal{E}_{0,n}\simeq\epsilon_{n}^{\star}\left|{\mathcal{E}^{(2)}_{0}}\right|.$
(1)
Here, $\mathcal{E}_{0}^{(2)}$ is the energy of the two-body ground state in
the potential $v(\xi)$, and $\epsilon_{n}^{\star}$ are the universal energy
ratios obtained for the contact interaction. These energy ratios depend only
on the heavy-light mass ratio and are presented for several experimentally
relevant situations in Ref. [11].
This Letter is organized as follows. In section II we introduce the two types
(type I and II) of pair-interactions and present our approach to discuss
universality in the two-body domain. We then turn in section III to the three-
body system where we apply a similar approach as for the two-body case in
order to prove universality for both types of two-body interactions. Finally,
we conclude by summarizing our results and by presenting an outlook in section
IV.
## II Two interacting particles
In this section we first introduce the one-dimensional two-body system and the
relevant quantities to describe it. Then we define two different types of
interactions whose weakly-bound ground state is discussed in terms of its
universality.
### II.1 The two-body system
In the following, we focus on a two-body system composed of a heavy and a
light particle of masses $M$ and $m\leq M$, respectively, both constrained to
one spatial dimension. The interaction between both particles is described by
a potential
$v(\xi)\equiv v_{0}f(\xi)$ (2)
of amplitude $v_{0}$ and shape $f$. Here, the relative coordinate between the
two particles is denoted by $\xi$ in units of the potential range $\xi_{0}$,
while the interaction potential is given in units of the characteristic energy
$\hbar^{2}/(\mu\xi_{0}^{2})$, with the reduced mass $\mu\equiv mM/(m+M)$ and
the Planck constant $\hbar$. The pair-interaction is assumed to be real,
short-ranged, $\xi^{2}v(\xi)\to 0$, as $\left|{\xi}\right|\to\infty$, and to
support a bound state.
Moreover, we require that the weakly-interacting limit, $v_{0}\to 0$, leads to
an even-wave resonance, that is the symmetric part of the two-body ground
state wave function becomes dominant compared to the antisymmetric part. We
assume that this is the usual case for the ground state of distinguishable
particles. There is, of course, the counter-example of the odd-wave
pseudopotential [12]; however, this is a highly nonanalytic model potential
designed to describe the antisymmetric ground state of two indistinguishable
fermions. Finally, we emphasize that this requirement is not identical to
constraining our analysis to symmetric potentials only; that is, we allow
for anisotropic features of the potential.
Since we are only interested in bound states, this system is governed by the
homogeneous Lippmann-Schwinger equation [13, 14]
$\phi^{(2)}(p)=\frac{v_{0}}{\mathcal{E}^{(2)}-p^{2}/2}\int\frac{\mathrm{d}p^{\prime}}{2\pi}F(p-p^{\prime})\phi^{(2)}(p^{\prime})$
(3)
for the two-body wave function $\phi^{(2)}(p)$ in momentum representation.
Here
$F(p)\equiv\int\mathrm{d}\xi\mathrm{e}^{-\mathrm{i}p\xi}f(\xi)$ (4)
denotes the potential shape $f$ in momentum representation and
$\mathcal{E}^{(2)}<0$ is the total energy of the two-body system in units of
$\hbar^{2}/(\mu\xi_{0}^{2})$.
In the following we consider the weakly-interacting regime, $v_{0}\to 0$,
which in 1D leads to a weakly-bound ground state [1, 10] with energy
$\mathcal{E}_{0}^{(2)}$. In order to simplify the analysis we introduce the
scaled momentum $P\equiv p/q_{0}$ with
$q_{0}\equiv\sqrt{2\left|{\mathcal{E}_{0}^{(2)}}\right|}.$ (5)
The corresponding integral equation (3) then reads
$\phi^{(2)}(P)=-\frac{1}{1+P^{2}}\frac{v_{0}}{q_{0}}\int\frac{\mathrm{d}P^{\prime}}{\pi}F\left[q_{0}(P-P^{\prime})\right]\phi^{(2)}(P^{\prime}).$
(6)
### II.2 Proof of two-body universality
Now we prove that in the limit of a vanishing binding energy
$\mathcal{E}_{0}^{(2)}$ of the heavy-light ground state, short-range
potentials with both negative, $v_{0}F(0)<0$, and vanishing, $F(0)=0$,
integral over space yield the same universal solutions for the two-body ground
state as for the contact interaction. This universality is shown not only for
the binding energy, but also for the corresponding wave function. While this
result might not be completely unexpected, the presentation of our approach
serves as basis for the subsequent proof of universality in the three-body
system (section III) which is performed in an analogous way.
#### II.2.1 Contact interaction
First, we discuss the case of the zero-range contact interaction of shape
$f_{\delta}(\xi)\equiv\delta(\xi)$, corresponding to $F_{\delta}(p)=1$.
Moreover, for this potential the relation
$q_{0}=-v_{0}$ (7)
remains exact for all values of $v_{0}$ and $q_{0}$ [1]. Hence, in this case
the integral equation (6) takes the form
$\phi^{(2)}(P)=\frac{1}{1+P^{2}}\int\frac{\mathrm{d}P^{\prime}}{\pi}\phi^{(2)}(P^{\prime}).$
(8)
It is independent of $q_{0}$ and equivalently independent of
$\mathcal{E}_{0}^{(2)}$, reflecting the scale-invariant property of the delta
potential. The solution to this integral equation in the original variable $p$
then takes the form of a Lorentzian
$\phi_{\delta}^{(2)}(p)=\frac{2q_{0}^{3/2}}{q_{0}^{2}+p^{2}}$ (9)
normalized with respect to
$\int\frac{\mathrm{d}p}{2\pi}\left[\phi^{(2)}(p)\right]^{2}=1.$ (10)
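Both properties can be checked directly: the scaled profile $\phi^{(2)}(P)\propto(1+P^{2})^{-1}$ solves Eq. (8) because $\int\mathrm{d}P^{\prime}/[\pi(1+P^{\prime 2})]=1$, while the prefactor in Eq. (9) follows from the normalization condition (10) via

$\int\frac{\mathrm{d}p}{2\pi}\left[\phi_{\delta}^{(2)}(p)\right]^{2}=\frac{4q_{0}^{3}}{2\pi}\int_{-\infty}^{\infty}\frac{\mathrm{d}p}{(q_{0}^{2}+p^{2})^{2}}=\frac{4q_{0}^{3}}{2\pi}\,\frac{\pi}{2q_{0}^{3}}=1.$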
#### II.2.2 Type-I potentials: $v_{0}F(0)<0$
Next, we discuss the potentials with $v_{0}F(0)<0$, that is with overall
negative integral over space, see Eq. (4), which we define as type-I
potentials. The requirements for the potentials presented in the beginning of
section II still hold. According to Ref. [1], we have
$q_{0}=-v_{0}F(0)+O(v_{0}^{2})$ (11)
as $v_{0}\to 0$. Hence, in this limit also $q_{0}\to 0$, and the approximation
$F\left[q_{0}(P-P^{\prime})\right]\simeq F(0)$ can be performed inside the
integral in Eq. (6). Thus, by using this approximation together with Eq. (11),
we obtain for Eq. (6) the same form as Eq. (8), that is the same integral
equation as for the contact interaction. Consequently, in this limit the
normalized wave function $\phi^{(2)}$ converges to the corresponding one
$\phi_{\delta}^{(2)}$, Eq. (9), obtained for the contact interaction.
Effectively, we have used here the fact that in momentum space the potential
is more slowly varying around $p^{\prime}=0$ compared to the wave function,
which becomes more localized as $\mathcal{E}^{(2)}\to 0^{-}$, or $q_{0}\to 0$
accordingly. This argument is equivalent to the picture in coordinate
representation that in the limit $\mathcal{E}^{(2)}\to 0^{-}$, the bound state
wave function gets broader with respect to the fixed range of the potential.
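This picture is explicit for the limiting solution: the Fourier transform of the Lorentzian (9) is the normalized exponential bound state (with $\psi_{\delta}^{(2)}$ denoting the coordinate-space wave function),

$\psi_{\delta}^{(2)}(\xi)=\int\frac{\mathrm{d}p}{2\pi}\,\mathrm{e}^{\mathrm{i}p\xi}\,\phi_{\delta}^{(2)}(p)=\sqrt{q_{0}}\,\mathrm{e}^{-q_{0}\left|{\xi}\right|},$

whose spatial extent $\sim 1/q_{0}$ diverges as $q_{0}\to 0$ while the potential range remains fixed.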
#### II.2.3 Type-II potentials: $F(0)=0$
Now we analyze potentials for which the integral over space vanishes, that is
for which $F(0)=0$, as follows from Eq. (4). We denote them as potentials of
type II.
Here, we additionally require
$\frac{\left|{F(p)}\right|^{2}}{p^{2}}<\infty\qquad\mathrm{as}\ p\to 0,$ (12)
which is always fulfilled for an analytic and smooth potential shape. For
these type-II potentials, the linear relation (11) between $q_{0}$ and
$v_{0}F(0)$ does not hold. Instead, Ref. [1] derived the quadratic dependence
$q_{0}\simeq\frac{v_{0}^{2}}{\pi}\int\mathrm{d}p\frac{\left|{F(p)}\right|^{2}}{p^{2}}.$
(13)
In order to prove that, as $q_{0}\to 0$, the type-II potentials also yield the
same solutions as the contact interaction, we iterate Eq. (6) once and obtain
$\displaystyle\phi^{(2)}(P)=$
$\displaystyle~{}\frac{1}{1+P^{2}}\frac{v_{0}^{2}}{q_{0}^{2}}\int\frac{\mathrm{d}P^{\prime}}{\pi}\frac{F\left[q_{0}(P-P^{\prime})\right]}{1+P^{\prime
2}}$
$\displaystyle\times\int\frac{\mathrm{d}P^{\prime\prime}}{\pi}F\left[q_{0}(P^{\prime}-P^{\prime\prime})\right]\phi^{(2)}(P^{\prime\prime})$
(14)
or
$\displaystyle\phi^{(2)}(P)=$
$\displaystyle~{}\frac{1}{1+P^{2}}\frac{v_{0}^{2}}{q_{0}}\int\frac{\mathrm{d}P^{\prime\prime}}{\pi}\phi^{(2)}(P^{\prime\prime})$
$\displaystyle\times\int\frac{\mathrm{d}p^{\prime}}{\pi}\frac{F\left(q_{0}P-p^{\prime}\right)F\left(p^{\prime}-q_{0}P^{\prime\prime}\right)}{q_{0}^{2}+p^{\prime
2}},$ (15)
where we have rescaled the integration variable $P^{\prime}\equiv
p^{\prime}/q_{0}$.
In order to perform the limit $q_{0}\to 0$ in the integral over $p^{\prime}$,
we have to separately discuss the case $p^{\prime}=0$. Due to Eq. (12), the
integrand
$\frac{F\left(q_{0}P\right)F\left(-q_{0}P^{\prime\prime}\right)}{q_{0}^{2}}<\infty$
(16)
remains finite. For all $p^{\prime}\neq 0$, the limit $q_{0}\to 0$ can be
performed straightforwardly, hence due to Eq. (16) we can replace the full
integral over $p^{\prime}$ in Eq. (II.2.3) by the zero-order Taylor expansion
term
$\int\frac{\mathrm{d}p^{\prime}}{\pi}\frac{F\left(-p^{\prime}\right)F\left(p^{\prime}\right)}{p^{\prime
2}}.$ (17)
As a result, we obtain for $q_{0}\to 0$ the integral equation
$\displaystyle\phi^{(2)}(P)=\frac{4}{1+P^{2}}\frac{v_{0}^{2}}{q_{0}}\int\frac{\mathrm{d}p^{\prime}}{2\pi}\frac{\left|{F(p^{\prime})}\right|^{2}}{p^{\prime
2}}\int\frac{\mathrm{d}P^{\prime\prime}}{2\pi}\phi^{(2)}(P^{\prime\prime})$
(18)
where we have made use of the fact that $F(-p)=[F(p)]^{*}$. Application of Eq.
(13) in Eq. (18) then leads to the same integral equation (8) as for the
contact interaction. Thus, in the limit $q_{0}\to 0$, also the type-II
potentials yield solutions $\phi^{(2)}$ which converge to the same limit
functions $\phi_{\delta}^{(2)}$, Eq. (9), as for the contact interaction.
This concludes the proof of universality in the two-body system for the
potentials of type I and II. This universality is equivalent to the statement
that in the unitary limit, $\mathcal{E}_{0}^{(2)}\to 0^{-}$, the corresponding
two-body pseudopotential is the delta-potential, even though the delta
potential does not feature the property of a vanishing integral over space.
## III Three interacting particles
In this section we first introduce the one-dimensional three-body system which
is at the focus of this article. Next, we present a proof of three-body
universality that is valid for both type-I and type-II potentials in the
weakly-interacting regime.
### III.1 The three-body system
We now add a third particle to the two-body system, also constrained to 1D and
identical to the other heavy particle of mass $M$. We assume the same
interaction between the light particle and each heavy one, as introduced in
section II, but no interaction between the two heavy ones.
The homogeneous Lippmann-Schwinger equation [13, 14] governing the bound
states in this system can be formulated in complete analogy to the two-body
case discussed in section II
$|\Phi\rangle=G_{\epsilon}^{(0)}(V_{31}+V_{12})|\Phi\rangle.$ (19)
Here however, there are of course two interaction terms, $V_{31}$ and
$V_{12}$, corresponding to the interactions of the light particle (particle 1)
with each of the two heavy ones (particles 2 and 3). In the center-of-mass-
frame of the three-body system, Eq. (19) can be cast into the form
$\displaystyle\Phi(P,K)=\frac{v_{0}}{q_{0}}G_{\epsilon}^{(0)}(P,K)\int\frac{\mathrm{d}P^{\prime}}{\pi}F[q_{0}(P-P^{\prime})]$
$\displaystyle\
\times\left[\Phi\left(P^{\prime},K-\frac{P-P^{\prime}}{2}\right)+\Phi\left(P^{\prime},K+\frac{P-P^{\prime}}{2}\right)\right]$
(20)
where $\Phi(P,K)\equiv\langle P,K|\Phi\rangle$ is the three-body wave function
of the two relative motions.
The momenta $P\equiv p/q_{0}$ and $K\equiv k/q_{0}$, which are scaled
accordingly by $q_{0}$, Eq. (5), describe the two relative motions. Indeed,
$k$ denotes the relative Jacobi-momentum between the two heavy particles. On
the other hand, $p$ is the Jacobi-momentum of the light particle relative to
the center of mass of the two heavy ones. We have introduced the free-particle
three-body Green function
$G_{\epsilon}^{(0)}(P,K)=\frac{1}{\epsilon-\alpha_{p}P^{2}-\alpha_{k}K^{2}}$
(21)
with the coefficients $\alpha_{p}\equiv(1+2\alpha)/[2(1+\alpha)]$ and
$\alpha_{k}\equiv 2/(1+\alpha)$ depending only on the mass ratio $\alpha=M/m$.
Moreover, the three-body binding energy in units of the energy of the ground
state in the heavy-light subsystems is denoted by
$\epsilon\equiv\frac{\mathcal{E}}{\left|{\mathcal{E}^{(2)}_{0}}\right|}.$ (22)
As we only discuss three-body bound states, we restrict the three-body energy
and therefore $\epsilon$ to negative values.
We are interested in the universal [9], that is, interaction-independent,
behavior of this three-body system in the weakly-interacting limit $v_{0}\to
0$. In particular, we analyze universality of the three-body bound states in
terms of the energy spectrum and the corresponding wave functions.
### III.2 Proof of three-body universality
For the type-I potentials, an analytic proof of universality performed in
coordinate representation has already been presented in Ref. [11]. This proof,
however, cannot be performed in the same way for type-II potentials. In this
subsection we therefore first revisit the original proof [11] of universality
for type-I potentials, but in momentum representation. For this we consider
the cases of the contact interaction and any interaction of type I. Next, we
extend the proof to type-II potentials.
#### III.2.1 Contact interaction
We start by considering the case of the contact interaction
$f_{\delta}(\xi)=\delta(\xi)$. In this case $F_{\delta}(p)=1$, and with the
help of Eq. (7) the three-body integral equation (III.1) simplifies to the
form
$\displaystyle\Phi(P,K)=-G_{\epsilon}^{(0)}(P,K)\int\frac{\mathrm{d}P^{\prime}}{\pi}\times$
(23)
$\displaystyle\left[\Phi\left(P^{\prime},K-\frac{P-P^{\prime}}{2}\right)+\Phi\left(P^{\prime},K+\frac{P-P^{\prime}}{2}\right)\right].$
Here, $\epsilon$ enters only as a parameter in the Green function
$G_{\epsilon}^{(0)}$.
We denote the solutions of Eq. (23) for the bound-state energy spectrum and
the corresponding wave functions by $\epsilon_{n}^{\star}$ and
$\Phi_{n}^{\star}$, respectively. We emphasize that Eq. (23) is independent of
$q_{0}$ and $\mathcal{E}_{0}^{(2)}$, therefore the solutions
$\epsilon_{n}^{\star}$ and $\Phi_{n}^{\star}$ are scale-invariant for all
values of the two-body binding energy. A more detailed analysis, a table of these energy ratios for a selection of experimentally relevant mass ratios, and a representation of the full three-body wave functions can be found in Ref. [11].
#### III.2.2 Type-I potentials: $v_{0}F(0)<0$
Now we discuss the type-I potentials. According to Eq. (11), the expression
$v_{0}F[q_{0}(P-P^{\prime})]/q_{0}$ present in Eq. (III.1) still converges to
$-1$, as $q_{0}\to 0$. Thus, in this limit we obtain for Eq. (III.1) the same
integral equation (23) as for the contact interaction.
Consequently, as $q_{0}\to 0$, the solutions $\epsilon_{0,n}$ and
$\Phi_{0,n}$, denoting the three-body energy spectrum and the three-body wave
functions for all type-I potentials, converge to the corresponding ones
$\epsilon_{n}^{\star}$ and $\Phi_{n}^{\star}$, obtained for the contact
interaction.
#### III.2.3 Type-II potentials: $F(0)=0$
Next, we present a proof of three-body universality for the type-II
potentials. Since in this case $q_{0}$ is of second order in $v_{0}$ and $F$, as summarized by Eq. (13), we iterate Eq. (III.1) once to obtain the next order in $v_{0}$ and $F$. In the same spirit as for the two-body system
presented in section II, this then allows us to perform the limit $q_{0}\to
0$.
Indeed, after iteration Eq. (III.1) takes the form
$\Phi(P,K)=\frac{v_{0}^{2}}{q_{0}}\,G_{\epsilon}^{(0)}(P,K)\int\frac{\mathrm{d}P^{\prime\prime}}{\pi}\left[I_{1}+I_{2}+I_{3}+I_{4}\right]$
(24)
with
$\displaystyle I_{1}\equiv\Phi\left(P^{\prime\prime},K-\frac{P-P^{\prime\prime}}{2}\right)\int\frac{\mathrm{d}P^{\prime}}{\pi}A_{-}(P,K,P^{\prime},P^{\prime\prime})$
$\displaystyle I_{2}\equiv\int\frac{\mathrm{d}P^{\prime}}{\pi}\Phi\left(P^{\prime\prime},K-\frac{P+P^{\prime\prime}}{2}+P^{\prime}\right)A_{-}(P,K,P^{\prime},P^{\prime\prime})$
$\displaystyle I_{3}\equiv\int\frac{\mathrm{d}P^{\prime}}{\pi}\Phi\left(P^{\prime\prime},K+\frac{P+P^{\prime\prime}}{2}-P^{\prime}\right)A_{+}(P,K,P^{\prime},P^{\prime\prime})$
$\displaystyle I_{4}\equiv\Phi\left(P^{\prime\prime},K+\frac{P-P^{\prime\prime}}{2}\right)\int\frac{\mathrm{d}P^{\prime}}{\pi}A_{+}(P,K,P^{\prime},P^{\prime\prime}),$ (25)
and
$A_{\pm}(P,K,P^{\prime},P^{\prime\prime})\equiv\frac{F[q_{0}(P-P^{\prime})]\,F[q_{0}(P^{\prime}-P^{\prime\prime})]}{q_{0}}\,G_{\epsilon}^{(0)}\left(P^{\prime},K\pm\frac{P-P^{\prime}}{2}\right).$ (26)
We now analyze the expressions $I_{j},\,j=1,2,3,4$. First, we note that in
$I_{1}$ and $I_{4}$ the argument of $\Phi$ is independent of $P^{\prime}$. On
the other hand, in $I_{2}$ and $I_{3}$ the wave function still depends on
$P^{\prime}$ and therefore remains inside the integral. The dependence of
$I_{j}$ on $q_{0}$ can be brought out more clearly by scaling the integration
variable $P^{\prime}\equiv p^{\prime}/q_{0}$. The expression
$\frac{A_{\pm}\left(P,K,\frac{p^{\prime}}{q_{0}},P^{\prime\prime}\right)}{q_{0}}=\frac{F(q_{0}P-p^{\prime})\,F(p^{\prime}-q_{0}P^{\prime\prime})}{q_{0}^{2}\epsilon-\alpha_{p}p^{\prime 2}-\alpha_{k}\left[q_{0}K\pm\frac{1}{2}\left(q_{0}P-p^{\prime}\right)\right]^{2}}$ (27)
then appears in each integral of $I_{j}$. For $p^{\prime}=0$ this expression
takes on the value
$\frac{A_{\pm}(P,K,0,P^{\prime\prime})}{q_{0}}=\frac{F(q_{0}P)F(-q_{0}P^{\prime\prime})}{q_{0}^{2}\left[\epsilon-\alpha_{k}(K\pm P/2)^{2}\right]}$ (28)
and remains finite also in the limit $q_{0}\to 0$, due to Eq. (12).
First, we discuss the integrals $I_{1}$ and $I_{4}$. Since $A_{\pm}$ is not
singular at $p^{\prime}=0$, we can replace in $I_{1}$ and $I_{4}$ the full
integral over $p^{\prime}$ by the zero-order Taylor expansion term
$\int\frac{\mathrm{d}P^{\prime}}{\pi}A_{\pm}(P,K,P^{\prime},P^{\prime\prime})\to-\int\frac{\mathrm{d}p^{\prime}}{\pi}\frac{\left|{F(p^{\prime})}\right|^{2}}{p^{\prime 2}}$ (29)
as $q_{0}\to 0$. Here we have used the identity $\alpha_{p}+\alpha_{k}/4=1$
and the property $F(-p)=[F(p)]^{*}$ of Eq. (4). Hence, in the limit $q_{0}\to 0$,
$I_{1}$ and $I_{4}$ are given by
$I_{1}\to-\Phi\left(P^{\prime\prime},K-\frac{P-P^{\prime\prime}}{2}\right)\int\frac{\mathrm{d}p^{\prime}}{\pi}\frac{\left|{F(p^{\prime})}\right|^{2}}{p^{\prime 2}}$ (30)
and
$I_{4}\to-\Phi\left(P^{\prime\prime},K+\frac{P-P^{\prime\prime}}{2}\right)\int\frac{\mathrm{d}p^{\prime}}{\pi}\frac{\left|{F(p^{\prime})}\right|^{2}}{p^{\prime 2}}.$ (31)
Next, we discuss the integrals $I_{2}$ and $I_{3}$. Inside the integration
over $p^{\prime}$, there exists the additional factor
$\Phi(P^{\prime\prime},K\pm(P+P^{\prime\prime})/2\mp p^{\prime}/q_{0})$, which
eliminates any contribution of the integrand for
$\left|{p^{\prime}}\right|>q_{0}$, as $q_{0}\to 0$. This is because we only
discuss normalizable bound states that vanish at infinity,
$\Phi(P,K\to\infty)\to 0$. Hence, only the integration in the domain
$\left|{p^{\prime}}\right|\leq q_{0}$ remains. Since according to Eqs. (28)
and (29), $A_{\pm}$ is finite therein, we can approximate
$\left|{I_{2}}\right|\leq\left|{C}\right|\int\limits_{-q_{0}}^{q_{0}}\frac{\mathrm{d}p^{\prime}}{\pi}\left|{\Phi\left(P^{\prime\prime},K-\frac{P+P^{\prime\prime}}{2}+\frac{p^{\prime}}{q_{0}}\right)}\right|$
(32)
with $\left|{C}\right|$ being the maximum value of $\left|{A_{\pm}}\right|$
inside this integration domain. In order to ensure a finite normalization, the
integral
$\iint\mathrm{d}p\,\mathrm{d}k\left|{\Phi(p,k)}\right|^{2}/(4\pi^{2})$ can
have at most a finite contribution from the interval $\left|{p}\right|<q_{0}$,
hence we deduce that the right hand side of Eq. (32), where
$\left|{\Phi}\right|$ enters only linearly, vanishes for $q_{0}\to 0$.
Equivalent arguments can be made also for $I_{3}$, thus in this limit
$I_{2}\to 0$ and $I_{3}\to 0$.
In total, Eq. (24) reduces to
$\displaystyle\Phi(P,K)=-G_{\epsilon}^{(0)}(P,K)\frac{v_{0}^{2}}{q_{0}}\int\frac{\mathrm{d}p^{\prime}}{\pi}\frac{\left|{F(p^{\prime})}\right|^{2}}{p^{\prime 2}}\int\frac{\mathrm{d}P^{\prime\prime}}{\pi}$
$\displaystyle\times\left[\Phi\left(P^{\prime\prime},K-\frac{P-P^{\prime\prime}}{2}\right)+\Phi\left(P^{\prime\prime},K+\frac{P-P^{\prime\prime}}{2}\right)\right].$ (33)
Application of Eq. (13) in this equation then finally leads to the same
integral equation (23) as for the contact interaction, which concludes the
proof.
As a consequence, in the weakly-interacting limit $v_{0}\to 0$, also for
type-II potentials the solutions $\epsilon_{0,n}$ and $\Phi_{0,n}$ for the
three-body energy spectrum and wave functions converge to the corresponding
ones $\epsilon_{n}^{\star}$ and $\Phi_{n}^{\star}$, obtained for the contact
interaction.
## IV Conclusion and Outlook
In the present Letter we have discussed universality of binding energies and
wave functions in both the two-body and three-body domain. While in the former
the concept of universality for the ground state is often assumed, we have
provided here an approach to prove it formally. Application of the same
approach to the three-body system has then allowed us to prove universality of
the binding energies and wave functions of three-body bound states. In
particular, they are shown to converge to the corresponding results for the
contact interaction, provided the pair-interactions are tuned to support a
weakly-bound two-body ground state. The presented proof of two- and three-body
universality is valid for attractive potentials of negative (type I) and
vanishing (type II) integral over space alike.
As a result, we can provide approximate expressions for the three-body binding
energies
$\mathcal{E}_{0,n}\simeq\begin{cases}-\epsilon_{n}^{\star}\,v_{0}^{2}\,\left[F(0)\right]^{2}&\qquad\text{(type I)}\\ -\epsilon_{n}^{\star}\,v_{0}^{4}\,\left[\displaystyle\int\frac{\mathrm{d}p}{\pi}\frac{\left|{F(p)}\right|^{2}}{p^{2}}\right]^{2}&\qquad\text{(type II)}\end{cases}$ (34)
valid in the case of small potential magnitude $v_{0}$, that is when the pair-
interactions are tuned to support a weakly-bound ground state in the heavy-
light subsystems.
The universality of energies and wave functions of three-body bound states for
finite-range interactions that are tuned to support a weakly-bound excited
state in the heavy-light subsystems has been demonstrated numerically in Ref.
[15]. An analytical proof as presented in this work would be desirable and
might explain the reported [15] differences and similarities compared to the
situation of a weakly-bound heavy-light ground state.
###### Acknowledgements.
We are very grateful to M. Zimmermann and W. P. Schleich for fruitful
discussions. We thank the Center for Integrated Quantum Science and Technology
(IQST) for financial support. The research of the IQST is financially
supported by the Ministry of Science, Research and Arts Baden-Württemberg.
## References
* Simon [1976] B. Simon, Ann. Phys. (NY) 97, 279 (1976).
* Böttcher _et al._ [2021] F. Böttcher, J.-N. Schmidt, J. Hertkorn, K. S. H. Ng, S. D. Graham, M. Guo, T. Langen, and T. Pfau, Rep. Prog. Phys. 84, 012403 (2021).
* Volosniev _et al._ [2013] A. G. Volosniev, J. R. Armstrong, D. V. Fedorov, A. S. Jensen, M. Valiente, and N. T. Zinner, New J. Phys. 15, 043046 (2013).
* Pricoupenko and Petrov [2021] A. Pricoupenko and D. S. Petrov, Phys. Rev. A 103, 033326 (2021).
* Klawunn _et al._ [2010] M. Klawunn, A. Pikovski, and L. Santos, Phys. Rev. A 82, 044701 (2010).
* Rosenkranz and Bao [2011] M. Rosenkranz and W. Bao, Phys. Rev. A 84, 050701 (2011).
* Volosniev _et al._ [2011a] A. G. Volosniev, D. V. Fedorov, A. S. Jensen, and N. T. Zinner, Phys. Rev. Lett. 106, 250401 (2011a).
* Volosniev _et al._ [2011b] A. G. Volosniev, N. T. Zinner, D. V. Fedorov, A. S. Jensen, and B. Wunsch, J. Phys. B: At., Mol. Opt. Phys. 44, 125301 (2011b).
* Braaten and Hammer [2006] E. Braaten and H.-W. Hammer, Phys. Rep. 428, 259 (2006).
* Gat and Rosenstein [1993] G. Gat and B. Rosenstein, Phys. Rev. Lett. 70, 5 (1993).
* Happ _et al._ [2019] L. Happ, M. Zimmermann, S. I. Betelu, W. P. Schleich, and M. A. Efremov, Phys. Rev. A 100, 012709 (2019).
* Girardeau _et al._ [2004] M. Girardeau, H. Nguyen, and M. Olshanii, Opt. Commun. 243, 3 (2004).
* Lippmann and Schwinger [1950] B. Lippmann and J. Schwinger, Phys. Rev. 79, 469 (1950).
* Sitenko [1991] A. G. Sitenko, _Scattering Theory_ (Springer, Berlin, 1991).
* [15] L. Happ, M. Zimmermann, and M. A. Efremov, arXiv:2102.06403.
These authors contributed equally.
# Quantum Floquet engineering with an exactly solvable tight-binding chain in
a cavity
Christian J. Eckhardt, Institut für Theorie der Statistischen Physik, RWTH Aachen University and JARA-Fundamentals of Future Information Technology, 52056 Aachen, Germany; Max Planck Institute for the Structure and Dynamics of Matter, Center for Free-Electron Laser Science, Luruper Chaussee 149, 22761 Hamburg, Germany
Giacomo Passetti, Institut für Theorie der Statistischen Physik, RWTH Aachen University and JARA-Fundamentals of Future Information Technology, 52056 Aachen, Germany
Moustafa Othman, Technische Universität Braunschweig, Institut für Mathematische Physik, Mendelssohnstraße 3, 38106 Braunschweig, Germany
Christoph Karrasch, Technische Universität Braunschweig, Institut für Mathematische Physik, Mendelssohnstraße 3, 38106 Braunschweig, Germany
Fabio Cavaliere, Dipartimento di Fisica, Università di Genova, 16146 Genova, Italy; SPIN-CNR, 16146 Genova, Italy
Michael A. Sentef, Max Planck Institute for the Structure and Dynamics of Matter, Center for Free-Electron Laser Science, Luruper Chaussee 149, 22761 Hamburg, Germany
Dante M. Kennes, [email protected], Institut für Theorie der Statistischen Physik, RWTH Aachen University and JARA-Fundamentals of Future Information Technology, 52056 Aachen, Germany; Max Planck Institute for the Structure and Dynamics of Matter, Center for Free-Electron Laser Science, Luruper Chaussee 149, 22761 Hamburg, Germany
###### Abstract
Recent experimental advances enable the manipulation of quantum matter by
exploiting the quantum nature of light. However, paradigmatic exactly solvable
models, such as the Dicke, Rabi or Jaynes-Cummings models for quantum-optical
systems, are scarce in the corresponding solid-state, quantum materials
context. Focusing on the long-wavelength limit for the light, here, we provide
such an exactly solvable model given by a tight-binding chain coupled to a
single cavity mode via a quantized version of the Peierls substitution. We
show that perturbative expansions in the light-matter coupling have to be
taken with care and can easily lead to a false superradiant phase.
Furthermore, we provide an analytical expression for the groundstate in the
thermodynamic limit, in which the cavity photons are squeezed by the light-
matter coupling. In addition, we derive analytical expressions for the
electronic single-particle spectral function and optical conductivity. We
unveil quantum Floquet engineering signatures in these dynamical response
functions, such as analogs to dynamical localization and replica side bands,
complementing paradigmatic classical Floquet engineering results. Strikingly,
the Drude weight in the optical conductivity of the electrons is partially
suppressed by the presence of a single cavity mode through an induced
electron-electron interaction.
## I Introduction
The control of matter through light, or more generally electromagnetic (EM)
radiation, is a research direction that has gained tremendous attention
recently.1 It connects to many topical fields including information processing
and steering chemical reactions.2, 3, 4, 5, 6, 7, 8, 9 In recent years, some
exciting progress has been made towards this goal by periodically driving
materials with light in a regime where the quantum nature of the light field
can be disregarded. 10, 11 In this classical-light regime the physics of
materials under continuous-wave irradiation is efficiently described by
Floquet theory. 12, 13, 14 Within Floquet theory, a time-periodic Hamiltonian
is replaced by an effective quasi-static Hamiltonian, the so-called Floquet Hamiltonian, which
can include renormalized effective model parameters, new synthetically
generated terms, as well as Floquet sidebands, i.e., shakeoff features
separated by the driving frequency from the main resonances, in frequency-
dependent spectra. The search for driving protocols that realize certain
effective Hamiltonians with specific desired properties has become known as
Floquet engineering. 15, 14 Along these lines several ways to control matter
with light have been proposed, for example, the manipulation of topologically
non-trivial states, 16, 17, 18, 10, 11, 19, 20, 21, 22 strongly correlated
materials 23, 24, 25, 26, 27 and superconductors. 28, 29, 30, 31, 32, 33
However, a fundamental problem for driving materials with classical light is
heating, 31, 34, 35 which in many realistic setups prohibits versatile
control.
To circumvent detrimental heating, control of materials through quantum light
has recently been proposed. 36, 37, 38, 6, 9 The basic idea is to place a
material into an optical cavity by which the light-matter coupling can be
enhanced 39, 9 since the coupling is inversely proportional to the square-root
of the effective mode volume 39, 40. One can therefore bolster the coupling by
manufacturing smaller devices, or by employing near-field enhancement
effects.41 Through this enhancement of the coupling, vacuum fluctuations or
few photon states of the cavity can already have a sizeable effect on the
matter degrees of freedom, alleviating the need for strong classical driving fields.
light-matter systems have been realized based on different implementation
schemes, starting from the first results obtained with microwave and optical
cavities. 42, 43 More recently, sizeable light-matter coupling (LMC) has been
implemented in superconducting circuits,44 and it is nowadays possible to
couple few electrons to EM fields in split-ring resonators. 45, 46, 47 These
technological advances have led to the observation of LMC-controlled phenomena
such as transport properties being tuned by polaritonic excitations 48 and
Bose-Einstein condensation of exciton-polaritons. 49, 50, 51 Another route to
control matter by quantum light is to influence chemical reactions 52, 53
through the selective enhancement of desired reactive paths and blocking of
others. In addition, there have been several proposals to influence
superconductivity in a cavity, either by coupling cavity modes to the phonons
involved in electronic pairing, 54 to magnons that are believed to form the
pairing glue in cuprates, 55 or by directly coupling to the electronic degrees
of freedom. 56, 57, 58, 59, 60 Concurrently, experimental evidence of cavity-
enhanced superconductivity was recently reported, whose origin and
interpretation are still under debate. 61
To turn the question around and to add another facet to the problem of LMC,
one can inversely ask: How can one engineer the light field of a cavity using
matter? One prominent and widely discussed route is the realization of a
superradiant phase in thermal equilibrium. 62, 63, 64, 65, 66, 67, 68, 69, 70
Generally, systems that require a quantum-mechanical treatment of both light
and matter will host hybrid states that mix light and matter degrees of
freedom. 71 Describing such light-matter systems is a formidable challenge and
often relies on using few-body simplifications. For instance, describing
matter through effective few-level systems has led to paradigmatic models such
as the Dicke, Rabi or Jaynes-Cummings models. These simplified models capture
certain aspects of the underlying physics well.72, 73, 74, 39, 75 However, in
order to capture collective phenomena of solid-state systems, a many-body
description of the material is needed. Efforts in this direction include
first-principles approaches, such as the density functional reformulation of
QED, 76, 77, 78 generalized coupled cluster theory 79 or hybrid-orbital
approaches. 80, 81 In addition, a recent work presents the analytic solution
of the free 2D electron gas coupled to a cavity. 82
In this work, we introduce and study an exactly solvable quantum lattice model
for a solid coupled to the quantized light field of a cavity. At the same
time, we aim at connecting quantum-photon phenomena to previous results of
Floquet engineering by investigating the quantum-to-classical crossover. To
this end, we focus on a tight-binding chain coupled to a single mode modelling
a resonance of a cavity, through a quantized version of the Peierls
substitution that was recently introduced.83, 84, 85, 86 As we aim to describe
solid-state systems, we are mainly interested in the thermodynamic (TD) limit
of this model, but we also connect to prior finite system size studies. First,
we determine the groundstate (GS) of the system. By exact numerical means, we
exclude the existence of an equilibrium superradiant phase, consistent with
existing no-go theorems. 62, 64 We show explicitly that gauge invariance must
be taken into account carefully to prohibit false signatures of a superradiant
phase upon expanding the Peierls substitution in orders of the LMC. We then
concentrate on the thermodynamic limit where the electronic groundstate is
found to remain the Fermi sea of the uncoupled system centered at quasi-
momentum $k=0$ consistent with the findings of Rokaj et al.82 Using this
insight, we analytically determine the photonic GS of the system to be a
squeezed state. Additionally, an analytical expression for the electronic
spectral function is given. With this we establish the quantum analogues to
paradigmatic Floquet results, such as dynamical localization or the emergence
of replica bands, and pinpoint the differences between the classical and
quantum cases. To make the connection to Floquet results explicit, we analyze
the quantum-to-classical crossover and show that the nonequilibrium spectral
function of the system approaches that of a classically driven system in the
limit of strong driving. Finally, the current response to a spatially uniform
external field, i.e., the optical conductivity, is calculated and an f-sum rule
for cavity-coupled systems is identified. The presence of the single cavity
mode induces a partial suppression of the Drude peak that persists even in
the TD limit. This result is consistent with that previously found by Rokaj et
al.82 for the 2D electron gas. We attribute this feature to the effective
electron-electron interaction mediated by the cavity.
## II Results
### II.1 Model
Figure 1: Model and groundstate. (a) Illustration of the studied model: A one
dimensional tight-binding chain with nearest neighbour hopping $t_{h}$ is
coupled to the first transmittance resonance (blue shaded area) of a cavity at
$\omega_{0}$. We model the frequency ($\omega$) dependent coupling (black
line) as a box function (red line) and assume that its width
$\Delta\omega\ll\omega_{0}$ to arrive at an effective single mode that couples
strongly to the electrons (see the Model subsection under Results). (b) Energy
density $e_{\psi_{\mathrm{T(FS)}}}$ according to Eq. (5) (colored lines), with
the electronic part of the wavefunction $\ket{\psi}_{f}$ chosen as a single
connected quasi-momentum region being occupied (Fermi sea, FS). The minimum at
wave-vector $k=0$ coincides with that of the variational scheme described in
the main text (see the Groundstate subsection under Results and the Methods
section) where we have used trial wave-functions with arbitrary distributions
in momentum-space, i.e., not limited to a connected region. Inset: Average
photon number $N_{\text{phot}}:=\braket{a^{{\dagger}}a}$ (colored lines) for
varying coupling strength $g$, as function of the system size $L$. For all $g$
values shown, the number of bosons in the cavity converges at large $L$ to a
finite value (black dashed lines). The red vertical line corresponds to the
system size used in the main plot ($L=1010$). $N_{\rm max}^{\rm boson}=100$
has been used for the bosonic Hilbert space. (c) The exact probability
distribution $P(n_{\rm phot})$ in logarithmic scale of the photon number is
compared to the one given by a squeezed state (black crosses) for the
groundstate of a chain of length $L=510$ (blue bars) and $L=10$ (yellow bars).
Here the coupling constant is set to $g=2$ and $N_{\rm max}^{\rm boson}=100$.
In the inset, the same quantity is plotted on a linear scale. (d) Ratio of
variance of canonical momentum and coordinate operator $\Delta P/\Delta X$
(colored lines) as function of the coupling $g$ for three different values of
$\omega_{0}$ and two representative squeezing ellipses for $g=0.2$ and
$g=0.75$, respectively.
We consider a non-interacting tight-binding chain with nearest-neighbour
hopping, as illustrated in Fig. 1(a). The chain is coupled to the first
transmittance resonance of a cavity. We take into account a continuum of modes
in the cavity but neglect modes that have a wave-vector with non-zero
component in the direction of the chain as their coupling with the matter
degrees of freedom will be strongly suppressed by the presence of the cavity.
This essentially amounts to the dipole approximation. The frequency of the
modes is confined to a small region of width $\Delta\omega$ around the
resonance of the empty cavity at $\omega_{0}$ ($\Delta\omega\ll\omega_{0}$).
We therefore model these modes as all having the same frequency $\omega_{0}$.
Additionally, we assume that they couple to the chain with equal strength
essentially replacing the frequency dependent profile of the coupling by a box
function of width $\Delta\omega$ centered at $\omega_{0}$ (see Fig. 1(a)). In
Supplementary Note 1, we show that having selected $N$ modes, this setup
results in one single mode strongly coupling to the electrons and $N-1$
uncoupled modes. Hence, we model the system as electrons coupled to an
effective single cavity mode that is spatially constant along the chain. The
corresponding Hamiltonian reads83
$H=\omega_{0}\left(a^{\dagger}a+\frac{1}{2}\right)-\sum_{j=1}^{L}\left[t_{h}e^{-i\,\frac{g}{\sqrt{L}}(a^{\dagger}{+}a)}\,c_{j+1}^{\dagger}c_{j}+\text{h.c.}\right].$
(1)
Here $c_{j}(c_{j}^{\dagger})$ is the fermionic annihilation (creation)
operator at lattice site $j$, and $a(a^{\dagger})$ is the bosonic annihilation
(creation) operator of the single effective cavity mode. The latter are
related to the quantized electromagnetic vector potential via
$A=\frac{g}{\sqrt{L}}(a+a^{{\dagger}})$, with the convention $e=\hbar=c=1$ and
$L$ the number of lattice sites. We use periodic boundary conditions and set
the lattice constant to $1$. One can show that, within a few-band truncation,
inclusion of the relevant effects of the LMC as well as gauge invariance are
guaranteed by the quantized form of the Peierls substitution employed to set
up the Hamiltonian given in Eq. (1).83, 84, 85 The coupling constant $g$
depends on the specifics of the system, such as the geometry and material
composition of the cavity. We keep the explicit dependence $1/\sqrt{L}$,
instead of including it in the dimensionless coupling parameter $g$, in order
to simplify the analysis of the thermodynamic limit. In quasi-momentum space,
the model takes the form
$H=\cos\left(\frac{g}{\sqrt{L}}(a+a^{{\dagger}})\right)\mathcal{T}+\sin\left(\frac{g}{\sqrt{L}}(a+a^{{\dagger}})\right)\mathcal{J}+\omega_{0}\left(a^{\dagger}a+\frac{1}{2}\right),$
(2)
where we have introduced the kinetic energy and current operators
$\displaystyle\mathcal{T}:=\sum_{k}-2t_{h}\cos(k)\,c^{\dagger}_{k}c_{k}=:\sum_{k}\varepsilon_{k}\,c^{\dagger}_{k}c_{k},$
$\displaystyle\mathcal{J}:=\sum_{k}2t_{h}\sin(k)\,c^{\dagger}_{k}c_{k}=:\sum_{k}v_{k}\,c^{\dagger}_{k}c_{k},$ (3)
and $\varepsilon_{k}$, $v_{k}$ are the band dispersion and band velocity at
quasi-momentum $k$, respectively. $c_{k}^{({\dagger})}$ annihilates (creates)
an electron at quasi-momentum $k$. These expressions highlight the extensive
number of constants of motion of the model, namely
$\rho_{k}=c_{k}^{\dagger}c_{k}$ with $[\rho_{k},H]=0$ for all $k\in\text{BZ}$
(Brillouin Zone), which is a consequence of the spatially constant vector
potential not breaking the lattice periodicity and preserving fermionic quasi-
momentum in any electron-photon scattering process. 82 As a consequence, the
eigenstates of the Hamiltonian can be factorized as
$H|\Psi\rangle=E_{\Psi}|\Psi\rangle\,;\qquad|\Psi\rangle=\ket{\phi}_{b}\otimes\ket{\psi}_{f},$ (4)
where $\ket{\phi}_{b}$ is the photonic part of the wavefunction, and
$\ket{\psi}_{f}$ is an eigenstate of the electronic density operator
$\rho=\frac{1}{L}\sum_{k}c^{\dagger}_{k}c_{k}$.
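The factorization in Eq. (4) can be checked numerically for a tiny chain: the ground-state energy of Eq. (1), obtained by full real-space exact diagonalization, must coincide with the minimum over momentum-occupation patterns of the bosonic Hamiltonians obtained from Eq. (2). The sketch below is our own illustration (not the authors' code), with illustrative parameters $L=4$, $g=0.5$, $\omega_{0}=t_{h}=1$ and a boson-number cutoff of our choosing:

```python
import numpy as np
from itertools import product

L, th, g, w0, nmax = 4, 1.0, 0.5, 1.0, 20

# bosonic operators in a truncated Fock space
a = np.diag(np.sqrt(np.arange(1, nmax)), 1)      # annihilation operator
X = a + a.T                                      # a + a^dagger (real symmetric)
lam, U = np.linalg.eigh(X)
Hb = w0 * (a.T @ a + 0.5 * np.eye(nmax))

# Peierls phase e^{-i g (a^dagger + a)/sqrt(L)} via the eigenbasis of X
peierls = U @ np.diag(np.exp(-1j * g * lam / np.sqrt(L))) @ U.T

# fermionic operators on L sites via the Jordan-Wigner construction
sz = np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])          # annihilates |1> -> |0>
def c_op(j):
    m = np.eye(1)
    for o in [sz] * j + [sm] + [np.eye(2)] * (L - 1 - j):
        m = np.kron(m, o)
    return m
c = [c_op(j) for j in range(L)]

# full Hamiltonian of Eq. (1), real space, periodic boundary conditions
H = np.kron(Hb, np.eye(2**L)).astype(complex)
for j in range(L):
    term = -th * np.kron(peierls, c[(j + 1) % L].T @ c[j])
    H += term + term.conj().T
E0 = np.linalg.eigvalsh(H)[0]

# factorized form, Eq. (2): scan all momentum-occupation patterns {n_k}
ks = 2 * np.pi * np.arange(L) / L
cosX = U @ np.diag(np.cos(g * lam / np.sqrt(L))) @ U.T
sinX = U @ np.diag(np.sin(g * lam / np.sqrt(L))) @ U.T
best = min(
    np.linalg.eigvalsh(
        Hb
        + sum(-2 * th * np.cos(k) for k, n in zip(ks, occ) if n) * cosX
        + sum(2 * th * np.sin(k) for k, n in zip(ks, occ) if n) * sinX
    )[0]
    for occ in product([0, 1], repeat=L)
)
```

Since $[\rho_{k},H]=0$, the full Hamiltonian block-diagonalizes exactly into the bosonic problems of Eq. (2) (the same truncated Fock space enters both sides), so `E0` and `best` agree to machine precision.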
### II.2 Groundstate
We determine the GS of the system
$\ket{\Psi_{\mathrm{GS}}}=\ket{\phi_{\mathrm{GS}}}_{b}\otimes\ket{\psi_{\mathrm{GS}}}_{f}$
in two different ways: (i) by a variational scheme that exploits the extensive
number of constants of motion varying the electronic occupation and using
exact diagonalization for the remaining non-harmonic bosonic system (see the
Methods section) and (ii) by full exact diagonalization of the combined
electronic and bosonic system (ED). The variational scheme can be performed
for hundreds of lattice sites while the ED calculations serve to verify the
variational results for small system sizes. Both numerical methods are exact
in the sense that their accuracy is only limited by the cutoff of the maximum
boson number in the Fock space $N_{\rm max}^{\rm boson}$. This can, however,
be chosen large enough to converge all calculations to arbitrary precision,
making the results obtained with ED identical to those obtained with the
variational method in the case of small system sizes. Since the data reported
in the plots has been acquired for system sizes too large for ED to handle,
all reported results have been obtained with the variational scheme.
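A minimal sketch of scheme (i), assuming the momentum-space form of Eq. (2) and illustrative parameters of our own choosing (this is not the authors' implementation): for a trial Fermi sea, the electronic operators $\mathcal{T}$ and $\mathcal{J}$ are replaced by their expectation values, and the remaining non-harmonic bosonic Hamiltonian is diagonalized in a truncated Fock space.

```python
import numpy as np

def fs_energy(center, L=50, g=0.5, omega0=1.0, th=1.0, nmax=40):
    """Groundstate energy density for a half-filled Fermi sea centred at
    quasi-momentum `center`: the electronic operators in Eq. (2) are
    replaced by their expectation values T and J, and the remaining
    non-harmonic bosonic Hamiltonian is diagonalized exactly."""
    ks = 2 * np.pi * np.arange(L) / L
    dist = np.angle(np.exp(1j * (ks - center)))   # signed distance on the BZ circle
    occ = ks[np.argsort(np.abs(dist))[: L // 2]]  # L/2 momenta closest to `center`
    T = np.sum(-2 * th * np.cos(occ))             # kinetic-energy expectation <T>
    J = np.sum(2 * th * np.sin(occ))              # current expectation <J>
    a = np.diag(np.sqrt(np.arange(1, nmax)), 1)   # truncated bosonic Fock space
    lam, U = np.linalg.eigh(a + a.T)              # eigenbasis of a + a^dagger
    cosX = U @ np.diag(np.cos(g * lam / np.sqrt(L))) @ U.T
    sinX = U @ np.diag(np.sin(g * lam / np.sqrt(L))) @ U.T
    H = omega0 * (a.T @ a + 0.5 * np.eye(nmax)) + T * cosX + J * sinX
    return np.linalg.eigvalsh(H)[0] / L

centers = np.linspace(-np.pi / 2, np.pi / 2, 21)
energies = [fs_energy(c) for c in centers]
# the energetic minimum sits at the unshifted Fermi sea, center = 0
```

Scanning `center` reproduces the qualitative content of Fig. 1(b): the shifted Fermi seas lose kinetic energy faster than the current coupling can compensate, so the minimum stays at $k=0$.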
We consider a half-filled electronic system with
$n:=\langle\rho\rangle=\frac{1}{2},$ and choose the cavity frequency
$\omega_{0}=t_{h}$, unless explicitly denoted otherwise. Within the
variational scheme, we find that the electronic part of the GS wavefunction
$|\psi_{\mathrm{GS}}\rangle_{f}$ is the Fermi sea (FS) around $k=0$ even at
non-zero $g$. In Fig. 1(b) we illustrate this for a subset of possible
electronic configurations. Here, following the procedure explained in the
Methods section, we take as fermionic trial wavefunctions
$|\psi_{\mathrm{T(FS)}}\rangle_{f}$ only connected regions in $k$-space
centered at different positions (FS center). Then we numerically determine the
GS energy $E_{\psi_{\mathrm{T(FS)}}}$ of the resulting bosonic Hamiltonian
$H_{\psi_{\mathrm{T(FS)}}}=\,_{f}\langle\psi_{\mathrm{T(FS)}}|H|\psi_{\mathrm{T(FS)}}\rangle_{f}$.
In Fig. 1(b) we show the energy density
$e_{\psi_{\mathrm{T(FS)}}}=\frac{E_{\psi_{\mathrm{T(FS)}}}}{L}$ (5)
as a function of the center of the connected region (FS center). The energetic
minimum always remains at the FS centered around $k=0$ for all considered
coupling values. This shows that the fermionic part of the GS wavefunction
remains unchanged upon turning on a coupling to the bosonic mode, a result
that is consistent with the two-dimensional electron gas considered by Rokaj
et al.82 The unbiased variational scheme (see the Methods section) is not
limited to connected regions in $k$-space, and a full variation in electronic
state space confirms the unshifted Fermi sea as the true ground state.
We now discuss the bosonic part of the wavefunction,
$|\phi_{\mathrm{GS}}\rangle_{b}$. To this end, we define the photon number
eigenstates as $a^{{\dagger}}a\ket{n_{\rm phot}}=n_{\rm phot}\ket{n_{\rm
phot}}$ and introduce the probability distribution $P(n_{\rm
phot}):=|\braket{n_{\rm phot}}{\phi_{GS}}|^{2}$ of finding $n_{\rm phot}$
photons in the GS.
$P(n_{\rm phot})$ for $g=2$ (Fig. 1(c)) shows that only even number states
contribute, implying that the bosonic wavefunction has a probability
distribution that is incompatible with a coherent state. Instead, $P(n_{\rm
phot})$ agrees perfectly with a squeezed state with the same average photon
number, indicated by the black crosses in Fig. 1(c). This finding does not
change qualitatively for different values of $g$. In the inset of Fig. 1(b) we
show the scaling of the average photon number in the GS,
$N_{\text{phot}}=\langle a^{\dagger}a\rangle$. $N_{\text{phot}}$ is found not
to grow extensively with the system size, which excludes the existence of a
superradiant phase.
Put differently, the absence of a superradiant phase implies that the
expectation value of the bosonic operators in the GS does not scale with the
system size. This allows us to perform a scaling analysis of contributions to
the GS energy
$\displaystyle\langle\Psi_{\mathrm{GS}}|H|\Psi_{\mathrm{GS}}\rangle=\underbrace{\langle\Psi_{\mathrm{GS}}|\omega_{0}\left(a^{{\dagger}}a+\frac{1}{2}\right)|\Psi_{\mathrm{GS}}\rangle}_{\sim 1}+\underbrace{\langle\Psi_{\mathrm{GS}}|\mathcal{T}|\Psi_{\mathrm{GS}}\rangle}_{\sim L}+\underbrace{\langle\Psi_{\mathrm{GS}}|\frac{g}{\sqrt{L}}\left(a^{\dagger}+a\right)\mathcal{J}|\Psi_{\mathrm{GS}}\rangle}_{\sim\sqrt{L}}$
$\displaystyle-\underbrace{\langle\Psi_{\mathrm{GS}}|\frac{1}{2}\frac{g^{2}}{L}\left(a^{\dagger}+a\right)^{2}\mathcal{T}|\Psi_{\mathrm{GS}}\rangle}_{\sim 1}+\mathcal{O}\left(\frac{1}{\sqrt{L}}\right).$ (6)
In the TD limit, the GS energy is entirely composed of terms that are at most
quadratic in the photon field amplitude
$A=\frac{g}{\sqrt{L}}(a^{\dagger}{+}a)$. In order to simplify the following
discussion, we diagonalize the Hamiltonian up to quadratic ($A^{2}$) order by
a combined squeezing and displacement transformation yielding (see
Supplementary Note 2)
$H^{\text{D}}=\mathcal{W}[\mathcal{T}]\left(\beta^{{\dagger}}\beta+\frac{1}{2}\right)+\mathcal{T}-\frac{g^{2}\omega_{0}\mathcal{W}[\mathcal{T}]^{-2}}{L}\mathcal{J}^{2}\,;\qquad\mathcal{W}[\mathcal{T}]=\omega_{0}\sqrt{1-2\frac{g^{2}}{L\omega_{0}}\mathcal{T}},$ (7)
where $\beta^{({\dagger})}$ annihilates (creates) a coherent squeezed state.
30 In terms of the original creation and annihilation operators of the
unsqueezed cavity photons, the corresponding squeezed-state operators are
given as
$\displaystyle\beta^{\dagger}$
$\displaystyle=\cosh\left(\frac{1}{2}\ln\left(\frac{\mathcal{W}[\mathcal{T}]}{\omega_{0}}\right)\right)\left(a^{\dagger}+\frac{g\,\omega_{0}\mathcal{W}[\mathcal{T}]^{-2}}{L}\mathcal{J}\right)+\sinh\left(\frac{1}{2}\ln\left(\frac{\mathcal{W}[\mathcal{T}]}{\omega_{0}}\right)\right)\left(a+\frac{g\,\omega_{0}\mathcal{W}[\mathcal{T}]^{-2}}{L}\mathcal{J}\right),$
(8) $\displaystyle\beta$
$\displaystyle=\cosh\left(\frac{1}{2}\ln\left(\frac{\mathcal{W}[\mathcal{T}]}{\omega_{0}}\right)\right)\left(a+\frac{g\,\omega_{0}\mathcal{W}[\mathcal{T}]^{-2}}{L}\mathcal{J}\right)+\sinh\left(\frac{1}{2}\ln\left(\frac{\mathcal{W}[\mathcal{T}]}{\omega_{0}}\right)\right)\left(a^{\dagger}+\frac{g\,\omega_{0}\mathcal{W}[\mathcal{T}]^{-2}}{L}\mathcal{J}\right).$
The last term in $H^{\text{D}}$ of Eq. (7) highlights that the cavity induces
an effective electron-electron interaction.
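The structure of Eq. (7) can be verified numerically: treating $\mathcal{T}$ as a (negative) c-number, the quadratic photon Hamiltonian can be diagonalized in a truncated Fock space and its level spacing compared with $\mathcal{W}[\mathcal{T}]$. A minimal sketch in Python (parameter values are illustrative and not tied to the paper's figures):

```python
import numpy as np

# Quadratic photon problem underlying Eq. (7): the electronic kinetic energy
# T < 0 is treated as a c-number; the level spacing must equal W[T].
nmax, w0, g, L = 300, 1.0, 1.0, 10   # Fock cutoff and illustrative parameters
T = -6.0                             # <T> < 0 (illustrative value)

a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)   # truncated annihilation operator
ad = a.T
X = a + ad
H = w0 * (ad @ a + 0.5 * np.eye(nmax)) - 0.5 * g**2 / L * T * (X @ X)

E = np.linalg.eigvalsh(H)
W_num = E[1] - E[0]                              # dressed-mode level spacing
W_ana = w0 * np.sqrt(1.0 - 2.0 * g**2 * T / (L * w0))
print(W_num, W_ana)                              # the two agree to machine precision
```

The Fock cutoff only needs to comfortably exceed the states occupied by the squeezed ground state, so a few hundred levels are ample at these couplings.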
Knowing that the electronic part of the GS wavefunction is the unshifted FS,
we define the expectation value of the electronic kinetic energy density and
current density in the GS as
$\displaystyle t_{\mathrm{GS}}$
$\displaystyle=\frac{\,{}_{f}\langle\psi_{\mathrm{GS}}|\mathcal{T}|\psi_{\mathrm{GS}}\rangle_{f}}{L}<0,$
(9) $\displaystyle j_{\mathrm{GS}}$
$\displaystyle=\frac{\,{}_{f}\langle\psi_{\mathrm{GS}}|\mathcal{J}|\psi_{\mathrm{GS}}\rangle_{f}}{L}=0,$
and the dressed cavity frequency as
$\tilde{\omega}=\mathcal{W}[t_{\mathrm{GS}}]=\omega_{0}\sqrt{1+2\frac{g^{2}}{\omega_{0}}|t_{\mathrm{GS}}|}.$
(10)
The bosonic part of the GS wavefunction is then given by the GS of the
electronically renormalized bosonic Hamiltonian
$H^{\mathrm{D}}_{b}=\,_{f}\langle\psi_{\mathrm{GS}}|H^{\mathrm{D}}|\psi_{\mathrm{GS}}\rangle_{f}=\tilde{\omega}\left(\beta^{{\dagger}}\beta+\frac{1}{2}\right)-|t_{\mathrm{GS}}|L$
(11)
which is a squeezed vacuum state $|\phi_{\mathrm{GS}}\rangle_{b}$74, 87, 88,
89 that is connected to the bare cavity vacuum $|0\rangle$ through a squeezing
transformation,
$|\phi_{\mathrm{GS}}\rangle_{b}=e^{\frac{1}{2}\left(\zeta^{*}a^{2}-\zeta(a^{\dagger})^{2}\right)}|0\rangle.$
(12)
The squeeze factor $\zeta$90 is given by (see Supplementary Note 2)
$\zeta=\frac{1}{2}\ln\left(\frac{\tilde{\omega}}{\omega_{0}}\right).$ (13)
The squeezed state that was numerically observed to match the exact
$P(n_{\mathrm{phot}})$ for the GS in Fig. 1(c) corresponds precisely to the
squeeze factor $\zeta$ defined in Eq. (13). In Fig. 1(d) we show how the
amount of squeezing depends on the cavity coupling strength $g$. Defining
$X:=\left(a^{\dagger}{+}a\right)$ and $P:=i\left(a^{{\dagger}}-a\right)$, and
$\Delta\mathcal{O}=\sqrt{\braket{\mathcal{O}^{2}}-\braket{\mathcal{O}}^{2}}$
for a generic operator $\mathcal{O}$, a squeezed state minimizes the
Heisenberg uncertainty $\Delta P\Delta X=1$. The ratio
$\frac{\Delta P}{\Delta X}=e^{2\zeta}=\frac{\tilde{\omega}}{\omega_{0}}$ (14)
characterizes the degree of squeezing. 90, 74 The squeezing of the vacuum is
reminiscent of the finding by Ciuti et al.,91 which was obtained for a
different light-matter model. It has recently become possible to directly
measure the vacuum fluctuations inside a cavity, 92, 93 which enables
experimental tests of our prediction.
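This prediction lends itself to a direct numerical test: diagonalizing the electronically renormalized bosonic Hamiltonian in a truncated Fock space, the ground state should exhibit $\Delta P\Delta X=1$, the ratio of Eq. (14), and support on even photon numbers only. A sketch with an illustrative value for $t_{\mathrm{GS}}$:

```python
import numpy as np

nmax, w0, g = 300, 1.0, 1.0
t_gs = -0.5                      # illustrative kinetic-energy density, t_GS < 0

a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)
ad = a.T
X, P = a + ad, 1j * (ad - a)
# Bosonic Hamiltonian with T -> t_GS * L and J -> 0; the factor L drops out.
H = w0 * (ad @ a + 0.5 * np.eye(nmax)) - 0.5 * g**2 * t_gs * (X @ X)
psi = np.linalg.eigh(H)[1][:, 0]                 # numerically exact ground state

dX = np.sqrt((psi @ (X @ X) @ psi).real)         # <X> = 0 by parity
dP = np.sqrt((psi @ (P @ P) @ psi).real)
wt = w0 * np.sqrt(1.0 + 2.0 * g**2 * abs(t_gs) / w0)   # dressed frequency, Eq. (10)

print(dP / dX, wt / w0)          # squeezing ratio of Eq. (14)
print(dP * dX)                   # minimal Heisenberg uncertainty, = 1
print(np.max(psi[1::2] ** 2))    # odd photon numbers carry no weight
```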
### II.3 False superradiant phase transition in the approximate model
Figure 2: False superradiance and instability for the truncated Hamiltonian.
(a) Minimum energy density $e_{\psi_{\mathrm{T(FS)}}}$ Eq. (5) (colored lines)
of the Hamiltonian truncated at first order for an electronic wavefunction
being a single connected occupied region in $k$-space, as a function of the
shift of the Fermi sea (FS). The position of one minimum of the curves is
indicated by a circle. At a critical coupling strength
$g_{c}=\sqrt{\frac{\pi\omega_{0}}{4t_{h}}}$ the center of the Fermi sea
realizing the minimal energy moves to a finite $k$-value which is illustrated
by the small shift of the minimum of the curve corresponding to
$g=g_{c}+\delta$ where $\delta=0.001$. Inset: Average photon number
$\braket{a^{{\dagger}}a}$ (colored lines) for varying coupling strengths $g$
as a function of the system size $L$. Above the critical value $g_{c}$,
superradiant scaling of the photonic occupancy sets in. The vertical red line
denotes the system size used in the main plot ($L=1010$). (b) Minimum energy
density of the second-order truncated Hamiltonian (colored lines) as a function
of the shift of the Fermi sea (FS). When the shift is sufficiently large such
that the kinetic energy of the electrons is positive, it is possible to obtain
a spectrum of the electronically renormalized bosonic Hamiltonian that is not
bounded from below anymore, rendering the system unstable. The instability is
indicated by the dotted line. Here $L=1010$.
Next, we analyze the effect of truncating the Hamiltonian at first and second
order in $A=\frac{g}{\sqrt{L}}(a^{\dagger}{+}a)$ on the GS at finite $L$
$\displaystyle H^{1^{\mathrm{st}}}$
$\displaystyle=\omega_{0}\left(a^{{\dagger}}a+\frac{1}{2}\right)+\mathcal{T}+\frac{g}{\sqrt{L}}\left(a^{\dagger}+a\right)\mathcal{J}$
(15) $\displaystyle H^{2^{\mathrm{nd}}}$
$\displaystyle=\omega_{0}\left(a^{{\dagger}}a+\frac{1}{2}\right)+\mathcal{T}+\frac{g}{\sqrt{L}}\left(a^{\dagger}+a\right)\mathcal{J}-\frac{1}{2}\frac{g^{2}}{L}\left(a^{\dagger}+a\right)^{2}\mathcal{T}.$
For the first-order truncated Hamiltonian $H^{1^{\mathrm{st}}}$ we again
determine the GS by the unbiased variational scheme (see Methods section). The
GS is given by a connected region in $k$-space that is, however, not always
centered at $k=0$. This is shown in Fig. 2(a), where the energy density
$e_{\psi_{\mathrm{T(FS)}}}$ (Eq. (5)) for $H^{1^{\mathrm{st}}}$ is evaluated
as a function of the FS shift, in analogy to our analysis in the Groundstate
subsection under Results. Here both the energy density and the photon
occupation are calculated analytically. We find that at a critical coupling
strength $g_{c}$ there is a phase transition to a GS hosting a finite current
signified by the shift of the FS, Fig. 2(a). This is complemented by an
occupation of the cavity mode that scales linearly with $L$ as shown in the
inset of Fig. 2(a), as well as a non-zero expectation value of the field in the
TD limit, $\langle A\rangle=\frac{g\sqrt{L}}{\omega_{0}}j_{\mathrm{GS}}$. The
critical coupling is given by $g_{c}=\sqrt{\frac{\pi\omega_{0}}{4t_{h}}}$. A
symmetric or anti-symmetric combination of the degenerate GS wavefunctions (FS
shifted either to the left or the right) would yield a net zero current
restoring the inversion symmetry of the system but still result in a
macroscopic occupation of the cavity mode. This transition is reminiscent of
the one in the Dicke model, for which neglecting the diamagnetic ($A^{2}$)
coupling yields a superradiant phase defined through $\langle
A\rangle\sim\sqrt{N_{\rm emitter}}$ (where $N_{\rm emitter}$ is the number of
emitters) yielding a macroscopically occupied photon mode, 72, 94 which is
absent for the full gauge-invariant coupling. 95
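The critical coupling can be reproduced in a few lines. Assuming the half-filled tight-binding dispersion $\varepsilon_{k}=-2t_{h}\cos k$, a FS shifted by $q$ has kinetic-energy density $t(q)=-\frac{2t_{h}}{\pi}\cos q$ and current density $j(q)=\frac{2t_{h}}{\pi}\sin q$; minimizing $H^{1^{\mathrm{st}}}$ over a coherent photon displacement gives the energy density $e(q)=t(q)-\frac{g^{2}}{\omega_{0}}j(q)^{2}$. A sketch locating the transition:

```python
import numpy as np

w0 = t_h = 1.0
q = np.linspace(0.0, np.pi / 2, 20001)   # shift of the Fermi sea

def optimal_shift(g):
    """Shift q* minimizing the coherent-state energy density of H^(1st)."""
    t_q = -2.0 * t_h / np.pi * np.cos(q)          # kinetic-energy density
    j_q = 2.0 * t_h / np.pi * np.sin(q)           # current density
    e_q = t_q - g**2 / w0 * j_q**2                # photon displaced out optimally
    return q[np.argmin(e_q)]

g_c = np.sqrt(np.pi * w0 / (4.0 * t_h))           # analytic critical coupling
print(optimal_shift(g_c - 0.05))   # 0.0: the FS stays centered below g_c
print(optimal_shift(g_c + 0.05))   # finite shift above g_c
```

Expanding $e(q)$ to second order in $q$ reproduces $g_{c}=\sqrt{\frac{\pi\omega_{0}}{4t_{h}}}$ analytically: the curvature at $q=0$ changes sign exactly there.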
In the lattice case, only the inclusion of coupling terms to all orders in $A$
of the Peierls substitution guarantees gauge invariance. If one instead
includes only terms up to second order ($A^{2}$), a large coupling strength
$g$ results in a spectrum of the Hamiltonian that is not bounded from below.
Fig. 2(b) is obtained in an analogous way to Fig. 1(b), but with energies
calculated analytically, illustrating the absence of a GS above a critical
coupling strength as follows: Fixing the electronic part of the wavefunction
to be a shifted FS, an increased shift will yield a corresponding bosonic
problem with a decreased frequency. At some point the effective frequency
vanishes, leading to the absence of a GS of the remaining bosonic problem
beyond that point. We indicate this point by a dotted line in Fig. 2(b). This
instability can be cured by including an arbitrarily small $A^{4}$ term,
signalling the breakdown of the truncation.
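The location of this instability can be estimated under the same half-filled assumption, $t(q)=-\frac{2t_{h}}{\pi}\cos q$: the squared effective frequency $\mathcal{W}^{2}=\omega_{0}^{2}\left(1-2\frac{g^{2}}{\omega_{0}}t(q)\right)$ turns negative once the kinetic-energy density exceeds $\omega_{0}/(2g^{2})$. A sketch (illustrative parameters):

```python
import numpy as np

w0 = t_h = g = 1.0
q = np.linspace(0.0, np.pi, 100001)          # shift of the Fermi sea
t_q = -2.0 * t_h / np.pi * np.cos(q)         # kinetic-energy density (half filling)
W2 = w0**2 * (1.0 - 2.0 * g**2 * t_q / w0)   # squared effective frequency

q_unstable = q[np.argmax(W2 < 0)]            # first shift with no bosonic GS
print(q_unstable, np.arccos(-np.pi * w0 / (4.0 * g**2 * t_h)))
```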
States with a finite current, which have lower energy than the one with zero
current when the energy is truncated after the first two orders of the LMC
(see Fig. 2(b)), are moved to higher energies upon inclusion of all orders of
the Peierls coupling (see Fig. 1(b)), which is a manifestation of gauge
invariance.64 This explains the validity of our analytical results obtained
including only the second order of the cavity field together with the
electronic GS with zero current. The instability discussed here, caused by
truncation of the LMC after the second order, has previously been noted by
Dmytruk and Schiró85 in the context of a mean-field approach to a two orbital
model.
### II.4 Momentum-resolved spectral function in the TD limit
Figure 3: Momentum-resolved spectral function in equilibrium and for a driven
cavity. (a) False-color plot of the momentum($k$)-resolved spectral function
$A(k,\omega)$, Eq. (18), as a function of frequency $\omega$ in units of the
hopping amplitude $t_{\rm h}$ at $T=0$. The central white dashed curve shows
the bare electronic band. Replicas of the bare band offset by the bare cavity
frequency $\pm\omega_{0}$ are shown by white dashed curves. The quantum
replica bands seen in the false-color spectra are at an increased distance
from the main band, which is set by the dressed cavity frequency
$\tilde{\omega}>\omega_{0}$. The replica bands are below (above) the main band
in the occupied (unoccupied) quasi-momentum regions, reflecting the overall
particle-hole symmetry of the half-filled system. The dashed line at
$k=\frac{3\pi}{8}$ denotes the $k$-space position of the plot in panel (b).
Here we consider $L=170$, $g=1$ and $N_{\rm max}^{\rm boson}=50$; the delta
functions of Eq. (18) are represented by Lorentzians with broadening
$\eta=0.025$. (b) Nonequilibrium time- and momentum-resolved spectral function
according to Eq. (22), evaluated at $k=\frac{3\pi}{8}$ as a function of
frequency ($\omega$) offset by the value of the dispersion $\varepsilon(k)$ at
that $k$-point in units of the hopping amplitude $t_{\rm h}$ for several
cavity pumping strengths, characterized by the displacement parameter $\alpha$
with $|\alpha|^{2}=\Delta N_{\mathrm{phot}}^{\mathrm{pump}}$ (colored lines).
$g^{2}\Delta N_{\mathrm{phot}}^{\mathrm{pump}}$ is kept constant, implying
that $g\to 0$ as the pumping $\Delta
N_{\mathrm{phot}}^{\mathrm{pump}}\to\infty$. The black line corresponds to the
ground state for $g=2.5$ for which the y-axis reports the amplitude, while the
coloured lines are vertically shifted for clarity and follow the progressive
occupation $\Delta N_{\mathrm{phot}}^{\mathrm{pump}}$ indicated on the right.
For increasing pump strength the side-bands become more symmetric and their
position approaches $\omega_{0}$ as $\tilde{\omega}\stackrel{{\scriptstyle
g\to 0}}{{\longrightarrow}}\omega_{0}$. For the largest pump $\Delta
N_{\mathrm{phot}}^{\mathrm{pump}}$ the curve is overlaid with the Floquet
result (red dashed line), which matches the pumped-cavity result. Here $L=90$,
$N_{\rm max}^{\rm boson}=100$, and a Lorentzian broadening $\eta=0.025$ has
been included in the delta functions.
The effects of the cavity on electrons could be investigated via ARPES
measurements. For this reason, but also to pinpoint analogs to Floquet
results, we calculate the electronic spectral function, defined as
$A(k,\omega)=-\frac{1}{\pi}\,\text{Im}\,G^{\text{R}}(k,\omega),$ (16)
with
$G^{\text{R}}(k,\omega)=-\int_{0}^{\infty}dt\,i\langle\left[c_{k}(t),c_{k}^{{\dagger}}\right]_{+}\rangle
e^{i\omega t}$ (17)
where $[.]_{+}$ is the anti-commutator. We evaluate the electronic part of the
expectation value in Eq. (17) analytically by commuting the electronic
creation and annihilation operators with the appearing time-evolution
operators and replacing $\mathcal{T}\rightarrow t_{\mathrm{GS}}L$ and
$\mathcal{J}\rightarrow j_{\mathrm{GS}}L=0$ in the expression. The remaining
vector-matrix-vector product in the bosonic part of the Hilbert space is then
evaluated numerically at each time $t$ and the result transformed to frequency
space via a FFT. The result is given in Fig. 3(a) for a chain of length
$L=170$ including all orders of the Peierls coupling.
In the TD limit, we can use similar arguments to the ones previously utilized
in the Groundstate subsection under Results to give an analytic expression for
the electronic spectral function. No operator in the expectation value Eq.
(17) creates a macroscopic number of photons. We can thus conclude by a
similar scaling analysis as in Eq. (6) that in the TD limit the time evolution
can be written with the diagonal Hamiltonian Eq. (7). The spectral function
keeping leading $1/L$ corrections is analytically found to be
$\displaystyle A(k,\omega)$
$\displaystyle=(1-n_{k})e^{-\frac{g^{2}v_{k}^{2}\omega_{0}}{L\tilde{\omega}^{3}}}\sum_{\ell}\frac{\left(\frac{g^{2}v_{k}^{2}\omega_{0}}{L\tilde{\omega}^{3}}\right)^{\ell}}{\ell!}\delta\left(\omega-\varepsilon_{k}\left(1-\frac{g^{2}}{2L}\frac{\omega_{0}}{\tilde{\omega}}\right)-\Sigma_{k}-\tilde{\omega}\ell\right)$
(18)
$\displaystyle+\,n_{k}\,e^{-\frac{g^{2}v_{k}^{2}\omega_{0}}{L\tilde{\omega}^{3}}}\sum_{\ell}\frac{\left(\frac{g^{2}v_{k}^{2}\omega_{0}}{L\tilde{\omega}^{3}}\right)^{\ell}}{\ell!}\delta\left(\omega-\varepsilon_{k}\left(1-\frac{g^{2}}{2L}\frac{\omega_{0}}{\tilde{\omega}}\right)+\Sigma_{k}+\tilde{\omega}\ell\right).$
Here $n_{k}=\langle\rho_{k}\rangle$ and the self-energy $\Sigma_{k}$ is given
by
$\Sigma_{k}=\frac{g^{2}\omega_{0}}{\tilde{\omega}^{2}L}v_{k}^{2}.$ (19)
The details of the calculation are presented in Supplementary Note 3. From Eq.
(18) the spectral function of the unperturbed electrons,
$A(k,\omega)\overset{L\to\infty}{\to}A_{0}(k,\omega)=\delta(\omega-\varepsilon_{k}),$
(20)
is recovered in the limit $L\to\infty$. From Eq. (7) one might expect a finite
contribution to the electronic self-energy stemming from the coupling of a
single electron to all other electrons collectively. However, due to the form
of the induced interaction, the single electron couples to the total current
that vanishes identically in the GS. Contributions to the spectral function
beyond the described collective effect are small in the TD limit as
highlighted in Eq. (18). We discuss how this might be related to a
shortcoming of the single-mode approximation in the Discussion.
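The sidebands in Eq. (18) carry Poisson weights $e^{-\lambda_{k}}\lambda_{k}^{\ell}/\ell!$ with $\lambda_{k}=\frac{g^{2}v_{k}^{2}\omega_{0}}{L\tilde{\omega}^{3}}$, so the total spectral weight per momentum is one and collapses onto the main band as $L\to\infty$. A quick check, assuming the half-filled values $t_{\mathrm{GS}}=-2t_{h}/\pi$ and $v_{k}=2t_{h}\sin k$:

```python
import numpy as np
from math import factorial

w0 = t_h = g = 1.0
t_gs = -2.0 * t_h / np.pi                        # half-filled kinetic-energy density
wt = w0 * np.sqrt(1.0 + 2.0 * g**2 * abs(t_gs) / w0)   # dressed frequency
v_k = 2.0 * t_h * np.sin(3.0 * np.pi / 8.0)      # band velocity at k = 3*pi/8

def sideband_weights(L, lmax=60):
    lam = g**2 * v_k**2 * w0 / (L * wt**3)       # Poisson parameter of Eq. (18)
    return np.array([np.exp(-lam) * lam**l / factorial(l) for l in range(lmax)])

for L in (170, 1700, 17000):
    w = sideband_weights(L)
    print(L, w.sum(), w[0])   # total weight 1; the main band absorbs it as L grows
```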
The spectral function Eq. (18) most prominently contains a sum over $\delta$
functions spaced by $\tilde{\omega}$, the dressed instead of the bare cavity
frequency, which is a direct consequence of the
quantum nature of the photons. This is the quantum analog to the Floquet
replica bands visible in Fig. 3(a). Contrary to the Floquet replica bands, the
quantum replica bands lie either above or below the main band, but only on one
side for fixed quasi-momentum $k$ at zero temperature, depending on whether
the respective momentum state is filled or empty. This reflects the particle-
hole symmetry of the half-filled system, in which a combined
$\omega\rightarrow-\omega$ and $k\rightarrow k+\pi$ sublattice particle-hole
transformation leaves the spectral function invariant.
Importantly, despite the fact that the cavity induces an effective all-to-all
electron-electron interaction, there is no broadening of the $\delta$-peaks.
This is related to the vanishing momentum transfer of the interaction and the
resulting fact that the Bloch states remain exact electronic eigenstates. As a
consequence, the interaction results in a purely real electronic self-energy
$\Sigma_{k}$, leading to band renormalizations without broadening.
The presence of the cavity squeezes the band dispersion $\varepsilon_{k}$ by a
factor $\left(1-\frac{g^{2}}{2L}\frac{\omega_{0}}{\tilde{\omega}}\right)<1$.
This is the quantum analog to the dynamical localization that leads to a
suppression of the band width. The band renormalization factor
$1-\frac{g^{2}\omega_{0}}{2L\tilde{\omega}}$ is consistent to leading order in
$\frac{1}{L}$ with the expectation value of the bosonic operator
$\langle\cos\left(\frac{g}{\sqrt{L}}\left(a^{\dagger}{+}a\right)\right)\rangle$
as a multiplicative factor to the kinetic energy of the electrons. The
electrons are thus effectively localized by coupling to the vacuum
fluctuations of the electromagnetic field.
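The consistency between the renormalization factor and $\langle\cos\left(\frac{g}{\sqrt{L}}\left(a^{\dagger}{+}a\right)\right)\rangle$ can be checked directly in a truncated Fock space; for a Gaussian ground state the expectation value equals $e^{-g^{2}\langle X^{2}\rangle/(2L)}$, whose leading order reproduces $1-\frac{g^{2}\omega_{0}}{2L\tilde{\omega}}$. A sketch with the half-filled value $t_{\mathrm{GS}}=-2t_{h}/\pi$ (our assumption, $t_{h}=1$):

```python
import numpy as np

nmax, w0, g, L = 200, 1.0, 1.0, 170
t_gs = -2.0 / np.pi                              # half-filled value (t_h = 1)

a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)
ad = a.T
X = a + ad
H = w0 * (ad @ a + 0.5 * np.eye(nmax)) - 0.5 * g**2 * t_gs * (X @ X)
psi = np.linalg.eigh(H)[1][:, 0]                 # squeezed-vacuum ground state

xval, xvec = np.linalg.eigh(X)                   # spectral decomposition of X
cosX = (xvec * np.cos(g / np.sqrt(L) * xval)) @ xvec.T

wt = w0 * np.sqrt(1.0 - 2.0 * g**2 * t_gs / w0)  # dressed frequency
print(psi @ cosX @ psi)                          # <cos((g/sqrt(L)) X)>
print(1.0 - g**2 * w0 / (2.0 * L * wt))          # leading-order renormalization
```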
### II.5 Quantum to Floquet crossover
In the following, we analyze the quantum to classical crossover and recover
known Floquet physics in the regime of $N_{\text{phot}}\to\infty$ and $g\to
0$, keeping $g^{2}\,\Delta N_{\mathrm{phot}}^{\mathrm{pump}}=\mathrm{const}$.
The limit $g\to 0$ is needed in the crossover to lift the light-matter
hybridization that would otherwise lead to the shifted frequency
$\tilde{\omega}$ of an effective cavity mode which we identify as an intrinsic
quantum effect. The limit of strong pumping, keeping the coupling $g$
constant, is treated in Supplementary Note 4.
We employ a protocol where the cavity mode is coherently displaced with
respect to the GS with displacement parameter $\alpha$
$|\alpha\rangle=e^{\alpha(a^{\dagger}-a)}|\phi_{\mathrm{GS}}\rangle_{b}.$ (21)
The photon number is thereby increased relative to the one in the GS by
$|\alpha|^{2}=\Delta N_{\mathrm{phot}}^{\mathrm{pump}}$. The coherent
displacement considered here models the application of a laser pumping the
cavity on time scales too short for the coupled system to follow. Thus, the
laser is assumed to place the cavity into a squeezed coherent state in the
limit of large system size. The subsequent time evolution of the light-matter
coupled system is considered from starting time $t=0$. While for the
equilibrium spectral function only the first two orders in $g$ of the
Hamiltonian had to be taken into account, the time evolution is now affected
by all orders of the Peierls coupling due to the occupation of the photonic
mode that is macroscopic in the classical limit.
We calculate the nonequilibrium spectral function, defined via the full
double-time retarded Green’s function,96
$\displaystyle A_{\text{non-eq.}}(k,\omega)=$ (22)
$\displaystyle\frac{1}{\pi}\text{Im}\frac{1}{\tilde{\tau}}\int_{\Delta
T-\frac{\tilde{\tau}}{2}}^{\Delta
T+\frac{\tilde{\tau}}{2}}\left[\int_{0}^{\infty}ie^{i\omega(t-t^{\prime})}\,_{f}\langle\psi_{\mathrm{GS}}|\otimes\langle\alpha|\left[c_{k}(t),c_{k}^{{\dagger}}(t^{\prime})\right]_{+}|\alpha\rangle\otimes|\psi_{\mathrm{GS}}\rangle_{f}\,d\left(t-t^{\prime}\right)\right]d\left(\frac{t+t^{\prime}}{2}\right)$
where $\tilde{\tau}=\frac{2\pi}{\tilde{\omega}}$ is the period corresponding
to the dressed cavity frequency. The form is chosen in analogy to the diagonal
elements of the Floquet representation of the GF.97 Here we include a waiting
time $\Delta T$ after the start of the real-time evolution, set to a large
value with respect to the intrinsic timescale, $\Delta T=200\tilde{\tau}$, in
the numerical simulation. Otherwise the calculation is performed in the same
manner as that for the equilibrium spectral function Eq. (17). For comparison,
we also consider the nonequilibrium spectral function of a classically driven
system where the time evolution is governed by the Hamiltonian
$H^{\text{c}}(t)=-\sum_{j}t_{h}e^{-iA(t)}c_{j+1}^{\dagger}c_{j}+h.c.$ (23)
In this case, we couple the chain to the classical field
$A(t)=A_{0}\sin(\omega_{0}t),$ that oscillates with the eigenfrequency of the
unperturbed cavity $\omega_{0}$. Similar to the quantum case, we calculate the
nonequilibrium spectral function according to
$\displaystyle A_{\text{Floquet}}(k,\omega)=$ (24)
$\displaystyle\frac{1}{\pi}\text{Im}\frac{1}{\tau}\int_{-\frac{\tau}{2}}^{\frac{\tau}{2}}\left[\int_{0}^{\infty}ie^{i\omega(t-t^{\prime})}\,_{f}\langle\psi_{\mathrm{GS}}|\left[c_{k}(t)_{H^{\text{c}}(t)},c_{k}^{{\dagger}}(t^{\prime})_{H^{\text{c}}(t)}\right]_{+}|\psi_{\mathrm{GS}}\rangle_{f}\,d\left(t-t^{\prime}\right)\right]d\left(\frac{t+t^{\prime}}{2}\right)$
where $\tau=\frac{2\pi}{\omega_{0}}$. Here $(.)(t)_{H^{\text{c}}}$ denotes the
time dependence governed by the semi-classical Hamiltonian Eq. (23). The
spectral function fulfills
$A_{\text{Floquet}}(k,\omega+m\omega_{0})|_{\omega\in\left(-\frac{\omega_{0}}{2},\frac{\omega_{0}}{2}\right]}=-\frac{1}{\pi}\text{Im}\,G_{mm}(\omega)$
(25)
with $G_{mm}(\omega)$ the diagonal part of the Floquet representation of the
GF.97
We show the evolution from quantum to Floquet spectra for a representative
quasi-momentum $k=\frac{3\pi}{8}$ inside the FS in Fig. 3(b). In the extreme
quantum case (GS) the replica band only appears below the main band.
Furthermore, it is not located at the bare cavity frequency $\omega_{0}$ but
at the eigenfrequency of the coupled light-matter system $\tilde{\omega}$. By
contrast, as the classical limit is approached, the symmetry of the replica
bands is restored and their position moves to $\omega_{0}$. For the largest
displacement ($\Delta N_{\mathrm{phot}}^{\mathrm{pump}}=30$) the spectrum
matches precisely the Floquet spectrum. The fact that the system experiences
no heating during the driving is a direct consequence of the absence of
electron-electron interactions and the corresponding macroscopic number of
constants of motion.
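The approach of the sideband position to $\omega_{0}$ along this protocol follows directly from Eq. (10): keeping $g^{2}\Delta N_{\mathrm{phot}}^{\mathrm{pump}}$ fixed while increasing the pump sends $g\to 0$ and hence $\tilde{\omega}\to\omega_{0}$. A short illustration with the half-filled $t_{\mathrm{GS}}=-2t_{h}/\pi$ (values chosen to mimic Fig. 3(b)):

```python
import numpy as np

w0, t_gs = 1.0, -2.0 / np.pi       # half-filled kinetic-energy density (t_h = 1)
g2_dN = 2.5**2                     # g^2 * dN kept constant (illustrative)

for dN in (1, 3, 10, 30):
    g = np.sqrt(g2_dN / dN)        # coupling shrinks as the pump grows
    wt = w0 * np.sqrt(1.0 + 2.0 * g**2 * abs(t_gs) / w0)
    print(dN, round(g, 3), round(wt, 4))   # sideband position wt -> w0
```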
### II.6 Optical conductivity
Figure 4: Optical conductivity (a) Real part of the conductivity $\rm
Re(\sigma)$, Eq. (30) in units of half the conductance quantum
$\frac{e^{2}}{h}$, for strong ($g=1$, dark blue line) and intermediate
($g=0.3$, dashed yellow line) couplings as a function of frequency $\omega$ in
units of the hopping amplitude $t_{\rm h}$. The result for $g=0$ is shown for
comparison (black line). The Drude peak is suppressed with increasing $g$, and
two side peaks appear at the same time. The inset shows the negative effective
kinetic energy $\langle e_{\rm kin}\rangle$ (black line) and the integrated
conductivity $\int\sigma(\omega)d\omega$ (red dashed line). The vertical
dashed lines indicate the coupling strengths from the main plot. They match,
fulfilling the f-sum rule Eq. (35). Here we set $L=170$, $N_{\rm max}^{\rm
boson}=50$ and a Lorentzian broadening $\eta=0.05$. (b) Corresponding
imaginary parts of the conductivity $\rm Im(\sigma)$ (Eq. (36)). Again the
central $\frac{1}{\omega}$ feature is suppressed and two side features appear
at $\omega=\pm\tilde{\omega}$.
In order to discuss the impact of the light-matter coupling on a paradigmatic
electronic two-particle response function, we compute the optical conductivity
using the standard Kubo formalism.82, 98 To this end the cavity-chain system
is coupled to a spatially uniform external field $A_{\text{ext}}(t)$, in
addition to the quantized cavity field. The resulting optical conductivity in
the long-wavelength limit is obtained in the standard form99
$\sigma(\omega)=-\frac{-\langle
e_{\text{kin}}\rangle-\Lambda(q=0,\omega)}{i\left(\omega+i0^{+}\right)},$ (26)
where
$e_{\text{kin}}=\frac{1}{L}\cos\left(\frac{g}{\sqrt{L}}\left(a^{\dagger}{+}a\right)\right)\mathcal{T}$
(27)
is the effective kinetic energy density of the electrons in the cavity-
modified GS, and $\Lambda$ is the current-current correlator
$\Lambda(q=0,\omega)=-\frac{i}{L}\int_{0}^{\infty}dt\,\,e^{i\omega
t}\langle\left[j_{q=0}^{p}(t),j_{q=0}^{p}\right]\rangle,$ (28)
with $j_{q=0}^{p}$ the paramagnetic current density operator at $q=0$. The
latter is obtained from the charge continuity equation as
$j_{q=0}^{p}=-\cos\left(\frac{g}{\sqrt{L}}(a^{\dagger}{+}a)\right)\sum_{k}2t_{h}\,\sin(k)c^{\dagger}_{k}c_{k}-\sin\left(\frac{g}{\sqrt{L}}\left(a^{\dagger}{+}a\right)\right)\sum_{k}\,2t_{h}\cos(k)c^{\dagger}_{k}c_{k}.$
(29)
We evaluate Eq. (26) numerically for $L=170$ and finite broadening
$0^{+}\rightarrow 0.05$. The result is shown in Fig. 4(a)-(b).
One can gain additional insight into the properties of the optical
conductivity by evaluating it analytically in the TD limit. For the real part
of the conductivity we find
$\text{Re}\,{\sigma(\omega)}=D\delta(\omega)+\sigma_{\text{reg}}(\omega),$
(30)
where the Drude weight $D$ is given as
$\frac{D}{\pi}=|t_{\text{GS}}|\left(1-\frac{g^{2}}{2L}\frac{\omega_{0}}{\tilde{\omega}}-2\frac{g^{2}\omega_{0}}{\tilde{\omega}^{2}}|t_{\mathrm{GS}}|\right).$
(31)
The second term in the brackets in Eq. (31) derives from the squeezing of the
band, previously coined quantum dynamical localization (subsection Momentum-resolved
spectral function in the TD limit under Results), and vanishes in the
TD limit. The last term originates from the current-current correlator and
remains finite even in the TD limit, resulting in a partial suppression of the
Drude weight. In contrast to the spectral function considered in the
subsection Momentum-resolved spectral function in the TD limit under Results,
modifications to the optical conductivity remain finite even in the TD limit
since the perturbation of the system within the linear response framework
enables a contribution from the induced electron-electron interaction. Writing
$\gamma=\frac{\omega_{p}^{2}}{\omega_{0}^{2}+\omega_{p}^{2}}\,;\quad\omega_{p}^{2}=2g^{2}\omega_{0}|t_{\mathrm{GS}}|$
(32)
we find for $D$ in the TD limit
$D=D_{0}(1-\gamma)\,;\quad 0\leq\gamma\leq 1$ (33)
where $D_{0}$ is the Drude weight of the uncoupled chain. This is consistent
with the findings of Rokaj et al.82 for an electron gas. For the second
contribution $\sigma_{\mathrm{reg}}$ in Eq. (30) one finds
$\frac{\sigma_{\text{reg}}(\omega)}{\pi}=\frac{g^{2}\omega_{0}}{\tilde{\omega}^{2}}t_{\text{GS}}^{2}(\delta(\omega+\tilde{\omega})+\delta(\omega-\tilde{\omega})).$
(34)
Two side-peaks at $\omega=\pm\tilde{\omega}$ appear that balance the
suppression of the Drude weight. These effects are illustrated in Fig. 4(a).
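That Eq. (31) (with the $1/L$ term dropped) and Eq. (33) describe the same suppression is a simple algebraic identity, since $\tilde{\omega}^{2}=\omega_{0}^{2}+\omega_{p}^{2}$; a quick numerical confirmation over random parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    w0, g, t_abs = rng.uniform(0.2, 3.0, size=3)   # omega_0, g, |t_GS| (random)
    wt2 = w0**2 + 2.0 * g**2 * w0 * t_abs          # dressed frequency squared
    D_eq31 = t_abs * (1.0 - 2.0 * g**2 * w0 * t_abs / wt2)   # Eq. (31), TD limit
    wp2 = 2.0 * g**2 * w0 * t_abs                  # omega_p^2 of Eq. (32)
    D_eq33 = t_abs * (1.0 - wp2 / (w0**2 + wp2))   # Eq. (33) with D_0/pi = |t_GS|
    print(abs(D_eq31 - D_eq33))                    # zero up to rounding
```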
The inset of Fig. 4(a) shows that the real part of the conductivity satisfies
the f-sum rule, similar to other electron-boson models100,
$\frac{D}{\pi}+\int_{-\infty}^{\infty}\sigma_{\text{reg}}(\omega)\,d\omega=-\langle
e_{\text{kin}}\rangle,$ (35)
which is also evident from the corresponding analytical expression.
For completeness, we also state the imaginary part of the conductivity
$\text{Im}\,{\sigma(\omega)}=t_{\mathrm{GS}}\frac{1}{\omega}\left(1-\frac{g^{2}}{2L}\frac{\omega_{0}}{\tilde{\omega}}\right)+\frac{g^{2}\omega_{0}}{\tilde{\omega}}t_{\mathrm{GS}}^{2}\frac{1}{\omega}\left(\frac{1}{\omega-\tilde{\omega}}-\frac{1}{\omega+\tilde{\omega}}\right),$
(36)
which fulfills the usual Kramers-Kronig relation
$\text{Im}\,\sigma(\omega)=-\frac{1}{\pi}\mathcal{P}\int_{-\infty}^{\infty}\frac{\text{Re}\,\sigma(\omega^{\prime})}{\omega^{\prime}-\omega}d\omega^{\prime}$
and is shown in Fig. 4(b). Similar to the real part we find a suppression at
$\omega=0$ and shakeoff features at $\omega=\pm\tilde{\omega}$.
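The stated Kramers-Kronig relation can be verified on a Lorentzian-broadened caricature of Eq. (30), a Drude peak plus two shakeoff peaks (weights, positions and broadening are illustrative). The principal value is handled by a grid shifted half a step away from the singularity:

```python
import numpy as np

eta, wt = 0.2, 1.5                               # broadening and side-peak position
peaks = [(0.8, 0.0), (0.3, -wt), (0.3, wt)]      # (weight, position)

def re_sigma(w):                                 # Lorentzian-broadened Re(sigma)
    return sum(W * eta / ((w - wr)**2 + eta**2) for W, wr in peaks)

def im_sigma(w):                                 # exact Im for the same causal poles
    return sum(W * (w - wr) / ((w - wr)**2 + eta**2) for W, wr in peaks)

w_test, dw = 0.7, 1e-3
wp = w_test + (np.arange(-2_000_000, 2_000_000) + 0.5) * dw   # PV-safe grid
im_kk = -np.sum(re_sigma(wp) / (wp - w_test)) * dw / np.pi

print(im_kk, im_sigma(w_test))                   # Kramers-Kronig check
```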
## III Discussion
In this work, we have discussed a tight-binding chain coupled to a single
spatially constant cavity mode. The exact solution of this model is enabled by
the macroscopic number of constants of motion that results from the absence of
momentum transfer between photons and electrons in the long-wavelength limit.
Consequently, the GS of the system is a product state of electrons and photons
(subsection Groundstate under Results).
Removing these constants of motion, either through relaxing the dipole
approximation or including an electron-electron interaction, is expected to
lead to interesting new results. It is well known that a one-dimensional
system with local interactions is susceptible to form a charge density wave at
zero temperature. 101 The effective interaction induced by the cavity
considered in this work does not lead to such a symmetry-broken GS, since it
is featureless. Including local interactions, it would therefore be
interesting to study the effect of the cavity on charge-ordered phases. An
important consequence of the non-interacting limit is the absence of heating
in the semi-classically driven case described in the subsection Quantum to
Floquet crossover under Results. In an interacting setup, a continuous
classical drive would heat up the system eventually leading to an infinite
temperature state. On the other hand, an initial coherent state of the cavity
will dissipate energy into the system leading to a decay of its amplitude. For
these reasons, the comparison made in the subsection Quantum to Floquet
crossover under Results will only hold on time-scales much shorter than the
time it takes for the system to heat up. Previous works noted that even when
including electron-electron interactions but neglecting any momentum transfer
by the cavity photons, a factorized wave-function might still be suitable for
a description of the system as the corresponding mean field picture becomes
exact in the TD limit.85, 63, 64
Relaxing the dipole approximation would lead to a finite-ranged but non-local
effective electron-electron interaction, which opens new opportunities for
inducing or modifying materials properties.102 Through this, also existing no-
go theorems related to superradiance would be circumvented, possibly making it
worthwhile to revisit the question whether an equilibrium photon condensate
can exist.64, 85, 103
In order to describe realistic experimental situations, a continuum of modes
needs to be included, where also the wave-vector in the direction of the chain
is a continuous variable. As a first approximation one might, as we did
earlier for the orthogonal directions, treat these modes as identical. For
this case the principle of collective strong coupling that we describe in
Supplementary Note 1 applies, leading to a mere renormalization of
parameters.82 However, macroscopically many modes coupled to all electrons at
once will lead to unphysical effects like a diverging effective mode energy.
To remedy this, the dipole approximation would also need to be relaxed, making
all but the zeroth mode couple to a microscopic quantity.
We have furthermore calculated the single-particle Green’s function
analytically (subsection Momentum-resolved spectral function in the TD limit
under Results). Here we found that in the limit $L\to\infty$ we recover the
bare spectral function of the uncoupled electrons, indicating that corrections
due to the presence of the cavity vanish in the TD limit. We pointed out that
a possible mean-field term does not contribute due to the current in the GS
having zero expectation value, $\langle\mathcal{J}\rangle=0$. Corrections
beyond this are small in the TD limit which we attribute to the vanishing
energy density of the single mode signified by $\frac{g}{\sqrt{L}}\to 0$ in
that limit. Supplementary Note 1 shows how such corrections could be
reconciled through a collective coupling effect, reminiscent of previously
discussed collective (vibrational) strong coupling,104, 105, 106, 75 when
retaining many modes corresponding to a finite energy density in the TD limit
which is reflected in the replacement
$\frac{g}{\sqrt{L}}\to\frac{g\sqrt{N}}{\sqrt{L}}$. This argument, however,
requires further consideration such as the relaxation of the dipole
approximation as mentioned above, to arrive at a mathematically rigorous
conclusion. Such a calculation goes beyond the scope of this work.
The analytical expression for the single-particle Green’s function derived in
this work might provide the basis for future studies by building a many-body
perturbation theory around this solution to investigate many-body
instabilities diagrammatically, such as superconductivity. Note that the
system considered here does not host polaritons, since our model contains no
collective bosonic excitations such as plasmons, excitons, or phonons, as
would be the case in a multi-band system. 85, 103 Accordingly, no signatures
of such quasi-particles show up in the electronic spectral function. Using
insights from the squeezing transformation, it might be possible to treat
systems with two different bosonic modes analytically. One interesting
prospect is to include an optically active phonon into the model that couples
quadratically to the electrons. 30, 28, 107 Extending the here-presented
analytical methods to a bimodal squeezing, it might be possible to
analytically obtain GS properties and signatures in electronic spectra of the
coupled bosonic modes. This could open up a pathway to realize multi-mode
squeezed states, with important applications to quantum information [108]. In a
similar spirit, one could also study two distinct photonic cavity modes and
search for signatures of the matter-induced photon-photon interaction on the
basis of the exactly solvable model put forward in the present work.
Concerning the connection to experiments, a temperature below the energy scale
set by the cavity eigenfrequency is needed in order for our zero-temperature
calculations to hold qualitatively. For a resonance at
$\omega_{0}=0.41\,\mathrm{THz}$, as used in a recent cavity setup [109], this
would correspond to temperatures well below $3.1\,\mathrm{K}$. The validity of the
dipole approximation depends on the specific experimental setup. However, a
sample that is much smaller than the size of the cavity is necessarily
required [110], which would be fulfilled for a cavity size on the order of
$1\,\mathrm{mm}$, corresponding to the above-mentioned resonance at
$\omega_{0}=0.41\,\mathrm{THz}$, when at the same time considering an atomic
wire with a length in the sub-micrometer range. The electronic spectra calculated
here (Fig. 3(a)) should in principle be observable in ARPES measurements. A
quality factor that ensures a linewidth smaller than the cavity frequency is
required to observe the side bands, which appears to be within experimental
reach [109]. We attributed the vanishing of corrections to the
spectral function in the TD limit to the vanishing energy-density of the
single mode in that limit. In an experimental setup one naturally has a
continuum of modes with finite energy density, possibly retaining these
corrections. For small enough in-plane wave-vectors of the photons, one might
expect qualitative effects, such as the asymmetry of the shake-off bands in
the quantum limit, to remain present also in this case. However, further work
needs to be dedicated to this aspect in order to support this claim. The
experimental observation of asymmetric shake-off bands would
complement the successful demonstration of classical Floquet replica
bands [10]. Another prediction of the present work is the squeezing of the
vacuum fluctuations in the GS, consistent with predictions for other
models [91, 89]. Recent progress in probing the vacuum fluctuations of
light [92, 93] puts an experimental confirmation of our prediction within reach.
Finally, a suppression of the Drude peak (Fig. 4(a)) has already been observed
experimentally [48]. It has previously been explained by Rokaj et al. [82] via
a result analogous to the one presented here, but for an electron gas instead
of a tight-binding chain. It is an interesting question why the effective cavity
mode with vanishing energy density can influence macroscopically many
electrons in this particular case. From our point of view, the reason lies in
the induced electron-electron interaction that does not vanish in the TD limit
and is probed indirectly through the optical conductivity.
## IV Methods
### IV.1 Variational scheme
Here, we describe the variational scheme that we use to determine the exact
GS. As discussed before, the Bloch states are fermionic eigenstates of the
system. Thus, the input to the procedure is a vector of length $L$ specifying
the occupation of each Bloch state at quasi-momentum $k$. This determines the
electronic part $\ket{\psi_{\mathrm{T}}}_{f}$ of the trial wavefunction
$\ket{\Psi_{\mathrm{T}}}=\ket{\phi_{\mathrm{T}}}_{b}\otimes\ket{\psi_{\mathrm{T}}}_{f}$,
with which we calculate the eigenvalues of the operators $\mathcal{T}$ and
$\mathcal{J}$
$T_{\psi_{\mathrm{T}}}={}_{f}\bra{\psi_{\mathrm{T}}}\mathcal{T}\ket{\psi_{\mathrm{T}}}_{f}\,;\qquad J_{\psi_{\mathrm{T}}}={}_{f}\bra{\psi_{\mathrm{T}}}\mathcal{J}\ket{\psi_{\mathrm{T}}}_{f}.$
(37)
Evaluating the electronic part of the expectation value of the GS energy, one
is left with the purely photonic Hamiltonian
$H_{\psi_{\mathrm{T}}}=\omega_{0}\left(a^{\dagger}a+\frac{1}{2}\right)+\cos\left(\frac{g}{\sqrt{L}}\left(a^{\dagger}{+}a\right)\right)T_{\psi_{\mathrm{T}}}+\sin\left(\frac{g}{\sqrt{L}}\left(a^{\dagger}{+}a\right)\right)J_{\psi_{\mathrm{T}}}.$
(38)
The problem reduces to that of an anharmonic oscillator, which can be solved
by numerical diagonalization after introducing a cutoff $N_{\rm max}^{\rm
boson}$ in the Fock space. All results are converged with respect to this
cutoff. The scheme then varies over trial wave-functions, optimizing for the
smallest GS energy of the remaining bosonic problem, Eq. (38). It thus only
compares eigenenergies of exact eigenstates, making it possible to find the
true GS. We have chosen different starting wave-functions for the optimization
procedure, including the state where $\langle\rho_{k}\rangle=0.5$ for all $k$
in the BZ as well as randomly generated states. Due to its somewhat better
convergence properties, the former has been used to obtain the plots shown.
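A single step of this scheme can be sketched numerically. The sketch below assumes a tight-binding dispersion, so that the eigenvalues of $\mathcal{T}$ and $\mathcal{J}$ for a given trial occupation $\{n_k\}$ are taken as $\sum_k n_k(-2\cos k)$ and $\sum_k n_k(-2\sin k)$; this dispersion, the half-filled trial state, and all parameter values are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

# One variational step: fix a trial occupation, evaluate the eigenvalues
# T_psi and J_psi of Eq. (37), build the photonic Hamiltonian of Eq. (38)
# in a truncated Fock space, and diagonalize it.
L = 50              # number of k-points (lattice sites); assumed value
g = 1.0             # light-matter coupling; assumed value
omega0 = 1.0        # cavity frequency; assumed value
nmax = 40           # bosonic Fock-space cutoff N_max^boson

k = -np.pi + 2.0 * np.pi * np.arange(L) / L     # quasi-momenta in the BZ
occ = (np.cos(k) > 0).astype(float)             # half-filled Fermi sea (trial state)

T_psi = np.sum(occ * (-2.0 * np.cos(k)))        # eigenvalue of T, Eq. (37)
J_psi = np.sum(occ * (-2.0 * np.sin(k)))        # eigenvalue of J, Eq. (37)

# Bosonic operators in the truncated Fock basis
n = np.arange(nmax)
a = np.diag(np.sqrt(n[1:]), k=1)                # annihilation operator
x = a + a.T                                     # a^dagger + a (real symmetric)

# Operator functions cos(.) and sin(.) via the eigenbasis of x
w, v = np.linalg.eigh(x)
cos_x = v @ np.diag(np.cos(g / np.sqrt(L) * w)) @ v.T
sin_x = v @ np.diag(np.sin(g / np.sqrt(L) * w)) @ v.T

# Photonic Hamiltonian, Eq. (38), and its ground-state energy
H = omega0 * np.diag(n + 0.5) + T_psi * cos_x + J_psi * sin_x
E0 = np.linalg.eigvalsh(H)[0]
```

The full scheme loops this evaluation over trial occupation vectors and keeps the one with the lowest $E_0$; convergence in the cutoff should be verified by repeating the diagonalization with a larger `nmax`.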
We verified our results against an exact diagonalization of the full
Hamiltonian for small system sizes obtaining identical results within machine
precision.
## Data availability
Data included in the paper can be reproduced using the Python code available
at https://github.com/ce335805/comeChainComeShine.git.
## Code availability
The code used within this work is openly available at
https://github.com/ce335805/comeChainComeShine.git.
## References
* [1] de la Torre, A. _et al._ Colloquium: Nonthermal pathways to ultrafast control in quantum materials. _Rev. Mod. Phys._ 93, 041002 (2021). URL https://link.aps.org/doi/10.1103/RevModPhys.93.041002.
* [2] Acín, A. _et al._ The quantum technologies roadmap: a european community view. _New Journal of Physics_ 20, 080201 (2018). URL https://doi.org/10.1088/1367-2630/aad1ea.
* [3] Moody, G. _et al._ 2022 roadmap on integrated quantum photonics. _Journal of Physics: Photonics_ 4, 012501 (2022). URL https://doi.org/10.1088/2515-7647/ac1ef4.
* [4] Ebbesen, T. W. Hybrid light–matter states in a molecular and material science perspective. _Accounts of Chemical Research_ 49, 2403–2412 (2016). URL https://doi.org/10.1021/acs.accounts.6b00295.
* [5] Feist, J., Galego, J. & Garcia-Vidal, F. J. Polaritonic Chemistry with Organic Molecules. _ACS Photonics_ 5, 205–216 (2018).
* [6] Ruggenthaler, M., Tancogne-Dejean, N., Flick, J., Appel, H. & Rubio, A. From a quantum-electrodynamical light–matter description to novel spectroscopies. _Nature Reviews Chemistry_ 2, 0118 (2018). URL https://doi.org/10.1038/s41570-018-0118.
* [7] Ribeiro, R. F., Martínez-Martínez, L. A., Du, M., Campos-Gonzalez-Angulo, J. & Yuen-Zhou, J. Polariton chemistry: Controlling molecular dynamics with optical cavities. _Chemical Science_ 9, 6325–6339 (2018).
* [8] Flick, J., Rivera, N. & Narang, P. Strong light-matter coupling in quantum chemistry and quantum photonics. _Nanophotonics_ 7, 1479–1501 (2018). URL https://www.degruyter.com/view/j/nanoph.2018.7.issue-9/nanoph-2018-0067/nanoph-2018-0067.xml.
* [9] Frisk Kockum, A., Miranowicz, A., De Liberato, S., Savasta, S. & Nori, F. Ultrastrong coupling between light and matter. _Nature Reviews Physics_ 1, 19–40 (2019). URL https://doi.org/10.1038/s42254-018-0006-2.
* [10] Wang, Y. H., Steinberg, H., Jarillo-Herrero, P. & Gedik, N. Observation of Floquet-Bloch states on the surface of a Topological Insulator. _Science_ 342, 453–457 (2013). URL https://science.sciencemag.org/content/342/6157/453.
* [11] McIver, J. W. _et al._ Light-induced anomalous Hall effect in graphene. _Nature Physics_ 16, 38–41 (2020). URL https://www.nature.com/articles/s41567-019-0698-y.
* [12] Bukov, M., D’Alessio, L. & Polkovnikov, A. Universal high-frequency behavior of periodically driven systems: from dynamical stabilization to Floquet engineering. _Advances in Physics_ 64, 139–226 (2015). URL http://dx.doi.org/10.1080/00018732.2015.1055918.
* [13] Eckardt, A. Colloquium: Atomic quantum gases in periodically driven optical lattices. _Reviews of Modern Physics_ 89, 011004 (2017). URL http://link.aps.org/doi/10.1103/RevModPhys.89.011004. eprint 1606.08041.
  * [14] Oka, T. & Kitamura, S. Floquet Engineering of Quantum Materials. _Annual Review of Condensed Matter Physics_ 10, 387–408 (2019). URL https://doi.org/10.1146/annurev-conmatphys-031218-013423.
* [15] Rudner, M. S. & Lindner, N. H. The floquet engineer’s handbook (2020). eprint arXiv:2003.08252.
* [16] Oka, T. & Aoki, H. Photovoltaic Hall effect in graphene. _Physical Review B_ 79, 081406 (2009).
* [17] Lindner, N. H., Refael, G. & Galitski, V. Floquet topological insulator in semiconductor quantum wells. _Nature Physics_ 7, 490–495 (2011). URL https://doi.org/10.1038/nphys1926.
* [18] Kitagawa, T., Oka, T., Brataas, A., Fu, L. & Demler, E. Transport properties of nonequilibrium systems under the application of light: Photoinduced quantum hall insulators without landau levels. _Phys. Rev. B_ 84, 235108 (2011). URL https://link.aps.org/doi/10.1103/PhysRevB.84.235108.
* [19] Decker, K. S. C., Karrasch, C., Eisert, J. & Kennes, D. M. Floquet engineering topological many-body localized systems. _Phys. Rev. Lett._ 124, 190601 (2020). URL https://link.aps.org/doi/10.1103/PhysRevLett.124.190601.
* [20] Sentef, M. A. _et al._ Theory of Floquet band formation and local pseudospin textures in pump-probe photoemission of graphene. _Nature Communications_ 6, 7047 (2015). URL https://www.nature.com/articles/ncomms8047.
* [21] Hübener, H., Sentef, M. A., De Giovannini, U., Kemper, A. F. & Rubio, A. Creating stable Floquet-Weyl semimetals by laser-driving of 3D Dirac materials. _Nature Communications_ 8, 13940 (2017). eprint 1604.03399.
* [22] Fleckenstein, C., Ziani, N. T., Privitera, L., Sassetti, M. & Trauzettel, B. Transport signatures of a floquet topological transition at the helical edge. _Phys. Rev. B_ 101, 201401 (2020). URL https://link.aps.org/doi/10.1103/PhysRevB.101.201401.
* [23] Bukov, M., Kolodrubetz, M. & Polkovnikov, A. Schrieffer-wolff transformation for periodically driven systems: Strongly correlated systems with artificial gauge fields. _Phys. Rev. Lett._ 116, 125301 (2016). URL https://link.aps.org/doi/10.1103/PhysRevLett.116.125301.
* [24] Claassen, M., Jiang, H. C., Moritz, B. & Devereaux, T. P. Dynamical time-reversal symmetry breaking and photo-induced chiral spin liquids in frustrated Mott insulators. _Nature Communications_ 8, 1192 (2017). URL https://www.nature.com/articles/s41467-017-00876-y.pdf. eprint 1611.07964.
* [25] Kennes, D. M., de la Torre, A., Ron, A., Hsieh, D. & Millis, A. J. Floquet engineering in quantum chains. _Phys. Rev. Lett._ 120, 127601 (2018). URL https://link.aps.org/doi/10.1103/PhysRevLett.120.127601.
* [26] Mentink, J. H., Balzer, K. & Eckstein, M. Ultrafast and reversible control of the exchange interaction in Mott insulators. _Nature Communications_ 6, 6708 (2015). URL https://www.nature.com/articles/ncomms7708.pdf. eprint arXiv:1407.4761v1.
* [27] Walldorf, N., Kennes, D. M., Paaske, J. & Millis, A. J. The antiferromagnetic phase of the floquet-driven hubbard model. _Phys. Rev. B_ 100, 121110 (2019). URL https://link.aps.org/doi/10.1103/PhysRevB.100.121110.
* [28] Sentef, M. A., Kemper, A. F., Georges, A. & Kollath, C. Theory of light-enhanced phonon-mediated superconductivity. _Physical Review B_ 93, 1–10 (2016). eprint 1505.07575.
* [29] Knap, M., Babadi, M., Refael, G., Martin, I. & Demler, E. Dynamical Cooper pairing in nonequilibrium electron-phonon systems. _Physical Review B_ 94, 214504 (2016). eprint 1511.07874.
* [30] Kennes, D. M., Wilner, E. Y., Reichman, D. R. & Millis, A. J. Transient superconductivity from electronic squeezing of optically pumped phonons. _Nature Physics_ 13, 479–483 (2017). eprint 1609.03802v1.
* [31] Murakami, Y., Tsuji, N., Eckstein, M. & Werner, P. Nonequilibrium steady states and transient dynamics of conventional superconductors under phonon driving. _Physical Review B_ 96, 045125 (2017). eprint 1702.02942.
* [32] Porta, S. _et al._ Feasible model for photoinduced interband pairing. _Phys. Rev. B_ 100, 024513 (2019). URL https://link.aps.org/doi/10.1103/PhysRevB.100.024513.
* [33] Kennes, D. M., Claassen, M., Sentef, M. A. & Karrasch, C. Light-induced $d$-wave superconductivity through floquet-engineered fermi surfaces in cuprates. _Phys. Rev. B_ 100, 075115 (2019). URL https://link.aps.org/doi/10.1103/PhysRevB.100.075115.
* [34] D’Alessio, L. & Rigol, M. Long-time behavior of isolated periodically driven interacting lattice systems. _Phys. Rev. X_ 4, 041048 (2014). URL https://link.aps.org/doi/10.1103/PhysRevX.4.041048.
* [35] Lazarides, A., Das, A. & Moessner, R. Equilibrium states of generic quantum systems subject to periodic driving. _Phys. Rev. E_ 90, 012110 (2014). URL https://link.aps.org/doi/10.1103/PhysRevE.90.012110.
* [36] Kibis, O. V., Kyriienko, O. & Shelykh, I. A. Band gap in graphene induced by vacuum fluctuations. _Phys. Rev. B_ 84, 195413 (2011). URL https://link.aps.org/doi/10.1103/PhysRevB.84.195413.
* [37] Wang, X., Ronca, E. & Sentef, M. A. Cavity quantum electrodynamical Chern insulator: Towards light-induced quantized anomalous Hall effect in graphene. _Physical Review B_ 99, 235156 (2019). URL https://link.aps.org/doi/10.1103/PhysRevB.99.235156.
* [38] Hübener, H. _et al._ Engineering quantum materials with chiral optical cavities. _Nature Materials_ 20, 438–442 (2021). URL https://doi.org/10.1038/s41563-020-00801-7.
* [39] Dutra, S. M. _Cavity Quantum Electrodynamics_ (John Wiley & Sons, Inc., 2004). URL https://doi.org/10.1002/0471713465.
* [40] Li, J. _et al._ Electromagnetic coupling in tight-binding models for strongly correlated light and matter. _Phys. Rev. B_ 101, 205140 (2020). URL https://link.aps.org/doi/10.1103/PhysRevB.101.205140.
* [41] Maissen, C. _et al._ Ultrastrong coupling in the near field of complementary split-ring resonators. _Phys. Rev. B_ 90, 205309 (2014). URL https://link.aps.org/doi/10.1103/PhysRevB.90.205309.
* [42] Meschede, D., Walther, H. & Müller, G. One-atom maser. _Phys. Rev. Lett._ 54, 551–554 (1985). URL https://link.aps.org/doi/10.1103/PhysRevLett.54.551.
* [43] Thompson, R. J., Rempe, G. & Kimble, H. J. Observation of normal-mode splitting for an atom in an optical cavity. _Phys. Rev. Lett._ 68, 1132–1135 (1992). URL https://link.aps.org/doi/10.1103/PhysRevLett.68.1132.
  * [44] Gu, X., Kockum, A. F., Miranowicz, A., xi Liu, Y. & Nori, F. Microwave photonics with superconducting quantum circuits. _Physics Reports_ 718-719, 1–102 (2017). URL https://www.sciencedirect.com/science/article/pii/S0370157317303290.
* [45] Scalari, G. _et al._ Ultrastrong coupling of the cyclotron transition of a 2d electron gas to a THz metamaterial. _Science_ 335, 1323–1326 (2012). URL https://doi.org/10.1126/science.1216022.
* [46] Keller, J. _et al._ Few-electron ultrastrong light-matter coupling at 300 ghz with nanogap hybrid lc microcavities. _Nano Letters_ 17, 7410–7415 (2017). URL https://doi.org/10.1021/acs.nanolett.7b03228.
* [47] Ballarini, D. & Liberato, S. D. Polaritonics: from microcavities to sub-wavelength confinement. _Nanophotonics_ 8, 641–654 (2019). URL https://doi.org/10.1515/nanoph-2018-0188.
* [48] Paravicini-Bagliani, G. L. _et al._ Magneto-transport controlled by landau polariton states. _Nature Physics_ 15, 186–190 (2018). URL https://doi.org/10.1038/s41567-018-0346-y.
* [49] Kasprzak, J. _et al._ Bose–Einstein condensation of exciton polaritons. _Nature_ 443, 409–414 (2006).
  * [50] Keeling, J. & Kéna-Cohen, S. Bose–Einstein condensation of exciton-polaritons in organic microcavities. _Annual Review of Physical Chemistry_ 71, 435–459 (2020). URL https://doi.org/10.1146/annurev-physchem-010920-102509.
* [51] Byrnes, T., Kim, N. Y. & Yamamoto, Y. Exciton-polariton condensates. _Nature Physics_ 10, 803–813 (2014).
* [52] Thomas, A. _et al._ Ground-state chemical reactivity under vibrational coupling to the vacuum electromagnetic field. _Angewandte Chemie International Edition_ 55, 11462–11466 (2016). URL https://onlinelibrary.wiley.com/doi/abs/10.1002/anie.201605504. eprint https://onlinelibrary.wiley.com/doi/pdf/10.1002/anie.201605504.
* [53] Schäfer, C., Flick, J., Ronca, E., Narang, P. & Rubio, A. Shining light on the microscopic resonant mechanism responsible for cavity-mediated chemical reactivity (2021). eprint arXiv:2104.12429.
* [54] Sentef, M. A., Ruggenthaler, M. & Rubio, A. Cavity quantum-electrodynamical polaritonically enhanced electron-phonon coupling and its influence on superconductivity. _Science Advances_ 4, eaau6969 (2018). URL http://advances.sciencemag.org/content/4/11/eaau6969.
* [55] Curtis, J. B. _et al._ Cavity magnon-polaritons in cuprate parent compounds. _Phys. Rev. Research_ 4, 013101 (2022). URL https://link.aps.org/doi/10.1103/PhysRevResearch.4.013101.
* [56] Schlawin, F., Cavalleri, A. & Jaksch, D. Cavity-Mediated Electron-Photon Superconductivity. _Physical Review Letters_ 122, 133602 (2019). URL https://link.aps.org/doi/10.1103/PhysRevLett.122.133602.
* [57] Chakraborty, A. & Piazza, F. Long-range photon fluctuations enhance photon-mediated electron pairing and superconductivity. _Phys. Rev. Lett._ 127, 177002 (2021). URL https://link.aps.org/doi/10.1103/PhysRevLett.127.177002.
* [58] Gao, H., Schlawin, F., Buzzi, M., Cavalleri, A. & Jaksch, D. Photoinduced electron pairing in a driven cavity. _Phys. Rev. Lett._ 125, 053602 (2020). URL https://link.aps.org/doi/10.1103/PhysRevLett.125.053602.
* [59] Curtis, J. B., Raines, Z. M., Allocca, A. A., Hafezi, M. & Galitski, V. M. Cavity Quantum Eliashberg Enhancement of Superconductivity. _Physical Review Letters_ 122, 167002 (2019). URL https://link.aps.org/doi/10.1103/PhysRevLett.122.167002.
* [60] Allocca, A. A., Raines, Z. M., Curtis, J. B. & Galitski, V. M. Cavity superconductor-polaritons. _Phys. Rev. B_ 99, 020504 (2019). URL https://link.aps.org/doi/10.1103/PhysRevB.99.020504.
  * [61] Thomas, A. _et al._ Exploring Superconductivity under Strong Coupling with the Vacuum Electromagnetic Field. _arXiv:1911.01459 [cond-mat, physics:quant-ph]_ (2019). URL http://arxiv.org/abs/1911.01459.
* [62] Nataf, P. & Ciuti, C. No-go theorem for superradiant quantum phase transitions in cavity qed and counter-example in circuit qed. _Nature Communications_ 1, 72 (2010). URL https://doi.org/10.1038/ncomms1069.
* [63] Mazza, G. & Georges, A. Superradiant Quantum Materials. _Physical Review Letters_ 122, 017401 (2019). URL https://link.aps.org/doi/10.1103/PhysRevLett.122.017401.
* [64] Andolina, G. M., Pellegrino, F. M. D., Giovannetti, V., MacDonald, A. H. & Polini, M. Cavity quantum electrodynamics of strongly correlated electron systems: A no-go theorem for photon condensation. _Physical Review B_ 100, 121109 (2019). URL https://link.aps.org/doi/10.1103/PhysRevB.100.121109.
* [65] Ashida, Y., Imamoglu, A. & Demler, E. Nonperturbative waveguide quantum electrodynamics (2021). eprint arXiv:2105.08833.
* [66] Schuler, M., Bernardis, D. D., Läuchli, A. M. & Rabl, P. The Vacua of Dipolar Cavity Quantum Electrodynamics. _SciPost Phys._ 9, 66 (2020). URL https://scipost.org/10.21468/SciPostPhys.9.5.066.
* [67] De Bernardis, D., Jaako, T. & Rabl, P. Cavity quantum electrodynamics in the nonperturbative regime. _Phys. Rev. A_ 97, 043820 (2018). URL https://link.aps.org/doi/10.1103/PhysRevA.97.043820.
* [68] Guerci, D., Simon, P. & Mora, C. Superradiant phase transition in electronic systems and emergent topological phases. _Phys. Rev. Lett._ 125, 257604 (2020). URL https://link.aps.org/doi/10.1103/PhysRevLett.125.257604.
* [69] Reitz, M., Sommer, C. & Genes, C. Cooperative quantum phenomena in light-matter platforms. _PRX Quantum_ 3, 010201 (2022). URL https://link.aps.org/doi/10.1103/PRXQuantum.3.010201.
* [70] Stokes, A. & Nazir, A. Uniqueness of the Phase Transition in Many-Dipole Cavity Quantum Electrodynamical Systems. _Phys. Rev. Lett._ 125, 143603 (2020). URL https://link.aps.org/doi/10.1103/PhysRevLett.125.143603. Publisher: American Physical Society.
* [71] Genet, C., Faist, J. & Ebbesen, T. W. Inducing new material properties with hybrid light–matter states. _Physics Today_ 74, 42–48 (2021). URL https://doi.org/10.1063/pt.3.4749.
* [72] Dicke, R. H. Coherence in spontaneous radiation processes. _Phys. Rev._ 93, 99–110 (1954). URL https://link.aps.org/doi/10.1103/PhysRev.93.99.
* [73] Kirton, P., Roses, M. M., Keeling, J. & Torre, E. G. D. Introduction to the Dicke Model: From Equilibrium to Nonequilibrium, and Vice Versa. _Advanced Quantum Technologies_ 2, 1800043 (2019).
* [74] Fox, M. & Javanainen, J. Quantum optics: An introduction. _Physics Today - PHYS TODAY_ 60 (2007).
* [75] Frisk Kockum, A., Miranowicz, A., De Liberato, S., Savasta, S. & Nori, F. Ultrastrong coupling between light and matter. _Nature Reviews Physics_ 1, 19–40 (2019). URL https://www.nature.com/articles/s42254-018-0006-2.
* [76] Tokatly, I. V. Time-dependent density functional theory for many-electron systems interacting with cavity photons. _Phys. Rev. Lett._ 110, 233001 (2013). URL https://link.aps.org/doi/10.1103/PhysRevLett.110.233001.
* [77] Ruggenthaler, M. _et al._ Quantum-electrodynamical density-functional theory: Bridging quantum optics and electronic-structure theory. _Phys. Rev. A_ 90, 012508 (2014). URL https://link.aps.org/doi/10.1103/PhysRevA.90.012508.
* [78] Pellegrini, C., Flick, J., Tokatly, I. V., Appel, H. & Rubio, A. Optimized effective potential for quantum electrodynamical time-dependent density functional theory. _Phys. Rev. Lett._ 115, 093001 (2015). URL https://link.aps.org/doi/10.1103/PhysRevLett.115.093001.
* [79] Haugland, T. S., Ronca, E., Kjønstad, E. F., Rubio, A. & Koch, H. Coupled cluster theory for molecular polaritons: Changing ground and excited states. _Phys. Rev. X_ 10, 041043 (2020). URL https://link.aps.org/doi/10.1103/PhysRevX.10.041043.
* [80] Buchholz, F., Theophilou, I., Giesbertz, K. J. H., Ruggenthaler, M. & Rubio, A. Light–matter hybrid-orbital-based first-principles methods: The influence of polariton statistics. _Journal of Chemical Theory and Computation_ 16, 5601–5620 (2020). URL https://doi.org/10.1021/acs.jctc.0c00469.
* [81] Nielsen, S. E. B., Schäfer, C., Ruggenthaler, M. & Rubio, A. Dressed-orbital approach to cavity quantum electrodynamics and beyond (2018). eprint arXiv:1812.00388.
* [82] Rokaj, V., Ruggenthaler, M., Eich, F. G. & Rubio, A. Free electron gas in cavity quantum electrodynamics. _Phys. Rev. Research_ 4, 013012 (2022). URL https://link.aps.org/doi/10.1103/PhysRevResearch.4.013012.
* [83] Li, J. _et al._ Electromagnetic coupling in tight-binding models for strongly correlated light and matter. _Phys. Rev. B_ 101, 205140 (2020). URL https://link.aps.org/doi/10.1103/PhysRevB.101.205140.
* [84] Sentef, M. A., Li, J., Künzel, F. & Eckstein, M. Quantum to classical crossover of Floquet engineering in correlated quantum systems. _Physical Review Research_ 2, 033033 (2020). URL https://link.aps.org/doi/10.1103/PhysRevResearch.2.033033.
* [85] Dmytruk, O. & Schiró, M. Gauge fixing for strongly correlated electrons coupled to quantum light. _Phys. Rev. B_ 103, 075131 (2021). URL https://link.aps.org/doi/10.1103/PhysRevB.103.075131.
* [86] Kiffner, M., Coulthard, J. R., Schlawin, F., Ardavan, A. & Jaksch, D. Manipulating quantum materials with quantum light. _Physical Review B_ 99, 085116 (2019). URL https://link.aps.org/doi/10.1103/PhysRevB.99.085116.
* [87] Bagchi, B., Ghosh, R. & Khare, A. A pedestrian introduction to coherent and squeezed states. _International Journal of Modern Physics A_ 35, 2030011 (2020). URL https://doi.org/10.1142/S0217751X20300112. eprint https://doi.org/10.1142/S0217751X20300112.
* [88] Rabl, P., Shnirman, A. & Zoller, P. Generation of squeezed states of nanomechanical resonators by reservoir engineering. _Phys. Rev. B_ 70, 205304 (2004). URL https://link.aps.org/doi/10.1103/PhysRevB.70.205304.
* [89] Glauber, R. J. & Lewenstein, M. Quantum optics of dielectric media. _Phys. Rev. A_ 43, 467–491 (1991). URL https://link.aps.org/doi/10.1103/PhysRevA.43.467.
* [90] Walls, D. & Milburn, G. J. (eds.) _Quantum Optics_ (Springer Berlin Heidelberg, 2008). URL https://doi.org/10.1007/978-3-540-28574-8.
* [91] Ciuti, C., Bastard, G. & Carusotto, I. Quantum vacuum properties of the intersubband cavity polariton field. _Phys. Rev. B_ 72, 115303 (2005). URL https://link.aps.org/doi/10.1103/PhysRevB.72.115303.
* [92] Riek, C. _et al._ Direct sampling of electric-field vacuum fluctuations. _Science_ 350, 420–423 (2015). URL https://science.sciencemag.org/content/350/6259/420. eprint https://science.sciencemag.org/content/350/6259/420.full.pdf.
* [93] Benea-Chelmus, I.-C., Settembrini, F. F., Scalari, G. & Faist, J. Electric field correlation measurements on the electromagnetic vacuum state. _Nature_ 568, 202–206 (2019). URL https://doi.org/10.1038/s41586-019-1083-9.
* [94] Kirton, P. & Keeling, J. Superradiant and Lasing States in Driven-Dissipative Dicke Models. _N. J. Phys._ 20, 015009 (2018).
* [95] Rzażewski, K., Wódkiewicz, K. & Żakowicz, W. Phase transitions, two-level atoms, and the ${A}^{2}$ term. _Phys. Rev. Lett._ 35, 432–434 (1975). URL https://link.aps.org/doi/10.1103/PhysRevLett.35.432.
* [96] Freericks, J. K., Krishnamurthy, H. R. & Pruschke, T. Theoretical description of time-resolved photoemission spectroscopy: Application to pump-probe experiments. _Phys. Rev. Lett._ 102, 136401 (2009). URL https://link.aps.org/doi/10.1103/PhysRevLett.102.136401.
* [97] Tsuji, N., Oka, T. & Aoki, H. Correlated electron systems periodically driven out of equilibrium: $\text{Floquet}+\text{DMFT}$ formalism. _Phys. Rev. B_ 78, 235124 (2008). URL https://link.aps.org/doi/10.1103/PhysRevB.78.235124.
* [98] Amelio, I., Korosec, L., Carusotto, I. & Mazza, G. Optical dressing of the electronic response of two-dimensional semiconductors in quantum and classical descriptions of cavity electrodynamics. _Phys. Rev. B_ 104, 235120 (2021). URL https://link.aps.org/doi/10.1103/PhysRevB.104.235120.
* [99] Scalapino, D. J., White, S. R. & Zhang, S. C. Insulator, metal, or superconductor: The criteria. _Phys. Rev. B_ 47, 7995–8007 (1993). URL https://link.aps.org/doi/10.1103/PhysRevB.47.7995.
* [100] Alvermann, A., Fehske, H. & Trugman, S. A. Polarons and slow quantum phonons. _Phys. Rev. B_ 81, 165113 (2010). URL https://link.aps.org/doi/10.1103/PhysRevB.81.165113.
* [101] Giamarchi, T. _Quantum Physics in One Dimension_ (Oxford University Press, 2003). URL https://doi.org/10.1093/acprof:oso/9780198525004.001.0001.
* [102] Gao, H., Schlawin, F. & Jaksch, D. Higgs mode stabilization by photo-induced long-range interactions in a superconductor (2021). eprint arXiv:2106.05076.
* [103] Lenk, K. & Eckstein, M. Collective excitations of the $u$(1)-symmetric exciton insulator in a cavity. _Phys. Rev. B_ 102, 205129 (2020). URL https://link.aps.org/doi/10.1103/PhysRevB.102.205129.
* [104] Shalabney, A. _et al._ Coherent coupling of molecular resonators with a microcavity mode. _Nature Communications_ 6, 5981 (2015).
* [105] Du, M. & Yuen-Zhou, J. Catalysis by dark states in vibropolaritonic chemistry. _Phys. Rev. Lett._ 128, 096001 (2022). URL https://link.aps.org/doi/10.1103/PhysRevLett.128.096001.
  * [106] Sidler, D., Ruggenthaler, M., Schäfer, C., Ronca, E. & Rubio, A. A perspective on ab initio modeling of polaritonic chemistry: The role of non-equilibrium effects and quantum collectivity. _arXiv:2108.12244 [physics, physics:quant-ph]_ (2021). URL http://arxiv.org/abs/2108.12244.
* [107] Buzzi, M. _et al._ Photomolecular high-temperature superconductivity. _Phys. Rev. X_ 10, 031028 (2020). URL https://link.aps.org/doi/10.1103/PhysRevX.10.031028.
* [108] Braunstein, S. L. & van Loock, P. Quantum information with continuous variables. _Rev. Mod. Phys._ 77, 513–577 (2005). URL https://link.aps.org/doi/10.1103/RevModPhys.77.513.
  * [109] Zhang, Q. _et al._ Collective non-perturbative coupling of 2D electrons with high-quality-factor terahertz cavity photons. _Nature Physics_ 12, 1005–1011 (2016). URL https://www.nature.com/articles/nphys3850.
  * [110] Ruggenthaler, M., Tancogne-Dejean, N., Flick, J., Appel, H. & Rubio, A. From a quantum-electrodynamical light–matter description to novel spectroscopies. _Nature Reviews Chemistry_ 2, 1–16 (2018). URL https://www.nature.com/articles/s41570-018-0118.
* [111] Truax, D. R. Baker-Campbell-Hausdorff relations and unitarity of SU(2) and SU(1,1) squeeze operators. _Phys. Rev. D_ 31, 1988–1991 (1985). URL https://link.aps.org/doi/10.1103/PhysRevD.31.1988.
  * [112] Van-Brunt, A. & Visser, M. Special-case closed form of the Baker-Campbell-Hausdorff formula. _J. Phys. A: Math. Theor._ 48, 225207 (2015). URL http://arxiv.org/abs/1501.02506.
* [113] Mahan, G. D. _Many-Particle Physics_ (Springer US, 1990). URL https://doi.org/10.1007/978-1-4613-1469-1.
## Acknowledgements
The authors thank Vasilis Rokaj, Brieuc Le De, Martin Eckstein, Jiajun Li and
Mara Caltapanides for fruitful discussions regarding the manuscript.
We acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German
Research Foundation) via Germany’s Excellence Strategy – Cluster of Excellence
Matter and Light for Quantum Computing (ML4Q) EXC 2004/1 – 390534769 and
within the RTG 1995. We also acknowledge support from the Max Planck-New York
City Center for Non-Equilibrium Quantum Phenomena. MAS acknowledges financial
support through the Deutsche Forschungsgemeinschaft (DFG, German Research
Foundation) via the Emmy Noether program (SE 2558/2). C.K. acknowledges
support by the Deutsche Forschungsgemeinschaft (DFG, German Research
Foundation) through the Emmy Noether program (KA3360/2-1) as well as by
‘Niedersächsisches Vorab’ through the ‘Quantum- and Nano-Metrology (QUANOMET)’
initiative within the project P-1. M.O. gratefully acknowledges the support of
the Braunschweig International Graduate School of Metrology B-IGSM and the DFG
Research Training Group 1952 Metrology for Complex Nanosystems.
## Author contributions
C.J.E. carried out the simulations with the variational code, G.P. and M.O.
performed the ED simulations. Analytical calculations were done by C.J.E. All
authors analyzed the data and discussed the results. C.J.E., G.P., M.A.S. and
D.M.K. wrote the manuscript with input from M.O., F.C. and C.K. The project
was conceived by D.M.K. and M.A.S.
## Competing Interests
The authors declare no competing interests.
## Supplementary information
## Supplementary Note 1: Collective strong coupling in the case of $N$
identical modes
In this Supplementary Note we show that coupling electrons to $N$ identical modes
with a coupling constant $\frac{g}{\sqrt{L}}$, in a setup as described in the
Model subsection under Results of the main text, effectively results in a
single mode coupling with enhanced strength $\frac{g\,\sqrt{N}}{\sqrt{L}}$ and
$N-1$ completely decoupled modes. We first show that this holds for the
Hamiltonian expanded to second order in the light-matter coupling (compare
also Eq. (15) of the main text) and later in this section argue why it might
also hold for the full Peierls substitution including all orders in the LMC
(compare also Eq. (2) of the main text). We write the Hamiltonian to second
order in the LMC $g$ for many identical modes as
$H=\mathcal{T}+\frac{g}{\sqrt{L}}\mathcal{J}\sum_{\lambda}\left(a_{\lambda}^{{\dagger}}+a_{\lambda}\right)-\frac{1}{2}\frac{g^{2}}{L}\mathcal{T}\left(\sum_{\lambda}\left(a_{\lambda}^{{\dagger}}+a_{\lambda}\right)\right)^{2}+\omega\sum_{\lambda}a_{\lambda}^{{\dagger}}a_{\lambda}.$
(39)
Here $a_{\lambda}$ annihilates and $a_{\lambda}^{{\dagger}}$ creates a photon
in mode $\lambda$. All other symbols are as defined in the main text.
To find a form where the modes are decoupled we will represent them in terms
of their generalized coordinate and momentum according to
$\displaystyle X_{\lambda}$
$\displaystyle=\frac{1}{\sqrt{2\omega}}\left(a_{\lambda}^{{\dagger}}+a_{\lambda}\right)$
(40) $\displaystyle P_{\lambda}$
$\displaystyle=i\frac{\sqrt{\omega}}{\sqrt{2}}\left(a_{\lambda}^{{\dagger}}-a_{\lambda}\right)$
with which the Hamiltonian becomes
$H=\mathcal{T}+\sqrt{2\omega}\frac{g}{\sqrt{L}}\mathcal{J}\sum_{\lambda}X_{\lambda}-\frac{g^{2}}{L}\omega\mathcal{T}\sum_{\lambda,\kappa}X_{\lambda}X_{\kappa}+\sum_{\lambda}\frac{1}{2}\omega^{2}X_{\lambda}^{2}+\frac{1}{2}P_{\lambda}^{2}.$
(41)
This can be written in matrix form as
$H=\mathcal{T}+\sqrt{2\omega}\frac{g}{\sqrt{L}}\mathcal{J}\sum_{\lambda}X_{\lambda}-\frac{g^{2}}{L}\omega\mathcal{T}\,\underline{X}^{\mathrm{T}}\begin{pmatrix}I_{\mathrm{e}}-\frac{\omega L}{2g^{2}}\mathcal{T}^{-1}&I_{\mathrm{e}}&\dots&I_{\mathrm{e}}\\ I_{\mathrm{e}}&I_{\mathrm{e}}-\frac{\omega L}{2g^{2}}\mathcal{T}^{-1}&\dots&I_{\mathrm{e}}\\ \vdots&\vdots&\ddots&\vdots\\ I_{\mathrm{e}}&I_{\mathrm{e}}&\dots&I_{\mathrm{e}}-\frac{\omega L}{2g^{2}}\mathcal{T}^{-1}\end{pmatrix}\underline{X}+\frac{1}{2}\underline{P}^{\mathrm{T}}\,I_{N{\times}N}\,\underline{P}.$
(42)
Here $I_{\mathrm{e}}$ is the identity on the electronic part of the Hilbert
space. We have introduced $N$-dimensional coordinate and momentum vectors as
$\underline{X}=\begin{pmatrix}X_{1}\\ \vdots\\ X_{N}\end{pmatrix};\qquad\underline{P}=\begin{pmatrix}P_{1}\\ \vdots\\ P_{N}\end{pmatrix}$ (43)
and $I_{N{\times}N}$ is simply the unity in $N$ dimensions with
$I_{\mathrm{e}}$ on the diagonal. One eigenvector of the above matrix in Eq.
(42) is clearly
$v^{1}=\frac{1}{\sqrt{N}}\begin{pmatrix}1\\ \vdots\\ 1\end{pmatrix}$ (44)
with corresponding eigenvalue (that still contains an operator from the
electronic subsystem due to the composite nature of the system)
$\varepsilon^{1}=N-\frac{\omega\,L}{2g^{2}}\mathcal{T}^{-1}.$ (45)
Each vector $v=(v_{1},\dots,v_{N})^{\mathrm{T}}$ from the orthogonal $N-1$
dimensional subspace of $v^{1}$, defined through the equation
$\sum_{i=1}^{N}v_{i}=0$, is an eigenvector with eigenvalue
$\varepsilon=-\frac{\omega\,L}{2g^{2}}\mathcal{T}^{-1}$ which is therefore
$N-1$ times degenerate. Denoting by $P_{+}$ and $X_{+}$ momentum and
coordinate corresponding to the first eigenvector and by $\tilde{P}_{\kappa}$,
$\tilde{X}_{\kappa}$, $\kappa=1,\dots,N-1$ momenta and coordinates
corresponding to the other $N-1$ eigenvectors we can write the Hamiltonian
with decoupled bosonic modes as
$H=\mathcal{T}+\sqrt{2\omega}\sqrt{N}\frac{g}{\sqrt{L}}X_{+}\mathcal{J}+\frac{1}{2}\left(\omega^{2}-2N\frac{g^{2}}{L}\omega\mathcal{T}\right)X_{+}^{2}+\frac{1}{2}P_{+}^{2}+\sum_{\kappa}\frac{1}{2}\omega^{2}\tilde{X}_{\kappa}^{2}+\frac{1}{2}\tilde{P}_{\kappa}^{2}.$
(46)
From this it is clear that the $X_{+}$ mode couples to the electrons with
effective strength $\frac{g\sqrt{N}}{\sqrt{L}}$, while the other $N-1$ modes
couple neither to the electrons nor among each other.
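The eigenstructure claimed above can be checked numerically. Replacing the operator $\frac{\omega L}{2g^{2}}\mathcal{T}^{-1}$ in Eq. (42) by a scalar placeholder $c$ (an illustrative assumption), the coupling matrix becomes the $N\times N$ matrix with $1-c$ on the diagonal and $1$ elsewhere, which should have the nondegenerate eigenvalue $N-c$ with uniform eigenvector $v^{1}$ and the $(N-1)$-fold degenerate eigenvalue $-c$:

```python
import numpy as np

# Coupling matrix of Eq. (42) with the operator (omega*L / 2g^2) T^{-1}
# replaced by a scalar placeholder c (illustrative assumption).
N, c = 6, 2.5
M = np.ones((N, N)) - c * np.eye(N)

vals, vecs = np.linalg.eigh(M)

# One eigenvalue N - c with uniform eigenvector; N-1 degenerate at -c.
assert np.isclose(vals.max(), N - c)
assert np.allclose(np.sort(vals)[:N - 1], -c)
uniform = np.ones(N) / np.sqrt(N)
assert np.allclose(np.abs(vecs[:, -1]), uniform)
```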
Next we discuss the case of the full Peierls substitution keeping all orders
in the LMC. For this situation the Hamiltonian including many identical modes
with zero momentum transfer would read
$H=\sin\left(\frac{g}{\sqrt{L}}\sum_{\lambda}\left(a_{\lambda}^{{\dagger}}+a_{\lambda}\right)\right)\mathcal{J}+\cos\left(\frac{g}{\sqrt{L}}\sum_{\lambda}\left(a_{\lambda}^{{\dagger}}+a_{\lambda}\right)\right)\mathcal{T}+\omega\sum_{\lambda}a_{\lambda}^{{\dagger}}a_{\lambda}.$
(47)
We now write this Hamiltonian in terms of the canonical position and momentum
operators introduced in Eq. (40)
$\displaystyle H$
$\displaystyle=\sin\left(\frac{g\sqrt{2\omega}}{\sqrt{L}}\sum_{\lambda}X_{\lambda}\right)\mathcal{J}+\cos\left(\frac{g\sqrt{2\omega}}{\sqrt{L}}\sum_{\lambda}X_{\lambda}\right)\mathcal{T}+\sum_{\lambda}\frac{1}{2}P_{\lambda}^{2}+\frac{\omega^{2}}{2}X_{\lambda}^{2}$
(48)
$\displaystyle=\sin\left(\frac{g\sqrt{2\omega}}{\sqrt{L}}\sum_{\lambda}X_{\lambda}\right)\mathcal{J}+\cos\left(\frac{g\sqrt{2\omega}}{\sqrt{L}}\sum_{\lambda}X_{\lambda}\right)\mathcal{T}+\frac{1}{2}\underline{P}^{\mathrm{T}}\,I_{N{\times}N}\,\underline{P}+\frac{\omega^{2}}{2}\underline{X}^{\mathrm{T}}\,I_{N{\times}N}\,\underline{X}$
where in the last step we have again introduced $N$-dimensional notation as in
Eq. (43). The fact that the harmonic oscillator terms can be written using the
$N$-dimensional unity $I_{N\times N}$ stems from our approximation of all
modes having equal frequency. Due to this, we can now write the Hamiltonian in
terms of any other set of collective modes, in particular the one used to write
Eq. (46), in which the last term remains diagonal (i.e., it does not couple
different modes), obtaining
$\displaystyle H$
$\displaystyle=\sin\left(\frac{g\sqrt{2\omega}\sqrt{N}}{\sqrt{L}}X_{+}\right)\mathcal{J}+\cos\left(\frac{g\sqrt{2\omega}\sqrt{N}}{\sqrt{L}}X_{+}\right)\mathcal{T}+\frac{1}{2}P_{+}^{2}+\frac{\omega^{2}}{2}X_{+}^{2}+\sum_{\kappa=1}^{N-1}\frac{1}{2}\tilde{P}_{\kappa}^{2}+\frac{\omega^{2}}{2}\tilde{X}_{\kappa}^{2}.$
(49)
Here all operators are defined as in Eq. (46). Thus also in the case of
keeping all orders in the LMC we obtain a single mode with effectively
enhanced coupling $\frac{g}{\sqrt{L}}\rightarrow\frac{g\sqrt{N}}{\sqrt{L}}$
and $N-1$ uncoupled modes.
In Eq. (46) (and also in Eq. (49) upon expanding again) the effective frequency
of the $X_{+}$ mode appears to scale like $\sqrt{N}$ for large enough $N$,
which seems counter-intuitive. This is, however, a consequence of the dipole
approximation taken here for all modes. When allowing for any small but
non-zero momentum transfer, the modes immediately couple to a microscopic
quantity instead of to all electrons collectively, yielding a finite effective
frequency.
The mechanism for collective strong coupling shown here is reminiscent of the
analogous one considered for vibrational strong coupling [104, 105, 106], where
a cavity couples to the vibrational excitations of a solid, and of the
collective strong coupling of an electromagnetic resonator coupled to many
emitters [75, 106].
## Supplementary Note 2: Diagonalization of the Hamiltonian in the TD limit
In this part we show how to diagonalize the Hamiltonian expanded to second
order in the field that gave the only non-vanishing contribution in the TD
limit to the GS energy in Eq. (6). It reads
$H^{2^{\mathrm{nd}}}=\omega_{0}\left(a^{{\dagger}}a+\frac{1}{2}\right)+\mathcal{T}+\frac{g}{\sqrt{L}}\left(a^{\dagger}+a\right)\mathcal{J}-\frac{g^{2}}{2L}(a^{\dagger}{+}a)^{2}\mathcal{T}$
(50)
and can be diagonalized using a combined squeezing and displacement
transformation [30, 87]
$\displaystyle H^{\mathrm{D}}$
$\displaystyle=e^{S^{\mathrm{d}}[\mathcal{T},\mathcal{J}]}e^{S^{\mathrm{sq}}[\mathcal{T}]}H^{2^{\mathrm{nd}}}e^{-S^{\mathrm{d}}[\mathcal{T},\mathcal{J}]}e^{-S^{\mathrm{sq}}[\mathcal{T}]}$
(51) $\displaystyle S^{\text{d}}[\mathcal{T},\mathcal{J}]$
$\displaystyle=\frac{g}{\sqrt{L}\omega_{0}}\left(\frac{\mathcal{W}[\mathcal{T}]}{\omega_{0}}\right)^{-\frac{3}{2}}\left(a^{{\dagger}}-a\right)\mathcal{J},$
$\displaystyle S^{\text{sq}}[\mathcal{T}]$
$\displaystyle=\frac{1}{4}\ln\left(\frac{\mathcal{W}[\mathcal{T}]}{\omega_{0}}\right)\left(a^{2}-(a^{\dagger})^{2}\right).$
The diagonal Hamiltonian $H^{\mathrm{D}}$ is given in the main text Eq. (7)
together with the definition of $\mathcal{W}[\mathcal{T}]$. Both displacement
and squeezing transformations depend on fermionic operators namely the kinetic
energy $\mathcal{T}$ and the current $\mathcal{J}$. Since $\mathcal{T}$ and
$\mathcal{J}$ are diagonal in $k$-space the GS of the whole system is given as
(see also Eq. (11) of the main text and below)
$\displaystyle|\Phi_{\mathrm{GS}}\rangle$
$\displaystyle=|\psi_{\mathrm{GS}}\rangle_{f}\otimes|0_{\beta}\rangle$ (52)
$\displaystyle=|\psi_{\mathrm{GS}}\rangle_{f}\otimes
e^{S^{\mathrm{d}}[-t_{\mathrm{GS}}L,j_{\mathrm{GS}}L]}e^{S^{\mathrm{sq}}[-t_{\mathrm{GS}}L]}|0\rangle.$
where $|\psi_{\mathrm{GS}}\rangle_{f}$ is the unshifted FS and
$|0_{\beta}\rangle$ is the vacuum state of the annihilation (creation)
operators $\beta^{(\dagger)}$ of the coherent squeezed states, defined in Eq.
(8) of the main text. $|0\rangle$ is the vacuum state of the non-squeezed
bosonic operators $a^{{\dagger}}$ and $a$. Since we found $j_{\mathrm{GS}}=0$
due to the vanishing shift of the FS, we have
$e^{S^{\mathrm{d}}[-t_{\mathrm{GS}}L,j_{\mathrm{GS}}L]}=I_{b}$, where $I_{b}$
is the identity on the bosonic part of the Hilbert space. The photon part of
the GS wavefunction is thus given by Eq. (12) of the main part.
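As a numerical sanity check on this diagonalization, one can truncate the bosonic Hilbert space and diagonalize the purely bosonic part of Eq. (50), replacing $\mathcal{T}$ by a scalar expectation value and setting $\mathcal{J}\to 0$. The spectrum should then be harmonic with the dressed frequency $\mathcal{W}$, which we take here as $\mathcal{W}=\omega_{0}\sqrt{1-2g^{2}\mathcal{T}/(\omega_{0}L)}$ (an expression inferred from the squeezing transformation; parameter values are illustrative):

```python
import numpy as np

# Truncated bosonic Hilbert space
D = 300
n = np.arange(D)
a = np.diag(np.sqrt(n[1:]), k=1)          # annihilation operator
ad = a.T                                   # creation operator

omega0, g, L, t_gs = 1.0, 0.5, 1.0, 1.0    # illustrative scalar values
T = -t_gs * L                              # kinetic-energy expectation (negative)

# Bosonic part of Eq. (50) with J -> 0 and T -> scalar
x = ad + a
H = omega0 * (ad @ a + 0.5 * np.eye(D)) - (g**2 / (2 * L)) * T * (x @ x)

vals = np.linalg.eigvalsh(H)

# Dressed frequency inferred from the squeezing transformation
W = omega0 * np.sqrt(1 - 2 * g**2 * T / (omega0 * L))
assert np.isclose(vals[0], W / 2, rtol=1e-5)       # ground-state energy W/2
assert np.isclose(vals[1] - vals[0], W, rtol=1e-5)  # harmonic level spacing W
```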
## Supplementary Note 3: Momentum-resolved spectral function in the TD limit
In this part we show how to analytically calculate the spectral function
$A(k,\omega)$ of the electrons in the TD limit. Since we do this at
temperature $T=0$ the expectation values appearing in the definition of the
spectral function (Eq. (16) of the main text) are taken just with respect to
the GS. None of the operators in the expectation value creates a macroscopic
occupation of the photonic mode. Therefore, the scaling analysis of Eq. (6) of
the main text can be applied in this case allowing us to diagonalize the
problem by the combined squeezing and displacement transformation Eq. (51). To
evaluate the expectation values we also need the behaviour of the fermionic
creation (annihilation) operators under these transformations which read
$\displaystyle
e^{S^{\text{d}}}e^{S^{\text{sq}}}c_{k}e^{-S^{\text{sq}}}e^{-S^{\text{d}}}=c_{k}XY,$
(53) $\displaystyle
e^{S^{\text{d}}}e^{S^{\text{sq}}}c_{k}^{{\dagger}}e^{-S^{\text{sq}}}e^{-S^{\text{d}}}=c_{k}^{{\dagger}}X^{{\dagger}}Y^{{\dagger}}$
with
$\displaystyle\ln(X)=-\frac{g\omega_{0}\mathcal{W}^{-2}}{\sqrt{L}}v_{k}\left(a^{{\dagger}}-a\right)+\mathcal{O}\left(\frac{1}{L^{\frac{3}{2}}}\right),$
(54)
$\displaystyle\ln(Y)=\frac{1}{2}\frac{g^{2}}{\omega_{0}L}\varepsilon_{k}\left(1-2\frac{g^{2}}{\omega_{0}L}\mathcal{T}\right)^{-1}\left(a^{2}-(a^{{\dagger}})^{2}\right)+\mathcal{O}\left(\frac{1}{L^{\frac{3}{2}}}\right).$
Considering the first expectation value from the spectral function, Eq. (16)
of the main text, we find
$\displaystyle\langle c_{k}(t)c_{k}^{{\dagger}}\rangle$
$\displaystyle=\,_{f}\bra{\psi_{\mathrm{GS}}}\otimes\,_{b}\bra{\phi_{\mathrm{GS}}}\overbrace{1}^{\mathclap{e^{-S^{\text{sq}}[\mathcal{T}]}e^{-S^{\text{d}}[\mathcal{T},\mathcal{J}]}e^{S^{\text{d}}[\mathcal{T},\mathcal{J}]}e^{S^{\text{sq}}[\mathcal{T}]}}}e^{iHt}c_{k}e^{-iHt}c_{k}^{{\dagger}}\underbrace{1}_{\mathclap{e^{-S^{\text{sq}}[\mathcal{T}]}e^{-S^{\text{d}}[\mathcal{T},\mathcal{J}]}e^{S^{\text{d}}[\mathcal{T},\mathcal{J}]}e^{S^{\text{sq}}[\mathcal{T}]}}}\ket{\phi_{\mathrm{GS}}}_{b}\otimes\ket{\psi_{\mathrm{GS}}}_{f}$
(55)
$\displaystyle=\,_{f}\bra{\psi_{\mathrm{GS}}}\otimes\bra{0}e^{iH^{\text{D}}t}c_{k}XYe^{-iH^{\text{D}}t}c_{k}^{{\dagger}}X^{{\dagger}}Y^{{\dagger}}\ket{0}\otimes\ket{\psi_{\mathrm{GS}}}_{f}+\mathcal{O}\left(\frac{1}{L^{\frac{3}{2}}}\right)$
$\displaystyle=\bra{\psi_{\mathrm{GS}}}_{f}\otimes\bra{0}c_{k}(t)_{H^{\text{D}}}X(t)_{H^{\text{D}}}Y(t)_{H^{\text{D}}}c_{k}^{{\dagger}}X^{{\dagger}}Y^{{\dagger}}\ket{0}\otimes\ket{\psi_{\mathrm{GS}}}_{f}+\mathcal{O}\left(\frac{1}{L^{\frac{3}{2}}}\right).$
With the subscript $(.)(t)_{H^{\text{D}}}$ we signify that the time dependence
is determined by the diagonal Hamiltonian $H^{\text{D}}$, Eq. (7) of the main
text.
The operators $\mathcal{T}$ and $\mathcal{J}$ appearing in $X$ and $Y$ have no
time dependence since they commute with $H^{\text{D}}$ (and in fact also the
full $H$). The time dependence of the operators $X$ and $Y$ is determined by
that of the bosonic operators
$\displaystyle a(t)_{H^{\text{D}}}$ $\displaystyle=ae^{-i\mathcal{W}t}$ (56)
$\displaystyle a^{{\dagger}}(t)_{H^{\text{D}}}$
$\displaystyle=a^{{\dagger}}e^{i\mathcal{W}t}.$
Evaluating the electronic part of the expectation value will yield
$\mathcal{W}\to\tilde{\omega}$ restoring a simple time dependence with the
dressed cavity frequency $\tilde{\omega}$.
Reconsidering the expectation value Eq. (55) we note that moving the fermionic
operators through $X$ and $Y$ will only yield higher order corrections such
that we can write
$\langle
c_{k}(t)c_{k}^{{\dagger}}\rangle=e^{\Phi(t)}(1-n_{k})\bra{0}X_{\psi_{\text{GS}}}(t)_{H^{\text{D}}_{b}}Y_{\psi_{\text{GS}}}(t)_{H^{\text{D}}_{b}}X^{{\dagger}}_{\psi_{\text{GS}}}Y^{{\dagger}}_{\psi_{\text{GS}}}\ket{0}$
(57)
where $n_{k}=\langle c_{k}^{{\dagger}}c_{k}\rangle$. Here we have evaluated
the time dependence of the fermionic annihilators that yields the time
dependent phase factor $e^{\Phi(t)}$. We find, only keeping the leading order
as before
$c_{k}(t)_{H^{\text{D}}}=c_{k}e^{\mathcal{F}(t)}\hskip 5.69054pt;\hskip
5.69054pt\mathcal{F}(t)=-i\varepsilon_{k}t+i\frac{g^{2}\varepsilon_{k}}{L}\omega_{0}\mathcal{W}^{-1}\left(a^{\dagger}a+\frac{1}{2}\right)t-i\frac{g^{2}\omega_{0}\mathcal{W}^{-2}}{L}v_{k}^{2}t$
(58)
Evaluating the expectation of this yields
$\langle e^{\mathcal{F}(t)}\rangle=e^{\Phi(t)}\hskip 5.69054pt;\hskip
5.69054pt\Phi(t)=-i\varepsilon_{k}t+i\frac{g^{2}\varepsilon_{k}}{2L}\frac{\omega_{0}}{\tilde{\omega}}t-i\Sigma_{k}t$
(59)
with
$\Sigma_{k}=\frac{g^{2}\omega_{0}}{\tilde{\omega}^{2}L}v_{k}^{2}.$ (60)
In Eq. (57) we have already carried out the fermionic part of the expectation
value by performing
$\displaystyle\frac{\mathcal{T}}{L}$
$\displaystyle\rightarrow\frac{\bra{\psi_{\mathrm{GS}}}_{f}\mathcal{T}\ket{\psi_{\mathrm{GS}}}_{f}}{L}=t_{\text{GS}}$
(61) $\displaystyle\frac{\mathcal{J}}{L}$
$\displaystyle\rightarrow\frac{\bra{\psi_{\mathrm{GS}}}_{f}\mathcal{J}\ket{\psi_{\mathrm{GS}}}_{f}}{L}=j_{\text{GS}}$
in the $X^{({\dagger})}$ and $Y^{({\dagger})}$ operator writing them as
$X^{({\dagger})}_{\psi_{\text{GS}}}$ and $Y^{({\dagger})}_{\psi_{\text{GS}}}$.
Since all operators act on the $\ket{0}$ state, contributions come only from
commutators of the operators in the exponentials. Therefore, all contributions
from the $Y$ operator are suppressed by at least
$\exp\left(\mathcal{O}\left(\frac{1}{L^{\frac{3}{2}}}\right)\right)$ [111, 112]
and will thus be neglected. We are left with
$\langle
c_{k}(t)c_{k}^{{\dagger}}\rangle=e^{\Phi(t)}(1-n_{k})\bra{0}X_{\psi_{\text{GS}}}(t)_{H^{\mathrm{D}}_{b}}X^{{\dagger}}_{\psi_{\text{GS}}}\ket{0}+\mathcal{O}\left(\frac{1}{L^{\frac{3}{2}}}\right).$
(62)
The evaluation of the remaining expectation value is a standard textbook
problem [113].
Evaluating the other expectation value in the definition of the spectral
function (Eq. (16) in the main part) yields the same result, just with a
factor $n_{k}$ instead of $1-n_{k}$ up front, and the final expectation value
in Eq. (62) is complex conjugated since the order of the operators is
reversed. This reflects the particle-hole symmetry of the half-filled system,
which is inherited from the bare chain.
Performing the remaining FT we arrive at the final result reported in Eq. (18)
in the main text.
## Supplementary Note 4: Non-equilibrium spectral function from coherent
pumping
Figure 5: Strong pumping limit of the non-equilibrium spectral function. Non-
equilibrium spectral function obtained according to Eq. (22) in analogy to
Fig. 3(b) of the main text. The LMC is kept constant at $g=0.5$ while the
strength of the pump increases from zero to $\Delta N_{\rm phot}^{\rm
pump}=30$ as reported on the right-hand side of the plot. The spectral
function corresponding to the strongest pump is overlaid with the non-
equilibrium spectral function of the classically driven system (Eq. (24)) at
the effective cavity frequency $\tilde{\omega}$ as stated in Eq. (10). In
analogy to Fig. 3(b) of the main text, the structure of the peaks changes from
completely asymmetric to symmetric for increased pumping. In contrast to Fig.
3(b), the size of the side-peaks now increases for stronger pumping.
Additionally, features that were previously small in the TD limit (see Eq.
(18)) now emerge as for example the dynamical localization (shift of central
peak) and the shake-off bands (also a second shake-off band is now visible).
These features are well reproduced within the classical drive. Parameters, if
not specifically mentioned otherwise, are as in Fig. 3(b) but with an
increased size of the bosonic Hilbert space, $N_{\rm max}^{\rm boson}=130$.
In this part we calculate the non-equilibrium spectral function according to
Eq. (22) of the main text in analogy to our analysis in the Quantum to Floquet
crossover subsection under Results in the main text. However, in contrast to
that part, we do not keep $g^{2}\,\Delta N_{\rm phot}^{\rm pump}=\rm const$
while sending $\Delta N_{\rm phot}^{\rm pump}\to\infty$ but set $g=0.5$.
Hence, we here do not perform the classical limit, since the light-matter
hybridization is never lifted, but the limit of strong driving.
The result is shown in Fig. 5 of this supplement. The side-peaks are at a
shifted frequency $\tilde{\omega}$ which reflects the fact that the effective
boson of the system represents a mixture of light and matter degrees of
freedom. In contrast to the classical limit, their position stays
approximately constant and does not reduce to $\omega_{0}$ for stronger
pumping. At the same time, the evolution of completely asymmetric side-peaks
to fully symmetric ones prevails. The strength of the peaks increases
monotonically with stronger pumping while it stayed almost constant
previously.
The last curve, corresponding to the strongest pump, is again compared to the
non-equilibrium spectral function obtained from a classically driven system
according to Eq. (24) of the main text. We set the frequency to
$\tilde{\omega}$ as stated in Eq. (10). The result matches well with that of
the strongest drive. For even larger numbers of photons injected into the
system one will, however, start to see deviations as the higher, non-harmonic
terms in the Hamiltonian become relevant for the dynamics.
As expected, features of the electronic spectral function that were previously
small in the TD limit (see Eq. (18) of the main text) are enhanced through the
driving. The dynamical localization now becomes notable through the shift of
the central peak and a second shake-off band appears. These features are also
well reproduced by the classical drive in this regime.
Source: arXiv:2107.12236 (https://arxiv.org/abs/2107.12236), submitted 2021-07-26 by Christian Eckhardt. Authors: Christian J. Eckhardt, Giacomo Passetti, Moustafa Othman, Christoph Karrasch, Fabio Cavaliere, Michael A. Sentef, Dante M. Kennes. License: CC BY 4.0.
# Deep Transfer Clustering of Radio Signals
Qi Xuan, _Member, IEEE_ , Xiaohui Li, Zhuangzhi Chen, Dongwei Xu, Shilian
Zheng, and Xiaoniu Yang This work was supported in part by the National
Natural Science Foundation of China under Grants 61973273 and 61903334, and by
the Zhejiang Provincial Natural Science Foundation of China under Grants
LR19F030001 and LY21F030016. _(Corresponding authors: Qi Xuan.)_ Q. Xuan, X.
Li, Z. Chen, and D. Xu are with the Institute of Cyberspace Security, College
of Information Engineering, Zhejiang University of Technology, Hangzhou
310023, China (e-mail: [email protected]).S. Zheng is with the Science and
Technology on Communication Information Security Control Laboratory, Jiaxing
314033, China.X. Yang is with the Institute of Cyberspace Security, Zhejiang
University of Technology, Hangzhou 310023, China, and also with the Science
and Technology on Communication Information Security Control Laboratory,
Jiaxing 314033, China.
###### Abstract
Modulation recognition is an important task in radio signal processing. Most
current research focuses on supervised learning. However, in many real
scenarios it is difficult and costly to obtain the labels of signals. In this
letter, we turn to the more challenging problem: can we cluster the modulation
types based only on a large number of unlabeled radio signals? If this problem
can be solved, we can then also recognize modulation types by manually
labeling a very small number of samples. To address this problem, we propose a
deep transfer clustering (DTC) model. DTC naturally integrates feature
learning and deep clustering, and further adopts a transfer learning mechanism
to improve the feature extraction ability of an embedded convolutional neural
network (CNN) model. The experiments validate that our DTC significantly
outperforms a number of baselines, achieving the state-of-the-art performance
in clustering radio signals for modulation recognition.
###### Index Terms:
Signal clustering, deep learning, modulation recognition, transfer learning,
convolutional neural network.
## I Introduction
With the development of radio communication technology, the electromagnetic
environment has become increasingly complex, and the amount of radio signals
has also exploded. As the basis of radio communication, signal modulation
recognition is of particular importance. Recently, many deep learning models
have been applied to signal modulation classification. Most of these works
focus on supervised learning, which relies on a large number of labeled
signals. However, labeling a large number of signals could be difficult and
costly in reality. In order to make better use of available unlabeled signals,
clustering is a promising direction. As an unsupervised learning method,
clustering directly captures the correlation between signals, so as to group
them into multiple clusters without requiring signal labels in advance.
However, it is a real challenge to cluster radio signals for modulation
recognition, since the signal waves of the same modulation type can be quite
different while those of different modulation types may be close to each
other, as the difference between signal waves is largely determined by the
transmitted information. To the best of our knowledge, there are few studies
on modulation clustering of radio signals; existing work uses clustering to
analyze the importance of various features in modulation recognition [1] and
to reconstruct the cluster-center vectors of constellation diagrams [2, 3, 4,
5].
Research on clustering is much more active in other areas, such as computer
vision [6, 7, 8, 9, 10] and time-series analysis [11, 12]. Quite recently, a
number of deep learning models have been proposed for image clustering,
which can be roughly divided into two groups: end-to-end methods and two-step
methods. For the first group, samples are soft-labeled according to the
clustering results to guide the training of deep learning models. For example,
deep embedded clustering (DEC) [13] soft-labeled samples based on the Student
$t$ distribution, joint unsupervised learning (JULE) [14] used $K$-nearest
neighbor (KNN), and deep adaptive clustering (DAC) [15] relied on the
similarity between samples. For the second group, feature learning is
separated from the clustering process. Most of these methods use deep learning
models such as autoencoders to learn features, and then cluster them, e.g.,
deep density-based image clustering (DDC) [16]. In this case, the feature
learning process is not guided by clustering. Since the two processes are
separated, the features learned by the model may not meet the clustering
requirements, which may hurt the performance of the methods.
The above deep clustering methods depend heavily on the training of the
underlying deep learning models and cannot be directly adopted for signal
clustering in modulation recognition due to the challenge described above.
Transfer learning [17, 18, 19, 20], on the other hand, was proposed to solve
the problem of insufficient samples: it can utilize knowledge or patterns
learned from different but related fields or problems. It is therefore natural
to expect that the performance of deep clustering methods could be
significantly enhanced if we use transfer learning to pre-train the deep
learning models on an auxiliary dataset from a related field, and then use
them to cluster the samples in the target dataset.
Figure 1: The overall framework of DTC for signal clustering, including three
stages: data preprocessing, pre-training, and fine-tuning with clustering.
In this letter, we propose deep transfer clustering (DTC) of radio signals for
modulation recognition for the first time, which naturally integrates feature
learning and clustering, and adopts a transfer learning strategy to enhance
the feature extraction ability. In particular, a convolutional neural network
(CNN) model is first pre-trained with the labeled signals of an auxiliary
dataset from the same field as the target dataset; thanks to the true labels
of these signals, effective feature learning is guaranteed in this stage.
After that, iterative cluster training is performed on the target dataset to
fine-tune the CNN model: the pre-trained CNN performs preliminary feature
extraction of the signals for clustering, while the signals whose clustering
results have high confidence are selected and assigned soft labels to further
fine-tune the CNN. This process is carried out iteratively until the
clustering accuracy no longer improves. Since only the signals with high
confidence are used for model training at each iteration, the model gains a
certain degree of robustness to interference and the training efficiency is
improved. Since the clustering process and the feature learning process are
jointly trained, the method can better obtain hidden features suitable for
signal clustering. The main contributions of this letter are summarized as
follows:
1. We propose a deep transfer clustering (DTC) model for radio signals, which
naturally integrates feature learning and deep clustering for the first time
in this area.
2. We adopt a transfer learning mechanism for the supervised pre-training of
the CNN model, which effectively improves the feature extraction ability of
our DTC model for signal clustering.
3. We design a loss function consisting of two parts, a positive loss and a
negative loss, whose balance is adjusted by a hyperparameter $\lambda$.
4. Experimental results validate that our DTC model significantly outperforms
other traditional and deep-learning-based clustering methods on multiple radio
signal datasets, achieving state-of-the-art performance.
The rest of this paper is organized as follows. In Section II, we introduce our DTC
model in detail, including data preprocessing, pre-training, fine-tuning and
clustering. In Section III, we give the experimental results on three public
radio signal datasets, to validate the effectiveness of the DTC model.
Finally, the paper is concluded in Section IV.
## II Deep transfer clustering
In this section, we introduce the details of our method. The overall framework
of DTC is shown in Fig. 1, which includes three stages: data preprocessing,
pre-training, and fine-tuning with clustering.
### II-A Data preprocessing
The target dataset $O$ needs to be clustered into $k$ classes. The auxiliary
dataset $B$ is a labeled signal dataset that is independent of $O$; we use $B$
to pre-train the model before clustering $O$. Since $O$ and $B$ are
independent of each other, the signals from the two sets may be of different
length, so the length of the signals in $B$ is first adjusted to match that of
$O$. When the signals in $B$ are shorter than those in $O$, they are expanded
to the target length by copying: for example, the signal "$abb$" of length
three is expanded to "$abbabbab$" of length eight. When the signals in $B$ are
longer than those in $O$, they are compressed by equal-interval sampling to
preserve the structural characteristics of the signals. The number of
categories in the two datasets may also differ. When the number of sample
categories in $B$ is larger than $k$, we randomly select samples from $k$
categories; when it is smaller, all samples are used, although the effect of
the pre-training process will be slightly reduced.
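The length adjustment can be sketched as follows with NumPy; `match_length` is a hypothetical helper name, not from the letter:

```python
import numpy as np

def match_length(sig, target_len):
    """Adjust a signal from the auxiliary set B to the length used in O:
    tile-and-truncate when too short, equal-interval subsample when too long."""
    sig = np.asarray(sig)
    if len(sig) < target_len:
        # np.resize repeats the array cyclically, e.g. "abb" -> "abbabbab"
        return np.resize(sig, target_len)
    # equal-interval sampling preserves the overall wave structure
    idx = np.linspace(0, len(sig) - 1, target_len).round().astype(int)
    return sig[idx]

short = np.array(list("abb"))
assert "".join(match_length(short, 8)) == "abbabbab"
assert len(match_length(np.arange(10), 4)) == 4
```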
### II-B Pre-training
In order to improve the feature extraction ability of the convolutional neural
network (CNN), the signals in $B$, including their labels, are then used to
pre-train the model.
We randomly select $m$ signals from dataset $B$ as the batch input of the CNN,
and the output is an $m\times{k}$ signal feature matrix
$F_{B}=\\{f_{i}\\}^{m}_{i=1}$, where $f_{i}$ is the feature vector of signal
$i$ in $B$, with feature dimension equal to $k$. The cosine similarity between
signals $i$ and $j$ is defined as:
$\displaystyle sim(x_{i},x_{j})=\frac{f_{i}\cdot
f_{j}}{\left\|f_{i}\right\|\cdot\left\|f_{j}\right\|},$ (1)
which is simplified to
$\displaystyle sim(x_{i},x_{j})=f_{i}\cdot f_{j},$ (2)
when we set $\left\|f_{i}\right\|=1$ for $i=1,2,\cdots,m$. So the similarity
matrix of these $m$ signals is
$\displaystyle S_{B}=F_{B}\cdot F_{B}^{\mathrm{T}}.$ (3)
The labels of these signals are converted into one-hot vectors of length $k$,
which are grouped into an $m\times{k}$ label matrix
$Y_{B}=\\{y_{i}\\}^{m}_{i=1}$. Then the true binary judgment matrix of the
signals is defined as:
$\displaystyle P_{B}=Y_{B}\cdot Y_{B}^{\mathrm{T}},$ (4)
which is a Boolean matrix, with its element $P_{B}(i,j)=1$ if signals $i$ and
$j$ belong to the same category, and $P_{B}(i,j)=0$ otherwise. Based on
$P_{B}$, we define a positive matrix $P_{B}^{p}=P_{B}$ and a negative matrix
$P_{B}^{n}=1-P_{B}$. Then, the loss function in the pre-training process is
defined as:
$\displaystyle\mathcal{L}_{pre}=-P_{B}^{p}\cdot\log{S_{B}}-\lambda
P_{B}^{n}\cdot\log{(1-S_{B})},$ (5)
where $\lambda$, as a hyperparameter, is used to adjust the proportion of
positive losses and negative losses.
The pre-training process stops when the loss value on the validation set of
$B$ no longer drops, at which point we consider the CNN to have acquired a
good feature extraction ability.
### II-C Fine-tuning and clustering
Now, the target dataset $O$ is also divided into batches as the input of the
pre-trained CNN, with the batch size set to $m$, as shown on the right of
Fig. 1. For each batch, we obtain the output feature matrix $F_{O}$ and from
it the similarity matrix
$\displaystyle S_{O}=F_{O}\cdot F_{O}^{\mathrm{T}}.$ (6)
The elements of $S_{O}$ are then compared with the upper threshold $u$ and
lower threshold $l$, respectively, to determine whether the corresponding
signals belong to the same cluster or not.
We construct the positive matrix $P_{O}^{p}$ and the negative matrix
$P_{O}^{n}$ with their elements defined as
$\displaystyle P_{O}^{p}(i,j)=\begin{cases}1,&\text{if }S_{O}(i,j)\geq u\\ 0,&\text{if }S_{O}(i,j)<u\end{cases}\qquad i,j=1,\dots,m$ (9)
$\displaystyle P_{O}^{n}(i,j)=\begin{cases}1,&\text{if }S_{O}(i,j)\leq l\\ 0,&\text{if }S_{O}(i,j)>l\end{cases}\qquad i,j=1,\dots,m$ (12)
Then, it is considered that signals $i$ and $j$ are from the same category if
$P_{O}^{p}(i,j)=1$, while they belong to different categories if
$P_{O}^{n}(i,j)=1$. In the process of fine-tuning, $P_{O}^{p}$ and $P_{O}^{n}$
are used as the soft labels to replace the true labels of the signals, and the
loss function in the cluster training stage is defined as:
$\displaystyle\mathcal{L}_{clu}=-P_{O}^{p}\cdot\log{S_{O}}-\lambda
P_{O}^{n}\cdot\log{(1-S_{O})}.$ (13)
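The thresholding of Eqs. (9) and (12) and the fine-tuning loss of Eq. (13) can be sketched analogously (again our own illustration; the reduction and `eps` guard are assumptions):

```python
import numpy as np

def soft_labels(S_O, u=0.95, l=0.7):
    """Sketch of Eqs. (9) and (12): threshold the similarity matrix S_O into
    positive (same-cluster) and negative (different-cluster) soft labels;
    pairs with l < S_O(i, j) < u are left unlabeled."""
    P_pos = (S_O >= u).astype(float)
    P_neg = (S_O <= l).astype(float)
    return P_pos, P_neg

def clustering_loss(S_O, u=0.95, l=0.7, lam=100.0, eps=1e-8):
    """Sketch of Eq. (13), with the soft labels replacing the true labels."""
    P_pos, P_neg = soft_labels(S_O, u, l)
    loss = -(P_pos * np.log(S_O + eps) + lam * P_neg * np.log(1.0 - S_O + eps))
    return loss.mean()
```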
During the training process, the feature vectors of signals from different
categories tend to become perpendicular to each other. Note that the dimension
of the feature vector is set to $k$, the same as the number of categories; the
feature vector is normalized, and each element is limited between $0$ and $1$.
Therefore, as training progresses, the output features tend toward one-hot
vectors. The output features actually represent the probability distribution
of the signals over the categories. In other words, the index of the maximum
value of the feature vector can be directly used as the label of the signal.
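In code, this label assignment is a single argmax over the feature dimension (a trivial sketch of the step described above):

```python
import numpy as np

def assign_labels(features):
    """The index of the largest feature activation serves as the cluster label."""
    return np.argmax(features, axis=1)
```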
In particular, our CNN consists of four convolutional layers and two fully
connected (dense) layers. Each layer uses the rectified linear unit (ReLU)
activation function. To prevent over-fitting, batch normalization (BN) is
applied before the ReLU layers. The BN layers also rescale the data toward a
normal distribution, which helps the model generalize when the input
distribution differs from batch to batch. To remove redundant information,
max-pooling layers are added after the ReLU layers of the second and third
convolutional layers, and their outputs are likewise adjusted by BN layers.
The CNN architecture is illustrated in Fig. 2. The model contains 32, 128,
128, and 32 filters in convolutional layers 1 to 4, respectively, and the two
dense layers contain 64 and $k$ neurons, respectively. At the end of the model
is the softmax function, which acts as a classifier and outputs the
probability distribution. The model is trained with the Adam optimizer, and
the loss functions at the two stages are defined by Eq. (5) and Eq. (13),
respectively. All experiments are run on an NVIDIA Tesla V100 GPU with the
TensorFlow deep learning framework.
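The architecture described above can be sketched in Keras roughly as follows; the input shape, kernel sizes, and pool sizes are our assumptions, since the letter does not specify them:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_dtc_cnn(input_len=128, k=10):
    """Sketch of the described CNN: four convolutional layers with
    32/128/128/32 filters, BN before every ReLU, max-pooling (followed by BN)
    after the second and third convolutional layers, then dense layers with
    64 and k neurons and a softmax classifier. Kernel size, pool size, and
    the (length, I/Q) input shape are assumptions."""
    inputs = tf.keras.Input(shape=(input_len, 2))
    x = inputs
    for i, filters in enumerate([32, 128, 128, 32]):
        x = layers.Conv1D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
        if i in (1, 2):  # after the second and third convolutional layers
            x = layers.MaxPooling1D(2)(x)
            x = layers.BatchNormalization()(x)
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation="relu")(x)
    x = layers.Dense(k)(x)
    outputs = layers.Softmax()(x)
    return tf.keras.Model(inputs, outputs)
```

The model would then be trained with the Adam optimizer, as stated in the text.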
Figure 2: The structure of the CNN.

TABLE I: The modulation types of the three datasets

Datasets | Modulation Types
---|---
RML2016.10A | WBFM, QPSK, 64QAM, 16QAM, 4PAM, GFSK, CPFSK, BPSK, 8PSK, AM-SSB
RML2016.04C | WBFM, QPSK, 64QAM, 16QAM, 4PAM, GFSK, CPFSK, BPSK, 8PSK, AM-SSB
RML2018.01A | 32PSK, 16APSK, 32QAM, FM, GMSK, 32APSK, OQPSK, 8ASK, 16PSK, 64APSK
TABLE II: The clustering results of cross pre-training on the three datasets (NMI / ARI / ACC)

Auxiliary dataset | Target: RML2016.10A | Target: RML2016.04C | Target: RML2018.01A
---|---|---|---
No pre-train | 0.3309 / 0.2372 / 0.3699 | 0.4336 / 0.2662 / 0.3905 | 0.3898 / 0.2205 / 0.3082
RML2016.10A | —— | 0.8259 / 0.6888 / 0.7137 | 0.6576 / 0.4516 / 0.4831
RML2016.04C | 0.8547 / 0.7566 / 0.7444 | —— | 0.6674 / 0.4587 / 0.4321
RML2018.01A | 0.5441 / 0.3716 / 0.4980 | 0.6768 / 0.5213 / 0.6229 | ——
TABLE III: The clustering results of various methods on the three datasets (NMI / ARI / ACC)

Method | RML2016.10A | RML2016.04C | RML2018.01A
---|---|---|---
K-means | 0.1345 / 0.0585 / 0.1946 | 0.2674 / 0.1494 / 0.3186 | 0.3573 / 0.0933 / 0.1355
DEC | 0.1150 / 0.0626 / 0.2160 | 0.3034 / 0.2022 / 0.3865 | 0.2741 / 0.0887 / 0.1758
DAC | 0.3081 / 0.2345 / 0.3616 | 0.4707 / 0.2880 / 0.3984 | 0.3768 / 0.2174 / 0.3067
DTC | 0.8547 / 0.7566 / 0.7444 | 0.8259 / 0.6888 / 0.7137 | 0.6576 / 0.4516 / 0.4831
## III Experiments
### III-A Datasets
The experiments are conducted on three publicly available datasets [21],
including RML2016.10A, RML2016.04C, and RML2018.01A. In our experiments, only
10 categories of signals in each dataset are used. The selected modulation
types in RML2016.10A and RML2016.04C are the same, while those in RML2018.01A
are entirely different, as presented in Table I.
### III-B Experimental settings
* •
Baselines: we compare the proposed DTC with several existing clustering
methods, including K-means, DEC, and DAC. The code for DEC and DAC used in the
experiments is downloaded from GitHub, with the parameters set as suggested
and the signals reshaped as required.
* •
Evaluation metrics: we use three popular metrics, including adjusted rand
index (ARI), normalized mutual information (NMI), and clustering accuracy
(ACC), with their values all in [0, 1] and higher scores indicating better
clustering performance.
* •
Hyperparameters: we set $\lambda=0.1$ for pre-training and $\lambda=100$ for
fine-tuning and clustering, and set the upper threshold $u=0.95$, and the
lower threshold $l=0.7$.
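The metrics listed above are standard; a sketch of how they can be computed with scikit-learn and SciPy (not the authors' evaluation code) is given below. ACC requires an optimal matching between predicted clusters and true labels, commonly obtained with the Hungarian algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """ACC: best one-to-one mapping of predicted clusters onto true labels,
    found with the Hungarian algorithm on the contingency matrix."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    k = int(max(y_true.max(), y_pred.max())) + 1
    cost = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[t, p] += 1
    rows, cols = linear_sum_assignment(-cost)  # negate to maximize matches
    return cost[rows, cols].sum() / len(y_true)

def evaluate(y_true, y_pred):
    return {"NMI": normalized_mutual_info_score(y_true, y_pred),
            "ARI": adjusted_rand_score(y_true, y_pred),
            "ACC": clustering_accuracy(y_true, y_pred)}
```

All three scores are invariant under a permutation of the predicted cluster indices, which is why the explicit matching step is needed only for ACC.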
### III-C The experimental results of DTC
The experimental results of DTC are shown in Table II, where we can see that
DTC can achieve reasonable results even without pre-training, i.e., both NMI
and ACC are above 0.3, while ARI is above 0.2 for all the three datasets.
Meanwhile, the performance of DTC is indeed significantly boosted when the CNN
model is pre-trained on an auxiliary dataset: the clustering results improve
significantly on every evaluation metric, regardless of which auxiliary
dataset is used to pre-train the model for which target dataset. Taking the
RML2016.10A dataset as an example, without pre-training, NMI, ARI, and ACC are
0.3309, 0.2372, and 0.3699, respectively. However, after pre-training on the
dataset RML2016.04C, these three metrics greatly increase to 0.8547, 0.7566,
and 0.7444, respectively. Note that the two datasets RML2016.10A and
RML2016.04C are quite similar to each other, i.e., they share exactly the same
types of modulation and the same length of signals, while the dataset
RML2018.01A is relatively different on both types of modulation and length of
signals. As expected, the DTC for RML2016.10A benefits most from the CNN model
pretrained on RML2016.04C, and vice versa. More interestingly, although
RML2018.01A is quite different, the CNN model pretrained on it can still help
to extract important features of the signals in the other two datasets and
thereby improve the performance of DTC, which indicates the generalization
ability of our method.
### III-D Comparison with other clustering methods
Now, we compare our DTC with other clustering methods, including K-means, DEC,
and DAC. K-means is a very popular clustering method in many areas; DEC and
DAC are two typical deep clustering methods with outstanding performance in
computer vision. Note that we also tried several of the latest deep clustering
methods, such as semantic pseudo-labeling for image clustering (SPICE) [22],
robust learning for unsupervised clustering (RUC) [23], and semantic
clustering by adopting nearest neighbors (SCAN) [24], but their results are
worse than those of the three baselines we chose. The comparison results are
shown in Table III, where we can see that DTC significantly outperforms all
the other clustering methods, achieving state-of-the-art performance. In
particular, the clustering accuracy of DTC reaches 0.7444 on RML2016.10A,
which is 105.9% higher than that of the second-best method, DAC. On
RML2016.04C, the clustering accuracy of DTC is 0.7137, which is 79.1% higher
than that of DAC. On RML2018.01A, these numbers are 0.4831 and 57.5%. These
results suggest that DTC is a feasible method for clustering radio signals for
modulation recognition, a challenging task in wireless communication.
## IV Conclusion
Automatic modulation recognition is crucial for many applications in
electromagnetic space, especially as 5G/6G wireless systems emerge. However,
it is often difficult to label a large number of radio signals in real
scenarios, making it hard to use supervised learning to recognize modulation
types. Therefore, in this letter, we focus on clustering radio signals for
modulation recognition. Since the signal waveforms are largely determined by
the transmitted information, observed signals of the same modulation type can
be quite different, while those of different modulation types may be close to
each other. This makes clustering radio signals a real challenge in practice.
With the help of the strong feature-extraction ability of convolutional neural
networks (CNNs), in this letter we propose a novel end-to-end deep transfer
clustering (DTC) model for radio signals, which naturally integrates deep
learning and transfer learning into a single framework to improve clustering
performance. The experimental results show that, compared with a number of
baselines, our method achieves significantly better performance on three
public radio signal datasets. In the future, we will apply our DTC model to a
wider variety of signal datasets to validate its generalization ability more
comprehensively.
## References
* [1] N. Daldal, K. Polat, and Y. Guo, “Classification of multi-carrier digital modulation signals using ncm clustering based feature-weighting method,” Computers in Industry, vol. 109, pp. 45–58, 2019.
* [2] G. Jajoo, Y. Kumar, S. K. Yadav, B. Adhikari, and A. Kumar, “Blind signal modulation recognition through clustering analysis of constellation signature,” Expert Systems with Applications, vol. 90, pp. 13–22, 2017.
* [3] F. Yang, L. Yang, D. Wang, P. Qi, and H. Wang, “Method of modulation recognition based on combination algorithm of k-means clustering and grading training svm,” China Communications, vol. 15, no. 12, pp. 55–63, 2018.
* [4] J. Tian, Y. Pei, Y.-D. Huang, and Y.-C. Liang, “Modulation-constrained clustering approach to blind modulation classification for mimo systems,” IEEE Transactions on Cognitive Communications and Networking, vol. 4, no. 4, pp. 894–907, 2018.
* [5] Z. Zhao, A. Yang, P. Guo, and Q. Tan, “A density clustering algorithm for simultaneous modulation format identification and osnr estimation,” Applied Sciences, vol. 10, no. 3, p. 1095, 2020.
* [6] F. Tian, B. Gao, Q. Cui, E. Chen, and T.-Y. Liu, “Learning deep representations for graph clustering,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28, Jun. 2014.
* [7] K. Tu, P. Cui, X. Wang, P. S. Yu, and W. Zhu, “Deep recursive network embedding with regular equivalence,” in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’18, (New York, NY, USA), p. 2357–2366, Association for Computing Machinery, 2018.
* [8] W. Yu, C. Zheng, W. Cheng, C. C. Aggarwal, D. Song, B. Zong, H. Chen, and W. Wang, “Learning deep network representations with adversarially regularized autoencoders,” in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’18, (New York, NY, USA), p. 2663–2671, Association for Computing Machinery, 2018.
* [9] Y. Seo, M. Defferrard, P. Vandergheynst, and X. Bresson, “Structured sequence modeling with graph convolutional recurrent networks,” in Neural Information Processing (L. Cheng, A. C. S. Leung, and S. Ozawa, eds.), (Cham), pp. 362–373, Springer International Publishing, 2018.
* [10] A. Sperduti and A. Starita, “Supervised neural networks for the classification of structures,” IEEE Transactions on Neural Networks, vol. 8, no. 3, pp. 714–735, 1997.
* [11] R. McConville, R. Santos-Rodriguez, R. J. Piechocki, and I. Craddock, “N2d: (not too) deep clustering via clustering the local manifold of an autoencoded embedding,” 2019.
* [12] S. M. Mousavi, W. Zhu, W. Ellsworth, and G. Beroza, “Unsupervised clustering of seismic signals using deep convolutional autoencoders,” IEEE Geoscience and Remote Sensing Letters, vol. 16, no. 11, pp. 1693–1697, 2019.
* [13] J. Xie, R. Girshick, and A. Farhadi, “Unsupervised deep embedding for clustering analysis,” in Proceedings of The 33rd International Conference on Machine Learning (M. F. Balcan and K. Q. Weinberger, eds.), vol. 48 of Proceedings of Machine Learning Research, (New York, New York, USA), pp. 478–487, PMLR, 20–22 Jun 2016.
* [14] J. Yang, D. Parikh, and D. Batra, “Joint unsupervised learning of deep representations and image clusters,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
* [15] J. Chang, L. Wang, G. Meng, S. Xiang, and C. Pan, “Deep adaptive image clustering,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), Oct 2017.
* [16] Y. Ren, N. Wang, M. Li, and Z. Xu, “Deep density-based image clustering,” Knowledge-Based Systems, vol. 197, p. 105841, 2020.
* [17] Y. Yu, “Boosting for transfer learning,” in ICML, 2007.
* [18] R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng, “Self-taught learning: transfer learning from unlabeled data,” in Proceedings of the 24th international conference on Machine learning, pp. 759–766, 2007.
* [19] T. Evgeniou and M. Pontil, “Regularized multi–task learning,” in Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 109–117, 2004.
* [20] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, “How transferable are features in deep neural networks?,” arXiv preprint arXiv:1411.1792, 2014.
* [21] Z. Chen, H. Cui, J. Xiang, K. Qiu, L. Huang, S. Zheng, S. Chen, Q. Xuan, and X. Yang, “Signet: An advanced deep learning framework for radio signal classification,” arXiv preprint arXiv:2011.03525, 2020.
* [22] C. Niu and G. Wang, “Spice: Semantic pseudo-labeling for image clustering,” arXiv preprint arXiv:2103.09382, 2021.
* [23] S. Park, S. Han, S. Kim, D. Kim, S. Park, S. Hong, and M. Cha, “Improving unsupervised image clustering with robust learning,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 12278–12287, June 2021.
* [24] W. Van Gansbeke, S. Vandenhende, S. Georgoulis, M. Proesmans, and L. Van Gool, “Scan: Learning to classify images without labels,” in Computer Vision – ECCV 2020 (A. Vedaldi, H. Bischof, T. Brox, and J.-M. Frahm, eds.), (Cham), pp. 268–285, Springer International Publishing, 2020.
|
arxiv-papers
| 2021-07-26T14:35:21 |
2024-09-04T03:07:18.872762
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Qi Xuan, Xiaohui Li, Zhuangzhi Chen, Dongwei Xu, Shilian Zheng, and\n Xiaoniu Yang",
"submitter": "Xiaohui Li",
"url": "https://arxiv.org/abs/2107.12237"
}
|
2107.12240
|
# A prismatic approach to $(\varphi,\hat{G})$-modules and $F$-crystals
Heng Du Department of Mathematics, Purdue University [email protected] and
Tong Liu Department of Mathematics, Purdue University
[email protected]
###### Abstract.
We give a new construction of $(\varphi,\hat{G})$-modules using the theory of
prisms developed by Bhatt and Scholze. As an application, we give a new proof
of the equivalence between the category of prismatic $F$-crystals in finite
locally free $\mathcal{O}_{{\mathbbl{\Delta}}}$-modules over
$(\mathcal{O}_{K})_{{\mathbbl{\Delta}}}$ and the category of lattices in
crystalline representations of $G_{K}$, where $K$ is a complete discretely
valued field of mixed characteristic with perfect residue field. We also
generalize this result to semi-stable representations using the absolute
logarithmic prismatic site defined by Koshikawa.
###### Contents
1. 1 Introduction
2. 2 Ring Structures on certain prismatic envelope
1. 2.1 Construction of $A^{(2)}$
2. 2.2 The ring $A^{(2)}_{\max}$
3. 2.3 The ring $A^{(2)}_{\mathop{\rm st}\nolimits}$
4. 2.4 Embedding $A^{(2)}$ and $A^{(2)}_{\mathop{\rm st}\nolimits}$ to $A_{\mathrm{inf}}$
3. 3 Application to semi-stable Galois representations
1. 3.1 Kisin module attached to semi-stable representation
2. 3.2 Descent of the $G_{K}$-action
3. 3.3 Relation to $(\varphi,\hat{G})$-modules
4. 4 Crystalline representations and prismatic $F$-crystals
1. 4.1 Prismatic $F$-crystals in finite projective modules
2. 4.2 $(\varphi,\tau)$-modules and prismatic $F$-crystals
3. 4.3 Proofs of Proposition 3.2.2 and Theorem 4.1.10
5. 5 Logarithmic prismatic $F$-crystals and semi-stable representations
6. 6 Some discussions on base rings
## 1\. Introduction
Let $K$ be a complete discretely valued field of mixed characteristic with
perfect residue field $k$. Fix a separable closure $\overline{K}$ of $K$ and
let $G_{K}$ be the absolute Galois group of $K$. The study of stable lattices
in crystalline representations of $G_{K}$ plays an important role in number
theory. For example, in many modularity lifting results, one wants to
understand liftings of mod $p$ representations of the Galois group of a number
field $F$ to Galois representations on $\mathbb Z_{p}$-lattices with nice
properties when restricted to the Galois groups of $F_{v}$ for all places $v$
of $F$; a reasonable property at places over $p$ is that the representation of
the Galois group of the local field is crystalline. There are various theories
characterizing $G_{K}$-stable lattices in crystalline representations, for
example, the theory of strongly divisible lattices of Breuil (cf. [Bre02]),
Wach modules (cf. [Wac96] and [Ber04]), Kisin modules (cf. [Kis06]),
Kisin-Ren’s theory (cf. [KR09]) and the theory of
$(\varphi,\widehat{G})$-modules (cf. [Liu10]). These theories state that one
can describe lattices in crystalline representations using certain
linear-algebraic data over certain commutative rings $A$.
In recent work [BS21], Bhatt and Scholze give a different characterization of
the category of lattices in crystalline representations. To explain their
result, let $\mathcal{O}_{K}$ be the ring of integers in $K$, and consider the
absolute prismatic site $(\mathcal{O}_{K})_{{{\mathbbl{\Delta}}}}$, which is
defined as the opposite category of all bounded prisms over $\mathcal{O}_{K}$,
equipped with the faithfully flat topology. Let
$\mathcal{O}_{{\mathbbl{\Delta}}}$ be the structure sheaf over
$(\mathcal{O}_{K})_{{{\mathbbl{\Delta}}}}$ and let
$\mathcal{I}_{{{\mathbbl{\Delta}}}}\subset\mathcal{O}_{{\mathbbl{\Delta}}}$ be
the ideal sheaf of the Hodge-Tate divisor; then
$\mathcal{O}_{{\mathbbl{\Delta}}}$ carries a $\varphi$-action coming from the
$\delta$-structures. A prismatic $F$-crystal in finite locally free
$\mathcal{O}_{{\mathbbl{\Delta}}}$-modules over
$(\mathcal{O}_{K})_{{{\mathbbl{\Delta}}}}$ is a crystal
$\mathfrak{M}_{{\mathbbl{\Delta}}}$ over
$(\mathcal{O}_{K})_{{{\mathbbl{\Delta}}}}$ in finite locally free
$\mathcal{O}_{{\mathbbl{\Delta}}}$-modules together with an isomorphism
$(\varphi^{\ast}\mathfrak{M}_{{\mathbbl{\Delta}}})[1/\mathcal{I}_{{{\mathbbl{\Delta}}}}]\simeq\mathfrak{M}_{{\mathbbl{\Delta}}}[1/\mathcal{I}_{{{\mathbbl{\Delta}}}}]$.
The main result of [BS21] is the following:
###### Theorem 1.0.1.
([BS21, Theorem 1.2] and Theorem 4.1.10) There is an equivalence between the
category of prismatic $F$-crystals in finite locally free
$\mathcal{O}_{{\mathbbl{\Delta}}}$-modules over
$(\mathcal{O}_{K})_{{{\mathbbl{\Delta}}}}$ and the category of Galois-stable
lattices in crystalline representations of $G_{K}$.
To relate the result of Bhatt-Scholze to previous characterizations of
lattices in crystalline representations via linear-algebraic data, one should
first realize the base rings $A$ used in those theories as certain prisms
$(A,I)$ over $\mathcal{O}_{K}$. One then expects that evaluating the prismatic
$F$-crystal on $(A,I)$ recovers the corresponding theory. For example, Kisin
[Kis06] uses the base ring $A=\mathfrak{S}:=W(k)[\\![u]\\!]$ with
$\delta(u)=0$; if one fixes a uniformizer $\varpi$ of $\mathcal{O}_{K}$ which
is a zero of an Eisenstein polynomial $E\in W(k)[u]$, then it is well known
that $(A,(E))$ is the so-called Breuil-Kisin prism, which lies in
$(\mathcal{O}_{K})_{{{\mathbbl{\Delta}}}}$. Kisin attaches to any lattice $T$
in a crystalline representation of $G_{K}$ a finite free $A$-module
$\mathfrak{M}$ together with an isomorphism
$(\varphi^{\ast}\mathfrak{M})[1/E]\simeq\mathfrak{M}[1/E]$. Now, if
$\mathfrak{M}_{{{\mathbbl{\Delta}}}}$ is the prismatic $F$-crystal attached to
$T$ under Theorem 1.0.1, then Bhatt-Scholze show that the evaluation of
$\mathfrak{M}_{{\mathbbl{\Delta}}}$ on $(A,(E))$ recovers Kisin’s theory (cf.
Theorem 1.3 of $loc.cit.$).
The first question answered in this paper is whether and how one can recover
the theory of $(\varphi,\hat{G})$-modules from the prismatic $F$-crystal
characterization of Bhatt-Scholze. The category of
$(\varphi,\hat{G})$-modules, roughly speaking, consists of pairs
$((\mathfrak{M},\varphi_{\mathfrak{M}}),\hat{G})$, where
$(\mathfrak{M},\varphi_{\mathfrak{M}})$ is a Kisin module and $\hat{G}$ is a
$G_{K}$-action on
$\mathfrak{M}\otimes_{\mathfrak{S},\varphi}\widehat{\mathcal{R}}$ that
commutes with $\varphi_{\mathfrak{M}}$ and satisfies some additional
properties. Here $\widehat{\mathcal{R}}$ is a subring of $A_{\mathrm{inf}}$
that is stable under $\varphi$ and $G_{K}$, where
$A_{\mathrm{inf}}=W(\mathcal{O}_{\overline{K}}^{\flat})$ is the ring
introduced by Fontaine, which admits a surjection
$\theta:A_{\mathrm{inf}}:=W(\mathcal{O}_{\overline{K}}^{\flat})\to\widehat{\mathcal{O}_{\overline{K}}}$.
However, the period ring $\widehat{\mathcal{R}}$ introduced by Liu is not
known to be $p$-adically complete, and it is even harder to determine whether
it arises as a prism. So in order to relate the theory of
$(\varphi,\hat{G})$-modules to the category of prismatic $F$-crystals of
Bhatt-Scholze, we develop a theory of prismatic $(\varphi,\hat{G})$-modules,
in which the ring $\widehat{\mathcal{R}}$ is replaced by
$A^{(2)}_{\mathop{\rm st}\nolimits}$, a subring of $A_{\mathrm{inf}}$
constructed as a certain prismatic envelope in §2.3.
The first result of this paper concerns the theory of prismatic
$(\varphi,\hat{G})$-modules. We show that, as in the classical
$(\varphi,\hat{G})$-module theory, there is an equivalence between the
category of prismatic $(\varphi,\hat{G})$-modules and lattices in semi-stable
representations of $G_{K}$. Moreover, $(A^{(2)}_{\mathop{\rm
st}\nolimits},(E))$ is indeed a prism in
$(\mathcal{O}_{K})_{{{\mathbbl{\Delta}}}}$; it admits a map
$(A,(E))\to(A^{(2)}_{\mathop{\rm st}\nolimits},(E))$ of prisms and carries an
action of $G_{K}$. For a $G_{K}$-stable lattice $T$ in a crystalline
representation, if $\mathfrak{M}_{{{\mathbbl{\Delta}}}}$ is the prismatic
$F$-crystal attached to $T$, then evaluating
$\mathfrak{M}_{{{\mathbbl{\Delta}}}}$ on the morphism
$(A,(E))\to(A^{(2)}_{\mathop{\rm st}\nolimits},(E))$ recovers the prismatic
$(\varphi,\hat{G})$-module attached to $T$. We also show that the map
$A^{(2)}_{\mathop{\rm st}\nolimits}\to
A_{\mathrm{inf}}\xrightarrow{\varphi}A_{\mathrm{inf}}$ factors through
$\widehat{\mathcal{R}}$, so the theory of prismatic
$(\varphi,\hat{G})$-modules recovers the classical theory. The ring
$A^{(2)}_{\mathop{\rm st}\nolimits}$ is simpler than $\widehat{\mathcal{R}}$
in many ways: although it is still very complicated and non-noetherian, it is
described more explicitly and is $p$-adically complete. In particular, our new
theory can be used to fix the gap in [Liu07] indicated by [Gao21, Appendix B].
The second goal of this paper is to provide a new approach to the equivalence
between the category of prismatic $F$-crystals and the category of lattices in
crystalline representations established by Bhatt and Scholze in Theorem 1.0.1.
That is, using the known equivalence between lattices in semi-stable
representations and prismatic $(\varphi,\hat{G})$-modules, we construct a
functor from the category of prismatic $(\varphi,\hat{G})$-modules
corresponding to crystalline representations to the category of prismatic
$F$-crystals, and show that this functor is an equivalence.
To be more precise, let $T$ be a $G_{K}$-stable lattice in a crystalline
representation with positive Hodge-Tate weights, let $(A,(E))$ be the Breuil-
Kisin prism, and let $(A^{(2)},(E))$ (resp. $(A^{(3)},(E))$) be the
self-product (resp. triple self-product) of $(A,(E))$ in
$(\mathcal{O}_{K})_{{\mathbbl{\Delta}}}$. Then evaluating prismatic
$F$-crystals on the diagram
$(A,(E))\xrightarrow{i_{1}}(A^{(2)},(E))\xleftarrow{i_{2}}(A,(E))$ induces an
equivalence between the category of prismatic $F$-crystals and that of Kisin
modules with descent data, that is, pairs
$((\mathfrak{M},\varphi_{\mathfrak{M}}),f)$ where
$(\mathfrak{M},\varphi_{\mathfrak{M}})$ is a Kisin module and
$f:\mathfrak{M}\otimes_{\mathfrak{S},i_{1}}A^{(2)}\simeq\mathfrak{M}\otimes_{\mathfrak{S},i_{2}}A^{(2)}$
is an isomorphism of $A^{(2)}$-modules that is compatible with $\varphi$ and
satisfies the cocycle condition over $A^{(3)}$. Using this, to establish an
equivalence between prismatic $(\varphi,\hat{G})$-modules corresponding to
crystalline representations and prismatic $F$-crystals, it remains to find a
correspondence between the $\hat{G}$-action and the descent isomorphism $f$.
We will show that the descent isomorphism can be obtained by evaluating the
$G_{K}$-action of the $(\varphi,\widehat{G})$-module at a specific element.
More precisely, fix a Kummer tower
$K_{\infty}=\bigcup_{n=1}^{\infty}K(\varpi_{n})$ as in the theory of Kisin,
where $\\{\varpi_{n}\\}_{n}$ is a compatible system of $p^{n}$-th roots of
$\varpi_{0}=\varpi$, and let $L$ be the normalization of $K_{\infty}$ inside
$\overline{K}$. Choose $\tau\in\hat{G}:=\mathop{\rm Gal}\nolimits(L/K)$
satisfying $\tau(\varpi_{n})=\zeta_{p^{n}}\varpi_{n}$, where
$\\{\zeta_{p^{n}}\\}$ is a compatible system of primitive $p^{n}$-th roots of
$1$. Then our slogan is that the descent isomorphism corresponds to the
$\tilde{\tau}$-action on the Kisin module $\mathfrak{M}$ inside
$T^{\vee}\otimes A_{\mathrm{inf}}$, where $\tilde{\tau}\in G_{K}$ is any
lifting of $\tau$ under the quotient map $G_{K}\to\widehat{G}$.
To sketch our idea: first, the maps $u\mapsto[{\varpi}^{\flat}]$ and
$u\mapsto[{\tau}({\varpi}^{\flat})]$ define two morphisms from $(A,(E))$ to
$(A_{\mathrm{inf}},\mathop{\rm Ker}\nolimits\theta)$. By the universal
property of $(A^{(2)},(E))$, these two maps induce a morphism
$(A^{(2)},(E))\to(A_{\mathrm{inf}},\mathop{\rm Ker}\nolimits\theta)$. We show
that this map is injective and that the embedding factors through
$A^{(2)}_{\mathop{\rm st}\nolimits}$, which is the base ring used in our
prismatic $(\varphi,\hat{G})$-module theory. That is, we have a chain of
subrings $A\subset A^{(2)}\subset A^{(2)}_{\mathop{\rm st}\nolimits}$ of
$A_{\mathrm{inf}}$ such that $\tilde{\tau}(A)$ is also contained in
$A^{(2)}$. We show that a prismatic $(\varphi,\hat{G})$-module corresponds to
a crystalline representation if and only if the coefficients of the
$\tilde{\tau}$-action on $\mathfrak{M}$ in $T^{\vee}\otimes A_{\mathrm{inf}}$
lie inside $A^{(2)}$. Once this is proved, the $\tilde{\tau}$-action induces
an isomorphism:
$f_{\tau}:\mathfrak{M}\otimes_{\mathfrak{S},\tau}A^{(2)}\simeq\mathfrak{M}\otimes_{\mathfrak{S}}A^{(2)}.$
We will see that $f_{\tau}$ gives the descent isomorphism. As a result, we
obtain a new proof of Theorem 1.0.1.
An advantage of our approach is that it generalizes easily to the case of
semi-stable representations. It turns out that the prism
$(A^{(2)}_{\mathop{\rm st}\nolimits},(E))$ is isomorphic to the self-coproduct
of $(A,(E))$ in the category of logarithmic prisms over $\mathcal{O}_{K}$
defined by Koshikawa [Kos21]. Using the equivalence between prismatic
$(\varphi,\hat{G})$-modules and lattices in semi-stable representations of
$G_{K}$, we will show in §5 the following generalization of Theorem 1.0.1 to
semi-stable representations.
###### Theorem 1.0.2.
(Theorem 5.0.18) There is an equivalence between the category of prismatic
$F$-crystals in finite locally free $\mathcal{O}_{{\mathbbl{\Delta}}}$-modules
over $(\mathcal{O}_{K})_{{{\mathbbl{\Delta}}}_{\log}}$ and the category of
Galois-stable lattices in semi-stable representations of $G_{K}$.
Another interesting and natural question is whether Theorem 1.0.1 and Theorem
1.0.2 can accommodate more general base rings. Motivated by our strategy, it
seems to us that the answer should be affirmative if a suitable theory of
$(\varphi,\hat{G})$-modules can accommodate more general base rings, for
example, if the base ring $R$ is a complete DVR with _imperfect_ residue field
admitting a finite $p$-basis. We are working in this direction and hope to
report our progress in the future. Accordingly, part of our paper, for example
§2, does allow suitably general base rings.
### Acknowledgments
It is our pleasure to thank Hui Gao, Wansu Kim, Teruhisa Koshikawa, Zeyu Liu,
Yong Suk Moon, Peter Scholze, Koji Shimizu, Yupeng Wang, Zhiyou Wu and Min Yu
for comments and conversations during the preparation of this paper.
## 2\. Ring Structures on certain prismatic envelope
Recall that $K$ is a complete discrete valuation field of mixed characteristic
$(0,p)$ with ring of integers $\mathcal{O}_{K}$ and perfect residue field
$k$. Write $W=W(k)$. Let $\varpi\in\mathcal{O}_{K}$ be a uniformizer and
$E=E(u)\in W[u]$ be the Eisenstein polynomial of $\varpi$. Let $\mathbb C_{p}$
be the $p$-adic completion of $\overline{K}$, and $\mathcal{O}_{\mathbb
C_{p}}$ be its ring of integers. Let $R_{0}$ be a $W(k)$-algebra which admits
a Frobenius lift $\varphi:R_{0}\to R_{0}$. Set
$R:=R_{0}\otimes_{W(k)}\mathcal{O}_{K}$. We make the following assumptions on
$R_{0}$ and $R$:
1. (1)
Both $R_{0}$ and $R$ are $p$-adically complete integral domains, and
$R_{0}/pR_{0}=R/\varpi R$ is an integral domain;
2. (2)
Let $\breve{R}_{0}=W\langle t_{1},\dots,t_{m}\rangle$. $R_{0}$ is a
$\breve{R}_{0}$-_formally étale_ algebra with respect to the $p$-adic
topology;
3. (3)
$\breve{R}_{0}$ admits a Frobenius lift such that the map $\breve{R_{0}}\to
R_{0}$ defined in (2) is $\varphi$-equivariant;
4. (4)
The $k$-algebra $R_{0}/pR_{0}$ has a finite $p$-basis in the sense of [dJ95,
Definition 1.1.1].
Our main example is $R_{0}=\breve{R}_{0}=W(k).$ We will not use the finite
$p$-basis assumption until §4. The following are other examples of $R_{0}$:
###### Example 2.0.1.
1. (1)
$R_{0}=W(k)\langle t_{1}^{\pm 1},\dots,t_{m}^{\pm 1}\rangle$ with
$\varphi(t_{j})=t^{p}_{j}$;
2. (2)
$R_{0}=W(k)[\\![t]\\!]$ with $\varphi(t)=t^{p}$ or $(1+t)^{p}-1$;
3. (3)
$R_{0}$ an unramified complete DVR with imperfect residue field $\kappa$
admitting a finite $p$-basis. See §6 for more discussion.
We reserve $\gamma_{i}(\cdot)$ to denote $i$-th divided power.
### 2.1. Construction of $A^{(2)}$
Let $A=\mathfrak{S}=R_{0}[\\![u]\\!]$ and extend $\varphi:A\to A$ by
$\varphi(u)=u^{p}$. It is well known that $(A,(E))$ is a prism, and we can
define a surjection $\theta:A\to R$ via $u\mapsto\varpi$; we have $\mathop{\rm
Ker}\nolimits\theta=(E(u))$. Let $\breve{A}:=\breve{R}_{0}[\\![u]\\!]$ and
define $\varphi$ and
$\breve{\theta}:\breve{A}\to\breve{R}:=\mathcal{O}_{K}\otimes_{W}\breve{R}_{0}$
similarly. We set
$A^{\widehat{\otimes}2}:=A[\\![y-x,s_{1}-t_{1},\dots,s_{m}-t_{m}]\\!],\
A^{\widehat{\otimes}3}:=A[\\![y-x,w-x,\\{s_{i}-t_{i},r_{i}-t_{i}\\}_{j=1,\dots,m}]\\!].$
Note that $A^{\widehat{\otimes}2}$ (resp. $A^{\widehat{\otimes}3}$) is
$\breve{A}\otimes_{\mathbb Z_{p}}\breve{A}$(resp. $\breve{A}\otimes_{\mathbb
Z_{p}}\breve{A}\otimes_{\mathbb Z_{p}}\breve{A}$)-algebra by $u\otimes
1\mapsto x$, $1\otimes u\mapsto y$ and $1\otimes t_{i}\mapsto s_{i}$ (resp.
$1\otimes 1\otimes u\mapsto w$ and $1\otimes 1\otimes t_{i}\mapsto r_{i}$). So
in this way, we can extend Frobenius $\varphi$ of $A$, which is compatible
with that on $\breve{A}$ to $A^{\widehat{\otimes}2}$ and
$A^{\widehat{\otimes}3}$. Set
$J^{(2)}=(E,y-x,\\{s_{i}-t_{i}\\}_{i=1,\dots,m})\subset
A^{\widehat{\otimes}2}$ and
$J^{(3)}=(E,y-x,w-x,\\{s_{i}-t_{i},r_{i}-t_{i}\\}_{i=1,\dots,m})\subset
A^{\widehat{\otimes}3}.$ Clearly, we have
$A^{\widehat{\otimes}i}/J^{(i)}\simeq R$ for $i=2,3$. Moreover,
$A^{\widehat{\otimes}2}/(p,E)$ (resp. $A^{\widehat{\otimes}3}/(p,E)$) is a
formal power series ring in the variables
$\bar{y}-\bar{x},\\{\bar{s}_{i}-\bar{t}_{i}\\}_{i=1,\dots,m}$ (resp.
$\bar{y}-\bar{x},\bar{w}-\bar{x},\\{\bar{s}_{i}-\bar{t}_{i},\bar{r}_{i}-\bar{t}_{i}\\}_{i=1,\dots,m}$),
so $(A,(E))\to(A^{\widehat{\otimes}i},J^{(i)})$ satisfies the requirements of
[BS22, Prop. 3.13], and we can construct the prismatic envelope with
respect to this map, which will be denoted by $A^{(i)}$. More precisely,
$A^{(i)}\simeq
A^{\widehat{\otimes}i}\left\\{\frac{J^{(i)}}{E}\right\\}_{\delta}^{\wedge}$,
where $\\{\cdot\\}_{\delta}^{\wedge}$ means freely adjoining elements in the
category of $(p,E(u))$-completed $\delta$-$A$-algebras. We will see in §4.1
that $A^{(2)}$ and $A^{(3)}$ are the self-product and triple self-product of
$A$ in the category $X_{{\mathbbl{\Delta}}}$.
### 2.2. The ring $A^{(2)}_{\max}$
Now we set $t_{0}=x$, $s_{0}=y$ and
$z_{j}=\frac{s_{j}-t_{j}}{E}\textnormal{ for }j=1,\dots,m,\textnormal{ and
}z_{0}=z=\frac{y-x}{E}=\frac{s_{0}-t_{0}}{E}.$
Note that $A^{(i)}$ are $A$-algebras via $u\mapsto x$.
###### Definition 2.2.1.
Let ${O}_{\mathrm{max}}$ be the $p$-adic completion of the $A$-subalgebra of
$A[\frac{1}{p}]$ generated by $p^{-1}E$. Let $A_{\max}^{(2)}$ be the
$p$-adic completion of the $A$-subalgebra of
$A[z_{j},\frac{1}{p};j=0,\dots,m]$ generated by $p^{-1}E$ and
$\\{\gamma_{i}(z_{j})\\}_{i\geq 1,j=0,\dots,m}$.
We first note that $A^{(2)}_{\max}$ is an $A^{\widehat{\otimes}2}$-algebra via
$(s_{j}-t_{j})=Ez_{j},j=0,\dots,m$. Write $\iota:A^{\widehat{\otimes}2}\to
A^{(2)}_{\max}$ for the structure map. By construction, it is easy to see that
$A^{(2)}_{\max}\subset R_{0}[\frac{1}{p}][\\![E,z_{j},j=0,\dots,m]\\!]$. In
particular, $A^{(2)}_{\max}$ is a domain and any element $b\in A^{(2)}_{\max}$
can be _uniquely_ written as
$\sum\limits_{i_{0}=0}^{\infty}\cdots\sum\limits_{i_{m}=0}^{\infty}b_{i_{0},\dots,i_{m}}\prod\limits_{j=0}^{m}\gamma_{i_{j}}(z_{j})$
with $b_{i_{0},\dots,i_{m}}\in{O}_{\mathrm{max}}$ and
$b_{i_{0},\dots,i_{m}}\to 0$ $p$-adically when $i_{0}+\cdots+i_{m}\to\infty$.
Our next aim is to define $\varphi$ on $A^{(2)}_{\max}$. For this, we need a
little preparation.
###### Lemma 2.2.2.
$c:=\frac{\varphi(E)}{p}\in{O}_{\mathrm{max}}$ and
$c^{-1}\in{O}_{\mathrm{max}}$.
###### Proof.
Since $A$ is a $\delta$-ring and $E$ is a distinguished element, we have in
particular
$\varphi(E)/p=c_{0}+E^{p}/p$
where $c_{0}=\delta(E)\in A^{\times}$. So
$c=\varphi(E)/p\in{O}_{\mathrm{max}}$, and
$c^{-1}=c_{0}^{-1}\sum\limits_{i=0}^{\infty}\frac{(-c_{0}^{-1}E^{p})^{i}}{p^{i}}\in{O}_{\mathrm{max}}.$
∎
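The two facts in this proof, namely that $c_{0}=\delta(E)$ is a unit and that $c^{-1}$ is given by a convergent geometric series, can be checked concretely. A minimal sketch in Python/SymPy, assuming the illustrative Eisenstein polynomial $E(u)=u-p$ (an assumption for the example, not fixed by the text):

```python
# Sanity check of Lemma 2.2.2 for the sample distinguished element
# E(u) = u - p (an illustrative assumption; any Eisenstein choice works).
import sympy as sp

u = sp.symbols('u')
for p in [3, 5, 7]:
    E = u - p
    phi_E = E.subs(u, u**p)                  # phi acts by u -> u^p
    delta_E = sp.expand((phi_E - E**p) / p)  # delta(E) = (phi(E) - E^p)/p
    coeffs = sp.Poly(delta_E, u).all_coeffs()
    # delta(E) has integral coefficients, i.e. lies in W(k)[[u]] ...
    assert all(c.is_integer for c in coeffs)
    # ... and its constant term is prime to p, so c_0 = delta(E) is a
    # unit and c = phi(E)/p = c_0 + E^p/p is invertible in O_max.
    assert coeffs[-1] % p != 0
```

Note that the series for $c^{-1}$ converges $p$-adically in ${O}_{\mathrm{max}}$ because $E^{p}/p=p^{p-1}(E/p)^{p}\in p^{p-1}{O}_{\mathrm{max}}$, so its powers tend to $0$.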
Now we define $\varphi(z)=\varphi(z_{0})=\frac{y^{p}-x^{p}}{\varphi(E)}$ and
$\varphi(z_{j})=\frac{\varphi(s_{j})-\varphi(t_{j})}{\varphi(E)}$. Since
$\varphi(z)=\frac{y^{p}-x^{p}}{\varphi(E)}=c^{-1}\frac{y^{p}-x^{p}}{p}=c^{-1}\frac{(x+Ez)^{p}-x^{p}}{p}=c^{-1}\sum_{i=1}^{p}\binom{p}{i}\frac{x^{p-i}(Ez)^{i}}{p}=c^{-1}\sum_{i=1}^{p}a_{i}z^{i},$
where $a_{i}\in
W(k)[\\![x]\\!][\frac{E^{p}}{p}]\subset{O}_{\mathrm{max}}\subset
A^{(2)}_{\max}$ and $c$ is a unit in ${O}_{\mathrm{max}}$, we have
$\varphi(z)\in A^{(2)}_{\max}$. Then
$\gamma_{n}(\varphi(z))=\frac{\varphi(z)^{n}}{n!}=\frac{z^{n}}{n!}(c^{-1}\sum_{i=1}^{p}a_{i}z^{i-1})^{n}$
is in $A^{(2)}_{\max}.$ The argument for $\varphi(z_{j})$ for $j\geq 1$ needs a
little more detail. Note that $\varphi(t_{j})=t_{j}^{p}+p\delta(t_{j})$ with
$\delta(t_{j})\in\breve{R}_{0}$ by our assumptions. It is clear that
$\delta(s_{j})-\delta(t_{j})=(s_{j}-t_{j})\lambda_{j}$ with $\lambda_{j}\in
A^{\widehat{\otimes}2}$. Using that $(s_{j}-t_{j})=Ez_{j}$, we get
(1) $\varphi(z_{j})=c^{-1}(\frac{s^{p}_{j}-t^{p}_{j}}{p}+Ez_{j}\lambda_{j})$
The same argument as that for $\varphi(z_{0})$ also shows that
$\gamma_{n}(\varphi(z_{j}))\in A^{(2)}_{\max}$ for $j=1,\dots,m$.
Since any element $b\in A^{(2)}_{\max}$ can be uniquely written as
$\sum\limits_{i_{0}=0}^{\infty}\cdots\sum\limits_{i_{m}=0}^{\infty}b_{i_{1},\dots,i_{m}}\prod\limits_{j=0}^{m}\gamma_{i_{j}}(z_{j})$
with $b_{i_{0},\dots,i_{m}}\in{O}_{\mathrm{max}}$ and
$b_{i_{0},\dots,i_{m}}\to 0$ $p$-adically when $i_{0}+\cdots+i_{m}\to\infty$,
this allows us to extend the Frobenius map $\varphi$ on $A$ to a _ring_ map
$\varphi:A^{(2)}_{\max}\to A^{(2)}_{\max}$ by sending $u\mapsto u^{p}$,
$z\mapsto\frac{y^{p}-x^{p}}{\varphi(E)}$,
$\varphi(z_{j})=\frac{\varphi(s_{j})-\varphi(t_{j})}{\varphi(E)}$, and
$\gamma_{i}(z_{j})\mapsto\gamma_{i}(\varphi(z_{j}))$ as the above.
###### Remark 2.2.3.
The ring map $\varphi:A^{(2)}_{\max}\to A^{(2)}_{\max}$ is _not_ a Frobenius
lift of $A^{(2)}_{\max}/p$ because $\varphi(E/p)-(E/p)^{p}\not\in
pA^{(2)}_{\max}$. In particular, $A^{(2)}_{\max}$ is not a $\delta$-ring.
Recall that $A^{(2)}_{\max}$ is an $A^{\widehat{\otimes}2}$-algebra via map
$\iota:A^{\widehat{\otimes}2}\to A^{(2)}_{\max}$. The above construction of
Frobenius $\varphi$ on $A^{(2)}_{\max}$ is obviously compatible with $\iota$.
Our next goal is to show that $\iota$ induces a map $A^{(2)}\to
A^{(2)}_{\max}$ so that $A^{(2)}$ is a subring of $A^{(2)}_{\max}$ which is
compatible with $\varphi$-structures and filtration. We need a little
preparation. Write $\mathfrak{z}_{n}=\delta^{n}(z)$ with
$\delta^{0}(z)=z=\mathfrak{z}_{0}$, and $A_{0}=W(k)[\\![u]\\!]$.
###### Lemma 2.2.4.
$\delta^{n}(Ez)=b_{n}\mathfrak{z}_{n}+\sum_{i=0}^{p}a^{(n)}_{i}\mathfrak{z}_{n-1}^{i},$
where $a^{(n)}_{i}\in A_{0}[\mathfrak{z}_{0},\dots,\mathfrak{z}_{n-2}]$ so
that $a^{(n)}_{p}\in A_{0}^{\times}$ and for $0\leq i\leq p-1$ each monomial
of $a^{(n)}_{i}$ contains a factor $\mathfrak{z}_{j}^{p}$ for some $0\leq
j\leq n-2$. Furthermore, $b_{n+1}=p\delta(b_{n})+b^{p}_{n}$ and
$b_{1}=p\delta(E)+E^{p}$.
###### Proof.
Given $f\in A_{0}[x_{1},\dots,x_{m}]$, if each monomial of $f$ contains
$x_{j}^{l}$ for some $j$ and $l\geq p$, then we call $f$ _good_; for example,
$f=x_{1}^{p}x_{2}+2x_{1}x_{2}^{p+3}$ is good. So we need to show that $a^{(n)}_{i}\in
A_{0}[\mathfrak{z}_{0},\dots,\mathfrak{z}_{n-2}]$ is good. Before making the
induction on $n$, we discuss some properties of good polynomials. It is clear
that the set of good polynomials is closed under addition and multiplication.
Note that
(2)
$\delta(\mathfrak{z}_{l}^{i})=\frac{1}{p}(\varphi(\mathfrak{z}_{l}^{i})-\mathfrak{z}_{l}^{pi})=\frac{1}{p}\big{(}(p\mathfrak{z}_{l+1}+\mathfrak{z}_{l}^{p})^{i}-\mathfrak{z}_{l}^{pi}\big{)}=\sum\limits_{j=1}^{i}\binom{i}{j}(p^{j-1}\mathfrak{z}_{l}^{p(i-j)})\mathfrak{z}_{l+1}^{j}.$
In particular, given an $f\in A_{0}[\mathfrak{z}_{0},\dots,\mathfrak{z}_{m}]$,
$\delta(\mathfrak{z}_{m}^{p}f)=f^{p}\delta(\mathfrak{z}_{m}^{p})+\mathfrak{z}_{m}^{p^{2}}\delta(f)+p\delta(\mathfrak{z}_{m}^{p})\delta(f)$
is a good polynomial in $A_{0}[\mathfrak{z}_{0},\dots,\mathfrak{z}_{m+1}]$. Using
the fact that $\delta(a+b)=\delta(a)+\delta(b)+F(a,b)$ where
$F(X,Y)=\frac{1}{p}(X^{p}+Y^{p}-(X+Y)^{p})=-\sum\limits_{i=1}^{p-1}\binom{p}{i}/pX^{i}Y^{p-i}$,
together with the above argument for $\delta(\mathfrak{z}_{m}^{p}f)$, it is not
hard to show that if $g\in A_{0}[\mathfrak{z}_{0},\dots,\mathfrak{z}_{m}]$ is
good then $\delta(g)\in
A_{0}[\mathfrak{z}_{0},\dots,\mathfrak{z}_{m},\mathfrak{z}_{m+1}]$ is also
good.
Now we make induction on $n$. When $n=1$, we have
$\delta(Ez)=E^{p}\mathfrak{z}_{1}+z^{p}\delta(E)+p\delta(E)\mathfrak{z}_{1}=(p\delta(E)+E^{p})\mathfrak{z}_{1}+\delta(E)z^{p}.$
Then $b_{1}=p\delta(E)+E^{p}$, $a^{(1)}_{p}=\delta(E)\in A_{0}^{\times}$ and
$a^{(1)}_{i}=0$ for $1\leq i\leq p-1$, as required. Now assume the formula is
correct for $n$, then
$\delta^{n+1}(Ez)=\delta\big{(}b_{n}\mathfrak{z}_{n}+\sum_{i=0}^{p}a^{(n)}_{i}\mathfrak{z}_{n-1}^{i}\big{)}=\delta(b_{n}\mathfrak{z}_{n})+\delta\big{(}\sum_{i=0}^{p}a^{(n)}_{i}\mathfrak{z}_{n-1}^{i}\big{)}+F\big{(}b_{n}\mathfrak{z}_{n},\sum_{i=0}^{p}a^{(n)}_{i}\mathfrak{z}_{n-1}^{i}\big{)}.$
Clearly,
$F\big{(}b_{n}\mathfrak{z}_{n},\sum\limits_{i=0}^{p}a^{(n)}_{i}\mathfrak{z}_{n-1}^{i}\big{)}=\sum\limits_{j=1}^{p-1}\tilde{a}^{(n)}_{j}\mathfrak{z}_{n}^{j}$
with $\tilde{a}^{(n)}_{j}$ being good. An easy induction shows that
$\delta(\sum\limits_{i=0}^{p}a^{(n)}_{i}\mathfrak{z}_{n-1}^{i})=\sum\limits_{i=0}^{p}\delta(a^{(n)}_{i}\mathfrak{z}_{n-1}^{i})+f$
with $f\in A_{0}[\mathfrak{z}_{0},\dots,\mathfrak{z}_{n-1}]$ being good. Since
$\delta(a^{(n)}_{i}\mathfrak{z}_{n-1}^{i})=(a^{(n)}_{i})^{p}\delta(\mathfrak{z}_{n-1}^{i})+(\mathfrak{z}_{n-1}^{pi})\delta(a_{i}^{(n)})+p\delta(\mathfrak{z}_{n-1}^{i})\delta(a_{i}^{(n)})$,
by using the formula for $\delta(\mathfrak{z}_{n-1}^{i})$ in (2) and the fact
that $a^{(n)}_{i}$ being good implies that $\delta(a_{i}^{(n)})$ is also good,
we conclude that for $0\leq i\leq p-1$,
$\sum\limits_{i=0}^{p-1}\delta(a^{(n)}_{i}\mathfrak{z}_{n-1}^{i})=\sum_{i=0}^{p-1}\alpha_{i}\mathfrak{z}_{n}^{i}$
with $\alpha_{i}\in A_{0}[\mathfrak{z}_{0},\dots,\mathfrak{z}_{n-1}]$ being
good polynomials. Using that $a_{p}^{(n)}\in A_{0}^{\times}$, we compute that
$\delta(a_{p}^{(n)}\mathfrak{z}^{p}_{n-1})=\sum\limits_{i=0}^{p}\beta_{i}\mathfrak{z}_{n}^{i}$
with $\beta_{p}\in pA_{0}$ and $\beta_{j}\in
A_{0}[\mathfrak{z}_{0},\dots,\mathfrak{z}_{n-1}]$ being good for $1\leq j\leq
p-1$. Now we only need to analyze $\delta(b_{n}\mathfrak{z}_{n})$, which is
$\delta(b_{n})\mathfrak{z}_{n}^{p}+b_{n}^{p}\mathfrak{z}_{n+1}+p\delta(b_{n})\mathfrak{z}_{n+1}$.
So $b_{n+1}=p\delta(b_{n})+b_{n}^{p}$ and
$a_{p}^{(n+1)}=\delta(b_{n})+\beta_{p}$. Since $\delta(b_{n})\in
A_{0}^{\times}$, we see that $a_{p}^{(n+1)}=\delta(b_{n})+\beta_{p}\in
A_{0}^{\times}$ as required. ∎
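The elementary $\delta$-ring identities used in this proof can be verified symbolically for small parameters. A sketch in Python/SymPy (the values $p=3$, $i=4$ and the sample polynomial $E=u-p$ are assumptions for illustration, not part of the text):

```python
# Symbolic spot-checks of the identities behind Lemma 2.2.4.
import sympy as sp

u, z, z0, z1, X, Y = sp.symbols('u z z0 z1 X Y')
p, i = 3, 4

# Equation (2): delta(z_l^i), expanded via phi(z_l) = z_l^p + p*z_{l+1},
# with z0, z1 standing for z_l, z_{l+1}.
lhs = sp.expand(((p*z1 + z0**p)**i - z0**(p*i)) / p)
rhs = sp.expand(sum(sp.binomial(i, j) * p**(j - 1) * z0**(p*(i - j)) * z1**j
                    for j in range(1, i + 1)))
assert lhs == rhs

# F(X,Y) = (X^p + Y^p - (X+Y)^p)/p = -sum_{0<k<p} binom(p,k)/p X^k Y^(p-k).
F = sp.expand((X**p + Y**p - (X + Y)**p) / p)
assert F == sp.expand(-sum(sp.binomial(p, k) / p * X**k * Y**(p - k)
                           for k in range(1, p)))

# Base case n = 1: delta(Ez) = (p*delta(E) + E^p)*z1 + delta(E)*z^p,
# writing delta(z) = z1 and taking the sample E = u - p.
E = u - p
phiE = E.subs(u, u**p)
dE = sp.expand((phiE - E**p) / p)
lhs2 = sp.expand((phiE * (z**p + p*z1) - (E*z)**p) / p)  # delta(E*z)
assert lhs2 == sp.expand((p*dE + E**p)*z1 + dE*z**p)
```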
Let
$\widetilde{A}^{(2)}:=A^{\widehat{\otimes}2}[z_{j}]_{\delta}=A^{\widehat{\otimes}2}[\delta^{n}(z_{j}),n\geq
0,j=0,\dots,m]$ and let
$\alpha:\widetilde{A}^{(2)}\to\widetilde{A}^{(2)}[\frac{1}{p}]$ be the natural
map (we do not know whether $\alpha$ is injective at this point).
###### Lemma 2.2.5.
For $i\geq 0$ and $j=0,1,\ldots,m$, there exists
$f_{ij}(X)\in\widetilde{A}^{(2)}[X]$ such that, as elements of
$\widetilde{A}^{(2)}[\frac{1}{p}]$ via
$\alpha:\widetilde{A}^{(2)}\to\widetilde{A}^{(2)}[\frac{1}{p}]$,
$\gamma_{i}(z_{j})=f_{ij}\Bigl{(}\frac{E}{p}\Bigr{)}.$
###### Proof.
Write $z=z_{j}$ for simplicity, and let $\tilde{\gamma}(z)=\frac{z^{p}}{p}$
and
$\tilde{\gamma}^{n}=\underbrace{\tilde{\gamma}\circ\tilde{\gamma}\cdots\circ\tilde{\gamma}}_{n}$.
It suffices to show that for each $n\geq 1$, we have
$\tilde{\gamma}^{n}(z)=f_{n}(\frac{E}{p})$ inside
$\widetilde{A}^{(2)}[\frac{1}{p}]$ for some
$f_{n}(X)\in\widetilde{A}^{(2)}[X]$. For an element $x\in
A[\delta^{i}(z)]_{i\geq 0}$, we say that $x$ has _$\delta$-order $\leq n$_ if
$x\in\sum_{0\leq j\leq n}A[\\{\delta^{i}(z)\\}_{0\leq i\leq n}]\delta^{j}(z)$,
namely, if $x$ can be written as a sum of monomials such that each term is
divisible by $\delta^{j}(z)$ for some $0\leq j\leq n$.
We claim that the following two equations hold for each $n\geq 1$:
1. (1)
We have
(3)
$\delta^{n}(z)=\nu_{n}\tilde{\gamma}^{n}(z)+P_{n}\Bigl{(}\frac{E}{p}\Bigr{)}+\frac{E^{p}}{p}d_{n}\delta^{n}(z)$
for some $\nu_{n}\in A^{\times}$, $d_{n}\in A$, and
$P_{n}(X)\in(A[\delta^{i}(z)]_{i\geq 0})[X]$ such that each coefficient of
$P_{n}(X)$ has $\delta$-order $\leq n-1$.
2. (2)
We have
(4)
$\tilde{\gamma}(\delta^{n-1}(z))=\mu_{n-1}\tilde{\gamma}^{n}(z)+Q_{n-1}\Bigl{(}\frac{E}{p}\Bigr{)}$
for some $\mu_{n-1}\in A^{\times}$ and $Q_{n-1}(X)\in(A[\delta^{i}(z)]_{i\geq
0})[X]$ such that each coefficient of $Q_{n-1}(X)$ has $\delta$-order $\leq
n-1$.
We prove claims (1) and (2) by induction. For $n=1$, since
$\delta(Ez)=z^{p}\delta(E)+(p\delta(E)+E^{p})\delta(z)$
and $\delta(E)\in\mathfrak{S}^{\times}$, we have
$\delta(z)=-\tilde{\gamma}(z)+\delta(E)^{-1}\frac{\delta(Ez)}{p}-\delta(E)^{-1}\frac{E^{p}}{p}\delta(z).$
By easy induction, we also have $\delta^{i}(Ez)\in(Ez)A$ for each $i\geq 1$.
So claim (1) holds. Claim (2) holds for $n=1$ trivially with $Q_{0}(X)=0$.
Suppose that claims (1) and (2) hold for $1\leq n\leq m$. We will verify
claims (1) and (2) for $n=m+1$. We first consider claim (2). Since each
coefficient of $P_{m}(X)$ has $\delta$-order $\leq m-1$,
$\frac{E^{p}}{p}=p^{p-1}\bigl{(}\frac{E}{p}\bigr{)}^{p}$, and Equations (3)
and (4) hold for $1\leq n\leq m$, applying $\tilde{\gamma}(\cdot)$ to Equation
(3) for $n=m$ yields
$\tilde{\gamma}(\delta^{m}(z))=\nu_{m}^{p}\tilde{\gamma}^{m+1}(z)+Q_{m}\Bigl{(}\frac{E}{p}\Bigr{)}$
for some $Q_{m}(X)\in(\mathfrak{S}[\delta^{i}(z)]_{i\geq 0})[X]$ such that
each coefficient of $Q_{m}(X)$ has $\delta$-order $\leq m$. This proves the
claim (2) for $n=m+1$.
We now consider claim (1) for $n=m+1$. By Lemma 2.2.4 for $n=m+1$ and the fact
that $b_{n}=p\alpha_{n}+\beta_{n}E^{p}$ for some $\alpha_{n}\in A^{\times}$ and
$\beta_{n}\in A$ (via an easy induction on $n$), we have
$\alpha_{m+1}\delta^{m+1}(z)=\frac{\delta^{m+1}(Ez)}{p}-\beta_{m+1}\frac{E^{p}}{p}\delta^{m+1}(z)-a_{p}^{(m+1)}\tilde{\gamma}(\delta^{m}(z))-\frac{1}{p}\sum_{j=0}^{p-1}a_{j}^{(m+1)}(\delta^{m}(z))^{j}.$
As noted above, we have $\delta^{m+1}(Ez)\in(Ez)A$. Furthermore, by the
condition on $a_{j}^{(m+1)}$, the last term
$\frac{1}{p}\sum_{j=0}^{p-1}a_{j}^{(m+1)}(\delta^{m}(z))^{j}$ is a linear
combination of terms involving
$\tilde{\gamma}(\delta^{l}(z))=\frac{1}{p}(\delta^{l}(z))^{p}$ for some $0\leq
l\leq m-1$. Thus, by applying Equations (3) and (4) for $1\leq n\leq m$, we
see that claim (1) also holds for $n=m+1$ with
$\nu_{m+1}=-\alpha_{m+1}^{-1}a_{p}^{(m+1)}\mu_{m}$ and
$d_{m+1}=-\alpha_{m+1}^{-1}\beta_{m+1}$. This completes the induction and
proves the lemma. ∎
###### Remark 2.2.6.
In the above proof, by equation (4), we even have for each $i,j\geq 0$,
$\gamma_{i}(\delta^{j}(z))=f(\frac{E}{p})$ for some
$f\in\widetilde{A}^{(2)}[X]$.
An easy induction using (3) implies that $\alpha(\delta^{n}(z))\in
A^{\widehat{\otimes}2}[\\{\gamma_{i}(z_{j})\\}_{i\geq
0,j=0,\dots,m},\frac{E}{p}]\subset A^{(2)}_{\max}$, which satisfies the
equations in Lemma 2.2.4 after replacing $\mathfrak{z}_{n}$ by
$\alpha(\delta^{n}(z))$ inside $A^{(2)}_{\max}$. It is clear that $\iota$ is
still Frobenius compatible (because both $A^{\widehat{\otimes}2}$ and
$A^{(2)}_{\max}$ are domains). Since $E=p\frac{E}{p}$, $\iota$ is continuous
for the $(p,E)$-topology on $\widetilde{A}^{(2)}$ and the $p$-adic topology on
$A^{(2)}_{\max}$. Hence we obtain a ring map $\iota:A^{(2)}\to A^{(2)}_{\max}$
that is compatible with Frobenius.
Our next goal is to show that $\iota$ is injective. Define $\mathop{\rm
Fil}\nolimits^{i}A^{(2)}_{\max}[\frac{1}{p}]:=E^{i}A^{(2)}_{\max}[\frac{1}{p}]$.
For any subring $B\subset A^{(2)}_{\max}[\frac{1}{p}]$, set
$\mathop{\rm Fil}\nolimits^{i}B:=B\cap\mathop{\rm
Fil}\nolimits^{i}A^{(2)}_{\max}[\frac{1}{p}]=B\cap
E^{i}A^{(2)}_{\max}[\frac{1}{p}].$
Let $D_{z}$ be the $p$-adic completion of $R[\gamma_{i}(z_{j}),i\geq
0;j=0,\dots,m]$.
###### Proposition 2.2.7.
1. (1)
$\widetilde{A}^{(2)}/E=R[\gamma_{i}(z_{j}),i\geq 0;j=0,\dots,m]$.
2. (2)
$A^{(2)}/E\simeq D_{z}$.
3. (3)
$\iota$ is injective.
4. (4)
$\mathop{\rm Fil}\nolimits^{1}A^{(2)}=EA^{(2)}$.
5. (5)
$A^{(i)}$ are flat over $A$ for $i=2,3$.
###### Proof.
(1) By definition,
$\widetilde{A}^{(2)}=A^{\widehat{\otimes}2}[z^{(n)}_{j},n\geq
0;j=0,\dots,m]/J$ where $J$ encodes the following relations (note
that $z_{0}=z$):
$Ez=y-x,\ Ez_{j}=s_{j}-t_{j},\ \delta(z^{(n)}_{j})=z^{(n+1)}_{j},\ \delta^{n}(Ez)=\delta^{n}(y-x),\ \delta^{n}(Ez_{j})=\delta^{n}(s_{j}-t_{j}).$
Since $\delta(x-y)=\frac{(x^{p}-y^{p})-(x-y)^{p}}{p}$ and
$\delta(s_{j}-t_{j})=\frac{\varphi(s_{j}-t_{j})-(s_{j}-t_{j})^{p}}{p}$, it is
easy to prove by induction that $\delta^{n}(x-y)$ and
$\delta^{n}(s_{j}-t_{j})$ always contain a factor $(x-y)$ and $s_{j}-t_{j}$,
respectively, and hence $\delta^{n}(x-y),\delta^{n}(s_{j}-t_{j})\equiv 0\mod E$.
Therefore $\delta^{n}(Ez_{j})\equiv 0\mod E$. By Lemma 2.2.4, we see that
$p\mu_{n}z^{(n)}_{j}=-\sum_{i=0}^{p}\overline{a^{(n)}_{i}}(z^{(n-1)}_{j})^{i}\mod
E\text{ and }pz^{(1)}_{j}=z_{j}^{p}\mod E$
where $\overline{a^{(n)}_{i}}=a^{(n)}_{i}\mod E$ and
$\mu_{n}=\frac{\delta(b_{n})}{p}\mod E\in\mathcal{O}_{K}^{\times}$. Using that
$a_{p}^{(n)}\in A_{0}^{\times}$, and $a_{i}^{(n)},1\leq i\leq p-1$ are good in
the sense that they contains factor of $(z^{(l)}_{j})^{p}$ for some
$l=0,\dots,n-2$, we easily see by induction that
$\widetilde{A}^{(2)}/E=R[\widetilde{\gamma}^{n}(z_{j}),n\geq 0;j=0,\dots,m]$.
But it is well-known that $R[\widetilde{\gamma}^{n}(z_{j}),n\geq
0;j=0,\dots,m]=R[\gamma_{n}(z_{j}),n\geq 0;j=0,\dots,m].$
Now we show that the natural map $\iota:\widetilde{A}^{(2)}\to
A^{(2)}_{\max}[\frac{1}{p}]$ induced by $\alpha(\delta^{n}(z_{j}))$ is
injective. Note that $\widetilde{A}^{(2)}$ is the direct limit of
$\widetilde{A}^{(2)}_{n}:=A^{\hat{\otimes}2}[\\{\delta^{i}(z_{j})\\}_{i=1,\dots,n,j=0,\dots,m}]$.
An argument similar to the above shows that $\widetilde{A}^{(2)}_{n}/E$
injects into $A^{(2)}_{\max}[\frac{1}{p}]/E=D_{z}[\frac{1}{p}]$. Since
$\widetilde{A}^{(2)}_{n}$ is $E$-separated and $A^{(2)}_{\max}$ is a domain,
this implies that $\widetilde{A}^{(2)}_{n}$ injects into
$A^{(2)}_{\max}[\frac{1}{p}]$. So $\widetilde{A}^{(2)}$ injects into
$A^{(2)}_{\max}$ via $\iota$.
(2) Since $A^{(2)}$ is the $(p,E)$-completion of $\widetilde{A}^{(2)}$ (indeed,
$A^{(2)}$ is the _derived_ $(p,E)$-completion; but since $\widetilde{A}^{(2)}/E$
is $\mathbb Z_{p}$-flat, the derived completion coincides with the classical
completion, which is used here), we have a natural map
$\bar{\iota}:A^{(2)}/E\to D_{z}$. The surjectivity of $\bar{\iota}$ is
straightforward as $A^{(2)}$ is also $p$-complete. To see injectivity, given
a sequence $f_{n}$ so that $f_{n+1}-f_{n}\in(p,E)^{n}\widetilde{A}^{(2)}$ and
$f_{n}=Eg_{n}$ for all $n$, we have to show that $g_{n}$ is a convergent
sequence in $A^{(2)}$. We have
$E(g_{n+1}-g_{n})=\sum_{i=0}^{n}p^{i}E^{n-i}h_{i}$ with
$h_{i}\in\widetilde{A}^{(2)}$, so $E|p^{n}h_{n}$. Since
$\widetilde{A}^{(2)}/E$ has no $p$-torsion, we have $E|h_{n}$ and write
$h_{n}=Eh^{\prime}_{n}$. Since $\widetilde{A}^{(2)}$ is a domain as it is
inside the fraction field of $A^{\widehat{\otimes}2}$, we see that
$g_{n+1}-g_{n}=p^{n}h^{\prime}_{n}+\sum\limits_{i=0}^{n-1}p^{i}E^{n-i-1}h_{i}$.
Hence $g_{n}$ converges in $A^{(2)}$ as required.
(3) It is clear that $A^{(2)}_{\max}[\frac{1}{p}]/E\simeq D_{z}[\frac{1}{p}]$.
So the map $\iota\mod E(u)$ induces an injection $D_{z}\hookrightarrow
D_{z}[\frac{1}{p}]$. So for any $x\in\mathop{\rm Ker}\nolimits(\iota)$, we see
that $x=Ea$ for some $a\in A^{(2)}$. As $A^{(2)}_{\max}$ is $E$-torsion free,
$a\in\mathop{\rm Ker}\nolimits(\iota)$ as well; iterating, we get
$x\in\bigcap_{n}E^{n}A^{(2)}=0$ since $A^{(2)}$ is $E$-complete, so $x=0$ as
required.
(4) By the definition of $\mathop{\rm Fil}\nolimits^{1}A^{(2)}$, we see that
$EA^{(2)}\subset\mathop{\rm Fil}\nolimits^{1}A^{(2)}$ and $A^{(2)}/\mathop{\rm
Fil}\nolimits^{1}A^{(2)}$ injects to
$A^{(2)}_{\max}[\frac{1}{p}]/E=D_{z}[\frac{1}{p}]$. But we have seen that
$A^{(2)}/E=D_{z}$ injects into $D_{z}[\frac{1}{p}]$. Then $\mathop{\rm
Fil}\nolimits^{1}A^{(2)}=EA^{(2)}$.
(5) Both $A^{(2)}$ and $A^{(3)}$ are obtained by the construction of [BS22,
Proposition 3.13], which implies that they are $(p,E)$-complete flat over $A$.
Since $A$ is Noetherian, by [Sta20, Tag 0912], we have both $A^{(2)}$ and
$A^{(3)}$ are $A$-flat. ∎
###### Corollary 2.2.8.
1. (1)
$\mathop{\rm Fil}\nolimits^{i}A^{(2)}=E^{i}A^{(2)}.$
2. (2)
$A^{(i)}$ are bounded prisms for $i=2,3$.
###### Proof.
These follow from the fact that $A^{(2)}/EA^{(2)}\simeq D_{z}$, which is $\mathbb
Z_{p}$-flat. For (2), we have $A^{(2)}$ and $A^{(3)}$ are $(p,E)$-complete
flat over $A$, so boundedness follows from (2) in [BS22, Lemma 3.7]. ∎
###### Lemma 2.2.9.
$A^{(2)}$ is a closed subset inside $A^{(2)}_{\max}$.
###### Proof.
We need to show the following statement: Given $x\in\widetilde{A}^{(2)}$, if
$x=p^{n}y$ with $y\in A^{(2)}_{\max}$ then
$x=\sum\limits_{i=0}^{n}p^{n-i}E^{i}x_{i}$ with $x_{i}\in\widetilde{A}^{(2)}.$
Indeed, since $A^{(2)}/E\simeq A^{(2)}_{\max}/{\mathop{\rm
Fil}\nolimits^{1}}$, there exists $x_{0},w_{1}\in\widetilde{A}^{(2)}$ so that
$x=p^{n}x_{0}+Ew_{1}$. Then $Ew_{1}\in p^{n}A^{(2)}_{\max}$. Write
$Ew_{1}=p^{n}\sum\limits_{i=0}^{\infty}\sum\limits_{j=0}^{m}f_{ij}\gamma_{i}(z_{j})$,
we see that $f_{ij}=\sum_{l\geq 1}a_{ijl}\frac{E^{l}}{p^{l}}\in\mathop{\rm
Fil}\nolimits^{1}{O}_{\mathrm{max}}$. So it is easy to see that
$p^{n}E^{-1}f_{ij}\in p^{n-1}{O}_{\mathrm{max}}$ and then $w_{1}=p^{n-1}x_{1}$
with $x_{1}\in A^{(2)}_{\max}$. Then we may repeat the above argument to
$w_{1}$, and finally $x=\sum\limits_{i=0}^{n}p^{n-i}E^{i}x_{i}$ with
$x_{i}\in\widetilde{A}^{(2)}$ as required. ∎
Now we realize $A^{(2)}$ as a subring of $A^{(2)}_{\max}$ via $\iota$. We need
to introduce some auxiliary rings. By the description of elements in
$A^{(2)}_{\max}$, we define $\widetilde{S}$ to be the subring of
$A^{(2)}_{\max}$ given by
$\widetilde{S}:=A^{(2)}[\\![\frac{E^{p}}{p}]\\!]:=\\{\sum_{i\geq
0}a_{i}(\frac{E^{p}}{p})^{i}\mid a_{i}\in A^{(2)}\\}.$
And when $p=2$, we define $\widehat{S}:=A^{(2)}[\\![\frac{E^{4}}{2}]\\!]$
similarly. We will have $\widehat{S}\subset\widetilde{S}\subset
A^{(2)}_{\max}$. Viewing $\widetilde{S}$ and $\widehat{S}$ as subrings of
$A^{(2)}_{\max}$, we give them the filtration induced from $A^{(2)}_{\max}$.
The following lemma is crucial for later applications and we thank Yong Suk
Moon for many useful comments to improve many details in the proof.
###### Lemma 2.2.10.
Fix $h\in\mathbb N$. Then:
1. (1)
We have $\varphi(A^{(2)}_{\max})\subset\widetilde{S}\subset A^{(2)}_{\max}$,
and when $p=2$, we have
$\varphi(\widetilde{S})\subset\widehat{S}\subset\widetilde{S}$;
2. (2)
$x\in\mathop{\rm Fil}\nolimits^{h}\widetilde{S}$ if and only if $x$ can be
written as
$x=\sum\limits_{i\geq h}a_{i}\frac{E^{i}}{p^{\lfloor\frac{i}{p}\rfloor}}$
with $a_{i}\in A^{(2)}$.
3. (3)
when $p>2$, there is an $h_{0}>h$ such that $\varphi(\mathop{\rm
Fil}\nolimits^{m}\widetilde{S})\subset A^{(2)}+E^{h}\mathop{\rm
Fil}\nolimits^{m+1}\widetilde{S}$ for all $m>h_{0}$;
4. (4)
when $p=2$, $x\in\mathop{\rm Fil}\nolimits^{h}\widehat{S}$ if and only if
$x$ can be written as
$x=\sum\limits_{i\geq h}a_{i}\frac{E^{i}}{2^{\lfloor\frac{i}{4}\rfloor}}$
with $a_{i}\in A^{(2)}$;
5. (5)
when $p=2$, there is an $h_{0}>h$ such that $\varphi(\mathop{\rm
Fil}\nolimits^{m}\widehat{S})\subset A^{(2)}+E^{h}\mathop{\rm
Fil}\nolimits^{m+1}\widehat{S}$ for all $m>h_{0}$.
###### Proof.
For $(1)$: for any $a\in A^{(2)}_{\max}$, we can write
$a=\sum_{i_{0}=0}^{\infty}\cdots\sum_{i_{m}=0}^{\infty}\sum_{l=0}^{\infty}a_{i_{0},\dots,i_{m},l}\left(\frac{E}{p}\right)^{l}\prod_{j=0}^{m}\gamma_{i_{j}}(z_{j})$
where $a_{i_{0},\dots,i_{m},l}\in A$ and $a_{i_{0},\dots,i_{m},l}\to 0$
$p$-adically when $\sum_{j}i_{j}+l\to\infty$. Thanks to Lemma 2.2.5, we see
that
$b_{i_{0},\dots,i_{m},l}:=\varphi\left(\left(\frac{E}{p}\right)^{l}\prod_{j=0}^{m}\gamma_{i_{j}}(z_{j})\right)\in\widetilde{S}$.
So $\varphi(a)=\sum a_{i_{0},\dots,i_{m},l}b_{i_{0},\dots,i_{m},l}$ converges
in $\widetilde{S}$.
For the claim in $(1)$ for $p=2$, we have
$\varphi(\frac{E^{2}}{2})=(E^{2}+2b^{\prime})^{2}/2=\frac{E^{4}}{2}+2b$ for
some $b,b^{\prime}\in A$. And for $a=\sum_{i\geq
0}a_{i}(\frac{E^{p}}{p})^{i}\in\widetilde{S}$, we have
$\varphi(a)=\sum_{i\geq
0}\varphi(a_{i})(\frac{\varphi(E^{2})}{2})^{i}=\sum_{i\geq
0}\varphi(a_{i})\sum_{j=0}^{i}c_{ij}(2b)^{i-j}(\frac{E^{4}}{2})^{j}=\sum_{j\geq
0}\left(\sum_{i=j}^{\infty}\varphi(a_{i})c_{ij}(2b)^{i-j}\right)(\frac{E^{4}}{2})^{j}$
for some $c_{ij}\in\mathbb Z$. So we have $\varphi(a)\in\widehat{S}$.
For $(2)$, the if part is trivial. For the other direction, any
$x\in\mathop{\rm Fil}\nolimits^{h}\widetilde{S}$ can be written as
$x=\sum\limits_{i\geq 0}a_{i}\frac{E^{i}}{p^{\lfloor\frac{i}{p}\rfloor}}$
as an element of $\widetilde{S}$. Since we also have $x\in\mathop{\rm
Fil}\nolimits^{h}A^{(2)}_{\max}[\frac{1}{p}]=E^{h}A^{(2)}_{\max}[\frac{1}{p}]$,
it follows that $\tilde{a}_{0}=\sum\limits_{0\leq i\leq
h}a_{i}\frac{E^{i}}{p^{\lfloor\frac{i}{p}\rfloor}}$ is in $\mathop{\rm
Fil}\nolimits^{h}A^{(2)}[\frac{1}{p}]$. This implies
$p^{\lfloor\frac{h}{p}\rfloor}\tilde{a}_{0}\in\mathop{\rm
Fil}\nolimits^{h}A^{(2)}=E^{h}A^{(2)}$. That is
$\tilde{a}_{0}={p^{-\lfloor\frac{h}{p}\rfloor}}{E^{h}}b$ for some $b\in
A^{(2)}$. So $x$ is of the given form. The proof for $(4)$ is similar.
For $(3)$: by $(2)$, any $x\in\mathop{\rm Fil}\nolimits^{m}\widetilde{S}$
can be written as
$x=\sum\limits_{i\geq m}a_{i}\frac{E^{i}}{p^{\lfloor\frac{i}{p}\rfloor}}.$
Using the fact that $\varphi(E)=E^{p}+pb$ for some $b\in A^{(2)}$, we have
$\varphi(x)=\sum\limits_{i\geq
m}\varphi(a_{i})\sum_{j=0}^{i}\frac{c_{ij}E^{p(i-j)}p^{j}}{p^{\lfloor\frac{i}{p}\rfloor}}=\sum_{i\geq
m}\sum_{j\geq\lfloor\frac{i}{p}\rfloor}^{i}\frac{b_{ij}E^{p(i-j)}p^{j}}{p^{\lfloor\frac{i}{p}\rfloor}}+\sum_{i\geq
m}\sum_{0\leq
j<\lfloor\frac{i}{p}\rfloor}E^{h}\frac{b_{ij}E^{p(i-j)-h}p^{j}}{p^{\lfloor\frac{i}{p}\rfloor}}$
with $b_{ij}\in A^{(2)}$.
In particular, we have $\sum_{i\geq
m}\sum_{j\geq\lfloor\frac{i}{p}\rfloor}^{i}\frac{b_{ij}E^{p(i-j)}p^{j}}{p^{\lfloor\frac{i}{p}\rfloor}}$
is inside $A^{(2)}$. To prove $(3)$, it amounts to finding $h_{0}$ such that
whenever $m>h_{0}$, $i\geq m$ and $0\leq j<\lfloor\frac{i}{p}\rfloor$, we have
$\sum_{i\geq m}\sum_{0\leq
j<\lfloor\frac{i}{p}\rfloor}\frac{b_{ij}E^{p(i-j)-h}p^{j}}{p^{\lfloor\frac{i}{p}\rfloor}}\in\mathop{\rm
Fil}\nolimits^{m+1}\widetilde{S}.$
The claim follows if we can find $h_{0}>h$ such that
$\frac{E^{p(i-j)-h}p^{j}}{p^{\lfloor\frac{i}{p}\rfloor}}\in\widetilde{S}$ and
$p(i-j)-h\geq m+1$ for all $m>h_{0}$, $i\geq m$ and $0\leq
j<\lfloor\frac{i}{p}\rfloor$. That is
$\lfloor\frac{p(i-j)-h}{p}\rfloor+j\geq\lfloor\frac{i}{p}\rfloor$ and
$p(i-j)-h\geq m+1$ for all $i,j,m$ in this range. Solving these inequalities,
it is enough to choose $h_{0}>\max\\{h,\frac{p(h+1)+1}{p(p-2)}\\}$, which is
valid for $p>2$.
The proof of $(5)$ is similar to that of $(3)$. Any $x\in\mathop{\rm
Fil}\nolimits^{m}\widehat{S}$ can be written as
$x=\sum\limits_{i\geq m}a_{i}\frac{E^{i}}{2^{\lfloor\frac{i}{4}\rfloor}}.$
We have $\varphi(E)=E^{2}+2b$ for some $b\in A^{(2)}$, so
$\varphi(x)=\sum\limits_{i\geq
m}\varphi(a_{i})\sum_{j=0}^{i}\frac{c_{ij}E^{2(i-j)}2^{j}}{2^{\lfloor\frac{i}{4}\rfloor}}=\sum_{i\geq
m}\sum_{j\geq\lfloor\frac{i}{4}\rfloor}^{i}\frac{b_{ij}E^{2(i-j)}2^{j}}{2^{\lfloor\frac{i}{4}\rfloor}}+\sum_{i\geq
m}\sum_{0\leq
j<\lfloor\frac{i}{4}\rfloor}E^{h}\frac{b_{ij}E^{2(i-j)-h}2^{j}}{2^{\lfloor\frac{i}{4}\rfloor}}.$
Similar to the argument in $(3)$, it amounts to finding $h_{0}$ such that
whenever $m>h_{0}$, $i\geq m$ and $0\leq j<\lfloor\frac{i}{4}\rfloor$, we have
$\lfloor(i-j)-\frac{h}{2}\rfloor+j\geq\lfloor\frac{i}{4}\rfloor$ and
$2(i-j)-h\geq m+1$. It is enough to choose $h_{0}>2(h+2)$. ∎
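The two systems of inequalities solved at the end of this proof can be spot-checked numerically over a finite range; a minimal Python sketch (finite ranges only, so this is a sanity check rather than a proof):

```python
# Finite-range sanity check for the choices of h_0 in (3) (p > 2) and
# (5) (p = 2) of Lemma 2.2.10.
import math

# Case p > 2: need floor((p(i-j)-h)/p) + j >= floor(i/p) and
# p(i-j) - h >= m + 1 whenever m > h_0, i >= m, 0 <= j < floor(i/p).
for p in (3, 5):
    for h in range(1, 6):
        h0 = math.floor(max(h, (p*(h + 1) + 1) / (p*(p - 2)))) + 1
        for m in range(h0 + 1, h0 + 15):
            for i in range(m, m + 15):
                for j in range(i // p):
                    assert (p*(i - j) - h) // p + j >= i // p
                    assert p*(i - j) - h >= m + 1

# Case p = 2: need floor((i-j) - h/2) + j >= floor(i/4) and
# 2(i-j) - h >= m + 1 whenever m > h_0 = 2(h+2), i >= m, 0 <= j < floor(i/4).
for h in range(1, 6):
    h0 = 2*(h + 2) + 1
    for m in range(h0 + 1, h0 + 15):
        for i in range(m, m + 15):
            for j in range(i // 4):
                assert math.floor((i - j) - h/2) + j >= i // 4
                assert 2*(i - j) - h >= m + 1
```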
If $A$ is a ring then we denote by ${\rm M}_{d}(A)$ the set of $d\times
d$-matrices with entries in $A$.
###### Proposition 2.2.11.
Let $Y\in{\rm M}_{d}(A^{(2)}_{\max})$ be such that $E^{h}Y=B\varphi(Y)C$ with
$B$ and $C$ in ${\rm M}_{d}(A^{(2)})$. Then $Y$ is in ${\rm
M}_{d}(A^{(2)}[\frac{1}{p}])$.
###### Proof.
First, we claim that there is a constant $s$, depending only on $h$, such that
the entries of $p^{s}Y$ are in $\widetilde{S}$. By $(1)$ of Lemma 2.2.10, the
entries of $E^{h}Y$ are in $\widetilde{S}$. So for each entry $a$ of $Y$, we
can write $E^{h}a=\sum\limits_{i=0}^{\infty}a_{i}\frac{E^{pi}}{p^{i}}$ with
$a_{i}\in A^{(2)}$. It is clear that
$E^{h}p^{h}a=a^{\prime}+E^{h}\sum\limits_{i\geq h}a_{i}\frac{E^{pi-h}}{p^{i}}$
with $a^{\prime}\in A^{(2)}$. Therefore, $a^{\prime}\in\mathop{\rm
Fil}\nolimits^{h}A^{(2)}=E^{h}A^{(2)}$ by Corollary 2.2.8. So writing
$a^{\prime}=E^{h}b$, we have $p^{h}a=b+\sum\limits_{i\geq
h}a_{i}\frac{E^{pi-h}}{p^{i}}$. In particular, we see that
$p^{2h}a\in\widetilde{S}$, which proves our claim. When $p=2$, we may
repeat the above argument and assume that $p^{s}Y$ is in ${\rm
M}_{d}(\widehat{S})$.
Let $R=\widetilde{S}$ when $p>2$ and $R=\widehat{S}$ when $p=2$; then we may
assume $Y$ is inside ${\rm M}_{d}(R)$. We claim that there is another constant
$r$, depending only on $h$, such that for each entry $a$ of $p^{r}Y$, there is
a sequence $\\{b_{i}\\}_{i\geq 0}$ in $A^{(2)}$ such that
$a-\sum\limits_{i=0}^{m}b_{i}E^{i}\in\mathop{\rm Fil}\nolimits^{m+1}R$ for all
$m$. Note that once this is known, $\sum\limits_{i=0}^{m}b_{i}E^{i}$
converges to an element $b$ in $A^{(2)}$, and $a-b=0$ since it lies in
$\mathop{\rm Fil}\nolimits^{m}R$ for all $m\in\mathbb N$.
So it remains to show our claim. When $p>2$, let $h_{0}$ be the integer in
$(3)$ of Lemma 2.2.10. Then it is easy to show that there is a constant $r$,
depending only on $h_{0}$ (so only on $h$), and a sequence
$\\{b_{i}\\}_{i=0}^{h_{0}}$ such that for each entry $a$ of
$Y^{\prime}:=p^{r}Y$, we have
$a-\sum\limits_{i=0}^{h_{0}}b_{i}E^{i}\in\mathop{\rm
Fil}\nolimits^{h_{0}+1}R.$
Now we show our claim by induction: assume that for each entry $a$ of
$Y^{\prime}$, there is a sequence $\\{b_{i}\\}_{i=0}^{m}$ such that
$a-\sum\limits_{i=0}^{m}b_{i}E^{i}\in\mathop{\rm Fil}\nolimits^{m+1}R$
for some $m\geq h_{0}$. So we can write $Y^{\prime}$ as
$\sum_{i=0}^{m}Y_{i}E^{i}+Z_{m+1},$
with $Y_{i}\in{\rm M}_{d}(A^{(2)})$ and $Z_{m+1}\in{\rm M}_{d}(\mathop{\rm
Fil}\nolimits^{m+1}R)$. Writing $X_{m}=\sum_{i=0}^{m}Y_{i}E^{i}$, then
$E^{h}Y^{\prime}=B\varphi(Y^{\prime})C$ implies
$E^{h}Z_{m+1}=B\varphi(X_{m})C-E^{h}X_{m}+B\varphi(Z_{m+1})C.$
By $(3)$ in Lemma 2.2.10, we have $B\varphi(Z_{m+1})C=A_{m+1}+E^{h}B_{m+1}$,
with $A_{m+1}\in{\rm M}_{d}(A^{(2)})$ and $B_{m+1}\in{\rm M}_{d}(\mathop{\rm
Fil}\nolimits^{m+2}R)$. One can check
$B\varphi(X_{m})C-E^{h}X_{m}+A_{m+1}\in{\rm M}_{d}(\mathop{\rm
Fil}\nolimits^{h+m+1}A^{(2)})$, so
$B\varphi(X_{m})C-E^{h}X_{m}+A_{m+1}=E^{h+m+1}Y_{m+1}$ with $Y_{m+1}\in{\rm
M}_{d}(A^{(2)})$. And we have $Y^{\prime}-\sum_{i=0}^{m+1}Y_{i}E^{i}=B_{m+1}\in{\rm
M}_{d}(\mathop{\rm Fil}\nolimits^{m+2}R)$ as required.
Finally, when $p=2$, we know we can assume that $Y$ is inside ${\rm
M}_{d}(\widehat{S})$. Repeating the above arguments with $(5)$ of
Lemma 2.2.10 in place of $(3)$, we can also prove our claim. ∎
### 2.3. The ring $A^{(2)}_{\mathop{\rm st}\nolimits}$
We assume that $R=\mathcal{O}_{K}$ in the following two subsections. For our
later use for semi-stable representations, we construct $A^{(2)}_{\mathop{\rm
st}\nolimits}$ as the following: Define $\varphi$ on
$W(k)[\\![x,\mathfrak{y}]\\!]$ by $\varphi(x)=x^{p}$ and
$\varphi(\mathfrak{y})=(1+\mathfrak{y})^{p}-1$ and set
$w=\frac{\mathfrak{y}}{E}$. Set $A^{(2)}_{\mathop{\rm
st}\nolimits}:=W(k)[\\![x,\mathfrak{y}]\\!]\\{w\\}_{\delta}^{\wedge}$ where
$\wedge$ means $(p,E)$-completion. Similarly, we define $A^{(3)}_{\mathop{\rm
st}\nolimits}=W(k)[\\![x,\mathfrak{y},\mathfrak{z}]\\!]\\{\frac{\mathfrak{y}}{E},\frac{\mathfrak{z}}{E}\\}^{\wedge}_{\delta}$,
with the $\delta$-structure on $W(k)[\\![x,\mathfrak{y},\mathfrak{z}]\\!]$
given by $\delta(x)=0$, $\varphi(\mathfrak{y})=(\mathfrak{y}+1)^{p}-1$ and
$\varphi(\mathfrak{z})=(\mathfrak{z}+1)^{p}-1$. Define $A^{(2)}_{\mathop{\rm
st}\nolimits,\max}$ to be the $p$-adic completion of
$W(k)[\\![x,\mathfrak{y}]\\!][w,\frac{E}{p},\gamma_{i}(w),i\geq 0].$ It is
clear that any $f\in A^{(2)}_{\mathop{\rm st}\nolimits,\max}$ can be
written uniquely as $f=\sum\limits_{i=0}^{\infty}f_{i}\gamma_{i}(w)$ with
$f_{i}\in{O}_{\mathrm{max}}$ and $f_{i}\to 0$ $p$-adically. For any subring
$B\subset A^{(2)}_{\mathop{\rm st}\nolimits,\max}[\frac{1}{p}]$, we set
$\mathop{\rm Fil}\nolimits^{i}B:=B\cap E^{i}A^{(2)}_{\mathop{\rm
st}\nolimits,\max}[\frac{1}{p}]$ and $D_{w}$ the $p$-adic completion of
$\mathcal{O}_{K}[\gamma_{i}(w),i\geq 0]$.
It turns out that $A^{(2)}$ and $A^{(2)}_{\mathop{\rm st}\nolimits}$ share
almost the same properties by replacing $z$ with $w$. So we summarize all
these properties in the following:
###### Proposition 2.3.1.
1. (1)
One can extend Frobenius from $A$ to $A^{(2)}_{\mathop{\rm
st}\nolimits,\max}$.
2. (2)
There exists an embedding $\iota:A^{(2)}_{\mathop{\rm
st}\nolimits}\hookrightarrow A^{(2)}_{\mathop{\rm st}\nolimits,\max}$ so that
$\iota$ commutes with Frobenius.
3. (3)
$A^{(2)}_{\mathop{\rm st}\nolimits}\cap E^{i}A^{(2)}_{\mathop{\rm
st}\nolimits,\max}[\frac{1}{p}]=E^{i}A^{(2)}_{\mathop{\rm st}\nolimits}$.
4. (4)
$A^{(2)}_{\mathop{\rm st}\nolimits}/E\simeq D_{w}=A^{(2)}_{\mathop{\rm
st}\nolimits,\max}/\mathop{\rm Fil}\nolimits^{1}A^{(2)}_{\mathop{\rm
st}\nolimits,\max}.$
5. (5)
$A^{(2)}_{\mathop{\rm st}\nolimits}$ is closed in $A^{(2)}_{\mathop{\rm
st}\nolimits,\max}$.
6. (6)
$A^{(2)}_{\mathop{\rm st}\nolimits}$ and $A^{(3)}_{\mathop{\rm st}\nolimits}$
are flat over $A$, and in particular they are bounded.
7. (7)
Proposition 2.2.11 holds by replacing $A^{(2)}_{\max}$ and $A^{(2)}$ by
$A^{(2)}_{\mathop{\rm st}\nolimits}$ and $A^{(2)}_{\mathop{\rm
st}\nolimits,\max}$ respectively.
###### Proof.
All of the previous proofs apply, after noting the following difference:
$\varphi(w)=\varphi(\frac{\mathfrak{y}}{E})=c^{-1}\frac{1}{p}\sum_{i=1}^{p}\binom{p}{i}\mathfrak{y}^{i}=c^{-1}\sum_{i=1}^{p-1}\mathfrak{y}^{i}\binom{p}{i}/p+c^{-1}\frac{E^{p}w^{p}}{p}.$
Also,
$\delta(\mathfrak{y})=\sum\limits_{i=1}^{p-1}\mathfrak{y}^{i}\binom{p}{i}/p$
always contains an $\mathfrak{y}$-factor, and this is a key input for the
analogue of Lemma 2.2.5.
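For completeness, the computation behind this claim is the following:

$$\delta(\mathfrak{y})=\frac{\varphi(\mathfrak{y})-\mathfrak{y}^{p}}{p}=\frac{(\mathfrak{y}+1)^{p}-1-\mathfrak{y}^{p}}{p}=\sum_{i=1}^{p-1}\frac{1}{p}\binom{p}{i}\mathfrak{y}^{i},$$

where each $\frac{1}{p}\binom{p}{i}$ is an integer for $1\leq i\leq p-1$, so every term indeed carries a factor of $\mathfrak{y}$.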
For the boundedness of $A^{(3)}_{\mathop{\rm st}\nolimits}$, we have
$W(k)[\\![x,\mathfrak{y},\mathfrak{z}]\\!]/(p,E)\simeq(\mathcal{O}_{K}/p)[\\![\bar{\mathfrak{y}},\bar{\mathfrak{z}}]\\!]$
so $\\{\mathfrak{y},\mathfrak{z}\\}$ form a $(p,E)$-complete regular sequence,
and by [BS22, Proposition 3.13], $A^{(3)}_{\mathop{\rm st}\nolimits}$ is also
$A$-flat, and this implies $A^{(3)}_{\mathop{\rm st}\nolimits}$ is bounded by
(2) in Lemma 3.7 of $loc.cit.$. ∎
Note that $A^{\widehat{\otimes}2}=W(k)[\\![x,y]\\!]\subset
W(k)[\\![x,\mathfrak{y}]\\!]$ via $y=x(\mathfrak{y}+1)$ or equivalently
$\mathfrak{y}=\frac{y}{x}-1$. It is clear that this inclusion is a map of
$\delta$-rings. By the universal property of the prismatic envelope used to
construct $A^{(2)}$, the inclusion induces a map of prisms $\alpha:A^{(2)}\to
A^{(2)}_{\mathop{\rm st}\nolimits}$. Since $z=xw$, we easily see that
$A^{(2)}_{\max}\subset A^{(2)}_{\mathop{\rm st}\nolimits,\max}$. So
$A^{(2)}\subset A^{(2)}_{\mathop{\rm st}\nolimits}$ via $\alpha$. We will see
in §4.1 and §5 that $A^{(2)}$ (resp. $A^{(2)}_{\mathop{\rm st}\nolimits}$) is
the self-product of $A$ in the category $X_{{{\mathbbl{\Delta}}}}$ (resp.
$(X,M_{X})_{{{\mathbbl{\Delta}}}_{\text{log}}}$). The existence of
$\alpha:A^{(2)}\to A^{(2)}_{\mathop{\rm st}\nolimits}$ can then be explained
by the universal property of the self-product. See §5 for details.
To simplify our notation, let $B^{(2)}_{\mathop{\rm st}\nolimits}$ (resp.
$B^{(3)}_{\mathop{\rm st}\nolimits}$, $B^{(2)}$, $B^{(3)}$) be the $p$-adic
completion of ${A^{(2)}_{\mathop{\rm st}\nolimits}}[\frac{1}{E}]$ (resp.
$A^{(3)}_{\mathop{\rm st}\nolimits}[\frac{1}{E}]$, $A^{(2)}[\frac{1}{E}]$,
$A^{(3)}[\frac{1}{E}]$).
###### Lemma 2.3.2.
1. (1)
$A^{(i)}_{\mathop{\rm st}\nolimits}\subset B^{(i)}_{\mathop{\rm
st}\nolimits}\subset B^{(i)}_{\mathop{\rm st}\nolimits}[\frac{1}{p}]$ and
$A^{(i)}\subset B^{(i)}\subset B^{(i)}[\frac{1}{p}]$ for $i=2,3$.
2. (2)
$B^{(2)}_{\mathop{\rm st}\nolimits}\cap{A^{(2)}_{\mathop{\rm
st}\nolimits}}[\frac{1}{p}]=A^{(2)}_{\mathop{\rm st}\nolimits}$ and
$B^{(2)}\cap{A^{(2)}}[\frac{1}{p}]=A^{(2)}$.
###### Proof.
Here we only prove the case of $A^{(2)}$; the proofs for
$A^{(2)}_{\mathop{\rm st}\nolimits}$, $A^{(3)}$ and $A^{(3)}_{\mathop{\rm
st}\nolimits}$ are almost the same.
By Proposition 2.2.7, $A^{(2)}$ is a subring of $A^{(2)}_{\max}\subset
K_{0}[\\![x,z]\\!]$. So $A^{(2)}$ and hence $A^{(2)}[\frac{1}{E}]$ is an
integral domain. Then $B^{(2)}$ has no $p$-torsion: assume $x\in B^{(2)}$
satisfies $px=0$, and choose $x_{n}\in A^{(2)}[\frac{1}{E}]$ with $x\equiv
x_{n}\mod p^{n}$. Then $px_{n}\equiv 0\mod p^{n}A^{(2)}[\frac{1}{E}]$. Since
$A^{(2)}[\frac{1}{E}]$ is a domain, $x_{n}\equiv 0\mod p^{n-1}$, and hence
$x=0$. As $B^{(2)}$ has no $p$-torsion, we see that $B^{(2)}\subset
B^{(2)}[\frac{1}{p}]$. To see that the natural map $A^{(2)}\to B^{(2)}$ is
injective, it suffices to show that $A^{(2)}/pA^{(2)}$ injects into
$(A^{(2)}/pA^{(2)})[\frac{1}{u}]=B^{(2)}/pB^{(2)}$. Clearly, this is equivalent
to $A^{(2)}/pA^{(2)}$ having no $u$-torsion. Note that $A^{(2)}$ is obtained
by taking the prismatic envelope of $A^{\widehat{\otimes}2}=W(k)[\\![x,z]\\!]$ for
the ideal $I=(z)$. As mentioned before, we can apply [BS22, Prop. 3.13] to our
situation. So $A^{(2)}$ is flat over $A$ and hence $A^{(2)}/pA^{(2)}$ has no
$u$-torsion, as desired.
Now we can regard $B^{(2)}$ and $A^{(2)}[\frac{1}{p}]$ as subrings of
$B^{(2)}[\frac{1}{p}]$. In particular, $B^{(2)}\cap A^{(2)}[\frac{1}{p}]$
makes sense and contains $A^{(2)}$. Suppose there exists $x\in B^{(2)}\cap
A^{(2)}[\frac{1}{p}]$ with $x\not\in A^{(2)}$; replacing $x$ by a suitable
$p$-power multiple, we may assume $px\in A^{(2)}$. Then the image of $y=px$
inside $A^{(2)}/pA^{(2)}$ is nonzero, but the image of $y$ in
$B^{(2)}/pB^{(2)}$ is zero. This contradicts the fact that $A^{(2)}/pA^{(2)}$
injects into $B^{(2)}/pB^{(2)}$. So no such $x$ exists, and we have
$B^{(2)}\cap{A^{(2)}}[\frac{1}{p}]=A^{(2)}$ as required. ∎
By [BS22, Lem. 3.9], any prism $(B,J)$ admits its perfection
$(B,J)_{\mathop{\rm perf}\nolimits}=(B_{\mathop{\rm
perf}\nolimits},JB_{\mathop{\rm perf}\nolimits})$.
###### Remark 2.3.3.
In [BS22], the perfection $(B,J)_{\mathop{\rm perf}\nolimits}$ is denoted by
$(B_{\infty},JB_{\infty})$, and $B_{\mathop{\rm perf}\nolimits}$ is defined as
the direct perfection of $B$ in the category of $\delta$-rings. In this paper,
we define $B_{\mathop{\rm perf}\nolimits}$ as the $(p,J)$-adic completion of
$\mathrm{colim}_{\varphi}B$, which also coincides with the derived
$(p,J)$-completion of $\mathrm{colim}_{\varphi}B$ (cf. Lemma 3.9 of
$loc.cit.$).
###### Lemma 2.3.4.
Both $(A^{(2)})_{\mathop{\rm perf}\nolimits}$ and $(A^{(2)}_{\mathop{\rm
st}\nolimits})_{\mathop{\rm perf}\nolimits}$ are $A$-flat.
###### Proof.
We have seen that $A^{(2)}$ is $A$-flat via $i_{1}$, and it is easy to see
that $\varphi$ on $A$ is flat. Since $i_{1}$ is a $\delta$-map, we have
$\varphi^{n}\circ i_{1}=i_{1}\circ\varphi^{n}$, which is flat. So
$\mathrm{colim}_{\varphi}A^{(2)}$ is flat over $A$. In particular,
$(A^{(2)})_{\mathop{\rm perf}\nolimits}$ is $(p,E)$-completely flat over $A$.
Now since $A$ is Noetherian, by [Sta20, Tag 0912], $(A^{(2)})_{\mathop{\rm
perf}\nolimits}$ is $A$-flat. The proof for $(A^{(2)}_{\mathop{\rm
st}\nolimits})_{\mathop{\rm perf}\nolimits}$ is the same. ∎
### 2.4. Embedding $A^{(2)}$ and $A^{(2)}_{\mathop{\rm st}\nolimits}$ to
$A_{\mathrm{inf}}$
Let $A_{\mathrm{inf}}=W(\mathcal{O}_{\mathbb C_{p}}^{\flat})$. There is a
surjection $\theta:A_{\mathrm{inf}}\to\mathcal{O}_{\mathbb C_{p}}$ with
$\mathop{\rm Ker}\nolimits\theta=(E)$. Let $B_{\mathrm{dR}}^{+}$ be the
$\mathop{\rm Ker}\nolimits\theta$-adic completion of
$A_{\mathrm{inf}}[\frac{1}{p}]$.
###### Definition 2.4.1.
Let $\mathbb A_{\max}$ be the $p$-adic completion of the
$A_{\mathrm{inf}}$-subalgebra of $B_{\mathrm{dR}}^{+}$ generated by $E/p$.
It is easily seen that $\varphi(E/p):=\varphi(E)/p\in
A_{\mathrm{cris}}\subset\mathbb A_{\max}$ is well defined and extends the
Frobenius structure on $A_{\mathrm{inf}}$ to an endomorphism of ${\mathbb
A}_{\mathrm{max}}$.
Let $\\{\varpi_{n}\\}_{n\geq 0}$ be a compatible system of $p^{n}$-th roots of
$\varpi_{0}=\varpi$ and $\\{\zeta_{n}\\}_{n\geq 0}$ be a compatible system of
$p^{n}$-th roots of 1. Write $\varpi^{\flat}:=\\{\varpi_{n}\\}_{n\geq
0},\zeta^{\flat}:=\\{\zeta_{n}\\}_{n\geq 0}\in\mathcal{O}_{\mathbb
C_{p}}^{\flat}$ and let $u=[\varpi^{\flat}]$, $\epsilon=[\zeta^{\flat}]$,
$v=\epsilon u$ and $\mu=\epsilon-1$ be elements inside $A_{\mathrm{inf}}$. We
can regard $W(k)[\\![x,y]\\!]$ as a subring of $A_{\mathrm{inf}}$ via
$x\mapsto u$ and $y\mapsto v$. Consider $z^{\prime}=\frac{u-v}{E}\in
$A_{\mathrm{inf}}[\frac{1}{E}]$. Since $u-v=-u(\epsilon-1)$ clearly lies inside
$\mathop{\rm Ker}\nolimits(\theta)$ and $\mathop{\rm
Ker}\nolimits(\theta)=EA_{\mathrm{inf}}$, we conclude that $z^{\prime}\in
A_{\mathrm{inf}}$. Hence we have a natural map (of $\delta$-rings)
$\iota_{A}:\widetilde{A}^{(2)}\to A_{\mathrm{inf}}$ via $z\mapsto z^{\prime}$,
which naturally extends to $\iota_{A}:A^{(2)}\to A_{\mathrm{inf}}$ because the
$(p,E)$-adic topology of $A^{(2)}$ is compatible with the weak topology of
$A_{\mathrm{inf}}$. Similarly, we have a map of $\delta$-rings
$\iota_{\mathop{\rm st}\nolimits}:A^{(2)}_{\mathop{\rm st}\nolimits}\to
A_{\inf}$ via $x\mapsto u$ and $\mathfrak{y}\mapsto\epsilon-1$ and
$w\mapsto\frac{\epsilon-1}{E}$.
###### Remark 2.4.2.
Once we know that $A^{(2)}$ is the self-product of $A$ inside
$X_{{\mathbbl{\Delta}}}$ with $X=\mathop{\rm Spf}\nolimits(\mathcal{O}_{K})$,
as explained in §4.1, the map $\iota_{A}$ can be constructed as follows.
First we fix an embedding $A\to A_{\mathrm{inf}}$ by sending $x\mapsto
u=[\varpi^{\flat}]$. Then $A\to A_{\mathrm{inf}}$, $x\mapsto v=\epsilon u$, is
another map of prisms. By the universal property of $A^{(2)}$, these two maps
extend to a map $\iota_{A}:A^{(2)}\to A_{\mathrm{inf}}$. Clearly, the map
$\iota_{A}:A^{(2)}\to A_{\mathrm{inf}}$ depends on the choice of
${\varpi}^{\flat}=(\varpi_{n})_{n\geq 0}$ and
${\zeta}^{\flat}=(\zeta_{n})_{n\geq 0}$. Also, $\iota_{A}$ is a special case of
$\iota^{(2)}_{\gamma}$ defined by (14) in §4.3: if
$\gamma([\varpi^{\flat}])=[\zeta^{\flat}][\varpi^{\flat}]$ then
$\iota_{A}=\iota^{(2)}_{\gamma}$. A similar comment applies to
$\iota_{\mathop{\rm st}\nolimits}$.
###### Proposition 2.4.3.
There is a unique embedding $A_{\max}^{(2)}\hookrightarrow\mathbb A_{\max}$
such that the composite $W(k)[\\![x,y]\\!]\to
A_{\max}^{(2)}\hookrightarrow\mathbb A_{\max}$ agrees with the composite
$W(k)[\\![x,y]\\!]\to A_{\mathrm{inf}}\to\mathbb A_{\max}$ inside
$B_{\mathrm{dR}}^{+}$. Furthermore, $\mathop{\rm
Fil}\nolimits^{i}B_{\mathrm{dR}}^{+}\cap A^{(2)}_{\max}=\mathop{\rm
Fil}\nolimits^{i}A^{(2)}_{\max}$. The same result holds when $A^{(2)}_{\max}$
is replaced by $A^{(2)}_{\mathop{\rm st}\nolimits,\max}$.
###### Proof.
In the following, we only treat the case of $A^{(2)}_{\mathop{\rm
st}\nolimits,\max}$ while the proof of $A^{(2)}_{\max}$ is the same by noting
that $z=uw$ in $A_{\inf}$.
The uniqueness is clear. To show the existence of the embedding, it is enough
to show $\gamma_{i}(w)\in{\mathbb A}_{\mathrm{max}}$ for all $i\geq 1$.
It is a well-known fact that ${\mathbb A}_{\mathrm{max}}$ is isomorphic to the
$p$-adic completion of $A_{\mathrm{inf}}[\frac{u^{e}}{p}]$, and ${\mathbb
A}_{\mathrm{max}}[1/p]$ is a Banach $\mathbb Q_{p}$-algebra, which is the
completion of $A_{\mathrm{inf}}[1/p]$ under the norm
$\lvert\cdot\rvert_{p^{-1}}$ such that
$\lvert x\rvert_{p^{-1}}=\sup_{n}\\{p^{-n}\lvert
x_{n}\rvert_{\mathcal{O}_{C}^{\flat}}\\}$
where $x=\sum_{n\gg 0}[x_{n}]p^{n}\in A_{\mathrm{inf}}[1/p]$. Moreover, for
$x\in{\mathbb A}_{\mathrm{max}}[1/p]$, we have $x\in{\mathbb A}_{\mathrm{max}}$
if and only if $\lvert x\rvert_{p^{-1}}\leq 1$, and $\lvert\cdot\rvert_{p^{-1}}$
is multiplicative. So it is enough to show that for $x=\gamma_{i}(w)$,
considered as an element of ${\mathbb A}_{\mathrm{max}}[1/p]$, we have
$\lvert x^{p-1}\rvert_{p^{-1}}\leq 1$. By [BMS18,
Proposition 3.17], $\xi:=\mu/\varphi^{-1}(\mu)$ is a generator of $\mathop{\rm
Ker}\nolimits\theta$, where $\mu=\epsilon-1$. In particular,
$w=\mu/E=a\varphi^{-1}(\mu)\in A_{\mathrm{inf}}$ with $a\in
$A_{\mathrm{inf}}^{\times}$. And we can check that
$\overline{w}^{p-1}=c\overline{u}^{e}$ inside
$\mathcal{O}_{C}^{\flat}=A_{\mathrm{inf}}/pA_{\mathrm{inf}}$, with $c$ a unit.
So $w^{p-1}=au^{e}+bp$ with $a,b\in A_{\mathrm{inf}}$, and
$x^{p-1}=\frac{(au^{e}+bp)^{i}}{(i!)^{p-1}}.$
Using the fact $v_{p}(i!)<\frac{i}{p-1}$, one can show that each term in the
binomial expansion on the right-hand side of the equation has
$\lvert\cdot\rvert_{p^{-1}}$-norm at most $1$; in particular,
$\lvert x^{p-1}\rvert_{p^{-1}}\leq 1$.
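In more detail, writing out the $j$-th term of the expansion of $\frac{(au^{e}+bp)^{i}}{(i!)^{p-1}}$ and using the standard normalizations $\lvert u^{e}\rvert_{p^{-1}}=\lvert p\rvert_{p^{-1}}=p^{-1}$ and $\lvert a\rvert_{p^{-1}},\lvert b\rvert_{p^{-1}}\leq 1$, together with $(p-1)v_{p}(i!)<i$:

$$\Bigl\lvert\binom{i}{j}\frac{(au^{e})^{j}(bp)^{i-j}}{(i!)^{p-1}}\Bigr\rvert_{p^{-1}}\leq p^{-j}\cdot p^{-(i-j)}\cdot p^{(p-1)v_{p}(i!)}=p^{-i+(p-1)v_{p}(i!)}<1.$$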
To prove that $\mathop{\rm Fil}\nolimits^{i}B_{\mathrm{dR}}^{+}\cap
A^{(2)}_{\mathop{\rm st}\nolimits,\max}=\mathop{\rm
Fil}\nolimits^{i}A^{(2)}_{\mathop{\rm st}\nolimits,\max}$, it suffices to show
that $EB_{\mathrm{dR}}^{+}\cap A^{(2)}_{\mathop{\rm
st}\nolimits,\max}[\frac{1}{p}]=EA^{(2)}_{\mathop{\rm
st}\nolimits,\max}[\frac{1}{p}]$. By Proposition 2.2.7, this reduces to
proving that the map
$\theta:D_{w}=A^{(2)}_{\mathop{\rm st}\nolimits,\max}[\frac{1}{p}]/E\to
B_{\mathrm{dR}}^{+}/E=\mathbb C_{p}$
is injective. Let $f(w)=\sum_{i\geq 0}a_{i}\gamma_{i}(w)\in\mathop{\rm
Ker}\nolimits\theta$ with $a_{i}\in\mathcal{O}_{K}$ tending to $0$
$p$-adically. Then $f(w_{0})=0$ with
$w_{0}:=\theta(w)=\theta(\frac{\epsilon-1}{E})\in\mathbb C_{p}$. Note that
$v_{p}(w_{0})\geq\frac{1}{p-1}$: it is well known that
$\frac{\epsilon-1}{\varphi^{-1}(\epsilon)-1}$ is another generator of the
kernel of $\theta:A_{\inf}\to\mathcal{O}_{\mathbb C_{p}}$, and then
$v_{p}(w_{0})=v_{p}(\theta(\varphi^{-1}(\epsilon)-1))=\frac{1}{p-1}$. Since we
are aiming to show that $f=0$, without loss of generality we can assume that
$K$ contains $p_{1}=\sqrt[p-1]{p}$. Noting that $v_{p}(i!)\leq\frac{i}{p-1}$, we
conclude that $\frac{w_{0}}{p_{1}}$ is a root of $f(p_{1}w)$, which lies in
$\mathcal{O}_{K}\langle w\rangle$. By the Weierstrass preparation theorem,
$w_{0}$ is algebraic over $K$ unless $f=0$. By the lemma below,
$w_{0}:=\theta(w)\in\mathbb C_{p}$ is transcendental over $K$, and hence $f=0$.
∎
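The divided-power estimates above repeatedly use the bound $v_{p}(i!)\leq\frac{i}{p-1}$, which follows from Legendre's formula $v_{p}(i!)=\frac{i-s_{p}(i)}{p-1}$, where $s_{p}(i)$ denotes the sum of the base-$p$ digits of $i$. A quick numerical sanity check of both identities (illustrative only; the prime $p=5$ and the range are arbitrary choices):

```python
from math import factorial

def vp(n: int, p: int) -> int:
    # p-adic valuation of a nonzero integer n
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def digit_sum(n: int, p: int) -> int:
    # sum of the base-p digits of n
    s = 0
    while n:
        s += n % p
        n //= p
    return s

p = 5
for i in range(1, 300):
    v = vp(factorial(i), p)
    # Legendre's formula: (p - 1) * v_p(i!) = i - s_p(i)
    assert v * (p - 1) == i - digit_sum(i, p)
    # hence v_p(i!) <= i/(p-1), so p^{i/(p-1)} / i! is p-integral
    assert v * (p - 1) <= i
print("checked Legendre's formula for i < 300, p =", p)
```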
###### Lemma 2.4.4.
$w_{0}=\theta(\frac{\epsilon-1}{E})$ is transcendental over $K$.
###### Proof.
Suppose $w_{0}$ is contained in an algebraic extension $L$ of $K$, and define
$L_{0,\infty}=\bigcup_{n}L(\varpi_{n})$. For $g\in G_{L_{0,\infty}}$, we have
$\theta(g(\frac{\epsilon-1}{E}))=g(w_{0})=w_{0}=\theta(\frac{\epsilon-1}{E}).$
Since $G_{L_{0,\infty}}$ fixes $E$,
$\theta(\frac{g(\epsilon-1)-(\epsilon-1)}{E})=0$. This implies
$g(\epsilon-1)-(\epsilon-1)\in\mathop{\rm
Fil}\nolimits^{2}B_{\mathrm{dR}}^{+}$. Recall that for $t=\log\epsilon$,
$t-(\epsilon-1)\in\mathop{\rm Fil}\nolimits^{2}B_{\mathrm{dR}}^{+}$, so we
have $g(t)-t\in\mathop{\rm Fil}\nolimits^{2}B_{\mathrm{dR}}^{+}$. But this
cannot be true: since $L_{0,\infty}$ can only contain finitely many $p^{n}$-th
roots of $1$, there exists $g\in G_{L_{0,\infty}}$ with $g(t)=c(g)t$,
$c(g)\in\mathbb Q_{p}$ and $c(g)\neq 1$. This implies
$g(t)-t=(c(g)-1)t\in\mathop{\rm
Fil}\nolimits^{1}B_{\mathrm{dR}}^{+}\setminus\mathop{\rm
Fil}\nolimits^{2}B_{\mathrm{dR}}^{+}$. ∎
###### Corollary 2.4.5.
The natural maps $\iota_{A}:A^{(2)}\to A_{\inf}$ and $\iota_{\mathop{\rm
st}\nolimits}:A^{(2)}_{\mathop{\rm st}\nolimits}\to A_{\inf}$ are injective.
To summarize, we have the following commutative diagram of rings inside
$B_{\mathrm{dR}}^{+}$:
$\begin{CD}A^{(2)}@>>>A^{(2)}_{\mathop{\rm st}\nolimits}@>>>A_{\mathrm{inf}}\\\\@VVV@VVV@VVV\\\\A^{(2)}_{\max}@>>>A^{(2)}_{\mathop{\rm st}\nolimits,\max}@>>>{\mathbb A}_{\mathrm{max}}.\end{CD}$
## 3\. Application to semi-stable Galois representations
In this section, we assume that $R=\mathcal{O}_{K}$. We explain how to use the
period rings $A^{(2)}$ and $A^{(2)}_{\mathop{\rm st}\nolimits}$ to understand
lattices in crystalline and semi-stable representations. Roughly speaking, we
are going to use $A^{(2)}$ and $A^{(2)}_{\mathop{\rm st}\nolimits}$ to replace
$\widehat{\mathcal{R}}$ in the theory of $(\varphi,\hat{G})$-modules developed
in [Liu10].
Let $K_{\infty}=\bigcup_{n=1}^{\infty}K(\varpi_{n})$, $G_{\infty}:={\rm
Gal}(\overline{K}/K_{\infty})$ and $G_{K}:={\rm Gal}(\overline{K}/K)$. Recall
that $A=\mathfrak{S}=W(k)[\\![u]\\!]$. Let $S$ be the $p$-adic completion of
$W(k)[\\![u]\\!][\frac{E^{i}}{i!},i\geq 1]$, which is the ($p$-adically
completed) PD envelope of $W(k)[u]$ for the ideal $(E)$. It is clear that
$S\subset{O}_{\mathrm{max}}$. We define $\varphi$ and $\mathop{\rm
Fil}\nolimits^{i}$ on $S$ as induced from those on ${O}_{\mathrm{max}}$; in
particular, $\mathop{\rm Fil}\nolimits^{i}S=S\cap
E^{i}{O}_{\mathrm{max}}[\frac{1}{p}]$. Note that the embedding of $A$ into
$A_{\mathrm{inf}}$ via $u\mapsto[\varpi^{\flat}]$ is not stable under the
$G_{K}$-action but only under the $G_{\infty}$-action. For any $g\in G_{K}$,
define ${\underline{\varepsilon}}(g)=\frac{g(u)}{u}$. It is clear that
${\underline{\varepsilon}}(g)=\epsilon^{a(g)}$ with $a(g)\in\mathbb Z_{p}$. We
define _two_ differential operators $N_{S}$ and $\nabla_{S}$ on $S$ by
$N_{S}(f)=\frac{df}{du}u$ and $\nabla_{S}(f)=\frac{df}{du}$. We need
$\nabla_{S}$ to treat crystalline representations.
### 3.1. Kisin module attached to semi-stable representation
Fix $h\geq 0$. A _Kisin module of height $h$_ is a finite free $A$-module
$\mathfrak{M}$ with a semi-linear endomorphism
$\varphi_{\mathfrak{M}}:\mathfrak{M}\to\mathfrak{M}$ such that $\mathop{\rm
coker}\nolimits(1\otimes\varphi_{\mathfrak{M}})$ is killed by $E^{h}$, where
$1\otimes\varphi_{\mathfrak{M}}:\mathfrak{M}^{*}:=A\otimes_{\varphi,A}\mathfrak{M}\to\mathfrak{M}$
is the linearization of $\varphi_{\mathfrak{M}}$. Note that here we use the
classical setting of Kisin modules from [Liu10], which is sufficient for this paper.
The following summarizes the results on Kisin modules attached to
$G_{K}$-stable $\mathbb Z_{p}$-lattices in semi-stable representations. The
details and proofs of these facts can be found in [Liu10].
Let $T$ be a $G_{K}$-stable $\mathbb Z_{p}$-lattice inside a semi-stable
representation $V$ of $G_{K}$ with Hodge-Tate weights in $\\{0,\dots,h\\}$.
Let $D:=D^{*}_{\mathop{\rm st}\nolimits}(V)=\mathop{\rm Hom}\nolimits_{\mathbb
Q_{p},G_{K}}(V,B_{\mathop{\rm st}\nolimits})$ be the filtered
$(\varphi,N)$-module attached to $V$ and $D_{K}:=K\otimes_{K_{0}}D$. Then
there exists a unique Kisin module $\mathfrak{M}:=\mathfrak{M}(T)$ of height
$h$ attached to $T$ so that
1. (1)
$\mathop{\rm Hom}\nolimits_{\varphi,A}(\mathfrak{M},A_{\mathrm{inf}})\simeq
T|_{G_{\infty}}$.
2. (2)
There exists an $S$-linear isomorphism
$\iota_{S}:S[\frac{1}{p}]\otimes_{\varphi,A}\mathfrak{M}\simeq
D\otimes_{W(k)}S$
so that $\iota_{S}$ is compatible with $\varphi$ on both sides.
3. (3)
$\iota_{S}$ also induces an isomorphism $\mathop{\rm
Fil}\nolimits^{h}(S[\frac{1}{p}]\otimes_{\varphi,A}\mathfrak{M})\simeq\mathop{\rm
Fil}\nolimits^{h}(D\otimes_{W(k)}S)$. The filtrations on both sides are
defined as follows:
$\mathop{\rm
Fil}\nolimits^{h}(S[\frac{1}{p}]\otimes_{\varphi,A}\mathfrak{M}):=\left\\{x\in
S[\frac{1}{p}]\otimes_{\varphi,A}\mathfrak{M}|(1\otimes\varphi_{\mathfrak{M}})(x)\in\mathop{\rm
Fil}\nolimits^{h}S[\frac{1}{p}]\otimes_{A}\mathfrak{M}\right\\}.$
To define filtration on ${\mathcal{D}}:=S\otimes_{W(k)}D$, we first extend the
monodromy operator $N_{\mathcal{D}}$ (resp. $\nabla_{\mathcal{D}}$) on $D$ to
${\mathcal{D}}$ by $N_{{\mathcal{D}}}=1\otimes N_{D}+N_{S}\otimes 1$ (resp.
$\nabla_{\mathcal{D}}=1\otimes N_{D}+\nabla_{S}\otimes 1$). Then we define
$\mathop{\rm Fil}\nolimits^{i}{\mathcal{D}}$ by induction: set $\mathop{\rm
Fil}\nolimits^{0}{\mathcal{D}}={\mathcal{D}}$ and
$\mathop{\rm
Fil}\nolimits^{i}{\mathcal{D}}:=\\{x\in{\mathcal{D}}|N_{{\mathcal{D}}}(x)\in\mathop{\rm
Fil}\nolimits^{i-1}{\mathcal{D}},f_{\varpi}(x)\in\mathop{\rm
Fil}\nolimits^{i}D_{K}\\}$
where $f_{\varpi}:{\mathcal{D}}\to D_{K}$ is induced by $S\to\mathcal{O}_{K}$
via $u\mapsto\varpi.$
###### Remark 3.1.1 (Griffiths transversality).
From the construction of $\mathop{\rm Fil}\nolimits^{i}{\mathcal{D}}$, we see
that $N_{{\mathcal{D}}}(\mathop{\rm
Fil}\nolimits^{i}{\mathcal{D}})\subset\mathop{\rm
Fil}\nolimits^{i-1}{\mathcal{D}}$. This property is called Griffiths
transversality.
We only use $\nabla_{\mathcal{D}}$ when $N_{D}=0$, that is, when $V$ is
crystalline. In this case, it is clear that
$N_{\mathcal{D}}=u\nabla_{{\mathcal{D}}}$. So it is clear that
$\nabla_{\mathcal{D}}(\mathop{\rm
Fil}\nolimits^{i}{\mathcal{D}})\subset\mathop{\rm
Fil}\nolimits^{i-1}{\mathcal{D}}$.
For ease of notation, we will write $N=N_{\mathcal{D}}$ and
$\nabla=\nabla_{\mathcal{D}}$ in the following. Let $T^{\vee}:=\mathop{\rm
Hom}\nolimits_{\mathbb Z_{p}}(T,\mathbb Z_{p})$ and
$V^{\vee}:=T^{\vee}\otimes_{\mathbb Z_{p}}\mathbb Q_{p}$ denote the dual
representations. Then there exists an $A_{\inf}$-linear injection
(5) $\iota_{\mathfrak{M}}:A_{\inf}\otimes_{A}\mathfrak{M}\to
T^{\vee}\otimes_{\mathbb Z_{p}}A_{\mathrm{inf}},$
which is compatible with $G_{\infty}$-actions ($G_{\infty}$ acts on
$\mathfrak{M}$ trivially) and $\varphi$ on both sides. Applying
$S\otimes_{\varphi,A}$ and using
$\iota_{S}:=S\otimes_{\varphi,A}\iota_{\mathfrak{M}}$, we obtain the following
commutative diagram
$\begin{CD}A_{\mathrm{cris}}[\frac{1}{p}]\otimes_{\varphi,A}\mathfrak{M}@>{S\otimes_{\varphi,A}\iota_{\mathfrak{M}}}>>V^{\vee}\otimes_{\mathbb Z_{p}}A_{\mathrm{cris}}\\\\@V{A_{\mathrm{cris}}\otimes_{S}\iota_{S}}V{\wr}V@|\\\\A_{\mathrm{cris}}\otimes_{W(k)}D@>{\alpha}>>V^{\vee}\otimes_{\mathbb Z_{p}}A_{\mathrm{cris}}\end{CD}$
where the second row $\alpha$ comes from the classical comparison
$B_{\mathop{\rm st}\nolimits}\otimes_{K_{0}}D^{*}_{\mathop{\rm
st}\nolimits}(V)\simeq V^{\vee}\otimes_{\mathbb Q_{p}}B_{\mathop{\rm
st}\nolimits},$
and $\alpha$ is $G_{K}$-equivariant. The $G_{K}$-action on the left-hand side
of $\alpha$ is defined by
$\forall x\in D,\forall g\in
G_{K},\quad g(x)=\sum_{i=0}^{\infty}N^{i}(x)\gamma_{i}(\log({\underline{\varepsilon}}(g))).$
Therefore, if we regard $\mathfrak{M}^{*}:=A\otimes_{\varphi,A}\mathfrak{M}$
as an $A$-submodule of $V^{\vee}\otimes_{\mathbb Z_{p}}A_{\mathrm{cris}}$ via
the injection $\iota^{*}:=S\otimes_{\varphi,A}\iota_{\mathfrak{M}}$, one can show that:
(6) $\forall g\in
G_{K},x\in\mathfrak{M}^{*},g(x)=\sum_{i=0}^{\infty}N_{\mathcal{D}}^{i}(x)\gamma_{i}(\log({\underline{\varepsilon}}(g))).$
When $V$ is crystalline, or equivalently, $N_{D}=0$, we have ([LL21, §8.1])
(7) $\forall g\in
G_{K},x\in\mathfrak{M}^{*},g(x)=\sum_{i=0}^{\infty}\nabla_{\mathcal{D}}^{i}(x)\gamma_{i}(u({\underline{\varepsilon}}(g)-1)).$
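Both (6) and (7) have the shape of a Taylor expansion along $u\mapsto g(u)$. As a sanity check, here is a sketch for a power series $f\in S[\frac{1}{p}]$ (the module versions follow the same pattern coefficientwise), using $g(u)={\underline{\varepsilon}}(g)u$, so that $g(u)-u=u({\underline{\varepsilon}}(g)-1)$:

$$f(g(u))=\sum_{i=0}^{\infty}\frac{d^{i}f}{du^{i}}(u)\,\gamma_{i}(g(u)-u)=\sum_{i=0}^{\infty}\nabla_{S}^{i}(f)\,\gamma_{i}\bigl(u({\underline{\varepsilon}}(g)-1)\bigr),$$

while writing ${\underline{\varepsilon}}(g)=e^{s}$ with $s=\log({\underline{\varepsilon}}(g))$ and using $N_{S}=u\frac{d}{du}$ gives

$$f(e^{s}u)=\sum_{i=0}^{\infty}N_{S}^{i}(f)(u)\,\frac{s^{i}}{i!}=\sum_{i=0}^{\infty}N_{S}^{i}(f)\,\gamma_{i}(\log({\underline{\varepsilon}}(g))).$$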
### 3.2. Descent of the $G_{K}$-action
Let us first discuss the $G_{K}$-action on $\mathfrak{M}\subset
T^{\vee}\otimes_{\mathbb Z_{p}}A_{\mathrm{inf}}$ via $\iota_{\mathfrak{M}}$ in
(5) in more detail. We select an $A$-basis $e_{1},\dots,e_{d}$ of
$\mathfrak{M}$ so that
$\varphi(e_{1},\dots,e_{d})=(e_{1},\dots,e_{d})\mathfrak{A}$ with
$\mathfrak{A}\in{\rm M}_{d}(A)$. Then there exists a matrix $B\in{\rm
M}_{d}(A)$ so that $\mathfrak{A}B=B\mathfrak{A}=E^{h}I_{d}$. For any $g\in
G_{K},$ let $X_{g}$ be the matrix so that
$g(e_{1},\dots,e_{d})=(e_{1},\dots,e_{d})X_{g}.$
In this subsection, we are interested in where the entries of $X_{g}$ lie.
###### Theorem 3.2.1.
The entries of $X_{g}$ are in $A^{(2)}_{\mathop{\rm st}\nolimits}$. If $V$ is
crystalline and $g(u)-u=Ez$ then $X_{g}\in{\rm M}_{d}(A^{(2)}).$
First, it is well known that $W(\mathbb
C_{p}^{\flat})\otimes_{A_{\mathrm{inf}}}\iota_{\mathfrak{M}}$ is an
isomorphism, so $X_{g}\in{\rm M}_{d}(W(\mathbb C_{p}^{\flat}))$. Since the
$G_{K}$-action commutes with $\varphi$, we have
$\mathfrak{A}\varphi(X_{g})=X_{g}g(\mathfrak{A}).$
Define
$\mathop{\rm
Fil}\nolimits^{h}\mathfrak{M}^{*}:=\\{x\in\mathfrak{M}^{*}|(1\otimes\varphi_{\mathfrak{M}})(x)\in
E^{h}\mathfrak{M}\\}.$
Since $\mathfrak{M}$ has $E$-height $h$, it is easy to show that $\mathop{\rm
Fil}\nolimits^{h}\mathfrak{M}^{*}$ is a finite free $A$-module and
$\mathop{\rm Fil}\nolimits^{h}{\mathcal{D}}$ is generated by $\mathop{\rm
Fil}\nolimits^{h}\mathfrak{M}^{*}$.
To be more precise, let $\\{e^{*}_{i}:=1\otimes e_{i},i=1,\dots,d\\}$ be an
$A$-basis of $\mathfrak{M}^{*}$. It is easy to check that
$(\alpha_{1},\dots,\alpha_{d})=(e_{1}^{*},\dots,e_{d}^{*})B$ is an $A$-basis
of $\mathop{\rm Fil}\nolimits^{h}\mathfrak{M}^{*}$, and it is also an
$S[\frac{1}{p}]$-basis of $\mathop{\rm Fil}\nolimits^{h}{\mathcal{D}}$. So for
any $g\in G_{K}$, we have
$g(\alpha_{j})=\sum\limits_{i=0}^{\infty}N^{i}(\alpha_{j})\gamma_{i}(\log({\underline{\varepsilon}}(g)))$.
By Griffiths transversality (Remark 3.1.1), $N(\mathop{\rm
Fil}\nolimits^{i}{\mathcal{D}})\subset\mathop{\rm
Fil}\nolimits^{i-1}{\mathcal{D}}$, so we have
(8)
$g(\alpha_{j})=\sum_{i=0}^{h}N^{i}(\alpha_{j})E^{i}\gamma_{i}(\frac{\log({\underline{\varepsilon}}(g))}{E})+\sum_{i>h}^{\infty}N^{i}(\alpha_{j})\gamma_{i}(E)(\frac{\log({\underline{\varepsilon}}(g))}{E})^{i}.$
Since $N^{i}(\alpha_{j})E^{i}\in\mathop{\rm Fil}\nolimits^{h}{\mathcal{D}}$,
$\gamma_{i}(E)\in{O}_{\mathrm{max}}$ and $w^{n}\to 0$ inside
$A^{(2)}_{\mathop{\rm st}\nolimits,\max}$, we see that
$g(\alpha_{1},\dots,\alpha_{d})=(\alpha_{1},\dots,\alpha_{d})Y_{g}$ with
$Y_{g}\in A^{(2)}_{\mathop{\rm st}\nolimits,\max}[\frac{1}{p}].$
In the case that $V$ is crystalline, using (7), we have
$g(\alpha_{j})=\sum_{i=0}^{h}\nabla^{i}(\alpha_{j})E^{i}\gamma_{i}(\frac{u({\underline{\varepsilon}}(g)-1)}{E})+\sum_{i>h}^{\infty}\nabla^{i}(\alpha_{j})\gamma_{i}(E)(\frac{u({\underline{\varepsilon}}(g)-1)}{E})^{i}.$
_If $g$ is chosen so that $g(u)-u=Ez$_, then a similar argument shows that
$g(\alpha_{1},\dots,\alpha_{d})=(\alpha_{1},\dots,\alpha_{d})Y^{\nabla}_{g}$
with $Y^{\nabla}_{g}\in A^{(2)}_{\max}[\frac{1}{p}].$
Now $g(e_{1}^{*},\dots,e_{d}^{*})=(e_{1}^{*},\dots,e_{d}^{*})\varphi(X_{g})$.
Using similar arguments, we see that the entries of $\varphi(X_{g})$ are in
$A^{(2)}_{\mathop{\rm st}\nolimits,\max}[\frac{1}{p}]$ and
$A^{(2)}_{\max}[\frac{1}{p}]$, respectively. Since
$(\alpha_{1},\dots,\alpha_{d})=(e_{1}^{*},\dots,e_{d}^{*})B$, we conclude that
$\varphi(X_{g})g(B)=BY_{g}.$
Using the formula that $\mathfrak{A}\varphi(X_{g})=X_{g}g(\mathfrak{A})$ and
$\mathfrak{A}B=B\mathfrak{A}=E^{h}I_{d}$, we conclude that
$Y_{g}=(\frac{g(E)}{E})^{h}X_{g}$. Write $r=\frac{g(E)}{E}$. We claim that $r$
is a unit in $A^{(2)}_{\mathop{\rm st}\nolimits}$. Indeed,
$\frac{g(E)}{E}=\frac{E(u\epsilon^{a(g)})}{E(u)}=\sum\limits_{i=0}^{e}E^{(i)}(u)\frac{u^{i}(\epsilon^{a(g)}-1)^{i}}{Ei!}$
is again inside $A_{\mathop{\rm st}\nolimits}^{(2)}$, where $E^{(i)}$ denotes
the $i$-th derivative of $E$. It is easy to show that $g(E)$ is also a
distinguished element of $A_{\mathop{\rm st}\nolimits}^{(2)}$, so by [BS22,
Lemma 2.24], $r$ is a unit. Similarly, when $g(u)-u=Ez$, we have
$r=\frac{g(E)}{E}\in(A^{(2)})^{\times}$. Hence
(9) $E^{h}X_{g}=r^{-h}\mathfrak{A}\varphi(X_{g})g(B).$
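Explicitly, the identity $Y_{g}=(\frac{g(E)}{E})^{h}X_{g}$ used here follows by multiplying the relation $\varphi(X_{g})g(B)=BY_{g}$ on the left by $\mathfrak{A}$:

$$E^{h}Y_{g}=\mathfrak{A}BY_{g}=\mathfrak{A}\varphi(X_{g})\,g(B)=X_{g}\,g(\mathfrak{A})g(B)=X_{g}\,g(\mathfrak{A}B)=g(E)^{h}X_{g}.$$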
Now we can apply Proposition 2.2.11 and Proposition 2.3.1 (5) to the above
formula and conclude that for $g\in G_{K}$ (resp. $g\in G_{K}$ such that
$g(u)-u=Ez$ and $V$ is crystalline), the entries of $X_{g}$ lie in
$A^{(2)}_{\mathop{\rm st}\nolimits}[\frac{1}{p}]$ (resp.
$A^{(2)}[\frac{1}{p}]$).
To complete the proof of Theorem 3.2.1, it suffices to show that the entries
of $X_{g}$ are in $A^{(2)}_{\mathop{\rm st}\nolimits}$ (resp. $A^{(2)}$).
Unfortunately, the proof to remove “$\frac{1}{p}$” is much harder and needs
§4.2 and §4.3. For the remainder of this subsection, we only show that the
proof of Theorem 3.2.1 can be reduced to the case $g=\tilde{\tau}$ for a
specially selected $\tilde{\tau}\in G_{K}$.
Let $L=\bigcup\limits_{n=1}^{\infty}K_{\infty}(\zeta_{p^{n}})$,
$K_{1^{\infty}}:=\bigcup_{n=1}^{\infty}K(\zeta_{p^{n}})$,
$\hat{G}:=\mathop{\rm Gal}\nolimits(L/K)$ and $H_{K}:=\mathop{\rm
Gal}\nolimits(L/K_{\infty})$. If $p>2$ then it is known that
$\hat{G}\simeq\mathop{\rm Gal}\nolimits(L/K_{1^{\infty}})\rtimes H_{K}$ with
$\mathop{\rm Gal}\nolimits(L/K_{1^{\infty}})\simeq\mathbb Z_{p}$. Let $\tau$
be a topological generator of $\mathop{\rm Gal}\nolimits(L/K_{1^{\infty}})$.
We have $\tau(u)=\epsilon^{a}u$ with $a\in\mathbb Z_{p}^{\times}$. Without
loss of generality, we may assume that $\tau(u)=\epsilon u$. If $p=2$ then we
can still select $\tau\in\hat{G}$ so that $\tau(u)=\epsilon u$ and
$\tau,H_{K}$ topologically generate $\hat{G}$. Pick $\tilde{\tau}\in G_{K}$ a
lift of $\tau$. Clearly, we have $\tilde{\tau}(u)-u=Ez$.
###### Proposition 3.2.2.
For $g=\tilde{\tau},$ the entries of $X_{g}$ are in $A^{(2)}_{\mathop{\rm
st}\nolimits}$, and if further $V$ is crystalline, then $X_{g}\in{\rm
M}_{d}(A^{(2)}).$
###### Lemma 3.2.3.
Proposition 3.2.2 is equivalent to Theorem 3.2.1.
###### Proof.
Since $\hat{G}$ is topologically generated by $\tau$ and $H_{K}$, the group
$G_{K}$ is topologically generated by $G_{\infty}$ and $\tilde{\tau}$. And we
have $\tau(u)-u=(\epsilon-1)u=Ez$. Now if $X_{\tilde{\tau}}$ has entries in
$A^{(2)}_{\mathop{\rm st}\nolimits}$ and $X_{g}=I_{d}$ for all $g\in
G_{\infty}$, then to show that $X_{g}\in A^{(2)}_{\mathop{\rm st}\nolimits}$
for all $g\in G_{K}$, it suffices to show that $X_{\tilde{\tau}^{p^{n}}}$
converges to $I_{d}$ inside ${\rm M}_{d}(A^{(2)}_{\mathop{\rm st}\nolimits})$.
Since $A^{(2)}_{\mathop{\rm st}\nolimits}$ is closed in $A^{(2)}_{\mathop{\rm
st}\nolimits,\max}$ by Proposition 2.3.1 (5), it suffices to show that
$X_{\tilde{\tau}^{p^{n}}}$ converges inside $A^{(2)}_{\mathop{\rm
st}\nolimits,\max}$. Since $X_{g}=(\frac{E}{g(E)})^{h}Y_{g}$ and $Y_{g}$ is
defined by (8), we easily check that $X_{\tilde{\tau}^{p^{n}}}$ converges to
$I_{d}$ in $A^{(2)}_{\mathop{\rm st}\nolimits,\max}$, using that
${\underline{\varepsilon}}(\tilde{\tau}^{p^{n}})-1$ converges to $0$ in the
$(p,\epsilon-1)$-topology. The proof for the crystalline case is similar,
replacing $A^{(2)}_{\mathop{\rm st}\nolimits}$ with $A^{(2)}$. ∎
So it remains to prove Proposition 3.2.2 to complete the proof of Theorem
3.2.1. We will prove Proposition 3.2.2 in §4.3. Briefly, for
$g=\tilde{\tau}$, we have shown that the linearization of the $g$-action
defines a $\varphi$-equivariant isomorphism:
$f_{g}:\mathfrak{M}\otimes_{A,\iota_{g}}A_{\mathop{\rm
st}\nolimits}^{(2)}[\frac{1}{p}]\simeq\mathfrak{M}\otimes_{A}A_{\mathop{\rm
st}\nolimits}^{(2)}[\frac{1}{p}]$
of $A_{\mathop{\rm st}\nolimits}^{(2)}[\frac{1}{p}]$-modules, and when
$g(u)-u=Ez$ and $V$ is crystalline, $f_{g}$ defines a $\varphi$-equivariant
isomorphism:
$f_{g}:\mathfrak{M}\otimes_{A,\iota_{g}}A^{(2)}[\frac{1}{p}]\simeq\mathfrak{M}\otimes_{A}A^{(2)}[\frac{1}{p}]$
of $A^{(2)}[\frac{1}{p}]$-modules. Here $\iota_{g}:A\to A^{(2)}_{\mathop{\rm
st}\nolimits}$ (resp. $\iota_{g}:A\to A^{(2)}$) is defined by $u\mapsto g(u)$.
On the other hand, by [Wu21, Theorem 5.6], we will see that the $g$-action on
$T^{\vee}\otimes W(\mathbb C_{p}^{\flat})$ also descends to a
$\varphi$-equivariant morphism $c$ of $B^{(2)}$-modules; recall that
$B^{(2)}$ is the $p$-adic completion of $A^{(2)}[\frac{1}{E}]$. Then, by
comparing $c$ and $f_{g}$ using the technique developed in §4.2, we will
deduce Proposition 3.2.2 from Lemma 2.3.2.
###### Remark 3.2.4.
Our original strategy to prove Theorem 3.2.1 was to show $A^{(2)}_{\mathop{\rm
st}\nolimits}[\frac{1}{p}]\cap W({\mathbb C^{\flat}_{p}})=A^{(2)}_{\mathop{\rm
st}\nolimits}$ (resp. $A^{(2)}[\frac{1}{p}]\cap W(\mathcal{O}^{\flat}_{\mathbb
C_{p}})=A^{(2)}$). This is equivalent to $A^{(2)}/p$ and $A^{(2)}_{\mathop{\rm
st}\nolimits}/p$ injecting into $\mathbb C_{p}^{\flat}$. Unfortunately, this
does not work out, though we can show that
$\widetilde{A}^{(2)}/p$ and $\widetilde{A^{(2)}_{\mathop{\rm st}\nolimits}}/p$
inject into $\mathbb C_{p}^{\flat}$.
### 3.3. Relation to $(\varphi,\hat{G})$-modules
In this subsection, we show that the base ring $\widehat{{\mathcal{R}}}$ in
the theory of $(\varphi,\hat{G})$-modules can be replaced by
$A^{(2)}_{\mathop{\rm st}\nolimits}$. In effect, this builds a new theory of
$(\varphi,\hat{G})$-modules with the new base ring $A^{(2)}_{\mathop{\rm
st}\nolimits}$. Since the idea of this new theory is almost the same as that
of the old one, we will use _classical_ to indicate that we are using the
theory over $\widehat{\mathcal{R}}$. For example, a classical
$(\varphi,\hat{G})$-module means a $(\varphi,\hat{G})$-module over
$\widehat{\mathcal{R}}$. Recall
$L=\bigcup\limits_{n=1}^{\infty}K_{\infty}(\zeta_{p^{n}})$,
$\hat{G}:=\mathop{\rm Gal}\nolimits(L/K)$ and $H_{K}:=\mathop{\rm
Gal}\nolimits(L/K_{\infty})$. Let $\mathfrak{m}$ be the maximal ideal of
$\mathcal{O}_{\mathbb C_{p}}^{\flat}$ and set $I_{+}=W(\mathfrak{m})$ so that
$A_{\mathrm{inf}}/I_{+}=W(\bar{k})$. For any subring $B\subset
A_{\mathrm{inf}}$ set $I_{+}B=B\cap I_{+}$. Let $t=\log\epsilon$,
$t^{(i)}=t^{r(i)}\gamma_{\tilde{q}(i)}(\frac{t^{p-1}}{p})$ where
$i=(p-1)\tilde{q}(i)+r(i)$ with $0\leq r(i)<p-1.$ Recall that
$\widehat{\mathcal{R}}:=A_{\mathrm{inf}}\cap{\mathcal{R}}_{K_{0}}$ where
${\mathcal{R}}_{K_{0}}:=\left\\{\sum_{i=0}^{\infty}f_{i}t^{(i)},f_{i}\in
S[\frac{1}{p}],f_{i}\to 0\ p{\text{-adically}}\right\\}.$
###### Lemma 3.3.1.
1. (1)
As a subring of $A_{\mathrm{inf}}$, $A^{(2)}_{\mathop{\rm st}\nolimits}$ is
stable under $G_{K}$-action and the $G_{K}$-action factors through $\hat{G}$.
2. (2)
$A^{(2)}_{\mathop{\rm st}\nolimits}/I_{+}A^{(2)}_{\mathop{\rm
st}\nolimits}=W(k)$.
3. (3)
$I_{+}A^{(2)}\subset uA^{(2)}_{\mathop{\rm st}\nolimits}$.
4. (4)
$\varphi(A^{(2)}_{\mathop{\rm st}\nolimits})\subset\widehat{\mathcal{R}}$.
###### Proof.
(1) It is clear that $W(k)[\\![u,\epsilon-1]\\!]$ is stable under the $G_{K}$-action. Since $A^{(2)}_{\mathop{\rm st}\nolimits}$ is the $(p,E)$-completion of $W(k)[\\![u,\epsilon-1]\\!][\delta^{i}(w),i\geq 0]$, to show that $A^{(2)}_{\mathop{\rm st}\nolimits}$ is $G_{K}$-stable it suffices to show that $g(w)\in A^{(2)}_{\mathop{\rm st}\nolimits}$ for all $g\in G_{K}$ (because $g$ and $\delta$ commute, if $g(x)\in A^{(2)}_{\mathop{\rm st}\nolimits}$ then so is $g(\delta(x))$). Since $Ew=\epsilon-1$, we have $g(E)g(w)=g(\epsilon-1)=\epsilon^{a(g)}-1$, so that $g(w)=\frac{E}{g(E)}\frac{\epsilon^{a(g)}-1}{E}$. By [BS22, Lemma 2.24], $E/g(E)$ is a unit in $A^{(2)}_{\mathop{\rm st}\nolimits}$, hence $g(w)\in A^{(2)}_{\mathop{\rm st}\nolimits}$.
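For the reader's convenience, the membership $\frac{\epsilon^{a(g)}-1}{E}\in A^{(2)}_{\mathop{\rm st}\nolimits}$ implicit in the last step can be checked by a routine expansion, using $a(g)\in\mathbb Z_{p}$ and $Ew=\epsilon-1$:
$\frac{\epsilon^{a(g)}-1}{E}=\frac{\epsilon^{a(g)}-1}{\epsilon-1}\cdot\frac{\epsilon-1}{E}=\left(\sum_{i\geq 1}\binom{a(g)}{i}(\epsilon-1)^{i-1}\right)w,$
where $\binom{a(g)}{i}\in\mathbb Z_{p}$ and the series converges $(\epsilon-1)$-adically in $W(k)[\\![u,\epsilon-1]\\!]$.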
(2) It is clear that both $u$ and $\epsilon-1$ are in $I_{+}$. Hence $w\in I_{+}$, because $Ew=\epsilon-1\in I_{+}$ and $E\equiv p\mod I_{+}$. For any $x=\sum\limits_{i=0}^{\infty}p^{i}[x_{i}]\in A_{\mathrm{inf}}$, we have $x\in I_{+}$ if and only if $x_{i}\in\mathfrak{m}$ for all $i$. It is then easy to check that $\delta(I_{+})\subset I_{+}$, and consequently all $\delta^{i}(w)\in I_{+}$. So $I_{+}A^{(2)}_{\mathop{\rm st}\nolimits}$ is topologically generated by $u$, $y=\epsilon-1$ and $\delta^{i}(w)$, $i\geq 0$, and hence $A^{(2)}_{\mathop{\rm st}\nolimits}/I_{+}A^{(2)}_{\mathop{\rm st}\nolimits}=W(k)$ as required.
(3) $I_{+}A^{(2)}$ is topologically generated by $u$, $v=\epsilon u$ and $\\{\delta^{i}(z)\\}$, $i\geq 0$. Then (3) follows from $z=uw$ and $\delta^{n}(z)=u^{p^{n}}\delta^{n}(w)$.
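The identity $\delta^{n}(z)=u^{p^{n}}\delta^{n}(w)$ used here follows by induction, assuming (as is standard for the Breuil–Kisin-type prism $A$) that $\varphi(u)=u^{p}$, i.e. $\delta(u)=0$, so that likewise $\delta(u^{p^{n}})=0$ for all $n$; by the product rule $\delta(ab)=a^{p}\delta(b)+b^{p}\delta(a)+p\delta(a)\delta(b)$ one gets
$\delta\left(u^{p^{n}}\delta^{n}(w)\right)=(u^{p^{n}})^{p}\delta^{n+1}(w)+\delta^{n}(w)^{p}\delta(u^{p^{n}})+p\,\delta(u^{p^{n}})\delta(\delta^{n}(w))=u^{p^{n+1}}\delta^{n+1}(w).$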
(4) Since $A^{(2)}_{\mathop{\rm st}\nolimits}\subset A^{(2)}_{\mathop{\rm
st}\nolimits,\max}$, it suffices to show that $\varphi(A^{(2)}_{\mathop{\rm
st}\nolimits,\max})\subset{\mathcal{R}}_{K_{0}}$. Since
$\varphi({O}_{\mathrm{max}})\subset A[\\![\frac{E^{p}}{p}]\\!]\subset S$, it
suffices to show that $\varphi(\gamma_{n}(w))\in{\mathcal{R}}_{K_{0}}$. Note
that $\varphi(E)=p\nu$ with $\nu\in A[\\![\frac{E^{p}}{p}]\\!]^{\times}$ and
$\gamma_{i}(\epsilon-1)\in{\mathcal{R}}_{K_{0}}$. We have
$\varphi(w)=\varphi(\frac{(\epsilon-1)}{E})=\nu^{-1}(\epsilon-1)\sum_{i=1}^{p}\left(\binom{p}{i}/p\right)(\epsilon-1)^{i-1},$
which is a polynomial with coefficients in $\mathbb Z$ in the variables $\nu^{-1}$ and the $\gamma_{i}(\epsilon-1)$'s. In particular, $\varphi(\gamma_{n}(w))\in{\mathcal{R}}_{K_{0}}$ by basic properties of divided powers. ∎
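The displayed formula for $\varphi(w)$ in the proof above can be obtained by a direct computation, using $\varphi(\epsilon-1)=\epsilon^{p}-1$ and $\varphi(E)=p\nu$:
$\varphi(w)=\frac{\varphi(\epsilon-1)}{\varphi(E)}=\frac{1}{p\nu}\sum_{i=1}^{p}\binom{p}{i}(\epsilon-1)^{i}=\nu^{-1}(\epsilon-1)\sum_{i=1}^{p}\left(\binom{p}{i}/p\right)(\epsilon-1)^{i-1};$
here $p\mid\binom{p}{i}$ for $1\leq i\leq p-1$, while the last term contributes $(\epsilon-1)^{p}/p=(p-1)!\,\gamma_{p}(\epsilon-1)$, which is where the divided powers enter.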
###### Definition 3.3.2.
A (finite free) $(\varphi,\hat{G})$-module of height $h$ is a (finite free)
Kisin module $(\mathfrak{M},\varphi_{\mathfrak{M}})$ of height $h$ together
with an $A^{(2)}_{\mathop{\rm st}\nolimits}$-semi-linear $\hat{G}$-action on
$\widehat{\mathfrak{M}}:=A^{(2)}_{\mathop{\rm
st}\nolimits}\otimes_{A}\mathfrak{M}$ so that
1. (1)
The actions of $\varphi$ and $\hat{G}$ on $\widehat{\mathfrak{M}}$ commute;
2. (2)
$\mathfrak{M}\subset\widehat{\mathfrak{M}}^{H_{K}}$;
3. (3)
$\hat{G}$ acts trivially on $\widehat{\mathfrak{M}}/I_{+}A^{(2)}_{\mathop{\rm st}\nolimits}\widehat{\mathfrak{M}}$.
The category of $(\varphi,\hat{G})$-modules consists of the above objects, and a morphism of $(\varphi,\hat{G})$-modules is a morphism of Kisin modules that commutes with the $\hat{G}$-actions. Given a $(\varphi,\hat{G})$-module $\widehat{\mathfrak{M}}:=(\mathfrak{M},\varphi,\hat{G})$, we define a $\mathbb Z_{p}$-representation of $G_{K}$,
$\widehat{T}(\widehat{\mathfrak{M}}):=\mathop{\rm
Hom}\nolimits_{A^{(2)}_{\mathop{\rm
st}\nolimits},\varphi}(A^{(2)}_{\mathop{\rm
st}\nolimits}\otimes_{A}\mathfrak{M},A_{\mathrm{inf}}).$
Since $\varphi(A^{(2)}_{\mathop{\rm st}\nolimits})\subset\widehat{\mathcal{R}}$, given a $(\varphi,\hat{G})$-module $\widehat{\mathfrak{M}}:=(\mathfrak{M},\varphi,\hat{G})$ defined as above, $(\mathfrak{M},\varphi)$ together with the $\hat{G}$-action on $\widehat{\mathcal{R}}\otimes_{\varphi,A}\mathfrak{M}$ is a _classical_ $(\varphi,\hat{G})$-module $\widehat{\mathfrak{M}}_{c}$. It is easy to check that
$\widehat{T}(\widehat{\mathfrak{M}})=\widehat{T}(\widehat{\mathfrak{M}}_{c}):=\mathop{\rm
Hom}\nolimits_{\widehat{\mathcal{R}},\varphi}(\widehat{\mathcal{R}}\otimes_{\varphi,A}\mathfrak{M},A_{\mathrm{inf}}).$
###### Theorem 3.3.3.
The functor $\widehat{T}$ induces an anti-equivalence between the category of
$(\varphi,\hat{G})$-modules of height $h$ and the category of $G_{K}$-stable
$\mathbb Z_{p}$-lattices in semi-stable representations with Hodge-Tate
weights in $[0,\dots,h]$.
###### Proof.
Given a $(\varphi,\hat{G})$-module $\widehat{\mathfrak{M}}=(\mathfrak{M},\varphi,\hat{G})$, $\widehat{\mathfrak{M}}_{c}$ is a classical $(\varphi,\hat{G})$-module, so $\widehat{T}(\widehat{\mathfrak{M}})=\widehat{T}(\widehat{\mathfrak{M}}_{c})$ is a lattice inside a semi-stable representation with Hodge-Tate weights in $[0,\dots,h]$. Conversely, given a lattice $T$ in a semi-stable representation with Hodge-Tate weights in $[0,\dots,h]$, following the proof of the existence of a classical $(\varphi,\hat{G})$-module $\widehat{\mathfrak{M}}$ with $\widehat{T}(\widehat{\mathfrak{M}})=T$, it suffices to show that for any $g\in G_{K}$, $g(\mathfrak{M})\subset A^{(2)}_{\mathop{\rm st}\nolimits}\otimes_{A}\mathfrak{M}$; here $\mathfrak{M}$ and $A^{(2)}_{\mathop{\rm st}\nolimits}\otimes_{A}\mathfrak{M}$ are regarded as submodules of $T^{\vee}\otimes_{\mathbb Z_{p}}A_{\mathrm{inf}}$ via $\iota_{\mathfrak{M}}$ in (5), using the $G_{K}$-action on $T^{\vee}\otimes_{\mathbb Z_{p}}A_{\mathrm{inf}}$. This follows from Theorem 3.2.1.
∎
Now let us discuss when $\widehat{T}(\widehat{\mathfrak{M}})$ becomes a crystalline representation. Recall that $\tau$ is a fixed topological generator of $\mathop{\rm Gal}\nolimits(L/K_{p^{\infty}})$, that $\tau(u)=\epsilon u$, and that $\tau$ and $H_{K}$ topologically generate $\hat{G}$.
###### Corollary 3.3.4.
Fix $\tau\in\hat{G}$ as above. Then $\widehat{T}(\widehat{\mathfrak{M}})$ is crystalline if and only if $\tau(\mathfrak{M})\subset A^{(2)}\otimes_{A}\mathfrak{M}$.
###### Proof.
For the selected $\tau$ we have $\tau(u)-u=(\epsilon-1)u=Ewu=Ez$, since $Ew=\epsilon-1$ and $z=uw$. If $T:=\widehat{T}(\widehat{\mathfrak{M}})$ is crystalline, then Theorem 3.2.1 shows that $\tau(\mathfrak{M})\subset A^{(2)}\otimes_{A}\mathfrak{M}$. Conversely, suppose $\tau(\mathfrak{M})\subset A^{(2)}\otimes_{A}\mathfrak{M}$. Then $(\tau-1)\mathfrak{M}\subset uA_{\mathrm{inf}}\otimes_{A}\mathfrak{M}$ by Lemma 3.3.1 (3), and this is enough to show that $\widehat{T}(\widehat{\mathfrak{M}})$ is crystalline. Indeed, $\mathfrak{M}\otimes_{A}(A_{\mathrm{inf}}[\frac{1}{p}]/\mathfrak{p})$ then has a $G_{K}$-fixed basis given by a basis of $\mathfrak{M}$, where the ideal $\mathfrak{p}$ is defined as $\mathfrak{p}:=\cup_{n\in\mathbb N}\varphi^{-n}(u)A_{\mathrm{inf}}[\frac{1}{p}]\subset A_{\mathrm{inf}}[\frac{1}{p}]$. One can then argue by the same method as in [Oze18, Thm. 3.8], or directly use [Du21, Theorem 4.2.1], to conclude that $T$ is crystalline. ∎
###### Remark 3.3.5.
Although $A^{(2)}_{\mathop{\rm st}\nolimits}$ is still complicated (for example, it is not noetherian), it behaves better than the old $\widehat{\mathcal{R}}$: at least it has explicit topological generators. Furthermore, $A^{(2)}_{\mathop{\rm st}\nolimits}$ is $p$-adically complete. This helps to close the gap in [Liu07] mentioned in [Gao21, Appendix B]. Indeed, as indicated by Remark B.0.5 _loc.cit._, if $\widehat{\mathcal{R}}$ could be shown to be $p$-adically complete then the gap in [Liu07] would be closed. So by replacing $\widehat{\mathcal{R}}$ with $A^{(2)}_{\mathop{\rm st}\nolimits}$, we close the gap of [Liu07] ([Gao21] provides another, similar way to close the gap).
## 4\. Crystalline representations and prismatic $F$-crystals
In this section, we reprove the theorem of Bhatt and Scholze on the equivalence between prismatic $F$-crystals and lattices in crystalline representations of $G_{K}$, and complete the proof of Theorem 3.2.1. We start by discussing some general facts on the absolute prismatic site (which allows general base rings).
### 4.1. Prismatic $F$-crystals in finite projective modules
Let $R=R_{0}\otimes_{W}\mathcal{O}_{K}=R_{0}[u]/E$ as in the beginning of §2
and $X=\mathop{\rm Spf}\nolimits(R)$ with the $p$-adic topology.
###### Definition 4.1.1.
The (absolute) prismatic site $X_{{\mathbbl{\Delta}}}$ of $X$ is the opposite of the category of bounded prisms $(A,I)$, which are $(p,I)$-complete, together with a map $R\to A/I$; a morphism of prisms $(A,I)\to(B,J)$ is a covering map if and only if $A\to B$ is $(p,I)$-completely faithfully flat.
Define the following functors:
$\mathcal{O}_{{\mathbbl{\Delta}}}:(A,I)\mapsto A,$
and for all $h\in\mathbb N$, let
$\mathcal{I}_{{\mathbbl{\Delta}}}^{h}:(A,I)\mapsto I^{h}.$
It is known from [BS22] that these are sheaves on $X_{{\mathbbl{\Delta}}}$. We will also use $\mathcal{O}_{{\mathbbl{\Delta}}}[1/\mathcal{I}_{{{\mathbbl{\Delta}}}}]^{\wedge}_{p}$ to denote the functor assigning to $(A,I)$ the $p$-adic completion of $A$ with $I$ inverted.
Now we verify that $A^{(2)}$ (resp. $A^{(3)}$) constructed in §2.1 is indeed the self-product (resp. triple self-product) of $A$ in $X_{{\mathbbl{\Delta}}}$. We mainly discuss the situation of $A^{(2)}$, while the proof for $A^{(3)}$ is almost the same.
Recall that $\breve{A}=\breve{R}_{0}[\\![u]\\!]=W\langle
t_{1},\dots,t_{m}\rangle[\\![u]\\!]$.
First we want to make a remark on the existence of nonempty self-coproducts in the category of prisms over $R$. We thank Peter Scholze for answering our question on MathOverflow, and we repeat his answer here. Let $(A_{i},I_{i})$, $i=1,2$, be prisms over $R$, and let $A_{0}=A_{1}\hat{\otimes}_{\mathbb Z_{p}}A_{2}$, where the completion is taken for the $(p,I_{1},I_{2})$-adic topology. Let $J$ be the kernel of the map:
$A_{0}\to A_{1}/I_{1}\otimes_{R}A_{2}/I_{2}.$
Let $(A,I)$ be the prismatic envelope of $(A_{1},I_{1})\to(A_{0},J)$; one can check that this is the initial object in the category of prisms over $R$ admitting maps from the $(A_{i},I_{i})$ such that the two maps $R\to A_{i}/I_{i}\to A/I$ agree. We also note that, in general, we do not know whether the boundedness of $(A_{1},I_{1})$ and $(A_{2},I_{2})$ implies the boundedness of their coproduct. But we have seen that $A^{(2)}$ and $A^{(3)}$ are indeed bounded, by Corollary 2.2.8.
To start, note that there exists a $W$-linear map $\breve{i}_{2}:\breve{A}\to
A^{\widehat{\otimes}2}$ induced by $u\mapsto y$ and $t_{i}\mapsto s_{i}$. We
claim that $\breve{i}_{2}$ uniquely extends to $i_{2}:A\to
A^{\widehat{\otimes}2}$ which is compatible with $\delta$-structures. Indeed,
consider the following commutative diagram
$\xymatrix{A\ar[r]^-{\overline{i}_{2}}\ar@{-->}[rd]^{i_{2,n}}&A^{\widehat{\otimes}2}/(p,J^{(2)})\\ \breve{A}\ar[u]\ar[r]_-{\breve{i}_{2,n}}&A^{\widehat{\otimes}2}/(p,J^{(2)})^{n}\ar[u]}$
Here $\breve{i}_{2,n}=\breve{i}_{2}\mod(p,J^{(2)})^{n}$ and $\overline{i}_{2}$
is induced by $A\to A/(p,E)\simeq A^{\widehat{\otimes}2}/(p,J^{(2)})$. Since
$\breve{i}_{2}(u)=y=x+(y-x)$ and
$\breve{i}_{2}(t_{i})=s_{i}=t_{i}+(s_{i}-t_{i})$, we see that the above
(outer) diagram commutes. Since $A$ is formally étale over $\breve{A}$ for the $(p,u)$-adic topology, we conclude that there exists a unique map $i_{2,n}:A\to A^{\widehat{\otimes}2}/(p,J^{(2)})^{n}$ making the diagram commute. Since $A^{\widehat{\otimes}2}$ is $(p,J^{(2)})$-complete, there exists a unique $i_{2}:A\to A^{\widehat{\otimes}2}$ extending $\breve{i}_{2}$. To see that $i_{2}$ is compatible with $\delta$-structures, it suffices to show that $\varphi\circ i_{2}=i_{2}\circ\varphi$. But both $\varphi\circ i_{2}$ and $i_{2}\circ\varphi$ extend $\breve{A}\overset{\varphi}{\to}\breve{A}\to A^{\widehat{\otimes}2}$. Again by the formal étaleness of $A$ over $\breve{A}$, we see that $\varphi\circ i_{2}=i_{2}\circ\varphi$. Hence we obtain a map $1\otimes i_{2}:A\otimes_{\mathbb Z_{p}}A\to A^{\widehat{\otimes}2}$. Define
$\theta^{\otimes 2}:A\otimes_{\mathbb Z_{p}}A\to R$ via $\theta^{\otimes
2}(a\otimes b)=\theta(a)\theta(b)$. By the construction of $i_{2}$, we have
the following commutative diagram
$\xymatrix{A\otimes_{\mathbb Z_{p}}A\ar[r]^-{1\otimes i_{2}}\ar[d]_{\theta^{\otimes 2}}&A^{\widehat{\otimes}2}\ar[d]\\ R\ar[r]^-{\sim}&A^{\widehat{\otimes}2}/J^{(2)}}$
Let $\widehat{A^{\otimes 2}}$ be the $(p,\mathop{\rm ker}\nolimits(\theta^{\otimes 2}))$-completion of $A^{\otimes 2}:=A\otimes_{\mathbb Z_{p}}A$. Then $1\otimes i_{2}$ induces a map $\hat{i}_{2}$ from $\widehat{A^{\otimes 2}}$ to $A^{\widehat{\otimes}2}$, because $A^{\widehat{\otimes}2}$ is clearly $(p,J^{(2)})$-complete. To treat $A^{\widehat{\otimes}3}$, we construct $i_{3}:A\to A^{\widehat{\otimes}3}$ by extending $\breve{i}_{3}:\breve{A}\to A^{\widehat{\otimes}3}$, defined by sending $u\mapsto w$ and $t_{j}\mapsto r_{j}$. The same method shows that $i_{3}$ is compatible with the $\delta$-structure, and we obtain a map $1\otimes i_{2}\otimes i_{3}:A^{\otimes 3}\to A^{\widehat{\otimes}3}$ with $A^{\otimes 3}:=A\otimes_{\mathbb Z_{p}}A\otimes_{\mathbb Z_{p}}A$. Similarly, we obtain a natural map $\hat{i}_{3}:\widehat{A^{\otimes 3}}\to A^{\widehat{\otimes}3}$.
###### Lemma 4.1.2.
For $s=2,3$, $\hat{i}_{s}:\widehat{A^{\otimes s}}\to A^{\widehat{\otimes}s}$
are isomorphisms.
###### Proof.
We need to construct an inverse of $\hat{i}_{s}$. We only do so for $\hat{i}_{2}$; the proof for $\hat{i}_{3}$ is the same. Let $g:A^{\widehat{\otimes}2}\to\widehat{A^{\otimes 2}}$ be the $A$-linear map sending $y-x\mapsto 1\otimes u-u\otimes 1$ and $s_{j}-t_{j}\mapsto 1\otimes t_{j}-t_{j}\otimes 1$. Clearly $g$ is well-defined, because $1\otimes u-u\otimes 1$ and $1\otimes t_{j}-t_{j}\otimes 1$ are in $\mathop{\rm Ker}\nolimits(\theta^{\otimes 2})$. Since $i_{2}(u)=y$ and $i_{2}(t_{j})=s_{j}$, $\hat{i}_{2}\circ g$ is the identity on $A^{\widehat{\otimes}2}$. Now it suffices to show that $h:=g\circ\hat{i}_{2}$ is the identity. Write $K=(p,\mathop{\rm Ker}\nolimits(\theta^{\otimes 2}))$. Note that $h$ induces a map $\breve{h}:A\otimes_{\mathbb Z_{p}}\breve{A}\to\widehat{A^{\otimes 2}}$. Now we have the following commutative diagram
$\xymatrix{A\otimes_{\mathbb Z_{p}}A\ar[r]^-{\mod K}\ar@{-->}[rd]^-{\mod K^{n}}_-{h_{n}}&(A\otimes_{\mathbb Z_{p}}A)/K\\ A\otimes_{\mathbb Z_{p}}\breve{A}\ar[u]\ar[r]_-{\breve{h}\mod K^{n}}&(A\otimes_{\mathbb Z_{p}}A)/K^{n}\ar[u]},$
where $h_{n}$ is induced by $h\mod K^{n}$. We see that both $h_{n}$ and the reduction $\mod K^{n}$ can fill in the dashed arrow so that the diagram commutes. Then, by the formal étaleness of $A$ over $\breve{A}$, we conclude that $h_{n}$ is the reduction $\mod K^{n}$, and $h$ is the identity map. ∎
###### Proposition 4.1.3.
$A^{(2)}$ and $A^{(3)}$ are the self-product and the triple self-product, respectively, of $A$ in $X_{{\mathbbl{\Delta}}}$.
###### Proof.
In the following, we only treat the case of $A^{(2)}$ while the proof for
$A^{(3)}$ is the same. We need to prove that for any $B=(B,J)$ in
$X_{{\mathbbl{\Delta}}}$,
$\mathop{\rm
Hom}\nolimits_{X_{{\mathbbl{\Delta}}}^{\mathrm{opp}}}(A^{(2)},B)=\mathop{\rm
Hom}\nolimits_{X_{{\mathbbl{\Delta}}}^{\mathrm{opp}}}(A,B)\times\mathop{\rm
Hom}\nolimits_{X_{{\mathbbl{\Delta}}}^{\mathrm{opp}}}(A,B).$
By the above lemma, we have a natural map $A\otimes_{\mathbb Z_{p}}A\to\widehat{A^{\otimes 2}}\simeq A^{\widehat{\otimes}2}$. Combined with the natural map $A^{\widehat{\otimes}2}\to A^{(2)}$ ($A^{(2)}$ being the prismatic envelope of $A^{\widehat{\otimes}2}$ for the ideal $J^{(2)}$), we obtain a map $\alpha:A\otimes_{\mathbb Z_{p}}A\to A^{(2)}$ compatible with $\delta$-structures. Then $\alpha$ induces a map
$\beta:\mathop{\rm
Hom}\nolimits_{X_{{\mathbbl{\Delta}}}^{\mathrm{opp}}}(A^{(2)},B)\to\mathop{\rm
Hom}\nolimits_{X_{{\mathbbl{\Delta}}}^{\mathrm{opp}}}(A,B)\times\mathop{\rm
Hom}\nolimits_{X_{{\mathbbl{\Delta}}}^{\mathrm{opp}}}(A,B).$
To prove the surjectivity of $\beta$, given $f_{i}\in\mathop{\rm Hom}\nolimits_{X_{{\mathbbl{\Delta}}}}(A,B)$ for $i=1,2$, we obtain a map $f_{1}\otimes f_{2}:A\otimes_{\mathbb Z_{p}}A\to B$. It is clear that $(f_{1}\otimes f_{2})(\mathop{\rm Ker}\nolimits(\theta^{\otimes 2}))\subset J$. Since $B$ is $(p,J)$-derived complete, $f_{1}\otimes f_{2}$ extends to a map $f_{1}\widehat{\otimes}f_{2}:\widehat{A^{\otimes 2}}\simeq A^{\widehat{\otimes}2}\to B$ which is compatible with $\delta$-structures; hence $f_{1}\widehat{\otimes}f_{2}$ is a morphism of $\delta$-algebras. Finally, by the universal property of the prismatic envelope, $f_{1}\widehat{\otimes}f_{2}$ extends to a map of prisms $f_{1}\widehat{\otimes}_{{{\mathbbl{\Delta}}}}f_{2}:A^{(2)}\to B$ as required.
It remains to show that $\beta$ is injective. It suffices to show that the $A$-algebra structure map $i_{1}:A\to A^{(2)}$ and $i^{\prime}_{2}:A\overset{i_{2}}{\to}A^{\widehat{\otimes}2}\to A^{(2)}$ are both injective. Since all rings here are $(p,E)$-complete integral domains, it suffices to check that $i_{1},i_{2}^{\prime}\mod(p,E)$ are injective. By Proposition 2.2.7, we see that $i_{1}\mod(p,E)$ is $R/pR\to R/pR[\\{\gamma_{i}(z_{j})\\}]$, so it is injective. By the construction of $i^{\prime}_{2}$ and $i_{2}$, we see that $i^{\prime}_{2}\mod(p,E)$ is the same as $A/(p,E)\to A^{\widehat{\otimes}2}/(p,J^{(2)})\to A^{(2)}/(p,E)$, which is again $R/pR\to R/pR[\\{\gamma_{i}(z_{j})\\}]$, so it is injective.
∎
###### Remark 4.1.4.
When $R=\mathcal{O}_{K}$ is a complete DVR with perfect residue field $k$, we know a priori that the self-product $A^{(2)}$ of $(A,(E))$ in $X_{{\mathbbl{\Delta}}}$ can be constructed as the prismatic envelope of
$(A,(E))\to(B,I)$, where $B$ is the $(p,E(u),E(v))$-adic completion of
$W(k)[\\![u]\\!]\otimes_{\mathbb Z_{p}}W(k)[\\![v]\\!]$ and $I$ is the kernel
of the map:
$B\to A/(E)\otimes_{R}A/(E)=R.$
On the other hand, $W(k)$ is formally étale over $\mathbb Z_{p}$ for the
$p$-adic topology, so for all $(C,J)\in X_{{\mathbbl{\Delta}}}$, the map
$W(k)\to R\to C/J$ lifts uniquely to a map $W(k)\to C$. In particular, for all
$(C,J)\in X_{{\mathbbl{\Delta}}}$, $C$ has a natural $W(k)$-algebra structure.
So when we construct the self-product, we can also consider $A^{(2)}$ as the
prismatic envelope of $(A,(E))\to(C,J)$, where $C$ is the $(p,E(u),E(v))$-adic
completion of $A\otimes_{W(k)}A$ and $J$ is the kernel of the map:
$C\to A/(E)\otimes_{R}A/(E)=R.$
We have $C\simeq W(k)[\\![u,v]\\!]$, $J=(E(u),u-v)$ and
$A^{(2)}=W(k)[\\![u,v]\\!]\\{\frac{u-v}{E}\\}^{\wedge}_{\delta}$.
###### Definition 4.1.5.
1. (1)
A prismatic crystal over $X_{{\mathbbl{\Delta}}}$ in finite locally free
$\mathcal{O}_{{\mathbbl{\Delta}}}$-modules (resp.
$\mathcal{O}_{{\mathbbl{\Delta}}}[1/I]^{\wedge}_{p}$-modules) is a finite
locally free $\mathcal{O}_{{\mathbbl{\Delta}}}$-module (resp.
$\mathcal{O}_{{\mathbbl{\Delta}}}[1/I]^{\wedge}_{p}$-module)
$\mathfrak{M}_{{{\mathbbl{\Delta}}}}$ such that for all morphisms
$f:(A,I)\to(B,J)$ of prisms, it induces an isomorphism:
$f^{\ast}\mathfrak{M}_{{{\mathbbl{\Delta}}},A}:=\mathfrak{M}_{{{\mathbbl{\Delta}}}}((A,I))\otimes_{A}B\simeq\mathfrak{M}_{{{\mathbbl{\Delta}}},B}:=\mathfrak{M}_{{{\mathbbl{\Delta}}}}((B,J))$
$(resp.\quad
f^{\ast}\mathfrak{M}_{{{\mathbbl{\Delta}}},A}:=\mathfrak{M}_{{{\mathbbl{\Delta}}}}((A,I))\otimes_{A[1/I]^{\wedge}_{p}}B[1/I]^{\wedge}_{p}\simeq\mathfrak{M}_{{{\mathbbl{\Delta}}},B}:=\mathfrak{M}_{{{\mathbbl{\Delta}}}}((B,J))).$
2. (2)
A prismatic $F$-crystal over $X_{{\mathbbl{\Delta}}}$ of height $h$ in finite locally free $\mathcal{O}_{{\mathbbl{\Delta}}}$-modules is a prismatic crystal $\mathfrak{M}_{{{\mathbbl{\Delta}}}}$ in finite locally free $\mathcal{O}_{{\mathbbl{\Delta}}}$-modules together with a $\varphi_{\mathcal{O}_{{\mathbbl{\Delta}}}}$-semilinear endomorphism $\varphi_{\mathfrak{M}_{{{\mathbbl{\Delta}}}}}:\mathfrak{M}_{{{\mathbbl{\Delta}}}}\to\mathfrak{M}_{{{\mathbbl{\Delta}}}}$ of the $\mathcal{O}_{{\mathbbl{\Delta}}}$-module $\mathfrak{M}_{{{\mathbbl{\Delta}}}}$ such that the cokernel of the linearization $\varphi^{\ast}\mathfrak{M}_{{{\mathbbl{\Delta}}}}\to\mathfrak{M}_{{{\mathbbl{\Delta}}}}$ is killed by $\mathcal{I}_{{\mathbbl{\Delta}}}^{h}$.
###### Proposition 4.1.6.
Suppose that the sheaf represented by $(B,I)$ in $\mathop{\rm Shv}\nolimits(X_{{\mathbbl{\Delta}}})$ covers the final object $\ast$ in $\mathop{\rm Shv}\nolimits(X_{{\mathbbl{\Delta}}})$, i.e., for any $(C,J)$ in $X_{{\mathbbl{\Delta}}}$ there is a prism $(P,J)$ that lies over $(B,I)$ and covers $(C,J)$. Assume also that the self-coproduct $B^{(2)}$ and the triple self-coproduct $B^{(3)}$ of $(B,I)$ lie inside $X_{{{\mathbbl{\Delta}}}}$, i.e., they are bounded. Then a prismatic crystal $\mathfrak{M}_{{{\mathbbl{\Delta}}}}$ over $X$ in finite locally free $\mathcal{O}_{{\mathbbl{\Delta}}}$-modules is the same as a finite projective module $\mathfrak{M}$ over $B$ together with a descent datum
$\psi:\mathfrak{M}\otimes_{i_{1},B}B^{(2)}\simeq\mathfrak{M}\otimes_{i_{2},B}B^{(2)}$
satisfying the cocycle condition. Here $i_{j}:B\to B^{(2)}$ $(j=1,2)$ are the two natural maps.
###### Proof.
First let $\mathfrak{M}_{{{\mathbbl{\Delta}}}}$ be a prismatic crystal in finite projective modules. Define $\mathfrak{M}=\mathfrak{M}_{{{\mathbbl{\Delta}}}}((B,I))$; the descent datum comes from the crystal property:
$\psi:\mathfrak{M}\otimes_{i_{1},B}B^{(2)}\simeq\mathfrak{M}_{{{\mathbbl{\Delta}}}}((B^{(2)},I))\simeq\mathfrak{M}\otimes_{i_{2},B}B^{(2)}.$
Conversely, given $(\mathfrak{M},\psi)$, for any $(C,J)$ in $X_{{\mathbbl{\Delta}}}$ we need to construct a finite projective module over $C$. We choose $(P,J)$ as in the assumption, let $\mathfrak{M}_{P}=\mathfrak{M}\otimes_{B}P$, and consider the following diagram:
$\xymatrix{C\ar[r]&P\ar[r]^-{f_{1}}&P^{(2)}_{C}&P\ar[l]_-{f_{2}}&C\ar[l]\\ &B\ar[u]\ar[r]^-{i_{1}}&B^{(2)}\ar[u]_{f}&B\ar[l]_-{i_{2}}\ar[u]&}$
Here $(P^{(2)}_{C},J)$ is the self-coproduct of $(P,J)$ in the category of prisms over $(C,J)$; its existence follows from [BS22, Corollary 3.12], where it is also shown that $P^{(2)}_{C}$ is the derived $(p,J)$-completion of $P\otimes^{\mathbb{L}}_{C}P$ and that $(P^{(2)}_{C},J)$ is bounded. As a bounded prism over $(C,J)$, $(P^{(2)}_{C},J)$ naturally lies inside $X_{{\mathbbl{\Delta}}}$, so $f$ exists by the universal property of $B^{(2)}$. Taking the base change of $\psi$ along $f$, we get
$f^{\ast}\psi:(\mathfrak{M}\otimes_{i_{1},B}B^{(2)})\otimes_{B^{(2)},f}P^{(2)}_{C}\simeq(\mathfrak{M}\otimes_{i_{2},B}B^{(2)})\otimes_{B^{(2)},f}P^{(2)}_{C}$
which is the same as an isomorphism:
$\psi_{C}:\mathfrak{M}_{P}\otimes_{P,f_{1}}P^{(2)}_{C}\simeq\mathfrak{M}_{P}\otimes_{P,f_{2}}P^{(2)}_{C}.$
Similar arguments show that $\psi_{C}$ satisfies the cocycle condition. Then $\mathfrak{M}_{P}$ descends to a finite projective module over $C$ by [AB21, Proposition A.12]. ∎
###### Remark 4.1.7.
We want to note that the structure of finite nonempty coproducts in the category of bounded prisms over a prism $(A,I)$ is much simpler than the structure of finite nonempty products in the category $(R/A)_{{{\mathbbl{\Delta}}}}$ (cf. [Bha18, Lecture V, Corollary 5.2]).
###### Lemma 4.1.8.
The prism $(A,(E))$ defined in §2.1 covers the final object $\ast$ in
$\mathop{\rm Shv}\nolimits(X_{{\mathbbl{\Delta}}})$ in the sense of
Proposition 4.1.6. And $A^{(2)}$ and $A^{(3)}$ are bounded.
###### Proof.
The proof is similar to [AB21, Lemma 5.2.8], we need to show for $R$ defined
as in §2.1, there exists a quasi-syntomic perfectoid cover of $R$. We will
construct this perfectoid cover similar to [Kim14, §7.1].
First recall we have $R=\mathcal{O}_{K}\otimes_{W}R_{0}$, and we fix a
compatible system $\\{\varpi_{n}\\}_{n\geq 0}$ of $p^{n}$-th roots of a
uniformizer $\varpi_{0}$ of $\mathcal{O}_{K}$ inside $E$. Let
$\widehat{K}_{\infty}$ be the $p$-adic completion of $\cup_{n}K(\varpi_{n})$; we know that $\widehat{K}_{\infty}$ is perfectoid. Use $\overline{R}_{0}[\\![u]\\!]$ to denote $A/(p)=R/(\varpi)=R_{0}/(p)[\\![u]\\!]$, and let $\overline{R}_{0}[\\![u]\\!]_{\rm perf}^{\wedge}$ be the $u$-adic completion of the direct perfection of $\overline{R}_{0}[\\![u]\\!]$. It can be checked directly that $(\overline{R}_{0}[\\![u]\\!]_{\rm perf}^{\wedge}[1/u],\overline{R}_{0}[\\![u]\\!]_{\rm perf}^{\wedge})$ is a perfectoid affinoid $\widehat{K}_{\infty}^{\flat}$-algebra; by the tilting equivalence, there is a corresponding perfectoid affinoid $\widehat{K}_{\infty}$-algebra. More explicitly, let $\tilde{R}_{\infty}=W(\overline{R}_{0}[\\![u]\\!]_{\rm perf}^{\wedge})\otimes_{W(\mathcal{O}_{\widehat{K}_{\infty}}^{\flat}),\theta}\mathcal{O}_{\widehat{K}_{\infty}}$. Then $\tilde{R}_{\infty}$ is naturally an $R$-algebra, and we claim that it is a quasi-syntomic cover of $R$.
To show this, by [Kim14, §7.1.2], we have
$\tilde{R}_{\infty}=(R_{0}\widehat{\otimes}_{W}\mathcal{O}_{\widehat{K}_{\infty}})\widehat{\otimes}_{\mathbb
Z_{p}}\mathbb Z_{p}\langle T_{i}^{-p^{\infty}}\rangle$
where $T_{i}\in R_{0}$ is any lift of a $p$-basis of $R_{0}/(p)$. Since $\mathcal{O}_{K}\to\mathcal{O}_{\widehat{K}_{\infty}}$ is a quasi-syntomic cover, by (2) of [BMS19, Lemma 4.16] the map $R\to R_{0}\widehat{\otimes}_{W}\mathcal{O}_{\widehat{K}_{\infty}}$ is also a quasi-syntomic cover. Moreover, $S=\mathbb Z_{p}\langle T_{i}^{-p^{\infty}}\rangle$ is a quasi-syntomic ring; this can be seen by constructing a perfectoid quasi-syntomic cover of it. Hence, by Lemma 4.34 of $loc.cit.$, the complex $\mathbb{L}_{S/\mathbb Z_{p}}\in D(S)$ has $p$-complete Tor amplitude in $[-1,0]$. In particular, $\mathbb Z_{p}\to\mathbb Z_{p}\langle T_{i}^{-p^{\infty}}\rangle$ is also a quasi-syntomic cover, so applying (1) of Lemma 4.16 of $loc.cit.$, $R\to\tilde{R}_{\infty}$ is a quasi-syntomic perfectoid cover.
The boundedness of $A^{(2)}$ and $A^{(3)}$ is from (2) in Corollary 2.2.8. ∎
###### Corollary 4.1.9.
Assume that the base $X=\mathop{\rm Spf}\nolimits(R)$ satisfies the condition in §2, and let $A$, $A^{(2)}$ and $A^{(3)}$ be defined as in §2.1. Then a prismatic $F$-crystal $(\mathfrak{M}_{{{\mathbbl{\Delta}}}},\varphi_{\mathfrak{M}_{{{\mathbbl{\Delta}}}}})$ in finite locally free $\mathcal{O}_{{\mathbbl{\Delta}}}$-modules of height $h$ over $X$ is the same as a Kisin module $(\mathfrak{M},\varphi_{\mathfrak{M}})$ of height $h$ over $A$ with a descent datum
$f:\mathfrak{M}\otimes_{A,i_{1}}A^{(2)}\simeq\mathfrak{M}\otimes_{A,i_{2}}A^{(2)}$
that is compatible with the $\varphi$-structure and satisfies the cocycle condition over $A^{(3)}$.
###### Theorem 4.1.10.
([BS21, Theorem 1.2]) Let $T$ be a $\mathbb Z_{p}$-lattice in a crystalline representation of $G_{K}$ with Hodge-Tate weights in $[0,h]$. Then there is a prismatic $F$-crystal $\mathfrak{M}_{{{\mathbbl{\Delta}}}}(T)$ of height $h$ over $X_{{\mathbbl{\Delta}}}$ such that $\mathfrak{M}_{{{\mathbbl{\Delta}}}}((A,E))$ is the Kisin module associated to $T$. Moreover, the association $T\mapsto\mathfrak{M}_{{{\mathbbl{\Delta}}}}(T)$ induces an equivalence of the above two categories.
We will prove this theorem in §4.3.
###### Remark 4.1.11.
Theorem 4.1.10 was first established by Bhatt-Scholze in [BS21, Theorem 1.2]. The harder direction of [BS21, Theorem 1.2] is to show that to every $\mathbb Z_{p}$-lattice inside a crystalline representation of $G_{K}$ one can attach a prismatic $F$-crystal. Using the theory of $(\varphi,\hat{G})$-modules, we have shown in §3.2 that, given a $\mathbb Z_{p}$-lattice $T$ in a crystalline representation of $G_{K}$, we can attach a Kisin module $\mathfrak{M}$ and a descent datum (strictly speaking, §3.2 only constructs an isomorphism but does not check that it satisfies the cocycle condition; this will be proved in §4.3)
$f_{\tilde{\tau}}:\mathfrak{M}\otimes_{A,i_{1}}A^{(2)}[\frac{1}{p}]\simeq\mathfrak{M}\otimes_{A,i_{2}}A^{(2)}[\frac{1}{p}]$
which comes from the $\tau$-action. We have only shown that this is a $\varphi$-equivariant isomorphism; we still need to show that it gives rise to a descent datum over $A^{(2)}$. As we have mentioned in Remark 3.2.4, we cannot find a direct ring-theoretic proof of this. Our idea is to use a result of [Wu21] or [BS21, Corollary 3.7]: the underlying Galois representation $T$ gives a descent datum over $A^{(2)}[\frac{1}{E}]^{\wedge}_{p}$. To finish our proof, we need to compare this descent datum with $f_{\tilde{\tau}}$ over $A^{(2)}[\frac{1}{E}]^{\wedge}_{p}[\frac{1}{p}]$. This leads us to develop a “prismatic” $(\varphi,\tau)$-module theory in the next subsection, where Lemma 4.2.12 and Lemma 4.2.16 will help us compare descent data over $A^{(2)}[\frac{1}{E}]^{\wedge}_{p}$ and $A^{(2)}[\frac{1}{E}]^{\wedge}_{p}[\frac{1}{p}]$ via an evaluation map to $W(\mathcal{O}_{\hat{L}}^{\flat})$.
### 4.2. $(\varphi,\tau)$-modules and prismatic $F$-crystals
In this subsection, we make some preparations for the proofs of Proposition
3.2.2 and Theorem 4.1.10. We restrict to the case where $R=\mathcal{O}_{K}$
is a complete DVR with perfect residue field.
###### Definition 4.2.1.
An étale $\varphi$-module over $A[1/E]^{\wedge}_{p}$ is a pair
$(\mathcal{M},\varphi_{\mathcal{M}})$ such that $\mathcal{M}$ is a finite free
module over $A[1/E]^{\wedge}_{p}$, and $\varphi_{\mathcal{M}}$ is an
isomorphism
$\varphi_{\mathcal{M}}:\varphi^{\ast}\mathcal{M}:=A[1/E]^{\wedge}_{p}\otimes_{\varphi,A[1/E]^{\wedge}_{p}}\mathcal{M}\simeq\mathcal{M}$
of $A[1/E]^{\wedge}_{p}$-modules. We define an étale $\varphi$-module over
$A[1/E]^{\wedge}_{p}[1/p]$ to be a $\varphi$-module over
$A[1/E]^{\wedge}_{p}[1/p]$ that is obtained from an étale
$\varphi$-module over $A[1/E]^{\wedge}_{p}$ by base change.
An étale $\varphi$-module over $A[1/E]^{\wedge}_{p}$ (resp.
$A[1/E]^{\wedge}_{p}[1/p]$) with descent data is a triple
$(\mathcal{M},\varphi_{\mathcal{M}},c)$, such that
$(\mathcal{M},\varphi_{\mathcal{M}})$ is an étale $\varphi$-module over
$A[1/E]^{\wedge}_{p}$ (resp. $A[1/E]^{\wedge}_{p}[1/p]$), and $c$ is an
isomorphism
$c:\mathcal{M}\otimes_{A[1/E]^{\wedge}_{p},i_{1}}B^{(2)}\simeq\mathcal{M}\otimes_{A[1/E]^{\wedge}_{p},i_{2}}B^{(2)}$
$(\text{resp.
}c:\mathcal{M}\otimes_{A[1/E]^{\wedge}_{p}[1/p],i_{1}}B^{(2)}[1/p]\simeq\mathcal{M}\otimes_{A[1/E]^{\wedge}_{p}[1/p],i_{2}}B^{(2)}[1/p])$
that is compatible with the $\varphi$-structure and satisfies the cocycle
condition over $B^{(3)}$ (resp. $B^{(3)}[\frac{1}{p}]$). Here, for $j=1,2$,
$i_{j}:A[1/E]^{\wedge}_{p}\to B^{(2)}$ is the map induced from
$i_{j}:(A,(E))\to(A^{(2)},(E))$.
###### Remark 4.2.2.
It is the main result of [Wu21] and [BS21, §2] that there is an equivalence
between the category of lattices in representations of $G_{K}$ and the
category of prismatic $F$-crystals in finite locally free
$\mathcal{O}_{{\mathbbl{\Delta}}}[1/I]^{\wedge}_{p}$-modules over
$\mathcal{O}_{K}$. Also, by [BS21, Proposition 2.7], one can show that the
category of prismatic $F$-crystals in finite locally free
$\mathcal{O}_{{\mathbbl{\Delta}}}[1/I]^{\wedge}_{p}$-modules is the same as
the category of étale $\varphi$-modules over $A[1/E]^{\wedge}_{p}$ with
descent data.
The aim of this subsection is to use the ideas in [Wu21] and [KL19, §5.5] to
show that étale $\varphi$-modules over $A[1/E]^{\wedge}_{p}$ (resp.
$A[1/E]^{\wedge}_{p}[1/p]$) with descent data are equivalent to
$\mathrm{Rep}_{\mathbb Z_{p}}(G_{K})$ (resp. $\mathrm{Rep}_{\mathbb
Q_{p}}(G_{K})$). More importantly, for every $\gamma\in\hat{G}$, we will
construct an evaluation-at-$\gamma$ map
$e_{\gamma}:B^{(2)}\to W(\hat{L}^{\flat})$
and use it to study $\varphi$-equivariant morphisms between finite free
$B^{(2)}$- and $B^{(2)}[1/p]$-modules. We will see that the
evaluation-at-$\tau$ map plays a crucial role in our proofs of Proposition
3.2.2 and Theorem 4.1.10 below.
Recall that in §3.3 we defined
$L=\bigcup\limits_{n=1}^{\infty}K_{\infty}(\zeta_{p^{n}})$,
$\hat{G}:=\mathop{\rm Gal}\nolimits(L/K)$ and $H_{K}:=\mathop{\rm
Gal}\nolimits(L/K_{\infty})$. Moreover, we define $\widehat{K}_{1^{\infty}}$
to be the $p$-adic completion of $\cup_{n\geq 0}K(\zeta_{p^{n}})$, and we let
$\hat{L}$ be the $p$-adic completion of $L$. It is clear that
$A[1/E]_{p}^{\wedge}\subset W(\hat{L}^{\flat})^{H_{K}}$. Recall the following
definition and theorem from [Car13]:
###### Theorem 4.2.3.
An étale $(\varphi,\tau)$-module is a triple
$(\mathcal{M},\varphi_{\mathcal{M}},\hat{G})$ where
* •
$(\mathcal{M},\varphi_{\mathcal{M}})$ is an étale $\varphi$-module over
$A[1/E]^{\wedge}_{p}$;
* •
$\hat{G}$ is a continuous $W(\hat{L}^{\flat})$-semi-linear $\hat{G}$-action on
$\hat{\mathcal{M}}:=W(\hat{L}^{\flat})\otimes_{A[1/E]^{\wedge}_{p}}\mathcal{M}$
that commutes with $\varphi_{\mathcal{M}}$;
* •
regarding $\mathcal{M}$ as an $A[1/E]^{\wedge}_{p}$-submodule of
$\hat{\mathcal{M}}$, we have $\mathcal{M}\subset\hat{\mathcal{M}}^{H_{K}}$.
Then there is an anti-equivalence of the category of étale
$(\varphi,\tau)$-modules and $\mathrm{Rep}_{\mathbb Z_{p}}(G_{K})$, such that
if $T$ corresponds to $(\mathcal{M},\varphi_{\mathcal{M}},\hat{G})$, then
$T^{\vee}=(\hat{\mathcal{M}}\otimes_{W(\hat{L}^{\flat})}W(\mathbb
C_{p}^{\flat}))^{\varphi=1}.$
One of the basic facts used in the theory of étale $(\varphi,\tau)$-modules
developed in [Car13] is that $\mathop{\rm
Gal}\nolimits(\hat{L}/\widehat{K}_{1^{\infty}})\simeq\mathbb Z_{p}$, and we
write $\tau$ for a topological generator of $\mathop{\rm
Gal}\nolimits(\hat{L}/\widehat{K}_{1^{\infty}})$ determined by
$\tau(\varpi_{n})=\zeta_{p^{n}}\varpi_{n}$, as in the discussion before
Corollary 3.3.4. Also, $\hat{G}$ is topologically generated by $\tau$ and
$H_{K}$, so in particular the $\hat{G}$-action on $\hat{\mathcal{M}}$ is
determined by the action of $\tau$ on $\mathcal{M}$ inside
$\hat{\mathcal{M}}$. As discussed before, we will provide a direct
correspondence between the category of étale $(\varphi,\tau)$-modules and the
category of étale $\varphi$-modules over $A[1/E]^{\wedge}_{p}$ with descent
data. Moreover, we will construct an evaluation-at-$\tau$ map:
$e_{\tau}:B^{(2)}\to W(\hat{L}^{\flat}),$
and show that the $\tau$-action on $\mathcal{M}$ inside $\hat{\mathcal{M}}$ is
given by the base change of the descent datum along $e_{\tau}$.
###### Remark 4.2.4.
In [Wu21, Theorem 5.2], a similar equivalence is proved, but for étale
$(\varphi,\Gamma)$-modules. The theory of étale $(\varphi,\Gamma)$-modules is
defined using the cyclotomic tower $K_{1^{\infty}}$ over $K$, while the theory
of étale $(\varphi,\tau)$-modules is defined using the Kummer tower
$K_{\infty}$. We will use many ideas and results developed in [Wu21] when
proving our claims in this subsection. The main difficulty in our situation is
that the Kummer tower $K_{\infty}$ is not a Galois tower over $K$. To deal
with this, we use the idea in [KL19, §5.5]: roughly speaking, we take the
Galois closure $L$ of $K_{\infty}$, prove results over $\hat{L}$, and then
descend back to $K_{\infty}$ using the fact that
$\widehat{K}_{\infty}=\hat{L}^{H_{K}}$.
One should be able to construct the evaluation map in the context of [Wu21]
in the same way as we do in this subsection. This map would give a more direct
correspondence between the descent data and the $\Gamma$-actions on étale
$(\varphi,\Gamma)$-modules.
By [BS22, Lem 3.9], any prism $(B,J)$ admits a map into its perfection
$(B_{\mathop{\rm perf}\nolimits},JB_{\mathop{\rm perf}\nolimits})$. The
following theorem ([BS22, Thm 3.10]) is the key to understanding perfect
prisms.
###### Theorem 4.2.5.
The functor $(A,I)\mapsto A/I$ induces an equivalence between the category of
perfect prisms over $\mathcal{O}_{K}$ and the category of integral perfectoid
rings over $\mathcal{O}_{K}$.
Let $(A,(E))$ be the Breuil-Kisin prism defined in §2.1, we have
###### Lemma 4.2.6.
$A_{\mathop{\rm perf}\nolimits}\simeq
W(\mathcal{O}_{\widehat{K}_{\infty}}^{\flat})$.
###### Proof.
Exactly the same as the proof of [Wu21, Lemma 2.17]. ∎
###### Lemma 4.2.7.
Let $\mathop{\rm Perfd}\nolimits_{K}$ be the category of perfectoid
$K$-algebras, then $\mathop{\rm Perfd}\nolimits_{K}$ admits finite non-empty
coproducts.
###### Proof.
Let $R$ and $S$ be two perfectoid $K$-algebras. It follows from [KL15,
Corollary 3.6.18] that the uniform completion $(R\otimes_{K}S)^{u}$ of the
tensor product $R\otimes_{K}S$ is again a perfectoid $K$-algebra, and it is
easy to show that this is the coproduct of $R$ and $S$ in the category of
perfectoid $K$-algebras. ∎
For $i\in\mathbb N_{>0}$, let $(A^{(i)},(E))$ (resp.
$(A_{\mathrm{inf}}(\mathcal{O}_{\hat{L}})^{(i)},(E))$) denote the $i$-th self-
coproduct of $(A,(E))$ (resp. $(A_{\mathrm{inf}}(\mathcal{O}_{\hat{L}}),(E))$)
in the category of prisms over $\mathcal{O}_{K}$, where
$A_{\mathrm{inf}}(\mathcal{O}_{\hat{L}}):=W(\mathcal{O}_{\hat{L}}^{\flat})$.
The following is a description of $(A^{(i)})_{\mathop{\rm
perf}\nolimits}[1/E]^{\wedge}_{p}$ and
$(A_{\mathrm{inf}}(\mathcal{O}_{\hat{L}})^{(i)})_{\mathop{\rm
perf}\nolimits}[1/E]^{\wedge}_{p}$.
###### Lemma 4.2.8.
Let $\widehat{K}_{\infty}^{(i)}$ (resp. $\hat{L}^{(i)}$) be the $i$-th self-
coproduct of $\widehat{K}_{\infty}$ (resp. $\hat{L}$) in $\mathop{\rm
Perfd}\nolimits_{K}$, then $(A^{(i)})_{\mathop{\rm
perf}\nolimits}[1/E]^{\wedge}_{p}\simeq
W((\widehat{K}_{\infty}^{(i)})^{\flat})$ (resp.
$(A_{\mathrm{inf}}(\mathcal{O}_{\hat{L}})^{(i)})_{\mathop{\rm
perf}\nolimits}[1/E]^{\wedge}_{p}\simeq W((\hat{L}^{(i)})^{\flat})$).
###### Proof.
We will only prove the lemma for $(A^{(i)})_{\mathop{\rm
perf}\nolimits}[1/E]^{\wedge}_{p}$, and the case for
$(A_{\mathrm{inf}}(\mathcal{O}_{\hat{L}})^{(i)})_{\mathop{\rm
perf}\nolimits}[1/E]^{\wedge}_{p}$ is similar.
We use similar arguments as in [Wu21, Lemma 5.3]. Fix $i$. First, one can
show that $(A^{(i)})_{\mathop{\rm perf}\nolimits}$ is the $i$-th
self-coproduct of $(A_{\mathop{\rm perf}\nolimits},(E))$ in the category of
perfect prisms over $\mathcal{O}_{K}$, i.e. $(A^{(i)})_{\mathop{\rm
perf}\nolimits}=(A_{\mathop{\rm perf}\nolimits})^{(i)}_{\mathop{\rm
perf}\nolimits}$. By Theorem 4.2.5, Lemma 4.2.6 and [Wu21, Proposition 2.15],
if we let $S=(A^{(i)})_{\mathop{\rm perf}\nolimits}/E$, then $S[1/p]$ is the
$i$-th self-coproduct of $\widehat{K}_{\infty}$ in the category of perfectoid
$K$-algebras. Now we have
$(A^{(i)})_{\mathop{\rm perf}\nolimits}[1/E]^{\wedge}_{p}\simeq
W(S^{\flat})[1/[\varpi^{\flat}]]^{\wedge}_{p}=W(S^{\flat}[1/\varpi^{\flat}])=W((S[1/p])^{\flat})\simeq
W((\widehat{K}_{\infty}^{(i)})^{\flat}).$
∎
###### Remark 4.2.9.
There is another way to view $\widehat{K}_{\infty}^{(i)}$, in terms of
diamonds over $\mathrm{Spd}(K,\mathcal{O}_{K})$, which is used in the proof of
[Wu21, Lemma 5.3]: there exists a ring of integral elements
$\widehat{K}_{\infty}^{(i),+}$ in $\widehat{K}_{\infty}^{(i)}$ such that we
have
(10)
$\mathrm{Spa}(\widehat{K}_{\infty}^{(i)},\widehat{K}_{\infty}^{(i),+})^{\diamond}\simeq\underbrace{\mathrm{Spa}(\widehat{K}_{\infty},\widehat{K}_{\infty}^{+})^{\diamond}\times_{\mathrm{Spd}(K,\mathcal{O}_{K})}\ldots\times_{\mathrm{Spd}(K,\mathcal{O}_{K})}\mathrm{Spa}(\widehat{K}_{\infty},\widehat{K}_{\infty}^{+})^{\diamond}}_{i\text{-copies of }\mathrm{Spa}(\widehat{K}_{\infty},\widehat{K}_{\infty}^{+})^{\diamond}}.$
Similar results hold for $\hat{L}$. Using this description and the fact that
the functor from perfectoid spaces over $\mathrm{Spa}(K,\mathcal{O}_{K})$ to
diamonds over $\mathrm{Spd}(K,\mathcal{O}_{K})$ is an equivalence, we see that
$\hat{L}^{(i)}$ has a natural action of $\hat{G}^{i}$ coming from the action
on the diamond spectrum. Since $\hat{L}^{H_{K}}=\widehat{K}_{\infty}$, we have
$\mathrm{Spa}(\widehat{K}_{\infty}^{(i)},\widehat{K}_{\infty}^{(i),+})^{\diamond}\simeq\left(\mathrm{Spa}(\hat{L},\mathcal{O}_{\hat{L}})^{\diamond}\times_{\mathrm{Spd}(K,\mathcal{O}_{K})}\ldots\times_{\mathrm{Spd}(K,\mathcal{O}_{K})}\mathrm{Spa}(\hat{L},\mathcal{O}_{\hat{L}})^{\diamond}\right)^{H_{K}^{i}}\simeq(\mathrm{Spa}(\hat{L}^{(i)},\hat{L}^{(i),+})^{\diamond})^{H_{K}^{i}}.$
That is, $(\hat{L}^{(i)})^{H_{K}^{i}}=\widehat{K}_{\infty}^{(i)}$.
Now we use ideas in [Wu21] and [KL19, §5.5] to study étale $\varphi$-modules
over $A[1/E]^{\wedge}_{p}$ with descent data. We will show that this category
is the same as the category of generalized $(\varphi,\Gamma)$-modules in the
work of Kedlaya-Liu. The following is a quick review of Examples 5.5.6 and
5.5.7 in [KL19].
Firstly, one has $\hat{L}^{(i)}\simeq\mathop{\rm
Cont}\nolimits(\hat{G}^{i-1},\hat{L})$, where $\mathop{\rm Cont}\nolimits$
denotes the set of continuous functions; one can see this fact from the proof
of [Wu21, Theorem 5.6]. When $i=2$, the two canonical maps
$i_{1},i_{2}:\hat{L}\to\hat{L}^{(2)}$ correspond to
$j_{1},j_{2}:\hat{L}\to\mathop{\rm Cont}\nolimits(\hat{G},\hat{L})$ given by
(11) $j_{1}(x):\gamma\mapsto\gamma(x)\quad\text{ and }\quad
j_{2}(x):\gamma\mapsto x.$
From Remark 4.2.9, there is a natural action of $\hat{G}^{2}$ on
$\hat{L}^{(2)}$. One can check this corresponds to the $\hat{G}^{2}$-action on
$\mathop{\rm Cont}\nolimits(\hat{G},\hat{L})$ given by:
$(\sigma_{1},\sigma_{2})(f)(\gamma)=\sigma_{2}f(\sigma_{2}^{-1}\gamma\sigma_{1}).$
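As a consistency check (our computation, using (11) and the action formula above), the maps $j_{1}$ and $j_{2}$ are equivariant for the first and the second factor of $\hat{G}^{2}$, respectively:

```latex
\begin{align*}
\big((\sigma_{1},\sigma_{2})(j_{1}(x))\big)(\gamma)
  &= \sigma_{2}\,j_{1}(x)(\sigma_{2}^{-1}\gamma\sigma_{1})
   = \sigma_{2}\sigma_{2}^{-1}\gamma\sigma_{1}(x)
   = j_{1}(\sigma_{1}(x))(\gamma),\\
\big((\sigma_{1},\sigma_{2})(j_{2}(x))\big)(\gamma)
  &= \sigma_{2}\,j_{2}(x)(\sigma_{2}^{-1}\gamma\sigma_{1})
   = \sigma_{2}(x)
   = j_{2}(\sigma_{2}(x))(\gamma).
\end{align*}
```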
###### Remark 4.2.10.
We interchange the roles of $j_{1}$ and $j_{2}$ compared with the isomorphism
defined in [KL19, Example 5.5.6], so the $\hat{G}^{2}$-action is different
from that in Example 5.5.7 of $loc.cit.$ We will see that this definition is
more convenient when relating the descent data to the semilinear group
actions.
One can show that $\mathop{\rm Cont}\nolimits(\hat{G},-)$ commutes with
tilting and the Witt vector functor, as discussed in [Wu21, Lemma 5.3], so in
particular we have
$W((\hat{L}^{(i)})^{\flat})\simeq\mathop{\rm
Cont}\nolimits(\hat{G}^{i-1},W(\hat{L}^{\flat})).$
For $i=2$, we still use $j_{1}$ and $j_{2}$ to denote the two canonical
maps from $W(\hat{L}^{\flat})$ to $\mathop{\rm
Cont}\nolimits(\hat{G},W(\hat{L}^{\flat}))$ that come from (11). The above
isomorphism is also compatible with the action of $\hat{G}^{2}$, so we have
(12) $W((\widehat{K}_{\infty}^{(2)})^{\flat})\simeq\mathop{\rm
Cont}\nolimits(\hat{G},W(\hat{L}^{\flat}))^{H_{K}^{2}}.$
Now let $\mathcal{M}$ be an étale $\varphi$-module over
$W(\widehat{K}_{\infty}^{\flat})$ with a descent datum
$\psi:\mathcal{M}\otimes_{W(\widehat{K}_{\infty}^{\flat}),j_{1}}W((\widehat{K}_{\infty}^{(2)})^{\flat})\simeq\mathcal{M}\otimes_{W(\widehat{K}_{\infty}^{\flat}),j_{2}}W((\widehat{K}_{\infty}^{(2)})^{\flat}),$
i.e. an isomorphism of étale $\varphi$-modules over
$W((\widehat{K}_{\infty}^{(2)})^{\flat})$ that satisfies the cocycle condition
over $W((\widehat{K}_{\infty}^{(3)})^{\flat})$.
Using (12), $\psi$ is the same as a descent datum:
(13)
$\hat{\psi}:{\mathcal{M}}\otimes_{W(\widehat{K}_{\infty}^{\flat}),j_{1}}\mathop{\rm
Cont}\nolimits(\hat{G},W(\hat{L}^{\flat}))^{H_{K}^{2}}\simeq{\mathcal{M}}\otimes_{W(\widehat{K}_{\infty}^{\flat}),j_{2}}\mathop{\rm
Cont}\nolimits(\hat{G},W(\hat{L}^{\flat}))^{H_{K}^{2}}.$
For each $\gamma\in\hat{G}$, we have an evaluation map
$\tilde{e}_{\gamma}:\mathop{\rm Cont}\nolimits(\hat{G},W(\hat{L}^{\flat}))\to
W(\hat{L}^{\flat})$ given by evaluating at $\gamma$. Using (11), one can check
that $\tilde{e}_{\gamma}\circ j_{2}:W(\widehat{K}_{\infty}^{\flat})\to
W(\hat{L}^{\flat})$ is the natural embedding and
$\tilde{e}_{\gamma}\circ j_{1}:W(\widehat{K}_{\infty}^{\flat})\to
W(\hat{L}^{\flat})$ is given by $x\mapsto\gamma(x)$. So for each
$\gamma\in\hat{G}$, if we tensor (13) with the evaluation map
$\tilde{e}_{\gamma}$, we get an isomorphism:
$\psi_{\gamma}:{\mathcal{M}}\otimes_{W(\widehat{K}_{\infty}^{\flat}),\gamma}W(\hat{L}^{\flat})\simeq{\mathcal{M}}\otimes_{W(\widehat{K}_{\infty}^{\flat})}W(\hat{L}^{\flat}).$
Similar to classical Galois descent theory, the cocycle condition for
$\psi$ implies that $\\{\psi_{\gamma}\\}_{\gamma}$ satisfies
$\psi_{\sigma\gamma}=\psi_{\sigma}\circ\sigma^{\ast}\psi_{\gamma}.$
Hence $\\{\psi_{\gamma}\\}_{\gamma}$ defines a continuous semilinear action of
$\hat{G}$ on
$\hat{\mathcal{M}}:=\mathcal{M}\otimes_{W(\widehat{K}_{\infty}^{\flat})}W(\hat{L}^{\flat})$.
One can check that for $\gamma\in H_{K}$, the composition
$W(\widehat{K}_{\infty}^{\flat})\xrightarrow{j_{k}}W((\widehat{K}_{\infty}^{(2)})^{\flat})\to\mathop{\rm
Cont}\nolimits(\hat{G},W(\hat{L}^{\flat}))\xrightarrow{\tilde{e}_{\gamma}}W(\hat{L}^{\flat})$
is the natural embedding $W(\widehat{K}_{\infty}^{\flat})\hookrightarrow
W(\hat{L}^{\flat})$ for $k=1,2$. Using the cocycle condition, one can show
that $\psi_{\gamma}=\mathop{\rm id}\nolimits$ for $\gamma\in H_{K}$, so in
particular $\mathcal{M}\subset\hat{\mathcal{M}}^{H_{K}}$. Conversely, given a
semilinear action of $\hat{G}$ on $\hat{\mathcal{M}}$ such that
$\mathcal{M}\subset\hat{\mathcal{M}}^{H_{K}}$, the collection
$\\{\psi_{\gamma}\\}_{\gamma}$ defines a descent datum $\psi$ over
$\mathop{\rm Cont}\nolimits(\hat{G},W(\hat{L}^{\flat}))^{H_{K}^{2}}$ if and
only if the semilinear action is continuous. In summary, we have
###### Theorem 4.2.11.
1. (1)
The category of étale $\varphi$-modules over $A[1/E]^{\wedge}_{p}$ with
descent data over $A^{(2)}[1/E]^{\wedge}_{p}$ is equivalent to the category of
étale $(\varphi,\tau)$-modules over $A[1/E]^{\wedge}_{p}$;
2. (2)
Given a descent datum $f$ of an étale $\varphi$-module $\mathcal{M}$ over
$A[1/E]^{\wedge}_{p}$ and $\gamma\in\hat{G}$, we can define the evaluation
$f_{\gamma}$ of $f$ at $\gamma$ as the base change of $f$ along
$e_{\gamma}:A^{(2)}[1/E]^{\wedge}_{p}\to(A^{(2)})_{\mathop{\rm
perf}\nolimits}[1/E]^{\wedge}_{p}\xrightarrow{\tilde{e}_{\gamma}}W(\hat{L}^{\flat}),$
which defines an isomorphism:
$f_{\gamma}:\mathcal{M}\otimes_{A[1/E]^{\wedge}_{p},\tilde{\iota}_{\gamma}}W(\hat{L}^{\flat})\simeq\mathcal{M}\otimes_{A[1/E]^{\wedge}_{p}}W(\hat{L}^{\flat}),$
where $\tilde{\iota}_{\gamma}:A[1/E]^{\wedge}_{p}\to
W(\hat{L}^{\flat})\xrightarrow{\gamma}W(\hat{L}^{\flat})$. Suppose that
$(\mathcal{M},f)$ corresponds to a $\mathbb Z_{p}$-representation $T$ of
$G_{K}$; then $f_{\gamma}$ corresponds to the semilinear action of $\gamma$ on
$\mathcal{M}$ inside
$\mathcal{M}\otimes_{A[1/E]^{\wedge}_{p}}W(\mathbb{C}_{p}^{\flat})\simeq
T^{\vee}\otimes W(\mathbb{C}_{p}^{\flat})$. Moreover, two descent data $f,g$
are equal if and only if $f_{\tau}=g_{\tau}$.
###### Proof.
The discussion above the theorem establishes that the category of étale
$\varphi$-modules over $A_{\mathop{\rm perf}\nolimits}[1/E]^{\wedge}_{p}$ with
descent data over $(A^{(2)})_{\mathop{\rm perf}\nolimits}[1/E]^{\wedge}_{p}$
is equivalent to the category of étale $(\varphi,\tau)$-modules over
$A[1/E]^{\wedge}_{p}$. Now (1) follows from [Wu21, Theorem 4.6], which shows
that the category of étale $\varphi$-modules over
$B[\frac{1}{I}]^{\wedge}_{p}$ is equivalent to the category of étale
$\varphi$-modules over $B_{\mathop{\rm perf}\nolimits}[\frac{1}{I}]^{\wedge}_{p}$
for any bounded prism $(B,I)$ such that $\varphi(I)\bmod p$ is generated by a
non-zero divisor in $B/p$. Then it just remains to prove the last statement in
(2). Actually, one can check (2) by chasing all the functors used in (1) and
using the fact that for any étale $(\varphi,\tau)$-module, the
$\hat{G}$-action on $\hat{\mathcal{M}}$ is determined by the $\tau$-action on
$\mathcal{M}$. However, this can also be seen directly from the following
lemma. ∎
###### Lemma 4.2.12.
Let $\mathcal{M},\mathcal{N}$ be two finite free étale $\varphi$-modules over
$A^{(2)}[1/E]^{\wedge}_{p}$, and let $f,g:\mathcal{M}\to\mathcal{N}$ be two
morphisms of étale $\varphi$-modules over $A^{(2)}[1/E]^{\wedge}_{p}$. Let
$f_{\tau},g_{\tau}$ be the base changes of $f,g$ along the map
$e_{\tau}:A^{(2)}[1/E]^{\wedge}_{p}\to(A^{(2)})_{\mathop{\rm
perf}\nolimits}[1/E]^{\wedge}_{p}\simeq\mathop{\rm
Cont}\nolimits\Big{(}\hat{G},W\big{(}(\hat{L}^{(2)})^{\flat}\big{)}\Big{)}^{H_{K}^{2}}\xrightarrow{\tilde{e}_{\tau}}W(\hat{L}^{\flat}).$
Then $f=g$ if and only if $f_{\tau}=g_{\tau}$.
###### Proof.
Taking the natural base changes of $f$ and $g$ along
$A^{(2)}[1/E]^{\wedge}_{p}\to(A^{(2)})_{\mathop{\rm
perf}\nolimits}[1/E]^{\wedge}_{p}$, we get two morphisms $\psi$ and
$\psi^{\prime}$ between étale $\varphi$-modules over $(A^{(2)})_{\mathop{\rm
perf}\nolimits}[1/E]^{\wedge}_{p}$. Since the base change functor between
étale $\varphi$-modules over $A^{(2)}[1/E]^{\wedge}_{p}$ and
$(A^{(2)})_{\mathop{\rm perf}\nolimits}[1/E]^{\wedge}_{p}$ is an equivalence
of categories, it suffices to show that $\psi=\psi^{\prime}$ if and only if
their base changes along
$\tilde{e}_{\tau}:(A^{(2)})_{\mathop{\rm
perf}\nolimits}[1/E]^{\wedge}_{p}\simeq\mathop{\rm
Cont}\nolimits\Big{(}\hat{G},W\big{(}(\hat{L}^{(2)})^{\flat}\big{)}\Big{)}^{H_{K}^{2}}\xrightarrow{}W(\hat{L}^{\flat})$
are equal. Since $\mathcal{M}$ and $\mathcal{N}$ are finite free, it is enough
to show that the evaluation map
$\tilde{e}_{\tau}:\mathop{\rm
Cont}\nolimits\Big{(}\hat{G},W\big{(}(\hat{L}^{(2)})^{\flat}\big{)}\Big{)}^{H_{K}^{2}}\to
W\big{(}(\hat{L}^{(2)})^{\flat}\big{)}$
is injective. Suppose $h\in\mathop{\rm
Cont}\nolimits\Big{(}\hat{G},W\big{(}(\hat{L}^{(2)})^{\flat}\big{)}\Big{)}^{H_{K}^{2}}$
satisfies $h(\tau)=0$; then
$(\sigma_{1},\sigma_{2})(h)(\tau)=\sigma_{2}h(\sigma_{2}^{-1}\tau\sigma_{1})=0$
for all $(\sigma_{1},\sigma_{2})\in H_{K}^{2}$. Since $\hat{G}$ is
topologically generated by $H_{K}$ and $\tau$, we get $h\equiv 0$. ∎
Now we give the $\mathbb Q$-isogeny versions of Theorem 4.2.11 and Lemma
4.2.12. Recall that the category of étale $(\varphi,\tau)$-modules over
$A[1/E]^{\wedge}_{p}[\frac{1}{p}]$ is equivalent to the category of $\mathbb
Q_{p}$-representations of $G_{K}$, and recall the following definition of
étale $\varphi$-modules over $B[1/J]^{\wedge}_{p}[\frac{1}{p}]$ for a
prism $(B,J)\in X_{{\mathbbl{\Delta}}}$.
###### Definition 4.2.13.
A (globally) étale $\varphi$-module $\mathcal{M}$ over
$B[1/J]^{\wedge}_{p}[\frac{1}{p}]$ is a (finite projective) $\varphi$-module
over $B[1/J]^{\wedge}_{p}[\frac{1}{p}]$ that arises by base extension from an
étale $\varphi$-module over $B[1/J]^{\wedge}_{p}$.
From this definition, we immediately deduce the following result from [Wu21,
Theorem 4.6]:
###### Proposition 4.2.14.
For any prism $(B,J)\in X_{{\mathbbl{\Delta}}}$ such that $\varphi(J)\bmod p$
is generated by a non-zero divisor in $B/p$, the base change functor defined
by $B[1/J]^{\wedge}_{p}[\frac{1}{p}]\to B_{\mathop{\rm
perf}\nolimits}[1/J]^{\wedge}_{p}[\frac{1}{p}]$ induces an equivalence between
the category of étale $\varphi$-modules over
$B[1/J]^{\wedge}_{p}[\frac{1}{p}]$ and the category of étale $\varphi$-modules
over $B_{\mathop{\rm perf}\nolimits}[1/J]^{\wedge}_{p}[\frac{1}{p}]$.
Similarly to Theorem 4.2.11 and Lemma 4.2.12, we have
###### Theorem 4.2.15.
The category of étale $\varphi$-modules over
$A[1/E]^{\wedge}_{p}[\frac{1}{p}]$ with descent data over
$A^{(2)}[1/E]^{\wedge}_{p}[\frac{1}{p}]$ is equivalent to the category of
étale $(\varphi,\tau)$-modules over $A[1/E]^{\wedge}_{p}[\frac{1}{p}]$.
Moreover,
$\mathop{\rm
Cont}\nolimits\big{(}\hat{G},W(\hat{L}^{\flat})[\frac{1}{p}]\big{)}^{H_{K}^{2}}\simeq
W\big{(}(\widehat{K}_{\infty}^{(2)})^{\flat}\big{)}[\frac{1}{p}].$
For $\gamma\in\hat{G}$, we can define the evaluation map
$\tilde{e}_{\gamma}:\mathop{\rm
Cont}\nolimits\big{(}\hat{G},W(\hat{L}^{\flat})[\frac{1}{p}]\big{)}\to
W(\hat{L}^{\flat})[\frac{1}{p}].$
Given a descent datum $f$ of an étale $\varphi$-module $\mathcal{M}$ over
$A[1/E]^{\wedge}_{p}[\frac{1}{p}]$ and $\gamma\in\hat{G}$, we can define the
evaluation $f_{\gamma}$ of $f$ at $\gamma$ as the base change of $f$
along
$e_{\gamma}:A^{(2)}[1/E]^{\wedge}_{p}[\frac{1}{p}]\to(A^{(2)})_{\mathop{\rm
perf}\nolimits}[1/E]^{\wedge}_{p}[\frac{1}{p}]\xrightarrow{\tilde{e}_{\gamma}}W(\hat{L}^{\flat})[\frac{1}{p}],$
which defines an isomorphism:
$\mathcal{M}\otimes_{A[1/E]^{\wedge}_{p}[1/p],\tilde{\iota}_{\gamma}}W(\hat{L}^{\flat})[\frac{1}{p}]\simeq\mathcal{M}\otimes_{A[1/E]^{\wedge}_{p}[1/p]}W(\hat{L}^{\flat})[\frac{1}{p}]$
where $\tilde{\iota}_{\gamma}:A[1/E]^{\wedge}_{p}[\frac{1}{p}]\to
W(\hat{L}^{\flat})[\frac{1}{p}]\xrightarrow{\gamma}W(\hat{L}^{\flat})[\frac{1}{p}]$.
If $(\mathcal{M},f)$ corresponds to a $\mathbb Q_{p}$-representation $V$ of
$G_{K}$, then $f_{\gamma}$ corresponds to the semilinear action of $\gamma$ on
$\mathcal{M}$ inside $V^{\vee}\otimes W(\mathbb{C}_{p}^{\flat})[1/p]$.
Moreover, two descent data $f,g$ are equal if and only if $f_{\tau}=g_{\tau}$.
###### Lemma 4.2.16.
Let $\mathcal{M},\mathcal{N}$ be two finite free étale $\varphi$-modules over
$A^{(2)}[1/E]^{\wedge}_{p}[\frac{1}{p}]$, and let
$f,g:\mathcal{M}\to\mathcal{N}$ be two morphisms of étale $\varphi$-modules
over $A^{(2)}[1/E]^{\wedge}_{p}[\frac{1}{p}]$. Let $f_{\tau},g_{\tau}$ be the base
changes of $f,g$ along the map
$e_{\tau}:A^{(2)}[1/E]^{\wedge}_{p}[\frac{1}{p}]\to(A^{(2)})_{\mathop{\rm
perf}\nolimits}[1/E]^{\wedge}_{p}[\frac{1}{p}]\simeq\mathop{\rm
Cont}\nolimits\Big{(}\hat{G},W\big{(}(\hat{L}^{(2)})^{\flat}\big{)}[\frac{1}{p}]\Big{)}^{H_{K}^{2}}\xrightarrow{\tilde{e}_{\tau}}W(\hat{L}^{\flat})[\frac{1}{p}].$
Then $f=g$ if and only if $f_{\tau}=g_{\tau}$.
###### Proof.
The proofs are exactly the same as the proofs of Theorem 4.2.11 and Lemma
4.2.12, together with the fact that
$\mathop{\rm
Cont}\nolimits\big{(}\hat{G},W(\hat{L}^{\flat})[\frac{1}{p}]\big{)}=\mathop{\rm
Cont}\nolimits\big{(}\hat{G},W(\hat{L}^{\flat})\big{)}[\frac{1}{p}],$
which follows from the compactness of $\hat{G}$. ∎
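To spell out the compactness argument (our sketch, assuming the natural colimit topology on $W(\hat{L}^{\flat})[\frac{1}{p}]$ in which $W(\hat{L}^{\flat})$ is open): since $W(\hat{L}^{\flat})[\frac{1}{p}]=\bigcup_{n\geq 0}p^{-n}W(\hat{L}^{\flat})$ is an increasing union of open subgroups and $\hat{G}$ is compact, any continuous $h:\hat{G}\to W(\hat{L}^{\flat})[\frac{1}{p}]$ has image in a single term of the union:

```latex
\begin{equation*}
h(\hat{G}) \subset \bigcup_{n\geq 0} p^{-n}W(\hat{L}^{\flat})
\ \text{compact}
\;\Longrightarrow\;
h(\hat{G}) \subset p^{-N}W(\hat{L}^{\flat})
\ \text{for some } N\geq 0,
\end{equation*}
```

so $p^{N}h\in\mathop{\rm Cont}\nolimits(\hat{G},W(\hat{L}^{\flat}))$, giving the equality of the two sides.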
### 4.3. Proofs of Proposition 3.2.2 and Theorem 4.1.10
Throughout this subsection we keep the assumption that $R=\mathcal{O}_{K}$ is
a complete DVR of mixed characteristic with perfect residue field, and we
keep the notations of §2.1.
Let us first prove Proposition 3.2.2 using Lemma 2.3.2 and the results of
§4.2. First, we give a different interpretation of the “evaluation map”
$e_{\gamma}:A^{(2)}[1/E]^{\wedge}_{p}\to(A^{(2)})_{\mathop{\rm
perf}\nolimits}[1/E]^{\wedge}_{p}\simeq\mathop{\rm
Cont}\nolimits\Big{(}\hat{G},W\big{(}(\hat{L}^{(2)})^{\flat}\big{)}\Big{)}^{H_{K}^{2}}\xrightarrow{\tilde{e}_{\gamma}}W(\hat{L}^{\flat})$
of Theorem 4.2.11 when restricted to $A^{(2)}$. Recall that we have fixed a
compatible system $\\{\varpi_{n}\\}_{n}$ of $p^{n}$-th roots of a uniformizer
$\varpi\in\mathcal{O}_{K}$; this defines a map of prisms
$\iota:(A,(E))\to(A_{\mathrm{inf}},(E))$ sending $u$ to $[{\varpi}^{\flat}]$.
Given $\gamma\in G_{K}$, we define $\iota_{\gamma}$ to be the composition of
$\iota$ with the map
$\gamma:(A_{\mathrm{inf}},(E))\to(A_{\mathrm{inf}},(E))$ defined by
$a\mapsto\gamma(a)$. Since $(E)\subset A_{\mathrm{inf}}$ is equal to
$\mathop{\rm Ker}\nolimits(\theta)$ and $\theta$ is $G_{K}$-equivariant,
$\gamma$ is a well-defined map of $\delta$-pairs. By the universal property of
$A^{(2)}$, we can define a map of prisms
$\iota_{\gamma}^{(2)}:(A^{(2)},(E))\to(A_{\mathrm{inf}},(E))$ so that the
following diagram commutes:
(14)
${(A,(E))}$${(A^{(2)},(E))}$${(A,(E))}$${(A_{\mathrm{inf}},(E))}$$\scriptstyle{i_{1}}$$\scriptstyle{\iota_{\gamma}}$$\scriptstyle{\iota^{(2)}_{\gamma}}$$\scriptstyle{i_{2}}$$\scriptstyle{\iota}$
The map $\iota^{(2)}_{\gamma}$ induces a morphism
$\tilde{\iota}^{(2)}_{\gamma}:A^{(2)}[1/E]^{\wedge}_{p}\to W(\mathbb
C_{p}^{\flat})$. We claim that for all $\gamma\in G_{K}$,
$\tilde{\iota}^{(2)}_{\gamma}$ is the same as the composition
$A^{(2)}[1/E]^{\wedge}_{p}\to(A^{(2)})_{\mathop{\rm
perf}\nolimits}[1/E]^{\wedge}_{p}\simeq\mathop{\rm
Cont}\nolimits\Big{(}\hat{G},W\big{(}(\hat{L}^{(2)})^{\flat}\big{)}\Big{)}^{H_{K}^{2}}\xrightarrow{\tilde{e}_{\gamma}}W(\hat{L}^{\flat})\hookrightarrow
W(\mathbb C_{p}^{\flat}).$
To see this, note that by the universal property of direct perfection, (14)
factors as:
${(A,(E))}$${(A^{(2)},(E))}$${(A,(E))}$${(A_{\mathop{\rm
perf}\nolimits},(E))}$${((A^{(2)})_{\mathop{\rm
perf}\nolimits},(E))}$${(A_{\mathop{\rm
perf}\nolimits},(E))}$${(A_{\mathrm{inf}},(E))}$$\scriptstyle{i_{1}}$$\scriptstyle{i_{2}}$$\scriptstyle{i^{\prime}_{1}}$$\scriptstyle{\iota^{\prime}_{\gamma}}$$\scriptstyle{\iota^{\prime(2)}_{\gamma}}$$\scriptstyle{i^{\prime}_{2}}$$\scriptstyle{\iota^{\prime}}$
So $\tilde{\iota}^{(2)}_{\gamma}$ has a factorization
$A^{(2)}[1/E]^{\wedge}_{p}\to(A^{(2)})_{\mathop{\rm
perf}\nolimits}[1/E]^{\wedge}_{p}\to W(\mathbb C_{p}^{\flat}).$
We just need to check that $\iota^{\prime(2)}_{\gamma}$ induces the evaluation
map
$(A^{(2)})_{\mathop{\rm perf}\nolimits}[1/E]^{\wedge}_{p}\simeq\mathop{\rm
Cont}\nolimits\Big{(}\hat{G},W\big{(}(\hat{L}^{(2)})^{\flat}\big{)}\Big{)}^{H_{K}^{2}}\xrightarrow{\tilde{e}_{\gamma}}W(\hat{L}^{\flat})\xhookrightarrow{}W(\mathbb{C}_{p}^{\flat}).$
This follows from the isomorphism $(A^{(2)})_{\mathop{\rm
perf}\nolimits}[1/E]^{\wedge}_{p}\simeq
W((\widehat{K}_{\infty}^{(2)})^{\flat})$: one checks directly that for
$j_{1},j_{2}$ defined in (11), $\tilde{e}_{\gamma}\circ
j_{1}:A_{\mathop{\rm perf}\nolimits}[1/E]^{\wedge}_{p}\to W(\hat{L}^{\flat})$
is equal to the map induced from $\iota^{\prime}_{\gamma}$, and
$\tilde{e}_{\gamma}\circ j_{2}:A_{\mathop{\rm
perf}\nolimits}[1/E]^{\wedge}_{p}\to W(\hat{L}^{\flat})$ is equal to the map
induced from $\iota^{\prime}$. In particular, we have a commutative diagram:
(15)
${A^{(2)}}$${A_{\mathrm{inf}}}$${A^{(2)}[1/E]^{\wedge}_{p}}$${(A^{(2)})_{\mathop{\rm
perf}\nolimits}[1/E]^{\wedge}_{p}}$${W(\hat{L}^{\flat})}$${W(\mathbb
C_{p}^{\flat}).}$$\scriptstyle{\iota^{(2)}_{\gamma}}$$\scriptstyle{\tilde{e}_{\gamma}}$
Now we can prove Proposition 3.2.2.
###### Proof of Proposition 3.2.2.
First, we pick $\gamma=\tilde{\tau}$, a preimage of $\tau$ under the map
$G_{K}\to\hat{G}$; then $\gamma(u)-u=Ez$, and $\iota^{(2)}_{\gamma}$ defined
as above is the embedding defined in §2.4, by Remark 2.4.2. In particular,
composing the embedding $A^{(2)}\hookrightarrow A_{\mathrm{inf}}$ defined in
§2.4 with $A_{\mathrm{inf}}\hookrightarrow W(\mathbb C_{p}^{\flat})$, one gets
the evaluation map
$(A^{(2)})_{\mathop{\rm perf}\nolimits}[1/E]^{\wedge}_{p}\simeq\mathop{\rm
Cont}\nolimits\Big{(}\hat{G},W\big{(}(\hat{L}^{(2)})^{\flat}\big{)}\Big{)}^{H_{K}^{2}}\xrightarrow{\tilde{e}_{\tau}}W(\hat{L}^{\flat})\xhookrightarrow{}W(\mathbb{C}_{p}^{\flat})$
restricted to $A^{(2)}$.
Keep the notations of §3.2, and let
$\mathcal{M}_{A_{\mathrm{inf}}}=W(\mathbb
C_{p}^{\flat})\otimes_{A}\mathfrak{M}$ and
$\mathcal{M}_{A}\simeq\mathfrak{M}\otimes_{A}A[1/E]^{\wedge}_{p}$. Recall that
we write $B^{(2)}=A^{(2)}[\frac{1}{E}]^{\wedge}_{p}$ and $B^{(2)}_{\mathop{\rm
st}\nolimits}=A^{(2)}_{\mathop{\rm st}\nolimits}[\frac{1}{E}]^{\wedge}_{p}$ to
simplify our notations. By Theorem 4.2.11 and Theorem 4.2.3, there is a
descent datum
$c:\mathcal{M}_{A}\otimes_{A[1/E]^{\wedge}_{p},\tilde{i}_{1}}B^{(2)}\to\mathcal{M}_{A}\otimes_{A[1/E]^{\wedge}_{p},\tilde{i}_{2}}B^{(2)}$
of $\mathcal{M}_{A}$ over $B^{(2)}$ that corresponds to the representation
$T$, and the semilinear action of $\gamma=\tilde{\tau}$ on
$\mathcal{M}_{A_{\mathrm{inf}}}$ is given by the evaluation $c_{\tau}$; that
is, the linearization of the $\tilde{\tau}$-action is defined by
$c_{\tau}:W(\mathbb
C_{p}^{\flat})\otimes_{\tilde{\iota}_{\gamma},A[1/E]^{\wedge}_{p}}\mathcal{M}_{A}\simeq
W(\mathbb
C_{p}^{\flat})\otimes_{\tilde{\iota},A[1/E]^{\wedge}_{p}}\mathcal{M}_{A}.$
Base changing $c$ along $B^{(2)}\to B^{(2)}[\frac{1}{p}]$, we get a
$B^{(2)}[\frac{1}{p}]$-linear $\varphi$-equivariant morphism:
$c^{\prime}:\mathcal{M}_{A}\otimes_{A[1/E]^{\wedge}_{p},\tilde{i}_{1}}B^{(2)}[\frac{1}{p}]\to\mathcal{M}_{A}\otimes_{A[1/E]^{\wedge}_{p},\tilde{i}_{2}}B^{(2)}[\frac{1}{p}].$
On the other hand, from the discussion after Proposition 3.2.2, the
$\tilde{\tau}$-action also defines a $\varphi$-equivariant morphism
$f_{\tilde{\tau}}:\mathfrak{M}\otimes_{A,\iota_{\tilde{\tau}}}A_{\mathop{\rm
st}\nolimits}^{(2)}[\frac{1}{p}]\simeq\mathfrak{M}\otimes_{A}A_{\mathop{\rm
st}\nolimits}^{(2)}[\frac{1}{p}].$
We will see in Proposition 4.3.1 below that $f_{\tilde{\tau}}$ actually
descends to a $B^{(2)}[1/p]$-linear morphism. Assuming this fact, if we
base change $f_{\tilde{\tau}}$ along $A^{(2)}[\frac{1}{p}]\to W(\mathbb
C_{p}^{\flat})[\frac{1}{p}]$, we get $f_{\tilde{\tau}}\otimes W(\mathbb
C_{p}^{\flat})[\frac{1}{p}]=c_{\tau}$, since $f_{\tilde{\tau}}$ is defined by
taking the ${\tilde{\tau}}$-action. From the discussion at the beginning of
the proof and Lemma 4.2.16, we conclude that $f_{\tilde{\tau}}=c^{\prime}$ as
a $B^{(2)}[\frac{1}{p}]$-linear isomorphism between
$\mathcal{M}_{A}\otimes_{A[1/E]^{\wedge}_{p},\tilde{i}_{1}}B^{(2)}[\frac{1}{p}]$
and
$\mathcal{M}_{A}\otimes_{A[1/E]^{\wedge}_{p},\tilde{i}_{2}}B^{(2)}[\frac{1}{p}]$.
We fix a basis $\\{e_{i}\\}$ of $\mathfrak{M}$; for $j=1,2$, let $\\{e^{j}_{i}\\}$ be the basis of $\mathcal{M}_{A}\otimes_{A,\tilde{i^{\prime}}_{j}}B^{(2)}[\frac{1}{p}]$ defined by $e^{j}_{i}=e_{i}\otimes 1$, where the tensor product is taken via $A\to A[1/E]^{\wedge}_{p}\xrightarrow{\tilde{i}_{j}}B^{(2)}[1/p]$. So we can interpret $f_{\tilde{\tau}}=c^{\prime}$ as a matrix with respect to these two bases; this matrix is $X_{\tilde{\tau}}$ by definition, so it has coefficients in $A_{\mathop{\rm st}\nolimits}^{(2)}[\frac{1}{p}]$ by the discussion before Proposition 3.2.2. On the other hand, $X_{\tilde{\tau}}$ has coefficients in $B^{(2)}\subset B_{\mathop{\rm st}\nolimits}^{(2)}$ since $c^{\prime}$ is defined by the $B^{(2)}$-linear map $c$. So by Lemma 2.3.2, $X_{\tilde{\tau}}$ has coefficients in $A_{\mathop{\rm st}\nolimits}^{(2)}$. The same argument shows that when $T$ is crystalline, $X_{\tilde{\tau}}$ has coefficients in $A^{(2)}$. ∎
###### Proposition 4.3.1.
Base change along $B^{(2)}\to A^{(2)}_{\mathop{\rm st}\nolimits}[1/E]^{\wedge}_{p}$ defines an equivalence between the categories of étale $\varphi$-modules over $B^{(2)}$ and over $A^{(2)}_{\mathop{\rm st}\nolimits}[1/E]^{\wedge}_{p}$, as well as an equivalence between the categories of étale $\varphi$-modules over $B^{(2)}[1/p]$ and over $A^{(2)}_{\mathop{\rm st}\nolimits}[1/E]^{\wedge}_{p}[1/p]$.
###### Proof.
By [Wu21, Theorem 4.6], we just need to show the same result after taking perfections; we will show $(A^{(2)})_{\mathop{\rm perf}\nolimits}=(A^{(2)}_{\mathop{\rm st}\nolimits})_{\mathop{\rm perf}\nolimits}$ in Lemma 5.0.13 using the logarithmic prismatic site. ∎
Now, let us prove Theorem 4.1.10 by first producing a functor $\mathcal{T}$ from prismatic $F$-crystals in finite $\mathcal{O}_{{\mathbbl{\Delta}}}$-modules to lattices inside crystalline representations. For a prism $A$, we write $i_{k}:A\to A^{(2)}$ or $A^{(3)}$ for the natural map from $A$ to the $k$-th factor of $A^{(2)}$ or $A^{(3)}$. The notation $i_{kl}:A^{(2)}\to A^{(3)}$ has a similar meaning.
By Corollary 4.1.9, given a prismatic $F$-crystal
${\mathfrak{M}}_{{\mathbbl{\Delta}}}$, we obtain a Kisin module
$(\mathfrak{M},\varphi_{\mathfrak{M}})$ of height $h$ together with descent
data
$f:\mathfrak{M}\otimes_{A,i_{1}}A^{(2)}\to\mathfrak{M}\otimes_{A,i_{2}}A^{(2)}$
so that $f$ satisfies the following cocycle condition: $i_{13}\otimes f=(i_{23}\otimes f)\circ(i_{12}\otimes f)$, where $i_{kl}\otimes f$ is the base change of $f$ along $i_{kl}$; moreover, $f$ is compatible with the $\varphi$-structures on both sides. Note that the existence of $f$ follows from the crystal property of $\mathfrak{M}_{{\mathbbl{\Delta}}}$:
(16)
$f:\mathfrak{M}\otimes_{A,i_{1}}A^{(2)}\simeq\mathfrak{M}_{{\mathbbl{\Delta}}}((A^{(2)},(E)))\simeq\mathfrak{M}\otimes_{A,i_{2}}A^{(2)}$
We let $\mathcal{M}=\mathfrak{M}\otimes_{A}A[1/E]^{\wedge}_{p}$ and
$c=f\otimes_{A^{(2)}}B^{(2)}$, then $(\mathcal{M},c)$ is an étale
$\varphi$-module with descent data, which corresponds to a $\mathbb
Z_{p}$-representation of $G_{K}$. Moreover, the semilinear action of $G_{K}$ on
$\mathfrak{M}\otimes_{A}W(\mathbb{C}_{p}^{\flat})$ comes from
$\\{c_{\gamma}\\}_{\gamma\in G_{K}}$ using the evaluation maps. If we define
$f_{\gamma}:A_{\mathrm{inf}}\otimes_{\iota_{\gamma},A}\mathfrak{M}\to
A_{\mathrm{inf}}\otimes_{\iota,A}\mathfrak{M}$
as the base change of $f$ along $\iota_{\gamma}^{(2)}$, then by (15), we have
$c_{\gamma}=f_{\gamma}$. The $G_{K}$-semilinear action commutes with $\varphi$
as $f$ does. For any $\gamma\in G_{K}$, we have $\gamma(A)\subset
W(k)[\\![u,\epsilon-1]\\!]\subset A^{(2)}_{\mathop{\rm st}\nolimits}\subset
A_{\mathrm{inf}}$. Therefore, the $G_{K}$-action on $A_{\mathrm{inf}}\otimes_{A}\mathfrak{M}$ defined above factors through $A^{(2)}_{\mathop{\rm st}\nolimits}\otimes_{A}\mathfrak{M}$. We claim that the $G_{K}$-action on $\widehat{\mathfrak{M}}:=A^{(2)}_{\mathop{\rm st}\nolimits}\otimes_{A}\mathfrak{M}$ defines a $(\varphi,\hat{G})$-module which corresponds to a crystalline representation.
First, for $\gamma\in G_{\infty}$ we have $\gamma(A)=A$ in $A_{\mathrm{inf}}$, so $\iota^{(2)}_{\gamma}:A^{(2)}\to A_{\mathrm{inf}}$ satisfies $\iota^{(2)}_{\gamma}\circ i_{1}=\iota^{(2)}_{\gamma}\circ i_{2}$. In particular, for any $\gamma\in G_{\infty}$, using (16) and the crystal property of $\mathfrak{M}_{{\mathbbl{\Delta}}}$, $f_{\gamma}$ comes from the base change of (16) along $\iota^{(2)}_{\gamma}:A^{(2)}\to A_{\mathrm{inf}}$, so that
$f_{\gamma}:\mathfrak{M}\otimes_{A,\iota^{(2)}_{\gamma}\circ i_{1}}A_{\mathrm{inf}}\simeq\mathfrak{M}_{{\mathbbl{\Delta}}}((A_{\mathrm{inf}},\mathop{\rm Ker}\nolimits\theta))\simeq\mathfrak{M}\otimes_{A,\iota^{(2)}_{\gamma}\circ i_{2}}A_{\mathrm{inf}}.$
Since $\iota^{(2)}_{\gamma}\circ i_{1}=\iota^{(2)}_{\gamma}\circ i_{2}$, we have $f_{\gamma}={\rm id}$, which means
$\mathfrak{M}\subset(\widehat{\mathfrak{M}})^{G_{\infty}}$. Similarly, the $G_{K}$-action on $\widehat{\mathfrak{M}}/I_{+}$ corresponds to the base change of $f$ along
$A^{(2)}\xrightarrow{\iota^{(2)}_{\gamma}}A_{\mathrm{inf}}\to W(\bar{k})$
where the last arrow is the reduction modulo $W(\mathfrak{m})$ ($\mathfrak{m}$
is the maximal ideal of $\mathcal{O}_{\mathbb C_{p}}^{\flat}$). One can check that for all $\gamma\in G_{K}$ and $j=1,2$, the composites
$A\xrightarrow{i_{j}}A^{(2)}\xrightarrow{\iota^{(2)}_{\gamma}}A_{\mathrm{inf}}\to W(\bar{k})$
are all equal to $A\to W(k)\hookrightarrow W(\overline{k})$, with the first arrow given by $u\mapsto 0$. The above map induces a morphism of prisms $(A,(E))\to(W(k),(p))$; then using (16) and the crystal condition of $\mathfrak{M}_{{\mathbbl{\Delta}}}$, we can similarly prove that $G_{K}$ acts trivially on $\widehat{\mathfrak{M}}/I_{+}$, so $(\mathfrak{M},\varphi_{\mathfrak{M}},G_{K})$ is a $(\varphi,\hat{G})$-module.
Furthermore, $\widehat{T}(\widehat{\mathfrak{M}})$ is crystalline by Corollary
3.3.4 and Theorem 3.2.1.
###### Remark 4.3.2.
In §5, we will consider a category consisting of modules with descent data, and similar arguments about the triviality of the Galois actions can be given directly using the cocycle condition of the descent data. We record this in the following easy lemma.
###### Lemma 4.3.3.
Let $q:(A^{(2)},(E))\to(B,J)$ be a map of prisms satisfying $q\circ i_{1}=q\circ i_{2}$. Then for any descent datum $f$ over $A^{(2)}$, the base change of $f$ along $q$ is the identity map.
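One can argue, for instance, as follows: let $g$ denote the base change of $f$ along $q$. Since $q\circ i_{1}=q\circ i_{2}$, there is an induced map $A^{(3)}\to B$ restricting to $q\circ i_{1}$ on each factor of $A^{(3)}$, and base changing the cocycle condition $i_{13}\otimes f=(i_{23}\otimes f)\circ(i_{12}\otimes f)$ along this map turns each of the three terms into $g$, so that
$g=g\circ g.$
As $g$ is an isomorphism, composing with $g^{-1}$ gives $g=\mathop{\rm id}\nolimits$.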
To show the full faithfulness of this functor, first let $(\mathfrak{M},f)$, $(\mathfrak{M}^{\prime},f^{\prime})$ be two Kisin modules with descent data $f,f^{\prime}$ respectively. Suppose that there exists a map $\alpha:\mathcal{T}((\mathfrak{M},f))\to\mathcal{T}((\mathfrak{M}^{\prime},f^{\prime}))$ as lattices in crystalline representations; then from our construction of $\mathcal{T}$ and Theorem 3.3.3, $\alpha$ is induced from a map $\hat{\alpha}:(\mathfrak{M},\varphi_{\mathfrak{M}},\hat{G}_{\mathfrak{M}})\to(\mathfrak{M}^{\prime},\varphi_{\mathfrak{M}^{\prime}},\hat{G}_{\mathfrak{M}^{\prime}})$ between $(\varphi,\hat{G})$-modules. The faithfulness of $\mathcal{T}$ follows from the fact that $A\to A[1/E]^{\wedge}_{p}$ induces a fully faithful functor from Kisin modules over $A$ to étale $\varphi$-modules over $A[1/E]^{\wedge}_{p}$, by [Kis06, Proposition 2.1.12]. On the other hand,
$\hat{\alpha}$ gives morphisms
$\hat{\alpha}_{1}:\mathfrak{M}\otimes_{A,i_{1}}A^{(2)}\to\mathfrak{M}^{\prime}\otimes_{A,i_{1}}A^{(2)}$
and
$\hat{\alpha}_{2}:\mathfrak{M}\otimes_{A,i_{2}}A^{(2)}\to\mathfrak{M}^{\prime}\otimes_{A,i_{2}}A^{(2)}$.
If we view $A$ and $A^{(2)}$ as subrings of $A_{\mathrm{inf}}$ using diagram (14), then the following diagram commutes, i.e. $\hat{\alpha}_{2}\circ f=f^{\prime}\circ\hat{\alpha}_{1}$, by the fact that $\hat{\alpha}:\widehat{\mathfrak{M}}\to\widehat{\mathfrak{M}^{\prime}}$ is compatible with the $\tau$-action.
${\mathfrak{M}\otimes_{A,i_{1}}A^{(2)}}$${\mathfrak{M}\otimes_{A,i_{2}}A^{(2)}}$${\mathfrak{M}^{\prime}\otimes_{A,i_{1}}A^{(2)}}$${\mathfrak{M}^{\prime}\otimes_{A,i_{2}}A^{(2)}}$$\scriptstyle{f}$$\scriptstyle{\hat{\alpha}_{1}}$$\scriptstyle{\hat{\alpha}_{2}}$$\scriptstyle{f^{\prime}}$
Thus we produce a morphism between $(\mathfrak{M},f)$ and $(\mathfrak{M}^{\prime},f^{\prime})$, i.e. $\mathcal{T}$ is also full.
It remains to show that the functor $\mathcal{T}$ is essentially surjective. Given a lattice $T$ in a crystalline representation of $G_{K}$, let $\mathfrak{M}$ be the corresponding Kisin module; it suffices to construct a descent datum for $\mathfrak{M}$ over $A^{(2)}$. We have shown in our proof of Proposition 3.2.2
that if we view $A^{(2)}$ as a subring of $A_{\mathrm{inf}}$ via
$\iota^{(2)}_{\tilde{\tau}}$, then $X_{\tilde{\tau}}$ defines a
$\varphi$-equivariant isomorphism
$f:\mathfrak{M}\otimes_{A,i_{1}}A^{(2)}\simeq\mathfrak{M}\otimes_{A,i_{2}}A^{(2)}$
of $A^{(2)}$-modules. We also showed that the base change of $f$ along $A^{(2)}\to B^{(2)}$ is equal to the descent datum $c$ of the étale $\varphi$-module $\mathcal{M}_{A}=\mathfrak{M}\otimes_{A}A[1/E]^{\wedge}_{p}$ that corresponds to the $G_{K}$-action on $T$. In particular,
$c:\mathfrak{M}\otimes_{A,i_{1}}B^{(2)}\simeq\mathfrak{M}\otimes_{A,i_{2}}B^{(2)}$
satisfies the cocycle condition. By Lemma 2.3.2, $A^{(2)}$ (resp. $A^{(3)}$) injects into $B^{(2)}$ (resp. $B^{(3)}$), so $f$ also satisfies the cocycle condition. In particular, $(\mathfrak{M},f)$ produces a prismatic $F$-crystal in finite free $\mathcal{O}_{{\mathbbl{\Delta}}}$-modules by Corollary 4.1.9.
###### Remark 4.3.4.
Given an étale $\varphi$-module
$(\mathcal{M}_{A},\varphi_{\mathcal{M}_{A}},c)$ over $A[1/E]^{\wedge}_{p}$
with descent datum $c$, we say that $(\mathcal{M}_{A},\varphi_{\mathcal{M}_{A}},c)$ is _of finite $E$-height_ if
$\mathcal{M}_{A}$ is of finite $E$-height, i.e., if there is a finite free
Kisin module $(\mathfrak{M},\varphi_{\mathfrak{M}})$ of finite height and
defined over $A$ such that
$\mathfrak{M}\otimes_{A}A[1/E]^{\wedge}_{p}\simeq\mathcal{M}_{A}$ as
$\varphi$-modules. Since $(\mathcal{M}_{A},\varphi_{\mathcal{M}_{A}})$ is the
étale $\varphi$-module for $T|_{G_{\infty}}$, our definition of finite
$E$-height is compatible with the one given by Kisin under the equivalence in
(1) of Theorem 4.2.11.
We expect that the same arguments as in the proof of Proposition 3.2.2 can be used to study representations of finite $E$-height. Similar results have been obtained using the theory of $(\varphi,\tau)$-modules by Caruso. For example, in the proof of [Car13, Lemma 2.23], Caruso shows that for representations of finite $E$-height, the $\tau$-action descends to $\mathfrak{S}_{u\text{-np},\tau}$, which is a subring of $A_{\mathrm{inf}}$ closely related to $\tilde{\iota}^{(2)}_{\tilde{\tau}}(B^{(2)})\cap A_{\mathrm{inf}}$, where $\tilde{\tau}$ is a preimage of $\tau$ in $G_{K}$.
###### Remark 4.3.5.
We can also establish the compatibility among our Theorem 4.1.10, the theory of Kisin, and [BS21, Theorem 1.2]. Given a lattice $T$ in a crystalline representation of $G_{K}$ with non-negative Hodge-Tate weights, let $\mathfrak{M}$ be the Kisin module corresponding to $T$ in [Kis06], and let $\mathfrak{M}_{{{\mathbbl{\Delta}}}}$ (resp. $\mathfrak{M}^{\prime}_{{{\mathbbl{\Delta}}}}$) be the prismatic $F$-crystal corresponding to $T^{\vee}$ under [BS21, Theorem 1.2] (resp. to $T$ under Theorem 4.1.10). Note that we need to take $T^{\vee}$ since in the work of Bhatt-Scholze the equivalence is covariant. By our construction of $\mathfrak{M}^{\prime}_{{{\mathbbl{\Delta}}}}$, we have $\mathfrak{M}^{\prime}_{{{\mathbbl{\Delta}}}}((A,(E)))\simeq\mathfrak{M}$. By [BS21, Remark 7.11], $\mathfrak{M}_{{{\mathbbl{\Delta}}}}((A,(E)))\simeq\mathfrak{M}$. Next we need to show that the two descent data over $A^{(2)}$ constructed in these ways are the same. By Corollary 2.4.5, we just need to show that they agree as descent data of étale $\varphi$-modules over $A^{(2)}[1/E]^{\wedge}_{p}$, but this holds by our $\tau$-evaluation criterion in Lemma 4.2.12.
## 5\. Logarithmic prismatic $F$-crystals and semi-stable representations
In this section, we will propose a possible generalization of Theorem 4.1.10 to semi-stable representations using the absolute logarithmic prismatic site. The main reference for this section is [Kos21]. We will restrict ourselves to the base ring $R=\mathcal{O}_{K}$, a complete DVR with perfect residue field, and we give $R$ the log structure associated to the prelog structure $\alpha:\mathbb N\to R$ such that $\alpha(1)=\varpi$ is a uniformizer in $R$; i.e., let $D=\\{\varpi=0\\}$, then the log structure on $X=\mathop{\rm Spf}\nolimits(R)$ is defined by
$M_{X}=M_{D}\hookrightarrow\mathcal{O}_{X}\text{ where }M_{D}(U):=\\{f\in\mathcal{O}_{X}(U)\,|\,f|_{U\backslash D}\in\mathcal{O}^{\times}(U\backslash D)\\}.$
Let us introduce the absolute logarithmic site over $(X,M_{X})$.
###### Definition 5.0.1.
[Kos21, Definition 2.2 and Definition 3.3]
1. (1)
A $\delta_{\log}$-ring is a tuple $(A,\delta,\alpha:M\to A,\delta_{\log}:M\to A)$, where $(A,\delta)$ is a $\delta$-ring and $\alpha$ is a prelog structure on $A$, and $\delta_{\log}$ satisfies:
* •
$\delta_{\log}(e)=0$,
* •
$\delta(\alpha(m))=\alpha(m)^{p}\delta_{\log}(m)$,
* •
$\delta_{\log}(mn)=\delta_{\log}(m)+\delta_{\log}(n)+p\delta_{\log}(m)\delta_{\log}(n)$
for all $m,n\in M$. We will simply denote it by $(A,M)$ if there is no confusion. Morphisms are morphisms of $\delta$-rings that are compatible with the prelog structures and $\delta_{\log}$-structures.
2. (2)
A $\delta_{\log}$-triple is $(A,I,M)$ such that $(A,I)$ is a $\delta$-pair and
$(A,M)$ is a $\delta_{\log}$-ring.
3. (3)
A $\delta_{\log}$-triple $(A,I,M)$ is a prelog prism if $(A,I)$ is a prism,
and it is bounded if $(A,I)$ is bounded.
4. (4)
A bounded prelog prism is a log prism if it is $(p,I)$-adically log-affine
(cf. [Kos21, Definition 3.3]).
5. (5)
A bounded (pre)log prism is integral if $M$ is an integral monoid.
6. (6)
A $\delta_{\log}$-triple $(A,I,M)$ is said to be over $(R,\mathbb N)$ if $A/I$
is an $R$-algebra and there is a map $M\to\mathbb N$ of monoids such that the
following diagram commutes.
${M}$${A}$${\mathbb N}$${R}$${A/I}$
All $\delta_{\log}$-triples over $(R,\mathbb N)$ form a category. Similarly,
we can define the category of prelog prisms over $(R,\mathbb N)$ and the
category of bounded log prisms over $(R,\mathbb N)^{a}$.
###### Remark 5.0.2.
If $A$ is an integral domain, or more generally if $\alpha(M)$ consists of non-zero divisors, then $\delta_{\log}$ is uniquely determined by $\delta$ if it exists. In particular, morphisms between such $\delta_{\log}$-rings are just morphisms of $\delta$-rings.
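Indeed, in that case the second axiom in Definition 5.0.1 forces
$\delta_{\log}(m)=\frac{\delta(\alpha(m))}{\alpha(m)^{p}}\quad\text{for all }m\in M.$
For instance, for the Breuil-Kisin prism with the prelog structure $n\mapsto u^{n}$ of Example 5.0.9 (1), the standard Frobenius satisfies $\varphi(u)=u^{p}$, so $\delta(u^{n})=\frac{\varphi(u^{n})-u^{np}}{p}=0$, and hence $\delta_{\log}(n)=0$ for all $n\in\mathbb N$.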
###### Remark 5.0.3.
Note that in this paper, for a $\delta$-pair $(A,I)$, we always assume that $A$ is $(p,I)$-adically complete, but in [Kos21], non-$(p,I)$-adically completed $\delta_{\log}$-triples are also studied. By Lemma 2.10 of loc.cit., we can always take the $(p,I)$-adic completion of the $\delta$-pair $(A,I)$, and the $\delta_{\log}$-structure is inherited.
###### Proposition 5.0.4.
[Kos21, Corollary 2.15] Given a bounded prelog prism $(A,I,M)$, one can associate to it a log prism
$(A,I,M)^{a}=(A,I,M^{a}).$
###### Remark 5.0.5.
When we deal with log prisms in this paper, we will always take them to be the log prisms associated with some prelog prisms, and by the above proposition, taking the associated log prism does not change the underlying $\delta$-pair. Moreover, it is a general fact that $(A,I,M)^{a}$ is integral if $(A,I,M)$ is integral.
###### Definition 5.0.6.
The absolute logarithmic prismatic site
$(X,M_{X})_{{{\mathbbl{\Delta}}}_{\log}}$ is the opposite of the category
whose objects are
1. (1)
bounded log prisms $(A,I,M_{A})$ with integral log structure,
2. (2)
maps of formal schemes $f_{A}:\mathop{\rm Spf}\nolimits(A/IA)\to X$,
3. (3)
the map $f_{A}$ satisfies
$(\mathop{\rm Spf}\nolimits(A/IA),f_{A}^{\ast}M_{X})\to(\mathop{\rm
Spf}\nolimits(A),M_{A})^{a}$
defines an exact closed immersion of log formal schemes.
A morphism $(A,I,M_{A})\to(B,I,M_{B})$ is a cover if and only if $A\to B$ is
$(p,I)$-complete faithfully flat and the pullback induces an isomorphism on
log structures. We define the structure sheaf
$\mathcal{O}_{{{\mathbbl{\Delta}}}_{\log}}$ on
$(X,M_{X})_{{{\mathbbl{\Delta}}}_{\log}}$ by $(A,I,M_{A})\mapsto A$.
There is a variant of the above definition that we will also use in this section: we define $(X,M_{X})_{{{\mathbbl{\Delta}}}_{\log}}^{\mathop{\rm perf}\nolimits}$ to be the full subcategory of $(X,M_{X})_{{{\mathbbl{\Delta}}}_{\log}}$ whose objects are $(A,I,M_{A})$ with $A$ perfect.
###### Remark 5.0.7.
Our definition of the absolute logarithmic prismatic site is different from
[Kos21, Definition 4.1]. First, we need to consider the absolute prismatic
site, not the relative one. Furthermore, we use the $(p,I)$-complete
faithfully flat topology rather than the $(p,I)$-complete étale topology, and we require the log structures to be integral.
###### Proposition 5.0.8.
$(X,M_{X})_{{{\mathbbl{\Delta}}}_{\log}}$ forms a site.
###### Proof.
Similar to [BS22, Corollary 3.12], we need to show that for a given diagram
${(C,I,M_{C})}$${(A,I,M_{A})}$${(B,I,M_{B})}$$\scriptstyle{c}$$\scriptstyle{b}$
in $(X,M_{X})_{{{\mathbbl{\Delta}}}_{\log}}$ such that $b$ is a cover, the pushout of $b$ along $c$ is a covering. From the argument in $loc.cit.$, we know that for the underlying prisms, the pushout of $b$ along $c$ is the $(p,I)$-completed tensor product $D=C\widehat{\otimes}_{A}B$, and $(D,I)$ is a bounded prism covering $(C,I)$ in the $(p,I)$-complete faithfully flat topology. We give $D$ the log structure $M_{D}$ defined by viewing $\mathop{\rm Spf}\nolimits(D)$ as the fiber product via [Ogu18, Proposition 2.1.2]; then $(C,M_{C})\to(D,M_{D})$ is a strict morphism by Remark 2.1.3 of $loc.cit.$, so in particular $M_{D}$ is integral since $M_{C}$ is. For the same reason,
$(\mathop{\rm Spf}\nolimits(D/ID),f_{D}^{\ast}M_{X})\to(\mathop{\rm Spf}\nolimits(D),M_{D})^{a}$
is strict, since it is the base change of a strict morphism. It is an exact closed immersion since the pushout of a surjective map of monoids is again surjective. ∎
###### Example 5.0.9.
[Kos21, Example 3.4]
1. (1)
Let $(A,(E))$ be the Breuil-Kisin prism; then we can define a prelog structure on $(A,(E))$ by $\mathbb N\to A;n\mapsto u^{n}$. One has that $(A,(E),\mathbb N)^{a}$ is in $(X,M_{X})_{{{\mathbbl{\Delta}}}_{\log}}$, where (3) in Definition 5.0.6 follows from the fact that the prelog structures $\mathbb N\to R\to A/(E)$ and $\mathbb N\to A\to A/(E)$ induce the same log structure.
2. (2)
For any prism $(B,J)$ over $(A,(E))$, it has a natural prelog structure $\mathbb N\to A\to B$, and similar to $(1)$, $(B,J,\mathbb N)^{a}$ is in $(X,M_{X})_{{{\mathbbl{\Delta}}}_{\log}}$.
3. (3)
A special case of (2) is $(B,J)=(A_{\mathop{\rm perf}\nolimits},(E))$, the perfection of $(A,(E))$. One has that the prelog structure in (2) can be defined directly by $1\mapsto[\varpi^{\flat}]$. Moreover, $(A,(E),\mathbb N)^{a}\to(B,J,\mathbb N)^{a}$ is a covering of log prisms in $(X,M_{X})_{{{\mathbbl{\Delta}}}_{\log}}$.
Actually, the logarithmic structure of any log prism in $(X,M_{X})_{{{\mathbbl{\Delta}}}_{\log}}$ is the log structure associated to a prelog structure defined by $\mathbb N$. We thank Teruhisa Koshikawa for letting us know the following lemma.
###### Lemma 5.0.10.
For any log prism $(B,J,M_{B})$ inside
$(X,M_{X})_{{{\mathbbl{\Delta}}}_{\log}}$, $(B,M_{B})^{a}$ admits a chart
$\mathbb N\to B$ defined by $n\mapsto u_{B}^{n}$ for some $u_{B}\in B$
satisfying $u_{B}\equiv\varpi\mod J$.
###### Proof.
For any log prism $(B,J,M_{B})$ inside
$(X,M_{X})_{{{\mathbbl{\Delta}}}_{\log}}$, we have
$(\mathop{\rm Spf}\nolimits(B/J),f_{B}^{\ast}M_{X})\to(\mathop{\rm
Spf}\nolimits(B),M_{B})^{a}$
defines an exact closed immersion of log formal schemes. So by the proof of
[Kos21, Proposition 3.7], if we let $N^{a}_{B/J}:=\Gamma(\mathop{\rm
Spf}\nolimits(B/J),\underline{\mathbb N}^{a})$ for the prelog structure
$\mathbb N\to\mathcal{O}_{K}\to B/J$ induced from the given prelog structure
on $\mathcal{O}_{K}$, then the fiber product $M_{B}\times_{N^{a}_{B/J}}\mathbb
N$ is a chart for $(B,M_{B})^{a}$. Moreover, since we assume $M_{B}$ to be
integral, we have $(\mathop{\rm
Spf}\nolimits(B/J),f_{B}^{\ast}M_{X})\to(\mathop{\rm
Spf}\nolimits(B),M_{B})^{a}$ is a log thickening with ideal $J$ in the sense
of [Ogu18, Definition 2.1.1.], and one can show
$M_{B}\times_{N^{a}_{B/J}}\mathbb N\simeq\mathbb N\times(1+J)$. Now
$(1+J)^{\times}=(1+J)$, so
$\mathbb N\to\mathbb N\times(1+J)\simeq M_{B}\times_{N^{a}_{B/J}}\mathbb N\to
B$
is also a chart for $(B,M_{B})^{a}$, and this prelog structure is given by $n\mapsto u_{B}^{n}$ for some $u_{B}\in B$ whose image in $B/J$ coincides with the image of $\varpi$ under $\mathcal{O}_{K}\to B/J$. ∎
In the rest of this section, we will try to generalize the results we proved in §4.1-§4.3 to the logarithmic prismatic site.
###### Lemma 5.0.11.
1. (1)
For
$(A,I_{A},M_{A})^{a},(B,I_{B},M_{B})^{a}\in(X,M_{X})_{{{\mathbbl{\Delta}}}_{\log}}$
such that $M_{A},M_{B}$ are integral and $(A,M_{A})\to(A/I_{A},\mathbb N)$ and
$(B,M_{B})\to(B/I_{B},\mathbb N)$ are exact surjective, there is a prelog
prism $(C,I_{C},M_{C})$ with integral log structure that is universal in the
sense that the diagram
${(A,I_{A},M_{A})}$${(C,I_{C},M_{C})}$${(B,I_{B},M_{B})}$
is initial in the category of diagrams
${(A,I_{A},M_{A})}$${(D,I_{D},M_{D})}$${(B,I_{B},M_{B})}$
of prelog prisms over $(R,\mathbb N)$ such that $(D,M_{D})\to(D/I_{D},\mathbb N)$ is exact surjective.
2. (2)
If $(C,I_{C})$ in (1) is bounded, then $(C,I_{C},M_{C})^{a}$ is the product of
$(A,I_{A},M_{A})^{a}$ and $(B,I_{B},M_{B})^{a}$ inside
$(X,M_{X})_{{{\mathbbl{\Delta}}}_{\log}}$.
3. (3)
If $(A,I_{A},M_{A})^{a},(B,I_{B},M_{B})^{a}$ in (1) are in
$(X,M_{X})_{{{\mathbbl{\Delta}}}_{\log}}^{\mathop{\rm perf}\nolimits}$, let $(C_{\mathop{\rm perf}\nolimits},I_{C})$ be the perfection of $(C,I_{C})$
defined in (1). Let $(C_{\mathop{\rm perf}\nolimits},I_{C},M_{C})$ be the
prelog prism with prelog structure induced from $C$. Then $(C_{\mathop{\rm
perf}\nolimits},I_{C},M_{C})^{a}$ is the product of $(A,I_{A},M_{A})^{a}$ and
$(B,I_{B},M_{B})^{a}$ in $(X,M_{X})_{{{\mathbbl{\Delta}}}_{\log}}^{\mathop{\rm
perf}\nolimits}$.
###### Proof.
Let
$(A,I_{A},M_{A}),(B,I_{B},M_{B})\in(X,M_{X})_{{{\mathbbl{\Delta}}}_{\log}}$,
and define $C_{0}$ to be the $(p,I_{A},I_{B})$-adic completion of $A\otimes_{W(k)}B$; let $J$ be the kernel of
$C_{0}\to A/I_{A}\widehat{\otimes}_{R}B/I_{B}.$
Then $(C_{0},J,M_{A}\times M_{B})$ is a $\delta_{\log}$-triple over $(A,I_{A},M_{A})$, and $(C_{0},J,M_{A}\times M_{B})\to(C_{0}/J,\mathbb N)$ is surjective. Then we can apply [Kos21, Proposition 3.6] to get a universal prelog prism $(C,I_{C},M_{C})$ over $(A,I_{A},M_{A})$ and $(B,I_{B},M_{B})$ such that $(C,M_{C})\to(C/J,\mathbb N)$ is exact surjective. Recall that in the proof of [Kos21, Proposition 3.6], we first construct a $\delta_{\log}$-triple $(C^{\prime},J^{\prime},M_{C}^{\prime})$ which is universal in the sense that it is a $\delta_{\log}$-triple over both $(A,I_{A},M_{A})$ and $(B,I_{B},M_{B})$ such that $C^{\prime}/J^{\prime}$ is over $A/I_{A}$ and $B/I_{B}$ as $R$-algebras and $(C^{\prime},M_{C}^{\prime})\to(C^{\prime}/J^{\prime},\mathbb N)$ is exact surjective. Then we take the prismatic envelope with respect to $(A,I_{A})\to(C^{\prime},J^{\prime})$ to get $(C,I_{C})$, and one can check that this $(C,I_{C},M_{C})$ satisfies the universal property. For (2), when $(C,I_{C})$ is bounded, the fact that $(C,I_{C},M_{C})^{a}$ is the product of $(A,I_{A},M_{A})^{a}$ and $(B,I_{B},M_{B})^{a}$ inside $(X,M_{X})_{{{\mathbbl{\Delta}}}_{\log}}$ follows from Proposition 3.7 of $loc.cit.$. For (3), $(C_{\mathop{\rm perf}\nolimits},I_{C})$ is automatically bounded, and one can check that $(C_{\mathop{\rm perf}\nolimits},I_{C})$ is universal using exactly the same proof as that of Proposition 3.7 of $loc.cit.$. ∎
We thank Koji Shimizu for the following lemma on $A^{(2)}_{\mathop{\rm
st}\nolimits}$.
###### Lemma 5.0.12.
Let $(A,I,\mathbb N)^{a}$ be the Breuil-Kisin prism defined in $(1)$ of
Example 5.0.9, then the self-product (resp. self-triple product) of
$(A,I,\mathbb N)^{a}$ in $(X,M_{X})_{{{\mathbbl{\Delta}}}_{\log}}$ exist.
Moreover, if we let $(A^{\langle 2\rangle},I,M^{2})^{a}$ (resp. $(A^{\langle
3\rangle},I,M^{3})^{a}$) be self-product (resp. self-triple product) of
$(A,I,\mathbb N)^{a}$, then $A^{\langle i\rangle}\simeq A^{(i)}_{\mathop{\rm
st}\nolimits}$ for $i=2,3$.
###### Proof.
By our construction in Lemma 5.0.11, $(A^{\langle 2\rangle},I,M)$ is the prelog prismatic envelope $(C,I_{C},M_{C})$ with respect to
$(A,(E),\mathbb N)\to(C_{0},J,\mathbb N^{2})\text{ and }(C_{0}/J,\mathbb N^{2})\to(R,\mathbb N)$
where $C_{0}=W[\\![u,v]\\!]$ and $J=(E(u),u-v)$, with the prelog structure given by $\beta:(1,0)\mapsto u,(0,1)\mapsto v$. The prelog prismatic envelope is constructed using the technique of exactification: consider $\pi:(C_{0},\mathbb N^{2})\to(R=C/J,\mathbb N)$, where the map between log structures is given by $\pi_{\log}:\mathbb N\times\mathbb N\to\mathbb N;(m,n)\mapsto m+n$. Here $\pi_{\log}$ is surjective but not exact, so to construct the exactification of $\pi:(C,\mathbb N^{2})\to(R,\mathbb N)$ (cf. [Kos21, Construction 2.18]), we first note that the exactification of $\pi_{\log}$ is
$\alpha:M^{2}\to\mathbb N\quad\text{ given by }\quad(m,n)\mapsto m+n,$
where $M^{2}=\\{(m,n)\in\mathbb Z\times\mathbb Z\,|\,m+n\in\mathbb N\\}$.
Since $M^{2}$ is generated by $(-1,1)$, $(1,-1)$, $(0,1)$ and $(1,0)$, the exactification of $\pi$ is
$\Big{(}W(k)[\\![u,v]\\!]\big{[}\frac{v}{u},\frac{u}{v}\big{]}^{\wedge}_{(p,J^{\prime})},J^{\prime},M^{2};\alpha:(1,0)\mapsto{u},(0,1)\mapsto
v,(1,-1)\mapsto\frac{u}{v},(-1,1)\mapsto\frac{v}{u}\Big{)}$
where $J^{\prime}:=\mathop{\rm
ker}\nolimits(W(k)[\\![u,v]\\!]\big{[}\frac{v}{u},\frac{u}{v}\big{]}\to R)$.
We have the $(p,J^{\prime})$-adic completion of
$W(k)[\\![u,v]\\!]\big{[}\frac{v}{u},\frac{u}{v}\big{]}$ is
W(k)[\\![u,\frac{v}{u}-1]\\!]$. Then we take the prismatic envelope of
$(A,(E))\to(W(k)[\\![u,\frac{v}{u}-1]\\!],(E,\frac{v}{u}-1)).$ One can check
$W(k)[\\![u,\frac{v}{u}-1]\\!]\big{\\{}\frac{v/u-1}{E(u)}\big{\\}}^{\wedge}_{\delta}\simeq
A_{\mathop{\rm st}\nolimits}^{(2)}$
directly from the definition of $A_{\mathop{\rm st}\nolimits}^{(2)}$.
Similarly, we can show that $A^{\langle 3\rangle}\simeq A^{(3)}_{\mathop{\rm st}\nolimits}$, which is also bounded. ∎
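To spell out the completion claim in the proof of Lemma 5.0.12: set $t=\frac{v}{u}-1$, so that $v=u(1+t)$ and $\frac{u}{v}=(1+t)^{-1}$. One checks that $J^{\prime}=(E(u),t)$, and since $1+t$ is already invertible in $W(k)[\\![u,t]\\!]$, the generators $v$, $\frac{v}{u}$ and $\frac{u}{v}$ all lie in $W(k)[\\![u,t]\\!]$, whence
$W(k)[\\![u,v]\\!]\big[\frac{v}{u},\frac{u}{v}\big]^{\wedge}_{(p,J^{\prime})}\simeq W(k)[\\![u,t]\\!]=W(k)[\\![u,\frac{v}{u}-1]\\!].$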
The following is one of our key observations.
###### Lemma 5.0.13.
We have $(A^{\langle 2\rangle})_{\mathop{\rm
perf}\nolimits}\simeq(A^{(2)})_{\mathop{\rm perf}\nolimits}$.
###### Proof.
Let $u_{1},u_{2}$ be the images of $u$ under the two natural maps
$i_{j}:A_{\mathop{\rm perf}\nolimits}\to(A^{(2)})_{\mathop{\rm
perf}\nolimits}$ for $j=1,2$. We claim that $u_{2}/u_{1}$ is inside
$(A^{(2)})_{\mathop{\rm perf}\nolimits}$.
Firstly, we have already shown $A_{\mathop{\rm perf}\nolimits}\simeq
W(\widehat{\mathcal{O}}_{K_{\infty}}^{\flat})$ and $u=[\varpi^{\flat}]$, here
$\varpi^{\flat}=(\varpi_{n})$ with $\\{\varpi_{n}\\}_{n\geq 0}$ being a
compatible system of $p^{n}$-th roots of $\varpi$ inside
$\mathcal{O}_{\widehat{K}_{\infty}}$, and
$(\varpi_{n})\in\mathcal{O}_{\widehat{K}_{\infty}}^{\flat}$ via the
identification $\mathcal{O}_{\widehat{K}_{\infty}}^{\flat}\simeq\lim_{x\mapsto
x^{p}}\mathcal{O}_{\widehat{K}_{\infty}}$. Let $S=(A^{(2)})_{\mathop{\rm perf}\nolimits}/(E)$; this is an integral perfectoid ring over
$\mathcal{O}_{K}$ in the sense of [BMS18]. We have
$S^{\flat}\simeq(A^{(2)})_{\mathop{\rm perf}\nolimits}/(p)$. For $j=1,2$,
define $\varpi_{j}^{\flat}=u_{j}\mod(p)\in S^{\flat}$, then we have
$u_{j}=[\varpi_{j}^{\flat}]$ for $j=1,2$.
Recall that in §2.1 we have $z=\frac{y-x}{E(x)}$ in $A^{(2)}$. Since $E(x)\equiv x^{e}\mod p$, we have $x(1+x^{e-1}z)\equiv y\mod p$. If we denote by $\iota:A^{(2)}\to(A^{(2)})_{\mathop{\rm perf}\nolimits}$ the natural map, then $\iota(x)=u_{1}$ and $\iota(y)=u_{2}$ in our definition, and
$u_{1}(1+u_{1}^{e-1}\iota(z))\equiv u_{2}\mod p$ inside
$S^{\flat}=A^{(2)}_{\mathop{\rm perf}\nolimits}/(p)$. This is the same as
$\varpi_{1}^{\flat}\mu=\varpi_{2}^{\flat}$ with
$\mu=(1+u_{1}^{e-1}\iota(z))\mod p$ in $S^{\flat}$. So we have
$[\mu]u_{1}=[\mu][\varpi_{1}^{\flat}]=[\varpi_{2}^{\flat}]=u_{2}$, which
proves our claim.
Now by symmetry, $u_{1}/u_{2}$ is also inside $(A^{(2)})_{\mathop{\rm
perf}\nolimits}$, so $u_{1}/u_{2}$ is a unit in $(A^{(2)})_{\mathop{\rm
perf}\nolimits}$. So we can give $(A^{(2)})_{\mathop{\rm perf}\nolimits}$ a
prelog structure
$\alpha:M^{2}\to(A^{(2)})_{\mathop{\rm perf}\nolimits}\text{ with
}(1,-1)\mapsto\frac{u_{1}}{u_{2}},(-1,1)\mapsto\frac{u_{2}}{u_{1}},(1,0)\mapsto{u_{1}},(0,1)\mapsto{u_{2}}$
with the monoid $M^{2}$ defined as in the proof of Lemma 5.0.12, then
$((A^{(2)})_{\mathop{\rm perf}\nolimits},(E),M^{2})^{a}$ is in
$X_{{{\mathbbl{\Delta}}}_{\log}}^{\mathop{\rm perf}\nolimits}$.
One can check that the maps $i_{1},i_{2}:(A,(E))\to(A^{(2)},(E))\to((A^{(2)})_{\mathop{\rm perf}\nolimits},(E))$ induce maps $i_{1},i_{2}:(A_{\mathop{\rm perf}\nolimits},(E),\mathbb N)\to((A^{(2)})_{\mathop{\rm perf}\nolimits},(E),M^{2})$ of prelog prisms. So by Lemma 5.0.12, there is a unique map $(A^{\langle 2\rangle},I,M^{2})\to((A^{(2)})_{\mathop{\rm perf}\nolimits},(E),M^{2})$, which factors through $((A^{\langle 2\rangle})_{\mathop{\rm perf}\nolimits},(E),M^{2})$. So it induces a map $((A^{\langle 2\rangle})_{\mathop{\rm perf}\nolimits},(E),M^{2})\to((A^{(2)})_{\mathop{\rm perf}\nolimits},(E),M^{2})$ inside $X_{{{\mathbbl{\Delta}}}_{\log}}^{\mathop{\rm perf}\nolimits}$. On the other hand, by the universal property of $A^{(2)}$, there is a map $(A^{(2)})_{\mathop{\rm perf}\nolimits}\to(A^{\langle 2\rangle})_{\mathop{\rm perf}\nolimits}$ that fits into the coproduct diagram in $X_{{{\mathbbl{\Delta}}}}^{\mathop{\rm perf}\nolimits}$, the full subcategory of $X_{{\mathbbl{\Delta}}}$ consisting of perfect prisms.
One can check that the composition $\eta:((A^{(2)})_{\mathop{\rm perf}\nolimits},(E))\to((A^{\langle 2\rangle})_{\mathop{\rm perf}\nolimits},(E))\to((A^{(2)})_{\mathop{\rm perf}\nolimits},(E))$ satisfies $\eta\circ i_{j}=i_{j}$ for the maps $i_{1},i_{2}:(A_{\mathop{\rm perf}\nolimits},(E))\to((A^{(2)})_{\mathop{\rm perf}\nolimits},(E))$. Such a map is unique inside $X_{{{\mathbbl{\Delta}}}}^{\mathop{\rm perf}\nolimits}$, so $\eta=\mathop{\rm id}\nolimits_{((A^{(2)})_{\mathop{\rm perf}\nolimits},(E))}$.
On the other hand, the composition
$\eta^{\prime}:((A^{\langle 2\rangle})_{\mathop{\rm perf}\nolimits},(E),M^{2})^{a}\to((A^{(2)})_{\mathop{\rm perf}\nolimits},(E),M^{2})^{a}\to((A^{\langle 2\rangle})_{\mathop{\rm perf}\nolimits},(E),M^{2})^{a}$
satisfies $\eta^{\prime}\circ i^{\prime}_{j}=i^{\prime}_{j}$ for the maps $i^{\prime}_{1},i^{\prime}_{2}:(A_{\mathop{\rm perf}\nolimits},(E),\mathbb N)^{a}\to((A^{\langle 2\rangle})_{\mathop{\rm perf}\nolimits},(E),M^{2})^{a}$ induced from $i^{\prime}_{1},i^{\prime}_{2}:(A,(E),\mathbb N)\to(A^{\langle 2\rangle},(E),M^{2})$. Such a map is also unique inside $X_{{{\mathbbl{\Delta}}}_{\log}}^{\mathop{\rm perf}\nolimits}$, so $\eta^{\prime}=\mathop{\rm id}\nolimits_{((A^{\langle 2\rangle})_{\mathop{\rm perf}\nolimits},(E),M^{2})^{a}}$. So in particular we have $(A^{\langle 2\rangle})_{\mathop{\rm perf}\nolimits}\simeq(A^{(2)})_{\mathop{\rm perf}\nolimits}$. ∎
###### Theorem 5.0.14.
The category of étale $\varphi$-modules over $A[1/E]^{\wedge}_{p}$ with descent data over $A_{\mathop{\rm st}\nolimits}^{(2)}[1/E]^{\wedge}_{p}$ is equivalent to the category of lattices in representations of $G_{K}$. Moreover, for all $\gamma\in\hat{G}$, we can define the evaluation map
$e_{\gamma}:A_{\mathop{\rm st}\nolimits}^{(2)}[1/E]^{\wedge}_{p}\to W(\hat{L}^{\flat})$
such that Lemma 4.2.12 is still valid. Moreover, the $\mathbb Q$-isogeny version of this theorem also holds.
###### Remark 5.0.15.
The above theorem should be related to the étale comparison theorem in the log
prismatic setting, which has not yet been studied in [Kos21].
Moreover, a log version of Lemma 4.1.8 also holds. We thank Teruhisa
Koshikawa for hints leading to the following result.
###### Proposition 5.0.16.
The sheaf represented by $(A,(E),\mathbb N)^{a}$ covers the final object
$\ast$ in $\mathop{\rm
Shv}\nolimits((X,M_{X})_{{{\mathbbl{\Delta}}}_{\log}})$.
###### Proof.
For any log prism $(B,J,M_{B})$, by Lemma 5.0.10, we can assume
$(B,J,M_{B})^{a}=(B,J,\mathbb N)^{a}$, with prelog structure defined by
$n\mapsto u_{B}^{n}$ with $u_{B}\equiv\varpi\mod J$.
Using deformation theory, there is a unique $W(k)$-algebra structure
on $B$, and we define
$C=B[\\![u]\\!][\frac{u_{B}}{u},\frac{u}{u_{B}}]\\{\frac{u_{B}/u-1}{J}\\}^{\wedge}_{\delta}$,
where the completion is taken in the $(p,J)$-adic topology. As in the
proof of Lemma 5.0.12, $(C,JC,\mathbb N)^{a}$ is the product of
$(A,(E),\mathbb N)^{a}$ and $(B,J,\mathbb N)^{a}$ inside
$(X,M_{X})_{{{\mathbbl{\Delta}}}_{\log}}$. Moreover, $B\to C$ is
$(p,J)$-completely flat by [BS22, Proposition 3.13]. It remains to show that
$(B,J)\to(C,J)$ is a covering, i.e., $B\to C$ is $(p,J)$-completely faithfully
flat. Let
$C^{nc}:=B[\\![u]\\!][\frac{u_{B}}{u},\frac{u}{u_{B}}]\\{\frac{u_{B}/u-1}{J}\\}_{\delta}$
be the non-complete version of $C$, so that the $(p,J)$-adic completion of
$C^{nc}$ is $C$. Now we just need to show that the flat ring map $B/(p,J)\to
C/(p,J)=C^{nc}/(p,J)$ is also faithful.
We claim that $C/(p,J)$ is free over $B/(p,J)$. One has $JC=E(u)C$, and
$(p,J)=(p,E)=(p,J,E)$ in $C$. So $C/(p,J)=C^{nc}/(p,J)$ is equal to
$B[\\![u]\\!][\frac{u_{B}}{u},\frac{u}{u_{B}}][\delta^{i}(z),i\geq
0]/\left(p,J,E,Ez=\frac{u_{B}}{u}-1,\delta^{i}(\frac{u_{B}}{u}-1))=\delta^{i}(Ez),i\geq
1\right).$
After modulo $(p,J)$, the above is the direct limit of
$B/(p,J)[\delta^{i}(z)]/\left(\delta^{i}(\frac{u_{B}}{u}-1))=\delta^{i}(Ez)\mod(p,E,J)\right)$
for $i\geq 0$.
Now we use Lemma 2.2.4 to compute
$\delta^{i}(\frac{u_{B}}{u}-1)=\delta^{i}(Ez)\mod(p,E,J)$. We keep the
notation of Lemma 2.2.4; by induction, we have $b_{n}=0\mod(p,E)$. Using that
$a_{p}^{(j)}\in A_{0}^{\times}$,
$\delta^{i}(\frac{u_{B}}{u}-1)=\delta^{i}(Ez)\mod(p,E,J)$ gives a relation
$(z_{i-1})^{p}=\sum\limits_{j=0}^{p-1}\tilde{a}_{j}^{(i)}(z_{i-1})^{j}$ where
$z_{i}=\mathfrak{z}_{i}\mod(p,J,E)$ and $\tilde{a}_{j}^{(i)}\in
B/(p,J)[z_{0},z_{1},\dots,z_{i-2}]$. In summary, we have
$C/(p,J)=B/(p,J)[z_{i},i\geq
0]\Bigg{/}\left((z_{i})^{p}-\sum\limits_{j=0}^{p-1}\tilde{a}_{j}^{(i)}(z_{i})^{j},i\geq
1\right)$
which is free over $B/(p,J)$. ∎
###### Definition 5.0.17.
1. (1)
A prismatic crystal over $(X,M_{X})_{{{\mathbbl{\Delta}}}_{\log}}$ in finite
locally free $\mathcal{O}_{{{\mathbbl{\Delta}}}_{\log}}$-modules is a finite
locally free $\mathcal{O}_{{{\mathbbl{\Delta}}}_{\log}}$-module
$\mathfrak{M}_{{{\mathbbl{\Delta}}}}$ such that for all morphisms
$f:(A,I,M_{A})\to(B,J,M_{B})$ of log prisms, it induces an isomorphism:
$f^{\ast}\mathfrak{M}_{{{\mathbbl{\Delta}}},A}:=\mathfrak{M}_{{{\mathbbl{\Delta}}}}((A,I,M_{A}))\otimes_{A}B\simeq\mathfrak{M}_{{{\mathbbl{\Delta}}},B}:=\mathfrak{M}_{{{\mathbbl{\Delta}}}}((B,J,M_{B}))$
2. (2)
A prismatic $F$-crystal over $(X,M_{X})_{{{\mathbbl{\Delta}}}_{\log}}$ of
height $h$ (in finite locally free
$\mathcal{O}_{{{\mathbbl{\Delta}}}_{\log}}$-modules) is a prismatic crystal
$\mathfrak{M}_{{{\mathbbl{\Delta}}}}$ in finite locally free
$\mathcal{O}_{{{\mathbbl{\Delta}}}_{\log}}$-modules together with a
$\varphi_{\mathcal{O}_{{{\mathbbl{\Delta}}}_{\log}}}$-semilinear endomorphism
$\varphi_{\mathfrak{M}_{{{\mathbbl{\Delta}}}}}$ of the
$\mathcal{O}_{{{\mathbbl{\Delta}}}_{\log}}$-module
$\mathfrak{M}_{{{\mathbbl{\Delta}}}}:\mathfrak{M}_{{{\mathbbl{\Delta}}}}\to\mathfrak{M}_{{{\mathbbl{\Delta}}}}$
such that the cokernel of the linearization
$\varphi^{\ast}\mathfrak{M}_{{{\mathbbl{\Delta}}}}\to\mathfrak{M}_{{{\mathbbl{\Delta}}}}$
is killed by $\mathcal{I}_{{\mathbbl{\Delta}}}^{h}$.
In particular, with the help of Theorem 5.0.14 and Proposition 5.0.16, a direct
translation of the proofs in §4.3 with $A^{(2)}$ replaced by $A^{(2)}_{\mathop{\rm
st}\nolimits}$ shows the following theorem.
###### Theorem 5.0.18.
The category of prismatic $F$-crystals over
$(X,M_{X})_{{{\mathbbl{\Delta}}}_{\log}}$ of height $h$ is equivalent to the
category of lattices in semi-stable representations of $G_{K}$ with Hodge-Tate
weights between $0$ and $h$.
## 6\. Some discussions on base rings
In this section, we show that our base ring assumed at the beginning of §2
covers many situations of base rings used in [Kim14] and [Bri08].
Let $K$ be a complete discrete valuation field with perfect residue field $k$,
let $K_{0}=W[\frac{1}{p}]$ with $W=W(k)$, fix a uniformizer
$\varpi\in\mathcal{O}_{K}$, and let $E(u)\in W[u]$ be the minimal polynomial of
$\varpi$ over $K_{0}$. Let $R$ be a normal domain that is a
$p$-complete flat $\mathcal{O}_{K}$-algebra, complete with respect to the
$J$-adic topology for an ideal $J=(\varpi,{t_{1}},\ldots,{t_{d}})$ of $R$
containing $\varpi$. We also assume that $\overline{R}=R/(\varpi)$ is a finitely
generated $k$-algebra with a _finite $p$-basis_ in the sense of [dJ95, §1.1].
###### Lemma 6.0.1 ([Kim14], Lemma 2.3.1 and Lemma 2.3.4).
1. (1)
In the above setting, there is a $p$-adically formally smooth flat $W$-algebra
$R_{0}$ equipped with a Frobenius lift $\varphi_{0}$ such that
$R_{0}/(p)\simeq\overline{R}$. Moreover, let $J_{0}$ be the preimage of
$\overline{J}$ inside $R_{0}$; then $R_{0}$ is $J_{0}$-adically complete, and
under this topology, $R_{0}$ is formally smooth.
2. (2)
$R_{0}/(p)\xrightarrow{\sim}R/(\varpi)$ lifts to a $W$-algebra morphism
$R_{0}\to R$ and the induced $\mathcal{O}_{K}$-algebra morphism
$\mathcal{O}_{K}\otimes_{W}R_{0}\to R$ is an isomorphism. Moreover this
isomorphism is continuous with respect to the $J_{0}$-adic topology.
Let $(R_{0},\varphi_{R_{0}})$ denote a flat $W$-lift of $R/(\varpi)$ obtained
from the above lemma. Then $J_{0}=(p,t_{1},\ldots,t_{d})\subset
R_{0}$, and we write
$\overline{J}=(\overline{t_{1}},\ldots,\overline{t_{d}})\subset\overline{R}$.
###### Definition 6.0.2.
Let $R_{0}$ be a $p$-complete $\mathbb Z_{p}$-algebra. We say $R_{0}$
satisfies the “refined almost étaleness” assumption, or simply the RAE
assumption, if $\hat{\Omega}_{R_{0}}=\oplus_{i=1}^{m}R_{0}dT_{i}$ with $T_{i}\in
R_{0}^{\times}$, where $\hat{\Omega}_{R_{0}}$ is the module of $p$-adically
continuous Kähler differentials.
The following are examples of $R_{0}$ and $R$ which satisfy the assumptions of
Lemma 6.0.1 and the RAE assumption.
###### Example 6.0.3.
1. (1)
If $R/(\varpi)$ is a complete noetherian regular local ring with residue
field $k$, then the Cohen structure theorem implies
$R/(\varpi)=k[\\![\overline{x_{1}},\ldots,\overline{x_{d}}]\\!]$. In this
case, $R_{0}=W[\\![x_{1},\ldots,x_{d}]\\!]$ and
$J_{0}=(p,x_{1},\ldots,x_{d})$. Then $R=W[\\![x_{1},\ldots,x_{d}]\\!][u]/E$,
where $E\in W[u]$ is an Eisenstein polynomial.
2. (2)
Let $R_{0}=W(k)\langle t_{1}^{\pm 1},\dots,t_{m}^{\pm 1}\rangle$ and
$J_{0}=(p)$; in this example, $\overline{R}=k[\overline{t}_{1}^{\pm
1},\dots,\overline{t}_{m}^{\pm 1}]$ is not local.
3. (3)
An unramified complete DVR $(R_{0},p)$ with residue field $k$ so that
$[k:k^{p}]<\infty$.
4. (4)
Note that the Frobenius lifts in Lemma 6.0.1 are not unique. In (2) we can
choose $\varphi_{R_{0}}(t_{i})=t_{i}^{p}$. In (1), we can choose
$\varphi_{R_{0}}(x_{i})=x_{i}^{p}$ or
$\varphi_{R_{0}}(x_{i})=(x_{i}+1)^{p}-1$.
Let $R_{0}$ be a $p$-complete algebra which satisfies the RAE assumption. Set
$\breve{R}_{0}=W\langle t_{1},\dots,t_{m}\rangle$ and define $f:\breve{R}_{0}\to
R_{0}$ by sending $t_{i}$ to $T_{i}$.
###### Proposition 6.0.4.
Assume that $R_{0}$ is a $p$-complete integral domain which admits a finite
$p$-basis and satisfies the RAE assumption. Then $f$ is $p$-adically formally
étale.
###### Proof.
We thank Wansu Kim for providing the following proof. By a standard technique
using [Ill71, Ch.III, Corollaire 2.1.3.3] (e.g., see the proof in [Kim14, Lem.
2.3.1]), it suffices to show that the cotangent complex $\mathbb
L_{R_{0}/\breve{R}_{0}}$ is acyclic. Since both $R_{0}$ and $\breve{R}_{0}$
are $\mathbb Z_{p}$-flat, it suffices to show that $\mathbb
L_{R_{1}/\breve{R}_{1}}$ is acyclic, where $R_{1}=R_{0}/pR_{0}$ and
$\breve{R}_{1}=\breve{R}_{0}/p\breve{R}_{0}$. Since $R_{0}$ has a finite
$p$-basis, by [dJ95, Lem. 1.1.2], $\mathbb L_{R_{1}/k}\simeq\Omega_{R_{1}/k}$.
Note that the maps $k\to\breve{R}_{1}\to R_{1}$ induce a fiber sequence
$\mathbb L_{\breve{R}_{1}/k}\otimes^{\mathbb L}_{\breve{R}_{1}}R_{1}\to\mathbb
L_{R_{1}/k}\to\mathbb L_{R_{1}/\breve{R}_{1}}$
Since $\mathbb L_{\breve{R}_{1}/k}\simeq\Omega_{\breve{R}_{1}/k}$ and
$\Omega_{\breve{R}_{1}/k}\simeq\Omega_{R_{1}/k}$ by the RAE condition, we
conclude that $\mathbb L_{R_{1}/\breve{R}_{1}}=0$, as required. ∎
Let us end with a discussion of our base rings and the base rings used in
[Bri08]. As explained at the beginning of [Bri08, Chap. 2], the base ring
$R_{0}$ of [Bri08] is obtained from $W\langle t_{1}^{\pm 1},\ldots,t_{m}^{\pm
1}\rangle$ by a finite number of iterations of certain operations and is also
assumed to satisfy certain properties. By Prop. 2.0.2 _loc. cit._ , we see
that $R_{0}$ has a finite $p$-basis and satisfies the RAE assumption. So the
base ring $R_{0}$ of [Bri08] also satisfies the requirement that $f:W\langle
t_{1},\ldots,t_{m}\rangle\to R_{0}$ is formally étale, by Proposition 6.0.4.
## References
* [AB21] Johannes Anschütz and Arthur-César Le Bras, _Prismatic Dieudonné theory_ , 2021, arXiv:1907.10525.
* [Ber04] Laurent Berger, _Limites de représentations cristallines_ , Compos. Math. 140 (2004), no. 6, 1473–1498. MR 2098398 (2006c:11138)
* [Bha18] Bhargav Bhatt, _Prismatic cohomology_ , 2018, Eilenberg Lecture Notes at Columbia University.
* [BMS18] Bhargav Bhatt, Matthew Morrow, and Peter Scholze, _Integral $p$-adic Hodge theory_, Publ. Math. Inst. Hautes Études Sci. 128 (2018), 219–397. MR 3905467
* [BMS19] by same author, _Topological Hochschild homology and integral $p$-adic Hodge theory_, Publ. Math. Inst. Hautes Études Sci. 129 (2019), 199–310. MR 3949030
* [Bre02] Christophe Breuil, _Integral $p$-adic Hodge theory_, Algebraic geometry 2000, Azumino (Hotaka), Adv. Stud. Pure Math., vol. 36, Math. Soc. Japan, Tokyo, 2002, pp. 51–80. MR 1971512 (2004e:11135)
* [Bri08] Olivier Brinon, _Représentations $p$-adiques cristallines et de de Rham dans le cas relatif_, Mém. Soc. Math. Fr. (N.S.) (2008), no. 112, vi+159. MR 2484979
* [BS21] Bhargav Bhatt and Peter Scholze, _Prismatic $F$-crystals and crystalline Galois representations_, 2021, arXiv:2106.14735.
* [BS22] by same author, _Prisms and prismatic cohomology_ , 2022, arXiv:1905.08229.
* [Car13] Xavier Caruso, _Représentations galoisiennes $p$-adiques et $(\varphi,\tau)$-modules_, Duke Math. J. 162 (2013), no. 13, 2525–2607. MR 3127808
* [dJ95] A. J. de Jong, _Crystalline Dieudonné module theory via formal and rigid geometry_ , Inst. Hautes Études Sci. Publ. Math. (1995), no. 82, 5–96 (1996). MR 1383213
* [Du21] Heng Du, _Arithmetic Breuil-Kisin-Fargues modules and several topics in p-adic Hodge theory_ , https://hammer.purdue.edu/articles/thesis/Arithmetic_Breuil-Kisin-Fargues_modules_and_several_topics_in_p-adic_Hodge_theory/14502945, 5 2021.
* [Gao21] Hui Gao, _Breuil-Kisin modules and integral $p$-adic Hodge theory_, 2021, arXiv:1905.08555.
* [Ill71] Luc Illusie, _Complexe cotangent et déformations. I_ , Lecture Notes in Mathematics, Vol. 239, Springer-Verlag, Berlin-New York, 1971. MR 0491680
* [Kim14] Wansu Kim, _The Relative Breuil–Kisin Classification of p-Divisible Groups and Finite Flat Group Schemes_ , International Mathematics Research Notices 2015 (2014), no. 17, 8152–8232.
* [Kis06] Mark Kisin, _Crystalline representations and $F$-crystals_, Algebraic geometry and number theory, Progr. Math., vol. 253, Birkhäuser Boston, Boston, MA, 2006, pp. 459–496. MR MR2263197 (2007j:11163)
* [KL15] Kiran S. Kedlaya and Ruochuan Liu, _Relative $p$-adic Hodge theory: Foundations_ , Société Mathématique de France, 2015.
* [KL19] by same author, _Relative $p$-adic Hodge theory, II: Imperfect period rings_ , 2019, arXiv:1602.06899.
* [Kos21] Teruhisa Koshikawa, _Logarithmic prismatic cohomology I_, 2021, arXiv:2007.14037.
* [KR09] Mark Kisin and Wei Ren, _Galois representations and Lubin-Tate groups_ , Doc. Math. 14 (2009), 441–461. MR 2565906 (2011d:11122)
* [Liu07] Tong Liu, _Torsion $p$-adic Galois representations and a conjecture of Fontaine_, Ann. Sci. École Norm. Sup. (4) 40 (2007), no. 4, 633–674. MR 2191528
* [Liu10] by same author, _A note on lattices in semi-stable representations_ , Math. Ann. 346 (2010), no. 1, 117–138. MR 2558890
* [LL21] Shizhang Li and Tong Liu, _Comparison of prismatic cohomology and derived de rham cohomology_ , 2021, arXiv:2012.14064.
* [Ogu18] Arthur Ogus, _Lectures on logarithmic algebraic geometry_ , Cambridge Studies in Advanced Mathematics, Cambridge University Press, 2018.
* [Oze18] Yoshiyasu Ozeki, _Lattices in crystalline representations and Kisin modules associated with iterate extensions_ , Doc. Math. 23 (2018), 497–541. MR 3846051
* [Sta20] The Stacks Project Authors, _Stacks Project_ , http://stacks.math.columbia.edu, 2020.
* [Wac96] Nathalie Wach, _Représentations $p$-adiques potentiellement cristallines_, Bull. Soc. Math. France 124 (1996), no. 3, 375–400. MR 1415732 (98b:11119)
* [Wu21] Zhiyou Wu, _Galois representations, $(\varphi,{\Gamma})$-modules and prismatic F-crystals_, 2021, arXiv:2104.12105.
# Capturing dual AGN activity and kiloparsec-scale outflows in IRAS 20210+1121

F. G. Saturni, G. Vietri, E. Piconcelli, C. Vignali, M. Bischetti, A. Bongiorno, S. Cazzoli, C. Feruglio, F. Fiore, B. Husemann, and C. Ramos Almeida

Affiliations:
1. INAF – Osservatorio Astronomico di Roma, Via Frascati 33, I-00078 Monte Porzio Catone (RM), Italy (e-mail: [email protected]).
2. ASI – Space Science Data Center, Via del Politecnico snc, I-00133 Roma, Italy.
3. INAF – Istituto di Astrofisica Spaziale e Fisica Cosmica di Milano, Via A. Corti 12, I-20133 Milano, Italy.
4. Università di Bologna, Dip. di Fisica e Astronomia “A. Righi”, Via P. Gobetti 93/2, I-40129 Bologna, Italy.
5. INAF – Osservatorio di Astrofisica e Scienza dello Spazio di Bologna, Via P. Gobetti 93/3, I-40129 Bologna, Italy.
6. INAF – Osservatorio Astronomico di Trieste, Via G. B. Tiepolo 11, I-34143 Trieste, Italy.
7. CSIC – Instituto de Astrofísica de Andalucía, Dep.to de Astronomía Extragaláctica, Glorieta de la Astronomía s/n, E-18008 Granada, Spain.
8. Max-Planck Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany.
9. Instituto de Astrofísica de Canarias, C/ Vía Láctea s/n, E-38205 La Laguna (Tenerife), Spain.
10. Universidad de La Laguna, Dep.to de Astrofísica, Av.da Astrofísico F. Sánchez s/n, E-38206 La Laguna (Tenerife), Spain.

(Received 2021 May 24 / Accepted 2021 Jul 26)
The most standard scenario for the evolution of massive galaxies across cosmic
time assumes an interplay between active galactic nuclei (AGN) feedback, which
injects large amounts of energy into the host environment, and galaxy mergers,
with their ability to trigger massive star formation events and accretion onto
supermassive black holes. Interacting
systems hosting AGN are useful laboratories for obtaining key insights into
both phenomena. In this context, we present an analysis of the optical
spectral properties of IRAS 20210+1121 (I20210), a merging system at
$z=0.056$. According to X-ray data, this object comprises two interacting
galaxies, each hosting an obscured AGN. The optical spectra confirm the
presence of AGN features in both galaxies. In particular, we are able to
provide a Seyfert classification for I20210 North. The spectrum of I20210
South shows broad blueshifted components associated with the most intense
emission lines that indicate the presence of an ionized outflow, for which we
derive a maximum velocity of $\sim$2000 km s-1, an extension of $\sim$2 kpc,
and a mass rate of $\sim$0.6 M⊙ yr-1. We also report the existence of an
ionized nebular component with $v\sim 1000$ km s-1 at $\sim$6.5 kpc southwards
of I20210 South, which can be interpreted as disrupted gas ejected from the
host galaxy by the action of the outflow. I20210 therefore hosts a double
obscured AGN, one of which shows evidence of ongoing AGN-powered outflow
activity. Future spatially resolved spectroscopy will allow for an accurate
mapping of the gas kinematics in this AGN pair and an evaluation of the
impact of the outflow on both the interstellar medium and the galaxy
environment.
accurate mapping of the gas kinematics in this AGN pair and evaluate the
impact of the outflow on both the interstellar medium and the galaxy
environment.
###### Key Words.:
galaxies: active – galaxies: groups: general – galaxies: groups: individual:
IRAS 20210+1121 – galaxies: Seyfert – quasars: emission lines – quasars:
supermassive black holes
## 1 Introduction
The formation and evolution history of present-day massive galaxies is a key
ingredient in a fuller understanding of the Universe. In this context, the
study of processes operating on galaxy-wide scales, such as the presence of
active galactic nuclei (AGN; e.g., Lynden-Bell 1969) or events related to
galaxy mergers (e.g., Hernquist 1989), is crucial to improving our knowledge
of the mechanisms that are able to ignite, maintain, enhance, and quench star
formation in galaxies – thereby shaping the entire environment in the process.
It is now widely accepted that AGN activity and galaxy mergers are among the
most effective phenomena regulating star formation in massive galaxies at
nearly all redshifts: the former injects large amounts of energy able to
drive powerful gas winds in the surrounding environment (e.g.,
Di Matteo et al. 2005; Cattaneo et al. 2009; Fabian 2012), while the latter
acts by triggering massive star formation and starburst events in
the evolutionary history of the Universe in the framework of the $\Lambda$-CDM
model (e.g., Davis et al. 1985; Springel et al. 2005; Croton et al. 2006) and
observations that confirm their contribution to simultaneously shaping
galactic environments (e.g., Sanders et al. 1988; Kormendy & Ho 2013; Ellison
et al. 2019) have confirmed the major role that such processes play in galaxy
formation and evolution.
Figure 1: Image in false colors of the I20210 system, obtained by combining
the grizy exposures of the Pan-STARRS1 survey (PS1; Chambers & Pan-STARRS Team
2016) centered on the sky coordinates of I20210S ($\alpha_{\rm J2000}=$ 20 23
25.4, $\delta_{\rm J2000}=+$11 31 34.7). The isophotes of the XMM-Newton
Optical Monitor (OM) UVW1 mosaic exposure (green solid lines) taken
simultaneously to the X-ray data analyzed by Piconcelli et al. (2010) – along
with some of the associated CCD count levels – are drawn onto the PS1 image to
highlight weak features. The TNG slit direction and position (magenta dot-
dashed lines) are also indicated along with the positions and directions of
the trace centers (white dashed lines) identified to extract the 1D spectrum
of each object.
Within this general picture, however, several key details of how exactly
AGN energetics and mergers directly impact the star formation history of
galaxies are still missing. For instance, it is still unknown whether
radiation-powered gas outflows are ubiquitous to all AGN (e.g., Elvis 2000) or
whether they affect only a fraction of the AGN lifetime (e.g., Farrah et al.
2007), along with what their effectiveness is with regard to altering the
physical and dynamical status of gas reservoirs on several spatial scales
(e.g., Scannapieco & Oh 2004; Cicone et al. 2018). In addition, the relative
dominance of one process onto the other for moving large gas masses and
triggering or quenching star formation has been found to be dependent on the
details of the AGN emission mode, the galaxy’s surrounding environment and its
star formation history (e.g., Hopkins et al. 2006; Heckman & Best 2014).
Therefore, the study of interacting galactic systems with the presence of
multiple AGN (e.g., Veilleux et al. 2002) offers an extremely interesting
possibility for understanding the properties and links between such competing
mechanisms. The most common objects of this kind are dual AGN, in which an
active nucleus is hosted in both members of a pair of interacting galaxies
with separation on the scale of $5$–$20$ kpc (see e.g., De Rosa et al. 2018,
and references therein).
Figure 2: 2D spectral sections of the I20210 system. Left panel: The H$\beta$
and [O III] region. Right panel: The H$\alpha$+[N II] and [S II] region. In
both panels, the trace centers of the main components are identified for
reference (green dashed lines). The elliptical fits to the [O III] and
H$\alpha$+[N II] emissions from the South Nebula are reported (green ellipses)
along with the respective best-fit parameters and statistical uncertainties.
In the right panel, the vertical features are sky lines, whereas the extrusion
close to the [S II] emission of I20210N is a cluster of saturated pixels that
is excluded from the IRAF extraction of the 1D spectrum. Figure 3: Optical
spectra of the I20210 components. Top panel: I20210N. Middle panel: I20210S.
Bottom panel: The spatially extended South Nebula. In all panels: (i) the
detected signal is reported along with its rms uncertainty (cyan bands); (ii)
the zero-flux level (dashed line) is indicated; and (iii) the positions of
major emission (top) and absorption features (bottom) are labeled accordingly.
In this work, we present the results of the optical spectroscopic analysis of
the $z=0.056$ dual AGN IRAS 20210+1121 (I20210 hereafter; Perez et al. 1990,
P90 hereafter), which is composed of two interacting galaxies oriented in the
N-S direction and separated by $12^{\prime\prime}.2$ (i.e. $\sim$13.3 kpc;
Davies et al. 2002; Arribas et al. 2004). Considered at first as being
composed of a Seyfert 2 with asymmetric emission lines (the southern
component) and a normal galaxy (the northern component), X-ray observations
performed with XMM-Newton revealed that this system is actually a merger
between two obscured AGN hosts (Piconcelli et al. 2010), in which the southern
member is an ultraluminous infrared galaxy (ULIRG; e.g., Sanders et al. 1988).
Additionally, although near-infrared spectroscopic data were available
(Burston et al. 2001), the optical spectrum of the northern member remained
unobserved due to its faintness compared to the southern galaxy (Heisler &
Vader 1995). The image of the I20210 system, obtained by combining the grizy
exposures from the Pan-STARRS1 survey (PS1; Chambers & Pan-STARRS Team 2016),
is shown in Fig. 1. This image already gives us an idea of the complex
structure of the system, displaying a luminous bridge that connects the two
galaxies.
This paper is organized as follows. We describe the observation and data-
reduction process in Sect. 2. We characterize the extracted spectrum of the
northern galaxy in Sect. 3, as well as that of the southern one in Sect. 4. We
discuss the relevant physical properties of the structural components of the
southern I20210 member in Sects. 5 and 6. We estimate the supermassive black
hole (SMBH) mass of both I20210 members in Sect. 7. Finally, we summarize our
findings in Sect. 8. For simplicity, we abbreviate the names of the two
galaxies to I20210N (northern member) and I20210S (southern member) hereafter.
Throughout the article, we adopt a $\Lambda$-CDM cosmology with $H_{0}=70$ km
s-1 Mpc-1, $\Omega_{\rm M}=0.3$ and $\Omega_{\Lambda}=0.7$.
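As a cross-check, the 1.087 kpc/″ angular scale used in Sect. 2 follows directly from this cosmology. A minimal pure-Python sketch (trapezoidal integration of the comoving distance; the function name and step count are our own choices, not part of the paper):

```python
import math

C_KMS = 299792.458   # speed of light [km/s]
H0 = 70.0            # Hubble constant [km/s/Mpc]
OMEGA_M, OMEGA_L = 0.3, 0.7

def angular_scale_kpc_per_arcsec(z, n=10000):
    """kpc per arcsecond at redshift z in a flat Lambda-CDM cosmology."""
    # Comoving distance D_C = (c/H0) * integral_0^z dz'/E(z'), trapezoidal rule
    E = lambda zp: math.sqrt(OMEGA_M * (1 + zp) ** 3 + OMEGA_L)
    dz = z / n
    integral = sum(1.0 / E(i * dz) for i in range(n + 1))
    integral -= 0.5 * (1.0 / E(0.0) + 1.0 / E(z))   # trapezoid endpoint weights
    integral *= dz
    d_c = C_KMS / H0 * integral          # comoving distance [Mpc]
    d_a = d_c / (1 + z)                  # angular-diameter distance [Mpc]
    arcsec_in_rad = math.pi / (180.0 * 3600.0)
    return d_a * 1e3 * arcsec_in_rad     # [kpc/arcsec]

scale = angular_scale_kpc_per_arcsec(0.056)
# The paper quotes 1.087 kpc/arcsec at z = 0.056, so the 12.2" nuclear
# separation of the pair corresponds to ~13.3 kpc.
```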
## 2 Observations and data reduction
Observations of the I20210 optical spectra were carried out on 2010 August 01
at the Telescopio Nazionale Galileo (TNG; Canarian Islands, Spain). The
spectra were simultaneously obtained with the $B$-band grism (wavelength range
$\lambda\lambda$3000 – 8430 Å, dispersion of 2.52 Å px-1,
$\lambda/\Delta\lambda=585$, implying a resolution of 9.8 Å that corresponds
to $\sim$510 km s-1) of the DOLoRes instrument (point-spread function PSF
$\sim 0^{\prime\prime}.85$), coupled to the $1^{\prime\prime}.5$ slit. To this
end, the instrument configuration was rotated to a position angle of 166∘ in
order to align the slit along the system axis connecting the two nuclei. The
two exposures of 600 s each (total exposure time of 1200 s) were then reduced
with standard IRAF procedures to extract and calibrate the one-dimensional
spectra. We show the slit position and orientation (P.A. $=166^{\circ}$ east
of north) along with the directions of the apertures used to extract the
spectra of each object in Fig. 1, superimposed on the PS1 image of the system.
The resulting spectra have signal-to-noise ratios of ${\rm S}/{\rm N}\sim
23.6$ (I20210N) and $\sim$33.2 (I20210S), respectively, as computed in line-
free continuum regions (Rosales-Ortega et al. 2012).
During the extraction and calibration procedures of the 1D spectra, we found
that a spectrum emitted from a third location was visible southwards of
I20210S, at a projected distance of $\sim$6′′ (corresponding to $\sim$6.5 kpc
given the distance scale of 1.087 kpc/′′ at $z=0.056$) from its trace center.
This additional spectrum, already identified by P90 in their low-resolution
data as extended emission in the I20210S host galaxy (South Nebula,
hereafter), is shown in 2D form in Fig. 2: it exhibits the main transitions
detected in I20210S (H$\beta$ $\lambda$4862, [O III]
$\lambda\lambda$4959,5007, H$\alpha$+[N II] $\lambda\lambda$6548,6583 and [S
II] $\lambda\lambda$6716,6731) detached by $\sim$6′′ from the I20210S nuclear
spectrum, extending over $\sim$2′′ (i.e. $\sim$2.3 kpc) in the N–S direction
and blueshifted by $\sim$450 km s-1 with respect to the systemic rest frame.
We thus extracted and calibrated it in the same way as we do for the spectra
of the main components.
Since the I20210 system is viewed through the Galactic plane, we dereddened
the spectra with the Milky Way extinction curve by Pei (1992) and $A_{\rm
V}=0.6$ (P90). The final spectra obtained in this way are shown in Fig. 3. A
visual inspection of the (so-far undetected) I20210N optical spectrum reveals
prominent [O II] $\lambda$3727, [O III] $\lambda\lambda$4959,5007,
H$\alpha$+[N II] $\lambda\lambda$6548,6583 and [S II]
$\lambda\lambda$6716,6731 emission lines, as well as the lack of the H$\beta$
$\lambda$4862 feature. Such a spectrum shows strong similarities with those of
typical Seyfert 2 galaxies, such as NGC 1667 (Ho et al. 1993, 1995; Jones et
al. 2009) and Mrk 1018 (Osterbrock 1981).
## 3 Characterization of I20210 North
We first proceed to estimate the amount of intrinsic dust extinction in each
object. To this end, we decided to measure the reddening $E(B-V)$ of the AGN
spectrum through the Balmer decrement $F_{{\rm H}\alpha}/F_{{\rm H}\beta}$
(e.g., Miller & Mathews 1972):
$\frac{F_{{\rm H}\alpha}}{F_{{\rm H}\beta}}=\frac{I_{{\rm H}\alpha}}{I_{{\rm
H}\beta}}\cdot 10^{-0.4E(B-V)(1+R_{V})(\kappa_{\alpha}-\kappa_{\beta})},$ (1)
with the intrinsic ratio $I_{{\rm H}\alpha}/I_{{\rm H}\beta}$ depending on the
physical conditions of the emitting gas only (see e.g., Gaskell & Ferland
1984, and refs. therein), and with $R_{V}$ and $\kappa_{\lambda}$ determined
by the adopted extinction model. Since no evidence for narrow lines associated
with H$\beta$ is visible bluewards of the [O III] doublet in the I20210N
spectrum, we first proceed to model the underlying continuum in order to
recover the Balmer emission from the narrow-line region (NLR) of I20210N.
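Once both narrow Balmer fluxes are available, Eq. (1) can be inverted for $E(B-V)$. A minimal sketch; the $\kappa$ values, $R_V$, and intrinsic ratio below are illustrative placeholders (not the Pei 1992 values adopted in the paper), so the round-trip check only verifies the algebra:

```python
import math

def ebv_from_balmer(f_ratio, i_ratio, r_v, kappa_alpha, kappa_beta):
    """Invert Eq. (1): E(B-V) from observed vs. intrinsic Halpha/Hbeta ratios."""
    return math.log10(f_ratio / i_ratio) / (-0.4 * (1 + r_v)
                                            * (kappa_alpha - kappa_beta))

# Round-trip check with placeholder extinction-curve values:
R_V = 3.1
K_ALPHA, K_BETA = 0.66, 0.94   # hypothetical normalized opacities, Halpha < Hbeta
I_RATIO = 3.1                   # an illustrative intrinsic NLR Balmer ratio
ebv_true = 0.4
# Redden the intrinsic ratio with Eq. (1), then invert it back:
f_ratio = I_RATIO * 10 ** (-0.4 * ebv_true * (1 + R_V) * (K_ALPHA - K_BETA))
assert abs(ebv_from_balmer(f_ratio, I_RATIO, R_V, K_ALPHA, K_BETA) - ebv_true) < 1e-9
```

Note that since H$\alpha$ suffers less extinction than H$\beta$ ($\kappa_\alpha < \kappa_\beta$), reddening always increases the observed ratio above the intrinsic one.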
### 3.1 I20210N continuum and emission-line fitting
We modeled the I20210N continuum under the assumption of a negligible AGN
contribution to the continuum emission. This is justified by the fact that the
I20210N central engine ($L_{\rm X}=4.7\times 10^{42}$ erg s-1) is highly
obscured by a column density $N_{\rm H}\sim 5\times 10^{23}$ cm-2 (Piconcelli
et al. 2010) and, therefore, no light from accretion activity is visible. We
subtract the stellar continuum with absorption lines from the I20210N galaxy
spectrum by using the penalized-pixel fitting public code pPXF (Cappellari &
Emsellem 2004; Cappellari 2012, 2017). The spectrum is fitted with a linear
combination of stellar spectra templates from the MILES library (Vazdekis et
al. 2010), which contains single stellar population synthesis models covering
the same wavelength range as the I20210N spectrum with a full width at half
maximum (FWHM) resolution of 2.54 Å. This procedure also yields information
about the kinematic state of the stellar population in the galaxy through
the stellar velocity dispersion $\sigma_{v}^{*}$.
Figure 4: Model of the rest-frame spectrum of I20210N. Upper panel: Full
I20210N spectrum (black solid line), along with the best-fit starlight model
adopted for continuum subtraction (yellow dot-dashed line), the best-fit
reddened emission profiles (green short-dashed lines), the global spectral
model (red solid line), and the masks applied to the telluric absorption lines
(grey bands) superimposed on the data. Lower panels: Zoom on the
continuum-subtracted emission lines (black solid line), shown along with the
global best fit (red solid line) and the best-fit single components (green
short-dashed lines) for the blended [O II] doublet, H$\alpha$+[N II] and [S
II] transitions. The standardized residuals after the best-fit subtraction are
also shown in separate windows below each spectral region. In all panels, the
zero-level flux (black long-dashed line) is indicated; in the panel with
H$\alpha$+[N II] and [S II], the masks applied to the telluric absorption
lines (grey bands) are shown superimposed to the data.
We rebinned the MILES templates ($\Delta\lambda\sim 2.5$ Å) to match
the DOLoRes spectral resolution of $\sim$10 Å. We include low-order additive
(4th-degree) and multiplicative (1st-degree) Legendre polynomials to adjust
the continuum shape of the templates to the observed spectrum. During the
fitting procedure, strong emission features are masked out and the spectra are
shifted to the rest frame. The pPXF best-fit model is chosen through
$\chi^{2}$ minimization. To estimate the uncertainty on the velocity
dispersion, we produced $10^{3}$ realizations of the I20210N spectrum by
adding noise to the pPXF best-fit model; this noise is drawn from a Gaussian
distribution with dispersion equal to the rms of the input spectrum. We then
iterate the pPXF fitting procedure over such mock spectra and compute the
error associated with $\sigma_{v}^{*}$ as the standard deviation of the
parameter posterior distribution. In doing so, we find a best-fit
$\sigma_{v}^{*}=390\pm 50$ km s-1. The residual spectrum obtained by
subtracting off the best-fit stellar model from the spectrum is then used to
derive emission-line properties. This procedure allowed us to recover the
H$\beta$ narrow emission and therefore compute the Balmer decrement $F_{{\rm
H}\alpha}/F_{{\rm H}\beta}$. Both the fitted starlight continuum and the
residual emission-line spectrum of I20210N are shown in Fig. 4.
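The Monte-Carlo error estimate described above can be sketched as follows. The pPXF refit is replaced here by a hypothetical toy measurement (the flux-weighted second moment of a single absorption line), since the pattern is the same: perturb the best-fit model with Gaussian noise matched to the data rms, re-measure, and quote the scatter of the results. All numbers are illustrative.

```python
import math
import random
import statistics

random.seed(42)
SIGMA_TRUE = 4.0                       # line width in pixels (stands in for sigma_v*)
PIX = list(range(-15, 16))
MODEL = [1.0 - 0.5 * math.exp(-0.5 * (p / SIGMA_TRUE) ** 2) for p in PIX]

def measure_sigma(flux):
    """Toy 'fit': flux-weighted second moment of the absorption depth."""
    depth = [1.0 - f for f in flux]
    tot = sum(depth)
    mean = sum(p * d for p, d in zip(PIX, depth)) / tot
    var = sum((p - mean) ** 2 * d for p, d in zip(PIX, depth)) / tot
    return math.sqrt(var)

RMS = 0.005                            # noise level matched to the spectrum rms
trials = [measure_sigma([m + random.gauss(0.0, RMS) for m in MODEL])
          for _ in range(1000)]        # 10^3 realizations, as in the paper
sigma_best = statistics.mean(trials)   # recovered width
sigma_err = statistics.stdev(trials)   # quoted as the 1-sigma uncertainty
```

The same pattern, with the toy measurement swapped for a full pPXF run on each realization, yields the quoted $\sigma_{v}^{*}=390\pm 50$ km s-1.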
We note that the derived stellar-velocity dispersion value is very high
compared to what is expected for typical galaxies: for example, a search in
the catalogue of galactic dynamics by Forbes & Ponman (1999) yields only two
elliptical/S0 objects with $\sigma_{v}^{*}>300$ km s-1. Similarly, the stellar
velocity dispersions measured by Falcón-Barroso et al. (2017) in a large
sample of galaxies from the CALIFA survey and by Perna et al. (2021) in a
sample of nearby ULIRGs never exceed $\sim$200 km s-1. Nevertheless, objects
exhibiting exceptional values of $\sigma_{v}^{*}$ do exist: this is, for
instance, the case of NGC 6240 ($\sigma_{v}^{*}\sim 360$ km s-1; Doyon et al.
1994),
which is indeed a final-state merging system. Therefore, the stellar velocity
dispersion value found in I20210N may indicate that the internal kinematics of
the galaxy is deeply altered by the gravitational interaction with I20210S.
We then fit the relevant emission lines with Gaussian profiles through the IDL
minimization package MPFIT (Markwardt 2009). All the narrow components are
fitted simultaneously under the assumption that they are emitted at the same
distance from the AGN, that is, with equal FWHM in velocity space. In
addition, we fix the intensities of the faint components of the [O III] and
[N II] doublets to a ratio of $1/3.06$ with respect to the corresponding
dominant component (e.g., Osterbrock & Ferland 2006). In order to compute
meaningful measurement uncertainties for the free parameters, we iterate this
process over $10^{3}$ Monte-Carlo (MC) realizations of each line spectrum. In
each realization, the flux at each wavelength is altered by a random quantity
drawn from a Gaussian distribution centered on the measured flux value and
with width equal to the corresponding 1$\sigma$ rms flux error.
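A minimal sketch of such a constrained fit, under simplifying assumptions: the line centres and the shared FWHM are held fixed (the paper's MPFIT-based fit also varies the shared width), so the two free amplitudes — H$\alpha$ and [N II] $\lambda$6583, with [N II] $\lambda$6548 tied to $1/3.06$ of the latter — enter linearly and can be solved with ordinary least squares. Wavelengths, widths, and amplitudes are illustrative.

```python
import math
import random

C_KMS = 2.99792458e5                   # speed of light, km/s
CENTRES = (6562.8, 6548.0, 6583.5)     # Halpha, [N II]6548, [N II]6583 (Angstrom)
FWHM_V = 700.0                         # shared FWHM in velocity space (illustrative)

def gauss(lam, centre, fwhm_v):
    sigma = centre * fwhm_v / C_KMS / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.exp(-0.5 * ((lam - centre) / sigma) ** 2)

def model_bases(lam):
    """Two basis profiles: Halpha, and the [N II] doublet tied at 1/3.06."""
    b_ha = gauss(lam, CENTRES[0], FWHM_V)
    b_n2 = gauss(lam, CENTRES[2], FWHM_V) + gauss(lam, CENTRES[1], FWHM_V) / 3.06
    return b_ha, b_n2

# Synthetic blended Halpha+[N II] spectrum with known amplitudes
random.seed(1)
grid = [6500.0 + 0.5 * i for i in range(280)]
A_HA_TRUE, A_N2_TRUE = 1.0, 0.8
data = [A_HA_TRUE * model_bases(l)[0] + A_N2_TRUE * model_bases(l)[1]
        + random.gauss(0.0, 0.01) for l in grid]

# 2x2 normal equations for the two linear amplitudes
s11 = s12 = s22 = r1 = r2 = 0.0
for lam, y in zip(grid, data):
    b1, b2 = model_bases(lam)
    s11 += b1 * b1; s12 += b1 * b2; s22 += b2 * b2
    r1 += b1 * y;   r2 += b2 * y
det = s11 * s22 - s12 * s12
a_ha = (s22 * r1 - s12 * r2) / det     # recovers ~1.0
a_n2 = (s11 * r2 - s12 * r1) / det     # recovers ~0.8
```

The tied ratio and shared width are what make the blended system solvable at all at low spectral resolution; without them the doublet amplitudes would be nearly degenerate.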
The best fit of the I20210N emission lines is shown in Fig. 4, along with the
corresponding standardized residuals, computed everywhere as
$\left[F_{\lambda}-F_{\lambda}^{\rm(BF)}\right]/\sigma_{F}$,
where $F_{\lambda}^{\rm(BF)}$ is the best-fit model of the line flux and
$\sigma_{F}$ is the standard deviation of the dimensional residuals
$F_{\lambda}-F_{\lambda}^{\rm(BF)}$ (e.g., Cook & Weisberg 1982). The value
of the Balmer decrement derived from this procedure is $2.97\pm 0.31$,
compatible within errors with both the intrinsic ratio $I_{{\rm
H}\alpha}/I_{{\rm H}\beta}\sim 2.85$ typical of [H II] region-like objects and
the AGN ratio of 3.1 (Veilleux & Osterbrock 1987). Therefore, we can assume
that the NLR of I20210N is viewed along a non-reddened line of sight, with
$E(B-V)\sim 0$. The best-fit parameters of the narrow emission lines are
reported in Table 1, with FWHM corrected for the instrumental broadening
$\Delta v_{\rm inst}\sim 510$ km s-1 corresponding to the DOLoRes resolution
of $\sim$10 Å:
${\rm FWHM}_{\rm corr}=\sqrt{{\rm FWHM}_{\rm obs}^{2}-\Delta v_{\rm
inst}^{2}}.$ (2)
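Equation (2) in code, using the quoted $\Delta v_{\rm inst}=510$ km s-1; the observed width here is illustrative.

```python
import math

def fwhm_corrected(fwhm_obs, dv_inst=510.0):
    """Eq. (2): remove instrumental broadening in quadrature (km/s)."""
    if fwhm_obs <= dv_inst:
        return 0.0          # line unresolved at this resolution
    return math.sqrt(fwhm_obs**2 - dv_inst**2)

print(fwhm_corrected(860.0))   # -> ~692 km/s
```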
Fit statistics: I20210N $\chi^{2}/\nu_{\rm d.o.f.}=427/405$; I20210S $\chi^{2}/\nu_{\rm d.o.f.}=442/405$; South Nebula $\chi^{2}/\nu_{\rm d.o.f.}=228/209$. Fluxes in $10^{-14}$ erg s-1 cm-2; FWHM and blueshifts in km s-1. “N” and “B” denote narrow and broad components.
Transition | I20210N Flux | I20210N FWHM | I20210S Flux (N) | I20210S Flux (B) | I20210S FWHM (N) | I20210S FWHM (B) | I20210S Blueshift (N−B) | South Nebula Flux | South Nebula FWHM | South Nebula Blueshift
---|---|---|---|---|---|---|---|---|---|---
[O II] $\lambda\lambda$3726,3729 | $1.060\pm 0.055$ | $690\pm 40$ | $10.4\pm 1.1$ | $10.6\pm 4.4$ | $530\pm 90$ | * | * | — | — | —
[O III] $\lambda$4363 | — | — | $0.500\pm 0.054$ | $0.42\pm 0.17$ | ” | * | * | — | — | —
H$\beta$ | $0.214\pm 0.010$ | ” | $13.72\pm 0.94$ | $6.3\pm 2.1$ | ” | $1960\pm 470$ | $330\pm 180$ | $0.0691\pm 0.0036$ | $710\pm 330$ | $550\pm 150$
[O III] $\lambda$5007 | $1.058\pm 0.047$ | ” | $85.2\pm 5.4$ | $32.1\pm 9.8$ | ” | $2080\pm 310$ | $390\pm 120$ | $0.1193\pm 0.0073$ | ” | ”
[O I] $\lambda$6300 | $0.193\pm 0.010$ | ” | $4.13\pm 0.20$ | $2.39\pm 0.57$ | ” | $1910\pm 350$ | $230\pm 150$ | — | — | —
H$\alpha$ | $0.623\pm 0.028$ | ” | $42.5\pm 2.0$ | $19.5\pm 4.3$ | ” | * | * | $0.1418\pm 0.0071$ | ” | ”
[N II] $\lambda$6583 | $0.863\pm 0.039$ | ” | $33.7\pm 1.6$ | $8.0\pm 2.2$ | ” | * | * | $0.0334\pm 0.0047$ | ” | ”
[S II] $\lambda$6717 | $0.436\pm 0.021$ | ” | $6.80\pm 0.31$ | $2.98\pm 0.66$ | ” | * | * | $0.0371\pm 0.0012$ | ” | ”
[S II] $\lambda$6731 | $0.366\pm 0.018$ | ” | $7.21\pm 0.32$ | $5.0\pm 1.0$ | ” | * | * | $0.0273\pm 0.0010$ | ” | ”
$E(B-V)$ | $\sim$0 | | $0.271\pm 0.019$ | $0.195\pm 0.091$ | | | | $\sim$0 | |
Ratio | I20210N Value | I20210N Unc. | I20210S Value | I20210S Unc. | Outflow Value | Outflow Unc. | South Nebula Value | South Nebula Unc.
---|---|---|---|---|---|---|---|---
$\log{\left(\mbox{{[O\,{\scriptsize III}]}}/\mbox{{H$\beta$}}\right)}$ | $0.686$ | $0.040$ | $0.798$ | $0.061$ | $0.71$ | $0.28$ | $0.237$ | $0.049$
$\log{\left(\mbox{{[N\,{\scriptsize II}]}}/\mbox{{H$\alpha$}}\right)}$ | $0.142$ | $0.039$ | $-0.114$ | $0.044$ | $-0.39$ | $0.21$ | $-0.628$ | $0.083$
$\log{\left(\mbox{{[S\,{\scriptsize II}]}}/\mbox{{H$\alpha$}}\right)}$ | $0.110$ | $0.041$ | $-0.489$ | $0.044$ | $-0.39$ | $0.19$ | $-0.343$ | $0.037$
$\log{\left(\mbox{{[O\,{\scriptsize I}]}}/\mbox{{H$\alpha$}}\right)}$ | $-0.509$ | $0.042$ | $-0.933$ | $0.052$ | $-0.91$ | $0.20$ | — | —
$\log{\left(\mbox{{[O\,{\scriptsize II}]}}/\mbox{{H$\beta$}}\right)}$ | $0.695$ | $0.043$ | $-0.120$ | $0.077$ | $0.23$ | $0.33$ | — | —
$\log{\left(\mbox{{[O\,{\scriptsize III}]}}/\mbox{{[O\,{\scriptsize II}]}}\right)}$ | $-0.001$ | $0.042$ | $0.913$ | $0.073$ | $0.48$ | $0.31$ | — | —
—: Transition not detected in the spectrum.
”: Value fixed to be equal to the first non-null one upwards in the column.
∗: Value anchored to the best fit of the [O III] $\lambda$5007 broad component.
Table 1: Top: Best-fit parameters of the dereddened diagnostic emission lines
in the optical spectrum of I20210N, I20210S NLR (“N” components) and
outflowing emission (“B” components), and emission from the South Nebula. The
integrated fluxes presented here have been dereddened by the indicated amount
of $E(B-V)$ with the SMC extinction by Pei (1992). Bottom: Values of the
diagnostic line ratios with the corresponding uncertainties.
Figure 5: BPT diagnostic diagrams showing the position of I20210N (red
circle), I20210S (green square) and its outflow (blue star), and the South
Nebula (yellow triangle), along with the relative uncertainties on top of the
SDSS data from the OSSY database (Oh et al. 2015, grey dots). As a reference,
in the first three panels, the extreme-starburst and Seyfert-LINER
classification boundaries by Kewley et al. (2001, solid lines) are indicated,
along with the pure star-formation boundary by Kauffmann et al. (2003, long-
dashed line), the alternative Seyfert-LINER relation by Cid Fernandes et al.
(2010, dotted line), and the redshift-dependent relation at $z\sim 0.13$ by
Kewley et al. (2013a, dot-dashed line) in the [O III]/H$\beta$-to-[N
II]/H$\alpha$ diagram. In the [O III]/H$\beta$-to-[O II]/H$\beta$ diagram, the
star-forming and Seyfert-LINER boundaries by Lamareille (2010, triple dot-
dashed line) are indicated, along with the mixed-region boundary (short-dashed
line).
### 3.2 I20210N classification
To assess the nature of the AGN hosted in I20210N, we computed the [O
III]/H$\beta$, [N II]/H$\alpha$, [S II]/H$\alpha$, [O I]/H$\alpha$ and [O
II]/H$\beta$ logarithmic line ratios from the best-fit emission line
parameters. The derived values are presented in Table 1, along with their
uncertainties. For completeness, we also report the value of the [O III]/[O
II] ratio that is used to further classify active galaxies (e.g., Heckman
1980; Kewley et al. 2006).
The values derived for I20210N are plotted in the BPT diagrams shown in Fig.
5, superimposed on the values for SDSS-DR7 objects retrieved from the OSSY
database (Oh et al. 2011, 2015). To discriminate between the different classes
of emission-line galaxies (star-forming, Seyferts, LINERs), we adopt from the
current literature the relations defining the boundaries between types of
galactic activity: (i) the extreme-starburst relation and the Seyfert-LINER
boundaries by Kewley et al. (2001) in the original diagrams by Baldwin et al.
(1981); (ii) the star-forming, Seyfert-LINER and mixed-region boundaries by
Lamareille (2010) in the [O III]/H$\beta$-to-[O II]/H$\beta$ diagram; (iii)
for the [O III]/H$\beta$-to-[N II]/H$\alpha$ diagram, the pure star-formation
boundary by Kauffmann et al. (2003), the alternative Seyfert-LINER relation by
Cid Fernandes et al. (2010) and the redshift-dependent star-formation
boundary by Kewley et al. (2013a) computed for $z=0.128\pm 0.044$, that is,
the average redshift value of the OSSY catalog (Oh et al. 2015).
Figure 5 shows that the I20210N line ratios are generally consistent with
those found for Seyfert galaxies in the [O III]/H$\beta$-to-[N II]/H$\alpha$
and [O III]/H$\beta$-to-[O II]/H$\beta$ diagrams, while they remain
intermediate between a Seyfert and a LINER in the [O III]/H$\beta$-to-[S
II]/H$\alpha$ diagram, where they sit on the Seyfert-to-LINER boundary by
Kewley et al. (2001). Heckman (1980) calls such intermediate objects
“transition galaxies”, lying between Seyfert 1.9 galaxies (Osterbrock 1981)
and LINERs. However, the severe unresolved blending that affects the
H$\alpha$+[N II] emission system may offer an alternative explanation: a
lower intrinsic H$\alpha$ intensity would eventually move the position of
I20210N into the Seyfert region of both the [O III]/H$\beta$-to-[S
II]/H$\alpha$ and [O III]/H$\beta$-to-[O I]/H$\alpha$ diagrams (neglecting
the shift in the [O III]/H$\beta$-to-[N II]/H$\alpha$ diagram given by the
consequent increase in the [N II] emission). In addition, the hard X-ray
luminosity of $\sim 5\times 10^{42}$ erg s-1 measured for I20210N by
Piconcelli et al. (2010) helps break this degeneracy, pushing the
classification of I20210N towards a Seyfert 2 galaxy.
Figure 6: Model of the rest-frame spectrum of I20210S. Upper panel: Full
I20210S spectrum (black solid line), along with the local power laws adopted
for continuum subtraction (yellow dot-dashed lines), the best-fit reddened
narrow (green short-dashed lines) and broad emission profiles (blue dotted
lines), and the global spectral model (red solid line), all superimposed on
the data. The intervals used for the local continuum subtraction under the
emission lines are marked, along with the masks applied to the [O I]+[Fe X]
emission and to the telluric H2O absorption line (grey bands). Lower panel:
Zoom on the continuum-subtracted emission lines (black solid line), shown
along with the global best fit (red solid line) and the best-fit narrow (green
short-dashed lines) and broad components (blue dotted lines). The standardized
residuals after the best-fit subtraction are also shown in separate windows
below each spectral region. In all panels, the zero-level flux (black long-
dashed line) is indicated; in the panel with [O I], the mask applied to the
[O I]+[Fe X] emission line is superimposed on the data, as is the mask for
the H2O telluric absorption in the panel with H$\alpha$+[N II] and [S II]
(grey bands).
## 4 Characterization of I20210 South
For both the nuclear spectrum and the South Nebula, we applied the same
fitting procedure as done for I20210N. Therefore, we first proceed to model
the I20210S continuum emission. We assume that I20210S is an AGN-dominated
object, that is, one exhibiting a power-law continuum reddened by foreground
dust;
such an assumption is well motivated by the quasar-like energetics of I20210S
(e.g., $L_{\rm bol}\sim 10^{45}$ erg s-1; Piconcelli et al. 2010).
### 4.1 I20210S nuclear continuum and emission-line fitting
The optical spectrum of I20210S is heavily reddened by intrinsic dust, as is
particularly evident from the lack of a flux rise bluewards of the [O II]
emission. However, due to the presence of a large number of emission and
telluric lines that greatly reduce the intervals of featureless spectral
regions, we decided not to perform a global continuum fit to be subtracted
from the spectrum. Instead, we selected sections of the I20210S spectrum free
of major features and adjacent to the lines of interest, and interpolated them
with local power laws in order to remove the underlying AGN emission (see Fig.
6).
Then we used the MC fitting procedure to model the emission lines with
Gaussian profiles, which are shown in Fig. 6, along with the corresponding
standardized residuals. Unlike for I20210N, we used two components for
each transition to account for the total emission profile in the spectrum, as
was also done by Arribas et al. (2014). The main narrow components obey the
prescriptions presented in Sect. 3.1; the additional emission, as already
found by P90, consists of a broad line blueshifted by $\sim$400 km s-1 with
FWHM $\sim 2000$ km s-1. The possibility that such features are an effect of
the orientation of the I20210S narrow-line region (NLR) as described in
Bisogni et al. (2017) is ruled out. In fact, for emission lines associated
with permitted transitions the asymmetries would be redshifted with respect to
the line center (see e.g., their figure 3). Therefore, we can conclude that
I20210S exhibits evidence of an ionized gas outflow.
Figure 7: Best-fit profiles of the emission lines from the South Nebula in the
rest frame. Left panel: The H$\beta$+[O III] spectral region. Right panel: The
H$\alpha$+[N II] and [S II] spectral region. In both panels, the global
emission profile (red solid line) is superimposed on the line spectrum
(black histogram) along with the single components of the H$\alpha$+[N II] and
[S II] blended profiles (green short-dashed lines), and the zero-flux level
(black long-dashed line) is indicated. The standardized residuals after the
best-fit subtraction are also shown in separate windows below each spectral
region.
To account for this additional emission, we include the broad components in
the fit of the I20210S line profiles anchoring the blueshift and FWHM values
of the transitions affected by severe blending – namely, the [O II]
$\lambda\lambda$3726,3729 doublet, the H$\gamma$+[O III] $\lambda$4363, the
H$\alpha$+[N II] system and the [S II] doublet – to those of the [O III]
$\lambda$5007 (see Table 1). This choice is motivated by the fact that the [O
III] emission has the highest S/N; in the cases where an anchoring to its
parameters is adopted, only the line amplitude is left free to vary. In
addition, we estimate the amount of intrinsic dust extinction for the NLR and
the outflow separately since the two regions are, in principle, located at
different distances from the central engine and can thus be affected by
different amounts of reddening.
The Balmer ratio derived from the MC fit for the narrow components is $F_{{\rm
H}\alpha}/F_{{\rm H}\beta}=4.108\pm 0.079$, corresponding to $E(B-V)_{\rm
NLR}=0.271\pm 0.019$ mag for the AGN intrinsic ratio $I_{{\rm
H}\alpha}/I_{{\rm H}\beta}=3.1$ (Veilleux & Osterbrock 1987) and the SMC
extinction by Pei (1992), whereas a ratio $F_{{\rm H}\alpha}/F_{{\rm
H}\beta}=3.80\pm 0.34$ for the outflow yields $E(B-V)_{\rm out}=0.195\pm
0.091$. Finally, we applied the extinctions derived in this way to deredden
the corresponding emission-line amplitudes. The best fit of the reddened
I20210S spectrum is shown in Fig. 6, whereas the best-fit parameters of both
its NLR and outflow emission are reported in Table 1. On average, the I20210S
wind has an outflow velocity $\Delta v=330\pm 170$ km s-1 and ${\rm
FWHM}=2000\pm 390$ km s-1: such values are a factor of $\sim$2.4 higher than
the corresponding mean parameters found by Arribas et al. (2014) in ULIRGs
hosting AGN (see their table 2), and more in line with those found by
Rodríguez Zaurín et al. (2013) for ionized outflows in nearby ULIRGs (see
their Table 2) and by Zakamska et al. (2016) in high-$z$ reddened quasars (see
their table 1) where the emission-line profiles are modeled using multiple
Gaussians.
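The dereddening step above can be sketched as follows. The extinction-curve difference $\kappa({\rm H}\beta)-\kappa({\rm H}\alpha)\approx 1.13$ used here is a placeholder roughly consistent with the values quoted in the text; the paper itself adopts the SMC curve of Pei (1992).

```python
import math

def ebv_from_balmer(r_obs, r_int=3.1, dk=1.13):
    """E(B-V) from the Balmer decrement:
    E(B-V) = 2.5/(k_Hbeta - k_Halpha) * log10(R_obs / R_intrinsic).
    r_int = 3.1 is the AGN intrinsic ratio (Veilleux & Osterbrock 1987);
    dk is an illustrative extinction-curve difference (assumption)."""
    return 2.5 / dk * math.log10(r_obs / r_int)

print(ebv_from_balmer(4.108))   # NLR:     -> ~0.27 mag
print(ebv_from_balmer(3.80))    # outflow: -> ~0.20 mag
```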
### 4.2 South Nebula spectrum
Next we estimated the intrinsic reddening of the South Nebula. The detection
of both H$\beta$ and H$\alpha$ narrow transitions allows us to apply the MC
line-fitting procedure described in the case of I20210N (see Fig. 7 and Sect.
3.1). Lacking any trace of an underlying continuum that could have been used
in the determination of the reddening law, we adopted the SMC extinction by
Pei (1992), as in the case of the parent nucleus. This in turn yields $F_{{\rm
H}\alpha}/F_{{\rm H}\beta}=2.05\pm 0.10$, which is lower than the intrinsic
ratio of 2.85 valid for [H II] regions and is thus unphysical for dust
reddening. Despite the low formal measurement error, the Balmer ratio is
therefore likely poorly determined, owing to the uncertainties in extracting
a continuum-less spectrum that is $\sim$100 times fainter than the I20210S
nuclear emission. The determination of the South Nebula intrinsic reddening
is thus best left to future, more sensitive spatially resolved spectroscopy;
in the following, we consider it compatible with $E(B-V)\sim 0$.
We then fit the parameters of the five emission features that are clearly
identified, namely H$\beta$, [O III], H$\alpha$, [N II] and [S II], without
applying any dereddening (see Table 1). Interestingly, the South Nebula
exhibits a blueshift of $550\pm 150$ km s-1 with respect to the systemic
redshift and a FWHM of $710\pm 330$ km s-1. Such features are a clear
indication of highly disrupted gas (Bellocchi et al. 2013), similar to that
found by Ramos Almeida et al. (2017) in the Teacup Galaxy ($L_{\rm[OIII]}\sim
5\times 10^{42}$ erg s-1 according to Reyes et al. 2008, to be compared with
$L_{\rm[OIII]}\sim 6.5\times 10^{42}$ erg s-1 for I20210S) at comparable
distances ($\sim$5.6 kpc) from the central engine.
### 4.3 I20210S classification
As done in Sect. 3.2 for I20210N, we computed the line ratios for all the
regions decomposed from the spectrum of I20210S, namely, the NLR, the outflow
and the South Nebula, and we placed them in the relevant BPT diagrams to
obtain a first discrimination between an AGN or star-formation powered
emission. A visual inspection confirms that the NLR line ratios are fully
consistent with an AGN nature; the outflow emission likewise falls well
inside the AGN region, in agreement with the scenario of an ionized wind
driven by the nuclear activity.
The South Nebula sits close enough to the boundary between AGN and star-
forming galaxies to prevent its straightforward inclusion among the AGN-
powered processes. However, the kinematic properties of this region (velocity
blueshift of $\sim$500 km s-1, FWHM of $\sim$700 km s-1) may actually be
interpreted as being due to the I20210S outflow, which has stripped out
ionized gas from the I20210S nucleus. The possibility that the South Nebula is
an extended NLR component blown out of the central engine by radiation
pressure is in principle supported by studies that ubiquitously find NLRs
extended over $\sim$10 kpc from the central engine in both Type 1 and Type 2
quasars (see e.g. Husemann et al. 2013, and refs. therein). In the case of
I20210S, however, this is less likely: such extended material is expected,
rather, to be non-outflowing gas associated with extreme mergers, since
blown-out NLRs typically show ${\rm FWHM}<250$ km s-1 (Bellocchi et al. 2013).
Our data do not allow a deeper exploration of the spectral properties of the
South Nebula. Therefore, we point out that due to the intermediate values of
its diagnostic line ratios between AGN and star-forming galaxies, this
detached emitting region is a very interesting environment in which the
effects of AGN feedback may be at work in pushing the gas outside the central
region of the host galaxy (negative feedback) while also triggering some
amount of star formation into it (positive feedback; see e.g., Maiolino et al.
2017). Thus, it is worthy of further investigation with high-quality spatially
resolved spectroscopy.
## 5 Physical properties of the outflow in I20210S
Next, we characterized the physics of the I20210S ionized wind. To
this aim, we estimated the outflowing mass $M_{\rm out}$ and the mass loss
rate $\dot{M}$ of the ionized gas following the method presented in Kakkad et
al. (2016) and Bischetti et al. (2017). Under the assumptions that (i) the AGN
wind is free, spherically or biconically symmetric, and mass-conserving; (ii)
the AGN wind has a mass-outflow rate and velocity independent of the outflow
radius (Rupke et al. 2002, 2005); and (iii) most of the oxygen consists of
O$^{2+}$ ions, we can use the relation by Carniani et al. (2015):
$\log{\left(\frac{M_{\rm out}}{{\rm
M}_{\odot}}\right)}=7.6+\log{\left(\frac{C}{10^{{\rm[O/H]}-{\rm[O/H]}_{\odot}}}\right)}+\log{\left(\frac{L^{\rm
out}_{\rm[OIII]}}{10^{44}\mbox{ erg s}^{-1}}\right)}-\log{\left(\frac{\langle
n_{e}\rangle}{10^{3}\mbox{ cm}^{-3}}\right)}$
(3)
where $C=\langle n_{e}\rangle^{2}/\langle n_{e}^{2}\rangle$,
${\rm[O/H]}-{\rm[O/H]}_{\odot}$ is the gas metallicity relative to the solar
value, $L^{\rm out}_{\rm[OIII]}$ is the outflowing [O III] $\lambda 5007$
luminosity, and $\langle n_{e}\rangle$ is the average electron density. The
latter is, in turn, related to the electron temperature, $T_{e}$, which can be
derived from the line ratios $\left(I_{4959}+I_{5007}\right)/I_{4363}$ of the
outflow emission (Osterbrock & Ferland 2006):
$\frac{I_{4959}+I_{5007}}{I_{4363}}\approx
7.90\cdot\exp{\left(\frac{32,900\mbox{ K}}{T_{e}}\right)}.$ (4)
From the decomposition of the emission lines in the spectrum of I20210S
through the fit with multiple Gaussian components described in Sect. 3 (see
Fig. 6), we compute $\left(I_{4959}+I_{5007}\right)/I_{4363}=100\pm 70$, which
corresponds to $T_{e}=12,900\pm 3600$ K, in agreement with the value of
$\sim 10^{4}$ K generally assumed for AGN outflows (see e.g., Perna et al.
2017, and refs. therein).
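Inverting Eq. (4) gives the electron temperature directly from the measured [O III] ratio:

```python
import math

def te_from_oiii(ratio):
    """Invert Eq. (4): R = (I4959 + I5007)/I4363 ~= 7.90 exp(32900 K / T_e),
    so T_e = 32900 K / ln(R / 7.90)."""
    return 32900.0 / math.log(ratio / 7.90)

print(te_from_oiii(100.0))   # -> ~12,900 K, as quoted in the text
```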
Figure 8: Continuum-subtracted I20210S off-axis H$\beta$+[O III] spectra
extracted at a 5-px offset in the northern direction (left panel) and at an
8-px offset in the southern direction (right panel). In each panel, the best-
fit model is shown (red solid line) along with the profiles of the narrow
(green short-dashed line) and broad components (blue dotted line), and the
zero-flux level is indicated (black long-dashed line). The standardized
residuals after the best-fit subtraction are also shown in separate windows
below each spectral region. The fit to the H$\beta$ emission is not included
in the calculations of $\chi^{2}$ and $p_{F}$, which are performed on the [O
III] doublet only (see text), and is shown here for visual purposes only.
The electron density $\langle n_{e}\rangle$ is then related to the ratio
$I_{6717}/I_{6731}$ between the components of the [S II] doublet through:
$\frac{I_{6717}}{I_{6731}}=1.49\cdot\frac{1+3.77x}{1+12.8x},$ (5)
with $x=10^{-2}\langle n_{e}\rangle T_{e}^{-1/2}$ (Weedman 1968; Osterbrock &
Ferland 2006; Sanders et al. 2016). However, we compute a ratio
$I_{6717}/I_{6731}=0.60\pm 0.25$ for the ionized wind that lies on the
saturated side of Eq. 5, which only allows us to establish a 95%-probability
lower limit of $\langle n_{e}\rangle\gtrsim 4000$ cm-3 on the outflow
electron density. This might either be an indication of a high electron
density or just a consequence of the severe blending that affects the [S II]
region at the low spectral resolution of DOLoRes, preventing us from deriving
a reliable estimate of $\langle n_{e}\rangle$. The same issue holds for the [O
II] doublet, which could have been used in place of the [S II] for such a
measurement (Osterbrock & Ferland 2006) but is even more blended because of
its peak separation of $\sim$3 Å only.
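For reference, Eq. (5) can be inverted algebraically for $x$, and hence for $\langle n_{e}\rangle$; the solution is only meaningful above the high-density saturation value $R\rightarrow 1.49\cdot 3.77/12.8\approx 0.44$ (illustrative sketch).

```python
import math

def ne_from_sii(ratio, te):
    """Invert Eq. (5): R = 1.49 (1 + 3.77x)/(1 + 12.8x) gives
    x = (1.49 - R)/(12.8 R - 1.49*3.77), with x = 1e-2 <n_e> T_e^{-1/2};
    only valid for R above the saturation limit ~0.44."""
    x = (1.49 - ratio) / (12.8 * ratio - 1.49 * 3.77)
    return 1.0e2 * x * math.sqrt(te)    # <n_e> in cm^-3

print(ne_from_sii(0.60, 12900.0))   # -> ~4900 cm^-3
```

The measured ratio $0.60\pm 0.25$ sits close to this saturation limit, which is why the text quotes only a lower limit on $\langle n_{e}\rangle$.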
As an alternative possibility for deriving solid estimates of $\langle
n_{e}\rangle$ for the I20210S outflow, we also consider the application of the
trans-auroral ratio (TR) method by Rose et al. (2018). This method, based on
the evaluation of the line ratios – [S II]4068,4076/[S II]6717,6731 and [O
II]3726,3729/[O II]7319,7331 – allows us to obtain at once both $\langle
n_{e}\rangle$ and the intrinsic reddening $E(B-V)$ of the emitting gas. We
thus fit the trans-auroral doublets [S II] $\lambda\lambda$4068,4076 and [O
II] $\lambda\lambda$7319,7331 through the MC procedure with two narrow and two
broad components each, fixing their widths to the corresponding values for the
[O III] $\lambda\lambda$4959,5007 (see Table 1). This yields ${\rm
TR}(\mbox{{[S\,{\scriptsize II}]}})_{\rm out}=0.192\pm 0.051$ and ${\rm
TR}(\mbox{{[O\,{\scriptsize II}]}})_{\rm out}=1.72\pm 0.94$. Having derived
an ionization parameter $\log{U}_{\rm out}=-3.09\pm 0.47$ for the outflow from
its relation to the [O III]/H$\beta$ and [N II]/H$\alpha$ ratios (Baron &
Ménard 2019, BM19 hereafter), we can finally compare its TRs to the
simulations presented in Davies et al. (2020, see their figure 7), obtaining
$\langle n_{e}\rangle_{\rm out}=10,400^{+4000}_{-5200}$ cm-3 and $E(B-V)_{\rm
out}=0.34^{+0.24}_{-0.15}$.
Quantity | Outflow | South Nebula | Units
---|---|---|---
$T_{e}$ | $12,900\pm 3600$ | $\sim$10,000 | K
$\langle n_{e}\rangle$ | $\gtrsim$5000 | $\sim$100 | cm-3
$L_{\rm[OIII]}^{\rm out}$ | $(2.44\pm 0.74)\times 10^{42}$ | $(9.05\pm 0.55)\times 10^{39}$ | erg s-1
$v_{\rm max}$ | $2160\pm 380$ | $1100\pm 430$ | km s-1
$R_{\rm out}$ | $2.20\pm 0.14$ | $6.52\pm 0.43$ | kpc
$t_{\rm dyn}$ | $0.99\pm 0.27$ | $5.8\pm 2.6$ | Myr
$M_{\rm out}$ | $\left(1.94^{+0.69}_{-0.51}\right)\times 10^{5}$ | $\sim 3\times 10^{4}$ | M⊙
$\dot{M}$ | $0.59^{+0.46}_{-0.26}$ | $\sim 6\times 10^{-3}$ | M⊙ yr-1
$\dot{E}_{\rm kin}$ | $\left(0.86^{+1.27}_{-0.54}\right)\times 10^{42}$ | $\sim 2\times 10^{39}$ | erg s-1
$\dot{P}_{\rm out}$ | $\left(0.80^{+0.88}_{-0.44}\right)\times 10^{34}$ | $\sim 4\times 10^{31}$ | erg cm-1
Table 2: Summary of the relevant physical properties of the ionized outflow
discovered in the I20210S optical spectrum and of the South Nebula. Upper
section: Quantities that are independent of the electron density. Lower
section: Quantities dependent on the electron density, for which a value of
$\langle n_{e}\rangle=5000$ cm-3 (Rose et al. 2018) is assumed in the case of
the outflow. Note that the quoted errors of measurement are only indicative of
the magnitude of the statistical uncertainties, not the systematics, affecting
the computed values (see Sect. 5).
As pointed out in the literature (Rose et al. 2018; Spence et al. 2018; Davies
et al. 2020), the TR method allows us to probe denser gas than the
“traditional” [S II] doublet, whose emission is likely produced at the
ionization front, where the electron density decreases significantly. This
issue probably lies at the root of the high values of ionized gas mass and mass
outflow rate recently found in AGN winds (e.g., Carniani et al. 2015; Kakkad
et al. 2016; Bischetti et al. 2017; Perna et al. 2017), for which values of
$10^{2}$ cm-3 $\lesssim\langle n_{e}\rangle\lesssim 10^{3}$ cm-3 are usually
assumed. Such an assumption is justified from measurements of the outflow
electron density based on the [S II] method: for example, Arribas et al.
(2014) get $\langle n_{e}\rangle\sim 400$ cm-3 for the outflowing emission in
ULIRGs, whereas Perna et al. (2020) find $\sim$200 cm-3 in the archetypal
ULIRG Arp 220. For comparison, the values of $\langle n_{e}\rangle$ found by
Rose et al. (2018) for AGN-driven outflows in ULIRGs fall in the range
$3000$–$56,000$ cm-3, with a median value of $\sim$5000 cm-3. Also, Kakkad
et al. (2018) obtained spatially resolved values of $\langle n_{e}\rangle$ up
to $\sim$2000 cm-3 for ionized winds in nearby radio-selected Seyfert
galaxies. Due to the limited DOLoRes spectral resolution and the severe
blending that affects the I20210S trans-auroral emission lines with nearby
features (e.g., the H$\delta$ close to the [S II] $\lambda\lambda$4068,4076,
the He I blueward and the [Ni II] redward of the [O II]
$\lambda\lambda$7319,7331), we cannot draw any firm conclusion on the
reliability of the I20210S outflow electron density derived with the TR
method. Therefore, in the following discussion of the physical properties of
the I20210S ionized wind, we adopted $\langle n_{e}\rangle\sim 5000$ cm-3
(Rose et al. 2018) as our main reference when computing all the related
quantities.
With $L^{\rm out}_{\rm[OIII]}=(2.44\pm 0.74)\times 10^{42}$ erg s-1 obtained
from the outflow [O III] flux reported in Table 1, and the further assumptions
of $C\approx 1$ and ${\rm[O/H]}\sim{\rm[O/H]}_{\odot}$ (i.e., solar
metallicity), Eq. 3 yields $M_{\rm
out}=\left(1.94^{+0.69}_{-0.51}\right)\times 10^{5}$ M⊙. Clearly, this value
and those based on it depend on the assumed $\langle n_{e}\rangle$. We then
derive the expression for the outflowing mass rate $\dot{M}$ from the fluid
continuity equation, as done in Bischetti et al. (2017), in order to provide
a local estimate of this quantity at the outflow
termination radius $R_{\rm out}$ (e.g., Feruglio et al. 2015):
$\dot{M}=3\frac{M_{\rm out}v_{\rm max}}{R_{\rm out}}.$ (6)
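A sketch of Eqs. (3) and (6) with the adopted numbers quoted in the text ($C\approx 1$, solar metallicity, $\langle n_{e}\rangle=5000$ cm-3, $L^{\rm out}_{\rm[OIII]}=2.44\times 10^{42}$ erg s-1, $v_{\rm max}=2160$ km s-1, $R_{\rm out}=2.20$ kpc); the unit conversions are the only additions.

```python
import math

KPC_KM = 3.0857e16      # km per kpc
YR_S = 3.156e7          # seconds per year

def m_out(l_oiii, ne, c_factor=1.0, metallicity=0.0):
    """Eq. (3): ionized outflow mass in solar masses.
    metallicity is [O/H] - [O/H]_sun (0 for the solar value assumed here)."""
    log_m = (7.6 + math.log10(c_factor) - metallicity
             + math.log10(l_oiii / 1e44) - math.log10(ne / 1e3))
    return 10.0 ** log_m

def mdot(m, v_max, r_out):
    """Eq. (6): 3 M v_max / R_out, with v_max in km/s and R_out in kpc,
    converted to Msun/yr."""
    return 3.0 * m * v_max / (r_out * KPC_KM) * YR_S

m = m_out(2.44e42, 5000.0)
print(m)                        # -> ~1.9e5 Msun, as in the text
print(mdot(m, 2160.0, 2.20))    # -> ~0.59 Msun/yr, as in Table 2
```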
In order to estimate the spatial extension of the outflow, we performed a
series of adjacent, 1-px wide (i.e., $\sim$0.27 kpc, owing to the DOLoRes
angular scale of $0.252$ arcsec px-1 and the scale distance of 1.087 kpc
arcsec-1 at $z=0.056$) extractions of the I20210S spectrum along its 2D trace
in the high-S/N region of H$\beta$+[O III].
First, we fit a Gaussian function to the trace profile at the [O III] peak to
get the trace width $\sigma=1.67\pm 0.01$ px (i.e., $\sim$0.46 kpc). Then, we
extracted 1D off-axis spectra in both the northern and southern direction
offset by 3 to 12 px ($\sim$0.8 to $\sim$3.3 kpc) from the aperture center
(Perna et al. 2015; Bischetti et al. 2017) in order to both exclude the signal
enclosed in the instrumental PSF of $0^{\prime\prime}.85$
($0^{\prime\prime}.43$ in each direction, i.e., 2.5 px) and avoid overlap with
either I20210N or the South Nebula. Finally, we calibrated such spectra with
the same wavelength dispersion and sensitivity function applied to the I20210S
average spectrum, and applied the MC fitting procedure to the continuum-
subtracted [O III] emission, only letting the line amplitudes free to vary –
narrow FWHM, broad FWHM and blueshifts are fixed to the values reported in
Table 1. We produced the MC fit for both the two-component model and a
comparison single-component model of the emission lines, in which the broad
emission from the outflow is neglected. In this way, we are able to identify
through a statistical $F$-test the transition region where the outflow signal
becomes negligible with respect to the NLR emission: specifically, we define
the significance threshold of the outflow by requesting an $F$-test
probability $p_{F}>0.90$.
The statistical analysis yields significant emission associated with the
outflow up to 5 px ($\sim$1.3 kpc) in the northern direction, with
$p_{F}\gtrsim 0.93$ ($\chi^{2}/\nu_{\rm d.o.f}=76/73$); at larger distances
along this direction, the signal emitted from the ionized wind quickly becomes
indistinguishable from the noise, and thus has no impact on the best fit
($p_{F}=0$). Instead, the outflow emission in the southern direction remains
significant out to 8 px ($\sim$2 kpc, $p_{F}\gtrsim 0.93$, $\chi^{2}/\nu_{\rm
d.o.f}=78/73$). The “terminal” line spectra extracted at 5-px northward and
8-px southward offset are shown in Fig. 8. From this point on, we therefore
adopt the distance $R_{\rm out}=2.20\pm 0.14$ kpc (i.e. $8.0\pm 0.5$ px) as
our fiducial value for the termination radius of the I20210S ionized wind
within the applicability limits of the $F$-test statistics. This choice is
motivated by the observation that an 8-px offset places the extraction more
than 3$\sigma$ (i.e., 5 px) from the trace center, and thus fiducially outside
the 2.5-px PSF radius.
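As a quick sanity check of the angular-to-physical conversions used above, the pixel offsets can be turned into projected distances from the quoted plate scale and distance scale (a minimal sketch; the constants are those given in the text):

```python
# Pixel-to-kpc conversions for the DOLoRes long-slit data, using the
# plate scale (0.252 arcsec/px) and the distance scale at z = 0.056
# (1.087 kpc/arcsec) quoted in the text.
PLATE_SCALE = 0.252      # arcsec per pixel
KPC_PER_ARCSEC = 1.087   # kpc per arcsec at z = 0.056

def px_to_kpc(px):
    """Convert a pixel offset along the slit to a projected distance in kpc."""
    return px * PLATE_SCALE * KPC_PER_ARCSEC

print(px_to_kpc(1))      # ~0.27 kpc per pixel
print(px_to_kpc(1.67))   # trace width sigma, ~0.46 kpc
print(px_to_kpc(8))      # southern terminal offset, ~2.2 kpc
```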
Then we calculate $v_{\rm max}=|\Delta v|_{\rm[OIII]}^{\rm
out}+2\sigma_{\rm[OIII]}^{\rm out}=2160\pm 380$ km s-1 (see Bischetti et al.
2017, and references therein) from the outflowing [O III] $\lambda$5007
parameters reported in Table 1. In this way, from Eq. 6, we obtain
$\dot{M}=0.59^{+0.46}_{-0.26}$ M⊙ yr-1. Finally, we derive the outflow kinetic
power $\dot{E}_{\rm kin}$, the dynamical time scale $t_{\rm dyn}$ and the
outflow momentum rate $\dot{P}_{\rm out}$ as:
$\displaystyle\dot{E}_{\rm kin}=\frac{1}{2}\dot{M}v_{\rm max}^{2},$ (7)
$\displaystyle t_{\rm dyn}\approx\frac{R_{\rm out}}{v_{\rm max}},$ (8)
$\displaystyle\dot{P}_{\rm out}=\dot{M}v_{\rm max},$ (9)
which yield $\dot{E}_{\rm kin}=\left(0.86^{+1.27}_{-0.54}\right)\times
10^{42}$ erg s-1, $t_{\rm dyn}=0.99\pm 0.27$ Myr and $\dot{P}_{\rm
out}=\left(0.80^{+0.88}_{-0.44}\right)\times 10^{34}$ erg cm-1, respectively.
We report all these quantities in Table 2, highlighting that, given the high
reference value of $\sim$5000 cm-3 adopted for the outflow $\langle
n_{e}\rangle$, the electron-density dependent parameters might be
underestimated by a factor of $\sim$10$\div$50.
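The energetics of Eqs. 6 to 9 can be reproduced directly from the quoted quantities ($M_{\rm out}\sim 1.94\times 10^{5}$ M⊙, $v_{\rm max}=2160$ km s-1, $R_{\rm out}=2.20$ kpc); the following sketch uses plain cgs constants and no external libraries:

```python
# Reproduce the outflow energetics of Eqs. 6-9 from the quoted quantities.
MSUN_G = 1.989e33        # solar mass [g]
KPC_CM = 3.086e21        # kiloparsec [cm]
YR_S   = 3.156e7         # year [s]

M_out = 1.94e5 * MSUN_G  # outflow mass [g]
v_max = 2160 * 1e5       # maximum outflow velocity [cm/s]
R_out = 2.20 * KPC_CM    # termination radius [cm]

Mdot  = 3 * M_out * v_max / R_out   # Eq. 6, mass outflow rate [g/s]
E_kin = 0.5 * Mdot * v_max**2       # Eq. 7, kinetic power [erg/s]
t_dyn = R_out / v_max               # Eq. 8, dynamical time [s]
P_out = Mdot * v_max                # Eq. 9, momentum rate [g cm s^-2]

print(Mdot * YR_S / MSUN_G)         # ~0.59 Msun/yr
print(E_kin)                        # ~0.86e42 erg/s
print(t_dyn / (YR_S * 1e6))         # ~1.0 Myr
print(P_out)                        # ~0.80e34
```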
It should also be noted that although the quantities presented in Table 2 are
given along with their errors, they should be interpreted as rough estimates,
since the systematic uncertainties affecting Eq. 3 can bias them at the level
of $1\div 2$ orders of magnitude (Bischetti et al. 2017).
Given this caveat, the values of $M_{\rm out}$, $\dot{M}$ and $\dot{E}_{\rm
kin}$ obtained for the outflow of I20210S are in line with those found by
Rupke & Veilleux (2013) for a sample of nearby galaxy mergers (see also
Rodríguez Zaurín et al. 2013; Spence et al. 2018). In order to definitively
assess the AGN nature of the I20210S outflow, we compare its kinetic power to
the expected ejected mass rate, $\dot{M}_{\rm SN}$, energy output,
$\dot{E}_{\rm SN}$, and momentum injection, $\dot{P}_{\rm SN}$, of starbursts
associated with supernova (SN) explosions (Brusa et al. 2015). According to
Veilleux et al. (2005), such quantities are related to the host galaxy’s star
formation rate (SFR) by:
$\dot{M}_{\rm SN}\lesssim 0.26\left(\frac{{\rm SFR}}{{\rm M}_{\odot}\mbox{
}{\rm yr}^{-1}}\right),$ (10) $\dot{E}_{\rm SN}\lesssim 7\times
10^{41}\left(\frac{{\rm SFR}}{{\rm M}_{\odot}\mbox{ }{\rm yr}^{-1}}\right),$
(11) $\dot{P}_{\rm SN}\lesssim 5\times 10^{33}\left(\frac{{\rm SFR}}{{\rm
M}_{\odot}\mbox{ }{\rm yr}^{-1}}\right),$ (12)
and the SFR is linked to the host-galaxy IR ($8\div 1000$ $\mu$m)
luminosity $L^{*}_{\rm IR}$ by (Kennicutt 1998; Kennicutt & Evans 2012;
Kennicutt & De Los Reyes 2021):
$\frac{{\rm SFR}}{{\rm M}_{\odot}\mbox{ }{\rm yr}^{-1}}=3.9\times
10^{-44}\left(\frac{L^{*}_{\rm IR}}{{\rm erg}\mbox{ }{\rm s}^{-1}}\right).$
(13)
We estimate $L^{*}_{\rm IR}\sim 3.4\times 10^{44}$ erg s-1 for I20210S from
the values for the total IR luminosity of the I20210 system and the AGN IR
luminosity for the single members presented in Imanishi & Saito (2014), who
estimated the contributions to the total galaxy IR emission coming from the
active nucleus through photometric aperture size at high spatial resolution
(see their Tables 1, 3, and 5). This in turn yields ${\rm SFR}\sim 13$ M⊙
yr-1, and hence $\dot{M}_{\rm SN}\lesssim 3.4$ M⊙ yr-1, $\dot{E}_{\rm
SN}\lesssim 9\times 10^{42}$ erg s-1 and $\dot{P}_{\rm SN}\lesssim 6.5\times
10^{34}$ erg cm-1. Such values are $\sim$6 to $\sim$10 times higher than those
listed in Table 2. Therefore, a starburst at work in I20210S is potentially
able to produce the observed ionized outflow. However, Veilleux et al. (2005)
note that Eqs. 10 to 12 give the limit values for a thermalization efficiency
of 100% – that is, when none of the starburst-injected energy is radiated
away. Since typical starburst thermalization efficiencies are of the order of
$\sim$10% (see Veilleux et al. 2005, and references therein), the actual
energy output from SNe is expected to be (at most) in line with the values
listed in Table 2. This fact, in combination with an IR emission powered by
the AGN (Imanishi & Saito 2014), leads us to conclude that the wind in I20210S
is likely AGN-driven, although a non-negligible contribution from star
formation cannot be ruled out. Establishing the main driving mechanism of the
I20210S outflow is even further challenged by the uncertainty in its electron
density, which biases the derivation of the physical properties that can be
directly compared with the expected SN energetics; future spatially resolved
observations of I20210S will therefore also be of paramount importance in
precisely assessing the nature of its ionized wind.
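Eqs. 10 to 13 amount to simple proportionalities; a short numerical check starting from the adopted $L^{*}_{\rm IR}\sim 3.4\times 10^{44}$ erg s-1 recovers the SN limits quoted above:

```python
# Starburst (SN-driven) upper limits of Eqs. 10-12, starting from the
# SFR calibration of Eq. 13 and the adopted L*_IR for I20210S.
L_IR = 3.4e44                # AGN-subtracted IR luminosity [erg/s]
SFR  = 3.9e-44 * L_IR        # Eq. 13 -> star formation rate [Msun/yr]

Mdot_SN = 0.26 * SFR         # Eq. 10, ejected mass rate [Msun/yr]
Edot_SN = 7e41 * SFR         # Eq. 11, energy output [erg/s]
Pdot_SN = 5e33 * SFR         # Eq. 12, momentum injection [erg/cm]

print(SFR)                   # ~13 Msun/yr
print(Mdot_SN)               # ~3.4 Msun/yr
print(Edot_SN)               # ~9e42 erg/s
print(Pdot_SN)               # ~6.5e34 erg/cm
```

Dividing these limits by the outflow values of Table 2 gives ratios between roughly 6 and 10, matching the comparison quoted in the text.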
Figure 9: Schematic representation of the emitting components observed in the
spectrum of I20210S: the high-velocity outflow (green shaded area) – assumed
biconical for simplicity, since only one slit position is available and the
PSF size prevents us from resolving the morphology of such a small structure –
and the South Nebula (magenta solid ellipse), plotted on top of the PS1 grizy
image of the galaxy. The slit position and orientation (white dot-dashed
lines) are reported; in the upper left corner, the diameter of
$0^{\prime\prime}.85$ of the DOLoRes PSF (white dashed circle) is also
indicated.
Broadened emission lines in ULIRGs hosting Seyfert nuclei are a common
feature. Rodríguez Zaurín et al. (2013) reported that up to 94% of nearby
objects of this kind ($z<0.175$) show strongly blueshifted ($\Delta v>150$ km
s-1) [O III] broad emission components (${\rm FWHM}>500$ km s-1) that are
emitted by near-nuclear ($R_{\rm out}\lesssim 3.5$ kpc) warm ionized outflows.
At the same time, while they are fully detectable in optical and UV spectra,
such outflows are usually not capable of injecting enough power into the
surrounding environment of AGN to effectively affect the host-galaxy ISM and
star formation. In the face of a required $\dot{E}_{\rm kin}$ of the order of
0.5% to 5% of the total AGN radiant energy (Di Matteo et al. 2005; Hopkins &
Elvis 2010), Fiore et al. (2017) showed, in fact, that the majority of near-
nuclear warm outflows clusters around $\dot{E}_{\rm kin}\sim 0.001L_{\rm bol}$
(see their Figure 1). The I20210S outflow appears to be consistent with this
scenario, given its $\dot{E}_{\rm kin}/L_{\rm bol}$ ratio of $\sim$0.002.
However, its power can still be sufficient to locally affect the star
formation rate in some host-galaxy regions, as demonstrated by the anti-
correlation found between the distribution of star-forming clouds and wind
zones in AGN hosts over resolved spatial regions that are $\sim$3$\div$7-kpc
wide (Cano-Díaz et al. 2012; Carniani et al. 2015; Cresci et al. 2015). Given
its proximity, brightness, and spatial structure, I20210 therefore stands as
an extremely peculiar laboratory in which the impact and interplay of ongoing
galaxy merging on both star formation and AGN activity could be investigated
in great detail.
## 6 Physical properties of the South Nebula
In this section, we briefly discuss the properties of the South Nebula. As
described in Sect. 4.2, this region exhibits interesting intermediate
ionization properties between AGN-powered (FWHM $\sim 700$ km s-1, velocity
blueshift of $\sim$500 km s-1) and star-forming gas clouds. We present the
schematic structure of I20210S in Fig. 9, overlapping the position and
extension – within the spectrograph slit – of both the outflow and the South
Nebula to the PS1 image of the galaxy. From this picture, it is evident that
the outflow propagating southwards extends far enough beyond the innermost
nuclear region to reach the inner regions of the disk structure, potentially
interacting with the I20210S baryonic reservoir. Furthermore, the South Nebula
lies along the extensions of both the outflow and the western spiral arm of
I20210S, which makes it additionally interesting.
According to Heckman et al. (2000), a galaxy with $L_{\rm IR}\sim 3\times
10^{45}$ erg s-1, such as I20210S (Piconcelli et al. 2010), has an average
rotational velocity $\langle v_{\rm rot}\rangle$ of $\sim$250 km s-1; at a distance of
$\sim$2.2 kpc from the central engine, this translates to an escape velocity
$v_{\rm esc}\sim 550$ km s-1 when assuming a galactic radius of $\sim$6.5 kpc
(i.e., the distance of the South Nebula). For comparison, the wind has a
maximum velocity that is about four times higher (see Table 2), and the South
Nebula itself exhibits $v_{\rm max}=1100\pm 430$ km s-1 (about twice as high).
This implies that the nebula is being ejected from the host galaxy by the
ionized outflow, which may also trigger star formation activity via quasar
feedback, as suggested by the placement of the South Nebula line ratios on the
AGN-star formation boundary (see Sect. 4.3).
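The escape-velocity estimate is not spelled out in the text; as an assumption, the sketch below adopts the standard parametrization for a singular isothermal sphere truncated at $r_{\rm max}$ (see Veilleux et al. 2005), $v_{\rm esc}(r)=\sqrt{2}\,v_{\rm rot}\,[1+\ln(r_{\rm max}/r)]^{1/2}$, which lands near the quoted value:

```python
import math

# Escape velocity for a truncated singular isothermal sphere (assumed
# parametrization, not stated explicitly in the text), evaluated with
# v_rot ~ 250 km/s at r ~ 2.2 kpc for a galactic radius r_max ~ 6.5 kpc.
v_rot = 250.0   # km/s
r     = 2.2     # kpc
r_max = 6.5     # kpc

v_esc = math.sqrt(2.0) * v_rot * math.sqrt(1.0 + math.log(r_max / r))
print(v_esc)    # ~510 km/s, same order as the ~550 km/s quoted in the text
```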
It is also interesting to consider its physical properties, as done in Sect. 5
for the main outflow. To this end, we assume $T_{e}\sim 10^{4}$ K, which
according to Eq. 5 translates to $\langle n_{e}\rangle\sim 100$ cm-3 for the
South Nebula [S II] ratio $I_{6717}/I_{6731}=1.36\pm 0.10$ (see Table 1). This,
in turn, yields $M_{\rm out}\sim 3\times 10^{4}$ M⊙, given
$L_{\rm[OIII]}=(9.05\pm 0.55)\times 10^{39}$ erg s-1 from the [O III] flux
listed in Table 1, and finally $\dot{M}=M_{\rm out}v_{\rm max}/R_{\rm out}\sim
6\times 10^{-3}$ M⊙ yr-1 (Bischetti et al. 2017) for $R_{\rm out}=6.52\pm
0.43$ kpc (see Fig. 2). We report all of these quantities in Table 2 along
with the corresponding $t_{\rm dyn}$, $\dot{E}_{\rm kin}$ and $\dot{P}_{\rm
out}$, to allow for a direct comparison with the values that hold for the
outflow.
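The South Nebula mass rate can be checked numerically from the quoted values; note that the expression as written in this section omits the geometric factor of 3 appearing in Eq. 6 (a sketch in plain cgs units):

```python
# South Nebula local mass-rate estimate, Mdot = M_out * v_max / R_out,
# with M_out ~ 3e4 Msun, v_max = 1100 km/s and R_out = 6.52 kpc.
MSUN_G = 1.989e33   # solar mass [g]
KPC_CM = 3.086e21   # kiloparsec [cm]
YR_S   = 3.156e7    # year [s]

M_out = 3e4 * MSUN_G
v_max = 1100 * 1e5            # [cm/s]
R_out = 6.52 * KPC_CM         # [cm]

Mdot = M_out * v_max / R_out * YR_S / MSUN_G   # [Msun/yr]
print(Mdot)   # ~6e-3 Msun/yr
```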
## 7 I20210 SMBH mass estimates
In order to evaluate the SMBH mass $M_{\rm BH}$ in both objects, we used the
approach detailed in BM19 for Type II AGN, in which obscuration prevents us
from detecting the broad components of permitted emission lines. In this case,
$M_{\rm BH}$ is given by the following single-epoch relation, which links it
to a BLR virial shape factor $\varepsilon$ (summarizing the uncertainties on
the real BLR geometry), the monochromatic AGN luminosity $\lambda
L_{\lambda}(5100\mbox{ \AA})$ at 5100 Å, and the broad H$\alpha$ FWHM:
$\log{\left(\frac{M_{\rm BH}}{{\rm
M}_{\odot}}\right)}=\log{\varepsilon}+6.90+0.54\cdot\log{\left[\frac{\lambda
L_{\lambda}(5100\mbox{ }\AA)}{10^{44}\mbox{ }{\rm erg}\mbox{ }{\rm
s}^{-1}}\right]}+2.06\cdot\log{\left[\frac{{\rm FWHM}^{\rm(BLR)}_{{\rm
H}\alpha}}{10^{3}\mbox{ }{\rm km}\mbox{ }{\rm s}^{-1}}\right]}$
(14)
The validity of this relation holds as long as FWHM${}^{\rm(BLR)}_{{\rm
H}\alpha}$ and the [O III]/H$\beta$ ratio are measured for AGN-dominated
systems and are therefore related by the following logarithmic linear
relation:
$\log{\left(\frac{{\rm[O\mbox{ {\scriptsize III}}]}}{{\rm
H}\beta}\right)}=(0.58\pm 0.07)\cdot\log{\left[\frac{{\rm
FWHM}^{\rm(BLR)}_{{\rm H}\alpha}}{{\rm km}\mbox{ }{\rm
s}^{-1}}\right]}-(1.38\pm 0.38)$
(15)
According to Figs. 3 and 4 of BM19, this happens for AGN-dominated systems
with $\log{(\mbox{{[O\,{\scriptsize III}]}}/\mbox{{H$\beta$}})}\gtrsim 0.55$,
where the line intensities are not
contaminated by star formation in the host galaxy. Since, based on Table 1, we
have $\log{(\mbox{{[O\,{\scriptsize III}]}}/\mbox{{H$\beta$}})}\sim 0.7$ for
I20210N and $\sim$0.8 for I20210S, respectively, we can apply Eqs. 14 and 15
to both I20210 members. We do not quote any errors for the following
estimations of physical quantities involved in the determination of $M_{\rm
BH}$, since the measurement method is indirect and is therefore subject to
uncertainties of at least $\sim$0.5 dex (see BM19 and references therein).
Quantity | I20210N | I20210S | Units
---|---|---|---
$L^{\rm(NLR)}_{{\rm H}\beta}$ | $(1.622\pm 0.075)\times 10^{40}$ | $(1.040\pm 0.072)\times 10^{42}$ | erg s-1
$L_{\rm bol}$ | $5.2\times 10^{43}$ | $3.6\times 10^{45}$ | erg s-1
$\lambda L_{\lambda}(5100\mbox{ }\AA)$ | $7.0\times 10^{42}$ | $2.7\times 10^{44}$ | erg s-1
${\rm FWHM}^{\rm(BLR)}_{{\rm H}\alpha}$ | 3650 | 5700 | km s-1
$M_{\rm BH}$ | $2.9\times 10^{7}$ | $5.2\times 10^{8}$ | M⊙
$\lambda_{\rm Edd}$ | 0.01 | 0.05 | —
$\sigma_{v}^{*}$ | $390\pm 50$ | — | km s-1
$M_{*}$ | $\lesssim$1.5$\times 10^{12}$ | — | M⊙
Table 3: Physical parameters of the AGN hosted in the I20210 members.
Quantities derived from the application of proportionality relations (Eq. 14
to Eq. 18) are reported without errors due to the uncertainties of at least
$\sim$0.5 dex affecting them (see BM19 and references therein).
For I20210N, we derive $\lambda L_{\lambda}(5100\mbox{ \AA})$ using its
absorption-corrected hard X-ray luminosity $L_{2-10\mbox{ }{\rm
keV}}=4.7\times 10^{42}$ erg s-1 (Piconcelli et al. 2010) via Eq. 5 from
Maiolino et al. (2007):
$\log{L_{2-10\mbox{ }{\rm keV}}}=0.721\cdot\log{\left[\lambda
L_{\lambda}(5100\mbox{ }\AA)\right]}+11.78.$ (16)
This yields $\lambda L_{\lambda}(5100\mbox{ }\AA)\simeq 7.0\times 10^{42}$ erg
s-1. The FWHM of the invisible H$\alpha$ broad component is estimated from Eq.
15; in this way, we find ${\rm FWHM}^{\rm(BLR)}_{{\rm H}\alpha}\simeq 3650$ km
s-1, corresponding to $M_{\rm BH}\simeq 2.9\times 10^{7}$ M⊙ when adopting the
value $\varepsilon=1.075$ from Reines & Volonteri (2015), which corresponds to
the mean virial factor $\langle f\rangle=4\varepsilon=4.3$ derived by Grier et
al. (2013) by measuring the stellar velocity dispersion in the host galaxies
of powerful nearby quasars.
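This chain of estimates for I20210N can be verified numerically, inverting Eq. 16 for $\lambda L_{\lambda}(5100\mbox{ \AA})$ and then applying Eq. 14 with the quoted FWHM and $\varepsilon$ (a sketch using only quantities given in the text):

```python
import math

# I20210N black-hole mass: invert Eq. 16 from the absorption-corrected
# hard X-ray luminosity, then apply the single-epoch relation of Eq. 14.
L_X = 4.7e42                                   # L(2-10 keV) [erg/s]
logL5100 = (math.log10(L_X) - 11.78) / 0.721   # inverse of Eq. 16
L5100 = 10**logL5100                           # ~7.0e42 erg/s

eps  = 1.075                                   # Reines & Volonteri (2015)
fwhm = 3650.0                                  # broad H-alpha FWHM [km/s]
logM = (math.log10(eps) + 6.90
        + 0.54 * math.log10(L5100 / 1e44)
        + 2.06 * math.log10(fwhm / 1e3))       # Eq. 14
print(L5100)      # ~7.0e42 erg/s
print(10**logM)   # ~2.9e7 Msun
```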
For I20210S, we decided to avoid directly obtaining $\lambda
L_{\lambda}(5100\mbox{ \AA})$ from the observed spectrum due to the
uncertainty on its intrinsic reddening; similarly, we did not apply Eq. 16 to
indirectly compute it, since only a lower limit to $L_{2-10\mbox{ }{\rm
keV}}\gtrsim 5\times 10^{43}$ erg s-1 is reported in Piconcelli et al. (2010).
Instead, we relied on Eq. 6 from BM19:
$\log{L_{\rm bol}}=\log{L^{\rm(NLR)}_{{\rm H}\beta}}+3.48+\max{\left\{0,\mbox{ }0.31\cdot\left[\log{\left(\frac{{\rm[O\mbox{ {\scriptsize III}}]}}{{\rm H}\beta}\right)}-0.6\right]\right\}}$
(17)
and Eq. 6 by Netzer (2009):
$\log{\left[\lambda L_{\lambda}(5100\mbox{ }\AA)\right]}=1.09\cdot\log{L_{\rm
bol}}-5.23.$ (18)
From the value for I20210S of $L^{\rm(NLR)}_{{\rm H}\beta}=(1.040\pm
0.072)\times 10^{42}$ erg s-1 computed from the H$\beta$ flux listed in Table
1, we were thus able to derive $L_{\rm bol}\simeq 3.6\times 10^{45}$ erg s-1
and $\lambda L_{\lambda}(5100\mbox{ }\AA)\simeq 2.7\times 10^{44}$ erg s-1.
The value for $L_{\rm bol}$ estimated in this way is fully compatible with the
value of $\sim$3$\times 10^{45}$ erg s-1 assumed by Piconcelli et al. (2010)
on the basis of the I20210S infrared luminosity in the range $10\div 100$
$\mu$m (Sargsyan et al. 2011). Finally, Eq. 15 yields ${\rm
FWHM}^{\rm(BLR)}_{{\rm H}\alpha}\simeq 5700$ km s-1, which translates into
$M_{\rm BH}\simeq 5.2\times 10^{8}$ M⊙.
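The I20210S chain (Eqs. 17, 18, and 14) can be checked the same way, starting from the narrow H$\beta$ luminosity and the [O III]/H$\beta$ ratio of Table 1:

```python
import math

# I20210S: bolometric and 5100-A luminosities from the narrow H-beta line
# (Eqs. 17 and 18), then the black-hole mass from Eq. 14 with the
# FWHM ~ 5700 km/s inferred from Eq. 15 and epsilon = 1.075.
L_Hb = 1.040e42        # NLR H-beta luminosity [erg/s]
log_ratio = 0.8        # log([O III]/H-beta), from Table 1

logLbol  = math.log10(L_Hb) + 3.48 + max(0.0, 0.31 * (log_ratio - 0.6))  # Eq. 17
logL5100 = 1.09 * logLbol - 5.23                                         # Eq. 18

eps, fwhm = 1.075, 5700.0
logM = (math.log10(eps) + 6.90
        + 0.54 * (logL5100 - 44.0)
        + 2.06 * math.log10(fwhm / 1e3))       # Eq. 14
print(10**logLbol)    # ~3.6e45 erg/s
print(10**logL5100)   # ~2.7e44 erg/s
print(10**logM)       # ~5.2e8 Msun
```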
These SMBH masses imply an Eddington luminosity $L_{\rm Edd}\sim 3.7\times
10^{45}$ erg s-1 for I20210N and $\sim$6.6$\times 10^{46}$ erg s-1 for
I20210S, respectively. Combining them with the AGN bolometric luminosities of
$\sim$5.2$\times 10^{43}$ erg s-1 for I20210N and $\sim$3.6$\times 10^{45}$
erg s-1 for I20210S, we obtain Eddington ratios $\lambda_{\rm Edd}=L_{\rm
bol}/L_{\rm Edd}\sim 0.01$ for I20210N and $\sim$0.05 for I20210S. Although these values
are affected by large uncertainties, such ratios are consistent with the
galaxy classification from the BPT diagrams, with I20210S clearly falling in
the Seyfert region and with I20210N exhibiting more mixed properties. In
addition, we combine the system of equations in Eq. 9 from BM19 to infer the
stellar mass $M_{*}$ of the I20210N host galaxy from its $\sigma_{v}^{*}$ of
390 km s-1 (derived in Sect. 3.1), finding $M_{*}\sim 1.5\times 10^{12}$ M⊙.
We summarize all these parameters in Table 3.
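The quoted Eddington ratios follow from the standard Eddington luminosity, $L_{\rm Edd}\approx 1.26\times 10^{38}\,(M_{\rm BH}/{\rm M}_{\odot})$ erg s-1 for pure hydrogen:

```python
# Eddington ratios lambda_Edd = L_bol / L_Edd, using the standard
# hydrogen Eddington luminosity 1.26e38 (M_BH / Msun) erg/s.
def lambda_edd(L_bol, M_BH_msun):
    """Return L_bol / L_Edd for a black hole of M_BH_msun solar masses."""
    return L_bol / (1.26e38 * M_BH_msun)

print(lambda_edd(5.2e43, 2.9e7))   # I20210N, ~0.01
print(lambda_edd(3.6e45, 5.2e8))   # I20210S, ~0.05
```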
The value of $M_{*}$ derived for I20210N is $\sim$50 times larger than that
typically expected from the $M_{\rm bulge}$-to-$M_{\rm BH}$ relation ($M_{\rm
bulge}\simeq 10^{3}M_{\rm BH}$; e.g., Magorrian et al. 1998; Häring & Rix
2004; Gültekin et al. 2009); however, we highlight that the I20210N kinematics
is likely altered by its gravitational interaction with I20210S, thus making
its measured $\sigma_{v}^{*}$ unreliable for the purposes of estimating the
stellar mass. Therefore, the value of $M_{*}$ derived in this way should only
be treated as an (overestimated) upper limit to the I20210N baryonic content.
## 8 Summary and conclusions
In this article, we present an optical spectroscopic analysis of the AGN pair
hosted in the interacting system IRAS 20210+1121 (I20210; P90; Heisler & Vader
1995; Burston et al. 2001; Davies et al. 2002; Arribas et al. 2004; Piconcelli
et al. 2010) at $z=0.056$. This study is based on spectroscopy taken through a
slit aligned along the nuclei of the two interacting galaxies. The high-
quality data taken at the Telescopio Nazionale Galileo allowed us to perform a
detailed study of the light emission from both components and from their
surrounding environment in the rest-frame wavelength range of 3500 – 7300 Å,
with the possibility of a comprehensive characterization of this interacting
galaxy pair. Here we summarize our main findings:
* •
I20210N, the northern member of the I20210 system, can be definitively
classified as a Seyfert 2 galaxy with an exceptional stellar velocity
dispersion of $\sigma_{v}^{*}\sim 400$ km s-1, hosting an AGN powered by a
black hole with $M_{\rm BH}\sim 3\times 10^{7}$ M⊙ that radiates at 1% of its
Eddington limit.
* •
I20210S, the southern component, is a powerful Type II quasar with $M_{\rm
BH}\sim 5\times 10^{8}$ M⊙ radiating at 5% of its Eddington limit. Its
environment is revealed to be highly structured, with an ionized outflow and a
detached gaseous nebula (the South Nebula) alongside the nuclear emission.
* •
The physical properties of the ionized outflow derived from the analysis of
the broad emission-line components ($T_{e}\sim 10^{4}$ K, $\langle
n_{e}\rangle\gtrsim 5000$ cm-3, $v_{\rm max}\sim 2000$ km s-1, $R_{\rm
out}\sim 2$ kpc, $M_{\rm out}\sim 2\times 10^{5}$ M⊙, $\dot{M}\sim 0.6$ M⊙
yr-1) are in line with those found in other powerful AGN hosted in ULIRGs
(Rodríguez Zaurín et al. 2013; Rupke & Veilleux 2013; Kakkad et al. 2018;
Spence et al. 2018). This suggests that the I20210S AGN activity has
potentially a direct impact on the host-galaxy environment through quasar
feedback; however, these results need to be further investigated with higher
resolution spectral observations in order to constrain the value of the wind
electron density and thus allow for a better characterization of the feedback
mechanism to the star formation activity in I20210S (e.g., Carniani et al.
2015; Fiore et al. 2017).
* •
The South Nebula exhibits dynamical properties consistent with those of highly
disrupted gas stripped out of the I20210S nucleus (velocity blueshift of
$\sim$500 km s-1, FWHM of $\sim$700 km s-1), similar to the case of the
Teacup Galaxy (Ramos Almeida et al. 2017), coupled to intermediate
ionization properties between AGN-powered and star-forming gas. Such features
qualify this region as a very interesting target for a deeper investigation of
the potential feedback processes – either triggered by AGN activity or by the
galaxy merger – at work in I20210S.
Thanks to the above properties, the I20210 system can be characterized as a
very interesting target in the local Universe that ought to be investigated
with dedicated multi-wavelength follow-ups aimed at a detailed study of the
effects of AGN feedback coupled to host-galaxy interaction on the AGN
surrounding environment. In particular, obtaining higher resolution spectra
($\lambda/\Delta\lambda\gtrsim 1500$) is crucial to improving the emission-
line diagnostics of the I20210S components (nucleus, outflow, South Nebula)
and to allow for a precise evaluation of the I20210S outflow physical
conditions. Furthermore, integral-field spectroscopic observations are
required to accurately constrain both the morphology and interplay of outflows
and any off-nuclear emitting region in I20210S.
###### Acknowledgements.
We thank our anonymous referee for their helpful comments. GV, EP, CV, MB, CF
and FF acknowledge support from PRIN MIUR project “Black Hole winds and the
Baryon Life Cycle of Galaxies: the stone-guest at the galaxy evolution
supper”, contract #2017PH3WAT. GV also acknowledges financial support from
Premiale 2015 MITic (PI: B. Garilli). CRA acknowledges financial support from
the Spanish Ministry of Science, Innovation and Universities (MCIU) under
grant with reference RYC-2014-15779, from the European Union’s Horizon 2020
research and innovation programme under Marie Skłodowska-Curie grant agreement
No 860744 (BiD4BESt), from the State Research Agency (AEI-MCINN) of the
Spanish MCIU under grants ”Feeding and feedback in active galaxies” with
reference PID2019-106027GB-C42, ”Feeding, feedback and obscuration in active
galaxies” with reference AYA2016-76682-C3-2-P, and ”Quantifying the impact of
quasar feedback on galaxy evolution (QSOFEED)” with reference EUR2020-112266.
CRA also acknowledges support from the Consejería de Economía, Conocimiento y
Empleo del Gobierno de Canarias and the European Regional Development Fund
(ERDF) under grant with reference ProID2020010105 and from IAC project
P/301404, financed by the Ministry of Science and Innovation, through the
State Budget and by the Canary Islands Department of Economy, Knowledge and
Employment, through the Regional Budget of the Autonomous Community. Based on
observations made with the Italian Telescopio Nazionale Galileo (TNG) operated
on the island of La Palma by the Fundación Galileo Galilei of the INAF
(Istituto Nazionale di Astrofisica) at the Spanish Observatorio del Roque de
los Muchachos of the Instituto de Astrofísica de Canarias. IRAF is distributed
by the National Optical Astronomy Observatories, which is operated by the
Association of Universities for Research in Astronomy, Inc. (AURA) under
cooperative agreement with the National Science Foundation. Reproduced with
permission from Astronomy & Astrophysics, © ESO.
## References
* Arribas et al. (2004) Arribas, S., Bushouse, H., Lucas, R. A., Colina, L., & Borne, K. D. 2004, AJ, 127, 2522
* Arribas et al. (2014) Arribas, S., Colina, L., Bellocchi, E., Maiolino, R., & Villar-Martín, M. 2014, A&A, 568, A14
* Baldwin et al. (1981) Baldwin, J. A., Phillips, M. M., & Terlevich, R. 1981, PASP, 93, 5
* Baron & Ménard (2019) Baron, D. & Ménard, B. 2019, MNRAS, 487, 3404
* Bellocchi et al. (2013) Bellocchi, E., Arribas, S., Colina, L., & Miralles-Caballero, D. 2013, A&A, 557, A59
* Bischetti et al. (2017) Bischetti, M., Piconcelli, E., Vietri, G., et al. 2017, A&A, 598, A122
* Bisogni et al. (2017) Bisogni, S., Marconi, A., & Risaliti, G. 2017, MNRAS, 464, 385
* Brusa et al. (2015) Brusa, M., Bongiorno, A., Cresci, G., et al. 2015, MNRAS, 446, 2394
* Burston et al. (2001) Burston, A. J., Ward, M. J., & Davies, R. I. 2001, MNRAS, 326, 403
* Cano-Díaz et al. (2012) Cano-Díaz, M., Maiolino, R., Marconi, A., et al. 2012, A&A, 537, L8
* Cappellari (2012) Cappellari, M. 2012, pPXF: Penalized Pixel-Fitting stellar kinematics extraction
* Cappellari (2017) Cappellari, M. 2017, MNRAS, 466, 798
* Cappellari & Emsellem (2004) Cappellari, M. & Emsellem, E. 2004, PASP, 116, 138
* Carniani et al. (2015) Carniani, S., Marconi, A., Maiolino, R., et al. 2015, A&A, 580, A102
* Cattaneo et al. (2009) Cattaneo, A., Faber, S. M., Binney, J., et al. 2009, Nature, 460, 213
* Chambers & Pan-STARRS Team (2016) Chambers, K. C. & Pan-STARRS Team. 2016, in American Astronomical Society Meeting Abstracts, Vol. 227, American Astronomical Society Meeting Abstracts #227, 324.07
* Cicone et al. (2018) Cicone, C., Brusa, M., Ramos Almeida, C., et al. 2018, Nature Astronomy, 2, 176
* Cid Fernandes et al. (2010) Cid Fernandes, R., Stasińska, G., Schlickmann, M. S., et al. 2010, MNRAS, 403, 1036
* Cook & Weisberg (1982) Cook, R. D. & Weisberg, S. 1982, Residuals and Influence in Regression
* Cresci et al. (2015) Cresci, G., Mainieri, V., Brusa, M., et al. 2015, ApJ, 799, 82
* Croton et al. (2006) Croton, D. J., Springel, V., White, S. D. M., et al. 2006, MNRAS, 365, 11
* Davies et al. (2020) Davies, R., Baron, D., Shimizu, T., et al. 2020, MNRAS, 498, 4150
* Davies et al. (2002) Davies, R. I., Burston, A., & Ward, M. J. 2002, MNRAS, 329, 367
* Davis et al. (1985) Davis, M., Efstathiou, G., Frenk, C. S., & White, S. D. M. 1985, ApJ, 292, 371
* De Rosa et al. (2018) De Rosa, A., Vignali, C., Husemann, B., et al. 2018, MNRAS, 480, 1639
* Di Matteo et al. (2005) Di Matteo, T., Springel, V., & Hernquist, L. 2005, Nature, 433, 604
* Doyon et al. (1994) Doyon, R., Wells, M., Wright, G. S., et al. 1994, ApJ, 437, L23
* Ellison et al. (2019) Ellison, S. L., Viswanathan, A., Patton, D. R., et al. 2019, MNRAS, 487, 2491
* Elvis (2000) Elvis, M. 2000, ApJ, 545, 63
* Fabian (2012) Fabian, A. C. 2012, ARA&A, 50, 455
* Falcón-Barroso et al. (2017) Falcón-Barroso, J., Lyubenova, M., van de Ven, G., et al. 2017, A&A, 597, A48
* Farrah et al. (2007) Farrah, D., Bernard-Salas, J., Spoon, H. W. W., et al. 2007, ApJ, 667, 149
* Feruglio et al. (2015) Feruglio, C., Fiore, F., Carniani, S., et al. 2015, A&A, 583, A99
* Fiore et al. (2017) Fiore, F., Feruglio, C., Shankar, F., et al. 2017, A&A, 601, A143
* Forbes & Ponman (1999) Forbes, D. A. & Ponman, T. J. 1999, MNRAS, 309, 623
* Gaskell & Ferland (1984) Gaskell, C. M. & Ferland, G. J. 1984, PASP, 96, 393
* Grier et al. (2013) Grier, C. J., Martini, P., Watson, L. C., et al. 2013, ApJ, 773, 90
* Gültekin et al. (2009) Gültekin, K., Richstone, D. O., Gebhardt, K., et al. 2009, ApJ, 698, 198
* Häring & Rix (2004) Häring, N. & Rix, H.-W. 2004, ApJ, 604, L89
* Heckman (1980) Heckman, T. M. 1980, A&A, 87, 152
* Heckman & Best (2014) Heckman, T. M. & Best, P. N. 2014, ARA&A, 52, 589
* Heckman et al. (2000) Heckman, T. M., Lehnert, M. D., Strickland, D. K., & Armus, L. 2000, ApJS, 129, 493
* Heisler & Vader (1995) Heisler, C. A. & Vader, J. P. 1995, AJ, 110, 87
* Hernquist (1989) Hernquist, L. 1989, Nature, 340, 687
* Ho et al. (1995) Ho, L. C., Filippenko, A. V., & Sargent, W. L. 1995, ApJS, 98, 477
* Ho et al. (1993) Ho, L. C., Filippenko, A. V., & Sargent, W. L. W. 1993, ApJ, 417, 63
* Hopkins & Elvis (2010) Hopkins, P. F. & Elvis, M. 2010, MNRAS, 401, 7
* Hopkins et al. (2006) Hopkins, P. F., Hernquist, L., Cox, T. J., et al. 2006, ApJS, 163, 1
* Husemann et al. (2013) Husemann, B., Wisotzki, L., Sánchez, S. F., & Jahnke, K. 2013, A&A, 549, A43
* Imanishi & Saito (2014) Imanishi, M. & Saito, Y. 2014, ApJ, 780, 106
* Jones et al. (2009) Jones, D. H., Read, M. A., Saunders, W., et al. 2009, MNRAS, 399, 683
* Kakkad et al. (2018) Kakkad, D., Groves, B., Dopita, M., et al. 2018, A&A, 618, A6
* Kakkad et al. (2016) Kakkad, D., Mainieri, V., Padovani, P., et al. 2016, A&A, 592, A148
* Kauffmann et al. (2003) Kauffmann, G., Heckman, T. M., Tremonti, C., et al. 2003, MNRAS, 346, 1055
* Kennicutt (1998) Kennicutt, Robert C., J. 1998, ApJ, 498, 541
* Kennicutt & De Los Reyes (2021) Kennicutt, Robert C., J. & De Los Reyes, M. A. C. 2021, ApJ, 908, 61
* Kennicutt & Evans (2012) Kennicutt, R. C. & Evans, N. J. 2012, ARA&A, 50, 531
* Kewley et al. (2013a) Kewley, L. J., Dopita, M. A., Leitherer, C., et al. 2013a, ApJ, 774, 100
* Kewley et al. (2001) Kewley, L. J., Dopita, M. A., Sutherland, R. S., Heisler, C. A., & Trevena, J. 2001, ApJ, 556, 121
* Kewley et al. (2006) Kewley, L. J., Groves, B., Kauffmann, G., & Heckman, T. 2006, MNRAS, 372, 961
* Kewley et al. (2013b) Kewley, L. J., Maier, C., Yabe, K., et al. 2013b, ApJ, 774, L10
* Kormendy & Ho (2013) Kormendy, J. & Ho, L. C. 2013, ARA&A, 51, 511
* Lamareille (2010) Lamareille, F. 2010, A&A, 509, A53
* Lynden-Bell (1969) Lynden-Bell, D. 1969, Nature, 223, 690
* Magorrian et al. (1998) Magorrian, J., Tremaine, S., Richstone, D., et al. 1998, AJ, 115, 2285
* Maiolino et al. (2017) Maiolino, R., Russell, H. R., Fabian, A. C., et al. 2017, Nature, 544, 202
* Maiolino et al. (2007) Maiolino, R., Shemmer, O., Imanishi, M., et al. 2007, A&A, 468, 979
# On the absence of shock waves and vacuum birefringence in Born–Infeld
electrodynamics
Hedvika Kadlecová ([email protected])
Institute of Physics of the ASCR, ELI–Beamlines project, Na Slovance 2, 18221 Prague, Czech Republic
###### Abstract
We study the interaction of two counter–propagating electromagnetic waves in
vacuum in the Born–Infeld electrodynamics. First we investigate the Born case
for linearly polarized beams, ${\bf E}\cdot{\bf B}=0$, i.e.
$\mathfrak{G}^{2}=0$ (crossed field configuration), which is identical for
Born–Infeld and Born electrodynamics; subsequently we study the general
Born–Infeld case for beams which are nonlinearly polarized,
$\mathfrak{G}^{2}\neq 0$.
In both cases, we show that the nonlinear field equations decouple using self-
similar solutions and investigate the shock wave formation. We show that the
only nonlinear solutions are exceptional travelling wave solutions which
propagate with constant speed and which do not turn into shocks.
In the Born case, we naturally obtain exceptional wave solutions for
counter–propagating (real photon–photon scattering) and co–propagating
(non-interacting) beam orientations, and we investigate their direction of
propagation. In the Born–Infeld case, we additionally choose the solutions
with constant phase velocities so as to match the limits of the phase
velocities of the background field in the Born case. We obtain two types of
exceptional wave solutions, then we numerically analyze which phase velocities
correspond to the counter– or co–propagating beams and subsequently we
determine the direction of propagation of the exceptional waves.
We discuss the cross–section of the process to be measured, together with our
proposed direct detection of photon–photon scattering [1, 2].
photon–photon scattering, quantum electrodynamics, nonlinear waves
###### pacs:
12.20.Ds, 41.20.Jb, 52.38.-r, 53.35.Mw, 52.38.r-, 14.70.Bh
## I Introduction
Photon–photon scattering in vacuum occurs via the creation of virtual
electron–positron pairs, resulting in vacuum polarization [3]. It is one of
the most important nonlinear processes in today’s particle physics: it breaks
the linearity of the Maxwell equations and is one of the oldest predictions
of quantum electrodynamics (QED). It is convenient to use the
Heisenberg–Euler approach in QED [4, 3, 5] to investigate this process.
The indirect measurement of this process was achieved only very recently. In
2013 [6, 7], it was proposed to look for light-by-light scattering in
ultra-peripheral heavy-ion collisions at the LHC. Off–shell photon–photon
scattering [8] was indirectly observed, with $4.4\,\sigma$ significance, in
collisions of heavy ions accelerated by standard charged-particle
accelerators. See the review article [9] and the results obtained with the
ATLAS detector at the Large Hadron Collider [10, 11] in 2017, where the
cross–section was measured and found compatible with standard-model QED
predictions [7, 12, 13, 14].
Such studies of fundamental physics are becoming possible due to the
increasing availability of high-power lasers. This raises interest in
experimental observations and motivates theoretical studies of nonlinear QED
in laser-laser
scattering [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25], the interaction of
relatively long wavelength radiation with X-ray photons [26, 27, 28],
nonlinear laser–plasma interaction [29, 30], and complex problems on the
boundary of nonlinear QED.
In the limit of extremely intense electromagnetic fields, the Maxwell
equations are modified due to the nonlinear process of photon-photon
scattering that makes the vacuum refraction index depend on the field
amplitude. Due to the nonlinearity of the field equations, the electromagnetic
field interacts with itself and generates deformations in the light cone [31].
In the context of nonlinear electrodynamics, introducing the background field
affects the propagation velocity of the electromagnetic wave and creates the
birefringence effect, i.e. the speed of wave propagation depends on the wave
polarization. The vacuum behaves as a dispersive medium in the presence of
electromagnetic waves with small but finite wavenumbers [32, 33].
The term birefringence is motivated by an analogy with the effect in
crystallography and means that the incoming light splits into two waves in
the vacuum, which serves as a medium: the ordinary wave and the extraordinary
wave. The ordinary wave propagates parallel to the optic axis with
polarization perpendicular to the optic axis and refractive index $n_{or}$;
the extraordinary wave has polarization in the direction of the optic axis
and refractive index $n_{ex}$. For example, when unpolarized light enters a
uniaxial birefringent material, it is split into two beams travelling in
different directions: the ordinary ray does not change direction, while the
extraordinary ray is refracted as it travels through the material. The
magnitude of birefringence is given by $\Delta n=n_{or}-n_{ex}$.
Birefringence has also been studied in astrophysics [34, 35]. The phenomenon
exists in all physically acceptable
nonlinear electrodynamics except for the Born–Infeld electrodynamics [36].
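As a simple numerical illustration of the magnitude $\Delta n$ (the refractive indices below are commonly quoted textbook values for calcite, assumed here purely for illustration, not data from this paper):

```python
# Birefringence magnitude Delta n = n_or - n_ex for calcite
# (textbook refractive indices, an assumption for illustration only)
n_or = 1.658   # ordinary refractive index
n_ex = 1.486   # extraordinary refractive index
delta_n = n_or - n_ex
print(round(delta_n, 3))   # 0.172
```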
Nonlinear properties of the QED vacuum in the long wavelength and low
frequency limit are described by the Heisenberg-Euler Lagrangian [4], which
describes electromagnetic fields in dispersionless media whose refraction
index depends on the electromagnetic field itself. In media where the
refraction index dependence on the field amplitude leads to the nonlinear
response, the electromagnetic wave can evolve into a configuration with
singularities [37].
The appearance of singularities in the Heisenberg–Euler electrodynamics was
noted in [38], where a singular particular solution of the equations derived
from the Heisenberg–Euler Lagrangian was obtained. In [39], wave steepening
is demonstrated by numerical integration of the nonlinear QED vacuum
electrodynamics equations.
The nonlinear properties of the QED vacuum have been extensively addressed in
a number of publications. The theoretical problem of nonlinear effects of
light propagation is considered in [32], where they study photon splitting in
an external field in the full Heisenberg–Euler theory. Other extensive
studies can be found in [40, 41, 42]. Further results on nontrivial vacua and
on curved spacetimes can be found in [43, 44, 45]. The photon splitting in
crossed electric and magnetic fields is considered, for example, in [46].
Nonlinear wave mixing in cavities is analyzed in [47]. Nonlinear interaction
between an electromagnetic pulse and a radiation background is investigated in
[48]. In the monograph [33], the vacuum birefringence phenomenon is described
within the framework of the geometrical-optics approximation using a unified
formalism. In [49], weak dispersion is incorporated into the
Heisenberg–Euler theory, and in [50] the approach used in [33] is
generalized allowing one to obtain the dispersion equation for the
electromagnetic wave frequency and wavenumber. This process, in particular,
results in decreasing the velocity of counter-propagating electromagnetic
waves. As is well known, the co-propagating waves do not change their
propagation velocity because the co-propagating photons do not interact, e.g.,
see [51].
The finite amplitude wave interaction in the QED vacuum results in high order
harmonics generation [49, 52, 53, 54, 39]. High frequency harmonics generation
can be a powerful tool to explore the physics of the nonlinear QED vacuum. The
highest harmonics can be used to probe the high energy region because they are
naturally co-propagating and allow the measurement of QED effects in the
coherent harmonic focus. High–order harmonics generation in vacuum is studied
in detail in [52, 53].
Next to the development of the Heisenberg–Euler general expression for the
quantum nonlinearities in the Lagrangian of QED [4, 55, 8], there was an
interest in a theory of QED with the upper limit on the strength of the
electromagnetic field, today known as the Born–Infeld theory, which represents
a very unique nonlinear modification of the QED Lagrangian.
The Born–Infeld electrodynamics behaves as an isotropic medium with a
polarization–independent refractive index: for a plane wave otherwise
propagating at the speed of light, homogeneous isotropic scattering reduces
its phase velocity uniformly [56]. For example, in the process of
photon–photon scattering, a counter–propagating, circularly polarized
monochromatic wave of the same helicity serves as an isotropic medium for the
other counter–propagating wave, as studied from the classical perspective in
[57].
The first attempt at a nonlinear electrodynamics of the Born–Infeld type was
made by Mie [58, 59, 60], based on the construction of a purely
electromagnetic theory of charged particles in order to obtain a model of a
classical electron. The Born–Infeld theory can be considered as a covariant
generalization of Mie’s theory, in close correspondence with the principle of
general covariance [61]. Interestingly, the nonlinear process of
photon–photon scattering is present in the Born–Infeld electrodynamics
already at the classical level; such studies were conducted by Schrödinger
[62, 57].
The Born–Infeld theory gained new interest in 1985, when it was found as a
limiting case of string theory. In [63] it was found that the Born–Infeld
Lagrangian is the exact solution of a constant Abelian external vector field
problem in the open Bose string theory with spacetime dimension $D=26$. In
[64], it was also shown that gauge fields on a D-brane are described by the
same Dirac–Born–Infeld type of Lagrangian. Interestingly, there is a duality
between the brane velocity, which is limited by the velocity of light, and
the limiting electric field in Born–Infeld theory [65]. The Born–Infeld
action [66] to second order might be obtained from higher-curvature gravity
in Kaluza–Klein theory [67, 68, 69], and it also has applications in
supersymmetry [70].
Besides the theoretical application to string theory, there is also an
interest in experimental research on the Born–Infeld theory. Next to the
search for photon–photon scattering in vacuum, there is also a need to test
QED and non–standard models like the Born–Infeld theory and scenarios
involving mini-charged particles or axion–like bosons [71]. Recently, the
PVLAS experiment [72] measured new limits on the existence of hypothetical
particles which couple to two photons, axion-like and milli-charged
particles, besides casting upper limits on the magnetic birefringence
predicted by QED. In other words, photon–photon scattering provides a tool
for the search for new physics in which new particles can participate; see
the search for the process in the X–ray region [73]. Experimental observation
and precision tests of the Born–Infeld parameter in the low-energy effective
Lagrangian still await the sensitivity necessary for its measurement in the
process of photon–photon scattering.
In the case of arbitrary polarizations, using precise phase matching when an
ultra-intense laser beam crosses a low-power beam, it is possible to propose
a set of experiments allowing either to detect photon–photon scattering or to
set new limits on the relevant parameters, which might improve by several
orders of magnitude the current constraints obtained by the PVLAS
collaboration. Thanks to the availability of PW-class lasers, a complete test
of all the parameters appearing in the low-energy effective photonic
Lagrangian, including the parameter of the Born–Infeld term, could be done
now. The experiments could be performed at HERCULES [15, 74] and the new
laser ZEUS [75] at the University of Michigan, at the new laser LUXE [76] at
DESY, and more probably at the ELI facility [77]. In the future, the $100$ PW
laser at SIOM [27] may enable a new class of precision tests of the Standard
Model and beyond.
The behaviour of shock waves in the Born–Infeld nonlinear electrodynamics has
been studied thoroughly. An early theoretical analysis was made by Boillat who
showed that both polarization modes travel along the light cone of one optical
metric in exceptional nonlinear electrodynamics like Born–Infeld’s [36, 78].
The shock waves in the Born and Born–Infeld theories were studied in [79]. The
propagation of shock waves in nonlinear theories is determined by optical
metrics and polarization conditions. They describe in general the propagation
of two differently polarized waves where the effect of birefringence is just a
special case. The two optical metrics reduce to an identical one for the
Born–Infeld electrodynamics, i.e. Born–Infeld is a special case without
birefringence.
The term exceptional means that no shocks are formed (in the sense of Lax
representation) [66, 36] and that the fields on the wavefront always satisfy
the field equations. Born-Infeld electrodynamics is called completely
exceptional and it is the only completely exceptional regular nonlinear
electrodynamics [80]. The electrodynamics shows special features such as the
absence of shock waves and birefringence. In [81], the study was extended to
the motion of more general discontinuity fronts. Considering the convexity of
the energy density, they derived relations concerning exceptional waves
(linearly degenerate) and shock fronts with discontinuities of the field.
They showed that the characteristic shocks, which are moving with the wave
velocity, are unbounded, and the shocks allow arbitrary coefficients for the
Born–Infeld electrodynamics. The discontinuities do not evolve into shocks,
but when the shock exists at some initial time it propagates on characteristic
surfaces, i.e. the Cauchy problem is well–posed. In [82], the formation of
singularities in the Born and Born–Infeld electrodynamics was studied for
plane wave–pulse motions along one spatial direction.
The general problem of shock wave development remains an open question. Quite
recently there has been some progress in three dimensions: the general
problem of shock formation was resolved by D. Christodoulou in [83], who
proved that shock waves are absent in 3D space for the Chaplygin gas, known
also as the scalar Born–Infeld theory. A complete description of the maximal
development of the initial data was provided, which sets up the problem of
continuing the solution beyond the point where it ceases to be regular. Such
solutions belong to the so-called free-boundary category of problems, which
possess the additional property that the initial data have a singular
character because of the behaviour of the solutions at the blow-up surface.
A similar problem was investigated for the case with spherical symmetry for a
barotropic fluid [84], where a complete description of the singularities
associated with the development of shocks is given in terms of smooth
functions.
Let us mention that the global existence of classical, smooth, finite–energy
solutions to the 3D case for small amplitude initial data in the
Maxwell–Born–Infeld system was proved in [85]. The Born-Infeld equations have
been solved for transverse plane waves in a rectangular waveguide. Waveguides
can be used to test nonlinear effects in electrodynamics. It was shown that
the energy velocity acquires a dependence on the amplitude and the harmonic
components appear as a consequence of the nonlinear behavior [86].
Geometrical aspects of light propagation in nonlinear electrodynamics and
propagation of light in the modified QED vacuum were investigated in [87],
where it is shown that the propagation of discontinuities of the
electromagnetic field in a nonlinear regime (in dielectrics or in modified QED
vacua) can be described in terms of an effective modification of the
Minkowskian geometry of spacetime. This property has been known in the
Born–Infeld electrodynamics for a long time [88], and investigated further in
[87]. There exists an analogy between photon propagation in nonlinear
electrodynamics and its behavior in an external gravitational field; there is
also a possibility of the existence of an electromagnetic analogue of the
gravitational black hole.
The Born–Infeld electrodynamics was also investigated in plasmas in [89],
where the behaviour of large-amplitude electrostatic waves in a cold plasma,
including linear and nonlinear waves, was studied.
Recently, we addressed the problem of nonlinear wave evolution in a quantum
vacuum in the Heisenberg-Euler approximation looking for a theoretical
description of the electromagnetic shock wave formation in a nonlinear QED
vacuum. We presented and analyzed an analytical solution of the nonlinear
field equations which describes the finite amplitude electromagnetic wave
counter–propagating to the crossed electromagnetic field, [1], i.e. two
counter–propagating waves in the QED quantum vacuum. The configuration
corresponds to the collision of the short and long wavelength electromagnetic
pulses. It may be considered as a model of interaction of the high intensity
laser pulse with the X–ray pulse generated by an XFEL. The purpose of the
study was to propose an experiment for the direct detection of photon–photon
scattering.
The solution of the field equations was found in the form of a simple wave.
The finite-amplitude evolution of the nonlinear wave solution permits
high-order harmonic generation, wave steepening and the formation of a shock
wave in the vacuum.
We found that the resulting electromagnetic wave breaking has a backward
character (also called a rarefaction wave): the wave steepens and breaks in
the backward direction. At the shock wave front, where the
approximation stops being valid, the electron–positron pairs are being created
during the Breit–Wheeler process, they are further accelerated by the
electromagnetic wave and emit gamma–ray photons. Such emission leads to the
electron–positron avalanche via the multi-photon Breit-Wheeler mechanism. In
the proposed experiment we suggest to reach realistic energies of the
electron–positron pair cross–section instead of targeting directly the
Schwinger limit $E_{S}$ and to detect the gamma–ray photons in the secondary
processes of photon–photon scattering. The proposed experiment should serve
for the direct detection of photon–photon scattering in the quantum vacuum
which was not observed before.
In the subsequent paper [2], we have widened our analysis and generalized our
study. In detail, we analyzed the wave breaking direction of the
electromagnetic wave. It depends on the strength of the electromagnetic field
$E_{0}$ (sign of $f^{\prime}$) and has forward character for weak fields and
backward shock wave character for stronger fields. The self–similar solution
was analyzed by the method of characteristics and by a perturbation method. We
have demonstrated in more detail that the solution describes high order
harmonic generation, wave steepening and formation of a shock wave.
We have also investigated new relativistic electromagnetic soliton solutions
and nonlinear waves in a quantum vacuum [90] in the same setup of two
counter–propagating electromagnetic waves. We showed that the balance between
the vacuum polarization (dispersion and diffraction) and the nonlinear
effects can produce not only the formation of one-dimensional Korteweg–de
Vries (KdV) type solitons, but also a multidimensional generalization of the
KdV solutions: the Kadomtsev–Petviashvili solitons. This type of soliton can
propagate over a large distance without changing its shape and naturally plays
an important role in experimental physics because such solitons can be
measured. These solutions have many implications from fluid mechanics to solid
state physics, plasma physics and also in quantum field theory. There also
exist electromagnetic photonic solitons in the Born–Infeld theory [91, 66].
Photonic solitons are very important for nonlinear optics. They pass through
one another without scattering.
In this paper, we investigate the problem of nonlinear wave evolution in a
quantum vacuum in another important theory, the Born–Infeld electrodynamics.
We look for a detailed theoretical description of the electromagnetic shock
wave formation, and its absence, in the nonlinear quantum vacuum. We present and
analyze analytical solutions of the Born–Infeld electrodynamics field
equations for the finite amplitude electromagnetic wave counter–propagating to
the crossed electromagnetic field, i.e. two counter–propagating
electromagnetic waves. Such configuration may correspond to the collision of a
low–frequency, very high intensity laser pulse with a high frequency X–ray
pulse generated by an XFEL. The first, long wavelength electromagnetic wave is
approximated by a constant crossed field and the derived corresponding
nonlinear field equations contain expressions for the relatively short
wavelength pulse. The solutions of the nonlinear field equations are found in
a form of the simple wave, also called the Riemann wave. We investigate the
development of the shock waves, their formation and the wave steepening, in
more detail. We show that the only nonlinear solutions of the Born–Infeld
field equations are nonlinear waves with constant phase velocities which do
not turn into shocks, the so called exceptional travelling wave solutions. We
discuss the absence of the shock formation process.
First, we investigate the field equations of the Born Lagrangian for
linearly polarized beams, which are identical to the equations for the
Born–Infeld Lagrangian in the crossed-field configuration ${\bf E}\cdot{\bf
B}=0$, i.e. $\mathfrak{G}^{2}=0$. Second, we generalize the study to the more
general case of nonlinearly polarized beams in the Born–Infeld Lagrangian:
${\bf E}\cdot{\bf B}\neq 0$ and therefore $\mathfrak{G}^{2}\neq 0$.
The paper is organized as follows: Section I serves as an introduction to our
problem and we review the current state of knowledge about vacuum
birefringence and absence of shock waves in Born–Infeld.
In Section II, we review Born–Infeld and Born electrodynamics, and their field
equations. Also we review the properties of the Born–Infeld theory such as
canonical variables and Legendre transformations, duality rotations and a
conservation law in Born–Infeld theory.
In Section III, we derive the nonlinear field equations in Born theory, we add
small amplitude perturbations and linearize the coefficients. Specifically: In
Subsection III.1, we derive the Born field equations, in Subsection III.2 we
derive the phase velocity, in Subsection III.3, we linearize the coefficients
in the equations, and in Subsection III.4 we solve the Born field equations
by assuming a solution in the form of a simple wave and show that the system
of equations decouples for the ordinary-wave case. The solution has the form of a
nonlinear wave without dispersion in the linear approximation. In Subsection
III.5, we analyze the properties of the self-similar solutions. We analyze the
solutions by the method of characteristics for two possible cases, $-$ and
$+$, corresponding to two orientations of the beams, counter–propagating and
co–propagating beams. We analyze the wave breaking and the character of the
breaking wave for the $-$ and $+$ cases. We show that the only solutions are
exceptional waves with constant phase velocities.
In Section IV, we derive the field equations for our problem in Born–Infeld
electrodynamics. Specifically:
In Subsection IV.1, we solve the field equations for our problem in
Born–Infeld electrodynamics. First, we add weak linear corrections to the
fields and perform a linearization of the coefficients around the constant
background field. We assume the solution in the form of a simple wave and show
that the system of equations decouple. The solution has the form of a
nonlinear wave without dispersion in the linear approximation. We also derive
the phase velocities for the system of equations.
In Subsection IV.2, we show that in the cases where the phase velocities are
constant, the exceptional waves are the only solutions of the equations and so
we demonstrate the absence of shock waves in the Born–Infeld theory for the
physically relevant solutions. We discuss solutions of the field equation of
type I which are similar to the solutions in Born theory. We analyze the
solutions by the method of characteristics for two possible cases, $-$ and
$+$, corresponding to two orientations of the beams, counter–propagating and
co–propagating. We discuss properties of the solutions, demonstrate that the
shock wave steepening does not take place and that only the exceptional waves
are created.
In Subsection IV.3, we discuss solutions of the field equations of type II
with non–zero right hand side where we choose the solutions for which the
phase velocities are constant.
In Subsection IV.4, we plot the numerically calculated phase velocities to
determine their values and to see whether the $-$ and $+$ cases correspond to
the counter–propagating or co–propagating beams. On this basis we discuss the
direction of propagation of the resulting nonlinear waves.
In Subsection IV.5 we summarize the properties of the solutions of the system
of field equations, their direction of movement and phase velocities for the
two possible cases, $-$ and $+$, corresponding to two orientations of the
beams (counter–propagating and co–propagating). We discuss the contribution of
the process in Born-Infeld to the cross–section of the photon–photon
scattering process.
We devote Section VI to the discussion of the experimental differentiation of
Born–Infeld and Heisenberg–Euler theories.
The main results of the paper are summarized in Section VII. The Appendices A,
B, C, D and E contain the detailed coefficients of the linearization performed
in the paper.
## II Born–Infeld and Born electrodynamics
### II.1 The Born–Infeld and Born Lagrangians
The first model of a nonlinear electrodynamics was proposed by Born [92] in
1933 with the following choice of the Lagrangian,
$\mathcal{L}_{B}=-b^{2}\left(\sqrt{1-\frac{\bf{E}^{2}-\bf{B}^{2}}{b^{2}}}-1\right),$
(1)
where ${\bf E}$ and ${\bf B}$ are the electric and magnetic fields, $b$ is
the free Born–Infeld constant (also known as the field strength parameter)
with the dimension of an electromagnetic field, and units with $c=\hbar=1$
are used. The Born theory was described in more detail in [93].
Born’s motivation was to find classical solutions representing electrically
charged particles with finite self-energy. The mechanism restricting the
particle’s velocity in relativistic mechanics to values smaller than $c$
analogously restricts the electric field in the Born theory with
$\mathcal{L}_{B}$ (1) to values smaller than the critical field $b$ (when
${\bf B}=0$) [80].
The Born theory was not satisfactory in several respects. The main difficulty
of the classical Maxwell theory, which motivated this program, is that the
self–energy of a point charge is infinite. There also remained unexplained
facts concerning the existence of elementary particles, the structure of the
nuclei, and the conversion of these particles into other particles or photons
[93]. The Born theory holds just for wavelengths close to the radius of the
electron and breaks down at shorter lengths; the electromagnetic laws needed
to be modified and the quantum laws adapted to the new field equations. The
Born–Infeld electrodynamics then corresponds to the unitarian idea, i.e., to
find classical solutions representing electrically charged particles with
finite self-energy.
A year later, the Born–Infeld electrodynamics was developed [94, 80] with the
Lagrangian given by
$\mathcal{L}_{BI}=-b^{2}\left(\sqrt{1-\frac{\bf{E}^{2}-\bf{B}^{2}}{b^{2}}-\frac{(\bf{E}\cdot\bf{B})^{2}}{b^{4}}}-1\right),$
(2)
where a new pseudoscalar invariant, the term $\mathfrak{G}=\bf{E}\cdot\bf{B}$,
was added to the Lagrangian while maintaining its relativistic covariance.
The Born and the Born–Infeld theories reduce to the linear Maxwell theory for
fields much weaker than the critical field $b$ (formally
$b\rightarrow\infty$, i.e., classical linear electrodynamics),
$\mathcal{L}_{M}=\frac{1}{2}(\bf{E}^{2}-\bf{B}^{2}).$ (3)
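This weak-field reduction can be checked symbolically (a minimal sympy sketch with scalar stand-ins for the fields and ${\bf E}\cdot{\bf B}$ replaced by the product $EB$, an illustrative assumption; $u$ plays the role of $1/b^{2}$):

```python
import sympy as sp

E, B, u = sp.symbols('E B u', positive=True)  # u plays the role of 1/b**2

# Born-Infeld Lagrangian (Eq. 2) with scalar stand-ins and b**2 -> 1/u
L_BI = -(1/u) * (sp.sqrt(1 - (E**2 - B**2)*u - (E*B)**2 * u**2) - 1)

# The b -> infinity (u -> 0) limit recovers the Maxwell Lagrangian (Eq. 3)
maxwell_limit = sp.limit(L_BI, u, 0)   # equals (E**2 - B**2)/2
print(maxwell_limit)

# Leading nonlinear correction, of order 1/b**2
correction = sp.limit((L_BI - maxwell_limit) / u, u, 0)
print(sp.expand(correction))           # equals (E*B)**2/2 + (E**2 - B**2)**2/8
```

The leading term is exactly $\mathcal{L}_{M}$; the first correction is the lowest-order nonlinearity of the theory in this scalar sketch.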
The Born–Infeld theory is a unique nonlinear theory of the electromagnetic
field because it is the only one which does not lead to a birefringence
effect: the propagation velocities in all directions do not depend on the
wave polarization, i.e. the velocity of light in the Born–Infeld theory does
not depend on its polarization. The Maxwell theory and the nonlinear
electrodynamics of Born and Infeld are the only relativistic theories in
which this holds true [32].
### II.2 The field equations of Born–Infeld electrodynamics
The field equations for the Born–Infeld Lagrangian $\mathcal{L}_{BI}$ (2) are
given by
$\partial_{\mu}\left(\frac{\partial\mathcal{L}_{BI}}{\partial(\partial_{\mu}{\Phi})}\right)-\frac{\partial{\mathcal{L}_{BI}}}{\partial\Phi}=0,$
(4)
where
$\Phi=(-\phi,\bf{A}).$ (5)
Every theory of electrodynamic type is described by the source-free Maxwell
equations. The first pair of Maxwell field equations reads
$\nabla\cdot{\bf B}=0,\qquad\nabla\times{\bf E}=-\partial_{t}{\bf B}.$ (6)
The second pair can be found by varying the Lagrangian $\mathcal{L}_{BI}$ (2),
which gives the field equations. The second pair of equations can be written
as
$\nabla\times{\bf H}=\partial_{t}{\bf D},\qquad\nabla\cdot{\bf D}=0,$ (7)
together with the nonlinear constitutive relations,
${\bf E}={\bf E}\,({\bf D},{\bf B}),\qquad{\bf H}={\bf H}\,({\bf D},{\bf B}).$ (8)
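A sketch of what such constitutive relations look like (again with scalar stand-ins for the field magnitudes and ${\bf E}\cdot{\bf B}\to EB$, an illustrative assumption rather than the full vector relations): differentiating the Born–Infeld Lagrangian yields $D(E,B)$ and $H(E,B)$, which can then be inverted to the form of Eq. (8).

```python
import sympy as sp

E, B, b = sp.symbols('E B b', positive=True)

# Born-Infeld Lagrangian (Eq. 2) with scalar stand-ins, E.B -> E*B
R = sp.sqrt(1 - (E**2 - B**2)/b**2 - (E*B)**2/b**4)
L_BI = -b**2 * (R - 1)

# Constitutive relations: D = dL/dE and H = -dL/dB
D = sp.simplify(sp.diff(L_BI, E))   # equals E*(1 + B**2/b**2)/R
H = sp.simplify(-sp.diff(L_BI, B))  # equals B*(1 - E**2/b**2)/R

# Maxwell limit b -> infinity: D -> E and H -> B
assert sp.limit(D, b, sp.oo) == E
assert sp.limit(H, b, sp.oo) == B
```

In the Maxwell limit the relations become linear, as expected from Eq. (3).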
The choice of ${\bf D}$ and ${\bf B}$ as the canonical pair of variables
leads to a consistent formulation of the nonlinear theory. The consistency of
the above equations and their relativistic covariance are guaranteed by the
existence of an invariant action principle: a scalar Lagrangian density
$\mathcal{L}_{BI}$, which may be any function of the scalar invariant
$\mathfrak{F}$ and the pseudoscalar invariant $\mathfrak{G}$ of the
electromagnetic field tensor, the so-called Poincaré invariants, so that it
can be expressed as $\mathcal{L}_{BI}(\mathfrak{F},\mathfrak{G})$ [95],
$\displaystyle\mathfrak{F}$
$\displaystyle=\frac{1}{4}F_{\mu\nu}F^{\mu\nu}=\frac{1}{2}\left({\bf
B}^{2}-{\bf E}^{2}\right),$ $\displaystyle\mathfrak{G}$
$\displaystyle=\frac{1}{4}F_{\mu\nu}\tilde{F}^{\mu\nu}={\bf E}\cdot{\bf B},$
(9) $\displaystyle\tilde{F}^{\mu\nu}$
$\displaystyle=\frac{1}{2}\varepsilon^{\mu\nu\rho\sigma}F_{\rho\sigma},$
where $\varepsilon^{\mu\nu\rho\sigma}$ is the Levi-Civita symbol in four
dimensions. The equations (6) follow from the assumption of existence of
potentials. Equations (7) follow from varying the Lagrange function
$\mathcal{L}_{BI}(\mathfrak{F},\mathfrak{G})$. The equations have a form in
relativistic tensor notation (Bianchi identities
$\partial_{[\mu}F_{\nu\lambda]}=0$):
$\displaystyle\partial_{\mu}F_{\nu\lambda}+\partial_{\lambda}F_{\mu\nu}+\partial_{\nu}F_{\lambda\mu}$
$\displaystyle=0,$ $\displaystyle\partial_{\mu}h^{\mu\nu}$ $\displaystyle=0,$
where
$h^{\mu\nu}=\frac{\partial\mathcal{L}}{\partial
F_{\mu\nu}}=\frac{\partial\mathcal{L}}{\partial\mathfrak{F}}F^{\mu\nu}+\frac{\partial\mathcal{L}}{\partial\mathfrak{G}}\tilde{F}^{\mu\nu}.$
(10)
The Born (1) and the Born–Infeld (2) Lagrangians can be rewritten in terms of
Poincaré invariants as
$\mathcal{L}_{B}=-b^{2}\left(\sqrt{1+\frac{2\mathfrak{F}}{b^{2}}}-1\right),$
(11)
and
$\mathcal{L}_{BI}=-b^{2}\left(\sqrt{1+\frac{2\mathfrak{F}}{b^{2}}-\frac{\mathfrak{G}^{2}}{b^{4}}}-1\right).$
(12)
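As a quick symbolic sanity check of the limits discussed above, the reduction of (12) to (11) in a crossed-field configuration and of both to the Maxwell Lagrangian for weak fields can be verified; the following sketch uses sympy (variable names are ours, not from the paper):

```python
import sympy as sp

F, G = sp.symbols('F G', real=True)
b = sp.symbols('b', positive=True)

L_B  = -b**2*(sp.sqrt(1 + 2*F/b**2) - 1)              # Eq. (11)
L_BI = -b**2*(sp.sqrt(1 + 2*F/b**2 - G**2/b**4) - 1)  # Eq. (12)

# Crossed fields (G = E.B = 0): Born-Infeld reduces to Born
assert sp.simplify(L_BI.subs(G, 0) - L_B) == 0

# Weak fields: the leading order is the Maxwell Lagrangian -F = (E^2-B^2)/2
assert sp.series(L_B, F, 0, 2).removeO() == -F
assert sp.series(L_BI.subs(G, 0), F, 0, 2).removeO() == -F
```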
Born–Infeld electrodynamics is called completely exceptional, and it is the
only completely exceptional regular nonlinear electrodynamics [80]. It shows
special features such as the absence of shock waves and of birefringence.
### II.3 Legendre transformations and Duality rotations
We choose ${\mathcal{L}_{BI}(\bf E,\bf B)}$ to depend on the pair of
variables $(\bf E,\bf B)$. We could equally choose three other pairs from the
variables ${\bf E},{\bf B},{\bf D}$ and ${\bf H}$ as the independent set.
Transitions between these choices can be described in analogy with Legendre
transformations.
The dependent variables ${\bf D}$ and ${\bf H}$ are determined from the
constitutive relations,
$\displaystyle{\bf H}$
$\displaystyle=-\frac{\partial{\mathcal{L}_{BI}}}{\partial{\bf B}},\quad{\bf
D}=\frac{\partial{\mathcal{L}_{BI}}}{\partial{\bf E}}.$ (13)
By exchanging the dependence of $\mathcal{L}_{BI}$ on the other three
combinations of two independent variable pairs from ${\bf E},\,{\bf B}$, ${\bf
D}$ and ${\bf H}$, we can get the remaining Legendre transformations, see
[80].
We can rewrite the Lagrangian as
$\displaystyle\mathcal{L}_{BI}$ $\displaystyle=b^{2}(-l+1),$ (14)
$\displaystyle l$
$\displaystyle=\sqrt{1-\frac{\bf{E}^{2}-\bf{B}^{2}}{b^{2}}-\frac{(\bf{E}\cdot\bf{B})^{2}}{b^{4}}}.$
(15)
For our choice, ${\mathcal{L}_{BI}(\bf E,\bf B)}$, we obtain the constitutive
relations, using (13):
$\displaystyle{\bf H}$ $\displaystyle=\frac{1}{l}\bigg{(}{\bf
B}-\frac{1}{b^{2}}(\bf{E}\cdot\bf{B}){\bf E}\bigg{)},$ (16) $\displaystyle{\bf
D}$ $\displaystyle=\frac{1}{l}\bigg{(}{\bf
E}+\frac{1}{b^{2}}(\bf{E}\cdot\bf{B}){\bf B}\bigg{)}.$ (17)
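The constitutive relations (16) and (17), together with the duality identity (19) quoted below, follow by direct differentiation of (14)–(15); a symbolic check (a sketch using sympy, with component names of our own choosing) is:

```python
import sympy as sp

b = sp.symbols('b', positive=True)
E = sp.Matrix(sp.symbols('E1 E2 E3', real=True))
B = sp.Matrix(sp.symbols('B1 B2 B3', real=True))

EB = E.dot(B)
l = sp.sqrt(1 - (E.dot(E) - B.dot(B))/b**2 - EB**2/b**4)  # Eq. (15)
L_BI = b**2*(1 - l)                                       # Eq. (14)

# Constitutive relations, Eq. (13)
D = sp.Matrix([sp.diff(L_BI, e) for e in E])
H = sp.Matrix([-sp.diff(L_BI, c) for c in B])

# They match the closed forms (17) and (16) ...
assert all(sp.simplify(x) == 0 for x in D - (E + EB/b**2*B)/l)
assert all(sp.simplify(x) == 0 for x in H - (B - EB/b**2*E)/l)
# ... and satisfy the duality identity (19), E.B = D.H
assert sp.simplify(EB - D.dot(H)) == 0
```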
Nonlinear electrodynamics generally has no internal symmetries, but
Born–Infeld electrodynamics possesses an additional conservation law for some
choices of the Lagrangian. The symmetry is the invariance of the field
equations under duality rotations of the canonical fields ${\bf D}$ and ${\bf
B}$ (a Hodge duality rotation through an angle $\theta$), also called
$\gamma$-invariance [96, 80, 97],
$\displaystyle{\bf D}+i{\bf B}$
$\displaystyle=e^{i\theta}({\bf\bar{D}}+i{\bf\bar{B}}),$ $\displaystyle{\bf
E}+i{\bf H}$ $\displaystyle=e^{i\theta}({\bf\bar{E}}+i{\bf\bar{H}}),$ (18)
which is a canonical transformation, not an internal symmetry, and at the
same time a symmetry transformation of the field equations. The generator of
the duality rotations is an important constant of the motion, the total
charge, and it is also the generator of the phase transformations of the
field.
The duality rotations (18) lead to the identity in Born–Infeld
electrodynamics,
${\bf E}\cdot{\bf B}={\bf D}\cdot{\bf H}.$ (19)
## III Born field equations
In this section, we will derive and analyze the field equations in the set up
of two counter–propagating waves in vacuum in the Born electrodynamics.
For the sake of brevity, we consider the two counter–propagating
electromagnetic waves to be of the same polarization. We will work in the
orthogonal coordinate system, $(x,y,z)$, where the two waves propagate along
the $x$-axis. We assume the components of the waves as ${\bf E}=(0,0,E_{z})$
and ${\bf B}=(0,B_{y},0)$, so that the invariant $\mathfrak{G}={\bf
E}\cdot{\bf B}$ vanishes. This is usually called a crossed field
configuration. In this case the Born–Infeld Lagrangian (2) reduces to the
Born Lagrangian (1), hence the analysis can be done in the Born
electrodynamics. We have studied
the problem in this configuration in our previous work, [1, 2, 90].
In our setup, we investigate only the ordinary wave of the birefringence
effect. Studies that also include the extraordinary wave and the nonlinear
wave evolution in the full Born–Infeld electrodynamics require nonlinearly
polarized beams with $\mathfrak{G}={\bf E}\cdot{\bf B}\neq 0$; these are
investigated in the next section within Born–Infeld electrodynamics.
The idea is to use our knowledge about solving the field equations in the
Heisenberg–Euler approximation of the two counter–propagating electromagnetic
waves to solve the field equations in the Born and subsequently in the
Born–Infeld electrodynamics, see Section IV.
Let us mention that the term $\mathfrak{G}^{2}=({\bf E}\cdot{\bf B})^{2}$ can
be neglected in situations where the interaction is far away from
singularities [93]. By our choice of the crossed field configuration, the
interaction therefore takes place far away from the creation of shock wave
fronts (singularities).
### III.1 Derivation of Born field equations
The field equations were found by varying the Lagrangian (1) with respect to
the potential ${\bf A}$ (the first set comes from the set of equations (6) and
the second set from equations (7)):
$\partial_{t}B_{y}-\partial_{x}E_{z}=0,$ (20)
$-\left[1+\frac{E^{2}_{z}}{b^{2}}\frac{1}{1-(E^{2}_{z}-B^{2}_{y})/b^{2}}\right]\partial_{t}E_{z}+\left[1-\frac{B^{2}_{y}}{b^{2}}\frac{1}{1-(E^{2}_{z}-B^{2}_{y})/b^{2}}\right]\partial_{x}B_{y}+\frac{1}{b^{2}}\frac{E_{z}B_{y}}{1-(E^{2}_{z}-B^{2}_{y})/b^{2}}\,(\partial_{t}B_{y}+\partial_{x}E_{z})=0,$
(21)
where we denote $E_{z}\equiv E$ and $B_{y}\equiv B$, and the condition
$1-(E^{2}-B^{2})/b^{2}>0$ must hold.
Subsequently, we add the small amplitude perturbation to the fields,
$\displaystyle E$ $\displaystyle=E_{0}+a_{z}(x,t),$ $\displaystyle B$
$\displaystyle=B_{0}+b_{y}(x,t),$ (22)
where the fields $E_{0},B_{0}$ represent the constant electromagnetic
background field and $a_{z}(x,t)$, $b_{y}(x,t)$ are perturbations. The
equations (20, 21) can be rewritten (using the expressions (22)) in the
following form:
$\displaystyle\partial_{t}b_{y}(x,t)$ $\displaystyle=\partial_{x}a_{z}(x,t),$
(23) $\displaystyle\alpha\,\partial_{t}a_{z}(x,t)$
$\displaystyle-\beta\,[\partial_{x}a_{z}(x,t)+\partial_{t}b_{y}(x,t)]-\gamma\,\partial_{x}b_{y}(x,t)=0,$
(24)
where the coefficients $\alpha,\beta$ and $\gamma$ become,
$\displaystyle\alpha$
$\displaystyle=1+\frac{(E_{0}+a_{z})^{2}}{b^{2}}\frac{1}{\left(1-\cfrac{1}{b^{2}}\left[(E_{0}+a_{z})^{2}-(B_{0}+b_{y})^{2}\right]\right)},$
$\displaystyle\beta$
$\displaystyle=\frac{1}{b^{2}}\frac{(E_{0}+a_{z})(B_{0}+b_{y})}{\left(1-\cfrac{1}{b^{2}}\left[(E_{0}+a_{z})^{2}-(B_{0}+b_{y})^{2}\right]\right)},$
(25) $\displaystyle\gamma$
$\displaystyle=1-\frac{(B_{0}+b_{y})^{2}}{b^{2}}\frac{1}{\left(1-\cfrac{1}{b^{2}}\left[(E_{0}+a_{z})^{2}-(B_{0}+b_{y})^{2}\right]\right)}.$
### III.2 Derivation of the phase velocity
Here we derive the coefficients of the background field in order to
calculate the phase velocities. We assume that $a_{z}(x,t)=b_{y}(x,t)=0$ and
obtain from Eqs. (25) that
$\displaystyle\alpha_{0}$
$\displaystyle=\frac{1+\frac{B^{2}_{0}}{b^{2}}}{\left(1-\cfrac{1}{b^{2}}(E^{2}_{0}-B^{2}_{0})\right)},$
$\displaystyle\beta_{0}$
$\displaystyle=\frac{E_{0}B_{0}}{b^{2}}\frac{1}{\left(1-\cfrac{1}{b^{2}}(E^{2}_{0}-B^{2}_{0})\right)},$
(26) $\displaystyle\gamma_{0}$
$\displaystyle=\frac{1-\frac{E^{2}_{0}}{b^{2}}}{\left(1-\cfrac{1}{b^{2}}(E^{2}_{0}-B^{2}_{0})\right)}.$
Furthermore, for the crossed field case we choose $B_{0}=E_{0}$ for
simplicity and obtain
$\displaystyle\alpha_{0}$ $\displaystyle=1+\frac{E^{2}_{0}}{b^{2}},$
$\displaystyle\beta_{0}$ $\displaystyle=\frac{E^{2}_{0}}{b^{2}},$ (27)
$\displaystyle\gamma_{0}$ $\displaystyle=1-\frac{E^{2}_{0}}{b^{2}}.$
In order to find the wave phase velocity from the linearized equations (23)
and (24), we look for solutions of the form:
$a_{z}\propto\exp(-i\omega t+iqx),\;\;b_{y}\propto\exp(-i\omega t+iqx),$ (28)
where $q$ is the wave number and $\omega$ is the frequency. Substituting (28)
into the field equations (23, 24) for the background field with
$\alpha=\alpha_{0},\beta=\beta_{0}$ and $\gamma=\gamma_{0}$, (26), we obtain
an algebraic set of equations for the wave velocity $v={\omega}/q$. Since the
Born–Infeld medium is dispersionless, the phase velocity, $v_{ph}=\omega/q$,
and the group velocity, $v_{g}=\partial{\omega}/\partial{q}$, are equal:
$v=v_{ph}=v_{g}$.
Then we obtain the set of equations,
$\displaystyle a_{z}+vb_{y}$ $\displaystyle=0,$ $\displaystyle
v(b_{y}\beta_{0}-a_{z}\alpha_{0})-(a_{z}\beta_{0}+b_{y}\gamma_{0})$
$\displaystyle=0,$ (29)
which has two solutions,
$\displaystyle v_{1,2}$
$\displaystyle=\frac{-\beta_{0}\pm\sqrt{\beta^{2}_{0}+\alpha_{0}\gamma_{0}}}{\alpha_{0}}.$
(30)
The expression under the square root can be simplified (using (26)) as
$\displaystyle\beta^{2}_{0}+\alpha_{0}\gamma_{0}=\cfrac{1}{\left(1-\cfrac{1}{b^{2}}(E^{2}_{0}-B^{2}_{0})\right)},$
(31)
which results for the crossed field ($B_{0}=E_{0}$, (27)) in
$\beta^{2}_{0}+\alpha_{0}\gamma_{0}=1.$ (32)
The velocities (30) can be simplified by using the expressions (27) and (32),
$\displaystyle v_{1,2}$ $\displaystyle=\frac{-\beta_{0}\pm 1}{\alpha_{0}},$
(33)
then we find the velocities,
$\displaystyle v_{1}$ $\displaystyle=-1,$ (34) $\displaystyle v_{2}$
$\displaystyle=\frac{\gamma_{0}}{\alpha_{0}}=\frac{1-\cfrac{E^{2}_{0}}{b^{2}}}{1+\cfrac{E^{2}_{0}}{b^{2}}}.$
(35)
The phase velocities $v=v_{1,2}$ are the velocities for the wave propagation
over the background crossed field in the Born theory. The solution $v_{1}$
corresponds to the co-propagating waves case and the solution $v_{2}$
corresponds to the case of counter–propagating waves whose velocity is lower
than speed of light $c$.
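The short algebra leading from (27) through (32) to the velocities (34) and (35) is easy to slip on; it can be confirmed symbolically, for instance with the following sympy sketch (symbol names ours):

```python
import sympy as sp

E0, b = sp.symbols('E_0 b', positive=True)
e = E0**2/b**2

# Crossed-field background coefficients, Eq. (27)
alpha0, beta0, gamma0 = 1 + e, e, 1 - e

# Identity (32)
assert sp.simplify(beta0**2 + alpha0*gamma0 - 1) == 0

# The two roots (30) reproduce v1 = -1, Eq. (34), and v2, Eq. (35)
v1 = (-beta0 - 1)/alpha0
v2 = (-beta0 + 1)/alpha0
assert sp.simplify(v1 + 1) == 0
assert sp.simplify(v2 - (1 - e)/(1 + e)) == 0
```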
The phase velocity diminishes as the background field $E_{0}$ grows relative
to the field strength parameter $b$. In the limit $b\rightarrow\infty$, which
leads to the linear Maxwell theory, the phase velocity $v_{2}\rightarrow 1$.
The obtained result is used further as a limit case for the background
crossed field.
### III.3 Linearization of the coefficients in the equations
Now, we perform the linearization of the coefficients $\alpha,\beta$ and
$\gamma$ about the constant background field,
$\displaystyle\alpha$
$\displaystyle=\alpha_{0}+\alpha_{a_{z}}a_{z}+\alpha_{b_{y}}b_{y},$
$\displaystyle\beta$ $\displaystyle=\beta_{0}+\beta_{a_{z}}a_{z}+\beta_{b_{y}}b_{y},$
(36) $\displaystyle\gamma$
$\displaystyle=\gamma_{0}+\gamma_{a_{z}}a_{z}+\gamma_{b_{y}}b_{y},$
where we have denoted,
$\displaystyle\alpha_{a_{z}}$
$\displaystyle=(\partial_{a_{z}}{\alpha})|_{a_{z},b_{y}=0},\quad\alpha_{b_{y}}=(\partial_{b_{y}}{\alpha})|_{a_{z},b_{y}=0},$
$\displaystyle\beta_{a_{z}}$
$\displaystyle=(\partial_{a_{z}}{\beta})|_{a_{z},b_{y}=0},\quad\beta_{b_{y}}=(\partial_{b_{y}}{\beta})|_{a_{z},b_{y}=0},$
(37) $\displaystyle\gamma_{a_{z}}$
$\displaystyle=(\partial_{a_{z}}{\gamma})|_{a_{z},b_{y}=0},\quad\gamma_{b_{y}}=(\partial_{b_{y}}{\gamma})|_{a_{z},b_{y}=0}.$
Next, we need to expand the following expression
$g(a_{z},b_{y})=\frac{1}{1-\cfrac{1}{b^{2}}\left[(E_{0}+a_{z})^{2}-(B_{0}+b_{y})^{2}\right]},$
(38)
into a Taylor series in the two variables $a_{z},b_{y}$ around the point
$(a_{z},b_{y})=(0,0)$. Using this expansion, the parameters $\alpha$, $\beta$
and $\gamma$ become:
$\displaystyle\alpha$
$\displaystyle=1+\frac{(E_{0}+a_{z})^{2}}{b^{2}}g(a_{z},b_{y}),$
$\displaystyle\beta$
$\displaystyle=\frac{1}{b^{2}}(E_{0}+a_{z})(B_{0}+b_{y})g(a_{z},b_{y}),$ (39)
$\displaystyle\gamma$
$\displaystyle=1-\frac{(B_{0}+b_{y})^{2}}{b^{2}}g(a_{z},b_{y}).$
We perform the Taylor series for $B_{0}=E_{0}$, the crossed field
configuration. The expansion then becomes
$\displaystyle g(a_{z},b_{y})$ $\displaystyle\approx
1+\frac{2E_{0}}{b^{2}}(a_{z}-b_{y})$ (40) $\displaystyle+$
$\displaystyle\frac{1}{b^{4}}\left[(4E^{2}_{0}-b^{2})a_{z}^{2}-8E^{2}_{0}a_{z}b_{y}+(4E^{2}_{0}+b^{2})b^{2}_{y}\right].$
In the following text, we use just the first two (linear) terms of (40) in
the linearization. We identify the coefficients
$\alpha_{a_{z}},\beta_{a_{z}},\gamma_{a_{z}}$ and
$\alpha_{b_{y}},\beta_{b_{y}},\gamma_{b_{y}}$ in the general formulae for
$\alpha,\beta,\gamma$ (25), for the special choice $B_{0}=E_{0}$ of the
crossed field. The coefficients (37) take the final form,
$\displaystyle\alpha_{a_{z}}$
$\displaystyle=\frac{2E_{0}}{b^{2}}\left(1+\frac{E_{0}^{2}}{b^{2}}\right),\quad\alpha_{b_{y}}=-2\frac{E^{3}_{0}}{b^{4}},$
$\displaystyle\beta_{a_{z}}$
$\displaystyle=\frac{E_{0}}{b^{2}}\left(1+\frac{2E^{2}_{0}}{b^{2}}\right),\quad\beta_{b_{y}}=\frac{E_{0}}{b^{2}}\left(1-\frac{2E^{2}_{0}}{b^{2}}\right),$
(41) $\displaystyle\gamma_{a_{z}}$
$\displaystyle=-2\frac{E^{3}_{0}}{b^{4}},\quad\gamma_{b_{y}}=\frac{2E_{0}}{b^{2}}\left(\frac{E_{0}^{2}}{b^{2}}-1\right).$
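The first derivatives in (41) can be checked by differentiating (25) directly at the background; a sympy sketch (assuming the crossed field $B_{0}=E_{0}$, symbol names ours):

```python
import sympy as sp

E0, b = sp.symbols('E_0 b', positive=True)
az, by = sp.symbols('a_z b_y', real=True)

E, B = E0 + az, E0 + by            # crossed field, B0 = E0
g = 1/(1 - (E**2 - B**2)/b**2)     # Eq. (38)

alpha = 1 + E**2/b**2*g            # Eq. (25)
beta  = E*B/b**2*g
gamma = 1 - B**2/b**2*g

at0 = {az: 0, by: 0}
# First derivatives at the background reproduce Eq. (41)
assert sp.simplify(sp.diff(alpha, az).subs(at0) - 2*E0/b**2*(1 + E0**2/b**2)) == 0
assert sp.simplify(sp.diff(alpha, by).subs(at0) + 2*E0**3/b**4) == 0
assert sp.simplify(sp.diff(beta, az).subs(at0) - E0/b**2*(1 + 2*E0**2/b**2)) == 0
assert sp.simplify(sp.diff(beta, by).subs(at0) - E0/b**2*(1 - 2*E0**2/b**2)) == 0
assert sp.simplify(sp.diff(gamma, az).subs(at0) + 2*E0**3/b**4) == 0
assert sp.simplify(sp.diff(gamma, by).subs(at0) - 2*E0/b**2*(E0**2/b**2 - 1)) == 0
```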
### III.4 Born self–similar solutions
In this subsection, we solve the field equations for the Born electrodynamics.
We approach solving the nonlinear equations using a Riemann wave (simple wave)
which is well known in nonlinear wave theory [98, 99, 100]. We have solved the
field equations using the simple wave in the Heisenberg–Euler approximation in
[1, 2]. Thanks to the similar structure of the field equations, we obtain
similar solutions, but a difference comes from the Born Lagrangian (1) in the
form of different constant coefficients $\alpha_{a_{z}},\alpha_{b_{y}}$,
$\beta_{a_{z}},\beta_{b_{y}}$ and $\gamma_{a_{z}},\gamma_{b_{y}}$ (41).
We start with the field equations (23, 24) with parameter functions
$\alpha(a_{z},b_{y}),\beta(a_{z},b_{y})$ and $\gamma(a_{z},b_{y})$ (25) in the
linear approximation (36).
#### III.4.1 Self–similar solutions
We are assuming the relation $b_{y}=b_{y}(a_{z})$, $\partial_{t}b_{y}=({\rm
d}b_{y}/{\rm d}a_{z})\partial_{t}a_{z}$, and $\partial_{x}b_{y}=({\rm
d}b_{y}/{\rm d}a_{z})\partial_{x}a_{z}$. The field equations (23, 24) become:
$\displaystyle\partial_{t}a_{z}$ $\displaystyle=\frac{{\rm d}a_{z}}{{\rm
d}b_{y}}\partial_{x}a_{z},$ (42) $\displaystyle\partial_{t}a_{z}$
$\displaystyle=\frac{1}{\alpha}\left(2\beta+\gamma\frac{{\rm d}b_{y}}{{\rm
d}a_{z}}\right)\partial_{x}a_{z}.$ (43)
When we compare the two equations above, we get a quadratic equation for the
function $b_{y}(a_{z})$ in the form
$\displaystyle\gamma\left(\frac{{\rm d}b_{y}}{{\rm
d}a_{z}}\right)^{2}+2\beta\frac{{\rm d}b_{y}}{{\rm d}a_{z}}-\alpha=0,$ (44)
which has the two solutions
$\displaystyle\left(\frac{{\rm d}b_{y}}{{\rm
d}a_{z}}\right)=\frac{-\beta\pm\sqrt{\beta^{2}+\alpha\gamma}}{\gamma}.$ (45)
Furthermore, we use the weak, finite-amplitude approximation and assume the
solution in the form
$\displaystyle\left(\frac{{\rm d}b_{y}}{{\rm d}a_{z}}\right)=\nu,$ (46)
where we assume $\nu$ in the linearized form as
$\nu=\nu_{0}+\nu_{a_{z}}a_{z}+\nu_{b_{y}}b_{y},$ (47)
with new parameters $\nu_{0}$, $\nu_{a_{z}}$ and $\nu_{b_{y}}$, which are
derived later. For the two solutions for ${\rm d}b_{y}/{\rm d}a_{z}$ (45), we
obtain two sets of parameters $\nu_{0}$, $\nu_{a_{z}}$ and $\nu_{b_{y}}$. We
discuss both of them in Subsection III.5, where we investigate the wave
steepening of the separate solutions.
In the following calculation, we use the first-order (tangent plane)
expansion of $f$ at the point $(\alpha_{0},\beta_{0},\gamma_{0})$,
$\displaystyle
f(\alpha,\beta,\gamma)=f(\alpha,\beta,\gamma)|_{\alpha_{0},\beta_{0},\gamma_{0}}+\partial_{\alpha}{f}|_{\alpha_{0},\beta_{0},\gamma_{0}}(\alpha-\alpha_{0})$
$\displaystyle+$
$\displaystyle\partial_{\beta}{f}|_{\alpha_{0},\beta_{0},\gamma_{0}}(\beta-\beta_{0})+\partial_{\gamma}{f}|_{\alpha_{0},\beta_{0},\gamma_{0}}(\gamma-\gamma_{0}),$
(48)
where ${{\rm d}b_{y}}/{{\rm d}a_{z}}=f(\alpha,\beta,\gamma)$.
We obtain the resulting coefficients as
$\displaystyle\nu_{0}=$ $\displaystyle
f|_{\alpha_{0},\beta_{0},\gamma_{0}}=\frac{-\beta_{0}\pm 1}{\gamma_{0}},$ (49)
and
$\displaystyle\partial_{\alpha}{f}|_{\alpha_{0},\beta_{0},\gamma_{0}}$
$\displaystyle=\pm\frac{1}{2},$ (50)
$\displaystyle\partial_{\beta}{f}|_{\alpha_{0},\beta_{0},\gamma_{0}}$
$\displaystyle=\frac{1}{\gamma_{0}}\left(-1\pm\beta_{0}\right),$ (51)
$\displaystyle\partial_{\gamma}{f}|_{\alpha_{0},\beta_{0},\gamma_{0}}$
$\displaystyle=\pm\frac{\alpha_{0}}{2\gamma_{0}}-\frac{\left(-\beta_{0}\pm
1\right)}{\gamma_{0}^{2}},$ (52)
where we can rewrite the expressions (36) as
$\displaystyle\alpha-\alpha_{0}$
$\displaystyle=\alpha_{a_{z}}a_{z}+\alpha_{b_{y}}b_{y},$
$\displaystyle\beta-\beta_{0}$
$\displaystyle=\beta_{a_{z}}a_{z}+\beta_{b_{y}}b_{y},$ (53)
$\displaystyle\gamma-\gamma_{0}$
$\displaystyle=\gamma_{a_{z}}a_{z}+\gamma_{b_{y}}b_{y},$
and we have used the relation
$\beta^{2}_{0}+\alpha_{0}\gamma_{0}=1.$ (54)
The linear coefficients $\nu_{0}$, $\nu_{a_{z}}$ and $\nu_{b_{y}}$ then take
the final form,
$\displaystyle\nu_{0}=$ $\displaystyle f|_{\alpha_{0},\beta_{0},\gamma_{0}},$
$\displaystyle\nu_{a_{z}}=$
$\displaystyle\alpha_{a_{z}}f_{\alpha}+\beta_{a_{z}}f_{\beta}+\gamma_{a_{z}}f_{\gamma},$
(55) $\displaystyle\nu_{b_{y}}=$
$\displaystyle\alpha_{b_{y}}f_{\alpha}+\beta_{b_{y}}f_{\beta}+\gamma_{b_{y}}f_{\gamma},$
where we denoted the derivatives
$f_{\alpha}=\partial_{\alpha}{f}|_{\alpha_{0},\beta_{0},\gamma_{0}},\,f_{\beta}=\partial_{\beta}{f}|_{\alpha_{0},\beta_{0},\gamma_{0}},\,f_{\gamma}=\partial_{\gamma}{f}|_{\alpha_{0},\beta_{0},\gamma_{0}}.$
The explicit expressions for $f_{\alpha},f_{\beta},f_{\gamma}$, by using
expressions (49), (50), (51) and (52), have a form:
$\displaystyle f_{\alpha}$ $\displaystyle=\pm\frac{1}{2},$ $\displaystyle
f_{\beta}$ $\displaystyle=\frac{1}{\gamma_{0}}\left(-1\pm\beta_{0}\right),$
(56) $\displaystyle f_{\gamma}$
$\displaystyle=\pm\frac{\alpha_{0}}{2\gamma_{0}}-\left(\frac{-\beta_{0}\pm
1}{\gamma^{2}_{0}}\right).$
The problem reduces to finding a solution of the differential equation (46).
With the linearized $\nu$ (47), it is a linear first-order ODE, which can be
solved with the integrating factor $m(a_{z})=\exp(-\nu_{b_{y}}a_{z})$.
The relation $b_{y}=b_{y}(a_{z})$, which solves the equation, has a structure,
$\frac{1}{\nu_{b_{y}}}\exp{(-\nu_{b_{y}}a_{z})}\left((\nu_{0}+\nu_{b_{y}}b_{y})+\frac{\nu_{a_{z}}}{\nu_{b_{y}}}(\nu_{b_{y}}a_{z}+1)\right)=\delta,$
(57)
where $\delta$ is an arbitrary constant. We can rewrite it and obtain the
function $b_{y}=b_{y}(a_{z})$ explicitly:
$b_{y}=\delta\,\exp(\nu_{b_{y}}a_{z})-\frac{\nu_{a_{z}}}{\nu^{2}_{b_{y}}}(\nu_{b_{y}}a_{z}+1)-\frac{\nu_{0}}{\nu_{b_{y}}}.$
(58)
We determine the constant $\delta$ from the initial condition
$b_{y}|_{a_{z}=0}=0$,
$\delta=\frac{\nu_{a_{z}}+\nu_{0}\nu_{b_{y}}}{\nu^{2}_{b_{y}}}.$ (59)
In order to stay within the weak amplitude approximation, we perform a Taylor
expansion of the exponential in (58) to first order,
$\exp{(\nu_{b_{y}}a_{z})}\approx 1+\nu_{b_{y}}a_{z}+\dots$ (60)
which simplifies (58) to
$b_{y}=\delta\,(\nu_{b_{y}}a_{z}+1)-\frac{\nu_{a_{z}}}{\nu^{2}_{b_{y}}}(\nu_{b_{y}}a_{z}+1)-\frac{\nu_{0}}{\nu_{b_{y}}}.$
(61)
In order to simplify the expression for $b_{y}$ (61) even more, we substitute
(59) into (61), and obtain a solution which shows a linear relation between
$a_{z}$ and $b_{y}$,
$b_{y}=\nu_{0}a_{z}.$ (62)
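The integrating-factor solution and its linearization can be verified symbolically; the sketch below (sympy, symbol names ours) checks that the explicit solution with the factor $\nu_{a_{z}}/\nu^{2}_{b_{y}}$ solves the linear ODE (46)–(47), satisfies $b_{y}(0)=0$ with $\delta$ from (59), and linearizes to (62):

```python
import sympy as sp

az = sp.symbols('a_z', real=True)
n0, na, nb = sp.symbols('nu_0 nu_a nu_b', real=True)

delta = (na + n0*nb)/nb**2                                 # Eq. (59)
by = delta*sp.exp(nb*az) - na/nb**2*(nb*az + 1) - n0/nb    # Eq. (58)

# Exact solution of  d b_y/d a_z = nu_0 + nu_a a_z + nu_b b_y
assert sp.simplify(sp.diff(by, az) - (n0 + na*az + nb*by)) == 0
# Initial condition b_y(0) = 0
assert sp.simplify(by.subs(az, 0)) == 0
# Linearizing the exponential reproduces b_y = nu_0 a_z, Eq. (62)
lin = sp.series(by, az, 0, 2).removeO()
assert sp.simplify(lin - n0*az) == 0
```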
Let us return to the field equations (42) and (43), which we aim to solve. We
rewrite equation (42) as
$\partial_{t}a_{z}-\frac{1}{\nu}\partial_{x}a_{z}=0,$ (63)
where $\nu$ is given by equation (47).
In order to continue, we perform another linearization of the $1/\nu$ factor
as
$\displaystyle f(\nu)$
$\displaystyle=f(\nu)|_{\nu_{0}}+\partial_{\nu}{f}|_{\nu_{0}}(\nu-\nu_{0}),$
(64)
and subsequently we obtain
$\displaystyle\frac{1}{\nu}=\frac{1}{\nu_{0}}\left(1-a_{z}\frac{\nu_{a_{z}}+\nu_{0}\nu_{b_{y}}}{\nu_{0}}\right).$
(65)
#### III.4.2 The final form of the nonlinear wave
Using the previous results, we can write equation (63) with the factor $1/\nu$
(65) in the final form:
$\partial_{t}a_{z}+f(a_{z})\partial_{x}a_{z}=0,$ (66)
with the factor $f(a_{z})$ given by
$f(a_{z})=-\frac{1}{\nu_{0}}\left[1-a_{z}\frac{(\nu_{a_{z}}+\nu_{0}\nu_{b_{y}})}{\nu_{0}}\right].$
(67)
This is the final form of the equation which we use in the following analysis.
For $a_{z}=0$, the wave moves with the phase velocity of the unperturbed
case, $-1/\nu_{0}$. The solution contains the two possible
solutions for ${\rm d}b_{y}/{\rm d}a_{z}$ (45), which are determined by two
different sets of parameters $\nu_{0}$, $\nu_{a_{z}}$ and $\nu_{b_{y}}$. Later
on we identify which one of the two solutions corresponds to the two possible
beam orientations. We have denoted the counter–propagating waves as the $-$
case and the co–propagating waves as the $+$ case. In the $-$ case, the
photon–photon scattering takes place and the two beams interact with each
other which results in a diminution of the velocity of the reacting waves,
[101]. In contrast, in the $+$ case, the photons do not interact. We discuss
the two cases in the next subsections.
In general, this form of the equation encodes whether shock waves are
created, wave steepening takes place, and higher-order harmonics are
generated. The two possible resulting wave equations have
similar structure and the properties of the waves are hidden in the two sets
of parameters $\nu_{0}$, $\nu_{a_{z}}$ and $\nu_{b_{y}}$ for the $+$ and $-$
solutions. We will discuss the two branches of solutions in the next
Subsection III.5 where we investigate the wave steepening of the two possible
solutions in more detail.
### III.5 Properties of Born self–similar solutions
In this subsection we analyze the properties of equation (66). The equation
can be analyzed by the method of characteristics; we briefly review this
method as well as wave breaking. Furthermore, we analyze the properties of
the nonlinear electromagnetic wave in more detail.
#### III.5.1 Method of characteristics and wave breaking
We can solve the equation (66) by the method of characteristics. The
characteristic equations for the Eq. (66) are
$\frac{{\rm d}x}{{\rm d}t}=f(a_{z}),\;\frac{{\rm d}a_{z}}{{\rm d}t}=0.$ (68)
Their solutions are $a_{z}(x,t)=A_{0}(x_{0})$ and $x=f(A_{0}(x_{0}))t+x_{0}$,
where the function $a_{z}(x,t)$ is transported along the characteristic
labelled by $x_{0}$ without any distortion. Therefore, for any differentiable
function $A_{0}=A_{0}(x)$, we can write the solution $a_{z}$ in the form
$a_{z}(x,t)=A_{0}(x_{0})=A_{0}[x-f(a_{z}(x,t))t],$ (69)
where $A_{0}$ is an arbitrary function determined by the initial condition,
$a_{z}(x)|_{t=0}=A_{0}(x)$.
Wave breaking is a typical behavior of waves in nonlinear dispersionless
media. We can write the solution of equation (66) in an implicit form (69)
with the Euler coordinate $x$ dependent on the Lagrange coordinate $x_{0}$ and
time $t$. The location where the wave breaks is determined by the gradient of
function $a_{z}(x,t)$. The wave breaks when the gradient becomes infinite
[102]. We obtain this result by differentiating the implicit solution (69) as
$\displaystyle\partial_{x}a_{z}$
$\displaystyle=\frac{A^{\prime}_{0}(x_{0})}{1+A^{\prime}_{0}(x_{0})f^{\prime}\,t},$
(70) $\displaystyle\quad t_{br}$
$\displaystyle=-\frac{1}{A^{\prime}_{0}(x_{0})f^{\prime}},$ (71)
where we denote
$\displaystyle A^{\prime}_{0}(x_{0})$ $\displaystyle={\rm d}A_{0}/{\rm d}x_{0},$
(72) $\displaystyle f^{\prime}$ $\displaystyle=\partial_{a_{z}}f(a_{z}).$ (73)
The gradient becomes infinite at time $t_{br}$, when the denominator of
equation (70) vanishes at some point $x_{br}$. At the time $t_{br}$, when the
wave breaks, the amplitude of a sinusoidal initial profile,
$a_{z}(x_{br},t_{br})=a_{m}\sin{[k(x_{br}-f(a_{z}(x_{br},t_{br}))\,t_{br})]}$,
remains constant. Such a singularity is called wave breaking or the gradient
catastrophe.
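The breaking time (71) can be illustrated numerically for a model advection equation of the form (66) with a sinusoidal initial profile; the parameters $v$, $c$, $a_{m}$, $k$ below are illustrative choices of ours, not values from the paper:

```python
# Illustrative check of the breaking time (71) for a model equation
# da/dt + f(a) da/dx = 0 with f(a) = v + c*a and A0(x) = a_m*sin(k*x).
import math

v, c, a_m, k = 1.0, 0.5, 0.2, 2*math.pi

dA0 = lambda x0: a_m*k*math.cos(k*x0)   # A0'(x0)

# Eq. (71): breaking occurs first where A0'(x0)*f' is most negative,
# i.e. at min A0' = -a_m*k with f' = c, giving t_br = 1/(a_m*k*c).
t_br = 1.0/(a_m*k*c)

# Scan the characteristics x = x0 + f(A0(x0))*t for the earliest
# gradient blow-up, Eqs. (70)-(71).
N = 10000
t_num = min(-1.0/(dA0(i/N)*c) for i in range(N) if dA0(i/N) < 0)
assert abs(t_num - t_br) < 1e-9*t_br
```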
#### III.5.2 The character of the breaking wave for the counter–propagating
waves: the $-$ solutions in Born theory
Here we will identify and concentrate on the $-$ solutions of equation (45).
The $-$ solutions correspond to the case of counter–propagating waves where
the waves interact with each other and the photon–photon scattering process
takes place. We identify the phase velocity with $v_{2}$ (see equation (35)),
because it decreases and becomes less than the speed of light $c$. We can
also relate the parameter $\nu^{-}_{0}$ to the
phase velocity $v_{2}$ by
$\nu^{-}_{0}=-\frac{1}{v_{2}},$ (74)
where $v_{2}>0$.
We can rewrite $f(a_{z})$ in equation (66) by using the explicit expression
for $\nu^{-}_{0}$ (74):
$f^{-}(a_{z})=v_{2}+a_{z}\frac{(\nu^{-}_{a_{z}}+\nu^{-}_{0}\nu^{-}_{b_{y}})}{{\nu^{-}_{0}}^{2}}.$
(75)
The final equation (66) can be rewritten in the standard form corresponding to
the equation of nonlinear wave without dispersion [98, 99],
$\partial_{t}\bar{a}_{z}+(v_{2}+\bar{a}_{z})\partial_{x}\bar{a}_{z}=0,$ (76)
where we have denoted
$\bar{a}_{z}=\frac{(\nu^{-}_{a_{z}}+\nu^{-}_{0}\nu^{-}_{b_{y}})}{{\nu^{-}_{0}}^{2}}a_{z}.$
(77)
Therefore the direction of the wave breaking is given by the sign in front of
the function $f^{\prime}$ in equation (73). In order to investigate the wave
steepening, we analyze the expression (77) which we can rewrite using Eq. (73)
as
$\bar{a}_{z}=f^{\prime}a_{z},\quad
f^{\prime}=\frac{(\nu^{-}_{a_{z}}+\nu^{-}_{0}\nu^{-}_{b_{y}})}{{\nu^{-}_{0}}^{2}}.$
(78)
After substituting $\alpha_{0}$, $\beta_{0}$ and $\gamma_{0}$ (27) into
$f_{\alpha},f_{\beta},f_{\gamma}$ (56), we observe that it is convenient to
express the functions $f_{\alpha},f_{\beta},f_{\gamma}$ in terms of the phase
velocity $v_{2}$. The explicit expressions are:
$\displaystyle f_{\alpha}$
$\displaystyle=-\frac{1}{2},\,f_{\beta}=-\frac{1}{v_{2}},\,f_{\gamma}=\frac{1}{2}\frac{1}{v_{2}^{2}}.$
(79)
Then the coefficients $\nu_{a},\nu_{b}$ (55) become:
$\displaystyle\nu^{-}_{a_{z}}$
$\displaystyle=-\frac{E_{0}}{b^{2}v^{2}_{2}}\left[\left(1+\frac{E^{2}_{0}}{b^{2}}\right)v^{2}_{2}+\left(1+\frac{2E^{2}_{0}}{b^{2}}\right)v_{2}+\frac{E^{2}_{0}}{b^{2}}\right],$
$\displaystyle\nu^{-}_{b_{y}}$
$\displaystyle=\frac{E_{0}}{b^{2}v^{2}_{2}}\left[\frac{E^{2}_{0}}{b^{2}}v^{2}_{2}-\left(1-\frac{2E^{2}_{0}}{b^{2}}\right)v_{2}+\left(\frac{E^{2}_{0}}{b^{2}}-1\right)\right].$
(80)
The function $f^{\prime-}$ (Eq. (78)) has the form
$\displaystyle f^{\prime-}=\frac{E_{0}}{b^{2}}$
$\displaystyle\left[-\left(1+\frac{E^{2}_{0}}{b^{2}}\right)v^{2}_{2}-\left(1+\frac{3E^{2}_{0}}{b^{2}}\right)v_{2}\right.$
(81)
$\displaystyle\left.+\left(1-\frac{3E^{2}_{0}}{b^{2}}\right)-\left(\frac{E^{2}_{0}}{b^{2}}-1\right)\frac{1}{v_{2}}\right].$
The steepening factor $f^{\prime-}$ in the general form (81) is expressed in
terms of the phase velocity $v_{2}$ (35) and the Born–Infeld constant $b$.
If a singularity is formed, the electromagnetic wave breaking creates a shock
wave, which has a forward character for $f^{\prime-}>0$ and a backward
character for $f^{\prime-}<0$. There is also the possibility that
$f^{\prime}=0$; then shock waves are not created and only exceptional waves
are solutions of the equations [66, 36].
In the limit $b\rightarrow\infty$, which leads to the linear Maxwell theory,
the steepening factor $f^{\prime-}\rightarrow 0$ and the phase velocity
$v\rightarrow 1$. This corresponds to the fact that wave steepening does not
happen in classical Maxwell theory. Subsequently, the resulting nonlinear
wave equation (66) with $f^{-}(a_{z})$ (75) becomes (in the limit of the
Maxwell theory):
$\partial_{t}a_{z}+v_{2}\partial_{x}a_{z}=0,$ (82)
where
$f^{-}(a_{z})|_{b\rightarrow\infty}=v_{2}.$ (83)
Continuing in the Born theory, after we substitute the phase velocity $v_{2}$
(35) into equations (80) and (81), the coefficients (80) become
$\displaystyle\nu^{-}_{a_{z}}$
$\displaystyle=-\frac{2E_{0}}{b^{2}}\frac{\left(1+\cfrac{E^{2}_{0}}{b^{2}}\right)}{\left(1-\cfrac{E^{2}_{0}}{b^{2}}\right)},$
$\displaystyle\nu^{-}_{b_{y}}$
$\displaystyle=-\frac{2E_{0}}{b^{2}}\frac{1}{\left(1-\cfrac{E^{2}_{0}}{b^{2}}\right)},$
and importantly, the steepening factor becomes
$f^{\prime-}=0.$ (84)
This means that the only solutions for this case are exceptional waves. The
exceptional travelling wave solutions propagate with constant speed and do not
turn into shocks [66]. The existence of only exceptional waves is in full
accordance with the known literature [103, 36, 78] and [81].
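The vanishing of the steepening factor (84) upon substituting $v_{2}$ into (81) can be confirmed symbolically; a sympy sketch (symbol names ours):

```python
import sympy as sp

E0, b = sp.symbols('E_0 b', positive=True)
e = E0**2/b**2
v2 = (1 - e)/(1 + e)                       # phase velocity, Eq. (35)

# General steepening factor, Eq. (81)
fp = E0/b**2*(-(1 + e)*v2**2 - (1 + 3*e)*v2
              + (1 - 3*e) - (e - 1)/v2)

# Eq. (84): only exceptional waves, no steepening
assert sp.simplify(fp) == 0
```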
Lastly, let us look at the shock wave steepening analytically; it does not
take place, as we can show explicitly. The gradient (70) and the time of wave
breaking (71) are
$\displaystyle\partial_{x}a_{z}$ $\displaystyle=A^{\prime}_{0}(x_{0}),$ (85)
$\displaystyle\quad t_{br}$ $\displaystyle=-\infty.$ (86)
The final form of the nonlinear wave equation for the counter–propagating case
$-$ is
$\partial_{t}a_{z}+v_{2}\partial_{x}a_{z}=0,$ (87)
and its solution (69),
$a_{z}(x,t)=A_{0}(x_{0})=A_{0}[x-v_{2}t],$ (88)
propagates with constant phase velocity $v_{2}$ along the increasing $x$-axis.
This exceptional wave is the real contribution to the outgoing radiation from
the photon–photon scattering in the Born electrodynamics.
#### III.5.3 The character of the breaking wave for the co–propagating
waves: the $+$ solutions in Born theory
The $+$ solutions correspond to the case of co–propagating waves which are
non-interacting and where photon–photon scattering does not occur. We identify
the phase velocity of the resulting wave as $v_{1}=-1$. Additionally, the
parameter $\nu^{+}_{0}$ has the same value as the phase velocity $v_{1}$,
$\nu^{+}_{0}=v_{1}=-1.$ (89)
We can rewrite the function $f(a_{z})$ (67) as
$f^{+}(a_{z})=f^{+}_{0}+a_{z}(x,t)f^{\prime+},$ (90)
where
$f^{+}_{0}=-\frac{1}{\nu^{+}_{0}}=1,\quad
f^{\prime+}=\frac{\nu^{+}_{a_{z}}+\nu^{+}_{0}\nu^{+}_{b_{y}}}{{\nu^{+}_{0}}^{2}}.$
(91)
By substituting $\alpha_{0}$, $\beta_{0}$ and $\gamma_{0}$ (27) into
$f_{\alpha},f_{\beta},f_{\gamma}$ (56) for the $+$ case, using again $v_{2}$,
we obtain
$\displaystyle f_{\alpha}$
$\displaystyle=\frac{1}{2},\,f_{\beta}=-1,\,f_{\gamma}=\frac{1}{2v_{2}}-\cfrac{1}{\left(1-\cfrac{E^{2}_{0}}{b^{2}}\right)}.$
(92)
The coefficients $\nu_{a},\nu_{b}$ (55) become
$\displaystyle\nu^{+}_{a_{z}}$
$\displaystyle=\frac{E^{3}_{0}}{b^{4}v_{2}}\left[-1-v_{2}+\frac{2v_{2}}{\left(1-\cfrac{E^{2}_{0}}{b^{2}}\right)}\right],$
$\displaystyle\nu^{+}_{b_{y}}$
$\displaystyle=\frac{E_{0}}{b^{2}v_{2}}\left[\left(\frac{E^{2}_{0}}{b^{2}}+1\right)v_{2}+\frac{E^{2}_{0}}{b^{2}}-1\right].$
The function $f^{\prime+}$ (91), expressed in terms of the phase velocity,
has the form
$\displaystyle f^{\prime+}=\frac{E_{0}}{b^{2}}$
$\displaystyle\left[\left(1-\frac{2E^{2}_{0}}{b^{2}}\right)\frac{1}{v_{2}}-1-\frac{2E^{2}_{0}}{b^{2}}+2\frac{E^{2}_{0}}{b^{2}}\frac{1}{\left(1-\cfrac{E^{2}_{0}}{b^{2}}\right)}\right].$
(94)
The general form of the steepening factor $f^{\prime+}$ (94) is expressed in
terms of the phase velocity $v_{2}$ (35) and the Born–Infeld constant $b$.
In the limit $b\rightarrow\infty$, which leads to the linear Maxwell theory,
the steepening factor $f^{\prime+}\rightarrow 0$ and the phase velocity
$v_{1}\rightarrow 1$. The resulting equation becomes
$\partial_{t}a_{z}-\frac{1}{v_{1}}\partial_{x}a_{z}=0,$ (95)
where
$f^{+}(a_{z})|_{b\rightarrow\infty}=-\frac{1}{v_{1}}.$ (96)
We continue the analysis in the Born theory. After inserting the phase velocity
$v_{1}$, the coefficients $\nu^{+}_{a_{z}},\nu^{+}_{b_{y}}$ above and the steepening factor (94) reduce to
$\displaystyle\nu^{+}_{a_{z}}$ $\displaystyle=0,\quad\nu^{+}_{b_{y}}=0,$ (97)
and importantly (from equation (91)):
$f^{\prime+}=0.$ (98)
The shock–wave steepening does not take place in this case because the
co–propagating waves do not interact and photon–photon scattering does not
occur. Only exceptional waves are created.
The final form of the nonlinear wave equation for the co–propagating case $+$
has the form
$\partial_{t}a_{z}+\partial_{x}a_{z}=0,$ (99)
and its solution,
$a_{z}(x,t)=A_{0}(x_{0})=A_{0}[x-t],$ (100)
which propagates with constant speed along the $x$–axis, the phase velocity
being $v_{1}=-1$. The analytical expressions for the shock–wave steepening are
the same as in the previous case $-$, see equations (85) and (86). There is no
steepening for the exceptional waves.
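As a numerical sanity check, the self–similar solution (100) of the transport equation (99) can be verified directly. The Python sketch below uses a hypothetical Gaussian pulse as the initial profile $A_{0}$ (an illustrative choice, not from the paper) and confirms by central differences that $a_{z}(x,t)=A_{0}(x-t)$ satisfies $\partial_{t}a_{z}+\partial_{x}a_{z}=0$.

```python
import math

def A0(s):
    # initial profile: a hypothetical Gaussian pulse
    return math.exp(-s * s)

def a(x, t):
    # self-similar solution a_z(x, t) = A0(x - t) of eq. (99)
    return A0(x - t)

def residual(x, t, h=1e-5):
    # central-difference approximation of d_t a_z + d_x a_z
    dt = (a(x, t + h) - a(x, t - h)) / (2 * h)
    dx = (a(x + h, t) - a(x - h, t)) / (2 * h)
    return dt + dx

# the residual should vanish everywhere, up to discretization error
worst = max(abs(residual(x * 0.25, t * 0.25))
            for x in range(-10, 11) for t in range(0, 9))
print(worst)
```

The residual is of the order of the finite–difference error, i.e. numerically zero.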
## IV Derivation of field equations in the Born–Infeld theory
In this section we derive and analyze the field equations for the problem of
two counter–propagating waves in Born–Infeld electrodynamics for nonlinearly
polarized beams. In other words, we generalize our study from the previous
section to include $\mathfrak{G}={\bf E}\cdot{\bf B}\neq 0$ in the
calculations. As mentioned in the previous section, this generalized setup
gives rise to extraordinary waves in the nonlinear wave evolution of the full
Born–Infeld electrodynamics.
Since the nonlinear Born–Infeld electrodynamics describes the electromagnetic
fields in an isotropic medium with a polarization–independent refractive index
[56, 57], the extraordinary wave will move in the same direction as the
ordinary wave regardless of their different polarization states.
We work in an orthogonal coordinate system, $(x,y,z)$, where the two waves
propagate along the $x-$axis. We assume ${\bf E}=(0,E_{y},E_{z})$ and ${\bf
B}=(0,B_{y},B_{z})$, which is the simplest generalization of the previous
setup in the Born theory from Section III. The functions $E_{z}(t,x)$ and
$B_{y}(t,x)$ depend on time $t$ and position $x$. However, the second
equation in (6) allows us to assume an $E_{y}$ dependent only on $t$, and a
$B_{z}$ either dependent only on $x$ or equal to a constant. To
simplify our ansatz we have chosen to use $B_{z}=\text{const}=0$. The setup is
thus rather mathematical but it is very useful for our task of studying the
nonlinear wave development in the Born–Infeld electrodynamics.
The term $\mathfrak{G}^{2}$ is of the fourth order in $F_{\mu\nu}$ and
therefore can be neglected except in the immediate neighbourhood of
singularities. In this section we investigate the case where we might create
these singularities or shock wave fronts [93], i.e. $\mathfrak{G}^{2}\neq 0$.
In terms of energy, the integral around the position of the singularity is
proportional to the angular momentum density $j$, which is nonvanishing and
cannot be neglected close to the singularity.
The field equations are found by varying the Lagrangian (12) with respect to
the potential ${\bf A}$:
$\displaystyle\partial_{t}B_{y}$ $\displaystyle=\partial_{x}E_{z},$
$\displaystyle-\alpha\partial_{t}E_{z}$
$\displaystyle+\beta\left[\partial_{t}B_{y}+\tau\partial_{x}E_{z}\right]$
$\displaystyle+\gamma\partial_{x}B_{y}-\delta\partial_{t}E_{y}=0,$ (101)
$\displaystyle-\epsilon\partial_{t}E_{z}$
$\displaystyle+\zeta\partial_{x}B_{y}+\eta\partial_{t}B_{y}$
$\displaystyle+\theta\partial_{x}E_{z}-\iota\partial_{t}E_{y}=0,$
where the coefficients are:
$\displaystyle\alpha$
$\displaystyle=1+\frac{E_{z}^{2}}{b^{2}}\frac{1}{\left(1-\cfrac{\bf{E}^{2}-\bf{B}^{2}}{b^{2}}-\cfrac{(\bf{E}\cdot\bf{B})^{2}}{b^{4}}\right)},$
$\displaystyle\beta$
$\displaystyle=\frac{1}{b^{2}}\frac{E_{z}B_{y}}{\left(1-\cfrac{\bf{E}^{2}-\bf{B}^{2}}{b^{2}}-\cfrac{(\bf{E}\cdot\bf{B})^{2}}{b^{4}}\right)},$
(102) $\displaystyle\gamma$
$\displaystyle=1-\cfrac{1}{b^{2}}E^{2}_{y}-\cfrac{B^{2}_{y}\left(1-\cfrac{1}{b^{2}}E^{2}_{y}\right)^{2}+\cfrac{1}{b^{2}}E_{z}B_{y}E^{2}_{y}}{b^{2}\left(1-\cfrac{\bf{E}^{2}-\bf{B}^{2}}{b^{2}}-\cfrac{(\bf{E}\cdot\bf{B})^{2}}{b^{4}}\right)},$
$\displaystyle\delta$
$\displaystyle=\frac{1}{b^{2}}\cfrac{E_{z}E_{y}}{\left(1-\cfrac{\bf{E}^{2}-\bf{B}^{2}}{b^{2}}-\cfrac{(\bf{E}\cdot\bf{B})^{2}}{b^{4}}\right)}.$
The remaining coefficients are:
$\displaystyle\epsilon$
$\displaystyle=\frac{1}{b^{2}}E_{y}E_{z}\cfrac{\left(1+\cfrac{1}{b^{2}}B^{2}_{y}\right)}{\left(1-\cfrac{\bf{E}^{2}-\bf{B}^{2}}{b^{2}}-\cfrac{(\bf{E}\cdot\bf{B})^{2}}{b^{4}}\right)},$
$\displaystyle\zeta$
$\displaystyle=\frac{E_{y}E_{z}}{b^{2}}\left(1-\cfrac{B^{2}_{y}\left(1-\cfrac{1}{b^{2}}B^{2}_{y}\right)}{b^{2}\left(1-\cfrac{\bf{E}^{2}-\bf{B}^{2}}{b^{2}}-\cfrac{(\bf{E}\cdot\bf{B})^{2}}{b^{4}}\right)}\right),$
(103) $\displaystyle\eta$
$\displaystyle=-\cfrac{E_{y}B_{y}}{b^{2}}\left(2-\cfrac{\left(1-\cfrac{1}{b^{2}}B^{2}_{y}\right)\left(1+\cfrac{1}{b^{2}}E^{2}_{y}\right)}{\left(1-\cfrac{\bf{E}^{2}-\bf{B}^{2}}{b^{2}}-\cfrac{(\bf{E}\cdot\bf{B})^{2}}{b^{4}}\right)}\right),$
and
$\displaystyle\theta$ $\displaystyle=\alpha\frac{E_{y}B_{y}}{b^{2}},$ (104)
$\displaystyle\iota$
$\displaystyle=1+\cfrac{1}{b^{2}}B^{2}_{y}+\cfrac{1}{b^{2}}E^{2}_{y}\cfrac{\left(1-\cfrac{1}{b^{2}}B^{2}_{y}\right)\left(1+\cfrac{1}{b^{2}}E^{2}_{y}\right)}{\left(1-\cfrac{\bf{E}^{2}-\bf{B}^{2}}{b^{2}}-\cfrac{(\bf{E}\cdot\bf{B})^{2}}{b^{4}}\right)},$
$\displaystyle\tau$ $\displaystyle=1-\frac{1}{b^{2}}E^{2}_{y}.$
We observe that the set of equations (101) is the simplest generalization of
our equations in Born electrodynamics (20, 21). The field equations (101)
reduce to the two field equations for Born theory (20, 21) in Section III when
we set $E_{y}=0$. The condition
$1-({\bf{E}^{2}-\bf{B}^{2}})/b^{2}-({\bf{E}\cdot\bf{B}})^{2}/b^{4}>0$ holds.
The equations describe both the ordinary and the extraordinary wave
propagation, the latter being determined by the components $E_{y}(t)$ and
$B_{z}=0$ alone. The ordinary and extraordinary waves have the same direction
of propagation, thanks to the absence of the birefringence effect in the
Born–Infeld electrodynamics, although their phase velocities differ.
### IV.1 Solving the field equations
#### IV.1.1 Adding weak linear corrections
In this section, we add a small amplitude perturbation to the fields to solve
the field equations. Then we perform a linearization procedure which also
includes the linearization of the coefficients in the equations about the
constant background field.
Again, we add the weak linear amplitude corrections to the fields,
$\displaystyle E_{y}$ $\displaystyle=E_{0}+a_{y}(t),$ $\displaystyle E_{z}$
$\displaystyle=E_{0}+a_{z}(x,t),$ (105) $\displaystyle B_{y}$
$\displaystyle=B_{0}+b_{y}(x,t),$
where the fields $E_{0},B_{0}$ represent the constant electromagnetic
background field and $a_{y}(t),a_{z}(x,t)$ and $b_{y}(x,t)$ are amplitude
corrections.
After we substitute (105) into the field equations (101), these can be
rewritten as
$\displaystyle\partial_{t}b_{y}(x,t)$ $\displaystyle=\partial_{x}a_{z}(x,t),$
$\displaystyle-\alpha\,\partial_{t}a_{z}(x,t)$
$\displaystyle+\beta\,\left[\tau\partial_{x}a_{z}(x,t)+\partial_{t}b_{y}(x,t)\right]$
$\displaystyle+\gamma\,\partial_{x}b_{y}(x,t)-\delta\partial_{t}a_{y}=0,$
(106) $\displaystyle-\epsilon\,\partial_{t}a_{z}(x,t)$
$\displaystyle+\zeta\partial_{x}b_{y}(x,t)+\eta\partial_{t}b_{y}(x,t)$
$\displaystyle+\theta\,\partial_{x}a_{z}(x,t)-\iota\partial_{t}a_{y}=0.$
#### IV.1.2 Linearization of the coefficients
We assume the linearized coefficients $\alpha,\beta,\gamma$,
$\delta,\epsilon,\zeta$, $\eta,\theta,\iota,\tau$ about the constant
background field in the form:
$\displaystyle\alpha$
$\displaystyle=\alpha_{0}+\alpha_{a}a_{z}+\alpha_{b}b_{y}+\alpha_{y}a_{y},$
$\displaystyle\beta$
$\displaystyle=\beta_{0}+\beta_{a}a_{z}+\beta_{b}b_{y}+\beta_{y}a_{y},$
$\displaystyle\gamma$
$\displaystyle=\gamma_{0}+\gamma_{a}a_{z}+\gamma_{b}b_{y}+\gamma_{y}a_{y},$
$\displaystyle\delta$
$\displaystyle=\delta_{0}+\delta_{a}a_{z}+\delta_{b}b_{y}+\delta_{y}a_{y},$
$\displaystyle\epsilon$
$\displaystyle=\epsilon_{0}+\epsilon_{a}a_{z}+\epsilon_{b}b_{y}+\epsilon_{y}a_{y},$
(107) $\displaystyle\zeta$
$\displaystyle=\zeta_{0}+\zeta_{a}a_{z}+\zeta_{b}b_{y}+\zeta_{y}a_{y},$
$\displaystyle\eta$
$\displaystyle=\eta_{0}+\eta_{a}a_{z}+\eta_{b}b_{y}+\eta_{y}a_{y},$
$\displaystyle\theta$
$\displaystyle=\theta_{0}+\theta_{a}a_{z}+\theta_{b}b_{y}+\theta_{y}a_{y},$
$\displaystyle\iota$
$\displaystyle=\iota_{0}+\iota_{a}a_{z}+\iota_{b}b_{y}+\iota_{y}a_{y},$
$\displaystyle\tau$
$\displaystyle=\tau_{0}+\tau_{a}a_{z}+\tau_{b}b_{y}+\tau_{y}a_{y},$
where we denote:
$\displaystyle\alpha_{a_{z}}$
$\displaystyle=(\partial_{a_{z}}{\alpha})|_{a_{z},b_{y},a_{y}=0},$
$\displaystyle\alpha_{b_{y}}$
$\displaystyle=(\partial_{b_{y}}{\alpha})|_{a_{z},b_{y},a_{y}=0},$ (108)
$\displaystyle\alpha_{a_{y}}$
$\displaystyle=(\partial_{a_{y}}{\alpha})|_{a_{z},b_{y},a_{y}=0}.$
The other constant factors in the linearized coefficients (107) are similarly
denoted. The coefficients are listed in their final forms in the Appendix B
due to their lengthy character.
The constant parameters $\alpha_{0},\beta_{0},\gamma_{0},\delta_{0}$,
$\epsilon_{0},\zeta_{0},\eta_{0},\theta_{0},\iota_{0}$ and $\tau_{0}$ are
listed in Appendix A. They are obtained by setting
$a_{z}(x,t)=b_{y}(x,t)=0$ in the linearized coefficients (107) and then
simplifying with $B_{0}=E_{0}$, the same simplification used in all the
previous calculations to keep the results comparable.
#### IV.1.3 The derivation of the phase velocity
The phase velocity is derived from the linearized background equations (106)
by using the same relations (28) as in the Born model, since the medium is
dispersionless in Born–Infeld. We denote the phase velocity by
$v=v_{ph}=v_{g}$.
To obtain the algebraic expressions for $v$, we substitute expressions (28)
into the equations for the background field.
We obtain the equations for the background field when we assume
$a_{z}=b_{y}=a_{y}=0$ in the field equations (106), in other words, we set the
coefficients $\alpha,\beta,\gamma,\dots$ to the constant coefficients
$\alpha_{0},\beta_{0},\gamma_{0},\delta_{0}$,
$\epsilon_{0},\zeta_{0},\eta_{0},\theta_{0},\iota_{0}$ and $\tau_{0}$,
where the coefficients are listed in Appendix A.
We obtain the background field equations:
$\displaystyle a_{z}+vb_{y}$ $\displaystyle=0,$ (109)
$\displaystyle-v(-\alpha_{0}a_{z}+\beta_{0}b_{y}-\delta_{0}a_{y})+\left[\gamma_{0}b_{y}+\beta_{0}\tau_{0}a_{z}\right]$
$\displaystyle=0,$ (110) $\displaystyle
v(\epsilon_{0}a_{z}-b_{y}\eta_{0}+a_{y}\iota_{0})+(b_{y}\zeta_{0}+a_{z}\theta_{0})$
$\displaystyle=0.$ (111)
When substituting (109) into (110), we get a quadratic equation for the first
phase velocity,
$\displaystyle-\alpha_{0}v^{2}-vM+\gamma_{0}=0,$ (112)
with solutions:
$\displaystyle
v_{1,2}=\frac{M\pm\sqrt{M^{2}+4\alpha_{0}\gamma_{0}}}{-2\alpha_{0}},$ (113)
where $M=\left(1+\tau_{0}\right)\beta_{0}-\cfrac{a_{y}}{b_{y}}\delta_{0}$. The
solutions generalize those obtained with the Born theory (33). These
velocities are properties of the ordinary wave. One velocity describes the
real, physical, phase velocity for two counter–propagating electromagnetic
waves, with photon–photon scattering (case $-$). The other velocity
corresponds to co–propagating, non-interacting waves without photon–photon
scattering (case $+$).
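The roots (113) can be checked by substituting them back into the quadratic (112). A minimal Python sketch, with arbitrary illustrative values for $\alpha_{0}$, $\gamma_{0}$ and $M$ (placeholders, not values from the paper):

```python
import math

# illustrative numeric values for the background coefficients (placeholders)
alpha0, gamma0, M = 1.3, 0.7, 0.4

# the two roots of  -alpha0 v^2 - M v + gamma0 = 0,  eq. (113)
disc = math.sqrt(M * M + 4 * alpha0 * gamma0)
v1 = (M + disc) / (-2 * alpha0)
v2 = (M - disc) / (-2 * alpha0)

for v in (v1, v2):
    # substitute back into the quadratic; the residual must vanish
    print(-alpha0 * v * v - M * v + gamma0)
```

Both residuals vanish to machine precision, confirming the quadratic formula as written.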
Similarly, when we substitute (109) into (111), we get a quadratic equation
for the phase velocity,
$\displaystyle-\epsilon_{0}v^{2}-v\left(\eta_{0}+\theta_{0}-\cfrac{a_{y}}{b_{y}}\iota_{0}\right)+\zeta_{0}=0,$
(114)
with two solutions:
$\displaystyle
v_{3,4}=\frac{\left(\eta_{0}+\theta_{0}-\cfrac{a_{y}}{b_{y}}\iota_{0}\right)\pm\sqrt{\left(\eta_{0}+\theta_{0}-\cfrac{a_{y}}{b_{y}}\iota_{0}\right)^{2}+4\epsilon_{0}\zeta_{0}}}{-2\epsilon_{0}}.$
(115)
These phase velocities are new and we assume that they apply to the
extraordinary wave. Only one of the velocities represents the physical one,
i.e. corresponds to the $-$ case: counter–propagating waves with photon–photon
scattering. The other phase velocity corresponds to the $+$ case:
co–propagating, non–interacting waves. We cannot derive more specific
expressions at this point: in general, the phase velocities depend on the
ratio of the two functions $a_{y}/b_{y}$, while the other coefficients are
constant. We will investigate them in more detail later, see Subsection IV.4.
#### IV.1.4 The simple wave solutions
In this section we solve the equations by using the self–similar solutions
that are well known in nonlinear wave theory [98, 99, 100]. As done
previously, we use two related variables $b_{y}$ and $a_{z}$, assuming
$b_{y}=b_{y}(a_{z})$, but include a new function, $a_{y}(t)$, which is freely
specified. With this term the two beams are, in general, nonlinearly
polarized, so it is possible to investigate the birefringence effect.
The analysis begins by taking the total differentials:
$\partial_{t}b_{y}=({\rm d}b_{y}/{\rm d}a_{z})\partial_{t}a_{z}$, and
$\partial_{x}b_{y}=({\rm d}b_{y}/{\rm d}a_{z})\partial_{x}a_{z}$. By using
them in the linearized set of field equations (106) we obtain the set of three
equations:
$\displaystyle\partial_{t}a_{z}$ $\displaystyle=\frac{{\rm d}a_{z}}{{\rm
d}b_{y}}\partial_{x}a_{z},$ (116) $\displaystyle\partial_{t}a_{z}$
$\displaystyle=\frac{1}{\alpha}\left\\{\partial_{x}a_{z}\left[(1+\tau)\beta+\frac{{\rm
d}b_{y}}{{\rm d}a_{z}}\gamma\right]-\delta\partial_{t}a_{y}\right\\},$ (117)
$\displaystyle\partial_{t}a_{z}$ $\displaystyle\left(-\epsilon+\frac{{\rm
d}b_{y}}{{\rm d}a_{z}}\eta\right)=-\partial_{x}a_{z}\left(\frac{{\rm
d}b_{y}}{{\rm d}a_{z}}\zeta+\theta\right)+\iota\partial_{t}a_{y}.$ (118)
These can be solved by observing that equations (116), (117) and (118), each
of which determines $\partial_{t}a_{z}$, must agree. This results in the set
of two equations for $\partial_{x}a_{z}$ and
$\partial_{t}a_{y}$:
$\displaystyle\partial_{x}a_{z}$
$\displaystyle\left\\{1-\frac{1}{\alpha}\frac{{\rm d}b_{y}}{{\rm
d}a_{z}}\left[(1+\tau)\beta+\frac{{\rm d}b_{y}}{{\rm
d}a_{z}}\gamma\right]\right\\}=-\partial_{t}a_{y}\frac{\delta}{\alpha}\frac{{\rm
d}b_{y}}{{\rm d}a_{z}},$ (119) $\displaystyle\partial_{x}a_{z}$
$\displaystyle\left[-\epsilon+\eta\frac{{\rm d}b_{y}}{{\rm d}a_{z}}+\frac{{\rm
d}b_{y}}{{\rm d}a_{z}}\left(\zeta\frac{{\rm d}b_{y}}{{\rm
d}a_{z}}+\theta\right)\right]=\iota\partial_{t}a_{y}\frac{{\rm d}b_{y}}{{\rm
d}a_{z}}.$ (120)
Dividing the two equations above by one another eliminates $\partial_{x}a_{z}$
and $\partial_{t}a_{y}$, and we obtain another quadratic equation for the
total differential ${\rm d}b_{y}/{\rm d}a_{z}$:
$\displaystyle\left(\frac{{\rm d}b_{y}}{{\rm
d}a_{z}}\right)^{2}(\delta\zeta+\iota\gamma)$ $\displaystyle+\left(\frac{{\rm
d}b_{y}}{{\rm
d}a_{z}}\right)\left[\eta\delta+\theta\delta-\iota\beta^{2}(1+\tau)\right]$
$\displaystyle+\alpha\left(\iota+\epsilon\frac{\delta}{\alpha}\right)=0.$
(121)
The quadratic equation (121) has two solutions,
$\displaystyle\left(\frac{{\rm d}b_{y}}{{\rm
d}a_{z}}\right)_{1,2}=\frac{-N\pm\sqrt{N^{2}-4\alpha(\delta\zeta+\iota\gamma)\left(\iota-\epsilon\delta/\alpha\right)}}{2(\delta\zeta+\iota\gamma)},$
(122)
where
$N=\left[\delta(\eta+\theta)-\iota{\beta}^{2}(1+\tau)\right].$ (123)
We have thus shown that the field equations decouple when we look for a
solution in a simple wave form, which is one of the main results of this
paper.
### IV.2 Solutions of type I equations
In this section we discuss the solutions of the decoupled field equations and
their meaning. By type I equation we refer to an equation of the form (116).
This can be rewritten as
$\partial_{t}a_{z}-\frac{1}{\nu}\partial_{x}a_{z}=0,$ (124)
where, after linearization, the function $\nu$ has the form:
$\frac{{\rm d}b_{y}}{{\rm
d}a_{z}}=\nu,\quad\nu=\nu_{0}+\nu_{a_{z}}a_{z}+\nu_{b_{y}}b_{y}+\nu_{a_{y}}a_{y}.$
(125)
The type I equation has the same form as equation (63) in the Born theory
[104] and equation (50) in the Heisenberg–Euler electrodynamics [2];
consequently, the derivation below also generalizes the results of the
previous sections.
The two sets of coefficients $\nu_{0}$, $\nu_{a_{z}}$, $\nu_{b_{y}}$ can be
derived for each of the total differentials $({\rm d}b_{y}/{\rm
d}a_{z})_{1,2}$ (122) in Mathematica, using linearized coefficients (107) with
the background coefficients in Appendix A. The resulting coefficients are
listed in the Appendix C because of their complexity.
Let us mention, for notation purposes, that we denote the two sets of
coefficients as
$\nu^{\pm}=\nu^{\pm}_{0}+\nu^{\pm}_{a_{z}}a_{z}+\nu^{\pm}_{b_{y}}b_{y}+\nu^{\pm}_{a_{y}}a_{y},$
(126)
where the $\pm$ sign is motivated by the correspondence of $-$ with
counter–propagating waves and of $+$ with co–propagating waves.
For simplicity, we use only the definition (125) in the following text. The
$\pm$ notation will be used from the next subsection on.
We return to the solution of the differential equation (125). The equation can
be rewritten as
$\frac{{\rm d}b_{y}}{{\rm
d}a_{z}}=\nu,\quad\nu=\nu^{\prime}_{0}+\nu_{a_{z}}a_{z}+\nu_{b_{y}}b_{y},$
(127)
such that the terms on the right–hand side are constant with respect to the
variables in the total differential. Here $\nu^{\prime}_{0}$ denotes
$\nu^{\prime}_{0}=(\nu_{0}+\nu_{a_{y}}a_{y}),$ (128)
which is constant with respect to the variable $a_{z}$.
The equation can be solved with the integrating–factor method, choosing the
integrating factor $m(a_{z})=\exp(-\nu_{b_{y}}a_{z})$. The relation
$b_{y}=b_{y}(a_{z})$ is
determined by
$\frac{1}{\nu_{b_{y}}}\exp{(-\nu_{b_{y}}a_{z})}\left((\nu^{\prime}_{0}+\nu_{b_{y}}b_{y})+\frac{\nu_{a_{z}}}{\nu_{b_{y}}}(\nu_{b_{y}}a_{z}+1)\right)=\delta_{1},$
(129)
where $\delta_{1}$ is an arbitrary constant. Therefore the function
$b_{y}=b_{y}(a_{z})$ has the form
$b_{y}=\delta_{1}\,\exp(\nu_{b_{y}}a_{z})-\frac{\nu_{a_{z}}}{\nu_{b_{y}}}(\nu_{b_{y}}a_{z}+1)-\frac{\nu^{\prime}_{0}}{\nu_{b_{y}}}.$
(130)
The constant $\delta_{1}$ is determined by the initial condition
$b_{y}|_{a_{z}=0}=0$:
$\delta_{1}=\frac{\nu_{a_{z}}+\nu^{\prime}_{0}\nu_{b_{y}}}{\nu^{2}_{b_{y}}}.$
(131)
Substituting (131) into (130) and Taylor–expanding the exponential to first
order in $a_{z}$, we can express $b_{y}$:
$b_{y}=\nu^{\prime}_{0}a_{z}.$ (132)
This can be rewritten using $\nu^{\prime}_{0}$ (128) as
$b_{y}=(\nu_{0}+\nu_{a_{y}}a_{y})a_{z},$ (133)
which we need to linearize to
$b_{y}=\nu_{0}a_{z}.$ (134)
The function $b_{y}$ has the same form as in the Born theory (see Subsection
III.4) and in the Heisenberg–Euler approximation [1, 2].
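The chain leading from the linear relation (127) to $b_{y}=\nu^{\prime}_{0}a_{z}$ can be tested numerically. The following Python sketch (all coefficient values are placeholders, not the paper's) integrates ${\rm d}b_{y}/{\rm d}a_{z}=\nu^{\prime}_{0}+\nu_{a_{z}}a_{z}+\nu_{b_{y}}b_{y}$ with a Runge–Kutta scheme and compares the result with the closed form (130)–(131) and with the linearized expression (132).

```python
import math

# illustrative small coefficients (placeholders): nu0', nu_{a_z}, nu_{b_y}
nu0p, nu_a, nu_b = 0.5, 0.02, 0.03

def rhs(a, b):
    # the linearized relation db_y/da_z = nu, cf. eq. (127)
    return nu0p + nu_a * a + nu_b * b

def integrate(a_end, n=10000):
    # classical RK4 from the initial condition b_y(0) = 0
    a, b, h = 0.0, 0.0, a_end / n
    for _ in range(n):
        k1 = rhs(a, b)
        k2 = rhs(a + h / 2, b + h * k1 / 2)
        k3 = rhs(a + h / 2, b + h * k2 / 2)
        k4 = rhs(a + h, b + h * k3)
        b += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        a += h
    return b

def closed_form(a):
    # eq. (130) with delta_1 taken from eq. (131)
    d1 = (nu_a + nu0p * nu_b) / nu_b ** 2
    return d1 * math.exp(nu_b * a) - (nu_a / nu_b ** 2) * (nu_b * a + 1) - nu0p / nu_b

a = 0.1
print(integrate(a), closed_form(a), nu0p * a)  # numeric, exact, linearized
```

For small $a_{z}$ the three values agree to the expected order, supporting the linearization step.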
After we substitute $b_{y}$ (133) into $\nu$ (127), we obtain
$\nu=\nu_{0}+(\nu_{a_{z}}+\nu_{0}\nu_{b_{y}})a_{z}+\nu_{a_{y}}a_{y}+\nu_{b_{y}}\nu_{a_{y}}a_{y}a_{z},$
(135)
where we neglect the last nonlinear term due to linearization. While solving
equation (124) we need to evaluate $1/\nu(a_{z},b_{y})$. We use a Taylor
expansion in two variables which yields
$\frac{1}{\nu}=\frac{1}{\nu_{0}}\left\\{1-\frac{1}{\nu_{0}}\left[(\nu_{a_{z}}+\nu_{0}\nu_{b_{y}})a_{z}+\nu_{a_{y}}a_{y}\right]\right\\}.$
(136)
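The expansion (136) can be checked in the same spirit: for small perturbations the difference between $1/\nu$ and its first–order Taylor approximation should be of second order. A sketch with placeholder coefficients (illustrative values, not the paper's):

```python
# illustrative coefficients (placeholders)
nu0, nu_az, nu_by, nu_ay = 2.0, 0.3, 0.1, 0.2

def nu(az, ay):
    # linearized nu from eq. (135), with the nonlinear cross term dropped
    return nu0 + (nu_az + nu0 * nu_by) * az + nu_ay * ay

def inv_nu_taylor(az, ay):
    # first-order expansion of 1/nu, eq. (136)
    corr = (nu_az + nu0 * nu_by) * az + nu_ay * ay
    return (1.0 / nu0) * (1.0 - corr / nu0)

eps = 1e-3
exact = 1.0 / nu(eps, eps)
approx = inv_nu_taylor(eps, eps)
print(exact - approx)  # remainder is O(eps^2)
```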
We rewrite equation (124) as
$\partial_{t}a_{z}+f(a_{z},a_{y})\partial_{x}a_{z}=0,$ (137)
where $f(a_{z},a_{y})=\cfrac{1}{\nu}$ can be summarized as
$f(a_{z},a_{y})=-\frac{1}{\nu_{0}}\left\\{1-\frac{1}{\nu_{0}}\left[(\nu_{a_{z}}+\nu_{0}\nu_{b_{y}})a_{z}+\nu_{a_{y}}a_{y}\right]\right\\}.$
(138)
Furthermore we can put equation (137) into a standard form [98, 99] which
describes a nonlinear wave without dispersion,
$\partial_{t}\overline{a}_{z}+\left(-\frac{1}{\nu_{0}}+\overline{a}_{z}+\overline{a}_{y}\right)\partial_{x}\overline{a}_{z}=0,$
(139)
where
$\overline{a}_{z}=\frac{1}{\nu^{2}_{0}}(\nu_{a_{z}}+\nu_{0}\nu_{b_{y}})a_{z},\;\overline{a}_{y}=\frac{\nu_{a_{y}}}{\nu^{2}_{0}}a_{y}.$
(140)
The final form of this equation contains information about the shock–wave
creation and subsequent effects such as higher–order harmonic generation. The
equation is solved for the variable $a_{z}$, while $a_{y}$ remains a free,
arbitrary function.
This nonlinear equation solves the type I equation and has a form similar to
the nonlinear waves (66) together with (67) in Born, (see Subsection III.4)
and the nonlinear waves (54) together with (55) in Heisenberg-Euler
approximation [1, 2], but with different constant coefficients.
In the limit $a_{z}=a_{y}=0$, the wave will move with the velocity
$-1/\nu_{0}$, as in the unperturbed case. We have two solutions,
$\nu_{0}=\nu^{\pm}_{0}$ ($\nu^{-}_{0}$ for counter–propagating waves and
$\nu^{+}_{0}$ for co–propagating waves), see the summary section IV.5.
The final equation above is the general result: the profile is distorted by
the presence of the functions $a_{z}$ and $a_{y}$, which suggests that wave
steepening takes place. Let us, however, check the physical relevance of our
results.
#### IV.2.1 The characteristic equations
We solve equation (137) by the method of characteristics. The characteristic
equation for the equation (124) and the resulting equation for $a_{z}$ are
$\frac{{\rm d}x}{{\rm d}t}=f(a_{z},a_{y}),\;\frac{{\rm d}a_{z}}{{\rm d}t}=0.$
(141)
For any differentiable function $A=A(x)$, we can write the self–similar
solution $a_{z}$ as
$a_{z}(x,t)=A_{0}(x_{0})=A_{0}[x-f(a_{z}(x,t),a_{y}(t))t],$ (142)
where $A_{0}$ is an arbitrary function determined by the initial condition
$a_{z}(x)|_{t=0}=A_{0}(x)$.
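Since $a_{z}$ appears on both sides of (142), the solution along a characteristic can be evaluated by fixed–point iteration. The Python sketch below uses a hypothetical linear flux $f(a_{z})=f_{0}+f_{1}a_{z}$ and a Gaussian initial profile; both are illustrative choices, not the paper's expressions.

```python
import math

# a hypothetical steepening-type flux f(a_z) = f0 + f1 * a_z, cf. eq. (138)
f0, f1 = 1.0, 0.1

def A0(s):
    # initial profile a_z(x, 0)
    return math.exp(-s * s)

def solve_characteristics(x, t, iters=50):
    # fixed-point iteration for the implicit relation (142): a = A0(x - f(a) t)
    a = A0(x)
    for _ in range(iters):
        a = A0(x - (f0 + f1 * a) * t)
    return a

x, t = 0.5, 0.3
a = solve_characteristics(x, t)
resid = a - A0(x - (f0 + f1 * a) * t)
print(a, resid)  # resid ~ 0: the implicit solution is self-consistent
```

For small $f_{1}t$ the map is a contraction and the iteration converges to machine precision.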
#### IV.2.2 The condition for existence of exceptional waves, wave
steepening and physical solutions
We use our knowledge about the phase velocities in Born electrodynamics (see
Section III): in the counter–propagating case $-$, the phase velocity $v_{2}$
(35) is positive and less than the speed of light $c=1$, and photon–photon
scattering occurs. In the co–propagating case $+$, the phase velocity is $v_{1}=-1$ (34); in
this case the beams do not interact. We observe that the phase velocities are
constant for both cases. In the general Born–Infeld electrodynamics we shall
expect similar limits of the phase velocities. The phase velocities are
dependent on the ratio $a_{y}/b_{y}$, $v_{1,2}=v_{1,2}(a_{y}/b_{y})$ and
$v_{3,4}=v_{3,4}(a_{y}/b_{y})$, where $a_{y}$ is an arbitrary function.
In order to choose the relevant (physical) phase velocities for our problem of
photon–photon scattering in Born–Infeld, we impose as a requirement the
constant limiting values of the phase velocities obtained for Born, i.e. we
require the ratio $a_{y}/b_{y}$ to be constant. This is consistent with the
behaviour of the Born–Infeld electrodynamics as an isotropic medium with a
polarization–independent refractive index.
In order to find the ratio, we start with expression (134). This yields
$\cfrac{a_{y}}{b_{y}}=\frac{a_{y}}{\nu_{0}a_{z}}=k_{BI},$ (143)
which tells us to look for the ratio $a_{y}/a_{z}$ and to determine the
constant $k_{BI}$.
In order to find these, we start by studying the wave steepening (see
Subsection III.5.3 for a basic review, and [100] for a general one). The
wave steepening will happen forward for $\partial_{a_{z}}f(a_{z},a_{y})>0$,
backwards for $\partial_{a_{z}}f(a_{z},a_{y})<0$, or we get only exceptional
waves for $\partial_{a_{z}}f(a_{z},a_{y})=0$.
The characteristics (141) have an envelope
$\displaystyle 1$ $\displaystyle=-\partial_{a_{z}}f(a_{z}(x,t),a_{y}(t))t,$
(144)
from where we obtain
$\partial_{a_{z}}f(a_{z},a_{y})=\frac{(\nu_{a_{z}}+\nu_{0}\nu_{b_{y}})+\nu_{a_{y}}\nu_{0}k_{BI}}{\nu^{2}_{0}},$
(145)
where we used $a_{y}=k_{BI}\nu_{0}a_{z}$ (143).
We can obtain the explicit expression for the constant $k_{BI}$ only for the
exceptional wave:
$k_{BI}=-\frac{(\nu_{a_{z}}+\nu_{0}\nu_{b_{y}})}{\nu_{a_{y}}\nu_{0}}.$ (146)
Using equation (143) we get
$a_{y}=-\frac{(\nu_{a_{z}}+\nu_{0}\nu_{b_{y}})a_{z}}{\nu_{a_{y}}}.$ (147)
In fact, the requirement of a constant phase velocity shows that the only
possible waves which can satisfy it are the exceptional waves. Moreover, it
also determines the explicit form of the originally free function $a_{y}$ for
all $x$.
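The cancellation behind the exceptional wave can be made concrete numerically: inserting the choice (147) into $f(a_{z},a_{y})$ of (138) removes the steepening term exactly, leaving the constant $-1/\nu_{0}$. A sketch with placeholder coefficients (illustrative values only):

```python
# illustrative coefficients (placeholders, not the paper's values)
nu0, nu_az, nu_by, nu_ay = 2.0, 0.3, 0.1, 0.2

def f(az, ay):
    # eq. (138)
    corr = (nu_az + nu0 * nu_by) * az + nu_ay * ay
    return -(1.0 / nu0) * (1.0 - corr / nu0)

def ay_exceptional(az):
    # eq. (147): the choice of a_y that removes the steepening term
    return -(nu_az + nu0 * nu_by) * az / nu_ay

for az in (0.0, 0.01, -0.2, 1.5):
    print(f(az, ay_exceptional(az)))  # approximately -1/nu0 for every az
```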
We get the final equation in the form:
$\partial_{t}a_{z}+f(\nu_{0})\partial_{x}a_{z}=0,$ (148)
subject to the initial condition $a_{z}(x)|_{t=0}=A_{0}(x)$, and where
$f(\nu_{0})=-\frac{1}{\nu_{0}}.$ (149)
The self–similar solution is
$a_{z}(x,t)=A_{0}(x_{0})=A_{0}\left[x+\frac{1}{\nu_{0}}t\right],$ (150)
where the velocity of propagation is the constant $-1/\nu_{0}$. The direction
of motion for the two solutions $\nu_{0}=\nu^{\pm}_{0}$ is given by the sign
of the velocities $\nu^{\pm}_{0}$. If $\nu^{\pm}_{0}>0$, the wave moves to the
left, otherwise it moves to the right along the $x$ axis. $\nu^{-}_{0}$ refers
to counter–propagating waves and $\nu^{+}_{0}$ to co–propagating waves (see
the results in the final summary IV.5 for detailed study of the direction of
propagation).
Selecting the physical phase velocities for our problem required finding the
constant ratio $a_{y}/a_{z}$, which is satisfied only for the exceptional
waves in the solutions. Additionally, this sets the free function $a_{y}$ to a
specific expression. Therefore we can claim that the first equation (124) only
has exceptional waves as physically relevant solutions for our case of
photon–photon scattering process and that the wave steepening does not take
place in this case. This is in agreement with the literature published thus
far on the existence of shock waves in the Born–Infeld theory.
### IV.3 Solutions of type II equations
The other type of equation in our set are equations (117) and (118), which we
call type II equations. Here we choose the first one (117) to investigate:
$\partial_{t}a_{z}+g(a_{z},b_{y},a_{y})\partial_{x}a_{z}=-\frac{\delta}{\alpha}\partial_{t}a_{y},$
(151)
where
$g(a_{z},b_{y},a_{y})=-\frac{1}{\alpha}\left\\{(1+\tau)\beta+\frac{{\rm
d}b_{y}}{{\rm d}a_{z}}\gamma\right\\}.$ (152)
We can rewrite this equation using the result for ${\rm d}b_{y}/{\rm d}a_{z}$
(134) as
$\displaystyle g(a_{z},b_{y},a_{y})$
$\displaystyle=-\frac{1}{\alpha}\left[(1+\tau)\beta+\gamma\nu\right].$ (153)
The main difference from the previous equation is the non–zero right hand side
which suggests the presence of a radiation source: a current determined by
$\partial_{t}a_{y}$. Since $\partial_{t}a_{y}\neq 0$, $a_{z}$ will not be
constant along the characteristics and in general, the characteristics will
not be straight lines. In what follows, we investigate when the wave breaking
might arise [100].
Equation (151) can be reduced to an ordinary differential equation by the
method of characteristics. This yields one characteristic equation,
$\frac{{\rm d}x}{{\rm d}t}=g(a_{z},b_{y},a_{y}),$ (154)
and equation (151) reduces then to
$\frac{{\rm d}a_{z}}{{\rm d}t}=-\frac{\delta}{\alpha}\partial_{t}a_{y}.$ (155)
Before we proceed further, we need to linearize the function
$g(a_{z},b_{y},a_{y})$ as
$\displaystyle g(a_{z},b_{y},a_{y})$
$\displaystyle=g_{0}+g_{a_{z}}a_{z}+g_{b_{y}}b_{y}+g_{a_{y}}a_{y},$ (156)
where the explicit coefficients $g_{0},g_{a_{z}},g_{b_{y}}$ and $g_{a_{y}}$
can be found in Appendix D. We linearize also the coefficient $\delta/\alpha$,
denoting it as $q$,
$\displaystyle
q=-\frac{\delta}{\alpha}=q_{0}+q_{a_{z}}a_{z}+q_{b_{y}}b_{y}+q_{a_{y}}a_{y},$
(157)
where the coefficients $q_{0},q_{a_{z}},q_{b_{y}}$ and $q_{a_{y}}$ are listed
in Appendix E.
#### IV.3.1 The Cauchy initial condition
Firstly, we analyze equation (151) without its right hand side. We can use the
relation $b_{y}=\nu_{0}a_{z}$ (134), obtaining
$\displaystyle g(a_{z},a_{y})$
$\displaystyle=g_{0}+(g_{a_{z}}+g_{b_{y}}\nu_{0})a_{z}+g_{a_{y}}a_{y}.$ (158)
The characteristic equations reduce to:
$\frac{{\rm d}x}{{\rm d}t}=g(a_{z},a_{y}),\;\frac{{\rm d}a_{z}}{{\rm d}t}=0,$
(159)
while the self–similar solution for $a_{z}$ reads
$a_{z}(x,t)=A_{0}(x_{0})=A_{0}[x-g(a_{z}(x,t),a_{y}(t))t],$ (160)
where $A_{0}$ is an arbitrary function subject to the initial condition
$a_{z}(x)|_{t=0}=A_{0}(x)$.
The envelope of characteristics becomes
$1=-\partial_{a_{z}}g(a_{z},a_{y})t,$ (161)
where
$\partial_{a_{z}}g(a_{z},a_{y})=(g_{a_{z}}+g_{b_{y}}\nu_{0})+g_{a_{y}}\nu_{0}k^{II}_{BI}.$
(162)
We have used the requirement for constant phase velocity (143), and a
different constant $k^{II}_{BI}$ for the type II equation. The wave breaks
forward if $\partial_{a_{z}}g(a_{z},a_{y})>0$ or backwards if
$\partial_{a_{z}}g(a_{z},a_{y})<0$; we get only exceptional waves if
$\partial_{a_{z}}g(a_{z},a_{y})=0$.
We can determine the constant $k^{II}_{BI}$ only for the exceptional waves
($\partial_{a_{z}}g(a_{z},a_{y})=0$), then we obtain the explicit expression
for $k^{II}_{BI}$ as
$k^{II}_{BI}=-\frac{(g_{a_{z}}+g_{b_{y}}\nu_{0})}{g_{a_{y}}\nu_{0}}.$
(163)
By using the expression (143) for $k^{II}_{BI}$, we obtain the final relation
between $a_{z}$ and $a_{y}$ via the constant factor
$a_{y}=-\frac{(g_{a_{z}}+g_{b_{y}}\nu_{0})}{g_{a_{y}}}a_{z}.$ (164)
The choice of a constant phase velocity leads to exceptional waves as the only
possibility. This also determines the explicit form of the originally free
function $a_{y}$ for all $x$. As a consequence, wave steepening does not occur.
We obtain the final equation in the form
$\partial_{t}a_{z}+g_{0}\partial_{x}a_{z}=0,$ (165)
where the explicit expression for $g_{0}$ (212) is listed in Appendix D.
The self–similar solution for $a_{z}$ reduces to
$a_{z}(x,t)=A_{0}(x_{0})=A_{0}[x-g_{0}t]\quad\forall x,$ (166)
where the velocity of propagation is the constant $g_{0}$ moving along the
$x-$axis. The direction of motion depends on the two solutions
$g_{0}=g^{\pm}_{0}$. For the solutions $g^{\pm}_{0}>0$, the wave moves to the
right and otherwise to the left along the $x-$axis, see the more detailed
discussion in the final summary IV.5.
Selecting the physical phase velocities for our problem from all phase
velocities of the type II solutions again requires the ratio $a_{y}/a_{z}$ to
be constant. This condition is satisfied only by the exceptional waves in our
solutions. Moreover, it sets the free function $a_{y}$ to a specific
expression.
#### IV.3.2 The solution with the right hand side
The characteristic equation (155) can be rewritten by substituting $b_{y}$
(134) and $a_{y}$ (164):
$\frac{{\rm d}a_{z}}{{\rm
d}t}=\left\\{q_{0}+[q_{a_{z}}-g_{a_{z}}+\nu_{0}(q_{b_{y}}-g_{b_{y}})]a_{z}\right\\}\partial_{t}a_{y}.$
(167)
The characteristic equation is
$\frac{{\rm d}a_{z}}{{\rm
d}t}-Ma_{z}\partial_{t}a_{y}=q_{0}\partial_{t}a_{y},$ (168)
where
$M=q_{a_{z}}-g_{a_{z}}+\nu_{0}(q_{b_{y}}-g_{b_{y}}).$ (169)
We solve the homogeneous part first and then find a particular solution for
the right–hand side. The solution of the homogeneous equation
$\frac{{\rm d}a^{0}_{z}}{{\rm d}t}=Ma^{0}_{z}\partial_{t}a_{y}$ (170)
is
$a^{0}_{z}(x,t)=ce^{M[a_{y}(t)-a_{y}(0)]},$ (171)
where $c\neq 0$ for all $x$. To find the particular solution we integrate over
$(0,t)$ for $t>0$ and $-\infty<x<\infty$, obtaining
$a^{p}_{z}(x,t)=-\frac{q_{0}}{M}.$ (172)
We obtain the general solution of equation (155) by combining equations (171)
and (172):
$a^{c}_{z}(x,t)=-\frac{q_{0}}{M}+ce^{M[a_{y}(t)-a_{y}(0)]}\quad$ (173)
for all $x$ and the real constant $c$.
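The general solution (173) can be verified by direct substitution into the characteristic equation (168). The Python sketch below uses placeholder constants and a hypothetical smooth choice for the free function $a_{y}$:

```python
import math

# illustrative constants (placeholders)
q0, M, c = 0.4, 0.8, 1.2

def a_y(t):
    # the free function a_y: a hypothetical smooth choice
    return math.sin(t)

def a_z(t):
    # general solution, eq. (173)
    return -q0 / M + c * math.exp(M * (a_y(t) - a_y(0.0)))

def residual(t, h=1e-5):
    # d a_z/dt - M a_z a_y'(t) - q0 a_y'(t), cf. eq. (168)
    dadt = (a_z(t + h) - a_z(t - h)) / (2 * h)
    dayt = (a_y(t + h) - a_y(t - h)) / (2 * h)
    return dadt - M * a_z(t) * dayt - q0 * dayt

worst = max(abs(residual(0.1 * k)) for k in range(1, 30))
print(worst)  # numerically zero
```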
We assemble the final solution by combining the solutions of
equations (159), the initial–value solution (166) and the previous general
solution. This results in
$a^{f}_{z}(x,t)=A_{0}(x_{0})-\frac{q_{0}}{M}+ce^{M[a_{y}(t)-a_{y}(0)]},$ (174)
where we get $x=g_{0}t+x_{0}$ from the first characteristic equation (154).
The final solution was obtained by integrating the coupled ordinary
differential equations (154) and (155). The initial value problem with data
$a_{z}(x)=A_{0}(x)$ for $t=0$ is now modified by a constant
$a^{f}_{z}(x)=A_{0}(x)-q_{0}/M$ for $t=0$.
Let us write explicitly the function $g$ and the condition for an envelope of
characteristics, respectively:
$\displaystyle g(a_{z},a_{y})$ $\displaystyle=g_{0}$
$\displaystyle+(g_{a_{z}}+g_{b_{y}}\nu_{0})[A_{0}(x_{0})-\frac{q_{0}}{M}+ce^{M[a_{y}(t)-a_{y}(0)]}]$
$\displaystyle+g_{a_{y}}a_{y}$ (175)
and
$1=-(g_{a_{z}}+g_{b_{y}}\nu_{0})\partial_{x_{0}}g(a_{z},a_{y})t.$ (176)
In the above,
$\partial_{x_{0}}g(a_{z},a_{y})=-\frac{1}{(g_{a_{z}}+g_{b_{y}}\nu_{0})}\partial_{x_{0}}A_{0}(x_{0}),$
(177)
and
$\partial_{x_{0}}{A_{0}(x_{0})}=0,$ (178)
since $x=g_{0}t+x_{0}$ and $g_{0}$ is a constant. Therefore, we claim again
that the type II equation (151) only has exceptional waves as physically
relevant solutions and wave steepening does not occur.
The interpretation of these results is that the source term $a_{y}$ on the
right hand side of equation (151), even in the form (172), is too weak to
create a strong shock wave; therefore no shock can be produced [100].
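The absence of wave steepening can also be seen geometrically: since $g_{0}$ in the characteristic equation (154) is a constant, the characteristics $x=g_{0}t+x_{0}$ are parallel straight lines that never cross, so no envelope (and hence no shock) can form. A minimal numerical illustration, with an arbitrary illustrative value for $g_{0}$:

```python
import numpy as np

g0 = -0.8                            # illustrative constant slope (cf. Fig. 7: g0 < 0)
x0 = np.linspace(-1.0, 1.0, 11)      # characteristic starting points
t = np.linspace(0.0, 5.0, 101)

X = g0 * t[:, None] + x0[None, :]    # x(t; x0) = g0*t + x0, eq. (154)

# The spacing between neighbouring characteristics is constant in time:
# parallel lines never intersect, so no envelope of characteristics exists.
spacing = np.diff(X, axis=1)
assert np.allclose(spacing, spacing[0])
```

This is in contrast to a genuinely nonlinear characteristic speed $g(a_{z},a_{y})$, where the spacing would shrink and an envelope (gradient catastrophe) could form.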
To summarize, for the solutions of type I and type II equations we have
restricted the phase velocities to physically consistent quantities. This
allowed us to obtain the limit values for the photon–photon scattering process
in Born electrodynamics. In the process we needed to find the constant ratio
$a_{y}/a_{z}$. We have shown that this requirement is satisfied only by the
exceptional waves in our solutions. This also determines the free function
$a_{y}$ to a specific expression characteristic of each type of equations (I
or II).
In other words, the only physically relevant solutions to equations of type I
or II are exceptional waves where wave steepening does not occur. The results
are in full agreement with the published literature about the exceptional
waves which do not turn into shocks, [103, 36, 78], which is connected to the
absence of birefringence in the Born–Infeld electrodynamics [80, 79].
### IV.4 The constant physical phase velocities
We need to determine which solutions correspond to the two situations:
1. case $-$: the mutually interacting, counter–propagating waves in which the
photon–photon scattering can happen.
2. case $+$: the co–propagating, non-interacting waves where photon–photon
scattering does not occur.
It is enough to focus on the sign of the phase velocities and on whether they
approach, with increasing $E_{0}$, one of the constant values obtained for
Born electrodynamics, $v_{1}$ (34) and $v_{2}$ (35) (see Section III and the
brief review at the beginning of Subsubsection IV.2.2). We discuss our
solutions with respect to the two beam orientations, $-$ and $+$: we plot the
numerical values of the phase velocities in Mathematica and look for the
matching limiting value, $v_{1}$ or $v_{2}$, for the background field; see the
summary in Section IV.5.
The phase velocities originating from the first two equations in the set (106)
are the phase velocities $v_{1,2}$. We need to evaluate the ratio
$a_{y}/b_{y}$ and plot $v_{1,2}=v_{1,2}(a_{y}/b_{y})$ as a function of
$E_{0}$.
Using the expression for exceptional waves (147) and equation (134), we obtain
the ratio $a_{y}/b_{y}$ as
$\cfrac{a_{y}}{b_{y}}=-\frac{\nu_{a_{z}}+\nu_{0}\nu_{b_{y}}}{\nu_{0}\nu_{a_{y}}},$
(179)
which is a constant determined by the coefficients $\nu_{0}$, $\nu_{a_{z}}$,
$\nu_{b_{y}}$ and $\nu_{a_{y}}$, each of which can take two possible values.
We start with the case $v_{1,2}$ and its coefficients $\nu_{0}$,
$\nu_{a_{z}}$, $\nu_{b_{y}}$ in order to visualize
$v_{1,2}=v_{1,2}(a_{y}/b_{y})$ (113) and determine the two cases (the counter–
and co–propagating cases). The expression for the velocities $v_{1,2}$ becomes
constant using equation (179) as
$\displaystyle
v_{1,2}=\frac{M\pm\sqrt{M^{2}+4\alpha_{0}\gamma_{0}}}{2\alpha_{0}},$ (180)
where
$M=(1+\tau_{0})\beta_{0}+\cfrac{\nu_{a_{z}}+\nu_{0}\nu_{b_{y}}}{\nu_{0}\nu_{a_{y}}}\delta_{0}$.
Explicitly, using equation (179) and the two possible values of
$\nu^{\pm}_{0}$ (207), $\nu^{\pm}_{a_{z}}$ (C), $\nu^{\pm}_{b_{y}}$ (210) and
$\nu^{\pm}_{a_{y}}$ (211), the velocities $v_{1,2}$ become:
$\displaystyle v^{\pm}_{1}=\frac{M_{1}+\sqrt{M_{1}^{2}+4\alpha_{0}\gamma_{0}}}{2\alpha_{0}},$ (181)
where
$M_{1}=\left(1+\tau_{0}\right)\beta_{0}+\left(\cfrac{\nu_{a_{z}}+\nu_{0}\nu_{b_{y}}}{\nu_{0}\nu_{a_{y}}}\right)^{\pm}\delta_{0}$.
Moreover, the phase velocities $v^{\pm}_{2}$ become
$\displaystyle v^{\pm}_{2}=\frac{M_{2}-\sqrt{M_{2}^{2}+4\alpha_{0}\gamma_{0}}}{2\alpha_{0}},$ (182)
where
$M_{2}=(1+\tau_{0})\beta_{0}+\left(\cfrac{\nu_{a_{z}}+\nu_{0}\nu_{b_{y}}}{\nu_{0}\nu_{a_{y}}}\right)^{\pm}\delta_{0}$.
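The limiting behaviour of $v_{1,2}$ can be checked numerically from the background-field coefficients of Appendix A alone. In the sketch below the ratio $r=a_{y}/b_{y}$ is kept as a free input, since evaluating (179) itself requires the $\nu$-coefficients of the appendices; in the Born limit $E_{0}\to 0$ the velocities reduce to $\pm 1$, the speed of light:

```python
import math

def coeffs(E0, b=1.0):
    """Background-field coefficients alpha0, beta0, gamma0, delta0, tau0
    from Appendix A (eqs. 194-197)."""
    x = (E0 / b) ** 2
    C = 1.0 - x - x * x                               # eq. (197)
    alpha0 = 1.0 + x / C
    beta0 = x / C
    gamma0 = (1.0 - x) - x * ((1.0 - x) ** 2 + x) / C
    delta0 = x / C
    tau0 = 1.0 - x
    return alpha0, beta0, gamma0, delta0, tau0

def v12(E0, r, b=1.0):
    """Phase velocities (180) for a given constant ratio r = a_y/b_y."""
    a0, b0, g0, d0, t0 = coeffs(E0, b)
    M = (1.0 + t0) * b0 + r * d0
    root = math.sqrt(M * M + 4.0 * a0 * g0)
    return (M + root) / (2.0 * a0), (M - root) / (2.0 * a0)

v1, v2 = v12(0.0, r=0.0)       # Born limit: v1 = +1, v2 = -1
```

For small $E_{0}/b$ the two branches stay close to $+1$ and $-1$, consistent with the positive and negative limiting values shown in figures 1 and 2.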
The phase velocities $v^{\pm}_{1}$ and $v^{\pm}_{2}$ are plotted in Fig. 1 and
Fig. 2. The phase velocities $v^{+}_{1}$ or $v^{-}_{1}$ seem to correspond to
the counter–propagating case ($-$) because they approach the value of the
phase velocity (34) which has a maximum of the speed of light $c=1$.
Furthermore, $v^{+}_{2}$ or $v^{-}_{2}$ correspond to the co–propagating case
($+$) because they approach the value of the phase velocity (35) which is
$-1$. In the figures we use $E_{0}$ normalized to the Schwinger limit $E_{S}$
($b=10^{-3}$) in order to see the positive and negative values of the phase
velocities. The number for the parameter $b$ is chosen conveniently to
demonstrate the phase velocities visually.
Figure 1: The phase velocity $v^{\pm}_{1}$. The phase velocity $v^{\pm}_{1}$
corresponds to the counter–propagating case, i.e. it is positive
($v^{\pm}_{1}>0$), finite, and approaches a constant value. Figure 2: The
phase velocity $v^{\pm}_{2}$. The phase velocity $v^{\pm}_{2}$ seems to
correspond to the co–propagating case, i.e. it is negative ($v^{\pm}_{2}<0$)
and finite.
Even though we plot the different expressions for $v^{\pm}_{1}$, these have
almost the same dependence on $E_{0}$ and, furthermore, are positive.
Therefore they correspond to the counter–propagating case $-$.
The near-identical behaviour of the overlapping curves in each of figures 1
and 2 can be attributed to the ratio $a_{y}/b_{y}$, which we discuss in the
next paragraph. We observe some numerical fluctuations as $E_{0}$ increases;
these are consequences of the linear approximation of the coefficients
performed in our calculation.
Interestingly, if we visualize the ratio $a_{y}/b_{y}$ given by the constant
expression (179) as it depends on $E_{0}$, we obtain Fig. 3.
Figure 3: The ratio ${a_{y}/b_{y}}^{\pm}$ (179) is visualized as it depends on
$E_{0}$.
The ratio ${a_{y}/b_{y}}^{\pm}$ goes to zero very quickly as a function of
$E_{0}$, through negative values for ${a_{y}/b_{y}}^{-}$ and positive values
for ${a_{y}/b_{y}}^{+}$. The contribution of this term is therefore relevant
only early in the interaction. This also means that the source $a_{y}$ is
small and too weak to influence the wave development significantly.
We continue with the case $v_{3,4}$. These phase velocities originate from the
first and the third equations in the set (106). Below we present
visualizations of $v_{3,4}=v_{3,4}(a_{y}/b_{y})$ (115) to determine which
velocity corresponds to which case (counter– or co–propagating).
We obtain the ratio $a_{y}/b_{y}$ from the expression for exceptional waves
(164) and the relation for $b_{y}$ (134),
$\cfrac{a_{y}}{b_{y}}=-\cfrac{g_{a_{z}}+\nu_{0}g_{b_{y}}}{\nu_{0}g_{a_{y}}},$
(183)
which is determined by the constants
$g^{\pm}_{a_{z}},g^{\pm}_{a_{y}},g^{\pm}_{b_{y}}$ (D) and the two values of
$\nu^{\pm}_{0}$ (207), $\nu^{\pm}_{a_{z}}$ (C), $\nu^{\pm}_{b_{y}}$ (210), and
$\nu^{\pm}_{a_{y}}$ (211).
The velocities $v_{3,4}$ become constant using equation (183). The phase
velocities $v^{\pm}_{3}$ and $v^{\pm}_{4}$ can each take two possible values,
owing to the two possible values of these constants. The two possible values
of $v^{\pm}_{3}$ can be expressed as
$\displaystyle v^{\pm}_{3}=\frac{N_{3}+\sqrt{N_{3}^{2}+4\epsilon_{0}\zeta_{0}}}{-2\epsilon_{0}},$ (184)
where
$N_{3}=\eta_{0}+\theta_{0}+\left(\cfrac{g_{a_{z}}+\nu_{0}g_{b_{y}}}{\nu_{0}g_{a_{y}}}\right)^{\pm}\iota_{0}$.
The velocities $v^{\pm}_{4}$ become
$\displaystyle v^{\pm}_{4}=\frac{N_{4}-\sqrt{N_{4}^{2}+4\epsilon_{0}\zeta_{0}}}{-2\epsilon_{0}},$ (185)
where
$N_{4}=\eta_{0}+\theta_{0}+\left(\cfrac{g_{a_{z}}+\nu_{0}g_{b_{y}}}{\nu_{0}g_{a_{y}}}\right)^{\pm}\iota_{0}$.
The phase velocities $v^{\pm}_{3}$ and $v^{\pm}_{4}$ are plotted in figures 4
and 5. They seem to correspond to the co–propagating case since their values
are negative. In the graphs, $E_{0}$ is normalized to the Schwinger limit
$E_{S}$ ($b=10^{-3}$) in order to see the positive and negative values of the
phase velocities.
The near-identical behaviour of the overlapping curves in each of figures 4
and 5 can be attributed to the ratio $a_{y}/b_{y}$. In this case its
contribution is negligible and the curves remain linear.
Figure 4: The physical phase velocity $v^{\pm}_{3}$. The values are negative
for both cases $v^{\pm}_{3}$. Figure 5: The physical phase velocity
$v^{\pm}_{4}$. The values are negative for both cases $v^{\pm}_{4}$.
### IV.5 Summary
In this section we summarize the main results and discuss the direction of
propagation of the resulting waves, based on the investigation in Section
IV.4 above.
#### IV.5.1 Summary of the solutions
The type I equation (124) has the form
$\partial_{t}a_{z}+f(\nu_{0})\partial_{x}a_{z}=0,$ (186)
and is subject to the initial condition $a_{z}(x)|_{t=0}=A_{0}(x)$. In
equation (186),
$f(\nu_{0})=-\frac{1}{\nu_{0}}.$ (187)
The self–similar solution is given by
$a_{z}(x,t)=A_{0}(x_{0})=A_{0}\left[x+\frac{1}{\nu_{0}}t\right],$ (188)
where the velocity of propagation is the constant $-1/\nu_{0}$.
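The self–similar form can be verified directly by substitution; a short symbolic sketch, with $A_{0}$ an arbitrary (unspecified) initial profile:

```python
import sympy as sp

# Check that a_z(x, t) = A_0(x + t/nu0)   [eq. (188)] solves the
# type I equation (186):  d_t a_z + f(nu0) d_x a_z = 0, f = -1/nu0.
x, t = sp.symbols('x t')
nu0 = sp.symbols('nu0', nonzero=True)
A0 = sp.Function('A0')                 # arbitrary initial profile

a_z = A0(x + t / nu0)
residual = sp.diff(a_z, t) - (1 / nu0) * sp.diff(a_z, x)

assert sp.simplify(residual) == 0      # travelling wave, speed -1/nu0
```

Both derivatives produce the same factor $A_{0}'(x+t/\nu_{0})/\nu_{0}$, so the residual cancels for any profile, confirming that the wave translates rigidly without steepening.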
The direction of motion is given by the two values $\nu_{0}=\nu^{\pm}_{0}$
(207), visualized in Fig. 6, where we have normalized $E_{0}$ to the Schwinger
limit $E_{S}$ and used $b=10^{-3}$. We observe that $\nu^{-}_{0}>0$, so this
wave moves to the right along the $x$ axis and corresponds to the
counter–propagating case $-$ of the two beams, while $\nu^{+}_{0}<0$, so that
wave moves to the left along the $x$ axis and corresponds to the
co–propagating case $+$. The results align with those of the Born theory, see
Section III.
Figure 6: The coefficients $\nu^{\pm}_{0}$ visualized as a function of
$E_{0}.$
Type II of equations (151) have the form
$\partial_{t}a_{z}+g^{\pm}_{0}\partial_{x}a_{z}=q\partial_{t}a_{y},$ (189)
where the explicit expression for $g^{\pm}_{0}$ is given by equation (212),
and the final singular solution with $x=g^{\pm}_{0}t+x_{0}$ has the form
$a^{f}_{z}(x,t)=A_{0}[x-g^{\pm}_{0}t]-\frac{q_{0}}{M}+ce^{M[a_{y}(t)-a_{y}(0)]}$
(190)
for all $x$ and real constant $c$, where the velocity of propagation along the
$x$-axis is the constant $g_{0}$. The direction of motion depends on the two
solutions $g_{0}=g^{\pm}_{0}$. Both satisfy $g^{\pm}_{0}<0$, so the wave moves
to the left along the $x$-axis, see Fig. 7.
Figure 7: The coefficients $g^{\pm}_{0}$ as a function of $E_{0}$. The
horizontal line at the value $-1$ is shown for comparison and represents the
phase-velocity limit for the co–propagating beams in Born electrodynamics.
The solution was obtained by integrating the coupled ordinary differential
equations (154) and (155) together with the initial data $a_{z}(x)=A_{0}(x)$
for $t=0$. The final solution also contains the particular solution which
modifies the former by an additional constant $a^{f}_{z}(x)=A_{0}(x)-q_{0}/M$
for $t=0$.
## V The cross–section
The motivation of this paper has been a deeper understanding of photon–photon
scattering in Born–Infeld electrodynamics, which also contributes to the
effective cross–section.
In our previous papers [1, 2] we proposed an experiment for direct detection
of the photon–photon scattering by detecting the gamma-rays coming from the
electron-positron pair formation in the secondary processes generated on the
shock wave fronts. Such an experiment might also enable us to study the
Born–Infeld (BSM) contribution to the process, since the phase shift and the
cross–section could be measured together in the proposed experiment.
In [1, 2] we discussed the subsequent production of electron–positron pairs in
the photon–photon scattering process in Heisenberg–Euler electrodynamics, in
the low energy photon approximation $\omega\ll m$, at the shock wave fronts
where the approximation is no longer valid. Our results are therefore limited
to this low energy regime and lose their validity as we approach the Schwinger
limit $E_{S}$.
When the low energy photon approximation breaks down in QED, the photon–photon
interaction can result in the creation of real electron–positron pairs via the
Breit–Wheeler process [105], thanks to the saturation of the wave steepening
and the electromagnetic shock wave formation. Reaching the energies for
electron–positron generation requires much lower laser intensities than
reaching the Schwinger field $E_{S}$; such intensities can be achieved in the
near future at ELI.
In Born–Infeld electrodynamics, the shock wave fronts do not develop but the
exceptional waves contribute to the outgoing radiation. The contribution is
visible from the explicit cross–section for the photon–photon scattering.
The cross–section for low energy photon–photon scattering in BI and QED
(unpolarized initial states with summation over final polarizations) [106, 56]
is
$\displaystyle\sigma_{\gamma\gamma}$
$\displaystyle=\left(\frac{1}{64b^{4}}+\frac{11\alpha^{2}}{720b^{2}m^{4}}+\frac{139\alpha^{4}}{32400m^{8}}\right)\frac{\omega^{6}}{\pi^{2}}\left(3+\cos^{2}\theta\right)^{2},$
(191)
where the expression depends on the scattering angle $\theta$ and the photon
frequency $\omega$, the fermion mass $m$ and the fine structure constant
$\alpha$.
The total cross–section is given by
$\displaystyle\sigma^{tot}_{\gamma\gamma}$
$\displaystyle=\left(\frac{7}{20b^{4}}+\frac{77\alpha^{2}}{225b^{2}m^{4}}+\frac{973\alpha^{4}}{10125m^{8}}\right)\frac{\omega^{6}}{\pi}.$
(192)
The additional terms with the free Born–Infeld parameter $b$ in the formulae
signify the additive character of the photon–photon process in the Born–Infeld
electrodynamics and can be seen as a contribution from the beyond standard
model (BSM) particles.
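The relative size of the three terms in (192) is easy to inspect numerically. The sketch below works in natural units with the electron mass set to $m=1$, so that the Born–Infeld parameter $b$ is measured in units of $m^{2}$ (an assumption made here for illustration only; the overall factor $\omega^{6}/\pi$ is omitted):

```python
import math

ALPHA = 1 / 137.035999          # fine-structure constant

def bracket_terms(b):
    """The three terms of the bracket in eq. (192): pure Born-Infeld,
    interference, and pure QED (natural units, m = 1)."""
    born_infeld = 7.0 / (20.0 * b ** 4)
    interference = 77.0 * ALPHA ** 2 / (225.0 * b ** 2)
    pure_qed = 973.0 * ALPHA ** 4 / 10125.0
    return born_infeld, interference, pure_qed

bi, mix, qed = bracket_terms(1.0)
# The b-dependent terms fall off as b^-4 and b^-2, so only the pure QED
# contribution survives in the limit b -> infinity.
```

This makes the additive character explicit: any finite $b$ raises the total cross–section above the pure QED value, which is what a measurement sensitive to BSM contributions would look for.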
According to the cross–section formula (191), we can expect that the
cross–section $\sigma_{\gamma\gamma\rightarrow e_{-}e_{+}}$ will include
additive terms with the parameter $b$ (to our knowledge, such a cross–section
has not been published in the literature).
Therefore we might expect a contribution to the electron–positron pair
production from BSM physics, in our case from the Born–Infeld part, as well as
a contribution to the subsequent emission of gamma-ray photons leading to the
electron-positron avalanche via the multiphoton Breit-Wheeler mechanism [107].
To support our statement, it was also shown [108] that the self–similar
solutions in Born–Infeld produce an electron-positron avalanche.
It would also be interesting to investigate the contributions from other
non–standard models, together with scenarios involving minicharged particles
or axion-like bosons in BSM physics.
## VI On experimental differentiation of Born–Infeld and Heisenberg–Euler
theories
There is experimental interest in testing QED and non–standard models such as
the Born–Infeld theory, as well as scenarios involving minicharged particles
or axion–like bosons [71]. For example, the PVLAS experiment [72] has obtained
limits not only on the parameter $b$ in the Born–Infeld model, but also on the
existence of axion-like and milli-charged particles. Furthermore, it has set
upper limits on the magnetic birefringence predicted by QED.
It is better to discuss the experimental estimates in terms of the effective
Lagrangian formed by the two theories, Born–Infeld and Heisenberg–Euler,
because a quantized version of the Born–Infeld theory is missing.
Furthermore, it is hard to predict any connection to the real world beyond
the connection to string theory, which does not help in this context.
Part of the testing of QED is also the possibility to distinguish between the
two theories, Born–Infeld (and other non–standard models together with
scenarios involving minicharged particles or axion-like bosons) and
Heisenberg-Euler, by precision test experiments. The effective Lagrangian is
defined as
$\displaystyle\mathcal{L}_{eff}$
$\displaystyle\simeq-{\mathfrak{F}}+\zeta_{L}{\mathfrak{F}}^{2}+\zeta_{T}{\mathfrak{G}}^{2},$
(193)
where the parameters $\zeta_{L}$ and $\zeta_{T}$ are constants, $\zeta_{L}$
corresponding to the QED theory and $\zeta_{T}$ to the Born–Infeld theory.
In order to distinguish between the two theories we need to measure the two
parameters independently. Previous experiments [72] were sensitive only to the
difference $|4\zeta_{T}-7\zeta_{L}|$ and therefore were unable to set a
constraint on a pure Born–Infeld theory.
The search for photon–photon scattering in vacuum can be carried out by
measuring phase shifts and ellipticities. These can be used to determine both
coefficients, $\zeta_{L}$ and $\zeta_{T}$ [19], in a setup of two
counter–propagating waves in which one of them is an ultra high power beam.
As a result, it will be possible to determine precision estimates for
$\zeta_{T}$ and $\zeta_{L}$, and to estimate the upper and lower bounds of the
QED parameter $\kappa$ and the Born–Infeld free parameter $b$. We note in
passing that the phase shift in Born–Infeld is naturally zero because of the
absence of birefringence.
To summarize, the complete test of all the parameters appearing in the low
energy effective Lagrangian could be done, including the parameter for the
Born–Infeld term, thanks to the availability of PW–class lasers. The
experiments could be performed at HERCULES [15, 74], at the ZEUS laser [75],
at LUXE [76] at DESY, at the ELI facility [77] or at the future $100$ PW laser
at SIOM [27], thus providing a new class of precision tests of the Standard
Model and beyond.
## VII Conclusion
We have investigated the problem of nonlinear wave evolution in a quantum
vacuum in the important framework of Born–Infeld electrodynamics. We have been
looking for a detailed theoretical description of the electromagnetic shock
wave formation and its possible absence in a nonlinear quantum vacuum. We have
investigated the two counter–propagating waves in the framework of Born–Infeld
electrodynamics: a problem that describes the finite amplitude electromagnetic
wave counter-propagating with respect to the crossed electromagnetic field for
two linearly (Born) and nonlinearly polarized waves (Born–Infeld). This study
has been motivated by our previous work on photon–photon scattering in vacuum
[1, 2, 90] in the Heisenberg–Euler approximation; there we investigated the
simpler problem of two linearly polarized waves.
For the linearly polarized waves, which correspond to the crossed field
configuration (${\bf E}\cdot{\bf B}=0$, i.e. $\mathfrak{G}^{2}=0$), the
Born–Infeld Lagrangian reduces to the Born Lagrangian as its special subcase.
We have investigated the field equations of Born electrodynamics, which are
identical to the equations for the Born–Infeld electrodynamics for the crossed
field ${\bf E}\cdot{\bf B}=0$ (and hence referred to as Born). In general, the
term $\mathfrak{G}^{2}$ is of the fourth order in $F_{\mu\nu}$ and therefore
can be neglected except in the immediate neighbourhood of singularities, far
away from the creation of shock wave fronts [93].
Let us mention that there is a similarity with exact solutions of Einstein’s
equations called gyratons [109, 110, 111, 112, 113] which describe a
gravitational field of a spinning beam of light. The beam’s metric terms
$g_{ui}$ can be set to zero locally using a gauge transformation, but they
cannot be removed globally because the gauge invariant contour integral $\oint
g_{ui}(u,x^{i})\,{\rm d}x^{i}$ around the position of the gyraton
(singularity) is proportional to the nonvanishing angular momentum density
$j_{i}$.
We have solved the Born field equations analytically assuming the solution in
the form of a simple wave. We added the small amplitude perturbations and
linearized the coefficients to study the singularity formation. We have shown
that the system of equations decoupled for the ordinary wave. The solutions
have the form of a nonlinear wave without dispersion in the linear
approximation.
We have presented and analyzed the analytical solutions in the Born theory for
the $+$ and $-$ solutions. These correspond to the counter–propagating waves
($-$ case) and co–propagating waves ($+$ case). We have presented the
analytical formulae for both cases and have shown explicitly that the only
solutions for both cases are exceptional waves.
We have analyzed the wave breaking in detail. For both cases, the wave
steepening factor reduces to zero ($f^{\prime\pm}=0$), therefore only
exceptional waves are the solutions. In the $-$ case, for the
counter–propagating waves, these interact with each other and thus the
photon–photon scattering process takes place. This results in the exceptional
wave propagating in the forward direction with constant phase velocity. In the
$+$ case, the co–propagating waves do not interact with each other and no
photon–photon scattering occurs. The resulting exceptional wave propagates in
the backward direction with constant phase velocity. This exceptional wave is
not a real physical contribution to the outgoing radiation from the
photon–photon scattering process in Born electrodynamics.
We have shown explicitly that the only solutions for both cases are
exceptional waves: traveling wave solutions which propagate with constant
speed and which do not turn into shocks [103, 36, 78, 81].
The existence of only exceptional waves is fully consistent with the known
literature. In comparison to the Heisenberg–Euler approximation [1, 2], where
the shock wave development takes place and has backward character, the shock
wave development does not occur in Born electrodynamics for our problem of two
counter–propagating laser beams.
To investigate the problem with the Born–Infeld Lagrangian, we have extended
our previous study of linearly polarized beams to the more general case of
nonlinearly polarized beams (i.e. ${\bf E}\cdot{\bf B}\neq 0$, where the term
$\mathfrak{G}^{2}$ is non–vanishing). We have also investigated the
extraordinary wave propagation. We have assumed the simplest generalization of
our setup in order to solve the field equations and investigate the nonlinear
wave evolution. The setup is thus rather mathematical but has enabled us to
solve the field equations and investigate the shock wave development in the
Born–Infeld framework. Furthermore we have discussed the contribution to the
outgoing radiation in the proposed experiment aimed at the direct detection of
the photon–photon scattering.
We have added weak linear amplitude corrections and have linearized the
coefficients to study the singularity formation. We have obtained a set of
three equations for two variables ($a_{z}(x,t)$ and $b_{y}(x,t)$) together
with a free given function $a_{y}(t)$. We have shown that the field equations
for Born–Infeld decouple and can be solved, which is one of the main results
of this paper.
Further, we have analyzed the equations by the method of characteristics. This
has enabled us to discuss the possible shock wave development and to analyze
the direction of motion of the resulting waves. The set of equations consists
of two types: the type I equation is the nonlinear wave equation and the type
II equations are the nonlinear wave equations with non-zero right hand side.
The equations contain information about the ordinary and extraordinary waves
in the vacuum. The equation of type I corresponds to the ordinary wave, the
type II equations with $a_{y}$ on the right hand side correspond to the
development of the extraordinary wave. The waves propagate in the same
direction thanks to the absence of birefringence in the Born–Infeld
electrodynamics, but with different phase velocities.
Through the analysis of shock wave development by the method of
characteristics we have found the following properties:
In the type I equation we have shown that the requirement on the phase
velocities to be constant (physically relevant) causes that the only relevant
solutions of the equation are the exceptional waves. The nonlinear form of the
resulting equation agrees with our results in Born theory [104] and the
Heisenberg–Euler approximation [1, 2].
The type II equation with the right hand side $\partial_{t}a_{y}$ also gives
only exceptional waves as solutions. We have shown that the only shock wave
which exists in Born–Infeld is the one given as the initial condition, and
that it simply propagates further. The interpretation is that the source given
by $a_{y}$ on the right hand side of equation (151), in the form of the
function (172), is too weak to create a strong shock wave; therefore a shock
cannot be produced [100].
We have analyzed and plotted the phase velocities and have identified their
directions of propagation. The phase velocities originating from the first two
equations in the set (106) are the phase velocities $v^{\pm}_{1,2}$. The phase
velocities $v^{\pm}_{1}$ and $v^{\pm}_{2}$ are plotted in figures 1 and 2. The
phase velocities $v^{\pm}_{1}$ seem to correspond to the counter–propagating
waves and the phase velocities $v^{\pm}_{2}$ correspond to the co–propagating
waves. The phase velocities $v^{\pm}_{3,4}$ originate from the first and the
third equations in the set (106). The phase velocities $v^{\pm}_{3}$ and
$v^{\pm}_{4}$ are plotted in figures 4 and 5. The phase velocities
$v^{\pm}_{3}$ and $v^{\pm}_{4}$ seem to correspond to the co–propagating waves
since their values are negative.
We have analyzed the direction of propagation of the exceptional solutions.
The solution of the type I equation (124), where the velocity of propagation
is the constant $-1/\nu_{0}$, moves in the direction given by the two
solutions $\nu^{\pm}_{0}$. We have observed that $\nu^{-}_{0}$ has positive
values, therefore the wave moves to the right along the $x$ axis and
corresponds to the counter–propagating case of the two beams. We have observed
that $\nu^{+}_{0}$ has negative values, therefore the wave moves to the left
along the $x$ axis and the solutions correspond to the co–propagating case.
The direction of motion of the type II solutions (151) depends on the two
solutions $g^{\pm}_{0}$. Both values are negative, and thus the wave moves to
the left along the $x$-axis.
The solutions have the form of nonlinear waves without dispersion in the
linear approximation; we have shown that the only physically relevant
solutions are the exceptional waves which do not turn into shocks.
To summarize the solutions of type I and type II equations: the only
physically relevant solutions are exceptional waves; wave steepening does not
occur in either case. Upon choosing the physical phase velocities from all
possible phase velocities, we needed the ratio $a_{y}/a_{z}$ to be constant.
We have shown that, in our solutions, such a requirement is satisfied only by
the exceptional waves. This also fixes the free function $a_{y}$ to a specific
expression, characteristic of each type of equation (I or II).
These results, which constitute another main result of the paper, are in full
agreement with the published literature on exceptional waves that do not turn
into shocks [103, 36, 78], which is connected to the absence of birefringence
in Born–Infeld electrodynamics [80, 79].
We have reviewed the cross–section for the photon–photon scattering in
Born–Infeld electrodynamics. The cross–section for the low energy
photon–photon scattering in the Born–Infeld and QED contains additional terms
with the free Born–Infeld parameter $b$, which signifies the additive
character of the photon–photon process in the Born–Infeld electrodynamics.
This can be seen as a contribution from the beyond standard model (BSM)
particles. Similarly, we can expect that the cross–section
$\sigma_{\gamma\gamma\rightarrow e_{-}e_{+}}$ will include additive terms with
the parameter $b$. Therefore we might expect a contribution to the
electron–positron pair production from BSM physics, in our case from the
Born–Infeld part, and a contribution to the subsequent emission of gamma-ray
photons (leading to the electron-positron avalanche) thanks to the multiphoton
Breit-Wheeler mechanism [107].
In other words, we can say that our recent proposal for the direct detection
of photon–photon scattering might be used to study the contributions from BSM
physics (represented by the Born–Infeld electrodynamics). Additionally, we
might get new experimental estimates for the parameters in QED and other
non–standard models, together with scenarios involving minicharged particles
or axion-like bosons in BSM physics.
Finally, let us mention that there is an interest in differentiating between
the two electrodynamics: Born–Infeld (and other non–standard models, together
with scenarios involving minicharged particles or axion-like bosons) and
Heisenberg–Euler. This could be achieved by precision test experiments using
the effective Lagrangian. The measurements would be based on the phase shifts
and the ellipticities of the colliding laser beams. Such measurements could
presently be done with high precision at PW laser facilities such as ELI
Beamlines or the ZEUS laser. Alternatively, the measurement of the QED
refraction indices might be possible with large scale laser interferometers
such as LIGO, GEO or VIRGO [114].
The measurement of parameter $b$ in the Born–Infeld theory will fulfill the
long-standing need to determine the free Born–Infeld constant. In the
introduction we have reviewed all the possible experiments and fields in which
such measurement was proposed and whether the value $b$ was estimated as an
upper or lower bound. The assumption that the numerical value of $b$ should be
close to the finite value of the electromagnetic energy of the electron still
holds.
These theoretical and experimental studies, like the one in this paper, are
important from the fundamental physics point of view. Heisenberg–Euler
electrodynamics (QED) is considered to reflect the reality of our world better
than the alternative, nonlinear Born–Infeld electrodynamics. In the classical
vacuum, the classical Born–Infeld electrodynamics with point charges is
well–defined and does not suffer from the UV–divergence problems that
Heisenberg–Euler QED has in the quantum vacuum. A Born–Infeld quantization
does not exist at the moment, and even if it did, it might not be
UV–divergence–free and might not be able to explain all electromagnetic
phenomena. The measurement of the free Born–Infeld parameter $b$ would enable
us to distinguish the right electrodynamics theory and to move the theoretical
research further by cutting out the theories which are not correct. Let us
mention that this would help to distinguish other non–standard models,
together with scenarios involving minicharged particles or axion-like bosons
in BSM physics, and possibly open the door to new physics.
The importance of this investigation is also demonstrated by the recent study
of the photon–photon scattering experiment at the LHC [115]. Photon–photon
scattering, alongside photon splitting, is the most promising process to study
today in order to answer fundamental questions of theoretical particle
physics, especially in the search for the numerical value of the free
Born–Infeld parameter $b$.
## Appendix A The coefficients for the background field
This Appendix is related to Section IV, where we investigate the
counter-propagating beams in Born–Infeld electrodynamics.
The coefficients $\alpha_{0}$, $\beta_{0}$, $\gamma_{0}$, $\delta_{0}$,
$\epsilon_{0}$, $\zeta_{0}$, $\eta_{0}$, $\theta_{0}$, $\tau_{0}$
in the set of equations (109, 110, 111) have the following form.
The coefficients in the second equation (110) are:
$\displaystyle\alpha_{0}$
$\displaystyle=1+\frac{E_{0}^{2}}{b^{2}}\frac{1}{C},$ $\displaystyle\beta_{0}$
$\displaystyle=\frac{1}{b^{2}}\frac{E^{2}_{0}}{C},$ (194)
$\displaystyle\gamma_{0}$
$\displaystyle=\left(1-\cfrac{1}{b^{2}}E^{2}_{0}\right)-\cfrac{E^{2}_{0}\left(1-\cfrac{1}{b^{2}}E^{2}_{0}\right)^{2}+\cfrac{1}{b^{2}}E^{4}_{0}}{b^{2}C},$
$\displaystyle\delta_{0}$ $\displaystyle=\frac{1}{b^{2}}\cfrac{E^{2}_{0}}{C}.$
The coefficients in the third equation (111) are:
$\displaystyle\epsilon_{0}$
$\displaystyle=\frac{1}{b^{2}}E^{2}_{0}\cfrac{\left(1+\cfrac{1}{b^{2}}E^{2}_{0}\right)}{C},$
$\displaystyle\zeta_{0}$
$\displaystyle=\frac{E^{2}_{0}}{b^{2}}\left(1-\cfrac{E^{2}_{0}\left(1-\cfrac{1}{b^{2}}E^{2}_{0}\right)}{b^{2}C}\right),$
(195) $\displaystyle\eta_{0}$
$\displaystyle=-\cfrac{E^{2}_{0}}{b^{2}}\left(2-\cfrac{\left(1-\cfrac{1}{b^{2}}E^{2}_{0}\right)\left(1+\cfrac{1}{b^{2}}E^{2}_{0}\right)}{C}\right),$
$\displaystyle\theta_{0}$ $\displaystyle=\frac{E^{2}_{0}}{b^{2}}\alpha_{0},$
(196) $\displaystyle\tau_{0}$ $\displaystyle=1-\frac{E^{2}_{0}}{b^{2}},$
where the constant $C$ reads
$C=1-\frac{E^{2}_{0}}{b^{2}}-\frac{E^{4}_{0}}{b^{4}}.$ (197)
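As a quick sanity check, the closed forms above can be evaluated numerically. In the weak-field limit $E_{0}/b\to 0$ one expects $C\to 1$, $\alpha_{0}\to 1$ and $\beta_{0},\delta_{0}\to 0$, recovering the Maxwell case. A minimal sketch (the function names are ours, not from the paper):

```python
# Numerical sanity check of the Appendix A coefficients, Eqs. (194) and (197).
# E0 is the background field amplitude and b the free Born-Infeld parameter.

def C(E0, b):
    """Constant C of Eq. (197): C = 1 - E0^2/b^2 - E0^4/b^4."""
    r = (E0 / b) ** 2
    return 1.0 - r - r * r

def alpha0(E0, b):
    """alpha_0 = 1 + E0^2 / (b^2 C), Eq. (194)."""
    return 1.0 + (E0 / b) ** 2 / C(E0, b)

def beta0(E0, b):
    """beta_0 = E0^2 / (b^2 C), Eq. (194); delta_0 has the same form."""
    return (E0 / b) ** 2 / C(E0, b)

# Weak-field limit: the nonlinear corrections vanish.
E0, b = 1e-4, 1.0
assert abs(C(E0, b) - 1.0) < 1e-7
assert abs(alpha0(E0, b) - 1.0) < 1e-7
assert abs(beta0(E0, b)) < 1e-7
```

The same pattern extends to the remaining coefficients of this Appendix.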
## Appendix B The coefficients for the Born–Infeld Lagrangian
This Appendix presents the coefficients $\alpha$, $\beta$, $\gamma$,
$\delta$, $\epsilon$, $\zeta$, $\eta$, $\theta$, $\iota$ and $\tau$
of equation (108).
The $\alpha$ and $\beta$ coefficients,
$\alpha_{a_{z}},\alpha_{b_{y}},\alpha_{a_{y}}$ and
$\beta_{a_{z}},\beta_{b_{y}},\beta_{a_{y}}$ are:
$\displaystyle\alpha_{a_{z}}$
$\displaystyle=\frac{2E_{0}}{b^{2}C}\left(1+\frac{E^{2}_{0}}{b^{2}}\right),\;\alpha_{b_{y}}=-\frac{2E^{3}_{0}}{b^{4}C^{2}}\left(1-\frac{E^{2}_{0}}{b^{2}}\right),$
$\displaystyle\alpha_{a_{y}}$
$\displaystyle=\frac{2E^{3}_{0}}{b^{4}C^{2}}\left(1+\frac{E^{2}_{0}}{b^{2}}\right),$
(198) $\displaystyle\beta_{a_{z}}$
$\displaystyle=\frac{E_{0}}{b^{2}C}\left(1+\frac{E^{2}_{0}}{b^{2}C}\right),\;\beta_{b_{y}}=\frac{E_{0}}{b^{2}C}\left[1-\frac{2E^{2}_{0}}{b^{2}C}\left(1-\frac{E^{2}_{0}}{b^{2}}\right)\right],$
$\displaystyle\beta_{a_{y}}$
$\displaystyle=\frac{2E^{3}_{0}}{b^{4}C^{2}}\left(1+\frac{E^{2}_{0}}{b^{2}}\right).$
The $\gamma$ coefficients $\gamma_{a_{z}},\gamma_{b_{y}},\gamma_{a_{y}}$ are:
$\displaystyle\gamma_{a_{z}}$
$\displaystyle=-\frac{E_{0}}{b^{4}C}\left\\{E^{2}_{0}+\frac{2E^{2}_{0}}{b^{2}C}\left[E^{2}_{0}+\left(1-\frac{E^{2}_{0}}{b^{2}}\right)^{2}\right]\right\\},$
$\displaystyle\gamma_{a_{y}}$
$\displaystyle=-\frac{2E_{0}}{b^{2}}-\frac{2E_{0}}{b^{4}C}\left[E^{2}_{0}-\frac{2E^{2}_{0}}{b^{2}}\left(1-\frac{E^{2}_{0}}{b^{2}}\right)+\left(1-\frac{E^{2}_{0}}{b^{2}}\right)^{2}\right]-\frac{2E^{3}_{0}}{b^{6}C^{2}}\left[E^{2}_{0}+\left(1-\frac{E^{2}_{0}}{b^{2}}\right)^{2}\right],$
(199) $\displaystyle\gamma_{b_{y}}$
$\displaystyle=\frac{E^{3}_{0}}{b^{4}C}\left\\{-1+\frac{2}{b^{2}C}\left(1-\frac{E^{2}_{0}}{b^{2}}\right)\left[E^{2}_{0}+\left(1-\frac{E^{2}_{0}}{b^{2}}\right)^{2}\right]\right\\}.$
The $\delta$ coefficients $\delta_{a_{z}},\delta_{b_{y}},\delta_{a_{y}}$ are:
$\displaystyle\delta_{a_{z}}$
$\displaystyle=\frac{E_{0}}{b^{2}C}\left(1+\frac{2E^{2}_{0}}{C^{2}}\right),$
$\displaystyle\delta_{b_{y}}$
$\displaystyle=-\frac{2E^{3}_{0}}{b^{4}C^{2}}\left(1-\frac{E^{2}_{0}}{b^{2}}\right),$
(200) $\displaystyle\delta_{a_{y}}$
$\displaystyle=\frac{E_{0}}{b^{2}C}\left[1+\frac{2E^{2}_{0}}{b^{2}C}\left(1+\frac{E^{2}_{0}}{b^{2}}\right)\right].$
The $\epsilon$ coefficients
$\epsilon_{a_{z}},\epsilon_{b_{y}},\epsilon_{a_{y}}$ are:
$\displaystyle\epsilon_{a_{z}}$
$\displaystyle=\frac{E_{0}}{b^{2}C}\left(1+\frac{E^{2}_{0}}{b^{2}}\right)\left(1+\frac{2E^{2}_{0}}{b^{2}C}\right),$
$\displaystyle\epsilon_{a_{y}}$
$\displaystyle=\frac{E_{0}}{b^{2}C}\left(1+\frac{E^{2}_{0}}{b^{2}}\right)\left[1+\frac{2E^{2}_{0}}{b^{2}C}\left(1+\frac{E^{2}_{0}}{b^{2}}\right)\right],$
(201) $\displaystyle\epsilon_{b_{y}}$
$\displaystyle=\frac{2E^{7}_{0}}{b^{8}C^{2}}.$
The $\zeta$ coefficients $\zeta_{a_{z}},\zeta_{b_{y}},\zeta_{a_{y}}$ are:
$\displaystyle\zeta_{a_{z}}$
$\displaystyle=\frac{E_{0}}{b^{2}}\left[1-\frac{E^{2}_{0}}{b^{2}C}\left(1+\frac{2E^{2}_{0}}{b^{2}C}\right)\right],$
$\displaystyle\zeta_{a_{y}}$
$\displaystyle=\frac{E_{0}}{b^{2}}\left[1-\frac{E^{2}_{0}}{b^{2}C}\left(1-\frac{E^{2}_{0}}{b^{2}}\right)\left(1+\frac{2E^{3}_{0}}{b^{4}C}\left(1+\frac{E^{2}_{0}}{b^{2}}\right)\right)\right],$
(202) $\displaystyle\zeta_{b_{y}}$
$\displaystyle=-\frac{E^{3}_{0}}{b^{4}C}\left[\frac{E^{2}_{0}}{b^{2}}-\left(1-\frac{E^{2}_{0}}{b^{2}}\right)+\frac{E^{2}_{0}}{b^{2}C}\left(1-\frac{E^{2}_{0}}{b^{2}}\right)^{2}\right].$
The $\eta$ coefficients $\eta_{a_{z}},\eta_{b_{y}},\eta_{a_{y}}$ are:
$\displaystyle\eta_{a_{z}}$
$\displaystyle=\frac{2E^{3}_{0}}{b^{4}C^{2}}\left(1-\frac{E^{4}_{0}}{b^{4}}\right),$
$\displaystyle\eta_{a_{y}}$
$\displaystyle=\frac{2E^{3}_{0}}{b^{4}C}\left(1-\frac{E^{2}_{0}}{b^{2}}\right)\left[\frac{1}{C}\left(1+\frac{E^{2}_{0}}{b^{2}}\right)^{2}-1\right]-\frac{E_{0}}{b^{2}}\left[2-\frac{1}{C}\left(1-\frac{E^{4}_{0}}{b^{4}}\right)\right],$
(203) $\displaystyle\eta_{b_{y}}$
$\displaystyle=-\frac{2E^{3}_{0}}{b^{4}C}\left(1+\frac{E^{2}_{0}}{b^{2}}\right)\left[\frac{1}{C}\left(1-\frac{E^{2}_{0}}{b^{2}}\right)^{2}+1\right]-\frac{E_{0}}{b^{2}}\left[2-\frac{1}{C}\left(1-\frac{E^{4}_{0}}{b^{4}}\right)\right].$
The $\theta$ coefficients $\theta_{a_{z}},\theta_{b_{y}},\theta_{a_{y}}$ are:
$\displaystyle\theta_{a_{z}}$
$\displaystyle=\frac{2E^{3}_{0}}{b^{2}C}\left(1+\frac{E^{2}_{0}}{b^{2}C}\right),$
$\displaystyle\theta_{a_{y}}$
$\displaystyle=\frac{E^{3}_{0}}{b^{2}C}\left[1+\frac{2E^{2}_{0}}{b^{2}C}\left(1+\frac{E^{2}_{0}}{b^{2}}\right)\right]+E_{0},$
(204) $\displaystyle\theta_{b_{y}}$
$\displaystyle=\frac{E^{3}_{0}}{b^{2}C}\left[1-\frac{2E^{2}_{0}}{b^{2}C}\left(1-\frac{E^{2}_{0}}{b^{2}}\right)\right]+E_{0}.$
The $\iota$ and $\tau$ coefficients,
$\iota_{a_{z}},\iota_{b_{y}},\iota_{a_{y}}$ and
$\tau_{a_{z}},\tau_{b_{y}},\tau_{a_{y}}$ are:
$\displaystyle\iota_{a_{z}}$
$\displaystyle=\frac{2E^{3}_{0}}{b^{4}C^{2}}\left(1-\frac{E^{4}_{0}}{b^{4}}\right),$
$\displaystyle\iota_{a_{y}}$
$\displaystyle=\frac{2E_{0}}{b^{2}}+\frac{2E_{0}}{b^{2}C}\left(1-\frac{E^{2}_{0}}{b^{2}}\right)\left[1+2\frac{E^{2}_{0}}{b^{2}}+\frac{E^{2}_{0}}{b^{2}C}\left(1+\frac{E^{2}_{0}}{b^{2}}\right)^{2}\right],$
(205) $\displaystyle\iota_{b_{y}}$
$\displaystyle=-\frac{2E^{3}_{0}}{b^{4}C}\left(1+\frac{E^{2}_{0}}{b^{2}}\right)\left[1+\frac{1}{C}\left(1-\frac{E^{2}_{0}}{b^{2}}\right)^{2}\right],$
$\displaystyle\tau_{a_{z}}$
$\displaystyle=0,\,\tau_{a_{y}}=-\frac{2E_{0}}{b^{2}},\,\tau_{b_{y}}=0.$ (206)
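Every coefficient listed in this Appendix carries at least one overall factor of $E_{0}$, so all of them vanish in the field-free limit $E_{0}\to 0$. A small numerical sketch of this check for two representative coefficients (function names are ours):

```python
# Field-free limit of two representative Appendix B coefficients.
# E0: background field amplitude, b: Born-Infeld parameter.

def C(E0, b):
    """C = 1 - E0^2/b^2 - E0^4/b^4, Eq. (197)."""
    return 1.0 - (E0 / b) ** 2 - (E0 / b) ** 4

def alpha_az(E0, b):
    """alpha_{a_z} = (2 E0 / (b^2 C)) (1 + E0^2/b^2), Eq. (198)."""
    return 2.0 * E0 / (b ** 2 * C(E0, b)) * (1.0 + (E0 / b) ** 2)

def tau_ay(E0, b):
    """tau_{a_y} = -2 E0 / b^2, Eq. (206)."""
    return -2.0 * E0 / b ** 2

# Both coefficients are proportional to E0 and vanish for E0 = 0.
assert alpha_az(0.0, 1.0) == 0.0
assert tau_ay(0.0, 1.0) == 0.0
```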
## Appendix C The coefficients for the cases 1 and 2 of the total
differential (122)
We obtained two solutions for the total differential $\left({\rm
d}b_{y}/{\rm d}a_{z}\right)_{1,2}$, which have the form (122) together with
expression (123). There we denoted the total differential by $\nu$ (125)
and linearized it in the variables $a_{z}$, $b_{y}$, $a_{y}$ around a constant
field, with coefficients $\nu_{a_{z}}$, $\nu_{b_{y}}$, $\nu_{a_{y}}$. In what
follows we use the $\pm$ notation of equation (126) to distinguish between the
two solutions.
The $\nu_{\pm}$ coefficients are the following:
$\displaystyle\nu^{\pm}_{0}=\left(b^{4}B^{2}\left(2b^{6}E^{6}_{0}-b^{4}E^{8}_{0}-2b^{2}E^{10}_{0}+E^{12}_{0}+b^{4}E^{4}_{0}B+E^{4}_{0}B^{2}\pm
B^{3}T\right)\right)/2B^{3}A,$ (207)
where we have denoted the larger expressions $A$, $B$, $D$, $T$ as
$\displaystyle A=$ $\displaystyle
b^{12}-2b^{10}E^{2}_{0}-4b^{8}E^{4}_{0}+3b^{6}E^{6}_{0}+7b^{4}E^{8}_{0}-2b^{2}E^{10}_{0}-2E^{12}_{0},$
$\displaystyle B=$ $\displaystyle b^{4}-b^{2}E^{2}_{0}-E^{4}_{0},$ (208)
$\displaystyle D=$
$\displaystyle-4b^{30}+12b^{28}E^{2}_{0}+36b^{26}E^{4}_{0}-104b^{24}E^{6}_{0}-152b^{22}E^{8}_{0}+352b^{20}E^{10}_{0}+429b^{18}E^{12}_{0}-618b^{16}E^{14}_{0}-743b^{14}E^{16}_{0}+536b^{12}E^{18}_{0}$
$\displaystyle+724b^{10}E^{20}_{0}-168b^{8}E^{22}_{0}-348b^{6}E^{24}_{0}-24b^{4}E^{26}_{0}+64b^{2}E^{28}_{0}+16E^{30}_{0},$
$\displaystyle T=$ $\displaystyle\sqrt{(1/b^{6}B^{6})D}.$
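The abbreviations $B$ and $C$ (Eq. 197 of Appendix A) are not independent: $B=b^{4}-b^{2}E_{0}^{2}-E_{0}^{4}=b^{4}C$, which follows directly from the two definitions. A quick numerical check of this identity (a sketch; function names are ours):

```python
# Verify the identity B = b^4 * C, which follows from Eqs. (197) and (208).

def B(E0, b):
    """B = b^4 - b^2 E0^2 - E0^4, Eq. (208)."""
    return b ** 4 - b ** 2 * E0 ** 2 - E0 ** 4

def C(E0, b):
    """C = 1 - E0^2/b^2 - E0^4/b^4, Eq. (197)."""
    return 1.0 - (E0 / b) ** 2 - (E0 / b) ** 4

# The identity holds for any background amplitude E0.
for E0 in (0.0, 0.1, 0.3, 0.5):
    b = 1.7
    assert abs(B(E0, b) - b ** 4 * C(E0, b)) < 1e-12
```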
The coefficients $\nu_{a_{z}}^{\pm}$ are:
$\displaystyle\nu_{a_{z}}^{\pm}$ $\displaystyle=\left(\mp 4b^{42}E_{0}\mp
80b^{2}E_{0}^{41}\mp 16E^{43}_{0}\pm b^{24}E^{19}_{0}(3065\mp 632T)\pm
2b^{16}E^{27}_{0}(1898\mp 159T)\pm 2b^{32}E^{11}_{0}(354\mp 61T)\right.$
$\displaystyle\pm\left.2b^{38}E^{5}_{0}(1\mp 10T)\pm 16b^{6}E^{37}_{0}(27\mp
2T)+2b^{36}E^{7}_{0}(\mp 95+T)\pm 8b^{4}E^{39}_{0}(\pm
3+T)+8b^{8}E^{35}_{0}(\pm 78+T)\right.$
$\displaystyle\left.+2b^{40}E^{3}_{0}(\pm 11+2T)+4b^{12}E^{31}_{0}(\mp
553+23T)+14b^{20}E^{23}_{0}(\mp 289+40T)+2b^{10}E^{33}_{0}(\mp
324+83T)+b^{34}E^{9}_{0}(\pm 174+127T)\right.$
$\displaystyle\left.+b^{30}E^{13}_{0}(892\pm 321T)+b^{14}E^{29}_{0}(432\pm
361T)+b^{28}E^{15}_{0}(\mp 1725+416T)+b^{26}E^{17}_{0}(\pm 2282+453T)\right.$
$\displaystyle\left.+b^{18}E^{25}_{0}(\pm 2489+481T)+b^{22}E^{21}_{0}(3325\pm
493T)\right)/2B^{5}A^{2}T.$ (209)
The coefficients $\nu_{b_{y}}^{\pm}$ are:
$\displaystyle\nu_{b_{y}}^{\pm}$ $\displaystyle=\left(\mp 4b^{42}E_{0}\pm
72b^{2}E_{0}^{41}\pm 16E^{43}_{0}\pm b^{22}E^{21}_{0}(8689\mp 1667T)\pm
2b^{14}E^{29}_{0}(6632\mp 513T)\pm 2b^{24}E^{19}_{0}(3495\mp 382T)\right.$
$\displaystyle\pm\left.2b^{30}E^{13}_{0}(1306\mp 337T)\pm
2b^{16}E^{27}_{0}(1190\mp 73T)\pm 4b^{12}E^{31}_{0}(40\mp 21T)\pm
16b^{32}E^{11}_{0}(23\mp 5T)+87b^{34}E^{9}_{0}(\mp 2+T)\right.$
$\displaystyle+\left.8b^{4}E^{31}_{0}(\pm 11+T)+2b^{40}E^{3}_{0}(\pm
11+2T)+2b^{10}E^{33}_{0}(\mp 1006+5T)+4b^{8}E^{35}_{0}(\mp
132+5T)+4b^{6}E^{37}_{0}(\pm 9+7T)\right.$
$\displaystyle\left.+2b^{36}E^{7}_{0}(\mp 31+12T)+2b^{38}E^{5}_{0}(5\pm
12T)+3b^{28}E^{15}_{0}(\mp 511+44T)+2b^{20}E^{23}_{0}(\mp 2211+245T)\right.$
$\displaystyle\left.+b^{26}E^{17}_{0}(\mp 4392+1063T)+b^{18}E^{25}_{0}(\mp
10091+1355T)\right)/2B^{5}A^{2}T.$ (210)
The coefficients $\nu_{a_{y}}^{\pm}$ are:
$\displaystyle\nu_{a_{y}}^{\pm}$ $\displaystyle=\left(\mp 2b^{44}E_{0}\pm
8b^{42}E_{0}^{3}\mp 92b^{4}E^{41}_{0}\mp 96b^{2}E^{43}_{0}\mp 16E^{45}_{0}\pm
2b^{14}E^{31}_{0}(45\mp 46T)+8b^{38}E^{7}_{0}(\mp 7+T)\right.$
$\displaystyle+\left.b^{40}E^{5}_{0}(\pm 10+T)+8b^{6}E^{39}_{0}(\pm 60+T)\mp
2b^{10}E^{35}_{0}(384\pm 7T)+2b^{8}E^{37}_{0}(\pm
497+17T)+b^{34}E^{11}_{0}(\pm 185+18T)\right.$
$\displaystyle+\left.19b^{26}E^{19}_{0}(\pm 55+37T)-6b^{12}E^{33}_{0}(\pm
511+37T)\mp b^{36}E^{9}_{0}(44\pm 57T)+2b^{32}E^{13}_{0}(\pm 139+158T)\mp
b^{30}E^{15}_{0}(503\pm 252T)\right.$
$\displaystyle\left.+2b^{16}E^{29}_{0}(\pm 2443+339T)+b^{18}E^{27}_{0}(\pm
1120+479T)\mp b^{28}E^{17}_{0}(1081\pm 808T)\mp b^{22}E^{23}_{0}(1493\pm
849T)\right.$ $\displaystyle\left.\mp b^{20}E^{25}_{0}(4662\pm
1135T)+b^{24}E^{21}_{0}(\pm 2783+1190T)\right)/b^{2}B^{5}A^{2}T.$ (211)
## Appendix D The coefficients for the function $g(a_{z},b_{y},a_{y})$
The coefficients
$g^{\pm}_{0},g^{\pm}_{a_{z}},g^{\pm}_{b_{y}},g^{\pm}_{a_{y}}$, in the
linearized form of the function $g(a_{z},b_{y},a_{y})$ (156), have the form:
$\displaystyle g^{\pm}_{0}$
$\displaystyle=-\left\\{\frac{1}{\alpha_{0}}(1+\tau_{0})\beta_{0}+\frac{\gamma_{0}}{\alpha_{0}}\nu^{\pm}_{0}\right\\},$
(212)
and
$\displaystyle g^{\pm}_{a_{z}}$
$\displaystyle=\frac{\alpha_{a_{z}}}{\alpha^{2}_{0}}(\beta_{0}+\gamma_{0}\nu^{\pm}_{0}+\beta_{0}\tau_{0})-\frac{1}{\alpha_{0}}(\beta_{a_{z}}+\gamma_{a_{z}}\nu^{\pm}_{0}+\gamma_{0}\nu^{\pm}_{a_{z}}+\beta_{a_{z}}\tau_{0}+\beta_{0}\tau_{a_{z}}),$
$\displaystyle g^{\pm}_{b_{y}}$
$\displaystyle=\frac{1}{\alpha^{3}_{0}}\left[-2\alpha_{a_{y}}\alpha_{a_{z}}(\beta_{0}+\gamma_{0}\nu^{\pm}_{0}+\beta_{0}\tau_{0})+\alpha_{0}\alpha_{a_{z}}(\beta_{a_{y}}+\gamma_{a_{y}}\nu^{\pm}_{0}+\gamma_{0}\nu^{\pm}_{a_{y}}+\beta_{a_{y}}\tau_{0}+\beta_{0}\tau_{a_{y}})\right.$
$\displaystyle+$
$\displaystyle\left.\alpha_{0}\alpha_{a_{y}}(\beta_{a_{z}}+\gamma_{a_{z}}\nu^{\pm}_{0}+\gamma_{0}\nu^{\pm}_{a_{z}}+\beta_{a_{z}}\tau_{0}+\beta_{0}\tau_{a_{z}})-\alpha^{2}_{0}(\gamma_{a_{z}}\nu^{\pm}_{a_{y}}+\gamma_{a_{y}}\nu^{\pm}_{a_{z}}+\beta_{a_{z}}\tau_{a_{y}}+\beta_{a_{y}}\tau_{a_{z}})\right],$
(213) $\displaystyle g^{\pm}_{a_{y}}$
$\displaystyle=\frac{1}{\alpha^{4}_{0}}\left\\{\alpha_{0}\left(-2\alpha_{a_{y}}\alpha_{a_{z}}(\beta_{b_{y}}+\gamma_{b_{y}}\nu^{\pm}_{0}+\gamma_{0}\nu^{\pm}_{b_{y}}+\beta_{b_{y}}\tau_{0}+\beta_{0}\tau_{b_{y}})+\alpha_{0}\alpha_{a_{z}}(\gamma_{a_{y}}\nu^{\pm}_{b_{y}}+\gamma_{b_{y}}\nu^{\pm}_{a_{y}}+\beta_{a_{y}}\tau_{b_{y}}+\beta_{b_{y}}\tau_{a_{y}})\right.\right.$
$\displaystyle+$
$\displaystyle\left.\left.\alpha_{0}\alpha_{a_{y}}(\gamma_{a_{z}}\nu^{\pm}_{b_{y}}+\gamma_{b_{y}}\nu^{\pm}_{a_{z}}+\beta_{a_{z}}\tau_{b_{y}}+\beta_{b_{y}}\tau_{a_{z}})\right)\right.$
$\displaystyle+$
$\displaystyle\left.\alpha_{b_{y}}\left(6\alpha_{a_{y}}\alpha_{a_{z}}(\beta_{0}+\gamma_{0}\nu^{\pm}_{0}+\beta_{0}\tau_{0})-2\alpha_{0}\alpha_{a_{y}}(\beta_{a_{z}}+\gamma_{a_{z}}\nu^{\pm}_{0}+\gamma_{0}\nu^{\pm}_{a_{z}}+\beta_{a_{z}}\tau_{0}+\beta_{0}\tau_{a_{z}})\right.\right.$
$\displaystyle+$
$\displaystyle\left.\left.\alpha_{0}(-2\alpha_{a_{z}}(\beta_{a_{y}}+\gamma_{a_{y}}\nu^{\pm}_{0}+\gamma_{0}\nu^{\pm}_{a_{y}}+\beta_{a_{y}}\tau_{0}+\beta_{0}\tau_{a_{y}})+\alpha_{0}(\gamma_{a_{z}}\nu^{\pm}_{a_{y}}+\gamma_{a_{y}}\nu^{\pm}_{a_{z}}+\beta_{a_{z}}\tau_{a_{y}}+\beta_{a_{y}}\tau_{a_{z}}))\right)\right\\}.$
## Appendix E The coefficients for the function $q(a_{z},b_{y},a_{y})$
The coefficients $q_{0},q_{a_{z}},q_{b_{y}},q_{a_{y}}$, in the linearized form
of the function $q(a_{z},b_{y},a_{y})$ (157), have the form:
$\displaystyle q_{0}$ $\displaystyle=-\frac{\delta_{0}}{\alpha_{0}},$ (214)
and
$\displaystyle q_{a_{z}}$
$\displaystyle=\frac{1}{\alpha_{0}}\left(\frac{\alpha_{a_{z}}}{\alpha_{0}}\delta_{0}-\delta_{a_{z}}\right),$
$\displaystyle q_{b_{y}}$
$\displaystyle=\frac{1}{\alpha_{0}^{2}}\left(-2\alpha_{b_{y}}\alpha_{a_{z}}\frac{\delta_{0}}{\alpha_{0}}+\alpha_{a_{z}}\delta_{b_{y}}+\alpha_{b_{y}}\delta_{a_{z}}\right),$
$\displaystyle q_{a_{y}}$
$\displaystyle=\frac{1}{\alpha^{3}_{0}}\left[2\alpha_{b_{y}}\alpha_{a_{y}}\left(3\alpha_{a_{z}}\frac{\delta_{0}}{\alpha_{0}}-\delta_{a_{z}}\right)-2\alpha_{a_{z}}(\alpha_{a_{y}}\delta_{b_{y}}+\alpha_{b_{y}}\delta_{a_{y}})\right].$
(215)
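Since $\delta_{0}=\beta_{0}=E_{0}^{2}/(b^{2}C)$ by Eq. (194), the leading coefficient $q_{0}=-\delta_{0}/\alpha_{0}$ vanishes in the Maxwell limit $E_{0}\to 0$ and is negative for a finite background field with $C>0$. A small numerical sketch (names are ours):

```python
# Leading coefficient q_0 = -delta_0 / alpha_0, Eq. (214), built from
# the Appendix A forms of delta_0 and alpha_0 (Eq. 194).

def C(E0, b):
    return 1.0 - (E0 / b) ** 2 - (E0 / b) ** 4

def alpha0(E0, b):
    return 1.0 + (E0 / b) ** 2 / C(E0, b)

def delta0(E0, b):
    return (E0 / b) ** 2 / C(E0, b)

def q0(E0, b):
    return -delta0(E0, b) / alpha0(E0, b)

assert q0(0.0, 1.0) == 0.0   # Maxwell limit: the coefficient vanishes
assert q0(0.3, 1.0) < 0.0    # finite background field: q_0 is negative
```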
###### Acknowledgements.
H. K. wishes to thank: Prof. I. Białynicki–Birula for enlightening and kind
discussions on the Born and the Born–Infeld theories and their relativistic
covariance, and for pointing out the beauty of the original Born–Infeld paper;
Dr. T. Pecháček for discussions on relativistic covariance and his detailed
final reading of the paper; Prof. S. Bulanov for discussions and his interest
in the manuscript; Dr. T. Chrobok for discussions and his interest in this
work; Dr. Ch. Lu for helpful discussions on the submission process and his
interest in this work; Prof. G. Gibbons for a discussion about plane-wave
solutions; Prof. G. Gregori for pointing out the corrected version of the
paper about the PVLAS experiment; Dr. E. Chacon–Golcher for a very detailed
and thorough final reading of the manuscript and his comments. H. K. is
grateful for the kind, supportive and helpful report of the anonymous referee,
who drew her attention to the mathematical work on shock-wave development by
Prof. D. Christodoulou, Y. Brenier and D. Serre, of which she was not aware
and which motivated further investigation. The work was supported by the
project High Field Initiative (CZ$.02.1.01/0.0/0.0/15\\_003/0000449$) from
the European Regional Development Fund. H. K. was supported by the fellowship
(award) Czech edition of L’Oréal UNESCO For Women In Science 2019.
## References
* H. Kadlecová _et al._ [2019a] H. Kadlecová, G. Korn, and S. V. Bulanov, Electromagnetic shocks in the quantum vacuum, Phys. Rev. D 99, 036002 (2019a).
* H. Kadlecová _et al._ [2019b] H. Kadlecová, S. V. Bulanov, and G. Korn, Properties of finite amplitude electromagnetic waves propagating in the quantum vacuum, PPCF 61, 084002 (2019b).
* V. B. Berestetski _et al._ [1982] V. B. Berestetski, E. M. Lifshitz, and L. P. Pitaevskii, Quantum electrodynamics (Volume 4, Course of Theoretical Physics, Second edition) (Pergamon Press, Oxford, 1982).
* W. Heisenberg and H. Euler [1936] W. Heisenberg and H. Euler, Folgerungen aus der Diracschen Theorie des Positrons, Zeit. für Phys. 98, 714 (1936).
* W. Dittrich and H. Gies [2000] W. Dittrich and H. Gies, Probing the quantum vacuum: Perturbative effective action approach in quantum electrodynamics and its application (Springer-Verlag Berlin Heidelberg, Berlin, 2000).
* D. d’Enterria and G. G. da Silveira [2013] D. d’Enterria and G. G. da Silveira, Observing Light-by-light Scattering at the Large Hadron Collider, Phys. Rev. Lett. 111, 080405 (2013).
* D. d’Enterria and G. G. da Silveira [2016] D. d’Enterria and G. G. da Silveira, Erratum: Observing Light-by-light Scattering at the Large Hadron Collider, Phys. Rev. Lett. 116, 129901(E) (2016).
* R. Karplus and M. Neuman [1951] R. Karplus and M. Neuman, The Scattering of Light by Light, Phys. Rev. 83, 776 (1951).
* G. Baur _et al._ [2002] G. Baur, K. Hencken, D. Trautmann, S. Sadovsky, and Y. Kharlov, Coherent $\gamma\gamma$ and $\gamma$A interactions in very peripheral collisions at relativistic ion colliders, Phys. Rep. 364, 359 (2002).
* M. Abboud et al. [2017] M. Abboud et al., Evidence for Light-by-light scattering in heavy-ion collisions with the ATLAS detector at the LHC, Nature Phys. 13, 852 (2017).
* G. Aad et al. [2019] G. Aad et al., Observation of Light-by-light Scattering in Ultraperipheral Pb + Pb Collisions with the ATLAS Detector, Phys. Rev. Lett. 123, 052001 (2019).
* M. Kłusek-Gawenda _et al._ [2016] M. Kłusek-Gawenda, P. Lebiedowicz, and A. Szczurek, Light-by-light scattering in ultraperipheral Pb-Pb collisions at energies available at the CERN Large Hadron Collider, Phys. Rev. C 93, 044907 (2016).
* I. M. Dremin [2019] I. M. Dremin, Geometry of ultraperipheral nuclear collisions, Int. J. of Mod. Phys. A 34, 1950068 (2019).
* S. R. Klein [2017] S. R. Klein, A clash of photons, Nature Physics 13 (2017).
* G. A. Mourou _et al._ [2006] G. A. Mourou, T. Tajima, and S. V. Bulanov, Optics in the relativistic regime, Rev. Mod. Phys. 78, 309 (2006).
* Marklund and Shukla [2006] M. Marklund and P. K. Shukla, Nonlinear collective effects in photon–photon and photon–plasma interactions, Rev. Mod. Phys. 78, 591 (2006).
* A. Di Piazza _et al._ [2012] A. Di Piazza, C. Müller, K. Z. Hatsagortsyan, and C. H. Keitel, Extremely high–intensity laser interactions with fundamental quantum systems, Rev. Mod. Phys. 84, 1177 (2012).
* S. S. Bulanov et al. [2010] S. S. Bulanov et al., Schwinger limit attainability with extreme power lasers, Phys. Rev. Lett. 105, 220407 (2010).
* D. Tommasini _et al._ [2008] D. Tommasini, A. Ferrando, H. Michinel, and M. Seco, Detecting photon-photon scattering in vacuum at exawatt lasers, Phys. Rev. A 77, 042101 (2008).
* A. Parades _et al._ [2014] A. Parades, D. Novoa, and D. Tommasini, Self-induced mode mixing of ultraintense lasers in vacuum, Phys. Rev. A 90, 063803 (2014).
* King and Heinzl [2016] B. King and T. Heinzl, Measuring vacuum polarization with high-power lasers, HPLaser 4, e5 (2016).
* J. K. Koga _et al._ [2012] J. K. Koga, S. V. Bulanov, T. Zh. Esirkepov, A. S. Pirozkhov, M. Kando, and N. N. Rosanov, Possibility of measuring photon–photon scattering via relativistic mirrors, Phys. Rev. A 86, 053823 (2012).
* F. Karbstein and R. Shaisultanov [2015] F. Karbstein and R. Shaisultanov, Stimulated photon emission from the vacuum, Phys. Rev. D 91, 113002 (2015).
* H. Gies _et al._ [2018] H. Gies, F. Karbstein, C. Kohlfürst, and N. Seegert, Photon-photon scattering at the high-intensity frontier, Phys. Rev. D 97, 076002 (2018).
* S. V. Bulanov _et al._ [2019] S. V. Bulanov, P. V. Sasorov, S. S. Bulanov, and G. Korn, Synergic Cherenkov–Compton Radiation, Phys. Rev. D. 100, 016012 (2019).
* H.-P. Schlenvoigt _et al._ [2016] H.-P. Schlenvoigt, T. Heinzl, U. Schramm, T. E. Cowan, and R. Sauerbrey, Detecting vacuum birefringence with x-ray free electron lasers and high-power optical lasers: a feasibility study, Phys. Scr. 91, 023010 (2016).
* B. Shen _et al._ [2018] B. Shen, Z. Bu, J. Xu, T. Xu, L. Ji, R. Li, and Z. Xu, Exploring vacuum birefringence based on a 100 PW laser and an x-ray free electron laser beam, Plas. Phys. and Contr. Fusion 60, 044002 (2018).
* T. Heinzl _et al._ [2006] T. Heinzl, B. Liesfeld, K.-U. Amthor, H. Schwoerer, R. Sauerbrey, and A. Wipf, On the observation of vacuum birefringence, Opt. Commun. 267, 318 (2006).
* P. K. Shukla and B. Eliasson [2010] P. K. Shukla and B. Eliasson, Recent developments in quantum plasma physics, Plas. Phys. Control. Fusion 52, 124040 (2010).
* A. Di Piazza and K. Z. Hatsagortsyan [2008] A. Di Piazza and K. Z. Hatsagortsyan, Quantum vacuum effects in strong laser beams, Plas. Phys. Control. Fusion 52, 124035 (2008).
* C. A. M. de Melo _et al._ [2015] C. A. M. de Melo, L. G. Medeiros, and P. J. Pompeia, Causal structure and birefringence in nonlinear electrodynamics, Mod. Phys. Lett. A 30, 1550025 (2015).
* Z. Białynicka–Birula and I. Białynicki–Birula [1970] Z. Białynicka–Birula and I. Białynicki–Birula, Nonlinear effects in quantum electrodynamics. Photon propagation and photon splitting in an external field, Phys. Rev. D 2, 2341 (1970).
* Dittrich and Gies [1998] W. Dittrich and H. Gies, Light propagation in nontrivial QED vacua, Phys. Rev. D 58, 025004 (1998).
* K. Hattori and K. Itakura [2013a] K. Hattori and K. Itakura, Vacuum birefringence in strong magnetic fields: (i) photon polarization tensor with all the Landau levels, Ann. Phys. 330, 23 (2013a).
* K. Hattori and K. Itakura [2013b] K. Hattori and K. Itakura, Vacuum birefringence in strong magnetic fields: (ii) complex refractive index from the lowest Landau levels, Ann. Phys. 334, 58 (2013b).
* G. Boillat [1970] G. Boillat, Nonlinear electrodynamics: Lagrangians and equations of motion, J. Math. Phys. 11, 941 (1970).
* L. D. Landau and E. M. Lifshitz [1984] L. D. Landau and E. M. Lifshitz, Electrodynamics of continuous media (Pergamon Press, Oxford, 1984).
* M. Lutzky and J. S. Toll [1959] M. Lutzky and J. S. Toll, Formation of discontinuities in classical nonlinear electrodynamics, Phys. Rev. 113, 1649 (1959).
* P. Böhl _et al._ [2015] P. Böhl, B. King, and H. Ruhl, Vacuum high-harmonic generation in the shock regime, Phys. Rev. A 92, 032115 (2015).
* S. L. Adler [1971] S. L. Adler, Photon splitting and photon dispersion in a strong magnetic field, Ann. Phys. 67, 599 (1971).
* E. Brezin and C. Itzykson [1971] E. Brezin and C. Itzykson, Polarization phenomena in vacuum nonlinear electrodynamics, Phys. Rev. D 3, 618 (1971).
* V. I. Ritus [1975] V. I. Ritus, The lagrange function of an intensive electromagnetic field and quantum electrodynamics at small distances, Sov. Phys. JETP 42, 774 (1975).
* J. I. Latorre _et al._ [1995] J. I. Latorre, P. Pascual, and R. Tarrach, Speed of light in non–trivial vacua, Nucl. Phys. B 437, 60 (1995).
* I. T. Drummond and S. J. Hathrell [1980] I. T. Drummond and S. J. Hathrell, QED vacuum polarization in a background gravitational field and its effect on the velocity of photons, Phys. Rev. D 22, 343 (1980).
* G. M. Shore [1996] G. M. Shore, Faster than light photons in gravitational fields - causality, anomalies and horizons, Nucl. Phys. B 460, 379 (1996).
* V. O. Papanyan and V. I. Ritus [1972] V. O. Papanyan and V. I. Ritus, Vacuum polarization and photon splitting in an intense field, Sov. Phys. JETP 34, 1195 (1972).
* G. Brodin _et al._ [2004] G. Brodin, D. Eriksson, and M. Marklund, Nonlinear resonant wave interaction in vacuum, Phys. Scr. T107, 209 (2004).
* M. Marklund _et al._ [2003] M. Marklund, G. Brodin, and L. Stenflo, Electromagnetic wave collapse in a radiation background, Phys. Rev. Lett. 91, 16 (2003).
* N. N. Rosanov [1993] N. N. Rosanov, Four-wave interactions of intense radiation in vacuum, JETP 76, 991 (1993).
* Lorenci _et al._ [2000] V. A. D. Lorenci, R. Klippert, M. Novello, and J. M. Salim, Light propagation in non–linear electrodynamics, Phys. Lett. B 482, 137 (2000).
* A. Zee [2010] A. Zee, Quantum field theory in a nutshell, 2nd edition (in a nutshell) (Princeton University Press; Second edition, February 21, 2010).
* A. Di Piazza _et al._ [2005] A. Di Piazza, K. Z. Hatsagortsyan, and C. H. Keitel, Harmonic generation from laser-driven vacuum, Phys. Rev. D 72, 085005 (2005).
* A. M. Fedotov and N.B. Narozhny [2007] A. M. Fedotov and N.B. Narozhny, Generation of harmonics by a focused laser beam in vacuum, Phys. Lett. A 362, 1 (2007).
* N. B. Narozhny and A. M. Fedotov [2007] N. B. Narozhny and A. M. Fedotov, Third-harmonic generation in a vacuum at the focus of a high-intensity laser beam, Laser Physics 17/4, 350–357 (2007).
* H. Euler and B. Kockel [1935] H. Euler and B. Kockel, Über die Streuung von Licht an Licht nach der Diracschen Theorie, Naturwissenschaften 23, 246 (1935).
* A. Rebhan and G. Turk [2017] A. Rebhan and G. Turk, Polarization effects in Light-by-light scattering: Euler–Heisenberg versus Born–Infeld, Int. J. Mod. Phys. A 32, 1750053 (2017).
* E. Schrödinger [1943a] E. Schrödinger, A New Exact Solution in non-linear Optics (Two-wave-system), Proc. Roy. Irish Acad. A 49, 59 (1943a).
* G. Mie [1912a] G. Mie, Grundlagen einer Theorie der Materie, Annalen der Physik 37, 511 (1912a).
* G. Mie [1912b] G. Mie, Grundlagen einer Theorie der Materie, Annalen der Physik 39, 1 (1912b).
* G. Mie [1913] G. Mie, Grundlagen einer Theorie der Materie, Annalen der Physik 40, 1 (1913).
* J. D. Norton [1993] J. D. Norton, General covariance and the foundations of general relativity: eight decades of dispute, Rep. Prog. Phys. 56, 791 (1993).
* E. Schrödinger [1942] E. Schrödinger, Non-linear Optics, Proc. Roy. Irish Acad. A 47, 77 (1942).
* E. S. Fradkin and A. A. Tseytlin [1985] E. S. Fradkin and A. A. Tseytlin, Non–linear electrodynamics from quantized strings, Phys. Lett. B 163, 123 (1985).
* R. G. Leigh [1989] R. G. Leigh, Dirac–Born–Infeld action from Dirichlet $\sigma$-model, Mod. Phys. Lett. A 4, 2767 (1989).
* C. Bachas [1996] C. Bachas, D-brane dynamics, Phys. Lett. B 374, 37 (1996).
* G. W. Gibbons and C. A. R. Herdeiro [2001] G. W. Gibbons and C. A. R. Herdeiro, Born–Infeld theory and stringy causality, Phys. Rev. D 63, 064006 (2001).
* T. Kaluza [1921] T. Kaluza, Zum Unitätsproblem in der Physik, Sitzungsber. Preuss. Akad. Wiss. Berlin. (Math. Phys.), 966 (1921).
* O. Klein [1926] O. Klein, The atomicity of electricity as a quantum theory law, Nature 118, 516 (1926).
* P. S. Wesson [1999] P. S. Wesson, Space–time-matter: Modern Kaluza-Klein theory (Singapore: World Scientific, 1999).
* Y. Aldabergenov and S. V. Ketov [2018] Y. Aldabergenov and S. V. Ketov, Modified Born–Infeld-Dilaton-Axion coupling in supersymmetry, Symmetry 11, 14 (2018).
* D. Tommasini _et al._ [2009] D. Tommasini, A. Ferrando, H. Michinela, and M. Seco, Precision tests of QED and non-standard models by searching photon-photon scattering in vacuum with high power lasers, JHEP 911, 43 (2009).
* F. Della Valle [2016] F. Della Valle, The PVLAS experiment: measuring vacuum magnetic birefringence and dichroism with a birefringent Fabry–Perot cavity, Eur. Phys. Jour. C 76, 24 (2016).
* T. Inada et al. [2014] T. Inada et al., Search for photon–photon elastic scattering in the x-ray region, Phys. Lett. B 732, 356 (2014).
* V. S. Yanovsky et al. [2008] V. S. Yanovsky et al., Electron–positron pair production from vacuum in the field of high-intensity laser radiation, Opt. Express 16, 2109 (2008).
* Zettawatt-Equivalent Ultrashort pulse laser System (ZEUS), (https://zeus.engin.umich.edu/).
* [76] H. Abramowicz et al., Conceptual design report for the luxe experiment, arXiv:2102.02032 .
* [77] Extreme Light Infrastructure, European Project, (http://www.eli-beams.eu) .
* G. Boillat [1972a] G. Boillat, Shock relations in nonlinear electrodynamics, Phys. Rev. Lett. A 40, 1 (1972a).
* Ch. Minz _et al._ [2016] Ch. Minz, H.-H. von Borzeszkowski, T. Chrobok, and G. Schellstede, Shock Wave Polarizations and Optical Metrics in the Born and the Born-Infeld Electrodynamics, Ann. of Phys. (Elsevier) 364, 248 (2016).
* I. Białynicki-Birula [1983] I. Białynicki-Birula, Nonlinear electrodynamics: Variations on the theme by Born–Infeld (in Festschrift of J. Lopuszanski, Quantum Theory of Particles and Fields, Eds. B. Jancewicz and J. Lukierski, p. 31 - 42, World Scientific, Singapore, 1983).
* G. Boillat and T. Ruggeri [2004] G. Boillat and T. Ruggeri, Energy momentum, Wave velocities and characteristic shocks in Euler’s variational equations with application to the Born–Infeld Theory, J. Math. Phys. 45, 3468 (2004).
* Y. Brenier [2004] Y. Brenier, Hydrodynamics structure of the augmented Born–Infeld equation, Arch. Rat. Mech. Anal. 172, 65 (2004).
* D. Christodoulou [2007] D. Christodoulou, The formation of shocks in 3-dimensional fluids (EMS Monographs in Mathematics, European Mathematical Society, 2007).
* D. Christodoulou and A. Lisibach [2015] D. Christodoulou and A. Lisibach, Shock Development in Spherical Symmetry (arXiv:1501.04235, 2015).
* J. Speck [2012] J. Speck, The nonlinear stability of the trivial solution to the Maxwell–Born–Infeld system, J. Math. Phys. 83, 083703 (2012).
* R. Ferraro [2007] R. Ferraro, Testing Born–Infeld electrodynamics in waveguides, Phys. Rev. Lett. 99, 230401 (2007).
* M. Novello _et al._ [2000] M. Novello, V. A. De Lorenci, J. M. Salim, and R. Klippert, Geometrical aspects of light propagation in nonlinear electrodynamics, Phys. Rev. D 61, 045001 (2000).
* J. Plebanski [1970] J. Plebanski, Lecture notes on nonlinear electrodynamics (Lectures on non-linear electrodynamics: an extended version of lectures given at the Niels Bohr Institute and NORDITA, Copenhagen, in October 1968 - 150 p., 1970).
* D. A. Burton _et al._ [2011] D. A. Burton, R.M.G.M. Trines, T.J. Walton, and H. Wen, Exploring Born–Infeld electrodynamics using plasmas, J. Phys. A 44, 095501 (2011).
* S. V. Bulanov _et al._ [2020] S. V. Bulanov, P. V. Sasorov, F. Pegoraro, H. Kadlecová, S. S. Bulanov, T. Zh. Esirkepov, N. N. Rosanov, and G. Korn, Electromagnetic solitons in quantum vacuum, Phys. Rev. D. 101, 016016 (2020).
* G. W. Gibbons [1998] G. W. Gibbons, Born-Infeld particles and Dirichlet p-branes, Nucl. Phys. B 514, 603 (1998).
* M. Born [1933] M. Born, Modified field equations with a finite radius of the electron, Nature 132, 282 (1933).
* M. Born and L. Infeld [1933] M. Born and L. Infeld, Electromagnetic mass, Nature 132, 970 (1933).
* M. Born and L. Infeld [1934] M. Born and L. Infeld, Foundations of the new field theory, Proc. R. Soc. Lond. 144, 425 (1934).
* J. Schwinger [1951] J. Schwinger, Gauge invariance and vacuum polarization, Phys. Rev. 82, 664 (1951).
* E. Schrödinger [1943b] E. Schrödinger, Dynamics and Scattering-Power of Born’s Electron, Proc. Royal Irish Acad. Sec A: Math. and Phys. Sci. 48, 91 (1942/1943b).
* G. W. Gibbons and D. A. Rasheed [1995] G. W. Gibbons and D. A. Rasheed, Electric–magnetic duality rotations in non–linear electrodynamics, Nucl. Phys. B 454, 185 (1995).
* Kadomtsev and Karpman [1971] B. B. Kadomtsev and V. I. Karpman, Nonlinear waves, Sov. Phys. Usp. 14, 40 (1971).
* B. B. Kadomstev [2001] B. B. Kadomstev, Cooperative effects in plasmas in Reviews of plasma physics, edited by V. D. Shafranov, Volume 22 (Springer, Boston, MA, 2001).
* G. B. Whitham [2011] G. B. Whitham, Linear and nonlinear waves (John Wiley & Sons, 2011).
* Zee [2010] A. Zee, Quantum field theory in a nutshell (Princeton University Press, New York, 2010).
* A. V. Panchenko _et al._ [2008] A. V. Panchenko, T. Zh. Esirkepov, A. S. Pirozhkov, M. Kando, F. F. Kamenets, and S. V. Bulanov, Interaction of electromagnetic waves with caustics in plasma flows, Phys. Rev. E 78, 056402 (2008).
* G. Boillat [1972b] G. Boillat, Exact Plane–wave Solution of Born–Infeld electrodynamics, Lett. al Nuovo Cimento 4, 274 (1972b).
* [104] H. Kadlecová, Electromagnetic waves in born electrodynamics, Submitted, arXiv:2103.03575 .
* G. Breit and J. A. Wheeler [1934] G. Breit and J. A. Wheeler, Collision of two light quanta, Phys. Rev. 46, 1087 (1934).
* J. M. Davila _et al._ [2014] J. M. Davila, Ch. Schubert, and M. A. Trejo, Photonic processes in Born–Infeld theory, Int. J. Mod. Phys. A 29, 1450174 (2014).
* A. I. Nikishov and V. I. Ritus [1970] A. I. Nikishov and V. I. Ritus, Interaction of electrons and photons with a very strong electromagnetic field, Sov. Phys. Usp. 13, 303 (1970).
* E. Yu. Petrov and A. V. Kudrin [2013] E. Yu. Petrov and A. V. Kudrin, Exact self-similar solutions in born–infeld theory, Phys. Rev. D 87, 087703 (2013).
* V. P. Frolov and D. V. Fursaev [2005] V. P. Frolov and D. V. Fursaev, Gravitational field of a spinning radiation beam pulse in higher dimensions, Phys. Rev. D 71, 104034 (2005).
* V. P. Frolov _et al._ [2005] V. P. Frolov, W. Israel, and A. Zelnikov, Gravitational field of relativistic gyratons, Phys. Rev. D 72, 084031 (2005).
* V. P. Frolov and A. Zelnikov [2006] V. P. Frolov and A. Zelnikov, Gravitational field of charged gyratons, Class. Quant. Grav. 23, 2119 (2006).
* H. Kadlecová _et al._ [2009] H. Kadlecová, A. Zelnikov, P. Krtouš, and J. Podolský, Gyratons on direct–product spacetimes, Phys. Rev. D 80, 024004 (2009).
* H. Kadlecová and P. Krtouš [2010] H. Kadlecová and P. Krtouš, Gyratons on Melvin universe, Phys. Rev. D 82, 044041 (2010).
* B. Döbrich and H. Gies [2009] B. Döbrich and H. Gies, Interferometry of light propagation in pulsed fields, Eur. Lett. Ass. EPL (Europhysics Letters) 87, 2 (2009).
* J. Ellis _et al._ [2017] J. Ellis, N. E. Mavromatos, and T. You, Light-by-light scattering constraint on Born-Infeld theory, Phys. Rev. Lett. 118, 261802 (2017).
|
arxiv-papers
| 2021-07-26T14:54:52 |
2024-09-04T03:07:18.936392
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Hedvika Kadlecov\\'a",
"submitter": "Hedvika Kadlecova",
"url": "https://arxiv.org/abs/2107.12249"
}
|
2107.12255
|
# Massive electrons and unconventional room-temperature superconductivity in
superhydrides
Theja N. De Silva Department of Chemistry and Physics, Augusta University,
Augusta, Georgia 30912, USA.
###### Abstract
The search for room-temperature superconducting materials has been at the
center of modern research for decades. The recent discovery of high-
temperature superconductivity under extreme pressure in hydrogen-rich
materials is a tremendous achievement on this research front. This discovery
offers a route in the search for room temperature superconductivity at ambient
pressure. The superconductivity of these hydrogen-rich materials was confirmed
by the observation of zero resistance, isotope effects, the effect of a
magnetic field, and other standard properties. However, some of the experimental
features were puzzling as they were not consistent with the known
superconductivity theories. These debatable features have led to a series of
recent publications downplaying the existence of superconductivity in these
superhydrides. Here we propose a concept of massive electrons under pressure
and successfully explain all non-standard experimental observations. Our
massive electron concept explains the large effective mass of the
quasiparticles, the reason for the high critical temperatures for moderate
electron-phonon couplings, and a 3-5 orders of magnitude larger conductivity
causing a narrow resistivity broadening at the transition in the presence of
a magnetic field. We anticipate our findings will lead to new directions and
tweaks in current research in the search for ambient-pressure, room-
temperature superconductors.
## I. Introduction
Superconductivity research has been at the heart of modern condensed matter
physics and material science research for decades. As superconductors can
conduct electric current without resistance, the potential application of
superconductors in technology can have a revolutionary impact. There are two
types of superconductors that have been discovered to date: conventional and
unconventional superconductors. Conventional superconductors are understood as
materials whose superconductivity originates from electron-phonon
interactions. The properties of these phonon-driven superconductors can be
described by the Bardeen-Cooper-Schrieffer (BCS) and Migdal-Eliashberg
theories bcs . The superconductivity of unconventional superconductors is
believed to be driven by strong electron-electron correlations. The most
prominent unconventional superconductors are cuprates cuoA ; cuoB , iron
pnictides ironpA ; ironpB , and nickelates nickalateA ; nickalateB ;
nickalateC . Even though the nature of superconductivity in unconventional
materials is not fully understood, it is known that both of these
superconducting types exhibit standard properties. These properties include
the Meissner effect, upper and lower critical magnetic fields, critical
currents, and resistive transition at critical temperatures.
The longstanding challenge in superconductivity research was finding a room
temperature superconducting material. The search for room temperature
superconductivity was renewed after the discovery of unconventional copper
based superconductors with critical temperatures as high as 133 K at ambient
pressure. The BCS theory provides clues for achieving high critical
temperatures for conventional superconductors. The theory suggests that high
frequency Debye phonons, strong electron-phonon interactions, and high density
of states can enhance the critical temperatures of conventional
superconductors. Following these clues, magnesium diboride (MgB2) has been
synthesized and found to be superconducting below 39 K at ambient pressure
mgb2A ; mgb2B . The high frequency phonon spectrum due to the light elements
in MgB2 is believed to be the reason for this relatively higher critical
temperature. Based on the idea of high phonon frequency due to the light
hydrogen atom, Ashcroft proposed the possibility of having high temperature
phonon based superconductivity in hydrogen rich materials, if attractive
pairing interaction exists ashcroft1 ; ashcroft2 . Ashcroft’s proposal was
anticipated by the idea of a _chemical pre-compression_ effect proposed by
Gilman gilman . Gilman’s proposal of achieving high-temperature
superconductors came soon after the discovery of the ambient-pressure
hydrogen-rich superconductor Th4H15, with a critical temperature of 8 K THsc . Motivated
by Gilman’s idea and the discovery of a hydride superconductor, subsequent
studies on Pd–H and Pd–Cu–H systems were reported to exhibit superconductivity
below 10 K pdH . Despite the support of calculations showing that metallic
hydrogen is a good candidate for a room temperature superconductor, all
experimental efforts turned out to be negative for pure hydrogen. Therefore,
researchers shifted their efforts toward binary and ternary hydride compounds.
The density functional theory, Monte-Carlo, and other numerically based
calculations numcA ; numcB ; numcC ; numcD ; numcE ; numcF ; numcG ; numcH ;
numcI ; numcJ ; numcK and the discovery of phonon based high temperature
superconductivity in H3S at high pressure SCEX1 ignited a wealth of research
by synthesizing superhydrides at very high pressure values. To date, about a
dozen synthesized superhydrides have been shown to be near-room-temperature
superconductors at high pressure. These include phosphorous hydrides exp1 ,
lanthanum hydrides exp2 ; exp3 ; exp4 , yttrium hydrides exp5 ; exp6 ; exp7 ,
thorium hydrides exp8 , binary cerium hydrides exp9 , ternary lanthanum-
yttrium hydrides exp10 and carbonaceous-sulfur hydride ternary compounds
SCEX2 . The most notable among these compounds are the lanthanum hydride exp2
; exp3 ; exp4 and carbonaceous sulfur hydride (C-S-H) systems SCEX2 . Two
recent experiments by Drozdov _et al_ exp2 and Snider _et al_ SCEX2 report
near-room temperature superconductivity for LaH10 at pressure 267 GPa and room
temperature superconductivity for C-S-H at pressure 275 GPa. The
superconductivity of these compounds was confirmed by the observation of zero
resistance and magnetic susceptibility. Furthermore, these experiments show a
decrease in critical temperature in the presence of an external magnetic
field. The conventional nature of the superconductivity in these compounds was
confirmed by a pronounced isotope effect on the critical temperature. The
experimental estimates further support that these clathrate-like hydrides
superconductors are strongly type-II.
For both conventional and unconventional superconductors, the magnetic-field
responses of type-I and type-II materials are very different. Type-I
superconductors completely expel the magnetic field up to the temperature-
dependent critical field $H_{c}(T)$, beyond which the material becomes a
normal metal. This perfect diamagnetism can be described by a supercurrent
circulating within a thin surface layer of the superconductor. The thickness
of this surface layer is a temperature dependent material parameter known as
the London penetration depth $\lambda(T)$. On the other hand, the type-II
superconductors are perfect diamagnets only up to the lower critical field
$H_{c1}(T)<H_{c}(T)$. In the field range $H_{c1}(T)<H<H_{c2}(T)$, up to the
upper critical field, the magnetic flux can penetrate into the
material in the form of vortices and can give rise to flow resistance for the
critical current. The Ginzburg-Landau parameter $\kappa=\lambda(0)/\xi(0)$,
which is approximately temperature independent, can be used to
determine whether a superconductor is type-I or type-II. Here, the coherence
length $\xi(T)=(\xi_{0}^{-1}+l^{-1})^{-1}$ is dominated by the shorter of
the Pippard coherence length $\xi_{0}(T)$ and the mean free path $l(T)$. All
experimental evidence suggests that superhydride superconductors are type-II
as $\kappa\gg 1$.
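As a minimal numerical illustration of this classification (with purely hypothetical length scales, not measured superhydride values), one can compute $\kappa$ and compare it with the textbook boundary $1/\sqrt{2}$:

```python
import math

def ginzburg_landau_kappa(lambda_0, xi_0):
    """Ginzburg-Landau parameter kappa = lambda(0) / xi(0)."""
    return lambda_0 / xi_0

def superconductor_type(kappa):
    # The textbook boundary between type-I and type-II is kappa = 1/sqrt(2).
    return "type-II" if kappa > 1.0 / math.sqrt(2.0) else "type-I"

# Hypothetical length scales in nm (illustrative only).
kappa = ginzburg_landau_kappa(lambda_0=200.0, xi_0=2.0)  # kappa = 100 >> 1
print(kappa, superconductor_type(kappa))
```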
Most of the experimental data strongly supports the existence of
superconductivity in superhydrides. However, some of the experimental features
are puzzling and seem to violate known standard superconducting properties.
While some features support type-I superconductivity, others support type-II.
This disparity suggests an apparent coexistence of type-I and type-II behavior.
Another puzzling question is the high critical temperature with moderate
electron-phonon coupling. Theoretical calculations suggest that the
dimensionless electron-phonon coupling in superhydrides has only a moderate
value, $\Lambda\sim 2$ numcD . In general, the width of the resistive transition in
the presence of a magnetic field is expected to be large for type-II
superconductors. For example, the width of the resistive transition in MgB2 at
a magnetic field $H=0.15H_{c2}$ is about $\Delta T_{C}/T_{C}\sim 0.15\%$. In
contrast, the resistive transition width of C-S-H at the same magnetic field
is smaller than that of MgB2 by a factor of about 50 mgb2B ; hirsch1 . This
narrow transition width in the C-S-H system and other superhydrides apparently
suggests that these are type-I superconductors SCEX2 . Further, using an
experimental sample size and measured resistance, the resistivity of the C-S-H
system was calculated by Dogan _et al_ dogan . The calculation was done using
a resistivity formula derived from the four-point van der Pauw procedure.
calculated resistivity was found to fall into the poor metal/semimetal range
above the critical temperature. The resistivity below the critical temperature
was found to be 2-3 orders of magnitude lower, falling into the typical metal
range. In addition, by analyzing resistivity broadening experimental data,
Hirsch _et al_ hirsch2 argued that the zero-temperature critical current
density in C-S-H systems is five orders of magnitude larger than that of
standard superconductors. The experimental data reported in Ref. SCEX1 for
the H3S system indicates that the effective mass of the electrons at pressure
150 GPa is larger than the expected effective mass by a factor of about 10
hirsch3 . This mass enhancement is not consistent with the electron-phonon
interactions estimated for H3S, nor the theoretical calculations numcA ; numcB
; numcC ; numcD ; numcE ; numcF ; numcG . Because these
experimental features cannot be explained by the standard
superconductivity theories, a series of recent articles argue that the
superhydrides under pressure are either a unique kind of superconductors or
not superconductors at all hirsch1 ; dogan ; hirsch2 ; hirsch3 ; hirsch4 .
In this paper, we successfully answer all debatable experimental observations
above using a massive electron concept. We show that the effective mass of the
electrons exponentially increases with pressure. The mass enhancement makes
the density of states larger resulting in strong effective interactions
between electrons at high pressure. Thus, the superhydrides under pressure are
strongly interacting conventional BCS superconductors. However, the
conventional classification of type-I versus type-II is not applicable to the
superhydrides as the coherence length and the penetration depth are pressure
dependent. We show that the narrow width of the resistivity transition
originates from the flow resistivity of vortices in the presence of a magnetic
field. We find that the flux flow resistivity is exponentially smaller at high
pressure due to the pressure dependence on the coherence length.
## II. Pressure dependence on the effective mass
In this section, we briefly illustrate the pressure dependence on the
effective mass using a simplified picture. Let’s consider the pressure change
on the material unit cell $\Delta P\equiv P=P_{ex}-P_{0}$, where $P_{ex}$ and
$P_{0}$ are the applied pressure and the ambient pressure, respectively. The
volume of the unit cell shrinks under the applied pressure so the change in
volume can be written as,
$\displaystyle\frac{V-V_{0}}{V_{0}}=-K_{V}\Delta P,$ (1)
where $K_{V}$ is the compressibility and $V-V_{0}$ is the change in volume.
The onsite Coulomb repulsion $U(P)$ between electrons increases as the cell
volume decreases,
$\displaystyle U(P_{ex})-U(P_{0})=-K_{U}\frac{V-V_{0}}{V_{0}}$ (2)
$\displaystyle\frac{U(P_{ex})-U(P_{0})}{U(P_{0})}=K_{C}\,\Delta P,$
where $K_{C}=K_{U}K_{V}/U(P_{0})$ is a material dependent constant.
Approximating $U(P_{0})\rightarrow U(P)$, we find,
$\displaystyle\frac{1}{U}\frac{dU}{dP}=K_{C}.$ (3)
The pressure dependence of the onsite repulsion is then given by the solution
of this equation, $U=U_{0}e^{K_{C}P}$. The pressure dependence on the
tunneling energy or the hopping integral $t$ can also be approximated in a
similar fashion. The pressure dependence on the tunneling energy then has the
form $t=t_{0}e^{-K_{t}P}$.
The electronic part of the effective Hamiltonian for the propagation of
quasiparticles can be written as,
$\displaystyle H_{0}=\sum_{k}\epsilon_{k}c^{\dagger}_{k\sigma}c_{k\sigma},$
(4)
where $c^{\dagger}_{k\sigma}/c_{k\sigma}$ represents the creation/annihilation
of a quasiparticle of wavevector $k$ with spin $\sigma=\uparrow,\downarrow$.
Regardless of the lattice structure, the energy dispersion of the weakly
interacting quasiparticles has the form
$\epsilon_{k}=-2t\sum_{\delta}\cos(\vec{k}\cdot\vec{\delta})$, where
$\vec{\delta}$ is the nearest neighbor lattice vector. For the case of
strongly interacting electronic systems, one can consider propagation of holes
in the presence of doping in the background of anti-ferromagnetism efm1 ; efm2
. In this case, the quasiparticle dispersion has the form
$\epsilon_{k}=-(2t^{2}/U)\sum_{\delta}\cos(\vec{k}\cdot\vec{\delta})$. In the
continuity limit, the quasiparticle dispersion can be approximated by
expanding the cosine term to get $\epsilon\sim\hbar^{2}k^{2}/(2m^{\ast})$,
where $m^{\ast}$ is the effective mass of the quasiparticles and $\hbar$ is
the reduced Planck constant. The effective mass of the quasiparticle is
$m^{\ast}=\hbar^{2}/(2\delta ta_{0}^{2})$ and $m^{\ast}=\hbar^{2}U/(2\delta
t^{2}a_{0}^{2})$ for the weakly interacting electron systems and strongly
interacting hole systems, respectively. Here $a_{0}$ is the lattice constant
of the underlying host lattice. Using the pressure dependence of the
interaction parameters presented before, the effective mass of the relevant
quasiparticles responsible for superconductivity in superhydrides under
pressure can be written in the form,
$\displaystyle m^{\ast}=m_{0}e^{KP},$ (5)
where $m_{0}=m_{e}(1+\Lambda)$ with bare electron mass $m_{e}$ and
dimensionless electron-phonon coupling $\Lambda$. Notice, here $P$ is the
pressure relative to the ambient pressure, $K$ is a material dependent
parameter, and we have neglected the pressure dependence on $\Lambda$. As we
see in the following sections, neglecting pressure dependence on $\Lambda$ has
no effect on our conclusions. The material dependent parameter $K$
encapsulates the structural and lattice details of the system.
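The exponential pressure dependences $U=U_{0}e^{K_{C}P}$ and $t=t_{0}e^{-K_{t}P}$ imply, for the strongly interacting case $m^{\ast}\propto U/t^{2}$, a combined exponent $K=K_{C}+2K_{t}$. A short numerical sketch of this bookkeeping, with illustrative constants that are assumptions rather than fitted values:

```python
import math

# Assumed illustrative material constants (not values fitted in this work).
U0, t0 = 4.0, 0.5      # onsite repulsion and hopping at ambient pressure (eV)
KC, Kt = 0.004, 0.003  # pressure coefficients (1/GPa), hypothetical

def U(P):
    """Onsite repulsion under pressure, U = U0 * exp(KC * P)."""
    return U0 * math.exp(KC * P)

def t(P):
    """Hopping integral under pressure, t = t0 * exp(-Kt * P)."""
    return t0 * math.exp(-Kt * P)

def mass_ratio(P):
    """m*(P)/m*(0) for the strongly interacting case, m* proportional to U/t**2."""
    return (U(P) / t(P) ** 2) / (U0 / t0 ** 2)

# The combined exponent is K = KC + 2*Kt, so m*(P)/m*(0) = exp(K * P).
P = 150.0  # GPa
assert abs(mass_ratio(P) - math.exp((KC + 2 * Kt) * P)) < 1e-9
```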
## III. Determination of the material dependent parameter $K$
We start with the reduced BCS Hamiltonian in the mean-field approximation,
$\displaystyle
H=\sum_{k,\sigma}\xi_{k}c^{\dagger}_{k\sigma}c_{k\sigma}+\sum_{k}(\Delta_{k}c^{\dagger}_{k\uparrow}c^{\dagger}_{-k\downarrow}+\Delta_{k}^{\ast}c_{-k\downarrow}c_{k\uparrow}),$
(6)
where $\Delta_{k}=\sum_{k^{\prime}}V_{k,k^{\prime}}\langle
c_{-k^{\prime}\downarrow}c_{k^{\prime}\uparrow}\rangle$ is the superconducting
order parameter defined through the thermal expectation value $\langle
c_{-k^{\prime}\downarrow}c_{k^{\prime}\uparrow}\rangle$ with respect to the
Hamiltonian $H$. The momentum conserving effective attractive interaction
between quasiparticles $V_{k,k^{\prime}}$ originates from the electron-phonon
interaction. Notice that we are working in the grand canonical ensemble to
fix the average quasiparticle number, so we defined
$\xi_{k}=\epsilon_{k}-\mu$, where $\mu$ is the chemical potential. The
diagonalization of the Hamiltonian is straightforward using the usual
Bogoliubov transformation,
$\displaystyle
c_{k\sigma}=\cos(\theta_{k})\gamma_{k}-\sigma\sin(\theta_{k})e^{i\phi_{k}}\gamma^{\dagger}_{-k,-\sigma},$
(7)
to get,
$\displaystyle
H=\sum_{k\sigma}E_{k}\gamma^{\dagger}_{k\sigma}\gamma_{k\sigma}+\sum_{k}(\xi_{k}-E_{k}),$
(8)
where the quasiparticle energy is $E_{k}=\sqrt{\xi_{k}^{2}+\Delta_{k}^{2}}$
and the coherence factor is
$\displaystyle\cos\theta_{k}=\sqrt{\frac{E_{k}+\xi_{k}}{2E_{k}}}.$ (9)
Deriving the thermal expectation value $\langle
c_{-k^{\prime}\downarrow}c_{k^{\prime}\uparrow}\rangle$ with respect to the
diagonalized Hamiltonian and relating it to the superconducting order
parameter, the finite temperature gap equation has the form,
$\displaystyle\Delta_{k}=-\sum_{k^{\prime}}V_{k,k^{\prime}}\frac{\Delta_{k^{\prime}}}{2E_{k^{\prime}}}\tanh\biggr{(}\frac{E_{k^{\prime}}}{2k_{B}T}\biggr{)}.$
(10)
Following the traditional BCS formalism, we take the interaction to be
attractive, $V_{k,k^{\prime}}=-v/V<0$, only when $\xi_{k}$ and
$\xi_{k^{\prime}}$ are within an energy $\hbar\omega_{D}$ and zero otherwise.
For phonon-mediated superhydride superconductors, $\omega_{D}$ is the phonon
bandwidth, known as the Debye frequency. This form of the interaction allows us to
take $\Delta_{k}=\Delta e^{i\phi}$ for $|\xi_{k}|<\hbar\omega_{D}$, and
$\Delta_{k}=0$ otherwise. Assuming the density of states for both spins
$g(\epsilon)$ is slowly varying in the vicinity of chemical potential
$\mu\simeq\epsilon_{F}$, we have,
$\displaystyle
1=\frac{g(\epsilon_{F})v}{2}\int_{0}^{\hbar\omega_{D}}\frac{d\xi}{\sqrt{\xi^{2}+\Delta^{2}}}\tanh\biggr{(}\frac{\sqrt{\xi^{2}+\Delta^{2}}}{2k_{B}T}\biggr{)}.$
(11)
The density of states at the Fermi energy $\epsilon_{F}$ can be written as
$g(\epsilon_{F})=g_{0}(\epsilon_{F})e^{KP}$, where the density of states at
ambient pressure is
$\displaystyle
g_{0}(\epsilon_{F})=\biggr{(}\frac{3n}{\pi^{4}\hbar^{6}}\biggr{)}^{1/3}m_{0}.$
(12)
The quasiparticle density $n$ is related to the Fermi wavevector $k_{F}$ as
$k_{F}=(3\pi^{2}n)^{1/3}$. The gap equation can be used to solve for the zero
temperature order parameter $\Delta_{0}\equiv\Delta(T=0)$,
$\displaystyle\Delta_{0}=\frac{\hbar\omega_{D}}{\sinh\biggr{(}\frac{2}{g(\epsilon_{F})v}\biggr{)}}.$
(13)
The finite temperature gap equation at the critical temperature $T=T_{C}$ can
be used to determine the critical temperature of the system. Setting
$\Delta(T=T_{C})=0$ and changing the variable $s=\xi/(2k_{B}T_{c})$, we have,
$\displaystyle\frac{2}{g(\epsilon_{F})v}=\int_{0}^{s_{0}/2}\frac{\tanh(s)}{s}ds,$
(14)
where $s_{0}=\hbar\omega_{D}/k_{B}T_{C}$. By approximating
$\tanh(s)/s\simeq(1+s^{2})^{-1/2}$ and completing the integration, we find the
critical temperature,
$\displaystyle
k_{B}T_{C}=\frac{\hbar\omega_{D}}{2}\frac{1}{\sinh[2/g(\epsilon_{F})v]}.$ (15)
The critical temperature of the weak coupling superconductors $T_{C}(w)$ can
be estimated by the large $x=2/g(\epsilon_{F})v$ expansion of the function
$[\sinh(x)]^{-1}\rightarrow 2e^{-x}$, where we find
$\displaystyle k_{B}T_{C}(w)=\hbar\omega_{D}e^{-\frac{2}{g(\epsilon_{F})v}}.$
(16)
This is only a factor of $2e^{C}/\pi=1.134$ smaller than the well established
critical temperature of weak coupling BCS superconductors, where $C=0.577215$
is the Euler-Mascheroni constant tinkham96 .
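The two limits of Eq. (15) can be checked numerically; the sketch below (dimensionless ratios only) verifies the weak-coupling limit $1/\sinh(x)\rightarrow 2e^{-x}$, the strong-coupling limit $\sinh(x)\rightarrow x$, and the quoted prefactor $2e^{C}/\pi\approx 1.134$:

```python
import math

def tc_over_debye(x):
    """k_B*T_C / (hbar*omega_D) from Eq. (15), with x = 2/(g(eF)*v)."""
    return 0.5 / math.sinh(x)

# Weak coupling (large x): 1/sinh(x) -> 2*exp(-x), recovering Eq. (16).
x = 8.0
assert abs(tc_over_debye(x) - math.exp(-x)) / math.exp(-x) < 1e-6

# Strong coupling (small x): sinh(x) -> x, so k_B*T_C -> hbar*omega_D/(2x).
x = 1e-3
assert abs(tc_over_debye(x) - 0.5 / x) / (0.5 / x) < 1e-6

# The BCS weak-coupling prefactor 2*e^C/pi with the Euler-Mascheroni constant.
C = 0.577215
print(2 * math.exp(C) / math.pi)  # ~1.134
```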
The superhydrides under pressure are strong coupling superconductors due to
the large effective interaction parameter $g(\epsilon_{F})v$. This is due to
the large density of states entering through the effective mass. Therefore, we
find the critical temperature of the strong coupling superhydrides
superconductors at higher pressure values using the small $x$ expansion of the
function $\sinh(x)\rightarrow x$:
$\displaystyle
k_{B}T_{C}=\frac{\hbar\omega_{D}vg_{0}(\epsilon_{F})}{4}e^{KP}.$ (17)
This clearly shows that $\ln(T_{C})$ has a linear dependence on the
pressure at high pressure values, and the slope of $\ln(T_{C})$ vs $P$ is
the material dependent parameter $K$. We find the $K$ values for both the C-S-H
and H3S systems using the experimental values of critical temperature. As
shown in FIG. 1, the experimental data has a clear linear dependence on the
pressure, indicating the validity of our theory. Using a linear fit to the
experimental data, we find the $K$ values for the C-S-H system and H3S system,
$K_{CHS}=0.007/$GPa and $K_{HS}=0.021/$GPa, respectively. See FIG. 1 for
details.
Figure 1: (color online) Linear pressure dependence on $\ln{T_{C}}$ at high
pressure, where $T_{C}$ is the critical temperature. The orange squares
represent the experimental data for H3S system extracted from FIG. 1 of Ref.
SCEX1 . The blue circles represent the experimental data for carbonaceous
sulfur hydride (C-S-H) presented in Ref. SCEX2 . The solid lines are the
linear fit for experimental data.
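The extraction of $K$ amounts to a least-squares fit of $\ln(T_{C})$ versus $P$. A minimal sketch with synthetic data (the listed pressures and the generating slope are stand-in assumptions, not the measured points of FIG. 1):

```python
# Synthetic (P, T_C) points generated from ln(T_C) = ln(T0) + K*P with a known
# slope, standing in for the experimental points of FIG. 1 (hypothetical values).
K_true, lnT0 = 0.021, 3.0
P = [130.0, 140.0, 150.0, 160.0, 170.0]          # pressures in GPa
lnTc = [lnT0 + K_true * p for p in P]            # ln of critical temperatures

def fit_slope(x, y):
    """Least-squares slope of y versus x; for ln(T_C) vs P this is K."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    num = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    den = sum((xi - xbar) ** 2 for xi in x)
    return num / den

K_fit = fit_slope(P, lnTc)
assert abs(K_fit - K_true) < 1e-9  # the fit recovers the generating slope
```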
## IV. Pressure dependence on the coherence length, the London penetration
depth, and the Ginzburg-Landau parameter
Let’s start with the standard BCS formulas for the coherence length
$\xi(T,P)$, and the London penetration depth $\lambda_{L}(T,P)$ tinkham96 ,
where we make the pressure dependence, $P=P_{ex}-P_{0}$, explicit in the
definitions.
$\displaystyle\xi(T,P)=\frac{\hbar\nu_{F}}{\pi\Delta},$ (18)
where the Fermi velocity $\nu_{F}=\hbar k_{F}/m^{\ast}$. The Fermi wavevector is
related to the density of quasiparticles $n$ through
$k_{F}=(3\pi^{2}n)^{1/3}$. The London penetration depth,
$\displaystyle\lambda_{L}(T,P)=\biggr{(}\frac{m^{\ast}c^{2}}{4\pi
ne^{2}}\biggr{)}^{1/2},$ (19)
can be written in terms of the coherence length and the explicit mass
dependence,
$\displaystyle\lambda_{L}(T,P)=\biggr{(}\frac{3\hbar^{6}c^{2}}{4\pi^{2}e^{2}m_{0}^{2}}\biggr{)}^{1/2}\biggr{(}\frac{1}{\Delta^{3}\xi(T,P)^{3}}\biggr{)}^{1/2}\frac{m_{0}}{m^{\ast}},$
(20)
where $c$ is the speed of light and $e$ is the electron charge. For the
purpose of comparison, we provide the zero-temperature London penetration
depth as a fraction of its ambient pressure value,
$\displaystyle\lambda_{L0}(P)\equiv\frac{\lambda_{L}(0,P)}{\lambda_{L}(0,0)}=e^{\frac{KP}{2}}.$
(21)
When deriving this, we assume that the quasiparticle density $n$ remains the
same for all pressure values. Similarly, we provide the zero-temperature
coherence length as a fraction of its ambient pressure value,
$\displaystyle\xi_{0}(P)\equiv\frac{\xi(0,P)}{\xi(0,0)}=Ae^{-2KP},$ (22)
where we defined a constant
$A=\{4/[g_{0}(\epsilon_{F})v]\}e^{-2/[g_{0}(\epsilon_{F})v]}$. Notice the
exponential term in the definition of $A$; this is because we assume that the
ambient-pressure superhydrides are weak-coupling superconductors. The coherence
length and the London penetration depth can be used to find the pressure
dependent Ginzburg-Landau parameter,
$\displaystyle\kappa_{0}(P)\equiv\frac{\kappa(0,P)}{\kappa(0,0)}=\frac{1}{A}e^{\frac{5KP}{2}}.$
(23)
As opposed to other superconductors, we argue that the coherence length
and the London penetration depth cannot be considered as relevant length
scales for the superhydrides under pressure. This is because they
have strong pressure dependence as shown above. Thus, the classification of
type-I versus type-II may not be appropriate for superhydrides, unless one
specifies the pressure.
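A quick consistency check of the scaling relations (21)-(23): since $\kappa=\lambda_{L}/\xi$, the ratio $\kappa_{0}(P)$ must equal $\lambda_{L0}(P)/\xi_{0}(P)$ at every pressure. A sketch with an assumed value of $A$:

```python
import math

def scaling_ratios(P, K, A):
    """Zero-temperature ratios of Eqs. (21)-(23), relative to ambient pressure."""
    lam = math.exp(K * P / 2.0)                      # lambda_L0(P), Eq. (21)
    xi = A * math.exp(-2.0 * K * P)                  # xi_0(P), Eq. (22)
    kappa = (1.0 / A) * math.exp(5.0 * K * P / 2.0)  # kappa_0(P), Eq. (23)
    return lam, xi, kappa

# The constant A is a placeholder here; only the internal consistency is tested.
lam, xi, kappa = scaling_ratios(P=155.0, K=0.021, A=0.3)
assert abs(kappa - lam / xi) < 1e-9 * kappa
```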
## V. Enhancement of the density of states
In this section, we justify the validity of our theory by comparing the
density of states. First, we extract the experimental density of states from
the experimental measurements for the H3S system. The lower critical field and
the upper critical field within the BCS theory are given by tinkham96 ,
$\displaystyle
H_{C1}(T,P)=\frac{\phi_{0}}{4\pi\lambda^{2}(T,P)}\ln[\kappa(T,P)],$ (24)
and
$\displaystyle H_{C2}(T,P)=\frac{\phi_{0}}{2\pi\xi^{2}(T,P)},$ (25)
where $\phi_{0}=hc/2e$ is the flux quantum. These two can be combined into a
single equation to determine the Ginzburg-Landau parameter using the lower and
upper critical fields,
$\displaystyle\kappa^{2}(T,P)=\frac{H_{C2}(T,P)}{2H_{C1}(T,P)}\ln[\kappa(T,P)].$
(26)
Once the Ginzburg-Landau parameter is known, the thermodynamic critical field,
$\displaystyle H_{C}(T,P)=\frac{\phi_{0}}{2\sqrt{2}\lambda_{L}(T,P)\xi(T,P)},$
(27)
can be determined by,
$\displaystyle H_{C}(T,P)=\frac{H_{C2}(T,P)}{\sqrt{2}\kappa(T,P)}.$ (28)
The zero temperature limit of the thermodynamic critical field is related to
the density of states and the zero temperature gap function,
$\displaystyle H_{C}(0,P)=\sqrt{2\pi g(\epsilon_{F})}\Delta_{0}.$ (29)
The pressure dependent superconducting gap $\Delta_{0}$ is related to the
pressure dependent critical temperature, $\Delta_{0}=2k_{B}T_{C}$, note the
factor $2$ on the right-hand side in our theory as opposed to the factor of
$1.763$ in standard weak coupling BCS approximation. Finally, the experimental
density of states at a given pressure can be determined by using the
experimental determination of thermodynamic critical field and the critical
temperature,
$\displaystyle g(\epsilon_{F})=\frac{H^{2}_{C}(0,P)}{8\pi(k_{B}T_{C})^{2}}.$
(30)
Using the magnetization measurements, the lower critical field for the H3S
system has been extracted to be $H_{C1}(0)=0.03T$ by Drozdov _et al_ SCEX1 .
However, using the sample geometry of the NRS experiment nmr , Hirsch _et al_
hirsch3 argued that $H_{C1}>2.5T$. Using this lower bound for the lower
critical field and the experimental value for the upper critical field
$H_{C2}=70T$, Eq. (26) gives us $\kappa=4.6$ for the H3S system. Equation (28)
then yields $H_{C}(0)=10.8T$. We then use the Eq. (30) to find the density of
states for both spins hirsch3 :
$\displaystyle g(\epsilon_{F})=1.053/\mathrm{eV\,\AA^{3}}.$ (31)
This density of states is about 28 times larger than that of the ambient-
pressure sulfur hydride, $g_{0}(\epsilon_{F})=0.038/\mathrm{eV\,\AA^{3}}$ dos . Using
$K=0.021/$GPa, for the density of states of the H3S system at pressure
$P=155$ GPa we find $g(\epsilon_{F})=g_{0}(\epsilon_{F})e^{KP}\approx
0.911/\mathrm{eV\,\AA^{3}}$. This excellent agreement justifies our massive electron
concept for the superhydrides under pressure.
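The implicit equation (26) can be solved numerically for $\kappa$. A bisection sketch using the bounds quoted above for H3S ($H_{C1}=2.5$ T as the lower bound and $H_{C2}=70$ T):

```python
import math

def solve_kappa(Hc1, Hc2, lo=1.1, hi=100.0, tol=1e-10):
    """Solve kappa**2 = (Hc2 / (2*Hc1)) * ln(kappa), Eq. (26), by bisection.

    The bracket [lo, hi] is chosen to contain only the physical root kappa > 1.
    """
    f = lambda k: k * k - (Hc2 / (2.0 * Hc1)) * math.log(k)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Bounds quoted in the text for H3S: H_C1 > 2.5 T and H_C2 = 70 T.
kappa = solve_kappa(Hc1=2.5, Hc2=70.0)
Hc = 70.0 / (math.sqrt(2.0) * kappa)  # thermodynamic critical field, Eq. (28)
print(round(kappa, 2), round(Hc, 1))  # close to kappa ~ 4.6 and H_C ~ 10.8 T
```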
## VI. Resistive broadening at the superconducting transition
Type-II superconductors show a broadening in resistivity at the
superconducting transition in the presence of an applied magnetic field. Below
the critical temperature, when the applied magnetic field is smaller than the
upper critical field, but larger than the lower critical field, the material
enters into the mixed phase. In the mixed phase, the magnetic field penetrates
into the material as flux quanta. The flux bundles appear as vortices with a
normal-conducting core, forced by the diverging superfluid velocity. These
vortices interact through repulsive forces, mediated by the vortex currents,
but stay together due to the magnetic pressure.
In the mixed phase, the circulating current causes the motion of the vortices.
This motion causes the flux-flow resistivity which broadens the
superconducting transition. The resistivity, caused by the dissipation,
originates from the normal core current in the vortex and the supercurrent
around it Caroli64 . The vortex motion creates a disturbance to the
supercurrent around the vortex, which results in the creation of an electric
field distribution bardeen65 . To have the continuity of the electric field, a
normal current circulates within the core of the vortex. This normal core
current creates the first dissipation. The electric field, which is
perpendicular to both vortex direction and the vortex velocity, is created due
to the motion of the vortices in a magnetic field kim69 ; josph65 . The second
source of dissipation is created by this electric field outside the vortex
core tinkham96 . It has been shown that both of these dissipation channels have a
similar order of magnitude tinkham96 ; bardeen65 .
When a vortex is in motion, two forces can act on the vortex, the Lorentz
force and the frictional force. The Lorentz force on a vortex includes both a
Lorentz-like force caused by the magnetic field pressure gradient in an
external current tinkham64 , and the Magnus contribution caused by the
relative motion between the vortex and the supercurrent deGennes64 ;
Nozieres66 ; Brandt95 . The Lorentz force is the only external force acting on
a vortex in a clean system. However, the materials always have disorder and
defects causing a frictional force on a moving vortex. This frictional force
is important in restoring the vortex motion disturbed by the dissipation in
the vortex core Kopnin76 . In a clean enough system, the vortex flow gives
rise to a flux flow resistance due to these forces acting on a vortex. Using
the condition for the dynamical equilibrium where the frictional force is
equal to the Lorentz force, the flow resistivity has been derived Stranad64 ;
thesis ,
$\displaystyle\rho(T,P)=\frac{2\pi\xi^{2}(T,P)\rho_{n}(T,P)B}{\phi_{0}},$ (32)
where $B$ is the applied magnetic field and
$\rho_{n}(T,P)=m^{\ast}/(ne^{2}\tau)$ is the normal-state resistivity with
$\tau$ being the relaxation time. Taking the zero-temperature limits, the flux
flow resistivity as a fraction of its ambient pressure value,
$\displaystyle\rho_{0}(P)\equiv\frac{\rho(0,P)}{\rho(0,0)}=A^{2}e^{-3KP}.$
(33)
Note the values of the exponentially decaying factor $e^{-3KP}$ for the H3S
and C-S-H systems at their highest critical temperatures: $7.2\times 10^{-5}$
and $3.5\times 10^{-3}$, respectively. These are almost 4 orders and 3 orders
of magnitude smaller than the corresponding ambient-pressure flux-flow
resistivities, respectively. This low resistivity gives a larger conductivity
for the supercurrent at the transition, therefore the resistivity broadening
is very small as evident by the experiments.
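For orientation, Eq. (32) is straightforward to evaluate numerically. The sketch below uses illustrative SI values for $\xi$, $\rho_{n}$ and $B$ — these are our placeholders, not parameters fitted in this work:

```python
from math import pi

PHI0 = 2.067833848e-15  # magnetic flux quantum phi_0 (Wb)

def flux_flow_resistivity(xi, rho_n, B):
    """Eq. (32): rho = 2*pi*xi^2*rho_n*B / phi_0 (SI units)."""
    return 2 * pi * xi**2 * rho_n * B / PHI0

# Illustrative (not fitted) values: xi = 2 nm, rho_n = 1e-7 ohm*m, B = 1 T
rho = flux_flow_resistivity(2e-9, 1e-7, 1.0)
print(rho)  # of order 1e-9 ohm*m for these inputs
```

Scaling the magnetic field or the squared coherence length scales the flux-flow resistivity linearly, which is the mechanism behind the strong pressure suppression discussed above.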
## VII. Conclusions
We proposed a massive-electron scheme to explain the non-standard properties
of high-temperature superhydride superconductors. We showed that
the effective mass of the electron quasiparticles increases exponentially with
applied pressure, consistent with the experimental critical temperatures. Our
investigation showed that the superhydrides are strongly interacting,
conventional BCS superconductors at high pressure due to their large density
of states. The estimated density of states and conductivity at the transition, in
the presence of a magnetic field, are consistent with the experimental
observations. We showed that the coherence length, the London penetration
depth, and the Landau–Ginsburg parameter all have strong pressure dependence,
hence the traditional categorization of type-I versus type-II superconductors
is not applicable to superhydrides. Further, we showed that the conductivity
at the superconducting transition in the presence of magnetic field is 3-5
orders of magnitude larger than that of other superconductors. Therefore, the
superconducting transition width is very narrow, similar to type-I
superconductors, as seen in experiments. This larger conductivity is due to
the strong pressure dependence of the coherence length. In addition to the H3S
and C-S-H systems, the LaH10 system also shows near-room-temperature
superconductivity under pressure exp2 . Although we have not compared LaH10
data with our theory, we anticipate that it applies to this system as well.
## VIII. ACKNOWLEDGMENTS
We are grateful to Dr. Ranga Dias and his collaborators for sharing their
experimental data with us. We further acknowledge valuable communications with
Dr. Dias.
## References
* (1) J. Bardeen, L. N. Cooper, and J. R. Schrieffer, _Theory of Superconductivity_ , Phys. Rev. 108, 1175 (1957).
* (2) Bednorz, J.G., Muller, K.A. _Possible high T c superconductivity in the Ba-La-Cu-O system_. Z. Physik B - Condensed Matter 64, 189–193 (1986).
* (3) C.W. Chu, L.Z. Deng, B. Lv, _Hole-doped cuprate high temperature superconductors_ , Physica C: Superconductivity and its Applications, 514, 2015, Pages 290-313.
* (4) Y. Kamihara, T. Watanabe, M. Hirano, and H. Hosono, _Iron-Based Layered Superconductor La[O 1-xFx]FeAs ($x$ = 0.05 - 0.12) with Tc = 26 K_, J. Am. Chem. Soc., 130, 3296, (2008).
* (5) Hideo Hosono, Kazuhiko Kuroki, _Iron-based superconductors: Current status of materials and pairing mechanism_ , Physica C: Superconductivity and its Applications, 514, 2015, Pages 399-422.
* (6) Li, D., Lee, K., Wang, B.Y. et al. _Superconductivity in an infinite-layer nickelate_. Nature 572, 624–627 (2019)
* (7) Hepting, M., Li, D., Jia, C.J. et al. _Electronic structure of the parent compound of superconducting infinite-layer nickelates_. Nat. Mater. 19, 381–385 (2020).
* (8) Pacchioni, G. _Nickelates enter the scene_. Nat Rev Mater 5, 171 (2020).
* (9) Nagamatsu, J., Nakagawa, N., Muranaka, T. _et al._ _Superconductivity at 39 K in magnesium diboride._ Nature 410, 63–64 (2001).
* (10) P. C. Canfield, S. L. Bud'ko, D. K. Finnemore, _An overview of the basic physical properties of MgB 2_, Physica C: Superconductivity, 385, 1–2 (2003).
* (11) N. W. Ashcroft, _Hydrogen Dominant Metallic Alloys: High Temperature Superconductors?_ ,Phys. Rev. Lett. 92, 187002 (2004).
* (12) N. W. Ashcroft, _Metallic Hydrogen: A High-Temperature Superconductor?_ , Phys. Rev. Lett. 21, 1748 (1968).
* (13) J. J. Gilman, _Lithium Dihydrogen Fluoride—An Approach to Metallic Hydrogen_ , Phys. Rev. Lett. _26_ , 546 (1971).
* (14) C. B. Satterthwaite and I. L. Toepke, _Superconductivity of hydrides and deuterides of thorium_ , Phys. Rev. Lett. 25, 741 (1970).
* (15) T. Skoskiewicz, _Superconductivity in the palladium-hydrogen and palladium-nickel-hydrogen systems_ , Phys. Status Solidi A 11, K123 (1972). 10.1002/pssa.2210110253
* (16) Yinwei Li, Jian Hao, Hanyu Liu, Yanling Li, and Yanming Ma, _The metallization and superconductivity of dense hydrogen sulfide_ , J. Chem. Phys. 140, 174712 (2014).
* (17) Duan, D., Liu, Y., Tian, F. et al. _Pressure-induced metallization of dense (H2S)2H2 with high-Tc superconductivity_ , Sci Rep 4, 6968 (2014).
* (18) N. Bernstein, C. Stephen Hellberg, M. D. Johannes, I. I. Mazin, and M. J. Mehl, _What superconducts in sulfur hydrides under pressure and why_ , Phys. Rev. B 91, 060511(R) (2015).
* (19) Ion Errea, Matteo Calandra, Chris J. Pickard, Joseph Nelson, Richard J. Needs, Yinwei Li, Hanyu Liu, Yunwei Zhang, Yanming Ma, and Francesco Mauri, _High-Pressure Hydrogen Sulfide from First Principles: A Strongly Anharmonic Phonon-Mediated Superconductor_ , Phys. Rev. Lett. 114, 157004 (2015).
* (20) Flores-Livas, J., Sanna, A. and Gross, E. _High temperature superconductivity in sulfur and selenium hydrides at high pressure_ , Eur. Phys. J. B 89, 63 (2016).
* (21) D. A. Papaconstantopoulos, B. M. Klein, M. J. Mehl, and W. E. Pickett, _Cubic H 3S around 200 GPa: An atomic hydrogen superconductor stabilized by sulfur _, Phys. Rev. B 91, 184511 (2015).
* (22) E. J. Nicol and J. P. Carbotte, _Comparison of pressurized sulfur hydride with conventional superconductors_ , Phys. Rev. B 91, 220507(R) (2015).
* (23) Jose A. Flores-Livas, Maximilian Amsler, Christoph Heil, _et al_ , _Superconductivity in metastable phases of phosphorus-hydride compounds under high pressure_ , Phys. Rev. B 93, 020508(R) (2016).
* (24) Feng Peng, Ying Sun, Chris J. Pickard, Richard J. Needs, Qiang Wu, and Yanming Ma, _Hydrogen Clathrate Structures in Rare Earth Hydrides at High Pressures: Possible Route to Room-Temperature Superconductivity_ , Phys. Rev. Lett. 119, 107001 (2017).
* (25) Hanyu Liu, Ivan I. Naumov, Roald Hoffmann, N. W. Ashcroft, and Russell J. Hemley, _Potential high-T c superconducting lanthanum and yttrium hydrides at high pressure_, PNAS, 114, 6990-6995 (2017).
* (26) Eva Zurek and Tiange Bi, _High-temperature superconductivity in alkaline and rare earth polyhydrides at high pressure: A theoretical perspective_ , J. Chem. Phys. 150, 050901 (2019).
* (27) Drozdov, A., Eremets, M., Troyan, I. _et al_. _Conventional superconductivity at 203 kelvin at high pressures in the sulfur hydride system_. Nature 525, 73–76 (2015).
* (28) A.P. Drozdov, M. I. Eremets, I. A. Troyan, _Superconductivity above 100 K in PH3 at high pressures_ , arXiv:1508.06224 (2015).
* (29) Drozdov, A.P., Kong, P.P., Minkov, V.S. _et al_. _Superconductivity at 250 K in lanthanum hydride under high pressures_. Nature 569, 528–531 (2019).
* (30) Maddury Somayazulu, Muhtar Ahart, Ajay K. Mishra, _et al_ , _Evidence for Superconductivity above 260 K in Lanthanum Superhydride at Megabar Pressures_ , Phys. Rev. Lett. 122, 027001 (2019).
* (31) A. D. Grockowiak, M. Ahart, T. Helm, _et al_ , _Hot Hydride Superconductivity above 550 K_ , arXiv:2006.03004 (2020).
* (32) P. P. Kong, V. S. Minkov, M. A. Kuzovnikov, _et al_ , _Superconductivity up to 243 K in yttrium hydrides under high pressure_ , arXiv:1909.10482 (2019).
* (33) I. A. Troyan, D. V. Semenok, A. G. Kvashnin, A. V. Sadakov, O. A. Sobolevskiy, V. M. Pudalov, A. G. Ivanova, V. B. Prakapenka, E. Greenberg, A. G. Gavriliuk _et al._ ,_Anomalous high-temperature superconductivity in YH 6_, Adv. Mater. 33, 2006832 (2021).
* (34) E. Snider, N. Dasenbrock-Gammon, R. McBride, X. Wang, N. Meyers, K. V. Lawler, E. Zurek, A. Salamat, and R. P. Dias, _Synthesis of Yttrium Superhydride Superconductor with a Transition Temperature up to 262 K by Catalytic Hydrogenation at High Pressures_ , Phys. Rev. Lett. 126, 117003 (2021).
* (35) Dmitry V. Semenok, Alexander G. Kvashnin, Anna G. Ivanova, _et al_ , _Superconductivity at 161 K in thorium hydride ThH 10: Synthesis and properties_, Materials Today, 33, 36 (2020).
* (36) Wuhao Chen, Dmitrii V. Semenok, Xiaoli Huang, Haiyun Shu, Xin Li, Defang Duan, Tian Cui, Artem R. Oganov, _High-Temperature Superconductivity in Cerium Superhydrides_ , arXiv:2101.01315 (2021).
* (37) D. V. Semenok, I. A. Troyan, A. G. Kvashnin, A. G. Ivanova, M. Hanfland, A. V. Sadakov, O. A. Sobolevskiy, K. S. Pervakov, A. G. Gavriliuk, I. S. Lyubutin _et al_.,_Superconductivity at 253 K in lanthanum-yttrium ternary hydrides_ , Mater. Today (2021).
* (38) Snider, E., Dasenbrock-Gammon, N., McBride, R. _et al_. _Room-temperature superconductivity in a carbonaceous sulfur hydride_. Nature 586, 373–377 (2020).
* (39) J.E. Hirsch, F. Marsiglio, _Absence of high temperature superconductivity in hydrides under pressure_ , arXiv:2010.10307 (2020).
* (40) M. Dogan and M. L. Cohen, _Anomalous behavior in highpressure carbonaceous sulfur hydride_ , Physica C 583, 1353851 (2021).
* (41) J. E. Hirsch and F. Marsiglio, _Nonstandard superconductivity or no superconductivity in hydrides under high pressure_ , Phys. Rev. B 103, 134505 (2021).
* (42) J. E. Hirsch and F. Marsiglio, _Meissner effect in nonstandard superconductors_ , Physica C 587, 1353896 (2021).
* (43) J. E. Hirsch and F. Marsiglio, _Absence of magnetic evidence for superconductivity in hydrides under high pressure_ , Physica C 584, 1353866 (2021).
* (44) Kerson Huang and Efstratios Manousakis, _Antiferromagnetic order and high-temperature superconductivity_ , Phys. Rev. B 36, 8302 (1987).
* (45) J. E. Hirsch, _Antiferromagnetism, localization, and pairing in a two-dimensional model for CuO 2_, Phys. Rev. Lett. 59, 228 (1987).
* (46) M. Tinkham, _Introduction to Superconductivity_ , 2nd ed. (McGraw-Hill, Singapore, 1996)
* (47) I. Troyan, A. Gavriliuk, R. Ruffer _et al_ , _Observation of superconductivity in hydrogen sulfide from nuclear resonant scattering_ , Science 351, 1303 (2016).
* (48) J. A. Flores-Livas, L. Boeri, A. Sanna _et al_ , _A perspective on conventional high-temperature superconductors at high pressure: Methods and materials_ , Physics Reports, 856, 1-78, (2020).
* (49) Caroli, C., P. G. de Gennes, and J. Matricon, _Bound fermion states on a vortex line in a type II superconductor_ , Phys. Lett. 9, 307 (1964).
* (50) Bardeen, J. and M. J. Stephen, _Theory of the motion of vortices in superconductors_ , Phys. Rev. 140, A 1197–1207 (1965).
* (51) Kim, Y. B. and M. J. Stephen, _Flux flow and irreversible effects_ , in Superconductivity, R. D. Parks (ed.), vol. 2, pp. 1107–1165 (Marcel Dekker, Inc., New York, 1969).
* (52) Josephson, B. D. _Potential differences in the mixed state of type II superconductors_ , Phys. Lett. 16, 242 (1965).
* (53) M. Tinkham, _Viscous flow of flux in type-II superconductors_ , Phys. Rev. Lett. 13, 804 (1964).
* (54) de Gennes, P. G. and J. Matricon, _Collective modes of vortex lines in superconductors of the second kind_ , Rev. Mod. Phys. 36, 45 (1964).
* (55) Nozieres, P. and W. F. Vinen, _The motion of flux lines in type II superconductors_ , Philos. Mag. 14, 667–688 (1966).
* (56) Brandt, E. H., _The flux-line lattice in superconductors_ , Rep. Prog. Phys. 58, 1465–1594 (1995).
* (57) Kopnin, N. B. and V. E. Kravtsov, _Conductivity and Hall effect of pure type-II superconductors at low temperatures_ , Pis’ma Zh. Eksp. Teor. Fiz. 23, 631 (1976a). [English translation: Sov. Phys.–JETP Lett. 23, 578 (1976)].
* (58) Strnad, A. R., C. F. Hempstead, and Y. B. Kim, _Dissipative mechanism in type-II superconductors_ , Phys. Rev. Lett. 13, 794 (1964).
* (59) A. Rydh, Vortex properties from resistive transport measurements on extreme type-II superconductors, Ph.D. thesis, Solid State Physics, Royal Institute of Technology (KTH), Stockholm, Sweden (2001).
# Rees algebras of ideals of star configurations
A. Costantini, B. Drabkin, L. Guerrieri
University of California, Riverside, Riverside CA 92521, USA. _e-mail_ : [email protected]
University of Technology and Design, Singapore, Singapore. _e-mail_ : [email protected]
University, Instytut Matematyki, 30-348 Kraków, Poland. _e-mail_ : [email protected]
###### Abstract
In this article we study the defining ideal of Rees algebras of ideals of star
configurations. We characterize when these ideals are of linear type and
provide sufficient conditions for them to be of fiber type. In the case of
star configurations of height two, we give a full description of the defining
ideal of the Rees algebra, by explicitly identifying a minimal generating set.
## 1 Introduction
Ideals of star configurations arise in algebraic geometry, in connection with
the study of intersections of subvarieties or subschemes in a projective
space. Given a family of hypersurfaces meeting properly in $\mathbb{P}^{n}$, a
_star configuration of codimension $c$_ is the union of all the codimension
$c$ complete intersection subschemes obtained by intersecting $c$ of the
hypersurfaces (see [15]). The terminology derives from the special case of ten
points located at pairwise intersections of five lines in $\mathbb{P}^{2}$,
with the lines positioned in the shape of a star.
From a commutative algebra perspective, ideals defining star configurations
represent an interesting class, since a great amount of information is known
about their free resolutions, Hilbert functions and symbolic powers (see for
instance [14, 15, 12, 23, 25, 2, 3, 29, 24]). In this article we study their
Rees algebras, about which little is currently known (see for instance [19,
13, 27, 5]).
If $I=(g_{1},\ldots,g_{\mu})$ is an ideal in a Noetherian ring $R$, the _Rees
algebra_ of $I$ is the subalgebra
$\mathcal{R}(I)\coloneq R[It]=R[g_{1}t,\dots,g_{\mu}t]\subseteq R[t]$
of the polynomial ring $R[t]$. In particular, $\,g_{1}t,\dots,g_{\mu}t\,$ are
$R$-algebra generators of $\mathcal{R}(I)$, and the algebraic structure of
$\mathcal{R}(I)$ is classically understood by determining the ideal of
relations among these generators. The latter is called the _defining ideal_ of
the Rees algebra, and its generators are called the _defining equations_ of
$\mathcal{R}(I)$. Geometrically, $\mathrm{Proj}(\mathcal{R}(I))$ is the blow-
up of the affine scheme $X=\mbox{\rm Spec}(R)$ along the subscheme $V(I)$.
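For readers who want to experiment, the defining ideal of $\mathcal{R}(I)$ can be computed by eliminating $t$ from the relations $T_{i}-g_{i}t$ via a lex Gröbner basis. A minimal sketch in Python/SymPy for the toy ideal $I=(x,y)$ — our choice of example, not one from this paper:

```python
from sympy import symbols, groebner

t, x, y, T1, T2 = symbols('t x y T1 T2')

# Present R[It] for I = (x, y) by T1 -> x*t, T2 -> y*t, then eliminate t:
# a lex Groebner basis with t ordered first contains the elimination ideal.
G = groebner([T1 - x*t, T2 - y*t], t, x, y, T1, T2, order='lex')
defining = [g for g in G.exprs if not g.has(t)]
print(defining)  # the single Koszul relation, x*T2 - y*T1 up to sign
```

Here the only defining equation is linear, so $(x,y)$ is of linear type; larger examples (see Theorem 2.5 below in the original numbering) produce non-linear equations as well.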
Determining the defining ideal of Rees algebras is usually difficult. Indeed,
although the defining equations of degree one can be easily determined from a
presentation matrix of the given ideal, a full understanding of the defining
ideal of $\mathcal{R}(I)$ often requires prior knowledge of a free resolution
of $I$ and of its powers $I^{m}$. On the other hand, only a few classes of
ideals have well-understood free resolutions, and usually a free resolution
for $I$ does not provide information on the free resolutions of its powers
$I^{m}$. Nevertheless, the problem becomes manageable if one imposes algebraic
conditions on a presentation matrix of $I$ (see for instance [30, 26, 22]),
especially when one can exploit methods from algebraic combinatorics (see for
instance [32, 19, 11, 8, 13, 16, 1]).
The rich combinatorial structure of ideals of star configurations sometimes
allows one to deduce information about their Rees algebras. For instance, _monomial
star configurations_ , which are constructed by choosing the hypersurfaces to be
coordinate hyperplanes in $\mathbb{P}^{n}$, are monomial ideals associated
with discrete polymatroids, and hence are of fiber type by work of Herzog,
Hibi and Vladoiu (see [19, 3.3]). Recall that an ideal $\,I\subseteq
R=K[x_{1},\ldots,x_{n}]\,$ is said to be of _fiber type_ if the non-linear
equations of the Rees algebra $\mathcal{R}(I)$ are given by the defining
equations of the _fiber cone_ $\,F(I)\coloneq\mathcal{R}(I)\otimes_{R}K$.
In order to study _linear star configurations_ , which are constructed
by choosing arbitrary hyperplanes in $\mathbb{P}^{n}$, one can instead exploit
the combinatorial properties of hyperplane arrangements. In particular,
Garrousian, Simis and Tohăneanu proved that height-two ideals of this kind
are of fiber type, and provided a (non-minimal) generating set for the
defining ideal of their Rees algebra (see [13, 4.2 and 3.5]). Similar results
were obtained for linear star configurations of height three in
$\mathbb{P}^{2}$ by Burity, Tohăneanu and Xie (see [5, 3.4 and 3.5]), who also
conjectured that ideals of linear star configurations of any height are of
fiber type.
In this context, it is then natural to ask the following question.
###### Question 1.1.
Let $\mathcal{F}=\\{F_{1},\ldots,F_{t}\\}$ be a family of homogeneous
polynomials in $K[x_{1},\ldots,x_{n}]$ and let $I_{c,\mathcal{F}}$ be the
ideal of the star configuration of height $c$ obtained from the hypersurfaces
defined by the $F_{i}$’s. Under what conditions on $\mathcal{F}$ is
$I_{c,\mathcal{F}}$ of fiber type?
Although in general ideals of star configurations may not be of fiber type, we
show that this is always the case when the elements of $\mathcal{F}$ form a
regular sequence. More precisely, our first main result (see 3.3 and 3.4) is
the following.
###### Theorem 1.2.
Let $\mathcal{F}=\\{F_{1},\ldots,F_{t}\\}$ be a homogeneous regular sequence
in $K[x_{1},\ldots,x_{n}]$. Let $I$ be the ideal of a star configuration of
height $c\geq 2$ constructed on the hypersurfaces defined by the elements of
$\mathcal{F}$. Then for any $m\geq 1$, $I^{m}$ is of fiber type. Moreover, the
defining equations of the fiber cone $F(I^{m})$ have degree at most two.
Our key observation is that, under these assumptions, $I$ and its powers
$I^{m}$ are generated by _monomials in the $F_{i}$’s_, i.e. elements of the
form $\,F_{1}^{i_{1}}\cdots F_{t}^{i_{t}}$. Hence, the defining ideal of the
Rees algebra of $I^{m}$ can be deduced from its Taylor resolution (see [28,
Chapter IV]). This method was previously used by several authors to study Rees
algebras of squarefree monomial ideals (see for instance [32, 11, 21, 16]). We
remark that the content of 1.2 was already known in the case when
$\mathcal{F}=\\{x_{1},\ldots,x_{n}\\}$ (see [19, 3.3] and [18, 5.3(b)]).
However, our proof is substantially different, since we only perform algebraic
manipulations on the generators of $I^{m}$, while the proof of [18, 5.3(b)]
heavily relies on the use of Gröbner bases and monomial orders (via results of
De Negri [7, 2.5 and 2.6], see [18, 5.2 and 5.3]).
In light of 1.2, it is natural to explore how moving away from the
assumption that the elements of $\,\mathcal{F}=\\{F_{1},\ldots,F_{t}\\}\,$
form a regular sequence affects the structure of the defining ideal of the
Rees algebra $\mathcal{R}(I_{c,\mathcal{F}})$. Our second main result (see 4.4
and 4.5) shows that the length of a maximal regular sequence in the family
$\mathcal{F}$ characterizes the linear-type property of ideals of linear star
configurations. Recall that an ideal is said to be of _linear type_ if the
defining ideal of its Rees algebra only consists of linear equations.
Going further, when the $F_{i}$’s are all linear, in 4.10 we characterize the
regular sequences contained in $\mathcal{F}$ in terms of the matrix whose
entries are the coefficients of the $F_{i}$’s. This turns out to be
particularly useful in the case of linear star configurations of height two.
Indeed, an ideal $I$ of this kind is perfect, hence it is classically known
that the maximal minors of the _Jacobian dual_ of a Hilbert-Burch matrix of
$I$ are defining equations of the Rees algebra $\mathcal{R}(I)$ (see Section 2
for the definition of Jacobian dual matrices). Thanks to 4.10, we can
interpret the defining ideal described by Garrousian, Simis and Tohăneanu in
[13, 4.2 and 3.5] in terms of the associated primes of the ideal of maximal
minors of the Jacobian dual. More precisely, our third main result (see 6.5 and
6.14) is the following.
###### Theorem 1.3.
Let $I$ be the ideal of a linear star configuration of height two. Then, the
defining ideal of the Rees algebra $\mathcal{R}(I)$ is
$\mathcal{L}+\mathcal{P}$, where $\mathcal{L}$ consists of linear equations
and (under mild assumptions) $\mathcal{P}$ is the only associated prime of the
ideal of maximal minors of a Jacobian dual for $I$ that is not generated by
monomials.
The proof of 1.3 proceeds in three steps. First, we identify a minimal
generating set for the ideal of maximal minors of a Jacobian dual for $I$ (see
5.4). Next, we prove that suitable irreducible factors of these minimal
generators span all the non-linear equations described in [13] (see 6.5),
hence they are the minimal non-linear equations of the Rees algebra. Finally,
using 4.10 and under mild assumptions on the matrix of coefficients of the
$F_{i}$’s, we provide a primary decomposition of the ideal of maximal minors
of the Jacobian dual and show that $\mathcal{P}$ satisfies the required
property (see 6.14). Our approach is entirely algebraic, combining linear
algebra with divisibility arguments. As a byproduct, it allows us to identify
the degrees of each non-linear generator of the defining ideal of
$\mathcal{R}(I)$ (see Remark 6.6), which was not obvious from the
combinatorial arguments appearing in the proof of [13, 3.5].
We remark that our methods can be potentially extended to ideals of linear
star configurations of height greater than two, since the notion of Jacobian
dual matrix is defined with no restriction on the height of the ideal under
examination (see [31, p. 191]). Moreover, it seems reasonable to believe that
the defining ideal of the fiber cone can be determined from the associated
primes of the ideal of maximal minors of the Jacobian dual matrix in other
cases as well. In fact, the defining ideal of the fiber cone is sometimes
known to coincide with the radical of the ideal of maximal minors of a
Jacobian dual. This is, for instance, the case for certain equigenerated
ideals with a linear presentation (including perfect ideals of height three
that are linearly presented, see [22, 7.1, 7.2 and 7.4]), or certain
equigenerated homogeneous ideals of arbitrary height (see [19, 3.5]).
We now describe how this article is structured.
In Section 2 we collect the necessary background on ideals of star
configurations and Rees algebras that we will use throughout. In Section 3 we
study the Rees algebras of (powers of) ideals of star configurations defined
with respect to a regular sequence $\,\mathcal{F}=\\{F_{1},\ldots,F_{t}\\}$.
The main results of this section are 3.3 and 3.4.
Section 4 is devoted to the linear type property of ideals of star
configurations. In particular, 4.5 provides a criterion for such an ideal to
be of linear type. In the case of linear star configurations, we also fully
characterize when these ideals are of linear type locally up to a certain
height (see 4.4 and 4.5).
In the remaining sections we focus on linear star configurations of height
two. In Section 5 we construct a minimal generating set for the ideal of
maximal minors of their Jacobian dual, which we exploit in Section 6 in order
to characterize the defining ideal of the Rees algebra of linear star
configurations of height two (see 6.5 and 6.14).
## 2 Background
In this section we collect the necessary background information on ideals of
star configurations and Rees algebras.
### 2.1 Star configurations
Throughout this article $R=K[x_{1},\ldots,x_{n}]\,$ denotes a standard graded
polynomial ring over a field $K$ and
$\,\mathcal{F}=\\{F_{1},\ldots,F_{t}\\}\,$ denotes a set of homogeneous
elements in $R$. For any integer $c\,$ with $\,1\leq c\leq t$, let
$I_{c,\mathcal{F}}=\bigcap_{1\leq i_{1}<\ldots<i_{c}\leq
t}(F_{i_{1}},\ldots,F_{i_{c}}).$
###### Definition 2.1.
If $\,1\leq c\leq\mathrm{min}\\{n,t\\}$ and any subset of $\mathcal{F}$ of
cardinality $c+1$ is a regular sequence, then $\,I_{c,\mathcal{F}}\,$ is
called the _ideal of the star configuration of height $c$ on the set
$\mathcal{F}$_.
We say that $\,I_{c,\mathcal{F}}\,$ defines a _linear star configuration_ if
in addition all the $F_{i}$ are homogeneous of degree one, and a _monomial
star configuration_ if $\,\mathcal{F}=\\{x_{1},\ldots,x_{n}\\}$.
The following proposition summarizes useful results about ideals of star
configurations.
###### Proposition 2.2.
Let $I_{c,\mathcal{F}}$ be the ideal of the star configuration of height $c$
on the set $\,\mathcal{F}=\\{F_{1},\ldots,F_{t}\\}$. For $s\leq t$, denote
$\,\displaystyle{J_{s,\mathcal{F}}=\sum_{1\leq i_{1}<\ldots<i_{s}\leq
t}(F_{i_{1}}\cdots F_{i_{s}})}$. Then:
1. (1)
([15, 2.3]) $\,\displaystyle{\,I_{c,\mathcal{F}}=J_{t-c+1,\mathcal{F}}}$. In
particular, the minimal number of generators of $I_{c,\mathcal{F}}$ is
$\,\mu(I_{c,\mathcal{F}})=\binom{t}{t-c+1}\geq t,\,$ and
$\,\mu(I_{c,\mathcal{F}})=t\,$ if $\,c=2$.
2. (2)
([15, 3.3 and 3.5]) $\,I_{c,\mathcal{F}}\,$ is a perfect ideal and has a
linear resolution.
3. (3)
([5, 2.2 and 3.1] and [6, 3.1, 5.2 and 5.3]) If the $F_{i}$’s are all linear
or $\,\mathcal{F}=\\{x_{1},\ldots,x_{n}\\}$, then $\,I_{c,\mathcal{F}}\,$ has
linear powers (i.e. for every $m$, $\,I^{m}_{c,\mathcal{F}}\,$ has a linear
resolution).
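The generator count in 2.2(1) is easy to check combinatorially. The following sketch (our illustration, with the $F_{i}$'s kept abstract) enumerates the index sets of the $(t-c+1)$-fold products generating $I_{c,\mathcal{F}}=J_{t-c+1,\mathcal{F}}$:

```python
from itertools import combinations
from math import comb

def star_generators(t, c):
    """Index sets of the (t-c+1)-fold products F_{i_1}...F_{i_{t-c+1}}
    that generate the star-configuration ideal I_{c,F} (Prop. 2.2(1))."""
    return [frozenset(s) for s in combinations(range(1, t + 1), t - c + 1)]

t, c = 5, 2
gens = star_generators(t, c)
# binomial(t, t-c+1) generators; for c = 2 this is exactly t
print(len(gens), comb(t, t - c + 1))
```

For the height-two case $c=2$ each generator omits exactly one $F_{i}$, which is why $\mu(I_{2,\mathcal{F}})=t$.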
### 2.2 Rees algebras
Although some of the definitions included in this subsection make sense over
any Noetherian ring and for any ideal, we assume throughout that $R\coloneq
K[x_{1},\ldots,x_{n}]$ is a polynomial ring over a field $K$ and that
$\,I=(g_{1},\ldots,g_{\mu})\,$ is a graded ideal of $R$. The _Rees algebra_ of
$I$ is the subalgebra
$\mathcal{R}(I)\coloneq\bigoplus_{i\geq 0}I^{i}t^{i}\subseteq R[t]$
of the polynomial ring $R[t]$, where $I^{0}\coloneq R$. Notice that
$\,\mathcal{R}(I)=R[g_{1}t,\dots,g_{\mu}t]\,$ is a graded ring (with grading
inherited from that of $R[t]$) and there is a natural graded epimorphism
$\varphi\colon S\coloneq
R[T_{1},\ldots,T_{\mu}]\twoheadrightarrow\mathcal{R}(I),$
defined by $\,\varphi(T_{i})=g_{i}t\,$ for all $\,1\leq i\leq{\mu}$. The ideal
$\mathcal{J}\coloneq\mathrm{ker}(\varphi)$ is called the _defining ideal_ of
the Rees algebra $\mathcal{R}(I)$, and the generators of $\mathcal{J}$ are
called the _defining equations_ of $\mathcal{R}(I)$. Notice that
$\,\displaystyle{\mathcal{J}=\bigoplus_{s\geq 0}\mathcal{J}_{s}}\,$ is a
graded ideal (in the $T_{i}$ variables) of the polynomial ring $S$.
The linear equations of $\mathcal{R}(I)$ can be easily determined from a
presentation matrix of $I$. Specifically, if
$R^{s}\stackrel{{\scriptstyle M}}{{\longrightarrow}}R^{\mu}\longrightarrow
I\to 0$
is a presentation of $I$, then
$\mathcal{J}_{1}=(\lambda_{1},\ldots,\lambda_{s})$, where the $\lambda_{i}$’s
are homogeneous linear polynomials in $S$ satisfying the matrix equation
$[\lambda_{1},\ldots,\lambda_{s}]=[T_{1},\ldots,T_{\mu}]\cdot M.$
We denote $\mathcal{J}_{1}$ by $\mathcal{L}$. The ideal $I$ is said to be _of
linear type_ if $\mathcal{J}=\mathcal{L}$.
Given an arbitrary ideal $I$, one should expect that the defining ideal of the
Rees algebra $\mathcal{R}(I)$ also contains non-linear equations. The latter
are usually difficult to determine, however if the $g_{i}$ all have the same
degree, the non-linear equations can sometimes be identified by analyzing the
_fiber cone_ of $I$, which is defined as
$F(I)\coloneq\mathcal{R}(I)\otimes_{R}K=K[g_{1},\dots,g_{\mu}]\cong\frac{K[T_{1},\ldots,T_{\mu}]}{\mathcal{I}}.$
Indeed, by construction one always has
$\,\mathcal{L}+\mathcal{I}S\subseteq\mathcal{J},\,$ and the ideal $I$ is
said to be of _fiber type_ when this inclusion is an equality. In fact, the
rings $S$ and $\mathcal{R}(I)$ can be given natural structures of bigraded
$K$-algebras by setting $\mbox{\rm deg}(T_{i})=(0,1)$ and $\mbox{\rm
deg}(x_{i})=(d_{i},0)$, where $d_{i}$ is the degree of $x_{i}$ in $R$. Then,
$\,\displaystyle{\mathcal{J}=\bigoplus_{i,j\geq 0}\mathcal{J}_{(i,j)}}\,$ is a
bigraded ideal and $I$ is of fiber type if and only if the defining ideal of
$\mathcal{R}(I)$ is
$\mathcal{J}=[\mathcal{J}]_{(\ast,1)}+[\mathcal{J}]_{(0,\ast)}.$
When $I$ is a perfect ideal of height two, one can best exploit the
information contained in a Hilbert-Burch presentation matrix $M$ of $I$
through the notion of a Jacobian dual matrix, which was first introduced in
[31]. Recall that the _Jacobian dual_ $B(M)$ of $M$ is an $n\times(\mu-1)$
matrix with coefficients in $S$, satisfying the equation
$[x_{1},\ldots,x_{n}]\cdot B(M)=[T_{1},\ldots,T_{\mu}]\cdot M.$
A priori one could find several possible Jacobian duals associated with a
given presentation $M$; however, $B(M)$ is uniquely determined in the case when
the entries of $M$ are linear polynomials in $R=K[x_{1},\ldots,x_{n}]$.
Moreover, if $\,s=\mbox{\rm max}\\{n,\mu-1\\}\,$ and $I_{s}(B(M))$ denotes the
ideal of maximal minors of $B(M)$, then the defining ideal $\mathcal{J}$ of
$\mathcal{R}(I)$ always satisfies the inclusion
$\hypertarget{eqJacdual}{}\mathcal{L}+I_{s}(B(M))\subseteq\mathcal{J}$ (2.1)
Notice that, if $M$ has linear entries, then the entries of $B(M)$ are in
$K[T_{1},\ldots,T_{\mu}]$. Hence, the images of the generators of
$I_{s}(B(M))$ in the fiber cone $F(I)$ are in the defining ideal $\mathcal{I}$
of $F(I)$. Although the inclusion in Eq. 2.1 is usually strict, 2.5 below
provides an instance when equality holds. Before we state the theorem we need
to recall the following definition, which we will use often throughout this
article.
###### Definition 2.3.
An ideal $I$ satisfies the _$G_{s}$ condition_ if
$\mu(I_{\mathfrak{p}})\leq\dim R_{\mathfrak{p}}$ for every $\mathfrak{p}\in
V(I)$ of height at most $s-1$. Equivalently, if $M$ is any presentation matrix
of $I$, then $I$ satisfies the $G_{s}$ condition if and only if for every
$\,1\leq i\leq s-1$, $\,\mbox{\rm ht}(I_{\mu(I)-i}(M))\geq i+1$.
Moreover, $I$ is said to satisfy $G_{\infty}$ if it satisfies the $G_{s}$
condition for every integer $s$.
###### Remark 2.4.
Ideals of linear type always satisfy $G_{\infty}$.
Under the assumption that $I$ satisfies $G_{n}$, the Rees algebra of $I$ is
described by the following result of Morey and Ulrich. Recall that the Krull
dimension of the fiber cone $F(I)$ is called the _analytic spread_ of $I$ and
is denoted with $\ell(I)$.
###### Theorem 2.5 ([26, 1.3]).
Let $R=K[x_{1},\ldots,x_{n}]$ and assume that $K$ is infinite. Let $I\subseteq
R$ be a linearly presented perfect ideal of height 2 with $\mu(I)\geq n+1$ and
assume that $I$ satisfies $G_{n}$. Let $M$ be a Hilbert-Burch matrix resolving
$I$, then the defining ideal of $\mathcal{R}(I)$ is
$\mathcal{J}=\mathcal{L}+I_{n}(B(M))$
where $B(M)$ is the Jacobian dual of $M$. Moreover, $\mathcal{R}(I)$ and
$F(I)$ are Cohen-Macaulay, $I$ is of fiber type and $\ell(I)=n$.
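The defining identity $[x_{1},\ldots,x_{n}]\cdot B(M)=[T_{1},\ldots,T_{\mu}]\cdot M$ and the shape of $I_{n}(B(M))$ can be checked on the textbook example $I=(x^{2},xy,y^{2})\subseteq K[x,y]$ (our choice of illustration, not an example from this paper):

```python
from sympy import symbols, Matrix

x, y, T1, T2, T3 = symbols('x y T1 T2 T3')

# Hilbert-Burch matrix of I = (x^2, xy, y^2): its signed maximal minors
# recover the three generators.
M = Matrix([[y, 0], [-x, y], [0, -x]])
rhs = Matrix([[T1, T2, T3]]) * M          # [y*T1 - x*T2, y*T2 - x*T3]

# Jacobian dual: the unique B(M) with [x, y] * B(M) = [T1, T2, T3] * M
B = Matrix([[-T2, -T3], [T1, T2]])
assert (Matrix([[x, y]]) * B - rhs).expand() == Matrix.zeros(1, 2)
print(B.det())  # T1*T3 - T2**2, the single non-linear defining equation
```

Here $n=2$, $\mu=3=n+1$ and $I$ is linearly presented, so by the theorem above $\mathcal{J}=\mathcal{L}+I_{2}(B(M))=\mathcal{L}+(T_{1}T_{3}-T_{2}^{2})$, the familiar Veronese relation.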
In Section 6 we will also use some results on Rees algebras of ideals
generated by $a$-fold products of linear forms from [13], which we briefly
recall here. Let $\,\mathcal{F}=\\{F_{1},\ldots,F_{t}\\}$ be a set of
homogeneous linear polynomials in $R=K[x_{1},\ldots,x_{n}]$ and suppose that
$t\geq n$. Notice that each $F_{i}$ defines a line through the origin in
$\mathbb{P}^{n-1}$, hence the set $\mathcal{F}$ defines a _central hyperplane
arrangement_ in $\mathbb{P}^{n-1}$. A _linear dependency_ among $s$ of the
given linear forms is a relation of the form
$\hypertarget{dependency}{}D\colon
c_{i_{1}}F_{i_{1}}+\ldots+c_{i_{s}}F_{i_{s}}=0.$ (2.2)
Given a linear dependency $D$, one can define the following homogeneous
polynomial in $S=R[T_{1},\ldots,T_{\mu}]$
$\hypertarget{deltadependency}{}\partial
D\colon\sum_{j=1}^{s}c_{i_{j}}\prod_{k=1,\,k\neq j}^{s}T_{i_{k}}.$ (2.3)
The following result of Garrousian, Simis and Tohăneanu relates the $\partial
D$’s to the fiber cone and Rees algebra of ideals generated by $(t-1)$-fold
products of linear forms.
###### Theorem 2.6.
With the notation above, let $\,\displaystyle{I=\sum_{1\leq
i_{1}<\ldots<i_{t-1}\leq t}(F_{i_{1}}\cdots F_{i_{t-1}})}$. Then,
1. (1)
([13, 4.2]) $I$ is of fiber type.
2. (2)
([13, 3.5]) The defining ideal of the fiber cone $F(I)$ is generated (possibly
not minimally) by all elements of the form $\partial D$ as in Eq. 2.3, where
$D$ varies within the set of linear dependencies among any $\,t-1$ of the
$F_{i}$’s.
3. (3)
([13, 4.9 and 4.10]) $\mathcal{R}(I)$ and $F(I)$ are Cohen-Macaulay.
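To make Theorem 2.6(2) concrete, the following self-contained Python check (illustrative only; the arrangement $\\{x,y,x+y,x-y\\}$ and all function names are ours, not from [13]) verifies that the polynomial $\partial D$ attached to the dependency $x+y-(x+y)=0$ vanishes after substituting the $3$-fold products $g_{i}$ for the $T_{i}$, i.e. that $\partial D$ is a relation on the fiber cone.

```python
# Polynomials in x, y as dicts {(i, j): coeff} for x^i y^j;
# linear forms as coefficient pairs (a, b) meaning a*x + b*y.

def poly_from_linear(f):
    a, b = f
    return {(1, 0): a, (0, 1): b}

def pmul(p, q):
    out = {}
    for (i1, j1), c1 in p.items():
        for (i2, j2), c2 in q.items():
            k = (i1 + i2, j1 + j2)
            out[k] = out.get(k, 0) + c1 * c2
    return {k: c for k, c in out.items() if c != 0}

def padd(p, q):
    out = dict(p)
    for k, c in q.items():
        out[k] = out.get(k, 0) + c
    return {k: c for k, c in out.items() if c != 0}

# F = {x, y, x+y, x-y}: t = 4 forms in n = 2 variables.
F = [(1, 0), (0, 1), (1, 1), (1, -1)]
t = len(F)

def prod_forms(idxs):
    p = {(0, 0): 1}
    for i in idxs:
        p = pmul(p, poly_from_linear(F[i]))
    return p

# Generators of I: the (t-1)-fold products g_i = prod_{j != i} F_j.
g = [prod_forms([j for j in range(t) if j != i]) for i in range(t)]

# Dependency D: F_0 + F_1 - F_2 = 0, i.e. x + y - (x+y) = 0, so
# dD = c_0 T_1 T_2 + c_1 T_0 T_2 + c_2 T_0 T_1 with c = (1, 1, -1).
c = {0: 1, 1: 1, 2: -1}
rel = {}
for j in c:
    term = {(0, 0): c[j]}
    for k in c:
        if k != j:
            term = pmul(term, g[k])
    rel = padd(rel, term)
print("dD(g_0,...,g_3) = 0:", rel == {})
```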
## 3 Star configurations on a regular sequence
In this section we assume the following setting.
###### Setting 3.1.
Let $R=K[X_{1},\ldots,X_{n}]\,$ be a polynomial ring over a field $K$ and let
$\,\mathcal{F}=\\{F_{1},\ldots,F_{t}\\}\,$ be a homogeneous regular sequence
in $R$. For $\,1\leq c\leq\mathrm{min}\\{n,t\\}$, let
$I=I_{c,\mathcal{F}}=\bigcap_{1\leq i_{1}<\ldots<i_{c}\leq
t}(F_{i_{1}},\ldots,F_{i_{c}})$
be the ideal of the star configuration of height $c$ on the set $\mathcal{F}$.
For a fixed $m\geq 1$, let $\,g_{1},\ldots,g_{\mu}$ be a minimal generating
set for the $m$-th power $I^{m}$ of $I$, and write
$S=R[T_{1},\ldots,T_{\mu}]$.
Since the elements of $\mathcal{F}$ form a regular sequence, from 2.2 it
follows that $I$ is minimally generated by _monomials in the $F_{i}$_,
i.e. products $\,F_{1}^{i_{1}}\cdots F_{t}^{i_{t}}\,$ for some integers
$i_{j}\geq 0$. Notice that the exponents $i_{j}$ are uniquely determined
because the $F_{i}$’s form a regular sequence. Moreover, since every $m$-th
power $I^{m}$ is minimally generated by $m$-fold products of minimal
generators of $I$, with the assumptions and notations of 3.1 we may also
assume that $\,g_{1},\ldots,g_{\mu\,}$ are monomials in the $F_{i}$’s.
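In the monomial case $\mathcal{F}=\\{x_{1},\ldots,x_{t}\\}$ this description of the generators can be verified directly: a squarefree monomial lies in every height-$c$ coordinate ideal exactly when its support has size at least $t-c+1$. A short illustrative Python check (our own, with the sample values $t=5$, $c=3$):

```python
from itertools import combinations

t, c = 5, 3  # five "forms" x_0,...,x_4, star configuration of height 3

def in_star_ideal(support):
    # A squarefree monomial with the given support lies in the coordinate
    # ideal (x_{i_1},...,x_{i_c}) iff its support meets {i_1,...,i_c};
    # it lies in the intersection iff this holds for every c-subset.
    return all(set(sub) & set(support) for sub in combinations(range(t), c))

# Membership holds exactly for supports of size >= t - c + 1, i.e. the
# minimal generators are the (t-c+1)-fold products of distinct forms.
for size in range(t + 1):
    for support in combinations(range(t), size):
        assert in_star_ideal(support) == (size >= t - c + 1)
print(f"I is generated by the {t - c + 1}-fold products of distinct forms")
```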
The defining ideal of $\,\mathcal{R}(I^{m})$ can then be determined from the
Taylor resolution of $I^{m}$ (see [28, Chapter IV]). More precisely, for
each $s\geq 1$ let
$\mathcal{I}_{s}=\\{\alpha=(i_{1},\ldots,i_{s})\,|\,1\leq i_{1}\leq\ldots\leq
i_{s}\leq\mu\\}.$
For each $\,\alpha\in\mathcal{I}_{s}\,$ denote $\,g_{\alpha}=g_{i_{1}}\cdots
g_{i_{s}},\,$ $\,T_{\alpha}=T_{i_{1}}\cdots T_{i_{s}}\,$ and
$\hypertarget{Talphabeta}{}T_{\alpha,\beta}=\frac{g_{\beta}}{\gcd(g_{\alpha},g_{\beta})}\,T_{\alpha}-\frac{g_{\alpha}}{\gcd(g_{\alpha},g_{\beta})}\,T_{\beta}.$
(3.1)
Then, the defining ideal of $\,\mathcal{R}(I^{m})$ is
$\mathcal{J}=\mathcal{J}_{1}+\sum_{s=2}^{\infty}\mathcal{J}_{s},$
where $\,\mathcal{J}_{s}\,$ is the ideal generated by
$\,\\{T_{\alpha,\beta}\,|\,\alpha,\beta\in\mathcal{I}_{s}\,\\}\,$
for every $s\geq 2\,$.
Since $\mathcal{R}(I^{m\,})$ is Noetherian, there exists an $N\geq 2$ so that
$\,\displaystyle{\mathcal{J}_{s}\subseteq\sum_{i=1}^{N}\mathcal{J}_{i}}\,$ for
all $s\geq N$. In order to estimate such an $N$, the first step is to identify
redundant relations, which we do in the following lemma. To simplify the
notation, for multi-indices $\alpha=(i_{1},\cdots,i_{s})$ and
$\beta=(j_{1},\cdots,j_{s})$, denote
$\,\theta=\displaystyle{\frac{g_{\beta}}{\gcd(g_{\alpha},g_{\beta})}}\,$ and
$\,\delta=\displaystyle{\frac{g_{\alpha}}{\gcd(g_{\beta},g_{\alpha})}}$. Then,
Eq. 3.1 can be rewritten as
$\hypertarget{relation}{}T_{\alpha,\beta}=\theta\,T_{i_{1}}\cdots
T_{i_{s}}-\delta\,T_{j_{1}}\cdots T_{j_{s}}$ (3.2)
###### Lemma 3.2.
With the notation above, let
$\,\displaystyle{\theta_{1}\coloneq\frac{g_{j_{1}}}{\gcd(g_{i_{1}},g_{j_{1}})}}$.
Assume that, up to reordering the indices $i_{k}$ and $j_{k}$ in Eq. 3.2,
$\theta_{1}$ divides $\theta$. Then, for $s\geq 2$ the relation in Eq. 3.2 can
be expressed as an $S$-linear combination of relations in $\mathcal{J}$ of
degree at most $s-1$.
###### Proof.
Write $\theta=\theta_{1}\theta^{\prime}$ and define
$\,\displaystyle{\delta_{1}\coloneq\frac{g_{i_{1}}}{\gcd(g_{i_{1}},g_{j_{1}})}}$.
Notice that $\theta_{1}T_{i_{1}}-\delta_{1}T_{j_{1}}\in\mathcal{J}_{1}$. Using
this linear relation we can rewrite
$\theta\,T_{i_{1}}\cdots T_{i_{s}}-\delta\,T_{j_{1}}\cdots
T_{j_{s}}=\theta^{\prime}\,T_{i_{2}}\cdots
T_{i_{s}}(\theta_{1}T_{i_{1}}-\delta_{1}T_{j_{1}})-T_{j_{1}}(\delta\,T_{j_{2}}\cdots
T_{j_{s}}-\delta_{1}\theta^{\prime}\,T_{i_{2}}\cdots T_{i_{s}}).$
Now, the term $\,\theta_{1}T_{i_{1}}-\delta_{1}T_{j_{1}}\,$ is clearly in
$\mathcal{J}_{1}$ and the term $\,\delta\,T_{j_{2}}\cdots
T_{j_{s}}-\delta_{1}\theta^{\prime}\,T_{i_{2}}\cdots T_{i_{s}}\,$ is in
$\mathcal{J}_{s-1}$. Indeed, observe that
$\theta^{\prime}g_{i_{2}}\cdots g_{i_{s}}=\frac{\theta
g_{\alpha}}{\theta_{1}g_{i_{1}}}=\frac{\delta
g_{\beta}}{\delta_{1}g_{j_{1}}}=\frac{\delta g_{j_{2}}\cdots
g_{j_{s}}}{\delta_{1}},$
and therefore $\,\delta_{1}\theta^{\prime}g_{i_{2}}\cdots g_{i_{s}}=\delta
g_{j_{2}}\cdots g_{j_{s}}.$ ∎
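The rewriting used in this proof is a polynomial identity in $\theta_{1},\theta^{\prime},\delta_{1},\delta$ and the $T$-variables, valid whenever $\theta=\theta_{1}\theta^{\prime}$. A quick randomized sanity check in Python (illustrative only; all names are ours):

```python
import random

def prod(v):
    r = 1
    for x in v:
        r *= x
    return r

random.seed(0)
s = 4  # degree of the relation T_{alpha,beta}
for _ in range(100):
    Ti = [random.randint(-9, 9) for _ in range(s)]
    Tj = [random.randint(-9, 9) for _ in range(s)]
    th1, thp, d1, d = (random.randint(-9, 9) for _ in range(4))
    th = th1 * thp  # the hypothesis: theta_1 divides theta
    lhs = th * prod(Ti) - d * prod(Tj)
    rhs = thp * prod(Ti[1:]) * (th1 * Ti[0] - d1 * Tj[0]) \
        - Tj[0] * (d * prod(Tj[1:]) - d1 * thp * prod(Ti[1:]))
    assert lhs == rhs
print("rewriting identity holds on all random samples")
```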
Notice that, whenever there exists a grading such that the $F_{i}\in\mathcal{F}$
all have the same degree, $I$ is generated by elements of the same degree
$\alpha(I)$. Then, each power $I^{m}$ is generated by all the monomials in the
$F_{i}$’s of degree $\,\alpha(I)m\,$ that are not divisible by $F^{m+1}$ for
any $F\in\mathcal{F}$. Moreover, the fiber cone $F(I^{m})$ is isomorphic to
the _toric ring_ of $\,\\{g_{1},\ldots,g_{\mu}\\}$. In particular, its
defining ideal is generated by all binomials of the form
$\,T_{\alpha}-T_{\beta}\,$ so that $\,g_{\alpha}=g_{\beta}$ (see [17,
Proposition 10.1.1] for a proof).
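For a concrete instance of these toric binomials, take the monomial star configuration with $t=4$, $c=3$, whose generators are the products $x_{i}x_{j}$; the following illustrative Python snippet (our own check, not from the paper) enumerates the degree-2 binomials $T_{\alpha}-T_{\beta}$ with $g_{\alpha}=g_{\beta}$.

```python
from itertools import combinations, combinations_with_replacement
from collections import Counter

# Monomial star configuration: F = {x_0,...,x_3}, c = 3, so I is
# generated by the 2-fold products x_i * x_j (t - c + 1 = 2).
t, c = 4, 3
gens = list(combinations(range(t), t - c + 1))  # supports of the g's

def product_exponents(alpha):
    # exponent vector of g_alpha = product of the chosen generators
    e = Counter()
    for idx in alpha:
        e.update(gens[idx])
    return tuple(sorted(e.items()))

# Degree-2 binomials T_a T_b - T_c T_d with g_a g_b = g_c g_d.
rels = set()
pairs = list(combinations_with_replacement(range(len(gens)), 2))
for a, b in combinations(pairs, 2):
    if product_exponents(a) == product_exponents(b):
        rels.add((a, b))

# The three factorizations of x_0 x_1 x_2 x_3 into pairs of generators
# give the binomial relations, e.g. (x0 x1)(x2 x3) = (x0 x2)(x1 x3).
print(len(rels), "degree-2 fiber binomials, e.g.:", sorted(rels)[0])
assert rels  # the fiber cone is not a polynomial ring
```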
###### Theorem 3.3.
With the assumptions and notations of 3.1, assume also that $I$ is generated
by elements of the same degree with respect to some $\mathbb{Z}_{>0}$-grading.
Then, the ideal $I^{m}$ is of fiber type.
###### Proof.
We need to prove that all the relations of the form described in Eq. 3.2 can
be expressed as an $S$-linear combination of linear relations and fiber-type
relations. Working by induction on $s$, it suffices to show that any relation
$\,\displaystyle{T_{\alpha,\beta}=\theta\,T_{i_{1}}\cdots
T_{i_{s}}-\delta\,T_{j_{1}}\cdots T_{j_{s}}}\,$ of degree $s\geq 2$ that is
not of fiber type can be expressed as an $S$-linear combination of relations
of smaller degree in the $T_{i}$ variables.
Since $T_{\alpha,\beta}$ is not of fiber type, there exists a form
$F_{1}\in\mathcal{F}$ so that $F_{1}$ divides $\theta$. Moreover, since for
any $F\in\mathcal{F}$ we know that $F^{m+1}$ cannot divide any generator of
$I^{m}$, by possibly relabeling the indices, we may assume that $F_{1}$
divides $g_{j_{1}}$ and $F_{1}^{m}$ does not divide $g_{i_{1}}$. As in 3.2,
let
$\,\displaystyle{\theta_{1}\coloneq\frac{g_{j_{1}}}{\gcd(g_{i_{1}},g_{j_{1}})}}$.
If $\theta_{1}$ divides $\theta$, we are done by 3.2. Hence we assume the
opposite condition. Write
$\theta_{1}=F_{1}^{\,p_{1}}\cdots F_{a}^{\,p_{a}}G_{1}^{\,q_{1}}\cdots
G_{b}^{\,q_{b}}$
where the $F_{k}$’s and $G_{k}$’s are elements of $\mathcal{F}$ such that, for
all $k$, $F_{k}^{\,p_{k}}$ divides $\theta$, while no $G_{k}^{\,q_{k}}$
divides $\theta$. Notice that $p_{1}\geq 1$ by what was said above about
$F_{1}$. Moreover, since $T_{\alpha,\beta}$ does not satisfy the assumption of
3.2, necessarily we must have that $b\geq 1$, so there exists at least one
such form $G_{1}$. In particular, notice that $G_{1}$ divides $g_{j_{1}}$ but
$G_{1}^{\,m}$ does not divide $g_{i_{1}}$.
Similarly, let
$\,\displaystyle{\delta_{1}\coloneq\frac{g_{i_{1}}}{\gcd(g_{i_{1}},g_{j_{1}})}}\,$.
If $\delta_{1}$ divides $\delta$, the conclusion follows by applying 3.2 to
$\delta$ and $\delta_{1}$. So, assume that $\delta_{1}$ does not divide
$\delta$. Then, there exist a form $H_{1}\in\mathcal{F}$ and an integer
$r_{1}\geq 1$ such that $H_{1}^{\,r_{1}}$ divides $\delta_{1}$ but does not
divide $\delta$. In particular, $H_{1}$ divides $g_{i_{1}}$ and $H_{1}^{m}$
does not divide $g_{j_{1}}$. Moreover, since $\gcd(\theta_{1},\delta_{1})=1$,
we also know that $H_{1}$ does not divide $\theta_{1}$ and $G_{1}$ does not
divide $\delta_{1}$.
Now, rewriting
$\,\displaystyle{g_{i_{1}}=\gcd(g_{i_{1}},g_{j_{1}})\delta_{1}}$ and
$\,\displaystyle{g_{j_{1}}=\gcd(g_{i_{1}},g_{j_{1}})\theta_{1}}$, we get that
$\hypertarget{gik-gkkequation}{}\theta\,\delta_{1\,}g_{i_{2}}\cdots
g_{i_{s}}=\delta\,\theta_{1\,}g_{j_{2}}\cdots g_{j_{s}}$ (3.3)
We claim that either there exists a $g_{j_{k}}$ with $k\geq 2$ such that
$H_{1}$ divides $g_{j_{k}}$ and $G_{1}^{m}$ does not divide $g_{j_{k}}$, or
there exists a $g_{i_{k}}$ such that $G_{1}$ divides $g_{i_{k}}$ and
$H_{1}^{m}$ does not divide $g_{i_{k}}$. Indeed, suppose that for all $k\geq
2$, $H_{1}$ divides $g_{j_{k}}$ if and only if $G_{1}^{m}$ divides $g_{j_{k}}$
and $H_{1}^{m}$ divides $g_{i_{k}}$ if and only if $G_{1}$ divides
$g_{i_{k}}$. Assume that $H_{1}$ divides exactly $c$ of the $g_{j_{k}}$’s for
$k\geq 2$ and that $G_{1}$ divides exactly $d$ of the $g_{i_{k}}$’s for $k\geq
2$. Then, the exponent of $G_{1}$ is at least $q_{1}+cm$ on the right-hand
side of Eq. 3.3 and at most $q_{1}-1+dm$ on the left-hand side. Similarly, the
exponent of $H_{1}$ is at least $r_{1}+dm$ on the left-hand side of Eq. 3.3
and at most $r_{1}-1+cm$ on the right-hand side. Since the two sides are
equal, we must simultaneously have $cm\leq dm-1$ and $dm\leq cm-1$, which is
impossible.
This proves our claim. Up to relabeling the $j_{k}$’s or $i_{k}$’s, we may
assume that $k=2$.
From the discussion above it follows that there exist generators
$g_{h_{1}},g_{h_{2}}$ of $I^{m}$ such that either
$H_{1}g_{j_{1}}=G_{1}g_{h_{1}}$ and $H_{1}g_{h_{2}}=G_{1}g_{j_{2}}$, or
$G_{1}g_{i_{1}}=H_{1}g_{h_{1}}$ and $G_{1}g_{h_{2}}=H_{1}g_{i_{2}}$. Let us
consider the first case (the second case is equivalent). We can write
$\displaystyle T_{\alpha,\beta}$ $\displaystyle=$ $\displaystyle\theta
T_{i_{1}}\cdots T_{i_{s}}-\delta T_{j_{1}}\cdots T_{j_{s}}$ $\displaystyle=$
$\displaystyle\theta T_{i_{1}}\cdots T_{i_{s}}-\delta T_{h_{1}}T_{h_{2}}\cdots
T_{j_{s}}+\delta T_{j_{3}}\cdots
T_{j_{s}}(T_{h_{1}}T_{h_{2}}-T_{j_{1}}T_{j_{2}}).$
The last term is a multiple of a fiber-type relation by a monomial in $S$.
Moreover, from our choice of $h_{1}$ and $h_{2}$ it follows that
$\,\displaystyle{\delta=\frac{g_{\alpha}}{\gcd(g_{\alpha},g_{\gamma})}},\,$
where $\gamma=(h_{1},h_{2},j_{3},\ldots,j_{s})$. Hence,
$T_{\alpha,\beta}^{(1)}=\theta T_{i_{1}}\cdots T_{i_{s}}-\delta
T_{h_{1}}T_{h_{2}}\cdots T_{j_{s}}\in\mathcal{J}_{s}$
and we only need to prove our claim for this new relation. Set
$\,\displaystyle{\theta_{1}^{(1)}\coloneq\frac{g_{h_{1}}}{\gcd(g_{i_{1}},g_{h_{1}})}=\frac{\theta_{1}}{G_{1}}}$
(where the latter equality holds because
$\,\displaystyle{\gcd(g_{i_{1}},g_{h_{1}})=H_{1}\gcd(g_{i_{1}},g_{j_{1}})}$).
Then there are two possibilities: either this new relation
$T_{\alpha,\beta}^{(1)}$ satisfies the assumption of 3.2 and the proof is
complete, or we iterate the process considering another form
$G_{k}\in\\{G_{1},\ldots,G_{b}\\}$. As before, by subtracting a multiple of a
fiber-type relation from $T_{\alpha,\beta}^{(1)}$, we reduce to a new relation
$T_{\alpha,\beta}^{(2)}$ such that the term corresponding to
$\theta_{1}^{(1)}$ is
$\,\displaystyle{\theta_{1}^{(2)}\coloneq\frac{\theta_{1}^{(1)}}{G_{k}}}$.
Since the monomial $F_{1}^{p_{1}}\cdots F_{a}^{p_{a}}$ divides $\theta$,
iterating this argument finitely many times, we reduce to proving our claim
for a relation satisfying the assumption of 3.2. ∎
###### Theorem 3.4.
With the assumptions and notations of 3.1, assume also that $I$ is generated
by elements of the same degree with respect to some $\mathbb{Z}_{>0}$-grading.
Then for all $m\geq 1$, the defining ideal of the Rees algebra
$\mathcal{R}(I^{m})$ is generated in degree at most 2 in the $T$-variables.
###### Proof.
It is sufficient to show that any fiber-type relation of the form
$\hypertarget{eqfiber}{}T_{i_{1}}\cdots T_{i_{s}}-T_{j_{1}}\cdots
T_{j_{s}}\in\mathcal{J}$ (3.4)
with $s\geq 3$ can be expressed as linear combination of fiber-type relations
of degree at most $s-1$. First, assume that the following condition holds.
$(\ast)$: Up to relabeling the indices, there exists a generator $g_{h}\in I$
such that $g_{i_{1}}g_{i_{2}}=g_{j_{1}}g_{h}$.
In this case, we get
$T_{i_{1}}\cdots T_{i_{s}}-T_{j_{1}}\cdots T_{j_{s}}=T_{i_{3}}\cdots
T_{i_{s}}(T_{i_{1}}T_{i_{2}}-T_{j_{1}}T_{h})-T_{j_{1}}(T_{j_{2}}\cdots
T_{j_{s}}-T_{h}T_{i_{3}}\cdots T_{i_{s}}),$
and observe that the last summand corresponds to a relation of degree $s-1$,
since
$g_{j_{2}}\cdots g_{j_{s}}=\frac{g_{i_{1}}g_{i_{2}}\cdots
g_{i_{s}}}{g_{j_{1}}}=g_{h}g_{i_{3}}\cdots g_{i_{s}}.$
Hence, this case is concluded and we may assume that condition $(\ast)$ does
not hold for the relation (3.4). Write
$g_{i_{1}}=\gcd(g_{i_{1}},g_{j_{1}})F_{1}^{p_{1}}\cdots F_{a}^{p_{a}}$ and
$g_{j_{1}}=\gcd(g_{i_{1}},g_{j_{1}})G_{1}^{q_{1}}\cdots G_{b}^{q_{b}}$, where
$F_{1},\dots,F_{a},G_{1},\dots,G_{b}\in\mathcal{F}$. Observe that since the
ideal $I^{m}$ is equigenerated, then $\mbox{\rm deg}(F_{1}^{p_{1}}\cdots
F_{a}^{p_{a}})=\mbox{\rm deg}(G_{1}^{q_{1}}\cdots G_{b}^{q_{b}})$. Moreover,
$\hypertarget{Fl-Glequation}{}F_{1}^{p_{1}}\cdots F_{a}^{p_{a}}g_{i_{2}}\cdots
g_{i_{s}}=G_{1}^{q_{1}}\cdots G_{b}^{q_{b}}g_{j_{2}}\cdots g_{j_{s}}.$ (3.5)
Now, necessarily, we can find a $k\geq 2$ so that either $g_{i_{k}}$ is
divisible by some $G_{l}$ and not divisible by at least one power $F_{l}^{m}$
or $g_{j_{k}}$ is divisible by some $F_{l}$ and not divisible by at least one
power $G_{l}^{m}$. Indeed, if exactly $c$ elements among
$g_{i_{2}},\ldots,g_{i_{s}}$ are divisible by some of the $G_{1},\ldots,G_{b}$
and they are all divisible also by $F_{1}^{m}\cdots F_{a}^{m}$, then the
degree of each $F_{l}$ in each side of Eq. 3.5 is at least $cm+p_{l}$ while
the degree of each $G_{l}$ is at most $cm$. Similarly, if exactly $d$ elements
among $g_{j_{2}},\ldots,g_{j_{s}}$ are divisible by some of the
$F_{1},\ldots,F_{a}$ and they are all divisible also by $G_{1}^{m}\cdots
G_{b}^{m}$, then the degree of each $G_{l}$ in each side of Eq. 3.5 is at
least $dm+q_{l}$ while the degree of each $F_{l}$ is at most $dm$. These
conditions cannot be satisfied simultaneously, hence for some $k\geq 2$ there
must exist a generator $g_{j_{k}}$ divisible by some $F_{l}$ and not divisible
by at least one power $G_{l}^{m}$. In particular, this argument implies that
if $\mbox{\rm deg}(F_{1}^{p_{1}}\cdots F_{a}^{p_{a}})=1$, then
$g_{i_{1}}=\gcd(g_{i_{1}},g_{j_{1}})F_{1}$,
$g_{j_{1}}=\gcd(g_{i_{1}},g_{j_{1}})G_{1}$ and
$g_{j_{1}}g_{j_{k}}=g_{i_{1}}g_{h}$ where $g_{h}=\frac{G_{1}}{F_{1}}g_{j_{k}}$
is a generator of $I^{m}$. Therefore, condition $(\ast)$ is satisfied.
If $\mbox{\rm deg}(F_{1}^{p_{1}}\cdots F_{a}^{p_{a}})\geq 2$, without loss of
generality, possibly relabelling appropriately we may assume that $g_{i_{2}}$
is divisible by $G_{1}$ and not divisible by $F_{1}^{m}$ and consider the
generators of $I^{m}$, $g_{h_{1}}:=g_{i_{1}}\frac{G_{1}}{F_{1}}$ and
$g_{h_{2}}:=g_{i_{2}}\frac{F_{1}}{G_{1}}$. It then follows that
$T_{i_{1}}\cdots T_{i_{s}}-T_{j_{1}}\cdots T_{j_{s}}=T_{i_{3}}\cdots
T_{i_{s}}(T_{i_{1}}T_{i_{2}}-T_{h_{1}}T_{h_{2}})+T_{h_{1}}T_{h_{2}}T_{i_{3}}\cdots
T_{i_{s}}-T_{j_{1}}\cdots T_{j_{s}}.$
Since the first summand is a multiple of a fiber-type relation of degree 2, we
only need to prove the theorem for the relation
$T_{h_{1}}T_{h_{2}}T_{i_{3}}\cdots T_{i_{s}}-T_{j_{1}}\cdots T_{j_{s}}$.
Observe that now
$g_{h_{1}}=\gcd(g_{h_{1}},g_{j_{1}})F_{1}^{p_{1}-1}F_{2}^{p_{2}}\cdots
F_{a}^{p_{a}}$. If this new relation does not satisfy condition $(\ast)$, by
iterating the process, in a finite number of steps we get a relation in which
the total degree of the monomial in the forms $F_{1},\ldots,F_{a}$ is one, and
hence condition $(\ast)$ must finally be satisfied. This concludes the proof.
∎
###### Remark 3.5.
As observed in the introduction, when $\mathcal{F}=\\{x_{1},\ldots,x_{n}\\}$,
the powers $I_{c,\mathcal{F}}^{m}$ are polymatroidal ideals satisfying the
strong exchange property. Hence, by [19, 3.3] and [18, 5.3(b)] it was already
known that these ideals are of fiber type and their fiber-type relations have
degree at most two. The result [18, 5.3] is proved using a sorting technique,
relying on Gröbner bases and monomial orders. Also, mapping variables $y_{i}$
to the $F_{i}\in\mathcal{F}$ defines a flat map
$\,\displaystyle{K[y_{1},\ldots,y_{t}]\to K[F_{1},\ldots,F_{t}]}$. Since
formation of Rees algebras commutes with flat base change (see [10, 1.3]),
3.3 and 3.4 then follow from the case of a monomial regular sequence. However,
our direct proof recovers the known results while also giving a new proof in
the monomial case that requires less technical machinery.
## 4 The linear type property of ideals of star configurations
In this section we study under what conditions the ideal of a star
configuration is of linear type. Moreover, we give a criterion to determine
how this property may fail in the case of linear star configurations. Our
first result characterizes the linear type property of star configurations of
hypersurfaces.
###### Theorem 4.1.
Suppose that $I_{c,\mathcal{F}}\subseteq R=K[x_{1},\ldots,x_{n}]$ is a star
configuration on $\mathcal{F}=\\{F_{1},\ldots,F_{t}\\}$. Let $n=\dim R$ and
assume that $\mathcal{F}$ contains a regular sequence of length $n$. The
following are equivalent:
1. (1)
$I_{c,\mathcal{F}}$ is of linear type.
2. (2)
$\mu(I_{c,\mathcal{F}})\leq n$.
3. (3)
$c=2$ and $n=t$, i.e. $\mathcal{F}$ is a regular sequence.
###### Proof.
Assume that $I_{c,\mathcal{F}}$ is of linear type. Then, $I_{c,\mathcal{F}}$
satisfies the $G_{\infty}$ condition and in particular
$\mu(I_{c,\mathcal{F}})\leq n$. In general, by 2.2
$\,\mu(I_{c,\mathcal{F}})=\binom{t}{t-c+1}\geq t\geq n$. Hence, since $2\leq
c\leq n-1$, $\mu(I_{c,\mathcal{F}})\leq n$ if and only if $c=2$ and $n=t$. In
particular, in this case $\mu(I_{c,\mathcal{F}})=n$.
Finally, if $c=2$ and $n=t$, we know by 3.3 and 3.4 that $I_{c,\mathcal{F}}$
is of fiber type and the non-linear relations are generated by binomials of
degree 2. But, if there was a nonzero relation of degree two of the form
$T_{i_{1}}T_{i_{2}}-T_{i_{3}}T_{i_{4}}$ for distinct indices
$i_{1},i_{2},i_{3},i_{4}$, setting $I_{c,\mathcal{F}}=(g_{1},\ldots,g_{n})$,
we would have $g_{i_{1}}g_{i_{2}}-g_{i_{3}}g_{i_{4}}=0$. Since $c=2$, we have
that $g_{i}=\prod_{j\neq i}F_{j}$, and this implies
$F_{i_{1}}F_{i_{2}}=F_{i_{3}}F_{i_{4}}$, which is a contradiction since $n=t$
and $F_{1},\ldots,F_{t}$ form a regular sequence. Therefore there are no
relations of degree two and $I_{c,\mathcal{F}}$ is of linear type. ∎
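The counting step in this proof, $\mu(I_{c,\mathcal{F}})=\binom{t}{t-c+1}\leq n$ if and only if $c=2$ and $t=n$, can be confirmed numerically over a small range of parameters (an illustrative check, not part of the argument):

```python
from math import comb

# By 2.2, mu(I_{c,F}) = C(t, t - c + 1). Check exhaustively (for small
# parameters, with t >= n and 2 <= c <= n - 1) that mu <= n happens
# only when c = 2 and t = n.
for n in range(3, 12):
    for t in range(n, 15):
        for c in range(2, n):
            mu = comb(t, t - c + 1)
            assert (mu <= n) == (c == 2 and t == n), (n, t, c, mu)
print("mu(I) <= n iff c = 2 and t = n (verified for small n, t)")
```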
###### Remark 4.2.
From the proof of (3) implies (1) it follows that if $c=2$ and all the
elements of $\mathcal{F}$ form a regular sequence then $I=I_{2,\mathcal{F}}$
is of linear type. In this case, since $I$ is a perfect ideal of height 2,
then the Rees algebra of $I$ is Cohen-Macaulay by [20, 2.6].
It is clear from the definition that localizing an ideal of linear type at any
prime ideal produces an ideal of linear type. In the rest of this section we
aim to measure how an ideal of star configurations may fail to be of linear
type by examining its linear type property locally. For this purpose, we first
need to understand how ideals of star configurations behave under
localization.
###### Lemma 4.3.
Let $I_{c,\mathcal{F}}\subseteq R$ be a star configuration on $\mathcal{F}$
and let $\mathfrak{p}$ be a prime ideal of $R$. Set
$\mathcal{F}^{\prime}:=\mathcal{F}\cap\mathfrak{p}$. Then
$(I_{c,\mathcal{F}})_{\mathfrak{p}}=\left\\{\begin{array}[]{ccc}I_{c,\mathcal{F}^{\prime}}R_{\mathfrak{p}}&\mbox{if
}|\mathcal{F}^{\prime}|>c\\\ \mbox{ complete intersection of height
}c&\mbox{if }|\mathcal{F}^{\prime}|=c\\\ R_{\mathfrak{p}}&\mbox{if
}|\mathcal{F}^{\prime}|<c.\\\ \end{array}\right.$
###### Proof.
Finite intersection of ideals commutes with localization, hence
$(I_{c,\mathcal{F}})_{\mathfrak{p}}=\bigcap_{i_{1},\ldots,i_{c}}(F_{i_{1}},\ldots,F_{i_{c}})_{\mathfrak{p}}.$
If some $F_{i_{j}}\not\in\mathfrak{p}$, the ideal
$(F_{i_{1}},\ldots,F_{i_{c}})_{\mathfrak{p}}=R_{\mathfrak{p}}$. If
$|\mathcal{F}^{\prime}|<c$, this necessarily happens for all the ideals in the
intersection and $(I_{c,\mathcal{F}})_{\mathfrak{p}}=R_{\mathfrak{p}}$.
If some $F_{i_{1}},\ldots,F_{i_{s}}\in\mathfrak{p}$ and they form a regular
sequence in $R$, then they form a regular sequence also in the ring
$R_{\mathfrak{p}}$. Hence, in the case $|\mathcal{F}^{\prime}|=c$, then
$(I_{c,\mathcal{F}})_{\mathfrak{p}}=(F_{i_{1}},\ldots,F_{i_{c}})R_{\mathfrak{p}}$
where $\mathcal{F}^{\prime}=\\{F_{i_{1}},\ldots,F_{i_{c}}\\}$ and it is a
complete intersection of height $c$. Instead, if $|\mathcal{F}^{\prime}|>c$,
the ideal
$(I_{c,\mathcal{F}})_{\mathfrak{p}}=I_{c,\mathcal{F^{\prime}}}R_{\mathfrak{p}}$
is a star configuration. ∎
Although the proofs of 4.1 and 4.3 work for any star configuration of
hypersurfaces, in the rest of this section we restrict to the case when the
$F_{i}$’s are all linear.
###### Proposition 4.4.
Let $R=K[x_{1},\ldots,x_{n}]$ and let $I_{c,\mathcal{F}}$ be a linear star
configuration of height $c$. The following are equivalent:
1. (1)
$(I_{c,\mathcal{F}})_{\mathfrak{p}}$ is of linear type for every prime
$\mathfrak{p}\in\mbox{\rm Spec}(R)$ with $\mbox{\rm ht}{\mathfrak{p}}\leq
s-1$.
2. (2)
$I_{c,\mathcal{F}}$ satisfies the $G_{s}$ condition.
###### Proof.
By Remark 2.4, condition (1) always implies condition (2). Thus assume that
$I_{c,\mathcal{F}}$ satisfies the $G_{s}$ condition. By way of contradiction,
say that there exists a prime ideal $\mathfrak{p}$ of height $\leq s-1$ such
that $(I_{c,\mathcal{F}})_{\mathfrak{p}}$ is not of linear type. By 4.3, this
implies $|\mathcal{F}\cap\mathfrak{p}|>c.$ Call $\mathfrak{q}$ the ideal
generated by the elements of $\mathcal{F}\cap\mathfrak{p}$. Clearly
$\mathfrak{q}$ is prime since the elements of $\mathcal{F}$ are linear forms
and $\mathfrak{q}\subseteq\mathfrak{p}$. Furthermore
$\mathcal{F}\cap\mathfrak{q}=\mathcal{F}\cap\mathfrak{p}$ and thus by 4.3,
$(I_{c,\mathcal{F}})_{\mathfrak{q}}$ is a star configuration of height $c$. We
want to show that $\,\mu((I_{c,\mathcal{F}})_{\mathfrak{q}})>\mbox{\rm
ht}\mathfrak{q}$; since $\mbox{\rm ht}\mathfrak{q}\leq s-1,\,$ this would
contradict the assumption that $I_{c,\mathcal{F}}$ satisfies the $G_{s}$
condition.
Notice that the height of $\mathfrak{q}$ is equal to the length of a maximal
regular sequence contained in $\mathcal{F}\cap\mathfrak{p}$ and for this
reason $(I_{c,\mathcal{F}})_{\mathfrak{q}}$ satisfies the assumptions of 4.1.
Hence, from 4.1 it follows immediately that
$\,\mu((I_{c,\mathcal{F}})_{\mathfrak{q}})>\mbox{\rm ht}\mathfrak{q}$ whenever
$c\geq 3$. If $c=2$, observe that
$\,\mu((I_{c,\mathcal{F}})_{\mathfrak{q}})=|\mathcal{F}\cap\mathfrak{q}|=|\mathcal{F}\cap\mathfrak{p}|$
and it suffices to show that $\mathcal{F}\cap\mathfrak{p}$ is not a regular
sequence. But if this were a regular sequence,
$(I_{c,\mathcal{F}})_{\mathfrak{p}}$ would be a star configuration of height 2
generated over a regular sequence, hence it would be of linear type by Remark
4.2. ∎
We now describe the set of primes at which $I_{c,\mathcal{F}}$ fails to be of
linear type. We call this set of primes the _non-linear type locus_ of
$I_{c,\mathcal{F}}$ and denote it by
$NLT(I_{c,\mathcal{F}})=\\{\mathfrak{p}\in\mbox{\rm Spec}(R)\mbox{ :
}(I_{c,\mathcal{F}})_{\mathfrak{p}}\mbox{ is not of linear type}\\}.$
###### Proposition 4.5.
Let $R=K[x_{1},\ldots,x_{n}]$ and let $I_{c,\mathcal{F}}$ be a linear star
configuration of height $c$. Then, any set
$\,\mathcal{H}\subseteq\mathcal{F}\,$ of cardinality $s\leq n$ that is not a
regular sequence generates a prime ideal $\mathfrak{q}\in
NLT(I_{c,\mathcal{F}})\setminus\\{(x_{1},\ldots,x_{n})\\}$.
Moreover, if $c=2$ the minimal elements of $NLT(I_{2,\mathcal{F}})$ that are
not maximal ideals of $R$ are all of this form. If instead $c\geq 3$, then
$\,NLT(I_{c,\mathcal{F}})=\\{\mathfrak{p}\in\mbox{\rm Spec}(R)\mbox{ :
}|\mathcal{F}\cap\mathfrak{p}|>c\\}$.
###### Proof.
First consider a set $\mathcal{H}\subseteq\mathcal{F}$ of cardinality $s\leq
n$ that is not a regular sequence. Since $I_{c,\mathcal{F}}$ is a star
configuration, clearly $|\mathcal{H}|>c+1$. The elements of $\mathcal{H}$ are
linear forms, thus they generate a prime ideal $\mathfrak{q}$ of $R$ which is
clearly non-maximal since it has height strictly smaller than $n$. By 4.3,
$\mbox{\rm
ht}(\mathfrak{q})<|\mathcal{H}|\leq|\mathcal{F}\cap\mathfrak{q}|\leq\mu((I_{c,\mathcal{F}})_{\mathfrak{q}}).$
Hence, $(I_{c,\mathcal{F}})_{\mathfrak{q}}$ is not of linear type.
In the case $c=2$, to show that all minimal primes in $NLT(I_{2,\mathcal{F}})$
arise in this way, let $\mathfrak{p}$ be a non-maximal prime ideal of $R$ such
that $(I_{2,\mathcal{F}})_{\mathfrak{p}}$ is not of linear type. By 4.3,
$(I_{2,\mathcal{F}})_{\mathfrak{p}}$ is the star configuration of height 2
over the set $\mathcal{F}\cap\mathfrak{p}$ in the ring $R_{\mathfrak{p}}$. If
$|\mathcal{F}\cap\mathfrak{p}|>n$, consider any subset
$\mathcal{H}\subseteq(\mathcal{F}\cap\mathfrak{p})$ of cardinality $n$. Since
$\mathfrak{p}$ is non-maximal, necessarily $\mathcal{H}$ is not a regular
sequence. Then the prime ideal $\mathfrak{q}$ generated by the elements of
$\mathcal{H}$ is contained in $\mathfrak{p}$ and is such that
$(I_{2,\mathcal{F}})_{\mathfrak{q}}$ is not of linear type by the first part
of this proof.
If, instead, $|\mathcal{F}\cap\mathfrak{p}|\leq n$, call $\mathfrak{q}$ the
prime ideal generated by all the elements of $\mathcal{F}\cap\mathfrak{p}$.
Clearly $\mathfrak{q}\subseteq\mathfrak{p}$ and
$\mathcal{F}\cap\mathfrak{p}=\mathcal{F}\cap\mathfrak{q}$. Notice that
$\mathcal{F}\cap\mathfrak{p}$ is not a regular sequence, since by Remark 4.2 a
star configuration of height 2 generated over a regular sequence is always of
linear type. Therefore,
$\mbox{\rm
ht}(\mathfrak{q})<|\mathcal{F}\cap\mathfrak{p}|=|\mathcal{F}\cap\mathfrak{q}|=\mu((I_{2,\mathcal{F}})_{\mathfrak{q}}).$
Hence $(I_{2,\mathcal{F}})_{\mathfrak{q}}$ is not of linear type.
When $c\geq 3$, choose a non-maximal prime ideal $\mathfrak{p}.$ By 4.3, if
$|\mathcal{F}\cap\mathfrak{p}|\leq c$, then
$(I_{c,\mathcal{F}})_{\mathfrak{p}}$ is of linear type. Otherwise, if
$|\mathcal{F}\cap\mathfrak{p}|>c$, let $\mathfrak{q}$ be the prime ideal
generated by the elements of $\mathcal{F}\cap\mathfrak{p}$. Proceeding as in
the proof of 4.4, we can apply 4.1 to deduce that
$(I_{c,\mathcal{F}})_{\mathfrak{q}}$ is not of linear type. Since
$\mathfrak{q}\subseteq\mathfrak{p}$, then also
$(I_{c,\mathcal{F}})_{\mathfrak{p}}$ is not of linear type. ∎
We next apply the previous result to characterize when $I_{c,\mathcal{F}}$
satisfies the $G_{n}$ condition.
###### Theorem 4.6.
Let $R=K[x_{1},\ldots,x_{n}]$ and let $I_{c,\mathcal{F}}$ be a linear star
configuration of height $c$. The following are equivalent:
1. (1)
The ideal $I_{c,\mathcal{F}}$ satisfies the $G_{n}$ condition.
2. (2)
Every subset of $\mathcal{F}$ of cardinality $n$ is a regular sequence and
$c\in\\{2,n-1\\}.$
###### Proof.
Recall that by 4.4, $I_{c,\mathcal{F}}$ satisfies the $G_{n}$ condition if and
only if $NLT(I_{c,\mathcal{F}})=\\{(x_{1},\ldots,x_{n})\\}$.
First assume that there exists one subset of cardinality $n$ of $\mathcal{F}$
that is not a regular sequence. By 4.5, it then follows that
$NLT(I_{c,\mathcal{F}})$ contains some non-maximal prime ideal of $R$. Thus
$I_{c,\mathcal{F}}$ does not satisfy the $G_{n}$ condition.
Hence, we can assume that every subset of $\mathcal{F}$ of cardinality $n$ is
a regular sequence. If $c=2$, the conclusion follows by 4.5. If $c=n-1$, let
$\mathfrak{p}$ be a non-maximal prime ideal of $R$. We show that in this case
$|\mathcal{F}\cap\mathfrak{p}|\leq c$ and conclude that
$(I_{c,\mathcal{F}})_{\mathfrak{p}}$ is of linear type by 4.3. Indeed,
$|\mathcal{F}\cap\mathfrak{p}|>c=n-1$ if and only if at least $n$ distinct
elements of $\mathcal{F}$ are in $\mathfrak{p}$. But, since
$I_{n-1,\mathcal{F}}$ is a star configuration, any $n$ distinct elements of
$\mathcal{F}$ form a regular sequence and hence they cannot be contained in a
non-maximal prime ideal of $R$.
Finally, if $2<c<n-1$, let $\mathcal{H}$ be a subset of $\mathcal{F}$ of
cardinality $n-1$, and let $\mathfrak{p}$ be the prime ideal generated by the
elements of $\mathcal{H}$. Since every subset of $\mathcal{F}$ of cardinality
$n$ is a regular sequence, it follows that any
$F_{i}\in\mathcal{F}\setminus\mathcal{H}$ is regular modulo $\mathfrak{p}$,
hence is not in $\mathfrak{p}$. Therefore,
$\mathcal{F}\cap\mathfrak{p}=\mathcal{H}$ has cardinality $n-1>c$, hence the
conclusion follows from 4.5. ∎
###### Corollary 4.7.
Let $R=K[x_{1},\ldots,x_{n}]$ and assume that any subset of $\mathcal{F}$ of
cardinality $n$ is a regular sequence. Let $I=I_{2,\mathcal{F}}$ be a linear
star configuration of height $2$. Then $I$ is of fiber type and the defining
ideal of $\mathcal{R}(I)$ is given by
$\mathcal{J}=\mathcal{L}+I_{n}(B(M)),$
where $I_{n}(B(M))$ is the ideal of maximal minors of the Jacobian dual
$B(M)$ of a Hilbert-Burch matrix $M$ of $I$. Moreover, $\mathcal{R}(I)$ and
$F(I)$ are Cohen-Macaulay.
###### Proof.
From 4.6 it follows that $I$ satisfies the $G_{n}$ condition. The conclusion
then follows from 2.5. ∎
In the next section we give a more accurate description of the Rees algebra of
linear star configurations of height two, providing an explicit generating set
for the ideal of maximal minors of the Jacobian dual. This also allows us to
determine the defining ideal in the case when the $G_{n}$ condition is not
satisfied. For this purpose, it will be convenient to better control the
subsets of regular sequences contained in $\mathcal{F}$. The following lemma
will be useful.
###### Lemma 4.8.
Let $K$ be an infinite field and let $I_{c,\mathcal{F}}$ be a linear star
configuration of height $c$ defined over the set
$\mathcal{F}=\\{F_{1},\ldots,F_{t}\\}\subseteq K[y_{1},\ldots,y_{d}]$. Assume
that a regular sequence of maximal length contained in $\mathcal{F}$ has length $n$.
Then, after renaming the variables of the ring, we can always assume
$\mathcal{F}=\\{F_{1},\ldots,F_{t}\\}=\\{x_{1},\ldots,x_{n},L_{1},\ldots,L_{r}\\}\subseteq
K[x_{1},\ldots,x_{n}]$
where $L_{1},\ldots,L_{r}\in(x_{1},\ldots,x_{n})$ are linear forms.
###### Proof.
By relabeling the indices assume that $F_{1},\ldots,F_{n}$ is a regular
sequence of maximal length contained in $\mathcal{F}$. After a linear change
of variables, this regular sequence of linear forms can always be expressed as
$x_{1},\ldots,x_{n}$. If some linear form $F_{j}$ with $j>n$ involves a
variable $y_{k}$ distinct from $x_{1},\ldots,x_{n}$, then
clearly $F_{j},x_{1},\ldots,x_{n}$ is a regular sequence of length $n+1$
contradicting the hypothesis. It follows that $F_{n+1},\ldots,F_{t}$ must be
contained in $(x_{1},\ldots,x_{n})$. ∎
Thanks to 4.8, whenever the $F_{i}$’s are linear forms we can always reduce to
the following setting.
###### Setting 4.9.
Let $K$ be an infinite field and let $R=K[x_{1},\ldots,x_{n}]$. Assume that
$\mathcal{F}=\\{F_{1},\ldots,F_{t}\\}=\\{x_{1},\ldots,x_{n},L_{1},\ldots,L_{r}\\}\subseteq
R$
where $L_{i}\coloneq\sum_{j=1}^{n}u_{ij}x_{j}$ with $u_{ij}\in K$ and $t=n+r$.
Let $U$ denote the $n\times r$ matrix whose $i$-th column is the coefficient
vector $(u_{i1},\ldots,u_{in})^{T}$ of $L_{i}$, and let
$I_{c,\mathcal{F}}\subseteq R$ be a star configuration on $\mathcal{F}$.
###### Proposition 4.10.
Fix $2\leq s\leq n$. With the assumptions and notations of 4.9, the following
conditions are equivalent:
1. (1)
Any subset of $\mathcal{F}$ of cardinality $s$ is a regular sequence.
2. (2)
For every $1\leq h\leq\mbox{\rm min}\\{r,s\\}$, all the submatrices of the
matrix $U$ of size $(h+n-s)\times h$ have maximal rank.
###### Proof.
Recall that a finite set of linear forms in $R$ is a regular sequence if and
only if those forms are linearly independent over the base field $K$. This is
equivalent to requiring that the matrix of their coefficients with respect to
the variables $x_{1},\ldots,x_{n}$ has maximal rank.
After fixing $s$, assume that some submatrix $V$ of $U$ of size $(h+n-s)\times h$
does not have maximal rank. For simplicity, up to permuting rows and columns, we may
assume this to be the matrix obtained considering the first $h+n-s$ rows and
the first $h$ columns of $U$ for some $h\leq\mbox{\rm min}\\{r,s\\}$. If
$h=s$, this implies that the linear forms $L_{1},\ldots,L_{h}$ are not
linearly independent over $K$, thus not a regular sequence. Otherwise, if
$h<s$, consider the set $\\{L_{1},\ldots,L_{h},x_{n+h-s+1},\ldots,x_{n}\\}$.
Going modulo the regular sequence $\\{x_{n+h-s+1},\ldots,x_{n}\\}$, this set
reduces to the set of linear forms
$\\{\overline{L_{1}},\ldots,\overline{L_{h}}\\}$. The matrix of coefficients
of these linear forms with respect to $x_{1},\ldots,x_{n+h-s}$ is exactly the
matrix $V$. It follows that $\overline{L_{1}},\ldots,\overline{L_{h}}$ are not
linearly independent, hence not a regular sequence. Therefore
$\\{L_{1},\ldots,L_{h},x_{n+h-s+1},\ldots,x_{n}\\}$ is not a regular sequence
and has cardinality $s$.
Conversely, assume that condition 2 is satisfied. Consider a subset
$\mathcal{H}$ of $\mathcal{F}$ of cardinality $s$. Possibly going modulo the
forms of type $x_{1},\ldots,x_{n}$ belonging to $\mathcal{H}$, we reduce to
considering a set of linear forms
$\\{\overline{L_{i_{1}}},\ldots,\overline{L_{i_{h}}}\\}$ with $h\leq s$ and
whose coefficients with respect to $\overline{x_{1}},\ldots,\overline{x_{n}}$
form a submatrix of $U$ of size $(h+n-s)\times h$. It follows that those forms
are linearly independent and hence a regular sequence. Thus also the elements
of $\mathcal{H}$ form a regular sequence. ∎
###### Remark 4.11.
The proof of 4.10 shows that there is a one-to-one correspondence between
the subsets of $\mathcal{F}$ of cardinality $s\leq n$ and the submatrices of
$U$ of size $(h+n-s)\times h$. One of such subsets of $\mathcal{F}$ is a
regular sequence if and only if the corresponding matrix has maximal rank.
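The equivalence in 4.10 is a finite check for any concrete $U$. The following sketch (our own illustration, not part of the argument; the function names and sample matrices are ours) verifies both conditions over $\mathbb{Q}$ by brute force for small data.

```python
from fractions import Fraction
from itertools import combinations

def rank(rows):
    """Rank over the rationals via Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def conditions(L, n, s):
    """Conditions (1) and (2) of 4.10 for F = {x_1,...,x_n, L_1,...,L_r},
    where L lists the coefficient rows of the L_i w.r.t. x_1,...,x_n."""
    r = len(L)
    F = [[int(j == i) for j in range(n)] for i in range(n)] + L
    # (1): every s-subset of F is a regular sequence, i.e. linearly independent
    cond1 = all(rank(list(sub)) == s for sub in combinations(F, s))
    # (2): every (h+n-s) x h submatrix of U has maximal rank h
    U = [list(col) for col in zip(*L)]          # n x r matrix with entries u_{ij}
    cond2 = all(
        rank([[U[j][i] for i in cols] for j in rows]) == h
        for h in range(1, min(r, s) + 1)
        for rows in combinations(range(n), h + n - s)
        for cols in combinations(range(r), h)
    )
    return cond1, cond2

# L_1 = x1+x2+x3, L_2 = x1+2*x2+3*x3: both conditions hold for s = 2
print(conditions([[1, 1, 1], [1, 2, 3]], n=3, s=2))
# L_2 = 2*L_1: {L_1, L_2} is not a regular sequence, and U drops rank
print(conditions([[1, 1, 1], [2, 2, 2]], n=3, s=2))
```

The two conditions agree on every input, in accordance with 4.10; the second call exhibits a simultaneous failure of both.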
## 5 The maximal minors of the Jacobian dual of linear star configurations of
height two
The main goal of this section is to determine a minimal generating set for the
ideal of maximal minors of the Jacobian dual of a presentation of
$I_{2,\mathcal{F}}$ (see 5.4). In the case when $I_{2,\mathcal{F}}$ satisfies
$G_{n}$, by 2.5 these are the non-linear equations defining the Rees algebra
of $I_{2,\mathcal{F}}$. In the next section, we will use 5.4 to identify the
non-linear equations of the Rees algebra also when the $G_{n}$ condition is
not satisfied (see 6.5).
###### Proposition 5.1.
With the assumptions and notations of 4.9, let $c=2$. There exists a
presentation matrix $M$ of $I_{2,\mathcal{F}}$ whose Jacobian dual can be
expressed in the following form:
$\hypertarget{jacobiandual}{}\footnotesize
B(M)=\begin{bmatrix}T_{1}&0&\ldots&0&u_{11}T_{n+1}&u_{21}T_{n+2}&\ldots&u_{r1}T_{n+r}\\\
-T_{2}&T_{2}&\ldots&0&u_{12}T_{n+1}&u_{22}T_{n+2}&\ldots&u_{r2}T_{n+r}\\\
0&-T_{3}&\ldots&\vdots&\vdots&\vdots&\ldots&\vdots\\\
\vdots&0&\ldots&\vdots&\vdots&\vdots&\ldots&\vdots\\\
\vdots&\vdots&\ldots&0&\vdots&\vdots&\ldots&\vdots\\\
0&0&\ldots&T_{n-1}&u_{1,n-1}T_{n+1}&u_{2,n-1}T_{n+2}&\ldots&u_{r,n-1}T_{n+r}\\\
0&0&\ldots&-T_{n}&(u_{1n}T_{n+1}-T_{n})&(u_{2n}T_{n+2}-T_{n})&\ldots&(u_{rn}T_{n+r}-T_{n})\\\
\end{bmatrix}.$ (5.1)
###### Proof.
Since $I_{2,\mathcal{F}}$ is perfect of height two, by the Hilbert-Burch
Theorem a presentation matrix of $I_{2,\mathcal{F}}$ is
$\hypertarget{presentationmatrix}{}\footnotesize
M=\begin{bmatrix}F_{1}&0&\ldots&0\\\ -F_{2}&F_{2}&\ldots&\vdots\\\
0&-F_{3}&\ldots&\vdots\\\ \vdots&0&\ldots&0\\\ \vdots&\vdots&\ldots&F_{t-1}\\\
\vdots&\vdots&\ldots&-F_{t}\\\ \end{bmatrix}.$ (5.2)
Recall that
$\mathcal{F}=\\{F_{1},\ldots,F_{t}\\}=\\{x_{1},\ldots,x_{n},L_{1},\ldots,L_{r}\\}$
where $L_{i}=\sum_{j=1}^{n}u_{ij}x_{j}$. Therefore, the Jacobian dual $B(M)$
is equal to
$\footnotesize\begin{bmatrix}T_{1}&0&\ldots&0&-u_{11}T_{n+1}&(u_{11}T_{n+1}-u_{21}T_{n+2})&\ldots&(u_{r-1,1}T_{n+r-1}-u_{r1}T_{n+r})\\\
-T_{2}&T_{2}&\ldots&0&-u_{12}T_{n+1}&(u_{12}T_{n+1}-u_{22}T_{n+2})&\ldots&(u_{r-1,2}T_{n+r-1}-u_{r2}T_{n+r})\\\
0&-T_{3}&\ldots&\vdots&\vdots&\vdots&\ldots&\vdots\\\
\vdots&0&\ldots&\vdots&\vdots&\vdots&\ldots&\vdots\\\
\vdots&\vdots&\ldots&0&\vdots&\vdots&\ldots&\vdots\\\
0&0&\ldots&T_{n-1}&-u_{1,n-1}T_{n+1}&(u_{1,n-1}T_{n+1}-u_{2,n-1}T_{n+2})&\ldots&(u_{r-1,n-1}T_{n+r-1}-u_{r,n-1}T_{n+r})\\\
0&0&\ldots&-T_{n}&(T_{n}-u_{1n}T_{n+1})&(u_{1n}T_{n+1}-u_{2n}T_{n+2})&\ldots&(u_{r-1,n}T_{n+r-1}-u_{rn}T_{n+r})\\\
\end{bmatrix}.$
By column operations this matrix can be reduced to the form given in Eq. 5.1.
∎
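The column operations in the proof of 5.1 can be made explicit: one possible reduction replaces each of the last $r$ raw columns $C^{\prime}_{k}$ by $-(C^{\prime}_{1}+\cdots+C^{\prime}_{k})$, which yields exactly the last $r$ columns of Eq. 5.1. A minimal numerical sketch of this reduction (our own illustration; the helper names are ours):

```python
import random

def raw_and_reduced(n, r, u, T):
    """Last r columns of the raw Jacobian dual of the Hilbert-Burch matrix,
    reduced to the form of Eq. 5.1 via C_k <- -(C'_1 + ... + C'_k).
    Indexing: u[i][j] = u_{i+1,j+1}, T[k] = T_{k+1}."""
    def raw_col(i):                     # i = 1,...,r
        prev = ([u[i - 2][j] * T[n + i - 2] for j in range(n)]
                if i >= 2 else [0] * (n - 1) + [T[n - 1]])
        return [prev[j] - u[i - 1][j] * T[n + i - 1] for j in range(n)]

    acc, reduced = [0] * n, []
    for i in range(1, r + 1):
        acc = [a + b for a, b in zip(acc, raw_col(i))]
        reduced.append([-a for a in acc])

    # target columns from Eq. 5.1: entries u_{kj} T_{n+k}, with -T_n in the last row
    target = [[u[i][j] * T[n + i] - (T[n - 1] if j == n - 1 else 0)
               for j in range(n)] for i in range(r)]
    return reduced, target

random.seed(1)
n, r = 5, 3
u = [[random.randint(-9, 9) for _ in range(n)] for _ in range(r)]
T = [random.randint(1, 99) for _ in range(n + r)]
reduced, target = raw_and_reduced(n, r, u, T)
print(reduced == target)
```

The comparison holds identically in the $u_{ij}$ and $T_{k}$, since the sums telescope.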
Notice that the Jacobian dual of $I_{2,\mathcal{F}}$ is of size
$n\times(t-1)$. To compute the ideal of maximal minors of the Jacobian dual we
introduce the following notations.
###### Definition 5.2.
Assume $r=t-n\geq 1$. As in 4.9, for $i=1,\ldots,r$ consider the linear forms
$L_{i}\coloneq\sum_{k=1}^{n}u_{ik}x_{k}$ with $u_{ik}\in K$ and let $U$ be the
$n\times r$ matrix with entries $u_{ik}$.
Consider a set of indexes $\chi\subseteq\\{1,\ldots,t\\}$ such that
$|\chi|=r$. Write
$\chi=\\{i_{1},\ldots,i_{h},j_{1},\ldots,j_{r-h}\\}$
with $i_{1},\ldots,i_{h}\leq n$ and $j_{1},\ldots,j_{r-h}\geq n+1$. Denote by
$U_{\chi}$ the $h\times h$ minor of $U$ defined by taking the rows
$i_{1},\ldots,i_{h}$ and removing the columns $j_{1}-n,\ldots,j_{r-h}-n$. In
the case $h=0$, we set $U_{\chi}\coloneq 1$.
###### Definition 5.3.
Consider the polynomial ring $K[T_{1},\ldots,T_{t}]$. Suppose $r=t-n\geq 1$
and define the following polynomials. Given $\Theta\subseteq\\{1,\ldots,t\\}$
such that $|\Theta|=r-1$, set
$\\{1,\ldots,t\\}\setminus\Theta=\\{k_{1},\ldots,k_{n+1}\\},$ where
$k_{i}<k_{i+1}$ for every $i=1,\ldots,n$. Define
$m_{\Theta}:=\sum_{i=1}^{n+1}(-1)^{\alpha(\Theta,k_{i})}\left(\dfrac{T_{k_{1}}\cdots
T_{k_{n+1}}}{T_{k_{i}}}\right)U_{\Theta\cup\\{k_{i}\\}}\in
K[T_{1},\ldots,T_{t}],$
where $U_{\Theta\cup\\{k_{i}\\}}$ is defined as in Definition 5.2. The
exponent $\alpha(\Theta,k_{i})$ is obtained as follows. Let $h<n+1$ be the
integer such that $k_{h}\leq n$ and $k_{h+1}>n$. Then
$\alpha(\Theta,k_{i})=\left\\{\begin{array}[]{cc}n-h+i-k_{i}&\mbox{ if }i\leq
h\\\ n+i&\mbox{ if }i>h.\end{array}\right.$
Notice that none of the monomials of $m_{\Theta}$ is divisible by any variable
$T_{j}$ for $j\in\Theta$, and that $m_{\Theta}\in(T_{k_{i}},T_{k_{l}})$ for any
distinct $k_{i},k_{l}$.
We now state our main theorem about the ideal of maximal minors of the
Jacobian dual matrix of $I_{2,\mathcal{F}}$.
###### Theorem 5.4.
With the assumptions of 4.9, let $c=2$, and let $B$ be the Jacobian dual
matrix for $I_{2,\mathcal{F}}$ described by Eq. 5.1. Moreover, assume that
$r=t-n\geq 1$. The ideal $I_{n}(B)$ of the maximal minors of $B$ is minimally
generated by all the polynomials $m_{\Theta}$ defined in Definition 5.3 such
that $\,n\not\in\Theta$.
Before proving the theorem, we need a technical definition and a lemma that
give us control on the minors of the submatrices of $B$ containing the last
$r$ columns.
###### Definition 5.5.
Adopt the same notation of Definition 5.3. Let $B$ be the Jacobian dual matrix
for $I_{2,\mathcal{F}}$ described by Eq. 5.1 and denote its columns by
$A_{1},\ldots,A_{n-1},C_{1},\ldots,C_{r}$. Consider a set of indices
$\Theta\subseteq\\{1,\ldots,n-1\\}$ such that $|\Theta|=r-1.$
If $1\not\in\Theta$ write $\Theta=\Theta_{1}\cup\ldots\cup\Theta_{s}\,$ such
that for every $\,i=1,\ldots,s$:
* •
$\Theta_{i}=\\{k_{i},k_{i}+1,\ldots,k_{i}+l_{i}-1\\}$ contains $l_{i}$
consecutive indexes.
* •
$k_{i}+l_{i}-1<k_{i+1}-1$.
In particular, for every $i$, $k_{i}-1\not\in\Theta$. If $\,1\in\Theta$, write
in the same way $\,\Theta=\Theta_{0}\cup\Theta_{1}\cup\ldots\cup\Theta_{s}$
with $1\in\Theta_{0}$.
For $s=0$ (i.e. $\Theta=\\{1,\ldots,r-1\\}$), let $p_{\Theta}^{(1,1)}$ denote
the maximal minor of $B$ obtained by removing only the first
$r-1$ columns. If $s\geq 1$, for $i=1,\ldots,s$ and $j=1,\ldots,l_{i}$ denote
by $p_{\Theta}^{(i,j)}$ the maximal minor of the matrix obtained from $B$ by
removing all the columns $A_{k}$ for $k\in\Theta$ and by doing the following
column operations:
* •
for $1\leq h<i$ replace the column $A_{k_{h}-1}$ with the sum of consecutive
columns $A_{k_{h}-1}+A_{k_{h}}+\ldots+A_{k_{h}+l_{h}-1}$.
* •
replace the column $A_{k_{i}-1}$ with the sum of consecutive columns
$A_{k_{i}-1}+A_{k_{i}}+\ldots+A_{k_{i}+j-2}$.
Only for $i=s$, we define in an analogous way another minor
$p_{\Theta}^{(s,l_{s}+1)}$.
In the proof of 5.4 we show that the minor $p_{\Theta}^{(s,l_{s}+1)}$
coincides with the polynomial $m_{\Theta}$, and that the ideal generated by
all the polynomials $m_{\Theta}$ coincides with the ideal generated by all the
minors $p_{\Theta}^{(1,1)}$. A key fact is that the minors
$p_{\Theta}^{(i,j)}$ define a sequence, where every element is obtained from
the previous one by replacing one column of the corresponding submatrix with
its sum with a column of $B$ excluded from such submatrix. We rename this
sequence of minors as
$p_{\Theta}^{(1)},p_{\Theta}^{(2)},\ldots,p_{\Theta}^{(e)}:=p_{\Theta}^{(1,1)},\ldots,p_{\Theta}^{(1,l_{1})},p_{\Theta}^{(2,1)},\ldots,p_{\Theta}^{(2,l_{2})},\ldots,p_{\Theta}^{(s,1)},\ldots,p_{\Theta}^{(s,l_{s})},p_{\Theta}^{(s,l_{s}+1)}.$
The following lemma provides a formula relating different elements of this
sequence of minors, which allows us to prove 5.4 by induction. To help the
reader deal with the technicalities of our argument, in Example 5.7 we show
how to apply the lemma for small values of $r$.
###### Lemma 5.6.
Adopt the same notations as in Definition 5.3 and Definition 5.5. For
$i=1,\ldots,s$ and $j=1,\ldots,l_{i}$, set
$\,\Theta(i,j)\coloneq\Theta\setminus\\{k_{i}+j-1\\}\cup\\{k_{i}-1\\}$. For
$h=1,\ldots,e-1$ we have
$\hypertarget{minorsformula}{}p_{\Theta}^{(h)}\coloneq
p_{\Theta}^{(i,j)}=p_{\Theta}^{(h+1)}-p_{\Theta(i,j)}^{(h-j+1)}.$ (5.3)
###### Proof.
Recalling that the matrix $B$ can be expressed as in Eq. 5.1, up to permuting
columns, by definition $p_{\Theta}^{(i,j)}$ is the determinant of a matrix of
the form
$\footnotesize\begin{bmatrix}0&&\\\ \vdots&&\\\ T_{k_{i}-1}&&\\\ 0&&\\\
\vdots&B^{\prime}\\\ -T_{k_{i}-1+j}&&\\\ 0&&\\\ \vdots&&\\\ 0&&\end{bmatrix}.$
Thus we can write
$p_{\Theta}^{(h)}=p_{\Theta}^{(i,j)}=(-1)^{k_{i}}T_{k_{i}-1}M_{1}-(-1)^{k_{i}+j}T_{k_{i}-1+j}M_{2}$,
where $M_{1}$ and $M_{2}$ are minors of the matrix $B^{\prime}$. Similarly, by
definition $p_{\Theta}^{(h+1)}$ is the determinant of the same matrix, where
the first column is replaced by its sum with the column of $B$ containing the
two variables $T_{k_{i}-1+j},-T_{k_{i}+j}$. Hence it has the form
$p_{\Theta}^{(h+1)}=(-1)^{k_{i}}T_{k_{i}-1}M_{1}-(-1)^{k_{i}+j+1}T_{k_{i}+j}M_{3}$
for some minor $M_{3}$ of $B^{\prime}$. It follows that
$p_{\Theta}^{(h)}=p_{\Theta}^{(h+1)}-(-1)^{k_{i}+j}(T_{k_{i}+j}M_{3}+T_{k_{i}-1+j}M_{2}).$
We have to show that
$p_{\Theta(i,j)}^{(h-j+1)}=(-1)^{k_{i}+j}(T_{k_{i}-1+j}M_{2}+T_{k_{i}+j}M_{3})$.
The second term of the equality is clearly the determinant of the matrix
$\footnotesize\begin{bmatrix}0&&\\\ \vdots&&\\\ T_{k_{i}-1+j}&&\\\
-T_{k_{i}+j}&B^{\prime}\\\ 0&&\\\ \vdots&&\\\ 0&&\end{bmatrix}.$
The first column of this matrix is equal to the column $A_{k_{i}-1+j}$ of $B$.
The remaining columns are obtained by performing the operations described by
Definition 5.5 on the set $\Theta$ to obtain the minor $p_{\Theta}^{(i,j)}$.
Now, notice that all the indexes smaller than $k_{i}-1$ are in $\Theta$ if and
only if they are in the set
$\Theta(i,j)=\Theta\setminus\\{k_{i}+j-1\\}\cup\\{k_{i}-1\\}$. Then, by
performing $h-j$ of the operations described in Definition 5.5 on the set
$\Theta(i,j)$ we obtain the same matrix as above. Hence, the determinant of
the matrix above coincides with $p_{\Theta(i,j)}^{(h-j+1)}$. ∎
###### Example 5.7.
Adopt the same notation as in Definition 5.5 and 5.6.
* •
In the case when $r=1$, $B$ has only one maximal minor. In the proof of 5.4,
this minor will be shown to be equal to $m_{\Theta}$ with $\Theta=\emptyset$.
* •
In the case when $r=2$, following the notation of Definition 5.5 and 5.6, we
deal with sets $\Theta=\\{i\\}$ with $1\leq i\leq n-1$. If $i=1$, we have
$s=0$ and $p_{\Theta}^{(1,1)}=m_{\Theta}$. For $i\geq 2$, $s=1$ and the
sequence corresponding to $\Theta$ is $p_{\Theta}^{(1,1)},p_{\Theta}^{(1,2)},$
where $p_{\Theta}^{(1,1)}$ is the minor of $B$ obtained removing the column
$A_{i}$ and $p_{\Theta}^{(1,2)}$ is the minor obtained by removing the column
$A_{i}$ and replacing $A_{i-1}$ by $A_{i-1}+A_{i}$. This second minor is equal
to $m_{\Theta}$. 5.6 gives
$p_{\Theta}^{(1,1)}=p_{\Theta}^{(1,2)}-p_{\Theta^{\prime}}^{(1,1)},$
with $\Theta^{\prime}=\\{i-1\\}$. Inductively this shows that
$p_{\Theta}^{(1,1)}$ is in the ideal generated by the minors of the form
$m_{\\{j\\}}$ for $j\leq i$.
* •
Consider also the case when $r=3$. Here the sequences correspond to sets
$\Theta=\\{i,j\\}$ with $1\leq i<j\leq n-1$. Again we have
$p_{\\{1,2\\}}=m_{\\{1,2\\}}$. Then we have to describe three possible cases:
$\\{1,j\\}$ with $j\geq 3$, $\\{i,i+1\\}$, and $\\{i,j\\}$ with $j-i\geq 2$.
In the first case $s=1,l_{1}=1$, and similarly to the case $r=2$ we get
$p_{\\{1,j\\}}^{(1,1)}=p_{\\{1,j\\}}^{(1,2)}-p_{\\{1,j-1\\}}^{(1,1)},$
and $p_{\\{1,j\\}}^{(1,2)}=m_{\\{1,j\\}}$. For $\Theta=\\{i,i+1\\}$ we find
$s=1,l_{1}=2$. The minor $p_{\Theta}^{(1,1)}$ is obtained by removing columns
$A_{i},A_{i+1}$, $p_{\Theta}^{(1,2)}$ is obtained by also replacing the column
$A_{i-1}$ by $A_{i-1}+A_{i}$, and $p_{\Theta}^{(1,3)}=m_{\\{i,i+1\\}}$ is
obtained replacing the column $A_{i-1}$ by $A_{i-1}+A_{i}+A_{i+1}$. 5.6 gives
$p_{\\{i,i+1\\}}^{(1,1)}=p_{\\{i,i+1\\}}^{(1,2)}-p_{\\{i-1,i+1\\}}^{(1,1)}=(p_{\\{i,i+1\\}}^{(1,3)}-p_{\\{i-1,i\\}}^{(1,1)})-p_{\\{i-1,i+1\\}}^{(1,1)}.$
In the case $\Theta=\\{i,j\\}$ with $i>1$ and $j-i\geq 2$, we have
$s=2,l_{1}=1,l_{2}=1$. Here we have $p_{\Theta}^{(1,1)}$,
$p_{\Theta}^{(2,1)}$, $p_{\Theta}^{(2,2)}=m_{\Theta}$ that are obtained
subsequently by first removing columns $A_{i},A_{j}$, then replacing $A_{i-1}$
by $A_{i-1}+A_{i}$, and finally replacing also $A_{j-1}$ by $A_{j-1}+A_{j}$.
By 5.6
$p_{\\{i,j\\}}^{(1,1)}=p_{\\{i,j\\}}^{(2,1)}-p_{\\{i-1,j\\}}^{(1,1)}=(p_{\\{i,j\\}}^{(2,2)}-p_{\\{i,j-1\\}}^{(2)})-p_{\\{i-1,j\\}}^{(1,1)},$
where the notation $(2)$ stands for $(1,2)$ if $j-1=i+1$ and for $(2,1)$ if
$j-1>i+1$. Also in this case it follows that each $p_{\Theta}^{(1,1)}$ is in
the ideal generated by the polynomials $m_{\Theta}$. Indeed one can combine
all the previous formulas and use inductively the fact that the second term of
each new equality corresponds to a set $\Theta^{\prime}$ containing smaller
indexes.
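The recursion of Eq. 5.3 in the case $r=2$ can also be checked directly by computing determinants: by multilinearity in the replaced column, $p_{\\{i\\}}^{(1,1)}=p_{\\{i\\}}^{(1,2)}-p_{\\{i-1\\}}^{(1,1)}$. A small sketch (our own illustration; the helper names are ours) verifying this on random integer data:

```python
import random

def det(m):
    """Exact integer determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def build_B(n, u, T):
    """Columns A_1,...,A_{n-1}, C_1,...,C_r of the matrix in Eq. 5.1."""
    cols = []
    for k in range(1, n):                      # A_k: T_k at row k, -T_{k+1} below
        col = [0] * n
        col[k - 1], col[k] = T[k - 1], -T[k]
        cols.append(col)
    for i, ui in enumerate(u):                 # C_{i+1}: u-entries times T_{n+i+1}
        cols.append([ui[j] * T[n + i] - (T[n - 1] if j == n - 1 else 0)
                     for j in range(n)])
    return cols

def minor(cols, picked):
    """Determinant of the submatrix on the chosen columns, in the given order."""
    return det([[cols[c][row] for c in picked] for row in range(len(cols[0]))])

random.seed(2)
n, r = 4, 2
u = [[random.randint(-9, 9) for _ in range(n)] for _ in range(r)]
T = [random.randint(1, 99) for _ in range(n + r)]
B = build_B(n, u, T)        # B[0..n-2] = A_1..A_{n-1}; B[n-1], B[n] = C_1, C_2

for i in range(1, n - 1):   # python index i corresponds to Theta = {i+1}
    A = list(range(n - 1))
    keep = [c for c in A if c != i] + [n - 1, n]
    summed = [x + y for x, y in zip(B[i - 1], B[i])]             # A_i + A_{i+1}
    p11 = minor(B, keep)                                          # p^{(1,1)}
    p12 = minor([summed if c == i - 1 else B[c] for c in range(len(B))], keep)
    p11_prev = minor(B, [c for c in A if c != i - 1] + [n - 1, n])
    assert p11 == p12 - p11_prev                                  # Eq. 5.3
print("Eq. 5.3 verified for r = 2")
```

The identity holds exactly, with no sign correction, because the two minors on the right expand the same column sum.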
We are now ready to prove 5.4.
###### Proof.
(of 5.4). As in Definition 5.5 denote the columns of $B$ by
$A_{1},\ldots,A_{n-1},C_{1},\ldots,C_{r}$. Given
$\Psi\subseteq\\{A_{1},\ldots,A_{n-1},C_{1},\ldots,C_{r}\\}$ such that
$|\Psi|=r-1$, denote by $p_{\Psi}$ the $n\times n$ minor of $B$ obtained by
removing all the columns contained in $\Psi$. Observe that we need to prove
that the ideal
$I_{n}(B)=(p_{\Psi}\mbox{ :
}\Psi\subseteq\\{A_{1},\ldots,A_{n-1},C_{1},\ldots,C_{r}\\},\mbox{
}|\Psi|=r-1)$
is equal to the ideal
$(m_{\Theta}\mbox{ : }\Theta\subseteq\\{1,\ldots,n-1,n+1,\ldots,t\\},\mbox{
}|\Theta|=r-1).$
We fix $n$ and work by induction on $r$. If $r=1$, the matrix $B$ described in
Eq. 5.1 reduces to the form
$\footnotesize\begin{bmatrix}T_{1}&0&0&\ldots&0&u_{11}T_{n+1}\\\
-T_{2}&T_{2}&0&\ldots&0&u_{12}T_{n+1}\\\
0&-T_{3}&T_{3}&\ldots&\vdots&\vdots\\\ \vdots&0&-T_{4}&\ldots&\vdots&\vdots\\\
\vdots&\vdots&0&\ldots&0&\vdots\\\
\vdots&\vdots&\ldots&\vdots&T_{n-1}&u_{1,n-1}T_{n+1}\\\
0&0&0&\ldots&-T_{n}&(u_{1n}T_{n+1}-T_{n})\\\ \end{bmatrix}.$
A quick computation by induction on $n$ shows that for every $n$ the
determinant of this matrix is equal to
$m_{\Theta}=-T_{1}\cdots T_{n}+\sum_{i=1}^{n}u_{1i}\left(\dfrac{T_{1}\cdots
T_{n+1}}{T_{i}}\right)$
which, according to Definition 5.3, corresponds to the set $\Theta=\emptyset$.
Hence, we assume that the result is true for $r-1$ and we prove it for $r$,
where $1<r<n$. The case $r\geq n$ will be considered later.
Consider the matrix in Eq. 5.1 and take the submatrix $B^{\prime}$ obtained by
eliminating one column $C_{j}$ with $j\in\\{1,\ldots,r\\}$. The ideal of
maximal minors of $B^{\prime}$ is contained in $I_{n}(B)$ and its generators
are also generators of $I_{n}(B)$. By the inductive hypothesis we have
$I_{n}(B^{\prime})=(m_{\Theta^{\prime}}\mbox{ :
}\Theta^{\prime}\subseteq\\{1,\ldots,n-1,n+1,\ldots,t\\}\setminus\\{n+j\\},\mbox{
}|\Theta^{\prime}|=r-2)$
as ideal of the polynomial ring
$K[T_{1},\ldots,\widehat{T_{n+j}},\ldots,T_{t}]$. Following the notation of
Definition 5.3 and working back in the polynomial ring $K[T_{1},\ldots,T_{t}]$
we observe that each of such $m_{\Theta^{\prime}}$ coincides with $m_{\Theta}$
with $\Theta\coloneq\Theta^{\prime}\cup\\{n+j\\}$. Since the same argument can
be applied to any $j\in\\{1,\ldots,r\\}$, we reduce to considering only the
minors of $B$ for submatrices containing all the last $r$ columns
$C_{1},\ldots,C_{r}$. In particular we have to show that
$(p_{\Psi}\mbox{ : }\Psi\subseteq\\{A_{1},\ldots,A_{n-1}\\},\mbox{
}|\Psi|=r-1)=(m_{\Theta}\mbox{ : }\Theta\subseteq\\{1,\ldots,n-1\\},\mbox{
}|\Theta|=r-1).$
Consider first the submatrix obtained from $B$ by deleting the first $r-1$
columns. This matrix is equal to
$\footnotesize
B^{\star}:=\begin{bmatrix}0&0&\ldots&0&u_{11}T_{n+1}&\ldots&u_{r1}T_{t}\\\
\vdots&\vdots&\ldots&\vdots&\vdots&\ldots&\vdots\\\
T_{r}&0&\ldots&\vdots&\vdots&\ldots&\vdots\\\
-T_{r+1}&T_{r+1}&\ldots&\vdots&\vdots&\ldots&\vdots\\\
0&-T_{r+2}&\ldots&\vdots&\vdots&\ldots&\vdots\\\
\vdots&\vdots&\ldots&0&\vdots&\ldots&\vdots\\\
0&0&\ldots&T_{n-1}&u_{1,n-1}T_{n+1}&\ldots&u_{r,n-1}T_{t}\\\
0&0&\ldots&-T_{n}&(u_{1n}T_{n+1}-T_{n})&\ldots&(u_{rn}T_{t}-T_{n})\\\
\end{bmatrix}.$
One can check by induction on $n$ that its determinant is equal to
$p_{\\{A_{1},\ldots,A_{r-1}\\}}=(-1)^{r}\left[\sum_{i=r}^{t}(-1)^{\alpha_{i}}\left(\dfrac{T_{r}\cdots
T_{t}}{T_{i}}\right)U_{\Theta^{\star}\cup\\{i\\}}\right]=m_{\Theta^{\star}},$
where $\Theta^{\star}:=\\{1,\ldots,r-1\\}$ and $\alpha_{i}=\mbox{\rm
max}\\{i-n-1,0\\}=\alpha(\Theta^{\star},i)$ as in Definition 5.3.
Consider now an arbitrary set of indices $\Theta\subseteq\\{1,\ldots,n-1\\}$
such that $|\Theta|=r-1,$ and $\Theta\neq\Theta^{\star}$. Using the notation
of Definition 5.5, we want to show that $m_{\Theta}=p_{\Theta}^{(s,l_{s}+1)}$
and therefore is in the ideal $I_{n}(B)$. Write
$\\{1,\ldots,t\\}\setminus\Theta=\\{k_{1},\ldots,k_{n+1}\\}$ such that
$k_{i}<k_{i+1}$ for every $i=1,\ldots,n$. By construction,
$p_{\Theta}^{(s,l_{s}+1)}$ is the minor of a matrix in which none of the variables
$T_{j}$ for $j\in\Theta$ appears. In particular, after permuting the
rows and replacing the variables $T_{r},\ldots,T_{n-1}$ by
$T_{k_{1}},\ldots,T_{k_{n-r}}$ keeping the same order, this matrix is equal to
the matrix $B^{\star}$. This implies that, up to a sign,
$p_{\Theta}^{(s,l_{s}+1)}=\sum_{i=1}^{n+1}(-1)^{\beta_{i}}\left(\dfrac{T_{k_{1}}\cdots
T_{k_{n+1}}}{T_{k_{i}}}\right)U_{\Theta\cup\\{k_{i}\\}}=m_{\Theta},$
where each $\beta_{i}$ is determined by the permutations of the rows performed
in the process, and equals $\alpha(\Theta,k_{i})$.
This proves that each $m_{\Theta}$ is in $I_{n}(B)$ for all sets
$\Theta\subseteq\\{1,\ldots,t\\}\setminus\\{n\\}$ with $|\Theta|=r-1$. Using
now 5.6 iteratively as described in Example 5.7, it follows that each
$p_{\Psi}$ with $\,\Psi\subseteq\\{A_{1},\ldots,A_{n-1}\\}\,$ is in the ideal
generated by the minors of the form $m_{\Theta}$. Indeed, as in Definition
5.5, $p_{\Psi}=p_{\Theta}^{(1,1)}$ where $\Theta$ is the set of indexes
corresponding to the columns in $\Psi$. Now, apply Eq. 5.3 iteratively,
starting from $p_{\Theta}^{(1,1)}$. The index in the first term on the
right-hand side of Eq. 5.3 increases at each iteration, until this term becomes
$p_{\Theta}^{(s,l_{s}+1)}=m_{\Theta}$. The second term on the right-hand side
of Eq. 5.3 is determined by a set of indexes obtained from one of those
appearing in the previous iteration by replacing an index with a strictly
smaller index. Hence, it eventually coincides with $m_{\Theta^{\star}}$.
To conclude we only have to discuss the case $r\geq n$. Clearly the
columns $C_{1},\ldots,C_{r}$ in the second part of the matrix are all
equivalent up to permutation of the variables. Hence, as in the
previous case, the result on all the minors involving at least one of the
first $n-1$ columns can be obtained by reducing to the case $r=n-1$. Finally,
we only need to prove the statement for $n\times n$ minors involving only
columns of the form $C_{1},\ldots,C_{r}$. By renaming the variables, it is
sufficient then to consider the matrix
$\footnotesize\begin{bmatrix}u_{11}T_{n+1}&\ldots&u_{n1}T_{2n}\\\
\vdots&\ldots&\vdots\\\ u_{1,n-1}T_{n+1}&\ldots&u_{n,n-1}T_{2n}\\\
(u_{1n}T_{n+1}-T_{n})&\ldots&(u_{nn}T_{2n}-T_{n})\\\ \end{bmatrix}.$
Expanding with respect to the last row, the determinant of this matrix is
$T_{n+1}\cdots
T_{2n}\,U_{\Theta\cup\\{n\\}}+\sum_{i=1}^{n}(-1)^{n+i+1}\left(\dfrac{T_{n}\cdots
T_{2n}}{T_{n+i}}\right)U_{\Theta\cup\\{n+i\\}}=m_{\Theta}$
where $\Theta=\\{1,\ldots,n-1,2n+1,\ldots,t\\}.$ ∎
###### Remark 5.8.
Observe that also the polynomials $m_{\Theta}$ such that $n\in\Theta$ are in
the ideal $I_{n}(B)$. Indeed any such $m_{\Theta}$ can be expressed as a
linear combination, with coefficients in $\\{1,-1\\}$, of generators of the form
$m_{\Theta\setminus\\{n\\}\cup\\{k\\}}$, for $k\not\in\Theta$.
###### Remark 5.9.
Similarly to Definition 5.2, let
$\Theta=\\{i_{1},\ldots,i_{h},j_{1},\ldots,j_{r-1-h}\\}$ be a set of indexes
such that $i_{1},\ldots,i_{h}\leq n$ and $j_{1},\ldots,j_{r-1-h}\geq n+1$.
Call $M$ the submatrix of $U$ of size $h\times(h+1)$ obtained by taking the rows
$i_{1},\ldots,i_{h}$ and removing the columns $j_{1}-n,\ldots,j_{r-1-h}-n$. Then,
the polynomial $m_{\Theta}$ is zero if and only if the rank of $M$ is $<h$.
Indeed, by Definition 5.3, $m_{\Theta}=0$ if and only if for every
$k_{i}\not\in\Theta$, the minor $U_{\Theta\cup\\{k_{i}\\}}=0$. This is
equivalent to saying that all the submatrices of $M$ of size $h\times h$ and all
the submatrices of $U$ of size $(h+1)\times(h+1)$ containing $M$ are
simultaneously singular. Hence this means that $\mbox{rank}(M)<h$.
## 6 Linear star configurations of height two
In this section we exploit the results of Section 5 to determine the defining
ideal of the Rees algebra of ideals of linear star configurations of height
two. In particular, in 6.14 we relate the non-linear equations identified in
[13, 3.5 and 4.2] (see 2.6) to the associated primes of the ideal of maximal
minors of the Jacobian dual.
### 6.1 Defining ideal of the Rees algebra
Our first goal is to identify an ideal $\mathcal{P}$, defined in terms of the
polynomials $m_{\Theta}$, as the candidate for the non-linear part of the
defining ideal of the Rees algebra of $I_{2,\mathcal{F}}$. The generators of
this ideal $\mathcal{P}$ are introduced in the following lemma.
###### Lemma 6.1.
Let $\Theta$ and $m_{\Theta}$ be defined as in Definition 5.3. Suppose
$m_{\Theta}\neq 0$. Then $m_{\Theta}=fh_{\Theta}$ where $f$ is either a unit
or a squarefree monomial in the variables $T_{i}$ and $h_{\Theta}$ is an
irreducible, nonzero, non-monomial element of $K[T_{1},\ldots,T_{t}]$.
###### Proof.
Let $k_{1},\ldots,k_{n+1}$ be the indexes not belonging to $\Theta$. Clearly
the variable $T_{k_{i}}$ divides $m_{\Theta}$ if and only if
$U_{\Theta\cup\\{k_{i}\\}}=0$. For simplicity rename
$U_{i}:=U_{\Theta\cup\\{k_{i}\\}}$. By relabeling the indexes, we can assume
that there exists $e\geq 2$ such that $U_{i}\neq 0$ for $i\leq e$ and
$U_{i}=0$ for $i>e$. Indeed by assumption $m_{\Theta}\neq 0$ and, by 5.4, it
is in the defining ideal of the Rees algebra of $I_{2,\mathcal{F}}$. Hence
$m_{\Theta}$ cannot be a monomial in the variables $T_{1},\ldots,T_{t}$ and
therefore at least two minors $U_{i}$ are nonzero. Now, if $e=n+1$, then
$h_{\Theta}=m_{\Theta}.$ Otherwise define
$\hypertarget{htheta}{}h_{\Theta}\coloneq\frac{m_{\Theta}}{T_{k_{e+1}}\cdots
T_{k_{n+1}}}.$ (6.1)
We can now write $h_{\Theta}=\alpha T_{k_{1}}+\beta$ where, up to the signs of
the coefficients, $\alpha=\sum_{i=2}^{e}(T_{k_{2}}\cdots
T_{k_{e}})T_{k_{i}}^{-1}U_{i}$ and $\beta=T_{k_{2}}\cdots T_{k_{e}}U_{1}$.
Since $U_{i}\neq 0$ for every $i\leq e$, we get that $\alpha$ and $\beta$ have
no common factors and $h_{\Theta}$ is irreducible. ∎
###### Definition 6.2.
For every $\Theta$ defined as in Definition 5.3 with $m_{\Theta}\neq 0$, let
$h_{\Theta}$ be defined as in 6.1. We denote by $\mathcal{P}$ the ideal
generated by the $h_{\Theta}$.
Notice that the ideal $\mathcal{P}$ is contained in the non-linear part of the
defining ideal $\mathcal{J}$ of the Rees algebra of $I_{2,\mathcal{F}}$.
Indeed, since $I_{n}(B)\subseteq\mathcal{J}$, by 5.4 and 6.1,
$m_{\Theta}=fh_{\Theta}\in\mathcal{J}$. But $f$ is either a unit or a
squarefree monomial in the variables $T_{1},\ldots,T_{t}$ and cannot be in
$\mathcal{J}$. Since $\mathcal{J}$ is prime, it follows that
$h_{\Theta}\in\mathcal{J}$.
Moreover, observe that if $I_{2,\mathcal{F}}$ satisfies the $G_{n}$ condition,
then $\mathcal{P}=I_{n}(B)$ and this coincides with the non-linear part of
$\mathcal{J}$ by 2.5. In 6.5 we prove that
$\mathcal{L}+\mathcal{P}=\mathcal{J}$ in general, even when $I_{2,\mathcal{F}}$
does not satisfy $G_{n}$. The following example shows that when the $G_{n}$
condition is not satisfied, one might have a proper containment
$I_{n}(B)\subsetneq\mathcal{P}$.
###### Example 6.3.
Let $R=K[x_{1},x_{2},x_{3},x_{4}]$ and
$\mathcal{F}=\\{x_{1},x_{2},x_{3},x_{4},L_{1},L_{2}\\},$ where
$L_{1}=x_{1}+x_{2}+x_{3}+x_{4}$ and $L_{2}=x_{2}+2x_{3}+3x_{4}$. Then, by 4.6
the ideal $I_{2,\mathcal{F}}$ does not satisfy the $G_{n}$ condition. Notice
that
$U=\begin{bmatrix}1&0\\\ 1&1\\\ 1&2\\\ 1&3\\\
\end{bmatrix}\quad\mathrm{and}\quad B=\begin{bmatrix}T_{1}&0&0&T_{5}&0\\\
-T_{2}&T_{2}&0&T_{5}&T_{6}\\\ 0&-T_{3}&T_{3}&T_{5}&2T_{6}\\\
0&0&-T_{4}&T_{5}-T_{4}&3T_{6}-T_{4}\\\ \end{bmatrix}.$
In this case,
$\mathcal{L}=(x_{1}T_{1}-x_{2}T_{2},x_{2}T_{2}-x_{3}T_{3},x_{3}T_{3}-x_{4}T_{4},x_{4}T_{4}-L_{1}T_{5},x_{4}T_{4}-L_{2}T_{6})$
and $I_{4}(B)$ is generated by:
$m_{6}=T_{1}T_{2}T_{3}T_{5}-T_{1}T_{2}T_{3}T_{4}+T_{1}T_{2}T_{4}T_{5}+T_{1}T_{3}T_{4}T_{5}+T_{2}T_{3}T_{4}T_{5},$
$m_{5}=3T_{1}T_{2}T_{3}T_{6}-T_{1}T_{2}T_{3}T_{4}+2T_{1}T_{2}T_{4}T_{6}+T_{1}T_{3}T_{4}T_{6},$
$m_{3}=T_{1}T_{2}T_{4}T_{5}-2T_{1}T_{2}T_{4}T_{6}-T_{1}T_{2}T_{5}T_{6}+T_{1}T_{4}T_{5}T_{6}+2T_{2}T_{4}T_{5}T_{6},$
$m_{2}=T_{1}T_{3}T_{4}T_{5}-T_{1}T_{3}T_{4}T_{6}-T_{1}T_{4}T_{5}T_{6}-2T_{1}T_{3}T_{5}T_{6}+T_{3}T_{4}T_{5}T_{6},$
$m_{1}=-T_{2}T_{3}T_{4}T_{5}+2T_{2}T_{4}T_{5}T_{6}+T_{3}T_{4}T_{5}T_{6}+3T_{2}T_{3}T_{5}T_{6}.$
Also, $\,\displaystyle{h_{1}=\frac{m_{1}}{T_{5}}=\frac{m_{5}}{T_{1}}}\,$ and
$\,\mathcal{P}=(m_{6},m_{3},m_{2},h_{1})\supsetneq I_{4}(B)$. 6.5 will show
that the defining ideal of $\mathcal{R}(I_{2,\mathcal{F}})$ is
$\mathcal{L}+\mathcal{P}$. Each of the polynomials $h_{i}$ has the form
$\partial D_{i}$ for a dependency $D_{i}$ among the elements
$x_{1},x_{2},x_{3},x_{4},L_{1},L_{2}$, as in Eq. 2.2 and Eq. 2.3. We have:
$D_{6}\colon x_{1}+x_{2}+x_{3}+x_{4}-L_{1}=0,\qquad D_{5}=D_{1}\colon x_{2}+2x_{3}+3x_{4}-L_{2}=0,$
$D_{3}\colon 2x_{1}+x_{2}-x_{4}-2L_{1}+L_{2}=0,\qquad D_{2}\colon x_{1}-x_{3}-2x_{4}-L_{1}+L_{2}=0.$
In the previous example, the non-linear part of the defining ideal of
$\mathcal{R}(I_{2,\mathcal{F}})$ is generated by the polynomials $\partial D$
corresponding to the minimal dependencies $D$ among the elements of
$\mathcal{F}$. This is true in general. Indeed, in 6.5 below, we show that the
defining ideal of $\mathcal{R}(I_{2,\mathcal{F}})$ is
$\mathcal{L}+\mathcal{P}$, where
$\mathcal{L}=(\lambda_{1},\ldots,\lambda_{t-1})$ is the ideal of linear
relations of $I_{2,\mathcal{F}}$. From the presentation matrix of
$I_{2,\mathcal{F}}$ it is clear that
$\displaystyle{\lambda_{i}=F_{i}T_{i}-F_{i+1}T_{i+1}}$ for every
$i=1,\ldots,t-1$.
###### Lemma 6.4.
With the assumptions and notations of 4.9, let $I_{2,\mathcal{F}}$ be a linear
star configuration of height two. Assume that, up to reordering the variables,
the first row of the matrix $U$ is zero. Set
$\mathcal{G}=\\{x_{2},\ldots,x_{n},L_{1},\ldots,L_{r}\\}.$ Let $B$ and $B^{*}$
be Jacobian dual matrices for $I_{2,\mathcal{F}}$ and $I_{2,\mathcal{G}}$
respectively. Then
$I_{n}(B)=(T_{1})I_{n-1}(B^{*}).$
Moreover, the ideal $\mathcal{P}$ defined in Definition 6.2 for
$I_{2,\mathcal{F}}$ and for $I_{2,\mathcal{G}}$ is the same.
###### Proof.
Expressing $B$ and $B^{*}$ as in Eq. 5.1, it is easy to observe that
$\footnotesize B=\begin{bmatrix}T_{1}&0&\ldots&0\\\ -T_{2}&&&\\\ 0&&B^{*}&\\\
\vdots&&&\\\ 0&&&\end{bmatrix}.$
Both statements now follow from the definitions. ∎
###### Theorem 6.5.
With the assumptions and notations of 4.9, let $I_{2,\mathcal{F}}$ be the
ideal of a linear star configuration of height two. Then, the ideal
$\mathcal{P}$ is the non-linear part of the defining ideal of the Rees algebra
of $I_{2,\mathcal{F}}$. In particular $\mathcal{J}=\mathcal{L}+\mathcal{P}.$
###### Proof.
By 2.6 it is sufficient to prove that the polynomial $\partial D$ associated
to any dependency $D$ among the elements of $\mathcal{F}$ is in the ideal
$\mathcal{P}$.
First we show that every polynomial $h_{\Theta}$ is of the form $\partial D$
for some dependency $D$. Indeed consider the natural map $\varphi\colon
R[T_{1},\ldots,T_{t}]\to\mathcal{R}(I_{2,\mathcal{F}})$ and let
$G=\prod_{i=1}^{t}F_{i}$, so that $\varphi(T_{i})=G/F_{i}$. Then, using Eq.
6.1, we have that
$0=\varphi(h_{\Theta})=\sum_{i=1}^{e}(-1)^{\alpha(\Theta,k_{i})}U_{\Theta\cup\\{k_{i}\\}}\varphi\Big{(}\frac{T_{k_{1}}\cdots
T_{k_{e}}}{T_{k_{i}}}\Big{)}=\frac{G^{e-1}}{F_{k_{1}}\cdots F_{k_{e}}}\sum_{i=1}^{e}(-1)^{\alpha(\Theta,k_{i})}U_{\Theta\cup\\{k_{i}\\}}F_{k_{i}}.$
It follows that, after dividing by the common factor $G^{e-1}/(F_{k_{1}}\cdots
F_{k_{e}})$, the last sum is a dependency
$D$ among the elements of $\mathcal{F}$ in the sense of Eq. 2.2. Therefore,
$h_{\Theta}$ is the corresponding polynomial $\partial D$ as defined in Eq.
2.3.
To prove that, for any dependency $D$, $\partial D$ is in $\mathcal{P}$ we
work by induction on $r\geq 1$. If $r=1$, up to multiplication by units there is
only one dependency $D$, and clearly $\partial D=uh_{\Theta}$ with
$\Theta=\emptyset$, for some $u\in K$.
Assume now that $r\geq 2$ and notice that any dependency $D$ can be written as
$D\colon a_{1}L_{1}+\ldots+a_{r}L_{r}+b_{1}x_{1}+\ldots+b_{n}x_{n}=0,$
where the coefficients $b_{j}$ are uniquely determined after
$a_{1},\ldots,a_{r}\in K$ are chosen. Using the inductive hypothesis we can
deal with all the dependencies such that at least one of the coefficients
$a_{1},\ldots,a_{r}$ is zero. For simplicity assume that $a_{r}=0$ and
consider the star configuration $I_{2,\mathcal{F^{\prime}}}$ where
$\mathcal{F^{\prime}}:=\mathcal{F}\setminus\\{L_{r}\\}$. Let
$\mathcal{P^{\prime}}$ be the ideal generated by the polynomials $h_{\Theta}$
constructed for $I_{2,\mathcal{F^{\prime}}}$ as in Definition 6.2. We can look
at $\mathcal{P^{\prime}}$ as an ideal of $K[T_{1},\ldots,T_{t}]$. Using 5.1,
5.4 and 6.1, it can be easily checked that
$\mathcal{P^{\prime}}\subseteq\mathcal{P}$. Now, all the dependencies $D$ such
that $a_{r}=0$ are also dependencies among the elements of
$\mathcal{F^{\prime}}$. Hence, by the inductive hypothesis, the corresponding
polynomials $\partial D\in\mathcal{P^{\prime}}\subseteq\mathcal{P}$.
To conclude we can restrict to the case where $a_{1},\ldots,a_{r}\neq 0$, and
since the polynomial $\partial D$ is unique up to multiplication by scalars, we can
further assume that $a_{1}=1$. Let now $U$ be the $n\times r$ matrix of the
coefficients $u_{ij}$ as in 4.9. By 6.4 we can always reduce to the case in
which no rows of $U$ are zero. Hence, once fixed such $a_{1},\ldots,a_{r}$,
let $\chi\subseteq\\{1,\ldots,n\\}$ be a (possibly empty) maximal set of
indexes such that $b_{j}=0$ for every $j\in\chi$ and the rows of the matrix
$U$ indexed by the elements of $\chi$ are linearly independent. By definition
$|\chi|\leq n$. We show also that $|\chi|\leq r-1$. Indeed, by way of
contradiction and by relabeling, say that $\chi\supseteq\\{1,\ldots,r\\}$.
Thus $b_{1},\ldots,b_{r}=0$, which implies that the linear forms
$x_{r+1},\ldots,x_{n},L_{1},\ldots,L_{r}$ are not linearly independent. But by
Remark 4.11 the assumption that the first $r$ rows of $U$ are linearly
independent implies that $x_{r+1},\ldots,x_{n},L_{1},\ldots,L_{r}$ form a
regular sequence, a contradiction.
Without loss of generality, say now that $\chi=\\{1,\ldots,h\\}$ with $0\leq
h\leq\mbox{\rm min}\\{r-1,n\\}$. The dependency $D$ becomes $\,D\colon
L_{1}+a_{2}L_{2}+\ldots+a_{r}L_{r}+b_{h+1}x_{h+1}+\ldots+b_{n}x_{n}=0$. If
$h=r-1$, the equations with respect to $x_{1},\ldots,x_{r-1}$ determine a
linear system in $r-1$ equations $\,-u_{1k}=a_{2}u_{2k}+\ldots+a_{r}u_{rk}\,$
for $k=1,\ldots,r-1$ and $r-1$ indeterminates $a_{2},\ldots,a_{r}$. The
assumption that the first $r-1$ rows of $U$ are linearly independent forces
this system to have a unique solution. Hence, up to multiplication by units, $D$ is the only dependency related to such a set $\chi$, and necessarily, setting
$\Theta=\chi$, the corresponding polynomial $\partial D$ is $h_{\Theta}$.
Suppose now that $h<r-1$. Since the first $h$ rows of $U$ are linearly
independent, by permuting the columns we may assume that the minor
$W:=U_{\chi\cup\\{n+1,\ldots,n+r-h\\}}$ is nonzero. For $i=1,\ldots,r-h$
define
$\Theta_{i}:=\chi\cup\\{n+1,\ldots,n+r-h\\}\setminus\\{n+i\\}.$
By construction $|\Theta_{i}|=r-1$ and $h_{\Theta_{i}}$ is well-defined. We
claim that
$\partial D=\sum_{i=1}^{r-h}\frac{a_{i}}{W}\left(\frac{T_{n+1}\cdots
T_{n+r-h}}{T_{n+i}}\right)h_{\Theta_{i}}\in\mathcal{P}.$
To do this we need to check that the coefficients of each term coincide. The
quantity on the right-hand side is a sum of terms of the form
$c_{j}(T_{h+1}\cdots T_{n+r})(T_{j}^{-1})$ for $j\geq h+1$. We have to prove
that $c_{j}=b_{j}$ if $j\leq n$ and $c_{j}=a_{j-n}$ if $j\geq n+1$. We
consider different subcases.
Case (i): $n+1\leq j\leq n+r-h$. This term appears only once among the terms
of $h_{\Theta_{j-n}}$. Observe that by Definition 5.3,
$\,\alpha(\Theta_{i},n+i)=-h$. Thus the coefficient of the term we are
considering is $\,(-1)^{h}U_{\Theta_{j-n}\cup\\{j\\}}=(-1)^{h}W$. Hence,
$\,c_{j}=(-1)^{h}a_{j-n}(W^{-1})W=(-1)^{h}a_{j-n}$.
Case (ii): $j>n+r-h$. For $k=1,\ldots,h$, consider the linear system given by the $h$ equations
$a_{r-h+1}u_{r-h+1,k}+\ldots+a_{r}u_{rk}=-(a_{1}u_{1,k}+\ldots+a_{r-h}u_{r-h,k}).$
Set $\sigma_{i,j}\coloneq\alpha(\Theta_{i},n+i)+\alpha(\Theta_{i},j)$. Observe
that by Definition 5.3 $\,\sigma_{i,j}=j-(n+r-h)$. By Cramer’s rule we get
$c_{j}=(-1)^{h}\sum_{i=1}^{r-h}(-1)^{\sigma_{i,j}}\frac{a_{i}}{W}\,U_{\Theta_{i}\cup\\{j\\}}=(-1)^{h}a_{j-n}.$
Case (iii): $h<j\leq n$. Notice that $b_{j}=-(a_{1}u_{1j}+\ldots+a_{r}u_{rj})$
and
$c_{j}=\sum_{i=1}^{r-h}(-1)^{\sigma_{i,j}}\frac{a_{i}}{W}\,U_{\Theta_{i}\cup\\{j\\}},$
where in this case $\sigma_{i,j}=-h+1$. Expanding the minor $U_{\Theta_{i}\cup\\{j\\}}$ along the $j$-th row, we obtain
$U_{\Theta_{i}\cup\\{j\\}}=u_{ij}W+\sum_{k=r-h+1}^{r}(-1)^{r-h+k}u_{kj}\,U_{\Theta_{i}\cup\\{n+k\\}}.$
Hence, by replacing $U_{\Theta_{i}\cup\\{j\\}}$ in the equation for $c_{j}$
and applying (ii) we get
$c_{j}=(-1)^{1-h}\Big{(}\sum_{i=1}^{r-h}a_{i}u_{ij}+\sum_{i=1}^{r-h}\frac{a_{i}}{W}\sum_{k=r-h+1}^{r}(-1)^{r-h+k}u_{kj}U_{\Theta_{i}\cup\\{n+k\\}}\Big{)}=(-1)^{1-h}\Big{(}\sum_{i=1}^{r-h}a_{i}u_{ij}+\sum_{k=r-h+1}^{r}u_{kj}a_{k}\Big{)}=(-1)^{h}b_{j}.$
This concludes the proof after multiplying all the $c_{j}$ by $(-1)^{h}$. ∎
###### Remark 6.6.
From 6.5 and its proof it follows that the polynomials $h_{\Theta}$ defined in
6.1 are a minimal generating set for the non-linear part $\mathcal{P}$ of the
defining ideal $\mathcal{J}$ of $\mathcal{R}(I_{2,\mathcal{F}})$. Moreover,
Eq. 6.1 provides an explicit formula for each $h_{\Theta}$. In particular, the
degrees of the non-linear equations of the Rees algebra can be explicitly
calculated from the coefficients of the linear forms
$\\{L_{1},\ldots,L_{r}\\}$ and are always between 4 and $n$. In fact, since
any three of $x_{1},\ldots,x_{n},L_{1},\ldots,L_{r}$ are a regular sequence,
there is no dependency of degree at most 3.
###### Corollary 6.7.
Assume $r=1$. Set
$\displaystyle{\mathcal{F}=\\{x_{1},\ldots,x_{n},u_{e+1}x_{e+1}+\ldots+u_{n}x_{n}\\}}$
with $e\geq 0$ and $u_{i}\neq 0$ for all $e+1\leq i\leq n$. Then, the defining
ideal of $\mathcal{R}(I_{2,\mathcal{F}})$ is equal to $\mathcal{L}+(f)$ where
$f=T_{e+1}\cdots T_{n}-\sum_{i=e+1}^{n}\left(\frac{T_{e+1}\cdots
T_{n+1}}{T_{i}}\right)u_{i}.$
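As a quick sanity check on this formula (our illustration, not part of the paper), one can verify that $f$ vanishes under the Rees map $T_{i}\mapsto G/F_{i}$, where $G=F_{1}\cdots F_{t}$, in the smallest nontrivial case $n=3$, $e=1$. The sketch below uses exact rational arithmetic on random specializations; all variable names are ours:

```python
from fractions import Fraction
import random

random.seed(0)

def rees_equation_residual():
    # Random nonzero rational values for x1, x2, x3 and u2, u3.
    x1, x2, x3, u2, u3 = (Fraction(random.randint(1, 50)) for _ in range(5))
    L = u2 * x2 + u3 * x3
    F = [x1, x2, x3, L]       # the forms of Corollary 6.7 with n = 3, e = 1
    G = x1 * x2 * x3 * L
    T = [G / Fi for Fi in F]  # Rees map: T_i -> G / F_i
    # Predicted non-linear equation: f = T2*T3 - u2*T3*T4 - u3*T2*T4.
    return T[1] * T[2] - u2 * T[2] * T[3] - u3 * T[1] * T[3]

assert all(rees_equation_residual() == 0 for _ in range(100))
print("f vanishes on 100 random specializations")
```

Indeed, $f$ maps to $\frac{G^{2}}{x_{2}x_{3}L}\,(L-u_{2}x_{2}-u_{3}x_{3})=0$, so every specialization returns zero.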
### 6.2 Primary decomposition
The aim of this subsection is to interpret the ideal $\mathcal{P}$ in terms of
the primary decomposition of $I_{n}(B)$. We have already observed that when
$I_{2,\mathcal{F}}$ satisfies the $G_{n}$ condition, $I_{n}(B)=\mathcal{P}$ is
the defining ideal of the fiber cone of $I_{2,\mathcal{F}}$, thus a prime
ideal.
When the $G_{n}$ condition is no longer satisfied, $I_{n}(B)$ is not a prime
ideal and we believe that $\mathcal{P}$ is one of its minimal primes. In
particular we state the following conjecture.
###### Conjecture 6.8.
The ideal $\mathcal{P}$ is the only associated prime of $I_{n}(B)$ not
generated by monomials.
We prove that the conjecture is true under an additional assumption on the
matrix of coefficients $U$. We start by observing some properties of the height of $I_{n}(B)$.
###### Remark 6.9.
By the Eagon-Northcott Theorem [9, Theorem 1], the ideal $I_{n}(B)$ has height
$\leq r$. This upper bound is met if $I_{2,\mathcal{F}}$ satisfies the $G_{n}$
condition. Indeed, in this case, by 2.5, $I_{n}(B)$ is the defining ideal of
the fiber cone of $I_{2,\mathcal{F}}$. Set
$\mathcal{F}^{\prime}=\mathcal{F}\setminus\\{F_{t}\\}$, and call $B^{\prime}$
the Jacobian dual matrix of $I_{2,\mathcal{F}^{\prime}}$. By 4.6, it is easy
to observe that also $I_{2,\mathcal{F}^{\prime}}$ satisfies condition $G_{n}$
and by 5.4, $I_{n}(B^{\prime})S\subsetneq I_{n}(B)$. The fact that these ideals are prime, together with an inductive argument on $r$, implies that $\mbox{\rm ht}\,I_{n}(B)=r$.
However, the next result shows that if we remove the $G_{n}$ assumption, then the height of $I_{n}(B)$ can be arbitrarily smaller than $r$. In the case when $I_{2,\mathcal{F}}$ does not satisfy the $G_{n}$ condition, by 4.10 the matrix of coefficients $U$ must have some zero minor. The next lemma shows that the
presence of such zero minors corresponds to containments of $I_{n}(B)$ in some
monomial prime ideals of height at most $r$.
###### Lemma 6.10.
Consider a set of indexes $\chi\subseteq\\{1,\ldots,t\\}$ such that
$|\chi|\leq r$. Let $\mathfrak{p}_{\chi}$ be the prime ideal of
$k[T_{1},\ldots,T_{t}]$ generated by the variables $T_{k}$ for $k\in\chi$. The
following are equivalent:
1. (1)
The ideal $I_{n}(B)\subseteq\mathfrak{p}_{\chi}$.
2. (2)
Each minor of $U$ of the form $U_{\Omega}$ with $\chi\subseteq\Omega$ is zero.
In particular, in the case $|\chi|=r$ this gives that
$I_{n}(B)\subseteq\mathfrak{p}_{\chi}$ if and only if $U_{\chi}=0$.
###### Proof.
By 5.4, the ideal $I_{n}(B)$ is generated by the polynomials $m_{\Theta}$ for
all the sets of indexes $\Theta$ such that $|\Theta|=r-1,n\not\in\Theta$. If
there exist at least two indexes $k_{i},k_{j}\in\chi\setminus\Theta$, then
each monomial of $m_{\Theta}$ is divisible either by $T_{k_{i}}$ or by
$T_{k_{j}}$ or by both and therefore $m_{\Theta}\in\mathfrak{p}_{\chi}$.
Hence, we need to discuss only the case $\chi\subseteq\Theta\cup\\{k_{i}\\}$
for some $k_{i}\not\in\Theta$. If $k_{i}\in\chi$, all the monomials of
$m_{\Theta}$ except one are divisible by $T_{k_{i}}$ but none of them is
divisible by any other variable $T_{k_{j}}$ generating $\mathfrak{p}_{\chi}$.
The only monomial of $m_{\Theta}$ not divisible by $T_{k_{i}}$ has coefficient
$U_{\Theta\cup\\{k_{i}\\}}$. Hence $m_{\Theta}\in\mathfrak{p}_{\chi}$ if and
only if $U_{\Theta\cup\\{k_{i}\\}}=0$.
If instead we assume $\chi\subseteq\Theta$, we have that none of the monomials
of $m_{\Theta}$ is in $\mathfrak{p}_{\chi}$ and the only possibility to have
$m_{\Theta}\in\mathfrak{p}_{\chi}$ is to have $U_{\Theta\cup\\{j\\}}=0$ for
every $j\not\in\Theta$. The claim follows since this must hold for every
$\Theta$. ∎
To describe the primary decomposition of $I_{n}(B)$ while avoiding excessive technicality, we focus on the case when a suitable condition on the matrix $U$ (weaker than the $G_{n}$ condition) is satisfied.
###### Remark 6.11.
By Remark 5.9 and 6.10 the following are equivalent:
1. (1)
$m_{\Theta}\neq 0$ for every $\Theta$.
2. (2)
For every $h\geq 1$, every submatrix of size $h\times(h+1)$ of $U$ has maximal
rank.
3. (3)
$I_{n}(B)$ is not contained in any monomial prime ideal of height $<r$.
Notice that if $r=1$, these equivalent conditions are always satisfied, while if $r=2$ they are satisfied if and only if no row of $U$ is zero. 6.4 shows that, to study the primary decomposition of $I_{n}(B)$, we can always reduce to assuming that the matrix $U$ has no zero rows. Thus if $r\leq 2$ our proof covers all the possible cases.
We now need a couple of lemmas to show that, for some distinct sets $\Theta$,
the corresponding $h_{\Theta}$ are associated in the polynomial ring
$K[T_{1},\ldots,T_{t}]$. First we explore further the relation between the
$h_{\Theta}$’s and dependencies pointed out in 6.5.
###### Lemma 6.12.
Let $\Theta$, $m_{\Theta}$, $h_{\Theta}$ be defined as in Definition 5.3 and
6.1. If $h_{\Theta}\neq 0$, then, up to multiplying by a factor in $K$, there
exists a unique dependency
$D:a_{1}L_{1}+\ldots+a_{r}L_{r}+b_{1}x_{1}+\ldots+b_{n}x_{n}=0$ such that
$b_{j}=0$ for $j\in\Theta$, $j\leq n$, and $a_{j-n}=0$ for $j\in\Theta$, $j>n$.
In particular $h_{\Theta}=\partial D$.
###### Proof.
The case $r=1$ is clear. Applying the same inductive argument on $r$ as in the
proof of 6.5, we reduce to proving the statement in the case
$\Theta=\\{j_{1},\ldots,j_{r-1}\\}\subseteq\\{1,\ldots,n\\}$. Since
$h_{\Theta}\neq 0$, also $m_{\Theta}\neq 0$ and by Remark 5.9 this implies
that the rows $j_{1},\ldots,j_{r-1}$ of the matrix $U$ are linearly
independent. Again, as in the proof of 6.5, this condition implies that, up to
multiplication by a scalar, there exists a unique dependency $D$ such that
$b_{j_{1}},\ldots,b_{j_{r-1}}=0$. Necessarily $h_{\Theta}=\partial D$. ∎
###### Lemma 6.13.
Let $\Theta$, $m_{\Theta}$, $h_{\Theta}$ be defined as in Definition 5.3 and
6.1. Suppose that $h_{\Theta}\neq m_{\Theta}\neq 0$. Given $j\in\Theta$ and
$k\not\in\Theta$ such that the minor $U_{\Theta\cup\\{k\\}}=0$, define
$\Theta^{\prime}:=\Theta\setminus\\{j\\}\cup\\{k\\}$. Then, if
$m_{\Theta^{\prime}}\neq 0$, there exists a unit $a\in K$ such that
$h_{\Theta}=ah_{\Theta^{\prime}}.$
###### Proof.
Let $\Theta=\\{j_{1},\ldots,j_{r-1}\\}$ and let
$\\{k_{1},\ldots,k_{n+1}\\}=\\{1,\ldots,t\\}\setminus\Theta$. Since $0\neq
m_{\Theta}\neq h_{\Theta}$, by reordering the indexes there exists $1\leq e<n$
such that $U_{\Theta\cup\\{k_{l}\\}}=0$ for $l\leq e$ and
$U_{\Theta\cup\\{k_{l}\\}}\neq 0$ for $l>e$. Take
$k\in\\{k_{1},\ldots,k_{e}\\}$ and $j\in\\{j_{1},\ldots,j_{r-1}\\}$. Observe
that $U_{\Theta^{\prime}\cup\\{j\\}}=U_{\Theta\cup\\{k\\}}=0$. By definition
of $m_{\Theta}$ we can write
$\frac{m_{\Theta}}{T_{k}}=\sum_{l=1}^{n+1}(-1)^{\alpha(\Theta,l)}\left(\frac{T_{k_{1}}\cdots
T_{k_{n+1}}}{T_{k_{l}}T_{k}}\right)U_{\Theta\cup\\{k_{l}\\}},$
$\frac{m_{\Theta^{\prime}}}{T_{j}}=\sum_{l=1}^{n+1}(-1)^{\alpha(\Theta^{\prime},l)}\left(\frac{T_{k_{1}}\cdots
T_{k_{n+1}}}{T_{k_{l}}T_{k}}\right)U_{\Theta^{\prime}\cup\\{k_{l}\\}}.$
Hence, to conclude we have to show that there exists a unit $a\in K$ such that
for every $l\in\\{k_{1},\ldots,k_{n+1}\\}\setminus\\{k\\}$,
$U_{\Theta^{\prime}\cup\\{k_{l}\\}}=(-1)^{\alpha(\Theta,k_{l})-\alpha(\Theta^{\prime},k_{l})}\,a\,U_{\Theta\cup\\{k_{l}\\}}.$ (6.2)
This equality will also imply that one of these two minors is zero if and only if the other one is, and the claim will then follow by 6.1.
Using 6.12, it is sufficient to show that $h_{\Theta}$ and
$h_{\Theta^{\prime}}$ correspond to the same dependency $D$. Let
$D:a_{1}L_{1}+\ldots+a_{r}L_{r}+b_{1}x_{1}+\ldots+b_{n}x_{n}=0$ be the
dependency such that $h_{\Theta}=\partial D$. For simplicity set
$c_{l}\coloneq b_{l}$ for $l\leq n$ and $c_{l}\coloneq a_{l-n}$ for $l>n$. Hence $c_{j_{l}}=0$ for every $l=1,\ldots,r-1$ and moreover, since $U_{\Theta\cup\\{k\\}}=0$, also $c_{k}=0$. Since also
$m_{\Theta^{\prime}}\neq 0$, then $h_{\Theta^{\prime}}\neq 0$ and by 6.12, the
dependency $D^{\prime}$ associated to $h_{\Theta^{\prime}}$ is then equal to
$aD$ for some nonzero $a\in K$. It follows that
$h_{\Theta^{\prime}}=ah_{\Theta}$. ∎
The next theorem shows that, under the assumption that all the $m_{\Theta}$ are nonzero (which by Remark 6.11 corresponds to a maximal rank condition on certain submatrices of $U$), the ideal $I_{n}(B)$ can be obtained by intersecting
$\mathcal{P}$ with all the monomial primes of height at most $r$ containing
$I_{n}(B)$. Following the notation of 6.10, define $\Lambda$ to be the set of
all the monomial prime ideals of $K[T_{1},\ldots,T_{t}]$ of the form
$\mathfrak{p}_{\chi}$ for $\chi\subseteq\\{1,\ldots,t\\}$, $|\chi|\leq r$ and
such that $I_{n}(B)\subseteq\mathfrak{p}_{\chi}$.
###### Theorem 6.14.
Consider the set $\Lambda$ defined above and call
$Q=\bigcap_{\mathfrak{p}\in\Lambda}\mathfrak{p}.$ Let $\mathcal{P}$ be defined
as in Definition 6.2. Suppose that $m_{\Theta}\neq 0$ for every $\Theta$. Then
$I_{n}(B)$ is radical and its primary decomposition is
$I_{n}(B)=Q\cap\mathcal{P}.$ (6.3)
###### Proof.
By 2.6 and 6.5, the ideal $\mathcal{P}$ is the defining ideal of the fiber
cone of $I_{2,\mathcal{F}}$, hence it is prime of height $r$. The inclusion
$I_{n}(B)\subseteq Q\cap\mathcal{P}$ follows by 6.10 and 6.1. For the other
inclusion consider an element $\alpha=\sum\alpha_{i}h_{\Theta_{i}}\in
Q\cap\mathcal{P}$. Since each $m_{\Theta}\in I_{n}(B)$, we can restrict to
consider only the case in which $h_{\Theta_{i}}\neq m_{\Theta_{i}}$ for every
$i$. By 6.1, this implies that some minor of $U$ of the form
$U_{\Theta_{i}\cup\\{k\\}}=0$. Hence, setting $\chi=\Theta_{i}\cup\\{k\\}$, by
6.10 $\mathfrak{p}_{\chi}\in\Lambda$ and therefore
$Q\subseteq\mathfrak{p}_{\chi}$. But now, none of the monomials of
$h_{\Theta_{i}}$ is in $\mathfrak{p}_{\chi}$. Since $\alpha\in Q$ and
$\mathfrak{p}_{\chi}$ is generated by variables, this clearly implies
$\alpha_{i}\in\mathfrak{p}_{\chi}$. In particular, for every $i$, we get
$\alpha_{i}\in
Q_{\Theta_{i}}:=\bigcap_{\scriptstyle\mathfrak{p}_{\chi}\in\Lambda_{i}}\mathfrak{p}_{\chi},$
where $\Lambda_{i}$ is the set of primes in $\Lambda$ not containing
$h_{\Theta_{i}}.$ Thus we may fix one set of indexes $\Theta$ such
that $m_{\Theta}\neq h_{\Theta}$, and show that $Q_{\Theta}h_{\Theta}\subseteq
I_{n}(B)$. This would imply that $\alpha\in I_{n}(B)$ and the proof would be
complete.
Call $k_{1},\ldots,k_{n+1}$ the indexes not belonging to $\Theta$ and, by
reordering, consider the positive integer $e$ such that $1\leq e<n$ and
$\,U_{\Theta\cup\\{k_{l}\\}}=0$ if and only if $l\leq e$. Suppose
$\Theta=\\{j_{1},\ldots,j_{r-1}\\}$. Consider the sets
$E=\Theta\cup\\{k_{1},\ldots,k_{e}\\}$ and $\mathcal{E}=\\{T_{i}\mbox{ : }i\in
E\\}$. Let $H:=I_{r,\mathcal{E}}\subseteq K[T_{1},\ldots,T_{t}]$ be the ideal of the
star configuration of height $r$ on the set $\mathcal{E}$.
We first prove that any associated prime $\mathfrak{p}$ of $H$ is in the set
$\Lambda$ and $h_{\Theta}\not\in\mathfrak{p}$. This will imply that
$Q_{\Theta}\subseteq H$ and we later prove that $Hh_{\Theta}\subseteq
I_{n}(B)$.
By 6.10, our first statement is true if we show that for every subset
$\chi\subseteq E$ such that $|\chi|=r$, the minor $U_{\chi}=0$ and
$h_{\Theta}\not\in\mathfrak{p}_{\chi}$. This is true by assumption for the
subsets $\chi$ containing $\Theta$. For the other sets, this can be done by applying 6.13, inductively on the cardinality of
$\chi\cap\\{k_{1},\ldots,k_{e}\\}$. The basis of the induction is the case
when $\Theta\subseteq\chi$. In the case
$|\chi\cap\\{k_{1},\ldots,k_{e}\\}|=2$, we get
$\chi=\Theta^{\prime}\cup\\{k_{l}\\}$ for some
$\Theta^{\prime}=\Theta\setminus\\{j\\}\cup\\{k_{i}\\}$ of the form considered
in 6.13. Since $m_{\Theta^{\prime}}\neq 0$, as a consequence of 6.13, we get
that $h_{\Theta}$ and $h_{\Theta^{\prime}}$ are associated in the polynomial ring
$K[T_{1},\ldots,T_{t}]$ and therefore $U_{\chi}=0$, since also
$U_{\Theta\cup\\{k_{l}\\}}=0$. Moreover, by definition
$h_{\Theta^{\prime}}\not\in\mathfrak{p}_{\chi}$, and thus also
$h_{\Theta}\not\in\mathfrak{p}_{\chi}$. An iterative application of 6.13,
using the fact that all the $m_{\Theta}$’s are nonzero, allows us to deal with all the other cases in the same way.
We now prove that $Hh_{\Theta}\subseteq I_{n}(B)$. The minimal generators of
$H$ have degree $e$ and are of the form $\beta_{\Omega}=\prod_{i\in
E\setminus\Omega}T_{i}$ where $\Omega\subseteq E$ and $|\Omega|=r-1$. For
each such set $\Omega$, since $m_{\Omega}\neq 0$, as a consequence of 6.1 and of 6.13 we get $m_{\Omega}=\beta_{\Omega}h_{\Omega}$. Moreover,
$h_{\Theta}$ and $h_{\Omega}$ are associated in $K[T_{1},\ldots,T_{t}]$.
Hence, for some unit $u\in K$, we have
$\beta_{\Omega}h_{\Theta}=\beta_{\Omega}uh_{\Omega}=um_{\Omega}\in
I_{n}(B)$. This implies $Hh_{\Theta}\subseteq I_{n}(B)$ and concludes the
proof. ∎
## Acknowledgements
We thank Paolo Mantero for interesting conversations on ideals of star
configurations that partly motivated this project, and Kuei-Nuan Lin for
referring us to the work of Garrousian, Simis and Tohaneanu [13]. We are also
grateful to Alexandra Seceleanu for helpful feedback on a preliminary version
of this preprint. The third author is supported by the NAWA Foundation grant
Powroty "Applications of Lie algebras to Commutative Algebra".
## References
* [1] A. Almousa, K.–N. Lin and W. Liske, Rees Algebras of Closed Determinantal Facet Ideals, preprint available at arXiv:2008.10950.
* [2] J. Biermann, H. De Alba, F. Galetto, S. Murai, U. Nagel, A. O’Keefe, T. Römer, A. Seceleanu, Betti numbers of symmetric shifted ideals. J. Algebra 560 (2020), 312–342.
* [3] S. Bisui, E. Grifo, H. T. Hà, and T. T. Nguyen, Demailly’s conjecture and the containment problem, preprint available at arXiv:2009.05022.
* [4] W. Bruns, A. Conca, and M. Varbaro, Maximal minors and linear powers. J. Reine Angew. Math. 702 (2015), 41–53.
* [5] R. Burity, S. O. Tohăneanu and Y. Xie, Ideals generated by $a$-fold products of linear forms have linear graded free resolution, preprint available at arXiv:2004.07430v2.
* [6] A. Conca and J. Herzog, Castelnuovo-Mumford regularity of products of ideals. Collect. Math. 54 (2003), 137–152.
* [7] E. De Negri, Toric rings generated by special stable sets of monomials. Math. Nachr. 203 (1999), 31–45.
* [8] M. DiPasquale, C. Francisco, J. Mermin, J. Schweig and G. Sosa, The Rees algebra of a two-Borel ideal is Koszul. Proc. Amer. Math. Soc. 147 (2019), 467–479.
* [9] J. A. Eagon and D. G. Northcott, Ideals defined by matrices and a certain complex associated with them. Proc. Roy. Soc. Ser. A 269 (1962), 188–204.
* [10] D. Eisenbud, C. Huneke and B. Ulrich, What is the Rees algebra of a module? Proc. Amer. Math. Soc. 131 (2003), 701–708.
* [11] L. Fouli and K.–N. Lin, Rees algebras of square–free monomial ideals. J. Commut. Algebra 7 (2015), 25–54.
* [12] F. Galetto, On the ideal generated by all squarefree monomials of a given degree. J. Commut. Algebra 12 (2020), 199–215.
* [13] M. Garrousian, A. Simis, and S. O. Tohăneanu, A blowup algebra for hyperplane arrangements. Algebra Number Theory 12 (2018), 1401–1429.
* [14] A. V. Geramita, B. Harbourne and J. Migliore, Star configurations in $\mathbb{P}^{n}$, J. Algebra 376 (2013), 279–299.
* [15] A. V. Geramita, B. Harbourne, J. Migliore and U. Nagel, Matroid configurations and symbolic powers of their ideals. Trans. Amer. Math. Soc. 369 (2017), 7049–7066.
* [16] H. T. Hà and S. Morey, Algebraic Algorithms for Even Circuits in Graphs. In Current Trends on Monomial and Binomial Ideals, Mathematics (2019), 7(9), 859.
* [17] J. Herzog and T. Hibi, Monomial ideals. Graduate Texts in Mathematics, 260. Springer-Verlag London, Ltd., London, 2011.
* [18] J. Herzog and T. Hibi, Discrete polymatroids. J. Algebr. Comb. 16 (2002), 239–268.
* [19] J. Herzog, T. Hibi and M. Vladoiu. Ideals of fiber type and polymatroids, Osaka J. Math. 95 (2005), 807–829.
* [20] J. Herzog, A. Simis and W. Vasconcelos, Approximation complexes of blowing-up rings. J. Algebra 74 (1982), 466–493.
* [21] A. Kumar and R. Kumar, Regularity, Rees algebra, and Betti numbers of certain cover ideals. Arch. Math. 115 (2020), 267–278.
* [22] A. Kustin, C. Polini and B. Ulrich, The equations defining blowup algebras of height three Gorenstein ideals, Algebra Number Theory 11 (2017), 1489–1525.
* [23] K.–N. Lin and Y.–H. Shen, Symbolic powers and free resolutions of generalized star configurations of hypersurfaces. To appear in Michigan Math. J., (2021) arXiv:1912.04448.
* [24] K.–N. Lin and Y.–H. Shen, Symbolic powers of generalized star configurations of hypersurfaces, (2021) preprint arXiv:2106.02955v1.
* [25] P. Mantero, The structure and free resolutions of the symbolic powers of star configurations of hypersurfaces. Trans. Amer. Math. Soc. 373 (2020), 8785–8835.
* [26] S. Morey and B. Ulrich, Rees algebras of ideals of low codimension, Proc. Amer. Math. Soc. 124 (1996), 3653–3661.
* [27] L. Nicklasson, On the Betti numbers and Rees algebras of ideals with linear powers, preprint available at arXiv:1904.01995.
* [28] D. Taylor, Ideals generated by monomials in an $R$-sequence, Ph.D. dissertation, University of Chicago, 1966.
* [29] S. O. Tohăneanu and Y. Xie, On the Geramita–Harbourne–Migliore conjecture, Trans. Amer. Math. Soc. 374 (2021), 4059–4073.
* [30] B. Ulrich and W. Vasconcelos, The equation of Rees Algebras of ideals with linear presentation. Math. Z. 214 (1993), 79–92.
* [31] W. V. Vasconcelos, On the equations of Rees algebras. J. Reine Angew. Math. 418 (1991), 189–218.
* [32] R.H. Villarreal, Rees algebras of edge ideals, Comm. Algebra 23 (1995), 3513–3524.
# Co-Optimization of Design and Fabrication Plans for Carpentry
Haisen Zhao [email protected] University of Washington and Shandong
University , Max Willsey [email protected] University of Washington
, Amy Zhu [email protected] University of Washington , Chandrakana
Nandi [email protected] University of Washington , Zachary Tatlock
[email protected] University of Washington , Justin Solomon
[email protected] Massachusetts Institute of Technology and Adriana Schulz
[email protected] University of Washington
###### Abstract.
Past work on optimizing fabrication plans given a carpentry design can provide
Pareto-optimal plans trading off between material waste, fabrication time,
precision, and other considerations. However, when developing fabrication
plans, experts rarely restrict to a single design, instead considering
families of design variations, sometimes adjusting designs to simplify
fabrication. Jointly exploring the design and fabrication plan spaces for each
design is intractable using current techniques. We present a new approach to
jointly optimize design and fabrication plans for carpentered objects. To make
this bi-level optimization tractable, we adapt recent work from program
synthesis based on equality graphs (e-graphs), which encode sets of equivalent programs. Our insight is that subproblems within our bi-level problem share significant substructures. By representing both designs and fabrication plans in a new bag-of-parts (BOP) e-graph, we amortize the cost of optimizing design components shared among multiple candidates. Even using the BOP e-graph, the optimization space grows quickly in practice. Hence, we also show how a feedback-guided search strategy dubbed Iterative Contraction and Expansion on E-graphs (ICEE) can keep the size of the e-graph manageable and direct the search toward promising
candidates. We illustrate the advantages of our pipeline through examples from
the carpentry domain.
Keywords: Fabrication, Programming languages. Journal: TOG. CCS Concepts: Computing methodologies → Shape modeling; Computing methodologies → Graphics systems and interfaces.
Figure 1. Our system jointly explores the space of discrete design variants
and fabrication plans to generate a Pareto front of (design, fabrication plan)
pairs that minimize fabrication cost. In this figure, (a) is the input design
for a chair and the Pareto front that only explores the space of fabrication
plans for this design, (b) shows the Pareto front generated by joint
exploration of both the design variants and fabrication plans for the chair,
where each point is a (design, fabrication plan) pair. Design variations
indicate different ways to compose the same 3D model from a collection of
parts and are illustrated with the same color in the Pareto front. A physical
chair is fabricated by following the resulting fabrication plan. This example
shows that the fabrication cost can be significantly improved by exploring
design variations.
## 1\. Introduction
While optimizing designs for fabrication is a long-standing and well studied
engineering problem, the vast majority of the work in this area assumes that
there is a unique map from a design to a fabrication plan. In reality,
however, many applications allow for _multiple fabrication alternatives_.
Consider, for example, the model shown in Figure 1 where different fabrication
plans trade off material cost and fabrication time. In this context,
fabrication-oriented design optimization becomes even more challenging, since
it requires exploring the landscape of optimal fabrication plans for _many_
design variations. Every variation of the original design (Figure 1)
determines a new landscape of fabrication plans with different cost trade-
offs. Designers must therefore navigate the _joint_ space of design and
fabrication plans to find the optimal landscape of solutions.
In this work, we present a novel approach that simultaneously optimizes both
the design and fabrication plans for carpentry. Prior work represents
carpentry designs and fabrication plans as programs [Wu et al., 2019] to
optimize the fabrication plan of a _single design_ at a time. Our approach
also uses a program-like representation, but we _jointly_ optimize the design
and the fabrication plan.
Our problem setting has two main challenges. First, the discrete space of
fabrication plan alternatives can vary significantly for each discrete design
variation. This setup can be understood as a _bi-level_ problem, characterized
by the existence of two optimization problems in which the constraint region
of the upper-level problem (the joint space of designs and fabrication plans)
is implicitly determined by the lower-level optimization problem (the space of
feasible fabrication plans given a design). The second challenge is that there
are multiple conflicting fabrication objectives. Plans that improve the total
production time may waste more material or involve less precise cutting
operations. Our goal is therefore to find _multiple_ solutions to our
fabrication problem that represent optimal points in the landscape of possible
trade-offs, called the _Pareto front_. Importantly, the different fabrication
plans on the Pareto front may come from different design variations. The
complexity of the bi-level search space combined with the need for finding a
landscape of Pareto-optimal solutions makes this optimization challenging.
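As background for readers less familiar with multi-objective optimization (our illustration, not the paper's algorithm), the Pareto front over a finite candidate set can be computed with a simple dominance filter. The objective triples below are made up:

```python
def dominates(q, p):
    """q dominates p: no worse in every objective, strictly better in one."""
    return all(qi <= pi for qi, pi in zip(q, p)) and \
           any(qi < pi for qi, pi in zip(q, p))

def pareto_front(points):
    """Keep the non-dominated points (minimization in every objective)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (material waste, fabrication time, imprecision) triples:
plans = [(3, 2, 1), (2, 3, 1), (1, 1, 2), (2, 2, 2)]
print(pareto_front(plans))  # -> [(3, 2, 1), (2, 3, 1), (1, 1, 2)]
```

Here (2, 2, 2) is dropped because (1, 1, 2) beats it in the first two objectives and ties in the third; the remaining three plans represent incomparable trade-offs.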
We propose a method to make this problem computationally tractable in light of
the challenges above. Our key observation is that there is redundancy on both
levels of the search space that can be exploited. In particular, different
design variations may share similar subsets of parts, which can use the same
fabrication plans. We propose exploiting this sharing to encode a large number
of design variations and their possible fabrication plans compactly. We use a
data structure called an _equivalence graph (e-graph)_ [Nelson, 1980] to
maximize sharing and thus amortize the cost of heavily optimizing part of a
design since all other design variations sharing a part benefit from its
optimization.
E-graphs have been growing in popularity in the programming languages
community; they provide a compact representation for equivalent programs that
can be leveraged for theorem proving and code optimization. There are two
challenges in directly applying e-graphs to design optimization under fabrication
variations, detailed below.
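For readers new to e-graphs, the toy sketch below (our illustration, far simpler than the engines cited in this literature) shows the two core operations: hash-consing e-nodes into e-classes via a union-find, and restoring congruence after a union:

```python
class EGraph:
    """Toy e-graph: hash-consed e-nodes plus a union-find over e-class ids."""

    def __init__(self):
        self.parent = []  # union-find parent pointers
        self.table = {}   # canonical e-node -> e-class id

    def find(self, a):
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]  # path halving
            a = self.parent[a]
        return a

    def add(self, op, *children):
        node = (op, tuple(self.find(c) for c in children))
        if node not in self.table:
            self.parent.append(len(self.parent))          # fresh e-class
            self.table[node] = len(self.parent) - 1
        return self.find(self.table[node])

    def union(self, a, b):
        a, b = self.find(a), self.find(b)
        if a != b:
            self.parent[a] = b

    def rebuild(self):
        """Restore congruence: once a = b, f(a) and f(b) share a class."""
        changed = True
        while changed:
            changed = False
            new_table = {}
            for (op, ch), cls in self.table.items():
                node = (op, tuple(self.find(c) for c in ch))
                cls = self.find(cls)
                if node in new_table and self.find(new_table[node]) != cls:
                    self.union(new_table[node], cls)
                    changed = True
                new_table[node] = cls
            self.table = new_table

# Merging a and b forces f(a) and f(b) into one e-class by congruence:
g = EGraph()
fa = g.add('f', g.add('a'))
fb = g.add('f', g.add('b'))
g.union(g.add('a'), g.add('b'))
g.rebuild()
assert g.find(fa) == g.find(fb)
```

Because equivalent subterms collapse into one e-class, a single shared part is optimized once and every design variant containing it benefits, which is the sharing this paper exploits at scale.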
First, the different fabrication plans for a given design are all semantically
equivalent programs. However, the fabrication plans associated with different
design variations, in general, are not semantically equivalent, i.e., they may
produce different sets of parts. This makes it difficult to directly apply
traditional techniques which exploit sharing by searching for minimal cost,
but still semantically equivalent, versions of a program. One of our key
technical contributions is therefore a new data structure for representing the
search space, which we call the Bag-of-Parts (BOP) E-graph. This data
structure takes advantage of common substructures across both design _and_
fabrication plans to maximize redundancy and boost the expressive power of
e-graphs.
Second, optimization techniques built around e-graphs have adopted a two stage
approach: _expansion_ (incrementally growing the e-graph by including more
equivalent programs; in the programming languages literature, this is known as _equality saturation_) followed by _extraction_ (the process of searching
the e-graph for an optimal program). In particular, the expansion stage has
not been feedback-directed, i.e., the cost of candidate programs has only been
used in extraction, but that information has not been fed back in to guide
further e-graph expansion. A key contribution of our work is a method for
Iterative Contraction and Expansion on E-graphs (ICEE). Because ICEE is
feedback-directed, it enables us to effectively explore the large
combinatorial space of designs and their corresponding fabrication plans. ICEE
also uses feedback to prune the least valuable parts of the e-graph during
search, keeping its size manageable. Further, these expansion and contraction
decisions are driven by a multi-objective problem that enables finding a
diverse set of points on the Pareto front.
We implemented our approach and compared it against prior work and against
results generated by carpentry experts. Our results show that ICEE is up to
$17\times$ faster than prior approaches while achieving similar results. In
some cases, it is the only approach that successfully generates an optimal set
of results due to its efficiency in exploring large design spaces. We showcase
how our method can be applied to a variety of designs of different complexity
and show that it is advantageous in diverse contexts. For example, compared to a method that does not explore design variations, we achieve 25% less material in one model, 60% less time in another, and a 20% total-cost saving in a third (assuming a carpenter charges $40/h).
## 2\. Related Work
##### Optimization for Design and Fabrication
Design for fabrication is an exciting area of research that aims to
automatically achieve desired properties while optimizing fabrication plans.
Examples of recent work include computational design of glass façades [Gavriil
et al., 2020], compliant mechanical systems [Tang et al., 2020], barcode
embeddings [Maia et al., 2019], and interlocking assemblies [Wang et al.,
2019; Cignoni et al., 2014; Hildebrand et al., 2013], among many others
[Bickel et al., 2018; Schwartzburg and Pauly, 2013]. Fabrication
considerations are typically taken into account as constraints during design
optimization, but these methods assume that there is an algorithm for
generating _one_ fabrication plan for a given design. To the best of our
knowledge, no prior work explores the multi-objective space of fabrication
alternatives during design optimization.
There is also significant literature on fabrication plan optimization for a
_given_ design under different constraints. Recent work includes optimization
of composite molds for casting [Alderighi et al., 2019], tool paths for 3D
printing [Zhao et al., 2016; Etienne et al., 2019], and decomposition for CNC
milling [Mahdavi-Amiri et al., 2020; Yang et al., 2020]. While some of these
methods minimize the distance to a target design under fabrication constraints
[Zhang et al., 2019; Duenser et al., 2020], none of them explores a space of
design modification to minimize fabrication cost.
In contrast, our work _jointly_ explores the design and fabrication space in
the carpentry domain, searching for the Pareto-optimal design variations that
minimize multiple fabrication costs.
##### Design and Fabrication for Carpentry
Carpentry is a well-studied domain in design and fabrication due to its wide
application scope. Prior work has investigated interactive and optimization
methods for carpentry design [Umetani et al., 2012; Koo et al., 2014; Song et
al., 2017; Garg et al., 2016; Fu et al., 2015]. There is also a body of work
on fabrication plan optimization [Yang et al., 2015; Koo et al., 2017; Leen et
al., 2019; Lau et al., 2011]. Closest to our work is the system of Wu et al.
[2019], which represents both carpentry designs and fabrication plans as
programs and introduces a compiler that optimizes _fabrication_ plans for a
_single_ design. While our work builds on the domain-specific languages (DSLs)
proposed in that prior work, ours is centered on the fundamental problem of
design optimization under fabrication alternatives, which has not been
previously addressed.
##### Bi-Level Multi-Objective Optimization
Our problem and others like it are _bi-level_, with a nested structure in
which each design determines a different space of feasible fabrication plans.
The greatest challenge in handling bi-level problems lies in the fact that the
lower level problem determines the feasible space of the upper level
optimization problem. More background on bi-level optimization can be found in
the book by Dempe [2018], as well as review papers by Lu et al. [2016] and
Sinha et al. [2017].
Bi-level problems with multiple objectives can be even more challenging to
solve [Dempe, 2018]. Some specific cases are solved with classical approaches,
such as numerical optimization [Eichfelder, 2010] and the
$\epsilon$-constraint method [Shi and Xia, 2001]. Heuristic-driven search
techniques have been used to address bi-level multi-objective problems, such
as genetic algorithms [Yin, 2000] and particle swarm optimization [Halter and
Mostaghim, 2006]. These methods apply a heuristic search to both levels in a
nested manner, searching over the upper level with NSGA-II operations while
evaluating each individual in a nested lower-level NSGA-II process [Deb and
Sinha, 2009]. Our ICEE framework also applies a genetic algorithm during
search. Different from past techniques, ICEE does not nest the two-level
search but rather reuses structure between different upper-level feasible
points. ICEE jointly explores both the design and fabrication spaces using the
BOP E-graph representation.
##### E-graphs
An e-graph is an efficient data structure for compactly representing large sets of
equivalent programs. E-graphs were originally developed for automated theorem
proving [Nelson, 1980], and were first adapted for program optimization by
Joshi et al. [2002]. These ideas were further expanded to handle programs with
loops and conditionals [Tate et al., 2009] and applied to a variety of domains
for program optimization, synthesis, and equivalence checking [Stepp et al.,
2011; Willsey et al., 2021; Nandi et al., 2020; Panchekha et al., 2015; Wu et
al., 2019; Wang et al., 2020; Premtoon et al., 2020].
Recently, e-graphs have been used for optimizing designs [Nandi et al., 2020], and also
for optimizing fabrication plans [Wu et al., 2019], but they have not been
used to simultaneously optimize both designs and fabrication plans. Prior work
also does not explore feedback-driven expansion and contraction for managing
large optimization search spaces.
## 3\. Background
In this section, we introduce some mathematical preliminaries used in the rest
of the paper.
### 3.1. Multi-Objective Optimization
A multi-objective optimization problem is defined by a set of objectives
$f_{i}:\mathbf{x}\mapsto\mathbb{R}$ that assign a real value to each point
$\mathbf{x}\in\mathcal{X}$ in the feasible search space $\mathcal{X}$. We
choose the convention that _small_ values of $f_{i}(\mathbf{x})$ are desirable
for objective $f_{i}$.
As these objectives are typically _conflicting_, our algorithm searches for a
diverse set of points that represent optimal trade-offs, called _Pareto
optimal_ [Deb, 2014]:
###### Definition 3.1 (Pareto optimality).
A point $\mathbf{x}\in\mathcal{X}$ is _Pareto optimal_ if there does not exist
any $\mathbf{x}^{\prime}\in\mathcal{X}$ so that $f_{i}(\mathbf{x})\geq
f_{i}(\mathbf{x}^{\prime})$ for all $i$ and
$f_{i}(\mathbf{x})>f_{i}(\mathbf{x}^{\prime})$ for at least one $i$.
We use $F:\mathbf{x}\mapsto\mathbb{R}^{N}$ to denote the concatenation
$(f_{1}(\mathbf{x}),\ldots,f_{N}(\mathbf{x}))$. Pareto-optimal points are the
solutions to the multi-objective optimization problem:
(1) $\min_{\mathbf{x}}F(\mathbf{x})\ \ \mathrm{s.t.}\
\mathbf{x}\in\mathcal{X}.$
The image of all Pareto-optimal points is called the _Pareto front_.
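As an illustration, Pareto optimality under Definition 3.1 can be checked directly; the following Python sketch (function names are ours, for illustration only) filters a set of objective vectors, assuming minimization, down to its non-dominated subset:

```python
def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization):
    fa is no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(fa, fb)) and \
           any(a < b for a, b in zip(fa, fb))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors.
    (Quadratic-time illustration; exact duplicates are all kept.)"""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

For instance, among the cost vectors `(1,2)`, `(2,1)`, `(2,2)`, and `(3,3)`, only the first two are Pareto optimal: each of the others is dominated by `(1,2)`.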
##### Non-Dominated Sorting
Genetic algorithms based on non-dominated sorting are a classic approach to
multi-objective optimization [Deb et al., 2002; Deb and Jain, 2013]. The key
idea is that sorting should be done based on proximity to the Pareto front.
These papers define the concept of Pareto layers, where layer $0$ is the
Pareto front, and layer $l$ is the Pareto front that would result if all
solutions from layers $0$ to $l-1$ are removed. When selecting parent
populations or when pruning children populations, solutions in lower layers
are added first, and when a layer can only be added partially, elements of
this layer are chosen to increase diversity. Different variations of this
method use different strategies for diversity; we use NSGA-III [Deb and Jain,
2013] in our work.
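The layer construction described above can be sketched as repeated front extraction (a quadratic-time illustration of the idea; NSGA implementations use faster bookkeeping):

```python
def dominates(fa, fb):
    """fa Pareto-dominates fb (minimization convention)."""
    return all(a <= b for a, b in zip(fa, fb)) and \
           any(a < b for a, b in zip(fa, fb))

def pareto_layers(points):
    """Partition objective vectors into Pareto layers: layer 0 is the
    Pareto front; layer l is the front that remains after removing
    layers 0..l-1."""
    remaining = list(points)
    layers = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        layers.append(front)
        remaining = [p for p in remaining if p not in front]
    return layers
```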
##### Hypervolume
Hypervolume [Auger et al., 2009] is a metric commonly used to compare two sets
of image points during Pareto front discovery. To calculate the hypervolume,
we draw the smallest axis-aligned rectangular prism between some reference
point and each point on the Pareto front. The hypervolume is the volume of the
union of these prisms; thus, a larger hypervolume implies a better
approximation of the Pareto front.
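For two objectives, the union-of-boxes computation reduces to a simple sweep. A sketch (our illustration, assuming minimization, a proper front, and a reference point worse than every front point in both objectives):

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective Pareto front (minimization) w.r.t. a
    reference point `ref` dominated by every front point."""
    # On a proper front, sorting ascending in f1 makes f2 descending.
    pts = sorted(front)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)  # new strip of the union
        prev_f2 = f2
    return hv
```

For example, the front `{(1,2), (2,1)}` with reference point `(3,3)` spans two overlapping 2x1 and 1x2 boxes whose union has area 3.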
### 3.2. Bi-level Multi-Objective Optimization
Given a design space $\mathcal{D}$ that defines possible variations of a
carpentry model, our goal is to find a design $d\in\mathcal{D}$ and a
corresponding fabrication plan $p\in\mathcal{P}^{d}$ that minimizes a vector
of conflicting objectives, where $\mathcal{P}^{d}$ is the space of fabrication
plans corresponding to design $d$. This setup yields the following multi-
objective optimization problem:
$\min_{p,d}F(d,p)\ \ \text{s.t.}\ \ d\in\mathcal{D},\ \ p\in\mathcal{P}^{d}$
where $\mathcal{P}^{d}$ defines the space of all possible plans for
fabricating the design $d$. Generally, our problem can be expressed as a
bi-level multi-objective optimization that searches across designs to find those
with the best fabrication costs, and requires optimizing the fabrication for
each design during this exploration [Lu et al., 2016]:
$\min_{d}F(d,p)\ \ \text{s.t.}\ \ \ \ d\in\mathcal{D},\ \ \
p=\arg\min_{p}F(d,p)$
where $\arg\min$ refers to Pareto-optimal solutions to the multi-objective
optimization problem.
A naïve solution to this bi-level problem would be to search over the design
space $\mathcal{D}$ using a standard multi-objective optimization method,
while solving the nested optimization problem to find the fabrication plans
given a design at each iteration. Given the combinatorial nature of our
domain, this would be prohibitively slow, which motivates our proposed
solution.
### 3.3. Equivalence Graphs (E-graphs)
Typically, programs (often referred to as terms) are viewed as tree-like
structures containing smaller sub-terms. For example, the term $3\times 2$ has
the operator $\times$ at its “root” and two sub-terms, $3$ and $2$, each of
which has no sub-terms. Terms can be expressed in multiple syntactically
different ways. For example, in the language of arithmetic, the term $3\times
2$ is semantically equivalent to $3+3$, but they are syntactically different.
Naïvely computing and storing all semantically equivalent but syntactically
different variants of a term requires exponential time and memory. For a
large program, this makes searching the space of equivalent terms intractable.
E-graphs [Nelson, 1980] are designed to address this challenge—an e-graph is a
data structure that represents many equivalent terms efficiently by sharing
sub-terms whenever possible. An e-graph not only stores a large set of terms,
but it also represents an equivalence relation over those terms, i.e., it
partitions the set of terms into equivalence classes, or e-classes, each of
which contains semantically equivalent but syntactically distinct terms. In
Section 4.2, we show how to express carpentry designs in a way that captures
the benefits of the e-graph.
###### Definition 3.2 (E-graph).
An e-graph is a set of equivalence classes or e-classes. An e-class is a set
of equivalent e-nodes. An e-node is an operator from the given language paired
with some e-class children, i.e., $f(c_{1},\ldots,c_{n})$ is an e-node where
$f$ is an operator and each $c_{i}$ is an e-class that is a child of this
e-node. An e-node may have no children, in which case we call it a leaf. An
e-graph represents an equivalence relation over terms. Representation is
defined recursively:
* •
An e-graph represents a term if any of its e-classes do.
* •
An e-class represents a term if any of its e-nodes do. All terms represented
by e-nodes in the same e-class are equivalent.
* •
An e-node $f(c_{1},\ldots,c_{n})$ represents a term $f(t_{1},\ldots,t_{n})$ if
each e-class $c_{i}$ represents term $t_{i}$. A leaf e-node $g$ represents
just that term $g$.
Figure 2 shows an example e-graph and the terms it represents. Note how the
e-graph maximizes sharing even across syntactically distinct, semantically
equivalent terms. When adding e-nodes or combining e-classes, the e-graph
automatically maintains this maximal sharing property, using existing e-nodes
whenever possible.
Figure 2. An example e-graph. E-classes (dotted boxes labelled by letters)
contain equivalent e-nodes (solid boxes) which refer to children e-classes
(arrows).
The e-class (c) contains one leaf e-node, $3$, and it represents one term,
$3$. The e-class (b) contains two e-nodes, $\textsf{{(c)}}+\textsf{{(c)}}$ and
$\textsf{{(c)}}*\textsf{{(d)}}$, and it represents two terms: $3+3$ and $3*2$.
Although the e-class (a) only contains one e-node, it represents 4 terms:
$(3+3)+(2+2)$, $(3*2)+(2+2)$, $(3+3)+4$, and $(3*2)+4$. If $+$ is cheaper than
$*$, then $(3+3)+4$ is the cheapest term represented by e-class (a).
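To make the cheapest-term extraction in the caption concrete, here is a minimal Python sketch over a toy e-graph mirroring e-classes (b), (c), and (d) from Figure 2 (the dict representation and operator costs are ours, for illustration only):

```python
# An e-graph as a dict: e-class id -> list of e-nodes (op, child e-class ids).
egraph = {
    "c": [("3", ())],
    "d": [("2", ())],
    "b": [("+", ("c", "c")), ("*", ("c", "d"))],  # 3+3 and 3*2 share e-class (c)
}
OP_COST = {"+": 1, "*": 2}  # '+' is cheaper than '*'; leaves cost 0

def extract(egraph):
    """Fixpoint extraction: the cost of an e-class is the minimum over its
    e-nodes of the operator cost plus the costs of the child e-classes."""
    cost = {}
    changed = True
    while changed:
        changed = False
        for cls, enodes in egraph.items():
            for op, kids in enodes:
                if all(k in cost for k in kids):
                    c = OP_COST.get(op, 0) + sum(cost[k] for k in kids)
                    if c < cost.get(cls, float("inf")):
                        cost[cls] = c
                        changed = True
    return cost
```

With these costs, e-class (b) extracts the term $3+3$ at cost 1 rather than $3*2$ at cost 2, matching the caption's observation.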
## 4\. Optimization Algorithm
Figure 3. Example of three different design variations of a model and
corresponding fabrication plans. Design variations determine different ways to
decompose a 3D model into a set of parts. Fabrication plans define how these
parts are _arranged_ in pieces of stock material and the cut order
(illustrated by the numbers along each cut).
Our algorithm takes as input a carpentry design with a discrete set
$\mathcal{D}$ of possible design variations. Design variations determine
different ways to decompose a 3D model into a set of fabricable parts, as
shown in Figures 3 and 4. These can be manually or automatically generated
(see Section 1.1 of the supplemental material).
Our goal is to find Pareto-optimal solutions that minimize fabrication cost,
where each solution is a pair of design variation and fabrication plan.
Similar to prior work [Wu et al., 2019], we measure cost in terms of material
usage ($f_{m}$), cutting precision ($f_{p}$), and fabrication time ($f_{t}$).
Section 1.3 of the supplemental material describes how these metrics are
computed for this work.
Figure 4. Example of a space of design variations, $\mathcal{D}$. Each of the
four connectors can have three different connecting variations, resulting in
81 design variations. Note that some of the different design variations may
use the same parts (as d1, d2), and will be treated as redundant during our
optimization. This model produces 13 unique bags of parts.
### 4.1. Motivation and Insights
Given an algorithm for finding the Pareto-optimal fabrication plans for a
given design (e.g., the one proposed by Wu et al. [2019]), a brute force
method would simply find the Pareto-optimal solutions for each of the possible
design variations $d\in\mathcal{D}$ and take the dominant ones to form the
Pareto front of the combined design/fabrication space. Since design variations
can produce an exponentially large space of designs $\mathcal{D}$, this
approach would be intractable for complex models. An alternative approach
could use a discrete optimization algorithm to explore the design space (e.g.
hill climbing). This approach would still need to compute the Pareto-optimal
fabrication plans for each design explored in every iteration, which can be
expensive for complex design variants (e.g., it takes 8-10 minutes to compute
Pareto-optimal fabrication plans for a single design variation of the chair
model in Figure 1 using the approach of Wu et al. [2019]).
We address these challenges with two key insights:
1. (1)
Design variants will share common sub-parts (both within a single variant and
across different variants). As shown in Figure 3, even in a design where no
two parts are the same, there is significant overlap _across_ design
variations. Exploiting this sharing can prevent recomputing the fabrication
cost from scratch for every design variation. We propose using an e-graph to capture
this sharing when (sub-)designs have the same bag of parts; we call this the
BOP E-graph.
2. (2)
The space of design variants is too large to exhaustively explore, and even a
single variant may have many Pareto-optimal fabrication plans. We propose
using the BOP E-graph to guide the exploration in an incremental manner, with
a new technique called ICEE (Iterative Contraction and Expansion of the
E-graph) that jointly explores the design and fabrication plan spaces.
### 4.2. Bag of Parts (BOP) E-graph
Our algorithm selects a Pareto-optimal set of fabrication plans, each of which
will produce a design variation of the given model. A fabrication plan
consists of four increasingly detailed things:
1. (1)
A bag of parts: a bag (a.k.a. multiset, i.e., an unordered set with
multiplicity that may contain the same item multiple times; we use the terms
interchangeably) of atomic parts that compose the model.
2. (2)
An assignment that maps those parts to individual pieces of stock material.
3. (3)
A packing for each piece of stock in the assignment that dictates _how_ those
parts are arranged in that stock.
4. (4)
A cutting order for each packing that specifies the order and the tool
(chopsaw, tracksaw, etc.) used to cut the stock into the parts.
We say that an _arrangement_ consists of items 1-3: a bag of parts assigned to
and packed within pieces of stock material, but _without the cutting order
decided_. We can create a language to describe arrangements; a term in the
arrangement language is one of the following:
* •
An atomic node is a childless operator that represents a bag of parts packed
into a single piece of stock. For example,
$\\{\square,\square,\triangle\\}_{p,b}$ maps two squares and one triangle all
to the same piece of stock of type $b$ using a packing $p$.
* •
A union node takes two child arrangements and composes them into a single
arrangement. The following arrangement is a union node of two atomic nodes:
${\\{\square,\square\\}_{p_{1},b}\cup\\{\triangle\\}_{p_{2},b}}$. It packs two
squares into stock of type $b$ using packing $p_{1}$, and it packs a triangle
into a different piece of stock of the same type $b$ using packing $p_{2}$.
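The two node types above can be sketched as a small Python datatype, together with the bag-of-parts function that the BOP notion of equivalence relies on (class and function names are ours, for illustration only):

```python
from dataclasses import dataclass
from collections import Counter

@dataclass(frozen=True)
class Atomic:
    """A bag of parts packed into one piece of stock."""
    parts: tuple    # e.g. ("square", "square", "triangle")
    packing: str    # packing id, e.g. "p1"
    stock: str      # stock type, e.g. "b"

@dataclass(frozen=True)
class Union:
    """Composes two child arrangements into a single arrangement."""
    left: object
    right: object

def bag_of_parts(arr):
    """Two arrangements are BOP-equivalent iff this multiset is equal,
    regardless of packing or stock assignment."""
    if isinstance(arr, Atomic):
        return Counter(arr.parts)
    return bag_of_parts(arr.left) + bag_of_parts(arr.right)
```

The examples of equivalent arrangements given in the next paragraph correspond exactly to pairs of terms with equal `bag_of_parts` values despite different packings and stock.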
To put arrangements into an e-graph, we must define the notion of equivalence
that the e-graph uses to determine which e-nodes go in the same e-class. The
more flexible this notion is (i.e., the larger the equivalence relation), the
more sharing the e-graph can capture.
To maximize sharing, we say two arrangements are equivalent if they use the
same bag of parts (BOP), even if those parts are assigned to different stock
or packed differently. For example, $\\{\square,\square\\}_{p_{1},b}$ is
equivalent to $\\{\square,\square\\}_{p_{2},c}$ even though they use different
kinds of stock, and $\\{\square,\square,\triangle\\}_{p_{3},b}$ is equivalent
to $\\{\square,\triangle\\}_{p_{4},b}\cup\\{\square\\}_{p_{5},b}$ even though
the former uses one piece of $b$ stock and the latter uses two.
Given our arrangement language and the BOP notion of equivalence, we can now
describe the central data structure of our algorithm, the BOP E-graph. Recall
from Section 3.3 that e-nodes within an e-graph have e-class children rather
than e-node children. So, viewing our arrangement language at the e-graph
level, union
e-nodes take two e-classes as children. All e-nodes in the same e-class are
equivalent, i.e., they represent terms that use the same bag of parts but that
arrange those parts differently into stock.
Figure 5 gives two example design variations and a BOP E-graph that captures a
common sub-arrangement between the two. The e-classes E1 and E2 represent
terms that correspond to the two box designs, and E4 captures ways to arrange
the $y$ and $z$ parts which the variants share. The design variant including
part $w$ also captures sharing with itself: e-class E5 reuses the arrangement
in e-class $E9$.
Figure 5. Two variants of a box design encoded in one BOP E-graph. The
bold edges show a root term that requires 3 atomic packings. The BOP E-graph
encodes multiple arrangements for both design variants. E-classes are drawn as
dotted boxes and annotated with the bag of parts represented by that e-class.
(Only the e-nodes are semantically part of the e-graph; the names and bags of
parts are just visual aides.) E-classes E1 and E2 are root e-classes since
they represent the bags of parts required by the design variants. Union and
atomic e-nodes are shown as squares with “U”s or circles with “A”s,
respectively. Atomic e-nodes correspond to packings of parts within a piece of
stock. An example root term in the BOP E-graph is bolded; using the syntax
from Section 4.2, this is the term
$\\{x,y\\}_{A_{4},\textsf{long}}\cup\\{y\\}_{A_{9},\textsf{short}}\cup\\{z\\}_{A_{10},\textsf{short}}$.
Note that arrangements and the BOP E-graph do not mention designs. We do not
“store” designs in the e-graph; we just need to remember which e-classes
represent bags of parts that correspond to designs that we are interested in.
This can be done outside the e-graph with a mapping from designs to e-classes.
Many designs
(especially symmetric ones) may have the same bag of parts. We call an e-class
that is associated with a design a root e-class, and we call a term
represented by a root e-class a root term. The BOP E-graph itself does not
handle root vs. non-root e-classes or terms differently; these are only used
by the remainder of the algorithm to remember which arrangements correspond to
design variants. The BOP E-graph will maximize sharing across design
variations and arrangements since it makes no distinction between the two.
Figure 6. Algorithm overview using the example in Figure 5. The first step
initializes a BOP E-graph (Section 4.3.2, Section 4.3.3) with several design
variants and a small number of fabrication arrangements (a). U and A represent
union and atomic e-nodes respectively. As part of the ICEE loop, the algorithm
extracts a Pareto Front (Section 4.3.4) which is used to score the e-classes
in the BOP E-graph (b). For example, the gray e-class containing a “U” and an
“A” e-node indicates a low score, i.e., the e-class did not contribute to
Pareto-optimal solutions. The BOP E-graph is then contracted (Section 4.3.5)
by removing the low-scored e-classes (and their parent e-nodes) to get a
compressed BOP E-graph (c). As described in Section 4.3.6, this contracted BOP
E-graph is then further expanded (d) by exploring more design variants and
fabrication arrangements. The algorithm exits the loop when the termination
conditions are reached, returning the final Pareto Front (e).
### 4.3. Iterative Contraction and Expansion on E-graphs (ICEE)
#### 4.3.1. Overview
ICEE takes a feasible design space $\mathcal{D}$ as input, and outputs a
Pareto front where each solution $s$ represents a (design, fabrication) pair.
An overview of this algorithm is shown in Figure 6.
The initialization step selects a small subset of design variants from
$\mathcal{D}$ (Section 4.3.2) and then generates a small number of fabrication
arrangements for each one (Section 4.3.3). All of these are added to the BOP
E-graph, maintaining the property of maximal sharing, as described above. ICEE
then applies the extraction algorithm (Section 4.3.4) to generate a Pareto
front from the current BOP E-graph. This process will compute many different
solutions $s$ and their fabrication costs $F(s)=(f_{m}(s),f_{p}(s),f_{t}(s))$,
all of which are stored in the solution set $\mathcal{S}$.
The resulting Pareto front is used to compute ranking scores for each e-class
in the BOP E-graph; the ranking score measures how often this bag of parts is
used in Pareto-optimal solutions and how many fabrication variations have been
explored for this bag of parts. Using these scores, ICEE contracts the BOP
E-graph by pruning e-classes that have been sufficiently explored but still do
not contribute to Pareto-optimal solutions (Section 4.3.5).
Having pruned the e-graph of the less relevant e-classes, ICEE then expands the BOP
E-graph in two ways (Section 4.3.6). First, it suggests more design variations
based on the extracted Pareto-Optimal designs. Second, it generates more
fabrication arrangements for both the new generated design variations and some
of the previously existing e-classes. The ranking scores are used to select
e-classes for expansion.
ICEE then extracts the new Pareto front from the updated BOP E-graph and
repeats the contraction and expansion steps until the following termination
criteria are met: 1) there is no hypervolume improvement within $t_{d}$
iterations, or 2) we exceed $mt_{d}$ iterations. Additionally, we set a
timeout $T$ beyond which we no longer change the BOP E-graph, but continue to
extract based on crossover and mutation until one of the termination criteria
is met. In our experiments, we set $t_{d}=10$, $mt_{d}=200$, and $T=4$ hours.
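The termination test can be sketched as a check over the history of per-iteration hypervolume values (a hypothetical helper sketching the two criteria, not the paper's implementation):

```python
def should_terminate(hv_history, t_d=10, mt_d=200):
    """Stop when the hypervolume has not improved over the last t_d
    iterations, or when mt_d iterations have been reached."""
    if len(hv_history) >= mt_d:
        return True
    if len(hv_history) <= t_d:
        return False
    # No improvement: the best recent value does not beat the earlier best.
    return max(hv_history[-t_d:]) <= max(hv_history[:-t_d])
```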
#### 4.3.2. Initial Generation of Design Variants
We bootstrap our search with the observation that design variations with more
identical parts tend to be cheaper to fabricate because less time is spent
setting up fabrication processes. Therefore, instead of initializing the BOP
E-graph with $K_{d}$ designs randomly selected from $\mathcal{D}$, we randomly
select up to $10^{5}$ designs and select the top $K_{d}$ designs from this set
that have a maximal number of identical parts.
#### 4.3.3. Fabrication Arrangements Generation
Again, instead of randomly generating $K_{f}$ arrangement variations for a
given design, we use heuristics; namely, that (1) we can minimize the number
of cuts by stacking and aligning material to cut multiple parts with a single
cut, and (2) we can minimize the material cost by packing as many parts as
possible into a single piece of stock. Since a similar method for generating
arrangement variations has been previously proposed by Wu et al. [2019], we
leave a detailed discussion of the algorithm to the supplemental material
(Section 1.2).
We note that the key difference between our method and the prior heuristic-
driven algorithm is that we incorporate storage and direct control schemes
that enable the method to output $K_{f}$ variations that are _different_ from
the ones generated during previous iterations of ICEE. This is essential to
enable incremental expansion of the BOP E-graph without restoring variations
that have already been pruned in previous contraction steps.
#### 4.3.4. Pareto Front Extraction
In e-graph parlance, extraction is the process of selecting the “best”
represented term from an e-graph according to some (typically
single-objective) cost function. One
way to view extraction is that it simply chooses which e-node should be the
canonical representative of each e-class; once that is done, each e-class
represents a single term. Since our cost function is multi-objective, we must
instead extract a set of terms (arrangements) from the BOP E-graph that forms
a Pareto front.
We use a genetic algorithm [Deb and Jain, 2013] to extract terms from the BOP
E-graph. The population size is set to $N_{pop}$. The genome is essentially a
list of integers, one per e-class, that specifies which e-node is the
representative. Since the BOP E-graph may have multiple root e-classes
(corresponding to multiple design variations), we combine the genes for all
the root e-classes, only picking a single e-node among all of them. In effect,
this means the genome defines both a design variation and the arrangement for
that design.
For example, consider the bold term within the BOP E-graph in Figure 5. The
genome for that term is as follows, where $*$ could be any integer since that
e-class is not used by the term:
$\begin{array}[]{cccccccc}E_{1},E_{2}&E_{3}&E_{4}&E_{5}&E_{6}&E_{7}&E_{8}&E_{9}\\\
0&1&0&*&*&0&0&*\end{array}$
The root e-classes $E_{1}$ and $E_{2}$ share a single integer $0$, meaning
that the genome chooses the $0$th e-node _across both_ e-classes, and that it
uses the first of the two design variants. Since this encoding boils down to a
list of integers, which is valid as long as each integer corresponds to an
e-node in that e-class, we can use simple mutation and single-point crossover
operations.
A term does not completely determine a fabrication plan; it only specifies the
arrangement. We need to additionally compute the cutting order for a given
term to define a solution $s$ and then evaluate the fabrication costs. We
observe that the material cost does not depend on the cutting order and that
precision and fabrication costs strongly correlate once the arrangement is
fixed. This is not surprising since cutting orders that minimize set-ups will
jointly reduce time and precision error. Given this observation, we can
compute two solutions for each term, using two single-objective optimizations
for the cutting order: one that minimizes precision error, and the other
fabrication time.
We use two strategies to speed up these optimizations: (1) storing computed
cutting orders in atomic e-nodes that will be shared across many terms and (2)
a branch and bound technique. The optimization works as follows. Given a term,
we first compute the optimal plans for the atomic e-nodes that have not been
previously optimized. For each such e-node, we try to generate at most $P$
different orders of cuts, then extract the optimal plans with the method of Wu
et al. [2019]. We use this result to compute an upper and a lower bound for the
term. If the lower bound is not dominated by the Pareto front of all computed
solutions $\mathcal{S}$, we run an optimization that uses the upper bound as a
starting point (see Section 1.4 of the supplemental material for details).
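The dominance test at the heart of this branch-and-bound step can be sketched as follows (a hypothetical helper; the per-objective lower-bound vector is assumed given):

```python
def dominates(fa, fb):
    """fa dominates fb if it is no worse everywhere and better somewhere."""
    return all(a <= b for a, b in zip(fa, fb)) and \
           any(a < b for a, b in zip(fa, fb))

def worth_optimizing(lower_bound, solutions):
    """Skip the term only if even its optimistic lower-bound cost vector
    is dominated by an already-computed solution."""
    return not any(dominates(s, lower_bound) for s in solutions)
```

Because the lower bound is optimistic, pruning a term whose bound is dominated cannot discard any Pareto-optimal solution.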
We again terminate the algorithm if there is no hypervolume improvement within
$t_{p}$ iterations, or if we exceed $mt_{p}$ iterations. In our experiments,
we set $t_{p}=20$ and $mt_{p}=200$, and set the probabilities of crossover
($mc_{p}$) and mutation ($mm_{p}$) to $0.95$ and $0.8$, respectively.
#### 4.3.5. BOP E-graph Contraction
As the algorithm proceeds, BOP E-graph contraction keeps the data structure
from growing too large. To contract the BOP E-graph, we search for e-classes
that represent bags of parts that have been sufficiently explored by the
algorithm but are not present in Pareto-optimal designs. This indicates that
we have already discovered the best way to fabricate these bags of parts but
they still do not contribute to Pareto optimal solutions; these e-classes are
then deleted.
To measure how much an e-class has been explored, we first compute how many
variations of fabrication arrangements have been encoded in the BOP E-graph.
This number is stored on the e-graph and updated after each expansion step to
ensure consistency following contraction steps. The exploration score,
$E_{score}$,
is then defined as this value divided by the number of possible fabrication
arrangements for an e-class, which we approximate by the number of parts in
the e-class multiplied by the number of orientations of each part that can be
assigned to the stock lumber.
The impact of an e-class, $I_{score}$, is measured based on how often it is
used in the set of solutions in the current Pareto front. We use the
assignment of solutions $s$ to layers determined by the non-dominated sorting
(Section 3.1) to compute $I_{score}$ for a given e-class. We initialize
$I_{score}$ to $0$ and increment it by $10^{M-l}$ every time this e-class is
used in a solution from layer $l$, where $M$ is the total number of valid
layers.
We normalize all computed exploration and impact scores to be between zero and
one and then assign the following pruning score to each e-class:
$P_{score}=w\cdot I_{score}+(1-w)\cdot(1-E_{score}),w\in[0.0,1.0]$
where the weight $w$ is chosen to trade-off between exploration and impact. If
the $P_{score}$ is smaller than the pruning rate, $P_{rate}$, the e-class is
removed along with any e-nodes pointing to this e-class (i.e. parent e-nodes).
We set $w$ and $P_{rate}$ to $0.7$ and $0.3$ in our implementation.
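Putting the two scores together, the pruning rule can be sketched as follows (our illustration; both scores are assumed already normalized to $[0,1]$):

```python
def pruning_score(i_score, e_score, w=0.7):
    """P_score = w * I_score + (1 - w) * (1 - E_score)."""
    return w * i_score + (1 - w) * (1 - e_score)

def eclasses_to_prune(scores, p_rate=0.3):
    """E-classes scoring below the pruning rate are removed, together with
    any parent e-nodes that point to them. `scores` maps e-class ids to
    (I_score, E_score) pairs."""
    return [c for c, (i, e) in scores.items()
            if pruning_score(i, e) < p_rate]
```

An e-class that is fully explored ($E_{score}=1$) but never appears in Pareto-optimal solutions ($I_{score}=0$) scores $0$ and is pruned; a high-impact e-class is kept regardless of how much it has been explored.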
#### 4.3.6. BOP E-graph Expansion
We expand the BOP E-graph by first generating new design variations and then
by generating fabrication arrangements for both the existing and the newly
generated designs.
We generate new design variations using a single step of a genetic algorithm
that operates over the design space. The probability of crossover ($mc_{d}$)
and mutation ($mm_{d}$) are set to be $0.95$, $0.8$ respectively. We select
the parent design variations from $\mathcal{S}$ based on the non-dominated
sorting technique (Section 3.1). Since many solutions in $\mathcal{S}$ can
correspond to the same design, we assign designs to the lowest layer that
includes that design. We then generate new design variations with crossover
and mutation operations. We use an integer vector encoding for each design.
This time, the vector indexes the joint variations, e.g., for the designs
shown in Figure 4, $d_{1}=[0,2,1,0]$ and $d_{2}=[1,0,0,2]$. We obtain
$K_{m}\cdot K_{d}$ design variations by applying the single-step genetic
algorithm $K_{m}$ times. Then we apply the same heuristic used during
initialization (Section 4.3.2), selecting the top $K_{nd}$ designs,
$K_{nd}\in[0,K_{d}]$. Finally, the resulting $K_{nd}$ designs are added to the
BOP E-graph. We set $K_{m}=10$ in our implementation.
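One step of this genetic search over integer-encoded designs can be sketched as follows. This is an illustrative sketch under our own assumptions (single-point crossover, per-gene re-sampling mutation, and the names `num_variations` and `ga_step`); only the encoding and the rates $mc_{d}$, $mm_{d}$ come from the text.

```python
# Sketch of one genetic-algorithm step over integer-encoded designs
# (Section 4.3.6): each design is a vector indexing one variation per joint,
# e.g. [0, 2, 1, 0]. num_variations[j] = number of variations for joint j.
import random

def crossover(d1, d2, rng):
    """Single-point crossover between two parent designs."""
    cut = rng.randrange(1, len(d1))
    return d1[:cut] + d2[cut:], d2[:cut] + d1[cut:]

def mutate(d, num_variations, rng, mm_d=0.8):
    """Re-sample each joint's variation index with probability mm_d."""
    return [rng.randrange(num_variations[j]) if rng.random() < mm_d else v
            for j, v in enumerate(d)]

def ga_step(parents, num_variations, rng, mc_d=0.95, mm_d=0.8):
    """Produce one generation of children from paired parents."""
    children = []
    for d1, d2 in zip(parents[::2], parents[1::2]):
        if rng.random() < mc_d:
            d1, d2 = crossover(d1, d2, rng)
        children += [mutate(d1, num_variations, rng, mm_d),
                     mutate(d2, num_variations, rng, mm_d)]
    return children
```

Applying `ga_step` $K_{m}$ times to the selected parents yields the $K_{m}\cdot K_{d}$ candidate variations described above.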
We generate fabrication arrangements for each of the new design variations
using the algorithm described in Section 4.3.3, and they are added to the BOP
E-graph maintaining the maximal sharing property. We further generate
fabrication arrangements for existing design variations, using a scoring
function similar to the one used during contraction. This is done in two
steps. First, we select root e-classes to expand based only on their impact
score; namely, we take the top $K_{d}$ root e-classes using non-dominated
sorting. We then proceed to generate $K_{f}\times K_{d}$ fabrication
arrangements using the algorithm described in Section 4.3.3. However, instead
of generating the same number of fabrication-arrangement variations for every
selected root e-class, the number is adapted to each e-class’s pruning score
$P_{score}$ (as defined in Section 4.3.5).
Model | $n_{p}$ | #C | #CV | $|\mathcal{D}|$ | Model | $n_{p}$ | #C | #CV | $|\mathcal{D}|$
---|---|---|---|---|---|---|---|---|---
Frame | 4 | 4 | 22 | 13 | A-Chair | 18 | 3 | 6 | 4
L-Frame | 6 | 8 | 16 | 65 | F-Pot | 8 | 1 | 4 | 4
A-Bookcase | 12 | 6 | 16 | 192 | Z-Table | 15 | 6 | 16 | 63
S-Chair | 14 | 14 | 32 | 66438 | Loom | 18 | 4 | 10 | 36
Table | 12 | 10 | 24 | 1140 | J-Gym | 23 | 8 | 16 | 54
F-Cube | 12 | 8 | 23 | 5 | D-Chair | 17 | 10 | 22 | 2280
Window | 12 | 16 | 32 | 10463 | Bookcase | 15 | 22 | 44 | 65536
Bench | 29 | 6 | 14 | 57 | Dresser | 10 | 10 | 25 | 480
Table 1. Statistics for each input model, showing the complexity in number of
parts ($n_{p}$), number of connectors (#C), number of connecting variations
(#CV), and number of design variations that define unique bags of parts
($|\mathcal{D}|$).
Figure 7. Models used for all experiments in Section 5. Brown indicates models
made only from 1D sequential cuts of lumber, gray indicates models made only
from 2D partitioning of sheets, and orange indicates models using both 1D
sequential cuts of lumber and 2D partitioning of sheets.
## 5\. Results and Discussion
In order to gauge the utility of our tool, we want to answer the following
questions:
1. How much does searching the design space together with the fabrication
space improve generated fabrication plans?
2. How does our tool compare with domain experts who are asked to consider
different design variations?
3. How does our tool’s performance compare to a baseline naïve approach?
### 5.1. Models
We evaluate our method using the examples in Figure 7. Statistics for each
model are shown in Table 1. These models vary widely in visual complexity and
materials used: some are made from 1D sequential cuts on lumber, while others
require 2D partitioning of sheets. Note that the complexity of the search is
not tied to the visual complexity of the model; rather, it is determined by
the number of connecting variations and the number of arrangements, which
define the size of the design space and of the space of fabrication plans,
respectively. For example, the Adirondack chair is more visually complex than
the simple chair in Figure 7, but because it has about 5000 times fewer design
variations, it converges much more quickly. The Art bookcase, Dining room
chair, F-Pot, Z-Table, Bench, and Adirondack chair models are taken from [Wu
et al., 2019].
### 5.2. Running environment
The parameters used in our ICEE algorithm are scaled based on the complexity
of each model, measured in terms of the number of parts $n_{p}$ and the size
of the design space $|\mathcal{D}|$. We further introduce a single tuning
parameter $\alpha\in[0.0,1.0]$, which allows us to trade-off between exploring
more design variations (smaller values of $\alpha$) versus exploring more
fabrication plans for given design variations (larger values). For all our
experiments, we set $\alpha$ to the default value of 0.75. The ICEE parameters
are set as follows: $K_{d}=2^{\lceil{\log_{10}|\mathcal{D}|}\rceil}$,
$N_{pop}=4\cdot K_{d}$, $K_{f}=\beta\cdot n_{p}$,
$K_{nd}=\lfloor(1.0-\alpha)\cdot K_{d}\rfloor$, and $P=2\cdot(\beta-2)$,
$t_{d}=10$, $mt_{d}=200$, $mc_{d}=0.95$, $mm_{d}=0.80$, $T=4$ hours,
$t_{p}=20$, $mt_{p}=200$, $mc_{p}=0.95$, $mm_{p}=0.80$, $w=0.7$,
$P_{rate}=0.3$ and $K_{m}=10$, where $\beta=\lfloor
44\cdot\alpha^{7}+2\rfloor$.
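The parameter scaling above can be written out as follows. This is an illustrative sketch; the function name and dictionary keys are our own, and only the formulas come from the text.

```python
# Sketch of the ICEE parameter scaling: K_d, N_pop, K_f, K_nd, and P are
# derived from the part count n_p, the design-space size |D|, and the
# breadth/depth trade-off alpha (default 0.75).
import math

def icee_params(n_p, design_space_size, alpha=0.75):
    k_d = 2 ** math.ceil(math.log10(design_space_size))
    beta = math.floor(44 * alpha ** 7 + 2)
    return {
        "K_d": k_d,                                  # explored designs
        "N_pop": 4 * k_d,                            # GA population size
        "K_f": beta * n_p,                           # fabrication arrangements
        "K_nd": math.floor((1.0 - alpha) * k_d),     # new designs per expansion
        "P": 2 * (beta - 2),
    }
```

For the Window model ($n_{p}=12$, $|\mathcal{D}|=10463$) and the default $\alpha=0.75$, this gives $\beta=7$, $K_{d}=32$, $N_{pop}=128$, $K_{nd}=8$, and $P=10$.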
We report the running times of our algorithm in Table 2 for the models in
Figure 7. These times were measured on a Mac laptop with 16 GB of RAM and a
2.3 GHz 8-core Intel Core i9 CPU. Further discussion of the running time is in
the supplemental material.
Model | #O | #Iter | #EDV | #Arr | #PDV | CEt(m) | Et(m) | Total(m)
---|---|---|---|---|---|---|---|---
Frame | 2 | 11 | 8 | 181 | 3 | 0.7 | 2.1 | 2.8
L-Frame | 2 | 24 | 19 | 2818 | 3 | 2.1 | 6.1 | 8.2
A-Bookcase | 3 | 25 | 25 | 28700 | 3 | 20.5 | 228.6 | 249.0
S-Chair | 2 | 15 | 136 | 35656 | 6 | 27.6 | 122.0 | 149.6
Table | 2 | 18 | 50 | 9346 | 9 | 5.9 | 34.9 | 40.8
F-Cube | 2 | 23 | 4 | 3499 | 3 | 1.4 | 4.0 | 5.5
Window | 2 | 23 | 116 | 81026 | 4 | 32.8 | 98.9 | 131.7
Bench | 2 | 25 | 16 | 37436 | 3 | 30.3 | 215.1 | 245.4
A-Chair | 2 | 28 | 4 | 14440 | 3 | 3.1 | 9.6 | 12.7
F-Pot | 3 | 14 | 3 | 185 | 2 | 1.7 | 13.0 | 14.7
Z-Table | 3 | 70 | 41 | 336091 | 6 | 17.1 | 71.1 | 88.2
Loom | 3 | 21 | 10 | 1812 | 5 | 3.1 | 74.6 | 77.7
J-Gym | 3 | 46 | 18 | 286239 | 3 | 37.0 | 72.0 | 109.0
D-Chair | 2 | 18 | 40 | 15054 | 7 | 27.7 | 228.8 | 256.5
Bookcase | 3 | 15 | 32 | 34756 | 11 | 39.4 | 336.8 | 376.3
Dresser | 3 | 20 | 44 | 22209 | 5 | 14.1 | 241.2 | 255.4
Table 2. Statistics and running times for our ICEE algorithm. For each model,
we first report the number of target objectives (#O), where 2 indicates
material usage ($f_{c}$) and fabrication time ($f_{t}$), and 3 indicates all
three objectives, including cutting precision ($f_{p}$). We also report the
number of iterations (#Iter), explored design variations (#EDV) and
arrangements (#Arr), and Pareto-front design variations (#PDV). We report the
running time of BOP E-graph contraction and expansion (CEt) and Pareto-front
extraction (Et), as well as the total time. All running times are in minutes.
### 5.3. Benefits of Design Exploration
Figure 8. Pareto fronts computed from our pipeline with design optimization as
colored dots. Each color corresponds to a different design. The gray dots
indicate the Pareto fronts of all explored design variations. These are
compared against Pareto fronts computed without design optimization
(fabrication optimization only, using the original model as the input design)
as squares, and expert fabrication plans as diamonds. Often, fabrication plans
from a design variant dominate those generated from the input design. For the
objective metrics, material usage ($f_{c}$) is measured in dollars, cutting
precision ($f_{p}$) in inches, and fabrication time ($f_{t}$) in minutes. Some
(design, fabrication plan) pairs indicated with capital letters are visualized
in Figure 9 and Figure 10.
To demonstrate the benefit of simultaneous exploration of the design variation
and fabrication plan spaces, we compare our tool against optimizing the
fabrication plan for a single design.
Figure 8 shows the comparison between our pipeline and the Carpentry Compiler
pipeline [Wu et al., 2019] which only considers a single design. The parameter
setting of their pipeline and additional results can be found in Section 2 of
the supplemental material. We explore the trade-offs for fabrication time and
material usage for the designs where all cuts can be executed with standard
setups (these are considered to have no precision error) and include a third
objective of precision error for the models where that is not possible. The
Pareto fronts across designs generated by our tool cover a larger space and
many of our solutions dominate those from previous work.
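The dominance relation used throughout these comparisons can be sketched as follows. This is an illustrative Python sketch under our own naming; each point is a tuple of objective values, all to be minimized.

```python
# Sketch of Pareto dominance and Pareto-front filtering over objective
# tuples (e.g. (material cost, fabrication time)), all minimized.

def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

A solution set "covers a larger space" when its front dominates more of the competing set's points under this relation.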
Exploring design variations yields better coverage of the Pareto front, which
enables finding better trade-offs. These trade-offs are lower-cost overall,
cover more of the extrema, and are more densely available. For example, a
hobbyist may want to minimize material cost independent of time, because the
manufacturing process is enjoyable and they consider it to have no cost.
Material cost is difficult to reduce, but our exploration of design variations
enables solutions that cut material cost by 7% for the Loom, 7% for the Jungle
Gym, 15% for the Frame, and 25% for the Bookcase. On the other hand, someone
with free access to reclaimed wood may only care about the total manufacturing
time. Our approach enables solutions that reduce fabrication time by up to
60%: two models saved between 50% and 60%, three between 30% and 35%, and four
between 20% and 30%. If creating a very precise model is imperative, and a
user would take great care to manufacture it exactly, then for four models we
find solutions that reduce error by 61% to 77%. The detailed data are listed
in Table S5 of the supplemental material.
Some examples don’t lie at the extrema: businesses often need to find a
balance between the cost of materials, time spent on a project, and overall
project quality, and the particular tradeoff will depend on their accounting
needs. Our method enables finding solutions with better tradeoffs. Concretely,
consider a carpenter charging $40/h. When scalarizing our multi-objective
function into this single objective of money, we have several examples where
the lowest cost on our Pareto front is 5-8% cheaper than the lowest cost on
the baseline Pareto front, such as the Z-Table, Flower pot, Jungle Gym,
Dresser, Bookcase, and Art Bookcase. The window and frame have cost savings of
12% and 20%, respectively. Though a cost reduction of several percent might
appear insignificant, in production at scale it represents thousands of
dollars potentially saved. This scalarization function is just one way for a
user to judge the tradeoff between different aspects of the multi-objective
function. In reality, the user probably has some notion of what tradeoff would
be ideal for their purposes, and will use the Pareto front to represent the
full space of options and make an informed choice. This scalarized tradeoff is
further examined in Table S7 of the supplemental material.
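The $40/h scalarization can be sketched as follows; this is an illustrative sketch with our own function names, assuming Pareto-front points of the form (material cost in dollars, fabrication time in minutes).

```python
# Sketch of scalarizing a (material $, time min) Pareto front into a single
# dollar cost at a given hourly labor rate, then picking the cheapest plan.

def scalarize(material_cost, time_minutes, hourly_rate=40.0):
    """Total dollar cost: materials plus labor at hourly_rate."""
    return material_cost + hourly_rate * time_minutes / 60.0

def cheapest(front, hourly_rate=40.0):
    """Pick the Pareto-front point with the lowest scalarized cost."""
    return min(front, key=lambda p: scalarize(p[0], p[1], hourly_rate))
```

Different hourly rates select different points on the same front, which is why the full front, rather than a single optimum, is the useful output.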
Figure 9. Two examples where searching the design space revealed fabrication
plans that completely dominated the fabrication plans generated for the input
design. With the design variations, our pipeline could search for a design
variation of the frame which turns all angled cutting to vertical. With Design
B, we find a fabrication plan which takes less time than the least time-
consuming plan A of the input design. Similarly, we show two fabrication plans
of the A-Bookcase model where the design and fabrication plan B dominates the
input design A. The fabrication costs are indicated in the figure with the
order of material cost, precision error, and fabrication time. The cutting
orders are labeled with colored dots and numbers, where colors indicate
selected cutting tools, and stacked cuts are labeled with the same number.
Figure 10. Two examples where exploring different designs leads to a wider
diversity of plans, where each tradeoff on the Pareto front is only possible
because of the underlying design. The window provides a simpler example.
Design A is very uniform, with only three distinct parts. This design makes it
easy to save on fabrication time because we can stack the cuts across
different stocks. Design B features more varied cuts, unlike A, where each of
the sides was the same length. This irregularity allows all the parts to be
effectively rearranged onto just two pieces of stock. Regular pieces would not
fit as nicely and result in wastage. Material cost is very low, but because of
the tight packing, much more time is needed to make each individual cut. The
bookcase example showcases how some unintuitive design decisions lead to cost
savings. In this example, Design A’s two long, identical side pieces mean more
opportunities for stacking, of which the fabrication plan takes full
advantage. This design enables a very low time cost, but uses a lot of
material. Design B’s left side is broken up by the shelves, and without a
second long piece, it is possible to pack all the pieces onto a single piece
of lumber. Here, the material usage is economical, but the carpenter must take
time to cut pieces from a complex layout.
Figure 9 highlights how exploring design variations generates fabrication
plans that can dominate those generated from no design variation exploration.
Figure 10 then demonstrates how design variations enable diverse tradeoffs
that save on different costs.
### 5.4. Comparison with Experts
For each model, we asked carpentry experts to generate design variations and
fabrication plans. The resulting points are plotted as diamonds in Figure 8.
Since experts produce each solution by hand, their Pareto fronts contain many
fewer solutions than those of our tool. For 14 of 16 models (except the Loom and
Dresser models), solutions generated by our tool dominate the expert
solutions. This suggests that, generally, although expert plans seem sensible,
our tool generates better output thanks to its ability to generate more design
variations and fabrication plans, including potentially unintuitive packing or
cutting orders, and evaluate them much more quickly than a human.
### 5.5. Performance Evaluation
To test whether the BOP E-graph’s sharing is important for our tool’s
performance, we compare against a nested-optimization pipeline built on the
Carpentry Compiler [Wu et al., 2019]. The baseline approach invokes the
Carpentry Compiler pipeline on each design variant that our tool explores, and
then it takes the Pareto front across all design variations.
Model | $|\mathcal{D}|$ | #EDV | Ours: Time (min) | Baseline: Time (min)
---|---|---|---|---
Frame | 13 | 8 | 2.8 | 6.5
Jungle Gym | 54 | 18 | 109.0 | 761.2
Long frame | 65 | 19 | 8.2 | 59.7
Table | 1140 | 59 | 40.8 | 612.8
Window | 10463 | 116 | 131.7 | 2050.0
Table 3. Results of the performance validation experiment. “Ours” indicates
the ICEE algorithm of this paper. “Baseline” indicates extracting the Pareto
front fabrication plans for each design variation explored by our method
independently with the Carpentry Compiler pipeline [Wu et al., 2019]. The size
of design space $|\mathcal{D}|$ and the number of explored design variations
(EDV) are also reported. Our method and the baseline method produce two Pareto
fronts that are indistinguishable. This comparison is not shown here directly,
since comparisons of hypervolume can be non-intuitive due to the scale and how
hypervolume is measured; please refer to the supplemental material (Figure
S1), which contains plots comparing the results of the two methods. Even with
identical results, our time improvement is significant.
We choose five models of varying complexity to evaluate performance and show
results in Table 3. We tuned the parameters of the baseline method so we could
achieve results that were as close as possible, if not qualitatively the same
(when the baseline method ran to completion). Full results are available in
the supplemental material (Table S6 and Figure S1). This indicates that our
co-optimization approach yields similar results to the nested approach over
the same space. When it comes to performance, our approach is about one order
of magnitude faster. We attribute this speedup to the sharing captured by the
BOP E-graph; we only had to evaluate common sub-designs and sub-fabrication-
plans one time, whereas the baseline shared no work across its different
invocations for each design variant.
### 5.6. Fabricated Results
We validated our pipeline by physically constructing some of the models
according to the design variation-fabrication plan pairs generated by our
tool. Figure 11 shows the results.
Figure 11. Fabrication results of two window variations. The different designs
and fabrication plans trade off fabrication time and material usage.
## 6\. Discussion
Figure 12. A loom model with mixed material where two kinds of wood (spruce
plywood and medium density fiberboard sheet) and one kind of metal (aluminum
sheet) are assigned to each part.
### 6.1. Multi-Materials and Cutting Tools
Mechanical or aesthetic reasons might motivate designers to combine multiple
materials, such as different types of wood, or wood and metal, in one model.
Adding new materials to our approach involves almost no implementation
overhead: we must select which cutting tools are appropriate and incorporate
the material’s costs into our metrics. Then, we simply need to indicate which
material a given part is made of, exactly the same way we designate whether
parts belong on 1D lumber or 2D stock. As shown in Figure 12, we have created
a mixed-material model to showcase our ability to handle this added design
complexity. The loom is made of two different types of wood as well as one
kind of metal. All parts are optimized in the same way and treated identically
to the base implementation. We describe the cost metrics for different materials
in the supplemental material (Section 1.3.1).
Figure 13. Pareto fronts computed by our pipeline for the Frame model
with three objective functions, material usage $f_{c}$, fabrication time
$f_{t}$ and stability performance. The physical stability of each design
variation is simulated with Abaqus/CAE 2021, measured with the maximal
displacement (Max U). All displacements are in inches. In this figure, (a)
visualizes the displacement in one direction, (b) visualizes the displacement
of the same design in a different direction, and (c) plots the Pareto fronts
computed by our pipeline, with three selected design variations.
### 6.2. Objectives
Our method also naturally extends to other objective functions. We show one
example in Figure 13, where we consider stability as an additional objective
which we calculate with physical simulation. Notably, stability is invariant
to the fabrication plan, and depends solely on the design itself, so it only
needs to be measured once, at the root node. However, two designs can have
different stability costs but share the same bag of parts. Figures 13(a) and
(b) exhibit one bag of parts that captures two different designs.
In this example, since the other metrics (time and material cost) do not
exhibit this dependency, we can simply assign to the root nodes the stability
cost of the best-performing design that corresponds to that bag of parts; thus
the cost for any given bag of parts is the best cost of any design that is
represented by that bag of parts. Note that fabrication plans depend solely on
the bag of parts. In general, if we want to use more than one metric of this
kind (a metric that depends on the design and is not completely determined by
a term in the e-graph), we would need to compute the different trade-offs for
the variations during extraction, as was done with cutting order and
precision, described in Section 4.3.4.
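The assignment rule above can be sketched as follows: since several designs can map to one bag of parts, each bag receives the best (lowest) stability cost among its designs. This is an illustrative sketch; the container names are our own.

```python
# Sketch of assigning a design-dependent metric (e.g. stability cost) to
# root e-classes keyed by bag of parts: each bag gets the best (minimum)
# cost over all designs that share it.

def bag_costs(designs):
    """designs: iterable of (bag_of_parts_key, stability_cost) pairs."""
    best = {}
    for bag, cost in designs:
        best[bag] = min(cost, best.get(bag, float("inf")))
    return best
```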
### 6.3. Convergence
While our results show that the approach significantly reduces fabrication
cost in practice, we cannot guarantee that the plans we output are on the
globally optimal Pareto front. Indeed, we do not anticipate that any
alternative approach could provide such strong guarantees given the
alternative approach would be able to have such strong guarantees given the
inherent complexity of the problem. This convergence limitation impacts our
method in three different ways.
##### Parameter Tuning
Due to limitations in exploring the full combinatorial space, parameters of
our search algorithm may influence convergence. Because the key aspect of ICEE
is simultaneously searching “broad” (design variations) and “deep”
(fabrication plans for various designs), we expose the $\alpha$ parameter that
trades off depth against breadth during the search. Exposing this single
parameter enables us to improve performance in special circumstances. For
example, when not much can be gained from design variations, a larger $\alpha$
enables searching more deeply on a particular design, finding better solutions.
All the results shown in this paper use the default value for $\alpha$ that we
have found effective in practice.
##### Comparison with Wu et al. [2019]
The fundamental difference between our work and [Wu et al., 2019] is that
incorporating more design variations increases the design space, enabling us
to find better performing results. Since the search space of this prior work
is a subset of the search space we explore, our results should be strictly
better. However, since neither method can ensure the results lie on the true
Pareto front due to limitations in convergence, tuning parameters of both
approaches may influence this result. An example of this limitation is shown
in the A-Chair example in Figure 7. We show in the supplemental material
(Section 2.4) how tuning $\alpha$ to explore more deeply improves this result,
and we also report experiments for tuning the four parameters from [Wu et al.,
2019].
##### Increasing the Design Space
A final implication of the intractable search is that it is possible to
achieve worse results by increasing the design space in special circumstances.
We discuss in the supplemental material (Section 2.4) an example where we make
the design space 145 times larger by including variations that do not benefit
the search.
### 6.4. Limitations and Future Work
Our current approach encodes only discrete design variants in the BOP E-graph.
An interesting direction for future work would be to support continuous
variations in the design space, which could provide a larger space of
fabrication plans to explore. However, supporting continuous design variants
in an e-graph would require designing a new metric for comparing two design
variants for equivalence. This is challenging because e-graphs heavily exploit
transitivity, so any error in the metric could lead to arbitrarily different
designs being considered “equivalent”.
Several steps of our algorithm could also be parallelized for performance
(e.g., generating design variants); we leave this as an extension for future work.
We are also keen to explore broader applications of the ICEE strategy for
integrating feedback-directed search into other e-graph-based optimization
techniques. Past work applying e-graphs for design optimization in CAD [Nandi
et al., 2020] and for improving accuracy in floating-point code [Panchekha et
al., 2015] has relied on ad hoc techniques for controlling the growth of the
e-graph, e.g., by carefully tuning the rules used during rewrite-based
optimization. We hope to explore whether ICEE can help with optimization in
such domains by focusing the search on more-profitable candidates and easing
implementation effort by reducing the need for experts to carefully tune
rewrite rules.
The most time-consuming part of our ICEE algorithm lies in the Pareto front
extraction phase. A pruning strategy with learning-based methods for
predicting the objective metrics of an arrangement might be an interesting and
valuable area of research.
Another direction we are eager to explore is accounting for other factors in
the Pareto front. Currently, our technique finds a design variant and
fabrication plan that optimizes fabrication time, material cost, and
precision. Other interesting factors that can guide the search include ease of
assembly and strength of the part.
## 7\. Conclusion
We have presented a new approach to co-optimizing model design variations and
their fabrication plans. Our approach relies on the insight that fabrication
plans across design variants will share similar structure. We capture this
sharing with the BOP E-graph data structure that considers fabrication plans
equivalent if they produce the same bag of parts. The BOP E-graph also lets us
guide the search toward profitable design variants and fabrication plans with
a technique we call ICEE (Iterative Contraction and Expansion of E-graphs)
that may be useful for applications of e-graphs in other domains. Results
generated by our tool compare favorably against both expert-generated designs
and a baseline built using prior work, indicating that the sharing captured by
the BOP E-graph is essential to efficiently exploring the large, combined
space of design variants and fabrication plans.
## References
* Alderighi et al. [2019] Thomas Alderighi, Luigi Malomo, Daniela Giorgi, Bernd Bickel, Paolo Cignoni, and Nico Pietroni. 2019\. Volume-aware design of composite molds. _ACM Transactions on Graphics_ (2019).
* Auger et al. [2009] Anne Auger, Johannes Bader, Dimo Brockhoff, and Eckart Zitzler. 2009. Theory of the hypervolume indicator: optimal $\mu$-distributions and the choice of the reference point. In _Proceedings of the tenth ACM SIGEVO workshop on Foundations of genetic algorithms_. 87–102.
* Bickel et al. [2018] Bernd Bickel, Paolo Cignoni, Luigi Malomo, and Nico Pietroni. 2018. State of the Art on Stylized Fabrication. _Computer Graphics Forum_ 37, 6 (2018), 325–342. https://doi.org/10.1111/cgf.13327
* Cignoni et al. [2014] Paolo Cignoni, Nico Pietroni, Luigi Malomo, and Roberto Scopigno. 2014. Field-aligned mesh joinery. _ACM Transactions on Graphics (TOG)_ 33, 1 (2014), 1–12.
* Deb [2014] Kalyanmoy Deb. 2014\. Multi-objective optimization. In _Search methodologies_. Springer, 403–449.
* Deb and Jain [2013] Kalyanmoy Deb and Himanshu Jain. 2013. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints. _IEEE transactions on evolutionary computation_ 18, 4 (2013), 577–601.
* Deb et al. [2002] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan. 2002\. A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. _Trans. Evol. Comp_ 6, 2 (April 2002), 182–197.
* Deb and Sinha [2009] Kalyanmoy Deb and Ankur Sinha. 2009. Solving bilevel multi-objective optimization problems using evolutionary algorithms. In _International conference on evolutionary multi-criterion optimization_. Springer, 110–124.
* Dempe [2018] Stephan Dempe. 2018\. _Bilevel optimization: theory, algorithms and applications_. TU Bergakademie Freiberg, Fakultät für Mathematik und Informatik.
* Duenser et al. [2020] Simon Duenser, Roi Poranne, Bernhard Thomaszewski, and Stelian Coros. 2020. RoboCut: hot-wire cutting with robot-controlled flexible rods. _ACM Transactions on Graphics (TOG)_ 39, 4 (2020), 98–1.
* Eichfelder [2010] Gabriele Eichfelder. 2010\. Multiobjective bilevel optimization. _Mathematical Programming_ 123, 2 (2010), 419–449.
* Etienne et al. [2019] Jimmy Etienne, Nicolas Ray, Daniele Panozzo, Samuel Hornus, Charlie CL Wang, Jonàs Martínez, Sara McMains, Marc Alexa, Brian Wyvill, and Sylvain Lefebvre. 2019\. CurviSlicer: slightly curved slicing for 3-axis printers. _ACM Transactions on Graphics (TOG)_ 38, 4 (2019), 1–11.
* Fu et al. [2015] Chi-Wing Fu, Peng Song, Xiaoqi Yan, Lee Wei Yang, Pradeep Kumar Jayaraman, and Daniel Cohen-Or. 2015. Computational Interlocking Furniture Assembly. _ACM Trans. Graph._ 34, 4, Article 91 (July 2015), 11 pages. https://doi.org/10.1145/2766892
* Garg et al. [2016] Akash Garg, Alec Jacobson, and Eitan Grinspun. 2016\. Computational design of reconfigurables. _ACM Trans. Graph._ 35, 4 (2016), 90–1.
* Gavriil et al. [2020] Konstantinos Gavriil, Ruslan Guseinov, Jesús Pérez, Davide Pellis, Paul Henderson, Florian Rist, Helmut Pottmann, and Bernd Bickel. 2020. Computational design of cold bent glass façades. _ACM Transactions on Graphics (TOG)_ 39, 6 (2020), 1–16.
* Halter and Mostaghim [2006] Werner Halter and Sanaz Mostaghim. 2006. Bilevel optimization of multi-component chemical systems using particle swarm optimization. In _2006 IEEE International Conference on Evolutionary Computation_. IEEE, 1240–1247.
* Hildebrand et al. [2013] Kristian Hildebrand, Bernd Bickel, and Marc Alexa. 2013\. Orthogonal slicing for additive manufacturing. _Computers & Graphics_ 37, 6 (2013), 669–675.
* Joshi et al. [2002] Rajeev Joshi, Greg Nelson, and Keith Randall. 2002\. Denali: A Goal-directed Superoptimizer. _SIGPLAN Not._ 37, 5 (May 2002), 304–314. https://doi.org/10.1145/543552.512566
* Koo et al. [2017] Bongjin Koo, Jean Hergel, Sylvain Lefebvre, and Niloy J. Mitra. 2017\. Towards Zero-Waste Furniture Design. _IEEE Transactions on Visualization and Computer Graphics_ 23, 12 (Dec 2017), 2627–2640. https://doi.org/10.1109/TVCG.2016.2633519
* Koo et al. [2014] Bongjin Koo, Wilmot Li, JiaXian Yao, Maneesh Agrawala, and Niloy J Mitra. 2014. Creating works-like prototypes of mechanical objects. _ACM Transactions on Graphics_ 33, 6 (2014).
* Lau et al. [2011] Manfred Lau, Akira Ohgawara, Jun Mitani, and Takeo Igarashi. 2011. Converting 3D Furniture Models to Fabricatable Parts and Connectors. In _ACM SIGGRAPH 2011 Papers_ _(SIGGRAPH ’11)_. ACM, New York, NY, USA, Article 85, 6 pages. https://doi.org/10.1145/1964921.1964980
* Leen et al. [2019] Danny Leen, Tom Veuskens, Kris Luyten, and Raf Ramakers. 2019\. JigFab: Computational Fabrication of Constraints to Facilitate Woodworking with Power Tools. In _Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems_ _(CHI ’19)_. ACM, New York, NY, USA, Article 156, 12 pages. https://doi.org/10.1145/3290605.3300386
* Lu et al. [2016] Jie Lu, Jialin Han, Yaoguang Hu, and Guangquan Zhang. 2016\. Multilevel decision-making: A survey. _Information Sciences_ 346 (2016), 463–487.
* Mahdavi-Amiri et al. [2020] Ali Mahdavi-Amiri, Fenggen Yu, Haisen Zhao, Adriana Schulz, and Hao Zhang. 2020. VDAC: volume decompose-and-carve for subtractive manufacturing. _ACM Transactions on Graphics (TOG)_ 39, 6 (2020), 1–15.
* Maia et al. [2019] Henrique Teles Maia, Dingzeyu Li, Yuan Yang, and Changxi Zheng. 2019. LayerCode: optical barcodes for 3D printed shapes. _ACM Transactions on Graphics (TOG)_ 38, 4 (2019), 1–14.
# Anomalous photoluminescence emission of monolayer MoS2-QD heterostructure
on hBN.
H L Pradeepa [email protected] Department of Physics, Indian Institute
of Science, Bangalore 560012, India
###### Abstract
Monolayer transition metal dichalcogenides (2D) and zero-dimensional quantum
dots (QDs) are known to have unique optical properties in their individual
limits, such as a high exciton binding energy. The combination of these two
systems is of particular interest for understanding various aspects of energy
transfer, charge transfer, and exciton dynamics. In this manuscript, we report
anomalous photoluminescence (PL) emission in one such heterostructure,
MoS2-CdSe QD. We observe multiple exciton emission peaks of the
heterostructure on an hBN substrate which are absent on SiO2. Our observations
open up the question of whether the local potential arising from the lattice
mismatch between MoS2 and hBN plays a role in these emission peaks, or whether
the strain field of MoS2 and hBN is the reason for the emergence of multiple
emissions. In addition, the altered quantum potential of the QDs due to the
presence of hBN and MoS2 may also lead to such multiple emissions.
Photoluminescence, interlayer exciton, MoS2, QD
Recent studies of two-dimensional (2D) semiconductors and their hybrid
structures with zero-dimensional (0D) semiconductors have led to the discovery
of many fascinating properties that are absent in their bulk counterparts.Wang
_et al._ (2018); Ross _et al._ (2013); Bera _et al._ (2010); Raja _et al._
(2016) Monolayer MoS2, one such 2D semiconductor, is known to have very
interesting properties, such as the transition from an indirect band gap in
the bulk (1.2 eV) to a direct band gap in the monolayer (1.8 eV) limit, and
high binding energies of excitons (0.4 to 0.6 eV) and trions (30 to 40 meV) at
room temperature.Ramasubramaniam (2012) The PL emission of CdSe QDs can easily
be tuned from 2.4 eV to 1.8 eV,Pradeepa, Bid, and Basu (2020); Haridas _et al._
(2013) whereas the absorption and PL emission of monolayer MoS2 lie
between 2.15 eV and 1.7 eV;Mak _et al._ (2013) this makes CdSe QDs
suitable 0D emitters for forming 2D-0D heterostructures with monolayer MoS2
and studying their exotic properties.Alexeev _et al._ (2019); Jin _et al._ (2019);
Tran _et al._ (2019); Seyler _et al._ (2019) From the fundamental
perspective, interest has focused on the novel aspects of light-matter
interactions that can occur between the two nanoscale materials in the form of
energy- and charge-transfer processes between photoexcited excitons, which can
be generated in one or both layers, while under certain conditions
interlayer or hybrid excitons can also form.Boulesbaa _et
al._ (2016); Alexeev _et al._ (2019)
Here we report a low-temperature PL study of monolayer MoS2 (2D)-CdSe QD (0D)
hybrid structures. We observed multiple exciton emission peaks in the
MoS2-QD heterostructure when hBN is used as the underlying substrate. These
emissions are absent in regions on the SiO2 substrate. Our observation opens
the question of whether the local potential due to the lattice mismatch between
MoS2 and hBN plays a role in these emission peaks, or whether the
strain field of MoS2 and hBN is the reason for the emergence of multiple
emissions. The altered quantum potential of the QDs due to the presence of hBN
and MoS2 may also contribute to the multiple-emission mechanism.
Figure 1: (a) Optical image of h-BN on SiO2. (b) Optical image of MoS2 on h-BN
and on SiO2. (c) Optical image of the QD-MoS2 heterostructure on h-BN and on
SiO2. (d) Room-temperature Raman spectra of monolayer MoS2 on SiO2 and on
h-BN. (e) Schematic of the experimental set-up. The inset shows the band
diagram of QD and MoS2; the valence- and conduction-band energies of both QD
and MoS2 favor the possibility of interlayer excitons.
h-BN flakes were exfoliated on polydimethylsiloxane (PDMS) sheets and then
transferred onto 300 nm SiO2 substrates; Fig. 1(a) shows the optical image of
h-BN transferred on SiO2. MoS2 monolayer flakes were prepared using the
standard exfoliation technique on PDMS sheets. Optical microscopy and Raman
spectroscopy were used to identify the monolayers. The MoS2 monolayers were
then transferred such that part of the MoS2 lies on h-BN and part on SiO2, as
shown in Fig. 1(b). CdSe QDs were synthesized following methods described
earlier.de Mello Donega _et al._ (2003); Qu and Peng (2002) The QD monolayer
was transferred onto MoS2 using the Langmuir-Blodgett (LB)
technique,Collier _et al._ (1997); Dabbousi _et al._ (1994); Heath, Knobler,
and Leff (1997) using an LB trough (Kibron Microtrough G2, Finland). Fig. 1(c)
shows the optical image of the QD-MoS2 heterostructure on h-BN and on SiO2.
Fig. 1(d) shows the Raman spectra of MoS2 on SiO2 and on h-BN at room
temperature. The schematic in Fig. 1(e) shows the vertical cross-sectional
view of the heterostructure. As seen in the band diagram of QD and MoS2,
the valence- and conduction-band energies of both QD and MoS2 favor the
possibility of interlayer excitons.
PL and Raman spectra were collected using a Horiba (LabRam model)
instrument, with a 532 nm continuous-wave (CW) laser to excite the sample,
keeping the laser power at $\sim$ 2 $\mu$W. Signals were collected using a
charge-coupled device (CCD). Gratings of 300 g/mm and 1800 g/mm were used to
collect the PL and the Raman spectra, respectively. A 50x objective (Olympus,
NA 0.45) was used to collect both PL and Raman data. A Montana cryostat
(Cryostation model) mounted on the Horiba system was used to collect the
low-temperature spectra.
Figure 2: Temperature-dependent PL spectra: (a) MoS2 on SiO2; a defect peak is
observed at lower energy at low temperatures. (b) MoS2 on h-BN; the PL
intensity of MoS2 on h-BN is increased compared to MoS2 on SiO2. (c) QD on
SiO2; a broad defect peak is also observed in the QD PL spectra at low
temperatures. (d) QD-MoS2 on SiO2; the inset shows the zoomed spectra of MoS2,
where the B exciton peak overlaps with the QD spectra.
The PL emission spectrum of monolayer MoS2 at the K (K′) point consists of two
peaks, owing to the strong spin-orbit interaction, at around 1.88 eV (the A
exciton) and 2.0 eV (the B exciton). Fig. 2(a) shows the temperature-dependent
PL spectra of MoS2 on SiO2; a defect peak is observed at lower energy at low
temperatures. The A exciton PL is more sensitive than the B exciton PL. The PL
intensity of the A exciton increases with decreasing temperature (T). As we
decrease T, the total PL intensity of the A exciton increases along with a
blue shift; furthermore, this A exciton peak can be deconvoluted into exciton
and trion peaks, and the intensity of the exciton component increases with
decreasing T. This suggests that as T decreases the A exciton peak becomes
more excitonic in nature. The defect-induced peak starts appearing at lower
temperatures, because at low T the carriers have less energy available to
overcome the trapping potential.
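The trapping argument above can be made quantitative with a simple Boltzmann escape factor. The sketch below is illustrative only; the 50 meV trap depth is a hypothetical value, not one reported in this work.

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def escape_probability(temperature_k, trap_depth_ev):
    # Boltzmann factor for a carrier thermally escaping a trap of the
    # given depth; small values mean carriers stay trapped and the
    # defect-related emission channel dominates.
    return math.exp(-trap_depth_ev / (K_B * temperature_k))

# Hypothetical 50 meV trap: escape is suppressed by many orders of
# magnitude at 20 K relative to room temperature, consistent with the
# defect peak appearing only at low T.
p_cold = escape_probability(20.0, 0.050)
p_warm = escape_probability(290.0, 0.050)
```

The steep exponential suppression at 20 K illustrates why the defect-induced peak emerges only in the low-temperature spectra.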
Fig. 2(b) shows the temperature-dependent PL spectra of MoS2 on h-BN; the PL
intensity of MoS2 on h-BN is increased compared to MoS2 on SiO2, possibly due
to substrate-induced doping. Fig. 2(c) shows the temperature-dependent PL
spectra of QD on SiO2. The QD PL intensity increases and the emission energy
blue shifts as T decreases. We observe a broad defect peak at lower energy in
the PL at low temperatures, which is associated with trapping states.
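The blue shift of the emission on cooling is commonly parametrized by the empirical Varshni relation $E(T) = E_0 - \alpha T^2/(T+\beta)$. The parameters below are assumed, illustrative values loosely typical of CdSe-like emitters; they are not fits to the data in this work.

```python
def varshni(temperature_k, e0, alpha, beta):
    # Empirical Varshni relation for a temperature-dependent gap or
    # emission energy: E(T) = E0 - alpha * T^2 / (T + beta)
    return e0 - alpha * temperature_k**2 / (temperature_k + beta)

# Illustrative (assumed) parameters: E0 = 2.20 eV at T = 0,
# alpha = 3e-4 eV/K, beta = 200 K.
e_cold = varshni(20.0, 2.20, 3e-4, 200.0)   # near the base temperature
e_warm = varshni(290.0, 2.20, 3e-4, 200.0)  # near room temperature
```

With these parameters the model predicts a blue shift of a few tens of meV between 290 K and 20 K, the same order as typically observed for such emitters.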
Figure 3: Temperature-dependent PL spectra of QD-MoS2 on h-BN. (a) QD-MoS2 on
h-BN over the broad energy range. (b) PL spectra zoomed on the region of MoS2
emission from 20 K to 100 K. (c) Normalized spectra of the zoomed region from
20 K to 100 K. (d) Normalized PL spectra of QD-MoS2 from 130 K to 290 K.
Fig. 2(d) shows the PL spectra of QD-MoS2 on SiO2 at different temperatures.
The MoS2 B exciton spectrum overlaps with the QD PL spectrum, whereas the A
exciton peak is still strong enough to observe. The QD peak blue shifts and
its PL intensity increases as we decrease the temperature. The QD PL intensity
decreases on MoS2, indicating energy transfer from the QDs. The inset of
Fig. 2(d) shows the zoomed MoS2 region of the QD-MoS2 heterostructure spectra.
Fig. 3(a) shows the temperature-dependent PL spectra of the QD-MoS2
heterostructure on h-BN. Very interestingly, along with the blue shift and
increase in PL intensity, we observe extra peaks near the A and B exciton
peaks of MoS2. Fig. 3(b) shows the temperature-dependent PL spectra of the
QD-MoS2 heterostructure on h-BN, zoomed near the A and B excitons of MoS2,
from 20 K to 100 K. Between the A exciton (1.90 eV) and the QD emission
(2.16 eV), multiple peaks are observed at 1.94 eV, 1.99 eV and 2.02 eV. As we
increase the temperature, the A exciton peak decreases, and the intensities of
the higher-energy multiple exciton peaks also decrease.
For clarity, we plot the normalized PL spectra of the QD-MoS2 heterostructure
on h-BN. The ratio of the A exciton to the multiple exciton peaks clearly
increases with increasing temperature up to 100 K, as shown in the normalized
spectra in Fig. 3(c). The peak near 1.99 eV starts merging with the peak near
2.02 eV above 100 K, whereas the peak near 1.94 eV disappears above 130 K.
Normalized PL spectra from 130 K to 290 K are shown in Fig. 3(d). The
intensity of the higher-energy peak at 2.02 eV increases with increasing
temperature from 130 K to 290 K. Interestingly, this exciton peak dominates
the QD spectra at higher temperatures. More interestingly, this peak blue
shifts with increasing temperature.
In another sample we observed similar multiple PL peaks at low T at higher
laser powers, as shown in Fig. 4. Figs. 4(a) and (b) show the optical images
of the MoS2-on-hBN heterostructure before and after transferring the QDs,
respectively. Fig. 4(c) shows the PL spectra of QD-MoS2 on SiO2 at 20 K at
high laser power; we did not observe any multiple peaks. Fig. 4(d) shows the
PL spectra of QD-MoS2 on hBN at the same T and the same power; we can clearly
observe multiple PL emissions, which are resolved by fitting the PL spectra
with multiple Lorentzians.
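The fitting procedure is not specified beyond "multiple Lorentzians"; a minimal sketch of such a decomposition, using `scipy.optimize.curve_fit` on a synthetic spectrum with peaks at the energies quoted in the text (1.90, 1.94, 1.99 and 2.02 eV), is shown below. The amplitudes, linewidths and noise level are invented for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(e, amp, e0, gamma):
    # Single Lorentzian: peak height `amp`, centre `e0` (eV), HWHM `gamma` (eV)
    return amp * gamma**2 / ((e - e0)**2 + gamma**2)

def multi_lorentzian(e, *params):
    # Sum of Lorentzians; params is a flat (amp, e0, gamma) triple per peak
    total = np.zeros_like(e)
    for i in range(0, len(params), 3):
        total = total + lorentzian(e, *params[i:i + 3])
    return total

# Synthetic spectrum with centres at the energies quoted in the text;
# amplitudes, widths and noise are purely illustrative.
energy = np.linspace(1.80, 2.10, 600)
true_params = [1.0, 1.90, 0.010,
               0.4, 1.94, 0.010,
               0.3, 1.99, 0.010,
               0.5, 2.02, 0.010]
rng = np.random.default_rng(0)
spectrum = multi_lorentzian(energy, *true_params) \
    + rng.normal(0.0, 0.01, energy.size)

# Fit with rough initial guesses near each visible shoulder
guess = [0.5, 1.90, 0.02, 0.5, 1.94, 0.02,
         0.5, 1.99, 0.02, 0.5, 2.02, 0.02]
popt, _ = curve_fit(multi_lorentzian, energy, spectrum, p0=guess)
centres = sorted(popt[1::3])  # fitted peak centres, low to high energy
```

In practice the initial guesses would be placed by eye on the measured spectrum, and a linear or broad background term would typically be added to the model.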
As discussed earlier, there is a possibility of interlayer exciton formation
in this structure. We try to understand the multiple-emission mechanism in
terms of interlayer excitons. These peaks may be signatures of moiré excitons,
i.e., interlayer excitons formed in the MoS2-QD structure and trapped in a
moiré potential possibly formed by the crystal-plane mismatch between MoS2 and
h-BN. However, these interesting observations need to be explored further in
detail, from the perspective of the moiré potential and other aspects.
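The length scale of such a moiré potential can be estimated from the commonly used expression for the moiré period of two hexagonal lattices with mismatch $\delta$ and twist $\theta$, $\lambda = (1+\delta)a/\sqrt{2(1+\delta)(1-\cos\theta)+\delta^2}$. The lattice constants below (~3.16 Å for MoS2, ~2.50 Å for hBN) are assumed approximate literature values, and the result is an order-of-magnitude estimate only.

```python
import math

def moire_period(a_film, a_sub, theta_deg=0.0):
    # Moire superlattice period for two hexagonal lattices with
    # mismatch delta = a_film/a_sub - 1 and relative twist theta.
    delta = a_film / a_sub - 1.0
    theta = math.radians(theta_deg)
    return (1.0 + delta) * a_sub / math.sqrt(
        2.0 * (1.0 + delta) * (1.0 - math.cos(theta)) + delta**2)

# Assumed approximate lattice constants (angstrom): MoS2 ~3.16, hBN ~2.50
lam_aligned = moire_period(3.16, 2.50)          # aligned layers
lam_twisted = moire_period(3.16, 2.50, 5.0)     # 5-degree twist
```

Because the MoS2/hBN mismatch is large, the estimated period is only of order 1 nm even for aligned layers, and twisting shortens it further; this sets the scale on which any moiré trapping potential would vary.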
Figure 4: Optical images of the second sample before (a) and after (b)
transferring the QDs. (c) PL spectra of QD-MoS2 on SiO2. (d) PL spectra of
QD-MoS2 on hBN; multiple PL emissions are resolved by fitting the PL spectra
with multiple Lorentzians.
It has been shown that if hBN is used as a capping layer in a MoSe2/WSe2
heterostructure, the inhomogeneous PL linewidths are reduced, giving rise to
equally energy-spaced interlayer excitons at lower temperatures, which can be
attributed to moiré excitons.Tran _et al._ (2019) It is also interesting to
note that this kind of inhomogeneous broadening in the presence of hBN may
also occur due to the presence of multiple moiré domains, or due to strain
caused by the substrate and the interlayer spacing within the laser spot. In
addition, multiple exciton peaks in the PL spectra of the heterostructure can
also arise from quantized energy levels caused by confinement
effects. Tran _et al._ (2019); Torchynska, Dybiec, and Ostapenko (2005)
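The scale of such confinement-induced level spacings can be gauged with a crude particle-in-a-sphere estimate. The effective mass (~0.13 electron masses, CdSe-like) and dot radius (~2 nm) below are assumed illustrative values; the hard-wall model ignores finite barriers and valence-band structure and therefore overestimates real spacings.

```python
import math

HBAR = 1.054571817e-34  # J s
M_E = 9.1093837015e-31  # kg
EV = 1.602176634e-19    # J

def sphere_level(n, radius_m, m_eff_ratio):
    # s-type levels of a particle in an infinite spherical well:
    # E_n = hbar^2 (n*pi)^2 / (2 m* R^2), returned in eV
    return (HBAR * math.pi * n) ** 2 \
        / (2.0 * m_eff_ratio * M_E * radius_m**2) / EV

# Assumed illustrative values: m* ~ 0.13 m_e, R ~ 2 nm
e1 = sphere_level(1, 2e-9, 0.13)
e2 = sphere_level(2, 2e-9, 0.13)
```

Even this crude estimate gives sub-eV to eV-scale level separations for nanometre-sized dots, showing that confinement can plausibly produce spectrally distinct emission lines.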
The observation of multiple PL peaks depends on various factors: firstly, as
discussed previously, the amount of lattice mismatch between MoS2 and hBN and
whether this mismatch can create moiré potentials; secondly, the strain
created by this mismatch and its effect on the PL spectra.Marzin and Bastard (1994);
Cusack, Briddon, and Jaros (1997) This strain field, which is mostly biaxial
in nature, acts on the entire area of the QDs covering the MoS2-hBN and can
lead to multiple confined electronic levels.Thoai, Hu, and Koch
(1990); Grundmann, Stier, and Bimberg (1995) This strain effect can also be
the reason for the observed multiple peaks in the spectra. Similar multiple PL
emissions from QDs combined with quantum wells, which resemble 2D
semiconductors in many ways, have been observed at low T at higher laser
powers.Torchynska, Dybiec, and Ostapenko (2005) However, further studies are
expected in these directions.
In summary, we measured the PL spectra of the MoS2-QD heterostructure on SiO2
and on hBN at low temperatures. At low T, we observe multiple PL emission
peaks of the heterostructure on hBN which are absent on SiO2. These multiple
peaks may arise from the lattice mismatch between MoS2 and hBN, from the
strain field created by the MoS2-hBN interface, or from the altered quantum
potential of the QDs due to the presence of hBN. Further explanation,
including detailed lifetime and other studies, is expected in the near future.
Acknowledgment: The author thanks CSIR-UGC for financial support and DST
Nanomission for funding. The author thanks Aveek Bid and Jaydeep Kumar Basu
for discussions, Komal Sharma for help with the QD synthesis, and the
facilities of CeNSE and IISc.
  * This manuscript presents results from the Ph.D. work of the author under the guidance of Aveek Bid and Jaydeep Kumar Basu at IISc.
## References
* Wang _et al._ (2018) G. Wang, A. Chernikov, M. M. Glazov, T. F. Heinz, X. Marie, T. Amand, and B. Urbaszek, “Colloquium: Excitons in atomically thin transition metal dichalcogenides,” Reviews of Modern Physics 90, 021001 (2018).
* Ross _et al._ (2013) J. S. Ross, S. Wu, H. Yu, N. J. Ghimire, A. M. Jones, G. Aivazian, J. Yan, D. G. Mandrus, D. Xiao, W. Yao, _et al._ , “Electrical control of neutral and charged excitons in a monolayer semiconductor,” Nature communications 4, 1–6 (2013).
* Bera _et al._ (2010) D. Bera, L. Qian, T.-K. Tseng, and P. H. Holloway, “Quantum dots and their multimodal applications: a review,” Materials 3, 2260–2345 (2010).
* Raja _et al._ (2016) A. Raja, A. Montoya-Castillo, J. Zultak, X.-X. Zhang, Z. Ye, C. Roquelet, D. A. Chenet, A. M. Van Der Zande, P. Huang, S. Jockusch, _et al._ , “Energy transfer from quantum dots to graphene and mos2: The role of absorption and screening in two-dimensional materials,” Nano letters 16, 2328–2333 (2016).
* Ramasubramaniam (2012) A. Ramasubramaniam, “Large excitonic effects in monolayers of molybdenum and tungsten dichalcogenides,” Physical Review B 86, 115409 (2012).
  * Pradeepa, Bid, and Basu (2020) H. Pradeepa, A. Bid, and J. K. Basu, “Strong suppression of emission quenching in core quantum dots coupled to monolayer MoS2,” Nanoscale Advances 2, 3858–3864 (2020).
* Haridas _et al._ (2013) M. Haridas, J. Basu, A. Tiwari, and M. Venkatapathi, “Photoluminescence decay rate engineering of cdse quantum dots in ensemble arrays embedded with gold nano-antennae,” Journal of Applied Physics 114, 064305 (2013).
  * Mak _et al._ (2013) K. F. Mak, K. He, C. Lee, G. H. Lee, J. Hone, T. F. Heinz, and J. Shan, “Tightly bound trions in monolayer MoS2,” Nature materials 12, 207–211 (2013).
* Alexeev _et al._ (2019) E. M. Alexeev, D. A. Ruiz-Tijerina, M. Danovich, M. J. Hamer, D. J. Terry, P. K. Nayak, S. Ahn, S. Pak, J. Lee, J. I. Sohn, _et al._ , “Resonantly hybridized excitons in moiré superlattices in van der waals heterostructures,” Nature 567, 81–86 (2019).
  * Jin _et al._ (2019) C. Jin, E. C. Regan, A. Yan, M. I. B. Utama, D. Wang, S. Zhao, Y. Qin, S. Yang, Z. Zheng, S. Shi, _et al._ , “Observation of moiré excitons in WSe2/WS2 heterostructure superlattices,” Nature 567, 76–80 (2019).
* Tran _et al._ (2019) K. Tran, G. Moody, F. Wu, X. Lu, J. Choi, K. Kim, A. Rai, D. A. Sanchez, J. Quan, A. Singh, _et al._ , “Evidence for moiré excitons in van der waals heterostructures,” Nature 567, 71–75 (2019).
  * Seyler _et al._ (2019) K. L. Seyler, P. Rivera, H. Yu, N. P. Wilson, E. L. Ray, D. G. Mandrus, J. Yan, W. Yao, and X. Xu, “Signatures of moiré-trapped valley excitons in MoSe2/WSe2 heterobilayers,” Nature 567, 66–70 (2019).
* Boulesbaa _et al._ (2016) A. Boulesbaa, K. Wang, M. Mahjouri-Samani, M. Tian, A. A. Puretzky, I. Ivanov, C. M. Rouleau, K. Xiao, B. G. Sumpter, and D. B. Geohegan, “Ultrafast charge transfer and hybrid exciton formation in 2d/0d heterostructures,” Journal of the American Chemical Society 138, 14713–14719 (2016).
* de Mello Donega _et al._ (2003) C. de Mello Donega, S. G. Hickey, S. F. Wuister, D. Vanmaekelbergh, and A. Meijerink, “Single-step synthesis to control the photoluminescence quantum yield and size dispersion of cdse nanocrystals,” The Journal of Physical Chemistry B 107, 489–496 (2003).
* Qu and Peng (2002) L. Qu and X. Peng, “Control of photoluminescence properties of cdse nanocrystals in growth,” Journal of the American Chemical Society 124, 2049–2055 (2002).
* Collier _et al._ (1997) C. Collier, R. Saykally, J. Shiang, S. Henrichs, and J. Heath, “Reversible tuning of silver quantum dot monolayers through the metal-insulator transition,” Science 277, 1978–1981 (1997).
* Dabbousi _et al._ (1994) B. Dabbousi, C. Murray, M. Rubner, and M. Bawendi, “Langmuir-blodgett manipulation of size-selected cdse nanocrystallites,” Chemistry of Materials 6, 216–219 (1994).
* Heath, Knobler, and Leff (1997) J. R. Heath, C. M. Knobler, and D. V. Leff, “Pressure/temperature phase diagrams and superlattices of organically functionalized metal nanocrystal monolayers: the influence of particle size, size distribution, and surface passivant,” The Journal of Physical Chemistry B 101, 189–197 (1997).
  * Marzin and Bastard (1994) J.-Y. Marzin and G. Bastard, “Calculation of the energy levels in InAs/GaAs quantum dots,” Solid state communications 92, 437–442 (1994).
* Cusack, Briddon, and Jaros (1997) M. Cusack, P. Briddon, and M. Jaros, “Absorption spectra and optical transitions in inas/gaas self-assembled quantum dots,” Physical Review B 56, 4047 (1997).
* Thoai, Hu, and Koch (1990) D. T. Thoai, Y. Hu, and S. W. Koch, “Influence of the confinement potential on the electron-hole-pair states in semiconductor microcrystallites,” Physical Review B 42, 11261 (1990).
* Grundmann, Stier, and Bimberg (1995) M. Grundmann, O. Stier, and D. Bimberg, “Inas/gaas pyramidal quantum dots: Strain distribution, optical phonons, and electronic structure,” Physical Review B 52, 11969 (1995).
  * Torchynska, Dybiec, and Ostapenko (2005) T. Torchynska, M. Dybiec, and S. Ostapenko, “Ground and excited state energy trend in InAs/InGaAs quantum dots monitored by scanning photoluminescence spectroscopy,” Physical Review B 72, 195341 (2005).
# Optimizing Topological Switching in Confined 2D-Xene Nanoribbons via Finite-
Size Effects
Muhammad Nadeem [email protected], [email protected],
[email protected] ARC Centre of Excellence in Future Low-Energy Electronics
Technologies (FLEET), University of Wollongong, Wollongong, New South Wales
2525, Australia Institute for Superconducting and Electronic Materials
(ISEM), Australian Institute for Innovative Materials (AIIM), University of
Wollongong, Wollongong, New South Wales 2525, Australia. Chao Zhang School
of Physics, University of Wollongong, Wollongong, NSW 2522, Australia
Dimitrie Culcer ARC Centre of Excellence in Future Low-Energy Electronics
Technologies (FLEET), University of New South Wales, Sydney 2052, Australia.
School of Physics, University of New South Wales, Sydney 2052, Australia.
Alex R. Hamilton ARC Centre of Excellence in Future Low-Energy Electronics
Technologies (FLEET), University of New South Wales, Sydney 2052, Australia.
School of Physics, University of New South Wales, Sydney 2052, Australia.
Michael S. Fuhrer ARC Centre of Excellence in Future Low-Energy Electronics
Technologies (FLEET), Monash University, Clayton, Victoria 3800, Australia.
School of Physics and Astronomy, Monash University, Clayton, Victoria 3800,
Australia. Xiaolin Wang ARC Centre of Excellence in Future Low-Energy
Electronics Technologies (FLEET), University of Wollongong, Wollongong, New
South Wales 2525, Australia Institute for Superconducting and Electronic
Materials (ISEM), Australian Institute for Innovative Materials (AIIM),
University of Wollongong, Wollongong, New South Wales 2525, Australia.
###### Abstract
In a blueprint for topological electronics, edge state transport in a
topological insulator material can be controlled by employing a gate-induced
topological quantum phase transition. Here, by studying the width dependence
of electronic properties, it is inferred that zigzag-Xene nanoribbons are
promising materials for topological electronics with a display of unique
physical characteristics associated with the intrinsic band topology and the
finite-size effects on gate-induced topological switching. First, due to
intertwining with intrinsic band topology-driven energy-zero modes in the
pristine case, spin-filtered chiral edge states in zigzag-Xene nanoribbons
remain gapless and protected against backward scattering even with finite
inter-edge overlapping in ultra-narrow ribbons, i.e., a 2D quantum spin Hall
material turns into a 1D topological metal. Second, mainly due to width- and
momentum-dependent tunability of the gate-induced inter-edge coupling, the
threshold-voltage required for switching between gapless and gapped edge
states reduces as the width decreases, without any fundamental lower bound.
Third, when the width of zigzag-Xene nanoribbons is smaller than a critical
limit, topological switching between edge states can be attained without bulk
bandgap closing and reopening. This is primarily due to the quantum
confinement effect on the bulk band spectrum which increases the nontrivial
bulk bandgap as the width decreases. The existence of such protected gapless
edge states and reduction in threshold-voltage accompanied by enhancement in
the bulk bandgap overturns the general wisdom of utilizing narrow-gap and wide
channel materials for reducing the threshold-voltage in a standard field
effect transistor analysis and paves the way toward low-voltage topological
devices.
## I Introduction
Two-dimensional topological insulators are promising materials for topological
quantum electronic devices where edge state transport can be controlled by a
gate-induced electric field Wray (2012); Seidel (2019). In general, edge state
transport can be controlled either by a perpendicular electric field, which
drives a topological phase transition via bulk bandgap closing and reopening
Ezawa (2013a); Liu _et al._ (2014, 2015); Pan _et al._ (2015); Qian _et
al._ (2014); Zhang _et al._ (2017); Molle _et al._ (2017); Collins _et al._
(2018); Nadeem _et al._ (2021) or via inter-edge tunneling between gapped
edge states, assisted by a transverse electric field Xu _et al._ (2019). In
the latter case, though a very weak transverse electric field is sufficient to
induce inter-edge tunneling, edge state conductance quantization may remain a
challenge constraining the geometry of topological insulator ribbons Xu _et
al._ (2019). In the former case, a blueprint for topological quantum
electronics, the strength of the critical electric field required for
topological switching depends upon the strength of quantum mechanical
perturbations, such as the spin-orbit interaction (SOI) Kane and Mele (2005a,
b) and Bernevig–Hughes–Zhang (BHZ) mass term Bernevig, Hughes, and Zhang
(2006); König _et al._ (2008), which reflect the bulk band topology and
therefore lead to a quantized edge state conductance. In this class, numerous
theoretical proposals for 2D topological insulator materials, which exhibit
electrically-driven topological phase transition, such as staggered sublattice
potentials Ezawa (2013a); Molle _et al._ (2017); Nadeem _et al._ (2021),
mirror symmetry breaking Liu _et al._ (2014), and the Stark effect Qian _et
al._ (2014); Liu _et al._ (2015); Pan _et al._ (2015); Zhang _et al._
(2017); Collins _et al._ (2018), have been put forward. Among these various
proposals, electric field switching has been demonstrated Collins _et al._
(2018) in ultrathin (monolayer or bilayer) Na3Bi where the experimentally
reported bandgap of 360 meV significantly exceeds the thermal energy at room
temperature (25 meV) and the critical electric field is about 1.1 V nm-1.
Though a large topological bulk bandgap is desirable to enable quantum spin
Hall (QSH) phenomena at room temperature, one of the main challenges with such
a blueprint topological switching mechanism is the requirement of an
unrealistically large critical electric field to close the topological bandgap
Molle _et al._ (2017); Vandenberghe and Fischetti (2017). For instance, the
critical electric field is of the order of 0.05 V nm-1 for silicene Molle _et
al._ (2017), 1.0 V nm-1 for stanene Molle _et al._ (2017), and 1.42 V nm-1
for 1T′-MoS2 Qian _et al._ (2014). The critical electric field further
increases for heavy elemental topological insulators such as Bi/SiC Reis _et
al._ (2017) where the experimentally reported bandgap is 800 meV. In light of
this, it was only recently shown that the critical electric field can be
significantly reduced via the tunable Rashba effect in 2D-Xenes (G, Si, Ge,
Sn, and P, As, Sb, Bi) Nadeem _et al._ (2021). Though the Rashba effect is
considerably large in heavy elemental 2D-Xenes such as functionalized
bismuthene, the Rashba effect remains negligibly small for relatively lighter
group-IV (Si, Ge) and group-V (P, As) elemental 2D-Xenes Nadeem _et al._
(2021).
Here we note that, apart from the relativistic quantum mechanical phenomenon
of SOI which plays a central role in characterizing the band topology and
limiting the critical electric field, the finite-size geometry incorporates
two additional critical phenomena: quantum confinement effects on the bulk
subbands and inter-edge coupling between spin-filtered chiral edge states. For
the study of fundamental phenomena in both laboratory and device applications,
it is crucial to investigate the fundamental topological features and the edge
state transport in the finite-size geometry of a topological insulator.
Finite-size effects have been studied for various topological insulator
materials via the thickness dependence of surface electronic dispersion in
thin films of 3D topological insulators Shan, Lu, and Shen (2010); Liu _et
al._ (2010); Lu _et al._ (2010) and Dirac semimetals Pan _et al._ (2015);
Collins _et al._ (2018) and the width dependence of edge state electronic
dispersion in 2D topological insulator materials such as HgTe Zhou _et al._
(2008), transition metal dichalcogenides (TMDC) with 1T′ phase, TMDC-1T′ Das,
Sen, and Mahapatra (2020), and 2D-Xenes Ezawa (2006); Han _et al._ (2007);
Son, Cohen, and Louie (2006); Brey and Fertig (2006); Ezawa and Nagaosa
(2013); Cano-Cortés, Ortix, and van den Brink (2013). However, less attention
has been devoted to finite-size effects on the gate-induced topological
switching, which is central to the operation of topological electronic devices.
By studying the width dependence of electronic properties in zigzag Xene
nanoribbons (ZXNRs), it is demonstrated that both the SOI-induced barrier in
the bulk and the gate-control of quantized conductance along the edges can be
optimized via finite-size effects. It is inferred that ZXNRs are promising
materials for topological electronics with a display of unique physical
characteristics associated with the intrinsic band topology and the finite-
size effects on gate-induced topological switching. Through tight binding
calculations of band dispersion, density of states (DOS), conductance
quantization, edge state wave functions and their width dependence, we
highlight several results that are crucial for understanding fundamental
aspects and developing novel device concepts. Our findings and the analysis
presented for ZXNRs remain valid for any 2D topological insulator material
with buckled honeycomb structure terminated on zigzag edges.
First, spin-filtered chiral edge states in ZXNRs remain gapless and protected
against backward scattering even with finite inter-edge overlapping in ultra-
narrow ribbons, i.e., a 2D quantum spin Hall (QSH) material turns into a 1D
topological metal. Such robustness in ZXNRs is deeply rooted in the intertwining
between SOI-induced spin-filtered chiral modes and the intrinsic band
topology-driven energy-zero modes in pristine honeycomb lattice structures.
Furthermore, the topological protection of 1D metallic modes is a consequence
of different momentum space locations for edge state crossings (at time-
reversal invariant momenta (TRIM) $k=\pi$) and anti-crossings (around valleys
$k=K/K^{\prime}$). Here the edge state crossing point is a Dirac point formed by edge
state gapless Dirac dispersion while the edge state anti-crossing point is a
momentum space location where the edge state spectrum becomes a massive Dirac
dispersion. This contrasts sharply with other 2D topological insulator
materials with inverted band structure, in which the edge state crossing and
anti-crossing points coexist, and in which hybridization due to inter-edge
overlapping opens an energy gap and leads to a gapped edge state spectrum Zhou
_et al._ (2008). For instance, in inverted band topological insulators such as
thin films of $\mathrm{X_{2}Y_{3}}$ [X=Bi, Sb; Y=Te, Se] Shan, Lu, and Shen (2010); Liu _et al._
(2010); Zhang _et al._ (2010) semiconductors, type-I HgTe/CdTe Bernevig,
Hughes, and Zhang (2006); König _et al._ (2007) and type-II InAs/GaSb/AlSb
Liu _et al._ (2008); Knez, Du, and Sullivan (2011); Du _et al._ (2015)
semiconducting quantum well structures, Na3Bi thin films Pan _et al._ (2015);
Collins _et al._ (2018), and monolayers of TMDC with 1T′ phase Qian _et al._
(2014); Wu _et al._ (2018), both edge state crossing and anti-crossing points
coexist at $\Gamma$-point.
Second, the critical electric-field required for switching between gapless and
gapped edge states reduces as the width of ZXNRs decreases, without any
fundamental lower bound. We demonstrate explicitly that such size dependence
of the threshold-voltage stems from a series of non-trivial quantum mechanical
phenomena associated with the geometric structure of ZXNRs. First, the edge
state wave functions at the crossing point are independent of the edge
termination and hence remain insensitive to electric fields. On the other
hand, edge state wave functions and the gate-induced coupling between
overlapping edge states across anti-crossing points are strongly dependent on
the particular edge termination and hence can be tuned via a gate electric
field. Second, with a decrease in width, the momentum space location of the
edge state anti-crossing points moves away from the valleys toward the TRIM
($k=\pi$). Furthermore, at particular momenta around the anti-crossing points,
the magnitude of the inter-edge overlap increases with decrease in width. As a
result, gate-induced coupling between spin-filtered chiral edge states is
enhanced as the ZXNR width is reduced. This shows that finite-size effects on
the edge spectrum play a central role in optimizing the gate-controlled edge
state transport, such that size dependence of the threshold-voltage stems from
width- and momentum-dependent tunability of the gate-induced coupling between
inter-edge spin-filtered chiral states.
Third, when the width of ZXNRs is smaller than a critical limit, quantum
confinement enhances the topological bulk bandgap and, hence, the energy
spacing between the bulk subbands and the edge states, which in turn leads to
topological switching between gapless and gapped edge states without bulk
bandgap closing. Both of these finite-size phenomena, central to the control
of edge state transport, are completely missing in wide ZXNRs: there the
critical electric-field is limited by the SOI-induced barrier, and the
topological phase transition is accompanied by bulk bandgap closing and
reopening.
It is important to note that the threshold reduction could in principle be
achieved with a built-in electric field due to static charges. However, for a
fixed SOI in ZXNRs, a simple enhancement of the built-in electric field also
reduces the topological bandgap in the QSH phase, which may be detrimental to
dissipationless quantized edge state conductance due to admixing of edge modes
with bulk subbands. In this regard, the size-dependent and momentum-dependent
tunability of gate-induced inter-edge coupling is a novel mechanism that
reduces the critical gate electric field even as the topological bulk bandgap
is enhanced by quantum confinement. On the one hand, it reduces the threshold-voltage by lowering the SOI-induced barrier in the bulk; on the other hand, it enhances the bulk bandgap even in lighter monoelemental 2D-Xenes, so that detrimental contributions from the bulk material to the edge current are avoided and the chemical potential can reside safely within this gap.
Figure 1: Optimization of electronic properties in quantum confined ZXNRs. (a)
A ZXNR with lattice parameters. (b) Width dependence of threshold-voltage and
bulk bandgap. With decrease in the width of ZXNRs, threshold-voltage decreases
while nontrivial bulk bandgap increases. (c) Density of states and topological
phase transition in an ultra-narrow ZXNR with N = 4. In the absence of gate
electric field ($\lambda_{v}=0$), finite density of states (cyan) at zero-
energy represent the presence of protected helical edge states in QSH phase.
With increasing gate electric field, the ZXNR enters the normal insulating (NI)
phase (red) with gapped edge states while passing through a critical gapless
phase (grey). (d) Quantized edge state conductance and the critical gate
electric field for N = 10, 15, and 25. Here, Rashba SOI is ignored for
simplicity. When Rashba SOI is incorporated, the topological quantum field effect
further reduces the threshold-voltage and bulk bandgap. Here N represents the
number of zigzag chains and simulates the width of ZXNRs as
$W_{z}=\sqrt{3}Na_{0}/2$.
These features make quantum confined spin-orbit coupled ZXNRs special for
topological quantum devices, enabling optimal gate-controlled transport
through edge state channels via finite-size effects on the electronic
properties. The existence of gapless edge states and a reduction in threshold-voltage, accompanied by an enhancement of the bulk bandgap, overturns the conventional wisdom of standard field-effect-transistor analysis, namely utilizing narrow-gap and wide-channel materials to reduce the threshold-voltage, the negative-capacitance mechanism Salahuddin and Datta (2008) aside. Furthermore, the
advantage of utilizing ultra-narrow ZXNRs is multi-fold: (i) the availability
of large edge state conducting modes for enhanced signal-to-noise ratio via
multiple edge state channels, (ii) optimized geometry for topological
electronic devices where an array of ZXNRs, set apart by trivial insulating
layers/wires along vertical/lateral direction, is sandwiched between top and
bottom gates separated by top and bottom dielectrics, and (iii) low-voltage
and energy-efficient switching due to the compressible subthreshold swing in ZXNRs
via the topological quantum field effect Nadeem _et al._ (2021).
## II Finite-size effects on ZXNRs
Figure 1(a) shows a ZXNR whose primitive lattice vectors are
$a_{1}=a_{0}(1,0)$ and $a_{2}=a_{0}(1/2,\sqrt{3}/2)$, $d_{z}$ represents
the buckling length, and the dashed rectangle (comprising a dimer line of A and B
sublattice sites) represents the unit cell of the ZXNR. The width of the ZXNR is
given by the length of the unit cell (dimer line),
$W_{z}=\sqrt{3}N_{d}a_{0}/4$ where $N_{d}=2N$ represents the number of sites
in the dimer line and $N$ represents the number of zigzag lines. The length of
ZXNR $L_{z}=Dl_{z}$, where D represents the number of dimer lines and $l_{z}$
is the width of a dimer line, can be written as a function of the number of
sites in the zigzag line ($N_{z}$) as $L_{z}=N_{z}a_{0}/2$.
After the seminal work by Kane-Mele Kane and Mele (2005a, b) on graphene, it
has been shown that other 2D-Xene nanoribbons (Si, Ge, Sn, and P, As, Sb, Bi)
with honeycomb lattice structure are also QSH insulators Min _et al._ (2006);
Liu, Feng, and Yao (2011); Liu, Jiang, and Yao (2011); Xu _et al._ (2013);
Hsu _et al._ (2015); Reis _et al._ (2017); Li _et al._ (2018). Among 2D
topological insulator materials, quantum spin Hall (QSH) insulators with
honeycomb lattice structure terminated on zigzag edges are a special class
where spin-filtered chiral modes are intertwined with the intrinsic band
topology of the pristine honeycomb lattice structure. In ZXNRs, the intrinsic
band topology, characterized by a non-vanishing winding number, leads to
energy-zero flat bands in the nontrivial regime of the first Brillouin zone.
The sublattice-resolved intrinsic SOI, modeled through next-nearest hopping
Kane and Mele (2005a, b), disperses these localized modes into spin-filtered
chiral edge states. To simulate the energy-zero modes in the pristine case,
spin-filtered chiral edge states in the spin-orbit coupled case, gate-induced
topological switching, and the dependence of electronic properties on the
width of ZXNRs, in general, we study the next-nearest neighbor tight-binding
model Hamiltonian Kane and Mele (2005a, b)
$H=t\sum_{\langle ij\rangle\alpha}c_{i\alpha}^{\dagger}c_{j\alpha}+i\lambda_{so}\sum_{\langle\langle ij\rangle\rangle\alpha\beta}v_{ij}c_{i\alpha}^{\dagger}s_{\alpha\beta}^{z}c_{j\beta}+\frac{\lambda_{v}}{2}\sum_{i\alpha}c_{i\alpha}^{\dagger}v_{i}c_{i\alpha}+i\lambda_{R}(E_{z})\sum_{\langle ij\rangle\alpha\beta}c_{i\alpha}^{\dagger}(\textbf{s}_{\alpha\beta}\times\hat{\textbf{d}}_{ij})_{z}c_{j\beta}\,,$
(1)
where the first term is the nearest neighbor hopping generating Dirac
dispersion in the vicinity of valleys K(K′) while the second term is the
intrinsic Kane-Mele type SOI ($\lambda_{so}=\Delta_{so}/3\sqrt{3}$), which
opens nontrivial QSH bulk bandgap Kane and Mele (2005a, b) and induces
topologically protected spin-filtered chiral edge states. The third term
represents the staggered sublattice potential induced by the gate electric
field ($E_{v}=\lambda_{v}/\alpha_{v}$ where $\alpha_{v}$ is the buckling-
dependent parameter) which drives the QSH phase into a trivial insulating
phase, termed here topological switching. The fourth term is the spin-
mixing Rashba SOI associated with the gate electric field Kane and Mele
(2005a, b); Rashba (2009), $\Delta_{R}=\alpha_{R}E_{v}$ where
$\Delta_{R}=3\lambda_{R}/2$ and $\alpha_{R}$ is a Rashba SOI parameter.
Here $c_{i\alpha}^{\dagger}(c_{i\alpha})$ is the creation (annihilation)
electron operator with spin polarization $\alpha=\uparrow,\downarrow$ on site
i, the Pauli matrix $s^{z}$ describes the electron intrinsic spin while
$s_{\alpha\beta}^{z}$ are the corresponding matrix elements describing the
spin polarization $\alpha$ and $\beta$ on sites i and j, $v_{i}=+1(-1)$ for
sublattice A (B), and $v_{ij}=\textbf{d}_{ik}\times\textbf{d}_{kj}=\pm 1$
connects sites i and j on sublattice A (B) via the unique intermediate site k
on sublattice B (A). The nearest-neighbor bond vectors $\textbf{d}_{ik}$ and
$\textbf{d}_{kj}$ connect the i (k) and k (j) sites on the A and B
sublattices.
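For concreteness, the model of Eq. (1) can be assembled and diagonalized for a ribbon geometry with a short script. The sketch below (our own naming, with $a_{0}=t=1$ and the Rashba term omitted, so the spin sectors decouple and the spin-down block follows as $H_{\downarrow}(k_{x})=H_{\uparrow}^{*}(-k_{x})$) builds the spin-up Bloch Hamiltonian by enumerating nearest and next-nearest neighbours from the site coordinates and reading $v_{ij}$ off the cross product through the unique intermediate site:

```python
import numpy as np

A1 = np.array([1.0, 0.0])                # a1 = a0 (1, 0), with a0 = 1
A2 = np.array([0.5, np.sqrt(3) / 2])     # a2 = a0 (1/2, sqrt(3)/2)
DELTA = np.array([0.0, 1 / np.sqrt(3)])  # A -> B offset; bond length a0/sqrt(3)
D_NN, D_NNN = 1 / np.sqrt(3), 1.0        # nearest / next-nearest neighbour distances

def sites_zxnr(N):
    """The 2N unit-cell sites of a ZXNR with N zigzag chains, as (position, v_i)."""
    b_sites = [(m * A2 + DELTA, -1) for m in range(N)]    # B sublattice, v_i = -1
    a_sites = [(m * A2, +1) for m in range(1, N + 1)]     # A sublattice, v_i = +1
    return b_sites + a_sites

def nu_ij(ra, rb, sites):
    """v_ij = sgn[(d_ik x d_kj)_z], via the unique intermediate NN site k."""
    for rc, _ in sites:
        for n in range(-2, 3):
            rk = rc + n * A1
            if (abs(np.linalg.norm(rk - ra) - D_NN) < 1e-8
                    and abs(np.linalg.norm(rb - rk) - D_NN) < 1e-8):
                d1, d2 = rk - ra, rb - rk
                return np.sign(d1[0] * d2[1] - d1[1] * d2[0])
    return 0.0

def h_spin_up(N, kx, t=1.0, lam_so=0.0, lam_v=0.0):
    """Spin-up Bloch Hamiltonian of Eq. (1) with lam_R = 0 (spin sectors decouple)."""
    sites = sites_zxnr(N)
    H = np.zeros((2 * N, 2 * N), dtype=complex)
    for a, (ra, va) in enumerate(sites):
        H[a, a] += 0.5 * lam_v * va                       # staggered potential
        for b, (rb, vb) in enumerate(sites):
            for n in (-1, 0, 1):                          # neighbouring cells along x
                d = rb + n * A1 - ra
                dist = np.linalg.norm(d)
                if abs(dist - D_NN) < 1e-8:               # nearest-neighbour hopping
                    H[a, b] += t * np.exp(1j * kx * d[0])
                elif abs(dist - D_NNN) < 1e-8 and va == vb:   # intrinsic Kane-Mele SOI
                    H[a, b] += (1j * lam_so * nu_ij(ra, ra + d, sites)
                                * np.exp(1j * kx * d[0]))
    return H
```

Sweeping `kx` over $[0,2\pi]$ and plotting `np.linalg.eigvalsh(h_spin_up(...))` reproduces the qualitative features discussed below: flat zero modes in the pristine case, a gapless edge crossing at the TRIM once the intrinsic SOI is switched on, and a gate-induced edge gap for large $\lambda_{v}$.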
To begin with, by numerically diagonalizing the tight binding model, we study
the finite-size effects on the pristine and spin-orbit coupled ZXNRs by
varying the width of ZXNRs. For simplicity, the width of ZXNRs
$W_{z}=\sqrt{3}Na_{0}/2$ is simulated by the number of zigzag chains (N) by
setting $a_{0}=1$. In the pristine case, as shown in figure 2 (a), the
intrinsic topology of honeycomb lattice structures leads to strongly localized
energy-zero flat edge states between valleys K and K′, $\Delta
k_{x}=K-K^{\prime}=2\pi/3a_{0}$, a nontrivial regime of the first Brillouin
zone characterized by a non-vanishing winding number. The intrinsic SOI drives
pristine ZXNRs into the QSH phase and disperses these energy-zero flat edge
states into spin-filtered chiral edge modes, as shown in figure 2 (b). Due to
the presence of both time-reversal and inversion symmetry, the edge states are
Kramers pairs, forming a fourfold degenerate Dirac point at the edge state
crossing point, TRIM, $k_{x}=\pi$. Figure 2 (c) shows that when a gate
electric field is applied, the Kramers partners split along the energy axis
while the twofold degenerate Dirac points in the spin down and spin up sectors
move toward the corners of the nontrivial regime, valleys K and K′
respectively. As a result, due to spin-valley locking, ZXNRs show a spin-
polarized semi-metallic behavior at a critical point where
$\lambda_{v}=\lambda_{v}^{c}$. When the strength of the staggered sublattice
potential exceeds a threshold limit, the Dirac points in both spin sectors are
gapped out at the anti-crossing points and the system enters the trivial
regime.
Figure 2: Finite-size effects in quantum confined ZXNRs. (a-c) Width
dependence of one-dimensional electronic band dispersion for pristine ZXNRs
hosting localized energy-zero edge states (a), spin-orbit coupled ZXNRs
hosting QSH phase (b), and the gate induced critical point in spin-orbit
coupled ZXNRs (c). In wide ZXNRs (N = 100) the anti-crossing point lies at the
valley ($k_{x}=2\pi/3$) and the critical gate electric field reads
$E_{c}=2\Delta_{so}/\alpha_{v}$. In narrow quantum confined ZXNRs (N = 10, 5),
anti-crossing point moves from valley towards TRIM $k_{x}=\pi$ and the
critical gate electric field reduces from the SOI driven barrier, i.e.,
$E_{c}<2\Delta_{so}/\alpha_{v}$. Moreover, around anti-crossing points, the
energy spacing between edge states and the bulk subbands increases with
decrease in width of ZXNRs. Such an increase in the bulk bandgap shows that
topological switching is not accompanied by bulk bandgap closing and reopening
in quantum confined ZXNRs. (d) Width dependence of momentum space location of
anti-crossing point. (e,f) Density of states for ZXNRs in QSH phase with N = 5
(e) and edge state density of states for N = 1, 2, 3, 4 and 5 (f). Finite
density of states around the energy-zero level shows that the edge states of
ZXNRs in the QSH phase remain gapless even for ultra-narrow widths. Here we
set $a_{0}=t=1$ and $\lambda_{R}=0$.
The bulk and edge state electronic band dispersion, obtained via numerical
diagonalization of the tight binding model, show a number of counter-intuitive
features depending upon the width of ZXNRs, which may prove to be interesting
for both fundamental and novel device applications in topological electronics.
### II.1 From 2D QSH insulator to 1D topological metal
The trademark of spin-orbit coupled ZXNRs, i.e., spin-filtered chiral edge
states in 2D QSH sheets, remains protected even when the system becomes
effectively 1D and the QSH effect is no longer well defined. That is, as shown
in figure 2(b), (i) the spin-filtered chiral edge states remain gapless even
for ultra-narrow ribbons and (ii) the bulk bandgap increases with decrease in
width. It implies that as one makes the ZXNR narrower, it retains its
topological character, i.e., has well-defined 1D metallic modes associated
with the edges, each with spin-momentum locking, and the bulk bandgap grows.
So a narrow ZXNR remains a robust 1D topological metal, with a large energy
separation between the edge states and the bulk subbands, characterized by a
non-vanishing winding number associated with the intrinsic band topology. This
counter-intuitive result differentiates ZXNRs from
other 2D topological insulators with inverted band structure, where the effect
of quantum confinement is to push the system toward a large-gap conventional
insulator.
This observation can be understood from fundamental quantum mechanical
considerations in narrow ZXNRs: sublattice-resolved intrinsic SOI, quantum
confinement, and the longitudinal momentum-dependent inter-edge coupling.
First of all, both the nontrivial bulk bandgap and the dispersing spin-
filtered chiral edge states originate from the sublattice-resolved intrinsic
SOI, i.e., next-nearest hopping localizes the bulk electrons while the
electrons traversing along the edges remain effectively free. This SOI-induced
mechanism remains true, irrespective of the ZXNR width. In addition, even in
the 1D limit, as discussed below, the protection of spin-filtered chiral edge
modes is guaranteed by the vanishing inter-edge coupling at TRIM $k_{x}=\pi$.
On the other hand, the enhancement of a topological bulk bandgap is because of
the quantum confinement effect on the bulk band spectrum. As shown in figure
1(b), in the absence of gate potential, the bulk bandgap is
$E_{G/B}=|2\Delta_{so}|$ in wide ZXNRs but becomes
$E_{G/B}=|2\Delta_{so}+E_{qc}|$ in narrow ZXNRs. Here the energy gap $E_{qc}$
in the subband structure, induced by quantum confinement, is inversely
proportional to the ZXNR width Ezawa (2006); Han _et al._ (2007); Son, Cohen,
and Louie (2006); Brey and Fertig (2006); Ezawa and Nagaosa (2013); Cano-
Cortés, Ortix, and van den Brink (2013).
### II.2 Low-Voltage topological switching without bulk bandgap closing
While finite-size effects have been widely studied in 2D-Xenes Ezawa (2006);
Han _et al._ (2007); Son, Cohen, and Louie (2006); Brey and Fertig (2006);
Ezawa and Nagaosa (2013); Cano-Cortés, Ortix, and van den Brink (2013),
effects of quantum confinement and momentum-dependent inter-edge overlapping
on the gate-induced topological switching in spin-orbit coupled ZXNRs have
received comparatively less attention. Similarly to the
$\mathbb{PT}$-symmetric case, $\mathbb{PT}$-symmetry breaking via gate
electric field also displays interesting features in narrow ZXNRs. As depicted
in figure 1(b) and 2(c), when the width of ZXNRs is smaller than a critical
limit ($W_{z}^{c}$), (i) the critical gate electric field required for
switching between gapless and gapped edge states decreases with decrease in
width, and (ii) topological switching between gapless and gapped edge state
spectrum is not accompanied by bulk bandgap closing and reopening.
First, with decreasing ZXNR width, the gate induced anti-crossing points in
the edge state spectrum move away from the valleys toward TRIM. Since the
momentum space location of anti-crossing points is directly associated with
the threshold-voltage, the threshold-voltage decreases as the anti-crossing
points move closer to the TRIM. For instance, in wide ZXNRs (N = 100), the
spin-filtered Dirac points are gapped out exactly at the valleys K/K′ and the
SOI-induced barrier for critical electric field reads
$E_{c}=2\Delta_{so}/\alpha_{v}$. On the other hand, in narrow quantum confined
ZXNRs (N = 10, 5), the edge states are gapped out before reaching the
valleys K/K′. As a result, the critical electric field reduces significantly
from the SOI-driven barrier $E_{c}<2\Delta_{so}/\alpha_{v}$ with decrease in
the width. This trend suggests that the critical electric field has no lower
bound and any nonzero electric field can open an energy gap in the edge state
spectrum of ultra-narrow ZXNRs.
Second, the evolution of bulk subbands during topological switching from
gapless to gapped edge state spectrum looks quite different for wide and
narrow ribbons. In wide ZXNRs, during edge state evolution under the gate
electric field, the bulk bandgap closes at the valleys when
$E_{c}=2\Delta_{so}/\alpha_{v}$. At this point, the highest occupied molecular
orbital and lowest unoccupied molecular orbital (HOMO-LUMO), carrying the same
spin as that for edge states at particular valleys, of the bulk band spectrum
become valley degenerate with the edge states. The bulk bandgap reopens when
$E_{c}>2\Delta_{so}/\alpha_{v}$. It implies that the topological switching via
electric field is accompanied by a quantum phase transition between
$\mathbb{Z}_{2}$-nontrivial and $\mathbb{Z}_{2}$-trivial insulating phases
where the bulk bandgap closes and reopens at the valleys. In narrow ZXNRs, by contrast, where bulk subbands and edge states are widely separated in energy due to quantum confinement, the transition between gapless and gapped edge state spectra occurs without bulk bandgap closing and reopening. The
closing and reopening of the bulk bandgap is not necessary in narrow ZXNRs: the
1D system is no longer a 2D topological insulator with a well-defined
$\mathbb{Z}_{2}$ index, so no bandgap closing and reopening is needed to switch
the topology.
Such a finite-size-driven topological switching of edge state conductance,
without bulk bandgap closing and reopening, is an entirely different concept
from the previously studied quantum phase transition of the bulk band topology
induced by symmetry breaking Ezawa, Tanaka, and Nagaosa (2013); Yang _et al._
(2013); Rachel (2016); Matsumoto _et al._ (2020); Schindler (2020). Since the
symmetry class of ZXNRs remains unchanged, irrespective of width, the quantum
critical point for transitioning between $\mathbb{Z}_{2}$-nontrivial and
$\mathbb{Z}_{2}$-trivial should remain the same, constrained by the SOI-
induced barrier. However, apart from SOI terms, quantum confinement induces an
extra contribution to the bulk bandgap of narrow ZXNRs. Since the gate electric field can manipulate only the SOI-induced part of the bandgap, not the part arising from quantum confinement, topological switching of the edge state conductance via spin-filtered chiral modes occurs without bulk bandgap closing.
The critical electric field reads
$E_{c}^{TS}<E_{c}^{QPT}=2\Delta_{so}/\alpha_{v}$, where the superscripts "TS"
and "QPT" represent topological switching and quantum phase transition
respectively. It shows that the switching without bulk gap closing/reopening
is a direct consequence of the quantum confinement effect on the bulk band
spectrum of narrow ZXNRs and can be verified from the calculated band
dispersion in figure 2.
Accompanying finite-size effects on the edge state spectrum and quantum
confinement effects on the bulk band spectrum, another critical phenomenon
occurs, associated with the bulk band spectrum: the Rashba effect. For a
specific width of ZXNRs, the Rashba SOI further lowers the critical gate
electric field via topological quantum field effect on the bulk band spectrum
Nadeem _et al._ (2021). For quantum confined spin-orbit coupled ZXNRs, low-
energy single-particle electronic dispersion in the vicinity of Dirac points
reads as follows:
$E(k_{x})=\pm\sqrt{v_{F}^{2}k_{x}^{2}+v_{F}^{2}k_{n}^{2}+\frac{1}{4}\Bigg|6\sqrt{3}\lambda_{so}-\alpha_{v}E_{v}\Bigg(\frac{1}{2}+\sqrt{\frac{1}{4}+\Theta_{v(c)}\Bigg(\frac{2\alpha_{R}}{\alpha_{v}}\Bigg)^{2}}\Bigg)\Bigg|^{2}}\,,$
(2)
where $v_{F}=\sqrt{3}a_{0}t/2$ is the Fermi velocity, $\Theta_{v(c)}=1(0)$ for
valence (conduction) bands in the QSH phase while $\Theta_{v(c)}=0(1)$ for
valence (conduction) bands in the trivial phase and $k_{n}$ is the quantized
transverse momentum along the confinement direction. Such finite-size-dependent
quantization of the transverse momentum divides the electronic band dispersion
into an infinite set of discrete subbands indexed by the quantum number n
= 1, 2, 3…. Specific to our interest in this study, the band dispersion shows
that quantum confinement induces an additional factor, $v_{F}^{2}k_{n}^{2}$,
which enhances the bulk bandgap. The discretized transverse momentum $k_{n}$
is related to the longitudinal momentum $k_{x}$ as follows Brey and Fertig
(2006):
$k_{x}=\frac{k_{n}}{\tan(k_{n}W_{z})}$ (3)
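Equations (2) and (3) can be explored numerically. The sketch below (our own naming; $a_{0}=t=1$, so $v_{F}=\sqrt{3}/2$) solves the transcendental relation of Eq. (3) by bisection on one monotonic branch of the tangent, and evaluates the zero-field gap of Eq. (2) at $k_{x}=0$, where the mass term reduces to $3\sqrt{3}\lambda_{so}=\Delta_{so}$ so that the gap is $2\Delta_{so}$ for vanishing confinement:

```python
import math

V_F = math.sqrt(3) / 2          # Fermi velocity sqrt(3) a0 t / 2 with a0 = t = 1

def k_transverse(k_x, width, n=1):
    """Solve Eq. (3), k_x = k_n / tan(k_n W), on the branch k_n W in (n pi, (n+1/2) pi).

    On this branch k_n / tan(k_n W) decreases monotonically from +inf to 0,
    so a bisection always brackets the root for any k_x > 0.
    """
    lo = (n * math.pi + 1e-9) / width
    hi = ((n + 0.5) * math.pi - 1e-9) / width
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if mid / math.tan(mid * width) > k_x:
            lo = mid       # function is decreasing: root lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

def bulk_gap(k_n, lam_so):
    """Zero-field gap from Eq. (2) at k_x = 0: 2 sqrt(vF^2 kn^2 + (3 sqrt(3) lam_so)^2)."""
    return 2 * math.sqrt((V_F * k_n) ** 2 + (3 * math.sqrt(3) * lam_so) ** 2)
```

Since the branch intervals scale as $1/W_{z}$, narrower ribbons give larger $k_{n}$ and hence, through `bulk_gap`, a confinement-enhanced bulk bandgap, consistent with the trend of figure 1(b).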
### II.3 Role of intrinsic topology in pristine ZXNRs
The flat bands in the edge state spectrum of pristine ZXNRs are not generated
from the intrinsic electronic spectrum of 2D sheets but rather stem from the
intrinsic band topology associated with the edge state wave functions. The
electronic dispersion of pristine ZXNRs shows that a critical longitudinal
momentum $k_{x}=k_{x}^{c}$ divides the momentum space regime for the extended
(trivial) and the localized (nontrivial) edge states associated with gapped
dispersing and gapless flat bands respectively. As shown in figure 2(a), the
nontrivial regime of the first Brillouin zone hosting flat bands shrinks as
the width of ZXNRs decreases. That is, with decreasing width of the ZXNR,
the location of the critical longitudinal momentum $k_{x}=k_{x}^{c}$ moves
toward the TRIM $k_{x}=\pi$. As a result the critical longitudinal momentum
reads $k_{x}=k_{x}^{c}>K$ for narrow ZXNRs, in contrast to $k_{x}=K$ in wide
ZXNRs.
As depicted in figure 2(a) and 2(c), such finite-size effects on the pristine
ZXNRs are intertwined with finite-size effects on the gate-induced topological
switching in spin-orbit coupled ZXNRs. It is interesting to note that the
momentum space location of gate-induced anti-crossing points in spin-orbit
coupled ZXNRs is exactly the same as the critical longitudinal momentum
$k_{x}=k_{x}^{c}$ in pristine ZXNRs. At this point the fourfold degenerate
energy-zero flat bands in pristine ZXNRs are intrinsically gapped out by
finite-size effects while the gate-induced twofold degenerate spin-filtered
Dirac points in spin-orbit coupled ZXNRs are gapped out by the dominating gate
electric field. It implies that the reduction of critical gate electric field
in the spin-orbit coupled ZXNRs is intrinsically associated with the finite-
size effects on the nontrivial character of pristine ZXNRs rather than mere
manipulation of the intrinsic SOI. More specifically, the impact of intrinsic
band topology of pristine ZXNRs on the electronic properties of spin-orbit
coupled ZXNRs can be summarized as follows: in the nontrivial regime, while
the critical momentum space location $k_{x}^{c}$ depends on the width of ZXNR,
the strength of the critical gate electric field $E_{c}$ depends upon both the
strength of SOI and the width of ZXNRs. Studying the real-space edge state wave
functions, as shown below, further demonstrates that the reduction in the
critical gate electric field is associated with the gate-induced,
longitudinal-momentum-dependent inter-edge coupling in the vicinity of
$k_{x}^{c}$.
## III Topological edge state transport
The existence and protection of spin-filtered chiral edge states in ultra-
narrow ZXNRs and low-voltage topological switching from gapless to gapped edge
states can be verified by studying the DOS, width and momentum dependence of
inter-edge overlapping, and gate-induced inter-edge coupling for ZXNRs of
various widths.
### III.1 Density of states
The existence of spin-filtered chiral edge states in ultra-narrow ZXNRs is
revisited by analyzing the DOS and the conductance quantization in ZXNRs. In
the absence of a gate electric field, finite DOS at energy-zero represent the
gapless edge states in the QSH phase, as shown in figure 1(c), 2(e) and 2(f).
Each edge state channel contributes $e^{2}/h$ to the conductance, leading to a
quantized conductance of $2e^{2}/h$, figure 1(d), which is an important
signature of the QSH phase. The DOS remains finite at the energy-zero level even
for ultra-narrow ZXNRs, figure 2(e), direct evidence of the existence of
topological edge states in atom-wide ZXNRs. When the gate electric field is
switched on, the energy-zero DOS disappears in the $\mathbb{Z}_{2}$-trivial
phase, figure 1(c). Furthermore, the sharp disappearance of the DOS in the
$\mathbb{Z}_{2}$-trivial phase shows that DOS measurements should be an efficient
way to determine the energy gap in the edge spectrum.
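These two diagnostics are straightforward to compute from a band structure. The sketch below (toy data and our own naming, not the paper's code) broadens sampled eigenvalues into a DOS with Lorentzians and counts Fermi-level band crossings as Landauer channels, each contributing $e^{2}/h$:

```python
import numpy as np

def dos(sampled_energies, e_grid, eta=0.01):
    """Lorentzian-broadened density of states, normalized per sampled eigenvalue."""
    e = np.asarray(sampled_energies, dtype=float).ravel()
    lor = (eta / np.pi) / ((e_grid[:, None] - e[None, :]) ** 2 + eta ** 2)
    return lor.sum(axis=1) / e.size

def n_channels(bands, e_f=0.0):
    """Landauer mode count: each Fermi-level crossing of a band is one channel."""
    count = 0
    for en in bands:
        s = np.sign(np.asarray(en) - e_f)
        count += int(np.count_nonzero(s[:-1] * s[1:] < 0))
    return count

# toy edge spectrum: two counter-propagating branches crossing E_F = 0 near k = pi
k = np.linspace(0.0, 2 * np.pi, 400)
bands = [0.5 * (k - np.pi), -0.5 * (k - np.pi)]
G = n_channels(bands)   # two channels, i.e. quantized conductance 2 e^2/h
```

Applied to a computed edge spectrum, a gate-induced gap removes both zero-energy crossings, and the mode count (hence the conductance) drops to zero, mirroring the DOS collapse in figure 1(c).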
### III.2 Width/Momentum dependence of edge states
In order to understand how the width and longitudinal momentum-dependence of
inter-edge overlapping/coupling guarantees the protection of conducting edge
states and assists electric field-driven topological switching, we investigate
the real-space wave functions for edge states near the Fermi energy of a spin-
orbit coupled ZXNR terminated at A and B sublattice sites respectively. As
shown in figure 2, the spin-filtered chiral edge states are characterized by a
range of longitudinal momentum $k_{x}\in(2\pi/3a_{0},4\pi/3a_{0})$, defining a
nontrivial regime of the first Brillouin zone. In the vicinity of valleys
$k_{x}\approx 2\pi/3a_{0}$ and $k_{x}\approx 4\pi/3a_{0}$, as depicted in
figure 3(a)-3(c), the real-space squared wave functions decay exponentially
along the confined direction and have a finite overlap. With decreasing width,
although the amplitude near the edges increases, the overlap between edge
states on the two sides of the ZXNR also increases. As one moves away from
valleys toward TRIM $k_{x}=\pi$, due to large probability distribution near
the edges, the amplitude of squared wave functions increases while the decay
length decreases. For example, as shown in figure 3(d)-3(f), nearly orthogonal
squared wave functions indicate that the penetration depth of the exponentially
decaying edge states becomes much smaller than that around the valleys K/K′.
Associated with longitudinal momentum around the TRIM $k_{x}=\pi$, as shown in
figure 3(g)-3(i), spin-filtered chiral edge states distributed near the edges
appear to be completely orthogonal and, hence, the inter-edge overlap integral
remains zero even for ultra-narrow ZXNRs.
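The momentum dependence of the decay length can be checked against the pristine limit, where the zero-mode recursion of a nearest-neighbour zigzag ribbon gives a row-to-row amplitude ratio $|2\cos(k_{x}/2)|$ on one sublattice (a minimal sketch with our own naming, in units $a_{0}=t=1$):

```python
import numpy as np

def pristine_zigzag_h(N, kx, t=1.0):
    """NN Bloch Hamiltonian of a pristine zigzag ribbon, basis [B_0, A_1, B_1, ..., A_N].

    Couplings alternate between the in-chain factor 2 t cos(kx/2) and the
    inter-chain bond t, so the matrix is tridiagonal in this basis.
    """
    H = np.zeros((2 * N, 2 * N))
    for j in range(2 * N - 1):
        H[j, j + 1] = H[j + 1, j] = 2 * t * np.cos(kx / 2) if j % 2 == 0 else t
    return H

def edge_decay_ratio(N, kx):
    """Row-to-row amplitude ratio of the near-zero edge state on the B sublattice."""
    w, v = np.linalg.eigh(pristine_zigzag_h(N, kx))
    psi = v[:, np.argmin(np.abs(w))]   # state closest to E = 0
    return abs(psi[2] / psi[0])        # B-sublattice amplitudes, rows 1 and 0
```

The ratio approaches 1 near the valleys (delocalized states) and 0 at the TRIM $k_{x}=\pi$ (states pinned to the outermost row), matching the trend from top to bottom in figure 3.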
The accuracy of numerical tight binding results, describing the longitudinal
momentum dependence of edge states in quantum confined ZXNRs, can be probed by
obtaining explicit expressions for the wave functions for the spin-filtered
chiral edge states by analytically solving tight binding model Zarea, Büsser,
and Sandler (2008). Based on the nature of wave functions, the edge state
spectrum in ZXNRs can be divided into three regimes of momentum space: (i) In
region I, in the vicinity of TRIM $k_{x}\approx\pi$ where the edge spectrum
forms a fourfold degenerate Dirac point in the absence of gate electric field,
the wave functions are damped oscillatory. (ii) In region II,
$k_{x}\in[2\pi/3a_{0},\pi)\cup(\pi,4\pi/3a_{0}]$ lying between the Dirac
points, the wave functions at the edges decay exponentially along the confined
direction. (iii) In region III, away from the Dirac point
$2\pi/3a_{0}>k_{x}>4\pi/3a_{0}$, the wave functions are oscillatory in nature
and represent the localized 1D edge states due to admixing with bulk subbands.
It implies that the spin-filtered chiral edge states are formed by a
combination of wave functions in region I and II and the nature of these wave
functions changes from exponentially decaying to damped oscillatory as $k_{x}$
moves from region II to region I. It shows that numerical tight binding
calculations are consistent with analytical tight binding results in region I
and II as shown in the insets of figure 3.
Figure 3: Longitudinal momentum and width dependence of real-space squared
wave functions for the spin-filtered chiral edge states. Real-space squared
wave function for the spin-filtered chiral edge states near the Fermi energy
of ZXNRs with longitudinal momentum lying at Dirac/valley points
$k_{x}=2\pi/3a_{0}$ (a-c), away from Dirac points in the nontrivial regime
(d-f), and in the vicinity of the TRIM $k_{x}\approx\pi/a_{0}$ (g-i). The insets
(dashed curves), consistent with analytical tight-binding calculations, show
that the edge states are damped oscillatory in region I (g-i) while they decay
exponentially for momenta away from the TRIM, in region II (a-f). For a fixed N,
the decay length of the edge states decreases as
$k_{x}$ moves from the valleys towards the TRIM (from top to bottom). While there
$k_{x}$ moves from valleys towards the TRIM (from top to bottom). While there
is no inter-edge overlapping in region I, finite inter-edge overlapping in
region II increases with decrease in width (a-c). Here SOI parameters are
taken as $\lambda_{so}/t=0.05$ and $\lambda_{R}=0$ in the absence of gate
potential $\lambda_{v}=0$. The horizontal axis is the confinement direction,
along y-axis of the zigzag 2D-Xenes nanoribbon here.
While the tight binding dispersion and DOS confirm the existence of gapless edge
states in atom-wide ZXNRs, the analysis of width and momentum dependence of
edge state wave functions leads to the following three important outcomes: (i)
protection of 1D topological metal, (ii) gate-induced tunability of inter-edge
coupling via correspondence between various momentum space regimes and real
space edge termination, and (iii) size-dependent optimization of topological
switching.
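The numerical tight binding calculation underlying figure 3 can be reproduced qualitatively with a minimal Bloch Hamiltonian for the ribbon. The sketch below is our own construction under stated conventions (it is not the authors' code; the function name and sign conventions are illustrative): it builds $H(k_{x})$ for $N$ zigzag chains with nearest-neighbour hopping $t$, Kane-Mele intrinsic SOI $\lambda_{so}$, and staggered potential $\lambda_{v}$, assuming $\lambda_{R}=0$ so that the two $s_{z}$ sectors decouple.

```python
import numpy as np

def zxnr_hamiltonian(kx, N, t=1.0, lam_so=0.05, lam_v=0.0, spin=+1):
    """Bloch Hamiltonian H(kx) of a zigzag nanoribbon with N zigzag chains
    (a0 = 1), nearest-neighbour hopping t, Kane-Mele intrinsic SOI lam_so,
    and staggered sublattice potential lam_v.  With lam_R = 0 the two s_z
    sectors decouple; spin = +1/-1 selects one sector.  Basis ordering
    across the width: (A_0, B_0, A_1, B_1, ..., A_{N-1}, B_{N-1})."""
    H = np.zeros((2 * N, 2 * N), dtype=complex)
    c, s1, s2 = np.cos(kx / 2), np.sin(kx / 2), np.sin(kx)
    for m in range(N):
        A, B = 2 * m, 2 * m + 1
        # staggered potential plus same-row NNN SOI (diagonal, odd in kx)
        H[A, A] = lam_v + 2 * spin * lam_so * s2
        H[B, B] = -lam_v - 2 * spin * lam_so * s2
        # the two NN bonds inside one zigzag line combine to -2 t cos(kx/2)
        H[A, B] = H[B, A] = -2.0 * t * c
        if m + 1 < N:
            # single NN bond connecting adjacent zigzag lines
            H[B, A + 2] = H[A + 2, B] = -t
            # inter-row NNN SOI bonds; opposite sign on the two sublattices
            H[A, A + 2] = H[A + 2, A] = -2 * spin * lam_so * s1
            H[B, B + 2] = H[B + 2, B] = +2 * spin * lam_so * s1
    return H

# Edge crossing at the TRIM: for lam_v = 0 the lowest-|E| states at kx = pi
# sit (numerically) at zero energy and are localized at the ribbon edges;
# a staggered potential pushes them away from zero energy.
E0 = np.linalg.eigvalsh(zxnr_hamiltonian(np.pi, N=40))
Ev = np.linalg.eigvalsh(zxnr_hamiltonian(np.pi, N=40, lam_v=0.3))
```

Diagonalizing at $k_{x}=\pi/a_{0}$ yields a pair of near-zero-energy states localized on the outermost zigzag lines, and switching on $\lambda_{v}$ pushes them away from zero energy, mirroring the crossing/anti-crossing discussion above.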
### III.3 Protection of 1D topological metal
The classification of the edge state longitudinal momentum $k_{x}$ shows that the edge states in ZXNRs are similar to those in a conventional quantum Hall strip in which translational symmetry is preserved along the strip. The spin-filtered chiral edge states along the two sides of a ZXNR are associated with different $k_{x}$, and they do not hybridize with each other even when there is a finite overlap along the confined direction in region II Halperin (1982); MacDonald and Středa (1984). In region I, on the other hand, where the energies and momenta of the edge states around the crossing point $k_{x}=\pi/a_{0}$ are nearly equal and could in principle couple to open an energy gap, the wave functions do not overlap in a finite region of space. This implies that, in the absence of a gate electric field, the spin-filtered chiral edge states do not hybridize or couple even in ultra-narrow ZXNRs.
While excellent conductance quantization, an important signature of many topological states, and its robustness are well-known features of the quantum Hall effect, even the extensively studied 2D topological insulators Bernevig, Hughes, and Zhang (2006); König _et al._ (2007); Liu _et al._ (2008); Knez, Du, and Sullivan (2011); Du _et al._ (2015); König _et al._ (2008) with inverted band structure show experimentally much more fragile conductance quantization at low temperatures König _et al._ (2007); Knez, Du, and Sullivan (2011); Du _et al._ (2015). A question thus arises: does the QSH effect in ZXNRs offer any advantage, or will the topological protection remain relatively fragile? Within the accuracy of the electronic dispersion, DOS, quantized conductance, and momentum dependence of the edge state wave functions found via numerical tight binding approximations, the topological protection of QSH states in ZXNRs is equivalent to that of quantum Hall insulators. This suggests that the edge states in ZXNRs are far more stable than those in other topological insulator materials with inverted band structure.
As mentioned above, the answer lies in the energy and momentum space location of the conducting modes on opposite edges in a finite-size geometry. The existence of distinct momentum-space locations for the edge state crossings and anti-crossings in ZXNRs contrasts sharply with other 2D topological insulator materials with inverted band structure, in which the edge state crossing and anti-crossing points coexist, and in which hybridization due to inter-edge overlapping opens an energy gap and leads to a gapped edge state spectrum Zhou _et al._ (2008). Furthermore, it is also explicitly demonstrated in Section IV that edge states in honeycomb structures remain protected against electron-electron Coulomb interactions, which may become inevitable due to inter-edge overlapping in narrow ribbons. In short, we are not aware of any experimental obstacles that would threaten edge state conductance quantization in ZXNRs. However, precise control of the zigzag edge is required in device fabrication.
### III.4 Momentum dependence of gate-induced inter-edge coupling
The resemblance of the damped oscillatory behavior around $k_{x}=\pi/a_{0}$ to that in spin-orbit coupled armchair 2D-Xene nanoribbons (AXNRs) Zarea and Sandler (2007) suggests that the dynamical evolution of the edge states remains independent of the particular edge termination in region I. On the other hand, the exponentially decaying wave functions in region II are directly associated with the particular edge termination on A and B sublattice sites and, hence, their penetration depth can be tuned via gate-induced staggered sublattice potentials.
Figure 4: Longitudinal momentum dependence of gate-induced inter-edge coupling. For a fixed width of N = 50, the effect of the gate electric field on the real-space squared wave functions of the spin-filtered chiral edge states associated with longitudinal momentum $k_{x}$ lying in region II (a-f) and region I (g-i). In the vicinity of the valleys $k_{x}=2\pi/3a_{0}$ (a-c), a critical gate electric field localizes both the spin-down and spin-up sectors by turning the exponentially decaying edge states into sinusoidal form. This gate-induced enhancement of the penetration depth of the chiral edge states, and the resulting gate-induced inter-edge coupling, leads to an energy gap in the edge state spectrum when the gate electric field exceeds the critical limit. Moving away from the Dirac point, $k_{x}\approx 5\pi/6a_{0}$ (d-f), a similar evolution occurs in the spin-down sector, but the spin-up sector always remains exponentially decaying and traversing. In the vicinity of the TRIM $k_{x}=\pi/a_{0}$ (g-i), the penetration depth of the edge states remains insensitive to the gate electric field, and both the spin-up and spin-down chiral edge states remain traversing along the edges even for quite large gate electric fields. Here the SOI parameters are taken as $\lambda_{so}/t=0.05$ and $\lambda_{R}=0$. The horizontal axis is the confinement direction, along the y-axis of the nanoribbon.
To further investigate the dependence of the gate electric field effect on the longitudinal momentum $k_{x}$, we study the gate electric field modulation of the edge state wave functions associated with various longitudinal momenta. As shown in figures 4(g)-4(i), our numerical calculations show that the damped oscillatory edge states in region I remain insensitive to the gate electric field: the penetration depth remains the same, and both the spin-up and spin-down edge states remain damped oscillatory and traversing along the edges even for very large gate electric fields. On the other hand, as shown in figures 4(a)-4(f), the exponentially decaying edge states in region II are highly sensitive to gate electric field effects. First, in the vicinity of the momentum $k_{x}\approx 5\pi/6a_{0}$, as shown in figures 4(d)-4(f), the gate electric field hybridizes the spin-down edge states, while the amplitude of the spin-up edge states decreases with increasing electric field but they remain uncoupled. This implies that spin-up edge states remain exponentially decaying and traversing while spin-down edge states become sinusoidal and gapped in this regime of longitudinal momentum, consistent with the tight binding electronic dispersion. Moving toward the valley $k_{x}\approx 2\pi/3a_{0}$, as shown in figures 4(a)-4(c), it can be clearly seen that the gate electric field induces coupling between the overlapping, exponentially decaying wave functions in both spin sectors and localizes them. The period of these gate-induced sinusoidal wave functions decreases with increasing gate voltage. Similar gate-controlled edge state dynamics appears at the other valley, $k_{x}\approx 4\pi/3a_{0}$, but with the spin character interchanged due to electric field-induced spin-valley locking.
Based on the edge state dynamics, we come to the following conclusion: in the absence of a gate electric field, the non-vanishing overlap between the edge states in region II does not lead to hybridization or an energy gap, as the states lie at different longitudinal momenta. However, mainly due to spin-valley locking, the gate electric field splits the fourfold degenerate Dirac point in region I and moves the spin-polarized twofold Dirac points toward region II. The finite inter-edge overlap in region II then allows the gate electric field to induce coupling between the spin-filtered inter-edge states and open an energy gap in the edge state spectrum.
### III.5 Size-dependent optimization of topological switching
In region II, as shown in figures 3(a)-3(f), the increase in the overlap between inter-edge states with decreasing width indicates an enhancement of the gate-induced inter-edge coupling in narrow ZXNRs. This enhancement of the penetration depth, and hence of the inter-edge overlap in region II, lowers the critical gate electric field required for topological switching between gapless and gapped edge states. The consistency of the numerical and analytical tight binding results indicates that the finite-size assistance in topological switching stems from this gate-induced, momentum-dependent coupling between the wave functions along the edges.
The width dependence of the gate-induced inter-edge coupling is also consistent with the width dependence of the tight binding electronic dispersion: with decreasing width, the gate-induced anti-crossing points move away from the valleys toward the TRIM and, thus, the threshold-voltage decreases. Furthermore, figure 5 shows that there is no fundamental limit on the threshold-voltage: for a single zigzag chain, N = 1, the edge states of both pristine and spin-orbit coupled ZXNRs form a gapless Dirac dispersion in which the crossing and anti-crossing points coexist at $k_{x}=\pi/a_{0}$. As a result, any non-zero value of the staggered sublattice potential opens an energy gap in the edge state spectrum.
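The N = 1 limit can be checked in closed form: for a single zigzag chain the Bloch Hamiltonian of one $s_{z}$ sector reduces to a $2\times 2$ matrix in the (A, B) basis. The sketch below is a minimal construction of ours (illustrative names; $\lambda_{R}=0$ assumed): the spectrum is gapless at $k_{x}=\pi/a_{0}$ only for $\lambda_{v}=0$, and any non-zero staggered potential moves the crossing states to $\pm\lambda_{v}$.

```python
import numpy as np

def single_chain_bands(kx, t=1.0, lam_so=0.05, lam_v=0.0, spin=+1):
    """Band energies of a single zigzag chain (N = 1) in one s_z sector.
    Basis (A, B): the two intra-chain NN bonds combine to -2 t cos(kx/2),
    while the same-row Kane-Mele NNN bonds give a kx-odd diagonal term
    that adds to the staggered potential lam_v."""
    m = lam_v + 2 * spin * lam_so * np.sin(kx)
    h = -2.0 * t * np.cos(kx / 2)
    return np.linalg.eigvalsh(np.array([[m, h], [h, -m]]))

# Pristine and spin-orbit coupled chains are gapless at kx = pi; a staggered
# potential lam_v shifts the crossing states to +/- lam_v, gapping the spectrum.
```

At $k_{x}=\pi$ both the intra-chain hopping factor $\cos(k_{x}/2)$ and the SOI factor $\sin(k_{x})$ vanish, so the two bands touch exactly when $\lambda_{v}=0$, consistent with the vanishing threshold for N = 1.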
Figure 5: Size-dependent threshold-voltage. Topological switching from gapless to gapped edge states for a ZXNR with N = 5 (a), N = 3 (b), and N = 1 (c). Here grey lines represent the edge states of the pristine ZXNR, purple lines represent gapless edge states with $\lambda_{so}=0.05t$ and $\lambda_{v}=0$, red and cyan solid lines represent critical phases with $\lambda_{v}=\lambda_{v}^{c}$, and red and cyan dashed lines represent gapped edge states with $\lambda_{v}=0.22t$ (a), $\lambda_{v}=0.12t$ (b), and $\lambda_{v}=0.02t$ (c). For N = 1, any non-zero sublattice potential $\lambda_{v}$ opens an energy gap in the edge state spectrum. Here we set $a_{0}=t=1$, $\lambda_{so}=0.05t$, and $\lambda_{R}=0$.
## IV Effect of electron-electron Coulomb interactions
Though electron-electron Coulomb interactions become inevitable in quantum confined ZXNRs, the spin-filtered chiral edge conducting channels may remain gapless even when both intra- and inter-edge Coulomb interactions are present. For example, contrary to pristine ZXNRs, where Coulomb interactions open an energy gap by lifting the fourfold degeneracy of the energy-zero flat bands in the edge state spectrum Son, Cohen, and Louie (2006); Hikihara _et al._ (2003); Fujita _et al._ (1996); Yang _et al._ (2007), it has been explicitly shown that the QSH phase in spin-orbit coupled ZXNRs remains stable against intra-edge Coulomb interactions Xu and Moore
(2006). Moreover, the mass term produced by backward scattering, which may
have originated from the mixing of right and left moving chiral modes carrying
the same spin polarization and located at opposite edges of finite-size ZXNRs,
can be suppressed in the large SOI limit and, hence, the spin-filtered chiral
edge states may also remain protected against unscreened inter-edge Coulomb
interactions Zarea, Büsser, and Sandler (2008). This can be justified by a simple argument based on the interplay between the strength of the intrinsic SOI and the screening length of the Coulomb interactions: since the decay length of the spin-filtered chiral edge states in ZXNRs is inversely proportional to the intrinsic SOI, the overlap between oppositely moving spin-filtered chiral modes is suppressed with increasing SOI. This shows that, in the large SOI limit, ultra-narrow ZXNRs can be described by a tight binding model in which both intra- and inter-edge Coulomb interactions are effectively absent. Thus, even in the presence of Coulomb interactions, the large SOI limit preserves gapless spin-filtered chiral edge states, since the reduced inter-edge overlap diminishes the backward scattering terms.
Based on a similar argument, our findings provide a further safeguard: in the QSH phase, the spin-filtered chiral states associated with longitudinal momentum $k_{x}=\pi/a_{0}$ in region I remain protected against backward scattering due to the vanishing inter-edge overlap. On the other hand, in the presence of a gate electric field, unscreened inter-edge Coulomb interactions may also assist topological switching by inducing an energy gap through the finite inter-edge coupling between edge states with longitudinal momentum $k_{x}$ lying in region II of the Brillouin zone, similar to the finite-size effect. In passing, unlike the finite-size effect, which is characterized by a critical longitudinal momentum $k_{x}^{c}$ and remains the same in both pristine and spin-orbit coupled ZXNRs, the effect of Coulomb interactions in pristine ZXNRs is completely different from that in spin-orbit coupled ZXNRs.
## V Low-voltage topological quantum devices
The interplay between the rich momentum-dependent behavior of the edge states and the gate-controlled inter-edge coupling in spin-orbit coupled ZXNRs leads to two phenomena that are critical for topological quantum devices: (i) In the absence of a gate electric field, and hence of Rashba SOI, spin-filtered chiral edge states with a fourfold degenerate Dirac point at the TRIM $k_{x}=\pi/a_{0}$ remain gapless even for ultra-narrow ZXNRs. Vanishing inter-edge coupling across the crossing point guarantees that the spin-filtered chiral edge states (enabling dissipationless and quantized conductance) remain topologically protected against backscattering, and hence against deviations from conductance quantization, a figure of merit in QSH materials. (ii) Since the gate electric field splits the fourfold Dirac point at the TRIM and moves the spin-filtered twofold Dirac points toward the valleys, gate-induced coupling due to the finite overlap between spin-filtered inter-edge states across the anti-crossing points assists in opening the energy gap in the spin-filtered chiral edge states and lowers the critical gate electric field. This finite-size effect, which shrinks the nontrivial regime of the Brillouin zone without affecting the bulk band topology and reduces the critical gate electric field without affecting the quantized edge state conductance in the QSH phase, provides an ideal platform for devising energy-efficient low-voltage topological quantum devices.
To exemplify the advantages of utilizing spin-orbit coupled ZXNRs for computing technologies, we explicitly demonstrate the working principle of a topological quantum field effect transistor (TQFET) and compare its critical functionalities with those of a MOSFET. Unlike a MOSFET, where a conventional semiconductor is utilized as the channel material and conduction is enabled via bulk electronic states, a TQFET employs a topological insulator material as the channel, in which the dissipationless current is carried by topologically protected edge modes. In a blueprint for a TQFET, the gate electric field tunes a topological insulator material from a topological insulating phase (on-state) to a conventional insulating phase (off-state), a phenomenon known as topological switching. In other words, the topological switching mechanism relies on transitioning between gapless (on-state) and gapped (off-state) edge modes. Such gate electric field-driven topological switching is fundamentally different from the traditional carrier inversion in conventional semiconducting switching devices. Figure 6 shows a schematic representation of a TQFET configuring a quantum confined ZXNR as the channel between the source and drain. First, the existence of 1D gapless edge states in ultra-narrow ZXNRs promises a large number of edge state conducting modes, enhancing the signal-to-noise ratio via multiple edge state channels, figure 6(a). This allows an optimized TQFET geometry in which an array of ZXNRs, set apart by trivial insulating layers/wires along the vertical/lateral direction, is sandwiched between top and bottom gates separated by top and bottom dielectrics.
Figure 6: Topological quantum field effect transistor. (a) Schematic representation of a TQFET configuring multiple quantum confined ZXNRs allowing conduction between source and drain. (b) Topological switching driven by the gate electric field, which tunes a ZXNR from the on-state with gapless edge modes ($\lambda_{v}<\lambda_{v}^{c}$) to the off-state with gapped edge modes ($\lambda_{v}>\lambda_{v}^{c}$). Here $V_{G}$ is the gate-voltage, $V_{DD}$ the supply-voltage, and $I_{D}$ the source-to-drain current.
Second, the reduction in threshold-voltage with decreasing channel width, even though the topological bulk bandgap increases, overturns the general wisdom, from standard field effect transistor analysis, of utilizing narrow-gap, wide-channel materials to reduce the threshold-voltage. For example, in a blueprint topological transistor where topological switching is implemented via bulk bandgap closing and reopening, materials with a large bulk bandgap require an unrealistically large threshold-voltage Molle _et al._ (2017); Vandenberghe and Fischetti (2017). Moreover, a TQFET based on quantum confined ZXNRs contrasts sharply with a MOSFET, in which the width-dependence of the threshold-voltage $V_{th}$ depends upon the isolation technique used for transistor fabrication Tsividis and McAndrew (2011): the effective threshold-voltage in a narrow channel device increases with decreasing width when the transistor is made using the LOCOS (local oxidation of silicon) process, while it decreases with decreasing width when the transistor is made in a shallow-trench-isolation (STI) process. That is, unlike the size dependence of the threshold-voltage on the isolation technique in a MOSFET, the reduction of the threshold-voltage in a TQFET is an intrinsic property of ZXNRs associated with topological and quantum mechanical functionalities. This suggests that, along with vastly different conduction and switching mechanisms, the technological aspects required for fabricating a TQFET with ZXNRs also differ radically from those of MOSFETs: there is no fundamental requirement of specialized technological/isolation techniques for a low-voltage TQFET with an energy-efficient switching mechanism.
Third, the reduction in threshold-voltage becomes important for reducing the supply voltage ($V_{DD}$) in a low-voltage switching device if the subthreshold swing is compressible. That is, the power dissipation Ionescu and Riel (2011) $P\approx I_{OFF}V_{DD}^{3}$ can be reduced while maintaining the device performance ($I_{ON}$) by simultaneously scaling down $V_{th}$ and $V_{DD}$ and thus keeping the overdrive ($\propto(V_{DD}-V_{th})^{2}$) constant. In a MOSFET, the incompressible subthreshold swing leads to an exponential increase in $I_{OFF}$ in the transfer characteristics Ionescu and Riel (2011): for every 60 mV reduction in threshold-voltage at room temperature, there is more than a tenfold increase in $I_{OFF}$. In contrast to MOSFETs, this is not a problem in a TQFET, where the subthreshold swing can be tuned via the topological quantum field effect Nadeem _et al._ (2021), a combined effect of the electric field and the tunable Rashba SOI that allows overcoming “Boltzmann’s tyranny”. Power dissipation can thus be lowered by reducing the threshold-voltage via geometric optimization of quantum confined ribbons of QSH materials, while keeping the subthreshold swing subthermal via strain engineering, buckling parameterization, tuning of inter-orbital hopping, and renormalization of the intrinsic atomic SOI. In summary, quantum confined ZXNRs with optimized geometry may advance topological computing technologies with vastly lower energy consumption per operation than CMOS technologies, sustaining Moore’s trajectory of transistor miniaturization: doubling transistors per chip, and the processing power, every two years.
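The trade-off described above can be made concrete with back-of-the-envelope arithmetic (a sketch with illustrative numbers and helper names, not taken from the paper): with a fixed subthreshold swing, $I_{OFF}\propto 10^{-V_{th}/SS}$, so lowering $V_{th}$ by $\Delta V$ multiplies $I_{OFF}$ by $10^{\Delta V/SS}$; holding $I_{OFF}$ fixed while scaling $V_{th}$ down therefore requires the swing itself to scale down, which at room temperature means going subthermal (below 60 mV/decade).

```python
def ioff_multiplier(delta_vth_mV, swing_mV_per_decade=60.0):
    """Factor by which the off-current grows when the threshold voltage is
    lowered by delta_vth at a fixed subthreshold swing (I_OFF ~ 10**(-Vth/SS))."""
    return 10.0 ** (delta_vth_mV / swing_mV_per_decade)

def required_swing(vth_old_mV, vth_new_mV, swing_old_mV_per_decade=60.0):
    """Subthreshold swing needed to keep I_OFF unchanged after scaling V_th
    down (with V_DD scaled alongside to hold the overdrive constant)."""
    return swing_old_mV_per_decade * vth_new_mV / vth_old_mV

# Example: lowering V_th by 120 mV at a fixed 60 mV/dec swing raises I_OFF a
# hundredfold, while halving V_th with I_OFF held fixed requires a subthermal
# swing of 30 mV/dec.
```

This is why an incompressible 60 mV/decade swing makes threshold scaling self-defeating in a MOSFET, whereas a tunable, subthermal swing lets a TQFET scale $V_{th}$ and $V_{DD}$ together.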
## VI Experimental realization and device fabrication
Though full experimental exploitation of 2D-Xenes for device applications is yet to be achieved, mainly due to several challenges imposed by restricted synthesis methodologies, substrate effects, and stability issues, experimental confirmation of the electronic properties and the buckled structure of 2D-Xenes has been obtained through angle-resolved photoemission spectroscopy (ARPES) and scanning tunneling microscopy (STM), respectively.
Furthermore, the epitaxial growth of both group-IV Molle _et al._ (2017) and
group-V Khan _et al._ (2021) 2D-Xenes has been realized on different
substrates. For instance, the atomic arrangement of group-IV buckled 2D-Xenes has been detailed through epitaxy of silicene on metallic (Ag(111), Ir(111), and ZrB2) Lalmi _et al._ (2010); Vogt _et al._ (2012); Lin _et al._ (2012);
Feng _et al._ (2012); Chiappe _et al._ (2012); Fleurence _et al._ (2012);
Meng _et al._ (2013) and semiconducting (MoS2) Chiappe _et al._ (2014)
substrates, STM studies of germanene on metallic (Au(111), Pt(111), Al(111), and hexagonal AlN) Dávila _et al._ (2014); Li _et al._ (2014); Bampoulis _et al._
(2014); Derivaz _et al._ (2015); D’Acapito _et al._ (2016) and
semiconducting (MoS2) Zhang _et al._ (2016a) substrates, and epitaxial
stanene on Bi2Te3 substrates Zhu _et al._ (2015). The epitaxial synthesis has
also been extended to group-V 2D-Xenes, for instance, a monolayer of
phosphorene on Au(111) substrate Zhang _et al._ (2016b) showing silicene-like
semiconducting character.
While the tight binding model with Kane-Mele type SOI describes hypothetical freestanding ZXNRs with poor chemical stability, a substrate supporting epitaxial synthesis is highly desirable for real-world applications. However,
the supporting substrate brings other challenges into play due to concurrent
bonding interactions. For instance, a metallic substrate short-circuits the
edge states of interest and destroys the topological protection. On the other
hand, a semiconducting MoS2 substrate can stabilize 2D-Xenes with protected edge states; however, the 2D-Xenes become metallic due to compressive strain Chiappe _et al._ (2014); Zhang _et al._ (2016a). Similar problems persist
for stanene on Bi2Te3 substrate Zhu _et al._ (2015).
Recently, it has been shown that epitaxially deposited bismuthene on the
insulating silicon carbide substrate SiC(0001) is a large bandgap QSH
insulator where structural and electronic properties have been confirmed by
STM and ARPES measurements Reis _et al._ (2017). However, compared to freestanding buckled Bi(111) bilayers, the considerably larger lattice constant of 5.35 Å stabilizes Bi/SiC in an energetically favorable planar honeycomb configuration Hsu _et al._ (2015). Despite its interesting aspects, the planar honeycomb configuration is not desirable for gate-induced topological switching. Furthermore, both As and Sb are predicted to be plagued by similar problems Li _et al._ (2018).
This experimental odyssey of 2D-Xenes promises that the fabrication of
topological devices is just a step away. The possible first experimental step
to corroborate our main prediction for device integration, scientific studies,
and technological applications is the synthesis of buckled ZXNRs with
protected edge states on a weakly interacting semiconducting substrate. In
this direction, the growth of functionalized 2D-Xene sheets on a suitable
semiconducting substrate would be a promising development that can bring an
obvious benefit for the realization of low-voltage topological devices Nadeem
_et al._ (2021). For instance, functionalized bismuth monolayers BiX Song _et al._ (2014) and Bi2XY Zhou _et al._ (2018), where X/Y = H, F, Cl, and Br, stabilize in a quasi-planar/low-buckled structure, and the strong on-site SOI opens a topological bandgap at the Dirac points formed by the low-lying $p_{x}$ and $p_{y}$ orbitals. Furthermore, recent first-principles calculations show that a gate-controlled topological quantum phase transition between different topological states can be realized in a functionalized bismuth monolayer Zhou _et al._ (2021).
## VII Conclusion
It is demonstrated that, in a finite-size geometry, ZXNRs display unique physical characteristics associated with their intrinsic band topology and with finite-size effects, such as the longitudinal momentum-dependent inter-edge overlap between spin-filtered chiral edge states and the quantum confinement effect on the bulk band spectrum. While the damped oscillatory
modes around the edge state crossing momentum remain completely orthogonal and
guarantee protected spin-filtered chiral edge states even in ultra-narrow
ribbons, enhanced gate-induced inter-edge coupling between exponentially
decaying edge states around the anti-crossing points reduces the gate electric
field required for topological switching between gapless and gapped edge
states. In addition, quantum confinement enhances the SOI-induced bandgap in the nontrivial phase and leads to topological switching without bulk bandgap closing. On the one hand, this reduces the threshold-voltage by lowering the SOI-induced barrier in the bulk; on the other hand, it enhances the bulk bandgap even in lighter monoelemental 2D-Xenes, so that detrimental contributions from the bulk material to the edge current are avoided and the chemical potential can reside safely within this gap. Furthermore, similar to wide ZXNRs, the Rashba effect enhances the bandgap in the trivial phase. Hence, a large nontrivial bulk bandgap from the quantum confinement effect, which decouples the conducting edge states from the bulk subbands, together with a large trivial bandgap from the Rashba effect, which overcomes thermal excitation, makes quantum confined narrow ZXNRs ideal for engineering energy-efficient low-voltage topological quantum devices.
The proposed mechanism for optimizing topological switching and devising
concepts for topological electronics is applicable to all 2D-Xene sheets
ranging from silicene to bismuthene as well as other 2D topological insulators
with honeycomb lattice structure. In principle, the threshold-voltage depends
upon the momentum space location of anti-crossing points in the edge state
spectrum, which is the same for both pristine and spin-orbit coupled ZXNRs.
Quantitatively, the threshold value depends upon both the strength of
intrinsic SOI and the width of ZXNRs. In addition, the width of ZXNRs,
$W_{z}=\sqrt{3}Na_{0}/2$, depends upon both the number of zigzag lines $N$ and
the lattice constant $a_{0}$. The increasing lattice constants, ranging from 2.46 Å for graphene to 5.35 Å for bismuthene Reis _et al._ (2017), suggest that the critical width ($W_{z}^{c}$) differs between 2D topological insulator sheets even if the number of zigzag lines $N$ is fixed. Furthermore, the wide range of intrinsic SOI strengths, from 0.00057 meV for graphene to 435 meV for bismuthene Reis _et al._ (2017), indicates that the threshold-voltage differs between 2D topological insulator sheets even if the width of the ribbon is fixed.
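As a quick numerical illustration of the width formula $W_{z}=\sqrt{3}Na_{0}/2$, using the lattice constants quoted above (the helper name is ours):

```python
import math

def zigzag_width_angstrom(N, a0_angstrom):
    """Ribbon width W_z = sqrt(3) * N * a0 / 2 for N zigzag lines."""
    return math.sqrt(3) * N * a0_angstrom / 2.0

# For a fixed N = 10, the bismuthene ribbon is wider than the graphene one
# by the ratio of lattice constants (5.35 / 2.46 ~ 2.17).
w_graphene = zigzag_width_angstrom(10, 2.46)    # ~21.3 Angstrom
w_bismuthene = zigzag_width_angstrom(10, 5.35)  # ~46.3 Angstrom
```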
We have presented a generalized mechanism that clearly indicates how the tight binding electronic dispersion, DOS, and the penetration depth of the edge state wave functions, and thus the threshold-voltage, depend on the number of zigzag chains. In wide ZXNRs, with $W_{z}>W_{z}^{c}$, the SOI-induced barrier ($\Delta_{so}$) imposes a limit on the threshold-voltage ($\lambda_{v}^{c}=2\Delta_{so}$), which can be unrealistically large for 2D-Xenes. However, when $W_{z}<W_{z}^{c}$, the threshold-voltage decreases with decreasing width ($\lambda_{v}^{c}<2\Delta_{so}$). A qualitative width dependence study shows that the threshold-voltage can be lowered without any fundamental limit; for instance, in ultra-narrow ribbons, say N = 1, any non-zero electric field can switch the edge state conductance by opening an energy gap in the edge state spectrum. Considering the large variation in the strength of the intrinsic SOI and in the lattice constants of 2D-Xenes, we do not quote a single threshold limit as a figure of merit for a topological transistor. Rather, we highlight that a topological transistor offers more flexibility in tuning its critical parameters than its conventional counterpart, the MOSFET. More specific engineering guidelines for the manufacture of an ideal topological transistor can be derived once a specific 2D-Xene is configured as the channel material.
Similar to the gate voltage, finite-size effects can also be employed to tune the exchange interaction in 2D magnetic topological insulators Ezawa (2013b, c, 2015a); Högl _et al._ (2020); Li _et al._ (2013); Liang, Wu, and Hu (2013); Zhou _et al._ (2018, 2021); Shabbir _et al._ (2018); Nadeem _et al._ (2020) and for the optical probing of topological signatures in 2D materials Xu _et al._ (2020). For example, the critical regime in an antiferromagnetic
topological insulator can be optimized to design a topological spin transistor
via gate-induced topological switching of edge state spin transport. This
study may also be generalized to study other topological phases such as
topological superconductors San-Jose _et al._ (2015); Ezawa (2015b). In a
finite-size geometry, Majorana bound states localized along the edges of 2D
topological superconductors Potter and Lee (2010) can be decoupled from bulk
states for robust information processing.
###### Acknowledgements.
This research is supported by the Australian Research Council (ARC) Centre of
Excellence in Future Low-Energy Electronics Technologies (FLEET Project No.
CE170100039), Australian Research Council (ARC) Professional Future Fellowship
(FT130100778), and funded by the Australian Government.
## References
* Wray (2012) L. A. Wray, “Topological transistor,” Nature Physics 8, 705–706 (2012).
* Seidel (2019) J. Seidel, “Nanoelectronics based on topological structures,” Nature Materials 18, 188–190 (2019).
* Ezawa (2013a) M. Ezawa, “Quantized conductance and field-effect topological quantum transistor in silicene nanoribbons,” Applied Physics Letters 102, 172103 (2013a), https://doi.org/10.1063/1.4803010 .
* Liu _et al._ (2014) J. Liu, T. H. Hsieh, P. Wei, W. Duan, J. Moodera, and L. Fu, “Spin-filtered edge states with an electrically tunable gap in a two-dimensional topological crystalline insulator,” Nature Materials 13, 178–183 (2014).
# Active microphase separation in mixtures of microtubules and tip-
accumulating molecular motors
Bezia Lemma (Physics Department, Harvard University, Cambridge, MA 02138, USA; Physics Department, Brandeis University, Waltham, MA 02453, USA; Physics Department, University of California, Santa Barbara, CA 93106, USA)
Noah P. Mitchell (Kavli Institute for Theoretical Physics, University of California, Santa Barbara, CA 93106, USA; Physics Department, University of California, Santa Barbara, CA 93106, USA)
Radhika Subramanian (Molecular Biology Department, Mass. General Hospital, Boston, MA 02114, USA; Genetics Department, Harvard Medical School, MA 02115, USA)
Daniel J. Needleman (John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA; Molecular & Cellular Biology Department, Harvard University, Cambridge, MA 02138, USA; Center for Computational Biology, Flatiron Institute, New York, NY 10010, USA)
Zvonimir Dogic (Physics Department, University of California, Santa Barbara, CA 93106, USA; Biomolecular Science & Engineering Department, University of California, Santa Barbara, CA 93106, USA; Physics Department, Brandeis University, Waltham, MA 02453, USA)
[email protected]
###### Abstract
Mixtures of microtubules and molecular motors form active materials with
diverse dynamical behaviors that vary based on their constituents’ molecular
properties. We map the non-equilibrium phase diagram of microtubules and tip-
accumulating kinesin-4 molecular motors. We find that kinesin-4 can drive
either global contractions or turbulent-like extensile dynamics, depending on
the concentrations of both microtubules and a bundling agent. We also observe
a range of spatially heterogeneous non-equilibrium phases, including finite-
sized radial asters, 1D wormlike chains, extended 2D bilayers, and system-
spanning 3D active foams. Finally, we describe intricate kinetic pathways that
yield microphase-separated structures and arise from the inherent frustration
between the orientational order of filamentous microtubules and the positional
order of tip-accumulating molecular motors. Our work shows that the forms of the active stresses and phases in cytoskeletal networks are not solely dictated by the properties of individual motors and filaments, but are also contingent on the constituents' concentrations and on the spatial arrangement of motors on the filaments.
## I Introduction
Active matter, the class of materials composed of motile energy-consuming
units, exhibits various non-equilibrium dynamical phases [1, 2, 3, 4, 5, 6].
For instance, active Brownian particles form dense clusters that share
intriguing similarities with conventional gas-liquid phase coexistence,
despite purely repulsive interactions [7, 8, 9, 10]. Active matter also
exhibits distinct dynamical phases with no equilibrium analogs, such as
percolating networks that undergo global contractions and turbulent-like flows
observed in extensile cytoskeletal filaments or microscopic swimmers [11, 12,
13, 14, 15, 16]. Theoretical tools that predict such macroscopic dynamics from
microscopic details are still under development [17, 18, 19, 20, 21].
Consequently, there is a lack of knowledge about the landscape of the possible
dynamic phases that can arise in active matter systems. Our ability to
rationally engineer large-scale dynamics by controlling the behavior of
microscopic constituents is in its infancy [22]. One way to address this
critical knowledge gap is through experiments that measure detailed non-
equilibrium phase diagrams of systems with varied microscopic dynamics.
Motivated by these considerations, we study the self-organization of
microtubule filaments driven by tip-accumulating kinesin-4 molecular motors.
We measure a non-equilibrium phase diagram, finding not only previously
described contracting gels and extensile fluids, but also a range of novel
structures, which include localized 1D micelle-like asters, extended 2D flat
bilayers, monolayer-covered condensates, and 3D bilayer-based foam-like
networks. These structures are fundamentally different from previously studied
forms of active matter, due to the importance of both positional and
orientational order. Instead, they are more reminiscent of the diverse
microphase-separated phases that self-assemble from chemically heterogeneous
amphiphilic molecules [23, 24]. However, unlike equilibrium amphiphilic self-
assembly, which is driven by the chemical immiscibility of different segments
[25], the formation and continuous rearrangement of kinesin-4/microtubule
structures are driven by energy-consuming molecular motors. We collectively
name these phenomena active microphase separation.
The dimeric kinesin-4 molecular motors used in this study consume energy from
ATP hydrolysis to step towards microtubule plus ends, where they accumulate
[26, 27, 28]. Kinesin localization results in the formation of segmented
microtubules consisting of a motor-rich segment at the microtubule plus end
and an adjoining motor-poor segment. Thus, the unique properties of kinesin-4
motors yield a reconfigurable building block in which the motor dynamics
encode the filament’s spatial heterogeneity, unlike the permanently encoded
chemical structure of conventional amphiphiles. Microscopic parameters such as
the microtubule length and the kinesin-4 concentration determine the size of
the motor-rich domain. The plus-end segment can slide along other microtubules
to their plus-ends [28, 29].
## II Results
### II.1 Aster formation
Figure 1: Self-organization of reconfiguring asters. (a) Kinesin-4 induces
rapid assembly of asters. (b) The density profile of microtubules (gray)
radially averaged from the z-projection of an aster. The predicted profile $I_{ideal}$ (dotted black line) assumes end-bound kinesin-4 motors, given the
measured density profile of kinesin-4 (blue). Bars are standard error averaged
over three similar radial asters. Inset: Aster with approximate radial
symmetry. (c) Microtubule polydispersity (gray bars) is described by a log-
normal distribution (dashed black line, M=1.4, S=0.6, mean 4.9 $\mu$m, mode
2.8 $\mu$m). (d) Temporal rearrangement of an aster. (e) A large field of view
shows fully-formed asters. The dashed purple line highlights a wormlike
structure. (f) The mean aster volume as a function of time. Open shapes
indicate the aster formation regime. (g) The mean major/minor moment ratio of
asters over time. Bars represent standard deviation. All images are
z-projections over 6.5 $\mu$m, sample contains 200 nM kinesin-4 (blue), 400 nM
tubulin (black).
We first studied the organization of a low concentration of stabilized
microtubules by kinesin-4 motors in a thin parallelepiped chamber (See
Methods). Immediately after mixing, we observed microtubules joined by their
ends [Fig. 1(a), 0 min]. Within the first $\sim$10 minutes, collections of
microtubules continue to merge with each other, while labeled kinesin-4
clusters became visible at locations where filaments joined [Fig. 1(a), 6-12
min]. Subsequently, the nascent kinesin clusters merged with each other,
forming increasingly better-defined radial structures [Fig. 1(a), 18-24 min].
The intensity of the motor-rich clusters located at the aster cores
increased, indicating a continual accumulation of motors. Within thirty
minutes, the majority of microtubules condensed into radial star-shaped asters
with well-defined kinesin-4 cores at their centers [Fig. 1(a), 30 min].
To understand the aster structure, we measured the density profile of radially
symmetric asters from 3D confocal images [Fig. 1(b)]. The kinesin core had a
radius of $\sim$1 $\mu$m, while the microtubule profile spanned $\sim$10
$\mu$m radially outwards. We hypothesized that microtubules were anchored to
the aster core by their tips. To test this proposition, we modeled the aster’s
structure by convolving the measured microtubule length distribution [Fig.
1(c)] with the intensity profile of the kinesin core (SI). This convolution
yielded a radially averaged microtubule profile that closely matched the
experiments [Fig. 1(b), dashed line], which is consistent with our hypothesis.
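The convolution model described above can be sketched numerically. The following is a minimal illustration, not the paper's analysis code: the radial grid spacing, the Gaussian stand-in for the $\sim$1 $\mu$m kinesin-core profile, and the normalisations are all assumptions; only the log-normal length parameters (M=1.4, S=0.6) come from Fig. 1(c).

```python
# Minimal 1-D sketch of the aster-profile convolution model (assumptions noted above).
import numpy as np
from scipy.stats import lognorm

dr = 0.1                              # radial grid spacing in micrometres (assumed)
r = np.arange(0.0, 20.0, dr)

# Log-normal filament length distribution from Fig. 1(c): M = 1.4, S = 0.6.
lengths = lognorm(s=0.6, scale=np.exp(1.4))

# A tip-anchored filament of length L contributes uniform intensity on [0, L],
# so the ensemble profile before smoothing is the survival function P(L > r).
tip_anchored = lengths.sf(r)

# Stand-in for the measured kinesin-core profile: ~1 um Gaussian (assumption).
core = np.exp(-0.5 * (r / 1.0) ** 2)
core /= core.sum()

# Predicted radially averaged microtubule profile, normalised to its peak.
I_ideal = np.convolve(tip_anchored, core, mode="full")[: r.size]
I_ideal /= I_ideal.max()
```

In this sketch the predicted intensity decays over roughly the $\sim$10 $\mu$m span reported for the measured microtubule profile, since the survival function of a log-normal with mean 4.9 $\mu$m becomes small beyond a few mean lengths.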
After their formation, asters continued to evolve by merging with each other
and undergoing internal rearrangements [Fig. 1(d)]. Over time this yielded
elongated wormlike structures [Fig. 1(e), Vid. 1]. To characterize such
dynamics, we measured the mean three-dimensional moments of the kinesin-rich aster cores. The average ratio between the major and minor moments increased
two-fold, while the mean volume of asters remained approximately constant
[Fig. 1(f), (g)].
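A moment ratio of this kind can be computed as the eigenvalue ratio of the intensity-weighted second-moment tensor of a 3D image. The sketch below uses a synthetic elongated Gaussian blob as a placeholder for a segmented kinesin core; the voxel sizes and widths are illustrative assumptions.

```python
# Sketch: major/minor moment ratio from the intensity-weighted second-moment tensor.
import numpy as np

def moment_ratio(volume):
    """Ratio of largest to smallest principal second moment of `volume`."""
    coords = np.indices(volume.shape).reshape(3, -1).astype(float)
    w = volume.ravel()
    centroid = (coords * w).sum(axis=1) / w.sum()
    d = coords - centroid[:, None]
    cov = (w * d) @ d.T / w.sum()          # 3x3 second-moment tensor
    evals = np.linalg.eigvalsh(cov)        # ascending eigenvalues
    return evals[-1] / evals[0]

# Synthetic elongated blob: sigma = (4, 2, 2) voxels along (z, y, x) (assumed).
zz, yy, xx = np.indices((40, 40, 40))
blob = np.exp(-((zz - 20) ** 2 / 32 + (yy - 20) ** 2 / 8 + (xx - 20) ** 2 / 8))
ratio = moment_ratio(blob)
```

For the synthetic blob the ratio approaches $(\sigma_{major}/\sigma_{minor})^2 = 4$, so a two-fold increase in the major/minor moment ratio corresponds to a modest elongation of the core.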
### II.2 Global contraction and bilayer formation
Figure 2: Globally contracting networks generate bilayer structures. (a)
Kinesin-4 driven global contraction of labeled microtubules. (b) Microtubule
fluorescence as a function of position along the chamber’s short axis reveals
non-uniform density growth, with peaks at the sample edges. (c) The normalized
width $W_{n}(t)$ of a contracting network decays over time. Dashed lines are
fits of Eq. 1. Inset: Contraction timescale $\tau$ decreases with kinesin
concentration. Error bars indicate standard error (n=3). (d) The final
structure of the contracted bilayer consists of a kinesin 2D sheet (blue) with
microtubules (black) anchored to the surface and pointing along its normal.
(e) $x$-$z$ resliced at the shaded line. (f) Fluorescence intensity profile
along the surface normal. The predicted microtubule fluorescence $I_{ideal}$
(dotted black line) agrees with the measured fluorescence. Bars indicate
standard error over twenty sections of 3 $\mu$m width.
When the tubulin concentration was increased above 1 $\mu$M, new dynamics emerged. Instead of forming locally condensed asters, the system
globally contracted into a single structure [Fig. 2(a), Vid. 2]. Material
density was highest at the boundaries of the contracting network [Fig. 2(b)],
similar to dynein-induced contractions studied in cell extracts and purified
systems [30, 16]. We tracked the contracting network’s width $W(t)$ over time
$t$. The normalized width, $W_{n}(t)=W(t)/W(0)$, was described by an
exponential function:
$W_{n}(t)\approx
W_{n}^{\infty}+e^{\frac{-(t-t_{0})}{\tau}}(1-W_{n}^{\infty}),$ (1)
where $t_{0}$ is a time-offset, $W_{n}^{\infty}$ is the final normalized
width, and $\tau$ is the contraction timescale [Fig. 2(c)]. $\tau$ decreased both with increasing kinesin concentration [Fig. 2(c)] and with increasing microtubule number density [Fig. S1].
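A fit of Eq. (1) to a tracked width series can be sketched with a standard nonlinear least-squares routine. The synthetic data below stand in for measured network widths; the time grid, noise level, and "true" parameters are illustrative assumptions.

```python
# Sketch: fitting the normalized contraction width, Eq. (1), with curve_fit.
import numpy as np
from scipy.optimize import curve_fit

def w_n(t, w_inf, t0, tau):
    """Normalized width W_n(t) = W_inf + exp(-(t - t0)/tau) * (1 - W_inf)."""
    return w_inf + np.exp(-(t - t0) / tau) * (1.0 - w_inf)

t = np.linspace(0.0, 60.0, 121)                    # minutes (assumed)
true = dict(w_inf=0.3, t0=0.0, tau=10.0)           # placeholder values
rng = np.random.default_rng(0)
data = w_n(t, **true) + 0.01 * rng.standard_normal(t.size)

popt, _ = curve_fit(w_n, t, data, p0=(0.5, 0.0, 5.0))
w_inf_fit, t0_fit, tau_fit = popt
```

The three parameters are identifiable because, for fixed $\tau$, the pair $(W_n^{\infty}, t_0)$ maps one-to-one onto the offset and amplitude of a single decaying exponential.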
Examination of the final contracted state revealed a well-defined bilayer
structure in which the kinesin motors formed an extended 2D sheet, with
microtubules protruding from both sides of the sheet, pointing along the
surface normal [Fig. 2(d), (e)]. In analogy to asters, we hypothesized that
microtubules are anchored to the 2D kinesin sheet by their tips. We modeled
the bilayer structure by convolving the measured length distribution of
microtubules with the kinesin intensity profile along the surface normal (SI).
The model of the bilayer structure closely matched the experimentally measured
density profile [Fig. 2(f)]. Thus, our analysis suggests that microtubules are
connected to the high-density kinesin layer by their plus-ends, with their
minus-ends pointing outwards. How an initially disordered contracting network
transforms into a late-stage bilayer structure remains to be studied.
We showed that increasing the microtubule concentration induces a transition
from local asters to large-scale bilayers. To investigate the importance of
initial conditions, we tested if increasing the concentration of fully formed
asters leads to a similar transition. We prepared a sample with a low filament
concentration in a tall sample chamber (250 $\mu$m), which led to the
formation of asters throughout the volume. Once formed, large asters slowly
sedimented into a dense $\sim$50 $\mu$m thick layer, which had an average
tubulin density above 1 $\mu$M [Fig. 3(b-d)]. Uniformly dispersed samples
prepared at such concentrations contracted into bilayers. However, the
sedimented asters did not contract into a single structure. Instead, they
formed a dense continuously rearranging network [Fig. 3(e), Vid. 3]. The lack
of global contraction demonstrates that the form of the long-term steady-state
structures depends not only on the constituents’ local concentration, but also
on the sample history. Intriguingly, the increase in kinesin density due to
sedimentation is an order of magnitude smaller than the increase in tubulin
density [Fig. 3(d)]. Hence, in contrast to microtubules, a significant
fraction of the kinesin does not incorporate into the asters.
Figure 3: Initial conditions determine steady-state dynamics. (a) $x$-$z$
plane images show the aster assembly and sedimentation. The arrow indicates
gravity, $x$-$y$ is the imaging plane. (b) Images of asters in the $x$-$y$ plane at two
different heights at 500 min. (c, d) Temporal evolution of the density
$z$-profiles of microtubules $\rho_{MT}$ and kinesin $\rho_{K4}$ illustrate
material sedimentation. (e) The average microtubule density (purple open
circles) below the sedimentation height (black circles) as a function of time.
The effective tubulin concentration is higher than that used in [Fig. 2], yet no global contraction occurs.
### II.3 Surface roughening of contracting networks
Samples prepared with even higher tubulin concentrations (10 $\mu$M) also
underwent global contractions, but exhibited a distinct kinetic pathway and a
different final structure from the above-described bilayers. The sample
evolution proceeded in two stages: an initial global contraction followed by
morphological surface roughening [Vid. 4]. In the first stage, the initially
isotropic network developed nematic order while contracting [Fig. 4(a)]. We
defined $\theta$ as the local orientation of microtubule bundles in the
structure’s interior and $\bar{\theta}$ as the average bundle orientation
[Fig. 4(b), SI]. The scalar order parameter
$S=\langle\cos(2[\theta-\bar{\theta}])\rangle$ indicates the degree of nematic
ordering, with 0 representing isotropic structure and 1 representing perfect
alignment (SI). As the network contracted, its volume $V$ decreased
monotonically, while the order parameter $S$ of the enclosed microtubules
increased [Fig. 4(c)].
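The scalar order parameter $S=\langle\cos(2[\theta-\bar{\theta}])\rangle$ can be evaluated directly from a field of local bundle orientations. In the sketch below the random test angles are placeholders for measured orientations; the mean orientation $\bar{\theta}$ is taken on the doubled angle so that $\theta$ and $\theta+\pi$ count as the same nematic direction.

```python
# Sketch: scalar nematic order parameter from local orientation angles.
import numpy as np

def nematic_order(theta):
    """S = <cos 2(theta - theta_bar)> for angles theta in radians."""
    theta = np.asarray(theta)
    # Mean orientation on the doubled angle (nematic symmetry).
    theta_bar = 0.5 * np.arctan2(np.sin(2 * theta).mean(),
                                 np.cos(2 * theta).mean())
    return np.cos(2 * (theta - theta_bar)).mean()

rng = np.random.default_rng(1)
S_aligned = nematic_order(0.05 * rng.standard_normal(10_000))   # near 1
S_isotropic = nematic_order(rng.uniform(0, np.pi, 10_000))      # near 0
```

With this choice of $\bar{\theta}$, $S$ equals the mean resultant length of the doubled angles, so it is non-negative and approaches 0 for an isotropic field and 1 for perfect alignment, as stated above.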
Figure 4: Nematic alignment and surface roughening of a contracting network.
(a) $z$-projected images demonstrate that decreasing network volume leads to
increasing nematic alignment. (b) $z$-projection of the microtubule nematic
order. Hue indicates the nematic director indicated by the color wheel, while
intensity indicates coherency (SI). (c) The microtubule nematic order
parameter increases during contraction and then decreases during roughening.
(d) The contracting network’s volume (solid purple) decreases continuously.
Its surface area (dashed black) initially decreases but then increases. (e) A
10 $\mu$m z-projection of the material after surface roughening has generated spherical cavities. (f) A cropped 3D projection highlights the invaginated structure of the microtubule network. (g) $x$-$y$ and $z$-$y$ cross-sections show a
hemispherical cavity. Sample composed of 10 $\mu$M tubulin (black), 200 nM
kinesin (blue).
After approximately 120 minutes, the heretofore increasing nematic order
parameter $S$ started decreasing sharply, signaling the onset of the second
stage [Fig. 4(c)]. Simultaneously, the network’s surface area $A$, which had
previously fallen by a factor of two, began to increase [Fig. 4(d)]. This
transition was concomitant with morphological changes, in which the smooth
interface of the contracting network started roughening. Surface roughening
was accompanied by the formation of a dense monolayer consisting of a kinesin
sheet with outwardly pointing microtubules, which enveloped the contracting
network [Fig. 4(e)]. Over time the roughening surface developed invaginations
that rearranged into hemispherical $\sim$50 $\mu$m cavities [Fig. 4(e), (f)].
Microtubules protruding from the surfaces of the hemispherical cavities
reached the cavities’ center, thus creating inverted asters with a sheet of
kinesin half-enveloping radially splayed microtubules [Fig. 4(g)].
We reconstructed the network’s 3D structure using a morphological-snakes
level-set algorithm [Fig. 5(a),(b)] [31, 32, 33]. The surface and cross-sectional
views show an initial rounding of the network’s cross-section, followed by a
subsequent roughening [Fig. 5(c)]. Numerical representation of the contracting
network allowed us to quantify the distribution of the cytoskeletal material
both on the surface and within the interior of the contracting network. During
the second stage, while the density of the interior protein remained nearly
constant [Fig. 5(d)], the density of kinesin-4 and microtubules within 5
$\mu$m of the surface increased threefold [Fig. 5(e)].
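The reconstruction pipeline cited above is based on morphological snakes [31] and the Chan-Vese level-set functional [32, 33]. The numpy-only sketch below is a deliberately simplified stand-in, not the published pipeline: pixels are reassigned to whichever region (inside or outside the level set) better matches their intensity, and a majority-vote filter approximates the curvature smoothing.

```python
import numpy as np

def chan_vese_mask(img, n_iter=20, smooth=1):
    """Simplified two-phase Chan-Vese-style segmentation (numpy only).

    Each iteration reassigns every pixel to the region (inside/outside)
    whose mean intensity it matches better, then applies a 3x3
    majority-vote smoothing that crudely mimics the curvature penalty
    of a morphological level-set scheme.
    """
    u = (img > img.mean()).astype(np.uint8)          # initial level set
    for _ in range(n_iter):
        inside, outside = u == 1, u == 0
        c1 = img[inside].mean() if inside.any() else 0.0
        c0 = img[outside].mean() if outside.any() else 0.0
        u = ((img - c0) ** 2 > (img - c1) ** 2).astype(np.uint8)
        for _ in range(smooth):
            p = np.pad(u, 1, mode='edge')
            s = sum(p[i:i + u.shape[0], j:j + u.shape[1]]
                    for i in range(3) for j in range(3))
            u = (s >= 5).astype(np.uint8)
    return u
```

The resulting binary mask, applied slice by slice, separates the condensed network from the background fluid, after which surface and interior densities can be measured.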
Figure 5: Surface roughening is accompanied by the formation of a surface-
bound monolayer. (a) Time series of a surface of a contracting network. (b)
$x$-$y$ slices of data corresponding to cuts shown in the previous panel
reveal the formation of a monolayer and invaginations at late times. (c)
$x$-$z$ slices show contracting cross-section until the roughening commences.
(d) Tubulin and kinesin density within the interior of the contracting network
is constant during the roughening phase. (e) Tubulin and kinesin density
within 5 $\mu$m of the surface increases during the roughening phase. (f) The
flux of microtubules from the interior to the surface, $\Phi_{V\rightarrow S}$
(black solid), the surface-density term $A\partial_{t}\rho_{s}$ (blue dashed),
and the surface-area term $\rho_{s}\partial_{t}A$ (purple short-dashed), as
functions of time. The red long-dashed line indicates the sum of
all three terms. (g) Normal-normal spatial correlations show faster decay as
the material roughens. These correlations are calculated only on a bisected
surface, to reduce the influence of the overall surface curvature. Inset:
Exponential fits to the normal-normal correlation decay between 10-20 $\mu$m
show correlation length decreased by 200 $\mu$m over 50 minutes. Sample
consisted of 10 $\mu$M tubulin (black), 200 nM kinesin (blue).
To understand whether the protein-dense shell arises simply from geometric
deformation of the surface or by drawing material from the bulk, we quantified
the kinematics of the partitioning between the dense network surface and its
contracting interior. In the roughening stage, the surface area $A$ increased
[Fig. 4(d)]. In the absence of any material flux between the surface and the
interior, the areal density of surface-bound microtubules $\rho_{S}$ would
decrease proportionally to the surface area growth:
$A\partial_{t}\langle\rho_{S}\rangle=-\langle\rho_{S}\rangle\partial_{t}A$
(SI). We find that these two terms are, in fact, far from equal and opposite
[Fig. 5(f)], suggesting that there is substantial flux from the interior to
the surface. Meanwhile, the sum total of all microtubule fluorescence is
constant. The implied mass conservation is described by
$A\partial_{t}\langle\rho_{S}\rangle+\langle\rho_{S}\rangle\partial_{t}A=\Phi_{V\rightarrow S},$ (2)
where $\Phi_{V\rightarrow S}$ is the flux of material from the interior to the
surface. We then independently measured the flux of microtubules leaving the
interior of the contracting network,
$\Phi_{V\rightarrow S}=-V\partial_{t}\langle\rho_{V}\rangle-\langle\rho_{V}\rangle\partial_{t}V,$ (3)
where $\langle\rho_{V}\rangle$ is the average volumetric density of
microtubules and $V$ is the volume of the interior, and find that it
quantitatively accounts for the increasing density of the surface-bound
microtubules $A\partial_{t}\langle\rho_{S}\rangle$ [Fig. 5(f)]. Our analysis
reveals that the density change due to surface area increase
$\langle\rho_{S}\rangle\partial_{t}A$ is small compared to the mass transfer
due to the flux from the interior to the surface $\Phi_{V\rightarrow S}$. The
mechanism that drives the flux of microtubule transport from the interior to
the surface remains unknown.
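The mass bookkeeping of Eqs. (2) and (3) is straightforward to evaluate on measured time series. The sketch below (ours, using np.gradient for the time derivatives) computes the interior-to-surface flux and the surface budget, which should agree when total fluorescence is conserved.

```python
import numpy as np

def interior_to_surface_flux(t, V, rho_v):
    """Eq. (3): Phi_{V->S} = -V d<rho_V>/dt - <rho_V> dV/dt,
    i.e. minus the rate of change of the interior microtubule mass."""
    return -(V * np.gradient(rho_v, t) + rho_v * np.gradient(V, t))

def surface_budget(t, A, rho_s):
    """Left-hand side of Eq. (2): A d<rho_S>/dt + <rho_S> dA/dt,
    the rate of change of surface-bound microtubule mass."""
    return A * np.gradient(rho_s, t) + rho_s * np.gradient(A, t)
```

Comparing the two outputs, and the relative sizes of the two surface terms, is the consistency check of the kind shown in Fig. 5(f).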
To quantify the roughening transition, we measured the spatial correlations of
the surface normals. A normal vector $\hat{n}(r,t)$ describes the network at
each surface point $r$ at time $t$ (SI). The averaged correlation between all
normal vectors, separated by a geodesic of length $\Lambda$, is given by
$C(\Lambda,t)=\frac{\langle\hat{n}(r,t)\cdot\hat{n}(r+\Lambda,t)\rangle}{\langle\hat{n}(r,t)\cdot\hat{n}(r,t)\rangle},$
(4)
where angular brackets indicate a spatial average over all initial points and
all geodesic paths of length $\Lambda$. At the beginning of the roughening
stage, the network has an extended flat shape which reflects the chamber
geometry. When restricted to either the top or bottom of the surface, pairs of
normal vectors $\hat{n}$ point in similar directions even at large distances.
Consequently, $C(\Lambda,t)$ remains close to unity for all values of
$\Lambda$. As the surface roughens with time, the correlation between surface
normals $\hat{n}$ decreases. $C(\Lambda,t)$ develops a plateau at large
distances, where the plateau magnitude decreases with time [Fig. 5(g)]. At
smaller length scales, ranging from 1 to 30 $\mu$m, $C(\Lambda,t)$ exhibits
exponential decay. The decay rate increased six-fold from the
beginning to the end of the roughening process. The long-range normal-normal
correlation decayed from $C$($40$ $\mu$m, $100$ min) $\approx 0.85$ to
$C$($40$ $\mu$m, $220$ min) $\approx 0.2$.
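A simplified version of this correlation can be sketched for a surface given as a height field $h(x,y)$. Two simplifications are ours: normals are computed in the Monge gauge, and the in-plane pixel lag stands in for the geodesic length $\Lambda$ of Eq. (4); the paper instead measures geodesics on the reconstructed closed surface.

```python
import numpy as np

def normal_correlation(h, dx=1.0, max_lag=20):
    """Normal-normal correlation C(Lambda) for a height field h(x, y).

    Unit normals are built from the surface gradients (Monge gauge);
    the correlation is averaged over all point pairs separated by
    `lag` pixels along x, a flat-surface stand-in for the geodesic
    separation of Eq. (4). Returns C at lags 1..max_lag.
    """
    hy, hx = np.gradient(h, dx)
    n = np.stack([-hx, -hy, np.ones_like(h)])
    n /= np.linalg.norm(n, axis=0)                   # unit normals
    return np.array([
        (n[:, :, :-lag] * n[:, :, lag:]).sum(axis=0).mean()
        for lag in range(1, max_lag + 1)
    ])
```

A flat surface gives $C \approx 1$ at all lags, while a rough surface gives a decaying curve whose exponential fit yields the correlation length plotted in the inset of Fig. 5(g).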
### II.4 Active foam formation
At the highest tubulin concentrations studied (40 $\mu$M) we observed a
multistage kinetic pathway of significant complexity [Vid. 5]. In this regime,
the microtubules were initially orientationally ordered and displayed
subtle bend deformations [Fig. S5]. Subsequently, the buckling dynamics
transitioned into more dramatic splay-like deformations, the onset of which
broke up the continuous network by generating sharp density variations between
filament-rich and filament-poor regions [Fig. 6(a), 80 min]. These changes in
orientational order and local density fluctuations yielded finite-sized
condensates that were well-separated from a background fluid mostly devoid of
protein [Fig. 6(a), 140 min]. A high-density monolayer of kinesin and
microtubules enveloped the condensate surface, with microtubules aligned along
the surface normal. The monolayer-covered condensates were similar to those
observed at lower filament concentrations. The main difference is that active
stresses ruptured the network, creating finite-sized structures. In contrast,
lower microtubule concentrations generated only one contracting network, which
did not break apart.
Figure 6: Splay-like deformations, self-tearing, and roughening at the
highest microtubule concentrations. (a) Maximum intensity $z$-projections over
3 $\mu$m show a splay-like instability that generates density variation and
self-tearing that yields condensates. (b) Evolution of a contracting
condensate surface (left) $x$-$y$ and $x$-$z$ image cross-sections (right).
(c) The volume (solid blue curve) and surface area (black dashed curve) of a
contracting condensate as a function of time. (d) The spatial correlation
between surface normal vectors decays over time. Inset: Exponential fits to the
normal-normal correlation decay between 5-20 $\mu$m show correlation length
decreased by 50 $\mu$m over 80 minutes. (e) Two surface-bound monolayers
zippering into a bilayer. Sample contained 200 nM kinesin (blue), 40 $\mu$M
tubulin (black).
After their formation, condensates exhibited surface roughening. Using the
previously described algorithm, we numerically generated surfaces describing
the evolution of the condensate’s morphology [Fig. 6(b)]. The condensate’s
volume decreased continuously, while its surface area $A$ remained constant
until $\sim$160 minutes, after which $A$ increased sharply [Fig. 6(c)]. As
roughening continued, the mean curvature increased, and the normal-normal
correlation $C(\Lambda,t)$ decreased [Fig. 6(d), Fig. S2]. High-resolution images
revealed the macroscopic mechanism driving the roughening transition.
Crumpling monolayers encountered each other, generating a zippering transition
of the kinesin-decorated surfaces, which locally produced a well-defined
bilayer [Fig. 6(e)].
At long times, the surface roughening transition generated an active foam,
which consists of a 3D network of bilayers connected through junctions
[Fig. 7(a), Fig. S5]. As in a conventional foam, the interconnected bilayer
surfaces formed cells, which had elongated or even winding shapes [Fig. 7(b),
Fig. S6]. Unlike conventional foams, cells in an active foam had open sides,
while the constituent bilayers had free-standing edges [Fig. 7(c), Fig.
S6(b)]. The borders of the active foam compartments consist of
microtubule/kinesin-4 bilayers [Fig. 7(b), (c)]. The active foam exhibited
topological rearrangements. Individual cells deformed, while bilayer walls
moved to change the local topology [Fig. 7(d), Vid. 6]. Thus, the surface
roughening transition is the first stage of a unique morphological transition
in which a continuous and smooth space-filling condensate transforms into
perforated foam-like structures. The development of an active foam and its
rearrangements remains an important topic for future investigations.
Figure 7: Surface roughening yields an active foam. (a) Morphological change
from monolayer envelopes to a percolated foam. (b) Ortho-slices show the
complex 3D structure of the active foam. (c) Maximum intensity $z$-projection
over 10 $\mu$m illustrates distinct foam cells which can have free ends or
open faces. (d) A foam cell undergoes topological rearrangements in an active
foam. Samples contained 200 nM kinesin (blue), 40 $\mu$M tubulin
(black).
### II.5 A bundling-induced transition from contracting to extensile gels
In the work described so far, we observed local and global contractions with
increasing microtubule concentrations. In comparison, kinesin-1 generates
extensile stresses when microtubules are combined with a microtubule bundling
agent [13, 34]. To investigate the capability of kinesin-4 motors to generate
extensile stresses, we added a non-adsorbing polymer, PEG (polyethylene
glycol), which bundles microtubules while still allowing for their relative
motor-driven sliding [35]. At low microtubule concentrations (4 $\mu$M),
global contractions occurred even in the presence of 0.5% w/w PEG [Fig. 8(a)].
However, beyond a critical filament concentration (10 $\mu$M tubulin), the
material initially exhibited self-generated bend-like patterns suggestive of
extensile stresses [Vid. 7] [1, 36]. On longer time scales,
these materials did not contract but rather yielded a continuously rearranging
network, similar to those previously studied [Fig. 8(a)] [37, 38]. The
contractile-to-extensile transition was quantified by plotting the network
width $W(t)$ [Fig. 8(b)]. At low filament concentrations, $W(t)$ decreases
monotonically and then plateaus, characteristic of contraction. Increasing the
microtubule concentration further resulted in a network that spanned the
entire chamber while continuously rearranging; therefore $W(t)$ did not change
over time. Using particle image velocimetry, we found that the
mean microtubule network speed increased with increasing kinesin
concentration. In contrast to kinesin-1 studies, increasing kinesin-4
concentration increased the velocity-velocity correlation length scale [SI]
[38].
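A minimal version of the width measurement might look as follows; the per-column extent and the threshold at half the frame maximum are our assumptions, not criteria stated in the paper.

```python
import numpy as np

def network_width(frame, thresh=None):
    """Network width from one fluorescence frame.

    For each image column, the width is the row extent of
    above-threshold pixels; the frame width is the mean over columns.
    `thresh` defaults to half the frame maximum (an assumed choice).
    """
    frame = np.asarray(frame, dtype=float)
    if thresh is None:
        thresh = 0.5 * frame.max()
    widths = []
    for col in frame.T:
        rows = np.nonzero(col > thresh)[0]
        if rows.size:
            widths.append(rows[-1] - rows[0] + 1)
    return float(np.mean(widths)) if widths else 0.0
```

Dividing each frame's value by that of the first frame gives a normalized curve of the form $W(t)/W(0)$ plotted in Fig. 8(b).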
We also observed that extensile gels could transform into globally contracted
bilayers [Fig. 8(c), Vid. 8]. Upon preparation, an active mixture (0.1-0.3%
w/w PEG, 80-90 $\mu$M tubulin) exhibited a bend instability and fluidized.
However, on longer time scales, distinct segments of kinesin-4 appeared. As
these segments became prominent, the motor driven dynamics slowed down. This
dynamical transition was concomitant with the appearance of local bilayer-like
arrangements. In these bilayers, kinesin-4 formed a central line with
microtubules pointing outward on both sides.
Figure 8: Microtubule bundling yields extensile dynamics. (a) The evolution
of the shear-aligned microtubule network depends on filament concentrations.
Samples had 0.5% PEG, 300 nM kinesin. (b) The average microtubule network
width $W(t)$, normalized by the initial width $W(0)$, decreased over time,
with lower microtubule densities contracting faster. The shaded region
indicates the standard deviation from data taken at five non-overlapping
positions over the long axis of the chamber. (c) Extensile instability leads
to the formation of a bilayer structure. This sample chamber was 30 $\mu$m
thick; the sample contained 100 nM kinesin (blue), 80 $\mu$M tubulin (black),
and 0.1% PEG.
### II.6 A non-equilibrium phase diagram
As described above, a one-dimensional sweep of tubulin concentration in the
absence of PEG yielded active microphase separated phases, while adding PEG
produced an active extensile fluid. To further characterize the system, we
mapped the non-equilibrium phase diagram by creating samples between 50 and
300 nM kinesin-4, 0.2 to 180 $\mu$M tubulin, and 0% to 2% PEG [Fig. 9(d),(e)].
At relatively low microtubule concentrations, the active material contracted
into localized asters over a wide range of PEG and kinesin-4 concentrations.
Increasing microtubule concentration generated global contractions, again over
a wide range of PEG and kinesin-4 concentrations. At the highest microtubule
concentrations, with little or no PEG, we observed the formation of active
foams. Adding PEG in this regime transformed active foams into extensile
turbulent-like gels similar to those seen in kinesin-1 driven systems.
Presumably, introducing PEG suppressed the formation of asters and bilayer
foams, while promoting the formation of bundles that generate extensile
dynamics [Fig. 9(d)]. Kinesin-4 concentration determined the speed of the
autonomous dynamics but did not substantially affect the boundaries between
the extensile and contracting phases [Fig. 9(e), SI]. The long-term non-
equilibrium phase behavior described here depends on the initial and boundary
conditions, the sample history, and the kinetic pathways [Fig. S7].
Figure 9: The phase diagram of kinesin-4 and microtubules. (a) Microscopic
building blocks: kinesin-4 (blue) attach to a microtubule (grey), walk to the
microtubules plus end, and accumulate at the plus end, creating a
heterogeneous filament that can interact with other filaments by directed
transport or via steric alignment induced by PEG. (b) Mesoscale organizational
motifs include asters, layers, or bundles. (c) Hierarchically organized
mesoscale building blocks yield macroscopic phases including dynamic asters,
globally contracting gels, active bilayer foams, and fluidized extensile
bundles. (d) Phase diagram at 200 nM kinesin as a function of tubulin and PEG
concentration. (e) Phase diagram at 0.5% PEG (w/w) as a function of protein
concentrations.
## III Discussion
In cytoskeletal active matter, extensile active stresses drive continuous
turbulent-like flows, while isotropic contracting active stresses generate
local or global collapse [22, 13, 16, 39, 30, 40, 41, 42]. We studied the
self-organization of microtubules and kinesin-4, a tip-accumulating molecular
motor. In the regime of high concentrations of filaments and bundling agents,
we observed extensile turbulent flows. Reducing either the concentrations of
microtubules or PEG resulted in contraction. These observations demonstrate
that the form of the active stress is not solely dictated by the molecular
properties of cytoskeletal components, but is also dependent on the
concentration of the constituents. This insight is valuable for relating the
mesoscopic active stresses to the structure, interactions, and dynamics of the
microscopic constituents [43, 44, 20, 45]. In the contracting regime, we
observed a myriad of active microphase-separated structures. The lowest
filament concentrations yielded isolated asters [Fig. 1]. With increasing
filament concentrations, asters transformed into 1D wormlike structures,
extended 2D bilayers, and foam-like 3D material [Fig. 2, 7]. These findings
have implications for our understanding of cytoskeletal active matter.
The formation of aster-like structures has previously been observed in
mixtures of microtubules and various molecular motors [2, 46, 47, 48, 16, 49].
Theoretical models of such asters are sometimes couched in the language of
topological defects in liquid crystals. However, the asters studied here are
well-isolated structures in a filament-free background fluid; thus they are
more reminiscent of equilibrium amphiphile-based micelles. Instead of
hydrophobic interactions, their condensation is driven by tip-accumulating
molecular motors. With increasing concentration, amphiphilic systems form 1D
wormlike micelles, 2D membranes and space-filling 3D lamellar, hexagonal, or
disordered gyroid phases [25]. We observed active analogs of these higher-
order phases. Once the microphase separation is complete, motors continue to
reconfigure the material, as we observed for both wormlike structures and
active foams [Vid. 1,3,6]. Kinesin-4 drives these large-scale events by
generating active stresses that are likely distinct from those postulated for
a suspension of aligned active filaments.
Molecular motors can mediate different filament interactions. For example,
they can drive interfilament sliding within an aligned bundle, or they can
cluster tips of isotropically arranged filaments [28, 16, 50]. Clusters of
kinesin-1 motors are thought to primarily induce filament sliding [38].
However, observation of asters in such systems suggests that they retain a
small degree of end-binding [47]. In comparison, kinesin-4 has an enhanced
end-binding property, which has been characterized on the single filament
level [28, 29]. We developed a model of aster structure that predicts the
microtubule profile from a given kinesin profile, but it does not explain the
size of the kinesin core. The latter could be related to the size of the
kinesin-4 cap. More experimentation is needed to elucidate this point, as
single filament experiments suggest that the cap size depends on protein
concentrations and microtubule length [29]. Thus, the balance of spatial
filament decoration and interfilament sliding by molecular motors might
determine the range of possible phases of an active cytoskeletal material, and
is a promising avenue for further investigation.
Active microphase separation has relevance to biological systems. The self-
organization of microtubules and molecular motors has been studied in Xenopus
egg extracts [51, 52]. Dynein drives aster assembly in Xenopus egg extracts,
which globally contract at higher filament concentrations [53, 30, 54]. Such
asters have been used as models for spindle pole assembly [54]. Under other
conditions, stabilized microtubules in Xenopus egg extracts assemble into
structures reminiscent of the bilayers observed in the present work [55]. In
these experiments, extended bilayers of taxol stabilized microtubules form,
with their minus ends pointing away from the midplane. These bilayer
structures serve as models for the spindle midzone, the array of microtubules
that assembles between segregating chromosomes and drives the spindle
elongation and chromosome separation [56, 57, 58]. Much prior work on spindle
midzones focused on factors that determine the extent of antiparallel overlap
of the microtubule ends [28, 59]. However, the reason why this narrow region
of antiparallel overlap stays well aligned across the entire spindle width
remains poorly understood. The similarity between the bilayers observed in the
present work, those formed in Xenopus egg extracts, and the spindle midzone
itself, suggests that similar principles might govern the self-organization of
all of these structures.
Besides revealing a range of active microphase states, our work also
demonstrates rich kinetic pathways that lead to the formation of these phases.
These pathways are influenced by the interplay between the tendency of rod-
like filaments to align due to excluded volume interactions and the propensity
of tip-adhering kinesin motors to drive microphase separation. We observe
filament alignment at high microtubule concentrations, which occurs either
initially during sample loading, or develops over time in a contracting
network [Fig. 4, S5]. Theory dictates that aligned active filaments are
inherently unstable [60]. Specifically, extensile active stresses drive the
bend instability as we observed for the kinesin-4 system in the presence of
bundling interactions [Fig. 8] [37, 61]. Analogously, contractile systems
exhibit splay instabilities, but these have not been experimentally observed.
The interplay between alignment and tip-accumulation is illustrated at high
microtubule concentrations in the absence of bundling interaction [Fig. 4, 6].
Samples prepared in this regime initially exhibit both aligned filaments and
network contraction. Thus, they are good candidates for observing the splay
instability. Indeed, we observed splay-like deformations, but these were
associated with self-tearing. This might be a consequence of the extended
nature of microtubule filaments. In polymeric liquid crystals, such as
microtubule-based nematics, splay deformations generate local variations in
the filament concentration [62]. Thus, splay instabilities lead to sharp
density gradients, which in turn could lead to self-tearing, which yields
finite-sized condensates. Beyond this point, the system starts exhibiting
structural rearrangements that are likely driven by the tip-accumulation of
molecular motors. In particular, the rapidly formed condensates become
enveloped by a monolayer of aligned microtubules, which are anchored to a 2D
sheet of kinesin motors. The subsequent surface roughening transition is
related to the zippering of monolayers into bilayers [Fig. 6]. It generates
dramatic topological rearrangements that transform simple compact condensates
into a perforated active foam. Active foams are composed of bilayers, which
have both locally aligned filaments and tip accumulated motors. Thus, they
resolve the above-described constraints that govern the dynamics of
kinesin-4/microtubule systems.
In summary, we demonstrated that kinesin-4 motors self-organize microtubules
into a myriad of hierarchical structures. At a single filament level,
kinesin-4 motors accumulate at microtubule tips to define a spatially
heterogeneous elemental unit capable of higher-order self-assembly. This
segmented structure results from a dynamical process, in contrast to
amphiphilic systems, where the spatial heterogeneity of the basic building
blocks is permanently programmed in the amphiphile’s molecular structure. Tip-
decorated microtubules locally condense to generate higher-order radial
asters. Asters can, in turn, merge to form extended bilayer sheets. At higher
filament concentrations, the bilayer sheets form a tissue-like active foam
that undergoes intriguing motor-driven topological rearrangements. Current
hydrodynamic theories do not explain these phenomena.
###### Acknowledgements.
We thank Mark Bowick, Boris Shraiman, Linnea Lemma, and Dillon Cislo for
valuable discussions. In addition, we thank Shuo Jiang and Marc Ridilla for
their assistance in purifying kinesin-4 and Sithara Wijeratne for sharing the
results of single-molecule experiments on kinesin-4. DJN acknowledges the
support of NSF-DMR-2004380, NSF-DMR-1420570, and NSF-DMR-0820484. RS was
supported by a grant from the NIH (1DP2GM126894-01). ZD acknowledges the
support of NSF-DMR-2004617 and NSF-MRSEC-2011486. NPM acknowledges support
from the Helen Hay Whitney Foundation and NSF PHY-1748958. We also acknowledge
the use of Brandeis MRSEC optical microscopy and biosynthesis facilities,
which are funded by NSF-MRSEC-2011486.
## IV Methods
### IV.1 Sample Preparation
We studied kinesin-4 driven dynamics by combining GFP-labeled kinesin with
Alexa-647 labeled stabilized microtubules in a buffered solution with an ATP
regeneration system. The solution consisted of DI water with 80 mM PIPES
(piperazine-N,N’-bis(2-ethanesulfonic acid)), 5 mM magnesium chloride, 1 mM EGTA, 1.4 mM ATP
(Adenosine triphosphate, Sigma A2383), 0.034% pyruvate kinase (PK/LDH, Sigma
P-0294), and 52 mM PEP (Phosphoenolpyruvate, Alfa Aesar B20358) adjusted to a
pH of 6.8 with potassium hydroxide. In addition, to prevent photobleaching, we
added DTT (dithiothreitol, ACROS Organics 16568), Glucose (Sigma G7528),
Catalase (Sigma C40), and Glucose oxidase (Sigma G2133). When noted,
experiments include 35 kDa PEG (polyethylene glycol).
The full-length human kinesin-4 clone Kif4A or fluorescent Kif4A-GFP were
expressed in sf9 cells as described previously [27]. We purified tubulin from
bovine brains according to a previously published protocol [63]. This tubulin
was polymerized and stabilized into microtubules by mixing 60 $\mu$M tubulin with
3 mM of the non-hydrolyzable GTP analog GMPcPP
(Guanosine-5’-[($\alpha$,$\beta$)-methyleno]triphosphate, Jena Biosciences
NU-405), and a solution of 1 mM DTT, 80 mM PIPES, 2 mM magnesium chloride, 1
mM EGTA in DI water adjusted to a pH of 6.8 with potassium hydroxide. 3% of
tubulin monomers were labeled with a fluorescent dye, Alexa-Fluor 647
(Invitrogen, A-20006), by a succinimidyl ester linker according to a
previously published protocol [64]. The solution was incubated in a water bath
at 310 K for one hour and then left to cool to room temperature for 6 hours.
Polymerized microtubules were flash-frozen in liquid nitrogen and subsequently
thawed before creating a sample.
While all active materials consist of GMPcPP-polymerized microtubules, the
concentrations quoted in this paper refer to tubulin. A microtubule consists
of a repeating lattice of rings of $\sim$13 tubulin monomers, each ring
spanning 4 nm along the filament axis [65]. Thus, if the mean microtubule
length is approximately 4.9 $\mu$m, each microtubule contains roughly 16,000
tubulin monomers.
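The estimate follows directly from the quoted lattice geometry; as plain arithmetic (our own sanity check):

```python
# Monomers per filament from the quoted lattice geometry.
length_nm = 4900      # mean microtubule length, 4.9 um
ring_nm = 4           # axial extent of one ring of the lattice
per_ring = 13         # tubulin monomers per ring
rings = length_nm // ring_nm          # number of rings per filament
monomers = rings * per_ring           # total monomers, roughly 16,000
```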
### IV.2 Chamber Preparation
Each experiment occurs in a chamber with dimensions of 1.5 mm x 0.1 mm x 18 mm
unless noted otherwise. The chamber consists of a glass top and bottom, with
parafilm spacers sealed with NOA 81 UV Adhesive (Norland Products, 8101) at
both ends. The glass was coated with a polyacrylamide brush to suppress
protein adsorption onto the glass [66]. To bond parafilm to the glass, we
warm the parafilm to 338 K and press it onto the glass with the rounded end of
a PRC tube. This process leads to chambers that are 80-100 $\mu$m in height.
### IV.3 Microtubule Length Distribution Measurements
To measure microtubule length distributions, we flow dilute microtubules into
an untreated glass chamber. Microtubules adsorbed onto the glass are imaged
with a 100x objective with a 1.2 NA (Numerical Aperture) and an automated
stage. The resulting data set is segmented based on a simple threshold. Each
segmented object is then fit to an ellipse. If the ellipse’s minor axis is
thin compared to its major axis, the object is recorded as a microtubule, with
the major-axis length taken as the microtubule length. This process discards
overlapping or out-of-focus microtubules.
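The fit-and-filter step can be sketched for a single segmented object using image moments (our own stand-in for an ellipse fit; the pixel size and aspect-ratio cutoff below are assumed values, not ones stated in the paper):

```python
import numpy as np

def filament_length(mask, px_um=0.065, aspect_min=4.0):
    """Moment-based ellipse fit of one segmented object.

    mask: 2D binary array containing a single object.
    The second central moments of the pixel coordinates define an
    equivalent ellipse; axis lengths are taken as 4*sqrt(eigenvalue),
    the usual image-moments convention. Objects whose major/minor
    aspect ratio falls below `aspect_min` (overlapping or round,
    out-of-focus objects) are rejected (returns None); otherwise the
    major-axis length in micrometers is returned.
    """
    y, x = np.nonzero(mask)
    coords = np.stack([x - x.mean(), y - y.mean()])
    evals = np.linalg.eigvalsh(np.cov(coords))       # minor, major variance
    minor, major = 4.0 * np.sqrt(np.maximum(evals, 0.0))
    if major < aspect_min * max(minor, 1e-9):
        return None
    return major * px_um
```

Running this over every labeled object in a thresholded field of view yields the length distribution used to estimate the mean microtubule length.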
### IV.4 Microscopy
Fluorescence images were captured using a Nikon Ti2 base attached to an Andor
Zyla using a 4x Nikon Plan Apo Lambda (NA, 0.2) objective or a 10x Nikon Plan
Fluor objective (NA, 0.3).
Confocal microscopy images were captured with a Crest X-Light V2 spinning disk
system attached to a Nikon Ti2 base and a Hamamatsu ORCA-Flash4.0 V3. The
objective used for the aster sedimentation data was a 40x Plan Fluor objective
(NA, 0.75). The objective used for all other data was a 40x Apo long working
distance water immersion objective (NA, 1.15). Zeiss Immersol W, an
index-matched water substitute, prevented imaging deterioration due to water evaporation
during long acquisitions.
## References
* Marchetti _et al._ [2013] M. C. Marchetti, J. F. Joanny, S. Ramaswamy, T. B. Liverpool, J. Prost, M. Rao, and R. A. Simha, Reviews of Modern Physics 85, 1143 (2013).
* Nedelec _et al._ [1997] F. Nedelec, T. Surrey, A. C. Maggs, and S. Leibler, Nature 389, 305 (1997).
* Schaller _et al._ [2010] V. Schaller, C. Weber, C. Semmrich, E. Frey, and A. R. Bausch, Nature 467, 73 (2010).
* Bricard _et al._ [2013] A. Bricard, J.-B. Caussin, N. Desreumaux, O. Dauchot, and D. Bartolo, Nature 503, 95 (2013).
* Narayan _et al._ [2007] V. Narayan, S. Ramaswamy, and N. Menon, Science 317, 105 (2007).
* Soni _et al._ [2019] V. Soni, E. S. Bililign, S. Magkiriadou, S. Sacanna, D. Bartolo, M. J. Shelley, and W. T. Irvine, Nature Physics 15, 1188 (2019).
* Theurkauff _et al._ [2012] I. Theurkauff, C. Cottin-Bizonne, J. Palacci, C. Ybert, and L. Bocquet, Physical review letters 108, 268303 (2012).
* Palacci _et al._ [2013] J. Palacci, S. Sacanna, A. P. Steinberg, D. J. Pine, and P. M. Chaikin, Science 339, 936 (2013).
* Redner _et al._ [2013] G. S. Redner, M. F. Hagan, and A. Baskaran, Physical review letters 110, 055701 (2013).
* Fily and Marchetti [2012] Y. Fily and M. C. Marchetti, Physical review letters 108, 235702 (2012).
* Thomas _et al._ [2018] C. Thomas, T. Surrey, F. Nédélec, J. Rickman, and J. Roostalu, Cell 175, 796 (2018).
* Dombrowski _et al._ [2004] C. Dombrowski, L. Cisneros, S. Chatkaew, R. E. Goldstein, and J. O. Kessler, Physical review letters 93, 098103 (2004).
* Sanchez _et al._ [2012] T. Sanchez, D. T. N. Chen, S. J. DeCamp, M. Heymann, and Z. Dogic, Nature 491, 431 (2012).
* Zhou _et al._ [2014] S. Zhou, A. Sokolov, O. D. Lavrentovich, and I. S. Aranson, Proceedings of the National Academy of Sciences 111, 1265 (2014).
* Bendix _et al._ [2008] P. M. Bendix, G. H. Koenderink, D. Cuvelier, Z. Dogic, B. N. Koeleman, W. M. Brieher, C. M. Field, L. Mahadevan, and D. A. Weitz, Biophysical journal 94, 3126 (2008).
* Foster _et al._ [2015] P. J. Foster, S. Furthauer, M. J. Shelley, and D. J. Needleman, eLife 4, 1 (2015).
* Liverpool and Marchetti [2005] T. B. Liverpool and M. C. Marchetti, EPL (Europhysics Letters) 69, 846 (2005).
* Gao _et al._ [2015] T. Gao, R. Blackwell, M. A. Glaser, M. D. Betterton, and M. J. Shelley, Physical review letters 114, 048101 (2015).
* Vliegenthart _et al._ [2020] G. A. Vliegenthart, A. Ravichandran, M. Ripoll, T. Auth, and G. Gompper, Science advances 6, 9975 (2020).
* Belmonte _et al._ [2017] J. M. Belmonte, M. Leptin, and F. Nédélec, Molecular Systems Biology 13, 941 (2017).
* Lenz [2020] M. Lenz, Elife 9, 51751 (2020).
* Needleman and Dogic [2017] D. Needleman and Z. Dogic, Nature Reviews Materials 2, 10.1038/natrevmats.2017.48 (2017).
* Israelachvili _et al._ [1976] J. N. Israelachvili, D. J. Mitchell, and B. W. Ninham, Journal of the Chemical Society, Faraday Transactions 2: Molecular and Chemical Physics 72, 1525 (1976).
* Bates and Fredrickson [1990] F. S. Bates and G. H. Fredrickson, Annual review of physical chemistry 41, 525 (1990).
* Safran [2018] S. Safran, _Statistical thermodynamics of surfaces, interfaces, and membranes_ (CRC Press, 2018).
* Bieling _et al._ [2010] P. Bieling, I. A. Telley, and T. Surrey, Cell 142, 420 (2010).
* Subramanian _et al._ [2013] R. Subramanian, S. C. Ti, L. Tan, S. A. Darst, and T. M. Kapoor, Cell 154, 377 (2013).
* Wijeratne and Subramanian [2018] S. Wijeratne and R. Subramanian, Elife 7, 10.7554/eLife.32595 (2018).
* Wijeratne _et al._ [2020] S. Wijeratne, S. A. Fiorenza, R. Subramanian, and M. Betterton, bioRxiv (2020).
* Tan _et al._ [2018] R. Tan, P. J. Foster, D. J. Needleman, and R. J. McKenney, Developmental Cell 44, 233 (2018).
* Marquez-Neila _et al._ [2014] P. Marquez-Neila, L. Baumela, and L. Alvarez, IEEE Transactions on Pattern Analysis and Machine Intelligence 36, 2 (2014).
* Chan and Vese [2001] T. F. Chan and L. A. Vese, IEEE Transactions on image processing 10, 266 (2001).
* Osher and Fedkiw [2006] S. Osher and R. Fedkiw, _Level set methods and dynamic implicit surfaces_ , Vol. 153 (Springer Science & Business Media, 2006).
* Chandrakar _et al._ [2018] P. Chandrakar, J. Berezney, B. Lemma, B. Hishamunda, A. Berry, K. T. Wu, R. Subramanian, J. Chung, D. Needleman, J. Gelles, and Z. Dogic, arXiv (2018).
* Ward _et al._ [2015] A. Ward, F. Hilitski, W. Schwenger, D. Welch, A. W. Lau, V. Vitelli, L. Mahadevan, and Z. Dogic, Nat Mater 14, 583 (2015).
* Ramaswamy [2010] S. Ramaswamy, Annu. Rev. Condens. Matter Phys. 1, 323 (2010).
* Chandrakar _et al._ [2020] P. Chandrakar, M. Varghese, S. A. Aghvami, A. Baskaran, Z. Dogic, and G. Duclos, Physical Review Letters 125, 257801 (2020).
* Henkin _et al._ [2014] G. Henkin, S. J. DeCamp, D. T. N. Chen, T. Sanchez, and Z. Dogic, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 372, 10.1098/rsta.2014.0142 (2014).
* Murrell and Gardel [2012] M. P. Murrell and M. L. Gardel, Proceedings of the National Academy of Sciences 109, 20820 (2012).
* e Silva _et al._ [2011] M. S. e Silva, M. Depken, B. Stuhrmann, M. Korsten, F. C. MacKintosh, and G. H. Koenderink, Proceedings of the National Academy of Sciences 108, 9408 (2011).
* Stam _et al._ [2017] S. Stam, S. L. Freedman, S. Banerjee, K. L. Weirich, A. R. Dinner, and M. L. Gardel, Proceedings of the National Academy of Sciences 114, 10037 (2017).
* Kumar _et al._ [2018] N. Kumar, R. Zhang, J. J. De Pablo, and M. L. Gardel, Science advances 4, 7779 (2018).
* Zhang _et al._ [2021] R. Zhang, S. A. Redford, P. V. Ruijgrok, N. Kumar, A. Mozaffari, S. Zemsky, A. R. Dinner, V. Vitelli, Z. Bryant, M. L. Gardel, _et al._ , Nature Materials 20, 875 (2021).
* Blackwell _et al._ [2016] R. Blackwell, O. Sweezy-Schindler, C. Baldwin, L. E. Hough, M. A. Glaser, and M. Betterton, Soft Matter 12, 2676 (2016).
* Ronceray _et al._ [2016] P. Ronceray, C. P. Broedersz, and M. Lenz, Proceedings of the national academy of sciences 113, 2827 (2016).
* Hentrich and Surrey [2010] C. Hentrich and T. Surrey, Journal of Cell Biology 189, 465 (2010).
* Surrey _et al._ [2001] T. Surrey, F. Nédélec, S. Leibler, and E. Karsenti, Science 292, 1167 (2001).
* Kruse _et al._ [2004] K. Kruse, J.-F. Joanny, F. Jülicher, J. Prost, and K. Sekimoto, Physical review letters 92, 78101 (2004).
* Husain and Rao [2017] K. Husain and M. Rao, Physical review letters 118, 078104 (2017).
* Palenzuela _et al._ [2020] H. Palenzuela, B. Lacroix, J. Sallé, K. Minami, T. Shima, A. Jegou, G. Romet-Lemonne, and N. Minc, Current Biology 30, 4534 (2020).
* Hannak and Heald [2006] E. Hannak and R. Heald, Nature protocols 1, 2305 (2006).
* Thawani _et al._ [2019] A. Thawani, H. A. Stone, J. W. Shaevitz, and S. Petry, Elife 8, 43890 (2019).
* Pelletier _et al._ [2020] J. F. Pelletier, C. M. Field, S. Fürthauer, M. Sonnett, and T. J. Mitchison, Elife 9, e60047 (2020).
* Verde _et al._ [1991] F. Verde, J.-M. Berrez, C. Antony, and E. Karsenti, The Journal of cell biology 112, 1177 (1991).
* Mitchison _et al._ [2013] T. J. Mitchison, P. Nguyen, M. Coughlin, and A. C. Groen, Molecular Biology of the Cell 24, 1559 (2013).
* Scholey _et al._ [2016] J. M. Scholey, G. Civelekoglu-Scholey, and I. Brust-Mascher, Biology 5, 51 (2016).
* Anjur-Dietrich _et al._ [2021] M. I. Anjur-Dietrich, C. P. Kelleher, and D. J. Needleman, Cells 10, 465 (2021).
* Yu _et al._ [2019] C. H. Yu, S. Redemann, H. Y. Wu, R. Kiewisz, T. Y. Yoo, W. Conway, R. Farhadifar, T. Müller-Reichert, and D. Needleman, Molecular Biology of the Cell 30, 2503 (2019).
* Hannabuss _et al._ [2019] J. Hannabuss, M. Lera-Ramirez, N. I. Cade, F. J. Fourniol, F. Nédélec, and T. Surrey, Current Biology 29, 2120 (2019).
* Simha and Ramaswamy [2002] R. A. Simha and S. Ramaswamy, Physical review letters 89, 058101 (2002).
* Martínez-Prat _et al._ [2019] B. Martínez-Prat, J. Ignés-Mullol, J. Casademunt, and F. Sagués, Nature physics 15, 362 (2019).
* Meyer [1984] R. B. Meyer, Molecular Crystals and Liquid Crystals 106, 414 (1984).
* Castoldi and Popov [2003] M. Castoldi and A. V. Popov, Protein expression and purification 32, 83 (2003).
* Hyman _et al._ [1991] A. Hyman, D. Drechsel, D. Kellogg, S. Salser, K. Sawin, P. Steffen, L. Wordeman, and T. Mitchison, Methods in enzymology 196, 478 (1991).
* Howard and Clark [2002] J. Howard and R. Clark, Appl. Mech. Rev. 55, B39 (2002).
* Lau _et al._ [2009] A. Lau, A. Prasad, and Z. Dogic, EPL (Europhysics Letters) 87, 48006 (2009).
* Bigun _et al._ [2004] J. Bigun, T. Bigun, and K. Nilsson, IEEE Trans Pattern Anal Mach Intell 26, 1590 (2004).
* Rezakhaniha _et al._ [2012] R. Rezakhaniha, A. Agianniotis, J. T. C. Schrauwen, A. Griffa, D. Sage, C. v. Bouten, F. Van De Vosse, M. Unser, and N. Stergiopulos, Biomechanics and modeling in mechanobiology 11, 461 (2012).
* Berg _et al._ [2019] S. Berg, D. Kutra, T. Kroeger, C. N. Straehle, B. X. Kausler, C. Haubold, M. Schiegg, J. Ales, T. Beier, and M. Rudy, Nature Methods , 1 (2019).
* Cignoni _et al._ [2008] P. Cignoni, M. Callieri, M. Corsini, M. Dellepiane, F. Ganovelli, and G. Ranzuglia, in _Eurographics Italian chapter conference_ , Vol. 2008 (Salerno, Italy, 2008) pp. 129–136.
* Thielicke and Stamhuis [2014] W. Thielicke and E. Stamhuis, Journal of open research software 2 (2014).
Active microphase separation
in mixtures of microtubules and tip-accumulating molecular motors
Supplementary Material
## I Supplementary Information
### I.1 Prediction of Aster and Bilayer Structure
We analytically match the microtubule intensity profile in asters and bilayers
using only the intensity profile of molecular motors and a measured length
distribution of microtubules [Fig. 1(c)]. To do this, we model the microtubule
profile as arising from microtubules of various lengths attached to the
kinesin by their ends. This is equivalent to the convolution of the molecular
motor profile with a probability distribution of microtubule length. We
represent the measured length distribution of stabilized microtubules with a
log-normal distribution $f(\Lambda)$:
$f(\Lambda)=\frac{1}{\Lambda
S\sqrt{2\pi}}\exp\left(-\frac{(\ln(\Lambda)-M)^{2}}{2S^{2}}\right),$ (S1)
where $\Lambda$ is the non-dimensionalized length $L/L_{0}$, and $M$ and $S$
are fit parameters related to the dimensionless mean $\mu/L_{0}$ and the
variance $\sigma^{2}/L_{0}^{2}$ as:
$\mu/L_{0}=e^{M+\frac{S^{2}}{2}},$ (S2)
$\sigma^{2}/L_{0}^{2}=e^{S^{2}+2M}\left(e^{S^{2}}-1\right).$ (S3)
The probability that a microtubule has a dimensionless length greater than
$d/L_{0}$ is the integral of that distribution from the length $d/L_{0}$ out
to infinity:
$I_{1d}(d)=\nu_{1d}\int_{d/L_{0}}^{\infty}f(\Lambda)d\Lambda=\frac{\nu_{1d}}{2}+\frac{\nu_{1d}}{2}erf\left(\frac{M-\ln(d/L_{0})}{\sqrt{2}S}\right),$
(S4)
where $\nu_{1d}$ is a normalization factor that includes the conversion to
fluorescent intensity. This equation represents the normalized microtubule
intensity profile of microtubules perpendicularly anchored on one side of a
plane. The only fit parameter is the normalization factor $\nu_{1d}$, as all
other variables have been extracted from the measured length distribution.
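The conversion between the measured mean and variance and the log-normal parameters of Eqs. (S2)-(S3), and the resulting one-sided profile of Eq. (S4), can be sketched as follows. This is a minimal illustration (not the authors' code); the numerical values used are arbitrary.

```python
import numpy as np
from scipy.special import erf

def lognormal_params(mean, var):
    """Invert Eqs. (S2)-(S3): dimensionless mean and variance -> (M, S)."""
    S2 = np.log(1.0 + var / mean**2)
    M = np.log(mean) - 0.5 * S2
    return M, np.sqrt(S2)

def intensity_1d(d, nu, M, S):
    """Eq. (S4): intensity at distance d (in units of L0) from microtubules
    end-anchored on one side of a plane; nu is the normalization factor."""
    return nu / 2.0 + nu / 2.0 * erf((M - np.log(d)) / (np.sqrt(2.0) * S))
```

The profile equals the full normalization at the anchoring plane and decays monotonically with distance, as expected for a survival function of the length distribution.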
In order to predict the structure of a radially symmetric aster, it is helpful
to extend this analysis to radial coordinates. A radially oriented microtubule
at a distance $r$ from its kinesin anchor takes on the same form, but with a
factor $1/r$:
$I_{r}(r)=\frac{\nu_{r}}{2\pi r}\int_{r/L_{0}}^{\infty}f(\Lambda)d\Lambda=\frac{\nu_{r}}{4\pi r}+\frac{\nu_{r}}{4\pi r}erf\left(\frac{M-\ln(r/L_{0})}{\sqrt{2}S}\right),$
(S5)
where $\nu_{r}$ is a normalization factor adjusted for radial coordinates.
Finally, we convolve this result with the imaging point spread function
$f_{ps}$, measured from 50 nm fluorescent colloids.
$I_{MT}^{aster}=I_{K}^{aster}*I_{r}*f_{ps}$ (S6)
Convolving the distribution $I_{r}(d)$ with the radial profile of kinesin
intensity $I_{K}^{aster}$ and the point spread function $f_{ps}$ creates the
radial aster microtubule intensity profile $I_{MT}^{aster}$ which closely
matches the experimental microtubule intensity profile as shown in Figure
1(e).
The equivalent calculation for the contracted bilayer’s microtubule profile,
shown in Figure 2(f), can be reduced to a one-dimensional problem. We
construct the bilayer microtubule profile as microtubules perpendicularly
anchored on a plane and thus use the 1D model for $I_{1d}$ derived earlier.
Convolving $I_{1d}$ with the z-profile of kinesin from the bilayer
$I_{K}^{bilayer}$ in two directions, and then convolving that profile with
the point spread function $f_{ps}$ creates the bilayer microtubule z-profile
$I_{MT}^{bilayer}$.
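The 1D bilayer construction can be sketched as below. This is an illustrative interpretation, not the authors' code: "in two directions" is realized here by symmetrizing the one-sided profile about the kinesin plane (assuming all profiles are sampled on the same z-grid, symmetric about that plane), and normalization is ignored.

```python
import numpy as np

def bilayer_profile(i_1d, i_kinesin_z, psf_z):
    """Sketch of the bilayer z-profile: convolve the one-sided microtubule
    profile (symmetrized, since microtubules extend to both +z and -z)
    with the kinesin z-profile, then blur with the point spread function."""
    two_sided = i_1d + i_1d[::-1]
    profile = np.convolve(i_kinesin_z, two_sided, mode="same")
    return np.convolve(profile, psf_z, mode="same")
```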
### I.2 Aster Segmentation
The aster data consists of 2-channel z-stacks with a distance of 0.65 $\mu$m
between the imaging planes. The length of a pixel is also 0.65 $\mu$m. To
measure the volume and aspect ratio of asters [Fig. 1(f),(g)], we segment the
kinesin channel through a simple threshold. This binary data set is refined by
a Chan-Vese active contour algorithm operating on the original data set.
### I.3 Sedimentation Height
The sedimentation height $h_{MT}$ is the height from the base of the chamber
at which 1/5 of the total microtubule fluorescence is encompassed. To
calculate this height we define the cumulative microtubule density function
$D(h)$, an integral of material from the floor of the chamber to height $h$,
normalized by the total material in the chamber $\rho_{tot}$
$D(h)=\frac{1}{\rho_{tot}}\int_{0}^{h}\rho(y)dy.$ (S7)
The sedimentation height $h_{MT}$ is then the height at which $D(h_{MT})=0.2$.
The density $\rho_{MT}$ in Figure 3(e) is the mean density of all material
below the height $h_{MT}$.
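The definition of $h_{MT}$ via Eq. (S7) amounts to building the cumulative density and finding where it crosses 0.2. A minimal numerical sketch (assuming a density profile sampled at discrete heights):

```python
import numpy as np

def sedimentation_height(rho, z, fraction=0.2):
    """Eq. (S7): the height h_MT below which `fraction` of the total
    fluorescence is found. `rho` is the density sampled at heights `z`."""
    # Trapezoidal cumulative integral of rho from the chamber floor.
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (rho[1:] + rho[:-1]) * np.diff(z))))
    D = cum / cum[-1]
    # Invert the (monotonic) cumulative density at the target fraction.
    return np.interp(fraction, D, z)
```

For a uniform density profile this returns 20% of the chamber height, as it should.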
### I.4 Orientational Order Parameter and Coherency
We calculate orientation fields from images by identifying the principal
spatial derivatives using a structure tensor [67, 68]. A structure tensor
$\mathbf{T}$ of two-dimensional gradients is constructed from a 3D signal
intensity field $I$ as
$\mathbf{T}=\begin{bmatrix}(\partial_{x}I_{xyz})(\partial_{x}I_{xyz})&(\partial_{x}I_{xyz})(\partial_{y}I_{xyz})\\ (\partial_{y}I_{xyz})(\partial_{x}I_{xyz})&(\partial_{y}I_{xyz})(\partial_{y}I_{xyz})\end{bmatrix}.$
(S8)
The eigenvector $\vec{v}_{min}$ of $\mathbf{T}$, associated with the smallest
eigenvalue $\lambda_{min}$, points along the direction in which the intensity
gradients are smallest. The direction of $\vec{v}_{min}$ gives the
scalar orientation field used to calculate the orientation distribution
function. The coherency $C$ [Fig. 4(b)] is defined as the difference between
the tensor eigenvalues normalized by their sum:
$C=\frac{\lambda_{max}-\lambda_{min}}{\lambda_{max}+\lambda_{min}}.$ (S9)
We calculate a field of local orientations $\theta$ from the local values of
$\vec{v}_{min}$. The contractions analyzed display negligible bend in their
structure, so we define a single average director $\bar{\theta}$ for the
entire material as the mean value of $\theta$:
$\bar{\theta}=\frac{1}{N}\sum_{i=1}^{N}\theta_{i}.$ (S10)
From this we calculate the orientational order parameter $S$, defined as:
$S=\langle\cos(2[\theta-\bar{\theta}])\rangle.$ (S11)
At late times, microtubule bundles appear anchored normal to the surface. We
exclude these bundles from the calculation of $\bar{\theta}$ and the
orientational order parameter $S$ by using a mask. The mask is generated from
a probability field $P_{in}$ using iLastik for pixel classification [69].
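The structure-tensor pipeline of Eqs. (S8)-(S11) can be sketched as follows. For brevity this version pools the tensor over the whole image (the analysis in the text is local), and it omits the masking step; it is an illustration, not the authors' code.

```python
import numpy as np

def orientation_field(img):
    """Whole-image structure tensor (Eq. S8), its coherency (Eq. S9), and
    the orientation of smallest intensity variation."""
    gy, gx = np.gradient(img.astype(float))
    # Tensor components built from products of first derivatives.
    Jxx, Jxy, Jyy = (gx * gx).mean(), (gx * gy).mean(), (gy * gy).mean()
    # Eigenvalues of the 2x2 symmetric tensor.
    tr, det = Jxx + Jyy, Jxx * Jyy - Jxy**2
    disc = np.sqrt(max(tr**2 / 4 - det, 0.0))
    lam_max, lam_min = tr / 2 + disc, tr / 2 - disc
    coherency = (lam_max - lam_min) / (lam_max + lam_min)
    # Dominant gradient direction, rotated by pi/2 to get the direction
    # of smallest intensity variation (the structure orientation).
    theta = 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy) + np.pi / 2
    return theta, coherency

def order_parameter(theta):
    """Eqs. (S10)-(S11): S = <cos(2[theta - mean(theta)])>."""
    return np.cos(2.0 * (theta - theta.mean())).mean()
```

An image of horizontal stripes yields coherency near one and an orientation equivalent to zero (mod $\pi$), and a perfectly uniform orientation field gives $S=1$.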
### I.5 Surface Construction
To construct numerical surfaces, we start by acquiring confocal data such that
each voxel is isotropic. These voxels are classified as “inside” or “outside”
the structure of interest by using iLastik to generate a probability field
$P_{in}$. Then a binary field $F$ is generated from $P_{in}$ using a
morphological snake method. Next a polygonal surface $S$ is constructed from
$F$ using a marching cubes algorithm. Finally, the surface $S$ is remeshed at
a specified triangle size using Meshlab [70]. Code for this process is
available upon request.
### I.6 Normal-normal correlation $C(r)$
To determine the normal-normal correlation of a structure we first generate a
surface for that structure as described above. We then bisect the surface
along the smallest moment of the material. This bisection is to exclude
anticorrelations in $C(r)$ due to the curvature of the surface. We calculate a
normal vector $\hat{n}(r,t)$ at each point $r$ on the two surface halves at
time $t$. The normal-normal correlation is calculated as
$C(\Lambda,t)=\frac{\langle\hat{n}(r,t)\cdot\hat{n}(r+\Lambda,t)\rangle}{\langle\hat{n}(r,t)\cdot\hat{n}(r,t)\rangle}=\frac{1}{N_{i}N_{\Lambda}}\sum_{i}^{N_{i}}\sum_{\Lambda}^{N_{\Lambda}}\frac{\hat{n}(r_{i},t)\cdot\hat{n}(r_{i}+\Lambda,t)}{\hat{n}(r_{i},t)\cdot\hat{n}(r_{i},t)},$
(S12)
where angular brackets indicate a spatial average over all initial points $i$
and all geodesic paths $\Lambda$. We calculate geodesics on each half of the
surface via a fast-marching mesh algorithm. Figure S3 shows the geodesic
distance from a point along a contracted surface and the normal vectors of
that contracted surface. Binning by lengths of the path $\Lambda$ at a
particular time $t$, we calculate $C(r)$. At small length scales, the normal-
normal correlation is reasonably well fit by an exponential. The correlation
length [Fig. 5(g), Fig. 6(d)] is defined as the inverse of the exponent to
this fit. Code to generate normals and calculate geodesic distances is
available upon request.
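The binning step of Eq. (S12) can be sketched as follows, assuming unit normals and a precomputed matrix of geodesic path lengths (the fast-marching step is outside the scope of this sketch):

```python
import numpy as np

def normal_correlation(normals, dists, bins):
    """Eq. (S12): average normal-normal dot product binned by geodesic
    distance. `normals` is (N, 3) with unit rows; `dists` is an (N, N)
    matrix of geodesic path lengths between surface points."""
    dots = normals @ normals.T  # n_i . n_j; the denominator is 1 for unit normals
    which = np.digitize(dists, bins)
    return np.array([dots[which == b].mean() if np.any(which == b) else np.nan
                     for b in range(1, len(bins))])
```

As a sanity check, a surface with identical normals everywhere gives a correlation of one in every populated distance bin; the correlation length then comes from an exponential fit to the small-distance decay.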
### I.7 Contraction Kinematics
If the mass of proteins is conserved, there are constraints relating shape
change with protein flux. We consider an enclosed network with a volume $V$
and a boundary of area $A$. The total mass $M$ is the integral of the areal
surface density $\rho_{A}$ over the boundary plus the integral of the
volumetric density $\rho_{V}$ over the volume:
$M=\int_{A}\rho_{A}dA+\int_{V}\rho_{V}dV.$ (S13)
Assuming mass conservation, the time derivative of this quantity is zero. That
is,
$0=\partial_{t}M=\int_{A}(\partial_{t}\rho_{A})dA+\langle\rho_{A}\rangle\partial_{t}A+\int_{V}(\partial_{t}\rho_{V})dV+\langle\rho_{V}\rangle\partial_{t}V,$
(S14)
where angular brackets indicate a spatial average. Given that protein is found
only on the surface and in the bulk, an increase in the first two terms would
signal a flux of material from the bulk to the surface, whereas an increase in
the second two terms would signal a flux of material into the bulk. That is,
the net flux of protein from the bulk $V$ to the surface $S$ is
$\Phi_{V\rightarrow
S}=A\partial_{t}\langle\rho_{A}\rangle+\langle\rho_{A}\rangle\partial_{t}A$
(S15)
while the net flux of protein from the surface to the bulk is
$\Phi_{S\rightarrow
V}=V\partial_{t}\langle\rho_{V}\rangle+\langle\rho_{V}\rangle\partial_{t}V.$
(S16)
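Given time series of area, surface density, volume, and bulk density, Eqs. (S15)-(S16) can be estimated by finite differences; by Eq. (S14), the two fluxes must sum to zero when total mass is conserved. A minimal sketch (not the authors' code):

```python
import numpy as np

def fluxes(t, A, rho_A, V, rho_V):
    """Finite-difference estimates of Eqs. (S15)-(S16) from time series of
    boundary area A, surface density rho_A, volume V, and bulk density rho_V."""
    phi_VS = A * np.gradient(rho_A, t) + rho_A * np.gradient(A, t)
    phi_SV = V * np.gradient(rho_V, t) + rho_V * np.gradient(V, t)
    return phi_VS, phi_SV
```

For a mass-conserving toy system in which surface mass grows linearly at the expense of the bulk, the two fluxes are equal and opposite.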
### I.8 Mean Network Speed and Velocity Correlation Length of K4 Driven Gels
The velocity field, $v(r,t)$, of the extensile fluid phase was calculated
using the velocimetry package PIVLab [Fig. S4(a),(b)] [71]. From this data we
calculated the mean network speed $\langle\left|V\right|\rangle$ defined
as
$\langle\left|V\right|\rangle=\frac{1}{T_{f}-T_{i}}\sum_{t=T_{i}}^{T_{f}}\langle
v(r,t)\rangle$ (S17)
where $T_{f}$ is the final time, and $T_{i}$ indicates time shortly after the
initial gel buckling instability. The average inside the sum is over space as
defined by the variable $r$. Titrating over kinesin concentration, we found
that the mean microtubule network speed $\langle\left|V\right|\rangle$
increased with kinesin concentration [Fig. S4(c)].
We used the velocity field $v(r,t)$ to generate a spatial velocity-velocity
correlation $A_{vel}(r)$ defined as
$A_{vel}(r)=\frac{1}{T}\sum_{t}^{T}\langle
A(r,t)\rangle=\left\langle\frac{\langle v(r,t)\cdot
v(r^{\prime},t)\rangle}{\langle v(r^{\prime},t)\cdot
v(r^{\prime},t)\rangle}\right\rangle=\frac{2}{TN(N-1)}\sum_{t}^{T}\sum_{i}^{N}\sum_{j<i}^{N}\frac{v(r_{i},t)\cdot
v(r_{j},t)}{v(r_{j},t)\cdot v(r_{j},t)}$ (S18)
where $T$ is the number of frames evaluated. Here the average inside the sum
is over space as defined by the variable $r^{\prime}$. This correlation was
evaluated in Fourier space to reduce computation time. We measured a
correlation length scale $\lambda$, defined as the length scale at which
$A_{vel}(r)$ has decayed to half of its initial amplitude. In contrast to
studies of truncated kinesin-1, increasing kinesin-4 concentration increased
the velocity-velocity correlation length scale $\lambda$ [Fig. S4(d)] [38].
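The text notes that the correlation of Eq. (S18) was evaluated in Fourier space to reduce computation time. A minimal sketch of that approach via the Wiener-Khinchin theorem is below; it uses a scalar field for brevity, whereas the actual analysis correlates the vector velocity field from PIV, and mean subtraction is an assumption of this sketch.

```python
import numpy as np

def radial_autocorrelation(field):
    """Spatial autocorrelation of a 2D field computed in Fourier space
    (Wiener-Khinchin), azimuthally averaged, normalized to 1 at zero lag."""
    f = field - field.mean()
    power = np.abs(np.fft.fft2(f)) ** 2      # power spectrum
    corr = np.fft.ifft2(power).real          # circular autocorrelation
    corr /= corr[0, 0]
    # Azimuthal average over integer (wrapped) radii.
    ny, nx = corr.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(np.minimum(y, ny - y), np.minimum(x, nx - x)).astype(int)
    return np.bincount(r.ravel(), weights=corr.ravel()) / np.bincount(r.ravel())
```

The correlation length $\lambda$ is then read off as the radius at which this curve has decayed to half its zero-lag value.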
### I.9 Modifications of contraction phenomena due to boundary conditions
The dynamics and final structure of a global contraction are sensitive to
microtubule concentration, kinesin concentration, initial microtubule
alignment, and boundary conditions. When the material is pinned at the ends of
the chamber, the resulting global contraction displays significant
phenomenological differences from its unpinned form [Fig. S7(a)]. The pinned
material first contracts to a thin line; the line then buckles and
subsequently straightens again. Finally, at long time scales, material
accumulates at intervals along the line of the contraction, forming large
aster-like clumps. Similarly, non-specific adhesion to the chamber sides
changes the form of the contraction [Fig. S7(b),(c)].
## II Supplementary Videos
* •
Video 1: At low microtubule concentrations, dynamic asters spontaneously
assemble. This video shows orthogonal planes projected over 6.5 $\mu$m. Sample
is created with 200 nM kinesin-4 (blue), 400 nM tubulin (black).
* •
Video 2: At intermediate microtubule density, networks of microtubules
globally contract. This video shows four fields of epifluorescent imaging of
fluorescent microtubules stitched together. The sample is created with 50 nM
kinesin-4 (blue), 1000 nM tubulin (black).
* •
Video 3: Sedimented asters do not merge and globally contract. This video
shows a 3D projection from confocal stacks, with two xy slices at indicated
positions. These slices are z-projected over 6.5 $\mu$m. Sample is created in
a 300 $\mu$m chamber with 200 nM kinesin-4 (blue), 400 nM tubulin (black).
* •
Video 4: At high microtubule density, global contractions align and then
roughen. This video shows a z-projection from confocal data, along with xz and
yz orthogonal slices projected over 6.5 $\mu$m. Starting at 110 min, a 3D
surface (blue) generated from the dense surface of the condensate is
displayed. This surface shows the location of the orthogonal slices. Sample
consisted of 10 $\mu$M tubulin (black) and 200 nM kinesin (blue).
* •
Video 5: At the highest microtubule density, kinesin-4 drives microtubule
condensation and the subsequent formation of an active foam. This video shows
a 3D projection from confocal stacks of a 333x333x100 $\mu$m field of view.
Intermittent pauses in the video show the interior structure of the material
during its development. Sample contained 200 nM kinesin (blue), 40 $\mu$M
tubulin (black).
* •
Video 6: Whole-chamber epifluorescent imaging of highest-density microtubule
systems buckling, condensing, and forming an active foam. Sample constituted
from 200 nM kinesin (blue), 40 $\mu$M tubulin (black).
* •
Video 7: A series of videos shows a titration of microtubule concentrations in
the presence of PEG, resulting in a transition from extensile to contracting
networks. All videos are epifluorescent imaging of fluorescent microtubules.
* •
Video 8: Two videos of extensile networks transforming into bilayer
structures. The first video is epifluorescent imaging of fluorescent
microtubules (black) and kinesin (blue). The second video is a max-z
projection of confocal imaging of a sample in a thin (30 $\mu$m) chamber.
## III Supplementary Figures
Figure S1: The contraction time scale decreased with increasing microtubule
number density. Plotted is the normalized width $W_{n}$ at five tubulin
concentrations (200 nM kinesin). Dashed lines represent the exponential fit
$f_{c}(t)$. Inset) Characteristic time $\tau$ for each tubulin concentration.

Figure S2: The mean curvature of the condensate surface shown in Fig. 6 at
two points in time: the first after early monolayer formation, the second at
the onset of bilayer formation.

Figure S3: During roughening, we quantify the normal-normal correlation as a
function of geodesic distance along the material surface. (a) Mean curvature
of a globally contracting surface at late time. (b) Geodesic distance, on the
material surface, from an initial point indicated by a white circle. (c) Lilac
arrows indicate normal vectors on the material surface; a random sample of
10% of the normal vectors is displayed. The ends of the material along the
long axis are cropped off for the calculation of normals.

Figure S4: Increasing kinesin concentration amplifies the dynamics of an
extensile fluid. (a) An extensile network driven by kinesin. Sample is created
with 200 nM kinesin, 13 $\mu$M tubulin (imaged), 0.5% PEG. (b) Color map
indicating the magnitude of the material velocity in the previous panel, with
overlaid arrows representing the velocity vector field $\vec{v}(r)$. (c) Mean
speed $\langle|V|\rangle$ as a function of kinesin concentration. Error bars
are standard deviation (n=3). Inset) Spatially averaged speed
$U(t)=\langle\vec{v}(r,t)\rangle$ plotted over time for the experiment shown
in panel (a). (d) Time-averaged velocity autocorrelation $A(r)$ as a function
of kinesin concentration. Inset) Length scale $\lambda$ at which
$2A(\lambda)=A(0)$. Error bars are standard deviation (n=3).

Figure S5: Low-magnification imaging shows the slight buckling and splay of
microtubules into monolayer envelopes, followed by the deformation of
monolayers into an active bilayer foam.

Figure S6: At high MT concentrations, the mixture coarsens into an active
foam. (a) Maximum intensity projection over 10 $\mu$m in z of an entire
chamber of foam. (b) Zoomed-in view. (c) Z-stack of 6.5 $\mu$m z-projection
slices with an additional 6.5 $\mu$m between each slice, showing the 3D
structure of a bilayer foam.

Figure S7: The behavior of a globally contracting system is influenced by the
conditions at the borders of the chamber. (a) A contracting material pinned at
the ends of the chamber many millimeters away. This material first contracts
but then buckles at 15 min, followed by straightening again at 20 min. (b) A
global contraction with some sticking at the parafilm chamber edges. This
sample forms an active bilayer foam as its end state. (c) A global contraction
loses the symmetry imparted on it by the chamber.
# Continental-scale building detection
from high resolution satellite imagery
Wojciech Sirko, Sergii Kashubin, Marvin Ritter, Abigail Annkah, Yasser Salah
Eddine Bouchareb,
Yann Dauphin, Daniel Keysers, Maxim Neumann, Moustapha Cisse, John Quinn
Google Research Address for correspondence: [email protected]
###### Abstract
Identifying the locations and footprints of buildings is vital for many
practical and scientific purposes. Such information can be particularly useful
in developing regions where alternative data sources may be scarce. In this
work, we describe a model training pipeline for detecting buildings across the
entire continent of Africa, using 50 cm satellite imagery. Starting with the
U-Net model, widely used in satellite image analysis, we study variations in
architecture, loss functions, regularization, pre-training, self-training and
post-processing that increase instance segmentation performance. Experiments
were carried out using a dataset of 100k satellite images across Africa
containing 1.75M manually labelled building instances, and further datasets
for pre-training and self-training. We report novel methods for improving
performance of building detection with this type of model, including the use
of mixup (mAP +0.12) and self-training with soft KL loss (mAP +0.06). The
resulting pipeline obtains good results even on a wide variety of challenging
rural and urban contexts, and was used to create the Open Buildings dataset of
516M Africa-wide detected footprints.
## 1 Introduction
Building footprints are useful for a range of important applications, from
mapping, population estimation and urban planning to humanitarian response and
environmental science. In developing regions this information can be
particularly valuable, for instance in areas with infrequent censuses, or with
a high prevalence of informal settlements, or where there is rapid change,
such as in emerging mega-cities. Although the detection of buildings in
developing regions can be technically challenging, it has the potential to
address large information gaps with respect to current knowledge.
In this work, we describe the development of a detection pipeline for
identifying building footprints across the continent of Africa from satellite
imagery of 50 cm resolution. The land surface of Africa is about 20% of the
Earth’s total and has a wide diversity of terrain and building types, meaning
that this is a broad and challenging problem. Challenges include the range of
geological or vegetation features which can be confused with built structures,
settlements with many contiguous buildings not having clear delineations, and
areas characterised by small buildings, which can appear only a few pixels
wide at this resolution. In rural or desert areas, buildings constructed with
natural materials can visually blend into the surrounding area. Figures 1 and
3 show some examples.
Progress in deep learning methods with remote sensing imagery has created new
possibilities for working at this scale, and recent work on building detection
from high resolution satellite imagery has shown remarkable improvements in
precision and recall, which we review briefly in Section 2. Common to much of
this work is the U-Net architecture [1], an encoder-decoder model for semantic
segmentation. Rather than learning to identify building instances with an end-
to-end model, the idea in this type of ‘bottom-up’ segmentation is to classify
each pixel of an aerial image as building or non-building, and then to find
connected components at some confidence score threshold. Illustrations of our
model operating in this way are shown in Figure 1. Existing studies have
tended to be limited to particular cities or countries, however, leaving an
open question as to how well such methods generalise to wider areas,
particularly in developing regions.
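The bottom-up procedure described above, per-pixel classification followed by connected components at a confidence threshold, can be sketched as follows. The threshold and minimum-area values are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy import ndimage

def extract_instances(confidence, threshold=0.5, min_area=4):
    """Bottom-up instance extraction: threshold the per-pixel building
    confidence map, then take connected components as building instances,
    dropping components smaller than `min_area` pixels."""
    mask = confidence >= threshold
    labels, n = ndimage.label(mask)  # 4-connectivity by default
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(areas >= min_area) + 1
    # Relabel so instance ids are consecutive and small blobs are removed.
    out = np.zeros_like(labels)
    for new_id, old_id in enumerate(keep, start=1):
        out[labels == old_id] = new_id
    return out, len(keep)
```

Varying the confidence threshold trades precision against recall, which is why the choice of threshold appears later as a post-processing decision.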
We begin by describing training and evaluation datasets compiled for this work
in Section 3, including weakly labelled and unlabelled image data for pre-
training and self-training respectively. We then describe a number of methods
tested to improve building detection performance, in the following categories:
* •
Choices of architecture, concerning different encoders and decoders (Section
4).
* •
Loss functions which are more appropriate for building segmentation than
generic segmentation choices (Section 5).
* •
Regularization, including mixup and other augmentations (Section 6).
* •
Pre-training (Section 7).
* •
Self-training methods for improving building detection performance using
additional unlabelled data (Section 8).
* •
Pre-processing methods for preparing the input image and labels, including
morphological adjustments (Section 9).
* •
Post-processing methods for converting semantic segmentation predictions into
predicted instances (Section 10).
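As a concrete illustration of the regularization item above: mixup for segmentation blends two images and their label maps with one shared Beta-distributed weight. This is a generic sketch with an assumed alpha, not necessarily the exact variant used in the paper.

```python
import numpy as np

def mixup_pair(img_a, mask_a, img_b, mask_b, alpha=0.2, rng=None):
    """mixup for segmentation: blend two images and their (one-hot or soft)
    label maps with the same Beta(alpha, alpha) weight. alpha=0.2 is a
    common choice, not necessarily the paper's value."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    img = lam * img_a + (1.0 - lam) * img_b
    mask = lam * mask_a + (1.0 - lam) * mask_b
    return img, mask, lam
```

Because the image and label map share the same mixing weight, the per-pixel loss remains consistent with the blended input.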
We report experimental results in Section 11, including ablation studies to
determine the effectiveness of different methods, and evaluation of the
accuracy and consistency of the resulting building detection pipeline in
different contexts.
In summary, the main contributions of this work are: (1) we provide the first
experimental results, to our knowledge, on the training and evaluation of
building detection models on high resolution aerial imagery at a continental
scale, (2) we propose a number of specific methods for improving building
detection performance using the U-Net model, including mixup, self-training,
distance weighting with Gaussian convolutions, and residual decoder blocks,
and (3) the resulting pipeline was used to generate an open dataset of 516M
building footprints across Africa, available at
https://sites.research.google/open-buildings.
Figure 1: Examples of bottom-up building detection: (a,d) test images; (b,e)
semantic segmentation confidences; (c,f) instances of buildings found.
Satellite imagery in this paper: Maxar Technologies, CNES/Airbus.
## 2 Related work
Instance segmentation is a well-studied task, though most of the literature
on instance segmentation is concerned with detecting objects in photographs.
In such settings, the best performing methods tend to be end-to-end instance
segmentation models, which break down the problem into feature extraction,
bounding box regression and mask prediction stages; recent examples are YOLOv4
[2] or Hybrid Task Cascades [3]. Satellite imagery, however, has different
characteristics, which has motivated alternative approaches to instance
segmentation. In particular, objects such as buildings can be smaller and more
densely clustered, which is a challenge for methods that use bounding box
regression with a non-maximum suppression (NMS) step, since in cases where
instances are densely arranged, NMS can suppress true detections and reduce
recall.
In satellite imagery, therefore, a more common approach is to first carry out
semantic segmentation to classify each pixel in an image as building or non-
building. Post-processing is then done to extract instances, for example by
thresholding and finding connected components. An example of this type of
encoder-decoder approach that has been successful for building detection is
TernausNetV2 [4], which uses U-Net [1] with a ResNet-based encoder, and three
output classes—building, non-building, and touching edges—in order to
emphasise the boundary regions between instances. Other successful building
detection methods have used different ways of increasing the model’s focus on
edges of nearby instances, such as distance-based weighting of pixels in the
loss [5].
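A minimal sketch of such a distance-based weight map, in the spirit of the original U-Net formulation, is below. The functional form and the constants `w0` and `sigma` are illustrative assumptions, not the values used in this paper; it requires at least two labelled instances.

```python
import numpy as np
from scipy import ndimage

def boundary_weights(instances, w0=10.0, sigma=5.0):
    """Distance-based per-pixel loss weights: pixels lying between two
    nearby instances are up-weighted, emphasising boundary regions.
    `instances` is an integer label map with 0 as background."""
    ids = np.unique(instances)
    ids = ids[ids != 0]
    # Distance of every pixel to each instance (one transform per instance).
    dists = np.stack([ndimage.distance_transform_edt(instances != i) for i in ids])
    dists.sort(axis=0)
    d1, d2 = dists[0], dists[1]  # nearest and second-nearest instance
    return 1.0 + w0 * np.exp(-((d1 + d2) ** 2) / (2.0 * sigma**2))
```

The weight is largest in the narrow gap between two adjacent instances and decays to one far from all buildings, which encourages the model to keep contiguous buildings separated.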
The CVPR DeepGlobe Challenge [6] posed three satellite imagery tasks: building
detection, road detection and land cover mapping. Of the 22 top entries for
building detection, 13 were based on U-Net [1] and only one used an end-to-end
instance segmentation model (Mask-RCNN [7]). The SpaceNet challenge [8] has
convened a number of building detection challenges, most recently the Multi-
Temporal Urban Development Challenge [9], for which four of the top five
entries were based on U-Net. The overall best performing method was HRNet [9],
a semantic segmentation model with a different architecture to U-Net, in that
it dispenses with a decoder stage and uses adaptive spatial pooling.
While progress has been made on methods for building detection in satellite
imagery, the available evidence in the literature and from competitions is
limited in geographical scope. The SpaceNet buildings dataset covers six
cities: Atlanta, Khartoum, Las Vegas, Paris, Rio de Janeiro, and Shanghai. The
SpaceNet Multi-Temporal Urban Development dataset contains labelled images
from much more diverse geography (41,000 km2 of imagery in 101 locations),
although given the nature of the challenge, the locations are mainly semi-
urban. Image resolution in this dataset is 4m per pixel, which also means that
detections are limited to larger buildings. In this work, we provide the first
empirical results on the feasibility of detecting the majority of buildings
across an entire continent from 50 cm imagery, assessing model generalisation
across many types of terrain and cultures/styles in widely differing urban and
rural settings.
## 3 Datasets
We next describe the continent-wide datasets prepared for the training and
evaluation of building detection models, and with varying levels of labelling.
The first category is a set of satellite images with full instance labels,
used for conventional supervised learning and also the basis of our
evaluation. Secondly, we prepared a larger set of images with class labels
corresponding to pretext tasks, suitable for pre-training. Thirdly, we
prepared a set of images with no labels at all, used for unsupervised self-
training. For additional evaluation of the final dataset, we also prepared a
sparsely-labelled evaluation dataset. These datasets are summarised in Table
1.
Table 1: Summary of datasets prepared for this work. All images are 600$\times$600 pixels, at 50 cm resolution.

Type/Usage | Number of images | Labels | Number of instances
---|---|---|---
Training | 99,902 | Building instances | 1.67M
Evaluation | 1,920 | Building instances | 80,672
Pre-training | 1M | Coarse location, fine location, nighttime luminance | -
Self-training | 8.7M | - | -
Additional evaluation | 0.9M | Sparse building instances | 0.9M
### 3.1 Supervised learning and evaluation data
Figure 2: Geographical distribution of data: (a) training set component with
18,149 images and 120,236 building polygons; (b) training set component with
81,753 images and 1.55M building polygons; (c) test set with 1,920 images and
80,672 building polygons; (d) additional sparse evaluation data with 0.9M
images and 0.9M building polygons.
We collected a training set of 99,902 RGB satellite images of size
600$\times$600 pixels, of locations across the African continent. Figure 2
shows the geographical distribution of these images. Given data resources
available to us, these were composed of two different sets with different
geographical densities. The resulting training set has broad coverage across
the continent, with particular concentrations of images for locations in East
and West Africa.
Test locations were chosen according to more specific criteria. When sampling
random locations across large areas, most images do not contain any buildings.
In order to avoid having an evaluation set which was biased towards rural and
empty areas, a set of 47 specific regions of interest was selected. These were
chosen to contain a mix of rural, medium-density and urban areas in different
regions of the continent, including informal settlements in urban areas as
well as refugee facilities.
Figure 3: Examples of building labelling policy, taking into account
characteristics of different areas across the African continent. (1) Example
of a compound containing both dwelling places as well as smaller outbuildings
such as grain stores: the smaller buildings should be ignored. (2) Example of
a round, thatched-roof structure which is difficult to distinguish from trees:
use cues about pathways, clearings and shadows to disambiguate. (3) Example of
internal shadow, indicating that this is an enclosure wall and not a building.
(4) Example of several contiguous buildings for which the boundaries cannot be
distinguished, where the ‘dense buildings’ class should be used.
The labelling policy was developed to take into account characteristic
settings across the continent, with some examples shown in Figure 3. One
challenge is the labelling of small buildings, as structures a few metres
across can be close to the limit of detectability in 50 cm imagery. Another
challenge is the labelling of buildings which are densely positioned in close
proximity to each other. We introduced a _dense building_ class for labelling,
when a human annotator was not able to ascertain the exact boundary between
individual buildings. This is analogous to the _crowd_ type in COCO [10].
### 3.2 Pre-training data
We generated further datasets of satellite imagery, with classification labels
for alternative tasks which were used as the basis for representation learning
experiments and pre-training. A convenient feature of satellite imagery is
that every pixel is associated with a longitude and latitude, so that it can
be linked to various other geospatial data. For example, Jean et al. [11]
demonstrated the use of nighttime lights data as the basis of a pretext
task, such that a model trained to predict how bright a location is at night
from daytime imagery learns a representation of satellite imagery which helps
as a starting point for other tasks.
We sampled one million images of size 600$\times$600 pixels, at 50 cm per
pixel resolution from across the continent of Africa. Sampling density was not
completely uniform, as source imagery was limited e.g. within large deserts
and other uninhabited areas.
Figure 4: Partitioning of landmass into cells of roughly equal area, according
to S2 geometry: coarse (left) and fine (right).
For each of these images, we computed information which could be used as
pretext task labels. We used the location of the image as a classification
target, by binning the Earth’s surface into parcels of roughly equal area
based on S2 geometry111https://s2geometry.io/, as shown in Figure 4. This
gives us a classification task, in which the goal is to predict for an image
patch which part of the world it comes from. The intuition is that in order to
obtain good performance on this task, a model might learn to distinguish
different vegetation, architectural or geographical features. We also computed
nighttime lights data, using DMSP OLS sensor data. This data is computed by
averaging nighttime light luminance over the course of a year, in order to
correct for temporal factors such as cloud cover. Following the methodology in
[12], we binned the luminance values into four classes, and also retained the
original values. Using this as a supervision label predisposes the model to
pay attention to human-constructed features such as buildings, which emit
light. The methods used to create pre-trained checkpoints with these datasets
are described in Section 7.
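The luminance binning described above can be sketched as follows; note that the bin edges used here are illustrative placeholders, since the exact values follow the methodology of [12]:

```python
import numpy as np

def luminance_classes(values, bin_edges=(3.0, 20.0, 40.0)):
    """Bin yearly-averaged nighttime-light luminance into four pretext-task
    classes. The bin edges are placeholders, not the values from [12]."""
    return np.digitize(values, bins=bin_edges)
```

Each image patch then receives a class label in {0, 1, 2, 3} derived from the luminance at its location, alongside the retained raw value.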
### 3.3 Self-training data
This unlabeled dataset was created by sampling 100M 640$\times$640 pixel
satellite images from the African continent. More than 90% of images contained
no buildings, therefore we subsampled the dataset using our best supervised
model, so that only around $\frac{1}{8}$ of images did not contain buildings.
The final dataset after filtering contained 8.7M images.
### 3.4 Additional evaluation data
This sparsely labeled dataset contains 0.9M 448$\times$448 pixel satellite
images from the African continent and is a by-product of the internal Google
Maps evaluation process. Each image is centered on one building detection (not
necessarily from our model), and therefore contains a mixture of images with
buildings and images with features that are easily confused as buildings, such
as rocks or vegetation. For each image, a human evaluator assessed whether
that central point contains a building. If so, they created a label with the
footprint of that single building, and if not the label is empty. Around
$\frac{1}{8}$ of the images in this dataset were centered on non-buildings.
This dataset can therefore be used for estimating precision, but not recall.
It has good coverage of the African continent, but due to the sampling
process, the density of images does not match the real building density in all
locations. See Section 11 for how we used this dataset.
## 4 Model
Our experiments are based on the U-Net model [1], which is commonly used for
segmentation of satellite images. As this is a semantic segmentation model, we
use it to classify each pixel in an input image as building or non-building.
To convert this to an instance segmentation, we threshold the predictions at
some confidence level, and search for connected components (shown in Figure 1,
where we convert from pixel-wise confidences in panels (b) and (e) to detected
instances in panels (c) and (f)).
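The conversion from pixel-wise confidences to instances can be sketched as follows; the confidence threshold here is illustrative, not a tuned value from our pipeline:

```python
import numpy as np
from scipy import ndimage

def extract_instances(confidence, threshold=0.5):
    """Threshold the pixel-wise confidences and split the result into
    4-connected components, one mask per detected building."""
    binary = confidence >= threshold
    # The default 2-D structure in scipy.ndimage.label is 4-connectivity.
    labels, n = ndimage.label(binary)
    return [labels == k for k in range(1, n + 1)]
```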
U-Net is an encoder-decoder architecture, and we use an encoder based on
ResNet-50-v2 [13]. Preliminary experiments with ResNet-v2-101 and
ResNet-v2-152 suggested that deeper encoder architectures did not improve
accuracy.
#### Residual decoder
U-Net [1] and TernausNet-v2 [4] both employ simple decoder blocks consisting
of two (U-Net) or one (TernausNet-v2) convolutional layer(s) and an
upconvolution (also known as transposed convolution or deconvolution) for
upscaling the feature map by a factor of 2. No normalization is typically
performed. One common modification to this structure is to simplify the
layers even further, e.g. by employing bilinear upsampling instead of
upconvolution and skipping some of the decoder blocks altogether. Such
modifications are often employed in the DeepLab [14] model family without any
significant performance loss. We have found that increasing the decoder
complexity can, however, bring
performance gains, at least for the task we consider in this paper. Inspired
by ResNet-v2 [13] residual blocks, we built a decoder block consisting of two
(batch normalization, ReLU, convolution) applications followed by another
(batch normalization, ReLU), residual connection to the input and finally an
upconvolution, as illustrated in Figure 5. We hypothesize that the need for
precise pixel-wise annotations of small objects means that extra decoder
complexity is beneficial in this case; buildings can be as small as 6$\times$6
pixels. We cannot rule out other possibilities though, such as mere parameter
number increase or batch normalization affecting model performance positively.
Figure 5: Decoder block structures: (a) as defined for U-Net, and (b) a
modified version with batch norm and a residual connection, used in our
model.
## 5 Loss functions
For each pixel $i$, the model gives a softmax confidence of being in the
building class $\hat{y}_{i}\in[0,1]$, and we have a ground truth label
$y_{i}\in\\{0,1\\}$. Cross entropy loss is defined as:
$L_{\mathrm{CE}}(y,\hat{y})=-\sum_{i}\omega_{i}\left[y_{i}\log{\hat{y}_{i}}+(1-y_{i})\log{(1-\hat{y}_{i})}\right]\ ,$ (1)
where $\omega_{i}$ is a weight controlling the importance of the $i$th pixel,
discussed in the next section.
Previous work on building detection has shown that mixing cross entropy loss
with Dice loss is effective [4]. We observed in informal experiments some
further improvement with a closely related formulation, Focal Tversky Loss,
which is defined as:
$L_{\mathrm{FTL}}(y,\hat{y},\beta,\gamma)=\left(1-\frac{\sum_{i}y_{i}\hat{y}_{i}+\epsilon}{\sum_{i}(1-\beta)y_{i}+\sum_{i}\beta\hat{y}_{i}+\epsilon}\right)^{\gamma}\ ,$ (2)
where $\beta$ is a parameter controlling the trade-off between false positives
and false negatives, and $\gamma$ is a focal parameter that changes the
relative importance of ‘easy’ ($\hat{y}\approx y$) and ‘hard’ examples. Our
overall loss is given by:
$L=L_{\mathrm{CE}}+\alpha L_{\mathrm{FTL}}\ ,$ (3)
using parameters $\alpha=0.5$, $\beta=0.99$ and $\gamma=0.25$, and the
constant $\epsilon=10^{-6}$ providing numerical stability.
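A minimal numpy sketch of the combined loss of Eqs. (1)-(3), written for clarity rather than training (a real implementation would use the framework's differentiable ops):

```python
import numpy as np

def combined_loss(y, y_hat, w, alpha=0.5, beta=0.99, gamma=0.25, eps=1e-6):
    """Weighted cross entropy plus focal Tversky loss, Eqs. (1)-(3).

    y     : ground-truth labels in {0, 1}
    y_hat : predicted building confidences in (0, 1)
    w     : per-pixel weights (Section 5.1)
    """
    # Eq. (1): weighted cross entropy over all pixels.
    ce = -np.sum(w * (y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat)))
    # Eq. (2): focal Tversky loss, term by term as written above.
    tversky = (np.sum(y * y_hat) + eps) / (
        np.sum((1 - beta) * y) + np.sum(beta * y_hat) + eps)
    ftl = (1.0 - tversky) ** gamma
    # Eq. (3): overall loss.
    return ce + alpha * ftl
```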
We note that focal losses tend to use $\gamma>1$, which increases the relative
importance of difficult examples. In informal experiments we observed,
however, that test set performance deteriorated when using $\gamma>1$. As the
optimal setting in our experiments boosted the easy examples, we hypothesise
that in our training set, some of the ‘difficult’ examples actually were
mislabelled, which was supported by visual inspection of training examples
with high loss scores. The focal parameter in this case helps to make the loss
robust to label noise.
### 5.1 Weighting
When all pixels are weighted equally, i.e. $\omega_{i}=1$ for all $i$ in Eq.
(1), predictions using the above loss are sub-optimal for building detection.
As the authors of U-Net have noted [1], to distinguish instances it helps to
emphasise the weighting of the pixels at the edges of nearby or touching
instances. Pixels in background regions which are far from any instance can be
down-weighted.
The computation for distance-based pixel weighting in [1] is:
$\omega_{i}=\exp\left(-\frac{d_{1}(i)+d_{2}(i)}{2\sigma^{2}}\right)\ ,$ (4)
where $d_{1}(i)$ and $d_{2}(i)$ are the Euclidean distances from pixel $i$ to
the closest point of the nearest and second-nearest instance, respectively.
Values of this weighting are shown for an example in Figure 6 (left).
We found this formulation to be effective, but slow to compute. The
calculation of $d_{1}(i)$ and $d_{2}(i)$, involving distance transforms for
every instance in an image, is not computationally efficient during training.
Using this method it is therefore necessary to pre-compute weights, which
limits the possibilities for data augmentation. Therefore, we use an
alternative weighting scheme:
1. Use the labels $y$ to construct an edge image $E$, where $E(i)$ is set to 1 if the pixel at location $i$ is on the boundary of an instance, and zero otherwise.
2. The pixel weights $\omega$ are given by convolving $E$ with a Gaussian kernel having length scale $\sigma$, then scaling by a constant $c$.
We used settings of $\sigma=3$, $c=200$. Example values of this Gaussian
convolution method are shown in Figure 6 (right). We found this method to give
better final performance in building detection, and to be efficient enough to
compute on the fly during training.
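The Gaussian-convolution weighting can be sketched as follows; here the boundary pixels of step 1 are obtained by subtracting a 4-connected erosion of the mask, which is one reasonable implementation choice:

```python
import numpy as np
from scipy.ndimage import binary_erosion, gaussian_filter

def edge_weighting(labels, sigma=3.0, c=200.0):
    """Pixel weights from Gaussian-convolved instance edges (Section 5.1).

    labels : binary mask of building pixels.
    """
    # Boundary pixels: building pixels removed by a 4-connected erosion.
    cross = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=bool)
    interior = binary_erosion(labels, structure=cross)
    edges = labels & ~interior
    # Step 2: Gaussian smoothing of the edge image, scaled by c.
    return c * gaussian_filter(edges.astype(float), sigma=sigma)
```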
Figure 6: Distance weighting schemes to emphasise nearby edges: U-Net (left)
and Gaussian convolution of edges (right). See text for details.
## 6 Regularization
We use a standard set of image augmentations to provide regularization during
training: random crops (to obtain a 448$\times$448 patch from the full
600$\times$600 image), horizontal and vertical flips, rotations, and random
modifications to the brightness, hue, saturation, and contrast. We observed
that the augmentations to color helped the model to generalise to over- and
under-exposed overhead images, as well as images in which visibility was low
due to atmospheric conditions.
We also use mixup [15] as a regularization method, initially proposed as a
method for classification and which we modify here for segmentation. During
training with this method, a random pair of images $x$ and $x^{\prime}$ are
combined with a weighted average:
$\tilde{x}=\lambda x+(1-\lambda)x^{\prime}\ ,$ (5)
where $\lambda$ is the mixup ratio coefficient ($\lambda\in[0,1)$).
The model then makes a prediction $\hat{y}$ on this averaged image, for which
cross entropy loss is computed on both sets of corresponding labels $y$ and
$y^{\prime}$, and then combined:
$\tilde{L}_{\mathrm{CE}}=\lambda L_{\mathrm{CE}}\left(\tilde{x},y\right)+(1-\lambda)L_{\mathrm{CE}}\left(\tilde{x},y^{\prime}\right)\ .$ (6)
Note that in the original mixup specification [15], a single loss is computed
on averaged labels. However, in preliminary experiments we found this not to
work as well due to our use of pixel weighting, and so in this case the labels
are not averaged. Note also that we use mixup only for the cross-entropy loss
term in Eq. (3), and do not apply it to Focal Tversky loss. We set
$\lambda=0.05$ in our experiments.
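The modified mixup scheme can be sketched as below, where `model` and `weighted_ce` are placeholders standing in for the network forward pass and the weighted cross-entropy of Eq. (1):

```python
def mixup_ce_loss(x, y, x_prime, y_prime, model, weighted_ce, lam=0.05):
    """Mixup for segmentation as in Eqs. (5)-(6): images are averaged, but
    the two label maps are kept separate and their weighted cross-entropy
    losses are combined with the same mixing ratio."""
    x_tilde = lam * x + (1.0 - lam) * x_prime          # Eq. (5)
    y_hat = model(x_tilde)                             # prediction on the mix
    return (lam * weighted_ce(y, y_hat)
            + (1.0 - lam) * weighted_ce(y_prime, y_hat))  # Eq. (6)
```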
## 7 Pre-training
A common practice is to begin training models with weights initialised from an
ImageNet [16] classifier. In the case of the U-Net model, the encoder stages
can be initialised in this way; the decoder is then randomly initialised.
Attempting to improve on this, we investigated the use of domain-specific pre-
training methods, on the grounds that the images in the ImageNet dataset have
different characteristics than satellite imagery. The datasets described in
Section 3.2 provided tasks with which to pre-train classifier models: the
night-time luminance prediction task as proposed by Xie et al. [12], and the
prediction of location in the world at either coarse granularity or fine
granularity. We trained a variety of ResNet-50 classifier models using these
datasets, and evaluated the performance of the U-Net building detection model
when using these classifiers to initialise the encoder weights.
Using the three pre-training tasks on their own gave poor performance in
building detection. Informally, we visually inspected the 7$\times$7 root
block filters learned in the initial layer of the ResNet models, and observed
that many of the values were close to zero. Speculating that this was caused
by our satellite image datasets being more homogeneous in appearance than
ImageNet, we tried two variations of pre-training. The first was to start
with an ImageNet classifier and then fine-tune the full model on each
pre-training task. In this case, the ImageNet weights appeared to be close to
local optima, as the model weights did not greatly change during this
fine-tuning. The second strategy was to co-train, in which we set up
ResNet-50 models with two classification heads: ImageNet and {Luminance |
Coarse location | Fine location}. Training batches in this setup contained a
mixture of ImageNet and satellite images, with loss computed for the
corresponding head.
In practice, ImageNet pre-training was an effective strategy, which we
ultimately used in our detection model. Fine-tuning with nighttime luminance
raised average mAP, though not significantly. A comparison is given in Section
11. One issue may have been that the pre-training schemes that we considered
were all classification tasks, yet the problem we are ultimately interested in
is segmentation. The use of segmentation tasks for pre-training would allow
initialisation of the decoder, for instance, which may improve final detection
performance and training data efficiency.
## 8 Self-training
In comparison with the limited amount of labeled data, a much larger amount of
unlabeled satellite images exists. Leveraging this fact, we employ self-
training to improve the model’s performance, inspired by the Noisy Student
[17] and Naive Student [18] approaches. For self-training we use the unlabeled
dataset described in Section 3.3 and similar image augmentations as for
labeled data. See Figure 7 for a visualization of the performance improvement
due to self-training.
Figure 7: Comparison of the confidence masks of the teacher and the student
after one iteration of self-training: (a) input image, (b) teacher
confidence, (c) student confidence, (d) difference. In panel (d), red areas
are those that the student model finds more likely to be buildings than the
teacher model, and blue areas those it finds more likely to be background.
We arrived at our best model by performing multiple iterations of self-
training using soft teacher labels together with a Kullback-Leibler divergence
loss with a focal $\gamma=0.25$ parameter [19]. Based on informal experiments
using hard teacher labels with the previously defined supervised losses
(Section 5), $\gamma\geq 1$, larger students and stochastic depth [20] did not
improve performance. Fine-tuning the student model on our original supervised
data helped only for the first iteration.
Some of the satellite images we used had black regions where they extended
beyond the satellite image asset geometry, and we observed that our best
supervised model (the first teacher) failed to detect buildings next to these
black pixel regions, a consequence of incorrectly labeled supervised data. By
applying self-training with a random black-mask augmentation, we obtained a
student model that does not have this issue.
## 9 Pre-processing
#### Erosion of instances
We noticed that in some examples the buildings are so close that the instances
effectively touch each other and form one connected component on the
segmentation mask. To be able to separate these buildings during post-
processing (to identify instances) we had to teach the model to predict at
least one pixel gap between them. Therefore we employed a morphological
erosion operation with kernel size $3\times 3$ pixels during pre-processing of
labeled images to shrink all instances by one pixel.
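A sketch of this pre-processing step, applied per labelled instance:

```python
import numpy as np
from scipy.ndimage import binary_erosion

def shrink_instances(instance_masks):
    """Erode each labelled instance mask with a 3x3 kernel, shrinking it by
    one pixel, so that buildings which touch in the labels are separated by
    at least one background pixel in the training target."""
    kernel = np.ones((3, 3), dtype=bool)
    return [binary_erosion(m, structure=kernel) for m in instance_masks]
```

For example, two adjacent 3$\times$3 instances each shrink to a single centre pixel, leaving a clear gap between them.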
#### Mapping dense labels
During training, we remapped dense building labels (representing a group of
buildings) to normal building labels. An alternative is to set the pixel
weight to 0 for dense buildings, effectively treating them as ‘unknown’, which
was equivalent in terms of performance.
## 10 Post-processing
#### Ensembling and test-time augmentation
To improve the final performance of the model we combined ensembling and
simple multi-scale test-time augmentation. In case of ensembling we take an
average of the output confidence masks from multiple models on the same input
and in case of test-time augmentations we average the confidence masks
produced at different image scales (1, $\frac{512}{448}$, and
$\frac{576}{448}$).
#### Connected components
Our model is a semantic segmentation model, therefore to obtain building
instances we find the 4-connected components in the thresholded predicted
label image. We calculate the instance confidence score as the average of
confidence scores of the connected component.
#### Dilation of instances
During pre-processing we applied erosion with kernel size $3\times 3$ to
instance masks, to shrink them by one pixel. In post-processing we approximate
the inverse of this operation by performing morphological dilation on each
instance, with the same kernel.
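The per-component post-processing described above can be sketched as follows, combining the confidence scoring with the approximate inverse of the pre-processing erosion:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def finalize_instance(confidence, component_mask):
    """Post-process one connected component: score it by the mean pixel
    confidence, then dilate by one pixel with the same 3x3 kernel to
    approximately undo the pre-processing erosion."""
    score = float(confidence[component_mask].mean())
    kernel = np.ones((3, 3), dtype=bool)
    return binary_dilation(component_mask, structure=kernel), score
```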
## 11 Evaluation
Table 2: U-Net supervised learning baseline configuration.

Encoder: | ResNet50
---|---
Decoder block: | Residual
Loss: | Weighted cross entropy and focal Tversky loss
Distance weighting: | Gaussian convolution
Regularization: | Image augmentations, mixup
Pre- and post-processing: | Erosion and dilation
We carried out an ablation study to determine the contribution of the
techniques described in the preceding sections. The baseline configuration for
supervised learning, with the combination of methods that we found in
preliminary experiments to be most effective, is summarised in Table 2. We use
the training and test sets as described in Section 3.1, and train using a
scaled conjugate gradient optimizer, with initial learning rate 0.2, decaying
by a factor of 0.8 every 10k steps, for a total of 100k steps with batch size
128. Our test set performance metric is mean average precision with an
intersection over union threshold of 0.5 ([email protected]), using COCO metrics
[10].
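The staircase learning-rate schedule used here is simply:

```python
def learning_rate(step, base=0.2, decay=0.8, every=10_000):
    """Staircase schedule: the base rate of 0.2 decays by a factor of 0.8
    every 10k steps, over 100k steps in total."""
    return base * decay ** (step // every)
```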
Ablations were done by changing one configuration setting at a time and
measuring the drop in performance relative to the baseline. The results are
shown in Figure 8. The method most significantly contributing to detection
performance was distance weighting of pixels in cross entropy loss. Finding
the correct boundaries of buildings appears to be the crux of the problem, and
distance weighting encourages the model to focus on the correct classification
for those pixels. Mixup and ImageNet pre-training were the next most
significant methods. One surprising finding from this study was that detection
performance using only cross entropy loss was nearly as good as the baseline,
with only -0.005 mAP difference (not a statistically significant difference,
given the range of variation across replicas).
Figure 8: Ablation study of training methods. The first row shows the mAP
performance of best model including self-training, and the second row shows
the best model with supervised learning only (the baseline). By disabling each
training optimisation in turn from the baseline, we observe the impact on mAP
test performance: distance weighting has the most significant effect, followed
by mixup.
#### Self-training
Our best model was obtained by using the supervised learning baseline as a
teacher and carrying out self-training as described in Section 8. The bottom
row on Figure 8 shows the difference, with mAP increased by 0.057 on average.
Figure 9 shows precision and recall for different categories in the test set:
rural, urban, medium-density (‘towns’), and settlement facilities for
refugees/internally displaced people (‘displaced’). Visual examples of these
categories are shown in Figure 11. We also show the difference in precision
and recall made by the self-training: precision is increased at high recall
levels, with the improvement being consistent across all test set categories.
We carried out evaluations of the best model on more specific splits of the
evaluation set, shown in Figure 10. When visually inspecting the detections
for low-scoring regions, we noted various causes: in rural areas, label errors
(single buildings within a mostly-empty area can be difficult for labellers to
spot); in urban areas, a tendency of the model to split large buildings into
separate instances; and desert terrain, where buildings were hard to
distinguish against the background, and the model did not perform as well.
Figure 9: Precision-recall with IoU threshold 0.5, after self-training. Left:
Results on different categories on test images. Centre: overall difference in
precision-recall on test data compared to the model before self-training,
showing that self-training increases precision at higher recall levels. Right:
Difference in precision at each recall level, broken down by different
categories of test data.
Figure 10: Precision-recall in specific regions by
category, of the best model (including self-training). Investigating the
regions with low area under the PR curve, we noted that _Sierra Leone - Tuelo_
and _Mozambique - Macia_ images were sparsely populated, with some buildings
missing from the labels (i.e. human error while labelling). _Egypt - Cairo_
images were low-scoring partly because of a tendency of the detection model to
split large buildings into multiple smaller instances. Detections in desert
regions, such as _Mali - Timbuktu_ were challenging due to low contrast
between roofs and surrounding sandy areas.
Figure 11: Examples of the categories evaluated in Figs. 9 and 10. Imagery:
Maxar Technologies.
#### Pre-training
Table 3 shows the effect of using different weights for initialisation of the
encoder. As in the experiments above, we use the supervised learning baseline
configuration in Table 2, changing only the initialisation weights. We
repeated each experiment five times and report means and confidence intervals.
Overall, ImageNet pre-training, optionally with fine tuning based on nighttime
luminance, appears to be an effective strategy.
Table 3: Mean average precision of the U-Net building detection model, when using different pre-training schemes to initialise encoder weights.

Pre-training scheme | mAP ($95$% CI)
---|---
None | $0.531\pm 0.003$
ImageNet | $0.601\pm 0.018$
Luminance | $0.579\pm 0.005$
Coarse location | $0.582\pm 0.004$
Fine location | $0.583\pm 0.004$
_ImageNet, fine-tuned with:_ |
Luminance | $0.610\pm 0.006$
Coarse location | $0.595\pm 0.004$
Fine location | $0.602\pm 0.005$
_ImageNet, co-trained with:_ |
Luminance | $0.552\pm 0.028$
Coarse location | $0.572\pm 0.006$
Fine location | $0.558\pm 0.019$
Figure 12: Spatial variations in filtering the full dataset to obtain an
estimated 90% precision: (a) the 90%-precision confidence score threshold in
each region; (b) the fraction of detections dropped at that threshold.
Figure 13: Examples of error types occurring in the final dataset, including
the contouring and deduplication processes: (a) large buildings, (b) complex
roof structure, (c) touching buildings, (d) ambiguous segmentation, (e) tree
occlusion, (f) confusing natural features, (g) round shapes vectorized to
rectangles. The color of each polygon indicates its confidence score range:
red [0.5, 0.6), yellow [0.6, 0.7) and green [0.7, 1.0]. In panel (d), note
also the example of detections appearing shifted, which is caused by
misalignment between the image used for inference and the image used for
visualization. Orthorectification errors in source imagery can cause building
footprints to be generated a few metres from their true positions.
Figure 14: Open Buildings dataset confidence score and area distribution.
## 12 Generation of the Open Buildings dataset
We used existing infrastructure in Google Maps to run inference, contouring of
masks into polygons and deduplication. For inference we used our best model
and ran it on available high-resolution satellite imagery in Africa (19.4M
km2, 64% of the land surface of the continent), which included imagery at
different timestamps and resolutions. We used a contouring algorithm that
produces angular shapes and realigns groups of nearby polygons. After
inference and contouring we ended up with 36B building polygons that we
deduplicated into 516M polygons (see Figure 14 for statistics). The
deduplication algorithm grouped overlapping detections then selected the best
polygons based on confidence score of the detections and quality of the
imagery. Some potential false positives were removed by the deduplication
algorithm if detections on overlapping imagery did not agree.
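As an illustration of the grouping step, here is a greedy sketch over bounding boxes; the production system operates on polygons and also weighs imagery quality, which we omit:

```python
def deduplicate(detections, iou_threshold=0.5):
    """Greedy de-duplication sketch: detections are (box, score) pairs with
    box = (x0, y0, x1, y1); within each group of mutually overlapping
    detections, only the highest-scoring one is kept."""
    def iou(a, b):
        x0, y0 = max(a[0], b[0]), max(a[1], b[1])
        x1, y1 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x1 - x0) * max(0.0, y1 - y0)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union > 0 else 0.0

    kept = []
    for box, score in sorted(detections, key=lambda d: -d[1]):
        # Keep a detection only if it does not overlap anything kept so far.
        if all(iou(box, kb) < iou_threshold for kb, _ in kept):
            kept.append((box, score))
    return kept
```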
#### Confidence score guidelines
Knowing that model performance varies across regions, we attempted to estimate
score threshold guidelines for different regions. These can be used to filter
the detections in order to achieve a certain precision level (though with
unknown effect on recall). The extra evaluation dataset described in Section 3
provided the means to compute such thresholds for each S2 cell bucket. We
reweighted the samples of this extra evaluation data to match the density of
the Open Buildings dataset, and then for each level-4 S2 cell, calculated the
score thresholds that give 80%, 85% and 90% precision at 0.5 IoU. See Figure
12 for visualization of the 90% precision score thresholds across Africa. To
illustrate the types of errors that cause low precision in the final dataset,
Figure 13 shows examples, including model failures on large or complex
buildings, and spurious detections in areas with confusing natural features.
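The per-cell threshold computation can be sketched as follows, given scored detections in one region that have already been matched against labels at 0.5 IoU (the sample reweighting step is omitted here):

```python
import numpy as np

def precision_threshold(scores, is_true_positive, target=0.9):
    """Find the lowest confidence threshold at which the detections passing
    it reach the target precision; returns None if the target is never
    attained."""
    order = np.argsort(scores)[::-1]              # descending by confidence
    tp = np.cumsum(np.asarray(is_true_positive, float)[order])
    precision = tp / np.arange(1, len(order) + 1)  # precision at each cut
    ok = np.nonzero(precision >= target)[0]
    if len(ok) == 0:
        return None
    return float(np.asarray(scores)[order][ok[-1]])
```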
## 13 Conclusion
We have presented a pipeline for instance segmentation of buildings in
satellite imagery, used to detect buildings across the entire continent of
Africa. The methods that we have found for improving detection performance,
such as self-training, mixup, and alternative forms of distance weighting,
have been applied using the U-Net model, but could in principle be applied to
other types of instance segmentation architectures. There are a number of
possible directions for improving detection performance further. One is the
use of multi-modal imagery, e.g. adding Sentinel imagery to the input. Another
is the use of detection architectures which explicitly find instances, rather
than casting the problem as semantic segmentation. As high-resolution overhead
imagery becomes more widely available, improved methods for mapping the built
environment can help to make progress on a number of practical and scientific
applications.
## 14 Acknowledgements
We would like to thank several people who helped to make this work possible:
Abdoulaye Diack assisted with coordination, Brian Shucker, Rob Litzke, Yan
Mayster, Michelina Pallone, Stephen Albro, and Matt Manolides provided advice
and assistance with the infrastructure used to create the dataset, Andrea
Frome and Mohammad Nassar assisted with preliminary work exploring the use of
DeepLab as an alternative basis for detection, Nyalleng Moorosi helped with
diligence on ethical and safety issues, and Sean Askay helped to scope the
dataset and identify practical applications. The work is part of Google’s
ongoing AI for Social Good initiative.
|
arxiv-papers
| 2021-07-26T15:48:14 |
2024-09-04T03:07:19.055593
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Wojciech Sirko, Sergii Kashubin, Marvin Ritter, Abigail Annkah, Yasser\n Salah Eddine Bouchareb, Yann Dauphin, Daniel Keysers, Maxim Neumann,\n Moustapha Cisse, John Quinn",
"submitter": "John Quinn",
"url": "https://arxiv.org/abs/2107.12283"
}
|
2107.12285
|
Aix Marseille Univ, Université de Toulon, CNRS, CPT, Marseille, France
Centre de Physique Théorique
This is a collection of notes on the properties of left-invariant metrics on
the eight-dimensional compact Lie group ${\rm SU}(3)$. Among other topics we
investigate the existence of invariant pseudo-Riemannian Einstein metrics on
this manifold. We recover the known examples (Killing metric and Jensen
metric) in the Riemannian case (signature $(8,0)$), as well as a Gibbons et al
example of signature $(6,2)$, and we describe a new example, which is
Lorentzian (i.e., of signature $(7,1)$). In the latter case the associated
metric is left-invariant, with isometry group ${\rm
SU}(3)\times{\mathrm{U}}(1)$, and has positive Einstein constant. It seems to
be the first example of a Lorentzian homogeneous Einstein metric on this
compact manifold.
These notes are arranged into a paper that deals with various other subjects
unrelated with the quest for Einstein metrics but that may be of independent
interest: among other topics we describe the various groups that may arise as
isometry groups of left-invariant metrics on ${\rm SU}(3)$, provide
parametrizations for these metrics, give several explicit results about the
curvatures of the corresponding Levi-Civita connections, discuss modified
Casimir operators (quadratic, but also cubic) and Laplace-Beltrami operators.
In particular we discuss the spectrum of the Laplacian for metrics that are
invariant under ${\rm SU}(3)\times{\mathrm{U}}(2)$, a subject that may be of
interest in particle physics.
## 1 Introduction
This paper is an excursion in the land of left-invariant geometry, more
precisely in the land of left-invariant pseudo-Riemannian metrics on the Lie
group ${\rm SU}(3)$. Its main purpose is to illustrate several known concepts
and methods of Riemannian geometry in this particular case.
The only mathematical result which is probably new is the existence and
description of a homogeneous Einstein metric (actually a family) with
Lorentz signature, i.e., with signature $(7,1)$. Of course, any quadratic form
of signature $(7,1)$ on $\mathbb{R}^{8}$ gives rise, by group translations, to a
Lorentzian homogeneous metric on the group ${\rm SU}(3)$; such metrics,
however, are usually not Einstein metrics. The example that we present in sect. 3
seems to be the first example of a homogeneous Einstein Lorentzian metric on
this $8$-dimensional compact manifold. Its isometry group is isomorphic with
${\rm SU}(3)\times{\mathrm{U}}(1)$.
It is probably the proper place to mention that, for us, the word “metric”
means “pseudo-Riemannian metric”: it is non-degenerate but the requirement of
positive-definiteness is relaxed; the signature can be arbitrary. For the same
reason a metric for which the curvature of the associated Levi-Civita
connection obeys the Einstein condition will be called an Einstein metric (we
shall not use the terminology “pseudo-Einstein”). The symbol $G$ will denote,
most of the time, the Lie group ${\rm SU}(3)$, but several discussions can
often be generalized to any simple (or even semi-simple) compact Lie group.
Left-invariant metrics are fully characterized by their value at the origin of
the group, and therefore by a non-degenerate bilinear symmetric form on the
Lie algebra, equivalently (after having chosen an appropriate basis), by an
$8\times 8$ non-degenerate symmetric matrix. Such metrics are left-invariant
by construction, but their isometry group can be larger than ${\rm SU}(3)$: as
a rule it is isomorphic with ${\rm SU}(3)\times K$ where $K$, which we call
right isometry group, is some Lie subgroup of ${\rm SU}(3)$ (more about it
later). We shall give111This was already discussed long ago, with physical
applications in mind, in [4]. a parametrization, in terms of $8\times 8$
matrices, of those left-invariant metrics for which the right isometry group
is $K$, for the various possible choices of $K$.
Then, for every $K$, we shall study the Einstein condition and describe the
metrics for which this condition holds. This occurs (when it occurs) for
specific values of the real parameters that enter our various parametrizations
of the corresponding bilinear forms. In some cases, i.e., for some choices of
$K$, our analysis is complete. Unfortunately, in some other cases we could not
solve the equations in full generality, and we had to assume extra relations
between the otherwise independent parameters in order to complete our study.
We shall discover, along the way, several infinite families of homogeneous
Einstein metrics, but once one takes into account the action of the group of
diffeomorphisms on the space of metrics giving rise, in general, to the notion
of (pseudo) Riemannian structures, and in particular to the notion of Einstein
structures, these families reduce, up to scaling, to only four cases, three of
which were already known: the Killing metric (which is properly Riemannian and
for which $K$ is ${\rm SU}(3)$), the so-called Jensen metric [12] (which is
also properly Riemannian and for which $K$ is ${\mathrm{S}O}(3)$), a
particular Lorentz metric (for which $K$ is a member – that we call
${\mathrm{U}}(1)_{I}$ – of a specific conjugacy class of ${\mathrm{U}}(1)$
subgroups), and a metric of signature $(6,2)$, that was already discovered by
[10], for which $K$ is trivial.
This paper grew up from a collection of notes whose purpose was to illustrate
several concepts of (pseudo) Riemannian geometry, in particular left-invariant
geometry, in a case going beyond the three-sphere $S^{3}$, aka ${\rm SU}(2)$,
for which left-invariant metrics have been thoroughly studied, long ago and in
many places; hence the choice of ${\rm SU}(3)$, which is the next
case in the ${\rm SU}(N)$ family. For this reason the reader will find here a
section, entitled “Miscellaneous”, whose contents have to do with left-
invariant geometry but are not directly related to the theme of
Einstein metrics: there we shall discuss, for instance, quadratic and cubic
Casimir elements (possibly modified), Laplace-Beltrami operators, sectional
curvatures, Ricci decompositions, etc. These concepts are of course standard,
and we shall add nothing fundamentally new to the study of their general
properties, but we shall give a number of explicit results that, we hope, will
entertain the reader or trigger the interest of a few students.
As for the classification of Einstein left-invariant metrics on ${\rm SU}(3)$
with given right isometry group, or of homogeneous Einstein structures, what
we can offer is unfortunately incomplete since, in several cases, we could not
solve the Einstein equation in full generality while keeping all the
parameters allowed by the choice of a given right isometry group. We hope that
some courageous readers will take up this study. It is no surprise that the
system of equations that one needs to solve gets more and more complex, with
more and more parameters, as the right isometry group gets smaller. For this
reason, even if one can easily solve these equations by hand when the right
isometry group $K$ is large enough (for instance ${\mathrm{S}O}(3)$,
${\mathrm{U}}(2)$, or ${\rm SU}(3)$ itself), the use of a computer algebra
system becomes almost compulsory for smaller groups; for example, our
homogeneous Einstein space with Lorentz signature involves parameters that are
algebraic integers of degree $15$ with large coefficients, and a manual
handling of such large expressions is inefficient and prone to error. Most
calculations done here were carried out using the Mathematica software system.
The paper ends with a section devoted to possible physical applications. One
of them, in particle physics, using left-invariant metrics with right isometry
group ${\mathrm{U}}(2)$, was described long ago, see [4], and we shall add
almost nothing to this discussion, apart from resetting the problem in a
slightly more general framework. Physical applications, if any, of the
existence of pseudo-Riemannian homogeneous Einstein metrics on ${\rm SU}(3)$,
in particular of the one that has Lorentz signature, remain to be found.
##### Reminders.
Every non-compact connected smooth manifold admits a Lorentz metric, and a
compact connected smooth manifold admits a Lorentz metric if and only if its
Euler characteristic is zero ([18], p. 149). A useful corollary: the existence
of a nowhere-vanishing vector field implies that the Euler characteristic is
zero. In particular, any compact parallelizable manifold, including any
compact Lie group (they admit many non-vanishing vector fields!), has Euler
characteristic zero. So one knows a priori that the groups ${\rm SU}(N)$, and
in particular the group ${\rm SU}(3)$, admit Lorentz metrics. In
the case of Lie groups however one does not need such general theorems to
establish this property since, as already recalled, any non-degenerate
symmetric bilinear form with Lorentz signature on the Lie algebra gives rise
to a left-invariant Lorentz metric on the group itself, by using group
translations; these metrics are obviously homogeneous, and, in our case, have
an isometry group isomorphic with ${\rm SU}(3)\times K$, where $K$ is some
subgroup of ${\rm SU}(3)$. Usually these metrics are not Einstein metrics:
this is one motivation for the search of those which are such.
## 2 Metrics and curvatures
### 2.1 Killing form, inner product, and renormalized Killing inner product
Since the Lie group ${\rm SU}(3)$ is compact, the Killing form is a negative
definite bilinear symmetric form on the Lie algebra $\mathfrak{su}(3)$. Its
opposite, the Killing inner product, defines an Euclidean structure on
$\mathfrak{su}(3)$ and, using left translations (for instance), a Riemannian
metric on the group itself: the Killing metric.
It is useful and standard to define the renormalized Killing form by dividing
the Killing form by $2g$, where $g$ is the dual Coxeter number. We call $k$
the Killing inner product (so the Killing form is $-k$), and
$\widehat{k}=k/2g$ the renormalized Killing inner product. For ${\rm SU}(3)$,
$g=3$, therefore $\widehat{k}=k/6$. Warning: Through notational abuse we also
call $k$ and $\widehat{k}$ the corresponding bi-invariant metrics on the
manifold $G={\rm SU}(3)$.
### 2.2 Several bases
Let $(e_{a})$ be an arbitrary basis of the tangent space at the identity of
$G$, identified with ${\mathrm{Lie}}(G)$. Through notational abuse we also
call $e_{a}$ the corresponding left-fundamental222According to the present
standard terminology they are also called left-invariant although they commute
with the right-fundamental ones (in some old references left-fundamental
vector fields are called right-invariant). vector fields obtained from the
latter by letting $G$ act on itself by left multiplication. The structure
constants of the global moving frame $(e_{a})$ defined by the equality
$[e_{a},e_{b}]={x_{ab}}^{c}\,e_{c}$ are identified with the structure
constants of the basis $(e_{a})$ in the Lie algebra.
The dual (also called inverse) Killing metric, in the moving frame $(e_{a})$
has components $k^{ab}$ and therefore reads333Here and below we use the first
Einstein summation convention: an index variable that appears twice, once as a
superscript and once as a subscript, must be summed over.
$k^{-1}=k^{ab}\,e_{a}\otimes e_{b}$. The Killing metric itself reads
$k=k_{ab}\,\theta^{a}\otimes\theta^{b}$, where $(\theta^{a})$ is the moving
co-frame dual to $(e_{a})$. Replacing $k$ by $\widehat{k}$ we have similar
expressions for the renormalized Killing metric, with
$\widehat{k}_{ab}=k_{ab}/2g$ and $\widehat{k}^{ab}=2g\,k^{ab}$. Since $g=N$
for ${\rm SU}(N)$, we have $\widehat{k}=k/6$ and $\widehat{k}^{-1}=6k^{-1}$
for ${\rm SU}(3)$.
Let $(X_{a})$ be a basis of ${\mathrm{Lie}}(G)$ which is orthonormal for the
inner product $k$. The ordered set of vectors
$\widehat{X}_{a}=\sqrt{2g}\,X_{a}$, in particular
$\widehat{X}_{a}=\sqrt{6}\,X_{a}$ for $G={\rm SU}(3)$, is then an orthonormal
basis for the inner product $\widehat{k}$. We have
$k^{-1}=\delta^{ab}\,X_{a}\otimes X_{b}$ and
$\widehat{k}^{-1}=2g\,\delta^{ab}\,X_{a}\otimes X_{b}$, where $\delta^{ab}$ is
the Kronecker symbol.
Define $i\,L_{a}\in\,\mathfrak{su}(3)$ by the equality
$i\,L_{a}=2\sqrt{3}\,X_{a}=\sqrt{2}\,\widehat{X}_{a}$. It is traditional444The
hermitian matrices $\lambda_{a}$ are usually called Gell-Mann matrices in the
physics literature. to call $i\,\lambda_{a}$ the $3\times 3$ anti-hermitian
matrices that represent the $i\,L_{a}$ in the defining representation of ${\rm
SU}(3)$.
Let $E^{i}_{j}$ be the single-entry $3\times 3$ matrices. One obtains the
$\lambda_{a}$ as follows:
$\lambda_{1}=E^{1}_{2}+E^{2}_{1},\,\lambda_{2}=i(E^{2}_{1}-E^{1}_{2}),\,\lambda_{3}=E^{1}_{1}-E^{2}_{2},\,\lambda_{4}=E^{1}_{3}+E^{3}_{1},\,\lambda_{5}=i(E^{3}_{1}-E^{1}_{3}),\,\lambda_{6}=E^{2}_{3}+E^{3}_{2},\,\lambda_{7}=i(E^{3}_{2}-E^{2}_{3}),\,\lambda_{8}=\tfrac{1}{\sqrt{3}}(E^{1}_{1}+E^{2}_{2}-2E^{3}_{3}).$
The Lie bracket of two Lie algebra elements can be written as a matrix
commutator in any chosen representation. It is standard to call
$-2\,{f_{ab}}^{c}$ the (real) structure constants of the basis $i\lambda_{a}$,
i.e., $[i\lambda_{a},i\lambda_{b}]=-2{f_{ab}}^{c}(i\lambda_{c})$,
equivalently, $[\lambda_{a},\lambda_{b}]=2i{f_{ab}}^{c}\,\lambda_{c}$. From
$Tr(\lambda_{a}.\lambda_{b})=2\,\delta_{ab}$ one obtains
$4i\,{f_{ab}}^{c}=Tr([\lambda_{a},\lambda_{b}]\,.\,\lambda_{c})$; using
cyclicity of the trace one finds that ${f_{ab}}^{c}$ is antisymmetric in its
last two indices $b,c$; being manifestly antisymmetric in the first two, it is
therefore totally antisymmetric.
At the origin of ${\rm SU}(3)$, the left-invariant vector fields $X_{a}$,
identified with Lie algebra elements, are expressed as matrices
$\tfrac{i}{2\sqrt{3}}\,\lambda_{a}$ in the defining representation. The
structure constants of the basis $(X_{a})$, which is orthonormal for $k$, are
therefore equal to $\tfrac{-1}{\sqrt{3}}{f_{ab}}^{c}$. Notice that in the
adjoint representation, the generators $iL_{a}$ are represented by (real
antisymmetric) matrices $2f_{a}$ which have matrix elements ${2\,f_{ab}}^{c}$.
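The normalization and structure constants above lend themselves to a numerical sanity check. The following sketch (ours, using numpy; not part of the paper) builds the Gell-Mann matrices, verifies $\mathrm{Tr}(\lambda_a\lambda_b)=2\delta_{ab}$ and the antisymmetry of ${f_{ab}}^c$, and checks that the $X_a=\tfrac{i}{2\sqrt{3}}\lambda_a$ are orthonormal for the Killing inner product, computed in the defining representation as $k(X,Y)=-2N\,\mathrm{Tr}(XY)$ with $N=3$.

```python
import numpy as np
import itertools

def E(i, j):                       # single-entry 3x3 matrices E^i_j
    m = np.zeros((3, 3), complex); m[i-1, j-1] = 1; return m

lam = [E(1,2)+E(2,1), 1j*(E(2,1)-E(1,2)), E(1,1)-E(2,2),
       E(1,3)+E(3,1), 1j*(E(3,1)-E(1,3)), E(2,3)+E(3,2),
       1j*(E(3,2)-E(2,3)), (E(1,1)+E(2,2)-2*E(3,3))/np.sqrt(3)]

# normalization Tr(lam_a lam_b) = 2 delta_ab
for a, b in itertools.product(range(8), repeat=2):
    assert abs(np.trace(lam[a] @ lam[b]) - 2*(a == b)) < 1e-12

# structure constants from 4i f_{ab}^c = Tr([lam_a, lam_b] lam_c)
f = np.zeros((8, 8, 8))
for a, b, c in itertools.product(range(8), repeat=3):
    comm = lam[a] @ lam[b] - lam[b] @ lam[a]
    f[a, b, c] = np.real(np.trace(comm @ lam[c]) / 4j)
assert abs(f[0, 1, 2] - 1) < 1e-12            # f_123 = 1
assert np.allclose(f, -np.swapaxes(f, 1, 2))  # antisymmetric in b, c

# X_a = (i / (2 sqrt(3))) lam_a is orthonormal for the Killing inner
# product k(X, Y) = -2N Tr(XY), with N = 3 for su(3)
X = [1j/(2*np.sqrt(3)) * l for l in lam]
k = np.array([[np.real(-6*np.trace(X[a] @ X[b])) for b in range(8)]
              for a in range(8)])
assert np.allclose(k, np.eye(8))
print("all checks passed")
```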
### 2.3 Left-invariant pseudo-Riemannian metrics and isometry groups
##### Isometry groups.
Isometry groups of left-invariant metrics on ${\rm SU}(3)$ are isomorphic with
${\rm SU}(3)\times K$ where $K$ is a subgroup of ${\rm SU}(3)$. Left-invariant
metrics on a Lie group are homogeneous since the isometry group acts
transitively on the manifold. The group $K$ can be ${\rm SU}(3)$ itself (bi-
invariant metrics) or some subgroup of the maximal subgroups of the latter,
which, up to conjugacy, are
${\mathrm{U}}(2)=S({\mathrm{U}}(2)\times{\mathrm{U}}(1))$ (locally ${\rm
SU}(2)\times{\mathrm{U}}(1)$) and ${\mathrm{S}O}(3)$, sometimes called the
“${\mathrm{S}O}(3)$ principal subgroup”. Hence, restricting oneself to closed
connected subgroups, one finds that the candidates for a (right) isometry
group $K$, up to isomorphism, are the members of the following list555The
Hasse diagram of nontrivial Lie subalgebras of ${\mathrm{Lie}}({\rm SU}(3))$,
up to equivalence (conjugacy by an inner automorphism), can be found in [9]. :
$S({\mathrm{U}}(2){\mathrm{U}}(1))\sim{\mathrm{U}}(2)$, ${\rm SU}(2)$,
${\mathrm{S}O}(3)$, ${\mathrm{U}}(1)\times{\mathrm{U}}(1)$, and
${\mathrm{U}}(1)$.
Two remarks are in order here: 1) If, for some left-invariant metric, $K$
contains ${\rm SU}(2)$, it also contains ${\mathrm{U}}(2)$ (see, below, the
paragraph called “Parametrizations”), so ${\rm SU}(2)$ should be removed from
the previous list. 2) In order to discuss left-invariant metrics up to
equivalence (a notion that will be made precise later), specifying $K$ up to
isomorphism is a priori not enough and, in general, one needs to specify $K$
up to conjugacy; however, maximal compact subgroups in a connected Lie group
are all conjugate, and maximal tori are also conjugate, so only the last
possibility of the above list, namely the case $K={\mathrm{U}}(1)$, needs to
be specified further.
Notice that, with the exception of ${\mathrm{U}}(1)$, specifying the type of
the subgroup $K$ up to isomorphism is also enough to determine the quotient
${\rm SU}(3)/K$ as a smooth manifold. Again, in the case $K={\mathrm{U}}(1)$
one has to be more specific. These quotients666Here we only think of these
quotients as homogeneous spaces defined by the pair $(G,K)$. are well known:
we have the complex projective space $CP^{2}={\rm SU}(3)/{\mathrm{U}}(2)$, the
sphere $S^{5}={\rm SU}(3)/{\rm SU}(2)$, the irreducible symmetric space ${\rm
SU}(3)/{\mathrm{S}O}(3)$ (sometimes called the Wu manifold), the flag manifold
${\rm SU}(3)/({\mathrm{U}}(1)\times{\mathrm{U}}(1))$, and the various
Aloff-Wallach spaces ${\rm SU}(3)/{\mathrm{U}}(1)$.
In order to obtain a parametrization for left-invariant metrics on ${\rm
SU}(3)$ invariant under a given isometry group ${\rm SU}(3)\times K$, the
first step is to specify $K$ itself, or rather its Lie algebra, in terms of
${\rm SU}(3)$ generators; this is conveniently done in the defining
representation, in terms of the matrices $\lambda_{a}$. The second step is to
impose the vanishing of the Lie derivative of an arbitrary left-invariant
metric (a symmetric $8\times 8$ matrix with $8(8+1)/2=36$ arbitrary real
constant parameters) in the direction of the generators of the Lie subalgebra
of the chosen isometry subgroup $K$. Equivalently, one can impose (or check)
the equality ${r}^{T}\,.\,h^{-1}\,.\,r=h^{-1}$ with $r=\exp(u)$ for the
generators $u$ of the chosen subgroup $K$ in the adjoint representation of
${\rm SU}(3)$; one takes $u=2f_{a}$, for the one-parameter subgroups generated
by the vectors $i\,L_{a}$.
Our choice777Given the isomorphy types of the right isometry groups $K$, we
make here specific (but of course arbitrary) choices that define $K$ as
concrete subgroups of ${\rm SU}(3)$. of generators for ${\mathrm{Lie}}(K)$, for the
various candidates, is as follows. For ${\mathrm{U}}(2)\sim
S({\mathrm{U}}(2){\mathrm{U}}(1))$ (locally isomorphic with ${\rm
SU}(2)\times{\mathrm{U}}(1)$), we choose the generators
$\\{\lambda_{1},\lambda_{2},\lambda_{3}\\}$ and $\lambda_{8}$. For
${\mathrm{S}O}(3)$ we choose the generators
$\\{\lambda_{2},\lambda_{5},\lambda_{7}\\}$888${\rm SU}(2)$ and
${\mathrm{S}O}(3)$ are of course locally isomorphic, but not isomorphic.. We
identify the Cartan subgroup ${\mathrm{U}}(1)\times{\mathrm{U}}(1)$ with a
fixed maximal torus of ${\rm SU}(3)$, namely the one generated by
$\lambda_{3}$ and $\lambda_{8}$.
Let us take $e^{i\phi}\in{\mathrm{U}}(1)$, $k,\ell\in\mathbb{Z}$, and call
${\mathrm{U}}(1)_{k,\ell}$ the subgroup of ${\rm SU}(3)$ defined as the set of
$3\times 3$ diagonal matrices with diagonal
$(e^{ik\phi},e^{i\ell\phi},e^{-i(k+\ell)\phi})$. Any one-dimensional subgroup
of ${\rm SU}(3)$ is conjugate to such a ${\mathrm{U}}(1)_{k,\ell}$. Two
manifolds of the type ${\rm SU}(3)/{{\mathrm{U}}(1)}_{k,\ell}$ (Aloff-Wallach
spaces) are diffeomorphic, and therefore homeomorphic, if the corresponding
${\mathrm{U}}(1)$ subgroups are conjugate in ${\rm SU}(3)$. However,
Aloff-Wallach spaces are not necessarily homeomorphic, and even when they are,
they may sometimes be non-diffeomorphic. This subtle problem is investigated in
[14]. Consider $S_{3}$ acting on the triple $(k,\ell,-k-\ell)$, and identify
this finite group with the Weyl group of ${\rm SU}(3)$. Take $\sigma\in S_{3}$. One
can show [19] that the action of the latter on the Cartan torus changes
${\mathrm{U}}(1)_{k,\ell}$ to ${\mathrm{U}}(1)_{\sigma(k),\sigma(\ell)}$. It
is therefore enough to assume that $k\geq\ell\geq 0$, and that $k$ and $\ell$
are co-prime (multiplying $(k,\ell)$ by an integer does not change the
subgroup). One recovers the special cases ${\mathrm{U}}(1)_{Y}$ for
$(k,\ell)=(1,1)$, and ${\mathrm{U}}(1)_{I}$ for $(k,\ell)=(1,-1)\sim(1,0)$.
Another labelling possibility (that hides the roles played by $k$ and $\ell$)
is to introduce a single index $\upsilon$: up to an appropriate scaling of the
generator $\tfrac{k-\ell}{2}\lambda_{3}+\tfrac{k+\ell}{2}\sqrt{3}\lambda_{8}$,
the same one-dimensional subgroup ${\mathrm{U}}(1)_{k,\ell}$ (that one may
call ${\mathrm{U}}(1)_{\upsilon}$) is generated by
$\upsilon\,\lambda_{3}+\sqrt{3}\,\lambda_{8}$.
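The rescaled generator can be checked directly: with $\lambda_{3}=\mathrm{diag}(1,-1,0)$ and $\sqrt{3}\,\lambda_{8}=\mathrm{diag}(1,1,-2)$, the combination $\tfrac{k-\ell}{2}\lambda_{3}+\tfrac{k+\ell}{2}\sqrt{3}\,\lambda_{8}$ has diagonal $(k,\ell,-(k+\ell))$, so exponentiating it indeed sweeps out ${\mathrm{U}}(1)_{k,\ell}$. A minimal numerical verification (ours, not the paper's):

```python
import numpy as np

lam3 = np.diag([1.0, -1.0, 0.0])
lam8 = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3)

def generator(k, l):
    # (k - l)/2 * lam3 + (k + l)/2 * sqrt(3) * lam8
    return (k - l)/2 * lam3 + (k + l)/2 * np.sqrt(3) * lam8

# diagonal is (k, l, -(k+l)), so exp(i*phi*generator) gives
# diag(e^{ik phi}, e^{il phi}, e^{-i(k+l) phi}), i.e. U(1)_{k,l}
for k, l in [(1, 1), (1, 0), (2, -1), (5, 3)]:
    assert np.allclose(np.diag(generator(k, l)), [k, l, -(k + l)])
print("generator of U(1)_{k,l} has diagonal (k, l, -(k+l))")
```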
##### Notations.
It is traditional in physics to introduce the operators
$I=\tfrac{1}{2}\,L_{3}$ (isospin), $Y=\tfrac{1}{\sqrt{3}}\,L_{8}$
(hypercharge) and $Q=I+Y/2$ (electric charge). We shall use these notations.
In the fundamental representation, where one replaces $L_{a}$ by
$\lambda_{a}$, these operators $I,Y,Q$ are therefore respectively represented
by the diagonal matrices $\text{diag}(1/2,-1/2,0)$,
$\text{diag}(1/3,1/3,-2/3)$ and $\text{diag}(2/3,-1/3,-1/3)$. One also calls
${\mathrm{U}}(1)_{I}$, ${\mathrm{U}}(1)_{Y}$, and ${\mathrm{U}}(1)_{Q}$, the
subgroups respectively generated by $\lambda_{3}$, by $\lambda_{8}$ and by
$Q$. When the right isometry group is a Cartan subgroup, our above specific
choice for $K$ amounts to take it equal to
${\mathrm{U}}(1)_{I}\times{\mathrm{U}}(1)_{Y}$. Notice that the subgroups
${\mathrm{U}}(1)_{Y}=\\{(e^{i\phi/3},e^{i\phi/3},e^{i(-2)\phi/3})\\}$ and
${\mathrm{U}}(1)_{Q}=\\{(e^{i(2)\phi/3},e^{i(-1)\phi/3},e^{i(-1)\phi/3})\\}$,
with $e^{i\phi}\in S^{1}$, equivalently $e^{i\phi/3}\in S^{1}$, respectively
equal to ${\mathrm{U}}(1)_{1,1}$ and
${\mathrm{U}}(1)_{2,-1}={\mathrm{U}}(1)_{-2,1}$, are conjugate in ${\rm
SU}(3)$ by a permutation of the Weyl group $S_{3}$ (the triple $(-2,1,1)$
being equivalent to $(1,1,-2)$), but they are not conjugate to the subgroup
${\mathrm{U}}(1)_{I}$.
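This Weyl conjugacy is easy to exhibit concretely: a 3-cycle permutation matrix has determinant $+1$, hence lies in ${\rm SU}(3)$, and conjugation by it permutes the diagonal entries. The following check (ours) maps ${\mathrm{U}}(1)_{Q}={\mathrm{U}}(1)_{2,-1}$ onto ${\mathrm{U}}(1)_{Y}={\mathrm{U}}(1)_{1,1}$:

```python
import numpy as np

def D(k, l, phi):   # diag(e^{ik phi}, e^{il phi}, e^{-i(k+l) phi})
    return np.diag(np.exp(1j * phi * np.array([k, l, -(k + l)])))

# 3-cycle permutation: even permutation, so det = +1 and P lies in SU(3)
P = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], float)
assert abs(np.linalg.det(P) - 1) < 1e-12

# conjugation by P cyclically permutes diagonal entries,
# sending the triple (2, -1, -1) to (-1, -1, 2), i.e. U(1)_Q onto U(1)_Y
for phi in np.linspace(0, 2*np.pi, 7):
    assert np.allclose(P @ D(2, -1, phi) @ P.T, D(1, 1, -phi))
print("U(1)_Q and U(1)_Y are conjugate in SU(3)")
```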
Some mathematical readers could ask why physicists prefer to define $Q$ and
$Y$ as before, without incorporating a multiplicative factor equal to $3$ in
the definition, a choice that would indeed look more natural since irreps of
${\mathrm{U}}(1)$ are labelled by integers. The problem is that, by so doing,
quarks (identified with basis vectors of the fundamental representations)
would have charge $\pm 1$ and the proton (identified with a specific vector in
the tensor cube of the defining representation) would have charge $+3$.
However, conventionally, the latter has electric charge $+1$ (minus the charge
of the electron). One could of course suggest modifying the standard
terminology and redefining the notion of electric charge in such a way that the
electron has electric charge $-3$, but this is not going to happen!
For this reason $Y$ and $Q$ are defined as above, and quarks turn out to have
“fractional electric charge”: $\pm 1/3$ or $\pm 2/3$.
##### Parametrizations.
We now give parametrizations for the dual metric $h^{-1}$ in the basis
$(X_{a})$ which is orthonormal for the Killing metric, assuming that the
isometry group of $h$ is ${\rm SU}(3)\times K$. Remember that in the defining
representation, the vector fields $X_{a}$, at the origin, are represented by
matrices $\tfrac{i}{2\sqrt{3}}\lambda_{a}$. The reader can obtain the
following results by imposing the vanishing of the Lie derivative of $h^{-1}$
with respect to the generators $\lambda_{k}$ of $K$, i.e., 999As usual, a
summation over repeated indices is understood.
$h^{ij}([\lambda_{k},\lambda_{i}]\otimes\lambda_{j}+\lambda_{i}\otimes[\lambda_{k},\lambda_{j}])=0$
(1)
For the Killing metric, $K$ is ${\rm SU}(3)$, and we have
$k^{-1}=\delta^{ab}\,X_{a}\otimes X_{b}$, in other words, $k^{ab}$ is the unit
matrix $8\times 8$. Bi-invariant metrics are multiples of the Killing metric
$k$ since ${\rm SU}(3)$ is simple, and they have the same isometry group; they
read $h^{-1}=\alpha\,\delta^{ab}\,X_{a}\otimes X_{b}$ where $\alpha$ is some
real constant.
In the same basis $(X_{a})$ the parametrization of $h^{-1}$, with components
$h^{ab}$, for the other choices of $K$ specified in the list (2), reads as in
the following table (3) (the generic parameters appearing in these expressions
are arbitrary real numbers and the dots stand for $0$’s) :
$\begin{array}[]{ccc}&{\mathrm{S}O}(3):\\{\lambda_{2},\lambda_{5},\lambda_{7}\\},\qquad{\mathrm{U}}(2):\;\\{\lambda_{1},\lambda_{2},\lambda_{3};\lambda_{8}\\},\qquad{\mathrm{U}}(1)\times{\mathrm{U}}(1):\;\\{\lambda_{3},\lambda_{8}\\}\\\
&{\mathrm{U}}(1)_{I}:\;\\{\lambda_{3}\\},\qquad\qquad{\mathrm{U}}(1)_{Y}:\;\\{\lambda_{8}\\}.\end{array}$
(2)
$\begin{array}[]{ccc}\left(\begin{array}[]{cccccccc}\alpha&.&.&.&.&.&.&.\\\
.&\beta&.&.&.&.&.&.\\\ .&.&\alpha&.&.&.&.&.\\\ .&.&.&\alpha&.&.&.&.\\\
.&.&.&.&\beta&.&.&.\\\ .&.&.&.&.&\alpha&.&.\\\ .&.&.&.&.&.&\beta&.\\\
.&.&.&.&.&.&.&\alpha\\\
\end{array}\right)&\left(\begin{array}[]{cccccccc}\alpha&.&.&.&.&.&.&.\\\
.&\alpha&.&.&.&.&.&.\\\ .&.&\alpha&.&.&.&.&.\\\ .&.&.&\beta&.&.&.&.\\\
.&.&.&.&\beta&.&.&.\\\ .&.&.&.&.&\beta&.&.\\\ .&.&.&.&.&.&\beta&.\\\
.&.&.&.&.&.&.&\gamma\\\
\end{array}\right)&\left(\begin{array}[]{cccccccc}\alpha&.&.&.&.&.&.&.\\\
.&\alpha&.&.&.&.&.&.\\\ .&.&\beta&.&.&.&.&\zeta\\\ .&.&.&\gamma&.&.&.&.\\\
.&.&.&.&\gamma&.&.&.\\\ .&.&.&.&.&\delta&.&.\\\ .&.&.&.&.&.&\delta&.\\\
.&.&\zeta&.&.&.&.&\varepsilon\\\ \end{array}\right)\\\ &&\\\
K={\mathrm{S}O}(3)&K={\mathrm{U}}(2)&K={\mathrm{U}}(1)\times{\mathrm{U}}(1)\end{array}$
$\begin{array}[]{ccc}\left(\begin{array}[]{cccccccc}\alpha&.&.&.&.&.&.&.\\\
.&\alpha&.&.&.&.&.&.\\\ .&.&\beta&.&.&.&.&\zeta\\\
.&.&.&\gamma&.&\theta&\eta&.\\\ .&.&.&.&\gamma&\eta&-\theta&.\\\
.&.&.&\theta&\eta&\delta&.&.\\\ .&.&.&\eta&-\theta&.&\delta&.\\\
.&.&\zeta&.&.&.&.&\epsilon\\\
\end{array}\right)&\left(\begin{array}[]{cccccccc}\varkappa_{11}&\varkappa_{12}&\varkappa_{13}&.&.&.&.&\epsilon_{1}\\\
\varkappa_{12}&\varkappa_{22}&\varkappa_{23}&.&.&.&.&\epsilon_{2}\\\
\varkappa_{13}&\varkappa_{23}&\varkappa_{33}&.&.&.&.&\epsilon_{3}\\\
.&.&.&\alpha&.&\gamma&\delta&.\\\ .&.&.&.&\alpha&-\delta&\gamma&.\\\
.&.&.&\gamma&-\delta&\beta&.&.\\\ .&.&.&\delta&\gamma&.&\beta&.\\\
\epsilon_{1}&\epsilon_{2}&\epsilon_{3}&.&.&.&.&\epsilon_{8}\\\
\end{array}\right)&{}\hfil\\\ &&\\\
K={\mathrm{U}}(1)_{I},\,\text{or}\,{\mathrm{U}}(1)_{k,-k}&K={\mathrm{U}}(1)_{Y},\,\text{or}\,{\mathrm{U}}(1)_{k,k}&{}\hfil\end{array}$
(3)
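The parameter counts in table (3) can be checked numerically. Condition (1) is equivalent to $[F_k,h^{-1}]=0$, where $F_k$ is the real antisymmetric matrix with entries $(F_k)^c{}_b={f_{kb}}^c$; the dimension of the space of symmetric solutions is the number of free parameters. The sketch below (ours, using numpy; not part of the paper) computes this null-space dimension for the generator sets listed in (2):

```python
import numpy as np
import itertools

def E(i, j):
    m = np.zeros((3, 3), complex); m[i-1, j-1] = 1; return m

lam = [E(1,2)+E(2,1), 1j*(E(2,1)-E(1,2)), E(1,1)-E(2,2),
       E(1,3)+E(3,1), 1j*(E(3,1)-E(1,3)), E(2,3)+E(3,2),
       1j*(E(3,2)-E(2,3)), (E(1,1)+E(2,2)-2*E(3,3))/np.sqrt(3)]

f = np.zeros((8, 8, 8))   # [lam_a, lam_b] = 2i f_{ab}^c lam_c
for a, b, c in itertools.product(range(8), repeat=3):
    comm = lam[a] @ lam[b] - lam[b] @ lam[a]
    f[a, b, c] = np.real(np.trace(comm @ lam[c]) / 4j)

def n_free_parameters(gens):
    """Dimension of the space of symmetric h with [F_k, h] = 0 for all
    generators, F_k being the matrix (F_k)^c_b = f_{kb}^c."""
    sym_basis = [(p, q) for p in range(8) for q in range(p, 8)]
    cols = []
    for (p, q) in sym_basis:
        h = np.zeros((8, 8)); h[p, q] = h[q, p] = 1.0
        cols.append(np.concatenate(
            [(f[k].T @ h - h @ f[k].T).ravel() for k in gens]))
    sv = np.linalg.svd(np.array(cols).T, compute_uv=False)
    return len(sym_basis) - int((sv > 1e-10).sum())

# 0-based indices: lam_2, lam_5, lam_7 -> SO(3); lam_1,2,3 and lam_8 -> U(2);
# lam_3, lam_8 -> the Cartan torus U(1) x U(1)
print(n_free_parameters([1, 4, 6]))      # SO(3): alpha, beta -> 2
print(n_free_parameters([0, 1, 2, 7]))   # U(2): alpha, beta, gamma -> 3
print(n_free_parameters([2, 7]))         # U(1) x U(1) -> 6
```

The counts agree with the number of distinct parameters in the corresponding columns of table (3), and with the irreducible-representation argument sketched at the end of this subsection.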
The parametrizations obtained for the matrices given in table (3), for the
specific subgroups $K$ given in (2), should be understood as generic ones:
obviously, for particular choices of the real parameters entering these
matrices the right isometry group can be larger than $K$ (for instance, by
taking all the diagonal coefficients equal to $1$ and by setting to $0$ the
off-diagonal ones, one recovers the Killing metric, for which $K={\rm SU}(3)$).
More generally, the matrices $h^{-1}$ that obey the Killing equation (1) for a
chosen group $K$, as specified in (2), determine left-invariant metrics for
which the right isometry group is equal either to $K$ or to an over-group of
$K$ that should be equal or conjugate to one member of the list (2).
Remarks (proofs, using (1) and the commutation relations in
$\mathfrak{su}(3)$, are immediate, and left to the reader):
$\bullet$ Invariance of a metric under ${\rm SU}(2)$ implies invariance under
${\mathrm{U}}(2)$.
$\bullet$ Imposing invariance under ${\mathrm{U}}(1)_{k,0}$, with $k>0$,
amounts, up to conjugacy, to imposing invariance under ${\mathrm{U}}(1)_{k,-k}$,
and therefore gives for $h^{-1}$ the same parametrization as the one obtained
when $K={\mathrm{U}}(1)_{I}$.
$\bullet$ Invariance under any ${\mathrm{U}}(1)_{k,\ell}$, with $k>\ell>0$,
implies invariance under ${\mathrm{U}}(1)\times{\mathrm{U}}(1)$.
We can therefore restrict our attention to the subgroups $K$ given by the list
(2).
The above parametrizations were already obtained and commented on in [4] for
the various choices of the subgroup $K$. In the same reference, an application to
particle physics was given, namely the interpretation of the mass operator for
various types of mesons in terms of the Laplacian associated to left-invariant
metrics for which $\text{Lie}(K)=\mathfrak{su}(2)\oplus\mathfrak{u}(1)$. We
shall come back to this discussion at the end of the present article.
The number of free parameters appearing in the previous expressions of
$h^{-1}$ could be a priori determined by considering these metrics as coming
from an $ad(K)$ invariant bilinear form at the origin of the coset space
$({\rm SU}(3)\times K)/K$, with $K$ diagonally embedded, and by reducing the
isotropy action of $K$ in the tangent space at the identity ($\mathbb{R}^{8}$)
into a sum of real irreducible representations (irreps).
##### Pseudo-Riemannian structures.
In view of using the above parametrizations to explicitly determine various
curvature tensors, one wants to have as few free coefficients as possible. It
is therefore useful to consider pseudo-Riemannian structures, rather than
pseudo-Riemannian metrics. The group of diffeomorphisms of a manifold acts by
pullback on its space of (pseudo) Riemannian metrics. The quotient space is,
by definition, the space of Riemannian structures. The stabilizer of this
action at a given point, i.e., at a given metric, is the isometry group of
this metric. Two metrics belonging to the same orbit have conjugated
stabilizers, i.e., conjugated isometry groups, and each stratum (which may
contain distinct orbits) of the obtained stratification is characterized by
an isometry group, up to conjugacy. It may also happen that distinct metrics
belonging to the same orbit have the same isometry group; we shall meet one
such example in what follows.
Left-invariant metrics of signature $(p,q)$ on ${\rm SU}(3)$ can be associated
with elements of $GL(8,\mathbb{R})/O(p,q)$, since they can be defined by
arbitrary symmetric bilinear forms of prescribed signature on the tangent
space at the origin of ${\rm SU}(3)$, i.e., in $\mathbb{R}^{8}$, but the
associated Riemannian structures are associated with points of the orbit space
of the latter under the action of $Ad({\rm SU}(3))\subset{\mathrm{S}O}(8)$.
Equivalence under this action generically (i.e., when the right isometry group
$K$ is trivial) reduces the number of free parameters from $36=(8\times 9)/2$
to $28=36-8$. For ${\rm SU}(3)\times K$ invariant metrics, with $K$ non
trivial, one may use rotations defined by elements of $Ad({\rm SU}(3))$ that
commute with the action of $K$ to decrease the number of parameters entering
the matrices of table (3) determined by solving equation (1).
For instance, if $K={\mathrm{U}}(1)_{I}$, this number is reduced from $8$ to
$7$: setting $(h^{-1})^{\prime}={r}^{T}.h^{-1}.r$ with $h^{-1}$ as in table
(3), and using $r=\exp(x\ f_{8})$ with $x=\arctan(\theta/\eta)$, one obtains a
new matrix $(h^{-1})^{\prime}$ of the same family that can be directly
obtained from $(h^{-1})$ by replacing only $\eta$ by
$\eta^{\prime}=\sqrt{\theta^{2}+\eta^{2}}$ and the coefficient
$\theta=h^{-1}_{(6,4)}=h^{-1}_{(4,6)}=-h^{-1}_{(5,7)}=-h^{-1}_{(7,5)}$, in
table (3), by $0$. Since $\lambda_{3}$ and $\lambda_{8}$ commute, these two
matrices $(h^{-1})$ and $(h^{-1})^{\prime}$ define left-invariant metrics that
have the same right isometry group.
In a similar way the number of parameters, if $K={\mathrm{U}}(1)_{Y}$, can be
reduced from $14$ to $11$: the $3\times 3$ symmetric sub-matrix $\varkappa$,
in the upper left corner of $h^{-1}$, can be assumed to be diagonal.
In this way one obtains respectively $1,2,3,6,7,11,28$ parameters (instead of
$1,2,3,6,8,14,36$) for the choices $K={\rm
SU}(3),{\mathrm{S}O}(3),{\mathrm{U}}(2),{\mathrm{U}}(1)\times{\mathrm{U}}(1),{\mathrm{U}}(1)_{I},{\mathrm{U}}(1)_{Y},\{e\}$.
A last simplification, further reducing by one the number of parameters, is to
consider metrics only up to scale, i.e., metrics that differ by a constant
conformal transformation (this changes the obtained curvatures by an overall
multiplicative constant). The number of parameters for the previous choices of
$K$, once we identify metrics that differ by equivalence and scaling, becomes
$0,1,2,5,6,10,27$.
##### Decomposition of a bilinear symmetric form of rank $8$ on ${\rm SU}(3)$ into irreps.
The group $G={\rm SU}(3)$ acts on the vector space of $8\times 8$ symmetric
matrices —the symmetric subspace of the tensor square of the adjoint
representation. This action is not irreducible and, denoting the irreps that
appear in this symmetric subspace by their dimension, we have the direct sum
decomposition: $36=1\oplus 8\oplus 27$, with three terms respectively
associated with the irreps of highest weights $(0,0)$, $(1,1)$, and $(2,2)$.
Let us call ${{}_{1}h^{-1}}$, ${{}_{8}h^{-1}}$, ${{}_{27}h^{-1}}$, the
projections of the dual metric $h^{-1}$ on these three vector subspaces.
Calling $d_{a,b,c}=\tfrac{1}{4}\ Tr(\lambda_{a}[\lambda_{b},\lambda_{c}]_{+})$
where
$[\lambda_{b},\lambda_{c}]_{+}=\lambda_{b}\lambda_{c}+\lambda_{c}\lambda_{b}$
is the anti-commutator, and $d_{a}$ the (symmetric) matrices with elements
$(d_{a})_{b,c}=d_{a,b,c}$, it is easy to show that
${{}_{1}h^{-1}}=\tfrac{1}{8}\,Tr(h^{-1})\,\mathbb{1}$ and that
${{}_{8}h^{-1}}=\tfrac{3}{5}\,\sum_{a=1\ldots 8}\,Tr(h^{-1}d_{a})\,d_{a}$; the
last projection, ${{}_{27}h^{-1}}$, can be obtained by difference. Let us
illustrate this decomposition by assuming that the metric $h$ belongs to the
family of metrics for which the right isometry group $K$ is (at least)
${\mathrm{U}}(2)$, with the parametrization given in (2). One obtains
immediately $h^{-1}={{}_{1}h^{-1}}+{{}_{8}h^{-1}}+{{}_{27}h^{-1}}$, with
$\begin{split}{{}_{1}h^{-1}}=&A\,\mathbb{1}\\
{{}_{8}h^{-1}}=&B\sqrt{3}\,d_{8}=B\,m_{8}\quad\text{with}\quad m_{8}=\text{diag}\left(1,1,1,-\frac{1}{2},-\frac{1}{2},-\frac{1}{2},-\frac{1}{2},-1\right)\\
{{}_{27}h^{-1}}=&C\,m_{27}\quad\text{with}\quad m_{27}=\text{diag}\left(1,1,1,-3,-3,-3,-3,9\right)\\
\text{where}\;&A=\frac{1}{8}(3\alpha+4\beta+\gamma),\quad B=\frac{1}{5}(3\alpha-2\beta-\gamma),\quad C=\frac{1}{40}(\alpha-4\beta+3\gamma)\end{split}$ (4)
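The decomposition (4) is easy to verify numerically; the sketch below (an illustration, not from the text; the sample values of $\alpha,\beta,\gamma$ are arbitrary) reconstructs the diagonal of $h^{-1}$ from the three projections:

```python
# Reconstruct the U(2)-invariant dual metric h^{-1} = diag(a, a, a, b, b, b, b, c)
# from its three projections, as in (4).  The sample values are arbitrary.
alpha, beta, gamma = 2.0, 3.0, 5.0

h_inv = [alpha]*3 + [beta]*4 + [gamma]                 # diagonal of h^{-1}
m8 = [1, 1, 1, -0.5, -0.5, -0.5, -0.5, -1]             # diagonal of m_8
m27 = [1, 1, 1, -3, -3, -3, -3, 9]                     # diagonal of m_27

A = (3*alpha + 4*beta + gamma)/8
B = (3*alpha - 2*beta - gamma)/5
C = (alpha - 4*beta + 3*gamma)/40

recon = [A + B*m8[i] + C*m27[i] for i in range(8)]
assert all(abs(recon[i] - h_inv[i]) < 1e-12 for i in range(8))
assert abs(A - sum(h_inv)/8) < 1e-12                   # singlet part = Tr(h^{-1})/8
print(A, B, C)
```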
We chose to illustrate this decomposition of $h$ (actually of $h^{-1}$) in the
case $K={\mathrm{U}}(2)$, but one can do it as well for the other cases. (The
Killing metric has a projection onto $\mathbb{1}$ only. For the
$K={\mathrm{S}O}(3)$ family (see (2)), the decomposition is as in (4) but with
$A=\frac{5\alpha}{8}+\frac{3\beta}{8}$, $B=0$ and
$C=\frac{3\alpha}{8}-\frac{3\beta}{8}$, with the same $m_{8}$ but with
$m_{27}=\text{diag}(1,-(5/3),1,1,-(5/3),1,-(5/3),1)$. For the Jensen sub-family
(Einstein metrics, see sect. 3), one has $A=\frac{19\alpha}{4}$, $B=0$
and $C=-\frac{15\alpha}{4}$.) One may notice, however, that such
decompositions (that do not seem to be much used) have no reason to be
compatible with the signature, or even with the non-degenerateness, of the
chosen bilinear form. Nevertheless one can consider families, or subfamilies,
of bilinear forms for which one or several of the above projections vanish. We
shall come back to this possibility in the last section.
### 2.4 Curvature tensors
Expressions for curvature tensors of the Levi-Civita connection (the
torsionless metric connection) defined by an invariant metric on a Lie group
can be found in various places in the literature. Unfortunately these
expressions are often written in a basis (a moving frame) for which the
chosen metric is orthonormal. Here we want to study various metrics while
keeping the same basis. For this reason we shall give expressions of the
various curvature tensors in a basis $(e_{a})$ made of arbitrary
left-invariant vector fields (these formulae can be found in [5]); we call
${x_{ab}}^{c}$ the corresponding structure constants:
$[e_{a},e_{b}]={x_{ab}}^{c}\,e_{c}$.
The chosen metric (call it $h$) defines musical isomorphisms between a vector
space and its dual; in particular, using the structure constants
${x_{ab}}^{c}$ and the metric coefficients $h_{ab}$ or $h^{ab}$, one can
define new symbols such as $x_{abc}={x_{ab}}^{d}\,h_{dc}$,
${x^{a}}_{bc}=h^{ae}\,{x_{eb}}^{d}\,h_{dc}$, etc. The term
${x^{m}}_{ik}x_{mjl}$, for instance, extracted from (5) below, actually means
$\sum_{m^{\prime},k^{\prime},l^{\prime}}\,{x_{m^{\prime}i}}^{k^{\prime}}{x_{mj}}^{l^{\prime}}\,h^{m^{\prime}m}\,h_{k^{\prime}k}\,h_{l^{\prime}l}$
when expressed in terms of structure constants and metric (or inverse metric)
coefficients. (One could write such expressions with all the indices at the
same level, writing for instance $x_{mik}x_{mjl}$, provided one uses the
second Einstein summation convention, which supposes that a fixed metric
$h$ has been chosen: an index variable that appears twice at the same level,
i.e., twice as a superscript or twice as a subscript, should be summed over
using the chosen metric or its dual.) Observe that the symbols $x_{abc}$ are not, in general,
antisymmetric with respect to the last two indices since the metric $h$ is not
assumed to be bi-invariant.
Call $R^{a}_{\;bcd}$ the components of the Riemann curvature tensor. The last
two indices ($c$ and $d$) are the form indices, and the first two ($a$ and
$b$) are the fiber indices. Using the metric $h$, one defines
$R_{abcd}=h_{aa^{\prime}}R^{a^{\prime}}_{\;bcd}$. The components of the Ricci
tensor are $\varrho_{bd}=R^{a}_{\;bad}$ and the scalar curvature is
$\tau=\varrho^{d}_{\;d}:=h^{db}\varrho_{bd}$. One can also define the Einstein
tensor $\mathtt{G}=\varrho-\frac{1}{2}\,\tau\,h$. One has:
$\begin{split}R_{abcd}=\frac{1}{4}&\big(x_{acm}\,{x_{bd}}^{m}+2x_{abm}\,{x_{cd}}^{m}-x_{bcm}\,{x_{ad}}^{m}-{x_{ab}}^{m}\,x_{mcd}+{x_{ab}}^{m}\,x_{mdc}-{x_{cd}}^{m}\,x_{mab}\\
&+{x_{cd}}^{m}\,x_{{mba}}+({x^{m}}_{ac}+{x^{m}}_{ca})(x_{mbd}+x_{mdb})-({x^{m}}_{bc}+{x^{m}}_{cb})(x_{mad}+x_{mda})\big)\end{split}$
(5)
$\varrho_{bd}=-\frac{1}{2}x_{mbn}\,{x_{nd}}^{m}-\frac{1}{2}x_{mbn}\,{x_{md}}^{n}+\frac{1}{4}{x_{mnb}}\,{x^{mn}}_{d}-\frac{1}{2}(x_{mbd}+x_{mdb}){{x^{m}}_{n}}^{n}$
(6)
$\tau=-\frac{1}{4}{x^{mk}}_{n}\,{x_{mk}}^{n}-\frac{1}{2}{{x}_{m}}^{kn}\,{x_{nk}}^{m}-{{x_{m}}_{k}}^{k}\,{{x^{m}}_{n}}^{n}$
(7)
Notice that in order to calculate the Ricci tensor for a specific left-
invariant metric, one does not need to evaluate the Riemann tensor first.
In the following we shall always express the components of the curvature
tensors in the basis $(X_{a})$ for which the Killing metric is orthonormal: we
shall take $(e_{a})=(X_{a})$, hence
${x_{ab}}^{c}=\tfrac{-1}{\sqrt{3}}{f_{ab}}^{c}$ in all cases.
Notice that the last term of (6) and (7), a trace, vanishes for unimodular
groups, in particular for ${\rm SU}(3)$, so we can safely drop it in the
practical calculations that come next.
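These formulae can be cross-checked numerically. The sketch below (an illustration, not part of the text: it computes the Ricci tensor directly from the standard Koszul formula for left-invariant metrics rather than by transcribing (5)-(7)) uses ${x_{ab}}^{c}=\tfrac{-1}{\sqrt{3}}{f_{ab}}^{c}$ with the standard Gell-Mann values of $f_{abc}$, and recovers, for the Killing metric (the identity matrix in the basis $(X_{a})$), the values $\varrho=\tfrac{1}{4}\,h$ and $\tau=2$ quoted in the next section for $\alpha=1$:

```python
import math
from itertools import permutations

# Structure constants x_{ab}^c = -f_{ab}^c/sqrt(3) in the basis (X_a) in which
# the Killing metric is the identity; f_{abc} are the standard totally
# antisymmetric su(3) constants (Gell-Mann conventions), 0-indexed here.
F = {(0, 1, 2): 1.0, (0, 3, 6): 0.5, (0, 4, 5): -0.5, (1, 3, 5): 0.5,
     (1, 4, 6): 0.5, (2, 3, 4): 0.5, (2, 5, 6): -0.5,
     (3, 4, 7): math.sqrt(3)/2, (5, 6, 7): math.sqrt(3)/2}
x = [[[0.0]*8 for _ in range(8)] for _ in range(8)]
for (a, b, c), v in F.items():
    # itertools.permutations yields the six orderings with parities +,-,-,+,+,-
    for (i, j, k), s in zip(permutations((a, b, c)), (1, -1, -1, 1, 1, -1)):
        x[i][j][k] = -s*v/math.sqrt(3)

# With h = identity, the Koszul formula for left-invariant vector fields gives
# Gamma_{ab}^c = (x_{abc} - x_{bca} + x_{cab})/2 = x_{abc}/2 (total antisymmetry).
G = [[[0.5*x[a][b][c] for c in range(8)] for b in range(8)] for a in range(8)]

# rho(e_b, e_c) = trace of the map e_a -> R(e_a, e_b)e_c.
rho = [[sum(G[b][c][e]*G[a][e][a] - G[a][c][e]*G[b][e][a] - x[a][b][e]*G[e][c][a]
            for a in range(8) for e in range(8))
        for c in range(8)] for b in range(8)]

for b in range(8):
    for c in range(8):
        assert abs(rho[b][c] - (0.25 if b == c else 0.0)) < 1e-12
tau = sum(rho[d][d] for d in range(8))   # h = identity, so the trace is direct
print(tau)   # ~2.0: tau = 2*alpha with alpha = 1, Einstein constant kappa = 1/4
```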
## 3 Pseudo-Riemannian homogeneous Einstein metrics on ${\rm SU}(3)$
As before, the isometry group of a left-invariant metric $h$ on ${\rm SU}(3)$
is denoted ${\rm SU}(3)\times K$. It is clear that any subgroup of $K$ is also
a group of isometries of such a metric $h$. The inverse metrics $h^{-1}$ are
parametrized as in sect. 2.3, but we can also incorporate an overall (constant)
real scaling factor in their definition. The Einstein condition for the metric
$h$ reads
$\varrho=\kappa\,h$
the real number $\kappa$ being called the Einstein constant. Equivalently, one
can solve the Einstein equation $\mathtt{G}+\Lambda\,h=0$, where $\mathtt{G}$
is the Einstein tensor; $\Lambda$ is the so-called cosmological constant
(although there is no cosmological interpretation in the present context!).
For Einstein metrics one has obviously $\tau=8\kappa$ since $\text{dim}({\rm
SU}(3))=8$; moreover $\Lambda=\tau/2-\kappa$, therefore $\Lambda=3\kappa$.
Remark. A pseudo-Riemannian metric on ${\rm SU}(3)$ which is left invariant
and $K$-right invariant, with $K$ a Lie subgroup, is therefore $ad_{K}$
invariant and passes to an ${\rm SU}(3)$-invariant pseudo-Riemannian metric on
the quotient ${\rm SU}(3)/K$, but even if the metric one starts from is an
Einstein metric, the metric on the homogeneous space ${\rm SU}(3)/K$ has no
reason to be Einstein (and in general it is not). For instance the homogeneous
metrics induced on Aloff-Wallach spaces from the Killing metric on ${\rm
SU}(3)$ are not Einstein (and the so-called Aloff-Wallach metrics [1], which
are ${\rm SU}(3)$ invariant and have positive sectional curvature, are not
Einstein either), although each of these spaces admits a homogeneous Einstein
metric and even a Lorentz-Einstein metric (see [22]). The aim of the previous
brief comment is only to stress the fact that our purpose in the present
section is to study the Einstein condition for left-invariant metrics on ${\rm
SU}(3)$ itself: we shall not study what happens on its quotients. By way of
contrast, however, notice that the calculations performed in this section are
the same for any Lie group with Lie algebra ${\mathrm{Lie}}({\rm SU}(3))$, in
particular for ${\rm SU}(3)/Z_{3}$, which is not homotopically trivial.
We now study the Einstein condition on ${\rm SU}(3)$ for the various
parametrizations of the metrics for which the right isometry group is $K$, as
in (2), (3), or an over-group of the latter.
#### $K={\rm SU}(3)$
These are the bi-invariant metrics $h=k/\alpha$, where $k$ is the Killing
metric. For a simple Lie group $G$, the Ricci tensor of $k$ is
$\varrho=\frac{1}{4}\,k$. It therefore defines an Einstein space with Einstein
constant $\kappa=1/4$. Its scalar curvature is $\tau=\text{dim}(G)/4$.
The Ricci tensor is invariant under constant scaling of the metric (a general
property), the Einstein condition is therefore also satisfied when $k$ is
scaled by $1/\alpha$, the Einstein constant becoming $\kappa=\alpha/4$, with
$\tau=\alpha\,\text{dim}(G)/4$; therefore $\tau=2\alpha$ for $G={\rm SU}(3)$.
Moreover $\Lambda=3\alpha/4$.
#### $K={\mathrm{S}O}(3)$
For these metrics, the Ricci tensor is diagonal, with diagonal
$\left\{\frac{1}{2}-\frac{\alpha}{4\beta},\frac{1}{24}\left(\frac{5\alpha^{2}}{\beta^{2}}+1\right),\frac{1}{2}-\frac{\alpha}{4\beta},\frac{1}{2}-\frac{\alpha}{4\beta},\frac{1}{24}\left(\frac{5\alpha^{2}}{\beta^{2}}+1\right),\frac{1}{2}-\frac{\alpha}{4\beta},\frac{1}{24}\left(\frac{5\alpha^{2}}{\beta^{2}}+1\right),\frac{1}{2}-\frac{\alpha}{4\beta}\right\}$
The scalar curvature, for this family, is
$\tau=\frac{-5\alpha^{2}+20\alpha\beta+\beta^{2}}{8\beta}$. The Einstein
condition gives a second-degree equation, with two real solutions:
$\alpha=\beta$, $\kappa=\alpha/4$, the already obtained Killing metric, and
another solution, the Jensen metric [12]: $\beta=11\alpha$, with Einstein
constant $\kappa=\tfrac{21}{44}\,\alpha$. Both are properly Riemannian
(signature $(8,0)$). We recover the scalar curvature $\tau=2\alpha$ in the
first case, and find $\tau=42\,\alpha/11$ in the second.
$h^{-1}=\left(\begin{array}[]{cccccccc}1&.&.&.&.&.&.&.\\
.&11&.&.&.&.&.&.\\
.&.&1&.&.&.&.&.\\
.&.&.&1&.&.&.&.\\
.&.&.&.&11&.&.&.\\
.&.&.&.&.&1&.&.\\
.&.&.&.&.&.&11&.\\
.&.&.&.&.&.&.&1\\
\end{array}\right)$ (8)
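Both solutions can be checked entrywise against the diagonal Ricci entries displayed above: the Einstein condition $\varrho=\kappa\,h$ holds for $(\alpha,\beta)=(1,1)$ with $\kappa=1/4$ and for $(\alpha,\beta)=(1,11)$ with $\kappa=21/44$. A sketch (an illustration, not from the text):

```python
# Check the two Einstein solutions of the SO(3)-invariant family against the
# diagonal Ricci entries displayed above (positions 2, 5, 7 carry beta).
def ricci_diag(alpha, beta):
    r_a = 0.5 - alpha/(4*beta)                  # positions 1, 3, 4, 6, 8
    r_b = (5*alpha**2/beta**2 + 1)/24           # positions 2, 5, 7
    return [r_a, r_b, r_a, r_a, r_b, r_a, r_b, r_a]

def metric_diag(alpha, beta):                   # diagonal of h = (h^{-1})^{-1}
    return [1/v for v in (alpha, beta, alpha, alpha, beta, alpha, beta, alpha)]

# Killing metric (alpha = beta) and Jensen metric (beta = 11*alpha), alpha = 1.
for alpha, beta, kappa in [(1.0, 1.0, 0.25), (1.0, 11.0, 21/44)]:
    rho, h = ricci_diag(alpha, beta), metric_diag(alpha, beta)
    assert all(abs(rho[i] - kappa*h[i]) < 1e-12 for i in range(8))
print(ricci_diag(1.0, 11.0)[0])   # ~0.477 = 21/44
```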
One can recover these solutions as follows, without calculating the Ricci
tensor. Write ${\rm SU}(3)$ as a principal bundle with typical fiber
${\mathrm{S}O}(3)$ over the irreducible symmetric space ${\rm
SU}(3)/{\mathrm{S}O}(3)$, and consider a first family of metrics $h(t)$
obtained by dilating the Killing metric in the direction of the fibers by an
arbitrary coefficient $t^{2}$; their scalar curvature is
$\tau(h(t))=-\frac{5t^{2}}{8}+\frac{1}{8t^{2}}+\frac{5}{2}$. Then define a
second family $\widehat{h}(t)=(1/t^{2})^{3/8}\,h(t)$, the overall scaling
coefficient being chosen in such a way that the Riemannian volume stays
constant when $t$ varies (the determinant of $h(t)$ is $(t^{2})^{3}$). The
stationary points, with respect to $t$, of the scalar curvature
$\tau(\widehat{h}(t))=(t^{2})^{3/8}\,\tau(h(t))$ of the metrics
$\widehat{h}(t)$ are Einstein metrics [12]; one obtains the equation
$\tfrac{d}{dt}\tau(\widehat{h}(t))=\text{coeff}\times(t^{2}-1)(t^{2}-1/11)$,
hence the solutions.
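The factorization just quoted can be made explicit (a short check, written in the variable $u=t^{2}$):

$\frac{d}{du}\left[u^{3/8}\left(-\frac{5u}{8}+\frac{1}{8u}+\frac{5}{2}\right)\right]=-\frac{5}{64}\,u^{-13/8}\,(11u-1)(u-1)$

so the stationary points are $u=t^{2}=1$ (the Killing metric) and $u=t^{2}=1/11$ (the Jensen metric $\beta=11\alpha$, since the fiber coefficient of $h^{-1}$ is $1/t^{2}$).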
The above is a particular case of a general construction ([6], [23], see also
[5]). Assuming that both $G$ and $K$ are simple, writing $G$ as a $K$
principal bundle over $G/K$, and dilating the Killing metric of $G$ by $t^{2}$
in the direction of fibers, one first obtains the following formula for the
scalar curvature of the metrics $h(t)$ on $G$: $\tau(h(t))=\tfrac{s}{2}+c\
\tfrac{k}{4}\tfrac{1}{t^{2}}-k(1-c)\tfrac{t^{2}}{4}$, where $n=\text{dim}\,G$,
$k=\text{dim}\,K$, $s=\text{dim}\,G/K$ and $c$ is the embedding coefficient of
$K$ in $G$. This result is immediately obtained by Kaluza-Klein dimensional
reduction, see for instance [5], applied to this particular fibration (in this
simple case one can use O’Neill formulae for Riemannian submersions with
totally geodesic fibers, see [11], [17]). The stationary points of the scalar
curvature $\tau(\widehat{h}(t))=(t^{2})^{k/n}\,\tau(h(t))$ of the metrics
$\widehat{h}(t)=(t^{2})^{-k/n}\,h(t)$ are Einstein metrics [12]. For an
irreducible symmetric pair $(G,K)$ one has $c=1-\tfrac{s}{2k}$; in that case
$\tfrac{d}{dt}\tau(\widehat{h}(t))=\text{coeff}\times(t^{2}-1)(t^{2}-(2k-s)/(2k+s))$.
The previous results are recovered for $G={\rm SU}(3)$, $K={\mathrm{S}O}(3)$,
using $n=8$, $k=3$ (hence $s=5$ and $c=1/6$).
#### $K={\mathrm{U}}(2)$
The Ricci tensor is diagonal, with non-zero coefficients
$\varrho_{11}=\varrho_{22}=\varrho_{33}$,
$\varrho_{44}=\varrho_{55}=\varrho_{66}=\varrho_{77}$, $\varrho_{88}$,
respectively given by
$\frac{1}{12}\left(\frac{\beta^{2}}{\alpha^{2}}+2\right),\quad\frac{1}{8}\left(-\frac{\beta}{\alpha}-\frac{\beta}{\gamma}+4\right),\quad\frac{\beta^{2}}{4\gamma^{2}}$
The Einstein condition gives only one real solution, $\alpha=\beta=\gamma$,
i.e., the family of bi-invariant metrics (proportional to the Killing metric).
#### $K={\mathrm{U}}(1)\times{\mathrm{U}}(1)$
The non-zero components of the Ricci tensor are $\varrho_{11}=\varrho_{22}$,
$\varrho_{33}$, $\varrho_{44}=\varrho_{55}=\varrho_{66}=\varrho_{77}$,
$\varrho_{88}$, and $\varrho_{38}=\varrho_{83}$, respectively equal to
$\begin{split}&\frac{1}{12}\left(\frac{\gamma\delta}{\alpha^{2}}+\frac{2\alpha\epsilon}{\zeta^{2}-\beta\epsilon}-\frac{\gamma}{\delta}-\frac{\delta}{\gamma}+6\right),\quad\frac{\epsilon^{2}\left(4\alpha^{2}+\gamma^{2}+\delta^{2}\right)+3\zeta^{2}\left(\gamma^{2}+\delta^{2}\right)+2\sqrt{3}\zeta\epsilon\left(\delta^{2}-\gamma^{2}\right)}{24\left(\zeta^{2}-\beta\epsilon\right)^{2}}\\\
&\frac{1}{24}\left(\frac{2\alpha\delta}{\gamma^{2}}-\frac{2\alpha}{\delta}-\frac{2\delta}{\alpha}+\frac{\gamma\left(3\beta-2\sqrt{3}\zeta+\epsilon\right)}{\zeta^{2}-\beta\epsilon}+12\right),\quad\frac{\zeta^{2}\left(4\alpha^{2}+\gamma^{2}+\delta^{2}\right)+3\beta^{2}\left(\gamma^{2}+\delta^{2}\right)+2\sqrt{3}\beta\zeta\left(\delta^{2}-\gamma^{2}\right)}{24\left(\zeta^{2}-\beta\epsilon\right)^{2}}\\\
&\frac{-\zeta\epsilon\left(4\alpha^{2}+\gamma^{2}+\delta^{2}\right)+\beta\gamma^{2}\left(\sqrt{3}\epsilon-3\zeta\right)-\beta\delta^{2}\left(3\zeta+\sqrt{3}\epsilon\right)+\sqrt{3}\zeta^{2}(\gamma-\delta)(\gamma+\delta)}{24\left(\zeta^{2}-\beta\epsilon\right)^{2}}\end{split}$
The Einstein condition gives only one real solution,
$\alpha=\beta=\gamma=\delta=\epsilon$, $\zeta=0$, i.e., the known family of
bi-invariant metrics.
#### $K={\mathrm{U}}(1)_{I}$
The parametrization of a generic left-invariant metric, with
$K={\mathrm{U}}(1)_{I}$, involves the eight parameters
${\alpha,\beta,\gamma,\delta,\epsilon,\zeta,\eta,\theta}$ but we know that we
can fix the scale $\alpha=1$, and set the parameter $\theta$ to $0$ since
different choices for $\theta$ give metrics corresponding to the same
Riemannian structure (see our discussion at the end of sect. 2.3). We are left
with six parameters. The Einstein condition involves one parameter more, the
Einstein constant $\kappa$. We did not solve this set of equations in full
generality: we restricted our attention to the family of metrics obtained by
imposing the further constraint $\gamma=\delta$; in that case, one of the
equations implies that $\zeta$ should vanish.
There are five solutions (only three if one imposes $\eta\geq 0$). The first
is the Killing metric, as expected. The second and third solutions only differ
by a sign flip in the value of the parameter $\eta$; they are properly
Riemannian Einstein metrics and they are equivalent to the Jensen solution.
The last two solutions (again, they only differ by the sign of $\eta$) are
Einstein metrics with Lorentzian signature.
The non-zero components of the Ricci tensor are $\varrho_{11}=\varrho_{22}$,
$\varrho_{33}$, $\varrho_{44}=\varrho_{55}$, $\varrho_{66}=\varrho_{77}$,
$\varrho_{7,4}=\varrho_{6,5}=\varrho_{5,6}=\varrho_{4,7}$,
$\varrho_{3,8}=\varrho_{8,3}$, $\varrho_{88}$. These seven expressions are
too lengthy to be displayed in an article, even after setting $\theta=0$. As
already mentioned, one can show (it is almost straightforward, but cumbersome!)
that the hypothesis $\gamma=\delta$, on top of the Einstein condition,
implies that $\zeta$ should vanish; we shall therefore only display the non-
zero components of the Ricci tensor and of the metric in this simpler case,
which also implies that $\varrho_{44}=\varrho_{55}$ should be equal to
$\varrho_{66}=\varrho_{77}$ and that $\varrho_{3,8}=\varrho_{8,3}$ is $0$.
Removing duplicates, we are left with five non-zero distinct components of the
Ricci tensor:
$\begin{split}\varrho_{11}=\varrho_{22}=&\frac{1}{12}\left(\frac{(\gamma-\eta)(\gamma+\eta)}{\alpha^{2}}-\frac{2\alpha}{\beta}+\frac{4\gamma^{2}}{\eta^{2}-\gamma^{2}}+8\right),\qquad\varrho_{33}=\frac{2\alpha^{2}+\gamma^{2}+\eta^{2}}{12\beta^{2}},\\
\varrho_{44}=\varrho_{55}=\varrho_{66}=\varrho_{77}=&\frac{1}{24}\left(\gamma\left(\frac{8\alpha\eta^{2}}{\left(\gamma^{2}-\eta^{2}\right)^{2}}-\frac{2}{\alpha}-\frac{1}{\beta}+\frac{12\eta^{2}\epsilon}{\left(\gamma^{2}-\eta^{2}\right)^{2}}-\frac{3}{\epsilon}\right)+12\right),\\
\varrho_{7,4}=\varrho_{6,5}=\varrho_{5,6}=\varrho_{4,7}=&\frac{1}{24}\eta\left(-\frac{4\gamma^{2}(2\alpha+3\epsilon)}{\left(\gamma^{2}-\eta^{2}\right)^{2}}+\frac{2}{\alpha}-\frac{1}{\beta}+\frac{3}{\epsilon}\right),\\
\varrho_{88}=&\frac{-2\eta^{2}\left(\gamma^{2}+2\epsilon^{2}\right)+\gamma^{4}+\eta^{4}}{4\epsilon^{2}(\gamma-\eta)(\gamma+\eta)}\end{split}$
The dual metric $h^{-1}$ is specified by the matrix $h^{ij}$ given in (9):
$h^{-1}=\left(\begin{array}[]{cccccccc}\alpha&.&.&.&.&.&.&.\\
.&\alpha&.&.&.&.&.&.\\
.&.&\beta&.&.&.&.&.\\
.&.&.&\gamma&.&.&\eta&.\\
.&.&.&.&\gamma&\eta&.&.\\
.&.&.&.&\eta&\gamma&.&.\\
.&.&.&\eta&.&.&\gamma&.\\
.&.&.&.&.&.&.&\epsilon\\
\end{array}\right)$ (9)
The Einstein condition reads $\varrho_{ij}=\kappa\,h_{ij}$, where the non-zero
components of the matrix $h$ are as follows:
$h_{11}=h_{22}=\frac{1}{\alpha},\quad h_{33}=\frac{1}{\beta},\quad h_{44}=h_{55}=h_{66}=h_{77}=\frac{\gamma}{\gamma^{2}-\eta^{2}},\quad h_{7,4}=h_{6,5}=h_{5,6}=h_{4,7}=\frac{\eta}{\eta^{2}-\gamma^{2}},\quad h_{88}=\frac{1}{\epsilon}$
We have five non-linear equations and five unknowns: the five parameters
$\alpha,\beta,\gamma,\epsilon,\eta$ (but one can take $\alpha=1$), and the
Einstein constant $\kappa$.
$\bullet$ One obvious solution of the Einstein condition is obtained by
setting $\eta=0$ and by taking all the other parameters equal: one recovers
the bi-invariant metrics.
$\bullet$ Another solution, up to scale, is obtained by setting
${\alpha=1,\beta=11,\gamma=\delta=6,\epsilon=1,\zeta=0,\eta=\pm 5,\theta=0}$.
See (10). The Einstein constant is $\kappa=21/44$. The metric has signature
$(8,0)$.
$h^{-1}=\left(\begin{array}[]{cccccccc}1&.&.&.&.&.&.&.\\
.&1&.&.&.&.&.&.\\
.&.&11&.&.&.&.&.\\
.&.&.&6&.&.&5&.\\
.&.&.&.&6&5&.&.\\
.&.&.&.&5&6&.&.\\
.&.&.&5&.&.&6&.\\
.&.&.&.&.&.&.&1\\
\end{array}\right)$ (10)
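As symmetric matrices, (8) and (10) are orthogonally congruent precisely when they have the same spectrum; since the $2\times 2$ blocks of (10) have eigenvalues $6\pm 5$, the comparison is immediate (a sketch, not from the text):

```python
# Spectra of the dual metrics (8) and (10).  The 4-7 block of (10) splits into
# two 2x2 blocks [[6, 5], [5, 6]], each with eigenvalues 6 +/- 5 = 11 and 1.
spec_8 = sorted([1, 11, 1, 1, 11, 1, 11, 1])        # diagonal of (8)
spec_10 = sorted([1, 1, 11, 1] + [11, 1, 11, 1])    # diagonal part + block eigenvalues
assert spec_8 == spec_10 == [1, 1, 1, 1, 1, 11, 11, 11]
print(spec_8)
```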
From the metric defined by (10), and using the remarks at the end of sect.
2.3, one can obtain a one-parameter family of Einstein metrics (all defining
the same Einstein structure), for the same parameters $\alpha_{0}=1$,
$\beta_{0}=11$, $\gamma_{0}=6$, $\epsilon_{0}=1$, $\zeta_{0}=0$ as in (10),
but for arbitrary values of $\theta$ with $|\theta|\leq 5$ (remember that we
had imposed a priori the conditions $\theta=0$ and $\delta=\gamma$), while
also setting $\eta=\sqrt{\eta_{0}^{2}-\theta^{2}}=\sqrt{25-\theta^{2}}$ in the
matrix $h^{-1}$ given in table 3 for the subgroup $K={\mathrm{U}}(1)_{I}$. All
these metrics have an isometry group a priori equal to, or conjugated to, an
over-group of this particular subgroup.
The solution $h^{-1}$ is reminiscent of the Jensen metric: it is easy to see
that the two matrices (8) and (10) are congruent; moreover, the value of
$\kappa$ is the same. One is therefore tempted to think that both metrics
(they are distinct, since the symmetric bilinear forms defined by these two
matrices, written in the same basis, are distinct) define the same Riemannian
structure. One could nevertheless be puzzled by the fact that the specific
group ${\mathrm{S}O}(3)$ specified in the list (2) does not leave the metric
(10) invariant: only the group ${\mathrm{U}}(1)_{I}$ of the list (2) leaves
it invariant (setting $r_{a}=\exp(f_{a})$, the reader can indeed check that,
for $h^{-1}$ given by (8), the equation
${r_{a}}^{T}\,.\,h^{-1}\,.\,r_{a}=h^{-1}$ holds for $a=2,5,7$, whereas, for
$h^{-1}$ given by (10), this equation holds only for $a=3$). The right
isometry group of the latter can be obtained from the same equation by taking
linear combinations of the $r_{a}$ with arbitrary coefficients; one finds that
this group, of type ${\mathrm{S}O}(3)$, is generated by
$\{\lambda_{3},\tfrac{\lambda_{4}+\lambda_{7}}{\sqrt{2}},\tfrac{\lambda_{5}+\lambda_{6}}{\sqrt{2}}\}$.
Although distinct from the one specified in (2), it is conjugated to the
latter (because ${\mathrm{S}O}(3)$ is maximal in ${\rm SU}(3)$), and it
contains ${\mathrm{U}}(1)_{I}$, as it should.
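These invariance statements can be verified numerically. Since the adjoint matrices $f_{a}$ are antisymmetric, the condition ${r_{a}}^{T}\,.\,h^{-1}\,.\,r_{a}=h^{-1}$ along the one-parameter group generated by $f_{a}$ is equivalent to $[h^{-1},f_{a}]=0$. A sketch (an illustration, not from the text; indices are 0-based, so $a=2,5,7$ and $a=3$ become $1,4,6$ and $2$, and we assume the standard Gell-Mann values for $f_{abc}$):

```python
import math
from itertools import permutations

# Totally antisymmetric su(3) structure constants f_{abc} (0-indexed),
# nonzero entries up to antisymmetry, in the Gell-Mann conventions.
F = {(0, 1, 2): 1.0, (0, 3, 6): 0.5, (0, 4, 5): -0.5, (1, 3, 5): 0.5,
     (1, 4, 6): 0.5, (2, 3, 4): 0.5, (2, 5, 6): -0.5,
     (3, 4, 7): math.sqrt(3)/2, (5, 6, 7): math.sqrt(3)/2}
f = [[[0.0]*8 for _ in range(8)] for _ in range(8)]
for (a, b, c), v in F.items():
    for (i, j, k), s in zip(permutations((a, b, c)), (1, -1, -1, 1, 1, -1)):
        f[i][j][k] = s*v

def adj(a):   # the antisymmetric 8x8 matrix f_a, with (f_a)_{bc} = f_{abc}
    return [[f[a][b][c] for c in range(8)] for b in range(8)]

def commutes(H, A):   # [H, A] = 0  <=>  exp(t*A)^T . H . exp(t*A) = H for all t
    return all(abs(sum(H[i][k]*A[k][j] - A[i][k]*H[k][j] for k in range(8))) < 1e-12
               for i in range(8) for j in range(8))

# Dual metrics (8) and (10), 0-indexed.
H8 = [[0.0]*8 for _ in range(8)]
for i, v in enumerate([1, 11, 1, 1, 11, 1, 11, 1]):
    H8[i][i] = v
H10 = [[0.0]*8 for _ in range(8)]
for i, v in enumerate([1, 1, 11, 6, 6, 6, 6, 1]):
    H10[i][i] = v
H10[3][6] = H10[6][3] = H10[4][5] = H10[5][4] = 5

assert all(commutes(H8, adj(a)) for a in (1, 4, 6))         # a = 2, 5, 7 in the text
assert not commutes(H8, adj(2))                             # a = 3 does not preserve (8)
assert commutes(H10, adj(2)) and not commutes(H10, adj(1))  # only a = 3 preserves (10)
print("invariance claims verified")
```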
$\bullet$ The third solution is a Lorentz metric (signature $(7,1)$).
Let $\epsilon$ be the (unique) real root of the 15-th degree polynomial
$\begin{split}&157464000\,x^{15}+403632720\,x^{14}-612290016\,x^{13}-1011752856\,x^{12}+2420977896\,x^{11}-160395147\,x^{10}+8214701211\,x^{9}\\\
&+22205850480\,x^{8}+25959494541\,x^{7}+13520748157\,x^{6}+6727192848\,x^{5}+3545761995\,x^{4}-307092303\,x^{3}+775200861\,x^{2}+1476112248\,x+416419380\end{split}$
Let $\gamma$ be the (unique) real root of the 15-th degree polynomial
$\begin{split}&1203125\,x^{15}-5947500\,x^{14}+27668175\,x^{13}-91826280\,x^{12}+247552546\,x^{11}-578539560\,x^{10}+1139842990\,x^{9}\\\
&-1943457696\,x^{8}+2859080697\,x^{7}-3567181452\,x^{6}+3705721907\,x^{5}-3090965208\,x^{4}+1958091648\,x^{3}-862410240\,x^{2}+238768128\,x-26542080\end{split}$
For these values of $\gamma$ and $\epsilon$, the cubic polynomial with one
indeterminate $x$
$\begin{split}&x^{3}(3\gamma\epsilon+3\gamma-12\epsilon)+\\\
&x^{2}\left(\gamma^{3}(-\epsilon)+12\gamma^{2}\epsilon-3\gamma^{3}-12\gamma\epsilon^{2}-4\gamma\epsilon+12\gamma-48\epsilon\right)+\\\
&x\left(-12\gamma^{3}\epsilon^{2}-7\gamma^{5}\epsilon+12\gamma^{4}\epsilon-52\gamma^{3}\epsilon+96\gamma^{2}\epsilon-3\gamma^{5}-24\gamma^{3}-48\gamma\epsilon^{2}-64\gamma\epsilon\right)\\\
&+(5\gamma^{7}\epsilon-12\gamma^{6}\epsilon+24\gamma^{5}\epsilon-48\gamma^{4}\epsilon+16\gamma^{3}\epsilon+3\gamma^{7}+12\gamma^{5})\end{split}$
has three real roots, two negative and one positive; call $\eta^{2}$ its
positive root, and call $\eta$ the positive square root of $\eta^{2}$ (one
can choose the negative square root as well, because $\eta$ appears only in
even powers and in products $(\gamma-\eta)(\gamma+\eta)$). Then
$\beta=\frac{(\gamma-\eta)(\gamma+\eta)\left(\gamma^{2}+\eta^{2}+4\right)}{-2\left(\gamma^{2}+4\right)\eta^{2}+\gamma^{2}\left(\gamma^{2}+4\right)+\eta^{4}}$
$\kappa=\frac{\left(\gamma^{2}+\eta^{2}+2\right)\left(-2\left(\gamma^{2}+4\right)\eta^{2}+\gamma^{2}\left(\gamma^{2}+4\right)+\eta^{4}\right)}{12(\gamma-\eta)(\gamma+\eta)\left(\gamma^{2}+\eta^{2}+4\right)}$
Like $\gamma$ and $\epsilon$, the parameter $\beta$, as well as the Einstein
constant $\kappa$, can be expressed as roots of polynomials of degree 15 with
integer coefficients.
$\beta$ is the (unique) real root of the polynomial
$\begin{split}&420959000000\,x^{15}-1864887536000\,x^{14}+3473091156700\,x^{13}-3742325355930\,x^{12}+2779023618983\,x^{11}-1598512715722\,x^{10}+738336195619\,x^{9}\\\
&-286057154856\,x^{8}+100590932418\,x^{7}-32232937198\,x^{6}+8922748831\,x^{5}-2060272970\,x^{4}+375594480\,x^{3}-51335104\,x^{2}+4940624\,x-297440\end{split}$
The Einstein constant $\kappa$ is the (unique) real root of the polynomial
$\begin{split}&75874469299200000000\,x^{15}-194337331275110400000\,x^{14}+301355277599416320000\,x^{13}-332561544757530624000\,x^{12}+282171231781966252800\,x^{11}\\\
&-191136024361902738240\,x^{10}+105464748331948650048\,x^{9}-47804548501070787024\,x^{8}+17858543123347792128\,x^{7}-5477519217851980920\,x^{6}\\\
&+1363429678619072700\,x^{5}-269374969407033333\,x^{4}+40612859877938577\,x^{3}-4362120554579953\,x^{2}+293255347774576\,x-9061971967716\end{split}$
The real $\eta^{2}$ is the (unique) real root of the polynomial
$\begin{split}&7237548828125\,x^{15}+70864769531250\,x^{14}+314655757840625\,x^{13}+889027170133500\,x^{12}+1845686712291930\,x^{11}+2969194934204748\,x^{10}\\\
&+6007481883873834\,x^{9}+14368049748482976\,x^{8}+23991657392689833\,x^{7}+23305737247777970\,x^{6}+9939040159739877\,x^{5}\\\
&-2269867978871308\,x^{4}-3190456836365280\,x^{3}+2429318649600\,x^{2}+508754442240000\,x-6234734592000\end{split}$
Both square roots of $\eta^{2}$ solve the equations and therefore give rise to
two distinct solutions, for the same values of the other parameters.
This Lorentzian Einstein solution is therefore obtained for a dual metric
specified by the matrix $h^{ij}$ given in (9), with the above values of the
parameters. Numerically, $\eta^{2}\simeq 0.0122658$, and
$\{\epsilon\simeq-0.491148,\,\gamma\simeq 0.233098,\,\eta\simeq\pm 0.110751,\,\beta\simeq 1.41407,\,\zeta=0,\,\alpha=1\}\quad\text{and}\quad\kappa\simeq 0.121788$ (11)
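As a quick check of (11): the $4$-$7$ block of (9) splits into two $2\times 2$ blocks with eigenvalues $\gamma\pm\eta$, so the eigenvalues of $h$ (the reciprocals of those of $h^{-1}$) are available in closed form, and the signature can be read off (a sketch, not from the text):

```python
# Eigenvalues of the Lorentzian Einstein metric h, from the values (11).
# The 4-7 block of h^{-1} in (9) splits into two 2x2 blocks [[gamma, eta],
# [eta, gamma]] with eigenvalues gamma +/- eta; the eigenvalues of h are
# the reciprocals of those of h^{-1}.
alpha, beta = 1.0, 1.41407
gamma, eta, eps = 0.233098, 0.110751, -0.491148

eig_h = [1/alpha, 1/alpha, 1/beta,
         1/(gamma - eta), 1/(gamma - eta),
         1/(gamma + eta), 1/(gamma + eta),
         1/eps]
print(sorted(eig_h, reverse=True))
pos = sum(e > 0 for e in eig_h)
neg = sum(e < 0 for e in eig_h)
assert (pos, neg) == (7, 1)    # Lorentzian signature (7,1), as stated
```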
One can restore the $\alpha$ dependence by scaling the parameters
$\eta,\gamma,\epsilon,\beta$, by $\alpha$. In that case, the Einstein constant
$\kappa$ is also multiplied by $\alpha$. Remember that the Ricci tensor is
invariant under a (constant) rescaling of the metric.
The scalar curvature $\tau$, for the general family of metrics specified by
(9), is
$\frac{8\alpha^{2}\beta\epsilon\left(\gamma^{2}-2\eta^{2}\right)+2\alpha^{3}\epsilon\left(\eta^{2}-\gamma^{2}\right)+\alpha\left(6\beta\eta^{2}\left(\gamma^{2}-4\gamma\epsilon-2\epsilon^{2}\right)-\gamma^{3}(3\beta(\gamma-8\epsilon)+\gamma\epsilon)+\eta^{4}(\epsilon-3\beta)\right)-2\beta\epsilon\left(\gamma^{2}-\eta^{2}\right)^{2}}{12\alpha\beta\epsilon(\gamma-\eta)(\gamma+\eta)}$
(12)
Using the previous values of parameters, one finds that $\tau$, for the
Lorentz-Einstein metric, is equal to $8\kappa$, as it should. Numerically,
$\tau\simeq 0.974303$. Moreover, $\Lambda$, the “cosmological” constant, is
equal to $3\kappa\simeq 0.365363$.
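As a further consistency check (an illustration, not from the text), one can plug the numerical values (11) into expression (12) and compare with $8\kappa$; the tolerance absorbs the rounding of the six-digit values:

```python
# Scalar curvature from the closed-form expression (12), evaluated at the
# numerical Einstein solution (11); for an Einstein metric in dimension 8,
# tau must equal 8*kappa.
a, b, g, e, n = 1.0, 1.41407, 0.233098, -0.491148, 0.110751  # alpha, beta, gamma, epsilon, eta
kappa = 0.121788

num = (8*a**2*b*e*(g**2 - 2*n**2)
       + 2*a**3*e*(n**2 - g**2)
       + a*(6*b*n**2*(g**2 - 4*g*e - 2*e**2)
            - g**3*(3*b*(g - 8*e) + g*e)
            + n**4*(e - 3*b))
       - 2*b*e*(g**2 - n**2)**2)
den = 12*a*b*e*(g - n)*(g + n)
tau = num/den
print(tau)
assert abs(tau - 8*kappa) < 1e-3   # tau ~ 0.974303, so Lambda = 3*kappa ~ 0.365363
```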
Other properties of the obtained Lorentzian Einstein metric:
1.
The matrix $h$ has seven positive eigenvalues and one negative: its signature
is Lorentzian, $(7,1)$. Using $\alpha=1$, these numerically sorted eigenvalues
are $(8.17347,8.17347,2.90825,2.90825,1.,1.,0.707178,-2.03605)$.
2.
The Einstein condition gives two solutions, differing from one another by
flipping the sign of $\eta$.
3.
One can calculate the eight principal Ricci curvatures and check that they are
constant (Einstein manifolds have constant Ricci curvature), all equal to
$\tau/8$. As $\tau>0$, the Ricci signature (the signature of the Ricci
quadratic form) is $(8,0)$.
4.
We already know, from the chosen parametrization, that the right isometry
group of this metric is ${\mathrm{U}}(1)_{I}$, the vector field $e_{3}$
defined by the basis vector $X_{3}$ being its associated Killing vector field.
5.
This Lorentzian manifold has, at every point, a cone of time-like directions.
The underlying manifold, being a Lie group, is parallelizable, orientable, and
it is time-orientable for this Lorentz metric. Numerically,
$h(X_{8},X_{8})=-2.03605<0$, the vector field $e_{8}$ (which is not Killing)
is therefore time-like. Notice that the Killing vector field $e_{3}$ is space-
like. The integral curve of the left-invariant vector field $e_{8}$ is a
closed time-like curve. Moreover, it is a geodesic (it is easy to show that
the covariant derivative $\nabla_{e_{8}}\,e_{8}$ vanishes). The integral curve
of $e_{3}$ is also a geodesic.
6.
One can check that this Lorentzian Einstein metric is a stationary point of
the scalar curvature, when one varies the parameters while keeping the volume
fixed. This provides another way to obtain the above solution. For the metrics
specified by (9), the determinant of $h^{-1}$ is ${\sl
d}=\alpha^{2}\beta\epsilon\left(-2\gamma^{2}\eta^{2}+\gamma^{4}+\eta^{4}\right)$,
and the scalar curvature of the family of metrics ${\sl d}^{1/8}\times h$ (for
which the determinant stays equal to $1$ when the parameters vary) is
$\tau/{\sl d}^{1/8}$, where the expression of $\tau$ in terms of the
parameters $\eta,\gamma,\epsilon,\beta,\alpha$ was given in (12). We shall
only display a few curves that illustrate the stationarity property by giving
plots of $\tfrac{\partial}{\partial u}\tfrac{\tau}{{(-\sl d)}^{1/8}}$, for $u\in\{\eta,\gamma,\epsilon,\beta\}$, in a neighborhood of the found solution. (The determinant being negative around the extremum that corresponds to the obtained Einstein metric, because $\epsilon<0$, we introduce a minus sign in front of ${\sl d}$ in ${\sl d}^{1/8}$.)
Figure 1: Derivative of $\tfrac{\tau}{{(-\sl d)}^{1/8}}$ with respect to
$\beta$, for $\beta$ in $[0,5]$ and in $[1.41,1.42]$.
Figure 2: Derivative of $\tfrac{\tau}{{(-\sl d)}^{1/8}}$ with respect to
$\gamma$, for $\gamma$ in $[-0.5,0.5]$, $[0.2,0.26]$ and in $[0.231,0.235]$.
Figure 3: Derivative of $\tfrac{\tau}{{(-\sl d)}^{1/8}}$ with respect to
$\epsilon$, for $\epsilon$ in $[-1,1]$ and in $[-0.5,-0.48]$.
Figure 4: Derivative of $\tfrac{\tau}{{(-\sl d)}^{1/8}}$ with respect to
$\eta$, for $\eta$ in $[-0.5,0.5]$, $[0,0.15]$, and in $[0.1102,0.1114]$.
7.
From the previous Lorentz Einstein metric, defined by parameters values that
we now call $\alpha_{0},\beta_{0},\gamma_{0}$,
$\epsilon_{0},\eta_{0},\zeta_{0}$ (remember that we had imposed a priori the
conditions $\theta=0$ and $\delta=\gamma$), one obtains a one-parameter family
of distinct Lorentz Einstein metrics, with the same Einstein constant, for the
same values $\alpha_{0},\beta_{0},\gamma_{0},\epsilon_{0},\zeta_{0}$, but for
arbitrary values of $\theta$ (obeying $\theta^{2}\leq\eta_{0}^{2}$), while
setting $\eta=\sqrt{\eta_{0}^{2}-\theta^{2}}$ in the matrix $h^{-1}$ given in
table 3 for $K={\mathrm{U}}(1)_{I}$ (see our discussion at the end of sect.
2.3). In particular we could trade $\eta$ for $\theta$ by taking
$\theta=\eta_{0}$, then $\eta$ vanishes. All these metrics define the same
pseudo-Riemannian structure. They have the same isometry group
${\mathrm{U}}(1)_{I}$. Notice that the calculations presented in the present
subsection ($K={\mathrm{U}}(1)_{I}$) do not exclude the fact that the right
isometry group could be equal or conjugated to a group larger than this
particular ${\mathrm{U}}(1)$, but it cannot be so, otherwise we would have
already found this left-invariant Einstein Lorentzian metric in one of the
previous subsections.
#### $K={\mathrm{U}}(1)_{Y}$
Such left-invariant metrics are parametrized by the last entry of table 3.
Even after taking into account isometries and scaling, there are too many free
parameters left ($10$ of them) and we could not solve the Einstein condition
for this family in full generality. For this reason we looked at several
subfamilies obtained by imposing conditions on the parameters, but, even then,
we could not find a single example of an Einstein metric in this family,
except, of course, the Killing metric, for which the right isometry group is
${\rm SU}(3)$ itself.
#### $K=\{e\}$
Solving explicitly the system of equations coming from the Einstein condition
for this family seems to be a formidable task, even after reducing the number
of parameters from $36$ to $28$ by considering metrics only up to equivalence.
So we shall not have much to say in that case.
We should nevertheless mention one pseudo-Riemannian Einstein metric, of
signature $(6,2)$, for which $K=\{e\}$, and that was found in [10], using
other notations. We shall describe it below. Consider first the family of
metrics defined by taking $h^{-1}$ equal to
$\mathrm{diag}(\alpha,\beta,\gamma,\alpha,\beta,\alpha,\beta,\gamma)$
For generic values of the parameters $\alpha,\beta,\gamma$, these left-
invariant metrics have a trivial right-isometry group $K$ (the equation for
Lie derivatives stemming from (1) has no non-trivial solution) even though the
same parameter $\beta$ occurs in positions $(2,5,7)$, which are those corresponding to the generators $\lambda_{a}$ of the ${\mathrm{SO}}(3)$ subgroup defined in (2). For $\alpha=\beta=\gamma$ the right isometry group $K$ is ${\rm SU}(3)$, and for $\alpha=\gamma$ one recovers the cases already described in (3) for which $K={\mathrm{SO}}(3)$.
The non-zero components of the Ricci tensor are
$\varrho_{11}=\varrho_{44}=\varrho_{66}$,
$\varrho_{22}=\varrho_{55}=\varrho_{77}$, $\varrho_{33}=\varrho_{88}$, they
are respectively equal to:
$\left\{\frac{1}{12}\left(\frac{2\beta\gamma}{\alpha^{2}}-\frac{\alpha}{\beta}-\frac{2\gamma}{\beta}-\frac{2\beta}{\gamma}+6\right),\frac{1}{24}\left(\frac{\alpha^{2}}{\beta^{2}}+\frac{4\alpha\gamma}{\beta^{2}}-\frac{4\alpha}{\gamma}-\frac{4\gamma}{\alpha}+9\right),\frac{1}{4}\left(\frac{\alpha\beta}{\gamma^{2}}-\frac{\alpha}{\beta}-\frac{\beta}{\alpha}+2\right)\right\}$
The Einstein condition is obtained by setting the previous triple equal to
$\{\kappa/\alpha,\kappa/\beta,\kappa/\gamma\}$. This system of equations has
three real solutions: one first recovers the multiples of the Killing metric
by taking $\alpha=\beta=\gamma$, with Einstein constant $\kappa=\alpha/4$,
then one recovers the multiples of the Jensen metric, for which
$\alpha=\gamma$, $\beta=11\alpha$, and $\kappa=\tfrac{21}{44}\,\alpha$;
finally one obtains a third solution that we describe now.
Let $P$ be a cubic polynomial with one indeterminate $x$ and real
coefficients, call ${\mathfrak{r}}(P)$ its smallest real root. Then, taking
$\beta=\alpha\,{\mathfrak{r}}(85x^{3}-29x^{2}+27x-3)$ and
$\gamma=\alpha\,{\mathfrak{r}}(768x^{3}+128x^{2}+204x+45)$ defines an Einstein
metric with Einstein constant
$\kappa=\alpha\,{\mathfrak{r}}(14400x^{3}-5520x^{2}+1044x-101)$ and scalar
curvature $\tau=\alpha\,{\mathfrak{r}}(-808+1044x-690x^{2}+225x^{3})$.
Numerically $\beta/\alpha\simeq 0.121$, $\gamma/\alpha\simeq-0.213$,
$\kappa/\alpha\simeq 0.196$, $\tau/\alpha\simeq 1.568$. This left-invariant
Einstein metric has signature $(6,2)$ and its right isometry group $K$ is
trivial.
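As a numerical cross-check (ours rather than the paper's), one can recompute the third solution from the smallest real roots of the three cubics just given, and verify that the Ricci components displayed above indeed satisfy the Einstein condition:

```python
# Cross-check (ours): recover the third Einstein solution from the smallest
# real roots of the cubics above, and verify the Einstein condition
# rho_ii = kappa / h_ii with the Ricci components quoted before (alpha = 1).
import numpy as np

def smallest_real_root(coeffs):
    """Smallest real root of a polynomial, coefficients in descending powers."""
    r = np.roots(coeffs)
    return r[np.abs(r.imag) < 1e-9].real.min()

alpha = 1.0
beta  = smallest_real_root([85, -29, 27, -3])           # ~ 0.121
gamma = smallest_real_root([768, 128, 204, 45])         # ~ -0.213
kappa = smallest_real_root([14400, -5520, 1044, -101])  # ~ 0.196
tau   = smallest_real_root([225, -690, 1044, -808])     # ~ 1.568

# Ricci components rho_11, rho_22, rho_33 quoted in the text:
rho11 = (2*beta*gamma/alpha**2 - alpha/beta - 2*gamma/beta - 2*beta/gamma + 6) / 12
rho22 = (alpha**2/beta**2 + 4*alpha*gamma/beta**2 - 4*alpha/gamma - 4*gamma/alpha + 9) / 24
rho33 = (alpha*beta/gamma**2 - alpha/beta - beta/alpha + 2) / 4
```

Each of the three cubics is monotonic and has a single real root, so the `smallest_real_root` selection is unambiguous; one also checks $\tau=8\kappa$.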
As already mentioned this solution was already found in [10] where the authors
give the matrix elements of $h$ (not of its inverse $h^{-1}$), using a
different scaling, in terms of two reals $x_{1},x_{2}$. Their values can be
compared to the above ones by writing
$h=({1}/{\beta})\text{diag}(x_{1},1,x_{2},x_{1},1,x_{1},1,x_{2})$; one
finds $x_{1}=\beta/\alpha$ (given above) and
$x_{2}=\beta/\gamma={\mathfrak{r}}(768-128x-1860x^{2}+1275x^{3})\simeq-0.570$.
Notice that $x_{2}=-\tfrac{(1-x_{1})(1-5x_{1})}{5x_{1}}$. The Einstein constant for the metric $\beta\,h$ is $\kappa/\beta\simeq 1.616$, and can be written $\frac{(1-x_{1})(10x_{1}-1)}{20(1-5x_{1})x_{1}^{2}}$. (The Einstein constants given in reference [10] differ from ours by two overall multiplicative factors: one comes from the fact that their matrix expression of $h$, compared to ours, is rescaled by $\beta$, and the other, equal to $3$, comes from the fact that the basis vectors used by these authors to define their metrics differ from our basis vectors $(X_{i})$ by a factor $\sqrt{3}$.)
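The $x_{1},x_{2}$ relations quoted from [10] can be checked numerically (our check, recomputing the roots from the cubics given earlier in this subsection, with $\alpha=1$):

```python
# Check (ours) of the x1, x2 parametrization quoted from [10].
import numpy as np

def smallest_real_root(coeffs):
    """Smallest real root of a polynomial, coefficients in descending powers."""
    r = np.roots(coeffs)
    return r[np.abs(r.imag) < 1e-9].real.min()

x1 = smallest_real_root([85, -29, 27, -3])          # x1 = beta/alpha ~ 0.121
x2 = smallest_real_root([1275, -1860, -128, 768])   # x2 = beta/gamma ~ -0.570
kappa_over_beta = (1 - x1) * (10*x1 - 1) / (20 * (1 - 5*x1) * x1**2)
```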
## 4 Miscellaneous
### 4.1 The quadratic Casimir operator
The quadratic Casimir element of the simple Lie group $G$ for the renormalized Killing form (resp. for the Killing form) is the element of the universal enveloping algebra defined by ${\widehat{\Omega}_{2}}=-\sum_{a}\widehat{X}_{a}.\widehat{X}_{a}$ (resp. ${\Omega_{2}}=-\sum_{a}X_{a}.X_{a}$). (We remind the reader that the Killing inner product $k$ is the opposite of the Killing form, hence the minus sign in front of the expressions defining $\Omega_{2}^{k}$ and ${\widehat{\Omega}_{2}}$, since $(X_{a})$ is an orthonormal basis for $k$; see sect. 2.2.) Casimir elements can be evaluated in any
representation, and, in an irreducible representation,
${\widehat{\Omega}_{2}}$ (resp. ${\Omega_{2}}$) is a multiple of the identity
matrix, with eigenvalue ${\widehat{C}_{2}}$ (resp. ${C_{2}}$). The definition
of Casimir operators involves the inverse Killing inner product, so, using $\widehat{k}=k/2g$, one obtains the relation (we also remind the reader that $g$ is the dual Coxeter number, which is equal to $N$ for ${\rm SU}(N)$):
${C_{2}}={\widehat{C}_{2}}/2g.$ (13)
Explicitly, for an irreducible representation of highest weight $\mathpzc{w}$,
one obtains
${\widehat{C}_{2}}=\langle\mathpzc{w}+\rho,\mathpzc{w}+\rho\rangle-\langle\rho,\rho\rangle=\langle\mathpzc{w},\mathpzc{w}+2\rho\rangle$
(14)
where $\rho$ is the Weyl vector and $\langle.,.\rangle$ is the Cartan inner
product in the space of roots, normalized in such a way that the length square
of long roots is equal to $2$. One has also:
${C_{2}}=\sum_{\alpha}\left(\langle\mathpzc{w}+\rho,\alpha\rangle^{2}-\langle\rho,\alpha\rangle^{2}\right)$
(15)
where $\alpha$ runs over the set of all roots (use the identity
$\sum_{\alpha}|\alpha\rangle\langle\alpha|=2g$ to relate (14) and (15) as in
(13)).
For ${\rm SU}(N)$ in the defining representation one obtains
${\widehat{C}_{2}}=(N^{2}-1)/N$ and ${C_{2}}=(N^{2}-1)/2$. In the adjoint
representation one obtains ${\widehat{C}_{2}}=2N$ and ${C_{2}}=1$.
In the case of ${\rm SU}(3)$, one can use for instance (14) to show that, for
an irreducible representation of highest weight $\mathpzc{w}$ with (Dynkin)
components $(o_{1},o_{2})$ in the basis of fundamental weights,
${\widehat{C}_{2}}=\frac{2}{3}(o_{1}^{2}+o_{1}o_{2}+o_{2}^{2})+2(o_{1}+o_{2})$
(16)
Equivalently, one can evaluate
${\widehat{\Omega}_{2}}=-\sum_{a}\tfrac{iL_{a}}{\sqrt{2}}.\tfrac{iL_{a}}{\sqrt{2}}$
and
${\Omega_{2}}=-\sum_{a}\tfrac{iL_{a}}{2\sqrt{3}}.\tfrac{iL_{a}}{2\sqrt{3}}$ in
the chosen representations. The above general relations, in the case of ${\rm
SU}(3)$, give: ${\widehat{C}_{2}}=6$, ${C_{2}}=1$ in the adjoint
representation, and ${\widehat{C}_{2}}=8/3$, ${C_{2}}=4/9$ in the defining
representation (these values can also be directly calculated by representing
the $iL_{a}$ generators by matrices $2f_{a}$ in the former case and by
matrices $i\lambda_{a}$ in the latter).
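These values are easy to confirm by direct matrix computation. The following check (ours) builds the Gell-Mann matrices $\lambda_{a}$ explicitly and verifies that $\Omega_{2}=-\sum_{a}(i\lambda_{a}/(2\sqrt{3}))^{2}=\tfrac{1}{12}\sum_{a}\lambda_{a}^{2}$ is $C_{2}=4/9$ times the identity in the defining representation, and that (16) gives the quoted ${\widehat{C}_{2}}$ values:

```python
# Sanity check (ours): Gell-Mann matrices and the SU(3) quadratic Casimir.
import numpy as np

l = [np.zeros((3, 3), dtype=complex) for _ in range(8)]
l[0][0, 1] = l[0][1, 0] = 1                    # lambda_1
l[1][0, 1] = -1j; l[1][1, 0] = 1j              # lambda_2
l[2][0, 0] = 1;   l[2][1, 1] = -1              # lambda_3
l[3][0, 2] = l[3][2, 0] = 1                    # lambda_4
l[4][0, 2] = -1j; l[4][2, 0] = 1j              # lambda_5
l[5][1, 2] = l[5][2, 1] = 1                    # lambda_6
l[6][1, 2] = -1j; l[6][2, 1] = 1j              # lambda_7
l[7] = np.diag([1, 1, -2]) / np.sqrt(3)        # lambda_8

omega2 = sum(m @ m for m in l) / 12            # Omega_2 in the defining irrep

def C2_hat(o1, o2):                            # formula (16)
    return 2 * (o1**2 + o1*o2 + o2**2) / 3 + 2 * (o1 + o2)
```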
For the group ${\rm SU}(2)$, and for an irreducible representation of highest
weight $2j$ (where the “spin” variable $j$ is an integer or a half-integer),
of dimension $2j+1$, the value $j(j+1)$ presented in the majority of quantum
physics textbooks as eigenvalue of “the Casimir operator” corresponds to a
Casimir element neither associated with the Killing form on ${\rm SU}(2)$
(${C_{2}}=j(j+1)/2$) nor with the renormalized Killing form
(${\widehat{C}_{2}}=2j(j+1)$). Details: the unique long root, which is also the highest weight $\sigma=2$ of the vector representation (of dimension 3), obeys $\langle 2,2\rangle=2$, so $\langle 1,1\rangle=1/2$, and (14), using $\rho=1$, indeed gives ${\widehat{C}_{2}}\,=\,\langle 2j+1,2j+1\rangle-\langle 1,1\rangle=((2j+1)^{2}-1)\langle 1,1\rangle=4j(j+1)\langle 1,1\rangle=2j(j+1)$.
In order to obtain $j(j+1)$ one has to use another rescaled Killing form, namely $k/2=2{\widehat{k}}$, in which case the associated Casimir can still formally be given by the rhs of (14), provided one normalizes the Cartan inner product in such a way that the length square of long roots is equal to $1$, a choice that is also often made in the same quantum physics textbooks (but remember that for us this length square is equal to $2$).
##### Dynkin index.
In an arbitrary basis $(e_{a})$, we have $Tr(\mathpzc{w}(e_{a})\mathpzc{w}(e_{b}))=-2\iota_{\mathpzc{w}}\;{\widehat{k}}_{ab}=-(\iota_{\mathpzc{w}}/g)\,k_{ab}$. Here $\iota_{\mathpzc{w}}$ denotes the Dynkin index of the representation $\mathpzc{w}$ of the Lie group $G$ (some authors incorporate a pre-factor $2$ in the definition of the Dynkin index). If $\mathpzc{w}$ is the defining representation of ${\rm SU}(N)$, one has $\iota_{\mathpzc{w}}=1/2$. If $\mathpzc{w}$ is the adjoint representation of $G$, one has $\iota_{\mathpzc{w}}=g$; in particular, for $G={\rm SU}(N)$, $\iota_{\mathpzc{w}}=N$. More generally, one has the relation:
${\widehat{C}_{2}}=2\,\iota_{\mathpzc{w}}\times{\text{dim}(Lie(G))}/{\text{dim}(\mathpzc{w})}$.
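This relation can be checked in exact rational arithmetic for ${\rm SU}(3)$ (our check; $\dim(Lie(G))=8$ and $g=3$ here):

```python
# Cross-check (ours) of C2_hat = 2 * iota * dim(Lie(G)) / dim(w) for SU(3).
from fractions import Fraction as F

def C2_hat(o1, o2):            # formula (16), exact
    return F(2, 3) * (o1*o1 + o1*o2 + o2*o2) + 2 * (o1 + o2)

def dim_irrep(o1, o2):         # dimension of the SU(3) irrep (o1, o2)
    return (o1 + 1) * (o2 + 1) * (o1 + o2 + 2) // 2

dim_G = 8
iota_def = C2_hat(1, 0) * dim_irrep(1, 0) / (2 * dim_G)   # Dynkin index, defining
iota_adj = C2_hat(1, 1) * dim_irrep(1, 1) / (2 * dim_G)   # Dynkin index, adjoint
```

One recovers $\iota=1/2$ for the defining representation and $\iota=g=3$ for the adjoint.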
### 4.2 Restriction to subgroups: branching
We consider the Lie algebra embedding
${\mathrm{Lie}}({\mathrm{U}}(2))\subset{\mathrm{Lie}}({\rm SU}(3))$ i.e.,
$\mathfrak{su}(2)\oplus\mathfrak{u}(1)\subset\mathfrak{su}(3)$, and we take
$\mathfrak{u}(1)$ as the Lie algebra of the subgroup called
${\mathrm{U}}(1)_{Y}$ in previous sections. This is a Levi-type subalgebra:
the set of simple roots of the semi-simple component of the subalgebra can be
chosen as a subset of the set of simple roots of the given Lie algebra. Call
$\alpha_{1},\alpha_{2}$ the simple roots of $\mathfrak{su}(3)$ and
$\omega_{1},\omega_{2}$ its fundamental weights. We take $v=\alpha_{1}$ as the
simple root of $\mathfrak{su}(2)$ (the “$v$” stands for “vector” since the
$\mathfrak{su}(2)$ irrep of highest weight $v$ is the vector representation)
and $t$ the fundamental $\mathfrak{u}(1)$ weight. The ${\mathrm{U}}(1)_{Y}$
generator is $3\,Y=\sqrt{3}\,L_{8}$ and reads
$\sqrt{3}\,\lambda_{8}=\text{diag}(1,1,-2)$ in the defining representation;
its eigenvalues are integers, as they should. Notice that
$k(\sqrt{3}\,iL_{8},\sqrt{3}\,iL_{8})=3\times 12$, so
$\widehat{k}(\sqrt{3}\,iL_{8},\sqrt{3}\,iL_{8})=3\times 12/6=6$ and
$\widehat{k}^{-1}(t,t)=1/6$. Notice also that $v=2\sigma$, where $\sigma$ denotes the $\mathfrak{su}(2)$ fundamental weight. (The component along $\sigma$ of each weight of an irrep of $\mathfrak{su}(2)$ is equal to twice the “(iso-)spin”. For instance, those of the spinorial irrep (highest weight $\sigma$) are twice $\pm 1/2$, and those of the vectorial irrep (highest weight $v$) are twice $(1,0,-1)$.)
The simple root $\alpha_{2}$ of $\mathfrak{su}(3)$ is a priori a linear
combination of $v$ and $t$: we have $\alpha_{2}=a\,v+b\,t$. We determine $a$
and $b$ from the inner products of roots and weights calculated using the
Cartan matrix or its inverse. As usual, all roots have length $2$ both for
$\mathfrak{su}(3)$ and for $\mathfrak{su}(2)$ (we have only long roots here),
so $\langle\,\alpha_{1},\alpha_{1}\rangle=\langle\,v,v\rangle=2$. From the
Cartan matrix of $\mathfrak{su}(3)$, namely $\begin{pmatrix}2&-1\\\
-1&2\end{pmatrix}$, we get $\langle\,\alpha_{1},\alpha_{2}\rangle=-1$,
moreover $\mathfrak{su}(2)$ and $\mathfrak{u}(1)$ are orthogonal subspaces for
$\langle\,,\,\rangle$, so $\langle\,v,\,t\rangle=0$, therefore
$a\langle\,v,\,v\rangle=-1$, and we obtain $a=-1/2$. We have also
$\langle\,\alpha_{2},\alpha_{2}\rangle=2$, therefore
$a^{2}\langle\,v,v\rangle+b^{2}\langle\,t,t\rangle=2$. Using
$\langle\,t,t\rangle=1/6$ one gets $b=3$. Therefore $\alpha_{1}=v$, and
$\alpha_{2}=-v/2+3t$.
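The inner products determining $a$ and $b$ can be verified in exact arithmetic (our check, using $\langle v,v\rangle=2$, $\langle t,t\rangle=1/6$, $\langle v,t\rangle=0$):

```python
# Check (ours) of the decomposition alpha_2 = -v/2 + 3t in the (v, t) basis.
from fractions import Fraction as F

def ip(u, w):
    """Inner product of vectors written as (coefficient of v, coefficient of t)."""
    return u[0] * w[0] * 2 + u[1] * w[1] * F(1, 6)

alpha1 = (F(1), F(0))          # alpha_1 = v
alpha2 = (F(-1, 2), F(3))      # alpha_2 = -v/2 + 3t
```

One recovers $\langle\alpha_{1},\alpha_{1}\rangle=\langle\alpha_{2},\alpha_{2}\rangle=2$ and $\langle\alpha_{1},\alpha_{2}\rangle=-1$, as required by the Cartan matrix.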
The restriction matrix defining the embedding in terms of fundamental weights
(which also gives the ${\mathrm{U}}(2)$ weight components $2I$ and $3Y$ from
the Dynkin components $(o_{1},o_{2})$ of the highest weight $\mathpzc{w}$ of
any irreducible ${\rm SU}(3)$ representation) reads:
$\left(\begin{array}{c}\omega_{1}\\ \omega_{2}\end{array}\right)=\left(\begin{array}{cc}1&1\\ 0&2\end{array}\right)\left(\begin{array}{c}\sigma\\ t\end{array}\right)\qquad(2I,3Y)=\left(o_{1},o_{2}\right)\,\left(\begin{array}{cc}1&1\\ 0&2\end{array}\right)$ (17)
Examples.
Consider the basic (fundamental) irrep of ${\rm SU}(3)$ with highest weight
$\mathpzc{w}=(1,0)$, of dimension $3$. Using the restriction matrix (17) on the weight system of $\mathpzc{w}$, namely $\{(1,0),(-1,1),(0,-1)\}$, we obtain the weights appearing in the branching from $\mathfrak{su}(3)$ to $\mathfrak{su}(2)\oplus\mathfrak{u}(1)$, namely $\{(1,1),(-1,1),(0,-2)\}$;
the associated decomposition of irreps, in terms of highest weights, reads
$(1,0)\rightarrow(1,1)\oplus(0,-2)$ where, on the right hand side, the first
member ($2I$) of each pair is the component along $\sigma$ of the ${\rm
SU}(2)$ highest weight and where the second member ($3Y$) is the component of
the ${\mathrm{U}}(1)$ weight along $t$. Equivalently, in terms of dimensions (remember that an ${\rm SU}(2)$ irrep with highest weight $2I$, i.e., spin $I$, has dimension $2I+1$, and that an ${\rm SU}(3)$ irrep with highest weight components $(o_{1},o_{2})$ has dimension $(o_{1}+1)(o_{2}+1)(o_{1}+o_{2}+2)/2$): $[3]\rightarrow[2]_{1}\oplus[1]_{-2}$, where the subindex of $[2I+1]_{3Y}$ refers to the component of the ${\mathrm{U}}(1)$ weight. Conservation of the ${\mathrm{U}}(1)$ (hyper)charge reads $2\times(1)+1\times(-2)=0$.
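In components, the restriction matrix of (17) sends a weight $(o_{1},o_{2})$ to $(2I,3Y)=(o_{1},\,o_{1}+2o_{2})$; the example above can be reproduced in a few lines (ours, for illustration):

```python
# Illustration (ours): apply the restriction matrix of (17) to the weight
# system of the defining irrep; (o1, o2) -> (2I, 3Y) = (o1, o1 + 2*o2).
weights_3 = [(1, 0), (-1, 1), (0, -1)]           # weight system of w = (1, 0)
restricted = [(o1, o1 + 2 * o2) for (o1, o2) in weights_3]
total_Y = sum(y for (_, y) in restricted)        # hypercharge conservation
```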
For the adjoint representation (highest weight $\mathpzc{w}=(1,1)$, of
dimension $8$), the branching rule can be obtained in the same way and reads,
when written in terms of dimensions (no confusion can arise in this case):
$[8]\rightarrow[3]_{0}\oplus[2]_{3}\oplus[2]_{-3}\oplus[1]_{0}$.
Let us conclude this section with a slightly more involved example: we
consider the ${\rm SU}(3)$ representation of highest weight
$\mathpzc{w}=(2,1)$, which is of dimension $[15]$. Using the restriction matrix on the weight system of this highest weight of ${\rm SU}(3)$, namely
$\{(2,1),(3,-1),(0,2),(1,0),(1,0),(-2,3),(2,-2),(-1,1),(-1,1),(0,-1),(0,-1),(-3,2),(1,-3),(-2,0),(-1,-2)\}$,
we obtain the weights appearing in the branching to ${\mathrm{U}}(2)$, namely
$\{(2,4),(3,1),(0,4),(1,1),(1,1),(-2,4),(2,-2),(-1,1),(-1,1),(0,-2),(0,-2),(-3,1),(1,-5),(-2,-2),(-1,-5)\}$.
The associated decomposition reads
$(2,1)\rightarrow(3,1)\oplus(2,4)\oplus(2,-2)\oplus(1,1)\oplus(1,-5)\oplus(0,-2)$
where, again, on the right hand side, the first member ($2I$) of each pair is
the component along $\sigma$ of the ${\rm SU}(2)$ highest weight and where the
second member ($3Y$) is the component along $t$ of the ${\mathrm{U}}(1)$
weight. In terms of dimensions, this rhs reads
$[4]_{1}\oplus[3]_{4}\oplus[3]_{-2}\oplus[2]_{1}\oplus[2]_{-5}\oplus[1]_{-2}$
and we can check the conservation of the ${\mathrm{U}}(1)$ (hyper) charge:
$4\times(1)+3\times(4)+3\times(-2)+2\times(1)+2\times(-5)+1\times(-2)=0$.
### 4.3 Laplacian
Let $h$ be a Riemannian or pseudo-Riemannian metric on the Lie group $G$.
Assuming that $h$ is left-invariant (hence homogeneous), we can write
$h=h_{ab}\,\theta^{a}\otimes\theta^{b}$ where $h_{ab}$ are constants (real
numbers) and $(\theta^{a})$ is the global moving co-frame dual to the
arbitrary moving frame $(e_{a})$ defined from an arbitrary basis, also called
$(e_{a})$, in the Lie algebra of $G$ identified with the tangent space to $G$
at the identity. The dual (i.e., inverse) metric reads
$h^{-1}=h^{ab}\,e_{a}\otimes e_{b}$. With the usual convention, the rough
metric Laplacian (or Laplace-Beltrami operator) on functions on the manifold
has negative spectrum —so it is the opposite of the De Rham Laplacian on
$0$-forms— and can be written as the second-order differential operator
$\Delta=h^{ab}\,e_{a}\circ e_{b}$ where the vector fields $e_{a}$ act on
functions on $G$. More generally, when studying the action of the Laplacian on
sections of vector bundles over $G$, the $e_{a}$ would act as a Lie derivative
of sections in the direction $a$.
##### Laplacian of bi-invariant metrics.
We call $\Delta_{0}$ the Laplacian associated with the Killing metric $k$; its
eigenstates are labelled by irreducible representations $\mathpzc{w}$ of $G$,
the eigenvalues of $-\Delta_{0}$ are equal to the Casimir eigenvalues
${C_{2}}$ (see 15) evaluated in the representation $\mathpzc{w}$, and the
degeneracy is $dim(\mathpzc{w})^{2}$, see [2] and [8].
##### Laplacian of left-invariant metrics.
Let $h$ be an arbitrary left-invariant metric on the Lie group $G$, the
spectrum of the corresponding Laplacian $\Delta$ is discussed in a number of
places (see for instance [13], [15], and references therein). Using the Peter-
Weyl theorem together with left-invariance of the metric one can replace a
difficult problem of analysis on manifolds by a simpler algebraic problem: as
in the bi-invariant case, the eigenvalues of the Laplace operator can be
obtained, up to sign, as eigenvalues of some appropriate metric-dependent
modified Casimir operator (consider for instance the expression (19) below)
evaluated in irreducible representations of $G$. One should be careful with
this terminology because the associated modified Casimir elements (that can be
defined in the enveloping algebra of $\text{Lie}(G)$) are not, in general,
central.
For left-invariant metrics $h$ with isometry group $G\times K$ and more
generally for naturally reductive metrics on Lie groups one can certainly
write general results but here we only want to focus on the $K$ dependence of
the eigenvalues in a few specific cases, and we shall be happy with some
elementary calculations.
So we return to the case $G={\rm SU}(3)$, call $X_{a}$ the vectors of an orthonormal basis for the Killing metric, $h^{ab}$ the contravariant components (i.e., the components of $h^{-1}$) of some chosen left-invariant metric $h$ in the same basis, and $C(\mathpzc{w})$ the list of eigenvalues of the operator $-\Delta$, with $\Delta=h^{ab}\,X_{a}.X_{b}$, evaluated in some chosen non-trivial representation $\mathpzc{w}$ of $G$. The degeneracy
of each eigenvalue is at least $dim(\mathpzc{w})$. With the notations of sect.
2.2, namely setting $X_{a}=\frac{i}{2\sqrt{3}}L_{a}$, we can also write
$\Delta=-\tfrac{1}{12}h^{ab}\,L_{a}.L_{b}$. Taking for instance $h=k$, the
Killing metric, we have
$\begin{split}\Delta_{0}=&\left(X_{1}.X_{1}+X_{2}.X_{2}+X_{3}.X_{3}\right)+\left(X_{4}.X_{4}+X_{5}.X_{5}+X_{6}.X_{6}+X_{7}.X_{7}\right)+X_{8}.X_{8}\\ =&\frac{-1}{12}\left(\left(L_{1}.L_{1}+L_{2}.L_{2}+L_{3}.L_{3}\right)+\left(L_{4}.L_{4}+L_{5}.L_{5}+L_{6}.L_{6}+L_{7}.L_{7}\right)+L_{8}.L_{8}\right)\end{split}$ (18)
and replacing $L_{a}$ by $\lambda_{a}$ (in the defining representation), or by
$-2if_{a}$ (in the adjoint), one recovers the known Casimir eigenvalues.
Let us now choose a metric $h$ for which $K={\mathrm{U}}(2)$, with parameters
$\alpha,\beta,\gamma$ as in (3). The Laplacian reads as follows, and we may
introduce the notation ${\Omega_{2}^{{\mathrm{U}}(2)}}$ to denote the
“modified Casimir operator” defined as $-\Delta$.
$\begin{split}\Delta=&\frac{-1}{12}\left(\alpha\left(L_{1}.L_{1}+L_{2}.L_{2}+L_{3}.L_{3}\right)+\beta\left(L_{4}.L_{4}+L_{5}.L_{5}+L_{6}.L_{6}+L_{7}.L_{7}\right)+\gamma
L_{8}.L_{8}\right)\end{split}$ (19)
In the fundamental representation $\mathpzc{w}=(1,0)$ of ${\rm SU}(3)$,
$-\Delta$ is a $3\times 3$ diagonal matrix with diagonal:
$C(1,0)=\left\{\frac{1}{36}(9\alpha+6\beta+\gamma),\frac{1}{36}(9\alpha+6\beta+\gamma),\frac{1}{9}(3\beta+\gamma)\right\}$
In the adjoint representation $\mathpzc{w}=(1,1)$, $-\Delta$ is an $8\times 8$
diagonal matrix with diagonal:
$C(1,1)=\left\{\frac{1}{3}(2\alpha+\beta),\frac{1}{3}(2\alpha+\beta),\frac{1}{3}(2\alpha+\beta),\frac{1}{4}(\alpha+2\beta+\gamma),\frac{1}{4}(\alpha+2\beta+\gamma),\frac{1}{4}(\alpha+2\beta+\gamma),\frac{1}{4}(\alpha+2\beta+\gamma),\beta\right\}$
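The diagonal $C(1,0)$ can be confirmed numerically by building $-\Delta$ of (19) from the Gell-Mann matrices (our check, for arbitrary sample values of $\alpha,\beta,\gamma$):

```python
# Numerical check (ours): -Delta of (19) in the defining representation,
# compared with the diagonal C(1,0) quoted above.
import numpy as np

l = [np.zeros((3, 3), dtype=complex) for _ in range(8)]
l[0][0, 1] = l[0][1, 0] = 1                    # lambda_1
l[1][0, 1] = -1j; l[1][1, 0] = 1j              # lambda_2
l[2][0, 0] = 1;   l[2][1, 1] = -1              # lambda_3
l[3][0, 2] = l[3][2, 0] = 1                    # lambda_4
l[4][0, 2] = -1j; l[4][2, 0] = 1j              # lambda_5
l[5][1, 2] = l[5][2, 1] = 1                    # lambda_6
l[6][1, 2] = -1j; l[6][2, 1] = 1j              # lambda_7
l[7] = np.diag([1, 1, -2]) / np.sqrt(3)        # lambda_8

a, b, g = 2.0, 3.0, 5.0                        # arbitrary sample parameters
minus_delta = (a * sum(m @ m for m in l[0:3])
               + b * sum(m @ m for m in l[3:7])
               + g * (l[7] @ l[7])) / 12
expected = np.diag([(9*a + 6*b + g) / 36,
                    (9*a + 6*b + g) / 36,
                    (3*b + g) / 9])
```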
More generally, consider the difference $\Delta-\beta\Delta_{0}$; this makes the term $L_{4}.L_{4}+L_{5}.L_{5}+L_{6}.L_{6}+L_{7}.L_{7}$ disappear:
$\begin{split}\Delta-\beta\Delta_{0}=&\frac{-1}{12}\left((\alpha-\beta)\left(L_{1}.L_{1}+L_{2}.L_{2}+L_{3}.L_{3}\right)+(\gamma-\beta)L_{8}.L_{8}\right)\end{split}$
From the discussion in sect. 4.1, we identify $L_{1}.L_{1}+L_{2}.L_{2}+L_{3}.L_{3}$ with the ${\rm SU}(2)$ quadratic Casimir, of eigenvalue $4I(I+1)$ in the irreducible representation of isospin $I$ (the highest weight component is $2I$; in particle physics applications, $I$ is called the isospin and $Y$ the hypercharge, see our paragraph on notations in sect. 2.3), and $L_{8}.L_{8}$ with the remaining quadratic ${\mathrm{U}}(1)$ operator, of eigenvalue $3Y^{2}$ (the restriction of $h$ to ${\mathrm{U}}(2)$ is a bi-invariant metric on this subgroup). For an arbitrary representation $\mathpzc{w}=(o_{1},o_{2})$ of ${\rm SU}(3)$, and for a representation that appears in the branching from
$\mathpzc{w}$ to the subgroup ${\mathrm{U}}(2)$ (remember, see sect. 4.2, that
such a term is characterized by the highest weight $2I$ of ${\rm SU}(2)$ and a
weight $3Y$ of ${\mathrm{U}}(1)$), the eigenvalues of the modified quadratic
Casimir operator ${\Omega_{2}^{{\mathrm{U}}(2)}}=-\Delta$, with $\Delta$ given
by (19), are therefore:
$C(o_{1},o_{2};I,Y)=\beta\,{C_{2}(o_{1},o_{2})}+((\alpha-\beta)\,\frac{1}{3}\,I(I+1)+(\gamma-\beta)\,\frac{1}{4}\,Y^{2})$
(20)
where $C_{2}$ is the eigenvalue of the Casimir element associated with the
Killing form of ${\rm SU}(3)$.
If $\mu^{2}\in\mathbb{R}^{+}$, then, upon scaling of $h^{-1}$ by $\mu^{2}$,
the rhs of the previous equation gets multiplied by the same factor.
Examples.
Consider the fundamental irrep of ${\rm SU}(3)$ with highest weight
$\mathpzc{w}=(1,0)$, of dimension $3$. The eigenvalue of $-\Delta_{0}$, obtained from (16) and (13), is $4/9$, whereas those of $-\Delta$, given by (20), for each of the
terms appearing in the branching rule obtained at the end of sect. 4.2, are
sums of three contributions: the irrep $[2]_{1}$ in the branching of $[3]$ has
$I=1/2$ and $Y=1/3$ since $[2]_{1}\equiv[2I+1]_{3Y}$, and it is such that
$\left\{\beta\,{{C_{2}}},\frac{1}{3}I(I+1)(\alpha-\beta),\frac{1}{4}Y^{2}(\gamma-\beta)\right\}=\left\{\frac{4\beta}{9},\frac{\alpha-\beta}{4},\frac{\gamma-\beta}{36}\right\}$,
whose sum is $\frac{1}{36}(9\alpha+6\beta+\gamma)$; in the same way, the three
contributions for the irrep $[1]_{-2}$ sum to $\frac{1}{9}(3\beta+\gamma)$.
Both values were expected since we had to recover the eigenvalues of $-\Delta$
given by the diagonal $3\times 3$ matrix $C(1,0)$ obtained previously.
For the adjoint representation, whose branching to ${\mathrm{U}}(2)$ was also
given in sect. 4.2, the eigenvalue of $-\Delta_{0}$ is $1$, and those of
$-\Delta$, calculated using (20), coincide, of course, for each of the four
terms of the branching decomposition, with the components of the diagonal
$8\times 8$ matrix $C(1,1)$ obtained previously.
Let us finally consider the ${\rm SU}(3)$ representation of highest weight $\mathpzc{w}=(2,1)$, of dimension $[15]$, whose branching to ${\mathrm{U}}(2)$ was also considered in sect. 4.2. The eigenvalue of $-\Delta_{0}$, obtained from (16) and (13), is $16/9$, and those of $-\Delta$, given by (20), read respectively $[4]_{1}:\frac{1}{36}(45\alpha+18\beta+\gamma)$, $[3]_{4}:\frac{2}{9}(3\alpha+3\beta+2\gamma)$, $[3]_{-2}:\frac{1}{9}(6\alpha+9\beta+\gamma)$, $[2]_{1}:\frac{1}{36}(9\alpha+54\beta+\gamma)$, $[2]_{-5}:\frac{1}{36}(9\alpha+30\beta+25\gamma)$, $[1]_{-2}:\frac{1}{9}(15\beta+\gamma)$, for the six different terms that appear
in the branching rule. The multiplicity, which is $15\times 15$ for a bi-
invariant metric (right isometry group ${\rm SU}(3)$), becomes
$15\times(2I+1)$, where the values of $(2I+1)$ are the consecutive members of
the list $\left(4,3,3,2,2,1\right)$, when the right isometry group is
${\mathrm{U}}(2)$.
Eigenvalues of $\Delta$ for left-invariant metrics with other right isometry groups $K$ can be obtained and discussed along similar lines; this study is left to the reader.
### 4.4 The cubic Casimir operator
From the commutation relations of $Lie(G)$, when $G={\rm SU}(3)$, it is
straightforward (but cumbersome) to show that the most general central cubic
element of the enveloping algebra of $\mathfrak{su}(3)$ is proportional to
$\Omega_{3}+x\,\frac{3}{2}\,{\widehat{\Omega}_{2}}$, with
$\begin{split}\Omega_{3}&=h_{1}.(\overline{L_{12}}.L_{12}+\overline{L_{45}}.L_{45}-2\overline{L_{67}}.L_{67})/3+h_{2}.(2\overline{L_{12}}.L_{12}-\overline{L_{45}}.L_{45}-\overline{L_{67}}.L_{67})/3+\overline{L_{45}}.L_{67}.L_{12}+\overline{L_{67}}.\overline{L_{12}}.L_{45}+\\ &\overline{L_{12}}.L_{12}-\overline{L_{67}}.L_{67}+\frac{1}{3}\left(\frac{1}{3}(h_{2}.h_{1}.h_{1}-h_{2}.h_{2}.h_{1})+\frac{2}{9}(h_{1}.h_{1}.h_{1}-h_{2}.h_{2}.h_{2})+(h_{1}.h_{1}-h_{2}.h_{2})+(h_{1}-h_{2})\right)\end{split}$ (21)
where $L_{ij}=(L_{i}+i\,L_{j})/2$, $h_{1}=L_{3}$,
$h_{2}=(-L_{3}+\sqrt{3}L_{8})/2$, and $x$ is an arbitrary real parameter.
Setting $x=0$, i.e., discarding the quadratic Casimir term ${\widehat{\Omega}_{2}}$, we are left with an essentially cubic term (the terms appearing in (21), taking $x=0$, are indeed purely cubic when written in terms of the chosen generators $L_{ij}$ and $h_{j}$, although the same expression, when written in terms of the generators $L_{a}$, as in (22), contains terms linear in $L_{3}$ and $L_{8}$) that we can write as
$\begin{split}\Omega_{3}&=\frac{\left(L_{1}.L_{1}+L_{2}.L_{2}+L_{3}.L_{3}\right).L_{8}}{4\sqrt{3}}-\frac{\left(L_{4}.L_{4}+L_{5}.L_{5}+L_{6}.L_{6}+L_{7}.L_{7}\right).L_{8}}{8\sqrt{3}}+\\ &\frac{1}{8}L_{3}.\left(L_{4}.L_{4}+L_{5}.L_{5}-L_{6}.L_{6}-L_{7}.L_{7}\right)+\frac{1}{4}\left(L_{1}.L_{4}.L_{6}+L_{1}.L_{5}.L_{7}-L_{2}.L_{4}.L_{7}+L_{2}.L_{5}.L_{6}\right)-\frac{L_{8}.L_{8}.L_{8}}{12\sqrt{3}}+\frac{L_{3}}{2}-\frac{L_{8}}{2\sqrt{3}}\end{split}$ (22)
We can now evaluate this expression in an irreducible representation of
highest weight $\mathpzc{w}$, with components $(o_{1},o_{2})$ in the basis of
fundamental weights; calling $C_{3}$ the eigenvalue of $\Omega_{3}$, one finds
$C_{3}=\frac{1}{27}(o_{1}-o_{2})(3+2o_{1}+o_{2})(3+o_{1}+2o_{2})$ (23)
Notice that $C_{3}$ is equal to $20/27$ in the defining representation, and it vanishes in the adjoint, as in all real representations, since it is proportional to $(o_{1}-o_{2})$.
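Formula (23) is easy to check in exact arithmetic (our check), including the antisymmetry under conjugation $(o_{1},o_{2})\mapsto(o_{2},o_{1})$:

```python
# Check (ours) of formula (23) for the cubic Casimir eigenvalue C_3.
from fractions import Fraction as F

def C3(o1, o2):
    return F((o1 - o2) * (3 + 2*o1 + o2) * (3 + o1 + 2*o2), 27)
```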
One can play with the idea of relaxing the centrality requirement and look for
the most general cubic element commuting with the generators of some Lie
subgroup, in particular some right isometry group $K$. For instance, choosing
$K={\mathrm{U}}(2)$ we impose the vanishing of Lie derivatives with respect to
$L_{1},L_{2},L_{3}$ and $L_{8}$, in which case one finds (again the proof is
straightforward) that the most general such cubic element, up to scale, can be
written as
${\Omega_{3}^{{\mathrm{U}}(2)}}+9\,x\,{\Omega_{2}^{{\mathrm{U}}(2)}}$, where
$x$ is an arbitrary real parameter, where
${\Omega_{2}^{{\mathrm{U}}(2)}}=-\Delta$, with $\Delta$ the Laplacian given by
(19), and where the remaining operator, ${\Omega_{3}^{{\mathrm{U}}(2)}}$, is
$\begin{split}{}&\frac{1}{72}\Big(6\sqrt{3}\,A\left(L_{1}.L_{1}.L_{8}+L_{2}.L_{2}.L_{8}+L_{3}.L_{3}.L_{8}\right)-3\sqrt{3}\,B\left(L_{4}.L_{4}.L_{8}+L_{5}.L_{5}.L_{8}+L_{6}.L_{6}.L_{8}+L_{7}.L_{7}.L_{8}\right)-2\sqrt{3}\,C\,L_{8}.L_{8}.L_{8}+\\ &9\,U\left(2L_{1}.L_{4}.L_{6}+2L_{1}.L_{5}.L_{7}-2L_{2}.L_{4}.L_{7}+2L_{2}.L_{5}.L_{6}+L_{3}.L_{4}.L_{4}+L_{3}.L_{5}.L_{5}-L_{3}.L_{6}.L_{6}-L_{3}.L_{7}.L_{7}+4L_{3}\right)-12\sqrt{3}\,V\,L_{8}\Big)\end{split}$ (24)
In this expression $A,B,C,U,V$ denote arbitrary real parameters, and by
setting all of them equal to $1$, one recovers the essentially cubic Casimir
element $\Omega_{3}$ given previously. Using ${\rm SU}(3)$ left translations,
the element ${\Omega_{3}^{{\mathrm{U}}(2)}}$ of the universal enveloping
algebra defines a cubic differential operator on the group which is ${\rm
SU}(3)$ left-invariant by construction, but also right invariant under $K$.
The interested reader will show that, for an arbitrary representation
$\mathpzc{w}=(o_{1},o_{2})$ of ${\rm SU}(3)$, and for a representation that
appears in the branching from $\mathpzc{w}$ to the subgroup ${\mathrm{U}}(2)$
(a representation characterized, as in the previous section, by a pair of
integers $(2I,3Y)$), the eigenvalue of the operator
${\Omega_{3}^{{\mathrm{U}}(2)}}$ is equal to
$U\,C_{3}+\left(U_{B}\,{C_{2}}+U_{AB}\,I(I+1)+U_{BC}\,Y^{2}\right)Y+U_{V}\,Y$ (25)
where $C_{3}$ is given by (23), $C_{2}={\widehat{C}_{2}}/6$ is obtained from
(16), and $U_{B}=\frac{3}{2}(U-B)$, $U_{AB}=\frac{1}{2}(2A+B-3U)$,
$U_{BC}=\frac{1}{8}(3B-2C-U)$, $U_{V}=\frac{1}{2}(U-V)$. This is the cubic
analog of formula (20).
Example: Choose again the fundamental representation $(1,0)$, so ${C_{2}}=4/9$
and $C_{3}=20/27$. The branching of this ${\rm SU}(3)$ irrep to
${\mathrm{U}}(2)$ reads $[3]\mapsto[2]_{1}\oplus[1]_{-2}$ (the notation on the
rhs is $[2I+1]_{3Y}$, as in sect. 4.2). Equation (24) with $L_{a}$ replaced by
$\lambda_{a}$ gives a $3\times 3$ diagonal matrix with diagonal
$\left(\frac{A}{4}-\frac{B}{12}-\frac{C}{108}+\frac{3U}{4}-\frac{V}{6},\frac{A}{4}-\frac{B}{12}-\frac{C}{108}+\frac{3U}{4}-\frac{V}{6},\frac{B}{3}+\frac{2C}{27}+\frac{V}{3}\right)$;
its matrix elements can also be obtained from (25) by setting $I=1/2,Y=1/3$
for the first two, and $I=0,Y=-2/3$ for the last. Finally, one recovers the
same value $20/27$ of the undeformed cubic Casimir $C_{3}$ by setting
$A=B=C=U=V=1$.
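The identity between (25) and the diagonal entries of (24) in the fundamental representation can be verified mechanically; the sketch below (illustrative Python, exact rationals, generic parameter values) checks both entries and the recovery of the undeformed $C_{3}$:

```python
from fractions import Fraction as F

def eigenvalue(A, B, C, U, V, C2, C3, I, Y):
    # Eigenvalue of the deformed cubic operator, eq. (25).
    UB  = F(3, 2) * (U - B)
    UAB = F(1, 2) * (2*A + B - 3*U)
    UBC = F(1, 8) * (3*B - 2*C - U)
    UV  = F(1, 2) * (U - V)
    return U*C3 + (UB*C2 + UAB*I*(I + 1) + UBC*Y**2)*Y + UV*Y

# Generic (pairwise distinct) parameters; fundamental irrep (1,0).
A, B, C, U, V = map(F, (2, 3, 5, 7, 11))
C2, C3 = F(4, 9), F(20, 27)

# Diagonal entries of (24) evaluated in the fundamental representation.
d1 = A/4 - B/12 - C/108 + 3*U/4 - V/6   # doublet [2]_1:   I = 1/2, Y = 1/3
d3 = B/3 + 2*C/27 + V/3                 # singlet [1]_{-2}: I = 0, Y = -2/3

assert eigenvalue(A, B, C, U, V, C2, C3, F(1, 2), F(1, 3)) == d1
assert eigenvalue(A, B, C, U, V, C2, C3, 0, F(-2, 3)) == d3
# With A = B = C = U = V = 1 one recovers the undeformed C3 = 20/27.
assert eigenvalue(1, 1, 1, 1, 1, C2, C3, F(1, 2), F(1, 3)) == F(20, 27)
assert eigenvalue(1, 1, 1, 1, 1, C2, C3, 0, F(-2, 3)) == F(20, 27)
```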
### 4.5 Sectional curvatures
Sectional curvatures $\chi$ are associated with the choice of a two-
dimensional linear subspace of the tangent space at some point of the manifold
under consideration. Here the manifold is a Lie group, and the chosen metrics
are left-invariant, so it is enough to consider sectional curvatures at the
origin. There is a general formula, due to Milnor [16], that expresses these
quantities in terms of the structure constants of an orthonormal basis (for
the chosen metric) of left-invariant vector fields. However, we remind the
reader that, in these notes, for all choices of the right isometry group $K$
we have chosen the same basis $(X_{a})$ to perform curvature calculations,
namely the one that is orthonormal for the Killing metric, and which is
therefore not orthonormal for a general left-invariant metric $h$. For
this reason we could not use the Milnor formula (with the exception of the
Killing metric) and had to rely on the general expression
$\chi(u,v)=\frac{\langle{\mathcal{R}}(v,u)u,v\rangle}{\langle
u,u\rangle\langle v,v\rangle-\langle u,v\rangle^{2}}$
where $u$ and $v$ are two linearly independent vectors at the origin, the
inner product $\langle\,,\,\rangle$ is defined by $h$, and
${\mathcal{R}}(u,v)=[\nabla_{u},\nabla_{v}]-\nabla_{[u,v]}$ (the signs in the
definition of ${\mathcal{R}}$ are opposite to those of [16]; this explains the
order of arguments in the expression of $\chi(u,v)$). From the
calculated Riemann tensors one can determine the sectional curvatures
$\chi(X_{a},X_{b})=R_{abab}/(h_{aa}h_{bb}-h_{ab}^{2})$. We list them below,
for the parametrizations of $h$ given in (3) that correspond to right isometry
groups $K={\rm SU}(3)$, ${\mathrm{U}}(2)$ or ${\mathrm{S}O}(3)$. For the
groups $K={\mathrm{U}}(1)\times{\mathrm{U}}(1)$, ${\mathrm{U}}(1)_{Y}$ and
$\\{e\\}$ (the trivial subgroup), we have also explicit results for sectional
curvatures in terms of the parameters specifying the metric, but these
expressions are too large to be displayed on paper. For the
$K={\mathrm{U}}(1)_{I}$ family, the results are also too large to be displayed
but we shall nevertheless give explicit sectional curvatures for a subfamily.
For the special case of the Lorentzian Einstein metric obtained in sect. 3 we
only give numerical values (exact values typically involve specific roots of
$15^{th}$ degree polynomials).
Warning (again): the choice of a pair $X_{a},X_{b}$ with $a\neq b$ determines
a two-dimensional linear subspace at the origin but we remind the reader that
our vectors $X_{a}$ are usually not orthonormal for the chosen metric. By
definition $\chi(X_{a},X_{b})$ is symmetric in $a$ and $b$, and it is not
defined (one can set it equal to $0$) for $a=b$. We give tables of $\chi$ for
$a<b$ running from $1$ to $8$. In some cases we also give the Ricci principal
curvatures (one can check that their sum is the already given scalar
curvature).
$K={\rm SU}(3)$
$\begin{array}[]{cccccccc}.&\frac{\alpha}{12}&\frac{\alpha}{12}&\frac{\alpha}{48}&\frac{\alpha}{48}&\frac{\alpha}{48}&\frac{\alpha}{48}&0\\\
.&.&\frac{\alpha}{12}&\frac{\alpha}{48}&\frac{\alpha}{48}&\frac{\alpha}{48}&\frac{\alpha}{48}&0\\\
.&.&.&\frac{\alpha}{48}&\frac{\alpha}{48}&\frac{\alpha}{48}&\frac{\alpha}{48}&0\\\
.&.&.&.&\frac{\alpha}{12}&\frac{\alpha}{48}&\frac{\alpha}{48}&\frac{\alpha}{16}\\\
.&.&.&.&.&\frac{\alpha}{48}&\frac{\alpha}{48}&\frac{\alpha}{16}\\\
.&.&.&.&.&.&\frac{\alpha}{12}&\frac{\alpha}{16}\\\
.&.&.&.&.&.&.&\frac{\alpha}{16}\\\ \end{array}$
All sectional curvatures are non-negative, as expected: any compact Lie group
admits a bi-invariant metric with non-negative sectional curvatures [16], and
there is only one bi-invariant metric (up to scale) on ${\rm SU}(3)$.
Some of them vanish, also as expected, since the $3$-sphere group ${\rm SU}(2)$
is the only simply connected Lie group which admits a left-invariant metric of
strictly positive sectional curvature [21]. All Ricci principal curvatures are
equal to $\alpha/4$.
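Since the basis is orthonormal for the bi-invariant metric, the Ricci curvature in direction $X_{a}$ is the sum $\sum_{b\neq a}\chi(X_{a},X_{b})$, so each row of the symmetrized table must sum to $\alpha/4$. A quick check (illustrative Python, not from the original text, with $\alpha=1$):

```python
from fractions import Fraction as F

# Upper-triangular entries chi(a,b), a < b, of the bi-invariant table (alpha = 1).
rows = [
    [F(1,12), F(1,12), F(1,48), F(1,48), F(1,48), F(1,48), 0],
    [F(1,12), F(1,48), F(1,48), F(1,48), F(1,48), 0],
    [F(1,48), F(1,48), F(1,48), F(1,48), 0],
    [F(1,12), F(1,48), F(1,48), F(1,16)],
    [F(1,48), F(1,48), F(1,16)],
    [F(1,12), F(1,16)],
    [F(1,16)],
]

# Symmetrize into an 8x8 matrix (chi(a,a) set to 0 by convention).
chi = [[F(0)] * 8 for _ in range(8)]
for a, row in enumerate(rows):
    for j, v in enumerate(row):
        b = a + 1 + j
        chi[a][b] = chi[b][a] = v

# Every Ricci principal curvature equals alpha/4.
assert all(sum(chi[a]) == F(1, 4) for a in range(8))
```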
$K={\mathrm{U}}(2)$
$\begin{array}[]{cccccccc}.&\frac{\alpha}{12}&\frac{\alpha}{12}&\frac{\beta^{2}}{48\alpha}&\frac{\beta^{2}}{48\alpha}&\frac{\beta^{2}}{48\alpha}&\frac{\beta^{2}}{48\alpha}&0\\\
.&.&\frac{\alpha}{12}&\frac{\beta^{2}}{48\alpha}&\frac{\beta^{2}}{48\alpha}&\frac{\beta^{2}}{48\alpha}&\frac{\beta^{2}}{48\alpha}&0\\\
.&.&.&\frac{\beta^{2}}{48\alpha}&\frac{\beta^{2}}{48\alpha}&\frac{\beta^{2}}{48\alpha}&\frac{\beta^{2}}{48\alpha}&0\\\
.&.&.&.&-\frac{\beta(9\alpha\beta-16\alpha\gamma+3\beta\gamma)}{48\alpha\gamma}&\frac{\beta(4\alpha-3\beta)}{48\alpha}&\frac{\beta(4\alpha-3\beta)}{48\alpha}&\frac{\beta^{2}}{16\gamma}\\\
.&.&.&.&.&\frac{\beta(4\alpha-3\beta)}{48\alpha}&\frac{\beta(4\alpha-3\beta)}{48\alpha}&\frac{\beta^{2}}{16\gamma}\\\
.&.&.&.&.&.&-\frac{\beta(9\alpha\beta-16\alpha\gamma+3\beta\gamma)}{48\alpha\gamma}&\frac{\beta^{2}}{16\gamma}\\\
.&.&.&.&.&.&.&\frac{\beta^{2}}{16\gamma}\\\ \end{array}$
Ricci principal curvatures:
$\left\\{\frac{\beta^{2}}{12\alpha}+\frac{\alpha}{6},\frac{\beta^{2}}{12\alpha}+\frac{\alpha}{6},\frac{\beta^{2}}{12\alpha}+\frac{\alpha}{6},-\frac{\beta^{2}}{8\alpha}-\frac{\beta^{2}}{8\gamma}+\frac{\beta}{2},-\frac{\beta^{2}}{8\alpha}-\frac{\beta^{2}}{8\gamma}+\frac{\beta}{2},-\frac{\beta^{2}}{8\alpha}-\frac{\beta^{2}}{8\gamma}+\frac{\beta}{2},-\frac{\beta^{2}}{8\alpha}-\frac{\beta^{2}}{8\gamma}+\frac{\beta}{2},\frac{\beta^{2}}{4\gamma}\right\\}$.
$K={\mathrm{S}O}(3)$
$\begin{array}[]{cccccccc}.&\frac{\alpha^{2}}{12\beta}&\frac{\alpha(4\beta-3\alpha)}{12\beta}&\frac{\alpha(4\beta-3\alpha)}{48\beta}&\frac{\alpha^{2}}{48\beta}&\frac{\alpha(4\beta-3\alpha)}{48\beta}&\frac{\alpha^{2}}{48\beta}&0\\\
.&.&\frac{\alpha^{2}}{12\beta}&\frac{\alpha^{2}}{48\beta}&\frac{\beta}{48}&\frac{\alpha^{2}}{48\beta}&\frac{\beta}{48}&0\\\
.&.&.&\frac{\alpha(4\beta-3\alpha)}{48\beta}&\frac{\alpha^{2}}{48\beta}&\frac{\alpha(4\beta-3\alpha)}{48\beta}&\frac{\alpha^{2}}{48\beta}&0\\\
.&.&.&.&\frac{\alpha^{2}}{12\beta}&\frac{\alpha(4\beta-3\alpha)}{48\beta}&\frac{\alpha^{2}}{48\beta}&\frac{\alpha(4\beta-3\alpha)}{16\beta}\\\
.&.&.&.&.&\frac{\alpha^{2}}{48\beta}&\frac{\beta}{48}&\frac{\alpha^{2}}{16\beta}\\\
.&.&.&.&.&.&\frac{\alpha^{2}}{12\beta}&\frac{\alpha(4\beta-3\alpha)}{16\beta}\\\
.&.&.&.&.&.&.&\frac{\alpha^{2}}{16\beta}\\\ \end{array}$
In particular, sectional curvatures for the (Jensen) Einstein metric
($\beta=11\alpha$, in units of $\alpha$) read:
$\quad\begin{array}[]{cccccccc}.&\frac{1}{132}&\frac{41}{132}&\frac{41}{528}&\frac{1}{528}&\frac{41}{528}&\frac{1}{528}&0\\\
.&.&\frac{1}{132}&\frac{1}{528}&\frac{11}{48}&\frac{1}{528}&\frac{11}{48}&0\\\
.&.&.&\frac{41}{528}&\frac{1}{528}&\frac{41}{528}&\frac{1}{528}&0\\\
.&.&.&.&\frac{1}{132}&\frac{41}{528}&\frac{1}{528}&\frac{41}{176}\\\
.&.&.&.&.&\frac{1}{528}&\frac{11}{48}&\frac{1}{176}\\\
.&.&.&.&.&.&\frac{1}{132}&\frac{41}{176}\\\ .&.&.&.&.&.&.&\frac{1}{176}\\\
\end{array}$
Ricci principal curvatures:
$\left\\{-\frac{\alpha(\alpha-2\beta)}{4\beta},-\frac{\alpha(\alpha-2\beta)}{4\beta},-\frac{\alpha(\alpha-2\beta)}{4\beta},-\frac{\alpha(\alpha-2\beta)}{4\beta},-\frac{\alpha(\alpha-2\beta)}{4\beta},\frac{5\alpha^{2}+\beta^{2}}{24\beta},\frac{5\alpha^{2}+\beta^{2}}{24\beta},\frac{5\alpha^{2}+\beta^{2}}{24\beta}\right\\}$.
For the Jensen metric they are all equal to $21\alpha/44$.
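That the two distinct Ricci expressions coincide at $\beta=11\alpha$ is a one-line computation; the sketch below (illustrative Python, exact rationals, in units of $\alpha$) verifies it:

```python
from fractions import Fraction as F

# Ricci principal curvatures for K = SO(3) at the Jensen point beta = 11*alpha,
# in units of alpha (expressions taken from the text above).
alpha, beta = F(1), F(11)
r1 = -alpha * (alpha - 2*beta) / (4*beta)    # multiplicity 5
r2 = (5*alpha**2 + beta**2) / (24*beta)      # multiplicity 3

assert r1 == r2 == F(21, 44)   # all equal: the Jensen metric is Einstein
# Their sum reproduces the scalar curvature 8 * (21/44) alpha.
assert 5*r1 + 3*r2 == 8 * F(21, 44)
```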
$K={\mathrm{U}}(1)_{I}$.
The general case, with its $8$ parameters, is too large to be displayed. We
can nevertheless (using tiny fonts (!)) exhibit the sectional curvatures
obtained for the subfamily $\theta=0,\zeta=0$, $\delta=\gamma$.
$\begin{split}&\left\\{\begin{split}&\cdot,\frac{\alpha(4\beta-3\alpha)}{12\beta},\frac{\alpha^{2}}{12\beta},\frac{\gamma^{5}-2\eta^{2}\gamma^{3}-4\alpha\eta^{2}\gamma^{2}+\eta^{4}\gamma+4\alpha^{2}\eta^{2}\gamma+4\alpha\eta^{4}}{48\alpha\gamma^{3}-48\alpha\gamma\eta^{2}},\frac{\gamma^{5}-2\eta^{2}\gamma^{3}-4\alpha\eta^{2}\gamma^{2}+\eta^{4}\gamma+4\alpha^{2}\eta^{2}\gamma+4\alpha\eta^{4}}{48\alpha\gamma^{3}-48\alpha\gamma\eta^{2}},\frac{\gamma^{5}-2\eta^{2}\gamma^{3}-4\alpha\eta^{2}\gamma^{2}+\eta^{4}\gamma+4\alpha^{2}\eta^{2}\gamma+4\alpha\eta^{4}}{48\alpha\gamma^{3}-48\alpha\gamma\eta^{2}},\\\
&\qquad\frac{\gamma^{5}-2\eta^{2}\gamma^{3}-4\alpha\eta^{2}\gamma^{2}+\eta^{4}\gamma+4\alpha^{2}\eta^{2}\gamma+4\alpha\eta^{4}}{48\alpha\gamma^{3}-48\alpha\gamma\eta^{2}},0\end{split}\right\\},\\\
&\left\\{\begin{split}&\cdot,\cdot,\frac{\alpha^{2}}{12\beta},\frac{\gamma^{5}-2\eta^{2}\gamma^{3}-4\alpha\eta^{2}\gamma^{2}+\eta^{4}\gamma+4\alpha^{2}\eta^{2}\gamma+4\alpha\eta^{4}}{48\alpha\gamma^{3}-48\alpha\gamma\eta^{2}},\frac{\gamma^{5}-2\eta^{2}\gamma^{3}-4\alpha\eta^{2}\gamma^{2}+\eta^{4}\gamma+4\alpha^{2}\eta^{2}\gamma+4\alpha\eta^{4}}{48\alpha\gamma^{3}-48\alpha\gamma\eta^{2}},\frac{\gamma^{5}-2\eta^{2}\gamma^{3}-4\alpha\eta^{2}\gamma^{2}+\eta^{4}\gamma+4\alpha^{2}\eta^{2}\gamma+4\alpha\eta^{4}}{48\alpha\gamma^{3}-48\alpha\gamma\eta^{2}},\\\
&\frac{\gamma^{5}-2\eta^{2}\gamma^{3}-4\alpha\eta^{2}\gamma^{2}+\eta^{4}\gamma+4\alpha^{2}\eta^{2}\gamma+4\alpha\eta^{4}}{48\alpha\gamma^{3}-48\alpha\gamma\eta^{2}},0\end{split}\right\\},\\\
&\left\\{\cdot,\cdot,\cdot,\frac{\gamma^{2}-\eta^{2}}{48\beta},\frac{\gamma^{2}-\eta^{2}}{48\beta},\frac{\gamma^{2}-\eta^{2}}{48\beta},\frac{\gamma^{2}-\eta^{2}}{48\beta},0\right\\},\\\
&\left\\{\begin{split}&\cdot,\cdot,\cdot,\cdot,\frac{\beta\left(-9\gamma^{4}+16\epsilon\gamma^{3}+18\eta^{2}\gamma^{2}-16\epsilon\eta^{2}\gamma-9\eta^{4}+8\alpha\epsilon\eta^{2}\right)-3\epsilon\left(\gamma^{2}-\eta^{2}\right)^{2}}{48\beta\gamma^{2}\epsilon},\frac{-4\alpha^{2}\eta^{2}-3\left(\gamma^{2}-\eta^{2}\right)^{2}+4\alpha\left(\gamma^{3}-\eta^{2}\gamma+3\epsilon\eta^{2}\right)}{48\alpha\gamma^{2}},\frac{4\alpha^{2}\eta^{2}-3\left(\gamma^{2}-\eta^{2}\right)^{2}+4\alpha\left(\gamma^{3}-\gamma\eta^{2}\right)}{48\alpha\left(\gamma^{2}-\eta^{2}\right)},\\\
&\frac{\gamma^{5}-2\eta^{2}\gamma^{3}-4\epsilon\eta^{2}\gamma^{2}+\eta^{4}\gamma+4\epsilon^{2}\eta^{2}\gamma+4\epsilon\eta^{4}}{16\gamma^{3}\epsilon-16\gamma\epsilon\eta^{2}}\end{split}\right\\},\\\
&\left\\{\cdot,\cdot,\cdot,\cdot,\cdot,\frac{4\alpha^{2}\eta^{2}-3\left(\gamma^{2}-\eta^{2}\right)^{2}+4\alpha\left(\gamma^{3}-\gamma\eta^{2}\right)}{48\alpha\left(\gamma^{2}-\eta^{2}\right)},\frac{-4\alpha^{2}\eta^{2}-3\left(\gamma^{2}-\eta^{2}\right)^{2}+4\alpha\left(\gamma^{3}-\eta^{2}\gamma+3\epsilon\eta^{2}\right)}{48\alpha\gamma^{2}},\frac{\gamma^{5}-2\eta^{2}\gamma^{3}-4\epsilon\eta^{2}\gamma^{2}+\eta^{4}\gamma+4\epsilon^{2}\eta^{2}\gamma+4\epsilon\eta^{4}}{16\gamma^{3}\epsilon-16\gamma\epsilon\eta^{2}}\right\\},\\\
&\left\\{\cdot,\cdot,\cdot,\cdot,\cdot,\cdot,\frac{\beta\left(-9\gamma^{4}+16\epsilon\gamma^{3}+18\eta^{2}\gamma^{2}-16\epsilon\eta^{2}\gamma-9\eta^{4}+8\alpha\epsilon\eta^{2}\right)-3\epsilon\left(\gamma^{2}-\eta^{2}\right)^{2}}{48\beta\gamma^{2}\epsilon},\frac{\gamma^{5}-2\left(\gamma^{2}+2\epsilon\gamma-2\epsilon^{2}\right)\eta^{2}\gamma+(\gamma+4\epsilon)\eta^{4}}{16\gamma\epsilon\left(\gamma^{2}-\eta^{2}\right)}\right\\},\\\
&\left\\{\cdot,\cdot,\cdot,\cdot,\cdot,\cdot,\cdot,\frac{\gamma^{5}-2\left(\gamma^{2}+2\epsilon\gamma-2\epsilon^{2}\right)\eta^{2}\gamma+(\gamma+4\epsilon)\eta^{4}}{16\gamma\epsilon\left(\gamma^{2}-\eta^{2}\right)}\right\\}\end{split}$
The scalar curvature, for this family, was given by (12).
$K={\mathrm{U}}(1)_{I}$. Special case: Lorentzian Einstein metric
$\begin{array}[]{cccccccc}.&0.156539&0.0589315&0.0207884&0.0207884&0.0207884&0.0207884&0\\\
.&.&0.0589315&0.0207884&0.0207884&0.0207884&0.0207884&0\\\
.&.&.&0.000619797&0.000619797&0.000619797&0.000619797&0\\\
.&.&.&.&0.108778&-0.0335267&0.0410926&-0.054309\\\
.&.&.&.&.&0.0410926&-0.0335267&-0.054309\\\ .&.&.&.&.&.&0.108778&-0.054309\\\
.&.&.&.&.&.&.&-0.054309\\\ \end{array}$
Other features of this metric have been discussed in sect. 3.
### 4.6 Ricci decomposition (examples)
The Ricci decomposition of the Riemann tensor associated with the Levi-Civita
connection defined by the metric $h$, namely
$R=C+\frac{1}{d-2}(\rho-\frac{\tau}{d}h)\mathbin{\mathchoice{\ooalign{$\displaystyle\bigcirc$\crcr$\displaystyle\land$\crcr}}{\ooalign{$\textstyle\bigcirc$\crcr$\textstyle\land$\crcr}}{\ooalign{$\scriptstyle\bigcirc$\crcr$\scriptstyle\land$\crcr}}{\ooalign{$\scriptscriptstyle\bigcirc$\crcr$\scriptscriptstyle\land$\crcr}}}h+\frac{\tau}{2d(d-1)}h\mathbin{\mathchoice{\ooalign{$\displaystyle\bigcirc$\crcr$\displaystyle\land$\crcr}}{\ooalign{$\textstyle\bigcirc$\crcr$\textstyle\land$\crcr}}{\ooalign{$\scriptstyle\bigcirc$\crcr$\scriptstyle\land$\crcr}}{\ooalign{$\scriptscriptstyle\bigcirc$\crcr$\scriptscriptstyle\land$\crcr}}}h$,
where $d$ is the dimension (here $d=8$), $C$ is the Weyl tensor, and
$\mathbin{\mathchoice{\ooalign{$\displaystyle\bigcirc$\crcr$\displaystyle\land$\crcr}}{\ooalign{$\textstyle\bigcirc$\crcr$\textstyle\land$\crcr}}{\ooalign{$\scriptstyle\bigcirc$\crcr$\scriptstyle\land$\crcr}}{\ooalign{$\scriptscriptstyle\bigcirc$\crcr$\scriptscriptstyle\land$\crcr}}}$
denotes the Kulkarni-Nomizu product of two $(0,2)$ tensors, expresses the
Riemann tensor (here thought of as a $(0,4)$ tensor) as an orthogonal direct
sum. Such a decomposition can be considered for an arbitrary metric, in
particular for homogeneous metrics. As a verification of our calculations
involving curvatures, we have checked this identity for all the families of
metrics considered in this paper. It would of course be paper-consuming to
list all the non-zero entries of the relevant tensors, nevertheless, in a few
cases, it may be useful to mention a numerical consequence of this identity,
namely the following norm decomposition:
$|R|^{2}=|C|^{2}+|\frac{1}{d-2}(\rho-\frac{\tau}{d}h)\mathbin{\mathchoice{\ooalign{$\displaystyle\bigcirc$\crcr$\displaystyle\land$\crcr}}{\ooalign{$\textstyle\bigcirc$\crcr$\textstyle\land$\crcr}}{\ooalign{$\scriptstyle\bigcirc$\crcr$\scriptstyle\land$\crcr}}{\ooalign{$\scriptscriptstyle\bigcirc$\crcr$\scriptscriptstyle\land$\crcr}}}h|^{2}+|\frac{\tau}{2d(d-1)}h\mathbin{\mathchoice{\ooalign{$\displaystyle\bigcirc$\crcr$\displaystyle\land$\crcr}}{\ooalign{$\textstyle\bigcirc$\crcr$\textstyle\land$\crcr}}{\ooalign{$\scriptstyle\bigcirc$\crcr$\scriptstyle\land$\crcr}}{\ooalign{$\scriptscriptstyle\bigcirc$\crcr$\scriptscriptstyle\land$\crcr}}}h|^{2}$
(26)
In those cases where the results are reasonably short we give $|R|^{2}$
followed by a triple containing the three contributions, in the same order as
in the previous equation.
$K={\rm SU}(3)$ (bi-invariant metrics):
$\left\\{\frac{\alpha^{2}}{2},\\{\frac{5\alpha^{2}}{14},0,\frac{\alpha^{2}}{7}\\}\right\\}$
$K={\mathrm{U}}(2)$:
$\begin{split}&\\{\frac{8\alpha^{4}\gamma^{2}+\alpha^{2}\beta^{2}\left(51\beta^{2}-144\beta\gamma+176\gamma^{2}\right)+6\alpha\beta^{3}\gamma(5\beta-16\gamma)+23\beta^{4}\gamma^{2}}{96\alpha^{2}\gamma^{2}},\\\
&\\{\frac{80\alpha^{4}\gamma^{2}-24\alpha^{3}\beta\gamma(\beta-8\gamma)+\alpha^{2}\beta^{2}\left(909\beta^{2}-2448\beta\gamma+2600\gamma^{2}\right)+6\alpha\beta^{3}\gamma(79\beta-240\gamma)+377\beta^{4}\gamma^{2}}{2016\alpha^{2}\gamma^{2}},\\\
&\frac{20\alpha^{4}\gamma^{2}+12\alpha^{3}\beta\gamma(\beta-8\gamma)+\alpha^{2}\beta^{2}\left(45\beta^{2}-144\beta\gamma+236\gamma^{2}\right)+6\alpha\beta^{3}\gamma(7\beta-24\gamma)+29\beta^{4}\gamma^{2}}{576\alpha^{2}\gamma^{2}},\frac{1}{448}\left(-\frac{\beta^{2}}{\alpha}+2\alpha+\beta\left(8-\frac{\beta}{\gamma}\right)\right)^{2}\\}\\}\end{split}$
$K={\mathrm{S}O}(3)$:
$\left\\{\frac{295\alpha^{4}-660\alpha^{3}\beta+460\alpha^{2}\beta^{2}+\beta^{4}}{192\beta^{2}},\\{\frac{5\left(508\alpha^{4}-1110\alpha^{3}\beta+733\alpha^{2}\beta^{2}+12\alpha\beta^{3}+\beta^{4}\right)}{2016\beta^{2}},\frac{5\left(11\alpha^{2}-12\alpha\beta+\beta^{2}\right)^{2}}{2304\beta^{2}},\frac{\left(-5\alpha^{2}+20\alpha\beta+\beta^{2}\right)^{2}}{1792\beta^{2}}\\}\right\\}$
In particular, for the Einstein solution (Jensen case: $\beta=11\alpha$), we
have
$\left\\{\frac{2639\alpha^{2}}{968},\\{\frac{2135\alpha^{2}}{968},0,\frac{63\alpha^{2}}{121}\\}\right\\}$.
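The Jensen values follow from the general $K={\mathrm{S}O}(3)$ expressions by setting $\beta=11\alpha$, and the orthogonality of the decomposition (26) can be checked at the same time. A verification sketch (illustrative Python, exact rationals, in units of $\alpha^{2}$):

```python
from fractions import Fraction as F

# |R|^2 and its three orthogonal pieces for K = SO(3) at beta = 11*alpha
# (Jensen Einstein metric), in units of alpha^2; formulas from the text.
a, b = F(1), F(11)
R2     = (295*a**4 - 660*a**3*b + 460*a**2*b**2 + b**4) / (192*b**2)
weyl   = 5*(508*a**4 - 1110*a**3*b + 733*a**2*b**2 + 12*a*b**3 + b**4) / (2016*b**2)
ricci0 = 5*(11*a**2 - 12*a*b + b**2)**2 / (2304*b**2)
scalar = (-5*a**2 + 20*a*b + b**2)**2 / (1792*b**2)

assert R2 == F(2639, 968)
assert (weyl, ricci0, scalar) == (F(2135, 968), F(0), F(63, 121))
# The traceless-Ricci piece vanishes (Einstein), and (26) is an orthogonal sum:
assert R2 == weyl + ricci0 + scalar
```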
$K={\mathrm{U}}(1)_{I}$: The results giving the norms are too large to be
displayed. We only consider the Lorentzian Einstein solution. In that case,
one can obtain these four norms as roots of appropriate 15th degree
polynomials with (very) large integer coefficients. We shall not print them.
Numerically, one finds $\\{0.115257,\\{0.0813543,0,0.0339023\\}\\}$.
The expressions of the four norms, for
$K={\mathrm{U}}(1)\times{\mathrm{U}}(1)$ and for $K={\mathrm{U}}(1)_{Y}$, are
also very large, and we shall not display them.
Notice that for all Einstein spaces the square norm
$|(\rho-\frac{\tau}{d}h)\mathbin{\mathchoice{\ooalign{$\displaystyle\bigcirc$\crcr$\displaystyle\land$\crcr}}{\ooalign{$\textstyle\bigcirc$\crcr$\textstyle\land$\crcr}}{\ooalign{$\scriptstyle\bigcirc$\crcr$\scriptstyle\land$\crcr}}{\ooalign{$\scriptscriptstyle\bigcirc$\crcr$\scriptscriptstyle\land$\crcr}}}h|^{2}$
vanishes, as it should.
## 5 Physical applications
### 5.1 Particle physics and the GMO formula
##### Physical considerations.
In the standard model of elementary particles, more precisely in the quark
model based on the Lie group $G={\rm SU}(N)$, the particles called mesons are
associated with the space of intertwiners ($G$-equivariant morphisms) from
$V_{f}\otimes{\overline{V}_{f}}$, where $V_{f}$ is the defining representation
and ${\overline{V}_{f}}$ its conjugate, to the irreducible representations
that appear in the product, namely the adjoint representation, and the trivial
one. This decomposition, for ${\rm SU}(3)$, in terms of dimensions, reads
$[3]\otimes[\overline{3}]=[8]\oplus[1]$. Mesons are described as basis vectors
of the tensor product, specified by appropriate $G$-Clebsch-Gordan
coefficients (or by the $3J$ Wigner symbols of the group $G$) associated with
the chosen spaces of intertwiners, but this is not our concern here.
Quarks (resp. anti-quarks) are basis vectors in the space of the defining
representation of $G$ (resp. its conjugate), and mesons are called “bound
states” of a quark and an anti-quark. In particle physics parlance $G$ is the
“flavor group”, which, in the presently accepted model, means ${\rm SU}(N)$,
and where $N$ can only be $2,3,4,5$ or $6$. When $N=2$, $G$ is called the
isospin group and the basis vectors of the defining representation (the
quarks) are nicknamed ‘up’ and ‘down’. When $N=3$ (in this paper we restrict
our attention to $G={\rm SU}(3)$) they are nicknamed ‘up’, ‘down’, ‘strange’,
and are respectively denoted by $u,d,s$.
The classical (i.e., not quantum field theoretical) description of mesons, as
sections of appropriate vector bundles over the space-time manifold, also
specifies their behavior with respect to space-time symmetries (i.e., under
action of the Lorentz group or of the Poincaré group), but we don’t have to be
more precise here; it is enough to say that there are several families of
mesons differing by their space-time properties, the two most important
families being the so-called pseudo-scalar mesons and the vector mesons.
Quarks are not observable but mesons are, and they have masses. Experimentally
the pseudo-scalar mesons have masses that are close (same remark for the
vector mesons), and this is precisely the reason why, historically, they were
described as members of the same Lie group multiplet (basis vectors of some
irrep). Calculating meson masses in terms of more fundamental parameters is a
task that goes beyond the possibilities of (perturbative) quantum field
theory, in particular of quantum chromodynamics, but it remains that,
phenomenologically, one can assume that interactions of mesons, in particular
the quadratic operator responsible for their masses, or their mass splitting,
commutes with the generators of the chosen flavor group, or of a subgroup of
the latter. This hypothesis is at the origin of several mass relations.
Experimentally, particles of the same ${\rm SU}(3)$ multiplet have
approximately the same masses, but this is even more so when they are members
of the same irreducible representation of the ${\mathrm{U}}(2)$ subgroup
(locally ${\rm SU}(2)\times{\mathrm{U}}(1)$) defined in table (2). For this
reason, it is natural to describe (or approximate) the unknown mass operator
as the Laplacian on ${\rm SU}(3)$ associated with an appropriate left-
invariant metric for which the right isometry group is $K={\mathrm{U}}(2)$,
and to look at the consequences of this ansatz. We rescale the dual metric by
$\mu^{2}$ to fix the dimensions ($\mu$ will have the dimensions of a mass).
The eigenvalues of the Laplacian are given by (20), we write them again below
(the whole expression is now multiplied by $\mu^{2}$). To an irrep
$(o_{1},o_{2})$ of ${\rm SU}(3)$ branching to irreps of ${\mathrm{U}}(2)$
labelled by the isospin value $I$ and hypercharge $Y$ we associate the square
mass $m^{2}=\mu^{2}\,C(o_{1},o_{2})$ given by
$m^{2}={{C_{2}}(o_{1},o_{2})}\,\beta\,\mu^{2}+\left(\frac{1}{3}I(I+1)(\alpha-\beta)\mu^{2}+\frac{1}{4}Y^{2}(\gamma-\beta)\mu^{2}\right)$
(27)
##### Pseudo-scalar mesons and the Gell-Mann-Okubo formula.
The branching of the adjoint representation (octet) of ${\rm SU}(3)$, when
restricted to the previously defined ${\mathrm{U}}(2)$ subgroup, in terms of
${\rm SU}(2)$ highest weights (the components are written in the basis of
fundamental weights), reads $(1,1)\rightarrow(2)+(1)+(1)+(0)$; this is
often written in terms of dimensions of irreps
($[8]\rightarrow[3]_{0}+[2]_{3}+[2]_{-3}+[1]_{0}$), with the same notations
$[2I+1]_{3Y}$ as in sect. 4.2, since there is no possible confusion in the
present case. The corresponding mesons are the three pions
$\\{\pi^{+},\pi^{0},\pi^{-}\\}$, for which $I=1$, $Y=0$, the four kaons
$\\{K^{+},K^{0}\\}$, for which $I=1/2$, $Y=1$, and $\\{\overline{K^{0}},K^{-}\\}$,
for which $I=1/2$, $Y=-1$, and the eta particle, for which $I=0$, $Y=0$.
In the adjoint representation of ${\rm SU}(3)$, ${C_{2}}=1$, so that one
obtains immediately:
$\left\\{m^{2}{}_{\pi},\,m^{2}{}_{K},\,m^{2}{}_{\eta}\right\\}=\left\\{\frac{1}{3}(2\alpha+\beta)\mu^{2},\frac{1}{4}(\alpha+2\beta+\gamma)\mu^{2},\beta\mu^{2}\right\\}$
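These three combinations follow from eq. (27) with ${C_{2}}=1$ and the isospin/hypercharge assignments above; the identities hold for arbitrary $\alpha,\beta,\gamma$. A quick exact-arithmetic sketch (illustrative Python, not from the original text):

```python
from fractions import Fraction as F

def m2(alpha, beta, gamma, I, Y):
    # Square-mass ansatz of eq. (27) with C2 = 1 (octet), in units of mu^2.
    return beta + F(1, 3)*I*(I + 1)*(alpha - beta) + F(1, 4)*Y**2*(gamma - beta)

# Generic parameter values; the checks below hold for any alpha, beta, gamma.
a, b, g = F(2), F(3), F(5)
assert m2(a, b, g, 1, 0)        == F(1, 3)*(2*a + b)      # pions: I=1, Y=0
assert m2(a, b, g, F(1, 2), 1)  == F(1, 4)*(a + 2*b + g)  # kaons: I=1/2, Y=+1
assert m2(a, b, g, F(1, 2), -1) == F(1, 4)*(a + 2*b + g)  # kaons: I=1/2, Y=-1
assert m2(a, b, g, 0, 0)        == b                      # eta:   I=0, Y=0
```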
Experimentally: $m_{\pi^{+}}=m_{\pi^{-}}=139.57\,\text{MeV}$,
$m_{\pi^{0}}=134.976\,\text{MeV}$, $m_{K^{+}}=m_{K^{-}}=493.677\,\text{MeV}$,
$m_{K^{0}}=m_{\overline{K^{0}}}=497.64\,\text{MeV}$, and
$m_{\eta}=549\,\text{MeV}$. For pions and kaons we use averaged masses
$m_{\pi}\simeq 137\,\text{MeV}$, $m_{K}\simeq 496\,\text{MeV}$,
$m_{\eta}=549\,\text{MeV}$. The corresponding values of the parameters (these
values were already obtained in [4], up to a scaling factor equal to $12$,
coming from the fact that the bi-invariant metric used in that reference for
normalizing purposes was a multiple of the Killing metric) are then
$\mu^{2}\alpha_{exp}\simeq-(350\,\text{MeV})^{2},\,\mu^{2}\beta_{exp}\simeq(549\,\text{MeV})^{2},\,\mu^{2}\gamma_{exp}\simeq(710\,\text{MeV})^{2}$.
Their ratios, normalized by $\beta_{exp}$, are
$(\alpha_{exp},\beta_{exp},\gamma_{exp})/\beta_{exp}=(-0.406591,1,1.67156)$.
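The fit amounts to inverting the three octet relations for the averaged masses; the sketch below (illustrative Python, not from the original text) reproduces the quoted parameter values:

```python
import math

# Invert m_eta^2 = beta*mu^2, m_pi^2 = (2*alpha + beta)*mu^2/3,
# m_K^2 = (alpha + 2*beta + gamma)*mu^2/4, using averaged masses in MeV.
m_pi, m_K, m_eta = 137.0, 496.0, 549.0

beta_mu2  = m_eta**2
alpha_mu2 = (3*m_pi**2 - m_eta**2) / 2
gamma_mu2 = 4*m_K**2 - alpha_mu2 - 2*beta_mu2

# alpha*mu^2 comes out negative: quote it as -(350 MeV)^2.
assert round(math.sqrt(-alpha_mu2)) == 350
assert round(math.sqrt(gamma_mu2)) == 710
# Ratios normalized by beta, as quoted in the text.
assert abs(alpha_mu2 / beta_mu2 - (-0.406591)) < 1e-5
assert abs(gamma_mu2 / beta_mu2 - 1.67156) < 1e-4
```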
We stress the fact that the above is nothing else than an educated fit: it is
neither a prediction nor a “post-diction” since the number of unknown
parameters is the same as the number of values coming from experiment. In
order to get a prediction, one needs at least one more relation between the
parameters $\alpha,\beta,\gamma$; such a relation (expressed in a rather
different way) was postulated in the sixties, by making the hypothesis that
the ${\rm SU}(3)$ mass operator could be well approximated by keeping only
its singlet and octet components, therefore neglecting the contribution from
the representation of dimension $27$ (see for instance [20]). In our language,
this amounts to neglecting the ${}_{27}h^{-1}$ component of the (dual)
pseudo-Riemannian metric (with the previous values of
$\alpha_{exp},\beta_{exp},\gamma_{exp}$, the signature of this bilinear form is
$(5,3)$) in the
decomposition $h^{-1}={{}_{1}h^{-1}}+{{}_{8}h^{-1}}+{{}_{27}h^{-1}}$ discussed
in sect. 2.3. In other words, the coefficient
$C=\frac{1}{40}(\alpha-4\beta+3\gamma)$ given in (4) is set to $0$. One can
then eliminate the parameter $\gamma$, for example, and obtain
$\left\\{m^{2}{}_{\pi},\,m^{2}{}_{K},\,m^{2}{}_{\eta}\right\\}\simeq\left(\frac{1}{3}(2\alpha+\beta)\mu^{2},\,\frac{1}{6}(\alpha+5\beta)\mu^{2},\,\beta\mu^{2}\right),$
which implies the relation
$m_{\eta}^{2}\simeq\frac{1}{3}\left(4m_{K}^{2}-m_{\pi}^{2}\right)$.
This is the celebrated Gell-Mann-Okubo formula for pseudo-scalar mesons (the
formula using square masses, see for instance the article on GMO formulae in
Wikipedia). It holds reasonably well, although, using the experimental values
of $m_{\pi}$ and $m_{K}$, it leads to a value of $567\,\text{MeV}$ for the mass
of the $\eta$, which is slightly too big. (The first published mass relation of
this type (1961) was for baryons, in particular for the hyperons of the
decuplet. As is well known, that equation led to the discovery of the
$\Omega^{-}$ particle, and to the Nobel Prize in Physics 1969. In our language,
this latter formula, which is linear in the masses rather than quadratic, could
be obtained from the eigenvalues of a Dirac operator on ${\rm SU}(3)$
associated with a left-invariant metric for which the right isometry group is
${\mathrm{U}}(2)$.)
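The quoted $567\,\text{MeV}$ follows directly from the relation above; a one-line check (illustrative Python, not from the original text):

```python
import math

# GMO relation for squared masses: m_eta^2 ~ (4*m_K^2 - m_pi^2) / 3.
m_pi, m_K = 137.0, 496.0
m_eta_pred = math.sqrt((4*m_K**2 - m_pi**2) / 3)

assert round(m_eta_pred) == 567   # MeV, vs. the measured 549 MeV
```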
Remarks: Rather than using a (rough) Laplacian, for some appropriate ${\rm
SU}(3)\times{\mathrm{U}}(2)$ invariant metric, one could be tempted to use
the Yamabe (conformal) Laplacian $\Delta-\tfrac{d-2}{4(d-1)}\,\tau$, where
$\tau$ is the scalar curvature (in our case $d=8$); it differs from the former
by a simple shift, equal to
$-\tfrac{3}{14}\,\frac{1}{4}\left(-\frac{\beta^{2}}{\alpha}+2\alpha+\beta\left(8-\frac{\beta}{\gamma}\right)\right)$,
but this does not seem to lead to anything physically particularly
interesting. One could also play with the idea of using analogous
considerations to study other particle multiplets, to generalize the previous
analysis to Lie groups $SU(N)$ for $N>3$, or even to consider other kinds of
“symmetry breaking” scenarios (selecting other right-invariant isometry
groups), but this would lie beyond the intended scope of these notes.
##### Warning.
The physicist reader certainly knows, and the mathematician reader should be
warned, that the above way of obtaining mass relations for some elementary
particles, from considerations on Laplacians (or Dirac operators) associated
with left-invariant metrics (actually ${\rm SU}(3)\times{\mathrm{U}}(2)$
invariant metrics on the Lie group ${\rm SU}(3)$) is not standard, in the sense
that, although not a new observation (see [4]), this approach is not widely
known and it is not the way it is taught. It can be noticed that, whatever the
starting point one chooses (the elementary quark model, or more sophisticated
approaches like QCD or chiral perturbation theory), the elementary
mathematical considerations leading to these mass relations are similar: they
involve representation theory of ${\rm SU}(3)$, the branching to ${\rm
SU}(2)\times{\mathrm{U}}(1)$ (of isospin and hypercharge), the fact that
masses should be related to eigenvalues of linear or quadratic operators
(would-be Hamiltonian or mass operators) that are not explicitly known, and an
approximation of “octet dominance” (in the present case it is the hypothesis
that one can neglect, in the metric, a contribution associated with the $27$
dimensional representation of ${\rm SU}(3)$ —see sect 2.3 and the discussion
in [20]). However, it is one thing to cook up a physico-mathematical formula
that works at least approximately in some particular cases, and another to
derive it
in the framework of a physical theory. The contemporary attitude is to view
GMO mass formulae for hadrons as remote consequences of a fundamental theory
called “The Standard Model”; this theory, in its usual formulation, does not
explicitly involve considerations on the left-invariant geometry of Lie groups
${\rm SU}(N)$ —this $N$ standing for the number of “flavors”. Our observation
that the same mass formulae can be interpreted as expressions describing the
spectrum of appropriate differential operators associated with particular
left-invariant metrics on Lie groups (it is not difficult to extend the
previous results to $N>3$) may not be significant but, notwithstanding this
possibility, the result suggests that it could be, or should be, justified, as
a formal consequence of some currently accepted physical theory (a subject
that we don’t investigate in these notes) and maybe trigger some new
developments of the latter.
### 5.2 Other possibly physical considerations
The fact that some particular left-invariant metrics, which may be Riemannian
or pseudo-Riemannian, can sometimes be Einstein metrics played no role in the
previous discussion on particle masses. Now, for the last thirty years or so,
many theoretical physicists have been concerned with the construction of
classical field models, quantum field theories, and string theories,
incorporating, on top of space and time, several “extra-dimensions” that we do
not perceive (the prototype being the old Kaluza-Klein theory). Such models,
that are often speculative, are described by equations that sometimes require
the total space of the theory (or maybe a quotient of the latter) to be an
Einstein manifold. We have no wish to comment on this endeavor and shall refrain
from suggesting anything in that direction but one may notice that, if needed,
the examples of eight dimensional Einstein manifolds described in sect 3 can
be used to construct higher dimensional pseudo-Riemannian spaces, of given
signature, that are also Einstein. For instance one can build a Lorentzian
homogeneous Einstein metric on the $11$-dimensional compact manifold ${\rm
SU}(3)\times{\rm SU}(2)$ where the first factor of the Cartesian product is
endowed with the Lorentzian Einstein structure described previously and where
the second factor (diffeomorphic with the sphere $S^{3}$) is endowed with its
standard ${\mathrm{S}O}(4)$ invariant Einstein metric.
# Hybrid origins of the cosmic-ray nuclei spectral hardening at a few hundred
GV
Jia-Shu Niu (牛家树)
Institute of Theoretical Physics, Shanxi University, Taiyuan 030006, China
State Key Laboratory of Quantum Optics and Quantum Optics Devices, Shanxi University, Taiyuan 030006, China
###### Abstract
Many experiments have confirmed the spectral hardening at a few hundred GV in
cosmic-ray (CR) nuclei spectra, and three general classes of origins have been
proposed: primary source acceleration, propagation, and the superposition of
different kinds of sources. The AMS-02 CR nuclei spectra of He, C, N, O, Ne,
Mg, Si, and B (a set that includes B and its dominating parent species) are
collected to study the necessity of employing a break in the diffusion
coefficient and independent breaks in the primary source injection spectra to
reproduce the spectral hardening at a few hundred GV. For comparison, three
different schemes are introduced for the global fitting. The fitting results
show that both the break in the diffusion coefficient and the independent
breaks in the primary source injection spectra are needed; these correspond to
spatially dependent propagation and the superposition of different kinds of
sources, respectively. Consequently, the nuclei spectral hardening at a few
hundred GV should have hybrid origins. Moreover, the CR spectral indices of He
and Ne deviate significantly from those of the other species in the
low-rigidity region, which indicates their different CR origins.
††software: emcee (Foreman-Mackey et al., 2013), galprop (Strong & Moskalenko,
1998; Moskalenko et al., 2002; Strong & Moskalenko, 2001; Moskalenko et al.,
2003; Ptuskin et al., 2006), corner (Foreman-Mackey, 2016), seaborn (Waskom,
2021)
## 1 Motivation
Many space-borne and ground-based experiments have confirmed the spectral
hardening at a few hundred GV in cosmic-ray (CR) nuclei species (such as
ATIC-2 (Panov et al., 2006), CREAM (Ahn et al., 2010), and PAMELA (Adriani et
al., 2011)). The space-station experiment Alpha Magnetic Spectrometer (AMS-02)
improves the measurement precision of the CR fluxes by an order of magnitude
in the systematics (Aguilar et al., 2013) and leads us into a precision-driven
era. The spectra of different nuclei species released by AMS-02 (including the
primary CR species: proton (Aguilar et al., 2015), helium (He), carbon (C),
oxygen (O) (Aguilar et al., 2017), neon (Ne), magnesium (Mg), silicon (Si)
(Aguilar et al., 2020), and iron (Fe) (Aguilar et al., 2021a); the secondary
CR species: lithium (Li), beryllium (Be), boron (B) (Aguilar et al., 2018b),
and fluorine (F) (Aguilar et al., 2021b); and the hybrid CR species: nitrogen
(N) (Aguilar et al., 2018a), sodium (Na), and aluminum (Al) (Aguilar et al.,
2021c)) provide us with an excellent opportunity to study the origin,
acceleration, and propagation of CRs. As the most obvious and attractive fine
structure in the AMS-02 nuclei spectra, the spectral hardening in the region
of $100-1000$ GV has been studied in many works.
One of the most promising scenarios (see, e.g., Blasi et al. (2012);
Tomassetti (2012, 2015a, 2015b); Feng et al. (2016); Génolini et al. (2017);
Jin et al. (2016); Guo & Yuan (2018a, b); Liu et al. (2018); Niu et al.
(2019); Boschini et al. (2020a, b)) is that the spectral hardening comes from
the CR propagation process. Phenomenologically, in such a scenario, the
secondary nuclei spectra should harden even more than the primary ones at a
few hundred GV111The secondary species spectra not only inherit the hardening
from the primary species (which is caused by the propagation of the primary
species), but are also hardened by their own propagation processes., which is
equivalent to adding an extra high-rigidity break in the diffusion
coefficient. Some previous works show that the AMS-02 nuclei data favor the
hardening coming from the propagation process rather than from the CR primary
source injections in a statistical sense (see, e.g., Génolini et al. (2017);
Niu & Xue (2020)).
However, some recent works show that the propagation origin of the hardening
cannot be easily established (see, e.g., Yuan et al. (2020b); Niu (2021)).
Because the secondary CR species (such as Li, Be, and B) are produced in
collisions of primary CR particles (such as C, N, and O) with the interstellar
medium (ISM), the spectral hardening of the secondary CR species is inherited
from that of the CR primary species. A test of this process should therefore
consider all the contributions from the parent species, or at least the
dominating ones.
In detail, the contribution of C to the B flux is about 20%, which is almost
equal to that of N but less than that of O (Génolini et al., 2018). Niu (2021)
shows that not only the break rigidity (at a few hundred GV), but also the
difference between the spectral indices below and above the break, differ
among C, N, and O. In such a case, conclusions obtained from the B/C ratio
alone cannot represent the real propagation process completely (as in Génolini
et al. (2017, 2019)). Moreover, the spectra of protons and He have very small
uncertainties because of their extremely large event numbers; if one uses
these spectra in a global fitting based on a uniform primary source injection
for all the CR nuclei species, they dominate the injection-spectrum parameters
and seriously dilute the impacts of the real parent species (such as C, N, O,
Ne, Mg, and Si) on the daughter species (such as Li, Be, and B) (as in Niu et
al. (2019); Niu & Xue (2020)). As a result, independent primary source
injections are needed.
In this work, the AMS-02 CR nuclei spectra of C, N, O, Ne, Mg, and Si are used
as the parent species, and that of B is used as the daughter species 222The
spectra of Li and Be are not used in this work because some recent works show
that they might have extra primary components (Boschini et al., 2020a; Niu et
al., 2019; Niu & Xue, 2020), and reproducing their spectra together with that
of B would require re-scaling the production cross sections (De La Torre
Luque et al., 2021a, b).. The spectrum of He is also included in the data set,
as it provides valuable comparisons with the other species (especially C and
O). This clean data set not only helps us check the consistency between the
observed data and the CR model, but also avoids the systematics between
different experiments.
## 2 Setups
In this work, we design three schemes to test the properties of the spectral
hardening in the region of 100-1000 GV. In Scheme I, high-rigidity breaks are
simultaneously employed in the diffusion coefficient (with one break) and in
the primary source injection spectra of the different species (with
independent breaks); in Scheme II, independent high-rigidity breaks are
employed only in the primary source injection spectra of the different
species; in Scheme III, a single high-rigidity break is employed in the
diffusion coefficient to account for the spectral hardening.
### 2.1 Models for Different Schemes
A modified version of the diffusion-reacceleration scenario is used to
describe the propagation process (Yuan, 2019), which successfully reproduces
the spectra in the low-rigidity region. For Schemes I and III, the diffusion
coefficient includes a high-rigidity break and is parameterized as
$D_{xx}(R)=D_{0}\cdot\beta^{\eta}\left(\frac{\,R_{\mathrm{br}}}{R_{0}}\right)^{\delta_{1}}\times\left\\{\begin{array}[]{ll}\left(\dfrac{R}{\,R_{\mathrm{br}}}\right)^{\delta_{1}}&R\leq\,R_{\mathrm{br}}\\\
\left(\dfrac{R}{\,R_{\mathrm{br}}}\right)^{\delta_{2}}&R>\,R_{\mathrm{br}}\end{array}\right.,$
(1)
where $R\equiv pc/Ze$ is the rigidity, $\beta$ is the velocity of the particle
in units of the speed of light $c$, $\,R_{\mathrm{br}}$ is the high-rigidity
break, $\delta_{1}$ and $\delta_{2}$ are the diffusion slopes below and above
the break, and $R_{0}$ is the reference rigidity (4 GV). For Scheme II, the
diffusion coefficient has no break and is parameterized as
$D_{xx}(R)=D_{0}\cdot\beta^{\eta}\left(\frac{\,R_{\mathrm{br}}}{R_{0}}\right)^{\delta_{1}}\times\left(\dfrac{R}{\,R_{\mathrm{br}}}\right)^{\delta_{1}}\
\text{for all }R.$ (2)
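For illustration, Eqs. (1) and (2) can be evaluated directly. The sketch below is our own; the default parameter values are merely rounded Scheme I best-fit numbers from Table 1, not part of the model definition:

```python
import numpy as np

def d_xx(R, D0=6.6e28, eta=-1.5, R_br=225.0, delta1=0.45, delta2=0.31,
         R0=4.0, beta=1.0, with_break=True):
    """Diffusion coefficient of Eqs. (1)-(2) [cm^2/s].

    R, R_br, R0 in GV; beta = v/c. with_break=True gives the broken form
    of Schemes I/III (Eq. 1); False gives the single power law of Scheme II
    (Eq. 2). Default parameter values are illustrative only.
    """
    prefac = D0 * beta**eta * (R_br / R0) ** delta1
    if not with_break:
        return prefac * (R / R_br) ** delta1
    delta = np.where(R <= R_br, delta1, delta2)  # slope changes at R_br
    return prefac * (R / R_br) ** delta

# The broken form is continuous at R = R_br by construction:
R = np.array([100.0, 225.0, 1000.0])
D = d_xx(R)
```

At $R=R_{\mathrm{br}}$ both branches (and both schemes) reduce to the same prefactor, so the parameterization is continuous across the break.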
The primary source injection spectrum of each nuclear species is independently
assumed to take a broken-power-law form. For Schemes I and II, each spectrum
includes a low-rigidity break and a high-rigidity break, and is represented
as:
$q_{\mathrm{i}}\propto
N_{i}\times\left\\{\begin{array}[]{ll}\left(\dfrac{R}{R_{1}^{i}}\right)^{-\nu_{1}^{i}}&R\leq
R_{1}^{i}\\\
\left(\dfrac{R}{R_{1}^{i}}\right)^{-\nu_{2}^{i}}&R_{1}^{i}<R\leq
R_{2}^{i}\\\
\left(\dfrac{R}{R_{2}^{i}}\right)^{-\nu_{3}^{i}}\left(\dfrac{R_{2}^{i}}{R_{1}^{i}}\right)^{-\nu_{2}^{i}}&R>R_{2}^{i}\end{array}\right.,$
(3)
where $i$ denotes the nuclear species, $N_{i}$ is the abundance of species $i$
relative to that of protons333The relative abundance of protons is fixed to
$10^{6}$, and the post-propagated normalization flux of protons at 100 GeV is
fixed to $4.45\times
10^{-2}\,\mathrm{m}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1}\,\mathrm{GeV}^{-1}$.,
and $\nu_{1}^{i}$ ($\nu_{2}^{i}$, $\nu_{3}^{i}$) is the spectral index in the
corresponding rigidity interval delimited by the reference break rigidities
$R_{1}^{i}$ and $R_{2}^{i}$. For Scheme III, each of the primary source
injection spectra includes only a low-rigidity break:
$q_{\mathrm{i}}\propto
N_{i}\times\left\\{\begin{array}[]{ll}\left(\dfrac{R}{R_{1}^{i}}\right)^{-\nu_{1}^{i}}&R\leq
R_{1}^{i}\\\
\left(\dfrac{R}{R_{1}^{i}}\right)^{-\nu_{2}^{i}}&R>R_{1}^{i}\end{array}\right..$
(4)
In this work, we use independent primary source injection spectra for He, C,
N, O, Ne, Mg, and Si.444Here, we use the injection spectra of the dominating
isotopes ${}^{4}_{2}\mathrm{He}$, ${}^{12}_{6}\mathrm{C}$,
${}^{14}_{7}\mathrm{N}$, ${}^{16}_{8}\mathrm{O}$, ${}^{20}_{10}\mathrm{Ne}$,
${}^{24}_{12}\mathrm{Mg}$, and ${}^{28}_{14}\mathrm{Si}$ to represent those of
the corresponding elements. All other primary injection species, which have
small contributions to the flux of B, are assumed to have the same injection
spectrum as ${}^{20}_{10}\mathrm{Ne}$. The nuclear network used in our
calculations is extended to silicon-28.
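For concreteness, the three-segment broken power law of Eq. (3) can be sketched as follows (our own illustrative code; the default break rigidities and spectral indices are placeholders, not fitted values):

```python
import numpy as np

def q_inj(R, N=1.0, R1=5.0, R2=500.0, nu1=2.5, nu2=2.36, nu3=2.2):
    """Three-segment broken power law of Eq. (3), up to normalization.

    R, R1, R2 in GV. The parameter values here are illustrative placeholders.
    The factor (R2/R1)**(-nu2) in the last branch enforces continuity at R2.
    """
    R = np.asarray(R, dtype=float)
    low = (R / R1) ** (-nu1)                          # R <= R1
    mid = (R / R1) ** (-nu2)                          # R1 < R <= R2
    high = (R / R2) ** (-nu3) * (R2 / R1) ** (-nu2)   # R > R2
    return N * np.where(R <= R1, low, np.where(R <= R2, mid, high))
```

Note that the spectrum is continuous at both breaks: the first two branches agree at $R_{1}^{i}$, and the explicit $\left(R_{2}^{i}/R_{1}^{i}\right)^{-\nu_{2}^{i}}$ factor matches the last branch to the middle one at $R_{2}^{i}$.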
The force-field approximation (Gleeson & Axford, 1968) is adopted to describe
the effects of solar modulation in the solar system; it contains only one
parameter, the so-called solar-modulation potential $\phi$. All the above
configurations are simulated, and the diffusion equation is solved numerically
with the public code galprop v56 555http://galprop.stanford.edu (Strong &
Moskalenko, 1998; Moskalenko et al., 2002; Strong & Moskalenko, 2001;
Moskalenko et al., 2003; Ptuskin et al., 2006).666More details about the
configuration can be found in Niu & Li (2018); Niu et al. (2019).
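The force-field approximation has a simple closed form; a minimal sketch (our own, with $m_{p}\approx 0.938$ GeV used as the rest energy per nucleon and a toy power-law LIS standing in for the true local interstellar spectrum) is:

```python
import numpy as np

M_P = 0.938  # proton rest energy [GeV], used here as rest energy per nucleon

def force_field(E, j_lis, phi=0.72, Z=1, A=1):
    """Force-field approximation (Gleeson & Axford, 1968) -- a sketch.

    E     : kinetic energy per nucleon [GeV/n]
    j_lis : callable, local interstellar (LIS) flux as a function of E
    phi   : solar-modulation potential [GV]; 0.72 GV is roughly the value
            found in this work
    Z, A  : charge and mass numbers of the nucleus
    """
    Phi = abs(Z) / A * phi            # mean energy loss per nucleon [GeV/n]
    E_lis = E + Phi
    # Liouville factor mapping the LIS flux onto the modulated flux
    factor = E * (E + 2.0 * M_P) / (E_lis * (E_lis + 2.0 * M_P))
    return j_lis(E_lis) * factor

# Toy LIS: a pure power law in kinetic energy per nucleon (illustrative only)
j_lis = lambda E: E ** -2.7
```

As expected, the modulation strongly suppresses the flux at a few GeV/n and becomes negligible well above the modulation potential, which is why $\phi$ is decoupled from the high-rigidity discussion below.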
It should be noted that, in the model described above for Scheme I, the
hardening of the spectra at a few hundred GV can be contributed both by the
primary source acceleration
($R_{1}^{i},R_{2}^{i},\nu_{1}^{i},\nu_{2}^{i},\nu_{3}^{i}$) and by the
propagation process ($\,R_{\mathrm{br}},\delta_{1},\delta_{2}$). However, the
former leads to an equal hardening of the primary and secondary spectra, while
the latter leads to a larger hardening in the secondary spectra than in the
primary ones. Whether the secondary nuclei spectra harden even more than the
primary ones can thus be tested directly by comparing $\delta_{1}$ and
$\delta_{2}$.
### 2.2 Fitting Procedure
In this work, Bayesian inference is used to obtain the posterior probability
distribution function (PDF), based on the following formula
$p(\boldsymbol{\theta}|D)\propto\mathcal{L}(D|\boldsymbol{\theta})\pi(\boldsymbol{\theta}),$
(5)
where $\boldsymbol{\theta}=\\{\theta_{1},\dots,\theta_{m}\\}$ is the free
parameter set, $D$ is the experimental data set,
$\mathcal{L}(D|\boldsymbol{\theta})$ is the likelihood function, and
$\pi(\boldsymbol{\theta})$ is the prior PDF, which represents our state of
knowledge on the values of the parameters before taking the new data into
account.
We take the prior PDF as uniform distributions
$\pi(\theta_{i})\propto\left\\{\begin{array}[]{ll}1,&\text{for }\theta_{i,\text{min}}<\theta_{i}<\theta_{i,\text{max}}\\\
0,&\text{otherwise}\end{array}\right.,$ (6)
and the likelihood function as a Gaussian form
$\mathcal{L}(D|\boldsymbol{\theta})=\prod_{i}\frac{1}{\sqrt{2\pi\sigma_{i}^{2}}}\exp\left[-\frac{(f_{\text{th},i}(\boldsymbol{\theta})-f_{\text{exp},i})^{2}}{2\sigma_{i}^{2}}\right],$
(7)
where $f_{\text{th},i}(\boldsymbol{\theta})$ is the $i$-th observable
predicted by the model, which depends on the parameter set
$\boldsymbol{\theta}$, and $f_{\text{exp},i}$ is the one measured by the
experiment, with uncertainty $\sigma_{i}$.
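Eqs. (5)-(7) translate directly into a log-posterior. The sketch below is our own minimal version; the function names, the box bounds, and the toy linear model are purely illustrative:

```python
import numpy as np

def log_prior(theta, bounds):
    """Log of the uniform prior of Eq. (6): 0 inside the box, -inf outside."""
    theta = np.asarray(theta, dtype=float)
    lo, hi = np.asarray(bounds, dtype=float).T
    return 0.0 if np.all((theta > lo) & (theta < hi)) else -np.inf

def log_likelihood(theta, model, f_exp, sigma):
    """Log of the Gaussian likelihood of Eq. (7)."""
    resid = (model(theta) - f_exp) / sigma
    return -0.5 * np.sum(resid**2 + np.log(2.0 * np.pi * sigma**2))

def log_posterior(theta, model, f_exp, sigma, bounds):
    """log p(theta|D) up to an additive constant, Eq. (5)."""
    lp = log_prior(theta, bounds)
    if not np.isfinite(lp):
        return -np.inf
    return lp + log_likelihood(theta, model, f_exp, sigma)

# Toy example: fit the slope of y = theta * x to three fake data points
model = lambda th: th[0] * np.array([1.0, 2.0, 3.0])
f_exp = np.array([2.0, 4.0, 6.0])   # fake "data", consistent with theta = 2
sigma = np.ones(3)
bounds = [(0.0, 5.0)]
```

Returning $-\infty$ outside the prior box is the standard way to combine a uniform prior with an MCMC sampler: such proposals are rejected with probability one.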
Here we use the Markov chain Monte Carlo (MCMC) algorithm proposed by Goodman
& Weare (2010) instead of the classical Metropolis-Hastings algorithm to
determine the PDFs of the parameters, because its ensemble samplers can
prevent the Markov chains from becoming trapped in local optima and thus
provide robust PDFs of the parameters. The algorithm proposed by Goodman &
Weare (2010) is slightly altered and implemented as the Python module
emcee777http://dan.iel.fm/emcee/ by Foreman-Mackey et al. (2013), which makes
it easy to use thanks to the advantages of Python.
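The essence of the Goodman & Weare stretch move can be sketched in a few lines of plain NumPy (a toy, serial re-implementation for illustration only; in practice one would use emcee itself):

```python
import numpy as np

def stretch_move(walkers, log_prob, a=2.0, rng=None):
    """One serial Goodman & Weare (2010) stretch-move update of the ensemble.

    walkers : (K, ndim) array of current positions.
    log_prob: callable returning the log-probability of an ndim-vector.
    A toy re-implementation of the move behind emcee, for illustration only.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    K, ndim = walkers.shape
    out = walkers.copy()
    lp = np.array([log_prob(w) for w in out])
    for k in range(K):
        j = int(rng.integers(K - 1))
        j = j if j < k else j + 1                      # complementary walker
        z = (1.0 + (a - 1.0) * rng.random()) ** 2 / a  # z ~ g(z) on [1/a, a]
        prop = out[j] + z * (out[k] - out[j])          # stretch proposal
        lp_prop = log_prob(prop)
        # accept with prob min(1, z^(ndim-1) * exp(lp_prop - lp[k]))
        if np.log(rng.random()) < (ndim - 1) * np.log(z) + lp_prop - lp[k]:
            out[k], lp[k] = prop, lp_prop
    return out

# Smoke test: sample a 1-d standard normal with 32 walkers
rng = np.random.default_rng(42)
walkers = rng.normal(size=(32, 1))
samples = []
for step in range(600):
    walkers = stretch_move(walkers, lambda x: -0.5 * x[0] ** 2, rng=rng)
    if step >= 300:                                    # discard burn-in
        samples.append(walkers.copy())
samples = np.concatenate(samples).ravel()
```

Because the proposal uses only the relative positions of walkers, the move is affine-invariant: its performance does not degrade on strongly correlated parameter sets like those fitted here.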
In total, for Scheme I, we have the following 50 free parameters:
$\boldsymbol{\theta}_{\mathrm{I}}=\\{D_{0},\eta,\,R_{\mathrm{br}},\delta_{1},\delta_{2},z_{h},v_{A},\phi\\}\cup\\{N_{i},R_{1}^{i},R_{2}^{i},\nu_{1}^{i},\nu_{2}^{i},\nu_{3}^{i}\\}_{i\in\\{\mathrm{He},\mathrm{C},\mathrm{N},\mathrm{O},\mathrm{Ne},\mathrm{Mg},\mathrm{Si}\\}}~{}.$
For Scheme II, we have the following 48 free parameters:
$\boldsymbol{\theta}_{\mathrm{II}}=\\{D_{0},\eta,\delta_{1},z_{h},v_{A},\phi\\}\cup\\{N_{i},R_{1}^{i},R_{2}^{i},\nu_{1}^{i},\nu_{2}^{i},\nu_{3}^{i}\\}_{i\in\\{\mathrm{He},\mathrm{C},\mathrm{N},\mathrm{O},\mathrm{Ne},\mathrm{Mg},\mathrm{Si}\\}}~{}.$
For Scheme III, we have the following 36 free parameters:
$\boldsymbol{\theta}_{\mathrm{III}}=\\{D_{0},\eta,\,R_{\mathrm{br}},\delta_{1},\delta_{2},z_{h},v_{A},\phi\\}\cup\\{N_{i},R_{1}^{i},\nu_{1}^{i},\nu_{2}^{i}\\}_{i\in\\{\mathrm{He},\mathrm{C},\mathrm{N},\mathrm{O},\mathrm{Ne},\mathrm{Mg},\mathrm{Si}\\}}~{}.$
For all the schemes, the spectral data of He, C, N, O, and B are taken from
Aguilar et al. (2021d), those of Ne, Mg, and Si from Aguilar et al. (2020),
and the data errors used in our fitting are the quadratic sums of the
statistical and systematic errors.
## 3 Results
The samples of the parameters are taken as their posterior PDFs after the
Markov chains have reached their equilibrium states.888Here, different prior
values are tested to ensure the robustness of the PDFs. The best-fit results
and the corresponding residuals of the spectra are given in Figures 1 (He, C,
N, and O), 2 (Ne, Mg, and Si), and 3 (B). The best-fit values, the statistical
means and standard deviations, and the 90% confidence intervals of the
parameters in the three schemes are shown in Table 1. The 1D probability
distributions and 2D credible regions (covariances) of the posterior PDFs of
the parameters in the different schemes and groups are collected in
Appendices A, B, and C.
Figure 1: Fitting results and corresponding residuals to the CR spectra of He,
C, N, and O for Schemes I, II, and III. The 2$\sigma$ (deep red) and
3$\sigma$ (light red) bounds are also shown in the subfigures. The relevant
$\chi^{2}$ of each spectrum is given in the subfigures as well.
Figure 2: Fitting results and corresponding residuals to the CR spectra of Ne,
Mg, and Si for Schemes I, II, and III. The 2$\sigma$ (deep red) and
3$\sigma$ (light red) bounds are also shown in the subfigures. The relevant
$\chi^{2}$ of each spectrum is given in the subfigures as well.
Figure 3: Fitting results and corresponding residuals to the CR spectra of B
for Schemes I, II, and III. The 2$\sigma$ (deep red) and 3$\sigma$ (light
red) bounds are also shown in the subfigures. The relevant $\chi^{2}$ of each
spectrum is given in the subfigures as well.

Table 1: Fitting results of the parameters in $\boldsymbol{\theta}_{\mathrm{I}}$,
$\boldsymbol{\theta}_{\mathrm{II}}$, and $\boldsymbol{\theta}_{\mathrm{III}}$.
Prior: prior interval; Mean/Std: statistical mean and standard deviation
values; 90%: 90% confidence intervals; Best: best-fit values.
| Scheme I | Scheme II | Scheme III
---|---|---|---
ID | Prior | Mean/Std | 90% | Best | Mean/Std | 90% | Best | Mean/Std | 90% | Best
$D_{0}\ (10^{28}\,\mathrm{cm}^{2}\,\mathrm{s}^{-1})$ | [1, 20] | 6.6$\pm$0.4 | [5.8, 7.2] | 6.6 | 5.7$\pm$0.4 | [4.8, 6.5] | 5.7 | 6.8$\pm$0.6 | [5.7, 7.7] | 6.9
$\,R_{\mathrm{br}}\ (\,\mathrm{GV})$ | [100, 1000] | 225$\pm$38 | [167, 272] | 204 | — | — | — | 267$\pm$26 | [226, 312] | 269
$\delta_{1}$ | [0.1, 1.0] | 0.45$\pm$0.01 | [0.43, 0.46] | 0.45 | 0.43$\pm$0.01 | [0.41, 0.45] | 0.43 | 0.44$\pm$0.01 | [0.42, 0.45] | 0.44
$\delta_{2}$ | [0.1, 1.0] | 0.31$\pm$0.03 | [0.27, 0.36] | 0.32 | — | — | — | 0.26$\pm$0.02 | [0.22, 0.29] | 0.26
$\eta$ | [-5.0, 5.0] | -1.5$\pm$0.1 | [-1.8, -1.3] | -1.5 | -1.5$\pm$0.2 | [-1.7, -1.2] | -1.4 | -1.5$\pm$0.1 | [-1.7, -1.4] | -1.5
$z_{h}\ (\,\mathrm{kpc})$ | [0.5, 20.0] | 10$\pm$1 | [8, 13] | 10.5 | 7$\pm$1 | [6, 9] | 7.0 | 11$\pm$2 | [8, 14] | 10.9
$v_{A}\ (\,\mathrm{km}/\,\mathrm{s})$ | [0, 70] | 19$\pm$1 | [16, 21] | 19 | 20$\pm$2 | [18, 23] | 21 | 20$\pm$1 | [18, 22] | 20
$\phi\ (\,\mathrm{GV})$ | [0, 1.5] | 0.72$\pm$0.03 | [0.67, 0.78] | 0.72 | 0.72$\pm$0.03 | [0.68, 0.79] | 0.73 | 0.75$\pm$0.03 | [0.69, 0.81] | 0.75
$R_{1}^{\mathrm{He}}\ (\,\mathrm{GV})$ | [1, 100] | 4.4$\pm$0.6 | [3.6, 5.6] | 4.2 | 5.5$\pm$1.2 | [4.1, 6.7] | 4.8 | 3.6$\pm$0.5 | [2.9, 4.6] | 3.5
$R_{2}^{\mathrm{He}}\ (\,\mathrm{GV})$ | [100, 1000] | 593$\pm$166 | [349, 946] | 623 | 272$\pm$41 | [220, 371] | 284 | — | — | —
$\nu_{1}^{\mathrm{He}}$ | [1.0, 4.0] | 2.78$\pm$0.13 | [2.60, 3.03] | 2.81 | 2.63$\pm$0.10 | [2.49, 2.84] | 2.69 | 3.26$\pm$0.31 | [2.81, 3.84] | 3.24
$\nu_{2}^{\mathrm{He}}$ | [1.0, 4.0] | 2.34$\pm$0.01 | [2.32, 2.36] | 2.34 | 2.35$\pm$0.01 | [2.33, 2.36] | 2.35 | 2.34$\pm$0.01 | [2.33, 2.35] | 2.34
$\nu_{3}^{\mathrm{He}}$ | [1.0, 4.0] | 2.25$\pm$0.08 | [2.11, 2.34] | 2.22 | 2.19$\pm$0.03 | [2.13, 2.22] | 2.18 | — | — | —
$R_{1}^{\mathrm{C}}\ (\,\mathrm{GV})$ | [1, 100] | 9$\pm$4 | [3, 17] | 8 | 7$\pm$5 | [1, 17] | 5 | 7$\pm$2 | [4, 12] | 7
$R_{2}^{\mathrm{C}}\ (\,\mathrm{GV})$ | [100, 1000] | 455$\pm$118 | [269, 728] | 448 | 239$\pm$36 | [186, 311] | 232 | — | — | —
$\nu_{1}^{\mathrm{C}}$ | [1.0, 4.0] | 2.42$\pm$0.05 | [2.36, 2.54] | 2.43 | 2.45$\pm$0.10 | [2.28, 2.69] | 2.46 | 2.50$\pm$0.08 | [2.40, 2.63] | 2.48
$\nu_{2}^{\mathrm{C}}$ | [1.0, 4.0] | 2.36$\pm$0.01 | [2.34, 2.37] | 2.36 | 2.37$\pm$0.01 | [2.35, 2.39] | 2.37 | 2.36$\pm$0.01 | [2.34, 2.37] | 2.36
$\nu_{3}^{\mathrm{C}}$ | [1.0, 4.0] | 2.24$\pm$0.07 | [2.10, 2.32] | 2.24 | 2.18$\pm$0.04 | [2.12, 2.23] | 2.18 | — | — | —
$R_{1}^{\mathrm{N}}\ (\,\mathrm{GV})$ | [1, 100] | 70$\pm$16 | [37, 98] | 75 | 16$\pm$9 | [3, 38] | 15 | 78$\pm$21 | [29, 99] | 84
$R_{2}^{\mathrm{N}}\ (\,\mathrm{GV})$ | [100, 1000] | 822$\pm$125 | [552, 981] | 817 | 164$\pm$28 | [118, 209] | 160 | — | — | —
$\nu_{1}^{\mathrm{N}}$ | [1.0, 4.0] | 2.40$\pm$0.03 | [2.35, 2.44] | 2.40 | 2.34$\pm$0.10 | [2.16, 2.51] | 2.37 | 2.40$\pm$0.03 | [2.35, 2.45] | 2.40
$\nu_{2}^{\mathrm{N}}$ | [1.0, 4.0] | 2.28$\pm$0.04 | [2.22, 2.37] | 2.29 | 2.44$\pm$0.04 | [2.40, 2.52] | 2.44 | 2.29$\pm$0.04 | [2.21, 2.35] | 2.29
$\nu_{3}^{\mathrm{N}}$ | [1.0, 4.0] | 1.86$\pm$0.33 | [1.32, 2.22] | 1.71 | 2.00$\pm$0.06 | [1.90, 2.10] | 2.00 | — | — | —
$R_{1}^{\mathrm{O}}\ (\,\mathrm{GV})$ | [1, 100] | 6$\pm$3 | [2, 11] | 5 | 8$\pm$3 | [2, 15] | 7 | 5$\pm$2 | [3, 8] | 5
$R_{2}^{\mathrm{O}}\ (\,\mathrm{GV})$ | [100, 1000] | 767$\pm$125 | [504, 961] | 759 | 696$\pm$117 | [431, 871] | 642 | — | — | —
$\nu_{1}^{\mathrm{O}}$ | [1.0, 4.0] | 2.47$\pm$0.15 | [2.21, 2.75] | 2.46 | 2.47$\pm$0.07 | [2.37, 2.65] | 2.47 | 2.58$\pm$0.12 | [2.42, 2.83] | 2.55
$\nu_{2}^{\mathrm{O}}$ | [1.0, 4.0] | 2.38$\pm$0.01 | [2.37, 2.40] | 2.38 | 2.38$\pm$0.01 | [2.36, 2.40] | 2.38 | 2.38$\pm$0.01 | [2.37, 2.40] | 2.38
$\nu_{3}^{\mathrm{O}}$ | [1.0, 4.0] | 2.20$\pm$0.12 | [1.99, 2.35] | 2.17 | 2.02$\pm$0.09 | [1.85, 2.16] | 2.02 | — | — | —
$R_{1}^{\mathrm{Ne}}\ (\,\mathrm{GV})$ | [1, 100] | 8$\pm$2 | [6, 12] | 8 | 9$\pm$3 | [6, 14] | 9 | 12$\pm$5 | [7, 21] | 10
$R_{2}^{\mathrm{Ne}}\ (\,\mathrm{GV})$ | [100, 1000] | 797$\pm$119 | [566, 980] | 823 | 788$\pm$88 | [586, 941] | 764 | — | — | —
$\nu_{1}^{\mathrm{Ne}}$ | [1.0, 4.0] | 2.11$\pm$0.09 | [1.92, 2.26] | 2.13 | 2.18$\pm$0.08 | [2.01, 2.30] | 2.18 | 2.21$\pm$0.08 | [2.07, 2.33] | 2.23
$\nu_{2}^{\mathrm{Ne}}$ | [1.0, 4.0] | 2.38$\pm$0.01 | [2.36, 2.40] | 2.38 | 2.38$\pm$0.01 | [2.36, 2.40] | 2.38 | 2.38$\pm$0.01 | [2.37, 2.40] | 2.38
$\nu_{3}^{\mathrm{Ne}}$ | [1.0, 4.0] | 2.15$\pm$0.21 | [1.72, 2.41] | 2.08 | 1.87$\pm$0.15 | [1.54, 2.08] | 1.82 | — | — | —
$R_{1}^{\mathrm{Mg}}\ (\,\mathrm{GV})$ | [1, 100] | 28$\pm$11 | [9, 48] | 27 | 14$\pm$7 | [3, 29] | 11 | 44$\pm$25 | [7, 89] | 40
$R_{2}^{\mathrm{Mg}}\ (\,\mathrm{GV})$ | [100, 1000] | 550$\pm$186 | [217, 939] | 559 | 385$\pm$85 | [240, 592] | 398 | — | — | —
$\nu_{1}^{\mathrm{Mg}}$ | [1.0, 4.0] | 2.41$\pm$0.03 | [2.37, 2.45] | 2.41 | 2.38$\pm$0.05 | [2.25, 2.46] | 2.38 | 2.43$\pm$0.02 | [2.38, 2.46] | 2.43
$\nu_{2}^{\mathrm{Mg}}$ | [1.0, 4.0] | 2.46$\pm$0.02 | [2.44, 2.49] | 2.46 | 2.46$\pm$0.01 | [2.44, 2.48] | 2.46 | 2.47$\pm$0.02 | [2.44, 2.50] | 2.47
$\nu_{3}^{\mathrm{Mg}}$ | [1.0, 4.0] | 2.40$\pm$0.14 | [2.10, 2.63] | 2.39 | 2.30$\pm$0.11 | [2.09, 2.44] | 2.27 | — | — | —
$R_{1}^{\mathrm{Si}}\ (\,\mathrm{GV})$ | [1, 100] | 42$\pm$13 | [22, 70] | 42 | 53$\pm$10 | [34, 74] | 57 | 55$\pm$20 | [22, 89] | 53
$R_{2}^{\mathrm{Si}}\ (\,\mathrm{GV})$ | [100, 1000] | 528$\pm$156 | [236, 858] | 534 | 355$\pm$116 | [168, 641] | 315 | — | — | —
$\nu_{1}^{\mathrm{Si}}$ | [1.0, 4.0] | 2.38$\pm$0.02 | [2.34, 2.41] | 2.38 | 2.39$\pm$0.02 | [2.36, 2.41] | 2.39 | 2.39$\pm$0.02 | [2.35, 2.42] | 2.39
$\nu_{2}^{\mathrm{Si}}$ | [1.0, 4.0] | 2.45$\pm$0.02 | [2.43, 2.48] | 2.45 | 2.46$\pm$0.02 | [2.44, 2.52] | 2.47 | 2.46$\pm$0.02 | [2.43, 2.49] | 2.46
$\nu_{3}^{\mathrm{Si}}$ | [1.0, 4.0] | 2.48$\pm$0.11 | [2.30, 2.68] | 2.47 | 2.32$\pm$0.08 | [2.10, 2.41] | 2.31 | — | — | —
$N_{\mathrm{He}}/71990.0$ | [0.1,5.0] | 1.40$\pm$0.01 | [1.38, 1.42] | 1.40 | 1.39$\pm$0.01 | [1.38, 1.42] | 1.40 | 1.40$\pm$0.01 | [1.39, 1.42] | 1.40
$N_{\mathrm{C}}/2819.0$ | [0.1,5.0] | 1.21$\pm$0.01 | [1.19, 1.23] | 1.21 | 1.20$\pm$0.02 | [1.18, 1.22] | 1.19 | 1.22$\pm$0.01 | [1.20, 1.24] | 1.22
$N_{\mathrm{N}}/182.8$ | [0.1,5.0] | 1.45$\pm$0.05 | [1.37, 1.51] | 1.44 | 1.35$\pm$0.07 | [1.25, 1.46] | 1.36 | 1.45$\pm$0.04 | [1.37, 1.51] | 1.45
$N_{\mathrm{O}}/3822.0$ | [0.1,5.0] | 1.11$\pm$0.01 | [1.09, 1.13] | 1.11 | 1.12$\pm$0.01 | [1.10, 1.14] | 1.12 | 1.11$\pm$0.01 | [1.10, 1.13] | 1.11
$N_{\mathrm{Ne}}/312.5$ | [0.1,5.0] | 1.59$\pm$0.03 | [1.54, 1.63] | 1.59 | 1.62$\pm$0.03 | [1.57, 1.66] | 1.62 | 1.60$\pm$0.02 | [1.56, 1.64] | 1.60
$N_{\mathrm{Mg}}/658.1$ | [0.1,5.0] | 0.94$\pm$0.02 | [0.91, 0.96] | 0.93 | 0.95$\pm$0.02 | [0.92, 0.98] | 0.95 | 0.94$\pm$0.02 | [0.91, 0.96] | 0.94
$N_{\mathrm{Si}}/725.7$ | [0.1,5.0] | 1.02$\pm$0.02 | [0.99, 1.05] | 1.02 | 1.02$\pm$0.02 | [0.98, 1.05] | 1.02 | 1.01$\pm$0.01 | [1.00, 1.04] | 1.02
$\chi^{2}/\text{d.o.f.}$ | 78.6/484 | 118.0/486 | 105.2/498
For Scheme I, II, and III, we have
$\chi^{2}_{\mathrm{I}}/d.o.f=78.6/484\approx 0.16$,
$\chi^{2}_{\mathrm{II}}/d.o.f=118.0/486\approx 0.24$, and
$\chi^{2}_{\mathrm{III}}/d.o.f=105.2/498\approx 0.21$ for the best-fit
results, respectively.
In Bayesian terms, the criterion for decisive evidence between two models with
the same number of degrees of freedom ($\mathrm{d.o.f.}$) is
$\Delta\chi^{2}\geq 10$ (see, e.g., Génolini et al. (2017)). Comparing Schemes
II and III, $\Delta\chi^{2}=\chi^{2}_{\mathrm{II}}-\chi^{2}_{\mathrm{III}}=12.8$,
while the $\mathrm{d.o.f.}$ of Scheme II is even smaller than that of Scheme
III; this is decisive evidence that Scheme III is statistically significantly
better than Scheme II for the current data set. It is consistent with some
previous works (see, e.g., Génolini et al. (2017); Niu & Xue (2020)), which
found that the AMS-02 nuclei data favor the spectral hardening coming from the
propagation process rather than from the CR primary source. Comparing Schemes
I and III, although the $\mathrm{d.o.f.}$ of Scheme I is smaller than that of
Scheme III (due to its 14 additional parameters),
$\Delta\chi^{2}=\chi^{2}_{\mathrm{III}}-\chi^{2}_{\mathrm{I}}=26.6$ is a large
improvement. Moreover, considering the differences in
$\chi^{2}/\mathrm{d.o.f.}$: since the difference between Schemes II and III
($\sim 0.03$) is statistically significant, the larger difference between
Schemes I and III ($\sim 0.05$) indicates that Scheme I is statistically
significantly better than Scheme III for the current data set.
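The numbers quoted above follow directly from the $\chi^{2}$ and d.o.f. values listed in Table 1; as a quick arithmetic check (our own snippet):

```python
# chi^2 and d.o.f. of the three best fits, as listed in Table 1
chi2 = {"I": 78.6, "II": 118.0, "III": 105.2}
dof = {"I": 484, "II": 486, "III": 498}

reduced = {s: chi2[s] / dof[s] for s in chi2}   # chi^2 / d.o.f. per scheme
d_23 = chi2["II"] - chi2["III"]  # 12.8: decisive evidence for Scheme III over II
d_31 = chi2["III"] - chi2["I"]   # 26.6: decisive evidence for Scheme I over III
```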
The small values of $\chi^{2}/\mathrm{d.o.f.}$ in the three schemes are mainly
caused by the correlations among the systematic errors of the data. More
appropriate treatments of the systematic errors can be found in Derome et al.
(2019); Weinrich et al. (2020); Heisig et al. (2020); Korsmeier & Cuoco
(2021a).
In Figures 1, 2, and 3, for a given nuclei spectrum, Scheme I gives the
smallest $\chi^{2}$ in most cases (owing to its precise description of the
high-rigidity spectral structures), while Schemes II and III give larger
$\chi^{2}$, because neither of them can precisely reproduce the spectral
breaks around 200 GV and 400-800 GV simultaneously. Comparing the $\chi^{2}$
of Schemes II and III for the different species in Figure 1, Scheme II gives
larger $\chi^{2}$ for He and O, and Scheme III gives larger $\chi^{2}$ for C
and N, which indicates that the spectral breaks around 200 GV and 400-800 GV
have different weights for different nuclei species. Comparing the $\chi^{2}$
of Schemes II and III for the different species in Figure 2, Scheme II always
gives the larger $\chi^{2}$, which indicates that the spectral breaks around
400-800 GV are not as important for Ne, Mg, and Si; this suggests that Ne, Mg,
and Si on the one hand, and He, C, and O on the other, might be two different
classes of primary CRs (Aguilar et al., 2020). Comparing the $\chi^{2}$ of
Schemes II and III for B in Figure 3, Scheme II gives $\chi^{2}=31.96$ and
Scheme III gives $\chi^{2}=19.20$ ($\Delta\chi^{2}\sim 13$), which indicates
that the spectral break around 200 GV is the dominating feature of the B
spectrum and favors the propagation origin of the spectral hardening.
Detailed information about the three schemes is collected in Table 1.
The propagation parameters in Schemes I and III have similar distributions,
except for $R_{br}$ and $\delta_{2}$: in Scheme I the diffusion break is only
responsible for the spectral breaks around 200 GV, while in Scheme III it must
reproduce the spectral breaks around both 200 GV and 400-800 GV
simultaneously. The distributions of $D_{0}$ and $z_{h}$ differ slightly
between Scheme II and Schemes I/III. These two parameters are mainly
determined by the spectrum of B, which hardens around 200 GV more strongly
than the primary species. In Schemes I/III, the B spectrum is precisely
reproduced through the diffusion break and the indices $\delta_{1}$ and
$\delta_{2}$, while in Scheme II it is only roughly reproduced without the
diffusion break, which shifts the distributions of $D_{0}$ and $z_{h}$. The
solar-modulation potential $\phi$ in this work ranges from 0.67 GV to 0.81 GV,
somewhat larger than the $\phi=0.64\,\mathrm{GV}$ derived from the NEWK
neutron monitor (http://www01.nmdb.eu/station/newk/) in the Cosmic-Ray
DataBase (CRDB, https://lpsc.in2p3.fr/crdb/) (Ghelfi et al., 2016, 2017).
Since $\phi$ is an effective value coupled with $v_{A}$ and $\eta$, and does
not affect the discussion of the high-rigidity breaks, we do not pursue this
issue further in this work.
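The solar-modulation potential enters through the force-field approximation of Gleeson & Axford (1968), cited above: the top-of-atmosphere spectrum is the interstellar spectrum shifted down in kinetic energy per nucleon by $\Phi=(Z/A)\,\phi$ and rescaled. A minimal sketch under the assumption of a toy power-law interstellar spectrum (the real analysis couples $\phi$ with $v_{A}$ and $\eta$ inside the full propagation fit):

```python
M_P = 0.938  # proton rest mass in GeV; energies below are kinetic, per nucleon

def modulate(j_lis, e_kin, phi, Z=1, A=1):
    """Force-field approximation (Gleeson & Axford 1968): evaluate the
    interstellar flux at the shifted energy and rescale by the momentum
    factor."""
    e_is = e_kin + (Z / A) * phi                      # interstellar energy
    factor = (e_kin * (e_kin + 2 * M_P)) / (e_is * (e_is + 2 * M_P))
    return j_lis(e_is) * factor

def toy_lis(e):
    return 1.0e4 * e ** -2.7                          # toy power-law LIS

for phi in (0.64, 0.67, 0.81):                        # phi values from the text (GV)
    print(f"phi = {phi} GV -> J(1 GeV/n) = {modulate(toy_lis, 1.0, phi):.0f}")
```

Larger $\phi$ suppresses the low-energy flux more strongly, which is why the fitted $\phi$ trades off against the low-rigidity propagation parameters.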
Regarding the spectral parameters, we first compare the high-rigidity break
positions in the injection spectra with and without the diffusion break (i.e.,
$R_{2}$ in Schemes I and II, respectively). The values in Scheme I are always
larger, because there the high-rigidity breaks only account for the spectral
breaks around 400-800 GV, whereas in Scheme II, without the diffusion break,
they must accommodate the breaks around both 200 GV and 400-800 GV. For the
break positions in the diffusion coefficient with and without additional
high-rigidity breaks in the injection spectra (i.e., $R_{\mathrm{br}}$ in
Schemes I and III, respectively), the value in Scheme I is smaller, since it
only accounts for the spectral breaks around 200 GV, while in Scheme III it
must also cover the breaks around 400-800 GV and is therefore larger. For the
differences between the injection spectral indices above and below the
high-rigidity breaks (i.e., $\Delta\nu\equiv|\nu_{3}-\nu_{2}|$ in Schemes I
and II), Scheme II gives larger values in most cases: there the hardening
around 200 GV and 400-800 GV is taken up by $\Delta\nu$ alone, while in
Scheme I it is shared between $\Delta\nu$ and the break in the diffusion
coefficient. The exception is the spectrum of N, for which $\Delta\nu$ in
Scheme I is larger; this stems from the sudden hardening of the N spectrum
around 800 GV, which cannot be precisely reproduced in Scheme II.
Hereafter, we focus on the fitting results of Scheme I.
## 4 Discussion and Conclusions
In order to compare the primary source injection parameters of the different
species, box plots of these parameters ($\nu_{1}$, $\nu_{2}$, $\nu_{3}$,
$R_{1}$, and $R_{2}$) are shown in Figure 4.
Figure 4: Box plots of the primary source injection parameters of the
different species in Scheme I. The band inside the box shows the median value
of the dataset, the box shows the quartiles, and the whiskers extend over the
rest of the distribution, bounded by the 5th and 95th percentiles.
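The box-plot conventions in the caption (median band, quartile box, whiskers at the 5th and 95th percentiles) can be computed with the Python standard library alone; the Gaussian sample below is a toy stand-in for an actual posterior chain:

```python
import random
import statistics

random.seed(0)
sample = [random.gauss(2.35, 0.05) for _ in range(4000)]  # toy posterior draws

# statistics.quantiles with n=20 returns the 19 cut points at
# 5%, 10%, ..., 95%; with n=4 it returns the quartiles.
pct = statistics.quantiles(sample, n=20)
q = statistics.quantiles(sample, n=4)
summary = {
    "whisker_lo": pct[0],            # 5th percentile
    "q1": q[0],                      # lower quartile (box edge)
    "median": statistics.median(sample),
    "q3": q[2],                      # upper quartile (box edge)
    "whisker_hi": pct[-1],           # 95th percentile
}
print(summary)
```

These five numbers are exactly what each box-and-whisker glyph in Figures 4 and 5 encodes for one parameter of one species.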
The large deviations and uncertainties of N compared with the other species in
subfigures (b) and (c) are related to its hybrid origin (it is expected to
contain both primary and secondary components), while the production cross
sections of its secondary components are not precisely provided in galprop
v56. Unless specifically mentioned, the following discussion excludes the
fitting results of N.
In subfigure (a), $\nu_{1}$ for He and Ne deviates strongly from the other
species; in subfigure (b), the distributions of $\nu_{2}$ indicate that He and
C, O and Ne, and Mg and Si each form a group; in subfigure (c), the values of
$\nu_{3}$ for Mg and Si deviate somewhat from the other species; in subfigure
(d), the values of $R_{1}$ for Mg and Si also show some deviations; in
subfigure (e), the values of $R_{2}$ (here
$R_{2}\equiv R_{\mathrm{br}}^{\mathrm{H}}$ for He, C, N, O, Ne, Mg, and Si)
for O and Ne overlap strongly, in contrast to those of He, C, Mg, and Si.
Taken together, the CR species Mg and Si might have similar origins, given the
similar distributions of their primary source injection parameters. Another
hint worth noting is that the relationships between species can differ between
the low- and high-rigidity regions. For example, He and Ne show $\nu_{2}$,
$\nu_{3}$, and $R_{2}$ distributions similar to those of C and O,
respectively, in the high-rigidity region, but show large $\nu_{1}$ deviations
from C and O in the low-rigidity region. This hints that the CR species He and
Ne may have different origins in the low-rigidity region.
In order to explore the properties of the spectral hardening in 100-1000 GV,
the posterior mean and standard deviation of the high-rigidity break
($R_{\mathrm{br}}^{\mathrm{H}}\equiv R_{2}^{i}$ for He, C, N, O, Ne, Mg, and
Si; $R_{\mathrm{br}}^{\mathrm{H}}\equiv R_{\mathrm{br}}$ for $\delta$) and the
differences between the spectral index above and below it
($\Delta\nu^{\mathrm{H}}\equiv\nu_{3}^{i}-\nu_{2}^{i}$ for He, C, N, O, Ne,
Mg, and Si; $\Delta\nu^{\mathrm{H}}\equiv\delta_{2}-\delta_{1}$ for $\delta$)
are summarized in Table 2. Box plots of these two kinds of parameters are
shown in subfigure (e) of Figure 4 and in Figure 5, respectively.
Table 2: Posterior mean and standard deviation of $R_{\mathrm{br}}^{\mathrm{H}}$ and $\Delta\nu^{\mathrm{H}}$.

ID | $R_{\mathrm{br}}^{\mathrm{H}}$ (GV) | $\Delta\nu^{\mathrm{H}}$
---|---|---
$\delta$ | 225 $\pm$ 38 | -0.13 $\pm$ 0.03
He | 593 $\pm$ 166 | -0.12 $\pm$ 0.07
C | 455 $\pm$ 118 | -0.14 $\pm$ 0.07
N | 822 $\pm$ 125 | -0.56 $\pm$ 0.31
O | 767 $\pm$ 125 | -0.22 $\pm$ 0.11
Ne | 797 $\pm$ 119 | -0.30 $\pm$ 0.21
Mg | 550 $\pm$ 186 | -0.08 $\pm$ 0.16
Si | 528 $\pm$ 156 | 0.02 $\pm$ 0.13
Figure 5: Box plots of $\Delta\nu^{\mathrm{H}}$. The band inside the box shows
the median value of the dataset, the box shows the quartiles, and the whiskers
extend over the rest of the distribution, bounded by the 5th and 95th
percentiles.
In subfigure (e) of Figure 4, the high-rigidity breaks show different
distributions: for He, C, Mg, and Si, $R_{\mathrm{br}}^{\mathrm{H}}$ is mostly
distributed below about 700 GV; for N, O, and Ne, it lies almost entirely
above 700 GV. These different distributions of the high-rigidity breaks cannot
be naturally reproduced by a uniform acceleration mechanism in the primary
source injection spectra of the different CR nuclei species. As pointed out in
some previous works (see, e.g., Yuan et al. (2011); Yue et al. (2019); Yuan et
al. (2020a); Niu (2021)), they can be naturally explained by the superposition
of different kinds of sources. In this scenario, each kind of source has
similar spectral indices for all the primary source injections, but the
element abundances differ between the kinds of sources. (An interesting and
detailed work on revealing the origin of galactic CRs through their
composition has been proposed in Tatischeff et al. (2021).) Unlike the
$R_{\mathrm{br}}^{\mathrm{H}}$ of the primary source injection spectra, which
overlap strongly with each other, the break in the diffusion coefficient shows
little uncertainty and deviates strongly from the others. This indicates the
necessity of a break in the diffusion coefficient, which is observational
evidence for the propagation-origin scenarios (such as in Blasi et al. (2012);
Tomassetti (2012, 2015a, 2015b); Feng et al. (2016); Génolini et al. (2017);
Jin et al. (2016); Guo & Yuan (2018a, b); Liu et al. (2018); Niu et al.
(2019); Boschini et al. (2020a, b)).
In Figure 5, setting aside the quite large uncertainty for N (caused by its
primary/secondary hybrid origin), $\Delta\nu^{\mathrm{H}}$ for He, C, O, and
Ne is smaller than 0 at a confidence level of about 95%, a sign that hardening
contributions from the primary source injection are needed at about 400-800
GV. For Mg and Si, the $\Delta\nu^{\mathrm{H}}$ distributions around 0 (also
visible in Table 2) indicate that a high-rigidity break is not needed to
reproduce the spectral hardening at about 400-800 GV for these species. This
result is also consistent with the analysis above, namely that Mg and Si
should be grouped together and their CRs might have similar origins. On the
other hand, the concentrated distribution of $\Delta\nu^{\mathrm{H}}$ for
$\delta$ shows that it is necessary to reproduce the data set; its value of
$\sim-0.1$ has also been confirmed by previous works based on different
configurations (see, e.g., Génolini et al. (2017, 2019); Niu & Xue (2020)).
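A crude way to see why He, C, O, and Ne require a high-rigidity injection break while Mg and Si do not is to treat each posterior in Table 2 as a one-dimensional Gaussian and measure how far $\Delta\nu^{\mathrm{H}}$ sits from zero. This is only an illustration, not the full posterior analysis:

```python
import math

# Posterior means and standard deviations of Delta nu^H from Table 2.
dnu = {
    "delta": (-0.13, 0.03), "He": (-0.12, 0.07), "C": (-0.14, 0.07),
    "N": (-0.56, 0.31), "O": (-0.22, 0.11), "Ne": (-0.30, 0.21),
    "Mg": (-0.08, 0.16), "Si": (0.02, 0.13),
}

# Gaussian z-score of each break amplitude relative to zero, with the
# corresponding two-sided tail probability.
for name, (mu, sigma) in dnu.items():
    z = abs(mu) / sigma
    p_two_sided = math.erfc(z / math.sqrt(2))
    print(f"{name:>5}: |z| = {z:.2f}, p(consistent with 0) ~ {p_two_sided:.2f}")
```

Under this simplification, Mg and Si sit well within one sigma of zero while $\delta$ is more than four sigma away, matching the conclusions drawn from Figure 5.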
In summary, in order to precisely reproduce the spectral hardening of the CR
nuclei species at a few hundred GV, not only is an extra break at about 200 GV
in the diffusion coefficient needed (see, e.g., Génolini et al. (2017, 2019);
Niu et al. (2019); Niu & Xue (2020)), but additional independent high-rigidity
breaks at about 400-800 GV in the primary source injection spectra of the
different CR species are also required (see, e.g., Niu (2021); Korsmeier &
Cuoco (2021b)). This combination yields a statistically significant
improvement over schemes that use a break in the diffusion coefficient alone,
or breaks in the primary source injection alone, to reproduce the AMS-02 CR
nuclei spectra. The break in the diffusion coefficient could come from the
propagation process and can be reproduced by spatially dependent propagation
(see, e.g., Tomassetti (2012); Guo et al. (2016); Feng et al. (2016)). The
different propagation regions of the galactic CRs correspond to the structures
of our galaxy (i.e., the galactic center, the bulge, the disk, and the halo),
which have different ISM densities and thus different propagation
environments. The different breaks in the primary source injection spectra
could come from the superposition of different kinds of sources. On the one
hand, these could correspond to the galactic averaged CR sources plus a local
CR source (such as the Geminga SNR (Zhao et al., 2022)). On the other hand,
they could correspond to different kinds of CR factories, such as different
populations of supernova remnants (Aharonian et al., 2004), the galactic
center (Scherer et al., 2022), novae (H.E.S.S. Collaboration, 2022), etc. In
any case, as long as they have different elemental abundances (which is
natural), different breaks and spectral indices will be produced. Of course, a
combination of the above two situations is also possible (see, e.g., Zhang et
al. (2022)). Consequently, the CR nuclei spectral hardening at a few hundred
GV has hybrid origins. Moreover, in the low-rigidity region, $\nu_{1}$ for He
and Ne deviates strongly from the other nuclei species, which indicates
different CR origins and a violation of CR universality across the whole
rigidity range from sub-GV to TV. The precise CR spectra reveal a more
complicated origin of CR nuclei than previously thought, and the picture will
become clearer with more precise data in the future.
This research was supported by the National Natural Science Foundation of
China (NSFC) (No. 12005124 and No. 12147215) and the Applied Basic Research
Programs of Natural Science Foundation of Shanxi Province (No. 201901D111043).
The data of the posterior samples of the parameter for three schemes is
available on Zenodo under an open-source Creative Commons Attribution license:
https://doi.org/10.5281/zenodo.6435163 (catalog doi:10.5281/zenodo.6435163).
## References
* Adriani et al. (2011) Adriani, O., Barbarino, G. C., Bazilevskaya, G. A., et al. 2011, Science, 332, 69, doi: 10.1126/science.1199172
* Aguilar et al. (2015) Aguilar, M., Aisa, D., Alpat, B., et al. 2015, Phys. Rev. Lett., 114, 171103, doi: 10.1103/PhysRevLett.114.171103
* Aguilar et al. (2013) Aguilar, M., Alberti, G., Alpat, B., et al. 2013, Phys. Rev. Lett., 110, 141102, doi: 10.1103/PhysRevLett.110.141102
* Aguilar et al. (2017) Aguilar, M., Ali Cavasonza, L., Alpat, B., et al. 2017, Phys. Rev. Lett., 119, 251101, doi: 10.1103/PhysRevLett.119.251101
* Aguilar et al. (2018a) —. 2018a, Phys. Rev. Lett., 121, 051103, doi: 10.1103/PhysRevLett.121.051103
* Aguilar et al. (2018b) Aguilar, M., Ali Cavasonza, L., Ambrosi, G., et al. 2018b, Phys. Rev. Lett., 120, 021101, doi: 10.1103/PhysRevLett.120.021101
* Aguilar et al. (2020) Aguilar, M., Ali Cavasonza, L., Ambrosi, G., et al. 2020, Phys. Rev. Lett., 124, 211102, doi: 10.1103/PhysRevLett.124.211102
* Aguilar et al. (2021a) Aguilar, M., Cavasonza, L. A., Allen, M. S., & et al. 2021a, Phys. Rev. Lett., 126, 041104, doi: 10.1103/PhysRevLett.126.041104
* Aguilar et al. (2021b) —. 2021b, Phys. Rev. Lett., 126, 081102, doi: 10.1103/PhysRevLett.126.081102
* Aguilar et al. (2021c) Aguilar, M., Cavasonza, L. A., Alpat, B., & et al. 2021c, Phys. Rev. Lett., 127, 021101, doi: 10.1103/PhysRevLett.127.021101
* Aguilar et al. (2021d) Aguilar, M., Ali Cavasonza, L., Ambrosi, G., et al. 2021d, Phys. Rep., 894, 1, doi: 10.1016/j.physrep.2020.09.003
* Aharonian et al. (2004) Aharonian, F. A., Akhperjanian, A. G., Aye, K. M., et al. 2004, Nature, 432, 75, doi: 10.1038/nature02960
* Ahn et al. (2010) Ahn, H. S., Allison, P., Bagliesi, M. G., et al. 2010, ApJ, 714, L89, doi: 10.1088/2041-8205/714/1/L89
* Blasi et al. (2012) Blasi, P., Amato, E., & Serpico, P. D. 2012, Phys. Rev. Lett., 109, 061101, doi: 10.1103/PhysRevLett.109.061101
* Boschini et al. (2020a) Boschini, M. J., Della Torre, S., Gervasi, M., et al. 2020a, ApJ, 889, 167, doi: 10.3847/1538-4357/ab64f1
* Boschini et al. (2020b) —. 2020b, ApJS, 250, 27, doi: 10.3847/1538-4365/aba901
* De La Torre Luque et al. (2021a) De La Torre Luque, P., Mazziotta, M. N., Loparco, F., Gargano, F., & Serini, D. 2021a, J. Cosmology Astropart. Phys, 2021, 099, doi: 10.1088/1475-7516/2021/03/099
* De La Torre Luque et al. (2021b) —. 2021b, J. Cosmology Astropart. Phys, 2021, 010, doi: 10.1088/1475-7516/2021/07/010
* Derome et al. (2019) Derome, L., Maurin, D., Salati, P., et al. 2019, A&A, 627, A158, doi: 10.1051/0004-6361/201935717
* Feng et al. (2016) Feng, J., Tomassetti, N., & Oliva, A. 2016, Phys. Rev. D, 94, 123007, doi: 10.1103/PhysRevD.94.123007
* Foreman-Mackey (2016) Foreman-Mackey, D. 2016, The Journal of Open Source Software, 1, 24, doi: 10.21105/joss.00024
* Foreman-Mackey et al. (2013) Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306, doi: 10.1086/670067
* Génolini et al. (2018) Génolini, Y., Maurin, D., Moskalenko, I. V., & Unger, M. 2018, Phys. Rev. C, 98, 034611, doi: 10.1103/PhysRevC.98.034611
* Génolini et al. (2017) Génolini, Y., Serpico, P. D., Boudaud, M., et al. 2017, Phys. Rev. Lett., 119, 241101, doi: 10.1103/PhysRevLett.119.241101
* Génolini et al. (2019) Génolini, Y., Boudaud, M., Batista, P. I., et al. 2019, Phys. Rev. D, 99, 123028, doi: 10.1103/PhysRevD.99.123028
* Ghelfi et al. (2016) Ghelfi, A., Barao, F., Derome, L., & Maurin, D. 2016, A&A, 591, A94, doi: 10.1051/0004-6361/201527852
* Ghelfi et al. (2017) Ghelfi, A., Maurin, D., Cheminet, A., et al. 2017, Advances in Space Research, 60, 833, doi: 10.1016/j.asr.2016.06.027
* Gleeson & Axford (1968) Gleeson, L. J., & Axford, W. I. 1968, ApJ, 154, 1011, doi: 10.1086/149822
* Goodman & Weare (2010) Goodman, J., & Weare, J. 2010, Communications in Applied Mathematics and Computational Science, 5, 65, doi: 10.2140/camcos.2010.5.65
* Guo et al. (2016) Guo, Y.-Q., Tian, Z., & Jin, C. 2016, ApJ, 819, 54, doi: 10.3847/0004-637X/819/1/54
* Guo & Yuan (2018a) Guo, Y.-Q., & Yuan, Q. 2018a, Chinese Physics C, 42, 075103, doi: 10.1088/1674-1137/42/7/075103
* Guo & Yuan (2018b) —. 2018b, Phys. Rev. D, 97, 063008, doi: 10.1103/PhysRevD.97.063008
* Heisig et al. (2020) Heisig, J., Korsmeier, M., & Winkler, M. W. 2020, Physical Review Research, 2, 043017, doi: 10.1103/PhysRevResearch.2.043017
* H.E.S.S. Collaboration (2022) H.E.S.S. Collaboration. 2022, Science, 10, abn0567, doi: 10.1126/science.abn0567
* Jin et al. (2016) Jin, C., Guo, Y.-Q., & Hu, H.-B. 2016, Chinese Physics C, 40, 015101, doi: 10.1088/1674-1137/40/1/015101
* Korsmeier & Cuoco (2021a) Korsmeier, M., & Cuoco, A. 2021a, Phys. Rev. D, 103, 103016, doi: 10.1103/PhysRevD.103.103016
* Korsmeier & Cuoco (2021b) —. 2021b, arXiv e-prints, arXiv:2112.08381. https://arxiv.org/abs/2112.08381
* Liu et al. (2018) Liu, W., Yao, Y.-h., & Guo, Y.-Q. 2018, ApJ, 869, 176, doi: 10.3847/1538-4357/aaef39
* Moskalenko et al. (2003) Moskalenko, I. V., Strong, A. W., Mashnik, S. G., & Ormes, J. F. 2003, ApJ, 586, 1050, doi: 10.1086/367697
* Moskalenko et al. (2002) Moskalenko, I. V., Strong, A. W., Ormes, J. F., & Potgieter, M. S. 2002, ApJ, 565, 280, doi: 10.1086/324402
* Niu (2021) Niu, J.-S. 2021, Chinese Physics C, 45, 041004, doi: 10.1088/1674-1137/abe03d
* Niu & Li (2018) Niu, J.-S., & Li, T. 2018, Phys. Rev. D, 97, 023015, doi: 10.1103/PhysRevD.97.023015
* Niu et al. (2019) Niu, J.-S., Li, T., & Xue, H.-F. 2019, ApJ, 873, 77, doi: 10.3847/1538-4357/ab0420
* Niu & Xue (2020) Niu, J.-S., & Xue, H.-F. 2020, J. Cosmology Astropart. Phys, 2020, 036, doi: 10.1088/1475-7516/2020/01/036
* Panov et al. (2006) Panov, A. D., Adams, J. H., Ahn, H. S., et al. 2006, ArXiv Astrophysics e-prints
* Ptuskin et al. (2006) Ptuskin, V. S., Moskalenko, I. V., Jones, F. C., Strong, A. W., & Zirakashvili, V. N. 2006, ApJ, 642, 902, doi: 10.1086/501117
* Scherer et al. (2022) Scherer, A., Cuadra, J., & Bauer, F. E. 2022, A&A, 659, A105, doi: 10.1051/0004-6361/202142401
* Strong & Moskalenko (1998) Strong, A. W., & Moskalenko, I. V. 1998, ApJ, 509, 212, doi: 10.1086/306470
* Strong & Moskalenko (2001) —. 2001, Advances in Space Research, 27, 717, doi: 10.1016/S0273-1177(01)00112-0
* Tatischeff et al. (2021) Tatischeff, V., Raymond, J. C., Duprat, J., Gabici, S., & Recchia, S. 2021, arXiv e-prints, arXiv:2106.15581. https://arxiv.org/abs/2106.15581
* Tomassetti (2012) Tomassetti, N. 2012, ApJ, 752, L13, doi: 10.1088/2041-8205/752/1/L13
* Tomassetti (2015a) —. 2015a, ApJ, 815, L1, doi: 10.1088/2041-8205/815/1/L1
* Tomassetti (2015b) —. 2015b, Phys. Rev. D, 92, 081301(R), doi: 10.1103/PhysRevD.92.081301
* Waskom (2021) Waskom, M. L. 2021, Journal of Open Source Software, 6, 3021, doi: 10.21105/joss.03021
* Weinrich et al. (2020) Weinrich, N., Génolini, Y., Boudaud, M., Derome, L., & Maurin, D. 2020, A&A, 639, A131, doi: 10.1051/0004-6361/202037875
* Yuan (2019) Yuan, Q. 2019, Science China Physics, Mechanics, and Astronomy, 62, 49511, doi: 10.1007/s11433-018-9300-0
* Yuan et al. (2020a) Yuan, Q., Qiao, B.-Q., Guo, Y.-Q., Fan, Y.-Z., & Bi, X.-J. 2020a, Frontiers of Physics, 16, 24501, doi: 10.1007/s11467-020-0990-4
* Yuan et al. (2011) Yuan, Q., Zhang, B., & Bi, X.-J. 2011, Phys. Rev. D, 84, 043002, doi: 10.1103/PhysRevD.84.043002
* Yuan et al. (2020b) Yuan, Q., Zhu, C.-R., Bi, X.-J., & Wei, D.-M. 2020b, J. Cosmology Astropart. Phys, 2020, 027, doi: 10.1088/1475-7516/2020/11/027
* Yue et al. (2019) Yue, C., Ma, P.-X., Yuan, Q., et al. 2019, Frontiers of Physics, 15, 24601, doi: 10.1007/s11467-019-0946-8
* Zhang et al. (2022) Zhang, Y., Liu, S., & Zeng, H. 2022, MNRAS, 511, 6218, doi: 10.1093/mnras/stac470
* Zhao et al. (2022) Zhao, B., Liu, W., Yuan, Q., et al. 2022, ApJ, 926, 41, doi: 10.3847/1538-4357/ac4416
The best-fit results and the corresponding residuals of the spectra are given
in Appendix Figure 1 (He, C, N, and O), 2 (Ne, Mg, Si), and 3 (B). Note that
in the lower panel of subfigures in Fig. 1, 2, and 3, the
$\sigma_{\mathrm{eff}}$ is defined as
$\sigma_{\mathrm{eff}}=\frac{f_{\mathrm{obs}}-f_{\mathrm{cal}}}{\sqrt{\sigma_{\mathrm{stat}}^{2}+\sigma_{\mathrm{syst}}^{2}}},$
(1)
where $f_{\mathrm{obs}}$ and $f_{\mathrm{cal}}$ are the data points from the
observation and the model calculation, respectively, and
$\sigma_{\mathrm{stat}}$ and $\sigma_{\mathrm{syst}}$ are the statistical and
systematic standard deviations of the observed points. This quantity clearly
shows the deviation between the best-fit result and the observed value at each
point, in units of its uncertainty.
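Equation (1) is straightforward to evaluate; a minimal sketch with hypothetical flux values and uncertainties:

```python
import math

def sigma_eff(f_obs, f_cal, sig_stat, sig_syst):
    """Effective deviation of Eq. (1): residual in units of the
    quadrature sum of statistical and systematic uncertainties."""
    return (f_obs - f_cal) / math.hypot(sig_stat, sig_syst)

# Hypothetical data point: observed flux, model value, two uncertainties.
print(sigma_eff(f_obs=105.0, f_cal=100.0, sig_stat=3.0, sig_syst=4.0))
```

This is the quantity plotted in the lower panels of Figures 1-3 for each rigidity bin.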
The best-fit values, statistical mean values and standard deviations, and the
90% confidence intervals for the parameters in three schemes are shown in
Appendix Table 1.
The fitting 1D probabilities and 2D credible regions (covariances) of the
posterior PDFs for the parameters of the different schemes and groups are
collected in Appendices A, B, and C.
## Appendix A Covariances of Parameters in Scheme I
Figure 6: Fitting 1D probability and 2D credible regions (covariances) of
posterior PDFs on the propagation parameters in Scheme I. The contours present
the $\sigma$, $2\sigma$ and $3\sigma$ CL. Figure 7: Fitting 1D probability and
2D credible regions (covariances) of posterior PDFs on the He, C, N and O
injection parameters in Scheme I. The contours present the $\sigma$, $2\sigma$
and $3\sigma$ CL. Figure 8: Fitting 1D probability and 2D credible regions
(covariances) of posterior PDFs on the Ne, Mg, and Si injection parameters in
Scheme I. The contours present the $\sigma$, $2\sigma$ and $3\sigma$ CL.
Figure 9: Fitting 1D probability and 2D credible regions (covariances) of
posterior PDFs on the primary source injection normalization parameters in
Scheme I. The contours present the $\sigma$, $2\sigma$ and $3\sigma$ CL.
## Appendix B Covariances of Parameters in Scheme II
Figure 10: Same as Fig. 6 but for Scheme II. Figure 11: Same as Fig. 7 but for
Scheme II. Figure 12: Same as Fig. 8 but for Scheme II. Figure 13: Same as
Fig. 9 but for Scheme II.
## Appendix C Covariances of Parameters in Scheme III
Figure 14: Same as Fig. 6 but for Scheme III. Figure 15: Same as Fig. 7 but
for Scheme III. Figure 16: Same as Fig. 8 but for Scheme III. Figure 17: Same
as Fig. 9 but for Scheme III.
# Operators arising as Second Variation of optimal control problems and their
spectral asymptotics
Stefano Baranzini SISSA, Scuola Internazionale Superiore di Studi Avanzati,
Via Bonomea, 265 - 34136 Trieste, Italy [email protected]
###### Abstract
We compute the asymptotics of the eigenvalues of a particular class of compact
operators deeply linked with the second variation of optimal control problems.
We characterize this family in terms of a set of finite-dimensional data, and
we apply these results to a particular class of singular extremals to obtain a
precise description of the spectrum of the second variation.
###### keywords:
second variation, optimal control, Weyl law, compact operator
## Introduction
The main focus of this paper is the study of a particular class of compact
operators $K$ on the Hilbert space $L^{2}([0,1],\mathbb{R}^{k})$ with the
standard Hilbert structure. They are characterized by the following
properties:
* •
there exists a finite dimensional subspace of $L^{2}([0,1],\mathbb{R}^{k})$,
which we call $\mathcal{V}$, on which $K$ becomes a self-adjoint operator,
i.e. :
$\langle u,Kv\rangle=\langle Ku,v\rangle\quad\forall\,u,v\in\mathcal{V},$ (1)
* •
$K$ is a Hilbert–Schmidt operator with an integral kernel of a particular
form, namely:
$K(v)(t)=\int_{0}^{t}V(t,\tau)v(\tau)d\tau,\quad v\in
L^{2}([0,1],\mathbb{R}^{k}).$ (2)
where $V(t,\tau)$ is a matrix whose entries are $L^{2}$ functions. We call the
class of operators satisfying this last condition _Volterra-type_ operators.
The main results of this paper are a fairly general study of the asymptotic
distribution of the eigenvalues of $K$ when restricted to any subspace
$\mathcal{V}$ which satisfies eq. 1 (Theorem 1) and a characterization result
for operators satisfying the two properties stated above (Theorem 2).
The first result is proved in Section 2. We first restrict ourselves to
operators $\tilde{K}$ of the form:
$\tilde{K}(v)(t)=-\int_{0}^{t}\sigma(Z_{\tau}v_{\tau},Z_{t}\cdot)d\tau.$ (3)
Here $Z_{t}$ is a $2n\times k$ matrix, analytic in $t$, and $\sigma$ is the
standard symplectic form on $\mathbb{R}^{2n}$ (see 1). A similar asymptotic
formula was proved in (determinant, Theorem 1): it was shown that if we
consider the decreasing (resp. increasing) arrangement
$\\{\lambda_{n}(\tilde{K})\\}_{n\in\mathbb{Z}}$ of the positive (resp.
negative) eigenvalues of $\tilde{K}$, then either:
$\lambda_{n}(\tilde{K})=\frac{\xi}{\pi n}+O(n^{-5/3})\quad\text{ or
}\quad\lambda_{n}(\tilde{K})=O(n^{-2}),$ (4)
for $n\in\mathbb{Z}$ sufficiently large and for some $\xi>0$. The number $\xi$
is called the _capacity_ and depends only on the matrix $Z_{t}$ in the
definition of $\tilde{K}$.
If $\xi=0$, we push the expansion in eq. 4 further. We single out the term
giving the principal contribution to the asymptotics by representing the
quadratic form associated to $\tilde{K}$ as:
$Q(v)=\langle
v,\tilde{K}v\rangle=-\int_{0}^{1}\int_{0}^{t}\sigma(Z_{\tau}v_{\tau},Z_{t}v_{t})d\tau
dt=\sum_{i=1}^{k-1}Q_{i}(v)+R_{k}(v).$
The result mentioned above corresponds to the case $Q_{1}\neq 0$; in Theorem 1
we give the asymptotics for the general case.
From the point of view of geometric control theory Theorem 1 can be seen as an
asymptotic analysis of the spectrum of the second variation for particular
classes of singular extremals and a quantitative version of some necessary
optimality conditions.
Precise definitions will be given in Section 4; standard references on the
second variation are (bookcontrol, Chapter 20) and ASZ. For now it is enough
to know that the second variation $Q$ of an optimal control problem on a
manifold $M$ is a linear operator on $L^{2}([0,1],\mathbb{R}^{k})$ of the
following form:
$\langle Qv,u\rangle=-\int_{0}^{1}\langle
H_{t}v_{t},u_{t}\rangle-\int_{0}^{1}\int_{0}^{t}\sigma(Z_{\tau}v_{\tau},Z_{t}u_{t})d\tau
dt,$ (5)
where $H_{t}$ is a symmetric $k\times k$ matrix, $\sigma$ is the standard
symplectic form on $T_{\eta}T^{*}M$ and $Z_{t}:\mathbb{R}^{k}\to
T_{\eta}(T^{*}M)$ is a linear map with values in the tangent space to a fixed
point $\eta\in T^{*}M$.
For totally singular extremals, the matrix $H_{t}$ appearing in eq. 5 is
identically zero, and the second variation reduces to an operator of the same
form as in eq. 3.
In Section 3 we prove Theorem 2. We first show that any $K$ satisfying eqs. 1
and 2 is completely determined by its (_finite rank_) skew-symmetric part
$\mathcal{A}$ and can always be represented as in eq. 3. Then we relate the
_capacity_ of $K$ to the spectrum of $\mathcal{A}$.
In Section 4 we recall some basic notions from control theory, reformulate
Theorem 2 in a more control-theoretic fashion, and use it to characterize the
operators coming from the _second variation_ of an optimal control problem.
Moreover, we give a geometric interpretation of the capacity $\xi$ appearing
in eq. 4 in terms of the Hessian of the maximized Hamiltonian coming from the
Pontryagin Maximum Principle.
## 1 Overview of the main results
We begin this section by recalling some general facts about the spectrum of
compact operators; then we fix some notation and give a precise statement of
the main results. Given a compact self-adjoint operator $K$ on a Hilbert space
$\mathcal{H}$, we can define a quadratic form by setting $Q(v)=\langle
v,K(v)\rangle$. The eigenvalues of $Q$ are by definition those of $K$, and we
will denote by $\Sigma_{\pm}(Q)$ the positive and negative parts of the
spectrum of $Q$.
By the standard spectral theory of compact operators (see functionalanalysis),
the non-zero eigenvalues of $K$ are either finitely many or accumulate at
zero, and their multiplicities are finite. Consider the positive part of the
spectrum $\Sigma_{+}(Q)$ and $\lambda\in\Sigma_{+}(Q)$. Denote by
$m_{\lambda}$ the multiplicity of the eigenvalue $\lambda$. We can introduce a
monotone non-increasing sequence $\\{\lambda_{n}\\}_{n\in\mathbb{N}}$ indexing
the eigenvalues of $K$, requiring that the cardinality of the set
$\\{n:\lambda_{n}=\lambda\\}$ equal $m_{\lambda}$ for every
$\lambda\in\Sigma_{+}(Q)$. This will be called the monotone arrangement of
$\Sigma_{+}(Q)$. We can perform the same construction, indexing by $-n$,
$n\in\mathbb{N}$, the negative part of the spectrum $\Sigma_{-}(Q)$; this time
we require that the sequence $\\{\lambda_{-n}\\}_{n\in\mathbb{N}}$ be
non-decreasing. Provided that $\Sigma_{\pm}(Q)$ are both infinite, we obtain a
sequence $\\{\lambda_{n}\\}_{n\in\mathbb{Z}}$.
###### Definition 1.
Let $Q$ be a quadratic form on a Hilbert space $\mathcal{H}$ and let
$j\in\mathbb{N}$:
* •
if $j$ is odd, $Q$ has $j-$capacity $\xi>0$ with remainder of order $\nu>0$ if
$\Sigma_{+}(Q)$ and $\Sigma_{-}(Q)$ are both infinite and:
$\lambda_{n}=\frac{\xi}{(\pi n)^{j}}+O(n^{-\nu-j})\quad\text{ as }\quad
n\to\pm\infty,$
* •
if $j$ is even, $Q$ has $j-$capacity $(\xi_{+},\xi_{-})$ with remainder of
order $\nu>0$ if both $\Sigma_{+}(Q)$ and $\Sigma_{-}(Q)$ are infinite and:
$\begin{split}\lambda_{n}=\frac{\xi_{+}}{(\pi n)^{j}}+O(n^{-\nu-j})\quad\text{
as }\quad n\to+\infty,\\\ \lambda_{n}=\frac{\xi_{-}}{(\pi
n)^{j}}+O(n^{-\nu-j})\quad\text{ as }\quad n\to-\infty,\end{split}$
where $\xi_{\pm}\geq 0$; or if at least one of $\Sigma_{+}(Q)$ and
$\Sigma_{-}(Q)$ is infinite and the corresponding monotone arrangement
satisfies the corresponding asymptotic relation;
* •
if the spectrum is finite or $\lambda_{n}=O(n^{-\nu})$ as $n\to\pm\infty$ for
any $\nu>0$, we say that $Q$ has $\infty-$capacity.
The behaviour of the sequence $\\{\lambda_{n}\\}_{n\in\mathbb{Z}}$ is closely
related to the following counting functions:
$C^{+}_{j}(n)=\\#\\{l\in\mathbb{N}:0<\frac{1}{\sqrt[j]{\lambda_{l}}}<n\\}\quad
C^{-}_{j}(n)=\\#\\{l\in\mathbb{N}:-n>\frac{-1}{\sqrt[j]{|\lambda_{-l}|}}>0\\}$
The requirement of Definition 1 for the $j-$capacity translates into the
following asymptotics for the functions $C^{\pm}_{j}(n)$:
$C^{\pm}_{j}(n)=\frac{\xi_{\pm}}{\pi}n+O(n^{1-\nu})\quad\text{ as }\quad
n\to+\infty.$
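As a concrete illustration of these counting functions, consider the classical symmetric kernel $\min(t,\tau)$ on $[0,1]$ (an example of our own, not one treated in this paper): it is the Green's function of $-u''=f$ with $u(0)=u'(1)=0$, its exact eigenvalues are $\lambda_{l}=((l-\tfrac{1}{2})\pi)^{-2}=(\pi l)^{-2}+O(l^{-3})$, so the associated form has $2$-capacity $\xi_{+}=1$ and $C^{+}_{2}(n)$ grows like $n/\pi$:

```python
import math

# Counting function C_2(n) for the eigenvalues lambda_l = ((l - 1/2) pi)^(-2):
# count the indices l with lambda_l^(-1/2) = (l - 1/2) pi < n.
def c2(n):
    count = 0
    l = 1
    while (l - 0.5) * math.pi < n:
        count += 1
        l += 1
    return count

for n in (10, 100, 1000):
    print(n, c2(n), n / math.pi)  # C_2(n) tracks (xi_+/pi) n with xi_+ = 1
```

The printed pairs confirm numerically that $C^{+}_{2}(n)=n/\pi+O(1)$, i.e. a remainder of order $\nu=1$ in the sense of the asymptotic above.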
We illustrate here some properties of the $j-$capacity. The proofs are given
in Section 2, Proposition 3. Without loss of generality we state the
properties for the positive part of the spectrum; analogous results hold for
the negative one.
* •
(Homogeneity) if $Q_{1}$ and $Q_{2}$ are quadratic forms on two Hilbert spaces
$\mathcal{H}_{1}$ and $\mathcal{H}_{2}$ with $j-$capacities $\xi_{1}$ and
$\xi_{2}$ respectively, with the same remainder $\nu$, then $aQ_{1}$ has
$j-$capacity $a\xi_{1}$ and the direct sum $Q_{1}\oplus Q_{2}$ on
$\mathcal{H}_{1}\oplus\mathcal{H}_{2}$ has $j-$capacity
$(\sqrt[j]{\xi_{1}}+\sqrt[j]{\xi_{2}})^{j}$, both with remainder $\nu$.
* •
(Independence of restriction) If $\mathcal{V}\subseteq\mathcal{H}$ is a
subspace of finite codimension then $Q$ has $j-$capacity $\xi$ with remainder
$\nu$ if and only if its restriction to $\mathcal{V}$ has $j-$capacity $\xi$
with remainder $\nu$.
* •
(Additivity) if $Q_{1}$ has $j-$capacity $\xi$ with remainder $\nu$ and
$Q_{2}$ has $j-$capacity $0$ with remainder of the same order $\nu$, then
their sum $Q_{1}+Q_{2}$ has the same capacity with remainder
$\nu^{\prime}=\frac{(j+\nu)(j+1)}{j+\nu+1}$.
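The direct-sum part of the homogeneity property lends itself to a quick numerical sanity check on model spectra. The sketch below uses assumed values $j=2$, $\xi_{1}=1$, $\xi_{2}=4$ (so that $(\sqrt{\xi_{1}}+\sqrt{\xi_{2}})^{2}=9$), merges the two model sequences and compares the $n$-th largest element with $9/(\pi n)^{2}$:

```python
import numpy as np

# Merging two model spectra xi_i/(pi l)^j gives a sequence whose j-capacity
# is (xi1^{1/j} + xi2^{1/j})^j, here (1 + 2)^2 = 9.
j, xi1, xi2 = 2, 1.0, 4.0
l = np.arange(1, 50001)
merged = np.sort(np.concatenate([xi1 / (np.pi * l) ** j,
                                 xi2 / (np.pi * l) ** j]))[::-1]
xi_sum = (xi1 ** (1 / j) + xi2 ** (1 / j)) ** j
n = np.arange(1, 5001)
ratio = merged[n - 1] * (np.pi * n) ** j / xi_sum
print(ratio[-1])   # close to 1 for large n
```

The ratio tends to $1$ at rate $O(1/n)$, consistent with a remainder of order one.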
In the remaining part of this section we will deal with quadratic forms $Q$
coming from operators of the form given in eq. 3. Suppose that $Z_{t}$ is a
$2n\times k$ matrix which depends piecewise analytically on the parameter
$t\in[0,1]$ and define the following $2n\times 2n$ skew-symmetric matrix:
$J=\begin{pmatrix}0&-Id_{n}\\\ Id_{n}&0\end{pmatrix}.$ (6)
As $Q$ consider the following quadratic form on $L^{2}([0,1],\mathbb{R}^{k})$:
$Q(v)=\langle v,K(v)\rangle=\int_{0}^{1}\int_{0}^{t}\langle
Z_{t}v(t),JZ_{\tau}v(\tau)\rangle d\tau dt.$ (7)
###### Remark 1.
The operator $K$ and the bilinear form $Q(u,v)=\langle u,K(v)\rangle$ are not
symmetric. However the operator:
$K(v)=\int_{0}^{t}Z_{t}^{*}JZ_{\tau}v(\tau)d\tau,$
satisfies eq. 1 and becomes symmetric on a finite codimension subspace
$\mathcal{V}$. It is enough to require that the integral
$\int_{0}^{1}Z_{t}v(t)dt$ lies in a Lagrangian subspace of
$(\mathbb{R}^{2n},\sigma)$ for any $v\in\mathcal{V}$; for instance, one can
take the fibre (or _vertical_ subspace):
$\Pi=\\{(p,0):p\in\mathbb{R}^{n}\\}\subset\mathbb{R}^{2n}.$ (8)
Here $\sigma$ denotes the standard symplectic form on $\mathbb{R}^{2n}$
defined as $\sigma(x,x^{\prime})=\langle Jx,x^{\prime}\rangle.$
Let $f$ be a smooth function on $[0,1]$ and let $k\in\mathbb{N}$; denote by
$f^{(k)}=\frac{d^{k}f}{dt^{k}}$ the $k-$th derivative with respect to $t$. For
$j\geq 1$ define the following matrix-valued functions:
$A_{j}(t)=\begin{cases}\big{(}Z_{t}^{(k-1)}\big{)}^{*}JZ^{(k-1)}_{t}\quad&\text{if
}j=2k-1\\\ \big{(}Z_{t}^{(k-1)}\big{)}^{*}JZ^{(k)}_{t}\quad&\text{if }j=2k\\\
\end{cases}$ (9)
We use $\rho_{t}$ to denote any eigenvalue of the matrix $A_{j}(t)$. If
$j=2k$, define:
$\mu_{t,2k}^{+}:=\sum_{\rho_{t}:\rho_{t}>0}\sqrt[2k]{\rho_{t}}\qquad\mu_{t,2k}^{-}:=\sum_{\rho_{t}:\rho_{t}<0}\sqrt[2k]{|\rho_{t}|}.$
For odd indices, $A_{2k-1}$ is skew-symmetric and thus the spectrum is purely
imaginary. So we define the function:
$\mu_{t,2k-1}=\sum_{\rho_{t}:-i\rho_{t}>0}\sqrt[2k-1]{-i\rho_{t}}.$
We are now ready to state the first main result of the section.
###### Theorem 1.
Let $Q$ be the quadratic form in eq. 7. $Q$ has either $\infty-$capacity or
$j-$capacity with remainder of order $\nu=1/2$. More precisely, let $j\geq 1$
be the lowest integer such that $A_{j}(t)$ is not identically zero, then
* •
if $j=2k-1$, the $(2k-1)-$capacity $\xi$ is given by:
$\xi=\Bigg{(}\int_{0}^{1}\mu_{t,2k-1}dt\Bigg{)}^{2k-1},$
and thus for $n\in\mathbb{Z}$ sufficiently large:
$\lambda_{n}=\frac{\Big{(}\int_{0}^{1}\mu_{t,2k-1}dt\Big{)}^{2k-1}}{(\pi
n)^{2k-1}}+O(n^{-2k+1/2}).$
* •
if $j=2k$, the $2k-$capacity $(\xi_{+},\xi_{-})$ is given by:
$\xi_{\pm}=\Bigg{(}\int_{0}^{1}\mu^{\pm}_{t,2k}dt\Bigg{)}^{2k},$
and thus for $n\in\mathbb{Z}$ sufficiently large:
$\lambda_{n}=\frac{\Big{(}\int_{0}^{1}\mu^{\pm}_{t,2k}dt\Big{)}^{2k}}{(\pi
n)^{2k}}+O(n^{-2k-1/2}).$
* •
if $A_{j}(t)\equiv 0$ for any $j$ then $Q$ has $\infty-$capacity.
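Theorem 1 can be sanity-checked numerically in the simplest instance (an assumed toy case, not taken from the text): for $Z_{t}=Id_{2}$ the matrix $A_{1}=J$ is nonzero and skew-symmetric with eigenvalues $\pm i$, hence $\mu_{t,1}=1$ and the $1-$capacity is $\xi=1$, i.e. $\lambda_{m}\sim 1/(\pi m)$. The sketch discretizes the symmetrized kernel $\frac{1}{2}\operatorname{sign}(t-\tau)J$ of the corresponding operator $K$:

```python
import numpy as np

N = 500                                     # grid points on [0, 1]
h = 1.0 / N
J = np.array([[0.0, -1.0], [1.0, 0.0]])
# symmetric part of K(v)(t) = J * int_0^t v(tau) dtau has kernel (1/2)sign(t-tau)J
sign = np.sign(np.subtract.outer(np.arange(N), np.arange(N)))
S = np.kron(sign, J) * (h / 2)              # antisym (x) antisym = symmetric
eig = np.sort(np.linalg.eigvalsh(S))[::-1]  # decreasing order
for m in (1, 5, 21):
    print(m, np.pi * m * eig[m - 1])        # close to 1 for odd m
```

For odd $m$ the products $\pi m\lambda_{m}$ come out close to $1$, matching the predicted $1-$capacity $\xi=1$.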
###### Remark 2.
It is worth remarking that in Theorem 1 of determinant the order of the
remainder for the $1-$capacity was slightly better: $2/3$ instead of $1/2$.
The proof of this result is given in Section 2. The next theorem gives a
characterization of the operators satisfying eqs. 1 and 2 and a geometric
interpretation of the $1-$capacity. Before going to the statement let us
introduce the following notation. Let $\mathcal{A}$ denote the skew-symmetric
part of $K$:
$\mathcal{A}=\frac{1}{2}\Big{(}K-K^{*}\Big{)}.$
Let $\Sigma$ be the spectrum of $\mathcal{A}$ and
$\operatorname{\mathrm{Im}}(\mathcal{A})$, the image of $\mathcal{A}$.
###### Theorem 2.
Let $K$ be an operator satisfying eq. 1 and eq. 2. Then $\mathcal{A}$ has
finite rank and completely determines $K$. More precisely, if $\mathcal{A}$
has rank $2m$ and is represented as:
$\mathcal{A}(v)(t):=\frac{1}{2}Z_{t}^{*}\mathcal{A}_{0}\int_{0}^{1}Z_{\tau}v(\tau)d\tau,$
for a skew-symmetric $2m\times 2m$ matrix $\mathcal{A}_{0}$ and a $2m\times k$
matrix $Z_{t}$ then:
$K(v)(t)=\int_{0}^{t}Z_{t}^{*}\mathcal{A}_{0}Z_{\tau}v(\tau)d\tau.$ (10)
Let $\Sigma$ be the spectrum of $\mathcal{A}$; if the matrix $Z_{t}$ can be
chosen to be piecewise analytic, the $1-$capacity of $K$ can be bounded by
$\xi\leq 2\sqrt{m}\sqrt{\sum_{\rho\in\Sigma:-i\rho>0}-\rho^{2}}\leq
2\sqrt{m}\sum_{\rho\in\Sigma:-i\rho>0}|\rho|.$
## 2 Proof of Theorem 1
Before going to the proof of Theorem 1 we still need some auxiliary results.
We start with Lemma 1 to single out the main contributions to the asymptotics
of the eigenvalues of $Q$ (the quadratic form defined in eq. 7). The first
nonzero term of the decomposition we give will determine the rate of decay of
the eigenvalues (see Proposition 4).
Before showing this and proving the precise estimates we need to carry out
the explicit computation of the asymptotics in some model cases, namely when
the matrices $A_{j}$ are constant. Then we have to show how the $j-$capacity
behaves with respect to natural operations such as direct sums of quadratic
forms or restrictions to finite codimension subspaces (Proposition 3).
Let us start with some notation:
$v_{k}(t)=\int_{0}^{t}v_{k-1}(\tau)d\tau,\quad v_{0}(t)=v(t)\in
L^{2}([0,1],\mathbb{R}^{m})$
Suppose that the map $t\mapsto Z_{t}$ is real analytic (or at least regular
enough to perform the necessary derivatives) and integrate by parts twice:
$\begin{split}Q(v)&=\int_{0}^{1}\langle
Z_{t}v(t),\int_{0}^{t}JZ_{\tau}v(\tau)d\tau\rangle dt\\\ &=\int_{0}^{1}\langle
Z_{t}v(t),JZ_{t}v_{1}(t)\rangle-\langle
Z_{t}v(t),\int_{0}^{t}J\dot{Z}_{\tau}v_{1}(\tau)d\tau\rangle dt\\\
&=\int_{0}^{1}\langle Z_{t}v(t),JZ_{t}v_{1}(t)\rangle+\langle
Z_{t}v_{1}(t),J\dot{Z}_{t}v_{1}(t)\rangle dt+\\\
&\quad\quad+\int_{0}^{1}\langle\dot{Z}_{t}v_{1}(t),J\int_{0}^{t}\dot{Z}_{\tau}v_{1}(\tau)d\tau\rangle
dt-\Big{[}\langle\int_{0}^{1}Z_{t}v(t)dt,J\int_{0}^{1}\dot{Z}_{t}v_{1}(t)dt\rangle\Big{]}\end{split}$
If we impose the condition $\int_{0}^{1}v(t)dt=0\,(\iff v_{1}(1)=0)$ the term
in brackets vanishes:
$\langle\int_{0}^{1}Z_{t}v(t)dt,J\int_{0}^{1}\dot{Z}_{t}v_{1}(t)dt\rangle=\langle\int_{0}^{1}Z_{t}v(t)dt,JZ_{1}v_{1}(1)\rangle-\langle\int_{0}^{1}Z_{t}v(t)dt,J\int_{0}^{1}Z_{t}v(t)dt\rangle$
and we can write $Q$ as a sum of three terms
$Q(v)=Q_{1}(v)+Q_{2}(v)+R_{1}(v)$
In analogy we can make the following definitions:
$\begin{split}Q_{2k-1}(v)&=\int_{0}^{1}\langle
Z^{(k-1)}_{t}v_{k-1}(t),JZ^{(k-1)}_{t}v_{k}(t)\rangle dt=\int_{0}^{1}\langle
v_{k-1}(t),A_{2k-1}(t)v_{k}(t)\rangle dt\\\ Q_{2k}(v)&=\int_{0}^{1}\langle
Z^{(k-1)}_{t}v_{k}(t),JZ^{(k)}_{t}v_{k}(t)\rangle dt=\int_{0}^{1}\langle
v_{k}(t),A_{2k}(t)v_{k}(t)\rangle dt\\\ R_{k}&=\int_{0}^{1}\langle
Z^{(k)}_{t}v_{k}(t),J\int_{0}^{t}{Z}^{(k)}_{\tau}v_{k}(\tau)d\tau\rangle dt\\\
V_{k}&=\\{v\in L^{2}([0,1],\mathbb{R}^{m}):v_{l}(1)=0,\,\forall\,0<l\leq
k\\}\end{split}$
Here the matrices $A_{j}(t)$ are exactly those defined in eq. 9.
###### Lemma 1.
For every $j\in\mathbb{N}$, on the subspace $V_{j}$, the form $Q$ can be
represented as
$Q(v)=\sum_{k=1}^{2j}Q_{k}(v)+R_{j}(v)$ (11)
The matrices $A_{2k}(t)$ are symmetric provided that
$\frac{d}{dt}A_{2k-1}(t)\equiv 0$. On the other hand $A_{2k-1}$ is always skew
symmetric.
Proof: It is sufficient to notice that $R_{1}(v)$ has the same form as $Q(v)$
but with $v_{1}$ instead of $v$ and $\dot{Z}_{t}$ instead of $Z_{t}$. Thus the
same scheme of integration by parts gives the decomposition.
Notice that $A_{2k}(t)=A^{*}_{2k}(t)+\frac{d}{dt}A_{2k-1}(t)$ thus the skew-
symmetric part of $A_{2k}(t)$ is zero if $A_{2k-1}$ is zero or constant.
$A_{2k-1}(t)$ is always skew-symmetric by definition.
Now we would like to compute explicitly the spectrum of the $Q_{j}$ when the
matrices $A_{j}$ are constant. Unfortunately, describing the spectrum with
the boundary conditions given by the $V_{j}$ is quite hard: already for
$Q_{4}$ the equation determining it cannot be solved explicitly.
We will derive the Euler-Lagrange equations for $Q_{j}$, turn instead to
periodic boundary conditions, for which everything becomes very explicit, and
then show how to relate the solutions of the two boundary value problems we
are considering. Let us write down the Euler-Lagrange equations for the forms
$Q_{j}$. If $j=2k$, integration by parts yields:
$\begin{split}Q_{2k}(v)-\lambda||v||^{2}=&\int_{0}^{1}\langle
v_{k}(t),A_{2k}v_{k}(t)\rangle-\lambda\langle v_{0}(t),v_{0}(t)\rangle dt\\\
=&\int_{0}^{1}\langle v_{0}(t),(-1)^{k}A_{2k}v_{2k}(t)-\lambda v_{0}(t)\rangle
dt+\\\ &\quad+\sum_{r=0}^{k-1}(-1)^{r}\Big{[}\langle
v_{k-r}(t),A_{2k}v_{k+r+1}(t)\rangle\Big{]}_{0}^{1}\end{split}$
Notice that the boundary terms vanish identically if we impose the vanishing
of $v_{j}$ for $1\leq j\leq k$ at boundary points.
We change notation and define $w(t)=v_{2k}(t)$ and
$w^{(j)}(t)=\frac{d^{j}}{dt^{j}}(w(t))$. The new equations are:
$w^{(2k)}(t)=\frac{(-1)^{k}}{\lambda}A_{2k}w(t)$
We can perform a linear change of coordinates that diagonalizes $A_{2k}$ to
reduce to $m$ $1-$dimensional systems. Imposing periodic boundary conditions,
we are thus left with the following boundary value problem:
$w^{(2k)}(t)=\frac{(-1)^{k}\mu}{\lambda}w(t)\quad w^{(j)}(0)=w^{(j)}(1)\text{
for }0\leq j\leq 2k-1$ (12)
The case of odd $j$ is very similar, in fact $Q_{2k-1}(v)$ can be rewritten
as:
$\begin{split}Q_{2k-1}(v)-\lambda||v||^{2}=&\int_{0}^{1}\langle
v_{k-1}(t),A_{2k-1}v_{k}(t)\rangle-\lambda\langle v_{0}(t),v_{0}(t)\rangle
dt\\\ =&\int_{0}^{1}\langle v_{0}(t),(-1)^{k-1}A_{2k-1}v_{2k-1}(t)-\lambda
v_{0}\rangle dt+\textit{ b.t.}\end{split}$
Here by $b.t.$ we mean boundary terms like the ones appearing in the previous
equation. They again disappear if we assume that $v\in V_{j}$. Thus we end
up with a boundary value problem similar to the one we had before with the
difference that now the matrix $A_{2k-1}$ is skew-symmetric.
$w^{(2k-1)}(t)=\frac{(-1)^{k-1}}{\lambda}A_{2k-1}w(t)$
If we split the space into the kernel and invariant subspaces on which
$A_{2k-1}$ is non degenerate we can decompose $Q_{2k-1}$ as a direct sum of
two-dimensional forms. Imposing periodic boundary conditions, we end up with
the following boundary value problems:
$\begin{cases}w_{1}^{(2k-1)}(t)&=-\frac{(-1)^{(k-1)}\mu}{\lambda}w_{2}\\\
w_{2}^{(2k-1)}(t)&=\frac{(-1)^{(k-1)}\mu}{\lambda}w_{1}\end{cases}\quad\begin{cases}w_{1}^{(j)}(0)=w_{1}^{(j)}(1),\\\
w_{2}^{(j)}(0)=w_{2}^{(j)}(1)\end{cases}\text{ for }0\leq j\leq 2k-2.$ (13)
###### Lemma 2.
The boundary value problem in eq. 12 has a solution if and only if
$\lambda\in\Big{\\{}\frac{\mu}{(2\pi r)^{2k}}:r\in\mathbb{N}\Big{\\}}.$
Moreover any such $\lambda$ has multiplicity $2$. In particular, the
decreasing sequence of $\lambda$ for which eq. 12 has solutions satisfies:
$\lambda_{r}=\frac{\mu}{(2\pi\lceil r/2\rceil)^{2k}}=\frac{\mu}{(\pi
r)^{2k}}+O(r^{-(2k+1)}),\quad r\in\mathbb{N}$
Similarly the boundary value problem in (13) has a solution if and only if:
$\lambda\in\Big{\\{}\frac{|\mu|}{(2\pi r)^{2k-1}}:r\in\mathbb{Z}\Big{\\}}$
and any such $\lambda$ has again multiplicity $2$. The monotone rearrangement
of $\lambda$ for which there exists a solution to the boundary value problem
is:
$\lambda_{r}=\frac{|\mu|}{(2\pi\lceil r/2\rceil)^{2k-1}}=\frac{|\mu|}{(\pi
r)^{2k-1}}+O(r^{-(2k)}),\quad r\in\mathbb{Z}$
Proof: Any solution of the equation
$w^{(2k)}(t)=\frac{(-1)^{k}\mu}{\lambda}w(t)$ can be expressed as a
combination of trigonometric and hyperbolic functions with the appropriate
frequencies.
Without loss of generality we can assume $\mu>0$; we have to consider two
separate cases:
_Case 1: $k$ even and $\lambda>0$ or $k$ odd and $\lambda<0$_
In this case the quantity $(-1)^{k}\mu\lambda^{-1}>0$. If we define
$a^{2k}=(-1)^{k}\mu\lambda^{-1}>0$ for $a>0$, we have to solve:
$w^{(2k)}(t)=a^{2k}w(t),\qquad w^{(j)}(0)=w^{(j)}(1),\,\,0\leq j<2k.$ (14)
A basis for the space of solutions to the ODE is then
$\\{e^{\omega^{j}at}:\omega=e^{i\pi/k},\,0\leq j<2k\\}$. For us it will be more
convenient to switch to a real representation of the space of solutions.
Notice the following symmetry of the even roots of $1$: if $\eta$ is a root of
$1$ different from $\pm 1,\pm i$ then $\\{\eta,\bar{\eta},-\eta,-\bar{\eta}\\}$
are still distinct roots of $1$ (this is also a Hamiltonian feature of the
problem).
If we write $\eta=\eta_{1}+i\eta_{2}$, this symmetry implies that the space
generated by $\\{e^{\eta t},e^{\bar{\eta}t},e^{-\eta t},e^{-\bar{\eta}t}\\}$
is the same as the space generated by
$\\{\sin(\eta_{2}t)\sinh(\eta_{1}t),\sin(\eta_{2}t)\cosh(\eta_{1}t),\cos(\eta_{2}t)\sinh(\eta_{1}t),\cos(\eta_{2}t)\cosh(\eta_{1}t)\\}.$
Let us rescale these functions by $a$ (so that they solve eq. 14) and call
their linear span $U_{\eta}$, we then define $U_{1}$ to be the span of
$\\{\sinh(t),\cosh(t)\\}$ and $U_{i}=\\{\sin(t),\cos(t)\\}$. Note that $U_{i}$
appears if and only if $k$ is even.
Thus the solution space for our problem is the space
$\bigoplus_{\eta}U_{\eta}$ where $\eta$ ranges over the set
$E=\\{\eta:\Re(\eta)\geq 0,\Im(\eta)\geq 0,\eta^{2k}=1\\}$.
Now we have to impose the boundary conditions. Notice that, if $k$ is even,
then $U_{i}$ is made of periodic functions, so its elements are always
solutions. We can look for further solutions in the complement
$\bigoplus_{\eta\neq i}U_{\eta}$. Suppose by contradiction that $w$ is one
such solution. Write $w=\sum_{\eta}w_{\eta}$ with $w_{\eta}\in U_{\eta}$ and
let $b=\sup\\{\Re(\eta):\eta\in E,w_{\eta}\neq 0\\}$. It follows that either
$\sinh(b\,at)$ or $\cosh(b\,at)$ is present in the decomposition of $w$, and
thus:
$w(t)=\sinh(b\,at)\frac{w(t)}{\sinh(b\,at)}=\sinh(b\,at)g(t),\quad
0\not\equiv|g(t)|<C\text{ for $t$ large enough}$
and so $|w|$ is unbounded as $t\to+\infty$ (or $-\infty$) and thus $w$ is not
periodic. It follows that there are periodic solutions only if $k$ is even
(and thus $\lambda>0$) and $a=2\pi r=\sqrt[2k]{\frac{\mu}{\lambda}}$. Notice
that we have two independent solutions, so if we order the solutions
decreasingly we have:
$\lambda_{r}=\frac{\mu}{(2\pi\lceil r/2\rceil)^{2k}},\quad r\in\mathbb{N}$
_Case 2: $k$ odd and $\lambda>0$ or $k$ even and $\lambda<0$_
In this case we have to look at the roots of $-1$, but the argument is very
similar. If $k$ is even there are no solutions, since there are no purely
imaginary frequencies. If $k$ is odd, set $|\mu\lambda^{-1}|=a^{2k}$; then the
boundary value problem is:
$w^{(2k)}(t)=-a^{2k}w(t)\qquad w^{(j)}(0)=w^{(j)}(1),\,0\leq j<2k.$
The roots of $-1$ are just the roots of $1$ rotated by $i$. Now the space of
solutions is $\bigoplus_{\eta\neq 1}U_{\eta}$. We find again two independent
solutions, if we order them we get:
$\lambda_{r}=\frac{\mu}{(2\pi\lceil r/2\rceil)^{2k}},\quad r\in\mathbb{N}$
Notice that positive $\mu$ gives rise to positive eigenvalues. Thus if we
consider $\mu<0$, we get the same result but with switched signs.
We can reduce the odd case (eq. 13) to the even one. Consider the
$1-$dimensional equation of twice the order, i.e.:
$w_{1}^{(2(2k-1))}(t)=-\frac{\mu^{2}}{\lambda^{2}}w_{1}$
Now, the discussion above tells us that there are exactly two independent
solutions with periodic boundary conditions whenever $\lambda$ satisfies
$\sqrt[2k-1]{\frac{\mu}{|\lambda|}}=2r\pi$. It follows that again there are
two independent solutions, this time for both signs of $\lambda$. If we order
them we get:
$\lambda_{r}=\frac{\mu}{(2\pi\lceil
r/2\rceil)^{2k-1}},\quad\lambda_{-r}=\frac{-\mu}{(2\pi\lceil
r/2\rceil)^{2k-1}},\quad r\in\mathbb{N}$
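The expansions appearing in Lemma 2, such as $\mu/(2\pi\lceil r/2\rceil)^{2k}=\mu/(\pi r)^{2k}+O(r^{-(2k+1)})$, are elementary and can be checked numerically. A sketch with assumed values $\mu=2$, $k=1$ scales the difference between the two expressions by $r^{2k+1}$ and observes that it stays bounded:

```python
import numpy as np

# lambda_r = mu/(2 pi ceil(r/2))^{2k} versus mu/(pi r)^{2k}:
# the difference times r^{2k+1} remains bounded (it vanishes for even r).
mu, k = 2.0, 1
r = np.arange(1, 10001)
lam = mu / (2 * np.pi * np.ceil(r / 2)) ** (2 * k)
err = np.abs(lam - mu / (np.pi * r) ** (2 * k)) * r ** (2 * k + 1)
print(err.max())   # bounded, no growth in r
```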
###### Proposition 1.
Let $\mu>0$ and $s\in(0,+\infty)$. Denote by $\eta_{s}$ the number of
solutions of eq. 12 with $\lambda$ greater than $s$, and similarly denote by
$\omega_{s}$ the number of solutions with $\lambda$ greater than $s$ of:
$w^{(2k)}(t)=\frac{(-1)^{k}\mu}{\lambda}w(t),\quad
w^{(j)}(0)=w^{(j)}(1)=0,\quad k\leq j\leq 2k-1$ (15)
Then $|\omega_{s}-\eta_{s}|\leq 2k$. The same conclusion holds for eq. 13.
Proof: The result follows from standard results about Maslov index of a path
in the Lagrange Grassmannian. References on the topic can be found in
beschastnyi_morse ; beschastnyi_1d ; agrachev_quadratic_paper . Let us
illustrate briefly the construction. Let $(\Sigma,\sigma)$ be a symplectic
space, the Lagrange Grassmannian is the collection of Lagrangian subspaces of
$\Sigma$ and it has a structure of smooth manifold. For any Lagrangian
subspace $L_{0}$ we define the _train_ of $L_{0}$ to be the set:
$T_{L_{0}}=\\{L\text{ Lagrangian}:L\cap L_{0}\neq(0)\\}$. $T_{L_{0}}$ is a
stratified set, the biggest stratum has codimension $1$ and is endowed with a
co-orientation. If $\gamma$ is a smooth curve with values in the Lagrangian
Grassmannian (i.e. a smooth family of Lagrangian subspaces) which intersects
transversally $T_{L_{0}}$ in its smooth part, one defines an intersection
number by counting the intersection points weighted with a plus or minus sign
depending on the co-orientation. Tangent vectors at a point $L$ of the
Lagrange Grassmannian (which is a subspace of $\Sigma$) are naturally
interpreted as quadratic forms on $L$. We say that a curve is _monotone_ if at
any point its velocity is either a non negative or a non positive quadratic
form. For monotone curves, Maslov index counts the number of intersections
with the train up to sign. For generic continuous curves it is defined via a
homotopy argument.
Denote by $\mathrm{Mi}_{L_{0}}(\gamma)$ the Maslov index of a curve $\gamma$
and $L_{1}$ be another Lagrangian subspace. In agrachev_quadratic_paper the
following inequality is proved:
$|\mathrm{Mi}_{L_{0}}(\gamma)-\mathrm{Mi}_{L_{1}}(\gamma)|\leq\frac{\dim(\Sigma)}{2}$
(16)
Let us apply this result to our problem. First of all let us produce a curve
in the Lagrange Grassmannian whose Maslov index coincides with the counting
functions $\omega_{s}$ and $\eta_{s}$. The right candidate is the graph of the
fundamental solution of $w^{(2k)}(t)=\frac{(-1)^{k}\mu}{\lambda}w(t)$.
We write down a first order system on $\mathbb{R}^{2k}$ equivalent to our
boundary value problem, if we call the coordinates on $\mathbb{R}^{2k}$
$x_{j}$, set:
$x_{j+1}(t)=w^{(j)}(t)\Rightarrow\dot{x}_{j}=x_{j+1}\text{ for }1\leq j\leq
2k-1,\quad\dot{x}_{2k}=\frac{(-1)^{k}\mu}{\lambda}x_{1}.$
For simplicity call $\frac{(-1)^{k}\mu}{\lambda}=a$, the matrix we obtain has
the following structure:
$A_{\lambda}=\begin{pmatrix}0&&&a\\\ 1&0&&\\\ &\ddots&\ddots&\\\
&&1&0\end{pmatrix}$
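As a side check on the spectrum of this matrix (used just below): $A_{\lambda}$ sends $e_{1}\mapsto e_{2}\mapsto\dots\mapsto e_{2k}\mapsto a\,e_{1}$, so $A_{\lambda}^{2k}=a\,Id$ and its eigenvalues are the $2k$-th roots of $a$. A numerical sketch with assumed values $k=2$, $a=3$:

```python
import numpy as np

# A_lambda is a weighted cyclic shift: A^{2k} = a * Id, so its eigenvalues
# satisfy x^{2k} = a.
k, a = 2, 3.0
A = np.zeros((2 * k, 2 * k))
A[0, -1] = a                                            # weight a in the corner
A[np.arange(1, 2 * k), np.arange(0, 2 * k - 1)] = 1.0   # subdiagonal of ones
ev = np.linalg.eigvals(A)
print(np.allclose(ev ** (2 * k), a))                    # -> True
print(np.allclose(A @ A @ A @ A, a * np.eye(2 * k)))    # A^{2k} = a Id for k = 2
```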
This matrix is not Hamiltonian with respect to the standard symplectic form on
$\mathbb{R}^{2k}$, but it is straightforward to compute a similarity
transformation that sends it to a Hamiltonian one (recall that we already
used that $A_{\lambda}$ has the spectrum of a Hamiltonian matrix). Moreover
the change of coordinates can be chosen to be block diagonal and thus
preserves the subspace $B=\\{x_{j}=0,k\leq j\\}$, which remains Lagrangian
too. Since later on we will have to show that the curve we consider is
monotone we will give this change of coordinates explicitly. Define the matrix
$S$ setting $S_{i,k-i+1}=(-1)^{i-1}$ and zero otherwise. It is a matrix that
has alternating $\pm 1$ on the anti-diagonal. Define the following $2k\times
2k$ matrices:
$G=\begin{pmatrix}1&0\\\ 0&S\end{pmatrix}\quad G^{-1}=\begin{pmatrix}1&0\\\
0&(-1)^{k-1}S\end{pmatrix}\quad\hat{A}_{\lambda}=GA_{\lambda}G^{-1}$
Set $N$ to be the lower triangular $k\times k$ shift matrix (i.e. the left
upper block of $A_{\lambda}$ above) and $E$ the matrix with just a $1$ in
position $(1,k)$ (i.e. the left lower block of $A_{\lambda}$). The new matrix
of coefficients is:
$\hat{A}_{\lambda}=\begin{pmatrix}N&aES^{-1}\\\ SE&-N^{*}\end{pmatrix}\quad
ES^{-1}=\mathrm{diag}(1,0,\dots,0),\quad SE=(-1)^{k-1}\mathrm{diag}(0,\dots,0,1).$
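The similarity transformation can be verified numerically. The sketch below (for an assumed instance $k=3$, $a=2$) builds $A_{\lambda}$, $S$ and $G$ as defined above and checks that $\hat{A}_{\lambda}=GA_{\lambda}G^{-1}$ satisfies the Hamiltonian condition $J\hat{A}_{\lambda}+\hat{A}_{\lambda}^{*}J=0$:

```python
import numpy as np

# Build A_lambda, the anti-diagonal sign matrix S, the block matrix G, and
# check that G A G^{-1} is Hamiltonian: J M + M^T J = 0.
k, a = 3, 2.0
A = np.zeros((2 * k, 2 * k))
A[0, -1] = a
A[np.arange(1, 2 * k), np.arange(0, 2 * k - 1)] = 1.0
S = np.zeros((k, k))
for i in range(k):                  # alternating +-1 on the anti-diagonal
    S[i, k - 1 - i] = (-1.0) ** i
G = np.block([[np.eye(k), np.zeros((k, k))], [np.zeros((k, k)), S]])
Ah = G @ A @ np.linalg.inv(G)
J = np.block([[np.zeros((k, k)), -np.eye(k)], [np.eye(k), np.zeros((k, k))]])
print(np.allclose(J @ Ah + Ah.T @ J, 0))   # -> True
```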
Now we are ready to define our curve. First of all the symplectic space we are
going to use is $(\mathbb{R}^{4k},\sigma\oplus(-\sigma))$ where $\sigma$ is
the standard symplectic form; in this way graphs of symplectic transformations
are Lagrangian subspaces. Sometimes we will denote the direct sum of the two
symplectic forms with opposite signs with $\sigma\ominus\sigma$ too. Let
$\Phi_{\lambda}$ be the fundamental solution of
$\dot{\Phi}_{\lambda}^{t}=\hat{A}_{\lambda}\Phi_{\lambda}^{t}$ at time $t=1$.
Consider its graph:
$\gamma:\lambda\mapsto\Gamma(\Phi^{1}_{\lambda})=\Gamma(\Phi_{\lambda}),\quad\lambda\in(0,+\infty)$
Once we prove that $\gamma$ is monotone, it is straightforward to check that
$\mathrm{Mi}_{B\times B}(\gamma|_{[s,+\infty)})$ counts the number of
solutions to boundary value problem given in eq. 15 for $\lambda\geq s$ and
similarly $\mathrm{Mi}_{\Gamma(I)}(\gamma|_{[s,+\infty)})$ counts the
solutions of eq. 12 for $\lambda\geq s$. Here $\Gamma(I)$ stands for the graph
of the identity map (i.e. the diagonal subspace).
Let us check that the curve is monotone. As already mentioned, tangent vectors
in the Lagrange Grassmannian can be interpreted as quadratic forms. Being
monotone means that the following quadratic form is either non negative or non
positive:
$\big{(}\partial_{\lambda}\gamma\big{)}(\xi)=\sigma(\Phi_{\lambda}\xi,\partial_{\lambda}\Phi_{\lambda}\xi),\quad\xi\in\mathbb{R}^{2k}$
We use the ODE for $\Phi_{\lambda}(t)$ to prove monotonicity:
$\begin{split}\sigma(\Phi_{\lambda}\xi,\partial_{\lambda}\Phi_{\lambda}\xi)&=\int_{0}^{1}\frac{d}{dt}\big{(}\sigma(\Phi^{t}_{\lambda}\xi,\partial_{\lambda}\Phi^{t}_{\lambda}\xi)\big{)}dt+\sigma(\Phi^{0}_{\lambda}\xi,\partial_{\lambda}\Phi^{0}_{\lambda}\xi)\\\
&=\int_{0}^{1}\sigma(\hat{A}_{\lambda}\Phi^{t}_{\lambda}\xi,\partial_{\lambda}\Phi^{t}_{\lambda}\xi)+\sigma(\Phi^{t}_{\lambda}\xi,\big{(}\partial_{\lambda}\hat{A}_{\lambda}\,\Phi^{t}_{\lambda}+\hat{A}_{\lambda}\partial_{\lambda}\Phi^{t}_{\lambda}\big{)}\xi)dt\\\
&=\int_{0}^{1}\sigma(\Phi^{t}_{\lambda}\xi,\partial_{\lambda}\hat{A}_{\lambda}\,\Phi^{t}_{\lambda}\xi)dt\end{split}$
Here we used the facts that
$\partial_{\lambda}\Phi^{0}_{\lambda}=\partial_{\lambda}Id=0$ and that
$\hat{A}_{\lambda}$ is Hamiltonian, and thus
$J\hat{A}_{\lambda}=-\hat{A}_{\lambda}^{*}J$, to cancel the first and third
terms. It remains to check $J\partial_{\lambda}\hat{A}_{\lambda}$. It is
straightforward to see that it is a diagonal matrix with just one nonzero
entry, and thus it is either non negative or non positive. So
$\partial_{\lambda}\gamma$ is either non positive or non negative, being the
integral of a non positive or non negative quantity (the sign is independent
of $\xi$).
Now the statement follows from inequality (16).
We are finally ready to compute the asymptotics for $Q_{j}$ when the matrix
$A_{j}$ is constant. The next proposition translates the estimate on the
counting functions $\eta_{s}$ and $\omega_{s}$ defined in Proposition 1 into
an estimate for the eigenvalues.
###### Proposition 2.
Let $Q_{j}$ be any of the forms appearing in eq. 11.
* •
Suppose $j=2k$ and $Q_{2k}(v)=\int_{0}^{1}\langle A_{2k}v_{k},v_{k}\rangle dt$
with $A_{2k}$ symmetric and constant and let $\Sigma_{2k}$ be its spectrum.
Define
$\xi_{+}=\left(\sum_{\mu\in\Sigma_{2k},\mu>0}\sqrt[j]{\mu}\right)^{j}\text{
and }\xi_{-}=\left(\sum_{\mu\in\Sigma_{2k},\mu<0}\sqrt[j]{|\mu|}\right)^{j}.$
Then $Q_{2k}$ has capacity $(\xi_{+},\xi_{-})$ with remainder of order one.
Moreover, if $A_{2k}$ is $m\times m$ and $r\in\mathbb{N}$, for $r\geq mk$
$\frac{\xi_{+}}{\pi^{j}(r-2mk-p(r))^{j}}\geq\lambda_{r}\geq\frac{\xi_{+}}{\pi^{j}(r+2mk+p(r))^{j}}$
(17)
where $p(r)=0$ if $r$ is even or $p(r)=1$ if $r$ is odd. Similarly for
negative $r$ with $\xi_{-}$.
* •
Suppose $j=2k+1$ and $Q_{2k+1}(v)=\int_{0}^{1}\langle
A_{2k+1}v_{k-1},v_{k}\rangle dt$ with $A_{2k+1}$ skew-symmetric and constant
and let $\Sigma_{2k+1}$ be its spectrum. Define
$\xi=\left(\sum_{\mu\in\Sigma_{2k+1},-i\mu>0}\sqrt[j]{-i\mu}\right)^{j}.$
Then $Q_{2k+1}$ has capacity $\xi$ with remainder of order one. Moreover, if
$A_{2k+1}$ is $m\times m$ and $r\in\mathbb{Z}$, for $|r|\geq mk$
$\frac{\xi}{\pi^{j}(r-2mk-p(r))^{j}}\geq\lambda_{r}\geq\frac{\xi}{\pi^{j}(r+2mk+p(r))^{j}}.$
(18)
Proof: First of all we consider a $1-$dimensional system and we write the
inequality $|\eta_{s}-\omega_{s}|\leq 2k$ as an inequality for the
eigenvalues.
Notice that if we have two integer-valued functions
$f,g:\mathbb{R}\to\mathbb{N}$ and an inequality of the form:
$g(s)\geq\\#\\{\lambda\text{ solutions of eq. 15 }:\lambda\geq s\\}\geq f(s),$
it means that we have at least $f(s)$ solutions bigger than $s$ and at most
$g(s)$. This implies that the sequence of ordered eigenvalues satisfies:
$\lambda_{f(s)}\geq s,\quad\lambda_{g(s)}\leq s.$
Now we compute these quantities explicitly. By virtue of Proposition 1 we can
take as upper/lower bounds for the counting function $g(s)=\eta_{s}+2k$ and
$f(s)=\eta_{s}-2k$. We choose the point $s=\frac{\mu}{(2\pi r)^{j}}$. It is
straightforward to see that:
$\eta_{s}\Big{|}_{s=\frac{\mu}{(2\pi
r)^{j}}}=2\\#\\{l\in\mathbb{N}:\frac{\mu}{(2\pi l)^{j}}\geq\frac{\mu}{(2\pi
r)^{j}}\\}=2r.$
And thus we obtain:
$\lambda_{2(r-k)}\geq\frac{\mu}{(2\pi
r)^{j}},\quad\lambda_{2(r+k)}\leq\frac{\mu}{(2\pi r)^{j}}.$
Now if we change the labelling we find that, for $l\geq k$:
$\frac{\mu}{(2\pi(l-k))^{j}}\geq\lambda_{2l}\geq\frac{\mu}{(2\pi(l+k))^{j}}.$
By definition $\lambda_{2l}\geq\lambda_{2l+1}\geq\lambda_{2l+2}$ and thus we
have a bound for any index $r\in\mathbb{N}$.
Now we consider the $m-$dimensional system; notice that we reduced the
problem, via diagonalization, to the sum of $m$ $1-$dimensional systems. Thus
our form $Q_{j}$ is always a direct sum of $1-$dimensional objects. We show
now how to recover the desired estimate for the sum of quadratic forms.
First of all observe that counting functions are additive with respect to
direct sum. In fact, if $Q=\oplus_{i=1}^{m}Q_{i}$, $\lambda$ is an eigenvalue
of $Q$ if and only if it is an eigenvalue of $Q_{i}$ for some $i$. We proceed
as we did before. Suppose that $Q_{a}$ is $1-$dimensional and
$Q_{a}(v)=\int_{0}^{1}\mu_{a}|v_{k}(t)|^{2}dt$. Let us compute $\eta_{s}$ in
the point $s_{0}=(\sum_{i=1}^{m}\sqrt[j]{\mu_{i}})^{j}/(2\pi l)^{j}$:
$2\\#\left\\{r\in\mathbb{N}:\frac{\mu_{a}}{(2\pi
r)^{j}}\geq\frac{(\sum_{i=1}^{m}\sqrt[j]{\mu_{i}})^{j}}{(2\pi
l)^{j}}\right\\}=2\\#\left\\{r\in\mathbb{N}:\frac{\sqrt[j]{\mu_{a}}}{(\sum_{i=1}^{m}\sqrt[j]{\mu_{i}})r}\geq\frac{1}{l}\right\\}$
Set for simplicity
$c_{a}=\frac{\sqrt[j]{\mu_{a}}}{(\sum_{i=1}^{m}\sqrt[j]{\mu_{i}})}$, it is
straightforward to see that the cardinality of the above set is
$\\#\\{r\in\mathbb{N}:r\leq c_{a}l\\}=\lfloor c_{a}l\rfloor$. Now we are ready
to prove the estimates for the direct sum of forms. Adding everything we have:
$2\sum_{a=1}^{m}(\lfloor c_{a}l\rfloor+k)\geq\\#\Big{\\{}\text{eigenvalues of
}Q\geq\frac{(\sum_{i=1}^{m}\sqrt[j]{\mu_{i}})^{j}}{(2\pi
l)^{j}}\Big{\\}}=2\sum_{a=1}^{m}(\lfloor c_{a}l\rfloor-k)$
It is clear that $\sum_{a=1}^{m}c_{a}=1$ and that
$l+mk\geq\sum_{a=1}^{m}(\lfloor c_{a}l\rfloor+k)$, similarly
$\sum_{a=1}^{m}(\lfloor c_{a}l\rfloor-k)\geq l-m(k+1)$ since $\lfloor
c_{a}l\rfloor\geq c_{a}l-1$. Rewriting for the eigenvalues with $l\geq mk$ we
obtain:
$\frac{(\sum_{i=1}^{m}\sqrt[j]{\mu_{i}})^{j}}{(2\pi(l-mk))^{j}}\geq\lambda_{2l}\geq\frac{(\sum_{i=1}^{m}\sqrt[j]{\mu_{i}})^{j}}{(2\pi(l+mk))^{j}}.$
It is straightforward to compute the bounds in eqs. 17 and 18 observing again
$\lambda_{2l}\geq\lambda_{2l+1}\geq\lambda_{2l+2}$.
###### Remark 3.
The shift proportional to $m$ appearing in eqs. 17 and 18 is due to the fact that we are
considering the direct sum of $m$ quadratic forms. It is worth noticing that
this does not depend on the fact that we are considering a quadratic form on
$L^{2}([0,1],\mathbb{R}^{m})$ and the estimates in eqs. 17 and 18 hold
whenever we consider the direct sum of $m$ $1-$dimensional forms with constant
coefficients. This consideration will be used in the proof of Theorem 1 below.
Now we prove some properties of the capacities which are closely related to
the explicit estimates we have just proved for the linear case. As done so far,
we state the proposition for ordered positive eigenvalues. An analogous
statement is true for the negative ones.
###### Proposition 3.
Suppose that $Q$ is a quadratic form on a Hilbert space and let
$\\{\lambda_{n}\\}_{n\in\mathbb{N}}$ be its positive ordered eigenvalues.
Suppose that:
$\lambda_{n}=\frac{\zeta}{n^{j}}+O(n^{-j-\nu})\quad\nu>0,j\in\mathbb{N}\text{
as }n\to+\infty.$
1. 1.
Then for quadratic forms $Q_{i}$ on Hilbert spaces $\mathcal{H}_{i}$, each satisfying such an asymptotic with constant $\zeta_{i}$, the direct sum
$Q=\oplus_{i=1}^{m}Q_{i}$ satisfies:
$\lambda_{n}=\Big{(}\sum_{i=1}^{m}\frac{\sqrt[j]{\zeta_{i}}}{n}\Big{)}^{j}+O(n^{-j-\nu})\quad\nu>0,j\in\mathbb{N}\text{
as }n\to+\infty.$
2. 2.
Suppose that $U$ is a subspace of codimension $d<\infty$ then
$\lambda_{n}(Q|_{U})=\frac{\zeta}{n^{j}}+O(n^{-j-\nu})\iff\lambda_{n}(Q)=\frac{\zeta}{n^{j}}+O(n^{-j-\nu}),$
as $n\to+\infty$.
3. 3.
Suppose that $Q$ and $\hat{Q}$ are two quadratic forms. Suppose that $Q$ is as
at the beginning of the proposition and $\hat{Q}$ satisfies:
$\lambda_{n}(\hat{Q})=O(n^{-j-\mu})\quad\mu>0,\text{ as }n\to+\infty.$
Then the sum $Q^{\prime}=Q+\hat{Q}$ satisfies:
$\lambda_{n}(Q^{\prime})=\frac{\zeta}{n^{j}}+O(n^{-\nu^{\prime}}),\quad\nu^{\prime}=\min\\{\frac{j+\mu}{j+\mu+1}(j+1),j+\nu\\}.$
Proof: The asymptotic relation can be written in terms of a counting function.
Take the $j-$th root of the eigenvalues of $Q_{i}$, then it holds that
$\\#\\{n\in\mathbb{N}\,|\,0\leq\frac{1}{\sqrt[j]{\lambda_{n}}}\leq
k\\}=\sqrt[j]{\zeta_{i}}k+O(k^{1-\nu})$
So summing up all the contributions we get the estimate in point (1).
The min-max principle implies that we can control the $n-$th eigenvalue of
$Q|_{U}$ with the $n-$th and $(n+d)$-th eigenvalue of $Q$ i.e.:
$\lambda_{n}(Q|_{U})\leq\lambda_{n}(Q)\leq\lambda_{n-d}(Q|_{U})\leq\lambda_{n-d}(Q)$
So, if the codimension is fixed, it is equivalent to provide an estimate for
the eigenvalues of $Q$ or for those of $Q|_{U}$.
For the last point we use Weyl's inequality: we can estimate the $(i+j)$-th
eigenvalue of a sum of quadratic forms with the sum of the $i-$th and the
$j-$th eigenvalues of the summands. Write, as in determinant, $Q^{\prime}$ as
$Q+\hat{Q}$ and $Q$ as $Q^{\prime}+(-\hat{Q})$, and choose $i=n-\lfloor
n^{\delta}\rfloor$ and $j=\lfloor n^{\delta}\rfloor$ in the first case and
$i=n$ and $j=\lfloor n^{\delta}\rfloor$ in the second. This implies:
$\lambda_{n+\lfloor n^{\delta}\rfloor}(Q)+\lambda_{\lfloor
n^{\delta}\rfloor}(\hat{Q})\leq\lambda_{n}(Q^{\prime})\leq\lambda_{n-\lfloor
n^{\delta}\rfloor}(Q)+\lambda_{\lfloor n^{\delta}\rfloor}(\hat{Q})$
The best remainder is computed as
$\nu^{\prime}=\max_{\delta\in(0,1)}\min\\{(j+\mu)\delta,j+1-\delta,j+\nu\\}$.
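The eigenvalue inequality invoked here is Weyl's inequality: for decreasingly ordered eigenvalues, $\lambda_{i+j-1}(Q+\hat{Q})\leq\lambda_{i}(Q)+\lambda_{j}(\hat{Q})$. It can be illustrated in finite dimensions with a randomized sketch (symmetric matrices, zero-based indexing, so the bound reads $\lambda_{i+j}(A+B)\leq\lambda_{i}(A)+\lambda_{j}(B)$):

```python
import numpy as np

# Weyl's inequality for sums of symmetric matrices, checked on a random pair.
rng = np.random.default_rng(0)
n = 40
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
B = rng.standard_normal((n, n)); B = (B + B.T) / 2
eA = np.sort(np.linalg.eigvalsh(A))[::-1]
eB = np.sort(np.linalg.eigvalsh(B))[::-1]
eAB = np.sort(np.linalg.eigvalsh(A + B))[::-1]
ok = all(eAB[i + j] <= eA[i] + eB[j] + 1e-10
         for i in range(n) for j in range(n) if i + j < n)
print(ok)   # -> True
```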
Collecting all the facts above, we have the following estimate on the decay
of the eigenvalues of $Q_{j}$, independently of any analyticity assumption on
the kernel.
###### Proposition 4.
Take $Q_{j}$ as in the decomposition of Lemma 1. Then the eigenvalues of
$Q_{j}$ satisfy:
$\lambda_{n}(Q_{j})=O\Big{(}\frac{1}{n^{j}}\Big{)}\quad\text{ as
}n\to\pm\infty$
Moreover for any $k\in\mathbb{N}$ and for any $0\leq s\leq k$ the forms
$Q_{2k+1}$ and $Q_{2k}$ have the same first-term asymptotics as the forms:
$\displaystyle\hat{Q}_{2k+1,s}(v)=(-1)^{s}\int_{0}^{1}\langle
A_{2k+1}v_{k+1+s}(t),v_{k-s}(t)\rangle dt$
$\displaystyle\hat{Q}_{2k,s}(v)=(-1)^{s}\int_{0}^{1}\langle
A_{2k}v_{k+s}(t),v_{k-s}(t)\rangle dt$
Proof: Let’s start with the even case, $j=2k$. It holds that:
$|Q_{2k}(v)|=|\int_{0}^{1}\langle A_{t}v_{k}(t),v_{k}(t)\rangle dt|\leq
C\int_{0}^{1}\langle v_{k}(t),v_{k}(t)\rangle dt$
where $C=\max_{t}||A_{t}||$. By comparison with the constant-coefficient case
we get the bound.
Suppose now that $j=2k+1$. As before there is a constant $C$ such that
$|Q_{2k+1}(v)|=|\int_{0}^{1}\langle A_{t}v_{k}(t),v_{k+1}(t)\rangle dt|\leq
C\|v_{k}\|_{2}\|v_{k+1}\|_{2}$
Consider now the following quadratic forms on $L^{2}([0,1],\mathbb{R}^{k})$:
$F_{k}(v)=\int_{0}^{1}||v_{k}(t)||^{2}dt=\|v_{k}\|_{2}^{2},\quad
F_{k+1}(v)=\int_{0}^{1}||v_{k+1}(t)||^{2}dt=\|v_{k+1}\|_{2}^{2}$
Define $V_{n}=\\{v_{1},\dots,v_{n}\\}^{\perp}$ where $v_{i}$ are linearly
independent eigenvectors of $F_{k}$ associated to the first $n$ eigenvalues
$\lambda_{1}\geq\dots\geq\lambda_{n}$. Similarly define
$U_{n}=\\{u_{1},\dots,u_{n}\\}^{\perp}$ to be the orthogonal complement to the
eigenspace associated to the first $n$ eigenvalues of $F_{k+1}$. It follows
that:
$\lambda_{2n}(Q_{2k+1})\leq\max_{v\in V_{n}\cap
U_{n}}C\|v_{k}\|_{2}\|v_{k+1}\|_{2}\leq C\max_{v\in
V_{n}}\|v_{k}\|_{2}\max_{v\in U_{n}}\|v_{k+1}\|_{2}$
We already have an estimate for the eigenvalues of $F_{k}$ and $F_{k+1}$, since
we have already dealt with the constant-coefficient case. In virtue of the choice
of the subspaces $V_{n}$ and $U_{n}$, the maxima on the right-hand side are the
square roots of the $n$-th eigenvalues of the respective forms. Thus one gives
a contribution of order $n^{-k}$ and the other of order $n^{-k-1}$, and the
first part of the proposition is proved.
For the second part, without loss of generality suppose that $j=2k$. The other
case is completely analogous.
$\begin{split}Q_{2k}(v)&=\int_{0}^{1}\langle v_{k},A_{t}v_{k}\rangle
dt=\int_{0}^{1}\langle
v_{k},\int_{0}^{t}A_{\tau}v_{k-1}(\tau)+\dot{A}_{\tau}v_{k}(\tau)d\tau\rangle
dt\\\ &=-\int_{0}^{1}\langle v_{k+1}(t),A_{t}v_{k-1}(t)\rangle
dt-\int_{0}^{1}\langle v_{k+1}(t),\dot{A}_{t}v_{k}(t)\rangle dt\\\ \end{split}$
The second term above is of higher order by the first part of the proposition,
and so, iterating the integration by parts on the first term, at step $s$ we get that:
$\displaystyle\int_{0}^{1}\langle v_{k+s}(t),A_{t}v_{k-s}(t)\rangle
dt=-\int_{0}^{1}\langle v_{k+s+1}(t),A_{t}v_{k-s-1}(t)\rangle
dt-\int_{0}^{1}\langle v_{k+s+1}(t),\dot{A}_{t}v_{k-s}(t)\rangle dt$
The second term on the right-hand side is again of order $n^{-2k-1}$; this can
be checked in the same way as in the first part of the proposition. This
finishes the proof.
Now we prove the main result of this section:
Proof: [Proof of Theorem 1] Suppose that $j=2k$ is even. We work on
$V_{k}=\\{v\in L^{2}([0,1],\mathbb{R}^{m}):v_{j}(0)=v_{j}(1)=0,\,0<j\leq
k\\}$. Then
$Q(v)=Q_{2k}(v)+R_{k}(v)=\int_{0}^{1}\langle A_{t}v_{k}(t),v_{k}(t)\rangle
dt+R_{k}(v)$
Since the matrix $A_{t}$ is analytic we can diagonalize it piecewise
analytically in $t$ (see kato ). Thus there exists a piecewise analytic
orthogonal matrix $O_{t}$ such that $O_{t}^{*}A_{t}O_{t}$ is diagonal. By the
second part of Proposition 4, if we make the change of coordinates
$v_{t}\mapsto O_{t}v_{t}$ we can reduce to studying the direct sum of $m$
one-dimensional forms. Without loss of generality we consider forms of the type:
$Q_{2k}(v)=\int_{0}^{1}a_{t}||v_{k}(t)||^{2}dt=\int_{0}^{1}a_{t}v_{k}(t)^{2}dt$
where now $a_{t}$ is piecewise analytic and $v_{k}$ a scalar function.
For simplicity we can assume that $a_{t}$ does not change sign and is analytic
on the whole interval. If that were not the case, we could just divide $[0,1]$
into a finite number of intervals and study $Q_{2k}$ separately on each of them.
Pick a point $t_{0}$ in $(0,1)$ and consider the following
subspace of codimension $mk$ in $V_{k}$:
$V_{k}\supset V^{t_{0}}_{k}=\\{v\in
V_{k}:v_{j}(0)=v_{j}(t_{0})=v_{j}(1)=0,\,0<j\leq k\\}$
For $t\geq t_{0}$, define
$v_{j}^{t_{0}}:=\int_{t_{0}}^{t}v^{t_{0}}_{j-1}(\tau)d\tau$ and $v^{t_{0}}_{0}=v\in
V_{k}$. It is straightforward to check that on $V_{k}^{t_{0}}$ the form
$Q_{2k}$ splits as a direct sum:
$Q_{2k}(v)=\int_{0}^{t_{0}}\langle A_{t}v_{k}(t),v_{k}(t)\rangle
dt+\int_{t_{0}}^{1}\langle A_{t}v^{t_{0}}_{k}(t),v^{t_{0}}_{k}(t)\rangle dt$
Now by Proposition 3 (points $i)$ and $ii)$) we can introduce as many points
as we want and work separately on each segment, and the asymptotics will not
change (as long as the number of points is finite).
Now we fix a partition $\Pi$ of $[0,1]$, $\Pi=\\{t_{0}=0,t_{1}\dots
t_{l-1},t_{l}=1\\}$. Consider the subspace $V_{\Pi}=\\{v\in
L^{2}\,|\,v_{s}(t_{i})=v_{s}(t_{i+1})=0,0<s\leq k,\,t_{i}\in\Pi\\}$ which has
codimension equal to $k|\Pi|$. Set $a_{i}^{-}=\min_{t\in[t_{i},t_{i+1}]}a_{t}$
and $a_{i}^{+}=\max_{t\in[t_{i},t_{i+1}]}a_{t}$. Finally define
$v_{k}^{t_{i}}(t)=\int_{t_{i}}^{t}\dots\int_{t_{i}}^{\tau_{1}}v(\tau)d\tau\dots
d\tau_{k-1}$. It follows immediately that on $V_{\Pi}$:
$\sum_{i}a^{-}_{i}\int_{t_{i}}^{t_{i+1}}v^{t_{i}}_{k}(t)^{2}dt\leq
Q_{2k}(v)\leq\sum_{i}a^{+}_{i}\int_{t_{i}}^{t_{i+1}}v^{t_{i}}_{k}(t)^{2}dt$
Now, we already analysed the spectrum for the problem with constant $a_{t}$ on
$[0,1]$. The last step to understand the quantities on the right and left hand
side is to see how the eigenvalues rescale when we change the length of
$[0,1]$.
If we look back at the proof of Lemma 2, it is straightforward to check that
the length is relevant only when we impose the boundary conditions; we find
that the eigenvalues are $\lambda=\frac{a\ell^{2k}}{(2\pi n)^{2k}}$ and are
again double.
In particular the estimates in eqs. 17 and 18 are still true replacing
$\mu_{i}$ with $a_{i}^{\pm}\ell^{2k}$.
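Only the $\ell^{2k}$ scaling (not the constant or the multiplicity) is easy to illustrate numerically. Below is a sketch for $k=1$, using the Green-type kernel $\min(s,t)$ as a hypothetical stand-in for the forms considered here: the discretized operator on $[0,\ell]$ is exactly $\ell^{2}$ times the one on $[0,1]$, so its eigenvalues scale by $\ell^{2}$.

```python
import numpy as np

def spectrum(ell, N=300):
    """Leading eigenvalues of the integral operator with kernel min(s,t) on [0,ell]."""
    h = ell / N
    t = (np.arange(N) + 0.5) * h          # midpoint quadrature grid
    K = np.minimum.outer(t, t) * h        # discretized operator
    return np.sort(np.linalg.eigvalsh(K))[::-1]

ell = 3.0
lam1 = spectrum(1.0)
lam_ell = spectrum(ell)

# lambda_n on [0,ell] = ell^2 * lambda_n on [0,1]   (the ell^{2k} law for k=1)
ratios = lam_ell[:5] / lam1[:5]
assert np.allclose(ratios, ell**2, rtol=1e-6)
```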
If we now replace $\ell$ by $|t_{i+1}-t_{i}|$ and sum the capacities according
to Proposition 3, we have the following estimate on the eigenvalues on
$V_{\Pi}$, for $n\geq 2k|\Pi|$:
$\Big{(}\frac{\sum_{i}(a_{i}^{-})^{\frac{1}{2k}}(t_{i+1}-t_{i})}{\pi(n+2|\Pi|k+p(n))}\Big{)}^{2k}\leq\lambda_{n}(Q_{2k}\big{|}_{V_{\Pi}})\leq\Big{(}\frac{\sum_{i}(a_{i}^{+})^{\frac{1}{2k}}(t_{i+1}-t_{i})}{\pi(n-2|\Pi|k-p(n))}\Big{)}^{2k}$
Moreover the min-max principle implies that, for $n\geq k|\Pi|$:
$\lambda_{n}\big{(}Q_{2k}\big{|}_{V_{\Pi}}\big{)}\leq\lambda_{n}\big{(}Q_{2k}\big{)}\leq\lambda_{n-k|\Pi|}\big{(}Q_{2k}\big{|}_{V_{\Pi}}\big{)}$
In particular for $n\geq 3k|\Pi|$ we have:
$\Big{(}\frac{\sum_{i}(a_{i}^{-})^{\frac{1}{2k}}(t_{i+1}-t_{i})}{\pi(n+2|\Pi|k+p(n))}\Big{)}^{2k}\leq\lambda_{n}(Q_{2k})\leq\Big{(}\frac{\sum_{i}(a_{i}^{+})^{\frac{1}{2k}}(t_{i+1}-t_{i})}{\pi(n-3|\Pi|k-p(n))}\Big{)}^{2k}$
(19)
We address now the issue of the convergence of the Riemann sums. Set
$I^{\pm}_{a}=\sum_{i}(a_{i}^{\pm})^{\frac{1}{2k}}(t_{i+1}-t_{i})$ and
$I_{a}=\int_{0}^{1}a^{\frac{1}{2k}}dt$. It is well known that $I^{\pm}_{a}\to
I_{a}$ as long as $\sup_{i}|t_{i}-t_{i+1}|$ goes to zero. We need a more
quantitative bound on the rate of convergence. Using results from
convergenceRiemannSums for an equispaced partition, we have that:
$|I_{a}-I^{\pm}_{a}|\leq
C^{\pm}_{a}\frac{1}{|\Pi|}=\frac{C(a,k,\pm)}{\mathrm{codim}(V_{\Pi})}$
where $C(a,k,\pm)$ is a constant that depends only on the function $a$ and on
$k$, and the inequality holds for $|\Pi|\geq n_{0}$ sufficiently large, where
$n_{0}$ depends just on $a$ and $k$.
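The $O(1/|\Pi|)$ rate for the lower and upper sums on an equispaced partition can be illustrated numerically; the integrand below is a hypothetical smooth stand-in for $a_{t}^{1/2k}$, and doubling the partition should roughly halve the error.

```python
import numpy as np

def darboux(f, n, samples=50):
    """Lower/upper Darboux sums of f on an equispaced partition of [0,1] into n cells."""
    edges = np.linspace(0.0, 1.0, n + 1)
    lo = hi = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        v = f(np.linspace(a, b, samples))
        lo += v.min() * (b - a)
        hi += v.max() * (b - a)
    return lo, hi

f = lambda t: np.sqrt(1.0 + t**2)          # smooth, positive stand-in integrand
m = (np.arange(200000) + 0.5) / 200000
I = f(m).mean()                            # accurate midpoint-rule reference integral

errs = []
for n in (10, 20, 40, 80):
    lo, hi = darboux(f, n)
    errs.append(max(I - lo, hi - I))

# error decays like 1/|Pi|: doubling the partition size shrinks the error
for e1, e2 in zip(errs, errs[1:]):
    assert e2 < 0.7 * e1
```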
Consider the right hand side of eq. 19, adding and subtracting
$\frac{I_{a}}{(\pi n)^{2k}}$, we find that for $n\geq\max\\{n_{0},k|\Pi|\\}$:
$\lambda_{n}(Q_{2k})\leq\Big{(}\frac{I_{a}}{\pi
n}\Big{)}^{2k}+\Big{(}\frac{I_{a}^{+}}{\pi(n-3|\Pi|k-p(n))}\Big{)}^{2k}-\Big{(}\frac{I_{a}}{\pi
n}\Big{)}^{2k}.$
A simple algebraic manipulation shows that there are constants $C_{1},C_{2}$
and $C_{3}$ such that the difference on the right hand side is bounded by
$\frac{C_{1}n^{2k}|\Pi|^{-1}+C_{2}(n^{2k}-|\Pi|^{2k}(n/|\Pi|-1)^{2k})}{C_{3}(n-3k|\Pi|)^{2k}n^{2k}}$
for $n\geq\max\\{3k|\Pi|,n_{1}|\Pi|,n_{0}\\}$ where $n_{1}$ is a certain
threshold independent of $|\Pi|$.
The idea now is to choose for $n$ a partition $\Pi$ of size $|\Pi|=\lfloor
n^{\delta}\rfloor$ to provide a good estimate of $\lambda_{n}(Q)$. The better
result in terms of approximation is obtained for $\delta=\frac{1}{2}$.
Heuristically this can be explained as follows: on one hand the first piece of
the error term is of order $n^{-2k-\delta}$; it comes from the convergence of the
Riemann sums and gets better as $\delta\to 1$. On the other hand the second
term comes from the estimate on the eigenvalues and gets worse and worse as
$n^{\delta}$ becomes comparable to $n$.
A perfectly analogous argument allows one to construct an error term for the
left-hand side of eq. 19, which decays as $n^{-2k-1/2}$ for $n$ sufficiently large.
We have proved so far that, for one-dimensional forms, $Q_{2k}$ has
$2k-$capacity $\xi_{+}=(\int_{0}^{1}\sqrt[2k]{a_{t}}dt)^{2k}$. Now we apply
point $i)$ of Proposition 3 to obtain the formula in the statement for forms
on $L^{2}([0,1],\mathbb{R}^{m})$. Finally notice that by Proposition 4 the
eigenvalues of $R_{k}(v)$ decay as $n^{-2k-1}$. If we apply point $iii)$ of
Proposition 3 we find that $Q_{2k}(v)+R_{k}(v)$ has the same $2k-$capacity as
$Q_{2k}$ with remainder of order $1/2$.
Now we consider the case $j=2k-1$. The idea is to reduce to the case
$j=4k-2$ as in the proof of Lemma 2 and use the symmetries of $Q_{2k-1}$ to
conclude. In the same spirit as at the beginning of the proof, let us
diagonalize the kernel $A_{2k-1}$. We thus reduce everything to the
two-dimensional case, i.e. to the quadratic forms:
$Q(v)=\int_{0}^{1}\langle v_{k}(t),\begin{pmatrix}0&-a_{t}\\\
a_{t}&0\end{pmatrix}v_{k-1}(t)\rangle dt\quad a_{t}\geq 0$ (20)
It is clear that the map $v_{0}\mapsto Ov_{0}$, where $O=\begin{pmatrix}0&1\\\
1&0\end{pmatrix}$, is an isometry of $L^{2}([0,1],\mathbb{R}^{2})$ with
$Q(Ov_{0})=-Q(v_{0})$, so the spectrum is two-sided and the asymptotics are
the same for positive and negative eigenvalues.
Now we reduce the problem to the even case. Let us consider the square of
$Q_{2k-1}$. By Proposition 4, $Q_{2k-1}$ has the same asymptotics as the form:
$\hat{Q}_{2k-1}=(-1)^{k+1}\int_{0}^{1}\langle A_{t}v_{2k-1}(t),v_{0}(t)\rangle
dt\qquad F(v_{0})(t)=(-1)^{k+1}A_{t}v_{2k-1}(t)$
So we have to study the eigenvalues of the symmetric part of $F$. It is clear
that:
$\frac{(F+F^{*})^{2}}{4}=\frac{F^{2}+FF^{*}+F^{*}F+(F^{*})^{2}}{4}$
Thus we have to deal with the quadratic form:
$\begin{split}4\tilde{Q}(v)&=\langle[2F^{2}+F^{*}F+FF^{*}](v),v\rangle\\\
&=2\langle F(v),F^{*}(v)\rangle+\langle F^{*}(v),F^{*}(v)\rangle+\langle
F(v),F(v)\rangle\end{split}$
The last term is the easiest to write, it is just:
$\langle F(v),F(v)\rangle=\int_{0}^{1}\langle-
A_{t}^{2}v_{2k-1}(t),v_{2k-1}(t)\rangle dt$
which is precisely of the form of point $i)$ and gives $\frac{1}{4}$ of the
desired asymptotic. The operator $F^{*}$ acts as follows:
$F^{*}(v)=(-1)^{k+1}\int_{0}^{t}\int_{0}^{t_{2k-1}}\dots\int_{0}^{t_{1}}A_{t_{1}}v_{0}{(t_{1})}dt_{1}\dots
dt_{2k-1}$
Using integration by parts one can single out the term $A_{t}v_{2k-1}$. To
illustrate the procedure, for $k=1$ one gets:
$\begin{split}F^{*}(v)&=A_{t}v_{1}(t)-\int_{0}^{t}\dot{A}_{\tau}v_{1}(\tau)d\tau\\\
\langle F^{*}(v),F^{*}(v)\rangle&=\int_{0}^{1}\langle-
A_{t}^{2}v_{1}(t),v_{1}(t)\rangle dt+2\int_{0}^{1}\langle
A_{t}v_{1}(t),\int_{0}^{t}\dot{A}_{\tau}v_{1}(\tau)d\tau\rangle dt+\\\
&\qquad+\int_{0}^{1}\langle\int_{0}^{t}\dot{A}_{\tau}v_{1}(\tau)d\tau,\int_{0}^{t}\dot{A}_{\tau}v_{1}(\tau)d\tau\rangle
dt\end{split}$
The other terms thus do not affect the asymptotics, since by Proposition 4 they
decay at least as $O(n^{-3})$. The proof goes along the same lines for general $k$.
The same reasoning applies to the term $\langle F(v),F^{*}(v)\rangle$. Summing
everything one gets that the leading term is $\int_{0}^{1}\langle-
A_{t}^{2}v_{2k-1}(t),v_{2k-1}(t)\rangle dt$ and so this is precisely the same
case as point $i)$. Recall that $A_{t}$ is a $2\times 2$ skew-symmetric matrix
as defined in eq. 20, thus the eigenvalues of the square coincide and are
$a_{t}^{2}$. It follows that, for $n$ sufficiently large, the eigenvalues of
$\tilde{Q}$ satisfy:
$\lambda_{n}(\tilde{Q})=\frac{\big{(}\int_{0}^{1}2\sqrt[4k-2]{a_{t}^{2}}dt\big{)}^{4k-2}}{\pi^{4k-2}n^{4k-2}}+O(n^{-(4k-2)-\frac{1}{2}})$
It is immediate to see that
$\frac{\big{(}\int_{0}^{1}2\sqrt[4k-2]{a_{t}^{2}}dt\big{)}^{4k-2}}{(\pi
n)^{4k-2}}=\frac{\big{(}\int_{0}^{1}\sqrt[2k-1]{a_{t}}dt\big{)}^{4k-2}}{(\pi
n/2)^{4k-2}}$. This mirrors the fact that the spectrum of $Q_{2k-1}$ is double
and any couple $\lambda,-\lambda$ is sent to the same eigenvalue
$\lambda^{2}$. Thus the $(2k-1)-$capacity of $Q_{2k-1}$ is
$(\int_{0}^{1}\sqrt[2k-1]{a_{t}}dt)^{2k-1}$.
Moreover, given two sequences $\\{a_{n}\\}_{n\in\mathbb{N}}$ and
$\\{b_{n}\\}_{n\in\mathbb{N}}$ with $b_{n}/a_{n}\to 0$, we have
$\sqrt{a_{n}^{2}+b_{n}^{2}}=a_{n}\sqrt{1+\frac{b_{n}^{2}}{a_{n}^{2}}}=a_{n}\big(1+O\big(\tfrac{b_{n}^{2}}{a_{n}^{2}}\big)\big)$,
so after taking square roots the remainder is still of order
$2k-1+\frac{1}{2}$.
Arguing again by point $i)$ of Proposition 3 one gets the estimate in the
statement.
The last part, about the $\infty-$capacity, follows directly from Proposition 4. If
$A_{j}\equiv 0$ for every $j$, then for any $\nu\in\mathbb{R}$, $\nu>0$, we have
$\lambda_{n}n^{\nu}\to 0$ as $n\to\pm\infty$.
## 3 Proof of Theorem 2
Proof: [Proof of Theorem 2] The proof of the first part of the statement
follows from a couple of elementary considerations. In the sequel we will use
the short-hand notation $\mathcal{A}$ for $Skew(K)$.
_Fact 1: Equation 1 holds if and only if $\mathcal{A}$ has finite rank_
Suppose that $K|_{\mathcal{V}}$ is symmetric. Consider the orthogonal
splitting of $L^{2}[0,1]$ as $\mathcal{V}\oplus\mathcal{V}^{\perp}$. Equation
1 can be reformulated as
$\mathcal{A}(\mathcal{V})\subseteq\mathcal{V}^{\perp}$, thus
$\operatorname{\mathrm{Im}}(\mathcal{A}(L^{2}[0,1]))\subseteq\mathcal{V}^{\perp}+\mathcal{A}(\mathcal{V}^{\perp})$
which is finite dimensional.
Conversely, if the range of $\mathcal{A}$ is finite dimensional, we can
decompose $L^{2}[0,1]$ as
$\operatorname{\mathrm{Im}}(\mathcal{A})\oplus\ker(\mathcal{A})$, where the
decomposition is orthogonal by skew-symmetry. Thus, on $\ker(\mathcal{A})$,
$K$ is symmetric.
_Fact 2: $\mathcal{A}$ determines the kernel of $K$_
It is well known that, if $K$ is Hilbert-Schmidt, then $K^{*}$ is Hilbert-
Schmidt too. Since we are assuming eq. 2 it is given by:
$K^{*}(v)(t)=\int_{t}^{1}V^{*}(\tau,t)v(\tau)d\tau.$
So we can write down the integral kernel $A(t,\tau)$ of $\mathcal{A}$ as
follows:
$A(t,\tau)=\begin{cases}\frac{1}{2}V(t,\tau)\text{ if }\tau<t\\\
-\frac{1}{2}V^{*}(\tau,t)\text{ if }t<\tau.\end{cases}$
The key observation now is that the support of the kernel of $K$ is disjoint
from the support of the kernel of $K^{*}$. Thus the kernel of $\mathcal{A}$
determines the kernel of $K$ (and vice versa).
Now, since we are assuming that $\mathcal{A}$ has finite dimensional image, we
can present its kernel as:
$A(t,\tau)=\frac{1}{2}Z_{t}^{*}\mathcal{A}_{0}Z_{\tau},$
where $\mathcal{A}_{0}$ is a skew-symmetric matrix and $Z_{t}$ is a
$\dim(\operatorname{\mathrm{Im}}(\mathcal{A}))\times k$ matrix that has as
rows the elements of some orthonormal basis of
$\operatorname{\mathrm{Im}}(\mathcal{A})$. Without loss of generality we can
assume $\mathcal{A}_{0}=J$. In fact, with an orthogonal change of coordinates
$\mathcal{A}_{0}$ decomposes as a direct sum of rotations with amplitudes
$\lambda_{i}$. Rescaling the coordinates by $\sqrt{\lambda_{i}}$ yields the
desired canonical form $J$.
The first part of the statement is proved, so we pass to the second one. First of
all notice that, now that we have written any operator satisfying eqs. 1
and 2 in the same form as those in eq. 3, we can apply all the results about
the asymptotics of their eigenvalues. In particular, if we assume that the
space $\operatorname{\mathrm{Im}}(\mathcal{A})\subset
L^{2}([0,1],\mathbb{R}^{k})$ is generated by piecewise analytic functions, the
ordered sequence of eigenvalues satisfies:
$\lambda_{n}=\frac{\xi}{\pi n}+O(n^{-5/3}),\quad\text{ as }n\to\pm\infty.$
Notice that we are using a better estimate on the remainder (for the case of
the $1-$capacity) than the one given in Theorem 1, which was proved in
determinant . We denote by $M^{\dagger}=\bar{M}^{*}$ the conjugate transpose.
Set $2m=\dim(\operatorname{\mathrm{Im}}(\mathcal{A}))$. Since the map
$t\mapsto Z_{t}$ is analytic, there exists a piecewise analytic family of
unitary matrices $G_{t}$ such that:
$G_{t}^{\dagger}Z_{t}^{*}JZ_{t}G_{t}=\begin{bmatrix}&i\zeta_{1}(t)\\\
&&\ddots\\\ &&&i\zeta_{l}(t)\\\ &&&&-i\zeta_{1}(t)\\\ &&&&&\ddots\\\
&&&&&&-i\zeta_{l}(t)\\\ &&&&&&&\underline{0}\end{bmatrix}$
Without loss of generality we can assume that the functions $\zeta_{i}$ are
analytic on the whole interval and everywhere non-negative. Recall that the
coefficient $\xi$ appearing in the asymptotics was computed as
$\xi=\int_{0}^{1}\zeta(t)dt=\int_{0}^{1}\sum_{i=1}^{l}\zeta_{i}(t)dt$.
Let us work on the Hilbert space $L^{2}([0,1],\mathbb{C}^{k})$ with the standard
hermitian product. Notice that $G:L^{2}([0,1],\mathbb{C}^{k})\to
L^{2}([0,1],\mathbb{C}^{k})$, $v\mapsto G_{t}v$, is an isometry, thus the
eigenvalues of $Skew(K)=\mathcal{A}$ remain the same if we consider the similar
operator $G^{-1}\circ\mathcal{A}\circ G$, which acts as follows:
$G^{-1}\circ\mathcal{A}\circ
G(v)=\frac{1}{2}{G}_{t}^{\dagger}Z_{t}^{*}J\int_{0}^{1}Z_{\tau}G_{\tau}v(\tau)d\tau$
To simplify notation let’s forget about this change of coordinates and still
call $Z_{t}$ the matrix $Z_{t}G_{t}$. Write $Z_{t}$ as:
$Z_{t}=\begin{pmatrix}y^{*}_{1}(t)\\\ \vdots\\\ y_{m}^{*}(t)\\\
x_{1}^{*}(t)\\\ \vdots\\\ x^{*}_{m}(t)\\\ \end{pmatrix}.$
We introduce the following notation: for a vector function $v_{i}$, the
quantity $(v_{i})_{j}$ stands for the $j$-th component of $v_{i}$.
We can now bound the function $\zeta(t)$ in terms of the components of the
matrix $Z_{t}$:
$\begin{split}2\zeta(t)&=\sum_{j=1}^{k}|(Z_{t}^{\dagger}JZ_{t})_{jj}|\leq\sum_{i=1}^{m}\sum_{j=1}^{k}|(x_{i})_{j}(\bar{y}_{i})_{j}-(y_{i})_{j}(\bar{x}_{i})_{j}|(t)\\\
&=\sum_{i=1}^{m}\sum_{j=1}^{k}2|\text{Im}((x_{i})_{j}(\bar{y}_{i})_{j})|\leq\sum_{i=1}^{m}\sum_{j=1}^{k}2|(x_{i})_{j}||(y_{i})_{j}|=\sum_{i=1}^{m}2\langle|x_{i}|,|y_{i}|\rangle(t)\end{split}$
where the vector $|v|$ denotes the vector whose entries are the absolute values
of the entries of $v$. Integrating and using the Hölder inequality for the
$2$-norm, we get:
$\xi=\int_{0}^{1}\zeta(t)dt\leq\sum_{i=1}^{m}||x_{i}||_{2}\,||y_{i}||_{2}.$
The next step is to relate the quantity on the right-hand side to the
eigenvalues of $\mathcal{A}$. The strategy now is to modify the matrix $Z_{t}$
in order to get an orthonormal frame of $Im(\mathcal{A})$. Keeping track of
the transformations used, we get a matrix representing $\mathcal{A}$; then it
is enough to compute the eigenvalues of said matrix.
We can assume, without loss of generality, that $\langle
x_{i},x_{j}\rangle_{L^{2}}=\delta_{ij}$. This can be achieved with a
symplectic change of the matrix $Z_{t}$. Then we modify the $y_{j}$ in order
to make them orthogonal to the space generated by the $x_{j}$. We use the
following transformation:
$\begin{pmatrix}Y_{t}\\\ X_{t}\end{pmatrix}\mapsto\begin{pmatrix}1&M\\\
0&1\end{pmatrix}\begin{pmatrix}Y_{t}\\\
X_{t}\end{pmatrix}=\begin{pmatrix}Y_{t}+MX_{t}\\\ X_{t}\end{pmatrix}$
where $M$ is defined by the relation
$\int_{0}^{1}Y_{t}X_{t}^{*}+MX_{t}X_{t}^{*}dt=\int_{0}^{1}Y_{t}X_{t}^{*}dt+M=0$.
The last step is to make the $y_{j}$ orthonormal. If we multiply $Y_{t}$ by a
matrix $L$ we find the equation $L\int_{0}^{1}Y_{t}Y_{t}^{*}dtL^{*}=1$, so
$L=(\int_{0}^{1}Y_{t}Y_{t}^{*}dt)^{-\frac{1}{2}}$. Thus the matrix
representing $\mathcal{A}$ in these coordinates is one half of:
$\mathcal{A}_{0}=\begin{pmatrix}L^{-1}&0\\\
-M^{*}&1\end{pmatrix}\begin{pmatrix}0&-1\\\
1&0\end{pmatrix}\begin{pmatrix}L^{-1}&-M\\\
0&1\end{pmatrix}=\begin{pmatrix}0&L^{-1}\\\ -L^{-1}&M^{*}-M\end{pmatrix}$
If we square $\mathcal{A}_{0}$ and compute the trace we get:
$-\frac{1}{2}\operatorname{tr}(\mathcal{A}_{0}^{2})=\operatorname{tr}(L^{-2})-\frac{1}{2}\operatorname{tr}((M^{*}-M)^{2})\geq\operatorname{tr}\left(\int_{0}^{1}Y_{t}Y_{t}^{*}dt\right)=\sum_{i=1}^{m}||y_{i}||_{2}^{2}$
Call $\Sigma(\mathcal{A})$ the spectrum of $\mathcal{A}$, since $\mathcal{A}$
is skew-symmetric it follows that:
$-\frac{1}{2}\operatorname{tr}(\mathcal{A}_{0}^{2})=4\sum_{\mu\in\Sigma(\mathcal{A}),-i\mu>0}-\mu^{2}\geq
0.$
Recalling that $||x_{i}||=1$ and putting all together we find that:
$\xi\leq\sum_{i=1}^{m}||y_{i}||_{2}\leq\sqrt{m}\sqrt{\sum_{i=1}^{m}||y_{i}||_{2}^{2}}=2\sqrt{m}\sqrt{\sum_{\mu\in\Sigma(\mathcal{A}),-i\mu>0}-\mu^{2}}.$
###### Example 1.
Consider a matrix $Z_{t}$ of the following form:
$Z_{t}=\begin{bmatrix}\xi_{1}(t)&\xi_{3}(t)\\\ 0&\xi_{2}(t)\end{bmatrix}\quad
Z_{t}^{*}JZ_{t}=\begin{bmatrix}0&-{\xi}_{1}\xi_{2}(t)\\\
{\xi}_{2}\xi_{1}(t)&0\end{bmatrix}$
The capacity of $K$ is given by $\zeta=\int_{0}^{1}|\xi_{1}\xi_{2}|(t)dt$. We
can assume that $\langle\xi_{2},\xi_{3}\rangle=0$ and $||\xi_{2}||=1$. A
direct computation shows that the eigenvalues of $Skew(K)$ are $\frac{\pm
i}{2}\sqrt{||\xi_{1}||^{2}+||\xi_{3}||^{2}}$. This shows that the two
quantities behave in very different ways. If we choose $\xi_{2}$ very close
to $\xi_{1}$ and $\xi_{3}$ small, the capacity and the eigenvalue squared are
comparable. If we choose $\xi_{3}$ very large, the capacity remains the same
whereas the eigenvalues explode. In particular, there cannot be any lower bound
on $\zeta$ in terms of the eigenvalues of $K$.
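The eigenvalue computation in the example can be cross-checked numerically: the nonzero eigenvalues of the finite-rank operator with kernel $\frac{1}{2}Z_{t}^{*}JZ_{\tau}$ coincide with those of the $2\times 2$ matrix $\frac{1}{2}JG$, $G=\int_{0}^{1}Z_{t}Z_{t}^{*}dt$ (the eigenfunction ansatz $v(t)=Z_{t}^{*}c$ shows this). The concrete $\xi_{1},\xi_{2},\xi_{3}$ below are hypothetical, picked only to satisfy $\langle\xi_{2},\xi_{3}\rangle=0$ and $||\xi_{2}||=1$.

```python
import numpy as np

N = 20000
t = (np.arange(N) + 0.5) / N                      # midpoint grid on [0,1]
xi1 = np.ones(N)                                  # ||xi1||^2 = 1
xi2 = np.sqrt(2.0) * np.sin(np.pi * t)            # ||xi2|| = 1
xi3 = 2.0 * np.sin(2.0 * np.pi * t)               # <xi2, xi3> = 0, ||xi3||^2 = 2

# G = int_0^1 Z_t Z_t^* dt for Z_t = [[xi1, xi3], [0, xi2]]
G = np.array([
    [np.mean(xi1**2 + xi3**2), np.mean(xi3 * xi2)],
    [np.mean(xi2 * xi3),       np.mean(xi2**2)],
])
J = np.array([[0.0, -1.0], [1.0, 0.0]])
eig = np.linalg.eigvals(0.5 * J @ G)              # nonzero spectrum of Skew(K)

# predicted: +- (i/2) * sqrt(||xi1||^2 + ||xi3||^2)
predicted = 0.5 * np.sqrt(np.mean(xi1**2) + np.mean(xi3**2))
assert np.allclose(np.sort(eig.imag), [-predicted, predicted], atol=1e-3)
assert np.allclose(eig.real, 0.0, atol=1e-9)
```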
###### Remark 4.
There is a natural class of transformations that preserve the capacity. Take any
path $\Phi_{t}$ of symplectic matrices (say $L^{2}$ integrable); the operators
constructed with $Z_{t}$ and $\Phi_{t}Z_{t}$ have the same capacity (but the
respective skew-symmetric parts clearly do not have the same eigenvalues).
Set
$K^{\Phi}(v)=\int_{0}^{t}Z_{t}^{*}J\Phi_{t}^{-1}\Phi_{\tau}Z_{\tau}v_{\tau}d\tau$
and $\Sigma^{+}(K^{\Phi})$ the set of eigenvalues of $Skew(K^{\Phi})$
satisfying $-i\sigma\geq 0$. It seems natural to ask if:
$\zeta(K)=2\inf_{\Phi_{t}\in
Sp(n)}\sqrt{\sum_{\sigma\in\Sigma^{+}(K^{\Phi})}-\sigma^{2}}$
Take for instance the example above and suppose for simplicity that $\xi_{1}$
and $\xi_{2}$ are positive and never vanishing. Using the following
transformation we obtain:
$Z^{\prime}_{t}=\begin{bmatrix}\sqrt{\frac{\xi_{2}}{\xi_{1}}}&\frac{-\xi_{3}}{\sqrt{\xi_{1}\xi_{2}}}\\\
0&\sqrt{\frac{\xi_{1}}{\xi_{2}}}\end{bmatrix}\begin{bmatrix}\xi_{1}&\xi_{3}\\\
0&\xi_{2}\end{bmatrix}=\begin{bmatrix}\sqrt{\xi_{1}\xi_{2}}&0\\\
0&\sqrt{\xi_{1}\xi_{2}}\end{bmatrix}$
and in this case the eigenvalues become $\frac{\pm
i}{2}\langle\xi_{1},\xi_{2}\rangle$, precisely half the capacity.
## 4 The second variation of an optimal control problem
We start this section by collecting some basic facts about optimal control
problems and the first and second variations. Standard references on the topic
are determinant , bookcontrol , bookSubriemannian , bookJean and
symplecticMethods .
### 4.1 Symplectic geometry and optimal control problems
Consider a smooth manifold $M$, its cotangent bundle $T^{*}M$ is a vector
bundle on $M$ whose fibre at a point $q$ is the vector space of linear
functions on $T_{q}M$, the tangent space of $M$ at $q$.
Let $\pi$ be the natural projection, $\pi:T^{*}M\to M$ which takes a covector
and gives back the base point:
$\pi:T^{*}M\to M,\quad\pi(\lambda_{q})=q.$
Using the projection map we define the following $1-$form, called the
tautological (or Liouville) form: take an element $X\in T_{\lambda}(T^{*}M)$
and set $s_{\lambda}(X)=\lambda(\pi_{*}X)$. One can check in local coordinates
that $\sigma=ds$ is non-degenerate. We thus obtain a symplectic manifold
$(T^{*}M,\sigma)$.
Using the symplectic form we can associate to any function on $T^{*}M$ a
vector field. Suppose that $H$ is a smooth function on $T^{*}M$, we define
$\vec{H}$ setting:
$\sigma(X,\vec{H}_{\lambda})=d_{\lambda}H(X),\quad\forall X\in
T_{\lambda}(T^{*}M)$
$H$ is called a Hamiltonian function and $\vec{H}$ a Hamiltonian vector
field.
On $T^{*}M$ we have a particular instance of this construction which can be
used to lift arbitrary flows on the base manifold $M$ to Hamiltonian flows on
$T^{*}M$. For any vector field $V$ on $M$ consider the following function:
$h_{V}(\lambda)=\langle\lambda,V\rangle,\quad\lambda\in T^{*}M.$
It is straightforward to check in local coordinates that
$\pi_{*}\vec{h}_{V}=V$.
The next objects we are going to introduce are Lagrangian subspaces. We say
that a subspace $W$ of a symplectic vector space $(\Sigma,\sigma)$ is
Lagrangian if it coincides with its symplectic orthogonal,
i.e. if $\\{v\in\Sigma:\sigma(v,w)=0,\,\forall\,w\in W\\}=W$. An example of a
Lagrangian subspace is the fibre, i.e. the kernel of $\pi_{*}$. More
generally we can consider the following submanifolds in $T^{*}M$:
$A(N)=\\{\lambda\in T^{*}M:\lambda(X)=0,\,\forall\,X\in TN,\pi(\lambda)\in
N\\}$
where $N\subset M$ is a submanifold. $A(N)$ is called the annihilator of $N$
and its tangent space at any point is a Lagrangian subspace.
Suppose we are given a family of complete and smooth vector fields $f_{u}$
which depend on a parameter $u\in U\subset\mathbb{R}^{k}$, and a Lagrangian,
i.e. a smooth function $\varphi(u,q)$ on $U\times M$. We use the vector fields
$f_{u}$ to produce a family of curves on $M$. For any function $u\in
L^{\infty}([0,1],U)$ we consider the following non-autonomous ODE system on
$M$:
$\dot{q}=f_{u(t)}(q),\quad q(0)=q_{0}\in M$ (21)
The solutions are always Lipschitz curves. For fixed $q_{0}$, the set of
functions $u\in L^{\infty}([0,1],U)$ for which said curves are defined up to
time $1$ is an open set which we call $\mathcal{U}_{q_{0}}$. We can let the
base point $q_{0}$ vary and consider $\mathcal{U}=\cup_{q_{0}\in
M}\mathcal{U}_{q_{0}}$. It turns out that this set has a structure of a Banach
manifold (see beschastnyi_morse ). We call the $L^{\infty}$ functions obtained
this way _admissible controls_ and the corresponding trajectories on $M$
_admissible curves_.
Denote by $\gamma_{u}$ the admissible curve obtained from an admissible
control $u$. We are interested in the following minimization problem on the
space of _admissible_ controls:
$\min_{u\text{ admissible}}\mathcal{J}(u)=\min_{u\text{
admissible}}\int_{0}^{1}\varphi(u(t),\gamma_{u}(t))dt$ (22)
We often reduce the space of admissible variations by imposing additional
constraints on the initial and final position of the trajectory. For example
one can consider trajectories that start and end at two fixed points
$q_{0},q_{1}\in M$, or trajectories that start from a submanifold $N_{0}$ and
reach a second submanifold $N_{1}$. More generally we can ask that the curves
satisfy $(\gamma(0),\gamma(1))\in N\subseteq M\times M$.
We often consider the following family of functions on $T^{*}M$:
$h_{u}:T^{*}M\to\mathbb{R},\quad
h_{u}(\lambda)=\langle\lambda,f_{u}\rangle+\nu\varphi(u,\pi(\lambda)).$
We use them to lift vector fields on $M$ to vector fields on $T^{*}M$. They
are closely related to the functions defined above and still satisfy
$\pi_{*}(\vec{h}_{u})=f_{u}$.
In particular, if $\tilde{\gamma}$ is an admissible curve, we can build a
lift, i.e. a curve $\tilde{\lambda}$ in $T^{*}M$ such that
$\pi(\tilde{\lambda})=\tilde{\gamma}$, by solving
$\dot{\lambda}=\vec{h}_{u}(\lambda)$. The following theorem, known as the
Pontryagin Maximum Principle, gives a characterization of critical points of
$\mathcal{J}$, for any set of boundary conditions.
###### Theorem (PMP).
If a control $\tilde{u}\in L^{\infty}([0,1],U)$ is a critical point for the
functional in eq. 22, there exist a curve $\lambda:[0,1]\to T^{*}M$ and an
admissible curve $q:[0,1]\to M$ such that for almost all $t\in[0,1]$
1. 1.
$\lambda(t)$ is a lift of $q(t)$:
$q(t)=\pi(\lambda(t))$;
2. 2.
$\lambda(t)$ satisfies the following Hamiltonian system:
$\frac{d\lambda}{dt}=\vec{h}_{\tilde{u}(t)}(\lambda)$;
3. 3.
the control $\tilde{u}$ is determined by the maximum condition:
$h_{\tilde{u}(t)}(\lambda(t))=\max_{u\in U}h_{u}(\lambda(t)),\quad\nu\leq
0$;
4. 4.
the non-triviality condition holds: $(\lambda(t),\nu)\neq(0,0)$;
5. 5.
the transversality condition holds:
$(-\lambda(0),\lambda(1))\in A(N).$
We call $q(t)$ an extremal curve (or trajectory) and $\lambda(t)$ an extremal.
There are essentially two possibilities for the parameter $\nu$: it can be
either $0$ or, after an appropriate normalization of $\lambda_{t}$, $-1$. The
extremals belonging to the first family are called _abnormal_ whereas the ones
belonging to the second are called _normal_.
### 4.2 The Endpoint map and its differentiation
We will now consider in detail the minimization problem in eq. 22 with fixed
endpoints.
As in the previous section, we denote by $\mathcal{U}_{q_{0}}\subset
L^{\infty}([0,1],U)$ the space of admissible controls at the point $q_{0}$ and
define the following map:
$E^{t}:\mathcal{U}_{q_{0}}\to M,\quad u\mapsto\gamma_{u}(t)$
It takes the control $u$ and gives the position at time $t$ of the solution of
eq. 21 starting from $q_{0}$. We call this map the _Endpoint map_. It turns out
that $E^{t}$ is smooth; we are now going to compute its differential and
Hessian. The proofs of these facts can be found in the book bookcontrol or in
ASZ .
For a fixed control $\tilde{u}$, consider the function
$h_{\tilde{u}}(\lambda)=h_{\tilde{u}(t)}(\lambda)$ and define the following
non-autonomous flow, which plays the role of parallel transport in this
context:
$\frac{d}{dt}\tilde{\Phi}_{t}=\vec{h}_{\tilde{u}}(\tilde{\Phi}_{t})\qquad\tilde{\Phi}_{0}=Id$
(23)
It has the following properties:
* _i)_
It extends to the cotangent bundle the flow which solves
$\dot{q}=f^{t}_{\tilde{u}}(q)$ on the base. In particular if $\lambda_{t}$ is
an extremal with initial condition $\lambda_{0}$,
$\pi(\tilde{\Phi}_{t}(\lambda_{0}))=q_{\tilde{u}}(t)$ where $q_{\tilde{u}}$ is
an extremal trajectory.
* _ii)_
$\tilde{\Phi}_{t}$ preserves the fibre over each $q\in M$. The restriction
$\tilde{\Phi}_{t}:\,T^{*}_{q}M\to T^{*}_{\tilde{\Phi}_{t}(q)}M$ is an affine
transformation.
We suppose now that $\lambda(t)$ is an extremal and $\tilde{u}$ a critical
point of the functional $\mathcal{J}$. We use the symplectomorphism
$\tilde{\Phi}_{t}$ to pull back the whole curve $\lambda(t)$ to the starting
point $\lambda_{0}$. We can express all the first and second order information
about the extremal using the following map and its derivatives:
$b_{u}^{t}(\lambda)=(h_{u}^{t}-h_{\tilde{u}}^{t})\circ\tilde{\Phi}_{t}(\lambda)$
Notice that:
* •
$b_{u}^{t}(\lambda_{0})|_{u=\tilde{u}(t)}=0=d_{\lambda_{0}}\,b_{u}^{t}|_{u=\tilde{u}(t)}$
by definition.
* •
$\partial_{u}b_{u}^{t}|_{u=\tilde{u}(t)}=\partial_{u}(h_{u}^{t}\circ\tilde{\Phi}_{t})|_{u=\tilde{u}(t)}=0$
since $\lambda(t)$ is an extremal and $\tilde{u}$ the relative control.
Thus the first non-zero derivatives are those of order two. We define the
following maps:
$\begin{split}Z_{t}=\partial_{u}\vec{b}_{u}^{t}(\lambda_{0})|_{u=\tilde{u}(t)}:\mathbb{R}^{k}=T_{\tilde{u}(t)}U\to
T_{\lambda_{0}}(T^{*}M)\\\
H_{t}=\partial_{u}^{2}b_{t}(\lambda_{0})|_{u=\tilde{u}(t)}:\mathbb{R}^{k}=T_{\tilde{u}(t)}U\to
T^{*}_{\tilde{u}(t)}U=\mathbb{R}^{k}\end{split}$ (24)
We denote by $\Pi=\ker\pi_{*}$ the kernel of the differential of the natural
projection $\pi:T^{*}M\to M$.
###### Proposition 5 (Differential of the endpoint map).
Consider the endpoint map $E^{t}:\mathcal{U}_{q_{0}}\to M$. Fix a point
$\tilde{u}$ and consider the symplectomorphism $\tilde{\Phi}_{t}$ and the map
$Z_{t}$ defined above. The differential is the following map:
$d_{\tilde{u}}E(v_{t})=d_{\lambda(t)}\pi\circ
d_{\lambda_{0}}\tilde{\Phi}_{t}(\int_{0}^{t}Z_{\tau}v_{\tau}d\tau)\in
T_{q_{t}}M$
In particular, if we identify $T_{\lambda_{0}}(T^{*}M)$ with $\mathbb{R}^{2m}$
and write $Z_{t}=\begin{pmatrix}Y_{t}\\ X_{t}\end{pmatrix}$, $\tilde{u}$ is a
regular point if and only if $v_{t}\mapsto\int_{0}^{t}X_{\tau}v_{\tau}d\tau$
is surjective. Equivalently, if and only if the following matrix is invertible:
$\Gamma_{t}=\int_{0}^{t}X_{\tau}X^{*}_{\tau}d\tau\in \mathrm{Mat}_{n\times
n}(\mathbb{R}),\quad\det(\Gamma_{t})\neq 0$
If $d_{\tilde{u}}E^{t}$ is surjective then $(E^{t})^{-1}(q_{t})$ is smooth in
a neighbourhood of $\tilde{u}$ and its tangent space is given by:
$\begin{split}T_{\tilde{u}}(E^{t})^{-1}(q_{t})=\{v\in
L^{\infty}([0,1],\mathbb{R}^{k}):\,\int_{0}^{t}X_{\tau}v_{\tau}d\tau=0\}\\
=\{v\in
L^{\infty}([0,1],\mathbb{R}^{k}):\,\int_{0}^{t}Z_{\tau}v_{\tau}d\tau\in\Pi\}\end{split}$
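As a numerical sanity check of the surjectivity criterion, one can discretize the integral: the map $v\mapsto\int_{0}^{t}X_{\tau}v_{\tau}d\tau$ is surjective exactly when the Gramian $\Gamma_{t}$ is invertible. A NumPy sketch with illustrative random data (the dimensions and matrices are stand-ins, not tied to a specific control system):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, N = 3, 2, 200                       # state dim, control dim, time steps
dt = 1.0 / N
X = rng.standard_normal((N, n, k))        # samples of X_tau on a uniform grid

# Discretized Gramian Gamma_t = int_0^t X_tau X_tau^* d tau  (here t = 1)
Gamma = sum(Xt @ Xt.T for Xt in X) * dt

# Surjectivity of the discretized map v -> int X_tau v_tau d tau equals full row
# rank of the horizontally stacked blocks, which equals invertibility of Gamma.
stacked = np.hstack(list(X))              # n x (N*k)
assert np.linalg.matrix_rank(stacked) == n
assert abs(np.linalg.det(Gamma)) > 1e-12
```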
When the differential of the endpoint map is surjective a good geometric
description of the situation is possible. The set of admissible controls
becomes smooth (at least locally) and our minimization problem can be
interpreted as a constrained optimization problem. We are looking for critical
points of $\mathcal{J}$ on the submanifold
$\{u\in\mathcal{U}:E^{t}(u)=q_{1}\}$.
###### Definition 2.
We say that a normal extremal $\lambda(t)$ with associated control
$\tilde{u}(t)$ is strictly normal if the differential of the endpoint map at
$\tilde{u}$ is surjective.
It makes sense to go on and consider higher order optimality conditions. At
critical points the Hessian of $\mathcal{J}$ (the _second variation_) is well
defined, i.e. independent of coordinates. Using chronological calculus
(see again [7] or [1]) it is possible to write the second variation
of $\mathcal{J}$ on $\ker dE^{t}\subseteq L^{\infty}([0,1],\mathbb{R}^{k})$.
###### Proposition 6 (Second variation).
Suppose that $(\lambda(t),\tilde{u})$ is a strictly normal critical point of
$\mathcal{J}$ with fixed initial and final point. For any $u\in
L^{\infty}([0,1],\mathbb{R}^{k})$ such that $\int_{0}^{1}X_{t}u_{t}dt=0$ the
second variation of $\mathcal{J}$ has the following expression:
$d^{2}_{\tilde{u}}\mathcal{J}(u)=-\int_{0}^{1}\langle H_{t}u_{t},u_{t}\rangle
dt-\int_{0}^{1}\int_{0}^{t}\sigma(Z_{\tau}u_{\tau},Z_{t}u_{t})d\tau dt$
The associated bilinear form is symmetric provided that $u,v$ lie in a
subspace that projects to a Lagrangian one via the map
$u\mapsto\int_{0}^{1}Z_{t}u_{t}dt$.
$d^{2}_{\tilde{u}}\mathcal{J}(u,v)=-\int_{0}^{1}\langle
H_{t}u_{t},v_{t}\rangle
dt-\int_{0}^{1}\int_{0}^{t}\sigma(Z_{\tau}u_{\tau},Z_{t}v_{t})d\tau dt$
One often makes the assumption, customarily called the _strong Legendre
condition_, that the matrix $H_{t}$ is strictly negative definite with
uniformly bounded inverse. This guarantees that the term:
$\int_{0}^{1}-\langle H_{t}u_{t},v_{t}\rangle dt$
is equivalent to the $L^{2}$ scalar product.
###### Definition 3.
Suppose that the set $U\subset\mathbb{R}^{k}$ is open. We say that
$(\lambda(t),\tilde{u})$ is a _regular_ critical point if the strong Legendre
condition holds along the extremal. If $H_{t}\leq 0$ but
$(\lambda(t),\tilde{u})$ does not satisfy the strong Legendre condition we say
that $(\lambda(t),\tilde{u})$ is _singular_. If $H_{t}\equiv 0$ we say that it
is _totally singular_.
Even if the extremal $(\lambda(t),\tilde{u})$ is abnormal or not strictly
normal it is possible to produce a second variation for the optimal control
problem. To do so one considers the extended control system:
$\hat{f}_{(v,u)}(q)=\begin{pmatrix}\varphi(u,q)+v\\
f_{u}(q)\end{pmatrix}\in\mathbb{R}\times T_{q}M$
and the corresponding endpoint map
$\hat{E}^{t}:(0,+\infty)\times\mathcal{U}_{q_{0}}\to\mathbb{R}\times M$. To
differentiate it we use the same construction explained above and employ the
following Hamiltonians on $\mathbb{R}^{*}\times T^{*}M$:
$\hat{h}_{(v,u)}(\nu,\lambda)=\langle\lambda,f_{u}\rangle+\nu(\varphi(u,q)+v)$
One just has to identify the right controls to consider: PMP implies
that $\dot{\nu}=0$, $\nu\leq 0$ and $v=0$. In the end one obtains an
expression formally identical to that of Proposition 6, involving the
derivatives of the functions $\hat{h}_{(v,u)}$, which reduces to the
expression of Proposition 6 for strictly normal extremals (see [7, Chapter 20]
or [8]).
### 4.3 Reformulation of the main results
In this section we reformulate Theorem 2 as a characterization of the compact
part of the second variation of an optimal control problem at a strictly
normal regular extremal (see definitions 2 and 3).
###### Theorem 3.
Suppose $\mathcal{V}\subset L^{2}([0,1],\mathbb{R}^{k})$ is a finite
codimension subspace and $K$ an operator satisfying eqs. 1 and 2. Then
$(K,\mathcal{V})$ can be realized as the second variation of an optimal
control problem at a strictly normal regular extremal. To any such couple we
can associate a triple $((\Sigma,\sigma),\Pi,Z)$ consisting of:
* •
a finite dimensional symplectic space $(\Sigma,\sigma)$;
* •
a Lagrangian subspace $\Pi\subset\Sigma$;
* •
a linear map $Z:L^{2}([0,1],\mathbb{R}^{k})\to\Sigma$ such that
$\operatorname{\mathrm{Im}}(Z)$ is transversal to the subspace $\Pi$.
This triple is unique up to the action of
$\mathrm{stab}_{\Pi}(\Sigma,\sigma)$, the group of symplectic transformations
that fix $\Pi$. Any other triple is given by $((\Sigma,\sigma),\Pi,\Phi\circ
Z)$ for $\Phi\in\mathrm{stab}_{\Pi}(\Sigma,\sigma)$.
Vice versa any triple $((\Sigma,\sigma),\Pi,Z)$ as above determines a couple
$(K,\mathcal{V})$. We can define the skew-symmetric part $\mathcal{A}$ of $K$
as:
$\langle\mathcal{A}u,v\rangle=\sigma(Zu,Zv),\,\forall u,v\in
L^{2}([0,1],\mathbb{R}^{k}),$
$\mathcal{A}$ determines the whole operator $K$ and its domain is recovered as
$\mathcal{V}=Z^{-1}(\Pi)$.
Proof: The proof is essentially a reformulation of Theorem 2. Given the
operator we construct the symplectic space $(\Sigma,\sigma)$ by taking as
vector space the image of the skew-symmetric part
$\operatorname{\mathrm{Im}}(\mathcal{A})$ and as symplectic form
$\langle\mathcal{A}\cdot,\cdot\rangle$.
The transversality condition corresponds to the fact that the differential of
the endpoint map is surjective.
The only thing left to show is uniqueness of the triple. Without loss of
generality we can assume that the symplectic space
$(\Sigma,\sigma)=(\mathbb{R}^{2n},\sigma)$ is the standard one and that the
Lagrangian subspace $\Pi$ is the vertical subspace. In these coordinates
$Z(v)=\int_{0}^{1}Z_{t}v_{t}dt=\int_{0}^{1}\begin{pmatrix}Y_{t}\\
X_{t}\end{pmatrix}v_{t}dt.$
Define the following map:
$F:L^{2}([0,1],\mathrm{Mat}_{n\times k}(\mathbb{R}))\to
L^{2}([0,1]^{2},\mathrm{Mat}_{k\times k}(\mathbb{R})),\quad Y_{t}\mapsto
Z_{t}^{*}JZ_{\tau}=X_{t}^{*}Y_{\tau}-Y_{t}^{*}X_{\tau}.$
It is linear if $X_{t}$ is fixed. To determine uniqueness we have to study an
affine equation, thus it is sufficient to study the kernel of $F$. Suppose for
simplicity that $X_{t}$ and $Y_{t}$ are continuous in $t$. We have to solve
the equation:
$F(Y_{t})=Z_{t}^{*}JZ_{\tau}=\sigma(Z_{t},Z_{\tau})=0.$
Consider the following subspace of $\mathbb{R}^{2n}$:
$V^{[0,1]}=\Big\{\sum_{i=1}^{l}Z_{t_{i}}\nu_{i}:\,\nu_{i}\in\mathbb{R}^{k},t_{i}\in[0,1],l\in\mathbb{N}\Big\}\subset\mathbb{R}^{2n}$
It follows that $F(Y_{t})=0$ if and only if the subspace $V^{[0,1]}$ is
isotropic. Since we are in finite dimension, we can restrict to a finite
number of instants $t_{i}$ that generate the whole $V^{[0,1]}$. Call $I$ the
set of these instants. Without loss of generality we can assume that
$\{\sum_{i\in I}X_{t_{i}}\nu_{i}:\nu_{i}\in\mathbb{R}^{k},t_{i}\in
I\}=\mathbb{R}^{n}$. This is so since the image of $Z$ is transversal to $\Pi$
and thus $\Gamma=\int_{0}^{1}X_{t}X_{t}^{*}dt$ is non degenerate. In fact, if
the subspace
$\{\sum_{i=1}^{l}X_{t_{i}}\nu_{i}:\,\nu_{i}\in\mathbb{R}^{k},l\in\mathbb{N}\}$
were a proper subspace of $\mathbb{R}^{n}$, there would be a vector $\mu$ such
that $\langle\mu,X_{t}\nu\rangle=0$ for all $t\in[0,1]$ and all
$\nu\in\mathbb{R}^{k}$, i.e. an element of the kernel of $\Gamma$, a
contradiction.
Now we evaluate the equation $F(Y_{t})=0\iff
Y_{t}^{*}X_{\tau}=X_{t}^{*}Y_{\tau}$ at the instants $t=t_{i}$ that guarantee
controllability. One can read off the following identities:
$Y_{t}^{*}v_{j}=X_{t}^{*}c_{j}$
where the $v_{j}$'s are a basis of $\mathbb{R}^{n}$ and the $c_{j}$ free
parameters. Taking transposes we get that $Y_{t}=GX_{t}$.
It is straightforward to check that, if $Y_{t}=GX_{t}$, $G$ must be symmetric;
in fact:
$Z_{t}^{*}JZ_{\tau}=Y_{t}^{*}X_{\tau}-X_{t}^{*}Y_{\tau}=X_{t}^{*}(G^{*}-G)X_{\tau}=0\iff
G=G^{*}$
And so uniqueness is proved when $X_{t}$ and $Y_{t}$ are continuous.
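The computation above is easy to verify numerically: for a symmetric $G$ and $Y_{t}=GX_{t}$, the quantity $X_{t}^{*}Y_{\tau}-Y_{t}^{*}X_{\tau}$ vanishes for every pair of instants. A minimal sketch with random illustrative matrices (all data hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, N = 3, 2, 50
X = rng.standard_normal((N, n, k))        # samples of X_t
G = rng.standard_normal((n, n))
G = G + G.T                               # symmetric G
Y = np.array([G @ Xt for Xt in X])        # Y_t = G X_t

def F(t, tau):
    # Z_t^* J Z_tau = X_t^* Y_tau - Y_t^* X_tau
    return X[t].T @ Y[tau] - Y[t].T @ X[tau]

# X_t^*(G - G^*) X_tau = 0 for symmetric G, at every pair of instants
assert max(np.abs(F(t, tau)).max() for t in range(N) for tau in range(N)) < 1e-10
```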
The case in which $X_{t}$ and $Y_{t}$ are just $L^{2}$ (matrix-)functions can
be dealt with similarly. One has just to replace _evaluations_ with integrals
of the form $\int_{t-\epsilon}^{t+\epsilon}Z_{\tau}\nu d\tau$ and
$\int_{t-\epsilon}^{t+\epsilon}X_{\tau}\nu d\tau$ and interpret every equality
as holding for almost every $t$.
The only thing left to show is how to construct a control system with a given
$(K,\mathcal{V})$ as second variation. By the equivalence stated above it is
enough to show that we can realize any given map
$Z:L^{2}([0,1],\mathbb{R}^{k})\to\Sigma$ with a proper control system. We can
assume without loss of generality that $(\Sigma,\sigma)$ is just
$\mathbb{R}^{2m}$ with the standard symplectic form and $\Pi$ is the vertical
subspace. With these choices the map $Z$ is given by:
$v\mapsto\int_{0}^{1}Z_{t}v_{t}dt=\int_{0}^{1}\begin{pmatrix}Y_{t}v_{t}\\
X_{t}v_{t}\end{pmatrix}dt$
The operator $K$ is then given by
$K(v)=\int_{0}^{t}Z_{t}^{*}JZ_{\tau}v_{\tau}d\tau$ and
$\mathcal{V}=\{v:\int_{0}^{1}X_{t}v_{t}dt=0\}$. Consider the following
linear quadratic system on $\mathbb{R}^{m}$:
$f_{u}(q)=B_{t}u,\quad\varphi_{t}(u,x)=\frac{1}{2}|u|^{2}+\langle\Omega_{t}u,x\rangle,$
where $B_{t}$ and $\Omega_{t}$ are matrices of size $m\times k$. The
Hamiltonian in PMP reads:
$h_{u}(\lambda,x)=\langle\lambda,B_{t}u\rangle-\frac{1}{2}|u|^{2}-\langle\Omega_{t}u,x\rangle$
Take as extremal control $u_{t}\equiv 0$; it is easy to check that the re-
parametrization flow $\tilde{\Phi}_{t}$ defined in eq. 23 is just the identity
and the matrix $Z_{t}$ for this problem is the following:
$Z_{t}=\begin{pmatrix}\Omega_{t}\\\ B_{t}\end{pmatrix}$
So it is enough to take $\Omega_{t}=Y_{t}$ and $B_{t}=X_{t}$.
We can also reformulate the second part of Theorem 2, relating the capacity of
$K$ to the eigenvalues of $\mathcal{A}$. We make the following assumptions:
1. the map $t\mapsto Z_{t}$ is piecewise analytic in $t$;
2. the maximum condition in the statement of PMP defines a $C^{2}$ function
$\hat{H}_{t}(\lambda)=\max_{u\in\mathbb{R}^{k}}h^{t}_{u}(\lambda)$ in a
neighbourhood of the strictly normal regular extremal we are considering.
Under the above assumptions the following proposition clarifies the link
between the matrices $Z_{t}$ and $H_{t}$ and the function $\hat{H}_{t}$. A
proof can be found either in [7, Proposition 21.3] or in [1].
###### Proposition 7.
Suppose that $(\lambda(t),\tilde{u})$ is an extremal and the function
$\hat{H}_{t}$ is $C^{2}$. Using the flow defined in eq. 23, define
$\mathcal{H}_{t}(\lambda)=(\hat{H}_{t}-h_{\tilde{u}(t)})\circ\tilde{\Phi}_{t}(\lambda)$.
It holds that:
$\text{Hess}_{\lambda_{0}}(\mathcal{H}_{t})=JZ_{t}H_{t}^{-1}Z_{t}^{*}J$
Define $R_{t}=\max_{v\in\mathbb{R}^{k},||v||=1}||Z_{t}v||$ and let $\{\pm
i\zeta_{j}(t)\}_{j=1}^{l}$ be the eigenvalues of $iZ_{t}^{*}JZ_{t}$ as
defined in Section 3. We have the following proposition.
###### Proposition 8.
The capacity $\xi$ of $K$ satisfies:
$\xi\leq\frac{\sqrt{k}\,||R_{t}||_{2}}{2}\sqrt{\int_{0}^{1}\operatorname{tr}(\text{Hess}_{\lambda_{0}}(\mathcal{H}_{t}))dt}$
and in particular, if we order the functions $\zeta_{j}(t)$ decreasingly, they
satisfy
$0\leq\zeta_{j}(t)\leq R_{t}\sqrt{\lambda_{2j}(t)},\quad j\in\{1,\dots,l\}$
where $\lambda_{j}(t)$ are the eigenvalues of
$\text{Hess}_{\lambda_{0}}(\mathcal{H}_{t})$ in decreasing order.
Proof: We give a sketch of the proof. Without loss of generality we can assume
$H_{t}=-\mathrm{Id}$; otherwise we can perform the change of coordinates on
$L^{2}([0,1],\mathbb{R}^{k})$ given by $v\mapsto(-H_{t})^{-\frac{1}{2}}v$ and
redefine $Z_{t}$ accordingly.
In this notation $\text{Hess}_{\lambda_{0}}(\mathcal{H}_{t})$ corresponds to
the matrix $JZ_{t}Z_{t}^{*}J$. If we square $A_{t}=Z_{t}^{*}JZ_{t}$ we obtain:
$A_{t}^{*}A_{t}=-Z_{t}^{*}JZ_{t}Z_{t}^{*}JZ_{t}=-Z_{t}^{*}\big{(}JZ_{t}Z_{t}^{*}J\big{)}Z_{t}=-Z_{t}^{*}\text{Hess}_{\lambda_{0}}(\mathcal{H}_{t})Z_{t}$
Observe that $\zeta_{j}(t)$ is an eigenvalue of $A_{t}$ if and only if
$-\zeta_{j}^{2}(t)$ is an eigenvalue of $A^{*}_{t}A_{t}$. The equation above
relates the _restriction_ of $\text{Hess}_{\lambda_{0}}(\mathcal{H}_{t})$ to
the image of the map $Z_{t}:\mathbb{R}^{k}\to\mathbb{R}^{2n}$ with the squares
of the functions $\zeta_{j}(t)$ defining the capacity.
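This relation can be verified numerically at a single instant $t$, with $H_{t}=-\mathrm{Id}$ as in the proof and a random illustrative $Z_{t}$ (a sketch, not part of the argument):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 3, 2
# Standard symplectic matrix J on R^{2n}
J = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])
Z = rng.standard_normal((2 * n, k))       # illustrative Z_t at a fixed instant

A = Z.T @ J @ Z                            # A_t = Z_t^* J Z_t (skew-symmetric)
Hess = J @ Z @ Z.T @ J                     # corresponds to Hess_{lambda_0}(H_t)

# A_t^* A_t = -Z_t^* Hess Z_t, and its eigenvalues are the zeta_j(t)^2 >= 0
assert np.allclose(A.T @ A, -Z.T @ Hess @ Z)
assert np.all(np.linalg.eigvalsh(A.T @ A) >= -1e-12)
```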
The idea is to use the Cauchy interlacing inequality for the eigenvalues of
$\text{Hess}_{\lambda_{0}}(\mathcal{H}_{t})$ and its restriction to a
codimension $2n-k$ subspace. If $\{\lambda_{j}(t)\}_{j=1}^{2n}$ are the
eigenvalues of the Hessian, taken in decreasing order, and
$\{\mu_{j}(t)\}_{j=1}^{2n-k}$ the eigenvalues of its restriction, we have:
$\lambda_{j+2n-k}(t)\leq\mu_{j}(t)\leq\lambda_{j}(t)$
In our case the $Z_{t}$ are not orthogonal projectors, but we can adjust the
estimates by considering how much the matrices $Z_{t}$ dilate the space, and
thus we have to take into account the function $R_{t}$ defined just before the
statement. Denote by $\mu_{j}(t)$ the $j$-th eigenvalue of $-A_{t}^{2}$;
putting everything together we have:
$0\leq\mu_{j}(t)\leq R_{t}^{2}\,\lambda_{2j}(t)\quad j\in\{1,\dots,k\}$
where we shifted the index by one since $\mu_{2k-1}(t)=\mu_{2k}(t)$ for all
$k\leq l$. Taking square roots and integrating we have:
$\int_{0}^{1}\zeta_{j}(t)dt\leq\int_{0}^{1}R_{t}\sqrt{\lambda_{2j}(t)}dt$
Summing up over $j$ we find that:
$\xi=\int_{0}^{1}\sum_{j}\zeta_{j}(t)dt\leq\frac{1}{2}\int_{0}^{1}\sum_{j}R_{t}\sqrt{\lambda_{2j}(t)}dt\leq\frac{\sqrt{k}\,||R_{t}||_{2}}{2}\sqrt{\int_{0}^{1}\operatorname{tr}(\text{Hess}_{\lambda_{0}}(\mathcal{H}_{t}))dt}$
We now turn to Theorem 1, which can be interpreted as a quantitative version
of various necessary optimality conditions that one can formulate for certain
classes of singular extremals (see [7, Chapter 20] or [4, Chapter 12]).
Moreover, leaving optimality conditions aside, Theorem 1 gives the asymptotic
distribution of the eigenvalues of the second variation for totally singular
extremals (see Definition 3).
As mentioned in the previous section, we can produce a second variation also
in the non strictly normal case, which is at least formally very similar to
the normal one. However, a common occurrence is that the matrix $H_{t}$
degenerates completely and is constantly equal to the zero matrix. This is the
case for affine control systems and abnormal extremals in sub-Riemannian
geometry, i.e. systems of the form:
$f_{u}=\sum_{i=1}^{l}f_{i}u_{i}+f_{0},\quad f_{i}\text{ smooth vector fields}$
In this case the Legendre condition $H_{t}\leq 0$ (see the previous section)
does not give much information. One then looks for _higher_ order optimality
conditions. This is usually done exactly as in Lemma 1: the first optimality
conditions one finds are the _Goh condition_ and the _generalized Legendre
condition_, which prevent the second variation from being _strongly
indefinite_.
In the notation of Lemma 1 the Goh condition is written as $Q_{1}\equiv 0$,
i.e. $Z_{t}^{*}JZ_{t}\equiv 0$. It can be reformulated in geometric terms as
follows: if $\lambda_{t}$ is the extremal then
$\lambda_{t}[\partial_{u}f_{u}(q(t))v_{1},\partial_{u}f_{u}(q(t))v_{2}]=0,\,\forall\,v_{1},v_{2}\in\mathbb{R}^{k}$
From Theorem 1 it is clear that if $Q_{1}\not\equiv 0$, the second variation
has infinite negative index and the eigenvalues distribute evenly between the
negative and positive parts of the spectrum. Then one asks that the second
term $Q_{2}$ is non positive definite (recall the different sign convention in
Proposition 6), otherwise the negative part of the spectrum of $-Q_{2}$
becomes infinite. In our notation this condition reads
$(Z_{t}^{(1)})^{*}JZ_{t}\leq 0\iff\sigma(Z_{t}^{(1)}v,Z_{t}v)\leq
0,\,\forall\,v\in\mathbb{R}^{k}.$
Again, it can be translated into a differential condition along the extremal;
however, this time it will in general involve more than just commutators if
the system is not control affine.
If $Q_{2}\equiv 0$, one can take more derivatives and find new conditions. In
particular, using the notation of Lemma 1, one always has to ask that the
first non-zero term in the expansion is of even order and that the matrix of
its coefficients is non positive in order to have finite negative index.
## Acknowledgements
The author wishes to thank Prof. A. Agrachev for the stimulating discussions
on the topic and the referee for the helpful suggestions which greatly
improved the exposition.
## References
* [1] A. Agrachev, G. Stefani, and P. Zezza. An invariant second variation in optimal control. Internat. J. Control, 71(5):689–715, 1998.
* [2] A. A. Agrachëv. Quadratic mappings in geometric control theory. In Problems in geometry, Vol. 20 (Russian), Itogi Nauki i Tekhniki, pages 111–205. Akad. Nauk SSSR, Vsesoyuz. Inst. Nauchn. i Tekhn. Inform., Moscow, 1988. Translated in J. Soviet Math. 51 (1990), no. 6, 2667–2734.
* [3] A. A. Agrachev. Spectrum of the second variation. Tr. Mat. Inst. Steklova, 304 (Optimal'noe Upravlenie i Differentsial'nye Uravneniya):32–48, 2019.
* [4] Andrei Agrachev, Davide Barilari, and Ugo Boscain. A comprehensive introduction to sub-Riemannian geometry, volume 181 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2020. From the Hamiltonian viewpoint, With an appendix by Igor Zelenko.
* [5] Andrei Agrachev and Ivan Beschastnyi. Jacobi fields in optimal control: one-dimensional variations. J. Dyn. Control Syst., 26(4):685–732, 2020.
* [6] Andrei Agrachev and Ivan Beschastnyi. Jacobi fields in optimal control: Morse and Maslov indices. Nonlinear Anal., 214:Paper No. 112608, 47, 2022.
* [7] Andrei A. Agrachev and Yuri L. Sachkov. Control theory from the geometric viewpoint, volume 87 of Encyclopaedia of Mathematical Sciences. Springer-Verlag, Berlin, 2004. Control Theory and Optimization, II.
* [8] Andrey A. Agrachev and Ivan Yu. Beschastnyi. Symplectic geometry of constrained optimization. Regul. Chaotic Dyn., 22(6):750–770, 2017.
* [9] Charles K. Chui. Concerning rates of convergence of Riemann sums. J. Approximation Theory, 4:279–287, 1971.
* [10] Frédéric Jean. Control of nonholonomic systems: from sub-Riemannian geometry to motion planning. SpringerBriefs in Mathematics. Springer, Cham, 2014.
* [11] Tosio Kato. Perturbation theory for linear operators. Classics in Mathematics. Springer-Verlag, Berlin, 1995. Reprint of the 1980 edition.
* [12] Walter Rudin. Functional analysis. International Series in Pure and Applied Mathematics. McGraw-Hill, Inc., New York, second edition, 1991.
# B-line Detection in Lung Ultrasound Videos: Cartesian vs Polar Representation

Hamideh Kerdegari1, Phung Tran Huy Nhat1,2, Angela McBride2, Luigi Pisani3, Reza Razavi1, Louise Thwaites2, Sophie Yacoub2, Alberto Gomez1

1 School of Biomedical Engineering & Imaging Sciences, King's College London, UK
2 Oxford University Clinical Research Unit, Ho Chi Minh City, Vietnam
3 Mahidol Oxford Research Unit, Thailand
###### Abstract
Lung ultrasound (LUS) imaging is becoming popular in intensive care units
(ICUs) for assessing lung abnormalities, such as the appearance of B-line
artefacts as a result of severe dengue. These artefacts appear in the LUS
images and disappear quickly, making their manual detection very challenging.
They also extend radially following the propagation of the sound waves. As a
result, we hypothesize that a polar representation may be more adequate for
automatic analysis of these images. This paper presents an attention-based
Convolutional+LSTM model to automatically detect B-lines in LUS videos,
comparing performance when image data is taken in Cartesian and polar
representations. Results indicate that the proposed framework with polar
representation achieves competitive performance compared to the Cartesian
representation for B-line classification and that the attention mechanism can
provide better localization.
###### Keywords:
Lung ultrasound · B-line classification · Temporal model · Cartesian representation · Polar representation
## 1 Introduction
Recently, lung ultrasound (LUS) imaging has increased in popularity for rapid
lung monitoring of patients in the intensive care unit (ICU). Particularly
for dengue patients, LUS can capture image artefacts such as B-lines that
indicate pulmonary abnormalities such as oedema and effusions [1]. B-lines
are bright lines extending from the surface of the lung distally, following
the direction of propagation of the sound wave (shown in Figure 1). LUS
imaging is useful for assessing lung abnormalities through the presence of
B-lines. However, these lines become visible randomly during the respiratory
cycle and in the affected area only [2]; therefore, manually detecting these
artefacts is challenging for inexperienced sonographers, particularly in low-
and middle-income countries with higher prevalence of these diseases, where
training opportunities and expertise are scarce.
Figure 1: Examples of LUS B-line frames. B-line artefacts are presented as
bright lines that develop from the surface of the lung.
In order to provide an automatic solution to the LUS B-line detection problem,
recent studies proposed classification, segmentation and localization of
B-line artefacts in individual LUS frames. For example, van Sloun et al. [3]
proposed a convolutional neural network (CNN) followed by a class activation
map to classify B-lines and produce a segmentation map of them. A weakly
supervised localization of B-lines using a spatial transformer network was
proposed in [4]. In another study [5], a single-shot CNN was used to localize
B-lines with bounding boxes. Previous work by Kerdegari et al. [6] showed that
employing temporal information can improve the B-line detection task in LUS,
leveraging a temporal attention mechanism to localize B-line frames within LUS
videos.
Furthermore, attention mechanisms have been used widely for spatial
localization of lung lesions, particularly in CT and X-ray lung images. For
instance, a residual attention U-Net was proposed for multi-class segmentation
of CT [7] and X-ray [8] images. A lesion-attention deep neural network
(LA-DNN) was presented in [9] to perform the two tasks of B-line
classification and multi-label attention localization of five lesions. All
these studies employed a spatial attention mechanism for lung lesion
localization.
LUS images are usually presented in a standard Cartesian coordinate
representation (i.e., scan-converted). In this representation, the B-lines
commonly appear densely in the middle of the frustum. Therefore, data
preprocessing techniques such as downsampling might cause information loss
with the Cartesian representation. Additionally, the radial direction that
B-lines follow is known, but this information is not exploited. In this paper,
we propose to use a polar representation to, first, reduce information loss
when downsampling the data, and second, leverage prior knowledge about line
formation by having one dimension aligned with the lines.
To this end, we compare the performance of the temporal attention-based
convolutional+LSTM model proposed in [6] when using Cartesian and polar
representations. In summary, the contribution of this paper is to investigate
the effect of using an LUS polar coordinate representation on B-line detection
and localization performance. We also evaluate the effect of different
downsampling factors of the LUS video with polar and Cartesian representations
for the B-line detection and localization tasks.
## 2 Model Architecture
This paper employs a model that combines a deep visual feature extractor (a
CNN) with a long short-term memory (LSTM) network that can learn to recognize
the temporal dynamics of videos, and a temporal attention mechanism that
learns where to pay more attention in the video. Figure 2 shows the core of
our model. The model works by passing each frame of the video through our CNN
(the architecture details are shown in Figure 2, right) to produce a
fixed-length feature vector representation. The outputs of the CNN are passed
into a bidirectional LSTM (16 hidden units, tanh activation function) as a
recurrent sequence learning model. Then, the LSTM outputs are passed to the
attention network [10] to produce an attention score ($e_{t}$) for each
attended frame ($h_{t}$): $e_{t}=h_{t}w_{a}$, where $w_{a}$ is the attention
layer weight matrix. From $e_{t}$, an importance attention weight ($a_{t}$) is
computed for each attended frame:
$a_{t}=\frac{\exp(e_{t})}{\sum_{i=1}^{T}\exp(e_{i})}$. To learn which frames
of the video to pay attention to, the $a_{t}$ are multiplied with the LSTM
output. Finally, the video classification output is generated by averaging the
attention-weighted temporal feature vectors over time and passing the result
to a fully connected layer.
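The attention step described above can be sketched in NumPy as follows; shapes and names are illustrative stand-ins, not the paper's actual Keras layer:

```python
import numpy as np

def temporal_attention(h, w_a):
    # h: LSTM outputs for T frames (T x d); w_a: attention weight vector (d,)
    e = h @ w_a                           # score e_t = h_t w_a, one per frame
    a = np.exp(e - e.max())               # softmax over time (shifted for stability)
    a = a / a.sum()                       # importance weights a_t, summing to 1
    v = (a[:, None] * h).mean(axis=0)     # attention-weighted average over time
    return v, a

T, d = 30, 16
rng = np.random.default_rng(3)
v, a = temporal_attention(rng.standard_normal((T, d)), rng.standard_normal(d))
assert np.isclose(a.sum(), 1.0) and v.shape == (d,)
```

In the full model, `v` would then feed the fully connected classification layer.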
Figure 2: Overview of the proposed framework. Left: LUS videos are first
processed through CNN layers, then a bidirectional LSTM is used to extract
temporal information, and finally an attention mechanism is applied to
localize B-line frames within LUS videos. Right: Detailed architecture of our
CNN.
## 3 Experimental Setup
### 3.1 Dataset and Preprocessing
The dataset used in the experiments was collected at the Hospital of Tropical
Diseases in Ho Chi Minh City, Vietnam. It includes about 5 hours of lung
ultrasound videos acquired from 60 dengue patients. The videos were recorded
using a Sonosite M-Turbo machine (Fujifilm Sonosite, Inc., Bothell, WA) with a
low-medium frequency (3.5-5 MHz) convex probe. The Kigali ARDS protocol [11],
as a standardised operating procedure, was applied at 6 points (2 anterior, 2
lateral and 2 posterolateral) on each side of the chest to perform the LUS
exams.
The four-second LUS video clips were resized from the original size of
$640\times 480$ pixels to $64\times 64$ pixels for training, and fully
anonymised through masking. A qualified sonographer annotated these clips
using the VGG annotator tool [12]. During the annotation procedure, each video
clip was assigned either a B-line or non-B-line label. Furthermore, B-line
frames and B-line regions within the B-line video clips were annotated, to be
used later as ground truth for the temporal and spatial B-line localization
tasks.
### 3.2 Polar Coordinate Representation
Like other common applications of ultrasound imaging, lung ultrasound images
are normally presented in Cartesian coordinates (shown in Figure 3, left). In
this case, the information, particularly the B-line artefacts, is to some
extent presented densely in the centre of the frustum. Therefore, when we
downsample the LUS videos as input to our network, some information is lost.
To overcome this limitation, we transform each video clip into its associated
polar coordinate representation (shown in Figure 3, right). With the polar
coordinate representation, the information is spread along the angle axis of
the polar data; therefore, less information is lost during downsampling of the
data. Additionally, there is not much information in the upper left and right
corners (black areas) of the Cartesian coordinate representation. As a result,
when these areas are removed in the polar coordinate representation, the
network can concentrate on the areas of each frame where the most useful
information exists.
Figure 3: Examples of a B-line frame in Cartesian coordinate (left) and polar
coordinate (right) representation.
The polar representation is obtained by the following reparameterization, used
to resample the Cartesian images onto a polar grid using bilinear
interpolation:
$\begin{array}[]{rl}x=&r\sin(\alpha)\\ y=&r\cos(\alpha)\end{array}$ (1)
where $r$ is the depth, or radius (distance from the beam source to a pixel
location), and $\alpha$ is the angle measured from the $y$ axis.
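The resampling in Eq. (1) can be sketched as below; the apex position, angular aperture and output grid size are assumptions chosen for illustration, not values taken from the paper:

```python
import numpy as np

def bilinear(img, y, x):
    # Bilinear interpolation of img at fractional pixel coordinates (y, x)
    y0 = np.clip(np.floor(y).astype(int), 0, img.shape[0] - 2)
    x0 = np.clip(np.floor(x).astype(int), 0, img.shape[1] - 2)
    dy, dx = y - y0, x - x0
    return (img[y0, x0] * (1 - dy) * (1 - dx) + img[y0, x0 + 1] * (1 - dy) * dx
            + img[y0 + 1, x0] * dy * (1 - dx) + img[y0 + 1, x0 + 1] * dy * dx)

def to_polar(img, apex, r_max, n_r=64, n_theta=64, half_angle=np.pi / 6):
    # Sample the scan-converted frame on a (depth, angle) grid per Eq. (1)
    r = np.linspace(0, r_max, n_r)
    alpha = np.linspace(-half_angle, half_angle, n_theta)
    R, A = np.meshgrid(r, alpha, indexing="ij")
    x = apex[1] + R * np.sin(A)        # x = r sin(alpha)
    y = apex[0] + R * np.cos(A)        # y = r cos(alpha), depth from the apex
    return bilinear(img, y, x)

frame = np.random.default_rng(4).random((480, 640))   # stand-in for a LUS frame
polar = to_polar(frame, apex=(0, 320), r_max=470)
assert polar.shape == (64, 64)
```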
### 3.3 Implementation Details
Our network was implemented using the Keras library with a TensorFlow backend.
The network was optimised using the Adam optimizer with the learning rate set
to $10^{-6}$. Batch normalization was used for both the CNN and LSTM parts of
the network. A batch size of 25, dropout of 0.2 and $L2=10^{-5}$
regularization were employed. Data augmentation was applied to the training
data by adding horizontally-flipped frames. 5-fold cross validation was used
and the network converged after 60 epochs. Class imbalance was addressed by
weighting the probability of drawing a sample by its relative class occurrence
in the training set.
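The class-balancing scheme can be sketched as follows, with a hypothetical label array; each sample's draw probability is made inversely proportional to its class frequency:

```python
import numpy as np

labels = np.array([0, 0, 0, 1, 1, 0, 0, 1])   # illustrative B-line / non-B-line labels
counts = np.bincount(labels)                   # per-class occurrence counts
weights = 1.0 / counts[labels]                 # inverse-frequency weight per sample
p = weights / weights.sum()                    # sampling distribution over the set

# Under p, the expected draw mass of each class is now equal
assert np.isclose(p[labels == 0].sum(), 0.5)
assert np.isclose(p[labels == 1].sum(), 0.5)
```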
## 4 Experiments and Results
In order to investigate the potential benefit of employing the polar
representation and various video resolutions in the B-line detection task, we
trained our model with Cartesian and polar representations using input video
sizes of $64\times 64$, $32\times 32$ and $16\times 16$. Furthermore, we
reduced the depth size of the polar data to 32 and 16 samples while keeping
the number of angular elements at 64 (hence maintaining the angular
resolution), resulting in video sizes of $64\times 32$ and $64\times 16$ for
training.
To assess the classification performance of the model, the F1 score (%), the
harmonic mean of precision and recall, $F1=2\times\frac{Precision\times
Recall}{Precision+Recall}\times 100$, was used. The classification performance
for the Cartesian and polar data is presented in Figure 4. An alpha value of
0.05 was selected as the statistical significance threshold. A Shapiro-Wilk
test showed that all data were normally distributed.
Figure 4: B-line classification performance (F1 score) of Cartesian and polar
representations with various video resolutions.
Our baseline video resolution ($64\times 64$) achieved the highest performance
for both polar and Cartesian representations. Also, a paired t-test revealed
that the performance of the polar data (83.5%) is significantly higher than
that of the Cartesian data (81%) (t=2.776, p=0.017) in all cases with the same
number of pixels. This demonstrates that the model can extract more
information from a polar representation. When we decreased the video
resolution to $32\times 32$ and $16\times 16$, the performance dropped
compared to the baseline video resolution, although the drop was less
pronounced for polar images. For a video resolution of $32\times 32$, a paired
t-test showed a significant difference between the Cartesian and polar
representations in the B-line detection task (t=1.035, p=0.028). However, this
difference is not significant for a video resolution of $16\times 16$
(t=-1.104, p=0.165), probably because the downsampling is too aggressive and
B-lines become barely distinguishable in either representation. Furthermore,
we decreased the depth size of the polar data to 32 and 16 to evaluate the
contribution of depth information to B-line detection. Compared to the depth
size of 64 at the baseline resolution, the performance decreased significantly
for both depth sizes of 32 (t=2.835, p=0.008) and 16 (t=1.503, p=0.018).
Additionally, we investigated the impact of downsampling along scan-lines and
along angles. To do this, we compared two video resolutions that have the same
number of pixels: $32\times 32$ and $64\times 16$. Results showed that the
$64\times 16$ resolution (64 along the angle dimension) has significantly
higher performance (t=2.43, p=0.03), which shows that preserving information
along the angle dimension helps in this specific task, where artefacts are
aligned along constant-angle lines.
We further evaluated B-line temporal localization accuracy using both data
representations. We calculated the intersection over union (IoU) of the
predicted temporally localized frames with their ground-truth annotations.
Results are presented in Table 1. With the polar representation, the model is
able to localize B-line frames temporally with higher performance than with
the Cartesian representation. Additionally, compared to the Cartesian
representation, the attention weights are higher for true B-line frames and
lower for non-B-line frames, further suggesting that the network learns to
differentiate B-line and non-B-line frames better in the polar
representation.
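The temporal-localization metric used here can be stated precisely: treating the predicted and annotated B-line frames as sets of frame indices, the IoU is the size of their intersection divided by the size of their union. A minimal sketch (the convention for two empty sets is our own choice):

```python
def temporal_iou(pred_frames, gt_frames):
    """Intersection over union of predicted vs annotated B-line frame indices."""
    pred, gt = set(pred_frames), set(gt_frames)
    union = pred | gt
    # Convention: two empty annotations count as a perfect match.
    return len(pred & gt) / len(union) if union else 1.0

# Predicted B-line frames 3..10 against ground-truth frames 5..12.
iou = temporal_iou(range(3, 11), range(5, 13))
```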
Table 1: IoU values showing B-line localisation accuracy (%) for various video
resolutions of Cartesian and polar representations.

| Representation | $64\times 64$ | $32\times 32$ | $16\times 16$ | $64\times 32$ | $64\times 16$ |
| --- | --- | --- | --- | --- | --- |
| Cartesian | 67.1 | 56.3 | 42.2 | — | — |
| Polar | 73.2 | 62.5 | 43.1 | 67.7 | 65.1 |
## 5 Conclusion
This paper investigates the effect of employing the ultrasound polar
coordinate representation on LUS B-line detection and localization tasks. We
employed an attention-based convolutional+LSTM model capable of extracting
spatial and temporal features from LUS videos and localizing B-line frames
using a temporal attention mechanism. We evaluated B-line classification and
localization with this architecture using Cartesian and polar coordinate
representations at different resolutions. On our LUS video dataset, the polar
representation consistently outperformed the Cartesian one in terms of
classification accuracy and temporal localization accuracy.
Our future work will explore a spatiotemporal attention mechanism that is
able to detect B-line artefacts and localize them both spatially and
temporally within LUS videos in polar coordinates. B-line spatial localization
may help clinicians quantify the severity of the disease. Overall, these
findings will assist the management of ICU patients with dengue, particularly
in low- and middle-income countries where ultrasound operator expertise is
limited.
## ACKNOWLEDGMENT
The VITAL Consortium: OUCRU: Dang Trung Kien, Dong Huu Khanh Trinh, Joseph
Donovan, Du Hong Duc, Ronald Geskus, Ho Bich Hai, Ho Quang Chanh, Ho Van Hien,
Hoang Minh Tu Van, Huynh Trung Trieu, Evelyne Kestelyn, Lam Minh Yen, Le
Nguyen Thanh Nhan, Le Thanh Phuong, Luu Phuoc An, Nguyen Lam Vuong, Nguyen
Than Ha Quyen, Nguyen Thanh Ngoc, Nguyen Thi Le Thanh, Nguyen Thi Phuong Dung,
Ninh Thi Thanh Van, Pham Thi Lieu, Phan Nguyen Quoc Khanh, Phung Khanh Lam,
Phung Tran Huy Nhat, Guy Thwaites, Louise Thwaites, Tran Minh Duc, Trinh Manh
Hung, Hugo Turner, Jennifer Ilo Van Nuil, Sophie Yacoub. Hospital for Tropical
Diseases, Ho Chi Minh City: Cao Thi Tam, Duong Bich Thuy, Ha Thi Hai Duong, Ho
Dang Trung Nghia, Le Buu Chau, Le Ngoc Minh Thu, Le Thi Mai Thao, Luong Thi
Hue Tai, Nguyen Hoan Phu, Nguyen Quoc Viet, Nguyen Thanh Nguyen, Nguyen Thanh
Phong, Nguyen Thi Kim Anh, Nguyen Van Hao, Nguyen Van Thanh Duoc, Nguyen Van
Vinh Chau, Pham Kieu Nguyet Oanh, Phan Tu Qui, Phan Vinh Tho, Truong Thi
Phuong Thao. University of Oxford: David Clifton, Mike English, Heloise
Greeff, Huiqi Lu, Jacob McKnight, Chris Paton. Imperial College London:
Pantellis Georgiou, Bernard Hernandez Perez, Kerri Hill-Cawthorne, Alison
Holmes, Stefan Karolcik, Damien Ming, Nicolas Moser, Jesus Rodriguez Manzano.
King’s College London: Alberto Gomez, Hamideh Kerdegari, Marc Modat, Reza
Razavi. ETH Zurich: Abhilash Guru Dutt, Walter Karlen, Michaela Verling, Elias
Wicki. Melbourne University: Linda Denehy, Thomas Rollinson.
## References
* [1] Gino Soldati, Marcello Demi, and Libertario Demi, “Ultrasound patterns of pulmonary edema,” Annals of Translational Medicine, vol. 7, no. 1, 2019.
* [2] Christoph Dietrich et al., “Lung b-line artefacts and their use,” Journal of Thoracic Disease, vol. 8, no. 6, pp. 1356, 2016.
* [3] Ruud JG van Sloun and Libertario Demi, “Localizing b-lines in lung ultrasonography by weakly supervised deep learning, in-vivo results,” IEEE JBHI, vol. 24, no. 4, pp. 957–964, 2019.
* [4] S. Roy et al., “Deep learning for classification and localization of covid-19 markers in point-of-care lung ultrasound,” IEEE TMI, 2020.
* [5] S. Kulhare et al., “Ultrasound-based detection of lung abnormalities using single shot detection convolutional neural networks,” in MICCAI-PoCUS, pp. 65–73, 2018.
* [6] Hamideh Kerdegari et al., “Automatic detection of B-lines in lung ultrasound videos from severe dengue patients,” 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), pp. 989–993, 2021.
* [7] Xiaocong Chen, Lina Yao, and Yu Zhang, “Residual attention u-net for automated multi-class segmentation of covid-19 chest ct images,” arXiv:2004.05645, 2020.
* [8] Gusztáv Gaál, Balázs Maga, and András Lukács, “Attention u-net based adversarial architectures for chest x-ray lung segmentation,” arXiv:2003.10304, 2020.
* [9] Bin Liu, Xiaoxue Gao, Mengshuang He, Fengmao Lv, and Guosheng Yin, “Online covid-19 diagnosis with chest ct images: Lesion-attention deep neural networks,” medRxiv, 2020.
* [10] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio, “Neural machine translation by jointly learning to align and translate,” arXiv:1409.0473, 2014.
* [11] E. D. Riviello et al., “Hospital incidence and outcomes of the acute respiratory distress syndrome using the kigali modification of the berlin definition,” Am. J. Respir. Crit. Care Med., vol. 193, no. 1, pp. 52–59, 2016.
* [12] Abhishek Dutta and Andrew Zisserman, “The VIA annotation software for images, audio and video,” in ACM Multimedia, 2019.
|
arxiv-papers
| 2021-07-26T15:59:56 |
2024-09-04T03:07:19.129274
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Hamideh Kerdegari, Phung Tran Huy Nhat, Angela McBride, Luigi Pisani,\n Reza Razavi, Louise Thwaites, Sophie Yacoub, and Alberto Gomez",
"submitter": "Hamideh Kerdegari Dr",
"url": "https://arxiv.org/abs/2107.12291"
}
|
2107.12293
|
# An answer to Whitehead’s asphericity question
Elton Pasku
Universiteti i Tiranës
Fakulteti i Shkencave Natyrore
Departamenti i Matematikës
Tiranë, Albania
[email protected]
###### Abstract
The Whitehead asphericity problem, regarded as a problem of combinatorial
group theory, asks whether any subpresentation of an aspherical group
presentation is also aspherical. We give a positive answer to this question by
proving that if $\mathcal{P}=(\mathbf{x},\mathbf{r})$ is an aspherical
presentation of the trivial group, and $r_{0}\in\mathbf{r}$ a fixed relation,
then $\mathcal{P}_{1}=(\mathbf{x},\mathbf{r}_{1})$ is aspherical where
$\mathbf{r}_{1}=\mathbf{r}\setminus\\{r_{0}\\}$.
## 1 Introduction
A 2-dimensional CW-complex $K$ is called aspherical if $\pi_{2}(K)=0$. The
Whitehead asphericity problem (WAP for short), raised as a question in [44],
asks whether any subcomplex of an aspherical 2-complex is also aspherical. The
question can be formulated in group theoretic terms since every group
presentation $\mathcal{P}$ has a geometric realisation as a 2-dimensional CW-
complex $K(\mathcal{P})$ and so $\mathcal{P}$ is called aspherical if
$K(\mathcal{P})$ is aspherical. A useful review of this question is in [42].
The purpose of the present paper is to prove that if
$\mathcal{P}=(\mathbf{x},\mathbf{r})$ is an aspherical presentation of the
trivial group and $r_{0}\in\mathbf{r}$ is a fixed relation, then the
subpresentation $\mathcal{P}_{1}=(\mathbf{x},\mathbf{r}_{1})$ where
$\mathbf{r}_{1}=\mathbf{r}\setminus\\{r_{0}\\}$ is again aspherical. This in
fact implies that the WAP always has a positive answer since in Theorem 1 of [32]
Ivanov proves that if the WAP is false, then there is an aspherical
presentation $\mathcal{P}=(\mathcal{A},\mathcal{R}\cup\\{z\\})$ of the trivial
group where the alphabet $\mathcal{A}$ is countable and $z\in\mathcal{A}$ such
that $\mathcal{P}_{1}=(\mathcal{A},\mathcal{R})$ is not aspherical.
An immediate implication of our result and that of Bestvina-Brady [2] is that
the conjecture of Eilenberg and Ganea [11] is false. This conjecture states
that if a discrete group $G$ has cohomological dimension 2, then it has a
2-dimensional Eilenberg-MacLane space $K(G,1)$.
There is a large corpus of results related to ours, mostly contained in [4],
[5], [8], [10], [14], [18], [19], [20], [22], [23], [24], [25], [26], [30],
[32], [16] and [43].
In the first part of our paper we make use of the review paper [5] of
Brown and Huebschmann, which contains several key results about aspherical
group presentations, one of which is Proposition 14, giving necessary and
sufficient conditions under which a group presentation
$\mathcal{P}=(\mathbf{x},\mathbf{r})$ is aspherical. It turns out that the
asphericity of $\mathcal{P}$ is encoded in the structure of the free crossed
module $(H/P,\hat{F},\delta)$ associated to $\mathcal{P}$. To be
precise, we state Proposition 14 below.
###### Proposition 1.1.
(Proposition 14 of [5]) Let $K(\mathcal{P})$ be the geometric realisation of a
group presentation $\mathcal{P}=(\mathbf{x},\mathbf{r})$ and let $G$ be the
group given by $\mathcal{P}$. The following are equivalent.
(i)
The 2-complex $K(\mathcal{P})$ is aspherical.
(ii)
The module $\pi$ of identities for $\mathcal{P}$ is zero.
(iii)
The relation module $\mathcal{N(P)}$ of $\mathcal{P}$ is a free left
$\mathbb{Z}G$ module on the images of the relators $r\in\mathbf{r}$.
(iv)
Any identity $Y$-sequence for $\mathcal{P}$ is Peiffer equivalent to the empty
sequence.
The last condition is of particular interest to us. By definition, a
$Y$-sequence for $\mathcal{P}$ is a finite (possibly empty) sequence of the
form
$((^{u_{1}}r_{1})^{\varepsilon_{1}},...,(^{u_{n}}r_{n})^{\varepsilon_{n}})$
where each $r_{i}\in\mathbf{r}$, each $u_{i}$ is a word from the free group
$F$ over $\mathbf{x}$, and each $\varepsilon_{i}=\pm 1$. A $Y$-sequence
$((^{u_{1}}r_{1})^{\varepsilon_{1}},...,(^{u_{n}}r_{n})^{\varepsilon_{n}})$ is
called an identity $Y$-sequence if it is either empty or if
$\prod_{i=1,n}u_{i}r_{i}^{\varepsilon_{i}}u_{i}^{-1}=1$ in $F$. The definition
of Peiffer equivalence is based on Peiffer operations on $Y$-sequences and
reads as follows.
* (i)
An elementary Peiffer exchange replaces an adjacent pair
$((^{u}r)^{\varepsilon},(^{v}s)^{\delta})$ in a $Y$-sequence by either
$((^{ur^{\varepsilon}u^{-1}v}s)^{\delta},(^{u}r)^{\varepsilon})$, or by
$((^{v}s)^{\delta},(^{vs^{-\delta}v^{-1}u}r)^{\varepsilon})$.
* (ii)
A Peiffer deletion deletes an adjacent pair
$((^{u}r)^{\varepsilon},(^{u}r)^{-\varepsilon})$ in a $Y$-sequence.
* (iii)
A Peiffer insertion is the inverse of the Peiffer deletion.
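To illustrate these notions with a small worked example, consider the presentation $(x\mid r)$ with $r=x^{2}$. The pair $((^{1}r)^{+1},(^{x^{2}}r)^{-1})$ is an identity $Y$-sequence, since
$1\,r\,1^{-1}\cdot x^{2}r^{-1}x^{-2}=x^{2}\cdot x^{2}x^{-2}x^{-2}=1\text{ in }F.$
Note also why an elementary Peiffer exchange maps identity $Y$-sequences to identity $Y$-sequences: replacing the adjacent pair $((^{u}r)^{\varepsilon},(^{v}s)^{\delta})$ by $((^{ur^{\varepsilon}u^{-1}v}s)^{\delta},(^{u}r)^{\varepsilon})$ leaves the product in $F$ unchanged, because
$(ur^{\varepsilon}u^{-1}v)s^{\delta}(ur^{\varepsilon}u^{-1}v)^{-1}\cdot ur^{\varepsilon}u^{-1}=ur^{\varepsilon}u^{-1}\cdot vs^{\delta}v^{-1}.$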
The equivalence relation on the set of $Y$-sequences generated by the above
operations is called Peiffer equivalence. We recall from [5] what it means
for an identity $Y$-sequence
$((^{u_{1}}r_{1})^{\varepsilon_{1}},...,(^{u_{n}}r_{n})^{\varepsilon_{n}})$ to
have the primary identity property. This means that the indices $1,2,...,n$
are grouped into pairs $(i,j)$ such that $r_{i}=r_{j}$,
$\varepsilon_{i}=-\varepsilon_{j}$ and $u_{i}=u_{j}$ modulo $N$ where $N$ is
the normal subgroup of $\hat{F}$ generated by $\mathbf{r}$. Proposition 16 of
[5] shows that every such sequence is Peiffer equivalent to the empty
sequence. Given an identity $Y$-sequence $d$ which is equivalent to the empty
sequence 1, we are interested in knowing what kinds of insertions
$((^{u}r)^{\varepsilon},(^{u}r)^{-\varepsilon})$ are used along the way of
transforming $d$ to 1. It is obvious that keeping track of this information is
vital for tackling the Whitehead problem.
The aim of Section 3 of the present paper is to offer an alternative way of
dealing with the asphericity of a group presentation
$\mathcal{P}=(\mathbf{x},\mathbf{r})$ by considering a new crossed module
$(\mathcal{G}(\Upsilon),\hat{F},\tilde{\theta})$ over the free group $\hat{F}$
on $\mathbf{x}$ where $\mathcal{G}(\Upsilon)$ is the group generated by the
symbols $(^{u}r)^{\varepsilon}$ subject to relations
$(^{u}r)^{\varepsilon}(^{v}s)^{\delta}=(^{{u{r^{\varepsilon}}u^{-1}}v}s)^{\delta}(^{u}r)^{\varepsilon}$,
the action of $\hat{F}$ on $\mathcal{G}(\Upsilon)$ and the map
$\tilde{\theta}$ are defined in the obvious fashion. The advantage of working
with $\mathcal{G}(\Upsilon)$ is that, unlike in $H/P$, in
$\mathcal{G}(\Upsilon)$ the images of insertions
$((^{u}r)^{\varepsilon},(^{u}r)^{-\varepsilon})$ do not cancel out and this
enables us to express the asphericity in terms of such insertions. This is
realized by considering the kernel $\tilde{\Pi}$ of $\tilde{\theta}$ which is
the analogue of the module $\pi$ of identities for $\mathcal{P}$ in the
standard theory and is not trivial when $\mathcal{P}$ is aspherical. We call
$\tilde{\Pi}$ the generalized module of identities for $\mathcal{P}$.
To prove our results we apply techniques from the theory of semigroup actions
and to this end we use concepts like the universal enveloping group
$\mathcal{G}(S)$ of a given semigroup $S$, the dominion of a subsemigroup $U$
of a semigroup $S$ and the tensor product of semigroup actions. These concepts
are explained, with references, in Section 2.
## 2 Monoid actions
For the benefit of the reader not familiar with monoid actions we will list
below some basic notions and results that are used in the paper. For further
results on the subject the reader may consult the monograph [27]. Given $S$ a
monoid with identity element 1 and $X$ a nonempty set, we say that $X$ is a
left S-system if there is an action $(s,x)\mapsto sx$ from $S\times X$ into
$X$ with the properties
$(st)x=s(tx)\text{ for all }s,t\in S\text{ and }x\in X,\qquad 1x=x\text{ for all }x\in X.$
Right $S$-systems are defined analogously in the obvious way. Given $S$ and
$T$ (not necessarily different) monoids, we say that $X$ is an (S,T)-bisystem
if it is a left $S$-system, a right $T$-system, and if
$(sx)t=s(xt)\text{ for all }s\in S,t\in T\text{ and }x\in X.$
If $X$ and $Y$ are both left $S$-systems, then an S-morphism or S-map is a map
$\phi:X\rightarrow Y$ such that
$\phi(sx)=s\phi(x)\text{ for all }s\in S\text{ and }x\in X.$
Morphisms of right $S$-systems and of $(S,T)$-bisystems are defined in an
analogous way. If we are given a left $T$-system $X$ and a right $S$-system
$Y$, then we can give the cartesian product $X\times Y$ the structure of a
$(T,S)$-bisystem by setting
$t(x,y)=(tx,y)\text{ and }(x,y)s=(x,ys).$
Let now $A$ be a $(T,U)$-bisystem, $B$ a $(U,S)$-bisystem and $C$ a
$(T,S)$-bisystem. As explained above, we can give $A\times B$ the structure
of a $(T,S)$-bisystem. With this in mind, we say that a $(T,S)$-map
$\beta:A\times B\rightarrow C$ is a bimap if
$\beta(au,b)=\beta(a,ub)\text{ for all }a\in A,b\in B\text{ and }u\in U.$
A pair $(A\otimes_{U}B,\psi)$ consisting of a $(T,S)$-bisystem $A\otimes_{U}B$
and a bimap $\psi:A\times B\rightarrow A\otimes_{U}B$ will be called a tensor
product of A and B over U if for every $(T,S)$-bisystem $C$ and every bimap
$\beta:A\times B\rightarrow C$, there exists a unique $(T,S)$-map
$\bar{\beta}:A\otimes_{U}B\rightarrow C$ such that
$\bar{\beta}\circ\psi=\beta$. It is proved in [27] that $A\otimes_{U}B$
exists and is unique up to
isomorphism. The existence theorem reveals that $A\otimes_{U}B=(A\times
B)/\tau$ where $\tau$ is the equivalence on $A\times B$ generated by the
relation
$T=\\{((au,b),(a,ub)):a\in A,b\in B,u\in U\\}.$
The equivalence class of a pair $(a,b)$ is usually denoted by $a\otimes_{U}b$.
Of interest to us is the situation when $A=S=B$, where $S$ is a monoid and
$U$ is a submonoid of $S$. Here $A$ is regarded as an $(S,U)$-bisystem
with $U$ acting on the right on $A$ by multiplication, and $B$ as a
$(U,S)$-bisystem where $U$ acts on the left on $B$ by multiplication.
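Since $A\otimes_{U}B$ is just $(A\times B)/\tau$, it can be computed by brute force for small finite examples. The sketch below is our own illustration (not an algorithm from [27]): it closes the relation $T$ with a union-find structure. For the toy monoid $\mathbb{Z}_{4}$ under addition with submonoid $U=\{0,2\}$, each class has the form $\{(a,b),(a+2,b+2)\}$, so $S\otimes_{U}S$ has 8 elements.

```python
from itertools import product

def tensor_classes(S, U, op):
    """Classes of S x S under the closure of (a*u, b) ~ (a, u*b), S finite.

    A brute-force illustration of the tensor product S (x)_U S
    (our own sketch, for toy examples only)."""
    parent = {p: p for p in product(S, S)}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path halving
            p = parent[p]
        return p

    for a, b, u in product(S, S, U):
        parent[find((op(a, u), b))] = find((a, op(u, b)))  # merge related pairs

    return {frozenset(q for q in parent if find(q) == find(p)) for p in parent}

# Toy example: Z_4 under addition mod 4, with submonoid U = {0, 2}.
Z4 = [0, 1, 2, 3]
classes = tensor_classes(Z4, [0, 2], lambda x, y: (x + y) % 4)
```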
Another concept that is important to our approach is that of the dominion,
defined by Isbell in [31]. By definition, if $U$ is a submonoid of
a monoid $S$, then the dominion $\text{Dom}_{S}(U)$ consists of all the
elements $d\in S$ with the property that for every monoid $T$ and every pair
of monoid homomorphisms $f,g:S\rightarrow T$ that coincide on $U$, it follows
that $f(d)=g(d)$. Related to dominions there is the well-known zigzag theorem
of Isbell. We will present here the Stenström version of it (theorem 8.3.3 of
[27]), which reads as follows. Let $U$ be a submonoid of a monoid $S$ and let $d\in S$.
Then, $d\in\text{Dom}_{S}(U)$ if and only if $d\otimes_{U}1=1\otimes_{U}d$ in
the tensor product $A=S\otimes_{U}S$. We mention here that this result holds
true if $S$ turns out to be a group and $U$ a subgroup, both regarded as
monoids. A key result (theorem 8.3.6 of [27]) that is used in the next section
is the fact that any inverse semigroup $U$ is absolutely closed in the sense
that for every semigroup $S$ containing $U$ as a subsemigroup,
$\text{Dom}_{S}(U)=U$. It is obvious that groups are absolutely closed as
special cases of inverse monoids (see [28]).
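Stenström's criterion can likewise be checked by brute force on small examples. In the sketch below (our own illustration, not code from [27]), we test whether $d\otimes_{U}1=1\otimes_{U}d$ for $S=\mathbb{Z}_{4}$ written additively (so the identity is $0$) and the subgroup $U=\{0,2\}$; since groups are absolutely closed, the computed dominion should come out equal to $U$.

```python
from itertools import product

def dominion(S, U, op, e):
    """Brute-force Stenstrom test: d lies in Dom_S(U) iff
    (d, e) ~ (e, d) in the tensor product S (x)_U S, e the identity."""
    parent = {p: p for p in product(S, S)}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]
            p = parent[p]
        return p

    for a, b, u in product(S, S, U):
        parent[find((op(a, u), b))] = find((a, op(u, b)))

    return [d for d in S if find((d, e)) == find((e, d))]

# Z_4 with subgroup U = {0, 2}: groups are absolutely closed,
# so the dominion is expected to equal U itself.
dom = dominion([0, 1, 2, 3], [0, 2], lambda x, y: (x + y) % 4, 0)
```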
## 3 Peiffer operations and monoid actions
Before we explain how monoid actions are used to deal with the Peiffer
operations on $Y$-sequences, we will introduce several monoids.
The first one is the monoid $\Upsilon$ defined by the monoid presentation
$\mathcal{M}=\langle Y\cup Y^{-1},P\rangle$ where $Y^{-1}$ is the set of group
inverses of the elements of $Y$ and $P$ consists of all pairs
$(ab,{{}^{\theta(a)}}ba)$ where $a,b\in Y\cup Y^{-1}$.
The second one is the group $\mathcal{G}(\Upsilon)$ given by the group
presentation $(Y\cup Y^{-1},\hat{P})$ where $\hat{P}$ is the set of all words
$ab\iota(a)\iota(^{\theta(a)}b)$ where by $\iota(c)$ we denote the inverse of
$c$ in the free group over $Y\cup Y^{-1}$. Before we introduce the next two
monoids and the respective monoid actions, we stop to explain that $\Upsilon$
and $\mathcal{G}(\Upsilon)$ are special cases of a more general situation. If
a monoid $S$ is given by the monoid presentation $\mathcal{M}=\langle
X,R\rangle$, then its universal enveloping group $\mathcal{G}(S)$ (see [1] and
[9]) is defined to be the group given by the group presentation $(X,\hat{R})$
where $\hat{R}$ consists of all words $u\iota(v)$ whenever $(u,v)\in R$ where
$\iota(v)$ is the inverse of $v$ in the free group over $X$. For future use
we write $\sigma:FM(X)\rightarrow S$ for the canonical homomorphism, where
$FM(X)$ is the free monoid on $X$. It is easy to see that there is a monoid
homomorphism $\mu_{S}:S\rightarrow\mathcal{G}(S)$ which satisfies the
following universal property. For every group $G$ and monoid homomorphism
$f:S\rightarrow G$, there is a unique group homomorphism
$\hat{f}:\mathcal{G}(S)\rightarrow G$ such that $\hat{f}\mu_{S}=f$. This
universal property is an indication of an adjoint situation. Specifically, the
functor $\mathcal{G}:\mathbf{Mon}\rightarrow\mathbf{Grp}$ which maps every
monoid to its universal group, is a left adjoint to the forgetful functor
$U:\mathbf{Grp}\rightarrow\mathbf{Mon}$. This ensures that $\mathcal{G}(S)$ is
independent of the chosen presentation of $S$.
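For example, for the free monoid on one generator, $S\cong\mathbb{N}$, the universal group is $\mathcal{G}(S)\cong\mathbb{Z}$. Note that $\mu_{S}$ need not be injective: for the bicyclic monoid $B=\langle p,q\mid pq=1\rangle$, the relation $pq=1$ forces $q=p^{-1}$ in any group, so $\mathcal{G}(B)\cong\mathbb{Z}$, and $\mu_{B}$ identifies $qp$ with $1$ even though $qp\neq 1$ in $B$.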
The third monoid we consider is the submonoid $\mathfrak{U}$ of $\Upsilon$,
having the same unit as $\Upsilon$, which is generated by all the elements of
the form $\sigma(a)\sigma(a^{-1})$ with $a\in Y\cup Y^{-1}$. This monoid acts
on the left and on the right on $\Upsilon$ by multiplication in
$\Upsilon$.
The last monoid considered is the subgroup $\hat{\mathfrak{U}}$ of
$\mathcal{G}(\Upsilon)$ generated by $\mu(\mathfrak{U})$. Similarly to above,
$\hat{\mathfrak{U}}$ acts on $\mathcal{G}(\Upsilon)$ by multiplication.
Given a $Y$-sequence $\alpha=(a_{1},...,a_{n})$ over the group presentation
$\mathcal{P}=(\mathbf{x},\mathbf{r})$, performing an elementary Peiffer
operation on $\alpha$ can be interpreted in a simple way in terms of the
monoids $\Upsilon$ and $\mathfrak{U}$. In what follows we will denote by
$\sigma(\alpha)$ the element
$\sigma(a_{1})\cdot\cdot\cdot\sigma(a_{n})\in\Upsilon$. If
$\beta=(b_{1},...,b_{n})$ is obtained from $\alpha=(a_{1},...,a_{n})$ by
performing an elementary Peiffer exchange, then from the definition of
$\Upsilon$, $\sigma(\alpha)=\sigma(\beta)$, therefore an elementary Peiffer
exchange or a finite sequence of such has no effect on the element
$\sigma(a_{1})\cdot\cdot\cdot\sigma(a_{n})\in\Upsilon$. Before we see the
effect that a Peiffer insertion in $\alpha$ has on $\sigma(\alpha)$ we need
the first claim of the following.
###### Lemma 3.1.
The elements of $\mathfrak{U}$ are central in $\Upsilon$ and those of
$\hat{\mathfrak{U}}$ are central in $\mathcal{G}(\Upsilon)$.
###### Proof.
We see that for every $a\text{ and }b\in Y\cup Y^{-1}$,
$\sigma(a)\sigma(a^{-1})\sigma(b)=\sigma(b)\sigma(a)\sigma(a^{-1})$. Indeed,
$\sigma(a)\sigma(a^{-1})\sigma(b)={}^{\theta(a)\theta(a^{-1})}{\sigma(b)}\,(\sigma(a)\sigma(a^{-1}))=\sigma(b)\sigma(a)\sigma(a^{-1}).$
Since elements $\sigma(b)$ and $\sigma(a)\sigma(a^{-1})$ are generators of
$\Upsilon$ and $\mathfrak{U}$ respectively, then the first claim holds true.
The second claim follows easily. ∎
If we insert $(a,a^{-1})$ at some point in $\alpha=(a_{1},...,a_{n})$ to
obtain $\alpha^{\prime}=(a_{1},...,a,a^{-1},...,a_{n})$, then from lemma 3.1,
$\sigma(\alpha^{\prime})=\sigma(\alpha)\cdot(\sigma(a)\sigma(a^{-1})),$
which means that inserting $(a,a^{-1})$ inside a $Y$-sequence $\alpha$ has the
same effect as multiplying the corresponding $\sigma(\alpha)$ in $\Upsilon$ by
the element $\sigma(a)\sigma(a^{-1})$ of $\mathfrak{U}$. For the converse, it
is obvious that any word $\beta\in FM(Y\cup Y^{-1})$ representing
$\sigma(\alpha)\cdot(\sigma(a)\sigma(a^{-1}))$ is Peiffer equivalent to
$\alpha$. Of course the deletion has the obvious interpretation in our
semigroup theoretic terms as the inverse of the above process. We retain the
same names for our semigroup operations, that is insertion for multiplication
by $\sigma(a)\sigma(a^{-1})$ and deletion for its inverse. Related to these
operations on the elements of $\Upsilon$ we make the following definition.
###### Definition 3.2.
We denote by $\sim_{\mathfrak{U}}$ the equivalence relation in $\Upsilon$
generated by all pairs
$(\sigma(\alpha),\sigma(\alpha)\cdot\sigma(a)\sigma(a^{-1}))$ where
$\alpha\in\text{FM}(Y\cup Y^{-1})$ and $a\in Y\cup Y^{-1}$. We say that two
elements $\sigma(a_{1})\cdot\cdot\cdot\sigma(a_{n})$ and
$\sigma(b_{1})\cdot\cdot\cdot\sigma(b_{m})$ where $m,n\geq 0$ are Peiffer
equivalent in $\Upsilon$ if they fall in the same $\sim_{\mathfrak{U}}$-class.
From what we said before it is obvious that two $Y$-sequences $\alpha$ and
$\beta$ are Peiffer equivalent in the usual sense if and only if
$\sigma(\alpha)\sim_{\mathfrak{U}}\sigma(\beta)$. For this reason we decided
to make the following convention. If $\alpha=(a_{1},...,a_{n})$ is a
$Y$-sequence (resp. an identity $Y$-sequence), then its image in $\Upsilon$,
$\sigma(\alpha)$ will again be called a $Y$-sequence (resp. an identity
$Y$-sequence). In the future, instead of working directly with a $Y$-sequence
$\alpha$, we will work with its image $\sigma(\alpha)$.
It should be mentioned that the study of $\sim_{\mathfrak{U}}$
might be as hard as the study of Peiffer operations on $Y$-sequences, so at
this point it may seem that we have not made any progress at all. In fact,
this definition will become useful later in this section, but we have to
prove a few more things before we utilize it.
The process of inserting and deleting generators of $\mathfrak{U}$ in an
element of $\Upsilon$ is related to the following new concept. Given $U$ a
submonoid of a monoid $S$ and $d\in S$, we say that $d$ belongs to the
weak dominion of $U$, written shortly as $d\in\text{WDom}_{S}(U)$, if for
every group $G$ and all monoid homomorphisms $f,g:S\rightarrow G$ such that
$f(u)=g(u)$ for every $u\in U$, we have $f(d)=g(d)$. An analogue of the Stenström
version of Isbell’s theorem for the weak dominion holds true. The proof of
the “if” part of this analogue is similar to that of Isbell’s theorem, apart
from some minor differences that reflect the fact that we are working with
$\text{WDom}$ rather than $\text{Dom}$ and that will become clear along the
proof, while the converse relies on the universal property of
$\mu:S\rightarrow\mathcal{G}(S)$.
###### Proposition 3.3.
Let $S$ be a monoid, $U$ a submonoid and let $\hat{U}$ be the subgroup of
$\mathcal{G}(S)$ generated by elements $\mu(u)$ with $u\in U$. Then
$d\in\text{WDom}_{S}(U)$ if and only if $\mu(d)\in\hat{U}$.
###### Proof.
The set $\hat{A}=\mathcal{G}(S)\otimes_{\hat{U}}\mathcal{G}(S)$ has an obvious
$(\mathcal{G}(S),\mathcal{G}(S))$-bisystem structure. The free abelian group
$\mathbb{Z}\hat{A}$ on $\hat{A}$ inherits a
$(\mathcal{G}(S),\mathcal{G}(S))$-bisystem structure if we define
$g\cdot\sum z_{i}(g_{i}\otimes_{\hat{U}}h_{i})=\sum
z_{i}(gg_{i}\otimes_{\hat{U}}h_{i})\text{ and }\left(\sum
z_{i}(g_{i}\otimes_{\hat{U}}h_{i})\right)\cdot g=\sum
z_{i}(g_{i}\otimes_{\hat{U}}h_{i}g).$
The set $\mathcal{G}(S)\times\mathbb{Z}\hat{A}$ becomes a group by defining
$(g,\sum z_{i}g_{i}\otimes_{\hat{U}}h_{i})\cdot(g^{\prime},\sum
z^{\prime}_{i}g^{\prime}_{i}\otimes_{\hat{U}}h^{\prime}_{i})=(gg^{\prime},\sum
z_{i}g_{i}\otimes_{\hat{U}}h_{i}g^{\prime}+\sum
z^{\prime}_{i}gg^{\prime}_{i}\otimes_{\hat{U}}h^{\prime}_{i}).$
The associativity is proved easily. The unit element is $(1,0)$ and for every
$(g,\sum z_{i}g_{i}\otimes_{\hat{U}}h_{i})$ its inverse is the element
$(g^{-1},-\sum z_{i}g^{-1}g_{i}\otimes_{\hat{U}}h_{i}g^{-1})$. Let us now
define
$\beta:S\rightarrow\mathcal{G}(S)\times\mathbb{Z}\hat{A}\text{ by
}s\mapsto(\mu(s),0),$
which is clearly a monoid homomorphism, and
$\gamma:S\rightarrow\mathcal{G}(S)\times\mathbb{Z}\hat{A}\text{ by
}s\mapsto(\mu(s),\mu(s)\otimes_{\hat{U}}1-1\otimes_{\hat{U}}\mu(s)),$
which is again seen to be a monoid homomorphism. These two coincide on $U$
since for every $u\in U$
$\gamma(u)=(\mu(u),\mu(u)\otimes_{\hat{U}}1-1\otimes_{\hat{U}}\mu(u))=(\mu(u),0)=\beta(u).$
The last equality and the assumption that $d\in\text{WDom}_{S}(U)$ imply that
$\beta(d)=\gamma(d)$, therefore
$(\mu(d),0)=(\mu(d),\mu(d)\otimes_{\hat{U}}1-1\otimes_{\hat{U}}\mu(d)),$
which shows that $\mu(d)\otimes_{\hat{U}}1=1\otimes_{\hat{U}}\mu(d)$ in the
tensor product $\mathcal{G}(S)\otimes_{\hat{U}}\mathcal{G}(S)$ and therefore
theorem 8.3.3, [27], applied for monoids $\mathcal{G}(S)$ and $\hat{U}$,
implies that $\mu(d)\in\text{Dom}_{\mathcal{G}(S)}(\hat{U})$. But
$\text{Dom}_{\mathcal{G}(S)}(\hat{U})=\hat{U}$ as from theorem 8.3.6, [27]
every inverse semigroup is absolutely closed, whence $\mu(d)\in\hat{U}$.
Conversely, suppose that $\mu(d)\in\hat{U}$ and we want to show that
$d\in\text{WDom}_{S}(U)$. Let $G$ be a group and $f,g:S\rightarrow G$ two
monoid homomorphisms that coincide in $U$, therefore the group homomorphisms
$\hat{f},\hat{g}:\mathcal{G}(S)\rightarrow G$ of the universal property of
$\mu$ coincide in $\hat{U}$ which, from our assumption, implies that
$\hat{f}(\mu(d))=\hat{g}(\mu(d))$, and then $f(d)=g(d)$ proving that
$d\in\text{WDom}_{S}(U)$. ∎
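As a quick check of the inverse formula used in the proof, write $x=\sum z_{i}g_{i}\otimes_{\hat{U}}h_{i}$; then
$(g,x)\cdot(g^{-1},-g^{-1}\cdot x\cdot g^{-1})=(gg^{-1},\;x\cdot g^{-1}+g\cdot(-g^{-1}\cdot x\cdot g^{-1}))=(1,\;x\cdot g^{-1}-x\cdot g^{-1})=(1,0).$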
Given a presentation $\mathcal{P}=(\mathbf{x},\mathbf{r})$ for a group $G$, we
consider the following crossed module. If $\mathcal{G}(\Upsilon)$ is the
universal group associated with $\mathcal{P}$ and $\hat{F}$ is the free group
on $\mathbf{x}$, then we define
$\tilde{\theta}:\mathcal{G}(\Upsilon)\rightarrow\hat{F}\text{ by
}\mu\sigma(^{u}{r})^{\varepsilon}\mapsto ur^{\varepsilon}u^{-1}.$
An action of $\hat{F}$ on $\mathcal{G}(\Upsilon)$ is given by
${}^{v}(\mu\sigma(^{u}r)^{\varepsilon})=\mu\sigma(^{vu}r)^{\varepsilon}$ for
every $v\in\hat{F}$ and every generator $\mu\sigma((^{u}r)^{\varepsilon})$ of
$\mathcal{G}(\Upsilon)$. It is easy to check that the triple
$(\mathcal{G}(\Upsilon),\hat{F},\tilde{\theta})$ is a crossed module over
$\hat{F}$. The elements of $\text{Ker}(\tilde{\theta})$ are central, therefore
$\text{Ker}(\tilde{\theta})$ is an abelian subgroup of $\mathcal{G}(\Upsilon)$
on which $G$ acts on the left by the rule
${}^{g}(\mu\sigma(a_{1},...,a_{n})\iota\mu\sigma(b_{1},...,b_{m}))=\mu\sigma(^{w}{a_{1}},...,^{w}{a_{n}})\iota\mu\sigma(^{w}{b_{1}},...,^{w}{b_{m}}),$
where $w$ is a word in $\hat{F}$ representing $g$. With this action
$\text{Ker}(\tilde{\theta})$ becomes a left $G$-module which we call the
generalized module of identities for $\mathcal{P}$ and is denoted by
$\tilde{\Pi}$. Also we note that $\hat{\mathfrak{U}}$ is a sub $G$-module of
$\tilde{\Pi}$. The module of identities $\pi$ for $\mathcal{P}$ is obtained
from $\tilde{\Pi}$ by factoring out $\hat{\mathfrak{U}}$. In terms of
$\tilde{\Pi}$ and $\hat{\mathfrak{U}}$ we prove the following analogue of
theorem 3.1 of [38].
###### Theorem 3.4.
The following assertions are equivalent.
* (i)
The presentation $\mathcal{P}=(\mathbf{x},\mathbf{r})$ is aspherical.
* (ii)
For every identity $Y$-sequence $d$,
$d\in\text{WDom}_{\Upsilon}(\mathfrak{U})$.
* (iii)
$\tilde{\Pi}=\hat{\mathfrak{U}}$.
###### Proof.
$(i)\Rightarrow(ii)$ Let
$d=\sigma(a_{1})\cdot\cdot\cdot\sigma(a_{n})\in\Upsilon$ be any identity
$Y$-sequence and as such it has to be Peiffer equivalent to 1. We proceed by
showing that $d\in\text{WDom}_{\Upsilon}(\mathfrak{U})$. Let $G$ be any group
and $f,g:\Upsilon\rightarrow G$ two monoid homomorphisms that coincide in
$\mathfrak{U}$ and we want to show that $f(d)=g(d)$. The proof will be done by
induction on the minimal number $h(d)$ of insertions and deletions needed to
transform $d=\sigma(a_{1})\cdot\cdot\cdot\sigma(a_{n})$ to $1$. If $h(d)=1$,
then $d\in\mathfrak{U}$ and $f(d)=g(d)$. Suppose that $h(d)=n>1$ and let
$\tau$ be the first operation performed on $d$ in a series of operations of
minimal length. After $\tau$ is performed on $d$, we obtain an element
$d^{\prime}$ with $h(d^{\prime})=n-1$. By the induction hypothesis,
$f(d^{\prime})=g(d^{\prime})$ and we want to prove that $f(d)=g(d)$. There are
two possible cases for $\tau$. First, $\tau$ is an insertion and let
$u=\sigma(a)\sigma(a^{-1})\in\mathfrak{U}$ be the element inserted. It follows
that $f(d^{\prime})=f(d)f(u)$ and $g(d^{\prime})=g(d)g(u)$, but $f(u)=g(u)$,
therefore from the cancellation law in the group $G$ we get $f(d)=g(d)$. Second,
$\tau$ is a deletion and let $u=\sigma(a)\sigma(a^{-1})\in\mathfrak{U}$ be the
element deleted, that is $d=d^{\prime}u$. It follows immediately from the
assumptions that $f(d)=g(d)$ proving that
$d\in\text{WDom}_{\Upsilon}(\mathfrak{U})$.
$(ii)\Rightarrow(iii)$ Let $\tilde{d}\in\tilde{\Pi}$. We may assume without
loss of generality that no $\iota(\mu\sigma(^{u}{r})^{\varepsilon})$ is
represented in $\tilde{d}$, for if there is any such occurrence, we can
multiply $\tilde{d}$ by
$\mu\sigma((^{u}{r})^{\varepsilon}(^{u}{r})^{-\varepsilon})$ to obtain in
return $\tilde{d}^{\prime}$ where $\iota(\mu\sigma(^{u}{r})^{\varepsilon})$ is
now replaced by $\mu\sigma((^{u}{r})^{-\varepsilon})$. It is obvious that if
$\tilde{d}^{\prime}\in\hat{\mathfrak{U}}$, then
$\tilde{d}\in\hat{\mathfrak{U}}$ and conversely. Let now $d$ be any preimage
of $\tilde{d}$ under $\mu$. It is clear that $d$ is an identity $Y$-sequence
and as such $d\in\text{WDom}_{\Upsilon}(\mathfrak{U})$. Then proposition 3.3
implies that $\tilde{d}=\mu(d)\in\hat{\mathfrak{U}}$.
$(iii)\Rightarrow(i)$ Assume that $\tilde{\Pi}=\hat{\mathfrak{U}}$ and we want
to show that any identity $Y$-sequence $d$ is Peiffer equivalent to 1. From
the assumption for $d$ we have that $\mu(d)\in\hat{\mathfrak{U}}$ and then
proposition 3.3 implies that $d\in\text{WDom}_{\Upsilon}(\mathfrak{U})$.
Consider the group $H/P$ as a quotient of $\mathcal{G}(\Upsilon)$ obtained by
identifying $\iota(\mu\sigma({{}^{u}{r}}))$ with $\mu\sigma((^{u}{r})^{-1})$
and let $\nu:\mathcal{G}(\Upsilon)\rightarrow H/P$ be the respective quotient
morphism. Writing $\tau$ for the zero morphism from $\Upsilon$ to $H/P$, we
see that $\tau$ and the composition $\nu\mu$ coincide in $\mathfrak{U}$,
therefore since $d\in\text{WDom}_{\Upsilon}(\mathfrak{U})$, it follows that
$\nu\mu(d)=1$ in $H/P$. The asphericity of $\mathcal{P}$ now follows from
theorem 2.7, p.71 of [17]. ∎
Before we prove our next result we recall the definition of the relation
module $\mathcal{N}(\mathcal{P})$. Given $\mathcal{P}=(\mathbf{x},\mathbf{r})$
a presentation for a group $G$, we let $\alpha:\hat{F}\rightarrow G$ and
$\beta:N\rightarrow N/[N,N]$ be the canonical homomorphisms where $N$ is the
normal closure of $\mathbf{r}$ in $\hat{F}$ and $[N,N]$ its commutator
subgroup. There is a well defined $G$-action on
$\mathcal{N}(\mathcal{P})=N/[N,N]$ given by
$w^{\alpha}\cdot s^{\beta}=(w^{-1}sw)^{\beta}$
for every $w\in\hat{F}$ and $s\in N$. This action extends to an action of
$\mathbb{Z}G$ over $\mathcal{N}(\mathcal{P})$ by setting
$(w_{1}^{\alpha}\pm w_{2}^{\alpha})\cdot
s^{\beta}=(w_{1}^{-1}sw_{1}w_{2}^{-1}s^{\pm 1}w_{2})^{\beta}.$
When $\mathcal{P}$ is aspherical, $\mathcal{N}(\mathcal{P})$ is a free
$\mathbb{Z}G$-module with basis the set of elements $r^{\beta}$ with
$r\in\mathbf{r}$.
###### Proposition 3.5.
If $\mathcal{P}$ is aspherical, then $\hat{\mathfrak{U}}$ is a free $G$-module
with basis equipotent to the set $\mathbf{r}$.
###### Proof.
The result follows if we show that
$\hat{\mathfrak{U}}\cong\mathcal{N}(\mathcal{P})$ as $G$-modules. For this we
define
$\Omega:\mathcal{N}(\mathcal{P})\rightarrow\hat{\mathfrak{U}}$
on free generators by $r^{\beta}\mapsto\mu\sigma(rr^{-1})$ which is clearly
well defined and a surjective morphism of $G$-modules. Now we prove that
$\Omega$ is injective. Let
$\xi=\sum_{i=1}^{n}u_{i}^{\alpha}\cdot
r_{i}^{\beta}-\sum_{j=n+1}^{m}v_{j}^{\alpha}\cdot
r_{j}^{\beta}\in\text{Ker}(\Omega),$
which means that
$\prod_{i=1}^{n}\mu\sigma(^{u_{i}}r_{i}(^{u_{i}}r_{i})^{-1})\iota\left(\prod_{j=n+1}^{m}\mu\sigma(^{v_{j}}r_{j}(^{v_{j}}r_{j})^{-1})\right)=1.$
(1)
To prove that $\xi=0$ we will proceed as follows. Define
$\gamma:FM(Y\cup Y^{-1})\rightarrow\mathcal{N}(\mathcal{P})$
on free generators as follows
$(^{u}r)^{\varepsilon}\mapsto u^{\alpha}\cdot r^{\beta}.$
It is easy to see that $\gamma$ is compatible with the defining relations of
$\Upsilon$, hence there is $g:\Upsilon\rightarrow\mathcal{N}(\mathcal{P})$ and
then the universal property of $\mu$ implies the existence of
$\hat{g}:\mathcal{G}(\Upsilon)\rightarrow\mathcal{N}(\mathcal{P})$ such that
$\hat{g}\mu=g$. Applying $\hat{g}$ to both sides of (1), we obtain
$2\cdot\sum_{i=1}^{n}u_{i}^{\alpha}\cdot
r_{i}^{\beta}-2\cdot\sum_{j=n+1}^{m}v_{j}^{\alpha}\cdot r_{j}^{\beta}=0,$
proving that $\xi=0$. ∎
## 4 Proof of the main theorem
The proof of our main theorem relies heavily on two papers. The first is
[36], where McGlashan et al. extended the Squier complex of a monoid
presentation to a 3-complex and obtained a short exact sequence involving data
from this complex. This sequence will be crucial in the proof of our theorem.
The second is [40], where Pride realizes the second homotopy group
associated with a group presentation as the first homotopy group of a certain
extension of the Squier complex arising from that presentation. For the sake
of completeness we have added below a number of sections explaining the
material used in our proofs. Section 4.1 gives some basic material
about rewriting systems since they are used in the construction of our
complexes and in our proofs. In Section 4.2 we explain in some details how the
Squier complex of a monoid presentation is defined and the cellular chain
complex associated with it. Further in section 4.3 we give the definition of
the extended Squier complex as it appears in [36] and some of the homological
consequences that will be used in our proofs. Section 4.4 shows how the 0 and
the 1-skeleton of the Squier complex is well ordered, and in the case when the
rewriting system is complete, it shows how these well orders induce another
well order in the set of all 2-cells of the extended 3-complex. This new well
order will be used further in section 4.6. Section 4.5 is about the Knuth-
Bendix completion procedure since it is used to give a new and shorter proof
of the key result of [36] regarding the short exact sequence we mentioned
above. This proof is given in section 4.6. Section 4.7 is devoted to
introducing the Pride complex associated with a group presentation and to
explain ideas and results from [40] since we make extensive use of them in our
proofs.
Finally, it is important to mention that theorem 6.6 of [33] is vital in the
proof of key lemma 4.14.
### 4.1 Some basic concepts from rewriting systems
A rewriting system is a pair $\mathcal{P}=(\mathbf{x},\mathbf{r})$ where
$\mathbf{x}$ is a non-empty set and $\mathbf{r}$ is a set of rules
$r=(r_{+1},r_{-1})\in F\times F$ where $F$ is the free monoid on $\mathbf{x}$.
Associated with $\mathbf{r}$ is the so-called single-step reduction relation
on words
$\rightarrow_{\mathbf{r}}=\\{(ur_{+1}v,ur_{-1}v)|r\in\mathbf{r}\text{ and
}u,v\in F\\}.$
The reflexive and transitive closure of $\rightarrow_{\mathbf{r}}$ is denoted
by $\rightarrow_{\mathbf{r}}^{\ast}$, and the reflexive, transitive and
symmetric closure is denoted by $\leftrightarrow_{\mathbf{r}}^{\ast}$ and is
also known as the Thue congruence generated by $\mathbf{r}$. The quotient
$F/\leftrightarrow_{\mathbf{r}}^{\ast}$ forms a monoid $S$ whose elements are
the congruence classes $\bar{u}$ of words $u\in F$, and the multiplication is
given by $\bar{u}\cdot\bar{v}=\overline{uv}$. We say that the monoid $S$ is
given by $\mathcal{P}$, or that $\mathcal{P}$ is a presentation for $S$.
A rewriting system $\mathcal{P}=(\mathbf{x},\mathbf{r})$ is noetherian if
there is no infinite chain
$w\rightarrow_{\mathbf{r}}w^{\prime}\rightarrow_{\mathbf{r}}\dots$
and is confluent if whenever we have $w\rightarrow_{\mathbf{r}}^{\ast}w_{1}$
and $w\rightarrow_{\mathbf{r}}^{\ast}w_{2}$, then there is $z\in F$ such that
$w_{1}\rightarrow_{\mathbf{r}}^{\ast}z$ and
$w_{2}\rightarrow_{\mathbf{r}}^{\ast}z$. A rewriting system
$\mathcal{P}=(\mathbf{x},\mathbf{r})$ is complete if it is both noetherian and
confluent.
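The definitions above are algorithmic in nature. As an illustration only (not part of the original text), here is a minimal Python sketch modelling rules as pairs of strings $(r_{+1},r_{-1})$ over the alphabet $\mathbf{x}$: `rewrite_once` performs one single-step reduction $\rightarrow_{\mathbf{r}}$, and `normal_form` iterates it; termination of `normal_form` is guaranteed only when the system is noetherian.

```python
def rewrite_once(word, rules):
    """One single-step reduction u r_{+1} v -> u r_{-1} v, or None when
    `word` is irreducible.  `rules` is a list of (r_plus, r_minus) strings."""
    for lhs, rhs in rules:
        i = word.find(lhs)
        if i != -1:
            return word[:i] + rhs + word[i + len(lhs):]
    return None

def normal_form(word, rules):
    """Iterate ->_r until an irreducible word is reached.  Terminates
    only for noetherian systems."""
    while True:
        nxt = rewrite_once(word, rules)
        if nxt is None:
            return word
        word = nxt
```

For instance, with the single rule $(ba,ab)$ the word $bab$ reduces to the irreducible word $abb$, and two words represent the same element of the presented monoid exactly when they lie in the same class of the Thue congruence.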
Let $\mathcal{P}=(\mathbf{x},\mathbf{r})$ be a presentation for a monoid $S$.
The natural epimorphism
$F\rightarrow S\text{ such that }w\mapsto\bar{w},$
where $F$ is the free monoid on $\mathbf{x}$, extends linearly to a ring
epimorphism
$\mathbb{Z}F\rightarrow\mathbb{Z}S$
of the corresponding integral monoid rings. The kernel of this epimorphism is
denoted by $J$ which as an abelian group is generated by all
$u(r_{+1}-r_{-1})v\text{ where }u,v\in F\text{ and }r\in\mathbf{r}.$
As a $(\mathbb{Z}F,\mathbb{Z}F)$-bimodule $J$ is generated by all
$r_{+1}-r_{-1}$.
### 4.2 The Squier complex of a monoid presentation
The material included in this section is taken from [36] (see also [35]). At
the end of the section we briefly recall the corresponding terminology of
[33], which differs slightly from ours. We explain this terminology because
theorem 6.6 of [33] is used in the proof of our key lemma 4.14.
For every rewriting system $\mathcal{P}=(\mathbf{x},\mathbf{r})$ we can define
its graph of derivations $\Gamma(\mathcal{P})$ whose vertices are the elements
of $F$, and the edges are all quadruples
$e=(w,r,\varepsilon,w^{\prime})\text{ where }w,w^{\prime}\in F,\varepsilon=\pm
1,r\in\mathbf{r},$
with initial, terminal and inverse functions
$\iota e=wr_{\varepsilon}w^{\prime},\tau e=wr_{-\varepsilon}w^{\prime}\text{
and }e^{-1}=(w,r,-\varepsilon,w^{\prime}).$
The edge $e$ is called positive if $\varepsilon=1$. We can think of
$\Gamma(\mathcal{P})$ as a one-dimensional CW-complex with 0-cells all the
elements of $F$ and with 1-cells all positive edges. We note here that
$e^{-1}=(w,r,-1,w^{\prime})$ is not a new edge attached to the complex, but is
defined to mean the topological inverse of the attaching map of
$e=(w,r,1,w^{\prime})$. A path $p$ of length $n$ in $\Gamma(\mathcal{P})$ is a
sequence of edges $p=e_{1}\dots e_{i}e_{i+1}\dots e_{n}$ where $\tau
e_{i}=\iota e_{i+1}$ for $1\leq i\leq n-1$. It is called positive if the edges
are positive, and is called closed if $\iota e_{1}=\tau e_{n}$.
There is a natural two-sided action of $F$ on $\Gamma(\mathcal{P})$. The
action on vertices is given by the multiplication of $F$, and the action of
$z,z^{\prime}\in F$ on edges $e=(w,r,\varepsilon,w^{\prime})$ is given by
$z.e.z^{\prime}=(zw,r,\varepsilon,w^{\prime}z^{\prime}),$
and sometimes is called translation. This action extends to paths in the
obvious way.
Note that there is a 1-1 correspondence between the elements of $S$ given by
$\mathcal{P}$ and the connected components of $\Gamma(\mathcal{P})$ since
$u\leftrightarrow_{\mathbf{r}}^{\ast}v$ if and only if there is a path in
$\Gamma(\mathcal{P})$ connecting $u$ with $v$. Also note that the generators
of $J$ as an abelian group are the elements $\iota e-\tau e$ where $e$ is a
positive edge.
We say that two positive edges $e_{1}$ and $e_{2}$ are disjoint if they can be
written in the form
$e_{1}=f_{1}.\iota f_{2},\qquad e_{2}=\iota f_{1}.f_{2}$
where $f_{1},f_{2}$ are positive edges. We say that an edge $e$ is left
reduced (resp. right reduced) if it cannot be written in the form $u.f$ (resp.
$f.u$) for some non-empty word $u\in F$ and an edge $f$. A pair of positive
edges with the same initial vertex forms a critical pair if either
* (1)
One of the pair is both left and right reduced (a critical pair of inclusion
type), or
* (2)
One of the pair is left reduced but not right reduced, and the other is right
reduced but not left reduced (a critical pair of overlapping type).
We say that a critical pair $(e_{1},e_{2})$ is resolvable if there are
positive paths (a resolution of the critical pair) from $\tau e_{1}$ and $\tau
e_{2}$ to a common vertex. It is well known [37] that, when the system
$\mathcal{P}=(\mathbf{x},\mathbf{r})$ is noetherian and if all the critical
pairs are resolvable, then the system is confluent.
The Squier complex $\mathcal{D}(\mathcal{P})$ associated with $\mathcal{P}$ is
a combinatorial 2-complex with 1-skeleton $\Gamma(\mathcal{P})$, to which, for
each pair of positive edges $e,f$ a 2-cell $[e,f]$ is attached along the
closed path
$\partial[e,f]=(e.\iota f)(\tau e.f)(e.\tau f)^{-1}(\iota e.f)^{-1}.$
Sometimes we refer to the 2-cell $[e,f]$ as a square 2-cell. The two-sided action
of $F$ on $\Gamma(\mathcal{P})$ extends to the 2-cells by
$w.[e,f].w^{\prime}=[w.e,f.w^{\prime}]\text{ where }w,w^{\prime}\in
F,e,f\text{ are positive edges}.$
We have the chain complex
$\mathbf{C}(\mathcal{D}):C_{2}\xrightarrow{\ \partial_{2}\ }C_{1}\xrightarrow{\ \partial_{1}\ }C_{0}\longrightarrow 0,$
where $C_{0}$, $C_{1}$, $C_{2}$ and $C_{3}$ are the free abelian groups
generated by all 0-cells, positive edges, 2-cells, and 3-cells respectively.
The boundary maps are given by
$\partial_{1}e=\iota e-\tau e\text{ where }e\text{ is a positive edge},$
$\partial_{2}[e,f]=e.(\iota f-\tau f)-(\iota e-\tau e).f\text{ where
}e,f\text{ are positive edges}.$
In the paper [33] of Otto and Kobayashi, a monoid presentation is denoted by
$(\Sigma,R)$ and the rewriting rules of $R$ are denoted by $r\rightarrow\ell$.
The edges of the graph of derivations in [33] are denoted by $(x,u,v,y)$ where
$x,y\in\Sigma^{\ast}$ and $(u\rightarrow v)\in E=R\cup R^{-1}$. In [33] one
considers the set of closed paths
$D=\\{e_{1}xu_{2}\circ v_{1}xe_{2}\circ e_{1}^{-1}xv_{2}\circ
u_{1}xe_{2}^{-1}|e_{1}=(u_{1},v_{1})\in R,e_{2}=(u_{2},v_{2})\in
R,x\in\Sigma^{\ast}\\}.$
It is important to observe that each circuit of $D$ is in fact the boundary of
a square 2-cell as the following shows
$e_{1}xu_{2}\circ v_{1}xe_{2}\circ e_{1}^{-1}xv_{2}\circ
u_{1}xe_{2}^{-1}=\partial[(1,r_{1},1,1),(x,r_{2},1,1)],$
where $r_{1}=e_{1}$ and $r_{2}=e_{2}$. The free $\mathbb{Z}\Sigma^{\ast}$-bimodule
$\mathbb{Z}\Sigma^{\ast}\cdot R\cdot\mathbb{Z}\Sigma^{\ast}$ considered
in [33] is the abelian group $C_{1}$ of our complex $\mathbf{C}(\mathcal{D})$,
and the maps $\partial_{1}$ are the same in both papers. On the other hand,
the free $\mathbb{Z}\Sigma^{\ast}$-bimodule $\mathbb{Z}\Sigma^{\ast}\cdot
D\cdot\mathbb{Z}\Sigma^{\ast}$ of [33] is the abelian group $C_{2}$ of
$\mathbf{C}(\mathcal{D})$, and the maps $\partial_{2}$ are the same in both
papers. Finally, the exact sequence of theorem 6.6 of [33] in our notations
will be
$C_{2}\xrightarrow{\ \partial_{2}\ }J.R.\mathbb{Z}\Sigma^{\ast}+\mathbb{Z}\Sigma^{\ast}.R.J\xrightarrow{\ \partial_{1}\ }J^{2}\longrightarrow 0.$
The interpretation of the exactness in the middle of the above sequence is
that
$\mathrm{Ker}\,\partial_{1}\cap(J.R.\mathbb{Z}\Sigma^{\ast}+\mathbb{Z}\Sigma^{\ast}.R.J)=\mathrm{Im}\,\partial_{2}$.
### 4.3 The extended Squier complex
Assume now that $\mathbf{p}$ is a set of closed paths in
$\mathcal{D}(\mathcal{P})$. In [36] the complex $\mathcal{D}(\mathcal{P})$ has
been extended to a 3-complex $(\mathcal{D},\mathbf{p})$ in the following way.
We add to $\mathcal{D}(\mathcal{P})$ additional 2-cells $[u,p,v]$ attached
along the closed path
$\partial[u,p,v]=u.p.v\text{ where }u,v\in F,\text{ and }p\in\mathbf{p}.$
The construction is then completed by adding 3-cells as follows. For each
positive edge $f$ and each 2-cell $\sigma$ with
$\partial\sigma=e_{1}^{\varepsilon_{1}}\dots e_{n}^{\varepsilon_{n}}$, 3-cells
$[f,\sigma]$ and $[\sigma,f]$ are attached to the 2-skeleton by mapping their
boundaries to respectively:
* (1)
the 2-cells $\iota f.\sigma$, $\tau f.\sigma$ together with 2-cells
$[f,e_{i}]$ for $1\leq i\leq n$,
* (2)
the 2-cells $\sigma.\iota f$, $\sigma.\tau f$ together with 2-cells
$[e_{i},f]$ for $1\leq i\leq n$.
The 2-sided action of $F$ on the 2-skeleton extends naturally to the 3-cells.
For $[f,\sigma]$, $[\sigma,f]$ and $u,v\in F$,
$u.[f,\sigma].v=[u.f,\sigma.v]\text{ and }u.[\sigma,f].v=[u.\sigma,f.v].$
The complex $\mathbf{C}(\mathcal{D})$ now extends to
$\mathbf{C}(\mathcal{D},\mathbf{p}):0\longrightarrow C_{3}^{\mathbf{p}}\xrightarrow{\ \tilde{\partial}_{3}\ }C_{2}^{\mathbf{p}}\oplus C_{2}\xrightarrow{\ \tilde{\partial}_{2}\ }C_{1}\xrightarrow{\ \partial_{1}\ }C_{0}\longrightarrow 0$
where $C_{3}^{\mathbf{p}}$ is the free abelian group generated by the set of
all 3-cells, and $C_{2}^{\mathbf{p}}$ is the free abelian group generated by
the set of all newly added 2-cells $\sigma=[u,p,v]$. The boundary map
$\tilde{\partial}_{2}$ restricted to $C_{2}$ is $\partial_{2}$, and for every
$[u,p,v]$ where $p\in\mathbf{p}$ with $\partial p=f_{1}^{\delta_{1}}\dots
f_{n}^{\delta_{n}}$, we define
$\tilde{\partial}_{2}[u,p,v]=\sum_{i=1}^{n}\delta_{i}u.f_{i}.v.$
Finally, the definition of $\tilde{\partial}_{3}$ is done in the following
way. For every positive edge $f$ and every 2-cell $\sigma$ with
$\tilde{\partial}_{2}\sigma=\sum_{i=1}^{n}\varepsilon_{i}e_{i}$ we have
$\tilde{\partial}_{3}[f,\sigma]=(\iota f-\tau
f).\sigma+\sum_{i=1}^{n}\varepsilon_{i}[f,e_{i}],$ (2)
and
$\tilde{\partial}_{3}[\sigma,f]=\sigma.(\iota f-\tau
f)-\sum_{i=1}^{n}\varepsilon_{i}[e_{i},f].$ (3)
The definition of the 2-cells $[u,p,v]$ where $u,v\in F,p\in\mathbf{p}$
suggests that $C_{2}^{\mathbf{p}}$ can be regarded as a free
$(\mathbb{Z}F,\mathbb{Z}F)$-bimodule with basis
$\hat{\mathbf{p}}=\\{[1,p,1]|p\in\mathbf{p}\\}.$
This enables us to define a $(\mathbb{Z}F,\mathbb{Z}F)$-homomorphism
$\varphi:C_{2}\oplus
C_{2}^{\mathbf{p}}\rightarrow\mathbb{Z}S\mathbf{p}\mathbb{Z}S$
by mapping $C_{2}$ to 0, and every 2-cell $[u,p,v]$ to $\bar{u}.p.\bar{v}$.
The kernel of $\varphi$ is denoted by $K^{\mathbf{p}}$.
It is shown in [36] that
$K^{\mathbf{p}}=C_{2}+J.\hat{\mathbf{p}}.\mathbb{Z}F+\mathbb{Z}F.\hat{\mathbf{p}}.J.$
Also it is shown that $B_{2}(\mathcal{D},\mathbf{p})\subseteq K^{\mathbf{p}}$
and that the restriction of $\tilde{\partial}_{2}$ on $K^{\mathbf{p}}$ sends
$K^{\mathbf{p}}$ onto $B_{1}(\mathcal{D})$, therefore we have the complex
$0\longrightarrow B_{2}(\mathcal{D},\mathbf{p})\xrightarrow{\ \text{incl.}\ }K^{\mathbf{p}}\xrightarrow{\ \tilde{\partial}_{2}\ }B_{1}(\mathcal{D})\longrightarrow 0$
(4)
It is proved in Proposition 14 of [36] that when $\mathbf{p}$ is a homology
trivializer, then the sequence (4) is exact. We will give a new proof in
section 4.6 for the exactness of (4). Since the proof uses the so called
Knuth-Bendix completion procedure, we will explain this procedure in some
details in section 4.5. Before doing that we will introduce in the next
section some useful orders in the skeleta of $\mathcal{D}(\mathcal{P})$.
### 4.4 Ordering the Squier complex
As before $\mathcal{P}=(\mathbf{x},\mathbf{r})$ is a rewriting system and
$\mathcal{D}(\mathcal{P})$ its Squier complex. Assume in addition that for
every $(r_{+1},r_{-1})\in\mathbf{r}$, $r_{+1}\neq r_{-1}$. Let
$\vartriangleright$ be a well ordering on $\mathbf{x}$. The corresponding
length-lexicographical ordering on $F$ is defined as follows. For $u,v\in F$,
we write $u>_{llex}v$ if and only if $|u|>|v|$, or $|u|=|v|$, $u=au^{\prime}$,
$v=bv^{\prime}$ where $a,b\in\mathbf{x}$, $u^{\prime},v^{\prime}\in F$, and
one of the following holds:
* (i)
$a\vartriangleright b$,
* (ii)
$a=b$, $u^{\prime}>_{llex}v^{\prime}$.
It turns out that $>_{llex}$ is a well ordering on $F$ (see [3]). We can
always assume that $>_{llex}$ is compatible with $\mathbf{r}$ in the sense
that $r_{+1}>_{llex}r_{-1}$, for if there are rules $(r_{+1},r_{-1})$
satisfying the opposite, we can exchange $r_{+1}$ with $r_{-1}$. Well
orderings on $F$ that are compatible with $\mathbf{r}$ are usually called
reduction well orderings and are the starting point of the Knuth-Bendix
completion procedure.
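As a small illustration (ours, not from the original text), the length-lexicographical comparison can be sketched in Python as follows; the list `alphabet` encodes the base well-ordering $\vartriangleright$ on $\mathbf{x}$, with later entries being greater.

```python
def llex_greater(u, v, alphabet):
    """Decide u >_llex v over the free monoid on `alphabet`, where the
    list order of `alphabet` is the base well-ordering (later = greater)."""
    if len(u) != len(v):
        return len(u) > len(v)        # longer words are greater
    rank = {a: i for i, a in enumerate(alphabet)}
    for a, b in zip(u, v):
        if a != b:
            return rank[a] > rank[b]  # first differing letter decides
    return False                      # u == v
```

With $a\vartriangleleft b$, for example, $ba>_{llex}ab$ (equal length, $b\vartriangleright a$ at the first position) and $aaa>_{llex}bb$ (longer word wins), so every rule can be oriented to make the left-hand side greater.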
So far we have defined a reduction well order on the 0-skeleton of
$\mathcal{D}(\mathcal{P})$ which will be denoted by $\prec_{0}$. This order
induces a noetherian (well founded) partial order in the 1-skeleton of
$\mathcal{D}(\mathcal{P})$ in the following way. For $e=(u,r,+1,v)$ and
$f=(u^{\prime},r^{\prime},+1,v^{\prime})$ positive edges in
$\mathcal{D}(\mathcal{P})$, we define $e\prec_{1}f$ if and only if $\iota
e=\iota f$, and one of the following occurs:
* (i)
$v^{\prime}$ is a proper suffix of $v$, or
* (ii)
$v=v^{\prime}$ and $|r_{+1}|<|r^{\prime}_{+1}|$, or
* (iii)
$v=v^{\prime}$, $r_{+1}=r^{\prime}_{+1}$ and $r_{-1}\prec_{0}r^{\prime}_{-1}$.
It turns out that $\prec_{1}$ is a partial order and that it is well founded.
Further, assume that $\mathcal{P}=(\mathbf{x},\mathbf{r})$ is confluent, so
that all the critical pairs of positive edges resolve. In that case, we attach
to $\mathcal{D}(\mathcal{P})$ 2-cells $\mathbf{p}$ by choosing resolutions for
every critical pair of positive edges $(e,f)$ in the following way. If
$p_{e}$, $p_{f}$ are positive paths from $\tau e$ and $\tau f$ respectively to
a common vertex, then the boundary of the 2-cell $\sigma$ corresponding to
$(e,f)$ is
$\partial\sigma=ep_{e}p_{f}^{-1}f^{-1}.$
Also we attach 2-cells $u.\sigma.v$ for every $u,v\in F$ along the loop
$u.\partial\sigma.v$. As explained in section 4.3, this new
2-complex extends to a 3-complex denoted there by $(\mathcal{D},\mathbf{p})$.
It is important to mention that every 2-cell of $(\mathcal{D},\mathbf{p})$,
including the square 2-cells, is uniquely determined by the pair $(e,f)$ of
edges meeting its maximal vertex $w=\iota e=\iota f$ (according to
$\prec_{0}$). For this reason, we write the 2-cell as $[w;(e,f)]$. Now we
extend the orders $\prec_{0}$ and $\prec_{1}$ to the 2-skeleton of the
3-complex $(\mathcal{D},\mathbf{p})$ as follows. For every two 2-cells
$[w;(e,f)]$ and $[w^{\prime};(e^{\prime},f^{\prime})]$ we say that
$[w;(e,f)]\prec_{2}[w^{\prime};(e^{\prime},f^{\prime})]$ if and only if:
* (i)
$w\prec_{0}w^{\prime}$; or
* (ii)
$w=w^{\prime}$ and $f\prec_{1}f^{\prime}$; or
* (iii)
$w=w^{\prime}$, $f=f^{\prime}$ and $e\prec_{1}e^{\prime}$.
This is a well founded total order in the set of all 2-cells of
$(\mathcal{D},\mathbf{p})$.
Under the current assumptions, similarly to 2-cells, every 3-cell is uniquely
determined by three positive edges $e_{1}\prec_{1}e_{2}\prec_{1}e_{3}$ with
initial the maximal vertex $w$ of the 3-cell, where either $e_{1}$ is disjoint
from $e_{2}$ and $e_{3}$, or $e_{3}$ is disjoint from $e_{1}$ and $e_{2}$. For
this reason we write the 3-cell as $[w;(e_{1},e_{2},e_{3})]$. By (2) and (3)
we see that
$\tilde{\partial}_{3}[w;(e_{1},e_{2},e_{3})]=[w;(e_{2},e_{3})]-[w;(e_{1},e_{3})]+[w;(e_{1},e_{2})]+\varsigma$
(5)
where $\varsigma$ is a 2-chain made up of 2-cells all of which have maximal
vertices less than $w$. Also note that the maximal 2-cell represented in
$\tilde{\partial}_{3}[w;(e_{1},e_{2},e_{3})]$ is $[w;(e_{2},e_{3})]$.
### 4.5 The Knuth-Bendix completion procedure
The Knuth-Bendix procedure [3] produces, from any given rewriting system, a
complete system equivalent to it. Given a rewriting system
$\mathcal{P}=(\mathbf{x},\mathbf{r})$ and a reduction well order $\succ$ on
$F$ that is compatible with $\mathbf{r}$ (there is always such one as
explained in section 4.4), one can produce a complete rewriting system
$\mathcal{P}^{\infty}$ that is equivalent to $\mathcal{P}$ in the following
way. Put $\mathbf{r}_{0}=\mathbf{r}$. For each non-resolvable pair of edges
$(e,f)$ in $\mathcal{D}(\mathcal{P})$ we choose positive paths $p_{e}$, $p_{f}$
from $\tau e$ and $\tau f$ respectively to distinct irreducibles. Let
$\mathbf{r}_{1}$ be the set of rules obtained from $\mathbf{r}$ by adding for
each such critical pair $(e,f)$ the rule $(\tau p_{e},\tau p_{f})$ if $\tau
p_{e}\succ\tau p_{f}$, and the rule $(\tau p_{f},\tau p_{e})$ otherwise.
It is clear that $\mathcal{P}_{1}=(\mathbf{x},\mathbf{r}_{1})$ is equivalent
to $\mathcal{P}=(\mathbf{x},\mathbf{r})$ and that
$\mathbf{r}\subseteq\mathbf{r}_{1}$ where the inclusion is strict if
$\mathcal{P}$ is not complete. Assume by induction that we have defined a
sequence of equivalent rewriting systems
$\mathcal{P}=(\mathbf{x},\mathbf{r}_{0}),...,\mathcal{P}_{n-1}=(\mathbf{x},\mathbf{r}_{n-1}),\mathcal{P}_{n}=(\mathbf{x},\mathbf{r}_{n}),$
and consequently, an increasing sequence of complexes
$\mathcal{D}(\mathcal{P})\subseteq\dots\subseteq\mathcal{D}(\mathcal{P}_{n-1})\subseteq\mathcal{D}(\mathcal{P}_{n}),$
where $\mathcal{P}_{n}=(\mathbf{x},\mathbf{r}_{n})$ is obtained from
$\mathcal{P}_{n-1}=(\mathbf{x},\mathbf{r}_{n-1})$ by resolving all the non-
resolvable critical pairs of $\mathcal{D}(\mathcal{P}_{n-1})$. Put
$\mathbf{r}_{\infty}=\underset{n\geq 0}{\cup}\mathbf{r}_{n}$ and let
$\mathcal{P}_{\infty}=(\mathbf{x},\mathbf{r}_{\infty})$ be the resulting
rewriting system. The corresponding complex
$\mathcal{D}(\mathcal{P}_{\infty})$ will later be denoted by
$\mathcal{D}^{\infty}$. The rewriting system
$\mathcal{P}_{\infty}=(\mathbf{x},\mathbf{r}_{\infty})$ is obviously
equivalent to $\mathcal{P}$ and it is complete since it is compatible with the
order $\succ$ on $F$ and for every non-resolvable pair $(e,f)$ of edges found
in some $\mathcal{D}(\mathcal{P}_{n})$, there is an edge $g$ in
$\mathcal{D}(\mathcal{P}_{n+1})$ connecting the endpoints of the positive
paths $p_{e}$ and $p_{f}$ of $\mathcal{D}(\mathcal{P}_{n})$.
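The procedure just described can be sketched in Python for string rewriting systems. This is a naive illustration under stated simplifications, not the construction used in our proofs: it handles only overlapping-type critical pairs, orients new rules by the length-lexicographical order $\succ$, and caps the number of passes, since completion need not terminate in general.

```python
def _nf(w, rules):
    """Reduce w to an irreducible word (terminates for noetherian systems)."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            i = w.find(lhs)
            if i != -1:
                w = w[:i] + rhs + w[i + len(lhs):]
                changed = True
                break
    return w

def _llex_gt(u, v, alphabet):
    """Length-lexicographical u >_llex v over the ordered alphabet."""
    if len(u) != len(v):
        return len(u) > len(v)
    rank = {a: i for i, a in enumerate(alphabet)}
    return next((rank[a] > rank[b] for a, b in zip(u, v) if a != b), False)

def knuth_bendix(rules, alphabet, max_rounds=20):
    """Naive Knuth-Bendix completion: reduce both sides of every
    overlapping critical pair to normal form and, when they differ, add
    them as a new rule oriented by >_llex, until every pair resolves."""
    rules = list(rules)
    for _ in range(max_rounds):
        new = []
        for l1, r1 in rules:
            for l2, r2 in rules:
                # a proper suffix of l1 overlapping a proper prefix of l2
                for k in range(1, min(len(l1), len(l2))):
                    if l1[-k:] == l2[:k]:
                        n1 = _nf(r1 + l2[k:], rules)   # reduce l1-side first
                        n2 = _nf(l1[:-k] + r2, rules)  # reduce l2-side first
                        if n1 != n2:
                            rule = (n1, n2) if _llex_gt(n1, n2, alphabet) else (n2, n1)
                            if rule not in rules and rule not in new:
                                new.append(rule)
        if not new:
            return rules  # every critical pair resolves: system is complete
        rules.extend(new)
    raise RuntimeError("no completion found within max_rounds")
```

For the single rule $(aba,b)$, one pass adds the rule $(bba,abb)$, after which all overlaps resolve, giving the complete system $\mathbf{r}_{\infty}=\{(aba,b),(bba,abb)\}$.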
### 4.6 A shorter proof for the exactness of (4)
The proof that is provided below is valid in the special case when each 2-cell
from $\mathbf{p}$ arises from the resolution of a critical pair. The proof
goes through the following stages. The first stage is the same as that of [36]
and for this reason is not presented here in full. In this stage it is proved
that (4) is exact in the special case when the monoid presentation
$\mathcal{M}=\langle\mathbf{x},\mathbf{r}\rangle$ from which $\mathcal{D}$ is
defined, is complete, and the set $\mathbf{p}$ of homology trivializers is
obtained by choosing resolutions of critical pairs of $\mathbf{r}$. The proof
is roughly as follows. Using (5), it is shown that every 2-cycle $\xi\in
K^{\mathbf{p}}$ is homologous to a 2-cycle $\bar{\xi}\in K^{\mathbf{p}}$ that
is obtained from $\xi$ by replacing the maximal 2-cell $\sigma$ represented in
$\xi$ by a 2-chain made up of lesser 2-cells than $\sigma$. Then we proceed by
Noetherian induction.
In the second stage, in contrast to the general case considered in
[36], we assume that we have a monoid presentation
$\mathcal{M}=\langle\mathbf{x},\mathbf{r}\rangle$ (not necessarily complete)
and that $H_{1}(\mathcal{D})$ of the corresponding Squier complex
$\mathcal{D}$ is trivialized by adding 2-cells $\mathbf{p}$ arising from the
resolution of certain critical pairs. Also, the same as in [36], we assume
that $\mathbf{r}$ is compatible with a length-lexicographic order in the free
monoid $F$ on $\mathbf{x}$. Using the Knuth-Bendix procedure, we obtain a new
presentation
$\mathcal{M}^{\infty}=\langle\mathbf{x},\mathbf{r}^{\infty}\rangle$ with
$\mathbf{r}\subseteq\mathbf{r}^{\infty}$ and where $\mathbf{r}^{\infty}$ is
compatible with the order on $F$. The Squier complex $\mathcal{D}^{\infty}$
has trivializer $\mathbf{p}^{\infty}$ obtained by choosing resolutions of all
critical pairs of $\mathbf{r}^{\infty}$ and as a consequence
$\mathbf{p}\subseteq\mathbf{p}^{\infty}$. From the special case of the first
stage, we have the exactness of
$0\longrightarrow B_{2}(\mathcal{D}^{\infty},\mathbf{p}^{\infty})\xrightarrow{\ \text{incl.}\ }K^{\mathbf{p}^{\infty}}\xrightarrow{\ \tilde{\partial}^{\infty}_{2}\ }B_{1}(\mathcal{D}^{\infty})\longrightarrow 0,$
where
$K^{\mathbf{p}^{\infty}}=C_{2}(\mathcal{D}^{\infty})+J.\mathbf{p}^{\infty}.\mathbb{Z}F+\mathbb{Z}F.\mathbf{p}^{\infty}.J$.
We will use this and the fact that $\mathbf{p}\subseteq\mathbf{p}^{\infty}$ to
prove in a shorter way the exactness of (4).
We begin by pointing out that $(\mathcal{D},\mathbf{p})$ is a subcomplex of
$(\mathcal{D}^{\infty},\mathbf{p}^{\infty})$, therefore for $i=1,2,3$, we have
that $C_{i}(\mathcal{D},\mathbf{p})\leq
C_{i}(\mathcal{D}^{\infty},\mathbf{p}^{\infty})$. We will define for
$i=1,2,3$, retractions
$\hat{\rho_{i}}:C_{i}(\mathcal{D}^{\infty},\mathbf{p}^{\infty})\rightarrow
C_{i}(\mathcal{D},\mathbf{p})$.
First, for every positive edge $e$ from
$(\mathcal{D}^{\infty},\mathbf{p}^{\infty})$ not belonging to
$(\mathcal{D},\mathbf{p})$, we choose a path
$\rho(e)=e_{1}^{\varepsilon_{1}}\dots e_{n}^{\varepsilon_{n}}$ in
$(\mathcal{D},\mathbf{p})$ connecting $\iota e$ with $\tau e$ where every
$\varepsilon_{i}=\pm 1$. Relative to this choice we define
$\hat{\rho_{1}}:C_{1}(\mathcal{D}^{\infty},\mathbf{p}^{\infty})\rightarrow
C_{1}(\mathcal{D},\mathbf{p})$
by
$e\mapsto\sum_{i}\varepsilon_{i}e_{i},$
whenever $e$ is from $(\mathcal{D}^{\infty},\mathbf{p}^{\infty})$ not
belonging to $(\mathcal{D},\mathbf{p})$, and for positive edges $e$ from
$(\mathcal{D},\mathbf{p})$ we define
$\hat{\rho_{1}}(e)=e.$
Thus $\hat{\rho_{1}}$ is a retraction. Before we define a second retraction
$\hat{\rho_{2}}:C_{2}(\mathcal{D}^{\infty},\mathbf{p}^{\infty})\rightarrow
C_{2}(\mathcal{D},\mathbf{p})$, we prove the following.
###### Lemma 4.1.
For every path $\rho=f_{1}^{\beta_{1}}\dots f_{n}^{\beta_{n}}$ in
$(\mathcal{D},\mathbf{p})$ where every $\beta_{j}=\pm 1$, we have that
$\partial_{1}(\beta_{1}f_{1}+\dots+\beta_{n}f_{n})=\iota(\rho)-\tau(\rho).$
###### Proof.
The proof will be done by induction on $n$. For $n=1$,
$\partial_{1}(\beta_{1}f_{1})=\beta_{1}(\iota f_{1}-\tau f_{1}),$
therefore, depending on the sign of $\beta_{1}$, we have that
$\partial_{1}(\beta_{1}f_{1})=\iota(f_{1}^{\beta_{1}})-\tau(f_{1}^{\beta_{1}})$.
For the inductive step, we write
$\rho=f_{1}^{\beta_{1}}\dots f_{n}^{\beta_{n}}\cdot
f_{n+1}^{\beta_{n+1}}=\rho_{1}\cdot f_{n+1}^{\beta_{n+1}}.$
From the assumption for $\rho_{1}$ we have
$\partial_{1}(\beta_{1}f_{1}+\dots+\beta_{n}f_{n})=\iota(\rho_{1})-\tau(\rho_{1})=\iota(\rho)-\iota(f_{n+1}^{\beta_{n+1}}),$
and then
$\displaystyle\partial_{1}(\beta_{1}f_{1}+\dots+\beta_{n}f_{n}+\beta_{n+1}f_{n+1})$
$\displaystyle=\partial_{1}(\beta_{1}f_{1}+\dots+\beta_{n}f_{n})+\partial_{1}(\beta_{n+1}f_{n+1})$
$\displaystyle=\iota(\rho)-\iota(f_{n+1}^{\beta_{n+1}})+(\iota(f_{n+1}^{\beta_{n+1}})-\tau(f_{n+1}^{\beta_{n+1}}))$
$\displaystyle=\iota(\rho)-\tau(f_{n+1}^{\beta_{n+1}})$
$\displaystyle=\iota(\rho)-\tau(\rho).$
∎
Now we define $\hat{\rho_{2}}$ in the following way. If
$z=\sum_{j}\delta_{j}f_{j}\in Z_{1}(\mathcal{D}^{\infty},\mathbf{p}^{\infty})$
is a 1-cycle where at least one of $f_{j}$ is from
$(\mathcal{D}^{\infty},\mathbf{p}^{\infty})$ not belonging to
$(\mathcal{D},\mathbf{p})$, we have the 1-chain
$\hat{\rho_{1}}\left(\sum_{j}\delta_{j}f_{j}\right)$
in $C_{1}(\mathcal{D},\mathbf{p})$. Let us show that
$\hat{\rho_{1}}\left(\sum_{j}\delta_{j}f_{j}\right)$ is in fact a 1-cycle in
$Z_{1}(\mathcal{D},\mathbf{p})$. Indeed,
$\displaystyle\partial_{1}\hat{\rho_{1}}\left(\sum_{j}\delta_{j}f_{j}\right)$
$\displaystyle=\sum_{j}\delta_{j}\partial_{1}\hat{\rho_{1}}(f_{j})$
$\displaystyle=\sum_{j}\delta_{j}(\iota f_{j}-\tau f_{j})$ (by lemma 4.1)
$\displaystyle=\partial^{\infty}_{1}\left(\sum_{j}\delta_{j}f_{j}\right)$
$\displaystyle=\partial^{\infty}_{1}(z)$ $\displaystyle=0.$
Since $\mathbf{p}$ is a homology trivializer, then for the 1-cycle
$\hat{\rho_{1}}\left(\sum_{j}\delta_{j}f_{j}\right)$ there is a 2-chain
$\varsigma_{z}\in C_{2}(\mathcal{D},\mathbf{p})$ such that
$\tilde{\partial}_{2}(\varsigma_{z})=\hat{\rho_{1}}\left(\sum_{j}\delta_{j}f_{j}\right).$
(6)
We can apply the above for every 2-cell
$\sigma\in(\mathcal{D}^{\infty},\mathbf{p}^{\infty})$ not in
$(\mathcal{D},\mathbf{p})$ by taking $z=\tilde{\partial}^{\infty}_{2}(\sigma)$
and writing $\varsigma_{\sigma}$ instead of $\varsigma_{z}$. With these
notations (6) takes the form
$\tilde{\partial}_{2}(\varsigma_{\sigma})=\hat{\rho_{1}}(\tilde{\partial}^{\infty}_{2}(\sigma)).$
(7)
We define
$\hat{\rho_{2}}:C_{2}(\mathcal{D}^{\infty},\mathbf{p}^{\infty})\rightarrow
C_{2}(\mathcal{D},\mathbf{p})$
by
$\hat{\rho_{2}}(\sigma)=\sigma$
for every 2-cell $\sigma$ in $(\mathcal{D},\mathbf{p})$, and for every other
2-cell $\sigma$ we define
$\hat{\rho_{2}}(\sigma)=\varsigma_{\sigma}.$
We will explain how this works for 2-cells $[e,f]$ with
$\hat{\rho_{1}}(e)=\sum_{i}\alpha_{i}e_{i}$ and
$\hat{\rho_{1}}(f)=\sum_{j}\beta_{j}f_{j}$ where at least one of the sums has
more than one term (the corresponding edge is not in
$(\mathcal{D},\mathbf{p})$). In this case we have
$\displaystyle\hat{\rho}_{1}\tilde{\partial}^{\infty}_{2}([e,f])$
$\displaystyle=\hat{\rho}_{1}(e.(\iota f-\tau f)-(\iota e-\tau e).f)$
$\displaystyle=\sum_{i}\alpha_{i}e_{i}\cdot\sum_{j}\beta_{j}(\iota f_{j}-\tau
f_{j})-\sum_{i}\alpha_{i}(\iota e_{i}-\tau e_{i})\cdot\sum_{j}\beta_{j}f_{j}$
(lemma 4.1)
$\displaystyle=\sum_{i,j}\alpha_{i}\beta_{j}\tilde{\partial}_{2}[e_{i},f_{j}]$
$\displaystyle=\tilde{\partial}_{2}\left(\sum_{i,j}\alpha_{i}\beta_{j}[e_{i},f_{j}]\right),$
therefore the 2-chain $\varsigma_{[e,f]}$ in this case can be chosen to be
$\sum_{i,j}\alpha_{i}\beta_{j}[e_{i},f_{j}]$. Again $\hat{\rho_{2}}$ as
defined above is a retraction.
Finally, we define
$\hat{\rho_{3}}:C_{3}(\mathcal{D}^{\infty},\mathbf{p}^{\infty})\rightarrow
C_{3}(\mathcal{D},\mathbf{p})$
as follows. If $e$ is any edge with
$\hat{\rho_{1}}(e)=\sum_{i}\varepsilon_{i}e_{i}$ and $\sigma$ a 2-cell in
$(\mathcal{D}^{\infty},\mathbf{p}^{\infty})$ such that
$\hat{\rho_{2}}(\sigma)=\sum_{j}\mu_{j}\sigma_{j}$, then we define
$\hat{\rho_{3}}([e,\sigma])=\sum_{i,j}\varepsilon_{i}\mu_{j}[e_{i},\sigma_{j}]\text{
and
}\hat{\rho_{3}}([\sigma,e])=\sum_{i,j}\varepsilon_{i}\mu_{j}[\sigma_{j},e_{i}].$
It is obvious that $\hat{\rho_{3}}$ is a retraction.
###### Lemma 4.2.
The following hold true:
* (i)
$\hat{\rho_{1}}\partial^{\infty}_{2}=\tilde{\partial}_{2}\hat{\rho_{2}}$.
* (ii)
$\hat{\rho_{2}}\partial^{\infty}_{3}=\tilde{\partial}_{3}\hat{\rho_{3}}$.
###### Proof.
(i) If $\sigma\in C_{2}(\mathcal{D},\mathbf{p})$, then
$\displaystyle\hat{\rho_{1}}\tilde{\partial}^{\infty}_{2}(\sigma)$
$\displaystyle=\hat{\rho_{1}}\tilde{\partial}_{2}(\sigma)=\tilde{\partial}_{2}(\sigma)=\tilde{\partial}_{2}\hat{\rho_{2}}(\sigma).$
Assume now that $\sigma$ is a 2-cell not in $C_{2}(\mathcal{D},\mathbf{p})$,
then from the definition of $\hat{\rho_{2}}$ and from (7) we have
$\displaystyle\tilde{\partial}_{2}\hat{\rho_{2}}(\sigma)=\tilde{\partial}_{2}(\varsigma_{\sigma})=\hat{\rho_{1}}(\tilde{\partial}^{\infty}_{2}(\sigma)).$
(ii) Let $[e,\sigma]$ be a 3-cell where $\sigma$ is any 2-cell in
$(\mathcal{D}^{\infty},\mathbf{p}^{\infty})$ with
$\hat{\rho_{2}}(\sigma)=\sum_{j}\mu_{j}\sigma_{j}$, and $e$ is an edge with
$\hat{\rho_{1}}(e)=\sum_{i}\varepsilon_{i}e_{i}$. Assume also that
$\tilde{\partial}^{\infty}_{2}(\sigma)=\sum_{k}\delta_{k}f_{k}$ where for each
$k$, $\hat{\rho_{1}}(f_{k})=\sum_{s}\beta_{ks}g_{ks}$. It follows from
(i) that
$\tilde{\partial}_{2}\left(\sum_{j}\mu_{j}\sigma_{j}\right)=\tilde{\partial}_{2}(\hat{\rho_{2}}(\sigma))=\hat{\rho_{1}}\tilde{\partial}^{\infty}_{2}(\sigma)=\sum_{k}\delta_{k}\sum_{s}\beta_{ks}g_{ks}.$
(8)
If for each $j$ we let $\mathcal{T}_{\sigma_{j}}$ be the set of terms of
$\tilde{\partial}_{2}(\sigma_{j})$, we see from (8) that for each $k$ and each
$s$, there is some $j$ and some $\alpha_{j}x_{j}\in\mathcal{T}_{\sigma_{j}}$,
such that $\mu_{j}\alpha_{j}x_{j}=\delta_{k}\beta_{ks}g_{ks}$. This implies
that for each edge $e_{i}$, we have that
$\delta_{k}\beta_{ks}[e_{i},g_{ks}]=\mu_{j}\alpha_{j}[e_{i},x_{j}]$. Further
we see that
$\displaystyle\hat{\rho_{2}}\tilde{\partial}^{\infty}_{3}([e,\sigma])$
$\displaystyle=\hat{\rho_{2}}\left((\iota e-\tau
e).\sigma+\sum_{k}\delta_{k}[e,f_{k}]\right)$
$\displaystyle=\sum_{j}\mu_{j}(\iota e-\tau
e).\sigma_{j}+\sum_{k}\delta_{k}\left(\sum_{i}\varepsilon_{i}\sum_{s}\beta_{ks}[e_{i},g_{ks}]\right)$
$\displaystyle=\sum_{j}\mu_{j}\left(\sum_{i}\varepsilon_{i}(\iota e_{i}-\tau
e_{i})\right).\sigma_{j}+\sum_{k}\delta_{k}\left(\sum_{i}\varepsilon_{i}\sum_{s}\beta_{ks}[e_{i},g_{ks}]\right)$
(lemma 4.1) $\displaystyle=\sum_{i}\varepsilon_{i}\left(\sum_{j}\mu_{j}(\iota e_{i}-\tau e_{i}).\sigma_{j}\right)+\sum_{i}\varepsilon_{i}\left(\sum_{k}\delta_{k}\sum_{s}\beta_{ks}[e_{i},g_{ks}]\right)$
$\displaystyle=\sum_{i}\varepsilon_{i}\left(\sum_{j}\mu_{j}(\iota e_{i}-\tau
e_{i}).\sigma_{j}+\sum_{k}\delta_{k}\sum_{s}\beta_{ks}[e_{i},g_{ks}]\right)$
$\displaystyle=\sum_{i}\varepsilon_{i}\left(\sum_{j}\mu_{j}(\iota e_{i}-\tau
e_{i}).\sigma_{j}+\sum_{j}\mu_{j}\sum_{\alpha_{j}x_{j}\in\mathcal{T}_{\sigma_{j}}}\alpha_{j}[e_{i},x_{j}]\right)$
$\displaystyle=\sum_{i,j}\varepsilon_{i}\mu_{j}\left((\iota e_{i}-\tau
e_{i}).\sigma_{j}+\sum_{\alpha_{j}x_{j}\in\mathcal{T}_{\sigma_{j}}}\alpha_{j}[e_{i},x_{j}]\right)=\tilde{\partial}_{3}\left(\sum_{i,j}\varepsilon_{i}\mu_{j}[e_{i},\sigma_{j}]\right)$
(by (2)) $\displaystyle=\tilde{\partial}_{3}\hat{\rho_{3}}([e,\sigma]).$
The proof for the 3-cell $[\sigma,e]$ is similar to the above and is omitted
here. ∎
###### Proposition 4.3.
(Proposition 14, [36]) If $\mathbf{p}$ is a homology trivializer obtained by
choosing resolutions of certain critical pairs, then the sequence (4) is
exact.
###### Proof.
From the special case of the proposition we have the short exact sequence
$0\rightarrow B_{2}(\mathcal{D}^{\infty},\mathbf{p}^{\infty})\xrightarrow{\ \text{incl.}\ }K^{\mathbf{p}^{\infty}}\xrightarrow{\ \tilde{\partial}^{\infty}_{2}\ }B_{1}(\mathcal{D}^{\infty})\rightarrow 0.$
(9)
Let $\xi\in\text{Ker}\tilde{\partial}_{2}$. Since $K^{\mathbf{p}}\subseteq
K^{\mathbf{p}^{\infty}}$ and the restriction of
$\tilde{\partial}^{\infty}_{2}$ to $K^{\mathbf{p}}$ is $\tilde{\partial}_{2}$,
then $\xi\in\text{Ker}\tilde{\partial}^{\infty}_{2}$ and the exactness of (9)
implies the existence of a 3-chain $w\in
C_{3}(\mathcal{D}^{\infty},\mathbf{p}^{\infty})$ such that
$\tilde{\partial}^{\infty}_{3}(w)=\xi$. It follows from lemma 4.2 that
$\xi=\hat{\rho_{2}}(\xi)=\hat{\rho_{2}}(\tilde{\partial}^{\infty}_{3}(w))=\tilde{\partial}_{3}\hat{\rho_{3}}(w),$
which shows that $\xi\in B_{2}(\mathcal{D},\mathbf{p})$ and hence the
exactness of (4). ∎
### 4.7 The Pride complex $\mathcal{D}(\mathcal{P})^{\ast}$ associated with a
group presentation $\mathcal{P}$
In this section we will explain several results of Pride in [40] where it is
proved that the homotopical property FDT for groups is equivalent to the
homological property $FP_{3}$. In order to achieve this, associated to any
group presentation $\hat{\mathcal{P}}=(\mathbf{x},\mathbf{r})$, Pride
considers two crossed modules. The first one is the free crossed module
$(\Sigma,\hat{F},\partial)$ associated to
$\hat{\mathcal{P}}=(\mathbf{x},\mathbf{r})$. To define the second, he
constructs first a complex $\mathcal{D}(\mathcal{M})^{\ast}$ arising from the
monoid presentation
$\mathcal{M}=\langle\mathbf{x},\mathbf{x}^{-1}:R=1(R\in\mathbf{r}),x^{\varepsilon}x^{-\varepsilon}=1(x\in\mathbf{x},\varepsilon=\pm
1)\rangle$
of the same group given by $\hat{\mathcal{P}}=(\mathbf{x},\mathbf{r})$. In
other words, the group is now realized as the quotient of the free monoid $F$
on $\mathbf{x}\cup\mathbf{x}^{-1}$ by the smallest congruence generated by the
relations giving $\mathcal{M}$. We will give below some necessary details on
this complex. We first mention that $\mathcal{D}(\mathcal{M})^{\ast}$ is an
extension of the usual Squier complex $\mathcal{D}(\mathcal{M})$ arising from
$\mathcal{M}$. This complex is called in [12] the Pride complex. We emphasize
here that the definition of $\mathcal{D}(\mathcal{M})$ in [40] requires the
attachment of 2-cells $[e^{\varepsilon},f^{\delta}]$ where $e,f$ are positive
edges and $\varepsilon,\delta=\pm 1$. But as observed in [36] (see Remark 8
there), since we are interested in the homotopy and
homology of the complex, the attachment of 2-cells
$[e^{\varepsilon},f^{\delta}]$ is unnecessary in the presence of $[e,f]$
because the boundary of $[e^{\varepsilon},f^{\delta}]$ is a cyclic permutation
of $(\partial[e,f])^{\pm 1}$, hence it is null homotopic. So we assume in what
follows that $\mathcal{D}(\mathcal{M})$ is the one described in section 4.2.
To complete the construction of $\mathcal{D}(\mathcal{M})^{\ast}$ we need to
add to $\mathcal{D}(\mathcal{M})$ certain extra 2-cells along the closed paths
$\mathbf{t}=(1,x^{\varepsilon}x^{-\varepsilon},1,x^{\varepsilon})\circ(x^{\varepsilon},x^{-\varepsilon}x^{\varepsilon},1,1)^{-1},$
where $x\in\mathbf{x}$ and $\varepsilon=\pm 1$. The attaching of these 2-cells
is done for every overlap of two trivial edges as depicted below
$x^{\varepsilon}x^{-\varepsilon}x^{\varepsilon}\ \overset{(1,x^{\varepsilon}x^{-\varepsilon},1,x^{\varepsilon})}{\underset{(x^{\varepsilon},x^{-\varepsilon}x^{\varepsilon},1,1)}{\rightrightarrows}}\ x^{\varepsilon}$
The attached 2-cell has boundary made of the following edges
$\mathbb{A}=(1,x^{\varepsilon}x^{-\varepsilon},1,x^{\varepsilon})\text{ and
}\mathbb{B}=(x^{\varepsilon},x^{-\varepsilon}x^{\varepsilon},1,1).$
Together with such 2-cells, all their "translates" are added to the complex:
$ux^{\varepsilon}x^{-\varepsilon}x^{\varepsilon}v\ \overset{(u,x^{\varepsilon}x^{-\varepsilon},1,x^{\varepsilon}v)}{\underset{(ux^{\varepsilon},x^{-\varepsilon}x^{\varepsilon},1,v)}{\rightrightarrows}}\ ux^{\varepsilon}v$
In our paper the Pride complex $\mathcal{D}(\mathcal{M})^{\ast}$ is denoted
throughout by $(\mathcal{D},\mathbf{t})$.
If $P$ is a path in $(\mathcal{D},\mathbf{t})$ with $\iota(P)=W$ and
$\tau(P)=Z$, then $T_{W}$ and $T_{Z}$ are defined in [40] to be arbitrary
trivial paths from $W^{\ast}$ to $W$ and from $Z^{\ast}$ to $Z$, where
$W^{\ast}$ and $Z^{\ast}$ are the unique reduced words freely equivalent to
$W$ and $Z$ respectively. Then the path $T_{W}PT_{Z}^{-1}$ is denoted by
$P^{\ast}$. The notation is not ambiguous since any two parallel trivial paths
in $(\mathcal{D},\mathbf{t})$ are homotopic.
Pride has defined in [40] an $\hat{F}$-crossed module $\Sigma^{\ast}$ out of
$(\mathcal{D},\mathbf{t})$ in the following way. The elements of
$\Sigma^{\ast}$ are the homotopy classes $\langle P\rangle$ where $P$ is a
path in the 1-skeleton of $(\mathcal{D},\mathbf{t})$ such that $\tau(P)$ is
the empty word and $\iota(P)$ is a freely reduced word from $\hat{F}$. He then
defines a (non commutative) operation $+$ on $\Sigma^{\ast}$ by
$\langle P_{1}\rangle+\langle P_{2}\rangle=\langle(P_{1}+P_{2})^{\ast}\rangle$
and an action of the free group $\hat{F}$ on $\mathbf{x}\cup\mathbf{x}^{-1}$
on $\Sigma^{\ast}$ by
${}^{[W]}\langle P\rangle=\langle(WPW^{-1})^{\ast}\rangle.$
Also he defines
$\partial^{\ast}:\Sigma^{\ast}\rightarrow\hat{F}$
by
$\partial^{\ast}(\langle P\rangle)=[\iota(P)].$
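For example, assuming $r\in\mathbf{r}$ is freely reduced, the single edge $(1,r,1,1)$ is a path from $r$ to the empty word, and
$\partial^{\ast}(\langle(1,r,1,1)\rangle)=[\iota(1,r,1,1)]=[r],$
the element of $\hat{F}$ represented by $r$.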
It is proved in [40] that the triple $(\Sigma^{\ast},\hat{F},\partial^{\ast})$
is a crossed module. Further, using the fact that $\Sigma$ is the free crossed
module over $\mathbf{r}$, it is proved that
$\eta:\Sigma\rightarrow\Sigma^{\ast}$
defined by $r\mapsto\langle(1,r,1,1)\rangle$ is an isomorphism of crossed
modules. The inverse $\psi:\Sigma^{\ast}\rightarrow\Sigma$ of $\eta$ is the
map defined in the following way. First, a map $\psi_{0}$ is defined from
the set of edges of $(\mathcal{D},\mathbf{t})$ to $\Sigma$ as follows. Every
trivial edge is mapped to $0$, and every edge $(u,r,\varepsilon,v)$ is mapped
to $(^{[u]}r)^{\varepsilon}$ where $[u]$ is the element of $\hat{F}$
represented by $u$. It is proved that this map extends to paths of
$(\mathcal{D},\mathbf{t})$ and it sends the boundaries of the defining 2-cells
of $(\mathcal{D},\mathbf{t})$ to 0. We thus have a morphism
$\psi:\Sigma^{\ast}\rightarrow\Sigma$ given by
$\langle P\rangle\mapsto\psi_{0}(P)$
which is proved to be the inverse of $\eta$. By restriction one obtains an
isomorphism between $\text{Ker}\partial$ and $\text{Ker}\partial^{\ast}$. But
$\text{Ker}\partial$ is itself isomorphic to $\pi_{2}(\hat{\mathcal{P}})$, the
second homotopy module of the standard complex associated with
$\hat{\mathcal{P}}$, and $\text{Ker}\partial^{\ast}$ on the other hand, is
isomorphic to $\pi_{1}(\mathcal{D},\mathbf{t},1)$, the first homotopy group of
the connected component of $(\mathcal{D},\mathbf{t})$ at 1. Recollecting, we
have the following isomorphisms
$\pi_{2}(\hat{\mathcal{P}})=\text{Ker}\partial\ \overset{\eta}{\underset{\psi}{\rightleftarrows}}\ \text{Ker}\partial^{\ast}=\pi_{1}(\mathcal{D},\mathbf{t},1).$
(10)
The fundamental group $\pi_{1}(\mathcal{D},\mathbf{t},1)$ is abelian being
isomorphic to $\pi_{2}(\hat{\mathcal{P}})$ and therefore isomorphic to its
abelianization $H_{1}(\mathcal{D},\mathbf{t},1)$. The role of the isomorphism
between the two groups will be played by the well known Hurewicz homomorphism
$h:\pi_{1}(\mathcal{D},\mathbf{t},1)\rightarrow
H_{1}(\mathcal{D},\mathbf{t},1)$ which sends the homotopy class of a loop to
the homology class of the corresponding 1-cycle. In our proofs in the
following sections, we will identify the homotopy class of any loop $\xi$ with
$h(\xi)$ without further comment.
### 4.8 A characterization of the asphericity in terms of the Pride complex
Assume now we are given a presentation $\mathcal{P}=(\mathbf{x},\mathbf{r})$
of a group $G$. The new presentation
$\hat{\mathcal{P}}=(\mathbf{x},\mathbf{r}\cup\mathbf{r}^{-1})$ where
$\mathbf{r}^{-1}=\\{r^{-1}|r\in\mathbf{r}\\}$, still presents $G$. The free
crossed module $(\Sigma,\hat{F},\partial)$ of [40] arising from
$\hat{\mathcal{P}}$ is in fact isomorphic to our crossed module
$(\mathcal{G}(\Upsilon),\hat{F},\tilde{\theta})$. Indeed, there is a morphism
of crossed modules $\alpha:\Sigma\rightarrow\mathcal{G}(\Upsilon)$ induced by
the map $r^{\varepsilon}\mapsto\mu\sigma(r^{\varepsilon})$, whose inverse is
$\beta:\mathcal{G}(\Upsilon)\rightarrow\Sigma$ defined by
$\mu\sigma((^{u}r)^{\varepsilon})\mapsto{{}^{u}}(r^{\varepsilon})$. So there
is no loss of generality if we identify ${{}^{u}}(r^{\varepsilon})\in\Sigma$
with $\mu\sigma((^{u}r)^{\varepsilon})\in\mathcal{G}(\Upsilon)$. The
isomorphism $\Sigma\cong\mathcal{G}(\Upsilon)$ means in particular that
$\text{Ker}\partial\cong\text{Ker}\tilde{\theta}=\tilde{\Pi}$.
We have on the other hand the monoid presentation of $G$
$\mathcal{M}=\langle\mathbf{x},\mathbf{x}^{-1}:\mathbf{s}\rangle$
where
$\mathbf{s}=\\{(r^{\varepsilon},1):r\in\mathbf{r},\varepsilon=\pm
1\\}\cup\left\\{(x^{\varepsilon}x^{-\varepsilon},1):x\in\mathbf{x},\varepsilon=\pm
1\right\\}.$
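For a concrete illustration (this particular example is hypothetical, not taken from the references), consider the presentation $\mathcal{P}=(\\{a,b\\},\\{aba^{-1}b^{-1}\\})$ of $G=\mathbb{Z}^{2}$. Then $\mathbf{r}\cup\mathbf{r}^{-1}=\\{aba^{-1}b^{-1},bab^{-1}a^{-1}\\}$ and
$\mathbf{s}=\\{(aba^{-1}b^{-1},1),(bab^{-1}a^{-1},1)\\}\cup\\{(aa^{-1},1),(a^{-1}a,1),(bb^{-1},1),(b^{-1}b,1)\\},$
so that $\mathcal{M}=\langle a,b,a^{-1},b^{-1}:\mathbf{s}\rangle$ realizes $\mathbb{Z}^{2}$ as a quotient of the free monoid on the four letters.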
Related to $\mathcal{M}$ we have the Pride complex $(\mathcal{D},\mathbf{t})$.
Asphericity of $\mathcal{P}$ means, by virtue of theorem 3.4 and of the
isomorphisms in (10), that $H_{1}(\mathcal{D},\mathbf{t},1)$ is trivialized as
an abelian group by the homology classes of all 1-cycles corresponding to
$\eta(\hat{\mathfrak{U}})$. This section is devoted to proving that the
asphericity of $\mathcal{P}$ is equivalent to
$H_{1}(\mathcal{D},\mathbf{p})=0$ where $\mathbf{p}=\mathbf{q}\cup\mathbf{t}$
and $\mathbf{q}$ is the set of 1-cycles corresponding to
$\eta(\mu\sigma(rr^{-1}))$ with $r\in\mathbf{r}$.
For every two paths of positive length
$A=e^{\varepsilon_{1}}_{1}\circ\dots\circ e^{\varepsilon_{n}}_{n}$ and
$B=f^{\delta_{1}}_{1}\circ\dots\circ f^{\delta_{m}}_{m}$ in
$(\mathcal{D},\mathbf{t})$ we have two parallel paths:
$A.\iota B\circ\tau A.B\text{ and }\iota A.B\circ A.\tau B.$
In what follows we use the notation $C\sim D$ to mean that two parallel paths
$C$ and $D$ are homotopic to each other.
###### Lemma 4.4.
For every two paths $A=e^{\varepsilon_{1}}_{1}\circ\dots\circ
e^{\varepsilon_{n}}_{n}$ and $B=f^{\delta_{1}}_{1}\circ\dots\circ
f^{\delta_{m}}_{m}$ as above, $(A.\iota B\circ\tau A.B)\sim(\iota A.B\circ
A.\tau B)$.
###### Proof.
The proof is done by induction on the maximum of $n$ and $m$. If $m=n=1$, then
it follows that
$A.\iota B\circ\tau A.B=e^{\varepsilon_{1}}_{1}.\iota
f^{\delta_{1}}_{1}\circ\tau
e^{\varepsilon_{1}}_{1}.f^{\delta_{1}}_{1}\sim\iota
e^{\varepsilon_{1}}_{1}.f^{\delta_{1}}_{1}\circ e^{\varepsilon_{1}}_{1}.\tau
f^{\delta_{1}}_{1}=\iota A.B\circ A.\tau B.$
Indeed, if $\varepsilon_{1}=\delta_{1}=1$, then this is an immediate
consequence of the 2-cell $[e_{1},f_{1}]$. If $\varepsilon_{1}=\delta_{1}=-1$,
then, since
$e_{1}.\iota f_{1}\circ\tau e_{1}.f_{1}\sim\iota e_{1}.f_{1}\circ e_{1}.\tau
f_{1},$ (11)
it follows by taking inverses that
$\displaystyle\iota e_{1}^{\varepsilon_{1}}.f_{1}^{\delta_{1}}\circ
e_{1}^{\varepsilon_{1}}.\tau f_{1}^{\delta_{1}}$ $\displaystyle=\tau
e_{1}.f_{1}^{-1}\circ e_{1}^{-1}.\iota f_{1}$ $\displaystyle\sim
e_{1}^{-1}.\tau f_{1}\circ\iota e_{1}.f_{1}^{-1}$
$\displaystyle=e^{\varepsilon_{1}}_{1}.\iota f^{\delta_{1}}_{1}\circ\tau
e^{\varepsilon_{1}}_{1}.f^{\delta_{1}}_{1}.$
In the case when $\varepsilon_{1}=1$ and $\delta_{1}=-1$, after composing on
the left of (11) by $\iota e_{1}.f_{1}^{-1}$ we obtain
$\iota e_{1}.f_{1}^{-1}\circ e_{1}.\iota f_{1}\circ\tau e_{1}.f_{1}\sim\iota
e_{1}.f_{1}^{-1}\circ\iota e_{1}.f_{1}\circ e_{1}.\tau f_{1}=e_{1}.\tau
f_{1},$
and then after composing the above on the right by $\tau e_{1}.f_{1}^{-1}$ we
get
$\iota e_{1}.f_{1}^{-1}\circ e_{1}.\iota f_{1}\sim e_{1}.\tau f_{1}\circ\tau
e_{1}.f_{1}^{-1},$
which is the same as
$\iota e_{1}^{\varepsilon_{1}}.f_{1}^{\delta_{1}}\circ
e_{1}^{\varepsilon_{1}}.\tau f_{1}^{\delta_{1}}\sim
e_{1}^{\varepsilon_{1}}.\iota f_{1}^{\delta_{1}}\circ\tau
e_{1}^{\varepsilon_{1}}.f_{1}^{\delta_{1}}.$
The proof for the case when $\varepsilon_{1}=-1$ and $\delta_{1}=1$ is
symmetric to the above and is omitted.
For the inductive step, let (for instance) $B$ be the path of maximal length
$m>1$. For the path $B^{\prime}=f_{1}^{\delta_{1}}\circ\dots\circ
f_{m-1}^{\delta_{m-1}}$ we know by induction that
$A.\iota B^{\prime}\circ\tau A.B^{\prime}\sim\iota A.B^{\prime}\circ A.\tau
B^{\prime}.$
Again, by induction for $A$ and $f_{m}^{\delta_{m}}$ we have that
$A.\iota f_{m}^{\delta_{m}}\circ\tau A.f_{m}^{\delta_{m}}\sim\iota
A.f_{m}^{\delta_{m}}\circ A.\tau f_{m}^{\delta_{m}}.$
It follows that
$\displaystyle A.\iota B\circ\tau A.B$ $\displaystyle=A.\iota B\circ\tau
A.B^{\prime}\circ\tau A.f_{m}^{\delta_{m}}$ $\displaystyle=A.\iota
B^{\prime}\circ\tau A.B^{\prime}\circ\tau A.f_{m}^{\delta_{m}}$
$\displaystyle\sim\iota A.B^{\prime}\circ A.\tau B^{\prime}\circ\tau A.f_{m}^{\delta_{m}}$
$\displaystyle=\iota A.B^{\prime}\circ A.\iota f_{m}^{\delta_{m}}\circ\tau
A.f_{m}^{\delta_{m}}$ $\displaystyle\sim\iota A.B^{\prime}\circ\iota
A.f_{m}^{\delta_{m}}\circ A.\tau f_{m}^{\delta_{m}}$ $\displaystyle=\iota
A.B\circ A.\tau B.$
There is a similar proof when $A$ is of maximal length. ∎
For every $u\in\hat{F}$, regarded as an element of the free monoid $F$ on
$\mathbf{x}\cup\mathbf{x}^{-1}$, and for every $r\in\mathbf{r}$, we see that
$\displaystyle\eta(\mu\sigma(^{u}r))={{}^{u}}\eta(\mu\sigma(r))$
$\displaystyle={{}^{u}}\langle(1,r,1,1)\rangle$
$\displaystyle=\langle(u,r,1,u^{-1})^{\ast}\rangle$ $\displaystyle=\langle
T_{uru^{-1}}\circ(u,r,1,u^{-1})\circ T^{-1}_{uu^{-1}}\rangle.$
The path $T_{uru^{-1}}\circ(u,r,1,u^{-1})\circ T^{-1}_{uu^{-1}}$ is a composition
of $T_{uru^{-1}}$ which is a trivial path from the freely reduced word
$(uru^{-1})^{\ast}$ to $uru^{-1}$ followed by the edge $(u,r,1,u^{-1})$ and
then by the inverse of the trivial path $T_{uu^{-1}}$ from $uu^{-1}$ to 1.
Similarly to the above we have that
$\displaystyle\eta(\mu\sigma((^{u}r)^{-1}))={{}^{u}}\eta(\mu\sigma(r^{-1}))$
$\displaystyle={{}^{u}}\langle(1,r^{-1},1,1)\rangle$
$\displaystyle=\langle(u,r^{-1},1,u^{-1})^{\ast}\rangle$
$\displaystyle=\langle T_{ur^{-1}u^{-1}}\circ(u,r^{-1},1,u^{-1})\circ
T^{-1}_{uu^{-1}}\rangle.$
Then we have
$\displaystyle\eta(\mu\sigma(^{u}r({{}^{u}}r)^{-1}))$
$\displaystyle=\eta(\mu\sigma(^{u}r))+\eta(\mu\sigma((^{u}r)^{-1}))$
$\displaystyle=\langle((T_{uru^{-1}}\circ(u,r,1,u^{-1})\circ
T^{-1}_{uu^{-1}})+(T_{ur^{-1}u^{-1}}\circ(u,r^{-1},1,u^{-1})\circ
T^{-1}_{uu^{-1}}))^{\ast}\rangle$ $\displaystyle=\langle
T_{(uru^{-1})^{\ast}(ur^{-1}u^{-1})^{\ast}}\circ
T_{uru^{-1}}\cdot(ur^{-1}u^{-1})^{\ast}\circ(u,r,1,u^{-1})\cdot(ur^{-1}u^{-1})^{\ast}$
$\displaystyle\circ T^{-1}_{uu^{-1}}\cdot(ur^{-1}u^{-1})^{\ast}\circ
T_{ur^{-1}u^{-1}}\circ(u,r^{-1},1,u^{-1})\circ T^{-1}_{uu^{-1}}\rangle.$
Now we define two closed paths in $(\mathcal{D},\mathbf{t})$. First we let
$\displaystyle P(r,u)=T_{(uru^{-1})(ur^{-1}u^{-1})}$
$\displaystyle\circ(u,r,1,u^{-1}ur^{-1}u^{-1})$
$\displaystyle\circ(T^{-1}_{uu^{-1}}\cdot
ur^{-1}u^{-1})\circ(u,r^{-1},1,u^{-1})\circ T^{-1}_{uu^{-1}},$
and second
$Q(r,u)=T_{urr^{-1}u^{-1}}\circ(u,r,1,r^{-1}u^{-1})\circ(u,r^{-1},1,u^{-1})\circ
T^{-1}_{uu^{-1}}.$
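It is instructive to specialize both paths to $u=1$, where $T_{uu^{-1}}$ may be taken to be the empty trivial path: then
$P(r,1)=T_{rr^{-1}}\circ(1,r,1,r^{-1})\circ(1,r^{-1},1,1)=Q(r,1),$
so the two constructions coincide on the trivial conjugator, consistent with the relation $P(r,u)\sim Q(r,u)$ established in the proof of proposition 4.5.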
###### Proposition 4.5.
The presentation $\mathcal{P}$ is aspherical if and only if
$\pi_{1}(\mathcal{D},\mathbf{t},1)$ is generated as an abelian group by the
homotopy classes of loops $Q(r,u)$ with $r\in\mathbf{r}$ and $u\in F$.
###### Proof.
First note that the presentation $\mathcal{P}$ is aspherical if and only if
the set of all $\eta(\mu\sigma(^{u}r({{}^{u}}r)^{-1}))$ generates
$\pi_{1}(\mathcal{D},\mathbf{t},1)$. The claim follows directly if we prove
that for each $r\in\mathbf{r}$ and $u\in F$,
$\eta(\mu\sigma(^{u}r({{}^{u}}r)^{-1}))=\langle P(r,u)\rangle$ and that
$P(r,u)\sim Q(r,u)$.
Let us prove first that $\eta(\mu\sigma(^{u}r({{}^{u}}r)^{-1}))=\langle
P(r,u)\rangle$. Consider the following paths in $(\mathcal{D},\mathbf{t})$.
First, we let
$\displaystyle a$ $\displaystyle=(uru^{-1})^{\ast}\cdot T_{ur^{-1}u^{-1}},$
$\displaystyle b$ $\displaystyle=T_{uru^{-1}}\cdot(ur^{-1}u^{-1}),$
$\displaystyle d$ $\displaystyle=T_{uru^{-1}}\cdot(ur^{-1}u^{-1})^{\ast},$
$\displaystyle c$ $\displaystyle=(uru^{-1})\cdot T_{ur^{-1}u^{-1}},$
and observe from lemma 4.4 that
$a\circ b\sim d\circ c$ (12)
Second, we let
$\displaystyle e$ $\displaystyle=(u,r,1,u^{-1}(ur^{-1}u^{-1})^{\ast}),$
$\displaystyle c$ $\displaystyle=(uru^{-1})\cdot T_{ur^{-1}u^{-1}},$
$\displaystyle g$ $\displaystyle=(u,r,1,u^{-1}(ur^{-1}u^{-1})),$
$\displaystyle f$ $\displaystyle=(uu^{-1})\cdot T_{ur^{-1}u^{-1}},$
where again from lemma 4.4 we have that $c\circ g\sim e\circ f$. This implies
that
$c^{-1}\sim g\circ f^{-1}\circ e^{-1}.$ (13)
And finally, let
$\displaystyle y$ $\displaystyle=T^{-1}_{uu^{-1}}\cdot(ur^{-1}u^{-1})^{\ast}$
$\displaystyle f$ $\displaystyle=(uu^{-1})\cdot T_{ur^{-1}u^{-1}}$
$\displaystyle x$ $\displaystyle=T^{-1}_{uu^{-1}}\cdot(ur^{-1}u^{-1})$
$\displaystyle z$ $\displaystyle=1\cdot T_{ur^{-1}u^{-1}},$
where as before $f\circ x\sim y\circ z$. This on the other hand implies that
$f^{-1}\sim x\circ z^{-1}\circ y^{-1}.$ (14)
Further we write $\ell=T_{(uru^{-1})^{\ast}(ur^{-1}u^{-1})^{\ast}}$,
$\ell_{1}=T_{(uru^{-1})(ur^{-1}u^{-1})}$, $k=(u,r^{-1},1,u^{-1})$ and
$h=T^{-1}_{uu^{-1}}$. With the above abbreviations we have
$\displaystyle\ell$ $\displaystyle\sim\ell_{1}\circ b^{-1}\circ a^{-1}$
(trivial parallel paths are $\sim$)
$\displaystyle\sim\ell_{1}\circ c^{-1}\circ d^{-1}$ (from (12))
$\displaystyle\sim\ell_{1}\circ(g\circ f^{-1}\circ e^{-1})\circ d^{-1}$ (from
(13)) $\displaystyle\sim\ell_{1}\circ g\circ(x\circ z^{-1}\circ y^{-1})\circ
e^{-1}\circ d^{-1}$ (from (14)).
It follows that
$\displaystyle\eta(\mu\sigma(^{u}r({{}^{u}}r)^{-1}))$
$\displaystyle=\langle\ell\circ(d\circ e\circ y\circ z\circ k\circ h)\rangle$
$\displaystyle=\langle(\ell_{1}\circ g\circ(x\circ z^{-1}\circ y^{-1})\circ
e^{-1}\circ d^{-1})\circ(d\circ e\circ y\circ z\circ k\circ h)\rangle$
$\displaystyle=\langle\ell_{1}\circ g\circ x\circ k\circ h\rangle$
$\displaystyle=\langle P(r,u)\rangle.$
Secondly, we prove that $P(r,u)\sim Q(r,u)$. Indeed, if we consider paths
$\displaystyle A$ $\displaystyle=(u,r,1,u^{-1}ur^{-1}u^{-1})$ $\displaystyle
B$ $\displaystyle=(ur)\cdot T^{-1}_{u^{-1}u}\cdot(r^{-1}u^{-1})$
$\displaystyle C$ $\displaystyle=u\cdot T^{-1}_{u^{-1}u}\cdot(r^{-1}u^{-1})$
$\displaystyle D$ $\displaystyle=(u,r,1,r^{-1}u^{-1}),$
which from lemma 4.4 satisfy $A\circ C\sim B\circ D$, then we have
$\displaystyle P(r,u)$
$\displaystyle=T_{(uru^{-1})(ur^{-1}u^{-1})}\circ(u,r,1,u^{-1}ur^{-1}u^{-1})\circ(T^{-1}_{uu^{-1}}\cdot
ur^{-1}u^{-1})\circ(u,r^{-1},1,u^{-1})\circ T^{-1}_{uu^{-1}}$
$\displaystyle\sim T_{(uru^{-1})(ur^{-1}u^{-1})}\circ A\circ(u\cdot
T^{-1}_{u^{-1}u}\cdot r^{-1}u^{-1})\circ(u,r^{-1},1,u^{-1})\circ
T^{-1}_{uu^{-1}}$ $\displaystyle=T_{(uru^{-1})(ur^{-1}u^{-1})}\circ A\circ
C\circ(u,r^{-1},1,u^{-1})\circ T^{-1}_{uu^{-1}}$ $\displaystyle\sim
T_{(uru^{-1})(ur^{-1}u^{-1})}\circ B\circ D\circ(u,r^{-1},1,u^{-1})\circ
T^{-1}_{uu^{-1}}$ $\displaystyle\sim T_{urr^{-1}u^{-1}}\circ B^{-1}\circ
B\circ D\circ(u,r^{-1},1,u^{-1})\circ T^{-1}_{uu^{-1}}$
$\displaystyle=T_{urr^{-1}u^{-1}}\circ D\circ(u,r^{-1},1,u^{-1})\circ
T^{-1}_{uu^{-1}}$ $\displaystyle=Q(r,u).$
This concludes the proof. ∎
Passing to homology we have the following
###### Proposition 4.6.
The presentation $\mathcal{P}$ is aspherical if and only if
$H_{1}(\mathcal{D},\mathbf{t},1)$ is generated as an abelian group by the
homology classes of 1-cycles corresponding to loops $Q(r,u)$ with
$r\in\mathbf{r}$ and $u\in F$.
###### Definition 4.7.
In $(\mathcal{D},\mathbf{t})$ we let $\mathbf{q}$ be the set of closed paths
$Q(r,1)$ with $r\in\mathbf{r}$.
If we attach to $(\mathcal{D},\mathbf{t})$ 2-cells $\sigma$ along the closed
paths $u.Q(r,1).v$ with $u,v\in F$ and 3-cells $[e,\sigma]$ and $[\sigma,e]$
for every 2-cell $\sigma$ and each positive edge $e$, then we obtain a new
3-complex $(\mathcal{D},\mathbf{q}\cup\mathbf{t})$. The asphericity of
$\mathcal{P}$ is encoded in the homology of
$(\mathcal{D},\mathbf{q}\cup\mathbf{t})$ as the following shows.
###### Theorem 4.8.
The presentation $\mathcal{P}$ is aspherical if and only if
$H_{1}(\mathcal{D},\mathbf{q}\cup\mathbf{t})=0$.
To prove the theorem we first note the following two lemmas.
###### Lemma 4.9.
For every $\varsigma\in Z_{1}(\mathcal{D},\mathbf{t})$ and every $u,v\in F$
such that $\bar{u}=\bar{v}$, $\varsigma\cdot
u+B_{1}(\mathcal{D},\mathbf{t})=\varsigma\cdot
v+B_{1}(\mathcal{D},\mathbf{t})$.
###### Proof.
It is enough to prove that for every positive edge $f$, we have
$\varsigma\cdot\iota f+B_{1}(\mathcal{D},\mathbf{t})=\varsigma\cdot\tau
f+B_{1}(\mathcal{D},\mathbf{t})$. From Lemma 4.1 of [39] it follows that
$\varsigma\cdot(\iota f-\tau f)\in B_{1}(\mathcal{D})$. But
$B_{1}(\mathcal{D})\subseteq B_{1}(\mathcal{D},\mathbf{t})$ and we are done. ∎
If $u=x_{1}^{\varepsilon_{1}}\dots x_{n}^{\varepsilon_{n}}\in F$ is any word
of length $|u|=n\in\mathbb{N}$, then a trivial path from 1 to $uu^{-1}$ is the
following
$T_{uu^{-1}}=(1,x_{1}^{\varepsilon_{1}}x_{1}^{-\varepsilon_{1}},1,1)^{-1}\circ\dots\circ(x_{1}^{\varepsilon_{1}}\dots
x_{|u|-1}^{\varepsilon_{|u|-1}},x_{|u|}^{\varepsilon_{|u|}}x_{|u|}^{-\varepsilon_{|u|}},1,x_{|u|-1}^{-\varepsilon_{|u|-1}}\dots
x_{1}^{-\varepsilon_{1}})^{-1}.$
We write for short
$\displaystyle t_{u}^{(1)}$
$\displaystyle=(1,x_{1}^{\varepsilon_{1}}x_{1}^{-\varepsilon_{1}},1,1)$
$\displaystyle\vdots$ $\displaystyle t_{u}^{(|u|)}$
$\displaystyle=(x_{1}^{\varepsilon_{1}}\dots
x_{|u|-1}^{\varepsilon_{|u|-1}},x_{|u|}^{\varepsilon_{|u|}}x_{|u|}^{-\varepsilon_{|u|}},1,x_{|u|-1}^{-\varepsilon_{|u|-1}}\dots
x_{1}^{-\varepsilon_{1}}),$
and let
$\tau_{uu^{-1}}=t_{u}^{(1)}+\dots+t_{u}^{(|u|)}.$
###### Definition 4.10.
For every $r\in\mathbf{r}$ and $u\in F^{\ast}$, we let
$q(r,u)=(u,r,1,r^{-1}u^{-1})+(u,r^{-1},1,u^{-1})+\tau_{uu^{-1}}-\tau_{urr^{-1}u^{-1}},$
be the 1-cycle that corresponds to the closed path $Q(r,u)$. When $u=1$, we
let
$q(r,1)=(1,r,1,r^{-1})+(1,r^{-1},1,1)-\tau_{rr^{-1}}$
be the 1-cycle corresponding to $Q(r,1)$.
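For instance, in the smallest nontrivial case take $u=x^{\varepsilon}$ a single letter. Then $\tau_{uu^{-1}}=t_{u}^{(1)}=(1,x^{\varepsilon}x^{-\varepsilon},1,1)$ and
$q(r,x^{\varepsilon})=(x^{\varepsilon},r,1,r^{-1}x^{-\varepsilon})+(x^{\varepsilon},r^{-1},1,x^{-\varepsilon})+(1,x^{\varepsilon}x^{-\varepsilon},1,1)-\tau_{x^{\varepsilon}rr^{-1}x^{-\varepsilon}},$
where the last term collects the positive edges coming from the trivial path $T_{x^{\varepsilon}rr^{-1}x^{-\varepsilon}}$.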
###### Lemma 4.11.
For every $r\in\mathbf{r}$ and $u\in F$,
$u.q(r,1).u^{-1}+B_{1}(\mathcal{D},\mathbf{t})=q(r,u)+B_{1}(\mathcal{D},\mathbf{t})$.
###### Proof.
First note that $T^{-1}_{uu^{-1}}\circ T_{urr^{-1}u^{-1}}\sim
u.T_{rr^{-1}}.u^{-1}$ since any two trivial paths with the same end points are
homotopic to each other. For the corresponding 1-chains we have that
$\tau_{uu^{-1}}-\tau_{urr^{-1}u^{-1}}+B_{1}(\mathcal{D},\mathbf{t})=-u.\tau_{rr^{-1}}.u^{-1}+B_{1}(\mathcal{D},\mathbf{t}).$
It follows now that
$\displaystyle u.q(r,1).u^{-1}+B_{1}(\mathcal{D},\mathbf{t})$
$\displaystyle=(u,r,1,r^{-1}u^{-1})+(u,r^{-1},1,u^{-1})-u.\tau_{rr^{-1}}.u^{-1}+B_{1}(\mathcal{D},\mathbf{t})$
$\displaystyle=(u,r,1,r^{-1}u^{-1})+(u,r^{-1},1,u^{-1})+\tau_{uu^{-1}}-\tau_{urr^{-1}u^{-1}}+B_{1}(\mathcal{D},\mathbf{t})$
$\displaystyle=q(r,u)+B_{1}(\mathcal{D},\mathbf{t}),$
proving the claim. ∎
###### Proof.
(of theorem 4.8) If $H_{1}(\mathcal{D},\mathbf{q}\cup\mathbf{t})=0$, then
$H_{1}(\mathcal{D},\mathbf{q}\cup\mathbf{t},1)=0$ which means that the
homology classes of the loops $u.Q(r,1).v$ trivialize
$H_{1}(\mathcal{D},\mathbf{t},1)$. We claim that every 1-cycle corresponding
to a loop $u.Q(r,1).v$ which sits inside $(\mathcal{D},\mathbf{t},1)$ is in
fact homologous to the 1-cycle corresponding to the loop $Q(r,u)$. Indeed,
since $u.Q(r,1).v$ is a loop in $(\mathcal{D},\mathbf{t},1)$, then
$\bar{v}=\bar{u}^{-1}$. It follows from lemma 4.9 and lemma 4.11 that
$\displaystyle u.q(r,1).v+B_{1}(\mathcal{D},\mathbf{t})$
$\displaystyle=u.q(r,1).u^{-1}+B_{1}(\mathcal{D},\mathbf{t})$
$\displaystyle=q(r,u)+B_{1}(\mathcal{D},\mathbf{t}),$
which proves our claim. As a consequence of this we have that the homology
classes of 1-cycles $q(r,u)$ trivialize $H_{1}(\mathcal{D},\mathbf{t},1)$, and
then from proposition 4.6 we get the asphericity of $\mathcal{P}$.
Conversely, if $\mathcal{P}$ is aspherical, then from proposition 4.6 and
lemma 4.11 $H_{1}(\mathcal{D},\mathbf{t},1)$ is generated as an abelian group
by the homology classes of 1-cycles $u.q(r,1).u^{-1}$. But from [40] the
homology group $H_{1}(\mathcal{D},\mathbf{t},w)$ of the connected component
$(\mathcal{D},\mathbf{t},w)$ is isomorphic to
$H_{1}(\mathcal{D},\mathbf{t},1)$, where the isomorphism
$\phi_{w}:H_{1}(\mathcal{D},\mathbf{t},1)\rightarrow
H_{1}(\mathcal{D},\mathbf{t},w)$ maps each homology class of some 1-cycle
$\varsigma$ to the homology class of $\varsigma\cdot w$. This shows that the
set of the homology classes of 1-cycles $u.q(r,1).u^{-1}w$ trivialize
$H_{1}(\mathcal{D},\mathbf{t})$. We prove that this set equals the set of
homology classes of 1-cycles $u.q(r,1).v$ where $u,v\in F$. Indeed, for every
$u,v\in F$ and every $q(r,1)$, if we take $w=uv$, we get that
$u.q(r,1).u^{-1}uv+B_{1}(\mathcal{D},\mathbf{t})$ is a generator of
$H_{1}(\mathcal{D},\mathbf{t})$. But from lemma 4.9,
$u.q(r,1).u^{-1}uv+B_{1}(\mathcal{D},\mathbf{t})=u.q(r,1).v+B_{1}(\mathcal{D},\mathbf{t})$,
hence $u.q(r,1).v+B_{1}(\mathcal{D},\mathbf{t})$ is a generator of
$H_{1}(\mathcal{D},\mathbf{t})$. For the converse, it is obvious that any
generator $u.q(r,1).u^{-1}w+B_{1}(\mathcal{D},\mathbf{t})$ is of the form
$u.q(r,1).v+B_{1}(\mathcal{D},\mathbf{t})$ with $v=u^{-1}w$. ∎
###### Remark 4.12.
The Squier complex $\mathcal{D}$ of the monoid presentation
$\mathcal{M}=\langle\mathbf{x},\mathbf{x}^{-1}:\mathbf{s}\rangle$ of $G$ has
an important property. As theorem 4.8 shows, the homology trivializers of
$H_{1}(\mathcal{D})$ are classes of 1-cycles corresponding to loops from
$\mathbf{q}\cup\mathbf{t}$ and each one of them arises from the resolution of
a critical pair. Indeed, if $r\in\mathbf{r}$ has the reduced word form
$r=x_{1}^{\varepsilon_{1}}\dots x_{n}^{\varepsilon_{n}}$ in $\hat{F}$, then
considering $x_{1}^{\varepsilon_{1}}\dots x_{n}^{\varepsilon_{n}}$ as a word
from $F$, we see that the loop $Q(r,1)$ is obtained by resolving the following
overlapping pair of edges
$((1,r,1,r^{-1}),(x_{1}^{\varepsilon_{1}}\dots
x_{n-1}^{\varepsilon_{n-1}},x_{n}^{\varepsilon_{n}}x_{n}^{-\varepsilon_{n}},1,x_{n-1}^{-\varepsilon_{n-1}}\dots
x_{1}^{-\varepsilon_{1}})).$
On the other hand, if
$t=(1,x^{\varepsilon}x^{-\varepsilon},1,x^{\varepsilon})\circ(x^{\varepsilon},x^{-\varepsilon}x^{\varepsilon},1,1)^{-1}$
is a loop of $\mathbf{t}$, then it arises from the resolution of the
overlapping pair
$((1,x^{\varepsilon}x^{-\varepsilon},1,x^{\varepsilon}),(x^{\varepsilon},x^{-\varepsilon}x^{\varepsilon},1,1)).$
The importance of this remark lies in the fact that when the given
presentation $\mathcal{P}$ is aspherical, then the sequence (4) that is
associated with the complex $(\mathcal{D},\mathbf{q}\cup\mathbf{t})$ is exact.
### 4.9 A preliminary result
Let $\mathcal{P}=(\mathbf{x},\mathbf{r})$ be an aspherical group presentation
and $\mathcal{P}_{1}=(\mathbf{x},\mathbf{r}_{1})$ a subpresentation of the
first where $\mathbf{r}_{1}=\mathbf{r}\setminus\\{r_{0}\\}$ and
$r_{0}\in\mathbf{r}$ is a fixed relation. We denote by $\Upsilon_{1}$,
$\mathfrak{U}_{1}$ monoids associated with $\mathcal{P}_{1}$ and by
$\mathcal{G}(\Upsilon_{1})$ and $\hat{\mathfrak{U}}_{1}$ their respective
groups and let $\tilde{\theta}_{1}$ be the morphism of the crossed module
$\mathcal{G}(\Upsilon_{1})$ whose kernel is denoted by $\tilde{\Pi}_{1}$. Also
we consider $\hat{\mathfrak{A}}_{1}$ the subgroup of $\hat{\mathfrak{U}}$
generated by all $\mu\sigma(bb^{-1})$ where $b\in Y_{1}\cup Y_{1}^{-1}$.
Finally note that the monomorphism $\varphi:\Upsilon_{1}\rightarrow\Upsilon$
induced by the map $\sigma_{1}(a)\rightarrow\sigma(a)$ induces a homomorphism
$\hat{\varphi}:\mathcal{G}(\Upsilon_{1})\rightarrow\mathcal{G}(\Upsilon)$.
These data fit into a commutative diagram as depicted below.
$\begin{array}{ccc}
\mathcal{G}(\Upsilon_{1}) & \overset{\hat{\varphi}}{\longrightarrow} & \mathcal{G}(\Upsilon)\\
{\scriptstyle\tilde{\theta}_{1}}\searrow & & \swarrow{\scriptstyle\tilde{\theta}}\\
& F &
\end{array}$
The following will be useful in the proof of our main theorem.
###### Proposition 4.13.
If $\mathcal{P}$ is aspherical, then
$\hat{\varphi}(\tilde{\Pi}_{1})=\hat{\mathfrak{A}}_{1}$.
###### Proof.
Let $\tilde{d}=\mu_{1}\sigma_{1}(a_{1}\cdots a_{n})\in\tilde{\Pi}_{1}$ where as
before no $a_{i}$ is equal to any
$\iota(\mu_{1}\sigma_{1}(^{u}{r})^{\varepsilon})$ and assume that
$\displaystyle\hat{\varphi}(\tilde{d})=$
$\displaystyle(\mu\sigma(b_{1}b_{1}^{-1})\cdots\mu\sigma(b_{s}b_{s}^{-1}))\cdot(\iota(\mu\sigma(b_{s+1}b_{s+1}^{-1}))\cdots\iota(\mu\sigma(b_{r}b_{r}^{-1})))$
$\displaystyle(\mu\sigma(c_{1}c_{1}^{-1})\cdots\mu\sigma(c_{t}c_{t}^{-1}))\cdot(\iota(\mu\sigma(d_{1}d_{1}^{-1}))\cdots\iota(\mu\sigma(d_{k}d_{k}^{-1}))),$
where the first half involves elements from $Y_{1}\cup Y_{1}^{-1}$ and the
second one is
$\mu\sigma(C)\iota(\mu\sigma(D))$
with
$C=c_{1}c_{1}^{-1}\cdots c_{t}c_{t}^{-1}\text{ and }D=d_{1}d_{1}^{-1}\cdots d_{k}d_{k}^{-1},$
where $C$ and $D$ involve only elements of the form
$(^{u}{r_{0}})^{\varepsilon}$ with $\varepsilon=\pm 1$. Recalling from above
that in $\mathcal{G}(\Upsilon)$ we have
$\displaystyle\mu\sigma((a_{1}\cdots a_{n})\cdot((b_{s+1}b_{s+1}^{-1})\cdots(b_{r}b_{r}^{-1}))\cdot((d_{1}d_{1}^{-1})\cdots(d_{k}d_{k}^{-1})))$
$\displaystyle=\mu\sigma(((b_{1}b_{1}^{-1})\cdots(b_{s}b_{s}^{-1}))\cdot((c_{1}c_{1}^{-1})\cdots(c_{t}c_{t}^{-1}))),$
we can apply $\hat{g}$ defined in proposition 3.5 on both sides and get
$\displaystyle g\sigma((a_{1}\cdots a_{n})\cdot((b_{s+1}b_{s+1}^{-1})\cdots(b_{r}b_{r}^{-1}))\cdot((d_{1}d_{1}^{-1})\cdots(d_{k}d_{k}^{-1})))$
$\displaystyle=g\sigma(((b_{1}b_{1}^{-1})\cdots(b_{s}b_{s}^{-1}))\cdot((c_{1}c_{1}^{-1})\cdots(c_{t}c_{t}^{-1}))).$
If we now write each $c_{i}=(^{u_{i}}r_{0})^{\varepsilon_{i}}$ and each
$d_{j}=(^{v_{j}}r_{0})^{\delta_{j}}$ where $\varepsilon_{i}$ and
$\delta_{j}=\pm 1$, while we write each
$a_{\ell}=(^{w_{\ell}}r_{\ell})^{\gamma_{\ell}}$ and each
$b_{p}=(^{\eta_{p}}\rho_{p})^{\epsilon_{p}}$ where all $r_{\ell}$ and
$\rho_{p}$ belong to $\mathbf{r}_{1}$ and $\gamma_{\ell},\epsilon_{p}=\pm 1$,
then the definition of $g$ yields
$(w_{1}^{\alpha}\cdot r_{1}^{\beta}+\cdots+w_{n}^{\alpha}\cdot r_{n}^{\beta})+(2\eta_{s+1}^{\alpha}\cdot\rho_{s+1}^{\beta}+\cdots+2\eta_{r}^{\alpha}\cdot\rho_{r}^{\beta})+(2v_{1}^{\alpha}+\cdots+2v_{k}^{\alpha})\cdot r_{0}^{\beta}=(2\eta_{1}^{\alpha}\cdot\rho_{1}^{\beta}+\cdots+2\eta_{s}^{\alpha}\cdot\rho_{s}^{\beta})+(2u_{1}^{\alpha}+\cdots+2u_{t}^{\alpha})\cdot r_{0}^{\beta}.$
The freeness of $\mathcal{N}(\mathcal{P})$ on the set of elements $r^{\beta}$
implies in particular that
$(2v_{1}^{\alpha}+\cdots+2v_{k}^{\alpha})\cdot r_{0}^{\beta}=(2u_{1}^{\alpha}+\cdots+2u_{t}^{\alpha})\cdot r_{0}^{\beta}$
from which we see that $k=t$, and after a rearrangement of terms
$u^{\alpha}_{i}=v^{\alpha}_{i}$ for $i=1,\dots,k$. The easily verified fact that
in $\mathcal{G}(\Upsilon)$, $\mu\sigma(aa^{-1})=\mu\sigma(a^{-1}a)$ and the
fact that if $u^{\alpha}=v^{\alpha}$, then for every $s\in\mathbf{r}$,
$\mu\sigma((^{v}s)^{\delta}(^{v}s)^{-\delta})=\mu\sigma((^{u}s)^{\delta}(^{u}s)^{-\delta})$,
imply easily that
$\mu\sigma((^{v}r_{0})^{\delta}(^{v}r_{0})^{-\delta})=\mu\sigma((^{u}r_{0})^{\varepsilon}(^{u}r_{0})^{-\varepsilon}).$
If we apply the latter to pairs $(c_{i},d_{i})$ for which
$u^{\alpha}_{i}=v^{\alpha}_{i}$, we get that
$\mu\sigma(C)\iota(\mu\sigma(D))=1$ which shows that
$\hat{\varphi}(\tilde{d})\in\hat{\mathfrak{A}}_{1}$. ∎
### 4.10 The proof
Throughout this section we assume that $\mathcal{P}=(\mathbf{x},\mathbf{r})$
is an aspherical presentation of the trivial group. Consider now a
subpresentation $\mathcal{P}_{1}=(\mathbf{x},\mathbf{r}_{1})$ of $\mathcal{P}$
where $\mathbf{r}_{1}=\mathbf{r}\setminus\\{r_{0}\\}$. For each of the above
group presentations, we have a monoid presentation of the same group, namely
$\mathcal{M}=\langle\mathbf{x},\mathbf{x}^{-1}:\mathbf{s}\rangle$
is a monoid presentation of the trivial group, where
$\mathbf{s}=\\{(r^{\varepsilon},1):r\in\mathbf{r},\varepsilon=\pm
1\\}\cup\left\\{(x^{\varepsilon}x^{-\varepsilon},1):x\in\mathbf{x},\varepsilon=\pm
1\right\\},$
and
$\mathcal{M}_{1}=\langle\mathbf{x},\mathbf{x}^{-1}:\mathbf{s}_{1}\rangle$
is a monoid presentation of the group given by $\mathcal{P}_{1}$, where
$\mathbf{s}_{1}=\\{(r_{1}^{\varepsilon},1):r_{1}\in\mathbf{r}_{1},\varepsilon=\pm
1\\}\cup\left\\{(x^{\varepsilon}x^{-\varepsilon},1):x\in\mathbf{x},\varepsilon=\pm
1\right\\}.$
Related to $\mathcal{M}$ we have defined two 2-complexes. The first one is the
usual Squier complex $\mathcal{D}$, and the second one is its extension
$(\mathcal{D},\mathbf{t})$, and similarly we have two 2-complexes arising from
$\mathcal{M}_{1}$, $\mathcal{D}_{1}$ and its extension
$(\mathcal{D}_{1},\mathbf{t})$. Further, $(\mathcal{D},\mathbf{t})$ has been
extended to a 3-complex $(\mathcal{D},\mathbf{q}\cup\mathbf{t})$ by adding
first 2-cells arising from $Q(r,1)$ and their translates, and then adding all
the 3-cells $[e,\sigma]$ or $[\sigma,e]$ for every 2-cell $\sigma$ and every
positive edge $e$. We abbreviate $(\mathcal{D},\mathbf{q}\cup\mathbf{t})$ as
$(\mathcal{D},\mathbf{p})$ where $\mathbf{p}=\mathbf{q}\cup\mathbf{t}$.
Likewise, $(\mathcal{D}_{1},\mathbf{t})$ extends to a 3-complex
$(\mathcal{D}_{1},\mathbf{p}_{1})$ where
$\mathbf{p}_{1}=\mathbf{q}_{1}\cup\mathbf{t}$ and $\mathbf{q}_{1}$ is the set
of 2-cells arising from $Q(r_{1},1)$ with $r_{1}\in\mathbf{r}_{1}$. But
$(\mathcal{D}_{1},\mathbf{p}_{1})$ is a subcomplex of
$(\mathcal{D},\mathbf{p})$, therefore we have the following exact sequence of
abelian groups
$\dots\longrightarrow H_{2}(\mathcal{D},\mathbf{p})\longrightarrow H_{2}((\mathcal{D},\mathbf{p}),(\mathcal{D}_{1},\mathbf{p}_{1}))\longrightarrow H_{1}(\mathcal{D}_{1},\mathbf{p}_{1})\longrightarrow H_{1}(\mathcal{D},\mathbf{p})\longrightarrow\dots$
We know from theorem 4.8 that $H_{1}(\mathcal{D},\mathbf{p})=0$, so if we
prove that
$H_{2}((\mathcal{D},\mathbf{p}),(\mathcal{D}_{1},\mathbf{p}_{1}))=0$, then the
exactness of the above sequence will imply that
$H_{1}(\mathcal{D}_{1},\mathbf{p}_{1})=0$ and we are done. Before we proceed
with the proof, we explain how the boundary maps for the corresponding
quotient complex are defined. For this we consider the commutative diagram
$\begin{array}{ccccc}
C_{3}(\mathcal{D},\mathbf{p}) & \overset{\tilde{\partial}_{3}}{\longrightarrow} & C_{2}(\mathcal{D},\mathbf{p}) & \overset{\tilde{\partial}_{2}}{\longrightarrow} & C_{1}(\mathcal{D},\mathbf{p})\\
\downarrow{\scriptstyle\mu_{3}} & & \downarrow{\scriptstyle\mu_{2}} & & \downarrow{\scriptstyle\mu_{1}}\\
C_{3}(\mathcal{D},\mathbf{p})/C_{3}(\mathcal{D}_{1},\mathbf{p}_{1}) & \overset{\hat{\partial}_{3}}{\longrightarrow} & C_{2}(\mathcal{D},\mathbf{p})/C_{2}(\mathcal{D}_{1},\mathbf{p}_{1}) & \overset{\hat{\partial}_{2}}{\longrightarrow} & C_{1}(\mathcal{D},\mathbf{p})/C_{1}(\mathcal{D}_{1},\mathbf{p}_{1})
\end{array}$
(15)
where $\mu_{i}$ for $i=1,2,3$ are the canonical epimorphisms. Then, for
$i=2,3$ and for every $\sigma\in C_{i}(\mathcal{D},\mathbf{p})$ we have
$\hat{\partial}_{i}(\mu_{i}\sigma)=\mu_{i-1}\tilde{\partial}_{i}(\sigma).$
We write
$Im(\hat{\partial}_{3})=B_{2}((\mathcal{D},\mathbf{p}),(\mathcal{D}_{1},\mathbf{p}_{1}))$,
and similarly
$Im(\hat{\partial}_{2})=B_{1}((\mathcal{D},\mathbf{p}),(\mathcal{D}_{1},\mathbf{p}_{1}))$.
Also we let
$Z_{2}((\mathcal{D},\mathbf{p}),(\mathcal{D}_{1},\mathbf{p}_{1}))=Ker(\hat{\partial}_{2})$
and then
$H_{2}((\mathcal{D},\mathbf{p}),(\mathcal{D}_{1},\mathbf{p}_{1}))=Z_{2}((\mathcal{D},\mathbf{p}),(\mathcal{D}_{1},\mathbf{p}_{1}))/B_{2}((\mathcal{D},\mathbf{p}),(\mathcal{D}_{1},\mathbf{p}_{1})).$
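The relative homology group above is, as usual, a quotient of cycles by boundaries in a chain complex of free abelian groups. As a purely illustrative aside (a toy computation over $\mathbb{Q}$ for a finite complex, not the complexes of this paper), the free ranks of such homology groups can be read off from the ranks of the boundary matrices; the sketch below computes the Betti numbers of the boundary of a 2-simplex:

```python
from fractions import Fraction

def matrix_rank(mat):
    """Rank over Q of an integer matrix (list of rows), via Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in mat]
    rk, rows, cols = 0, len(m), len(m[0]) if m else 0
    for c in range(cols):
        pivot = next((r for r in range(rk, rows) if m[r][c] != 0), None)
        if pivot is None:
            continue
        m[rk], m[pivot] = m[pivot], m[rk]
        for r in range(rows):
            if r != rk and m[r][c] != 0:
                f = m[r][c] / m[rk][c]
                m[r] = [a - f * b for a, b in zip(m[r], m[rk])]
        rk += 1
    return rk

def betti_numbers(dims, d):
    """dims[i] = rank of C_i; d[i] = matrix of the boundary map d_i : C_i -> C_{i-1}.

    Over Q, rank H_i = dim C_i - rank(d_i) - rank(d_{i+1})."""
    rank_d = {i: matrix_rank(d[i]) for i in d}
    return [dims[i] - rank_d.get(i, 0) - rank_d.get(i + 1, 0)
            for i in range(len(dims))]

# Boundary of a triangle: 3 vertices, 3 edges; d_1 sends each edge to the
# difference of its endpoints (columns: edges, rows: vertices).
d1 = [[-1, -1,  0],
      [ 1,  0, -1],
      [ 0,  1,  1]]
print(betti_numbers([3, 3], {1: d1}))  # -> [1, 1]: connected, one independent 1-cycle
```

The same rank bookkeeping underlies the identification of $H_{2}$ as $Z_{2}/B_{2}$ above, though there the chain groups are infinitely generated and the argument is structural rather than computational.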
We note that
$C_{2}(\mathcal{D},\mathbf{p})/C_{2}(\mathcal{D}_{1},\mathbf{p}_{1})\cong\mu_{2}(C_{2}(\mathcal{D}))\oplus
C_{2}^{\mathbf{q}_{0}},$
where $\mu_{2}(C_{2}(\mathcal{D}))$ is the free abelian group generated by all
2-cells $[e,f]$ where at least one of the edges $e$ or $f$ arises from
$r_{0}$, and $C_{2}^{\mathbf{q}_{0}}$ is the free abelian group on 2-cells
$u.\mathbf{q}_{0}.v$ where $\mathbf{q}_{0}$ is the 2-cell attached along
$Q(r_{0},1)$. We can thus regard
$Z_{2}((\mathcal{D},\mathbf{p}),(\mathcal{D}_{1},\mathbf{p}_{1}))$ as a
subgroup of $\mu_{2}(C_{2}(\mathcal{D}))\oplus C_{2}^{\mathbf{q}_{0}}$. Now we
let
$\varphi_{rel}:\mu_{2}(C_{2}(\mathcal{D}))\oplus
C_{2}^{\mathbf{q}_{0}}\rightarrow\mathbb{Z}G.\mathbf{q}_{0}.\mathbb{Z}G$
be the $(\mathbb{Z}F,\mathbb{Z}F)$-homomorphism defined by mapping
$\mu_{2}(C_{2}(\mathcal{D}))$ to 0, and every 2-cell $u.\mathbf{q}_{0}.v$ to
$\bar{u}.\mathbf{q}_{0}.\bar{v}$. Denote the kernel of $\varphi_{rel}$ by
$K_{rel}^{\mathbf{q}_{0}}$. By the same argument as that of [36] we see that
$K_{rel}^{\mathbf{q}_{0}}=\mu_{2}(C_{2}(\mathcal{D}))+J.\mathbf{q}_{0}.\mathbb{Z}F+\mathbb{Z}F.\mathbf{q}_{0}.J.$
Later we will make use of the fact that $K_{rel}^{\mathbf{q}_{0}}$ can be
regarded as a subgroup of $K^{\mathbf{p}}$.
Next we show that
$B_{2}((\mathcal{D},\mathbf{p}),(\mathcal{D}_{1},\mathbf{p}_{1}))\subseteq
K_{rel}^{\mathbf{q}_{0}}$ and that the restriction of $\hat{\partial}_{2}$ on
$K_{rel}^{\mathbf{q}_{0}}$ sends $K_{rel}^{\mathbf{q}_{0}}$ onto the subgroup
$B_{1}(\mathcal{D},\mathcal{D}_{1})$ of
$C_{1}(\mathcal{D},\mathbf{p})/C_{1}(\mathcal{D}_{1},\mathbf{p}_{1})$ defined
by
$B_{1}(\mathcal{D},\mathcal{D}_{1})=\left\\{\beta+C_{1}(\mathcal{D}_{1})|\beta\in\tilde{\partial}_{2}(C_{2}(\mathcal{D}))\right\\}.$
To see that
$B_{2}((\mathcal{D},\mathbf{p}),(\mathcal{D}_{1},\mathbf{p}_{1}))\subseteq
K_{rel}^{\mathbf{q}_{0}}$, we must prove that for every 3-cell $[f,\sigma]$ or
$[\sigma,f]$,
$\hat{\partial}_{3}\left([f,\sigma]+C_{3}(\mathcal{D}_{1},\mathbf{p}_{1})\right)\in
K_{rel}^{\mathbf{q}_{0}}$
and similarly,
$\hat{\partial}_{3}\left([\sigma,f]+C_{3}(\mathcal{D}_{1},\mathbf{p}_{1})\right)\in
K_{rel}^{\mathbf{q}_{0}}.$
We prove the second claim; the first is analogous. Let $[\sigma,f]\notin
C_{3}(\mathcal{D}_{1},\mathbf{p}_{1})$. If $\sigma\in F.\mathbf{q}_{0}.F$ or
$f$ arises from $r_{0}$, then clearly
$\displaystyle\hat{\partial}_{3}\left([\sigma,f]+C_{3}(\mathcal{D}_{1},\mathbf{p}_{1})\right)$
$\displaystyle=\left(\sigma.(\iota f-\tau
f)-\sum_{i}\varepsilon_{i}[e_{i},f]\right)+C_{2}(\mathcal{D}_{1},\mathbf{p}_{1})\in
K_{rel}^{\mathbf{q}_{0}}.$
Otherwise, if $\sigma\notin\mathbf{q}_{0}$ and $f$ arises from
$\mathbf{r}_{1}$, then $\sigma=[g,h]$ where at least $g$ or $h$ arises from
$r_{0}$. Again we see that
$\hat{\partial}_{3}\left([\sigma,f]+C_{3}(\mathcal{D}_{1},\mathbf{p}_{1})\right)\in
K_{rel}^{\mathbf{q}_{0}}$.
Next we prove that the restriction of $\hat{\partial}_{2}$ on
$K_{rel}^{\mathbf{q}_{0}}$ sends $K_{rel}^{\mathbf{q}_{0}}$ onto
$B_{1}(\mathcal{D},\mathcal{D}_{1})$. Indeed, since for every $(\iota f-\tau
f).\sigma_{0}\in J.\mathbf{q}_{0}.\mathbb{Z}F$
$(\iota f-\tau
f).\sigma_{0}=\tilde{\partial}_{3}[f,\sigma_{0}]-\sum_{i}\varepsilon_{i}[f,e_{i}],$
then we can derive that
$\displaystyle\hat{\partial}_{2}\left((\iota f-\tau f).\sigma_{0}\right)$
$\displaystyle=-\hat{\partial}_{2}\left(\sum_{i}\varepsilon_{i}[f,e_{i}]\right)$
$\displaystyle=-\sum_{i}\varepsilon_{i}\tilde{\partial}_{2}[f,e_{i}]+C_{1}(\mathcal{D}_{1})\in
B_{1}(\mathcal{D},\mathcal{D}_{1}).$
In a symmetric way one can show that
$\hat{\partial}_{2}\left(\sigma_{0}.(\iota f-\tau f)\right)\in
B_{1}(\mathcal{D},\mathcal{D}_{1})$. Finally, if
$[e,f]\in\mu_{2}(C_{2}(\mathcal{D}))$, then
$\hat{\partial}_{2}[e,f]=\tilde{\partial_{2}}[e,f]+C_{1}(\mathcal{D}_{1})\in
B_{1}(\mathcal{D},\mathcal{D}_{1}).$
This also shows that $\hat{\partial}_{2}$ is onto.
Therefore we have the complex
$0\longrightarrow B_{2}((\mathcal{D},\mathbf{p}),(\mathcal{D}_{1},\mathbf{p}_{1}))\overset{incl.}{\longrightarrow}K_{rel}^{\mathbf{q}_{0}}\overset{\hat{\partial}_{2}}{\longrightarrow}B_{1}(\mathcal{D},\mathcal{D}_{1})\longrightarrow 0.$
(16)
which is exact on the left and on the right.
###### Lemma 4.14.
The complex (16) is exact.
###### Proof.
For this we consider the commutative diagram
$\begin{array}{ccccccccc}
0 & \longrightarrow & B_{2}(\mathcal{D},\mathbf{p}) & \overset{incl.}{\longrightarrow} & K^{\mathbf{p}} & \overset{\tilde{\partial}_{2}}{\longrightarrow} & B_{1}(\mathcal{D}) & \longrightarrow & 0\\
& & \downarrow{\scriptstyle\mu_{2}} & & \downarrow{\scriptstyle\mu_{2}} & & \downarrow{\scriptstyle\mu_{1}} & & \\
0 & \longrightarrow & B_{2}((\mathcal{D},\mathbf{p}),(\mathcal{D}_{1},\mathbf{p}_{1})) & \overset{incl.}{\longrightarrow} & K_{rel}^{\mathbf{q}_{0}} & \overset{\hat{\partial}_{2}}{\longrightarrow} & B_{1}(\mathcal{D},\mathcal{D}_{1}) & \longrightarrow & 0
\end{array}$
The top row is exact from proposition 4.3 and from remark 4.12, and
$\mu_{1},\mu_{2}$ are the restrictions of the epimorphisms of (15). Let
$\xi=\sum_{i}z_{i}\mu_{2}(\sigma_{i})\in Ker\hat{\partial}_{2}$. We recall
that $\xi$ can be regarded as an element of $K^{\mathbf{p}}$ with no terms
arising from $\mathbf{t}$ or square 2-cells $[e,f]$ with both $e$ and $f$ in
$\mathcal{D}_{1}$. Further we have that
$\displaystyle 0$
$\displaystyle=\hat{\partial}_{2}\left(\sum_{i}z_{i}\mu_{2}(\sigma_{i})\right)=\sum_{i}z_{i}\hat{\partial}_{2}\mu_{2}(\sigma_{i})$
$\displaystyle=\sum_{i}z_{i}\mu_{1}\tilde{\partial}_{2}(\sigma_{i})=\mu_{1}\tilde{\partial}_{2}\left(\sum_{i}z_{i}\sigma_{i}\right),$
which implies that
$\tilde{\partial}_{2}\left(\sum_{i}z_{i}\sigma_{i}\right)\in
C_{1}(\mathcal{D}_{1})$, and so
$\tilde{\partial}_{2}\left(\sum_{i}z_{i}\sigma_{i}\right)$ is a 1-cycle in
$Z_{1}(\mathcal{D}_{1})$. It follows that
$\tilde{\partial}_{2}\left(\sum_{i}z_{i}\sigma_{i}\right)\in
Ker\tilde{\partial}_{1}\cap(J.\mathbf{s}_{1}.\mathbb{Z}F+\mathbb{Z}F.\mathbf{s}_{1}.J).$
We note that each term from
$J.\mathbf{s}_{1}.\mathbb{Z}F+\mathbb{Z}F.\mathbf{s}_{1}.J$ that is
represented in $\tilde{\partial}_{2}\left(\sum_{i}z_{i}\sigma_{i}\right)$
arises either from a 2-cell $[e,f]$ where at least one of $e$ or $f$ is a
positive edge that belongs to $\mathcal{D}_{1}$, or arises from an element of
the form $j.\mathbf{q}_{0}.v$ or $u.\mathbf{q}_{0}.j$ with $u,v\in F$ and
$j\in J$. Theorem 6.6 of [33] implies that there is a 2-chain
$\sum_{j}k_{j}\beta_{j}\in C_{2}(\mathcal{D}_{1})$ such that
$\tilde{\partial}_{2}\left(\sum_{i}z_{i}\sigma_{i}\right)=\tilde{\partial}_{2}\left(\sum_{j}k_{j}\beta_{j}\right)$
and then we have the 2-cycle
$\tilde{\xi}=\sum_{i}z_{i}\sigma_{i}-\sum_{j}k_{j}\beta_{j}$ in
$K^{\mathbf{p}}$. It follows that $\tilde{\xi}$ is a 2-boundary since the top
row is exact, and has the property that
$\displaystyle\mu_{2}(\tilde{\xi})$
$\displaystyle=\mu_{2}\left(\sum_{i}z_{i}\sigma_{i}\right)-\mu_{2}\left(\sum_{j}k_{j}\beta_{j}\right)$
$\displaystyle=\mu_{2}\left(\sum_{i}z_{i}\sigma_{i}\right)$ (since each
$\beta_{j}\in C_{2}(\mathcal{D}_{1})$)
$\displaystyle=\sum_{i}z_{i}\mu_{2}(\sigma_{i})$ $\displaystyle=\xi,$
hence for some $w\in C_{3}(\mathcal{D},\mathbf{p})$ we have that
$\xi=\mu_{2}(\tilde{\xi})=\mu_{2}(\tilde{\partial}_{3}(w))=\hat{\partial}_{3}\mu_{3}(w).$
This proves that $\xi$ is a relative 2-boundary and as a consequence the
exactness of the bottom row. ∎
Further we note that $B_{1}(\mathcal{D},\mathcal{D}_{1})$ embeds in
$Im(\hat{\partial}_{2})$. Indeed, any element
$\tilde{\partial}_{2}(\xi)+C_{1}(\mathcal{D}_{1})$ of
$B_{1}(\mathcal{D},\mathcal{D}_{1})$ where $\xi$ is a 2-chain from
$C_{2}(\mathcal{D})$ is in $Im(\hat{\partial}_{2})$ since
$C_{2}(\mathcal{D})\leq C_{2}(\mathcal{D},\mathbf{p})$ and
$C_{1}(\mathcal{D}_{1},\mathbf{p}_{1})=C_{1}(\mathcal{D}_{1})$.
Finally, consider the commutative diagram
$\begin{array}{ccccccccc}
0 & \longrightarrow & B_{2}((\mathcal{D},\mathbf{p}),(\mathcal{D}_{1},\mathbf{p}_{1})) & \overset{incl.}{\longrightarrow} & K_{rel}^{\mathbf{q}_{0}} & \overset{\hat{\partial}_{2}}{\longrightarrow} & B_{1}(\mathcal{D},\mathcal{D}_{1}) & \longrightarrow & 0\\
& & \downarrow{\scriptstyle\iota} & & \downarrow{\scriptstyle\iota} & & \downarrow{\scriptstyle\iota} & & \\
0 & \longrightarrow & Z_{2}((\mathcal{D},\mathbf{p}),(\mathcal{D}_{1},\mathbf{p}_{1})) & \overset{\iota}{\longrightarrow} & \mu_{2}(C_{2}(\mathcal{D}))\oplus C_{2}^{\mathbf{q}_{0}} & \overset{\hat{\partial}_{2}}{\longrightarrow} & Im(\hat{\partial}_{2}) & \longrightarrow & 0
\end{array}$
where the top row is exact by lemma 4.14, and the bottom one is exact as well,
with $Im(\hat{\partial}_{2})\leq
C_{1}(\mathcal{D},\mathbf{p})/C_{1}(\mathcal{D}_{1},\mathbf{p}_{1})$. From the
Snake Lemma we get the exact sequence
$0\longrightarrow H_{2}((\mathcal{D},\mathbf{p}),(\mathcal{D}_{1},\mathbf{p}_{1}))\longrightarrow\mathbb{Z}G.\mathbf{q}_{0}.\mathbb{Z}G\overset{\nu}{\longrightarrow}Im(\hat{\partial}_{2})/B_{1}(\mathcal{D},\mathcal{D}_{1})\longrightarrow 0,$
(17)
where
$\nu(\mathbf{q}_{0})=\hat{\partial}_{2}([1,\mathbf{q}_{0},1])+B_{1}(\mathcal{D},\mathcal{D}_{1})$.
Since $G$ is the trivial group, we have that for every $[u,\mathbf{q}_{0},v]$,
$\hat{\partial}_{2}([u,\mathbf{q}_{0},v])+B_{1}(\mathcal{D},\mathcal{D}_{1})=\hat{\partial}_{2}([1,\mathbf{q}_{0},1])+B_{1}(\mathcal{D},\mathcal{D}_{1}).$
(18)
This follows easily if we prove that for every positive edge $e$ and every
$v\in F$ we have that
$\hat{\partial}_{2}([\iota
e,\mathbf{q}_{0},v])+B_{1}(\mathcal{D},\mathcal{D}_{1})=\hat{\partial}_{2}([\tau
e,\mathbf{q}_{0},v])+B_{1}(\mathcal{D},\mathcal{D}_{1}),$
and similarly, for every positive edge $f$ and $u\in F$,
$\hat{\partial}_{2}([u,\mathbf{q}_{0},\iota
f])+B_{1}(\mathcal{D},\mathcal{D}_{1})=\hat{\partial}_{2}([u,\mathbf{q}_{0},\tau
f])+B_{1}(\mathcal{D},\mathcal{D}_{1}).$
We prove the first claim; the second is analogous. Since
$(\iota e-\tau
e).\mathbf{q}_{0}.v+C_{2}(\mathcal{D}_{1},\mathbf{p}_{1})=\left(\tilde{\partial}_{3}[e,\mathbf{q}_{0}]-\sum_{i}\varepsilon_{i}[e,e_{i}]\right)+C_{2}(\mathcal{D}_{1},\mathbf{p}_{1}),$
where $\partial\mathbf{q}_{0}=e_{1}^{\varepsilon_{1}}\dots
e_{n}^{\varepsilon_{n}}$, then
$\tilde{\partial}_{2}((\iota e-\tau
e).\mathbf{q}_{0}.v)+C_{1}(\mathcal{D}_{1})=-\tilde{\partial}_{2}\left(\sum_{i}\varepsilon_{i}[e,e_{i}]\right)+C_{1}(\mathcal{D}_{1}).$
But
$\tilde{\partial}_{2}\left(\sum_{i}\varepsilon_{i}[e,e_{i}]\right)+C_{1}(\mathcal{D}_{1})\in
B_{1}(\mathcal{D},\mathcal{D}_{1}),$
consequently
$\displaystyle\left(\hat{\partial}_{2}([\iota
e,\mathbf{q}_{0},v])-\hat{\partial}_{2}([\tau
e,\mathbf{q}_{0},v])\right)+B_{1}(\mathcal{D},\mathcal{D}_{1})$
$\displaystyle=\left(\tilde{\partial}_{2}((\iota e-\tau
e).\mathbf{q}_{0}.v)+C_{1}(\mathcal{D}_{1})\right)+B_{1}(\mathcal{D},\mathcal{D}_{1})$
$\displaystyle=-\left(\tilde{\partial}_{2}\left(\sum_{i}\varepsilon_{i}[e,e_{i}]\right)+C_{1}(\mathcal{D}_{1})\right)+B_{1}(\mathcal{D},\mathcal{D}_{1})$
$\displaystyle=B_{1}(\mathcal{D},\mathcal{D}_{1}),$
which proves the first claim.
An obvious consequence of (18) is that
$Im(\hat{\partial}_{2})/B_{1}(\mathcal{D},\mathcal{D}_{1})$ is a cyclic group
with generator
$\hat{\partial}_{2}([1,\mathbf{q}_{0},1])+B_{1}(\mathcal{D},\mathcal{D}_{1})$.
The key to proving our main theorem is that
$Im(\hat{\partial}_{2})/B_{1}(\mathcal{D},\mathcal{D}_{1})$ is infinite
cyclic. Before that, we need to do some preparatory work.
If we let $G_{1}$ be the group given by $\mathcal{P}_{1}$, then for every
$g\in G_{1}$, we let $(\mathcal{D}_{1},\mathbf{t},g)$ be the connected
component of $(\mathcal{D}_{1},\mathbf{t})$ corresponding to $g$, and let
$H_{1}(\mathcal{D}_{1},\mathbf{t},g)$ be the corresponding homology group. The
homology group $H_{1}(\mathcal{D}_{1},\mathbf{t})$ decomposes as a direct sum
$H_{1}(\mathcal{D}_{1},\mathbf{t})=\oplus_{g\in
G_{1}}H_{1}(\mathcal{D}_{1},\mathbf{t},g).$
Any 1-cycle $\varsigma$ now decomposes uniquely as
$\varsigma=\varsigma_{g_{1}}+\dots+\varsigma_{g_{n}}$
where $\varsigma_{g_{i}}\in Z_{1}(\mathcal{D}_{1},\mathbf{t},g_{i})$, and
$\varsigma+B_{1}(\mathcal{D}_{1},\mathbf{t})$ can be written uniquely as
$\varsigma+B_{1}(\mathcal{D}_{1},\mathbf{t})=(\varsigma_{g_{1}}+B_{1}(\mathcal{D}_{1},\mathbf{t},g_{1}))+\dots+(\varsigma_{g_{n}}+B_{1}(\mathcal{D}_{1},\mathbf{t},g_{n})).$
From [40] we know that each $H_{1}(\mathcal{D}_{1},\mathbf{t},g_{i})$ is
isomorphic to $H_{1}(\mathcal{D}_{1},\mathbf{t},1)$ where the isomorphism
$\theta_{u_{i}}:H_{1}(\mathcal{D}_{1},\mathbf{t},g_{i})\rightarrow
H_{1}(\mathcal{D}_{1},\mathbf{t},1)$
is defined by
$\varsigma_{g_{i}}+B_{1}(\mathcal{D}_{1},\mathbf{t},g_{i})\mapsto\varsigma_{g_{i}}\cdot
u_{i}^{-1}+B_{1}(\mathcal{D}_{1},\mathbf{t},1)$
where $u_{i}$ is any vertex in $(\mathcal{D}_{1},\mathbf{t},g_{i})$.
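This splitting of $H_{1}$ over connected components mirrors the elementary fact that the first homology of a graph decomposes over its components, each component with $V_{c}$ vertices and $E_{c}$ edges contributing cycle rank $E_{c}-V_{c}+1$. A small illustrative sketch (a toy graph computation, not the components $(\mathcal{D}_{1},\mathbf{t},g_{i})$ themselves):

```python
def components(vertices, edges):
    """Connected components of an undirected graph via union-find (path halving)."""
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for a, b in edges:
        parent[find(a)] = find(b)
    comps = {}
    for v in vertices:
        comps.setdefault(find(v), []).append(v)
    return list(comps.values())

def cycle_ranks(vertices, edges):
    """Per-component rank of H_1 of a graph: |E_c| - |V_c| + 1."""
    ranks = []
    for comp in components(vertices, edges):
        cset = set(comp)
        # both endpoints of an edge lie in the same component, so testing
        # one endpoint suffices
        e_c = sum(1 for a, b in edges if a in cset)
        ranks.append(e_c - len(comp) + 1)
    return ranks

# A triangle (one independent cycle) next to a 2-edge path (no cycles).
print(cycle_ranks(range(6), [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5)]))  # -> [1, 0]
```

The total first Betti number is then the sum of the per-component contributions, just as $H_{1}(\mathcal{D}_{1},\mathbf{t})$ above is the direct sum of the $H_{1}(\mathcal{D}_{1},\mathbf{t},g_{i})$.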
We let
$\psi_{1}:H_{1}(\mathcal{D}_{1},\mathbf{t},1)\rightarrow\tilde{\Pi}_{1}$ and
$\eta:\tilde{\Pi}\rightarrow H_{1}(\mathcal{D},\mathbf{t})$ be the isomorphisms
of [40]. With these notations the following holds true.
###### Lemma 4.15.
For every $\varsigma\in Z_{1}(\mathcal{D}_{1},\mathbf{t},1)$,
$\hat{\varphi}\psi_{1}(\varsigma+B_{1}(\mathcal{D}_{1},\mathbf{t},1))=\psi(\varsigma+B_{1}(\mathcal{D},\mathbf{t}))$.
###### Proof.
This follows easily from the definitions of $\psi,\psi_{1}$ and
$\hat{\varphi}$. Indeed, assume that
$\varsigma=\sum_{i\in I}z_{i}(u_{i},s_{i},1,v_{i}),$
and let
$J=\\{i\in I:s_{i}=r^{\varepsilon_{i}}_{i}\text{ where
}r^{\varepsilon_{i}}_{i}\in\mathbf{r}_{1}^{\pm 1}\\}.$
Then from the definitions of $\psi_{1}$ and $\psi$ we have that
$\psi_{1,0}(\varsigma)=\prod_{j\in
J}\mu_{1}\sigma_{1}(^{u_{j}}s_{j})^{z_{j}}\text{ and
}\psi_{0}(\varsigma)=\prod_{j\in J}\mu\sigma(^{u_{j}}s_{j})^{z_{j}}$ (19)
where the exponential notation on the right-hand sides means that if
$z_{j}<0$, then
$\mu_{1}\sigma_{1}(^{u_{j}}s_{j})^{z_{j}}=\iota\left(\mu_{1}\sigma_{1}(^{u_{j}}s_{j})\right)^{|z_{j}|}$
and likewise,
$\mu\sigma(^{u_{j}}s_{j})^{z_{j}}=\iota\left(\mu\sigma(^{u_{j}}s_{j})\right)^{|z_{j}|}$.
We used the definitions of $\psi_{1,0}$ and $\psi_{0}$ by regarding
$\varsigma$ as a sum of 1-cycles arising from loops in
$(\mathcal{D}_{1},\mathbf{t},1)$. This is always possible due to lemma 5.1 of
[33]. Further we have that
$\displaystyle\hat{\varphi}\psi_{1}(\varsigma+B_{1}(\mathcal{D}_{1},\mathbf{t},1))$
$\displaystyle=\hat{\varphi}(\psi_{1,0}(\varsigma))$ (from the definition of
$\psi_{1}$) $\displaystyle=\hat{\varphi}\left(\prod_{j\in
J}\mu_{1}\sigma_{1}(^{u_{j}}s_{j})^{z_{j}}\right)$ (from (19))
$\displaystyle=\prod_{j\in J}\mu\sigma(^{u_{j}}s_{j})^{z_{j}}$ (from the
definition of $\hat{\varphi}$) $\displaystyle=\psi_{0}(\varsigma)$ (from (19))
$\displaystyle=\psi(\varsigma+B_{1}(\mathcal{D},\mathbf{t}))$
$\displaystyle\text{(from the definition of $\psi$)},$
proving the lemma. ∎
With the decomposition $H_{1}(\mathcal{D}_{1},\mathbf{t})=\oplus_{g_{i}\in
G_{1}}H_{1}(\mathcal{D}_{1},\mathbf{t},g_{i})$, consider the following
sequence of homomorphisms
$\oplus_{g_{i}\in G_{1}}H_{1}(\mathcal{D}_{1},\mathbf{t},g_{i})\overset{\oplus\theta_{u_{i}}}{\longrightarrow}\oplus_{g_{i}\in G_{1}}H_{1}(\mathcal{D}_{1},\mathbf{t},1)\overset{\oplus\psi_{1}}{\longrightarrow}\oplus_{g_{i}\in G_{1}}\tilde{\Pi}_{1}\overset{\oplus\hat{\varphi}}{\longrightarrow}\oplus_{g_{i}\in G_{1}}\tilde{\Pi}\overset{\gamma}{\longrightarrow}H_{1}(\mathcal{D},\mathbf{t}),$
where
$\gamma\left(\sum_{g_{i}}d_{g_{i}}\right)=\sum_{g_{i}}\eta(d_{g_{i}}),$
and write for short
$\chi=\gamma(\oplus\hat{\varphi})(\oplus\psi_{1})(\oplus\theta_{u_{i}})$.
###### Lemma 4.16.
For any element $\varsigma+B_{1}(\mathcal{D}_{1},\mathbf{t})\in
H_{1}(\mathcal{D}_{1},\mathbf{t})$, we have
$\chi(\varsigma+B_{1}(\mathcal{D}_{1},\mathbf{t}))=\varsigma+B_{1}(\mathcal{D},\mathbf{t})$.
###### Proof.
If $\varsigma+B_{1}(\mathcal{D}_{1},\mathbf{t})$ is expressed as
$\varsigma+B_{1}(\mathcal{D}_{1},\mathbf{t})=\sum_{g_{i}}(\varsigma_{g_{i}}+B_{1}(\mathcal{D}_{1},\mathbf{t},g_{i})),$
then we have
$\displaystyle\chi(\varsigma+B_{1}(\mathcal{D}_{1},\mathbf{t}))$
$\displaystyle=(\gamma(\oplus\hat{\varphi})(\oplus\psi_{1})(\oplus\theta_{u_{i}}))\left(\sum_{g_{i}}(\varsigma_{g_{i}}+B_{1}(\mathcal{D}_{1},\mathbf{t},g_{i}))\right)$
$\displaystyle=(\gamma(\oplus\hat{\varphi})(\oplus\psi_{1}))\left(\sum_{g_{i}}(\varsigma_{g_{i}}\cdot
u_{i}^{-1}+B_{1}(\mathcal{D}_{1},\mathbf{t},1))\right)$
$\displaystyle=(\gamma(\oplus\hat{\varphi}))\left(\sum_{g_{i}}\psi_{1}(\varsigma_{g_{i}}\cdot
u_{i}^{-1}+B_{1}(\mathcal{D}_{1},\mathbf{t},1))\right)$
$\displaystyle=\gamma\left(\sum_{g_{i}}\hat{\varphi}\psi_{1}(\varsigma_{g_{i}}\cdot
u_{i}^{-1}+B_{1}(\mathcal{D}_{1},\mathbf{t},1))\right)$
$\displaystyle=\gamma\left(\sum_{g_{i}}\psi(\varsigma_{g_{i}}\cdot
u_{i}^{-1}+B_{1}(\mathcal{D},\mathbf{t}))\right)$ (by lemma 4.15)
$\displaystyle=\sum_{g_{i}}\eta\psi(\varsigma_{g_{i}}\cdot
u_{i}^{-1}+B_{1}(\mathcal{D},\mathbf{t}))$
$\displaystyle=\sum_{g_{i}}\varsigma_{g_{i}}\cdot
u_{i}^{-1}+B_{1}(\mathcal{D},\mathbf{t})$
$\displaystyle=\sum_{g_{i}}\varsigma_{g_{i}}+B_{1}(\mathcal{D},\mathbf{t})$
(by lemma 4.9) $\displaystyle=\varsigma+B_{1}(\mathcal{D},\mathbf{t}).$
∎
###### Theorem 4.17.
The subpresentation $\mathcal{P}_{1}$ is aspherical.
###### Proof.
We prove first that
$Im(\hat{\partial}_{2})/B_{1}(\mathcal{D},\mathcal{D}_{1})$ is infinite
cyclic. If we assume the contrary, then there is $z\in\mathbb{Z}$ and a
2-chain $\xi\in C_{2}(\mathcal{D})$ such that
$z\tilde{\partial}_{2}([1,\mathbf{q}_{0},1])+C_{1}(\mathcal{D}_{1})=\tilde{\partial}_{2}(\xi)+C_{1}(\mathcal{D}_{1}).$
It follows that
$\varsigma=z\tilde{\partial}_{2}([1,\mathbf{q}_{0},1])-\tilde{\partial}_{2}(\xi)$
is a 1-cycle in $C_{1}(\mathcal{D}_{1})$ and therefore $\varsigma\in
Z_{1}(\mathcal{D}_{1},\mathbf{t})$. Now we see that
$\displaystyle\chi(\varsigma+B_{1}(\mathcal{D}_{1},\mathbf{t}))$
$\displaystyle=\varsigma+B_{1}(\mathcal{D},\mathbf{t})$ (from lemma 4.16)
$\displaystyle=(z\tilde{\partial}_{2}([1,\mathbf{q}_{0},1])-\tilde{\partial}_{2}(\xi))+B_{1}(\mathcal{D},\mathbf{t})$
$\displaystyle=z\tilde{\partial}_{2}([1,\mathbf{q}_{0},1])+B_{1}(\mathcal{D},\mathbf{t})$
$\displaystyle=zq(r_{0},1)+B_{1}(\mathcal{D},\mathbf{t}).$
But from proposition 4.13 it follows that
$((\oplus\hat{\varphi})(\oplus\psi_{1})(\oplus\theta_{u_{i}}))(\varsigma+B_{1}(\mathcal{D}_{1},\mathbf{t}))=\sum_{g_{i}}v_{g_{i}},$
where $v_{g_{i}}\in\hat{\mathfrak{A}}_{1}$, say
$v_{g_{i}}=\mu\sigma(^{u_{i}}r_{i}({{}^{u_{i}}}r_{i})^{-1})^{z_{i}}$ where
$r_{i}\in\mathbf{r}_{1}$, $u_{i}\in F$ and $z_{i}\in\mathbb{Z}$. Now we have
$\displaystyle\chi(\varsigma+B_{1}(\mathcal{D}_{1},\mathbf{t}))$
$\displaystyle=\gamma\left(\sum_{g_{i}}v_{g_{i}}\right)$
$\displaystyle=\gamma\left(\sum_{g_{i}}\mu\sigma(^{u_{i}}r_{i}({{}^{u_{i}}}r_{i})^{-1})^{z_{i}}\right)$
$\displaystyle=\sum_{g_{i}}\eta(\mu\sigma(^{u_{i}}r_{i}({{}^{u_{i}}}r_{i})^{-1})^{z_{i}})$
$\displaystyle=\sum_{g_{i}}z_{i}(q(r_{i},u_{i})+B_{1}(\mathcal{D},\mathbf{t}))$
$\displaystyle\text{(Proposition \ref{th})}.$
Recollecting, we have in $H_{1}(\mathcal{D},\mathbf{t})$ the equality
$zq(r_{0},1)+B_{1}(\mathcal{D},\mathbf{t})=\sum_{g_{i}}z_{i}q(r_{i},u_{i})+B_{1}(\mathcal{D},\mathbf{t}).$
In $\tilde{\Pi}=\hat{\mathfrak{U}}$ this equality translates to
$\mu\sigma(r_{0}r_{0}^{-1})^{z}=\prod_{g_{i}}\mu\sigma(^{u_{i}}r_{i}({{}^{u_{i}}}r_{i})^{-1})^{z_{i}}$
which from proposition 3.5 is impossible since each $r_{i}\neq r_{0}$. Hence
$Im(\hat{\partial}_{2})/B_{1}(\mathcal{D},\mathcal{D}_{1})$ is
infinite cyclic, and as a result it is isomorphic to
$\mathbb{Z}G.\mathbf{q}_{0}.\mathbb{Z}G$ where the isomorphism sends
$\mathbf{q}_{0}$ to
$\hat{\partial}_{2}([1,\mathbf{q}_{0},1])+B_{1}(\mathcal{D},\mathcal{D}_{1})$
which is the free generator of
$Im(\hat{\partial}_{2})/B_{1}(\mathcal{D},\mathcal{D}_{1})$. But this map is
the map $\nu$ of (17), therefore
$H_{2}((\mathcal{D},\mathbf{p}),(\mathcal{D}_{1},\mathbf{p}_{1}))=0$ as
desired. ∎
Thermodynamic Properties of q-deformed massless Dirac fermions in graphene
with Rashba coupling
Rachid Houça, El Bouâzzaoui Choubabi, Abdelhadi Belouad, Abdellatif Kamal and Mohammed El Bouziani
1Team of Theoretical Physics and High Energy, Department of Physics, Faculty
of Sciences, Ibn Zohr University, Agadir, Morocco,
PO Box 8106, Agadir, Morocco
2Team of Theoretical Physics, Laboratory L.P.M.C., Department of Physics,
Faculty of Sciences, Chouaib Doukkali University, El Jadida, Morocco,
PO Box 20, 24000 El Jadida, Morocco
3Department of Mechanical Engineering, National Higher School of Arts and
Crafts, Hassan II University, Casablanca, Morocco
We study the thermodynamic properties of massless Dirac fermions in graphene
under a uniform magnetic field and Rashba spin-orbit coupling within a
$q-$deformed Heisenberg-algebra calculus. The thermodynamic functions, such as
the Helmholtz free energy, total energy, entropy and heat capacity, are
obtained using an approach based on the zeta function and the Euler–Maclaurin
formula. These functions are examined numerically for different values of
$\eta={1\over i}\ln(q)$. In particular, for the heat capacity in the presence
of deformation, all curves coincide at high temperature and reach the fixed
value $C=6k_{B}$, three times greater than for undeformed massless Dirac
fermions in graphene.
PACS numbers: 65.80.Ck, 03.65.-w
Keywords: Graphene, Rashba coupling, zeta function, Euler–Maclaurin formula,
partition function, thermodynamic functions, q-deformed algebra.
## 1 Introduction
Graphene, an elemental sheet of graphite, consists of a periodic,
two-dimensional arrangement of carbon atoms of monoatomic thickness with a
honeycomb structure. It is the latest member of the carbon allotropic family:
diamond, graphite, $C_{60}$ fullerenes [1] and nanotubes [2]. A graphene sheet
stable at room temperature was first obtained experimentally in $2004$
by A. Geim and K. Novoselov [3]. This experiment contradicted the theory that
a graphene sheet is thermodynamically unstable. As this new material,
produced by mechanical exfoliation, has remarkable and unique properties, they
were awarded the Nobel Prize in Physics in $2010$. Since this discovery,
graphene has been the material most studied by the scientific community for
its new and unique physical properties. Indeed, it has a remarkably high
electrical mobility [4, 5], an anomalous quantum Hall effect [6], a tunable
band gap [7], and it is a transparent conductor [8], since in the optical
region it absorbs only $2.3\%$ of light. It also has good flexibility [9] and
excellent mechanical strength, and its thermal conductivity is ten times
higher than that of copper [10].
The more general framework of the $q-$deformation theory for a real parameter
$q$ has found great success and has attracted considerable attention from
physicists and mathematicians. The interesting physical application was
started by the introduction of the $q-$deformed harmonic oscillator by
Biedenharn [11] and Macfarlane [12] in $1989$. Quantum mechanics can be
considered as a deformation (the deformation parameter is $\hbar$) of
classical mechanics and relativistic mechanics is another deformation (with
$1\over c$ as the deformation parameter) of classical mechanics. In the same
sense, quantum mechanics can be seen as the limit of a more general theory
depending on one or more deformation parameters.
The study of the dynamic behavior of systems is a central question in physics
and mathematics. These systems provide fundamental and general results which
have found major applications not only in physics, but also in all other
branches of science, as well as in technology. The harmonic
oscillator is the simplest and most fundamental theoretical model of
mechanical and electrical oscillatory phenomena; the definitions and main
properties of time-independent and time-dependent harmonic oscillators, as
well as damped harmonic oscillators, have been studied by several authors [13, 14].
It was recently shown that the standard thermodynamics of Boltzmann–Gibbs
statistics is no longer suitable for studying all physical systems,
including the behaviour of complex systems governed by the Tsallis non-
extensive statistics [15] and the non-equilibrium statistics of the q-deformed
superstatistics [16, 17]. The concept of superstatistics was first developed
by Wilk and Wlodarczyk [18] before Beck and Cohen [19] later reformulated the
theory.
Motivated by the work done on thermodynamic properties under a magnetic field
and Rashba coupling [23], we generalize that work by introducing the
notion of the $q$-deformed harmonic oscillator and examining the influence of
the parameter $q$ on the various thermodynamic quantities. To this end, we
consider massless Dirac fermions in monolayer graphene with a magnetic field
applied perpendicular to the graphene layer. Using the $q$-deformed
Heisenberg algebra of the quantum oscillator, we express our Hamiltonian in
terms of creation and annihilation operators and obtain the energy spectrum;
the spectrum is then used to determine the partition function, from which we
calculate and plot numerically the different thermodynamic functions. The
present paper is organized as follows. In section 2, we give an overview of
the q-deformed Heisenberg algebra, which serves to determine, explicitly, the
exact eigenvalues in terms of the $q$-deformation parameter. In section 3, we
derive the partition function, which is the key to determining the different
thermodynamic functions, namely the Helmholtz free energy, internal energy,
entropy and heat capacity. Section 4 is devoted to the numerical results and
discussion, as well as a comparison with the literature. We conclude in the
final section.
## 2 Theoretical model
### 2.1 q-deformed quantum theory
The q-deformed algebra of the quantum oscillator is defined by q-deformed
Heisenberg algebra in terms of creation and annihilation operators
$a^{\dagger}$ and $a$, respectively, and number operator $N$ by [20, 21, 22]
$[a,a]=[a^{\dagger},a^{\dagger}]=0,\quad[a,a^{\dagger}]_{q}=aa^{\dagger}-q^{-1}a^{\dagger}a=q^{N},\quad[N,a^{\dagger}]=a^{\dagger},\quad[N,a]=-a$
(1)
where the deformation parameter $q$ is real and the $q$-numbers $[x]$ satisfy
the non-additivity property
$[x+y]\neq[x]+[y]$ (2)
In addition, the operators obey the relations
$[N]=a^{\dagger}a,\quad[N+1]=aa^{\dagger}$ (3)
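As a quick numerical cross-check of relations (1) and (3) (a minimal Python sketch; the deformation value $\eta=0.2$ and the 12-level truncation are illustrative choices, not taken from the paper), one can represent $a$ and $a^{\dagger}$ on a truncated $q$-Fock space:

```python
import numpy as np

eta, D = 0.2, 12                 # illustrative deformation and truncation
q = np.exp(1j * eta)             # q = e^{i*eta}

def qnum(n):
    """q-number [n] = (q^n - q^-n)/(q - q^-1) = sin(n*eta)/sin(eta)."""
    return np.sin(n * eta) / np.sin(eta)

# matrix representation on the truncated q-Fock space {|0>, ..., |D-1>}
a = np.zeros((D, D))
for n in range(1, D):
    a[n - 1, n] = np.sqrt(qnum(n))   # a |n> = sqrt([n]) |n-1>
ad = a.T                             # a† |n> = sqrt([n+1]) |n+1>

# relation (1): [a, a†]_q = a a† - q^{-1} a† a should equal q^N
lhs = a @ ad - q**(-1) * ad @ a
rhs = np.diag([q**n for n in range(D)])
assert np.allclose(lhs[:D-1, :D-1], rhs[:D-1, :D-1])
# relation (3): a† a = diag([n])
assert np.allclose(ad @ a, np.diag([qnum(n) for n in range(D)]))
```

The last row and column are excluded from the comparison because the truncation spoils $aa^{\dagger}$ in the highest retained level.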
The $q-$Fock space spanned by the orthonormalized eigenstates $|n\rangle$ is
constructed according to
$|n\rangle={(a^{\dagger})^{n}\over\sqrt{[n]!}}|0\rangle,\quad a|0\rangle=0$
(4)
Both $q-$factorial and $q-$numbers are defined, respectively, by
$[n]!=[n][n-1][n-2]\cdots[1],\quad[n]={q^{n}-q^{-n}\over q-q^{-1}}$ (5)
for $n\in\mathbb{N}$, with $[0]!=1$. The eigenvalues of the $q-$deformed one-dimensional harmonic oscillator are
$E_{n}={\hbar\omega\over 2}\left([n]+[n+1]\right)$ (6)
Considering the definition of basic number given in (5), and making
$q=e^{i\eta}$, the eigenvalues become
$E_{n}={\hbar\omega\over 2}{\sin[\eta(n+{1\over 2})]\over\sin[{\eta\over 2}]}$
(7)
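Equations (6) and (7) can be checked against each other numerically; the following sketch (illustrative units $\hbar\omega=1$) evaluates both forms:

```python
import math

def q_number(n, eta):
    """[n] = sin(n*eta)/sin(eta) for q = e^{i*eta}; [n] -> n as eta -> 0."""
    return float(n) if eta == 0.0 else math.sin(n * eta) / math.sin(eta)

def energy_sum(n, eta, hbar_omega=1.0):
    """Eq. (6): E_n = (hbar*omega/2) * ([n] + [n+1])."""
    return 0.5 * hbar_omega * (q_number(n, eta) + q_number(n + 1, eta))

def energy_closed(n, eta, hbar_omega=1.0):
    """Eq. (7): E_n = (hbar*omega/2) * sin(eta*(n+1/2)) / sin(eta/2)."""
    return 0.5 * hbar_omega * math.sin(eta * (n + 0.5)) / math.sin(eta / 2)

for n in range(6):
    # (6) and (7) agree for a finite deformation ...
    assert abs(energy_sum(n, 0.3) - energy_closed(n, 0.3)) < 1e-12
    # ... and (6) reduces to n + 1/2 in the undeformed limit
    assert abs(energy_sum(n, 0.0) - (n + 0.5)) < 1e-12
```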
### 2.2 Eigenvalue problem
We consider massless Dirac fermions in monolayer graphene with the magnetic
field applied perpendicular to the graphene layer. Low-energy quasiparticles
in graphene with Rashba spin–orbit coupling (RSOC) can be well
described by the Dirac-type Hamiltonian
$H=v_{F}\left(\eta\sigma_{x}\pi_{x}+\sigma_{y}\pi_{y}\right)+\lambda_{R}\left(\eta\sigma_{x}s_{y}-\sigma_{y}s_{x}\right)$
(8)
where the conjugate momentum $\pi_{x}$ and $\pi_{y}$ can be written in
symmetric gauge $\vec{A}=\frac{B}{2}(-y,x)$ as
$\pi_{x}=p_{x}-\frac{eB}{2}y,\qquad\pi_{y}=p_{y}+\frac{eB}{2}x.$ (9)
where $B$ and $v_{F}=10^{6}m/s$ are respectively the uniform magnetic field
and the Fermi velocity, the parameter $\eta=\pm 1$ labels the valley degrees
of freedom, and $\sigma=\left(\sigma_{x},\sigma_{y}\right)$ are the Pauli matrices
of the pseudospin operator on the $A(B)$ lattice sites. The system also
possesses intrinsic spin–orbit coupling (SOC), but it is very weak compared
with the RSOC [23]; we therefore neglect it, since it does not affect
the physical properties of the system studied.
Fixing a certain intra-Landau-level quantum number, we denote by
$|r_{A,B},n,\sigma\rangle={(a^{\dagger})^{n}\over\sqrt{[n]!}}|r_{A,B},0,\sigma\rangle$
a state in the $n$-th Landau level with spin direction
$\sigma\in\\{\uparrow,\downarrow\\}$, and the eigenstates are of the
form
$|\Psi\rangle=(|r_{A},n,\uparrow\rangle,|r_{B},n-1,\downarrow\rangle,|r_{B},n,\uparrow\rangle,|r_{A},n-1,\downarrow\rangle)^{t}$.
The Hamiltonian (8) around a single Dirac point $(\eta=+1)$ with these
considerations is given by
$H=\left(\begin{array}[]{cccc}0&0&v_{F}\left(\pi_{x}-i\pi_{y}\right)&0\\\
0&0&0&v_{F}\left(\pi_{x}+i\pi_{y}\right)\\\
v_{F}\left(\pi_{x}+i\pi_{y}\right)&0&0&-2i\lambda_{R}\\\
0&v_{F}\left(\pi_{x}-i\pi_{y}\right)&2i\lambda_{R}&0\\\ \end{array}\right).$
(10)
To diagonalize the Hamiltonian (10), it is convenient to introduce the usual
bosonic operators in terms of the conjugate momentum
$a=\frac{\ell_{B}}{\sqrt{2}\hbar}\left(\pi_{x}-i\pi_{y}\right),\qquad
a^{\dagger}=\frac{\ell_{B}}{\sqrt{2}\hbar}\left(\pi_{x}+i\pi_{y}\right)$ (11)
which satisfy the $q$-deformed commutation relation $[a,a^{\dagger}]_{q}=q^{N}$,
where $\ell_{B}=\sqrt{\frac{\hbar}{eB}}$ is the magnetic length. Expressing (10) in
terms of $a$ and $a^{\dagger}$ gives
$H=\left(\begin{matrix}0&0&\frac{\sqrt{2}\hbar v_{F}}{\ell_{B}}a&0\\\
0&0&0&\frac{\sqrt{2}\hbar v_{F}}{\ell_{B}}a^{\dagger}\\\ \frac{\sqrt{2}\hbar
v_{F}}{\ell_{B}}a^{\dagger}&0&0&-2i\lambda_{R}\\\ 0&\frac{\sqrt{2}\hbar
v_{F}}{\ell_{B}}a&2i\lambda_{R}&0\\\ \end{matrix}\right).$ (12)
To find the energy spectrum, we act with the Hamiltonian on the
state $|\Psi\rangle$, leading to the eigenvalue equation
$\left(\begin{matrix}-E&0&\frac{\sqrt{2}\hbar v_{F}}{\ell_{B}}a&0\\\
0&-E&0&\frac{\sqrt{2}\hbar v_{F}}{\ell_{B}}a^{\dagger}\\\ \frac{\sqrt{2}\hbar
v_{F}}{\ell_{B}}a^{\dagger}&0&-E&-2i\lambda_{R}\\\ 0&\frac{\sqrt{2}\hbar
v_{F}}{\ell_{B}}a&2i\lambda_{R}&-E\\\
\end{matrix}\right)\left(\begin{matrix}|r_{A},n,\uparrow\rangle\\\
|r_{B},n-1,\downarrow\rangle\\\ |r_{B},n,\uparrow\rangle\\\
|r_{A},n-1,\downarrow\rangle\\\ \end{matrix}\right)=\left(\begin{matrix}0\\\
0\\\ 0\\\ 0\\\ \end{matrix}\right)$ (13)
giving the following system of equations
$\displaystyle-E|r_{A},n,\uparrow\rangle+\frac{\sqrt{2}\hbar
v}{\ell_{B}}a|r_{B},n,\uparrow\rangle=0$ (14)
$\displaystyle-E|r_{B},n-1,\downarrow\rangle+\frac{\sqrt{2}\hbar
v}{\ell_{B}}a^{\dagger}|r_{A},n-1,\downarrow\rangle=0$ (15)
$\displaystyle\frac{\sqrt{2}\hbar
v}{\ell_{B}}a^{\dagger}|r_{A},n,\uparrow\rangle-E|r_{B},n,\uparrow\rangle-2i\lambda_{R}|r_{A},n-1,\downarrow\rangle=0$
(16) $\displaystyle\frac{\sqrt{2}\hbar
v}{\ell_{B}}a|r_{B},n-1,\downarrow\rangle+2i\lambda_{R}|r_{B},n,\uparrow\rangle-E|r_{A},n-1,\downarrow\rangle=0.$
(17)
These can be solved to obtain a second-order equation for the eigenvalues
$E^{2}\pm 2\lambda_{R}E-\left(\hbar\omega_{D}\right)^{2}[n]=0,\qquad
n=0,1,2\cdots$ (18)
where $\omega_{D}=v_{F}\sqrt{\frac{2eB}{\hbar}}$ is the Dirac frequency. The
solutions of these equations are of the form
$\displaystyle
E_{1,n}^{\pm}=-\lambda_{R}\pm\sqrt{\left(\hbar\omega_{D}\right)^{2}[n]+\lambda_{R}^{2}}$
(20) $\displaystyle
E_{2,n}^{\pm}=\lambda_{R}\pm\sqrt{\left(\hbar\omega_{D}\right)^{2}[n]+\lambda_{R}^{2}}$
We note that the preceding energies depend on the Rashba coupling parameter
$\lambda_{R}$ and on the $q-$deformation parameter; using equation (5), we
find
$\displaystyle
E_{1,n}^{\pm}=\lambda_{R}\left(-1\pm\sqrt{1+\left({\hbar\omega_{D}\over\lambda_{R}}\right)^{2}{\sin(n\eta)\over\sin(\eta)}}\right)$
(21) $\displaystyle
E_{2,n}^{\pm}=\lambda_{R}\left(1\pm\sqrt{1+\left({\hbar\omega_{D}\over\lambda_{R}}\right)^{2}{\sin(n\eta)\over\sin(\eta)}}\right)$
(22)
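As a sanity check, the four levels in (20)-(21) can be compared with the spectrum of the $4\times 4$ Hermitian block that equations (14)-(17) define at a fixed $n$ (our reduction of Hamiltonian (12)); the sketch below uses illustrative units $\hbar\omega_{D}=1$ together with the quoted $\lambda_{R}$ value:

```python
import numpy as np

def q_number(n, eta):
    """q-deformed number [n] for q = e^{i*eta}."""
    return n if eta == 0 else np.sin(n * eta) / np.sin(eta)

def spectrum(n, eta, hw_D=1.0, lam=0.014):
    """Eigenvalues of the 4x4 block of eqs. (14)-(17) at level n."""
    g = hw_D * np.sqrt(q_number(n, eta))   # sqrt(2)*hbar*v_F/l_B * sqrt([n])
    H = np.array([[0, 0, g, 0],
                  [0, 0, 0, g],
                  [g, 0, 0, -2j * lam],
                  [0, g, 2j * lam, 0]])
    return np.sort(np.linalg.eigvalsh(H))

def analytic(n, eta, hw_D=1.0, lam=0.014):
    """Eqs. (20)-(21): +/- branches of both quadratics."""
    r = np.sqrt(hw_D**2 * q_number(n, eta) + lam**2)
    return np.sort([-lam + r, -lam - r, lam + r, lam - r])

assert np.allclose(spectrum(3, 0.2), analytic(3, 0.2))
```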
Figure 1: (Color online) Eigenvalue $E$ versus $n$ for different values of
the $q-$deformation parameter, $\eta=0,0.2,0.4,0.6$, for the values of
the magnetic field and Rashba coupling parameter $B\sim 10^{-3}T$ and
$\lambda_{R}=0.014meV$ [23].
In Figure (1), we present the eigenvalues of the $q-$deformed massless Dirac
fermions in graphene. The energy levels show that, when there is no
deformation, the energy is quantized with a parabolic form, symmetric about
the quantization axis $n$; for a given deformation, the parabolic form tends
towards an ellipse, and a second quantization appears through the periodicity
of the ellipses.
Now, if we consider a very small deformation and neglect all terms proportional
to $\eta^{4}$, we have
$\displaystyle E_{1,n}^{\pm}=\epsilon^{\pm}_{1,n}\pm\Delta\epsilon_{n}$
$\displaystyle E_{2,n}^{\pm}=\epsilon^{\pm}_{2,n}\pm\Delta\epsilon_{n}$
where $\epsilon^{\pm}_{1,n}$, $\epsilon^{\pm}_{2,n}$, $\Delta\epsilon_{n}$ and
$\varrho$ are defined by
$\displaystyle\epsilon^{\pm}_{1,n}=\lambda_{R}\left[-1\pm\sqrt{1+\varrho^{2}n}\right]$
(24)
$\displaystyle\epsilon^{\pm}_{2,n}=\lambda_{R}\left[1\pm\sqrt{1+\varrho^{2}n}\right]$
(25)
$\displaystyle\Delta\epsilon_{n}=-\frac{\lambda_{R}\varrho^{2}}{12}\frac{n^{3}}{\sqrt{1+\varrho^{2}n}}\eta^{2}$
(26) $\displaystyle\varrho={\hbar\omega_{D}\over\lambda_{R}}$ (27)
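The quality of the $O(\eta^{2})$ correction (26), which keeps only the dominant $n^{3}$ term of the expanded $q$-number, can be probed numerically; the sketch below (illustrative values $n=5$, $\eta=0.01$, $\varrho=2$, $\lambda_{R}=1$, not from the paper) compares it with the exact level (22):

```python
import math

def exact_E2p(n, eta, rho, lam=1.0):
    """Eq. (22), + branch, with [n] = sin(n*eta)/sin(eta)."""
    qn = math.sin(n * eta) / math.sin(eta)
    return lam * (1 + math.sqrt(1 + rho**2 * qn))

def approx_E2p(n, eta, rho, lam=1.0):
    """Eq. (25), + branch, plus the correction (26)."""
    eps = lam * (1 + math.sqrt(1 + rho**2 * n))
    d = -lam * rho**2 / 12 * n**3 / math.sqrt(1 + rho**2 * n) * eta**2
    return eps + d

n, eta, rho = 5, 0.01, 2.0
undeformed = 1 + math.sqrt(1 + rho**2 * n)      # eq. (25) alone
err0 = abs(exact_E2p(n, eta, rho) - undeformed)
err1 = abs(exact_E2p(n, eta, rho) - approx_E2p(n, eta, rho))
# keeping only the n^3 term already removes most of the deformation shift
assert err1 < err0 / 10
```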
The term $\Delta\epsilon_{n}$ is the correction to the energy in the presence
of deformation; without deformation, i.e. $q\rightarrow 1$
($\eta\rightarrow 0$), the energies reduce to the expressions already
found in [23].
## 3 Thermodynamic quantities
We will study the thermodynamic properties of massless Dirac fermions in
graphene with Rashba coupling in contact with a thermal reservoir at finite
temperature. For simplicity, we assume that only fermions with positive energy
$(E>0)$ constitute the thermodynamic ensemble [23]. We start by evaluating
the partition function
$\mathbb{Z}={\rm Tr}e^{-\beta H}=\sum_{n=0}^{+\infty}\left(e^{-\beta
E_{1,n}^{+}}+e^{-\beta E_{2,n}^{+}}\right)$ (28)
where $\beta=\frac{1}{k_{B}T}$, $k_{B}$ is the Boltzmann constant and $T$ is
the equilibrium temperature. Inserting the approximate energies of section
2.2 into (28), $\mathbb{Z}$ takes the form
$\displaystyle\mathbb{Z}$ $\displaystyle=$
$\displaystyle\sum_{n=0}^{+\infty}\left(e^{-\beta\left(\epsilon^{+}_{1,n}+\Delta\epsilon_{n}\right)}+e^{-\beta\left(\epsilon^{+}_{2,n}+\Delta\epsilon_{n}\right)}\right)$
$\displaystyle=$
$\displaystyle\sum_{n=0}^{+\infty}e^{-\beta\Delta\epsilon_{n}}\left(e^{-\beta\epsilon^{+}_{1,n}}+e^{-\beta\epsilon^{+}_{2,n}}\right)$
Noting that $|\beta\Delta\epsilon_{n}|$ is much smaller than $1$, a
first-order expansion of $e^{-\beta\Delta\epsilon_{n}}$ gives
$\displaystyle\mathbb{Z}$ $\displaystyle\simeq$
$\displaystyle\sum_{n=0}^{+\infty}\left(1-\beta\Delta\epsilon_{n}\right)\left(e^{-\beta\epsilon^{+}_{1,n}}+e^{-\beta\epsilon^{+}_{2,n}}\right)$
(30) $\displaystyle\mathbb{Z}$ $\displaystyle\simeq$
$\displaystyle\mathbb{Z}_{0}+\mathbb{Z}_{1}$
with the partition function of the undeformed system $\mathbb{Z}_{0}$ and the
correction partition function $\mathbb{Z}_{1}$ given by
$\displaystyle\mathbb{Z}_{0}=\sum_{n=0}^{+\infty}\left(e^{-\beta\epsilon^{+}_{1,n}}+e^{-\beta\epsilon^{+}_{2,n}}\right)$
(31)
$\displaystyle\mathbb{Z}_{1}=-\sum_{n=0}^{+\infty}\beta\Delta\epsilon_{n}\left(e^{-\beta\epsilon^{+}_{1,n}}+e^{-\beta\epsilon^{+}_{2,n}}\right)$
The partition function $\mathbb{Z}_{0}$ of the undeformed system was already
calculated in [23] and has the expression
$\mathbb{Z}_{0}=\left[{2\over\varrho^{2}}\left(\tau^{2}-1\right)+1\right]\cosh{1\over\tau},$
(32)
where $\tau={k_{B}T\over\lambda_{R}}$ is the reduced temperature. The
correction $\mathbb{Z}_{1}$ can be evaluated by using the Euler–Maclaurin
formula, starting from
$\mathbb{Z}_{1}=\frac{\varrho^{2}\eta^{2}}{6\tau}\cosh{1\over\tau}\sum_{n=0}^{+\infty}\frac{n^{3}}{\sqrt{1+\varrho^{2}n}}e^{-{1\over\tau}\sqrt{1+\varrho^{2}n}}$
(33)
To evaluate the sum, it is convenient to approximate it using integrals. We
put
$f(x)=\frac{x^{3}}{\sqrt{1+\varrho^{2}x}}e^{-{1\over\tau}\sqrt{1+\varrho^{2}x}}$
(34)
and use the Euler–Maclaurin formula
$\sum_{x=0}^{+\infty}f(x)={1\over
2}f(0)+\int_{0}^{+\infty}f(x)\,dx-\sum_{p=1}^{+\infty}{B_{2p}\over
(2p)!}f^{(2p-1)}(0)$ (35)
where the $B_{2p}$ are the Bernoulli numbers and $f^{(2p-1)}$ is the derivative
of order $(2p-1)$. Up to $p=1$, the values $f(0)$ and $f^{(1)}(0)$ vanish; a
straightforward calculation then gives the final form of $\mathbb{Z}_{1}$,
$\mathbb{Z}_{1}={16\eta^{2}\over\varrho^{6}}\left(15\tau^{6}+15\tau^{5}+6\tau^{4}+\tau^{3}\right)e^{-{1\over\tau}}\cosh{1\over\tau}$
(36)
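The Euler–Maclaurin formula (35) itself can be illustrated on a toy function for which every ingredient is known exactly, e.g. $f(x)=e^{-x}$, whose sum is geometric and whose odd derivatives at $0$ all equal $-1$ (a sketch; the Bernoulli series is truncated at $p=5$):

```python
import math

# left side: sum_{x=0}^inf e^{-x} is a geometric series
lhs = 1 / (1 - math.exp(-1))

B = {2: 1/6, 4: -1/30, 6: 1/42, 8: -1/30, 10: 5/66}  # Bernoulli numbers
integral = 1.0                                       # int_0^inf e^{-x} dx
# f^{(2p-1)}(0) = -1 for every odd derivative of e^{-x}
rhs = 0.5 + integral - sum(B[2*p] / math.factorial(2*p) * (-1)
                           for p in range(1, 6))
assert abs(lhs - rhs) < 1e-6
```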
Finally, the compact form of the $q$-deformed partition function of the
system reads
$\mathbb{Z}=\left({2\over\varrho^{2}}\left(\tau^{2}-1\right)+1+{16\eta^{2}\over\varrho^{6}}\left(15\tau^{6}+15\tau^{5}+6\tau^{4}+\tau^{3}\right)e^{-{1\over\tau}}\right)\cosh{1\over\tau}$
(37)
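The closed form (36) can be checked against a direct numerical evaluation of the sum in (33); the sketch below uses illustrative parameter values ($\tau=1$, $\varrho=0.5$, $\eta=0.1$, not from the paper) and a finite truncation of the sum:

```python
import numpy as np

def Z1_sum(tau, rho, eta, N=20000):
    """Direct evaluation of eq. (33), truncating the sum at N terms."""
    n = np.arange(N, dtype=float)
    s = np.sqrt(1.0 + rho**2 * n)
    series = np.sum(n**3 / s * np.exp(-s / tau))
    return rho**2 * eta**2 / (6 * tau) * np.cosh(1 / tau) * series

def Z1_closed(tau, rho, eta):
    """Closed form of eq. (36), obtained via Euler-Maclaurin."""
    poly = 15 * tau**6 + 15 * tau**5 + 6 * tau**4 + tau**3
    return 16 * eta**2 / rho**6 * poly * np.exp(-1 / tau) * np.cosh(1 / tau)

tau, rho, eta = 1.0, 0.5, 0.1
assert abs(Z1_sum(tau, rho, eta) / Z1_closed(tau, rho, eta) - 1) < 1e-3
```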
Having obtained the partition function of our system, we can now determine
all related thermodynamic quantities. The thermal properties, such as the
Helmholtz free energy $F$, internal energy $U$, heat capacity $C$ and entropy
$S$, follow from the partition function $\mathbb{Z}$ through the following
relations [23]:
$\displaystyle F$ $\displaystyle=$
$\displaystyle-\lambda_{R}\tau\ln\mathbb{Z}$ (38) $\displaystyle U$
$\displaystyle=$
$\displaystyle\lambda_{R}\tau^{2}\frac{\partial\ln\mathbb{Z}}{\partial\tau}$
$\displaystyle\frac{S}{k_{B}}$ $\displaystyle=$
$\displaystyle-{1\over\lambda_{R}}\frac{\partial F}{\partial\tau}$
$\displaystyle\frac{C}{k_{B}}$ $\displaystyle=$
$\displaystyle{1\over\lambda_{R}}\frac{\partial U}{\partial\tau}.$
We now investigate the above thermodynamic functions numerically to
illustrate the behaviour of the system, presenting plots under physically
reasonable conditions and discussing the results.
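As a numerical illustration of the relations (38) (a sketch with illustrative values $\varrho=0.5$, $\eta=0.1$; derivatives are taken by central finite differences rather than analytically), one can verify the high-temperature limits of the heat capacity discussed in section 4:

```python
import math

def lnZ(tau, rho, eta):
    """log of the q-deformed partition function, eq. (37)."""
    z = (2 / rho**2 * (tau**2 - 1) + 1
         + 16 * eta**2 / rho**6
           * (15 * tau**6 + 15 * tau**5 + 6 * tau**4 + tau**3)
           * math.exp(-1 / tau))
    return math.log(z * math.cosh(1 / tau))

def heat_capacity(tau, rho, eta, h=1e-2):
    """C/k_B = 2*tau*(lnZ)' + tau^2*(lnZ)'' via central differences."""
    d1 = (lnZ(tau + h, rho, eta) - lnZ(tau - h, rho, eta)) / (2 * h)
    d2 = (lnZ(tau + h, rho, eta) - 2 * lnZ(tau, rho, eta)
          + lnZ(tau - h, rho, eta)) / h**2
    return 2 * tau * d1 + tau**2 * d2

# high-temperature limits: C -> 6 k_B with deformation, 2 k_B without
assert abs(heat_capacity(50.0, 0.5, 0.1) - 6) < 0.05
assert abs(heat_capacity(50.0, 0.5, 0.0) - 2) < 0.05
```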
## 4 Numerical Results and discussions
To make contact with real graphene, we restrict our study to the low-energy
regime, which may be reached by fixing appropriate values of the Rashba
coupling parameter $\lambda_{R}$ and the external magnetic field $B$; here we
take $B\simeq 10^{-3}T$ and $\lambda_{R}=0.014meV$. The thermodynamic
functions are plotted versus the reduced temperature $\tau$ for the fixed
values $\eta=0,0.2,0.4,0.6,0.8,0.9$.
Figure 2: (Color online) Thermodynamic functions of $q-$deformed Dirac
fermions in graphene with Rashba coupling versus the reduced temperature
$\tau$ for different values of the $q-$deformed parameter
$\eta=0.0,0.2,0.4,0.6,0.8,0.9$, respectively for the values of the magnetic
field and Rashba coupling parameter $B\sim 10^{-3}T$ and
$\lambda_{R}=0.014meV$ [23].
It is clearly seen that a remark common to all four curves is that the deformation parameter $\eta$ does not influence the thermodynamic properties of the system in the low-temperature regime. In figure (2.a), the free energy $F$ decreases gradually with increasing temperature at a given value of $\eta$, and decreases with $\eta$ at a given temperature.

In figure (2.b) we observe that at high temperature our system follows Joule's first law both with and without deformation: when the $\eta$-deformation parameter is nonzero the internal energy is asymptotic to $U=6\lambda_{R}\tau$ in this regime, whereas for $\eta=0$ it is asymptotic to $U=2\lambda_{R}\tau$. In both cases the internal energy at high temperature depends only on the reduced temperature $\tau$, so we conclude that in both cases the translational kinetic energy of the molecules is the only form of energy of the $N$ atoms contained in a volume $V$ of the system.

In figure (2.c) there are two remarks to report. At low temperature, in particular for $0<{S\over k_{B}}<1.7$, the entropy is negative, which can be explained by the reduced disorder of the system [23]; for ${S\over k_{B}}>1.7$, the entropy increases as the $\eta$-parameter increases. From the three upper curves (a,b,c) we deduce that the parameter $\eta$ plays the same role as doping of the graphene: as $\eta$ increases, thermodynamic quantities such as the entropy and the internal energy increase, just as they do when pure graphene is doped with boron ($B$) or nitrogen ($N$) atoms, and vice versa [24]; likewise, the Helmholtz free energy decreases as $\eta$ decreases, in the same way as when the concentration of dopant atoms in graphene decreases.

What is remarkable in figure (2.d) is that without $q$-deformation our system obeys the Dulong-Petit law at high temperature, whereas when the $q$-deformation is introduced the heat capacity passes through a maximum in the low-temperature regime: at this point the temperature changes very little as energy is supplied to the system, because most of the energy is used to excite the carbon atoms from the ground state to the excited state rather than to increase the kinetic energy of the system. Moreover, at high temperature the heat capacity curves coincide and reach the fixed value $C=6k_{B}$, three times greater than in the case of undeformed massless Dirac fermions in graphene, which can be explained by the increase in the degrees of freedom of the system due to the introduction of the $\eta$-deformation parameter.
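The factor of three noted above can be made explicit. Using the definition of the heat capacity recalled at the start of this section and the high-temperature asymptotics of the internal energy quoted above (a consistency check, not an independent derivation), one gets, in units of $k_{B}$:

```latex
% Heat capacity from the asymptotic internal energy, with C/k_B = (1/lambda_R) dU/dtau.
% Assumes the high-temperature asymptotics U ~ 6 lambda_R tau (eta != 0)
% and U ~ 2 lambda_R tau (eta = 0) quoted in the discussion of figure (2.b).
\frac{C}{k_{B}} \;=\; \frac{1}{\lambda_{R}}\,\frac{\partial U}{\partial \tau}
\;\xrightarrow[\;\tau \to \infty\;]{}\;
\begin{cases}
6, & \eta \neq 0 \quad (q\text{-deformed case}),\\[4pt]
2, & \eta = 0 \quad (\text{undeformed case}),
\end{cases}
```

in agreement with the limiting values read off from figure (2.d).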
## 5 Conclusion
In this paper, after a brief overview of the notion of the $q$-deformed harmonic
oscillator, we have studied the thermodynamic properties of Dirac fermions in
graphene within this deformation formalism. We found the eigenvalues of the
considered system via $q$-deformed annihilation and creation operators. It was
shown that the eigenvalues of our system are more general than in the undeformed
case; in particular, we verified that in the limiting case $\eta=0$ the ordinary
results are well recovered. The eigenvalues were used, together with a method
based on the zeta function and the Euler-Maclaurin formula, to determine the
partition function as a function of the $q$-deformation parameter. The
thermodynamic functions, such as the Helmholtz free energy, total energy,
entropy and heat capacity, were then obtained in terms of the $q$-deformation
parameter.
Subsequently, several cases related to the $q$-deformation parameter were
studied. The numerical analysis of the plotted curves allowed us to make
important observations on the influence of the deformation on the thermodynamic
properties of our system. We also found a similarity between the doping
concentration and the $q$-deformation parameter for the graphene system [24].
Finally, it was shown that the Dulong-Petit law is no longer satisfied once the
$q$-deformed harmonic-oscillator notion is introduced: the heat capacity at
high temperature tends to a constant value $C=6k_{B}$, three times greater
than for undeformed Dirac fermions in graphene [23].
## References
* [1] H. W. Kroto, J. R. Heath, S. C. O’Brien, R. F. Curl, and R. E. Smalley, Nature 318 (1985) 162.
* [2] S. Iijima, Nature 354 (1991) 56.
* [3] K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva, and A. A. Firsov, Science 306 (2004) 666.
* [4] S. Morozov, K. Novoselov, M. Katsnelson, F. Schedin, D. Elias, J. Jaszczak, and A. Geim, Phys. Rev. Lett. 100 (2008) 016602.
* [5] K. I. Bolotin, K. J. Sikes, Z. Jiang, M. Klima, G. Fudenberg, J. Hone, P. Kim, and H. L. Stormer, Solid State Commun. 146 (2008) 351.
* [6] K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, M. I. Katsnelson, I. V. Grigorieva, S. V. Dubonos, and A. A. Firsov, Nature 438 (2005) 197.
* [7] E. Castro, K. Novoselov, S. Morozov, N. Peres, J. dos Santos, J. Nilsson, F. Guinea, A. Geim, and A. Neto, Phys. Rev. Lett. 99 (2007) 216802.
* [8] R. R. Nair, P. Blake, A. N. Grigorenko, K. S. Novoselov, T. J. Booth, T. Stauber, N. M. R. Peres, and A. K. Geim, Science 320 (2008) 1308.
* [9] C. Lee, X. Wei, J. W. Kysar, and J. Hone, Science 321 (2008) 385.
* [10] A. A. Balandin, S. Ghosh, W. Bao, I. Calizo, D. Teweldebrhan, F. Miao, and C. N. Lau, Nano Lett. 8 (2008) 902.
* [11] L. C. Biedenharn, The quantum group $SUq(2)$ and a $q-$analogue of the boson operators, J. Phys. A 22 (1989) 873.
* [12] A. J. Macfarlane, On $q-$Analogues of the quantum harmonic oscillator and the quantum group $SUq(2)$, J. Phys. A 22 (1989) 4581.
* [13] Célia M. A. Dantas, I. A. Pedrosa and B. Baseia, Harmonic oscillator with time-dependent mass and frequency, Brazilian Journal of Physics 22 (1992) 33.
* [14] M. C. Baldiotti, R. Fresneda, and D.M. Gitman, Quantization of the damped harmonic oscillator revisited, Phys. Lett. A 375 (2011) 1630.
* [15] C. Tsallis, J. Stat. Phys. 52 (1988) 479.
* [16] S. Abe, C. Beck and E. G. D. Cohen, Phys.Rev.E 76 (2007) 031102.
* [17] C. Beck, Europhys. Lett. 57 (2002) 329.
* [18] G. Wilk and Z. Wlodarczyk, Phys. Rev. Lett. 84 (2002) 2770.
* [19] C. Beck and E. G. D. Cohen, Physica A 322 (2003) 267.
* [20] M. Chaichian, R. Gonzales Felipe, C. Montonen, J. Phys. A: Math. Gen. 26 (1993) 4017.
* [21] Y.J. Ng, J. Phys. A 23 (1990) 1023.
* [22] J.J. Sakurai, Modern Quantum Mechanics, Late-Univ. of California, LA, 1985.
* [23] R. Houça et al., Phys. Scr. 94 (2019) 105707.
* [24] S. Mann et al., J Nano. 3(4) (2018) 555618.
# $H^{s}$ Bounds for the Derivative Nonlinear Schrödinger Equation
Hajer Bahouri CNRS & Sorbonne Université, Laboratoire Jacques-Louis Lions
(LJLL) UMR 7598, Place Jussieu, 75005 Paris, France
[email protected] , Trevor M. Leslie University of Southern
California, 3620 S. Vermont Ave., Los Angeles, CA 90089 [email protected] and
Galina Perelman Laboratoire D’Analyse et de Mathématiques Appliquées UMR
8050, Université Paris-Est Créteil, 61, Avenue Du Général De Gaulle, 94010
Créteil Cedex, France [email protected]
###### Abstract.
We study the derivative nonlinear Schrödinger equation on the real line and
obtain global-in-time bounds on high order Sobolev norms.
This material is based upon work supported by the National Science Foundation
under Grant No. DMS-1928930 while the authors participated in a program hosted
by the Mathematical Sciences Research Institute in Berkeley, California,
during the Spring 2021 semester.
## 1\. Introduction
We consider the Cauchy problem for the derivative nonlinear Schrödinger
equation (DNLS) on the real line $\mathbb{R}$:
(1)
$\left\\{\begin{array}[]{rcl}i\partial_{t}u+\partial_{x}^{2}u&=&-i\partial_{x}(|u|^{2}u),\\\
u\big{|}_{t=0}&=&u_{0}\in
H^{s}(\mathbb{R}),\;s\geq\frac{1}{2}.\end{array}\right.$
We remark right away that the DNLS is $L^{2}$ critical, as it is invariant
under the scaling
(2) $u(t,x)\mapsto u_{\mu}(t,x):=\sqrt{\mu}u(\mu^{2}t,\mu x),\qquad\mu>0.$
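As an aside, the $L^{2}$-criticality claim can be sanity-checked numerically: at fixed time, the scaling (2) leaves the $L^{2}$ norm unchanged. The Gaussian profile below is an arbitrary test function, not taken from the text:

```python
import numpy as np

# Arbitrary smooth, decaying test profile u(0, x); any Schwartz function works.
x = np.linspace(-50.0, 50.0, 200_001)
dx = x[1] - x[0]

def profile(y):
    return np.exp(-y**2) * np.exp(1j * 3.0 * y)

def l2_norm_sq(f):
    """Approximate ||f||_{L^2}^2 by a Riemann sum on the uniform grid."""
    return np.sum(np.abs(f)**2) * dx

base = l2_norm_sq(profile(x))
for mu in (0.5, 2.0, 5.0):
    # u_mu(0, x) = sqrt(mu) * u(0, mu * x), i.e. the scaling (2) at t = 0.
    scaled = l2_norm_sq(np.sqrt(mu) * profile(mu * x))
    assert abs(scaled - base) < 1e-6 * base  # L^2 norm is scale-invariant
```

Any homogeneous Sobolev norm $\dot{H}^{s}$ with $s\neq 0$ would instead pick up a factor $\mu^{2s}$ under the same scaling.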
The DNLS equation was introduced by Mio-Ogino-Minami-Takeda and Mjølhus [20,
21] as a model for studying magnetohydrodynamics, and it has received a great
deal of attention from the mathematics community after being shown to be
completely integrable by Kaup-Newell [13]. The infinitely many conserved
quantities admitted by the DNLS equation play an important role in the
wellposedness theory. The first three—the mass, momentum, and energy—are as
follows.
(3) $\displaystyle M(u)$
$\displaystyle:=\int_{\mathbb{R}}|u|^{2}\,\mathrm{d}x,$ (4) $\displaystyle
P(u)$
$\displaystyle:=\operatorname{Im}\int_{\mathbb{R}}\overline{u}u_{x}\,\mathrm{d}x+\frac{1}{2}\int_{\mathbb{R}}|u|^{4}\,\mathrm{d}x,$
(5) $\displaystyle E(u)$
$\displaystyle:=\int_{\mathbb{R}}\big{(}|u_{x}|^{2}-\frac{3}{2}\operatorname{Im}(|u|^{2}u\overline{u}_{x})+\frac{1}{2}|u|^{6}\big{)}\,\mathrm{d}x.$
Before stating our main result, let us give a very brief review of what is
known about the wellposedness of the DNLS equation. More detailed overviews
can be found, for example, in the introductions of [2] and [14]. Local
wellposedness in $H^{s}(\mathbb{R})$ for $s\geq\frac{1}{2}$ was proven by
Takaoka [25], improving earlier work [22] by Ozawa. On the other hand, for
$s<\frac{1}{2}$, the uniform continuity of the data-to-solution map fails in
$H^{s}(\mathbb{R})$ [3, 26]. One can, however, close the
$\frac{1}{2}$-derivative gap between the $H^{\frac{1}{2}}$ threshold and the
critical space $L^{2}(\mathbb{R})$ by working in more general Fourier-Lebesgue
spaces, cf. Grünrock [6] and references therein.
A line of results, due to Hayashi-Ozawa [8], Colliander-Keel-Staffilani-
Takaoka-Tao [4], Wu [28], and Guo-Wu [7], establishes global well-posedness of
the DNLS equation in $H^{s}(\mathbb{R})$ for $s\geq\frac{1}{2}$, for initial
data having mass less than $4\pi$. Another line (Pelinovsky-Saalmann-
Shimabukuro [23], Pelinovsky-Shimabukuro [24], and Jenkins-Liu-Perry-Sulem
[12, 11, 10]) uses inverse scattering techniques to establish global
wellposedness under stronger regularity and decay assumptions on the initial
data, but without a smallness requirement on the mass.
The first and third authors proved in [2] that the DNLS equation is globally
well-posed in $H^{s}(\mathbb{R})$ for $s\geq\frac{1}{2}$ and that solutions
generated from $H^{\frac{1}{2}}$ initial data remain bounded in
$H^{\frac{1}{2}}(\mathbb{R})$ for all time. There have also been some recent
works below the aforementioned $s=\frac{1}{2}$ threshold of uniform $H^{s}$
continuity with respect to initial data [3, 26]. Klaus-Schippa [17] gave
$H^{s}$ a priori estimates for $0<s<\frac{1}{2}$ in the case of small mass;
Killip-Ntekoume-Vişan [14] relaxed the smallness assumption to mass less than
$4\pi$ and furthermore proved a global wellposedness result in
$H^{s}(\mathbb{R})$, $\frac{1}{6}\leq s<\frac{1}{2}$, for initial data with
mass less than $4\pi$. Very recently, Harrop-Griffiths, Killip, and Vişan [9]
removed the small mass assumption both from their $H^{s}$ a priori bounds,
$0<s<\frac{1}{2}$, and from their global wellposedness result in
$H^{s}(\mathbb{R})$ with $\frac{1}{6}\leq s<\frac{1}{2}$.
In this paper, we are concerned with the global-in-time boundedness of
solutions to the DNLS equation in $H^{s}$ spaces. We prove that a uniform-in-
time bound in $H^{s}(\mathbb{R})$ holds for all $s\geq\frac{1}{2}$.
###### Theorem 1.1.
Suppose $u$ is a solution to the DNLS equation with initial data $u_{0}\in
H^{s}(\mathbb{R})$, $s\geq\frac{1}{2}$. There exists a finite positive
constant $C=C(s,\|u_{0}\|_{H^{s}(\mathbb{R})})$, such that
$\sup_{t\in\mathbb{R}}\|u(t)\|_{H^{s}(\mathbb{R})}\leq
C(s,\|u_{0}\|_{H^{s}(\mathbb{R})}).$
The main idea is to build on the $H^{s}$ bounds with $0<s<\frac{1}{2}$
from [9] and to take advantage of the complete integrability of the equation.
As in [2], [9], the present work relies heavily on the conservation of the
transmission coefficient for the spectral problem associated to the DNLS
equation. This property has already been used in many other works; of
particular relevance to us are the papers of Gérard [5], Killip-Vişan-Zhang
[16], Killip-Vişan [15], and Koch-Tataru [18], on the cubic NLS and KdV
equations.
Note that by continuity of the flow, and the preservation of the Schwartz
class under the flow, we lose nothing by restricting attention to the Schwartz
class; we will thus work exclusively with Schwartz functions for the remainder
of the manuscript. We will also suppress the time dependence when it does not
play a role.
One can easily prove Theorem 1.1 in the special case $s=1$, using the
conserved quantity $E(u)$. Indeed, simply rearranging (5) yields
$\|u\|_{\dot{H}^{1}(\mathbb{R})}^{2}=E(u)-\frac{1}{2}\|u\|_{L^{6}(\mathbb{R})}^{6}+\frac{3}{2}\int_{\mathbb{R}}\operatorname{Im}(|u|^{2}u\overline{u}_{x})\,\mathrm{d}x.$
Clearly, the last term can be bounded above in absolute value by
$\frac{1}{2}\|u\|_{\dot{H}^{1}(\mathbb{R})}^{2}+C\|u\|_{L^{6}(\mathbb{R})}^{6}$,
whence the desired bound follows by Sobolev embedding.
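Concretely, the pointwise Young inequality $\frac{3}{2}|u|^{3}|u_{x}|\leq\frac{1}{2}|u_{x}|^{2}+\frac{9}{8}|u|^{6}$ gives one admissible constant $C=\frac{9}{8}$ (not claimed to be sharp). A quick numerical check on an arbitrary test function, purely illustrative:

```python
import numpy as np

x = np.linspace(-30.0, 30.0, 600_001)
dx = x[1] - x[0]
# Arbitrary complex-valued test function (not taken from the text).
u = (1.7 * np.exp(-x**2) + 0.4 * np.exp(-(x - 2.0)**2)) * np.exp(2j * x)
ux = np.gradient(u, dx)

# |(3/2) Im \int |u|^2 u conj(u_x) dx|  vs  (1/2)||u_x||^2 + (9/8)||u||_{L^6}^6
cubic = abs(1.5 * np.sum(np.imag(np.abs(u)**2 * u * np.conj(ux))) * dx)
bound = 0.5 * np.sum(np.abs(ux)**2) * dx + (9.0 / 8.0) * np.sum(np.abs(u)**6) * dx
assert cubic <= bound
```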
The higher-order Sobolev norms of integer order can be dealt with similarly,
once we have a formula for the corresponding higher-order conserved
quantities. We will show that for any nonnegative integer $\ell$, one of the
conserved quantities is equal to a constant multiple of
$\|u\|_{\dot{H}^{\ell}(\mathbb{R})}^{2}$, plus terms which are of lower order.
For noninteger $s$, we will use a sort of ‘generalized energy’, comparable to
$\|u\|_{\dot{H}^{s}(\mathbb{R})}^{2}$, that will be defined in terms of the
transmission coefficient of the DNLS spectral problem. We sketch presently the
background necessary to define these objects precisely; for more details, see,
for example, [1, 12, 11, 10, 13, 19, 24, 27].
The DNLS equation can be obtained as a compatibility condition of the
following system [13]:
(6) $\begin{split}\partial_{x}\psi&=\mathcal{U}(\lambda)\psi,\\\
\partial_{t}\psi&=\Upsilon(\lambda)\psi.\end{split}$
Here $\lambda\in\mathbb{C}$ is a spectral parameter, independent of $t$ and
$x$, and $\psi=\psi(t,x,\lambda)$ is $\mathbb{C}^{2}$-valued. The operators
$\mathcal{U}(\lambda)$ and $\Upsilon(\lambda)$ are defined by
(7) $\begin{split}\mathcal{U}(\lambda)&=-i\sigma_{3}(\lambda^{2}+i\lambda
U),\\\
\Upsilon(\lambda)&=-i(2\lambda^{4}-\lambda^{2}|u|^{2})\sigma_{3}+\begin{pmatrix}0&2\lambda^{3}u-\lambda|u|^{2}u+i\lambda
u_{x}\\\
-2\lambda^{3}\overline{u}+\lambda|u|^{2}u+i\lambda\overline{u}_{x}&0\end{pmatrix},\end{split}$
where
$\sigma_{3}=\begin{pmatrix}1&0\\\ 0&-1\end{pmatrix},\qquad
U=\begin{pmatrix}0&u\\\ \overline{u}&0\end{pmatrix}.$
To be more specific about the sense in which the DNLS is a compatibility
condition, we note that $u$ satisfies the DNLS equation if and only if
$\mathcal{U}$ and $\Upsilon$ satisfy the so-called ‘zero-curvature’
representation
$\frac{\partial\mathcal{U}}{\partial t}-\frac{\partial\Upsilon}{\partial
x}+[\mathcal{U},\Upsilon]=0.$
The first equation of (6) can be written in the form
(8) $L_{u}(\lambda)\psi:=(i\sigma_{3}\partial_{x}-\lambda^{2}-i\lambda
U)\psi=0,$
which defines the scattering transform associated to the DNLS. Let us denote
$\Omega_{+}:=\\{\lambda\in\mathbb{C}:\operatorname{Im}\lambda^{2}>0\\}.$
Then given $u\in\mathcal{S}(\mathbb{R})$ and
$\lambda\in\overline{\Omega}_{+}$, there are unique solutions to (8) (the
“Jost solutions”) exhibiting the following behavior at $\pm\infty$:
(9)
$\begin{split}\psi_{1}^{-}(x,\lambda)=e^{-i\lambda^{2}x}\left[\begin{pmatrix}1\\\
0\end{pmatrix}+o(1)\right],&\qquad\text{ as }x\to-\infty,\\\
\psi_{2}^{+}(x,\lambda)=e^{i\lambda^{2}x}\left[\begin{pmatrix}0\\\
1\end{pmatrix}+o(1)\right],&\qquad\text{ as }x\to+\infty.\end{split}$
Finally, we denote by $a_{u}(\lambda)$ the Wronskian of the Jost solutions
defined above (the transmission coefficient mentioned earlier is the inverse
of $a_{u}(\lambda)$):
(10) $a_{u}(\lambda)=\det(\psi_{1}^{-}(x,\lambda),\psi_{2}^{+}(x,\lambda)).$
Using the second equation in (6), it can be shown that $a_{u}(\lambda)$ is
time-independent if $u$ is a solution of (1). Furthermore, $a_{u}$ is a
holomorphic function of $\lambda$ in $\Omega_{+}$, and one may determine the
behavior of $a_{u}$ at infinity by transforming (8) into a Zakharov-Shabat
spectral problem, linear with respect to the spectral parameter, cf. [13],
[23]. The equivalence between the two problems allows us to write
(11)
$\lim_{|\lambda|\to\infty,\,\lambda\in\overline{\Omega}_{+}}a_{u}(\lambda)=e^{-\frac{i}{2}\|u\|_{L^{2}(\mathbb{R})}^{2}}.$
For fixed $u$, we can thus define the logarithm so that
(12) $\lim_{|\lambda|\to\infty,\lambda\in\overline{\Omega}_{+}}\ln
a_{u}(\lambda)=-\frac{i}{2}\|u\|_{L^{2}(\mathbb{R})}^{2}.$
Moreover, $\ln a_{u}(\lambda)$ admits an asymptotic expansion of the following
form:
(13) $\ln
a_{u}(\lambda)=\sum_{j=0}^{\infty}\frac{E_{j}(u)}{\lambda^{2j}}\qquad\text{ as
}|\lambda|\to\infty,\;\lambda\in\overline{\Omega}_{+}.$
Since $a_{u}(\lambda)$ is time-independent, the quantities $E_{j}(u)$ are
conservation laws. They are all polynomial in $u$, $\overline{u}$, and their
derivatives. Furthermore, the $E_{j}(u)$’s inherit scaling properties from
$a_{u}(\lambda)$. That is, for $\mu>0$, the fact that
$a_{u_{\mu}}(\lambda)=a_{u}(\frac{\lambda}{\sqrt{\mu}})$ implies that
$E_{j}(u_{\mu})=\mu^{j}E_{j}(u)$, for each $j\in\mathbb{N}$. The first several
of the $E_{j}(u)$’s are (up to multiplicative constants) the conserved
quantities (3)–(5) mentioned earlier:
$E_{0}(u)=-\frac{i}{2}\|u\|_{L^{2}(\mathbb{R})}^{2}=-\frac{i}{2}M(u),\qquad
E_{1}(u)=\frac{i}{4}P(u),\qquad E_{2}(u)=-\frac{i}{8}E(u).$
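The scaling law $E_{j}(u_{\mu})=\mu^{j}E_{j}(u)$ can be illustrated for $j=0,1$ via the mass (3) and momentum (4); the profile below is an arbitrary test function:

```python
import numpy as np

x = np.linspace(-40.0, 40.0, 400_001)
dx = x[1] - x[0]

def mass_momentum(u):
    """M(u) and P(u) from (3)-(4), approximated by Riemann sums."""
    ux = np.gradient(u, dx)
    M = np.sum(np.abs(u)**2) * dx
    P = np.sum(np.imag(np.conj(u) * ux)) * dx + 0.5 * np.sum(np.abs(u)**4) * dx
    return M, P

profile = lambda y: np.exp(-y**2) * np.exp(1j * y)
M1, P1 = mass_momentum(profile(x))
mu = 2.0
M2, P2 = mass_momentum(np.sqrt(mu) * profile(mu * x))  # u_mu at t = 0

assert abs(M2 - M1) < 1e-4 * abs(M1)       # E_0 ∝ M scales as mu^0
assert abs(P2 - mu * P1) < 1e-4 * abs(P1)  # E_1 ∝ P scales as mu^1
```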
For each $\ell\in\mathbb{N}^{*}$, the quantity $E_{2\ell}(u)$ can be used to
control $\|u\|_{\dot{H}^{\ell}(\mathbb{R})}^{2}$. Let us define, for $\rho$
positive sufficiently large and $L\in\mathbb{N}$,
(14) $\varphi_{L}(u,\rho)=\operatorname{Im}\left[\ln
a_{u}(\sqrt{i\rho})-\sum_{j=0}^{2L+1}\frac{E_{j}(u)}{(i\rho)^{j}}\right].$
If $u$ is a solution of the DNLS equation, then $\varphi_{L}(u,\rho)$ is time-
independent, being a sum of time-independent quantities.
In order to establish bounds on the $H^{s}$ norm of $u$, for
$s\geq\frac{1}{2}$, we will show that
$\int_{R}^{\infty}\rho^{2s-1}\varphi_{[s]}(u,\rho)\mathrm{d}\rho$ with $R>0$
large enough controls the $\dot{H}^{s}$ seminorm of $u$, in a sense to be made
precise later. Here and below, we use $[s]$ to denote the integer part of a
real number $s$.
Our proof of Theorem 1.1 relies on a good understanding of the structure of
the remainder associated to the expansion (13). Note that when
$\lambda^{2}=i\rho$, the imaginary part of this remainder (which is what we
really use) is simply $\varphi_{L}(u,\rho)$. In Section 2, we will introduce a
determinant characterization of $a_{u}(\lambda)$; we use this characterization
to formulate a technical statement (Lemma 2.1 below) on the size of the
remainder. Assuming the result of Lemma 2.1, we will prove Theorem 1.1 at the
end of Section 2. Then, in Section 3, we will prove our technical Lemma,
completing the circle of ideas. Most of the work is contained in this last
section.
Before moving on, let us establish a few notational conventions that we wish
to add to the ones introduced above. First of all, we use the following
normalization for the Fourier transform:
$\widehat{f}(\zeta)=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}e^{ix\zeta}f(x)\,\mathrm{d}x.$
The symbol $\mathbb{N}$ will denote the nonnegative integers, and
$\mathbb{N}^{*}=\mathbb{N}\backslash\\{0\\}$. We will use $\|\cdot\|_{2}$ to
denote the Hilbert-Schmidt norm, and $\|\cdot\|$ will denote the operator norm
on $L^{2}(\mathbb{R})$. We will also use the following shorthand for
derivatives:
$D=-i\partial_{x},\qquad\mathcal{L}_{0}=i\sigma_{3}\partial_{x}.$
Whenever $2\leq p<\infty$, we will use $s^{*}(p)$ to denote the Sobolev
exponent $s^{*}(p)=\frac{1}{2}-\frac{1}{p}$ such that the embedding
$H^{s^{*}(p)}(\mathbb{R})\hookrightarrow L^{p}(\mathbb{R})$ holds.
Finally, we set notation for the following subset of $\Omega_{+}$:
$\Gamma_{\delta}=\\{\lambda\in\Omega_{+}:\delta<\arg(\lambda^{2})<\pi-\delta\\}.$
This notation will be useful in some of the intermediate steps we use to prove
Theorem 1.1, as our estimates will frequently depend on
$\frac{|\lambda|^{2}}{\operatorname{Im}\lambda^{2}}$ (which is $\leq
C(\delta)$ on $\Gamma_{\delta}$). However, the value of $\delta>0$ will be
inconsequential for our final steps, where we will take $\lambda^{2}$ to be
pure imaginary. Therefore, for simplicity of presentation, we will fix
$\delta>0$ once and for all and suppress dependence on $\delta$ in all bounds
below.
## 2\. Proof of the Main Result
### 2.1. The Determinant Characterization of $a_{u}(\lambda)$
An important property of $a_{u}(\lambda)$ is the fact that it can be realized
as a perturbation determinant:
(15) $a_{u}(\lambda)^{2}=\det(I-T_{u}(\lambda)^{2}),$
where
$T_{u}(\lambda)=i\lambda(\mathcal{L}_{0}-\lambda^{2})^{-1}U,\qquad\lambda\in\Omega_{+}.$
The operator $T_{u}(\lambda)$ is Hilbert-Schmidt, with
(16)
$\|T_{u}(\lambda)\|_{2}^{2}=\frac{|\lambda|^{2}}{\operatorname{Im}(\lambda^{2})}\|u\|_{L^{2}(\mathbb{R})}^{2}.$
As a consequence of (15), we may write (this series expansion of $\ln
a_{u}(\lambda)$ is consistent with the definition (12)):
(17) $\ln
a_{u}(\lambda)=-\sum_{k=1}^{\infty}\frac{\operatorname{Tr}(T_{u}(\lambda)^{2k})}{2k},\qquad\text{
if }\|T_{u}(\lambda)\|<1.$
This series will converge whenever $\lambda\in\Gamma_{\delta}$ has large
enough modulus; indeed, using the explicit kernel of
$(\mathcal{L}_{0}-\lambda^{2})^{-1}$, it can easily be shown that for any
$p>2$, we have
(18)
$\|T_{u}(\lambda)\|\lesssim\frac{|\lambda|\|u\|_{L^{p}(\mathbb{R})}}{\operatorname{Im}(\lambda^{2})^{1-\frac{1}{p}}},\qquad\lambda\in\Omega_{+},\;u\in
L^{p}(\mathbb{R}).$
In particular, we can find $R_{0}=R_{0}(\|u\|_{H^{\frac{1}{3}}(\mathbb{R})})$
such that $\|T_{u}(\lambda)\|\leq\frac{1}{2}$ for all
$\lambda\in\Gamma_{\delta}$ satisfying $|\lambda|^{2}\geq R_{0}$. We will fix
the notation $R_{0}$ for use below.
As we shall see later, each term of the series (17) can be expanded in powers
of $\lambda^{-2}$:
(19)
$-\frac{\operatorname{Tr}(T_{u}(\lambda)^{2k})}{2k}=\sum_{j=k-1}^{\infty}\frac{\mu_{j,k}(u)}{\lambda^{2j}}.$
According to (13) and (17), the $E_{j}(u)$’s should then satisfy
(20) $E_{j}(u)=\sum_{k=1}^{j+1}\mu_{j,k}(u).$
We will use the following notation for the remainders after truncation of the
expansions (17) and (19):
(21) $\ln
a_{u}(\lambda)=-\sum_{k=1}^{2L+2}\frac{\operatorname{Tr}(T_{u}(\lambda)^{2k})}{2k}+\tau_{L}^{*}(u,\lambda),\qquad
L\in\mathbb{N};$ (22)
$-\frac{\operatorname{Tr}(T_{u}(\lambda)^{2k})}{2k}=\sum_{j=k-1}^{2L+1}\frac{\mu_{j,k}(u)}{\lambda^{2j}}+\tau^{k}_{L}(u,\lambda),\qquad
k\in\\{1,\ldots,2L+2\\},\;L\in\mathbb{N}.$
The primary difficulty of the proof of Theorem 1.1—and indeed, the subject of
Lemma 2.1—is the understanding of the size and structure of the remainder
terms $\tau^{k}_{L}(u,\lambda)$, and to a lesser extent, the $\mu_{j,k}(u)$’s.
On the other hand, for $\lambda\in\Gamma_{\delta}$ with large enough modulus,
it is easy to bound the $\tau_{L}^{*}(u,\lambda)$’s. For example, if
$\|T_{u}(\lambda)\|\leq\frac{1}{2}$, then
(23) $\begin{split}|\tau_{L}^{*}(u,\lambda)|&=\bigg{|}\ln
a_{u}(\lambda)+\sum_{k=1}^{2L+2}\frac{\operatorname{Tr}T_{u}^{2k}(\lambda)}{2k}\bigg{|}\leq\sum_{k=2L+3}^{\infty}\|T_{u}(\lambda)\|^{2k-2}\|T_{u}(\lambda)\|_{2}^{2}\\\
&\lesssim\|T_{u}(\lambda)\|^{4L+4}\|T_{u}(\lambda)\|_{2}^{2}\lesssim\frac{\|u\|_{H^{s^{*}(p)}(\mathbb{R})}^{4L+4}\|u\|_{L^{2}(\mathbb{R})}^{2}}{|\lambda|^{(4L+4)(1-\frac{2}{p})}},\quad
2<p<\infty,\;s^{*}(p)=\frac{1}{2}-\frac{1}{p}.\end{split}$
The following table summarizes the various relationships among the quantities
introduced above and will be helpful to keep track of the numerology. More
precise information about the $\mu_{j,k}(u)$’s and $\tau_{L}^{k}(u,\lambda)$’s
will be provided below.
$\begin{array}[]{rcccccccccccccc}&&-\dfrac{\operatorname{Tr}T_{u}^{2}(\lambda)}{2}&&-\dfrac{\operatorname{Tr}T_{u}^{4}(\lambda)}{4}&&-\dfrac{\operatorname{Tr}T_{u}^{6}(\lambda)}{6}&&\cdots&&-\dfrac{\operatorname{Tr}T_{u}^{4L+2}(\lambda)}{4L+2}&&-\dfrac{\operatorname{Tr}T_{u}^{4L+4}(\lambda)}{4L+4}&&\\\
\cline{3-14}\cr\ln
a_{u}(\lambda)=&&\mu_{0,1}(u)&&&&&&&&&&&\vline&E_{0}(u)\\\\[2.84544pt]
&+&\dfrac{\mu_{1,1}(u)}{\lambda^{2}}&+&\dfrac{\mu_{1,2}(u)}{\lambda^{2}}&&&&&&&&&\vline&\dfrac{E_{1}(u)}{\lambda^{2}}\\\
&+&\dfrac{\mu_{2,1}(u)}{\lambda^{4}}&+&\dfrac{\mu_{2,2}(u)}{\lambda^{4}}&+&\dfrac{\mu_{2,3}(u)}{\lambda^{4}}&&&&&&&\vline&\dfrac{E_{2}(u)}{\lambda^{4}}\\\
&&\vdots&&\vdots&&\vdots&&\ddots&&&&&\vline&\vdots\\\
&+&\dfrac{\mu_{2L,1}(u)}{\lambda^{4L}}&+&\dfrac{\mu_{2L,2}(u)}{\lambda^{4L}}&+&\dfrac{\mu_{2L,3}(u)}{\lambda^{4L}}&+&\cdots&+&\dfrac{\mu_{2L,2L+1}(u)}{\lambda^{4L}}&&&\vline&\dfrac{E_{2L}(u)}{\lambda^{4L}}\\\
&+&\dfrac{\mu_{2L+1,1}(u)}{\lambda^{4L+2}}&+&\dfrac{\mu_{2L+1,2}(u)}{\lambda^{4L+2}}&+&\dfrac{\mu_{2L+1,3}(u)}{\lambda^{4L+2}}&+&\cdots&+&\dfrac{\mu_{2L+1,2L+1}(u)}{\lambda^{4L+2}}&+&\dfrac{\mu_{2L+1,2L+2}(u)}{\lambda^{4L+2}}&\vline&\dfrac{E_{2L+1}(u)}{\lambda^{4L+2}}\\\\[8.5359pt]
\cline{14-15}\cr&+&\tau_{L}^{1}(u,\lambda)&+&\tau_{L}^{2}(u,\lambda)&+&\tau_{L}^{3}(u,\lambda)&+&\cdots&+&\tau_{L}^{2L+1}(u,\lambda)&+&\tau_{L}^{2L+2}(u,\lambda)&+&\tau_{L}^{*}(u,\lambda)\end{array}$
### 2.2. Structure of the Traces
In this section, we record all the information about the traces that we need
in order to prove our main result. We deal first with the easy case of
$\operatorname{Tr}T_{u}(\lambda)^{2}$, about which we need more explicit
information. A straightforward computation gives us
(24)
$\operatorname{Tr}T_{u}^{2}(\lambda)=2i\lambda^{2}\int_{\mathbb{R}}\frac{|\widehat{u}(\zeta)|^{2}}{\zeta+2\lambda^{2}}\mathrm{d}\zeta.$
We determine the expansion of $\operatorname{Tr}T_{u}^{2}(\lambda)$ by simply
substituting into (24) the identity
$\frac{2\lambda^{2}}{\zeta+2\lambda^{2}}=\sum_{j=0}^{2L+1}\left(-\frac{\zeta}{2\lambda^{2}}\right)^{j}+\frac{\zeta}{\zeta+2\lambda^{2}}\left(\frac{\zeta}{2\lambda^{2}}\right)^{2L+1},\qquad
L\in\mathbb{N},$
to obtain
(25)
$-\frac{\operatorname{Tr}T_{u}^{2}(\lambda)}{2}=\sum_{j=0}^{2L+1}\frac{1}{\lambda^{2j}}\cdot\underbrace{\frac{i}{(-2)^{j+1}}\int_{\mathbb{R}}\zeta^{j}|\widehat{u}(\zeta)|^{2}\mathrm{d}\zeta}_{=:\mu_{j,1}(u)}-\underbrace{\frac{i}{4^{L+1}\lambda^{4L+2}}\int_{\mathbb{R}}\frac{\zeta^{2L+2}|\widehat{u}(\zeta)|^{2}}{\zeta+2\lambda^{2}}\mathrm{d}\zeta}_{=:\tau_{L}^{1}(u,\lambda)},\qquad
L\in\mathbb{N}.$
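The finite geometric-series identity used in this substitution is purely algebraic and can be checked directly; the parameter values below are arbitrary:

```python
# Verify: 2*lam2/(zeta + 2*lam2)
#       = sum_{j=0}^{2L+1} (-zeta/(2*lam2))**j
#       + (zeta/(zeta + 2*lam2)) * (zeta/(2*lam2))**(2*L + 1)
lam2 = 1.5 + 2.0j   # stands in for lambda^2, an arbitrary point of Omega_+
zeta = 0.7
L = 3

lhs = 2 * lam2 / (zeta + 2 * lam2)
series = sum((-zeta / (2 * lam2))**j for j in range(2 * L + 2))
remainder = (zeta / (zeta + 2 * lam2)) * (zeta / (2 * lam2))**(2 * L + 1)
assert abs(lhs - (series + remainder)) < 1e-12
```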
Now we state our main Lemma, which describes the structure of the other
$\mu_{j,k}(u)$’s and $\tau_{L}^{k}(u,\lambda)$’s.
###### Lemma 2.1.
For any $k\in\mathbb{N}^{*}$, $L\in\mathbb{N}$, the traces
$\operatorname{Tr}T_{u}^{2k}(\lambda)$ admit the decomposition (22). The
$\mu_{j,k}(u)$’s and $\tau_{L}^{k}(u,\lambda)$’s satisfy the properties below,
where for any $n\in\mathbb{N}$ we denote $\sigma(n)=\max\\{n,\frac{1}{3}\\}$.
* •
Each $\mu_{j,k}(u)$ is a homogeneous polynomial of degree $2k$ in $u$,
$\overline{u}$, and their derivatives; it is homogeneous with respect to the
natural scaling. We have
(26)
$\begin{split}|\mu_{2\ell,2}(u)|&\lesssim\|u\|_{H^{\sigma(\ell-1)}(\mathbb{R})}^{3}\|u\|_{H^{\ell}(\mathbb{R})},\qquad\ell\in\mathbb{N}^{*},\\\
|\mu_{2\ell,k}(u)|&\lesssim\|u\|_{H^{\sigma(\ell-1)}(\mathbb{R})}^{2k},\qquad\qquad\quad\;\;\ell\in\mathbb{N}^{*},\;k\in\\{3,\ldots,2\ell+1\\},\\\
|\mu_{2\ell+1,k}(u)|&\lesssim\|u\|_{H^{\ell}(\mathbb{R})}^{2k},\qquad\qquad\qquad\quad\;\ell\in\mathbb{N}^{*},\;k\in\\{2,\ldots,2\ell+2\\}.\end{split}$
* •
For $|\lambda|^{2}>R_{0}$, $\lambda\in\Gamma_{\delta}$, we have the following
bounds:
(27) $\displaystyle|\tau_{L}^{2}(u,\lambda)|$
$\displaystyle\lesssim_{\alpha}\frac{\|u\|_{H^{\sigma(L)}(\mathbb{R})}^{3}\|u\|_{H^{L+\alpha}(\mathbb{R})}}{|\lambda|^{4L+2+2\alpha}},$
$\displaystyle L\in\mathbb{N},\;0\leq\alpha<1;$ (28)
$\displaystyle|\tau_{L}^{k}(u,\lambda)|$
$\displaystyle\lesssim\frac{\|u\|_{H^{L}(\mathbb{R})}^{2k}}{|\lambda|^{4L+4}},$
$\displaystyle L\in\mathbb{N}^{*},\;k\in\\{3,\ldots,2L+2\\}.$
We postpone the proof of the Lemma until Section 3.
### 2.3. Proof of Theorem 1.1
In this section, we will prove Theorem 1.1, assuming the result of Lemma 2.1.
For $s\in\mathbb{N}^{*}$, the conclusion follows easily from Lemma 2.1,
together with (20), (25), and an induction argument; we provide the details
presently. Actually, the case $s=1$ was already proved in the Introduction.
Therefore, let us turn to our inductive hypothesis. For $k=1,\ldots,\ell-1$,
we assume that the following bound holds.
(29) $\sup_{t\in\mathbb{R}}\|u(t)\|_{H^{k}(\mathbb{R})}\leq
C(k,\|u_{0}\|_{H^{k}(\mathbb{R})}).$
We will prove that the same bound holds with $k=\ell\geq 2$.
First of all, for any integer $\ell\geq 2$, and any time $t$, we have
$\displaystyle\|u(t)\|_{\dot{H}^{\ell}(\mathbb{R})}^{2}$
$\displaystyle=C(\ell)\mu_{2\ell,1}(u(t))$ $\displaystyle\text{ by
}\eqref{e:trtu2expanded}$
$\displaystyle=C(\ell)\left[E_{2\ell}(u(t))-\sum_{k=2}^{2\ell+1}\mu_{2\ell,k}(u(t))\right]$
$\displaystyle\text{ by }\eqref{e:Ejmu}$ $\displaystyle\leq
C(\ell)E_{2\ell}(u_{0})+\frac{1}{2}\|u(t)\|_{\dot{H}^{\ell}(\mathbb{R})}^{2}+C(\ell,\|u_{0}\|_{H^{\ell-1}(\mathbb{R})}).$
To pass to the last line, we used time-independence of $E_{2\ell}(u(t))$, the
bounds (26), and our inductive hypothesis (29) (with $k=\ell-1$). Finally,
using that
$E_{2\ell}(u_{0})=\sum_{k=1}^{2\ell+1}\mu_{2\ell,k}(u_{0})\leq
C(\ell,\|u_{0}\|_{H^{\ell}(\mathbb{R})}),$
we get
$\sup_{t\in\mathbb{R}}\|u(t)\|_{\dot{H}^{\ell}(\mathbb{R})}\leq
C(\ell,\|u_{0}\|_{H^{\ell}(\mathbb{R})}),$
which finishes the induction argument, and thus the proof of Theorem 1.1 for
$s\in\mathbb{N}^{*}$.
It remains to consider the situation where $s\notin\mathbb{N}^{*}$. We start
by recording the characterization of $\varphi_{L}(u,\rho)$ in terms of the
remainders $\tau_{L}^{k}(u,\sqrt{i\rho})$, and we set notation for the
quadratic part of $\varphi_{L}(u,\rho)$; note that the case $L=0$ is
included in the definition.
(30) $\displaystyle\varphi_{L}(u,\rho)$
$\displaystyle=\operatorname{Im}\left[\ln
a_{u}(\sqrt{i\rho})-\sum_{j=0}^{2L+1}\frac{E_{j}(u)}{(i\rho)^{j}}\right]=\operatorname{Im}\left[\sum_{k=1}^{2L+2}\tau_{L}^{k}(u,\sqrt{i\rho})+\tau_{L}^{*}(u,\sqrt{i\rho})\right],$
$\displaystyle L\in\mathbb{N},$ (31) $\displaystyle\varphi_{L,0}(u,\rho)$
$\displaystyle=\operatorname{Im}\tau^{1}_{L}(u,\sqrt{i\rho})=\frac{(-1)^{L}}{2^{2L+1}\rho^{2L}}\int_{\mathbb{R}}\frac{\zeta^{2L+2}|\widehat{u}(\zeta)|^{2}}{\zeta^{2}+4\rho^{2}}\mathrm{d}\zeta,$
$\displaystyle L\in\mathbb{N}.$
The conclusion of Theorem 1.1 for noninteger $s\geq\frac{1}{2}$ will be
deduced from the following two Lemmas.
###### Lemma 2.2.
Suppose $u\in\mathcal{S}(\mathbb{R})$, $s>0$, $s\notin\mathbb{N}^{*}$, and
$R>0$. Then the following comparison holds.
(32)
$\int_{\mathbb{R}_{+}}\rho^{2s-1}|\varphi_{[s],0}(u,\rho)|\mathrm{d}\rho\lesssim_{s}\|u\|_{\dot{H}^{s}(\mathbb{R})}^{2}\lesssim_{s}\int_{R}^{\infty}\rho^{2s-1}|\varphi_{[s],0}(u,\rho)|\mathrm{d}\rho+R^{2(s-[s])}\|u\|_{\dot{H}^{[s]}(\mathbb{R})}^{2}.$
###### Proof.
Let us define the function $f_{\nu}:\mathbb{R}\to\mathbb{R}$, for $0<\nu<1$,
by $f_{\nu}(z)=\frac{|z|^{2\nu-1}}{1+z^{2}}$. Note that $f_{\nu}\in
L^{1}(\mathbb{R})$ for this range of $\nu$.
We make a direct substitution of the formula (31) for
$\varphi_{[s],0}(u,\rho)$ into the left side of (32), then we switch the order
of integration. Continuing the computation yields
$\displaystyle\int_{R}^{\infty}\rho^{2s-1}|\varphi_{[s],0}(u,\rho)|\mathrm{d}\rho$
$\displaystyle=\frac{1}{2^{2[s]+1}}\int_{\mathbb{R}}\zeta^{2[s]+2}|\widehat{u}(\zeta)|^{2}\int_{R}^{\infty}\frac{\rho^{2(s-[s])-1}}{\zeta^{2}+4\rho^{2}}\mathrm{d}\rho\,\mathrm{d}\zeta$
$\displaystyle=\frac{1}{2^{2s+1}}\int_{\mathbb{R}}|\zeta|^{2s}|\widehat{u}(\zeta)|^{2}\int_{\frac{2R}{|\zeta|}}^{\infty}f_{s-[s]}(z)\,\mathrm{d}z\,\mathrm{d}\zeta$
$\displaystyle=\frac{1}{4^{s+1}}\|f_{s-[s]}\|_{L^{1}(\mathbb{R})}\|u\|_{\dot{H}^{s}(\mathbb{R})}^{2}-\frac{1}{2}\int_{\mathbb{R}}\left|\frac{\zeta}{2}\right|^{2s}|\widehat{u}(\zeta)|^{2}\int_{0}^{\frac{2R}{|\zeta|}}f_{s-[s]}(z)\,\mathrm{d}z\,\mathrm{d}\zeta.$
We estimate the second term on the right by means of the trivial replacement
$\frac{1}{1+z^{2}}\leq 1$:
$\displaystyle\frac{1}{2}\int_{\mathbb{R}}\left|\frac{\zeta}{2}\right|^{2s}|\widehat{u}(\zeta)|^{2}\int_{0}^{\frac{2R}{|\zeta|}}f_{s-[s]}(z)\,\mathrm{d}z\,\mathrm{d}\zeta$
$\displaystyle\leq\frac{1}{2}\int_{\mathbb{R}}\left|\frac{\zeta}{2}\right|^{2s}|\widehat{u}(\zeta)|^{2}\int_{0}^{\frac{2R}{|\zeta|}}z^{2(s-[s])-1}\,\mathrm{d}z\,\mathrm{d}\zeta=\frac{R^{2(s-[s])}}{s-[s]}\cdot\frac{\|u\|_{\dot{H}^{[s]}(\mathbb{R})}^{2}}{4^{[s]+1}}.$
The comparison (32) follows. ∎
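The only step above that requires tracking constants is the change of variables $z=2\rho/|\zeta|$ in the inner $\rho$-integral, namely $\int_{R}^{\infty}\frac{\rho^{2\nu-1}}{\zeta^{2}+4\rho^{2}}\,\mathrm{d}\rho=\frac{|\zeta|^{2\nu-2}}{4^{\nu}}\int_{2R/|\zeta|}^{\infty}\frac{z^{2\nu-1}}{1+z^{2}}\,\mathrm{d}z$ with $\nu=s-[s]$. A quick numerical sketch (not part of the proof; the dyadic-panel quadrature and sample parameters are ours) confirms it:

```python
# Numerical check of the substitution z = 2*rho/|zeta| used in the proof of
# Lemma 2.2, for sample values of nu = s - [s], zeta, and R (ours).

def simpson(fun, a, b, n=256):
    h = (b - a) / n
    s = fun(a) + fun(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * fun(a + i * h)
    return s * h / 3

def tail_integral(fun, a, levels=60):
    # int_a^inf of an algebraically decaying integrand, summed over dyadic panels
    return sum(simpson(fun, a * 2.0 ** k, a * 2.0 ** (k + 1)) for k in range(levels))

nu, zeta, R = 0.75, 1.5, 0.5
lhs = tail_integral(lambda r: r ** (2 * nu - 1) / (zeta ** 2 + 4 * r * r), R)
rhs = (abs(zeta) ** (2 * nu - 2) / 4.0 ** nu) * \
      tail_integral(lambda z: z ** (2 * nu - 1) / (1 + z * z), 2 * R / abs(zeta))
assert abs(lhs - rhs) < 1e-6 * lhs, (lhs, rhs)
print("rho-integral substitution verified:", lhs)
```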
###### Lemma 2.3.
Suppose $u\in\mathcal{S}(\mathbb{R})$, $s>0$, $s\notin\mathbb{N}^{*}$.
Denoting $\beta=\max\\{[s],\,\frac{s+[s]+1}{4([s]+1)},\,\frac{1}{3}\\}$, we
have
(33)
$|\varphi_{[s]}(u,\rho)-\varphi_{[s],0}(u,\rho)|\leq\frac{C(s,\|u\|_{H^{\beta}(\mathbb{R})})}{\rho^{s+[s]+1}}(\|u\|_{H^{s}(\mathbb{R})}+1),\quad\forall\rho\geq
R_{0}.$
###### Proof.
Choose $p>2$ to solve $2([s]+1)(1-\frac{2}{p})=s+[s]+1$. (Note that
$s^{*}(p)=\frac{s+[s]+1}{4([s]+1)}$ for this choice of $p$.) Then for
$\rho>R_{0}$, we have
$\displaystyle|\varphi_{[s]}(u,\rho)-\varphi_{[s],0}(u,\rho)|\leq\sum_{k=2}^{2[s]+2}|\tau_{[s]}^{k}(u,\sqrt{i\rho})|+|\tau^{*}_{[s]}(u,\sqrt{i\rho})|$
$\displaystyle\text{ by }\eqref{e:fLdef},\eqref{e:fL0def}$ $\displaystyle\leq
C(s)\bigg{[}\frac{\|u\|_{H^{\beta}(\mathbb{R})}^{3}\|u\|_{H^{s}(\mathbb{R})}}{\rho^{s+[s]+1}}+\sum_{k=3}^{2[s]+2}\frac{\|u\|_{H^{[s]}(\mathbb{R})}^{2k}}{\rho^{2[s]+2}}+\frac{\|u\|_{H^{s^{*}(p)}(\mathbb{R})}^{4[s]+4}\|u\|_{L^{2}(\mathbb{R})}^{2}}{\rho^{(2[s]+2)(1-\frac{2}{p})}}\bigg{]}$
$\displaystyle\text{ by
}\eqref{e:tau2bound},\eqref{e:taubound},\eqref{e:tau*}$
$\displaystyle\leq\frac{C(s,\|u\|_{H^{\beta}(\mathbb{R})})}{\rho^{s+[s]+1}}(\|u\|_{H^{s}(\mathbb{R})}+1).$
In the second line, we understand the sum over $k$ to be empty if $[s]=0$. ∎
The conclusion of Theorem 1.1 for noninteger $s\geq\frac{1}{2}$ follows from
Lemmas 2.2 and 2.3, the time-independence of the quantity
$\varphi_{[s]}(u,\rho)$ for solutions of the DNLS equation, and the bound
(34) $\sup_{t\in\mathbb{R}}\|u(t)\|_{H^{\beta}(\mathbb{R})}\leq
C(\beta,\|u_{0}\|_{H^{\beta}(\mathbb{R})}),$
where $\beta=\max\\{[s],\frac{s+[s]+1}{4([s]+1)},\frac{1}{3}\\}$ is as in the
statement of Lemma 2.3. The bound (34) follows from our induction argument if
$s>1$ and from the result of Harrop-Griffiths, Killip, and Vişan [9] if
$\frac{1}{2}\leq s<1$.
We now give the remaining details of the proof of Theorem 1.1. For any
$t\in\mathbb{R}$, we have
$\displaystyle\|u(t)\|_{\dot{H}^{s}(\mathbb{R})}^{2}$
$\displaystyle\lesssim_{s}\int_{R_{0}}^{\infty}\rho^{2s-1}|\varphi_{[s],0}(u(t),\rho)|\mathrm{d}\rho+R_{0}^{2(s-[s])}\|u(t)\|_{H^{[s]}(\mathbb{R})}^{2}$
$\displaystyle\leq
C(s)\int_{R_{0}}^{\infty}\rho^{2s-1}|\varphi_{[s]}(u(t),\rho)|\mathrm{d}\rho+C(s,R_{0},\|u(t)\|_{H^{\beta}(\mathbb{R})})(\|u(t)\|_{H^{s}(\mathbb{R})}+1)$
$\displaystyle\leq
C(s)\int_{R_{0}}^{\infty}\rho^{2s-1}|\varphi_{[s]}(u_{0},\rho)|\mathrm{d}\rho+C(s,\|u_{0}\|_{H^{s}(\mathbb{R})})(\|u(t)\|_{H^{s}(\mathbb{R})}+1)$
$\displaystyle\leq\frac{1}{2}\|u(t)\|^{2}_{H^{s}(\mathbb{R})}+C(s,\|u_{0}\|_{H^{s}(\mathbb{R})}),$
which establishes the desired conclusion. Note that the first line in the
calculation above is simply the upper bound in Lemma 2.2. To pass from the
first line to the second, we use Lemma 2.3, followed by the lower bound of
Lemma 2.2. We use (34) and the time independence of $\varphi_{[s]}(u(t),\rho)$
to pass to the third line. Finally, we justify the last line by noting that
$\int_{R_{0}}^{\infty}\rho^{2s-1}|\varphi_{[s]}(u_{0},\rho)|\mathrm{d}\rho\lesssim_{s}C(s,\|u_{0}\|_{H^{s}(\mathbb{R})}),$
which follows from an application of Lemma 2.3, followed by the lower bound in
Lemma 2.2.
## 3. Proof of Lemma 2.1
### 3.1. Outline of the Proof
In this section, we expand each $\operatorname{Tr}(T_{u}^{2k}(\lambda))$ in
powers of $\lambda^{-2}$, up to a specified order, and we establish bounds on
the remainders, in order to prove our key Lemma 2.1. In Section 3.2, we
consider the case $L=0$, which is easy to treat explicitly but does not fit
naturally into our argument for the other cases. When $L\geq 1$, we follow the
strategy of [5], deducing the expansions of the traces from the expansion of
the resolvent $L_{u}(\lambda)^{-1}$. The relationship between $T_{u}(\lambda)$
and $L_{u}(\lambda)$ is the following:
(35) $L_{u}(\lambda)=(\mathcal{L}_{0}-\lambda^{2})(I-T_{u}(\lambda)).$
Therefore,
(36)
$L_{u}(\lambda)^{-1}=(I-T_{u}(\lambda))^{-1}(\mathcal{L}_{0}-\lambda^{2})^{-1}=\sum_{n=0}^{\infty}\underbrace{T_{u}(\lambda)^{n}(\mathcal{L}_{0}-\lambda^{2})^{-1}}_{=:\mathcal{R}_{n}},\qquad\|T_{u}(\lambda)\|<1.$
The point is that
(37) $T_{u}^{2k}(\lambda)=i\lambda\mathcal{R}_{2k-1}U.$
Thus, the part of $L_{u}^{-1}(\lambda)$ that is of relevance to us is
$\mathcal{R}_{2k-1}$, i.e., the term in the expansion (36) that is homogeneous
of degree $2k-1$ in $u,\overline{u}$. In particular, we seek an expansion of
$\lambda\mathcal{R}_{2k-1}$ in powers of $\lambda^{-2}$, up to order
$\lambda^{4L+2}$ for a given $L\in\mathbb{N}^{*}$, and a good understanding of
the remainder term.
Our strategy will be to examine the symbol $R(x,\zeta)$ of the
pseudodifferential operator $L_{u}(\lambda)^{-1}$. In Section 3.3, we will
expand the diagonal and antidiagonal parts $R^{d}(x,\zeta)$ and
$R^{a}(x,\zeta)$ of $R(x,\zeta)$ in powers of $\lambda^{-2}$, determining
recursively the form of each term of the expansion. Homogeneity considerations
will then give us the desired expansion of $\lambda\mathcal{R}_{2k-1}$ (and
thus of $\operatorname{Tr}T_{u}^{2k}(\lambda)$) in powers of $\lambda^{-2}$.
In Section 3.4, we identify the $\mu_{j,k}(u)$’s from (22) and separate them
from the remainder term. In Section 3.5 we estimate the remainder term,
finishing the proof of the Lemma. The final Section 3.6 consists of the proof
by induction of a technical result stated in Section 3.3.1, on the form of the
terms of the expansions for $R^{d}$ and $R^{a}$.
### 3.2. Case $L=0$
Let us note first of all that the desired decomposition in the case $L=0$
reads
$\ln
a_{u}(\lambda)=[\underbrace{\mu_{0,1}(u)+\lambda^{-2}\mu_{1,1}(u)+\tau_{0}^{1}(u,\lambda)}_{=-\frac{1}{2}\operatorname{Tr}T_{u}^{2}(\lambda)}]+[\underbrace{\lambda^{-2}\mu_{1,2}(u)+\tau_{0}^{2}(u,\lambda)}_{=-\frac{1}{4}\operatorname{Tr}T_{u}^{4}(\lambda)}]+\tau_{0}^{*}(u,\lambda).$
(See the table in Section 2.1.) The only term which we have not already
understood is $\tau_{0}^{2}(u,\lambda)$; in order to treat it, we decompose
$T_{u}^{4}(\lambda)$ explicitly as follows. A computation (the details of
which are contained, for instance, in [2]) tells us that
$\operatorname{Tr}T_{u}^{4}(\lambda)=i(2\lambda^{2})^{2}\int_{\mathbb{R}}\overline{u}(x)\big{(}(D+2\lambda^{2})^{-1}u(x)\big{)}^{2}(D-2\lambda^{2})^{-1}\overline{u}(x)\,\mathrm{d}x.$
Then, making a few simple manipulations, we can bring the right side of the
equation above into the following form.
$\displaystyle\operatorname{Tr}T_{u}^{4}(\lambda)$
$\displaystyle=\frac{i}{-2\lambda^{2}}\int_{\mathbb{R}}\overline{u}(x)\bigg{[}u(x)-(D+2\lambda^{2})^{-1}Du(x)\bigg{]}^{2}\big{[}\overline{u}(x)-(D-2\lambda^{2})^{-1}D\overline{u}(x)\big{]}\,\mathrm{d}x$
$\displaystyle=-\frac{i}{2\lambda^{2}}\bigg{[}\int_{\mathbb{R}}|u(x)|^{4}\,\mathrm{d}x-\int_{\mathbb{R}}|u|^{2}u(x)(D-2\lambda^{2})^{-1}D\overline{u}(x)\,\mathrm{d}x-2\int_{\mathbb{R}}|u|^{2}\overline{u}(x)(D+2\lambda^{2})^{-1}Du(x)\,\mathrm{d}x$
$\displaystyle\qquad\qquad+2\int_{\mathbb{R}}|u(x)|^{2}(D+2\lambda^{2})^{-1}Du(x)(D-2\lambda^{2})^{-1}D\overline{u}(x)\,\mathrm{d}x+\int_{\mathbb{R}}((D+2\lambda^{2})^{-1}Du(x))^{2}\overline{u}(x)^{2}\,\mathrm{d}x$
$\displaystyle\qquad\qquad-\int_{\mathbb{R}}\overline{u}(x)((D+2\lambda^{2})^{-1}Du(x))^{2}((D-2\lambda^{2})^{-1}D\overline{u}(x))\,\mathrm{d}x\bigg{]}$
$\displaystyle=-\frac{4}{\lambda^{2}}\underbrace{\left[\frac{i}{8}\|u\|_{L^{4}(\mathbb{R})}^{4}\right]}_{=\mu_{1,2}(u)}-4\tau_{0}^{2}(u,\lambda).$
To estimate $\tau_{0}^{2}(u,\lambda)$, we use the following simple Lemma, the
proof of which we omit.
###### Lemma 3.1.
The following estimates hold, for $\lambda\in\Gamma_{\delta}$.
* •
If $0\leq\alpha_{1}\leq\alpha_{2}\leq 1$, then
(38) $\left\|(D\pm
2\lambda^{2})^{-1}Du\right\|_{\dot{H}^{\alpha_{1}}(\mathbb{R})}\lesssim_{\alpha_{2}-\alpha_{1}}\frac{\|u\|_{\dot{H}^{\alpha_{2}}(\mathbb{R})}}{(2\operatorname{Im}(\lambda^{2}))^{\alpha_{2}-\alpha_{1}}},\quad\forall\,u\in
H^{\alpha_{2}}(\mathbb{R}).$
* •
If $2\leq p<\infty$, then
(39) $\left\|(D\pm
2\lambda^{2})^{-1}Du\right\|_{L^{p}(\mathbb{R})}\lesssim_{p}\|u\|_{H^{s^{*}(p)}(\mathbb{R})},\quad\forall\,u\in
H^{s^{*}(p)}(\mathbb{R}).$
We estimate one of the terms defining $\tau_{0}^{2}(u,\lambda)$ explicitly;
the others can be dealt with in an entirely similar way.
$\displaystyle\left|\frac{1}{\lambda^{2}}\int_{\mathbb{R}}\overline{u}(x)((D+2\lambda^{2})^{-1}Du(x))^{2}((D-2\lambda^{2})^{-1}D\overline{u}(x))\,\mathrm{d}x\right|$
$\displaystyle\quad\leq\frac{1}{|\lambda|^{2}}\|u\|_{L^{6}(\mathbb{R})}\|(D+2\lambda^{2})^{-1}Du\|_{L^{6}(\mathbb{R})}^{2}\|(D-2\lambda^{2})^{-1}D\overline{u}\|_{L^{2}(\mathbb{R})}\lesssim_{\alpha}\frac{\|u\|_{H^{\frac{1}{3}}(\mathbb{R})}^{3}\|u\|_{H^{\alpha}(\mathbb{R})}}{|\lambda|^{2+2\alpha}}.$
We conclude that $\tau_{0}^{2}(u,\lambda)$ satisfies the required bound,
finishing the case $L=0$.
### 3.3. Expanding the Resolvent
#### 3.3.1. Formal Expansion of $R^{a}$ and $R^{d}$
As stated above, for $L\geq 1$ we seek an expansion of the symbol of
$L_{u}^{-1}(\lambda)$, in powers of $\lambda^{-2}$. That is, we seek to
understand $R(x,\zeta)$ in the expression
(40) $L_{u}^{-1}(\lambda)f=\frac{1}{\sqrt{2\pi}}\int\mathrm{d}\zeta
e^{ix\zeta}R(x,\zeta)\widehat{f}(\zeta).$
The identity $L_{u}(\lambda)R(x,D)=I$ implies
(41)
$i\sigma_{3}\partial_{x}R(x,\zeta)-(\zeta\sigma_{3}+\lambda^{2})R(x,\zeta)-i\lambda
U(x)R(x,\zeta)=I.$
Introducing the new variable $p=\frac{\zeta}{\lambda^{2}}$, this reads
(42)
$i\sigma_{3}\partial_{x}R(x,\zeta)-\lambda^{2}(p\sigma_{3}+1)R(x,\zeta)-i\lambda
U(x)R(x,\zeta)=I.$
We split $R$ into its diagonal and antidiagonal parts $R^{d}$ and $R^{a}$,
respectively,
$R(x,\zeta)=R^{d}(x,\zeta)+R^{a}(x,\zeta),$
and we also split equation (42) accordingly:
(43)
$i\sigma_{3}\partial_{x}R^{d}(x,\zeta)-\lambda^{2}(p\sigma_{3}+1)R^{d}(x,\zeta)-i\lambda
U(x)R^{a}(x,\zeta)=I;$ (44)
$i\sigma_{3}\partial_{x}R^{a}(x,\zeta)-\lambda^{2}(p\sigma_{3}+1)R^{a}(x,\zeta)-i\lambda
U(x)R^{d}(x,\zeta)=0.$
Setting the notation
$R^{d}(x,\zeta)=\sum_{k\geq 0}\frac{1}{\lambda^{2+2k}}R^{d}_{k}(x,p),\qquad
R^{a}(x,\zeta)=\sum_{k\geq 0}\frac{1}{\lambda^{3+2k}}R^{a}_{k}(x,p),$
we rewrite (43) and (44) in expanded form:
(45) $\displaystyle I$
$\displaystyle=-(p\sigma_{3}+1)R_{0}^{d}+\sum_{k=1}^{\infty}\frac{i\sigma_{3}\partial_{x}R_{k-1}^{d}-(p\sigma_{3}+1)R_{k}^{d}-iUR_{k-1}^{a}}{\lambda^{2k}};$
(46) $\displaystyle 0$
$\displaystyle=-(p\sigma_{3}+1)R_{0}^{a}-iUR_{0}^{d}+\sum_{k=1}^{\infty}\frac{i\sigma_{3}\partial_{x}R_{k-1}^{a}-(p\sigma_{3}+1)R_{k}^{a}-iUR_{k}^{d}}{\lambda^{2k}}.$
We thus obtain the recursive system (47)–(49) below.
(47) $R_{0}^{d}(x,p)=-\frac{p\sigma_{3}-1}{p^{2}-1},\qquad
R_{0}^{a}(x,p)=-\frac{iU}{p^{2}-1},$ (48)
$R_{k}^{d}(x,p)=\frac{1}{p^{2}-1}\big{[}-iUR_{k-1}^{a}(x,p)+i\partial_{x}R_{k-1}^{d}(x,p)\sigma_{3}\big{]}(p\sigma_{3}-1),\qquad\qquad\qquad\qquad
k\geq 1,$ (49)
$\begin{split}R_{k}^{a}(x,p)&=\frac{1}{p^{2}-1}\big{[}iUR_{k}^{d}(x,p)+i\partial_{x}R_{k-1}^{a}(x,p)\sigma_{3}\big{]}(p\sigma_{3}+1)\\\
&=\frac{1}{p^{2}-1}\big{[}U^{2}R_{k-1}^{a}(x,p)-U\partial_{x}R_{k-1}^{d}(x,p)\sigma_{3}+i\partial_{x}R_{k-1}^{a}(x,p)\sigma_{3}(p\sigma_{3}+1)\big{]},\end{split}\quad
k\geq 1.$
Note that we used the formula for $R_{k}^{d}(x,p)$ to pass to the second line
in the formula for $R_{k}^{a}(x,p)$. We also used several times the fact that
$\sigma_{3}A=-A\sigma_{3}$ for any antidiagonal matrix.
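As a consistency check on the recursion (a numerical sketch, not part of the proof), the $k=1$ step can be carried out for arbitrary antidiagonal data: the two lines of (49) agree, and (48) reduces to $R_{1}^{d}=-U^{2}(p\sigma_{3}-1)/(p^{2}-1)^{2}$, which matches the $k=r=1$ case of Lemma 3.2 below.

```python
import random

# Numerical sketch: at k = 1, the two lines of (49) agree for arbitrary
# antidiagonal U and d_x U, and (48) gives R_1^d = -U^2 (p*s3 - 1)/(p^2-1)^2.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scal(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

rng = random.Random(1)
rnd = lambda: complex(rng.uniform(-1, 1), rng.uniform(-1, 1))

I2 = [[1, 0], [0, 1]]
s3 = [[1, 0], [0, -1]]
U = [[0, rnd()], [rnd(), 0]]    # arbitrary antidiagonal U(x) at a point
dU = [[0, rnd()], [rnd(), 0]]   # arbitrary antidiagonal d_x U(x) at that point
p = rnd()                       # generic complex p (p^2 != 1 almost surely)
q = p * p - 1
Pm = add(scal(p, s3), scal(-1, I2))   # p*sigma3 - 1
Pp = add(scal(p, s3), I2)             # p*sigma3 + 1

# (47), together with d_x R_0^d = 0 and d_x R_0^a = -i (d_x U)/(p^2 - 1)
R0a = scal(-1j / q, U)
dR0a = scal(-1j / q, dU)

# (48) at k = 1 (the d_x R_0^d term vanishes)
R1d = scal(1 / q, mul(scal(-1j, mul(U, R0a)), Pm))
assert close(R1d, scal(-1 / (q * q), mul(mul(U, U), Pm)))

# (49) at k = 1: first line vs. second line
line1 = scal(1 / q, mul(add(scal(1j, mul(U, R1d)), scal(1j, mul(dR0a, s3))), Pp))
line2 = scal(1 / q, add(mul(mul(U, U), R0a), mul(scal(1j, mul(dR0a, s3)), Pp)))
assert close(line1, line2)
print("recursion (47)-(49) consistent at k = 1")
```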
We use the computations above to clarify the form of the $R_{k}^{d}$’s and
$R_{k}^{a}$’s; the precise statement is contained in the following Lemma.
###### Lemma 3.2.
The $R_{k}^{d}$’s and $R_{k}^{a}$’s take the following form:
(50) $R_{k}^{d}(x,p)=\sum_{r=1}^{k}R_{k,r}^{d}(x,p),\qquad k\geq 1,$ (51)
$R_{k}^{a}(x,p)=\sum_{r=0}^{k}R_{k,r}^{a}(x,p),\qquad k\geq 0,$
where the entries of the $R_{k,r}^{d}$’s and $R_{k,r}^{a}$’s are homogeneous
polynomials of degrees $2r$ and $2r+1$, respectively, in $u,\overline{u}$, and
their derivatives. More specifically, setting
$Q_{\gamma}=\partial_{x}^{\gamma_{1}}U\cdots\partial_{x}^{\gamma_{n}}U$, for
$\gamma\in\mathbb{N}^{n}$, we have
(52) $\displaystyle R^{d}_{k,r}(x,p)$
$\displaystyle=\frac{1}{(p^{2}-1)^{k+1}}\sum_{\begin{subarray}{c}\gamma\in\mathbb{N}^{2r}\\\
|\gamma|=k-r\end{subarray}}Q_{\gamma}(x)P_{|\gamma|}(p)(p\sigma_{3}-1),$ (53)
$\displaystyle R^{a}_{k,r}(x,p)$
$\displaystyle=\frac{1}{(p^{2}-1)^{k+1}}\sum_{\begin{subarray}{c}\gamma\in\mathbb{N}^{2r+1}\\\
|\gamma|=k-r\end{subarray}}Q_{\gamma}(x)P_{|\gamma|}(p).$
Here and below we use the notation $P_{n}$ to denote any diagonal matrix whose
diagonal entries are polynomials in $p$ having degree at most $n$.
We postpone the proof of this Lemma until Section 3.6, so as not to interrupt
the flow of ideas.
#### 3.3.2. The Truncated Expansion, and a Formula for $\mathcal{R}_{2m-1}$
For a fixed $N\in\mathbb{N}^{*}$, we set the following notation. (Later we
will set $N=2L$.)
(54)
$\begin{split}R^{(N)}(x,p)&=\underbrace{\sum_{k=0}^{N}\frac{R_{k}^{d}(x,p)}{\lambda^{2+2k}}}_{=:R_{d}^{(N)}(x,p)}+\underbrace{\sum_{k=0}^{N-1}\frac{R_{k}^{a}(x,p)}{\lambda^{3+2k}}}_{=:R_{a}^{(N)}(x,p)}\\\
&=\underbrace{\frac{R_{0}^{d}(x,p)}{\lambda^{2}}}_{=:R^{(N)}_{d,0}(x,p)}+\sum_{r=1}^{N}\underbrace{\sum_{k=r}^{N}\frac{R_{k,r}^{d}(x,p)}{\lambda^{2+2k}}}_{=:R^{(N)}_{d,r}(x,p)}+\sum_{r=0}^{N-1}\underbrace{\sum_{k=r}^{N-1}\frac{R_{k,r}^{a}(x,p)}{\lambda^{3+2k}}}_{=:R^{(N)}_{a,r}(x,p)}.\end{split}$
The symbol $R^{(N)}(x,p)$ is a truncated expansion of $R(x,p)$ in inverse
powers of $\lambda$, having diagonal and antidiagonal parts $R^{(N)}_{d}$,
$R^{(N)}_{a}$, respectively. The point of this definition is that, using Lemma
3.2, we know that $R^{(N)}_{d,r}$ is homogeneous of degree $2r$ in $u$,
$\overline{u}$, and their derivatives, while $R^{(N)}_{a,r}$ is homogeneous of
degree $2r+1$ in these quantities. Expanding $R^{(N)}$ according to (54) and
applying the recursive identities (45)–(46), we see that $R^{(N)}(x,p)$
satisfies
$[i\sigma_{3}\partial_{x}-\lambda^{2}(p\sigma_{3}+1)-i\lambda
U(x)]R^{(N)}(x,p)=I+Y^{(N)}(x,p),$
where $Y^{(N)}(x,p)=Y^{(N)}_{d}(x,p)+Y^{(N)}_{a}(x,p)$,
$Y^{(N)}_{d}(x,p)=\frac{1}{\lambda^{2+2N}}i\sigma_{3}(\partial_{x}R_{N}^{d})(x,p),\quad
Y^{(N)}_{a}(x,p)=-\frac{1}{\lambda^{1+2N}}R_{N}^{a}(x,p)(p\sigma_{3}-1).$
This implies
(55)
$L_{u}^{-1}(\lambda)=R^{(N)}(x,\lambda^{-2}D)-L_{u}^{-1}(\lambda)Y^{(N)}(x,\lambda^{-2}D).$
Recall that $\mathcal{R}_{2m-1}$ is the term in the expansion (36) which is
homogeneous of order $2m-1$ in $u$, $\overline{u}$, and their derivatives. On
the other hand, the portion of $R^{(N)}$ which is of this homogeneity is
precisely $R^{(N)}_{a,m-1}$. Combining these considerations with (55), we see
that $\mathcal{R}_{2m-1}$ is the difference between $R^{(N)}_{a,m-1}$ and the
part of $L_{u}^{-1}(\lambda)Y^{(N)}(x,\lambda^{-2}D)$ that is homogeneous of
degree $2m-1$ in $u$, $\overline{u}$, and their derivatives. Using the
expansion (36) to isolate this part, we obtain:
(56)
$\begin{split}\mathcal{R}_{2m-1}&=R_{a,m-1}^{(N)}(x,\lambda^{-2}D)-\frac{1}{\lambda^{2+2N}}\sum_{\begin{subarray}{c}k+r^{\prime}=m-1\\\
k\geq 0,\;1\leq r^{\prime}\leq
N\end{subarray}}T_{u}(\lambda)^{2k+1}(\mathcal{L}_{0}-\lambda^{2})^{-1}(\mathcal{L}_{0}R_{N,r^{\prime}}^{d})(x,\lambda^{-2}D)\\\
&\quad-\frac{1}{\lambda^{1+2N}}\sum_{\begin{subarray}{c}k+r^{\prime}=m-1\\\
k\geq 0,\;0\leq r^{\prime}\leq
N\end{subarray}}T_{u}(\lambda)^{2k}(\mathcal{L}_{0}-\lambda^{2})^{-1}R_{N,r^{\prime}}^{a}(x,\lambda^{-2}D)(\lambda^{-2}\mathcal{L}_{0}+1).\end{split}$
### 3.4. Extracting the $\mu_{j,m}(u)$’s
Combining (56) with (37), (54), and pulling out inverse powers of $\lambda$,
we easily find the following formula for
$\operatorname{Tr}T_{u}^{2m}(\lambda)$ with $m\geq 2$, truncated at $N=2L$.
(57)
$\begin{split}\operatorname{Tr}(T_{u}^{2m}(\lambda))=&\;\sum_{j=m-1}^{2L-1}\frac{1}{\lambda^{2j+2}}\operatorname{Tr}[iUR_{j,m-1}^{a}(x,\lambda^{-2}D)]\\\
&+\sum_{\begin{subarray}{c}k+r=m-1\\\ k\geq 0,\;1\leq r\leq
2L\end{subarray}}\frac{(-1)^{k}}{\lambda^{4L+4+2k}}\operatorname{Tr}\big{[}(U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1})^{2k+2}(\mathcal{L}_{0}R_{2L,r}^{d})(x,\lambda^{-2}D)\big{]}\\\
&+\sum_{\begin{subarray}{c}k+r=m-1\\\ k\geq 0,\;0\leq r\leq
2L\end{subarray}}\frac{i(-1)^{k+1}}{\lambda^{4L+2+2k}}\operatorname{Tr}\big{[}(U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1})^{2k+1}R_{2L,r}^{a}(x,\lambda^{-2}D)(\lambda^{-2}\mathcal{L}_{0}+1)\big{]}.\end{split}$
We will refer to the three sums above as $I$, $II$, and $III$, respectively.
We now identify the coefficients $\mu_{j,m}(u)$ and verify that they satisfy
the properties claimed in Lemma 2.1. The claimed homogeneity properties will
be clear from the formulas that we derive below; we will just need to verify
the bounds (26). The latter are also straightforward to verify but will
require us to use the structure of the $R^{d}_{k,r}$’s and $R^{a}_{k,r}$’s
from (52)–(53).
The first sum has the form
$-2m\sum_{j=m-1}^{2L-1}\frac{\mu_{j,m}(u)}{\lambda^{2j}}$
with
(58)
$\begin{split}\mu_{j,m}(u)&=-\frac{i}{2m\lambda^{2}}\operatorname{Tr}\big{[}UR_{j,m-1}^{a}(x,\lambda^{-2}D)\big{]}\\\
&=-\frac{i}{4m\pi}\sum_{\begin{subarray}{c}\gamma\in\mathbb{N}^{2m-1}\\\
|\gamma|=j-(m-1)\end{subarray}}\operatorname{Tr}\left[\int
U(x)Q_{\gamma}(x)\,\mathrm{d}x\int\frac{P_{|\gamma|}(\tfrac{\zeta}{\lambda^{2}})}{((\tfrac{\zeta}{\lambda^{2}})^{2}-1)^{j+1}}\frac{\mathrm{d}\zeta}{\lambda^{2}}\right].\end{split}$
Note that ‘$\operatorname{Tr}$’ denotes an operator trace in the first line,
whereas it refers to the $2\times 2$ matrix trace in the second and third
lines. We will use the notation ‘$\operatorname{Tr}$’ similarly in what
follows without further comment.
Since $\lambda$ is presumed to lie in $\Gamma_{\delta}$, a comparison of the
degrees in the numerator and denominator ensures that the integrals over
$\zeta$ are finite and their values are independent of $\lambda$. The total
number of derivatives in the $x$-integrals is $j-(m-1)$, and we distribute
these so that the highest order of the derivatives that fall on a single $U$
is as small as possible. We list the bounds on $\mu_{j,m}(u)$ according to $m$
and the parity of $j$. In each case below, $\ell$ is a strictly positive
integer.
* •
If $j=2\ell$ is even and $m=2$, then there are $2\ell-1$ derivatives; thus
$|\mu_{2\ell,2}(u)|\lesssim\|u\|_{H^{\ell}(\mathbb{R})}\|u\|_{H^{\sigma(\ell-1)}(\mathbb{R})}^{3},$
where we recall the notation $\sigma(n)=\max\\{n,\frac{1}{3}\\}$.
* •
If $j=2\ell$ is even and $m\geq 3$, then there are at most $2(\ell-1)$ total
derivatives. This establishes the following bounds:
$\begin{split}|\mu_{2,3}(u)|&\lesssim\|u\|_{H^{\frac{1}{3}}(\mathbb{R})}^{6},\\\
|\mu_{2\ell,m}(u)|&\lesssim\|u\|_{H^{\ell-1}(\mathbb{R})}^{2m},\qquad\ell\geq
2,\;m\in\\{3,\ldots,2\ell+1\\}.\end{split}$
* •
If $j=2\ell+1$ is odd and $m\geq 2$, then there are at most $2\ell$
derivatives, so
$|\mu_{2\ell+1,m}(u)|\lesssim\|u\|_{H^{\ell}(\mathbb{R})}^{2m}$.
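The $\lambda$-independence of the $\zeta$-integrals in (58) can also be observed numerically. The sketch below (not part of the proof) takes the representative choice $j=2$, $P(p)=p^{n}$ and two sample values of $\lambda^{2}$ with positive imaginary part, consistent with the positivity of $\operatorname{Im}(\lambda^{2})$ used in Lemma 3.1; the odd case $n=1$ integrates to zero.

```python
import cmath, math

# Numerical sketch of the lambda-independence of the zeta-integrals in (58),
# for the representative choice j = 2, P(p) = p^n; sample lambda^2 are ours.

def zeta_integral(c, n=2, j=2, nodes=20001):
    # int_R (zeta/c)^n / ((zeta/c)^2 - 1)^(j+1) dzeta/c, via zeta = tan(t);
    # the poles at zeta = +-c stay off the real axis since Im(c) != 0
    a = -math.pi / 2
    h = math.pi / (nodes - 1)
    total = 0 + 0j
    for i in range(1, nodes - 1):     # endpoint contributions vanish for n <= j
        t = a + i * h
        w = 4 if i % 2 else 2
        p = math.tan(t) / c
        total += w * (p ** n / (p * p - 1) ** (j + 1) / c) / math.cos(t) ** 2
    return total * h / 3

c1 = cmath.exp(1j * math.pi / 4)        # lambda^2 = e^{i pi/4}
c2 = 2 * cmath.exp(1j * math.pi / 3)    # lambda^2 = 2 e^{i pi/3}
assert abs(zeta_integral(c1) - zeta_integral(c2)) < 1e-5
assert abs(zeta_integral(c1, n=1)) < 1e-8   # odd integrand: the value is zero
print("zeta-integral independent of lambda:", zeta_integral(c1))
```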
Let us remark that the formula (58) determines $\mu_{j,m}(u)$ for all $m\geq
2$ and all $j\geq m-1$, not just for those $\mu_{j,m}(u)$’s that appear in the
sum $I$. Therefore, the above bounds on the $\mu_{j,m}(u)$’s complete our
proof of the estimates (26). However, in order to determine the remainders
$\tau_{L}^{m}(u,\lambda)$, we still need to extract $\mu_{2L,m}(u)$ and
$\mu_{2L+1,m}(u)$ from the sums $II$ and $III$. To this end, we remove the
parts of $II$ and $III$ which are of order $\lambda^{-4L}$ and
$\lambda^{-4L-2}$; these will correspond to $\mu_{2L,m}(u)$ and
$\mu_{2L+1,m}(u)$, respectively. We deal first with $\mu_{2L,m}(u)$; the only
term expected to be relevant is the $k=0$ term in $III$, namely
(59)
$-\frac{i}{\lambda^{4L+2}}\operatorname{Tr}\big{[}U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}R_{2L,m-1}^{a}(x,\lambda^{-2}D)(\lambda^{-2}\mathcal{L}_{0}+1)\big{]}.$
To extract the part of this expression that is really of order
$\lambda^{-4L}$, we commute the operator
$(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}$ with $R_{2L,m-1}^{a}(x,\lambda^{-2}D)$.
We will have to do something similar several times below, so let us pause to
write down a more general formula. Let $A$ denote an antidiagonal operator
with symbol $A(x,\zeta)$ and similarly let $B$ denote a diagonal operator with
symbol $B(x,\zeta)$. Then a simple application of the product rule gives the
following operator identities.
(60) $\displaystyle A(\mathcal{L}_{0}+\lambda^{2})^{-1}$
$\displaystyle=-(\mathcal{L}_{0}-\lambda^{2})^{-1}A+(\mathcal{L}_{0}-\lambda^{2})^{-1}(\mathcal{L}_{0}A)(\mathcal{L}_{0}+\lambda^{2})^{-1};$
(61) $\displaystyle B(\mathcal{L}_{0}-\lambda^{2})^{-1}$
$\displaystyle=\;\;\,(\mathcal{L}_{0}-\lambda^{2})^{-1}B+(\mathcal{L}_{0}-\lambda^{2})^{-1}(\mathcal{L}_{0}B)(\mathcal{L}_{0}-\lambda^{2})^{-1}.$
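These identities are algebraic consequences of the relations $\mathcal{L}_{0}A+A\mathcal{L}_{0}=(\mathcal{L}_{0}A)$ and $[\mathcal{L}_{0},B]=(\mathcal{L}_{0}B)$. They can also be checked numerically; the sketch below (not the paper's code) assumes $\mathcal{L}_{0}=i\sigma_{3}\partial_{x}$, consistent with (41), and uses band-limited periodic data so that the discrete spectral calculus is exact.

```python
import numpy as np

# Discrete check of the commutation identities (60)-(61), assuming
# L0 = i*sigma3*d/dx. Band-limited data on a periodic grid avoids aliasing,
# so both identities hold to machine precision.
N = 64
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)                  # integer wavenumbers

def deriv(v):                                     # spectral d/dx, exact here
    return np.fft.ifft(1j * k * np.fft.fft(v))

def block(a11, a12, a21, a22):
    return np.block([[a11, a12], [a21, a22]])

Z = np.zeros((N, N), dtype=complex)
Dx = np.fft.ifft(np.diag(1j * k) @ np.fft.fft(np.eye(N), axis=0), axis=0)
L0 = block(1j * Dx, Z, Z, -1j * Dx)               # L0 = i*sigma3*d/dx

u = 0.8 * np.exp(1j * x) + 0.3 * np.exp(-2j * x)  # band-limited sample "u(x)"
A = block(Z, np.diag(u), np.diag(np.conj(u)), Z)  # antidiagonal multiplication
LA = block(Z, np.diag(1j * deriv(u)), np.diag(-1j * deriv(np.conj(u))), Z)

lam2 = 0.7 + 0.9j                                 # sample lambda^2, Im > 0
I = np.eye(2 * N)
Rm = np.linalg.inv(L0 - lam2 * I)                 # (L0 - lambda^2)^{-1}
Rp = np.linalg.inv(L0 + lam2 * I)                 # (L0 + lambda^2)^{-1}

# a random test function with modes |m| <= 10 in each component
rng = np.random.default_rng(0)
c = np.zeros((2, N), dtype=complex)
for m in range(-10, 11):
    c[:, m % N] = rng.normal(size=2) + 1j * rng.normal(size=2)
f = np.concatenate([np.fft.ifft(c[0]), np.fft.ifft(c[1])])

# (60): A (L0+l)^{-1} = -(L0-l)^{-1} A + (L0-l)^{-1} (L0 A) (L0+l)^{-1}
assert np.allclose(A @ (Rp @ f), -Rm @ (A @ f) + Rm @ (LA @ (Rp @ f)), atol=1e-10)

# (61) with a diagonal B = diag(|u|^2, cos x)
b1, b2 = np.abs(u) ** 2, np.cos(x)
B = block(np.diag(b1), Z, Z, np.diag(b2))
LB = block(np.diag(1j * deriv(b1)), Z, Z, np.diag(-1j * deriv(b2)))
assert np.allclose(B @ (Rm @ f), Rm @ (B @ f) + Rm @ (LB @ (Rm @ f)), atol=1e-10)
print("commutation identities (60)-(61) verified on band-limited data")
```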
Using (60) with $A=R_{2L,m-1}^{a}$, the expression (59) becomes
(62)
$\begin{split}\underbrace{\frac{i}{\lambda^{4L+2}}\operatorname{Tr}\big{[}UR_{2L,m-1}^{a}(x,\lambda^{-2}D)\big{]}}_{-2m\mu_{2L,m}\lambda^{-4L}}-\frac{i}{\lambda^{4L+4}}\operatorname{Tr}\big{[}U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}(\mathcal{L}_{0}R_{2L,m-1}^{a})(x,\lambda^{-2}D)\big{]}.\end{split}$
To extract $\mu_{2L+1,m}(u)$, we need to determine the part of $II$ and $III$
that is of order $2L+1$ in $\lambda^{-2}$. There are three quantities we need
to consider:
* •
The second term in (62):
(63)
$-\frac{i}{\lambda^{4L+4}}\operatorname{Tr}\big{[}U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}(\mathcal{L}_{0}R_{2L,m-1}^{a})(x,\lambda^{-2}D)\big{]}$
* •
The $k=0$ term in $II$:
(64)
$\frac{1}{\lambda^{4L+4}}\operatorname{Tr}\big{[}(U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1})^{2}(\mathcal{L}_{0}R_{2L,m-1}^{d})(x,\lambda^{-2}D)\big{]}$
* •
The $k=1$ term in $III$:
(65)
$\frac{i}{\lambda^{4L+4}}\operatorname{Tr}\big{[}(U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1})^{3}R_{2L,m-2}^{a}(x,\lambda^{-2}D)(\lambda^{-2}\mathcal{L}_{0}+1)\big{]}.$
We deal with each of these in turn, denoting their contributions to
$\mu_{2L+1,m}$ by $\mu_{2L+1,m}^{(1)}$, $\mu_{2L+1,m}^{(2)}$, and
$\mu_{2L+1,m}^{(3)}$, respectively. To put (63) in the desired form, we simply
apply (60) again, this time with $A=\mathcal{L}_{0}R_{2L,m-1}^{a}$. The result
is
(66)
$\begin{split}&\frac{i}{\lambda^{4L+4}}\operatorname{Tr}\big{[}U(\mathcal{L}_{0}R_{2L,m-1}^{a})(x,\lambda^{-2}D)(\lambda^{-2}\mathcal{L}_{0}+1)^{-1}\big{]}\\\
&-\frac{i}{\lambda^{4L+6}}\operatorname{Tr}\big{[}U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}(D^{2}R_{2L,m-1}^{a})(x,\lambda^{-2}D)(\lambda^{-2}\mathcal{L}_{0}+1)^{-1}\big{]}.\end{split}$
Thus
$\displaystyle\mu_{2L+1,m}^{(1)}=-\frac{i}{2m\lambda^{2}}\operatorname{Tr}\big{[}U(\mathcal{L}_{0}R_{2L,m-1}^{a})(x,\lambda^{-2}D)(\lambda^{-2}\mathcal{L}_{0}+1)^{-1}\big{]}.$
Next, we look at (64). We perform two commutations, using $A=U$ in (60), then
$B=U^{2}$ in (61) to obtain
(67)
$\begin{split}[U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}]^{2}&=-(\lambda^{-4}D^{2}-1)^{-1}U^{2}\\\
&\qquad+\lambda^{-2}(\lambda^{-2}\mathcal{L}_{0}+1)^{-1}(\mathcal{L}_{0}U)(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}\\\
&\qquad-\lambda^{-2}(\lambda^{-4}D^{2}-1)^{-1}(\mathcal{L}_{0}U^{2})(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}.\end{split}$
Substituting this into (64) yields
(68)
$\begin{split}&-\frac{1}{\lambda^{4L+4}}\operatorname{Tr}\big{[}U^{2}(\mathcal{L}_{0}R_{2L,m-1}^{d})(x,\lambda^{-2}D)(\lambda^{-4}D^{2}-1)^{-1}\big{]}\\\
&+\frac{1}{\lambda^{4L+6}}\operatorname{Tr}\big{[}(\lambda^{-2}\mathcal{L}_{0}+1)^{-1}(\mathcal{L}_{0}U)(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}(\mathcal{L}_{0}R_{2L,m-1}^{d})(x,\lambda^{-2}D)\big{]}\\\
&-\frac{1}{\lambda^{4L+6}}\operatorname{Tr}\big{[}(\lambda^{-4}D^{2}-1)^{-1}(\mathcal{L}_{0}U^{2})(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}(\mathcal{L}_{0}R_{2L,m-1}^{d})(x,\lambda^{-2}D)\big{]}.\end{split}$
We take
$\displaystyle\mu_{2L+1,m}^{(2)}=\frac{1}{2m\lambda^{2}}\operatorname{Tr}\big{[}U^{2}(\mathcal{L}_{0}R_{2L,m-1}^{d})(x,\lambda^{-2}D)(\lambda^{-4}D^{2}-1)^{-1}\big{]}.$
Finally, we look at (65). Proceeding as in (67) but commuting one more time,
we get
$\displaystyle(\lambda^{-2}\mathcal{L}_{0}+1)[U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}]^{3}$
$\displaystyle=(\lambda^{-4}D^{2}-1)^{-1}U^{3}$
$\displaystyle\qquad+\lambda^{-2}(\mathcal{L}_{0}U)(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}$
$\displaystyle\qquad-\lambda^{-2}(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}(\mathcal{L}_{0}U^{2})(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}$
$\displaystyle\qquad-\lambda^{-2}(\lambda^{-4}D^{2}-1)^{-1}(\mathcal{L}_{0}U^{3})(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}.$
Substituting the above into (65) yields
(69)
$\begin{split}&\frac{i}{\lambda^{4L+4}}\operatorname{Tr}\big{[}U^{3}R_{2L,m-2}^{a}(x,\lambda^{-2}D)(\lambda^{-4}D^{2}-1)^{-1}\big{]}\\\
&+\frac{i}{\lambda^{4L+6}}\operatorname{Tr}\big{[}(\mathcal{L}_{0}U)(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}R_{2L,m-2}^{a}(x,\lambda^{-2}D)\big{]}\\\
&-\frac{i}{\lambda^{4L+6}}\operatorname{Tr}\big{[}(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}(\mathcal{L}_{0}U^{2})(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}R_{2L,m-2}^{a}(x,\lambda^{-2}D)\big{]}\\\
&-\frac{i}{\lambda^{4L+6}}\operatorname{Tr}\big{[}(\lambda^{-4}D^{2}-1)^{-1}(\mathcal{L}_{0}U^{3})(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}R_{2L,m-2}^{a}(x,\lambda^{-2}D)\big{]}.\end{split}$
Thus
$\displaystyle\mu_{2L+1,m}^{(3)}=-\frac{i}{2m\lambda^{2}}\operatorname{Tr}\big{[}U^{3}R_{2L,m-2}^{a}(x,\lambda^{-2}D)(\lambda^{-4}D^{2}-1)^{-1}\big{]}.$
A short calculation involving (49) confirms that the three quantities
identified above sum to $\mu_{2L+1,m}$ as defined in (58):
$\mu_{2L+1,m}^{(1)}+\mu_{2L+1,m}^{(2)}+\mu_{2L+1,m}^{(3)}=\mu_{2L+1,m}.$
### 3.5. Estimating the Remainder
The final step of the proof is to estimate the remainder terms, which we group
together into $\tau^{m}_{L}(u,\lambda)$. This expression is a sum of the
following terms:
* •
The $k\geq 1$ terms of $II$ and the $k\geq 2$ terms of $III$, where $II$ and
$III$ denote (as above) the second and third sums in the decomposition (57).
We refer to these as the ‘Type 1’ remainder terms.
* •
The terms in (66), (68), and (69) where $\lambda^{-4L-6}$ appears (six terms
total). We refer to these as the ‘Type 2’ remainder terms.
#### 3.5.1. Type 1 Remainder Terms
We begin with the two sums. We want to show that the following expression is
bounded by $\|u\|_{H^{L}(\mathbb{R})}^{2m}$:
(70) $\begin{split}&\sum_{\begin{subarray}{c}k+r=m-1\\\ k\geq 1,\;1\leq r\leq
2L\end{subarray}}\frac{(-1)^{k}}{\lambda^{2k}}\operatorname{Tr}\big{[}(U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1})^{2k+2}(\mathcal{L}_{0}R_{2L,r}^{d})(x,\lambda^{-2}D)\big{]}\\\
&+\sum_{\begin{subarray}{c}k+r=m-1\\\ k\geq 2,\;0\leq r\leq
2L\end{subarray}}\frac{i(-1)^{k+1}}{\lambda^{2(k-1)}}\operatorname{Tr}\big{[}(U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1})^{2k+1}R_{2L,r}^{a}(x,\lambda^{-2}D)(\lambda^{-2}\mathcal{L}_{0}+1)\big{]}.\end{split}$
By virtue of (52), we can write
$\displaystyle(\mathcal{L}_{0}R^{d}_{2L,r})(x,p)=\frac{1}{(p^{2}-1)^{2L+1}}\sum_{\begin{subarray}{c}\gamma\in\mathbb{N}^{2r}\\\
|\gamma|=2L-r+1\end{subarray}}Q_{\gamma}(x)P_{|\gamma|}(p).$
Thus, the traces in the first sum in (70) may be written as a sum of terms of
the form
(71)
$\displaystyle\operatorname{Tr}\big{[}(U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1})^{2k+2}Q_{\gamma}(x)P_{2L-r+1}(\lambda^{-2}D)(\lambda^{-4}D^{2}-1)^{-2L-1}\big{]}$
with $\gamma\in\mathbb{N}^{2r},\,\,|\gamma|=2L-r+1\leq 2L$. Integrating by
parts repeatedly in the above expression until no derivative of order larger
than $L$ falls on any single $U$, we rewrite the expression (71) as a sum of
terms of the form
$\operatorname{Tr}\big{[}(\partial_{x}^{\eta_{1}}U)(x)(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}\dots(\partial_{x}^{\eta_{2k+2}}U)(x)(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}Q_{\gamma}(x)P_{2L-r+1}(\lambda^{-2}D)(\lambda^{-4}D^{2}-1)^{-2L-1}\big{]},$
with $\eta=(\eta_{1},\dots,\eta_{2k+2})\in\mathbb{N}^{2k+2}$,
$\gamma=(\gamma_{1},\dots,\gamma_{2r})\in\mathbb{N}^{2r}$ satisfying
$|\eta|+|\gamma|=2L-r+1$ and $\max\limits_{p,q}(\eta_{p},\gamma_{q})\leq L$.
Thus, the expression (71) can be bounded by
$|\lambda|^{2}\|u\|_{H^{L}(\mathbb{R})}^{2m}$, and therefore the first sum in
(70) by $\|u\|_{H^{L}(\mathbb{R})}^{2m}$.
We deal with the second sum in (70) in essentially the same way. Invoking
(53), we may write
$R^{a}_{2L,r}(x,\lambda^{-2}D)(\lambda^{-2}\mathcal{L}_{0}+1)=\sum_{\begin{subarray}{c}\gamma\in\mathbb{N}^{2r+1}\\\
|\gamma|=2L-r\end{subarray}}Q_{\gamma}(x)P_{|\gamma|+1}(\lambda^{-2}D)(\lambda^{-4}D^{2}-1)^{-2L-1}.$
Thus
$\operatorname{Tr}\big{[}(U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1})^{2k+1}R_{2L,r}^{a}(x,\lambda^{-2}D)(\lambda^{-2}\mathcal{L}_{0}+1)\big{]}$
is a sum of terms of the form
(72)
$\operatorname{Tr}\big{[}(U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1})^{2k+1}Q_{\gamma}(x)P_{2L-r+1}(\lambda^{-2}D)(\lambda^{-4}D^{2}-1)^{-2L-1}\big{]},$
where $\gamma\in\mathbb{N}^{2r+1}$ with $|\gamma|=2L-r\leq 2L$. As before, we
integrate by parts repeatedly to rewrite the expression (72) as a sum of terms
of the form
$\operatorname{Tr}\big{[}(\partial_{x}^{\eta_{1}}U)(x)(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}\dots(\partial_{x}^{\eta_{2k+1}}U)(x)(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}Q_{\gamma}(x)P_{2L-r+1}(\lambda^{-2}D)(\lambda^{-4}D^{2}-1)^{-2L-1}\big{]},$
with $\eta=(\eta_{1},\dots,\eta_{2k+1})\in\mathbb{N}^{2k+1}$,
$\gamma=(\gamma_{1},\dots,\gamma_{2r+1})\in\mathbb{N}^{2r+1}$ satisfying
$|\eta|+|\gamma|=2L-r$ and $\max\limits_{p,q}(\eta_{p},\gamma_{q})\leq L$.
This allows us to bound (72) by $|\lambda|^{2}\|u\|_{H^{L}(\mathbb{R})}^{2m}$,
thus completing the desired estimates on the Type 1 remainder terms.
#### 3.5.2. Type 2 Remainder Terms
We now deal with the Type 2 remainder terms (the terms in (66), (68), and (69)
where $\lambda^{-4L-6}$ appears). When $m\geq 3$, the total number of
derivatives falling on the $U$’s is $2L$; therefore we can bound all these
terms by $|\lambda|^{-4L-4}\|u\|_{H^{L}(\mathbb{R})}^{2m}$ by arguing exactly
as we did for the Type 1 terms. To complete the proof of Lemma 2.1, it thus
remains to consider the Type 2 remainder terms with $m=2$. In this case, some
of the $U$’s appear to be overloaded with derivatives, and we need an
additional estimate. We state the following Lemma in terms of the ‘overloaded’
part of the Type 2 remainder term from (66), but the same manipulations will
yield the bound we need for the other Type 2 remainders.
###### Lemma 3.3.
The following estimate holds, for any $j\in\mathbb{N}$ and all
$\alpha\in[0,1]$.
(73)
$\|(\mathcal{L}_{0}+\lambda^{2})^{-1}D^{j+1}U(\mathcal{L}_{0}-\lambda^{2})^{-1}\|_{2}\leq
C(\alpha)|\lambda|^{-1-2\alpha}\|u\|_{H^{j+\alpha}(\mathbb{R})}.$
###### Proof.
Denoting
$T=(\mathcal{L}_{0}+\lambda^{2})^{-1}D^{j+1}U(\mathcal{L}_{0}-\lambda^{2})^{-1}$,
we readily compute as follows:
$\displaystyle\|T\|_{2}^{2}$
$\displaystyle=\frac{1}{\pi}\iint\frac{|\widehat{D^{j+1}u}(\zeta_{1}-\zeta_{2})|^{2}}{|\zeta_{1}-\lambda^{2}|^{2}|\zeta_{2}-\lambda^{2}|^{2}}\mathrm{d}\zeta_{1}\mathrm{d}\zeta_{2}$
$\displaystyle=\frac{1}{\pi}\int|\zeta_{1}|^{2}|\widehat{D^{j}u}(\zeta_{1})|^{2}\left(\int\frac{\mathrm{d}\zeta_{2}}{|\zeta_{1}+\zeta_{2}-\lambda^{2}|^{2}|\zeta_{2}-\lambda^{2}|^{2}}\right)\mathrm{d}\zeta_{1}$
$\displaystyle=\frac{2}{\operatorname{Im}\lambda^{2}}\int\frac{|\zeta_{1}|^{2}|\widehat{D^{j}u}(\zeta_{1})|^{2}}{|\zeta_{1}+2i\operatorname{Im}\lambda^{2}|^{2}}\mathrm{d}\zeta_{1}\leq
C(\alpha)|\lambda|^{-2-4\alpha}\|u\|_{H^{j+\alpha}(\mathbb{R})}^{2}.$
This completes the proof. ∎
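The key step above is the explicit $\zeta_{2}$-integral, a convolution of two Poisson-type kernels: $\frac{1}{\pi}\int_{\mathbb{R}}\frac{\mathrm{d}\zeta_{2}}{|\zeta_{1}+\zeta_{2}-\lambda^{2}|^{2}\,|\zeta_{2}-\lambda^{2}|^{2}}=\frac{2}{\operatorname{Im}\lambda^{2}}\cdot\frac{1}{|\zeta_{1}+2i\operatorname{Im}\lambda^{2}|^{2}}$. A numerical sketch (not part of the proof; the sample values of $\zeta_{1}$ and $\lambda^{2}$ are ours) confirming it:

```python
import math

# Numerical check of the explicit zeta_2-integral in the proof of Lemma 3.3.

def inner_integral(zeta1, lam2, half_width=500.0, nodes=100001):
    # Simpson's rule on [-half_width, half_width]; the tail decays like t^(-4)
    a = -half_width
    h = 2 * half_width / (nodes - 1)
    total = 0.0
    for i in range(nodes):
        t = a + i * h
        w = 1 if i in (0, nodes - 1) else (4 if i % 2 else 2)
        total += w / (abs(t + zeta1 - lam2) ** 2 * abs(t - lam2) ** 2)
    return total * h / 3

lam2 = 0.6 + 1.3j                      # sample lambda^2 with Im(lam2) > 0
for zeta1 in (0.0, 1.7, -3.2):
    lhs = inner_integral(zeta1, lam2) / math.pi
    rhs = 2.0 / (lam2.imag * abs(zeta1 + 2j * lam2.imag) ** 2)
    assert abs(lhs - rhs) < 1e-5 * rhs, (zeta1, lhs, rhs)
print("(1/pi) * inner integral = 2 / (Im(lam2) |zeta1 + 2i Im(lam2)|^2) verified")
```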
With the above Lemma at our disposal, we now return to the estimation of the
Type $2$ remainder terms for $m=2$; we deal first with the one that appears in
(66). Omitting the prefactor $\frac{-i}{\lambda^{4L+6}}$, the quantity under
consideration is
$\operatorname{Tr}\big{[}U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}(D^{2}R_{2L,1}^{a})(x,\lambda^{-2}D)(\lambda^{-2}\mathcal{L}_{0}+1)^{-1}\big{]},$
which we write as
(74)
$\operatorname{Tr}\big{[}(\lambda^{-2}\mathcal{L}_{0}+1)^{-1}(D^{2}U)(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}R_{2L,1}^{a}(x,\lambda^{-2}D)\big{]}.$
Proceeding as we did for the Type 1 terms, we rewrite this expression as a sum
of terms of the form
(75)
$\operatorname{Tr}\big{[}(\lambda^{-2}\mathcal{L}_{0}+1)^{-1}(D^{\eta+2}U)(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}Q_{\gamma}(x)P_{2L-1}(\lambda^{-2}D)(\lambda^{-4}D^{2}-1)^{-2L-1}],$
with $\eta\in\mathbb{N}$,
$\gamma=(\gamma_{1},\gamma_{2},\gamma_{3})\in\mathbb{N}^{3}$,
$\eta+|\gamma|=2L-1$, $\eta\leq L-1$, and
$\max(\gamma_{1},\gamma_{2},\gamma_{3})\leq L$. The expression (75) can be
bounded by
$\displaystyle\|(\lambda^{-2}\mathcal{L}_{0}+1)^{-1}(D^{\eta+2}U)(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}\|_{2}\|Q_{\gamma}(x)P_{2L-1}(\lambda^{-2}D)(\lambda^{-4}D^{2}-1)^{-2L-1}\|_{2}.$
By virtue of Lemma 3.3, the above can in turn be bounded by
$\displaystyle
C(\alpha)|\lambda|^{4-2\alpha}\|u\|_{H^{L+\alpha}(\mathbb{R})}\|u\|_{H^{L}(\mathbb{R})}^{3},$
for any $\alpha\in[0,1]$.
The next quantity we treat is the third term in (68); we want to estimate
$\operatorname{Tr}\big{[}(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}(\mathcal{L}_{0}U^{2})(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}(\mathcal{L}_{0}R_{2L,1}^{d})(x,\lambda^{-2}D)(\lambda^{-2}\mathcal{L}_{0}+1)^{-1}\big{]}.$
After an integration by parts we are left with the expression
$-\operatorname{Tr}\big{[}(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}(D^{2}U^{2})(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}R_{2L,1}^{d}(x,\lambda^{-2}D)(\lambda^{-2}\mathcal{L}_{0}+1)^{-1}\big{]},$
which can be treated in exactly the same way as (74). We note only the
modification to Lemma 3.3 that we use, namely
$\|(\mathcal{L}_{0}-\lambda^{2})^{-1}(D^{L+1}U^{2})(\mathcal{L}_{0}-\lambda^{2})^{-1}\|_{2}\lesssim_{\alpha}|\lambda|^{-1-2\alpha}\|U^{2}\|_{H^{L+\alpha}(\mathbb{R})}\lesssim_{\alpha}|\lambda|^{-1-2\alpha}\|u\|_{H^{L+\alpha}(\mathbb{R})}\|u\|_{H^{L}(\mathbb{R})}.$
We next consider the second term in (68):
$\operatorname{Tr}\big{[}(\lambda^{-2}\mathcal{L}_{0}+1)^{-1}(\mathcal{L}_{0}U)(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}(\mathcal{L}_{0}R_{2L,1}^{d})(x,\lambda^{-2}D)\big{]},$
where as usual we have suppressed the prefactor $\lambda^{-4L-6}$. We start by
rewriting it as the sum
(76)
$\begin{split}-&\operatorname{Tr}\big{[}(\lambda^{-2}\mathcal{L}_{0}+1)^{-1}(D^{2}U)(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}R_{2L,1}^{d}(x,\lambda^{-2}D)\big{]}\\\
&+\operatorname{Tr}\big{[}(\lambda^{-2}\mathcal{L}_{0}+1)^{-1}(\mathcal{L}_{0}U)(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}(\mathcal{L}_{0}U)(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}R_{2L,1}^{d}(x,\lambda^{-2}D)\big{]}.\end{split}$
For the first term here we proceed exactly as before: substituting (52) and
integrating by parts we rewrite it as a sum of expressions of the form
$\operatorname{Tr}\big{[}(\lambda^{-2}\mathcal{L}_{0}+1)^{-1}(D^{\eta_{1}+2}U)(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}(D^{\eta_{2}}U)(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}Q_{\gamma}(x)P_{2L}(\lambda^{-2}D)(\lambda^{-4}D^{2}-1)^{-2L-1}],$
with $\eta=(\eta_{1},\eta_{2})\in\mathbb{N}^{2}$,
$\gamma=(\gamma_{1},\gamma_{2})\in\mathbb{N}^{2}$, $|\eta|+|\gamma|=2L-1$,
$\max(\eta_{1},\eta_{2})\leq L-1$, and $\max(\gamma_{1},\gamma_{2})\leq~{}L$.
We estimate the above by
$\displaystyle\|(\lambda^{-2}\mathcal{L}_{0}+1)^{-1}(D^{\eta_{1}+2}U)(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}\|_{2}\|(D^{\eta_{2}}U)(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}\|\|Q_{\gamma}(x)P_{2L}(\lambda^{-2}D)(\lambda^{-4}D^{2}-1)^{-2L-1}\|_{2},$
which can in turn be bounded by
$\displaystyle
C(\alpha)|\lambda|^{4-2\alpha}\|u\|_{H^{L+\alpha}(\mathbb{R})}\|u\|_{H^{L}(\mathbb{R})}^{3}.$
To treat the second term in (76) we distinguish the cases $L=1$ and $L\geq 2$.
In the case of $L=1$ we estimate this expression by
$\displaystyle\|(\lambda^{-2}\mathcal{L}_{0}+1)^{-1}(\mathcal{L}_{0}U)\|_{2}\|(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}(\mathcal{L}_{0}U)\|\|(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}R_{2L,1}^{d}(x,\lambda^{-2}D)\|_{2}$
$\displaystyle\lesssim|\lambda|^{2+\frac{2}{p}}\|u\|_{H^{1}(\mathbb{R})}^{3}\|Du\|_{L^{p}(\mathbb{R})},\quad
2\leq p\leq\infty.$
Putting $\sigma=\frac{\alpha}{2}\in[0,\frac{1}{2}[$ and choosing $p$ such that
$\sigma=\frac{1}{2}-\frac{1}{p}$, we get the bound
$|\lambda|^{3-2\sigma}\|u\|_{H^{1}(\mathbb{R})}^{3}\|u\|_{H^{1+\sigma}(\mathbb{R})}\leq|\lambda|^{4-2\alpha}\|u\|_{H^{1}(\mathbb{R})}^{3}\|u\|_{H^{1+\alpha}(\mathbb{R})}.$
If $L\geq 2$, we can proceed as for the Type 1 remainder terms and bound the
second term in (76) by $|\lambda|^{2}\|u\|_{H^{L}(\mathbb{R})}^{4}$. This
finishes our considerations of Type 2 remainder terms coming from (68).
The last group of Type 2 remainder terms comes from (69). Omitting the common
prefactor, the quantity of interest is
(77)
$\begin{split}&\operatorname{Tr}\big{[}(\mathcal{L}_{0}U)(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}R_{2L,0}^{a}(x,\lambda^{-2}D)\big{]}\\\
&-\operatorname{Tr}\big{[}(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}(\mathcal{L}_{0}U^{2})(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}U(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}R_{2L,0}^{a}(x,\lambda^{-2}D)\big{]}\\\
&-\operatorname{Tr}\big{[}(\lambda^{-4}D^{2}-1)^{-1}(\mathcal{L}_{0}U^{3})(\lambda^{-2}\mathcal{L}_{0}-1)^{-1}R_{2L,0}^{a}(x,\lambda^{-2}D)\big{]},\end{split}$
where $R_{2L,0}^{a}(x,p)=\partial_{x}^{2L}U(x)P_{2L}(p)(p^{2}-1)^{-2L-1}$. No
new ideas are involved in the estimation of these terms; we simply integrate
by parts $L-1$ times to keep $L+1$ derivatives on $U$ coming from
$R_{2L,0}^{a}$ and then apply Lemma 3.3. We omit the remaining details for
these terms. Having now established the required bounds on the remainder
$|\tau_{L}^{m}(u,\lambda)|$, we have completed the proof of Lemma 2.1, modulo
the proof of Lemma 3.2 below.
### 3.6. Proof of Lemma 3.2
We argue by induction. The base cases are easy to verify explicitly:
$R_{0}^{a}(x,p)=\frac{1}{p^{2}-1}\cdot U(-i)=R_{0,0}^{a}(x,p)$
$R_{1}^{d}(x,p)=\frac{1}{(p^{2}-1)^{2}}\cdot
U^{2}\cdot(-1)\cdot(p\sigma_{3}-1)=R_{1,1}^{d}(x,p).$
Using (48)–(49) along with our inductive hypothesis, we may write, for $k\geq
1$,
$\displaystyle R_{k}^{d}(x,p)$
$\displaystyle=\frac{1}{(p^{2}-1)^{k+1}}\bigg{[}\sum_{r=0}^{k-1}\sum_{\begin{subarray}{c}\gamma\in\mathbb{N}^{2r+1}\\\
|\gamma|=(k-1)-r\end{subarray}}-iUQ_{\gamma}(x)P_{|\gamma|}(p)$
$\displaystyle\hskip
71.13188pt+\sum_{r=1}^{k-1}\sum_{\begin{subarray}{c}\gamma\in\mathbb{N}^{2r}\\\
|\gamma|=(k-1)-r\end{subarray}}i\partial_{x}Q_{\gamma}(x)P_{|\gamma|}(p)(p\sigma_{3}-1)\sigma_{3}\bigg{]}(p\sigma_{3}-1).$
We check that the inner sums (together with the common factors of
$(p^{2}-1)^{-k-1}$ and $(p\sigma_{3}-1)$) can be absorbed into
$R_{k,r+1}^{d}(x,p)$ and $R_{k,r}^{d}(x,p)$, respectively.
* •
When $\gamma\in\mathbb{N}^{2r+1}$ and $|\gamma|=(k-1)-r$, the term
$iUQ_{\gamma}(x)P_{|\gamma|}(p)$ can be absorbed into $R_{k,r+1}^{d}(x,p)$:
* –
First, $UQ_{\gamma}=Q_{(0,\gamma)}$, with $(0,\gamma)\in\mathbb{N}^{2(r+1)}$
and $|(0,\gamma)|=|\gamma|=k-(r+1)$.
* –
Second, $\deg P_{|\gamma|}(p)\leq(k-1)-r=k-(r+1)$.
* •
When $\gamma\in\mathbb{N}^{2r}$ and $|\gamma|=(k-1)-r$, the term
$\partial_{x}Q_{\gamma}(x)P_{|\gamma|}(p)$ can be absorbed into
$R_{k,r}^{d}(x,p)$:
* –
First, $\partial_{x}Q_{\gamma}$ is a sum of $Q_{\gamma^{\prime}}$’s, with
$|\gamma^{\prime}|=|\gamma|+1=k-r$;
* –
Second, $\deg P_{|\gamma|}(p)(p\sigma_{3}-1)\leq[(k-1)-r]+1=k-r$.
The formula for $R_{k}^{a}(x,p)$ may be verified in exactly the same way. We
write
$\displaystyle R_{k}^{a}(x,p)$
$\displaystyle=\frac{1}{(p^{2}-1)^{k+1}}\bigg{[}\sum_{r=0}^{k-1}\sum_{\begin{subarray}{c}\gamma\in\mathbb{N}^{2r+1}\\\
|\gamma|=(k-1)-r\end{subarray}}U^{2}Q_{\gamma}(x)P_{|\gamma|}(p)+i\partial_{x}Q_{\gamma}(x)P_{|\gamma|}(p)\sigma_{3}(p\sigma_{3}+1)$
$\displaystyle\hskip
85.35826pt-\sum_{r=1}^{k-1}\sum_{\begin{subarray}{c}\eta\in\mathbb{N}^{2r}\\\
|\eta|=(k-1)-r\end{subarray}}U\partial_{x}Q_{\eta}(x)P_{|\eta|}(p)(p\sigma_{3}-1)\sigma_{3}\bigg{]}.$
As above, we perform the routine verifications of the numerology as follows.
* •
When $\gamma\in\mathbb{N}^{2r+1}$ and $|\gamma|=(k-1)-r$, the term
$U^{2}Q_{\gamma}(x)P_{|\gamma|}(x)$ can be absorbed into $R^{a}_{k,r+1}(x,p)$:
* –
First, $U^{2}Q_{\gamma}=Q_{(0,0,\gamma)}$, with
$(0,0,\gamma)\in\mathbb{N}^{2(r+1)+1}$, $|(0,0,\gamma)|=|\gamma|=k-(r+1)$;
* –
Second, $\deg P_{|\gamma|}(p)\leq(k-1)-r=k-(r+1)$.
* •
When $\gamma\in\mathbb{N}^{2r+1}$ and $|\gamma|=(k-1)-r$, the term
$i\partial_{x}Q_{\gamma}(x)P_{|\gamma|}(p)\sigma_{3}(p\sigma_{3}+1)$ can be
absorbed into $R^{a}_{k,r}(x,p)$. When $\eta\in\mathbb{N}^{2r}$ and
$|\eta|=(k-1)-r$, the same is true of
$U\partial_{x}Q_{\eta}(x)P_{|\eta|}(p)(p\sigma_{3}-~{}1)\sigma_{3}$.
* –
First, both $\partial_{x}Q_{\gamma}$ and $U\partial_{x}Q_{\eta}$ are sums of
$Q_{\gamma^{\prime}}$’s, with $\gamma^{\prime}\in\mathbb{N}^{2r+1}$ and
$|\gamma^{\prime}|=k-r$.
* –
Second, $P_{|\gamma|}(p)\sigma_{3}(p\sigma_{3}+1)$ and
$P_{|\eta|}(p)(p\sigma_{3}-1)\sigma_{3}$ each have degree at most $k-r$.
Correspondence: Tom Menzies, Clinical Trials Research Unit, University of
Leeds, Leeds, LS2 9JT, UK.
# A Comparison of Various Aggregation Functions in Multi-Criteria Decision
Analysis for Drug Benefit-Risk Assessment
Tom Menzies$^{1,2}$, Gaelle Saint-Hilary$^{3,4}$, and Pavel Mozgunov$^{5}$
$^{1}$ Clinical Trials Research Unit, Leeds Institute of Clinical Trials
Research, University of Leeds, Leeds, UK
$^{2}$ Department of Mathematics and Statistics, Lancaster University,
Lancaster, UK
$^{3}$ Department of Biostatistics, Institut de Recherches Internationales
Servier (IRIS), Suresnes, France
$^{4}$ Dipartimento di Scienze Matematiche (DISMA) Giuseppe Luigi Lagrange,
Politecnico di Torino, Torino, Italy
$^{5}$ Medical and Pharmaceutical Statistics Research Unit, Department of
Mathematics and Statistics, Lancaster University, Lancaster, UK
[email protected]
###### Abstract
Multi-criteria decision analysis (MCDA) is a quantitative approach to drug
benefit-risk assessment (BRA) which allows for consistent comparisons by
summarising all benefits and risks in a single score. The MCDA consists of
several components, one of which is the utility (or loss) score function that
defines how benefits and risks are aggregated into a single quantity. While a
linear utility score is one of the most widely used approaches in BRA, it is
recognised that it can result in counter-intuitive decisions, for example,
recommending a treatment with extremely low benefits or high risks. To
overcome this problem, alternative approaches to the score construction,
namely, product, multi-linear and Scale Loss Score models, were suggested.
However, to date, the majority of arguments concerning the differences implied
by these models are heuristic. In this work, we consider four models to
calculate the aggregated utility/loss scores and compare their performance in
an extensive simulation study over many different scenarios, and in a case
study. It is found that the product and Scale Loss Score models provide more
intuitive treatment recommendation decisions in the majority of scenarios
compared to the linear and multi-linear models, and are more robust to
correlation in the criteria.
###### keywords:
Aggregation Function; Benefit-risk; Decision-Making; Loss Score; Multi-
Criteria Decision Analysis
## 1 Introduction
The benefit-risk analysis of a treatment consists of balancing its favourable
therapeutic effects against the adverse reactions it may induce (Chuang-Stein,
Entsuah and Pritchett, 2008). This is a process which drug regulatory
authorities, such as the EMA (2013) and the FDA (2013), use when deciding
whether a treatment should be recommended. Benefit-risk assessment (BRA) is
mostly performed in a qualitative way (Hughes, Bayoumi and Pirmohamed, 2007).
However, this approach has been criticised for a lack of transparency behind
the final outcome, in part due to the large amounts of data considered for
this assessment, and the differing opinions on what these data mean. To
counter this, quantitative approaches were proposed that ensure continuity and
consistency across drug BRA, and make the decisions easier to justify and to
communicate (Thokala et al., 2016; Marsh et al., 2016).
While there are a number of methods to conduct quantitative BRA, multi-
criteria decision analysis (MCDA) has been particularly recommended by many
expert groups in the field (Mussen, Salek and Walker, 2007; IMI PROTECT, 2013;
NICE Decision Support Unit, 2011; Marsh et al., 2014). MCDA provides a single
score (a utility or loss score) for a treatment, which summarises all the
benefits and risks induced by the treatment in question. These scores are then
used to compare the treatments and to guide the recommendation of some
therapies over others.
Mussen et al. (2007) proposed to use a linear aggregation model in the MCDA,
which takes into account all main benefits and risks associated with a
treatment (as well as their relative importance) to generate a treatment
utility score by taking a linear combination of all criteria. This utility
score is then compared against the utility score of a competing treatment, and
the treatment with the highest score is recommended. This model appealed for
numerous reasons, one of which was its simplicity. The proposed method,
however, was deterministic: point estimates of the benefit and risk criteria
were used, and no uncertainty around these estimates was considered. Yet,
uncertainty and variance are expected in treatments’ performances, and must
therefore be accounted for in the decision-making.
To resolve this shortcoming, probabilistic MCDA (pMCDA), which accounts for
the variability of the criteria through a Bayesian approach, was proposed
(Waddingham et al., 2016). Generalisations of pMCDA to the case of uncertainty
in the relative importance of the criteria were developed, named stochastic
multi-criteria acceptability analysis (SMAA) (Tervonen et al., 2011) and
Dirichlet SMAA (Saint-Hilary et al., 2017). However, it was acknowledged that
by accounting for several sources of uncertainty, these models become more
complex and should be used primarily for sensitivity analysis.
All the works discussed above concern a linear model for aggregation of the
criteria, which is thought to be primarily due to its wider application in
practice rather than its properties. One argument against the linear model is
that a treatment which has either no benefit or extreme risk could be
recommended over other alternatives without such extreme characteristics
(Saint-Hilary et al., 2018; Morton, 2017; Marsh et al., 2017). In addition,
the linearity implies that the relative tolerance to an increase in toxicity
is constant for all levels of benefit, which might not be the case in a number
of clinical settings. To address these points, a Scale Loss Score (SLoS) model
was developed. This model makes it impossible for treatments with no benefit
or extremely high risk to be recommended. It also incorporates a decreasing
level of risk tolerance relative to the benefits: an increase in risk is
better tolerated when benefit improves from “very low” to “moderate” than when
it improves from “moderate” to “very high”. The SLoS model results in similar
recommendations to the linear MCDA model when one treatment is strictly
preferred to another (i.e. has both lower risk and higher benefit), but in
more intuitive recommendations if one of the treatments has either extremely
low benefit or extremely high risk.
Whilst other methods are discussed in the literature, the only application of
a non-linear BRA model to the medical field is that of Saint-Hilary et al.
(2018), which only compares the linear and SLoS models. This paper builds on
that comparison by introducing several further aggregation models (AMs) and
analysing how each performs relative to the others, allowing an informed
decision on which one should be used, based on the results of an extensive and
comprehensive simulation study over a number of clinical scenarios. We also
use a case study to demonstrate the implications of the choice of AM for the
actual decision-making using the MCDA.
The rest of the paper proceeds as follows. The general MCDA methodology, the
four aggregation models considered (linear, product, multi-linear and SLoS),
and the choice of their weights are given in Section 2. In Section 3, we
revisit a case study conducted by Nemeroff (2007) looking at the effects of
Venlafaxine, Fluoxetine and a placebo on depression, applying the various
aggregation models to a given dataset. In Section 4, a comprehensive
simulation study comparing the four aggregation models in many different
scenarios is presented, as well as the effects that correlation between
criteria may have. We conclude with a discussion in Section 5.
## 2 Methodology
All the aggregation models (referred to as “models” below) considered in this
work are classified within the MCDA family: they aggregate the information
about benefits and risks into a single (utility or loss) score. Therefore, we
refer to each of the approaches by its model for the computation of the score.
Below, we outline the general MCDA framework for the construction of a score
using an arbitrary model. We consider the MCDA taking into account the
variability of estimates, pMCDA (Waddingham et al., 2016).
### 2.1 Setting
Consider $m$ treatments (indexed by $i$) which are assessed on $n$ criteria
(indexed by $j$). To ensure continuity, we use the same notation as
Saint-Hilary et al. (2018):
* •
$\xi_{i,j}$ is the performance of treatment $i$ on criterion $j$, so that
treatment $i$ is characterised by a vector showing how it performed on each
criterion: $\boldsymbol{\xi}_{i}=(\xi_{i,1},\ldots,\xi_{i,n})$.
* •
The monotonically increasing partial value functions $0\leq u_{j}(\cdot)\leq
1$ are used to normalise the criterion performances. Let $\xi^{\prime}_{j}$
and $\xi^{\prime\prime}_{j}$ be the most and the least preferable values,
respectively, so that $u_{j}(\xi^{\prime\prime}_{j})=0$ and
$u_{j}(\xi^{\prime}_{j})=1$. The inequality $u_{j}(\xi_{ij})>u_{j}(\xi_{hj})$
indicates that the performance of treatment $i$ is preferred to the
performance of treatment $h$ on criterion $j$. In this work, we focus on
linear partial value functions, one of the most common choices in treatment
benefit-risk assessment (Thokala et al., 2016; Mussen, Salek and Walker, 2007;
Tervonen et al., 2011; Marcelon et al., 2016; Waddingham et al., 2016), which
can be written as
$u_{j}(\xi_{ij})=\frac{\xi_{ij}-\xi^{\prime\prime}_{j}}{\xi^{\prime}_{j}-\xi^{\prime\prime}_{j}}.$
(1)
* •
The weights indicating the relative importance of the criteria are known
constants denoted by $w_{j}$. The vector of weights used for the analysis is
denoted by $\boldsymbol{w}=\left(w_{1},...,w_{n}\right)$.
* •
The MCDA utility and loss scores of treatment $i$ are obtained as
$u(\boldsymbol{\xi}_{i},\boldsymbol{w}):=u\left(w_{j},u_{j}(\xi_{ij})\right),~{}~{}j=1,\ldots,n,$
and
$l(\boldsymbol{\xi}_{i},\boldsymbol{w}):=l\left(w_{j},u_{j}(\xi_{ij})\right),~{}~{}j=1,\ldots,n,$
respectively, where $u\left(\cdot\right)$ and $l\left(\cdot\right)$ are the
functions specifying how the criteria should be summarised in a single score,
and are referred to as “aggregation models”. The impact of the choice of this
model on the performance of treatment recommendation is the focus of this
work. The higher the utility score, or the lower the loss score, the more
preferable the benefit-risk balance. Then, the comparison of treatments $i$
and $h$ is based on
$\Delta
u(\boldsymbol{\xi}_{i},\boldsymbol{\xi}_{h},\boldsymbol{w}):=u(\boldsymbol{\xi}_{i},\boldsymbol{w})-u(\boldsymbol{\xi}_{h},\boldsymbol{w})$
(2)
or
$\Delta
l(\boldsymbol{\xi}_{i},\boldsymbol{\xi}_{h},\boldsymbol{w}):=l(\boldsymbol{\xi}_{i},\boldsymbol{w})-l(\boldsymbol{\xi}_{h},\boldsymbol{w}).$
(3)
Within a Bayesian approach, the utility score
$u(\boldsymbol{\xi}_{i},\boldsymbol{w})$ and the loss score
$l(\boldsymbol{\xi}_{i},\boldsymbol{w})$ are random variables having a prior
distribution. Given observed outcomes $\mathbf{x_{i}}=(x_{i1},\ldots,x_{in})$
and $\mathbf{x_{h}}=(x_{h1},\ldots,x_{hn})$ (corresponding to treatment
performances $\boldsymbol{\xi}_{i}$ and $\boldsymbol{\xi}_{h}$, respectively)
for $i$ and $h$, one can obtain the posterior distribution of $\Delta
u(\boldsymbol{\xi}_{i},\boldsymbol{\xi}_{h},\boldsymbol{w})$ or $\Delta
l(\boldsymbol{\xi}_{i},\boldsymbol{\xi}_{h},\boldsymbol{w})$, respectively.
The inference is based on the complete posterior distribution and the
conclusion on the benefit-risk balance is supported by the probability of
treatment $i$ to have a greater utility score (or smaller loss score) than
treatment $h$:
$\mathcal{P}_{u}^{ih}=\mathbb{P}(\Delta
u(\boldsymbol{\xi}_{i},\boldsymbol{\xi}_{h},\boldsymbol{w})>0\mid\mathbf{x_{i}},\mathbf{x_{h}}).$
(4)
or
$\mathcal{P}_{l}^{ih}=\mathbb{P}(\Delta
l(\boldsymbol{\xi}_{i},\boldsymbol{\xi}_{h},\boldsymbol{w})<0\mid\mathbf{x_{i}},\mathbf{x_{h}}).$
(5)
The probabilities (4) or (5) are used to guide a decision on taking/dropping a
treatment. A possible way to formalise the decision based on this probability
is to compare it to a threshold confidence level $0.5\leq\psi\leq 1$. Then,
$\mathcal{P}_{u}^{ih}>\psi$ (or $\mathcal{P}_{l}^{ih}>\psi$) would mean that
one has enough evidence to say that treatment $i$ has a better benefit-risk
balance than $h$ with a level of confidence $\psi$. Note that
$\mathcal{P}_{u}^{ih}=0.5$ (and $\mathcal{P}_{l}^{ih}=0.5$) corresponds to the
case where the benefit-risk profiles of $i$ and $h$ are equal according to the
corresponding MCDA model.
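As a concrete sketch of the framework above (not the authors' implementation;
the two-criteria setup, the Beta posteriors and all numerical values are
hypothetical), the probability in (4) can be approximated by Monte Carlo over
posterior draws of the criterion performances:

```python
import random

def partial_value(xi, worst, best):
    # Linear partial value function of Eq. (1): 0 at the least
    # preferable value, 1 at the most preferable one.
    return (xi - worst) / (best - worst)

def linear_utility(us, ws):
    # Linear aggregation of the normalised criteria (Eq. (6)).
    return sum(w * u for w, u in zip(ws, us))

def prob_i_beats_h(draw_i, draw_h, ws, n_sim=10_000, seed=1):
    """Monte Carlo estimate of P(Delta u > 0 | data), Eq. (4).
    draw_i / draw_h return one posterior draw of the normalised
    criteria vector for treatments i and h (hypothetical posteriors)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_sim):
        if linear_utility(draw_i(rng), ws) > linear_utility(draw_h(rng), ws):
            wins += 1
    return wins / n_sim

# Normalising a raw criterion performance (hypothetical 0-50 benefit scale):
u_benefit = partial_value(35.0, worst=0.0, best=50.0)  # 0.7

# Toy two-criteria example (benefit, safety) with Beta posteriors:
# treatment i is stochastically better on benefit, identical on safety.
draw_i = lambda rng: (rng.betavariate(30, 10), rng.betavariate(20, 20))
draw_h = lambda rng: (rng.betavariate(20, 20), rng.betavariate(20, 20))
p = prob_i_beats_h(draw_i, draw_h, ws=(0.5, 0.5))  # well above 0.5
```

For a risk criterion the preferable value is the smaller one, so `worst > best`
numerically and Eq. (1) still maps into $[0,1]$; the loss-based probability
(5) is obtained analogously with the inequality reversed.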
### 2.2 Aggregation Models
Below, we consider four specific forms of aggregation models, namely, linear,
product, multi-linear, and Scale Loss Score, which various authors have argued
should be used in the MCDA to support decision-making.
#### 2.2.1 Linear Model
A linear aggregation of a treatment’s effects on benefits and risks remains
the most common choice in treatment development (Mussen, Salek and Walker,
2007; Tervonen et al., 2011; Nixon et al., 2016; Marcelon et al., 2016;
Saint-Hilary et al., 2017). Under the linear model, the utility score is
computed as
$u^{{{L}}}(\boldsymbol{\xi_{i}},\boldsymbol{w}^{{L}}):=\sum_{j=1}^{n}w_{j}^{{L}}u_{j}(\xi_{i,j})$
(6)
where $w_{j}^{L}>0$ $\ \forall j$ and $\sum_{j=1}^{n}w_{j}^{L}=1$, the
superscript $L$ referring to the linear model. The expression (6) is used in
Equation (2) and Equation (4) to compare the associated linear scores for a
pair of treatments.
As an illustration of all considered aggregation models, we will use the
following example with two criteria: one benefit indexed by $1$, one risk
indexed by $2$. The linear utility score for treatment $i$ at fixed parameter
values $\theta_{i1}$, $\theta_{i2}$ takes the form
$u^{L}(\theta_{i1},\theta_{i2},w^{L}):=w^{L}u_{1}(\theta_{i1})+(1-w^{L})u_{2}(\theta_{i2}).$
(7)
As the values $u_{1}(\theta_{i1}),u_{2}(\theta_{i2})\in(0,1)$, one can
interpret $u_{1}(\theta_{i1})$ as a probability of benefit and
$1-u_{2}(\theta_{i2})$ as a probability of risk. This utility score can be
transformed into a loss score by subtracting it from one:
$l^{L}(\theta_{i1},\theta_{i2},w^{L}):=1-u^{L}(\theta_{i1},\theta_{i2},w^{L}).$
(8)
We do this because, historically, the concept of a loss function is preferred
both in statistical decision theory and in Bayesian analysis for parameter
estimation (Berger, 2011). The contours of equal linear loss score for all
values of $u_{1}(\theta_{i1})$ and $1-u_{2}(\theta_{i2})$ are given in Panel
(A) of Figure 1, using $w^{L}=0.5$ (top row) and $w^{L}=0.25$ (bottom row).
Figure 1: Contour plots for Linear (A), Product (B), Multi-Linear (C), and
SLoS (D) models with (i) two equally important criteria (top row), and (ii)
the risk criterion being twice as important (on average for the non-linear
models) as the benefit criterion (bottom row). Red lines on Panels B–D
represent the tangents at the middle point (0.5,0.5).
The contours represent the loss score for each benefit-risk pair. Lower
values of $l^{L}(\theta_{i1},\theta_{i2},w^{L})$ correspond to better
treatment benefit-risk profiles. The loss is minimised (bottom-right corner)
when the maximum possible benefit is reached ($u_{1}(\theta_{i1})=1$) with no
risk ($1-u_{2}(\theta_{i2})=0$). The contours are linear, with a constant
slope $w^{L}/(1-w^{L})$. This implies that if one treatment has a probability
of risk increased by $x\%$ compared to another, its probability of benefit
should be increased by $(1-w^{L})/w^{L}\times x\%$ to achieve the same utility
score, and this holds for all values of benefit and risk. The figure
illustrates the penalisation of the various benefit-risk criteria and allows
for a visual comparison between treatments: any two treatments lying on the
same contour line are seen as equal.
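The constant trade-off implied by the linear contours can be checked
numerically; the weight and the increments below are arbitrary illustrative
values, not taken from the paper:

```python
def linear_loss(benefit, risk, w):
    # Linear loss score of Eqs. (7)-(8): benefit = u1(theta_i1),
    # risk = 1 - u2(theta_i2), w = weight on the benefit criterion.
    return 1.0 - (w * benefit + (1.0 - w) * (1.0 - risk))

# Raising risk by x while raising benefit by (1 - w)/w * x stays on
# the same contour, wherever the starting point is:
w, x = 0.25, 0.10
base = linear_loss(0.40, 0.20, w)
shifted = linear_loss(0.40 + (1 - w) / w * x, 0.20 + x, w)
# base == shifted up to floating-point rounding
```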
The major advantage of the linear model is its intuitive interpretation: poor
efficacy can be compensated by good safety, and vice versa. However, the
linear utility score can result in the recommendation of a highly unsafe or
poorly effective treatment (Morton, 2017; Marsh et al., 2016) and,
consequently, in a counter-intuitive conclusion. Moreover, the linearity
implies that the relative tolerance to an increase in toxicity is constant for
all levels of benefit (Saint-Hilary et al., 2018). These pitfalls could be
avoided (or at least reduced) by using non-linear models (Marsh et al., 2016;
Raiffa and Keeney, 1975). Specifically, Saint-Hilary et al. (2018) advocated
two principles that a desirable benefit-risk aggregation model should have:
1. One is not interested in treatments with extremely low levels of benefit or
extremely high levels of risk (regardless of how the treatment performs on
other criteria);
2. For an equivalent absolute increase in benefit, one can tolerate a larger
risk increase when the amount of benefit is small than when it is high.
Below, we consider three models having one or both of these properties.
#### 2.2.2 Product Model
A multiplicative aggregation (known as a product model) is an alternative
method of comparing treatments’ effects on benefits and risks (Cobb and
Douglas, 1928). Under the product model, the utility score is computed as
$u^{{{P}}}(\boldsymbol{\xi_{i}},\boldsymbol{w}^{{P}}):=\prod_{j=1}^{n}u_{j}(\xi_{i,j})^{w_{j}^{{P}}}$
(9)
where the superscript $P$ refers to the product model. The expression (9) is
used in Equation (2) and Equation (4) to compare the associated product scores
for a pair of treatments.
The product utility score for treatment $i$ with two criteria at fixed
parameter values $\theta_{i1}$, $\theta_{i2}$ takes the form
$u^{P}(\theta_{i1},\theta_{i2},w^{P}):=u_{1}(\theta_{i1})^{w^{P}}\times
u_{2}(\theta_{i2})^{(1-w^{P})}.$ (10)
Similarly to the linear model, this utility score can be transformed into
a loss score by subtracting it from one:
$l^{P}(\theta_{i1},\theta_{i2},w^{P}):=1-u^{P}(\theta_{i1},\theta_{i2},w^{P})$
(11)
The contours of equal product loss score for all values of
$u_{1}(\theta_{i1})$ and $(1-u_{2}(\theta_{i2}))$ are given in Panel (B) of
Figure 1 using $w^{P}=0.5$ (top row) and $w^{P}=0.25$ (bottom row).
One advantage the product model has over the linear model is that it cannot
recommend treatments with either zero benefit or extreme risk. This is because
either of these two options would result in a score of zero for the utility
function, and as such would make it impossible for such a treatment to be
recommended. The contour lines in Panel (B) in Figure 1 demonstrate how the
product model penalises undesirable values compared to the linear model. These
contours are curved, and are bunched together tightest at points where benefit
values are low and where risk values are high. This shows how the penalisation
distinguishes this model from the linear model: under the linear model, an
increase or decrease in benefit or risk is treated equally regardless of the
marginal values of these criteria, whereas under the product model the
marginal values of the criteria directly affect the decision.
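A small illustrative check (hypothetical code, with $w^{P}=0.5$) of both properties described above: the hard veto on zero benefit, and the tighter contour spacing near undesirable extremes:

```python
def product_utility(u1, u2, w):
    """Product-model utility u^P = u1^w * u2^(1-w) for two criteria."""
    return (u1 ** w) * (u2 ** (1.0 - w))

w = 0.5
# Zero benefit (or maximal risk) forces the score to zero, so such a
# treatment can never be recommended, whatever the other criterion is:
print(product_utility(0.0, 0.95, w))  # 0.0

# The same +0.05 benefit gain is worth more where benefit is scarce:
gain_low  = product_utility(0.10, 0.8, w) - product_utility(0.05, 0.8, w)
gain_high = product_utility(0.90, 0.8, w) - product_utility(0.85, 0.8, w)
print(gain_low > gain_high)  # True
```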
#### 2.2.3 Multi-Linear Model
A multi-linear model for the aggregation of treatments’ benefits and risks
provides one more alternative for the comparison of two treatments (Raiffa and
Keeney, 1975). This model can be seen as an attempt to combine the linear and
product models. Under the multi-linear model, the utility score is computed as
$u^{{ML}}(\boldsymbol{\xi_{i}},\boldsymbol{w}^{{ML}}):=\sum_{j=1}^{n}w_{j}^{{ML}}u_{j}(\xi_{i,j})+\sum_{j=1,k>j}^{n}w^{ML}_{j,k}u_{j}(\xi_{i,j})u_{k}(\xi_{i,k})+\sum_{j=1,l>k>j}^{n}w^{ML}_{j,k,l}u_{j}(\xi_{i,j})u_{k}(\xi_{i,k})u_{l}(\xi_{i,l})+\ldots+w^{ML}_{1,2,\ldots,n}u_{1}(\xi_{i,1})u_{2}(\xi_{i,2})\cdots u_{n}(\xi_{i,n})$
(12)
where the superscript $ML$ refers to the multi-linear model, and the weights
$w^{ML}_{j,k,\ldots}$ are the weights given to the interaction terms between
criteria $j,k,\ldots$. We require all the weights in the ML model to sum to 1.
The expression (12) is used in Equation (2) and Equation (4) to compare the
associated multi-linear scores for a pair of treatments.
Considering the example with two criteria, the multi-linear utility score for
treatment $i$ at fixed parameter values $\theta_{i1}$, $\theta_{i2}$ takes the
form
$u^{ML}(\theta_{i1},\theta_{i2},w_{1}^{{ML}},w_{2}^{{ML}},w_{1,2}^{{ML}}):=w_{1}^{{ML}}u_{1}(\theta_{i1})+w_{2}^{{ML}}u_{2}(\theta_{i2})+w^{ML}_{1,2}\,u_{1}(\theta_{i1})u_{2}(\theta_{i2}).$
(13)
Note that, even under the constraint that the weights sum to one, there is one
more weight parameter than for the linear and product models. This immediately
makes the weight elicitation procedure more involved for all stakeholders. To
link the weights of the ML model with those of the competing approaches (see
more details in Section 2.3), we impose one additional constraint, so that the
number of free weight parameters is the same in all considered models (for the
purpose of the comparison in this manuscript). Specifically, we fix
$w^{ML}_{1,2}=c$ where $0\leq c\leq 1$, i.e. we fix the weight of the
interaction term. Similarly to the linear and product models, this utility
score can be transformed into a loss score by subtracting it from one:
$l^{ML}(\theta_{i1},\theta_{i2},w^{ML}):=1-u^{ML}(\theta_{i1},\theta_{i2},w^{ML})$
(14)
The contours of equal multi-linear loss score for all values of
$u_{1}(\theta_{i1})$ and $(1-u_{2}(\theta_{i2}))$, with $c=0.20$, are given in
Panel (C) of Figure 1 using $w_{1}^{ML}=0.40$ (top row) and $w_{1}^{ML}=0.15$
(bottom row).
The contour lines demonstrate an almost linear trade-off between benefit and
risk, with a slight curvature (which becomes more prominent further away from
the more desirable values), indicating a moderate penalisation of extreme
values. Thus, while this model attempts to penalise undesirable criteria
values, the effect is not as strong as in the product model, largely due to
the chosen value of the weight $w^{ML}_{1,2}$ given to the interaction term. A
moderate level of penalisation for the chosen interaction weight still allows
treatments to be recommended when there is no benefit or extreme risk, as in
the linear model. The larger the weight of the interaction term, the less
likely this is to happen.
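As an illustration (sketch code, two criteria, interaction weight fixed at $c=0.2$ as in the text), the multi-linear score of Equation (13) remains positive even at zero benefit, so such a treatment is not automatically ruled out:

```python
def multilinear_utility(u1, u2, w1, w2, w12):
    """Two-criteria multi-linear utility (Equation (13)); weights sum to 1."""
    assert abs(w1 + w2 + w12 - 1.0) < 1e-9
    return w1 * u1 + w2 * u2 + w12 * u1 * u2

c = 0.2
# A treatment with no benefit still earns credit from its safety term:
score = multilinear_utility(0.0, 0.9, 0.4, 0.4, c)
print(score)  # approximately 0.36: driven entirely by the risk criterion
# A larger interaction weight moves the model towards the product model,
# strengthening the penalty on the missing benefit.
```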
#### 2.2.4 Scale Loss Score (SLoS) Model
An alternative to the models proposed above is the Scale Loss Score (SLoS)
model, which was proposed by Saint-Hilary et al. (2018) to satisfy the two
desirable properties for an aggregation method. First of all, in contrast to
the three models above, SLoS considers a loss score, rather than a utility
score, as the output. Therefore, lower values are more desirable. Under the
SLoS model, the
loss score is computed as
$l^{{{S}}}(\boldsymbol{\xi_{i}},\boldsymbol{w}^{{S}}):=\sum_{j=1}^{n}\bigg{(}\frac{1}{u_{j}(\xi_{i,j})}\bigg{)}^{w_{j}^{{S}}}$
(15)
where the superscript $S$ refers to the SLoS model. The expression (15) is
used in Equation (3) and Equation (5) to compare the associated SLoS scores
for a pair of treatments.
Coming back to the example with two criteria, the loss score for treatment $i$
at fixed parameter values $\theta_{i1}$, $\theta_{i2}$ takes the form
$l^{S}(\theta_{i1},\theta_{i2},w^{{S}}):=\bigg{(}\frac{1}{u_{1}(\theta_{i1})}\bigg{)}^{w^{{S}}}+\bigg{(}\frac{1}{u_{2}(\theta_{i2})}\bigg{)}^{(1-w^{{S}})}.$
(16)
The contours of equal scale loss score for all values of $u_{1}(\theta_{i1})$
and $(1-u_{2}(\theta_{i2}))$ are given in Panel (D) of Figure 1 using
$w^{S}=0.5$ (top row) and $w^{S}=0.25$ (bottom row).
As with the product model, this penalisation makes it impossible for
treatments with either no benefit or extreme risk to be recommended over other
potential treatments, in contrast to the linear and multi-linear models (which
can recommend such treatments). This is because a treatment with either of
these characteristics would return an infinite loss score (regardless of the
values of any other criteria) and would therefore never be recommended. In the
figure, the white colour at extremely undesirable values (either very low
benefit or very high risk) corresponds to very high or infinite loss scores
and demonstrates the penalisation effect.
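The infinite penalisation can be seen directly in a small sketch (illustrative code; the explicit zero guard mirrors the fact that $1/u_{j}$ diverges as a PVF value approaches zero):

```python
import math

def slos_loss(u1, u2, w):
    """Two-criteria Scale Loss Score (Equation (16)); lower is better."""
    if u1 == 0.0 or u2 == 0.0:
        return math.inf  # no benefit or certain risk: infinite loss
    return (1.0 / u1) ** w + (1.0 / u2) ** (1.0 - w)

w = 0.5
print(slos_loss(0.0, 0.9, w))   # inf: such a treatment is never recommended
print(slos_loss(1.0, 1.0, w))   # 2.0: the minimum possible loss
print(slos_loss(0.5, 0.5, w))   # roughly 2.83 at the middle point (0.5, 0.5)
```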
Even though the contour plots in Figure 1 use the same weight values across
the models, the weights themselves play different roles in each model
(represented by different indices). Therefore, to provide a fair comparison of
these models, it is important to ensure that they carry (approximately) the
same relative importance of the criteria, defined through the slope of the
contour lines. We propose an approach to match the relative importance of the
models below.
### 2.3 Weight Elicitation and Mapping
Methods for quantifying subjective preferences, for example, Discrete Choice
Experiments and Swing-Weighting, have been widely studied in the literature
(Marsh et al., 2016; Mussen et al., 2007; Tervonen et al., 2011; Broekhuizen
et al., 2017). Applied to drug BRA, the majority of the weight elicitation
methods concern the linear model. In the linear model framework, the weight
assigned to one criterion is interpreted as a scaling factor which relates one
increment on this criterion to increments on all other criteria.
Note that each of the aggregation models uses its own weights,
$w^{L},w^{P},w^{ML}$, and $w^{S}$. However, in the actual analysis, regardless
of the aggregation model used, one can expect only one underlying level of
relative importance of the considered benefit and risk criteria, as the
stakeholders’ preferences between the criteria should not depend on the
methodology used for the decision-making. Therefore, when applying different
models to the same problem, it is crucial to make sure that they reflect the
same stakeholders’ preferences. We adapt the approach proposed by Saint-Hilary
et al. (2018) to achieve this. Since comprehensive work has been published,
and is ongoing, on weight elicitation for the linear model, we map the weights
$w^{L}_{j}$ (hypothetically) elicited for the linear model to the weights
$w^{P},w^{ML}$, and $w^{S}$ such that they reflect the same trade-off
preferences between the criteria.
#### 2.3.1 Mapping for Two Criteria
As described in Saint-Hilary et al. (2018), the trade-off between the criteria
can formally be represented by the slope of the tangent of the contour line
passing through the point (0.5, 0.5) (see the red lines in the contour plots
of Panels B-D in Figure 1). Therefore, the expressions for the mapping of the
linear weight to the competing models are found by equating the slopes of the
tangents to the corresponding contour lines.
We start from the setting with two criteria. As stated above, even for the two
criteria setting, the multi-linear model requires one more weight to be
specified. Therefore, we impose a constraint on the weight corresponding to
the interaction term to obtain the unique solution for the mapped weight
$w^{ML}$, specifically $w^{ML}_{1,2}=1-w_{1}^{ML}-w_{2}^{ML}=c$, where $0\leq
c\leq 1$. Note that for $c=0$, the multi-linear model reduces to the linear
one, and for $c=1$ it becomes the product of the two criteria values.
Using the utility/loss scores $z^{{P}},z^{{ML}},z^{{S}}$ obtained at point
$(u_{1}(\theta_{i1}),u_{2}(\theta_{i2}))$, the expressions of the equality of
the tangents with two criteria take the form
$\begin{array}{rcl}\frac{w^{{L}}}{1-w^{{L}}}&=&\frac{w^{{P}}}{1-w^{{P}}}\bigg{(}\frac{1}{u_{1}(\theta_{i,1})}\bigg{)}\bigg{(}\frac{z^{{P}}}{u_{1}(\theta_{i,1})^{w^{{P}}}}\bigg{)}^{\frac{1}{1-w^{{P}}}},\\
\frac{w^{{L}}}{1-w^{{L}}}&=&\frac{w_{1}^{ML}w_{2}^{ML}+z^{ML}-z^{ML}(1-c)}{\big{(}w_{2}^{ML}+cu_{1}(\theta_{i,1})\big{)}^{2}},\\
\frac{w^{{L}}}{1-w^{{L}}}&=&\frac{{w^{S}}}{1-{w^{S}}}\left(z^{S}-u_{1}(\theta_{i1})^{-{w^{S}}}\right)^{\frac{{w^{S}}-2}{1-{w^{S}}}}\times\
u_{1}(\theta_{i1})^{-({w^{S}}+1)},\end{array}$ (17)
where the slope for the linear model is given on the left-hand side, and the
slopes for the product, multi-linear and SLoS models are given on the
right-hand side, respectively.
Note, however, that the slope of the tangent of the contours for the linear
model is constant for all parameter values and is defined by the weights
$w^{L}_{j}$ only, while the slopes for the competing models change with the
values of the criteria. For the purpose of the weight mapping, we interpret
$w^{L}_{j}$ as an average relative importance of each criterion over the
others, and match the slopes of the tangents to the corresponding contours at
the middle point, $u_{1}(\theta_{i1})=u_{2}(\theta_{i2})=0.5$ (Saint-Hilary et
al., 2018). Then, the equalities above reduce to
$\begin{array}{rcl}w^{{L}}&=&w^{{P}},\\
{w^{{L}}}&=&{w^{{ML}}}+c/2,\\
\frac{w^{{L}}}{1-w^{{L}}}&=&\frac{w^{S}}{1-w^{S}}\cdot
2^{(2w^{S}-1)}.\end{array}$ (18)
Therefore, the product weight coincides with the linear weight at the given
middle mapping point. For the SLoS model, the weight mapping does not have an
analytical solution, but the approximate value of ${w}^{S}$ can be obtained by
line search. Figure 2 shows the mapping from the linear model to the multi-
linear and SLoS models. It demonstrates how the value for the linear model
(x-axis) can be used to find the respective weights for the multi-linear and
SLoS models on the y-axes.
Figure 2: Weight mapping from the linear model to the multi-linear model
(left) and to the SLoS model (right).
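The line search for $w^{S}$ in Equation (18) can be sketched with a simple bisection (illustrative code; it relies on the left-hand side of the SLoS equality being increasing in $w^{S}$ on $(0,1)$):

```python
def map_linear_to_slos(w_lin, tol=1e-10):
    """Solve w/(1-w) * 2**(2*w - 1) = w_lin/(1 - w_lin) for the SLoS weight w."""
    target = w_lin / (1.0 - w_lin)
    f = lambda w: w / (1.0 - w) * 2.0 ** (2.0 * w - 1.0) - target
    lo, hi = 1e-9, 1.0 - 1e-9  # f is negative at lo and positive at hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

print(round(map_linear_to_slos(0.50), 3))  # 0.5: equal weights map to themselves
print(round(map_linear_to_slos(0.25), 2))  # about 0.3, cf. the SLoS row of Table 3
```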
One can note that, for the multi-linear model, the proposed mapping process
may result in negative mapped weight values. This is a consequence of how the
weight mapping function is derived in the two-criteria case: if the value of a
weight under the linear model is less than half the value of $c$, then it maps
to a negative value (which, in theory, would give a criterion a negative
importance, which is impossible) in order to reflect the same relative
importance as induced by the linear model. Intuitively, if the interaction
term already contributes more to the importance of one of the criteria in the
interaction, the model needs to subtract the “excessive” importance from the
weight corresponding to this criterion standing alone. Whilst this effect can
be negated by setting an upper limit on the values $c$ can take, this in turn
limits the effect of the interaction terms and makes the model more similar to
the linear model. This is demonstrated in Figure 2 for $c=0.2$, where any
linear-model weight of 0.1 or less is mapped to 0 in the multi-linear model,
rather than to a negative value.
A proof of the above derivations is given in the Supplementary Material.
#### 2.3.2 Mapping for Setting with More Than Two Criteria
The derivation above concerns the setting with two criteria only, but can be
directly extended for the product and SLoS models. Specifically, one can apply
the proposed mapping function marginally to each of the weights in settings
with more than two criteria. This implies that the weights are mapped with
respect to the importance of all other criteria rather than a single benefit
(or risk) (Saint-Hilary et al., 2018).
The extension for the multi-linear model, however, is less straightforward.
Generally, it would be a much more involved procedure to elicit weights for
all the interaction terms, as their number increases noticeably when more than
two criteria are considered. Specifically, in the case study considered in
Section 3, there are 4 criteria resulting in 11 interaction terms. Following
the two-criteria setting, we suggest fixing the total weight attributed to all
the interactions to be equal to $c=0.2$. Then, the ML model for the setting
with 4 criteria takes the form
$u^{{ML}}(\boldsymbol{\xi_{i}},\boldsymbol{w}^{{ML}}):=\sum_{j=1}^{n}w_{j}^{{ML}}u_{j}(\xi_{i,j})+\frac{c}{2^{n}-n-1}\sum_{j=1,k>j}^{n}u_{j}(\xi_{i,j})u_{k}(\xi_{i,k})+\frac{c}{2^{n}-n-1}\sum_{j=1,l>k>j}^{n}u_{j}(\xi_{i,j})u_{k}(\xi_{i,k})u_{l}(\xi_{i,l})+\ldots+\frac{c}{2^{n}-n-1}u_{1}(\xi_{i,1})u_{2}(\xi_{i,2})\cdots u_{n}(\xi_{i,n})$
(19)
where the fraction $\frac{c}{2^{n}-n-1}$ ensures that the weights of all the
interaction terms sum to $c$, split equally between the interaction terms. To
calculate the individual weights $w_{j},j=1,\ldots,n$, a mapping to the linear
weights can again be used. In order for the weights to sum to 1, the
transformation $w_{j}^{{ML}}=w_{j}^{{L}}-c/n$ can be applied. For $n=2$, this
reduces to the corresponding mapping in Equation (18). While this procedure
does not guarantee the equality of the slopes of the tangents, it emphasises
the potential challenge associated with the use of the multi-linear model that
should be taken into account when considering it.
## 3 Case study
In this section, the performance of the four aggregation models is illustrated
in the setting of an actual case study. This provides insight into how the
various models perform, and what differences in decision-making they induce
when applied to real-life data. The case study analyses the effects of two
treatments (Venlafaxine and Fluoxetine), compared to a placebo, in the
treatment of depression. It uses data from Nemeroff (2007), and expands on the
studies conducted by Tervonen et al. (2011) and Saint-Hilary et al. (2017).
Fluoxetine and Venlafaxine are both used to treat depression. Here, the
benefit criterion is the treatment response (a reduction from the baseline
Hamilton Depression Rating Scale score of at least 50%), and the three risk
criteria are nausea, insomnia and anxiety.
Table 1 shows the outcomes of the trial for the two treatments and the
placebo.
                       Venlafaxine   Fluoxetine   Placebo
  Treatment response   51/96         45/100       37/101
  Nausea               40/100        22/102       8/102
  Insomnia             22/100        15/102       14/102
  Anxiety              10/100        7/102        1/102
Table 1: Number of events and number of patients for each criterion for
Venlafaxine, Fluoxetine and Placebo.
For all criteria, we approximate the distributions of the event probabilities
by Beta distributions $\mathcal{B}(a,b)$, with $a$ = number of occurrences and
$b$ = (number of patients $-$ number of occurrences) of the considered event
(response or adverse event), assuming Beta(0,0) priors. We generated 100,000
samples from each distribution. These samples are then used to approximate the
distributions of the linear partial value functions (PVFs) as defined in
equation (1) for all criteria and all treatment arms, with the following most
and least preferred probabilities of occurrence $\xi^{\prime}_{j}$ and
$\xi^{\prime\prime}_{j}$:
• Most and least preferable values $\xi^{\prime}_{j}=0.8$ and
$\xi^{\prime\prime}_{j}=0.2$ for the response;
• Most and least preferable values $\xi^{\prime}_{j}=0$ and
$\xi^{\prime\prime}_{j}=0.5$ for the adverse events.
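The sampling step above can be sketched as follows (illustrative NumPy code; it assumes the linear PVF of Equation (1) is a clipped linear rescaling between the least and most preferred values, and uses the Venlafaxine counts from Table 1):

```python
import numpy as np

rng = np.random.default_rng(1)

def pvf_samples(events, n, best, worst, size=100_000):
    """Draw from the Beta(events, n - events) posterior (flat Beta(0,0) prior)
    and map through a linear PVF clipped to [0, 1]."""
    theta = rng.beta(events, n - events, size=size)
    return np.clip((theta - worst) / (best - worst), 0.0, 1.0)

# Venlafaxine treatment response: 51/96, best/worst probabilities 0.8 / 0.2.
u1 = pvf_samples(51, 96, best=0.8, worst=0.2)
# Venlafaxine nausea: 40/100, best/worst probabilities 0 / 0.5 (lower is better).
u2 = pvf_samples(40, 100, best=0.0, worst=0.5)
# The posterior PVF means land close to the corresponding rows of Table 2;
# the nausea PVF is clipped at 0, producing the bold 0.20 (0.00, 0.39) entry.
print(round(float(u1.mean()), 2), round(float(u2.mean()), 2))
```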
                                   Venlafaxine                  Fluoxetine         Placebo
  Treatment response $\xi_{i,1}$   0.52 (0.42,0.62)             0.45 (0.35,0.55)   0.37 (0.28,0.46)
  $u_{1}(\xi_{i,1})$               0.53 (0.37,0.70)             0.42 (0.26,0.58)   $\mathbf{0.28\ (0.13,0.44)}$
  Nausea $\xi_{i,2}$               0.40 (0.31,0.50)             0.22 (0.14,0.30)   0.08 (0.04,0.14)
  $u_{2}(\xi_{i,2})$               $\mathbf{0.20\ (0.00,0.39)}$   0.57 (0.40,0.72)   0.84 (0.72,0.93)
  Insomnia $\xi_{i,3}$             0.22 (0.15,0.31)             0.15 (0.09,0.22)   0.14 (0.08,0.21)
  $u_{3}(\xi_{i,3})$               0.56 (0.39,0.71)             0.71 (0.56,0.83)   0.73 (0.58,0.84)
  Anxiety $\xi_{i,4}$              0.10 (0.05,0.17)             0.07 (0.03,0.13)   0.01 (0.00,0.04)
  $u_{4}(\xi_{i,4})$               0.80 (0.67,0.90)             0.86 (0.75,0.94)   0.98 (0.93,1.00)
Table 2: Mean (95$\%$ credible interval) of the Beta posterior distributions
of benefit and risk parameters and of the corresponding PVFs for Venlafaxine,
Fluoxetine and Placebo (values in bold correspond to those leading to notable
differences between models).
This case study considers three different weighting combinations, which were
used under the linear model by Saint-Hilary et al. Saint-Hilary G, Cadour S,
Robert V, and Gasparini M (2017). These sets of weights correspond to three
different scenarios of the relative importance of the criteria for the
stakeholders. The first scenario reflects the case when all four criteria are
equally important. The second scenario corresponds to the benefit criterion
having more relative importance than all risk criteria together. The third
scenario can be considered as a “safety first” scenario, in which each risk
criterion has a higher weight than the benefit criterion. As discussed in
Section 2.3, the weights of the criteria for the product, multi-linear and
SLoS models are obtained by mapping. Note, again, that while the multi-linear
model might not induce exactly the same average relative importance of the
criteria, the proposed procedure controls the contribution of the interaction
terms in the decision at the fixed level of $c=0.20$, and is therefore used
for the sake of simplicity. The mapped weights for each of the three scenarios
are presented in Table 3.
                     Scenario 1                 Scenario 2                 Scenario 3
  Model          $w_{1}$ $w_{2}$ $w_{3}$ $w_{4}$   $w_{1}$ $w_{2}$ $w_{3}$ $w_{4}$   $w_{1}$ $w_{2}$ $w_{3}$ $w_{4}$
  Linear         0.25    0.25    0.25    0.25      0.58    0.11    0.15    0.15      0.18    0.28    0.25    0.29
  Product        0.25    0.25    0.25    0.25      0.58    0.11    0.15    0.15      0.18    0.28    0.25    0.29
  Multi-Linear   0.20    0.20    0.20    0.20      0.53    0.06    0.10    0.10      0.13    0.23    0.20    0.24
  SLoS           0.30    0.30    0.30    0.30      0.56    0.16    0.21    0.21      0.24    0.33    0.30    0.34
Table 3: Mapped weights for each of the three scenarios.
Three pairwise comparisons are made: Venlafaxine against Fluoxetine,
Venlafaxine against Placebo, and Fluoxetine against Placebo. We consider that
one treatment is recommended over another if the probabilities defined in (4)
or (5) are greater than $\psi=0.8$. The probabilities of recommendations under
all three scenarios and for each aggregation model are given in Table 4.
Probability of being recommended   Venlafaxine over   Venlafaxine over   Fluoxetine over
as best treatment                  Fluoxetine         Placebo            Placebo
  Scenario 1
    Linear                         1.7$\%$            $<$0.1$\%$         7.2$\%$
    Product                        1.7$\%$            1.6$\%$            37.0$\%$
    Multi-Linear                   1.7$\%$            $<$0.1$\%$         9.1$\%$
    SLoS                           1.8$\%$            3.7$\%$            47.3$\%$
  Scenario 2
    Linear                         48.0$\%$           64.7$\%$           66.9$\%$
    Product                        42.6$\%$           74.9$\%$           80.4$\%$
    Multi-Linear                   46.3$\%$           63.0$\%$           66.3$\%$
    SLoS                           36.6$\%$           72.5$\%$           81.4$\%$
  Scenario 3
    Linear                         0.6$\%$            0$\%$              2.1$\%$
    Product                        0.5$\%$            0.1$\%$            18.5$\%$
    Multi-Linear                   0.6$\%$            0$\%$              3.0$\%$
    SLoS                           0.6$\%$            0.6$\%$            30.1$\%$
Table 4: Probability of a treatment being recommended as the best treatment
against another for the three pairwise comparisons, using each of the four
aggregation models, under each of the three weighting scenarios.
Under the first scenario with the equal weights for all criteria, the
treatment with preferable risk criteria values was more likely to be
recommended as the three risk criteria altogether have a greater weight than
the one benefit criterion. For the comparison between Venlafaxine and
Fluoxetine, the probability that Venlafaxine has better benefit-risk
characteristics is around 1.7-1.8$\%$ under all four models. For the
comparison between Venlafaxine and the placebo, there is only a minor
difference in the probability that Venlafaxine has better benefit-risk
characteristics ($<$ 0.1 $\%$ in the linear and multi-linear models, 1.6$\%$
in the product model and 3.7$\%$ in the SLoS model), not enough of a
difference to change the recommendation. However, when comparing Fluoxetine to
the placebo, a notable difference is observed. Under the linear and multi-
linear models, the probability of Fluoxetine having the better benefit-risk
characteristics is around 7-10$\%$ (suggesting the placebo is much more
preferable), whilst this rises to 37$\%$ under the product model and 47.3$\%$
under the SLoS model (suggesting near-parity of the treatments). This occurs
due to the penalisation of low benefit criterion values for the placebo, whose
95% credible interval includes values close to zero (in bold in Table 2).
These low values are harshly penalised under the product and SLoS models, as
they suggest that the placebo induces no treatment benefit with a
non-negligible probability. The linear model does not account for this and
strongly favours the placebo, while the multi-linear model does not penalise
these values strongly.
Under the second scenario, the treatment response is considered as the most
important factor, and is given a weighting greater than that of the three risk
criteria combined. For the comparison between Venlafaxine and Fluoxetine, both
the product and SLoS models indicate that Venlafaxine has inferior
benefit-risk characteristics (42.6$\%$ and 36.6$\%$ probability of being
better, respectively). More moderate results are observed with both the linear
model, which gives a probability of 48.0$\%$, and the multi-linear model,
which gives a probability of 46.3$\%$. Again, the difference between the
probability of the linear model and those of the product and SLoS models is
due to the penalising effects of the latter. This occurs because the credible
interval of the nausea risk criterion PVF contains zero for Venlafaxine (in
bold in Table 2), which causes the product and SLoS models to recommend
Fluoxetine more often than Venlafaxine, despite the weights giving preference
to the treatment response (which is greater with Venlafaxine). With the
multi-linear model, the penalisation of the undesirable nausea criterion is
not as strong as in the product or SLoS models, as the weight mapping induces
a drop from 0.11 to 0.06 in the weight given to the corresponding individual
term, and the effect of the interaction terms is not enough to overcome this.
For the comparison between Venlafaxine and the placebo, the probability that
Venlafaxine has better benefit-risk characteristics is between 63-75$\%$
across the four models. The product and SLoS models both penalise the low
benefit value of the placebo, which is why they are both more likely to
recommend Venlafaxine than the other two models. Additionally, the product and
SLoS models both also penalise the nausea criterion value of Venlafaxine, and
due to the increased weighting given to it by the SLoS weight mapping, the
product model is more likely than the SLoS model to recommend Venlafaxine.
For the comparison between Fluoxetine and the placebo, the probability that
Fluoxetine has better benefit-risk characteristics is around 65-80$\%$ under
all four models, with the probability of Fluoxetine being preferable
increasing as the methods increase the penalisation applied to the placebo’s
lack of benefit effect. The stronger penalisation occurs under the product and
SLoS models, which is why both are more likely to recommend Fluoxetine.
Across all three comparisons, the multi-linear model is always slightly less
likely to recommend the treatment with the greater benefit value than the
linear model. As this is the scenario where the benefit criterion is
considered to be the most important, this shows that the weight splitting with
the multi-linear model induces a loss of the preferences that were given when
the weights were originally set out for the linear model, illustrating some of
the problems theorised in the methods section.
Under the third scenario, a “safety first” approach is adopted, giving the
risk factors a higher weighting. The probability that Venlafaxine has better
benefit-risk characteristics is around 0.5-0.6$\%$ when it is compared to
Fluoxetine and around 0-0.6$\%$ when it is compared to placebo, under all four
models. For the comparison between Fluoxetine and the placebo, the probability
that Fluoxetine has better benefit-risk characteristics is around 2.1-3.0$\%$
for the linear and multi-linear models, whilst this increases to 18.5$\%$
under the product model and 30.1$\%$ under the SLoS model. This increase
occurs for the same reasons outlined for the same comparison in scenario 1:
The penalisation of the benefit criterion for the placebo, with its 95%
credible interval including low values (in bold in Table 2). The linear model
does not account for this and strongly favours the placebo, while the
multi-linear model does not penalise these values sufficiently and still
favours the placebo.
Overall, this case study provides a number of important observations shedding
light on the differences in aggregation performance. Firstly, the effects of
extremely undesirable outcomes (those highlighted in bold in Table 2) are more
significantly and consistently penalised in the product and SLoS models (the
penalisation is stronger in the SLoS model than in the product model, although
they give the same recommendation for every comparison). These examples also
show that the models provide similar recommendations when one treatment is
clearly preferable to its competitor. Lastly, the weight splitting in the
multi-linear model induces a change in the relative importance between
criteria that may not always reflect the chosen weights as well as the other
models do, as highlighted in Scenario 2. This makes it less appealing than the
other models.
To draw further conclusions regarding the differences between models, we
conduct a comprehensive simulation study covering various scenarios and many
different realisations of each.
## 4 Simulation Study
### 4.1 Setting
To evaluate the performances of the four aggregation models, a comprehensive
simulation study covering a wide range of possible clinical cases is
conducted. This allows us to investigate many scenarios and their various
realisations rather than a single dataset as in the case study. The simulation
study is performed in a setting with two treatments, named $T_{1}$ and
$T_{2}$, that are compared in randomised clinical trials with $N=100$ patients
allocated to each treatment. Each treatment is evaluated based on two
criteria: one benefit ($j=1$) and one risk ($j=2$). We assume that benefit
events are desirable (e.g. treatment response), while risk events should be
avoided (e.g. adverse event), with $\theta_{ij}$ being their true probability
of occurrence for each treatment $i=1,2$. The PVFs are defined as
$u_{1}(\theta_{i1})=\theta_{i1}$ and $u_{2}(\theta_{i2})=1-\theta_{i2}$. The
two criteria are deemed equally important and are therefore given equal
weights. We start with the case of uncorrelated criteria and
explore the effect of the presence of correlations in Section 4.4. The range
of true values of the benefit and risk criteria and the corresponding
simulation scenarios are given in Figure 3.
Figure 3: Simulation scenarios for the trial with two criteria.
Figure 3 shows all the different values that the benefit and risk criteria can
take for both $T_{1}$ and $T_{2}$, where black squares correspond to the pairs
of criterion values for $T_{1}$ and white circles correspond to the pairs of
criterion values for $T_{2}$. For each of the nine fixed characteristics of
$T_{1}$, all 81 possible values of $T_{2}$, with
$\theta_{2,1},\theta_{2,2}\in\{0.1,0.2,\ldots,0.9\}$, are considered,
resulting in 729 scenarios. The fixed characteristics for $T_{1}$ are as
follows:
Scenario 1: $T_{1}=(\theta_{1,1}=0.5,\theta_{1,2}=0.5)$
Scenario 2: $T_{1}=(\theta_{1,1}=0.3,\theta_{1,2}=0.7)$
Scenario 3: $T_{1}=(\theta_{1,1}=0.7,\theta_{1,2}=0.3)$
Scenario 4: $T_{1}=(\theta_{1,1}=0.1,\theta_{1,2}=0.1)$
Scenario 5: $T_{1}=(\theta_{1,1}=0.9,\theta_{1,2}=0.9)$
Scenario 6: $T_{1}=(\theta_{1,1}=0.3,\theta_{1,2}=0.3)$
Scenario 7: $T_{1}=(\theta_{1,1}=0.7,\theta_{1,2}=0.7)$
Scenario 8: $T_{1}=(\theta_{1,1}=0.9,\theta_{1,2}=0.1)$
Scenario 9: $T_{1}=(\theta_{1,1}=0.1,\theta_{1,2}=0.9)$,
where $\theta_{1,j}$ is the true value of criterion $j$ for $T_{1}$.
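The scenario grid above can be enumerated directly. The sketch below (names such as `t1_scenarios` are our own, not from the paper) reproduces the count of 729 scenarios:

```python
import itertools

# Fixed T1 characteristics for the nine scenarios: (theta_benefit, theta_risk).
t1_scenarios = {
    1: (0.5, 0.5), 2: (0.3, 0.7), 3: (0.7, 0.3),
    4: (0.1, 0.1), 5: (0.9, 0.9), 6: (0.3, 0.3),
    7: (0.7, 0.7), 8: (0.9, 0.1), 9: (0.1, 0.9),
}

# For each fixed T1, T2 ranges over the 9 x 9 grid {0.1, ..., 0.9}^2.
grid = [round(0.1 * k, 1) for k in range(1, 10)]
scenarios = [
    (s, t1, t2)
    for s, t1 in t1_scenarios.items()
    for t2 in itertools.product(grid, repeat=2)
]
print(len(scenarios))  # 729 = 9 fixed T1 settings x 81 T2 combinations
```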
### 4.2 Data Generation and Comparison Procedure
The following Bayesian procedure is used for the simulation study:
* Step 1: Simulate randomised clinical trials with two treatments $T_{1}$ and
$T_{2}$, each with two uncorrelated criteria and a sample size of $N=100$ in
each treatment arm.
* Step 2: Derive the posterior distributions from the simulated data assuming a
degenerate prior, Beta(0,0), to reduce the influence of the prior
distribution. Draw 2000 samples from each posterior distribution of the
criteria and obtain the corresponding empirical distribution of the PVF.
* Step 3: Use the posterior distributions of the PVF in each of the aggregation
models given in Equations 2 and 3 to compute the probability in Equations 4
and 5 that treatment $T_{1}$ has the better benefit-risk profile,
$\mathcal{P}_{X}^{1,2}$ (for some model $X$), and compare it to the threshold
value $\psi=0.8$. If $\mathcal{P}_{X}^{1,2}>0.8$, then treatment $T_{1}$ is
recommended. If $\mathcal{P}_{X}^{1,2}<0.2$, then treatment $T_{2}$ is
recommended. If $0.2\leq\mathcal{P}_{X}^{1,2}\leq 0.8$, then neither treatment
is recommended.
* Step 4: Repeat Steps 1-3 for 2500 simulated trials.
* Step 5: Estimate the probability that each treatment is recommended,
$\mathbb{P}\big{(}\mathcal{P}_{X}^{1,2}>0.8\big{)}$, by its proportion over
the 2500 simulated trials.
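The steps above can be sketched for a single model as follows. This is an illustrative implementation under stated assumptions: only a linear aggregation of the two PVFs with equal weights is shown, a proper Beta(eps, eps) prior with small eps stands in for the degenerate Beta(0,0) prior, and function names such as `one_trial` are ours:

```python
import numpy as np

rng = np.random.default_rng(1)

def utility_linear(theta_benefit, theta_risk, w=(0.5, 0.5)):
    # PVFs: u1 = theta_benefit, u2 = 1 - theta_risk, combined with equal weights.
    return w[0] * theta_benefit + w[1] * (1.0 - theta_risk)

def one_trial(theta1, theta2, n=100, n_post=2000, eps=1e-4):
    # Steps 1-3 for a single simulated trial under the linear model.
    post = {}
    for arm, (tb, tr) in {"T1": theta1, "T2": theta2}.items():
        xb = rng.binomial(n, tb)  # observed benefit events
        xr = rng.binomial(n, tr)  # observed risk events
        ub = rng.beta(eps + xb, eps + n - xb, n_post)  # posterior draws, benefit
        ur = rng.beta(eps + xr, eps + n - xr, n_post)  # posterior draws, risk
        post[arm] = utility_linear(ub, ur)
    p12 = np.mean(post["T1"] > post["T2"])  # P_L^{1,2}
    if p12 > 0.8:
        return "T1"
    if p12 < 0.2:
        return "T2"
    return "none"

# Steps 4-5: repeat over many trials and estimate P(P_L^{1,2} > 0.8).
decisions = [one_trial((0.7, 0.3), (0.3, 0.7)) for _ in range(200)]
print(sum(d == "T1" for d in decisions) / len(decisions))  # near 1: T1 dominates
```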
The aggregation models will be compared using
$\mathbb{P}\big{(}\mathcal{P}_{X}^{1,2}>0.8\big{)}$, which is the probability
that the model $X$ recommends $T_{1}$ over $T_{2}$, and
$\phi_{X-Y}=\mathbb{P}\big{(}\mathcal{P}_{X}^{1,2}>0.8\big{)}-\mathbb{P}\big{(}\mathcal{P}_{Y}^{1,2}>0.8\big{)}$,
which is the difference between the probability that the model $X$ recommends
$T_{1}$ and the probability that the model $Y$ recommends $T_{1}$. The value
of $\phi$ represents a difference between two probabilities, and can therefore
take the range of values $-1\leq\phi\leq 1$. If $0<\phi\leq 1$, then the model
$X$ recommends $T_{1}$ more often than model $Y$. If $-1\leq\phi<0$, then the
model $Y$ recommends $T_{1}$ more often than model $X$. If $\phi=0$, then the
two models make the recommendations with the same probability. Note that, for
the ML model, we adopt $c=0.20$ as in the case study above.
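As a small worked example of the comparison metric (the numbers below are illustrative, echoing the magnitudes quoted later in Section 4.3):

```python
def phi(p_x, p_y):
    # phi_{X-Y}: difference between the probabilities that models X and Y
    # recommend T1; positive values mean X recommends T1 more often than Y.
    assert 0.0 <= p_x <= 1.0 and 0.0 <= p_y <= 1.0
    return p_x - p_y

# Illustrative values: model X recommends T1 in 90% of trials, model Y in 70%.
print(round(phi(0.90, 0.70), 2))  # 0.2 -> X recommends T1 more often than Y
```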
### 4.3 Results
The results are presented in Figures 4 and 5. The first seven scenarios for
treatment $T_{1}$ referred to above are presented in the rows labeled 1-7.
Each graph corresponds to fixed expected probabilities of event for treatment
$T_{1}$, and each cell corresponds to a combination of expected probabilities
of benefit and risk for $T_{2}$. When reference is made to the “diagonal”,
this refers to the diagonal line that runs from the bottom left corner of the
graph to the top right. In all scenarios, all models agree to recommend
$T_{1}$ when it is undoubtedly better than $T_{2}$ i.e. when $T_{1}$ is more
effective and less harmful than $T_{2}$ (or to recommend $T_{2}$ when $T_{1}$
is indisputably worse, i.e. less effective and more toxic). For this reason,
the results for scenarios 8 and 9 are not presented here, but are included in
the Supplementary Material for completeness.
The probabilities $\mathbb{P}\big{(}\mathcal{P}_{L}^{1,2}>0.8\big{)}$ (red),
$\mathbb{P}\big{(}\mathcal{P}_{P}^{1,2}>0.8\big{)}$ (purple),
$\mathbb{P}\big{(}\mathcal{P}_{ML}^{1,2}>0.8\big{)}$ (orange), and
$\mathbb{P}\big{(}\mathcal{P}_{S}^{1,2}>0.8\big{)}$ (blue) are shown in Figure 4,
and all six pairwise comparisons of these probabilities are given in Figure 5.
From left to right, Figure 5 shows
$\phi_{P-L},\phi_{ML-L},\phi_{ML-P},\phi_{S-L},\phi_{S-P}$ and $\phi_{S-ML}$.
Figure 4: Probability that the model recommends $T_{1}$ over $T_{2}$,
$\mathbb{P}\big{(}\mathcal{P}^{1,2}>0.8\big{)}$, for Scenarios 1-7 for the linear
(red), product (purple), multi-linear (orange), and SLoS (blue) models.
Figure 5: Results of the six pairwise comparisons of the four AM, where the
colour of a cell indicates which AM recommended $T_{1}$ more often than the
comparator AM (the deeper the colour, the greater the difference in
recommendation).
In Figure 5, the colour of a cell indicates which aggregation model recommends
treatment $T_{1}$ with higher probability than the other. For instance, red
cells in the first column of Figure 5, showing $\phi_{P-L}$, indicate that,
when the characteristics of $T_{2}$ take the corresponding values, the linear
model recommends $T_{1}$ more often than the product model.
In Scenario 1, the four models are in agreement to recommend $T_{1}$ when
$T_{2}$ corresponds to less benefit and more risk. On the diagonal, the
product and SLoS models both favour $T_{1}$ over $T_{2}$ more often than the
linear or multi-linear models when $T_{2}$ has either extremely high benefit
and risk (top right corner) or extremely low benefit and risk (bottom left
corner). This occurs due to the penalisation of extremely low benefit
and extremely high risk by the product and SLoS models. Comparing product and
SLoS models for these values of benefit-risk, SLoS favours $T_{1}$ over
$T_{2}$ more often for low but not boundary values of the criteria. This
occurs due to the SLoS model penalising the undesirable qualities more than
the product one (this is similar to trends observed in the case study).
Compared to the linear model, the multi-linear model recommends $T_{1}$ over
$T_{2}$ with higher probability when $T_{2}$ has either higher benefit and
higher risk, or lower benefit and lower risk due to the interaction term
providing mild penalisation of extremely high risk or extremely low benefit.
However, there is (in most cases), a greater magnitude of difference between
the SLoS and product models than between the linear and multi-linear models.
For example, when $T_{2}$ has criteria values $\theta_{2,1}=0.2$ (benefit),
$\theta_{2,2}=0.1$ (risk) (lower benefit, lower risk), $T_{1}$ is recommended
in 2$\%$ of the trials under the linear model, in 70$\%$ under the product, in
8$\%$ under the multi-linear and in 90$\%$ under SLoS. This tells us that the
product and SLoS models do not consider the decrease in risk worth the
decrease in benefit that comes with it (the SLoS model more so than the
product model), whilst the linear and multi-linear models both consider it
acceptable.
Considering the case when $\theta_{2,1}=0.7$, $\theta_{2,2}=0.7$ (higher
benefit and higher risk compared to $T_{1}$), $T_{1}$ is recommended in
20$\%$ of the trials under the linear model compared to 49$\%$ for product
model, 25$\%$ for the multi-linear model and 61$\%$ for SLoS model. This tells
us that the product and SLoS models do not consider the increase in benefit
worth the increase in risk that comes with it (again, this effect is
stronger in the SLoS model than in the product model), whilst the linear and
multi-linear models both consider it acceptable (the linear model more so than the
multi-linear model). Similar observations can be made in Scenarios 2-3.
However, a distinguishing difference between the models under Scenario 1 can
be found when $T_{2}$ has the criteria $\theta_{2,1}=0.9$, $\theta_{2,2}=0.7$.
In this comparison, $T_{1}$ is recommended in 0$\%$ of the trials under the
linear model compared to 11$\%$ for product model, 0$\%$ for the multi-linear
model and 30$\%$ for SLoS model. Meanwhile, $T_{2}$ is recommended in 92$\%$
of the trials under the linear model compared to 32$\%$ for product model,
84$\%$ for the multi-linear model and 13$\%$ for SLoS model. This shows that
the linear, product and multi-linear models are all more likely to recommend
$T_{2}$, whilst only the SLoS model is more likely to recommend $T_{1}$. This
occurs due to the different strengths of penalisation between the models, and
only the SLoS model does not consider this an acceptable trade-off. This shows
that the product model and the SLoS model do not always make the same
recommendations, and that these differences can sometimes be quite large.
In Scenario 4, where $T_{1}$ has extremely low benefit and risk, it is very
rarely recommended by either the product or SLoS models, whereas it is
recommended by both the linear and multi-linear models in cases where $T_{2}$
has some increase in benefit but a higher increase in risk. This occurs
because the SLoS and product models penalise extremely low benefit so severely
that the level of risk has almost no impact on the recommendation. The multi-
linear model also penalises the extremely low benefit, but on a much smaller
scale. For example, for $T_{2}$ with criteria values $\theta_{2,1}=0.6$,
$\theta_{2,2}=0.7$, $T_{1}$ is recommended with probability 68$\%$ under the
linear model, never recommended under the product model, 41$\%$ under the
multi-linear model and never recommended under the SLoS model. This shows that
the product and SLoS models reflect the desirable properties outlined above:
that we are not interested in the risk criterion value of a treatment if the
benefit criterion value is small/zero, whilst both the linear and multi-linear
models do not reflect this (although the multi-linear model does somewhat
penalise this). Similar results are observed in Scenario 5, where $T_{1}$ has
extreme risk and extreme benefit. The SLoS and product models will recommend
$T_{2}$ if it has lower risk than $T_{1}$ as long as it has some benefit,
whereas the linear model and the multi-linear model will recommend $T_{1}$
over $T_{2}$ if the benefit of $T_{2}$ decreases by a greater amount than the
risk.
It should be noted that poor recommendations can be made under the product and
SLoS models if both $T_{1}$ and $T_{2}$ have a risk criterion value of 0.9, as
the strength of the penalisation of the undesirable criteria overpowers the
effect of the benefit. For example, in scenario 5 where $T_{2}$ has criteria
values $\theta_{2,1}=0.8$, $\theta_{2,2}=0.9$ (same risk criterion value as
$T_{1}$ but a lower benefit criterion value), $T_{1}$ is recommended with
probability 75$\%$ under the linear model, 27$\%$ under the product model,
68$\%$ under the multi-linear model and 23$\%$ under the SLoS model (this
effect is stronger in the SLoS model than in the product model due to its
harsher penalisation of the undesirable criteria). The product and SLoS models
did, however, recommend $T_{2}$ with probabilities 13$\%$ and 17$\%$
respectively, showing that they still recommend the better treatment $T_{1}$
more often than $T_{2}$, but that these two models hardly discriminate between
very unsafe drugs (for comparison, both
the linear and multi-linear models only recommended $T_{2}$ with probability
1$\%$ each).
In Scenarios 6-7, all AM recommend $T_{1}$ over $T_{2}$ when $T_{2}$ is
unarguably worse (similarly they all recommend $T_{2}$ over $T_{1}$ when
$T_{1}$ is unarguably worse). Along the diagonal, the SLoS model recommends
$T_{1}$ over $T_{2}$ more often than the other AM when $T_{2}$ has either
extreme low benefit and extreme low risk, or extreme high benefit and extreme
high risk, compared to $T_{1}$ (although the product model recommends $T_{1}$
only a slightly smaller proportion of times than the SLoS model). Again, this
is the result of the penalisation of extremely low benefit or extremely high
risk criteria. Similarly, the multi-linear model recommends $T_{1}$ over
$T_{2}$ more often than the linear model in the same circumstances. For
example, in Scenario 6, when $T_{2}$ has criteria values $\theta_{2,1}=0.2$,
$\theta_{2,2}=0.2$ (lower benefit and lower risk), $T_{1}$ is recommended with
probability 21$\%$ under the linear model, 59$\%$ under the product model,
28$\%$ under the multi-linear model and 68$\%$ under the SLoS model. This
shows how the different levels of penalisation affect the recommendations,
where the stronger the penalisation of the undesirable low benefit criterion
value, the more likely an AM is to recommend $T_{1}$, and is the reason why
there is such a large difference between the linear and SLoS models
recommendations.
Overall, the simulation study has shown that, when the two criteria have
equal relative importance, SLoS penalises extremely low benefit and extremely
high risk criteria the most, whilst the product model penalises these
moderately, acting as a sort of middle ground between the linear and SLoS
models. The multi-linear model offers a small amount of penalisation (less
than the product model), but due to the added complexity of this model when
more criteria are added, it should not be recommended over either the SLoS
model or the product model. The linear and multi-linear models both recommend
treatments with no benefit/high risk over other viable alternatives, which
contradicts the conditions set out by Saint-Hilary et al. (2018). Therefore,
we can provisionally
conclude that the two models that appeal most at this point are the product
and SLoS models.
### 4.4 Sensitivity Analysis: Correlated criteria
The results above concerned the case with the two criteria being uncorrelated.
However, it might be reasonable to assume that the criteria for one treatment
might be correlated. In this section, we study how robust the recommendations
of each of the four models are to the correlation between the benefit and risk
criteria. We consider two cases of the correlation: a strong positive
correlation ($\rho=0.8$) and a strong negative correlation ($\rho=-0.8$)
between the criteria. The correlated outcomes were generated using a procedure
laid out in Mozgunov et al. (2018).
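A generic way to generate such correlated binary criteria is a Gaussian copula: draw correlated latent normals and threshold each margin at its quantile. The sketch below is an assumption for illustration, not necessarily the exact procedure of Mozgunov et al. (2018):

```python
import math
import random
from statistics import NormalDist

rng = random.Random(2)
_quantile = NormalDist().inv_cdf  # standard normal quantile function

def correlated_bernoulli(theta_b, theta_r, rho, n):
    # Gaussian-copula sketch: correlated latent normals are thresholded at
    # each margin's quantile, so the marginal event probabilities remain
    # theta_b (benefit) and theta_r (risk) while the outcomes are correlated.
    qb, qr = _quantile(theta_b), _quantile(theta_r)
    benefit, risk = [], []
    for _ in range(n):
        zb = rng.gauss(0.0, 1.0)
        zr = rho * zb + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
        benefit.append(int(zb < qb))
        risk.append(int(zr < qr))
    return benefit, risk

b, r = correlated_bernoulli(0.5, 0.5, rho=0.8, n=20000)
print(sum(b) / len(b), sum(r) / len(r))  # both close to the target 0.5
# Note: the correlation of the binary outcomes is positive but lies below the
# latent rho = 0.8, a standard feature of threshold-based generation.
```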
We study how likely the correlated outcomes are to change the final
recommendation of one of the treatments. Specifically, we study the proportion
of cases under each of the scenarios in which the difference in the
probability of recommending treatment $T_{1}$,
$\mathbb{P}\big{(}\mathcal{P}_{X}^{1,2}>0.8\big{)}$, changes by at least 2.5%
and by at least 5%. Table 5 shows the number of cases (out of 81) under each
of the nine scenarios in which the probability of recommending $T_{1}$ over
$T_{2}$ changes by at least 2.5% and 5% when comparing the positively
correlated and uncorrelated criteria. The case investigating the effects of
negative correlation shows similar results to those presented here, and is
included in the Supplementary Material. For example, the first entry in Table
5 shows that, in 37% of cases under Scenario 1, the probability of
recommending $T_{1}$ changes by at least 2.5% if the linear model is used.
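The counts reported in Table 5 can be computed from per-cell recommendation probabilities as follows; the probability vectors here are random placeholders rather than actual simulation output:

```python
import random

rng = random.Random(0)

# Placeholder per-cell probabilities P(P_X^{1,2} > 0.8) over the 81 T2 cells
# of one scenario: one vector for uncorrelated criteria, one for rho = 0.8.
p_uncorr = [rng.random() for _ in range(81)]
p_corr = [min(1.0, max(0.0, p + rng.uniform(-0.05, 0.05))) for p in p_uncorr]

def n_changed(a, b, threshold):
    # Number of cells whose recommendation probability moved by at least
    # `threshold` between the uncorrelated and correlated settings.
    return sum(abs(x - y) >= threshold for x, y in zip(a, b))

for thr in (0.025, 0.05):
    k = n_changed(p_uncorr, p_corr, thr)
    print(f">={thr:.1%}: {k}/81 ({k / 81:.1%})")
```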
| Scenario | Threshold | Linear Model | Product Model | Multi-Linear Model | SLoS Model |
| --- | --- | --- | --- | --- | --- |
| 1 | $\geq$2.5$\%$ | 30/81 (37.0$\%$) | 24/81 (29.6$\%$) | 29/81 (35.8$\%$) | 22/81 (27.2$\%$) |
| 1 | $\geq$5$\%$ | 22/81 (27.2$\%$) | 15/81 (18.5$\%$) | 21/81 (25.9$\%$) | 12/81 (14.8$\%$) |
| 2 | $\geq$2.5$\%$ | 10/81 (12.3$\%$) | 15/81 (18.5$\%$) | 15/81 (18.5$\%$) | 12/81 (14.8$\%$) |
| 2 | $\geq$5$\%$ | 5/81 (6.2$\%$) | 3/81 (3.7$\%$) | 5/81 (6.2$\%$) | 3/81 (3.7$\%$) |
| 3 | $\geq$2.5$\%$ | 16/81 (19.8$\%$) | 13/81 (16.0$\%$) | 17/81 (21.0$\%$) | 14/81 (17.3$\%$) |
| 3 | $\geq$5$\%$ | 6/81 (7.4$\%$) | 5/81 (6.2$\%$) | 4/81 (4.9$\%$) | 4/81 (4.9$\%$) |
| 4 | $\geq$2.5$\%$ | 22/81 (27.2$\%$) | 3/81 (3.7$\%$) | 22/81 (27.2$\%$) | 0/81 (0$\%$) |
| 4 | $\geq$5$\%$ | 17/81 (21.0$\%$) | 0/81 (0$\%$) | 15/81 (18.5$\%$) | 0/81 (0$\%$) |
| 5 | $\geq$2.5$\%$ | 23/81 (28.4$\%$) | 4/81 (4.9$\%$) | 22/81 (27.2$\%$) | 0/81 (0$\%$) |
| 5 | $\geq$5$\%$ | 17/81 (21.0$\%$) | 0/81 (0$\%$) | 15/81 (18.5$\%$) | 0/81 (0$\%$) |
| 6 | $\geq$2.5$\%$ | 29/81 (35.8$\%$) | 21/81 (25.9$\%$) | 30/81 (37.0$\%$) | 17/81 (21.0$\%$) |
| 6 | $\geq$5$\%$ | 20/81 (24.7$\%$) | 12/81 (14.8$\%$) | 16/81 (19.8$\%$) | 4/81 (4.9$\%$) |
| 7 | $\geq$2.5$\%$ | 25/81 (30.9$\%$) | 19/81 (23.5$\%$) | 24/81 (29.6$\%$) | 13/81 (16.0$\%$) |
| 7 | $\geq$5$\%$ | 14/81 (17.3$\%$) | 9/81 (11.1$\%$) | 15/81 (18.5$\%$) | 2/81 (2.5$\%$) |
| 8 | $\geq$2.5$\%$ | 0/81 (0$\%$) | 0/81 (0$\%$) | 0/81 (0$\%$) | 0/81 (0$\%$) |
| 8 | $\geq$5$\%$ | 0/81 (0$\%$) | 0/81 (0$\%$) | 0/81 (0$\%$) | 0/81 (0$\%$) |
| 9 | $\geq$2.5$\%$ | 1/81 (1.2$\%$) | 1/81 (1.2$\%$) | 1/81 (1.2$\%$) | 1/81 (1.2$\%$) |
| 9 | $\geq$5$\%$ | 0/81 (0$\%$) | 0/81 (0$\%$) | 0/81 (0$\%$) | 0/81 (0$\%$) |
| Total | $\geq$2.5$\%$ | 156/729 (21.4$\%$) | 100/729 (13.7$\%$) | 160/729 (21.9$\%$) | 79/729 (10.8$\%$) |
| Total | $\geq$5$\%$ | 101/729 (13.9$\%$) | 44/729 (6.0$\%$) | 91/729 (12.5$\%$) | 25/729 (3.4$\%$) |
Table 5: Number of cases ($\%$) in which the probability of recommending
$T_{1}$ changes by at least 2.5$\%$ or 5$\%$ between the positively correlated
and uncorrelated criteria.
Table 5 shows that all four models are the most affected by correlation under
Scenario 1 with the characteristics of $T_{1}$ being in the middle of the unit
interval. This effect is, however, less prominent for the product and SLoS
models. At the same time, under Scenarios 2-7, the correlation has a larger
effect on the linear and multi-linear models than on the other two models.
Scenarios 8-9 are hardly affected by any correlation, and the effect is
similar across all four models.
Overall, the SLoS model is the least affected by correlation between the
criteria, the product model is the second least affected whereas the multi-
linear (for the threshold 2.5%) and the linear model (for the threshold 5%)
are the most affected ones.
## 5 Discussion
In this article, four potential AM are investigated for use in benefit-risk
analyses: the linear, product, multi-linear and SLoS models. The differences
between these models were highlighted in a case study and a simulation study.
In most clear cases (i.e. when one treatment has more benefit and less risk
than the competitor), all AM gave similar recommendations. However, in cases
where one treatment had either no benefit or extreme risk, the models which
penalised undesirable values more (the product and SLoS models) gave more
desirable recommendations: non-effective or extremely unsafe treatments are
never recommended. Furthermore, with these models, more risk is accepted in
order to increase benefit when the amount of benefit is small than when it is
high (and, conversely, a larger loss of benefit is acceptable in order to
reduce risk when the amount of risk is high than when it is small), which is
consistent with the well-established assumption of non-linearity of human
preferences (Berger, 2011). It should be noted that these models hardly
discriminate between two treatments that differ slightly but both have
extremely undesirable properties; however, in this case, neither treatment
should be recommended anyway.
The effect of correlations between criteria was also investigated in this
study. The overall effect of correlations was small to negligible in the
product and SLoS models, showing these AM are not much affected by
correlations between the criteria. However, the linear and multi-linear models
were more likely to see a 2.5$\%$ or 5$\%$ change in the probability of
recommending one treatment over another, showing that they are more affected
by correlations between the criteria.
Overall, the two models to recommend from this investigation are the product
model and the SLoS model, depending on how severely the decision-maker wishes
to penalise treatments with either no benefit or extreme risk (moderate
penalisation: product model, strong penalisation: SLoS model). The multi-
linear model, whilst acting as a middle ground between the linear model and
the product and SLoS models in the simulation study, involves increased
complexity: additional interaction terms are required as more criteria are
added, and the mapping of weights becomes more difficult. This
model also struggled to truly reflect the weightings given in the case study,
especially in scenario 2. Because of this, we do not recommend this AM over
the product or SLoS models. Additionally, the linear and multi-linear models
should not be recommended, as neither possesses the two desirable properties
outlined in Saint-Hilary et al. (2018): that treatments with no benefit or
extreme risk should not be recommended, and that a larger increase in risk is
accepted in order to increase the benefit when the benefit is small than when
it is high, both of which are present in the product and SLoS models.
## Supplemental Material
Supplemental material available at: https://github.com/Tom-Menzies/Code-
Menzies-2020
## Acknowledgements
This report is independent research supported by the National Institute for
Health Research (NIHR Advanced Fellowship, Dr Pavel Mozgunov, NIHR300576). The
views expressed in this publication are those of the authors and not
necessarily those of the NHS, the National Institute for Health Research or
the Department of Health and Social Care (DHSC).
## References
* Chuang-Stein, Entsuah and Pritchett (2008) Chuang-Stein C, Entsuah R and Pritchett Y. Measures for Conducting Comparative Benefit: Risk Assessment. Drug Information Journal 2008; $\boldsymbol{42}$: 223-233.
* EMA. (2013) EMA. Guidance document on the content of the $<$co-$>$rapporteur day 80 critical assessment report. https://www.ema.europa.eu/en/documents/regulatory-procedural-guideline/day-80-assessment-report-clinical-guidance$\\_$en.pdf (2013, accessed 30th August 2019).
* FDA. (2013) FDA. Providing postmarket periodic safety reports in the ICH E2C(R2) format. U.S. Food and Drug Administration https://www.fda.gov/regulatory-information/search-fda-guidance-documents/providing-postmarket-periodic-safety-reports-ich-e2cr2-format-periodic-benefit-risk-evaluation?source=govdelivery, journal=U.S. Food and Drug Administration
(2013, accessed 30th August 2019).
* Hughes D, Bayoumi A and Pirmohamed M. (2007) Hughes D, Bayoumi A and Pirmohamed M. Current Assessment of Risk-Benefit by Regulators: Is It Time to Introduce Decision Analyses? Clinical Pharmacology & Therapeutics 2007; $\boldsymbol{82}$: 123-127.
* Thokala P, Devlin N, Marsh K, Baltussen R, Boysen M, Kalo Z, Longrenn T, Mussen F, Peacock S, Watkins J, and Ijzerman M. (2016) Thokala P, Devlin N, Marsh K, Baltussen R, Boysen M, Kalo Z, Longrenn T Mussen F, Peacock S, Watkins J, and Ijzerman M. Multiple Criteria Decision Analysis for Health Care Decision Making-An Introduction: Report 1 of the ISPOR MCDA Emerging Good Practices Task Force. Value in Health 2016; $\boldsymbol{19}$: 1-13.
* Marsh K, IJzerman M, Thokala P, Baltussen R, Boysen M, Kaló Z, Longrenn T, Mussen F, Peacock S, Watkins J and Devlin N. (2016) Marsh K, IJzerman M, Thokala P, Baltussen R, Boysen M, Kaló Z, Longrenn T, Mussen F, Peacock S, Watkins J and Devlin N. Multiple Criteria Decision Analysis for Health Care Decision Making-Emerging Good Practices: Report 2 of the ISPOR MCDA Emerging Good Practices Task Force. Value in Health 2016; $\boldsymbol{19}$: 125-137.
* Mussen F, Salek S, and Walker S (2007) Mussen F, Salek S and Walker S. A quantitative approach to benefit-risk assessment of medicines - part 1: the development of a new model using multi-criteria decision analysis Pharmacoepidemiology and Drug Safety 2007; $\boldsymbol{16}$: S2-S15.
* IMI PROTECT. (2013) IMI PROTECT. Work package 5: Benefit-risk integration and representation. http://www.imi-protect.eu/wp5.shtml (2013, accessed 30th August 2019).
* NICE Decision Support Unit (2011) NICE Decision Support Unit. Multi-criteria decision analysis (MCDA), http://scharr.dept.shef.ac.uk/nicedsu/wp-content/ uploads/sites/7/2016/03/MCDA-for-HTA-DSU.pdf (2011, accessed 1 February 2020).
* Marsh K, Lanitis T, Neasham D, et al (2014) Marsh K, Lanitis T, Neasham D, et al. Assessing the value of healthcare interventions using multi-criteria decision analysis: a review of the literature. PharmacoEconomics 2014; $\mathbf{32}$: 345–365.
* Waddingham E, Mt-Isa S, Nixon R and Ashby D. (2016) Waddingham E, Mt-Isa S, Nixon R and Ashby D. A Bayesian approach to probabilistic sensitivity analysis in structured benefit-risk assessment. Biometrical Journal 2016; $\boldsymbol{58}$: 28-42.
* Tervonen T, van Valkenhoef G, Buskens E, Hillege H, and Postmus D. (2011) Tervonen T, van Valkenhoef G, Buskens E, Hillege H, and Postmus D. A stochastic multicriteria model for evidence-based decision making in drug benefit-risk analysis. Statistics in Medicine 2011; $\boldsymbol{30}$: 1419-1428.
* Saint-Hilary G, Cadour S, Robert V, and Gasparini M (2017) Saint-Hilary G, Cadour S, Robert V and Gasparini M. A simple way to unify multicriteria decision analysis (MCDA) and stochastic multicriteria acceptability analysis (SMAA) using a Dirichlet distribution in benefit-risk assessment. Biometrical Journal 2017; $\boldsymbol{59}$: 567-578.
* Saint-Hilary G, Robert V, Gasparini M, Jaki T and Mozgunov P (2018) Saint-Hilary G, Robert V, Gasparini M, Jaki T and Mozgunov P. A novel measure of drug benefit–risk assessment based on Scale Loss Score. Statistical methods in medical research 2018; $\boldsymbol{1}$ 2738-2753.
* Morton A (2017) Morton A. Treacle and smallpox: two tests for multicriteria decision analysis models in health technology assessment. Value Health 2017; $\mathbf{20}$: 512–515.
* Marsh KD, Sculpher M, Caro J, et al (2017) Marsh KD, Sculpher M, Caro J, et al. The use of MCDA in HTA: great potential, but more effort needed. Value Health 2017; $\mathbf{21}$: 394–397
* Nemeroff C. (2007) Nemeroff C. Prevalence and management of treatment-resistant depression. The Journal of clinical psychiatry 2007; $\boldsymbol{68}$: 17-25.
* Marcelon L, Verstraeten T, Dominiak-Felden G, et al. (2016) Marcelon L, Verstraeten T, Dominiak-Felden G, et al. Quantitative benefit-risk assessment by MCDA of the quadrivalent HPV vaccine for preventing anal cancer in males. Expert Rev Vaccines 2016; $\boldsymbol{15}$: 139-148.
* Nixon R, Dierig C, Mt-Isa S, Stöckert I, et al (2016) Nixon R, Dierig C, Mt-Isa S, Stöckert I, et al. A case study using the PrOACT-URL and BRAT frameworks for structured benefit risk assessment. Biometric J 2016; $\boldsymbol{58}$: 8-27
* Berger JO (2011) Berger JO. Statistical decision theory and Bayesian analysis. 2nd ed. New York, NY: Springer-Verlag. 1985.
* Raiffa H, Keeney RL. (1975) Raiffa H, Keeney RL. Decision Analysis with Multiple Conflicting Objectives, Preferences and Value Tradeoffs. http://pure.iiasa.ac.at/id/eprint/375/ (1975, accessed 30th June 2019).
* Cobb C, and Douglas P (1928) Cobb C, and Douglas P. A Theory of Production. The American Economic Review. 1928; $\boldsymbol{18}$ 139-165.
* Broekhuizen, H., IJzerman, M.J., Hauber, A.B. et al (2017) Broekhuizen, H., IJzerman, M.J., Hauber, A.B. et al. Weighing Clinical Evidence Using Patient Preferences: An Application of Probabilistic Multi-Criteria Decision Analysis. PharmacoEconomics 2017; $\boldsymbol{35}$: 259-269
* Mozgunov et al. (2018) Mozgunov P, Jaki T and Paoletti X. A benchmark for dose finding studies with continuous outcomes. Biostatistics 2018; $\boldsymbol{59}$: 567-578.
# In Defense of the Learning Without Forgetting for Task Incremental Learning
Guy Oren and Lior Wolf
Tel-Aviv University
{guyoren347, liorwolf}@gmail.com
###### Abstract
Catastrophic forgetting is one of the major challenges on the road for
continual learning systems, which are presented with an on-line stream of
tasks. The field has attracted considerable interest and a diverse set of
methods have been presented for overcoming this challenge. Learning without
Forgetting (LwF) is one of the earliest and most frequently cited methods. It
has the advantages of not requiring the storage of samples from the previous
tasks, of implementation simplicity, and of being well-grounded by relying on
knowledge distillation. However, the prevailing view is that while it shows a
relatively small amount of forgetting when only two tasks are introduced, it
fails to scale to long sequences of tasks. This paper challenges this view, by
showing that using the right architecture along with a standard set of
augmentations, the results obtained by LwF surpass those of the latest
algorithms for the task-incremental scenario. This improved performance is
demonstrated by an
extensive set of experiments over CIFAR-100 and Tiny-ImageNet, where it is
also shown that other methods cannot benefit as much from similar
improvements. Our code is available at: https://github.com/guy-
oren/In_defence_of_LWF
## 1 Introduction
The phenomenon of catastrophic forgetting (CF) of old concepts as new ones are
learned in an online manner is well-known. The approaches to overcome it can
be categorized, as suggested by De Lange _et al_. [3], into three families:
(i) replay-based methods, which store selected samples of previously
encountered classes, (ii) regularization-based methods, that limit the freedom
to learn new concepts, and (iii) parameter isolation methods, which directly
protect the knowledge gained in the past, by dividing the network parameters
into separate compartments.
The field of continual learning is very active, with dozens of methods that
have emerged in the last few years. However, it seems that the growing
interest leads to confusion rather than to the consolidation of knowledge. As
practitioners looking to find out which online learning method would be
suitable for a real-world application, we were unable to identify the solid
methods of the field and could not infer from the literature the guiding
principles for tackling catastrophic forgetting.
Indeed, reviewing the literature, one can find many insightful ideas and well-
motivated solutions. However, there is little data regarding the generality of
continual learning methods, the sensitivity of the methods to the specific
setting and hyperparameters, the tradeoff between memory, run-time and
performance, and so on. Ideally, one would like to find a method that is not
only well-grounded and motivated, but also displays a set of desired
properties: (i) work across multiple datasets, (ii) be stable to long
sequences of on-line learning tasks, (iii) benefit from additional capacity,
(iv) display flexibility in network architecture that allows the incorporation
of modern architectures, (v) display an intuitive behavior when applying
regularization, and (vi) present robustness to hyperparameters.
We demonstrate that these properties hold for one of the first methods to be
proposed for tackling CF, namely the Learning without Forgetting (LwF) method
[22]. This is a bit surprising, since this method, as a classical method in a
fast-evolving field, has been repeatedly used as an inferior baseline.
However, we show that unlike many of the more recent methods, this scapegoat
method can benefit from residual architectures and further benefits from
simple augmentation techniques. Moreover, while the original LwF
implementation employed techniques such as warmup and weight decay, we were
able to train without these techniques and their associated hyperparameters.
Overall, we find LwF, which is a simple data-driven regularization technique,
to be more effective than the most promising regularization-based and
parameter-isolation methods.
## 2 Related work
It is often the case that new methods are presented as having clear advantages
over existing ones, based on empirical evidence. The inventors of these
methods have little incentive to explore the underlying reason for the
performance gap. Without a dedicated effort to do so, the literature can
quickly become misleading.
In our work, we demonstrate that the task-incremental learning methods that
have emerged since the 2016 inception of the LwF method are not more accurate
than this straightforward method. This demonstration is based on changing the
underlying neural network architecture to a ResNet [10] and on employing a
simple augmentation technique during training. Moreover, we show that LwF
benefits from additional capacity in the form of extra width.
A recent related attempt by De Lange _et al_. [3] also addresses the need to
compare multiple continual learning algorithms in task-incremental settings.
That study employed multiple architectures and, similarly to us, noted that
the LwF method benefits from the additional capacity given by extra width but
not from extra depth. However, neither ResNets nor augmentations were employed,
and the conclusion was that LwF is not competitive with the more recent
techniques. This conclusion stands in stark contrast to ours, demonstrating the
challenge of comparing methods in a way that exposes their full potential, and
the need to perform such comparative work repeatedly.
### 2.1 Task-incremental learning
CF in neural networks has been observed since the early days of the field.
However, there is no consensus regarding the proper settings and metrics for
comparing different techniques. In this work, we adopt the setting definitions
of [33, 12], who define three different settings for continual learning: task
incremental, domain incremental, and class incremental. In all scenarios, the
system is presented with a stream of tasks and is required to solve all tasks
seen so far. In the task-incremental setting, the task identifier is provided
at both training and inference time. In the domain-incremental setting, the
task identifier is provided only at training time, and the classifier does not
need to infer the task identifier but merely to solve the task at hand. In the
class-incremental setting, the learner also needs to infer the task identifier
at inference time.
We focus on the task-incremental setting. Moreover, we do not consider replay-
based methods, since these rely heavily on accessing data retained from
previous tasks, which is not desirable in real-world scenarios, and they depend
on an additional parameter: the size of the memory.
The literature has a great number of methods, further emphasizing the need for
comparative work. In this work, we focus on the methods that are repeatedly
reported in the literature [3, 29, 13, 21]. These include: Elastic Weight
Consolidation (EWC; [16], online version), Incremental Moment Matching (IMM;
[20], both Mean and Mode variants), overcoming CF with Hard Attention to the
Task (HAT; [29]), continual learning with Hypernetworks (Hyper-CL; [34]) and
Adversarial Continual Learning (ACL; [4]).
Both the EWC and IMM variants belong to the regularization-based family and add
a structural, weight-based, regularization term to the loss function to
discourage changes to weights that are important for previous tasks. IMM
performs a separate model-merging step after learning a new task, which EWC
does not. Although this family of methods is very rich, IMM and EWC are among
the leading methods and are often cited as baselines.
The HAT approach belongs to the parameter isolation family and applies
light-weight, unit-based, learnable 'soft' masks per task. HAT is a
successor to various works, including (i) progressive neural networks (PNNs;
[27]), which apply a complete, separate network (column) for each task
with adapters between columns, (ii) PathNet [5], which also pre-assigns some
amount of network capacity per task but, in contrast to PNNs, avoids network
columns and adapters and instead learns the paths between modules with an
evolutionary strategy, and (iii) PackNet [24], which uses weight-based pruning
heuristics and a retraining phase to maintain a binary mask for each task.
Since HAT was shown to have both performance and computational advantages over
(i)-(iii), we focus on it as a representative method from this line of work.
Hyper-CL [34], a recent addition to the parameter isolation family, belongs to
a different branch in this family than HAT. Instead of using a fixed pre-
determined capacity, Hyper-CL suggests learning the weights of a target
network for each task. Hyper-CL employs a variant of Hypernetworks [8], called
Chunked-Hypernetworks [25], which generates different subsets of the target
network’s parameters using the same generator. To do so, the method learns
both the task embedding and the “chunk” embedding. This variant makes it
possible to maintain a hypernetwork much smaller than the target network. To
overcome CF, a regularization term prevents the target-network weights
generated for previously seen tasks from changing.
Some methods belong to more than one category. ACL [4] employs both parameter
isolation using a small private network for each task, and regularization for
a shared network across tasks. This regularization contains two parts: an
adversarial loss that makes the shared encoding task-independent [6] and a
disentanglement loss that acts to remove the overlap between the private- and
the shared-encoding [28].
Naturally, given the number of relevant methods, it is not feasible to compare
with all of them. The regularization-based family presents two additional
methods that we considered: Encoder Based Lifelong Learning (EBLL; [26]) and
Memory Aware Synapses (MAS; [1]). EBLL extends LwF by adding a per-task auto-
encoder, requiring further hyperparameter tuning. The literature shows that it
only marginally improves over LwF for AlexNet-like architectures [3, 1], and
our attempts to apply it together with ResNets led to poor results. MAS was
also shown in [3] to improve only slightly over LwF.
## 3 The LwF method and its modifications
The LwF method by Li _et al_. [22] belongs to the regularization-based
family. However, unlike EWC and IMM, its regularization is data-driven. The
method utilizes the knowledge distillation loss [11] between the previous
model and the current model to preserve the outputs of the previous tasks.
Since maintaining the data of previous tasks is neither desirable nor
scalable, LwF uses only the current-task data for knowledge distillation.
In the task-incremental setting, the learner is given a new set of labels to
learn at each round. This set of classes is called a task. In LwF, the
classifier is composed of two parts: the feature extractor $f$ and a
classifier head $c_{i}$ per task, for $i=1,2,\dots,T$.
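This shared-trunk, per-task-head structure can be sketched in a few lines of plain Python (a minimal illustration of the decomposition; the class and method names are ours, not taken from the LwF code):

```python
class TaskIncrementalModel:
    """Sketch of the LwF model structure: a feature extractor f shared
    across tasks and one classifier head c_i per task (illustrative only)."""

    def __init__(self, feature_extractor):
        self.f = feature_extractor   # shared trunk f
        self.heads = []              # per-task heads c_1, ..., c_T

    def add_task(self, head):
        # Called at the start of each round with a freshly
        # initialized head for the new set of labels.
        self.heads.append(head)

    def predict(self, x, task_id):
        # Task-incremental setting: the task identifier is known at
        # inference time and selects the corresponding head.
        return self.heads[task_id](self.f(x))
```

With callables standing in for $f$ and the heads, `predict(x, t)` computes $c_{t}(f(x))$.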
Let $\\{(x_{j}^{t},y_{j}^{t})\\}$ be the set of training samples for task $t$.
The cross-entropy loss is used as the primary loss for training the classifier
$c_{t}\circ f$:
$L_{CE}=-\sum_{j}\log[c_{t}(f(x^{t}_{j}))]_{y_{j}^{t}}$ (1)
where the subscript $y_{j}^{t}$ is used to denote the pseudo-probability of
the classifier for the ground truth label.
When learning a new task $t$, to maintain the knowledge of previous tasks, we
employ knowledge distillation between the "old" feature extractor combined
with the previous task-classifier heads and their new counterparts. These are
denoted by $f^{o}$ for the previous feature-extractor network (as learned
after task $t-1$) and by $c_{i}^{o}$, for $i=1,2,\dots,t-1$, for the previous
heads. The newly learned feature extractor is denoted by $f$ and the updated
task classifiers are denoted by $c_{i}$, for $i=1,2,\dots,t$.
For simplicity, we described the knowledge distillation process for one
previous task and one sample $(x,y)\in\\{(x_{j}^{t},y_{j}^{t})\\}$ from the
current task $t$. However, the process is repeated for the classifier heads of
all previous tasks and all samples of task $t$, while summing up the
individual losses. Let $Y^{o}:=[y^{o}_{1},y^{o}_{2},...]=c^{o}_{i}(f^{o}(x))$
be the vector of probabilities that the old classifier of task $i$ assigns to
sample $x$. Similarly, let $Y:=[y_{1},y_{2},...]$ be the vector of
probabilities for the same training samples obtained with $c_{i}\circ f$. To
apply the knowledge distillation loss, these vectors are modified in
accordance with some temperature parameter $\theta$:
$\displaystyle y^{\prime}_{k}=\frac{y_{k}^{\frac{1}{\theta}}}{\sum_{m}y_{m}^{\frac{1}{\theta}}}\,,\quad{y_{k}^{\prime}}^{o}=\frac{(y_{k}^{o})^{\frac{1}{\theta}}}{\sum_{m}(y^{o}_{m})^{\frac{1}{\theta}}}\,.$ (2)
The temperature is taken to be larger than one, to increase small probability
values and reduce the dominance of the high values. The knowledge distillation
loss is defined as:
$\displaystyle L_{dist}=-\sum_{k}{y^{\prime}_{k}}^{o}\log(y^{\prime}_{k})\,,$
(3)
where the summation is done over all labels of task $i$.
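Eqs. (2) and (3) can be sketched in plain Python operating on probability vectors (a self-contained illustration; the function names are ours, and in practice the computation would run on logits in a deep-learning framework):

```python
import math

def temperature_rescale(probs, theta=2.0):
    # Eq. (2): raise each probability to the power 1/theta and renormalize;
    # theta > 1 flattens the distribution.
    powered = [p ** (1.0 / theta) for p in probs]
    z = sum(powered)
    return [p / z for p in powered]

def distillation_loss(old_probs, new_probs, theta=2.0, eps=1e-12):
    # Eq. (3): cross-entropy between the rescaled old outputs Y'^o and the
    # rescaled new outputs Y', summed over the labels of task i.
    y_old = temperature_rescale(old_probs, theta)
    y_new = temperature_rescale(new_probs, theta)
    return -sum(po * math.log(pn + eps) for po, pn in zip(y_old, y_new))
```

In LwF this loss is accumulated over the heads of all previous tasks and added to the cross-entropy loss of Eq. (1).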
We followed the authors' suggestions and in all our experiments set
$\theta=2$ and the regularization weight to one, _i.e_., the knowledge
distillation loss had the same weight as the classification loss of the new
task. It is worth mentioning that although the original LwF work [22]
evaluated the method in the two-task scenario, it can be readily extended to
any number of tasks by applying the knowledge distillation loss over all
$c^{o}_{i}$, $i=1,2,\dots,t-1$. This further highlights the need for our
research, since such extensions were previously made in the context of
presenting the superior performance of a new method. We also note
that it was suggested in [22] to use a warmup phase at the beginning of
training for each new task, in which both $f$ and $c_{i},i=1,2,\dots,t-1$ are
frozen and one trains $c_{t}$ with the cross-entropy loss until convergence.
However, since the effect of this seems negligible even in the original paper,
we do not perform this. The authors also used regularization in the form of
weight decay during training, which we remove to avoid the need to fit a
regularization hyperparameter for each experiment. Moreover, in our initial
experiments weight decay tended to hurt the accuracy on new tasks.
### 3.1 Architecture
Li _et al_. [22] employed AlexNet [18] and VGGNet [30] to evaluate the
performance of the method. Interestingly, even the recent review work by De
Lange _et al_. [3] uses AlexNet as a reference network, despite ongoing
advances in network architectures. There is also a key difference between the
different versions of AlexNet-like architectures employed in [22] and [29].
The latter uses Dropout [31], which, as we show empirically, is detrimental.
We also propose using the ResNet [10] architecture. We are not the first to
attempt to use ResNets for LwF. Mallya _et al_. [24] employed LwF with a
ResNet-50 network as an underperforming baseline. However, our experiments
demonstrate that LwF mostly benefits from a Wide-ResNet [35] network rather
than from deeper ones.
### 3.2 Data augmentation
Using a method with a shared model presents a challenge. On the one hand, the
shared part must have enough capacity to learn new tasks. On the other hand,
bigger networks are more vulnerable to overfitting when training on the first
tasks. The parameter isolation family works around this problem by dynamically
changing the capacity of the network as in PNNs [27] or learning a specific
target network for each task with enough capacity for each task, like in
Hyper-CL [34].
In addition to the capacity needs, another challenge that the LwF method faces
is the need to mitigate the difference between the input distributions for
different tasks. In the extreme, where the input distributions are very
dissimilar, the knowledge distillation loss no longer constrains the network
to succeed on previous tasks.
Data augmentation, which is a well-studied technique for overcoming
overfitting by virtually expanding the dataset at hand, also has the potential
to close the gap between different input distributions and therefore reduce
forgetting. In our experiments, we employ a very basic set of augmentation
consisting of random horizontal flips, color jitter (randomly change the
brightness, contrast, saturation, and hue), and translation. As it turns out,
these are sufficient to reduce the forgetting almost to zero, while
substantially increasing the average accuracy for all tested settings.
## 4 Experiments
The common datasets for evaluating CF in classification problems include
permutations of the MNIST data [32], a split of the MNIST data [20],
incrementally learning classes of the CIFAR data sets [23], or on considering
two datasets and learning the transfer between them [22]. Serrà _et al_. [29]
point out the limitations of the MNIST setups, since these do not represent
modern classification tasks well. The two-task scenario is criticized for
being limited, as it does not enable the evaluation of CF for sequential
learning with more than two tasks. CIFAR-100 splits are criticized for having tasks
that are relatively similar in nature. However, in our experiments,
performance on CIFAR-100 splits discriminates well between different methods
and between different settings of the same method.
In addition to CIFAR-100 [17], we employ Tiny-ImageNet [19] in our
experiments. The latter presents a higher diversity with more classes and the
ability to challenge methods with longer and more meaningful sequences of
tasks. To obtain a generic estimate, we shuffle the order of classes in each
dataset and repeat each experiment setup five times with different seeds.
A common CIFAR setup, introduced in [36], uses CIFAR-10 as a first task and
then splits CIFAR-100 into five distinct tasks with 10 disjoint classes each.
However, it may introduce a bias in evaluating task-incremental methods,
since it makes the first task much larger and, therefore, conceals the problem
of first-task overfitting. In this work, we consider a different setting, in
which CIFAR-100 is divided into 5-Splits (i.e., 5-tasks), 10-Splits, and
20-Splits with 20, 10, and 5 classes in each task, respectively. Each class in
CIFAR-100 contains 500 training images and 100 test images, each of size
$3\times 32\times 32$. To form a validation set, we shuffle the training data
and use $90\%$ of it for training and $10\%$ for validation.
A recent work by De Lange _et al_. [3] employed Tiny-ImageNet as a benchmark
using a setup similar to the CIFAR-100 one above. However, they split the
dataset into 20 disjoint tasks with 10 classes each. Since we opt for a longer
sequence of tasks while still keeping them meaningful, we split the dataset
into 40 disjoint tasks with 5 classes each. As our results will show, this
setting pushes the limits of the task-incremental methods.
Each class in Tiny-ImageNet contains 500 training images, 50 validation
images, and 50 testing images. The original image size for this dataset is
$3\times 64\times 64$. Since the test set is not publicly available, we use
the official validation set as our test set; to form a validation set, we
shuffle the training data and use $90\%$ for training and $10\%$ for
validation.
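The 90/10 split used for both datasets can be reproduced with a few lines (a sketch; the exact shuffling procedure is our assumption, as the paper does not specify it):

```python
import random

def train_val_split(samples, val_frac=0.1, seed=0):
    # Shuffle the training data, then hold out `val_frac` of it for
    # validation; the seed makes the split reproducible.
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    cut = int(round(len(samples) * (1.0 - val_frac)))
    train = [samples[i] for i in idx[:cut]]
    val = [samples[i] for i in idx[cut:]]
    return train, val
```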
To evaluate performance, we adopt the metrics of [23]:
Average Accuracy: ACC $\displaystyle=\frac{1}{T}\sum_{i=1}^{T}R_{T,i}$ (4)
Backward Transfer: BWT
$\displaystyle=\frac{1}{T-1}\sum_{i=1}^{T-1}\left(R_{T,i}-R_{i,i}\right)$ (5)
where $T$ is the number of tasks and $R_{i,j}$ is the test accuracy score for
task $j$ after the model learned task $i$. We note that $BWT<0$ reports CF,
while $BWT>0$ indicates that learning new tasks helped the preceding tasks.
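Given the full accuracy matrix $R$, both metrics are straightforward to compute (a minimal sketch; `R` is a nested list with `R[i][j]` holding $R_{i+1,j+1}$ in 0-indexed form):

```python
def acc_bwt(R):
    """Compute ACC (Eq. 4) and BWT (Eq. 5) from a T x T accuracy matrix,
    where R[i][j] is the test accuracy on task j after learning task i
    (0-indexed)."""
    T = len(R)
    acc = sum(R[T - 1][i] for i in range(T)) / T
    bwt = sum(R[T - 1][i] - R[i][i] for i in range(T - 1)) / (T - 1)
    return acc, bwt
```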
### 4.1 The effect of the network architecture
We first present experiments for LwF with various network architectures and no
data augmentation. The AlexNet-like architecture [18] we use follows [29] and
has three convolutional layers of 64, 128, and 256 filters with $4\times 4$,
$3\times 3$, and $2\times 2$ kernel sizes, respectively. On top, there are two
fully-connected layers of 2048 units each. This network employs rectified
linear units (ReLU) as activations, and $2\times 2$ max-pooling after the
convolutional layers. A Dropout of 0.2 is applied for the first two layers and
0.5 for the rest. All layers are randomly initialized with Xavier uniform
initialization [7].
| | CIFAR 5-Split | CIFAR 10-Split | CIFAR 20-Split | Tiny-ImageNet 40-Split
---|---|---|---|---|---
Arch. | #Params | BWT | ACC | BWT | ACC | BWT | ACC | BWT | ACC
AlexNet-D | $6.50$ | $-39.9\pm 1.4$ | $36.6\pm 1.5$ | $-52.9\pm 1.2$ | $28.1\pm 1.3$ | $-54.4\pm 1.1$ | $31.3\pm 0.8$ | $-50.5\pm 1.0$ | $25.0\pm 0.4$
AlexNet-ND | $6.50$ | $-1.8\pm 0.6$ | $56.6\pm 1.1$ | $-2.9\pm 0.2$ | $67.0\pm 1.0$ | $-3.1\pm 0.3$ | $75.5\pm 0.6$ | $-2.8\pm 0.3$ | $66.9\pm 0.8$
RN-20 | $0.27$ | $-0.4\pm 0.3$ | $60.4\pm 0.7$ | $-1.9\pm 0.5$ | $67.2\pm 1.0$ | $-2.3\pm 0.4$ | $76.2\pm 0.8$ | $-3.0\pm 0.5$ | $70.8\pm 1.0$
RN-32 | $0.47$ | $-1.8\pm 0.7$ | $58.8\pm 2.0$ | $-1.8\pm 0.2$ | $67.1\pm 1.1$ | $-2.7\pm 0.2$ | $75.6\pm 0.4$ | $-2.4\pm 0.2$ | $70.9\pm 1.1$
RN-62 | $0.95$ | $-1.7\pm 0.6$ | $58.9\pm 0.7$ | $-2.7\pm 0.4$ | $66.0\pm 0.8$ | $-2.9\pm 0.4$ | $75.6\pm 0.7$ | $-3.1\pm 0.9$ | $70.3\pm 1.2$
WRN-20-W2 | $1.08$ | $-1.2\pm 0.6$ | $62.0\pm 0.3$ | $-2.1\pm 0.6$ | $69.6\pm 0.8$ | $-3.3\pm 0.4$ | $77.3\pm 0.4$ | $-3.8\pm 0.2$ | $71.5\pm 0.6$
WRN-20-W5 | $6.71$ | $-2.0\pm 0.5$ | $64.2\pm 1.1$ | $-2.9\pm 0.3$ | $71.2\pm 0.5$ | $-3.7\pm 0.3$ | $79.4\pm 0.6$ | $-4.5\pm 0.3$ | $72.6\pm 0.8$
Table 1: Network results summary for LwF. BWT and ACC in %. #Params in
millions, counting only the shared feature extractor. All results are
averaged over five runs with standard deviations. D=Dropout, ND=No Dropout,
RN=ResNet, WRN=WideResNet.
While LwF is commonly used with an AlexNet-like architecture [21, 29, 3], we
opt for more modern architectures, choosing the popular ResNet family. In
this work, we use ResNet-20 (RN-20),
ResNet-32 (RN-32) and ResNet-62 (RN-62) [10], as well as Wide-ResNet-20
networks with width factors 2 or 5 [35] (WRN-20-W2 and WRN-20-W5
respectively). These networks employ ReLU activations and Batch Normalization
layers [14]. All convolutional layers were randomly initialized with Kaiming
normal initialization in fan-out mode [9], and the normalization layers were
initialized with constant weight 1 and bias 0. All architectures tested use a
separate fully-connected layer with a softmax output per task as the final
layer. More details can be found in the appendix.
In all experiments, LwF is trained up to 200 epochs for each task. We use a
batch size of 64 and an SGD optimizer with a learning rate of $0.01$ and a
momentum of $0.9$. We use the validation set to schedule the learning rate,
where we drop the learning rate by a factor of 3 if there is no improvement in
the validation loss for five consecutive epochs. Training is stopped when the
learning rate becomes lower than $10^{-4}$.
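This schedule amounts to a reduce-on-plateau rule with early stopping, sketched below (an illustration of the protocol as described in the text, not the authors' code):

```python
class PlateauSchedule:
    """Learning-rate schedule of Sec. 4.1: drop the LR by a factor of 3
    after 5 epochs without validation-loss improvement; stop training
    once the LR falls below 1e-4."""

    def __init__(self, lr=0.01, factor=3.0, patience=5, min_lr=1e-4):
        self.lr = lr
        self.factor = factor
        self.patience = patience
        self.min_lr = min_lr
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        # Call once per epoch; returns False when training should stop.
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience:
                self.lr /= self.factor
                self.bad_epochs = 0
        return self.lr >= self.min_lr
```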
The results are depicted in Tab. 1. Our clearest and most significant result
is that the underlying network has a great effect on LwF performance. While
LwF with the AlexNet-with-Dropout architecture greatly suffers from
forgetting, which results in low ACC, simply removing the Dropout from the
network yields a sizable performance boost. This makes sense: while using
Dropout on the teacher side creates a strong teacher that can be viewed as a
large ensemble of models that share weights [11], on the student side it
weakens the regularization of LwF. Randomly choosing which weights to
regularize ignores their importance for older tasks, which results in high
forgetting.
Next, switching to RN-20, with an order of magnitude fewer parameters, yields
better performance. This change reveals the potential of LwF to obtain
competitive ACC and BWT.
Following [3], we investigate the effect of the width and depth of the ResNet
architecture on LwF performance. We used two deeper networks (RN-32 and
RN-62) and two wider networks (WRN-20-W2 and WRN-20-W5). Our results (Tab. 1)
show that while deeper networks give similar or inferior results compared to
RN-20, wider networks increase performance.
### 4.2 The effect of data augmentation
We conjectured in Sec. 3.2 that LwF performance can be further increased by
using data augmentations. In this section, we conduct experiments on
WRN-20-W5, which is the best performer among the tested architectures, with a
relatively simple set of random augmentations: random horizontal translation
of up to 3 pixels with reflection padding, random horizontal flip, and color
jitter (brightness, contrast and saturation with jitter of $0.3$ and hue with
jitter of $0.2$).
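For illustration, the flip and translation components can be sketched with NumPy (our own minimal implementation operating on an `(H, W, C)` array; color jitter is omitted, and the actual experiments presumably rely on a standard image-augmentation library):

```python
import numpy as np

def augment(img, rng, max_shift=3):
    """Random horizontal flip and random horizontal translation of up to
    `max_shift` pixels with reflection padding, as described in Sec. 4.2.
    `img` is an (H, W, C) array; `rng` is a numpy Generator."""
    if rng.random() < 0.5:
        img = img[:, ::-1, :]  # horizontal flip
    shift = int(rng.integers(-max_shift, max_shift + 1))
    if shift != 0:
        # Pad the width with reflected pixels, then crop back at an offset.
        padded = np.pad(img, ((0, 0), (max_shift, max_shift), (0, 0)),
                        mode="reflect")
        start = max_shift + shift
        img = padded[:, start:start + img.shape[1], :]
    return img
```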
| CIFAR 5-Split | CIFAR 10-Split | CIFAR 20-Split | Tiny-ImageNet 40-Split
---|---|---|---|---
Augmentation | BWT | ACC | BWT | ACC | BWT | ACC | BWT | ACC
Without | $-2.0\pm 0.5$ | $64.2\pm 1.1$ | $-2.9\pm 0.3$ | $71.2\pm 0.5$ | $-3.7\pm 0.3$ | $79.4\pm 0.6$ | $-4.5\pm 0.3$ | $72.6\pm 0.8$
With | $-0.2\pm 0.2$ | $80.3\pm 0.6$ | $-0.6\pm 0.2$ | $83.7\pm 0.8$ | $-1.5\pm 0.3$ | $86.6\pm 0.4$ | $-2.1\pm 0.2$ | $78.6\pm 0.6$
Table 2: Data augmentation results for LwF with WRN-20-W5 architecture. BWT
and ACC in %. All results are averaged over five runs with standard
deviations.
| | | CIFAR 5-Split | CIFAR 10-Split | CIFAR 20-Split | Tiny-ImageNet 40-Split
---|---|---|---|---|---|---
Method | Arch. | Aug. | BWT | ACC | BWT | ACC | BWT | ACC | BWT | ACC
EWC | AlexNet-D | | $+0.2\pm 0.1$ | $58.6\pm 0.9$ | $+0.7\pm 0.4$ | $64.1\pm 0.5$ | $+0.0\pm 0.9$ | $74.0\pm 1.0$ | $-0.8\pm 0.4$ | $63.3\pm 0.9$
EWC | AlexNet-D | ✓ | $+0.0\pm 0.2$ | $62.9\pm 1.5$ | $+0.1\pm 0.4$ | $68.4\pm 0.9$ | $-0.5\pm 1.1$ | $75.2\pm 1.3$ | $-1.5\pm 2.0$ | $63.8\pm 2.6$
IMM-MEAN | AlexNet-D | | $-1.2\pm 0.8$ | $58.9\pm 1.1$ | $-0.6\pm 0.7$ | $58.6\pm 1.9$ | $-0.8\pm 0.3$ | $55.9\pm 1.6$ | $-0.6\pm 0.8$ | $43.6\pm 1.3$
IMM-MEAN | AlexNet-D | ✓ | $-2.5\pm 1.0$ | $62.5\pm 1.8$ | $-1.3\pm 0.8$ | $61.4\pm 2.0$ | $-1.3\pm 0.5$ | $57.9\pm 2.9$ | $-1.2\pm 0.5$ | $44.7\pm 1.5$
IMM-MODE | AlexNet-D | | $-8.3\pm 1.5$ | $63.7\pm 1.5$ | $-21.7\pm 2.9$ | $58.6\pm 2.9$ | $-30.5\pm 3.2$ | $54.9\pm 3.0$ | $-25.0\pm 1.4$ | $50.6\pm 1.7$
IMM-MODE | AlexNet-D | ✓ | $-6.9\pm 0.3$ | $68.9\pm 0.9$ | $-19.8\pm 2.7$ | $64.4\pm 2.9$ | $-31.1\pm 4.2$ | $58.2\pm 4.3$ | $-24.2\pm 2.4$ | $54.6\pm 2.9$
HAT | AlexNet-D | | $+0.0\pm 0.0$ | $67.1\pm 0.6$ | $+0.0\pm 0.0$ | $72.8\pm 0.8$ | $+0.0\pm 0.0$ | $76.6\pm 0.6$ | $+0.0\pm 0.0$ | $65.9\pm 1.1$
HAT | AlexNet-D | ✓ | $-0.1\pm 0.0$ | $70.5\pm 0.9$ | $+0.0\pm 0.0$ | $76.2\pm 0.8$ | $+0.0\pm 0.0$ | $78.4\pm 1.0$ | $+0.0\pm 0.0$ | $67.3\pm 0.9$
HyperCL | H:Lin,M:RN32 | | $+0.0\pm 0.1$ | $53.0\pm 2.3$ | $+0.0\pm 0.0$ | $62.9\pm 0.4$ | $+0.0\pm 0.0$ | $75.5\pm 1.0$ | $-0.8\pm 0.3$ | $48.9\pm 1.6$
HyperCL | H:Lin,M:RN32 | ✓ | $+0.0\pm 0.0$ | $69.5\pm 1.1$ | $+0.0\pm 0.0$ | $78.2\pm 0.6$ | $+0.0\pm 0.0$ | $85.3\pm 0.9$ | $-0.9\pm 0.3$ | $60.7\pm 0.3$
$\text{ACL}^{o}$ | $\text{AlexNet-D}^{**}$ | | - | - | - | - | $+0.0\pm 0.0$ | $78.0\pm 1.2$ | - | -
LwF | WRN-20-W5 | | $-2.0\pm 0.5$ | $64.2\pm 1.1$ | $-2.9\pm 0.3$ | $71.2\pm 0.5$ | $-3.7\pm 0.3$ | $79.4\pm 0.6$ | $-4.5\pm 0.3$ | $72.6\pm 0.8$
LwF | WRN-20-W5 | ✓ | $-0.2\pm 0.2$ | $80.3\pm 0.6$ | $-0.6\pm 0.2$ | $83.7\pm 0.8$ | $-1.5\pm 0.3$ | $86.6\pm 0.4$ | $-2.1\pm 0.2$ | $78.6\pm 0.6$
$\text{JOINT}^{*}$ | WRN-20-W5 | | $+4.5\pm 2.0$ | $72.3\pm 1.9$ | $+4.2\pm 1.9$ | $80.2\pm 2.0$ | $+3.0\pm 1.1$ | $86.1\pm 0.9$ | $+3.5\pm 0.3$ | $80.3\pm 0.3$
$\text{JOINT}^{*}$ | WRN-20-W5 | ✓ | $+2.4\pm 0.8$ | $85.3\pm 0.5$ | $+2.3\pm 0.2$ | $89.9\pm 0.4$ | $+1.7\pm 0.6$ | $93.2\pm 0.4$ | $+2.2\pm 0.5$ | $86.7\pm 0.4$
Table 3: Comparison between multiple methods. BWT and ACC in %. $^{*}$JOINT
does not adhere to the task-incremental setup and is performed in order to
serve as an upper bound for LwF. $^{**}$A slightly different AlexNet-like
architecture than that used in HAT, with similar capacity. $^{o}$Results
reported in [4]; all other results are reproduced by us and are averaged over
five runs with standard deviations. D=Dropout, RN=ResNet, WRN=WideResNet,
Lin=a linear layer, H=Hypernetwork, M=Target network.
Figure 1: $BWT$ and $ACC$ of the best performance obtained for each of the
evaluated methods, averaged over 5 random seeds. JOINT is an upper bound
trained on the data of all past tasks. (a) CIFAR 5-Split, (b) CIFAR 10-Split,
(c) CIFAR 20-Split, (d) Tiny-ImageNet 40-Split.
Figure 2: The evolution in time of the accuracy and the forgetting for the
best-performing setting of each method, averaged over 5 random seeds: $ACC$
(Eq. 4) and $BWT$ (Eq. 5) after learning task $t$, as a function of $t$.
(a) & (b) $ACC$ & $BWT$ over time for CIFAR 20-Split; (c) & (d) the same for
Tiny-ImageNet 40-Split.
The results are summarized in Tab. 2. As can be observed, applying
augmentation in this setting leads to improvement in both ACC and BWT.
Therefore, there is no trade-off between accuracy and forgetting. We emphasize
that even though no augmentation-protocol search was conducted and the set of
augmentations used is rather small and simple, the performance boost is
substantial.
### 4.3 Comparison with other methods
We consider two regularization-based methods, EWC [16] and IMM [20], and two
parameter isolation methods, HAT [29] and Hyper-CL [34]. ACL [4] is considered
as a recent hybrid method. As an upper bound for overall performance we
consider a joint training method (JOINT), which, for each incoming task,
trains on the data of all tasks seen so far. The hyper-parameters for EWC,
IMM, and HAT were the best found in [29], and for Hyper-CL the best found in
[34]. For ACL, we quote the results reported in the paper, _i.e_., for an
AlexNet-like architecture with Dropout (both private and shared) and no
augmentations at all.
Following our findings for LwF, we attempted to use WRN-20-W5 for all
baseline methods. However, none of them performs well with it; some of the
baseline methods are tightly coupled with the architecture originally
presented in their paper. The authors of Hyper-CL [34] performed an extensive
hyperparameter search over both the hypernetwork and target architectures,
and conclude that choosing the right combination is crucial, since it has a
great effect on performance. Therefore, we used the best hypernetwork-target
pair they found for the more effective, "chunked" version. This pair consists
of a hypernetwork with a linear layer that maps a task embedding and a chunk
embedding of size 32 each to a chunk of size 7000 of a ResNet-32 target
network. We found a similar coupling for the HAT method: we could not achieve
reasonable performance with an underlying ResNet architecture. We conjecture
that the masking process in HAT needs to be adapted for use with batch
normalization layers, and report results with the AlexNet-like network
presented by Serrà _et al_. [29].
Both EWC and IMM, although not coupled with a specific architecture, were
found to under-perform with WRN-20-W5; see the appendix. We conjecture that
the difference from LwF lies in the type of regularization term used by each
method: LwF employs a 'soft' regularization on the network output for
previous tasks, which handles the statistical shift due to batch
normalization better than weight-based regularization does. For the
comparison table we use the best evaluated architecture for each method.
All methods, except Hyper-CL and ACL, use a separate fully-connected layer
with a softmax output for each task as the final layer. Hyper-CL employs a
separate generated network for each task, and ACL employs a separate 3-layer
MLP with a softmax output for each task, on top of the concatenation of the
private and shared features.
**Training.** We made an effort to find the best training protocol for each
method, based on the existing literature and initial experiments. For all
methods except Hyper-CL we followed the training protocol described in Sec.
4.1. For Hyper-CL, we use a batch size of 32 with the Adam optimizer [15] and
a learning rate of $0.001$. As for learning-rate scheduling, Hyper-CL uses the
validation accuracy to schedule the learning rate, dropping it by a factor of
$(\sqrt{0.1})^{-1}$ if there is no improvement in the validation accuracy for
5 consecutive epochs. The Hyper-CL implementation further employs a custom
multi-step scheduler adapted from Keras [2]. However, there is no early
stopping in Hyper-CL. No other regularization is used in any of the methods,
except that inherent to the method itself.
The Hyper-CL official implementation and the author’s experiments use the test
set for parameter selection in lieu of a proper validation set. We were able
to fix and rerun the experiments in time only for the Hyper-CL experiments on
CIFAR and not for the Hyper-CL experiments on Tiny-ImageNet. We observed that
moving to an independent validation set reduces the performance of Hyper-CL on
CIFAR by a significant margin. We, therefore, view the results obtained for
this method on Tiny-ImageNet as an upper bound for the method’s performance.
We note that (i) Hyper-CL is by far the slowest of all methods tested, and
(ii) on Tiny-ImageNet, even though the results of this method are positively
biased, it is not competitive.
The comparison to the literature methods is provided in Tab. 3 and summarized
in Fig. 1 for the best configuration of each method. Evidently, in contrast
to the picture the literature paints, when a proper architecture and added
augmentations are used, LwF, which is a simple regularization-based method,
outperforms all other methods. The results also show that although IMM has
evolved from EWC, neither of its variants is competitive with EWC except on
the smallest split (CIFAR 5-Split). When considering the augmentation
mechanism, we have mixed results: although augmentations increase ACC, they
also increase forgetting for EWC and IMM-MEAN, and only slightly reduce
forgetting for IMM-MODE, which remains quite high. In contrast, for LwF,
augmentations improve both ACC and BWT.
HAT as originally conceived (recall that it is not compatible with ResNets)
has a very competitive ACC on CIFAR and even outperforms Hyper-CL on the
longer and more challenging sequence of tasks from Tiny-ImageNet. It also
further benefits from augmentation. For Hyper-CL, we can see that although
it has a smaller capacity (considering only the hypernetwork's learnable
parameters for capacity computation) it outperforms all of the baselines for
CIFAR when augmentation is used. However, this advantage does not generalize
to the Tiny-ImageNet dataset, and it falls behind HAT, and even EWC, for a
longer sequence, which further emphasizes the need for comparison over a
diverse set of experiments. To check if this shortcoming is a result of the
capacity of the model, we experimented with larger models, both for the
hypernetwork and the target network. We observed that performance drops
significantly in all experiments with the larger networks. This result
emphasizes the need for careful tuning of the Hyper-CL method, which is
challenging since unlike other methods it requires the tuning of two
architectures at once, which enlarges the space of possible hyper-parameters
dramatically. We note also that [34] reported that out of many architectures
tried, the smallest ones showed the best performance-compression ratio.
For ACL, we quote the results for CIFAR 20-Split with no augmentation from the
paper itself [4]. The network used in the paper was similar to the one used by
HAT. As the results show, ACL outperforms both HAT and Hyper-CL when no
augmentation is used. LwF is not considered as a baseline in [4]. However, LwF
outperforms ACL with WRN-20-W5 even without augmentation. We emphasize that
the difference does not come from capacity, since both networks have a similar
capacity as described in Tab. 1.
We further analyze the performance by evaluating ACC and BWT after learning
each task. Fig. 2 shows the results for the longer sequences of tasks, 20 for
CIFAR and 40 for Tiny-ImageNet (the results for the other experiments can be
found in the appendix). One can observe that the methods differ in substantial
ways. First, the non-LwF regularization methods, namely EWC and IMM, are not
competitive with LwF from the early stages of the online training. The
results also indicate that, although the balance between the primary loss and
the regularization loss could be tuned more carefully, these methods exhibit a
strong trade-off between forgetting and new learning: EWC and IMM-MEAN favor
old tasks (low forgetting, low ACC), while IMM-MODE favors new tasks (high
forgetting, with a final ACC comparable to or higher than IMM-MEAN's). Second,
the same trade-off exists for HAT: while it exhibits almost no forgetting, its
accuracy on new tasks is lower. Since HAT is a parameter-isolation method, we
conjecture that it struggles to utilize the underlying architecture for
learning new tasks. Third, while Hyper-CL and LwF seem close on CIFAR, an
important difference is evident in Tiny-ImageNet. The profile of ACC for
Tiny-ImageNet in Fig. 2 (c) shows that Hyper-CL struggles to learn new tasks
after task 34 is learned, and the drop in accuracy is not due to forgetting,
as is evident from the BWT plot in Fig. 2 (d). Interestingly, this drop also
enables EWC to outperform Hyper-CL through more consistent performance after
the drop at task 8. Last, in both CIFAR and Tiny-ImageNet, LwF retains the
capability of learning new tasks while hardly forgetting previous ones. We
conclude that, although LwF is a regularization-based method, given the right
architecture and augmentation, it can maintain both the ability to learn new
tasks and to not forget old ones, even at the tail of long task sequences.
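For reference, the per-task curves in Fig. 2 can be computed directly from an accuracy matrix. The sketch below assumes the standard GEM-style definitions of the two metrics, which we take to correspond to Eqs. 1 and 2 of the paper:

```python
def acc_bwt(R, t):
    """ACC and BWT after learning task t (1-indexed), from an accuracy
    matrix R where R[i][j] is the test accuracy on task j after training
    on task i (GEM-style definitions, assumed to match Eqs. 1-2)."""
    acc = sum(R[t - 1][:t]) / t
    # backward transfer: how much earlier tasks degraded since learned
    bwt = 0.0 if t == 1 else sum(
        R[t - 1][j] - R[j][j] for j in range(t - 1)
    ) / (t - 1)
    return acc, bwt
```

Evaluating this for every `t` yields curves like those in Fig. 2; a negative BWT quantifies forgetting, while BWT near zero with rising ACC is the behavior LwF exhibits.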
This emphasizes the need for a careful evaluation of each method. While EWC,
IMM, HAT, and ACL outperform AlexNet-based LwF with the Dropout architecture,
they fall short when Dropout is removed and when more appropriate
architectures are selected. The reason these other methods do not suffer from
Dropout is that they employ hard regularization on the weights that accounts
for their importance. However, as Fig. 2 shows, this type of regularization
quickly results in a network-utilization problem for fixed-size backbones.
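The importance-weighted hard regularization mentioned above takes, in EWC [16], the form of a quadratic penalty anchored at the previous task's weights. A minimal sketch with flat parameter lists and a placeholder strength `lam`:

```python
def ewc_penalty(params, old_params, fisher, lam):
    """EWC quadratic penalty: (lam / 2) * sum_i F_i (theta_i - theta*_i)^2,
    where F_i is the (diagonal) Fisher information estimated on the old
    task and theta* are the weights after learning it."""
    return 0.5 * lam * sum(
        f * (p - q) ** 2 for f, p, q in zip(fisher, params, old_params)
    )
```

Because high-Fisher weights are effectively frozen, capacity for new tasks shrinks as the sequence grows, which is one reading of the utilization problem seen in Fig. 2.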
## 5 Conclusions
Many of the recent task-incremental publications [21, 29, 1] compare with LwF
and found their method to be superior. These conclusions seem to arise from
the little incentive authors have to explore the effect of the evaluation
settings on prior work, or to invest effort in modernizing the form (_e.g_.,
architecture) of baseline methods. However, LwF itself is built on top of
solid knowledge-distillation foundations and, as we show, can be upgraded to
become extremely competitive.
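The knowledge-distillation foundation of LwF [22, 11] amounts to matching temperature-softened outputs of the frozen previous model on old-task heads. A minimal sketch in plain Python (`T=2.0` is an illustrative choice, not necessarily the value used in our experiments):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(l / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def lwf_distillation_loss(old_logits, new_logits, T=2.0):
    """Cross-entropy between the frozen old model's softened outputs and
    the current model's softened outputs on an old-task head."""
    p_old = softmax(old_logits, T)
    p_new = softmax(new_logits, T)
    return -sum(p * math.log(q) for p, q in zip(p_old, p_new))
```

The loss is minimized when the new model reproduces the old model's output distribution, which is why LwF preserves old-task behavior without storing old data or freezing weights.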
We demonstrate that the LwF method can benefit from a higher capacity (width-
wise) and a network that employs residual connections as well as from
augmentations. It is not obvious that the method would benefit from these
changes, as many of the other methods cannot benefit from ResNets due to the
challenges of applying batch normalization and the need to carefully control
the capacity. Moreover, not all methods benefit from augmentations in both ACC
and BWT.
Overall, our contributions are two-fold. First, we provide strong baselines
for task-incremental methods, that form a solid foundation for comparing
future methods. Second, we show the effect of added capacity, residual
architectures, and regularization in the form of augmentation on task-
incremental methods, demonstrating sometimes paradoxical behavior in which
changes expected to improve performance instead deteriorate it. We believe
that LwF’s ability to
benefit from such improvements is a strong indication that this method would
stand the test of time.
## Acknowledgments
This project has received funding from the European Research Council (ERC)
under the European Union's Horizon 2020 research and innovation programme
(grant ERC CoG 725974).
## References
* [1] Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. Memory aware synapses: Learning what (not) to forget. In Proceedings of the European Conference on Computer Vision (ECCV), pages 139–154, 2018.
* [2] François Chollet et al. Keras. https://keras.io, 2015.
* [3] Matthias De Lange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Ales Leonardis, Gregory Slabaugh, and Tinne Tuytelaars. Continual learning: A comparative study on how to defy forgetting in classification tasks. arXiv preprint arXiv:1909.08383, 2019.
* [4] S. Ebrahimi, F. Meier, R. Calandra, Trevor Darrell, and Marcus Rohrbach. Adversarial continual learning. ArXiv, abs/2003.09553, 2020.
* [5] Chrisantha Fernando, Dylan Banarse, Charles Blundell, Yori Zwols, David Ha, Andrei A Rusu, Alexander Pritzel, and Daan Wierstra. Pathnet: Evolution channels gradient descent in super neural networks. arXiv preprint arXiv:1701.08734, 2017.
* [6] Yaroslav Ganin, E. Ustinova, Hana Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky. Domain-adversarial training of neural networks. J. Mach. Learn. Res., 17:59:1–59:35, 2016.
* [7] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249–256, 2010.
* [8] David Ha, Andrew Dai, and Quoc V Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.
* [9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pages 1026–1034, 2015.
* [10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
* [11] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
* [12] Yen-Chang Hsu, Y. Liu, and Z. Kira. Re-evaluating continual learning scenarios: A categorization and case for strong baselines. ArXiv, abs/1810.12488, 2018.
* [13] Wenpeng Hu, Zhou Lin, Bing Liu, Chongyang Tao, Zhengwei Tao, Dongyan Zhao, Jinwen Ma, and Rui Yan. Overcoming catastrophic forgetting for continual learning via model adaptation. 7th International Conference on Learning Representations, ICLR 2019, pages 1–13, 2019.
* [14] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
* [15] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
* [16] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13):3521–3526, 2017.
  * [17] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
* [18] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
  * [19] Ya Le and Xuan Yang. Tiny imagenet visual recognition challenge. 2015.
* [20] Sang-Woo Lee, Jin-Hwa Kim, Jaehyun Jun, Jung-Woo Ha, and Byoung-Tak Zhang. Overcoming catastrophic forgetting by incremental moment matching. In Advances in neural information processing systems, pages 4652–4662, 2017.
  * [21] Xilai Li, Yingbo Zhou, Tianfu Wu, Richard Socher, and Caiming Xiong. Learn to Grow: A Continual Structure Learning Framework for Overcoming Catastrophic Forgetting. 2019.
* [22] Zhizhong Li and Derek Hoiem. Learning without forgetting. IEEE transactions on pattern analysis and machine intelligence, 40(12):2935–2947, 2017.
* [23] David Lopez-Paz and Marc’Aurelio Ranzato. Gradient episodic memory for continual learning. In Advances in Neural Information Processing Systems, pages 6467–6476, 2017.
* [24] Arun Mallya and Svetlana Lazebnik. Packnet: Adding multiple tasks to a single network by iterative pruning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7765–7773, 2018.
* [25] Nick Pawlowski, Andrew Brock, Matthew CH Lee, Martin Rajchl, and Ben Glocker. Implicit weight uncertainty in neural networks. arXiv preprint arXiv:1711.01297, 2017.
* [26] Amal Rannen, Rahaf Aljundi, Matthew B Blaschko, and Tinne Tuytelaars. Encoder based lifelong learning. In Proceedings of the IEEE International Conference on Computer Vision, pages 1320–1328, 2017.
* [27] Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016.
* [28] M. Salzmann, C. Ek, R. Urtasun, and Trevor Darrell. Factorized orthogonal latent spaces. In AISTATS, 2010.
* [29] Joan Serrà, Didac Suris, Marius Miron, and Alexandros Karatzoglou. Overcoming catastrophic forgetting with hard attention to the task. arXiv preprint arXiv:1801.01423, 2018.
* [30] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  * [31] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929–1958, 2014.
* [32] Rupesh K Srivastava, Jonathan Masci, Sohrob Kazerounian, Faustino Gomez, and Jürgen Schmidhuber. Compete to compute. In Advances in neural information processing systems, pages 2310–2318, 2013.
* [33] Gido M van de Ven and Andreas S Tolias. Three scenarios for continual learning. arXiv preprint arXiv:1904.07734, 2019.
* [34] Johannes von Oswald, Christian Henning, João Sacramento, and Benjamin F Grewe. Continual learning with hypernetworks. arXiv preprint arXiv:1906.00695, 2019.
* [35] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
* [36] Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 3987–3995. JMLR. org, 2017.
## Appendix A ResNets architectures
In section 4.1 of the main paper, we proposed using various ResNet
architectures for LwF: RN-20, RN-32, RN-62, WRN-20-W2, and WRN-20-W5. All
these networks share a common structure but differ in width or depth. This
structure starts with a single convolutional layer of 16 filters with a kernel
size of 3x3 and stride 1, followed by 3 groups of “blocks”. Each group is
parameterized by the number of blocks, width, and stride for the first block
in the group. The baseline widths (width factor equal to 1) of the groups are
16, 32, and 64, with strides 1, 2, and 2, respectively.
To implement the blocks, the BasicBlock class from the PyTorch framework is
employed. Each block contains 2 convolutional layers with a kernel size of 3x3
and a skip connection. The structure ends with an adaptive average pooling of
size 1x1. Moreover, each convolutional layer is followed by a batch
normalization layer and a ReLU activation function.
The parameters of the architectures used in our work are:
  * RN-20: a width factor of 1 and 3 blocks in each group.
  * RN-32: a width factor of 1 and 5 blocks in each group.
  * RN-62: a width factor of 1 and 10 blocks in each group.
  * WRN-20-W2: a width factor of 2 and 3 blocks in each group.
  * WRN-20-W5: a width factor of 5 and 3 blocks in each group.
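As a sanity check on the capacity claims, the parameter counts of these architectures can be estimated directly from the description above. This is a sketch of our reading of the structure (stem conv plus three groups of BasicBlocks with projection shortcuts); classifier heads and conv biases are omitted, so the numbers are approximate rather than the exact PyTorch counts:

```python
def conv_params(cin, cout, k=3):
    # conv weights (no bias) + batch-norm scale and shift per channel
    return cin * cout * k * k + 2 * cout

def resnet_params(blocks_per_group, width_factor):
    """Approximate parameter count for the CIFAR-style ResNets above:
    a 16-filter stem, then three groups of BasicBlocks with baseline
    widths 16, 32, 64 scaled by the width factor."""
    widths = [16 * width_factor, 32 * width_factor, 64 * width_factor]
    total = conv_params(3, 16)              # stem: 3x3 conv + BN
    cin = 16
    for cout in widths:
        for _ in range(blocks_per_group):
            total += conv_params(cin, cout) + conv_params(cout, cout)
            if cin != cout:                 # projection shortcut (1x1 + BN)
                total += conv_params(cin, cout, k=1)
            cin = cout
    return total
```

With this count, RN-62 and WRN-20-W2 land within roughly 20% of each other, consistent with the "comparable capacity" pairing used in Appendix C, while WRN-20-W5 is over an order of magnitude larger than RN-20.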
## Appendix B LwF with AlexNet and data augmentations
In the main text, the best architecture, WRN-20-W5, is tested for LwF with
data augmentations. In this section we provide results for AlexNet-like
architectures with augmentations as well; the results are given in Tab. 4. We
observe that data augmentation does not recover the performance lost to the
harmful Dropout component in AlexNet-D. However, it does provide a performance
boost for AlexNet-ND, as expected.
| | CIFAR 5-Split | CIFAR 10-Split | CIFAR 20-Split | Tiny-ImageNet 40-Split
---|---|---|---|---|---
Arch. | Aug. | BWT | ACC | BWT | ACC | BWT | ACC | BWT | ACC
AlexNet-D | | $-39.9\pm 1.4$ | $36.6\pm 1.5$ | $-52.9\pm 1.2$ | $28.1\pm 1.3$ | $-54.4\pm 1.1$ | $31.3\pm 0.8$ | $-50.5\pm 1.0$ | $25.0\pm 0.4$
AlexNet-D | ✓ | $-46.2\pm 1.8$ | $38.0\pm 1.7$ | $-56.9\pm 0.8$ | $30.1\pm 0.7$ | $-58.0\pm 0.5$ | $31.6\pm 0.3$ | $-52.6\pm 0.8$ | $25.9\pm 0.5$
AlexNet-ND | | $-1.8\pm 0.6$ | $56.6\pm 1.1$ | $-2.9\pm 0.2$ | $67.0\pm 1.0$ | $-3.1\pm 0.3$ | $75.5\pm 0.6$ | $-2.8\pm 0.3$ | $66.9\pm 0.8$
AlexNet-ND | ✓ | $-0.5\pm 0.4$ | $69.5\pm 1.1$ | $-0.7\pm 0.3$ | $76.7\pm 0.9$ | $-0.9\pm 0.2$ | $83.5\pm 0.5$ | $-1.4\pm 0.3$ | $73.2\pm 0.7$
Table 4: LwF results with AlexNet-like architectures with data augmentations. All results are produced by us and averaged over five runs, with standard deviations. D=Dropout, ND=No Dropout.
Figure 3: The evolution in time of the accuracy and the forgetting for CIFAR
20-Split with LwF and different width and depth architectures, averaged over 5
random seeds. No augmentation is used in these experiments. (a) $ACC$ (Eq. 1)
after learning task $t$ as a function of $t$. (b) $BWT$ (Eq. 2) after learning
task $t$ as a function of $t$.
## Appendix C Width vs. depth for LwF
In Fig. 3 we offer another view on the effect of different depth and width for
LwF. The results are provided for the baseline ResNet architecture, RN-20, and
two architectures of comparable capacity: one with greater depth, RN-62, and
another with greater width, WRN-20-W2. The results show that although RN-62
and WRN-20-W2 share a similar amount of forgetting, from task 2 onward RN-62
under-performs with respect to ACC.
This suggests that LwF with a deeper ResNet struggles to acquire new knowledge
while retaining previous knowledge. Comparing RN-62 with RN-20 highlights a
more severe problem, where LwF struggles to utilize deeper
networks both in terms of ACC and BWT. However, increased width has a positive
effect on performance over time, even at the price of increased forgetting.
Fortunately, we were able to mitigate this increased forgetting with data
augmentations, which not only reduced forgetting substantially but also
increased ACC.
## Appendix D EWC and IMM with WRN-20-W5
In our experiments we found EWC and IMM (both MEAN and MODE variants) to
perform poorly with ResNet architectures and specifically with WRN-20-W5. The
results for this architecture can be found in Tab. 5. As can be seen, with
WRN-20-W5 the methods are not competitive and perform worse than with the
AlexNet-like architecture, as quoted in the main paper. This performance gap
suggests that the methods require modifications in order to benefit from more
modern architectures, such as ResNet. We attribute this to the challenge
imposed by the batch-normalization layers.
| | CIFAR 5-Split | CIFAR 10-Split | CIFAR 20-Split | Tiny-ImageNet 40-Split
---|---|---|---|---|---
Method | Aug. | BWT | ACC | BWT | ACC | BWT | ACC | BWT | ACC
EWC | | $-11.0\pm 2.4$ | $46.8\pm 2.1$ | $-24.8\pm 3.6$ | $39.8\pm 2.6$ | $-33.5\pm 5.5$ | $40.9\pm 5.3$ | $-31.4\pm 2.0$ | $34.8\pm 1.6$
EWC | ✓ | $-11.6\pm 3.9$ | $60.1\pm 4.4$ | $-31.9\pm 2.6$ | $46.8\pm 2.4$ | $-45.7\pm 4.1$ | $38.2\pm 3.4$ | $-45.1\pm 3.1$ | $31.1\pm 3.5$
IMM-MEAN | | $-12.3\pm 8.5$ | $24.6\pm 8.7$ | $-3.5\pm 5.6$ | $27.3\pm 4.4$ | $-2.9\pm 1.3$ | $33.3\pm 2.0$ | $+0.2\pm 1.5$ | $28.1\pm 1.3$
IMM-MEAN | ✓ | $-16.9\pm 4.7$ | $29.3\pm 3.2$ | $-4.9\pm 2.5$ | $29.4\pm 3.1$ | $-3.3\pm 2.1$ | $30.9\pm 1.3$ | $-1.6\pm 4.0$ | $26.8\pm 3.0$
IMM-MODE | | $-22.7\pm 6.3$ | $39.4\pm 3.9$ | $-34.8\pm 4.0$ | $34.5\pm 3.1$ | $-47.3\pm 4.0$ | $30.3\pm 3.3$ | $-42.5\pm 2.1$ | $27.5\pm 1.4$
IMM-MODE | ✓ | $-39.8\pm 2.1$ | $44.0\pm 2.1$ | $-52.0\pm 3.3$ | $35.2\pm 2.7$ | $-58.8\pm 5.4$ | $30.2\pm 5.2$ | $-52.4\pm 2.7$ | $26.4\pm 2.5$
Table 5: EWC and IMM results with WRN-20-W5. All results are produced by us
and averaged over five runs, with standard deviations.
## Appendix E ACC and BWT over time
In Fig. 4 we provide the BWT and ACC scores after learning each task for
CIFAR-100 with 5 and 10 splits. These results were omitted from the main text
for brevity and provided here as complementary results.
Similarly to the results shown in the paper (main text Fig. 2), the advantage
of LwF over the baseline methods is evident. LwF can learn new tasks with a
similar level of performance to the previous ones while maintaining the
knowledge from the previous tasks. In contrast, both EWC and IMM fail to do
so. For HAT, the difference in performance between the CIFAR-100 splits, where
performance is more stable for a short sequence of tasks, could point to an
insufficient per-task capacity. However, since LwF can both learn new tasks
and maintain old ones with a similar capacity, this instead points to
under-utilization of the network capacity. Thus, we suspect that HAT is not
scalable to long task sequences even with larger networks. Although
Hyper-CL seems to have very competitive results for these splits, its
shortcoming is revealed in the main paper, looking at a longer sequence of
tasks, such as Tiny-ImageNet.
Figure 4: The evolution in time of the accuracy and the forgetting for the
best-performing setting of each method, averaged over 5 random seeds. $ACC$
(Eq. 1) after learning task $t$ as a function of $t$; $BWT$ (Eq. 2) after
learning task $t$ as a function of $t$. (a) & (b) show results over time for
CIFAR 5-Split and (c) & (d) for CIFAR 10-Split.
# The UV-brightest Lyman continuum emitting star-forming galaxy
R. Marques-Chaves1, D. Schaerer1,2, J. Álvarez-Márquez3, L. Colina3,4, M.
Dessauges-Zavadsky1, I. Pérez-Fournon5,6, A. Saldana-Lopez1, A. Verhamme1
1Geneva Observatory, University of Geneva, Chemin Pegasi 51, CH-1290 Versoix,
Switzerland
2CNRS, IRAP, 14 Avenue E. Belin, 31400 Toulouse, France
3Centro de Astrobiología (CSIC-INTA), Carretera de Ajalvir, 28850 Torrejón de
Ardoz, Madrid, Spain
4International Associate, Cosmic Dawn Center (DAWN)
5Instituto de Astrofísica de Canarias, C/Vía Láctea, s/n, E-38205 San
Cristóbal de La Laguna, Tenerife, Spain
6Universidad de La Laguna, Dpto. Astrofísica, E-38206 San Cristóbal de La
Laguna, Tenerife, Spain
E-mail: [email protected]
###### Abstract
We report the discovery of J0121$+$0025, an extremely luminous and young star-
forming galaxy ($M_{\rm UV}=-24.11$, log[$L_{\rm Ly\alpha}/\rm
erg\leavevmode\nobreak\ s^{-1}]=43.8$) at $z=3.244$ showing copious Lyman
continuum (LyC) leakage ($f_{\rm esc,abs}\approx 40\%$). High signal-to-noise
ratio rest-frame UV spectroscopy with the Gran Telescopio Canarias reveals a
high significance ($7.9\sigma$) emission below the Lyman limit ($<912$Å), with
a flux density level $f_{900}=0.78\pm 0.10\mu$Jy, and strong P-Cygni in wind
lines of O vi 1033Å, N v 1240Å and C iv 1550Å that are indicative of a young
age of the starburst ($<10$ Myr). The spectrum is rich in stellar photospheric
features, for which a significant contribution of an AGN at these wavelengths
is ruled out. Low-ionization ISM absorption lines are also detected, but are
weak ($EW_{0}\rm\simeq 1$Å) and show large residual intensities, suggesting a
clumpy geometry of the gas with a non-unity covering fraction or a highly
ionized ISM. The contribution of a foreground and AGN contamination to the LyC
signal is unlikely. Deep optical to Spitzer/IRAC $4.5\mu$m imaging show that
the spectral energy distribution of J0121$+$0025 is dominated by the emission
of the young starburst, with log($M_{\star}^{\rm burst}/M_{\odot})=9.9\pm 0.1$
and $\rm SFR=981\pm 232$ $M_{\odot}$ yr-1. J0121$+$0025 is the most powerful
LyC emitter known among the star-forming galaxy population. The discovery of
such a luminous and young starburst leaking LyC radiation suggests that a
significant fraction of LyC photons can escape from sources with a wide range
of UV luminosities, and are not restricted to the faintest ones as previously
thought. These findings might shed further light on the role of luminous
starbursts in cosmic reionization.
###### keywords:
galaxies: formation – galaxies: evolution – galaxies: high-redshift
## 1 Introduction
Lyman-$\alpha$ emitters (LAEs) and Lyman break galaxies (LBGs) are widely
studied populations of star-forming galaxies that are common in the early
Universe. Deep field surveys have been used to study these star-forming
galaxies, which are typically faint with magnitudes $R\sim 25$ AB at $z\sim
3$, corresponding to absolute UV magnitudes $M_{\rm UV}^{*}$ of about $-20$ to
$-21$ at those redshifts (for LAEs and LBGs, respectively; e.g., Ouchi et al.,
2008; Reddy & Steidel, 2009). These studies have revealed that typical
($M_{\rm UV}^{*}$) LAEs and LBGs present a wide range of properties, with
stellar masses $\log(M_{\star}/M_{\odot})\sim 8.0-10.0$ (e.g., Shapley et al.,
2001; Gawiser et al., 2007; Ono et al., 2010; Santos et al., 2020), star-
formation rates (SFR) up to a few tens $M_{\odot}$ yr-1 (e.g., Shapley et al.,
2003; Nakajima et al., 2012; Sobral et al., 2018), and low metallicity on
average (e.g., Finkelstein et al., 2011; Nakajima et al., 2013; Kojima et al.,
2017).
These star-forming galaxies are likely the dominant sources responsible for
ionizing the intergalactic medium (IGM) in the early Universe, during the so-
called Epoch of Reionization (EoR at $6<z<15$; e.g., Robertson et al. 2015),
due to their expected large hydrogen ionizing photon (hereafter Lyman
continuum; LyC with $>13.6$ eV) escape fraction, $f_{\rm esc}\rm(LyC)$.
However, it is still unclear whether the faint and numerous sources, or the
more luminous and rare ones, are the main contributors (cf., Finkelstein et
al., 2019; Naidu et al., 2020).
Recent progress has been made in selecting LyC emitters and understanding
their properties. To date, $\approx 50$ LyC-emitting star-forming galaxies
have been discovered and studied in detail at low-$z$ ($z\lesssim 0.4$; e.g.,
Borthakur et al. 2014; Izotov et al. 2016; Leitherer et al. 2016; Izotov et
al. 2018a; Izotov et al. 2018b), intermediate-$z$ ($z\sim 1.4$, Saha et al.
2020), and moderately high-$z$ ($z>2$, de Barros et al. 2016; Shapley et al.
2016; Vanzella et al. 2016; Bian et al. 2017; Steidel et al. 2018; Fletcher et
al. 2019; Rivera-Thorsen et al. 2019; Ji et al. 2020) up to $z\sim 4$
(Vanzella et al., 2018), where the IGM still allows the direct observation of
LyC photons. However, the relation between $f_{\rm esc}\rm(LyC)$ and $M_{\rm
UV}$ is still a matter of debate. Some statistical studies targeting star-
forming galaxies with $M_{\rm UV}$ between $\simeq-20$ to $-21.5$ find a clear
trend, where faint galaxies show larger $f_{\rm esc}\rm(LyC)$ than luminous
ones (Steidel et al., 2018; Pahl et al., 2021). However, these findings are in
tension with those from other works. For example, Bian & Fan (2020) find a low
$f_{\rm esc}\rm(LyC)<14\%$ ($3\sigma$) in faint LAEs ($M_{\rm
UV}\simeq-18.8$), suggesting that the LyC leakage in star-forming galaxies
does not follow such a trend. In addition, one of the strongest LyC emitters known at
high redshift, with $f_{\rm esc}\rm(LyC)\approx 60\%$ is also a very luminous
($M_{\rm UV}=-22.20$) star-forming galaxy (Ion3 at $z=4.0$; Vanzella et al.
2018).
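For orientation, $f_{\rm esc}\rm(LyC)$ estimates of this kind are typically derived from the observed 900-to-1500Å flux-density ratio, scaled by an assumed intrinsic luminosity ratio and the mean IGM transmission, with a dust term converting relative to absolute escape. A schematic sketch; all default values below are illustrative placeholders, not those adopted in the cited works:

```python
def f_esc_lyc(f900_obs, f1500_obs, int_ratio=0.33, t_igm=0.4, a1500_mag=0.0):
    """Schematic LyC escape fraction: the observed (f900/f1500) ratio
    divided by the assumed intrinsic luminosity ratio (L900/L1500)_int
    and the mean IGM transmission along the sightline; the optional
    UV attenuation A1500 (mag) converts relative to absolute escape."""
    f_rel = (f900_obs / f1500_obs) / int_ratio / t_igm
    return f_rel * 10 ** (-0.4 * a1500_mag)
```

The dominant uncertainties in practice are the stochastic IGM transmission and the model-dependent intrinsic ratio, which is why quoted escape fractions carry large error bars.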
Understanding the role of UV-luminous star-forming galaxies in cosmic
reionization requires knowledge of two key properties: the volume density
of these sources and their intrinsic properties, those governing the
production (e.g., stellar population, star-formation histories) and escape of
LyC photons (i.e. neutral gas and dust content and the geometry of the
interstellar medium, ISM). UV-luminous star-forming galaxies are rare (e.g.,
Sobral et al., 2018), yet how rare is still unknown, in particular at $z>6$.
Recent studies have found remarkably bright galaxies well into the EoR ($z>7$)
that are in excess compared to the generally observed Schechter component of
the UV luminosity function (LF; e.g., Bowler et al. 2014, 2015; Ono et al.
2018; Stefanon et al. 2019). For example, the most distant spectroscopically-
confirmed galaxy known, GN-z11 at $z\simeq 11.0$ (Oesch et al., 2016; Jiang et
al., 2021), is also very luminous in the UV ($M_{\rm UV}=-22.1$), and the
inferred volume density of this source is a factor of $>15$ higher than
predicted by models (Oesch et al., 2016).
On the other hand, known UV-luminous star-forming galaxies are scarce, in
particular those as bright as $M_{\rm UV}<-23$ (e.g., Dessauges-Zavadsky et
al., 2010; Lee et al., 2013; Harikane et al., 2020; Marques-Chaves et al.,
2020a), preventing us from studying their physical properties on a
statistical basis. While bright sources are in principle easy and optimal
targets for deep follow-up observations and detailed analysis of their
properties, identifying them is extremely challenging, as it requires large
area surveys and, in addition, spectroscopic follow-up to distinguish them
from the much more abundant active galactic nuclei (AGN), which is time
consuming.
Motivated by this, we have undertaken a search for UV-luminous star-forming
galaxies at $z>2$ in one of the widest spectroscopic surveys ever
performed, the $\sim 9300$ deg2-wide extended Baryon Oscillation Spectroscopic
Survey (eBOSS: Abolfathi et al., 2018) of the Sloan Digital Sky Survey (SDSS:
Eisenstein et al., 2011). While eBOSS was specifically designed to select
bright quasi-stellar objects (QSOs) at these redshifts, not necessarily
high-$z$ star-forming galaxies, some QSO candidates were selected and observed
based on their optical colors only (for details see: Ross et al., 2012),
mimicking the Lyman break technique. The combination of a wide area surveyed,
the limiting $r$-band magnitude of $\sim 22$ AB, which corresponds to $M_{\rm
UV}\lesssim-23$ at $z\simeq 2.5$, and the available optical spectra for every
source, makes eBOSS an alternative and efficient survey to search for and
explore such luminous star-forming galaxies.
In Marques-Chaves et al. (2020b) we presented the first result of this
project, where an extremely luminous star-forming galaxy, BOSS-EUVLG1 at
$z=2.47$, was analysed. First selected as an SDSS QSO, follow-up observations
revealed that the large luminosities in the UV ($M_{\rm UV}=-24.4$) and
nebular emission (log(L[Ly$\alpha$, erg s-1]) = $44.0$) in BOSS-EUVLG1 are
powered by a vigorous starburst with SFR $\simeq$ 1000 $M_{\odot}$ yr-1, a
young age ($\simeq 4$ Myr) and very high sSFR ($\sim 100$ Gyr-1), with no
evidence for a dominant contribution of a type-I/II AGN. Interestingly,
BOSS-EUVLG1 shows properties that are common in LyC leakers, such as low dust
content ($E(B-V)\simeq 0.07$ and log($L_{\rm IR}$/$L_{\rm UV})<-1.2$) and a
highly ionized ISM ([O iii] 5007Å / [O ii] 3728Å $\simeq 18$). In addition, it
presents very weak low-ionization ISM absorption, suggestive of a low
geometric covering fraction of the neutral gas, which could be related to the
powerful starburst-driven ionized outflow detected in H$\alpha$ in this galaxy
(Álvarez-Márquez et al., 2021). However, no direct information on the LyC
leakage is known for this luminous source.
In this work, we present the discovery and detailed analysis of a new luminous
source, SDSS J012156.09$+$002520.3 ($\alpha$, $\delta$ [J2000] = 20.4837∘,
0.4223∘), hereafter J0121$+$0025, an extremely UV-luminous ($M_{\rm
UV}=-24.1$) star-forming galaxy at $z=3.244$ showing copious LyC leakage
($f_{\rm esc,abs}\approx 40\%$). The paper is structured as follows. The
discovery and follow-up observations are presented in Section 2. The analysis
of the rest-frame UV spectroscopic observations and imaging data are presented
in Section 3. In Section 4 we discuss the LyC properties of J0121$+$0025 and
compare them with those from other LyC emitters, and, finally, in Section 5 we
present the summary of our main findings. Throughout this work, we assume a
Salpeter (1955) initial mass function (IMF) and a concordance cosmology with
$\Omega_{\rm m}=0.274$, $\Omega_{\Lambda}=0.726$, and $H_{0}=70$ km s-1 Mpc-1.
All magnitudes are given in the AB system.
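Under the adopted cosmology, the conversion from apparent to absolute UV magnitude can be sketched numerically. This is a rough illustration with a simple comoving-distance integral and only a flat-spectrum K-correction, not the exact pipeline used in the paper:

```python
import math

C_KMS = 299792.458  # speed of light in km/s

def lum_dist_mpc(z, h0=70.0, om=0.274, ol=0.726, n=10000):
    """Luminosity distance in Mpc for the flat LCDM cosmology adopted
    here, via a midpoint-rule comoving-distance integral."""
    dz = z / n
    integral = sum(
        dz / math.sqrt(om * (1.0 + (i + 0.5) * dz) ** 3 + ol)
        for i in range(n)
    )
    return (1.0 + z) * (C_KMS / h0) * integral

def absolute_mag(m_app, z):
    """Absolute magnitude from apparent AB magnitude, applying only the
    flat-spectrum (f_nu = const) K-correction; real analyses also
    include band-shifting and IGM attenuation terms."""
    dl_pc = lum_dist_mpc(z) * 1e6
    return m_app - 5.0 * math.log10(dl_pc / 10.0) + 2.5 * math.log10(1.0 + z)
```

With these simplifications, an $r\sim 22$ AB source at $z\simeq 2.5$ indeed comes out at $M_{\rm UV}\lesssim-23$, consistent with the limiting-magnitude argument of Section 1.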
Figure 1: Cutout of J0121$+$0025 from the Subaru/HSC $R$-band image (left).
The orientation of the GTC/OSIRIS long slit is marked in blue. The slit also
includes a low-$z$ star-forming galaxy with a photometric redshift of $1.10\pm
$0.27$, located $\simeq 6.6^{\prime\prime}$ North of J0121$+$0025. The
spectroscopic $1^{\prime\prime}$-radius BOSS fiber is also marked in red.
J0121$+$0025 shows a compact morphology ($\simeq 0.55^{\prime\prime}$ FWHM)
and it is only barely resolved in this image (with a PSF of $\simeq
0.50^{\prime\prime}$ FWHM). Middle and right panels show the residuals from
the Galfit modeling of J0121$+$0025 in the same band using Sersic and PSF
profiles, respectively. As shown in these panels, J0121$+$0025 is better
modeled by a Sersic profile than by a PSF, with the r.m.s. of the residuals in
the region encompassing J0121$+$0025 reduced by $\approx 50\%$.
## 2 Discovery and Follow-up Observations
J0121$+$0025 was discovered as part of our search for Extremely UV-Luminous
Galaxies (EUVLGs, $M_{\rm UV}$<$-$23) within the eBOSS survey (Abolfathi et
al., 2018) of the SDSS (Eisenstein et al., 2011). J0121$+$0025 is part of a
large sample of $\sim$70 very luminous star-forming galaxies that were
previously classified as QSOs in the Data Release 14 Quasar catalog (Pâris et
al., 2018). This sample also includes BOSS-EUVLG1 at $z=2.469$ recently
analysed in Marques-Chaves et al. (2020b) and in Álvarez-Márquez et al.
(2021). The sample and the selection techniques will be presented in a
separate work (R. Marques-Chaves in prep.). Briefly, the selection consists of
searching for narrow Ly$\alpha$ profiles in optical SDSS spectra of
$z\gtrsim 2$ sources, and blue/flat optical to mid-IR colors, i.e., properties
that are not expected to be present in QSOs.
The BOSS spectrum of J0121$+$0025 (plate-mjd-fiberid: 4228-55484-818) shows
features characteristic of an unobscured, luminous star-forming galaxy,
rather than an AGN. In particular, the shallow BOSS spectrum shows a narrow
Ly$\alpha$ profile ($\simeq 350$ km s-1 full width at half maximum, FWHM) and
evidence of P-Cygni wind profiles in N v 1240Å and C iv 1550Å, which could be
indicative of a young starburst. In addition, the BOSS spectrum
shows emission at $\lambda_{\rm obs}<3870$Å which, although detected with low
significance ($\simeq 2.3\sigma$), could be an indication of LyC leakage
($\lambda_{\rm 0}<912$Å). Given this, J0121$+$0025 was selected as a priority
target for deep follow-up spectroscopy.
### 2.1 GTC/OSIRIS rest-frame UV spectroscopy
Optical spectroscopy was obtained with the Optical System for Imaging and low-
Intermediate-Resolution Integrated Spectroscopy instrument
(OSIRIS; http://www.gtc.iac.es/instruments/osiris/) on the 10.4 m Gran
Telescopio Canarias (GTC). The data were obtained in service mode over two
nights, 2020 August 18 and 19, under dark-Moon and sub-arcsec ($\simeq
0.8^{\prime\prime}-0.9^{\prime\prime}$ seeing) conditions as part of the GTC program
GTC21-20A (PI: R. Marques-Chaves). The R1000B grism was used, with a
dispersion of 2.12Å, providing full spectral coverage of 3600$-$7500Å, which
corresponds to 860$-$1800Å in the rest frame at $z\simeq 3.25$. The OSIRIS
1.2′′-wide slit was centered on J0121$+$0025 and oriented at the parallactic
angle (see Figure 1). With this configuration, the instrumental
resolution is $\rm R\sim 800$, or $\simeq 400$ km s$^{-1}$. In total, 12 exposures of
900 s were acquired.
Data were processed with standard Iraf (http://iraf.noao.edu/) tasks. Each
individual two-dimensional spectrum was bias-subtracted and flat-field
corrected. The wavelength calibration was done using HgAr+Ne+Xe arc-lamp data
obtained on both nights. Individual 2D spectra were background-subtracted using
sky regions around J0121$+$0025 ($\simeq 10^{\prime\prime}$ on both sides).
The slit includes a faint ($R=23.6$, $z_{\rm phot}=1.10\pm 0.27$) star-forming
galaxy $\simeq 6.6^{\prime\prime}$ north of J0121$+$0025 (see Figure 1), so
this region was excluded from the background statistics. The continuum of this
source is detected, but no emission line is seen within the OSIRIS spectral
range. Individual 1D spectra were extracted, stacked, and corrected for the
instrumental response using observations of the standard star Ross 640
taken on both nights. Galactic extinction was accounted for by adopting
the extinction curve of Cardelli et al. (1989)
and the extinction map of Schlafly & Finkbeiner (2011). Finally, the
flux of the spectrum was matched to that obtained from photometry in the
$R$-band to account for slit losses, and the final spectrum was corrected for
telluric absorption using the Iraf telluric routine. The reduced spectrum of
J0121$+$0025 is shown in Figure 2 and presents a very high signal-to-noise ratio
in the continuum, $\rm SNR\simeq 20-30$ per spectral bin.
### 2.2 Ancillary data
J0121$+$0025 falls in the SDSS equatorial Stripe 82, and the rich ancillary
data available in this field are used. These consist of optical imaging from
the Hyper Suprime-Cam (HSC) on Subaru in the $G$, $R$, $Z$, and $Y$ bands from
the second data release of the HSC Subaru Strategic Program (Aihara et
al., 2019), and MegaCam imaging in the $I$ band from the Canada-France-Hawaii
Telescope (CFHT), processed and stacked using the MegaPipe image stacking
pipeline (Gwyn, 2008). The optical images are of excellent quality, both in terms
of seeing conditions ($0.50^{\prime\prime}-0.85^{\prime\prime}$ FWHM) and
depth, reaching $5\sigma$ limits of $\simeq 25.5$ or deeper in all bands.
J0121$+$0025 is detected in all images. Figure 1 shows a cutout of
J0121$+$0025 from the Subaru/HSC $R$-band. J0121$+$0025 shows a compact
morphology, barely resolved ($\simeq 0.55^{\prime\prime}$ FWHM) only in the
image with the best seeing, the $R$-band, which has a point spread function (PSF) of
$0.50^{\prime\prime}$ FWHM measured using several stars in the field. The lack
of lensing structures, such as multiple images or arc-like morphologies, and
the compact morphology make it unlikely that J0121$+$0025 is being magnified by
gravitational lensing. Using aperture photometry with a diameter of
$2.5\times\rm FWHM$, J0121$+$0025 shows a flat spectral energy distribution
(SED) in the optical, with magnitudes of $\simeq 21.60$ AB in the $R$- to
$Y$-bands, consistent with those from SDSS photometry. Table 1 summarizes the
photometry of J0121$+$0025.
Table 1: Optical to mid-IR photometry of J0121$+$0025.

Band | $\lambda_{\rm eff}$ ($\mu$m) | Magnitude (AB) | Telescope / Instrument
---|---|---|---
$G$ | 0.47 | $21.98\pm 0.08$ | Subaru / HSC
$R$ | 0.61 | $21.60\pm 0.05$ | Subaru / HSC
$I$ | 0.77 | $21.57\pm 0.06$ | CFHT / MEGACAM
$Z$ | 0.89 | $21.53\pm 0.08$ | Subaru / HSC
$Y$ | 1.00 | $21.58\pm 0.08$ | Subaru / HSC
$J$ | 1.25 | $21.87\pm 0.23$ | VISTA / VIRCAM
$K_{\rm s}$ | 2.14 | $21.46\pm 0.18$ | VISTA / VIRCAM
$I1$ | 3.56 | $21.61\pm 0.16$ | Spitzer / IRAC
$I2$ | 4.51 | $22.01\pm 0.20$ | Spitzer / IRAC
Near-IR imaging is also available in this field. J0121$+$0025 was observed
with the VISTA InfraRed CAMera (VIRCAM) as part of the VISTA-CFHT Stripe 82
(VICS82) survey (Geach et al., 2017). It is detected in the $J$- and $K_{\rm
s}$-bands, with magnitudes of $21.87\pm 0.23$ and $21.46\pm 0.18$,
respectively. J0121$+$0025 is also included in the Spitzer/HETDEX Exploratory
Large-Area (SHELA) survey (Papovich et al., 2016) and is detected in the first
two IRAC channels, at $3.6\mu$m (I1) and $4.5\mu$m (I2), with magnitudes of
$21.61\pm 0.16$ and $22.01\pm 0.20$, respectively. Finally, this region has
been imaged in X-rays by XMM-Newton with a total integration time of 2.6
ks. However, J0121$+$0025 is not detected, with an X-ray flux limit of $\simeq
6\times 10^{-15}$ erg s$^{-1}$ cm$^{-2}$ (0.2$-$2 keV), corresponding to a
$3\sigma$ luminosity limit of $1.2\times 10^{45}$ erg s$^{-1}$ at $z=3.25$
(assuming a photon index $\Gamma=1.7$).
Figure 2: GTC/OSIRIS rest-frame UV spectrum of J0121$+$0025 (black) and its
corresponding $1\sigma$ uncertainty (blue). Yellow dotted lines below
the spectrum identify several photospheric absorption lines, some of them
resolved and detected with high significance, from which the systemic redshift
was derived, $z_{\rm sys}=3.244\pm 0.001$. Low-ionization ISM absorption, wind
lines in the form of P-Cygni profiles, and nebular emission are marked in blue, green,
and magenta lines, respectively. The best-fit Starburst99 model, with an age of 3
Myr, $Z_{\star}/Z_{\odot}=0.4$ and $E(B-V)=0.04\pm 0.02$, is shown in red. The
S99 spectrum blueward of Ly$\alpha$ has been corrected for Lyman-forest
absorption, using the mean IGM transmission and its standard deviation, $\rm
T(IGM)=0.60\pm 0.19$ (see Section 3.3). The spectral region in green
corresponds to the emission at $\lambda_{0}<912$Å related to LyC leakage.
## 3 Results
### 3.1 The nature of the ionizing source: SFG or AGN?
The brightness of J0121$+$0025 ($R=21.6$) and its corresponding UV luminosity
at $z=3.25$, $M_{\rm UV}=-24.1$, rival those of bright QSOs at similar redshift
(e.g. Pâris et al., 2018). Therefore, it is critical to first investigate the
nature of the ionizing source of J0121$+$0025.
The high-SNR rest-frame UV spectrum (Figure 2) is rich in absorption and
emission features that are common in young starbursts rather than in AGNs.
Features associated with different components of the galaxy, such as stars,
nebular emission and the ISM, are clearly identified and marked in Figure 2
with different colors.
Figure 3: Spectral features detected in J0121$+$0025. Top left: Ly$\alpha$
spectral profile seen in the GTC/OSIRIS (black) and BOSS (green) spectra. It
shows a narrow profile with an intrinsic $\rm FWHM=350\pm 40$ km s$^{-1}$, with its
peak redshifted by $\simeq 120$ km s$^{-1}$ relative to the systemic velocity. The top
middle and right panels show the N v and O vi wind lines in the form of
P-Cygni profiles (black) and the best-fit S99 model (red). Bottom left: stellar
photospheric lines used to derive the systemic redshift. Bottom middle:
profiles of the low-ionization ISM lines Si ii 1260Å and C ii 1334Å (blue and
red, respectively). These lines are weak and have their centroids blueshifted
with respect to the systemic velocity, by $\simeq-460$ km s$^{-1}$ and $\simeq-510$ km
s$^{-1}$, respectively. The spectrum also shows two other absorption lines (marked
with dashed lines), whose nature is still unclear but which are likely not physically
associated with J0121$+$0025 (e.g., outflows). Bottom right: peculiar triple-peaked
profile of the He ii line (yellow). S99 and BPASS models are also
shown in red and blue, respectively, but they fail to reproduce the observed
emission.
In particular, stellar wind P-Cygni profiles and photospheric absorption lines
are detected with high significance (green and yellow dashed lines in Figure
2). The detection of photospheric lines in J0121$+$0025 indicates
unambiguously that the UV luminosity is dominated by stellar emission rather
than an AGN. We identify more than ten photospheric features. Some of them are
resolved and detected with high significance (e.g., C ii 1324Å, O iv 1343Å,
and S v 1501Å; see Figure 3). We use these to determine the systemic redshift
of J0121$+$0025, $z_{\rm sys}=3.244\pm 0.001$. Others are seen in blends of
multiple transitions (e.g., Si iii 1417Å, C iii 1427Å and Fe v 1430Å at
$\lambda_{0}\simeq 1415-1435$Å). These stellar absorption lines are
intrinsically weak in star-forming galaxies, with $EW_{0}$ typically well
below 1Å (e.g., Shapley et al., 2003; Steidel et al., 2016; Rigby et al.,
2018). As they are formed in the photospheres of hot stars and are seen in
absorption, the background radiation must be dominated by starlight,
otherwise they would not be detected. Even a small contribution of an AGN to
the UV continuum ($\lesssim 25$%), which is featureless in these spectral
regions, would make these lines undetectable at the SNR of our spectrum. In
addition, the observed P-Cygni profiles in N v 1240Å and C iv 1550Å are
also well reproduced by stellar models with a very young age ($\simeq
3$ Myr burst; see Figure 3 and Section 3.2 for details), similar to those seen
in other very young starbursts (e.g. Rivera-Thorsen et al., 2019; Vanzella et
al., 2020), some of them also very or extremely luminous (Vanzella et al., 2018;
Marques-Chaves et al., 2020b). While some rare AGNs, such as broad or narrow
absorption line QSOs (BAL/NAL QSOs), can show N v and C iv profiles mimicking
stellar P-Cygni profiles, arising from the combination of broad emission and
blueshifted absorption (see for example Bentz et al., 2004; Appenzeller et al.,
2005), photospheric lines are not present in the spectra of AGNs.
The rest-frame UV morphology of J0121$+$0025 appears compact, but there is
evidence of a resolved structure. In the image with the best seeing
($R$-band from Subaru, $0.50^{\prime\prime}$ FWHM), J0121$+$0025 appears
marginally resolved with $\rm FWHM\simeq 0.55^{\prime\prime}$, which
corresponds to $\simeq 1.5-2.0$ proper kpc after accounting for the PSF. Using
Galfit (Peng et al., 2002), the light distribution of J0121$+$0025 is better
modeled with a Sersic profile than with a PSF model, with residuals in the region
encompassing J0121$+$0025 reduced by $\simeq 50\%$ (Figure 1). This suggests
that the source is spatially resolved. The best-fit Sersic model
gives an effective radius $r_{\rm eff}=0.6$ pix, which corresponds to $r_{\rm
eff}=0.1^{\prime\prime}$ for the Subaru pixel scale of
$0.168^{\prime\prime}$/pix, and thus $r_{\rm eff}\sim 0.8$ kpc at $z=3.244$.
Although it has been shown that Galfit can recover effective radii down to
$\sim 0.5$ pix if the PSF is properly known and the source is bright enough
(see Vanzella et al. 2017), we conservatively assume $r_{\rm eff}<1$ kpc.
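The angular-to-physical conversion above can be checked with a short numerical integration of the angular-diameter distance. The sketch below assumes a flat $\Lambda$CDM cosmology with $H_{0}=70$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_{\rm m}=0.3$ (the adopted cosmology is not stated in the text, so these parameters are an assumption):

```python
import math

# Assumed flat LambdaCDM cosmology (not stated in the text)
H0, Om, Ol = 70.0, 0.3, 0.7   # km/s/Mpc
c = 299792.458                # speed of light, km/s
z = 3.244

def E(zp):
    """Dimensionless Hubble parameter for flat LambdaCDM."""
    return math.sqrt(Om * (1 + zp)**3 + Ol)

# Comoving distance: D_C = (c/H0) * integral_0^z dz'/E(z') (midpoint rule)
n = 100000
dz = z / n
D_C = (c / H0) * sum(dz / E((i + 0.5) * dz) for i in range(n))  # Mpc
D_A = D_C / (1 + z)                                             # angular-diameter distance, Mpc

# Physical scale and the effective radius quoted in the text (0.1")
kpc_per_arcsec = D_A * 1e3 * math.pi / (180 * 3600)
r_eff_kpc = 0.1 * kpc_per_arcsec

print(f"{kpc_per_arcsec:.2f} kpc/arcsec, r_eff = {r_eff_kpc:.2f} kpc")
```

With these assumptions the scale is $\approx 7.5$ kpc arcsec$^{-1}$, so $r_{\rm eff}=0.1^{\prime\prime}$ corresponds to $\approx 0.75$ kpc, consistent with the quoted $\sim 0.8$ kpc.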
From another perspective, the Ly$\alpha$ line shows a narrow profile with an
intrinsic FWHM $\simeq 350$ km s$^{-1}$, compatible with star-forming galaxies (see
Figure 3 and Section 3.2.3). The AGN population typically shows broad
Ly$\alpha$ profiles, of up to several hundreds or thousands of km s$^{-1}$, even in the
case of narrow-line AGNs (e.g., Hainline et al., 2011). In addition, the
observed Ly$\alpha$ equivalent width, flux and corresponding luminosity can be
well explained by star formation alone (discussed in Section 3.2), without the
need to invoke an AGN contribution. He ii 1640Å is also detected, showing a
broad profile with $\rm FWHM\simeq 2500$ km s$^{-1}$. Because He ii is a non-resonant
(recombination) line, its origin is likely stellar and not nebular (from an
AGN); otherwise we would expect to detect a similar or even broader profile in
the resonant Ly$\alpha$ line, which is not the case. This will be discussed in
more detail in Section 3.2. Nebular emission of [O iii] 1666Å is also detected
(with low significance) and shows a narrow profile, not resolved in our OSIRIS
spectrum ($<450$ km s$^{-1}$, Figure 2). This line is not present in the spectra
of typical AGNs.
The presence of a UV-faint or an obscured type-II AGN is more difficult to
exclude. Unfortunately, we cannot use rest-frame UV and optical line
diagnostics (e.g., Baldwin et al., 1981; Nakajima et al., 2018) to
discriminate between star formation and an AGN, as these lines are not covered by
the OSIRIS spectrum. However, the mid-IR photometry of J0121$+$0025 disfavours
such an AGN contribution (top panel of Figure 4). J0121$+$0025
shows a flat/blue spectral energy distribution (SED) from the rest-frame UV to
near-IR, with $R-I2=-0.41\pm 0.21$ and $I1-I2=-0.40\pm 0.31$, where the $R$, $I1$
and $I2$ bands probe rest-frame wavelengths of $\simeq 0.16\mu$m, $0.84\mu$m
and $1.06\mu$m, respectively. As shown in Figure 4, the optical to mid-IR
colors of J0121$+$0025 place it far away from the locus of AGNs at similar
redshift (Pâris et al., 2018). AGNs tend to have red optical-to-mid-IR SEDs
due to the rising emission at $\lambda_{0}\gtrsim 1\mu$m (rest frame) originating
from the dust torus (Assef et al., 2013). We also check for possible AGN variability
in the optical photometry. In the bottom panel of Figure 4 we
compare the observed magnitudes in the $g$, $r$ and $i$ bands of J0121$+$0025
from Subaru, CFHT, SDSS, DECaLS and Pan-STARRS1, which probe different epochs,
from MJD 51544 to 58362 ($\sim 18$ years). No variability is detected in
J0121$+$0025. Lastly, J0121$+$0025 is not detected in X-rays in the 2.6 ks
XMM-Newton data, although the corresponding $3\sigma$ limit,
$L_{\rm(0.2-2)keV}=1.2\times 10^{45}$ erg s$^{-1}$, is not deep enough to further
explore possible X-ray emission from an AGN. Note that significant X-ray
emission from star formation is still expected in J0121$+$0025: assuming $\rm
SFR=981\,M_{\odot}$ yr$^{-1}$ (see Section 3.4) and following Grimm et al. (2003), we
expect X-ray emission from star formation of $L_{\rm x}(\rm SFR)\sim 10^{43}$
erg s$^{-1}$.
Figure 4: Top: Comparison between the 3.4$\mu$m - 4.6$\mu$m and 0.6$\mu$m -
4.6$\mu$m colors of J0121$+$0025 (blue) and those from SDSS/BOSS QSOs (Pâris
et al., 2018) at similar redshift (red). J0121$+$0025 shows bluer colors than
typical QSOs. For comparison, the young starburst BOSS-EUVLG1 ($\simeq$4 Myr,
Marques-Chaves et al. 2020b) is also shown in yellow. Bottom: observed
magnitudes of J0121$+$0025 in various filter bands ($g$, $r$ and $i$) with
different telescopes and epochs. No variability is detected in J0121$+$0025.
Overall, the shape of the rest-frame UV spectrum, the detection of stellar
features (photospheric absorption and wind lines), the resolved morphology and
the multi-wavelength SED strongly suggest that the luminosity of J0121$+$0025 is
powered by a vigorous starburst, and that a significant contribution of
an AGN at these wavelengths is unlikely.
### 3.2 Rest-frame UV properties
#### 3.2.1 Young stellar population: age, metallicity and attenuation
Among the most prominent features in the spectrum of J0121$+$0025 are the
P-Cygni profiles of the wind lines, which are much stronger than in typical
LBGs (e.g., Shapley et al., 2003). These lines are produced by strong outflows
of material from the most massive stars, and their strength and spectral shapes
depend strongly on the age and metallicity (and on the initial mass function,
IMF) of the stellar population, with N v and C iv being by far the most sensitive
features (Chisholm et al., 2019).
To infer the properties of the young stellar population in J0121$+$0025, the
observed line profiles of N v and C iv are compared to those obtained with the
spectral synthesis code Starburst99 (S99: Leitherer et al., 1999), following
the same methodology described in Marques-Chaves et al. (2018) and Marques-
Chaves et al. (2020a). We use S99 instead of BPASS models (Stanway et al.,
2016), because the latter are less able to match the details of the P-Cygni
absorption of C iv (see discussion and Figure 5 in Steidel et al. 2016).
Briefly, we generate high-resolution (0.4Å) UV spectra using standard Geneva
tracks with a grid of metallicities ($Z_{\star}/Z_{\odot}$, where
$Z_{\odot}=0.02$) of 0.05, 0.2, 0.4 and 1, and burst ages from 1 Myr to 30
Myr. An IMF with a power-law slope $\alpha=-2.35$ over the mass range
$0.5<M_{\star}/M_{\odot}<100$ is considered. The S99 outputs are redshifted to
$z=3.244$, smoothed to the spectral resolution of the OSIRIS spectrum and
rebinned to the spectral bin of 2.12Å. Dust attenuation is also taken into
account, considering values of $E(B-V)_{\star}$ ranging from 0 to 0.2 and using
the Calzetti et al. (2000) extinction curve and its extension to short
wavelengths ($<0.15\mu$m) provided by Reddy et al. (2016). Spectral windows
that are free of absorption/emission features (Rix et al., 2004) and of strong
sky-subtraction residuals are used to scale the flux of the S99 models. We then
compare the observed N v and C iv profiles with those from S99, performing a
$\chi^{2}$ minimization over the spectral ranges $1225-1245$Å for N v and
$1528-1538$Å for C iv, excluding from the fit spectral regions that could be
affected by interstellar absorption or nebular emission, which is particularly
relevant for C iv.
The wind lines of N v and C iv are well reproduced by a $\simeq 3$ Myr burst of
star formation with $Z_{\star}/Z_{\odot}\simeq 0.4$ (red in Figure 2). A color
excess of the stellar continuum $E(B-V)_{\star}=0.04\pm 0.02$ is also
inferred, which is compatible with the observed UV slope, $\beta_{\rm
UV}=-2.05\pm 0.10$. Scaling the best-fit S99 model to $M_{\rm UV}=-24.1$
leads to a burst mass of log($M_{\star}/M_{\odot})=9.8$. According to S99, the
number of O-type stars is $\sim 8\times 10^{5}$, yielding an intrinsic ionizing
photon production rate $N_{\rm int}\rm(LyC)\simeq 1.4\times 10^{55}$ s$^{-1}$ and
an ionizing photon production efficiency, $\xi=N_{\rm int}$(LyC) / $L_{\rm UV,int}$, of
log($\xi)=25.2$.
Considering the full spectral range, the overall agreement between the best-
fit model and the observed spectrum is mixed. Some stellar features are well
fitted, such as the N v and C iv P-Cygni profiles and some photospheric absorption
lines, while others show poorer agreement. The profile of Si iv 1393,1402Å shows evidence of
a P-Cygni contribution (in addition to the ISM absorption), but the model
underpredicts it, which could suggest an additional contribution from a slightly
older stellar population with age $\simeq 5$ Myr (see Figure 6 in Leitherer et
al., 2001). It is also interesting to note that the OSIRIS spectrum shows
evidence of a P-Cygni profile around the photospheric blanketing by Fe v and O v
at $\simeq 1360-1375$Å, but it appears slightly blueshifted with respect
to the predicted S99 model, by $\simeq 400-500$ km s$^{-1}$. On the other hand,
other stellar features are relatively well reproduced, such as the region
around $\simeq 1420-1430$Å from the blended emission of Si iii, Fe v and C iii
transitions (also called the "1425" index), the photospheric S v 1501Å line,
and the Fe iv complex around $\sim 1600$Å (see Figure 2). The model also
predicts a relatively strong P-Cygni profile in O vi 1031,1033Å, but it is still
weak compared to the observed one (Figure 3), which could indicate an even younger
stellar population ($\leq 2$ Myr). Note however that the profile of O vi is
likely affected by the contribution of Ly$\beta$ absorption and by IGM
attenuation, which could impact the observed profile.
Overall, the best-fit S99 model with $Z_{\star}/Z_{\odot}=0.4$, an age of 3 Myr
and $E(B-V)_{\star}=0.04$ fits the observed spectrum of J0121$+$0025 reasonably
well, in particular the wind lines N v and C iv, which are the features most
sensitive to these parameters (Chisholm et al., 2019). Note however
that the inferred $Z_{\star}$ and age are model-dependent and should be
treated with caution and considered approximate values. This is
particularly relevant for the metallicity, which is less constrained, as our
analysis is limited to discrete models with a set of four metallicities.
Consequently, we can only say that the model with $Z/Z_{\odot}=0.4$ is favoured
with respect to the other models ($Z/Z_{\odot}=0.05,0.2,1$). In addition, we
are assuming a single-burst model, which might not be realistic (nor might a
continuous star-formation history be). Nevertheless, the age is much better
constrained and should be $<8$ Myr; otherwise the wind line of N v would
appear much weaker ($\simeq 10$ Myr) or almost non-existent ($>10$ Myr). The
same applies if a continuous star-formation history is assumed: in this case,
the redshifted emission of N v could be well described with a continuous SFH
with an age up to $\simeq 20$ Myr, but not the corresponding blueshifted
absorption, which would be underestimated for ages $\gtrsim 10$ Myr due to the
increasing contribution of B-type stars to the UV continuum. In addition, the
blue mid-IR color $I1-I2=-0.4\pm 0.31$ strongly supports a very young
stellar population, which would be difficult to explain with a slightly older
stellar population ($\gtrsim 15-20$ Myr) and a continuous star-formation
rate.
#### 3.2.2 Broad He ii emission
The He ii 1640Å emission is shown in detail in the bottom right panel of
Figure 3 and presents a complex morphology, characterized by a broad profile
with two absorption features in the central part of the line, forming a triple-peaked
emission. Fitting a Gaussian profile and excluding the two absorption features
from the fit, we measure $\rm FWHM\simeq 2500$ km s$^{-1}$. Because of its
broadness, this emission is likely stellar in origin; otherwise we would
expect a similar or even broader profile in other nebular lines such as the
resonant Ly$\alpha$ line. However, this is not the case, as Ly$\alpha$ shows a
narrow profile ($\simeq 350$ km s$^{-1}$ FWHM, see Section 3.2.3). The nebular [O
iii] 1666Å line is also detected and shows an unresolved profile in the OSIRIS
spectrum ($<450$ km s$^{-1}$ FWHM, see Figure 2).
We measure a rest-frame equivalent width $EW_{0}=3.2\pm 0.3$Å for He ii
(corresponding to the yellow region of Figure 3), which is much larger than
that inferred from the $z\sim 3$ LBG composite spectrum of Shapley et al. (2003)
($EW_{0}^{\rm LBGs}=1.3\pm 0.3$Å). The He ii emission in J0121$+$0025 is also
stronger than the average $EW_{0}\simeq 2.5$Å found in some extreme Wolf-Rayet
(WR) star clusters in the local Universe (Chandar et al. 2004; although see
the cases of NGC 3125-1 with $EW_{0}\simeq 7.4$Å, or the dwarf galaxy II Zw 40
with $EW_{0}\simeq 7.1$Å, Leitherer et al. 2018). As shown in Figure 3, the
best-fit S99 model clearly underpredicts the strength of the observed He ii
profile. In fact, Brinchmann et al. (2008) predict $EW_{0}\rm(S99)=0.3$Å for
an S99 burst model with $Z_{\star}/Z_{\odot}=0.4$ and an age of 3 Myr. Even
considering an extreme case with a continuous star-formation history and
$Z_{\star}=Z_{\odot}$, S99 models predict $EW_{0}\rm(S99)\leq 2.4$Å (see
Figure 2 of Brinchmann et al., 2008). For comparison, a BPASS binary model
(Stanway et al., 2016) with the same age and metallicity is also shown in
Figure 3; although it predicts stronger stellar He ii emission than S99
models (see also Steidel et al. 2016), it still underpredicts the observed
emission in J0121$+$0025.
The strength of He ii in J0121$+$0025 raises the question of the
presence of more exotic stellar populations. For example, He ii appears very
strong in the spectra of very massive stars (VMS, $>100M_{\odot}$) in the
central cluster R136 of the 30 Doradus star-forming region (see Crowther et
al., 2016). In fact, the contribution of such massive stars has been proposed
to explain the large He ii $EW_{0}$'s in a few local star-forming galaxies
(with $EW_{0}\simeq 2.0-4.7$Å, Senchyna et al. 2021). Interestingly, the
complex triple-peaked He ii profile seen in the spectrum of J0121$+$0025
resembles that observed in two star-forming galaxies analysed by Senchyna et
al. (2021), namely SB 179 and SB 191 (their Figure 6), which, according to these
authors, could be the product of rapid rotation of rare Onfp stars (e.g.,
Walborn et al., 2010). Investigating in detail the nature of the He ii line
and the WR and VMS content (and possible nebular contribution) is beyond the scope
of this work, as it requires follow-up observations (e.g., high-spectral-
resolution observations of the He ii line and near-IR spectroscopy to put strong
constraints on the metallicity), as well as updated stellar models including
very massive stars.
#### 3.2.3 The Ly$\alpha$ line
The Ly$\alpha$ line in J0121$+$0025 shows a spectrally unresolved profile in
the OSIRIS spectrum ($R\simeq 800$), but it is slightly resolved in the higher-
resolution BOSS spectrum (Figure 3). Fitting a Gaussian profile, we measure an
intrinsic $\rm FWHM=350\pm 40$ km s$^{-1}$, after correcting for the instrumental
broadening ($\simeq 150$ km s$^{-1}$), and a rest-frame equivalent width
$EW_{0}\rm(Ly\alpha)=14\pm 3$Å. The Ly$\alpha$ line has its peak close to the
systemic redshift, redshifted by $v_{\rm peak}\simeq 120\pm 50$ km s$^{-1}$. It is
still not clear why the measured $EW_{0}\rm(Ly\alpha)$ appears so low
compared to the intrinsic $EW_{0}^{\rm int}\rm(Ly\alpha)\sim 100$Å expected
for a $\simeq 3$ Myr burst at this metallicity (Schaerer, 2003).
Roughly, this yields a Ly$\alpha$ escape fraction of $\sim 14\%$, which is
lower than $f_{\rm esc,abs}\rm(LyC)\approx 40\%$ (see Section 3.3).
Considerable fiber/slit losses, strong IGM attenuation near $\lambda_{0}\simeq
1215$Å, destruction of Ly$\alpha$ photons by dust, and/or a large $f_{\rm esc}$
of ionizing photons could in principle explain such differences. In addition,
there could be more ionizing photons available than H i atoms to be ionized, i.e.,
the ISM in J0121$+$0025 could be mostly ionized, approaching a density-
bounded geometry. Such a scenario should be further investigated using, e.g.,
the [O ii] 3727,29Å and [O iii] 5008Å lines ([O iii]/[O ii] ratio). In
addition, the H$\beta$ line could provide constraints on the properties of the
Ly$\alpha$ emission, both in terms of its intensity and its spectral shape. These
lines ([O ii], [O iii] and H$\beta$) are redshifted to the near-IR $H$- and
$K$-bands and are accessible from the ground.
Using the BOSS/SDSS spectrum, which is less affected by flux losses, we measure
a total Ly$\alpha$ flux of $F\rm(Ly\alpha)=(5.72\pm 0.10)\times 10^{-16}$ erg
s$^{-1}$ cm$^{-2}$. This corresponds to a luminosity of log($L_{\rm Ly\alpha}/$erg
s${}^{-1})=43.8\pm 0.1$ at the redshift of J0121$+$0025. To test whether or
not this luminosity could be explained by star formation, we compare the SFR
of J0121$+$0025 obtained from SED fitting ($\rm SFR(SED)=981\pm 232\,M_{\odot}$
yr$^{-1}$, see Section 3.4) to the Ly$\alpha$ star-formation rate using the
Kennicutt (1998) conversion. Assuming case-B recombination and the Salpeter
(1955) initial mass function (IMF), the Ly$\alpha$ luminosity corresponds to
$\rm SFR(\rm Ly\alpha)\simeq 80$ $M_{\odot}$ yr$^{-1}$. Even considering $f_{\rm
esc}\rm(LyC)>0$ (and thus $\rm SFR(\rm Ly\alpha)\gtrsim 80$ $M_{\odot}$ yr$^{-1}$),
star formation can naturally explain the observed Ly$\alpha$ flux and the
corresponding luminosity.
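The flux-to-luminosity step above can be reproduced with a short numerical integration of the luminosity distance; the sketch assumes a flat $\Lambda$CDM cosmology ($H_{0}=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm m}=0.3$), since the adopted parameters are not stated in the text:

```python
import math

# Assumed flat LambdaCDM cosmology (not stated in the text)
H0, Om, Ol = 70.0, 0.3, 0.7     # km/s/Mpc
c = 299792.458                  # km/s
z = 3.244
F_lya = 5.72e-16                # measured BOSS Ly-alpha flux, erg/s/cm^2

def E(zp):
    return math.sqrt(Om * (1 + zp)**3 + Ol)

# Luminosity distance: d_L = (1+z) * (c/H0) * integral_0^z dz'/E(z')
n = 100000
dz = z / n
D_C = (c / H0) * sum(dz / E((i + 0.5) * dz) for i in range(n))  # Mpc
d_L = (1 + z) * D_C * 3.0857e24                                 # cm

# L = 4 pi d_L^2 F
L_lya = 4 * math.pi * d_L**2 * F_lya   # erg/s
log_L = math.log10(L_lya)
print(f"log L(Lya) = {log_L:.2f}")     # ~43.7, consistent with 43.8 +/- 0.1
```

The result depends mildly on the assumed cosmology; any standard flat $\Lambda$CDM parameter set reproduces the quoted log($L_{\rm Ly\alpha}$) to within its $\pm 0.1$ uncertainty.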
#### 3.2.4 ISM, kinematics and covering fraction
In addition to the strong Ly$\alpha$ line and other stellar features, the high-
S/N spectrum also reveals absorption features that are associated with the
interstellar medium (ISM) and produced by resonance transitions of
several ionic species.
Low-ionization ISM (LIS) lines of Si ii 1260Å, C ii 1334Å and Si ii 1526Å are
detected in the spectrum. Absorption in O i 1302Å and Si ii 1304Å is also
detected, but it is not resolved and its profile is contaminated by the
photospheric complex around $\lambda_{0}\simeq 1294-1298$Å. Other LIS lines
that are usually present in the spectra of LBGs, such as Fe ii 1608Å or Al ii
1670Å (Shapley et al., 2003), are not detected in J0121$+$0025.
Since the LIS lines are seen against the continuum provided by the starlight of
J0121$+$0025, they are useful to probe the kinematics of the gas along the
line of sight. Despite the low spectral resolution of the OSIRIS spectrum, the
centroids of the LIS lines are clearly blueshifted with respect to the
systemic redshift, by $v_{\rm peak}\rm(LIS)\simeq-450$ km s$^{-1}$ (Figure 3). This
could be an indication of fast gas outflows driven by supernova explosions and
stellar winds, similar to those detected in the other extremely UV-luminous
starburst BOSS-EUVLG1 ($v_{\rm peak}\rm(LIS)\simeq-400$ km s$^{-1}$, Marques-Chaves
et al. 2020b; Álvarez-Márquez et al. 2021) or in the Sunburst LyC emitter
(Rivera-Thorsen et al., 2017; Vanzella et al., 2021).
The OSIRIS spectrum also shows two additional absorption lines, at $\simeq
5320$Å and $\simeq 5640$Å (Figure 3), whose origin is still not clear. These
lines appear blueshifted from Si ii 1260Å and C ii 1334Å of J0121$+$0025 by
$\simeq-1700$ km s$^{-1}$ and $\simeq-1200$ km s$^{-1}$, respectively (Figure 3). They
could be associated with outflows from J0121$+$0025, but such a scenario
is unlikely: because these ions have broadly similar ionization potentials,
the outflowing gas traced by Si ii and C ii should be kinematically coherent
(and co-spatial), which is not the case. Moreover, the profile of Si ii 1526Å
does not show this secondary absorption component. It is possible that the
absorption line at $\simeq 5320$Å is associated with Si iv 1393Å at $z=2.816$,
for which Ly$\alpha$ is also detected in absorption around $\simeq 4640$Å (H i
absorbing system #3 in Figure 5). In that case, the corresponding Si iv
1402Å absorption of this intervening system contaminates the profile of Si
ii 1260Å of J0121$+$0025. For the other absorption line, at $\simeq 5640$Å, we are
not able to find any redshift solution, so the redshift of this intervening
system remains unknown.
Regarding the strength ($EW_{0}$) of the LIS lines in J0121$+$0025, we fit
Gaussian profiles to the absorption profiles, excluding the contribution of
the secondary, unrelated absorption component mentioned above (Figure 3).
We measure rest-frame equivalent widths of $1.05\pm 0.15$Å, $0.59\pm 0.08$Å,
and $0.81\pm 0.10$Å for Si ii 1260Å, C ii 1334Å and Si ii 1526Å,
respectively. Note that the measured equivalent width of Si ii 1260Å should be
considered an upper limit, because its profile is likely contaminated by
the intervening metal line Si iv 1402Å at $z=2.816$. These lines appear
spectrally resolved, with an intrinsic $\rm FWHM\simeq 550-650$ km s$^{-1}$. For
comparison, the $z\sim 3$ LBG composite spectrum of Shapley et al. (2003)
shows larger $EW_{0}$'s for the same lines, with $EW_{0}\simeq 1.7$Å.
The weakness of the LIS lines in J0121$+$0025 could arise from a low
geometric covering fraction of the gas, $C_{f}$, a low ion column density,
and/or a highly ionized ISM. Considering the linear part of the curve of
growth (i.e., low column density), the ratio
$EW_{0}(1260)/EW_{0}(1526)$ can be related to the oscillator strengths of the
two transitions, from which we would expect $EW_{0}(1260)/EW_{0}(1526)\simeq 5$ if these lines
were not saturated. However, we measure $EW_{0}(1260)/EW_{0}(1526)\lesssim
1.3$, suggesting that at least one of these lines is saturated. This is not
surprising, as these lines appear almost always saturated in the
spectra of star-forming galaxies, even in damped-Ly$\alpha$ systems with sub-
solar metallicities (e.g., Dessauges-Zavadsky et al., 2006).
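On the linear part of the curve of growth, $EW\propto f\lambda^{2}$, so the expected unsaturated ratio follows directly from the oscillator strengths. The short check below uses $f$-values adopted from the Morton (1991) compilation (assumed values, not quoted in the text; newer compilations differ slightly):

```python
# Optically thin curve of growth: EW is proportional to f * lambda^2.
# Oscillator strengths adopted from Morton (1991); assumed, not from the text.
f_1260, lam_1260 = 1.007, 1260.42   # Si II 1260 A
f_1526, lam_1526 = 0.127, 1526.71   # Si II 1526 A

expected_ratio = (f_1260 * lam_1260**2) / (f_1526 * lam_1526**2)
measured_ratio = 1.05 / 0.81        # measured EW0(1260)/EW0(1526), upper limit

print(f"expected (unsaturated): {expected_ratio:.1f}")   # ~5
print(f"measured: <= {measured_ratio:.1f}")              # ~1.3, so saturated
```

The measured upper limit falls well below the optically thin expectation, which is the saturation argument made in the text.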
Taking this into consideration, it is possible that the weakness of the LIS
lines arises from a low $C_{f}$. Assuming the optically thick regime and an
ionization-bounded ISM with a uniform dust-screen geometry, $C_{f}$ can be
inferred from the residual intensity of the absorption line, $I$, as
$C_{f}=1-I/I_{0}$, where $I_{0}$ is the continuum level. We measure
$I/I_{0}\simeq 0.8$ for Si ii 1260Å and C ii 1334Å, yielding
$C_{f}\rm(SiII)\simeq 0.2$. Note that, given the low spectral resolution of our
data, we are likely overestimating $I/I_{0}$; however, this effect should not be
dominant, as the lines are spectrally resolved. Following Gazagnes et al.
(2018), this yields a neutral gas covering fraction $C_{f}\rm(HI)\simeq 0.55$,
for which a significant fraction of LyC photons could escape. In fact, using
the prescriptions of Chisholm et al. (2018) (see also Saldana-Lopez in prep.),
the inferred $C_{f}\rm(HI)\simeq 0.55$ leads to a predicted $f_{\rm
esc,abs}^{\rm pred}\rm(LyC)\approx 0.25$, which is consistent with the value
observed in the spectrum (next section).
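The saturation test and the covering-fraction estimate above reduce to a few lines of arithmetic. A minimal sketch follows, in which the oscillator strengths and the optically thin scaling $EW\propto f\lambda^{2}$ are assumptions taken from standard line lists rather than values given in the text:

```python
# Sketch of the two arguments above, with the values quoted in the text.
# The oscillator strengths are assumptions taken from the Morton (2003)
# line list, as is the optically thin scaling EW ~ f * lambda^2.

ew_1260, ew_1526 = 1.05, 0.81   # rest-frame EWs of Si II 1260, 1526 [A]
observed_ratio = ew_1260 / ew_1526                 # -> ~1.3, as quoted

f_1260, f_1526 = 1.18, 0.133                       # assumed oscillator strengths
thin_ratio = (f_1260 * 1260.42**2) / (f_1526 * 1526.71**2)  # ~6, of order the ~5 quoted
# observed_ratio << thin_ratio  =>  at least one line is saturated

# Covering fraction of a saturated line (uniform dust-screen picture):
I_over_I0 = 0.8                  # measured residual intensity
Cf_SiII = 1.0 - I_over_I0        # -> ~0.2
# The mapping Cf(Si II) -> Cf(H I) ~ 0.55 follows the empirical relation
# of Gazagnes et al. (2018) and is not reproduced here.

print(observed_ratio, thin_ratio, Cf_SiII)
```

The mismatch between the observed ratio ($\simeq 1.3$) and the optically thin expectation is what drives the saturation conclusion.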
Figure 5: 2D (top) and 1D (bottom) spectra of the far-UV region of
J0121$+$0025. The 2D spectrum has been smoothed for visualization purposes.
The GTC spectrum is in black and its corresponding $1\sigma$ uncertainty is in
grey. Features associated with the Lyman series (from Ly$\alpha$ to the Lyman
limit), ISM and stellar absorption (photospheric and wind lines) are marked
below the spectrum in green, blue, and yellow colors, respectively. The low-
resolution best-fit S99 model (3 Myr age, $Z_{\star}/Z_{\odot}=0.4$ and
$E(B-V)_{\star}=0.04$) is plotted in red, one version corrected for the IGM
transmission (<$T(IGM)$> $=0.60\pm 0.19$, solid red) and the other assuming
$T(IGM)=1$ (dashed red). The high-resolution S99 model is also shown in blue.
Horizontal grey lines mark the spectral windows used to infer $T(IGM)$ (see
text). These exclude the regions associated with the Lyman series and ISM from
J0121$+$0025, so that we probe H i absorbers from the Lyman forest in the line
of sight only. Vertical lines above the spectrum mark the position of four
strong H i absorbing systems identified at $z=3.076,2.898,2.816$ and 2.733 (#1
to #4, respectively). The 2D spectrum also shows the faint continuum of a
low-$z$ star-forming galaxy ($z_{\rm phot}=1.10$, see Figure 1), offset by
$\simeq 6^{\prime\prime}$ from J0121$+$0025.
### 3.3 Lyman Continuum radiation
#### 3.3.1 The direct detection of LyC
One of the most remarkable features observed in the OSIRIS spectrum of
J0121$+$0025 is the detection of emission below $\lambda_{\rm obs}\simeq
3880$Å, i.e. $\lambda_{\rm 0}<911.8$Å, which can be related to LyC leakage.
This emission is real and is not related to detector artifacts (e.g.,
cosmic rays, flat-field correction) or poor sky subtraction. It is detected in
the spectra of different nights, as well as in the much shallower BOSS
spectrum, although with a lower SNR ($\simeq 2.3\sigma$).
A zoom-in on this region is shown in Figure 5. The observed emission at
rest-frame $880-910$Å has a total SNR of $7.9$ and an average SNR per spectral
bin of $0.98$. The mean flux density in this spectral region is $f_{900}(\rm
obs)=0.781\pm 0.099\mu$Jy, which corresponds to a magnitude of $m_{900}=24.17$
(AB). For comparison, the mean non-ionizing UV flux density, estimated from
the OSIRIS spectrum over the rest-frame range $1490-1510$Å, is
$f_{1500}=8.874\pm 0.399\mu$Jy. Combining these measurements, we find a ratio
of the ionizing to non-ionizing flux density $(f_{900}/f_{1500})_{\rm
obs}=0.088\pm 0.012$, which corresponds to $\Delta m=2.64$ (AB). It is worth
noting that the ionizing emission at $\lambda_{\rm obs}<3880$Å suffers from an
apparent absorption at $\lambda_{\rm obs}\simeq 3813$Å. After careful
inspection, this absorption could be related to Ly$\beta$ and Ly$\delta$
absorption associated with two H i systems at $z=2.733$ and $z=3.076$,
respectively (#1 and #4 in Figure 5).
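The quoted magnitudes and flux ratios follow directly from the standard AB zero point of 3631 Jy; a minimal consistency check:

```python
import math

# Consistency check of the quoted flux densities, magnitudes and ratios,
# using the standard AB zero point of 3631 Jy.
f900  = 0.781e-6   # mean flux density at rest-frame 880-910 A [Jy]
f1500 = 8.874e-6   # mean flux density at rest-frame 1490-1510 A [Jy]

m900    = -2.5 * math.log10(f900 / 3631.0)  # AB magnitude -> ~24.17
ratio   = f900 / f1500                      # (f900/f1500)_obs -> ~0.088
delta_m = -2.5 * math.log10(ratio)          # magnitude difference -> ~2.64

print(m900, ratio, delta_m)
```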
To infer the relative LyC photon escape fraction, we compare the observed
ionizing flux density of J0121$+$0025 at $900$Å, $f_{900}(\rm obs)$, to that
from the S99 model, $f_{900}(\rm S99)$, which best represents the shape of the
rest-frame UV spectrum (Section 3.2.1), i.e., a burst with an age of 3 Myr,
$Z_{\star}/Z_{\odot}=0.4$ and $E(B-V)=0.04$ (Figures 3 and 5). The S99 model
has already been corrected for $E(B-V)$, and thus incorporates the relative
attenuation between 900Å and 1500Å due to the variation of the attenuation
coefficient with wavelength ($k_{900}\simeq 14.3$ and $k_{1500}\simeq
10.3$, assuming the Calzetti et al. 2000 curve and its extension to the UV
provided by Reddy et al. 2016). (The inferred $E(B-V)$ using the Calzetti et
al. 2000 curve is already low, so using other extinction curves, e.g., SMC,
would have little impact on our results.) To probe the region at
$\lambda_{0}<912$Å, we use the low-resolution version of the S99 model,
because it extends to the LyC region. Finally, we also take into account the contribution
of the IGM transmission, $T(IGM)$. The relative LyC photon escape fraction,
$f_{\rm esc,rel}(LyC)$, can thus be expressed using the following formulation:
$f_{\rm
esc,rel}(LyC)=\frac{f_{900}(obs)}{f_{900}(S99)}\times\frac{1}{T(IGM)}.$ (1)
Both $f_{900}(\rm obs)$ and $f_{900}(\rm S99)$ are measured using the same
spectral window, defined at $880-910$Å in the rest-frame. We find $f_{900}(\rm
obs)/f_{900}(\rm S99)=0.34\pm 0.04$, where the uncertainties arise from
$f_{900}(\rm obs)$. We note that Equation 1 is consistent with that used in the
literature (e.g., Shapley et al., 2016; Vanzella et al., 2016; Steidel et al.,
2018), where $f_{900}/f_{1500}$ is used instead of $f_{900}$, because in our
case $f_{1500}\rm(S99)$ has already been matched to $f_{1500}\rm(obs)$ (see
Figure 2), so that $f_{1500}\rm(obs)\simeq$ $f_{1500}\rm(S99)$.
A precise estimate of $f_{\rm esc,rel}(LyC)$ is not possible given the
stochastic nature of $T(IGM)$ and the large fluctuations of the attenuation
along a single line of sight (Inoue & Iwata, 2008; Inoue et al., 2014). However,
it is still possible to place rough constraints. Assuming that $f_{900}(\rm
obs)/f_{900}(\rm S99)=0.34\pm 0.04$ is well constrained (for an age of 3 Myr
the predicted ionizing fluxes at 900Å using S99 and BPASS, including binaries,
are roughly the same; see e.g., Chisholm et al. 2019), this implies that
$T(IGM)$ should be larger than $0.34$ to keep a physical $f_{\rm
esc,rel}(LyC)<1$ (e.g., Vanzella et al., 2012). On the other hand, $f_{\rm
esc,rel}(LyC)$ must be $\geq 0.34$, where the most extreme value ($0.34$)
stands for a completely transparent IGM.
To get a more quantitative estimate of $T(IGM)$, we use the non-ionizing part
of the OSIRIS spectrum, from $912-1215$Å, and compare it to that of the S99
best-fit model. We exclude spectral regions associated with the Lyman series
and the ISM of J0121$+$0025 (marked in Figure 5), which are not included in the
S99 models, so that we probe only H i absorbers from the Lyman forest along the
line of sight, and not the CGM and ISM associated with J0121$+$0025. The
spectral regions used to estimate $T(IGM)$ are marked as horizontal grey lines
in Figure 5. Note, however, that with this comparison we are inferring
$T(IGM)$ in this spectral range, not necessarily at $\leq 912$Å; still, it can
serve as a first-order approximation. It is worth noting that the S99 model
predicts a relatively strong break around the Lyman limit, which is not
observed in the spectrum of J0121$+$0025 (and in other LyC leakers, see e.g.,
Steidel et al. 2018). We find a mean value and standard deviation of
$T(IGM)=0.60\pm 0.19$, which is compatible with the Inoue et al. (2014) model
and with the values obtained for other LyC leakers at similar redshifts and $f_{\rm
esc,rel}(LyC)$ (e.g., Shapley et al., 2016) using Monte Carlo simulations of
the IGM transmission, but larger than those obtained by, e.g., Steidel
et al. (2018) and Fletcher et al. (2019), with mean values of $T(IGM)\simeq
0.3$.
Using Equation 1 we infer $f_{\rm esc,rel}(LyC)\sim 0.56$, with possible
values ranging from $0.34$ to $1$. Note that we are considering only the
uncertainties due to the IGM. A slightly older stellar population, for example
a burst with an age of $\simeq 6$ Myr, would produce fewer ionizing photons than
the model we are considering ($3$ Myr) by a factor of $\simeq 2$. However, the
$6$ Myr burst model would require less extinction to explain the observed
$\beta_{\rm UV}=-2.05$, so that $f_{900}(\rm obs)/f_{900}(\rm S99)$ would be
roughly similar in both cases. Other sources of uncertainty can impact the inferred
$f_{\rm esc,rel}(LyC)$, but are extremely difficult to quantify. These include
the uncertainties due to the flux calibration at the blue edge of the OSIRIS
spectrum, which has a total efficiency of only $\lesssim 5\%$ at $\lambda<4000$Å
(see http://www.gtc.iac.es/instruments/osiris/#Spectroscopic_Photon_Detection_Efficiency),
or differential slit losses between $\lambda_{0}<912$Å and $\lambda_{0}\simeq
1500$Å, due to the effect of the atmospheric dispersion and the broadening of
the point-spread function at blue wavelengths. Nevertheless, the uncertainties
due to the IGM should be dominant. Given this, the absolute LyC escape
fraction, $f_{\rm esc,abs}(LyC)$, defined as (e.g., Leitet et al., 2013):
$f_{\rm esc,abs}(LyC)=f_{\rm esc,rel}(LyC)\times 10^{-0.4[E(B-V)\times
k_{1500}]},$ (2)
is $f_{\rm esc,abs}(LyC)\approx 0.39$, with possible values between $\simeq
0.23-0.69$, allowed by the constraints on the IGM ($0.37<T(IGM)<1$) and on
$f_{\rm esc,rel}(LyC)$ ($<1$). This is a useful quantity for estimating the number
of ionizing photons escaping from J0121$+$0025, $N_{\rm esc}\rm(LyC)$, such
that:
$N_{\rm esc}(LyC)=f_{\rm esc,abs}(LyC)\times N_{\rm int}(LyC),$ (3)
where $N_{\rm int}\rm(LyC)\simeq 1.4\times 10^{55}$ s-1 is the intrinsic
ionizing photon production rate from the S99 model scaled to $M_{\rm
UV}=-24.1$, for a burst of 3 Myr and $Z_{\star}/Z_{\odot}=0.4$ (and a Salpeter
1955 IMF). We thus obtain an escaping ionizing photon rate of $N_{\rm
esc}(LyC)\approx 6\times 10^{54}$ s-1 (with possible values ranging over
$[3-10]\times 10^{54}$ s-1).
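Putting Equations 1-3 together with the values adopted above reproduces the quoted numbers directly; a short sketch (pure arithmetic, no fitting involved):

```python
# Numerical walk-through of Equations 1-3 with the values adopted in the text.
ratio_obs_s99 = 0.34        # f900(obs) / f900(S99)
T_igm         = 0.60        # mean IGM transmission (Section 3.3.1)
EBV, k1500    = 0.04, 10.3  # Calzetti/Reddy attenuation at 1500 A
N_int         = 1.4e55      # intrinsic ionizing photon rate [s-1]

f_esc_rel = ratio_obs_s99 / T_igm                   # Eq. 1 -> ~0.56
f_esc_abs = f_esc_rel * 10 ** (-0.4 * EBV * k1500)  # Eq. 2 -> ~0.39
N_esc     = f_esc_abs * N_int                       # Eq. 3 -> ~5-6e54 s-1

print(f_esc_rel, f_esc_abs, N_esc)
```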
#### 3.3.2 Possibility of foreground or AGN contamination
We now discuss the possibility that the emission detected below $\lambda_{\rm
obs}\simeq 3880$Å is due to foreground contamination, leading to a false-
positive detection of the LyC signal of J0121$+$0025, or that it arises from an AGN.
In Figure 6 the spatial profiles of the emission below and above $\lambda_{\rm
0}=912$Å are compared. These profiles have been extracted from the 2D spectrum
over the rest-frame range $880-910$Å and $920-950$Å, respectively. Both
profiles have consistent spatial morphologies with $\rm FWHM_{(\lambda_{\rm
0}<912)}=0.98^{\prime\prime}\pm 0.11^{\prime\prime}$ and $\rm
FWHM_{(\lambda_{\rm 0}>912)}=1.02^{\prime\prime}\pm 0.02^{\prime\prime}$ which
are compatible with being unresolved. Their centroids appear co-spatial,
without any evidence for a spatial offset.
Figure 6: Comparison of the spatial profiles of the emission below (solid
blue) and above (dashed red) the rest-frame $\lambda=912$Å in the 2D spectrum
of J0121$+$0025. These profiles have been extracted from the 2D spectrum over
the rest-frame range $880-910$Å and $920-950$Å, respectively, and both have
consistent spatial morphologies and are co-spatial.
From the image with the best seeing conditions, the $R$-band ($\rm FWHM\simeq
0.5^{\prime\prime}$), we do not see any evidence for the presence of a source
additional to J0121$+$0025, which itself is only barely resolved (Figure 1).
Given the $5\sigma$ depth of $\simeq 26.5$ of this image and the observed
magnitude $m=24.18$ measured from the spectrum at $\lambda_{\rm 0}<912$Å, a
possible contaminant would be easily detected if it were spatially offset from
J0121$+$0025 by $\gtrsim 0.3^{\prime\prime}$. In addition,
one strong absorption line with a residual intensity compatible with zero is
detected at $\lambda_{\rm obs}\simeq 4740$Å, that is associated with
Ly$\alpha$ from a H i absorption system at $z=2.898$ (#2 in Figure 5). The
non-detection of any flux at $\lambda_{\rm obs}\simeq 4740$Å below a $2\sigma$
level of $\simeq 1\times 10^{-18}$ erg s-1 cm-2 Å-1, where the OSIRIS spectrum
is $\simeq 4$ times more sensitive than at $\lambda_{\rm obs}\simeq 3850$Å,
makes the presence of a contaminant very unlikely, because it would require a
source with an extremely blue, and perhaps unrealistic, slope $\beta_{\rm
UV}<-3.2$ ($2\sigma$) to explain such a color.
The presence of a relatively bright ($m=24.18$), very compact ($r_{\rm
eff}\lesssim 0.1^{\prime\prime}$), very blue ($\beta_{\rm UV}<-3.2$) lower-$z$
interloper, quasi co-spatial with J0121$+$0025
($\lesssim 0.2^{\prime\prime}$), is still possible and cannot be completely
ruled out, but it is highly unlikely.
We now discuss the possible contribution of an AGN to the LyC emission
observed in J0121$+$0025. In Section 3.1 we have shown that the contribution
of an AGN to the UV luminosity of J0121$+$0025 should be small, if present.
More specifically, the AGN contribution should be $\lesssim 25\%$,
otherwise the photospheric absorption lines, which are intrinsically very weak,
would not be detected in the OSIRIS spectrum. Considering the most extreme
case, i.e., a contribution of $25\%$ to the UV flux density, the AGN would
have a rest-frame 1500Å flux density $f_{1500,\rm obs,AGN}\simeq 2.2\mu$Jy,
corresponding to $M_{\rm UV}=-22.7$. Assuming that the observed
$f_{900}\rm(obs)=0.781\pm 0.099\mu$Jy arises from such AGN, we obtain
$(f_{900}/f_{1500})_{\rm obs,AGN}\simeq 0.36$, which is a factor of $\simeq
3-7$ larger than the typical values observed in type-I AGNs with LyC detection
at similar redshift and luminosities ($(f_{900}/f_{1500})\rm(AGN)\simeq
0.05-0.14$; Steidel et al. 2001; Micheva et al. 2017) or in other bright QSOs
(e.g., Cristiani et al., 2016), possibly corresponding to a nonphysical LyC
escape fraction. Therefore, it is unlikely that the LyC emission arises from a
type-I AGN. The presence of an obscured type-II AGN in J0121$+$0025 is more
difficult to constrain, but its contribution to the LyC emission can be
neglected, as these sources are by definition very obscured at short
wavelengths.
An additional piece of evidence that the emission detected below $\lambda_{\rm
obs}\simeq 3880$Å is related to the escape of ionizing photons comes from the
intrinsic properties of J0121$+$0025. In fact, J0121$+$0025 could be
identified as a strong LyC leaker candidate even without the direct
detection of LyC. From the spectrum, J0121$+$0025 shows
very weak LIS lines, both in terms of $EW_{0}$ and of the residual intensity. It
has been shown, from observations and simulations, that the residual intensity
of LIS lines correlates with $f_{\rm esc}(LyC)$ (e.g. Heckman et al., 2001;
Alexandroff et al., 2015; Chisholm et al., 2018; Mauerhofer et al., 2021, and
Saldana-Lopez in prep.). Using the prescription given in Chisholm et al.
(2018), the predicted $f_{\rm esc,abs}^{\rm pred}\rm(LyC)\approx 0.25$ from
the residual intensity of the Si ii line (see Section 3.2.4) is compatible,
within the uncertainties, with the value observed/inferred from the spectrum,
$f_{\rm esc,abs}(\rm LyC)\approx 0.4$. In addition, the Ly$\alpha$ line shows
a narrow profile, with an intrinsic $\rm FWHM=350\pm 40$ km s-1 and its peak
close to the systemic redshift, $v_{\rm peak}\simeq 120\pm 50$ km s-1. The
Ly$\alpha$ profiles in the confirmed LyC leakers analysed in Steidel et al.
(2018) and Fletcher et al. (2019) have their peaks less redshifted than the
LyC non-leakers (their Figure 26), suggesting low neutral gas column density
where Ly$\alpha$ photons, and likely LyC, could escape more easily (Verhamme
et al., 2015). Other observational signatures shared by LyC leakers are also
present in J0121$+$0025, such as strong P-Cygni profiles and broad He ii
emission (e.g., Vanzella et al., 2018; Rivera-Thorsen et al., 2019; Vanzella
et al., 2020), low dust attenuation ($E(B-V)\simeq 0.04$), compact morphology
($r_{\rm eff}\sim 1$ kpc) combined with a large SFR (next section), and thus a
large SFR surface density (e.g., Izotov et al., 2016), and evidence for strong
outflows ($v_{\rm peak}\rm(LIS)\simeq-450$ km s-1).
### 3.4 Multi-wavelength SED fitting
Turning to the multi-wavelength properties, J0121$+$0025 shows a flat
($F_{\nu}$) spectral energy distribution (Figure 7) from the optical to the
mid-IR, which is broadly consistent with a young starburst.
We perform SED fitting with the CIGALE code (Burgarella et al., 2005; Boquien et
al., 2019) using the photometry from $G$ to Spitzer $4.5\mu$m (Table 1),
covering the rest-frame wavelength range $0.11-1.2\mu$m. The star-formation
history (SFH) is modeled using two components: a young starburst with age
$\leq 10$ Myr, as allowed by the analysis of the UV spectral features (see
Section 3.2.1 and Figure 2), and an exponentially declining SFH with an age of
$500$ Myr. Using two SFH components simultaneously allows us to (i) probe the
properties of the young stellar population, which likely dominates the SED, and
(ii) investigate the presence of an underlying old stellar component. We adopt
the stellar population models of Bruzual & Charlot (2003), and assume a fixed
metallicity $Z/Z_{\odot}=0.4$ based on our analysis in Section 3.2.1. The
Calzetti et al. (2000) dust attenuation law is considered, with
$E(B-V)_{\star}<0.1$ and a ratio of the stellar to nebular reddening
$E(B-V)_{\star}/E(B-V)_{\rm neb}=0.44$. We also adopt $f_{\rm
esc,abs}\rm(LyC)\approx 0.40$.
Figure 7 shows the best-fit model. The emission of J0121$+$0025 is dominated
by a young stellar burst over the whole spectral range covered by the imaging
data. The burst is characterized by a 10 Myr-weighted SFR=981$\pm$232
$M_{\odot}$ yr-1, an age of 7$\pm 2$ Myr, and a stellar mass of
$\log(M_{\star}/M_{\odot}$)=9.9$\pm$0.1. This yields a high specific SFR
(sSFR=SFR/$M_{\star}$) of 98$\pm$32 Gyr-1, a factor of $\simeq 20$ higher
than that of $10^{10}M_{\odot}$ main-sequence star-forming galaxies at $3<z<4$ (e.g.,
Tomczak et al., 2016). Table 2 summarizes the properties of J0121$+$0025.
Both the age and the stellar mass of the burst are broadly consistent with the
results inferred from the S99 best-fit model using the UV wind lines (age of
$\sim 3$ Myr and $\log(M_{\star}/M_{\odot}$)=9.8, Section 3.2.1).
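As a point-estimate cross-check of the specific SFR: the quoted 98$\pm$32 Gyr-1 comes from the fit posteriors, so recomputing sSFR from the rounded best-fit values gives a slightly different, but consistent, number:

```python
# Point-estimate cross-check of the specific SFR. The quoted
# sSFR = 98 +/- 32 Gyr^-1 comes from the CIGALE posteriors, so the
# rounded best-fit values below give a slightly different number that
# agrees within the quoted uncertainty.
SFR    = 981.0         # [Msun / yr]
M_star = 10 ** 9.9     # [Msun], from log(M*/Msun) = 9.9

sSFR = SFR / M_star * 1e9   # [Gyr^-1] -> ~120
print(sSFR)
```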
Figure 7: Best-fit model of the spectral energy distribution of J0121$+$0025
using CIGALE (Burgarella et al., 2005). The SED of J0121$+$0025 is dominated
by a young and intense burst of star formation with SFR=981$\pm$232
$M_{\odot}$ yr-1, an age of 7$\pm 2$ Myr, and a stellar mass of
$\log(M_{\star}/M_{\odot}$)=9.9$\pm$0.1 (black line and orange circles). The
fit uses photometry from $G$-band to IRAC $4.5\mu$m (blue squares, see Table
1). The red shaded region represents the upper limit SED of the old stellar
component with an age of 500 Myr and $\log(M_{\star}^{\rm
old}/M_{\odot}$)<10.4.
Table 2: Properties of J0121$+$0025.
Property | Value | Uncertainty
---|---|---
R.A. (J2000) | 01:21:56.09 | $0.1^{\prime\prime}$
Dec. (J2000) | $+$00:25:20.30 | $0.1^{\prime\prime}$
$z_{\rm sys}$ | 3.244 | $0.001$
$M_{\rm UV}$ (AB) | $-24.11$ | $0.1$
log(L[Ly$\alpha$/erg s-1]) | 43.8 | $0.1$
$r_{\rm eff}$ (kpc) | $<1.0$ | —
Age (Myr) | 3-7$^{a)}$ | $<10^{a)}$
$Z_{\star}/Z_{\odot}$ | 0.4 | [0.2 - 1.0]
E(B-V)⋆ | 0.04 | 0.02
SFR ($M_{\odot}$ yr-1) | 981$^{b)}$ | 232$^{b)}$
log($M_{\star}^{\rm burst}/M_{\odot}$) | 9.9 | $0.1$
sSFR (Gyr-1) | 98 | 32
$\Sigma$SFR ($M_{\odot}$ yr-1 kpc-2) | $>157^{c)}$ | —
$f_{\nu}(900$Å) ($\mu$Jy) | 0.781 | 0.099
$f_{\nu}(900$Å) / $f_{\nu}(1500$Å) | 0.088 | 0.012
$T(IGM)$ | 0.60 | 0.19
$f_{\rm esc,rel}$ (LyC) | 0.56 | [0.34-1.0]
$f_{\rm esc,abs}$ (LyC) | 0.39 | [0.23-0.69]
Notes. — (a) Age of the young stellar population obtained using UV wind lines
($\simeq 3$ Myr) and the best-fit model of the SED using CIGALE ($7\pm 2$
Myr); (b) 10 Myr-weighted SFR obtained from the best-fit SED; (c) considering
$r_{\rm eff}<1$ kpc.
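The $\Sigma$SFR entry in Table 2 is reproduced, up to rounding, by the common definition $\Sigma{\rm SFR}={\rm SFR}/(2\pi r_{\rm eff}^{2})$ evaluated at the $r_{\rm eff}$ upper limit; the definition itself is our assumption, as the text does not state it:

```python
import math

# Cross-check of the Sigma_SFR entry in Table 2. The definition
# Sigma_SFR = SFR / (2 * pi * r_eff**2) is our assumption (not stated in
# the text); evaluated at the upper limit r_eff = 1 kpc it reproduces the
# quoted lower limit of ~157 Msun/yr/kpc^2 up to rounding.
SFR, r_eff = 981.0, 1.0                       # [Msun/yr], [kpc]
sigma_sfr = SFR / (2.0 * math.pi * r_eff**2)  # ~156 Msun/yr/kpc^2
print(sigma_sfr)
```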
On the other hand, the old stellar population is not well constrained. The
best-fit gives a stellar mass for the old stellar component
$\log(M_{\star}^{\rm old}/M_{\odot}$)=9.8$\pm$0.6. Figure 7 shows the upper
limit SED of the old stellar population (red), corresponding to
$\log(M_{\star}^{\rm old}/M_{\odot}$)<10.4. Overall, the observed blue color
around the rest-frame $\simeq 1.0\mu$m from the two IRAC/Spitzer channels
$I1-I2=-0.40\pm 0.31$ limits the presence of a $\log(M_{\star}^{\rm
old}/M_{\odot}$)>10.4 old stellar component.
## 4 Discussion
With $M_{\rm UV}=-24.1\pm 0.1$ and $f_{\rm esc,abs}\rm(LyC)\approx 0.40$,
J0121$+$0025 is not only one of the most UV-luminous star-forming galaxies
ever discovered, but also the brightest LyC leaker known among star-forming
galaxies. We now discuss the implications of such a discovery.
### 4.1 The brightest LyC emitter known
J0121$+$0025 meets the two necessary conditions to be a strong LyC leaker: it
is very efficient at producing hydrogen-ionizing photons and the properties of
its ISM are favourable for their escape.
The remarkably strong P-Cygni profiles in the wind lines O vi, N v and C iv
seen in the spectrum of J0121$+$0025 (see Figures 2 and 3) indicate
unambiguously a very young age of the burst ($\simeq 3$ Myr) and the presence
of a large number of O-type stars, the main-sequence stars hot enough to
generate a significant number of ionizing photons. Strong P-Cygni profiles in
these lines are ubiquitous in the spectra of LyC leakers, both at low-$z$
(e.g., Borthakur et al., 2014; Izotov et al., 2016; Izotov et al., 2018a;
Izotov et al., 2018b) and moderately high-$z$ (e.g., Rivera-Thorsen et al.,
2019; Vanzella et al., 2018, 2020), but appear weak or absent in composite
spectra of more typical LBGs or LAEs (e.g., Shapley et al., 2003; Du et al.,
2018; Nakajima et al., 2018; Feltre et al., 2020; Marques-Chaves et al.,
2020a). In contrast to the bursty nature of J0121$+$0025, and likely of other
LyC leakers, smooth/continuous or declining star-formation histories with ages
from several tens to hundreds of Myr could explain the weakness of these
profiles in the spectra of typical LBGs/LAEs, which is in line with the SFHs
usually inferred or assumed for them (e.g., Kornei et al., 2010; de Barros et
al., 2014; Arrabal Haro et al., 2020). As a result, the amplitude and the age
of the burst in J0121$+$0025 produce a large number of ionizing photons,
$N_{\rm int}\rm(LyC)\simeq 1.4\times 10^{55}$ s-1 (see Section 3.3), a factor
of $\sim 10-30$ larger than that expected from a $M_{\rm UV}^{*}=-20.97$
galaxy (e.g., Reddy & Steidel, 2009), assuming continuous star formation with
a 100 Myr age and the same metallicity.
Figure 8: Relation between the observed ratio of the ionizing to non-ionizing
flux density (in $F_{\nu}$ units, left) and the observed flux density at
$\simeq 900$Å (in $\mu$Jy, right) versus the UV absolute magnitude.
J0121$+$0025 is represented with a blue circle. For comparison, we also show
the other $z\gtrsim 3$ star-forming galaxies known with significant detection
of LyC radiation (grey squares; de Barros et al., 2016; Shapley et al., 2016;
Vanzella et al., 2016; Steidel et al., 2018; Vanzella et al., 2018; Fletcher
et al., 2019; Ji et al., 2020). Empty diamonds mark the position of the
estimates of several composites from Pahl et al. (2021) in different bins of
$M_{\rm UV}$.
The presence of strong P-Cygni profiles could thus be a potential indicator of
LyC leakage, as discussed in previous works (e.g., Izotov et al., 2018b;
Chisholm et al., 2019), at least for moderately metal-rich galaxies (the
strength of the P-Cygni profiles is also metallicity dependent, see Izotov et
al., 2021). Note, however, that the presence of such wind lines indicates a
high production efficiency of LyC photons, not necessarily their escape.
Nevertheless, feedback from the strong winds of massive stars together with SN
explosions expected in the early phase of a burst could play a major role in
shaping the ISM, creating cavities of ionized gas where ionizing photons could
escape more efficiently (e.g., Heckman et al., 2011; Trebitsch et al., 2017).
This might be the case for J0121$+$0025. The particular conditions of the ISM
in J0121$+$0025 are in fact favorable for the escape of LyC radiation. LyC
photons are easily absorbed by dust and neutral gas, and these sources of
opacity are apparently weak in J0121$+$0025, at least along our line of sight.
The inferred $\beta_{\rm UV}=-2.05$ from the spectrum is compatible with low
attenuation by dust, corresponding to $E(B-V)=0.04$. In addition, ISM
absorption lines are very weak, with $EW_{0}\lesssim 1$Å, and show residual
intensities of $I/I_{0}\simeq 0.8$ for Si ii 1260Å and C ii 1334Å, even though
they are likely saturated (Section 3.2.4). These findings suggest a clumpy ISM
with a non-unity covering fraction ($C_{f}$ (H i) $\simeq 0.55$, Gazagnes et
al. 2018). The detection of blueshifted profiles in the LIS lines of
J0121$+$0025 ($\simeq-450$ km s-1, see Section 3.2.4 and Figure 3) supports
the presence of strong outflows.
We now compare in Figure 8 the LyC properties of J0121+0025 with those from
other confirmed LyC leakers at $z\gtrsim 3$. The comparison sample consists of
$\sim 20$ sources with significant detection of LyC radiation taken from de
Barros et al. (2016), Shapley et al. (2016), Vanzella et al. (2016), Steidel
et al. (2018), Vanzella et al. (2018), Fletcher et al. (2019) and Ji et al.
(2020). The left panel compares the observed ratio of the ionizing to non-
ionizing flux density, $(f_{900}/f_{1500})_{\rm obs}$, versus the UV absolute
magnitude. We use $(f_{900}/f_{1500})_{\rm obs}$ because it is model-
independent and does not rely on assumptions about the properties of the
underlying stellar population, such as the SFH, age and metallicity. Furthermore,
we do not correct these values for IGM absorption, because it is highly
uncertain and not always available. This figure indicates that a significant
fraction of LyC photons can escape from sources spanning a wide range of UV
luminosity, from UV-faint ($M_{\rm UV}\gtrsim-18.7$, Fletcher et al. 2019) to
extremely UV-luminous ($M_{\rm UV}=-24.1$, this work), and that LyC leakage is
therefore not restricted to the faintest sources, as previously thought
(Steidel et al. 2018; Pahl et al. 2021, see also: Bian & Fan 2020).
The importance of detecting LyC emission in a source as luminous as
J0121+0025 is highlighted in the right panel of Figure 8. Here, we compare the
observed flux density at $\simeq 900$Å, $f_{900}$. Again, no corrections have
been applied for the IGM or for the luminosity distance, although all but two
LyC leakers (Vanzella et al., 2018; Ji et al., 2020) are roughly at the same
redshift as J0121+0025. Therefore, this comparison should be treated with
care, serving for illustrative purposes only. The combination of the
extreme luminosity of the starburst ($M_{\rm UV}=-24.1$) and the large $f_{\rm
esc}$ (LyC) makes J0121$+$0025 the brightest and the most powerful LyC emitter
known among the star-forming galaxy population. The observed LyC flux in
J0121$+$0025 is in fact comparable to the sum of the LyC flux of all star-
forming galaxies with LyC leakage known at these redshifts. Only the highly
magnified Sunburst Arc ($\mu\sim 30-100\times$, Pignataro et al. 2021) or
bright QSOs have comparable observed LyC fluxes ($\approx 1\mu$Jy, Steidel et
al. 2001; Lusso et al. 2015; Rivera-Thorsen et al. 2019).
The discovery of such a powerful LyC emitter now raises the question of the
contribution of UV-luminous star-forming galaxies to cosmic reionization
(e.g., Sharma et al. 2016, Naidu et al. 2020). Answering this question is,
however, beyond the scope of this work, as it requires knowledge of two
fundamental properties that are highly uncertain: the volume density of such
luminous sources at $z\gtrsim 7$ and the physical properties governing their
LyC leakage. Nevertheless, we can place rough constraints on the UV ionizing
background at $z\sim 3$. To do so, we assume that luminous sources ($M_{\rm
UV}<-22$) share the same properties as J0121$+$0025, i.e., $f_{\rm
esc,abs}\rm(LyC)\sim 0.40$ and log($\xi_{\rm ion})=25.2$. The co-moving
production rate of hydrogen-ionizing photons ($\dot{N}_{\rm ion}$ ) is given
by:
$\dot{N}_{\rm ion}=f_{\rm esc,abs}(LyC)\>\xi_{\rm ion}\>\rho_{\rm
UV}\;\rm[s^{-1}Mpc^{-3}],$ (4)
where $\rho_{\rm UV}$ is the dust-corrected UV luminosity density. For
$\rho_{\rm UV}$, we integrate the UV luminosity function of Reddy & Steidel
(2009) down to $-22$ AB (or $\gtrsim 3L_{\rm UV}^{*}$) and assume
$E(B-V)=0.04$. This yields $\dot{N}_{\rm ion}\sim 10^{49.8}$ s-1 Mpc-3,
which is considerably lower than that provided by QSOs at $z\sim 3$ ($\dot{N}_{\rm
ion}\rm(QSO)\sim 10^{50.5}-10^{51.0}$ s-1 Mpc-3, e.g., Becker & Bolton 2013,
Cristiani et al. 2016). However, the situation may differ at very high $z$.
Recent studies have found remarkably bright galaxies at $z\gtrsim 7$ in excess
of the Schechter form of the luminosity function (e.g., Bowler et al., 2014;
Oesch et al., 2014; Bowler et al., 2015; Ono et al., 2018; Stefanon et al.,
2019). Assuming the double power-law luminosity function at $z\sim 6$ of
Bowler et al. (2015), we find $\dot{N}_{\rm ion}\sim 10^{48.9}$ s-1 Mpc-3 for
star-forming galaxies brighter than $-22$ AB, which is comparable to that
inferred for QSOs at these redshifts ($\dot{N}_{\rm ion}\rm(QSO)\sim 10^{48.8}$
s-1 Mpc-3; Matsuoka et al. 2018c). On the other hand, it is not clear whether
the properties enabling the LyC leakage in J0121$+$0025, and therefore its
large $f_{\rm esc}$ (LyC), can be extended to other bright star-forming
galaxies (see e.g., Harikane et al., 2020). This will be discussed in the next
section.
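Equation 4 with the adopted $z\sim 3$ values can be checked numerically; the UV luminosity density below is an assumed round number standing in for the integral of the Reddy & Steidel (2009) luminosity function down to $-22$ AB (the text does not quote it explicitly), chosen so that the arithmetic is transparent:

```python
import math

# Sketch of Equation 4 with the adopted z~3 values. rho_UV is an assumed
# round number standing in for the integral of the Reddy & Steidel (2009)
# luminosity function down to -22 AB (the text does not quote it); it is
# chosen to be consistent with the quoted N_ion ~ 10^49.8 s-1 Mpc-3.
f_esc_abs = 0.40
xi_ion    = 10 ** 25.2   # ionizing photon production efficiency [Hz erg-1]
rho_UV    = 1.0e25       # dust-corrected UV luminosity density
                         # [erg s-1 Hz-1 Mpc-3]  (assumption)

N_ion = f_esc_abs * xi_ion * rho_UV   # [s-1 Mpc-3]
print(math.log10(N_ion))              # -> ~49.8
```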
### 4.2 Understanding UV-luminous star-forming galaxies: diverse properties
and insights for LyC leakage
J0121$+$0025 shows intriguing properties that differ from those expected in
bright star-forming galaxies. Here we discuss some of these properties and
compare them with those of other UV-luminous star-forming galaxies, with
particular emphasis on the properties that could be related to the LyC leakage.
A well-established trend relating $M_{\rm UV}$ and the strength of the
Ly$\alpha$ and ISM lines has been found in previous works (e.g., Shapley et
al., 2003; Vanzella et al., 2009; Trainor et al., 2015; Du et al., 2018),
whereby Ly$\alpha$ is found to be weak and the LIS lines strong in UV-bright
sources.
While this trend has been established with statistical significance for
galaxies with $M_{\rm UV}$ between $-18$ and $-21$, a few other known $M_{\rm
UV}\simeq-23$ LBGs (Dessauges-Zavadsky et al. 2010; Lee et al. 2013; Marques-
Chaves et al. 2018; Harikane et al. 2020) show the same trend, presenting a
completely damped Ly$\alpha$ absorption and strong LIS lines ($EW_{0}\simeq
2-4$Å) in their spectra, which is compatible with a large column density of
neutral gas ($N$(H i$)>10^{20}$ cm-2). However, such a trend is not seen in
J0121$+$0025, as it shows weak LIS lines and a relatively strong Ly$\alpha$
line. Another interesting difference is that the P-Cygni profiles in wind
lines, in particular in N v, which is very sensitive to age, are weak or absent
in these $M_{\rm UV}\simeq-23$ LBGs (e.g. Dessauges-Zavadsky et al., 2010;
Marques-Chaves et al., 2018; Harikane et al., 2020), which could indicate that
their UV continua are not dominated by O-type stars ($\leq 10$ Myr), at least
compared to J0121$+$0025, but by older and less luminous stars (e.g., B-type
stars). In addition, vigorous star-forming galaxies are also found to be more
dusty, because the production of dust is tightly linked with star formation.
However, this does not apply to J0121$+$0025. In fact, the derived $\rm
SFR=981\pm 232$ $M_{\odot}$ yr-1 of J0121$+$0025 is comparable, in absolute
terms, to those inferred in very dusty, far-IR bright systems but, in contrast
to them, only a small fraction of the total SFR of J0121$+$0025 is obscured
($\simeq 30\%$).
On the other hand, the properties of J0121$+$0025 are remarkably similar to
those observed in a few very young and also luminous starbursts, like the
recently discovered extremely luminous starburst galaxy BOSS-EUVLG1 at
$z=2.469$ ($M_{\rm UV}\simeq-24.4$; Marques-Chaves et al. 2020b), or the less
luminous, but still bright, LyC leaker Ion3 at $z\simeq 4.0$ ($M_{\rm
UV}\simeq-22.2$; Vanzella et al. 2018). The spectra of these galaxies show
very strong P-Cygni profiles in wind lines (O vi, N v and C iv), intense
nebular emission in rest-frame optical lines ($EW_{0}(\rm H\alpha)\gtrsim
700$Å) and SEDs that are compatible with a very young and intense starburst of
a few Myr ($\lesssim 10$ Myr). Like J0121$+$0025, these galaxies show very weak
ISM absorption lines ($EW_{0}<1$Å), strong Ly$\alpha$ emission ($EW_{0}>20$Å),
and are almost unobscured ($\beta_{\rm UV}<-2.2$; Vanzella et al. 2018;
Marques-Chaves et al. 2020b). Other interesting properties shared in these
galaxies are the high specific SFR ($\rm sSFR\simeq 90-100$ Gyr$^{-1}$), compact
morphologies ($r_{\rm eff}\sim 1$ kpc or less), and therefore high star-
formation-rate surface density ($\Sigma_{\rm SFR}\gtrsim 100$ $M_{\odot}$
yr$^{-1}$ kpc$^{-2}$), properties that are common in other LyC leakers (e.g., Izotov et al.,
2016, 2018b) and are thought to play a key role in transforming the ISM
structure (feedback).
Understanding such diversity in the properties of UV-luminous galaxies is
challenging, and possibly premature for now given the lack of statistics. We
note that only a handful of luminous sources are known as bright as $M_{\rm
UV}\sim-23$ and only two brighter than $M_{\rm UV}\sim-24$, J0121$+$0025 (this
work) and BOSS-EUVLG1 (Marques-Chaves et al., 2020b). Nevertheless, the
differences are already striking, in particular those thought to be closely
related to the LyC leakage.
A possible explanation for these differences may be simply related to different
stages in the evolution of these galaxies. J0121$+$0025, BOSS-EUVLG1, or Ion3
could represent a vigorous starburst seen at very early stages ($<10$ Myr),
when the starlight is dominated by young stars and before dust attenuation
becomes efficient, enhancing the UV luminosity. The extreme feedback expected in such
early, intense, and likely short-lived phase could eject the gas and dust from
star-forming regions, allowing the escape of LyC photons (e.g., Sharma et al.,
2017; Trebitsch et al., 2017; Arata et al., 2019). In fact, a powerful ionized
gas outflow has been detected in BOSS-EUVLG1, with log($M_{\rm
out}/M_{\odot})\simeq 8$ and an outflow velocity $v_{\rm out}\simeq 600$ km
s$^{-1}$ (Álvarez-Márquez et al., 2021). On the other hand, if the SFR drops
significantly at later stages, the UV luminosity will still be high for a
considerable period of time ($\sim 50$ Myr) due to the contribution of B-type
stars, but in such a case SN and stellar feedback could be less effective in
clearing sight lines, and neutral gas and dust would cover star-forming
regions, absorbing the LyC radiation. In both cases, the galaxy will appear
bright in the UV, but some spectral features would appear dramatically
different. Alternatively, the differences seen in the spectra of UV-luminous
galaxies could arise from a non-homogeneous distribution of gas and dust, for
which some of them would have star-forming regions cleared of gas and dust
along a favourable sight line (J0121$+$0025, BOSS-EUVLG1, or Ion3), while
others would not (e.g., Harikane et al., 2020).
Independently of the scenario invoked to explain these differences, the
discovery of J0121$+$0025 (and Ion3; Vanzella et al. 2018) indicates that at
least some UV-luminous star-forming galaxies can be strong LyC emitters, which
contradicts recent findings from Harikane et al. (2020). We note that the
sample of $M_{\rm UV}\simeq-23$ LBGs at $z\sim 6$ analysed in Harikane et al.
(2020) was previously selected to have log($L_{\rm Ly\alpha}/\rm erg\,s^{-1})<43.0$
(Matsuoka et al., 2016, 2018a, 2018b, 2019) to avoid a possible contamination
of an AGN. However, such a threshold is conservative (see e.g., Ouchi et al.,
2009; Sobral et al., 2015; Matthee et al., 2017, for other luminous LAEs) and
can lead to a selection bias towards high $N$(H i) and/or declining SFHs. In
fact, the composite spectrum presented in Harikane et al. (2020) shows a
completely damped Ly$\alpha$ absorption, which is compatible with a large
column density of neutral gas, and naturally explains the strong LIS lines
observed in the spectrum. On the other hand, other UV-bright sources
identified as AGNs by Matsuoka et al. (2016, 2018a, 2018b, 2019), based only
on log($L_{\rm Ly\alpha}/\rm erg\,s^{-1})>43.0$, show intense and narrow Ly$\alpha$
emission (as narrow as $<230$ km s$^{-1}$), very weak LIS lines, and,
interestingly, evidence of a strong P-Cygni profile in N v (see Figure 9 in Matsuoka et
al. 2019). The authors argue that such a P-Cygni profile could arise from a
weak BAL-QSO. However, as shown in this work, similar profiles could be
naturally explained by a very young and hot stellar population that could
enhance the UV and Ly$\alpha$ luminosities in these sources, similar to what
is happening in J0121$+$0025 and BOSS-EUVLG1 (Marques-Chaves et al., 2020b).
In closing, it is clear that the properties of UV-luminous star-forming
galaxies are still not well understood and must be investigated in more detail
with larger statistical samples. In a future work, we will present a large
sample of $\sim 70$ other extremely UV-luminous star-forming galaxies
discovered within the eBOSS survey (R. Marques-Chaves, in prep.), hoping to
answer some of these important questions.
## 5 Summary and Conclusion
This work reports the discovery of J0121$+$0025 at $z=3.244\pm 0.001$, a
star-forming galaxy that is extremely luminous in the UV ($M_{\rm
UV}\simeq-24.11$, AB) and in the Ly$\alpha$ line (log[$L_{\rm Ly\alpha}/\rm
erg\ s^{-1}]=43.8$), with copious emission in the LyC spectral range
($\lambda_{0}<912$Å). J0121$+$0025 is a compact starburst, with $r_{\rm
eff}=1\pm 0.5$ kpc, that is only barely resolved in ground-based imaging under
very good seeing conditions. The optical to mid-IR photometry is dominated
by the emission of a vigorous starburst, with log($M_{\star}^{\rm
burst}/M_{\odot})=9.9\pm 0.1$ and a 10 Myr-weighted $\rm SFR=981\pm 232$
$M_{\odot}$ yr$^{-1}$. This yields a high specific star-formation rate $\rm
sSFR=98\pm 32$ Gyr$^{-1}$ and an SFR surface density $\Sigma_{\rm SFR}>157$
$M_{\odot}$ yr$^{-1}$ kpc$^{-2}$ (considering $r_{\rm eff}<1$ kpc). The
high-SNR OSIRIS/GTC spectrum of
J0121$+$0025 reveals strong P-Cygni profiles in the wind lines of O vi, N v,
and C iv, which are well reproduced by a starburst model with an extremely
young age of $\simeq 3$ Myr, roughly consistent with the age derived from the
multi-wavelength SED ($7\pm 2$ Myr). The spectrum shows a rest-frame UV slope
$\beta_{\rm UV}=-2.05\pm 0.10$, consistent with low dust attenuation
$E(B-V)_{\star}=0.04\pm 0.02$. It also shows other features characteristic of
star-forming galaxies, such as stellar absorption originating in the
photospheres of hot stars, so that a significant contribution of an AGN to
the luminosity is ruled out. The Ly$\alpha$ line is moderately strong
($EW_{0}\rm[Ly\alpha]=14\pm 3$Å) and shows a narrow profile ($\rm FWHM\simeq
350$ km s$^{-1}$), with its peak redshifted by $\simeq 120$ km s$^{-1}$, close
to the systemic velocity. Low-ionization ISM lines are also detected, but appear
much weaker when compared to those observed in typical LBGs. Both the weakness
($EW_{0}\rm[LIS]\simeq 1$Å) and the large residual intensity ($I/I_{0}\simeq
0.8$) suggest a clumpy geometry with a non-unity covering fraction or a highly
ionized ISM, for which a significant fraction of ionizing photons could
escape. LyC radiation is detected with a significance of $\simeq 7.9\sigma$ in the
OSIRIS spectrum, corresponding to a flux density $f_{900}=0.781\pm 0.099\mu$Jy
and an ionizing to non-ionizing flux density $(f_{900}/f_{1500})_{\rm
obs}=0.09\pm 0.01$. The possibility of foreground or AGN contamination of the
LyC signal is discussed in detail, and although it cannot be completely ruled
out, it is very unlikely. This makes J0121$+$0025 the most powerful LyC
emitter known among the star-forming galaxy population. Our results indicate
that at least some UV-luminous star-forming galaxies are strong LyC leakers,
bringing new insights to the discussion of the role of luminous and very young
starbursts in cosmic reionization.
## Acknowledgements
The authors thank the referee, Eros Vanzella, for useful comments that greatly
improved the clarity of this work. Based on observations made with the Gran
Telescopio Canarias (GTC) installed in the Spanish Observatorio del Roque de
los Muchachos of the Instituto de Astrofísica de Canarias, in the island of La
Palma. J.A.M., L.C., and I.P.F. acknowledge support from the Spanish State
Research Agency (AEI) under grant numbers ESP2015-65597-C4-4-R,
ESP2017-86852-C4-2-R, RyC-2015-18078, PGC2018-094975-B-C22, and MDM-2017-0737
Unidad de Excelencia “María de Maeztu” - Centro de Astrobiología (CSIC-INTA).
A.S.L. acknowledges support from the Swiss National Science Foundation.
## Data availability
The data underlying this article will be shared on reasonable request to the
corresponding author.
## References
* Abolfathi et al. (2018) Abolfathi B., et al., 2018, ApJS, 235, 42
* Aihara et al. (2019) Aihara H., et al., 2019, PASJ, 71, 114
* Alexandroff et al. (2015) Alexandroff R. M., Heckman T. M., Borthakur S., Overzier R., Leitherer C., 2015, ApJ, 810, 104
* Álvarez-Márquez et al. (2021) Álvarez-Márquez J., Marques-Chaves R., Colina L., Pérez-Fournon I., 2021, A&A, 647, A133
* Appenzeller et al. (2005) Appenzeller I., Stahl O., Tapken C., Mehlert D., Noll S., 2005, A&A, 435, 465
* Arata et al. (2019) Arata S., Yajima H., Nagamine K., Li Y., Khochfar S., 2019, MNRAS, 488, 2629
* Arrabal Haro et al. (2020) Arrabal Haro P., et al., 2020, MNRAS, 495, 1807
* Assef et al. (2013) Assef R. J., et al., 2013, ApJ, 772, 26
* Baldwin et al. (1981) Baldwin J. A., Phillips M. M., Terlevich R., 1981, PASP, 93, 5
* Becker & Bolton (2013) Becker G. D., Bolton J. S., 2013, MNRAS, 436, 1023
* Bentz et al. (2004) Bentz M. C., Osmer P. S., Weinberg D. H., 2004, ApJ, 600, L19
* Bian & Fan (2020) Bian F., Fan X., 2020, MNRAS, 493, L65
* Bian et al. (2017) Bian F., Fan X., McGreer I., Cai Z., Jiang L., 2017, ApJ, 837, L12
* Boquien et al. (2019) Boquien M., Burgarella D., Roehlly Y., Buat V., Ciesla L., Corre D., Inoue A. K., Salas H., 2019, A&A, 622, A103
* Borthakur et al. (2014) Borthakur S., Heckman T. M., Leitherer C., Overzier R. A., 2014, Science, 346, 216
* Bowler et al. (2014) Bowler R. A. A., et al., 2014, MNRAS, 440, 2810
* Bowler et al. (2015) Bowler R. A. A., et al., 2015, MNRAS, 452, 1817
* Brinchmann et al. (2008) Brinchmann J., Pettini M., Charlot S., 2008, MNRAS, 385, 769
* Bruzual & Charlot (2003) Bruzual G., Charlot S., 2003, MNRAS, 344, 1000
* Burgarella et al. (2005) Burgarella D., Buat V., Iglesias-Páramo J., 2005, MNRAS, 360, 1413
* Calzetti et al. (2000) Calzetti D., Armus L., Bohlin R. C., Kinney A. L., Koornneef J., Storchi-Bergmann T., 2000, ApJ, 533, 682
* Cardelli et al. (1989) Cardelli J. A., Clayton G. C., Mathis J. S., 1989, ApJ, 345, 245
* Chandar et al. (2004) Chandar R., Leitherer C., Tremonti C. A., 2004, ApJ, 604, 153
* Chisholm et al. (2018) Chisholm J., et al., 2018, A&A, 616, A30
* Chisholm et al. (2019) Chisholm J., Rigby J. R., Bayliss M., Berg D. A., Dahle H., Gladders M., Sharon K., 2019, ApJ, 882, 182
* Cristiani et al. (2016) Cristiani S., Serrano L. M., Fontanot F., Vanzella E., Monaco P., 2016, MNRAS, 462, 2478
* Crowther et al. (2016) Crowther P. A., et al., 2016, MNRAS, 458, 624
* Dessauges-Zavadsky et al. (2006) Dessauges-Zavadsky M., Prochaska J. X., D’Odorico S., Calura F., Matteucci F., 2006, A&A, 445, 93
* Dessauges-Zavadsky et al. (2010) Dessauges-Zavadsky M., D’Odorico S., Schaerer D., Modigliani A., Tapken C., Vernet J., 2010, A&A, 510, A26
* Du et al. (2018) Du X., et al., 2018, ApJ, 860, 75
* Eisenstein et al. (2011) Eisenstein D. J., et al., 2011, AJ, 142, 72
* Feltre et al. (2020) Feltre A., et al., 2020, A&A, 641, A118
* Finkelstein et al. (2011) Finkelstein S. L., et al., 2011, ApJ, 729, 140
* Finkelstein et al. (2019) Finkelstein S. L., et al., 2019, ApJ, 879, 36
* Fletcher et al. (2019) Fletcher T. J., Tang M., Robertson B. E., Nakajima K., Ellis R. S., Stark D. P., Inoue A., 2019, ApJ, 878, 87
* Gawiser et al. (2007) Gawiser E., et al., 2007, ApJ, 671, 278
* Gazagnes et al. (2018) Gazagnes S., Chisholm J., Schaerer D., Verhamme A., Rigby J. R., Bayliss M., 2018, A&A, 616, A29
* Geach et al. (2017) Geach J. E., et al., 2017, ApJS, 231, 7
* Grimm et al. (2003) Grimm H. J., Gilfanov M., Sunyaev R., 2003, MNRAS, 339, 793
* Gwyn (2008) Gwyn S. D. J., 2008, PASP, 120, 212
* Hainline et al. (2011) Hainline K. N., Shapley A. E., Greene J. E., Steidel C. C., 2011, ApJ, 733, 31
* Harikane et al. (2020) Harikane Y., Laporte N., Ellis R. S., Matsuoka Y., 2020, ApJ, 902, 117
* Heckman et al. (2001) Heckman T. M., Sembach K. R., Meurer G. R., Leitherer C., Calzetti D., Martin C. L., 2001, ApJ, 558, 56
* Heckman et al. (2011) Heckman T. M., et al., 2011, ApJ, 730, 5
* Inoue & Iwata (2008) Inoue A. K., Iwata I., 2008, MNRAS, 387, 1681
* Inoue et al. (2014) Inoue A. K., Shimizu I., Iwata I., Tanaka M., 2014, MNRAS, 442, 1805
* Izotov et al. (2016) Izotov Y. I., Schaerer D., Thuan T. X., Worseck G., Guseva N. G., Orlitová I., Verhamme A., 2016, MNRAS, 461, 3683
* Izotov et al. (2018a) Izotov Y. I., Schaerer D., Worseck G., Guseva N. G., Thuan T. X., Verhamme A., Orlitová I., Fricke K. J., 2018a, MNRAS, 474, 4514
* Izotov et al. (2018b) Izotov Y. I., Worseck G., Schaerer D., Guseva N. G., Thuan T. X., Fricke Verhamme A., Orlitová I., 2018b, MNRAS, 478, 4851
* Izotov et al. (2021) Izotov Y. I., Worseck G., Schaerer D., Guseva N. G., Chisholm J., Thuan T. X., Fricke K. J., Verhamme A., 2021, MNRAS, 503, 1734
* Ji et al. (2020) Ji Z., et al., 2020, ApJ, 888, 109
* Jiang et al. (2021) Jiang L., et al., 2021, Nature Astronomy, 5, 256
* Kennicutt (1998) Kennicutt Jr. R. C., 1998, ARA&A, 36, 189
* Kojima et al. (2017) Kojima T., Ouchi M., Nakajima K., Shibuya T., Harikane Y., Ono Y., 2017, PASJ, 69, 44
* Kornei et al. (2010) Kornei K. A., Shapley A. E., Erb D. K., Steidel C. C., Reddy N. A., Pettini M., Bogosavljević M., 2010, ApJ, 711, 693
* Lee et al. (2013) Lee K.-S., Dey A., Cooper M. C., Reddy N., Jannuzi B. T., 2013, ApJ, 771, 25
* Leitet et al. (2013) Leitet E., Bergvall N., Hayes M., Linné S., Zackrisson E., 2013, A&A, 553, A106
* Leitherer et al. (1999) Leitherer C., et al., 1999, ApJS, 123, 3
* Leitherer et al. (2001) Leitherer C., Leão J. R. S., Heckman T. M., Lennon D. J., Pettini M., Robert C., 2001, ApJ, 550, 724
* Leitherer et al. (2016) Leitherer C., Hernandez S., Lee J. C., Oey M. S., 2016, ApJ, 823, 64
* Leitherer et al. (2018) Leitherer C., Byler N., Lee J. C., Levesque E. M., 2018, ApJ, 865, 55
* Lusso et al. (2015) Lusso E., Worseck G., Hennawi J. F., Prochaska J. X., Vignali C., Stern J., O’Meara J. M., 2015, MNRAS, 449, 4204
* Marques-Chaves et al. (2018) Marques-Chaves R., et al., 2018, ApJ, 854, 151
* Marques-Chaves et al. (2020a) Marques-Chaves R., et al., 2020a, MNRAS, 492, 1257
* Marques-Chaves et al. (2020b) Marques-Chaves R., et al., 2020b, MNRAS, 499, L105
* Matsuoka et al. (2016) Matsuoka Y., et al., 2016, ApJ, 828, 26
* Matsuoka et al. (2018a) Matsuoka Y., et al., 2018a, PASJ, 70, S35
* Matsuoka et al. (2018b) Matsuoka Y., et al., 2018b, ApJS, 237, 5
* Matsuoka et al. (2018c) Matsuoka Y., et al., 2018c, ApJ, 869, 150
* Matsuoka et al. (2019) Matsuoka Y., et al., 2019, ApJ, 883, 183
* Matthee et al. (2017) Matthee J., Sobral D., Darvish B., Santos S., Mobasher B., Paulino-Afonso A., Röttgering H., Alegre L., 2017, MNRAS, 472, 772
* Mauerhofer et al. (2021) Mauerhofer V., Verhamme A., Blaizot J., Garel T., Kimm T., Michel-Dansac L., Rosdahl J., 2021, A&A, 646, A80
* Micheva et al. (2017) Micheva G., Iwata I., Inoue A. K., 2017, MNRAS, 465, 302
* Naidu et al. (2020) Naidu R. P., Tacchella S., Mason C. A., Bose S., Oesch P. A., Conroy C., 2020, ApJ, 892, 109
* Nakajima et al. (2012) Nakajima K., et al., 2012, ApJ, 745, 12
* Nakajima et al. (2013) Nakajima K., Ouchi M., Shimasaku K., Hashimoto T., Ono Y., Lee J. C., 2013, ApJ, 769, 3
* Nakajima et al. (2018) Nakajima K., Fletcher T., Ellis R. S., Robertson B. E., Iwata I., 2018, MNRAS, 477, 2098
* Oesch et al. (2014) Oesch P. A., et al., 2014, ApJ, 786, 108
* Oesch et al. (2016) Oesch P. A., et al., 2016, ApJ, 819, 129
* Ono et al. (2010) Ono Y., et al., 2010, MNRAS, 402, 1580
* Ono et al. (2018) Ono Y., et al., 2018, PASJ, 70, S10
* Ouchi et al. (2008) Ouchi M., et al., 2008, ApJS, 176, 301
* Ouchi et al. (2009) Ouchi M., et al., 2009, ApJ, 696, 1164
* Pahl et al. (2021) Pahl A. J., Shapley A., Steidel C. C., Chen Y., Reddy N. A., 2021, MNRAS, 505, 2447
* Papovich et al. (2016) Papovich C., et al., 2016, ApJS, 224, 28
* Pâris et al. (2018) Pâris I., et al., 2018, A&A, 613, A51
* Peng et al. (2002) Peng C. Y., Ho L. C., Impey C. D., Rix H.-W., 2002, AJ, 124, 266
* Pignataro et al. (2021) Pignataro G. V., et al., 2021, arXiv e-prints, p. arXiv:2106.10286
* Reddy & Steidel (2009) Reddy N. A., Steidel C. C., 2009, ApJ, 692, 778
* Reddy et al. (2016) Reddy N. A., Steidel C. C., Pettini M., Bogosavljević M., 2016, ApJ, 828, 107
* Rigby et al. (2018) Rigby J. R., et al., 2018, ApJ, 853, 87
* Rivera-Thorsen et al. (2017) Rivera-Thorsen T. E., et al., 2017, A&A, 608, L4
* Rivera-Thorsen et al. (2019) Rivera-Thorsen T. E., et al., 2019, Science, 366, 738
* Rix et al. (2004) Rix S. A., Pettini M., Leitherer C., Bresolin F., Kudritzki R.-P., Steidel C. C., 2004, ApJ, 615, 98
* Robertson et al. (2015) Robertson B. E., Ellis R. S., Furlanetto S. R., Dunlop J. S., 2015, ApJ, 802, L19
* Ross et al. (2012) Ross N. P., et al., 2012, ApJS, 199, 3
* Saha et al. (2020) Saha K., et al., 2020, Nature Astronomy, 4, 1185
* Salpeter (1955) Salpeter E. E., 1955, ApJ, 121, 161
* Santos et al. (2020) Santos S., et al., 2020, MNRAS, 493, 141
* Schaerer (2003) Schaerer D., 2003, A&A, 397, 527
* Schlafly & Finkbeiner (2011) Schlafly E. F., Finkbeiner D. P., 2011, ApJ, 737, 103
* Senchyna et al. (2021) Senchyna P., Stark D. P., Charlot S., Chevallard J., Bruzual G., Vidal-García A., 2021, MNRAS, 503, 6112
* Shapley et al. (2001) Shapley A. E., Steidel C. C., Adelberger K. L., Dickinson M., Giavalisco M., Pettini M., 2001, ApJ, 562, 95
* Shapley et al. (2003) Shapley A. E., Steidel C. C., Pettini M., Adelberger K. L., 2003, ApJ, 588, 65
* Shapley et al. (2016) Shapley A. E., Steidel C. C., Strom A. L., Bogosavljević M., Reddy N. A., Siana B., Mostardi R. E., Rudie G. C., 2016, ApJ, 826, L24
* Sharma et al. (2016) Sharma M., Theuns T., Frenk C., Bower R., Crain R., Schaller M., Schaye J., 2016, MNRAS, 458, L94
* Sharma et al. (2017) Sharma M., Theuns T., Frenk C., Bower R. G., Crain R. A., Schaller M., Schaye J., 2017, MNRAS, 468, 2176
* Sobral et al. (2015) Sobral D., Matthee J., Darvish B., Schaerer D., Mobasher B., Röttgering H. J. A., Santos S., Hemmati S., 2015, ApJ, 808, 139
* Sobral et al. (2018) Sobral D., et al., 2018, MNRAS, 477, 2817
* Stanway et al. (2016) Stanway E. R., Eldridge J. J., Becker G. D., 2016, MNRAS, 456, 485
* Stefanon et al. (2019) Stefanon M., et al., 2019, ApJ, 883, 99
* Steidel et al. (2001) Steidel C. C., Pettini M., Adelberger K. L., 2001, ApJ, 546, 665
* Steidel et al. (2016) Steidel C. C., Strom A. L., Pettini M., Rudie G. C., Reddy N. A., Trainor R. F., 2016, ApJ, 826, 159
* Steidel et al. (2018) Steidel C. C., Bogosavljević M., Shapley A. E., Reddy N. A., Rudie G. C., Pettini M., Trainor R. F., Strom A. L., 2018, ApJ, 869, 123
* Tomczak et al. (2016) Tomczak A. R., et al., 2016, ApJ, 817, 118
* Trainor et al. (2015) Trainor R. F., Steidel C. C., Strom A. L., Rudie G. C., 2015, ApJ, 809, 89
* Trebitsch et al. (2017) Trebitsch M., Blaizot J., Rosdahl J., Devriendt J., Slyz A., 2017, MNRAS, 470, 224
* Vanzella et al. (2009) Vanzella E., et al., 2009, ApJ, 695, 1163
* Vanzella et al. (2012) Vanzella E., et al., 2012, ApJ, 751, 70
* Vanzella et al. (2016) Vanzella E., et al., 2016, ApJ, 825, 41
* Vanzella et al. (2017) Vanzella E., et al., 2017, MNRAS, 467, 4304
* Vanzella et al. (2018) Vanzella E., et al., 2018, MNRAS, 476, L15
* Vanzella et al. (2020) Vanzella E., et al., 2020, MNRAS, 491, 1093
* Vanzella et al. (2021) Vanzella E., et al., 2021, arXiv e-prints, p. arXiv:2106.10280
* Verhamme et al. (2015) Verhamme A., Orlitová I., Schaerer D., Hayes M., 2015, Astronomy and Astrophysics, 578, A7
* Walborn et al. (2010) Walborn N. R., et al., 2010, AJ, 139, 1283
* de Barros et al. (2014) de Barros S., Schaerer D., Stark D. P., 2014, A&A, 563, A81
* de Barros et al. (2016) de Barros S., et al., 2016, A&A, 585, A51
|
arxiv-papers
| 2021-07-26T16:35:15 |
2024-09-04T03:07:19.232180
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Rui Marques-Chaves, Daniel Schaerer, Javier Alvarez-Marquez, Luis\n Colina, Miroslava Dessauges-Zavadsky, Ismael Perez-Fournon, Alberto\n Saldana-Lopez, Anne Verhamme",
"submitter": "Rui Marques-Chaves",
"url": "https://arxiv.org/abs/2107.12313"
}
|
2107.12320
|
# End-to-End Deep Learning of Long-Haul Coherent Optical Fiber Communications
via Regular Perturbation Model
Vladislav Neskorniuk(1,2) Andrea Carnio(3) Vinod Bajaj(1,4) Domenico
Marsella(3)
Sergei K. Turitsyn(2) Jaroslaw E. Prilepsky(2) Vahid Aref(1)
## 1 Introduction
Figure 1: Principal scheme of the implemented autoencoder, trained over the
auxiliary RP model and assessed over the SSFM simulation.
Figure 2: Principal scheme of the first-order regular-perturbation (RP) algorithm.
In modern communication systems, transceivers typically contain a chain of
digital signal processing (DSP) blocks individually designed based on
analytical models. End-to-end (E2E) neural-network (NN) based autoencoders
have become of particular interest for improving the overall system
performance, particularly in scenarios where accurate models are either
unknown or computationally prohibitive to use. In this approach, the transmitter (TX),
the channel, and the receiver (RX) are implemented as a single deep NN, and
then, TX and RX are jointly trained to reproduce the TX inputs from the RX
outputs.
The autoencoder-based communication system design was first proposed in [1]
and subsequently realized for various communication systems [2, 3, 4, 5, 6,
7, 8, 9, 10, 11]. In optical fiber communications, E2E learning has been
applied to both intensity-modulation and direct-detection (IM/DD) systems [5,
12, 6] and coherent systems [8, 9, 10, 13]; for the latter, E2E learning is
much more involved. The nonlinear dispersive channel is typically modeled by
the Manakov equation and simulated by a serial cascade of alternating linear
and nonlinear operators, known as the split-step Fourier method (SSFM) [14].
The corresponding neural network consists of many layers, making the training
process via back-propagation very slow and challenging. It requires storing
all intermediate states, making the process memory-hungry. In addition,
back-propagation through many layers is prone to uncontrolled growth or
vanishing of gradients [13]. One way to bypass these problems is to
approximate the channel with simplified models. For instance, E2E learning has
been done using a dispersion-free channel model [10] or a Gaussian-noise model
[8], which treats nonlinear distortion as additive noise; however, these two
models do not account for channel memory.
In this paper, we propose E2E learning via the first-order regular-perturbation
(RP) model. As we will show, the RP model not only offers a quite accurate
approximation of the Manakov equation in the power range of interest, but can
also be implemented in parallel branches, an architecture well suited to
neural-network optimization. As a case study, we consider
single-channel dual-polarization (DP) 64 Gbaud transmission over 30 spans of
80 km standard single mode fiber (SSMF) with lumped optical amplifiers (OAs).
We assume linear coherent reception without any nonlinear equalization.
For a range of launch powers, we learn optimized 64-point geometrically shaped
(GS-64) constellations and nonlinear pre-emphasis filters maximizing the E2E
mutual information (MI). The training is done via RP model but the performance
is evaluated over SSFM (as a precise channel model). We show that in
comparison to the standard 64-QAM, the learned GS-64 constellation and
waveforms increase the optimal launch power by about 0.5 dB and improve the MI
from 4.95 bits/sym./pol. to 5.13 bits/sym./pol.
## 2 RP as Auxiliary Channel Model
Consider the Manakov equation describing a DP optical signal
$\mathbf{E}(z,t)=\mathbf{u}(z,t)\sqrt{f(z)}$ over a fiber-optic link with
lumped amplification [14, 15]. Here $f(z)=\exp\left(-\alpha z+\alpha
L_{\text{sp}}\lfloor z/L_{\text{sp}}\rfloor\right)$ models the optical losses
$\alpha$ and the amplification, and $L_{\text{sp}}$ is the fiber span length:
$\frac{\partial\mathbf{u}}{\partial
z}=-i\frac{\beta_{2}}{2}\frac{\partial^{2}\mathbf{u}}{\partial
t^{2}}+i\frac{8}{9}\gamma f(z)\|\mathbf{u}\|^{2}\mathbf{u}+\eta(z,t),$
where $\beta_{2}$ and $\gamma$ are the dispersion and Kerr nonlinearity
coefficients; $\eta(z,t)$ denotes the amplified spontaneous emission noise
(ASE) of OAs.
The first-order regular perturbation (RP) is a well-established method for
approximating $\mathbf{u}(z,t)$ in the weakly nonlinear regime as [16, 15, 17]
$\displaystyle\mathbf{u}(z,t)$ $\displaystyle=\mathbf{u}_{\rm
L}(z,t)+\mathbf{u}_{\rm NL}(z,t)+\mathcal{O}(\gamma^{2}),$
$\displaystyle\mathbf{u}_{\rm L}(z,t)$
$\displaystyle=\mathcal{D}_{z}\left[\mathbf{u}(0,t)+\eta(z,t)\right],$
$\displaystyle\mathbf{u}_{\rm NL}(z,t)$
$\displaystyle\approx\sum_{m=1}^{N_{\rm
br}-1}\mathcal{D}_{z-m\delta}\left[\mathcal{K}_{\delta,m}[\mathbf{u}_{\rm
L}(m\delta,t)]\right],$ $\displaystyle\mathcal{K}_{\delta,m}[\mathbf{u}(t)]$
$\displaystyle=i\frac{8}{9}\gamma\frac{1-e^{-\alpha\delta}}{\alpha}f(m\delta)\|\mathbf{u}(t)\|^{2}\mathbf{u}(t),$
where
$\mathcal{D}_{z}[\cdot]=\mathcal{F}^{-1}\left[\exp(i\beta_{2}z\omega^{2}/2)\mathcal{F}[\cdot]\right]$
is the chromatic-dispersion operator, $\|\cdot\|$ is the 2-norm, and
$\mathcal{F}$ denotes the Fourier transform. Fig. 2 shows the block diagram of the above
equations. The leftmost branch gives $\mathbf{u}_{\rm L}(z,t)$, while the
other branches sum to $\mathbf{u}_{\rm NL}(z,t)$. The number of branches is
$N_{\rm br}=z/\delta$. The smaller $\delta$ is, the more accurately
$\mathbf{u}_{\rm NL}(z,t)$ can be approximated. Each branch also includes an
additive circularly-symmetric white Gaussian noise $\xi(z^{\prime},t)$ with
power spectral density $\lfloor\frac{z^{\prime}}{L_{\rm sp}}\rfloor\sigma_{\rm
ASE}^{2}$. A link can be modeled by a single stage ($z$ being the link length)
or by a few consecutive stages of the RP model. A stage of the RP model is
easily parallelizable, i.e., all branches can be computed independently and in
parallel. This speeds up its calculation, and hence the overall E2E learning,
on graphics processing units (GPUs). Moreover, the danger of exploding or
vanishing gradients is reduced compared to sequential models like the SSFM.
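As an illustration, a single RP stage can be sketched in a few lines of NumPy. This is a minimal single-polarization sketch of the equations above with the noise terms omitted; the function names, the `dt` sample spacing, and the use of natural-log units for the loss $\alpha$ are our own assumptions, not the authors' implementation.

```python
import numpy as np

def dispersion(u, z, beta2, dt):
    """Chromatic-dispersion operator D_z[.] = F^{-1}[exp(i*beta2*z*w^2/2) F[.]]."""
    w = 2 * np.pi * np.fft.fftfreq(u.shape[-1], d=dt)
    return np.fft.ifft(np.exp(1j * beta2 * z * w**2 / 2) * np.fft.fft(u))

def rp_stage(u0, z, n_br, beta2, gamma, alpha, l_sp, dt=1.0):
    """One first-order RP stage: a linear branch plus n_br - 1 parallel
    nonlinear branches (ASE noise omitted, single polarization)."""
    delta = z / n_br
    # loss/amplification profile f(z) from the text (alpha in 1/km)
    f = lambda s: np.exp(-alpha * s + alpha * l_sp * np.floor(s / l_sp))
    u_nl = np.zeros_like(u0)
    for m in range(1, n_br):
        u_m = dispersion(u0, m * delta, beta2, dt)          # u_L at z' = m*delta
        kern = (1j * (8 / 9) * gamma * (1 - np.exp(-alpha * delta)) / alpha
                * f(m * delta) * np.abs(u_m) ** 2 * u_m)    # Kerr kernel K_{delta,m}
        u_nl += dispersion(kern, z - m * delta, beta2, dt)  # disperse to stage output
    return dispersion(u0, z, beta2, dt) + u_nl
```

Since the branches inside the loop are mutually independent, this loop is exactly the part that can be evaluated in parallel on a GPU.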
Figure 3: Comparison of the 3-stage RP model with the SSFM simulation: the
received SNR (with ASE noise) versus launch power for both models, together
with the RP-vs-SSFM signal-to-distortion ratio (no ASE noise), which lies
$\approx$13.6 dB above the SNR. The approximation error of the RP model is
much smaller than the total distortion.
Let us now discuss the accuracy of RP. As a test case, we numerically consider
single-channel DP 64-QAM transmission at 64 Gbaud over 30 spans of 80 km SSMF.
A root-raised-cosine filter with a roll-off factor of 0.1 is used for pulse
shaping. The Manakov equation is used as the reference channel model. We set
the SSMF parameters to $\alpha=0.21$ dB/km, $\beta_{2}=-21.4$ $\text{ps}^{2}$/km,
and $\gamma=1.14$ $\text{(W\,km)}^{-1}$. Every span is followed by an ideal
lumped OA with a noise figure $\text{NF}=4$ dB.
To obtain a more accurate model, we use three consecutive RP stages, each
covering 10 spans with $N_{\rm br}=100$. We compare this auxiliary channel
model to the precise SSFM in Fig. 3. We plot the signal-to-noise ratio (SNR)
of the received signals after chromatic dispersion compensation (CDC), as
depicted in Fig. 1. We see that the received SNR is quite similar for the SSFM
and the 3-stage RP model in the weakly nonlinear regime (up to 2.5 dBm). To
illustrate the approximation error of RP, we compare the outputs of our RP
model $\mathbf{\bar{y}}_{\rm RP}$ with the outputs of SSFM
$\mathbf{\bar{y}}_{\rm SSFM}$ with no additional ASE noise. We characterize
the approximation error in terms of signal-to-distortion ratio (SDR), defined
as $-20\log_{10}\left(\|\mathbf{\bar{y}}_{\rm RP}-\mathbf{\bar{y}}_{\rm
SSFM}\|/\|\mathbf{\bar{y}}_{\rm SSFM}\|\right)$. We see in Fig. 3 that up to
2.5 dBm, the SDR is at least 13 dB larger than the received SNR, implying that
the approximation error of the RP model is much smaller than the total
distortion in the link.
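The SDR defined above is straightforward to compute from two sampled waveforms; a small sketch (the helper name is ours):

```python
import numpy as np

def sdr_db(y_rp, y_ssfm):
    """Signal-to-distortion ratio (dB) of the RP output vs. the SSFM reference:
    -20*log10(||y_rp - y_ssfm|| / ||y_ssfm||)."""
    return -20.0 * np.log10(np.linalg.norm(y_rp - y_ssfm)
                            / np.linalg.norm(y_ssfm))
```

For example, a 1% relative amplitude error on every sample corresponds to an SDR of 40 dB.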
## 3 E2E Learning Procedure and the Results
Fig. 1 illustrates the proposed E2E autoencoder including three separate
neural networks:
Encoder NN: It is a single linear layer with trainable weights $\theta_{\rm
E}$, which maps a one-hot vector of size 64, representing the transmitted
message, to the corresponding constellation point $c\in\mathbb{C}$. The
constellation power is fixed to $\mathbb{E}\{\|c\|^{2}\}=1$.
Nonlinear pre-emphasis NN: It is implemented based on the known cubic
correction terms [18, 19] $\Delta
x_{h/v,0}=\sum_{m,n}C_{m,n}\,x_{h/v,m}\cdot(x_{h/v,n}x^{*}_{h/v,m+n}+x_{v/h,n}x^{*}_{v/h,m+n})$,
where ${x}_{h/v,m}$ is the $m$-th adjacent symbol in H-/V-polarization of
target symbol ${x}_{h/v,0}$. The trainable weights $\theta_{\rm
P}=\{C_{m,n}\}$ with $|m|\leq 10$, $|n|\leq 10$ are initialized according to
[20].
Decoder NN: It is a dense NN with trainable weights $\theta_{\rm D}$, composed
of a size-1 complex-valued input layer followed by two hidden layers of 32
ReLU neurons each and a size-64 softmax output layer. It maps a received
symbol $y\in\mathbb{C}$ to the 64 posterior probabilities $P(c_{k}|y)$ of each
constellation point $c_{k}$.
Note that the TX NN was divided into two parts to reduce training complexity
and improve interpretability [13, 11]. The same encoder and decoder NNs were
applied to both polarizations.
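For illustration only, the encoder and decoder above can be sketched as untrained NumPy forward passes. The layer sizes follow the text (a 64-point linear encoder with unit average power, and a 2-32-32-64 decoder with ReLU activations and a softmax output); the random initialization and all variable names are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 64  # constellation size

# Encoder: a linear layer acting on one-hot inputs is just a lookup table of
# M complex constellation points, normalized so that E{|c|^2} = 1.
W_enc = rng.normal(size=M) + 1j * rng.normal(size=M)
W_enc /= np.sqrt(np.mean(np.abs(W_enc) ** 2))

def encode(msg):
    return W_enc[msg]

# Decoder: complex input split into (Re, Im) -> 32 ReLU -> 32 ReLU -> softmax(64).
W1 = rng.normal(scale=0.1, size=(2, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 32)); b2 = np.zeros(32)
W3 = rng.normal(scale=0.1, size=(32, M)); b3 = np.zeros(M)

def decode(y):
    x = np.stack([y.real, y.imag], axis=-1)      # size-1 complex input as 2 reals
    h = np.maximum(x @ W1 + b1, 0.0)
    h = np.maximum(h @ W2 + b2, 0.0)
    logits = h @ W3 + b3
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)     # posterior P(c_k | y)
```

In the actual system these weights are of course trained jointly with the pre-emphasis filter, rather than left at their random initialization.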
The autoencoder is trained on the RP model to maximize the E2E mutual
information (MI), i.e.,
$I_{\rm RP}^{*}=6+\max_{\theta_{\rm E},\theta_{\rm P},\theta_{\rm
D}}\mathbb{E}_{X,Y}\{\log_{2}P(x|y)\},$
where the maximization objective is the negative categorical cross entropy.
Using a large random training sequence, the Adam optimizer [21] is used to
maximize the objective function and to obtain the optimal $\theta^{*}_{\rm
E}$, $\theta^{*}_{\rm P}$, $\theta^{*}_{\rm D}$. Next, these learnt parameters
are used to assess the performance over SSFM simulation. To improve matching
of the NN decoder to the actual channel, $\theta^{*}_{\rm D}$ is fine-tuned on
the SSFM propagation data, maximizing the E2E MI $I_{\rm SSFM}^{*}$. Finally,
the MI is assessed by transmission simulation of 10 newly generated random
sequences of $2^{16}$ symbols over SSFM.
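The objective above is simply $\log_{2}64=6$ minus the categorical cross entropy of the decoder posteriors. A hypothetical estimator of this MI lower bound from a batch of posteriors could look as follows (a sketch, assuming a uniform prior over the 64 messages):

```python
import numpy as np

def mi_lower_bound(post, sent):
    """MI lower bound per symbol: log2(M) + E{log2 P(x|y)}, i.e. log2(M)
    minus the categorical cross entropy, with M taken from the posterior width."""
    m = post.shape[-1]
    p_true = post[np.arange(len(sent)), sent]   # posterior of the sent symbol
    return np.log2(m) + np.mean(np.log2(p_true))
```

A non-informative decoder (uniform posteriors) gives 0 bits, while a perfect decoder gives the full log2(64) = 6 bits/sym./pol.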
Fig. 4(a) shows the E2E MI optimized for different input powers. We also plot
the E2E MI optimized without pre-emphasis NN, neglecting the channel memory.
For each point, the standard deviation over 10 simulation runs is also shown.
As a reference, we plot the MI of standard 64-QAM without pre-emphasis,
evaluated with two methods: by training the decoder NN to learn $P(x|y)$, and
by using a kernel density estimator (KDE) to estimate $P(y|x)$. The latter
gave larger values, highlighting the opportunities for decoder improvement,
and is taken as the reference. We observe that the E2E learning results in a
considerable gain. Without pre-emphasis, optimization of the constellation
shaping gives an MI gain of $\approx 0.11$ bits/sym./pol., and with
pre-emphasis the MI gain increases further to $\approx 0.18$ bits/sym./pol.,
while the optimal power is also increased by $\approx 0.5$ dB.
[Fig. 4(a) plot: E2E MI vs. launch power (1–3 dBm), with curves for 64QAM (KDE), 64QAM (NN), memoryless GS (NN), and GS + pre-emphasis (NN); annotated gains $\approx$ 0.11 and $\approx$ 0.18 bits/sym./pol. Constellation panels: (b) without pre-emphasis (memoryless) at 2.25 dBm; (c) with pre-emphasis at 2.5 dBm.]
Figure 4: (a) MI obtained by 64-QAM, by the learned GS-64 constellation, and by joint GS-64 and nonlinear pre-emphasis. Examples of the learnt constellations: (b) memoryless GS-64 and (c) GS-64 $+$ pre-emphasis.
## 4 Conclusion
We presented a novel End-to-End learning approach optimizing geometric
constellation shaping and a nonlinear pre-emphasis for coherent fiber-optic
communications, resulting in a considerable mutual information increase in
simulation. The proposed technique, relying on the “parallelizable” regular
perturbation model, can be used for different fiber channels.
Acknowledgements: This project has received funding from EU Horizon 2020
program under the Marie Skłodowska-Curie grant agreement No. 766115 (FONTE).
JEP is supported by Leverhulme Project RPG-2018-063. SKT acknowledges the
support of EPSRC project TRANSNET.
## References
* [1] T. O'Shea and J. Hoydis, “An introduction to deep learning for the physical layer,” _IEEE Transactions on Cognitive Communications and Networking_ , vol. 3, no. 4, pp. 563–575, 2017.
* [2] S. Dörner, S. Cammerer, J. Hoydis, and S. Ten Brink, “Deep learning based communication over the air,” _IEEE Journal of Selected Topics in Signal Processing_ , vol. 12, no. 1, pp. 132–143, 2017.
* [3] S. Cammerer, F. A. Aoudia, S. Dörner, M. Stark, J. Hoydis, and S. Ten Brink, “Trainable communication systems: Concepts and prototype,” _IEEE Transactions on Communications_ , vol. 68, no. 9, pp. 5489–5503, 2020.
* [4] M. Stark, F. A. Aoudia, and J. Hoydis, “Joint learning of geometric and probabilistic constellation shaping,” in _2019 IEEE Globecom Workshops (GC Wkshps)_. IEEE, 2019, pp. 1–6.
* [5] B. Karanov, M. Chagnon, F. Thouin, T. A. Eriksson, H. Bülow, D. Lavery, P. Bayvel, and L. Schmalen, “End-to-end deep learning of optical fiber communications,” _Journal of Lightwave Technology_ , vol. 36, no. 20, pp. 4843–4855, 2018.
* [6] B. Karanov, M. Chagnon, V. Aref, D. Lavery, P. Bayvel, and L. Schmalen, “Concept and experimental demonstration of optical IM/DD end-to-end system optimization using a generative model,” in _2020 Optical Fiber Communications Conference and Exhibition (OFC)_. IEEE, 2020, pp. 1–3.
* [7] K. Gümüş, A. Alvarado, B. Chen, C. Häger, and E. Agrell, “End-to-end learning of geometrical shaping maximizing generalized mutual information,” in _2020 Optical Fiber Communications Conference and Exhibition (OFC)_. IEEE, 2020, pp. 1–3.
* [8] R. T. Jones, T. A. Eriksson, M. P. Yankov, and D. Zibar, “Deep learning of geometric constellation shaping including fiber nonlinearities,” in _2018 European Conference on Optical Communication (ECOC)_ , 2018, pp. 1–3.
* [9] S. Gaiarin, F. Da Ros, R. T. Jones, and D. Zibar, “End-to-end optimization of coherent optical communications over the split-step fourier method guided by the nonlinear fourier transform theory,” _Journal of Lightwave Technology_ , vol. 39, no. 2, pp. 418–428, 2020.
* [10] S. Li, C. Häger, N. Garcia, and H. Wymeersch, “Achievable information rates for nonlinear fiber communication via end-to-end autoencoder learning,” in _2018 European Conference on Optical Communication (ECOC)_. IEEE, 2018, pp. 1–3.
* [11] J. Song, C. Häger, J. Schröder, A. Graell i Amat, and H. Wymeersch, “End-to-end autoencoder for superchannel transceivers with hardware impairment,” in _2021 Optical Fiber Communications Conference and Exhibition (OFC)_ , 2021, pp. 1–3.
* [12] B. Karanov, G. Liga, V. Aref, D. Lavery, P. Bayvel, and L. Schmalen, “Deep learning for communication over dispersive nonlinear channels: performance and comparison with classical digital signal processing,” in _2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton)_. IEEE, 2019, pp. 192–199.
* [13] T. Uhlemann, S. Cammerer, A. Span, S. Doerner, and S. ten Brink, “Deep-learning autoencoder for coherent and nonlinear optical communication,” in _Photonic Networks; 21st ITG-Symposium_. VDE, 2020, pp. 1–8.
* [14] G. P. Agrawal, “Nonlinear fiber optics,” in _Nonlinear Science at the Dawn of the 21st Century_. Springer, 2000, pp. 195–211.
* [15] A. Mecozzi and R.-J. Essiambre, “Nonlinear shannon limit in pseudolinear coherent systems,” _Journal of Lightwave Technology_ , vol. 30, no. 12, pp. 2011–2024, 2012.
* [16] A. Vannucci, P. Serena, and A. Bononi, “The rp method: A new tool for the iterative solution of the nonlinear schrödinger equation,” _Journal of Lightwave Technology_ , vol. 20, no. 7, p. 1102, 2002.
* [17] F. J. Garcia Gomez and G. Kramer, “Mismatched models to lower bound the capacity of dual-polarization optical fiber channels,” _Journal of Lightwave Technology_ , 2021.
* [18] Z. Tao, L. Dou, W. Yan, L. Li, T. Hoshida, and J. C. Rasmussen, “Multiplier-free intrachannel nonlinearity compensating algorithm operating at symbol rate,” _Journal of Lightwave Technology_ , vol. 29, no. 17, pp. 2570–2576, 2011.
* [19] M. Malekiha, I. Tselniker, and D. V. Plant, “Efficient nonlinear equalizer for intra-channel nonlinearity compensation for next generation agile and dynamically reconfigurable optical networks,” _Optics express_ , vol. 24, no. 4, pp. 4097–4108, 2016.
* [20] A. Ghazisaeidi and R.-J. Essiambre, “Calculation of coefficients of perturbative nonlinear pre-compensation for nyquist pulses,” in _2014 The European Conference on Optical Communication (ECOC)_. IEEE, 2014, pp. 1–3.
* [21] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” _arXiv preprint arXiv:1412.6980_ , 2014.
Authors: Vladislav Neskorniuk, Andrea Carnio, Vinod Bajaj, Domenico Marsella, Sergei K. Turitsyn, Jaroslaw E. Prilepsky, Vahid Aref. Source: https://arxiv.org/abs/2107.12320 (CC BY 4.0).
Indian Institute of Information Technology Allahabad, India
Email: {mit2019075,pse2017002,rsi2017502,sonali}@iiita.ac.in
# MAG-Net: Multi-task attention guided network for brain tumor segmentation and classification
(All authors have contributed equally.)
Sachin Gupta Narinder Singh Punn Sanjay Kumar Sonbhadra Sonali Agarwal
###### Abstract
Brain tumors are among the most common and deadliest diseases across all age groups. Radiologists generally rely on the MRI modality to identify and diagnose tumors. Correct identification of the tumor region and its type aids diagnosis and the follow-up treatment plan. However, analysing such scans is a complex and time-consuming task for any radiologist. Motivated by deep-learning-based computer-aided-diagnosis systems, this paper proposes a multi-task attention guided encoder-decoder network (MAG-Net) to classify and segment brain tumor regions in MRI images. MAG-Net is trained and evaluated on the Figshare dataset, which includes coronal, axial, and sagittal views with three tumor types: meningioma, glioma, and pituitary tumor. In exhaustive experimental trials the model achieved promising results compared to existing state-of-the-art models, while having the fewest training parameters among them.
###### Keywords:
Attention · Brain tumor · Deep learning · Segmentation.
## 1 Introduction
Brain tumors are among the deadliest and most common forms of cancer in both children and adults. Determining the correct type of brain tumor at an early stage is key to further diagnosis and treatment. However, for any radiologist, identification and segmentation of brain tumors from multi-sequence MRI scans for diagnosis, monitoring, and treatment are complex and time-consuming tasks.
Brain tumor segmentation is challenging because tumors vary widely in both structure and function. Furthermore, tumor intensity differs significantly from patient to patient. MRI is preferred over other imaging modalities [4] for diagnosing brain tumors because it is non-invasive, involves no exposure to ionizing radiation, and offers superior image contrast in soft tissues.
Deep learning has advanced many fields with promising performance, especially biomedical image analysis [24]. Convolutional neural networks (CNNs) [2] are the most widely used models in image processing. CNNs combine convolution, pooling and activation layers, accompanied by normalization and regularization operations, to extract and learn target-specific features for the desired task (classification, localization, segmentation, etc.). In recent years various techniques have been proposed for identification (classification and segmentation) of brain tumors in MRI images and have achieved promising results [13, 30]. However, most of these approaches use millions of trainable parameters, which results in slower training and analysis, as well as high variance in results when data samples are limited.
To overcome the aforementioned drawbacks, Ronneberger et al. [26] proposed the U-shaped network (U-Net) for biomedical image segmentation. The model follows an encoder-decoder design with a feature-extraction (contraction) path and a reconstruction (expansion) path. In addition, skip connections propagate the extracted feature maps to the corresponding reconstruction stage to aid upsampling of the feature maps. Finally, the model produces a segmentation mask of the same dimensions as the input, highlighting the target structure (the tumor in our case). Following the state-of-the-art potential of U-Net, many variants have been proposed to further improve segmentation performance. The attention-based U-Net [19] is one such variant, which draws the focus of the model towards target features to achieve better segmentation results. Attention filters are introduced in the skip connections, where each feature is assigned a weight coefficient highlighting its importance to the target features. Despite achieving promising results, these models have millions of trainable parameters, which can be reduced by optimizing the convolution operation. This can be achieved with depthwise separable convolutions [8], performed in two stages: depthwise and pointwise convolutions. The reduction in the number of parameters and multiplications compared to a standard convolution can be expressed as $1/r+1/f^{2}$, where $r$ is the depth of the output feature map and $f$ is the kernel height or width [21]; the achieved reduction in parameters and multiplications is $\sim 80\%$.
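The stated reduction can be checked numerically. The following sketch counts the parameters of a standard $f \times f$ convolution versus its depthwise separable counterpart (biases ignored; the layer sizes are illustrative, and the exact saving depends on $f$ and $r$):

```python
def conv_params(f, d_in, r):
    """Standard f x f convolution, d_in -> r output channels."""
    return f * f * d_in * r

def sep_conv_params(f, d_in, r):
    """Depthwise separable: depthwise (f*f*d_in) + pointwise (d_in*r)."""
    return f * f * d_in + d_in * r

f, d_in, r = 3, 64, 128                               # illustrative layer sizes
ratio = sep_conv_params(f, d_in, r) / conv_params(f, d_in, r)
print(round(ratio, 4), round(1 / r + 1 / f ** 2, 4))  # 0.1189 0.1189
```

For a 3 x 3 kernel and 128 output channels the separable form keeps only about 12% of the parameters; smaller $r$ or $f$ gives savings closer to the quoted $\sim 80\%$.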
Following this context, an attention guided network is proposed that uses depthwise separable convolutions for real-time segmentation and classification of brain tumors in MRI images. The major contributions of the present research work are as follows:
* A novel multi-task (segmentation and classification) attention guided network (MAG-Net) is proposed for brain tumor diagnosis.
* Optimization of training parameters using depthwise separable convolution, reducing the trainable parameters of MAG-Net from 26.0M to 5.4M.
* MAG-Net achieves significant improvement in classification and segmentation compared to the state-of-the-art models while using limited data samples.
The rest of the paper is organized as follows: Section 2 describes the crux of related work on brain tumor segmentation and classification. Section 3 presents the proposed architecture, whereas Section 4 discusses the training and testing environment with experimental and comparative analysis. Finally, concluding remarks are presented in Section 5.
## 2 Literature review
Identifying brain tumors is a challenging task for radiologists. Recently, several deep-learning-based approaches have been proposed to aid faster diagnosis. Segmentation of the infected region is the most common and critical practice in diagnosis. In addition, the segmented region can be given a label (classification) indicating what type of anomaly or infection is present in the image.
In contrast to traditional approaches, Cheng et al. [7] proposed a brain tumor classification approach using the augmented tumor region, instead of the original tumor region, as the region of interest (RoI). The authors used the bag-of-words (BoW) technique to segment and extract local features from the RoI. A dictionary encodes the extracted local feature maps, which are then passed to a support vector machine (SVM) classifier. The approach outperformed traditional classification techniques with an accuracy of 91.28%, but its performance is limited by data availability. In similar work, Ismael et al. [14] combined statistical features with neural networks using a filter combination: the discrete wavelet transform (DWT, represented by wavelet coefficients) and Gabor filters (for texture representation). For tumor classification, a three-layer neural network classifier was developed using a multilayer perceptron trained with the statistical features. Like Cheng et al. [7], the authors achieved promising results on limited data samples, with an overall accuracy of 91.9%.
Recently, the capsule network [12] has shown great performance in many fields, especially biomedical image processing. Afshar et al. [1] proposed a basic CapsNet with three capsules in the last layer representing the three tumor classes. However, due to the varied behavior (background, intensity, structure, etc.) of MRI images, the model failed to extract optimal features representing the tumor structure. The authors achieved tumor classification accuracies of 78% and 86.5% using raw MRI images and tumor-segmented MRI images, respectively. In another approach, Pashaei et al. [20] used a CNN and a kernel extreme learning machine comprising one hidden layer with 100 neurons to increase the robustness of the model. Over several experimental trials, the authors achieved an accuracy of 93.68% but detected only 1% of the positive pituitary tumor cases. Deepak et al. [9] proposed a transfer learning approach that uses a pre-trained GoogLeNet to extract features (referred to as deep CNN features), with a softmax classifier in the output layer for the three tumor classes. The authors also combined the deep CNN features with an SVM to analyse classification performance. They achieved 97.1% accuracy, but the standalone GoogLeNet performed poorly due to overfitting on the limited training dataset and misclassification of meningioma tumors. In another approach, Díaz-Pernas et al. [11] proposed processing images at three different spatial scales along multiple feature pathways for classification and segmentation of brain tumors. The images are pre-processed with an elastic transform to prevent overfitting. The model analyses the entire image and classifies it pixel by pixel into one of four output labels (0-healthy, 1-meningioma, 2-glioma, and 3-pituitary tumor). The approach outperformed existing approaches with 97.3% classification accuracy, but with poor segmentation performance. Following this context, this article proposes the multi-task attention guided network (MAG-Net), based on the U-Net architectural design [26], which uses parallel depthwise separable convolution layers for multi-level feature extraction along with an attention mechanism to better extract tumor features for brain tumor classification and to generate the corresponding tumor mask.
Figure 1: Schematic representation of the architecture of MAG-Net model.
## 3 Proposed work
The proposed multi-task attention guided network (MAG-Net), shown in Fig. 1, focuses on reducing overall computation, improving feature extraction, and optimizing the number of training parameters. The architecture consists of encoder, decoder, and classification modules with 5.4M trainable parameters, and its overall design is inspired by the U-Net encoder-decoder style [23]. Due to its state-of-the-art potential, this design is the most prominent choice among researchers for biomedical image segmentation [17].
In MAG-Net, to reduce the number of training parameters without sacrificing performance, standard convolution operations are replaced with depthwise separable convolutions. In addition, the skip connections are equipped with attention filters [19] to better extract the feature maps concerning the tumor regions. The attention mechanism filters irrelevant feature maps in the skip connection by assigning weights that highlight their importance to the tumor regions. Besides, the encoder block is equipped with parallel separable convolution filters of different sizes, whose extracted feature maps are combined for better feature learning. These features are then passed to the corresponding decoder blocks via attention-enabled skip connections to aid feature reconstruction through upsampling. The bottleneck layer connects the feature-extraction path to the feature-reconstruction path; in this layer, filters of different sizes are used along with layer normalization. Furthermore, classification is performed using the feature maps extracted by the final encoder block.
### 3.1 Encoder
To detect structures of varying shape and size, such as brain tumors, separable convolutions of different sizes are required. Inspired by the inception architecture [22], the encoder segment consists of separable convolutions with 1 x 1, 3 x 3, and 5 x 5 kernels, each followed by layer normalization. The extracted feature maps are fused with an add operation and downsampled by max pooling. Fig. 2 shows the proposed encoder architecture of the MAG-Net model for an input feature map $\mathcal{F}_{i}\in\mathcal{R}^{w\times h\times d}$, where $w$, $h$ and $d$ are the width, height and depth of the feature map.
Figure 2: Proposed MAG-Net 2D encoder module.
### 3.2 Decoder
The decoder follows the encoder block and reconstructs the spatial dimensions to generate an output mask of the same size as the input. It upsamples the feature maps, concatenates them with the attention maps, and applies a separable convolution operation. Long skip connections [10] propagate the attention feature maps from encoder to decoder to recover spatial information lost during downsampling in the encoder. Using attention in the skip connections helps the model suppress irrelevant features.
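As a concrete illustration of the attention-gated skip connection, the following numpy sketch implements the additive attention gate of Oktay et al. [19]; the weights and shapes are illustrative, and the 1 x 1 convolutions are represented as per-channel matrix multiplications:

```python
import numpy as np

def attention_gate(x, g, w_x, w_g, psi):
    """Additive attention gate on a skip connection (after Oktay et al.).
    x: encoder feature map (H, W, C); g: gating signal from the decoder.
    w_x, w_g: (C, C_int) weights standing in for 1x1 convolutions;
    psi: (C_int,) scoring weights. Returns x scaled by per-pixel
    attention coefficients alpha in (0, 1)."""
    q = np.maximum(x @ w_x + g @ w_g, 0.0)      # ReLU(W_x x + W_g g)
    alpha = 1.0 / (1.0 + np.exp(-(q @ psi)))    # sigmoid scores, shape (H, W)
    return x * alpha[..., None]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 16))             # skip-connection features
g = rng.standard_normal((8, 8, 16))             # upsampled decoder features
out = attention_gate(x, g, rng.standard_normal((16, 8)),
                     rng.standard_normal((16, 8)), rng.standard_normal(8))
print(out.shape)  # (8, 8, 16): same shape, irrelevant regions attenuated
```

Because every attention coefficient lies in (0, 1), the gated output can only attenuate the skip features, never amplify them.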
### 3.3 Classification
This module classifies the brain tumor MRI images into their respective classes, i.e. meningioma, glioma, and pituitary tumor, using the features extracted by the encoder. The encoder thus acts as a backbone for both classification and segmentation, reducing the overall complexity of the model. The feature maps of the last encoder block serve as input and are transformed into a 1D tensor by global average pooling. The pooled feature maps are then processed by multiple fully connected layers, and the classification output is generated by a softmax-activated layer that produces a probability distribution over the tumor classes for each image.
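A minimal numpy sketch of this classification head, with hypothetical learned weights `w` and `b` (the actual model uses multiple fully connected layers; a single dense layer is shown for brevity):

```python
import numpy as np

def classify_head(features, w, b):
    """Global average pooling over the spatial axes of the final
    encoder feature maps (H, W, C), followed by a dense softmax layer
    over the three tumor classes. w: (C, 3) and b: (3,) stand in for
    the learned fully connected layers."""
    pooled = features.mean(axis=(0, 1))          # (C,) 1D tensor via GAP
    logits = pooled @ w + b
    e = np.exp(logits - logits.max())            # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(1)
probs = classify_head(rng.standard_normal((16, 16, 32)),
                      rng.standard_normal((32, 3)), np.zeros(3))
print(probs.shape, round(float(probs.sum()), 6))  # (3,) 1.0
```

The predicted class is simply the argmax over the three probabilities (meningioma, glioma, pituitary).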
## 4 Experiment and Results
### 4.1 Dataset Setup
The present research work uses the Figshare dataset [6], which comprises 2D T1-weighted contrast-enhanced MRI scans acquired from 233 patients, for a total of 3064 slices. The T1 modality highlights distinct features of the brain tumor, with three classes representing the tumor type: meningioma (708 slices), glioma (1426 slices), and pituitary (930 slices), forming 23%, 46.5%, and 30% of the dataset, respectively. Sample MRI slices of the different tumor classes are presented in Fig. 3. The dataset is randomly split into an 80% training set and a 20% validation set; this composition is kept the same throughout the experimental trials for comparative analysis.
Figure 3: A slice of MRI scan with T1 modality showing different tumor
classes: meningioma, glioma, and pituitary
### 4.2 Training and Testing
The MAG-Net model is trained and evaluated on the Figshare dataset. Training uses early stopping [3] to tackle overfitting and the Adam optimizer [27]. Cross-entropy-based loss functions are the most popular choice for training and validating segmentation and classification tasks; accordingly, binary cross entropy and categorical cross entropy are employed for binary tumor-mask generation and classification, respectively. Binary cross entropy (BCE, shown in Eq. 1) applies a sigmoid activation [16] followed by the cross-entropy loss [29], comparing each predicted probability to the actual output. Categorical cross entropy (CE, shown in Eq. 2) applies a softmax activation followed by the cross-entropy loss, comparing the output probability over each tumor class for each MRI image.
$\mathcal{L}_{BCE}=-\frac{1}{N}\sum_{i=1}^{N}\left(y_{i}\log(p(y_{i}))+(1-y_{i})\log(1-p(y_{i}))\right)$ (1)
where $y$ represents the actual tumor mask, $p(y)$ the predicted tumor mask, and $N$ the total number of images.
$\mathcal{L}_{CE}=-\sum_{i}^{C}t_{i}\log(f(s_{i}))$ (2)
where $C$ is the number of classes, $f(s_{i})$ is the predicted probability of class $i$, and $t_{i}$ is 1 for the true label and 0 otherwise.
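Under these definitions, the two losses can be sketched in plain Python (the small `eps` guards against `log(0)`; function and variable names are illustrative):

```python
import math

def bce(y_true, y_pred, eps=1e-12):
    """Binary cross entropy of Eq. 1, averaged over N predictions."""
    n = len(y_true)
    return -sum(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
                for y, p in zip(y_true, y_pred)) / n

def ce(t, s, eps=1e-12):
    """Categorical cross entropy of Eq. 2 for one sample: t is the
    one-hot tumor label over C classes, s the softmax output."""
    return -sum(ti * math.log(si + eps) for ti, si in zip(t, s))

print(round(bce([1, 0], [0.9, 0.1]), 4))         # 0.1054
print(round(ce([0, 1, 0], [0.1, 0.8, 0.1]), 4))  # 0.2231
```

Both losses are zero only when the predicted probabilities match the labels exactly, which is what drives the mask and class predictions toward the ground truth.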
For segmentation the most popular evaluation metrics are the dice coefficient (shown in Eq. 3) and intersection-over-union (IoU / Jaccard index, shown in Eq. 4), and hence both are used to evaluate the trained MAG-Net model. Here TP denotes correctly classified tumor voxels, FP wrongly classified voxels, and FN missed tumor voxels.
$DiceCoefficient=\frac{2*TP}{2*TP+FP+FN}$ (3)
$IoU=\frac{TP}{TP+FP+FN}$ (4)
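For binary masks, Eqs. 3 and 4 can be computed directly from the TP/FP/FN counts, e.g.:

```python
def dice_iou(pred, true):
    """Dice coefficient (Eq. 3) and IoU (Eq. 4) for binary masks
    given as flat 0/1 sequences."""
    tp = sum(p and t for p, t in zip(pred, true))
    fp = sum(p and not t for p, t in zip(pred, true))
    fn = sum(not p and t for p, t in zip(pred, true))
    dice = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    return dice, iou

print(dice_iou([1, 1, 0, 0], [1, 0, 1, 0]))  # -> (0.5, 0.3333333333333333)
```

Note that both metrics ignore true negatives, which is why they are preferred over plain accuracy for small tumor regions in large scans.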
To evaluate the classification module of MAG-Net, accuracy, precision, recall, F1-score and micro-average metrics are considered for better quantification and visualization of model performance. Precision (shown in Eq. 5) quantifies the accuracy of the positive predictions. Recall (shown in Eq. 6) is the fraction of actual positives that are classified correctly. F1-score (shown in Eq. 7) is the harmonic mean of precision and recall. Support is the number of true occurrences of the respective class in the dataset. The micro average ($\mu_{avg}$, shown in Eq. 8, Eq. 9 and Eq. 10) is calculated for precision, recall, and F1-score: the test dataset is divided into two subsets, and the true positive, false positive and false negative predictions are identified on each.
$Precision=\frac{TP}{(TP+FP)}$ (5)
$Recall=\frac{TP}{(TP+FN)}$ (6)
$F1-score=\frac{2*Recall*Precision}{(Recall+Precision)}$ (7)
$\mu_{avg}(Precision)=\frac{TP_{1}+TP_{2}}{(TP_{1}+TP_{2}+FP_{1}+FP_{2})}$ (8)
$\mu_{avg}(Recall)=\frac{TP_{1}+TP_{2}}{(TP_{1}+TP_{2}+FN_{1}+FN_{2})}$ (9)
$\mu_{avg}(F1\mathrm{-}score)=HM(\mu_{avg}(Precision),\mu_{avg}(Recall))$ (10)
where $TP_{1}$, $FP_{1}$, and $FN_{1}$ belong to the first subset and $TP_{2}$, $FP_{2}$, and $FN_{2}$ to the second; $HM$ denotes the harmonic mean.
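Equations 8-10 amount to pooling the per-subset counts before computing the metrics, as in this sketch (the counts are illustrative):

```python
def micro_avg(counts):
    """Micro-averaged precision, recall and F1 (Eqs. 8-10) from a list
    of per-subset (TP, FP, FN) counts; F1 is the harmonic mean HM."""
    tp, fp, fn = (sum(c[i] for c in counts) for i in range(3))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = micro_avg([(8, 2, 1), (6, 0, 3)])    # two illustrative subsets
print(round(p, 3), round(r, 3), round(f1, 3))   # 0.875 0.778 0.824
```

Pooling the counts first weights every prediction equally, so frequent classes dominate the micro average, unlike a macro average over per-class scores.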
Figure 4: Qualitative results of brain tumor segmentation and classification
on MRI images (a, b, and c of different tumor classes) using MAG-Net model.
### 4.3 Results
For a given MRI image, MAG-Net outputs the segmented mask of the tumor region together with the predicted class (meningioma, glioma, or pituitary). For randomly chosen MRI slices, Fig. 4 presents the segmentation and classification results of the model. The visual comparison confirms that the results are close to the ground truth of the respective tumor classes.
Table 1: Comparative analysis of the MAG-Net with the existing segmentation models on test dataset. Model | Accuracy | Loss | Dice coefficient | Jaccard index | Parameters
---|---|---|---|---|---
U-Net | 99.5 | 0.024 | 0.70 | 0.55 | 31M
wU-Net | 99.4 | 0.034 | 0.66 | 0.49 | 31M
Unet++ | 99.5 | 0.028 | 0.65 | 0.49 | 35M
MAG-Net | 99.52 | 0.021 | 0.74 | 0.60 | 5.4M
*bold quantities indicate the best results.
Table 1 reports the segmentation results of the proposed work in the form of accuracy, loss, dice coefficient, Jaccard index, and trainable parameters, along with a comparative analysis against other popular approaches: U-Net [15], U-Net++ [18, 31], and wU-Net [5]. The proposed framework outperforms the other approaches, segmenting tumors with dice and IoU scores of 0.74 and 0.60, respectively. In contrast to the other models, MAG-Net achieves the best results with minimal trainable parameters.
Table 2: Comparative analysis of the MAG-Net with the existing classification models on test dataset using confusion matrix. Model | Acc. | Loss | | Meningioma | Glioma | Pituitary
---|---|---|---|---|---|---
| | | Meningioma | 114 | 25 | 3
VGG16 | 93.15 | 0.26 | Glioma | 13 | 271 | 1
| | | Pituitary | 0 | 1 | 185
| | | Meningioma | 114 | 21 | 7
VGG19 | 93.8 | 0.25 | Glioma | 11 | 274 | 0
| | | Pituitary | 0 | 1 | 185
| | | Meningioma | 123 | 12 | 7
ResNet50 | 94.2 | 0.31 | Glioma | 16 | 266 | 3
| | | Pituitary | 1 | 0 | 185
| | | Meningioma | 134 | 7 | 1
MAG-Net | 98.04 | 0.11 | Glioma | 1 | 282 | 2
| | | Pituitary | 1 | 0 | 185
*bold quantities indicate the best results.
Table 2 and Table 3 report the classification results of the proposed work in the form of accuracy, loss, confusion matrix, and classification report for meningioma, glioma, and pituitary tumors, along with a comparative analysis against other state-of-the-art approaches: VGG-16 [28], VGG-19 [28], and ResNet50 [25]. Exhaustive experimental trials show that MAG-Net outperforms the existing approaches by a significant margin on all metrics.
Table 3: Comparative analysis of the MAG-Net with the existing classification models on test dataset considering classification report as evaluation parameter. Model | Classes | Precision | Recall | f1-Score | Support
---|---|---|---|---|---
VGG16 | Meningioma | 0.90 | 0.80 | 0.85 | 142
Glioma | 0.91 | 0.85 | 0.93 | 285
Pituitary | 0.98 | 0.99 | 0.99 | 186
Micro avg. | 0.93 | 0.93 | 0.93 | 613
VGG19 | Meningioma | 0.91 | 0.80 | 0.85 | 142
Glioma | 0.93 | 0.96 | 0.94 | 285
Pituitary | 0.96 | 0.99 | 0.98 | 186
Micro avg. | 0.93 | 0.93 | 0.93 | 613
ResNet-50 | Meningioma | 0.88 | 0.87 | 0.87 | 142
Glioma | 0.93 | 0.99 | 0.94 | 285
Pituitary | 0.95 | 0.99 | 0.97 | 186
Micro avg. | 0.94 | 0.94 | 0.94 | 613
MAG-Net | Meningioma | 0.99 | 0.94 | 0.96 | 142
Glioma | 0.98 | 0.99 | 0.98 | 285
Pituitary | 0.98 | 0.99 | 0.99 | 186
Micro avg. | 0.98 | 0.98 | 0.98 | 613
*bold quantities indicate the best results.
It is observed that, unlike the other state-of-the-art models, MAG-Net achieves promising results through reduced overall computation, better feature extraction and optimized training parameters. As Table 1 shows, the raw U-Net displays similar performance, but at the cost of a large number of trainable parameters. In MAG-Net, the encoder block replaces standard convolution layers with depthwise separable convolutions of various kernel sizes connected in parallel, resulting in better multi-scale feature learning for tumors of varying shapes and sizes. To reduce spatial loss during feature reconstruction, an attention mechanism is used in the skip connections. To reduce the overall complexity of the model, the features extracted by the encoder blocks are reused to classify the type of brain tumor.
## 5 Conclusion
In this paper, the complex task of brain tumor segmentation and classification is addressed using the multi-task attention guided network (MAG-Net). This U-Net-based model features reduced overall computation, better feature extraction and optimized training parameters. The proposed architecture achieves significant performance on the Figshare brain tumor dataset by exploiting the state-of-the-art advantages of U-Net, depthwise separable convolution and the attention mechanism. MAG-Net records the best classification and segmentation results compared to the existing approaches. We believe this work can also be extended to other domains involving classification and segmentation tasks.
## Acknowledgment
We thank our institute, Indian Institute of Information Technology Allahabad
(IIITA), India and Big Data Analytics (BDA) lab for allocating the centralised
computing facility and other necessary resources to perform this research. We
extend our thanks to our colleagues for their valuable guidance and
suggestions.
## References
* [1] Afshar, P., Mohammadi, A., Plataniotis, K.N.: Brain tumor type classification via capsule networks. In: 2018 25th IEEE International Conference on Image Processing (ICIP). pp. 3129–3133. IEEE (2018)
* [2] Albawi, S., Mohammed, T.A., Al-Zawi, S.: Understanding of a convolutional neural network. In: 2017 International Conference on Engineering and Technology (ICET). pp. 1–6. Ieee (2017)
* [3] Brownlee, J.: Use early stopping to halt the training of neural networks at the right time. https://machinelearningmastery.com/how-to-stop-training-deep-neural-networks-at-the-right-time-using-early-stopping/ (2018), [Online; accessed April 17, 2021]
* [4] Cancer.Net: Brain tumor: Diagnosis. https://www.cancer.net/cancer-types/brain-tumor/diagnosis (2020), [Online; accessed March 20, 2021]
* [5] CarryHJR: Nested unet. https://github.com/CarryHJR/Nested-UNet/blob/master/model.py. (2020), [Online; accessed March 11, 2021]
* [6] Cheng, J.: brain tumor dataset (4 2017), https://figshare.com/articles/dataset/brain_tumor_dataset/1512427
* [7] Cheng, J., Huang, W., Cao, S., Yang, R., Yang, W., Yun, Z., Wang, Z., Feng, Q.: Enhanced performance of brain tumor classification via tumor region augmentation and partition. PloS one 10(10), e0140381 (2015)
* [8] Chollet, F.: Xception: Deep learning with depthwise separable convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 1251–1258 (2017)
* [9] Deepak, S., Ameer, P.: Brain tumor classification using deep cnn features via transfer learning. Computers in biology and medicine 111, 103345 (2019)
* [10] Drozdzal, M., Vorontsov, E., Chartrand, G., Kadoury, S., Pal, C.: The importance of skip connections in biomedical image segmentation. In: Deep learning and data labeling for medical applications, pp. 179–187. Springer (2016)
* [11] Díaz-Pernas, F.J., Martínez-Zarzuela, M., Antón-Rodríguez, M., González-Ortega, D.: A deep learning approach for brain tumor classification and segmentation using a multiscale convolutional neural network. Healthcare 9(2), 153 (2021). https://doi.org/10.3390/healthcare9020153, https://app.dimensions.ai/details/publication/pub.1135094000andhttps://www.mdpi.com/2227-9032/9/2/153/pdf
* [12] Hinton, G.E., Sabour, S., Frosst, N.: Matrix capsules with EM routing. In: International conference on learning representations (2018)
* [13] Işın, A., Direkoğlu, C., Şah, M.: Review of mri-based brain tumor image segmentation using deep learning methods. Procedia Computer Science 102, 317–324 (2016)
* [14] Ismael, M.R., Abdel-Qader, I.: Brain tumor classification via statistical features and back-propagation neural network. In: 2018 IEEE international conference on electro/information technology (EIT). pp. 0252–0257. IEEE (2018)
* [15] Jain, A.: brain tumor segmentation u-net. https://github.com/adityajn105/brain-tumor-segmentation-unet (2020), [Online; accessed January 08, 2021]
* [16] Jamel, T.M., Khammas, B.M.: Implementation of a sigmoid activation function for neural network using fpga. In: 13th Scientific Conference of Al-Ma’moon University College. vol. 13 (2012)
* [17] Minaee, S., Boykov, Y.Y., Porikli, F., Plaza, A.J., Kehtarnavaz, N., Terzopoulos, D.: Image segmentation using deep learning: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence (2021)
* [18] MrGiovanni: U-net++ keras. https://github.com/MrGiovanni/UNetPlusPlus (2020), [Online; accessed March 12, 2021]
* [19] Oktay, O., Schlemper, J., Folgoc, L.L., Lee, M., Heinrich, M., Misawa, K., Mori, K., McDonagh, S., Hammerla, N.Y., Kainz, B., Glocker, B., Rueckert, D.: Attention u-net: Learning where to look for the pancreas (2018)
* [20] Pashaei, A., Sajedi, H., Jazayeri, N.: Brain tumor classification via convolutional neural network and extreme learning machines. In: 2018 8th International conference on computer and knowledge engineering (ICCKE). pp. 314–319. IEEE (2018)
* [21] Punn, N.S., Agarwal, S.: Chs-net: A deep learning approach for hierarchical segmentation of covid-19 infected ct images. arXiv preprint arXiv:2012.07079 (2020)
* [22] Punn, N.S., Agarwal, S.: Inception u-net architecture for semantic segmentation to identify nuclei in microscopy cell images. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM) 16(1), 1–15 (2020)
* [23] Punn, N.S., Agarwal, S.: Multi-modality encoded fusion with 3d inception u-net and decoder model for brain tumor segmentation. Multimedia Tools and Applications pp. 1–16 (2020)
* [24] Punn, N.S., Agarwal, S.: Modality specific u-net variants for biomedical image segmentation: A survey. arXiv preprint arXiv:2107.04537 (2021)
* [25] raghakot: keras-resnet. https://github.com/raghakot/keras-resnet (2017), [Online; accessed March 18, 2021]
* [26] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical image computing and computer-assisted intervention. pp. 234–241. Springer (2015)
* [27] Ruder, S.: An overview of gradient descent optimization algorithms (2017)
* [28] Thakur, R.: step by step vgg16 implementation in keras for beginners. https://towardsdatascience.com/step-by-step-vgg16-implementation-in-keras-for-beginners-a833c686ae6c (2019), [Online; accessed March 20, 2021]
* [29] Zhang, Z., Sabuncu, M.R.: Generalized cross entropy loss for training deep neural networks with noisy labels. arXiv preprint arXiv:1805.07836 (2018)
* [30] Zhou, T., Ruan, S., Canu, S.: A review: Deep learning for medical image segmentation using multi-modality fusion. Array 3, 100004 (2019)
* [31] Zhou, Z., Siddiquee, M., Tajbakhsh, N., Liang, J.U.: A nested u-net architecture for medical image segmentation. arXiv preprint arXiv:1807.10165 (2018)
arXiv:2107.12327
# Ionospheric and geomagnetic response to the total solar eclipse on 21 August
2017
Amalia Meza ([email protected]), Guillermo Bosch, María Paula Natali,
Bernardo Eylenstein
Laboratorio de Meteorología espacial, Atmósfera terrestre, Geodesia,
Geodinámica, diseño de Instrumental y Astrometría (MAGGIA), Facultad de
Ciencias Astronómicas y Geofísicas (FCAG), Universidad Nacional de La Plata
(UNLP), Paseo del Bosque s/n, B1900FWA, La Plata, Argentina
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Godoy
Cruz 2290, C1425FQB, Buenos Aires, Argentina
Instituto de Astrofísica de La Plata (UNLP - CONICET), La Plata, Argentina
Observatorio Geofísico Trelew, Trelew, Chubut, Argentina
###### Abstract
Solar eclipses provide an excellent opportunity to study the effects of a
sudden localized change in photoionization flux in the Earth’s ionosphere and
its consequent repercussion in the Geomagnetic field. We have focused on a
subset of the data available from the North American 2017 eclipse in order to
study VTEC measurements from GNSS data and geomagnetic field estimations from
INTERMAGNET observatories near the eclipse path. Our simultaneous analysis of
both datasets allowed us to quantify the reaction of the ionosphere and the
magnetic field to the eclipse event and to compare how differently these
reactions unfold in time. We found that studying the behaviour of VTEC
differences with respect to reference values provides better insight into the
actual eclipse effect, and we were able to characterize parameters such as
the time delay of maximum depletion and the recovery phase. We were also able
to test models that link the ionospheric and geomagnetic variations in a
quantitative manner. The total electron content depletion measured from GNSS
was fed into an approximation of the Ashour-Chapman model at the locations of
the geomagnetic observatories, and its predictions match the behaviour of the
magnetic field components in time and magnitude strikingly well.
###### keywords:
total solar eclipse, VTEC from GNSS, geomagnetic field variation
## 1 Introduction
The geomagnetic field exhibits variations in timescales that range from
fractions of seconds to millions of years. Some of them are internally
generated (in the Earth’s core, or in the Earth’s crust) and others are
external, created in the ionosphere-magnetosphere system. Among these, the
regular daily variation of the geomagnetic field during quiet periods is a
common feature of geomagnetic measurements; the current system associated
with this daily variation is typically termed the solar quiet (Sq) current
system. The Sq current system arises from the ionospheric wind dynamo.
The ionospheric dynamo is produced by the movement of charged particles of
the ionosphere across Earth's magnetic field. Tidal forcing by the Sun and
the Moon, together with solar heating, drives this motion. The dynamo is
therefore controlled by two factors: the distribution of winds and the
distribution of electrical conductivity in the ionosphere. The orbital
parameters of Earth, Moon, and Sun, the solar cycle, solar flares, and solar
eclipses are some of the external sources that influence the ionospheric
dynamo. In particular, any process that alters ionospheric conductivity
affects the electric current.
On the illuminated side of Earth the dominant source of ionization is solar
UV radiation. Solar flares and eclipses are therefore important and unique
events for analyzing the response of the dynamo current to short-timescale
perturbations. Solar flares emit far UV together with soft and hard X-rays
that penetrate deeper into the atmosphere, temporarily ionizing the E and D
regions. A solar eclipse produces the opposite effect on ionospheric
conductivity, as ionization decreases when the Moon's shadow crosses the
Earth's atmosphere. Recombination of ionospheric electrons and ions in the
absence of light quickly reduces the conductivity.
Solar eclipses can be accurately predicted, so studies of their effects on
the ionospheric plasma and the geomagnetic field can be planned in advance
using complementary techniques, e.g. GNSS, ionospheric radiosondes,
magnetometers, etc. A solar eclipse produces an abrupt variation of the
atmospheric conditions. During the event the photoionization is strongly
reduced, the temperature of the atmosphere falls, and a cold spot with a
well-defined edge develops. Afterwards, the photoionizing flux recovers its
previous magnitude and the atmosphere is heated again to the diurnal level
[Knížová and Mošna, 2011].
Consequently, solar eclipses produce a series of phenomena that can be
studied, such as: temporal and spatial analysis of the sudden electron density
(or total electron content) decay [Chernogor, 2013, Kumar et al., 2013,
Lyashenko and Chernogor, 2013], the abrupt geomagnetic variation [Kim and
Chang, 2018] and the generation of gravity and acoustic waves [Jakowski et
al., 2008, Chen et al., 2011]. The changes in the electron density
distribution and the total electron content (TEC) have been studied using
different techniques such as the Faraday rotation measurements [Tyagi et al.,
1980, Das Gupta et al., 1981], ionosonde networks [Le et al., 2009, Kurkin et
al., 2001], incoherent scatter radars [MacPherson et al., 2000, Cherniak and
Lysenko, 2013] and GNSS systems [Afraimovich et al., 1998, Ding et al., 2010,
Cherniak and Zakharenkova, 2018].
Eclipse effects are considered an external source of geomagnetic field
variation, first detected in the middle of the twentieth century [e.g. Kato
et al., 1956]. Later studies highlighted the relationship between the
variability of the geomagnetic components and the obstruction of the electric
current. One of the first was presented by Takeda and Araki [1984], who
showed signatures of additional currents and fields generated by the
obstruction of the Sq current system due to the depression of ionospheric
conductivity during the eclipse. Decreases or increases in the magnetic field
components have been reported by many authors [e.g. Nevanlinna and Hakkinen,
1991, Brenes et al., 1993, Malin et al., 2000]. Although these events took
place at different local times and geographical positions, the eclipse effect
on the geomagnetic field is very evident in all of them.
A few investigations carried out simultaneous measurements of ionospheric and
magnetospheric parameters to study the effect of a solar eclipse on both. For
example, Walker et al. [1991] employed ionograms, magnetograms, and
microbarographs to study the solar eclipse of March 18th, 1988 in East Asia,
and Momani et al. [2010] used GPS, incoherent scatter radar, and Earth's
magnetic field observations to study the total solar eclipse of August 1st,
2008 over the Northern Hemisphere.
In particular, the effect of the total solar eclipse of 21 August 2017 on the
terrestrial ionosphere was studied exhaustively by several authors [e.g.
Cherniak and Zakharenkova, 2018, Dang et al., 2018, Goncharenko et al., 2018,
Bullett and Mabie, 2018, Reinisch et al., 2018, Cnossen et al., 2019, Wang et
al., 2019]. The latter conclude that, throughout the passage of the Moon's
shadow over a particular observation site, the disturbance winds at the site
change direction and consequently their effects on the electron densities of
the F2 region also vary. These winds push the plasma down during the eclipse
and transport it up to the upper ionosphere after the eclipse. The
combination of chemical processes, wind transport, and ambipolar diffusion
causes the time lag and the asymmetric behaviour (rapid decline in Ne and
slow recovery from eclipse effects) of the topside ionosphere. Consequently,
geographic position, local time, and geomagnetic conditions constrain the
rate of electron content depletion and its corresponding recovery time.
However, almost all of the studies mentioned above addressed either
ionospheric or geomagnetic effects independently. Hvoz̆dara and Prigancová
[2002] proposed a mathematical model based on the classical Ashour-Chapman
model to explain the variation of the geomagnetic field components. They
quantify this variation in terms of the position of the quasi-circular spot
of decreased ionospheric conductivity relative to the location of the
geomagnetic observatory. In this work we propose to study simultaneously the
ionospheric and geomagnetic responses to the 2017 North America solar
eclipse, combining information provided by GNSS measurements and quantifying
its relation to the observed geomagnetic perturbations.
## 2 Data and Methodology
VTEC data computed from GNSS measurements and the three geomagnetic field
components obtained from magnetometer observations are used in this analysis.
A brief description of these data and their variability is presented in this
section.
Figure 1 shows the geographical distribution of the GNSS stations and the
location of the geomagnetic observatories. Victoria, Newport, and Boulder
geomagnetic observatories lie close to the totality path at a distance shorter
than 500 km, so we will restrict our analysis of the North America 2017
eclipse to the area relevant to these observatories indicated with a red
rectangle in Figure 1.
Figure 1: GNSS stations (blue circles) and geomagnetic observatories (red
circles). The blue and light-blue lines correspond to the total and the limits
of eclipse totality path, respectively. The red rectangle indicates the area
where VTEC maps were calculated
### 2.1 The VTEC GNSS
To analyze the response of the ionosphere to the sudden decrease of radiation
from the sun during the solar eclipse, the vertical total electron content
(VTEC) is computed. The VTEC values are obtained from the observations
recorded by more than 400 ground-based GNSS receivers located in Western
United States. All stations belong to the NOAA Continuously Operating
Reference Stations (CORS) Network (NCN), managed by NOAA/National Geodetic
Survey (ftp://geodesy.noaa.gov/cors/rinex/).
These observations were pre-processed with the Bernese GNSS Software version
5.2 [Dach R., 2015], using models recommended by the International Earth
Rotation and Reference Systems Service (IERS) [Petit and Luzum, 2010]. Ocean
tidal loading corrections were applied, following Letellier [2004], together
with atmospheric tidal loading displacements provided by van Dam et al.
[2010], and absolute phase-centre corrections for satellites and receivers, as
issued by the IGS.
The Bernese GNSS Software was modified to obtain the phase-code delay
ionospheric observable ($\tilde{L}_{I,\mathrm{arc}}$, Equation 2) along with
the geographic latitude and the sun-fixed longitude of the ionospheric pierce
point, zenith distance ($z^{\prime}$), azimuth angle, and time for each
satellite over each GNSS station displayed in Figure 1.
The ionosphere is approximated by a single shell of infinitesimal thickness
with equivalent STEC, located at 450 km above the Earth surface. The
intersection point of the receiver-satellite line with the ionospheric layer
is named the ionospheric pierce point. An obliquity factor ($1/\cos
z^{\prime}$) is used to map VTEC into STEC (the electron density integrated
along the signal path), where $z^{\prime}$ is the zenith distance of the
slant path at the ionospheric pierce point.
$\mathrm{STEC}=\frac{1}{\cos z^{\prime}}\mathrm{VTEC}$ (1)
The code-delay ionospheric observable is modeled using an arc-dependent bias,
$\tilde{c}_{\mathrm{arc}}$, which accounts for receiver and satellite inter-
frequency bias and the ambiguity term. Following Ciraolo et al. [2007] and
Meza et al. [2009] the observation equation can be written as:
$\tilde{L}_{I,\mathrm{arc}}=\mathrm{STEC}+\tilde{c}_{\mathrm{arc}}+\varepsilon_{L}$
(2)
where $\tilde{L}_{I,\mathrm{arc}}$ is in TECU (10${}^{16}\,el$ m-2). Daily
solutions are computed to estimate the arc-dependent bias which is therefore
removed from $\tilde{L}_{I,\mathrm{arc}}$ to obtain STEC and VTEC through
equations 2 and 1 respectively.
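The bias removal of Equation 2 and the thin-shell mapping of Equation 1 can
be sketched in a few lines. This is a minimal illustration, assuming a
spherical Earth of radius 6371 km and the 450 km shell height quoted above;
the function names and the sample numbers are ours, not the paper's.

```python
# Minimal sketch of Eqs. (1)-(2): bias removal and thin-shell obliquity
# mapping. Shell height and Earth radius follow the text; sample values
# (L_I = 30 TECU, c_arc = 5 TECU) are purely illustrative.
import numpy as np

R_E = 6371.0   # mean Earth radius [km] (assumed)
H_ION = 450.0  # single-layer shell height [km], as in the paper

def ipp_zenith(elev_deg):
    """Zenith distance z' of the slant path at the ionospheric pierce point."""
    elev = np.radians(elev_deg)
    sin_zp = (R_E / (R_E + H_ION)) * np.cos(elev)
    return np.arcsin(sin_zp)

def vtec_from_observable(L_I, c_arc, elev_deg):
    """Remove the arc-dependent bias (Eq. 2) and map STEC to VTEC (Eq. 1)."""
    stec = L_I - c_arc                      # TECU, neglecting the noise term
    return stec * np.cos(ipp_zenith(elev_deg))

# At zenith (elevation 90 deg) STEC and VTEC coincide:
print(vtec_from_observable(30.0, 5.0, 90.0))   # -> 25.0
```

For slant paths the mapped VTEC is smaller than STEC, as expected from the
obliquity factor.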
The derived VTEC determinations were analyzed with an ad-hoc Python code that
iterates selecting data relevant to individual one degree longitude and
latitude bins. Within each bin the code performs the following tasks:
1.
Calculates the time evolution of the VTEC values averaging along 1.5 minute
intervals all VTEC determinations for the corresponding coordinate pair.
(VTECEcl, black points in Figure 2 upper panel).
2.
Uses astronomical ephemeris (PyEphem, https://rhodesmill.org/pyephem/) to
derive timing and percentage of eclipse obscuration at a reference altitude of
120 km [Yamazaki and Maute, 2017] (green dotted vertical lines and green
points respectively in Figure 2 upper panel). This reference height was chosen
to match the altitude of the ionosphere E-layer, where the electric field and
Sq currents are mostly affected by changes in electron density during the
eclipse. The timing of maximum occultation ($t_{0}$) will be used as reference
for estimating time delays of other ionospheric effects.
3.
Fits a polynomial function to the VTEC variations (red dashed curve) during
the eclipse to determine the timing of the maximum drop measured on the VTEC
curve due to the occultation ($t_{1}$, red dotted vertical line in Figure 2)
and its corresponding delay with respect to $t_{0}$ (${\Delta
t}_{1}=t_{1}-t_{0}$).
4.
Calculates a masked average of VTEC values from the immediately preceding and
following days (VTECRef, blue points in Figure 2, upper panel). The masked
average masks time intervals without data, using the only available data
point when the other is missing and averaging when both are present. In this
way we obtain a well-sampled VTEC reference from which the change in VTEC
($\Delta$VTEC, black points in Figure 2, bottom panel) is derived by
comparing the observed behaviour against VTECEcl.
5.
Fits a skewed Gaussian distribution (SkG, Ashour and Abdel-hameed [2010] and
references therein) to the $\Delta$VTEC values during the eclipse event (blue
dashed line in Figure 2, bottom panel) in order to derive both the timing
($t_{2}$, blue dotted vertical lines) and the maximum value of the VTEC
difference, defined as $\Delta$VTECmax = max(VTECRef \- VTECEcl). The use of
the SkG allows us to account for the varying behaviour of the $\Delta$VTEC
curve and to derive additional parameters such as the skewness, which
characterizes the relation between depletion and recovery times.
6.
Obtains spatially resolved information of $\Delta$VTECmax and ${\Delta
t}_{2}=t_{2}-t_{0}$, i.e. the time delay between the maximum Solar obscuration
and $\Delta$VTECmax.
Figure 2: Top panel: Black dots show VTEC values together with polynomial fit
(red dashed curve) to the VTEC variations during eclipse and the red dotted
line indicates the time of the maximum VTEC drop ($t_{1}$). Blue dots indicate
the masked average VTEC values from previous and following days as described
in text. Green curve shows the solar obscuration during the eclipse
(referenced at right axis), with green dashed vertical lines indicating the
first contact, maximum occultation and last contact. Bottom panel: To ease
comparison with top panel, the y-axis has been inverted as $\Delta$VTEC is
defined positive. Black dots show $\Delta$VTEC values calculated from the
difference between reference and eclipse days. The best fit using an
exponentially modified Gaussian distribution is plotted as a blue dashed line.
The blue dashed vertical line indicates the time of the maximum $\Delta$VTEC
derived from the fit ($t_{2}$).
### 2.2 The geomagnetic field
As stated before, we focused on geomagnetic values corresponding to data
collected at the three ground based observatories closest to the totality
path. The Boulder (BOU) and Newport (NEW) Observatories are operated by the
United States Geological Survey (USGS), and the Victoria (VIC) Observatory is
in turn supported by the Geological Survey of Canada (GSC). All three
participate in the International Real-time Magnetic Observatory Network
(INTERMAGNET). Consequently the geomagnetic values were obtained from the
INTERMAGNET data site. Considering that the eclipse signature in the
geomagnetic field, as seen by Malin et al. [2000] and Hvoz̆dara and Prigancová
[2002], is noticed as a smooth variation of its components within a 1-hour
interval, we chose to work with the available data at a sampling rate of 1
minute. These values are usually obtained by applying a digital Gaussian
filter to a higher sample rate data set centered in the minute, thus
eliminating short-term disturbances such as errors due to the instrument. The
retrieved data were already originally available in the three geomagnetic
field components of interest ($X,Y,Z$). In order to reject superposed
disturbances of much shorter period, the minute data will be further smoothed
by a 15-min window moving average.
The geomagnetic field variability is defined as the difference between the
values obtained during the eclipse event and the reference values. The latter
are obtained by calculating the mean value of the five nearest geomagnetically
quiet days [Momani et al., 2010].
To examine the patterns of geomagnetic field variation induced by the solar
eclipse, we analyzed data within a two-hour interval centered on the maximum
occultation. In order to eliminate the intrinsic regular daily variability of
the eclipse day and the reference day within the selected two-hour window,
the linear trend was removed using a first-order polynomial fit [Malin et
al., 2000]. Finally, the geomagnetic field variations produced by the solar
eclipse, $\Delta X$, $\Delta Y$ and $\Delta Z$, are computed and shown in the
left column of Figure 7.
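The smoothing and detrending steps above can be sketched as follows, assuming
1-min samples; the synthetic signal (a linear daily trend plus an
eclipse-like bump) and all numerical values are illustrative only.

```python
# Sketch of the Sec. 2.2 pre-processing: a 15-min centred moving average of
# 1-min magnetometer data, then removal of a first-order polynomial trend
# over a 2-hour window [Malin et al., 2000]. Data here are synthetic.
import numpy as np

def moving_average(x, window=15):
    """Centred running mean (edges use fewer effective samples)."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def detrend_linear(t, x):
    """Subtract a first-order polynomial fit over the analysis window."""
    coeffs = np.polyfit(t, x, 1)
    return x - np.polyval(coeffs, t)

t = np.arange(120.0)                                    # 2 h of minute data
x = 50.0 + 0.05 * t + 2.0 * np.exp(-((t - 60) / 15.0) ** 2)  # trend + bump
dx = detrend_linear(t, moving_average(x))   # eclipse-like residual signal
```

The residual `dx` is what would be compared with the model predictions in
Figure 7: the daily trend is gone and only the smooth eclipse-scale
disturbance remains.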
### 2.3 Relationship between VTEC and geomagnetic field
The classical Ashour-Chapman model, with the modifications of Hvoz̆dara and
Prigancová [2002], is used to analyse the variability of the geomagnetic
components and its relationship with the VTEC variation in the region of
eclipse obscuration.
The low-conductivity ionospheric spot is represented by the Ashour-Chapman
thin current sheet model with an arbitrarily directed undisturbed electric
field $\bf{E_{0}}$. In the present work the angle $\epsilon$ between the
x-axis (pointing to geographic North) and $\bf{E_{0}}$ is different from
zero, and the direction of the equivalent Sq current system is assumed to be
that of $\bf{E_{0}}$ (in this first approximation the Hall conductivity is
not taken into account). The Dedicated Ionospheric Field Inversion (DIFI-3)
model, a time-varying spherical harmonic representation of the quiet-time Sq
and equatorial electrojet field, is used to determine $\epsilon$
(https://geomag.colorado.edu/difi-calculator).
Our analysis is based on the mathematical explanation described in Hvoz̆dara
and Prigancová [2002], Appendix A. In their paper, the authors model the
geomagnetic effect due to changes in the local ionospheric conductivity
linked to the TEC decrement caused by the eclipse. They do this by means of a
cylindrical coordinate system ($r$, $\phi$, $z$) with origin on the eclipse-
induced conductivity spot and its z-axis normal to the Earth's surface. In
this system, the magnetic potential field can be written as:
$\mathrm{{\Omega}=-I\,a\,\sin\,\phi\,W(r,z)}$ (3)
where:
$\begin{split}I&=\frac{1-\kappa}{1+\kappa}\,I_{0}[A/m]\\\
W(r,z)&=\int_{0}^{\infty}s^{-1}J_{1}(sa)J_{1}(sr)e^{-sz}ds\end{split}$ (4)
$J_{1}$ is the Bessel function of the first kind and order 1.
Let $\mathbf{H}$ be the disturbing magnetic field, related to the
corresponding potential ${\Omega}$ [Ashour and Chapman, 1965] by
$\mathbf{H}=-\mathrm{grad}\,{\Omega}$ (5)
The geomagnetic disturbance is defined by
$\mathbf{b}=\mu_{0}\,\mathbf{H}$, where $\mu_{0}=400\,\pi$ and
$\mathbf{b}$ is in nT. Its Cartesian components are:
$\begin{split}b_{x}&=\mu_{0}(H_{r}\,\cos\alpha-H_{\varphi}\,\sin\alpha)\\\
b_{y}&=\mu_{0}(H_{r}\,\sin\alpha+H_{\varphi}\,\cos\alpha)\\\
b_{z}&=\mu_{0}H_{z}\end{split}$ (6)
Table 1 shows the values used to calculate the geomagnetic disturbance at the
different geomagnetic stations: the angle $\epsilon$, the distance $\delta$
between each observatory and the center of the eclipse-induced conductivity
spot, and the degree $\kappa$ of the TEC decrease caused by the solar
eclipse.
Table 1: Parameters used in the theoretical model of the geomagnetic eclipse disturbance: $\epsilon$ is the angle between the x-axis (to geographic North) and $\bf{E_{0}}$; $\delta$ is the distance between each observatory and the center of the eclipse-induced conductivity spot; $\kappa$ is the degree of the VTEC decrement caused by the solar eclipse.

 | $\epsilon$ | $\delta$ | $\kappa$
---|---|---|---
VIC | 130 | 390 | 0.63
NEW | 116 | 450 | 0.61
BOU | 97 | -206 | 0.57
The magnetic field variations $\mathbf{b}=(b_{x},b_{y},b_{z})$ are adjusted
for the electromagnetic induction effect in order to compare them with the
variability of the geomagnetic field components at the observatories
[Hvoz̆dara and Prigancová, 2002]. Consequently, the $\Delta X$, $\Delta Y$
and $\Delta Z$ predicted by the model are expected to be 1.5 $b_{x}$, 1.5
$b_{y}$ and 0.3 $b_{z}$, respectively.
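Equations (3)-(6) and the induction adjustment can be evaluated numerically.
The sketch below uses `scipy` for the Bessel integral of Eq. (4) and central
differences for the gradient of Eq. (5); the values chosen for $I_{0}$,
$\kappa$, the spot radius $a$ and the geometry ($r$, $\phi$, $z$, $\alpha$)
are ours and purely illustrative, not taken from the paper.

```python
# Numerical sketch of Eqs. (3)-(6). Lengths are in km, so the resulting
# magnitudes are indicative only; all parameter values are assumptions.
import numpy as np
from scipy.special import j1
from scipy.integrate import quad

MU0 = 400.0 * np.pi  # as in the text, giving b in nT

def W(r, z, a):
    """W(r,z) = int_0^inf s^-1 J1(sa) J1(sr) exp(-sz) ds (Eq. 4)."""
    f = lambda s: j1(s * a) * j1(s * r) * np.exp(-s * z) / s
    val, _ = quad(f, 1e-9, 1.0, limit=400)  # exp(-sz) kills the tail
    return val

def b_components(I0, kappa, a, r, phi, z, alpha, dh=1.0):
    """Induction-adjusted (Delta X, Delta Y, Delta Z) predictions."""
    I = (1.0 - kappa) / (1.0 + kappa) * I0          # Eq. (4)
    # Omega = -I a sin(phi) W(r,z); H = -grad Omega in cylindrical coords
    dW_dr = (W(r + dh, z, a) - W(r - dh, z, a)) / (2 * dh)
    dW_dz = (W(r, z + dh, a) - W(r, z - dh, a)) / (2 * dh)
    H_r = I * a * np.sin(phi) * dW_dr
    H_phi = I * a * np.cos(phi) * W(r, z, a) / r
    H_z = I * a * np.sin(phi) * dW_dz
    b_x = MU0 * (H_r * np.cos(alpha) - H_phi * np.sin(alpha))   # Eq. (6)
    b_y = MU0 * (H_r * np.sin(alpha) + H_phi * np.cos(alpha))
    b_z = MU0 * H_z
    return 1.5 * b_x, 1.5 * b_y, 0.3 * b_z  # induction factors from the text

dX, dY, dZ = b_components(I0=0.05, kappa=0.6, a=500.0, r=400.0,
                          phi=np.pi / 4, z=110.0, alpha=0.3)
```

Evaluating `b_components` along a time series of spot positions would yield
the model curves shown in the right column of Figure 7.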
## 3 Results and Discussions
### 3.1 VTEC variation
Figure 3 shows $\Delta$VTECmax during the eclipse, displayed as a percentage
of the reference VTEC value. No evident longitudinal trend can be seen,
besides a local peak around 110°W. Figure 4 shows $\Delta$VTECmax as a
function of maximum occultation and of geographical longitude, confirming
that no dependence on occultation percentage is present either. Liu et al.
[2019] studied the effects of an annular eclipse centered on Taiwan and found
a strong correlation between maximum obscuration and relative
$\Delta$VTECmax. Their figure 6i shows that $\Delta$VTECmax is directly
proportional to the maximum obscuration (ranging between 40% and 90%),
although there is evidence of saturation at high occultation. The left panel
of Figure 4 is consistent with this saturation, as no visible trend of
$\Delta$VTECmax is present when focusing on occultations larger than 75%.
Figure 3: Geographical distribution of relative $\Delta$VTECmax (in percentage
units). Lower and upper limits of eclipse totality path, together with
locations of geomagnetic observatories are also plotted as reference. Figure
4: Relative $\Delta$VTECmax plotted against maximum eclipse occultation (left
panel) and geographical longitude (right panel).
There is a noticeable difference between the $\Delta$VTECmax values north and
south of the totality path. We can interpret this considering that the
integrated electron content of the ionosphere results from the balance
between transport, ionization and loss processes; the eclipse reduces the
electron temperature, decreases the pressure, and consequently induces a
downward drift of plasma from the topside ionosphere [Ding et al., 2010,
Cherniak and Zakharenkova, 2018]. One explanation of the latitudinal
distribution of $\Delta$VTECmax is that the eclipse switches off the
ionization source, so recombination is more effective, especially at lower
latitudes where the neutral mass density is higher [Cherniak and
Zakharenkova, 2018].
Figure 5: Analysis of time delay distribution between maximum eclipse
occultation and ionospheric response. Both panels show trends of ${\Delta
t}_{1}$ in blue outline and ${\Delta t}_{2}$ in filled red. Left panel
highlights the overall difference between time delays and right panel displays
distinct dependence of time delays with geographical longitude.
Another interesting result worth discussing is the time delay between the
solar occultation and the ionospheric response measured from VTEC. The
presence of such a time delay has already been reported in previous studies
[e.g. Jakowski et al., 2008, Boitman et al., 1999, Liu et al., 2019, Cherniak
and Zakharenkova, 2018, Momani et al., 2010], although with shorter delays.
As outlined in Section 2.1, we measure time delays from the $\Delta$VTEC
curve, as this is more strongly linked to the eclipse effect itself.
The variations in the local VTEC minimum cited in the references above are
actually due to a combination of the eclipse occultation and the rapid
increase in VTEC expected at mid-morning. The apparent increase of ${\Delta
t}_{1}$ seen in Figure 5 is not due to the eclipse itself but to the fact
that the daily VTEC increase in the ionosphere slows down as it approaches
its peak at about 20 h UT. For comparison purposes we also show in Figure 5
the time delay distribution derived from the VTEC curve, which resembles the
values cited in the literature. Furthermore, we were able to reproduce the
longitude-bin average of the time delay as calculated from the VTEC curve in
the right panel of Figure 5. The blue outline bins show almost identical
behaviour to that in Cherniak and Zakharenkova [2018]; a small (2 to 3
minutes) offset is present, mostly because the cited authors refer to eclipse
ephemeris at ground level while we do so 120 km above. The filled red bins
show, besides the larger values already mentioned, a slow but steady
decreasing pattern as the eclipse advances eastward.
We also analyzed trends in the asymmetry of the $\Delta$VTEC curve, measured
via the $\gamma$ parameter of the skewed Gaussian profile. A $\gamma$ value
of 0 indicates a symmetric Gaussian profile, while positive values indicate
recovery times larger than depletion times. Figure 6 displays a visible trend
of $\gamma$ values, with the asymmetry diminishing eastwards even within the
scatter. This indicates that the recovery time is noticeably larger than the
depletion time at the west coast, but that this difference decreases as the
shadow moves eastward. Several effects can contribute to this behaviour, as
the Moon's shadow changes its shape and reduces its apparent speed when its
cone axis passes closest to Earth's center.
Figure 6: Skewness variation plotted against geographic longitude where the
strong reduction of asymmetry in the $\Delta$VTEC profile can be seen. We have
added a color scale for eclipse occultation making it evident there is no
dependence on this parameter.
### 3.2 Geomagnetic field variation
The geomagnetic variability during the total solar eclipse is analysed at the
observatories affected by an occultation larger than 80%: VIC reaches 89.1%
at 17:20 UT (10:20 LT), NEW 84.6% at 17:26 UT (10:26 LT), and BOU 92.4% at
17:46 UT (11:46 LT); their geographical locations are shown in Figure 1.
In this section we compare the observed geomagnetic variability with the
predictions of the theoretical model described in Section 2.3, displayed in
the right column of Figure 7. The modeled geomagnetic disturbances along the
Cartesian components are defined over a one-hour interval centered at the
time of maximum occultation ($t_{0}$), which is highlighted by the green box.
The top row in Figure 7 displays the geomagnetic variability at the BOU
observatory: the $\Delta X$ component shows positive values with its maximum
arising near $t_{0}$; the $\Delta Y$ component has positive and then negative
values before and after $t_{0}$, respectively; and the $\Delta Z$ component
has the lowest amplitude, showing positive values with its maximum near
$t_{0}$. The corresponding right panel shows very good agreement for this
eclipse configuration. The middle and bottom rows (VIC and NEW observatories)
show similar behaviours to each other, which is expected given their close
geographical coordinates and similar location relative to the eclipse path,
but quite different from the behaviour at BOU. The $\Delta X$ component is
positive and its maximum value takes place before $t_{0}$; the $\Delta Y$
component is negative and its minimum value occurs close to $t_{0}$; and the
$\Delta Z$ component records lower-magnitude variations, with its maximum
difference negative and occurring after $t_{0}$. This distinct observed
behaviour is still well described qualitatively by the model in the
corresponding right panels, but the agreement is not as good as for the BOU
observatory.
Malin et al. [2000] and Momani et al. [2010] also found perceptible
differences in the geomagnetic response at different observatory locations.
Regarding the model predictions, the dissimilar performance among
observatories can be attributed to a combination of observing conditions and
model limitations. The VIC and NEW stations lie farther from the eclipse
path, so smaller geomagnetic disturbances are expected there, which may be
subject to contamination by other sources of variation. Model assumptions
(such as cylindrical symmetry and ionospheric isotropy) can also contribute
to these discrepancies.
As a reference, we have also added the $\Delta$VTEC values in the left column
of Figure 7, which allows a straightforward comparison between the VTEC and
geomagnetic variations.
Visual inspection of Figure 7 readily shows that the eclipse affects the VTEC
and the geomagnetic field on very different timescales. We interpret this as
follows: the variations of the electric currents flowing mainly in the E layer
are responsible for the slight changes in the geomagnetic field observed at
ground level, which implies that the temporal evolution of the eclipse effects
on the magnetic field depends mostly on processes taking place in the E layer.
On the other hand, even though the VTEC is an integral determination along the
line of sight, the F2 layer provides its major contribution. Therefore, one
explanation for the delay of the VTEC variability observed in Figure 7 is that
the recombination time in the F2 layer is longer than in the E layer, because
there the eclipse effect is controlled by the transport of plasma
[Bienstock et al., 1970].
Figure 7: Geomagnetic variability detected at the BOU, VIC, and NEW
observatories. The left column shows the measurements of $\Delta X$, $\Delta
Y$, and $\Delta Z$ in blue, red, and yellow, respectively; the black curve
shows the $\Delta$VTECmax measurement. The right column shows the geomagnetic
variability based on the Ashour-Chapman thin current sheet model.
## 4 Concluding remarks
This work examines the ionospheric and geomagnetic response to the 2017 North
America solar eclipse. The VTEC obtained from GNSS observations was used to
compute the VTEC variability during the eclipse: the $\Delta$VTEC, which is
the difference between the VTEC during the eclipse and on reference days; its
maximum ($\Delta$VTECmax); and the maximum drop of the VTEC curve due to the
occultation. The values of $\Delta$VTECmax range from 20% to 45% along the
eclipse path within the area of 70% obscuration, and the maximum values
stretch equatorward from the totality. The $\Delta$VTECmax lags 22-32 min
behind the time of maximal eclipse (${\Delta t}_{1}$), while the VTEC drop
shows a 14-22 min delay after the maximum occultation (${\Delta t}_{2}$).
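The quantities summarised above can be sketched as a small computation. The sketch below uses entirely synthetic VTEC values, and it assumes the sign convention that $\Delta$VTEC is the reference-day value minus the eclipse-day value, so that a depletion is positive; the function name and numbers are illustrative, not from the paper.

```python
# Hedged sketch: computing the ΔVTEC series, ΔVTECmax and the time lag after
# maximum occultation from an eclipse-day VTEC series and a reference-day mean.
# All values are synthetic and purely illustrative.

def delta_vtec(vtec_eclipse, vtec_reference, times, t_max_occultation):
    """Return the ΔVTEC series, its maximum, and the lag after max occultation."""
    # Assumed convention: reference minus eclipse, so a depletion is positive.
    dvtec = [r - e for e, r in zip(vtec_eclipse, vtec_reference)]
    i_max = max(range(len(dvtec)), key=lambda i: dvtec[i])
    dvtec_max = dvtec[i_max]
    lag_minutes = times[i_max] - t_max_occultation
    return dvtec, dvtec_max, lag_minutes

# Synthetic example: times in minutes, VTEC in TECU.
times = list(range(0, 120, 10))
reference = [30.0] * 12                                      # quiet-day mean
eclipse = [30, 30, 29, 27, 24, 22, 21, 22, 24, 27, 29, 30]   # eclipse depletion
dvtec, dmax, lag = delta_vtec(eclipse, reference, times, t_max_occultation=40)
print(dmax, lag)  # maximum depletion of 9 TECU, reached 20 min after max occultation
```

With these toy numbers the maximum depletion lags the maximum occultation, qualitatively mirroring the ${\Delta t}_{1}$ delays reported above.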
The ${\Delta t}_{2}$ values and the recovery times relative to the depletion
times (the $\gamma$ parameter) are strongly related to local time. They take
larger values at westward longitudes, where the eclipse occurs in mid-morning.
When the eclipse occurs close to midday, the depletion and recovery times, as
well as ${\Delta t}_{1}$ and ${\Delta t}_{2}$, become more similar.
For the first time, a total solar eclipse was studied simultaneously from both
the ionospheric and the geomagnetic points of view. The degree of the TEC
decrease caused by the solar eclipse was used in a mathematical model based on
the Ashour-Chapman model to predict the geomagnetic disturbance.
Quantitatively, the model predictions and the variability of the Cartesian
geomagnetic components at the three observatories with occultation larger than
84% were comparable and consistent.
## 5 Acknowledgments
We wish to acknowledge the thorough review and useful comments provided by the
anonymous referees, which have helped to improve the final version of this
article. The results presented in this paper rely on data collected at
magnetic observatories. We thank the national institutes that support them and
INTERMAGNET for promoting high standards of magnetic observatory practice
(www.intermagnet.org).
## References
* Afraimovich et al. [1998] Afraimovich, E.L., Palamartchouk, K.S., Perevalova, N.P., Chernukhov, V.V., Lukhnev, A.V., Zalutsky, V.T., 1998\. Ionospheric effects of the solar eclipse of march 9, 1997, as deduced from gps data. Geophysical Research Letters 25, 465–468. doi:10.1029/98GL00186.
* Ashour and Chapman [1965] Ashour, A.A., Chapman, S., 1965\. The Magnetic Field of Electric Currents in an Unbounded Plane Sheet, Uniform except for a Circular Area of Different Uniform Conductivity. Geophysical Journal International 10, 31–44. URL: https://doi.org/10.1111/j.1365-246X.1965.tb03048.x, doi:10.1111/j.1365-246X.1965.tb03048.x, arXiv:https://academic.oup.com/gji/article-pdf/10/1/31/2373568/10-1-31.pdf.
* Ashour and Abdel-hameed [2010] Ashour, S.K., Abdel-hameed, M.A., 2010\. Approximate skew normal distribution. Journal of Advanced Research 1, 341–350. URL: https://www.sciencedirect.com/science/article/pii/S209012321000069X, doi:https://doi.org/10.1016/j.jare.2010.06.004.
* Bienstock et al. [1970] Bienstock, B.J., Marriott, R.T., John, D.E., Thorne, R.M., Venkateswaran, S.V., 1970. Changes in the Electron Content of the Ionosphere. Nature 226, 1111–1112. doi:10.1038/2261111a0.
* Boitman et al. [1999] Boitman, O.N., Kalikhman, A.D., Tashchilin, A.V., 1999. The midlatitude ionosphere during the total solar eclipse of march 9, 1997. Journal of Geophysical Research: Space Physics 104, 28197–28206. URL: https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/1999JA900228, doi:https://doi.org/10.1029/1999JA900228, arXiv:https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/1999JA900228.
* Brenes et al. [1993] Brenes, J., Leandro, G., Fernandez, W., 1993. Variation of the Geomagnetic Field in Costa Rica During the Total Solar Eclipse of July 11, 1991. Earth Moon and Planets 63, 105\. doi:10.1007/BF00575100.
* Bullett and Mabie [2018] Bullett, T., Mabie, J., 2018\. Vertical and oblique ionosphere sounding during the 21 august 2017 solar eclipse. Geophysical Research Letters 45, 3690–3697. URL: https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1002/2018GL077413, doi:https://doi.org/10.1002/2018GL077413, arXiv:https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1002/2018GL077413.
* Chen et al. [2011] Chen, G., Zhao, Z., Zhang, Y., Yang, G., Zhou, C., Huang, S., Li, T., Li, N., Sun, H., 2011. Gravity waves and spread es observed during the solar eclipse of 22 july 2009. Journal of Geophysical Research: Space Physics 116\. URL: https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2011JA016720, doi:https://doi.org/10.1029/2011JA016720, arXiv:https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2011JA016720.
* Cherniak and Lysenko [2013] Cherniak, I., Lysenko, V., 2013\. Measurements of the ionosphere plasma electron density variation by the Kharkov incoherent scatter radar. Acta Geophysica 61, 1289–1303. doi:10.2478/s11600-013-0118-0.
* Cherniak and Zakharenkova [2018] Cherniak, I., Zakharenkova, I., 2018\. Ionospheric total electron content response to the great american solar eclipse of 21 august 2017. Geophysical Research Letters 45, 1199–1208. URL: https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1002/2017GL075989, doi:https://doi.org/10.1002/2017GL075989, arXiv:https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1002/2017GL075989.
* Chernogor [2013] Chernogor, L.F., 2013. Physical processes in the middle ionosphere accompanying the solar eclipse of january 4, 2011, in kharkov. Geomagnetism and Aeronomy 53, 19–31. doi:https://doi.org/10.1134/S0016793213010052.
* Ciraolo et al. [2007] Ciraolo, L., Azpilicueta, F., Brunini, C., Meza, A., Radicella, S.M., 2007. Calibration errors on experimental slant total electron content (TEC) determined with GPS. Journal of Geodesy 81, 111–120. doi:10.1007/s00190-006-0093-1.
* Cnossen et al. [2019] Cnossen, I., Ridley, A.J., Goncharenko, L.P., Harding, B.J., 2019\. The response of the ionosphere-thermosphere system to the 21 august 2017 solar eclipse. Journal of Geophysical Research: Space Physics 124, 7341–7355. URL: https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2018JA026402, doi:https://doi.org/10.1029/2018JA026402, arXiv:https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2018JA026402.
* Dach R. [2015] Dach R., L.S. (Ed.), 2015. Bernese GNSS Software Version 5.2. User manual. Astronomical Institute, University of Bern. doi:10.7892/boris.72297.
* van Dam et al. [2010] van Dam, T., Altamimi, Z., Collilieux, X., Ray, J., 2010\. Topographically induced height errors in predicted atmospheric loading effects. Journal of Geophysical Research: Solid Earth 115\. URL: https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2009JB006810, doi:https://doi.org/10.1029/2009JB006810, arXiv:https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2009JB006810.
* Dang et al. [2018] Dang, T., Lei, J., Wang, W., Zhang, B., Burns, A., Le, H., Wu, Q., Ruan, H., Dou, X., Wan, W., 2018\. Global responses of the coupled thermosphere and ionosphere system to the august 2017 great american solar eclipse. Journal of Geophysical Research: Space Physics 123, 7040–7050. URL: https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2018JA025566, doi:https://doi.org/10.1029/2018JA025566, arXiv:https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2018JA025566.
* Das Gupta et al. [1981] Das Gupta, A., Maitra, A., Das, S., Sen, S., 1981. Ionospheric electron content observations during the total solar eclipse of february 16, 1980. Journal of Atmospheric and Terrestrial Physics 43, 135–137. URL: https://www.sciencedirect.com/science/article/pii/0021916981900714, doi:https://doi.org/10.1016/0021-9169(81)90071-4.
* Ding et al. [2010] Ding, F., Wan, W., Ning, B., Liu, L., Le, H., Xu, G., Wang, M., Li, G., Chen, Y., Ren, Z., Xiong, B., Hu, L., Yue, X., Zhao, B., Li, F., Yang, M., 2010. Gps tec response to the 22 july 2009 total solar eclipse in east asia. Journal of Geophysical Research: Space Physics 115\. URL: https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2009JA015113, doi:https://doi.org/10.1029/2009JA015113, arXiv:https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2009JA015113.
* Goncharenko et al. [2018] Goncharenko, L.P., Erickson, P.J., Zhang, S.R., Galkin, I., Coster, A.J., Jonah, O.F., 2018\. Ionospheric response to the solar eclipse of 21 august 2017 in millstone hill (42n) observations. Geophysical Research Letters 45, 4601–4609. URL: https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2018GL077334, doi:https://doi.org/10.1029/2018GL077334, arXiv:https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2018GL077334.
* Hvoz̆dara and Prigancová [2002] Hvoz̆dara, M., Prigancová, A., 2002\. Geomagnetic effects due to an eclipse-induced low-conductivity ionospheric spot. Journal of Geophysical Research: Space Physics 107, SIA 14–1–SIA 14–13. URL: https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2002JA009260, doi:https://doi.org/10.1029/2002JA009260, arXiv:https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2002JA009260.
* Jakowski et al. [2008] Jakowski, N., Stankov, S., Wilken, V., Borries, C., Altadill, D., Chum, J., Buresova, D., Boska, J., Sauli, P., Hruska, F., Cander, L., 2008. Ionospheric behavior over europe during the solar eclipse of 3 october 2005. Journal of Atmospheric and Solar-Terrestrial Physics 70, 836–853. URL: https://www.sciencedirect.com/science/article/pii/S1364682607003495, doi:https://doi.org/10.1016/j.jastp.2007.02.016. measurements of Ionospheric Parameters influencing Radio Systems.
* Kato et al. [1956] Kato, Y., Ossaka, J., Sakurai, A., 1956. Preliminary report on the effect of the solar eclipse of 20 June 1955 on the earth’s magnetic field, in: Beynon, W.J.G., Brown, G.M. (Eds.), Solar Eclipses and the Ionosphere, p. 243\.
* Kim and Chang [2018] Kim, J.H., Chang, H.Y., 2018\. Possible Influence of the Solar Eclipse on the Global Geomagnetic Field, in: Foullon, C., Malandraki, O.E. (Eds.), Space Weather of the Heliosphere: Processes and Forecasts, pp. 167–170. doi:10.1017/S1743921317007219.
* Knížová and Mošna [2011] Knížová, P.K., Mošna, Z., 2011\. Acoustic-gravity waves in the ionosphere during solar eclipse events, in: Beghi, M.G. (Ed.), Acoustic Waves. IntechOpen, Rijeka. chapter 14, pp. 303–320. URL: https://doi.org/10.5772/19722, doi:10.5772/19722.
* Kumar et al. [2013] Kumar, S., Singh, A.K., Singh, R.P., 2013. Ionospheric response to total solar eclipse of 22 july 2009 in different indian regions. Annales Geophysicae 31, 1549–1558. URL: https://angeo.copernicus.org/articles/31/1549/2013/, doi:10.5194/angeo-31-1549-2013.
* Kurkin et al. [2001] Kurkin, V.I., Nosov, V.E., Potekhin, A.P., Smirnov, V.F., Zherebtsov, G.A., 2001. The March 9, 1997 solar eclipse ionospheric effects over the Russian asian region. Advances in Space Research 27, 1437–1440. doi:10.1016/S0273-1177(01)00030-8.
* Le et al. [2009] Le, H., Liu, L., Yue, X., Wan, W., Ning, B., 2009. Latitudinal dependence of the ionospheric response to solar eclipses. Journal of Geophysical Research: Space Physics 114\. URL: https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2009JA014072, doi:https://doi.org/10.1029/2009JA014072, arXiv:https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2009JA014072.
* Letellier [2004] Letellier, 2004. Etude des ondes de marée sur les plateaux continentaux. Ph.D. thesis. Université III Paul Sabatier.
* Liu et al. [2019] Liu, J.Y., Yang, S.S., Rajesh, P.K., Sun, Y.Y., Chum, J., Pan, C.J., Chu, Y.H., Chao, C.K., Chang, L.C., 2019. Ionospheric response to the 21 may 2012 annular solar eclipse over taiwan. Journal of Geophysical Research: Space Physics 124, 3623–3636. URL: https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2018JA025928, doi:https://doi.org/10.1029/2018JA025928, arXiv:https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2018JA025928.
* Lyashenko and Chernogor [2013] Lyashenko, M.V., Chernogor, L.F., 2013\. Solar eclipse of august 1, 2008, over kharkov: 3. calculation results and discussion. Geomagnetism and Aeronomy 53, 367–376. doi:https://doi.org/10.1134/S0016793213020096.
* MacPherson et al. [2000] MacPherson, B., González, S.A., Sulzer, M.P., Bailey, G.J., Djuth, F., Rodriguez, P., 2000\. Measurements of the topside ionosphere over arecibo during the total solar eclipse of february 26, 1998. Journal of Geophysical Research: Space Physics 105, 23055–23067. URL: https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2000JA000145, doi:https://doi.org/10.1029/2000JA000145, arXiv:https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2000JA000145.
* Malin et al. [2000] Malin, S.R.C., Özcan, O., Tank, S.B., Tunçer, M.K., Yazici-Çakın, O., 2000. Geomagnetic signature of the 1999 august 11 total eclipse. Geophysical Journal International 140, F13–F16. URL: https://onlinelibrary.wiley.com/doi/abs/10.1046/j.1365-246X.2000.00061.x, doi:https://doi.org/10.1046/j.1365-246X.2000.00061.x, arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1046/j.1365-246X.2000.00061.x.
* Meza et al. [2009] Meza, A., van Zele, M.A., Rovira, M., 2009. Solar flare effect on the geomagnetic field and ionosphere. Journal of Atmospheric and Solar-Terrestrial Physics 71, 1322–1332. doi:10.1016/j.jastp.2009.05.015.
* Momani et al. [2010] Momani, M.A., Yatim, B., Mohd Ali, M.A., 2010. Ionospheric and geomagnetic response to the total solar eclipse on 1 August 2008 over Northern Hemisphere. Journal of Geophysical Research (Space Physics) 115, A08321. doi:10.1029/2009JA014999.
* Nevanlinna and Hakkinen [1991] Nevanlinna, H., Hakkinen, L., 1991\. Geomagnetic effect of the total solar eclipse on July 22, 1990. Journal of Geomagnetism and Geoelectricity 43, 319–321. doi:10.5636/jgg.43.319.
* Petit and Luzum [2010] Petit, G., Luzum, B., 2010\. IERS Conventions (2010). IERS Technical Note 36, 1\.
* Reinisch et al. [2018] Reinisch, B.W., Dandenault, P.B., Galkin, I.A., Hamel, R., Richards, P.G., 2018. Investigation of the electron density variation during the 21 august 2017 solar eclipse. Geophysical Research Letters 45, 1253–1261. URL: https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1002/2017GL076572, doi:https://doi.org/10.1002/2017GL076572, arXiv:https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1002/2017GL076572.
* Takeda and Araki [1984] Takeda, M., Araki, T., 1984\. Ionospheric currents and fields during the solar eclipse. Planetary and Space Sciences 32, 1013–1019. doi:10.1016/0032-0633(84)90057-6.
* Tyagi et al. [1980] Tyagi, T.R., Singh, L., Vijaya-Kumar, P.N., Somayajulu, Y.N., Lokanadham, B., Yelliah, G., 1980\. Satellite Radio Beacon Study of the Ionospheric Variations at Hyderabad during the Total Solar Eclipse of 1980FEB16. Bulletin of the Astronomical Society of India 8, 69.
* Walker et al. [1991] Walker, G.O., Li, T.Y.Y., Wong, Y.W., Kikuchi, T., Huang, Y.N., 1991. Ionospheric and geomagnetic effects of the solar eclipse of 18 March 1988 in East Asia. Journal of Atmospheric and Terrestrial Physics 53, 25–37. doi:10.1016/0021-9169(91)90017-2.
* Wang et al. [2019] Wang, W., Dang, T., Lei, J., Zhang, S., Zhang, B., Burns, A., 2019. Physical processes driving the response of the f2 region ionosphere to the 21 august 2017 solar eclipse at millstone hill. Journal of Geophysical Research: Space Physics 124, 2978–2991. URL: https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2018JA025479, doi:https://doi.org/10.1029/2018JA025479, arXiv:https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2018JA025479.
* Yamazaki and Maute [2017] Yamazaki, Y., Maute, A., 2017\. Sq and EEJ—A Review on the Daily Variation of the Geomagnetic Field Caused by Ionospheric Dynamo Currents. Space Science Reviews 206, 299–405. doi:10.1007/s11214-016-0282-z.
|
arxiv-papers
| 2021-07-26T17:01:35 |
2024-09-04T03:07:19.279940
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Amalia Meza, Guillermo Bosch, Maria Paula Natali, Bernardo Eylenstein",
"submitter": "Guillermo Bosch",
"url": "https://arxiv.org/abs/2107.12327"
}
|
2107.12328
|
# HW2VEC: A Graph Learning Tool for Automating Hardware Security
Shih-Yuan Yu§, Rozhin Yasaei§, Qingrong Zhou, Tommy Nguyen, Mohammad Abdullah Al Faruque
Department of Electrical Engineering and Computer Science
University of California, Irvine, California, USA
{shihyuay, ryasaei, qingronz, tommytn1, [email protected]}
###### Abstract
The time-to-market pressure and the continuously growing complexity of
hardware designs have promoted the globalization of the Integrated Circuit
(IC) supply chain. However, such globalization also poses various security
threats in each phase of the IC supply chain. Although the advancements of
Machine Learning (ML) have pushed the frontier of hardware security, most
conventional ML-based methods can only achieve the desired performance by
manually finding a robust feature representation for circuits, which are
non-Euclidean data. As a result, modeling these circuits using graph learning
to improve design flows has attracted research attention in the Electronic
Design Automation (EDA) field. However, due to the lack of supporting tools,
only a few existing works apply graph learning to resolve hardware security
issues. To attract more attention, we propose HW2VEC, an open-source graph
learning tool that lowers the threshold for newcomers to research hardware
security applications with graphs. HW2VEC provides an automated pipeline for
extracting a graph representation from a hardware design at various
abstraction levels (register transfer level or gate-level netlist). Moreover,
HW2VEC users can automatically transform the non-Euclidean hardware designs
into Euclidean graph embeddings for solving their problems. In this paper, we
demonstrate that HW2VEC can achieve state-of-the-art performance on two
hardware security-related tasks: Hardware Trojan Detection and Intellectual
Property Piracy Detection. We also provide time-profiling results for the
graph extraction and learning pipelines in HW2VEC.
§§footnotetext: Both are equal-contributing first authors. Yu is the
corresponding author.
## I Introduction
In past decades, the growing design complexity and the time-to-market pressure
have jointly contributed to the globalization of the Integrated Circuit (IC)
supply chain [35]. Along this globalized supply chain, IC designers tend to
leverage third-party Electronic Design Automation (EDA) tools and Intellectual
Property (IP) cores or outsource costly services to reduce their overall
expense. This results in a worldwide distribution of IC design, fabrication,
assembly, deployment, and testing [6, 18, 33]. However, such globalization can
also make the IC supply chain vulnerable to various hardware security threats
such as Hardware Trojan Insertion, IP Theft, Overbuilding, Counterfeiting,
Reverse Engineering, and Covert & Side Channel Attacks.
As the consequences of not promptly addressing these security threats can be
severe, countermeasures and tools have been proposed to mitigate, prevent, or
detect them [15]. For example, hardware-based primitives such as physical
unclonable functions (PUFs) [14], true random number generators (TRNGs) [28],
and cryptographic hardware can all intrinsically enhance architectural
security. The countermeasures built into hardware design tools are also
critical for securing the hardware in the early phases of the IC supply chain.
Some Machine Learning (ML) based approaches have proven effective for
detecting Hardware Trojans (HTs) in hardware designs at both the Register
Transfer Level (RTL) and the Gate-Level Netlist (GLN) [11, 13]. In addition,
[16] automates the identification of counterfeited ICs by leveraging a Support
Vector Machine (SVM) to analyze sensor readings from on-chip hardware
performance counters (HPCs). However, as indicated in [40], effectively
applying ML models is a non-trivial task, as the defenders must first identify
an appropriate input representation based on hardware domain knowledge.
Therefore, ML-based approaches can only achieve the desired performance with a
robust feature representation of a circuit (non-Euclidean data), which is more
challenging to acquire than one for Euclidean data such as images, texts, or
signals.
Figure 1: The illustration of the process that extracts features for hardware
analysis.
In the IC design flow, many fundamental objects such as netlists or layouts
have natural graph representations [24]. These graphs are non-Euclidean data
with irregular structures, making it hard to generalize basic mathematical
operations and apply conventional Deep Learning (DL) approaches to them [7].
Moreover, extracting a feature that captures structural information requires a
non-trivial effort to achieve the desired performance. To overcome these
challenges, many graph learning approaches such as Graph Convolutional
Networks (GCNs), Graph Neural Networks (GNNs), and Graph Autoencoders (GAEs)
have been proposed and applied in various areas such as computer vision,
natural language processing, and program analysis [19, 45]. In the EDA field,
some works tackle netlists with GCNs for test point insertion [25] or with
GNNs for fast and accurate power estimation in pre-silicon simulation [52]. As
Figure 1 shows, these approaches typically begin by extracting a graph
representation ($g$) from a hardware design $p$ and then use graph-based
models as an alternative to the manual feature engineering process. Lastly, by
projecting each hardware design onto the Euclidean space ($h_{g}$), these
designs can be passed to ML models for learning tasks. However, due to the
lack of supporting tools, only a few works have applied GNN-based approaches
to securing hardware during the IC design phases [48, 49].
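As a rough illustration of the $p \rightarrow g \rightarrow h_{g}$ pipeline in Figure 1 (not HW2VEC's actual implementation), the sketch below turns a toy netlist-like graph into a fixed-length embedding with one round of untrained message passing; the degree-based node features and mean-pooling readout are our own simplifications.

```python
# Hedged sketch of the p -> g -> h_g pipeline: a netlist-like design becomes
# a directed graph, then a fixed-length Euclidean embedding via one round of
# untrained message passing. Illustration only, not HW2VEC's GNN.

def embed_graph(edges, num_nodes, dim=4):
    # Initial node features: simple degree-based vectors (our simplification).
    feats = [[0.0] * dim for _ in range(num_nodes)]
    for src, dst in edges:
        feats[src][0] += 1.0   # out-degree in slot 0
        feats[dst][1] += 1.0   # in-degree in slot 1
    # One message-passing round: each node adds its predecessors' features.
    msgs = [row[:] for row in feats]
    for src, dst in edges:
        for k in range(dim):
            msgs[dst][k] += feats[src][k]
    # Graph-level readout: mean pooling into a fixed-length embedding h_g.
    return [sum(node[k] for node in msgs) / num_nodes for k in range(dim)]

# Toy "netlist": signals 0 and 1 feed gate 2, which drives output 3.
h = embed_graph([(0, 2), (1, 2), (2, 3)], num_nodes=4)
print(h)  # a 4-dimensional embedding usable by downstream ML models
```

In a real pipeline the message-passing weights would be learned, and the embedding $h_{g}$ would feed a classifier for tasks such as HT or IP piracy detection.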
To attract more research attention to this field, we propose HW2VEC, an open-
source graph learning tool for enhancing hardware security. HW2VEC provides
automated pipelines for extracting graph representations from hardware designs
and for leveraging graph learning to secure hardware in the design phases.
Moreover, HW2VEC automates the processes of engineering features and modeling
hardware designs. To the best of our knowledge, HW2VEC is the first
open-source research tool that supports applying graph learning methods to
hardware designs at different abstraction levels for hardware security. In
addition, HW2VEC supports transforming hardware designs into various graph
representations such as the Data-Flow Graph (DFG) or the Abstract Syntax Tree
(AST). In this paper, we also demonstrate that HW2VEC can be utilized to
resolve two hardware security applications, Hardware Trojan Detection and IP
Piracy Detection, and that it performs as well as the state-of-the-art
GNN-based approaches.
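HW2VEC derives DFG and AST representations from HDL code; as a language-agnostic analogy (using Python's standard `ast` module on a toy expression rather than an HDL parser), the sketch below shows the general idea of turning source text into a `(nodes, edges)` graph suitable for graph learning. The traversal and node labels are our own illustration, not HW2VEC's extraction code.

```python
# Hedged analogy: turn source text into an AST graph (nodes, edges).
# HW2VEC does this for Verilog; here we use Python's own ast module
# on a toy expression purely to illustrate the idea.
import ast

def ast_to_graph(src):
    tree = ast.parse(src, mode="eval")
    nodes, edges = [], []
    def visit(node, parent=None):
        idx = len(nodes)
        nodes.append(type(node).__name__)   # node label = AST node type
        if parent is not None:
            edges.append((parent, idx))     # parent-child edge
        for child in ast.iter_child_nodes(node):
            visit(child, idx)
    visit(tree)
    return nodes, edges

nodes, edges = ast_to_graph("(a & b) ^ c")
print(nodes)
```

Each node label (e.g., `BinOp`, `Name`) would then be embedded as a feature vector before the graph is handed to a GNN.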
### I-A Our Novel Contributions
Our contributions to the hardware security research community are as follows:
* •
We propose an automated pipeline to convert a hardware design in RTL or GLN
into various graph representations.
* •
We propose a GNN-based tool to generate vectorized embeddings that capture the
behavioral features of hardware designs from their graph representations.
* •
We demonstrate HW2VEC’s effectiveness by showing that it performs comparably
to state-of-the-art GNN-based approaches on various real-world hardware
security problems, including Hardware Trojan Detection and IP Piracy
Detection.
* •
We open-source HW2VEC as a Python library111The HW2VEC is publicly available
at https://github.com/AICPS/hw2vec/. to contribute to the hardware security
research community.
### I-B Paper Organization
We organize the rest of the paper as follows: we introduce background
information and a literature survey in Section II; we present the overall
architecture of HW2VEC in Section III; we then demonstrate usage examples and
two advanced use cases (HT detection and IP piracy detection) in Section IV;
next, we show experimental results and discuss HW2VEC’s practicability in
Section V; lastly, we conclude in Section VI.
## II Related works and Background
This section first gives a brief overview of hardware security problems and
countermeasures. It then describes works applying ML-based approaches to
hardware security. Lastly, we introduce works that utilize graph learning
methods in both EDA and hardware security.
### II-A Hardware Security Threats in IC Supply Chain
In the IC supply chain, each IC passes through multiple processes, as shown
in Figure 2. First, the specification of a hardware design is turned into a
behavioral description written in a Hardware Description Language (HDL) such
as Verilog or VHDL. Then, it is transformed into a design implementation in
terms of logic gates (i.e., a netlist) through Logic Synthesis. Physical
Synthesis implements the netlist as a layout design (e.g., a GDSII file).
Lastly, the resulting GDSII file is handed to a foundry to fabricate the
actual IC. Once a foundry produces the IC (bare die), several tests are
performed to guarantee its correct behavior. The verified IC is then packaged
at assembly and sent to the market to be deployed in systems.
For a System-on-Chip (SoC) company, all of the aforementioned stages of the IC
supply chain require a vast investment of money and effort. For example, it
cost $5 billion to develop a new foundry in 2015 [50]. Therefore, to lower
R&D costs and keep up with the competitive development cycle, an SoC company
may choose to outsource fabrication to a third-party foundry, purchase
third-party IP cores, and use third-party EDA tools. The use of worldwide
distributed third parties makes the IC supply chain susceptible to various
security threats [46] such as Hardware Trojan Insertion, IP Theft,
Overbuilding, Counterfeiting, Reverse Engineering, and Covert & Side Channel
Attacks. Failing to detect or prevent these threats can lead to severe
outcomes. For example, in 2008, a suspected nuclear installation in Syria was
bombed by Israeli jets because a backdoor in its commercial off-the-shelf
microprocessors disabled the Syrian radar [4]. In another instance, the
IP-intensive industries of the USA lose between $225 and $600 billion annually
as companies from China steal American IPs, mainly in the semiconductor
industry [2].
Figure 2: The illustration of the IC supply chain demonstrating the hardware
design flow from a specification to the behavioral description (RTL), logic
implementation (GLN), physical implementation (GDSII), and the actual chip
(Bare Die or IC).
Among the mentioned security threats, the insertion of a Hardware Trojan (HT)
can cause the infected hardware to leak sensitive information, degrade its
performance, or even trigger a Denial-of-Service (DoS) attack. In
System-on-Chip (SoC) or IC designs, IP Theft, the illegal usage and
distribution of an IP core, can occur. The third-party foundries responsible
for outsourced fabrication can overbuild extra chips for their own benefit
without the designer’s permission. Moreover, selling counterfeited designs
under the name of the original supplier leads to financial or safety damage
for the producer, or even endangers national security if the target lies
within essential infrastructure or military systems. Reverse Engineering (RE)
recovers high-level information from a circuit available at a lower
abstraction level. Although RE can be helpful in the design and verification
process, an attacker can misuse the reconstructed IC designs for malicious
purposes. A Covert Channel uses non-traditional communication (e.g., a shared
cache) to leak critical information from a circuit. In contrast, a Side
Channel exists among hardware components that are physically isolated or not
even in proximity (e.g., the power or electromagnetic channel).
### II-B Hardware Security Countermeasures
Due to the globalization of the IC supply chain, hardware is susceptible to
security threats such as IP piracy (unlicensed usage of IP), overbuilding
(unauthorized manufacturing of the circuit), counterfeiting (producing a
faithful copy of a circuit), reverse engineering, Hardware Trojans (malicious
modifications of a circuit), and side-channel attacks [5].
In the literature, countermeasures and tools have been proposed to mitigate,
prevent, or detect these threats [15]. For example, a cryptographic
accelerator is a hardware-based countermeasure that reinforces built-in rather
than add-on defense against security threats. The True Random Number Generator
(TRNG) and the Physical Unclonable Function (PUF) are two other effective
security primitives [14, 28]. These solutions are critical for security
protocols and unique IC identification; they rely on physical phenomena, such
as process variations during fabrication, for randomness, stability, and
uniqueness [40].
In addition to hardware-based solutions, countermeasures enhancing security
during the hardware design process are also present in the literature. For
example, side-channel analysis for HT detection using various models, such as
hierarchical temporal memory [10] and DL [9], has attracted considerable
attention recently. However, these methods postpone the detection to the
post-silicon stage. On the other hand, Formal Verification (FV) is a
pre-silicon algorithmic method that converts the third-party IP (3PIP) into a
proof-checking format and checks whether the IP satisfies some predefined
security properties [17, 37]. Although FV leverages the predefined security
properties in an IP for HT detection, its detection scope is limited to
certain types of HTs because the properties are not comprehensive enough to
cover all kinds of malicious behavior [31]. Some works employ model checking,
but they do not scale to large designs, as model checking is NP-complete and
can suffer from state explosion [32]. Another existing approach is code
coverage, which analyzes the RTL code using metrics such as line, statement,
finite-state-machine, and toggle coverage to ascertain the suspicious signals
that imitate the HT [44, 54].
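The toggle-coverage metric mentioned above can be sketched in a few lines: a signal that never toggles across the test traces is flagged as suspicious, since rarely activated logic is a typical hiding place for HT trigger wires. The traces and signal names below are hypothetical, and real coverage tools are far more elaborate.

```python
# Hedged sketch of toggle coverage over simulation traces.
# traces maps a signal name to its sampled 0/1 values; a signal that
# never changes value is flagged for manual inspection.

def toggle_coverage(traces):
    """Return (coverage fraction, list of never-toggling signals)."""
    suspicious = [name for name, samples in traces.items()
                  if len(set(samples)) < 2]      # signal never toggled
    covered = 1 - len(suspicious) / len(traces)
    return covered, suspicious

# Hypothetical traces from a testbench run.
traces = {
    "clk":     [0, 1, 0, 1, 0, 1],
    "data":    [0, 0, 1, 1, 0, 1],
    "trigger": [0, 0, 0, 0, 0, 0],   # stuck low: candidate HT trigger wire
}
cov, sus = toggle_coverage(traces)
print(cov, sus)  # 2/3 of signals toggled; "trigger" flagged for inspection
```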
As for IP theft prevention, watermarking and fingerprinting are two approaches
that embed the IP owner and legal IP user’s signatures into a circuit to
prevent infringement [27, 29]. Hardware metering is an IP protection method in
which the designer assigns a unique tag to each chip for chip identification
(passive tag) or enabling/disabling the chip (active tag) [21]. Obfuscation is
another countermeasure for IP theft [8] which comprises two main approaches:
Logic Locking and Camouflaging. In Logic Locking, the designer inserts
additional gates such as XOR into non-critical wires. The circuit will only be
functional if the correct key is presented in a secure memory out of reach of
the attacker [47]. Camouflaging modifies the design such that cells with
different functionalities look similar to the attacker, which confuses the
reverse-engineering process [30]. Lastly, another countermeasure is to split
the design into separate ICs and have them fabricated in different foundries
so that none of them has access to the whole design to perform malicious
activities [26, 53].
In [15], several academic and commercial tools have been proposed to secure
hardware. For example, VeriSketch and SecVerilog are open-source
academic verification tools for securing hardware. SecureCheck from Mentor
Graphics, JasperGold Formal Verification Platform from Cadence, and Prospect
from Tortuga Logic are all commercial verification tools ready in the market.
PyVerilog [38] is a hardware design tool that allows users to parse HDL code
and perform pre-silicon formal verification side-by-side with functional
verification. In short, though many approaches have been proposed to
counteract security threats, security is still an afterthought in hardware
design. Therefore, new countermeasures will be needed against new security
threats.
### II-C Machine Learning for Hardware Security
In the last few decades, the advancements in Machine Learning (ML) have
revolutionized the conventional methods and models in numerous applications
throughout the design flow. Defenders can use ML with hardware-based
observations for detecting attacks, while attackers can also use ML to steal
sensitive information from an IC, breaching hardware security [40]. Some ML-
based countermeasures have been proven effective for detecting HT from
hardware designs in both Register Transfer Level (RTL) or gate-level netlists
(GLN) [11, 13]. In [11], the circuit features are extracted from the Abstract
Syntax Tree (AST) representations of RTL codes and fed to a gradient-boosting
algorithm to train the ML model and construct an HT library. [13] extracts 11
Trojan-net feature values from GLNs and then trains a Multi-Layer Neural
Network on them to classify each net in a netlist as either a normal net or a
Trojan net. Similarly, researchers have applied ML for automating the process of
detecting other threats. For instance, SVM can be used to analyze the on-chip
sensor readings (e.g., HPCs) to identify counterfeited ICs and detect HT in
real-time [16, 22]. However, as indicated in [40], effectively applying ML
models is not a trivial task as the defenders must first identify an
appropriate input representation for a hardware design. Unlike Euclidean data
such as images, texts, or signals, finding a robust feature representation for
a circuit (Non-Euclidean data) is more challenging as it requires domain
knowledge in both hardware and ML. To overcome this challenge, HW2VEC provides
more effective graph learning methods to automatically find a robust feature
representation for a non-Euclidean hardware design.
### II-D Graph Learning for Hardware Design and Security
Although conventional ML and DL approaches can effectively capture the
features hidden in Euclidean data, such as images, text, or videos, there are
still various applications where the data is graph-structured. As graphs can
be irregular, a graph can have a variable size of unordered nodes, and nodes
can have a different number of neighbors, thus making mathematical operations
used in deep learning (e.g., 2D Convolution) challenging to be applied [7].
Also, extracting a feature that captures structural information requires
challenging efforts to achieve the desired performance. To address these
challenges, recently, many graph learning approaches such as Graph
Convolutional Networks (GCN), Graph Neural Networks (GNN), or Graph
Autoencoder (GAE) have been proposed and applied in various applications [19,
45]. Only by projecting non-Euclidean data into a low-dimensional embedding
space can the operations used in ML methods be applied.
In EDA applications, many fundamental objects such as Boolean functions,
netlists, or layouts have natural graph representations [24]. Some works tackle
netlists with GCNs for test point insertion [25] or with GNNs for fast and
accurate power estimation in pre-silicon simulation [52]. [52] uses a GNN-
based model to infer the toggle rate of each logic gate from a netlist graph
for fast and accurate average power estimation without gate-level simulation,
which is slower than RTL simulation for acquiring toggle rates. They
use GLNs, corresponding input port, and register toggle rates as input
features and logic gate toggle rates as ground-truth to train the model. The
model can infer the toggle rate of a logic gate from input features acquired
from RTL simulation for average power analysis computed by other power
analysis tools.
As for hardware security, only a few works utilizing GNN-based approaches
against security threats exist [48, 49]. [49] utilizes a GNN-based approach
for detecting HT in pre-silicon design phases without the need for golden HT-
free reference. Besides, using the GNN-based approach allows the extraction of
features from Data-Flow graphs to be automated. In [48], the proposed GNN-
based approach can detect IP piracy without the hardware overhead of
inserting signatures to prove ownership. Specifically, the Siamese-
based network architecture allows their approach to capture the features used
to assess the similarity between hardware designs in the form of a Data-Flow
Graph. In short, these works have shown the effectiveness of securing hardware
designs with graph learning approaches. To further attract attention, we
propose HW2VEC as a convenient research tool that lowers the threshold for
newcomers to make research progress and for experienced researchers to explore
this topic more in-depth.
## III HW2VEC Architecture
As Figure 3 shows, HW2VEC contains HW2GRAPH and GRAPH2VEC. During the IC
design flow, a hardware design can have various levels of abstraction such as
High-Level Synthesis (HLS), RTL, GLN, and GDSII, each of which is
fundamentally non-Euclidean data. Overall, in HW2VEC, a hardware design $p$ is
first turned into a graph $g$ by HW2GRAPH, which defines the pairwise
relationships between objects that preserve the structural information. Then,
GRAPH2VEC consumes $g$ and produces the Euclidean representation $h_{g}$ for
learning.
Figure 3: The overall architecture of HW2VEC. Beginning with hardware design
objects (RTL or GLN), the HW2GRAPH leverages PRE_PROC, GRAPH_GEN, and
POST_PROC to extract graph representations from hardware designs in the form
of node embedding matrix ($\mathbf{X}$) and adjacency matrix ($\mathbf{A}$).
These graphs are then passed to GRAPH2VEC to acquire the graph embeddings for
graph learning tasks of hardware security.
### III-A HW2GRAPH: from hardware design to graph
The first step is to convert each textual hardware design code $p$ into a
graph $g$. HW2GRAPH supports the automatic conversion of raw hardware code
into various graph formats such as Abstract Syntax Tree (AST) or Data-Flow
Graph (DFG). AST captures the syntactic structure of hardware code while DFG
indicates the relationships and dependencies between the signals and gives a
higher-level expression of the code’s computational structure. HW2GRAPH
consists of three primary modules: pre-processing, graph generation engine,
and post-processing.
#### III-A1 Pre-processing (PRE_PROC)
In this module, we have several automatic scripts for pre-processing a raw
hardware code $p$. As a hardware design can contain several modules stored in
separate files, the first step is to combine them into a single file (i.e.,
flattening). Next, to automatically locate the “entry point” top module of
$p$, the script scans the flattened code for the keyword “module” and extracts
the module names and the number of repetitions in $p$. Then, the script
analyzes the list of discovered module names and takes the one that appears
only once, which means the module is not instantiated by any other module, as
the top module. Here, we denote the pre-processed hardware design code as
$p^{\prime}$.
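This top-module heuristic can be sketched in a few lines of Python; the regular expressions and the toy two-module design below are illustrative assumptions, not the exact HW2GRAPH scripts.

```python
import re

def find_top_module(flattened_code: str) -> str:
    """Heuristic from the pre-processing step: a module that is declared
    once and never instantiated elsewhere is taken as the top module."""
    # Names declared with the "module" keyword ("endmodule" has no word
    # boundary before "module", so it is not matched).
    declared = re.findall(r"\bmodule\s+(\w+)", flattened_code)
    # Count every appearance of each declared name in the code.
    counts = {name: len(re.findall(r"\b%s\b" % name, flattened_code))
              for name in declared}
    # A name appearing only once is not instantiated by any other module.
    tops = [name for name, n in counts.items() if n == 1]
    return tops[0] if tops else declared[0]

code = """
module half_adder(input a, b, output s, c);
  assign s = a ^ b; assign c = a & b;
endmodule
module top(input x, y, output s, c);
  half_adder ha(.a(x), .b(y), .s(s), .c(c));
endmodule
"""
```

Here `half_adder` appears twice (declaration plus instantiation) while `top` appears only once, so `top` is selected.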
#### III-A2 Graph Generation Engine (GRAPH_GEN)
We integrate PyVerilog [39], a hardware design toolkit for parsing the Verilog
code, into this module. The pre-processed code $p^{\prime}$ is first
tokenized by a lexical analyzer and converted by a YACC (Yet Another
Compiler-Compiler) parser into a corresponding parse tree. Then, we recursively iterate through each node in
the parse tree with Depth-First Search (DFS). At each recursive step, we
determine whether to construct a collection of name/value pairs, an ordered
list of values, or a single name/value pair based on the token names used in
Verilog AST. To acquire DFG, the AST is further processed by the data flow
analyzer to create a signal DFG for each signal in the circuit such that the
signal is the root node. Lastly, we merge all the signal DFGs. The resulting
graph, either DFG or AST, is denoted as $g=(V,E)$. The AST is a tree type of
graph in which the nodes $V$ can be operators (mathematical, gates, loop,
conditional, etc.), signals, or attributes of signals. The edges $E$ indicate
the relation between nodes. The DFG shows data dependency where each node in
$V$ represents signals, constant values, and operations such as xor, and,
concatenation, branch, or branch condition, etc. Each edge in $E$ stands for
the data dependency relation between two nodes. Specifically, for all
$v_{i},v_{j}$ pairs, the edge ${e_{ij}}$ belongs to $E$ (${e_{ij}}\in E$) if
$v_{i}$ depends on $v_{j}$, or if $v_{j}$ is applied on $v_{i}$.
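The recursive conversion of a parse tree into nested name/value structures can be illustrated with a toy tree; the `Node` class and token names below are hypothetical stand-ins for PyVerilog's AST node objects.

```python
class Node:
    """Toy parse-tree node: a token name, optional children, optional value."""
    def __init__(self, name, children=(), value=None):
        self.name, self.children, self.value = name, list(children), value

def to_json(node):
    """Depth-first walk: a leaf becomes a single name/value pair; an inner
    node becomes its name mapped to the ordered list of converted children."""
    if not node.children:
        return {node.name: node.value}
    return {node.name: [to_json(c) for c in node.children]}

tree = Node("ModuleDef", [
    Node("Identifier", value="top"),
    Node("Assign", [Node("Lvalue", value="s"), Node("Rvalue", value="a ^ b")]),
])
```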
#### III-A3 Post-processing (POST_PROC)
The output from the Graph Generation Engine is in JSON (JavaScript Object
Notation) format. In this phase, we convert a JSON-formatted graph into a
NetworkX graph object. NetworkX is an efficient, scalable, and highly portable
framework for graph analysis. Several popular geometric representation
learning libraries (PyTorch-Geometric and Deep Graph Library) take this format
of graphs as the primary data structure in their pipelines.
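The underlying conversion can be approximated with the standard library alone; in practice NetworkX's node-link readers perform this job, but the sketch below shows the idea, with hypothetical node ids.

```python
import json

def json_to_adjacency(json_text: str) -> dict:
    """Parse a node-link JSON graph (the format consumed by NetworkX and
    geometric learning pipelines) into a plain adjacency dict."""
    data = json.loads(json_text)
    adj = {node["id"]: [] for node in data["nodes"]}
    for link in data["links"]:
        adj[link["source"]].append(link["target"])
    return adj

# A tiny signal DFG: signal "s" depends on signals "a" and "b".
graph_json = '''{"nodes": [{"id": "s"}, {"id": "a"}, {"id": "b"}],
                 "links": [{"source": "s", "target": "a"},
                           {"source": "s", "target": "b"}]}'''
```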
### III-B Graph2Vec: from graph to graph embedding
Once HW2GRAPH has converted a hardware design into a graph $g$, we begin to
process $g$ with the modules in GRAPH2VEC, including Dataset Processor,
Trainer, and Evaluator to acquire the graph embedding $h_{g}$.
#### III-B1 Dataset Processor
This module handles low-level parsing tasks such as caching the data on
disk to optimize tasks that involve repetitive model testing, performing the
train-test split, and finding the unique set of node labels among all the graph
data instances. One important task of the dataset processor is to convert a
graph $g=(V,E)$ into the tensor-like inputs $\mathbf{X}$ and $\mathbf{A}$
where $\mathbf{X}$ represents the node embeddings in matrix form and
$\mathbf{A}$ stands for the adjacency information of $g$. The conversion
between $E$ and $\mathbf{A}$ is straightforward. To acquire $\mathbf{X}$,
Dataset Processor performs a normalization process and assigns each of the
nodes a label that indicates its type, which may vary for different kinds of
graphs (AST or DFG). Each node gets converted to an initial vectorized
representation using one-hot encoding based on its type label.
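A minimal sketch of this conversion, assuming a toy node/edge list and plain nested lists in place of tensors:

```python
def graph_to_tensors(nodes, edges):
    """Dataset Processor sketch: nodes is a list of (id, type_label) pairs,
    edges a list of (src, dst) pairs. Returns the one-hot node embedding
    matrix X and the adjacency matrix A as nested lists."""
    labels = sorted({label for _, label in nodes})       # unique type labels
    index = {nid: i for i, (nid, _) in enumerate(nodes)}  # node id -> row
    # One-hot encode each node by its type label.
    X = [[1.0 if labels.index(label) == j else 0.0
          for j in range(len(labels))] for _, label in nodes]
    # Binary adjacency matrix from the edge list.
    A = [[0.0] * len(nodes) for _ in nodes]
    for src, dst in edges:
        A[index[src]][index[dst]] = 1.0
    return X, A

nodes = [("n0", "signal"), ("n1", "xor"), ("n2", "signal")]
edges = [("n0", "n1"), ("n1", "n2")]
```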
#### III-B2 Graph Embedding Model
In this module, we break down the graph learning pipeline into multiple
network components, including graph convolution layers (GRAPH_CONV), graph
pooling layers (GRAPH_POOL), and graph readout operations (GRAPH_READOUT).
In HW2VEC, the GRAPH_CONV is inspired by the Spatial Graph Convolution Neural
Network (SGCN), which defines the convolution operation based on a node’s
spatial relations. In literature, this phase is also referred to as message
propagation phase which involves two sub-functions: AGGREGATE and COMBINE
functions. Each input graph $g=(V,E)$ is initialized in the form of node
embeddings and adjacency information ($\mathbf{X}^{(0)}$ and $\mathbf{A}$).
For each $k$-th iteration, the process updates the node embeddings
$\mathbf{X}^{(k)}$ using each node representation $h_{v}^{(k-1)}$ in
$\mathbf{X}^{(k-1)}$, given by,
$a_{v}^{(k)}=\textbf{AGGREGATE}^{(k)}(\{h_{u}^{(k-1)}:u\in N(v)\})$ (1)
$h_{v}^{(k)}=\textbf{COMBINE}^{(k)}(h_{v}^{(k-1)},a_{v}^{(k)})$ (2)
where $h_{v}^{(k)}\in R^{C^{k}}$ denotes the feature vector after $k$
iterations for the $v$-th node and $N(v)$ returns the neighboring nodes of
$v$-th node. Essentially, AGGREGATE collects the features of the
neighboring nodes to extract an aggregated feature vector $a_{v}^{(k)}$ for
layer $k$, and COMBINE combines the previous node feature
$h_{v}^{(k-1)}$ with $a_{v}^{(k)}$ to output the next feature vector
$h_{v}^{(k)}$. This message propagation is carried out for a pre-determined
number of layers $k$. We denote the final propagation node embedding
$\mathbf{X}^{(k)}$ as $\mathbf{X}^{prop}$.
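A toy version of this propagation, with AGGREGATE as a plain neighbor sum and COMBINE as an unweighted average (the real GRAPH_CONV layers use learnable weights):

```python
def propagate(X, A, layers=2):
    """Toy spatial graph convolution over node features X (nested lists)
    and adjacency matrix A. Returns X^{prop} after the given number of
    message-propagation layers."""
    n, d = len(X), len(X[0])
    for _ in range(layers):
        X_next = []
        for v in range(n):
            # AGGREGATE: sum the features of the neighbors N(v) of node v.
            a_v = [sum(X[u][j] for u in range(n) if A[v][u]) for j in range(d)]
            # COMBINE: blend the previous feature h_v with the aggregate a_v.
            X_next.append([(X[v][j] + a_v[j]) / 2.0 for j in range(d)])
        X = X_next
    return X
```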
Next, in GRAPH_POOL, the node embedding $\mathbf{X}^{prop}$ is further
processed with an attention-based graph pooling layer. As indicated from [23,
51], the integration of a graph pooling layer allows the model to operate on
the hierarchical representations of a graph, and hence can better perform the
graph classification task. Besides, such an attention-based pooling layer
allows the model to focus on a local part of the graph and is considered as a
part of a unified computational block of a GNN pipeline [20]. In this layer,
we perform top-k filtering on nodes according to the scoring results, as
follows:
$\mathbf{\alpha}=\textsc{SCORE}(\mathbf{X}^{prop},\mathbf{A})$ (3)
$\mathbf{P}=\mathrm{top}_{k}(\mathbf{\alpha})$ (4)
where $\mathbf{\alpha}$ stands for the coefficients predicted by the graph
pooling layer for nodes. $\mathbf{P}$ represents the indices of the pooled
nodes, which are selected from the top $k$ of the nodes ranked according to
$\alpha$. The number $k$ used in top-k filtering is calculated from a pre-
defined pooling ratio $pr$ via $k=pr\times|V|$, where we consider only a
constant fraction $pr$ of the node embeddings of the DFG to be
relevant (e.g., $pr=0.5$). One example of the scoring function is to utilize a
separate trainable GNN layer to produce the scores so that the scoring method
considers both node features and topological characteristics [23]. We denote
the node embeddings and edge adjacency information after pooling by
$\mathbf{X}^{pool}$ and $\mathbf{A}^{pool}$ which are calculated as follows:
$\mathbf{X}^{pool}=(\mathbf{X}^{prop}\odot\mathrm{tanh}(\mathbf{\alpha}))_{\mathbf{P}}$
(5) $\mathbf{A}^{pool}={\mathbf{A}^{prop}}_{(\mathbf{P},\mathbf{P})}$ (6)
where $\odot$ represents an element-wise multiplication, $()_{\mathbf{P}}$
refers to the operation that extracts a subset of nodes based on $P$, and
$()_{(\mathbf{P},\mathbf{P})}$ refers to the information of the adjacency
matrix between the nodes in this subset.
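Equations 3-6 can be sketched as follows, with the scores $\alpha$ supplied directly rather than predicted by a trainable scoring layer:

```python
import math

def top_k_pool(X, A, alpha, pr=0.5):
    """Attention-pooling sketch: keep the top pr*|V| nodes ranked by score
    alpha, gate their features by tanh(alpha), and slice the adjacency
    matrix down to the surviving nodes."""
    k = max(1, int(pr * len(X)))
    # P: indices of the top-k nodes, ranked by their scores.
    P = sorted(range(len(X)), key=lambda v: alpha[v], reverse=True)[:k]
    # X^{pool}: gated features of the selected nodes (Eq. 5).
    X_pool = [[x * math.tanh(alpha[v]) for x in X[v]] for v in P]
    # A^{pool}: adjacency restricted to the selected nodes (Eq. 6).
    A_pool = [[A[i][j] for j in P] for i in P]
    return X_pool, A_pool, P

X = [[1.0], [1.0], [1.0], [1.0]]
A = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]]
alpha = [0.0, 2.0, 1.0, -1.0]
X_pool, A_pool, P = top_k_pool(X, A, alpha, pr=0.5)
```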
Lastly, in GRAPH_READOUT, the overall graph-level feature extraction is
carried out by either summing or averaging the node features
$\mathbf{X}^{pool}$. We denote the graph embedding for each graph $g$ as
$h^{(k)}_{g}$, computed as follows:
$h^{(k)}_{g}=\textit{GRAPH\_READOUT}(\{h_{v}^{(k)}:v\in V\})$ (7)
We use the graph embedding $h^{(k)}_{g}$ to model the behavior of circuits
(use $h_{g}$ for simplicity). After this, the fixed-length embeddings of
hardware designs then become compatible with ML algorithms.
In practice, these network components can be combined in various ways
depending on the type of the tasks (node-level task, graph-level task) or the
complexity of the tasks (simple or complex network architecture). In
GRAPH2VEC, one default option is to use one or multiple GRAPH_CONV, followed
by a GRAPH_POOL and a GRAPH_READOUT. Besides, in conjunction with Multi-Layer
Perceptron (MLP) or other ML layers, this architecture can transform the graph
data into a form that we can use in calculating the loss for learning. In
GRAPH2VEC, we reserve the flexibility for customization, so users may also
choose to combine these components in a way that is effective for their tasks.
#### III-B3 Trainer and Evaluator
The Trainer module takes training datasets, validating datasets, and a set of
hyperparameter configurations to train a GNN model. HW2VEC currently supports
two types of Trainer, graph-trainer and graph-pair-trainer. To be more
specific, graph-trainer uses GRAPH2VEC’s model to perform graph classification
learning and evaluation while graph-pair-trainer considers pairs of graphs,
calculates their similarities, and ultimately performs the graph similarity
learning and evaluation. Some low-level tasks are also handled by Trainer
module, such as caching the best model weights evaluated from the validation
set to the disk space or performing mini-step testing. Once the training is
finished, the Evaluator module plots the training loss and commonly used
metrics in ML-based hardware security applications. To facilitate the analysis
of the results, HW2VEC also provides utilities to visualize the embeddings of
hardware designs with t-SNE based dimensionality reduction [43]. Besides,
HW2VEC provides multiple exporting functionalities so that the learned
embeddings can be presented in standardized formats, and users can also choose
other third-party tools such as Embedding Projector [36] to analyze the
embeddings.
## IV HW2VEC Use-cases
In this section, we describe HW2VEC use-cases. First, Section IV-A exhibits a
fundamental use-case in which a hardware design $p$ is converted into a graph
$g$ and then into a fixed-length embedding $h_{g}$. Next, the use-cases of
HW2VEC for two hardware security applications (detecting hardware Trojan and
hardware IP piracy) are described in Section IV-B and Section IV-C,
respectively.
### IV-A Use-case 1: Converting a Hardware Design to a Graph Embedding
The first use-case demonstrates the transformation of a hardware design $p$
into a graph $g$ and then into an embedding $h_{g}$. As Algorithm 1 shows,
HW2GRAPH uses preprocessing (PRE_PROC), graph generation (GRAPH_GEN) and post-
processing (POST_PROC) modules which are detailed in Section III-A to convert
each hardware design into the corresponding graph. The $g$ is fed to GRAPH2VEC
with the uses of Data Processing (DATA_PROC) to generate $X$ and $A$. Then,
$X$ and $A$ are processed through GRAPH_CONV, GRAPH_POOL, and GRAPH_READOUT to
generate the graph embedding $h_{g}$. This resulting $h_{g}$ can be further
inspected with the utilities of Evaluator (see Section III-B3).
1 Input: A hardware design program $p$.
2 Output: A graph embedding $h_{p}$ for $p$.
3 def _HW2GRAPH(_$p$_)_:
4 $p^{\prime}\leftarrow$ Pre_Proc($p$);
5 $g\leftarrow$ Graph_Gen($p^{\prime}$);
6 $g^{\prime}\leftarrow$ Post_Proc($g$);
7 return $g^{\prime}$;
8
9
10def _GRAPH2VEC(_$g$_)_:
11 $X,A\leftarrow$ Data_Proc($g$)
12 $X^{prop},A^{prop}\leftarrow$ GRAPH_CONV($X,A$)
13 $X^{pool},A^{pool}\leftarrow$ GRAPH_POOL($X^{prop},A^{prop}$)
14 $h_{g}\leftarrow$ GRAPH_READOUT($X^{pool}$)
15 return $h_{g}$
16
17
18$g\leftarrow$ HW2GRAPH($p$);
19 $h_{g}\leftarrow$ GRAPH2VEC($g$);
Algorithm 1 Use-case - HW2VEC
In HW2VEC, we provide Algorithm 1’s implementation in use_case_1.py of our
repository.
### IV-B Use-case 2: Hardware Trojan Detection
In this use-case, we demonstrate how to use HW2VEC to detect HT, which has
been a major hardware security challenge for many years. An HT is an
intentional, malicious modification of a circuit by an attacker [34]. The
capability of detection at an early stage (particularly at RTL level) is
crucial as removing HTs at later stages could be very expensive. The majority
of existing solutions rely on a golden HT-free reference or cannot generalize
detection to previously unseen HTs. [49] proposes a GNN-based approach to
model the circuit’s behavior and identify the presence of HTs.
1 Input: A hardware design program $p$.
2 Output: A label indicating whether $p$ contains Hardware Trojan.
3 def _use_case_2(_$p$_)_:
4 $g\leftarrow$ HW2GRAPH($p$);
5 $h_{g}\leftarrow$ GRAPH2VEC($g$);
6 $\hat{y}\leftarrow$ MLP($h_{g}$);
7 if _$\hat{y}[0] >\hat{y}[1]$_ then
8 return Trojan;
9 else
10 return Non_Trojan;
11
12
13$\hat{Y}\leftarrow$ use_case_2($p$);
Algorithm 2 Use-case - Hardware Trojan Detection
To realize [49] in HW2VEC, we first use HW2GRAPH to convert each hardware
design $p$ into a graph $g$. Then, we transform each $g$ to a graph embedding
$h_{g}$. Lastly, $h_{g}$ is used to make a prediction $\hat{y}$ with an MLP
layer. To train the model, the cross-entropy loss $L$ is calculated
collectively for all the graphs in the training set (see Equation 8).
$L=H(Y,\hat{Y})=-\sum_{i}y_{i}\log(\hat{y_{i}}),$ (8)
where $H$ is the loss function. $Y$ stands for the set of ground-truth labels
(either Trojan or Non_Trojan) and $\hat{Y}$ represents the corresponding set
of predictions. Once trained by minimizing $L$, we use the model and Algorithm
2 to perform HT detection (can also be done with a pre-trained model). In
practice, we provide an implementation in use_case_2.py in our repository.
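A minimal sketch of the prediction step and the loss of Equation 8, with hypothetical logits in place of a trained MLP head:

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

def cross_entropy(Y, Y_hat):
    """Eq. 8: L = -sum_i y_i * log(y_hat_i), summed over the one-hot
    ground-truth labels of every graph in the batch."""
    return -sum(y * math.log(p) for ys, ps in zip(Y, Y_hat)
                for y, p in zip(ys, ps))

logits = [2.0, 0.0]                     # hypothetical MLP output for h_g
probs = softmax(logits)
label = "Trojan" if probs[0] > probs[1] else "Non_Trojan"
loss = cross_entropy([[1, 0]], [probs])  # ground truth: Trojan
```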
### IV-C Use-case 3: Hardware IP Piracy Detection
This use-case demonstrates how to leverage HW2VEC to confront another major
hardware security challenge: determining whether one of two hardware
designs is stolen from the other. The IC supply chain has been so
globalized that it exposes the IP providers to theft and illegal IP
redistribution. One state-of-the-art countermeasure embeds the signatures of
IP owners on hardware designs (i.e., watermarking or fingerprinting), but it
incurs additional hardware overhead during manufacturing. Therefore, [48]
addresses IP piracy by assessing the similarities between hardware designs
with a GNN-based approach. Their approach models the behavior of a hardware
design (in RTL or GLN) in graph representations.
1 Input: A pair of hardware design programs $p_{1},p_{2}$.
2 Output: A label indicating whether $p_{1},p_{2}$ is piracy.
3
4def _use_case_3(_$p_{1}$ , $p_{2}$_)_:
5 $g_{1},g_{2}\leftarrow$ HW2GRAPH($p_{1}$), HW2GRAPH($p_{2}$);
6 $h_{g_{1}},h_{g_{2}}\leftarrow$ GRAPH2VEC($g_{1}$), GRAPH2VEC($g_{2}$);
7 $\hat{y}\leftarrow$ Cosine_Sim($h_{g_{1}},h_{g_{2}}$);
8 if _$\hat{y} >\delta$_ then
9 return Piracy;
10 else
11 return Non-Piracy;
12
13
14$\hat{Y}\leftarrow$ use_case_3($p_{1}$, $p_{2}$);
Algorithm 3 Use-case - Hardware IP Piracy Detection
To implement [48], the GNN model has to be trained with a graph-pair
classification trainer in GRAPH2VEC. The first step is to use HW2GRAPH to
convert a pair of circuit designs $p_{1}$, $p_{2}$ into a pair of graphs
$g_{1}$, $g_{2}$. Then, GRAPH2VEC transforms both $g_{1}$ and $g_{2}$ into
graph embeddings $h_{g_{1}}$, $h_{g_{2}}$. To train this GNN model for
assessing the similarity of $h_{g_{1}}$ and $h_{g_{2}}$, a cosine similarity
is computed as the final prediction of piracy, denoted as $\hat{y}\in[-1,1]$.
The loss between a prediction $\hat{y}$ and a ground-truth label $y$ is
calculated as Equation 9 shows. Lastly, the final loss $L$ is computed
collectively with a loss function $H$ for all the graphs in the training set
(see Equation 10).
$G(y,\hat{y})=\begin{cases}1-\hat{y},&\texttt{if }y=1\\\textsc{max}(0,\hat{y}-\textsc{margin}),&\texttt{if }y=-1\end{cases}$ (9)
$L=H(Y,\hat{Y})=\sum_{i}G(y_{i},\hat{y_{i}}),$ (10)
where $Y$ stands for the set of ground-truth labels (either Piracy or
Non_Piracy) and $\hat{Y}$ represents the corresponding set of predictions. The
margin is a constant to prevent the learned embedding from becoming distorted
(always set to 0.5 in [48]). Once trained, we use this model and Algorithm 3
with $\delta$, which is a decision boundary used for making final judgment, to
detect piracy. In practice, we provide the implementation of Algorithm 3 in
use_case_3.py.
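The similarity score and the per-pair loss of Equation 9 can be sketched as follows, using toy embeddings in place of GRAPH2VEC outputs:

```python
import math

def cosine_sim(u, v):
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def pair_loss(y, y_hat, margin=0.5):
    """Eq. 9: pull similar pairs (y = 1) toward y_hat = 1 and push
    dissimilar pairs (y = -1) below the margin."""
    return 1 - y_hat if y == 1 else max(0.0, y_hat - margin)

h1, h2 = [1.0, 0.0], [0.8, 0.6]   # toy graph embeddings
y_hat = cosine_sim(h1, h2)
```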
## V Experimental Results
In this section, we evaluate the HW2VEC through various experiments using the
use-case implementations described earlier.
### V-A Dataset Preparation
For evaluation, we prepare one RTL dataset for HT detection (TJ-RTL) and both
RTL and GLN datasets (IP-RTL and IP-GLN) for IP piracy detection.
#### V-A1 The TJ-RTL dataset
We construct the TJ-RTL dataset by gathering the hardware designs with or
without HT from the Trust-Hub.org benchmark [1]. From Trust-Hub, we collect
three base circuits, AES, PIC, and RS232, and insert 34 varied types of HTs
into them. We also include these HTs as standalone instances to the TJ-RTL
dataset. Furthermore, we insert these standalone HTs into two other circuits
(DES and RC5) and include the resulting circuits to expand the TJ-RTL dataset.
Among the five base circuits, AES, DES, and RC5 are cryptographic cores that
encrypt the input plaintext into the ciphertext based on a secret key. For
these circuits, the inserted HTs can leak sensitive information (i.e., secret
key) via side-channels such as power and RF radiation or degrade the
performance of their host circuits by increasing the power consumption and
draining the power supply. RS232 is an implementation of the UART
communication channel, while the HT attacks on RS232 can affect the
functionality of either transmitter or receiver or can interrupt/disable the
communication between them. The PIC16F84 is a well-known Peripheral
Interface Controller (PIC) microcontroller, and the HTs for PIC tamper with
its functionality and manipulate the program-counter register. Lastly, we create
the graph datasets, DFG-TJ-RTL and AST-TJ-RTL, in which each graph instance is
annotated with a Trojan or Non_Trojan label.
#### V-A2 The IP-RTL and IP-GLN datasets
To construct the datasets for evaluating piracy detection, we gather RTL and
GLN of hardware designs in Verilog format. The RTL dataset includes common
hardware designs such as single-cycle and pipelined implementations of the
MIPS processor, which are either derived from open-source hardware designs
available on the internet or created by in-house designers who were given the
same specification to implement in Verilog. The GLN dataset includes
ISCAS’85 benchmark [12] which includes 7 different hardware designs (c432,
c499, c880, c1355, c1908, c6288, c7552) and their obfuscated instances derived
from TrustHub. Obfuscation complicates the circuit and confuses reverse
engineering but does not change the behavior of the circuit. Our collection
comprises 50 distinct circuit designs and several hardware instances for each
circuit design, summing to 143 GLN and 390 RTL codes. We form a graph-pair
dataset of 19,094 similar pairs and 66,631 different pairs, and dedicate 20% of
these 85,725 pairs for testing and the rest for training. This dataset
comprises pairs of hardware designs, labelled as piracy (positive) or no-
piracy (negative).
### V-B HW2VEC Evaluation: Hardware Trojan Detection
Here, we evaluate the capability of HW2VEC in identifying the existence of HTs
from hardware designs. We leverage the implementation mentioned in Section
IV-B. As for hyperparameters, we follow the best setting used in [49] which is
stored as a preset in a YAML configuration file. For performance metrics, we
count the True Positive ($TP$), False Negative ($FN$) and False Positive
($FP$) for deriving Precision $P=TP/(TP+FP)$ and Recall $R=TP/(TP+FN)$. $R$
manifests the percentage of HT-infested samples that the model can identify.
As the number of HT-free samples incorrectly classified as HT is also
critical, we compute $P$, which indicates what percentage of the samples the
model classifies as HT-infested actually contain HT. The $F_{1}$ score is the
harmonic mean of precision and recall, which better summarizes performance,
calculated as $F_{1}=2\times P\times R/(P+R)$.
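These metrics can be computed directly from the confusion counts; the counts below are illustrative, not those behind Table I:

```python
def f1_metrics(tp, fp, fn):
    """Precision P = TP/(TP+FP), recall R = TP/(TP+FN), and
    F1 = 2*P*R/(P+R), as defined for HT detection."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return p, r, 2 * p * r / (p + r)
```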
To demonstrate whether the learned model can generalize the knowledge to
handle the unknown or unseen circuits, we perform a variant leave-one-out
cross-validation to experiment. We perform a train-test split on the TJ-RTL
dataset by leaving one base circuit benchmark in the testing set and use the
remaining circuits to train the model. We repeat this process for each base
circuit and average the metrics we acquire from evaluating each testing set.
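The leave-one-circuit-out split can be sketched as follows; the sample names are hypothetical stand-ins for TJ-RTL instances:

```python
def leave_one_circuit_out(samples):
    """Variant leave-one-out CV: each fold holds out every sample derived
    from one base circuit and trains on samples from the remaining ones.
    samples is a list of (base_circuit, instance_name) pairs."""
    bases = sorted({base for base, _ in samples})
    for held_out in bases:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        yield held_out, train, test

samples = [("AES", "aes_t100"), ("AES", "aes_t200"),
           ("RS232", "rs232_t300"), ("PIC", "pic_t400")]
```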
The result is presented in Table I, indicating that HW2VEC can reproduce
comparable results to [49] in terms of $F_{1}$ score (0.926 versus 0.940) if
we use DFG as the graph representation. The difference in performance can be
due to the use of different datasets. When using AST as the graph
representation for detecting HT, HW2VEC performs worse in terms of $F_{1}$
score, indicating that DFG is a better graph representation because it
captures the data flow information instead of simply the syntactic information
of a hardware design code. All in all, these results demonstrate that our
HW2VEC can be leveraged for studying HT detection at design phases.
Method | Graph | Dataset | Precision | Recall | F1
---|---|---|---|---|---
HW2VEC | DFG | RTL | 0.87334 | 0.98572 | 0.92596
HW2VEC | AST | RTL | 0.90288 | 0.8 | 0.8453
[49] | DFG | RTL | 0.923 | 0.966 | 0.940
TABLE I: The performance of HT detection using HW2VEC.
### V-C HW2VEC Evaluation: Hardware IP Piracy Detection
Besides the capability of HT detection, we also evaluate the power of HW2VEC
in detecting IP piracy. We leverage the usage example mentioned in Section
IV-C which examines the cosine-similarity score $\hat{y}$ for each hardware
design pair and produces the final prediction with the decision boundary.
Using the IP-RTL dataset and the IP-GLN dataset (mentioned in Section V-A), we
generate graph-pair datasets by annotating the hardware designs that belong to
the same hardware category as Similar and the ones that belong to different
categories as Dissimilar. We perform a train-test split on the dataset so that
80% of the pairs will be used to train the model. We compute the accuracy of
detecting hardware IP piracy, which expresses the ratio of correctly predicted
samples, and calculate the $F_{1}$ score as the evaluation metrics. We refer to
[48] for the selection of hyperparameters (stored in a YAML file).
The result is presented in Table II, indicating that HW2VEC can reproduce
comparable results to [48] in terms of piracy detection accuracy. When using
DFG as the graph representation, HW2VEC underperforms [48] by 3% at RTL level
and outperforms [48] by 4.2% at GLN level. Table II also shows a similar
observation with Section V-B that using AST as the graph representation can
lead to worse performance than using DFG. Figure 4 visualizes the graph
embeddings that HW2VEC exports for every processed hardware design, allowing
users to inspect the results manually. For example, by inspecting Figure 4, we
may find a clear separation between mips_single_cycle and AES. Certainly,
HW2VEC can perform better with more fine-tuning processes. However, the
evaluation aims to demonstrate that HW2VEC can help practitioners study the
problem of IP piracy at RTL and GLN levels.
Method | Graph | Dataset | Accuracy | F1
---|---|---|---|---
HW2VEC | DFG | RTL | 0.9438 | 0.9277
HW2VEC | DFG | GLN | 0.9882 | 0.9652
HW2VEC | AST | RTL | 0.9358 | 0.9183
[48] | DFG | RTL | 0.9721 | –
[48] | DFG | GLN | 0.9461 | –
TABLE II: The results of detecting IP piracy with HW2VEC. Figure 4: The
embedding visualization with 3D t-SNE.
### V-D HW2VEC Evaluation: Timing
To evaluate the time required for training and testing, we test the models on
a server with NVIDIA TITAN-XP and NVIDIA GeForce GTX 1080 graphics cards.
Table III indicates that the times taken by training and inference are both
below 15 milliseconds, and training takes longer than inference as it
includes the time for performing back-propagation. As HW2VEC aims to
serve as a research tool, our users must be able to evaluate their
applications within a reasonable time. We believe that the time spent by the graph learning
pipelines of HW2VEC should be acceptable for conducting research. For
practically deploying the models, the actual timing can depend on the
computation power of hosting devices and the complexity of the models for the
applications. Suppose our users need an optimized performance for real-time
applications. In that case, they can implement the models with performance-
focused programming languages (C or C++) or ML frameworks (e.g., TensorFlow)
using the best model settings found using HW2VEC. As for specialized hardware
that can accelerate the processing of GNNs, it is still an open challenge as
indicated in [3].
Table IV indicates that HW2VEC spends on average 1.98 seconds converting raw
hardware code into an AST. Although [11] takes 1.37 seconds on average per
hardware design, it requires domain knowledge to devise a deterministic
feature-extraction procedure. For DFG extraction, HW2VEC takes 244.58 seconds
per graph on average, as it requires recursive traversals to construct the
whole data flow. In our datasets, AES and DES are relatively complex, so
HW2VEC takes 472.46 seconds on average to process them, while the remaining
data instances take 16.70 seconds on average. HW2VEC is thus slower in DFG
extraction, but manual feature engineering would likely take far longer. Even
for an experienced hardware designer, prototyping a complex hardware design
can take 6-9 months [41], so the time taken by HW2VEC is acceptable and does
not slow down the design process. Moreover, as the first open-source tool in
this field, HW2VEC will keep evolving and embrace contributions from the
open-source community.
| TJ-RTL-AST | IP-RTL-AST
---|---|---
training time | 10.5 (ms) | 13.5 (ms)
testing time | 6.8 (ms) | 12.4 (ms)
TABLE III: The time profiling for training/inference.
| TJ-DFG-RTL | IP-DFG-GLN | TJ-AST-RTL
---|---|---|---
# of nodes | 7573.58 | 7616.16 | 971.01
# of edges | 8938.11 | 9495.97 | 970.01
Exec. time | 244.58 (s) | 14.61 (s) | 1.98 (s)
TABLE IV: The graph extraction time profiling. For TJ-DFG-RTL, the hardware
designs AES and DES jointly take 472.46 seconds on average for DFG extraction,
while the rest of the data instances take 16.7 seconds on average.
### V-E HW2VEC Applicability
In Sections V-B and V-C, we discussed the performance of the GNN-based
approach in resolving two hardware security problems: hardware Trojan
detection and IP piracy detection. In Section V-B, our evaluation shows that
HW2VEC can successfully be leveraged to perform HT detection on hardware
designs, particularly on unseen ones, without the assistance of a golden
HT-free reference. The capability to model hardware behaviors can be
attributed to the use of a natural representation of the hardware design
(e.g., the DFG) and of a GNN-based method that captures both the structural
and semantic information of the DFG and correlates this information with the
final HT labels. Similarly, Section V-C indicates that HW2VEC can be utilized
to assess the similarity between circuits and can thus serve as a
countermeasure against IP piracy. The use of a graph representation of the
hardware design and of a Siamese GNN-based network architecture are the keys
in [48] to performing IP piracy detection at both the RTL and GLN levels. For
other hardware security applications, the flexible modules provided by HW2VEC
(Trainer and Evaluator) can easily be adapted to different problem settings.
For example, by adjusting the Trainer to train the GNN models for node
classification, HW2VEC can be adapted to localize the HT(s) or hardware bug(s)
present in a hardware design. Also, the cached models provided by HW2VEC can
be used to learn other new hardware-design-related tasks by transferring
knowledge from a related task that has already been learned, as the idea of
transfer learning suggests [42].
## VI Conclusion
As technological advancements continue, the contest between attackers and
defenders will grow in complexity and severity. To contribute to the hardware
security research community, we propose HW2VEC: a graph learning tool for
automating hardware security. HW2VEC provides an automated pipeline for
hardware security practitioners to extract graph representations from a
hardware design at either the RTL or GLN level. In addition, the toolbox of
HW2VEC allows users to realize their hardware security applications with
flexibility. Our evaluation shows that HW2VEC can be leveraged to counteract
two critical hardware security threats: hardware Trojan detection and IP
piracy detection. Lastly, as discussed in this paper, we anticipate that
HW2VEC can provide more straightforward access for both practitioners and
researchers to apply graph learning approaches to hardware security
applications.
## References
* [1] Trusthub. Available on-line: https://www.trust-hub.org, 2016.
* [2] Special 301 report. the United States Trade Representative, 2017.
* [3] S. Abadal, A. Jain, R. Guirado, J. López-Alonso, and E. Alarcón. Computing graph neural networks: A survey from algorithms to accelerators. arXiv preprint arXiv:2010.00130, 2020.
* [4] S. Adee. The hunt for the kill switch. In IEEE Spectrum, 2008.
* [5] M. AshrafiAmiri et al. Towards side channel secure cyber-physical systems. In Real-Time and Embedded Systems and Technologies, 2018.
* [6] D. Board. Defense Science Board (DSB) study on high performance microchip supply. URL: www.acq.osd.mil/dsb/reports/ADA435563.pdf [March 16, 2015], 2005.
* [7] H. Cai, V. W. Zheng, and K. C.-C. Chang. A comprehensive survey of graph embedding: Problems, techniques, and applications. IEEE Transactions on Knowledge and Data Engineering, 30(9):1616–1637, 2018.
* [8] J. Chen et al. Decoy: Deflection-driven hls-based computation partitioning for obfuscating intellectual property. In Design Automation Conference (DAC), 2020.
* [9] S. Faezi, R. Yasaei, and M. Al Faruque. Htnet: Transfer learning for golden chip-free hardware trojan detection. IEEE/ACM Design Automation and Test in Europe Conference (DATE’21), 2021.
* [10] S. Faezi et al. Brain-inspired golden chip free hardware trojan detection. IEEE Transaction on Information Forensics and Security (IEEE TIFS’21), 2021.
* [11] T. Han, Y. Wang, and P. Liu. Hardware trojans detection at register transfer level based on machine learning. In 2019 IEEE International Symposium on Circuits and Systems (ISCAS), pages 1–5. IEEE, 2019.
* [12] M. C. Hansen, H. Yalcin, and J. P. Hayes. Unveiling the iscas-85 benchmarks: A case study in reverse engineering. IEEE Design & Test of Computers, 16(3):72–80, 1999.
* [13] K. Hasegawa, Y. Shi, and N. Togawa. Hardware trojan detection utilizing machine learning approaches. In 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE), pages 1891–1896. IEEE, 2018.
* [14] C. Herder, M.-D. Yu, F. Koushanfar, and S. Devadas. Physical unclonable functions and applications: A tutorial. Proceedings of the IEEE, 102(8):1126–1141, 2014.
* [15] W. Hu, C.-H. Chang, A. Sengupta, S. Bhunia, R. Kastner, and H. Li. An overview of hardware security and trust: Threats, countermeasures and design tools. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2020.
* [16] K. Huang, J. M. Carulli, and Y. Makris. Parametric counterfeit ic detection via support vector machines. In 2012 IEEE International Symposium on Defect and Fault Tolerance in VLSI and Nanotechnology Systems (DFT), pages 7–12. IEEE, 2012.
* [17] Jasper. JasperGold: Security path verification app, 2014.
* [18] S. Jose. Innovation is at risk as semiconductor equipment and materials. Semiconductor Equipment and Material Industry (SEMI), 2008.
* [19] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
* [20] B. Knyazev et al. Understanding attention and generalization in graph neural networks. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
* [21] F. Koushanfar. Active hardware metering by finite state machine obfuscation. In Hardware Protection through Obfuscation. 2017.
* [22] A. Kulkarni, Y. Pino, and T. Mohsenin. Svm-based real-time hardware trojan detection for many-core platform. In 2016 17th International Symposium on Quality Electronic Design (ISQED), pages 362–367. IEEE, 2016.
* [23] J. Lee et al. Self-attention graph pooling. arXiv preprint arXiv:1904.08082, 2019.
* [24] Y. Ma, Z. He, W. Li, L. Zhang, and B. Yu. Understanding graphs in eda: From shallow to deep learning. In ISPD, pages 119–126, 2020.
* [25] Y. Ma, H. Ren, B. Khailany, H. Sikka, L. Luo, K. Natarajan, and B. Yu. High performance graph convolutional networks with applications in testability analysis. In Proceedings of the 56th Annual Design Automation Conference 2019, pages 1–6, 2019.
* [26] S. Patnaik et al. Raise your game for split manufacturing: Restoring the true functionality through beol. In Design Automation Conference (DAC), 2018.
* [27] P. Poudel et al. Flashmark: watermarking of nor flash memories for counterfeit detection. In Design Automation Conference (DAC), 2020.
* [28] M. T. Rahman, K. Xiao, D. Forte, X. Zhang, J. Shi, and M. Tehranipoor. Ti-trng: Technology independent true random number generator. In 2014 51st ACM/EDAC/IEEE Design Automation Conference (DAC), pages 1–6. IEEE, 2014.
* [29] S. Rai et al. Hardware watermarking using polymorphic inverter designs based on reconfigurable nanotechnologies. In 2019 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), 2019.
* [30] J. Rajendran et al. Security analysis of integrated circuit camouflaging. In ACM conference on Computer & communications security, 2013.
* [31] J. Rajendran et al. Detecting malicious modifications of data in third-party intellectual property cores. In ACM/IEEE Design Automation Conference (DAC), 2015.
* [32] J. Rajendran et al. Formal security verification of third party intellectual property cores for information leakage. In International Conference on VLSI Design and Embedded Systems (VLSID), 2016.
* [33] M. Rostami, F. Koushanfar, and R. Karri. A primer on hardware security: Models, methods, and metrics. Proceedings of the IEEE, 102(8):1283–1295, 2014.
* [34] M. Rostami, F. Koushanfar, J. Rajendran, and R. Karri. Hardware security: Threat models and metrics. In 2013 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), pages 819–823. IEEE, 2013.
* [35] K. Shamsi, M. Li, K. Plaks, S. Fazzari, D. Z. Pan, and Y. Jin. Ip protection and supply chain security through logic obfuscation: A systematic overview. ACM Transactions on Design Automation of Electronic Systems (TODAES), 24(6):1–36, 2019.
* [36] D. Smilkov, N. Thorat, C. Nicholson, E. Reif, F. B. Viégas, and M. Wattenberg. Embedding projector: Interactive visualization and interpretation of embeddings. arXiv preprint arXiv:1611.05469, 2016.
* [37] P. Subramanyan and D. Arora. Formal verification of taint-propagation security properties in a commercial soc design. In Design, Automation & Test in Europe Conference (DATE), 2014.
* [38] S. Takamaeda-Yamazaki. Pyverilog: A python-based hardware design processing toolkit for verilog hdl. In Applied Reconfigurable Computing, volume 9040 of Lecture Notes in Computer Science, pages 451–460. Springer International Publishing, Apr 2015.
* [39] S. Takamaeda-Yamazaki. Pyverilog: A python-based hardware design processing toolkit for verilog hdl. In International Symposium on Applied Reconfigurable Computing, 2015.
* [40] B. Tan and R. Karri. Challenges and new directions for ai and hardware security. In 2020 IEEE 63rd International Midwest Symposium on Circuits and Systems (MWSCAS), pages 277–280. IEEE, 2020.
* [41] J. Teel. How long does it take to develop a new product and get it to market? Oct 2017.
* [42] L. Torrey and J. Shavlik. Transfer learning. In Handbook of research on machine learning applications and trends: algorithms, methods, and techniques, pages 242–264. IGI Global, 2010.
* [43] L. Van der Maaten and G. Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(11), 2008.
* [44] A. Waksman et al. Fanci: identification of stealthy malicious logic using boolean functional analysis. In ACM SIGSAC Conference on Computer and Communications Security, 2013.
* [45] Z. Wu et al. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 2020.
* [46] K. Xiao, D. Forte, Y. Jin, R. Karri, S. Bhunia, and M. Tehranipoor. Hardware trojans: Lessons learned after one decade of research. ACM Transactions on Design Automation of Electronic Systems (TODAES), 22(1):1–23, 2016.
* [47] Y. Xie et al. Delay locking: Security enhancement of logic locking against ic counterfeiting and overproduction. In Design Automation Conference (DAC), 2017.
* [48] R. Yasaei, S.-Y. Yu, and M. A. A. Faruque. Gnn4ip: Graph neural network for hardware intellectual property piracy detection. In Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2021.
* [49] R. Yasaei, S.-Y. Yu, and M. A. A. Faruque. Gnn4tj: Graph neural networks for hardware trojan detection at register transfer level. In Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2021.
* [50] A. Yeh. Trends in the global ic design service market. DIGITIMES research, 2012.
* [51] R. Ying, J. You, C. Morris, X. Ren, W. L. Hamilton, and J. Leskovec. Hierarchical graph representation learning with differentiable pooling. arXiv preprint arXiv:1806.08804, 2018.
* [52] Y. Zhang, H. Ren, and B. Khailany. Grannite: Graph neural network inference for transferable power estimation. In 2020 57th ACM/IEEE Design Automation Conference (DAC), pages 1–6. IEEE, 2020.
* [53] B. Zhang et al. Analysis of security of split manufacturing using machine learning. In Design Automation Conference (DAC), 2018.
* [54] J. Zhang et al. Veritrust: Verification for hardware trust. IEEE Tran. on Computer-Aided Design of Integrated Circuits and Systems, 2015.
# Uplink Data Detection Analysis
of 1-Bit Quantized Massive MIMO
Italo Atzeni and Antti Tölli
Centre for Wireless Communications, University of Oulu, Finland
Emails: {italo.atzeni, antti.tolli}@oulu.fi The work of I. Atzeni was
supported by the Marie Skłodowska-Curie Actions (MSCA-IF 897938 DELIGHT). The
work of A. Tölli was supported by the Academy of Finland under grant no.
318927 (6Genesis Flagship).
###### Abstract
This paper presents an analytical framework for the data detection in massive
multiple-input multiple-output uplink systems with 1-bit analog-to-digital
converters (ADCs). Considering the single-user case, we provide closed-form
expressions of the expected value and the variance of the estimated symbols
when maximum ratio combining is adopted at the base station (BS) along with
their asymptotic behavior at high signal-to-noise ratio (SNR). These results
are exploited to enhance the performance of maximum likelihood detection by
taking into account the dispersion of the estimated symbols about their
expected values. The symbol error rate with 1-bit ADCs is evaluated with
respect to the number of BS antennas, the SNR, and the pilot length used for
the channel estimation. The proposed analysis highlights a fundamental SNR
trade-off, according to which operating at the right SNR considerably improves
the data detection accuracy.
## I Introduction
Beyond-5G wireless systems are expected to exploit the large amount of
bandwidth available in the mmWave band and raise the operating frequencies up
to 1 THz [1]. In this context, fully digital architectures make it possible to
truly capitalize on the massive multiple-input multiple-output (MIMO) arrays
to implement highly flexible beamforming and serve more user equipments (UEs)
simultaneously. In fully digital architectures, each base station (BS) antenna
is equipped with a dedicated radio-frequency chain that includes complex,
power-hungry analog-to-digital/digital-to-analog converters (ADCs/DACs) [2].
In this setting, the power consumed by each ADC/DAC scales linearly with the
sampling rate and exponentially with the number of quantization bits [3, 4, 5,
6]. Another limiting aspect is the volume of raw data exchanged between the
remote radio head and the base-band unit, which scales linearly with both the
sampling rate and the number of quantization bits [7].
For these reasons, adopting low-resolution ADCs/DACs (e.g., with 1 to 4
quantization bits) can enable the implementation of fully digital massive MIMO
arrays comprising hundreds (or even thousands) of antennas, which are
necessary to operate in the mmWave and THz bands [7]. In this regard, 1-bit
ADCs/DACs are particularly appealing due to their minimal power consumption
and complexity [3, 8]. Such a coarse quantization is suitable especially at
very high frequencies, where high-order modulations may not be needed due to
the huge bandwidths. There is a vast literature on massive MIMO with 1-bit
ADCs/DACs. For instance, the capacity of the 1-bit quantized MIMO channel is
characterized in [3]. The work in [4] proposes an efficient iterative method
for near maximum likelihood detection (MLD) with 1-bit ADCs. The channel
estimation and the uplink achievable rate with 1-bit ADCs are studied in [5].
The spectral efficiency of single-carrier and orthogonal frequency-division
multiplexing uplink systems with 1-bit ADCs is analyzed in [9]. Some of the
results derived in [5, 9] for 1-bit ADCs are extended to the multi-bit case in
[7]. The performance of downlink linear precoding with 1-bit DACs is studied
in [6]. The benefits of oversampling in massive MIMO systems with 1-bit ADCs
are investigated in [10].
In this paper, we broaden prior analytical studies on the uplink data
detection in massive MIMO systems with 1-bit ADCs. The statistical properties
of the estimated symbols have not been characterized by previous works. In
this respect, it was observed in [7] that the estimated symbols resulting from
transmit symbols with the same phase overlap at high signal-to-noise ratio
(SNR), although this aspect has not been formally described in the literature.
We fill this gap by deriving closed-form expressions of the expected value and
the variance of the estimated symbols for the single-UE case when maximum
ratio combining (MRC) is adopted at the BS. Furthermore, we analyze their
asymptotic behavior at high SNR. Building on these results, we propose an
enhanced MLD method that considerably reduces the symbol error rate (SER) by
properly weighting each detection region with the corresponding variance.
Numerical results are presented to evaluate the SER with respect to the number
of BS antennas, the SNR, and the pilot length used during the channel
estimation phase. Our analysis highlights a fundamental SNR trade-off,
according to which operating at the right SNR significantly improves the data
detection accuracy.
Notation. $\mathbf{A}=(A_{m,n})$ specifies that $A_{m,n}$ is the $(m,n)$th
entry of matrix $\mathbf{A}$; likewise, $\mathbf{a}=(a_{n})$ specifies that
$a_{n}$ is the $n$th entry of vector $\mathbf{a}$. The notation $\\{\cdot\\}$
is used to represent sets, whereas $\mathrm{Re}[\cdot]$ and
$\mathrm{Im}[\cdot]$ denote the real part and imaginary part operators,
respectively.
## II System Model
Let us consider a BS with $M$ antennas serving $K$ single-antenna UEs in the
uplink. Each BS antenna is connected to a pair of 1-bit ADCs for the in-phase
and the quadrature components of the receive signal. We thus introduce the
1-bit quantization function $Q(\cdot):\mbox{$\mathbb{C}$}^{A\times
B}\to\mathcal{Q}$, with
$\displaystyle Q(\mathbf{C})\triangleq\sqrt{\frac{\rho
K+1}{2}}\Big{(}\mathrm{sgn}\big{(}\mathrm{Re}[\mathbf{C}]\big{)}+j\,\mathrm{sgn}\big{(}\mathrm{Im}[\mathbf{C}]\big{)}\Big{)}$
(1)
and where $\mathcal{Q}\triangleq\sqrt{\frac{\rho K+1}{2}}\\{\pm 1\pm
j\\}^{A\times B}$ [7]. We use $\mathbf{H}\in\mbox{$\mathbb{C}$}^{M\times K}$
to denote the uplink channel matrix whose entries are assumed to be
distributed independently as $\mathcal{C}\mathcal{N}(0,1)$ (as, e.g., in [7,
5]); more involved channel models will be considered in our future work.
Furthermore, each UE transmits with power $\rho$ and the additive white
Gaussian noise (AWGN) at the BS has unit variance: hence, $\rho$ can be
interpreted as the transmit SNR.
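The 1-bit quantization function $Q(\cdot)$ in (1) can be sketched directly in NumPy (a minimal illustration; the function and variable names are our own):

```python
import numpy as np

def one_bit_quantize(C, rho, K):
    """1-bit ADC model of eq. (1): keep only the signs of the real and
    imaginary parts, scaled by sqrt((rho*K + 1)/2)."""
    scale = np.sqrt((rho * K + 1) / 2)
    return scale * (np.sign(C.real) + 1j * np.sign(C.imag))

# every quantized entry has squared magnitude rho*K + 1
C = np.array([[0.3 - 1.2j, -0.7 + 0.1j]])
R = one_bit_quantize(C, rho=2.0, K=1)
```

Each output entry indeed lies in the set $\mathcal{Q}$, i.e., it is one of the four points $\sqrt{(\rho K+1)/2}\,(\pm 1 \pm j)$.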
Let $x_{k}\in\mbox{$\mathbb{C}$}$ be the transmit symbol of UE $k$, with
$\mathbb{E}\big{[}|x_{k}|^{2}\big{]}=1$ and
$\mathbf{x}\triangleq(x_{k})\in\mbox{$\mathbb{C}$}^{K\times 1}$. The receive
signal at the BS at the input of the ADCs is given by
$\displaystyle\mathbf{y}\triangleq\sqrt{\rho}\mathbf{H}\mathbf{x}+\mathbf{z}\in\mbox{$\mathbb{C}$}^{M\times
1}$ (2)
where $\mathbf{z}\in\mbox{$\mathbb{C}$}^{M\times 1}$ is the AWGN term with
entries distributed as $\mathcal{C}\mathcal{N}(0,1)$. Then, at the output of
the ADCs, we have
$\displaystyle\mathbf{r}\triangleq
Q(\mathbf{y})\in\mbox{$\mathbb{C}$}^{M\times 1}.$ (3)
At this stage, the BS obtains a soft estimate of $\mathbf{x}$ as
$\displaystyle\hat{\mathbf{x}}\triangleq\mathbf{V}^{\mathrm{H}}\mathbf{r}\in\mbox{$\mathbb{C}$}^{K\times
1}$ (4)
where $\mathbf{V}\in\mbox{$\mathbb{C}$}^{M\times K}$ is the combining matrix.
Finally, the data detection process maps each estimated symbol to one of the
transmit symbols.
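The receive chain (2)-(4) can be simulated end to end; below is a minimal NumPy sketch with illustrative dimensions, using the quantizer of (1) and, purely for illustration, the true channel as combiner:

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, rho = 64, 2, 1.0

# CN(0,1) channel entries and unit-variance AWGN
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
x = np.exp(1j * 2 * np.pi * rng.random(K))      # unit-power transmit symbols
z = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

y = np.sqrt(rho) * H @ x + z                    # receive signal at the ADC input, eq. (2)
scale = np.sqrt((rho * K + 1) / 2)
r = scale * (np.sign(y.real) + 1j * np.sign(y.imag))  # 1-bit ADC output, eq. (3)

V = H                                           # combiner (perfect-CSI MRC, illustrative)
x_hat = V.conj().T @ r                          # soft symbol estimates, eq. (4)
```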
## III Data Detection Analysis with MRC
In this section, we focus on characterizing the performance of the data
detection with respect to the different parameters when 1-bit ADCs are adopted
at each BS antenna. In doing so, we consider the MRC receiver with combining
matrix given by $\mathbf{V}=\hat{\mathbf{H}}$, where
$\hat{\mathbf{H}}\in\mbox{$\mathbb{C}$}^{M\times K}$ is the estimate of
$\mathbf{H}$ acquired during the uplink pilot-aided channel estimation phase.
Let $\mathbf{P}\triangleq(P_{u,k})\in\mbox{$\mathbb{C}$}^{\tau\times K}$
denote the pilot matrix whose columns correspond to the pilots used by the
UEs, with $\\{|P_{u,k}|^{2}=1\\}_{u,k}$, and where $\tau$ is the pilot length:
assuming $\tau\geq K$ and orthogonal pilots among the UEs, we have
$\mathbf{P}^{\mathrm{H}}\mathbf{P}=\tau\mathbf{I}_{K}$. The UEs simultaneously
transmit their uplink pilots and the receive signal at the BS at the input of
the ADCs is given by
$\displaystyle\mathbf{Y}_{\textrm{p}}\triangleq\sqrt{\rho}\mathbf{H}\mathbf{P}^{\mathrm{H}}+\mathbf{Z}_{\textrm{p}}\in\mbox{$\mathbb{C}$}^{M\times\tau}$
(5)
where $\mathbf{Z}_{\textrm{p}}\in\mbox{$\mathbb{C}$}^{M\times\tau}$ is the
AWGN term with entries distributed as $\mathcal{C}\mathcal{N}(0,1)$. Then, at
the output of the ADCs, we have
$\displaystyle\mathbf{R}_{\textrm{p}}$ $\displaystyle\triangleq
Q(\mathbf{Y}_{\textrm{p}})\in\mbox{$\mathbb{C}$}^{M\times\tau}.$ (6)
Let us define
$\displaystyle\Omega(w)\triangleq\frac{2}{\pi}\arcsin(w)$ (7)
and assume that $\hat{\mathbf{H}}$ is obtained via the scaled least-squares
(LS) estimator
$\displaystyle\hat{\mathbf{H}}\triangleq\sqrt{\Upsilon}\mathbf{R}_{\textrm{p}}\mathbf{P}\in\mbox{$\mathbb{C}$}^{M\times
K}$ (8)
where we have defined
$\displaystyle\Upsilon$ $\displaystyle\triangleq\frac{2}{\pi}\frac{\rho}{(\rho
K+1)^{2}}\frac{\tau^{2}}{(\tau+\Delta)^{2}}$ (9)
with
$\displaystyle\Delta$
$\displaystyle\triangleq\frac{1}{K}\sum_{k=1}^{K}\sum_{u\neq
v}\bigg{(}\mathrm{Re}[P_{u,k}^{*}P_{v,k}]\Omega\bigg{(}\frac{\rho\sum_{i=1}^{K}\mathrm{Re}[P_{u,i}P_{v,i}^{*}]}{\rho
K+1}\bigg{)}$ $\displaystyle\phantom{=}\
-\mathrm{Im}[P_{u,k}^{*}P_{v,k}]\Omega\bigg{(}\frac{\rho\sum_{i=1}^{K}\mathrm{Im}[P_{u,i}P_{v,i}^{*}]}{\rho
K+1}\bigg{)}\bigg{)}.$ (10)
Note that the scaling factor in (9) is chosen to minimize the mean squared
error of the channel estimation within the class of scaled LS estimators: this
is discussed in [8], which presents a detailed analysis of the channel estimation
with 1-bit ADCs. Therefore, from (4), the estimated symbols are obtained as
$\hat{\mathbf{x}}=\sqrt{\Upsilon}\mathbf{P}^{\mathrm{H}}\mathbf{R}_{\textrm{p}}^{\mathrm{H}}\mathbf{r}$.
We point out that, when the MRC receiver results from the quantized channel
estimation, it cannot be perfectly aligned with the channel matrix, which
results in residual multi-UE interference even as $M\to\infty$.
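For $K=1$, the pilot phase (5)-(12) can be sketched as follows (a NumPy illustration with an assumed DFT-column pilot; all names and dimensions are our own):

```python
import numpy as np

def Omega(w):
    """Omega(w) = (2/pi) arcsin(w), eq. (7)."""
    return (2 / np.pi) * np.arcsin(w)

def delta_K1(p, rho):
    """Delta of eq. (12) for K = 1 and pilot vector p (|p_u| = 1)."""
    tau = len(p)
    d = 0.0
    for u in range(tau):
        for v in range(tau):
            if u == v:
                continue
            c_uv = np.conj(p[u]) * p[v]          # p_u^* p_v
            c_vu = p[u] * np.conj(p[v])          # p_u p_v^*
            d += (c_uv.real * Omega(rho * c_vu.real / (rho + 1))
                  - c_uv.imag * Omega(rho * c_vu.imag / (rho + 1)))
    return d

rho, tau, M = 1.0, 8, 16
p = np.exp(-1j * 2 * np.pi * np.arange(tau) / tau)          # assumed DFT-column pilot
Delta = delta_K1(p, rho)
Upsilon = (2 / np.pi) * rho / (rho + 1) ** 2 * tau ** 2 / (tau + Delta) ** 2  # eq. (9), K = 1

rng = np.random.default_rng(2)
h = (rng.standard_normal((M, 1)) + 1j * rng.standard_normal((M, 1))) / np.sqrt(2)
Zp = (rng.standard_normal((M, tau)) + 1j * rng.standard_normal((M, tau))) / np.sqrt(2)
Yp = np.sqrt(rho) * h @ p.conj()[None, :] + Zp              # eq. (5)
scale = np.sqrt((rho + 1) / 2)                              # rho*K + 1 with K = 1
Rp = scale * (np.sign(Yp.real) + 1j * np.sign(Yp.imag))     # eq. (6)
h_hat = np.sqrt(Upsilon) * Rp @ p[:, None]                  # scaled LS estimate, eq. (8)
```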
In this paper, we focus on the single-UE case (i.e., $K=1$) and characterize
the statistical properties of the estimated symbols.111Note that, when $K=1$,
the scaled LS estimator in (8) with the scaling factor chosen as in (9) is
equivalent to the state-of-the-art linear estimator proposed in [5]. We refer
to [8] for more details. Hence, in this preliminary analysis, we do not
consider the aforementioned multi-UE interference, which can be included at
the expense of more involved and less insightful expressions: this will be
explored in our future work.
### III-A Expected Value and Variance of the Estimated Symbols
Let $x\in\mathcal{S}$ denote the transmit symbol of the UE, where
$\mathcal{S}\triangleq\\{s_{\ell}\in\mbox{$\mathbb{C}$}\\}_{\ell=1}^{L}$
represents the set of $L$ transmit symbols. Moreover, let $\hat{s}_{\ell}$ be
the estimated symbol resulting from transmit symbol $s_{\ell}\in\mathcal{S}$.
Lastly, we use $\mathbf{p}\triangleq(p_{u})\in\mbox{$\mathbb{C}$}^{\tau\times
1}$ to denote the pilot used by the UE. To facilitate the data detection
process at the BS, for each $s_{\ell}\in\mathcal{S}$, we are interested in
deriving the closed-form expression of the expected value of $\hat{s}_{\ell}$,
denoted by $\mathsf{E}_{\ell}\triangleq\mathbb{E}[\hat{s}_{\ell}]$.
###### Theorem 1.
Assuming $K=1$ and MRC, for each transmit symbol $s_{\ell}\in\mathcal{S}$, the
expected value of the resulting estimated symbol $\hat{s}_{\ell}$ is given by
$\displaystyle\mathsf{E}_{\ell}$
$\displaystyle=\sqrt{\frac{2}{\pi}\rho}M\frac{\tau}{\tau+\Delta}\sum_{u=1}^{\tau}p_{u}^{*}\bigg{(}\Omega\bigg{(}\frac{\rho\mathrm{Re}[p_{u}s_{\ell}]}{\sqrt{(\rho+1)(\rho|s_{\ell}|^{2}+1)}}\bigg{)}$
$\displaystyle\phantom{=}\
+j\,\Omega\bigg{(}\frac{\rho\mathrm{Im}[p_{u}s_{\ell}]}{\sqrt{(\rho+1)(\rho|s_{\ell}|^{2}+1)}}\bigg{)}\bigg{)}$
(11)
with $\Delta$ defined in (10), which can be simplified for $K=1$ as
$\displaystyle\Delta$ $\displaystyle=\sum_{u\neq
v}\bigg{(}\mathrm{Re}[p_{u}^{*}p_{v}]\Omega\bigg{(}\frac{\rho\mathrm{Re}[p_{u}p_{v}^{*}]}{\rho+1}\bigg{)}$
$\displaystyle\phantom{=}\
-\mathrm{Im}[p_{u}^{*}p_{v}]\Omega\bigg{(}\frac{\rho\mathrm{Im}[p_{u}p_{v}^{*}]}{\rho+1}\bigg{)}\bigg{)}.$
(12)
###### Proof:
See [8, App. V]. ∎
The result of Theorem 1 can be used towards the efficient implementation of
MLD. Specifically, each estimated symbol can be mapped to one of the expected
values $\\{\mathsf{E}_{\ell}\\}_{\ell=1}^{L}$, which are derived as in (11)
without any prior Monte Carlo computation, according to the minimum distance
criterion. To further reduce the data detection complexity, one can construct
the Voronoi tessellation based on $\\{\mathsf{E}_{\ell}\\}_{\ell=1}^{L}$
obtaining well-defined detection regions: this allows to avoid the computation
of the distance between each estimated symbol and each $\mathsf{E}_{\ell}$. It
is worth mentioning that, in the case of multi-UE transmission, the expression
in (11) will be conditioned on the symbols transmitted by all the UEs.
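The minimum-distance mapping described above can be sketched as follows (a NumPy illustration; the pilot, the constellation, and $\Delta=0$ are purely illustrative assumptions):

```python
import numpy as np

def Omega(w):
    """Omega(w) = (2/pi) arcsin(w), eq. (7)."""
    return (2 / np.pi) * np.arcsin(w)

def expected_symbol(s, p, rho, M, Delta):
    """E_ell of eq. (11) for K = 1, transmit symbol s, pilot vector p."""
    tau = len(p)
    g = rho / np.sqrt((rho + 1) * (rho * abs(s) ** 2 + 1))
    terms = np.conj(p) * (Omega(g * (p * s).real) + 1j * Omega(g * (p * s).imag))
    return np.sqrt(2 * rho / np.pi) * M * tau / (tau + Delta) * terms.sum()

def mld_detect(x_hat, E):
    """Map an estimated symbol to the nearest expected value (min. distance)."""
    return int(np.argmin(np.abs(np.asarray(E) - x_hat)))

S = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)  # QPSK (illustrative)
p = np.ones(4)                                                 # illustrative pilot
E = [expected_symbol(s, p, rho=1.0, M=128, Delta=0.0) for s in S]
```

An estimated symbol close to $\mathsf{E}_{\ell}$ is then mapped back to $s_{\ell}$, with no Monte Carlo computation needed to obtain the $\mathsf{E}_{\ell}$.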
Now, for each $s_{\ell}\in\mathcal{S}$, we are interested in deriving the
closed-form expression of the variance of $\hat{s}_{\ell}$, denoted by
$\mathsf{V}_{\ell}\triangleq\mathbb{V}[\hat{s}_{\ell}]$.
###### Theorem 2.
Assuming $K=1$ and MRC, for each transmit symbol $s_{\ell}\in\mathcal{S}$, the
variance of the resulting estimated symbol $\hat{s}_{\ell}$ is given by
$\displaystyle\mathsf{V}_{\ell}$ $\displaystyle=\frac{2}{\pi}\rho
M\frac{\tau^{2}}{\tau+\Delta}-\frac{1}{M}|\mathsf{E}_{\ell}|^{2}$ (13)
with $\mathsf{E}_{\ell}$ and $\Delta$ given in (11) and (12), respectively.
###### Proof:
See [8, App. VI]. ∎
The result of Theorem 2 quantifies the absolute dispersion of the
estimated symbols about their expected values, which arises from the 1-bit
quantization applied to both the channel estimation (through the MRC receiver)
and the uplink data transmission (see (3)). This dispersion is not isotropic
and assumes different shapes for different transmit symbols, as shown in Fig.
1 and in [11]. Furthermore, $\mathsf{V}_{\ell}$ diminishes as $|s_{\ell}|$
increases due to the negative term on the right-hand side of (13), since the
transmit symbols that lie further from the origin are less subject to noise.
Let us now consider the normalized variance
$\mathsf{V}_{\ell}/|\mathsf{E}_{\ell}|^{2}$, which quantifies the relative
dispersion of $\hat{s}_{\ell}$ about its expected value. It is important to
notice that, although $\mathsf{V}_{\ell}$ grows linearly with the number of BS
antennas $M$, the normalized variance is inversely proportional to the latter.
The data detection process can be enhanced by taking into account the
dispersion of the estimated symbols about their expected values. Specifically,
in the context of MLD via Voronoi tessellation based on
$\\{\mathsf{E}_{\ell}\\}_{\ell=1}^{L}$ described above, one can use the
variance of the estimated symbols derived in (13) to further refine the
detection regions. In this setting, we adopt the approach of multiplicatively
weighted Voronoi tessellation, where each detection region
$\mathcal{R}_{\ell}$ around $\mathsf{E}_{\ell}$ is constructed as
$\displaystyle\mathcal{R}_{\ell}\triangleq\big{\\{}\xi\in\mbox{$\mathbb{C}$}:\omega_{\ell}|\xi-\mathsf{E}_{\ell}|\leq\omega_{i}|\xi-\mathsf{E}_{i}|,\forall
i\neq\ell\big{\\}}$ (14)
where $\omega_{\ell}>0$ is the weight corresponding to $\mathsf{E}_{\ell}$. In
particular, one must choose each $\omega_{\ell}$ to be a decreasing function
of $\mathsf{V}_{\ell}$ such that a higher variance of $\hat{s}_{\ell}$
corresponds to a smaller distance function around $\mathsf{E}_{\ell}$ and,
consequently, gives rise to a larger $\mathcal{R}_{\ell}$ (see, e.g., the
choice in (18)).222Note that the case of equal weights corresponds to
conventional MLD. Remarkably, it is shown in Section IV-A that this approach
can greatly boost the performance of the data detection in terms of SER.
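The weighted rule (14) can be sketched as follows; since the specific weight choice of (18) is not reproduced here, we assume $\omega_{\ell}=1/\sqrt{\mathsf{V}_{\ell}}$ as one plausible decreasing function of the variance:

```python
import numpy as np

def variance_symbol(E_l, rho, M, tau, Delta):
    """V_ell of eq. (13) for K = 1, given the expected value E_l."""
    return (2 / np.pi) * rho * M * tau ** 2 / (tau + Delta) - abs(E_l) ** 2 / M

def weighted_detect(x_hat, E, V):
    """Weighted minimum-distance rule of (14), assuming w_l = 1/sqrt(V_l)."""
    E, V = np.asarray(E), np.asarray(V)
    return int(np.argmin(np.abs(x_hat - E) / np.sqrt(V)))

# a point equidistant from two expected values falls in the detection
# region of the higher-variance symbol, as intended
label = weighted_detect(1.0, E=[0.0, 2.0], V=[4.0, 1.0])  # -> 0
```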
We now analyze the asymptotic behavior of the expected value and the variance
of the estimated symbols at high SNR.
###### Corollary 1.
From Theorems 1 and 2, in the limit of $\rho\to\infty$, we have
$\displaystyle\lim_{\rho\to\infty}\frac{\mathsf{E}_{\ell}}{\sqrt{\rho}}$
$\displaystyle=\sqrt{\frac{2}{\pi}}M\frac{\tau}{\tau+\bar{\Delta}}\sum_{u=1}^{\tau}p_{u}^{*}\bigg{(}\Omega\bigg{(}\frac{\mathrm{Re}[p_{u}s_{\ell}]}{|s_{\ell}|}\bigg{)}$
$\displaystyle\phantom{=}\
+j\,\Omega\bigg{(}\frac{\mathrm{Im}[p_{u}s_{\ell}]}{|s_{\ell}|}\bigg{)}\bigg{)}$
(15)
and
$\displaystyle\lim_{\rho\to\infty}\frac{\mathsf{V}_{\ell}}{\rho}$
$\displaystyle=\frac{2}{\pi}M\frac{\tau^{2}}{\tau+\bar{\Delta}}-\frac{1}{M}\lim_{\rho\to\infty}\frac{|\mathsf{E}_{\ell}|^{2}}{\rho}$
(16)
where we have defined
$\displaystyle\bar{\Delta}$ $\displaystyle=\sum_{u\neq
v}\Big{(}\mathrm{Re}[p_{u}^{*}p_{v}]\Omega\big{(}\mathrm{Re}[p_{u}p_{v}^{*}]\big{)}-\mathrm{Im}[p_{u}^{*}p_{v}]\Omega\big{(}\mathrm{Im}[p_{u}p_{v}^{*}]\big{)}\Big{)}.$
(17)
Figure 1: Estimated symbols with the MRC receiver, with 16-QAM transmit
symbols, $M=128$, and $\tau=32$; panels: (a) $\rho=0$ dB, (b) $\rho=10$ dB,
(c) $\rho=20$ dB. The expected value of the estimated symbols is computed in
closed form as in (11).
The result of Corollary 1 formalizes a behavior of the estimated symbols that
was observed in [7]. From (15), at high SNR, all the estimated symbols lie on
a circle around the origin and the information carried by the amplitude of the
transmit symbols is entirely suppressed by the 1-bit quantization. Therefore,
the estimated symbols resulting from transmit symbols with the same phase
become indistinguishable in terms of their expected value, which depends only
on $\mathrm{Re}[s_{\ell}]/|s_{\ell}|$ and $\mathrm{Im}[s_{\ell}]/|s_{\ell}|$.
For example, if $\mathcal{S}$ corresponds to the 16-QAM constellation (as
considered in Section IV), the inner estimated symbols become
indistinguishable from the outer estimated symbols with the same phase.
Moreover, according to (16), these estimated symbols become identical also in
terms of variance. In view of these aspects, the system performance cannot be
enhanced simply by minimizing the normalized variance of the estimated
symbols. On the one hand, such a variance roughly decreases with the transmit
SNR; on the other hand, the overlap between different symbols after the
estimation increases with the transmit SNR. This determines a clear SNR trade-
off, according to which operating at the right SNR enhances the data detection
accuracy.
## IV Numerical Results
In this section, we evaluate the performance of the data detection with 1-bit
ADCs with respect to the different parameters using the analytical results
presented in Section III-A. We assume that, during the uplink pilot-aided
channel estimation phase, the second column of the $\tau$-dimensional discrete
Fourier transform matrix is used as pilot, i.e.,
$\mathbf{d}_{2}\triangleq[1,e^{-j\,\frac{2\pi}{\tau}},e^{-j\,2\frac{2\pi}{\tau}},\ldots,e^{-j\,(\tau-1)\frac{2\pi}{\tau}}]^{\mathrm{T}}\in\mbox{$\mathbb{C}$}^{\tau\times
1}$, which represents the best possible pilot choice (see [8, App. I] for more
details). In addition, we assume the same transmit SNR for the two phases of
channel estimation and uplink data transmission. Lastly, although our
analytical framework is valid for any choice of the set of transmit symbols
$\mathcal{S}$, we analyze the scenario where $\mathcal{S}$ corresponds to the
16-QAM constellation, i.e., $\mathcal{S}=\frac{1}{\sqrt{10}}\big\{\pm 1\pm
j,\pm 1\pm j\,3,\pm 3\pm j,\pm 3\pm j\,3\big\}$. Note that the symbols are
normalized such that $\frac{1}{L}\sum_{\ell=1}^{L}|s_{\ell}|^{2}=1$.
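For reference, the normalized 16-QAM constellation and the DFT pilot can be built in a few lines of NumPy (our own sketch; the names `S`, `d2`, and `tau` are ours):

```python
import numpy as np

# 16-QAM constellation normalized to unit average power, as in Section IV.
S = np.array([a + 1j * b
              for a in (-3, -1, 1, 3)
              for b in (-3, -1, 1, 3)]) / np.sqrt(10)

# Second column of the tau-dimensional DFT matrix, used as pilot.
tau = 32
d2 = np.exp(-2j * np.pi * np.arange(tau) / tau)
```

The normalization makes the average symbol energy exactly one, and `d2` is orthogonal to the all-ones (first) DFT column.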
Figure 2: SER against the transmit SNR, with 16-QAM transmit symbols,
$M\in\{64,128,256\}$, and $\tau=32$.
Fig. 1 illustrates the estimated symbols for different values of the transmit
SNR $\rho$, with $M=128$ and $\tau=32$; each 16-QAM symbol is transmitted over
$10^{2}$ independent channel realizations. The expected value of the estimated
symbols is computed as in Theorem 1 and clearly matches the corresponding
sample average. Here, we observe two fundamental and conflicting trends that
constitute the SNR trade-off described in Section III-A. First, the normalized
variance of the estimated symbols decreases with the transmit SNR. Second, the
estimated symbols resulting from the transmit symbols with the same phase,
i.e., $\pm\frac{1}{\sqrt{10}}(1\pm j)$ and $\pm\frac{1}{\sqrt{10}}(3\pm
j\,3)$, get closer as the transmit SNR increases from $\rho=0$ dB to $\rho=10$
dB and almost fully overlap at $\rho=20$ dB. This behavior was observed in [7]
and is formalized in Corollary 1, according to which such estimated symbols
become identical at high SNR and the difference in amplitude between symbols
cannot be recovered. For the 16-QAM, this produces a SER of $0.25$ since there
are four pairs of indistinguishable estimated symbols (see also Fig. 2).
Figure 3: SER against the pilot length, with 16-QAM transmit symbols,
$M\in\{64,128,256\}$, and $\rho=10$ dB.
We now examine the combined effect of the channel estimation and the data
detection with 1-bit ADCs on the system performance in terms of SER, which is
computed numerically via Monte Carlo simulations with $10^{6}$ independent
channel realizations. The symbols are decoded via MLD aided by the result of
Theorem 1. Furthermore, different numbers of BS antennas are considered, i.e.,
$M\in\{64,128,256\}$. Fig. 2 plots the SER against the transmit SNR $\rho$,
with $\tau=32$, showing a clear SNR trade-off. In particular, the SER reduces
until it attains its minimum (which occurs at about $\rho=4$ dB for $M=256$)
before increasing again and reaching asymptotically the value of $0.25$. In
fact, as discussed above for Fig. 1, the inner estimated symbols of the 16-QAM
constellation become indistinguishable from the outer estimated symbols with
the same phase at high SNR. Fig. 3 depicts the SER against the pilot length
$\tau$, with $\rho=10$ dB, showing the impact of the channel estimation
accuracy in the computation of the MRC receiver. For instance, for $M=256$,
the SER is decreased by a factor of $5$ when the pilot length grows from
$\tau=4$ to $\tau=8$. We refer to [8] for a thorough analysis of the channel
estimation with 1-bit ADCs. In both Fig. 2 and 3, we observe that increasing
the size of the antenna array at the BS is always beneficial. For example, in
Fig. 2, the SER is decreased by two orders of magnitude at the optimal
transmit SNR when the number of BS antennas grows from $M=128$ to $M=256$.
Indeed, the higher granularity in the antenna domain allows summing the
contributions of a larger number of independent channel entries.
### IV-A Enhanced Maximum Likelihood Detection
Figure 4: Enhanced MLD with weights chosen as in (18), with $M=128$, $\rho=5$
dB, and $\tau=32$; panels: (a) SER against $\alpha$, (b) detection regions for
conventional and enhanced MLD corresponding to $\alpha=0$ and $\alpha=1$,
respectively.
The SER results presented so far have been obtained with conventional MLD,
whereby each estimated symbol is mapped to one of the expected values
$\{\mathsf{E}_{\ell}\}_{\ell=1}^{L}$ according to the minimum-distance
criterion. Such a data detection process can be enhanced by taking into
account the dispersion of the estimated symbols about their expected values,
i.e., by assigning larger detection regions to the estimated symbols with
higher variance. Hence, we now construct the detection regions according to a
multiplicatively weighted Voronoi tessellation (see (14)) with the following
heuristic choice of the weights:
$\displaystyle\omega_{\ell}=\frac{1}{1+\alpha(\mathsf{V}_{\ell}-1)},\qquad\ell=1,\ldots,L$
(18)
with $\alpha\in[0,1]$. This choice allows us to strike a balance between
conventional MLD (i.e., $\omega_{\ell}=1$ for $\alpha=0$) and enhanced MLD
with weights inversely proportional to the variance of the estimated symbols
(e.g., $\omega_{\ell}=1/\mathsf{V}_{\ell}$ for $\alpha=1$).
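As a concrete illustration (our own sketch, not code from the paper), the weighted decision rule of (14) with the weights of (18) fits in a few lines of NumPy; `E` and `V` are arrays of the expected values and variances of the estimated symbols, assumed precomputed via Theorems 1 and 2:

```python
import numpy as np

def enhanced_mld(z, E, V, alpha=1.0):
    """Map each estimated symbol in z to a constellation index using the
    multiplicatively weighted minimum-distance rule of (14) with the
    heuristic weights of (18); alpha=0 recovers conventional MLD."""
    w = 1.0 / (1.0 + alpha * (V - 1.0))          # weights from (18)
    # decision: argmin_l  w_l * |z - E_l|
    d = w[None, :] * np.abs(z[:, None] - E[None, :])
    return np.argmin(d, axis=1)
```

With a high-variance inner point and a low-variance outer point, a sample halfway between them is assigned to the outer point for $\alpha=0$ but to the inner point for $\alpha=1$, illustrating the enlarged inner detection regions.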
Fig. 4(a) plots the SER against $\alpha$, with $M=128$, $\rho=5$ dB, and
$\tau=32$, showing that using even slightly weighted detection regions can
reduce the SER by a factor of $2$. Fig. 4(b) illustrates the detection regions
corresponding to the cases of $\alpha=0$ and $\alpha=1$. It is straightforward
to observe that the detection regions corresponding to the inner estimated
symbols of the 16-QAM constellation (with higher variance) are enlarged at the
expense of the ones corresponding to the outer estimated symbols (with lower
variance). For instance, the detection threshold between the estimated symbols
corresponding to $\frac{1}{\sqrt{10}}(1+j)$ and $\frac{1}{\sqrt{10}}(3+j\,3)$
is shifted outwards to accommodate the larger dispersion of the former (cf.
Fig. 1). Indeed, this simple approach can greatly boost the performance of the
data detection in terms of SER.
## V Conclusions
This paper focuses on the uplink data detection analysis of massive MIMO
systems with 1-bit ADCs. We characterize the expected value and the variance
of the estimated symbols when MRC is adopted at the BS along with their
asymptotic behavior at high SNR. Building on these results, we propose an
enhanced MLD method that is able to greatly reduce the SER by taking into
account the dispersion of the estimated symbols about their expected values.
The proposed analysis provides important practical insights into the design
and the implementation of 1-bit quantized systems: in particular, it
highlights a fundamental SNR trade-off, according to which operating at the
right SNR considerably improves the data detection accuracy. Future work will
consider extensions to the multi-UE case and the optimal design of the set of
transmit symbols capitalizing on our analytical framework.
## References
* [1] N. Rajatheva, I. Atzeni, E. Björnson _et al._ , “White paper on broadband connectivity in 6G,” June 2020. [Online]. Available: http://jultika.oulu.fi/files/isbn9789526226798.pdf
* [2] M. Xiao, S. Mumtaz, Y. Huang _et al._ , “Millimeter wave communications for future mobile networks,” _IEEE J. Sel. Areas Commun._ , vol. 35, no. 9, pp. 1909–1935, Sept. 2017.
* [3] J. Mo and R. W. Heath, “Capacity analysis of one-bit quantized MIMO systems with transmitter channel state information,” _IEEE Trans. Signal Process._ , vol. 63, no. 20, pp. 5498–5512, Oct. 2015.
* [4] J. Choi, J. Mo, and R. W. Heath, “Near maximum-likelihood detector and channel estimator for uplink multiuser massive MIMO systems with one-bit ADCs,” _IEEE Trans. Commun._ , vol. 64, no. 5, pp. 2005–2018, May 2016.
* [5] Y. Li, C. Tao, G. Seco-Granados, A. Mezghani, A. L. Swindlehurst, and L. Liu, “Channel estimation and performance analysis of one-bit massive MIMO systems,” _IEEE Trans. Signal Process._ , vol. 65, no. 15, pp. 4075–4089, Aug. 2017.
* [6] A. K. Saxena, I. Fijalkow, and A. L. Swindlehurst, “Analysis of one-bit quantized precoding for the multiuser massive MIMO downlink,” _IEEE Trans. Signal Process._ , vol. 65, no. 17, pp. 4624–4634, Sept. 2017.
* [7] S. Jacobsson, G. Durisi, M. Coldrey, U. Gustavsson, and C. Studer, “Throughput analysis of massive MIMO uplink with low-resolution ADCs,” _IEEE Trans. Wireless Commun._ , vol. 16, no. 6, pp. 1304–1309, Jun. 2017.
* [8] I. Atzeni and A. Tölli, “Channel estimation and data detection analysis of massive MIMO with 1-bit ADCs,” 2021. [Online]. Available: https://arxiv.org/pdf/2102.10172.pdf
* [9] C. Mollén, J. Choi, E. G. Larsson, and R. W. Heath, “Uplink performance of wideband massive MIMO with one-bit ADCs,” _IEEE Trans. Wireless Commun._ , vol. 16, no. 1, pp. 87–100, Jan. 2017.
* [10] A. B. Üçüncü and A. Ŏ. Yılmaz, “Oversampling in one-bit quantized massive MIMO systems and performance analysis,” _IEEE Trans. Wireless Commun._ , vol. 17, no. 12, pp. 7952–7964, Dec. 2018.
* [11] D. Abdelhameed, K. Umebayashi, A. Al-Tahmeesschi, I. Atzeni, and A. Tölli, “Enhanced signal detection for massive SIMO communications with 1-bit ADCs,” in _Proc. IEEE Int. Workshop Signal Process. Adv. in Wireless Commun. (SPAWC)_ , Lucca, Italy, Sep. 2021.
|
arxiv-papers
| 2021-07-26T17:14:40 |
2024-09-04T03:07:19.309776
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Italo Atzeni, Antti T\\\"olli",
"submitter": "Italo Atzeni Dr.",
"url": "https://arxiv.org/abs/2107.12331"
}
|
2107.12332
|
# Overview of Bachelors Theses 2021
Vitaly Aksenov, ITMO University
[email protected]
(July 2021)
## 1 Development of a Streaming Algorithm for the Decomposition of Graph
Metrics to Tree Metrics
Student: Fafurin Oleg, ITMO University
External Supervisor: Michael Kapralov, EPFL
The embedding problem. We are given a graph $G$. We want to embed this graph
onto some tree $T$, so that the shortest distance $d_{G}(u,v)$ between any
pair of vertices $u$ and $v$ does not change much. In other words, we want to
minimize $\max\limits_{(u,v)}\frac{d_{T}(u,v)}{d_{G}(u,v)}$. This value is
named the _distortion_. Note that the distortion is upper-bounded by the
maximal distortion over the edges.
There exists an algorithm that embeds any graph into a tree with distortion
$O(\log^{2}n)$ in the streaming model, i.e., using only
$O(n\cdot\mathrm{polylog}\,n)$ memory. It consists of two parts.
In the first part, we process the edges one by one: if for a given edge
$(u,v)$ the current distance between $u$ and $v$ is less than $t$, we do not
insert it. This rule clearly guarantees a distortion of $O(t)$ for each edge,
and it can be proven that the total number of kept edges does not exceed
$O(n^{1+\frac{1}{t}})$ [4]. Taking $t=\log n$, we get $O(\log n)$ distortion
with $O(n\cdot\mathrm{polylog}\,n)$ edges.
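A compact sketch of the first part (ours, not the thesis implementation), following the rule above for unweighted graphs:

```python
from collections import defaultdict

def stream_spanner(edges, t):
    """One-pass greedy t-spanner sketch: keep an edge (u, v) only if u and v
    are currently at distance >= t in the spanner built so far.  This bounds
    the per-edge distortion by O(t), and the number of kept edges is
    O(n^(1 + 1/t)) by the argument of [4]."""
    adj = defaultdict(set)

    def dist_below(u, v, t):
        # Bounded BFS: is the current spanner distance from u to v < t?
        seen, frontier = {u}, [u]
        for _ in range(t - 1):
            nxt = []
            for x in frontier:
                for y in adj[x]:
                    if y == v:
                        return True
                    if y not in seen:
                        seen.add(y)
                        nxt.append(y)
            frontier = nxt
        return False

    kept = []
    for u, v in edges:
        if not dist_below(u, v, t):
            adj[u].add(v)
            adj[v].add(u)
            kept.append((u, v))
    return kept
```

For example, with $t=3$ the closing edge of a triangle is rejected because its endpoints are already at distance two.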
In the second part, we use a streaming version of the FRT algorithm [5],
which takes a graph with $O(n\cdot\mathrm{polylog}\,n)$ edges and produces a
tree with distortion $O(\log^{2}n)$.
As the first result, we improved the distortion of this algorithm by taking
$t$ to be $O(\frac{\log n}{\log\log n})$ in the first part and, thus, giving
$O(\frac{\log n}{\log\log n})$ distortion with $O(n\cdot\mathrm{polylog}n)$
edges in the graph. So, in total, the algorithm gives
$O(\frac{\log^{2}n}{\log\log n})$ distortion.
The resulting distortion is only an upper bound. We decided to find graphs
for which the distortion matches that upper bound. The following two
constructions do.
Regular graph. We build a regular graph of degree $O(\frac{\log n}{\log\log
n})$: first, place all $n$ vertices on a cycle, and then connect each vertex
to its $O(\frac{\log n}{\log\log n})$ nearest neighbours on each side.
Star. Consider $0<\alpha<1$. One vertex is the center, from which $n^{\alpha}$
chains of length $n^{1-\alpha}$ emanate. Then, we take all vertices at
distance at most $O(\frac{\log n}{\log\log n})$ from the center and add all
edges between them.
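A minimal sketch (ours, not the thesis code) of the two lower-bound constructions as edge sets; for brevity, the star construction omits the extra clique edges inside the ball around the center:

```python
def regular_ring(n, k):
    """Cycle on n vertices where each vertex is also joined to its k nearest
    neighbours on each side (k ~ log n / log log n in the thesis)."""
    edges = set()
    for u in range(n):
        for step in range(1, k + 1):
            v = (u + step) % n
            edges.add((min(u, v), max(u, v)))
    return edges

def star_of_chains(n_chains, chain_len):
    """Center vertex 0 with n_chains chains of length chain_len attached;
    the thesis additionally connects all vertices near the center."""
    edges, next_id = set(), 1
    for _ in range(n_chains):
        prev = 0
        for _ in range(chain_len):
            edges.add((prev, next_id))
            prev = next_id
            next_id += 1
    return edges
```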
Then, we implemented the algorithm. The complexity of the first part turned
out to be $O(mn\log n)$, where $m$ is the number of edges and $n$ is the
number of vertices. The complexity of the second part is $O(n^{2}\log n)$.
We ran the resulting algorithm on several open-source network graphs.
The following plot shows the distortion of paths after the first part of the
algorithm on different graphs such as Facebook [7] and scale-free graphs [6].
The following plot shows the distortion of edges after the second part of the
algorithm (FRT) on different scale-free graphs with different base.
## 2 Development of Memory-friendly Concurrent Data Structures
Student: Roman Smirnov, ITMO University
External Supervisor: Petr Kuznetsov, Telecom Paris
The main idea of this work is to implement the skip-list so that each node can
store up to $k$ elements instead of one. We designed and implemented the
algorithm using locks. This thesis is mostly technical and the main results
are the experiments.
First, we chose the best $k$, which turned out to be $32$. Then we compared
our approach with two well-known concurrent data structures based on the
skip list: ConcurrentSkipListSet [1] from the Java standard library and
NonBlockingFriendlySkipListSet [11]. Note that we compared sets, not maps.
It can be seen that our approach does not lose much performance.
Then, we decided to replace Objects in the previous implementation with
integers. For that, we rewrote our algorithm and ConcurrentSkipListSet. This
improved the performance of our data structure by almost a factor of $2$,
since now $k$ elements reside on the same cache line, while the results of
ConcurrentSkipListSet barely changed.
As a result, we can say that batching elements from different nodes into one
node seems to be a reasonable approach.
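The batching idea can be illustrated with a small sequential sketch (ours; the thesis implements a concurrent lock-based skip list, which is considerably more involved): each node holds up to $k$ keys in a contiguous sorted array, so neighbouring keys share cache lines.

```python
import bisect

class ChunkedSortedSet:
    """Sequential sketch of the batching idea: a list of sorted chunks of
    at most k keys each stands in for the skip-list node layout."""
    def __init__(self, k=32):
        self.k = k
        self.chunks = [[]]          # each chunk is a sorted list of <= k keys

    def _find_chunk(self, key):
        for i, c in enumerate(self.chunks):
            if not c or key <= c[-1]:
                return i
        return len(self.chunks) - 1

    def add(self, key):
        i = self._find_chunk(key)
        c = self.chunks[i]
        j = bisect.bisect_left(c, key)
        if j < len(c) and c[j] == key:
            return False            # already present
        c.insert(j, key)
        if len(c) > self.k:         # split a full node, as in the skip list
            half = len(c) // 2
            self.chunks[i:i + 1] = [c[:half], c[half:]]
        return True

    def __contains__(self, key):
        c = self.chunks[self._find_chunk(key)]
        j = bisect.bisect_left(c, key)
        return j < len(c) and c[j] == key
```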
## 3 Theoretical Analysis of the Performance of Concurrent Data Structures
Student: Daniil Bolotov, ITMO University
External Supervisor: Petr Kuznetsov, Telecom Paris
In this work we tried to predict the performance of the MCS lock [9] and the
Treiber stack [10]. The prediction is done in a similar manner to [3].
For the MCS lock, we consider a data structure that takes the MCS lock,
performs a critical section of size $C$, releases the lock, and then performs
a parallel section of size $P$. Thus, we get the following code that emulates
such a data structure.
class Node:
    bool locked                // shared, atomic
    Node next = null

tail = null                    // shared, global
threadlocal myNode = null      // per process

operation():
    myNode = Node()
    myNode.locked = true
    pred = tail.getAndSet(myNode)      // $W$ or $X$
    if pred != null:
        pred.next = myNode
        while myNode.locked:           // $R_{I}$
            // pass
    // CS started
    for i in 1..C:                     // $C$
        // nop
    // CS finished
    if myNode.next == null:            // $R_{I}$
        if tail.CAS(myNode, null):     // $W$ or $X$
            return
        else:
            while myNode.next == null: // $R_{I}$
                // pass
    myNode.next.locked = false         // $W$
    // Parallel section
    for i in 1..P:                     // $P$
        // nop
By considering different schedules we can prove that the throughput is equal
to:
$\begin{cases}\frac{\alpha}{2R_{I}+C+2W},&\text{if }P+W\leq(N-1)\cdot(2W+C+R_{I})\\ \frac{\alpha\cdot N}{(2W+C+R_{I})+(P+W)},&\text{otherwise,}\end{cases}$
where $C$ is the size of the critical section, $P$ is the size of the parallel
section, $W$ is the cost of a write, $R_{I}$ is the cost of a read, and $N$ is
the number of processes.
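The prediction is straightforward to evaluate; here is a direct transcription (our own, with `alpha` as the calibration constant of the model in [3], not specified here):

```python
def mcs_throughput(N, C, P, W, R_I, alpha=1.0):
    """Predicted throughput of the MCS-lock benchmark (formula above):
    saturated regime when the parallel section is short relative to the
    serialized lock handover, linear scaling in N otherwise."""
    if P + W <= (N - 1) * (2 * W + C + R_I):
        return alpha / (2 * R_I + C + 2 * W)      # lock is saturated
    return alpha * N / ((2 * W + C + R_I) + (P + W))
```

In the saturated regime the throughput is independent of both $N$ and $P$, which matches the intuition that the lock handover is the bottleneck.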
On Intel Xeon and $15$ processes we get the following throughput, where red is
the prediction and blue is the real execution:
On AMD Opteron and $15$ processes we get the following throughput:
Now, we consider Treiber stack. The pseudocode is the following:
class Node:
    T data
    Node next

head = null                    // shared, atomic

push(data):
    newHead = Node(data)
    while !success:
        oldHead = atomic_read(head)    // $M$ or $X$
        newHead.next = oldHead
        success = head.compareAndSet(oldHead, newHead)   // $W$

pop():
    Node oldHead
    while !success:
        oldHead = atomic_read(head)    // $M$ or $X$
        if oldHead == null:
            return DEFAULT_VALUE       // corner case
        newHead = oldHead.next
        success = head.compareAndSet(oldHead, newHead)   // $W$
    return oldHead.data
One can see that push and pop operations are similar and we can write them as
one generic function as follows:
pop_or_push_operation():
    while !success:
        current = atomic_read(head)
        new = critical_work(current)
        success = head.compareAndSet(current, new)
Then, we simulate the application of the Treiber stack: we take an element
from the stack and then we perform an execution of size $P$.
class Node:
    T data
    Node next

head = null                    // shared, atomic

operation():
    newHead = Node(data)
    while !success:
        oldHead = atomic_read(head)    // $M$ or $X$
        newHead.next = oldHead
        success = head.compareAndSet(oldHead, newHead)   // $W$
    for i in 1..P:                     // $P$
        // nop
By considering different schedules we can prove that the throughput is equal
to:
$\begin{cases}\frac{\alpha}{M+W},&\text{if }P\leq(N-1)\cdot(M+W)\\ \frac{\alpha\cdot N}{P+M+W},&\text{otherwise.}\end{cases}$
On Intel Xeon and $15$ processes we get the following results:
On AMD Opteron and $15$ processes we get the following results:
As a result, we get pretty good theoretical approximation of the throughput.
## 4 Parallel Batched Interpolation Search Tree
Student: Alena Martsenyuk, MIPT
In this thesis, we show how to design a _parallel batched_ implementation of
the Interpolation Search Tree [8]. “Parallel batched” means that we ask the
data structure to apply a batch of operations together, in parallel.
We developed a data structure that applies a batch of $m$ operations in
$O(m\log\log n)$ work and $O(\log m\log\log n)$ span, where $n$ is the
current size of the tree.
For experiments, we used an Intel Xeon machine with $16$ threads. On the
first plot, you can see how much time (OY-axis) it takes to apply $m$
(OX-axis) operations using different numbers of processes to a tree of size
$2.5\cdot 10^{7}$. On the second plot, you can see how much time (OY-axis) it
takes to apply $10^{6}$ operations using different numbers of processes to a
tree of size $n$ (OX-axis).
Finally, we insert $10^{6}$ elements into the tree of size $5\cdot 10^{7}$ and
check the speedup. The speedup is approximately $11$ on $16$ processes.
## 5 Parallel Batched Self-adjusting Data Structures
Student: Vitalii Krasnov, MIPT
In this thesis, we show how to design a parallel batched self-adjusting
binary search tree. We based our data structure on the CBTree [2].
We proved that the resulting data structure is statically optimal, i.e., the
total work is $O(\sum\limits_{x}c_{x}\log\frac{m}{c_{x}})$, where $m$ is the
total number of operations since the creation of the data structure and
$c_{x}$ is the number of times $x$ is requested. The span of the algorithm is
$\frac{m}{C}$, where $C=\min\limits_{x}c_{x}$.
For experiments, we used an Intel Xeon machine with $16$ threads. All our
experiments have the following structure: we continuously add batches of
$10^{3}$ elements to the same tree until it becomes very large, so the tree
is always the same but growing. On the first plot, one can see how much time
(OY-axis) it takes to apply batches of size $10^{3}$ to a growing tree
(OX-axis). The speedup is approximately $9$ on $12$ processes.
On the second plot, one can see how much time (OY-axis) it takes to apply
batches of size $10^{3}$ drawn from a normal distribution to a growing tree
(OX-axis).
Also, in the sequential setting our data structure outperforms std::set from
the C++ standard library.
## 6 Parallel Batched Persistent Binary Search Trees
Student: Ildar Zinatulin, MIPT
In this thesis, we show how to design a persistent parallel batched binary
search tree. We consider persistence in the sense of versions. Suppose we are
asked to apply operations $op_{1},op_{2},\ldots,op_{m}$. The result of each
operation is a new version of the tree, and the operations should be applied
in some “sequential” order $op_{\pi(1)},\ldots,op_{\pi(m)}$, i.e., the
version of the tree after operation $op_{\pi(j)}$ should equal the initial
tree after an application of the first $j$ operations
$op_{\pi(1)},\ldots,op_{\pi(j)}$.
We designed a persistent binary search tree that applies the operations in
the order of their arguments. The idea is somewhat involved and resembles the
scan (prefix-sum) primitive: we make two traversals from top to bottom. The
work of the resulting algorithm is $O(m\log n)$ and the span is $O(\log
n\log m)$.
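The versioning semantics can be illustrated with a minimal path-copying insert for a plain (unbalanced) binary search tree; this is our own sketch, not the thesis algorithm, which uses a treap and processes whole batches in two passes:

```python
class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def insert(root, key):
    """Path-copying insert: returns the root of a NEW version, copying only
    the O(depth) nodes on the search path; every other node is shared with
    the old version, which remains fully usable."""
    if root is None:
        return Node(key)
    if key < root.key:
        return Node(root.key, insert(root.left, key), root.right)
    if key > root.key:
        return Node(root.key, root.left, insert(root.right, key))
    return root  # key already present: versions share the whole tree

def contains(root, key):
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False
```

Each call to `insert` yields an independent version: older roots still see the tree exactly as it was.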
For experiments, we used an Intel Xeon machine with $16$ threads. We
performed only one experiment: the speedup of applying a batch of size
$10^{5}$ to a tree of size $10^{6}$. As the underlying binary search tree we
used a treap. The blue dot on the plot is the sequential algorithm for the
persistent treap.
## References
* [1] Java ConcurrentSkipListSet, 2021.
* [2] Y. Afek, H. Kaplan, B. Korenfeld, A. Morrison, and R. E. Tarjan. CBTree: A practical concurrent self-adjusting search tree. In Lecture Notes in Computer Science, pages 1–15. Springer Berlin Heidelberg, 2012.
* [3] V. Aksenov, D. Alistarh, and P. Kuznetsov. Brief-announcement: Performance prediction for coarse-grained locking. Proceedings of the thirty seventh annual ACM Symposium on Principles of distributed computing (PODC), pages 411–413, 2018.
* [4] I. Althöfer, G. Das, D. P. Dobkin, D. Joseph, and J. Soares. On sparse spanners of weighted graphs. Discrete and Computational Geometry, (9):81–100, 1993.
* [5] J. Fakcharoenphol, S. Rao, and K. Talwar. A tight bound on approximating arbitrary metrics by tree metrics. Journal of Computer and System Sciences, (69):485–497, 2004.
* [6] D. Fasino, A. Tonetto, and F. Tudisco. Generating large scale-free networks with the Chung–Lu random graph model. 2019.
* [7] J. McAuley and J. Leskovec. Learning to discover social circles in ego networks, 2012.
* [8] K. Mehlhorn and A. Tsakalidis. Dynamic interpolation search. In Automata, Languages and Programming, pages 424–434. Springer-Verlag, 1985.
* [9] J. M. Mellor-Crummey and M. L. Scott. Algorithms for scalable synchronization on shared-memory multiprocessors. ACM Transactions on Computer Systems (TOCS), 9(1):21–65, 1991.
* [10] R. K. Treiber. Systems programming: Coping with parallelism. International Business Machines Incorporated, Thomas J. Watson Research …, 1986.
* [11] T. Crain, V. Gramoli, and M. Raynal. A contention-friendly, non-blocking skip list. 2012.
|
arxiv-papers
| 2021-07-26T17:15:25 |
2024-09-04T03:07:19.322814
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Vitaly Aksenov",
"submitter": "Vitaly Aksenov",
"url": "https://arxiv.org/abs/2107.12332"
}
|
2107.12333
|
# Simulations of helical inflationary magnetogenesis and gravitational waves
Axel Brandenburg Nordita, KTH Royal Institute of Technology and Stockholm
University, Hannes Alfvéns väg 12, SE-10691 Stockholm, Sweden Department of
Astronomy, AlbaNova University Center, Stockholm University, SE-10691
Stockholm, Sweden McWilliams Center for Cosmology & Department of Physics,
Carnegie Mellon University, Pittsburgh, PA 15213, USA School of Natural
Sciences and Medicine, Ilia State University, 3-5 Cholokashvili Avenue, 0194
Tbilisi, Georgia Yutong He Nordita, KTH Royal Institute of Technology and
Stockholm University, Hannes Alfvéns väg 12, SE-10691 Stockholm, Sweden
Department of Astronomy, AlbaNova University Center, Stockholm University,
SE-10691 Stockholm, Sweden Ramkishor Sharma Inter University Centre for
Astronomy and Astrophysics, Post Bag 4, Pune University Campus, Ganeshkhind,
Pune 411 007, India
###### Abstract
Using numerical simulations of helical inflationary magnetogenesis in a low
reheating temperature scenario, we show that the magnetic energy spectrum is
strongly peaked at a particular wavenumber that depends on the reheating
temperature. Gravitational waves (GWs) are produced at frequencies between
$3\,{\rm nHz}$ and $50\,{\rm mHz}$ for reheating temperatures between
$150\,{\rm MeV}$ and $3\times 10^{5}\,{\rm GeV}$, respectively. At and below
the peak frequency, the stress spectrum is always found to be that of white
noise. This implies a linear increase of GW energy per logarithmic wavenumber
interval, instead of a cubic one, as previously thought. Both in the helical
and nonhelical cases, the GW spectrum is followed by a sharp drop for
frequencies above the respective peak frequency. In this magnetogenesis
scenario, the presence of a helical term extends the peak of the GW spectrum
and therefore also the position of the aforementioned drop toward larger
frequencies compared to the case without helicity. This might make a
difference in it being detectable with space interferometers. The efficiency
of GW production is found to be almost the same as in the nonhelical case, and
independent of the reheating temperature, provided the electromagnetic energy
at the end of reheating is fixed to be a certain fraction of the radiation
energy density. Also, contrary to the case without helicity, the electric
energy is now less than the magnetic energy during reheating. The fractional
circular polarization is found to be nearly a hundred per cent in a certain
range below the peak frequency.
gravitational waves—early Universe—turbulence—magnetic fields—MHD
## 1 Introduction
There has been significant interest in the production of helical magnetic
fields and circularly polarized gravitational waves (GWs) from the early
Universe (Garretson et al., 1992; Cornwall, 1997; Vachaspati, 2001;
Kahniashvili et al., 2005, 2021; Anber & Sorbo, 2006; Campanelli, 2009; Durrer
et al., 2011; Caprini & Sorbo, 2014; Adshead et al., 2016, 2018). Owing to
magnetic helicity conservation, such fields would have had a better chance to
survive until the present time (Christensson et al., 2001; Banerjee &
Jedamzik, 2004; Kahniashvili et al., 2016; Brandenburg et al., 2017). The
associated electromagnetic (EM) stress also drives circularly polarized GWs
(Kahniashvili et al., 2005, 2021; Ellis et al., 2020; Roper Pol et al., 2021).
If the sign and spectral shape of the circular polarization can in future be
detected, it would provide important information about the underlying
mechanisms responsible for the generation.
Inflationary magnetogenesis scenarios are particularly attractive, because
they have the advantage of producing large-scale magnetic fields. They tend to
amplify magnetic fields from quantum fluctuations by the breaking of conformal
invariance through a function $f$ such that the Lagrangian density has a term
that takes the form $f^{2}F_{\mu\nu}F^{\mu\nu}$, where $F_{\mu\nu}$ is the
Faraday tensor (Turner & Widrow, 1988; Ratra, 1992). However, those mechanisms
can only be viable if they avoid some well-known problems discussed in detail
in the literature (Demozzi et al., 2009; Ferreira et al., 2013; Kobayashi &
Afshordi, 2014; Kobayashi & Sloth, 2019). These problems are avoided by
requiring the function $f$ to obey certain constraints that have been
discussed in detail by Sharma et al. (2017). For some scenarios, these
magnetic fields can lead to the production of GWs which lie in the sensitivity
range of space interferometers such as LISA and Taiji, as studied analytically
in Sharma et al. (2020). This magnetogenesis model was then extended to the
helical case (Sharma et al., 2018, hereafter referred to as SSS). A similar
model of helical magnetogenesis was also considered by Fujita & Durrer (2019)
and Okano & Fujita (2021). Numerical simulations have recently been performed
for the nonhelical case (Brandenburg & Sharma, 2021, hereafter BS). The goal
of the present paper is to apply numerical simulations now to helical
magnetogenesis. These models continue to amplify EM fields during the post-
inflationary matter-dominated era after inflation, but require relatively low
reheating temperatures, $T_{\rm r}$. Values of $T_{\rm r}$ in the range of the
electroweak and quantum chromodynamics (QCD) epochs are often discussed, but
do not have to coincide with them. Here we consider values of $T_{\rm r}$ in
the range from $150\,{\rm MeV}$ to $3\times 10^{5}\,{\rm GeV}$, which
correspond to peak frequencies of GWs in the ranges accessible to pulsar
timing arrays (Detweiler, 1979; Hobbs et al., 2010; Arzoumanian et al., 2020)
and space interferometers (Caprini et al., 2016; Amaro-Seoane et al., 2017;
Taiji Scientific Collaboration et al., 2021).
As in Sharma et al. (2017) and SSS, we assume that $f$ is a function of the
scale factor $a$ with $f(a)\propto a^{\alpha}$ during inflation, and
$f(a)\propto a^{-\beta}$ during the post-inflationary matter-dominated era,
where $\alpha=2$ was fixed and $\beta$ is an exponent whose value depends on
$T_{\rm r}$. The magnetic field becomes unstable and is rapidly amplified at
large length scales, provided the second derivative of $f$ with respect to
conformal time is positive. This can be the case both for positive and
negative exponents, i.e., both during and after inflation, but no longer in
the radiation-dominated era, where $f=1$ must hold for standard (conformally
invariant) electromagnetism.
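As a quick sanity check of this instability criterion (our own sketch, not from the paper), one can evaluate $d^{2}f/d\eta^{2}$ numerically for a power law $f(\eta)=\eta^{p}$; with the matter-era scaling $a\propto\eta^{2}$ stated in this section, $f\propto a^{\alpha}$ corresponds to $p=2\alpha$ and $f\propto a^{-\beta}$ to $p=-2\beta$:

```python
def fpp(p, eta, h=1e-4):
    """Central second difference of f(eta) = eta**p with respect to conformal
    time eta.  Positive values signal the large-scale EM instability."""
    f = lambda e: e ** p
    return (f(eta - h) - 2.0 * f(eta) + f(eta + h)) / h ** 2
```

Both $p=4$ (positive exponent, $\alpha=2$) and $p=-2$ (negative exponent, $\beta=1$) give $f''>0$, matching the analytic value $p(p-1)\eta^{p-2}$.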
In contrast to BS, we now consider an additional term $\gamma
f^{2}F_{\mu\nu}\tilde{F}^{\mu\nu}$ in the Lagrangian density, where $\gamma$
is a constant and $\tilde{F}^{\mu\nu}$ is the dual of the Faraday tensor. The
product is proportional to $\mathbf{E}\cdot\mathbf{B}$, where $\mathbf{E}$ and
$\mathbf{B}$ are the electric and magnetic fields, respectively. The term
$\mathbf{E}\cdot\mathbf{B}$ is proportional to the rate of magnetic helicity
production. The presence of such a term is common to many scenarios of helical
magnetogenesis, including the chiral magnetic effect (CME; see Vilenkin, 1980;
Joyce & Shaposhnikov, 1997; Boyarsky et al., 2012, 2015) and axion inflation
(Barnaby et al., 2011; Turner & Widrow, 1988; Fujita et al., 2015; Adshead et
al., 2016; Domcke & Mukaida, 2018; Domcke et al., 2020). In the case of
magnetogenesis via axion inflation (Garretson et al., 1992; Adshead et al.,
2016), the helical term takes the form $f_{\rm m}^{-1}\phi
F_{\mu\nu}\tilde{F}^{\mu\nu}$, where $\phi$ represents the axion field and
$f_{\rm m}$ is a mass scale associated with the axion field. In our model,
$f(a)$ is constructed such that the model avoids the aforementioned
difficulties discussed in detail by Sharma et al. (2017) and SSS.
As in BS, we employ the Pencil Code (Pencil Code Collaboration et al., 2021)
and apply it in two separate steps. In step I, we solve the Maxwell and GW
equations near the end of the post-inflationary matter-dominated phase when
the medium is still electrically nonconducting and no fluid motions can be
driven by the Lorentz force. Just like the (linearized) GW equation, the
Maxwell equations are linear and are advanced analytically between two
subsequent time steps; see Appendix C of BS for details. In step II, when the
conductivity has become large, we solve the standard magnetohydrodynamic (MHD)
equations.
The presence of the helical term proportional to $\gamma$ leads to a
difference in the growth rates between positively and negatively polarized
fields. Fields with one of the two signs of helicities will therefore grow
much faster than the other. Since there is enough time for the magnetic field
to grow over many orders of magnitude, it suffices to consider in step I only
fields of one helicity. This simplifies the computation somewhat. In step II,
however, no such simplification is made.
In this paper, we work with conformal time $\eta$, which is related to
physical time $t$ through $\eta=\int{\rm d}{}t/a(t)$. By adopting
appropriately scaled variables, we arrive at MHD equations that are similar to
those of standard MHD for a non-expanding Universe (Brandenburg et al., 1996).
In step I, during the post-inflationary matter-dominated era, the effective
equation of state is such that the scale factor increases quadratically with
conformal time (and like $t^{2/3}$ with physical time). Conformal time is
normalized such that it is unity at the beginning of the subsequent radiation-
dominated era. Furthermore, the scale factor increases linearly with $\eta$ in
the radiation-dominated era. We assume a spatially flat Universe and adopt the
normalization of Roper Pol et al. (2020a, b), where $a(\eta)=1$ at $\eta=1$
and the mean radiative energy density is then also set to unity.
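The piecewise scale factor described above can be written down in a few lines. This is a sketch with our own helper name, not code from the paper; the radiation-era branch $a=\eta$ additionally assumes that the slope of $a(\eta)$ is continuous at $\eta=1$, which is consistent with $a(1)=1$ and linear growth:

```python
def scale_factor(eta):
    """Scale factor a(eta): quadratic in the matter-dominated era (-1 < eta <= 1),
    linear in the radiation-dominated era, normalized so that a(1) = 1.
    The radiation-era branch a = eta assumes a continuous slope at eta = 1."""
    if eta <= 1.0:
        return (eta + 1.0)**2 / 4.0
    return eta

# Values quoted in the text:
print(scale_factor(-0.9))   # initial time of most runs, approx 2.5e-3
print(scale_factor(-0.99))  # initial time of Runs C and D, approx 2.5e-5
print(scale_factor(1.0))    # start of the radiation era, exactly 1.0
```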
In Section 2, we present the basic equations applied in steps I and II. Those
for step II are identical to the corresponding ones used in BS, but the
equations for step I are different owing to the presence of the magnetic
helicity producing term proportional to $\gamma$. We then present the results
in Section 3 and conclude in Section 4. We adopt the Heaviside-Lorentz unit
system and set the speed of light equal to unity.
## 2 The model
### 2.1 Polarization basis and governing equations
Any vector field can be decomposed into an irrotational and two vortical parts
that are eigenfunctions of the curl operator with positive and negative
eigenvalues. Here we employ the vector potential $\mathbf{A}$ in the Coulomb
gauge, ${\mathbf{\nabla}}\cdot\mathbf{A}=0$, so the irrotational part
vanishes. We then consider
$\tilde{\mathbf{A}}(\eta,\mathbf{k})=\int{\mathbf{A}}(\eta,\mathbf{x})\,e^{-{\rm
i}\mathbf{k}\cdot\mathbf{x}}{\rm d}{}^{3}\mathbf{x}$ in Fourier space,
indicated by tildae, as a function of conformal time $\eta$ and the wavevector
$\mathbf{k}$, and write it as
$\tilde{\mathbf{A}}(\eta,\mathbf{k})=\tilde{A}_{+}(\eta,\mathbf{k})\,\tilde{\mathbf{e}}_{+}(\mathbf{k})+\tilde{A}_{-}(\eta,\mathbf{k})\,\tilde{\mathbf{e}}_{-}(\mathbf{k}),$
(1)
where
$\tilde{\mathbf{e}}_{\pm}(\mathbf{k})=[\tilde{\mathbf{e}}_{1}(\mathbf{k})\pm{\rm
i}\tilde{\mathbf{e}}_{2}(\mathbf{k})]/\sqrt{2}\,{\rm i}$ (2)
is the polarization basis with ${\rm
i}\mathbf{k}\times\tilde{\mathbf{e}}_{\pm}=\pm k\tilde{\mathbf{e}}_{\pm}$,
$k=|\mathbf{k}|$ is the wavenumber and $\tilde{\mathbf{e}}_{1}(\mathbf{k})$,
$\tilde{\mathbf{e}}_{2}(\mathbf{k})$ represent unit vectors orthogonal to
$\mathbf{k}$ and orthogonal to each other. We assume an additional helical
term in the EM Lagrangian density,
$f^{2}F_{\mu\nu}(F^{\mu\nu}+\gamma\tilde{F}^{\mu\nu})$. As in BS, we assume
$f(a)=a^{-\beta}\quad\mbox{with}\quad a=(\eta+1)^{2}/4$ (3)
being the scale factor during the post-inflationary matter-dominated era with
$-1<\eta\leq 1$. The evolution of the scaled vector potential,
$\tilde{\mathcal{A}}_{\pm}\equiv f\tilde{A}_{\pm}$, is then governed by the
equation (SSS; Okano & Fujita, 2021)
$\tilde{\mathcal{A}}_{\pm}^{\prime\prime}+\left(k^{2}\pm 2\gamma
k\frac{f^{\prime}}{f}-\frac{f^{\prime\prime}}{f}\right)\tilde{\mathcal{A}}_{\pm}=0,$
(4)
where primes denote $\eta$ derivatives, and
$\frac{f^{\prime}}{f}=-\frac{2\beta}{\eta+1},\quad\frac{f^{\prime\prime}}{f}=\frac{2\beta(2\beta+1)}{(\eta+1)^{2}}.$
(5)
There are growing modes for $k<k_{*}(\eta)$, given by
$k_{*}(\eta)=2\beta\,\left(\gamma+\sqrt{1+\gamma^{2}+1/(2\beta)}\right)/(\eta+1),$
(6)
where we have considered the upper sign in Equation (4). Equation (6) reduces
to the expression given in Equation (7) of BS for $\gamma=0$. For $\gamma=1$,
we have $k_{*}(1)=\beta\,(1+\sqrt{2+1/(2\beta)})$. For $\beta=7.3$, a particular
case considered by BS, we have $k_{*}(1)\approx 18$ in the helical case when
$\gamma=1$, which is more than twice the value $k_{*}(1)\approx 7.5$ for
$\gamma=0$ used by BS for the nonhelical case. This shows that helicity
broadens the range of unstable wavenumbers. For $\gamma=-1$, we would have
$k_{*}(1)\approx 3.2$, but this is not relevant in practice because the
fastest growing mode would then have opposite magnetic helicity, and the
results for $\gamma=1$ apply analogously. Contrary to the case of nonhelical
magnetogenesis ($\gamma=0$), where the growth is fastest for $k=0$, it is now
fastest for finite values of $k$. In fact, as a function of $k$, the
expression in round brackets in Equation (4) has an extremum at
$k=2\beta\gamma/(\eta+1)$, which would instead be at $k=0$ for $\gamma=0$.
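The values quoted above follow directly from Equation (6) and can be checked in a few lines. This is a minimal sketch; `k_star` is our own helper name:

```python
import math

def k_star(eta, beta, gamma):
    """Largest unstable wavenumber, Equation (6):
    k_* = 2*beta*(gamma + sqrt(1 + gamma^2 + 1/(2*beta)))/(eta + 1)."""
    return 2.0 * beta * (gamma + math.sqrt(1.0 + gamma**2 + 1.0 / (2.0 * beta))) / (eta + 1.0)

beta = 7.3  # the particular case considered by BS
print(round(k_star(1.0, beta, 1.0), 1))   # helical case, approx 18 as quoted
print(round(k_star(1.0, beta, 0.0), 1))   # nonhelical case, approx 7.5
print(round(k_star(1.0, beta, -1.0), 1))  # opposite-sign helicity, approx 3.2
```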
As in BS, we also solve the linearized GW equations
$\tilde{h}_{+/\times}^{\prime\prime}+\left(k^{2}-\frac{a^{\prime\prime}}{a}\right)\tilde{h}_{+/\times}={6\over
a}\,\tilde{T}_{+/\times}$ (7)
for the two polarization modes of the Fourier-transformed strain
$\tilde{h}_{+/\times}$. As in Roper Pol et al. (2020a, b), we have made use of
the fact that the critical energy density at $\eta=1$ is unity. The GWs are
driven by the $+$ and $\times$ modes of the traceless-transverse projected EM
stress,
${{\sf T}}_{ij}=f^{2}\,(B_{i}B_{j}+E_{i}E_{j}),$ (8)
where $\mathbf{E}=-\partial\mathbf{A}/\partial\eta$ and
$\mathbf{B}={\mathbf{\nabla}}\times\mathbf{A}$ are the electric and magnetic
fields in real space. We then compute $\tilde{\sf
T}_{ij}(\eta,\mathbf{k})=\int{{\sf T}}_{ij}(\eta,\mathbf{x})\,e^{-{\rm
i}\mathbf{k}\cdot\mathbf{x}}{\rm d}{}^{3}\mathbf{x}$ in Fourier space, project
out the transverse-traceless part, and decompose the result into
$\tilde{T}_{+}$ and $\tilde{T}_{\times}$, which then enter in Equation (7);
see Roper Pol et al. (2020a, b) for details. In step II, we solve the standard
MHD equations with the usual modifications for a radiation-dominated
ultrarelativistic gas; see also BS. The bulk motions with velocity
$\mathbf{u}$ are nonrelativistic, but include second order terms in the
Lorentz factor (see Brandenburg et al., 1996, 2017, for details). As stated
before, the mean radiation energy density is set to unity at $\eta=1$. The new
parameters in this step are the electric conductivity $\sigma$ and the
kinematic viscosity $\nu$. As in BS, we always assume the magnetic Prandtl
number to be unity, i.e., $\nu\sigma=1$.
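The transverse-traceless projection used to extract $\tilde{T}_{+}$ and $\tilde{T}_{\times}$ can be sketched for a single wavevector. This uses the standard projector $\Lambda_{ijlm}=P_{il}P_{jm}-\tfrac{1}{2}P_{ij}P_{lm}$ with $P_{ij}=\delta_{ij}-k_{i}k_{j}/k^{2}$; it is an illustration of the construction, not the paper's actual implementation, and the helper names are our own:

```python
def tt_project(T, k):
    """Project a symmetric 3x3 stress T onto its transverse-traceless part
    for wavevector k, using Lambda_ijlm = P_il*P_jm - 0.5*P_ij*P_lm,
    where P_ij = delta_ij - k_i*k_j/k^2."""
    k2 = sum(ki * ki for ki in k)
    P = [[(1.0 if i == j else 0.0) - k[i] * k[j] / k2 for j in range(3)]
         for i in range(3)]
    trPT = sum(P[l][m] * T[l][m] for l in range(3) for m in range(3))
    return [[sum(P[i][l] * P[j][m] * T[l][m] for l in range(3) for m in range(3))
             - 0.5 * P[i][j] * trPT for j in range(3)] for i in range(3)]

T = [[1.0, 0.2, 0.3],
     [0.2, 2.0, 0.1],
     [0.3, 0.1, 3.0]]
k = [0.0, 0.0, 1.0]  # for k along z, the + and x modes sit in TT[0][0] and TT[0][1]
TT = tt_project(T, k)
trace = TT[0][0] + TT[1][1] + TT[2][2]          # vanishes: traceless
transverse = [sum(k[i] * TT[i][j] for i in range(3)) for j in range(3)]  # vanishes
```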
### 2.2 Diagnostics and initial conditions
Important output diagnostics are energy spectra, $E_{\lambda}(\eta,k)$, where
$\lambda={\rm E}$, ${\rm M}$, ${\rm K}$, and ${\rm GW}$, for electric,
magnetic, kinetic, and GW energy spectra. The symbols for the spectra are only
used with these four subscripts and are not to be confused with the components
of the electric field vector $\mathbf{E}$. The corresponding energy densities
are defined as $k$ integrals over these spectra, i.e., ${\cal
E}_{\lambda}(\eta)=\int E_{\lambda}(\eta,k)\,{\rm d}{}k$, and are normalized
such that ${\cal E}_{\rm E}=\langle\mathbf{E}^{2}\rangle/2$, ${\cal E}_{\rm
M}=\langle\mathbf{B}^{2}\rangle/2$, ${\cal E}_{\rm
K}=\langle\mathbf{u}^{2}\rangle/2$, ${\cal E}_{\rm GW}=\langle
h_{+}^{2}+h_{\times}^{2}\rangle/6$.
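As a minimal illustration of how these diagnostics are formed, an energy density can be obtained from a spectrum by trapezoidal integration over $k$. This is a sketch with a toy spectrum, not the diagnostics machinery actually used:

```python
import math

def energy_density(E, k):
    """E_lambda(eta) = integral of E_lambda(eta, k) dk, via the trapezoidal rule."""
    return sum(0.5 * (E[i] + E[i + 1]) * (k[i + 1] - k[i]) for i in range(len(k) - 1))

# Toy spectrum E(k) = k^3 * exp(-k); its exact integral over k >= 0 is 3! = 6.
ks = [0.01 * i for i in range(5001)]
Es = [kk**3 * math.exp(-kk) for kk in ks]
print(energy_density(Es, ks))  # close to 6
```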
We emphasize that $E_{\rm GW}(k)$ denotes the GW energy density per linear
wavenumber interval, normalized to the radiation energy density at $\eta=1$.
To obtain the GW energy density per logarithmic wavenumber interval,
normalized to the critical energy density today, one has to multiply $kE_{\rm
GW}(k)$ by the dilution factor $(a_{\rm r}/a_{0})^{4}(H_{\rm r}/H_{0})^{2}$,
where the subscripts ‘r’ and ‘0’ refer to the scale factor $a$ and the Hubble
parameter $H$ at the end of reheating and today; see Roper Pol et al. (2020b)
for details regarding the normalization. This leads to the quantity
$h_{0}^{2}{\Omega}_{\rm GW}(k)=1.6\times 10^{-5}\,(g_{\rm r}/100)\,kE_{\rm
GW}(k)$, where $g_{\rm r}$ is the number of relativistic degrees of freedom at
the beginning of the radiation dominated era.
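The conversion just described amounts to a one-line formula; as a sketch (the input numbers below are illustrative only, not taken from a run):

```python
def h2_omega_gw(kE_gw, g_r):
    """h0^2 * Omega_GW(k) today from the comoving spectral value k*E_GW(k),
    using the dilution prefactor 1.6e-5 * (g_r/100) quoted in the text."""
    return 1.6e-5 * (g_r / 100.0) * kE_gw

# Illustrative example: k*E_GW(k) = 1e-4 at the end of step I with g_r = 100
print(h2_omega_gw(1e-4, 100.0))  # approx 1.6e-9
```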
The simulations usually start at the initial time $\eta_{\rm ini}=-0.9$, which
implies $a(\eta_{\rm ini})=2.5\times 10^{-3}$. In some cases (Runs C and D
below), we used $\eta_{\rm ini}=-0.99$, so that $a(\eta_{\rm ini})=2.5\times
10^{-5}$. As discussed in BS, the initial magnetic field has usually a
spectrum $E_{\rm M}(k)\propto k^{3}$ for $k<k_{\rm*}(\eta_{\rm ini})$. The
value of $k_{\rm*}(\eta_{\rm ini})$ usually lies between the smallest and
largest wavenumbers in the computational domain, $k_{1}$ and $k_{\rm Ny}$,
respectively, where $k_{\rm Ny}=k_{1}n_{\rm mesh}/2$ is the Nyquist wavenumber
and $n_{\rm mesh}$ is the number of mesh points of the domain of size
$2\pi/k_{1}$. In this paper, we use $n_{\rm mesh}=512$ and we treat $k_{1}$ as
an input parameter that is usually chosen to be unity, but we also consider
values between 0.2 and 10 in some cases.
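The grid quantities just introduced can be stated compactly (a sketch; the helper name is ours):

```python
def nyquist_wavenumber(k1, n_mesh):
    """Nyquist wavenumber of a periodic domain of size 2*pi/k1 with n_mesh points."""
    return k1 * n_mesh / 2

print(nyquist_wavenumber(1.0, 512))  # 256.0, for the resolution used in this paper
```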
The transition from step I to step II is discontinuous, as was already
discussed in BS. This may be permissible when the change from zero
conductivity to a finite and large value occurs rapidly; see Appendix D of BS.
In addition, while in step II we have $f=1$, and therefore
$f^{\prime}=f^{\prime\prime}=0$, the values of $f^{\prime}/f$ and
$f^{\prime\prime}/f$ at the end of step I are small, but finite, which can
cause artifacts. BS noted the occurrence of oscillations shortly after
transitioning to step II, but the results presented for our GW spectra are
always averaged over the statistically steady state and are therefore
independent of the oscillations caused by the discontinuities of these two
ratios. In the present case of helical magnetogenesis, there is also another
effect on the spectral slope of the GW energy density that will be addressed
below.
Let us emphasize at this point that in step II, when $\sigma$ is large,
magnetic helicity, $\langle\mathbf{A}\cdot\mathbf{B}\rangle$, is well
conserved. This is not the case in step I, which is the reason why a helical
magnetic field can be produced. Indeed, the magnetic helicity then grows at
the same speed as the magnetic energy grows.
Figure 1: Evolution of (a) $B_{\rm rms}$ and (b) ${\cal E}_{\rm GW}$ for Runs
B (red lines) and Bn (blue lines), compared with two versions of Run B1 of
BHKRS with different initial field strengths. The two orange lines denote Run
B1 of BHKRS with the original and a $10^{12}$ times weaker initial field. Note
that for the helical growth, the slopes change with $a(\eta)$, which is a
consequence of the helical term.
### 2.3 Parameters of the magnetogenesis model
To avoid back-reaction and strong coupling problems of magnetogenesis during
inflation, SSS assumed the function $f$ to grow in a particular fashion. In
the beginning, it grows as $a^{\alpha}$, starting from the value unity. To
recover the standard EM theory at the end of reheating, $f$ is further assumed
to continue evolving as $f\propto a^{-\beta}$ in the post-inflationary era,
which is assumed to be matter dominated. The procedure to obtain the value of
$\beta$ for a particular value of the reheating temperature $T_{\rm r}$ is the
same as explained in Appendix A of BS. The only difference lies in Equation
(A1) of BS, which is obtained by demanding that the total EM energy density
be a certain fraction ${\cal E}_{\rm EM}$ of the background energy density at
the end of the post-inflationary matter-dominated era; this relation takes a
different form in the helical case. Details are given in Appendix A.
In the model of SSS, $\alpha=2$ was chosen to have a scale-invariant magnetic
energy spectrum during inflation. However, in the post-inflationary era, when
$f$ decreases, the part that provides a scale-invariant spectrum during
inflation decays and the next order term becomes dominant, giving an $E_{\rm
M}\propto k^{3}$ spectrum in the superhorizon limit. In this case, when
$\alpha=2$, the maximum possible value of the reheating temperature is
approximately $50\,{\rm GeV}$. This value is different from the value given by
SSS, which was $4000\,{\rm GeV}$. This difference is due to the fact that in
SSS, the extra amplification due to the presence of the helical term was not
considered in the post-inflationary matter-dominated era.
In BS, we focussed on two sets of runs—one for a reheating temperature of
around $100\,{\rm GeV}$ and another for $150\,{\rm MeV}$. The corresponding
values of $\beta$ were then 7.3 and 2.7, respectively. We begin with similar
choices of $\beta$ here, too. It turns out that for $150\,{\rm MeV}$, the
appropriate value is now $\beta=2.9$. For the standard scenario with
$\alpha=2$, however, models for $100\,{\rm GeV}$ would not be allowed in the
helical case, because they would lead to strong backreaction for the reasons
explained above; this forces us to choose $\approx 10\,{\rm GeV}$ instead. In
that case, the appropriate value would be $\beta=7.7$; see Table 1 for a
summary of parameter combinations and Appendix A
for further details. To facilitate comparison with BS, we have reduced the
value of $T_{\rm r}$ to $8\,{\rm GeV}$, which then corresponds to $\beta=7.3$.
Table 1: $\beta$ for different values of $T_{\rm r}$.

$T_{\rm r}$ [GeV] | $\alpha$ | ${\cal E}_{\rm EM}$ | $\beta$ | $g_{\rm r}(\eta_{*})$ | $E_{\rm M}(\eta_{\rm ini},k)$
---|---|---|---|---|---
$10$ | 2 | 0.07 | 7.7 | 86 | $\propto k^{3}$
$8$ | 2 | 0.01 | 7.3 | 86 | $\propto k^{3}$
$0.15$ | 2 | 0.01 | 2.9 | 61.75 | $\propto k^{3}$
$460$ | $-3$ | 0.01 | 3 | 106.75 | $\propto k^{-1}$
$3\times 10^{5}$ | 1 | 0.01 | 1.7 | 106.75 | $\propto k^{5}$
In this paper, we also explore the possibility of a smaller value of $\alpha$.
This allows for higher reheating temperature scales without having any back-
reaction problem in the post-inflation matter-dominated era. For the case
$\alpha=1$, the value of the reheating temperature is $3\times 10^{5}\,{\rm
GeV}$ when the Hubble parameter during inflation is $H_{\rm f}=10^{14}\,{\rm
GeV}$ and the total EM energy density is $1\%$ of the background energy
density at the end of reheating. These large values of $H_{\rm f}$ and $T_{\rm
r}$ were not possible for the case when $\alpha=2$. This case is listed in the
last row of Table 1 along with other relevant parameters.
We also consider the model of Okano & Fujita (2021), where $f(a)\propto
a^{-3}$ both during inflation and in the post-inflationary era, i.e.,
$\beta=3=-\alpha$. In their model, the product $\beta\gamma$ was found to be
$7.6$ so as to have maximum magnetic field strength for the case when the
total EM energy density is 1% of the background energy density; see Equation
(2.19) of Okano & Fujita (2021). This corresponds to $\gamma=2.5$. In that
case, the initial magnetic field had a scale-invariant spectrum proportional
to $k^{-1}$ in the superhorizon limit.
Quantum fluctuations alone would not introduce a preference of one sign of
helicity over the other, so both ${\cal A}_{+}$ and ${\cal A}_{-}$ would grow
at the same rate if $\gamma=0$. However, if the magnetic field was
fully helical to begin with, only one of the two signs of helicity would grow,
i.e., either ${\cal A}_{+}$ or ${\cal A}_{-}$, so the field might remain
helical even though $\gamma=0$ and both solutions would still be equally
unstable. In the following, we allow for such a possibility in some of our
simulations.
Table 2: Summary of simulation parameters and properties.

Run | $T_{\rm r}$ [GeV] | $B_{0}$ | $\beta$ | $\gamma$ | $k_{\rm*}^{(1)}$ | $\nu$ | ${\cal E}_{\rm M}$ | ${\cal E}_{\rm EM}$ | ${\cal E}_{\rm M}/{\cal E}_{\rm EM}$ | ${\cal E}_{\rm GW}$ | $h_{\rm rms}$ | $q_{\rm M}$ | $q_{\rm EM}$
---|---|---|---|---|---|---|---|---|---|---|---|---|---
A | $0.15$ | $5\times 10^{-10}$ | $2.9$ | $1$ | $7.2$ | $1\times 10^{-4}$ | $0.012$ | $0.023$ | $0.51$ | $1.2\times 10^{-5}$ | $9.1\times 10^{-3}$ | $2.1$ | $1.07$
B | $10$ | $4\times 10^{-24}$ | $7.3$ | $1$ | $17$ | $2\times 10^{-4}$ | $0.050$ | $0.11$ | $0.48$ | $6.6\times 10^{-5}$ | $3.6\times 10^{-3}$ | $2.9$ | $1.37$
Bn | $10$ | $3\times 10^{-18}$ | $7.3$ | $0$ | $7.5$ | $2\times 10^{-4}$ | $0.007$ | $0.19$ | $0.04$ | $1.0\times 10^{-3}$ | $2.4\times 10^{-2}$ | $32$ | $1.30$
C | $460$ | $1\times 10^{-27}$ | $3.0$ | $2.5$ | $15$ | $1\times 10^{-4}$ | $0.014$ | $0.017$ | $0.80$ | $1.6\times 10^{-6}$ | $8.1\times 10^{-4}$ | $1.4$ | $1.14$
D | $3\times 10^{5}$ | $5\times 10^{-6}$ | $1.7$ | $1$ | $4.3$ | $5\times 10^{-4}$ | $0.016$ | $0.025$ | $0.64$ | $8.5\times 10^{-5}$ | $7.6\times 10^{-3}$ | $2.5$ | $1.58$
Dn | $3\times 10^{5}$ | $1\times 10^{-3}$ | $1.7$ | $0$ | $1.9$ | $2\times 10^{-4}$ | $0.016$ | $0.052$ | $0.30$ | $2.8\times 10^{-3}$ | $5.7\times 10^{-2}$ | $6.6$ | $1.98$
## 3 Results
### 3.1 Growth of magnetic field and GW energy
In Figure 1, we show the growth and subsequent decay of the root-mean square
(rms) magnetic field $B_{\rm rms}$ during steps I and II, and compare with a
simulation of nonhelical inflationary magnetic field generation (similar to
Run B1 of BS). The growth is still approximately algebraic, but, as expected,
it is now faster than in the nonhelical case. This is caused by the extra
amplification resulting from the helical term proportional to $\gamma$. This
term is reminiscent of the CME, which causes, however, exponential magnetic
field amplification (Joyce & Shaposhnikov, 1997). The CME has been invoked in
the study of GW production from the resulting magnetic field both analytically
(Anand et al., 2019) and numerically (Brandenburg et al., 2021c, hereafter
BHKRS). The difference in the temporal growth of $B_{\rm rms}$ and ${\cal
E}_{\rm GW}$ between the CME and helical magnetogenesis is demonstrated in
Figure 1. Here we have also overplotted two versions of Run B1 of BHKRS.
During the subsequent decay phase, $B_{\rm rms}$ is approximately equally
large for both inflationary and CME runs. This is just because of our choice
of parameters. However, owing to the smaller length scales on which the CME
operates, the corresponding GW energy is now much smaller than for
inflationary magnetogenesis. On the other hand, we also see that the growth,
being exponential, is much faster for the CME runs than for both the helical
and nonhelical inflationary magnetogenesis models. This implies that the CME
can reach saturation with an arbitrarily weak initial seed magnetic field. The
saturation amplitude does, however, depend on the assumed initial imbalance of
left- and right-handed fermions, and may, in reality, be much smaller than
what has been assumed in the models of BHKRS. By contrast, the maximum field
strength from inflationary magnetogenesis is determined by demanding that the
total EM energy density is some fraction of the background energy density at
the end of reheating so that there is no back-reaction.
In Table 2, we summarize quantitative aspects of our new runs,
Runs A–D, as well as two nonhelical ones, Runs Bn and Dn, where $\gamma=0$. We
list the reheating temperature $T_{\rm r}$ in GeV, the amplitude parameter
$B_{0}$ for the initial magnetic field, the aforementioned parameters $\beta$,
$\gamma$, $k_{\rm*}^{(1)}$, and $\nu$, as well as the output parameters ${\cal
E}_{\rm M}$, ${\cal E}_{\rm EM}\equiv{\cal E}_{\rm E}+{\cal E}_{\rm M}$, the
ratio ${\cal E}_{\rm M}/{\cal E}_{\rm EM}$, the values of ${\cal E}_{\rm GW}$
and the rms strain $h_{\rm rms}=\langle
h_{+}^{2}+h_{\times}^{2}\rangle^{1/2}$, as well as two different efficiency
parameters $q_{\rm M}$ and $q_{\rm EM}$, defined below.
As in BS, varying the initial magnetic field strength $B_{0}$ always resulted
in a purely quadratic change of ${\cal E}_{\rm M}$, and a quartic change of
${\cal E}_{\rm GW}$. It therefore suffices to present, for each combination of
parameters $\beta$ and $\gamma$, only one value of $B_{0}$, typically such
that ${\cal E}_{\rm EM}$ is roughly in the expected range of between 0.01 and
0.1.
Figure 2: $E_{\rm M}(k)$ (red lines), $E_{\rm E}(k)$ (orange lines), and
$E_{\rm GW}(k)$ (blue lines) for (a) Run B, (c) Run C, and (e) Run D, together
with the associated collapsed spectra $\phi_{\rm M}(\kappa)$ (red lines),
$\phi_{\rm E}(\kappa)$ (orange lines), and $\phi_{\rm GW}(\kappa)$ (blue
lines) for (b) Run B, (d) Run C, and (f) Run D. The spectral GW energy
increases at a rate that is independent of $k$, but the growth speed of
$E_{\rm M}(k)$ does depend on $k$.

Figure 3: Visualizations of $B_{z}$ for Runs B (top), C (middle), and D
(bottom) on the periphery of the computational domain for $\eta=-0.8$,
$-0.5$, $0$, and $1$ during step I. The color scale is symmetric about zero
and adjusted with respect to the instantaneous extrema.
Comparing helical with nonhelical runs for similar values of ${\cal E}_{\rm
M}$, we find that the GW energies and strains are smaller than in the earlier
cases without helicity (see also Figure 1). This may suggest that GW
production from helical
inflationary magnetogenesis is somewhat less efficient than for the nonhelical
case. However, while the values of ${\cal E}_{\rm M}$ are the same, the total
EM energies, ${\cal E}_{\rm EM}={\cal E}_{\rm E}+{\cal E}_{\rm M}$, are not.
In fact, we see that the ratio ${\cal E}_{\rm E}/{\cal E}_{\rm M}$ is
typically 0.3–0.5, i.e., the electric energy contribution is subdominant
during the post-inflationary matter-dominated era. For nonhelical
magnetogenesis, by contrast, the electric energy is dominant, typically with
${\cal E}_{\rm E}/{\cal E}_{\rm M}=10$–$30$ for $\beta$ between 2.7 and 7.3.
Figure 4: Temporal dependence represented through $a(\eta)$ of spectral
energies at $k=2$ (solid lines) and $k=10$ (dashed lines) for Run C with
$E_{\rm M}(\eta,k)$ (red lines), $E_{\rm E}(\eta,k)$ (orange lines), and
$E_{\rm GW}(\eta,k)$ (blue lines).
As already noted, for fixed values of $\beta$ and $\gamma$, the different
values of ${\cal E}_{\rm M}$, ${\cal E}_{\rm EM}$, ${\cal E}_{\rm GW}$, and
$h_{\rm rms}$ are directly related to the initial amplitude parameter $B_{0}$.
To compare runs with different parameters $\beta$ and $\gamma$, we must
therefore compute normalized efficiencies. Earlier work (Roper Pol et al.,
2020b; Brandenburg et al., 2021b) suggested that ${\cal E}_{\rm GW}=(q_{\rm
M}{\cal E}_{\rm M}/k_{\rm c})^{2}$, where $q_{\rm M}$ is the efficiency and
$k_{\rm c}$ is a characteristic wavenumber. In analogy to their work, we now
postulate an analogous relation, but with ${\cal E}_{\rm EM}$ instead of
${\cal E}_{\rm M}$, i.e.,
${\cal E}_{\rm GW}=(q_{\rm EM}{\cal E}_{\rm EM}/k_{\rm c})^{2},$ (9)
where $q_{\rm EM}$ is a new efficiency parameter, and for $k_{\rm c}$ we
always take the value $k_{\rm c}=k_{\rm*}(1)$, just like in BS.
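Equation (9), and its ${\cal E}_{\rm M}$-based analog, can be inverted to recover the tabulated efficiencies. A sketch using the Run D entries of Table 2; agreement is limited to a few per cent by the rounding of the tabulated values:

```python
import math

def efficiency(E_gw, E_source, k_c):
    """Invert E_GW = (q * E_source / k_c)**2 for the efficiency q."""
    return k_c * math.sqrt(E_gw) / E_source

# Run D of Table 2: E_M = 0.016, E_EM = 0.025, E_GW = 8.5e-5, k_c = k_*(1) = 4.3
print(round(efficiency(8.5e-5, 0.016, 4.3), 2))  # q_M, close to the listed 2.5
print(round(efficiency(8.5e-5, 0.025, 4.3), 2))  # q_EM, close to the listed 1.58
```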
For nonhelical magnetogenesis, BS found that $q_{\rm M}$ was proportional to
$\beta$. Since $k_{\rm*}(1)$ was also proportional to $\beta$, this meant that
the effect of dividing by $k_{\rm*}(1)$ was effectively canceled, and that
therefore a good scaling was obtained by just plotting ${\cal E}_{\rm GW}$
versus ${\cal E}_{\rm M}^{2}$, suggesting that the $1/k_{\rm c}$ scaling may
not have been real. However, our new results for helical magnetogenesis now
show that this is not the case for $q_{\rm EM}$. In fact, looking at Table 2,
where we present both $q_{\rm M}$ and $q_{\rm EM}$, we see
that $q_{\rm M}$ shows significant variations
($1.4\lesssim q_{\rm M}\lesssim 32$),
while $q_{\rm EM}$ changes comparatively little
($1.1\lesssim q_{\rm EM}\lesssim 1.6$).
This suggests that the GW energy is mainly governed by $q_{\rm EM}$,
independently of or only weakly dependent on the value of $\beta$.
Among the four runs A–D, Runs A and B are similar in that only the value of
$\beta$ is different. For Runs C and D, on the other hand, also the values of
$\gamma$ and $\alpha$ were different. In the following, therefore, we focus on
presenting Runs B–D in more detail.
### 3.2 Energy spectra
Next, we compare Runs B, C, and D by looking at the GW and magnetic energy
spectra for step I during $-0.9\leq\eta\leq 1$, where we also compare with
electric energy spectra. As in BS, we try to collapse the spectra on top of
each other by plotting the functions
$\phi_{\lambda}(\kappa)=(\eta+1)^{-(p_{\lambda}+1)}E_{\lambda}(k,\eta),$ (10)
where $\lambda={\rm E}$, ${\rm M}$, or ${\rm GW}$ for electric, magnetic, and
GW energies, respectively, $p_{\lambda}$ are exponents characterizing the
speed of growth, and
$\kappa(\eta)=k/k_{*}(\eta)$ (11)
is the wavenumber normalized by the time-dependent value $k_{*}(\eta)$, near
which the EM energy spectra peak. We show the
result in Figure 2, where we plot both $E_{\lambda}(k,\eta)$ and
$\phi_{\lambda}(\kappa)$ for Run B in panels (a) and (b), Run C in panels (c)
and (d), and Run D in panels (e) and (f). We see that the tendency of the
lines to collapse on top of each other is better for the GW spectra than for
the electric and magnetic spectra. This shows that those latter two are not
shape-invariant. This is clearly different from the nonhelical case; see the
corresponding Figure 3 of BS.
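The collapse defined by Equations (10) and (11) can be illustrated with a toy, exactly shape-invariant spectrum. All functional forms and numbers below are hypothetical, chosen only so that the collapse is exact by construction:

```python
import math

def phi(E_val, eta, p):
    """Collapsed spectrum, Equation (10): phi = (eta+1)**(-(p+1)) * E(k, eta)."""
    return (eta + 1.0)**(-(p + 1.0)) * E_val

p = 3.0
def kstar(eta):            # toy peak wavenumber ~ 1/(eta+1), mimicking Equation (6)
    return 10.0 / (eta + 1.0)
def g(kappa):              # toy spectral shape with a peak near kappa = 1
    return kappa**3 * math.exp(-kappa**2)
def E(k, eta):             # shape-invariant spectrum by construction
    return (eta + 1.0)**(p + 1.0) * g(k / kstar(eta))

# At fixed kappa = k/kstar(eta), phi is identical at all times:
vals = [phi(E(0.5 * kstar(eta), eta), eta, p) for eta in (-0.5, 0.0, 1.0)]
print(vals)  # all equal to g(0.5)
```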
Interestingly, except for the GW spectra, which show power law scalings with
$E_{\rm GW}(k)\propto k$ for $k<2k_{*}(1)$ and $E_{\rm GW}(k)\propto k^{-46}$
for $k>2k_{*}(1)$ (for Run B), the EM spectra deviate from power law scaling
and show a more peaked spectrum for $k<k_{*}(1)$. The growth is fastest in the
model with $\beta=7.3$, as is indicated by the spectra spanning about forty
orders of magnitude. For Runs C and D, the spectra are progressively more
shallow. For the GW spectrum of Run D, there is a dip at $\kappa\approx 0.17$
(and at decreasing values of $k$ as time increases). This coincides with the
wavenumber where $k^{2}=a^{\prime\prime}/a$ and thus, where the solution to
Equation (7) changes from oscillatory to temporally growing behavior. This
feature is now so prominent because the growth of the magnetic field is
slower than before.
Visualizations of the magnetic field on the periphery of the computational
domain are shown in Figure 3 for Runs B–D. We see that the typical length
scales increase with time, but again faster for Runs B and C than for Run D.
To study the temporal growth for specific values of $k$, we show in Figure 4
the dependencies of $E_{\rm E}(\eta,k)$, $E_{\rm M}(\eta,k)$, and $E_{\rm
GW}(\eta,k)$ separately for $k=2$ and $10$ for Run C, where the departure from
shape-invariant behavior appears to be the strongest. We clearly see that the
growth of $E_{\rm GW}(\eta,k)$ is the same for all values of $k$. This is in
agreement with the visual impression from Figure 2. It is also the same at
early and late times. This is not the case for the electric and magnetic
spectra, where we have a growth proportional to $a^{7.5}$ for $k=2$ and small
values of $a$, but a faster growth $\propto a^{20}$ for $k=10$ and
$a(\eta)>0.1$.
When the mode corresponding to a certain wavenumber $k$ is well outside the
horizon, the $f^{\prime\prime}/f$ term within the round brackets of Equation
(4) dominates over the other two terms, and the amplitude of the mode grows in
time. Once the mode is about to enter the horizon, the second term also comes
into the picture and further enhances the growth rate for $\gamma=1$. This
behavior is shown in Figure 4.
To understand the nearly shape-invariant scaling of $E_{\rm GW}(\eta,k)$, it
is important to look at spectra of the stress. This is done in Figure 5, where
we show spectra of the stress, decomposed into tensor, vector, and scalar
modes (Mukhanov et al., 1992). The tensor mode is the transverse-traceless
contribution to the stress, while the vector and scalar modes are composed of
vortical and irrotational constituents, respectively; see Brandenburg et al.
(2021b) for such a decomposition of data from earlier GW simulations. We see
that at all times during step I, the scalar and vector modes are subdominant.
In particular the peak of the stress spectrum is to a large fraction composed
of the tensor mode only. As expected from the work of Brandenburg & Boldyrev
(2020), its spectrum follows a $k^{2}$ subrange to high precision.
Figure 5: Spectra of the total stress at $\eta=-0.2$, $0.1$, $0.5$, and $1$,
decomposed into tensor (solid black), vector (dashed red), and scalar modes
(dotted blue) for Run B of Figure 2.
Figure 6: Early times in the beginning of the radiation-dominated phase for
(a) Run B ($\eta=1.06$, 1.2, 1.4, 1.6, and 2.1), (c) Run C ($\eta=1.06$, 1.9,
2.7, 3.3, and 4.1), and (e) Run D ($\eta=1.6$, 2.1, 3.6, and 6.1). $E_{\rm
M}(k)$, $E_{\rm K}(k)$, and $E_{\rm GW}(k)$ are shown as dashed red, dotted
green, and solid blue lines, respectively. The last times are shown as thick
lines. Later times are shown separately for (b) Run B ($\eta=2$, 6, 16, and
52), (d) Run C ($\eta=11$, 26, and 52), and (f) Run D ($\eta=11$, 26, 51, 101,
and 213). The red and blue vertical dashed-dotted lines go through
$k_{*}(1)$ and $2k_{*}(1)$, respectively. Again, thick lines denote the last
time. The arrow in panel (d) highlights the sense of time, where $E_{\rm
GW}(k)$ declines at large values of $k$.
Comparing the different models, we see that for $\kappa\ll 1$, we reproduce
the initial scalings $\phi_{\rm M}\propto\kappa^{3}$ for Run B and
$\propto\kappa^{5}$ for Run D, with a shallower scaling by a factor
$\kappa^{2}$ for the electric fields, in particular the $\phi_{\rm
E}\propto\kappa^{-3}$ scaling for Run C. For $\kappa\gg 1$, we have a
progressively shallower decline $\propto\kappa^{-46}$, $\kappa^{-20}$, and
$\kappa^{-4}$ as we go from Run B to Runs C and D.
### 3.3 Spectra in step II
In step II, a velocity field emerges, driven by the Lorentz force. This causes
the magnetic field to develop small-scale structure, as can be seen from
Figure 6(a). This leads to a turbulent cascade whose spectrum is here
proportional to $k^{-3}$ for large $k$; see Figure 6(b). Contrary to BS, the
new GW spectrum now shows a flat power-law scaling for $k<2k_{*}(1)$ with
$E_{\rm GW}(k)\propto k^{0}$, i.e., $kE_{\rm GW}(k)\propto k^{1}$. Such a
scaling was already found by Roper Pol et al. (2020b). The reason for this
lies in the direct correspondence with the relevant magnetic stress: for a
blue-tilted magnetic energy spectrum, where $E_{\rm M}(k)$ rises with an
exponent larger than two, the stress itself always has a white-noise spectrum
and cannot be steeper than that. This was shown by Brandenburg & Boldyrev
(2020), who just considered the stress spectrum and ignored temporal aspects,
i.e., they did not consider solutions to the GW equation.
Figure 7: (a) $h_{0}^{2}\Omega_{\rm GW}(f_{\rm phys})$ and (b) $h_{c}(f_{\rm
phys})$ for Runs A–D with $T_{\rm r}$ ranging from $150\,{\rm MeV}$ to $3\times
10^{5}\,{\rm GeV}$. In (a), dashed lines denote nonhelical runs and dashed-
dotted lines show the result for $g_{\rm r}=62$. In (b), the dotted lines
denote $1.26\times 10^{-18}\sqrt{h_{0}^{2}{\Omega}_{\rm GW}}\,(1\,{\rm
Hz}/f_{\rm phys})$ (Maggiore, 2000).
As in BS, the GW spectrum shows a marked drop by about six orders of magnitude
for Run B, which is slightly more than what was found in BS. We return to this
in Section 3.4, but we note at this point that for $k\gg 2k_{\rm*}(1)$ in Runs
B and C, the spectral GW energy beyond the drop, which is very small already,
becomes even smaller as time goes on. This is indicated by the arrow in Figure
6(d). Eventually, the spectrum settles at a level close to the thick blue lines
in Figure 6, which marks the last time. Furthermore, at late times, Figure
6(b) shows clear inverse cascading with the peak of the magnetic spectra
traveling towards smaller $k$; see the red dashed lines in Figure 6. The
height of the peak is expected to stay unchanged (Brandenburg & Kahniashvili,
2017), but our present runs show a small decline with time. This is
predominantly a consequence of the conductivity still not being high enough.
Larger conductivity would require larger numerical resolution, which would
begin to pose computational memory problems.
In step II, the GW spectrum is now fairly flat, $E_{\rm GW}\propto k^{0}$ for
Runs B and C, and with a slight rise $\propto k$ for Run D. Therefore, the GW
energy per logarithmic wavenumber interval, normalized by the critical energy
density for a spatially flat universe, is ${\Omega}_{\rm GW}\propto kE_{\rm
GW}\propto k^{1}$ for Run B (perhaps even slightly shallower for Run C),
and $\propto k^{2}$ for Run D. Thus, as already seen in many earlier numerical
simulations of turbulence-driven GWs (Roper Pol et al., 2020b, BHKRS), this is
shallower than the previously expected $k^{3}$ scaling (Gogoberidze et al.,
2007; Okano & Fujita, 2021). In the present case, during the onset of MHD
turbulence, the spectrum has changed from a $k^{1}$ spectrum to a $k^{0}$
spectrum. As explained in Appendix F of BS, this is associated with the
discontinuous behavior of $f^{\prime}/f$ and $f^{\prime\prime}/f$. They
concluded that the change from a $k^{1}$ spectrum to $k^{0}$ occurs when the
growth of EM energy has stopped. This is at the same time when
$f^{\prime}=f^{\prime\prime}=0$, but it is not a direct consequence of the
discontinuity at $\eta=1$ and therefore not an artifact.
We see clear inverse cascading in the magnetic energy spectra with the peak of
the spectrum moving toward smaller $k$. This has been investigated in detail
in many earlier papers (Hatori, 1984; Biskamp & Müller, 1999); see Brandenburg
& Kahniashvili (2017) for a demonstration of the self-similarity of the
magnetic energy spectra. The conservation of mean magnetic helicity density,
$\langle\mathbf{A}\cdot\mathbf{B}\rangle$, implies a growth of the correlation
length and a corresponding decay of the mean magnetic energy density such that
$\langle\mathbf{A}\cdot\mathbf{B}\rangle\approx\pm B_{\rm rms}^{2}\xi_{\rm
M}\approx{\rm const}$ for fully helical turbulence, where the two signs
apply to positive and negative magnetic helicities, respectively.
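As a minimal numerical illustration (a sketch, not simulation output), the decay laws used later in Section 3.6, $B_{\rm rms}\propto\eta^{-1/3}$ and $\xi_{\rm M}\propto\eta^{2/3}$, indeed keep $B_{\rm rms}^{2}\xi_{\rm M}$ constant:

```python
import numpy as np

# Sketch: helicity conservation under the inverse-cascade decay laws.
# B_rms ~ eta^(-1/3) and xi_M ~ eta^(2/3), normalized to unity at eta = 1,
# so B_rms^2 * xi_M, a proxy for <A.B> in fully helical turbulence,
# stays constant.
eta = np.logspace(0, 8, 50)   # conformal time, arbitrary units
B_rms = eta ** (-1 / 3)
xi_M = eta ** (2 / 3)

helicity_proxy = B_rms**2 * xi_M
assert np.allclose(helicity_proxy, 1.0)
```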
### 3.4 Observable spectra
In Figure 7, we show the final spectra of ${\Omega}_{\rm GW}$ and $h_{\rm c}$
versus temporal frequency $f_{\rm phys}=kH_{*}/2\pi a_{0}$ for the present
time. The frequency $f_{\rm phys}$ is not to be confused with the function
$f(a)$, defined in Equation (3), which does not carry any subscript. Both the
strain and the energy spectra are scaled for the corresponding values of
$T_{\rm r}$ between $150\,{\rm MeV}$ and $3\times 10^{5}\,{\rm GeV}$. We have
indicated spectra for the nonhelical case as dashed lines.
The spectra in Figure 7 show different shapes of the ${\Omega}_{\rm GW}$
spectra for helical and nonhelical runs. This may, to some extent, be caused
by the larger values of $k_{\rm*}(1)$ in these helical runs. Furthermore, the
drop beyond the peak is stronger in the helical case. This was also found in
previous simulations (Roper Pol et al., 2020b; Brandenburg et al., 2021a), and
may be related to the presence of a weaker forward cascade in favor of a
stronger inverse cascade in helical turbulence (Pouquet et al., 1976). Note
also that for Run B, which has the largest value of $\beta$, the break in the
${\Omega}_{\rm GW}\propto f_{\rm phys}$ scaling is much sharper in the case
with helicity than without, where the spectra are much rounder.
In the model with $T_{\rm r}=150\,{\rm MeV}$, we compare the GW spectra
generated both before and after the QCD phase transition, where $g_{\rm r}$
changes by a factor of about four from 62 to about 15. This leads to a drop in
frequency by a factor $\propto g_{\rm r}^{1/2}$ of about two and an increase
in GW energy by a factor $\propto g_{\rm r}^{1/3}$ of about $1.6$.
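As a quick arithmetic check (a sketch, not part of the simulation pipeline), both quoted factors follow directly from the ratio of the two $g_{\rm r}$ values:

```python
# Sketch: scaling factors across the QCD phase transition, where the
# number of relativistic degrees of freedom drops from 62 to about 15.
g_before, g_after = 62.0, 15.0
ratio = g_before / g_after            # factor of about four

freq_factor = ratio ** 0.5            # frequency drop, prop. to g_r^(1/2)
energy_factor = ratio ** (1.0 / 3.0)  # GW-energy increase, prop. to g_r^(1/3)

print(f"frequency drops by ~{freq_factor:.2f}, energy rises by ~{energy_factor:.2f}")
```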
Figure 8: (a) ${\cal P}_{\rm GW}(k)$ and (b) ${\cal P}_{\rm M}(k)$ for Run B
(with $k_{1}=1$; blue solid line) and a corresponding run with $k_{1}=0.2$
(red dashed-dotted line), as well as for Run B1 of BHKRS (orange dashed line).
The vertical dashed-dotted lines mark the positions of $k_{\rm*}(1)$ in (a)
and (b) and of $2k_{\rm*}(1)$ in (a).
We see that the high-$T_{\rm r}$ model differs from the other models with
lower $T_{\rm r}$ in several respects. The drop in GW energy above the maximum
is now absent and the inertial-range slope is no longer $\propto f_{\rm
phys}$, but $\propto f_{\rm phys}^{2}$. This is mainly caused by the small
value of $\beta$, which results in a slower growth. At the same time, the
spectral peak at $k_{\rm*}(\eta)$ still moves to smaller values as before.
This causes the slope for $k>2k_{\rm*}(1)$ to be shallower than in the other
models with larger values of $\beta$. This slope is also inherited in step II
and is then no longer much affected by the emerging turbulence.
The model of Okano & Fujita (2021) with $T_{\rm r}=460\,{\rm GeV}$ corresponds
to our Run D. They also studied GW production, but they did not include the
turbulent phase after reheating. Comparing our Figure 7 with Figure 5 of Okano
& Fujita (2021), we see that the peak values are slightly different. Our
spectral peak is at approximately $h_{0}^{2}{\Omega}_{\rm GW}\approx
10^{-11}$, while their peak value without the $h_{0}^{2}$ factor is
${\Omega}_{\rm GW}\approx 10^{-12}$. Furthermore, as we saw already from
Figure 6, the slope of $E_{\rm GW}(k)$ was slightly negative close to the
peak. Therefore, ${\Omega}_{\rm GW}(k)\propto kE_{\rm GW}(k)$ is now
nearly flat. This is quite different from Figure 5 of Okano & Fujita (2021),
which had a clear ${\Omega}_{\rm GW}(k)\propto k^{3}$ range below the peak.
The frequency corresponding to the peak is also slightly different, but this
is to some extent explained by their frequency lacking a $2\pi$ factor.
### 3.5 Circular polarization
In Figure 8(a), we plot the time-averaged fractional circular polarization
spectrum of GWs, ${\cal P}_{\rm GW}(k)$, for Run B. It is defined as (see
Equation B.17 of Roper Pol et al., 2020a)
${\cal P}_{\rm GW}(k)=\left.\int 2\,{\rm Im}\,\tilde{h}_{+}\tilde{h}_{\times}^{*}\,k^{2}\,{\rm d}\Omega_{k}\right/\int\left(|\tilde{h}_{+}|^{2}+|\tilde{h}_{\times}|^{2}\right)k^{2}\,{\rm d}\Omega_{k}.$ (12)
In Figure 8(b), we show the fractional magnetic helicity spectrum,
${\cal P}_{\rm M}(k)=kH_{\rm M}(k)/2E_{\rm M}(k),$ (13)
where $H_{\rm M}(k)$ is the magnetic helicity spectrum, normalized such that
$\int H_{\rm M}(k)\,{\rm d}{}k=\langle\mathbf{A}\cdot\mathbf{B}\rangle$.
Unlike the GW spectrum, which is statistically stationary so that we can take a
long-term average, the magnetic field develops a forward cascade and decays at
the same time. During that time, the kinetic energy density has a maximum,
which marks the moment when the turbulent cascade has developed. We have
therefore decided to take a short-term average of the magnetic helicity and
energy spectra around the time when the kinetic energy density is within about
70% of its maximum value.
We also compare with the corresponding spectrum from Run B1 of BHKRS with CME
(not to be confused with Run B1 of BS). Except for a hundredfold shift toward
larger $k$, the shapes of ${\cal P}_{\rm GW}(k)$ are similar in that both have
a plateau with ${\cal P}_{\rm GW}(k)\approx 1$ and a similar decline toward
smaller values of $k$.
Toward larger values of $k$, we see a drop in ${\cal P}_{\rm GW}(k)$ that is
superficially similar to the drop in GW energy, at least for the present runs.
In the runs driven by the CME, such a drop is absent. However, the drop in the
GW energy spectra for large $k$ is probably not related to the drop seen in
the polarization spectra, where it appears for a larger $k$ value of nearly
$4k_{\rm*}(1)$. Furthermore, at about $k=k_{\rm*}(1)$, we rather see that
${\cal P}_{\rm GW}(k)$ declines toward smaller $k$ values, i.e., for
$k<2k_{\rm*}(1)$.
Table 3: Present day values for Runs A–D using parameters from Table LABEL:Tsummary as input, assuming always ${\cal E}_{\rm EM}=0.01$.
Run | $T_{\rm r}$ [GeV] | $\eta_{\rm eq}$ | $\xi_{\rm M}^{*}$ [Mpc] | $\xi_{\rm M}^{\rm eq}$ [Mpc] | $B_{\rm rms}^{*}$ [G] | $B_{\rm rms}^{\rm eq}$ [G] | ${\cal E}_{\rm GW}$ | $h_{0}^{2}{\Omega}_{\rm GW}$
---|---|---|---|---|---|---|---|---
A | $0.15$ | $3.8\times 10^{8}$ | $5.8\times 10^{-8}$ | $3.0\times 10^{-2}$ | $3.0\times 10^{-7}$ | $4.2\times 10^{-10}$ | $2.2\times 10^{-6}$ | $4.3\times 10^{-11}$
B | $10$ | $2.8\times 10^{10}$ | $3.2\times 10^{-10}$ | $2.9\times 10^{-3}$ | $2.9\times 10^{-7}$ | $9.6\times 10^{-11}$ | $5.3\times 10^{-7}$ | $9.2\times 10^{-12}$
C | $460$ | $1.4\times 10^{12}$ | $8.0\times 10^{-12}$ | $9.9\times 10^{-4}$ | $3.8\times 10^{-7}$ | $3.4\times 10^{-11}$ | $5.3\times 10^{-7}$ | $8.5\times 10^{-12}$
D | $3\times 10^{5}$ | $9.0\times 10^{14}$ | $4.5\times 10^{-14}$ | $4.2\times 10^{-4}$ | $3.4\times 10^{-7}$ | $3.5\times 10^{-12}$ | $1.4\times 10^{-5}$ | $2.2\times 10^{-10}$
We have also confirmed that the decline below $k=k_{\rm*}(1)$ is not related
to the finite domain size by performing a simulation with a five times larger
domain, where $k_{1}=0.2$ instead of $k_{1}=1$. By comparing
these two runs, we recovered essentially the same ${\cal P}_{\rm GW}(k)$
profile. This is shown in Figure 8 as the red dashed-dotted line, which agrees with
the blue one for $k_{1}=1$ for not too small $k$ values. In particular, we see
that there is evidence for a linear scaling of the fractional polarization,
i.e., ${\cal P}_{\rm GW}(k)\propto k$.
Comparing with the fractional magnetic helicity spectrum, ${\cal P}_{\rm
M}(k)$, we see that it also declines toward smaller $k$, but this happens more
slowly. In fact, for Run B, where ${\cal P}_{\rm GW}(k)$ already declines,
${\cal P}_{\rm M}(k)$ is just reaching its maximum. For larger values of $k$,
we see that ${\cal P}_{\rm M}(k)$ already declines for Run B while ${\cal
P}_{\rm GW}(k)$ is still at its plateau. However, for the CME runs, no decline
in ${\cal P}_{\rm M}(k)$ is seen.
### 3.6 Present day values
The values of ${\cal E}_{\rm M}$ listed in Table LABEL:Tsummary give the
magnetic energy as a fraction of the radiation energy at $\eta=1$. To obtain the
comoving rms magnetic field in gauss, we set $B_{\rm rms}^{2}/8\pi={\cal
E}_{\rm M}\,(\pi^{2}g_{0}/30)\,(k_{\rm B}T_{0})^{4}/(\hbar c)^{3}$, where
$g_{0}=3.94$ and $T_{0}=2.7\,{\rm K}$ is the present day temperature, $k_{\rm
B}$ is the Boltzmann constant, and $\hbar$ is the reduced Planck constant. By
using ${\cal E}_{\rm EM}=0.01$ in all cases, we can compute ${\cal E}_{\rm M}$
by taking the ${\cal E}_{\rm M}/{\cal E}_{\rm EM}$ ratios from Table
LABEL:Tsummary for Runs A–D. Likewise, we use Equation (9) with the $q_{\rm
EM}$ values listed in that table and compute $h_{0}^{2}{\Omega}_{\rm GW}$ from
${\cal E}_{\rm GW}$ by multiplying with the appropriate dilution factor.
At $\eta=1$, the typical magnetic correlation length is taken to be $\xi_{\rm
M}=c/H_{*}k_{\rm*}(1)$. To compute the present values, we assume turbulent
inverse cascading at constant magnetic helicity until the matter-radiation
equality using $B_{\rm rms}^{\rm eq}=B_{\rm rms}^{*}\eta_{\rm eq}^{-1/3}$ and
$\xi_{\rm M}^{\rm eq}=\xi_{\rm M}^{*}\eta_{\rm eq}^{2/3}$. The value of
$\eta_{\rm eq}$ is obtained by using $g_{\rm eq}^{1/3}a_{\rm eq}T_{\rm
eq}=g_{\rm r}^{1/3}a_{\rm r}T_{\rm r}$, implied by the adiabatic evolution of
the Universe and $a_{\rm eq}=\eta_{\rm eq}$, where we take $T_{\rm
eq}=1\,{\rm eV}$ and $g_{\rm eq}=3.94$. The results are listed in Table
LABEL:Ttoday, where we use the superscripts ‘$*$’ and ‘eq’ to indicate comoving
values at reheating and matter–radiation equality, respectively.
We emphasize here that, unlike the magnetic field, which can have much larger
length scales owing to inverse cascading (Pouquet et al., 1976), this is not
the case for GWs. This is because GWs are governed by the imprint from the
time when the stress was maximum.
## 4 Conclusions
The present work has demonstrated that helical inflationary magnetogenesis
modifies the nonhelical case in such a way that the electric and magnetic
power spectra become strongly peaked at a finite wavenumber, corresponding
typically to about a tenth of the horizon scale at $\eta=1$. Such a distinct
wavenumber does not exist in the nonhelical case. Except for the scale-
invariant scaling in Run C at superhorizon scales, this leads to extremely
blue spectra of electric and magnetic fields. Nevertheless, the total stress
still always has a purely white-noise spectrum, and therefore the GW field also
has a white-noise spectrum below its peak value. Furthermore, for runs with
large values of $\beta$, the onset of the drop toward larger frequencies is
much sharper in runs with helicity than without. These aspects can have
observational consequences. In particular, there would be more power at small
wavenumbers and frequencies. On the other hand, for a certain magnetic energy,
helical magnetogenesis produces somewhat weaker GWs than nonhelical
magnetogenesis. However, as we have shown here, the appropriate scaling is not
with ${\cal E}_{\rm M}$, but with ${\cal E}_{\rm EM}$, and therefore this
conclusion is reversed. In fact, the fractional contribution of electric
fields to the stress is much weaker in the helical case than without.
When studying GW generation from the CME, it was anticipated that some general
features or behaviors would carry over to other magnetogenesis scenarios. In
magnetogenesis from the CME, the GW energy was well described by a relation
${\cal E}_{\rm GW}=(q_{\rm M}{\cal E}_{\rm M}/k_{\rm c})^{2}$, where the
efficiency $q_{\rm M}$ depended on the value of the conductivity and it also
depended on which of the two possible regimes one is in. The possibility of
two different regimes seems to be a special property of the CME that has not
yet been encountered in other magnetogenesis scenarios. Also the presence of a
conservation law of total chirality in the CME has no obvious counterpart in
inflationary magnetogenesis, where magnetic helicity conservation is not
obeyed during magnetogenesis in step I.
On the other hand, both the CME and helical inflationary magnetogenesis can
produce circularly polarized GWs. However, the CME operates only on very small
length scales that are in practice much smaller than what is shown in Figure
8, where an unphysically large chiral chemical potential was applied, just to
see what GW strengths would then be possible. This naturally raises the
question whether some combination of CME and inflationary magnetogenesis could
produce either stronger or larger scale magnetic fields. A problem lies in the
fact that the CME requires electric conductivity. It could therefore only be
an effect that operates after inflationary magnetogenesis and during the
radiation-dominated era. It could then enhance the magnetic field, but the
resulting additional magnetic field would then only be of short length scales.
Nevertheless, the preceding inflationary stage could lead to somewhat stronger
fields and could thereby also produce stronger GWs. Another interesting effect
could be the intermediate production of an imbalance of fermions from the
magnetic field produced by inflationary magnetogenesis. This aspect has
recently been explored by Schober et al. (2020), who showed that this effect
is indeed only an intermediate one, because at late times, the chiral
imbalance always gets converted back into magnetic fields.
When comparing plots of ${\cal E}_{\rm GW}$ versus ${\cal E}_{\rm M}$ from
inflationary magnetogenesis, BS showed that a scaling of the
form ${\cal E}_{\rm GW}\propto{\cal E}_{\rm M}^{2}$ is obtained. Our new
results for helical inflationary magnetogenesis explicitly confirm a $1/k_{\rm
c}$ dependence, but here with ${\cal E}_{\rm GW}=(q_{\rm EM}{\cal E}_{\rm
EM}/k_{\rm c})^{2}$, where $q_{\rm EM}$ shows only a very weak dependence on
$\beta$. Here, $k_{\rm c}=k_{\rm*}(1)$ has been used (as in BS), and $q_{\rm
EM}=1.1$–$1.6$ has been found as a fit parameter. Note, however, that the
formula for ${\cal E}_{\rm GW}$ in terms of ${\cal E}_{\rm EM}$ is entirely
empirical. It would be important to produce some more robust analytic
justification or refinements to this expectation.
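A hedged numerical sketch of this empirical relation (the value $k_{\rm c}=10$ is illustrative only, chosen because the spectra peak at roughly a tenth of the horizon scale; it is not taken from the runs):

```python
# Sketch of the empirical relation E_GW = (q_EM * E_EM / k_c)**2.
q_EM = 1.3     # efficiency; found as a fit parameter in the range 1.1-1.6
E_EM = 0.01    # electromagnetic energy fraction (as assumed in Table 3)
k_c = 10.0     # hypothetical k_*(1), illustrative value only

E_GW = (q_EM * E_EM / k_c) ** 2   # comparable in magnitude to Table 3

# the relation can equally be inverted to estimate the efficiency:
q_est = k_c * E_GW**0.5 / E_EM
assert abs(q_est - q_EM) < 1e-12
```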
Table 4: Model parameters for different values of $T_{\rm r}$.
$T_{\rm r}$ | $\alpha$ | $\gamma$ | ${\cal E}_{\rm EM}$ | $H_{\rm f}$ [GeV] | $N_{\rm r}$ | $N$ | $\beta$ | $g_{\rm r}$ | $E_{\rm M}(\eta_{\rm ini},k)$
---|---|---|---|---|---|---|---|---|---
$10\,{\rm GeV}$ | 2 | 1 | 0.07 | $2.3\times 10^{-11}$ | 8.1 | 31.1 | 7.7 | 86 | $\propto k^{3}$
$8\,{\rm GeV}$ | 2 | 1 | 0.01 | $2.8\times 10^{-11}$ | 8.6 | 31.1 | 7.3 | 86 | $\propto k^{3}$
$120\,{\rm MeV}$ | 2 | 1 | 0.01 | $1.2\times 10^{-3}$ | 26.5 | 35.5 | 2.7 | 20 | $\propto k^{3}$
$150\,{\rm MeV}$ | 2 | 1 | 0.006 | $2.7\times 10^{-4}$ | 24.5 | 35.1 | 2.9 | 61.75 | $\propto k^{3}$
$460\,{\rm GeV}$ | -3 | 2.5 | 0.01 | $1.7\times 10^{-8}$ | 7.3 | 32.9 | 3 | 106.75 | $\propto k^{-1}$
$3\times 10^{5}$ GeV | 1 | 1 | 0.01 | $10^{14}$ | 32.1 | 53.4 | 1.7 | 106.75 | $\propto k^{5}$
Of observational interest may also be the profile and slope with which ${\cal
P}_{\rm GW}(k)$ increases at low $k$. Interestingly, the fractional
polarization continues to be nearly 100% for wavenumbers several times larger
than the peak at $2k_{\rm*}(1)$, but shows a decline for smaller $k$.
We thank Tina Kahniashvili and Kandaswamy Subramanian for useful discussions.
Nordita’s support during the program on Gravitational Waves from the Early
Universe in Stockholm in 2019 is gratefully acknowledged. This work was
supported through grants from the Swedish Research Council (Vetenskapsrådet,
2019-04234). We acknowledge the allocation of computing resources provided by
the Swedish National Allocations Committee at the Center for Parallel
Computers at the Royal Institute of Technology in Stockholm and in Linköping.
Software and Data Availability. The source code used for the simulations of
this study, the Pencil Code (Pencil Code Collaboration et al., 2021), is
freely available at https://github.com/pencil-code/. The DOI of the code
version used, v2018.12.16, is https://doi.org/10.5281/zenodo.2315093
(Brandenburg, 2018). The simulation setup and the corresponding data are
freely available at https://doi.org/10.5281/zenodo.5137202; see also
https://www.nordita.org/~brandenb/projects/HelicalMagnetoGenesisGW/
for easier access to the same material as on the Zenodo site.
## Appendix A Relation between $\beta$ and the reheating temperature
We discussed in Section 2.3 various combinations of model parameters $\beta$
and $\gamma$ for a chosen value of $T_{\rm r}$. For the nonhelical case with
$\gamma=0$, details were already given in Appendix A of BS. The expression
corresponding to Equation (A1) of BS is obtained as follows.
Details of the helical magnetogenesis model are explained in SSS. The
expressions below their Equations (23) and (29) represent the solution for the
scaled vector potential $\mathcal{A}_{h}$ during inflation and the matter-
dominated era, respectively, and are given by
$\mathcal{A}_{1h}(\eta)=\frac{e^{-h\pi\alpha/2}}{\sqrt{2k}}\,W_{i\alpha h,\,\alpha+\frac{1}{2}}(2ik\eta),$ (A1)
$\mathcal{A}_{2h}(\zeta)=d_{1}M_{2i\beta h,\,-(2\beta+\frac{1}{2})}(2ik\zeta)+d_{2}M_{2i\beta h,\,2\beta+\frac{1}{2}}(2ik\zeta).$ (A2)
Here $h=\pm 1$, $\zeta$ is a time variable during the matter-dominated era
defined in SSS as $\zeta\equiv\eta-3\eta_{\rm f}$, where $\eta_{\rm f}$ is the
value of conformal time at the end of inflation, and $M$ and $W$ represent the
Whittaker functions of the first and second kind, respectively. The
coefficients $d_{1}$ and $d_{2}$ are obtained by matching
$A_{h}\equiv\mathcal{A}_{h}/f$ and its derivative at the end of inflation. In
SSS, only the $\mathcal{A}_{h}$ in the
superhorizon limit during the matter-dominated era was considered. Since this
solution does not incorporate the extra growth of the modes when they start
entering the horizon (as evident from Figure 2), we consider the full solution
given in Equation (A2) in the present paper. By considering the full solution,
we obtain $d_{1}$ and $d_{2}$ and, further using Equation (29) in Equations
(17) and (18) of SSS, we obtain the magnetic and electric energy densities
during the matter-dominated era. Demanding that the total EM energy be smaller
than the background energy density at the end of inflation, we calculate the
value of the Hubble parameter during inflation, $H_{\rm f}$, for given values
of $T_{\rm r}$, $\alpha$, and ${\cal E}_{\rm EM}$. Further, using these
values, we estimate the value of $\beta\equiv 2N/N_{r}$, where $N$ and $N_{r}$
are the number of $e$-folds during the post-inflationary matter-dominated era
and during inflation, respectively. We provide these values in Table
LABEL:Tbeta2 along with the initial magnetic field spectrum in the
superhorizon limit during the matter-dominated era and the value of the
relativistic degrees of freedom at the beginning of the radiation-dominated
era, $g_{\rm r}(\eta_{*})$.
## References
* Adshead et al. (2016) Adshead, P., Giblin, J. T., Scully, T. R., & Sfakianakis, E. I. 2016, JCAP, 2016, 039, doi: 10.1088/1475-7516/2016/10/039
* Adshead et al. (2018) Adshead, P., Giblin, J. T., & Weiner, Z. J. 2018, PhRvD, 98, 043525, doi: 10.1103/PhysRevD.98.043525
* Amaro-Seoane et al. (2017) Amaro-Seoane, P., Audley, H., Babak, S., et al. 2017, arXiv e-prints, arXiv:1702.00786. https://arxiv.org/abs/1702.00786
* Anand et al. (2019) Anand, S., Bhatt, J. R., & Pandey, A. K. 2019, EPJC, 79, 119, doi: 10.1140/epjc/s10052-019-6619-5
* Anber & Sorbo (2006) Anber, M. M., & Sorbo, L. 2006, J. Cosmology Astropart. Phys, 2006, 018, doi: 10.1088/1475-7516/2006/10/018
* Arzoumanian et al. (2020) Arzoumanian, Z., Baker, P. T., Blumer, H., et al. 2020, ApJ, 905, L34, doi: 10.3847/2041-8213/abd401
* Banerjee & Jedamzik (2004) Banerjee, R., & Jedamzik, K. 2004, PhRvD, 70, 123003, doi: 10.1103/PhysRevD.70.123003
* Barnaby et al. (2011) Barnaby, N., Namba, R., & Peloso, M. 2011, J. Cosmology Astropart. Phys, 2011, 009, doi: 10.1088/1475-7516/2011/04/009
* Biskamp & Müller (1999) Biskamp, D., & Müller, W.-C. 1999, Phys. Rev. Lett., 83, 2195, doi: 10.1103/PhysRevLett.83.2195
* Boyarsky et al. (2012) Boyarsky, A., Fröhlich, J., & Ruchayskiy, O. 2012, Phys. Rev. Lett., 108, 031301, doi: 10.1103/PhysRevLett.108.031301
* Boyarsky et al. (2015) —. 2015, Phys. Rev. D, 92, 043004, doi: 10.1103/PhysRevD.92.043004
* Brandenburg (2018) Brandenburg, A. 2018, Pencil Code, v2018.12.16, Zenodo, doi: 10.5281/zenodo.2315093
* Brandenburg & Boldyrev (2020) Brandenburg, A., & Boldyrev, S. 2020, ApJ, 892, 80, doi: 10.3847/1538-4357/ab77bd
* Brandenburg et al. (2021a) Brandenburg, A., Clarke, E., He, Y., & Kahniashvili, T. 2021a, PhRvD, in press, arXiv:2102.12428. https://arxiv.org/abs/2102.12428
* Brandenburg et al. (1996) Brandenburg, A., Enqvist, K., & Olesen, P. 1996, Phys. Rev. D, 54, 1291, doi: 10.1103/PhysRevD.54.1291
* Brandenburg et al. (2021b) Brandenburg, A., Gogoberidze, G., Kahniashvili, T., et al. 2021b, CQGra, 38, 145002. https://arxiv.org/abs/2103.01140
* Brandenburg et al. (2021c) Brandenburg, A., He, Y., Kahniashvili, T., Rheinhardt, M., & Schober, J. 2021c, ApJ, 911, 110 (BHKRS), doi: 10.3847/1538-4357/abe4d7
* Brandenburg & Kahniashvili (2017) Brandenburg, A., & Kahniashvili, T. 2017, PhRvL, 118, 055102, doi: 10.1103/PhysRevLett.118.055102
* Brandenburg et al. (2017) Brandenburg, A., Kahniashvili, T., Mandal, S., et al. 2017, Phys. Rev. D, 96, 123528, doi: 10.1103/PhysRevD.96.123528
* Brandenburg & Sharma (2021) Brandenburg, A., & Sharma, R. 2021, ApJ, in press, arXiv:2106.03857 (BS). https://arxiv.org/abs/2106.03857
* Campanelli (2009) Campanelli, L. 2009, IJMPD, 18, 1395, doi: 10.1142/S0218271809015175
* Caprini & Sorbo (2014) Caprini, C., & Sorbo, L. 2014, J. Cosmology Astropart. Phys, 2014, 056, doi: 10.1088/1475-7516/2014/10/056
* Caprini et al. (2016) Caprini, C., Hindmarsh, M., Huber, S., et al. 2016, J. Cosmology Astropart. Phys, 2016, 001, doi: 10.1088/1475-7516/2016/04/001
* Christensson et al. (2001) Christensson, M., Hindmarsh, M., & Brandenburg, A. 2001, PhRvE, 64, 056405, doi: 10.1103/PhysRevE.64.056405
* Cornwall (1997) Cornwall, J. M. 1997, PhRvD, 56, 6146, doi: 10.1103/PhysRevD.56.6146
* Demozzi et al. (2009) Demozzi, V., Mukhanov, V., & Rubinstein, H. 2009, JCAP, 8, 025, doi: 10.1088/1475-7516/2009/08/025
* Detweiler (1979) Detweiler, S. 1979, ApJ, 234, 1100, doi: 10.1086/157593
* Domcke et al. (2020) Domcke, V., Ema, Y., & Mukaida, K. 2020, JHEP, 2020, 55, doi: 10.1007/JHEP02(2020)055
* Domcke & Mukaida (2018) Domcke, V., & Mukaida, K. 2018, JCAP, 2018, 020, doi: 10.1088/1475-7516/2018/11/020
* Durrer et al. (2011) Durrer, R., Hollenstein, L., & Jain, R. K. 2011, JCAP, 2011, 037, doi: 10.1088/1475-7516/2011/03/037
* Ellis et al. (2020) Ellis, J., Fairbairn, M., Lewicki, M., Vaskonen, V., & Wickens, A. 2020, J. Cosmology Astropart. Phys, 2020, 032, doi: 10.1088/1475-7516/2020/10/032
* Ferreira et al. (2013) Ferreira, R. J. Z., Jain, R. K., & Sloth, M. S. 2013, J. Cosmology Astropart. Phys, 2013, 004, doi: 10.1088/1475-7516/2013/10/004
* Fujita & Durrer (2019) Fujita, T., & Durrer, R. 2019, JCAP, 2019, 008, doi: 10.1088/1475-7516/2019/09/008
* Fujita et al. (2015) Fujita, T., Namba, R., Tada, Y., Takeda, N., & Tashiro, H. 2015, JCAP, 2015, 054, doi: 10.1088/1475-7516/2015/05/054
* Garretson et al. (1992) Garretson, W. D., Field, G. B., & Carroll, S. M. 1992, PhRvD, 46, 5346, doi: 10.1103/PhysRevD.46.5346
* Gogoberidze et al. (2007) Gogoberidze, G., Kahniashvili, T., & Kosowsky, A. 2007, Phys. Rev. D, 76, 083002, doi: 10.1103/PhysRevD.76.083002
* Hatori (1984) Hatori, T. 1984, JPSJ, 53, 2539, doi: 10.1143/JPSJ.53.2539
* Hobbs et al. (2010) Hobbs, G., Archibald, A., Arzoumanian, Z., et al. 2010, CQGra, 27, 084013, doi: 10.1088/0264-9381/27/8/084013
* Joyce & Shaposhnikov (1997) Joyce, M., & Shaposhnikov, M. 1997, PhRvL, 79, 1193, doi: 10.1103/PhysRevLett.79.1193
* Kahniashvili et al. (2021) Kahniashvili, T., Brandenburg, A., Gogoberidze, G., Mandal, S., & Pol, A. R. 2021, PhRvR, 3, 013193, doi: 10.1103/PhysRevResearch.3.013193
* Kahniashvili et al. (2016) Kahniashvili, T., Brandenburg, A., & Tevzadze, A. G. 2016, PhysS, 91, 104008, doi: 10.1088/0031-8949/91/10/104008
* Kahniashvili et al. (2005) Kahniashvili, T., Gogoberidze, G., & Ratra, B. 2005, Phys. Rev. Lett., 95, 151301, doi: 10.1103/PhysRevLett.95.151301
* Kobayashi & Afshordi (2014) Kobayashi, T., & Afshordi, N. 2014, JHEP, 2014, 166, doi: 10.1007/JHEP10(2014)166
* Kobayashi & Sloth (2019) Kobayashi, T., & Sloth, M. S. 2019, Phys. Rev. D, 100, 023524, doi: 10.1103/PhysRevD.100.023524
* Maggiore (2000) Maggiore, M. 2000, Phys. Rep., 331, 283, doi: 10.1016/S0370-1573(99)00102-7
* Mukhanov et al. (1992) Mukhanov, V. F., Feldman, H. A., & Brandenberger, R. H. 1992, Phys. Rep., 215, 203, doi: 10.1016/0370-1573(92)90044-Z
* Okano & Fujita (2021) Okano, S., & Fujita, T. 2021, J. Cosmology Astropart. Phys, 2021, 026, doi: 10.1088/1475-7516/2021/03/026
* Pencil Code Collaboration et al. (2021) Pencil Code Collaboration, Brandenburg, A., Johansen, A., et al. 2021, JOSS, 6, 2807, doi: 10.21105/joss.02807
* Pouquet et al. (1976) Pouquet, A., Frisch, U., & Leorat, J. 1976, JFM, 77, 321, doi: 10.1017/S0022112076002140
* Ratra (1992) Ratra, B. 1992, ApJ, 391, L1, doi: 10.1086/186384
* Roper Pol et al. (2020a) Roper Pol, A., Brandenburg, A., Kahniashvili, T., Kosowsky, A., & Mandal, S. 2020a, GApFD, 114, 130, doi: 10.1080/03091929.2019.1653460
* Roper Pol et al. (2021) Roper Pol, A., Mandal, S., Brandenburg, A., & Kahniashvili, T. 2021, JCAP, submitted, arXiv:2107.05356. https://arxiv.org/abs/2107.05356
* Roper Pol et al. (2020b) Roper Pol, A., Mandal, S., Brandenburg, A., Kahniashvili, T., & Kosowsky, A. 2020b, Phys. Rev. D, 102, 083512, doi: 10.1103/PhysRevD.102.083512
* Schober et al. (2020) Schober, J., Fujita, T., & Durrer, R. 2020, Phys. Rev. D, 101, 103028, doi: 10.1103/PhysRevD.101.103028
* Sharma et al. (2017) Sharma, R., Jagannathan, S., Seshadri, T. R., & Subramanian, K. 2017, Phys. Rev. D, 96, 083511, doi: 10.1103/PhysRevD.96.083511
* Sharma et al. (2018) Sharma, R., Subramanian, K., & Seshadri, T. R. 2018, Phys. Rev. D, 97, 083503 (SSS), doi: 10.1103/PhysRevD.97.083503
* Sharma et al. (2020) —. 2020, Phys. Rev. D, 101, 103526, doi: 10.1103/PhysRevD.101.103526
* Taiji Scientific Collaboration et al. (2021) Taiji Scientific Collaboration, Wu, Y.-L., Luo, Z.-R., Wang, J.-Y., et al. 2021, CmPhy, 4, 34, doi: 10.1038/s42005-021-00529-z
* Turner & Widrow (1988) Turner, M. S., & Widrow, L. M. 1988, Phys. Rev. D, 37, 2743, doi: 10.1103/PhysRevD.37.2743
* Vachaspati (2001) Vachaspati, T. 2001, Phys. Rev. Lett., 87, 251302, doi: 10.1103/PhysRevLett.87.251302
* Vilenkin (1980) Vilenkin, A. 1980, Phys. Rev. D, 22, 3080, doi: 10.1103/PhysRevD.22.3080
2107.12337
# Giga-voxel multidimensional fluorescence imaging combining single-pixel
detection and data fusion
F. Soldevila Laboratoire Kastler Brossel, École Normale Supérieure – Paris
Sciences et Lettres (PSL) Research University, Sorbonne Université, Centre
National de la Recherche Scientifique (CNRS) UMR 8552, Collège de France, 24
rue Lhomond, 75005 Paris, France. GROC-UJI, Institute of New Imaging
Technologies (INIT), Universitat Jaume I, 12071, Avda. Sos Baynat, s/n,
Castelló, Spain. These authors contributed equally to this work A. Lenz
GROC-UJI, Institute of New Imaging Technologies (INIT), Universitat Jaume I,
12071, Avda. Sos Baynat, s/n, Castelló, Spain. These authors contributed
equally to this work A. Ghezzi Politecnico di Milano, Dipartimento di
Fisica, Piazza L. da Vinci 32, 20133 Milano, Italy. Consiglio Nazionale delle
Ricerche, Istituto di Fotonica e Nanotecnologie, Piazza L. da Vinci 32, 20133
Milano, Italy. A. Farina Consiglio Nazionale delle Ricerche, Istituto di
Fotonica e Nanotecnologie, Piazza L. da Vinci 32, 20133 Milano, Italy. C.
D’Andrea Politecnico di Milano, Dipartimento di Fisica, Piazza L. da Vinci
32, 20133 Milano, Italy. Istituto Italiano di Tecnologia, Center for Nano
Science and Technology, via Pascoli 70/3, 20133 Milano, Italy. E. Tajahuerce
GROC-UJI, Institute of New Imaging Technologies (INIT), Universitat Jaume I,
12071, Avda. Sos Baynat, s/n, Castelló, Spain.
###### Abstract
Time-resolved fluorescence imaging is a key tool in biomedical applications,
as it makes it possible to non-invasively obtain functional and structural
information. However, the large amount of collected data introduces challenges
in both acquisition speed and processing needs. Here, we introduce a novel
technique that reconstructs a giga-voxel 4D hypercube in a fast manner while
measuring only 0.03% of the information. The system combines two single-pixel
cameras and a conventional 2D array detector working in parallel. Data fusion
techniques are introduced to combine the individual 2D and 3D projections
acquired by each sensor into the final high-resolution 4D hypercube, which can
be used to identify different fluorophore species by their spectral and
temporal signatures.
During the last few decades, the amount of data being collected by optical
systems has been growing at an exponential rate. Nowadays, bio-imaging
researchers are not only interested in obtaining high-resolution (over
millions of pixels) images, but also in measuring additional physical
properties of light, such as polarization, wavelength, and fluorescence
lifetimes [1, 2]. Furthermore, state-of-the-art biological research spans from
the study of thin microscopic 2D samples to full organisms in vivo, thus
requiring 3D, fast, and highly-dimensional imaging systems [3, 4].
This increase in the amount of acquired data presents several challenges.
First, imaging systems need to be designed with the capability to sense not
only light intensity, but also other physical parameters (wavelength,
polarization, time-resolved decays on the ps timescale, etc.), and to operate
in real time. Current detector and electronics technology is limited mainly by
the fact that detectors are sensitive only to the intensity of light, and by
the technical constraints of sensor fabrication. Manufacturing places a bound
on the number of pixels that can fit in a given sensor size, and working
conditions (cooling, power supply, etc.) impose trade-offs between the number
of physical parameters that can be measured and any combination of frame rate,
pixel size, sensitivity, quantum efficiency, and pixel number. Another main
challenge is that, even when multidimensional systems can be built with
adequate specifications, the amount of data generated tends to be so large
that bottlenecks in transmission, storage, and computational power limit the
capability of such systems to perform in real time [5].
Recently, single-pixel (SP) imaging systems have been proposed as a way to
tackle some of these limitations. SP cameras operate with a single bucket
detector and a spatial light modulator (SLM). The SLM samples the scene with a
sequence of coded masks, and the total intensity of the overlap between each
mask and the scene is measured with a single-pixel detector [6]. In contrast
with a conventional camera, which uses millions of pixels to provide sharp
images, SP imaging systems shift the spatial sampling process to the SLM. By
doing this, simple but highly specialized detectors can be used, which makes
it possible to build very efficient multidimensional systems [7, 8, 9].
Moreover, image recovery in SP systems is very well suited to signal-processing
techniques, such as compressive sensing or machine learning [10, 11], which
help alleviate the aforementioned data-processing hurdles. However, SP systems
are not free of limitations. As the SLM needs to generate multiple masks to
sample the scene, SP systems are sequential in nature, and thus are bounded by
a trade-off between spatial resolution and frame rate.
Using a different approach, data fusion (DF) techniques aim to combine any
number of individual datasets into one single dataset that provides richer
information than any of the starting ones. In the same way that humans merge
information from sight, smell, or touch to determine whether it is safe to eat
some food, multidimensional data fusion systems are able to provide novel
insights into sample characteristics by combining multispectral, time-resolved,
and/or polarimetric views of the scene. Historically, the main field
of application of DF has been remote sensing, where satellite design imposes
hard constraints on the energy consumption, bandwidth, and number and size of
detectors [12, 13]. Given these limitations, it is quite normal to have
multiple sensors, each one being sensitive to a different spectral range or to
the polarization state of light. After capturing all the data, the fusion
procedure helps to obtain rich chemical and morphological information about
the surface. In the same spirit, there has been a recent surge of interest in
DF in the life sciences, as merging information from different imaging
modalities has proven to give insights that individual sources cannot provide
[14, 15, 16, 17].
In this letter we present a novel technique that combines both the SP and DF
paradigms, making it possible to capture high-spatial-resolution,
multispectral, and time-resolved fluorescence images. Both spectral features
and fluorescence lifetimes provide fundamental insights into the photophysical
processes of many different samples. In particular, emission spectra make it
possible to distinguish among different chemical species, while fluorescence
lifetimes, being strongly dependent on the fluorophore's microenvironment,
provide useful functional information (e.g., pH, temperature, energy transfer,
etc.). The capture process is achieved while still using simple detectors that
individually gather information about a reduced number of dimensions (space,
time, wavelength). Our system relies on the combined use of three different
sensors: two SP cameras capturing multispectral and time-resolved information,
and a conventional array detector capturing high-spatial-resolution images.
After the measurement process, DF techniques are introduced to combine the
individual 2D/3D projections acquired in parallel by each sensor into the
final 4D hypercube. This provides an efficient system that is not bound by
bandwidth and storage limitations, as each individual sensor only measures a
small fraction of the information. Furthermore, the DF procedure is performed
by simply solving a regularized inverse problem via gradient descent, without
requiring the calculation of the Hessian, which typically entails memory
limitations.
Figure 1: Spatio-temporal-spectral data fusion framework. A CMOS camera
acquires a high spatial resolution image with neither temporal nor spectral
resolution. A SP multispectral camera acquires a low spatial, but high
spectral resolution datacube, using a spectrometer as its detector. Last, an
additional SP camera measures a low spatial, but high temporal resolution
datacube, using a fast bucket detector. All three datasets are combined via
regularization to obtain a 4D high resolution spatial, temporal, and spectral
hypercube.
Our system combines the images obtained with two SP cameras with an image
obtained with a CMOS camera (see Fig. 1). Individually, each SP camera
provides either multispectral or time-resolved images with a low spatial
resolution, while the CMOS sensor captures a high-spatial-resolution image of
the sample that is neither spectrally nor temporally resolved. The DF
procedure makes it possible to retain the SP benefits of using simple
specialized detectors while still obtaining high-spatial-resolution images.
This makes it possible to acquire full 4D reconstructions (x, y, wavelength,
time) of a fluorescent sample with multiple fluorophore species.
We model our system in the following way. For each camera, we can formulate a
forward model that represents the acquisition of a projection of the 4D
hypercube ($\mathbf{x}$) over several dimensions. For example, for the single
CMOS image we have $\mathbf{y}_{cmos}=S\cdot T\cdot\mathbf{x}$, where $S$ and
$T$ represent the spectral and temporal integration operators (i.e. $S$ and
$T$, in combination, project the 4D hypercube over the 2D space). In the same
way, we can define forward models for both the spectral and time-resolved SP
cameras. For the spectral camera we have $\mathbf{y}_{spectral}=T\cdot
R_{L}\cdot\mathbf{x}$, where $R_{L}$ is a downsampling operator in the spatial
domain (as the SP cameras acquire low spatial resolution images). Last, for
the time-resolved camera we have $\mathbf{y}_{temporal}=S\cdot
R_{L}\cdot\mathbf{x}$. Given $\mathbf{y}_{cmos}$, $\mathbf{y}_{spectral}$, and
$\mathbf{y}_{temporal}$, the problem then resides in finding an estimate of
the hypercube, ${\mathbf{\hat{x}}}$, that is compatible with all the
individual measurements. To do so, we formulate the following minimization
problem:
$\mathbf{\hat{x}}=\underset{\mathbf{x}}{\text{arg min }}F(\mathbf{x})$ (1)
$\begin{split}F(\mathbf{x})&=\frac{1}{2}\|ST\mathbf{x}-\mathbf{y}_{cmos}\|_{2}^{2}+\frac{1}{2}\alpha\|R_{L}S\mathbf{x}-\mathbf{y}_{temporal}\|_{2}^{2}\\ &+\frac{1}{2}\beta\|R_{L}T\mathbf{x}-\mathbf{y}_{spectral}\|_{2}^{2}.\end{split}$ (2)
The first term in Eq. 2 minimizes the difference between the measurements
obtained with the CMOS camera and the projection of the 4D hypercube over the
2D space. The second term minimizes the difference between the time-resolved
SP measurements and the projection of the 4D hypercube over a low-resolution
3D space (x, y, time). Last, the third term minimizes the difference between
the SP multispectral measurements and the projection of the 4D hypercube over
a low spatial resolution 3D space (x, y, wavelength). Both $\alpha$ and
$\beta$ are regularization parameters that tune the weight of each penalty
function. In order to find the ${\mathbf{\hat{x}}}$ that minimizes Eq. 1, we
use a gradient descent algorithm. Given the gradient of the objective
function:
$\begin{split}\nabla F(\mathbf{x})&=T^{\mathsf{T}}S^{\mathsf{T}}(ST\mathbf{x}-\mathbf{y}_{cmos})+\alpha S^{\mathsf{T}}R_{L}^{\mathsf{T}}(R_{L}S\mathbf{x}-\mathbf{y}_{temporal})\\ &+\beta T^{\mathsf{T}}R_{L}^{\mathsf{T}}(R_{L}T\mathbf{x}-\mathbf{y}_{spectral}),\end{split}$ (3)
we iteratively obtain ${\mathbf{\hat{x}}}$ by repeating
${\mathbf{\hat{x}}}_{n+1}={\mathbf{\hat{x}}}_{n}-\tau\nabla
F(\mathbf{\hat{x}}_{n})$ until the solution converges [18] (see Supplement for
additional information and an outline of the code).
A proposal for the experimental implementation of the system is shown in Fig.
2. A 40 MHz pulsed supercontinuum laser source (Fianium, SC450) spectrally
filtered through a band-pass filter (CW=480 nm,$\pm$5 nm), illuminates the
sample under study, which consists of a plaque with three letters (U, J, and
I). The U character contains the laser dye DCM, painted on a white paper,
while the characters J and I are made of fluorescent plastic slides,
respectively emitting in the green and orange region. The illumination area is
$2.5\times 2.5$ $\mathrm{cm}^{2}$. A CMOS camera is used to acquire an image
of the sample over a single spectral band ($\mathbf{y}_{cmos}$). In parallel,
a relay system images the sample onto the surface of a digital micromirror
device (DMD, Discovery Kit 4100, Vialux). The DMD sequentially codifies the
structured binary masks for SP image acquisition. In order to speed up
acquisition and to improve light efficiency, we use both reflection arms of
the DMD in parallel. In one reflection direction, we place a time-resolved
detector, which makes it possible to follow the temporal evolution of the
fluorescence emission. In the other reflection direction, we combine a
spectrometer with a detector array, which makes it possible to measure the
different spectral components. After all the masks are generated by the DMD,
the signal from each detector can be used to recover a low-spatial-resolution
multispectral ($\mathbf{y}_{spectral}$) or time-resolved
($\mathbf{y}_{temporal}$) image by a simple demultiplexing procedure that can
easily be done on the fly.
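The recovery of each low-resolution SP image from the sequence of bucket-detector values can be illustrated with a short numerical sketch. This is a toy example under idealized assumptions (complete ±1 Hadamard sampling, noise-free signals, one detector value per mask); in practice each ±1 mask is realized on the DMD with binary patterns, and the time-resolved and spectral cameras record one such measurement vector per time bin or spectral channel.

```python
import numpy as np

# Sylvester construction of the 1024 x 1024 Hadamard matrix (one +/-1 mask per row)
H = np.array([[1.0]])
for _ in range(10):
    H = np.block([[H, H], [H, -H]])

n = 32                                  # SP images are 32 x 32, as in the experiment
scene = np.random.rand(n * n)           # flattened low-resolution fluorescence image

# Measurement: each displayed mask yields one total-intensity value at the detector
y = H @ scene

# Recovery ("demultiplexing"): Hadamard matrices satisfy H @ H.T = N I,
# so inversion reduces to a transpose and a rescaling
recovered = (H.T @ y) / (n * n)
image = recovered.reshape(n, n)
```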
Figure 2: Optical implementation of the system. The object is illuminated in
reflectance geometry with a laser beam. The camera records a high-resolution
2D image of the object. An image of the object is also projected on the DMD. A
sequence of Hadamard patterns is codified on the DMD at a high frame rate. For
each pattern, the light emerging from the DMD is collected simultaneously by a
time-resolved bucket detector and a spectrometer coupled with a detector
array.
In our experiments, we acquired a $512\times 512$ px image with the CMOS
camera (Grasshopper3 GS3-U3-23S6M, Point Grey Research). The multispectral SP
camera produced a $32\times 32\times 16$ datacube ($32\times 32$ pixels with
16 spectral channels covering a range between 510 and 650 nm). It consisted of
an imaging spectrometer (Acton, sp-2151i, Princeton Instruments) coupled to a
16-channel Photo-Multiplier Tube (PML16-C, Becker & Hickl). The time-resolved
SP camera is based on a Hybrid-PMT (HPM-100-50, Becker & Hickl) connected to a
Time-Correlated Single-Photon Counting (TCSPC) board (SPC130EM, Becker &
Hickl), which is capable of providing photon time-of-flight histograms over a
temporal window of about 25 ns. The overall data provided by this SP camera is
a $32\times 32\times 256$ datacube ($32\times 32$ pixels with 256 time bins of
48.8 ps each).
Given the nature of SP imaging, both the multispectral and the time-resolved
images share the same point of view of the scene. Nevertheless, the CMOS
camera sees the scene from a different perspective. In order for the DF
algorithm to work, we applied a pre-processing step that consisted of a
spatial registration between the SP images and the CMOS image. This was
performed using the Registration Estimator App (registrationEstimator),
available in Matlab. After the registration was done, a geometrical
transformation was applied to the CMOS image in order to overlap its field of
view with that of the SP images. The spatial projection of the results of each
individual acquisition can be seen in Fig. 3.a. After this procedure, the
three datasets were fed to the DF algorithm, which produced a $512\times
512\times 16\times 256\approx 1$ giga-voxel hypercube. The complete
reconstruction procedure consisted of 17 gradient descent steps, which took
about 40 minutes. The computation was done using Matlab on a PC with an Intel
Core i7-9700 CPU and 64 GB of RAM. A movie showing the individual temporal
evolution of all the spectral channels can be found in Visualization 1.
Figure 3: Time-resolved multispectral results. a) Measured datasets. Top:
CMOS image. Center: spatial projection of the multispectral SP datacube.
Bottom: spatial projection of the time-resolved SP datacube. b) Spatial
projection of the DF-recovered 4D hypercube and temporal-spectral traces for
the different shapes present on the sample (labeled U, J, and I). Insets show
the increased spatial resolution when compared to the SP datasets.
Fig. 3.b shows the DF recovery provided by fusing the three individual
datasets. An increase in the spatial resolution of the images when compared to
the SP measurements can be easily seen. While the improvement might not seem
large, acquiring $512\times 512$ spatial resolution hypercubes with the two SP
systems alone would entail acquisition times 256 times longer (due to the
sequential nature of SP imaging). We also show the temporal-spectral traces
for different regions of the sample. In this visualization we can notice that
the regions with the J and I characters present very similar fluorescence
emission lifetimes, while the regions with the U and I characters have very
similar spectral signatures. By exploiting both spectral and temporal
information, we can identify the three fluorescent species present in the
sample. From the individual datasets alone, it would not be possible to
perform this classification.
Figure 4: Temporal-spectral traces quality estimation. Temporal (top) and
spectral (bottom) traces for the three species present in the sample (U, J,
and I characters). Lines correspond to the reference lifetimes and spectral
signatures present in the sample, while the markers correspond to the values
extracted from our 4D reconstruction. To ease visualization, we only show one
of every two intensity values recovered by the DF algorithm in the emission
lifetimes.
In order to test the quality of our results, we compared the recovered spectra
and fluorescence lifetimes with a reference of the species present in the
sample. For the fluorescence lifetimes, we measured the decay time of each
fluorescent region with a fast detector (1024 temporal bins of 12.2 ps each).
We show both the normalized data extracted from our DF reconstruction and the
reference lifetimes in the top graph of Fig. 4. From each one of the curves,
it is possible to estimate the decay time by fitting the data to an
exponential function. The values extracted from the DF reconstruction for the
U, J, and I characters are $\tau_{U}^{DF}=2.07$ ns, $\tau_{J}^{DF}=9.06$ ns,
and $\tau_{I}^{DF}=10.8$ ns, showing a very good agreement with the reference
decays for the three fluorophores. In the same spirit, we measured the
fluorescence emission spectra for the three fluorophores in the scene using a
high-resolution spectrometer (Hamamatsu TM-VIS/NIR C10083CA-2100), which also
showed excellent agreement with the DF results.
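The exponential fit mentioned above can be sketched as follows, assuming an idealized noise-free mono-exponential decay sampled on the 48.8 ps grid of the time-resolved camera; in practice the measured histogram is noisy and convolved with the instrument response, so a more careful fitting procedure is needed.

```python
import numpy as np

dt = 48.8e-3                       # time-bin width in ns (48.8 ps, as in the experiment)
t = np.arange(256) * dt            # 256 time bins
tau_true = 2.07                    # ns, e.g. the lifetime reported for the U (DCM) region
decay = np.exp(-t / tau_true)      # idealized mono-exponential decay

# Log-linear least-squares fit: ln I(t) = -t / tau + const
slope, _ = np.polyfit(t, np.log(decay), 1)
tau_fit = -1.0 / slope             # recovers tau_true in this noise-free setting
```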
In summary, we have introduced a novel DF-inspired multidimensional SP imaging
system that can be used to identify different fluorescent species by their
spectral and temporal signatures (i.e. their fluorescence spectra and/or
emission lifetimes) and to study their photophysical properties. The system
utilizes both array and SP detectors, combining their strengths while
mitigating their drawbacks. In order to combine the individual datasets
acquired by each camera, we have introduced a straightforward yet powerful DF
recovery algorithm based on the minimization of a cost function that takes
into account all the measurement processes. By doing so, we have demonstrated
that it is possible to obtain high quality results in a fast manner while
actually measuring a very small fraction of the information contained in the
sample. In fact, if we consider the number of measured (M) vs. reconstructed
(N) voxels for our experiments, we can think of the system as a compressive
time-resolved multispectral camera, where the measurement ratio can be defined
as $M.R.=M/N=\frac{512\times 512+32\times 32\times 16+32\times 32\times
256}{512\times 512\times 16\times 256}\approx 0.0003$. In the future, we
envision the use of more sophisticated cost functions introducing additional
information of the system, such as sparsity constraints. This will further
decrease the amount of measured information. While the results shown here
consist of spatial-spectral-temporal information, the technique can be applied
to any system consisting of multiple specialized cameras, and we expect that
the DF paradigm will be useful for the bio-imaging community by also adding
polarization and/or phase information.
## Funding
Ministerio de Ciencia e Innovación (PID2019–110927RB–I00 / AEI /
10.13039/501100011033); Generalitat Valenciana (PROMETEO/2020/029);
Universitat Jaume I (UJIB2018–68); Regione Lombardia NEWMED, POR FESR
2014–2020.
## Acknowledgments
We acknowledge financial support from Laserlab–Europe (Grant Agreement n.
654148, Horizon 2020) through project CUSBO002482. A.J.M. Lenz acknowledges a
grant from Generalitat Valenciana (ACIF/2019/019).
## Disclosures
The authors declare no conflicts of interest.
## Supplementary information
## I Data fusion retrieval algorithm
As described in the main text, we model our system by using a forward model
that contains three different terms, each one representing the measurements by
each sensor. For each camera, the measurements ($\mathbf{y}_{cmos}$,
$\mathbf{y}_{spectral}$, and $\mathbf{y}_{temporal}$) are obtained by
projecting the 4D object into a 1D array of measurements. In order to
implement this process, we define several routines in Matlab that integrate
the 4D object over one or more dimensions (spectral, time) and/or downsample
the information in the spatial domain (as the single-pixel images are low-
resolution versions of the true object). By using these forward operators, we
define an objective function that can be minimized by using gradient descent.
In order to compute the gradient, we also implement several routines to
calculate the adjoints of these operators [19]. All of these, together with a
low-resolution example of our experiments, can be found in [18].
The gradient descent procedure is also implemented in Matlab, with the only
peculiarity being an intermediate step that searches for the best gradient
step size ($\tau$) at each iteration [20]. The pseudocode of the procedure can
be seen in Alg. 1. Here, we tuned the regularization parameters empirically.
However, for more complex forward models (for example, including sparsity
terms), this task could become extremely time consuming. Future experiments
will explore the possibility of using automatic prediction of these
parameters [21, 22, 23], which would speed up the reconstruction process even
with a higher number of regularization terms.
Result: Returns the estimated $\mathbf{\hat{x}}$ by minimizing the objective
function $F(\mathbf{x})$ after a number $numIter$ of gradient descent steps.
In each step an appropriate step size is calculated with a backtracking line
search algorithm based on the Armijo-Goldstein condition.
Set values for the regularization parameters $\alpha$ and $\beta$, and for the
initial step size $\tau_{init}$ (e.g., $\tau_{init}=1$)
Set values for the backtracking line search parameters
$\varepsilon\in(0,1)$ and $\gamma\in(0,1)$ (e.g., $\frac{1}{2}$ for both)
Initialize first guess $\mathbf{\hat{x}_{1}}$ randomly
for _$i=1:numIter$_ do
Calculate gradient: $\mathbf{g_{i}}=\nabla F(\mathbf{\hat{x}_{i}})$
Calculate new step size with a backtracking line search algorithm based on the
Armijo-Goldstein condition:
Initialize step size: $\tau_{i}=\tau_{init}$
while
_$F(\mathbf{\hat{x}_{i}})-F(\mathbf{\hat{x}_{i}}-\tau_{i}\,\mathbf{g_{i}})
<\varepsilon\,\tau_{i}\,\|\mathbf{g_{i}}\|^{2}$_ do
Set $\tau_{i}\leftarrow\gamma\tau_{i}$
end while
Update object estimation in the direction provided by the gradient:
$\mathbf{\hat{x}_{i+1}}=\mathbf{\hat{x}_{i}}-\tau_{i}\mathbf{g_{i}}$
end for
Algorithm 1 Gradient descent algorithm with backtracking line search
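A direct transcription of Alg. 1 into runnable form might look as follows. The quadratic test objective and all parameter values are illustrative stand-ins for the paper's setup, and a small lower bound on $\tau$ is added as a guard that the pseudocode leaves implicit.

```python
import numpy as np

def gd_backtracking(F, gradF, x0, num_iter=50, tau_init=1.0, eps=0.5, gamma=0.5):
    """Gradient descent with an Armijo-Goldstein backtracking line search."""
    x = x0.copy()
    for _ in range(num_iter):
        g = gradF(x)
        tau = tau_init
        # Shrink the step until the sufficient-decrease condition holds
        while F(x) - F(x - tau * g) < eps * tau * np.sum(g * g):
            tau *= gamma
            if tau < 1e-12:        # guard against stalling near a stationary point
                break
        x = x - tau * g
    return x

# Toy objective: F(x) = 0.5 ||A x - b||^2, minimized by the least-squares solution
rng = np.random.default_rng(1)
A, b = rng.standard_normal((20, 10)), rng.standard_normal(20)
F = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
gradF = lambda x: A.T @ (A @ x - b)
xh = gd_backtracking(F, gradF, np.zeros(10), num_iter=500)
```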
## References
* [1] Orth, A., Tomaszewski, M. J., Ghosh, R. N. & Schonbrun, E. Gigapixel multispectral microscopy. _Optica_ 2, 654 (2015).
* [2] Wang, P., Liang, J. & Wang, L. V. Single-shot ultrafast imaging attaining 70 trillion frames per second. _Nature Communications_ 11, 2091 (2020).
* [3] Prevedel, R. _et al._ Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy. _Nature Methods_ 11, 727–730 (2014).
* [4] Fan, J. _et al._ Video-rate imaging of biological dynamics at centimetre scale and micrometre resolution. _Nature Photonics_ (2019).
* [5] Rueden, C. T. & Eliceiri, K. W. Visualization approaches for multidimensional biological image data. _BioTechniques_ 43, S31–S36 (2007).
* [6] Edgar, M. P., Gibson, G. M. & Padgett, M. J. Principles and prospects for single-pixel imaging. _Nature Photonics_ (2018).
* [7] Radwell, N. _et al._ Single-pixel infrared and visible microscope. _Optica_ 1, 285–289 (2014).
* [8] Rousset, F. _et al._ Time-resolved multispectral imaging based on an adaptive single-pixel camera. _Optics Express_ 26, 10550 (2018).
* [9] Soldevila, F., Durán, V., Clemente, P., Lancis, J. & Tajahuerce, E. Phase imaging by spatial wavefront sampling. _Optica_ 5, 164 (2018).
* [10] Duarte, M. F. _et al._ Single-Pixel Imaging via Compressive Sampling. _IEEE Signal Processing Magazine_ 25, 83–91 (2008).
* [11] Jiang, W., Li, X., Peng, X. & Sun, B. Imaging high-speed moving targets with a single-pixel detector. _Optics Express_ 28, 7889 (2020).
* [12] Zhang, J. Multi-source remote sensing data fusion: status and trends. _International Journal of Image and Data Fusion_ 1, 5–24 (2010).
* [13] Khaleghi, B., Khamis, A., Karray, F. O. & Razavi, S. N. Multisensor data fusion: A review of the state-of-the-art. _Information Fusion_ 14, 28–44 (2013).
* [14] Kessler, M. L. Image registration and data fusion in radiation therapy. _The British Journal of Radiology_ 79, S99–S108 (2006).
* [15] Smith, C. Two microscopes are better than one. _Nature_ 492, 293–297 (2012).
* [16] Van de Plas, R., Yang, J., Spraggins, J. & Caprioli, R. M. Image fusion of mass spectrometry and microscopy: a multimodality paradigm for molecular tissue mapping. _Nature Methods_ 12, 366–372 (2015).
* [17] Fatima, A. _et al._ Enhanced-resolution fluorescence lifetime imaging from multiple sensor data fusion. In _Imaging and Applied Optics Congress_ , CW1B.3 (OSA, Washington, DC, 2020).
* [18] Single-pixel 4d data fusion. https://github.com/cbasedlf/SinglePixelDataFusion4D (2021). Online, accessed on 21-July-2021.
* [19] Claerbout, J. _Basic Earth Imaging_ (2008). URL https://books.google.fr/books?id=FdOhDAEACAAJ.
* [20] Backtracking line search. https://en.wikipedia.org/wiki/Backtracking_line_search (2021). Online, accessed on 10-May-2021.
* [21] Liao, H. & Ng, M. K. Blind deconvolution using generalized cross-validation approach to regularization parameter estimation. _IEEE Transactions on Image Processing_ 20, 670–680 (2011).
* [22] Langer, A. Automated Parameter Selection for Total Variation Minimization in Image Restoration. _Journal of Mathematical Imaging and Vision_ 57, 239–268 (2017).
* [23] Liu, S. & Zhang, J. Machine-learning-based prediction of regularization parameters for seismic inverse problems. _Acta Geophysica_ (2021).
|
arxiv-papers
| 2021-07-26T17:20:28 |
2024-09-04T03:07:19.349724
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Fernando Soldevila, Armin Lenz, Alberto Ghezzi, Andrea Farina, Cosimo\n D'Andrea, Enrique Tajahuerce",
"submitter": "Fernando Soldevila Mr",
"url": "https://arxiv.org/abs/2107.12337"
}
|
2107.12338
|
Gluon saturation in proton and its contribution to single inclusive soft gluon
production in high energy proton-nucleus collisions
Ming Li$\star$
Department of Physics, North Carolina State University, Raleigh, NC 27695, USA
*[email protected]
Proceedings for the XXVIII International Workshop
on Deep-Inelastic Scattering and Related Subjects,
Stony Brook University, New York, USA, 12-16 April 2021
10.21468/SciPostPhysProc.?
## Abstract
The leading order single inclusive soft gluon production in high energy
proton-nucleus (pA) collisions has been studied by various approaches for more
than two decades. The first correction due to the gluon saturation in proton
was analytically attempted recently through a diagrammatic approach in which
only partial results were obtained. An important feature of the first
saturation correction is that it includes both initial state and final state
interactions. In this paper, we present the complete result derived from the
Color Glass Condensate framework. Our approach is to analytically solve the
classical Yang-Mills equations in the dilute-dense regime and then use the
Lehmann-Symanzik-Zimmermann (LSZ) reduction formula to obtain gluon production
from classical gluon fields.
## 1 Introduction
Single inclusive gluon production in high energy pA collisions plays an
important role in understanding the vast amount of experimental data from RHIC
and the LHC, including the charged-particle transverse momentum spectrum as
well as multi-particle angular correlation patterns. The leading order result
has been studied for more than two decades [1, 2, 3, 4]. It treats the proton
as a perturbative object while resumming all the multiple interactions with
the nucleus eikonally. There are several limitations to the leading order
result. First, it is symmetric with respect to the momentum change
$\mathbf{k}\leftrightarrow-\mathbf{k}$ and thus, contrary to experimental
findings, always leads to vanishing triangular flow. Second, it assumes no
final state interactions after the scattering, which might not be a good
approximation for describing high-multiplicity events.
Motivated by these considerations, next to leading order corrections,
specifically corrections due to gluon saturation effect in proton are studied.
The saturation correction is different from general perturbative corrections,
which are usually organized in powers of the coupling constant $g$. Figure 1 shows
which, at fixed order of $g$, only capture terms that are enhanced by the
color charge density of the proton $\rho_{P}^{a}(\mathbf{x})$. For example, at
order $g^{3}$ we only consider diagrams enhanced by $g^{3}\rho_{P}^{2}$ and
discard diagrams merely enhanced by $g^{3}\rho_{P}$.
Figure 1: Diagrams illustrating the first saturation correction to single
inclusive gluon production. Each horizontal line represents color charge
density of the proton at different transverse positions. The solid rectangular
bar indicates the Lorentz-contracted nucleus. There are tens of diagrams at
order $g^{3}\rho_{P}^{2}$ and $g^{5}\rho_{P}^{3}$; only two representative
diagrams are shown here.
Formally one can write the single gluon production amplitude as
$\mathcal{M}=\mathcal{M}_{(1)}+\mathcal{M}_{(3)}+\mathcal{M}_{(5)}+\ldots$ (1)
Here $\mathcal{M}_{(1)}$ corresponds to diagrams at order $g\rho_{P}$, while
$\mathcal{M}_{(3)}$ and $\mathcal{M}_{(5)}$ represent diagrams at order
$g^{3}\rho_{P}^{2}$ and $g^{5}\rho_{P}^{3}$, respectively. Both
$\mathcal{M}_{(3)}$ and $\mathcal{M}_{(5)}$ contribute to the first saturation
correction. Previous studies were only able to compute $\mathcal{M}_{(3)}$ [5,
6, 7]. For the first time, we have obtained $\mathcal{M}_{(5)}$ [8, 9]. The
amplitude squared is
$|\mathcal{M}|^{2}=|\mathcal{M}_{(1)}|^{2}+\mathcal{M}_{(1)}^{\ast}\mathcal{M}_{(3)}+\mathcal{M}_{(1)}\mathcal{M}_{(3)}^{\ast}+|\mathcal{M}_{(3)}|^{2}+\mathcal{M}_{(1)}^{\ast}\mathcal{M}_{(5)}+\mathcal{M}_{(1)}\mathcal{M}_{(5)}^{\ast}+\ldots$
(2)
The leading order result comes from $|\mathcal{M}_{(1)}|^{2}$. The next two
terms, $\mathcal{M}_{(1)}^{\ast}\mathcal{M}_{(3)}+c.c.$, vanish upon ensemble
averaging over the color charge density configurations of the proton using
Gaussian-like models such as the McLerran-Venugopalan model. They do not
contribute to single gluon production but will contribute to double and
multiple gluon production. The first saturation correction therefore is
$\frac{dN}{d^{2}\mathbf{k}}\Big{|}_{FSC}=|\mathcal{M}_{(3)}|^{2}+\mathcal{M}_{(1)}^{\ast}\mathcal{M}_{(5)}+\mathcal{M}_{(1)}\mathcal{M}_{(5)}^{\ast}$
(3)
It is proportional to $g^{6}\rho_{P}^{4}$, as compared to the leading order
result, which scales as $g^{2}\rho_{P}^{2}$.
How do we compute the single gluon production amplitude? We work in the Color
Glass Condensate (CGC) framework. First, we obtain the classical gluon fields
produced in high energy pA collisions by solving the classical Yang-Mills
equations in the dilute-dense regime. Second, using the LSZ reduction formula,
we define the asymptotic gluon creation operators for the two independent
polarization modes as
$\begin{split}\hat{a}_{\eta}^{a\dagger}(\mathbf{k})=&-i\tau\sqrt{\frac{\pi}{4}}\left(H_{1}^{(2)}(k_{\perp}\tau)\overleftrightarrow{\partial_{\tau}}\tilde{\beta}^{a}(\tau,\mathbf{k})\right)\Big{|}_{\tau=+\infty},\\\
\hat{a}_{\perp}^{a\dagger}(\mathbf{k})=&-i\tau\sqrt{\frac{\pi}{4}}\left(H_{0}^{(2)}(k_{\perp}\tau)\overleftrightarrow{\partial_{\tau}}\beta^{a}_{\perp}(\tau,\mathbf{k})\right)\Big{|}_{\tau=+\infty}.\\\
\end{split}$ (4)
Here $\tilde{\beta}^{a}(\tau,\mathbf{p})$ and
$\beta_{\perp}^{a}(\tau,\mathbf{p})$ are the two independent scalar modes of
classical gluon fields in momentum space. Their expressions will be given in
the next section. The $H^{(2)}_{0}(x)$ and $H^{(2)}_{1}(x)$ are Hankel
functions. The derivative is defined as
$f_{1}(\tau)\overleftrightarrow{\partial_{\tau}}f_{2}(\tau)=f_{1}(\tau)\partial_{\tau}f_{2}(\tau)-\partial_{\tau}f_{1}(\tau)f_{2}(\tau)$.
The single gluon production thus can be computed by
$\frac{dN}{d^{2}\mathbf{k}}=\frac{1}{(2\pi)^{2}}\Big{(}\hat{a}_{\eta}^{a\dagger}(\mathbf{k})\hat{a}_{\eta}^{a}(\mathbf{k})+\hat{a}_{\perp}^{a\dagger}(\mathbf{k})\hat{a}_{\perp}^{a}(\mathbf{k})\Big{)}.$
(5)
To compute the first saturation correction, eq. (3), it turns out that we only
need the next-to-leading-order solutions of the classical Yang-Mills
equations, $\tilde{\beta}_{(3)}(\tau,\mathbf{x})$ and
$\beta_{\perp(3)}(\tau,\mathbf{x})$.
## 2 Classical Gluon Fields at Next to Leading Order
To obtain the classical gluon fields produced in high energy pA collisions,
one solves the classical Yang-Mills equations in the forward light cone. We
work in the Fock-Schwinger gauge $x^{-}A^{+}+x^{+}A^{-}=0$ so that one can
parameterize the solutions as $A^{+}=x^{+}\beta$, $A^{-}=-x^{-}\beta$ and
$A^{i}=\beta^{i}$. We also assume boost invariance and the solutions are
independent of spatial rapidity $\eta$. The Yang-Mills equations
$D_{\mu}F^{\mu\nu}=0$ (6)
are supplemented with the initial conditions
$\beta(\tau=0,\mathbf{x})=\frac{ig}{2}\left[\beta^{i}_{P}(\mathbf{x}),\beta^{i}_{T}(\mathbf{x})\right],\qquad\beta^{i}(\tau=0,\mathbf{x})=\beta^{i}_{P}(\mathbf{x})+\beta^{i}_{T}(\mathbf{x}).$
(7)
Here $\beta^{i}_{P}(\mathbf{x})$ and $\beta^{i}_{T}(\mathbf{x})$ are the
Weizsäcker-Williams gluon fields of the proton and the nucleus before the
collisions, respectively. In the dilute-dense regime, we treat the proton as
perturbative and solve the equations perturbatively
$\beta(\tau,\mathbf{x})=\sum_{n=0}^{\infty}g^{n}\beta^{(n)}(\tau,\mathbf{x}),\qquad\beta_{i}(\tau,\mathbf{x})=\sum_{n=0}^{\infty}g^{n}\beta_{i}^{(n)}(\tau,\mathbf{x}).$
(8)
Note that both the equations and the initial conditions are to be expanded and
solved order by order. One needs the method of variation of parameters to
solve the resulting inhomogeneous Bessel equations. Furthermore, a crucial
mathematical ingredient is Graf's addition theorem, which expresses a product
of two Bessel functions as an angular integral of a single Bessel function.
The final next-to-leading-order solutions are
$\begin{split}\beta^{(3)}(\tau,\mathbf{k})=&2\beta^{(3)}(\tau=0,\mathbf{k})\frac{J_{1}(k_{\perp}\tau)}{k_{\perp}\tau}-i\int\frac{d^{2}\mathbf{p}}{(2\pi)}\Big{[}b_{\perp}(\mathbf{p}),b_{\eta}(\mathbf{k}-\mathbf{p})\Big{]}\frac{\mathbf{k}\times\mathbf{p}}{p_{\perp}^{2}|\mathbf{k}-\mathbf{p}|^{2}}\\\
&\qquad\times\int_{-\pi}^{\pi}\frac{d\phi}{2\pi}\Big{(}1+\frac{2\mathbf{k}\cdot(\mathbf{k}-\mathbf{p})}{w_{\perp}^{2}-k_{\perp}^{2}}\Big{)}\left(\frac{J_{1}(w_{\perp}\tau)}{w_{\perp}\tau}-\frac{J_{1}(k_{\perp}\tau)}{k_{\perp}\tau}\right),\\\
\end{split}$ (9)
$\begin{split}\beta^{(3)}_{\perp}(\tau,\mathbf{k})=&\beta_{\perp}^{(3)}(\tau=0,\mathbf{k})J_{0}(k_{\perp}\tau)+\frac{i}{k_{\perp}}\int\frac{d^{2}\mathbf{p}}{(2\pi)^{2}}\Big{[}b_{\eta}(\mathbf{p}),b_{\eta}(\mathbf{k}-\mathbf{p})\Big{]}\frac{\mathbf{k}\times\mathbf{p}}{2p^{2}_{\perp}|\mathbf{k}-\mathbf{p}|^{2}}\\\
&\times\int_{-\pi}^{\pi}\frac{d\phi}{2\pi}\Big{(}1+\frac{2\mathbf{p}\cdot(\mathbf{k}-\mathbf{p})}{w_{\perp}^{2}-k_{\perp}^{2}}\Big{)}(J_{0}(w_{\perp}\tau)-J_{0}(k_{\perp}\tau))-\frac{i}{k_{\perp}}\int\frac{d^{2}\mathbf{p}}{(2\pi)^{2}}\Big{[}b_{\perp}(\mathbf{p}),b_{\perp}(\mathbf{k}-\mathbf{p})\Big{]}\\\
&\times\frac{(\mathbf{k}\times\mathbf{p})(-\mathbf{p}\cdot\mathbf{k}+p_{\perp}^{2}+k_{\perp}^{2})}{p^{2}_{\perp}|\mathbf{k}-\mathbf{p}|^{2}}\int_{-\pi}^{\pi}\frac{d\phi}{2\pi}\frac{1}{w_{\perp}^{2}-k_{\perp}^{2}}\left(J_{0}(w_{\perp}\tau)-J_{0}(k_{\perp}\tau)\right),\\\
\end{split}$ (10)
$\begin{split}\Lambda^{(3)}(\tau,\mathbf{k})=&-\frac{i}{k_{\perp}^{2}}\int\frac{d^{2}\mathbf{p}}{(2\pi)^{2}}\Big{[}b_{\eta}(\mathbf{p}),b_{\eta}(\mathbf{k}-\mathbf{p})\Big{]}\frac{\mathbf{k}\cdot(\mathbf{k}-2\mathbf{p})}{4p_{\perp}^{2}|\mathbf{k}-\mathbf{p}|^{2}}\int_{-\pi}^{\pi}\frac{d\phi}{2\pi}\Big{(}1-\frac{p_{\perp}^{2}+|\mathbf{k}-\mathbf{p}|^{2}}{w^{2}_{\perp}}\Big{)}(1-J_{0}(w_{\perp}\tau))\\\
&-\frac{i}{k_{\perp}^{2}}\int\frac{d^{2}\mathbf{p}}{(2\pi)^{2}}\Big{[}b_{\perp}(\mathbf{p}),b_{\perp}(\mathbf{k}-\mathbf{p})\Big{]}\frac{\mathbf{k}\cdot(\mathbf{k}-2\mathbf{p})\mathbf{p}\cdot(\mathbf{k}-\mathbf{p})}{2p_{\perp}^{2}|\mathbf{k}-\mathbf{p}|^{2}}\int_{-\pi}^{\pi}\frac{d\phi}{2\pi}\frac{1}{w^{2}_{\perp}}(1-J_{0}(w_{\perp}\tau)).\\\
\end{split}$ (11)
We have made the decomposition
$\beta_{i}(\tau,\mathbf{k})=\frac{-i\epsilon^{ij}\mathbf{k}_{j}}{k_{\perp}}\beta_{\perp}(\tau,\mathbf{k})-i\mathbf{k}_{i}\Lambda(\tau,\mathbf{k})$
in which $\Lambda(\tau,\mathbf{k})$ is a non-dynamical field. The
$b_{\perp}(\mathbf{p})=-i\epsilon_{ij}\mathbf{p}_{i}\beta_{j}^{(1)}(\tau=0,\mathbf{p})$
and $b_{\eta}(\mathbf{p})=2\beta^{(1)}(\tau=0,\mathbf{p})$ are related to the
leading order initial condition. In addition, here
$w_{\perp}=\sqrt{p_{\perp}^{2}+|\mathbf{k}-\mathbf{p}|^{2}-2p_{\perp}|\mathbf{k}-\mathbf{p}|\cos\phi}$.
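The angular integrals over $\phi$ in eqs. (9)-(11) are handled with Graf's addition theorem. As an illustrative numerical check (not part of the derivation in the text), the zero-harmonic projection predicted by Graf's theorem, $\int_{-\pi}^{\pi}\frac{d\phi}{2\pi}\,J_{0}(w_{\perp})=J_{0}(p_{\perp})J_{0}(|\mathbf{k}-\mathbf{p}|)$ with $w_{\perp}$ as above (taking $\tau=1$ for simplicity), can be verified directly:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def angular_average_j0(p, q):
    """Compute the angular average: int_{-pi}^{pi} dphi/(2 pi) J0(w),
    with w = sqrt(p^2 + q^2 - 2 p q cos(phi))."""
    integrand = lambda phi: j0(np.sqrt(p**2 + q**2 - 2 * p * q * np.cos(phi)))
    val, _ = quad(integrand, -np.pi, np.pi)
    return val / (2 * np.pi)

# Graf's addition theorem predicts the zero-harmonic projection J0(p) J0(q)
p, q = 1.3, 0.7
assert abs(angular_average_j0(p, q) - j0(p) * j0(q)) < 1e-10
```

The same identity underlies the reduction of the convolution kernels in eqs. (9)-(11) to single angular integrals.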
In the above solutions, terms containing commutators represent final state
interactions due to three gluon vertices. Initial state interactions are
included in higher order initial conditions $\beta^{(3)}(\tau=0,\mathbf{k})$
and $\beta_{\perp}^{(3)}(\tau=0,\mathbf{k})$. Another important property of
the solutions is that the color structure and the time dependence are
factorized. These solutions can also be used to compute other physical
quantities of interest such as energy-momentum tensor and angular-momentum
tensor.
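As a sanity check on the homogeneous parts of eqs. (9) and (10): assuming the standard boost-invariant mode equations $\big(\partial_{\tau}^{2}+\frac{3}{\tau}\partial_{\tau}+k_{\perp}^{2}\big)\tilde{\beta}=0$ for the $\eta$-mode and $\big(\partial_{\tau}^{2}+\frac{1}{\tau}\partial_{\tau}+k_{\perp}^{2}\big)\beta_{\perp}=0$ for the transverse mode (an assumption here, inferred from the Bessel-function structure of the solutions), one can verify numerically that $J_{1}(k_{\perp}\tau)/(k_{\perp}\tau)$ and $J_{0}(k_{\perp}\tau)$ solve them:

```python
import numpy as np
from scipy.special import j0, j1

k = 2.0   # transverse momentum k_perp
h = 1e-4  # finite-difference step

def d1(f, x):  # central first derivative
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x):  # central second derivative
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

eta_mode = lambda tau: j1(k * tau) / (k * tau)  # free eta-mode profile
perp_mode = lambda tau: j0(k * tau)             # free transverse-mode profile

tau = 1.7
res_eta = d2(eta_mode, tau) + (3 / tau) * d1(eta_mode, tau) + k**2 * eta_mode(tau)
res_perp = d2(perp_mode, tau) + (1 / tau) * d1(perp_mode, tau) + k**2 * perp_mode(tau)
# both residuals vanish to finite-difference accuracy
```

The $J_{1}(w_{\perp}\tau)/(w_{\perp}\tau)$ and $J_{0}(w_{\perp}\tau)$ structures in the interaction terms are particular solutions of the same operators with shifted momentum $w_{\perp}$.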
## 3 Results: First Saturation Correction to Single Gluon Production
Using the next-to-leading-order solutions for the gluon fields, eqs. (9) and
(10), one can apply the LSZ reduction formula to compute the first saturation
correction to single gluon production. The final results are
$\begin{split}&|\mathcal{M}_{(3)}(\mathbf{k})|^{2}\\\
=&-\frac{1}{\pi}\int_{\mathbf{p},\mathbf{p}_{1},\mathbf{q},\mathbf{q}_{1}}\mathcal{H}_{1}(\mathbf{p},\mathbf{p}_{1},\mathbf{q},\mathbf{q}_{1},\mathbf{k})\rho_{P}^{b_{1}}(\mathbf{p}-\mathbf{p}_{1})T^{b}_{b_{1}b_{2}}\rho_{P}^{b_{2}}(\mathbf{p}_{1})\rho_{P}^{d_{1}}(\mathbf{q}-\mathbf{q}_{1})T^{d}_{d_{1}d_{2}}\rho_{P}^{d_{2}}(\mathbf{q}_{1})\\\
&\qquad\qquad\times\Big{[}U(\mathbf{k}-\mathbf{p})U^{T}(-\mathbf{k}-\mathbf{q})\Big{]}^{bd}\\\
&-\frac{1}{\pi}\int_{\mathbf{p},\mathbf{p}_{1},\mathbf{p}_{2},\mathbf{q},\mathbf{q}_{1}}\mathcal{H}_{2}(\mathbf{p},\mathbf{p}_{1},\mathbf{p}_{2},\mathbf{q},\mathbf{q}_{1},\mathbf{k})\rho_{P}^{b_{1}}(\mathbf{p}_{1})\rho_{P}^{b_{2}}(\mathbf{p}_{2})\rho_{P}^{d_{1}}(\mathbf{q}-\mathbf{q}_{1})T^{d}_{d_{1}d_{2}}\rho_{P}^{d_{2}}(\mathbf{q}_{1})\\\
&\qquad\qquad\times\Big{[}U(\mathbf{k}-\mathbf{p}-\mathbf{p}_{1})T^{a}U^{T}(\mathbf{p}-\mathbf{p}_{2})\Big{]}^{b_{1}b_{2}}U^{da}(-\mathbf{k}-\mathbf{q})+c.c.\\\
&-\frac{1}{\pi}\int_{\mathbf{p},\mathbf{p}_{1},\mathbf{p}_{2},\mathbf{q},\mathbf{q}_{1},\mathbf{q}_{2}}\mathcal{H}_{3}(\mathbf{p},\mathbf{p}_{1},\mathbf{p}_{2},\mathbf{q},\mathbf{q}_{1},\mathbf{q}_{2},\mathbf{k})\rho_{P}^{b_{1}}(\mathbf{p}_{1})\rho_{P}^{b_{2}}(\mathbf{p}_{2})\rho_{P}^{d_{1}}(\mathbf{q}_{1})\rho_{P}^{d_{2}}(\mathbf{q}_{2})\\\
&\qquad\qquad\times\Big{[}U(\mathbf{k}-\mathbf{p}-\mathbf{p}_{1})T^{a}U^{T}(\mathbf{p}-\mathbf{p}_{2})]^{b_{1}b_{2}}\Big{[}U(-\mathbf{k}-\mathbf{q}-\mathbf{q}_{1})T^{a}U^{T}(\mathbf{q}-\mathbf{q}_{2})\Big{]}^{d_{1}d_{2}}.\\\
\end{split}$ (12)
and
$\begin{split}&\mathcal{M}_{(1)}^{\ast}(\mathbf{k})\mathcal{M}_{(5)}(\mathbf{k})+\mathcal{M}_{(1)}(\mathbf{k})\mathcal{M}_{(5)}^{\ast}(\mathbf{k})\\\
=&-\frac{1}{\pi}\int_{\mathbf{p},\mathbf{q},\mathbf{p}_{1},\mathbf{p}_{4}}\mathcal{F}_{1}(\mathbf{p},\mathbf{q},\mathbf{p}_{1},\mathbf{p}_{4},\mathbf{k})\rho_{P}^{b_{1}}(\mathbf{p}_{1})T^{a}_{b_{1}b_{2}}\rho_{P}^{b_{2}}(\mathbf{p}-\mathbf{p}_{1})\rho_{P}^{b_{3}}(\mathbf{q}-\mathbf{p})\rho_{P}^{b_{4}}(\mathbf{p}_{4})\\\
&\qquad\qquad\times\Big{[}T^{a}U(\mathbf{k}-\mathbf{q})U^{T}(-\mathbf{k}-\mathbf{p}_{4})\Big{]}^{b_{3}b_{4}}\\\
&-\frac{1}{\pi}\int_{\mathbf{p},\mathbf{q},\mathbf{p}_{1},\mathbf{p}_{3},\mathbf{p}_{4}}\mathcal{F}_{2}(\mathbf{p},\mathbf{q},\mathbf{p}_{1},\mathbf{p}_{3},\mathbf{p}_{4},\mathbf{k})\rho_{P}^{b_{1}}(\mathbf{p}-\mathbf{p_{1}})T^{b}_{b_{1}b_{2}}\rho_{P}^{b_{2}}(\mathbf{p}_{1})\rho_{P}^{b_{3}}(\mathbf{p}_{3})\rho_{P}^{b_{4}}(\mathbf{p}_{4})\\\
&\qquad\qquad\times
U^{ba}(\mathbf{q}-\mathbf{p})\Big{[}U(\mathbf{k}-\mathbf{q}-\mathbf{p}_{3})T^{a}U^{T}(-\mathbf{k}-\mathbf{p}_{4})\Big{]}^{b_{3}b_{4}}\\\
&-\frac{1}{\pi}\int_{\mathbf{q},\mathbf{p},\mathbf{p}_{1},\mathbf{p}_{2},\mathbf{p}_{3},\mathbf{p}_{4}}\mathcal{F}_{3}(\mathbf{p},\mathbf{q},\mathbf{p}_{1},\mathbf{p}_{2},\mathbf{p}_{3},\mathbf{p}_{4},\mathbf{k})\rho^{b_{1}}_{P}(\mathbf{p}_{1})\rho^{b_{2}}_{P}(\mathbf{p}_{2})\rho^{b_{3}}_{P}(\mathbf{p}_{3})\rho^{b_{4}}_{P}(\mathbf{p}_{4})\\\
&\qquad\qquad\times\Big{[}U(\mathbf{p}-\mathbf{p}_{1})T^{a}U^{T}(\mathbf{q}-\mathbf{p}-\mathbf{p}_{2})\Big{]}^{b_{1}b_{2}}\Big{[}U(\mathbf{k}-\mathbf{q}-\mathbf{p}_{3})T^{a}U^{T}(-\mathbf{k}-\mathbf{p}_{4})\Big{]}^{b_{3}b_{4}}\\\
&+c.c.\end{split}$ (13)
They are expressed as functionals of the proton color charge density
$\rho_{P}^{a}(\mathbf{p})$ and nucleus color charge density
$\rho_{T}^{a}(\mathbf{p})$ (through the adjoint Wilson line
$U^{cd}(\mathbf{p})$). The explicit expressions for the kinematic factors
$\mathcal{H}_{1,2,3}$ and $\mathcal{F}_{1,2,3}$ are given in [9]. They are
functions of transverse momenta independent of $\rho_{P}^{a}(\mathbf{p})$ and
$U^{cd}(\mathbf{p})$. We also used the shorthand notation
$\int_{\mathbf{p}}=\int\frac{d^{2}\mathbf{p}}{(2\pi)^{2}}$.
## 4 Conclusion
We have obtained the first saturation correction to single inclusive soft
gluon production in high energy pA collisions. It incorporates both initial
state interactions and final state interactions within the proton. The
functional form eqs. (12) and (13) in terms of color charge densities could be
directly used to compute double and multiple gluon productions. Further
evaluations of the saturation correction require ensemble averaging over
products of Wilson lines. These can be done either under some appropriate
approximations (such as dipole approximation, large $N_{c}$ approximation,
glasma graph approximation) or through numerical simulations. On the other
hand, further analysis of the first saturation correction might provide
insights on how to compute and resum higher order saturation corrections,
which is ultimately related to the unsolved problem of computing single gluon
production in high energy nucleus-nucleus collisions.
## Acknowledgements
I thank V. Skokov for collaborating on this project and A. Kovner, Y.
Kovchegov, M. Lublinsky, H. Duan for insightful discussions.
### Funding information
We acknowledge support by the DOE Office of Nuclear Physics through Grant No.
DE-SC0020081.
## References
* [1] Y. V. Kovchegov and A. H. Mueller, _Gluon production in current nucleus and nucleon - nucleus collisions in a quasiclassical approximation_ , Nucl. Phys. B 529, 451 (1998), 10.1016/S0550-3213(98)00384-8, hep-ph/9802440.
* [2] B. Z. Kopeliovich, A. V. Tarasov and A. Schafer, _Bremsstrahlung of a quark propagating through a nucleus_ , Phys. Rev. C 59, 1609 (1999), 10.1103/PhysRevC.59.1609, hep-ph/9808378.
* [3] A. Kovner and U. A. Wiedemann, _Eikonal evolution and gluon radiation_ , Phys. Rev. D 64, 114002 (2001), 10.1103/PhysRevD.64.114002, hep-ph/0106240.
* [4] A. Dumitru and L. D. McLerran, _How protons shatter colored glass_ , Nucl. Phys. A 700, 492 (2002), 10.1016/S0375-9474(01)01301-X, hep-ph/0105268.
* [5] I. Balitsky, _Scattering of shock waves in QCD_ , Phys. Rev. D 70, 114030 (2004), 10.1103/PhysRevD.70.114030, hep-ph/0409314.
* [6] G. A. Chirilli, Y. V. Kovchegov and D. E. Wertepny, _Classical Gluon Production Amplitude for Nucleus-Nucleus Collisions: First Saturation Correction in the Projectile_ , JHEP 03, 015 (2015), 10.1007/JHEP03(2015)015, 1501.03106.
* [7] L. McLerran and V. Skokov, _Odd Azimuthal Anisotropy of the Glasma for pA Scattering_ , Nucl. Phys. A 959, 83 (2017), 10.1016/j.nuclphysa.2016.12.011, 1611.09870.
* [8] M. Li and V. V. Skokov, _First saturation correction in high energy proton-nucleus collisions. Part I. Time evolution of classical Yang-Mills fields beyond leading order_ , JHEP 06, 140 (2021), 10.1007/JHEP06(2021)140, 2102.01594.
* [9] M. Li and V. V. Skokov, _First saturation correction in high energy proton-nucleus collisions. Part II. Single inclusive semi-hard gluon production_ , JHEP 06, 141 (2021), 10.1007/JHEP06(2021)141, 2104.01879.
# DBSP_DRP: A Python package for automated spectroscopic data reduction of
DBSP data
Milan Sharma Mandigo-Stoba Christoffer Fremling Mansi M. Kasliwal
DOI: pending • Software Review: https://github.com/openjournals/joss-reviews/issues/3511 • Repository: https://github.com/finagle29/DBSP_DRP/ • Archive: pending • Editor: pending
Submitted: 19 July 2021
License: Authors of papers retain copyright and release the work under a
Creative Commons Attribution 4.0 International License (CC BY 4.0,
http://creativecommons.org/licenses/by/4.0/).
## Summary
In astronomy, the spectrum of light emitted from astrophysical sources is of
great use, allowing astronomers to classify objects and measure their
properties. To measure the spectrum, astronomers use spectrographs, which use
dispersive elements to split the incoming light into its constituent
wavelengths, and then image this dispersed light with a detector, most
commonly a CCD. But to do science with the spectrum, the 2D image in pixel
coordinates taken by the CCD must be converted into a 1D spectrum of flux vs.
wavelength. To increase the signal-to-noise ratio, astronomers can take
multiple exposures of the same object and coadd their 1D spectra to reveal
faint absorption lines or increase the precision with which an important
emission line can be measured. Many spectrographs have multiple paths that
light can go through, and multiple detectors, each measuring a particular part
of the spectrum, to increase the wavelength range that can be captured in a
single exposure, or to allow the high resolution observation of distinct
wavelength ranges. If two detectors cover an overlapping wavelength region, a
consequence of the partial reflectance of the dichroic (a wavelength-dependent
beam splitter), then the spectra from the two detectors need to be spliced
together, combining the light collected by each. This process of converting 2D
CCD images
into 1D spectra is called data reduction.
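The coaddition step described above is, at its core, an inverse-variance weighted average of the exposures on a common wavelength grid. The sketch below is illustrative only; it is not DBSP_DRP's actual implementation (the real coaddition is delegated to PypeIt), and the function name is hypothetical:

```python
import numpy as np

def coadd(fluxes, ivars):
    """Inverse-variance weighted coadd of spectra sampled on a common
    wavelength grid.

    fluxes: (n_exp, n_pix) array of flux values.
    ivars:  matching (n_exp, n_pix) array of inverse variances.
    Returns the coadded flux and its inverse variance per pixel.
    """
    fluxes = np.asarray(fluxes, dtype=float)
    ivars = np.asarray(ivars, dtype=float)
    ivar_tot = ivars.sum(axis=0)  # inverse variances add for independent data
    with np.errstate(invalid="ignore", divide="ignore"):
        flux = np.where(ivar_tot > 0,
                        (fluxes * ivars).sum(axis=0) / ivar_tot,
                        0.0)
    return flux, ivar_tot

# Two noisy exposures of the same source: the coadded inverse variance is
# the sum of the individual ones, so the S/N improves with each exposure.
flux, ivar = coadd([[1.0, 1.2], [1.1, 0.9]], [[4.0, 4.0], [1.0, 1.0]])
# flux -> [1.02, 1.14], ivar -> [5.0, 5.0]
```

Pixels where no exposure contributes get zero weight rather than a division-by-zero artifact, mirroring the masking that real reduction pipelines perform.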
DBSP_DRP is a Python package that provides fully automated data reduction of
data taken by the Double Spectrograph (DBSP) at the 200-inch Hale Telescope at
Palomar Observatory (Oke & Gunn, 1982). The underlying data reduction
functionality to extract 1D spectra, perform flux calibration and correction
for atmospheric absorption, and coadd spectra together is provided by PypeIt
(Prochaska et al., 2020). The new functionality that DBSP_DRP brings is in
orchestrating the complex data reduction process by making smart decisions so
that no user input is required after verifying the correctness of the metadata
in the raw FITS files in a table-like GUI. Though the primary function of
DBSP_DRP is to automatically reduce an entire night of data without user input,
it has the flexibility for astronomers to fine-tune the data reduction with
GUIs for manually identifying the faintest objects, as well as exposing the
full set of PypeIt parameters to be tweaked for users with particular science
needs. DBSP_DRP also handles some of the occasional quirks specific to DBSP,
such as swapped FITS header cards, extra null bytes that make FITS files
non-conformant with the FITS specification, and observation coordinates that
are missing from the file. Additionally, DBSP_DRP contains a
quicklook script for making real-time decisions during an observing run, and
can open a GUI displaying a minimally reduced exposure in under 15 seconds.
Docker containers are available for ease of deploying DBSP_DRP in its
quicklook configuration (without some large atmospheric model files) or in its
full configuration.
## Statement of Need
Palomar Observatory, located near San Diego, CA, is a multinational
observatory with a broad user base. Users come from large and small
institutions, and their observing experience ranges from novice to expert. One
responsibility for serving such a diverse user base is to provide software
data reduction pipelines for the most frequently used instruments, such as the
Palomar Double Spectrograph (DBSP). Although DBSP was commissioned in 1982, it
remains the workhorse instrument of the 200” Hale Telescope. It is used on 42%
of the nights in a year, comprising nearly all of the valuable “dark”
(moonless) time. In previous years, standard astronomical practice left the
data reduction up to the user. However, attitudes in instrument building have
shifted since DBSP was built. The pipeline is now considered an indispensable
component of the astronomical instrument. In fact, the difference between a
good pipeline and a great pipeline means the difference between counting some
of the photons vs. counting all of the photons.
Spectroscopy is a severe bottleneck in time-domain astronomy; currently less
than 10% of discoveries are spectroscopically classified. Without a pipeline,
data reduction is a difficult process, and the standard method without a
pipeline is to use IRAF, a 35-year-old program whose development and
maintenance were discontinued in 2013 and whose use is discouraged by many in
the field, e.g. Ogaz & Tollerud (2018). Needless to say, data reduction sans
pipeline is extremely time-consuming. There is a clear need for a modern and
stable automated data reduction pipeline for DBSP.
During observing runs, one would like to be able to quickly inspect data as it
is taken, in order to ensure that it is of sufficient quality for the desired
science. For objects whose brightness may have changed between a
previous observation and the observing run, the observer may have
uncertainties regarding how long of an exposure is needed to produce quality
data. For very faint objects or objects in crowded fields, the observer may
not even be sure that the telescope is pointed at the right object! A
quicklook functionality, that can do a rudimentary reduction to correct for
instrumental signatures and subtract light from the sky, revealing the spectra
of the objects observed, can answer questions of exposure time and whether the
object observed is the right one.
DBSP_DRP is currently being used by the ZTF Bright Transient Survey (Fremling
et al., 2020; Perley et al., 2020), the ZTF Census of the Local Universe (De
et al., 2020), and a program investigating ZTF Superluminous Supernovae
(Lunnan et al., 2020; Chen et al., in preparation). Ravi et al. (2021) is the
first (known) publication that used DBSP_DRP for data reduction. The
development of DBSP_DRP also lays the groundwork towards a fully automated
pipeline for the Next Generation Palomar Spectrograph that is planned to be
deployed on the Palomar 200-inch Hale Telescope in 2022.
## Acknowledgements
M.S.M.-S. acknowledges funding from the Schmidt Academy of Software
Engineering, which is supported by the generosity of Eric and Wendy Schmidt by
recommendation of the Schmidt Futures program.
We thank the following members of the time domain astronomy group at Caltech
for beta-testing and providing valuable feedback during the development of
this pipeline: Andy Tzanidakis, Lin Yan, Aishwarya Dahiwale, Yuhan Yao, Yashvi
Sharma, and Igor Andreoni.
M.S.M.-S. is extremely grateful to the welcoming, friendly, and helpful team
of developers on the PypeIt team, without whom this package would not exist.
## References
De, K., Kasliwal, M. M., Tzanidakis, A., Fremling, U. C., Adams, S., Aloisi,
R., Andreoni, I., Bagdasaryan, A., Bellm, E. C., Bildsten, L., Cannella, C.,
Cook, D. O., Delacroix, A., Drake, A., Duev, D., Dugas, A., Frederick, S.,
Gal-Yam, A., Goldstein, D., … Yao, Y. (2020). The Zwicky Transient Facility
Census of the Local Universe. I. Systematic Search for Calcium-rich Gap
Transients Reveals Three Related Spectroscopic Subclasses. _905_(1), 58.
https://doi.org/10.3847/1538-4357/abb45c
Fremling, C., Miller, A. A., Sharma, Y., Dugas, A., Perley, D. A., Taggart,
K., Sollerman, J., Goobar, A., Graham, M. L., Neill, J. D., Nordin, J.,
Rigault, M., Walters, R., Andreoni, I., Bagdasaryan, A., Belicki, J.,
Cannella, C., Bellm, E. C., Cenko, S. B., … Kulkarni, S. R. (2020). The Zwicky
Transient Facility Bright Transient Survey. I. Spectroscopic Classification
and the Redshift Completeness of Local Galaxy Catalogs. _The Astrophysical
Journal_ , _895_(1), 32. https://doi.org/10.3847/1538-4357/ab8943
Lunnan, R., Yan, L., Perley, D. A., Schulze, S., Taggart, K., Gal-Yam, A.,
Fremling, C., Soumagnac, M. T., Ofek, E., Adams, S. M., Barbarino, C., Bellm,
E. C., De, K., Fransson, C., Frederick, S., Golkhou, V. Z., Graham, M. J.,
Hallakoun, N., Ho, A. Y. Q., … Yao, Y. (2020). Four (Super)luminous Supernovae
from the First Months of the ZTF Survey. _The Astrophysical Journal_ ,
_901_(1), 61. https://doi.org/10.3847/1538-4357/abaeec
Ogaz, S., & Tollerud, E. (2018). Removing the Institute’s Dependence on
IRAF (You can do it too!). _STScI Newsletter_ , _35_(03).
Oke, J. B., & Gunn, J. E. (1982). An Efficient Low Resolution and Moderate
Resolution Spectrograph for the Hale Telescope. _Publications of the
Astronomical Society of the Pacific_ , _94_ , 586.
https://doi.org/10.1086/131027
Perley, D. A., Fremling, C., Sollerman, J., Miller, A. A., Dahiwale, A. S.,
Sharma, Y., Bellm, E. C., Biswas, R., Brink, T. G., Bruch, R. J., De, K.,
Dekany, R., Drake, A. J., Duev, D. A., Filippenko, A. V., Gal-Yam, A., Goobar,
A., Graham, M. J., Graham, M. L., … Yan, L. (2020). The Zwicky Transient
Facility Bright Transient Survey. II. A Public Statistical Sample for
Exploring Supernova Demographics. _The Astrophysical Journal_ , _904_(1), 35.
https://doi.org/10.3847/1538-4357/abbd98
Prochaska, J. X., Hennawi, J. F., Westfall, K. B., Cooke, R. J., Wang, F.,
Hsyu, T., Davies, F. B., Farina, E. P., & Pelliccia, D. (2020). PypeIt: The
python spectroscopic data reduction pipeline. _Journal of Open Source
Software_ , _5_(56), 2308. https://doi.org/10.21105/joss.02308
Ravi, V., Law, C. J., Li, D., Aggarwal, K., Burke-Spolaor, S., Connor, L.,
Lazio, T. J. W., Simard, D., Somalwar, J., & Tendulkar, S. P. (2021). The host
galaxy and persistent radio counterpart of FRB 20201124A. _arXiv e-Prints_ ,
arXiv:2106.09710. https://arxiv.org/abs/2106.09710
# Weak Laplacian bounds and minimal boundaries in non-smooth spaces with
Ricci curvature lower bounds
Andrea Mondino and Daniele Semola
###### Abstract.
The goal of the paper is four-fold. In the setting of spaces with synthetic
Ricci curvature lower bounds (more precisely $\operatorname{RCD}(K,N)$ metric
measure spaces):
* •
we develop an intrinsic theory of Laplacian bounds in viscosity sense and in a
pointwise, heat flow related sense, showing their equivalence also with
Laplacian bounds in distributional sense;
* •
relying on these tools, we establish a PDE principle relating lower Ricci
curvature bounds to the preservation of Laplacian lower bounds under the
evolution via the $p$-Hopf-Lax semigroup, for general exponents
$p\in[1,\infty)$. The principle admits a broad range of applications, going
much beyond the topic of the present paper;
* •
we prove sharp Laplacian bounds on the distance function from a set (locally)
minimizing the perimeter with a flexible technique, not involving any
regularity theory; this corresponds to vanishing mean curvature in the smooth
setting and encodes also information about the second variation of the area;
* •
we initiate a regularity theory for boundaries of sets (locally) minimizing
the perimeter, obtaining sharp dimension estimates for their singular sets,
quantitative estimates of independent interest even in the smooth setting and
topological regularity away from the singular set.
The class of $\operatorname{RCD}(K,N)$ metric measure spaces includes as
remarkable sub-classes: measured Gromov-Hausdorff limits of smooth manifolds
with lower Ricci curvature bounds and finite dimensional Alexandrov spaces
with lower sectional curvature bounds. Most of the results are new also in
these frameworks. Moreover, the tools that we develop here have applications
to classical questions in Geometric Analysis on smooth, non compact Riemannian
manifolds with lower Ricci curvature bounds.
Andrea Mondino: Mathematical Institute, University of Oxford, UK,
email: [email protected]
Daniele Semola: Mathematical Institute, University of Oxford, UK,
email: [email protected]
Mathematics Subject Classification: 53C23, 49Q05, 58J90.
###### Contents
1. 1 Introduction
1. 1.1 Mean curvature bounds and minimal boundaries in a non smooth setting
2. 1.2 A regularity theory for minimal boundaries on $\operatorname{RCD}$ spaces
3. 1.3 Outline of the strategy to establish the Laplacian bounds of Theorem 1.1
4. 1.4 Weak notions of Laplacian bounds
5. 1.5 Hopf-Lax semigroup and lower Ricci curvature bounds
2. 2 Preliminaries
1. 2.1 Slope, Cheeger energy and weak upper gradient
2. 2.2 General properties of $\operatorname{RCD}(K,N)$ spaces
3. 2.3 Non collapsed spaces
4. 2.4 Sets of finite perimeter
1. 2.4.1 Introduction and basic properties
2. 2.4.2 Convergence and stability for sets of finite perimeter and functions of bounded variation
3. 2.4.3 De Giorgi’s Theorem and integration by parts formulae
4. 2.4.4 Gauss Green formulae for essentially bounded divergence measure vector fields
5. 2.4.5 Operations with sets of finite perimeter
6. 2.4.6 Some regularity results for quasi-minimizers
5. 2.5 Laplacian, heat equation and heat kernel
6. 2.6 The Poisson equation
7. 2.7 The Green function of a domain and applications
3. 3 The Laplacian on $\operatorname{RCD}(K,N)$ spaces
1. 3.1 Notions of Laplacian bounds
2. 3.2 The main equivalence results
4. 4 Ricci curvature bounds, Hopf-Lax semigroups and Laplacian bounds
1. 4.1 Smooth Riemannian manifolds
2. 4.2 Kuwada’s lemma
3. 4.3 Hopf-Lax semigroup and Laplacian bounds: the non smooth framework
5. 5 Mean curvature bounds for minimal boundaries
1. 5.1 Minimal boundaries and the Laplacian of the distance function
6. 6 Regularity theory
1. 6.1 An $\varepsilon$-regularity theorem
2. 6.2 Sharp perimeter bounds for the equidistant sets from minimal boundaries
3. 6.3 Partial regularity of minimal boundaries away from sets of codimension three
4. 6.4 Quantitative estimates for singular sets of minimal boundaries
7. A Laplacian bounds vs. mean curvature bounds: a comparison with the classical literature
## 1. Introduction
Minimal surfaces constitute a fascinating research topic across Analysis and
Geometry, with strong connections with Topology and Mathematical Physics. Even
if the field is extremely rich in results and techniques, arguably two
cornerstones in the theory of minimal surfaces in Riemannian manifolds are:
* •
the regularity theory, asserting that a minimal surface is smooth away from a
small (in the sense of Hausdorff dimension) singular set;
* •
the first and second variations formulae, encoding at a differential level the
fact that a minimal surface is a stationary point (resp. a local minimum or a
min-max critical point) of the area functional.
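For orientation, in the smooth setting the first variation referred to above can be written, for a two-sided hypersurface $\Sigma=\partial E$ deformed with normal speed $\varphi$ and with a fixed sign convention for the mean curvature $H$ (the formula is recalled here only as background; conventions in the paper may differ), as
$\frac{\mathrm{d}}{\mathrm{d}t}\Big{|}_{t=0}\operatorname{Area}(\Sigma_{t})=\int_{\Sigma}H\,\varphi\,\mathrm{d}\mathcal{H}^{n-1}\,,$
so that stationarity of the area for all compactly supported $\varphi$ is equivalent to the vanishing of $H$.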
Classically, the regularity theory is established for minimal surfaces in
Euclidean ambient spaces, and then transplanted to the smooth curved setting
of Riemannian manifolds by using local coordinates or Nash embedding theorem.
While on the one hand this procedure gives sharp qualitative regularity
results (such as the dimension of the singular set), on the other hand it is
not completely satisfactory in terms of effective estimates, which usually
depend on quantities like the injectivity radius or the full Riemann curvature
tensor.
A natural question (raised for instance in Gromov’s lectures [81, pp.
334-335]) is to what extent one can develop a theory of minimal surfaces when
the ambient space is non-smooth. In the case of 2-dimensional minimal surfaces
in (suitable) metric spaces, there has been recent progress by Lytchak-Wenger
[104, 105, 106] who successfully studied the Plateau problem together with
geometric applications.
The aim of the present paper is to investigate the higher dimensional case of
minimal boundaries in possibly non-smooth finite dimensional ambient spaces,
satisfying Ricci curvature lower bounds in a synthetic sense. More precisely,
the framework for the ambient space throughout the paper is the one of
$\operatorname{RCD}(K,N)$ metric measure spaces, for finite $N\in[1,\infty)$
and $K\in\mathbb{R}$ (see subsection 2.2 for a quick introduction and relevant
bibliography). Here $K\in\mathbb{R}$ plays the role of (synthetic) lower bound
on the Ricci curvature and $N\in[1,\infty)$ plays the role of (synthetic)
upper bound on the dimension. This class includes measured Gromov-Hausdorff
limits of smooth manifolds with Ricci curvature lower bounds (see [41, 42, 43,
46, 45]) and finite dimensional Alexandrov spaces with sectional curvature
lower bounds (see [31, 120]). Most of our results are new also in these more
classical settings.
The goal of the paper is four-fold. In the aforementioned setting of (possibly
non-smooth) $\operatorname{RCD}(K,N)$ metric measure spaces:
* •
we develop an intrinsic theory of Laplacian bounds in viscosity sense and in a
pointwise, heat flow related sense, showing their equivalence also with
Laplacian bounds in distributional and (various) comparison senses;
* •
we establish a PDE principle relating lower Ricci curvature bounds to the
preservation of Laplacian lower bounds under the evolution via the $p$-Hopf-
Lax semigroup, for general exponents $p\in[1,\infty)$;
* •
we prove sharp Laplacian bounds on the distance function from a set (locally)
minimizing the perimeter; this corresponds to vanishing mean curvature in the
smooth setting (i.e. the first variation formula), encoding at the same time
information about the second variation of the area along equidistant sets.
This is achieved with a flexible technique, independent of any regularity
theory and applicable to solutions of different variational problems;
* •
we initiate a regularity theory for boundaries of sets (locally) minimizing
the perimeter, obtaining sharp Hausdorff dimension bounds for the singular
set, Minkowski bounds, and topological regularity in a neighbourhood of the
regular set (i.e., where the tangent is a flat half-space).
Besides the deep theoretical interest in developing Geometric Measure
Theory under curvature bounds in a non smooth setting, the tools that we
develop here find applications in the study of classical questions in
Geometric Analysis on smooth non compact Riemannian manifolds with lower Ricci
bounds, see for instance [21, 59]. In particular, due to the compactness and
stability of $\operatorname{RCD}(K,N)$ spaces and to the stability of minimal
boundaries, the aforementioned fourth goal is a step towards an effective
theory of minimal boundaries under lower Ricci curvature bounds, not depending
on additional assumptions such as lower bounds on the injectivity radius or
full Riemann curvature bounds.
We next illustrate the main results of the paper.
### 1.1. Mean curvature bounds and minimal boundaries in a non smooth setting
The subject of our study will be sets of finite perimeter that locally
minimize the perimeter, according to the following.
###### Definition .
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and let $\Omega\subset X$ be an open domain. Let $E\subset X$ be
a set of locally finite perimeter. We say that $E$ is locally perimeter
minimizing in $\Omega$ if for any $x\in\Omega$ there exists $r_{x}>0$ such
that $E$ minimizes the perimeter among all the perturbations that are
compactly supported in $B_{r_{x}}(x)$, i.e. for any Borel set $F$ such that
$E\Delta F\subset B_{r_{x}}(x)$ it holds
$\operatorname{Per}(E,B_{r_{x}}(x))\leq\operatorname{Per}(F,B_{r_{x}}(x))\,.$
The above is a very general condition. For instance, smooth minimal
hypersurfaces in Riemannian manifolds are locally boundaries of locally
perimeter minimizing sets according to subsection 1.1, even though, in
general, they do not minimize the perimeter among arbitrary compactly
supported variations. A simple example in this regard is the equator inside
the sphere.
Let us define the comparison function $\mathrm{t}_{K,N}:I_{K,N}\to\mathbb{R}$
as
(1.1)
$\mathrm{t}_{K,N}(x):=\begin{cases}-\sqrt{K(N-1)}\tan\big(\sqrt{\frac{K}{N-1}}\,x\big)&\text{if }K>0\\ 0&\text{if }K=0\\ \sqrt{-K(N-1)}\tanh\big(\sqrt{\frac{-K}{N-1}}\,x\big)&\text{if }K<0\,,\end{cases}\qquad I_{K,N}:=\begin{cases}\big(-\frac{\pi}{2}\sqrt{\frac{N-1}{K}},\frac{\pi}{2}\sqrt{\frac{N-1}{K}}\big)&\text{if }K>0\\ \mathbb{R}&\text{if }K\leq 0\,.\end{cases}$
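The comparison function $\mathrm{t}_{K,N}$ can equivalently be characterized through the Riccati comparison equation underlying Laplacian comparison results of this type; this is a standard observation, recorded here as a sketch to make the roles of $K$ and $N$ transparent:

```latex
\mathrm{t}_{K,N}'(x)+\frac{\mathrm{t}_{K,N}(x)^{2}}{N-1}+K=0\,,\qquad
\mathrm{t}_{K,N}(0)=0\,,\qquad x\in I_{K,N}\,,
```

as one verifies directly from (1.1), using the identities $1+\tan^{2}=\sec^{2}$ and $1-\tanh^{2}=\operatorname{sech}^{2}$.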
A fundamental property of locally area minimizing hypersurfaces in a smooth
Riemannian manifold is that their mean curvature vanishes. Our first main
result is the following sharp Laplacian comparison for the distance from a
locally perimeter minimizing boundary. It should be thought of as a global and
non-smooth counterpart of the classical fact that the mean curvature vanishes
for sets locally minimizing the perimeter.
###### Theorem 1.1 (Theorem 5.1).
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space. Let $E\subset X$ be a set of locally finite perimeter and
assume that it is a local perimeter minimizer. Let
$\mathsf{d}_{\overline{E}}:X\setminus\overline{E}\to[0,\infty)$ be the
distance function from $\overline{E}$. Then
(1.2)
$\Delta\mathsf{d}_{\overline{E}}\leq\mathrm{t}_{K,N}\circ\mathsf{d}_{\overline{E}}\,\quad\text{on
$X\setminus\overline{E}$}\,,$
where $\mathrm{t}_{K,N}$ is defined in (1.1).
###### Remark (How to interpret the Laplacian bounds).
The Laplacian bounds (1.2) are to be understood in any of the equivalent ways
stated in Theorem 3.4, i.e. either in the viscosity, distributional, heat
flow, or comparison senses (see subsection 1.4 later in the introduction for
an outline of the various notions).
###### Remark .
The upper bound (1.2) is sharp already in the class of smooth Riemannian
manifolds with Ricci curvature bounded below by $K\in\mathbb{R}$ and dimension
equal to $N\in\mathbb{N},N\geq 2$. Indeed, it is easily seen that:
* •
Case $K>0$. The distance function from an equatorial hypersphere inside the
$N$-dimensional sphere of constant sectional curvature $K/(N-1)$ achieves
equality in (1.2).
* •
Case $K=0$. The distance function from a hyperplane in $\mathbb{R}^{N}$ is
harmonic, and thus achieves equality in (1.2).
* •
Case $K<0$. The distance function from a horosphere inside the $N$-dimensional
hyperbolic space of constant sectional curvature $K/(N-1)$ achieves equality
in (1.2).
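As a numerical sanity check of the first bullet, the following short sketch (hypothetical helper names; it assumes the standard fact that, along the equidistant hypersurfaces of the equator in the round sphere, the area element scales as $\cos^{N-1}$, so that $\Delta\mathsf{d}$ equals its logarithmic derivative) compares $\mathrm{t}_{K,N}$ with that logarithmic derivative:

```python
import math

def t_KN(x, K, N):
    # comparison function t_{K,N} from (1.1), case K > 0
    return -math.sqrt(K * (N - 1)) * math.tan(math.sqrt(K / (N - 1)) * x)

def log_derivative_of_area(x, K, N, h=1e-6):
    # On the sphere of constant sectional curvature K/(N-1), the area element
    # of the equidistant hypersurface at distance x from the equator is
    # proportional to cos(a x)^(N-1); Delta d is its logarithmic derivative.
    a = math.sqrt(K / (N - 1))
    logarea = lambda s: (N - 1) * math.log(math.cos(a * s))
    return (logarea(x + h) - logarea(x - h)) / (2 * h)

K, N = 2.0, 4
for x in (0.1, 0.3, 0.5):
    assert abs(log_derivative_of_area(x, K, N) - t_KN(x, K, N)) < 1e-5
```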
Encoding mean curvature bounds through the Laplacian of the distance function
as in (1.2) is equivalent to the classical vanishing mean curvature condition
for smooth hypersurfaces on Riemannian manifolds. Moreover, according to [134,
76], this is the right way to look at mean curvature bounds, having in mind
the perspective of global differential geometry. As we shall explain, (1.2)
also encodes the information about the second variation of the perimeter on
equidistant sets from $\overline{E}$ usually obtained with the second
variation formula for the perimeter.
Let us mention that some proposals of weak notions of mean curvature bounds in
the non-smooth setting have been put forward in [91, Section 5] and [38,
Section 5.1] by using localisation (also called needle decomposition)
techniques. Compared to such proposals, the remarkable advantage of the
approach via the Laplacian comparison (1.2), and the key new point of the
present work, is that we establish mean curvature bounds for solutions of
variational problems, such as local perimeter minimizers. This makes the new
tools very powerful for geometric applications.
Theorem 1.1 is new even for Alexandrov spaces with sectional curvature bounded
from below and for Ricci limit spaces. The proof is independent of the
regularity theory for minimal boundaries and it avoids the first variation
formula for the perimeter. Hence it is different from those present in the
literature also when read on smooth Riemannian manifolds. Moreover, the
technique that we develop here is flexible and can be applied to solutions of
more general variational problems, such as the isoperimetric one, see [21].
We remark that it is much simpler to prove the sharp Laplacian comparison for
minimal boundaries inside Ricci limit spaces that can be obtained as limits of
minimizing boundaries in smooth Riemannian manifolds with Ricci curvature
uniformly bounded from below, essentially by passing the analogous statements
for smooth manifolds to the limit. This assumption, however, would largely
restrict the range of applications compared with Theorem 1.1.
Extensions of some classical theorems in Riemannian Geometry such as Frankel’s
theorem [63] about intersecting minimal hypersurfaces on closed manifolds with
positive Ricci curvature and Simons’ theorem [125] about the non-existence of
two-sided area-minimizing hypersurfaces on closed manifolds with positive
Ricci curvature will follow as corollaries (see Theorem 5.3 and subsection
6.2), thus confirming the strength of this approach.
Moreover, Theorem 1.1 plays a key role in the regularity theory, for instance
in establishing Minkowski-type bounds on the singular set (see Theorem 6.7).
### 1.2. A regularity theory for minimal boundaries on $\operatorname{RCD}$
spaces
A second main goal of this paper is to initiate the regularity theory of
minimal boundaries on $\operatorname{RCD}$ spaces. This can be seen as a step
towards an effective regularity theory for minimal hypersurfaces under lower
Ricci bounds, where by effective we mean only depending on the ambient Ricci
curvature and volume lower bounds (and independent of extra assumptions such
as injectivity radius, or bounds on the full Riemann curvature tensor).
Our first result in this direction is an $\varepsilon$-regularity theorem in
the spirit of De Giorgi’s regularity theory for Euclidean minimal boundaries
[54] and of the volume $\varepsilon$-regularity theorem for manifolds with
lower Ricci bounds originally due to Cheeger-Colding [49, 41] (see Theorem 2.2
below).
###### Definition ($\varepsilon$-regular points).
Let $\varepsilon>0$. If $(X,\mathsf{d},\mathscr{H}^{N})$ is an
$\operatorname{RCD}(-\varepsilon,N)$ metric measure space and $E\subset X$ is
a set of finite perimeter, minimizing the perimeter in $B_{2}(x)\subset X$,
such that:
* (i)
the ball $B_{2}(x)\subset X$ is $\varepsilon$-GH close to the ball
$B_{2}(0)\subset\mathbb{R}^{N}$;
* (ii)
$E$ is $\varepsilon$-close on $B_{2}(x)$ in the $L^{1}$ topology to
$\\{t<0\\}\subset\mathbb{R}^{N}$ and $\partial E\cap B_{2}(x)$ is
$\varepsilon$-GH close to $\\{t=0\\}\cap B_{2}(0)\subset\mathbb{R}^{N}$, where
we denoted by $t$ one of the canonical coordinates on $\mathbb{R}^{N}$;
then we shall say that $E$ is $\varepsilon$-regular at $x$ in $B_{2}(x)$.
The notion of $\varepsilon$-regular at $x$ in $B_{r}(x)$ can be introduced
analogously by scaling.
Notice that, as we prove in Theorem 2.11, $L^{1}$-convergence of perimeter
minimizing open sets automatically self-improves to Hausdorff convergence of
their boundaries in this setting.
###### Theorem 1.2 ($\varepsilon$-regularity).
Let $N>1$ be fixed. For any $\varepsilon>0$ there exists
$\delta=\delta(\varepsilon,N)>0$ such that the following holds. If
$(X,\mathsf{d},\mathscr{H}^{N})$ is an $\operatorname{RCD}(-\delta,N)$ metric
measure space, $E\subset X$ is perimeter minimizing on $B_{4}(x)\subset X$ and
$E$ is $\delta$-regular in $B_{2}(x)$ then, for any $y\in\partial E\cap
B_{1}(x)$ and any $0<r<1$, $E$ is $\varepsilon r$-regular in $B_{r}(y)$.
Moreover, for any $0<\alpha<1$, there exists $\delta=\delta(\alpha,N)>0$ such
that, if $x\in\partial E$ and $E$ is $\delta$-regular at $x$ on $B_{2}(x)$,
then $\partial E\cap B_{1}(x)$ is $C^{\alpha}$-homeomorphic to the ball
$B_{1}(0)\subset\mathbb{R}^{N-1}$.
The uniform Reifenberg flatness of minimal boundaries on sequences of smooth
manifolds converging in the Lipschitz sense had been previously considered by
Gromov in [77, 80]. Here we remove the smoothness assumption, we rely only on
the synthetic Ricci curvature lower bounds, and we relax the notion of
closeness for the ambient spaces to Gromov-Hausdorff. This has the effect of
largely broadening the set of possible applications, thanks to the well known
precompactness of spaces with lower Ricci and upper dimension bounds in
Gromov-Hausdorff sense and to the well established regularity theory for
ambient spaces.
The main new idea that we introduce for the proof of Theorem 1.2 is very
robust. The same technique applies to general variational problems in the
setting of spaces with lower Ricci bounds, as soon as there is enough
stability and an $\varepsilon$-regularity theorem with gap for the analogous
problem in the Euclidean setting, see subsection 6.1 for the precise
statement.
Theorem 1.2 is the building block to prove that the boundary of a locally
perimeter-minimizing set is a topological manifold away from sets of ambient
codimension three. A difficulty, which is absent in the Euclidean theory, is
that we need to control simultaneously the flatness of the ambient and the
flatness of the hypersurface inside it.
###### Theorem 1.3 (Theorem 6.6).
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space. Let $E\subset X$ be a set of finite perimeter. Assume that $E$
is perimeter minimizing in $B_{2}(x)\subset X$ and $B_{2}(x)\cap\partial
X=\emptyset$. Then, letting $\mathcal{S}^{E}$ be the set of singular boundary
points of $\partial E$, i.e. those points where there exists a blow-up which
is not a flat Euclidean half-space, it holds
(1.3) $\dim_{H}(\mathcal{S}^{E}\cap B_{2}(x))\leq N-3\,.$
Moreover, for any $0<\alpha<1$ there exists a relatively open set
$O_{\alpha}\subset\partial E\cap B_{1}(x)$
such that
* •
$\big{(}\partial E\setminus\mathcal{S}^{E}\big{)}\cap B_{1}(x)\subset
O_{\alpha}\,$; hence, in particular, $\dim_{H}\big{(}(\partial E\setminus
O_{\alpha})\cap B_{1}(x)\big{)}\leq N-3$;
* •
$O_{\alpha}$ is $C^{\alpha}$-biHölder homeomorphic to an $(N-1)$-dimensional
open manifold.
Additionally, in Theorem 6.3 we will prove a sharp dimension estimate
(1.4) $\dim_{H}\left(\mathcal{S}^{E}\cap\mathcal{R}(X)\right)\leq N-8\,,$
for the intersection of the singular set of the minimal boundary with the
regular set $\mathcal{R}(X)$ of the ambient space.
###### Remark .
The Hausdorff dimension estimate (1.3) is sharp in this context, as elementary
examples illustrate (see subsection 6.3). It will be
obtained through the classical dimension reduction pattern, but several new
difficulties arise, due to the non smoothness of the ambient space (for
instance it is not clear whether the classical monotonicity formula for
minimal surfaces holds in such a general framework).
The $C^{0,\alpha}$ regularity of the manifold $O_{\alpha}$ containing the
regular set matches the (currently known) regularity of the regular part
$\mathcal{R}(X)$ of the ambient space $X$ (after Cheeger-Colding’s metric
Reifenberg Theorem [41, Appendix 1] and [89]). Higher regularity of $\partial
E\setminus\mathcal{S}^{E}$ (e.g. being contained in a Lipschitz manifold)
would require first improving the structure theory of the ambient space.
In Theorem 6.7 we will also obtain a Minkowski estimate for the quantitative
singular sets of minimal boundaries in this framework, in the spirit of [46,
47, 114]. The estimate has independent interest and it is new also for smooth
manifolds with lower Ricci curvature and volume bounds (see subsection
6.4).¹ In [58], which appeared on the arXiv the day before the appearance of
the present paper, Q. Ding has independently proved the first part of Theorem
1.2 and the Hausdorff dimension estimate (1.3) under the additional assumption
that the minimal boundary is a limit of minimal boundaries along a sequence of
smooth manifolds with Ricci curvature and volume of unit balls uniformly
bounded from below. These results played a fundamental role in the subsequent
proof of the Poincaré inequality for minimal graphs over smooth manifolds with
nonnegative Ricci curvature and Euclidean volume growth and of generalized
versions of Bernstein’s theorem in [59].
### 1.3. Outline of the strategy to establish the Laplacian bounds of Theorem
1.1
On smooth Riemannian manifolds, minimal surfaces are critical points of the
area functional. A key technical tool for this definition is the first
variation formula.
For the sake of this presentation, let us focus on sets of finite perimeter in
Euclidean ambient spaces. Given any such set $E\subset\mathbb{R}^{n}$ and any
smooth vector field ${\bm{X}}$ with compact support in
$B_{r}(x)\subset\mathbb{R}^{n}$, we can consider the induced flow of
diffeomorphisms $(\Phi_{t})_{t\in(-\varepsilon,\varepsilon)}$ such that
$\Phi_{0}=\mathrm{Id}$. Then
(1.5)
$\left.\frac{\mathrm{d}}{\mathrm{d}t}\right|_{t=0}\operatorname{Per}(\Phi_{t}(E)\cap B_{r}(x))=\int_{\mathcal{F}E\cap B_{r}(x)}\operatorname{div}_{E}{\bm{X}}\,\mathrm{d}\operatorname{Per}_{E}\,,$
where $\operatorname{div}_{E}$ denotes the tangential divergence,
$\mathcal{F}E$ denotes the so-called reduced boundary of the set of finite
perimeter $E$ and $\operatorname{Per}_{E}$ its perimeter measure. When $E$ is
an open set with smooth boundary, $\mathcal{F}E$ coincides with the
topological boundary and $\operatorname{Per}_{E}$ is the surface measure, see
[72, 107].
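For a smooth portion of the boundary, the tangential divergence theorem turns the right-hand side of (1.5) into a mean curvature term; the following standard identity is recalled as a sketch to make the link explicit:

```latex
\int_{\partial E\cap B_{r}(x)}\operatorname{div}_{E}{\bm{X}}\,\mathrm{d}\mathscr{H}^{n-1}
=\int_{\partial E\cap B_{r}(x)}H_{\partial E}\,{\bm{X}}\cdot\nu_{E}\,\mathrm{d}\mathscr{H}^{n-1}\,,
```

so that stationarity of the perimeter under every compactly supported variation forces $H_{\partial E}\equiv 0$.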
If $E$ is locally perimeter minimizing, then a deep regularity result
originally due to De Giorgi [54] and refined by Federer (after work of Simons)
is that $\mathcal{F}E$ is smooth and $\partial E\setminus\mathcal{F}E$ has
ambient codimension $8$; moreover, (1.5) implies that the classical mean
curvature vanishes on $\mathcal{F}E$.
It is often advocated that Ricci curvature governs the distortion of volumes
on a smooth Riemannian manifold. Indeed, it enters into the variation formula
for the area element of the equidistant sets from a given smooth hypersurface,
see [83, 76]. If we consider a smooth minimal hypersurface, the first
derivative of the area element of equidistant surfaces vanishes at $t=0$,
moreover the Ricci curvature in normal direction and the second fundamental
form enter into the expression for the second derivative.
There are two main drawbacks of this approach: it only looks at the
infinitesimal geometry near the hypersurface and it requires smoothness,
while usually minimal hypersurfaces are built through variational methods and
global regularity is not guaranteed.
Focusing on the first issue, it is possible to switch from an infinitesimal to
a global perspective. If $\Sigma\subset M$ is a smooth minimal hypersurface
inside a smooth Riemannian manifold with non-negative Ricci curvature, then
the distance function $\mathsf{d}_{\Sigma}$ is superharmonic on
$M\setminus\Sigma$, see [134] and Appendix A. This is a remarkable observation
for the sake of developing an analogous theory on metric measure spaces, since
it avoids the necessity of giving a meaning to the mean curvature of a
hypersurface.
Let us recall a classical argument [78] to deal with the aforementioned
regularity issue in the setting of smooth Riemannian manifolds that was key in
the proof of the Lévy-Gromov isoperimetric inequality. The fundamental
observation is that in order to bound the Laplacian of the distance function,
minimality (in the stricter sense of local area minimizing) was only needed at
footpoints of minimizing geodesics on the hypersurface itself. In various
situations, deep regularity theorems ([54, 2]) guarantee that minimal
hypersurfaces are smooth in a neighbourhood of these points and the classical
arguments can then be applied.
Given our current knowledge of $\operatorname{RCD}$ spaces, there is little
hope that such an approach could prove Theorem 1.1: there is no first
variation formula such as (1.5) available at the moment and, even more
dramatically, the classical regularity theorems do not make sense in this non-
smooth setting. The Lévy-Gromov isoperimetric inequality has been generalized
to the present framework in [36], avoiding the analysis of the mean curvature
of isoperimetric sets (see also [97], dealing with smooth Riemannian
manifolds). However, a sharper understanding of mean curvature bounds for
solutions of variational problems is definitely needed for more refined
developments.
In [33], a different proof of the vanishing of the mean curvature for local
minimizers of the perimeter functional was obtained in the Euclidean setting.
It does not rely on the regularity theory for area minimizers nor on the first
variation formula, rather, it follows the pattern of viscosity theory in
partial differential equations. The possibility of following a similar pattern
to prove the Lévy-Gromov isoperimetric inequality on Alexandrov spaces was
pointed out later in the research announcement [119], together with the key
remark that the sup-convolution could act as a counterpart of the more
classical slicing with quadratic polynomials of the viscosity theory.
Below, we outline the strategy that we will follow, inspired by [33] and
[119], neglecting some of the regularity issues.
Consider a locally area minimizing hypersurface $\Sigma\subset\mathbb{R}^{n}$,
and assume that it is the boundary of a smooth domain $D$, locally minimizing
the surface measure among all compactly supported perturbations.
Let $\mathsf{d}_{\Sigma}:X\to[0,\infty)$ be the distance function from
$\Sigma$, defined by:
$\mathsf{d}_{\Sigma}(x):=\inf\\{\mathsf{d}(x,y)\,:\,y\in\Sigma\\}.$
We wish to prove that $\Delta\mathsf{d}_{\Sigma}\leq 0$ in the viscous sense
on $\mathbb{R}^{n}\setminus\Sigma$. Let us suppose that this is not the case.
Then there exist $x\in\mathbb{R}^{n}\setminus\Sigma$ and a smooth function
$\varphi:U_{x}\to\mathbb{R}$ such that
(1.6)
$\Delta\varphi(x)\geq\varepsilon>0\,,\quad\varphi(x)=\mathsf{d}_{\Sigma}(x)\,,\quad\varphi\leq\mathsf{d}_{\Sigma}\,.$
Let us extend $\varphi$ to a globally defined function
$\hat{\varphi}:\mathbb{R}^{n}\to\mathbb{R}$ such that
$\hat{\varphi}\leq\mathsf{d}_{\Sigma}$. Then we introduce
$\tilde{\varphi}:\mathbb{R}^{n}\to\mathbb{R}$ by
$\tilde{\varphi}(y):=\max_{z\in\mathbb{R}^{n}}\\{\hat{\varphi}(z)-\mathsf{d}(z,y)\\}\,.$
The properties of $\tilde{\varphi}$ that will be relevant for our purposes are
the following:
* (i)
$\tilde{\varphi}$ is a $1$-Lipschitz map;
* (ii)
$\tilde{\varphi}\leq\mathsf{d}_{\Sigma}$;
* (iii)
let us denote by $x_{\Sigma}$ one of the footpoints of $x$ on $\Sigma$. Then
$\tilde{\varphi}=\mathsf{d}_{\Sigma}$ along the minimal geodesic connecting
$x$ to $x_{\Sigma}$;
* (iv)
suppose for simplicity that $x_{\Sigma}$ is the unique footpoint of $x$ on
$\Sigma$. Then $\tilde{\varphi}<\mathsf{d}_{\Sigma}$ outside the minimal
geodesic connecting $x$ to $x_{\Sigma}$. Moreover, there is a neighbourhood
$U_{x_{\Sigma}}$ of $x_{\Sigma}$ such that the maximum defining
$\tilde{\varphi}$ is achieved at points in a neighbourhood $U_{x}$ of $x$ for
any $y\in U_{x_{\Sigma}}$;
* (v)
as a first consequence of (iv),
$\left\lvert\nabla\tilde{\varphi}\right\rvert=1$ almost everywhere in
$U_{x_{\Sigma}}$;
* (vi)
as a second consequence of (iv),
(1.7) $\Delta\tilde{\varphi}\geq\varepsilon^{\prime}>0\,,$
in the sense of distributions on $U_{x_{\Sigma}}$.
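Properties (i) and (ii) of the sup-convolution can be illustrated with a small one-dimensional discretization (a hypothetical toy example with $\Sigma=\{0\}$ on the line and an arbitrary $\hat{\varphi}\leq\mathsf{d}_{\Sigma}$; this is not part of the argument):

```python
import numpy as np

xs = np.linspace(-2.0, 2.0, 401)
d_sigma = np.abs(xs)                        # distance from Sigma = {0} on the line
phi_hat = d_sigma - 0.2 * (1 + np.sin(xs))  # a toy phi_hat with phi_hat <= d_Sigma
# sup-convolution: tilde_phi(y) = max_z ( phi_hat(z) - |z - y| )
tilde = np.array([np.max(phi_hat - np.abs(xs - y)) for y in xs])
# (i) tilde_phi is 1-Lipschitz (discrete slopes bounded by 1)
slopes = np.abs(np.diff(tilde)) / np.diff(xs)
assert slopes.max() <= 1 + 1e-9
# (ii) tilde_phi <= d_Sigma, since phi_hat <= d_Sigma and d_Sigma is 1-Lipschitz
assert np.all(tilde <= d_sigma + 1e-12)
```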
Property (vi) above is a consequence of the completely non trivial fact that
the transform mapping $\varphi$ into $\tilde{\varphi}$ preserves, in a
suitable sense, Laplacian lower bounds. We shall discuss this fact in more
detail later.
Let us see how to combine the ingredients above to reach a contradiction with
the assumption that $\Sigma$ is a locally area minimizing surface.
Suppose that $\tilde{\varphi}$ is also smooth in a neighbourhood of
$x_{\Sigma}$ and let us cut the original surface $\Sigma$ along the level sets
of $\tilde{\varphi}$. By (ii), (iii) and (iv) above we obtain a family of
compactly supported perturbations $\Sigma_{t}$, $t\in[0,\delta)$ of
$\Sigma=\Sigma_{0}$ in this way. We claim that, for some
$t\in(0,\delta)$, $\Sigma_{t}$ has area smaller than $\Sigma$.
Let $\Omega_{t}$ be the region bounded between $\Sigma$ and $\Sigma_{t}$. The
boundary $\partial\Omega_{t}$ is made of two components, one along $\Sigma$,
denoted by $\Sigma_{old}$, and one along $\Sigma_{t}$, denoted by
$\Sigma_{new}$. Then we can compute:
$\displaystyle 0<\int_{\Omega_{t}}\Delta\tilde{\varphi}=-\int_{\Sigma_{old}}\nabla\tilde{\varphi}\cdot\nu_{\Sigma_{old}}\,\mathrm{d}\mathscr{H}^{n-1}+\int_{\Sigma_{new}}\nabla\tilde{\varphi}\cdot\nu_{\Sigma_{new}}\,\mathrm{d}\mathscr{H}^{n-1}=-\int_{\Sigma_{old}}\nabla\tilde{\varphi}\cdot\nu_{\Sigma_{old}}\,\mathrm{d}\mathscr{H}^{n-1}-\mathscr{H}^{n-1}(\Sigma_{new})\leq\mathscr{H}^{n-1}(\Sigma_{old})-\mathscr{H}^{n-1}(\Sigma_{new})\,.$
Above, the first inequality follows from (vi), the first identity follows from
the Gauss-Green formula, and the second one from the fact that $\Sigma_{new}$
lies along a level hypersurface of $\tilde{\varphi}$, so that (taking into
account also (v)) we have $-\nu_{\Sigma_{new}}=\nabla\tilde{\varphi}$. The
last inequality follows from (i), which guarantees in turn that
$\left\lvert\nabla\tilde{\varphi}\cdot\nu_{\Sigma_{old}}\right\rvert\leq 1\,.$
Hence
$\mathscr{H}^{n-1}(\Sigma_{old})-\mathscr{H}^{n-1}(\Sigma_{new})>0\,,$
contradicting the local minimality of $\Sigma$.
Let us now comment on the main steps in the formal argument above.
* •
We will deal with sets of finite perimeter: their boundaries provide a weak
notion of codimension one hypersurface suitable for compactness and stability
arguments. The Euclidean theory was developed in the 50’s and later partially
generalized to metric measure spaces in [3, 4]. In the framework of
$\operatorname{RCD}$ spaces they are quite well understood after [6, 26, 27].
This class is very natural to consider. Indeed, we recall that the classical
regularity theory for area minimizing surfaces in codimension one was built on
top of the regularity theory for minimal boundaries.
* •
In order to exploit the variational structure of the problem in the
contradiction argument we rely on the viscous perspective, while for the sake
of applying the Gauss-Green theorem it is important to understand Laplacian
bounds in the sense of distributions. To this aim, we are going to develop a
theory of Laplacian bounds in viscous sense on $\operatorname{RCD}(K,N)$
spaces and prove the equivalence with other weak notions of Laplacian bounds,
including the distributional one. This part will be used in some of the
geometric applications but it is also of independent analytical interest.
* •
Conclusion (vi) above is a consequence of a completely non trivial statement
about the preservation of Laplacian bounds via sup-convolution in the
Euclidean setting. As we shall see, this statement holds, in a suitable sense,
also for $\operatorname{RCD}$ spaces, and it turns out that it characterizes lower
Ricci curvature bounds, at least on smooth Riemannian manifolds.
### 1.4. Weak notions of Laplacian bounds
Notions of superharmonicity for non smooth functions and, more generally, a
weak theory of bounds for the Laplacian on smooth Riemannian manifolds have
been fundamental in the Geometric Analysis of manifolds with lower curvature
bounds. In [34] a global version of the Laplacian comparison theorem was
formulated in the sense of barriers; such a barrier formulation played a role
also in the proof of the splitting theorem in [44]. Then a viscous notion of
Laplacian bounds was considered in [134] and its equivalence with other
notions, such as the distributional one, was studied in [73]. Since then,
these different perspectives have played key roles in the theory. We refer for
instance to [18] for a survey of some recent applications of the viscous
perspective.
In more recent years, some of these weak notions of Laplacian have been
necessary for the development of analysis on metric (measure) spaces.
In the first approaches [93, 124] the perspective was variational. This was
made possible by the presence of a good notion of modulus of the gradient on
metric measure spaces (see [39, 82]). More recently, on the one hand the point
of view of gradient flows came into play in [9], also in connection with the
heat flow. On the other hand, in [65] a distributional approach to the
Laplacian on metric measure spaces was put forward.
All of the theories above were dealing with quite general metric measure
spaces. We aim to show that the further regularity of
$\operatorname{RCD}(K,N)$ spaces allows one to partially fill the gap with the
classical Riemannian theory.
The first contribution in this regard is a theory of viscous bounds for the
Laplacian.
###### Definition (Viscous bound for the Laplacian).
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and let $\Omega\subset X$ be an open and bounded domain. Let
$f:\Omega\to\mathbb{R}$ be locally Lipschitz and
$\eta\in\operatorname{C_{b}}(\Omega)$. We say that $\Delta f\leq\eta$ in the
viscous sense in $\Omega$ if the following holds. For any open domain
$\Omega^{\prime}\Subset\Omega$ and for any test function
$\varphi:\Omega^{\prime}\to\mathbb{R}$ such that
* (i)
$\varphi\in D(\Delta,\Omega^{\prime})$ and $\Delta\varphi$ is continuous on
$\Omega^{\prime}$;
* (ii)
for some $x\in\Omega^{\prime}$ it holds $\varphi(x)=f(x)$ and $\varphi(y)\leq
f(y)$ for any $y\in\Omega^{\prime}$, $y\neq x$;
it holds
$\Delta\varphi(x)\leq\eta(x)\,.$
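A classical one-dimensional example may clarify why the definition is consistent with distributional bounds even across singularities: for $f(x)=-\left\lvert x\right\rvert$ on $\mathbb{R}$ one has $\Delta f=-2\delta_{0}\leq 0$ in the sense of distributions, while the viscous condition holds vacuously at the kink, since a test function $\varphi\leq f$ touching $f$ at $0$ would need

```latex
\varphi'(0)\leq f'(0^{+})=-1\qquad\text{and}\qquad\varphi'(0)\geq f'(0^{-})=+1\,,
```

which is impossible; hence no test function exists at $x=0$.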
The starting point for the viscosity theory of PDEs is the observation that a
smooth function at a minimum point has vanishing gradient and non-negative
Hessian. By tracing the Hessian, it also has non-negative Laplacian (and,
since the gradient vanishes, this principle holds true in the weighted
Riemannian setting as well).
For evident reasons, this is a delicate point on metric measure spaces. The
first issue is singling out a class of sufficiently smooth functions that is
rich enough to make definitions non trivial. The second is that there is no
pointwise notion of Hessian available in this setting. Nevertheless we are
able to prove the equivalence between viscosity bounds on the Laplacian and
distributional bounds.
###### Theorem 1.4.
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space. Let $\Omega\subset X$ be an open and bounded domain,
$f:\Omega\to\mathbb{R}$ be a Lipschitz function and $\eta:\Omega\to\mathbb{R}$
be continuous. Then $\Delta f\leq\eta$ in the sense of distributions if and
only if $\Delta f\leq\eta$ in the viscous sense.
The key difficulty discussed above will be circumvented by relying on a
powerful maximum principle obtained in [137], reminiscent of the Omori-Yau and
Jensen maximum principles.
To prove that, at a minimum point of a sufficiently regular function, the
Laplacian is non-negative, we will build a family of auxiliary functions
playing the role of the distance squared in the Euclidean setting, i.e.
sufficiently regular, with a strict minimum at a prescribed point and with
non-negative Laplacian. This construction, of independent interest, is based
on the study of the local Green function of the Laplacian on domains.
As we already remarked, the connection between the heat flow and the
distributional Laplacian is classical, see for instance [9, 75, 65]. Another
contribution of the paper will be the proposal and the analysis of a new
approach to Laplacian bounds, based on the pointwise short time behaviour of
the heat flow.
For a smooth function $f$ on a (compact and possibly weighted) Riemannian
manifold,
(1.8) $P_{t}f(x)=f(x)+t\Delta f(x)+O(t^{2})\,,\quad\text{as $t\to 0$}\,.$
Then we propose the following.
###### Definition .
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and let $\Omega\subset X$ be an open and bounded domain. Let
$f:\Omega\to\mathbb{R}$ be a Lipschitz function and let
$\eta\in\operatorname{C_{b}}(\Omega)$. We say that $\Delta f\leq\eta$ on
$\Omega$ in the heat flow sense if the following holds. For any
$\Omega^{\prime}\Subset\Omega$ and any function $\tilde{f}:X\to\mathbb{R}$
extending $f$ from $\Omega^{\prime}$ to $X$ and with polynomial growth, we
have
$\limsup_{t\downarrow
0}\frac{P_{t}\tilde{f}(x)-\tilde{f}(x)}{t}\leq\eta(x)\,,\quad\text{for any
$x\in\Omega^{\prime}$}\,.$
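A one-dimensional Euclidean sanity check of (1.8) and of the definition above, using the Gaussian representation of the heat semigroup (the discretized integral and the helper name are illustrative assumptions):

```python
import numpy as np

def heat_flow_1d(f, x, t):
    # P_t f(x) for the semigroup solving d/dt P_t f = Delta P_t f on R,
    # written as a Gaussian average: P_t f(x) = E[ f(x + sqrt(2 t) Z) ], Z ~ N(0, 1)
    s = np.linspace(-10.0, 10.0, 20001)
    w = np.exp(-s ** 2 / 2) / np.sqrt(2 * np.pi)
    return float(np.sum(f(x + np.sqrt(2 * t) * s) * w) * (s[1] - s[0]))

x, t = 0.7, 1e-2
# exact value for f = sin: P_t sin = e^{-t} sin
assert abs(heat_flow_1d(np.sin, x, t) - np.exp(-t) * np.sin(x)) < 1e-6
# short-time behaviour (1.8): (P_t f - f)/t is close to Delta f(x) = -sin(x)
assert abs((heat_flow_1d(np.sin, x, t) - np.sin(x)) / t + np.sin(x)) < 1e-2
```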
Building on top of Theorem 1.4, we shall prove that the notion in subsection
1.4 also provides an equivalent characterization of Laplacian bounds, see
subsection 3.2.
Besides its own theoretical interest, this perspective will be key to
understanding the interplay between the Hopf-Lax semigroup and the
preservation of Laplacian bounds under lower Ricci curvature bounds, as
discussed below.
### 1.5. Hopf-Lax semigroup and lower Ricci curvature bounds
The Hopf-Lax semigroup is a fundamental tool in the viscosity theory of
Partial Differential Equations, in Optimal Transport and in Geometric
Analysis. In this paper we establish a new principle about the stability of
Laplacian bounds through the Hopf-Lax semigroup under (possibly synthetic)
lower Ricci curvature bounds.
Let $1\leq p<\infty$ and let $(X,\mathsf{d})$ be a metric space. Let us
consider $f:X\to\mathbb{R}\cup\\{\pm\infty\\}$ not identically $+\infty$ and
let the evolution via the $p$-Hopf-Lax semigroup, for $0<t<\infty$ be defined
by
(1.9) $\mathcal{Q}^{p}_{t}f(x):=\inf_{y\in
X}\left(f(y)+\frac{\mathsf{d}(x,y)^{p}}{p\,t^{p-1}}\right)\,.$
Notice that when $p=1$ there is a simpler expression for the Hopf-Lax
semigroup, actually independent of $t$, namely:
$f^{c}(x):=\mathcal{Q}^{1}_{t}f(x)=\mathcal{Q}^{1}f(x)=\inf_{y\in
X}\big{(}f(y)+\mathsf{d}(x,y)\big{)}\,.$
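Two elementary properties of $f^{c}$ (standard facts, recorded here for convenience): it lies below $f$ and, when finite, it is automatically $1$-Lipschitz.

```latex
% Choosing y = x in the infimum gives f^c <= f. Moreover, for any
% x, x' in X and any y in X, the triangle inequality yields
f^{c}(x) \le f(y)+\mathsf{d}(x,y)
         \le f(y)+\mathsf{d}(x',y)+\mathsf{d}(x,x')\,;
% taking the infimum over y,
f^{c}(x) \le f^{c}(x')+\mathsf{d}(x,x')\,,
% and exchanging x and x' shows that f^c is 1-Lipschitz.
```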
The role of the $2$-Hopf-Lax semigroup (also commonly known as inf-
convolution) as a nonlinear regularization tool was put forward in [100]. Its
connection with the viscosity theory was clarified later in [52], where the
key property of this nonlinear convolution (see Lemma A.5 therein) is that
$\mathcal{Q}_{t}^{2}$ maps viscosity supersolutions into viscosity
supersolutions. All these properties, in this generality, are usually proved
by relying on the Hilbert space structure of the Euclidean space.
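For context, the classical Euclidean picture behind this regularization (standard Hopf-Lax theory, recalled informally): for $f$ Lipschitz on $\mathbb{R}^{n}$, the function $u(t,x):=\mathcal{Q}^{2}_{t}f(x)$ is the viscosity solution of the Hamilton-Jacobi equation

```latex
\partial_{t}u+\tfrac{1}{2}|\nabla u|^{2}=0\,,\qquad u(0,\cdot)=f\,.
% Moreover, for each fixed t > 0 the map
x\;\longmapsto\; u(t,x)-\frac{|x|^{2}}{2t}
% is concave (an infimum of functions affine in x), so u(t, .) is
% semiconcave and hence twice differentiable a.e. by Alexandrov's
% theorem: this is the regularizing effect of Q^2_t.
```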
The $2$-Hopf-Lax semigroup was then used in [32] in the analysis of elliptic
operators in non-divergence form on Riemannian manifolds with non-negative
sectional curvature, later extended to lower Ricci curvature bounds in [92,
132]. The Hopf-Lax semigroup also played a key role in the characterization of
lower Ricci bounds for smooth Riemannian manifolds in terms of optimal
transport [115, 51, 129] which paved the way to the synthetic theory of Lott-
Sturm-Villani $\operatorname{CD}(K,N)$ spaces [127, 128, 102].
A subsequent breakthrough came in [99] with a new connection between the Hopf-
Lax semigroup (for general exponents $p$) and lower bounds on the Ricci
curvature. On a smooth Riemannian manifold $(M,g)$ with Riemannian distance
$\mathsf{d}$ the following conditions are equivalent:
* (i)
$\operatorname{Ric}\geq K$, for some $K\in\mathbb{R}$;
* (ii)
let $1\leq p<\infty$ be fixed. For any non-negative Lipschitz function with
bounded support $f:M\to\mathbb{R}$ it holds
(1.10)
$P_{s}\left(\mathcal{Q}^{p}_{1}f\right)(x)-P_{s}f(y)\leq\frac{e^{-pKs}}{p}\mathsf{d}(x,y)^{p}\,,$
for any $x,y\in M$ and any $s\geq 0$, where we denote by $P_{s}$ the heat
flow at time $s$.
The robustness of condition (ii) (notice that it involves only objects that
have a meaning in the setting of metric measure spaces) and of the proof of
the equivalence opened the way to several developments in the smooth and in
the non-smooth theory of lower Ricci curvature bounds, see for instance [10,
11, 23]. In particular, (ii) is a synthetic condition, valid also in the
framework of $\operatorname{RCD}(K,\infty)$ metric measure spaces.
A striking consequence of the Kuwada duality (1.10) which is explored in this
paper is that the Hopf-Lax semigroup maps superharmonic functions into
superharmonic functions on spaces with non-negative Ricci curvature, in
synthetic sense, for any $1\leq p<\infty$. More generally, it preserves (up
to errors depending on the lower Ricci curvature bound) Laplacian upper
bounds.
Indeed, suppose that $(M,g)$ is a compact manifold with non-negative Ricci
curvature and that $f:M\to\mathbb{R}$ is a smooth function. Let $x,y\in M$ be
such that
(1.11) $\mathcal{Q}^{p}_{1}f(x)-f(y)=\frac{1}{p}\mathsf{d}(x,y)^{p}\,.$
Then, assuming for the sake of this presentation that $\mathcal{Q}^{p}_{1}f$
is smooth at $x$, we can take right derivatives at time $s=0$ in (1.10) and,
taking (1.11) into account, obtain
$\Delta\mathcal{Q}^{p}_{1}f(x)\leq\Delta f(y)\,.$
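The right-derivative computation alluded to above can be spelled out (still under the smoothness assumption of this sketch, and with $K=0$): set

```latex
\varphi(s):=P_{s}\big(\mathcal{Q}^{p}_{1}f\big)(x)-P_{s}f(y)
            -\tfrac{1}{p}\,\mathsf{d}(x,y)^{p}\,.
% By (1.10) with K = 0 we have \varphi(s) <= 0 for all s >= 0,
% while (1.11) gives \varphi(0) = 0. Hence \varphi'(0^+) <= 0, and
% the short-time expansion P_s g = g + s\Delta g + o(s) yields
0\;\ge\;\varphi'(0^{+})=\Delta\mathcal{Q}^{p}_{1}f(x)-\Delta f(y)\,.
```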
Focusing on the case $p=1$, the theory of Laplacian bounds for non-smooth
functions allows us to remove these regularity assumptions (which are
unnatural, even on smooth manifolds) and prove the following.
###### Theorem 1.5.
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space. Let $f:X\to\mathbb{R}$ be a locally Lipschitz function. Let
$\Omega,\Omega^{\prime}\subset X$ be open domains and $\eta\in\mathbb{R}$.
Then the following holds. Assume that $f^{c}$ is finite and that, for any
$x\in\Omega^{\prime}$, the infimum defining $f^{c}(x)$ is attained at some
$y\in\Omega$. Assume moreover that
(1.12) $\Delta f\leq\eta\quad\text{on $\Omega$}\,.$
Then
$\Delta
f^{c}\leq\eta-\min_{x\in\Omega^{\prime},y\in\Omega}K\mathsf{d}(x,y)\quad\text{on
$\Omega^{\prime}$},$
where the Laplacian bounds are to be understood in any of the equivalent
senses discussed in subsection 1.4 (see also Theorem 3.4).
Similar results can be obtained for general exponents $p\in[1,\infty)$,
covering in particular the case $p=2$ that was classically considered in the
viscosity theory, as we recalled above.
We are not aware of any reference for the above stability of Laplacian bounds
with respect to the Hopf-Lax semigroup for general exponents $p\in[1,\infty)$,
even in the setting of smooth Riemannian manifolds. The property is stated in
the unpublished [118] for Alexandrov spaces with lower sectional curvature
bounds, where a sketch of the proof is also presented. The only other
references we are aware of are [136], dealing only with the case $p=2$ on
Alexandrov spaces with lower Ricci curvature bounds and relying on the
existence of a parallel transport between tangent cones along minimizing
geodesics and on the second variation formula for the arc length from [117],
and the more recent [135], dealing with $1<p<\infty$ on smooth Riemannian
manifolds. Also in this case, our proof is completely different and more
robust, as it completely avoids the use of parallel transport along geodesics.
Let us also mention that the property in Theorem 1.5 is equivalent to a lower
Ricci curvature bound, at least on smooth Riemannian manifolds (see Theorem
4.1). The range of applications of this PDE principle is expected to be
broad. For instance, it plays a key role in the solution of the well-known
open question about Lipschitz continuity of harmonic maps from
$\operatorname{RCD}(K,N)$ to $\mathrm{CAT}(0)$ spaces by the authors in [113]
(see also the subsequent [67]).
Finally, we also mention that some of the results of the present work (namely:
the equivalence of Laplacian bounds, Theorem 3.4, and the Laplacian bounds on
the distance function from locally perimeter minimizers, Theorem 1.1) have
been subsequently extended [70] to $\operatorname{RCD}(K,N)$ spaces endowed
with a general reference measure $\mathfrak{m}$ (i.e. not necessarily the
$N$-dimensional Hausdorff measure $\mathscr{H}^{N}$).
### Organization of the paper
The paper is organised as follows:
* •
In section 2, we collect some background results about
$\operatorname{RCD}(K,N)$ metric measure spaces that will be needed in the
subsequent developments. Let us mention that this preliminary section already
contains some original results about the pointwise short-time behaviour of
the heat flow and about local Green functions of the Laplacian. In particular, the
properties of the local Green functions are employed in the construction of a
local Green distance with good properties, which is of independent interest.
* •
In section 3 we consider some new equivalences between different notions of
Laplacian and bounds for the Laplacian on an $\operatorname{RCD}(K,N)$ metric
measure space $(X,\mathsf{d},\mathscr{H}^{N})$, as outlined in subsection 1.4.
* •
Section 4 is dedicated to analyzing the interplay between the Hopf-Lax
semigroups (associated with exponents $1\leq p<\infty$), Ricci curvature
lower bounds, and Laplacian upper bounds, as sketched in subsection 1.5.
* •
Section 5 is devoted to the study of mean curvature bounds for boundaries of
locally perimeter minimizing sets of finite perimeter, in the framework of
$\operatorname{RCD}(K,N)$ metric measure spaces
$(X,\mathsf{d},\mathscr{H}^{N})$. Mean curvature bounds will be encoded into
Laplacian bounds for distance functions, as outlined in subsection 1.1 and
subsection 1.3.
* •
Finally, Section 6 is dedicated to the partial regularity theory for minimal
boundaries on non collapsed $\operatorname{RCD}$ spaces, as sketched in
subsection 1.2.
## Acknowledgements
The authors are supported by the European Research Council (ERC), under the
European Union Horizon 2020 research and innovation programme, via the ERC
Starting Grant “CURVATURE”, grant agreement No. 802689.
The second author is grateful to Gioacchino Antonelli and Giovanni Comi for
useful comments on a preliminary version of this note. The authors are
grateful to the anonymous reviewers for their careful reading and comments.
## 2\. Preliminaries
In this preliminary section we collect some background results about
$\operatorname{RCD}(K,N)$ metric measure spaces that will be needed in the
subsequent developments of the paper. This section already contains some
original results of independent interest, as detailed below.
In subsection 2.1 we fix some notation and quickly recall the definition and
basic properties of the Cheeger energy. In subsection 2.2 we briefly introduce
$\operatorname{RCD}(K,N)$ spaces and recall some of their fundamental
properties, together with some useful terminology. In subsection 2.3 we focus
on the regularity properties of those $\operatorname{RCD}(K,N)$ metric measure
spaces where the reference measure $\mathfrak{m}$ is the $N$-dimensional
Hausdorff measure $\mathscr{H}^{N}$. We dedicate subsection 2.4 to the
background material about sets of finite perimeter. In subsection 2.5 we focus
on the Laplacian, the heat flow and the heat kernel. After recalling the basic
notions and properties, we present some new results about the pointwise short
time behaviour of the heat flow. Then in subsection 2.6 we recall some
existence and regularity results about the Poisson equation and in subsection
2.7 we present a new analysis of the local Green function of the Laplacian in
this framework. The properties of the local Green function are finally
employed in the construction of a local Green distance with good properties,
which is of independent interest.
### 2.1. Slope, Cheeger energy and weak upper gradient
Throughout the paper, $(X,\mathsf{d},\mathfrak{m})$ will be a metric measure
space, i.e. $(X,\mathsf{d})$ is a complete and separable metric space endowed
with a non-negative Borel measure which is finite on bounded sets.
Given $f:X\to\mathbb{R}$, we denote with $\operatorname{lip}f$ the slope of
$f$ defined as
$\operatorname{lip}f(x_{0}):=\limsup_{x\to
x_{0}}\frac{|f(x)-f(x_{0})|}{\mathsf{d}(x,x_{0})}\;\text{ if $x_{0}$ is not
isolated}\,$
and $\operatorname{lip}f(x_{0})=0$ otherwise.
We denote with $\operatorname{LIP}(X)$ (resp.
$\operatorname{LIP_{b}}(X),\operatorname{LIP_{\rm bs}}(X)$) the space of
Lipschitz functions on $(X,\mathsf{d})$ (resp. bounded Lipschitz functions,
and Lipschitz functions with bounded support). For
$f\in\operatorname{LIP}(X)$, let $\operatorname{Lip}(f)$ denote the Lipschitz
constant of $f$. Clearly, $\operatorname{lip}f\leq\operatorname{Lip}(f)$ on
all $X$.
The Cheeger energy (introduced in [39] and further studied in [9]) is defined
as the $L^{2}$-lower semicontinuous envelope of the functional
$f\mapsto\frac{1}{2}\int_{X}(\operatorname{lip}f)^{2}\,\mathop{}\\!\mathrm{d}\mathfrak{m}$,
i.e.:
${\sf
Ch}(f):=\inf\left\\{\liminf_{n\to\infty}\frac{1}{2}\int_{X}(\operatorname{lip}f_{n})^{2}\,\mathop{}\\!\mathrm{d}\mathfrak{m}\,:\,f_{n}\in\operatorname{LIP}(X),\;f_{n}\to
f\text{ in }L^{2}(X,\mathfrak{m})\right\\}\,.$
If ${\sf Ch}(f)<\infty$ it was proved in [39, 9] that the set
$G(f):=\left\\{g\in
L^{2}(X,\mathfrak{m})\,:\,\exists\,f_{n}\in\operatorname{LIP}(X),\,f_{n}\to
f,\,\operatorname{lip}f_{n}\rightharpoonup h\geq g\text{ in
}L^{2}(X,\mathfrak{m})\right\\}$
is closed and convex, therefore it admits a unique element of minimal norm
called minimal weak upper gradient and denoted by $|\nabla f|$. The Cheeger
energy can be then represented by integration as
${\sf Ch}(f):=\frac{1}{2}\int_{X}|\nabla
f|^{2}\mathop{}\\!\mathrm{d}\mathfrak{m}\,.$
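For orientation, a classical consistency check (due to Cheeger, stated here informally and not claimed in the text above): on $X=\mathbb{R}^{n}$ with the Euclidean distance and the Lebesgue measure, the minimal weak upper gradient of a Sobolev function coincides with the modulus of its distributional gradient, so that

```latex
% On (R^n, Euclidean distance, L^n), for f in H^{1,2}:
|\nabla f| = |\nabla f|_{\mathrm{eucl}}\quad\mathscr{L}^{n}\text{-a.e.}\,,
\qquad
{\sf Ch}(f)=\frac{1}{2}\int_{\mathbb{R}^{n}}|\nabla f|_{\mathrm{eucl}}^{2}
\,\mathop{}\!\mathrm{d}\mathscr{L}^{n}\,.
% In particular Ch is a quadratic form and the associated gradient
% flow P_t is the classical heat semigroup.
```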
It is not difficult to see that ${\sf Ch}$ is a $2$-homogeneous, lower
semicontinuous, convex functional on $L^{2}(X,\mathfrak{m})$, whose proper domain
${\rm Dom}({\sf Ch}):=\\{f\in L^{2}(X,\mathfrak{m})\,:\,{\sf Ch}(f)<\infty\\}$
is a dense linear subspace of $L^{2}(X,\mathfrak{m})$. It then admits an
$L^{2}$-gradient flow which is a continuous semigroup of contractions
$(P_{t})_{t\geq 0}$ in $L^{2}(X,\mathfrak{m})$, whose continuous trajectories
$t\mapsto P_{t}f$, for $f\in L^{2}(X,\mathfrak{m})$, are locally Lipschitz
curves from $(0,\infty)$ with values into $L^{2}(X,\mathfrak{m})$.
Throughout the paper, we will assume that ${\sf Ch}:{\rm Dom}({\sf
Ch})\to\mathbb{R}$ satisfies the parallelogram identity (i.e. it is a
quadratic form) or, equivalently, that $P_{t}:L^{2}(X,\mathfrak{m})\to
L^{2}(X,\mathfrak{m})$ is a linear operator for every $t\geq 0$. This, in
turn, is equivalent to requiring that ${\rm Dom}({\sf Ch})$ endowed with the
norm $\|f\|_{H^{1,2}}^{2}:=\|f\|_{L^{2}}^{2}+2{\sf Ch}(f)$ is a Hilbert space
(in general it is only a Banach space), which will be denoted by
$H^{1,2}(X,\mathsf{d},\mathfrak{m})$; see [10, 65].
### 2.2. General properties of $\operatorname{RCD}(K,N)$ spaces
The main subject of our investigation will be the so-called
$\operatorname{RCD}(K,N)$ metric measure spaces $(X,\mathsf{d},\mathfrak{m})$,
i.e. infinitesimally Hilbertian metric measure spaces with Ricci curvature
bounded from below and dimension bounded from above, in synthetic sense.
The Riemannian Curvature Dimension condition $\operatorname{RCD}(K,\infty)$
was introduced in [10] (see also the subsequent [8]) coupling the Curvature
Dimension condition $\operatorname{CD}(K,\infty)$, previously developed in
[127, 128] and independently in [102], with the assumption that the heat
semigroup $(P_{t})_{t\geq 0}$ is linear in $L^{2}(X,\mathfrak{m})$. The finite
dimensional refinements subsequently led to the notions of
$\operatorname{RCD}(K,N)$ and $\operatorname{RCD}^{*}(K,N)$ spaces,
corresponding to $\operatorname{CD}(K,N)$ (resp. $\operatorname{CD}^{*}(K,N)$,
see [22]) coupled with linear heat flow. The class $\operatorname{RCD}(K,N)$
was proposed in [65]. The (a priori more general)
$\operatorname{RCD}^{*}(K,N)$ condition was thoroughly analysed in [60] and
(subsequently and independently) in [15] (see also [35] for the equivalence
between $\operatorname{RCD}^{*}$ and $\operatorname{RCD}$ in the case of
finite reference measure).
We avoid giving a detailed introduction to this notion, referring the reader
to the survey [5] and references therein for the relevant background. Below we
recall some of the main properties that will be relevant for our purposes.
Note that, if $(X,\mathsf{d},\mathfrak{m})$ is an $\operatorname{RCD}(K,N)$
m.m.s., then so is
$(\operatorname{supp}\,\mathfrak{m},\mathsf{d},\mathfrak{m})$, hence in the
following we will always tacitly assume $\operatorname{supp}\,\mathfrak{m}=X$.
Any $\operatorname{RCD}(K,N)$ m.m.s. $(X,\mathsf{d},\mathfrak{m})$ satisfies
the Bishop-Gromov inequality:
(2.1)
$\frac{\mathfrak{m}(B_{R}(x))}{v_{K,N}(R)}\leq\frac{\mathfrak{m}(B_{r}(x))}{v_{K,N}(r)}\quad\text{for
any $0<r<R$ and $x\in X$}\,,$
where $v_{K,N}(r)$ is the volume of the ball with radius $r$ in the model
space with dimension $N$ and Ricci curvature $K$. In particular
$(X,\mathsf{d},\mathfrak{m})$ is locally uniformly doubling. Furthermore, it
was proved in [121] that it satisfies a local Poincaré inequality. Therefore
$\operatorname{RCD}(K,N)$ spaces fit in the framework of PI spaces.
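In the model case $K=0$ the comparison profile in (2.1) is explicit (a standard specialization, recorded for concreteness):

```latex
% For K = 0 the model space is R^N, so
v_{0,N}(r)=\omega_{N}\,r^{N}\,,\qquad
\omega_{N}:=\mathscr{L}^{N}(B_{1}(0^{N}))\,,
% and (2.1) says that r -> m(B_r(x))/r^N is non-increasing.
% Taking R = 2r gives the doubling bound
\mathfrak{m}(B_{2r}(x))\le 2^{N}\,\mathfrak{m}(B_{r}(x))\,,
% uniformly in x and r, which is the doubling property
% underlying the PI structure mentioned above.
```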
We assume the reader to be familiar with the notion of (pointed measured)
Gromov-Hausdorff convergence (pmGH-convergence for short), referring to [130,
Chapter 27] and [69] for an overview on the subject.
###### Definition .
A sequence
$\set{(X_{i},\mathsf{d}_{i},\mathfrak{m}_{i},x_{i})}_{i\in\mathbb{N}}$ of
pointed m.m.s. is said to converge in the pmGH topology to $(Y,\varrho,\mu,y)$
if there exist a complete separable metric space $(Z,\mathsf{d}_{Z})$ and
isometric embeddings
$\displaystyle\Psi_{i}:(\operatorname{supp}\mathfrak{m}_{i},\mathsf{d}_{i})\to(Z,\mathsf{d}_{Z})\qquad\forall
i\in\mathbb{N}\,,$
$\displaystyle\Psi:(\operatorname{supp}\mu,\varrho)\to(Z,\mathsf{d}_{Z})\,,$
such that for every $\varepsilon>0$ and $R>0$ there exists $i_{0}$ such that
for every $i>i_{0}$
$\Psi(B^{Y}_{R}(y))\subset[\Psi_{i}(B^{X_{i}}_{R}(x_{i}))]_{\varepsilon}\,,\qquad\Psi_{i}(B^{X_{i}}_{R}(x_{i}))\subset[\Psi(B^{Y}_{R}(y))]_{\varepsilon}\,,$
where $[A]_{\varepsilon}:=\set{z\in Z\ :\mathsf{d}_{Z}(z,A)<\varepsilon}$ for
every $A\subset Z$. Moreover
$(\Psi_{i})_{\\#}\mathfrak{m}_{i}\rightharpoonup\Psi_{\\#}\mu$, where the
convergence is understood in duality with $\operatorname{C_{\rm bs}}(Z)$.
In the case of a sequence of uniformly locally doubling metric measure spaces
$(X_{i},\mathsf{d}_{i},\mathfrak{m}_{i},x_{i})$ (as in the case of
$\operatorname{RCD}(K,N)$ spaces), the pointed measured Gromov-Hausdorff
convergence to $(Y,\varrho,\mu,y)$ can be equivalently characterized asking
for the existence of a proper metric space $(Z,\mathsf{d}_{Z})$ such that all
the metric spaces $(X_{i},\mathsf{d}_{i})$ are isometrically embedded into
$(Z,\mathsf{d}_{Z})$, $x_{i}\to y$ and $\mathfrak{m}_{i}\rightharpoonup\mu$ in
duality with $\operatorname{C_{\rm bs}}(Z)$. Notice also that the pmGH
convergence is metrizable, and therefore it makes sense to say that two
pointed metric measure spaces are $\varepsilon$-close in this sense. Analogous
remarks hold for the Gromov-Hausdorff distance between metric spaces.
A fundamental property of $\operatorname{RCD}(K,N)$ spaces, that will be used
several times in this paper, is the stability w.r.t. pmGH-convergence, meaning
that a pmGH-limit of a sequence of (pointed) $\operatorname{RCD}(K_{n},N_{n})$
spaces for some $K_{n}\to K$ and $N_{n}\to N$ is an $\operatorname{RCD}(K,N)$
m.m.s.
Given a m.m.s. $(X,\mathsf{d},\mathfrak{m})$, $x\in X$ and $r\in(0,1)$, we
consider the rescaled and normalized pointed m.m.s.
$(X,r^{-1}\mathsf{d},\mathfrak{m}_{r}^{x},x)$, where
$C(x,r):=\int_{B_{r}(x)}\left(1-\frac{\mathsf{d}(x,y)}{r}\right)\mathop{}\\!\mathrm{d}\mathfrak{m}(y)\,,\qquad\mathfrak{m}_{r}^{x}:=C(x,r)^{-1}\,\mathfrak{m}\,.$
###### Definition (Tangent cone).
We say that a pointed m.m.s. $(Y,\mathsf{d}_{Y},\eta,y)$ is tangent to
$(X,\mathsf{d},\mathfrak{m})$ at $x$ if there exists a sequence
$r_{i}\downarrow 0$ such that
$(X,r_{i}^{-1}\mathsf{d},\mathfrak{m}_{r_{i}}^{x},x)\rightarrow(Y,\mathsf{d}_{Y},\eta,y)$
in the pmGH-topology. The collection of all the tangent spaces of
$(X,\mathsf{d},\mathfrak{m})$ at $x$ is denoted by
$\operatorname{Tan}_{x}(X,\mathsf{d},\mathfrak{m})$.
A compactness argument, which is due to Gromov, together with the rescaling
and stability properties of the $\operatorname{RCD}(K,N)$ condition, yields
that $\operatorname{Tan}_{x}(X,\mathsf{d},\mathfrak{m})$ is non-empty for
every $x\in X$ and its elements are all $\operatorname{RCD}(0,N)$ pointed m.m.
spaces.
Let us recall below the notion of $k$-regular point and $k$-regular set.
###### Definition .
Given any natural number $1\leq k\leq N$, we say that $x\in X$ is a
$k$-regular point if
$\operatorname{Tan}_{x}(X,\mathsf{d},\mathfrak{m})=\left\\{(\mathbb{R}^{k},\mathsf{d}_{eucl},c_{k}\mathscr{L}^{k},0)\right\\}\,.$
We shall denote by $\mathcal{R}_{k}$ the set of $k$-regular points in $X$.
Combining the results in [112] with [90, 57, 71] and [28], we have a good
understanding of the rectifiable structure of $\operatorname{RCD}(K,N)$ metric
measure spaces.
###### Theorem 2.1 (Rectifiable structure).
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ m.m.s. with
$K\in\mathbb{R}$ and $N\geq 1$. Then there exists a natural number $1\leq
n\leq N$, called essential dimension of $X$, such that
$\mathfrak{m}(X\setminus\mathcal{R}_{n})=0$. Moreover $\mathcal{R}_{n}$ is
$(\mathfrak{m},n)$-rectifiable and $\mathfrak{m}$ is representable as
$\theta\,\mathscr{H}^{n}\llcorner\mathcal{R}_{n}$ for some non-negative
density $\theta\in L^{1}_{\rm loc}(X,\mathscr{H}^{n}\llcorner\mathcal{R}_{n})$.
Recall that $X$ is said to be $(\mathfrak{m},n)$-rectifiable if there exists a
family $\left\\{A_{i}\right\\}_{i\in\mathbb{N}}$ of Borel subsets of $X$ such
that each $A_{i}$ is bi-Lipschitz to a Borel subset of $\mathbb{R}^{n}$ and
$\mathfrak{m}(X\setminus\cup_{i\in\mathbb{N}}A_{i})=0$.
### 2.3. Non collapsed spaces
We will mainly focus on the so called noncollapsed $\operatorname{RCD}(K,N)$
metric measure spaces, i.e. those spaces for which the reference measure is
the $N$-dimensional Hausdorff measure $\mathscr{H}^{N}$.
As happens for noncollapsed Ricci limits, whose regularity is much better
than that of collapsed limits (see [41, 42, 43]), noncollapsed
$\operatorname{RCD}$ spaces are more regular than general $\operatorname{RCD}$
spaces. Their properties have been investigated thoroughly in [96, 56, 89, 19,
29].
Below we state a fundamental $\varepsilon$-regularity result for non collapsed
spaces. For smooth manifolds and their limits it was proved in [49, 41],
building on a variant of the classical Reifenberg theorem valid for metric
spaces (see also the earlier [17]). We refer to [56, 89] for the
generalization to $\operatorname{RCD}$ spaces and the present form.
###### Theorem 2.2 ($\varepsilon$-regularity).
Let $1\leq N<\infty$ be a fixed natural number. Then, for any
$0<\varepsilon<1/5$ there exists $\delta=\delta(\varepsilon,N)>0$ such that
for any $\operatorname{RCD}(-\delta(N-1),N)$ space
$(X,\mathsf{d},\mathscr{H}^{N})$, if
$\mathsf{d}_{GH}(B_{2}(x),B_{2}(0^{N}))<\delta\,,$
then:
* i)
$\left\lvert\mathscr{H}^{N}(B_{1}(x))-\mathscr{H}^{N}(B_{1}(0^{N}))\right\rvert<\varepsilon$;
* ii)
for any $y\in B_{1}(x)$ and for any $0<r<1/2$ it holds
$\mathsf{d}_{GH}(B_{r}(y),B_{r}(0^{N}))<\varepsilon r\,;$
* iii)
$B_{1}(x)$ is $C^{1-\varepsilon}$-biHölder homeomorphic to the Euclidean ball
$B_{1}(0^{N})$.
Another key regularity property of noncollapsed $\operatorname{RCD}$ spaces is
that all their tangents are metric cones, see [56]. This is a consequence of
the so-called volume cone implies metric cone property, originally proved in
[40] for limits of smooth manifolds and later extended to $\operatorname{RCD}$
spaces in [55].
Building on top of this, one can introduce a natural stratification of the
singular set of an $\operatorname{RCD}(K,N)$ metric measure space
$(X,\mathsf{d},\mathscr{H}^{N})$, i.e. the set
$\mathcal{S}:=X\setminus\mathcal{R}=X\setminus\mathcal{R}_{N}$, based on the
maximal number of Euclidean factors split off by any tangent cone.
###### Definition .
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space. Then for any $0\leq k\leq N$ we let
$\mathcal{S}_{k}:=\\{x\in X\,:\quad\text{no tangent cone at $x$ splits off a
factor $\mathbb{R}^{k+1}$}\\}\,.$
A classical dimension reduction argument then yields the Hausdorff dimension
bounds
(2.2) $\dim_{H}\mathcal{S}_{k}\leq k\,,$
for any $0\leq k\leq N-1$.
When combined with the $\varepsilon$-regularity Theorem 2.2 and its
counterpart for points in the top dimensional singular stratum obtained in
[29] (see Theorem 6.8), the Hausdorff dimension bound (2.2) allows one to
understand the topological regularity of non collapsed $\operatorname{RCD}$
spaces away from sets of codimension two.
###### Theorem 2.3 (Topological structure of non collapsed spaces).
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space for some $K\in\mathbb{R}$ and $1\leq N<\infty$. Then, for any
$0<\alpha<1$ there exists a decomposition
$X=\partial X\cup O_{\alpha}\cup\mathcal{S}_{\alpha}\,,$
where $\partial X=\overline{\mathcal{S}^{N-1}\setminus\mathcal{S}^{N-2}}$ is
the boundary of $(X,\mathsf{d},\mathscr{H}^{N})$, $O_{\alpha}$ is an open
neighbourhood of the regular set $\mathcal{R}$ that is $C^{\alpha}$-biHölder
homeomorphic to a smooth $N$-dimensional Riemannian manifold, and
$\dim_{H}\mathcal{S}_{\alpha}\leq N-2$.
Moreover, for any $0<\alpha<1$ there exists an open neighbourhood $V_{\alpha}$
of $\mathcal{S}^{N-1}\setminus\mathcal{S}^{N-2}$ inside $\partial X$ such that
$V_{\alpha}$ is $C^{\alpha}$-biHölder homeomorphic to a smooth
$(N-1)$-dimensional Riemannian manifold.
Further estimates for singular sets on non collapsed $\operatorname{RCD}$
spaces will be recalled later in the note.
### 2.4. Sets of finite perimeter
This subsection introduces some classical and more recent results about sets
of finite perimeter in the framework of $\operatorname{RCD}(K,N)$ metric
measure spaces.
#### 2.4.1. Introduction and basic properties
We recall the definition of function of bounded variation in the present
setting.
###### Definition (Function of bounded variation).
We say that a function $f\in L^{1}(X,\mathfrak{m})$ has bounded variation (and
we write $f\in\operatorname{BV}(X,\mathsf{d},\mathfrak{m})$) if there exist
locally Lipschitz functions $f_{i}$ converging to $f$ in
$L^{1}(X,\mathfrak{m})$ such that
$\limsup_{i\to\infty}\int_{X}\operatorname{lip}f_{i}\mathop{}\\!\mathrm{d}\mathfrak{m}<\infty\,.$
By localizing this construction one can define
$\left\lvert
Df\right\rvert(A):=\inf\left\\{\liminf_{i\to\infty}\int_{A}\operatorname{lip}f_{i}\mathop{}\\!\mathrm{d}\mathfrak{m}:f_{i}\in\operatorname{LIP}_{{\rm
loc}}(A),\quad f_{i}\to f\text{ in }L^{1}(A,\mathfrak{m})\right\\}$
for any open set $A\subset X$. In [7] (see also [109] for the case of locally
compact spaces) it is proven that this set function is the restriction to open
sets of a finite Borel measure that we call total variation of $f$ and still
denote $\left\lvert Df\right\rvert$.
Dropping the global integrability condition on $f=\chi_{E}$, let us now
recall the analogous definition of a set of finite perimeter in a metric
measure space (see again [4, 109, 7]).
###### Definition (Perimeter and sets of finite perimeter).
Given a Borel set $E\subset X$ and an open set $A$, the perimeter
$\operatorname{Per}(E,A)$ is defined in the following way:
$\operatorname{Per}(E,A):=\inf\left\\{\liminf_{n\to\infty}\int_{A}\operatorname{lip}u_{n}\mathop{}\\!\mathrm{d}\mathfrak{m}:u_{n}\in\operatorname{LIP}_{{\rm
loc}}(A),\quad u_{n}\to\chi_{E}\quad\text{in }L^{1}_{{\rm
loc}}(A,\mathfrak{m})\right\\}\,.$
We say that $E$ has finite perimeter if $\operatorname{Per}(E,X)<\infty$. In
that case it can be proved that the set function
$A\mapsto\operatorname{Per}(E,A)$ is the restriction to open sets of a finite
Borel measure $\operatorname{Per}(E,\cdot)$ defined by
$\operatorname{Per}(E,B):=\inf\left\\{\operatorname{Per}(E,A):B\subset
A,\text{ }A\text{ open}\right\\}\,.$
Let us remark for the sake of clarity that $E\subset X$ with finite
$\mathfrak{m}$-measure is a set of finite perimeter if and only if
$\chi_{E}\in\operatorname{BV}(X,\mathsf{d},\mathfrak{m})$ and that
$\operatorname{Per}(E,\cdot)=\left\lvert D\chi_{E}\right\rvert(\cdot)$. In the
following we will say that $E\subset X$ is a set of locally finite perimeter
if $\chi_{E}$ is a function of locally bounded variation, that is to say
$\eta\chi_{E}\in\operatorname{BV}(X,\mathsf{d},\mathfrak{m})$ for any
$\eta\in\operatorname{LIP_{\rm bs}}(X,\mathsf{d})$. In the sequel we shall
adopt both the notations $\left\lvert D\chi_{E}\right\rvert$ and
$\operatorname{Per}_{E}$ to denote the perimeter measure of a set with finite
perimeter $E$.
We will usually assume that a set of finite perimeter $E\subset X$ is
normalized in the following sense (see [107, Proposition 12.19] for an
analogous classical result in the Euclidean space and the proof of [94,
Theorem 4.2] for the present setting): up to modification on an
$\mathfrak{m}$-negligible set of $E$, it holds that $\mathfrak{m}(E\cap
B_{r}(x))>0$ for any $x\in E$ and $r>0$ and $\mathfrak{m}(B_{r}(x)\setminus
E)>0$ for any $x\in X\setminus E$ and $r>0$.
This implies in particular that, for any $x\in\partial E$ (where we denote by
$\partial E$ the topological boundary of $E$), it holds
(2.3) $\mathfrak{m}(B_{r}(x)\cap E)>0\,\quad\text{and
}\,\mathfrak{m}(B_{r}(x)\setminus E)>0\,,\quad\text{for any $r>0$}\,.$
###### Definition .
We adopt the terminology measure theoretic interior to indicate
$\mathrm{Int}(E):=\Big{\\{}x\in X\,:\,\lim_{r\to 0}\frac{\mathfrak{m}(E\cap
B_{r}(x))}{\mathfrak{m}(B_{r}(x))}=1\Big{\\}}\,,$
i.e. the set of points of density $1$ of $\chi_{E}$. Note that, by the
Lebesgue differentiation theorem, $\mathfrak{m}(E\Delta\mathrm{Int}(E))=0$.
When considering the lower and upper approximate limits of the indicator
function $\chi_{E}$ of $E$, i.e.
$\chi_{E}^{\vee}(x):=\inf\Big{\\{}t\in\mathbb{R}\,:\,\lim_{r\to
0}\frac{\mathfrak{m}(\\{\chi_{E}>t\\}\cap
B_{r}(x))}{\mathfrak{m}(B_{r}(x))}=0\Big{\\}}\,$
and
(2.4) $\chi_{E}^{\wedge}(x):=\sup\Big{\\{}t\in\mathbb{R}\,:\,\lim_{r\to
0}\frac{\mathfrak{m}(\\{\chi_{E}<t\\}\cap
B_{r}(x))}{\mathfrak{m}(B_{r}(x))}=0\Big{\\}}\,,$
it is easy to verify that
$\chi_{E}^{\vee}(x)=1\,,\quad\text{on
$X\setminus\mathrm{Int}(E^{c})$}\,\quad\text{and
}\quad\chi_{E}^{\vee}(x)=0\,\quad\text{otherwise}\,,$
while
(2.5) $\chi_{E}^{\wedge}(x)=1\,,\quad\text{on
$\mathrm{Int}(E)$}\,\quad\text{and
}\quad\chi_{E}^{\wedge}(x)=0\,\quad\text{otherwise}\,.$
Following [3, 4] we recall the notion of essential boundary of a set of finite
perimeter.
###### Definition (Essential boundary).
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and let $E\subset X$ be a set of locally finite perimeter. Then
we introduce the essential boundary $\partial^{*}E$ as
(2.6) $\partial^{*}E:=\Big{\\{}x\in X\,:\,\lim_{r\to
0}\frac{\mathfrak{m}(B_{r}(x)\cap E)}{\mathfrak{m}(B_{r}(x))}\neq
0\,\quad\text{and }\quad\lim_{r\to 0}\frac{\mathfrak{m}(B_{r}(x)\setminus
E)}{\mathfrak{m}(B_{r}(x))}\neq 0\Big{\\}}\,.$
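A concrete Euclidean example may help fix these notions (not taken from the text): let $E=\{x_{n}\le 0\}$ be a closed half-space in $\mathbb{R}^{n}$ with the Lebesgue measure. Then

```latex
\mathrm{Int}(E)=\{x_{n}<0\}\,,\qquad
\mathrm{Int}(E^{c})=\{x_{n}>0\}\,,
% so, by (2.5) and the analogous identity for the upper limit,
\chi_{E}^{\wedge}=\chi_{\{x_{n}<0\}}\,,\qquad
\chi_{E}^{\vee}=\chi_{\{x_{n}\le 0\}}\,,
% and the two approximate limits disagree exactly on the hyperplane
\{\chi_{E}^{\vee}\neq\chi_{E}^{\wedge}\}=\{x_{n}=0\}=\partial^{*}E\,,
% where every point has density exactly 1/2.
```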
The following coarea formula for functions of bounded variation on metric
measure spaces is taken from [109, Proposition 4.2], which deals with locally
compact spaces; its proof works in the more general setting of metric measure
spaces.
###### Theorem 2.4 (Coarea formula).
Let $v\in\operatorname{BV}(X,\mathsf{d},\mathfrak{m})$. Then, $\\{v>r\\}$ has
finite perimeter for $\mathscr{L}^{1}$-a.e. $r\in\mathbb{R}$. Moreover, for
any Borel function $f:X\to[0,+\infty]$, it holds
(2.7) $\int_{X}f\mathop{}\\!\mathrm{d}\left\lvert
Dv\right\rvert=\int_{-\infty}^{+\infty}\left(\int_{X}f\mathop{}\\!\mathrm{d}\operatorname{Per}(\\{v>r\\},\cdot)\right)\mathop{}\\!\mathrm{d}r\,.$
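A quick worked instance of (2.7) in the Euclidean setting (a routine check, not from the text): take $v(x)=(1-|x|)^{+}$ on $\mathbb{R}^{n}$ and $f\equiv 1$.

```latex
% |\nabla v| = 1 on B_1 and v = 0 outside, so
|Dv|(\mathbb{R}^{n})=\mathscr{L}^{n}(B_{1})=\omega_{n}\,.
% The superlevel sets are balls: {v > r} = B_{1-r} for 0 <= r < 1
% (and empty for r >= 1), hence the right-hand side of (2.7) is
\int_{0}^{1}\operatorname{Per}(B_{1-r})\,\mathop{}\!\mathrm{d}r
 =\int_{0}^{1}n\,\omega_{n}(1-r)^{n-1}\,\mathop{}\!\mathrm{d}r
 =\omega_{n}\,,
% matching the left-hand side, as the coarea formula predicts.
```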
Let us recall that if $(X,\mathsf{d},\mathfrak{m})$ verifies doubling and
Poincaré inequalities, then a local, relative isoperimetric inequality holds,
see for instance [98, Theorem 3.3]. More precisely: there exist constants
$\lambda>1,C>0,r_{0}>0$, depending only on the doubling and Poincaré
constants, such that
(2.8) $\min\\{\mathfrak{m}(E\cap B_{r}(x)),\mathfrak{m}(B_{r}(x)\setminus
E)\\}\leq Cr\;\operatorname{Per}(E,B_{\lambda r}(x))\,,$
for all $x\in X$, $r\in(0,r_{0})$.
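The scaling in (2.8) can be tested on a model case (an illustrative Euclidean computation; the constants below are specific to this example): let $E$ be a half-space whose boundary passes through the center of $B_{r}(x)\subset\mathbb{R}^{n}$.

```latex
% Half-space through the center of the ball:
\min\{\mathscr{L}^{n}(E\cap B_{r}(x)),\,
      \mathscr{L}^{n}(B_{r}(x)\setminus E)\}
 =\tfrac{1}{2}\,\omega_{n}r^{n}\,,\qquad
\operatorname{Per}(E,B_{r}(x))=\omega_{n-1}\,r^{n-1}\,,
% so both sides of (2.8) scale like r^n and the inequality holds
% with \lambda = 1 and C = \omega_n / (2\omega_{n-1}) in this case.
```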
#### 2.4.2. Convergence and stability for sets of finite perimeter and
functions of bounded variation
Before introducing tangents to sets of finite perimeter over
$\operatorname{RCD}$ spaces, let us recall some terminology about convergence
and stability for $\operatorname{BV}$ functions along converging sequences of
metric measure spaces. The discussion below is borrowed from [6], the main
references being [69, 12] and [13], to which we refer the reader for details
and relevant background.
Let $(X_{i},\mathsf{d}_{i},\mathfrak{m}_{i},\bar{x}_{i})$ be a sequence of
pointed metric measure spaces converging in pointed-measured-Gromov-Hausdorff
sense (or, more generally, in pointed measured Gromov sense) to
$(Y,\varrho,\mu,y)$.
###### Definition .
We say that a sequence $(f_{i})\subset L^{1}(X_{i},\mathfrak{m}_{i})$
converges $L^{1}$-strongly to $f\in L^{1}(Y,\mu)$ if
$\sigma\circ f_{i}\mathfrak{m}_{i}\rightharpoonup\sigma\circ
f\mu\qquad\text{and}\qquad\int_{X_{i}}|f_{i}|\mathop{}\\!\mathrm{d}\mathfrak{m}_{i}\to\int_{Y}|f|\mathop{}\\!\mathrm{d}\mu\,,$
where $\sigma(z):=\operatorname{sign}(z)\sqrt{|z|}$ and the weak convergence
is understood in duality with $\operatorname{C_{\rm bs}}(Z)$.
We say that $f_{i}\in\operatorname{BV}(X_{i},\mathfrak{m}_{i})$ converge in
energy in $\operatorname{BV}$ to $f\in\operatorname{BV}(Y,\mu)$ if $f_{i}$
converge $L^{1}$-strongly to $f$ and
$\lim_{i\to\infty}|Df_{i}|(X_{i})=|Df|(Y)\,.$
###### Definition .
We say that a sequence of Borel sets $E_{i}\subset X_{i}$ such that
$\mathfrak{m}_{i}(E_{i})<\infty$ for any $i\in\mathbb{N}$ converges in
$L^{1}$-strong to a Borel set $F\subset Y$ with $\mu(F)<\infty$ if
$\chi_{E_{i}}\mathfrak{m}_{i}\rightharpoonup\chi_{F}\mu$ in duality with
$\operatorname{C_{\rm bs}}(Z)$ and $\mathfrak{m}_{i}(E_{i})\to\mu(F)$.
We also say that a sequence of Borel sets $E_{i}\subset X_{i}$ converges in
$L^{1}_{{\rm loc}}$ to a Borel set $F\subset Y$ if $E_{i}\cap B_{R}(x_{i})\to
F\cap B_{R}(y)$ in $L^{1}$-strong for any $R>0$.
#### 2.4.3. De Giorgi’s Theorem and integration by parts formulae
Let us recall the definition of tangent to a set of finite perimeter from [6].
###### Definition (Tangents to a set of finite perimeter).
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ m.m.s., fix
$x\in X$ and let $E\subset X$ be a set of locally finite perimeter. We denote
by $\operatorname{Tan}_{x}(X,\mathsf{d},\mathfrak{m},E)$ the collection of
quintuples $(Y,\varrho,\mu,y,F)$ satisfying the following two properties:
* (a)
$(Y,\varrho,\mu,y)\in\operatorname{Tan}_{x}(X,\mathsf{d},\mathfrak{m})$ and
$r_{i}\downarrow 0$ are such that the rescaled spaces
$(X,r_{i}^{-1}\mathsf{d},\mathfrak{m}_{x}^{r_{i}},x)$ converge to
$(Y,\varrho,\mu,y)$ in the pointed measured Gromov-Hausdorff topology;
* (b)
$F$ is a set of locally finite perimeter in $Y$ with $\mu(F)>0$ and, if
$r_{i}$ are as in (a), then the sequence $f_{i}=\chi_{E}$ converges in
$L^{1}_{\rm loc}$ to $\chi_{F}$ according to subsubsection 2.4.2.
It is clear that the following locality property of tangents holds: if
(2.9) $\mathfrak{m}\bigl{(}A\cap(E\Delta F)\bigr{)}=0\,,$
then
(2.10)
$\operatorname{Tan}_{x}(X,\mathsf{d},\mathfrak{m},E)=\operatorname{Tan}_{x}(X,\mathsf{d},\mathfrak{m},F)\qquad\forall
x\in A\,,$
whenever $E,\,F$ are sets of locally finite perimeter and $A\subset X$ is
open.
In [26, 27], essential uniqueness of tangents and rectifiability of the
reduced boundary were obtained for sets of finite perimeter on
$\operatorname{RCD}(K,N)$ metric measure spaces.
###### Theorem 2.5 (Uniqueness of tangents).
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ m.m.s. with
essential dimension $1\leq n\leq N$ and let $E\subset X$ be a set of finite
perimeter. Then, for $\left\lvert D\chi_{E}\right\rvert$-a.e. $x\in X$ it
holds
$\operatorname{Tan}_{x}(X,\mathsf{d},\mathfrak{m},E)=\left\\{(\mathbb{R}^{n},\mathsf{d}_{eucl},c_{n}\mathscr{L}^{n},0^{n},\left\\{x_{n}>0\right\\})\right\\}\,.$
We next introduce a notion of reduced boundary, in analogy with the Euclidean
theory.
###### Definition .
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space with essential dimension equal to $n\in\mathbb{N}$, and let
$E\subset X$ be a set of locally finite perimeter. We set
$\mathcal{F}E:=\left\\{x\in
X\;:\;\operatorname{Tan}_{x}(X,\mathsf{d},\mathfrak{m},E)=\left\\{(\mathbb{R}^{n},\mathsf{d}_{eucl},c_{n}\mathscr{L}^{n},0^{n},\left\\{x_{n}>0\right\\})\right\\}\right\\}\,.$
###### Remark .
Let us point out, for the sake of clarity, that the reduced boundary in the
above sense does not fully coincide with the reduced boundary in the classical
Euclidean sense. Indeed the definition of reduced boundary point in the
$\operatorname{RCD}$ framework does not prevent, when read in the Euclidean
context, the possibility that different half-spaces arise as blow-ups when
rescaling along different sequences of radii converging to $0$.
###### Theorem 2.6 (Rectifiability).
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ m.m.s. with
essential dimension $1\leq n\leq N$ and let $E\subset X$ be a set of locally
finite perimeter. Then the reduced boundary $\mathcal{F}E$ is
$\big{(}\left\lvert D\chi_{E}\right\rvert,(n-1)\big{)}$-rectifiable.
When specialized to the non-collapsed case, where the essential dimension
$n=N$ (cf. the discussion before subsection 2.3), Theorem 2.6 turns into:
###### Corollary .
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ m.m.s. and
$E\subset X$ a set of locally finite perimeter. Then $\mathcal{F}E$ is
$\left(\left\lvert D\chi_{E}\right\rvert,N-1\right)$-rectifiable
(equivalently, $\left(\mathcal{H}^{N-1},N-1\right)$-rectifiable). Furthermore
$\left\lvert D\chi_{E}\right\rvert=\mathcal{H}^{N-1}\llcorner\mathcal{F}E\,.$
In [26] the following Gauss-Green integration by parts formula for sets of
finite perimeter and Sobolev vector fields has been proved. We refer to [66]
for the notion of Sobolev vector fields in $H^{1,2}_{C}(TX)$ and to [26] for
the notion of restriction of the tangent module over the boundary of a set of
finite perimeter $L^{2}_{E}(TX)$.
###### Theorem 2.7 (Theorem 2.4 in [26]).
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and let $E\subset X$ be a set with finite perimeter and finite
measure. Then there exists a unique vector field $\nu_{E}\in L^{2}_{E}(TX)$
such that $\left\lvert\nu_{E}\right\rvert=1$ holds $\operatorname{Per}$-a.e.
and
$\int_{E}\operatorname{div}v\,\mathrm{d}\mathfrak{m}=-\int\langle\mathrm{tr}_{E}v,\nu_{E}\rangle\,\mathrm{d}\operatorname{Per}_{E}\,,$
for any $v\in H^{1,2}_{C}(TX)\cap D(\operatorname{div})$ such that
$\left\lvert v\right\rvert\in L^{\infty}(\mathfrak{m})$.
For the sake of notation we shall denote
(2.11) $\mu_{E}:=\nu_{E}\cdot\operatorname{Per}_{E}\,,\quad\text{the \emph{Gauss-Green measure}}.$
Notice that, by our choice of signs, $\nu_{E}$ corresponds to the inward-
pointing unit normal vector for a domain with smooth boundary in a smooth
Riemannian manifold.
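A minimal illustration of this sign convention, in the one-dimensional Euclidean case (not part of the source): for $E=(0,1)\subset\mathbb{R}$ one has $\operatorname{Per}_{E}=\delta_{0}+\delta_{1}$, and the inward-pointing normal is $\nu_{E}(0)=+1$, $\nu_{E}(1)=-1$, so the formula of Theorem 2.7 reads

```latex
% Gauss-Green on E=(0,1) with the inward-pointing convention
\int_{0}^{1} v'(x)\,\mathrm{d}x
  = -\big(v(0)\cdot(+1) + v(1)\cdot(-1)\big)
  = v(1)-v(0)\,,
```

which recovers the fundamental theorem of calculus.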
Let us also recall a mild regularity result for sets of finite perimeter,
which follows again from [26] and has been proved in [27, Proposition 4.2]
(even for general $\operatorname{RCD}(K,N)$ metric measure spaces
$(X,\mathsf{d},\mathfrak{m})$). It can be regarded as a counterpart, tailored
to this framework, of the Euclidean Federer-type characterization of sets of
finite perimeter.
###### Proposition .
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space for some $N\geq 1$ and let $E\subset X$ be a set of locally
finite perimeter. Then the following hold:
* i)
for $\mathscr{H}^{N-1}$-a.e. $x\in X$ it holds
$\lim_{r\downarrow 0}\frac{\mathscr{H}^{N}(B_{r}(x)\cap
E)}{\mathscr{H}^{N}(B_{r}(x))}\in\Big{\\{}0,\frac{1}{2},1\Big{\\}}\,.$
Moreover, up to an $\mathscr{H}^{N-1}$-negligible set it holds
$\mathcal{F}E=\Big{\\{}x\in E\,:\,\lim_{r\downarrow
0}\frac{\mathscr{H}^{N}(B_{r}(x)\cap
E)}{\mathscr{H}^{N}(B_{r}(x))}=\frac{1}{2}\Big{\\}}\,.$
* ii)
For $\mathscr{H}^{N-1}$-a.e. $x\in X$ it holds
(2.12) $\lim_{t\downarrow
0}P_{t}\chi_{E}(x)\in\Big{\\{}0,\frac{1}{2},1\Big{\\}}\,.$
Moreover, up to an $\mathscr{H}^{N-1}$-negligible set it holds
$\mathcal{F}E=\Big{\\{}x\in E\,:\,\lim_{t\downarrow
0}P_{t}\chi_{E}(x)=\frac{1}{2}\Big{\\}}\,.$
###### Definition .
Given a set of finite perimeter $E\subset X$ and any $0\leq t\leq 1$, we set
$E^{(t)}:=\Big{\\{}x\in X:\lim_{r\downarrow
0}\frac{\mathscr{H}^{N}(B_{r}(x)\cap
E)}{\mathscr{H}^{N}(B_{r}(x))}=t\Big{\\}}\,.$
A consequence of subsubsection 2.4.3 above is that, up to an
$\mathscr{H}^{N-1}$-negligible set,
$X=E^{(1)}\cup E^{(1/2)}\cup E^{(0)}\,.$
###### Definition .
In the following we shall adopt the notation $M\sim N$ to indicate that two
Borel sets coincide up to $\mathscr{H}^{N-1}$-negligible sets, i.e.
$\mathscr{H}^{N-1}(M\Delta N)=0$.
It follows from the discussion above that, for any Borel set $M\subset X$,
$M\sim(M\cap E^{(1)})\cup(M\cap E^{(0)})\cup(M\cap E^{(1/2)})\,.$
In order to ease the notation, given a set of finite perimeter $E\subset X$
and $x\in X$ we shall denote by
$\theta(E,x):=\lim_{r\to 0}\frac{\mathscr{H}^{N}(E\cap
B_{r}(x))}{\mathscr{H}^{N}(B_{r}(x))}\,,$
whenever the limit exists.
It follows again from the discussion above that $\theta(E,x)$ is well defined
and belongs to $\\{0,1/2,1\\}$ for $\mathscr{H}^{N-1}$-a.e. $x\in X$.
###### Remark .
Analogous statements hold changing $\lim_{r\to 0}\mathscr{H}^{N}(B_{r}(x)\cap
E)/\mathscr{H}^{N}(B_{r}(x))$ with $\lim_{t\to 0}P_{t}\chi_{E}$, see [27,
Remark 4.5].
#### 2.4.4. Gauss Green formulae for essentially bounded divergence measure
vector fields
In order to make rigorous the formal argument described in subsection 1.1, we
need to consider vector fields that are bounded and have measure valued
divergence, but do not belong to $H^{1,2}_{C}(TX)$ in general.
###### Definition .
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space. We say that a vector field $V\in L^{\infty}(TX)$ is an
essentially bounded divergence measure vector field if its distributional
divergence is representable by a finite Radon measure: that is, if there
exists a finite Radon measure, denoted by $\operatorname{div}V$, such that for
any Lipschitz function with compact support $g:X\to\mathbb{R}$ it holds
$\int_{X}g\mathop{}\\!\mathrm{d}\operatorname{div}V=-\int_{X}\nabla g\cdot
V\mathop{}\\!\mathrm{d}\mathfrak{m}\,.$
We shall denote the class of these vector fields by $\mathcal{DM}^{\infty}(X)$
and sometimes, to ease the notation, we will abbreviate $\int
g\mathop{}\\!\mathrm{d}\operatorname{div}V$ with $\int g\operatorname{div}V$.
We recall a useful regularity result, whose proof can be found in the proof of
[29, Theorem 7.4].
###### Lemma .
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space. Let $V\in\mathcal{DM}^{\infty}(X)$. Then
$\operatorname{div}V\ll\mathscr{H}^{N-1}$.
Notice that the divergence measure of a vector field in this class might have
singular parts with respect to the reference measure. In particular, it might
charge the boundary of a set of finite perimeter, and it becomes relevant to
choose whether, in the Gauss-Green formula, we integrate the divergence of the
vector field only over the interior of the set of finite perimeter or over its
closure.
As a second issue, contrary to smooth vector fields (and to
$H^{1,2}_{C}$-vector fields in the $\operatorname{RCD}$ framework),
essentially bounded divergence measure vector fields do not have
pointwise-a.e. defined representatives over boundaries of sets of finite
perimeter.
It turns out that, despite the impossibility of defining the vector field
pointwise over the reduced boundary of a set of finite perimeter, it is
possible to define interior and exterior normal traces, possibly different,
playing the role of the term $V\cdot\nu_{E}$ in the Gauss-Green formula.
Given an essentially bounded divergence measure vector field
$V\in\mathcal{DM}^{\infty}(X)$ and a set of finite perimeter $E\subset X$, it
is proved in [30, Section 6.5] and [27, Section 5] that there exist measures
$D\chi_{E}(\chi_{E}V)$ and $D\chi_{E}(\chi_{E^{c}}V)$ such that
$\nabla P_{t}\chi_{E}\cdot(\chi_{E}V)\rightharpoonup
D\chi_{E}(\chi_{E}V)\quad\text{and}\quad\nabla
P_{t}\chi_{E}\cdot(\chi_{E^{c}}V)\rightharpoonup D\chi_{E}(\chi_{E^{c}}V)\,,$
as $t\to 0$.
Moreover, $D\chi_{E}(\chi_{E}V)$ and $D\chi_{E}(\chi_{E^{c}}V)$ are both
absolutely continuous w.r.t. $\left\lvert D\chi_{E}\right\rvert$. Therefore we
are entitled to consider their densities,
$\left(V\cdot\nu_{E}\right)_{\mathrm{int}}$ and
$\left(V\cdot\nu_{E}\right)_{\mathrm{ext}}$, defined by
$2D\chi_{E}(\chi_{E}V)=\left(V\cdot\nu_{E}\right)_{\mathrm{int}}\left\lvert
D\chi_{E}\right\rvert\quad\text{and}\quad
2D\chi_{E}(\chi_{E^{c}}V)=\left(V\cdot\nu_{E}\right)_{\mathrm{ext}}\left\lvert
D\chi_{E}\right\rvert.$
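As an illustration (a standard one-dimensional example, not taken from the source), interior and exterior normal traces may indeed differ: take $X=\mathbb{R}$, $V=\operatorname{sgn}$, so that $V\in\mathcal{DM}^{\infty}(\mathbb{R})$ with $\operatorname{div}V=2\delta_{0}$, and $E=(0,\infty)$, for which $\mathcal{F}E=\{0\}$ and $\nu_{E}=+1$ at the origin with the inward-pointing convention. Testing the two Gauss-Green formulae of Theorem 2.8 below with $f\in\operatorname{LIP}_{c}(\mathbb{R})$ gives

```latex
% interior trace: div V does not charge E^{(1)}=(0,\infty)
\int_{0}^{\infty} f'(x)\,\mathrm{d}x = -f(0)
  = -f(0)\,(V\cdot\nu_{E})_{\mathrm{int}}
  \;\Longrightarrow\; (V\cdot\nu_{E})_{\mathrm{int}} = 1\,,
% exterior trace: div V charges \mathcal{F}E=\{0\}
2f(0) - f(0) = -f(0)\,(V\cdot\nu_{E})_{\mathrm{ext}}
  \;\Longrightarrow\; (V\cdot\nu_{E})_{\mathrm{ext}} = -1\,,
```

consistently with the bounds (2.13) and (2.14), since $\lvert V\rvert=1$ on both $E$ and $X\setminus E$.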
Below we report a Gauss-Green integration by parts for essentially bounded
divergence measure vector fields and sets of finite perimeter on
$\operatorname{RCD}(K,N)$ spaces. It is the outcome of [30, Theorem 6.20],
where the integration by parts formula has been obtained with non-sharp bounds
for the normal traces, and of [27, Theorem 5.2], where these bounds have been
sharpened.
###### Theorem 2.8.
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space. Let $E\subset X$ be a set of finite perimeter and let
$V\in\mathcal{DM}^{\infty}(X)$. Then for any function
$f\in\operatorname{LIP}_{c}(X)$ it holds
$\displaystyle\int_{E^{(1)}}f\operatorname{div}V+\int_{E}\nabla f\cdot
V\mathop{}\\!\mathrm{d}\mathfrak{m}$
$\displaystyle=-\int_{\mathcal{F}E}f\left(V\cdot\nu_{E}\right)_{\mathrm{int}}\mathop{}\\!\mathrm{d}\operatorname{Per}\,,$
$\displaystyle\int_{E^{(1)}\cup\mathcal{F}E}f\operatorname{div}V+\int_{E}\nabla
f\cdot V\mathop{}\\!\mathrm{d}\mathfrak{m}$
$\displaystyle=-\int_{\mathcal{F}E}f\left(V\cdot\nu_{E}\right)_{\mathrm{ext}}\mathop{}\\!\mathrm{d}\operatorname{Per}\,.$
Moreover
(2.13)
$\displaystyle\left\lVert\left(V\cdot\nu_{E}\right)_{\mathrm{int}}\right\rVert_{L^{\infty}(\mathcal{F}E,\operatorname{Per})}$
$\displaystyle\leq\left\lVert V\right\rVert_{L^{\infty}(E,\mathfrak{m})}\,,$
(2.14)
$\displaystyle\left\lVert\left(V\cdot\nu_{E}\right)_{\mathrm{ext}}\right\rVert_{L^{\infty}(\mathcal{F}E,\operatorname{Per})}$
$\displaystyle\leq\left\lVert V\right\rVert_{L^{\infty}(X\setminus
E,\mathfrak{m})}\,.$
#### 2.4.5. Operations with sets of finite perimeter
In order to build competitors for variational problems, we will rely on the
following characterization theorem for the perimeter and the Gauss-Green
measure of intersections, union and differences of sets of finite perimeter,
that has been obtained in [27, Theorem 4.11]. We refer to [107, Theorem 16.3]
for the analogous statement for sets of finite perimeter on $\mathbb{R}^{n}$.
Recall the definitions of essential boundary $\partial^{*}E$ given in (2.6)
and of Gauss-Green measure $\mu_{E}$ given in (2.11) for a set of finite
perimeter $E\subset X$. We refer also to [27, Definition 4.9] for the
introduction of the set of coincidence $\\{\nu_{E}=\nu_{F}\\}$ of the unit
normals to two sets of finite perimeter $E$ and $F$.
###### Theorem 2.9.
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and let $E,F\subset X$ be sets of finite perimeter. Let us set
$\\{\nu_{E}=\nu_{F}\\}:=\\{x\in\partial^{*}E\cap\partial^{*}F\,:\,\nu_{E}=\nu_{F}\\}$
and
$\\{\nu_{E}=-\nu_{F}\\}:=\\{x\in\partial^{*}E\cap\partial^{*}F\,:\,\nu_{E}=-\nu_{F}\\}\,.$
Then $E\cap F$, $E\cup F$ and $E\setminus F$ are sets of finite perimeter;
moreover the following hold:
(2.15) $\displaystyle\mu_{E\cap F}$ $\displaystyle=\mu_{E}\llcorner F^{(1)}+\mu_{F}\llcorner E^{(1)}+\nu_{E}\,\mathscr{H}^{N-1}\llcorner\{\nu_{E}=\nu_{F}\}\,,$
(2.16) $\displaystyle\mu_{E\cup F}$ $\displaystyle=\mu_{E}\llcorner F^{(0)}+\mu_{F}\llcorner E^{(0)}+\nu_{E}\,\mathscr{H}^{N-1}\llcorner\{\nu_{E}=\nu_{F}\}\,,$
(2.17) $\displaystyle\mu_{E\setminus F}$ $\displaystyle=\mu_{E}\llcorner F^{(0)}-\mu_{F}\llcorner E^{(1)}+\nu_{E}\,\mathscr{H}^{N-1}\llcorner\{\nu_{E}=-\nu_{F}\}\,.$
###### Remark .
Let us clarify the meaning of (2.15), (2.16) and (2.17). With this notation,
(2.15) means that (and analogously for the others) for any vector field $v\in
H^{1,2}_{C}(TX)\cap D(\operatorname{div})$ such that $\left\lvert
v\right\rvert\in L^{\infty}(\mathfrak{m})$,
$\displaystyle\int_{E\cap F}\operatorname{div}v\,\mathrm{d}\mathfrak{m}=$
$\displaystyle-\int_{F^{(1)}}\langle\mathrm{tr}_{E}v,\nu_{E}\rangle\,\mathrm{d}\operatorname{Per}_{E}-\int_{E^{(1)}}\langle\mathrm{tr}_{F}v,\nu_{F}\rangle\,\mathrm{d}\operatorname{Per}_{F}$
$\displaystyle-\int_{E^{(1/2)}\cap F^{(1/2)}}\langle\mathrm{tr}_{E}v,\nu_{E}\rangle\,\mathrm{d}\operatorname{Per}_{E}\,.$
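A concrete Euclidean instance (not from the source, included for orientation): for the half-planes $E=\{x_{1}>0\}$ and $F=\{x_{2}>0\}$ in $\mathbb{R}^{2}$, the essential boundaries meet only at the origin, so the coincidence set $\{\nu_{E}=\nu_{F}\}$ is $\mathscr{H}^{1}$-negligible and (2.15) reduces to

```latex
% Gauss-Green measure of the first quadrant in R^2
\mu_{E\cap F} \;=\; \mu_{E}\llcorner\{x_{2}>0\} \;+\; \mu_{F}\llcorner\{x_{1}>0\}\,,
```

that is, the Gauss-Green measure of the quadrant collects the two open half-lines of its boundary, each with the unit normal inherited from the corresponding half-plane.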
###### Corollary .
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and let $E\subset F\subset X$ be sets of finite perimeter. Then
$\nu_{E}=\nu_{F}$ on $\partial^{*}E\cap\partial^{*}F$ and
$\mu_{E}=\mu_{E}\llcorner F^{(1)}+\nu_{F}\,\mathscr{H}^{N-1}\llcorner\left(\partial^{*}E\cap\partial^{*}F\right)\,.$
We wish to understand to which extent the cut and paste operations for sets of
finite perimeter are well behaved under the weaker regularity assumptions of
Theorem 2.8. This is the content of [27, Proposition 5.4] that we report
below.
###### Proposition .
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space. Let $E,F\subset X$ be sets of (locally) finite perimeter and
let $V\in\mathcal{DM}^{\infty}(X)$. Then the following hold:
$\displaystyle\left(V\cdot\nu_{E\cap F}\right)_{\mathrm{int}}$ $\displaystyle=\left(V\cdot\nu_{E}\right)_{\mathrm{int}}\,,\quad\text{$\operatorname{Per}$-a.e. on $F^{(1)}$}\,,$
$\displaystyle\left(V\cdot\nu_{E\cap F}\right)_{\mathrm{int}}$ $\displaystyle=\left(V\cdot\nu_{F}\right)_{\mathrm{int}}\,,\quad\text{$\operatorname{Per}$-a.e. on $E^{(1)}$}\,,$
$\displaystyle\left(V\cdot\nu_{E\cap F}\right)_{\mathrm{int}}$ $\displaystyle=\left(V\cdot\nu_{E}\right)_{\mathrm{int}}\,,\quad\text{$\operatorname{Per}$-a.e. on $E^{(1/2)}\cap F^{(1/2)}$}\,.$
Analogous conclusions hold for the exterior normal traces and for the interior
and exterior normal traces on $E\cup F$ and on $E\setminus F$.
Another technical result which is needed for the strategy we overviewed in
subsection 1.1 is a rigorous version, within our framework, of the fact that
the outward-pointing unit normal to a sub-level set of a distance function is
the gradient of the distance function itself. We refer to [27, Proposition
6.1] for its proof.
###### Proposition .
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space. Let $\Omega\Subset\Omega^{\prime}\subset X$ be open domains and
let $\varphi:\Omega^{\prime}\to\mathbb{R}$ be a $1$-Lipschitz function such
that
* i)
$\left\lvert\nabla\varphi\right\rvert=1$, $\mathfrak{m}$-a.e. on
$\Omega^{\prime}$;
* ii)
$\varphi$ has measure valued Laplacian on $\Omega^{\prime}$ with
$\mathfrak{m}$-essentially bounded negative (or positive) part.
Then, for $\mathscr{L}^{1}$-a.e. $t$ such that
$\\{\varphi=t\\}\cap\Omega\neq\emptyset$, it holds that $\\{\varphi<t\\}$ is a
set of locally finite perimeter in $\Omega$; moreover, the following holds:
$\left(\nabla\varphi\cdot\nu_{\\{\varphi<t\\}}\right)_{\mathrm{int}}=\left(\nabla\varphi\cdot\nu_{\\{\varphi<t\\}}\right)_{\mathrm{ext}}=-1\,\quad\operatorname{Per}_{\\{\varphi<t\\}}\text{-a.e.
on $\Omega$}\,.$
#### 2.4.6. Some regularity results for quasi-minimizers
Let us recall the definition of quasi-minimal set of finite perimeter in this
framework.
###### Definition (Quasi-minimality).
Let $(X,\mathsf{d},\mathfrak{m})$ be a metric measure space verifying the
doubling and Poincaré inequalities. Let $E\subset X$ be a Borel set with
finite perimeter and $\Omega\subset X$ be an open set. Given any $\kappa\geq
1$ we say that $E$ is a $\kappa$-quasi-minimal set if for any $U\Subset\Omega$
and for all Borel sets $F,G\subset U$ it holds
$\operatorname{Per}(E,U)\leq\kappa\operatorname{Per}\left((E\cup F)\setminus
G,U\right)\,.$
In the Euclidean setting, or on smooth Riemannian manifolds, quasi-minimality
is a property shared by minimizers of many variational problems: the Plateau
problem, the prescribed mean curvature problem, Cheeger sets and isoperimetric
sets, among others. We refer to [107, Chapter 21] for a thorough discussion
and references. This is indeed a general principle that holds also on
$\operatorname{RCD}(K,N)$ metric measure spaces
$(X,\mathsf{d},\mathscr{H}^{N})$:
* •
perimeter minimizers are quasi-minimizers, as follows directly from the
definition;
* •
with minor modifications to the classical Euclidean proof, it is possible to
argue that solutions of the prescribed mean curvature problem are
quasi-minimizers under suitable assumptions;
* •
in [20, Theorem 3.4] it has been recently shown that isoperimetric sets are
quasi-minimizers.
A stronger notion involves a function in place of the constant $\kappa$, whose
behaviour forces the set to be closer and closer to a perimeter minimizer
inside smaller and smaller balls.
###### Definition .
Let $(X,\mathsf{d},\mathfrak{m})$ be a metric measure space verifying the
doubling and Poincaré inequalities. Given an increasing function
$\omega:[0,\infty)\to[0,\infty]$ such that $\omega(0)=0$, we say that a set of
finite perimeter $E\subset X$ is an $\omega$-minimizer if, for any $x\in X$
and $r>0$, for any $F\subset X$ such that $E\Delta F\Subset B_{r}(x)$, it
holds
$\operatorname{Per}(E,B_{r}(x))\leq(1+\omega(r))\operatorname{Per}(F,B_{r}(x))\,.$
###### Remark .
An equivalent reformulation of the quasi-minimality condition above is that
$E$ is a $\kappa$-quasi-minimal set if for any $U\Subset\Omega$ and for all
Borel sets $F\subset X$ such that $E\Delta F\Subset U$ it holds
(2.18)
$\operatorname{Per}(E,U)\leq\kappa\operatorname{Per}\left(F,U\right)\,.$
Notice that $\kappa$-quasi-minimality for $\kappa=1$ corresponds to
minimality, while it is a weaker notion for $\kappa>1$.
###### Remark .
We will sometimes work with the weaker assumption that (2.18) holds for
competitors $F$ such that $E\Delta F$ is supported in $B_{r}(x)$, where $r>0$
is fixed. This corresponds to a localized version of the quasi-minimality
condition, which has the same consequences at the level of regularity.
One of the main results in [94] is the following theorem, asserting that a
quasi-minimal set of finite perimeter, up to modification on a negligible set
as in (2.3), has measure theoretic boundary coinciding with the topological
boundary. This is a generalization of the Euclidean result in [54].
###### Theorem 2.10 (Theorem 4.2 of [94]).
Let $E\subset X$ be a quasi-minimal set in $\Omega$. Then, up to modifying $E$
on a $\mathfrak{m}$-negligible set, there exists $\gamma_{0}>0$ such that, for
any $x\in\partial E\cap\Omega$, we have
(2.19) $\frac{\mathfrak{m}(E\cap
B_{r}(x))}{\mathfrak{m}(B_{r}(x))}\geq\gamma_{0}\,,\quad\frac{\mathfrak{m}(B_{r}(x)\setminus
E)}{\mathfrak{m}(B_{r}(x))}\geq\gamma_{0}\,,$
for any $r>0$ such that $B_{2r}(x)\subset\Omega$. The density constant
$\gamma_{0}$ depends only on the quasi-minimality constant $\kappa$, the
doubling constant and the Poincaré constant.
Given the measure bounds (2.19), perimeter bounds follow from the
isoperimetric inequality (2.8).
###### Corollary (Lemma 5.1 of [94]).
Let $E\subset X$ be a quasi-minimal set in $\Omega$. Then there exist
$r_{0}>0$ and $C>0$ such that for any $x\in\partial E\cap\Omega$ and
$0<r<r_{0}$, it holds
(2.20)
$C^{-1}\frac{\mathfrak{m}(B_{r}(x))}{r}\leq\operatorname{Per}(E,B_{r}(x))\leq
C\frac{\mathfrak{m}(B_{r}(x))}{r}\,,$
whenever $B_{2r}(x)\subset\Omega$. The constants $C>0$ and $r_{0}>0$ depend
only on the quasi-minimality constant $\kappa$, the doubling constant and the
Poincaré constant.
The main outcome of Theorem 2.10, together with [3, 4] and [6], is that, in
the framework of noncollapsed $\operatorname{RCD}(K,N)$ metric measure spaces,
the reduced boundary of a quasi-minimal set of finite perimeter is closed.
###### Corollary .
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space. Let $E\subset X$ be a set of finite perimeter and
$\Omega\subset X$ be an open set such that $E$ is quasi-minimal in $\Omega$.
Then, up to a modification of $E$ on an $\mathscr{H}^{N}$-negligible set, it
holds that:
* (i)
the perimeter measure $\operatorname{Per}$ coincides with
$\mathscr{H}^{N-1}\llcorner\partial E$ on $\Omega$ (up to a normalization
constant);
* (ii)
$\partial E$ is $\mathscr{H}^{N-1}$-rectifiable and
$\mathscr{H}^{N-1}\llcorner\partial E$ is a locally Ahlfors regular measure.
###### Proof.
The identification of the reduced boundary with the topological boundary
follows from Theorem 2.10.
Rectifiability of the reduced boundary (and hence of the topological boundary)
and identification of the perimeter measure with the $(N-1)$-Hausdorff measure
are then consequences of Theorem 2.6 and subsubsection 2.4.3. ∎
A classical consequence of the local Ahlfors regularity of the perimeter for
quasi-minimal sets is a measure estimate for the tubular neighbourhood of
their boundaries. Given a subset $U\subset X$ and $r>0$, we adopt the notation
that $U^{r}:=\\{x\in X\,:\,\mathsf{d}(x,U)<r\\}$ denotes the $r$-enlargement
of $U$.
###### Lemma .
There exist constants $C_{\kappa,K,N}>0$ and $0<r_{0}=r_{0}(\kappa,K,N)<1$
with the following property. Let $(X,\mathsf{d},\mathfrak{m})$ be an
$\operatorname{RCD}(K,N)$ metric measure space and let $E\subset X$ be a set
of finite perimeter. Assume that $E\cap B_{2}(x)$ is $\kappa$-quasi-minimal in
$B_{2}(x)$. Then, for any open subset $\Omega\subset B_{1}(x)$ it holds
$\mathfrak{m}\left(\\{x\in X\,:\,\mathsf{d}(x,\partial E\cap\Omega)\leq
r\\}\right)\leq C_{\kappa,K,N}\,r\,\operatorname{Per}(E,B_{2}(x))\,$
for every $r\in(0,r_{0})$.
In particular, if $E\cap\Omega$ is locally perimeter minimizing in $B_{2}(x)$,
then the dependence on $\kappa$ in the constant $C_{\kappa,K,N}>0$ can be
dropped.
###### Proof.
By subsubsection 2.4.6, there exist $r_{0}=r_{0}(\kappa,K,N)>0$ and
$C=C_{\kappa,K,N}>0$ such that, for any $x\in\partial E\cap\Omega$ and for any
$r\in(0,r_{0})$ it holds
(2.21)
$C^{-1}\frac{\mathfrak{m}(B_{r}(x))}{r}\leq\operatorname{Per}(E,B_{r}(x))\leq
C\frac{\mathfrak{m}(B_{r}(x))}{r}\,.$
We wish to estimate the volume of the tubular neighbourhood of $\partial
E\cap\Omega$.
Let $r<r_{0}/5$ be fixed and let us consider, thanks to Vitali’s covering
lemma, a covering of $\\{x\in X\,:\,\mathsf{d}(x,\partial E\cap\Omega)\leq
r\\}$ with balls $B_{5r_{i}}(x_{i})$ such that $x_{i}\in\partial E\cap\Omega$,
$r_{i}<r<r_{0}/5$ and $\\{B_{r_{i}}(x_{i})\\}$ is a disjoint family of subsets
of $B_{2}(x)$. Relying on (2.21) and the disjointedness of the family
$\\{B_{r_{i}}(x_{i})\\}$ we can estimate
$\displaystyle\mathfrak{m}\left(\{x\in X\,:\,\mathsf{d}(x,\partial E\cap\Omega)\leq r\}\right)\leq$ $\displaystyle\,\mathfrak{m}\Big(\bigcup_{i}B_{5r_{i}}(x_{i})\Big)\leq\,\sum_{i}\mathfrak{m}\left(B_{5r_{i}}(x_{i})\right)$
$\displaystyle\leq$ $\displaystyle\,C_{K,N}\sum_{i}\mathfrak{m}(B_{r_{i}}(x_{i}))$
$\displaystyle\leq$ $\displaystyle\,C_{\kappa,K,N}\sum_{i}\operatorname{Per}(E,B_{r_{i}}(x_{i}))\,r_{i}$
$\displaystyle\leq$ $\displaystyle\,C_{\kappa,K,N}\,r\operatorname{Per}(E,B_{2}(x))\,.$
∎
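A flat sanity check of the constants (not from the source): for the half-space $E=\{x_{N}<0\}\subset\mathbb{R}^{N}$, base point $x=0$ and any $\Omega\subset B_{1}$, the tubular neighbourhood of $\partial E\cap\Omega$ sits inside a thin slab, so

```latex
% half-space: tubular neighbourhood of the boundary inside B_1
\{x:\mathsf{d}(x,\partial E\cap\Omega)\le r\}\subset\{|x_{N}|\le r\}\cap B_{1+r}\,,
\qquad
\mathscr{L}^{N}\big(\{\mathsf{d}(\cdot,\partial E\cap\Omega)\le r\}\big)
  \le 2r\,\omega_{N-1}(1+r)^{N-1}\le 2r\operatorname{Per}(E,B_{2})\,,
```

where $\omega_{N-1}$ denotes the volume of the unit ball in $\mathbb{R}^{N-1}$; indeed $\operatorname{Per}(E,B_{2})=\omega_{N-1}2^{N-1}$ and $(1+r)^{N-1}<2^{N-1}$ for $r<1$, matching the linear-in-$r$ bound of the lemma.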
In the Euclidean setting it is a well-known fact that, when dealing with a
family of sets of finite perimeter that are uniformly quasi-minimizing, the
usual $L^{1}_{{\rm loc}}$ convergence up to subsequence, guaranteed for
uniformly bounded $\operatorname{BV}$ functions, can be improved. We refer for
instance to [107, Section 21.5] and references therein for the treatment of
this topic on $\mathbb{R}^{n}$.
This principle has already played a role in the proof of De Giorgi’s theorem
for sets of finite perimeter on $\operatorname{RCD}(K,N)$ spaces in [6]. Below
we present a slight strengthening of [6, Proposition 3.9], allowing for more
general quasi-minimality conditions and dealing with the Hausdorff convergence
of the topological/measure-theoretic boundaries.
###### Theorem 2.11.
Let $(X_{i},\mathsf{d}_{i},\mathfrak{m}_{i},x_{i})$ be
$\operatorname{RCD}(K,N)$ m.m. spaces converging in the pmGH topology to
$(Y,\varrho,\mu,y)$ and let $(Z,\mathsf{d}_{Z})$ be realizing the convergence.
For any $i\in\mathbb{N}$, let $\omega_{i}:[0,\infty)\to[0,\infty)$ be a
modulus of continuity and let $E_{i}\subset X_{i}$ be sets of finite perimeter
satisfying the following $\omega_{i}$-minimality condition: there exists
$R_{i}>0$ such that
$\left\lvert
D\chi_{E_{i}}\right\rvert(B_{r}(z_{i}))\leq(1+\omega_{i}(r))\left\lvert
D\chi_{E^{\prime}}\right\rvert(B_{r}(z_{i}))$
for any $E^{\prime}\subset X_{i}$ such that $E_{i}\Delta E^{\prime}\Subset
B_{r}(z_{i})\subset X_{i}$, for some $r<R_{i}$.
Assume that, as $i\to\infty$, $E_{i}\to F$ in $L^{1}_{{\rm loc}}$ for some set
$F\subset Y$ of locally finite perimeter, and $\omega_{i}\to\omega$ pointwise,
where $\omega:[0,\infty)\to[0,\infty)$ is a modulus of continuity and
$R_{i}\to\infty$. Then:
* (i)
$F$ is an entire $\omega$-minimizer of the perimeter (relative to
$(Y,\varrho,\mu)$), namely
(2.22) $\left\lvert
D\chi_{F}\right\rvert(B_{r}(z))\leq(1+\omega(r))\left\lvert
D\chi_{F^{\prime}}\right\rvert(B_{r}(z))$
whenever $F\Delta F^{\prime}\Subset B_{r}(z)\Subset Y$ and $r>0$;
* (ii)
$|D\chi_{E_{i}}|\to|D\chi_{F}|$ in duality with $C_{\mathrm{bs}}(Z)$ as
$i\to\infty$;
* (iii)
$\partial E_{i}\to\partial F$ in the Kuratowski sense as $i\to\infty$.
###### Proof.
The statement is classical in the Euclidean setting, see for instance [16],
and the adaptation to the present framework requires only minor adjustments.
Therefore some details will be omitted. We will adapt the arguments in the
proof of [6, Proposition 3.9] to deal with the present setting.
The strategy is to consider a weak limit measure of the sequence of locally
uniformly bounded perimeter measures $\left\lvert D\chi_{E_{i}}\right\rvert$.
Let us call it $\nu$. Then we show simultaneously that $\nu=\left\lvert
D\chi_{F}\right\rvert$ and that $F$ verifies the $\omega$-minimality condition
(2.22).
The inequality $\left\lvert D\chi_{F}\right\rvert\leq\nu$ follows from
localizing the lower-semicontinuity of the perimeter [6, Proposition 3.6], and
does not require the $\omega$-minimality condition. It remains to check that
$\nu\leq\left\lvert D\chi_{F}\right\rvert$. Below we report part of the proof
in [6] and indicate where changes are needed.
Let us fix $\bar{y}\in Y$ and let $F^{\prime}\subset Y$ be a set of locally
finite perimeter satisfying $F\Delta F^{\prime}\Subset B_{r}(\bar{y})$. Let
$\bar{x}_{i}\in X_{i}$ converging to $\bar{y}$ in $Z$ and $R>0$ be such that
the following properties hold true:
(2.23) $\sup_{i\in\mathbb{N}}\left\lvert
D\chi_{B_{R}(x_{i})}\right\rvert(X_{i})<\infty\qquad\text{and}\qquad
B_{r}(\bar{x}_{i})\Subset B_{R}(x_{i})\qquad\forall i\in\mathbb{N}\,.$
Using [6, Proposition 3.8] we can find a sequence of sets of finite perimeter
$E^{\prime}_{i}\subset X_{i}$ converging to $F\cap B_{R}(y)$ in
$\operatorname{BV}$ energy (notice that $F\cap B_{R}(y)$ is a set of finite
perimeter thanks to (2.23)).
We claim that, for any set of finite perimeter $F^{\prime}\subset Y$ such that
$F\Delta F^{\prime}\Subset B_{r}(\bar{y})$,
(2.24) $\nu(B_{s}(\bar{y}))\leq(1+\omega(r))\left\lvert
D\chi_{F^{\prime}}\right\rvert(B_{s}(\bar{y}))\,,$
for $\mathscr{L}^{1}$-a.e. $s\in(r^{\prime},r)$, for some $0<r^{\prime}<r$.
Let us illustrate how to use (2.24) to conclude the proof.
If we apply (2.24) with $F^{\prime}=F$ we get that
$\nu(B_{s}(\bar{y}))\leq(1+\omega(r))\left\lvert
D\chi_{F}\right\rvert(B_{s}(\bar{y}))\,,\ \ $
for $\mathscr{L}^{1}$-a.e. $s\in(r^{\prime},r)$, for some $0<r^{\prime}<r$.
Hence, letting $s\uparrow r$, we obtain
(2.25) $\nu(B_{r}(\bar{y}))\leq(1+\omega(r))\left\lvert
D\chi_{F}\right\rvert(B_{r}(\bar{y}))\,.$
In particular $\nu\ll\left\lvert D\chi_{F}\right\rvert$, which is an
asymptotically doubling measure. Hence, noticing that by (2.25) and the
continuity at $0$ of $\omega$,
$\limsup_{r\downarrow 0}\frac{\nu(B_{r}(\bar{y}))}{\left\lvert
D\chi_{F}\right\rvert(B_{r}(\bar{y}))}\leq\limsup_{r\downarrow
0}(1+\omega(r))=1\,,$
we can apply the differentiation theorem to infer that $\nu\leq\left\lvert
D\chi_{F}\right\rvert$. This proves (ii).
Substituting back in (2.24), we obtain that
$\left\lvert D\chi_{F}\right\rvert(B_{s}(\bar{y}))\leq(1+\omega(r))\left\lvert
D\chi_{F^{\prime}}\right\rvert(B_{s}(\bar{y}))\,,$
for $\mathscr{L}^{1}$-a.e. $s\in(r^{\prime},r)$, for some $0<r^{\prime}<r$,
and (i) follows by letting $s\uparrow r$.
Let us prove (2.24). We first fix $0<r^{\prime}<r$ such that $F\Delta
F^{\prime}\subset B_{r^{\prime}}(\bar{y})$. Then we fix a parameter
$s\in(r^{\prime},r)$ with $\nu(\partial B_{s}(\bar{y}))=0$, $\left\lvert
D\chi_{F^{\prime}}\right\rvert(\partial B_{s}(\bar{y}))=0$ and set
$\tilde{E}_{i}^{s}:=\left(E_{i}^{\prime}\cap
B_{s}(\overline{x}_{i})\right)\cup\left(E_{i}\setminus
B_{s}(\overline{x}_{i})\right)\,.$
We also choose $s<s^{\prime}<r$ such that $\nu(\partial
B_{s^{\prime}}(\bar{y}))=0$.
From now on, up to the end of the proof, we are going to adopt the notation
$\operatorname{Per}(G,A)$ to denote $\left\lvert D\chi_{G}\right\rvert(A)$
whenever $G$ has finite perimeter and $A$ is a Borel set, to avoid multiple
subscripts.
Using the locality of the perimeter and the $\omega_{i}$-minimality of $E_{i}$
(notice that $R_{i}\geq r$ for $i$ big enough), we get
$\displaystyle\operatorname{Per}(E_{i},\overline{B}_{s}(\bar{x}_{i}))=\operatorname{Per}(E_{i},B_{s^{\prime}}(\bar{x}_{i}))-\operatorname{Per}(E_{i},B_{s^{\prime}}(\bar{x}_{i})\setminus\overline{B}_{s}(\bar{x}_{i}))$
$\displaystyle\leq(1+\omega_{i}(r))\operatorname{Per}(\tilde{E}^{s}_{i},B_{s^{\prime}}(\bar{x}_{i}))-\operatorname{Per}(E_{i},B_{s^{\prime}}(\bar{x}_{i})\setminus\overline{B}_{s}(\bar{x}_{i}))$
(2.26) $\displaystyle=(1+\omega_{i}(r))\operatorname{Per}(\tilde{E}^{s}_{i},B_{s}(\bar{x}_{i}))+(1+\omega_{i}(r))\operatorname{Per}(\tilde{E}^{s}_{i},\partial B_{s}(\bar{x}_{i}))+(1+\omega_{i}(r))\operatorname{Per}(\tilde{E}^{s}_{i},B_{s^{\prime}}(\bar{x}_{i})\setminus\overline{B}_{s}(\bar{x}_{i}))-\operatorname{Per}(E_{i},B_{s^{\prime}}(\bar{x}_{i})\setminus\overline{B}_{s}(\bar{x}_{i}))$
(2.27) $\displaystyle=(1+\omega_{i}(r))\operatorname{Per}(E_{i}^{\prime},B_{s}(\bar{x}_{i}))+(1+\omega_{i}(r))\operatorname{Per}(\tilde{E}^{s}_{i},\partial B_{s}(\bar{x}_{i}))+\omega_{i}(r)\operatorname{Per}(E_{i},B_{s^{\prime}}(\bar{x}_{i})\setminus\overline{B}_{s}(\bar{x}_{i}))\,,$
where the last equality uses that $\tilde{E}^{s}_{i}$ coincides with
$E_{i}^{\prime}$ inside $B_{s}(\bar{x}_{i})$ and with $E_{i}$ outside
$\overline{B}_{s}(\bar{x}_{i})$.
Taking the limit as $i\to\infty$ and arguing as in the last part of the proof
of [6, Proposition 3.9], it is possible to prove that
(2.28) $\liminf_{i\to\infty}\operatorname{Per}(\tilde{E}_{i}^{s},\partial
B_{s}(\bar{x}_{i}))=0\,,\qquad\text{for a.e.}\ s\in(r^{\prime},r).$
Thanks to our choice of $s$, it holds that
$\operatorname{Per}(E_{i},\overline{B}_{s}(\bar{x}_{i}))\to\nu(B_{s}(\bar{y}))$
and moreover
$(1+\omega_{i}(r))\operatorname{Per}(E_{i}^{\prime},B_{s}(\bar{x}_{i}))\to(1+\omega(r))\operatorname{Per}(F^{\prime},B_{s}(\bar{y}))$,
since $\chi_{E_{i}^{\prime}}\to\chi_{F^{\prime}\cap B_{R}(\bar{y})}$ in
$\operatorname{BV}$ energy and therefore [6, Corollary 3.7] applies. Combining
these last observations with (2.28) and (2.27) we obtain that
$\nu(B_{s}(\bar{y}))\leq(1+\omega(r))\left\lvert
D\chi_{F^{\prime}}\right\rvert(B_{s}(\bar{y}))+\omega(r)\nu(B_{s^{\prime}}(\bar{y})\setminus\overline{B}_{s}(\bar{y}))\,.$
Letting then $s\uparrow s^{\prime}$ we infer
$\nu(B_{s^{\prime}}(\bar{y}))\leq(1+\omega(r))\left\lvert
D\chi_{F^{\prime}}\right\rvert(B_{s^{\prime}}(\bar{y}))\,,$
which is equivalent to (2.24) up to changing $s^{\prime}$ into $s$.
In order to prove (iii), it is enough to observe that all the $E_{i}$’s and
the limit set of finite perimeter $F$ verify uniform upper and lower density
estimates, thanks to $\omega_{i}$-minimality, convergence of $\omega_{i}$ to
$\omega$, Theorem 2.10 and subsubsection 2.4.6.
By (ii) and the lower density estimate for $\left\lvert
D\chi_{F}\right\rvert$, any point in $\partial F$ can be approximated by
points in $\partial E_{i}$. On the other hand, limit points of sequences
$x_{i}\in\partial E_{i}$ do belong to $\partial F$ due to the uniform density
estimates at $x_{i}$ and weak convergence of $\left\lvert
D\chi_{E_{i}}\right\rvert$ again. We refer to [29, Section 7] for an analogous
statement in the case of boundaries of noncollapsed $\operatorname{RCD}(K,N)$
spaces. ∎
### 2.5. Laplacian, heat equation and heat kernel
Unless otherwise stated from now on we assume that
$(X,\mathsf{d},\mathfrak{m})$ is an $\operatorname{RCD}(K,N)$ metric measure
space for some $K\in\mathbb{R}$ and $1\leq N<\infty$.
In the first part of this subsection we collect some basic notation and
results about the Laplacian, the heat flow and the heat kernel, together with
some terminology about first and second order differential calculus on
$\operatorname{RCD}$ spaces. The basic references for this part are [10, 65,
66]. The second part contains some new technical results about the pointwise
short time behaviour of the heat flow.
###### Definition .
The Laplacian $\Delta:D(\Delta)\to L^{2}(X,\mathfrak{m})$ is a densely defined
linear operator whose domain consists of all functions $f\in
H^{1,2}(X,\mathsf{d},\mathfrak{m})$ satisfying
$\int hg\mathop{}\\!\mathrm{d}\mathfrak{m}=-\int\nabla h\cdot\nabla
f\mathop{}\\!\mathrm{d}\mathfrak{m}\quad\text{for any $h\in
H^{1,2}(X,\mathsf{d},\mathfrak{m})$}$
for some $g\in L^{2}(X,\mathfrak{m})$. The unique $g$ with this property is
denoted by $\Delta f$.
As a consequence of infinitesimal hilbertianity, it is easily checked that
$\Delta$ is an (unbounded) linear operator. More generally, we say that $f\in
H^{1,2}_{{\rm loc}}(X,\mathsf{d},\mathfrak{m})$ is in the domain of the
measure valued Laplacian, and we write $f\in D(\bm{\Delta})$, if there exists
a Radon measure $\mu$ on $X$ such that, for every
$\psi\in\operatorname{LIP}_{c}(X)$, it holds
$\int\psi\mathop{}\\!\mathrm{d}\mu=-\int\nabla
f\cdot\nabla\psi\mathop{}\\!\mathrm{d}\mathfrak{m}\,.$
In this case we write $\bm{\Delta}f:=\mu$. If moreover
$\bm{\Delta}f\ll\mathfrak{m}$ with $L^{2}_{{\rm loc}}$ density we denote by
$\Delta f$ the unique function in $L^{2}_{{\rm loc}}(X,\mathfrak{m})$ such
that $\bm{\Delta}f=\Delta f\,\mathfrak{m}$ and we write $f\in D_{{\rm
loc}}(\Delta)$.
Notice that the definition makes sense even under the assumption that $f\in
H^{1,p}_{{\rm loc}}(X,\mathsf{d},\mathfrak{m})$ for some $1\leq p<\infty$, and
we will rely on this observation later.
We shall also consider the Laplacian on open sets, imposing Dirichlet boundary
conditions. Let us first introduce the local Sobolev space with Dirichlet
boundary conditions.
###### Definition .
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and let $\Omega\subset X$ be an open and bounded domain. Then we
let $H^{1,2}_{0}(\Omega)$ be the $H^{1,2}(X,\mathsf{d},\mathfrak{m})$ closure
of $\operatorname{LIP}_{c}(\Omega,\mathsf{d})$.
We also introduce the local Sobolev space (i.e. without imposing Dirichlet
boundary conditions).
###### Definition .
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and let $\Omega\subset X$ be an open and bounded domain. We say
that a function $f\in L^{2}(\Omega,\mathfrak{m})$ belongs to the local Sobolev
space $H^{1,2}(\Omega,\mathsf{d},\mathfrak{m})$ if
* i)
$f\varphi\in H^{1,2}(X,\mathsf{d},\mathfrak{m})$ for any
$\varphi\in\operatorname{LIP}_{c}(\Omega,\mathsf{d})$;
* ii)
$\left\lvert\nabla f\right\rvert\in L^{2}(\Omega,\mathfrak{m})$.
Above, $f\varphi$ is understood to be extended by $0$ outside of $\Omega$.
Notice that $\left\lvert\nabla f\right\rvert$ is well defined on any
$\Omega^{\prime}\subset\Omega$ (and hence on $\Omega$) as
$\left\lvert\nabla(f\varphi)\right\rvert$ for some
$\varphi\in\operatorname{LIP}_{c}(\Omega)$ such that $\varphi\equiv 1$ on
$\Omega^{\prime}$.
###### Definition .
Let $f\in H^{1,2}(\Omega)$. We say that $f\in D(\Delta,\Omega)$ if there
exists a function $h\in L^{2}(\Omega,\mathfrak{m})$ such that
$\int_{\Omega}gh\mathop{}\\!\mathrm{d}\mathfrak{m}=-\int_{\Omega}\nabla
g\cdot\nabla f\mathop{}\\!\mathrm{d}\mathfrak{m}\,,\quad\text{for any $g\in
H^{1,2}_{0}(\Omega,\mathsf{d},\mathfrak{m})$}\,.$
In this case the function $h$ is unique and we write $\Delta f:=h$ on $\Omega$.
We refer to [66] for the basic terminology and results about tangent and
cotangent modules on metric measure spaces and for the interpretation of
vector fields as elements of the tangent modules. The notations $L^{2}(TX)$,
$L^{2}_{{\rm loc}}(TX)$ and $L^{\infty}(TX)$ will be adopted to indicate the
spaces of $L^{2}$, $L^{2}_{{\rm loc}}$ and bounded vector fields,
respectively.
###### Definition .
Let $V\in L^{2}(TX)$ be a vector field. We say that $V$ belongs to the domain
of the divergence (and write $V\in D(\operatorname{div})$) if there exists a
function $f\in L^{2}(\mathfrak{m})$ such that
$\int_{X}V\cdot\nabla
g\mathop{}\\!\mathrm{d}\mathfrak{m}=-\int_{X}fg\mathop{}\\!\mathrm{d}\mathfrak{m}\,,\quad\text{for
any $g\in H^{1,2}(X)$}\,.$
Under these assumptions, the function $f$ is uniquely determined and we shall
denote $f=\operatorname{div}(V)$.
We refer again the reader to [66] for the introduction of more regular classes
of vector fields, such as the class $H^{1,2}_{C}(TX)$ that will be relevant
later in the paper.
The heat flow $P_{t}$, previously defined in subsection 2.1 as the
$L^{2}(X,\mathfrak{m})$-gradient flow of ${\sf Ch}$, can be equivalently
characterised by the following property: for any $u\in L^{2}(X,\mathfrak{m})$,
the curve $t\mapsto P_{t}u\in L^{2}(X,\mathfrak{m})$ is locally absolutely
continuous in $(0,+\infty)$ and satisfies
$\frac{\mathop{}\\!\mathrm{d}}{\mathop{}\\!\mathrm{d}t}P_{t}u=\Delta
P_{t}u\quad\text{for $\mathscr{L}^{1}$-a.e. $t\in(0,+\infty)$}\,.$
Under our assumptions the heat flow provides a linear, continuous and self-
adjoint contraction semigroup in $L^{2}(X,\mathfrak{m})$. Moreover $P_{t}$
extends to a linear, continuous and mass preserving operator, still denoted by
$P_{t}$, in all the $L^{p}$ spaces for $1\leq p<+\infty$.
It has been proved in [10, 8] that, on $\operatorname{RCD}(K,\infty)$ metric
measure spaces, the dual heat semigroup
$\bar{P}_{t}:\mathcal{P}_{2}(X)\to\mathcal{P}_{2}(X)$ of $P_{t}$, defined by
$\int_{X}f\mathop{}\\!\mathrm{d}\bar{P}_{t}\mu:=\int_{X}P_{t}f\mathop{}\\!\mathrm{d}\mu\qquad\quad\forall\mu\in\mathcal{P}_{2}(X),\quad\forall
f\in\operatorname{LIP_{b}}(X)\,,$
is $K$-contractive (w.r.t. the $W_{2}$-distance) and, for $t>0$, maps
probability measures into probability measures absolutely continuous w.r.t.
$\mathfrak{m}$. Then, for any $t>0$, we can introduce the so called heat
kernel $p_{t}:X\times X\to[0,+\infty)$ by
$p_{t}(x,\cdot)\mathfrak{m}:=\bar{P}_{t}\delta_{x}\,.$
A key property of the heat kernel is the so-called stochastic completeness:
for any $x\in X$ and for any $t>0$ it holds
(2.29) $\int_{X}p_{t}(x,y)\mathop{}\\!\mathrm{d}\mathfrak{m}(y)=1\,.$
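As a consistency check (an illustration, not part of the argument above), in the model case $X=\mathbb{R}^{N}$ equipped with the Euclidean distance and the Lebesgue measure, where the heat kernel is Gaussian, (2.29) reduces to the Gaussian normalization:

```latex
p_{t}(x,y)=(4\pi t)^{-N/2}e^{-\frac{|x-y|^{2}}{4t}}\,,\qquad
\int_{\mathbb{R}^{N}}p_{t}(x,y)\,\mathrm{d}y
=\prod_{j=1}^{N}\int_{\mathbb{R}}\frac{e^{-\frac{(y_{j}-x_{j})^{2}}{4t}}}{\sqrt{4\pi t}}\,\mathrm{d}y_{j}=1\,.
```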
###### Remark .
From now on, for any $f\in L^{\infty}(X,\mathfrak{m})$ we will denote by
$P_{t}f$ the representative pointwise everywhere defined by
$P_{t}f(x)=\int_{X}f(y)p_{t}(x,y)\mathop{}\\!\mathrm{d}\mathfrak{m}(y)\,.$
Let us recall a few regularizing properties of the heat flow on
$\operatorname{RCD}(K,N)$ spaces (which hold true more generally for any
$\operatorname{RCD}(K,\infty)$ m.m.s.) referring again to [10, 8] for a more
detailed discussion and the proofs of these results.
First we have the Bakry-Émery contraction estimate:
(2.30) $\left\lvert\nabla P_{t}f\right\rvert^{2}\leq
e^{-2Kt}P_{t}\left\lvert\nabla
f\right\rvert^{2}\quad\text{$\mathfrak{m}$-a.e.,}$
for any $t>0$ and for any $f\in H^{1,2}(X,\mathsf{d},\mathfrak{m})$.
Later on it was proved in [122] that the Bakry-Émery contraction estimate
extends to the full range of exponents $p\in[1,\infty)$, i.e.
(2.31) $\left\lvert\nabla P_{t}f\right\rvert^{p}\leq
e^{-pKt}P_{t}\left\lvert\nabla
f\right\rvert^{p}\,,\quad\text{$\mathfrak{m}$-a.e.}\,,$
for any $t>0$, for any function $f\in H^{1,p}(X,\mathsf{d},\mathfrak{m})$ if
$p>1$ and for any function $f\in\operatorname{BV}(X,\mathsf{d},\mathfrak{m})$
if $p=1$.
Another nontrivial regularity property is the so-called $L^{\infty}$–
$\operatorname{LIP}$ regularization of the heat flow: for any $f\in
L^{\infty}(X,\mathfrak{m})$, we have $P_{t}f\in\operatorname{LIP}(X)$ with
(2.32) $\sqrt{2I_{2K}(t)}\operatorname{Lip}(P_{t}f)\leq\left\lVert
f\right\rVert_{L^{\infty}}\,,\quad\text{for any $t>0$}\,,$
where $I_{L}(t):=\int_{0}^{t}e^{Lr}\mathop{}\\!\mathrm{d}r$.
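For concreteness, the normalizing factor $I_{2K}(t)$ can be computed explicitly, and in the case $K=0$ the estimate (2.32) takes the familiar $1/\sqrt{t}$ form:

```latex
I_{L}(t)=\int_{0}^{t}e^{Lr}\,\mathrm{d}r=
\begin{cases}
\dfrac{e^{Lt}-1}{L}\,, & L\neq 0\,,\\[2mm]
t\,, & L=0\,,
\end{cases}
\qquad\text{so for }K=0:\quad
\operatorname{Lip}(P_{t}f)\leq\frac{\left\lVert f\right\rVert_{L^{\infty}}}{\sqrt{2t}}\,.
```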
We also have the so-called Sobolev to Lipschitz property: any $f\in
H^{1,2}(X,\mathsf{d},\mathfrak{m})$ with $\left\lvert\nabla f\right\rvert\in
L^{\infty}(X,\mathfrak{m})$ admits a Lipschitz representative $\bar{f}$ such
that $\operatorname{Lip}\bar{f}\leq\left\lVert\nabla f\right\rVert_{\infty}$.
###### Definition .
We introduce the space of “test” functions ${\rm
Test}(X,\mathsf{d},\mathfrak{m})$ by
(2.33) $\displaystyle{\rm Test}(X,\mathsf{d},\mathfrak{m}):=\\{f\in D(\Delta)\cap L^{\infty}(X,\mathfrak{m}):\left\lvert\nabla f\right\rvert\in L^{\infty}(X)\ \text{and}\ \Delta f\in H^{1,2}(X,\mathsf{d},\mathfrak{m})\\}$
and the subspace ${\rm Test}^{\infty}(X,\mathsf{d},\mathfrak{m})$ by
(2.34) $\displaystyle{\rm Test}^{\infty}(X,\mathsf{d},\mathfrak{m}):=\\{f\in D(\Delta)\cap\operatorname{LIP}_{b}(X):\Delta f\in L^{\infty}(X,\mathfrak{m})\cap H^{1,2}(X,\mathsf{d},\mathfrak{m})\\}\,.$
###### Remark .
We remark that, for any $g\in L^{2}\cap L^{\infty}(X,\mathfrak{m})$, it holds
that $P_{t}g\in{\rm Test}(X,\mathsf{d},\mathfrak{m})$ for any $t>0$, thanks to
(2.30), (2.32), the fact that $P_{t}$ maps $L^{2}(X,\mathfrak{m})$ into
$D(\Delta)$ and the commutation $\Delta P_{t}f=P_{t}\Delta f$, which holds
true for any $f\in D(\Delta)$.
On $\operatorname{RCD}(K,N)$ metric measure spaces it is possible to build
regular cut-off functions, see [112, Lemma 3.1] (the Test regularity was not
required in [112] but can be obtained with a similar construction, see also
[14, Lemma 6.7] and [66]).
###### Lemma .
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space. Then, for any $R>0$ there exists a constant $C=C(K,N,R)>0$ such
that, for any $x\in X$ and for any $0<r<R$, there exists a function
$\varphi_{r}:X\to[0,\infty)$ such that the following properties hold:
* i)
$\varphi_{r}\equiv 1$ on $B_{r}(x)$ and $\varphi_{r}\equiv 0$ outside of
$B_{2r}(x)$;
* ii)
$\varphi_{r}$ is Lipschitz and belongs to $D(\Delta)$, moreover
$r^{2}\left\lvert\Delta\varphi_{r}\right\rvert+r\left\lvert\nabla\varphi_{r}\right\rvert\leq
C(K,N,R)\,.$
* iii)
$\varphi_{r}\in{\rm Test}(X,\mathsf{d},\mathfrak{m})$.
Since $\operatorname{RCD}(K,N)$ spaces are locally doubling and they satisfy a
local Poincaré inequality (see [131, 121]), the general theory of Dirichlet
forms guarantees that we can find a locally Hölder continuous heat kernel $p$
on $X\times X\times(0,+\infty)$, see [126].
Moreover, the following finer properties of the heat kernel on
$\operatorname{RCD}(K,N)$ spaces have been proved in [87]: there exist
constants $C_{1}=C_{1}(K,N)>1$ and $c=c(K,N)\geq 0$ such that
(2.35) $\frac{1}{C_{1}\mathfrak{m}(B_{\sqrt{t}}(x))}\exp\left\\{-\frac{\mathsf{d}^{2}(x,y)}{3t}-ct\right\\}\leq p_{t}(x,y)\leq\frac{C_{1}}{\mathfrak{m}(B_{\sqrt{t}}(x))}\exp\left\\{-\frac{\mathsf{d}^{2}(x,y)}{5t}+ct\right\\}$
for any $x,y\in X$ and for any $t>0$. Moreover it holds
(2.36) $\left\lvert\nabla
p_{t}(x,\cdot)\right\rvert(y)\leq\frac{C_{1}}{\sqrt{t}\mathfrak{m}(B_{\sqrt{t}}(x))}\exp\left\\{-\frac{\mathsf{d}^{2}(x,y)}{5t}+ct\right\\}\quad\text{for
$\mathfrak{m}$-a.e. $y\in X$},$
for any $t>0$ and for any $x\in X$.
We remark that in (2.35) and (2.36) above one can take $c=0$ whenever
$(X,\mathsf{d},\mathfrak{m})$ is an $\operatorname{RCD}(0,N)$ m.m.s.
It is also possible to combine the upper bound for the heat kernel in (2.35)
with the general theory of heat kernels (see again [126]) to infer that
$\left\lvert\frac{\mathop{}\\!\mathrm{d}}{\mathop{}\\!\mathrm{d}t}p_{t}(x,y)\right\rvert=\left\lvert\Delta_{x}p_{t}(x,y)\right\rvert\leq\frac{C}{t\mathfrak{m}(B_{\sqrt{t}}(x))}\exp\left\\{-\frac{\mathsf{d}^{2}(x,y)}{5t}+ct\right\\}\,,$
for all $t>0$ and $\mathfrak{m}\otimes\mathfrak{m}$-a.e. $(x,y)\in X\times X$.
We will deal several times with the heat flow for initial data with polynomial
growth, i.e. for those functions $f:X\to\mathbb{R}$ such that for some
$n\in\mathbb{N}$, some constant $C>0$ and $x\in X$ it holds
(2.37) $\left\lvert f(y)\right\rvert\leq
C\mathsf{d}(x,y)^{n}+C\,,\quad\text{for any $y\in X$}\,.$
In this case the evolution via heat flow can be pointwise defined by
(2.38)
$P_{t}f(x):=\int_{X}p_{t}(x,y)f(y)\mathop{}\\!\mathrm{d}\mathfrak{m}(y)\,,$
for any $x\in X$ and for any $t>0$.
Observe that the integral in (2.38) is absolutely convergent thanks to the
upper heat kernel estimate in (2.35), the Bishop-Gromov inequality (2.1) and
the polynomial growth assumption (2.37).
Whenever $f:X\to\mathbb{R}$ has polynomial growth, belongs to the domain of
the Laplacian locally and has Laplacian with polynomial growth, it is possible
to verify that $P_{t}f$ belongs to the domain of the Laplacian locally and
(2.39) $\Delta P_{t}f(x)=\int_{X}\Delta
p_{t}(x,y)f(y)\mathop{}\\!\mathrm{d}\mathfrak{m}(y)=\int_{X}p_{t}(x,y)\Delta
f(y)\mathop{}\\!\mathrm{d}\mathfrak{m}(y)\,,$
for any $x\in X$ and for any $t>0$. Then one can easily argue that
$\frac{\mathop{}\\!\mathrm{d}}{\mathop{}\\!\mathrm{d}t}P_{t}f(x)=\Delta
P_{t}f(x)\,,\quad\text{for a.e. $t>0$ and every $x\in X$}\,.$
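For instance, in the model case $X=\mathbb{R}^{N}$ the identities above can be verified by hand for the quadratically growing datum $f(y)=|y|^{2}$: computing the Gaussian second moment gives

```latex
P_{t}f(x)=\int_{\mathbb{R}^{N}}(4\pi t)^{-N/2}e^{-\frac{|x-y|^{2}}{4t}}\,|y|^{2}\,\mathrm{d}y
=|x|^{2}+2Nt\,,
```

so that $\Delta P_{t}f=2N=P_{t}\Delta f$, in accordance with (2.39), and $\frac{\mathrm{d}}{\mathrm{d}t}P_{t}f(x)=2N=\Delta P_{t}f(x)$ for every $t>0$.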
Among the consequences of the Gaussian bounds there is the fact that the heat
kernel is strictly positive. It follows that, whenever $f\in L^{1}_{{\rm
loc}}(X,\mathfrak{m})$ has polynomial growth and $f\geq 0$, then $P_{t}f$ is
strictly positive at any point and any positive time unless $f\equiv 0$. Below
we wish to show that, nevertheless, the action of the heat flow is still
local, to some extent.
###### Lemma .
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space for some $K\in\mathbb{R}$ and $1\leq N<\infty$. Let $f\in
L^{1}_{{\rm loc}}(X,\mathfrak{m})$ be a function with polynomial growth and
assume that there exist $x_{0}\in X$ and $r_{0}>0$ such that $f\equiv 0$ on
$B_{r_{0}}(x_{0})$. Then, for any $n\in\mathbb{N}$,
$P_{t}f(x_{0})=o(t^{n})\,,\quad\text{as $t\downarrow 0$}\,.$
###### Proof.
Observe that, since $p_{t}(x,\cdot)\mathfrak{m}$ is a probability measure for
any $x\in X$ and any $t>0$ (see (2.29)), by Jensen's inequality it holds
$\left\lvert P_{t}f(x)\right\rvert\leq P_{t}\left\lvert
f\right\rvert(x)\,,\quad\text{for any $t>0$ and for any $x\in X$}\,.$
Therefore we can assume without loss of generality that $f\geq 0$.
Using the coarea formula and abbreviating by $\operatorname{Per}_{r}$ the
perimeter measure of the ball $B_{r}(x)$, we can compute
(2.40)
$P_{t}f(x)=\int_{X}p_{t}(x,y)f(y)\mathop{}\\!\mathrm{d}\mathfrak{m}(y)=\int_{0}^{\infty}\int_{\partial
B_{r}(x)}f(y)p_{t}(x,y)\mathop{}\\!\mathrm{d}\operatorname{Per}_{r}(y)\mathop{}\\!\mathrm{d}r\,.$
Using the upper bound for the heat kernel in (2.35) we estimate
$\displaystyle\int_{0}^{\infty}$ $\displaystyle\int_{\partial
B_{r}(x)}f(y)p_{t}(x,y)\mathop{}\\!\mathrm{d}\operatorname{Per}_{r}(y)\mathop{}\\!\mathrm{d}r$
(2.41)
$\displaystyle\leq\frac{Ce^{ct}}{\mathfrak{m}(B_{\sqrt{t}}(x))}\int_{0}^{\infty}e^{-\frac{r^{2}}{5t}}\int_{\partial
B_{r}(x)}f(y)\mathop{}\\!\mathrm{d}\operatorname{Per}_{r}(y)\mathop{}\\!\mathrm{d}r\,.$
Let us set now
$g(r):=\int_{\partial
B_{r}(x_{0})}f(y)\mathop{}\\!\mathrm{d}\operatorname{Per}_{r}(y)\,$
and
$h(r):=\int_{B_{r}(x_{0})}f(y)\mathop{}\\!\mathrm{d}\mathfrak{m}(y)\,.$
By the coarea formula,
$h(r)=\int_{0}^{r}g(s)\mathop{}\\!\mathrm{d}s\,,\quad\text{for any $r>0$}\,,$
hence $r\mapsto h(r)$ is an absolutely continuous monotone map and
(2.42) $h^{\prime}(r)=g(r)\,,\quad\text{for a.e. $r>0$}\,.$
Moreover, by the polynomial growth assumption and since $f\equiv 0$ on
$B_{r_{0}}(x_{0})$, we know that, for any $n\geq n_{0}$ (where $n_{0}$ is the
order in the polynomial growth assumption), there exists a constant $C=C(n)>0$
such that
(2.43) $\fint_{B_{r}(x_{0})}f(y)\mathop{}\\!\mathrm{d}\mathfrak{m}(y)\leq
Cr^{n}\,,\quad\text{for any $r>0$}\,.$
When read in terms of the function $h$, this can be rephrased by
$h(r)\leq Cr^{n}\mathfrak{m}(B_{r}(x_{0}))\,,\quad\text{for any $r>0$}.$
With the notation introduced above, (2.40) and (2.41) can be rephrased as
$P_{t}f(x_{0})\leq\frac{Ce^{ct}}{\mathfrak{m}(B_{\sqrt{t}}(x_{0}))}\int_{0}^{\infty}e^{-\frac{r^{2}}{5t}}g(r)\mathop{}\\!\mathrm{d}r\,.$
Changing variables in the integral by setting $s:=r/\sqrt{5t}$ and integrating
by parts, taking into account (2.42) and the polynomial growth of $f$ and the
Bishop-Gromov inequality (2.1) to prove vanishing of the boundary terms, we
obtain
$\displaystyle P_{t}f(x_{0})\leq\,$
$\displaystyle\frac{Ce^{ct}\sqrt{t}}{\mathfrak{m}(B_{\sqrt{t}}(x_{0}))}\int_{0}^{\infty}e^{-s^{2}}g(\sqrt{5t}s)\mathop{}\\!\mathrm{d}s$
$\displaystyle\leq\,$
$\displaystyle\frac{Ce^{ct}\sqrt{t}}{\mathfrak{m}(B_{\sqrt{t}}(x_{0}))}\frac{1}{\sqrt{5t}}\int_{0}^{\infty}se^{-s^{2}}h(\sqrt{5t}s)\mathop{}\\!\mathrm{d}s$
$\displaystyle\leq\,$
$\displaystyle\frac{Ce^{ct}}{\mathfrak{m}(B_{\sqrt{t}}(x_{0}))}\int_{0}^{\infty}se^{-s^{2}}h(\sqrt{5t}s)\mathop{}\\!\mathrm{d}s$
(2.44) $\displaystyle=\,$ $\displaystyle
Ce^{ct}\int_{0}^{\infty}se^{-s^{2}}\frac{h(\sqrt{5t}s)}{\mathfrak{m}(B_{\sqrt{t}}(x_{0}))}\mathop{}\\!\mathrm{d}s\,.$
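For completeness, the integration by parts behind the second inequality in (2.44) reads as follows: by (2.42), $\frac{\mathrm{d}}{\mathrm{d}s}h(\sqrt{5t}s)=\sqrt{5t}\,g(\sqrt{5t}s)$ for a.e. $s>0$, so

```latex
\int_{0}^{\infty}e^{-s^{2}}g(\sqrt{5t}s)\,\mathrm{d}s
=\frac{1}{\sqrt{5t}}\int_{0}^{\infty}e^{-s^{2}}\frac{\mathrm{d}}{\mathrm{d}s}h(\sqrt{5t}s)\,\mathrm{d}s
=\frac{1}{\sqrt{5t}}\Big[e^{-s^{2}}h(\sqrt{5t}s)\Big]_{0}^{\infty}
+\frac{2}{\sqrt{5t}}\int_{0}^{\infty}se^{-s^{2}}h(\sqrt{5t}s)\,\mathrm{d}s\,.
```

The boundary terms vanish because $h(0)=0$ and, by the polynomial growth of $f$ together with the Bishop-Gromov inequality (2.1), $h(\sqrt{5t}s)$ grows at most exponentially in $s$ while $e^{-s^{2}}$ decays super-exponentially; the factor $2$ is absorbed into the constant $C$.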
Let us set, for any $0<t<1$ and for any $s>0$,
$\varphi_{t}(s):=\frac{h(\sqrt{5t}s)}{\mathfrak{m}(B_{\sqrt{t}}(x_{0}))}\,.$
We wish to bound $\varphi_{t}(s)$ in a sufficiently uniform way (w.r.t.
$t\in(0,1)$) in order to apply the Dominated Convergence Theorem and prove
that $P_{t}f(x_{0})=o(t^{m})$ as $t\downarrow 0$, for any $m\in\mathbb{N}$.
To this aim, fix $m\in\mathbb{N}$ and let $n\in\mathbb{N},$ with $n>m$. We
split $(0,\infty)$ into two intervals.
If $s\in(0,1/\sqrt{5})$, then, for any $t\in(0,1)$, we can bound
(2.45)
$\varphi_{t}(s)\leq\frac{h(\sqrt{t})}{\mathfrak{m}(B_{\sqrt{t}}(x_{0}))}=\fint_{B_{\sqrt{t}}(x_{0})}f(y)\mathop{}\\!\mathrm{d}\mathfrak{m}(y)\leq
Ct^{n/2}\,,$
where we used (2.43) for the last inequality. If instead $s>1/\sqrt{5}$, we
can bound
$\displaystyle\varphi_{t}(s)=\,$
$\displaystyle\frac{h(\sqrt{5t}s)}{\mathfrak{m}(B_{\sqrt{t}}(x_{0}))}\leq\frac{\mathfrak{m}(B_{\sqrt{5t}s}(x_{0}))}{\mathfrak{m}(B_{\sqrt{t}}(x_{0}))}\fint_{B_{\sqrt{5t}s}(x_{0})}f(y)\mathop{}\\!\mathrm{d}\mathfrak{m}(y)$
$\displaystyle\leq\,$
$\displaystyle\frac{v_{K,N}(\sqrt{5t}s)}{v_{K,N}(\sqrt{t})}\fint_{B_{\sqrt{5t}s}(x_{0})}f(y)\mathop{}\\!\mathrm{d}\mathfrak{m}(y)$
(2.46) $\displaystyle\leq\,$ $\displaystyle
C\frac{v_{K,N}(\sqrt{5t}s)}{v_{K,N}(\sqrt{t})}\left(\sqrt{5t}s\right)^{n}\,,$
where we used the Bishop-Gromov inequality (2.1) and the last bound follows
from (2.43).
From (2.46) we infer that for every $t\in(0,1)$ it holds
(2.47) $0\leq t^{-m}\varphi_{t}(s)\leq\psi_{K,N,n,m}(s)\quad\text{ with
}\quad\int_{0}^{\infty}\psi_{K,N,n,m}(s)\,s\,e^{-s^{2}}\mathop{}\\!\mathrm{d}s<\infty.$
Moreover, since $f\equiv 0$ on $B_{r_{0}}(x_{0})$, it holds
(2.48) $t^{-m}\varphi_{t}(s)\to 0\,,\quad\text{as $t\downarrow 0$, for any
$s>0$}\,.$
Now observe that (2.44) can be rewritten as
$t^{-m}P_{t}f(x_{0})\leq
Ce^{ct}\int_{0}^{\infty}t^{-m}\varphi_{t}(s)\,se^{-s^{2}}\mathop{}\\!\mathrm{d}s\,.$
Thanks to the domination (2.47) and to the pointwise convergence (2.48) we can
apply the Dominated Convergence Theorem and get
$\lim_{t\downarrow 0}t^{-m}P_{t}f(x_{0})=0\,.$
Since $m\in\mathbb{N}$ was arbitrary, the claim follows. ∎
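To illustrate the statement just proved: in the Euclidean model case the decay is in fact exponential. Taking $f=\chi_{\mathbb{R}^{N}\setminus B_{r_{0}}(x_{0})}$ and using $e^{-\frac{r^{2}}{4t}}\leq e^{-\frac{r_{0}^{2}}{8t}}e^{-\frac{r^{2}}{8t}}$ for $r\geq r_{0}$,

```latex
P_{t}f(x_{0})=\int_{\{|y-x_{0}|\geq r_{0}\}}(4\pi t)^{-N/2}e^{-\frac{|y-x_{0}|^{2}}{4t}}\,\mathrm{d}y
\leq e^{-\frac{r_{0}^{2}}{8t}}\int_{\mathbb{R}^{N}}(4\pi t)^{-N/2}e^{-\frac{|y-x_{0}|^{2}}{8t}}\,\mathrm{d}y
=2^{N/2}e^{-\frac{r_{0}^{2}}{8t}}\,,
```

which is $o(t^{n})$ as $t\downarrow 0$ for every $n\in\mathbb{N}$.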
The next lemma is an instance of the fact that, despite being non-local, the
heat flow acts as an averaging operator on smaller and smaller scales as time
goes to $0$.
###### Lemma .
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space for some $K\in\mathbb{R}$ and $1\leq N<\infty$. Let us assume
that $f\in L^{1}_{{\rm loc}}(X,\mathfrak{m})$ has polynomial growth and let
$x\in X$ be such that
(2.49) $\lim_{r\downarrow
0}\frac{1}{\mathfrak{m}(B_{r}(x))}\int_{B_{r}(x)}\left\lvert
f(y)-f(x)\right\rvert\mathop{}\\!\mathrm{d}\mathfrak{m}=0\,.$
Then
(2.50) $\lim_{t\downarrow 0}P_{t}f(x)=f(x)\,.$
###### Proof.
We start by observing that, for any $t>0$,
$P_{t}f(x)-f(x)=\int_{X}p_{t}(x,y)(f(y)-f(x))\mathop{}\\!\mathrm{d}\mathfrak{m}(y)\,,$
thanks to the stochastic completeness (2.29).
Therefore, in order to prove (2.50), using Jensen’s inequality it is
sufficient to prove that
$\int_{X}p_{t}(x,y)\left\lvert
f(y)-f(x)\right\rvert\mathop{}\\!\mathrm{d}\mathfrak{m}(y)\to
0\,,\quad\text{as $t\downarrow 0$}\,.$
Thanks to the localization lemma above, we can assume without loss of
generality that $f$ has compact support, up to multiplying it by a compactly
supported continuous cut-off function.
Under this assumption, (2.49) can be rephrased by saying that
(2.51) $\fint_{B_{r}(x)}\left\lvert
f(y)-f(x)\right\rvert\mathop{}\\!\mathrm{d}\mathfrak{m}(y)\leq
C<\infty\,,\quad\text{for any $r>0$}$
and
(2.52) $\fint_{B_{r}(x)}\left\lvert
f(y)-f(x)\right\rvert\mathop{}\\!\mathrm{d}\mathfrak{m}(y)\to
0\,,\quad\text{as $r\downarrow 0$}\,.$
Setting
$h(r):=\int_{B_{r}(x)}\left\lvert
f(y)-f(x)\right\rvert\mathop{}\\!\mathrm{d}\mathfrak{m}(y)$
and arguing as in the proof of the localization lemma above, we can bound
$\int_{X}p_{t}(x,y)\left\lvert
f(y)-f(x)\right\rvert\mathop{}\\!\mathrm{d}\mathfrak{m}(y)\leq
Ce^{ct}\int_{0}^{\infty}se^{-s^{2}}\frac{h(\sqrt{5t}s)}{\mathfrak{m}(B_{\sqrt{t}}(x))}\mathop{}\\!\mathrm{d}s\,.$
Relying on (2.51) to get the uniform bounds and on (2.52) to get the pointwise
convergence to $0$ of the integrands as $t\downarrow 0$, we can argue as in
the proof of the localization lemma above and prove that
$\int_{0}^{\infty}se^{-s^{2}}\frac{h(\sqrt{5t}s)}{\mathfrak{m}(B_{\sqrt{t}}(x))}\mathop{}\\!\mathrm{d}s\to
0\,,\quad\text{as $t\downarrow 0$}\,,$
hence (2.50) holds. ∎
###### Remark .
In the lemma above we can weaken the assumption by requiring only that
$\lim_{r\downarrow
0}\fint_{B_{r}(x)}f(y)\mathop{}\\!\mathrm{d}\mathfrak{m}(y)=c\,.$
In that case, the very same proof shows that
$\lim_{t\to 0}P_{t}f(x)=c\,.$
###### Lemma .
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space for some $K\in\mathbb{R}$ and $1\leq N<\infty$. Let $f\in
L^{1}_{{\rm loc}}(X,\mathfrak{m})$ be a function with polynomial growth.
Moreover, let us assume that:
* i)
There exists $B_{r}(x)\subset X$ such that $f\in D(\Delta,B_{2r}(x))$;
* ii)
$\Delta f$ is $\mathfrak{m}$-essentially bounded on $B_{r}(x)$;
* iii)
$x$ is a Lebesgue point for $\Delta f$, i.e.
$\lim_{r\to 0}\fint_{B_{r}(x)}\left\lvert\Delta f(y)-\Delta
f(x)\right\rvert\mathop{}\\!\mathrm{d}\mathfrak{m}(y)=0\,.$
Then
(2.53) $\lim_{t\downarrow 0}\frac{P_{t}f(x)-f(x)}{t}=\Delta f(x)\,.$
###### Proof.
Thanks to the localization lemma above, up to multiplying $f$ by a cut-off
function with the good estimates provided by the cut-off lemma above, we can
assume that $f\in D(\Delta)$ and $\Delta f\in L^{\infty}(X,\mathfrak{m})$.
Thanks to (2.39), we can consider the pointwise defined versions of $P_{t}f$
and $P_{t}\Delta f$, and compute:
$\displaystyle\frac{P_{t}f(x)-f(x)}{t}$
$\displaystyle=\frac{1}{t}\int_{0}^{t}\frac{\mathop{}\\!\mathrm{d}}{\mathop{}\\!\mathrm{d}s}P_{s}f(x)\mathop{}\\!\mathrm{d}s$
(2.54) $\displaystyle=\frac{1}{t}\int_{0}^{t}\Delta
P_{s}f(x)\mathop{}\\!\mathrm{d}s=\frac{1}{t}\int_{0}^{t}P_{s}\Delta
f(x)\mathop{}\\!\mathrm{d}s\,.$
Observe that, in particular, $P_{t}\Delta f$ is continuous for any $t>0$
thanks to the $L^{\infty}$-$\operatorname{LIP}$ regularization property of the
heat flow. Thanks to (2.54), in order to get (2.53) it is sufficient to prove
that
(2.55) $P_{t}\Delta f(x)\to\Delta f(x)\,,\quad\text{as $t\downarrow 0$}\,.$
In order to obtain (2.55), it is now sufficient to apply the averaging lemma
above with $\Delta f$ in place of $f$. ∎
###### Remark .
The technical lemmas above essentially provide a counterpart, tailored to the
nonsmooth $\operatorname{RCD}(K,N)$ framework, of the classical fact that if
one evolves a smooth initial datum $f$ through the heat flow on a Riemannian
manifold, then $P_{t}f$ converges to $f$ smoothly as $t\to 0$. Moreover, local
smoothness yields local smooth convergence.
### 2.6. The Poisson equation
Let us collect here some existence and comparison results for the Poisson
equation with Dirichlet boundary conditions on $\operatorname{RCD}(K,N)$
metric measure spaces. Some of them are valid in the much more general
framework of metric measure spaces verifying doubling and Poincaré
inequalities, but for the present formulation we rely on the
$\operatorname{RCD}(K,N)$ structure.
We will often rely on the following regularity result for the Poisson equation
on $\operatorname{RCD}(K,N)$ spaces, which is in turn a corollary of [86,
Theorem 1.2].
###### Theorem 2.12.
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space for some $K\in\mathbb{R}$ and $1\leq N<\infty$. Let
$\Omega\subset X$ be an open domain and let $f\in D(\Delta,\Omega)$ be such
that $\Delta f$ is continuous on $\Omega$.
Then $f$ has a locally Lipschitz representative on $\Omega$.
From now on, when dealing with solutions of the Poisson problem $\Delta
f=\eta$ for some continuous function $\eta$, we will always assume that $f$ is
the continuous representative given by Theorem 2.12 above.
###### Theorem 2.13.
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space for some $K\in\mathbb{R}$ and $1\leq N<\infty$. Let
$\Omega\subset X$ be an open and bounded domain. Then the following hold:
* (i)
(Strong maximum principle) Assume that $\Delta f=0$ on $\Omega$ and that $f$
has a maximum point at $x_{0}\in\Omega$. Then $f$ is constant on the connected
component of $\Omega$ containing $x_{0}$.
* (ii)
(Existence for the Dirichlet problem) Assume that
$\mathfrak{m}(X\setminus\Omega)>0$, let $g\in H^{1,2}(X)$ and let
$\eta:\Omega\to\mathbb{R}$ be continuous and bounded. Then there exists a
unique solution $f$ of the Poisson problem with Dirichlet boundary conditions
$\Delta f=\eta\,\quad\text{on $\Omega$}\,,\quad f-g\in H^{1,2}_{0}(\Omega)\,.$
* (iii)
(Comparison principle) Under the same assumptions as in (ii), if
$\bm{\Delta}g\leq\eta$ on $\Omega$, then $g\geq f$ on $\Omega$.
###### Proof.
i) (resp. iii)) follows by combining [25, Theorem 8.13] (resp. [25, Theorem
9.39]) with the PDE characterization of sub-harmonic functions obtained in
[68].
ii) follows from the solvability of the Poisson equation with null boundary
conditions proved in [24, Corollary 1.2], combined with the existence of
harmonic functions with Dirichlet boundary conditions (see for instance [25,
Theorem 10.12]). Alternatively, one can argue as in the proof of [25, Theorem
10.12] and minimize the functional $J_{\eta}(u):=\int_{\Omega}|\nabla
u|^{2}\mathop{}\\!\mathrm{d}\mathfrak{m}-\int_{\Omega}\eta\,u\,\mathop{}\\!\mathrm{d}\mathfrak{m}$
instead of $J_{0}(u):=\int_{\Omega}|\nabla
u|^{2}\mathop{}\\!\mathrm{d}\mathfrak{m}$, among the functions $u\in
H^{1,2}(\Omega)$ such that $u-g\in H^{1,2}_{0}(\Omega)$. ∎
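As a one-dimensional sanity check of (ii) and (iii) (an elementary example of ours, not taken from the references above), consider $X=\mathbb{R}$, $\Omega=(0,1)$ and $\eta\equiv -2$, so that the Laplacian is the second derivative:

```latex
f(x)=x(1-x)\,,\quad \Delta f=f''=-2=\eta\,,\qquad
g(x)=2x(1-x)\,,\quad \bm{\Delta}g=-4\leq\eta\,,\quad f-g\in H^{1,2}_{0}(\Omega)\,,
```

and indeed $g(x)=2x(1-x)\geq x(1-x)=f(x)$ on $\Omega$, as predicted by the comparison principle.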
### 2.7. The Green function of a domain and applications
Here we deal with some relevant estimates for the Green function of the
Laplacian on a domain of an $\operatorname{RCD}(K,N)$ metric measure space
$(X,\mathsf{d},\mathscr{H}^{N})$. We assume that $N\geq 3$ for the sake of
this discussion. The arguments can be adapted to deal with the case $N=2$, as
is classical in geometric analysis when dealing with Green’s functions.
A classical way (cf. for instance [88, Lemma 5.15] and [74]) to construct a
positive Green’s function for the Laplacian with Dirichlet boundary condition
(and to estimate it) on a smooth domain of a Riemannian manifold is given by
the following procedure.
Let $p_{t}:X\times X\to[0,\infty)$ denote the global heat kernel of the
Riemannian manifold. Fix a time parameter $T>0$ and consider
$G^{T}(x,y):=\int_{0}^{T}p_{t}(x,y)\mathop{}\\!\mathrm{d}t\,.$
This is formally a solution of
$\Delta_{x}G^{T}(\cdot,y)=-\delta_{y}+p_{T}(\cdot,y)$. Indeed, we can compute
$\displaystyle\Delta_{x}G^{T}(\cdot,y)=$
$\displaystyle\Delta_{x}\int_{0}^{T}p_{t}(\cdot,y)\mathop{}\\!\mathrm{d}t=\int_{0}^{T}\Delta_{x}p_{t}(\cdot,y)\mathop{}\\!\mathrm{d}t$
$\displaystyle=$
$\displaystyle-\int_{0}^{T}\frac{\mathop{}\\!\mathrm{d}}{\mathop{}\\!\mathrm{d}t}p_{t}(\cdot,y)\mathop{}\\!\mathrm{d}t=p_{T}(\cdot,y)-\delta_{y}\,.$
Then we solve the Dirichlet boundary value problem
$\Delta f=p_{T}(\cdot,y)$
with boundary condition
$f=G^{T}(\cdot,y)\,,\;\;\;\text{on $\partial\Omega$}\,,$
and subtract the solution $f$ from $G^{T}(\cdot,y)$. In this way we obtain,
for $y\in\Omega$ fixed, a solution for the problem
$\Delta_{x}G(\cdot,y)=-\delta_{y}\,,\;\;\;G(\cdot,y)=0\,\;\;\;\text{on
$\partial\Omega$}\,.$
Good properties, such as regularity away from the pole and strict positivity,
can be proven by regularization and by exploiting harmonicity away from the
pole, once suitable integrability is established.
We wish to prove that the construction above can be carried out in the
non-smooth framework as well. This will require some slight adjustments to the
construction of global Green functions on $\operatorname{RCD}(K,N)$ metric
measure spaces verifying suitable volume growth assumptions performed in [28],
which follows one of the classical Riemannian strategies.
Notice that, as is classical in the study of Green functions of the Laplacian,
the case of dimension $2$ would require a separate treatment, which we omit
here since it does not involve substantially different ideas.
###### Proposition .
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and let $\Omega\subset X$ be an open domain such that
$\mathscr{H}^{N}(X\setminus\Omega)>0$. Assume that $N\geq 3$. Then, for any
$x\in\Omega$ there exists a positive Green’s function of the Laplacian on
$\Omega$ with pole at $x$, i.e. a function $G_{x}:\Omega\to(0,\infty]$ such
that
$\bm{\Delta}G_{x}(\cdot)=-\delta_{x}\,,$
i.e. $G_{x}$ is locally Lipschitz away from $x$, $\left\lvert\nabla
G_{x}\right\rvert\in L^{1}_{{\rm loc}}(\Omega)$ and
$\int_{\Omega}\nabla
G_{x}\cdot\nabla\varphi\mathop{}\\!\mathrm{d}\mathfrak{m}=\varphi(x)\,,$
for any function $\varphi\in\operatorname{LIP}_{c}(\Omega)$. In particular,
$G_{x}$ is harmonic away from the pole $x$.
###### Proof.
Let us fix $x\in\Omega\subset X$. For $T>0$ sufficiently small, we set
$G^{T}_{x}(y)=G^{T}(x,y):=\int_{0}^{T}p_{t}(x,y)\mathop{}\\!\mathrm{d}t\,$
and, for any $0<\varepsilon<T$ we also set
$G^{T,\varepsilon}_{x}(y):=\int_{\varepsilon}^{T}p_{t}(x,y)\mathop{}\\!\mathrm{d}t\,.$
Let us consider $G^{T}_{x}$ as a function of $y$. Then, relying on (2.5), the
smallness of $T>0$, and the local Ahlfors regularity of
$(X,\mathsf{d},\mathscr{H}^{N})$, we can estimate
$\displaystyle G^{T}_{x}(y)=$
$\displaystyle\int_{0}^{T}p_{t}(x,y)\mathop{}\\!\mathrm{d}t\leq\int_{0}^{T}\frac{C_{1}}{\mathfrak{m}(B_{\sqrt{t}}(x))}\exp\left\\{-\frac{\mathsf{d}^{2}(x,y)}{5t}+ct\right\\}\mathop{}\\!\mathrm{d}t$
$\displaystyle\leq$ $\displaystyle
C\int_{0}^{T}\frac{e^{-\frac{\mathsf{d}^{2}(x,y)}{5t}}}{\mathfrak{m}(B_{\sqrt{t}}(x))}\mathop{}\\!\mathrm{d}t\leq
C\int_{0}^{T}\frac{e^{-\frac{\mathsf{d}^{2}(x,y)}{5t}}}{t^{N/2}}\mathop{}\\!\mathrm{d}t$
(2.56) $\displaystyle\leq$ $\displaystyle C\mathsf{d}(x,y)^{2-N}\,.$
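For the reader's convenience, the last inequality in (2.56) can be obtained via the change of variables $s=\mathsf{d}^{2}(x,y)/(5t)$; this is also where the restriction $N\geq 3$ enters, since the resulting Gamma-type integral converges only for $N>2$:

```latex
\int_{0}^{T}\frac{e^{-\frac{\mathsf{d}^{2}(x,y)}{5t}}}{t^{N/2}}\,\mathrm{d}t
=\int_{\frac{\mathsf{d}^{2}(x,y)}{5T}}^{\infty}
\Bigl(\frac{5s}{\mathsf{d}^{2}(x,y)}\Bigr)^{\frac{N}{2}}e^{-s}\,
\frac{\mathsf{d}^{2}(x,y)}{5s^{2}}\,\mathrm{d}s
\leq 5^{\frac{N}{2}-1}\,\Gamma\Bigl(\frac{N}{2}-1\Bigr)\,\mathsf{d}(x,y)^{2-N}\,.
```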
In an analogous way, relying on the lower Gaussian heat kernel bound (2.5), we
obtain
(2.57) $G^{T}_{x}(y)\geq C^{\prime}\mathsf{d}(x,y)^{2-N}\,,\quad\text{for any
$y\in X$, $y\neq x$}\,,$
for some constant $C^{\prime}=C^{\prime}_{x,T}>0$.
Using the gradient bound for the heat kernel (2.36) it is also possible to
prove that $G^{T}_{x}$ is locally Lipschitz away from $x$ with the bound
(2.58) $\left\lvert\nabla G^{T}_{x}(y)\right\rvert\leq
C\mathsf{d}(x,y)^{1-N}\,,\quad\text{for a.e. $y\in X$}\,.$
It follows in particular that $G^{T}_{x}\in L^{1}_{{\rm loc}}(X,\mathfrak{m})$
and $\left\lvert\nabla G^{T}_{x}\right\rvert\in L^{1}_{{\rm
loc}}(X,\mathfrak{m})$.
Arguing as in the proof of [28, Lemma 2.5] it is then possible to prove that,
for any function $\varphi\in\operatorname{LIP}_{c}(X,\mathsf{d})$, it holds
$\int_{X}\nabla
G^{T}_{x}(y)\cdot\nabla\varphi(y)\mathop{}\\!\mathrm{d}\mathfrak{m}(y)=\varphi(x)-\int_{X}p_{T}(x,y)\varphi(y)\mathop{}\\!\mathrm{d}\mathfrak{m}(y)\,,$
which is the distributional formulation of
$\bm{\Delta}G_{x}^{T}=-\delta_{x}+p_{T}(x,\cdot)$.
Let us also notice (cf. again with [28]) that $G^{T,\varepsilon}_{x}$ is a
regularized version of $G^{T}_{x}$. Indeed, it is possible to show that
$G^{T,\varepsilon}_{x}\in{\rm Test}_{{\rm loc}}(X,\mathsf{d},\mathscr{H}^{N})$
for any $0<\varepsilon<T$ and
$\Delta
G^{T,\varepsilon}_{x}(\cdot)=-p_{\varepsilon}(x,\cdot)+p_{T}(x,\cdot)\,.$
Now let us notice that $p_{T}(x,\cdot)\in{\rm Test}_{{\rm
loc}}(X,\mathsf{d},\mathscr{H}^{N})$ as it follows from the regularization
properties of the heat flow and the semigroup law.
Using Theorem 2.13 (ii), for any $\varepsilon>0$ we can consider a solution
$g^{\varepsilon}$ of the Dirichlet problem
(2.59) $\Delta g^{\varepsilon}=p_{T}(x,\cdot)\,\quad\text{on $\Omega$}\,,\quad
g^{\varepsilon}-G^{T,\varepsilon}_{x}\in H^{1,2}_{0}(\Omega)\,.$
Setting $G^{\varepsilon}_{x}:=G^{T,\varepsilon}_{x}-g^{\varepsilon}$, it holds
$\Delta G^{\varepsilon}_{x}=-p_{\varepsilon}(x,\cdot)\,,$
and $G^{\varepsilon}_{x}\in H^{1,2}_{0}(\Omega)$. Moreover, by the comparison
principle Theorem 2.13 (iii), we get that $G^{\varepsilon}_{x}\geq 0$ on
$\Omega$.
Now we can fix $0<\varepsilon_{0}<T$ and set
$G_{x}:=G^{T}_{x}-g^{\varepsilon_{0}}$. Observe that
$G_{x}:=G^{T}_{x}-g^{\varepsilon_{0}}=G^{T,\varepsilon_{0}}_{x}-g^{\varepsilon_{0}}+\int_{0}^{\varepsilon_{0}}p_{t}(x,\cdot)\mathop{}\\!\mathrm{d}t>G^{\varepsilon_{0}}_{x}\geq
0\quad\text{on }\Omega\,.$
Notice that Theorem 2.12 applied to the Poisson problem (2.59) yields that
$g^{\varepsilon}$ is a locally Lipschitz function. Hence $G_{x}$ is locally
Lipschitz away from the pole $x$ and $\left\lvert\nabla G_{x}\right\rvert\in
L^{1}_{{\rm loc}}(\Omega)$. Moreover, by the very construction of
$g^{\varepsilon}$, it holds that
$\bm{\Delta}G_{x}=-\delta_{x}\,.$
∎
###### Remark .
With an additional limiting argument (basically setting $\varepsilon_{0}=0$ in
the proof above) it is possible to obtain the Green function of the Laplacian
on $\Omega$ with pole at $x$ and homogeneous Dirichlet boundary conditions.
###### Proposition .
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space for some $N\geq 3$ and let $\Omega\subset X$ be an open domain
such that $\mathscr{H}^{N}(X\setminus\Omega)>0$. Let $x\in\Omega$ and consider
the positive Green function of the Laplacian with Dirichlet boundary
conditions on $\Omega$ and pole at $x$, constructed in subsection 2.7.
Then the following estimates hold: there exist constants $c_{x},C_{x}>0$ such
that
$\frac{c_{x}}{\mathsf{d}^{N-2}(x,y)}\leq
G_{x}(y)\leq\frac{C_{x}}{\mathsf{d}^{N-2}(x,y)}\,,$
for every $y\in B_{r}(x)$ such that $y\neq x$ (where $r>0$ is such that
$B_{r}(x)\subset\Omega$), and
$\left\lvert\nabla
G_{x}(y)\right\rvert\leq\frac{C_{x}}{\mathsf{d}^{N-1}(x,y)}\,,$
for a.e. $y\in B_{r}(x)$.
###### Proof.
The sought estimates follow from the estimates for the function $G^{T}_{x}$
and its gradient (see (2.56), (2.57) and (2.58)), combined with the local
uniform Lipschitz estimate for the solution $g^{\varepsilon}$ of the Dirichlet
problem considered in the proof of subsection 2.7, which follows in turn from
Theorem 2.12. ∎
Our next step is to use the local Green function in order to build a
replacement of the distance function with better regularity properties.
On the Euclidean space of dimension $N\geq 3$, the Green function of the
Laplacian is a negative power of the distance function. On a general
Riemannian manifold this is not the case of course, but still a suitable power
of the Green function of the Laplacian is comparable to the distance function
(under suitable curvature and volume growth assumptions). Moreover, the Green
function solves an equation, which sometimes makes it more suitable for
applications. We refer to [50, 88, 28] for previous instances of this idea in
Geometric Analysis.
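As a model case, on $\mathbb{R}^{N}$ with $N\geq 3$ the Green function with pole at $x$ is an explicit negative power of the distance (with the usual normalization, where $\omega_{N-1}$ denotes the surface measure of the unit sphere in $\mathbb{R}^{N}$), so the renormalization $G_{x}^{-1/(N-2)}$ recovers exactly a multiple of the distance function:

```latex
G_{x}(y)=\frac{1}{(N-2)\,\omega_{N-1}}\,|x-y|^{2-N}\,,
\qquad
G_{x}^{-\frac{1}{N-2}}(y)=\bigl((N-2)\,\omega_{N-1}\bigr)^{\frac{1}{N-2}}\,|x-y|\,.
```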
###### Proposition (The Green distance).
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ space for
some $N\geq 3$. Let $\Omega\subset X$ be an open and bounded domain with
$\mathfrak{m}(X\setminus\Omega)>0$ and $x\in\Omega$. Let us suppose, up to
scaling, that $B_{1}(x)\subset\Omega$ and let $G_{x}$ be the positive Green
function of the Laplacian with pole at $x$ and Dirichlet boundary conditions,
constructed in subsection 2.7.
Then, setting
$b_{x}(y):=G^{-\frac{1}{N-2}}_{x}(y)\,,$
the following hold:
* (i)
there exist constants $c_{x},C_{x}>0$ such that
(2.60) $c_{x}\mathsf{d}(x,y)\leq b_{x}(y)\leq
C_{x}\mathsf{d}(x,y)\,\quad\text{for any $y\in B_{1}(x)$}\,;$
* (ii)
there exists $C_{x}>0$ such that
(2.61) $\left\lvert\nabla b_{x}(y)\right\rvert\leq C_{x}\,\quad\text{for a.e.
$y\in B_{1}(x)$}\,;$
* (iii)
$b_{x}^{2}\in D(\Delta,B_{1}(x))$ and
(2.62) $\Delta b_{x}^{2}=2N\left\lvert\nabla b_{x}\right\rvert^{2}\,;$
###### Proof.
The estimates in items (i) and (ii) follow directly from the estimates for
the Green function $G_{x}$ of subsection 2.7.
In order to prove (2.62) we argue in two steps. First we prove that
$b_{x}^{2}\in D(\Delta,B_{1}(x)\setminus\\{x\\})$ and that (2.62) holds on
$B_{1}(x)\setminus\\{x\\}$, then we verify that $b_{x}^{2}$ is globally in the
domain of the Laplacian on $B_{1}(x)$ and that the pole gives no singular
contribution.
Let us point out that $G_{x}$ is harmonic outside the pole $x$. Given
this remark, it can be easily verified via the chain rule for the gradient and
the Leibniz formula for the Laplacian that $b_{x}^{2}\in
D(\Delta,B_{1}(x)\setminus\\{x\\})$ and that (2.62) holds on
$B_{1}(x)\setminus\\{x\\}$.
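For the reader's convenience, the computation can be sketched as follows. Writing $b_{x}=G_{x}^{\alpha}$ with $\alpha:=-1/(N-2)$ and using $\Delta G_{x}=0$ on $B_{1}(x)\setminus\{x\}$, the chain rule gives

```latex
\Delta b_{x}^{2}=\Delta G_{x}^{2\alpha}
=2\alpha(2\alpha-1)\,G_{x}^{2\alpha-2}\left\lvert\nabla G_{x}\right\rvert^{2}
=\frac{2N}{(N-2)^{2}}\,G_{x}^{2\alpha-2}\left\lvert\nabla G_{x}\right\rvert^{2}\,,
\qquad
\left\lvert\nabla b_{x}\right\rvert^{2}
=\alpha^{2}\,G_{x}^{2\alpha-2}\left\lvert\nabla G_{x}\right\rvert^{2}
=\frac{1}{(N-2)^{2}}\,G_{x}^{2\alpha-2}\left\lvert\nabla G_{x}\right\rvert^{2}\,,
```

so that $\Delta b_{x}^{2}=2N\left\lvert\nabla b_{x}\right\rvert^{2}$ on $B_{1}(x)\setminus\{x\}$, as claimed in (2.62).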
To conclude, we need to verify that $b_{x}^{2}$ belongs locally to the domain
of the Laplacian. This conclusion will be achieved through a standard cut-off
and limiting procedure.
We wish to prove that
(2.63) $\int_{B_{1}(x)}\nabla
b_{x}^{2}\cdot\nabla\varphi\,\mathop{}\\!\mathrm{d}\mathscr{H}^{N}=-2N\int_{B_{1}(x)}\varphi\left\lvert\nabla
b_{x}\right\rvert^{2}\mathop{}\\!\mathrm{d}\mathscr{H}^{N}\,,$
for any Lipschitz function $\varphi$ with compact support in $B_{1}(x)$.
We already argued that $b_{x}^{2}\in D(\Delta,B_{1}(x)\setminus\\{x\\})$,
hence (2.63) holds true as soon as $\varphi$ has compact support in
$B_{1}(x)\setminus\\{x\\}$.
Let us consider then radial Lipschitz cut-off functions $\eta_{\varepsilon}$,
for $0<\varepsilon<1$ such that $\eta_{\varepsilon}\equiv 1$ on
$B_{1}(x)\setminus B_{2\varepsilon}(x)$, $\eta_{\varepsilon}\equiv 0$ on
$B_{\varepsilon}(x)$ and $\left\lvert\nabla\eta_{\varepsilon}\right\rvert\leq
C/\varepsilon$. Then we can apply (2.63) to
$\varphi_{\varepsilon}:=\varphi\eta_{\varepsilon}$ for any $\varepsilon>0$ and
get
(2.64) $\displaystyle\int_{B_{1}(x)}$ $\displaystyle\eta_{\varepsilon}\nabla
b_{x}^{2}\cdot\nabla\varphi\,\mathop{}\\!\mathrm{d}\mathscr{H}^{N}+\int_{B_{1}(x)}\varphi\nabla\eta_{\varepsilon}\cdot\nabla
b_{x}^{2}\,\mathop{}\\!\mathrm{d}\mathscr{H}^{N}$ $\displaystyle=$
$\displaystyle\int_{B_{1}(x)}\nabla
b_{x}^{2}\cdot\nabla\varphi_{\varepsilon}\mathop{}\\!\mathrm{d}\mathscr{H}^{N}$
$\displaystyle=$
$\displaystyle-2N\int_{B_{1}(x)}\varphi\eta_{\varepsilon}\left\lvert\nabla
b_{x}\right\rvert^{2}\mathop{}\\!\mathrm{d}\mathscr{H}^{N}\,.$
The last term above converges to
$-2N\int_{B_{1}(x)}\varphi\left\lvert\nabla
b_{x}\right\rvert^{2}\,\mathop{}\\!\mathrm{d}\mathscr{H}^{N}$
as $\varepsilon\to 0$, by the dominated convergence theorem. For the same
reason, the first term on the left-hand side of (2.64) also converges to
$\int_{B_{1}(x)}\nabla
b_{x}^{2}\cdot\nabla\varphi\,\mathop{}\\!\mathrm{d}\mathscr{H}^{N}\,,$
as $\varepsilon\to 0$. Hence, to complete the proof of (2.63), it remains to
prove that the second term on the left-hand side of (2.64) converges to $0$ as
$\varepsilon\to 0$. To this aim, it is sufficient to observe that
$\displaystyle\left\lvert\int_{B_{1}(x)}\varphi\nabla\eta_{\varepsilon}\cdot\nabla
b_{x}^{2}\,\mathop{}\\!\mathrm{d}\mathscr{H}^{N}\right\rvert$
$\displaystyle\leq\int_{B_{2\varepsilon}(x)\setminus
B_{\varepsilon}(x)}\left\lvert\varphi\right\rvert\left\lvert\nabla\eta_{\varepsilon}\right\rvert\left\lvert\nabla
b_{x}^{2}\right\rvert\mathop{}\\!\mathrm{d}\mathscr{H}^{N}$
$\displaystyle\leq\frac{C\max_{B_{1}(x)}\left\lvert\varphi\right\rvert}{\varepsilon}\mathscr{H}^{N}(B_{2\varepsilon}(x)\setminus
B_{\varepsilon}(x))\,,$
which is easily seen to converge to $0$ as $\varepsilon\to 0$. ∎
###### Remark .
The main use of the Green function for the purposes of the present paper will
be the possibility (guaranteed by the construction of the function $b_{x}$
above) of considering locally a sufficiently regular function
$f:B_{r}(x)\to\mathbb{R}$ with the following properties:
* i)
it is non-negative;
* ii)
it vanishes only at $x$ and is strictly positive in a neighbourhood of $x$;
* iii)
its gradient also vanishes at $x$, at least in a weak sense;
* iv)
its Laplacian is non-negative, in a weak sense.
This function plays the role of a power of the distance function in the
development of a viscous theory of bounds for the Laplacian on
$\operatorname{RCD}$ metric measure spaces.
In the Euclidean setting, by considering powers of the distance function, it
is possible to work with smooth functions whose derivatives vanish to any
given order. In the synthetic framework this is of course too much to ask.
## 3\. The Laplacian on $\operatorname{RCD}(K,N)$ spaces
We are going to consider some new equivalences between different notions of
Laplacian and bounds for the Laplacian on an $\operatorname{RCD}(K,N)$ metric
measure space $(X,\mathsf{d},\mathscr{H}^{N})$. We will be guided by the
equivalences that hold in the Euclidean setting and on smooth Riemannian
manifolds. In particular we shall address bounds on the Laplacian:
* •
in the sense of distributions;
* •
in the viscous sense;
* •
in the sense of sub/super minimizers of Dirichlet type energies;
* •
in the sense of comparison with solutions of the Dirichlet problem;
* •
in the sense of pointwise behaviour of the heat flow.
Some of the equivalences have already appeared in the literature, even under
less restrictive assumptions on the metric measure spaces. The main
contributions here are in the direction of the viscous theory, for which the
only previous treatment we are aware of is [136], dealing with Alexandrov
spaces (and inspired in turn by the unpublished [118]), and of the pointwise
behaviour of the heat flow, a notion that seems to be new also in the smooth
setting.
We restrict the analysis to locally Lipschitz functions, in order to avoid
technicalities and because this class will be sufficiently rich for the
applications in later sections of the paper. We remark that more general
functions could likely be considered.
### 3.1. Notions of Laplacian bounds
We start with distributional Laplacian bounds, borrowing the definition from
[65].
###### Definition .
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and let $\Omega\subset X$ be an open domain. Let
$f:\Omega\to\mathbb{R}$ be a locally Lipschitz function and
$\eta\in\operatorname{C_{b}}(\Omega)$. Then we say that $\bm{\Delta}f\leq\eta$
in the sense of distributions if the following holds. For any non-negative
function $\varphi\in\operatorname{LIP}_{c}(\Omega)$,
$-\int_{\Omega}\nabla
f\cdot\nabla\varphi\mathop{}\\!\mathrm{d}\mathfrak{m}\leq\int_{\Omega}\varphi\eta\mathop{}\\!\mathrm{d}\mathfrak{m}\,.$
The following is a classical result, relying on the fact that a distribution
with a sign is represented by a measure, in great generality. We refer to [68,
65] for a proof.
###### Proposition .
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and let $\Omega\subset X$ be an open domain. Let moreover
$f:\Omega\to\mathbb{R}$ be a locally Lipschitz function and
$\eta\in\operatorname{C_{b}}(\Omega)$. Then $\bm{\Delta}f\leq\eta$ in the
sense of distributions if and only if there exists a locally finite measure
$\mu$ on $\Omega$ such that
(3.1) $-\int_{\Omega}\nabla
f\cdot\nabla\varphi\mathop{}\\!\mathrm{d}\mathfrak{m}=\int_{\Omega}\varphi\mathop{}\\!\mathrm{d}\mu\,,$
for any $\varphi\in\operatorname{LIP}_{c}(\Omega)$. Moreover, under these
assumptions, $\mu\leq\eta\mathfrak{m}$, $\mu$ is uniquely determined by (3.1),
and we shall denote it by $\bm{\Delta}f$.
Given a function $\eta\in\operatorname{C_{b}}(\Omega)$, we introduce the
energy
$E_{\eta}:\operatorname{LIP}(\Omega)\to\mathbb{R}\,,$
by
(3.2) $E_{\eta}(v):=\frac{1}{2}\int_{\Omega}\left\lvert\nabla
v\right\rvert^{2}\mathop{}\\!\mathrm{d}\mathfrak{m}+\int_{\Omega}v\eta\mathop{}\\!\mathrm{d}\mathfrak{m}\,.$
###### Definition .
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and let $\Omega\subset X$ be an open domain. Let
$f:\Omega\to\mathbb{R}$ be a locally Lipschitz function and
$\eta\in\operatorname{C_{b}}(\Omega)$. Let us consider the energy functional
$E_{\eta}:\operatorname{LIP}(\Omega)\to\mathbb{R}$ defined above. Then we say
that $f$ is a superminimizer of $E_{\eta}$ on $\Omega$ if
$E_{\eta}(f+\varphi)\geq E_{\eta}(f)\,,\quad\text{for any non-negative
function $\varphi\in\operatorname{LIP}_{c}(\Omega)$}\,.$
The following result comparing superminimizers with functions having Laplacian
bounded from above in the sense of distributions will be of some relevance for
our purposes. A version of this statement tailored for more general ambient
spaces (but restricted to the case of subharmonic/superharmonic functions)
appears for instance in [68, Theorem 4.1, Corollary 4.4]. The extension to
more general upper/lower bounds for the Laplacian requires only slight
modifications of the original argument, which we omit for brevity.
###### Proposition .
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and let $\Omega\subset X$ be an open domain. Let
$f:\Omega\to\mathbb{R}$ be a locally Lipschitz function and
$\eta\in\operatorname{C_{b}}(\Omega)$. Then $\bm{\Delta}f\leq\eta$ in the
sense of distributions if and only if $f$ is a superminimizer of the energy
$E_{\eta}$ on $\Omega$ according to subsection 3.1.
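One implication can be sketched directly from the definitions: if $f$ is a superminimizer of $E_{\eta}$, then for any non-negative $\varphi\in\operatorname{LIP}_{c}(\Omega)$ and $t>0$ the competitor $f+t\varphi$ is admissible, and expanding the energy gives

```latex
0\leq E_{\eta}(f+t\varphi)-E_{\eta}(f)
=t\left(\int_{\Omega}\nabla f\cdot\nabla\varphi\,\mathrm{d}\mathfrak{m}
+\int_{\Omega}\eta\,\varphi\,\mathrm{d}\mathfrak{m}\right)
+\frac{t^{2}}{2}\int_{\Omega}\left\lvert\nabla\varphi\right\rvert^{2}\,\mathrm{d}\mathfrak{m}\,;
```

dividing by $t$ and letting $t\downarrow 0$ yields $-\int_{\Omega}\nabla f\cdot\nabla\varphi\,\mathrm{d}\mathfrak{m}\leq\int_{\Omega}\eta\,\varphi\,\mathrm{d}\mathfrak{m}$, i.e. $\bm{\Delta}f\leq\eta$ in the sense of distributions.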
Various definitions of sub/superharmonic functions on metric measure spaces in
the sense of comparison with Dirichlet boundary value problems have appeared
in the last twenty years. Here we choose a slight modification of [25,
Definition 14.8] tailored to the purpose of studying locally Lipschitz
functions (and general Laplacian bounds).
###### Definition .
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and let $\Omega\subset X$ be an open domain. Let
$f:\Omega\to\mathbb{R}$ be a locally Lipschitz function and
$\eta\in\operatorname{C_{b}}(\Omega)$. We say that $f$ is a classical
supersolution of $\Delta f=\eta$ if the following holds: for any open domain
$\Omega^{\prime}\Subset\Omega$ and for any function $g\in
C(\overline{\Omega^{\prime}})$ such that $\Delta g=\eta$ in $\Omega^{\prime}$
and $g\leq f$ on $\partial\Omega^{\prime}$ it holds $g\leq f$ on
$\Omega^{\prime}$.
###### Remark .
If $f\in D(\Delta,\Omega)$ and $\Delta f=\eta$ on $\Omega$, then it is a
classical supersolution of $\Delta f=\eta$ according to subsection 3.1 above.
Indeed, for any test function $g$ as in the definition above, $\Delta f=\Delta
g=\eta$ on $\Omega^{\prime}$ and $g$ is continuous on $\Omega^{\prime}$ by
assumption. Moreover $f$ is continuous on $\Omega^{\prime}$, since it is
locally Lipschitz on $\Omega$ by Theorem 2.12. Therefore, letting $h:=f-g$,
$h$ is harmonic and continuous on $\Omega^{\prime}$ and $h\geq 0$ on
$\partial\Omega^{\prime}$.
We claim that $h\geq 0$ on $\Omega^{\prime}$. Suppose that this is not the
case, then $h$ admits a strictly negative minimum in the interior of
$\Omega^{\prime}$. Therefore, by the strong maximum principle Theorem 2.13
(i) applied to $-h$, it is constant and strictly negative in the connected
component of $\Omega^{\prime}$ where this minimum is achieved. This yields a
contradiction since $h\geq 0$ on $\partial\Omega^{\prime}$.
###### Remark .
By subsection 3.1 and thanks to the linearity of the Laplacian on
$\operatorname{RCD}(K,N)$ spaces, the extension of the results in [25] from
the case of sub/supersolutions of the equation $\Delta f=0$ to the general
Poisson problem $\Delta f=\eta$ is harmless. Indeed we can always subtract off
a solution of the Poisson problem and reduce to the harmonic case.
###### Remark .
In [25, Chapter 11] it is proved that subsection 3.1 is equivalent to another
(a priori stronger, since we test with more functions) definition of
supersolution of the problem $\Delta f=\eta$.
The outcome is that $f$ (verifying the usual assumptions) is a classical
supersolution of $\Delta f=\eta$ on $\Omega$ if and only if for any
$\Omega^{\prime}\Subset\Omega$ and for any
$g\in\operatorname{LIP}(\partial\Omega^{\prime})$ such that $g\leq f$ on
$\partial\Omega^{\prime}$, it holds that $H_{\eta}g\leq f$ on
$\Omega^{\prime}$. Here $H_{\eta}g$ is the solution of the Poisson problem
with Dirichlet boundary conditions
$\Delta H_{\eta}g=\eta\,,\quad H_{\eta}g-\tilde{g}\in
H^{1,2}_{0}(\Omega^{\prime})\,,$
with $\tilde{g}$ any global extension of $g$.
Let us quote a fundamental result connecting (classical) supersolutions of the
equation $\Delta f=\eta$ with superminimizers. Under our assumptions, it is a
direct corollary of [25, Theorem 9.24] (see also [93]), where equivalence of
supersolutions with superminimizers of the energy is addressed, and subsection
3.1, that gives the equivalence between the superminimizing property and
bounds for the Laplacian in the sense of distributions.
###### Theorem 3.1.
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and let $\Omega\subset X$ be an open and bounded domain. Let
$f:\Omega\to\mathbb{R}$ be locally Lipschitz and bounded and let
$\eta\in\operatorname{C_{b}}(\Omega)$. Then $f$ is a classical supersolution
of $\Delta f=\eta$ in the sense of subsection 3.1 if and only if
$\bm{\Delta}f\leq\eta$ in the sense of subsection 3.1.
Next we propose a definition of sub/supersolutions of the equation $\Delta
f=\eta$ in the viscous sense tailored to the setting of
$\operatorname{RCD}(K,N)$ metric measure spaces.
The viscous theory for the Laplacian allows for several simplifications with
respect to the general viscosity theory of PDEs in the Euclidean case.
When considering general smooth Riemannian manifolds, there are intrinsic
definitions of Laplacian bounds in the viscosity sense (see for instance [134]
and the more recent [108]) that require essentially no modification with
respect to the classical Euclidean notion.
In the non-smooth framework, the development of a viscous theory of Laplacian
bounds presents some further challenges, the first being the necessity to
single out the right class of smooth test functions to use as comparison
functions.
###### Definition (Viscous bounds for the Laplacian).
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and let $\Omega\subset X$ be an open and bounded domain. Let
$f:\Omega\to\mathbb{R}$ be locally Lipschitz and
$\eta\in\operatorname{C_{b}}(\Omega)$. We say that $\Delta f\leq\eta$ in the
viscous sense in $\Omega$ if the following holds. For any
$\Omega^{\prime}\Subset\Omega$ and for any test function
$\varphi:\Omega^{\prime}\to\mathbb{R}$ such that
* (i)
$\varphi\in D(\Delta,\Omega^{\prime})$ and $\Delta\varphi$ is continuous on
$\Omega^{\prime}$;
* (ii)
for some $x\in\Omega^{\prime}$ it holds $\varphi(x)=f(x)$ and $\varphi(y)\leq
f(y)$ for any $y\in\Omega^{\prime}$, $y\neq x$;
it holds
$\Delta\varphi(x)\leq\eta(x)\,.$
###### Remark .
In the classical definitions of viscosity supersolutions for PDEs on the
Euclidean space or on Riemannian manifolds, test functions are required to be
$C^{2}$. Therefore the notion considered above is a priori stronger than the
classical one on smooth Riemannian manifolds, since it is well known that
there are functions with continuous Laplacian that are not $C^{2}$.
Nevertheless, it follows from the equivalence Theorem 3.3 that this notion is
equivalent to the classical one.
We introduce yet another definition of supersolution of the equation $\Delta
f=\eta$ based on the pointwise behaviour of the heat flow.
###### Definition (Supersolution in the heat flow sense).
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and let $\Omega\subset X$ be an open and bounded domain. Let
$f:\Omega\to\mathbb{R}$ be a Lipschitz function and let
$\eta\in\operatorname{C_{b}}(\Omega)$. We say that $\Delta f\leq\eta$ on
$\Omega$ in the heat flow sense if the following holds. For any
$\Omega^{\prime}\Subset\Omega$ and any function $\tilde{f}:X\to\mathbb{R}$
extending $f$ from $\Omega^{\prime}$ to $X$ and with polynomial growth, we
have
$\limsup_{t\downarrow
0}\frac{P_{t}\tilde{f}(x)-\tilde{f}(x)}{t}\leq\eta(x)\,,\quad\text{for any
$x\in\Omega^{\prime}$}\,.$
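To see that the definition above is consistent with classical bounds, note that if $\tilde{f}$ is in the domain of the Laplacian with $\Delta\tilde{f}$ continuous and bounded near $x$, then, at least formally, the fundamental theorem of calculus along the heat flow gives

```latex
P_{t}\tilde{f}(x)-\tilde{f}(x)=\int_{0}^{t}P_{s}\Delta\tilde{f}(x)\,\mathrm{d}s\,,
\qquad\text{hence}\qquad
\lim_{t\downarrow 0}\frac{P_{t}\tilde{f}(x)-\tilde{f}(x)}{t}=\Delta\tilde{f}(x)\,,
```

since $P_{s}\Delta\tilde{f}(x)\to\Delta\tilde{f}(x)$ as $s\downarrow 0$ at continuity points of $\Delta\tilde{f}$.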
###### Remark .
The notion in subsection 3.1 is independent of the choice of the global
extension of $f$ to $X$ with polynomial growth, and is therefore well posed.
This is a consequence of subsection 2.5, applied to the difference of two
global extensions of $f$ with polynomial growth.
###### Remark .
The role of the heat flow in the treatment of weak notions of Laplacian bounds
on smooth Riemannian manifolds can be traced back at least to [73], where the
original idea is attributed to Malliavin. Notions of Laplacian and Laplacian
bounds related to the asymptotic behaviour of the heat flow appear also in
[65, Section 4] and [75]. The novelty of subsection 3.1 is the absence of
integrations against test functions and that the bound is required to hold
pointwise.
###### Remark .
We can consider counterparts of all the notions above for lower bounds on
the Laplacian of the type $\Delta f\geq\eta$; the only difference is that all
the signs in the inequalities need to be switched.
###### Remark .
Since we chose to adopt the same notation $\Delta f\leq\eta$ for most of the
notions of Laplacian bounds that we have introduced, we shall usually clarify
in which sense the bound has to be intended, whenever there is risk of
confusion.
### 3.2. The main equivalence results
The aim of this subsection is to establish the equivalence of upper bounds
for the Laplacian in the viscous sense and in the sense of distributions. This
will also allow us to prove the equivalence with the less classical notion of
Laplacian bounds through the pointwise behaviour of the heat flow introduced
in subsection 3.1.
We will mostly consider the case of an $\operatorname{RCD}(K,N)$ metric
measure space $(X,\mathsf{d},\mathscr{H}^{N})$ and limit our analysis to
functions that are locally Lipschitz continuous. We shall give the proofs
under the additional assumption that $N\geq 3$. The case $N=1$ is elementary,
due to the classification of non collapsed $\operatorname{RCD}(K,1)$ metric
measure spaces, see [95]. The case $N=2$ could be treated with arguments
analogous to those considered here, with the slight modifications due to the
different behaviour of the Green function. Notice also that the theory of non
collapsed $\operatorname{RCD}(K,2)$ metric measure spaces is very well
understood, thanks to [103], where it is shown that they are Alexandrov spaces
with curvature bounded from below.
###### Remark .
Let us remark that the case of general $\operatorname{RCD}(K,N)$ metric
measure spaces $(X,\mathsf{d},\mathfrak{m})$ could be handled with similar
arguments, after imposing some mild lower bounds on the measure growth of
balls, necessary in order to have a good definition of local Green’s
functions.
A fundamental tool in order to establish the equivalence between viscous and
distributional bounds will be the following maximum principle, which follows
from [137]. It is reminiscent of the Omori-Yau maximum principle and of
Jensen’s maximum principle in the viscous theory of PDEs.
Below, given a measure $\bm{\mu}$ on an $\operatorname{RCD}(K,N)$ metric
measure space $(X,\mathsf{d},\mathfrak{m})$ we shall denote by $\bm{\mu}^{ac}$
its absolutely continuous part w.r.t. $\mathfrak{m}$ and by $\mu^{ac}$ its
density, i.e. $\bm{\mu}^{ac}=\mu^{ac}\,\mathfrak{m}$.
###### Theorem 3.2.
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space. Let $\Omega\subset X$ be an open and bounded domain. Let
$f\in\operatorname{LIP}(\Omega)$ be such that $\bm{\Delta}f$ is a signed Radon
measure with non-negative singular part. Suppose that $f$ attains a locally
strict maximum in $\Omega$. Then there exists a sequence of points $(x_{n})$
that are approximate continuity points of $\Delta^{ac}f$ and such that
$f(x_{n})\geq\sup_{\Omega}f-\frac{1}{n}\,,\;\;\;\;\;\Delta^{ac}f(x_{n})\leq\frac{1}{n}\,.$
In particular, if $\bar{x}\in\Omega$ is a strict maximum point of $f$ in
$\Omega$, then there exists a sequence $(x_{n})$ of approximate continuity
points for $\Delta^{ac}f$ such that
$x_{n}\to\bar{x}\,,\;\;\;\;\Delta^{ac}f(x_{n})\to 0\,.$
More strongly, for any $\varepsilon>0$ it holds that
$\mathfrak{m}\left(\set{x\in
B_{\varepsilon}(\bar{x})\,:\;\;\Delta^{ac}f(x)\leq\varepsilon}\right)>0\,.$
###### Proof.
The proof follows from the more general statement of [137, Theorem 1.3]. The
conclusion that the points $x_{n}$ can be chosen to be converging to $\bar{x}$
follows from the fact that $\bar{x}$ is assumed to be the unique strict
maximum in a neighbourhood of $\bar{x}$ in $\Omega$, i.e., there exists a
neighbourhood $U_{\bar{x}}\ni\bar{x}$ such that $f(y)<f(\bar{x})$ for any
$y\in U_{\bar{x}}$ with $y\neq\bar{x}$. ∎
###### Remark .
A dual statement holds when dealing with functions whose distributional
Laplacian is a signed Radon measure with non-positive singular part and local
minima instead of local maxima.
One of the steps towards a viscosity theory is the comparison between
classical bounds for the Laplacian and bounds in the viscous sense for
sufficiently smooth functions.
###### Proposition (Classical vs viscous for functions with continuous
Laplacian).
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space. Let us consider a function $f\in D(\Delta,\Omega)$ and assume
that $\Delta f$ has a continuous representative. Let
$\eta:\Omega\to\mathbb{R}$ be a continuous function. Then $\Delta f\leq\eta$
pointwise if and only if $\Delta f\leq\eta$ in the viscous sense on $\Omega$.
###### Proof.
Let us suppose that $\Delta f\leq\eta$ in the viscous sense. We wish to prove
that $\Delta f\leq\eta$ pointwise. To this aim, it is enough to observe that
we can take $f$ as a test function in the definition of Laplacian bound in the
viscous sense. This directly yields that $\Delta f\leq\eta$ pointwise.
Let us prove conversely that if $\Delta f\leq\eta$ pointwise, then $\Delta
f\leq\eta$ in the viscous sense. To this aim, fix $x\in\Omega$ and
$\Omega^{\prime}\Subset\Omega$. Let $\varphi:\Omega^{\prime}\to\mathbb{R}$ be
such that $\varphi\leq f$ on $\Omega^{\prime}$, $\varphi(x)=f(x)$ and
$\varphi$ has continuous Laplacian on $\Omega^{\prime}$. We wish to prove that
$\Delta\varphi(x)\leq\eta(x)$. Set $\psi:=f-\varphi$.
Without loss of generality we can assume $\Omega^{\prime}$ to be small enough
in order for the Green type distance $b_{x}$ to be well defined with good
properties on $\Omega^{\prime}$, as in subsection 2.7. Set
$\tilde{\psi}:=\psi+b_{x}^{4}$. Then $\tilde{\psi}$ has a strict local minimum
at $x$. Observe also that $\tilde{\psi}$ is locally Lipschitz. Hence, by
Theorem 3.2 (see also subsection 3.2), we can find a sequence of points
$(x_{n})$ converging to $x$ and such that
(3.3) $\liminf_{n\to\infty}\Delta\tilde{\psi}(x_{n})\geq 0\,.$
By the properties of the auxiliary function $b_{x}$, we infer that
(3.4) $\liminf_{n\to\infty}\Delta\psi(x_{n})\geq 0\,.$
Indeed
(3.5) $\Delta b_{x}^{4}=\Delta(b^{2}_{x})^{2}=2\left\lvert\nabla
b^{2}_{x}\right\rvert^{2}+4Nb_{x}^{2}\left\lvert\nabla
b_{x}\right\rvert^{2}=4(N+2)b_{x}^{2}\left\lvert\nabla
b_{x}\right\rvert^{2}\,,$
where we rely on the identity $\Delta b_{x}^{2}=2N\left\lvert\nabla
b_{x}\right\rvert^{2}$ obtained in subsection 2.7. Then (3.4) follows from
(3.3), via (3.5) and relying on the two sided estimates (2.60) for $b_{x}$ and
on the gradient estimate (2.61).
Hence
$\liminf_{n\to\infty}(\Delta f(x_{n})-\Delta\varphi(x_{n}))\geq 0\,.$
Since $\Delta f\leq\eta$ pointwise and $\eta$ is continuous, we infer
$\limsup_{n\to\infty}\Delta\varphi(x_{n})\leq\liminf_{n\to\infty}\eta(x_{n})=\eta(x)\,.$
Hence $\Delta\varphi(x)\leq\eta(x)$ and we can conclude that $\Delta
f\leq\eta$ in the viscous sense, as claimed.
∎
###### Remark .
An easy consequence of the existence of solutions to the Dirichlet problem
Theorem 2.13 and of the linearity of the Laplacian is now the following: given
a continuous function $\eta$ and a function $u$ with continuous Laplacian, it
holds that $\Delta u\leq\eta$ in the viscous sense if and only if, denoting by
$v_{\eta}$ a local solution of $\Delta v_{\eta}=\eta$, it holds that
$\Delta(u-v_{\eta})\leq 0$ in the viscous sense.
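The equivalence claimed in the remark can be verified directly at the level of test functions; the following display is a sketch of the computation, with $\varphi$ a test function touching $u$ from below at $x$:

```latex
% If \varphi \leq u on \Omega', \varphi(x) = u(x) and \varphi has continuous
% Laplacian, then \varphi - v_\eta is an admissible test function for
% u - v_\eta at x, and by linearity of the Laplacian
\Delta(\varphi-v_{\eta})(x)=\Delta\varphi(x)-\eta(x)\,,
% so that \Delta\varphi(x) \leq \eta(x) holds if and only if
% \Delta(\varphi-v_\eta)(x) \leq 0.
```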
###### Proposition .
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space. Assume that $f:\Omega\to\mathbb{R}$ is a locally Lipschitz
function and that $\eta:\Omega\to\mathbb{R}$ is a continuous function. If
$\bm{\Delta}f\leq\eta$ in the sense of distributions, then $\Delta f\leq\eta$
in the viscous sense.
###### Proof.
If $\bm{\Delta}f\leq\eta$ in the sense of distributions, then $\bm{\Delta}f$
is a signed Radon measure whose singular part is non-positive. Moreover, for
any Lebesgue point $x\in\Omega$ of $\bm{\Delta}^{\mathrm{ac}}f$, it holds
$\Delta^{\mathrm{ac}}f(x)\leq\eta(x)\,.$
This is a direct consequence of the observation that
$\bm{\Delta}^{\mathrm{ac}}f\leq\eta$ and of the very definition of Lebesgue
point.
The proof now follows from the same argument used in the proof of subsection
3.2 with the only adjustment that we have to consider Lebesgue points
$(x_{n})$ of the absolutely continuous part of the Laplacian in place of
general points and $\Delta^{\mathrm{ac}}$ in place of the pointwise defined
Laplacian $\Delta$. ∎
###### Lemma (Maximum principle for viscosity sub/super solutions).
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space for some $K\in\mathbb{R}$ and $1\leq N<\infty$. Let
$\Omega\subset X$ be an open and bounded domain such that there exists
$\Omega\Subset\tilde{\Omega}$ with $\mathfrak{m}(X\setminus\tilde{\Omega})>0$.
Let moreover $f:\Omega\to\mathbb{R}$ be a Lipschitz function such that $\Delta
f\leq 0$ in the viscous sense. Then
$\min_{x\in\Omega}f(x)=\min_{x\in\partial\Omega}f(x)\,.$
###### Proof.
Let us suppose by contradiction that
$\min_{x\in\Omega}f(x)<\min_{x\in\partial\Omega}f(x)\,.$
Then the minimum on the left-hand side is attained at an interior point
$x_{0}\in\Omega$. In particular
(3.6) $\min_{x\in\partial\Omega}f(x)>f(x_{0})\,.$
Consider a solution of the Poisson problem $\Delta v=1$ on $\Omega^{\prime}$
such that $v\geq 0$ on $\Omega$ and
$M:=\max_{\partial\Omega}v\geq\min_{\partial\Omega}v=:m>0\,.$
This function can be obtained with an additive perturbation from any solution
of $\Delta v=1$ on $\Omega^{\prime}$, by the local Lipschitz regularity
Theorem 2.12.
We claim that, for $\varepsilon>0$ sufficiently small, also
$f_{\varepsilon}(x):=f(x)-\varepsilon v(x)$
attains a local minimum at an interior point in $\Omega$.
Let us suppose by contradiction that this is not the case. Then, for any
$\varepsilon>0$, the global minimum of $f_{\varepsilon}$ on $\bar{\Omega}$ is
attained on $\partial\Omega$. In particular there exists
$x_{\varepsilon}\in\partial\Omega$ such that
$f(x_{\varepsilon})-\varepsilon M\leq f(x_{\varepsilon})-\varepsilon
v(x_{\varepsilon})=f_{\varepsilon}(x_{\varepsilon})\leq
f_{\varepsilon}(x_{0})\leq f(x_{0})\,.$
Hence
$\min_{x\in\partial\Omega}f(x)-f(x_{0})\leq f(x_{\varepsilon})-f(x_{0})\leq
M\varepsilon\,,\quad\text{for any $\varepsilon>0$}\,,$
which yields a contradiction with (3.6) as soon as $\varepsilon$ is
sufficiently small.
Let now $\varepsilon>0$ be small enough to get that
$f_{\varepsilon}=f-\varepsilon v$ has a local minimum $c\in\mathbb{R}$ at
$\bar{x}\in\Omega$. Note that, by assumption, the function $g:=f-c$ has
$\Delta g\leq 0$ in the viscous sense. Using $\varepsilon v$ as a test
function in the definition of the bound $\Delta g\leq 0$ in viscous sense, we
infer
$\Delta(\varepsilon v)(\bar{x})\leq 0\,,$
a contradiction since $\Delta v=1$ on $\Omega$. ∎
###### Theorem 3.3.
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space. Let $\Omega\subset X$ be an open and bounded domain,
$f:\Omega\to\mathbb{R}$ be a Lipschitz function and $\eta:\Omega\to\mathbb{R}$
be continuous. Then $\bm{\Delta}f\leq\eta$ in the sense of distributions if
and only if $\Delta f\leq\eta$ in the viscous sense.
###### Proof.
We already proved in subsection 3.2 that distributional bounds on the
Laplacian imply viscous bounds, so we are left to prove the converse
implication.
We claim that if $\Delta f\leq\eta$ in the viscous sense, then $f$ is a
classical supersolution to $\Delta f=\eta$ in the sense of subsection 3.1.
This is a consequence of subsection 3.2. Indeed, let us consider any open
subdomain $\Omega^{\prime}\Subset\Omega$ and any function $g\in
C(\overline{\Omega^{\prime}})$ such that $\Delta g=\eta$ on $\Omega^{\prime}$
and $g\leq f$ on $\partial\Omega^{\prime}$.
Observe that $h:=f-g$ is continuous on $\overline{\Omega^{\prime}}$ and
verifies $\Delta h\leq 0$ in the viscous sense on $\Omega^{\prime}$, since
$\Delta f\leq\eta$ in the viscous sense and $\Delta g=\eta$. Therefore we can
apply the maximum principle of subsection 3.2 and infer that
$\min_{x\in\Omega^{\prime}}h(x)=\min_{x\in\partial\Omega^{\prime}}h(x)\geq
0\,,$
where the last inequality follows from $g\leq f$ on
$\partial\Omega^{\prime}$. It follows that $f\geq g$ on $\Omega^{\prime}$,
hence $f$ is a classical supersolution of $\Delta f=\eta$.
The validity of the bound $\bm{\Delta}f\leq\eta$ in the sense of distributions
follows then from Theorem 3.1. ∎
The following is a counterpart, tailored to our purposes and under simplified
assumptions, of the classical fact that the infimum of a family of viscosity
supersolutions to a given equation is still a supersolution. Notice that the
viscous approach fits particularly well with the stability issue for Laplacian
bounds under infima. This property seems to be known to experts but we are not
aware of any reference.
###### Proposition .
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space. Let $\Omega\subset X$ be an open domain and let
$f:\Omega\to\mathbb{R}$ be continuous. Let $\mathcal{F}$ be a family of
uniformly Lipschitz functions $u:\Omega\to\mathbb{R}$ such that
$\Delta u\leq f\quad\text{in the viscous sense on $\Omega$}.$
Let $v:\Omega\to\mathbb{R}\cup\\{-\infty\\}$ be defined by
$v(x):=\inf\\{u(x)\,:\,u\in\mathcal{F}\\}\,.$
Assume there exists a point $x_{0}\in\Omega$ such that $v(x_{0})>-\infty$.
Then
$\Delta v\leq f\quad\text{in the viscous sense on $\Omega$}.$
###### Proof.
Let us preliminarily point out that, if $v(x_{0})>-\infty$, then
$v:\Omega\to\mathbb{R}$ and, by the uniform Lipschitz assumption on the family
$\mathcal{F}$, $v$ is Lipschitz on $\Omega$.
We wish to verify that $\Delta v\leq f$ in the viscous sense. To this aim, let
$\Omega^{\prime}\Subset\Omega$, $x\in\Omega^{\prime}$ and
$\varphi:\Omega^{\prime}\to\mathbb{R}$ be such that $\varphi\leq v$ on
$\Omega^{\prime}$, $\varphi(x)=v(x)$ and $\varphi$ has continuous Laplacian on
$\Omega^{\prime}$.
Let us suppose by contradiction that $\Delta\varphi(x)>f(x)$. Then there exist
$\varepsilon>0$ and a neighbourhood $U_{x}\ni x$ such that
$\Delta\varphi>f+\varepsilon$ on $U_{x}$, by continuity of $\Delta\varphi$ and
$f$.
Let $b_{x}$ be the Green-type distance of subsection 2.7, and recall the
expression (3.5) of $\Delta b_{x}^{4}$. Using the two sided estimates (2.60)
for $b_{x}$ and the gradient estimate (2.61), we can find
$\varepsilon^{\prime}>0$ small enough such that, setting
$\varphi_{\varepsilon^{\prime}}:=\varphi-\varepsilon^{\prime}b_{x}^{4}$, it
holds $\Delta\varphi_{\varepsilon^{\prime}}>f+\varepsilon^{\prime\prime}$ on
$U_{x}$, for some $\varepsilon^{\prime\prime}>0$.
Observe that $v-\varphi_{\varepsilon^{\prime}}$ is non-negative and, thanks to
the perturbation, it has a strict minimum at $x$. Let us consider now
$u_{h}\in\mathcal{F}$ such that
$v(x)=\lim_{h\to\infty}u_{h}(x)\,.$
Let $\tilde{u}_{h}:=u_{h}-\varphi_{\varepsilon^{\prime}}$. Let
$y_{h}\in\overline{U_{x}}$ be a minimum point of $\tilde{u}_{h}$ on
$\overline{U_{x}}$. Then it is easy to prove that $y_{h}\to x$ as
$h\to\infty$, since $v-\varphi_{\varepsilon^{\prime}}$ has its unique minimum
on $\overline{U_{x}}$ at $x$.
It is now sufficient to observe that $\Delta u_{h}\leq f$ in the viscous sense
and use that
$\Delta\varphi_{\varepsilon^{\prime}}>f+\varepsilon^{\prime\prime}$ in the
viscous and a.e. sense. Hence
(3.7) $\Delta\tilde{u}_{h}<-\varepsilon^{\prime\prime}\,\quad\text{in the
viscous sense on $U_{x}$}.$
From the proof of Theorem 3.3, we infer that $\tilde{u}_{h}$ is a classical
supersolution of $\Delta w=0$, i.e. it is superharmonic in the classical
sense.
Since $\tilde{u}_{h}$ is achieving its minimum at an interior point of
$U_{x}$, by strong maximum principle for superharmonic functions (see for
instance [25, Theorem 8.13]), it is constant on $U_{x}$. But then
$\Delta\tilde{u}_{h}=0$ on $U_{x}$, contradicting (3.7). ∎
The last part of this subsection is dedicated to the relationship between
subsection 3.1 and the other notions of Laplacian bounds that we have
introduced and investigated so far.
For a sufficiently smooth function $f$ on the Euclidean space or on a
Riemannian manifold, the Laplacian $\Delta f(x)$ determines the first non
trivial term in the asymptotic expansion of the average of $f$ on balls
centred at $x$:
(3.8)
$\fint_{B_{r}(x)}f(y)\mathop{}\\!\mathrm{d}\mathscr{H}^{n}(y)=f(x)+C_{n}\Delta
f(x)r^{2}+o(r^{2})\,,\quad\text{as $r\to 0$}\,,$
where $C_{n}$ is a constant depending only on the ambient dimension. A
classical result is the fact that a continuous function
$u:\Omega\to\mathbb{R}$ on a Euclidean domain is harmonic (in the classical
sense) if and only if
$\lim_{r\to
0}\fint_{B_{r}(x)}(u(y)-u(x))\mathop{}\\!\mathrm{d}\mathscr{L}^{n}(y)=0\,,\quad\text{for
any $x\in\Omega$}\,.$
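In the Euclidean model case the constant $C_{n}$ can be computed explicitly by a Taylor expansion; the following sketch (a classical computation, recorded only for orientation) gives $C_{n}=1/(2(n+2))$ for ball averages:

```latex
% Expand f around x; odd-order terms average to zero on balls, and
% \fint_{B_r(x)} (y-x)_i (y-x)_j \,\mathrm{d}y = \delta_{ij}\, r^2/(n+2), so
\fint_{B_{r}(x)}f(y)\,\mathrm{d}\mathscr{L}^{n}(y)
  = f(x)+\frac{1}{2}\sum_{i=1}^{n}\partial^{2}_{ii}f(x)\,\frac{r^{2}}{n+2}+o(r^{2})
  = f(x)+\frac{r^{2}}{2(n+2)}\Delta f(x)+o(r^{2})\,.
% Hence C_n = 1/(2(n+2)); for averages on spheres the analogous constant is
% 1/(2n), consistently with the expansion (4.2) used later.
```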
Although it is a really powerful tool, at first sight the asymptotic
expansion above seems to require smoothness of the ambient space for its
validity. Moreover, it is easy to check that it fails in general on smooth
weighted Riemannian manifolds.
There have been recent attempts at understanding the connections between this
approach through asymptotic mean values and the distributional notion of
Laplacian on metric measure spaces. Let us mention in particular [136, Section
4] where, relying on some ideas originally due to [118, 119], it is shown that
the asymptotics of the average on balls determine the Laplacian of a
semiconcave function at sufficiently regular points on Alexandrov spaces.
Here we consider an alternative approach: instead of looking at the behaviour
of averages on balls, we look at the pointwise behaviour of the heat flow.
Basically, we consider weighted averages instead of averages, the weight being
given by the heat kernel.
As we shall see, this turns out to be a more intrinsic approach (the
infinitesimal generator of the heat semigroup is the Laplacian) and it allows
for a counterpart of (3.8) better suited for the non smooth framework.
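The heat-flow counterpart of (3.8) that we have in mind reads, informally, as follows; this display is only a heuristic guide for the rigorous statements below, valid for sufficiently regular $f$:

```latex
% Heat-kernel weighted version of (3.8): since the Laplacian is the
% infinitesimal generator of the heat semigroup,
P_{t}f(x)=\int_{X}f(y)\,p_{t}(x,y)\,\mathrm{d}\mathfrak{m}(y)
  =f(x)+t\,\Delta f(x)+o(t)\,,\quad\text{as }t\downarrow 0\,,
% which replaces the average on the ball B_r(x) by the average weighted with
% the heat kernel p_t(x,\cdot), and the scale r^2 by t.
```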
###### Proposition .
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space. Let $\Omega\subset X$ be an open subset,
$f:\Omega\to\mathbb{R}$ be Lipschitz and $\eta:\Omega\to\mathbb{R}$ be
continuous. Assume that for some global extension $\tilde{f}:X\to\mathbb{R}$
of $f$ with polynomial growth, it holds
(3.9) $\limsup_{t\downarrow
0}\frac{P_{t}\tilde{f}(x)-\tilde{f}(x)}{t}\leq\eta(x)\,,\quad\text{ for any
}x\in\Omega.$
Then $\Delta f\leq\eta$ on $\Omega$ in the viscous sense.
###### Proof.
We need to verify that, for any open subdomain $\Omega^{\prime}\Subset\Omega$
and for any function $\varphi:\Omega^{\prime}\to\mathbb{R}$ with continuous
Laplacian on $\Omega^{\prime}$ satisfying $\varphi\leq f$ on $\Omega^{\prime}$
and $\varphi(x)=f(x)$ for some $x\in\Omega^{\prime}$, the following estimate
holds:
$\Delta\varphi(x)\leq\eta(x)\,.$
Let us first assume that $\varphi$ extends to a global function
$\tilde{\varphi}:X\to\mathbb{R}$ with polynomial growth and such that
$\tilde{\varphi}\leq\tilde{f}$. Then
$\Delta\varphi(x)=\Delta\tilde{\varphi}(x)=\lim_{t\downarrow
0}\frac{P_{t}\tilde{\varphi}(x)-\tilde{\varphi}(x)}{t}\leq\limsup_{t\downarrow
0}\frac{P_{t}\tilde{f}(x)-\tilde{f}(x)}{t}\leq\eta(x)\,,$
where the first equality follows from the locality of the Laplacian, the
second one from subsection 2.5, the first inequality follows from the
comparison principle for the heat flow and the last one from (3.9).
To complete the proof, we need to extend locally defined test functions for
the Laplacian bound in viscous sense to globally defined functions, keeping
the comparison.
To this aim, observe that we can extend any test function for the Laplacian
bound in viscous sense to a global function $\hat{\varphi}$ by multiplying it
with a cut-off function with good estimates which is constantly $1$ on a small
ball centred at $x$, see subsection 2.5. In this way, we lose the comparison
with $f$ but we obtain a globally defined function which coincides with
$\varphi$ in a neighbourhood of $x$. Then, setting
$\tilde{\varphi}:=\min\\{\tilde{f},\hat{\varphi}\\}$, we can easily verify
that $\tilde{\varphi}$ has polynomial growth, and
$\tilde{\varphi}\leq\tilde{f}$ globally. Moreover, since
$\tilde{\varphi}\equiv\varphi$ in a neighbourhood of $x$, still
$\tilde{\varphi}(x)=f(x)$ and $\tilde{\varphi}$ has continuous Laplacian in a
neighbourhood of $x$. ∎
###### Proposition .
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space. Let $\Omega\subset X$ be an open domain and let
$f:\Omega\to\mathbb{R}$ be a locally Lipschitz function. Let
$\eta\in\operatorname{C_{b}}(\Omega)$ and assume that
$\bm{\Delta}f\leq\eta\,,\quad\text{in the distributional sense on $\Omega$.}$
Then, for any $\Omega^{\prime}\Subset\Omega$ and for any function
$\tilde{f}:X\to\mathbb{R}$ with polynomial growth and such that
$\tilde{f}\equiv f$ on $\Omega^{\prime}$, it holds
$\limsup_{t\downarrow
0}\frac{P_{t}\tilde{f}(x)-\tilde{f}(x)}{t}\leq\eta(x)\,,\quad\text{for any
$x\in\Omega^{\prime}$}\,.$
###### Proof.
We divide the proof into three steps: first we deal with the case of
superharmonic functions. Then we deal with the case of solutions of Poisson
equations with continuous right hand sides. To conclude we combine the
previous two steps to treat the general case.
Step 1. Let us assume that $\eta\equiv 0$ on $\Omega$. Thanks to subsection
2.5 we can choose a good cut-off function $\varphi:X\to\mathbb{R}$ supported
on $B_{2r}(x)$ and such that $\varphi\equiv 1$ on $B_{r}(x)$. Computing the
Laplacian of $f\varphi$ by the standard calculus rules, we infer that
(3.10) $\bm{\Delta}(f\varphi)=\varphi\bm{\Delta}f+2\nabla\varphi\cdot\nabla
f+f\Delta\varphi\,.$
By subsection 2.5, it is sufficient to prove that
$\limsup_{t\downarrow 0}\frac{P_{t}(f\varphi)(x)-(f\varphi)(x)}{t}\leq 0\,.$
Moreover, setting $\bm{\mu}:=\bm{\Delta}(f\varphi)$, we have that $\bm{\mu}$
is the sum of a bounded function $\psi:=2\nabla\varphi\cdot\nabla
f+f\Delta\varphi$ supported on $B_{2r}(x)\setminus B_{r}(x)$ and a non-
positive measure $\bm{\nu}:=\varphi\bm{\Delta}f$. We claim that
(3.11)
$P_{t}(f\varphi)(x)-(f\varphi)(x)\leq\int_{0}^{t}P_{s}\psi(x)\mathop{}\\!\mathrm{d}s\,.$
In order to establish the claim we borrow the argument from the proof of [67,
Lemma 3.2]. We set
(3.12) $\tilde{{\rm Test}}^{\infty}:=\\{\eta\in L^{1}(X)\cap{\rm
Test}^{\infty}(X)\,:\,\left\lvert\nabla\eta\right\rvert,\Delta\eta\in
L^{1}(X)\\}\,,$
and let $\tilde{{\rm Test}}^{\infty}_{+}$ be the cone of nonnegative functions
in $\tilde{{\rm Test}}^{\infty}$. We recall that for any nonnegative function
$\eta\in L^{1}\cap L^{\infty}$ there exists a sequence $\eta_{n}\in\tilde{{\rm
Test}}^{\infty}_{+}$ such that $\eta_{n}$ are uniformly bounded in
$L^{\infty}$ and converge to $\eta$ in $L^{1}$. Hence, in order to prove
(3.11) it is sufficient to show that
(3.13)
$\int_{X}\eta\left(P_{t}(f\varphi)-f\varphi\right)\mathop{}\\!\mathrm{d}\mathfrak{m}\leq\int_{X}\eta\left(\int_{0}^{t}P_{s}\psi\mathop{}\\!\mathrm{d}s\right)\mathop{}\\!\mathrm{d}\mathfrak{m}\,,$
for any $\eta\in\tilde{{\rm Test}}^{\infty}_{+}$. To this aim we can compute
$\displaystyle\int_{X}\eta\left(P_{t}(f\varphi)-f\varphi\right)\mathop{}\\!\mathrm{d}\mathfrak{m}=$
$\displaystyle\int_{X}f\varphi\left(P_{t}\eta-\eta\right)\mathop{}\\!\mathrm{d}\mathfrak{m}$
$\displaystyle=$ $\displaystyle\int_{X}\int_{0}^{t}f\varphi\Delta
P_{s}\eta\mathop{}\\!\mathrm{d}s\mathop{}\\!\mathrm{d}\mathfrak{m}$
$\displaystyle\leq$ $\displaystyle\int_{X}\int_{0}^{t}\psi
P_{s}\eta\mathop{}\\!\mathrm{d}s\mathop{}\\!\mathrm{d}\mathfrak{m}$
$\displaystyle=$
$\displaystyle\int_{X}\eta\left(\int_{0}^{t}P_{s}\psi\mathop{}\\!\mathrm{d}s\right)\mathop{}\\!\mathrm{d}\mathfrak{m}\,.$
Since $\psi$ is bounded and supported on $B_{2r}(x)\setminus B_{r}(x)$, by
subsection 2.5 we infer:
$\limsup_{t\downarrow
0}\frac{P_{t}(f\varphi)(x)-(f\varphi)(x)}{t}\leq\limsup_{t\downarrow
0}\frac{\int_{0}^{t}P_{s}\psi(x)\mathop{}\\!\mathrm{d}s}{t}=0\,,$
which concludes the proof of Step 1.
Step 2. By subsection 2.5, if $g:X\to\mathbb{R}$ has polynomial growth and,
for some $r>0$ and $x\in X$, $g$ belongs to the domain of the Laplacian on
$B_{r}(x)$ and it has bounded and continuous Laplacian $\Delta g=\eta$
therein, then
$\lim_{t\downarrow 0}\frac{P_{t}g(y)-g(y)}{t}=\eta(y)\,,\quad\text{for any
$y\in B_{r}(x)$}\,.$
Step 3. Let us combine the outcomes of the previous two steps to prove the
statement.
Let us consider a ball $B_{2r}(x)\Subset\Omega^{\prime}$ and let
$\varphi:B_{2r}(x)\to\mathbb{R}$ be a solution (see Theorem 2.13) of
$\Delta\varphi=\eta\,,\quad\text{on $B_{2r}(x)$}\,.$
Observe that $f-\varphi$ is Lipschitz on $B_{r}(x)$ and
$\bm{\Delta}(f-\varphi)\leq 0\,,\quad\text{on $B_{r}(x)$}\,.$
From Step 1, we infer that for any extension
$\tilde{f}_{\varphi}:X\to\mathbb{R}$ of $f-\varphi$ with polynomial growth it
holds
(3.14) $\limsup_{t\downarrow
0}\frac{P_{t}\tilde{f}_{\varphi}(x)-\tilde{f}_{\varphi}(x)}{t}\leq 0\,.$
Moreover, we can consider an extension $\tilde{\varphi}:X\to\mathbb{R}$ of
$\varphi$ and observe that, by Step 2,
(3.15) $\lim_{t\downarrow
0}\frac{P_{t}\tilde{\varphi}(x)-\tilde{\varphi}(x)}{t}=\eta(x)\,.$
It is straightforward to check that, for any extension
$\tilde{f}:X\to\mathbb{R}$ of $f$, $\tilde{f}-\tilde{\varphi}$ is an extension
of $f-\varphi$. Hence, applying (3.14) to
$\tilde{f}_{\varphi}:=\tilde{f}-\tilde{\varphi}$ and then (3.15), we get
$\limsup_{t\downarrow
0}\frac{P_{t}\tilde{f}(x)-\tilde{f}(x)}{t}\leq\eta(x)\,,$
as we claimed. ∎
We collect the main equivalence results for Laplacian bounds in a single
statement below. Many of the equivalences are proved without the restriction
that $\mathfrak{m}=\mathscr{H}^{N}$ and we expect all of them to hold in
general. We do not pursue the most general statements as they will not be
needed in the sequel of the paper.
###### Theorem 3.4 (Equivalent notions of Laplacian bounds).
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space. Let $\Omega\subset X$ be an open domain,
$\eta\in\operatorname{C_{b}}(\Omega)$ and $f:\Omega\to\mathbb{R}$ be a locally
Lipschitz function. Then the following are equivalent:
* (i)
$\bm{\Delta}f\leq\eta$ in the sense of distributions on $\Omega$, as in
subsection 3.1;
* (ii)
$f$ is a superminimizer of the energy $E_{\eta}$, as in subsection 3.1;
* (iii)
$f$ is a classical supersolution of $\Delta f=\eta$ in the sense of subsection
3.1;
* (iv)
$f$ satisfies $\Delta f\leq\eta$ in the viscous sense as in subsection 3.1;
* (v)
$f$ is a supersolution of $\Delta f\leq\eta$ in the heat flow sense as in
subsection 3.1.
While the equivalences between (i), (ii) and (iii) are well established within
the theory of metric measure spaces that are doubling and verify a Poincaré
inequality, our proofs of the equivalence between (iv), (v) and the previous
ones heavily rely on the $\operatorname{RCD}(K,N)$ assumption. Indeed, the
Omori-Yau-Jensen type maximum principle Theorem 3.2, the existence of a nice
auxiliary function with the properties detailed in subsection 2.7 and the
Gaussian heat kernel bounds played a fundamental role in all of the arguments
above.
## 4\. Ricci curvature bounds, Hopf-Lax semigroups and Laplacian bounds
This section is dedicated to the analysis of the interplay between the
Hopf-Lax semigroups (associated to exponents $1\leq p<\infty$), Ricci
curvature lower bounds and Laplacian upper bounds.
Let us introduce some notation and terminology.
Let $1\leq p<\infty$. We shall consider the evolution via the $p$-Hopf-Lax
semigroup on a general metric space $(X,\mathsf{d})$. Let us consider
$f:X\to\mathbb{R}\cup\\{\pm\infty\\}$, not identically $+\infty$, and let the
evolution via $p$-Hopf-Lax semigroup be defined by
(4.1) $\mathcal{Q}^{p}_{t}f(x):=\inf_{y\in
X}\left(f(y)+\frac{\mathsf{d}(x,y)^{p}}{pt^{p-1}}\right)\,.$
Observe that, in the case $p=1$, the expression for the Hopf-Lax semigroup is
actually independent of $t$:
$\mathcal{Q}^{1}_{t}f(x)=\mathcal{Q}^{1}f(x)=\inf_{y\in
X}\left(f(y)+\mathsf{d}(x,y)\right)\,.$
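As a concrete illustration of (4.1) (a standard Euclidean computation, included only for orientation), the $p=2$ Hopf-Lax evolution of $f(y)=\lvert y\rvert^{2}/2$ on $\mathbb{R}^{n}$ can be computed in closed form:

```latex
% For f(y) = |y|^2/2 on \mathbb{R}^n, the infimum in (4.1) with p = 2 is
% attained at y = x/(1+t), giving
\mathcal{Q}^{2}_{t}f(x)=\inf_{y\in\mathbb{R}^{n}}\left(\frac{\lvert y\rvert^{2}}{2}+\frac{\lvert x-y\rvert^{2}}{2t}\right)=\frac{\lvert x\rvert^{2}}{2(1+t)}\,,
% which, as expected for the Hopf-Lax formula, solves the Hamilton-Jacobi
% equation  \partial_t u + |\nabla u|^2/2 = 0.
```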
The key result of this section will be that the Hopf-Lax semigroup preserves
upper bounds on the Laplacian on $\operatorname{RCD}$ spaces, when suitably
interpreted, for any exponent $1\leq p<\infty$. This observation appears to be
new for general exponents $p$, even for smooth Riemannian manifolds. The only
previous references we are aware of are [136], dealing with the case $p=2$ on
Alexandrov spaces with lower Ricci curvature bounds, (the result had been
previously announced in the unpublished [118], where a strategy on Alexandrov
spaces was also indicated) and the more recent [135], where exponents
$1<p<\infty$ on smooth Riemannian manifolds are considered. Even in this case,
our proof seems more robust and it is based on a completely different idea,
relying on the connection between the heat flow and lower Ricci curvature
bounds instead of the second variation formula.
In the Euclidean setting, the inf-convolution preserves the property of being
a supersolution of the Laplace equation, $\Delta u=0$. Classical proofs of
this fact, that allow for extensions to more general PDEs, are based on the
affine invariance of the Euclidean space.
In subsection 4.1 we generalize this statement to Riemannian manifolds with
lower Ricci curvature bounds. The proof introduces a different approach based
on the characterization of the Laplacian of smooth functions through
asymptotics of averages on balls. To avoid technicalities we will consider
only smooth functions, though it is worth pointing out that the Hopf-Lax
semigroup does not preserve smoothness, even in the Euclidean setting.
The extension to non smooth $\operatorname{RCD}(K,N)$ spaces, that we shall
address in subsection 4.3, requires two further ideas: a weak theory of
Laplacian bounds in the non smooth context, that we have at our disposal after
section 3, and a new intrinsic way to connect the Laplacian to the Hopf-Lax
semigroup under the $\operatorname{RCD}$ condition. This connection will be
achieved exploiting a powerful duality formula, originally due to Kuwada [99],
that we review in subsection 4.2.
### 4.1. Smooth Riemannian manifolds
For the sake of motivation, let us present a characterization of lower Ricci
bounds for smooth Riemannian manifolds involving the interplay between the
Hopf-Lax semigroup and Laplacian bounds.
Let $(M^{n},g)$ be a smooth Riemannian manifold and, given a sufficiently
smooth function $f:M\to\mathbb{R}$, let us set
$\sigma_{r}f(x):=\fint_{\partial
B_{r}(x)}f(y)\mathop{}\\!\mathrm{d}\mathscr{H}^{n-1}(y)=\int
f(y)\mathop{}\\!\mathrm{d}\sigma_{x,r}(y)\,,$
where we denoted by $\mathscr{H}^{n-1}$ the surface measure of $\partial
B_{r}(x)$ and notice that, by its very definition,
$\sigma_{x,r}:=\mathscr{H}^{n-1}(\partial
B_{r}(x))^{-1}\,\mathscr{H}^{n-1}\mathop{\hbox{\vrule
height=7.0pt,width=0.5pt,depth=0.0pt\vrule
height=0.5pt,width=6.0pt,depth=0.0pt}}\nolimits\partial B_{r}(x)$ is a
probability measure.
Let us recall (see for instance the proof of [129, Theorem 1.5]) the following
classical fact: for any $x\in U_{x}\subset M$ and any function $f\in
C^{3}(U_{x})$, it holds
(4.2) $\sigma_{r}f(x)=f(x)+\frac{r^{2}}{2n}\Delta
f(x)+o(r^{2})\,,\quad\text{as $r\downarrow 0$}\,.$
We will denote by $f^{c}$ the dual of a function $f$, with respect to the
optimal transport duality induced by cost equal to distance, i.e.
$f^{c}(y):=\inf_{z\in M}\\{f(z)+\mathsf{d}(y,z)\\}\,,\quad\text{for any $y\in
M$}.$
###### Theorem 4.1.
Let $(M^{n},g)$ be a smooth closed Riemannian manifold and let
$K\in\mathbb{R}$. The following conditions are equivalent:
* (i)
$\operatorname{Ric}\geq K$ on $M$;
* (ii)
for any function $f:M\to\mathbb{R}$ and for any $x,y\in M$ such that
$f^{c}(x)-f(y)=\mathsf{d}(x,y)\,,$
if $f$ is smooth in a neighbourhood of $y$ and $f^{c}$ is smooth in a
neighbourhood of $x$, then
(4.3) $\Delta f^{c}(x)\leq\Delta f(y)-K\mathsf{d}(x,y)\,.$
###### Proof.
Let us start proving the implication from (i) to (ii).
By [129, Theorem 1.5], if $(M^{n},g)$ is a smooth Riemannian manifold such
that $\operatorname{Ric}\geq K$, then for any couple of points $x,y\in M$,
(4.4)
$W_{1}(\sigma_{x,r},\sigma_{y,r})\leq\left(1-\frac{K}{2n}r^{2}+o(r^{2})\right)\mathsf{d}(x,y)\,,\quad\text{as
$r\downarrow 0$}\,,$
where we denoted by $W_{1}$ the Wasserstein distance associated to the
exponent $p=1$.
Then we can apply the classical Kantorovich-Rubinstein duality to infer that
(4.5)
$\sigma_{r}f^{c}(x)-\sigma_{r}f(y)\leq\left(1-\frac{K}{2n}r^{2}+o(r^{2})\right)\mathsf{d}(x,y)\,,\quad\text{as
$r\downarrow 0$}\,.$
Indeed
$\sigma_{r}f^{c}(x)=\int f^{c}(z)\mathop{}\\!\mathrm{d}\sigma_{x,r}(z)\,,$
$\sigma_{r}f(y)=\int f(z)\mathop{}\\!\mathrm{d}\sigma_{y,r}(z)\,$
and
$f^{c}(x)-f(y)\leq\mathsf{d}(x,y)\,,\quad\text{for any $x,y\in M$}\,.$
Therefore
$\sigma_{r}f^{c}(x)-\sigma_{r}f(y)\leq W_{1}(\sigma_{x,r},\sigma_{y,r})\,$
and we can apply (4.4) to get (4.5).
Taking into account (4.2), the assumption $f^{c}(x)-f(y)=\mathsf{d}(x,y)$ and
the fact that $x$ and $y$ are smooth points for $f^{c}$ and $f$ respectively,
starting from (4.5) we can easily infer that
$\Delta f^{c}(x)\leq\Delta f(y)-K\mathsf{d}(x,y)\,,$
as we claimed.
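For completeness, the last step can be expanded; inserting the expansion (4.2) at the smooth points $x$ and $y$ into (4.5) gives:

```latex
% Insert (4.2) for f^c at x and for f at y into (4.5):
f^{c}(x)+\frac{r^{2}}{2n}\Delta f^{c}(x)-f(y)-\frac{r^{2}}{2n}\Delta f(y)+o(r^{2})
  \leq\left(1-\frac{K}{2n}r^{2}+o(r^{2})\right)\mathsf{d}(x,y)\,.
% Since f^c(x) - f(y) = d(x,y), the zeroth-order terms cancel; dividing by
% r^2/(2n) and letting r \downarrow 0 yields
%   \Delta f^c(x) \leq \Delta f(y) - K d(x,y).
```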
Let us prove the converse implication. As for the classical implications
between different characterizations of lower Ricci bounds in [129], we wish to
apply (4.3) to suitably chosen functions $f$ in order to control from below
the Ricci curvature at any point and in any direction.
To this aim, let us choose $x\in M$ and a tangent vector $v\in T_{x}M$. Let us
assume without loss of generality that $\left\lvert v\right\rvert_{x}=1$. Then
we can find, via a standard construction, a smooth hypersurface
$\Sigma_{x,v}\subset B_{r}(x)$ for $r>0$ small enough, such that
$x\in\Sigma_{x,v}$, the tangent hyperplane to $\Sigma_{x,v}$ is the orthogonal
to $v$ in $T_{x}M$ and the second fundamental form of the hypersurface is
vanishing at $x$.
It is a standard fact in Riemannian geometry that the signed distance function
$\mathsf{d}^{\pm}_{\Sigma}$ from $\Sigma_{x,v}$ is a smooth $1$-Lipschitz
function in a neighbourhood of $x$. Moreover, for some $\varepsilon>0$
sufficiently small, we can consider a unit speed geodesic
$\gamma:(-\varepsilon,\varepsilon)\to M$ such that $\gamma(0)=x$,
$\gamma^{\prime}(0)=v$ and
$\mathsf{d}^{\pm}_{\Sigma_{x,v}}(\gamma(t))=t\,,\quad\text{for any
$t\in(-\varepsilon,\varepsilon)$}\,.$
The following is a well known identity in Riemannian geometry (observe that
$\operatorname{Hess}\mathsf{d}^{\pm}_{\Sigma}=0$ at $x$ due to the vanishing
of the second fundamental form of $\Sigma_{x,v}$ at $x$):
(4.6)
$\left.\frac{\mathop{}\\!\mathrm{d}}{\mathop{}\\!\mathrm{d}t}\right|_{{t=0}}\Delta\mathsf{d}^{\pm}_{\Sigma_{x,v}}(\gamma(t))=-\operatorname{Ric}_{x}(v,v)\,.$
Now, applying (4.3) to $f=f^{c}=\mathsf{d}^{\pm}_{\Sigma_{x,v}}$ at the points
$\gamma(0)$ and $\gamma(t)$, we obtain that
$\Delta\mathsf{d}^{\pm}_{\Sigma_{x,v}}(\gamma(t))\leq\Delta\mathsf{d}^{\pm}_{\Sigma_{x,v}}(\gamma(0))-Kt\,,\quad\text{for
any $t\in(0,\varepsilon)$.}$
Thus, we infer that
$\left.\frac{\mathop{}\\!\mathrm{d}}{\mathop{}\\!\mathrm{d}t}\right|_{{t=0}}\Delta\mathsf{d}^{\pm}_{\Sigma_{x,v}}(\gamma(t))\leq-K\,,$
which proves that $\operatorname{Ric}_{x}(v,v)\geq K$, thanks to (4.6). ∎
###### Remark .
For brevity, we discussed only the case $p=1$, however it is possible to
consider variants of Theorem 4.1 above dealing with the Hopf-Lax semigroups
associated to any exponent $1\leq p<\infty$.
###### Remark .
Smoothness of the test function $f$ in condition (ii) of Theorem 4.1 above is
an assumption which can be relaxed, if we understand the Laplacian bounds in a
more general sense. This will be the key to formulating a counterpart of this
result on general $\operatorname{RCD}(K,N)$ metric measure spaces, and it
will be key for the applications later in the paper.
Moreover, as the forthcoming discussion will clarify, the compactness of the
manifold is also a completely unnecessary assumption.
### 4.2. Kuwada’s lemma
We recall here a fundamental result highlighting the interplay between lower
Ricci curvature bounds, contractivity estimates for the heat flow and the
Hopf-Lax semigroup. The original formulation on smooth Riemannian manifolds is
due to Kuwada [99]. Later on, due to its particular robustness, it has been
extended to $\operatorname{RCD}(K,\infty)$ metric measure spaces in [11, Lemma
3.4] in the case of exponent $p=2$.
###### Theorem 4.2 (Kuwada duality).
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,\infty)$ metric
measure space and $f\in\operatorname{LIP_{b}}(X)$ be non-negative and with
bounded support. Then, for any $t\geq 0$, $\mathcal{Q}_{t}^{2}f$ is Lipschitz,
non-negative, with bounded support and it holds
(4.7)
$P_{s}\left(\mathcal{Q}^{2}_{1}f\right)(x)-P_{s}f(y)\leq\frac{e^{-2Ks}}{2}\mathsf{d}(x,y)^{2}\,,$
for any $x,y\in X$ and for any $s\geq 0$.
Thanks to the self-improvement of the Bakry-Émery gradient contraction
estimate for the heat flow obtained on $\operatorname{RCD}(K,\infty)$ spaces
in [122] (see (2.31)), Theorem 4.2 can then be generalized to arbitrary
exponents $p$, along the original lines of [99]. Since it can be proved with
the very same strategy of the case $p=2$ we omit the proof.
###### Theorem 4.3.
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,\infty)$ metric
measure space and $f\in\operatorname{LIP_{b}}(X)$ be non-negative and with
bounded support. Let $1\leq p<\infty$. Then, for any $t\geq 0$,
$\mathcal{Q}_{t}^{p}f$ is Lipschitz, non-negative, with bounded support and it
holds
(4.8)
$P_{s}\left(\mathcal{Q}^{p}_{1}f\right)(x)-P_{s}f(y)\leq\frac{e^{-pKs}}{p}\mathsf{d}(x,y)^{p}\,,$
for any $x,y\in X$ and for any $s\geq 0$.
For our purposes it will be relevant to apply Kuwada’s duality under milder
assumptions on the function $f$. This is possible under the
$\operatorname{RCD}(K,N)$ condition for finite $N$, thanks to the Gaussian
estimates for the heat kernel, that, as we already pointed out (see (2.37) and
the discussion following it), enlarge the class of functions to which the heat
flow can be applied. We focus for simplicity on the case $p=1$, which is the
relevant one for our purposes.
###### Theorem 4.4.
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space for some $K\in\mathbb{R}$ and $1\leq N<\infty$. Let
$f:X\to\mathbb{R}$ be a locally Lipschitz function with polynomial growth. Let
us assume that there exists $x_{0}\in X$ such that
$\mathcal{Q}^{1}_{1}f(x_{0})\in\mathbb{R}$. Then
(4.9) $P_{s}\left(\mathcal{Q}^{1}_{1}f\right)(x)-P_{s}f(y)\leq
e^{-Ks}\mathsf{d}(x,y)\ ,$
for any $x,y\in X$ and for any $s\geq 0$.
###### Proof.
Let us set $f^{c}:=\mathcal{Q}^{1}_{1}f$, in order to ease the notation.
Observe that, if there exists $x_{0}\in X$ such that
$f^{c}(x_{0})\in\mathbb{R}$, then $f^{c}$ is a $1$-Lipschitz function.
Moreover, since for any function $f$ as above, it holds that $f^{c}\leq f$, it
is sufficient to prove (4.9) for $1$-Lipschitz functions. Indeed, if the
statement holds for $1$-Lipschitz functions, then
$\left(P_{s}f^{c}\right)(x)-\left(P_{s}f^{c}\right)(y)\leq
e^{-Ks}\mathsf{d}(x,y)\,,$
for any $x,y\in X$ and for any $s\geq 0$.
Hence, since $f^{c}\leq f$ and therefore $P_{s}f^{c}\leq P_{s}f$, we obtain
$\left(P_{s}f^{c}\right)(x)-P_{s}f(y)\leq e^{-Ks}\mathsf{d}(x,y)\;\;\;\text{
for any $x,y\in X$ and for any $s\geq 0$}\,,$
as we wished. Now, given any $1$-Lipschitz function $f:X\to\mathbb{R}$,
observe that $f^{c}=f$. Using [10, Theorem 6.1 (iv)], we can estimate
$\operatorname{Lip}(P_{s}f)\leq
e^{-Ks}P_{s}\big{(}\operatorname{Lip}(f)\big{)}\leq e^{-Ks}\,.$
Hence
$\left\lvert P_{s}f(x)-P_{s}f(y)\right\rvert\leq
e^{-Ks}\mathsf{d}(x,y)\,,\quad\text{for any $x,y\in X$ and for any $s\geq
0$}\,.$
∎
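For completeness, the identity $f^{c}=f$ for $1$-Lipschitz $f$, used in the proof above, can be checked directly from the definition of the Hopf-Lax semigroup:

```latex
% If f is 1-Lipschitz then f(y) + d(x,y) >= f(x) for every y,
% so the infimum below is bounded from below by f(x) and is attained at y = x:
f^{c}(x)=\mathcal{Q}^{1}_{1}f(x)=\inf_{y\in X}\big\{f(y)+\mathsf{d}(x,y)\big\}
  \geq f(x)\,,
\qquad
f^{c}(x)\leq f(x)+\mathsf{d}(x,x)=f(x)\,.
```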
###### Remark .
Note that, if $f:X\to\mathbb{R}$ is $1$-Lipschitz, then one can strengthen the
estimate (4.9) by taking the absolute value on the left-hand side.
### 4.3. Hopf-Lax semigroup and Laplacian bounds: the non smooth framework
Let us consider an $\operatorname{RCD}(K,N)$ metric measure space
$(X,\mathsf{d},\mathfrak{m})$. Recall the definition of the $p$-Hopf-Lax
semigroup (4.1).
In order to motivate the next developments, let us start with some formal
computations, neglecting the regularity issues.
To this aim let $x\in X$ and suppose that there exists $y\in X$ such that
(4.10) $\mathcal{Q}^{p}_{1}f(x)=f(y)+\frac{\mathsf{d}(x,y)^{p}}{p}\,,$
i.e., $y$ is a point where the infimum defining the $p$-Hopf-Lax semigroup for
$t=1$ at $x$ is attained.
Observe that, for $x$ and $y$ as above, equality holds at time $s=0$ in (4.8).
Hence, by taking the right derivative,
(4.11) $\limsup_{s\downarrow
0}\frac{P_{s}\left(\mathcal{Q}^{p}_{1}f\right)(x)-\mathcal{Q}^{p}_{1}f(x)}{s}\leq\limsup_{s\downarrow
0}\frac{P_{s}f(y)-f(y)}{s}-K\mathsf{d}(x,y)^{p}\,.$
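In detail, (4.11) is obtained by subtracting the $s=0$ identity (4.10) from the Kuwada estimate (4.8) and dividing by $s>0$:

```latex
% (4.8) combined with the equality  Q_1^p f(x) - f(y) = d(x,y)^p / p  at s = 0 gives
\frac{P_{s}\left(\mathcal{Q}^{p}_{1}f\right)(x)-\mathcal{Q}^{p}_{1}f(x)}{s}
  \leq\frac{P_{s}f(y)-f(y)}{s}+\frac{e^{-pKs}-1}{ps}\,\mathsf{d}(x,y)^{p}\,,
% and the last factor satisfies  (e^{-pKs} - 1)/(ps) -> -K  as s -> 0+.
```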
If $f$ is regular at $y$, the first term in the right hand side of (4.11) is
the value $\Delta f(y)$. Hence (4.11) can be turned into
$\limsup_{s\downarrow
0}\frac{P_{s}\left(\mathcal{Q}^{p}_{1}f\right)(x)-\mathcal{Q}^{p}_{1}f(x)}{s}\leq\Delta
f(y)-K\mathsf{d}(x,y)^{p}\,,$
where we recall that $x$ and $y$ are such that (4.10) holds. If also
$\mathcal{Q}^{p}_{1}f$ happens to be regular near $x$, then
$\Delta\mathcal{Q}^{p}_{1}f(x)\leq\Delta f(y)-K\mathsf{d}(x,y)^{p}\,.$
As we shall see, the viscous theory of Laplacian bounds allows us to make the
heuristic above rigorous.
To ease the notation, we shall write
$\Delta^{h}f(x):=\limsup_{t\downarrow 0}\frac{P_{t}f(x)-f(x)}{t}\,,$
whenever $f:X\to\mathbb{R}$ is a locally Lipschitz function with polynomial
growth.
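As a sanity check (not part of the original text): on a smooth Riemannian manifold, and for $f$ smooth with bounded derivatives, $\Delta^{h}$ coincides with the usual Laplacian, since $u(t,x)=P_{t}f(x)$ solves the heat equation $\partial_{t}u=\Delta u$ with $u(0,\cdot)=f$:

```latex
\Delta^{h}f(x)=\limsup_{t\downarrow 0}\frac{P_{t}f(x)-f(x)}{t}
  =\partial_{t}\big|_{t=0^{+}}P_{t}f(x)
  =\Delta P_{t}f(x)\big|_{t=0}
  =\Delta f(x)\,.
```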
###### Proposition .
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space for some $K\in\mathbb{R}$ and $1\leq N<\infty$. Let
$f:X\to\mathbb{R}$ be a locally Lipschitz function with polynomial growth. Let
us assume that there exists $x_{0}\in X$ such that
$f^{c}(x_{0}):=\mathcal{Q}^{1}_{1}f(x_{0})\in\mathbb{R}$. If $x,y\in X$ verify
$f^{c}(x)-f(y)=\mathsf{d}(x,y)\,,$
then
$\Delta^{h}f^{c}(x)\leq\Delta^{h}f(y)-K\mathsf{d}(x,y)\,.$
###### Proof.
The conclusion follows from Theorem 4.4, relying on the very definition of
$\Delta^{h}$ through the formal argument presented above. Indeed, under the
assumption of the statement, by Theorem 4.4 we have:
(4.12) $P_{s}f^{c}(x)-P_{s}f(y)\leq e^{-Ks}\mathsf{d}(x,y)\,,\quad\text{ for
any $x,y\in X$ and for any $s\geq 0$}\,.$
Moreover, by assumption, equality holds in (4.12) at time $s=0$. Hence, by
taking the right derivative at both sides, we infer that
$\displaystyle\Delta^{h}f^{c}(x)$ $\displaystyle=\limsup_{s\downarrow
0}\frac{P_{s}f^{c}(x)-f^{c}(x)}{s}$ $\displaystyle\leq\limsup_{s\downarrow
0}\frac{P_{s}f(y)-f(y)}{s}+\lim_{s\downarrow
0}\frac{e^{-Ks}-1}{s}\mathsf{d}(x,y)$
$\displaystyle=\Delta^{h}f(y)-K\mathsf{d}(x,y)\,.$
∎
Thanks to the equivalences for Laplacian bounds over noncollapsed
$\operatorname{RCD}(K,N)$ metric measure spaces (see Theorem 3.4), we obtain
the following.
###### Theorem 4.5.
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space for some $K\in\mathbb{R}$ and $1\leq N<\infty$. Let
$f:X\to\mathbb{R}$ be a locally Lipschitz function with polynomial growth. Let
$\Omega,\Omega^{\prime}\subset X$ be open domains and $\eta\in\mathbb{R}$.
Assume that $f^{c}$ is finite and that, for any
$x\in\Omega^{\prime}$, the infimum defining $f^{c}(x)$ is attained at some
$y\in\Omega$. Assume moreover that
(4.13) $\Delta f\leq\eta\quad\text{on $\Omega$}\,.$
Then
$\Delta
f^{c}\leq\eta-\min_{x\in\Omega^{\prime},y\in\Omega}K\mathsf{d}(x,y)\quad\text{on
$\Omega^{\prime}$}$
where the Laplacian bounds have to be intended in any of the equivalent senses
of Theorem 3.4.
###### Proof.
The statement follows from subsection 4.3 and Theorem 3.4. Indeed, by (4.13)
and Theorem 3.4, we have
$\Delta^{h}f(y)\leq\eta\,\quad\text{for any $y\in\Omega$}\,.$
Hence, by subsection 4.3,
$\Delta^{h}f^{c}(x)\leq\eta-K\mathsf{d}(x,y)\,,$
where $y\in\Omega$ is such that $f^{c}(x)-f(y)=\mathsf{d}(x,y)$.
The conclusion follows applying Theorem 3.4 again to $f^{c}$ on
$\Omega^{\prime}$. ∎
Specializing to the case of non-negative Ricci curvature $K=0$, we get a
cleaner statement.
###### Corollary .
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(0,N)$ metric
measure space. Let $f:X\to\mathbb{R}$ be a locally Lipschitz function with
polynomial growth. Let $\Omega,\Omega^{\prime}\subset X$ be open domains and
$\eta\in\mathbb{R}$. Assume that $f^{c}$ is finite and that, for any
$x\in\Omega^{\prime}$ the infimum defining $f^{c}(x)$ is attained at some
$y\in\Omega$. Assume moreover that
$\Delta f\leq\eta\,\quad\text{on $\Omega$}\,.$
Then
$\Delta f^{c}\leq\eta\,\quad\text{on $\Omega^{\prime}$}\,,$
where the Laplacian bound can be intended in any of the equivalent senses of
Theorem 3.4.
###### Remark .
For brevity, we discussed only the case $p=1$; however, it is possible to
obtain counterparts of all the results above for the Hopf-Lax semigroup
associated to an arbitrary exponent $1\leq p<\infty$.
## 5\. Mean curvature bounds for minimal boundaries
This section is dedicated to the study of mean curvature bounds for boundaries
of locally perimeter minimizing sets of finite perimeter, in the framework of
$\operatorname{RCD}(K,N)$ metric measure spaces
$(X,\mathsf{d},\mathscr{H}^{N})$.
Mean curvature bounds will be encoded into Laplacian bounds for distance
functions. As it is well known, this is equivalent to the classical
information about the vanishing mean curvature condition in the smooth
setting, see Theorem A.1. At the same time, this perspective allows for a
meaningful formulation and analysis in our non-smooth framework: switching to
global Laplacian bounds avoids the need to consider second-order objects (such
as the mean curvature, the Laplacian of the distance, or the Hessian of a
function) on a prescribed codimension-one hypersurface. This is key in our
non-smooth framework, as second-order objects are usually well defined only
$\mathfrak{m}$-a.e., and it can therefore be quite tricky to work with them on
a codimension-one hypersurface.
As we shall see, this way of formulating mean curvature bounds is also fine
enough to allow for several extensions of classical results in Riemannian
geometry to the synthetic framework. Here we focus on the beginning of a
regularity theory, see section 6, and on some direct geometric applications,
see for instance Theorem 5.3 for a generalized version of the Frankel
property. The extension to different notions of minimal hypersurfaces and
their geometric applications are left to future investigation.
We mention that the Laplacian bounds on the distance function, in addition to
encoding the vanishing of the mean curvature (i.e. a “first variation-type”
information), also encode “second variation-type” information. Moreover, such
“second variation-type” information is encoded not only at an infinitesimal
level, but at a finite level; see for example subsection 6.2 where the case of
equidistant surfaces is treated.
Our treatment is inspired by [33], where a new approach to mean curvature
bounds for perimeter minimizing sets was proposed by Caffarelli and Cordoba.
Their strategy partially avoids the first variation formula (that was a
fundamental tool in the previous approach due to De Giorgi [54]) and is
inspired by the viscosity theory in PDEs, instead. Later on, the possibility
of relying on this approach on non smooth spaces was suggested by Petrunin in
[119], with a sketch of proof of the Lévy-Gromov isoperimetric inequality on
Alexandrov spaces along similar lines.
### 5.1. Minimal boundaries and the Laplacian of the distance function
The subject of our study will be sets of finite perimeter that locally
minimize the perimeter, according to the following.
###### Definition .
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and let $\Omega\subset X$ be an open domain. Let $E\subset X$ be
a set of locally finite perimeter. We say that $E$ is locally perimeter
minimizing in $\Omega$ if for any $x\in\Omega$ there exists $r_{x}>0$ such
that $E$ minimizes the perimeter among all the perturbations that are
compactly supported in $B_{r_{x}}(x)$, i.e., for any Borel set $F$ such that
$E\Delta F\subset B_{r_{x}}(x)$ it holds
$\operatorname{Per}(E,B_{r_{x}}(x))\leq\operatorname{Per}(F,B_{r_{x}}(x))\,.$
Let us notice that the above is a very general condition. For instance, smooth
minimal hypersurfaces in Riemannian manifolds are locally boundaries of
locally perimeter minimizing sets according to subsection 5.1, even though, in
general, they do not minimize the perimeter among arbitrary compactly
supported variations (a simple example in this regard is the equator inside
the sphere).
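To make the parenthetical example quantitative (a standard computation, added here for illustration): in the round sphere $S^{N}$, the boundary of the geodesic ball $B_{\theta}$ of radius $\theta$ is a latitude sphere of $(N-1)$-dimensional area $\sigma_{N-1}\sin^{N-1}\theta$, where $\sigma_{N-1}:=\mathscr{H}^{N-1}(S^{N-1})$:

```latex
\operatorname{Per}\big(B_{\theta}\big)=\sigma_{N-1}\sin^{N-1}\theta\,,
\qquad
\frac{\mathrm{d}}{\mathrm{d}\theta}\Big[\sigma_{N-1}\sin^{N-1}\theta\Big]
  =(N-1)\,\sigma_{N-1}\sin^{N-2}\theta\,\cos\theta\,,
% which vanishes at theta = pi/2: the equator is a critical point (minimal),
% but a maximum of the perimeter among latitude spheres.
```

Hence pushing the equator to a nearby latitude strictly decreases the perimeter, so the hemisphere is not a global perimeter minimizer, while it is still locally perimeter minimizing in the sense of the definition above.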
###### Theorem 5.1.
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space. Let $E\subset X$ be a set of locally finite perimeter and
assume that it is a local perimeter minimizer. Let
$\mathsf{d}_{\overline{E}}:X\setminus\overline{E}\to[0,\infty)$ be the
distance function from $\overline{E}$. Then
(5.1)
$\Delta\mathsf{d}_{\overline{E}}\leq\mathrm{t}_{K,N}\circ\mathsf{d}_{\overline{E}}\,\quad\text{on
$X\setminus\overline{E}$}\,,$
where $\mathrm{t}_{K,N}$ is defined in (1.1). If $\Omega\subset X$ is an open
domain and $E\subset X$ is locally perimeter minimizing in $\Omega$, then
setting
(5.2) ${\mathcal{K}}:=\\{x\in X\,:\,\exists\,y\in\Omega\cap\partial
E\,:\,\mathsf{d}_{\overline{E}}(x)=\mathsf{d}(x,y)\\}\,,$
it holds
(5.3)
$\Delta\mathsf{d}_{\overline{E}}\leq\mathrm{t}_{K,N}\circ\mathsf{d}_{\overline{E}}\,\,\quad\text{on
any open subset
$\Omega^{\prime}\Subset\left(X\setminus\overline{E}\right)\cap{\mathcal{K}}$}\,.$
As observed in subsection 1.1, the upper bound (5.1) is sharp already in the
class of smooth Riemannian manifolds with Ricci curvature bounded below by
$K\in\mathbb{R}$ and dimension equal to $N\in\mathbb{N},N\geq 2$.
###### Remark (How to interpret the Laplacian bounds).
The Laplacian bounds (5.1) and (5.3) have to be intended in any of the
equivalent ways stated in Theorem 3.4. However let us mention that, if
suitably interpreted, the Laplacian bounds (5.3) hold more generally on the
whole (possibly non-open, but measurable) set
$\left(X\setminus\overline{E}\right)\cap{\mathcal{K}}$. Indeed, from the
general representation theorem for the Laplacian of
$\mathsf{d}_{\overline{E}}$ obtained in [37, Corollary 4.16], we know that
$\Delta\mathsf{d}_{\overline{E}}$ is a Radon functional, meaning that its
positive and negative parts
$\left(\Delta\mathsf{d}_{\overline{E}}\right)^{\pm}$ are Radon measures. Thus
it makes sense to consider the restrictions
$\left(\Delta\mathsf{d}_{\overline{E}}\right)^{\pm}\llcorner\left(X\setminus\overline{E}\right)\cap{\mathcal{K}}$,
and set
$\Delta\mathsf{d}_{\overline{E}}\llcorner\left(X\setminus\overline{E}\right)\cap{\mathcal{K}}:=\left(\Delta\mathsf{d}_{\overline{E}}\right)^{+}\llcorner\left(X\setminus\overline{E}\right)\cap{\mathcal{K}}-\left(\Delta\mathsf{d}_{\overline{E}}\right)^{-}\llcorner\left(X\setminus\overline{E}\right)\cap{\mathcal{K}}\,.$
The same arguments used below to show (5.3), actually show the stronger claim
that
(5.4)
$\left(\Delta\mathsf{d}_{\overline{E}}\right)^{+}\llcorner\left(X\setminus\overline{E}\right)\cap{\mathcal{K}}\leq\mathrm{t}_{K,N}^{+}\circ\mathsf{d}_{\overline{E}}\;\mathfrak{m}\llcorner\left(X\setminus\overline{E}\right)\cap{\mathcal{K}}$
(5.5)
$-\left(\Delta\mathsf{d}_{\overline{E}}\right)^{-}\llcorner\left(X\setminus\overline{E}\right)\cap{\mathcal{K}}\leq-\mathrm{t}_{K,N}^{-}\circ\mathsf{d}_{\overline{E}}\;\mathfrak{m}\llcorner\left(X\setminus\overline{E}\right)\cap{\mathcal{K}}\,.$
In this sense, the bound (5.3) holds on the whole set
$\left(X\setminus\overline{E}\right)\cap{\mathcal{K}}$.
###### Proof of Theorem 5.1.
The proof follows the outline in subsection 1.1. We shall focus on the case
$K=0$, assuming that $E$ is bounded and locally perimeter minimizing in $X$.
Minor adjustments that are required to cover the more general situation will
be mentioned at the end of the proof.
Let us recall the general strategy. Setting $f:=\mathsf{d}_{\overline{E}}$, we
rely on the equivalence between Laplacian bounds in the distributional and
viscous senses and prove by contradiction that $\Delta f\leq 0$ in the viscous
sense. If this is not the case, we find a function with strictly positive
Laplacian supporting $f$ from below. Then we apply the Hopf-Lax semigroup to
obtain a $1$-Lipschitz function $\varphi$ which still has positive Laplacian
and touches the distance to the boundary of $E$ at a footpoint $x_{E}$ of a
minimizing geodesic. Then, cutting along the level sets of $\varphi$, we build
inner perturbations of $E$, compactly supported in a small ball centred at the
footpoint $x_{E}$. The strictly positive Laplacian of $\varphi$ yields that
these perturbations decrease the perimeter, a contradiction.
Step 1. Mild regularity properties of $E$.
Since $E$ is locally a quasi-minimizer of the perimeter (see subsubsection
2.4.6), Theorem 2.10 and subsubsection 2.4.6 apply. We assume $E$ to be
normalized according to (2.3). Hence, the essential boundary of $E$ is closed
and it coincides with the topological boundary $\partial E$. Moreover, $E$
verifies the lower and upper measure bounds and the lower and upper perimeter
bounds (2.20) at any point of its topological boundary. We shall also assume
that $E\subset X$ is an open subset.
Step 2. Globalization of Laplacian upper bound.
We claim that if every $z\in\partial E$ admits a small neighbourhood $U$ such
that $\bm{\Delta}f\llcorner(U\setminus\overline{E})\leq 0$, then the upper
bound globalises to $\bm{\Delta}f\llcorner(X\setminus\overline{E})\leq 0$.
Such a claim follows from the general representation theorem for the Laplacian
of distance functions obtained in [37] via the localization technique; we
outline the argument next. From [37, Corollary 4.16], we know that
$\bm{\Delta}f\llcorner X\setminus\overline{E}=(\bm{\Delta}f)^{reg}\llcorner X\setminus\overline{E}+(\bm{\Delta}f)^{sing}\llcorner X\setminus\overline{E}\,,$
where the singular part $(\bm{\Delta}f)^{sing}\perp\mathscr{H}^{N}$ satisfies
$(\bm{\Delta}f)^{sing}\llcorner X\setminus\overline{E}\leq 0$
and the regular part $(\bm{\Delta}f)^{reg}\ll\mathscr{H}^{N}$ admits the
representation formula
(5.6) $(\bm{\Delta}f)^{reg}\llcorner X\setminus\overline{E}=(\log h_{\alpha})^{\prime}\,\mathscr{H}^{N}\llcorner X\setminus\overline{E}\,.$
In (5.6), $Q$ is a suitable set of indices and $(h_{\alpha})_{\alpha\in Q}$
are suitable densities defined on geodesics $(X_{\alpha})_{\alpha\in Q}$ which
essentially partition $X\setminus\overline{E}$ (in the smooth setting,
$(X_{\alpha})_{\alpha\in Q}$ correspond to the integral curves of
$\nabla\mathsf{d}_{E}$; note that here we are using the reverse
parametrization of $X_{\alpha}$ with respect to [37], hence the reversed sign
in the right hand side of (5.6)), such that the following disintegration
formula holds:
(5.7) $\mathscr{H}^{N}\llcorner X\setminus\overline{E}=\int_{Q}h_{\alpha}\,\mathscr{H}^{1}\llcorner X_{\alpha}\,{\mathfrak{q}}(\mathrm{d}\alpha)\,.$
The non-negative measure $\mathfrak{q}$ in (5.7), defined on the set of
indices $Q$, is obtained in a natural way from the essential partition
$(X_{\alpha})_{\alpha\in Q}$ of $X\setminus\overline{E}$, roughly by
projecting $\mathscr{H}^{N}\llcorner X\setminus\overline{E}$ onto the
set $Q$ of equivalence classes (we refer to [37] for the details).
The key point for the proof of Step 2 is that each $h_{\alpha}$ is a
$\operatorname{CD}(0,N)$ density over the ray $X_{\alpha}$ (see [37, Theorem
3.6]), implying that $\log(h_{\alpha})$ is concave and thus $(\log
h_{\alpha})^{\prime}$ is non-increasing (recall that the geodesic $X_{\alpha}$
is parametrized in terms of $\mathsf{d}_{E}\llcorner X_{\alpha}$, i.e. in the
direction “from $\overline{E}$ towards $X\setminus\overline{E}$”).
From the discussion above, the claim now easily follows. Indeed, if every
$z\in\partial E$ admits a small neighbourhood $U$ such that
$\bm{\Delta}f\llcorner(U\setminus\overline{E})\leq 0$, then in particular
$(\log h_{\alpha})^{\prime}\leq 0$ on $(X_{\alpha}\cap U)\setminus\overline{E}$,
and the concavity of $\log(h_{\alpha})$ along $X_{\alpha}$ implies that $(\log
h_{\alpha})^{\prime}\leq 0$ on $X_{\alpha}\setminus\overline{E}$. Thus (5.6)
yields $(\bm{\Delta}f)^{reg}\llcorner X\setminus\overline{E}\leq 0$.
We conclude recalling that the singular part
$(\bm{\Delta}f)^{sing}\llcorner X\setminus\overline{E}$ is non-positive.
Step 3. Construction of the auxiliary function $\varphi$ and properties.
Suppose by contradiction that $\Delta f\leq 0$ does not hold on
$X\setminus\overline{E}$. Then, by Step 2, there exist arbitrarily small
neighbourhoods $U$ centred at points of $\partial E$ such that $\Delta f\leq
0$ does not hold on $U\setminus\overline{E}$. Moreover, from the equivalence
Theorem 3.3, the bound is not verified in the viscous sense. It follows that
there exist $x\in U\setminus\overline{E}$, a ball $B_{r}(x)\subset
U\setminus\overline{E}$ and a lower supporting function
$\psi:B_{r}(x)\to\mathbb{R}$ with the following properties:
* (i)
$\psi\in D(\Delta,B_{r}(x))$ and $\Delta\psi$ is continuous on $B_{r}(x)$;
* (ii)
$\psi(x)=f(x)$;
* (iii)
$\psi(y)\leq f(y)$ for any $y\in B_{r}(x)$;
* (iv)
$0<\Delta\psi(x)<1/2$.
We wish to modify $\psi$ into a globally defined function
$\overline{\psi}:X\to\mathbb{R}$, while keeping all its good properties.
By the continuity of $\Delta\psi$ and (iv), there exists $\varepsilon>0$ such
that $\varepsilon<\Delta\psi<3/4$ on a neighbourhood of $x$. Then we can
consider a local Green-type distance $b_{x}$, see subsection 2.7 (possibly in
a smaller neighbourhood of $x$), and subtract a small multiple of $b_{x}^{4}$
from $\psi$, obtaining the function
$\hat{\psi}:=\psi-\delta b_{x}^{4}\,.$
For $\delta>0$ sufficiently small, possibly on a smaller ball $B_{s}(x)\subset
B_{r}(x)$, it holds that:
* (i’)
$\hat{\psi}:B_{s}(x)\to\mathbb{R}$ is Lipschitz and $\hat{\psi}\in
D(\Delta,B_{s}(x))$;
* (ii’)
$\hat{\psi}(x)=f(x)$;
* (iii’)
$\hat{\psi}(y)<f(y)$ for any $y\in B_{s}(x)$, $y\neq x$ and there exist
$s^{\prime}<s$ and $\delta^{\prime}>0$ such that
$\hat{\psi}<f-\delta^{\prime}$ on $B_{s}(x)\setminus B_{s^{\prime}}(x)$;
* (iv’)
$0<\varepsilon^{\prime}<\Delta\hat{\psi}\leq 1$ on $B_{s}(x)$, for some
$\varepsilon^{\prime}>0$.
Next, we extend $\hat{\psi}$ to a global function
$\overline{\psi}:X\to\mathbb{R}$ such that:
* (i”)
$\overline{\psi}:X\to\mathbb{R}$ is Lipschitz and $\overline{\psi}\in
D(\Delta,B_{s}(x))$;
* (ii”)
$\overline{\psi}(x)=f(x)$;
* (iii”)
$\overline{\psi}(y)<f(y)$ for any $y\neq x$ and there exist $s^{\prime}>0$ and
$\delta^{\prime}>0$ such that $\overline{\psi}<f-\delta^{\prime}$ on
$X\setminus B_{s^{\prime}}(x)$;
* (iv”)
$0<\varepsilon^{\prime}<\Delta\overline{\psi}\leq 1$ on $B_{s}(x)$, for some
$\varepsilon^{\prime}>0$.
Now, let us define $\varphi:X\to\mathbb{R}$ by
(5.8) $\varphi(z):=\sup_{y\in X}\\{\overline{\psi}(y)-\mathsf{d}(z,y)\\}\,.$
Observe that the supremum in (5.8) is always finite. Moreover,
(5.9) $\varphi$ is $1$-Lipschitz and $\varphi\leq f$.
In order to check these properties, observe that $\overline{\psi}\leq f$.
Hence, for any $z\in X$,
$\varphi(z)=\sup_{y\in
X}\\{\overline{\psi}(y)-\mathsf{d}(z,y)\\}\leq\sup_{y\in
X}\\{f(y)-\mathsf{d}(z,y)\\}=f(z)\,.$
Therefore $\varphi$ is finite and, being the supremum of a family of
$1$-Lipschitz functions (the functions
$z\mapsto\overline{\psi}(y)-\mathsf{d}(z,y)$, indexed by $y\in X$), it is
$1$-Lipschitz.
Let now $x_{E}\in\partial E$ be any footpoint of a minimizing geodesic from $x$
to $\overline{E}$. In particular, $f(x_{E})=0$ and
$f(x)-f(x_{E})=\mathsf{d}(x,x_{E})$. Let $\gamma:[0,\mathsf{d}(x,x_{E})]\to X$
be a unit speed minimizing geodesic between $\gamma(0)=x_{E}$ and
$\gamma(\mathsf{d}(x,x_{E}))=x$. Observe that
(5.10) $f(\gamma(t))=t\,\quad\text{for any $t\in[0,\mathsf{d}(x,x_{E})]$}\,.$
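The identity (5.10) is a direct consequence of the $1$-Lipschitz property of $f=\mathsf{d}_{\overline{E}}$ and the fact that $\gamma$ realizes the distance from $x$ to $\overline{E}$:

```latex
% Upper bound: x_E belongs to the closure of E, hence
f(\gamma(t))\leq\mathsf{d}\big(\gamma(t),x_{E}\big)=t\,;
% lower bound: f is 1-Lipschitz and f(x) = d(x, x_E), hence
f(\gamma(t))\geq f(x)-\mathsf{d}\big(x,\gamma(t)\big)
  =\mathsf{d}(x,x_{E})-\big(\mathsf{d}(x,x_{E})-t\big)=t\,.
```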
Moreover,
(5.11) $\varphi(\gamma(t))=f(\gamma(t)),\quad\text{for any
$t\in[0,\mathsf{d}(x,x_{E})]$}$
and, for any such $t$, the supremum defining $\varphi(\gamma(t))$ in (5.8) is
attained only at $x$.
Indeed, by (iii”) above, $\overline{\psi}<f-\delta^{\prime}$ on $X\setminus
B_{s^{\prime}}(x)$. Hence, for any $z\in X$ such that
$\varphi(z)>f-\delta^{\prime}$, we can restrict the supremum defining
$\varphi(z)$ in (5.8) to $\overline{B_{s^{\prime}}(x)}$. Since
$\overline{B_{s^{\prime}}(x)}$ is compact, the supremum is attained. In
detail, if $\varphi(z)>f(z)-\delta^{\prime}$, then
(5.12)
$\varphi(z)=\sup_{y\in\overline{B_{s^{\prime}}(x)}}\\{\overline{\psi}(y)-\mathsf{d}(y,z)\\}=\overline{\psi}(y_{z})-\mathsf{d}(y_{z},z)\leq
f(y_{z})-\mathsf{d}(y_{z},z)\leq f(z)\,,$
for some $y_{z}\in\overline{B_{s^{\prime}}(x)}$. In particular, whenever
$\varphi(z)=f(z)$, all the inequalities above become equalities. Hence
$\overline{\psi}(y_{z})=f(y_{z})$, which implies $y_{z}=x$ by (ii”) and (iii”),
and $f(z)-f(x)=-\mathsf{d}(x,z)$. Vice versa, if $f(z)-f(x)=-\mathsf{d}(x,z)$,
then $\varphi(z)=f(z)$ and the supremum defining $\varphi(z)$ is attained
(only) at $x$.
We claim that
(5.13) $\left\lvert\nabla\varphi\right\rvert=1,\quad\mathscr{H}^{N}\text{-a.e.
on }\\{\varphi>f-\delta\\}\setminus B_{s^{\prime}}(x).$
In order to verify this claim, we let $z\in\\{\varphi>f-\delta\\}\setminus
B_{s^{\prime}}(x)$. By the argument above, the supremum defining $\varphi(z)$
is a maximum and it is attained at some
$x_{z}\in\overline{B_{s^{\prime}}(x)}$. By assumption $x_{z}\neq z$. Let us
consider now a minimizing geodesic $\gamma:[0,\mathsf{d}(z,x_{z})]\to X$
connecting $z$ with $x_{z}$ and with unit speed. We claim that
(5.14) $\varphi(\gamma(t))=\varphi(z)+t\,,\quad\text{for any
$t\in[0,\mathsf{d}(z,x_{z})]$}\,.$
The inequality $\varphi(\gamma(t))\leq\varphi(z)+t$ follows from the fact that
$\varphi$ is $1$-Lipschitz. We only need to prove that
$\varphi(\gamma(t))\geq\varphi(z)+t$. To this aim, observe that
$\displaystyle\varphi(\gamma(t))=$ $\displaystyle\sup_{y\in
X}\\{\overline{\psi}(y)-\mathsf{d}(y,\gamma(t))\\}$ $\displaystyle\geq$
$\displaystyle\overline{\psi}(x_{z})-\mathsf{d}(\gamma(t),x_{z})$
$\displaystyle=$ $\displaystyle\overline{\psi}(x_{z})-\mathsf{d}(z,x_{z})+t$
$\displaystyle=$ $\displaystyle\varphi(z)+t\,.$
From (5.14) we infer that, for any $z\in\\{\varphi>f-\delta\\}\setminus
B_{s^{\prime}}(x)$, the function $\varphi$ has slope $1$ at $z$. The
conclusion that $\left\lvert\nabla\varphi\right\rvert=1$ $\mathscr{H}^{N}$-a.e.
on $\\{\varphi>f-\delta\\}\setminus B_{s^{\prime}}(x)$ follows from the a.e.
identification between slope and upper gradient obtained in [39].
Let us consider the Laplacian of $\varphi$. By construction, $\overline{\psi}$
verifies the Laplacian bound (iv”) on $B_{s^{\prime}}(x)$. In particular,
$\Delta\overline{\psi}\geq\varepsilon^{\prime}>0$ on $B_{s^{\prime}}(x)$ in the
sense of subsection 3.1. Hence, since we already observed that for points
$z\in\\{\varphi>f-\delta\\}\setminus B_{s^{\prime}}(x)$ the supremum defining
$\varphi(z)$ is a maximum attained in $\overline{B_{s^{\prime}}(x)}$, we
obtain by subsection 4.3 that
(5.15) $\bm{\Delta}\varphi\geq\varepsilon^{\prime}\,\quad\text{on
$\\{\varphi>f-\delta\\}\setminus B_{s^{\prime}}(x)$}\,,$
in the sense of distributions.
Step 4. Construction of the inner variations of $E$.
Our next goal is to construct a suitable inner variation of $E$, compactly
supported in a small ball centred at a point of $\partial E$. Such a
perturbation is obtained by cutting along a level set of $\varphi$, with value
$-\delta<t<0$. In Step 5, we will reach a contradiction by showing that such
an inner perturbation has perimeter strictly less than $E$.
Let us start by proving that for small values of $t\in(-\delta,0)$, we can cut
$E$ along a level set of $\varphi$ to obtain inner perturbations $E_{t}\subset
E$, supported on suitable balls of arbitrary small radius.
Let us define
$E_{t}:=E\setminus\\{\varphi>t\\}\,.$
Observe that for $t=0$ it holds $\\{\varphi>0\\}\cap E=\emptyset$, since from
(5.9) we know that $\\{\varphi>0\\}\subset\\{f>0\\}\subset X\setminus E$. When
we decrease the value of $t$, the super-level set $\\{\varphi>t\\}$ starts
cutting $E$.
Recall that $x_{E}\in\partial E$ is a footpoint of a minimizing geodesic from
$x$ to $\overline{E}$. We claim that for any $t<0$ sufficiently close to $0$,
$E_{t}$ is a perturbation of $E$ supported in a small ball $B_{r}(x_{E})$,
i.e. $\\{\varphi>t\\}\cap E\subset B_{r}(x_{E})$. To prove this claim, it is
enough to observe that from $f\equiv 0$ on $E$, (5.14), and
$B_{s^{\prime}}(x)\subset X\setminus\bar{E}$, we get
(5.16) $\\{\varphi>t\\}\cap
E\subset\\{\varphi>f-\delta\\}\setminus\overline{B_{s^{\prime}}(x)}\quad\text{for
any }t\in(-\delta,0)\,.$
Moreover, for every $z\in\\{\varphi>t\\}\cap E$, the maximum defining
$\varphi(z)$ is attained inside $\overline{B_{s^{\prime}}(x)}$, see (5.12) and
the nearby discussion.
Now we wish to bound the distance from $x_{E}$ to $\\{\varphi>t\\}\cap E$. For
any $z\in\\{\varphi>t\\}\cap E$, there exists
$x_{z}\in\overline{B_{s^{\prime}}(x)}$ such that
$\varphi(z)=\overline{\psi}(x_{z})-\mathsf{d}(x_{z},z)\leq
f(x_{z})-\mathsf{d}(x_{z},z)\leq
s^{\prime}+\mathsf{d}(x,\overline{E})-\mathsf{d}(x_{z},z)\,.$
Hence
$\mathsf{d}(x_{z},z)\leq\mathsf{d}(x,\overline{E})+s^{\prime}-\varphi(z)\leq\mathsf{d}(x,\overline{E})+s^{\prime}-t\,.$
In particular, we can bound the distance of $\\{\varphi>t\\}\cap E$ from $x$,
and hence from $x_{E}$, and obtain
(5.17) $\\{\varphi>t\\}\cap E\subset B_{r}(x_{E}),\quad
r:=2\mathsf{d}(x,\overline{E})+s^{\prime}+\delta\,.$
Recalling that $x$ can be chosen arbitrarily close to $\overline{E}$ (see
beginning of Step 3), and that $s^{\prime},\delta>0$ can be chosen arbitrarily
small (see Step 3), we infer that
$r:=2\mathsf{d}(x,\overline{E})+s^{\prime}+\delta$ can be chosen arbitrarily
small. It follows that, for every $r>0$ arbitrarily small, one can perform the
above construction in order to obtain $x_{E}\in\partial E$ and a family of
inner perturbations $(E_{t})_{t\in(-\delta,0)}$ of $E$, so that $E\setminus
E_{t}\subset B_{r}(x_{E})$.
Observe also that $E_{t}$ is a non-trivial perturbation of $E$, i.e.
$\mathscr{H}^{N}(\\{\varphi>t\\}\cap E)>0$. Indeed, from (5.14) it is easily
seen that $\\{\varphi>t\\}\cap E$ is non-empty and open. Using (5.14), it is
also readily seen that the inclusion “$\subset$” in (5.16) can be improved to
the compact inclusion “$\Subset$”.
Thus, from the combination of (5.10), (5.11), (5.13), (5.15) and (5.16),
$\varphi$ verifies the assumptions of subsubsection 2.4.5 for some open subset
$\Omega^{\prime\prime}\subset X$ satisfying (note that $\Omega^{\prime\prime}$
plays the role of $\Omega$ in subsubsection 2.4.5)
(5.18) $\\{\varphi>t\\}\cap
E\Subset\Omega^{\prime\prime}\Subset\\{\varphi>f-\delta\\}\setminus\overline{B_{s^{\prime}}(x)}=:\Omega^{\prime}\,.$
Hence, for $t\in(-\delta,0)$, $E_{t}$ is a compactly supported inner
perturbation of $E$ with finite perimeter and
(5.19)
$\left(\nabla\varphi\cdot\nu_{\\{\varphi>t\\}}\right)_{\mathrm{int}}=\left(\nabla\varphi\cdot\nu_{\\{\varphi>t\\}}\right)_{\mathrm{ext}}=1\,,\quad\operatorname{Per}_{\\{\varphi>t\\}}\text{-a.e.
on $\Omega^{\prime\prime}$}\,.$
Step 5. Estimate for the perimeter.
We aim to prove that there exists $t<0$, with $|t|$ small enough, such that
(5.20)
$\operatorname{Per}(E,B_{r}(x_{E}))-\operatorname{Per}(E_{t},B_{r}(x_{E}))>0,$
contradicting the local inner minimality of $E$. Let
$F:=E\cap\\{\varphi>t\\}=E\setminus E_{t}\,.$
Neglecting regularity issues, the boundary of $F$ has two components. The
first one lies along $\partial E$, with unit normal coinciding with the unit
normal of $\partial E$. The second one lies along the level set
$\\{\varphi=t\\}$, where the unit normal vector $\nu_{F}$ pointing inside
$F$ is $\nabla\varphi$. To make this description rigorous we rely on Theorem
2.9, together with the remark that the boundaries of $\\{\varphi>t\\}$ and $E$
have negligible intersections for a.e. $t\in(-\delta,0)$, for $\delta>0$
sufficiently small. Let $\chi$ be a smooth cutoff function (see subsection
2.5) with $\chi\equiv 1$ on a neighbourhood of $F$ and $\chi\equiv 0$ on
$X\setminus\big{(}\\{\varphi>f-\delta\\}\setminus B_{s^{\prime}}(x)\big{)}$.
Notice that $\chi\nabla\varphi\in\mathcal{DM}^{\infty}(X)$, by (5.15). We can
thus apply Theorem 2.8, with test function $f\equiv 1$, vector field
$V=\chi\nabla\varphi$ and set of finite perimeter $F$, to obtain
$\displaystyle\int_{F^{(1)}}\bm{\Delta}\varphi=-\int_{\mathcal{F}F}\left(\nabla\varphi\cdot\nu_{F}\right)_{\mathrm{int}}\mathop{}\\!\mathrm{d}\operatorname{Per}=-\int_{\mathcal{F}\\{\varphi>t\\}\cap E^{(1)}}(\nabla\varphi\cdot\nu_{\\{\varphi>t\\}})_{\mathrm{int}}\mathop{}\\!\mathrm{d}\operatorname{Per}-\int_{\mathcal{F}E\cap\\{\varphi>t\\}^{(1)}}\left(\nabla\varphi\cdot\nu_{E}\right)_{\mathrm{int}}\mathop{}\\!\mathrm{d}\operatorname{Per}=-\operatorname{Per}\big(\mathcal{F}\\{\varphi>t\\}\cap E^{(1)}\big)-\int_{\mathcal{F}E\cap\\{\varphi>t\\}^{(1)}}\big(\nabla\varphi\cdot\nu_{E}\big)_{\mathrm{int}}\mathop{}\\!\mathrm{d}\operatorname{Per}\leq-\operatorname{Per}\big(\mathcal{F}\\{\varphi>t\\}\cap E^{(1)}\big)+\operatorname{Per}\big(\mathcal{F}E\cap\\{\varphi>t\\}^{(1)}\big)\,,$
where the third equality follows from subsubsection 2.4.5 (see (5.18) and
(5.19)), while the inequality follows from the sharp trace bound
$\left\lvert\left(\nabla\varphi\cdot\nu_{E}\right)_{\mathrm{int}}\right\rvert\leq
1$ in (2.13).
Since $\bm{\Delta}\varphi>\varepsilon$ on a neighbourhood of $F$ by (5.15) and
(5.18), we get
(5.21) $-\operatorname{Per}\big{(}\mathcal{F}\\{\varphi>t\\}\cap
E^{(1)}\big{)}+\operatorname{Per}\big{(}\mathcal{F}E\cap\\{\varphi>t\\}^{(1)}\big{)}>0\,.$
Combining Theorem 2.9 with (5.17) and (5.21), we get the desired (5.20):
$\operatorname{Per}(E,B_{r}(x_{E}))-\operatorname{Per}(E_{t},B_{r}(x_{E}))=\operatorname{Per}\big(\mathcal{F}E\cap\\{\varphi>t\\}^{(1)}\big)-\operatorname{Per}\big(\mathcal{F}\\{\varphi>t\\}\cap E^{(1)}\big)>0\,.$
Step 6. Adjustments to cover the case of a general lower Ricci curvature bound
$K\in\mathbb{R}$.
In Step 2, the density $h_{\alpha}$ on $X_{\alpha}$ is a
$\operatorname{CD}(K,N)$ density, yielding that $\log h_{\alpha}$ is
semi-concave (thus locally Lipschitz and twice differentiable except at most
at countably many points) and satisfies the differential inequality $(\log
h_{\alpha})^{\prime\prime}\leq-K$ in the distributional sense and pointwise
outside at most countably many points. The singular part of
$\Delta\mathsf{d}_{\overline{E}}\llcorner(X\setminus\overline{E})$ is
non-positive regardless of the value of $K\in\mathbb{R}$. One can then argue
along the lines of Step 2 to globalize the bound
$\Delta\mathsf{d}_{\overline{E}}\leq-K\mathsf{d}_{\overline{E}}$.
In Step 3, since in the contradiction argument we start from the assumption
that (5.1) does not hold, arguing as before we can find an auxiliary function
$\psi$ with properties (i) to (iii) and such that
$\Delta\psi(x)>-K\mathsf{d}_{\overline{E}}(x)\,,$
that replaces the condition $\Delta\psi(x)>0$ that we found in the case $K=0$.
The construction of the functions $\hat{\psi}$ and $\overline{\psi}$ requires
no modification, besides the natural ones for conditions (iv’) and (iv”).
Then, when building the function $\varphi$ by duality as in (5.8), we only
need to apply the general Theorem 4.5 to infer that
$\bm{\Delta}\varphi\geq\varepsilon\,,\quad\text{on
$\\{\varphi>f-\delta\\}\setminus B_{s^{\prime}}(x)$}\,,$
also in this case. Basically, whenever $K<0$, the contradiction argument
starts with a supporting function whose Laplacian is more positive than in the
case $K=0$. This compensates for the fact that the Hopf-Lax semigroup might
decrease the lower Laplacian bound, though only in a controlled way.
Notice that the bound
$\Delta\mathsf{d}_{\overline{E}}\leq-K\mathsf{d}_{\overline{E}}$ is sharp in
the $N=\infty$ case. The sharp dimensional bound can be obtained by the
following self-improving argument.
By the first part of Step 6 (see also Step 2), we know that $h_{\alpha}$ is a
$\operatorname{CD}(K,N)$ density on the ray $X_{\alpha}$ for
$\mathfrak{q}$-a.e. $\alpha\in Q$, i.e. it satisfies
(5.22) $(\log h_{\alpha})^{\prime\prime}\leq-K-\frac{1}{N-1}\big{(}(\log
h_{\alpha})^{\prime}\big{)}^{2}$
in the sense of distributions and pointwise outside at most countably many points.
Moreover, from (5.6) and the first part of Step 6, we know that
(5.23) $(\log
h_{\alpha})^{\prime}(\mathsf{d}_{\overline{E}})\leq-K\,\mathsf{d}_{\overline{E}}\,\text{
on $X_{\alpha}$, for $\mathfrak{q}$-a.e. $\alpha\in Q$.}$
Observing that the function $\mathrm{t}_{K,N}$ defined in (1.1) satisfies the
following initial value problem
$\begin{cases}\mathrm{t}_{K,N}^{\prime}(x)=-K-\frac{1}{N-1}\big(\mathrm{t}_{K,N}(x)\big)^{2}\,,\\\ \mathrm{t}_{K,N}(0)=0\,,\end{cases}$
on $I_{K,N}$, a standard argument via differential inequalities (using (5.22)
and (5.23)) implies that
$(\log
h_{\alpha})^{\prime}\circ\mathsf{d}_{\overline{E}}\leq\mathrm{t}_{K,N}\circ\mathsf{d}_{\overline{E}}\,,\quad\text{
for $\mathfrak{q}$-a.e. $\alpha\in Q$.}$
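Since the definition (1.1) of $\mathrm{t}_{K,N}$ is not reproduced in this excerpt, the following sketch only checks that the standard comparison functions solve the Riccati initial value problem $\mathrm{t}^{\prime}=-K-\mathrm{t}^{2}/(N-1)$, $\mathrm{t}(0)=0$, in the two curvature regimes; the closed forms below are a plausible reconstruction of $\mathrm{t}_{K,N}$ (with $n$ standing for $N-1$), not quoted from the paper:

```python
import sympy as sp

# Symbolic check of the Riccati IVP  t' = -K - t^2/(N-1),  t(0) = 0.
x = sp.Symbol('x', real=True)
K = sp.Symbol('K', positive=True)   # curvature parameter, case K > 0
L = sp.Symbol('L', positive=True)   # L = -K, for the case K < 0
n = sp.Symbol('n', positive=True)   # n stands for N - 1 > 0

# Case K > 0:  t(x) = -sqrt(K n) tan( sqrt(K/n) x )
t_pos = -sp.sqrt(K*n)*sp.tan(sp.sqrt(K/n)*x)
resid_pos = sp.simplify(sp.diff(t_pos, x) - (-K - t_pos**2/n))

# Case K = -L < 0:  t(x) = sqrt(L n) tanh( sqrt(L/n) x )
t_neg = sp.sqrt(L*n)*sp.tanh(sp.sqrt(L/n)*x)
resid_neg = sp.simplify(sp.diff(t_neg, x) - (L - t_neg**2/n))

# Both residuals vanish identically and both candidates vanish at x = 0.
assert resid_pos == 0 and t_pos.subs(x, 0) == 0
assert resid_neg == 0 and t_neg.subs(x, 0) == 0
```

(For $K=0$ the unique solution is $\mathrm{t}\equiv 0$, consistent with the bound $\Delta\mathsf{d}_{\overline{E}}\leq 0$ obtained in the first part of the proof.)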
Recalling the representation formula (5.6) and that the singular part of
$\Delta\mathsf{d}_{\overline{E}}\llcorner(X\setminus\overline{E})$ is
non-positive, we infer that
$\Delta\mathsf{d}_{\overline{E}}\leq\mathrm{t}_{K,N}\circ\mathsf{d}_{\overline{E}}$.
Step 7. Adjustments in case $E$ is locally perimeter minimizing in $\Omega$,
i.e. proof of (5.3).
The key observation is the following: if the Laplacian bound (5.3) holds in a
neighbourhood of $\partial E\cap\Omega$, then it holds on
$\left(X\setminus\overline{E}\right)\cap{\mathcal{K}}$. This can be proved
along the lines of Step 2, since all the rays essentially partitioning
$\left(X\setminus\overline{E}\right)\cap{\mathcal{K}}$ start from $\partial
E\cap\Omega$: if we assume that the correct Laplacian bound holds in a
neighbourhood of $\partial E\cap\Omega$, then the bound holds globally on
$\left(X\setminus\overline{E}\right)\cap{\mathcal{K}}$ by one dimensional
considerations along each ray and by the fact that the singular part of
$\Delta\mathsf{d}_{\overline{E}}\llcorner(X\setminus\overline{E})$ is
non-positive. One can then follow the previous contradiction argument
verbatim. ∎
For the sake of applications, it will be useful to understand the regularity
of the distance function from $\partial E$ without the necessity of avoiding
$\partial E$. Thanks to Theorem 5.1 we can prove that
$\mathsf{d}_{\partial E}$ has measure valued Laplacian and that its singular
contribution along $\partial E$ is the surface measure of $\partial E$.
###### Proposition .
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and let $E\subset X$ be a set of locally finite perimeter.
Assume that $E$ is locally perimeter minimizing inside an open domain
$\Omega\subset X$, according to subsection 5.1 and let
$\Omega^{\prime}\Subset\Omega$. Then
$\mathsf{d}_{\overline{E}}:X\to[0,\infty)$ has locally measure valued
Laplacian in a neighbourhood $U$ of $\partial E\cap\Omega^{\prime}$. Moreover,
the following representation formula holds:
(5.24)
$\bm{\Delta}\mathsf{d}_{\overline{E}}=\mathscr{H}^{N-1}\llcorner\partial E+\bm{\Delta}\mathsf{d}_{\overline{E}}\llcorner(X\setminus\overline{E})\,,\quad\text{on $U\supset\partial E\cap\Omega^{\prime}$}\,.$
###### Proof.
The proof relies on the following steps: first we will argue that
$\mathsf{d}_{\overline{E}}$ has locally measure valued Laplacian, relying on
Theorem 5.1 and on the volume bound for the tubular neighbourhood of $\partial
E$ in subsubsection 2.4.6. Then we observe that the Laplacian of
$\mathsf{d}_{\overline{E}}$ is absolutely continuous w.r.t.
$\mathscr{H}^{N-1}$. The sought representation formula follows by computing
the density of $\bm{\Delta}\mathsf{d}_{\overline{E}}\llcorner\partial E$
w.r.t. $\mathscr{H}^{N-1}\llcorner\partial E$ via a blow-up
argument. The strategy is inspired by the proofs of [29, Lemma 7.5 and Theorem
7.4], dealing with the Laplacian of the distance from the boundary on
noncollapsed $\operatorname{RCD}$ spaces.
Step 1. Our goal is to find a locally finite measure $\nu$ such that
$\int_{X}\nabla\varphi\cdot\nabla\mathsf{d}_{\overline{E}}\,\mathop{}\\!\mathrm{d}\mathscr{H}^{N}=-\int_{X}\varphi\mathop{}\\!\mathrm{d}\nu\,,$
for any Lipschitz function $\varphi:X\to\mathbb{R}$ with compact support.
Let us assume for simplicity that $\partial E$ is compact; the general case
can be handled with an additional cut-off argument.
By the coarea formula Theorem 2.4, for almost every $r>0$, the superlevel set
$\\{\mathsf{d}_{\overline{E}}>r\\}$ has finite perimeter. Moreover, the volume
bound for the tubular neighbourhood of the boundary
$\mathscr{H}^{N}(\\{0\leq\mathsf{d}_{\overline{E}}<r\\})\leq Cr\,,$
that follows from subsubsection 2.4.6, together with a further application of
the coarea formula, yield the existence of a sequence $(r_{i})$ with
$r_{i}\downarrow 0$ as $i\to\infty$ such that
(5.25) $\operatorname{Per}(\\{\mathsf{d}_{\overline{E}}>r_{i}\\})\leq
C\;\;\;\text{for any $i\in\mathbb{N}$}\,.$
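The coarea step behind (5.25) can be spelled out as follows (a sketch; $C$ is the constant from the volume bound above). Since $|\nabla\mathsf{d}_{\overline{E}}|=1$ holds $\mathscr{H}^{N}$-a.e. on $X\setminus\overline{E}$,

```latex
\int_{0}^{r}\operatorname{Per}\big(\{\mathsf{d}_{\overline{E}}>s\}\big)\,\mathrm{d}s
=\int_{\{0<\mathsf{d}_{\overline{E}}<r\}}|\nabla\mathsf{d}_{\overline{E}}|\,\mathrm{d}\mathscr{H}^{N}
=\mathscr{H}^{N}\big(\{0<\mathsf{d}_{\overline{E}}<r\}\big)\leq Cr\,,
```

so the average of $s\mapsto\operatorname{Per}(\\{\mathsf{d}_{\overline{E}}>s\\})$ over $(0,r)$ is at most $C$; for every $i\in\mathbb{N}$ one can therefore pick $r_{i}\in(0,1/i)$ with $\operatorname{Per}(\\{\mathsf{d}_{\overline{E}}>r_{i}\\})\leq C$, which is (5.25).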
Since $\mathsf{d}_{\overline{E}}$ has measure valued Laplacian on
$X\setminus\overline{E}=\\{\mathsf{d}_{\overline{E}}>0\\}$, the bounded vector
field $\nabla\mathsf{d}_{\overline{E}}$ has measure valued divergence on the
same domain. Therefore, applying Theorem 2.8 to the vector field
$\varphi\nabla\mathsf{d}_{\overline{E}}$ on the domain
$\\{\mathsf{d}_{\overline{E}}>r_{i}\\}$ we infer that
(5.26)
$\int_{\\{\mathsf{d}_{\overline{E}}>r_{i}\\}}\nabla\varphi\cdot\nabla\mathsf{d}_{\overline{E}}\mathop{}\\!\mathrm{d}\mathscr{H}^{N}=-\int_{\\{\mathsf{d}_{\overline{E}}>r_{i}\\}}\varphi\mathop{}\\!\mathrm{d}\bm{\Delta}\mathsf{d}_{\overline{E}}-\int_{X}\varphi f_{i}\mathop{}\\!\mathrm{d}\operatorname{Per}(\\{\mathsf{d}_{\overline{E}}>r_{i}\\})\,,$
for some Borel functions $f_{i}$ verifying
(5.27) $\left\lVert
f_{i}\right\rVert_{L^{\infty}(\operatorname{Per}(\\{\mathsf{d}_{\overline{E}}>r_{i}\\}))}\leq
1\,.$
Thanks to (5.25) and (5.27), up to extracting a subsequence, the measures
$f_{i}\operatorname{Per}(\\{\mathsf{d}_{\overline{E}}>r_{i}\\})$ weakly
converge to a finite measure $\mu$ on $X$ in duality with continuous
functions. Passing to the limit in (5.26) as $i\to\infty$, we get
(5.28)
$\int_{X}\nabla\varphi\cdot\nabla\mathsf{d}_{\overline{E}}\mathop{}\\!\mathrm{d}\mathscr{H}^{N}=-\lim_{r_{i}\to
0}\int_{\\{\mathsf{d}_{\overline{E}}>r_{i}\\}}\varphi\mathop{}\\!\mathrm{d}\bm{\Delta}\mathsf{d}_{\overline{E}}-\int_{X}\varphi\mathop{}\\!\mathrm{d}\mu\,,$
as we claimed.
The next observation is that the first term on the right-hand side of (5.28)
above is a linear functional with a sign (when $K=0$; otherwise there is a
correction term), and is therefore represented by a measure.
Indeed, combining (5.28) with Theorem 5.1, we have
$\int_{X}\nabla\varphi\cdot\nabla\mathsf{d}_{\overline{E}}\mathop{}\\!\mathrm{d}\mathscr{H}^{N}\geq
K\int_{X}\varphi\,\mathsf{d}_{\overline{E}}\,\mathop{}\\!\mathrm{d}\mathscr{H}^{N}-\int_{X}\varphi\,\mathop{}\\!\mathrm{d}\mu\,,$
for any $\varphi\in\operatorname{LIP}_{c}(X)$ s.t. $\varphi\geq 0$.
In particular
$\varphi\mapsto\int\nabla\varphi\cdot\nabla\mathsf{d}_{\overline{E}}\mathop{}\\!\mathrm{d}\mathscr{H}^{N}+\int\varphi\mathop{}\\!\mathrm{d}\mu-K\int\varphi\,\mathsf{d}_{\overline{E}}\,\mathop{}\\!\mathrm{d}\mathscr{H}^{N}$
is a non-negative linear map. Hence there exists a non-negative locally finite
measure $\eta$ such that
$\int_{X}\nabla\varphi\cdot\nabla\mathsf{d}_{\overline{E}}\mathop{}\\!\mathrm{d}\mathscr{H}^{N}+\int_{X}\varphi\mathop{}\\!\mathrm{d}\mu-K\int_{X}\varphi\,\mathsf{d}_{\overline{E}}\mathop{}\\!\mathrm{d}\mathscr{H}^{N}=\int_{X}\varphi\mathop{}\\!\mathrm{d}\eta\,,$
for any $\varphi\in\operatorname{LIP}_{c}(X)$. This implies that
$\mathsf{d}_{\overline{E}}$ has measure valued Laplacian on $X$.
Step 2. Thanks to subsubsection 2.4.4, we have that
$\bm{\Delta}\mathsf{d}_{\overline{E}}\ll\mathscr{H}^{N-1}$.
To check that
$\bm{\Delta}\mathsf{d}_{\overline{E}}\llcorner\partial E=\mathscr{H}^{N-1}\llcorner\partial E$, by standard
differentiation of measures (recall that in general the perimeter measure of
any set of finite perimeter is asymptotically doubling, therefore the
differentiation theorem applies), it suffices to prove that
(5.29) $\lim_{r\downarrow
0}\frac{\bm{\Delta}\mathsf{d}_{\overline{E}}(B_{r}(x))}{\operatorname{Per}(E,B_{r}(x))}=1\,,\quad\text{for
$\operatorname{Per}$-a.e. $x\in\partial E$}\,.$
The validity of (5.29) can be proved thanks to Theorem 2.11. Indeed, it is
sufficient to prove that the density estimate holds at regular boundary points
of $E$, i.e. those points where the blow-up is a Euclidean half-space
$\mathbb{H}^{N}\subset\mathbb{R}^{N}$.
Under this assumption, along the sequence
$X_{i}:=(X,\mathsf{d}/r_{i},\mathscr{H}^{N}/r_{i}^{N},x,E)$ of scaled spaces
converging to the blow-up, the sets $E\subset X$ converge in $L^{1}_{{\rm
loc}}$ to $\mathbb{H}^{N}$. By Theorem 2.11 the convergence can be strengthened
to Kuratowski convergence of $\partial E_{i}\subset X_{i}$ to
$\partial\mathbb{H}^{N}$, which implies in turn the uniform convergence of
$\mathsf{d}_{\overline{E}}:X_{i}\to\mathbb{R}$ to
$\mathsf{d}_{\mathbb{H}^{N}}$. Moreover, this is easily seen to imply the
$H^{1,2}_{{\rm loc}}$ convergence of
$\mathsf{d}_{\overline{E}}:X_{i}\to\mathbb{R}$ to
$\mathsf{d}_{\mathbb{H}^{N}}$. Then the distributional Laplacians of
$\mathsf{d}_{\overline{E}}$ weakly converge as measures to the distributional
Laplacian of $\mathsf{d}_{\mathbb{H}^{N}}$, and (5.29) follows from the
standard properties of weak convergence. ∎
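The model computation underlying the blow-up step can be checked directly. The sketch below works in the hypothetical flat model $X=\mathbb{R}^{2}$, $E=\mathbb{H}^{2}=\\{y<0\\}$, where $\mathsf{d}_{\overline{E}}(x,y)=\max(y,0)$, and verifies that the distributional Laplacian of the distance from the half-plane is the surface measure of the line $\\{y=0\\}$, i.e. $\int\mathsf{d}_{\overline{E}}\,\Delta\phi=\int\phi(x,0)\,\mathrm{d}x$; a rapidly decaying Gaussian stands in for a compactly supported test function:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Test function: a Gaussian (rapidly decaying; stands in for a compactly
# supported test function in this sketch).
phi = sp.exp(-x**2 - y**2)
lap_phi = sp.diff(phi, x, 2) + sp.diff(phi, y, 2)

# d(x, y) = max(y, 0) vanishes on {y < 0}, so it suffices to integrate
# y * Lap(phi) over the upper half-plane.
lhs = sp.integrate(sp.integrate(y*lap_phi, (y, 0, sp.oo)), (x, -sp.oo, sp.oo))

# Surface measure of the line {y = 0} applied to phi.
rhs = sp.integrate(phi.subs(y, 0), (x, -sp.oo, sp.oo))

assert sp.simplify(lhs - rhs) == 0   # both sides equal sqrt(pi)
```

This is exactly the singular contribution $\mathscr{H}^{N-1}\llcorner\partial\mathbb{H}^{N}$ that the density estimate (5.29) detects in the limit.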
Up to now, we have studied the properties of the distance function from a
locally perimeter minimizing set, outside of the set. An inspection of the
proof of Theorem 5.1 shows that we actually relied only on inner perturbations
of the set $E$ to obtain properties of the Laplacian of the distance from $E$
outside of $E$.
As is natural to expect, exploiting the full local minimality condition we
obtain sharper statements about the distance (and the signed distance)
function from $\partial E$ on both sides of $E$, whenever $E$ is locally
perimeter minimizing. Recall also that if $E\subset X$ is a set of finite
perimeter, locally minimizing the perimeter functional, then we can (and will)
assume that $E$ is open (up to choosing a suitable a.e. representative).
###### Theorem 5.2.
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space. Let $E\subset X$ be a set of locally finite perimeter and
suppose that it is locally perimeter minimizing inside an open domain
$\Omega\subset X$, according to subsection 5.1. Let $\mathsf{d}_{\partial
E}:X\to\mathbb{R}$ be the distance function from the boundary of $E$. Then
$\mathsf{d}_{\partial E}$ has locally measure valued Laplacian on $X$.
Moreover, for any open subset $\Omega^{\prime}\Subset\mathcal{K}$ (where
$\mathcal{K}$ was defined in (5.2)), it holds:
(5.30) $\bm{\Delta}\mathsf{d}_{\partial
E}\leq\mathrm{t}_{K,N}\circ\mathsf{d}_{\partial E}\,,\quad\text{on
$E\cap\Omega^{\prime}$}\,,\quad\bm{\Delta}\mathsf{d}_{\partial
E}\leq\mathrm{t}_{K,N}\circ\mathsf{d}_{\partial E}\,,\quad\text{on
$\big(X\setminus\overline{E}\big)\cap\Omega^{\prime}$}\,,$
where $\mathrm{t}_{K,N}$ was defined in (1.1). Moreover,
(5.31) $\bm{\Delta}\mathsf{d}_{\partial E}\llcorner\big(\partial
E\cap\Omega^{\prime}\big)=\mathscr{H}^{N-1}\llcorner\big(\partial
E\cap\Omega^{\prime}\big)\,.$
Under the same assumptions, denoting by $\mathsf{d}^{s}_{E}$ the signed
distance function from $E$ (with the convention that it is positive outside of
$E$ and negative inside), $\mathsf{d}^{s}_{E}$ has measure valued Laplacian on
$\Omega^{\prime}$ and
(5.32)
$\bm{\Delta}\mathsf{d}^{s}_{E}\geq\mathrm{t}_{K,N}\circ\mathsf{d}^{s}_{E}\,,\quad\text{on
$E\cap\Omega^{\prime}$}\,,\quad\bm{\Delta}\mathsf{d}^{s}_{E}\leq\mathrm{t}_{K,N}\circ\mathsf{d}^{s}_{E}\,,\quad\text{on
$\left(X\setminus\overline{E}\right)\cap\Omega^{\prime}$}\,,$
and
(5.33) $\bm{\Delta}\mathsf{d}^{s}_{E}\llcorner\left(\partial
E\cap\Omega^{\prime}\right)=0\,.$
###### Remark .
With the same caveat about the interpretation of the Laplacian bounds when
restricted to a measurable (possibly non-open) set as in subsection 5.1, the
Laplacian bounds (5.30), (5.31), (5.32) and (5.33) actually hold more strongly
by replacing $\Omega^{\prime}$ with $\mathcal{K}$.
###### Proof.
The first part of the statement follows from Theorem 5.1 and subsection 5.1,
applied to the distance from $\overline{E}$ and to the distance from
$\overline{X\setminus E}$. Notice indeed that, under our assumptions on $E$,
also $X\setminus E$ is locally perimeter minimizing inside $\Omega$.
To deal with the signed distance function $\mathsf{d}^{s}_{E}$, notice that it
coincides with $\mathsf{d}_{\partial E}$ on
$\left(X\setminus\overline{E}\right)\cap\mathcal{K}$ and with $-\mathsf{d}_{\partial
E}$ on $E\cap\mathcal{K}$. Then, arguing as in the proof of subsection 5.1, it is
possible to prove that $\mathsf{d}^{s}_{E}$ has measure valued Laplacian and
(5.32) follows.
To determine the restriction of $\bm{\Delta}\mathsf{d}^{s}_{E}$ to $\partial
E$, it is enough to adjust the argument in Step 2 of the proof of subsection
5.1. The key remark is that, when blowing up, the distance function from the
boundary converges to the distance function from the half-space, whose
distributional Laplacian has a singular contribution given by the surface
measure of the hyperplane. The signed distance function, instead, converges to
the signed distance function from the half-space after blowing up, which is a
coordinate function, hence in particular it is harmonic. This shows, through
the density estimate via blow-up, that (5.33) holds. ∎
The range of applications of Theorem 5.1 and Theorem 5.2 is expected to be
broad. For the sake of illustration, here we present an extension of a
celebrated property of minimal surfaces in manifolds with positive Ricci
curvature, the so-called Frankel’s theorem. As another application, in section
6 we will investigate some consequences of the mean curvature bounds at the
level of regularity.
It is a classical fact that two smooth minimal hypersurfaces in a manifold
with (strictly) positive Ricci curvature must intersect each other. This is
known as Frankel’s theorem after [63], where similar results were obtained
under the stronger assumption of positive sectional curvature. In the present
formulation the statement appears in [116], whose proof we can now follow,
given our understanding of mean curvature bounds for locally perimeter
minimizing sets on $\operatorname{RCD}$ spaces, after Theorem 5.1 and Theorem
5.2.
###### Theorem 5.3 (Generalized Frankel’s Theorem).
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(N-1,N)$ metric
measure space. Let $\Sigma_{1},\Sigma_{2}\subset X$ be closed sets such that,
for any $i=1,2$ and any $x\in\Sigma_{i}$, there exist a ball $B_{r}(x)$ and a
set of finite perimeter $E\subset X$ such that $E$ is locally perimeter
minimizing in $B_{2r}(x)$ and $\Sigma_{i}\cap B_{r}(x)=\partial E\cap
B_{r}(x)$. Then
$\Sigma_{1}\cap\Sigma_{2}\neq\emptyset\,.$
###### Proof.
Let $\mathsf{d}_{1}$ and $\mathsf{d}_{2}$ denote $\mathsf{d}_{\Sigma_{1}}$ and
$\mathsf{d}_{\Sigma_{2}}$ respectively and let
$\bar{\mathsf{d}}:=\mathsf{d}_{1}+\mathsf{d}_{2}$.
Assume by contradiction that $\Sigma_{1}\cap\Sigma_{2}=\emptyset$. Then it is
easily seen that $\bar{\mathsf{d}}$ attains one of its minima at a point $x\in
X\setminus(\Sigma_{1}\cup\Sigma_{2})$. Indeed, it is sufficient to consider a
minimizing geodesic between $\Sigma_{1}$ and $\Sigma_{2}$, whose length is
$\mathsf{d}(\Sigma_{1},\Sigma_{2})>0$, and pick a point in its interior.
By Theorem 5.1,
$\bm{\Delta}\mathsf{d}_{1}\leq-(N-1)\mathsf{d}_{1}\,,\quad\text{and}\quad\bm{\Delta}\mathsf{d}_{2}\leq-(N-1)\mathsf{d}_{2}\,,\quad\text{on
$X\setminus(\Sigma_{1}\cup\Sigma_{2})$}\,.$
Hence
(5.34) $\bm{\Delta}\bar{\mathsf{d}}\leq-(N-1)\bar{\mathsf{d}}\,,\quad\text{on
$X\setminus(\Sigma_{1}\cup\Sigma_{2})$}\,.$
In particular, there is a neighbourhood $U$ of $x$ such that
$\bar{\mathsf{d}}$ is superharmonic on $U$ and attains a minimum at the
interior point $x$. The strong maximum principle implies that
$\bar{\mathsf{d}}$ is constant in a neighbourhood of $x$, which contradicts
the strict superharmonicity of $\bar{\mathsf{d}}$ in (5.34), since
$\bar{\mathsf{d}}(x)>0$. ∎
###### Remark .
The assumptions of Theorem 5.3 cover in particular the classical case of
smooth minimal hypersurfaces in closed manifolds with positive Ricci
curvature. Indeed, as we already mentioned, smooth minimal hypersurfaces are,
locally, perimeter minimizing boundaries.
## 6\. Regularity theory
This section is dedicated to the partial regularity theory for minimal
boundaries on noncollapsed $\operatorname{RCD}$ spaces. Our main result will
be that they are topologically regular away from sets of ambient codimension
three, and away from the boundary of the space. Besides a sharp Hausdorff
dimension estimate (see Theorem 6.5), we will also obtain a Minkowski estimate
for the quantitative singular set (see Theorem 6.7). Following a classical
pattern, these results will be achieved through two intermediate steps:
* •
an $\varepsilon$-regularity result, Theorem 6.1, showing that, under certain
assumptions at a given location and scale, a minimal boundary is topologically
regular;
* •
the analysis dedicated to guarantee that the assumptions of the
$\varepsilon$-regularity theorem are verified at many locations and scales
along the minimal boundary. This is pursued as follows:
* –
in subsection 6.3, via dimension reduction arguments, we prove sharp Hausdorff
dimension estimates of the singular set (see Theorem 6.5). Here the arguments
depart from the classical ones: in the Euclidean (resp. smooth) setting,
minimal boundaries satisfy a very powerful monotonicity formula (resp. up to a
lower order term) which implies that every tangent space to a minimal boundary
is a cone. In the present non-smooth setting, it does not seem possible to
repeat the Euclidean/smooth computations, and it is not clear whether such a
(perturbed) monotonicity formula holds;
* –
in subsection 6.2 we prove sharp perimeter bounds for the equidistant sets
from locally minimal boundaries which will be used in subsection 6.4 to obtain
the quantitative regularity results (see Theorem 6.7) through a series of
covering arguments that control the regularity of the space and the regularity
of the minimal boundary together. The interpretation of minimality via
Laplacian bounds on the distance function obtained in subsection 5.1 will play
a key role here.
As some examples will show, the threshold dimension for full regularity is
lower in this framework than in the Euclidean case: our Hausdorff codimension
three estimate for the singular set is sharp (see subsection 6.3); moreover,
already in ambient dimension $4$ there are examples of tangent cones with no
Euclidean splittings (see subsection 6.1) and of topologically irregular
minimal boundaries.
### 6.1. An $\varepsilon$-regularity theorem
The aim of this subsection is to establish an $\varepsilon$-regularity result
for minimal boundaries. This will provide a (weak) counterpart of the
classical statement for minimal boundaries in the Euclidean setting.
Usually, the outcome of an $\varepsilon$-regularity theorem is that if a
certain solution is close enough to a rigid model then it is regular. The
celebrated result for minimal boundaries in the Euclidean case from [54] says
that a minimal boundary contained in a sufficiently small strip around a
hyperplane is analytic.
Arguably, and as elementary examples show, this is too much to hope for in the
present setting. Our $\varepsilon$-regularity result will be more in the
spirit of Reifenberg’s original approach: we will show that a minimal boundary
which is close enough to the boundary of a half-space (in the Gromov-Hausdorff
sense) is topologically regular.
This could be considered as the counterpart for minimal boundaries of the
celebrated $\varepsilon$-regularity result for manifolds with lower Ricci
curvature bounds obtained in [49, 41] and extended to $\operatorname{RCD}$
spaces in [89], see Theorem 2.2.
To avoid confusion, let us clarify that in this subsection by local perimeter
minimizer in an open domain we mean that the perimeter is minimized among
all the competitors that are perturbations inside the domain. This is a much
stronger requirement than the one considered in subsection 5.1 to obtain mean
curvature bounds. For smooth hypersurfaces in smooth ambient spaces,
subsection 5.1 would correspond to minimality (i.e. vanishing mean curvature),
while here we will be concerned with local area minimizers.
Moreover, this subsection will be independent of the theory of mean curvature
bounds that we have developed so far. Mean curvature bounds will enter into
play later on, when proving that the assumptions of the
$\varepsilon$-regularity theorem are in force at many locations and scales,
see subsection 6.2 and subsection 6.3.
Let us introduce some useful terminology, adapting the notion of flatness from
the Euclidean to the non-smooth and non-flat case. With respect to the
Euclidean realm, in the non-flat framework there are many more rigid
situations to be considered. This is also due to the following result,
yielding the existence of a large family of flat minimal boundaries.
###### Lemma .
Let $(Y,\mathsf{d}_{Y},\mathscr{H}^{N-1})$ be an $\operatorname{RCD}(0,N-1)$
metric measure space and let $X:=\mathbb{R}\times Y$ be endowed with the
canonical product metric measure structure. Let $E:=\\{t<0\\}$, where we
denoted by $t$ the coordinate of the Euclidean factor $\mathbb{R}$. Then $E$
is a perimeter minimizing set.
###### Proof.
The vector field $\nabla t$ is easily checked to be a calibration for $E$
($t$ is harmonic, hence $\nabla t$ has vanishing divergence). The conclusion
follows from a classical calibration argument, exploiting Theorem 2.7 and
Theorem 2.9 as in the smooth setting. ∎
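The calibration inequality can be illustrated numerically (this is a sketch in the simplest hypothetical model $X=\mathbb{R}\times\mathbb{R}$ with $E=\\{t<0\\}$, not the proof above): any competitor whose boundary is a graph $\\{t=g(y)\\}$ over the window $y\in[0,1]$, agreeing with $\\{t=0\\}$ at the endpoints, has perimeter $\int_{0}^{1}\sqrt{1+g^{\prime}(y)^{2}}\,\mathrm{d}y\geq 1$, because the calibration $\nabla t$ has unit norm and pairs to at most $1$ with the competitor's unit normal; the graph ansatz and all names below are illustrative:

```python
import numpy as np

# In X = R x R the lemma's set is E = {t < 0}, with flat boundary {t = 0}
# of length 1 over the window y in [0, 1].
rng = np.random.default_rng(0)
y = np.linspace(0.0, 1.0, 2001)
dy = y[1] - y[0]
flat_perimeter = 1.0

worst = float('inf')
for _ in range(200):
    # Random perturbation g with g(0) = g(1) = 0: the competitor boundary
    # is the graph {t = g(y)}, a perturbation of {t = 0} inside the window.
    coeffs = rng.normal(size=5)
    g = sum(c*np.sin((k + 1)*np.pi*y) for k, c in enumerate(coeffs))
    gp = np.gradient(g, y)
    f = np.sqrt(1.0 + gp**2)                     # arclength element, >= 1
    per = 0.5*float(np.sum(f[1:] + f[:-1]))*dy   # trapezoidal rule
    worst = min(worst, per)

# Every competitor has perimeter at least that of the calibrated set E.
assert worst >= flat_perimeter - 1e-9
```

The pointwise bound $\sqrt{1+g^{\prime 2}}\geq 1=\nabla t\cdot\nu_{E}$ is precisely the inequality that the calibration argument integrates.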
Recall that convergence in the $L^{1}$ strong sense of sets of finite
perimeter along pmGH converging sequences of metric measure spaces is
metrizable, see [6, Appendix A]. By the above, we are entitled to give the
following.
###### Definition ($\varepsilon$-flat points).
Let $\varepsilon>0$. If $(X,\mathsf{d},\mathscr{H}^{N})$ is an
$\operatorname{RCD}(-\varepsilon,N)$ metric measure space and $E\subset X$ is
a set of finite perimeter, perimeter minimizing in $B_{2}(x)\subset X$ such
that:
* •
there exists an $\operatorname{RCD}(0,N-1)$ metric measure space
$(Y,\mathsf{d}_{Y},\mathscr{H}^{N-1},y)$ such that the ball $B_{2}(x)\subset
X$ is $\varepsilon$-GH close to the ball $B_{2}((0,y))\subset\mathbb{R}\times
Y$;
* •
$E$ is $\varepsilon$-close on $B_{2}(x)$ in the $L^{1}$ topology to
$\\{t<0\\}\subset\mathbb{R}\times Y$ and $\partial E\cap B_{2}(x)$ is
$\varepsilon$-GH close to $\\{t=0\\}\cap B_{2}(0,y)\subset\mathbb{R}\times Y$;
then we shall say that $E$ is $\varepsilon$-flat at $x$ in $B_{2}(x)$.
The notion of $\varepsilon$-flat set at $x$ in $B_{r}(x)$ can be introduced
analogously by scaling.
###### Definition ($\varepsilon$-regular points).
Let $\varepsilon>0$. If $(X,\mathsf{d},\mathscr{H}^{N})$ is an
$\operatorname{RCD}(-\varepsilon,N)$ metric measure space and $E\subset X$ is
a set of finite perimeter, perimeter minimizing in $B_{2}(x)\subset X$, such
that:
* •
the ball $B_{2}(x)\subset X$ is $\varepsilon$-GH close to the ball
$B_{2}(0^{N})\subset\mathbb{R}^{N}$;
* •
$E$ is $\varepsilon$-close on $B_{2}(x)$ in the $L^{1}$ topology to
$\\{t<0\\}\subset\mathbb{R}^{N}$ and $\partial E\cap B_{2}(x)$ is
$\varepsilon$-GH close to $\\{t=0\\}\cap B_{2}(0^{N})\subset\mathbb{R}^{N}$,
where we denoted by $t$ one of the canonical coordinates on $\mathbb{R}^{N}$;
then we shall say that $E$ is $\varepsilon$-regular at $x$ in $B_{2}(x)$.
The notion of $\varepsilon$-regular set at $x$ in $B_{r}(x)$ can be introduced
analogously by scaling.
###### Remark .
Let $E\subset X$ be perimeter minimizing inside an open domain $\Omega\subset
X$. Let $x\in\partial E$ and assume that there exists an
$\operatorname{RCD}(0,N-1)$ metric measure space
$(Y,\mathsf{d}_{Y},\mathscr{H}^{N-1},y)$ such that, denoting by $t$ the
coordinate of the split factor $\mathbb{R}$ in the product $\mathbb{R}\times
Y$ with canonical product metric measure structure,
$\big{\\{}\big{(}\\{t<0\\},(0,y),\mathbb{R}\times
Y\big{)}\big{\\}}\in\operatorname{Tan}_{x}(E,X,\mathsf{d},\mathscr{H}^{N})\,.$
Then, for any $\varepsilon>0$ and any $r_{0}>0$, there exists $0<r<r_{0}$ such
that $E$ is $\varepsilon r$-flat at $x$ in $B_{r}(x)$. This is a direct consequence
of Theorem 2.11, together with the very definition of tangent to a set of
finite perimeter.
Analogously, if
$\big{\\{}\big{(}\\{t<0\\},0^{N},\mathbb{R}^{N}\big{)}\big{\\}}\in\operatorname{Tan}_{x}(E,X,\mathsf{d},\mathscr{H}^{N})\,,$
then for any $\varepsilon>0$ and for any $r_{0}>0$ there exists $0<r<r_{0}$
such that $E$ is $\varepsilon r$-regular at $x$ in $B_{r}(x)$.
Below, we shall fix the scale $r=1$. As we already argued, the statements are
scale invariant, so this is no loss of generality.
The stability of perimeter minimizers allows one to obtain a measure bound
from Gromov-Hausdorff closeness.
###### Lemma (Perimeter density estimate for perimeter minimizers).
For any $\delta>0$ there exists $\varepsilon=\varepsilon(\delta,N)>0$ such
that the following holds. If $(X,\mathsf{d},\mathscr{H}^{N})$ is an
$\operatorname{RCD}(-\varepsilon,N)$ metric measure space, $E\subset X$ is
perimeter minimizing in $B_{4}(x)$, $x\in\partial E$ and $E$ is
$\varepsilon$-regular at $x$ in $B_{2}(x)$, then
(6.1) $1-\delta\leq\frac{\operatorname{Per}(E,B_{1}(x))}{\omega_{N-1}}\leq
1+\delta\,,$
where $\omega_{N-1}$ denotes the volume of the unit ball in
$\mathbb{R}^{N-1}$.
###### Proof.
The statement can be proved by a contradiction argument.
Consider a sequence of sets of finite perimeter $E_{n}\subset X_{n}$, where
$x_{n}\in\partial E_{n}$, $(X_{n},\mathsf{d}_{n},\mathscr{H}^{N})$ are
$\operatorname{RCD}(-1/n,N)$ metric measure spaces, $E_{n}$ is $1/n$-regular
in $B_{2}(x_{n})$ and perimeter minimizing in $B_{4}(x_{n})$. Then the
following holds: the balls
$B_{2}(x_{n})\subset(X_{n},\mathsf{d}_{n},\mathscr{H}^{N},x_{n})$ are
converging to
$B_{2}(0^{N})\subset(\mathbb{R}^{N},\mathsf{d}_{\mathrm{eucl}},\mathscr{H}^{N},0^{N})$
in the pmGH topology and the sets of finite perimeter $E_{n}$ are converging
to $\mathbb{H}^{N}$ on $B_{2}(0^{N})$ in the $L^{1}_{{\rm loc}}$-topology,
with boundaries $\partial E_{n}$ Hausdorff converging to the boundary
$\partial\mathbb{H}^{N}$ on $B_{2}(0^{N})$.
Then
$\operatorname{Per}(E_{n},B_{1}(x_{n}))\to\operatorname{Per}(\mathbb{H}^{N},B_{1}(0^{N}))=\omega_{N-1}\,,\quad\text{as
$n\to\infty$}\,,$
thanks to the weak convergence of perimeter measures in Theorem 2.11 and the
observation that $\operatorname{Per}(\mathbb{H}^{N},\partial B_{1}(0^{N}))=0$.
∎
###### Remark .
Let us recall that we can associate to any locally area-minimizing cone
$C\subset\mathbb{R}^{N}$ with vertex at $0\in\mathbb{R}^{N}$ its density
$\Theta_{0,C}:=\frac{\operatorname{Per}(C,B_{1}(0))}{\omega_{N-1}}=\frac{\operatorname{Per}(C,B_{r}(0))}{\omega_{N-1}r^{N-1}}\,,\quad\text{for
any $0<r<\infty$}\,.$
Then, among all the possible densities of minimal cones
$C\subset\mathbb{R}^{N}$, the halfspace attains the minimal one, and there is
a strictly positive gap between the density of the half-space and the
densities of all the other minimal cones.
This can be rephrased by saying that there exists $c_{N}>0$ such that, for any
minimal cone $C\subset\mathbb{R}^{N}$ with vertex at $0^{N}$ and different
from the half-space,
(6.2) $\Theta_{0,C}>1+c_{N}=\Theta_{0,\mathbb{H}^{N}}+c_{N}\,.$
The statement is classical, and can be proved by contradiction, relying on
the regularity theory for perimeter minimizers. In more detail, the
density at the vertex of a cone equals its density at infinity, which is
independent of the chosen base point. Namely
(6.3)
$\Theta_{0,C}=\lim_{r\to\infty}\frac{\operatorname{Per}(C,B_{r}(0))}{\omega_{N-1}r^{N-1}}=\lim_{r\to\infty}\frac{\operatorname{Per}(C,B_{r}(p))}{\omega_{N-1}r^{N-1}}\,,$
for any $p\in\partial C$. By the regularity theory, we can choose $p$ to be a
regular boundary point and apply the monotonicity formula to infer that
(6.4) $\Theta_{0,C}\geq\lim_{r\to
0}\frac{\operatorname{Per}(C,B_{r}(p))}{\omega_{N-1}r^{N-1}}=\Theta_{0,\mathbb{H}^{N}}\,.$
The argument above also shows that a cone with the same density as the
half-space must be the half-space.
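For concreteness, the normalization $\Theta_{0,\mathbb{H}^{N}}=1$ used above can be checked directly (a short computation, not in the original text):

```latex
% Density of the half-space H^N = {x_N < 0} at the vertex 0:
% the boundary of H^N inside B_r(0) is an (N-1)-dimensional disk of radius r.
\operatorname{Per}(\mathbb{H}^{N},B_{r}(0))
  = \mathscr{H}^{N-1}\big(\{x_{N}=0\}\cap B_{r}(0)\big)
  = \omega_{N-1}\,r^{N-1}\,,
\qquad\text{so}\qquad
\Theta_{0,\mathbb{H}^{N}}
  = \frac{\operatorname{Per}(\mathbb{H}^{N},B_{r}(0))}{\omega_{N-1}\,r^{N-1}}
  = 1\quad\text{for every }0<r<\infty\,.
```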
In order to prove (6.2) we argue by contradiction. If there is a sequence of
cones $C_{n}$, all different from the half-space, and with densities
converging to the density of the half-space, by compactness and stability we
can extract a subsequence converging to a perimeter minimizer. The density at
infinity of this limit minimizer is easily seen to equal
$\Theta_{0,\mathbb{H}^{N}}$. By the above considerations, the limit is the
half-space. By the $\varepsilon$-regularity theorem, $C_{n}$ is smooth on
$B_{1}(0)$ for any sufficiently large $n$. This contradicts the assumption
that $C_{n}$ is a cone different from the half-space.
In the Euclidean theory minimal boundaries are smooth if the ambient
dimension is less than or equal to $7$. Moreover, they are smooth in any
dimension in a region where they are sufficiently flat. These statements are
the outcome of the classification of minimal cones up to dimension $7$ and of
the already mentioned $\varepsilon$-regularity theorem in [54].
Notice that subsection 6.1 shows that there is no hope for such a statement in
our setting: consider a (possibly singular) Alexandrov space of dimension two
and its product with a line; then the Alexandrov space is a minimal boundary
inside the product. Hence the best regularity we can achieve for minimal
boundaries in ambient dimension three is the regularity of two dimensional
Alexandrov spaces.
Nevertheless, one might hope that sufficiently flat minimal boundaries in the
sense of subsection 6.1 have flat tangents (i.e. are $0$-flat). It turns out
that this is not the case, at least when the ambient dimension is greater
than $4$, due to the following.
###### Remark .
Denote by $\mathbb{S}^{3}_{r}$ the three dimensional sphere of radius $r$
endowed with the canonical Riemannian metric, and by $\mathbb{H}^{3}_{r}$ the
upper hemisphere. Let also $0$ denote the tip of the cone. In [111] it is
shown that the cone $C(\mathbb{H}^{3}_{r})$ is perimeter minimizing in
$B_{1}(0)\subset C(\mathbb{S}^{3}_{r})$, for $r<1$ sufficiently close to $1$.
The effect of this remark is that, in our framework, there cannot be an
improvement of flatness as happens in the classical case, at least for
ambient dimension greater than $4$. The best we can hope for is that flatness
is preserved along scales.
###### Theorem 6.1 ($\varepsilon$-regularity).
Let $N>1$ be fixed. For any $\varepsilon>0$ there exists
$\delta=\delta(\varepsilon,N)>0$ such that the following holds. Let
$(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(-\delta,N)$ metric
measure space, $E\subset X$ be a set of locally finite perimeter,
$x\in\partial E$ be such that $E$ is perimeter minimizing on $B_{4}(x)$ and
$E$ is $\delta$-regular in $B_{2}(x)$; then for any $y\in\partial E\cap
B_{1}(x)$ and for any $0<r<1$, $E$ is $\varepsilon r$-regular in $B_{r}(y)$.
Moreover, for any $0<\alpha<1$, there exists $\delta=\delta(\alpha,N)>0$ such
that if $X$ and $E$ are as above (in particular, $x\in\partial E$ and $E$ is
$\delta$-regular at $x$ in $B_{2}(x)$), then $\partial E\cap B_{1}(x)$ is
$C^{\alpha}$-homeomorphic to the ball $B_{1}(0^{N-1})\subset\mathbb{R}^{N-1}$.
###### Proof.
We argue by contradiction. Let us suppose that the conclusion is not true.
Then we can find $\varepsilon>0$, a sequence of $\operatorname{RCD}(-1/n,N)$
metric measure spaces $(X_{n},\mathsf{d}_{n},\mathscr{H}^{N},x_{n})$ and
locally perimeter-minimizing sets $E_{n}\subset X_{n}$ such that $x_{n}\in\partial
E_{n}$, $E_{n}$ is $1/n$-regular at $x_{n}$ in $B_{2}(x_{n})$ but there exist
$y_{n}\in B_{1}(x_{n})\cap\partial E_{n}$ and $r_{n}>0$ such that:
* (i)
$E_{n}$ is $\varepsilon r$-regular at $y_{n}$ in $B_{r}(y_{n})$ for any
$r_{n}<r<1$ and for any $n\in\mathbb{N}$;
* (ii)
$E_{n}$ is not $\varepsilon r_{n}/2$-regular at $y_{n}$ in
$B_{r_{n}/2}(y_{n})$.
It is easy to check that these assumptions force $r_{n}\to 0$. Moreover, we
can assume $\varepsilon>0$ small enough so that $\delta>0$ in (6.1) is smaller
than the density gap $c_{N}$ of (6.2).
Now let us rescale along the sequence in order to let the critical scales
$r_{n}$ become scale $1$. If we do so, letting
$\tilde{X}_{n}:=(X_{n},\mathsf{d}_{n}/r_{n},\mathscr{H}^{N},y_{n})$ and
looking at the sets $E_{n}$ in the rescaled metric measure spaces, by Theorem
2.2, $\tilde{X}_{n}$ converge in the pmGH topology to
$(\mathbb{R}^{N},\mathsf{d}_{\mathrm{eucl}},\mathscr{H}^{N},0^{N})$. Moreover,
thanks to Theorem 2.11 the sets $E_{n}$ converge in the $L^{1}_{{\rm loc}}$
topology to an entire minimizer of the perimeter $F\subset\mathbb{R}^{N}$.
Taking into account (i) and subsection 6.1, we can also infer that
(6.5)
$1-\delta\leq\frac{\operatorname{Per}(F,B_{r}(0^{N}))}{\omega_{N-1}r^{N-1}}\leq
1+\delta\,,\quad\text{for any $1<r<\infty$}\,.$
Since $F$ is an entire perimeter minimizer in $\mathbb{R}^{N}$, the standard
Euclidean monotonicity formula yields that
(6.6) $r\mapsto\frac{\operatorname{Per}(F,B_{r}(z))}{\omega_{N-1}r^{N-1}}$
is an increasing function, for any $z\in\partial F$. By (6.5), which
guarantees compactness of the sequence of scalings $F_{0,r}$ of $F$ for
$r>1$, we are allowed to consider a blow-down $G$ of $F$. A standard
consequence of the monotonicity formula is that $G$ is an entire minimal
cone in $\mathbb{R}^{N}$.
Moreover, by (6.5) and our choice of $\delta>0$, we have that
$1-\delta\leq\Theta_{0,G}\leq 1+\delta<1+c_{N}\,.$
Hence, by the Euclidean density gap subsection 6.1 and monotonicity, we infer
that $\Theta_{0,G}=1$. Therefore
(6.7)
$\lim_{r\to\infty}\frac{\operatorname{Per}(F,B_{r}(0))}{\omega_{N-1}r^{N-1}}=\Theta_{0,G}=1\,.$
Observe that the density at infinity of the entire minimal surface $F$ is
independent of the base point $z\in\partial F$, as one can easily verify.
Moreover, by De Giorgi’s theorem, there exists $z_{0}\in\partial F\cap B_{1}(0)$ such
that
(6.8) $\lim_{r\to
0}\frac{\operatorname{Per}(F,B_{r}(z_{0}))}{\omega_{N-1}r^{N-1}}=1\,.$
Relying again on the monotonicity formula, by (6.7) and (6.8) we infer that
$\frac{\operatorname{Per}(F,B_{r}(z_{0}))}{\omega_{N-1}r^{N-1}}=1\,,\quad\text{for
any $0<r<\infty$}\,.$
Then with a standard argument we obtain that $F$ is a half-space
$\mathbb{H}^{N}$ passing through $0$.
By condition (ii) above, the sets $E_{n}$, when considered in the scaled
metric measure spaces $\tilde{X}_{n}$, are not $\varepsilon/2$-regular at
$y_{n}$ in $B_{1/2}(y_{n})$. This clearly gives a contradiction, since their
limit is a half-space, as we just argued; in particular, they are
$\varepsilon/2$-regular at $y_{n}$ in $B_{1/2}(y_{n})$ as soon as $n$ is large
enough.
The second part of the statement follows from the previous one via
Reifenberg’s theorem for metric spaces, see for instance [41, Appendix 1]. ∎
###### Corollary .
Let $N>1$ be fixed. Then there exists $\delta=\delta(N)>0$ such that the
following holds. If $(M^{N},g)$ is a smooth $N$-dimensional Riemannian
manifold and $E\subset M$ is a set of locally finite perimeter such that, for
some $x\in M$ and $r>0$,
* (i)
$\operatorname{Ric}_{M}\geq-\delta r^{-2}$ on $B_{4r}(x)$;
* (ii)
$E$ is perimeter minimizing in $B_{4r}(x)$;
* (iii)
$E$ is $\delta$-regular at $x$ on $B_{2r}(x)$.
Then $\partial E\cap B_{r}(x)$ is smooth.
###### Proof.
We only need to verify that all tangent cones at all points $y\in\partial
E\cap B_{r}(x)$ are Euclidean half-spaces. Then the classical regularity in
Geometric Measure Theory provides smoothness.
To this aim, observe that, by Theorem 6.1, all the tangent cones at any
$y\in\partial E\cap B_{r}(x)$ are entire perimeter minimizers in
$\mathbb{R}^{N}$ close to the Euclidean half-space at all scales. Then an
argument analogous to the one exploited in the proof of Theorem 6.1, relying
on the Euclidean density gap (see subsection 6.1), shows that the tangent
cones are half-spaces. ∎
###### Remark .
In subsection 6.1 there is no assumption on the injectivity radius of the
Riemannian manifold, nor on the full curvature tensor, which are the classical
assumptions for the $\varepsilon$-regularity theorems for minimal surfaces on
Riemannian manifolds, see for instance [47, 114].
###### Remark .
subsection 6.1 should be compared with some previous results obtained in [77]
and [80, Section 4]. Therein, uniform Reifenberg flatness was proved for
minimal bubbles w.r.t. families of smooth Riemannian metrics $g_{\varepsilon}$
uniformly converging to a background metric $g$ on a fixed manifold $M$. In
this regard subsection 6.1 is much stronger, since it deals with a weaker
notion of convergence of metrics. Moreover, Theorem 6.1 shows that ambient
regularity is not a key assumption for Reifenberg flatness, provided there is
a synthetic lower Ricci bound on the background.
### 6.2. Sharp perimeter bounds for the equidistant sets from minimal
boundaries
In this subsection we consider again local perimeter minimizers in the sense
of subsection 5.1. Our goal is to prove some sharp perimeter bounds for the
equidistant sets from minimal boundaries, which will turn out to be very
useful to establish the quantitative regularity results in subsection 6.4. The
interpretation of minimality via Laplacian bounds on the distance function
obtained in subsection 5.1 will play a key role here.
The following useful lemma is essentially taken from [29]; see in particular
the proof of Theorem 7.4 therein. We omit the proof, which can be obtained
by relying on subsection 5.1, with arguments similar to those appearing in
the proofs of previous results in this note.
###### Lemma .
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and let $E\subset X$ be a set of locally finite perimeter which
locally minimizes the perimeter in an open domain $\Omega\subset X$ according
to subsection 5.1. Then, for any Lipschitz function $\varphi:X\to\mathbb{R}$
with compact support in $\Omega$, it holds:
(6.9)
$\int\varphi\mathop{}\!\mathrm{d}\operatorname{Per}(\{\mathsf{d}_{\bar{E}}>r\})=\int_{\{0\leq\mathsf{d}_{\bar{E}}<r\}}\mathop{}\!\mathrm{d}\operatorname{div}(\varphi\nabla\mathsf{d}_{\bar{E}})\,,\quad\text{for
a.e. $r>0$}\,.$
###### Remark .
The local perimeter minimizing assumption above is used only to infer
regularity properties of the distance function, namely the fact that it has
measure valued Laplacian whose singular part on the boundary of the set is the
surface measure, rather than to obtain specific mean curvature bounds.
Indeed, the conclusion of subsection 6.2 holds for the boundary of any smooth
set on a smooth Riemannian manifold.
In order to ease the notation, let us denote by $E^{t}$ the open
$t$-enlargement of $E$, i.e.
(6.10) $E^{t}:=\{x\in X\,:\,\mathsf{d}(x,\bar{E})<t\}\,.$
We will need to compare the perimeter measure of the set $E$ and the measures
obtained by normalizing the restriction of the ambient volume measure to a
tubular neighbourhood of the set. Again, for all smooth hypersurfaces in the
smooth Riemannian setting, the perimeter and such a Minkowski-type measure
coincide, even though they do not for general sets. The next result states
that the perimeter minimality condition is robust enough to guarantee such an
extra regularity also in the $\operatorname{RCD}$ setting.
###### Proposition .
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and let $E\subset X$ be a set of locally finite perimeter which
locally minimizes the perimeter in an open domain $\Omega\subset X$ according
to subsection 5.1. For any $0<\varepsilon<1$, let
$\mu_{\varepsilon}^{+}:=\frac{1}{\varepsilon}\,\mathscr{H}^{N}\llcorner\{0\leq\mathsf{d}_{\bar{E}}<\varepsilon\}\quad\text{and}\quad\mu_{\varepsilon}^{-}:=\frac{1}{\varepsilon}\,\mathscr{H}^{N}\llcorner\{0\leq\mathsf{d}_{E^{c}}<\varepsilon\}\,.$
Then both $\mu_{\varepsilon}^{+}$ and $\mu_{\varepsilon}^{-}$ weakly converge
to $\operatorname{Per}_{E}$ on $\Omega$ as $\varepsilon\to 0$.
###### Proof.
Let us prove the weak convergence to the perimeter of $\mu_{\varepsilon}^{+}$.
The weak convergence of $\mu_{\varepsilon}^{-}$ can be proved with an
analogous argument, replacing $\mathsf{d}_{\bar{E}}$ with
$\mathsf{d}_{E^{c}}$.
The family of measures $\mu_{\varepsilon}^{+}$ has locally uniformly bounded
mass, as follows from subsubsection 2.4.6. We claim that for any weak limit
$\mu$ of the sequence of measures $\mu_{\varepsilon_{i}}^{+}$, where
$\varepsilon_{i}\downarrow 0$ as $i\to\infty$, it holds
$\mu=\operatorname{Per}_{E}$.
Let us start from the inequality $\mu\geq\operatorname{Per}_{E}$.
Letting $\varphi_{\varepsilon}^{+}:X\to\mathbb{R}$ be defined by
$\varphi_{\varepsilon}^{+}=1$ on $\bar{E}$, $\varphi_{\varepsilon}^{+}=0$
on $X\setminus E^{\varepsilon}$ and
$\varphi_{\varepsilon}^{+}(x)=\frac{1}{\varepsilon}(\varepsilon-\mathsf{d}(x,\bar{E}))\,,\quad\text{on
$E^{\varepsilon}\setminus\bar{E}$}\,,$
it holds
$\mu_{\varepsilon}^{+}=\left\lvert\nabla\varphi_{\varepsilon}^{+}\right\rvert\mathscr{H}^{N}\,.$
Moreover, it is easy to check that $\varphi_{\varepsilon}^{+}$ converge
locally in $L^{1}$ to $\chi_{E}$. Hence, by the lower semicontinuity of the
total variation (in localized form), it is easy to infer that, for any open
set $A\subset\Omega$ such that $\mu(\partial A)=0$,
$\operatorname{Per}(E,A)\leq\liminf_{i\to\infty}\mu_{\varepsilon_{i}}^{+}(A)=\mu(A)\,.$
To prove the converse inequality, let us focus for simplicity on the case
$K=0$, the general case introduces only an additional error term of lower
order. Let us consider any non-negative Lipschitz function
$\varphi:X\to[0,\infty)$ with compact support in $\Omega$. We claim that
$\int\varphi\mathop{}\!\mathrm{d}\mu\leq\int\varphi\mathop{}\!\mathrm{d}\operatorname{Per}_{E}\,,$
which will imply the inequality $\mu\leq\operatorname{Per}_{E}$.
To prove this claim, we rely on subsection 6.2. Indeed, for a.e. $r>0$
sufficiently small, it holds that
$\int\varphi\mathop{}\!\mathrm{d}\operatorname{Per}(\{\mathsf{d}_{\bar{E}}>r\})=\int_{\{\mathsf{d}_{\bar{E}}<r\}}\mathop{}\!\mathrm{d}\operatorname{div}(\varphi\nabla\mathsf{d}_{\bar{E}})\,.$
Hence, for a.e. $r>0$, using the Leibniz rule for the divergence, Theorem 5.1
and subsection 5.1, we get
$\displaystyle\int\varphi\mathop{}\!\mathrm{d}\operatorname{Per}(\{\mathsf{d}_{\bar{E}}>r\})=$
$\displaystyle\int_{E^{r}}\nabla\varphi\cdot\nabla\mathsf{d}_{\bar{E}}\mathop{}\!\mathrm{d}\mathscr{H}^{N}+\int_{E^{r}}\varphi\mathop{}\!\mathrm{d}\bm{\Delta}\mathsf{d}_{\bar{E}}$
$\displaystyle\leq$
$\displaystyle\int_{E^{r}}\nabla\varphi\cdot\nabla\mathsf{d}_{\bar{E}}\mathop{}\!\mathrm{d}\mathscr{H}^{N}+\int\varphi\mathop{}\!\mathrm{d}\operatorname{Per}_{E}\,.$
Therefore, for any $s>0$ sufficiently small, by the coarea formula we get
$\displaystyle\int_{E^{s}}\varphi\mathop{}\!\mathrm{d}\mathscr{H}^{N}=$
$\displaystyle\int_{0}^{s}\left(\int\varphi\mathop{}\!\mathrm{d}\operatorname{Per}(\{\mathsf{d}_{\bar{E}}>r\})\right)\mathop{}\!\mathrm{d}r$
$\displaystyle\leq$
$\displaystyle\int_{0}^{s}\left(\int_{E^{r}}\nabla\varphi\cdot\nabla\mathsf{d}_{\bar{E}}\mathop{}\!\mathrm{d}\mathscr{H}^{N}+\int\varphi\mathop{}\!\mathrm{d}\operatorname{Per}_{E}\right)\mathop{}\!\mathrm{d}r$
$\displaystyle\leq$ $\displaystyle
s\operatorname{Lip}(\varphi)\mathscr{H}^{N}(E^{s}\cap\mathrm{spt}\varphi)+s\int\varphi\mathop{}\!\mathrm{d}\operatorname{Per}_{E}\,.$
Hence
$\displaystyle\int\varphi\mathop{}\!\mathrm{d}\mu=$
$\displaystyle\lim_{i\to\infty}\frac{1}{s_{i}}\int_{E^{s_{i}}}\varphi\mathop{}\!\mathrm{d}\mathscr{H}^{N}$
$\displaystyle\leq$ $\displaystyle\limsup_{s\to
0}\frac{1}{s}\left(s\operatorname{Lip}(\varphi)\mathscr{H}^{N}(E^{s}\cap\mathrm{spt}\varphi)+s\int\varphi\mathop{}\!\mathrm{d}\operatorname{Per}_{E}\right)$
$\displaystyle=$
$\displaystyle\int\varphi\mathop{}\!\mathrm{d}\operatorname{Per}_{E}\,,$
where we used subsubsection 2.4.6 in the last step. This concludes the
proof of the inequality $\mu\leq\operatorname{Per}_{E}$ and hence the proof. ∎
Let us introduce the notation $\Sigma$ for the boundary $\partial E$ of a set
of finite perimeter $E$ which is locally perimeter minimizing in
$\Omega\subset X$ and let us denote, for any $h>0$,
$\Sigma^{h}:=\{x\in\Omega\,:\,\mathsf{d}(\bar{E},x)=h\}\,.$
The next result is a kind of monotonicity formula for equidistant sets from
minimal boundaries. Its proof is inspired by [33, Lemma 2], which deals with
the Euclidean case. The Laplacian bound for the distance from a locally
minimizing set of finite perimeter under lower Ricci curvature bounds
(obtained in Theorem 5.1) allows us to extend it to the present framework.
###### Proposition .
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and let $E\subset X$ be a set of locally finite perimeter which
locally minimizes the perimeter in an open domain $\Omega\subset X$ according
to subsection 5.1. Let $h>0$ be fixed. Let $\Gamma\subset\Sigma^{h}$ be any
compact set and denote
(6.11) $\displaystyle\Gamma_{\Sigma}$
$\displaystyle:=\{y\in\Sigma\cap\Omega\,:\,\mathsf{d}(x,y)=h\quad\text{for
some $x\in\Gamma$}\}\,,$ (6.12)
$\displaystyle G$
$\displaystyle:=\{x\in\Omega\,:\,\mathsf{d}_{\Gamma_{\Sigma}}(x)+\mathsf{d}_{\Gamma}(x)=h\}\,.$
If $G\Subset\mathcal{K}$, where $\mathcal{K}$ has been defined in (5.2), then
(6.13)
$\operatorname{Per}(E^{h},\Gamma)\leq\operatorname{Per}(E,\Gamma_{\Sigma})+\int_{G}\mathrm{t}_{K,N}(\mathsf{d}_{E})\mathop{}\!\mathrm{d}\mathscr{H}^{N}\,,$
where $\mathrm{t}_{K,N}$ was defined in (1.1), and
(6.14)
$\operatorname{Per}(E^{h},\Gamma)\leq\begin{cases}\operatorname{Per}(E,\Gamma_{\Sigma})\cos\left(\sqrt{\frac{K}{N-1}}h\right)^{N-1}&\quad\text{if
}K>0\,,\\ \operatorname{Per}(E,\Gamma_{\Sigma})&\quad\text{if }K=0\,,\\
\operatorname{Per}(E,\Gamma_{\Sigma})\cosh\left(\sqrt{\frac{-K}{N-1}}h\right)^{N-1}&\quad\text{if
}K<0\,.\end{cases}$
###### Remark .
Note that $G$ consists of the union of minimizing geodesics connecting
$\Gamma_{\Sigma}$ with $\Gamma$, along which $\mathsf{d}_{E}$ is attained.
###### Remark .
The bounds obtained in subsection 6.2 are sharp. Indeed it is easily seen that
equality is achieved in the model spaces:
* •
for $K>0$, let $(X,\mathsf{d},\mathscr{H}^{N})$ be the $N$-dimensional round
sphere of constant sectional curvature $K/(N-1)$ and $E$ be a half-sphere. It
is a standard fact that $E$ is locally perimeter minimizing inside a
sufficiently small open domain $\Omega$. It is immediate to see that
$\Sigma^{h}$ is (part of the boundary of) a spherical cap and one can check
that equality is attained in (6.14) by direct computations;
* •
for $K=0$, let $(X,\mathsf{d},\mathscr{H}^{N})$ be the $N$-dimensional
Euclidean space and $E$ be a half-space. It is a standard fact that $E$ is
locally perimeter minimizing inside any open domain $\Omega$. It is immediate
to see that $\Sigma^{h}$ is (part of the boundary of) an equidistant half
space and that equality is attained in (6.14);
* •
for $K<0$, let $(X,\mathsf{d},\mathscr{H}^{N})$ be the $N$-dimensional
hyperbolic space of constant sectional curvature $K/(N-1)$ and $E$ be a
horoball. It is a standard fact that $E$ is locally perimeter minimizing inside
any open domain $\Omega$. Also in this case, one can check that equality is
attained in (6.14) by direct computations.
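For instance, in the first bullet point the direct computation can be sketched as follows (a verification not spelled out in the text; we normalize so that the model sphere has radius $R=\sqrt{(N-1)/K}$):

```latex
% Model case K>0: X = round sphere of radius R = sqrt((N-1)/K), E = half-sphere,
% Sigma = the equator. The equidistant set Sigma^h at geodesic distance h from
% the equator is a round (N-1)-sphere of Euclidean radius R*cos(h/R), hence
\operatorname{Per}(E^{h})
  = \mathscr{H}^{N-1}(\Sigma^{h})
  = \big(R\cos(h/R)\big)^{N-1}\sigma_{N-1}
  = \cos\Big(\sqrt{\tfrac{K}{N-1}}\,h\Big)^{N-1}\mathscr{H}^{N-1}(\Sigma)\,,
% where sigma_{N-1} denotes the total measure of the unit (N-1)-sphere and
% H^{N-1}(Sigma) = R^{N-1} sigma_{N-1}; this matches the case K>0 of (6.14)
% with equality.
```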
###### Proof.
Notice that $G$ is the set spanned by the rays connecting $\Gamma_{\Sigma}$
to $\Gamma$. We would like to apply the Gauss–Green integration by parts
formula to the vector field $\nabla\mathsf{d}_{E}$ on $G$. Indeed, at a
heuristic level, the boundary of $G$ consists of three parts:
$\Gamma_{\Sigma}$, $\Gamma$, and some lateral faces whose unit normal we expect
to be orthogonal to $\nabla\mathsf{d}_{E}$. Then the conclusion would follow
from the fact that
$\operatorname{div}\nabla\mathsf{d}_{E}\leq\mathrm{t}_{K,N}\circ\mathsf{d}_{E}$,
by Theorem 5.1.
In order to make the argument rigorous, we are going to approximate the
characteristic function of the set $G$ (which in general may not be regular
enough), by suitable cut-off functions.
Let us introduce the shortened notation $\bar{\mathsf{d}}$ for the distance
from $\bar{E}$. Moreover, let us denote by $\mathsf{d}_{\Gamma}$ the distance
function from the compact set $\Gamma$ in the statement. Then, for any
$\varepsilon\in(0,\varepsilon_{0})$ let us set
$\varphi_{\varepsilon}:=\frac{1}{\varepsilon}\left(h+\varepsilon-(\bar{\mathsf{d}}+\mathsf{d}_{\Gamma})\right)_{+}\,,$
where we denoted by $(\cdot)_{+}$ the positive part. For any $\delta\in(0,h)$,
we introduce the monotone function $g_{\delta}$ satisfying:
$g_{\delta}(0)=g^{\prime}_{\delta}(0)=0\,,\quad
g^{\prime\prime}_{\delta}=\frac{1}{\delta}\left(\chi_{[0,\delta]}-\chi_{[h-\delta,h]}\right)\,.$
Observe that, in particular, $g^{\prime}_{\delta}(h)=0$. Recalling that
$\left\lvert\nabla\bar{\mathsf{d}}\right\rvert\leq 1$ a.e. and that
$g_{\delta}^{\prime}(\bar{\mathsf{d}})\bm{\Delta}\bar{\mathsf{d}}\leq
g_{\delta}^{\prime}(\bar{\mathsf{d}})\,\mathrm{t}_{K,N}(\bar{\mathsf{d}})$ by
Theorem 5.1, using the chain rule we obtain:
(6.15) $\bm{\Delta}g_{\delta}(\bar{\mathsf{d}})\leq
g_{\delta}^{\prime\prime}(\bar{\mathsf{d}})+g_{\delta}^{\prime}(\bar{\mathsf{d}})\,\mathrm{t}_{K,N}(\bar{\mathsf{d}})\,.$
Now let $F\subset\mathcal{K}$ be an open neighbourhood of $G$ inside
$\mathcal{K}$. Relying on (6.15) and applying the Gauss–Green integration by
parts formula (see Theorem 2.8), taking into account that there are no
boundary terms since either $\varphi_{\varepsilon}=0$ or
$g^{\prime}_{\delta}(\bar{\mathsf{d}})=0$ on the boundary of the domain for
$\varepsilon>0$ sufficiently small, we can compute:
$\displaystyle\int_{F}g^{\prime\prime}_{\delta}(\bar{\mathsf{d}})\varphi_{\varepsilon}\mathop{}\!\mathrm{d}\mathscr{H}^{N}$
$\displaystyle\geq\int_{F}\varphi_{\varepsilon}\mathop{}\!\mathrm{d}\bm{\Delta}g_{\delta}(\bar{\mathsf{d}})-\int_{F}\varphi_{\varepsilon}\,g_{\delta}^{\prime}(\bar{\mathsf{d}})\,\mathrm{t}_{K,N}(\bar{\mathsf{d}})\mathop{}\!\mathrm{d}\mathscr{H}^{N}$
$\displaystyle=-\int_{F}\nabla(g_{\delta}(\bar{\mathsf{d}}))\cdot\nabla\varphi_{\varepsilon}\mathop{}\!\mathrm{d}\mathscr{H}^{N}-\int_{F}\varphi_{\varepsilon}\,g_{\delta}^{\prime}(\bar{\mathsf{d}})\,\mathrm{t}_{K,N}(\bar{\mathsf{d}})\mathop{}\!\mathrm{d}\mathscr{H}^{N}\,.$
Let us observe that
$\displaystyle\nabla(g_{\delta}(\bar{\mathsf{d}}))\cdot\nabla\varphi_{\varepsilon}=$
$\displaystyle
g^{\prime}_{\delta}(\bar{\mathsf{d}})\nabla\bar{\mathsf{d}}\cdot(-\nabla\bar{\mathsf{d}}-\nabla\mathsf{d}_{\Gamma})\varepsilon^{-1}$
$\displaystyle=$ $\displaystyle
g^{\prime}_{\delta}(\bar{\mathsf{d}})(-1-\nabla\bar{\mathsf{d}}\cdot\nabla\mathsf{d}_{\Gamma})\varepsilon^{-1}\leq
0\,,\quad\text{$\mathscr{H}^{N}$-a.e. on $F$}\,.$
Hence, for any $\varepsilon,\delta>0$ sufficiently small, it holds
$\int_{F}g^{\prime\prime}_{\delta}(\bar{\mathsf{d}})\varphi_{\varepsilon}\mathop{}\!\mathrm{d}\mathscr{H}^{N}\geq-\int_{F}\varphi_{\varepsilon}\,g_{\delta}^{\prime}(\bar{\mathsf{d}})\,\mathrm{t}_{K,N}(\bar{\mathsf{d}})\mathop{}\!\mathrm{d}\mathscr{H}^{N}\,.$
By the very definition of $g_{\delta}$, this implies that
$\displaystyle\frac{1}{\delta}\int_{F}\chi_{[0,\delta]}(\bar{\mathsf{d}})\varphi_{\varepsilon}\mathop{}\!\mathrm{d}\mathscr{H}^{N}\geq$
$\displaystyle\frac{1}{\delta}\int_{F}\chi_{[h-\delta,h]}(\bar{\mathsf{d}})\varphi_{\varepsilon}\mathop{}\!\mathrm{d}\mathscr{H}^{N}$
(6.16)
$\displaystyle-\int_{F}\varphi_{\varepsilon}\,g_{\delta}^{\prime}(\bar{\mathsf{d}})\,\mathrm{t}_{K,N}(\bar{\mathsf{d}})\mathop{}\!\mathrm{d}\mathscr{H}^{N}\,.$
Relying on subsection 6.2, which guarantees the weak convergence of the
measures $\delta^{-1}\chi_{[0,\delta]}(\bar{\mathsf{d}})\mathscr{H}^{N}$ to
$\operatorname{Per}_{E}$ as $\delta\to 0$, we can pass to the limit in the
left-hand side of (6.16). Moreover, by semicontinuity of the total variation,
for any weak limit $\nu$ of the sequence
$\delta^{-1}\chi_{[h-\delta,h]}(\bar{\mathsf{d}})\mathscr{H}^{N}$ (which is
easily seen to be pre-compact in the weak topology) as $\delta\to 0$, it holds
$\nu\geq\operatorname{Per}(E_{h})$. It is also easily seen that $0\leq
g_{\delta}^{\prime}(\bar{\mathsf{d}})\uparrow 1$ $\mathscr{H}^{N}$-a.e. on
$F$, as $\delta\downarrow 0$. Hence
(6.17)
$\int_{F}\varphi_{\varepsilon}\mathop{}\!\mathrm{d}\operatorname{Per}_{E}\geq\int_{F}\varphi_{\varepsilon}\mathop{}\!\mathrm{d}\operatorname{Per}_{E^{h}}-\int_{F}\varphi_{\varepsilon}\,\mathrm{t}_{K,N}(\bar{\mathsf{d}})\mathop{}\!\mathrm{d}\mathscr{H}^{N}\,.$
Next, we pass to the limit as $\varepsilon\to 0$. Observe that
$\lim_{\varepsilon\to 0}\varphi_{\varepsilon}(x)=\begin{cases}1&\quad\text{if
$x\in G$}\,,\\ 0&\quad\text{otherwise}\,.\end{cases}$
Therefore, passing to the limit in (6.17) as $\varepsilon\to 0$, we obtain
that
$\operatorname{Per}(E^{h},\Gamma)\leq\operatorname{Per}(E,\Gamma_{\Sigma})+\int_{G}\mathrm{t}_{K,N}(\bar{\mathsf{d}})\mathop{}\!\mathrm{d}\mathscr{H}^{N}\,,$
as desired.
The bounds in (6.14) follow from (6.13) thanks to the coarea formula and the
integral form of Grönwall’s Lemma if $K<0$. In the case $K=0$ they follow
directly from (6.13) since $\mathrm{t}_{0,N}=0$.
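To illustrate the Grönwall step in the case $K<0$, write $a:=\sqrt{-K/(N-1)}$ and assume, consistently with the Laplacian comparison of Theorem 5.1, that $\mathrm{t}_{K,N}(r)=(N-1)\,a\tanh(ar)$ (the precise form is fixed in (1.1), which is not restated here; the slice notation $P(r)$ below is ours). Applying (6.13) between the levels $0$ and $r$ and using the coarea formula on $G$ gives an integral inequality to which Grönwall's Lemma applies:

```latex
% P(r) := perimeter of the equidistant slice at height r between
% Gamma_Sigma and Gamma (an auxiliary notation for this sketch).
P(r)\;\leq\;P(0)+\int_{0}^{r}\mathrm{t}_{K,N}(s)\,P(s)\mathop{}\!\mathrm{d}s\,,
\qquad 0<r\leq h\,,
% so the integral form of Gronwall's Lemma yields
P(h)\;\leq\;P(0)\exp\Big(\int_{0}^{h}(N-1)\,a\tanh(as)\mathop{}\!\mathrm{d}s\Big)
      \;=\;P(0)\cosh(ah)^{N-1}\,,
% since \int_0^h (N-1) a tanh(as) ds = (N-1) log cosh(ah);
% this is exactly the case K<0 of (6.14).
```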
Let us deal with the remaining case $K>0$.
We introduce a function
$f_{K,N}:\left(0,\frac{\pi}{2}\sqrt{\frac{N-1}{K}}\right)\to\mathbb{R}\,,\quad f_{K,N}(r):=\int_{0}^{r}\cos\left(\sqrt{\frac{K}{N-1}}\,s\right)^{-(N-1)}\mathop{}\!\mathrm{d}s\,.$
Notice that
(6.18)
$f^{\prime}_{K,N}(r)=\cos\left(\sqrt{\frac{K}{N-1}}\,r\right)^{-(N-1)}\,,$
for any $r$ in the domain of $f_{K,N}$, and in particular $f^{\prime}_{K,N}(0)=1$. Moreover, the chain
rule for the Laplacian and a direct computation show that we can rephrase the
bound in Theorem 5.1 as
(6.19) $\Delta f_{K,N}\circ\mathsf{d}_{E}\leq 0\,.$
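The direct computation behind (6.19) can be sketched as follows, writing $a:=\sqrt{K/(N-1)}$ and assuming the standard form $\mathrm{t}_{K,N}(r)=-(N-1)\,a\tan(ar)$ for the comparison function of (1.1) (an assumption on our part, since (1.1) is not restated here):

```latex
% Chain rule for the Laplacian, using |\nabla d_E| <= 1 a.e. together with
% f' > 0 and f'' >= 0 on the domain of f_{K,N}:
\Delta\big(f_{K,N}\circ\mathsf{d}_{E}\big)
  \leq f''_{K,N}(\mathsf{d}_{E})
     + f'_{K,N}(\mathsf{d}_{E})\,\mathrm{t}_{K,N}(\mathsf{d}_{E})\,.
% Since f'(r) = cos(ar)^{-(N-1)}, differentiating once more gives
f''_{K,N}(r) = (N-1)\,a\tan(ar)\cos(ar)^{-(N-1)}
             = -\,f'_{K,N}(r)\,\mathrm{t}_{K,N}(r)\,,
% so the right-hand side above vanishes identically, which is (6.19).
```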
Then (6.14) in the case $K>0$ follows formally by applying the Gauss-Green
integration by parts formula to the vector field $\nabla
f_{K,N}\circ\mathsf{d}_{E}$ on the set $G$ introduced in (6.12). Indeed, the
contribution coming from the integration in the interior has a sign thanks to
(6.19), one of the two boundary terms is
$f^{\prime}_{K,N}(0)\operatorname{Per}(E,\Gamma_{\Sigma})=\operatorname{Per}(E,\Gamma_{\Sigma})$
and the other one can be estimated by
(6.20)
$f^{\prime}_{K,N}(h)\operatorname{Per}(E^{h},\Gamma)=\operatorname{Per}(E^{h},\Gamma)\cos\left(\sqrt{\frac{K}{N-1}}\,h\right)^{-(N-1)}\,.$
Therefore we obtain
(6.21)
$\operatorname{Per}(E^{h},\Gamma)\leq\operatorname{Per}(E,\Gamma_{\Sigma})\cos\left(\sqrt{\frac{K}{N-1}}\,h\right)^{(N-1)}\,,$
as we claimed. The rigorous justification of (6.21) can be obtained with an
approximation argument completely analogous to the one introduced in the first
part of the proof, approximating the characteristic function of $G$ with
suitable cut-off functions; we omit the details for the sake of brevity. ∎
A very useful result proved in Simons’ seminal paper on minimal varieties
[125] states that there are no two-sided stable smooth minimal hypersurfaces
on closed manifolds with positive Ricci curvature. Thanks to the perimeter
monotonicity in subsection 6.2 we can partially generalize this fact to the
present framework.
###### Corollary (Simons’ theorem in $\operatorname{RCD}$ spaces).
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space, for some $K>0$. Then, for any $r>0$, there is no nontrivial
set of finite perimeter $E\subset X$ that minimizes the perimeter among all
the perturbations $F\subset X$ such that
$E\Delta F\subset B_{r}(\partial E)\,.$
###### Proof.
Let us argue by contradiction. If a set of finite perimeter as in the
statement exists, then it is locally perimeter minimizing according to
subsection 5.1. Hence it verifies the assumptions of subsection 6.2.
Therefore, for any $h>0$,
$\operatorname{Per}(E^{h})\leq\operatorname{Per}(E)+\int_{E^{h}\setminus
E}\mathrm{t}_{K,N}(\mathsf{d}_{E})\mathop{}\!\mathrm{d}\mathscr{H}^{N}<\operatorname{Per}(E)\,,$
where the strict inequality holds since $\mathrm{t}_{K,N}<0$ on $(0,\infty)$
when $K>0$ and $\mathscr{H}^{N}(E^{h}\setminus E)>0$.
To conclude it is sufficient to observe that $E^{h}\Delta E\subset
B_{r}(\partial E)$ for any $h>0$ sufficiently small and we reach a
contradiction. ∎
### 6.3. Partial regularity of minimal boundaries away from sets of
codimension three
Our goal in this subsection is to prove that minimal boundaries have regular
blow-ups (and therefore are topologically regular) away from sets of ambient
codimension three (assuming for simplicity that the ambient space
$(X,\mathsf{d},\mathscr{H}^{N})$ is an $\operatorname{RCD}$ space without
boundary).
###### Definition (Regular and singular sets on minimal boundaries).
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and $E\subset X$ be locally perimeter minimizing inside a ball
$B_{2}(x)\subset X$. Suppose also that $\partial X\cap B_{2}(x)=\emptyset$.
The regular part $\mathcal{R}^{E}$ and the singular part $\mathcal{S}^{E}$ of
$\partial E$ are defined as
$\mathcal{R}^{E}:=\\{x\in\partial
E\,:\,(\mathbb{R}^{N},\mathsf{d}_{\mathrm{eucl}},\mathscr{H}^{N},0^{N},\\{x_{N}<0\\})\in\operatorname{Tan}_{x}(X,\mathsf{d},\mathscr{H}^{N},E)\\}\,,$
$\mathcal{S}^{E}:=\partial E\setminus\mathcal{R}^{E}\,.$
###### Remark .
If $E\subset X$ is locally perimeter minimizing and $x\in\partial E$, then for
any
(6.22)
$(Y,\mathsf{d}_{Y},\mathscr{H}^{N},F,y)\in\operatorname{Tan}_{x}(X,\mathsf{d},\mathscr{H}^{N},E)$
it holds that $F$ is an entire local perimeter minimizer in $Y$, as it follows
from the stability Theorem 2.11.
As a first regularity result, we establish topological regularity of the
regular set. This is indeed a direct consequence of the
$\varepsilon$-regularity Theorem 6.1.
###### Theorem 6.2 (Topological regularity of the regular set).
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space, assume that $B_{2}(x)\cap\partial X=\emptyset$ and let
$E\subset X$ be a set of locally finite perimeter that is locally perimeter
minimizing in $B_{2}(x)$. Then, for every $\alpha\in(0,1)$ there exists a
relatively open set $O_{\alpha}\subset\partial E\cap B_{1}(x)$ with
$\mathcal{R}^{E}\subset O_{\alpha}$ such that $O_{\alpha}$ is
$\alpha$-bi-Hölder homeomorphic to an open, smooth $(N-1)$-dimensional
manifold.
###### Remark .
The $C^{0,\alpha}$ regularity of the manifold $O_{\alpha}$ containing the
regular set matches the (currently known) regularity of the regular part
$\mathcal{R}(X)$ of the ambient space $X$ (after Cheeger-Colding’s metric
Reifenberg Theorem [41, Appendix 1] and [89]). Higher regularity of
$\mathcal{R}^{E}$ (e.g. being contained in a Lipschitz manifold) would require
first improving the structure theory of the ambient space.
The classical regularity result for perimeter minimizers in the Euclidean (or
smooth Riemannian) setting is that they are smooth away from sets of ambient
codimension $8$.
A key intermediate step is the fact that the blow-ups are flat Euclidean half-
spaces away from sets of ambient codimension $8$, see [62, 72].
The examples that we have already discussed in this note show that this
statement is false in the non-smooth framework. Singular blow-ups already
appear in ambient dimension $3$, see the discussion after subsection 6.1.
Below we prove that, if we restrict to regular ambient points (or we consider
$\operatorname{RCD}(K,N)$ metric measure spaces
$(X,\mathsf{d},\mathscr{H}^{N})$ whose singular set is empty), then the
picture matches the classical one: the codimension of the singular set of a
perimeter minimizer is at least $8$.
###### Theorem 6.3.
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space, let $\Omega\subset X$ be an open domain and let $E\subset X$ be
a set of locally finite perimeter that is locally perimeter minimizing in
$\Omega$. Then
(6.23) $\dim_{H}\left(\mathcal{S}^{E}\cap\mathcal{R}\cap\Omega\right)\leq
N-8\,.$
In particular:
* i)
if $\Omega\subset\mathcal{R}$, then
(6.24) $\dim_{H}\left(\mathcal{S}^{E}\cap\Omega\right)\leq N-8\,;$
* ii)
if $\Omega\subset\mathcal{R}$ and $N\leq 7$, then
$\mathcal{S}^{E}\cap\Omega=\emptyset$.
###### Proof.
With the tools that we have developed so far, the proof reduces to a variant
of the classical dimension reduction technique to bound the dimension of
singular sets. See [61, 62] and [72, Chapter 11] for the case of perimeter
minimizers in the Euclidean setting, and [56] for the dimension bounds for the
singular strata on $\operatorname{RCD}(K,N)$ spaces
$(X,\mathsf{d},\mathscr{H}^{N})$.
We argue by contradiction. If (6.23) is not satisfied, then we construct a
(local) perimeter minimizer inside $\mathbb{R}^{N}$ whose singular set has
codimension less than $8$. This will lead to a contradiction, since the
singular set of a Euclidean (local) perimeter minimizer has codimension at
least $8$, by the classical regularity theory.
Let us suppose that (6.23) is not verified. Then there exists $\eta>0$ such
that
(6.25)
$\mathcal{H}^{N-8+\eta}\left(\mathcal{S}^{E}\cap\mathcal{R}\cap\Omega\right)>0\,.$
By [56, Lemma 3.6] (see also [72, Lemma 11.3] and [61, Theorem 2.10.17]),
there exists $x\in\mathcal{S}^{E}\cap\mathcal{R}\cap\Omega$ such that
(6.26) $\limsup_{r\to
0}\frac{\mathscr{H}^{N-8+\eta}_{\infty}\left(B_{r}(x)\cap\left(\mathcal{S}^{E}\cap\mathcal{R}\cap\Omega\right)\right)}{r^{N-8+\eta}}\geq
2^{N-8+\eta}\omega_{N-8+\eta}\,,$
where we denoted by $\mathscr{H}^{N-8+\eta}_{\infty}$ the pre-Hausdorff
measure of dimension $N-8+\eta$.
Now we claim that for any sequence $r_{i}\downarrow 0$ there exists a
subsequence, that we do not relabel, such that
$E\subset(X,\mathsf{d}/r_{i},\mathscr{H}^{N}/r_{i}^{N},x)$ converges in the
$L^{1}_{{\rm loc}}$ sense as a set of (locally) finite perimeter to an entire
(local) perimeter minimizer
$E_{\infty}\subset\left(\mathbb{R}^{N},\mathsf{d}_{\mathrm{eucl}},\mathscr{H}^{N},0^{N}\right)$.
Here the compactness of the sequence follows from [6, Corollary 3.4] together
with the uniform perimeter bounds for perimeter minimizers inside the ball,
while the conclusion that $E_{\infty}$ is an entire (local) perimeter
minimizer follows from Theorem 2.11.
By scaling, denoting by
$E_{i}=E\subset(X,\mathsf{d}/r_{i},\mathscr{H}^{N}/r_{i}^{N},x)$ the set of
finite perimeter $E$ considered inside the rescaled metric measure space, we
can find a sequence $r_{i}\downarrow 0$ such that $E_{i}$ converge in
$L^{1}_{{\rm loc}}$ to an entire (locally) perimeter minimizer $E_{\infty}$ as
we discussed above and moreover
(6.27)
$\lim_{i\to\infty}\mathscr{H}^{N-8+\eta}_{\infty}\left(\mathcal{S}^{E_{i}}\cap
B_{1}^{i}(x)\cap\mathcal{R}(X_{i})\right)\geq
2^{N-8+\eta}\omega_{N-8+\eta}\,>\,0.$
We claim that (6.27) forces
(6.28) $\mathscr{H}^{N-8+\eta}_{\infty}\left(\mathcal{S}^{E_{\infty}}\cap
B_{1}(0^{N})\right)\,>\,0\,.$
In order to check (6.28) it is sufficient to prove that any limit point
$x_{\infty}$ of a sequence $(x_{i})$ such that
$x_{i}\in\mathcal{S}^{E_{i}}\cap\mathcal{R}(X_{i})\cap B_{1}^{i}(x)$ belongs
to $\mathcal{S}^{E_{\infty}}\cap B_{1}(0^{N})$. Once this statement has been
established, (6.28) will follow from (6.27) and the upper semicontinuity of
the pre-Hausdorff measure $\mathscr{H}^{N-8+\eta}_{\infty}$ under GH
convergence, see [56, Equation (3.36)].
We now verify the claim. Consider any $x_{\infty}$ as above and suppose by
contradiction that it is a regular point of $E_{\infty}$. Then it follows from
the $\varepsilon$-regularity Theorem 6.1 that for any $\varepsilon>0$ there
exists $i\in\mathbb{N}$ such that, for any $j\geq i$, $E_{j}$ is
$\varepsilon$-regular inside $B_{1}^{j}(x)$. In particular, if
$\varepsilon>0$ is sufficiently small, then by the Euclidean density gap
subsection 6.1, all the blow-ups of $E_{j}$ inside
$B_{1}^{j}(x)\cap\mathcal{R}$ are flat half-spaces (see also the proof of
subsection 6.1). This leads to a contradiction, since we are assuming that
$x_{\infty}$ is a limit of singular points $x_{j}\in\mathcal{S}^{E_{j}}\cap
B_{1}^{j}(x)\cap\mathcal{R}(X_{j})$.
Given (6.28) we obtain a contradiction, since $E_{\infty}$ is an entire
Euclidean (local) perimeter minimizer and the classical dimension estimates
for the singular sets of perimeter minimizers give
(6.29) $\dim_{H}\left(\mathcal{S}^{E_{\infty}}\cap B_{1}(0^{N})\right)\leq
N-8\,.$
∎
###### Remark .
One of the key steps in the proof above is the fact that limits of singular
points of perimeter minimizers where the blow-up of the ambient is Euclidean
are singular points. In the smooth setting the second assumption is always
verified, but in the non-smooth setting this is a non-trivial requirement and
the example presented in subsection 6.1 shows that it is necessary in order
for the statement to hold. In particular, without further assumptions it is
not true that limits of singular boundary points of a perimeter minimizer are
singular boundary points of the limit.
We aim to obtain the sharp dimension bound
$\dim_{H}(\mathcal{S}^{E})\leq N-3$ for the singular set of local perimeter
minimizers within our framework. To this end, it will be necessary to consider
also the intersection of the minimal boundary with the ambient singular set,
$\partial E\cap\mathcal{S}(X)$. With respect to the classical case, there is a
key additional difficulty in obtaining the sharp dimension bound in this
setting: it is not clear whether a monotonicity formula holds in this
generality, hence we do not know whether every blow-up of a local perimeter
minimizer is a cone. To circumvent this difficulty, following the classical
pattern of dimension reduction, we will first iterate blow-ups to reduce to
the situation where the ambient is a cone of the form
$\mathbb{R}^{N-2}\times C(\mathbb{S}^{1}_{r})$, where $\mathbb{S}^{1}_{r}$ is
a circle of radius $r$, and then perform the dimension reduction again in this
simplified setting (where a monotonicity formula holds).
We will rely on some classical tools of Geometric Measure Theory. The first
one is a monotonicity formula for perimeter minimizers inside cones, whose
proof can be obtained as in the classical case, see [110, Theorem 9.3], [61,
Theorem 5.4.3] and [111].
###### Theorem 6.4.
Let $(M,g)$ be a smooth Riemannian manifold of dimension $k\geq 1$ and with
$\operatorname{Ric}\geq k-1$. Let
$(X,\mathsf{d},\mathscr{H}^{N}):=C(M)\times\mathbb{R}^{N-k-1}$ be the product
of the metric measure cone $C(M)$ with tip $\\{o\\}$ and an
$(N-k-1)$-dimensional Euclidean factor. Let
$p\in\\{o\\}\times\mathbb{R}^{N-k-1}$ and let $E\subset X$ be perimeter
minimizing in $B_{2}(p)\subset X$. Then the ratio
(6.30) $(0,1)\ni
r\mapsto\frac{\operatorname{Per}_{E}(B_{r}(p))}{r^{N-1}}\quad\text{is
increasing.}$
Moreover, if the perimeter ratio is constant in $(0,1)$ then $E$ is a cone
with vertex $p$ inside $B_{1}(p)$.
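For the reader’s convenience, we recall the classical heuristic behind the monotonicity (6.30) in the model Euclidean case; this is only a sketch, assuming enough regularity to differentiate the perimeter, and the cone comparison works at the tip $p$ precisely because the ambient is itself a cone centred there. Writing $m(r):=\operatorname{Per}_{E}(B_{r}(p))$:

```latex
% Comparison with the cone over \partial E\cap\partial B_r(p): minimality gives
m(r)\,\leq\,\frac{r}{N-1}\,\mathscr{H}^{N-2}\big(\partial E\cap\partial B_{r}(p)\big)\,,
% while the coarea formula yields, for a.e. r,
m'(r)\,\geq\,\mathscr{H}^{N-2}\big(\partial E\cap\partial B_{r}(p)\big)\,.
% Combining the two inequalities,
\frac{\mathrm{d}}{\mathrm{d}r}\log\frac{m(r)}{r^{N-1}}\,=\,\frac{m'(r)}{m(r)}-\frac{N-1}{r}\,\geq\,0\,,
% i.e. the ratio in (6.30) is non decreasing. In the equality case the
% comparison is saturated at every radius, which forces \partial E to be
% radial, i.e. E is a cone with vertex p.
```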
The second tool is an elementary non-existence result for entire local
perimeter minimizing cones passing through the tip inside non-flat
two-dimensional cones, whose proof is well known, see [111] and references
therein.
###### Proposition .
Let $N\geq 2$ be a given natural number. Let $0<r\leq 1$ and let $\mathcal{C}$
be the metric measure cone $C(\mathbb{S}^{1}_{r})\times\mathbb{R}^{N-2}$ with
canonical structure. Let $F:=C(I)\times\mathbb{R}^{N-2}\subset\mathcal{C}$,
where $I\subset\mathbb{S}^{1}_{r}$ is a set of finite perimeter. Then $F$ is a
local perimeter minimizer if and only if $r=1$ (i.e.
$\mathcal{C}=\mathbb{R}^{N}$) and $I$ is the half-circle
$[0,\pi]\subset\mathbb{S}^{1}_{1}$, up to isometry (i.e.
$F\subset\mathbb{R}^{N}$ is a half-space).
Notice that to pass from the case $N=2$ treated in [111] to the case of $N\geq
3$ it is sufficient to rely on a slight modification of [107, Lemma 28.13] to
drop the dimension.
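The heart of the matter is an elementary length computation in the two-dimensional cross-section, which we sketch here under the assumption that $\partial C(I)$ consists of two rays emanating from the tip. The two rays divide $C(\mathbb{S}^{1}_{r})$ into sectors of angles $\alpha$ and $2\pi r-\alpha$; if $r<1$ then $\theta:=\min\{\alpha,2\pi r-\alpha\}<\pi$, and a chord shortcut near the tip strictly decreases length:

```latex
% Cut the two rays at distance t from the tip and unfold the sector of angle
% \theta<\pi isometrically into the Euclidean plane: the geodesic chord joining
% the two endpoints has length
\ell_{\mathrm{chord}}\,=\,2t\sin(\theta/2)\,<\,2t\,=\,\ell_{\mathrm{rays}}\,,
% so F is not locally perimeter minimizing. If instead r=1, both sector angles
% equal \pi exactly when I is a half-circle, and the two rays form a straight
% line, i.e. F is a half-space.
```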
###### Proposition .
Let $N\geq 2$ be a given natural number. Let $0<r\leq 1$ and let $\mathcal{C}$
be the metric measure cone $\mathbb{R}^{N-2}\times C(\mathbb{S}^{1}_{r})$ with
canonical structure and set of tips $\mathbb{R}^{N-2}\times\\{o\\}$. Let
$G\subset\mathcal{C}$ be any entire local perimeter minimizer. Then
(6.31) $\dim_{H}\left(\mathcal{S}^{G}\right)\leq N-3\,.$
###### Proof.
We argue via dimension reduction, reducing to the situation where subsection
6.3 can be applied.
Let us suppose without loss of generality that $r<1$ (i.e. the cone is
singular). If $r=1$ the statement follows from the classical Euclidean
regularity theory.
Step 1. We claim that any blow-up of $G\subset\mathcal{C}$ at a point
$x\in\mathcal{C}$ is either a minimal cone inside $\mathcal{C}$ if
$x\in\mathbb{R}^{N-2}\times\\{o\\}$ is an ambient singular point, or a minimal
cone in $\mathbb{R}^{N}$ if
$x\in\mathcal{C}\setminus\left(\mathbb{R}^{N-2}\times\\{o\\}\right)$ is an
ambient regular point.
In order to check this statement it is sufficient to observe that the
monotonicity formula
(6.32)
$r\mapsto\frac{\operatorname{Per}_{G}(B_{r}(x))}{r^{N-1}}\,\quad\text{is non
decreasing on $0<r<r_{x}$}$
holds for any $x\in\partial G$. Indeed, if $x\in\mathbb{R}^{N-2}\times\\{o\\}$
is a vertex, this follows from Theorem 6.4 (and we can take $r_{x}=\infty$
actually). If $x$ is a regular point of $\mathcal{C}$, the monotonicity
formula follows from the fact that $\mathcal{C}$ is isometric to a (flat)
Euclidean ball in a neighbourhood of $x$.
The fact that blow-ups are always cones follows then from a classical
argument, thanks to the uniform perimeter density bound (2.4.6) and the
rigidity in the monotonicity formula on $\mathcal{C}$ and $\mathbb{R}^{N}$.
Step 2. Let us assume by contradiction that (6.31) fails. Then, arguing as in
the proof of Theorem 6.3 (see in particular (6.26)) we can find $\eta>0$ such
that $\mathscr{H}^{N-3+\eta}(\partial G\cap\mathcal{S}(\mathcal{C}))>0$
(notice that
$\mathscr{H}^{N-3+\eta}(\mathcal{S}^{G}\cap\mathcal{R}(\mathcal{C}))=0$, by
Theorem 6.3). Therefore, there exist $x\in\partial
G\cap\mathcal{S}(\mathcal{C})=\partial G\cap\mathbb{R}^{N-2}\times\\{o\\}$ and
a sequence $r_{i}\downarrow 0$ such that
(6.33)
$\limsup_{i\to\infty}\frac{\mathscr{H}^{N-3+\eta}_{\infty}\left(\partial
G\cap\mathcal{S}(\mathcal{C})\cap B_{r_{i}}(x)\right)}{r_{i}^{N-3+\eta}}>0\,.$
By Step 1, we can find a subsequence of $(r_{i})$, that we do not relabel,
such that (6.33) holds and the blow-up of $G$ along the sequence $r_{i}$ is an
entire local perimeter minimizing cone $G^{1}$ inside a metric cone
$\mathcal{C}$ (which is the blow-up of $\mathcal{C}$ at any point
$x\in\mathcal{S}(\mathcal{C})$) with tip $o$. Moreover, it is easily seen that
any limit of points $x_{i}\in\partial G\cap\mathcal{S}(\mathcal{C})\cap
B_{r_{i}}(x)$ along this converging sequence belongs to $\partial
G^{1}\cap\mathcal{S}(\mathcal{C})\cap B_{1}(o)$. Hence, the upper
semicontinuity of the pre-Hausdorff measure implies
(6.34) $\mathscr{H}^{N-3+\eta}_{\infty}\left(\partial
G^{1}\cap\mathcal{S}(\mathcal{C})\cap B_{1}(o)\right)\,>\,0\,,$
that yields in turn
(6.35) $\mathscr{H}^{N-3+\eta}\left(\partial
G^{1}\cap\mathcal{S}(\mathcal{C})\cap B_{1}(o)\right)\,>\,0\,.$
Let us write $G^{1}=\mathbb{R}^{k}\times C(B^{1})$, where
$C(B^{1})\subset\mathbb{R}^{N-2-k}\times C(\mathbb{S}^{1}_{r})$ is an entire
local perimeter minimizing cone.
We claim that, after iterating a finite number of times the construction
above, it is possible to take $k=N-2$. Indeed, if we suppose that $k\leq N-3$,
then we obtain $\mathscr{H}^{N-3+\eta}\left(\left(\partial
G^{1}\setminus\mathbb{R}^{k}\times\\{o\\}\right)\cap\mathcal{S}(\mathcal{C})\right)>0$,
by (6.35).
In particular, there exist $z\in\left(\partial
G^{1}\cap\mathcal{S}(\mathcal{C})\right)\setminus\left(\mathbb{R}^{k}\times\\{o\\}\right)$
and a sequence $r_{j}\downarrow 0$ such that
(6.36)
$\limsup_{j\to\infty}\frac{\mathscr{H}^{N-3+\eta}_{\infty}\left(\partial
G^{1}\cap\mathcal{S}(\mathcal{C})\cap
B_{r_{j}}(z)\right)}{r_{j}^{N-3+\eta}}>0\,.$
Up to extraction of a subsequence, that we do not relabel, we find that a
blow-up of $G^{1}$ at $z$ along the sequence $r_{j}\downarrow 0$ is an entire
local perimeter minimizing cone of the form $G^{2}=\mathbb{R}^{k+1}\times
C(B^{2})$ such that
(6.37) $\dim_{H}\left(\partial
G^{2}\cap\mathcal{S}(\mathcal{C})\right)\,>\,N-3\,.$
This is due to Step 1 and to the fact that $G^{1}$ splits off a factor
$\mathbb{R}^{k}$, it is a cone, and we chose a point
$z\notin\mathbb{R}^{k}\times\\{o\\}$ as base point for the blow-up. The
additional splitting of $G^{2}$ can be justified with the very same arguments
of the Euclidean case; we refer for instance to [107, Theorem 28.11, Lemma
28.12, Lemma 28.13], whose statements and proofs work mutatis mutandis also in
our setting.
Step 3. The outcome of the previous two steps is that if (6.31) fails, then
there exists an entire local perimeter minimizing cone of the form
$G=\mathbb{R}^{N-2}\times C(B)\subset\mathcal{C}$. This is in contradiction
with subsection 6.3. ∎
###### Remark .
The Hausdorff dimension estimate (6.31) above is sharp. This is easily
verified by considering as entire local perimeter minimizer the set
$G:=\\{x>0\\}\subset\mathbb{R}\times C(\mathbb{S}^{1}_{r})$, where $0<r<1$ and
$x\in\mathbb{R}$ denotes the coordinate of the $\mathbb{R}$ factor. Then
$\partial G=\\{0\\}\times C(\mathbb{S}^{1}_{r})$ which has one singular point
$p=(0,o)$. Therefore $\mathscr{H}^{0}(\mathcal{S}^{G})=1$.
###### Theorem 6.5.
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space, let $\Omega\subset X$ be an open domain such that
$\Omega\cap\partial X=\emptyset$ and let $E\subset X$ be a set of locally
finite perimeter that is locally perimeter minimizing in $\Omega$. Then
(6.38) $\dim_{H}\left(\mathcal{S}^{E}\cap\Omega\right)\leq N-3\,.$
###### Proof.
The strategy of the proof is a refinement of the one of Theorem 6.3.
To simplify the notation, we assume throughout the proof that $E$ is an entire
local perimeter minimizer. Moreover, we assume that $N\geq 3$ and
$K\geq-(N-1)$. The case $K<-(N-1)$ can be reduced to $K=-(N-1)$ by scaling of
the distance. The case $N=1$ is elementary and the case $N=2$ can be treated
with a simpler variant of the argument presented below.
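The scaling step can be made explicit, recalling the standard behaviour of the $\operatorname{RCD}$ condition under rescaling: if $(X,\mathsf{d},\mathscr{H}^{N})$ is $\operatorname{RCD}(K,N)$, then $(X,\lambda\mathsf{d},\lambda^{N}\mathscr{H}^{N})$ is $\operatorname{RCD}(K/\lambda^{2},N)$ for any $\lambda>0$, and local perimeter minimizers are clearly preserved:

```latex
% For K<-(N-1), choose the scaling factor
\lambda\,:=\,\sqrt{\frac{-K}{N-1}}\,>\,1\,,\qquad\text{so that}\qquad
\frac{K}{\lambda^{2}}\,=\,-(N-1)\,,
% and (X,\lambda\mathsf{d},\lambda^{N}\mathscr{H}^{N}) is RCD(-(N-1),N).
```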
Step 1. Reduction to perimeter minimizers inside cones.
We aim to show via blow-up that if (6.38) fails for some local perimeter
minimizer on some $\operatorname{RCD}(K,N)$ metric measure space
$(X,\mathsf{d},\mathscr{H}^{N})$, then it fails also for an entire perimeter
minimizer inside a metric measure cone.
Notice that $\mathcal{S}^{E}\cap\mathcal{S}(X)=\partial E\cap\mathcal{S}(X)$
by the very definition of $\mathcal{S}^{E}$. Hence, if (6.38) fails, then
$\dim_{H}(\partial E\cap\mathcal{S}(X))>N-3$, by Theorem 6.3. In particular,
there exists $\varepsilon>0$ such that $\dim_{H}(\partial
E\cap\mathcal{S}_{\varepsilon}(X))>N-3$, where $\mathcal{S}_{\varepsilon}(X)$
is the quantitative $\varepsilon$-singular set of
$(X,\mathsf{d},\mathscr{H}^{N})$ defined by
$\mathcal{S}_{\varepsilon}(X):=\\{x\in
X\,:\,\mathsf{d}_{GH}(B_{r}(x),B_{r}(0^{N}))\geq\varepsilon r\,,\text{for all
$r\in(0,1)$}\\}\,.$
Indeed, by a well-known argument (involving Bishop-Gromov volume monotonicity,
volume convergence and volume rigidity; see for instance the proof of [89,
Theorem 3.1] after [41]), it is easy to check that
$\mathcal{S}(X)=\bigcup_{\varepsilon>0}\mathcal{S}_{\varepsilon}(X)$.
Arguing as in the proof of Theorem 6.3 (see in particular (6.26)) we can find
$\eta>0$ such that $\mathscr{H}^{N-3+\eta}(\partial
E\cap\mathcal{S}_{\varepsilon}(X))>0$. Therefore, there exist $x\in\partial
E\cap\mathcal{S}_{\varepsilon}(X)$ and a sequence $r_{i}\downarrow 0$ such
that
(6.39)
$\limsup_{i\to\infty}\frac{\mathscr{H}^{N-3+\eta}_{\infty}\left(\partial
E\cap\mathcal{S}_{\varepsilon}(X)\cap
B_{r_{i}}(x)\right)}{r_{i}^{N-3+\eta}}>0\,.$
Applying Theorem 2.11, we can find a subsequence of $(r_{i})$, that we do not
relabel, such that (6.39) holds and the blow-up of $E$ along the sequence
$r_{i}$ is an entire local perimeter minimizer $F$ inside a metric cone $C(Z)$
with tip $p$, for some $\operatorname{RCD}(N-2,N-1)$ metric measure space
$(Z,\mathsf{d}_{Z},\mathscr{H}^{N-1})$. Moreover, it is easily seen that any
limit of points $x_{i}\in\partial E\cap\mathcal{S}_{\varepsilon}(X)\cap
B_{r_{i}}(x)$ along this converging sequence belongs to $\partial
F\cap\mathcal{S}_{\varepsilon}(C(Z))\cap B_{1}(p)$. Hence, the upper
semicontinuity of the pre-Hausdorff measure implies
(6.40) $\mathscr{H}^{N-3+\eta}_{\infty}\left(\partial
F\cap\mathcal{S}_{\varepsilon}(C(Z))\cap B_{1}(p)\right)\,>\,0\,,$
which yields $\dim_{H}(\mathcal{S}^{F})>N-3$, as we claimed.
Step 2. Dimension reduction.
In Step 1, we found an entire local perimeter minimizer $F\subset C(Z)$, where
$C(Z)$ is a metric measure cone. Let us consider the maximal Euclidean factor
$\mathbb{R}^{k}$ split off by $C(Z)$ and write $C(Z)=\mathbb{R}^{k}\times
C(W)$ for some $0\leq k\leq N-2$ and some $\operatorname{RCD}(N-k-2,N-k-1)$
metric measure space $(W,\mathsf{d}_{W},\mathscr{H}^{N-k-1})$. Arguing
inductively, we wish to prove that it is possible to assume that $k=N-2$
iterating the construction of Step 1.
Indeed, let us suppose that $k\leq N-3$. Then by (6.40) there exists a set of
singular points of $F$ with positive $\mathscr{H}^{N-3+\eta}_{\infty}$ pre-
Hausdorff measure not contained in $\mathbb{R}^{k}\times\\{p\\}$. Iterating the
construction of Step 1 with a base point
$y\notin\left(\mathbb{R}^{k}\times\\{p\\}\right)$ such that
(6.41)
$\limsup_{i\to\infty}\frac{\mathscr{H}^{N-3+\eta}_{\infty}\left(\partial
F\cap\mathcal{S}_{\varepsilon}(C(Z))\cap
B_{r_{i}}(y)\right)}{r_{i}^{N-3+\eta}}>0\,,$
we obtain that, up to extraction of a subsequence that we do not relabel, the
blow-up of $F$ at $y$ is an entire local perimeter minimizer
$G\subset\mathbb{R}^{k+1}\times C(V)$, where
$(V,\mathsf{d}_{V},\mathscr{H}^{N-k-2})$ is an
$\operatorname{RCD}(N-k-3,N-k-2)$ metric measure space. Indeed, the blow-up
$G$ is an entire local perimeter minimizer by the usual stability Theorem
2.11. Moreover, the fact that the ambient space splits an additional Euclidean
factor follows by the choice of base point
$y\notin\left(\mathbb{R}^{k}\times\\{p\\}\right)$ (and the fact that $C(Z)$ is
a cone), via the splitting theorem [64].
Step 3. Conclusion.
The outcome of the previous two steps is that, if (6.38) fails for a local
perimeter minimizer $E\subset X$, where $(X,\mathsf{d},\mathscr{H}^{N})$ is an
$\operatorname{RCD}(K,N)$ metric measure space (without boundary), then it
fails for an
entire perimeter minimizer $F\subset\mathbb{R}^{N-2}\times
C(\mathbb{S}^{1}_{r})$, where $0<r\leq 1$. However, this would contradict
subsection 6.3 and the proof is complete.
∎
In the next statement, obtained by combining Theorem 6.2, Theorem 6.3 and
Theorem 6.5, we summarize the main regularity results of the present section.
###### Theorem 6.6.
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space. Let $E\subset X$ be a set of locally finite perimeter. Assume
that $E$ is perimeter minimizing in $B_{2}(x)\subset X$ and
$B_{2}(x)\cap\partial X=\emptyset$. Then for any $\alpha\in(0,1)$ there exists
a relatively open set $O_{\alpha}\subset\partial E\cap B_{1}(x)$ such that:
* •
$O_{\alpha}$ is $\alpha$-bi-Hölder homeomorphic to a smooth open
$(N-1)$-dimensional manifold;
* •
$\mathcal{R}^{E}\subset O_{\alpha}$ and
(6.42) $\displaystyle\dim_{H}\big{(}(\partial
E\setminus\mathcal{R}^{E})\cap\mathcal{R}(X)\big{)}$ $\displaystyle\leq
N-8\,,$ (6.43) $\displaystyle\dim_{H}\big{(}(\partial
E\setminus\mathcal{R}^{E})\cap\mathcal{S}(X)\big{)}$ $\displaystyle\leq
N-3\,.$
###### Remark (Sharpness of Theorem 6.6 and a conjecture).
Both the Hausdorff codimension bounds (6.42) and (6.43) are sharp:
* •
(6.42) is sharp already in $\mathbb{R}^{N}$, by the classical example of
Simons’ cone $C_{S}\subset\mathbb{R}^{8}$;
* •
the sharpness of (6.43) was discussed in subsection 6.3.
Since $\mathcal{R}^{E}\subset O_{\alpha}$, the bounds (6.42)-(6.43) of course
imply
(6.44) $\displaystyle\dim_{H}\big{(}(\partial E\setminus
O_{\alpha})\cap\mathcal{R}(X)\big{)}$ $\displaystyle\leq N-8\,,$ (6.45)
$\displaystyle\dim_{H}\big{(}(\partial E\setminus
O_{\alpha})\cap\mathcal{S}(X)\big{)}$ $\displaystyle\leq N-3\,.$
Note that (6.44) is sharp already in $\mathbb{R}^{N}$, by the example of the
Simons’ cone $C_{S}\subset\mathbb{R}^{8}$: indeed, for any $\alpha\in(0,1)$,
it holds that $O_{\alpha}=C_{S}\setminus\\{0^{8}\\}$, so that
$\dim_{H}\big{(}C_{S}\setminus O_{\alpha}\big{)}=0$.
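For the reader’s convenience, we recall the explicit form of the Simons cone; by the classical theorem of Bombieri-De Giorgi-Giusti, the set $E$ below is a perimeter minimizer in $\mathbb{R}^{8}$ whose boundary is singular exactly at the origin:

```latex
C_{S}\,:=\,\big\{x\in\mathbb{R}^{8}\,:\,x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+x_{4}^{2}\,=\,x_{5}^{2}+x_{6}^{2}+x_{7}^{2}+x_{8}^{2}\big\}\,,
\qquad
E\,:=\,\big\{x\in\mathbb{R}^{8}\,:\,x_{1}^{2}+\dots+x_{4}^{2}\,<\,x_{5}^{2}+\dots+x_{8}^{2}\big\}\,.
% Here \partial E=C_S, \mathcal{S}^{E}=\{0^{8}\} and
% \dim_H(\mathcal{S}^{E})=0=N-8, matching (6.42) and (6.44).
```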
Instead, we conjecture that the optimal dimension bound for the topologically
regular part of $\partial E$ contained in the ambient singular set is
$\dim_{H}\big{(}(\partial E\setminus O_{\alpha})\cap\mathcal{S}(X)\big{)}\leq
N-4\,.$
Note that ambient Hausdorff codimension 4 would be sharp, from the example
given by $E:=C(\mathbb{RP}^{2})\times[0,\infty)\subset
C(\mathbb{RP}^{2})\times\mathbb{R}=:X$.
### 6.4. Quantitative estimates for singular sets of minimal boundaries
Our goal is to obtain Minkowski content estimates for the singular sets of
boundaries of locally perimeter minimizing sets in our context, in analogy
with the Euclidean theory [47, 114] and with the Minkowski estimates for the
quantitative singular sets of non-collapsed Ricci limit spaces [46, 45] and
$\operatorname{RCD}$ spaces [19].
The strategy that we adopt has been partly inspired by [33], which proposed an
alternative approach to the regularity theory of locally perimeter minimizing
boundaries in the Euclidean framework. A key additional difficulty in our
setting, besides the fact that the spaces are not smooth, is that they are
curved (and we aim at an effective regularity theory, i.e. without the
dependence on flatness parameters such as the injectivity radius). Therefore
we will need to control at the same time the regularity of the space (with
constants only depending on the Ricci curvature and volume lower bounds) and
the regularity of the minimal boundary inside it.
###### Definition .
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ space.
Let $\eta>0$ and $r\in(0,1)$ be fixed. The quantitative regular set
$\mathcal{R}_{\eta,r}\subset X$ is defined by
$\mathcal{R}_{\eta,r}:=\\{x\in
X\,:\,\mathsf{d}_{GH}(B_{s}(x),B_{s}(0^{N}))\leq\eta s\,\quad\text{for any
$0<s<r$}\\}\,,$
where we indicated by $B_{r}(0^{N})\subset\mathbb{R}^{N}$ the Euclidean ball
of radius $r$.
###### Definition .
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ space.
Let $\eta>0$ and $r\in(0,1)$ be fixed. For any $0\leq k\leq N$, we shall
denote
$\displaystyle\mathcal{S}^{k}_{\eta,r}:=\\{x\in X$
$\displaystyle\,:\,\mathsf{d}_{GH}(B_{s}(x),B_{s}(0^{k+1},z^{*}))\geq\eta
s\,,$ $\displaystyle\quad\text{for any $\mathbb{R}^{k+1}\times C(Z)$ and all
$r<s<1$}\\}\,,$
where $B_{s}(0^{k+1},z^{*})$ denotes the ball centred at the tip of a cone
$\mathbb{R}^{k+1}\times C(Z)$.
In an analogous way we can deal with boundary points of local perimeter
minimizers.
###### Definition (Quantitative singular sets for minimizing boundaries).
Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and let $E\subset X$ be a set of locally finite perimeter. Let
us suppose that $E$ is locally perimeter minimizing inside a ball
$B_{2}(x)\subset X$ and that $\partial X\cap B_{2}(x)=\emptyset$.
For any $\delta>0$, let $\mathcal{S}^{E}_{\delta}\subset\partial E$ be the
quantitative singular set defined by
$\displaystyle\mathcal{S}^{E}_{\delta}:=\\{x\in\partial E\,$
$\displaystyle:\,\text{there exists no $r\in(0,1)$}$ $\displaystyle\text{for
which $E\cap B_{r}(x)$ is $\delta$-regular at $x$}\\}\,.$
Moreover, for any $r>0$ we shall denote
$\displaystyle\mathcal{S}^{E}_{\delta,r}:=\\{x\in\partial E\,$
$\displaystyle:\,\text{there exists no $s\in(r,1)$}$ $\displaystyle\text{for
which $E\cap B_{s}(x)$ is $\delta$-regular at $x$}\\}\,$
and
$\overline{\mathcal{S}}^{E}_{\delta,r}:=\\{x\in\partial E\,:\,\text{$E\cap
B_{r}(x)$ is not $\delta$-regular at $x$}\\}\,.$
###### Remark .
A direct consequence of the definitions is that
$\mathcal{S}^{E}_{\delta}=\cap_{r>0}\mathcal{S}^{E}_{\delta,r}=\cap_{i\in\mathbb{N}}\mathcal{S}^{E}_{\delta,r_{i}}\,\quad\text{and
}\,\quad\mathcal{S}^{E}_{\delta,r}=\cap_{s>r}\overline{\mathcal{S}}^{E}_{\delta,s}\,,$
for any $\delta>0$ and for any sequence $r_{i}\downarrow 0$.
###### Definition (Quantitative regular sets for minimal boundaries).
Let $(X,\mathsf{d},\mathscr{H}^{N})$ and $E\subset X$ be as in subsection 6.4.
Given $\eta>0$ and $r>0$ we shall denote by
$\displaystyle\mathcal{R}^{E}_{\eta,r}$ $\displaystyle:=\\{x\in\partial
E\,:\,E\cap B_{s}(x)\quad\text{is}\quad\text{ $\eta$-regular for any
$s\in(0,r)$}\\}\,,$ $\displaystyle\mathcal{R}^{E}_{\eta}$
$\displaystyle:=\bigcup_{r>0}\mathcal{R}^{E}_{\eta,r}=\\{x\in\partial
E\,:\,\exists\,r>0\,\text{ s.t. }E\cap B_{r}(x)\quad\text{is
$\eta$-regular}\\}\,,$
the quantitative regular sets of the minimal boundary $\partial E$.
###### Remark .
Let us notice that
(6.46) $\mathcal{R}^{E}=\bigcap_{\eta>0}\mathcal{R}^{E}_{\eta}.$
This is a consequence of the very definitions and of the
$\varepsilon$-regularity Theorem 6.1. Also, $\mathcal{R}^{E}_{\eta}$ is open
as soon as $\eta<\eta(N)$. Moreover,
$\mathcal{R}^{E}_{\eta}=\partial
E\setminus\mathcal{S}^{E}_{\eta}\,,\quad\text{for any $\eta>0$}\,.$
###### Remark .
An inspection of the proof of Theorem 6.1 shows that, if $\eta<\eta(N)$ and
$x\in\mathcal{R}\cap\mathcal{R}^{E}_{\eta}\,,$
then $x\in\mathcal{R}^{E}$ (cf. also with subsection 6.1).
###### Theorem 6.7.
For every $K\in\mathbb{R}$ and $N\in[1,\infty)$ there exists $\delta_{K,N}>0$
with the following property. Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an
$\operatorname{RCD}(K,N)$ metric measure space, $x\in X$ such that $\partial
X\cap B_{2}(x)=\emptyset$, and $E\subset X$ be a set of locally finite
perimeter such that $E$ is perimeter minimizing in $B_{2}(x)$. Then for any
$\delta\in(0,\delta_{K,N})$ and for any $\gamma\in(0,1)$ there exist
$C=C\big{(}K,N,\delta,\operatorname{Per}(E,B_{2}(x)),\gamma\big{)}>0$ and
$r_{0}=r_{0}\big{(}K,N,\operatorname{Per}(E,B_{2}(x))\big{)}>0$ so that the
following Minkowski content-type estimate on the quantitative singular set
$\mathcal{S}^{E}_{\delta}\subset\partial E$ holds:
(6.47) $\mathscr{H}^{N}\big{(}T_{r}(\mathcal{S}^{E}_{\delta})\cap
B_{1}(x)\big{)}\leq C\,r^{2-\gamma}\,\quad\text{for any $r\in(0,r_{0})$ },$
where $T_{r}$ denotes the tubular neighbourhood of radius $r>0$.
When $(X,\mathsf{d},\mathscr{H}^{N})$ is a non-collapsed Ricci limit space or
a finite dimensional Alexandrov space with curvature bounded below, the bounds
(6.47) can be strengthened to
(6.48) $\mathscr{H}^{N}(T_{r}(\mathcal{S}^{E}_{\delta})\cap B_{1}(x))\leq
C\,r^{2}\,,\quad\text{for any $r\in(0,r_{0})$}\,.$
###### Remark .
There is no direct implication between (6.47) and the Hausdorff dimension
estimate in Theorem 6.5. Indeed, while it is easily seen that (6.47) is much
stronger than the Hausdorff dimension estimate $\dim_{H}(\mathcal{S}^{E})\leq
N-2$, it does not imply the sharp estimate $\dim_{H}(\mathcal{S}^{E})\leq
N-3$. On the other hand, the Minkowski type estimate (6.47) is not implied by
any Hausdorff dimension estimate. As an elementary example just to fix the
ideas, note for instance that $\mathbb{Q}^{N}\subset\mathbb{R}^{N}$ has
Hausdorff dimension $0$, but no Minkowski content-type estimate holds since
any tubular neighbourhood of $\mathbb{Q}^{N}$ is the whole space
$\mathbb{R}^{N}$.
###### Remark .
While the proof of the Hausdorff dimension bound
$\dim_{H}(\mathcal{S}^{E})\leq N-3$ for local perimeter minimizers is
independent of the mean curvature bounds proved in section 5, these play a key
role in the proof of Theorem 6.7.
Theorem 6.7 will be proved at the end of the section. Below, we first
establish a series of auxiliary results.
Thanks to subsubsection 2.4.6, there exist constants
$C_{K,N},\bar{\delta}_{K,N}>0$ such that, if $(X,\mathsf{d},\mathscr{H}^{N})$
is an $\operatorname{RCD}(K,N)$ metric measure space and $E\subset X$ is a set
of finite perimeter minimizing the perimeter in $B_{2}(x)\subset X$, then
(6.49) $\mathscr{H}^{N}\big{(}(E^{\delta}\setminus\bar{E})\cap
B_{1}(x)\big{)}\leq
C_{K,N}\,\operatorname{Per}(E,B_{2}(x))\,\delta\,,\quad\text{for any
}\delta\in(0,\bar{\delta}_{K,N})\,,$
where we keep the notation $E^{\delta}$ for the $\delta$-enlargement of the
set $E$, see (6.10).
###### Corollary .
There exist constants $C_{K,N},\bar{\delta}_{K,N}>0$ with the following
property. Let $(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(K,N)$
metric measure space and let $E\subset X$ be a set of finite perimeter
minimizing the perimeter in $B_{2}(x)\subset X$. Then, for any
$\delta\in(0,\bar{\delta}_{K,N})$, there exists $\rho\in(1/2,1)$ such that
$\operatorname{Per}(B_{\rho}(x),E^{\delta}\setminus\bar{E})\leq
C_{K,N}\,\operatorname{Per}(E,B_{2}(x))\,\delta\,.$
###### Proof.
The conclusion follows from the estimate (6.49). Indeed, by the coarea formula
Theorem 2.4 applied to the distance function from $x$ we can bound
$\int_{1/2}^{1}\operatorname{Per}\big(B_{s}(x),E^{\delta}\setminus\bar{E}\big)\,\mathrm{d}s\leq\mathscr{H}^{N}\big((E^{\delta}\setminus\bar{E})\cap(B_{1}(x)\setminus B_{1/2}(x))\big)\leq\mathscr{H}^{N}\big((E^{\delta}\setminus\bar{E})\cap B_{1}(x)\big)\leq C_{K,N}\,\operatorname{Per}(E,B_{2}(x))\,\delta\,.$
∎
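For completeness, the averaging step implicit in the proof can be spelled out; a sketch, with the constant renamed in the statement:

```latex
% The interval (1/2,1) has length 1/2. If the integrand exceeded twice its
% average at every radius, integrating over (1/2,1) would contradict the
% coarea bound above; hence there exists \rho\in(1/2,1) with
\operatorname{Per}\big(B_{\rho}(x),E^{\delta}\setminus\bar{E}\big)
  \leq 2\int_{1/2}^{1}\operatorname{Per}\big(B_{s}(x),E^{\delta}\setminus\bar{E}\big)\,\mathrm{d}s
  \leq 2\,C_{K,N}\,\operatorname{Per}(E,B_{2}(x))\,\delta\,.
```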
The so-called interior/exterior touching ball condition is a regularity
property for domains $\Omega\subset X$. The interior one amounts to asking that,
at any given point $x\in\partial\Omega$, there exist a point $y\in\Omega$ and
$r>0$ such that $\mathsf{d}(x,y)=r$ and $B_{r}(y)\subset\Omega$.
When it holds uniformly, on a smooth Riemannian manifold, it yields a control
on the second fundamental form of the boundary of the domain.
Our next goal is to prove that minimal boundaries verify a weak
interior/exterior touching ball condition in our setting.
###### Definition (Set of touching points).
Let $(X,\mathsf{d},\mathfrak{m})$ be an $\operatorname{RCD}(K,N)$ metric
measure space and let $E\subset X$ be a local perimeter minimizer. For any
$\delta\in(0,\delta_{0})$, we let $\mathcal{C}_{\delta}\subset\partial E$ be
the set of interior and exterior touching points of balls of radius $\delta$,
i.e.
$\mathcal{C}_{\delta}:=\{x\in\partial E\,:\,\text{there exist }B_{\delta}(x_{1})\subset E\text{ and }B_{\delta}(x_{2})\subset E^{c}\text{ such that }x\in\partial B_{\delta}(x_{1})\cap\partial B_{\delta}(x_{2})\}\,.$
###### Proposition .
There exist constants $\delta_{K,N},\,C_{K,N}>0$ such that, for any
$\operatorname{RCD}(K,N)$ metric measure space
$(X,\mathsf{d},\mathscr{H}^{N})$, for any $x\in X$ and for any set of finite
perimeter $E\subset X$ such that $E$ is perimeter minimizing in
$B_{2}(x)\subset X$, the following holds:
(6.50) $\operatorname{Per}(E,B_{1/2}(x)\setminus\mathcal{C}_{\delta})\leq
C_{K,N}\,\operatorname{Per}(E,B_{2}(x))\,\delta\,,\quad\text{for any
}\delta\in(0,\delta_{K,N})\,.$
###### Proof.
It is sufficient to estimate the size of the set $\mathcal{C}^{e}_{\delta}$ of
touching points of exterior tangent balls as in (6.50). A similar argument
will give the estimate for the size of the set of touching points of interior
balls $\mathcal{C}^{i}_{\delta}$. Then the estimate for $\mathcal{C}_{\delta}$
will follow, since
$\mathcal{C}_{\delta}=\mathcal{C}^{i}_{\delta}\cap\mathcal{C}^{e}_{\delta}$.
Let us fix $\delta\in(0,\bar{\delta}_{K,N})$ and choose $\rho\in(1/2,1)$ given
by subsection 6.4 above. Up to slightly perturbing $\rho$, we can also assume
that $\operatorname{Per}(E^{\delta},\partial B_{\rho}(x))=0$. Observe that
$E\cup(E^{\delta}\cap B_{\rho}(x))$ is a compactly
supported perturbation of $E$ in $B_{2}(x)$. Hence, by perimeter minimality,
it holds:
$\operatorname{Per}(E,B_{2}(x))\leq\operatorname{Per}(E\cup(E^{\delta}\cap
B_{\rho}(x)),B_{2}(x))\,.$
Therefore
(6.51) $\operatorname{Per}(E,B_{\rho}(x))\leq\operatorname{Per}(E^{\delta},B_{\rho}(x))+\operatorname{Per}(B_{\rho}(x),E^{\delta}\setminus\bar{E})\leq\operatorname{Per}(E^{\delta},B_{\rho}(x))+C_{K,N}\,\operatorname{Per}(E,B_{2}(x))\,\delta\,.$
Letting $\Gamma_{\delta}:=\partial E^{\delta}\cap\overline{B}_{\rho}(x)$ and
$\Gamma_{\delta,\Sigma}$ be the set of touching points of minimizing geodesics
from $\Gamma_{\delta}$ to $\Sigma$, we can estimate by subsection 6.2
(6.52)
$\operatorname{Per}(E^{\delta},B_{\rho}(x))\leq\operatorname{Per}(E,\Gamma_{\delta,\Sigma})+C_{K,N}\operatorname{Per}(E,B_{2}(x))\,\delta\,.$
Notice that all the points in $\Gamma_{\delta,\Sigma}$ are touching points of
exterior balls of radius $\delta$ on $\partial E$. Hence
$\Gamma_{\delta,\Sigma}\subset\mathcal{C}_{\delta}^{e}$. Taking into account
(6.51) and (6.52), we can then estimate
(6.53) $\operatorname{Per}(E,B_{1/2}(x)\setminus\mathcal{C}^{e}_{\delta})\leq
C_{K,N}\,\operatorname{Per}(E,B_{2}(x))\,\delta\,.$
Combining (6.53) with the analogous estimate valid for the set of touching
points of interior balls, we get (6.50). ∎
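The last combination step can be made explicit: since $\mathcal{C}_{\delta}=\mathcal{C}^{i}_{\delta}\cap\mathcal{C}^{e}_{\delta}$, the complement of $\mathcal{C}_{\delta}$ splits into the union of the two complements, and the perimeter measure is subadditive:

```latex
\operatorname{Per}\big(E,B_{1/2}(x)\setminus\mathcal{C}_{\delta}\big)
  \leq \operatorname{Per}\big(E,B_{1/2}(x)\setminus\mathcal{C}^{i}_{\delta}\big)
     + \operatorname{Per}\big(E,B_{1/2}(x)\setminus\mathcal{C}^{e}_{\delta}\big)
  \leq 2\,C_{K,N}\,\operatorname{Per}(E,B_{2}(x))\,\delta\,,
```

which is (6.50) up to renaming the constant.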
###### Remark .
It is worth pointing out the following nontrivial consequence of subsection
6.4: if $E$ is locally perimeter minimizing, then
$\operatorname{Per}_{E}$-a.e. point $x\in\partial E$ is an intermediate point
of a minimizing geodesic along which the signed distance function from $E$ is
realized (that would correspond to a perpendicular geodesic on a smooth
Riemannian manifold).
Given a set of finite perimeter $E\subset\mathbb{R}^{n}$, locally perimeter
minimizing in an open domain, the existence of an interior and of an exterior
touching ball at a given point $x\in\partial E$ is enough to guarantee the
regularity of the boundary near the touching point.
One way to verify this conclusion is to argue that the presence of both an
interior and an exterior touching ball forces the tangent cone at the point to
be flat and this is enough to guarantee regularity in a neighbourhood, as we
already pointed out.
There is also a more quantitative approach, whose starting point is given by
the following observation: there exists $C=C_{n}>0$ such that if $x\in\partial
B_{C_{n}\lambda/\delta}(x_{1})\cap\partial B_{C_{n}\lambda/\delta}(x_{2})$,
where $\delta,\lambda>0$ and $B_{C_{n}\lambda/\delta}(x_{1})$ and
$B_{C_{n}\lambda/\delta}(x_{2})$ are an interior and an exterior touching ball
respectively, then
(6.54) $E\cap B_{\lambda}(y)\,\quad\text{is $\delta$-flat at $y$, for every
$y$ such that $\left\lvert x-y\right\rvert<\lambda$}\,.$
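The observation (6.54) rests on an elementary slab estimate; a sketch of the Euclidean computation, with both tangent balls of radius $R=C_{n}\lambda/\delta$:

```latex
% Both tangent spheres touch the common tangent hyperplane \pi at x. Within
% horizontal distance 2\lambda from x, each sphere deviates from \pi by at most
R-\sqrt{R^{2}-(2\lambda)^{2}}
  = \frac{(2\lambda)^{2}}{R+\sqrt{R^{2}-(2\lambda)^{2}}}
  \leq \frac{4\lambda^{2}}{R}
  = \frac{4\,\delta\lambda}{C_{n}}\,.
% Since \partial E is trapped between the two spheres, for C_{n}\geq 8 the
% boundary lies in a slab of width \leq \delta\lambda around \pi inside
% B_{2\lambda}(x)\supset B_{\lambda}(y), which gives \delta-flatness at every
% y with |x-y|<\lambda.
```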
As subsection 6.1 clearly illustrates, the existence of an interior and an
exterior touching ball at a boundary point of a perimeter minimizing set is
not enough to guarantee that the tangent cone is flat, nor that the boundary is
regular in a neighbourhood of the point.
Even on a smooth Riemannian manifold, in order to guarantee
$\delta$-regularity, the existence of interior/exterior touching balls needs
to be combined with closeness (at the given scale) of the ball to the
Euclidean ball, as shown in the next lemma.
###### Lemma .
There exists a constant $C=C_{N}>0$ such that the following holds. Let
$(X,\mathsf{d},\mathscr{H}^{N})$ be an $\operatorname{RCD}(-(N-1),N)$ metric
measure space and let $E\subset X$ be a set of finite perimeter that locally
minimizes the perimeter in $B_{2}(x)\subset X$. Let $\lambda\in(0,1/2)$,
$\delta>0$ and assume that:
* (i)
$x\in\partial E$ is a touching point of an interior and an exterior ball of
radius $C_{N}\lambda/\delta$;
* (ii)
$B_{\lambda}(x)$ is $\delta\lambda$-GH close to
$B_{\lambda}(0)\subset\mathbb{R}^{N}$.
Then, for any $y\in\partial E\cap B_{\lambda/2}(x)$, $E\cap B_{\lambda}(y)$ is
$2\delta$-regular in $B_{\lambda}(y)$.
###### Proof.
Condition (ii) guarantees scale invariant $\delta$-closeness, in GH sense, of
$B_{\lambda}(x)$ to $B_{\lambda}(0)\subset\mathbb{R}^{N}$ and of
$B_{\lambda}(y)$ to $B_{\lambda}(0)\subset\mathbb{R}^{N}$ for any
$y\in\partial E\cap B_{\lambda/2}(x)$. The proof is then reduced to the
Euclidean setting, where the existence of interior/exterior touching balls
with radii $C_{N}\lambda/\delta$ guarantees $\delta$-flatness, as we remarked
in (6.54). ∎
By (6.54), we can bound in an effective way the perimeter of the set where
there are no interior/exterior touching balls of a given size. In order to
guarantee that regularity of the ambient balls is in force at many locations
and scales along $\partial E$, we will rely on the quantitative bounds for the
singular strata of noncollapsed $\operatorname{RCD}$ spaces, obtained in [19]
following the strategy of [46].
We will be focusing on codimension two singularities. With this aim, let us
state an $\varepsilon$-regularity result that follows from [29].
###### Theorem 6.8 (Boundary $\varepsilon$-regularity).
Let $K\in\mathbb{R}$ and $1\leq N<\infty$ be fixed. Then there exists
$\varepsilon=\varepsilon(K,N)>0$ such that the following holds. If
$(X,\mathsf{d},\mathscr{H}^{N})$ is an $\operatorname{RCD}(K,N)$ space, $x\in
X$ and $s\in(0,1)$ are such that
$\mathsf{d}_{GH}(B_{s}(x),B_{s}(0^{N-1},z^{*}))<\varepsilon s\,,$
for some $B_{s}(0^{N-1},z^{*})\subset\mathbb{R}^{N-1}\times C(Z)$ and
$\partial X\cap B_{s}(x)=\emptyset$, then
$\mathsf{d}_{GH}(B_{s}(x),B_{s}(0^{N}))<2\varepsilon s\,,$
where $B_{s}(0^{N})\subset\mathbb{R}^{N}$ is the Euclidean ball of dimension
$N$.
###### Proof.
There are only two possibilities for the cone $C(Z)$: either
$C(Z)=\mathbb{R}^{+}$ or $C(Z)=\mathbb{R}$, with the canonical metric measure
structure in both cases. The possibility that $C(Z)=\mathbb{R}^{+}$ can be
excluded thanks to [29, Theorem 1.6]. Hence $C(Z)=\mathbb{R}$ and the ball is
$2\varepsilon$-regular, as we claimed.
∎
Thanks to Theorem 6.8, we can easily check that if
$(X,\mathsf{d},\mathscr{H}^{N})$ is an $\operatorname{RCD}(-(N-1),N)$ space
and $B_{s}(x)\cap\partial X=\emptyset$ for some ball $B_{s}(x)\subset X$, then
(6.55) $B_{s}(x)\setminus\mathcal{S}^{N-2}_{\eta,r}=\{y\in B_{s}(x)\,:\,\mathsf{d}_{GH}(B_{t}(y),B_{t}(0^{N}))<\eta t\,,\text{ for any $t\in(0,r)$}\}\,,$
for any $\eta\in(0,\eta(N))$.
Let us recall the volume estimate for the quantitative singular stratum
obtained in [19] (see [19, Theorem 2.4] and the discussion below it),
following [46].
###### Theorem 6.9.
Let $K\in\mathbb{R}$, $2\leq N<\infty$, $1\leq k\leq N$, $v,\eta,\gamma>0$ be
fixed. Then there exists a constant $c=c(K,N,k,\eta,v,\gamma)>0$ such that the
following holds. If $(X,\mathsf{d},\mathscr{H}^{N})$ is an
$\operatorname{RCD}(K,N)$ space and
$\frac{\mathscr{H}^{N}(B_{1}(x))}{v_{K,N}(1)}\geq v\,,$
then for any $r\in(0,1/2)$ it holds
(6.56) $\mathscr{H}^{N}(B_{1/2}(x)\cap\mathcal{S}^{k}_{\eta,r})\leq
c(K,N,k,\eta,v,\gamma)r^{N-k-\gamma}\,.$
###### Remark .
In [45, Theorem 1.7] it has been shown that for non collapsed Ricci limit
spaces it is possible to replace $(N-k-\gamma)$ with $(N-k)$ in the exponent
in (6.56), obtaining the much stronger estimate
(6.57) $\mathscr{H}^{N}(B_{1/2}(x)\cap\mathcal{S}^{k}_{\eta,r})\leq
c(N,K,k,\eta,v)r^{N-k}\,,\quad\text{for any $r\in(0,1/2)$ }.$
The very same estimate (6.57) was established in [101, Corollary 1.5] for
$N$-dimensional Alexandrov spaces with curvature bounded below by $K$.
Relying on Theorem 6.9, let us estimate the size of the intersection of the
quantitative singular stratum $\mathcal{S}^{k}_{\eta,r}$ with the boundary of
a locally perimeter minimizing set of finite perimeter.
###### Proposition .
Let $K\in\mathbb{R}$, $2\leq N<\infty$, $1\leq k\leq N$, $v,\eta,\gamma>0$ be
fixed. Then there exists a constant $c=c(K,N,k,\eta,v,\gamma)>0$ such that the
following holds. If $(X,\mathsf{d},\mathscr{H}^{N})$ is an
$\operatorname{RCD}(K,N)$ space such that
$\frac{\mathscr{H}^{N}(B_{1}(x))}{v_{K,N}(1)}\geq v\,$
and $E\subset X$ is a set of finite perimeter which is perimeter minimizing in
$B_{2}(x)$, then there exists $r_{0}=r_{0}(K,N)>0$ independent of $k,\,\eta$
and $\gamma$ such that
(6.58) $\operatorname{Per}(E,B_{1/2}(x)\cap\mathcal{S}^{k}_{\eta,r})\leq c(K,N,k,\eta,v,\gamma)\,r^{N-k-1-\gamma}\,,\quad\text{for any $r\in(0,r_{0})$}\,,$
(6.59) $\operatorname{Per}(E,B_{1/2}(x)\setminus\mathcal{R}_{\eta,r})\leq c(K,N,\eta,v,\gamma)\,r^{1-\gamma}\,,\quad\text{for any $r\in(0,r_{0})$}\,.$
###### Proof.
Let us consider a covering of $\partial E\cap
B_{1/2}(x)\cap\mathcal{S}^{k}_{\eta,r}$ with balls $B_{r\eta}(x_{i})$ such
that the balls $B_{r\eta/5}(x_{i})$ are disjoint, via a Vitali covering
argument.
As shown for instance in [19, equation (2.5)], unwinding the definitions, one
can check that
(6.60) $T_{\eta
r}(\mathcal{S}^{k}_{2\eta,r})\subset\mathcal{S}^{k}_{\eta,r}\,,$
where $T_{\eta r}$ denotes the tubular neighbourhood of radius $r\eta$. Thus,
we can estimate:
(6.61)
$\operatorname{Per}(E,B_{1/2}(x)\cap\mathcal{S}^{k}_{\eta,r})\leq\sum_{i}\operatorname{Per}(E,B_{\eta
r}(x_{i}))\leq C\sum_{i}\frac{\mathscr{H}^{N}(B_{\eta r}(x_{i}))}{\eta r}\,,$
where the constant $C$ is given by subsubsection 2.4.6.
Relying on (6.60), Theorem 6.9 and the Vitali covering condition, we obtain
$\sum_{i}\frac{\mathscr{H}^{N}(B_{\eta r}(x_{i}))}{\eta r}\leq\frac{C}{\eta
r}\mathscr{H}^{N}(T_{\eta r}(\mathcal{S}^{k}_{2\eta,r}\cap
B_{1/2}(x)))\leq\frac{c}{\eta}r^{N-k-1-\gamma}\,,$
which gives (6.58) when combined with (6.61).
The estimate (6.59) follows from (6.58), thanks to (6.55). ∎
###### Remark .
Relying on the observation in subsection 6.4, in the case of non collapsed
Ricci limit spaces and finite dimensional Alexandrov spaces with curvature
bounded below, it is possible to strengthen (6.58) and (6.59) to
(6.62) $\operatorname{Per}(E,B_{1/2}(x)\cap\mathcal{S}^{k}_{\eta,r})\leq
c(K,N,k,\eta,v)\,r^{N-k-1}\,,$
for any $r\in(0,r_{0})$ and
(6.63) $\operatorname{Per}(E,B_{1/2}(x)\setminus\mathcal{R}_{\eta,r})\leq
c(K,N,\eta,v)\,r\,,$
for any $r\in(0,r_{0})$.
###### Proof of Theorem 6.7.
We claim that for any $\delta\in(0,\delta_{K,N})$ there exists a constant
$C=C(K,N,\delta,\gamma,\operatorname{Per}(E,B_{2}(x)))>0$
such that for any $r\in(0,r_{0})$ and any Vitali covering of
$\overline{\mathcal{S}}^{E}_{\delta,r}\cap B_{1}(x)$ with balls $B_{r}(x_{i})$
such that $x_{i}\in\overline{\mathcal{S}}^{E}_{\delta,r}$ and $B_{r/5}(x_{i})$
are pairwise disjoint for $i=1,\dots,N(r)$, it holds
(6.64) $N(r)\leq C\,r^{2-N-\gamma}\,.$
Indeed, for any ball $B_{r}(x_{i})$ as above, it holds
$B_{r}(x_{i})\cap\mathcal{R}_{\delta/2,2r}\cap\mathcal{C}_{C_{K,N}\frac{r}{\delta}}=\emptyset\,,$
where $\mathcal{C}_{C_{K,N}\frac{r}{\delta}}$ is the set of contact points of
touching balls as in subsection 6.4. This is a consequence of subsection 6.4:
if, by contradiction, $y$ belonged to the intersection above, then $E$ would be
$\delta$-regular on $B_{r}(z)$ for any $z\in B_{r}(y)$; in particular, since
$x_{i}\in B_{r}(y)$, $E$ would be $\delta$-regular on $B_{r}(x_{i})$, a
contradiction.
Since the balls $B_{r/5}(x_{i})$ are pairwise disjoint, we can bound
$\sum_{i\leq N(r)}\operatorname{Per}\big(E,B_{r/5}(x_{i})\big)\leq\operatorname{Per}\bigg(E,\bigcup_{i\leq N(r)}B_{r}(x_{i})\bigg)\leq\operatorname{Per}\big(E,B_{1}(x)\setminus(\mathcal{R}_{\delta/2,2r}\cap\mathcal{C}_{C_{K,N}\frac{r}{\delta}})\big)\leq C\,r^{1-\gamma}\,,$
for some $C=C(K,N,\delta,\gamma,\operatorname{Per}(E,B_{2}(x)))>0$, where the
last inequality follows from subsection 6.4 and (6.59). By the Ahlfors
regularity of the perimeter measure (see subsubsection 2.4.6), we easily get (6.64).
Notice that, for non collapsed Ricci limit spaces and finite dimensional
Alexandrov spaces, the estimate (6.64) can be strengthened into
(6.65) $N(r)\leq C\,r^{2-N}\,.$
This is a consequence of the better Minkowski bounds obtained in [45, 101] in
such a setting, arguing as we did above, using subsection 6.4 and subsection
6.4.
To conclude the proof, in all the cases of $\operatorname{RCD}(K,N)$ spaces,
non collapsed Ricci limit spaces and finite dimensional Alexandrov spaces, it
is sufficient to rely on the Ahlfors regularity bound for $\mathscr{H}^{N}$
and to recall that
(6.66)
$\mathcal{S}^{E}_{\delta,r}=\bigcap_{s>r}\overline{\mathcal{S}}^{E}_{\delta,s}\,$
and
(6.67) $\mathcal{S}^{E}_{\delta}=\bigcap_{r>0}\mathcal{S}^{E}_{\delta,r}\,.$
∎
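To see how the counting bound (6.64) produces the Minkowski-type estimate, one can argue as follows (a sketch; here we use the Bishop–Gromov upper bound $\mathscr{H}^{N}(B_{2r}(x_{i}))\leq C(K,N)\,r^{N}$ for $r\leq 1$, available on noncollapsed $\operatorname{RCD}(K,N)$ spaces):

```latex
% Since the centres x_i lie in \overline{\mathcal{S}}^{E}_{\delta,r}\cap B_1(x)
% and the balls B_r(x_i) cover this set, the enlarged balls B_{2r}(x_i) cover
% its tubular neighbourhood of radius r:
\mathscr{H}^{N}\Big(T_{r}\big(\overline{\mathcal{S}}^{E}_{\delta,r}\cap B_{1}(x)\big)\Big)
  \leq \sum_{i\leq N(r)}\mathscr{H}^{N}\big(B_{2r}(x_{i})\big)
  \leq C\,N(r)\,r^{N}
  \leq C\,r^{2-\gamma}\,,
% and the bound passes to \mathcal{S}^{E}_{\delta} through (6.66) and (6.67).
```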
###### Remark .
If $(X,\mathsf{d},\mathscr{H}^{N})$ is a smooth Riemannian manifold equipped
with its volume measure, then (6.47) can be strengthened into
(6.68) $\mathscr{H}^{N}\big{(}T_{r}(\mathcal{S}^{E}_{\delta})\cap
B_{1}(x)\big{)}\leq C\,r^{8}\,\quad\text{for any $r\in(0,r_{0})$ }\,,$
if we allow the constants $C$ and $r_{0}$ to depend on the norm of the full
Riemann curvature tensor on $B_{2}(x)$ and on a lower bound on the injectivity
radius on $B_{2}(x)$, as proved in [114, Theorem 1.6].
Since the constants in (6.47) only depend on the dimension, the lower Ricci
curvature bound and on the perimeter of $E$ on $B_{1}(x)$, our estimates are
not encompassed by those in [114] even in the case of smooth manifolds.
## Appendix A Laplacian bounds vs. mean curvature bounds:
a comparison with the classical literature
The aim of this appendix is to put Theorem 5.1 and Theorem 5.2 into
perspective. In particular, we wish to clarify why Laplacian bounds on the
distance function can be understood as mean curvature bounds. For this reason,
we are going to present some mostly well known results about the distance
function from minimal hypersurfaces on smooth Riemannian manifolds, focusing
for simplicity on the non-negative Ricci curvature case.
As we already remarked, the fact that the distance from a smooth minimal
hypersurface is superharmonic in a manifold with non-negative Ricci curvature is
classical. To the best of our knowledge, the first reference where this result
is explicitly stated, although without proof, is [134]. Therein, the
Laplacian bound was understood in the viscosity sense. In subsequent
contributions, such as [116] and [48], superharmonicity of the distance was
understood in the sense of barriers, following the seminal [34, 44].
###### Theorem A.1.
Let $(M^{n},g)$ be a smooth Riemannian manifold with non-negative Ricci
curvature and let $\Sigma\subset M$ be a smooth hypersurface. Then
$\bm{\Delta}\mathsf{d}_{\Sigma}\leq 0$ on $M\setminus\Sigma$ if and only if
$\Sigma$ is minimal, in the sense that it has vanishing mean curvature.
###### Proof.
We only give an indication of the argument; a complete proof of the
implication from minimality to superharmonicity of the distance can be found for
instance in [48].
Notice that the Laplacian of the distance from a smooth hypersurface coincides
with its mean curvature along the hypersurface, thanks to a classical
computation in Riemannian Geometry. One possible strategy to check
superharmonicity of the distance is to observe that the singular part of the
Laplacian has negative sign, in great generality. Then we can consider
minimizing geodesics along which the distance to the hypersurface is realized.
Along these rays, the vanishing mean curvature condition at the starting point
propagates to nonpositivity of the Laplacian of the distance, thanks to the
non-negative Ricci curvature condition.
The converse implication, from superharmonicity of the distance to minimality,
relies on the same principle, combined with the fact that
$\mathsf{d}_{\Sigma}$ is smooth on either side of $\Sigma$, locally in a
neighbourhood of any point. In order to check that the mean curvature
$H_{\Sigma}$ vanishes at a given $p\in\Sigma$, let us consider the minimizing
geodesic $\gamma:(-\varepsilon,\varepsilon)\to M$ such that $\gamma(0)=p$ and
$\gamma^{\prime}(0)$ is perpendicular to $T_{p}\Sigma$. Then observe that,
combining the superharmonicity of $\mathsf{d}_{\Sigma}$ with the already
mentioned connection between mean curvature and Laplacian of the distance,
$0\leq-\lim_{t\uparrow
0}\Delta\mathsf{d}_{\Sigma}(\gamma(t))=H_{\Sigma}(p)=\lim_{t\downarrow
0}\Delta\mathsf{d}_{\Sigma}(\gamma(t))\leq 0\,,$
hence $H_{\Sigma}(p)=0$. ∎
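The propagation step invoked in the proof is the classical Riccati comparison. A sketch: along a unit-speed geodesic $\gamma$ leaving $\Sigma$ orthogonally, set $m(t):=\Delta\mathsf{d}_{\Sigma}(\gamma(t))$; the Bochner formula applied to the distance function ($|\nabla\mathsf{d}_{\Sigma}|=1$) gives

```latex
m'(t) = -\big|\operatorname{Hess}\mathsf{d}_{\Sigma}\big|^{2}(\gamma(t))
        - \operatorname{Ric}\big(\gamma'(t),\gamma'(t)\big)
  \leq -\frac{m(t)^{2}}{n-1}
  \leq 0\,,
% by \operatorname{Ric}\geq 0 and Cauchy--Schwarz on the (n-1)-dimensional
% trace, since \operatorname{Hess}\mathsf{d}_{\Sigma}(\nabla\mathsf{d}_{\Sigma},\cdot)=0.
```

so $m(0^{+})=H_{\Sigma}(p)=0$ forces $m(t)\leq 0$ along the ray, which is the Laplacian bound $\bm{\Delta}\mathsf{d}_{\Sigma}\leq 0$ on the regular part.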
On smooth Riemannian manifolds with non-negative Ricci curvature, the distance
from a minimal hypersurface is superharmonic even for certain minimal
hypersurfaces that are not globally smooth. This is a key point for
applications, since minimal hypersurfaces that are built through
variational arguments might be non smooth in ambient dimension greater than
$8$.
Notice that Theorem 5.1 already gives a substantial contribution in this
direction. Indeed, we can cover at least all the minimal hypersurfaces that
are locally boundaries of sets of locally minimal perimeter. (In particular,
it provides a different proof of the first implication in Theorem A.1 since,
as we already mentioned, all smooth minimal hypersurfaces are locally
boundaries of perimeter minimizing sets.)
Actually, the principle “minimality implies superharmonicity of the distance
function” extends even to minimal hypersurfaces that are not necessarily
locally boundaries.
Let us introduce some terminology, following [138] for this presentation.
###### Definition .
Given a smooth Riemannian manifold $(M^{n},g)$, a singular hypersurface with
singular set of codimension no less than $k$ ($k<n-1$, $k\in\mathbb{N}$) is a
closed set $\overline{\Sigma}\subset M$ such that
$\mathscr{H}^{n-1}(\overline{\Sigma})<\infty$, where the regular part
$\mathcal{R}(\Sigma)$ is defined by
$\mathcal{R}(\Sigma):=\{x\in\overline{\Sigma}\,:\,\overline{\Sigma}\text{ is a smooth embedded hypersurface in a neighbourhood of }x\}$
and $\mathcal{S}(\Sigma):=\overline{\Sigma}\setminus\mathcal{R}(\Sigma)$ is
the singular part, which we assume to satisfy
$\dim_{H}(\mathcal{S}(\Sigma))\leq n-1-k$.
Any such singular hypersurface represents an integral varifold, which we
denote by $[\Sigma]$. We will say that $\Sigma$ is minimal if $[\Sigma]$ is
a stationary varifold and the tangent cones of $[\Sigma]$ have all
multiplicity one.
###### Remark .
We recall that the minimality condition above is equivalent to the requirement
that the mean curvature vanishes on $\mathcal{R}(\Sigma)$ and the density of
$[\Sigma]$ is finite everywhere. Moreover, as shown in [138, Lemma 6.3],
minimal hypersurfaces produced through min-max are minimal according to
Appendix A above.
The next statement originates from an argument due to Gromov in his proof of
the isoperimetric inequality [78].
###### Theorem A.2.
Let $(M^{n},g)$ be a smooth Riemannian manifold with non-negative Ricci
curvature. Let $\overline{\Sigma}\subset M$ be minimal in the sense of
Appendix A. Then $\mathsf{d}_{\overline{\Sigma}}$ is superharmonic on
$M\setminus\overline{\Sigma}$.
###### Proof.
The proof is divided into two steps. The first is about controlling the mean
curvature at footpoints of minimizing geodesics on the hypersurface. The
second deals with the propagation of the mean curvature bound to obtain a
Laplacian bound, as in previous arguments in this note.
Step 1. As proved for instance in [138, Lemma 2.1], following the original
argument due to Gromov, the following holds: for any $p\in
M\setminus\overline{\Sigma}$, let
$\gamma:[0,\mathsf{d}(p,\overline{\Sigma})]\to M$ be a minimizing geodesic
connecting $p$ to $\overline{\Sigma}$, and let $\gamma(0)=q$ be the footpoint
of the geodesic on $\overline{\Sigma}$; then $q\in\mathcal{R}(\Sigma)$.
Indeed, the geodesic sphere of radius $\mathsf{d}(p,q)/2$ centred at
$\gamma(\mathsf{d}(p,q)/2)$ is a smooth hypersurface near to $q$ and
$\overline{\Sigma}$ lies on one side of it. Since all tangent cones have
multiplicity one, the tangent cone to $[\Sigma]$ at $q$ is unique and it is a
hyperplane. Hence, by Allard’s regularity theorem [1], $\overline{\Sigma}$ is
regular at $q$. Therefore, the mean curvature of $\Sigma$ is vanishing in a
neighbourhood of $q$.
Step 2. Let us propagate the information that the mean curvature is vanishing
in the classical sense near to footpoints of minimizing geodesics to prove
that $\mathsf{d}_{\overline{\Sigma}}$ is superharmonic on
$M\setminus\overline{\Sigma}$.
We can rely for instance on the localization technique to argue that it is
sufficient to control the regular part of the Laplacian of
$\mathsf{d}_{\overline{\Sigma}}$ (see for instance [37, Theorem 1.3, Corollary
4.16] and Step 2 in the proof of Theorem 5.1). Then, to control the regular
part, it is enough to observe that $\mathsf{d}_{\overline{\Sigma}}$ is smooth
near to initial points of rays in the localization (thanks to the smoothness
of $\Sigma$ obtained in Step 1). Moreover, the Laplacian of the distance is
vanishing there; therefore it remains non-positive along the rays by the
non-negative Ricci curvature assumption. ∎
###### Remark .
The proof of Theorem A.2 above works in particular for hypersurfaces that are
locally boundaries of locally perimeter minimizing sets, once we appeal to the
classical Euclidean regularity theory for local perimeter minimizers. In
particular it provides a different proof of Theorem 5.1 for smooth Riemannian
manifolds. However, the use of deep regularity theorems in Geometric Measure
Theory makes the extension of this strategy to non smooth ambient spaces
unlikely, as already pointed out in [119]. The interest towards proofs of mean
curvature bounds and regularity results for area minimizing surfaces not
heavily relying on GMT tools was pointed out also in [79, 80].
###### Remark .
As remarked in [123], if $E\subset\mathbb{R}^{n}$ is an open set and
$\Delta\mathsf{d}_{\partial E}\leq 0$ locally in a neighbourhood of $\partial
E$ and away from $\partial E$, then $\partial E$ satisfies the minimal
surfaces equation in the viscosity sense. Indeed the signed distance from a
smooth boundary is smooth in a neighbourhood of any point along the boundary,
where its Laplacian corresponds to the mean curvature, as we pointed out in
the proof of Theorem A.1 above. See also [133] for some arguments in the same
spirit in the Riemannian framework.
## References
* [1] W. K. Allard: On the first variation of a varifold, Ann. of Math. (2) 95 (1972), 417–491.
* [2] F. J. Almgren Jr.: Existence and regularity almost everywhere of solutions to elliptic variational problems with constraints, Mem. Amer. Math. Soc. 4 (1976), no. 165, viii+199 pp.
* [3] L. Ambrosio: Some fine properties of sets of finite perimeter in Ahlfors regular metric measure spaces, Adv. Math., 159 (2001), 51–67.
* [4] L. Ambrosio: Fine properties of sets of finite perimeter in doubling metric measure spaces, Set-Valued Anal., 10 (2002), 111–128.
* [5] L. Ambrosio: Calculus, heat flow and curvature-dimension bounds in metric measure spaces, Proceedings of the International Congress of Mathematicians—Rio de Janeiro 2018. Vol. I. Plenary lectures, 301–340, World Sci. Publ., Hackensack, NJ, 2018.
* [6] L. Ambrosio, E. Brué, D. Semola: Rigidity of the 1-Bakry-Émery inequality and sets of finite perimeter in $\operatorname{RCD}$ spaces, Geom. Funct. Anal. 29 (2019), no. 4, 949–1001.
* [7] L. Ambrosio, S. Di Marino: Equivalent definitions of $\operatorname{BV}$ space and of total variation on metric measure spaces, J. Funct. Anal. 266 (2014), no. 7, 4150–4188.
* [8] L. Ambrosio, N. Gigli, A. Mondino, T. Rajala: Riemannian Ricci curvature lower bounds in metric measure spaces with $\sigma$-finite measure, Trans. Amer. Math. Soc., 367 (2015), 4661–4701.
* [9] L. Ambrosio, N. Gigli, G. Savaré: Calculus and heat flow in metric measure spaces and applications to spaces with Ricci bounds from below, Invent. Math., 195 (2014), 289–391.
* [10] L. Ambrosio, N. Gigli, G. Savaré: Metric measure spaces with Riemannian Ricci curvature bounded from below, Duke Math. J., 163 (2014), 1405–1490.
* [11] L. Ambrosio, N. Gigli, G. Savaré: Bakry-Émery curvature-dimension condition and Riemannian Ricci curvature bounds, Ann. Probab. 43 (2015), no. 1, 339–404.
* [12] L. Ambrosio, S. Honda: New stability results for sequences of metric measure spaces with uniform Ricci bounds from below, Measure Theory in Non-Smooth Spaces, De Gruyter Open, Warsaw, (2017), 1–51.
* [13] L. Ambrosio, S. Honda: Local spectral convergence in $\operatorname{RCD}^{*}(K,N)$ spaces, Nonlinear Anal. 177 (2018), part A, 1–23.
* [14] L. Ambrosio, A. Mondino, G. Savaré: On the Bakry-Émery condition, the gradient estimates and the local-to-global property of $\operatorname{RCD}^{*}(K,N)$ metric measure spaces, J. Geom. Anal., 26 (2014), 1-33.
* [15] L. Ambrosio, A. Mondino, G. Savaré: Nonlinear diffusion equations and curvature conditions in metric measure spaces, Mem. Amer. Math. Soc. 262 (2019), no. 1270, v+121 pp.
* [16] L. Ambrosio, E. Paolini: Partial regularity for quasi minimizers of perimeter, Papers in memory of Ennio De Giorgi (Italian). Ricerche Mat. 48 (1999), suppl., 167–186.
* [17] M. T. Anderson: Convergence and rigidity of manifolds under Ricci curvature bounds, Invent. Math. 102 (1990), no. 2, 429–445.
* [18] B. Andrews: Moduli of continuity, isoperimetric profiles, and multi-point estimates in geometric heat equations, Surveys in differential geometry 2014. Regularity and evolution of nonlinear equations, 1–47, Surv. Differ. Geom., 19, Int. Press, Somerville, MA, 2015.
* [19] G. Antonelli, E. Bruè, D. Semola: Volume bounds for the quantitative singular strata of non collapsed $\operatorname{RCD}$ metric measure spaces, Anal. Geom. Metr. Spaces, 7 2019, no. 1.
* [20] G. Antonelli, E. Pasqualetto, M. Pozzetta: Isoperimetric sets in non smooth spaces with lower bounds on the Ricci curvature, Nonlinear Anal. 220 (2022), Paper No. 112839, 59 pp.
* [21] G. Antonelli, E. Pasqualetto, M. Pozzetta, D. Semola: Sharp isoperimetric comparison on non collapsed spaces with lower Ricci bounds, preprint arXiv:2201.04916 (2022).
* [22] K. Bacher, K.-T. Sturm: Localization and tensorization properties of the curvature-dimension condition for metric measure spaces, J. Funct. Anal. 259 (2010), no. 1, 28–56.
* [23] D. Bakry, I. Gentil, M. Ledoux: On Harnack inequalities and optimal transportation, Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 14 (2015), no. 3, 705–727.
* [24] M. Biroli, U. Mosco: A Saint-Venant type principle for Dirichlet forms on discontinuous media, Ann. Matem. Pura e Applicata, 169 (1995), 125–181.
* [25] A. Björn, J. Björn, Nonlinear potential theory on metric spaces, EMS Tracts in Mathematics, 17. European Mathematical Society (EMS), Zürich, 2011. xii+403 pp.
* [26] E. Bruè, E. Pasqualetto, D. Semola: Rectifiability of the reduced boundary for sets of finite perimeter over $\operatorname{RCD}(K,N)$ spaces, J. Eur. Math. Soc. (2022), online first DOI 10.4171/JEMS/1217.
* [27] E. Bruè, E. Pasqualetto, D. Semola: Constancy of the dimension in codimension one and locality of the unit normal on $\operatorname{RCD}(K,N)$ spaces, Ann. Sc. Norm. Super. Pisa Cl. Sci. (2022), https://doi.org/10.2422/2036-2145.202110007.
* [28] E. Brué, D. Semola: Constancy of the dimension for $\operatorname{RCD}(K,N)$ spaces via regularity of Lagrangian flows, Comm. Pure Appl. Math. 73 (2020), 1141–1204.
* [29] E. Brué, A. Naber, D. Semola: Boundary regularity and stability for spaces with lower Ricci curvature bounds, Invent. Math. 228 (2022), no. 2, 777–891.
* [30] V. Buffa, G. Comi, M. Miranda: On $\operatorname{BV}$ functions and essentially bounded divergence measure fields in metric spaces, Rev. Mat. Iberoam. 38 (2022), no. 3, 883–946.
* [31] Y. Burago, M. Gromov, G. Perel’man: A. D. Aleksandrov spaces with curvatures bounded below, Uspekhi Mat. Nauk 47 (1992), no. 2(284), 3–51, 222.
* [32] X. Cabré: Nondivergent elliptic equations on manifolds with non-negative curvature, Comm. Pure Appl. Math. 50 (1997), no. 7, 623–665.
* [33] L. A. Caffarelli, A. Córdoba: An elementary regularity theory of minimal surfaces, Differential Integral Equations 6 (1993), no. 1, 1–13.
* [34] E. Calabi: An extension of E. Hopf’s maximum principle with an application to Riemannian geometry, Duke Math. J. 25 (1958), 45–56.
* [35] F. Cavalletti, E. Milman: The Globalization Theorem for the Curvature Dimension Condition, Invent. Math. 226 (2021), no. 1, 1–137.
* [36] F. Cavalletti, A. Mondino: Sharp and rigid isoperimetric inequalities in metric-measure spaces with lower Ricci curvature bounds, Invent. Math. 208 (2017), no. 3, 803–849.
* [37] F. Cavalletti, A. Mondino: New formulas for the Laplacian of distance functions and applications, Anal. PDE 13 (2020), no. 7, 2091–2147.
* [38] F. Cavalletti, A. Mondino: Optimal transport in Lorentzian synthetic spaces, synthetic timelike Ricci curvature lower bounds and applications, Preprint arXiv:2004.08934.
* [39] J. Cheeger: Differentiability of Lipschitz functions on metric measure spaces, Geom. Funct. Anal., 9 (1999), 428–517.
* [40] J. Cheeger, T.-H. Colding: Lower bounds on Ricci curvature and the almost rigidity of warped products, Ann. of Math. (2), 144 (1996), 189–237.
* [41] J. Cheeger, T.-H. Colding: On the structure of spaces with Ricci curvature bounded below. I, J. Differential Geom., 46 (1997), 406–480.
* [42] J. Cheeger, T.-H. Colding: On the structure of spaces with Ricci curvature bounded below. II, J. Differential Geom., 54 (2000), 13–35.
* [43] J. Cheeger, T.-H. Colding: On the structure of spaces with Ricci curvature bounded below. III, J. Differential Geom., 54 (2000), 37–74.
* [44] J. Cheeger, D. Gromoll: The splitting theorem for manifolds of non-negative Ricci curvature, J. Differential Geometry 6 (1971/72), 119–128.
* [45] J. Cheeger, W. Jiang, A. Naber: Rectifiability of singular sets of noncollapsed limit spaces with Ricci curvature bounded below, Ann. of Math. (2) 193 (2021), no. 2, 407–538.
* [46] J. Cheeger, A. Naber: Lower bounds on Ricci curvature and quantitative behaviour of singular sets, Invent. Math. 191 (2013), no. 2, 321–339.
* [47] J. Cheeger, A. Naber: Quantitative stratification and the regularity of harmonic maps and minimal currents, Comm. Pure Appl. Math. 66 (2013), no. 6, 965–990.
* [48] J. Choe, A. Fraser: Mean curvature in manifolds with Ricci curvature bounded from below, Comment. Math. Helv. 93 (2018), no. 1, 55–69.
* [49] T.-H. Colding: Ricci curvature and volume convergence, Ann. of Math. (2) 145 (1997), no. 3, 477–501.
* [50] T.-H. Colding: New monotonicity formulas for Ricci curvature and applications. I, Acta Math. 209 (2012), no. 2, 229–263.
* [51] D. Cordero-Erausquin, R. McCann, M. Schmuckenschläger: A Riemannian interpolation inequality à la Borell, Brascamp and Lieb, Invent. Math. 146 (2001), no. 2, 219–257.
* [52] M.G. Crandall, H. Ishii, P. L. Lions: User’s guide to viscosity solutions of second order partial differential equations, Bull. Amer. Math. Soc. (N.S.) 27 (1992), no. 1, 1–67.
* [53] C. Debin, N. Gigli, E. Pasqualetto: Quasi-continuous vector fields on $\operatorname{RCD}$ spaces, Potential Anal. 54 (2021), no. 1, 183–211.
* [54] E. De Giorgi: Frontiere orientate di misura minima. Seminario di Matematica della Scuola Normale Superiore di Pisa, 1960-61 Editrice Tecnico Scientifica, Pisa 1961 57 pp.
* [55] G. De Philippis, N. Gigli: From volume cone to metric cone in the nonsmooth setting, Geom. Funct. Anal., 26 (2016), 1526–1587.
* [56] G. De Philippis, N. Gigli: Non-collapsed spaces with Ricci curvature bounded from below, J. Éc. polytech. Math., 5 (2018), 613–650.
* [57] G. De Philippis, A. Marchese, F. Rindler: On a conjecture of Cheeger, Measure theory in non-smooth spaces, 145–155, Partial Differ. Equ. Meas. Theory, De Gruyter Open, Warsaw, 2017.
* [58] Q. Ding: Area-minimizing hypersurfaces in manifolds of Ricci curvature bounded below, preprint arXiv:2107.11074v1 (2021).
* [59] Q. Ding: Poincaré inequality on minimal graphs over manifolds and applications, preprint arXiv:2111.04458v1 (2021).
* [60] M. Erbar, K. Kuwada, K.-T. Sturm: On the equivalence of the entropic curvature-dimension condition and Bochner’s inequality on metric measure spaces, Invent. Math., 201 (2015), 993–1071.
* [61] H. Federer: Geometric measure theory, Die Grundlehren der mathematischen Wissenschaften, Band 153 Springer-Verlag New York Inc., New York 1969 xiv+676 pp.
* [62] H. Federer: The singular sets of area minimizing rectifiable currents with codimension one and of area minimizing flat chains modulo two with arbitrary codimension. Bull. Amer. Math. Soc. 76 (1970), 767–771.
* [63] T. Frankel: On the fundamental group of a compact minimal submanifold, Ann. of Math. (2) 83 (1966), 68–73.
* [64] N. Gigli: The splitting theorem in non-smooth context, preprint arXiv:1302.5555.
* [65] N. Gigli: On the differential structure of metric measure spaces and applications, Mem. Amer. Math. Soc., 236 (2015), vi–91.
* [66] N. Gigli: Nonsmooth differential geometry: an approach tailored for spaces with Ricci curvature bounded from below, Mem. Amer. Math. Soc., 251 (2018), v–161.
* [67] N. Gigli: On the regularity of harmonic maps from $\operatorname{RCD}(K,N)$ to $\mathrm{CAT}(0)$ spaces and related results, preprint arXiv:2204.04317.
* [68] N. Gigli, A. Mondino: A PDE approach to nonlinear potential theory in metric measure spaces, J. Math. Pures Appl. (9) 100 (2013), no. 4, 505–534.
* [69] N. Gigli, A. Mondino, G. Savaré: Convergence of pointed non-compact metric measure spaces and stability of Ricci curvature bounds and heat flows, Proc. Lond. Math. Soc. (3), 111 (2015), 1071–1129.
* [70] N. Gigli, A. Mondino, D. Semola: On the notion of Laplacian bounds on $\operatorname{RCD}$ spaces and applications, preprint arXiv:2302.05474.
* [71] N. Gigli, E. Pasqualetto: Behaviour of the reference measure on $\operatorname{RCD}$ spaces under charts, Comm. Anal. Geom. 29 (2021), no. 6, 1391–1414.
* [72] E. Giusti: Minimal surfaces and functions of bounded variation, Monographs in Mathematics, 80. Birkhäuser Verlag, Basel, 1984. xii+240 pp. ISBN: 0-8176-3153-4
* [73] R. E. Greene, H. Wu: $C^{\infty}$ approximations of convex, subharmonic, and plurisubharmonic functions, Ann. Sci. École Norm. Sup. (4) 12 (1979), no. 1, 47–84.
* [74] A. Grigor’yan: Heat kernel and analysis on manifolds, AMS/IP Studies in Advanced Mathematics, 47. American Mathematical Society, Providence, RI; International Press, Boston, MA, 2009. xviii+482 pp.
* [75] A. Grigor’yan, J. Hu: Heat kernels and Green functions on metric measure spaces, Canad. J. Math. 66 (2014), no. 3, 641–699.
* [76] M. Gromov: Sign and geometric meaning of curvature, Rend. Sem. Mat. Fis. Milano 61 (1991), 9–123 (1994).
* [77] M. Gromov: Positive curvature, macroscopic dimension, spectral gaps and higher signatures, Functional analysis on the eve of the 21st century, Vol. II (New Brunswick, NJ, 1993), 1–213, Progr. Math., 132, Birkhäuser Boston, Boston, MA, 1996.
* [78] M. Gromov: Paul Levy’s isoperimetric inequality. Appendix C in the book Metric structures for Riemannian and non-Riemannian spaces by M. Gromov. Modern Birkhäuser Classics. Birkhäuser Boston, Inc., Boston, MA, 2007.
* [79] M. Gromov: Plateau-Stein manifolds, Cent. Eur. J. Math. 12 (2014), no. 7, 923–951.
* [80] M. Gromov: Dirac and Plateau billiards in domains with corners, Cent. Eur. J. Math. 12 (2014), no. 8, 1109–1156.
* [81] M. Gromov: Four Lectures on Scalar Curvature, preprint arXiv:1908.10612v6.
* [82] J. Heinonen, P. Koskela: Quasiconformal maps in metric spaces with controlled geometry, Acta Math. 181 (1998), no. 1, 1–61.
* [83] E. Heintze, H. Karcher: A general comparison theorem with applications to volume estimates for submanifolds, Ann. Sci. École Norm. Sup. (4) 11 (1978), no. 4, 451–470.
* [84] H. Ishii: On the equivalence of two notions of weak solutions, viscosity solutions and distribution solutions, Funkcial. Ekvac. 38 (1995), no. 1, 101–120.
* [85] H. Ishii, P. L. Lions: Viscosity solutions of fully nonlinear second-order elliptic partial differential equations, J. Differential Equations 83 (1990), no. 1, 26–78.
* [86] R. Jiang: Lipschitz continuity of solutions of Poisson equations in metric measure space, Potential Anal. 37 (2012), no. 3, 281–301.
* [87] R. Jiang, H. Li, H. Zhang: Heat Kernel Bounds on Metric Measure Spaces and Some Applications, Potential Anal., 44 (2016), 601–627.
* [88] W. Jiang, A. Naber: $L^{2}$ curvature bounds on manifolds with bounded Ricci curvature, Ann. of Math. (2) 193 (2021), no. 1, 107–222.
* [89] V. Kapovitch, A. Mondino: On the topology and the boundary of $N$-dimensional $\operatorname{RCD}(K,N)$ spaces, Geom. Topol. 25 (2021), no. 1, 445–495.
* [90] M. Kell, A. Mondino: On the volume measure of non-smooth spaces with Ricci curvature bounded below, Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 18 (2018), no. 2, 593–610.
* [91] C. Ketterer: The Heintze-Karcher inequality for metric measure spaces, Proc. Amer. Math. Soc. 148 (2020), no. 9, 4041–4056.
* [92] S. Kim: Harnack inequality for nondivergent elliptic operators on Riemannian manifolds, Pacific J. Math. 213 (2004), no. 2, 281–293.
* [93] J. Kinnunen, O. Martio: Nonlinear potential theory on metric spaces, Illinois J. Math. 46 (2002), no. 3, 857–883.
* [94] J. Kinnunen, R. Korte, A. Lorent, N. Shanmugalingam: Regularity of sets with quasiminimal boundary surfaces in metric spaces, J. Geom. Anal. 23 (2013), no. 4, 1607–1640.
* [95] Y. Kitabeppu, S. Lakzian: Characterization of low dimensional $\operatorname{RCD}^{*}(K,N)$ spaces, Anal. Geom. Metr. Spaces 4 (2016), no. 1, 187–215.
* [96] Y. Kitabeppu: A Bishop-type inequality on metric measure spaces with Ricci curvature bounded below, Proc. Amer. Math. Soc. 145 (2017), no. 7, 3137–3151.
* [97] B. Klartag: Needle decompositions in Riemannian geometry, Mem. Amer. Math. Soc. 249 (2017), no. 1180, v+77 pp.
* [98] R. Korte, P. Lahti: Relative isoperimetric inequalities and sufficient conditions for finite perimeter on metric spaces. Ann. Inst. H. Poincaré C Anal. Non Linéaire 31 (2014), no. 1, 129–154.
* [99] K. Kuwada: Duality on gradient estimates and Wasserstein controls, J. Funct. Anal. 258 (2010), no. 11, 3758–3774.
* [100] J. M. Lasry, P. L. Lions: A remark on regularization in Hilbert spaces, Israel J. Math. 55 (1986), no. 3, 257–266.
* [101] N. Li, A. Naber: Quantitative Estimates on the Singular Sets of Alexandrov Spaces, Peking Math. J. 3 (2020), 203–234.
* [102] J. Lott, C. Villani: Ricci curvature for metric-measure spaces via optimal transport, Ann. of Math. (2), 169 (2009), 903–991.
* [103] A. Lytchak, S. Stadler: Ricci curvature in dimension 2, To appear on J. Eur. Math. Soc., preprint arXiv:1812.08225.
* [104] A. Lytchak, S. Wenger: Area minimizing discs in metric spaces, Arch. Ration. Mech. Anal. 223 (2017), no. 3, 1123–1182.
* [105] A. Lytchak, S. Wenger: Isoperimetric characterization of upper curvature bounds, Acta Math. 221 (2018), no. 1, 159–202.
* [106] A. Lytchak, S. Wenger: Canonical parameterizations of metric disks, Duke Math. J. 169 (2020), no. 4, 761–797.
* [107] F. Maggi: Sets of finite perimeter and geometric variational problems. An introduction to geometric measure theory, Cambridge Studies in Advanced Mathematics, 135. Cambridge University Press, Cambridge, 2012. xx+454 pp.
* [108] C. Mantegazza, G. Mascellani, G. Uraltsev: On the distributional Hessian of the distance function, Pacific J. Math. 270 (2014), no. 1, 151–166.
* [109] M. Miranda Jr.: Functions of bounded variation on “good” metric spaces, J. Math. Pures Appl., 82 (2003), 975–1004.
* [110] F. Morgan: Geometric measure theory. A beginner’s guide. Third edition. Academic Press, Inc., San Diego, CA, 2000. x+226 pp.
* [111] F. Morgan: Area-minimizing surfaces in cones, Comm. Anal. Geom. 10 (2002), no. 5, 971–983.
* [112] A. Mondino, A. Naber: Structure theory of metric measure spaces with lower Ricci curvature bounds, J. Eur. Math. Soc. (JEMS) 21 (2019), no. 6, 1809–1854.
* [113] A. Mondino, D. Semola: Lipschitz continuity and Bochner-Eells-Sampson inequality for harmonic maps from $\operatorname{RCD}(K,N)$ to $\mathrm{CAT}(0)$ spaces, preprint arXiv:2202.01590 (2022).
* [114] A. Naber, D. Valtorta: The singular structure and regularity of stationary varifolds, J. Eur. Math. Soc. (JEMS) 22 (2020), no. 10, 3305–3382.
* [115] F. Otto, C. Villani: Generalization of an inequality by Talagrand and links with the logarithmic Sobolev inequality, J. Funct. Anal. 173 (2000), no. 2, 361–400.
* [116] P. Petersen, F. Wilhelm: On Frankel’s theorem, Canad. Math. Bull. 46 (2003), no. 1, 130–139.
* [117] A. Petrunin: Parallel transportation for Alexandrov space with curvature bounded below. Geom. Funct. Anal. 8 (1998), no. 1, 123–148.
* [118] A. Petrunin: Subharmonic functions on Alexandrov space, preprint available at https://anton-petrunin.github.io/papers/HarmFun.pdf (2000).
* [119] A. Petrunin: Harmonic functions on Alexandrov spaces and their applications, Electron. Res. Announc. Amer. Math. Soc. 9 (2003), 135–141.
* [120] A. Petrunin: Alexandrov meets Lott-Villani-Sturm, Münster J. Math., 4 (2011), 53–64.
* [121] T. Rajala: Local Poincaré inequalities from stable curvature conditions on metric spaces, Calc. Var. Partial Differential Equations 44 (2012), no. 3-4, 477–494.
* [122] G. Savaré: Self-improvement of the Bakry-Émery condition and Wasserstein contraction of the heat flow in $\operatorname{RCD}(K,\infty)$ metric measure spaces, Discrete Contin. Dyn. Syst. 34 (2014), no. 4, 1641–1661.
* [123] O. Savin: Small perturbation solutions for elliptic equations. Comm. Partial Differential Equations 32 (2007), no. 4-6, 557–578.
* [124] N. Shanmugalingam: Harmonic functions on metric spaces, Illinois Journal of Mathematics, 45 no. 3, (2001) 1021–1050.
* [125] J. Simons: Minimal Varieties in Riemannian Manifolds, Annals of Math., 88 no. 1 (1968), 62–105.
* [126] K.-T. Sturm: Analysis on local Dirichlet spaces. III. The parabolic Harnack inequality, J. Math. Pures Appl. (9), 75 (1996), 273–297.
* [127] K.-T. Sturm: On the geometry of metric measure spaces I, Acta Math., 196 (2006), 65–131.
* [128] K.-T. Sturm: On the geometry of metric measure spaces II, Acta Math., 196 (2006), 133–177.
* [129] K. T. Sturm, M. K. Von Renesse: Transport inequalities, gradient estimates, entropy, and Ricci curvature, Comm. Pure Appl. Math. 58 (2005), no. 7, 923–940.
* [130] C. Villani: Optimal transport. Old and New. Grundlehren der Mathematischen Wissenschaften, 338. Springer-Verlag Berlin, 2009.
* [131] M.-K. Von Renesse: On local Poincaré via transportation, Math. Z., 259 (2008), 21–31.
* [132] Y. Wang, X. Zhang: An Alexandroff-Bakelman-Pucci estimate on Riemannian manifolds, Adv. Math. 232 (2013), 499–512.
* [133] B. White: Controlling area blow-up in minimal or bounded mean curvature varieties. J. Differential Geom. 102 (2016), no. 3, 501–535.
* [134] H. Wu, An elementary method in the study of non-negative curvature, Acta Math. 142 (1979), no. 1-2, 57–78.
* [135] H.-C. Zhang, X. Zhong, X.-P. Zhu: Quantitative gradient estimates for harmonic maps into singular spaces. Sci. China Math. 62 (2019), no. 11, 2371–2400.
* [136] H. C. Zhang, X. P. Zhu: Yau’s gradient estimates on Alexandrov spaces, J. Differential Geom. 91 (2012), no. 3, 445–522.
* [137] H. C. Zhang, X. P. Zhu: Local Li-Yau’s estimates on $\operatorname{RCD}^{*}(K,N)$ metric measure spaces. Calc. Var. Partial Differential Equations 55 (2016), no. 4, Art. 93, 30 pp.
* [138] X. Zhou: Min-max hypersurface in manifold of positive Ricci curvature. J. Differential Geom. 105 (2017), no. 2, 291–343.
|
arxiv-papers
| 2021-07-26T17:34:39 |
2024-09-04T03:07:19.385415
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Andrea Mondino and Daniele Semola",
"submitter": "Daniele Semola",
"url": "https://arxiv.org/abs/2107.12344"
}
|
2107.12345
|
# Uniqueness of meromorphic functions concerning $k$-th derivatives and
difference operators
Goutam Haldar
###### Abstract.
In this paper, we continue the study of value-sharing problems for higher-order
derivatives of meromorphic functions with their linear difference and
$q$-difference operators. Some of our results generalize and improve the
results of Meng–Liu (J. Appl. Math. and Informatics, 37(2019), 133–148) to a
large extent.
Department of Mathematics, Malda College, Rabindra Avenue, Malda, West Bengal
732101, India. E-mail: [email protected]
AMS Mathematics Subject Classification: 30D35, 39A05, 39A10.
Keywords and phrases: meromorphic function, difference operator, uniqueness,
weighted sharing
## 1\. Introduction
Let $f$ and $g$ be two non-constant meromorphic functions defined in the open
complex plane $\mathbb{C}$. If for some $a\in\mathbb{C}\cup\\{\infty\\}$,
$f-a$ and $g-a$ have the same set of zeros with the same multiplicities, we
say that $f$ and $g$ share the value $a$ CM (counting multiplicities), and if
we do not consider the multiplicities then $f$ and $g$ are said to share the
value $a$ IM (ignoring multiplicities). We assume that the reader is familiar
with the standard notation and symbols, such as $T(r,f)$ and $N(r,a;f)$
($\overline{N}(r,a;f)$), of Nevanlinna’s value distribution theory (see [11]).
In 2001, Lahiri ([19], [17]) introduced the definition of weighted sharing,
which plays a key role in uniqueness theory as far as relaxation of sharing is
concerned. We explain the notion below.
###### Definition 1.1.
[19] Let $k$ be a non-negative integer or infinity. For
$a\in\mathbb{C}\cup\\{\infty\\}$ we denote by $E_{k}(a,f)$ the set of all
$a$-points of $f$, where an $a$-point of multiplicity $m$ is counted $m$ times
if $m\leq k$ and $k+1$ times if $m>k.$ If $E_{k}(a,f)=E_{k}(a,g),$ we say that
$f$, $g$ share the value $a$ with weight $k$.
We write $f$, $g$ share $(a,k)$ to mean that $f,$ $g$ share the value $a$ with
weight $k.$ Clearly if $f,$ $g$ share $(a,k)$ then $f,$ $g$ share $(a,p)$ for
any integer $p$, $0\leq p<k.$ Also we note that $f,$ $g$ share a value $a$ IM
or CM if and only if $f,$ $g$ share $(a,0)$ or $(a,\infty)$ respectively.
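To make the counting rule of Definition 1.1 concrete, the following sketch (our own illustration, not part of the paper; the helper name `E_k` and the sample multiplicities are our choices) computes $E_{k}(a,f)$ from a prescribed list of $a$-points with multiplicities, and shows that two functions may share a value with weight 1 but not with weight 2.

```python
# Illustrative sketch of Definition 1.1 (hypothetical helper, not from the paper).

def E_k(zeros, k):
    """zeros: dict mapping each a-point of f - a to its multiplicity m.
    An a-point of multiplicity m is counted m times if m <= k,
    and k + 1 times if m > k."""
    return {z: (m if m <= k else k + 1) for z, m in zeros.items()}

# Suppose f - a vanishes at 0 with multiplicity 2 and at 1 simply,
# while g - a vanishes at 0 with multiplicity 3 and at 1 simply.
zeros_f = {0: 2, 1: 1}
zeros_g = {0: 3, 1: 1}

# With weight 1, both multiplicities at 0 are truncated to 2, so f and g
# share (a, 1); with weight 2 the truncated counts differ, so they do
# not share (a, 2).
assert E_k(zeros_f, 1) == E_k(zeros_g, 1)
assert E_k(zeros_f, 2) != E_k(zeros_g, 2)
```

This illustrates how sharing $(a,k)$ becomes a stronger requirement as $k$ grows, with IM and CM sharing as the extreme cases $k=0$ and $k=\infty$.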
###### Definition 1.2.
[16] For $a\in\mathbb{C}\cup\\{\infty\\},$ we denote by $N(r,a;f\mid=1)$ the
counting function of simple $a$-points of $f.$ For a positive integer $m,$ we
denote by $N(r,a;f\mid\leq m)$ $(N(r,a;f\mid\geq m))$ the counting function of
those $a$-points of $f$ whose multiplicities are not greater (less) than $m$,
where each $a$-point is counted according to its multiplicity.
$\overline{N}(r,a;f\mid\leq m)$ $(\overline{N}(r,a;f\mid\geq m))$ are defined
similarly except that in counting the $a$-points of $f$ we ignore the
multiplicity. Also $N(r,a;f\mid<m)$, $N(r,a;f\mid>m),$
$\overline{N}(r,a;f\mid<m)$ and $\overline{N}(r,a;f\mid>m)$ are defined
similarly.
###### Definition 1.3.
[19] We denote by $N_{2}(r,a;f)$ the sum
$\overline{N}(r,a;f)+\overline{N}(r,a;f\mid\geq 2)$.
###### Definition 1.4.
[19] Let $f$ and $g$ share a value $a$ IM. We denote by
$\overline{N}_{*}(r,a;f,g)$ the counting function of those $a$-points of $f$
whose multiplicities differ from the multiplicities of the corresponding
$a$-points of $g$.
Let $c$ be a nonzero complex constant, and let $f(z)$ be a meromorphic
function. The shift operator is denoted by $f(z+c)$. We use the notations
$\Delta_{c}f$ and $\Delta_{c}^{k}f$ for the difference and $k$-th order
difference operators of $f$, defined respectively as
$\displaystyle\Delta_{c}f=f(z+c)-f(z),\;\;\Delta_{c}^{k}f(z)=\Delta_{c}(\Delta_{c}^{k-1}f(z)),\;\;k\in\mathbb{N},\;k\geq
2.$
We note that $\Delta_{c}f$ and $\Delta_{c}^{k}f$ are simply linear
combinations of shift operators. As a generalization of these operators, it
is therefore natural to introduce the linear difference operator $L(z,f)$ as
follows:
(1.1) $\displaystyle L(z,f)=\sum_{j=0}^{p}a_{j}f(z+c_{j}),$
where $p\in\mathbb{N}\cup\\{0\\}$, and the $a_{j}$ and $c_{j}$ are complex
constants with at least one $a_{j}$ non-zero.
For a non-zero complex constant $q$ and a meromorphic function $f$, the
$q$-shift and $q$-difference operators are defined, respectively by $f(qz)$
and $\Delta_{q}f=f(qz)-f(z)$. Here also we generalize these operators as
follows:
(1.2) $\displaystyle L_{q}(z,f)=\sum_{j=0}^{r}b_{j}f(q_{j}z+d_{j}),$
where $r$ is a non-negative integer, and the $q_{j}$, $b_{j}$, $d_{j}$ are
complex constants with at least one $b_{j}$ non-zero.
It was Rubel–Yang [30] who first studied the uniqueness of an entire function
sharing values with its derivative, and obtained the following result.
###### Theorem 1.1.
[30] Let $f$ be a non-constant entire function. If $f$ shares two distinct
finite values CM with $f^{\prime}$, then $f\equiv f^{\prime}$.
Mues–Steinmetz [26] improved the above result by relaxing the nature of
sharing of the two values from CM to IM. Later, Mues–Steinmetz [27] and
Gundersen [9] extended Theorem 1.1 to non-constant meromorphic functions.
Recently, the difference analogue of classical Nevanlinna theory for
meromorphic functions of finite order was established by Halburd–Korhonen [12,
13], Chiang–Feng [8], independently, and developed by Halburd–Korhonen–Tohge
[14] for hyper order strictly less than 1. After that, there has been an
increasing interest in studying the uniqueness problems of meromorphic
functions related to their shift or difference operators (see [7, 15, 21, 23,
28, 4, 5, 6, 22, 34]).
As is well known, the time-delay differential equation $f^{\prime}(x)=f(x-k)$,
$k>0$, plays an important role in real analysis and has been rigorously
studied. As a complex-variable counterpart, Liu–Dong [24] studied the complex
differential-difference equation $f^{\prime}(z)=f(z+c)$, where $c$ is a
non-zero constant.
In 2018, Qi et al. [29] looked at this complex differential-difference
equation from a different perspective: they considered the value-sharing
problem for $f^{\prime}(z)$ and $f(z+c)$, where $c$ is a complex number, and
obtained the following result.
###### Theorem 1.2.
[29] Let $f$ be a non-constant meromorphic function of finite order, $n\geq 9$
be an integer. If $[f^{\prime}(z)]^{n}$ and $f^{n}(z+c)$ share $a(\neq 0)$ and
$\infty$ CM, then $f^{\prime}(z)=tf(z+c)$, for a constant $t$ that satisfies
$t^{n}=1$.
In 2019, Meng–Liu [25] relaxed the nature of the shared values from CM to
finite weight and obtained the following results.
###### Theorem 1.3.
[25] Let $f$ be a non-constant meromorphic function of finite order, $n\geq
10$ an integer. If $[f^{\prime}(z)]^{n}$ and $f^{n}(z+c)$ share $(1,2)$ and
$(\infty,0)$, then $f^{\prime}(z)=tf(z+c)$, for a constant $t$ that satisfies
$t^{n}=1$.
###### Theorem 1.4.
[25] Let $f$ be a non-constant meromorphic function of finite order, $n\geq 9$
an integer. If $[f^{\prime}(z)]^{n}$ and $f^{n}(z+c)$ share $(1,2)$ and
$(\infty,\infty)$, then $f^{\prime}(z)=tf(z+c)$, for a constant $t$ that
satisfies $t^{n}=1$.
###### Theorem 1.5.
[25] Let $f$ be a non-constant meromorphic function of finite order, $n\geq
17$ an integer. If $[f^{\prime}(z)]^{n}$ and $f^{n}(z+c)$ share $(1,0)$ and
$(\infty,0)$, then $f^{\prime}(z)=tf(z+c)$, for a constant $t$ that satisfies
$t^{n}=1$.
For further investigation of Theorems 1.2–1.5, we pose the following
questions.
###### Question 1.1.
Could we determine the relationship between the $k$-th derivative
$f^{(k)}(z)$ and the linear difference polynomial $L(z,f)$, as defined in
$(\ref{e1.1})$, of a meromorphic (or entire) function $f(z)$ under relaxed
sharing hypotheses?
###### Question 1.2.
Could we further reduce the lower bound of $n$ in Theorems 1.3–1.5?
In this direction, we prove the following result.
###### Theorem 1.6.
Let $f$ be a non-constant meromorphic function of finite order, let $n$ and
$k$ be positive integers, and let $L(z,f)$ be defined as in $(\ref{e1.1})$.
Suppose $[f^{(k)}]^{n}$ and $[L(z,f)]^{n}$ share $(1,l)$ and $(\infty,m)$, where
$0\leq l<\infty$ and $0\leq m\leq\infty$, and one of the following conditions
holds:
1. (i)
$l\geq 2$, $m=0$ and $n\geq 8$;
2. (ii)
$l\geq 2$, $m=\infty$ and $n\geq 7$;
3. (iii)
$l=1$, $m=0$ and $n\geq 9$;
4. (iv)
$l=0$, $m=0$ and $n\geq 12$.
Then $f^{(k)}(z)=tL(z,f)$, for a non-zero constant $t$ that satisfies
$t^{n}=1$.
We give the following example in support of Theorem 1.6.
###### Example 1.1.
Let $f(z)=e^{z/n}$, where $n$ is a positive integer. Suppose
$L(z,f)=f(z+c)+c_{0}f(z)$, where $c_{0}$ is a non-zero complex constant such
that $c_{0}\neq 1/n$, and $c=n\log((1-c_{0}n)/n)$. Then one can easily verify
that $(f^{\prime})^{n}$ and $(L(z,f))^{n}$ satisfy all the conditions of
Theorem 1.6. Here $f^{\prime}(z)=tL(z,f)$, where $t$ is a constant such that
$t^{n}=1$.
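The identity claimed in Example 1.1 can be checked numerically. The following sketch (our addition; it uses the principal branch of the complex logarithm and sample values of $n$ and $c_{0}$) confirms $f^{\prime}(z)=L(z,f)$, i.e. $t=1$ here:

```python
# Numerical check of Example 1.1 (illustrative, not part of the paper):
# f(z) = e^{z/n}, L(z,f) = f(z+c) + c0*f(z), c = n*log((1 - c0*n)/n),
# so that e^{c/n} + c0 = 1/n and hence f'(z) = L(z, f).
import cmath

n, c0 = 3, 0.5                        # any n >= 1 and c0 != 1/n will do
c = n * cmath.log((1 - c0 * n) / n)   # principal branch of the logarithm

f = lambda z: cmath.exp(z / n)
f_prime = lambda z: cmath.exp(z / n) / n
L_op = lambda z: f(z + c) + c0 * f(z)

for z in (0.0, 0.3 + 0.7j, -1.2 + 0.1j):
    assert abs(f_prime(z) - L_op(z)) < 1e-12
```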
###### Remark 1.1.
Let us suppose that $c_{j}=jc$ and $a_{p-j}=(-1)^{j}{p\choose j}$ for
$j=0,1,\ldots,p$, so that $a_{p}={p\choose 0}$, $a_{p-1}=-{p\choose 1}$,
$a_{p-2}={p\choose 2}$, and so on. Then from (1.1) it is easily seen that
$L(z,f)=\Delta^{p}_{c}f$. Therefore, we obtain the following corollary from
Theorem 1.6.
###### Corollary 1.1.
Let $f$ be a non-constant meromorphic function of finite order, let $n$ and
$k$ be positive integers, and let $\Delta^{p}_{c}f$ be as in Remark 1.1. Suppose
$[f^{(k)}]^{n}$ and $[\Delta^{p}_{c}f]^{n}$ share $(1,l)$ and $(\infty,m)$,
where $0\leq l<\infty$ and $0\leq m\leq\infty$, and one of the following
conditions holds:
1. (i)
$l\geq 2$, $m=0$ and $n\geq 8$;
2. (ii)
$l\geq 2$, $m=\infty$ and $n\geq 7$;
3. (iii)
$l=1$, $m=0$ and $n\geq 9$;
4. (iv)
$l=0$, $m=0$ and $n\geq 12$.
Then $f^{(k)}(z)=t\Delta^{p}_{c}f$, for a non-zero constant $t$ that satisfies
$t^{n}=1$.
For entire functions we prove the following result, which improves Corollary
1.8 of [25].
###### Theorem 1.7.
Let $f$ be a non-constant entire function of finite order, let $n$ and $k$ be
positive integers, and let $L(z,f)$ be defined as in $(\ref{e1.1})$. Suppose
$[f^{(k)}]^{n}$ and $[L(z,f)]^{n}$ share $(1,l)$, and one of the following
conditions holds:
1. (i)
$l\geq 1$ and $n\geq 5$;
2. (ii)
$l=0$ and $n\geq 8$;
Then $f^{(k)}(z)=tL(z,f)$, for a non-zero constant $t$ that satisfies
$t^{n}=1$.
In the same paper, Meng–Liu [25] also obtained the following results by
replacing $f(z+c)$ with the $q$-shift operator $f(qz)$.
###### Theorem 1.8.
[25] Let $f$ be a non-constant meromorphic function of zero order, $n\geq 10$
an integer. If $[f^{\prime}(z)]^{n}$ and $f^{n}(qz)$ share $(1,2)$ and
$(\infty,0)$, then $f^{\prime}(z)=tf(qz)$, for a constant $t$ that satisfies
$t^{n}=1$.
###### Theorem 1.9.
[25] Let $f$ be a non-constant meromorphic function of zero order, $n\geq 9$
an integer. If $[f^{\prime}(z)]^{n}$ and $f^{n}(qz)$ share $(1,2)$ and
$(\infty,\infty)$, then $f^{\prime}(z)=tf(qz)$, for a constant $t$ that
satisfies $t^{n}=1$.
###### Theorem 1.10.
[25] Let $f$ be a non-constant meromorphic function of zero order, $n\geq 17$
an integer. If $[f^{\prime}(z)]^{n}$ and $f^{n}(qz)$ share $(1,0)$ and
$(\infty,0)$, then $f^{\prime}(z)=tf(qz)$, for a constant $t$ that satisfies
$t^{n}=1$.
Generalizing and improving Theorems 1.8–1.10 to a large extent, we obtain the
following result.
###### Theorem 1.11.
Let $f$ be a non-constant meromorphic function of zero order, let $n$ and $k$
be positive integers, and let $L_{q}(z,f)$ be defined as in $(\ref{e1.2})$.
Suppose
$[f^{(k)}]^{n}$ and $[L_{q}(z,f)]^{n}$ share $(1,l)$ and $(\infty,m)$, where
$0\leq l<\infty$ and $0\leq m\leq\infty$, and one of the following conditions
holds:
1. (i)
$l\geq 2$, $m=0$ and $n\geq 8$;
2. (ii)
$l\geq 2$, $m=\infty$ and $n\geq 7$;
3. (iii)
$l=1$, $m=0$ and $n\geq 9$;
4. (iv)
$l=0$, $m=0$ and $n\geq 12$.
Then $f^{(k)}=tL_{q}(z,f)$, for a non-zero constant $t$ that satisfies
$t^{n}=1$.
In 2018, Qi et al. [29] also proved the following result.
###### Theorem 1.12.
[29] Let $f$ be a meromorphic function of finite order. Suppose that
$f^{\prime}$ and $\Delta_{c}f$ share $a_{1},a_{2},a_{3},a_{4}$ IM, where
$a_{1},a_{2},a_{3},a_{4}$ are four distinct finite values. Then,
$f^{\prime}(z)\equiv\Delta_{c}f.$
We prove the following uniqueness theorem about the $k$-th derivative
$f^{(k)}$ and linear difference polynomial $L(z,f)$ of a meromorphic function
$f$, which is an extension of Theorem 1.12.
###### Theorem 1.13.
Let $f$ be a meromorphic function of finite order. Suppose that $f^{(k)}$ and
$L(z,f)$ share $a_{1},a_{2},a_{3},a_{4}$ IM, where $a_{1},a_{2},a_{3},a_{4}$
are four distinct finite values. Then,
$f^{(k)}(z)\equiv L(z,f).$
## 2\. Key Lemmas
In this section, we present some lemmas which will be needed in the sequel.
Let $F$ and $G$ be two non-constant meromorphic functions defined in
$\mathbb{C}$. We denote by $H$ the function
(2.1) $\displaystyle
H=\Big{(}\frac{F^{\prime\prime}}{F^{\prime}}-\frac{2F^{\prime}}{F-1}\Big{)}-\Big{(}\frac{G^{\prime\prime}}{G^{\prime}}-\frac{2G^{\prime}}{G-1}\Big{)}.$
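The function $H$ vanishes identically whenever $F^{\prime}/(F-1)^{2}$ is a constant multiple of $G^{\prime}/(G-1)^{2}$; for instance $G=1/F$ gives $G^{\prime}/(G-1)^{2}=-F^{\prime}/(F-1)^{2}$. The following numerical sketch (our addition, not part of the paper; derivatives are approximated by central differences, which is valid for analytic functions along the real direction) illustrates this:

```python
# Illustrative numerical check (not from the paper): H of (2.1) vanishes
# for G = 1/F, since G'/(G-1)^2 = -F'/(F-1)^2 and the log-derivative of a
# constant multiple is unchanged.
import cmath

def d1(g, z, h=1e-4):
    """Central-difference first derivative."""
    return (g(z + h) - g(z - h)) / (2 * h)

def d2(g, z, h=1e-4):
    """Central-difference second derivative."""
    return (g(z + h) - 2 * g(z) + g(z - h)) / h ** 2

def H(F, G, z):
    """The auxiliary function H of (2.1), evaluated numerically."""
    return (d2(F, z) / d1(F, z) - 2 * d1(F, z) / (F(z) - 1)) \
         - (d2(G, z) / d1(G, z) - 2 * d1(G, z) / (G(z) - 1))

F = lambda z: cmath.exp(z) + 2       # sample function with F, F - 1, F' != 0
G = lambda z: 1 / F(z)

for z in (0.2 + 0.1j, -0.7 + 0.4j):
    assert abs(H(F, G, z)) < 1e-4    # H vanishes up to discretization error
```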
###### Lemma 2.1.
[19] Let $F$, $G$ be two non-constant meromorphic functions such that they
share $(1,1)$ and $H\not\equiv 0.$ Then
$\displaystyle N(r,1;F\mid=1)=N(r,1;G\mid=1)\leq N(r,H)+S(r,F)+S(r,G).$
###### Lemma 2.2.
[3] Let $F$, $G$ be two non-constant meromorphic functions sharing $(1,t),$
where $0\leq t<\infty.$ Then
$\displaystyle\overline{N}(r,1;F)+\overline{N}(r,1;G)-N_{E}^{1)}(r,1;F)+\left(t-\frac{1}{2}\right)\overline{N}_{*}(r,1;F,G)$
$\displaystyle\leq$ $\displaystyle\frac{1}{2}(N(r,1;F)+N(r,1;G)).$
###### Lemma 2.3.
[20] Suppose $F$, $G$ share $(1,0)$, $(\infty,0)$. If $H\not\equiv 0,$ then,
$\displaystyle N(r,H)$ $\displaystyle\leq N(r,0;F\mid\geq 2)+N(r,0;G\mid\geq
2)+\overline{N}_{*}(r,1;F,G)$
$\displaystyle+\overline{N}_{*}(r,\infty;F,G)+\overline{N}_{0}(r,0;F^{\prime})+\overline{N}_{0}(r,0;G^{\prime})+S(r,F)+S(r,G),$
where $\overline{N}_{0}(r,0;F^{\prime})$ is the reduced counting function of
those zeros of $F^{\prime}$ which are not the zeros of $F(F-1)$, and
$\overline{N}_{0}(r,0;G^{\prime})$ is similarly defined.
###### Lemma 2.4.
[31] Let $f$ be a non-constant meromorphic function and
$P(f)=a_{0}+a_{1}f+a_{2}f^{2}+\ldots+a_{n}f^{n},$ where
$a_{0},a_{1},a_{2},\ldots,a_{n}$ are constants and $a_{n}\neq 0$. Then
$T(r,P(f))=nT(r,f)+O(1).$
###### Lemma 2.5.
[18] If $N\left(r,0;f^{(k)}\mid f\not=0\right)$ denotes the counting function
of those zeros of $f^{(k)}$ which are not the zeros of $f$, where a zero of
$f^{(k)}$ is counted according to its multiplicity then
$N\left(r,0;f^{(k)}\mid f\not=0\right)\leq
k\overline{N}(r,\infty;f)+N\left(r,0;f\mid<k\right)+k\overline{N}\left(r,0;f\mid\geq
k\right)+S(r,f).$
###### Lemma 2.6.
[33] Let $F$ and $G$ be two non-constant meromorphic functions such that they
share $(1,0)$, and $H\not\equiv 0$, then
$\displaystyle N_{E}^{1)}(r,1;F)\leq N(r,\infty;H)+S(r,F)+S(r,G).$
Similar inequality holds for $G$ also.
###### Lemma 2.7.
[1] Let $F$, $G$ be two non-constant meromorphic functions such that they
share $(1,1)$. Then
$\displaystyle
2\overline{N}_{L}(r,1;F)+2\overline{N}_{L}(r,1;G)+\overline{N}_{E}^{(2}(r,1;F)-\overline{N}_{F>2}(r,1;G)$
$\displaystyle\leq$ $\displaystyle N(r,1;G)-\overline{N}(r,1;G).$
###### Lemma 2.8.
[2] If two non-constant meromorphic functions $F$, $G$ share $(1,1)$, then
$\displaystyle\overline{N}_{F>2}(r,1;G)\leq\frac{1}{2}(\overline{N}(r,0;F)+\overline{N}(r,\infty;F)-N_{0}(r,0;F^{\prime}))+S(r,F),$
where $N_{0}(r,0;F^{\prime})$ is the counting function of those zeros of
$F^{\prime}$ which are not the zeros of $F(F-1)$.
###### Lemma 2.9.
[2] Let $F$ and $G$ be two non-constant meromorphic functions sharing $(1,0)$.
Then
$\displaystyle\overline{N}_{L}(r,1;F)+2\overline{N}_{L}(r,1;G)+\overline{N}_{E}^{(2}(r,1;F)-\overline{N}_{F>1}(r,1;G)-\overline{N}_{G>1}(r,1;F)$
$\displaystyle\leq N(r,1;G)-\overline{N}(r,1;G).$
###### Lemma 2.10.
[2] If $F$ and $G$ share $(1,0)$, then
$\displaystyle\overline{N}_{L}(r,1;F)\leq\overline{N}(r,0;F)+\overline{N}(r,\infty;F)+S(r,F)$
$\displaystyle\overline{N}_{F>1}(r,1;G)\leq\overline{N}(r,0;F)+\overline{N}(r,\infty;F)-N_{0}(r,0;F^{\prime})+S(r,F).$
Similar inequalities hold for $G$ also.
###### Lemma 2.11.
[33] Let $F$ and $G$ be two non-constant meromorphic functions such that they
share $(1,0)$ and $H\not\equiv 0$. Then
$\displaystyle N_{E}^{1)}(r,1;F)\leq N(r,\infty;H)+S(r,F)+S(r,G).$
###### Lemma 2.12.
[32] Let $f$ and $g$ be two non-constant rational functions and let
$a_{1},a_{2},a_{3},a_{4}$ be four distinct values. If $f$ and $g$ share
$a_{1},a_{2},a_{3},a_{4}$ IM, then $f(z)\equiv g(z)$.
###### Lemma 2.13.
[10] Suppose $f$ and $g$ are two distinct non-constant meromorphic functions,
and $a_{1},a_{2},a_{3},a_{4}\in\mathbb{C}\cup\\{\infty\\}$ are four distinct
values. If $f$ and $g$ share $a_{1},a_{2},a_{3},a_{4}$ IM, then
1. (i)
$T(r,f)=T(r,g)+O(\log(rT(r,f)))$, as $r\not\in E$ and $r\rightarrow\infty$,
2. (ii)
$2T(r,f)=\sum_{j=1}^{4}\overline{N}\left(r,\displaystyle\frac{1}{f-a_{j}}\right)+O(\log(rT(r,f)))$,
as $r\not\in E$ and $r\rightarrow\infty$, where $E\subset(1,\infty)$ is of
finite linear measure.
## 3\. Proof of the theorems
We prove only Theorems 1.6 and 1.13, as the proofs of the remaining theorems
are very similar to that of Theorem 1.6.
###### Proof of Theorem 1.6.
Case 1: Suppose $H\not\equiv 0$.
Let $F=(L(z,f))^{n}$ and $G=(f^{(k)})^{n}$.
In view of Lemma 2.4, applying the second fundamental theorem of Nevanlinna
to $F$ and $G$, we get
(3.1) $\displaystyle n(T(r,L(z,f))+T(r,f^{(k)}))$ $\displaystyle\leq$
$\displaystyle\overline{N}(r,0;F)+\overline{N}(r,1;F)+\overline{N}(r,\infty;F)+\overline{N}(r,0;G)+\overline{N}(r,1;G)$
$\displaystyle+\overline{N}(r,\infty;G)-\overline{N}_{0}(r,0;F^{\prime})-\overline{N}_{0}(r,0;G^{\prime})+S(r,F)+S(r,G),$
where $\overline{N}_{0}(r,0;F^{\prime})$ and
$\overline{N}_{0}(r,0;G^{\prime})$ are defined as in Lemma 2.3.
(i). Suppose $l\geq 2$ and $m=0$.
Then using Lemmas 2.1, 2.2 and 2.3 in $(\ref{e4.1})$ we obtain
$\displaystyle\frac{n}{2}(T(r,L(z,f))+T(r,(f^{(k)})))$ $\displaystyle\leq$
$\displaystyle
N_{2}(r,0;F)+N_{2}(r,0;G)+\overline{N}(r,\infty;F)+\overline{N}(r,\infty;G)$
$\displaystyle+\overline{N}_{*}(r,\infty;F,G)-\left(l-\frac{3}{2}\right)\overline{N}_{*}(r,1;F,G)+S(r,F)+S(r,G)$
$\displaystyle\leq$ $\displaystyle
2\overline{N}(r,0;L(z,f))+2\overline{N}(r,0;f^{(k)})+\overline{N}(r,\infty;L(z,f))$
$\displaystyle+\overline{N}(r,\infty;f^{(k)})+\frac{1}{2}(\overline{N}(r,\infty;L(z,f))+\overline{N}(r,\infty;f^{(k)}))$
$\displaystyle+S(r,F)+S(r,G)$ $\displaystyle\leq$
$\displaystyle\frac{7}{2}(T(r,L(z,f))+T(r,f^{(k)}))+S(r,F)+S(r,G).$
This implies that
$\displaystyle(n-7)(T(r,L(z,f))+T(r,f^{(k)}))\leq S(r,L(z,f))+S(r,f^{(k)}),$
which contradicts the fact that $n\geq 8$.
(ii). Suppose $l\geq 2$ and $m=\infty$. Then using Lemmas 2.1, 2.2 and 2.3 in
$(\ref{e4.1})$ we obtain
$\displaystyle\frac{n}{2}(T(r,L(z,f))+T(r,f^{(k)}))$ $\displaystyle\leq$
$\displaystyle
N_{2}(r,0;F)+N_{2}(r,0;G)+\overline{N}(r,\infty;F)+\overline{N}(r,\infty;G)$
$\displaystyle-\left(l-\frac{3}{2}\right)\overline{N}_{*}(r,1;F,G)+S(r,F)+S(r,G)$
$\displaystyle\leq$ $\displaystyle
2\overline{N}(r,0;L(z,f))+2\overline{N}(r,0;f^{(k)})+\overline{N}(r,\infty;L(z,f))$
$\displaystyle+\overline{N}(r,\infty;f^{(k)})+S(r,F)+S(r,G)$
$\displaystyle\leq$ $\displaystyle 3(T(r,L(z,f))+T(r,f^{(k)}))+S(r,F)+S(r,G).$
This implies that
$\displaystyle(n-6)(T(r,L(z,f))+T(r,f^{(k)}))\leq S(r,L(z,f))+S(r,f^{(k)}),$
which contradicts the fact that $n\geq 7$.
(iii). Suppose $l=1$ and $m=0$.
Using Lemmas 2.1, 2.3, 2.7 and 2.8, we obtain
(3.2) $\displaystyle\overline{N}(r,1;F)$ $\displaystyle\leq$ $\displaystyle
N(r,1;F\mid=1)+\overline{N}_{L}(r,1;F)+\overline{N}_{L}(r,1;G)+\overline{N}_{E}^{(2}(r,1;F)$
$\displaystyle\leq$ $\displaystyle\overline{N}(r,0;F\mid\geq
2)+\overline{N}(r,0;G\mid\geq
2)+\overline{N}_{*}(r,1;F,G)+\overline{N}_{*}(r,\infty;F,G)$
$\displaystyle+\overline{N}_{L}(r,1;F)+\overline{N}_{L}(r,1;G)+\overline{N}_{E}^{(2}(r,1;F)+\overline{N}_{0}(r,0;F^{\prime})+\overline{N}_{0}(r,0;G^{\prime})$
$\displaystyle+S(r,F)+S(r,G)$ $\displaystyle\leq$
$\displaystyle\overline{N}(r,0;F\mid\geq 2)+\overline{N}(r,0;G\mid\geq
2)+\overline{N}_{*}(r,\infty;F,G)+2\overline{N}_{L}(r,1;F)$
$\displaystyle+2\overline{N}_{L}(r,1;G)+\overline{N}_{E}^{(2}(r,1;F)+\overline{N}_{0}(r,0;F^{\prime})+\overline{N}_{0}(r,0;G^{\prime})$
$\displaystyle+S(r,F)+S(r,G)$ $\displaystyle\leq$
$\displaystyle\overline{N}(r,0;F\mid\geq 2)+\overline{N}(r,0;G\mid\geq
2)+\overline{N}_{*}(r,\infty;F,G)+N(r,1;G)$
$\displaystyle-\overline{N}(r,1;G)+\overline{N}_{F>2}(r,1;G)+\overline{N}_{0}(r,0;F^{\prime})+\overline{N}_{0}(r,0;G^{\prime})$
$\displaystyle+S(r,F)+S(r,G)$ $\displaystyle\leq$
$\displaystyle\overline{N}(r,0;F\mid\geq 2)+\overline{N}(r,0;G\mid\geq
2)+\overline{N}_{*}(r,\infty;F,G)+N(r,0;G^{\prime}\mid G\neq 0)$
$\displaystyle+\frac{1}{2}\overline{N}(r,0;F)+\frac{1}{2}\overline{N}(r,\infty;F)+\overline{N}_{0}(r,0;F^{\prime})+S(r,F)+S(r,G)$
$\displaystyle\leq$ $\displaystyle\overline{N}(r,0;F\mid\geq
2)+N_{2}(r,0;G)+\overline{N}_{*}(r,\infty;F,G)+\overline{N}(r,\infty;G)$
$\displaystyle+\frac{1}{2}\overline{N}(r,0;F)+\frac{1}{2}\overline{N}(r,\infty;F)+\overline{N}_{0}(r,0;F^{\prime})+S(r,F)+S(r,G).$
Similarly, we can get
(3.3) $\displaystyle\overline{N}(r,1;G)$ $\displaystyle\leq$
$\displaystyle\overline{N}(r,0;G\mid\geq
2)+N_{2}(r,0;F)+\overline{N}_{*}(r,\infty;F,G)+\overline{N}(r,\infty;F)$
$\displaystyle+\frac{1}{2}\overline{N}(r,0;G)+\frac{1}{2}\overline{N}(r,\infty;G)+\overline{N}_{0}(r,0;G^{\prime})+S(r,F)+S(r,G).$
Putting the values of $\overline{N}(r,1;F)$ and $\overline{N}(r,1;G)$ from
$(\ref{e4.2})$ and $(\ref{e4.3})$ to $(\ref{e4.1})$, a simple calculation
reduces to
$\displaystyle n(T(r,L(z,f))+T(r,f^{(k)}))$ $\displaystyle\leq$ $\displaystyle
2N_{2}(r,0;F)+2N_{2}(r,0;G)+\frac{1}{2}(\overline{N}(r,0;F)+\overline{N}(r,0;G))$
$\displaystyle+\frac{7}{2}(\overline{N}(r,\infty;F)+\overline{N}(r,\infty;G))+S(r,F)+S(r,G)$
$\displaystyle\leq$
$\displaystyle\frac{9}{2}(\overline{N}(r,0;L(z,f))+\overline{N}(r,0;f^{(k)}))+\frac{7}{2}\overline{N}(r,\infty;L(z,f))$
$\displaystyle+\frac{7}{2}\overline{N}(r,\infty;f^{(k)})+S(r,F)+S(r,G)$
$\displaystyle\leq$ $\displaystyle
8(T(r,L(z,f))+T(r,f^{(k)}))+S(r,L(z,f))+S(r,f^{(k)}),$
which is a contradiction since $n\geq 9$.
(iv). Suppose $l=0$ and $m=0$. Using Lemmas 2.11, 2.3, 2.5, 2.9 and 2.10, we
obtain
(3.4) $\displaystyle\overline{N}(r,1;F)$ $\displaystyle\leq$ $\displaystyle
N_{E}^{1)}(r,1;F)+\overline{N}_{L}(r,1;F)+\overline{N}_{L}(r,1;G)+\overline{N}_{E}^{(2}(r,1;F)$
$\displaystyle\leq$ $\displaystyle\overline{N}(r,0;F\mid\geq
2)+\overline{N}(r,0;G\mid\geq
2)+\overline{N}_{*}(r,1;F,G)+\overline{N}_{*}(r,\infty;F,G)$
$\displaystyle+\overline{N}_{L}(r,1;F)+\overline{N}_{L}(r,1;G)+\overline{N}_{E}^{(2}(r,1;F)+\overline{N}_{0}(r,0;F^{\prime})+\overline{N}_{0}(r,0;G^{\prime})$
$\displaystyle+S(r,F)+S(r,G)$ $\displaystyle\leq$
$\displaystyle\overline{N}(r,0;F\mid\geq 2)+\overline{N}(r,0;G\mid\geq
2)+\overline{N}_{*}(r,\infty;F,G)+2\overline{N}_{L}(r,1;F)$
$\displaystyle+2\overline{N}_{L}(r,1;G)+\overline{N}_{E}^{(2}(r,1;F)+\overline{N}_{0}(r,0;F^{\prime})+\overline{N}_{0}(r,0;G^{\prime})$
$\displaystyle+S(r,F)+S(r,G)$ $\displaystyle\leq$
$\displaystyle\overline{N}(r,0;F\mid\geq 2)+\overline{N}(r,0;G\mid\geq
2)+\overline{N}_{*}(r,\infty;F,G)+\overline{N}_{L}(r,1;F)$
$\displaystyle+\overline{N}_{F>1}(r,1;G)+\overline{N}_{G>1}(r,1;F)+N(r,1;G)-\overline{N}(r,1;G)+\overline{N}_{0}(r,0;F^{\prime})$
$\displaystyle+\overline{N}_{0}(r,0;G^{\prime})+S(r,F)+S(r,G)$
$\displaystyle\leq$ $\displaystyle\overline{N}(r,0;F\mid\geq
2)+\overline{N}(r,0;G\mid\geq
2)+\overline{N}_{*}(r,\infty;F,G)+2\overline{N}(r,0;F)$
$\displaystyle+2\overline{N}(r,\infty;F)+\overline{N}(r,0;G)+\overline{N}(r,\infty;G)+N(r,1;G)-\overline{N}(r,1;G)$
$\displaystyle+\overline{N}_{0}(r,0;F^{\prime})+\overline{N}_{0}(r,0;G^{\prime})+S(r,F)+S(r,G)$
$\displaystyle\leq$ $\displaystyle
N_{2}(r,0;F)+N_{2}(r,0;G)+\overline{N}(r,0;F)+2\overline{N}(r,\infty;F)+\overline{N}(r,\infty;G)$
$\displaystyle+\overline{N}_{*}(r,\infty;F,G)+N(r,0;G^{\prime}\mid G\neq
0)+\overline{N}_{0}(r,0;F^{\prime})+S(r,F)+S(r,G)$ $\displaystyle\leq$
$\displaystyle
N_{2}(r,0;F)+N_{2}(r,0;G)+\overline{N}(r,0;F)+\overline{N}(r,0;G)+2\overline{N}(r,\infty;F)$
$\displaystyle+2\overline{N}(r,\infty;G)+\overline{N}_{*}(r,\infty;F,G)+\overline{N}_{0}(r,0;F^{\prime})+S(r,F)+S(r,G).$
Similarly, we can obtain
(3.5) $\displaystyle\overline{N}(r,1;G)$ $\displaystyle\leq$ $\displaystyle
N_{2}(r,0;F)+N_{2}(r,0;G)+\overline{N}(r,0;F)+\overline{N}(r,0;G)+2\overline{N}(r,\infty;F)$
$\displaystyle+2\overline{N}(r,\infty;G)+\overline{N}_{*}(r,\infty;F,G)+\overline{N}_{0}(r,0;G^{\prime})+S(r,F)+S(r,G).$
Using $(\ref{e4.4})$ and $(\ref{e4.5})$, $(\ref{e4.1})$ reduces to
$\displaystyle n(T(r,L(z,f))+T(r,f^{(k)}))$ $\displaystyle\leq$ $\displaystyle
2(N_{2}(r,0;F)+N_{2}(r,0;G))+3(\overline{N}(r,0;F)+\overline{N}(r,0;G))$
$\displaystyle+2\overline{N}_{*}(r,\infty;F,G)+3(\overline{N}(r,\infty;F)+\overline{N}(r,\infty;G))$
$\displaystyle+S(r,F)+S(r,G)$ $\displaystyle\leq$ $\displaystyle
7(\overline{N}(r,0;L(z,f))+\overline{N}(r,0;f^{(k)}))+4\overline{N}(r,\infty;L(z,f))$
$\displaystyle+4\overline{N}(r,\infty;f^{(k)})+S(r,L(z,f))+S(r,f^{(k)})$
$\displaystyle\leq$ $\displaystyle
11(T(r,L(z,f))+T(r,f^{(k)}))+S(r,L(z,f))+S(r,f^{(k)}).$
This implies that
$\displaystyle(n-11)(T(r,L(z,f))+T(r,f^{(k)}))\leq S(r,L(z,f))+S(r,f^{(k)}),$
which is a contradiction since $n\geq 12$.
Case 2: Suppose $H\equiv 0$. Then by integration we get
(3.6) $\displaystyle F=\frac{AG+B}{CG+D},$
where $A,\;B,\;C$ and $D$ are complex constants such that $AD-BC\neq 0$.
From $(\ref{e4.6})$, it is easily seen that $T(r,L(z,f))=T(r,f^{(k)})+O(1)$.
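For the reader's convenience, here is a brief sketch of how (3.6) is obtained by integration, assuming the standard auxiliary function $H$ used in this method:

```latex
% H \equiv 0 means  F''/F' - 2F'/(F-1) = G''/G' - 2G'/(G-1).
% Integrating once gives  F'/(F-1)^2 = c\, G'/(G-1)^2  for some c \neq 0,
% and integrating again yields
\[
  \frac{1}{F-1}=\frac{a}{G-1}+b ,\qquad a\neq 0 ,
\]
% which is a Moebius transformation in G, i.e.
% F = (AG+B)/(CG+D) with AD-BC \neq 0, as in (3.6).
```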
Subcase 2.1: Suppose $AC\neq 0$. Then $F-A/C=-(AD-BC)/(C(CG+D))\neq 0.$ So $F$
omits the value $A/C.$ Therefore, by the second fundamental theorem, we get
$\displaystyle
T(r,F)\leq\overline{N}(r,\infty;F)+\overline{N}(r,0;F)+\overline{N}\left(r,\frac{A}{C};F\right)+S(r,F).$
This implies that
$\displaystyle nT(r,L(z,f))$ $\displaystyle\leq$
$\displaystyle\overline{N}(r,\infty;L(z,f))+\overline{N}(r,0;L(z,f))+S(r,L(z,f))$
$\displaystyle\leq$ $\displaystyle 2T(r,L(z,f))+S(r,L(z,f)),$
which is not possible in any of the cases.
Subcase 2.2: Suppose that $AC=0$. Since $AD-BC\neq 0,$ $A$ and $C$ cannot
both be zero.
Subcase 2.2.1: Suppose $A\neq 0$ and $C=0.$ Then (3.6) becomes $F\equiv\alpha
G+\beta$, where $\alpha=A/D$ and $\beta=B/D.$
If $F$ has no $1$-point, then by the second fundamental theorem of Nevanlinna,
we have
$\displaystyle
T(r,F)\leq\overline{N}(r,0;F)+\overline{N}(r,1;F)+\overline{N}(r,\infty;F)+S(r,F)$
i.e.,
$\displaystyle(n-2)T(r,L(z,f))\leq S(r,L(z,f)),$
which is not possible in any of the cases.
Suppose $F$ has a $1$-point. Then $\alpha+\beta=1$. If $\beta=0,$ then
$\alpha=1$ and then $F\equiv G$ which implies that
$L(z,f)=tf^{(k)},$
where $t$ is a constant such that $t^{n}=1$.
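The step from $F\equiv G$ to this relation can be made explicit:

```latex
\[
  (L(z,f))^{n}=(f^{(k)})^{n}
  \;\Longrightarrow\;
  \left(\frac{L(z,f)}{f^{(k)}}\right)^{\!n}=1 ,
\]
```

so the meromorphic function $L(z,f)/f^{(k)}$ takes values only among the finitely many $n$-th roots of unity and is therefore a constant $t$ with $t^{n}=1$.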
Let $\beta\neq 0$. Then applying the second main theorem of Nevanlinna to $F$,
we obtain
$\displaystyle nT(r,L(z,f))$ $\displaystyle\leq$
$\displaystyle\overline{N}(r,\infty;F)+\overline{N}(r,0;F)+\overline{N}(r,\beta;F)+S(r,F)$
$\displaystyle\leq$ $\displaystyle 2T(r,L(z,f))+T(r,f^{(k)})+S(r,L(z,f))$
$\displaystyle\leq$ $\displaystyle 3T(r,L(z,f))+S(r,L(z,f)),$
which is not possible in any of the cases.
Subcase 2.2.2: Suppose $A=0$ and $C\neq 0$. Then (3.6) becomes
$\displaystyle F\equiv\frac{1}{\gamma G+\delta},$
where $\gamma=C/B$ and $\delta=D/B.$
If $F$ has no $1$-point, then applying the second fundamental theorem to $F$,
we have
$\displaystyle nT(r,L(z,f))$ $\displaystyle\leq$
$\displaystyle\overline{N}(r,\infty;F)+\overline{N}(r,0;F)+\overline{N}(r,1;F)+S(r,F)$
$\displaystyle\leq$ $\displaystyle 2T(r,L(z,f))+S(r,L(z,f)),$
which is a contradiction.
Suppose that $F$ has a $1$-point. Then $\gamma+\delta=1$.
Therefore, $F\equiv 1/(\gamma G+1-\gamma)$. Since $C\neq 0,$ $\gamma\neq 0$,
and so $G$ omits the value $(\gamma-1)/\gamma.$
By the second fundamental theorem of Nevanlinna, we have
$\displaystyle T(r,G)$ $\displaystyle\leq$
$\displaystyle\overline{N}(r,\infty;G)+\overline{N}(r,0;G)+\overline{N}\left(r,-\frac{1-\gamma}{\gamma};G\right)+S(r,G).$
i.e.,
$\displaystyle(n-2)T(r,f^{(k)})\leq S(r,f^{(k)}),$
which is a contradiction. This completes the proof of the theorem.∎
###### Proof of Theorem 1.13.
If $f$ is rational, the conclusion follows by Lemma 2.12. Assume that $f$ is
a transcendental meromorphic function. Then $f^{(k)}$ is also transcendental.
Now we discuss the following two cases.
Case 1: Suppose that $f^{(k)}$ is transcendental and $L(z,f)$ is rational.
Then from Lemma 2.13 (i), it follows that
$\displaystyle
T(r,f^{(k)})=T(r,L(z,f))+O(\log(rT(r,f^{(k)})))=O(\log(rT(r,f^{(k)}))),$
which is a contradiction.
Case 2: Suppose $f^{(k)}$ and $L(z,f)$ are both transcendental.
Now, in view of Lemma 2.13 (ii) and applying the second fundamental
theorem of Nevanlinna to $f^{(k)}$, we obtain
$\displaystyle 3T(r,f^{(k)})$ $\displaystyle\leq$
$\displaystyle\overline{N}(r,f^{(k)})+\sum_{j=1}^{4}\overline{N}\left(r,\frac{1}{f^{(k)}-a_{j}}\right)+S(r,f^{(k)})$
$\displaystyle\leq$
$\displaystyle\overline{N}(r,f^{(k)})+2T(r,f^{(k)})+S(r,f^{(k)})$
$\displaystyle\leq$ $\displaystyle 3T(r,f^{(k)})+S(r,f^{(k)}),$
which implies that
$\displaystyle N(r,f^{(k)})=\overline{N}(r,f^{(k)})+S(r,f^{(k)}).$
This implies that
$\displaystyle
N(r,f)+k\overline{N}(r,f)=N(r,f^{(k)})=\overline{N}(r,f^{(k)})+S(r,f^{(k)})=\overline{N}(r,f)+S(r,f^{(k)}).$
This shows that
(3.7) $\displaystyle
N(r,f)=\overline{N}(r,f)=\overline{N}(r,f^{(k)})=S(r,f^{(k)}).$
Again from Lemma 2.13 (i), we have
$\displaystyle T(r,f^{(k)})=T(r,L(z,f))+S(r,f^{(k)}).$
In view of (3.7) and the above equation, applying the second main
theorem to $f^{(k)}$, we obtain
$\displaystyle 3T(r,f^{(k)})$ $\displaystyle\leq$
$\displaystyle\overline{N}(r,f^{(k)})+\sum_{j=1}^{4}\overline{N}\left(r,\frac{1}{f^{(k)}-a_{j}}\right)+S(r,f^{(k)})$
$\displaystyle\leq$
$\displaystyle\overline{N}(r,f^{(k)})+N\left(r,\frac{1}{f^{(k)}-L(z,f)}\right)+S(r,f^{(k)})$
$\displaystyle\leq$ $\displaystyle T(r,f^{(k)})+T(r,L(z,f))+S(r,f^{(k)})$
$\displaystyle\leq$ $\displaystyle 2T(r,f^{(k)})+S(r,f^{(k)}),$
which is a contradiction. This completes the proof of the theorem. ∎
## References
* [1] T.C. Alzahary and H.X. Yi, Weighted value sharing and a question of I. Lahiri, Complex Var. Theory Appl., 49(2004), 1063–1078.
* [2] A. Banerjee, Meromorphic functions sharing one value, Int. J. Math. Math. Sci., 22(2005), 3587–3598.
* [3] A. Banerjee, Uniqueness of meromorphic functions sharing two sets with finite weight II, Tamkang J. Math., 41(2010), 379–392.
* [4] A. Banerjee and S. Bhattacharayya, On the uniqueness of meromorphic functions and its difference operators sharing values of sets, Rend. Circ. Mat. Palermo (2) Ser., doi: 10.1007/s12215-016-0295-1.
* [5] S.S. Bhusnurmath and S.R. Kabbur, Value distributions and uniqueness theorems for difference of entire and meromorphic functions, Int. J. Anal. Appl., 2(2013), 124–136.
* [6] B. Chen and Z. Chen, Meromorphic function sharing two sets with its difference operator, Bull. Malays. Math. Sci. Soc., 2(2012), 765–774.
* [7] Z.X. Chen, Complex Differences and Difference Equations. Science press, Beijing, 2014.
* [8] Y.M. Chiang and S.J. Feng, On the Nevanlinna characteristic of $f(z+\eta)$ and difference equations in the complex plane, Ramanujan J., 16(2008), 105–129.
* [9] G.G. Gundersen, Meromorphic functions that share two finite values with their derivatives, J. Math. Anal. Appl., 75(1980), 441–446.
* [10] G.G. Gundersen, Meromorphic functions that share three values IM and a fourth value CM, Complex Var. Elliptic Equ., 20(1992), 99–106.
* [11] W.K. Hayman, Meromorphic Functions. The Clarendon Press, Oxford, 1964.
* [12] R.G. Halburd and R. Korhonen, Nevanlinna theory for the difference operator, Ann. Acad. Sci. Fenn. Math., 31(2006), 463–487.
* [13] R.G. Halburd and R. Korhonen, Difference analogue of the lemma on the logarithmic derivative with applications to difference equations, J. Math. Anal. Appl., 314(2006), 477–487.
* [14] R.G. Halburd, R. Korhonen and K. Tohge, Holomorphic curves with shift-invariant hyperplane preimages, Trans. Am. Math. Soc., 366(2014), 4267–4298.
* [15] R.G. Halburd and R. Korhonen, Growth of meromorphic solutions of delay differential equations, Proc. Am. Math. Soc., 145(2017), 2513–2526.
* [16] I. Lahiri, Value distribution of certain differential polynomials, Int. J. Math. Math. Sci., 28(2001), 83–91.
* [17] I. Lahiri, Weighted sharing and uniqueness of meromorphic functions, Nagoya Math. J., 161(2001), 193–206.
* [18] I. Lahiri and S. Dewan, Value distribution of the product of a meromorphic function and its derivative, Kodai Math. J., 26(2003), 95–100.
* [19] I. Lahiri, Weighted value sharing and uniqueness of meromorphic functions, Complex Var. Theory Appl., 46(2001), 241–253.
* [20] I. Lahiri and A. Banerjee, Weighted sharing of two sets, Kyungpook Math. J., 46(2006), 79–87.
* [21] I. Laine, Nevanlinna Theory and Complex Differential Equations. De Gruyter Studies in Mathematics, vol. 15. Walter de Gruyter and Co., Berlin, 1993.
* [22] S. Li and B. Chen, Unicity of meromorphic functions sharing sets with their linear difference polynomials, Abst. Appl. Anal., 2014(2014), Article ID 894968, 7 pages, https://doi.org/10.1155/2014/894968.
* [23] K. Liu and L. Yang, On entire solutions of some differential-difference equations, Comput. Methods Funct. Theory, 13(2013), 433–447.
* [24] K. Liu and X.J. Dong, Some results related to complex differential-difference equations of certain types, Bull. Korean Math. Soc., 51(2014), 1453–1467.
* [25] C. Meng and G. Liu, Uniqueness of Meromorphic functions concerning the shifts and derivative, J. Appl. Math. Informatics, 37(2019), 133–148.
* [26] E. Mues and N. Steinmetz, Meromorphe Funktionen, die mit ihrer Ableitung Werte teilen, Manuscr. Math., 29(1979), 195–206.
* [27] E. Mues and N. Steinmetz, Meromorphe Funktionen, die mit ihrer Ableitung zwei Werte teilen, Results Math., 6(1983), 48–55.
* [28] X. Qi and L. Yang, Uniqueness of meromorphic functions concerning their shifts and derivatives, Comput. Methods Funct. Theory, 20(2020), 159–178.
* [29] X.G. Qi, N. Li and L.Z. Yang, Uniqueness of meromorphic functions concerning their differences and solutions of difference Painlevé equations, Comput. Methods Funct. Theory, 18(2018), 567–582.
* [30] L.A. Rubel and C.C. Yang, Values shared by an entire function and its derivative, In: Lecture Notes in Math. Springer, New York, 599(1977), 101–103.
* [31] C.C. Yang, On deficiencies of differential polynomials II, Math. Z., 125(1972), 107–112.
* [32] C.C. Yang and H.X. Yi, Uniqueness Theory of Meromorphic Functions. Kluwer Academic Publishers, Dordrecht, 2003.
* [33] H.X. Yi, Meromorphic functions that share one or two values II, Kodai Math. J., 22(1999), 264–272.
* [34] J. Zhang, Value distribution and shared sets of difference of meromorphic functions, J. Math. Anal. Appl., 367(2010), 401–408.
Finite symmetries of quantum character stacks
Corina Keller a and Lukas Müller b
a Institut Montpelliérain Alexander Grothendieck
Université de Montpellier
Place Eugène Bataillon, 34090 Montpellier
[email protected]
b Max-Planck-Institut für Mathematik
Vivatsgasse 7, D – 53111 Bonn
[email protected]
###### Abstract
For a finite group $D$, we study categorical factorisation homology on
oriented surfaces equipped with principal $D$-bundles, which ‘integrates’ a
(linear) balanced braided category $\mathcal{A}$ with $D$-action over those
surfaces. For surfaces with at least one boundary component, we identify the
value of factorisation homology with the category of modules over an explicit
algebra in $\mathcal{A}$, extending the work of Ben-Zvi, Brochier and Jordan
to surfaces with $D$-bundles. Furthermore, we show that the value of
factorisation homology on annuli, boundary conditions, and point defects can
be described in terms of equivariant representation theory.
Our main example comes from an action of Dynkin diagram automorphisms on
representation categories of quantum groups. We show that in this case
factorisation homology gives rise to a quantisation of the moduli space of
flat twisted bundles.
###### Contents
1. 1 Introduction
2. 2 Setup
1. 2.1 Review of factorisation homology for manifolds with $\mathcal{G}$-structures
1. 2.1.1 Excision
2. 2.1.2 Point defects and boundary conditions
2. 2.2 The categorical case
1. 2.2.1 Excision for manifolds with $D$-bundles
3. 2.3 Actions of diagram automorphisms and their quantisation
4. 2.4 Reconstruction theorems for module categories
3. 3 Factorisation homology for surfaces with $D$-bundles
1. 3.1 Reconstruction for rigid braided tensor categories with group action
2. 3.2 Computation on punctured surfaces
3. 3.3 Little bundles algebras and braided $D$-crossed categories
4. 3.4 Algebraic description of boundary conditions and point defects
1. 3.4.1 Boundary conditions
2. 3.4.2 Point defects
3. 3.4.3 Closed surfaces and marked points
4. 4 Quantisation of flat twisted bundles
1. 4.1 The moduli space of flat twisted bundles
1. 4.1.1 The twisted Fock-Rosly Poisson structure
2. 4.2 Quantisation
## 1 Introduction
In this paper we extend the work on categorical factorisation homology by Ben-
Zvi, Brochier and Jordan [BZBJ18a, BZBJ18b] to (framed)
$\mathsf{E}_{2}$-algebras with an action of a finite group $D$. This leads to
functorial invariants for manifolds equipped with an $SO(2)\times D$
tangential structure, or in more geometric terms oriented 2-dimensional
manifolds equipped with principal $D$-bundles.
Factorisation homology [AF15, Lur] is a local-to-global invariant which
‘integrates’ higher algebraic quantities, namely disk algebras in a symmetric
monoidal higher category $\mathcal{C}$, over manifolds. We will work with
$\mathcal{C}={\mathsf{Pr}}_{c}$, the 2-category of $k$-linear compactly
generated presentable categories for $k$ an algebraically closed field of
characteristic 0. In the $D$-decorated setting, the coefficients $\mathcal{A}$
for factorisation homology are given by balanced braided monoidal categories
equipped with an additional $D$-action through balanced braided monoidal
automorphisms. Factorisation homology then assigns to every oriented
2-dimensional manifold $\Sigmait$ equipped with a principal $D$-bundle,
described by its classifying map $\varphi\colon\Sigmait\longrightarrow BD$, a
linear category
$\displaystyle\int\displaylimits_{(\Sigmait,\varphi)}\mathcal{A}\in{\mathsf{Pr}}_{c}\
\ .$ (1.1)
This construction is functorial in the pair $(\Sigmait,\varphi)$.
Our main example will be $\mathcal{A}={\mathsf{Rep}}_{q}(G)$, the (locally
finite) representation category of a quantum group associated to a reductive
group $G$ and $q\in\mathbb{C}^{\times}$ (we assume $q$ is not a root of
unity), which admits a natural action of the group of outer automorphisms
$\operatorname{Out}(G)$ of $G$. We use these coefficients to construct a
functorial quantisation of the moduli space of flat twisted bundles related to
finite $\operatorname{Out}(G)$-symmetries in gauge theories. Before addressing
the role of symmetries, we give a brief overview on the factorisation homology
approach to the quantisation of moduli spaces of flat bundles.
For a reductive algebraic group $G$, the moduli space $\mathcal{M}(\Sigmait)$
of flat principal $G$-bundles over a Riemann surface $\Sigmait$ is ubiquitous
in mathematical physics and symplectic geometry: For example, the symplectic
volume of $\mathcal{M}(\Sigmait)$ computes the topological limit of the
partition function of two dimensional Yang-Mills theory on $\Sigmait$ [Wit91],
the state space of 3-dimensional Chern-Simons theory on $\Sigmait$ can be
constructed by applying geometric quantisation to $\mathcal{M}(\Sigmait)$
[Hit90, ADPW91], and deformations of the category of quasi-coherent sheaves on
$\mathcal{M}(\Sigmait)$ describe boundary conditions in the 4-dimensional
Kapustin-Witten theory [KW05, BZN16].
In [BZN13, BZBJ18a, BZBJ18b] it was shown that quasi-coherent sheaves on the
classical moduli space can be understood in terms of the factorisation
homology of ${\mathsf{Rep}}(G)$:
$\displaystyle\operatorname{QCoh}(\mathcal{M}(\Sigmait))\cong\int_{\Sigmait}{\mathsf{Rep}}(G)\
\ .$ (1.2)
The category ${\mathsf{Rep}}(G)$ admits a well-studied deformation by the
category ${\mathsf{Rep}}_{q}(G)$ of (locally finite)
$U_{q}(\mathfrak{g})$-modules. Thus, using the local-to-global property of
factorisation homology, the quantum analog of the category of quasi-coherent
sheaves on $\mathcal{M}(\Sigmait)$ is defined in [BZBJ18a, BZBJ18b] as the
quantum character stack $\int_{\Sigmait}{\mathsf{Rep}}_{q}(G)$. This is a
mathematical construction of the 2-dimensional part of the 4-dimensional
Kapustin-Witten theory as a topological quantum field theory, which assigns to
an oriented surface $\Sigmait$ a quantisation of the moduli space of flat
$G$-bundles on $\Sigmait$, where ‘quantisation’ is understood as a deformation
of the category of quasi-coherent sheaves on the moduli space.
To explain the physical role of the $D$-action, we turn our attention to
symmetries of quantum field theories, in particular to symmetries of moduli
spaces of $G$-local systems. One source for symmetries are automorphisms of
the classical space of fields preserving the classical action functional. In
gauge theories the space of fields is most naturally understood as a higher
differential geometric object, namely a smooth stack, and automorphisms should
take this higher geometric structure into account. Concretely, this means that
the action of a symmetry group $D$ only needs to close up to gauge
transformations. In the physics literature these are known as fractionalised
symmetries [WWW18] and can be described by group extensions
$\displaystyle 1\longrightarrow G\longrightarrow\widehat{G}\longrightarrow
D\longrightarrow 1\ \ ,$ (1.3)
where the group $\widehat{G}$ encodes the non-trivial interaction of gauge
transformations and the symmetry group $D$. We refer to [FPSV15, MS20] for a
detailed discussion of these symmetries in the case of discrete gauge
theories.
We will restrict our attention to extensions of the form
$\displaystyle 1\longrightarrow G\longrightarrow
G\rtimes\operatorname{Out}(G)\longrightarrow\operatorname{Out}(G)\longrightarrow
1$ (1.4)
with $D=\operatorname{Out}(G)$. (To handle arbitrary extensions one could use
non-abelian 2-cocycles.) An element $\kappa\in\operatorname{Out}(G)$ acts on a
gauge field described by a principal $G$-bundle with connection by forming the
associated bundle along the group homomorphism $\kappa\colon G\longrightarrow
G$. In [MSS22] these symmetries have been studied in the context of
2-dimensional Yang-Mills theory. They restrict to an action of
$\operatorname{Out}(G)$ on the moduli space $\mathcal{M}(\Sigmait)$. One
motivation for developing the general framework presented in this paper was to
study these symmetries for quantum character stacks.
On the level of the local coefficients, i.e. for $\mathsf{fE}_{2}$-algebras,
the symmetry is realised through the $\operatorname{Out}(G)$-action on
${\mathsf{Rep}}(G)$ by pullbacks. In Section 2.3 we show that this action
extends to ${\mathsf{Rep}}_{q}(G)$ and hence we can compute the value of
factorisation homology for ${\mathsf{Rep}}_{q}(G)$ on oriented surfaces with
principal $\operatorname{Out}(G)$-bundles. By evaluation on surfaces with
trivial bundles we get an action of $\operatorname{Out}(G)$ on the quantum
character stack associated to an arbitrary surface. This implements the action
of the symmetry on the quantum character stack.
Factorisation homology on surfaces equipped with non-trivial
$\operatorname{Out}(G)$-bundles also has a natural field-theoretic
interpretation: The value of factorisation homology describes the coupling of
the quantum character field theory to non-trivial
$\operatorname{Out}(G)$-background fields. In [MSS22] the topological limit of
the partition function of 2-dimensional Yang-Mills theory coupled to an
$\operatorname{Out}(G)$-background field $\varphi\colon\Sigmait\longrightarrow
B\operatorname{Out}(G)$ was related to the symplectic volume of the moduli
space of flat $\varphi$-twisted $G$-bundles $\mathcal{M}_{\rho}(\Sigmait)$
[BY15, Mei17, Zer21]. We will show the analogous statement for quantum
character stacks, i.e. that they provide a quantisation of the category of
quasi-coherent sheaves on $\mathcal{M}_{\rho}(\Sigmait)$.
##### Summary of results and outline.
In Section 2 we review factorisation homology following [AF15], with a focus
on categorical factorisation homology on oriented 2-dimensional surfaces with
$D$-bundles for a finite group $D$. We will also allow for certain
stratifications along the lines of [AFT17], namely boundary conditions and
point defects. The section concludes with some details related to the
algebraic quantities appearing in this paper. In particular, we introduce the
representation category of a quantum group ${\mathsf{Rep}}_{q}(G)$, and show
that it is naturally endowed with an $\operatorname{Out}(G)$-action.
After the setup is established, we compute in Section 3 the factorisation
homology with coefficients in a rigid braided tensor category $\mathcal{A}$
with $D$-action $\vartheta$ of an oriented punctured surface $\Sigmait$
equipped with a $D$-bundle. To that end we apply reconstruction techniques for
module categories, following ideas presented in [BZBJ18a, Section 5]. We use a
combinatorial description of the surface with decoration
$\varphi\colon\Sigmait\longrightarrow BD$, namely a decorated fat graph model
$(P,d_{1},\dots,d_{n})$, see Definition 3.4. From $(P,d_{1},\dots,d_{n})$ we
can define an algebra
$a_{P}^{d_{1},\dots,d_{n}}\coloneqq\bigotimes_{i=1}^{n}\mathcal{F}_{\mathcal{A}}^{d_{i}}$
in $\mathcal{A}$, where each
$\mathcal{F}_{\mathcal{A}}^{d_{i}}=\int^{V\in\text{comp}(\mathcal{A})}V^{\vee}\boxtimes\vartheta(d_{i}^{-1}).V$
is a twisted version of Lyubashenko’s coend [Lyu95] in $\mathcal{A}$. We show
in Theorem 3.5 that there is an equivalence of categories
$\int\displaylimits_{(\Sigmait,\varphi)}\mathcal{A}\cong
a_{P}^{d_{1},\dots,d_{n}}\text{-mod}_{\mathcal{A}}\ \ ,$
identifying factorisation homology with the category of modules over an
algebra which can be described in purely combinatorial terms. This result is
an extension of [BZBJ18a, Theorem 5.14] to surfaces with $D$-bundles.
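As a consistency check on the twisted statement (not spelled out in this form here, but following from the theorem): when the $D$-bundle is trivial, i.e. all $d_{i}=e$, each twisted coend $\mathcal{F}_{\mathcal{A}}^{e}$ reduces to Lyubashenko's coend $\mathcal{F}_{\mathcal{A}}$, and the equivalence specialises to the untwisted computation of [BZBJ18a]:

```latex
\[
  \int\displaylimits_{\Sigmait_{g,r}}\mathcal{A}
  \;\cong\;
  \mathcal{F}_{\mathcal{A}}^{\otimes(2g+r-1)}\text{-mod}_{\mathcal{A}} .
\]
```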
In Section 3.3 we explore the algebraic structure that arises on the
collection of the factorisation homologies
$\int\displaylimits_{\varphi\colon\mathbb{S}^{1}\times\mathbb{R}\longrightarrow
D}\mathcal{A}$
for varying decoration $\varphi$, which turn out to assemble into an algebra
over the little bundles operad [MW20b]. It was shown in [MW20b] that
categorical little bundles algebras can be identified with braided $D$-crossed
categories, as defined by Turaev [Tur00, Tur10]. We compute the resulting
$D$-crossed categories concretely in terms of bimodule traces introduced in
[FSS17].
The goal of Section 3.4 is to give an explicit description of the algebraic
data describing boundary conditions and point defects in $D$-structured
factorisation homology. It is well-known that for oriented 2-manifolds without
$D$-bundles, boundary conditions are incorporated by algebras over the Swiss-
cheese operad, and point defects by $\mathsf{E}_{2}$-modules [AFT17, Gin15].
For algebras in linear categories, [BZBJ18b, Theorem 3.11] shows that the
latter coincides with the notion of a braided module category as introduced in
[Enr08, Bro12, Bro13]. In order to extend these algebraic structures to the
$D$-decorated setting, we will work with combinatorial models for the
decorated Swiss-cheese operad and the operad of decorated disks with marked
points respectively. If we let $\mathcal{A}$ be a balanced braided tensor
category with $D$-action, we find:
* •
Boundary conditions are given by a monoidal category $\mathcal{C}$ with
$D$-action and a $D$-equivariant braided functor
$\mathcal{A}\longrightarrow\mathcal{Z}(\mathcal{C})$ into the Drinfeld centre
of $\mathcal{C}$ (see Proposition 3.12).
* •
Point defects are equivariant balanced right modules over $\mathcal{A}$ as
given in Definition 3.17 (see Proposition 3.18).
In Section 3.4.3, we treat the case of closed manifolds.
Lastly, Section 4 is devoted to our main application, the quantisation of the
moduli space of twisted flat bundles via $\operatorname{Out}(G)$-structured
factorisation homology with coefficients in ${\mathsf{Rep}}_{q}(G)$: For a
connected surface $\Sigmait=\Sigmait_{g,r}$ of genus $g$ and with $r>0$
boundary components, together with a chosen point $p$ on the boundary, recall
that the $G$-representation variety is the affine variety
$\operatorname{Hom}(\pi_{1}(\Sigmait),G)$ of group homomorphisms. Since the
fundamental group of $\Sigmait$ is free on $n=2g+r-1$ generators, we have
$\operatorname{Hom}(\pi_{1}(\Sigmait),G)\cong G^{n}$. Via the holonomy map,
the $G$-representation variety is identified with the moduli space
$\mathcal{M}^{\circ}(\Sigmait)$ of flat $G$-bundles on $\Sigmait$ with a
trivialisation over $p\in\partial\Sigmait$, and there is an action of $G$ on
$\mathcal{M}^{\circ}(\Sigmait)$ changing the trivialisation.
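As a concrete illustration of this identification, consider the smallest non-trivial case (our example, spelled out under the conventions above):

```latex
% One-holed torus \Sigma_{1,1}: g = 1, r = 1, so n = 2g + r - 1 = 2 and
% \pi_1(\Sigma_{1,1}) is free on two generators a, b. Evaluating a
% homomorphism on the generators gives the identification
\operatorname{Hom}\bigl(\pi_{1}(\Sigmait_{1,1}),G\bigr)
  \;\xrightarrow{\ \cong\ }\; G\times G\ ,
  \qquad \phi\longmapsto\bigl(\phi(a),\phi(b)\bigr)\ ,
% and the G-action changing the trivialisation over p is simultaneous
% conjugation on both factors.
```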
Now, given an $\operatorname{Out}(G)$-bundle
$\rho\colon\pi_{1}(\Sigmait_{g,r})\longrightarrow\operatorname{Out}(G)$
described by a tuple $(\kappa_{1},\dots,\kappa_{n})$ of elements
$\kappa_{i}\in\operatorname{Out}(G)$, we can define the
$\operatorname{Out}(G)$-twisted representation variety
$\mathcal{M}^{\circ}_{\rho}(\Sigmait)=\operatorname{Hom}_{\rho}(\pi_{1}(\Sigmait),G)$,
where now the maps $\pi_{1}(\Sigmait)\longrightarrow G$ are no longer group
homomorphisms, but twisted by the elements $\kappa_{i}$, see Section 4.1 for
the formal definitions. The moduli space of flat $\rho$-twisted bundles is the
stacky quotient $\mathcal{M}^{\circ}_{\rho}(\Sigmait)/^{\rho}G$, with respect
to the $\rho$-twisted conjugation action, and we show that the category of
quasi-coherent sheaves on this moduli space can be computed via
$\operatorname{Out}(G)$-structured factorisation homology
$\int\limits_{\rho\colon\Sigmait\longrightarrow
B\operatorname{Out}(G)}{\mathsf{Rep}}(G)\cong\bigotimes_{i=1}^{n}\mathcal{O}^{\kappa_{i}}(G)\text{-mod}_{{\mathsf{Rep}}(G)}\
\ ,$
where on the right hand side $\otimes_{i=1}^{n}\mathcal{O}^{\kappa_{i}}(G)$ is
the algebra of functions on $\mathcal{M}^{\circ}_{\rho}(\Sigmait)\cong G^{n}$
with the induced $\rho$-twisted action by $G$.
We then follow the approach of Ben-Zvi, Brochier and Jordan [BZBJ18a] to
quantise these moduli spaces by locally choosing coefficients in the
representation category of the corresponding quantum group
${\mathsf{Rep}}_{q}(G)$ and subsequently gluing this local data together via
factorisation homology over the surface $\Sigmait$ decorated with an
$\operatorname{Out}(G)$-bundle:
$\int\limits_{\rho\colon\Sigmait\longrightarrow
B\operatorname{Out}(G)}{\mathsf{Rep}}_{q}(G)\cong
a_{P}^{\kappa_{1},\dots,\kappa_{n}}\text{-mod}_{{\mathsf{Rep}}_{q}(G)}\ \ .$
We then show by means of a direct computation that the above provides a
quantisation of the moduli space of flat twisted bundles. To that end, we
present in Proposition 4.3 a novel combinatorial formula for the Poisson
structure on $\mathcal{M}_{\rho}^{\circ}(\Sigmait)$ and in Theorem 4.4 we
prove that the algebra $a_{\hbar}^{\kappa_{1},\dots,\kappa_{n}}$ is a deformation quantisation of the algebra of functions on $\mathcal{M}_{\rho}^{\circ}(\Sigmait)$. (Here $a_{\hbar}^{\kappa_{1},\dots,\kappa_{n}}$ denotes the combinatorial algebra $a_{P}^{\kappa_{1},\dots,\kappa_{n}}$ in $\mathcal{A}={\mathsf{Rep}}_{\hbar}(G)$.)
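For the reader's convenience, let us spell out (in standard conventions, as a reminder rather than a verbatim quotation of Theorem 4.4) what such a statement asserts:

```latex
% A deformation quantisation of the Poisson algebra
% (\mathcal{O}(\mathcal{M}_{\rho}^{\circ}(\Sigmait)), \{\cdot,\cdot\}) is an
% associative flat \mathbb{C}[[\hbar]]-algebra a_\hbar with an isomorphism
a_{\hbar}\,/\,\hbar\,a_{\hbar}
  \;\cong\;
  \mathcal{O}\bigl(\mathcal{M}_{\rho}^{\circ}(\Sigmait)\bigr)
% of commutative algebras, such that for all x, y \in a_\hbar with classical
% limits \bar{x}, \bar{y} the commutator recovers the Poisson bracket:
\qquad\text{and}\qquad
\frac{x\,y - y\,x}{\hbar}
  \;\equiv\;
  \{\bar{x},\bar{y}\}
  \pmod{\hbar}\ .
```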
##### Relation to topological field theories.
We conclude the introduction by briefly commenting on the relation to
topological field theories. We restrict our discussion to framed field
theories since we want to highlight the additional structure coming from the
bundle decorations and because this is the case most studied in the literature
on fully extended field theories. In the undecorated setting, i.e. for
manifolds without $D$-bundles, factorisation homology gives rise to fully
extended topological field theories. More precisely, for an
$\mathsf{E_{n}}$-algebra $\mathcal{E}$ in a (nice) symmetric monoidal
$(\infty,1)$-category $\mathcal{C}$, Scheimbauer [Sch14] explicitly
constructed a fully extended framed topological field theory taking values in
the higher Morita category of $\mathsf{E_{n}}$-algebras [Sch14, Hau17, JFS17]
in $\mathcal{C}$ via factorisation homology for framed manifolds, assigning
$\mathcal{E}$ to the framed point. For $n=2$ and
$\mathcal{C}={\mathsf{Pr}}_{c}$ the Morita category is the 4-category ${\mathsf{BrTens}}$ of braided tensor categories, with central algebras (for $\mathcal{A},\mathcal{B}\in{\mathsf{BrTens}}$, a central algebra is an $\mathsf{E}_{1}$-algebra in $\mathcal{A}$-$\mathcal{B}$-bimodules) as 1-morphisms, central bimodules as 2-morphisms, and functors and natural transformations as 3- and 4-morphisms, respectively [BJS21]. Every object of
${\mathsf{BrTens}}$ is 2-dualisable [GS18, BJS21] and hence by the cobordism
hypothesis [BD95, Lur09] defines a 2-dimensional framed topological field
theory, namely the one explicitly constructed by Scheimbauer.
If one adds decorations with principal $D$-bundles, the corresponding
topological field theories are known as $D$-equivariant field theories
[Tur10]. Factorisation homology for $\mathsf{E}_{n}$-algebras with $D$-action
is expected to provide examples of $D$-equivariant field theories with values
in the Morita category of $\mathsf{E}_{n}$-algebras. Our work can be
understood as exploring this (expected) equivariant field theory in the
oriented setting and dimension $n=2$ with values in ${\mathsf{Pr}}_{c}$. As a
complementary example it was shown in [MW20a] that equivariant higher
Hochschild homology, that is factorisation homology for
$\mathsf{E_{\infty}}$-algebras with $D$-action in chain complexes, gives
examples of equivariant field theories in any dimension $n$.
$D$-equivariant field theories can also be studied through the cobordism
hypothesis, which implies that 2-dimensional framed fully extended
$D$-equivariant field theories with values in ${\mathsf{BrTens}}$ are
described by functors $BD\longrightarrow{\mathsf{BrTens}}$. Such a functor is
described by picking out an object $\mathcal{A}\in{\mathsf{BrTens}}$, together
with a central algebra $\mathcal{M}_{d}$ for every $d\in D$, a central
$\mathcal{M}_{d_{2}}\circ\mathcal{M}_{d_{1}}$-$\mathcal{M}_{d_{2}d_{1}}$-bimodule
for every pair $d_{1},d_{2}\in D$ and furthermore 3- and 4-morphisms for all
triples and quadruples of group elements, respectively, satisfying a coherence
condition involving five group elements. This data can be constructed from an
$\mathsf{E}_{2}$-algebra in ${\mathsf{Pr}}_{c}$ with $D$-action by setting
$\mathcal{M}_{d}=\mathcal{A}$, seen as an $\mathsf{E}_{1}$-algebra in
bimodules over itself, where the left action is twisted by acting with $d$.
The coherence isomorphisms for the $D$-action induce the additional data.
However, this is only a special case for the data classifying equivariant
framed field theories according to the cobordism hypothesis and situations
outside this class do not seem to be accessible using factorisation homology
with values in ${\mathsf{Pr}}_{c}$.
The type of factorisation homology we compute in this article is a special
case of equivariant factorisation homology for global quotient orbifolds
[Wee20]; namely the case of free actions. The general case, which requires
additional input data, should give rise to field theories defined as functors
out of the bordism category introduced in [GS21]. Hence, our results provide a first step towards computing this field theory.
##### Acknowledgements.
We thank Bart Vlaar for helpful discussions on the action of Dynkin diagram
automorphisms on ${\mathsf{Rep}}_{q}(G)$. We thank Adrien Brochier, Damien
Calaque, David Jordan, Christoph Schweigert, and Lukas Woike for helpful
discussions and correspondence. CK has received funding from the European
Research Council (ERC) under the European Union’s Horizon 2020 research and
innovation programme (grant agreement No. 768679). LM gratefully acknowledges
support from the Max-Planck-Institute for Mathematics in Bonn.
## 2 Setup
In this section we review some of the necessary mathematical background and
introduce the main example of $\operatorname{Out}(G)$-actions on the
representation category of a quantum group ${\mathsf{Rep}}_{q}(G)$ leading to
a coherent quantisation of moduli spaces of twisted flat bundles in Section 4.
### 2.1 Review of factorisation homology for manifolds with
$\mathcal{G}$-structures
Let ${\mathsf{Man}}_{n}$ be the topological category of $n$-dimensional
manifolds which admit a finite good open cover with embeddings as morphisms.
The morphism spaces are equipped with the compact-open topology. The disjoint
union of manifolds equips ${\mathsf{Man}}_{n}$ with the structure of a
symmetric monoidal category. Let $\mathcal{G}$ be a topological group and
$\rho\colon\mathcal{G}\longrightarrow GL(n)$ a continuous group homomorphism.
A $\mathcal{G}$-structure on a manifold $M$ is a homotopy lift
$\begin{tikzcd} & {B\mathcal{G}} \arrow[d, "{B\rho}"] \\ {M} \arrow[r] \arrow[ur, dashed] & {BGL(n)} \end{tikzcd}$ (2.1)
of the classifying map for the frame bundle. These homotopy lifts correspond
to a reduction of the structure group of the frame bundle to $\mathcal{G}$.
There is a space of tangential $\mathcal{G}$-structures on $M$, given by the
mapping space ${\mathsf{Spaces}}_{/BGL(n)}(M,B\mathcal{G})$. This space is a
model for the $\infty$-groupoid of tangential $\mathcal{G}$-structures on $M$.
Homotopies in this space lead to a natural notion of morphisms of tangential
structures.
###### Example 2.1.
We list some important examples of $\mathcal{G}$-structures.
* •
For $\mathcal{G}=\star$, a $\mathcal{G}$-structure is the same as the choice
of a framing on $M$.
* •
For $\mathcal{G}=SO(n)\longrightarrow GL(n)$ the canonical embedding, a
$\mathcal{G}$-structure is the same as the choice of an orientation.
* •
For $\mathcal{G}=SO(n)\times D$ and $\rho\colon SO(n)\times
D\xrightarrow{\operatorname{pr}_{SO(n)}}SO(n)\longrightarrow GL(n)$, a
$\mathcal{G}$-structure is the choice of an orientation on $M$ together with a
map $M\longrightarrow BD$, i.e. a principal $D$-bundle. This is the example
considered in this paper.
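For a finite group $D$, this last datum admits a familiar combinatorial description, which we recall here (a standard fact, included for orientation):

```latex
% Homotopy classes of maps to BD classify principal D-bundles; for connected M
[\,M,\,BD\,]
  \;\cong\;
  \operatorname{Hom}\bigl(\pi_{1}(M),D\bigr)\,/\,\text{conjugation}\ ,
% so a D-decoration on a connected surface is, up to equivalence, a conjugacy
% class of homomorphisms \pi_1(\Sigma) \to D.
```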
To construct the $\infty$-category ${\mathsf{Man}}_{n}^{\mathcal{G}}$ of
manifolds with $\mathcal{G}$-structure we proceed as follows: there is a
symmetric monoidal functor
$\tau\colon{\mathsf{Man}}_{n}\longrightarrow{\mathsf{Spaces}}_{/BGL(n)}$ of
$\infty$-categories sending a manifold $M$ to the classifying map
$M\longrightarrow BGL(n)$ of its frame bundle [AF15, Section 2.1]. The
category ${\mathsf{Man}}_{n}^{\mathcal{G}}$ of manifolds with tangential
$\mathcal{G}$-structure is defined as the pullback
$\begin{tikzcd} {{\mathsf{Man}}_{n}^{\mathcal{G}}} \arrow[r] \arrow[d] & {{\mathsf{Spaces}}_{/B\mathcal{G}}} \arrow[d] \\ {{\mathsf{Man}}_{n}} \arrow[r, "\tau"] & {{\mathsf{Spaces}}_{/BGL(n)}} \end{tikzcd}$ (2.2)
Denote by
${\mathsf{Disk}}_{n}^{\mathcal{G}}\subset{\mathsf{Man}}_{n}^{\mathcal{G}}$ the
full symmetric monoidal subcategory whose objects are disjoint unions of
Euclidean spaces. Let $\mathcal{V}$ be a symmetric monoidal $\infty$-category.
A ${\mathsf{Disk}}_{n}^{\mathcal{G}}$-algebra in $\mathcal{V}$ is a symmetric
monoidal functor
$\mathcal{A}\colon{\mathsf{Disk}}_{n}^{\mathcal{G}}\longrightarrow\mathcal{V}$.
###### Remark 2.2.
For the tangential structures of Example 2.1, disk algebras have a description
in terms of more classical objects:
* •
A ${\mathsf{Disk}}^{\star}_{n}$-algebra is an $\mathsf{E_{n}}$-algebra, see
for example [AF15].
* •
A ${\mathsf{Disk}}^{SO(n)}_{n}$-algebra is a framed $\mathsf{E_{n}}$-algebra,
see for example [AF15].
* •
A ${\mathsf{Disk}}^{SO(n)\times D}_{n}$-algebra is a framed
$\mathsf{E_{n}}$-algebra equipped with a $D$-action, see for example [Wee20,
Proposition 4.6].
###### Example 2.3.
In Figure 1 we give a sketch for $n=2$ of the disk operations in
${\mathsf{Disk}}^{\mathcal{G}}_{2}$, for the tangential structures of the
previous remark, and the corresponding algebraic structures on
$\mathcal{A}\colon{\mathsf{Disk}}^{\mathcal{G}}_{2}\longrightarrow(\mathcal{V},\otimes)$.
Figure 1: First row: Disk embeddings (or isotopies thereof) in
${\mathsf{Disk}}^{\ast}_{2}$ that give rise to the multiplication $m$ and the
braiding $\sigma$ in the $\mathsf{E_{2}}$-algebra $\mathcal{A}$. Here
$\tau\colon A\otimes A\longrightarrow A\otimes A$ denotes the braiding in
$\mathcal{V}$. Second row: On the right, the additional operation in the
oriented case given by a loop in the space of disk embeddings in
${\mathsf{Disk}}^{SO(2)}_{2}$, rotating the disk by $2\pi$. Together with the
operations in the first row, this endows $\mathcal{A}$ with the structure of a
framed $\mathsf{E_{2}}$-algebra. On the left, the additional operation in the
$D$-decorated oriented case, given by the identity disk embedding in
${\mathsf{Disk}}^{SO(2)\times D}_{2}$ with homotopy
$d\colon\operatorname{id}^{*}(e)\Rightarrow e$, inducing an automorphism of
$\mathcal{A}$ for each $d\in D$, i.e. a $D$-action on $\mathcal{A}$.
Let $(\mathcal{V},\otimes)$ be a symmetric monoidal $\infty$-category. We
assume that $\mathcal{V}$ admits sifted colimits and that $\otimes$ preserves
them in each component. Factorisation homology $\int_{\bullet}\mathcal{A}$
with coefficients in the ${\mathsf{Disk}}_{n}^{\mathcal{G}}$-algebra
$\mathcal{A}$ is the left Kan-extension [AF15]:
$\begin{tikzcd} {{\mathsf{Disk}}_{n}^{\mathcal{G}}} \arrow[r, "\mathcal{A}"] \arrow[d, hook] & {\mathcal{V}} \\ {{\mathsf{Man}}_{n}^{\mathcal{G}}} \arrow[ur, swap, "\int_{\bullet}\mathcal{A}"] \end{tikzcd}$ (2.3)
The condition that $\otimes$ preserves sifted colimits makes factorisation
homology into a symmetric monoidal functor. Hence, the value of factorisation
homology on any manifold $M$ is naturally pointed by the inclusion
$\emptyset\hookrightarrow M$ of the empty manifold:
$\displaystyle\int_{\emptyset}\mathcal{A}\cong
1_{\mathcal{V}}\longrightarrow\int_{M}\mathcal{A}\ \ .$ (2.4)
#### 2.1.1 Excision
The main tool for computing factorisation homology will be $\otimes$-excision.
Excision allows one to reconstruct the value of factorisation homology from a
certain decomposition of $M$, namely from a collar-gluing [AF15, Section 3.3].
We recall that a collar-gluing of a $\mathcal{G}$-structured manifold $M$ is
given by a smooth map
$f\colon M\longrightarrow[-1,1]\ \ ,$
such that $f^{-1}(-1,1)\longrightarrow(-1,1)$ is a manifold bundle. If we
define $M_{-}\coloneqq f^{-1}[-1,1)$, $M_{+}\coloneqq f^{-1}(-1,1]$ and
$M_{0}\coloneqq f^{-1}(-1,1)$, we will often denote the collar-gluing by
$M=M_{-}\bigcup_{M_{0}}M_{+}$.
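A minimal example (ours, for orientation): the height function on the 2-sphere is a collar-gluing in this sense.

```latex
% Height function on S^2 \subset \mathbb{R}^3:
f\colon S^{2}\longrightarrow[-1,1]\ ,\qquad (x,y,z)\longmapsto z\ ,
% f^{-1}(-1,1) \to (-1,1) is a circle bundle, and the pieces are
M_{\pm}\;\cong\;\mathbb{R}^{2}
  \quad\text{(open caps)}\ ,\qquad
M_{0}=f^{-1}(-1,1)\;\cong\;S^{1}\times(-1,1)\ ,
% recovering the familiar decomposition of the sphere into two disks glued
% along an annulus.
```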
Figure 2: An example of a collar-gluing.
We can choose an equivalence $\theta\colon M_{0}\xrightarrow{\ \cong\
}N\times(-1,1)$ in the $\infty$-category of $\mathcal{G}$-structured
manifolds, where $N$ is the fibre over an arbitrary point in $(-1,1)$, as
illustrated in Figure 2. The object $\int_{N\times(-1,1)}\mathcal{A}$ has a
natural $E_{1}$-algebra structure in $\mathcal{V}$, which gives rise to an
$E_{1}$-algebra structure on $\int_{M_{0}}\mathcal{A}$. We fix oriented
embeddings
$\displaystyle\mu_{+}\colon(-1,1)\sqcup(-1,1]\longrightarrow(-1,1]\ \text{ and
}\ \mu_{-}\colon[-1,1)\sqcup(-1,1)\longrightarrow[-1,1)\ \ ,$ (2.5)
which are the identity in a neighbourhood of the boundary. Using the
equivalence $\theta$, we lift these embeddings to maps
$\operatorname{act}_{-}\colon M_{-}\sqcup M_{0}\longrightarrow M_{-}$ and
$\operatorname{act}_{+}\colon M_{0}\sqcup M_{+}\longrightarrow M_{+}$ of
$\mathcal{G}$-structured manifolds, see Figure 3 below for a sketch.
Figure 3: The map which induces the right $\int_{M_{0}}\mathcal{A}$-module structure on $\int_{M_{-}}\mathcal{A}$. Here, the green collar depicts the manifold $N\times(-1,1)$.
Evaluation of factorisation homology on $\operatorname{act}_{-}$ and
$\operatorname{act}_{+}$ equips $\int_{M_{-}}\mathcal{A}$ and
$\int_{M_{+}}\mathcal{A}$ with the structure of a right and left module over
$\int_{M_{0}}\mathcal{A}$, respectively. At this point we want to highlight
that the module structures depend on the chosen trivialisation $\theta$; see
Section 2.2.1 for an example. The value of factorisation homology on $M$ can
be computed as the relative tensor product [AF15, Lemma 3.18]
$\displaystyle\int_{M}\mathcal{A}\cong\int_{M_{-}}\mathcal{A}~{}\underset{{\int_{M_{0}}\mathcal{A}}}{\bigotimes}~{}\int_{M_{+}}\mathcal{A}\
\ ,$ (2.6)
defined through the bar construction in $\mathcal{V}$.
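Explicitly, the relative tensor product in (2.6) is the geometric realisation of the two-sided simplicial bar construction (a standard construction, sketched here in our notation):

```latex
% Simplicial bar construction computing the relative tensor product (2.6):
\int_{M_{-}}\!\mathcal{A}
  \underset{\int_{M_{0}}\!\mathcal{A}}{\bigotimes}
\int_{M_{+}}\!\mathcal{A}
  \;\cong\;
  \Bigl|\,
  \int_{M_{-}}\!\mathcal{A}
  \otimes\Bigl(\int_{M_{0}}\!\mathcal{A}\Bigr)^{\otimes\bullet}
  \otimes\int_{M_{+}}\!\mathcal{A}
  \,\Bigr|\ ,
% with face maps given by the module actions act_-, act_+ and the
% E_1-multiplication on \int_{M_0} \mathcal{A}.
```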
#### 2.1.2 Point defects and boundary conditions
Factorisation homology admits a natural extension to stratified manifolds
[AFT17], which in more physics oriented language corresponds to incorporating
defects in the field theory that we wish to study via factorisation homology.
For us, only two types of defects will be relevant; namely point defects and
boundary conditions. Instead of going through the heavy machinery of
stratified manifolds, we only mention the concrete examples studied in this
paper following [BZBJ18b].
We fix $\mathcal{G}=SO(2)\times D$ and define the $\infty$-category
${\mathsf{Man}}_{2,\ast}^{\mathcal{G}}$ whose objects are oriented
2-dimensional manifolds $\Sigmait$, together with a collection of marked
points $p_{1},\dots,p_{n}\in\Sigmait$ and a continuous map
$\varphi\colon\Sigmait\setminus\{p_{1},\dots,p_{n}\}\longrightarrow BD$.
Morphisms are embeddings of manifolds, mapping marked points bijectively onto
marked points, which are compatible with the morphisms into $BD$. We denote by
${\mathsf{Disk}}_{2,\ast}^{\mathcal{G}}$ the full subcategory whose objects
are disjoint unions of disks with one or zero marked points. Notice that we do
not require the $D$-bundles to extend to the whole of $\Sigmait$. As for
smooth manifolds, factorisation homology can again be defined by left Kan extension. (The slice categories appearing in the coend formula for the left Kan extension are not sifted; hence, here we need to assume that $\mathcal{V}$ is tensor cocomplete.)
$\begin{tikzcd} {{\mathsf{Disk}}_{2,\ast}^{\mathcal{G}}} \arrow[r, "\mathcal{F}"] \arrow[d, hook] & {\mathcal{V}} \\ {{\mathsf{Man}}_{2,\ast}^{\mathcal{G}}} \arrow[ur, swap, "\int_{\bullet}\mathcal{F}"] \end{tikzcd}$ (2.7)
The second type of defects we want to study are boundary conditions. To that
end, we define the category ${\mathsf{Man}}_{2,\partial}^{\mathcal{G}}$ of
oriented 2-dimensional manifolds $\Sigmait$ with boundary $\partial\Sigmait$
and continuous maps $\Sigmait\longrightarrow BD$. We denote by
${\mathsf{Disk}}_{2,\partial}^{\mathcal{G}}$ the full subcategory with objects
disjoint unions of disks and half disks; by the latter we mean manifolds diffeomorphic to $\mathbb{R}\times\mathbb{R}_{\geq 0}$.
We will adopt the following terminology:
###### Definition 2.4.
By point defects in $\mathcal{G}=SO(2)\times D$-structured factorisation
homology we mean a symmetric monoidal functor
$\mathcal{F}\colon{\mathsf{Disk}}_{2,\ast}^{\mathcal{G}}\longrightarrow\mathcal{V}$.
Similarly, by a boundary condition we mean a symmetric monoidal functor
$\mathcal{F}\colon{\mathsf{Disk}}_{2,\partial}^{\mathcal{G}}\longrightarrow\mathcal{V}$.
In Section 3.4 we will give an algebraic characterisation of point defects and
boundary conditions.
###### Remark 2.5.
Unless otherwise stated, we will usually work with trivial boundary conditions
in this paper, meaning that we use the same disk algebra for a disk with empty
boundary, as for a disk with non-empty boundary.
### 2.2 The categorical case
From now on we specialise to 2-dimensional manifolds and tangential structures
of the form $D\times SO(2)$, where $D$ is a finite group. Throughout this
paper we will work with factorisation homology with values in the
$(2,1)$-category ${\mathsf{Pr}}_{c}$ of $k$-linear compactly generated
presentable categories with compact and cocontinuous functors and natural
isomorphisms between them, meaning that we will not use any non-invertible
2-morphisms. For us $k$ will always be an algebraically closed field of
characteristic 0, usually $k=\mathbb{C}$. Recall that an object $c$ in a
$k$-linear category $\mathcal{C}$ is compact if the functor
$\operatorname{Hom}(c,-)$ preserves filtered colimits. A category
$\mathcal{C}$ is compactly generated if every object can be written as a
filtered colimit of compact objects and a functor is compact if it preserves
compact objects. We refer the reader to [BZBJ18a, Section 3] for more details
on ${\mathsf{Pr}}_{c}$.
Every $\infty$-functor from ${\mathsf{Man}}_{2}^{D\times SO(2)}$ to
${\mathsf{Pr}}_{c}$ will factor through its homotopy 2-category which admits
the following concrete description.
###### Definition 2.6.
We denote by $D\text{-}{\mathsf{Man}}_{2}$ the $(2,1)$-category with
* •
Objects: Oriented 2-dimensional manifolds $\Sigmait$ equipped with a
continuous map $\varphi\colon\Sigmait\longrightarrow BD$.
* •
1-Morphisms: Smooth embeddings
$f\colon\Sigmait_{1}\longrightarrow\Sigmait_{2}$ together with the choice of a
homotopy $h\colon\varphi_{1}\longrightarrow f^{*}\varphi_{2}$.
* •
2-Morphisms: A 2-morphism $(f_{1},h_{1})\longrightarrow(f_{2},h_{2})$ is given
by an equivalence class of isotopies $\chi\colon f_{1}\longrightarrow f_{2}$,
together with a map $\gamma\colon\Sigmait_{1}\times\Delta^{2}\longrightarrow
BD$ filling
$\begin{tikzcd}[column sep=small] & {\varphi_{1}} \arrow[dl, swap, "h_{2}"] \arrow[dr, "h_{1}"] & \\ {f_{2}^{*}\varphi_{2}} & & {f_{1}^{*}\varphi_{2}} \arrow[ll, "\chi^{*}\varphi_{2}"] \end{tikzcd}$ (2.8)
Two such pairs $(\chi,\gamma)$ and $(\chi^{\prime},\gamma^{\prime})$ are
equivalent if there exists an isotopy of isotopies from $\chi$ to
$\chi^{\prime}$ (i.e. a map
$\Omegait\colon\Sigmait_{1}\times\Delta^{2}\longrightarrow\Sigmait_{2}$
filling the bottom in Diagram (2.9)) and a map
$\Gammait\colon\Sigmait\times\Delta^{3}\longrightarrow BD$ filling
$\begin{tikzcd} & {\varphi_{1}} \arrow[dl, swap, "h_{2}"] \arrow[d, "h_{1}"] \arrow[dr, "h_{2}"] & \\ {f_{2}^{*}\varphi_{2}} & {f_{1}^{*}\varphi_{2}} \arrow[l, "\chi^{*}\varphi_{2}"] \arrow[r, swap, "{\chi^{\prime}}^{*}\varphi_{2}"] & {f_{2}^{*}\varphi_{2}} \end{tikzcd}$ (2.9)
where the faces are labeled with the various maps which are part of the
morphisms.
We denote the corresponding disk category by $D\text{-}{\mathsf{Disk}}_{2}$.
###### Remark 2.7.
Similarly, there are truncated versions $D\text{-}{\mathsf{Man}}_{2,\ast}$ and
$D\text{-}{\mathsf{Man}}_{2,\partial}$ of the categories
${\mathsf{Man}}_{2,\ast}^{D\times SO(2)}$ and
${\mathsf{Man}}_{2,\partial}^{D\times SO(2)}$ introduced above.
One reason to work with ${\mathsf{Pr}}_{c}$ is that it is a closed symmetric
monoidal $(2,1)$-category under the Deligne-Kelly tensor product $\boxtimes$.
In particular, the tensor product $\boxtimes$ preserves sifted colimits in
each variable, see [BZBJ18a, Proposition 3.5]. For any two objects
$\mathcal{C},\mathcal{D}\in{\mathsf{Pr}}_{c}$, the Deligne-Kelly tensor
product $\mathcal{C}\boxtimes\mathcal{D}\in{\mathsf{Pr}}_{c}$ of $\mathcal{C}$
and $\mathcal{D}$ is characterised via the natural equivalence
${\mathsf{Pr}}_{c}[\mathcal{C}\boxtimes\mathcal{D},\mathcal{E}]\cong{\mathsf{Bil}}_{c}[\mathcal{C},\mathcal{D};\mathcal{E}]\
\ ,$
where ${\mathsf{Bil}}_{c}[\mathcal{C},\mathcal{D};\mathcal{E}]$ is the
category of $k$-bilinear functors from $\mathcal{C}\times\mathcal{D}$ to
$\mathcal{E}$, preserving colimits in each variable separately.
###### Definition 2.8.
A tensor category $\mathcal{A}$ in ${\mathsf{Pr}}_{c}$ is rigid if all compact
objects of $\mathcal{A}$ are left and right dualisable.
###### Definition 2.9.
A balancing is a family of natural isomorphisms $(\theta_{V}:V\longrightarrow
V)_{V\in\mathcal{A}}$, such that
$\theta_{1_{\mathcal{A}}}=\operatorname{id}_{1_{\mathcal{A}}}$, and which is compatible with the braiding $\sigma$ of $\mathcal{A}$ in the sense that
$\theta_{V\otimes W}=\sigma_{W,V}\circ(\theta_{W}\otimes\theta_{V})\circ\sigma_{V,W}\colon V\otimes W\longrightarrow V\otimes W\ \ .$ (2.10)
A balanced tensor category is then a braided tensor category equipped with a
balancing.
By a result of Salvatore and Wahl [SW03], the 2-category of framed
$\mathsf{E}_{2}$-algebras (or equivalently
${\mathsf{Disk}}_{2}^{SO(2)}$-algebras) in ${\mathsf{Pr}}_{c}$ can be
identified with the 2-category of balanced braided tensor categories
${\mathsf{bBr}}$. We also recall that a ribbon category in ${\mathsf{Pr}}_{c}$
is a rigid balanced braided tensor category so that the balancing maps satisfy
$\theta_{V^{\vee}}=(\theta_{V})^{\vee}$. One can show that in this case giving
a balancing is equivalent to giving a pivotal structure, see e.g. [HPT16,
Appendix A.2]. Finally, a $D\text{-}{\mathsf{Disk}}_{2}$-algebra is described
by a balanced braided tensor category with $D$-action.
###### Definition 2.10.
Let $\mathcal{A}$ be a balanced tensor category. A $D$-action on $\mathcal{A}$
is a (2-)functor
$\vartheta\colon\star\text{//}D\longrightarrow\star\text{//}\operatorname{Aut}_{{\mathsf{bBr}}}(\mathcal{A})$
from the category with one object and $D$ as automorphisms to the 2-category with one object, balanced braided automorphisms (a braided automorphism is balanced if it preserves $\theta$) of $\mathcal{A}$ as 1-morphisms and natural transformations as 2-morphisms. In more detail, the action consists of an auto-equivalence $\vartheta(d)\colon\mathcal{A}\longrightarrow\mathcal{A}$ for each $d\in D$, and for each composable pair $d_{i},d_{j}\in D$ a natural isomorphism $c_{ij}\colon\vartheta(d_{i}d_{j})\xrightarrow{\cong}\vartheta(d_{i})\vartheta(d_{j})$ satisfying the usual associativity axiom.
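Spelled out (in our notation; the whiskering symbol $\ast$ and the subscript convention for triple products are ours), the associativity axiom requires for all $d_{i},d_{j},d_{k}\in D$:

```latex
% Both ways of decomposing \vartheta(d_i d_j d_k) agree:
\bigl(c_{ij}\ast\operatorname{id}_{\vartheta(d_{k})}\bigr)\circ c_{(ij)\,k}
  \;=\;
\bigl(\operatorname{id}_{\vartheta(d_{i})}\ast c_{jk}\bigr)\circ c_{i\,(jk)}
  \;\colon\;
\vartheta(d_{i}d_{j}d_{k})
  \xrightarrow{\ \cong\ }
  \vartheta(d_{i})\,\vartheta(d_{j})\,\vartheta(d_{k})\ ,
% where c_{(ij)k} denotes the coherence isomorphism for the pair (d_i d_j, d_k).
```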
Our main example will be constructed from Dynkin diagram automorphisms acting
on the representation categories of quantum groups, see Section 2.3.
#### 2.2.1 Excision for manifolds with $D$-bundles
Consider an object $(\Sigmait,\varphi)$ in $D\text{-}{\mathsf{Man}}_{2}$,
where $\Sigmait$ is an oriented 2-manifold and
$\varphi\colon\Sigmait\longrightarrow BD$ a continuous map. Let
$\Sigmait=\Sigmait_{-}\cup_{\Sigmait_{0}}\Sigmait_{+}$ be a collar-gluing and
$\theta\colon\Sigmait_{0}\cong N\times(-1,1)$ a diffeomorphism of oriented
manifolds. Notice that when using excision to compute factorisation homology
on $(\Sigmait,\varphi)$, the restriction $\varphi|_{N\times(-1,1)}$ is not
required to be constant along the interval $(-1,1)$, though it will be
homotopic to the constant map. For the cases of interest to us, making this
homotopy compatible with the collar-gluing will introduce a $D$-twist in the
action featuring in excision. We illustrate this last point with an example
which will be relevant later on:
###### Example 2.11.
Assume that the map $\varphi$ is such that its restriction
$\varphi|_{\Sigmait_{-}\setminus\Sigmait_{0}}$ as well as
$\varphi|_{\Sigmait_{+}\setminus\Sigmait_{0}}$ agree with the constant map to
the base point $\ast$ of $BD$. Furthermore, let us fix a diffeomorphism
$\theta\colon\Sigmait_{0}\xrightarrow{\cong}N\times(-1,1)$ of oriented
manifolds. Here, $N$ is the codimension 1 submanifold determined by the given
collar-gluing, see Figure 2. We choose $\varphi$ such that its pullback to
$N\times(-1,1)$ is given by
$\displaystyle(\theta^{-1})^{*}\varphi(n,s)=\begin{cases}\ast,&\text{for }s\notin(-\tfrac{1}{2},\tfrac{1}{2})\\ \gamma_{d^{-1}}(s+\tfrac{1}{2}),&\text{for }s\in(-\tfrac{1}{2},\tfrac{1}{2})\end{cases}$
for all $n\in N$, as illustrated in Figure 4(a). Here,
$\gamma_{d^{-1}}\colon[0,1]\longrightarrow BD$ is the loop corresponding to
the inverse of a given group element $d\in D$.
Figure 4: (a) The map $\varphi$ on a collar-gluing. (b) Sketch of the homotopy $H$ for some fixed $t_{0}\in[0,1]$.
We then extend $\theta$ to an equivalence of $D$-manifolds, where the collar
$N\times(-1,1)$ is equipped with the constant map to $BD$. The equivalence is
established using a homotopy
$H\colon(\theta^{-1})^{*}\varphi|_{\Sigmait_{0}}\Rightarrow\ast$, which is
given by $\gamma_{d^{-1}}$ for every point in $N\times(-1,-\tfrac{1}{2}]$,
continues the loop $\gamma_{d^{-1}}$ to its end point on
$N\times(-\tfrac{1}{2},\tfrac{1}{2})$ and is constant on
$N\times[\tfrac{1}{2},1)$, as sketched in Figure 4(b). Hence, we get an
equivalence
$\int_{(\Sigmait_{0},\varphi|_{\Sigmait_{0}})}\mathcal{A}\cong\int_{N\times(-1,1)}\mathcal{A}\eqqcolon\mathcal{C}\
\ .$
Given a balanced tensor category $\mathcal{A}$ with a $D$-action, we now want
to deduce the module structures featuring in the excision formulae for
$\int_{(\Sigmait,\varphi)}\mathcal{A}$. Denote by $\Sigmait_{-}^{\ast}$ and
$\Sigmait_{+}^{\ast}$ two objects in $D\text{-}{\mathsf{Man}}_{2}$, whose
underlying manifolds agree with $\Sigmait_{-}$ and $\Sigmait_{+}$, but whose
maps to $BD$ are assumed to be constant. The value of factorisation homology
on these manifolds naturally defines module categories $\mathcal{M}_{-}$ and
$\mathcal{M}_{+}$ over the $E_{1}$-algebra $\mathcal{C}$. In order to obtain
an explicit description of the module structures obtained by excision, note
that the homotopy $H$ from above can be used to construct an equivalence
$\theta_{+}\colon\Sigmait_{+}\xrightarrow{\cong}\Sigmait_{+}^{\ast}$ with
homotopy $H_{+}\colon(\theta^{-1})^{*}\varphi|_{\Sigmait_{+}}\Rightarrow\ast$
in $D\text{-}{\mathsf{Man}}_{2}$. We use this equivalence to identify
$\int_{(\Sigmait_{+},\varphi|_{\Sigmait_{+}})}\mathcal{A}\cong\int_{\Sigmait_{+}^{*}}\mathcal{A}$
as categories. This equivalence can be promoted to an equivalence of module
categories, i.e. the following diagram commutes:
$\begin{tikzcd} {N\times(-1,1)\sqcup\Sigmait_{+}} \arrow[r] \arrow[d, swap, "{\operatorname{id}\sqcup(\theta_{+}{,}H_{+})}"] & {\Sigmait_{+}} \arrow[d, "{(\theta_{+}{,}H_{+})}"] \\ {N\times(-1,1)\sqcup\Sigmait_{+}^{\ast}} \arrow[r] & {\Sigmait_{+}^{\ast}} \end{tikzcd}$
We can see that the action of $\mathcal{C}$ on
$\int_{(\Sigmait_{+}^{*},\varphi|_{\Sigmait_{+}})}\mathcal{A}$ is precisely
given by the $\mathcal{C}$-module structure of $\mathcal{M}_{+}$. On the
contrary, the situation is a bit more involved for the $\mathcal{C}$-module
structure of $\int_{(\Sigmait_{-},\varphi|_{\Sigmait_{-}})}\mathcal{A}$: We
cannot simply identify $\Sigmait_{-}\cong\Sigmait_{-}^{\ast}$ via $H$ since
the homotopy is not constant near $N\times\{-1\}$. However, we can construct
an equivalence
$\theta_{-}\colon\Sigmait_{-}\xrightarrow{\cong}\Sigmait_{-}^{\ast}$ from a
homotopy $H_{-}$, which is defined by using the loop $\gamma_{d}$ similarly to
how we used $\gamma_{d^{-1}}$ above. This gives rise to an identification
$\int_{(\Sigmait_{-},\varphi|_{\Sigmait_{-}})}\mathcal{A}\cong\int_{\Sigmait_{-}^{\ast}}\mathcal{A}$
together with a weakly commuting diagram
$\begin{tikzcd} {\Sigmait_{-}\sqcup N\times(-1,1)} \arrow[r] \arrow[d, swap, "{(\theta_{-}{,}H_{-})\sqcup(\operatorname{id}{,}\gamma_{d})}"] & {\Sigmait_{-}} \arrow[d, "{(\theta_{-}{,}H_{-})}"] \\ {\Sigmait_{-}^{\ast}\sqcup N\times(-1,1)} \arrow[r] & {\Sigmait_{-}^{\ast}} \end{tikzcd}$ (2.11)
in $D\text{-}{\mathsf{Man}}_{2}$. From the horizontal maps we deduce that the
module structure relevant for excision is obtained by twisting by the
$D$-action on $\mathcal{C}$:
$\displaystyle\operatorname{act}_{-}^{d}\colon\mathcal{M}_{-}\boxtimes\mathcal{C}\xrightarrow{\operatorname{id}_{\mathcal{M}_{-}}\boxtimes\vartheta(d)}\mathcal{M}_{-}\boxtimes\mathcal{C}\xrightarrow{\operatorname{act}_{-}}\mathcal{M}_{-}\
\ .$ (2.12)
We write $\mathcal{M}_{-,d}$ for this module category. Combining everything we
arrive at
$\displaystyle\int_{(\Sigmait,\varphi)}\mathcal{A}\cong\mathcal{M}_{-,d}\underset{\mathcal{C}}{\boxtimes}\mathcal{M}_{+}\
\ .$ (2.13)
###### Remark 2.12.
Notice that alternatively we could have chosen a trivialisation of
$\Sigmait_{0}$ which extends to $\Sigmait_{-}$, instead of $\Sigmait_{+}$,
which would have resulted in a twisting of $\mathcal{M}_{+}$ by $d^{-1}$. In
this sense the module structures featuring in excision for $D$-structured
oriented 2-manifolds are not unique, though the value of the relative tensor
product is.
### 2.3 Actions of diagram automorphisms and their quantisation
For applications to quantum physics, we will be mostly interested in
factorisation homology for the ribbon category ${\mathsf{Rep}}_{q}(G)$. In
this section we will show that ${\mathsf{Rep}}_{q}(G)$ admits an
$\operatorname{Out}(G)$-action, which can be seen as a quantisation of the
$\operatorname{Out}(G)$-symmetry in gauge theory.
The outer automorphism group $\operatorname{Out}(G)$ of $G$ is finite and can
be identified with the group of Dynkin diagram automorphisms. Concretely, one
finds for the non-trivial outer automorphism groups
Type | $A_{n}$ , $n\geq 2$ | $D_{n}$ , $n>4$ | $D_{4}$ | $E_{6}$
---|---|---|---|---
$\operatorname{Out}(G)$ | $\mathbb{Z}_{2}$ | $\mathbb{Z}_{2}$ | $S_{3}$ | $\mathbb{Z}_{2}$
The identification of outer automorphisms and Dynkin diagram automorphisms
provides an explicit splitting
$\operatorname{Out}(G)\longrightarrow\operatorname{Aut}(G)$ and allows us to
write down the short exact sequence
$\displaystyle 1\longrightarrow G\longrightarrow
G\rtimes\operatorname{Out}(G)\longrightarrow\operatorname{Out}(G)\longrightarrow
1$ (2.14)
containing the semi-direct product.
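As a concrete sanity check (our illustration, not part of the text), the identification of $\operatorname{Out}(G)$ with Dynkin diagram automorphisms rests on the fact that such a permutation $\kappa$ of the nodes preserves the Cartan matrix, $a_{\kappa(i)\kappa(j)}=a_{ij}$. The snippet below verifies this for the triality 3-cycle on $D_{4}$ and the flip on $A_{3}$:

```python
import numpy as np

# Cartan matrix of D4: node 1 is the trivalent centre, nodes 0, 2, 3 the outer legs
A = 2 * np.eye(4, dtype=int)
for i, j in [(0, 1), (1, 2), (1, 3)]:
    A[i, j] = A[j, i] = -1

# triality: the 3-cycle 0 -> 2 -> 3 -> 0 on the outer nodes, fixing the centre
kappa = [2, 1, 3, 0]
assert np.array_equal(A[np.ix_(kappa, kappa)], A)  # a_{kappa(i),kappa(j)} = a_{ij}

# A3 flip: reverse the chain 0 - 1 - 2
A3 = np.array([[2, -1, 0], [-1, 2, -1], [0, -1, 2]])
flip = [2, 1, 0]
assert np.array_equal(A3[np.ix_(flip, flip)], A3)
```

Since the Cartan matrix determines the Serre relations, any such $\kappa$ extends to an automorphism of $\mathfrak{g}$, which is the splitting used in (2.14).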
The category ${\mathsf{Rep}}(G)$ of $G$-representations is a symmetric
monoidal ribbon category and hence in particular a framed
$\mathsf{E}_{2}$-algebra. The finite group $\operatorname{Out}(G)$ acts
naturally on the category ${\mathsf{Rep}}(G)$ by pulling back representations
along the inverse and this symmetry extends to the representation category of
the corresponding quantum group, see Proposition 2.13 below.
We will use the following notation and conventions. We consider a finite-
dimensional simple complex Lie algebra $\mathfrak{g}$ with Cartan matrix
$(a_{ij})_{1\leq i,j\leq n}$. We fix a Cartan subalgebra
$\mathfrak{h}\subset\mathfrak{g}$ and select a set of simple roots
$\Piit=\\{\alpha_{1},\dots,\alpha_{n}\\}$. We write $\Lambdait$ for the weight
lattice and we choose a symmetric bilinear form $(\cdot,\cdot)$ on $\Lambdait$
such that $(\alpha_{i},\alpha_{j})=a_{ij}$. For the rest of this paragraph we
will restrict our attention to Lie algebras with Dynkin diagrams of type
$A_{n}$ ($n\geq 2$), $D_{n}$ ($n\geq 4$), or $E_{6}$, since these are the only
cases for which we have non-trivial Dynkin diagram automorphisms.
The formal quantum group $U_{\hbar}(\mathfrak{g})$ is a Hopf algebra
deformation of the universal enveloping algebra $U(\mathfrak{g})$ over
$\mathbb{C}[[\hbar]]$ with generators
$\\{H_{\alpha_{i}},X^{\pm}_{\alpha_{i}}\\}_{\alpha_{i}\in\Piit}$, subject to
certain relations, see for example [CP95, Section 6.5] for details. In order
to define positive and negative root vectors, we fix a reduced decomposition
$\omega_{0}=s_{i_{1}}s_{i_{2}}\dots s_{i_{N}}$ of the longest element
$\omega_{0}$ in the Weyl group of $\mathfrak{g}$. The positive and negative
root vectors are then defined as
$X_{\beta_{r}}^{\pm}=T_{i_{1}}T_{i_{2}}\dots
T_{i_{r-1}}X^{\pm}_{\alpha_{i_{r}}}$
in $U_{\hbar}(\mathfrak{g})$ by acting on the generators with elements
$T_{i}\in\mathfrak{B}_{\mathfrak{g}}$ of the braid group associated to
$\mathfrak{g}$ [CP95, Section 8.1]. The formal quantum group
$U_{\hbar}(\mathfrak{g})$ is quasi-triangular with universal R-matrix given by
the multiplicative formula [CP95, Theorem 8.3.9]
$\mathcal{R}=\Omegait\widehat{\mathcal{R}},\quad\Omegait=\exp\Big(\hbar\sum_{i,j=1}^{n}(a^{-1})_{ij}\,H_{\alpha_{i}}\otimes H_{\alpha_{j}}\Big),\quad\widehat{\mathcal{R}}=\prod_{\beta_{r}}\widehat{\mathcal{R}}_{\beta_{r}}\
\ ,$
where the order in the second product is such that the $\beta_{r}$-term is to
the left of the $\beta_{s}$-term if $r>s$, and
$\widehat{\mathcal{R}}_{\beta_{r}}=\exp_{q}((1-q^{-2})X^{+}_{\beta_{r}}\otimes
X^{-}_{\beta_{r}})$ for $q=\exp(\hbar)$. It is shown in [CP95, Corollary
8.3.12] that $\mathcal{R}$ is independent of the chosen reduced decomposition
of $\omega_{0}$. We denote by ${\mathsf{Rep}}_{\hbar}(G)$ the category of
topologically free left modules over $U_{\hbar}(\mathfrak{g})$ of finite rank.
This tensor category comes with a braiding defined via the universal R-matrix
$\mathcal{R}$ of $U_{\hbar}(\mathfrak{g})$.
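As an illustration (a standard fact, not a computation from the text), the braiding induced by the universal R-matrix can be checked numerically in the smallest example: for the 2-dimensional representation of $U_{q}(\mathfrak{sl}_{2})$, the braiding operator on $V\otimes V$ has the well-known matrix form below, and it satisfies both the braid relation and the Hecke relation with eigenvalues $q$ and $-q^{-1}$. The numerical value of $q$ is an arbitrary generic choice.

```python
import numpy as np

q = 1.3  # generic q, not a root of unity
# Braiding (tau composed with the R-matrix action) on V (x) V for the fundamental
# 2-dim representation of U_q(sl2); basis order: e1e1, e1e2, e2e1, e2e2.
R = np.array([[q, 0, 0,       0],
              [0, 0, 1,       0],
              [0, 1, q - 1/q, 0],
              [0, 0, 0,       q]])
I2 = np.eye(2)
R12 = np.kron(R, I2)   # braiding on the first two factors of V (x) V (x) V
R23 = np.kron(I2, R)   # braiding on the last two factors

# braid relation
assert np.allclose(R12 @ R23 @ R12, R23 @ R12 @ R23)
# Hecke relation: (R - q)(R + q^{-1}) = 0
assert np.allclose((R - q*np.eye(4)) @ (R + np.eye(4)/q), 0)
```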
###### Proposition 2.13.
The braided tensor category ${\mathsf{Rep}}_{\hbar}(G)$ admits a left action
of $\operatorname{Out}(G)$.
###### Proof.
The outer automorphisms $\operatorname{Out}(G)$ can be identified with the
automorphism group $\operatorname{Aut}(\Piit)$ of the Dynkin diagram of
$\mathfrak{g}$. An element $\kappa\in\operatorname{Aut}(\Piit)$ acts on the
generators of $U_{\hbar}(\mathfrak{g})$ via
$H_{\alpha_{i}}\longmapsto H_{\alpha_{\kappa(i)}},\quad
X^{\pm}_{\alpha_{i}}\longmapsto X^{\pm}_{\alpha_{\kappa(i)}}.$
We thus get an action $\rho$ of $\operatorname{Out}(G)$ on the tensor category
${\mathsf{Rep}}_{\hbar}(G)$ defined by pulling back a representation along the
inverse automorphism, i.e. $\rho(\kappa)(X)=(\kappa^{-1})^{*}X$, for any
$X\in{\mathsf{Rep}}_{\hbar}(G)$. It is left to show that the action preserves
the braiding. The action of $\kappa$ on a positive, respectively negative,
root vector is given by
$\kappa.X^{\pm}_{\beta_{r}}=T_{\kappa(i_{1})}\dots
T_{\kappa(i_{r-1})}X^{\pm}_{\alpha_{\kappa(i_{r})}}\ \ .$
We now make use of the following explicit expressions for $\omega_{0}$,
details can be found for example in [Hum90, Section 3.19]. First, divide the
vertices of the Dynkin diagram into two nonempty disjoint subsets $S$ and
$S^{\prime}$, so that in each subset the corresponding simple reflections
commute. Let $a$ and $b$ be the products of the simple reflections in $S$ and
$S^{\prime}$, respectively. For $A_{n}$ ($n$ odd), $D_{n}$ ($n\geq 4$) and
$E_{6}$ we can set $\omega_{0}=(ab)^{h}$, where $h$ is the respective Coxeter
number. Whereas for $A_{n}$ ($n$ even), $\omega_{0}$ can be represented either
as $\omega_{0}=(ab)^{\frac{n}{2}}a$ or as $\omega_{0}=b(ab)^{\frac{n}{2}}$. We
thus see that $\kappa$ sends a given reduced decomposition of the longest Weyl
group element $\omega_{0}$ to another reduced decomposition of $\omega_{0}$.
But since the R-matrix is independent of the chosen reduced decomposition the
result follows. ∎
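The bipartite construction of $\omega_{0}$ used in the proof can be checked by hand in type $A_{3}$, where the Weyl group is $S_{4}$, the Coxeter number is $h=4$, and one may take $S=\\{s_{1},s_{3}\\}$, $S^{\prime}=\\{s_{2}\\}$. The short script below (our illustration) confirms that $(ab)^{2}$ is the order-reversing longest element of length $6$:

```python
def compose(p, q):
    """(p o q)(i) = p(q(i)); permutations encoded as tuples of images (0-indexed)."""
    return tuple(p[q[i]] for i in range(len(p)))

# simple transpositions s1, s2, s3 of S4, the Weyl group of A3
s1, s2, s3 = (1, 0, 2, 3), (0, 2, 1, 3), (0, 1, 3, 2)
a = compose(s1, s3)        # product over S = {s1, s3} (these commute)
b = s2                     # product over S' = {s2}
ab = compose(a, b)
w = compose(ab, ab)        # (ab)^{h/2} with h = 4

assert w == (3, 2, 1, 0)   # the longest element of S4: the order reversal
inv = sum(1 for i in range(4) for j in range(i + 1, 4) if w[i] > w[j])
assert inv == 6            # l(w0) = number of positive roots of A3
```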
###### Proposition 2.14.
The action of $\operatorname{Out}(G)$ on ${\mathsf{Rep}}_{\hbar}(G)$ is
compatible with the balancing automorphism of ${\mathsf{Rep}}_{\hbar}(G)$.
###### Proof.
The balancing in ${\mathsf{Rep}}_{\hbar}(G)$ is given by acting with the
ribbon element $c_{\hbar}=\exp(\hbar H_{\rho})u_{\hbar}$ of
$U_{\hbar}(\mathfrak{g})$, see [CP95, Section 8.3.F]. Here,
$H_{\rho}=\sum_{i=1}^{n}\mu_{i}H_{\alpha_{i}}$ with coefficients
$\mu_{i}=\sum_{j=1}^{n}a^{-1}_{ij}$ and
$u_{\hbar}=m_{\hbar}(S_{\hbar}\otimes\operatorname{id})\mathcal{R}_{2,1}$ with
$m_{\hbar}$ and $S_{\hbar}$ the multiplication and antipode in
$U_{\hbar}(\mathfrak{g})$ respectively. It follows from Proposition 2.13 that
a Dynkin diagram automorphism $\kappa\in\operatorname{Aut}(\Piit)$ preserves
the element $u_{\hbar}$. So it is left to show that $\kappa$ preserves the
element $H_{\rho}$. Since the Cartan matrix is invariant under the Dynkin
diagram automorphism, we have
$\mu_{i}=\sum_{j=1}^{n}a^{-1}_{i,j}=\sum_{j=1}^{n}a^{-1}_{\kappa(i),\kappa(j)}=\sum_{j=1}^{n}a^{-1}_{\kappa(i),j}=\mu_{\kappa(i)}$
and thus $\kappa.H_{\rho}=H_{\rho}$. ∎
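The invariance $\mu_{\kappa(i)}=\mu_{i}$ can also be observed numerically; the following check (illustrative, not from the text) uses the Cartan matrix of $A_{4}$ and its diagram flip $\kappa(i)=n-1-i$ (0-indexed):

```python
import numpy as np

n = 4  # type A4: tridiagonal Cartan matrix with 2 on the diagonal, -1 off it
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
mu = np.linalg.inv(A).sum(axis=1)   # mu_i = sum_j (a^{-1})_{ij}
assert np.allclose(mu, mu[::-1])    # mu_{kappa(i)} = mu_i, hence kappa.H_rho = H_rho
```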
Let $q\in\mathbb{C}^{\times}$ be a non-zero complex number which is not a root
of unity and let $U_{q}(\mathfrak{g})$ be the corresponding specialisation of
the rational form of $U_{\hbar}(\mathfrak{g})$. A precise definition of
$U_{q}(\mathfrak{g})$ can be found e.g. in [CP95, Section 9]. We denote by
${\mathsf{Rep}}_{q}(G)$ the category of locally finite
$U_{q}(\mathfrak{g})$-modules of type 1. Strictly speaking,
$U_{q}(\mathfrak{g})$ is not quasi-triangular. However, its representation
category admits a braiding [CP95, Section 10.1.D]. On a representation
$V\otimes V^{\prime}\in{\mathsf{Rep}}_{q}(G)$, the braiding is defined by the
so-called quasi R-matrix $\Theta_{V,V^{\prime}}=\tau\circ
E_{V,V^{\prime}}\widehat{\mathcal{R}}_{V,V^{\prime}}$, where $\tau$ is the map
swapping the tensor factors and $E_{V,V^{\prime}}$ is an invertible operator
on $V\otimes V^{\prime}$ acting on the subspace $V_{\lambda}\otimes
V^{\prime}_{\mu}$ by the scalar $q^{(\lambda,\mu)}$, for
$\lambda,\mu\in\Lambdait$. Moreover, the standard ribbon element for
$U_{q}(\mathfrak{g})$ acts on $V_{\lambda}$ as the constant
$q^{-(\lambda,\lambda)-2(\lambda,\rho)}$ with $\rho$ the half-sum of positive
roots, giving rise to the balancing in ${\mathsf{Rep}}_{q}(G)$. Hence, we get
the $q$-analog of Proposition 2.13:
###### Proposition 2.15.
The braided balanced tensor category ${\mathsf{Rep}}_{q}(G)$ admits a left
action of $\operatorname{Out}(G)$.
### 2.4 Reconstruction theorems for module categories
The following is a brief recollection of [BZBJ18a, Section 4] which will allow
us to compute the value of factorisation homology explicitly in terms of
module categories over certain algebras in the next section. We start by
recalling that the inclusion $\emptyset\hookrightarrow\Sigmait$ of the empty
manifold into a surface $\Sigmait$ induces a canonical functor
$1_{{\mathsf{Pr}}_{c}}\cong\mathsf{Vect}_{k}\longrightarrow\int_{\Sigmait}\mathcal{A}$
on the level of factorisation homology, see Section 2.1. We thus have a
distinguished object
$\operatorname{Dist}_{\Sigmait}\in\int_{\Sigmait}\mathcal{A}$, given as the
image of $k$ under this functor. If we assume that $\Sigmait$ is not closed
and we choose a marked interval in its boundary, there is a natural
$\mathcal{A}$-module structure on $\int_{\Sigmait}\mathcal{A}$, induced by
embedding a disk along the marked interval. In order to study the
factorisation homology of the surface $\Sigmait$, we wish to describe the
entire category $\int_{\Sigmait}\mathcal{A}$ internally in terms of
$\mathcal{A}$. To that end, following [BZBJ18a], we will apply techniques from
Barr-Beck monadic reconstruction to monads arising from adjunctions of module
functors of the form
$\operatorname{act}_{\operatorname{Dist}_{\Sigmait}}\colon\mathcal{A}\longrightarrow\int_{\Sigmait}\mathcal{A}$.
Applying monadic reconstruction techniques to module categories was first done
for fusion categories in the work of Ostrik [Ost03], and later in the setting
of finite abelian categories in [DSPS20]. Here, we will recall its further
generalisation to categories in ${\mathsf{Pr}}_{c}$, as developed in [BZBJ18a,
Section 4]. For the remainder of this section, let $\mathcal{A}$ be an abelian
rigid tensor category in ${\mathsf{Pr}}_{c}$ and $\mathcal{M}$ an abelian
right $\mathcal{A}$-module category with action functor
$\operatorname{act}\colon\mathcal{M}\boxtimes\mathcal{A}\longrightarrow\mathcal{M}$.
For each $m\in\mathcal{M}$, the induced functor
$\operatorname{act}_{m}\colon\mathcal{A}\longrightarrow\mathcal{M},\quad\operatorname{act}_{m}(a)\coloneqq
m\otimes a$
admits a right adjoint which we denote $\operatorname{act}^{R}_{m}$. For any
pair of objects $m,n\in\mathcal{M}$, define the internal morphisms from $m$ to
$n$ as the object
$\underline{\operatorname{Hom}}_{\mathcal{A}}(m,n)=\operatorname{act}_{m}^{R}(n)\in\mathcal{A}$
representing the functor
$a\longmapsto\operatorname{Hom}_{\mathcal{M}}(m\otimes a,n)$. Then, there is a
natural algebra internal to $\mathcal{A}$ given by
$\underline{\operatorname{End}}_{\mathcal{A}}(m)\coloneqq\underline{\operatorname{Hom}}_{\mathcal{A}}(m,m)$,
which is called the internal endomorphism algebra of $m$. For each
$m\in\mathcal{M}$, we get a functor
$\widetilde{\operatorname{act}^{R}_{m}}\colon\mathcal{M}\longrightarrow(\operatorname{act}^{R}_{m}\circ\operatorname{act}_{m})\operatorname{-mod}_{\mathcal{A}}$
sending an object $n\in\mathcal{M}$ to
$\underline{\operatorname{Hom}}_{\mathcal{A}}(m,n)$ with canonical action
$\operatorname{act}^{R}_{m}\circ\operatorname{act}_{m}\circ\operatorname{act}^{R}_{m}(n)\longrightarrow\operatorname{act}^{R}_{m}(n)$
given by the counit of the adjunction. The monadicity theorem (see Theorem
2.17 below) then tells us when this functor is an equivalence. In order to
state the theorem, we adopt the following terminology.
###### Definition 2.16.
An object $m\in\mathcal{M}$ is called
* •
an $\mathcal{A}$-generator if $\operatorname{act}^{R}_{m}$ is faithful,
* •
$\mathcal{A}$-projective if $\operatorname{act}^{R}_{m}$ is colimit-
preserving,
* •
an $\mathcal{A}$-progenerator if it is both $\mathcal{A}$-projective and an
$\mathcal{A}$-generator.
###### Theorem 2.17 ([BZBJ18a, Theorem 4.6]).
Let $m\in\mathcal{M}$ be an $\mathcal{A}$-progenerator. Then the functor
$\widetilde{\operatorname{act}^{R}_{m}}\colon\mathcal{M}\xrightarrow{\cong}\underline{\operatorname{End}}_{\mathcal{A}}(m)\operatorname{-mod}_{\mathcal{A}},$
is an equivalence of $\mathcal{A}$-module categories, where $\mathcal{A}$ acts
on the right by the tensor product.
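For orientation, here is a standard instance of Theorem 2.17 (our worked example, with the obvious left/right conventions): take $\mathcal{A}=\mathsf{Vect}_{k}$ and $\mathcal{M}=A\operatorname{-mod}$ for a $k$-algebra $A$, with action $N\boxtimes V\longmapsto N\otimes_{k}V$, and $m=A$ the free module of rank one. Then

```latex
\operatorname{act}_{A}(V)=A\otimes_{k}V\ ,\qquad
\operatorname{act}^{R}_{A}=U\ \ (\text{the forgetful functor})\ ,
\quad\text{since}\quad
\operatorname{Hom}_{A}(A\otimes_{k}V,\,N)\cong\operatorname{Hom}_{k}(V,\,UN)\ .
```

Hence $\underline{\operatorname{Hom}}_{\mathsf{Vect}_{k}}(A,N)=UN$ and $\underline{\operatorname{End}}_{\mathsf{Vect}_{k}}(A)=A$ as algebras. Since $U$ is faithful and colimit-preserving, $A$ is a $\mathsf{Vect}_{k}$-progenerator in the sense of Definition 2.16, and the theorem recovers $\mathcal{M}\cong A\operatorname{-mod}$, as expected.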
When computing factorisation homology of a surface, we will make extensive use
of $\otimes$-excision, as explained in Section 2.1.1. On a categorical level
this means that we wish to apply monadic reconstruction to the relative
Deligne-Kelly tensor product of two module categories. For this, notice that
if $\mathcal{M}$ is a left $\mathcal{A}$-module category and $a$ an algebra in
$\mathcal{A}$, one can use the $\mathcal{A}$-action on $\mathcal{M}$ to define
the category of $a$-modules in $\mathcal{M}$, which we denote by
$a\text{-mod}_{\mathcal{M}}$.
###### Theorem 2.18 ([BZBJ18a, Theorem 4.12]).
Let $\mathcal{M}_{-}$ and $\mathcal{M}_{+}$ be right-, respectively left
$\mathcal{A}$-module categories. Assume that $m\in\mathcal{M}_{-}$ and
$n\in\mathcal{M}_{+}$ are both $\mathcal{A}$-progenerators. Then there are
equivalences
$\mathcal{M}_{-}\boxtimes_{\mathcal{A}}\mathcal{M}_{+}\cong\underline{\operatorname{End}}_{\mathcal{A}}(m)\operatorname{-mod}_{\mathcal{M}_{+}}\cong(\underline{\operatorname{End}}_{\mathcal{A}}(m),\underline{\operatorname{End}}_{\mathcal{A}}(n))\operatorname{-bimod}_{\mathcal{A}}$
of categories.
The following special case will be of particular interest for us: We assume
that $\mathcal{M}_{+}$ is itself a tensor category and that the
$\mathcal{A}$-module structure on $\mathcal{M}_{+}$ is induced by a tensor
functor $F\colon\mathcal{A}\longrightarrow\mathcal{M}_{+}$, which is such that
every object in $\mathcal{M}_{+}$ appears as a subobject, or equivalently a
quotient, of an object in the image of $F$. Tensor functors with this property
are called dominant. In this setting, we have the following base-change
formula:
###### Corollary 2.19 ([BZBJ18a, Corollary 4.13]).
Let $F\colon\mathcal{A}\longrightarrow\mathcal{M}_{+}$ be a dominant tensor
functor and $m\in\mathcal{M}_{-}$ an $\mathcal{A}$-progenerator. Then there is
an equivalence of $\mathcal{M}_{+}$-module categories
$\mathcal{M}_{-}\boxtimes_{\mathcal{A}}\mathcal{M}_{+}\cong
F(\underline{\operatorname{End}}_{\mathcal{A}}(m))\operatorname{-mod}_{\mathcal{M}_{+}}.$
## 3 Factorisation homology for surfaces with $D$-bundles
In this section we use excision and reconstruction theorems to compute
factorisation homology of an abelian rigid balanced braided tensor category
$\mathcal{A}$ equipped with $D$-action, for $D$ a finite group, over a surface
$\Sigmait$ with principal $D$-bundles and at least one boundary component.
Furthermore, we study the algebraic structure corresponding to the evaluation
on annuli, boundary conditions and point defects.
### 3.1 Reconstruction for rigid braided tensor categories with group action
For $d\in D$, consider the right $\mathcal{A}^{\boxtimes 2}$-module category
$\mathcal{M}_{d}$, whose underlying category is $\mathcal{A}$ and the action
is
$\operatorname{reg}^{d}\colon\mathcal{M}_{d}\boxtimes\mathcal{A}\boxtimes\mathcal{A}\xrightarrow{\operatorname{id}\boxtimes\operatorname{id}\boxtimes\vartheta(d)}\mathcal{M}_{d}\boxtimes\mathcal{A}\boxtimes\mathcal{A}\xrightarrow{T^{3}}\mathcal{M}_{d}\
\ ,$ (3.1)
where $T^{3}$ is the iterated tensor product functor $x\boxtimes y\boxtimes
z\longmapsto x\otimes y\otimes z$.
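A decategorified toy model of such a twisted action (our illustration, not from the text): replace $\mathcal{A}$ by a ring carrying a $\mathbb{Z}_{2}$-action by automorphisms, and $\mathcal{M}_{d}$ by the ring itself with right multiplication twisted by $\vartheta(d)$. The module axiom then holds precisely because $\vartheta(d)$ is multiplicative:

```python
import numpy as np

rng = np.random.default_rng(0)
g = np.array([[0, 1], [1, 0]])      # g @ g = I, so conjugation by g is an involution
theta = lambda a: g @ a @ g         # the automorphism theta(d) of the matrix ring

# twisted "regular action": m . a := m * theta(d)(a)
act = lambda m, a: m @ theta(a)

m, a, b = (rng.integers(-3, 4, size=(2, 2)) for _ in range(3))
# theta(d) is multiplicative, hence the twisted action is associative over the ring:
assert np.array_equal(act(act(m, a), b), act(m, a @ b))
```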
###### Lemma 3.1.
$1_{\mathcal{A}}$ is a progenerator for the twisted regular action
$\operatorname{reg}^{d}$.
###### Proof.
The unit $1_{\mathcal{A}}$ is a progenerator for the right regular action (see
[BZBJ18a, Proposition 4.15]). Since $\vartheta(d)$ is an automorphism of
$\mathcal{A}$, it is also a progenerator for $\operatorname{reg}^{d}$. ∎
The internal endomorphism algebra
$\underline{\operatorname{End}}_{\mathcal{A}^{\boxtimes 2}}(1_{\mathcal{A}})$
can be explicitly described by the coend
$\displaystyle\int^{V\in\text{comp}(\mathcal{A})}V^{\vee}\boxtimes\vartheta(d^{-1}).V\
\ ,$ (3.2)
where $V^{\vee}$ is the dual of $V$ and the colimit is taken over compact
objects in $\mathcal{A}$. To derive the above expression it is enough to note
that the action is given by pre-composition of the regular action with the
automorphism $\operatorname{id}\boxtimes\vartheta(d)$ with adjoint
$\operatorname{id}\boxtimes\vartheta(d^{-1})$ and use Remark 4.16 of
[BZBJ18a]. Applying the tensor product functor
$T\colon\mathcal{A}\boxtimes\mathcal{A}\longrightarrow\mathcal{A}$ to
$\underline{\operatorname{End}}_{\mathcal{A}^{\boxtimes 2}}(1_{\mathcal{A}})$
we get the object
$\mathcal{F}_{\mathcal{A}}^{d}\coloneqq\int^{V\in\text{comp}(\mathcal{A})}V^{\vee}\otimes\vartheta(d^{-1}).V\
\ .$ (3.3)
Notice that for the identity element $e\in D$, this is Lyubashenko’s coend
$\int V^{\vee}\otimes V$ [Lyu95], which in particular is a braided Hopf
algebra in $\mathcal{A}$.
###### Example 3.2.
Let $H$ be a ribbon Hopf algebra with $D$-action, meaning that an element
$d\in D$ acts on $H$ by Hopf algebra automorphisms, the universal R-matrix is
$D$-invariant, i.e. $\mathcal{R}\in(H\otimes H)^{D}$ and the ribbon element is
preserved by the $H$-action. Let ${\mathsf{Rep}}(H)$ be the braided tensor
category of locally finite left modules over $H$ on which the elements $d\in
D$ act through the pullback of representations along $d^{-1}$. It is a well-
known result that at the identity element $e\in D$, the algebra
$\mathcal{F}_{{\mathsf{Rep}}(H)}^{e}$ is identified with the braided dual of
$H$, also known as the reflection equation algebra (REA), equipped with the
coadjoint action. Its underlying vector space is given by the matrix
coefficients $H^{\circ}$ of finite dimensional $H$-representations. As an
algebra, the REA can be obtained from the so-called Faddeev-Reshetikhin-
Takhtajan (FRT) algebra via twisting by a cocycle given in terms of the
universal R-matrix [DM03]. In more detail, the FRT algebra is identified with
the coend
$\mathcal{F}_{\text{FRT}}=\int^{V\in{\mathsf{Rep}}^{\operatorname{fd}}(H)}V^{\vee}\boxtimes
V\in{\mathsf{Rep}}(H)^{\operatorname{rev}}\boxtimes{\mathsf{Rep}}(H)\ \ ,$
where ${\mathsf{Rep}}(H)^{\operatorname{rev}}$ is the category with the
opposite monoidal product, with multiplication $m_{\text{FRT}}$ induced by the
canonical maps
$(V^{\vee}\boxtimes V)\otimes(W^{\vee}\boxtimes
W)=(V^{\vee}\otimes^{\operatorname{rev}}W^{\vee})\boxtimes(V\otimes
W)\cong(W\otimes V)^{\vee}\boxtimes(W\otimes V)\xrightarrow{\iota_{V\otimes
W}}\mathcal{F}_{\text{FRT}}\ \ .$
The REA is then given by the image of the FRT algebra under the composite
functor
${\mathsf{Rep}}(H)^{\operatorname{rev}}\boxtimes{\mathsf{Rep}}(H)\xrightarrow{(\operatorname{id},\sigma)\boxtimes\operatorname{id}}{\mathsf{Rep}}(H)\boxtimes{\mathsf{Rep}}(H)\xrightarrow{T}{\mathsf{Rep}}(H)\
\ ,$ (3.4)
where $(\operatorname{id},\sigma)$ denotes the identity functor, equipped with
a non-trivial tensor structure given by the braiding $\sigma$ in
${\mathsf{Rep}}(H)$.
In the decorated case, we precompose the functor in (3.4) with the
automorphism $1\boxtimes\vartheta(d)$. Then, for any $d\in D$, the underlying
vector space of $\mathcal{F}_{{\mathsf{Rep}}(H)}^{d}$ is identified again with
$H^{\circ}$ via
$V^{\vee}\otimes{d}^{*}V\xrightarrow{\iota_{V}}H^{\circ},\quad\iota_{V}(\phi\otimes
v)=\phi(-\triangleright({d}^{-1})^{*}v),$
for any $V\in{\mathsf{Rep}}^{\operatorname{fd}}(H)$, but $H^{\circ}$ is now
equipped with the twisted coadjoint action
$\operatorname{ad}^{*}_{d}(h\otimes\phi)(v)=\phi(S(h_{(1)})(-){d}.h_{(2)}\triangleright
v)$. The multiplication on the coend algebra is defined in terms of its
universal property. Concretely, consider the following dinatural map
$f_{V,W}\colon V^{\vee}\otimes{d}^{*}V\otimes
W^{\vee}\otimes{d}^{*}W\xrightarrow{\sigma_{{d}^{*}V,W^{\vee}\otimes{d}^{*}W}}V^{\vee}\otimes
W^{\vee}\otimes{d}^{*}W\otimes{d}^{*}V\xrightarrow{\cong}(W\otimes
V)^{\vee}\otimes{d}^{*}(W\otimes V)\xrightarrow{\iota_{W\otimes
V}}\mathcal{F}_{{\mathsf{Rep}}(H)}^{d}\ \ .$
Then there exists a unique multiplication map
$\mathcal{F}_{{\mathsf{Rep}}(H)}^{d}\otimes\mathcal{F}_{{\mathsf{Rep}}(H)}^{d}\xrightarrow{m}\mathcal{F}_{{\mathsf{Rep}}(H)}^{d}$
such that $f_{V,W}=m\circ(\iota_{V}\otimes\iota_{W})$. Explicitly, the product
of $\phi,\psi\in\mathcal{F}_{{\mathsf{Rep}}(H)}^{d}$ is given by
$m_{\text{REA}}^{d}(\phi\otimes\psi)=m_{\text{FRT}}\big(\phi(\mathcal{R}_{1}(-){d}.\mathcal{R}^{\prime}_{1})\otimes\psi(S(\mathcal{R}^{\prime}_{2})\mathcal{R}_{2}(-))\big)\
\ .$
In the language of [DM03], we thus find that
$\mathcal{F}^{d}_{{\mathsf{Rep}}(H)}$ is obtained by twisting the module
algebra $(H^{\circ},\operatorname{ad}^{*}_{d})$ by the cocycle
$\mathcal{R}_{1}\otimes{d}.\mathcal{R}^{\prime}_{1}\otimes\mathcal{R}_{2}\mathcal{R}^{\prime}_{2}\otimes
1$, where we write $\mathcal{R}=\mathcal{R}_{1}\otimes\mathcal{R}_{2}$ and we
use primes to distinguish different copies of the R-matrix.
###### Example 3.3.
The category of finite-dimensional $U_{q}(\mathfrak{g})$-modules of type 1 is
a semisimple braided tensor category via the quasi R-matrix $\Theta$. The
quantised coordinate algebra $\mathcal{O}_{q}(G)$ is then defined as the
algebra of matrix coefficients of objects in this category. Given an
automorphism $\kappa\in\operatorname{Out}(G)$, the twisted coend algebra (3.3)
takes the form
$T(\underline{\operatorname{End}}_{{\mathsf{Rep}}_{q}(G)^{\boxtimes
2}}(\mathbb{C}))\cong\bigoplus_{V}V^{\vee}\otimes\kappa^{*}V$, where the sum
runs over the simple objects. By a quantum version of the Peter-Weyl theorem
(see for example [Gan18, Proposition 4.1]) we get an identification
$\bigoplus_{V}V^{\vee}\otimes\kappa^{*}V\cong\mathcal{O}_{q}(G)$ as vector
spaces, and by the previous example, we thus find that the coend algebra is
isomorphic to $\mathcal{O}_{q}(G)$ with $\kappa$-twisted multiplication.
### 3.2 Computation on punctured surfaces
Throughout this section we consider connected oriented surfaces with at least
one boundary component. We can pick a ciliated fat graph model to describe the
surface $\Sigmait$ we want to work with, which in [BZBJ18a] is conveniently
defined via a gluing-pattern, that is a bijection
$P\colon\\{1,1^{\prime},\dots,n,n^{\prime}\\}\longrightarrow\\{1,\dots,2n\\}$,
such that $P(i)<P(i^{\prime})$. Here, $n$ is the number of edges of the fat-
graph model of $\Sigmait$. Given a gluing pattern $P$, we can reconstruct
$\Sigmait$ as depicted in Figure 5(b), namely by gluing $n$ disks
$\mathbb{D}_{\bullet\bullet}$ with two marked intervals each to a disk
${{}_{\bullet^{2n}}}\mathbb{D}_{\bullet}$ with $2n+1$ marked intervals,
thereby gluing the intervals $i$ and ${i}^{\prime}$ to $P(i)$ and
$P(i^{\prime})$, respectively.
###### Definition 3.4.
A $D$-labeled gluing pattern is a gluing pattern
$P\colon\\{1,1^{\prime},\dots,n,n^{\prime}\\}\longrightarrow\\{1,\dots,2n\\}$
together with $n$ elements $d_{1},\dots,d_{n}\in D$.
Notice that the fundamental group of a genus $g$ surface with $r+1$ boundary
components is free on $n=2g+r$ generators. This implies that a $D$-labeled
gluing pattern determines a principal $D$-bundle on the surface constructed
from the gluing pattern. Furthermore, up to equivalence all principal
$D$-bundles on surfaces with at least one boundary arise in this way.
(a) Generators of the homotopy group $\pi_{1}(\Sigmait)$, given by the labelled loops $[\gamma_{d_{1}}],\dots,[\gamma_{d_{n}}]$.
(b) Gluing a surface from a decorated gluing pattern: the disks labelled $d_{1},\dots,d_{n}$ are attached along the marked intervals $P(1),P(1^{\prime}),\dots,P(n),P(n^{\prime})$ of the bottom disk.
Figure 5:
For a $D$-labeled gluing pattern $(P,d_{1},\dots,d_{n})$ we are going to define
an algebra $a_{P}^{d_{1},\dots,d_{n}}\in\mathcal{A}$. As an object in
$\mathcal{A}$, it is defined by the tensor product
$\displaystyle
a_{P}^{d_{1},\dots,d_{n}}\coloneqq\bigotimes_{i=1}^{n}\mathcal{F}_{\mathcal{A}}^{d_{i}}\
\ ,$ (3.5)
where the $\mathcal{F}_{\mathcal{A}}^{d_{i}}$ are defined by the coend in
Equation (3.3). The gluing pattern can be used to define an algebra structure
on this object in complete analogy with [BZBJ18a]. To that end, we will use
the following terminology: Two labeled discs
$\mathbb{D}_{\bullet\bullet}^{d_{i}}$ and
$\mathbb{D}_{\bullet\bullet}^{d_{j}}$ with $i<j$ are called
* •
positively (negatively) linked if $P(i)<P(j)<P(i^{\prime})<P(j^{\prime})$
($P(j)<P(i)<P(j^{\prime})<P(i^{\prime})$)
* •
positively (negatively) nested if $P(i)<P(j)<P(j^{\prime})<P(i^{\prime})$
($P(j)<P(i)<P(i^{\prime})<P(j^{\prime})$)
* •
positively (negatively) unlinked if $P(i)<P(i^{\prime})<P(j)<P(j^{\prime})$
($P(j)<P(j^{\prime})<P(i)<P(i^{\prime})$)
To each of the above cases, we assign a crossing-morphism as depicted in
Figure 6 below. Notice that the crossing-morphism in the nested case differs
from the one given in [BZBJ18a, Definition 5.8].
(3.6) Figure 6: Definition of crossing-morphisms
$L^{+},N^{+},U^{+}\colon\mathcal{F}_{\mathcal{A}}^{d_{i}}\otimes\mathcal{F}_{\mathcal{A}}^{d_{j}}\longrightarrow\mathcal{F}_{\mathcal{A}}^{d_{j}}\otimes\mathcal{F}_{\mathcal{A}}^{d_{i}}$
for positively linked, nested and unlinked decorated discs. Notice that we
read the diagrams from bottom to top.
Now, for each pair of indices $1\leq i<j\leq n$, the restriction of the
multiplication to
$\mathcal{F}^{d_{i}}_{\mathcal{A}}\otimes\mathcal{F}^{d_{j}}_{\mathcal{A}}\subset
a_{P}^{d_{1},\dots,d_{n}}$ is defined by
$\mathcal{F}^{d_{i}}_{\mathcal{A}}\otimes\mathcal{F}^{d_{j}}_{\mathcal{A}}\otimes\mathcal{F}^{d_{i}}_{\mathcal{A}}\otimes\mathcal{F}^{d_{j}}_{\mathcal{A}}\xrightarrow{\operatorname{id}\otimes
C\otimes\operatorname{id}}\mathcal{F}^{d_{i}}_{\mathcal{A}}\otimes\mathcal{F}^{d_{i}}_{\mathcal{A}}\otimes\mathcal{F}^{d_{j}}_{\mathcal{A}}\otimes\mathcal{F}^{d_{j}}_{\mathcal{A}}\xrightarrow{m\otimes
m}\mathcal{F}^{d_{i}}_{\mathcal{A}}\otimes\mathcal{F}^{d_{j}}_{\mathcal{A}}\ \
,$
where $C$ is either $L^{\pm}$, $N^{\pm}$ or $U^{\pm}$, depending on whether
the decorated discs $\mathbb{D}^{d_{i}}_{\bullet\bullet}$ and
$\mathbb{D}^{d_{j}}_{\bullet\bullet}$ are $\pm$-linked, $\pm$-nested or
$\pm$-unlinked.
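The linked/nested/unlinked trichotomy is purely combinatorial and can be implemented directly. In the sketch below (our code), a gluing pattern is encoded as a dict sending the symbols $(i,0)$ for $i$ and $(i,1)$ for $i^{\prime}$ to positions in $\\{1,\dots,2n\\}$:

```python
def classify(P, i, j):
    """Classify discs i < j of a gluing pattern P as +/- linked, nested or unlinked.

    P maps (i, 0) ~ i and (i, 1) ~ i' to positions 1..2n with P[(i,0)] < P[(i,1)].
    """
    a, a_, b, b_ = P[(i, 0)], P[(i, 1)], P[(j, 0)], P[(j, 1)]
    if a < b < a_ < b_:
        return "+linked"
    if b < a < b_ < a_:
        return "-linked"
    if a < b < b_ < a_:
        return "+nested"
    if b < a < a_ < b_:
        return "-nested"
    if a < a_ < b < b_:
        return "+unlinked"
    return "-unlinked"

# the three-punctured-sphere pattern P(1,1',2,2') = (1,2,3,4): discs are unlinked
sphere = {(1, 0): 1, (1, 1): 2, (2, 0): 3, (2, 1): 4}
assert classify(sphere, 1, 2) == "+unlinked"

# a pattern with interleaved intervals, P(1,1',2,2') = (1,3,2,4): discs are linked
interleaved = {(1, 0): 1, (1, 1): 3, (2, 0): 2, (2, 1): 4}
assert classify(interleaved, 1, 2) == "+linked"
```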
Finally, given a $D$-labeled gluing pattern, we wish to describe the module
structure induced by gluing the marked disks
$\mathbb{D}_{\bullet\bullet}^{d_{i}}$ to the disk
${}_{\bullet^{2n}}\mathbb{D}_{\bullet}$ as sketched in Figure 5(b). To that
end, we look at the example of a sphere with three punctures
$(\mathbb{S}^{2})_{3}$ and a $D$-bundle described by the map
$\varphi\colon\pi_{1}((\mathbb{S}^{2})_{3})\longrightarrow D$ sending the two
generators of the fundamental group to $d_{1}$ and $d_{2}$, respectively. The
corresponding gluing pattern is $P(1,1^{\prime},2,2^{\prime})=(1,2,3,4)$,
decorated by the tuple $(d_{1},d_{2})\in D^{2}$. We then choose a collar-
gluing $(\mathbb{S}^{2})_{3}\cong\Sigmait_{-}\cup_{\Sigmait_{0}}\Sigmait_{+}$
for the punctured sphere, as sketched on the right hand side of Figure 7, and
an equivalence in $D\text{-}\mathsf{Man}_{2}$, so that the maps to $BD$ are
constant on $\Sigmait_{-}\setminus\Sigmait_{0}$ and
$\Sigmait_{+}\setminus\Sigmait_{0}$ and are given by the loops
$\gamma_{d_{1}}$ and $\gamma_{d_{2}}$ on fixed open intervals in
$\Sigmait_{0}$, which are depicted by the red and blue intervals in Figure 7.
We immediately see that we are in a situation similar to Example 2.11: The
right $\mathcal{A}\boxtimes\mathcal{A}$-module structure on
$\int_{\mathbb{D}^{d_{i}}_{\bullet\bullet}}\mathcal{A}$, for $i=1,2$, is the
twisted regular action $\operatorname{reg}^{d_{i}}$ from (3.1). The module
structure for more general decorated gluing patterns can be worked out
analogously.
Figure 7: Example: sphere with three punctures, shown with the collar-gluing $(\mathbb{S}^{2})_{3}\cong\Sigmait_{-}\cup_{\Sigmait_{0}}\Sigmait_{+}$ and the loops $\gamma_{d_{1}}$ (blue) and $\gamma_{d_{2}}$ (red) on intervals in $\Sigmait_{0}$.
###### Theorem 3.5.
Let $\Sigmait$ be a surface with at least one boundary component. Fix a
principal $D$-bundle $\varphi\colon\Sigmait\longrightarrow BD$ on $\Sigmait$
and a corresponding $D$-labeled gluing pattern $(P,d_{1},\dots,d_{n})$. There
is an equivalence of categories
$\displaystyle\int_{(\Sigmait,\varphi)}\mathcal{A}\cong
a_{P}^{d_{1},\dots,d_{n}}\operatorname{-mod}_{\mathcal{A}}$ (3.7)
###### Proof.
The following is an extension of the proof given in [BZBJ18a, Theorem 5.14] to
surfaces with $D$-bundles. We have seen that for a $d$-labeled disk
$\mathbb{D}^{d}_{\bullet\bullet}$ with two marked intervals we have
$\int_{\mathbb{D}^{d}_{\bullet\bullet}}\mathcal{A}\cong\mathcal{A}$ as plain
categories, with the markings inducing the structure of a right
$\mathcal{A}^{\boxtimes 2}$-module category with module structure given by the
twisted regular action $\operatorname{reg}^{d}$. Now,
$\int_{\sqcup_{i}\mathbb{D}^{d_{i}}_{\bullet\bullet}}\mathcal{A}\cong\mathcal{A}^{\boxtimes
n}$ has the structure of a right $\mathcal{A}^{\boxtimes 2n}$-module category.
Indeed, using the decorated gluing pattern $(P,d_{1},\dots,d_{n})$ we have an
action:
$\operatorname{reg}_{P}^{d_{1},\dots,d_{n}}\colon(x_{1}\boxtimes\dots\boxtimes
x_{n})\boxtimes(y_{1}\boxtimes\dots\boxtimes y_{2n})\longmapsto(x_{1}\otimes
y_{P(1)}\otimes\vartheta(d_{1}).y_{P(1^{\prime})})\boxtimes\dots\boxtimes(x_{n}\otimes
y_{P(n)}\otimes\vartheta(d_{n}).y_{P(n^{\prime})})$
We denote the resulting right module category by
$\mathcal{M}_{P}^{d_{1},\dots,d_{n}}$.
On the other hand, we have the disk ${{}_{\bullet^{2n}}}\mathbb{D}_{\bullet}$
with $2n$ marked intervals to the left and one marked interval to the right.
This turns
$\int_{{{}_{\bullet^{2n}}}\mathbb{D}_{\bullet}}\mathcal{A}\cong\mathcal{A}$
into an $(\mathcal{A}^{\boxtimes 2n},\mathcal{A})$-bimodule via the iterated
tensor product
$(x_{1}\boxtimes\dots\boxtimes x_{2n})\boxtimes y\boxtimes z\longmapsto
x_{1}\otimes\dots\otimes x_{2n}\otimes y\otimes z.$
We denote the resulting bimodule category by ${{}_{\mathcal{A}^{\boxtimes
2n}}}\mathcal{A}_{\mathcal{A}}$. Using excision, we then have
$\int_{(\Sigmait,\varphi)}\mathcal{A}\cong\mathcal{M}_{P}^{d_{1},\dots,d_{n}}\underset{\mathcal{A}^{\boxtimes
2n}}{\boxtimes}{{}_{\mathcal{A}^{\boxtimes 2n}}}\mathcal{A}_{\mathcal{A}}\ \
.$
Let $\tau_{P}\colon\\{1,\dots,2n\\}\longrightarrow\\{1,\dots,2n\\}$ be the
bijection given by postcomposing the map defined by $2k-1\longmapsto k$,
$2k\longmapsto k^{\prime}$ with $P$. Notice that the inverse of this map is
part of the action $\operatorname{reg}^{d_{1},\dots,d_{n}}_{P}$. Applying
monadic reconstruction as in Theorem 2.17, together with Lemma 3.1, we can
identify $\mathcal{M}_{P}^{d_{1},\dots,d_{n}}$ with modules over an algebra
$\underline{\operatorname{End}}_{\mathcal{A}^{\boxtimes
2n}}(1_{\mathcal{A}})_{P}^{d_{1},\dots,d_{n}}\in\mathcal{A}^{\boxtimes 2n}$,
obtained from $\underline{\operatorname{End}}_{\mathcal{A}^{\boxtimes
2}}(1_{\mathcal{A}})^{d_{1}}\boxtimes\dots\boxtimes\underline{\operatorname{End}}_{\mathcal{A}^{\boxtimes
2}}(1_{\mathcal{A}})^{d_{n}}$ by acting with $\tau_{P}$. Applying Corollary
2.19 to the dominant tensor functor
$T^{2n}\colon\mathcal{A}^{\boxtimes 2n}\longrightarrow\mathcal{A}$, we thus get
$\int_{(\Sigmait,\varphi)}\mathcal{A}\cong
T^{2n}(\underline{\operatorname{End}}_{\mathcal{A}^{\boxtimes
2n}}(1_{\mathcal{A}})_{P}^{d_{1},\dots,d_{n}})\operatorname{-mod}_{\mathcal{A}}\
\,$
as right $\mathcal{A}$-module categories.
Let us write $T^{2n}(\underline{\operatorname{End}}_{\mathcal{A}^{\boxtimes
2n}}(1_{\mathcal{A}})_{P}^{d_{1},\dots,d_{n}})=\widetilde{a}_{P}$ for brevity.
To finish the proof, we want to show that there is an isomorphism of algebras
$\widetilde{a}_{P}\cong a_{P}^{d_{1},\dots,d_{n}}$. Consider the subalgebras
$\mathcal{F}^{(i,i^{\prime})}_{\mathcal{A}}\coloneqq\underline{\operatorname{End}}_{\mathcal{A}_{P(i)}\boxtimes\mathcal{A}_{P(i^{\prime})}}(1_{\mathcal{A}})^{d_{i}}\in\mathcal{A}^{\boxtimes
2n}$
and their images under the tensor functor
$\mathcal{F}_{\mathcal{A}}^{(i)}\coloneqq
T^{2n}(\mathcal{F}_{\mathcal{A}}^{(i,i^{\prime})})\in\mathcal{A}$. By
embedding each $\mathcal{F}_{\mathcal{A}}^{(i)}$ into $\widetilde{a}_{P}$ we
get a map
$\widetilde{m}_{P}\colon\mathcal{F}_{\mathcal{A}}^{(1)}\otimes\dots\otimes\mathcal{F}_{\mathcal{A}}^{(n)}\hookrightarrow\widetilde{a}_{P}^{\otimes
n}\xrightarrow{\widetilde{m}}\widetilde{a}_{P}\ \ ,$
where $\widetilde{m}$ is the multiplication in $\widetilde{a}_{P}$. This map
establishes the isomorphism on the level of objects in $\mathcal{A}$. The
restriction of the multiplication to the image of one of the
$\mathcal{F}_{\mathcal{A}}^{(i)}$ agrees with the multiplication $m$ in
$\mathcal{F}_{\mathcal{A}}^{d_{i}}$. So it is left to show that for each pair
of indices $1\leq i<j\leq n$ the composition
$\mathcal{F}_{\mathcal{A}}^{(i)}\otimes\mathcal{F}_{\mathcal{A}}^{(j)}\otimes\mathcal{F}_{\mathcal{A}}^{(i)}\otimes\mathcal{F}_{\mathcal{A}}^{(j)}\xrightarrow{\operatorname{id}\otimes
C\otimes\operatorname{id}}\mathcal{F}_{\mathcal{A}}^{(i)}\otimes\mathcal{F}_{\mathcal{A}}^{(i)}\otimes\mathcal{F}_{\mathcal{A}}^{(j)}\otimes\mathcal{F}_{\mathcal{A}}^{(j)}\xrightarrow{m\otimes
m}\mathcal{F}_{\mathcal{A}}^{(i)}\otimes\mathcal{F}_{\mathcal{A}}^{(j)}\xrightarrow{\widetilde{m}_{P}}\widetilde{a}_{P},$
for $C$ being $L^{\pm},N^{\pm}$ or $U^{\pm}$, agrees with
$\widetilde{m}_{P}|_{(\mathcal{F}_{\mathcal{A}}^{(i)}\otimes\mathcal{F}_{\mathcal{A}}^{(j)})^{\otimes
2}}$. To that end, consider the following diagram
[Diagram: string diagrams defining the positively linked, nested and unlinked crossing morphisms $L^{+}$, $N^{+}$ and $U^{+}$ on $V\otimes W$ in terms of the braiding and the balancing $\theta$, together with the commutative square with top vertex $T^{4}(\mathcal{F}_{\mathcal{A}}^{(i,i^{\prime})}\otimes\mathcal{F}_{\mathcal{A}}^{(j,j^{\prime})})=T^{4}(\mathcal{F}_{\mathcal{A}}^{(j,j^{\prime})}\otimes\mathcal{F}_{\mathcal{A}}^{(i,i^{\prime})})$ and bottom vertex $\widetilde{a}_{P}$, connected by the dashed arrows $J_{i,j}$ out of $\mathcal{F}_{\mathcal{A}}^{(i)}\otimes\mathcal{F}_{\mathcal{A}}^{(j)}$ and $J_{j,i}$ out of $\mathcal{F}_{\mathcal{A}}^{(j)}\otimes\mathcal{F}_{\mathcal{A}}^{(i)}$, the vertical arrow $T^{4}(m)$, and the arrows $\widetilde{m}_{P}$ into $\widetilde{a}_{P}$.]
where the label $T^{4}(m)$ on the vertical arrow means applying the tensor
functor to the multiplication in
$\underline{\operatorname{End}}_{\mathcal{A}^{\boxtimes
2n}}(1_{\mathcal{A}})_{P}^{d_{1},\dots,d_{n}}$. The dashed arrows, making the
above diagram commute, can be described by exhibiting the tensor structure of
the iterated tensor product functor
$J_{i,j}\colon\mathcal{F}_{\mathcal{A}}^{(i)}\otimes\mathcal{F}_{\mathcal{A}}^{(j)}=T^{4}(\mathcal{F}_{\mathcal{A}}^{(i,i^{\prime})})\otimes
T^{4}(\mathcal{F}_{\mathcal{A}}^{(j,j^{\prime})})\xrightarrow{\cong}T^{4}(\mathcal{F}_{\mathcal{A}}^{(i,i^{\prime})}\otimes\mathcal{F}_{\mathcal{A}}^{(j,j^{\prime})})$
given by the shuffle braiding (the shuffle braiding $J\colon
a_{1}\otimes\dots\otimes a_{n}\otimes b_{1}\otimes\dots\otimes
b_{n}\xrightarrow{\cong}a_{1}\otimes b_{1}\otimes\dots\otimes a_{n}\otimes
b_{n}$ is given by
$J=\sigma_{a_{n},b_{n-1}}\circ\dots\circ\sigma_{a_{3}\otimes\dots\otimes
a_{n},b_{2}}\circ\sigma_{a_{2}\otimes\dots\otimes a_{n},b_{1}}$, where
$\sigma$ is the braiding of $\mathcal{A}$). As an example, consider the gluing
pattern $P(1,1^{\prime},2,2^{\prime})=(1,3,4,2)$ describing positively nested
handles. The corresponding shuffle braiding is
$J_{1,2}=(1\otimes 1\otimes\sigma)\circ(1\otimes\sigma\otimes 1),\quad
J_{2,1}=(\sigma\otimes 1\otimes 1)\circ(1\otimes\sigma\otimes 1),$
and we observe that the composition $J^{-1}_{1,2}\circ J_{2,1}$ agrees with
the nested crossing morphism
$N_{1,2}^{+}\colon\mathcal{F}_{\mathcal{A}}^{d_{2}}\otimes\mathcal{F}_{\mathcal{A}}^{d_{1}}\longrightarrow\mathcal{F}_{\mathcal{A}}^{d_{1}}\otimes\mathcal{F}_{\mathcal{A}}^{d_{2}}$.
From commutativity of the above diagram, we then get that
$\widetilde{m}_{P}|_{\mathcal{F}^{d_{2}}_{\mathcal{A}}\otimes\mathcal{F}_{\mathcal{A}}^{d_{1}}}=\widetilde{m}_{P}|_{\mathcal{F}_{\mathcal{A}}^{d_{1}}\otimes\mathcal{F}_{\mathcal{A}}^{d_{2}}}\circ
N_{1,2}^{+}$, which finishes the proof for the positively nested case. The
other five cases can be worked out analogously. ∎
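For concreteness (this computation is ours, using the conventions above), one can spell out $\tau_{P}$ for the gluing pattern $P(1,1^{\prime},2,2^{\prime})=(1,3,4,2)$ of positively nested handles: the map $2k-1\longmapsto k$, $2k\longmapsto k^{\prime}$ reads $1\longmapsto 1$, $2\longmapsto 1^{\prime}$, $3\longmapsto 2$, $4\longmapsto 2^{\prime}$, so postcomposing with $P$ gives
$\tau_{P}(1)=P(1)=1,\quad\tau_{P}(2)=P(1^{\prime})=3,\quad\tau_{P}(3)=P(2)=4,\quad\tau_{P}(4)=P(2^{\prime})=2\ \ ,$
i.e. $\tau_{P}$ is the cycle $(2\,3\,4)$; in one-line notation $\tau_{P}$ coincides with the output tuple of $P$.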
### 3.3 Little bundles algebras and braided $D$-crossed categories
The value of oriented factorisation homology of a rigid balanced braided
category $\mathcal{A}$ on $\mathbb{S}^{1}\times\mathbb{R}$ is given by the
Drinfeld centre $\mathcal{Z}(\mathcal{A})$ of $\mathcal{A}$. In [BZBJ18b,
Remark 3.2] it is observed that
$\int_{\mathbb{S}^{1}\times\mathbb{R}}\mathcal{A}$ carries two natural
monoidal structures induced from the topology of genus zero surfaces; one is
induced by stacking annuli in the $\mathbb{R}$-direction, which we will denote
$\otimes_{\mathbb{R}}$, and the other one is induced by embedding annuli into
the pair of pants and will be denoted $\otimes_{\text{Pants}}$. The monoidal
structure coming from the pair of pants requires some explanation: Evaluating
factorisation homology on the pair of embeddings sketched in Figure 8 gives
rise to the cospan
$\displaystyle\int_{\mathbb{S}^{1}\times\mathbb{R}}\mathcal{A}\boxtimes\int_{\mathbb{S}^{1}\times\mathbb{R}}\mathcal{A}\xrightarrow{(\iota_{1}\sqcup\iota_{2})_{*}}\int_{\text{Pants}}\mathcal{A}\xleftarrow{{\iota_{\text{out}}}_{*}}\int_{\mathbb{S}^{1}\times\mathbb{R}}\mathcal{A}$
(3.8)
in ${\mathsf{Pr}}_{c}$. Using the right adjoint $\iota_{\text{out}}^{*}$ to
${\iota_{\text{out}}}_{*}$ (note that $\iota_{\text{out}}^{*}$ is again in
${\mathsf{Pr}}_{c}$, since $\iota_{\text{out}}$ is given by acting on the
distinguished object in $\int_{\text{Pants}}\mathcal{A}$, which is a
progenerator), we get an induced
tensor product $\otimes_{\text{Pants}}$, which agrees with the usual tensor
product on the Drinfeld centre. We refer to [Was20] for a detailed algebraic
discussion of the type of interaction we expect between these two monoidal
structures in the case of fusion categories.
\begin{overpic}[scale={0.3},tics=10]{pair_of_pents_tensor_product.pdf}
\put(13.0,10.0){$\iota_{1}\sqcup\iota_{2}$}
\put(77.5,10.0){$\iota_{\text{out}}$} \end{overpic} Figure 8: The maps
inducing the monoidal structure $\otimes_{\text{Pants}}$.
For the case of interest in the present work, i.e. in the case that
$\mathcal{A}$ is equipped with a $D$-action, the situation is slightly
different since the annulus $\mathbb{S}^{1}\times\mathbb{R}$ can be endowed
with different maps into $BD$. We can assume that, up to homotopy, every map
$\varphi\colon\mathbb{S}^{1}\times\mathbb{R}\longrightarrow BD$ is constant in
the $\mathbb{R}$-direction and hence we still find an $\mathsf{E}_{1}$-algebra
structure $\otimes_{\mathbb{R}}$ on
$\int_{(\mathbb{S}^{1}\times\mathbb{R},\varphi)}\mathcal{A}$. On the other
hand, the pair of pants only induces an $\mathsf{E}_{2}$-algebra structure in
the case that all maps into $BD$ are chosen to be constant. The non-constant
maps into $BD$ induce instead another interesting algebraic structure on the
collection of values taken by factorisation homology on all possible maps
$\varphi\colon\mathbb{S}^{1}\times\mathbb{R}\longrightarrow BD$, a little
$D$-bundles algebra [MW20b].
The operad $\mathsf{E}_{2}^{D}$ of little $D$-bundles is coloured over the
space of maps from $\mathbb{S}^{1}$ to $BD$. To describe the space of
operations we need to introduce some notation: For a disk embedding
$f\in\mathsf{E}_{2}(r)$ we denote by $\mathsf{C}(f)$ the complement of the
interior of all embedded disks. Let
$\underline{\varphi}=(\varphi_{1},\dots,\varphi_{r})$ be an $r$-tuple of maps
$\varphi_{i}\colon\mathbb{S}^{1}\longrightarrow BD$ and
$\psi\colon\mathbb{S}^{1}\longrightarrow BD$ another map. The space of
operations $\mathsf{E}_{2}^{D}\binom{\psi}{\underline{\varphi}}$ consists of
pairs of an element $f\in\mathsf{E}_{2}(r)$ together with a map
$\xi\colon\mathsf{C}(f)\longrightarrow BD$ whose restriction to
$\partial\mathsf{C}(f)$ is given by $(\underline{\varphi},\psi)$. By
construction we have the following:
###### Proposition 3.6.
The value of factorisation homology on $\mathbb{S}^{1}$ equipped with varying
$D$-bundle decorations has the structure of a little $D$-bundles algebra.
The main result of [MW20b, Theorem 4.13] identifies algebras over
$\mathsf{E}^{D}_{2}$ inside the 2-category ${\mathsf{Cat}}$ of categories with
braided $D$-crossed categories as defined by Turaev [Tur00, Tur10] and
recalled below. The proof directly carries over to ${\mathsf{Pr}}_{c}$.
###### Definition 3.7.
A braided $D$-crossed category is a $D$-graded monoidal category
$\mathcal{A}^{D}=\bigoplus_{d\in D}\mathcal{A}_{d},\quad\text{such that
}\otimes\colon\mathcal{A}_{d}\boxtimes\mathcal{A}_{d^{\prime}}\longrightarrow\mathcal{A}_{dd^{\prime}}$
together with a $D$-action on $\mathcal{A}^{D}$ such that the image of
$\mathcal{A}_{d}$ under the action of an element $h\in D$ is contained in
$\mathcal{A}_{hdh^{-1}}$, and natural isomorphisms $c_{X,Y}\colon X\otimes
Y\longrightarrow d.Y\otimes X$ for $X\in\mathcal{A}_{d}$, satisfying natural
coherence conditions [Gal17].
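As a quick consistency check (ours, not part of the definition), the crossed braiding is compatible with the grading: for $X\in\mathcal{A}_{d}$ and $Y\in\mathcal{A}_{d^{\prime}}$ we have $X\otimes Y\in\mathcal{A}_{dd^{\prime}}$ and $d.Y\in\mathcal{A}_{dd^{\prime}d^{-1}}$, hence
$d.Y\otimes X\in\mathcal{A}_{(dd^{\prime}d^{-1})d}=\mathcal{A}_{dd^{\prime}}\ \ ,$
so $c_{X,Y}$ is indeed a morphism between objects of the same degree. For trivial $D$ with trivial action, the definition reduces to that of an ordinary braided monoidal category.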
We call the braided $D$-crossed category assigned to $\mathcal{A}$ by
factorisation homology the $D$-centre $\mathcal{Z}^{D}(\mathcal{A})$ of
$\mathcal{A}$. The $d$-components $\mathcal{Z}^{D}_{d}(\mathcal{A})$ are given
by factorisation homology on
$\varphi_{d}\colon\mathbb{S}^{1}\times\mathbb{R}\longrightarrow BD$, where
$\varphi_{d}$ corresponds to the loop $d\in\pi_{1}(BD)=D$ and is constant in
the $\mathbb{R}$-direction. To compute the $D$-centre explicitly, we recall
the concept of bimodule traces and twisted centres from [DSPS20, FSS17]. Let
$\mathcal{A}\in{\mathsf{Pr}}_{c}$ be a monoidal category and $\mathcal{M}$ be
an $\mathcal{A}$-bimodule category. The bimodule trace of $\mathcal{M}$ is
$\displaystyle\operatorname{Tr}_{\mathcal{A}}(\mathcal{M})\coloneqq\mathcal{M}\boxtimes_{\mathcal{A}\boxtimes\mathcal{A}^{\operatorname{rev}}}\mathcal{A}\
\ ,$ (3.9)
where $\mathcal{A}^{\operatorname{rev}}$ denotes the category $\mathcal{A}$
with the reverse multiplication. Assume now that
$\mathcal{F}\colon\mathcal{A}\longrightarrow\mathcal{A}$ is a monoidal functor
and denote by ${{}_{\langle\mathcal{F}\rangle}}\mathcal{M}$ the
$(\mathcal{A},\mathcal{A})$-bimodule whose left action is pulled back along
$\mathcal{F}$. Similarly, we will denote
$\mathcal{M}_{\langle\mathcal{F}\rangle}$ the bimodule whose right action is
pulled back along $\mathcal{F}$. The $\mathcal{F}$-twisted centre
$\mathcal{Z}^{\mathcal{F}}(\mathcal{M})$ is then the Drinfeld centre of the
bimodule category $\mathcal{M}_{\langle\mathcal{F}\rangle}$.
###### Proposition 3.8.
Let $d$ be an element of $D$. There is a natural isomorphism
$\displaystyle\mathcal{Z}_{d}^{D}(\mathcal{A})\cong\operatorname{Tr}_{\mathcal{A}}(\mathcal{M}_{d})\
\ ,$ (3.10)
where $\mathcal{M}_{d}$ is the bimodule constructed in Section 3.1 via the
twisted regular action. Moreover, one can identify the bimodule trace
$\operatorname{Tr}_{\mathcal{A}}(\mathcal{M}_{d})$ with the twisted Drinfeld
centre $\mathcal{Z}^{\vartheta(d^{-1})}(\mathcal{A})$.
###### Proof.
The first statement follows directly from applying excision to the cover
sketched in Figure 9 combined with the results of Section 2.2.1. Note that
here excision is not used as in the proof of Theorem 3.5.
\begin{overpic}[scale={0.4},tics=10]{decorated_gluing_pattern_4.pdf}
\put(47.5,17.5){$\cong$} \put(77.5,-7.5){$\Sigmait_{0}$} \end{overpic} Figure
9: Collar-gluing for the annulus with a map to $BD$.
For the second statement, recall that since $\mathcal{A}$ is rigid we can
apply Theorem 2.17 to identify
$\mathcal{M}_{d}\cong\underline{\operatorname{End}}(1_{\mathcal{A}})^{\vartheta(d)}\operatorname{-mod}_{\mathcal{A}\boxtimes\mathcal{A}^{\operatorname{rev}}}$,
where $\underline{\operatorname{End}}(1_{\mathcal{A}})^{\vartheta(d)}$ is the
endomorphism algebra of $1_{\mathcal{A}}$ in
$\mathcal{A}\boxtimes\mathcal{A}^{\operatorname{rev}}$ with respect to the
$\vartheta(d)$-twisted regular action. A categorical version of the
Eilenberg-Watts theorem [BJS21, Lemma 5.7] then gives an equivalence
$\operatorname{Tr}_{\mathcal{A}}(\mathcal{M}_{d})\underset{\text{Thm.
}\eqref{thm:reconstructionreltensorprod}}{\cong}\underline{\operatorname{End}}(1_{\mathcal{A}})^{\vartheta(d)}\text{-mod}_{\mathcal{A}}\cong\operatorname{Hom}_{\mathcal{A}\boxtimes\mathcal{A}^{\operatorname{rev}}}({{}_{\langle\vartheta(d^{-1})\rangle}}\mathcal{A},\mathcal{A})\
\ .$
But, by [FSS17, Lemma 2.13] this is precisely the $\vartheta(d)$-twisted
Drinfeld centre of $\mathcal{A}$ as claimed. ∎
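As a sanity check (ours), for the neutral element $e\in D$ the twist $\vartheta(e^{-1})=\operatorname{id}$ and Proposition 3.8 specialises to
$\mathcal{Z}^{D}_{e}(\mathcal{A})\cong\operatorname{Tr}_{\mathcal{A}}(\mathcal{A})\cong\mathcal{Z}(\mathcal{A})\ \ ,$
recovering the value of factorisation homology on the annulus with trivial bundle, the ordinary Drinfeld centre, as stated at the beginning of Section 3.3.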
Let us introduce the following bimodule category $\mathcal{A}\rtimes
D\coloneqq\bigoplus_{d\in D}\mathcal{M}_{d}$. This category has the structure
of a $D$-graded monoidal category via
$\displaystyle\otimes_{\mathcal{A}\rtimes D}\colon\mathcal{M}_{d}\boxtimes\mathcal{M}_{d^{\prime}}\longrightarrow\mathcal{M}_{dd^{\prime}},\qquad x\boxtimes x^{\prime}\longmapsto x\otimes\vartheta(d).x^{\prime}$ (3.11)-(3.12)
as indicated by the notation.
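Associativity of $\otimes_{\mathcal{A}\rtimes D}$ can be checked directly (a computation of ours, assuming the $D$-action is by monoidal functors with coherent isomorphisms $\vartheta(d)\circ\vartheta(d^{\prime})\cong\vartheta(dd^{\prime})$): for $x\in\mathcal{M}_{d}$, $x^{\prime}\in\mathcal{M}_{d^{\prime}}$ and $x^{\prime\prime}\in\mathcal{M}_{d^{\prime\prime}}$, both bracketings yield
$x\otimes\vartheta(d).x^{\prime}\otimes\vartheta(dd^{\prime}).x^{\prime\prime}\in\mathcal{M}_{dd^{\prime}d^{\prime\prime}}\ \ ,$
since $\vartheta(d).(x^{\prime}\otimes\vartheta(d^{\prime}).x^{\prime\prime})\cong\vartheta(d).x^{\prime}\otimes\vartheta(dd^{\prime}).x^{\prime\prime}$ up to these coherence isomorphisms.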
###### Corollary 3.9.
The trace $\operatorname{Tr}_{\mathcal{A}}(\mathcal{A}\rtimes D)$ of the
bimodule $\mathcal{A}\rtimes D$ agrees with the $D$-centre
$\mathcal{Z}^{D}(\mathcal{A})$ and is a braided $D$-crossed category.
###### Remark 3.10.
In [GNN09], the graded centre of a $D$-graded fusion category
$\mathcal{C}=\bigoplus_{d\in D}\mathcal{C}_{d}$ is defined to be
$\mathcal{Z}_{\mathcal{C}_{e}}(\mathcal{C})\cong\operatorname{Tr}_{\mathcal{C}_{e}}(\mathcal{C})$
and equipped with the structure of a braided $D$-crossed category. In the case
that $\mathcal{A}$ is a braided fusion category with $D$-action, the
$D$-centre $\mathcal{Z}^{D}(\mathcal{A})$ agrees with the graded centre of
$\mathcal{A}\rtimes D$. A careful comparison of the two little bundles algebra
structures would take us too far from the content of the paper.
###### Remark 3.11.
We also leave a detailed study of the interaction of the monoidal structure
$\otimes_{\mathbb{R}}$ induced by stacking annuli in the
$\mathbb{R}$-direction with the $D$-crossed braided structure as an
interesting open question for further research.
### 3.4 Algebraic description of boundary conditions and point defects
In Section 2.1.2 we explained that boundary conditions and point defects for
$D\times SO(2)$-structured factorisation homology with values in
${\mathsf{Pr}}_{c}$ are classified by symmetric monoidal functors from the
categories of stratified disks $D\text{-}{\mathsf{Disk}}_{2,\partial}$ and
$D\text{-}{\mathsf{Disk}}_{2,*}$ to ${\mathsf{Pr}}_{c}$, respectively. In this
section we will describe the algebraic structure classifying these functors.
Our strategy will be the following: The source categories can naturally be
identified with the envelopes of the coloured operads $D\text{-}\mathsf{fSC}$,
a framed and $D$-equivariant version of the Swiss-cheese operad [Vor99], and
$D\text{-}\mathsf{fE_{2}^{1}}$, a framed and $D$-equivariant
$\mathsf{E}_{2}$-operad with a frozen strand [CG20], respectively. Hence,
defect data corresponds to algebras over them. Both operads are aspherical,
meaning that all the homotopy groups of the operation spaces vanish in degree
higher than 1. For this reason we can work equivalently with the groupoid
valued operads $\Piit_{1}(D\text{-}\mathsf{fSC})$ and
$\Piit_{1}(D\text{-}\mathsf{fE_{2}^{1}})$, instead of topological operads. We
extend existing combinatorial models [Idr17, CG20] in terms of generators and
relations to the situation at hand. The results will be combinatorially
described groupoid valued coloured operads $D\text{-}\mathsf{fPeBr}$ and
$D\text{-}\mathsf{fBr^{1}}$ equivalent to $\Piit_{1}(D\text{-}\mathsf{fSC})$
and $\Piit_{1}(D\text{-}\mathsf{fE_{2}^{1}})$.
We will work within the 2-categorical framework for operads, see for example
Section 2 of [MW22]. The advantage of this is that all structures will
automatically be coherent in the appropriate sense. Alternatively, one could
work with $\Sigmait$-cofibrant models, similar to the parenthesised braid
model for the $\mathsf{E}_{2}$-operad [Fre17-I, Chapter 6], at the categorical
level [CG20, Idr17].
#### 3.4.1 Boundary conditions
We briefly recall the situation without principal bundles [BZBJ18b]. The
category ${\mathsf{Disk}}^{\operatorname{fr}}_{2,\partial}$ is equivalent to
the envelope of the topological Swiss-cheese operad $\mathsf{SC}$ with its two
colours $\mathbb{D}$ and $\mathbb{D}_{\partial}$, corresponding to the
standard disk and the half-disk. The spaces of operations are given by
rectilinear embeddings. In particular, one has that
$\mathsf{SC}(\underbrace{\mathbb{D},\dots,\mathbb{D}}_{n};\mathbb{D})=\mathsf{E}_{2}(n),\quad\mathsf{SC}(\underbrace{\mathbb{D}_{\partial},\dots,\mathbb{D}_{\partial}}_{n};\mathbb{D}_{\partial})=\mathsf{E}_{1}(n)\
\ .$
In Figure 10 we sketch an operation with different colours and in Figure 11 we
list the generators (we refer to [MW20b, Section 4.1] for more details on
generators and relations for groupoid valued operads) for the corresponding
combinatorial model $\mathsf{PeBr}$ of permutations and braids, constructed in
[Idr17], together with the respective topological operations.
\begin{overpic}[scale={0.5},tics=10]{sc.pdf} \put(18.0,5.0){$1$}
\put(63.0,5.0){$2$} \put(36.0,27.0){$3$} \put(77.0,36.0){$4$}
\put(40.0,65.0){$5$} \end{overpic} Figure 10: An example of an operation in
$\mathsf{SC}(\mathbb{D}_{\partial},\mathbb{D}_{\partial},\mathbb{D},\mathbb{D},\mathbb{D};\mathbb{D}_{\partial})$.
The relations for $\mathsf{PeBr}$ are such that an algebra over $\mathsf{SC}$
corresponds to a braided monoidal category $\mathcal{A}$, a monoidal category
$\mathcal{C}$ and a braided functor
$\mathcal{A}\longrightarrow\mathcal{Z}(\mathcal{C})$ into the Drinfeld centre
of $\mathcal{C}$. For a complementary physical perspective on the
correspondence between boundary conditions and maps into
$\mathcal{Z}(\mathcal{C})$ we refer the reader to [FSV15].
\begin{overpic}[scale={0.5},tics=10]{generators_SC.pdf}
\put(2.0,67.5){$\text{Generating objects:}$} \put(2.0,43.0){$\text{Generating
morphisms:}$} \put(9.0,56.0){\large$\longmapsto$} \put(33.0,52.0){\large$,$}
\put(49.0,56.0){\large$\longmapsto$} \put(68.0,52.0){\large$,$}
\put(17.0,28.0){$1$} \put(22.5,28.0){$2$} \put(29.5,28.0){$2$}
\put(35.0,28.0){$1$} \put(17.0,13.0){$1$} \put(22.5,13.0){$2$}
\put(29.5,13.0){$2$} \put(35.0,13.0){$1$} \put(81.0,56.0){\large$\longmapsto$}
\put(15.0,34.0){\large$\colon$} \put(24.0,34.0){\large$\longrightarrow$}
\put(40.0,34.0){\large$\longmapsto$} \put(63.0,34.0){\large$\longrightarrow$}
\put(15.0,20.0){\large$\colon$} \put(24.0,20.0){\large$\longrightarrow$}
\put(40.0,20.0){\large$\longmapsto$} \put(63.0,20.0){\large$\longrightarrow$}
\put(15.0,5.0){\large$\colon$} \put(24.0,5.0){\large$\longrightarrow$}
\put(40.0,5.0){\large$\longmapsto$} \put(63.0,5.0){\large$\longrightarrow$}
\end{overpic} Figure 11: Generating operations for $\mathsf{fPeBr}$ and their
image under the equivalence
$\mathsf{fPeBr}\xrightarrow{\cong}\Piit_{1}(\mathsf{fSC})$. The arrows
indicate the paths in the space of embeddings. If we ignore the last
generating morphism, we recover the generators of $\mathsf{PeBr}$.
To study boundary conditions for oriented manifolds, one works with the framed
Swiss-cheese operad $\mathsf{fSC}$ where embeddings are allowed to rotate the
disks $\mathbb{D}$. In the respective combinatorial model $\mathsf{fPeBr}$ for
the framed Swiss-cheese operad this is incorporated by introducing one
additional generator in $\mathsf{fPeBr}(\mathbb{D};\mathbb{D})$, the
balancing, and imposing the relation corresponding to Equation (2.10) inside
$\mathsf{fPeBr}(\mathbb{D},\mathbb{D};\mathbb{D})$, see also Figure 11. Hence,
we see that in order to extend an algebra $(\mathcal{A},\mathcal{C})$ over
$\mathsf{SC}$ to an algebra over $\mathsf{fSC}$, we need to equip
$\mathcal{A}$ with a balancing.
Finally, we turn our attention to the $D$-equivariant version
$D\text{-}\mathsf{fSC}$ of the framed Swiss-cheese operad, together with its
combinatorial model $D\text{-}\mathsf{fPeBr}$, whose envelope is equivalent to
$D\text{-}{\mathsf{Disk}}_{2,\partial}$. We can assume without loss of
generality that all bundles are trivial and hence the colours of the operads
do not change. However, for every group element $d\in D$, we get an additional
arity one operation in both $D\text{-}\mathsf{fPeBr}(\mathbb{D};\mathbb{D})$
and $D\text{-}\mathsf{fPeBr}(\mathbb{D}_{\partial};\mathbb{D}_{\partial})$
corresponding to gauge transformations of the trivial bundle, which ‘commute’
with all the other generators. Hence, we can identify
$D\text{-}\mathsf{fPeBr}$ with the Boardman-Vogt tensor product
$\mathsf{fPeBr}\otimes_{\text{BV}}D$, where we consider the group $D$ as an
operad concentrated in arity one. On the level of algebras this implies
${\mathsf{Alg}}(D\text{-}\mathsf{fSC};{\mathsf{Pr}}_{c})\cong{\mathsf{Alg}}(\mathsf{fPeBr};{\mathsf{Alg}}(D;{\mathsf{Pr}}_{c}))$.
But a $D$-algebra is just an object of ${\mathsf{Pr}}_{c}$ equipped with a
$D$-action, and so we can summarise our discussion in the following
proposition.
###### Proposition 3.12.
Let $\mathcal{A}$ be a balanced braided category with $D$-action. Boundary
conditions for $\mathcal{A}$ in $D\times SO(2)$-structured factorisation
homology are given by a monoidal category $\mathcal{C}\in{\mathsf{Pr}}_{c}$
with $D$-action and a $D$-equivariant braided functor
$\mathcal{A}\longrightarrow\mathcal{Z}(\mathcal{C})$ into the Drinfeld centre
of $\mathcal{C}$ with its induced $D$-action.
###### Example 3.13.
The trivial boundary condition, corresponding to simply removing the boundary
and computing factorisation homology on the resulting manifold without
boundary, is given by taking $\mathcal{C}=\mathcal{A}$ together with the
canonical embedding $\mathcal{A}\longrightarrow\mathcal{Z}(\mathcal{A})$
induced by the braiding on $\mathcal{A}$.
###### Example 3.14.
The sources for boundary conditions from [BZBJ18b, Section 2.3] have natural
generalisations to the equivariant setting:
1. 1.
Let $\mathcal{A}$ be a balanced braided category with $D$-action and denote by
$E_{2}(\mathcal{A})$ the category of commutative algebras in $\mathcal{A}$,
which comes with an induced $D$-action. For every homotopy fixed point $a\in
E_{2}(\mathcal{A})^{D}$ (here a homotopy fixed point is a commutative algebra
$a$ together with algebra isomorphisms $\tau_{d}\colon
d.a\xrightarrow{\cong}a$ for all $d\in D$ such that $\tau_{d^{\prime}}\circ
d^{\prime}.\tau_{d}=\tau_{d^{\prime}d}$), the category
$a\operatorname{-mod}$ inherits a
natural $D$-action and provides an example for boundary conditions of the bulk
theory described by $\mathcal{A}$.
2. 2.
Consider the quantum Borel algebra $U_{q}(\mathfrak{b})\hookrightarrow
U_{q}(\mathfrak{g})$, which is the subalgebra generated by the elements
$\\{K^{\pm}_{\alpha_{i}},X^{+}_{\alpha_{i}}\\}_{\alpha_{i}\in\Piit}$,
following conventions from [CP95, Section 9.1.B]. We get a forgetful tensor
functor ${\mathsf{Rep}}_{q}(G)\longrightarrow{\mathsf{Rep}}_{q}(B)$. Moreover,
as noted in [BZBJ18b, Section 2.3], the R-matrix provides a central structure
on this forgetful functor. We observe that we have an
$\operatorname{Out}(G)$-action on $U_{q}(\mathfrak{b})$, given on generators
by $K^{\pm}_{\alpha_{i}}\longmapsto K^{\pm}_{\kappa(\alpha_{i})}$ and
$X^{+}_{\alpha_{i}}\longmapsto X^{+}_{\kappa(\alpha_{i})}$ for any
$\kappa\in\operatorname{Out}(G)$. We conclude that we get an
$\operatorname{Out}(G)$-equivariant functor
${\mathsf{Rep}}_{q}(G)\longrightarrow\mathcal{Z}({\mathsf{Rep}}_{q}(B))$.
###### Remark 3.15.
There is another generalisation of the Swiss-cheese operad to the equivariant
setting with operations consisting of an element in $\mathsf{SC}$ equipped
with a map to $BD$ on the complement of the embedding. This is similar to the
generalisation of the little disks operad given by the little bundles operad.
We also expect this operad to play an important role in the description of
boundary conditions for equivariant field theories.
#### 3.4.2 Point defects
We again start by recalling the framed result from [BZBJ18b] in the language
of coloured operads and then gradually build up to the oriented and
$D$-equivariant setting. The disk category
${\mathsf{Disk}}_{2,*}^{\operatorname{fr}}$ can be described as the envelope
of a topological operad with two colours, $\mathbb{D}$ and $\mathbb{D}_{*}$,
corresponding to a disk and a marked disk, respectively. The spaces of
operations are given by rectilinear embeddings which map marked points
bijectively to marked points. The concrete structure of this coloured operad
makes it into a moperad as defined in [Wil16, Definition 9]. A combinatorial
model for this topological operad is given in [CG20] in terms of parenthesised
braids with a frozen strand. In Figure 12, we give a strict version of this
combinatorial model, which will be denoted $\mathsf{Br}^{1}$. The description
in terms of generators and relations allows us to read off the corresponding
algebraic structure which was introduced in [Enr08, Bro12, Bro13].
###### Definition 3.16.
Let $\mathcal{A}$ be a braided category. A braided module over $\mathcal{A}$
is a right module category
$\triangleleft\colon\mathcal{M}\boxtimes\mathcal{A}\longrightarrow\mathcal{M}$
equipped with a natural isomorphism
$E\colon\triangleleft\Longrightarrow\triangleleft$ satisfying (suppressing
coherence isomorphisms)
$\displaystyle E_{m\triangleleft x,y}$
$\displaystyle=(\operatorname{id}_{m}\triangleleft\sigma_{y,x}^{-1})\circ(E_{m,y}\triangleleft\operatorname{id}_{x})\circ(\operatorname{id}_{m}\triangleleft\sigma_{x,y}^{-1})$
(3.13) $\displaystyle E_{m,x\otimes y}$
$\displaystyle=(E_{m,x}\triangleleft\operatorname{id}_{y})\circ
E_{m\triangleleft
x,y}\circ(\operatorname{id}_{m}\triangleleft(\sigma_{y,x}\circ\sigma_{x,y}))$
(3.14)
for all $m\in\mathcal{M}$ and $x,y\in\mathcal{A}$.
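A small consequence worth recording (our observation, a normalisation check on the definition): setting $x=y=1_{\mathcal{A}}$ in (3.14) and using $m\triangleleft 1_{\mathcal{A}}\cong m$ and $\sigma_{1,1}=\operatorname{id}$ gives
$E_{m,1}=E_{m,1}\circ E_{m,1}\ \ ,$
and since $E$ is a natural isomorphism it follows that $E_{m,1}=\operatorname{id}$, i.e. the braided module structure is automatically trivial on the monoidal unit.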
The framed version $\mathsf{fBr}^{1}$, giving a combinatorial model for the
envelope of ${\mathsf{Disk}}_{2,*}$, can be described by an extension of
$\mathsf{Br}^{1}$ obtained by adding two additional generating morphisms
$\theta\in\mathsf{fBr}^{1}(\mathbb{D};\mathbb{D})$ and
$\theta_{*}\in\mathsf{fBr}^{1}(\mathbb{D}_{*};\mathbb{D}_{*})$, corresponding
to rotating the disks by $2\pi$. Furthermore, we need to include Relation
(2.10) for $\theta$ and Relation $(R4)$ from Figure 12 for $\theta_{*}$.
\begin{overpic}[scale={0.5},tics=10]{fBr1.pdf}
\put(0.0,76.0){$\text{Generating objects:}$} \put(0.0,53.0){$\text{Generating
morphisms:}$} \put(0.0,18.0){$\text{Relations:}$} \put(-2.0,66.0){$d$}
\put(2.0,66.0){ \large$\longmapsto$} \put(20.0,65.0){ $d$} \put(24.0,60.0){
\large$,$} \put(34.0,66.0){$d$} \put(34.0,72.0){$dhd^{-1}$}
\put(35.5,59.0){$h$} \put(39.0,66.0){ \large$\longmapsto$} \put(55.0,65.0){
$d$} \put(60.0,60.0){ \large$,$} \put(71.0,59.0){$d$} \put(74.0,72.0){$d$}
\put(78.0,66.0){ \large$\longmapsto$} \put(13.0,42.0){\large$\colon$}
\put(13.0,27.0){\large$\colon$} \put(23.0,27.0){\large$\longrightarrow$}
\put(23.0,42.0){\large$\longrightarrow$} \put(46.0,27.0){\large$\longmapsto$}
\put(46.0,42.0){\large$\longmapsto$} \put(74.5,27.0){\large$\longrightarrow$}
\put(74.5,42.0){\large$\longrightarrow$} \put(15.0,36.0){$d$}
\put(29.4,36.0){$d$} \put(18.0,48.0){$d$} \put(33.0,48.0){$d$}
\put(35.0,41.0){$d$} \put(18.0,22.0){$d$} \put(32.0,22.0){$d$}
\put(33.5,27.0){$d$} \put(11.5,6.0){\large$=$} \put(37.0,6.0){\large$=$}
\put(62.0,6.0){\large$=$} \put(87.0,6.0){\large$=$} \put(26.5,1.0){\large$,$}
\put(52.0,1.0){\large$,$} \put(76.0,1.0){\large$,$} \put(10.0,9.0){$(R1)$}
\put(35.0,9.0){$(R2)$} \put(60.0,9.0){$(R3)$} \put(85.5,9.0){$(R4)$}
\end{overpic} Figure 12: Generating operations and relations for
$D\text{-}\mathsf{fBr^{1}}$ and their image under the equivalence
$D\text{-}\mathsf{fBr^{1}}\xrightarrow{\cong}\Piit_{1}(D\text{-}\mathsf{fE_{2}^{1}})$.
Notice that we did not depict the relations related to the $D$-action. The
$d$-labels on the disk for the first two generating objects mean that the map
to $BD$ is the loop $d$ in radial direction. In $D\text{-}{\mathsf{Man}}_{2}$
this embedding is isomorphic to the identity embedding equipped with the
homotopy corresponding to $d$. If we ignore the $D$-labels, we get generators
and relations of $\mathsf{fBr}^{1}$. If we furthermore drop the second
generating morphism as well as relation $(R4)$, we get a combinatorial model
for $\mathsf{E}^{1}_{2}$.
We note that the system of relations is over-determined: Relation $(R4)$
allows one to rewrite $E_{\mathbb{D}_{*},\mathbb{D}}$ in terms of the
balancings $\theta$ and $\theta_{*}$. Inserting this into Relation $(R3)$ in
Figure 12, we find that the latter is automatically satisfied and hence
redundant. To show that the combinatorial description is correct, it is
enough to note that the operation spaces in $\mathsf{fE}_{2}^{1}$ can be
identified with the ones of $\mathsf{fE}_{2}$. Reading off the corresponding
algebraic structure from the combinatorial model, one finds an equivalent
reformulation of the braided balanced modules introduced in [BZBJ18b, Theorem
3.12]. The only additional structure to the one described in Definition 3.16
is that of a balancing
$\theta_{\mathcal{M}}\colon\operatorname{id}_{\mathcal{M}}\Longrightarrow\operatorname{id}_{\mathcal{M}}$
on $\mathcal{M}$ compatible with $E$.
Finally, we move on to describe point defects in the $D$-equivariant setting,
which is slightly more subtle than the boundary conditions described in the
previous section. The reason for this is that the disk with one marked point
$\mathbb{D}_{*}$ is replaced by a collection of marked disks
$\mathbb{D}_{*}^{d}$ equipped with a map to $BD$ with holonomy $d$. The
combinatorial model for $D\text{-}\mathsf{fE}_{2}^{1}$ can be derived from the
model for the framed version of the little bundles operad given in [Woi20,
Section 5.4.2] similar to the derivation of the model for
$\mathsf{fE}_{2}^{1}$ from the one for $\mathsf{fE}_{2}$. It is important to
note here that we only consider configurations where the map to $BD$ has non-
trivial holonomy around the frozen strand. We list the generators and
relations for the combinatorial model $D\text{-}\mathsf{fBr}^{1}$ in Figure
12. The corresponding algebraic notion is:
###### Definition 3.17.
Let $\mathcal{A}$ be a balanced braided category with $D$-action. An
equivariant balanced right module over $\mathcal{A}$ is a $D$-graded category
$\mathcal{M}=\bigoplus_{d\in D}\mathcal{M}_{d}$ equipped with
* •
a $D$-action
$\operatorname{act}^{\mathcal{M}}\colon*\text{//}D\longrightarrow*\text{//}\operatorname{Aut}(\mathcal{M})$
such that the image of $\mathcal{M}_{d}$ under the action of $d^{\prime}\in D$
is contained in $\mathcal{M}_{d^{\prime}d{d^{\prime}}^{-1}}$,
* •
an equivariant right $\mathcal{A}$-action
$\triangleleft\colon\mathcal{M}\boxtimes\mathcal{A}\longrightarrow\mathcal{M}\
\ ,$
* •
natural isomorphisms
$\theta_{\mathcal{M}}^{d}\colon\operatorname{id}_{\mathcal{M}_{d}}\longrightarrow\operatorname{act}^{\mathcal{M}}_{d}$
and
$E^{d}\colon\triangleleft\longrightarrow\triangleleft\circ\left(\operatorname{id}_{\mathcal{M}_{d}}\boxtimes\operatorname{act}^{\mathcal{A}}_{d}\right)$
for all $d\in D$,
such that (suppressing coherence isomorphisms)
* •
for all $m\in\mathcal{M}_{d}$ and $x,y\in\mathcal{A}$
$\displaystyle E^{d}_{m\triangleleft
x,y}=\left(\operatorname{id}_{m}\triangleleft\sigma_{\operatorname{act}^{\mathcal{A}}_{d}(y),x}^{-1}\right)\circ\left(E^{d}_{m,y}\triangleleft\operatorname{id}_{x}\right)\circ\left(\operatorname{id}_{m}\triangleleft\sigma_{x,y}^{-1}\right)\
\ ,$ (3.15)
* •
and for all $m\in\mathcal{M}_{d}$ and $x\in\mathcal{A}$
$\displaystyle\left(\theta_{\mathcal{M}}^{d}\right)_{m\triangleleft
x}=E^{d}_{\operatorname{act}^{\mathcal{M}}_{d}(m),x}\circ\left(\left(\theta_{\mathcal{M}}^{d}\right)_{m}\triangleleft(\theta_{\mathcal{A}})_{x}\right)\
\ .$ (3.16)
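As a consistency check (our own unwinding of the axioms above, not taken from the references): for trivial $D=\{e\}$ the grading collapses to $\mathcal{M}=\mathcal{M}_{e}$, the actions $\operatorname{act}^{\mathcal{M}}_{e}$ and $\operatorname{act}^{\mathcal{A}}_{e}$ are identities, and conditions (3.15) and (3.16) reduce to
$\displaystyle E_{m\triangleleft x,y}=\left(\operatorname{id}_{m}\triangleleft\sigma_{y,x}^{-1}\right)\circ\left(E_{m,y}\triangleleft\operatorname{id}_{x}\right)\circ\left(\operatorname{id}_{m}\triangleleft\sigma_{x,y}^{-1}\right)\quad\text{and}\quad\left(\theta_{\mathcal{M}}\right)_{m\triangleleft x}=E_{m,x}\circ\left(\left(\theta_{\mathcal{M}}\right)_{m}\triangleleft(\theta_{\mathcal{A}})_{x}\right)\ \ ,$
recovering a module equipped with a balancing $\theta_{\mathcal{M}}$ compatible with $E$, as described before Definition 3.17.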
We can summarise our discussion in the following proposition.
###### Proposition 3.18.
Point defects in $D\times SO(2)$-structured factorisation homology are
equivalent to equivariant balanced modules.
###### Example 3.19.
Let $\mathcal{C}$ be a boundary condition for a bulk theory $\mathcal{A}$. We
can form a point defect from this boundary condition by removing a small
circle around every marked point and inserting $\mathcal{C}$. On the algebraic
level, the map from boundary conditions to point defects sends $\mathcal{C}$
to the $D$-centre $\mathcal{Z}^{D}(\mathcal{C})$ with the $\mathcal{A}$-action
induced by the functor
$\mathcal{A}\longrightarrow\mathcal{Z}(\mathcal{C})\subset\mathcal{Z}^{D}(\mathcal{C})$.
###### Remark 3.20.
In [BZBJ18b] a different approach to the description of point defects is
taken: They are identified with modules over the value assigned to the annulus
by factorisation homology equipped with the stacking tensor product. The same
approach should work in the situation considered in this section, hence we
expect that equivariant balanced modules over $\mathcal{A}$ can equivalently
be described by graded modules over the graded centre
$\mathcal{Z}^{D}(\mathcal{A})$ equipped with the stacking tensor product.
###### Example 3.21.
Here we set $D=\operatorname{Out}(G)$. For each element
$\kappa\in\operatorname{Out}(G)$, let $h\in G$ act via $\kappa$-twisted
conjugation $\operatorname{Ad}^{\kappa}_{h}(g)=hg\kappa(h^{-1})$ on $G$.
Denote by $C^{\kappa}\subset G$ the orbits of this action, i.e. the
$\kappa$-twisted conjugacy classes of $G$. For each $\kappa$-component of the
$\operatorname{Out}(G)$-centre of ${\mathsf{Rep}}(G)$, we thus get a tensor
functor
$\int_{(\mathbb{S}^{1},\kappa)}{\mathsf{Rep}}(G)\cong\operatorname{QCoh}(G/G)\longrightarrow\operatorname{QCoh}(C^{\kappa}/G)$
where $G$ acts by $\kappa$-twisted conjugation.
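The twisted conjugation $\operatorname{Ad}^{\kappa}_{h}(g)=hg\kappa(h^{-1})$ can be made concrete in a small abelian case. The following sketch is our own illustration, not from the text: the helper `twisted_classes` is hypothetical, and we assume $G=\mathbb{Z}/n$ with $\kappa$ given by inversion (an automorphism, outer here since $\operatorname{Inn}(\mathbb{Z}/n)$ is trivial), so that additively $\operatorname{Ad}^{\kappa}_{h}(g)=h+g-\kappa(h)=g+2h$.

```python
# Illustrative sanity check (not from the text): kappa-twisted conjugacy
# classes for G = Z/n with kappa = inversion. In additive notation
# Ad^kappa_h(g) = h + g - kappa(h) = g + 2h.

def twisted_classes(n):
    """Partition Z/n into kappa-twisted conjugacy classes, kappa = inversion."""
    remaining = set(range(n))
    classes = []
    while remaining:
        g = min(remaining)
        # Orbit of g under Ad^kappa: all translates g + 2h mod n.
        orbit = {(g + 2 * h) % n for h in range(n)}
        classes.append(sorted(orbit))
        remaining -= orbit
    return classes

print(twisted_classes(3))  # [[0, 1, 2]]
print(twisted_classes(4))  # [[0, 2], [1, 3]]
```

For odd $n$ the element $2$ is invertible mod $n$, so there is a single twisted class; for even $n$ the classes are the two parity classes. This contrasts with ordinary (untwisted) conjugation on an abelian group, where every class is a singleton.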
#### 3.4.3 Closed surfaces and marked points
We first compute the value of factorisation homology on a closed, unmarked
surface $\Sigmait$ equipped with a map $\varphi\colon\Sigmait\longrightarrow
BD$. We use a decomposition of $\Sigmait$ into a surface $\Sigmait_{o}$ with
one boundary component and a disk $\mathbb{D}$, see Figure 13.
Figure 13: The surface
$\Sigmait_{o}$ obtained from $\Sigmait$ by removing a disk $\mathbb{D}$.
We denote by $\varphi_{o}$ the restriction of $\varphi$ to $\Sigmait_{o}$
which has trivial holonomy around the boundary $\partial\Sigmait_{o}$ since
the bundle extends to $\Sigmait$. Excision now implies that the value of
factorisation homology on $\Sigmait$ is given by the relative tensor product
$\displaystyle\int_{(\Sigmait,\varphi)}\mathcal{A}\cong\int_{(\Sigmait_{o},\varphi_{o})}\mathcal{A}\underset{{\int_{(\mathbb{S}^{1}\times\mathbb{R},\ast)}\mathcal{A}}}{\boxtimes}\mathcal{A}\
\ ,$ (3.17)
where $\ast\colon\mathbb{S}^{1}\times\mathbb{R}\longrightarrow BD$ is the
constant map at the base point. Given a decorated gluing pattern for
$\Sigmait_{o}$, we showed in Theorem 3.5 that one obtains identifications
$\int_{(\Sigmait_{o},\varphi_{o})}\mathcal{A}\cong a_{P}^{d_{1},\dots,d_{n}}\text{-mod}_{\mathcal{A}},\quad\int_{(\mathbb{S}^{1}\times\mathbb{R},\ast)}\mathcal{A}\cong\mathcal{F}_{\mathcal{A}}^{e}\text{-mod}_{\mathcal{A}}\ \ ,$
via monadic reconstruction in $\mathcal{A}$. Now in order to compute the
relative tensor product (3.17), we have to describe the categorical
factorisation homology internal to the annulus category
$\int_{\mathbb{S}^{1}\times\mathbb{R}}\mathcal{A}$. The techniques to do so
were developed in [BZBJ18b, Section 4], and we will briefly review the main
results that will be used to compute factorisation homology on closed surfaces
with $D$-bundles.
We first recall the notion of a quantum moment map; see [Saf21, Section 3] for
more details. For every $V\in\mathcal{A}$ we have a natural isomorphism, the
so-called “field goal” isomorphism [BZBJ18b, Corollary 4.6]:
$\tau_{V}\colon\mathcal{F}_{\mathcal{A}}\otimes V\longrightarrow V\otimes\mathcal{F}_{\mathcal{A}}\ \ ,$
where $\tau_{V}$ is defined by the string diagram of [BZBJ18b, Corollary 4.6] (diagram not reproduced here).
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-170.71655pt}{42.67914pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-170.71655pt}{56.90552pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-184.94293pt}{42.67914pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-184.94293pt}{56.90552pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-160.37907pt}{33.73158pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{=}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{pgffillcolor}{rgb}{1,1,1}\pgfsys@color@gray@fill{1}\pgfsys@invoke{
}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}{{}{}{{ {}{}}}{ {}{}} {{}{{}}}{{}{}}{}{{}{}}
{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{pgffillcolor}{rgb}{1,1,1}\pgfsys@color@gray@fill{1}\pgfsys@invoke{
}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}{}\pgfsys@rect{-188.49953pt}{28.45276pt}{21.33957pt}{14.22638pt}\pgfsys@fillstroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-177.82974pt}{35.56595pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{pgffillcolor}{rgb}{1,1,1}\pgfsys@color@gray@fill{1}\pgfsys@invoke{
}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}{{}{}{{ {}{}}}{ {}{}} {{}{{}}}{{}{}}{}{{}{}}
{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{pgffillcolor}{rgb}{1,1,1}\pgfsys@color@gray@fill{1}\pgfsys@invoke{
}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}{}\pgfsys@rect{-188.49953pt}{28.45276pt}{21.33957pt}{14.22638pt}\pgfsys@fillstroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-180.17696pt}{32.09373pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$\theta$}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{pgffillcolor}{rgb}{1,1,1}\pgfsys@color@gray@fill{1}\pgfsys@invoke{
}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}{{}{}{{ {}{}}}{ {}{}} {{}{{}}}{{}{}}{}{{}{}}
{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{pgffillcolor}{rgb}{1,1,1}\pgfsys@color@gray@fill{1}\pgfsys@invoke{
}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}{}\pgfsys@rect{-140.83083pt}{28.76073pt}{11.36044pt}{13.61044pt}\pgfsys@fillstroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-137.49782pt}{32.09373pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$\theta$}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{pgffillcolor}{rgb}{1,1,1}\pgfsys@color@gray@fill{1}\pgfsys@invoke{
}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}{{}{}{{ {}{}}}{ {}{}} {{}{{}}}{{}{}}{}{{}{}}
{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{pgffillcolor}{rgb}{1,1,1}\pgfsys@color@gray@fill{1}\pgfsys@invoke{
}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}{}\pgfsys@rect{-126.60445pt}{28.76073pt}{11.36044pt}{13.61044pt}\pgfsys@fillstroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-123.27144pt}{32.09373pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$\theta$}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-184.94293pt}{0.0pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-193.38525pt}{-3.0pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$V\otimes W$}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-143.59293pt}{-3.0pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$V\otimes W$}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-135.1506pt}{14.22638pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-120.92422pt}{14.22638pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-135.1506pt}{28.45276pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-120.92422pt}{28.45276pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-135.1506pt}{42.67914pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-120.92422pt}{42.67914pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-135.1506pt}{56.90552pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-120.92422pt}{56.90552pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \hss}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-99.58466pt}{0.0pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-85.35828pt}{0.0pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-71.1319pt}{0.0pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-56.90552pt}{0.0pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-85.35828pt}{14.22638pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-71.1319pt}{14.22638pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-99.58466pt}{14.22638pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-56.90552pt}{14.22638pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-85.35828pt}{28.45276pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-71.1319pt}{28.45276pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-99.58466pt}{28.45276pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-56.90552pt}{28.45276pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-85.35828pt}{42.67914pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-71.1319pt}{42.67914pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-99.58466pt}{42.67914pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-56.90552pt}{42.67914pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-96.85623pt}{-31.50832pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$+\text{-linked}$}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{0.0pt}{0.0pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{14.22638pt}{0.0pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{28.45276pt}{0.0pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{42.67914pt}{0.0pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{14.22638pt}{14.22638pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{28.45276pt}{14.22638pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{0.0pt}{14.22638pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{42.67914pt}{14.22638pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{14.22638pt}{28.45276pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{28.45276pt}{28.45276pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{0.0pt}{28.45276pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{42.67914pt}{28.45276pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{14.22638pt}{42.67914pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{28.45276pt}{42.67914pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{0.0pt}{42.67914pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{42.67914pt}{42.67914pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{1.86732pt}{-31.50832pt}\pgfsys@invoke{
[Diagram defining $L^{+}$, $N^{+}$ ($+\text{-nested}$) and $U^{+}$ ($+\text{-unlinked}$) in terms of strands labelled $\mathcal{F}_{\mathcal{A}}^{d_{i}}$, $\mathcal{F}_{\mathcal{A}}^{d_{j}}$, $\mathcal{F}_{\mathcal{A}}$ and $V$.]\ \ (3.18)
Now let $A$ be an algebra in $\mathcal{A}$. A quantum moment map is an algebra
map $\mu\colon\mathcal{F}_{\mathcal{A}}\longrightarrow A$ in $\mathcal{A}$
such that the two composites
$A\otimes\mathcal{F}_{\mathcal{A}}\xrightarrow{\;\operatorname{id}\otimes\mu\;}A\otimes A\xrightarrow{\;m\;}A\qquad\text{and}\qquad A\otimes\mathcal{F}_{\mathcal{A}}\xrightarrow{\;\tau_{A}^{-1}\;}\mathcal{F}_{\mathcal{A}}\otimes A\xrightarrow{\;\mu\otimes\operatorname{id}\;}A\otimes A\xrightarrow{\;m\;}A$
agree, i.e. the diagram built from these arrows commutes:
$m\circ(\operatorname{id}\otimes\mu)=m\circ(\mu\otimes\operatorname{id})\circ\tau_{A}^{-1}$.
It is shown in [BZBJ18b, Corollary 4.7] that algebras
$A\in\int_{\mathbb{S}^{1}\times\mathbb{R}}\mathcal{A}$ amount to the data of a
quantum moment map $\mu\colon\mathcal{F}^{e}_{\mathcal{A}}\longrightarrow A$.
As mentioned in Remark 3.20, braided modules are identified in [BZBJ18b] with
module categories over
$\mathcal{F}_{\mathcal{A}}^{e}\text{-mod}_{\mathcal{A}}$, where the latter is
equipped with the tensor product $\otimes_{\mathbb{R}}$ induced by stacking
annuli in the radial direction. Let now $\mathcal{M}$ be a braided module
category and assume there is a progenerator $m\in\mathcal{M}$ for the induced
$\mathcal{A}$-action. In the situation at hand,
$\mathcal{M}=\int_{(\Sigmait_{o},\varphi_{o})}\mathcal{A}$ and the
progenerator is the distinguished object given by the pointing via the
inclusion of the empty manifold. The following reconstruction result for
$\mathcal{M}$ is proven in [BZBJ18b, Theorem 1.1]: There is an equivalence
$\mathcal{M}\cong
A\text{-mod}_{\int_{\mathbb{S}^{1}\times\mathbb{R}}\mathcal{A}},\quad
A=\underline{\text{End}}_{\mathcal{A}}(m)\ \ ,$
where the endomorphism algebra comes with a canonical quantum moment map
$\mu_{\Sigmait_{o}}\colon\mathcal{F}_{\mathcal{A}}\longrightarrow A$. The
right action of $\mathcal{F}_{\mathcal{A}}\text{-mod}_{\mathcal{A}}$ on
$\mathcal{M}$ is then given by [BZBJ18b, Corollary 4.7]:
$\displaystyle A\text{-mod}\boxtimes\mathcal{F}_{\mathcal{A}}\text{-mod}$
$\displaystyle\longrightarrow A\text{-mod}$ (3.19) $\displaystyle V\boxtimes
X$ $\displaystyle\longmapsto V\otimes_{\mathcal{F}_{\mathcal{A}}}X\ \ ,$
(3.20)
where the algebra homomorphism $\mu_{\Sigmait_{o}}$ is used to form the
relative tensor product.
###### Remark 3.22.
Conversely, given an algebra $A\in\mathcal{A}$ and a quantum moment map
$\mu\colon\mathcal{F}_{\mathcal{A}}\longrightarrow A$, the category
$\mathcal{M}=A\text{-mod}_{\mathcal{A}}$ is equipped with the structure of a
braided module category. We refer to [BZBJ18b, Section 4.3] for an explicit
description of the braided module structure that one obtains from the given
quantum moment map $\mu$.
Applying the above reconstruction result to the situation at hand, we get
quantum moment maps
$\displaystyle\mu_{\Sigmait_{o}}\colon\mathcal{F}_{\mathcal{A}}\longrightarrow
a_{P}^{d_{1},\dots,d_{n}}\ \ \text{and}\ \
\mu_{\mathbb{D}}\colon\mathcal{F}_{\mathcal{A}}\longrightarrow
1_{\mathcal{A}}\ \ ,$ (3.21)
which endow $a_{P}^{d_{1},\dots,d_{n}}$ and $1_{\mathcal{A}}$ with the
structure of algebras in $\mathcal{F}_{\mathcal{A}}\text{-mod}_{\mathcal{A}}$.
Finally, by [BZBJ18b, Corollary 4.8], we get:
###### Proposition 3.23.
The factorisation homology on a closed decorated surface $(\Sigmait,\varphi)$
is given by
$\displaystyle\int\displaylimits_{(\Sigmait,\varphi)}\mathcal{A}\cong(a_{P}^{d_{1},\dots,d_{n}}\text{-}\mathrm{mod}\text{-}1_{\mathcal{A}})_{\mathcal{F}_{\mathcal{A}}\text{-}\mathrm{mod}_{\mathcal{A}}}\ \ ,$ (3.22)
the category of $a_{P}^{d_{1},\dots,d_{n}}$-$1_{\mathcal{A}}$-bimodules inside
$\mathcal{F}_{\mathcal{A}}\text{-}\mathrm{mod}_{\mathcal{A}}$.
###### Remark 3.24.
Let $\{x_{1},\dots,x_{r}\}\subset\Sigmait$ be a collection of marked points
on the surface and
$\varphi\colon\Sigmait\setminus\{x_{1},\dots,x_{r}\}\longrightarrow BD$ a
continuous map. Let $\Sigmait_{o}$ be the surface obtained from $\Sigmait$ by
removing a small disk $\mathbb{D}^{d_{i}}$ around each point $x_{i}$, where
the label $d_{i}$ indicates that the holonomy of $\varphi$ around the $i$-th
boundary component $\partial_{i}\Sigmait_{o}$ is given by the group element
$d_{i}\in D$. Let $\mathcal{M}=\bigoplus_{d\in D}\mathcal{M}_{d}$ be an
equivariant balanced right module over $\mathcal{A}$. Applying excision, we
can express factorisation homology over the marked surface $\Sigmait$ via the
following relative tensor product:
$\int_{((\Sigmait,\varphi),\{x_{1},\dots,x_{r}\})}(\mathcal{A},\mathcal{M})\cong\int_{(\Sigmait_{o},\varphi)}\mathcal{A}\underset{\big(\int_{(\mathbb{S}^{1},d_{1})}\mathcal{A}\boxtimes\dots\boxtimes\int_{(\mathbb{S}^{1},d_{r})}\mathcal{A}\big)}{\boxtimes}\Big(\mathcal{M}_{d_{1}}\boxtimes\dots\boxtimes\mathcal{M}_{d_{r}}\Big)\ \ .$
## 4 Quantisation of flat twisted bundles
In this section we describe the Poisson algebra of functions on the moduli
space of flat $\operatorname{Out}(G)$-twisted $G$-bundles on an oriented
surface $\Sigmait$ and its quantisation via factorisation homology over
$\Sigmait$ with coefficients in the ribbon category ${\mathsf{Rep}}_{q}(G)$
equipped with the $\operatorname{Out}(G)$-action defined in Section 2.3.
### 4.1 The moduli space of flat twisted bundles
We first recall some background on twisted bundles in the differential
geometric setting, see for example [Mei17] and [Zer21] for more details and
[MSS22] for the non-flat version. We refer to [BY15] for the original
algebraic geometric definition and extension to wild character varieties. Let
$\Sigmait$ be an oriented surface equipped with a principal
$\operatorname{Out}(G)$-bundle $\mathcal{P}\longrightarrow\Sigmait$. The group
homomorphism
$G\rtimes\operatorname{Out}(G)\longrightarrow\operatorname{Out}(G)$, given by
projection onto the second factor, induces a morphism of smooth
groupoids (smooth groupoids can, for example, be modelled as sheaves of
groupoids on the site of Cartesian spaces as in [BMS21, Section 5.1]; we will
not go into the details here because they will not be important for what
follows) $\operatorname{Bun}^{\text{flat}}_{\operatorname{Out}(G)\rtimes
G}(\Sigmait)\longrightarrow\operatorname{Bun}_{\operatorname{Out}(G)}(\Sigmait)$.
The groupoid of flat $\mathcal{P}$-twisted $G$-bundles is defined as the
homotopy pullback
$\begin{array}{ccc}\operatorname{Bun}_{G\downarrow\mathcal{P}}^{\text{flat}}(\Sigmait)&\longrightarrow&\operatorname{Bun}^{\text{flat}}_{G\rtimes\operatorname{Out}(G)}(\Sigmait)\\[2pt]\big\downarrow&&\big\downarrow\\[2pt]\star&\overset{\mathcal{P}}{\longrightarrow}&\operatorname{Bun}_{\operatorname{Out}(G)}(\Sigmait)\end{array}\ \ .$ (4.1)
The trivial $\mathcal{P}$-twisted $G$-bundle is the bundle
$\mathcal{P}\times_{\operatorname{Out}(G)}(G\rtimes\operatorname{Out}(G))$
associated to $\mathcal{P}$ using the group homomorphism
$\operatorname{Out}(G)\hookrightarrow G\rtimes\operatorname{Out}(G)$,
$\kappa\longmapsto 1\rtimes\kappa$. Note that the automorphisms of the trivial
flat $\mathcal{P}$-twisted $G$-bundle are $G^{\pi_{0}(\Sigmait)}$ and not
$(G\rtimes\operatorname{Out}(G))^{\pi_{0}(\Sigmait)}$ as one might naively
expect.
###### Remark 4.1.
The moduli space of flat $\operatorname{Out}(G)$-twisted bundles on a closed
surface $\Sigmait$ was studied in the differential geometric setting in
[Mei17, Zer21]. In particular, it is shown in loc. cit. that the moduli space
of $\operatorname{Out}(G)$-twisted flat bundles for a compact Lie group $G$
carries a canonical Atiyah–Bott-like symplectic structure. Similar symplectic
structures have been constructed in the algebraic geometric setting on
(wild) twisted character varieties in [BY15].
Since in this paper we obtain our results in the algebraic setting, we will
now give another description of flat twisted bundles that is more suitable for
us, namely the holonomy description of twisted $G$-bundles. We will only
consider surfaces $\Sigmait$ with at least one boundary component and a marked
point $v\in\partial\Sigmait$ on one of the boundary circles. For brevity we
write simply $\pi_{1}(\Sigmait)$ for $\pi_{1}(\Sigmait,v)$. For any group $G$,
we call the space of group homomorphisms
$\operatorname{Hom}(\pi_{1}(\Sigmait),G)$ the $G$-representation variety. It
comes with a natural action of $G$ via conjugation:
$g.\varphi(\gamma)=g\varphi(\gamma)g^{-1}$ for all $g\in G$,
$\gamma\in\pi_{1}(\Sigmait)$ and
$\varphi\in\operatorname{Hom}(\pi_{1}(\Sigmait),G)$. As before, we fix a
principal $\operatorname{Out}(G)$-bundle, here described by a group
homomorphism
$\rho\colon\pi_{1}(\Sigmait)\longrightarrow\operatorname{Out}(G)$. Such a map
$\rho$ is given by picking an element $\kappa\in\operatorname{Out}(G)$ for
every generator in $\pi_{1}(\Sigmait)$. Then, an element in the $\rho$-twisted
$G$-representation variety is a lift
$\begin{tikzcd} & {G\rtimes\operatorname{Out}(G)} \arrow[d] \\\ {\pi_{1}(\Sigmait)} \arrow[r, "\scriptstyle{\rho}"'] \arrow[ur, dashed] & {\operatorname{Out}(G)\ \ .} \end{tikzcd}$ (4.2)
We write $\operatorname{Hom}_{\rho}(\pi_{1}(\Sigmait),G)$ to denote the space
of lifts. Concretely, elements in
$\operatorname{Hom}_{\rho}(\pi_{1}(\Sigmait),G)$ can be described by maps
$\varphi\colon\pi_{1}(\Sigmait)\longrightarrow G$, which are such that
$\varphi(\gamma_{1}\circ\gamma_{2})=\varphi(\gamma_{1})\rho(\gamma_{1}).\varphi(\gamma_{2})$.
The group $G$ acts via twisted conjugation, i.e. the action of an element
$g\in G$ is given by $\varphi(\gamma)\longmapsto
g\varphi(\gamma)\rho(\gamma).g^{-1}$. Given a set $E$ of free generators of
$\pi_{1}(\Sigmait)$, we get an identification
$\operatorname{Hom}_{\rho}(\pi_{1}(\Sigmait),G)\cong G^{E}$.
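As an illustration (this example is ours, not part of the original text), consider the simplest case of an annulus, where $\pi_{1}(\Sigmait)\cong\mathbb{Z}$ is freely generated by a single loop $\gamma$, with $\rho(\gamma)=\kappa$. A lift is determined by the single element $g=\varphi(\gamma)\in G$, and the twisted multiplicativity $\varphi(\gamma_{1}\circ\gamma_{2})=\varphi(\gamma_{1})\,\rho(\gamma_{1}).\varphi(\gamma_{2})$ forces

```latex
\varphi(\gamma^{n}) \;=\; g\,\kappa(g)\,\kappa^{2}(g)\cdots\kappa^{n-1}(g)\ ,
\qquad n\geq 1\ \ ,
```

so that $\operatorname{Hom}_{\rho}(\mathbb{Z},G)\cong G$, with $h\in G$ acting by the twisted conjugation $g\longmapsto hg\,\kappa(h)^{-1}$.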
There is a bijective correspondence between elements in the twisted
representation variety $\operatorname{Hom}_{\rho}(\pi_{1}(\Sigmait),G)$ and
elements in
$\mathcal{M}^{\circ}_{\rho}(\Sigmait)\coloneqq\\{\text{Isomorphism classes of
flat twisted }G\text{-bundles with trivialisation over }v\in\Sigmait\\}\ \ ,$
which is established via the holonomy map. The group $G$ acts on
$\mathcal{M}_{\rho}^{\circ}(\Sigmait)$ by changing the trivialisation. The
moduli space of flat twisted bundles is then given by the quotient stack
$\mathcal{M}_{\rho}(\Sigmait)=\mathcal{M}_{\rho}^{\circ}(\Sigmait)/^{\rho}G\ \
,$
where the notation $/^{\rho}$ indicates that $G$ acts via twisted conjugation.
#### 4.1.1 The twisted Fock-Rosly Poisson structure
For the remainder of this section, $\Sigmait$ is a connected surface with at
least one boundary component. We will give an explicit description of the
Poisson structure on $\mathcal{M}^{\circ}_{\rho}(\Sigmait)$, following the
strategy of Fock and Rosly [FR98] using lattice gauge theory.
We choose a ciliated fat graph model for $\Sigmait$ with one vertex and edges
$E=\\{e_{1},\dots,e_{n}\\}$, constructed from a gluing pattern for $\Sigmait$
as defined in Section 3.2. Furthermore, we choose an
$\operatorname{Out}(G)$-labeling $\\{\kappa_{1},\dots,\kappa_{n}\\}$ of the
gluing pattern describing the twisting principal
$\operatorname{Out}(G)$-bundle $\rho$. The fundamental group of $\Sigmait$ is
freely generated by the edges $E$ of the graph model, as depicted in Figure
14. Using the holonomy description from the previous section, we can
characterise a $\rho$-twisted bundle on $\Sigmait$ by a graph connection, that
is, a labeling of every edge $e_{i}\in E$ with a group element $g_{i}\in G$:
$\text{hol}\colon\mathcal{M}_{\rho}^{\circ}(\Sigmait)\xrightarrow{\cong}\operatorname{Hom}_{\rho}(\pi_{1}(\Sigmait,v),G)=G^{E}\
\ .$
This identification chooses an orientation for every edge in the fat graph
model which we choose to agree with the natural orientation coming from the
gluing pattern. Hence, we get an identification
$\mathcal{M}_{\rho}(\Sigmait)\cong G^{E}/^{\rho}G\ \ ,$
where $h\in G$ acts via twisted conjugation
$(g_{e_{1}},\dots,g_{e_{n}})\longmapsto(hg_{e_{1}}\kappa_{1}(h)^{-1},\dots,hg_{e_{n}}\kappa_{n}(h)^{-1})\
\ .$ (4.3)
In this way, we consider the algebra of algebraic functions on $G^{E}$ as an
object of ${\mathsf{Rep}}(G)$, and we denote this algebra by $\mathcal{O}^{\rho}(G^{E})$.
Quasi-coherent sheaves on $\mathcal{M}_{\rho}(\Sigmait)$ can now be identified
with modules over $\mathcal{O}^{\rho}(G^{E})$ in ${\mathsf{Rep}}(G)$.
###### Proposition 4.2.
Let $\Sigmait$ be a surface of genus $g$ and with $r\geq 1$ boundary
components. Given a principal $\operatorname{Out}(G)$-bundle
$\rho\colon\pi_{1}(\Sigmait)\longrightarrow\operatorname{Out}(G)$, described
by the elements $\kappa_{1},\dots,\kappa_{2g+r-1}\in\operatorname{Out}(G)$,
and a gluing pattern $P$ for $\Sigmait$, there is an isomorphism
$\mathcal{O}^{\rho}(G^{2g+r-1})\cong a_{P}^{\kappa_{1},\dots,\kappa_{2g+r-1}}$
of algebras in ${\mathsf{Rep}}(G)$.
###### Proof.
To establish the isomorphism on the level of vector spaces, we use the
algebraic Peter-Weyl theorem:
$\mathcal{O}(G)\cong\bigoplus_{V}V^{\vee}\otimes V\ \ ,$
where the sum on the right hand side is over all irreducible representations
of $G$ and $\mathcal{O}(G)$ is the Hopf algebra of matrix coefficients of
irreducible $G$-representations. Next we take into account the twist by a
given automorphism $\kappa\in\operatorname{Out}(G)$: a group element $h\in G$
acts on $\phi\in\mathcal{O}^{\kappa}(G)$ via
$h\triangleright\phi=\phi(h^{-1}(-)\kappa(h))$. As explained in Example 3.2,
we thus get an isomorphism
$\mathcal{O}^{\kappa}(G)\cong\bigoplus_{V}V^{\vee}\otimes\kappa^{*}V=\mathcal{F}_{{\mathsf{Rep}}(G)}^{\kappa}$
compatible with the $G$-action. ∎
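The algebraic Peter-Weyl theorem has a finite-group shadow that is easy to test numerically: for a finite group, the matrix coefficients of the irreducible representations form a basis of the functions on the group. The following sketch (an illustration with $G=S_{3}$, not part of the original text) builds the two one-dimensional irreps and the standard two-dimensional irrep, and checks that their six matrix coefficients span the six-dimensional function space.

```python
import itertools
import numpy as np

# Elements of S3 as 3x3 permutation matrices: P e_i = e_{p(i)}.
def perm_matrix(p):
    m = np.zeros((3, 3))
    for i, j in enumerate(p):
        m[j, i] = 1.0
    return m

elements = [perm_matrix(p) for p in itertools.permutations(range(3))]

# Orthonormal basis of the standard rep: the plane x + y + z = 0 in R^3.
u1 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
u2 = np.array([1.0, 1.0, -2.0]) / np.sqrt(6)
B = np.column_stack([u1, u2])      # 3x2, orthonormal columns

rows = []
for P in elements:
    D = B.T @ P @ B                # 2x2 standard-rep matrix
    triv = 1.0                     # trivial representation
    sgn = np.linalg.det(P)         # sign representation
    rows.append([triv, sgn, D[0, 0], D[0, 1], D[1, 0], D[1, 1]])

M = np.array(rows)                 # 6 group elements x 6 matrix coefficients

# Peter-Weyl, finite-group version: matrix coefficients of the irreps
# are linearly independent, hence a basis of the functions on S3.
assert np.linalg.matrix_rank(M) == 6
# Dimension count: sum over irreps of (dim V)^2 = 1 + 1 + 4 = 6 = |S3|.
```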
In combination with Theorem 3.5, the above result shows that
$\int_{(\Sigmait,\rho)}{\mathsf{Rep}}(G)$ agrees with the category of quasi-
coherent sheaves on the moduli space $\mathcal{M}_{\rho}(\Sigmait)$ of twisted
bundles. Note that $G^{E}$ is a finite dimensional smooth algebraic variety
and independent of the concrete form of the gluing pattern or topology of
$\Sigmait$. However, we will see shortly that the Poisson structure is
sensitive to the topology.
Figure 14: Generators of the fundamental group for an $r$-punctured genus $g$
surface.
In order to describe the Poisson structure on the representation variety
$\mathcal{M}_{\rho}^{\circ}(\Sigmait)$, we notice that there is an equivariant
embedding
$\displaystyle\iota\colon G^{E}$
$\displaystyle\longrightarrow(G\rtimes\operatorname{Out}(G))^{E}$ (4.4)
$\displaystyle(g_{e_{1}},\dots,g_{e_{n}})$
$\displaystyle\longmapsto(g_{e_{1}}\rtimes\kappa_{1},\dots,g_{e_{n}}\rtimes\kappa_{n})$ (4.5)
which identifies $G^{E}$ with a connected component of
$(G\rtimes\operatorname{Out}(G))^{E}$ since $\operatorname{Out}(G)$ is
discrete. The $G$-action on the right side is via the embedding
$G\longrightarrow G\rtimes\operatorname{Out}(G)$ and conjugation inside
$G\rtimes\operatorname{Out}(G)$. Using the gluing pattern for $\Sigmait$,
together with the choice of an $\operatorname{Out}(G)$-invariant classical
$r$-matrix
$r\in\left(\mathfrak{g}\otimes\mathfrak{g}\right)^{\operatorname{Out}(G)}$
(for example, the semi-classical limit of the quantum R-matrix $\mathcal{R}$
of $U_{\hbar}(\mathfrak{g})$ is $\operatorname{Out}(G)$-invariant, see
Proposition 2.13),
Fock and Rosly’s construction [FR98] gives a Poisson structure
$\pi_{\text{FR}}$ on $(G\rtimes\operatorname{Out}(G))^{E}$, such that the
action of $G\rtimes\operatorname{Out}(G)$ is Poisson-Lie. Pulling back
$\pi_{\text{FR}}$ along $\iota$, we get the desired Poisson structure on
$\mathcal{M}_{\rho}^{\circ}(\Sigmait)$, which is compatible with the twisted
$G$-action. In Proposition 4.3 below we give an explicit formula for the
Poisson structure $\pi_{\mathcal{M}_{\rho}^{\circ}(\Sigmait)}$ we just
described on $\mathcal{M}_{\rho}^{\circ}(\Sigmait)$, which is a twisted
version of the Fock-Rosly Poisson structure on $G^{E}$ given in [FR98,
Proposition 3].
###### Proposition 4.3.
Let the surface $\Sigmait$ be represented by a ciliated fat graph with one
vertex $v$ and a set $E$ of edges. Let
$(x_{i})_{i=1,\dots,\text{dim}(\mathfrak{g})}$ be a basis of $\mathfrak{g}$.
Then for a given choice $r=r^{ij}x_{i}\otimes
x_{j}\in\left(\mathfrak{g}\otimes\mathfrak{g}\right)^{\operatorname{Out}(G)}$
of $\operatorname{Out}(G)$-invariant classical $r$-matrix there is a Poisson
structure on $\mathcal{M}_{\rho}^{\circ}(\Sigmait)$ given by the bivector
$\pi_{\mathcal{M}_{\rho}(\Sigmait)}=\sum_{\alpha\prec\beta}r^{ij}x_{i}(\alpha)\wedge
x_{j}(\beta)+\frac{1}{2}\sum_{\alpha}r^{ij}x_{i}(\alpha)\wedge x_{j}(\alpha)$
where $\alpha$ and $\beta$ run over the set of half-edges (we break up the
edges of the graph, so that from each edge we get an incoming and an outgoing
half-edge at the vertex $v$; since the chosen graph is ciliated, we get an
ordering $\prec$ on the set of half-edges) and
$x_{i}(\alpha)\coloneqq\begin{cases}-x_{i}^{R}(\alpha),&\alpha\text{ is
incoming at }v\\\ (\kappa_{\alpha})_{*}x_{i}^{L}(\alpha),&\alpha\text{ is
outgoing at }v\end{cases}$
where $x^{R/L}_{i}(\alpha)$ denotes the right/left-invariant vector field of
$x_{i}$ acting on the $\alpha$-copy of $G^{E}$. Furthermore, the induced
Poisson structure on the subalgebra of $G$-invariant functions is independent
of the chosen fat graph model for $\Sigmait$.
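Proposition 4.3 takes a classical $r$-matrix as input. As a concrete sanity check (an illustration of the untwisted case with trivial $\operatorname{Out}(G)$-action; the specific example is ours, not from the paper), the standard classical $r$-matrix of $\mathfrak{sl}_{2}$, $r=e\otimes f+\tfrac{1}{4}h\otimes h$, satisfies the classical Yang-Baxter equation $[r_{12},r_{13}]+[r_{12},r_{23}]+[r_{13},r_{23}]=0$, which can be verified numerically in the fundamental representation:

```python
import numpy as np

# sl2 in the fundamental representation.
e = np.array([[0.0, 1.0], [0.0, 0.0]])
f = np.array([[0.0, 0.0], [1.0, 0.0]])
h = np.array([[1.0, 0.0], [0.0, -1.0]])
I = np.eye(2)

# Standard classical r-matrix of sl2: r = e (x) f + (1/4) h (x) h,
# stored as a list of (first-slot, second-slot) tensor factors.
terms = [(e, f), (0.25 * h, h)]

def place(a, b, slots):
    # Embed a (x) b into (C^2)^{(x)3} in the given pair of tensor slots.
    mats = [I, I, I]
    mats[slots[0]] = a
    mats[slots[1]] = b
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

def r_at(slots):
    return sum(place(a, b, slots) for a, b in terms)

r12, r13, r23 = r_at((0, 1)), r_at((0, 2)), r_at((1, 2))
comm = lambda x, y: x @ y - y @ x

# Classical Yang-Baxter equation: the sum of commutators vanishes.
cybe = comm(r12, r13) + comm(r12, r23) + comm(r13, r23)
assert np.allclose(cybe, 0.0)
```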
### 4.2 Quantisation
In Section 3.2 we constructed an algebra
$a_{P}^{\kappa_{1},\dots,\kappa_{n}}$, $n=2g+r-1$, from a combinatorial
presentation of the decorated surface $\Sigmait$. We now explain how these
algebras provide a deformation quantisation of the twisted Fock-Rosly Poisson
structure on $\mathcal{M}_{\rho}^{\circ}(\Sigmait)$. To that end, we consider
$a_{P}^{\kappa_{1},\dots,\kappa_{n}}$ as an object in the representation
category ${\mathsf{Rep}}_{\hbar}(G)$ of the formal quantum group. It is the
tensor product $\bigotimes_{i=1}^{n}\mathcal{O}_{\hbar}^{\kappa_{i}}(G)$,
where each $\mathcal{O}_{\hbar}^{\kappa_{i}}(G)$ is a $\kappa_{i}$-twisted REA
of quantised coordinate functions. The multiplication on the tensor product is
defined in terms of the crossing morphisms depicted in Figure 6. We will show
in Theorem 4.4 that for all elements
$f_{\hbar}^{\kappa_{i}}\in\mathcal{O}_{\hbar}^{\kappa_{i}}$ and
$g_{\hbar}^{\kappa_{j}}\in\mathcal{O}_{\hbar}^{\kappa_{j}}$ we have
$\frac{[f_{\hbar}^{\kappa_{i}},g_{\hbar}^{\kappa_{j}}]}{\hbar}\text{
mod}(\hbar)=\\{f^{\kappa_{i}},g^{\kappa_{j}}\\}\ \ ,$
where $\\{\cdot,\cdot\\}$ is the twisted Fock-Rosly Poisson structure from
Proposition 4.3, and $f^{\kappa_{i}}=f_{\hbar}^{\kappa_{i}}\text{
mod}(\hbar)\in\mathcal{O}^{\kappa_{i}}(G)$, and similarly for
$g^{\kappa_{j}}$.
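The defining property $[f_{\hbar},g_{\hbar}]/\hbar\ \mathrm{mod}\,(\hbar)=\\{f,g\\}$ is the general hallmark of a deformation quantisation. The following sketch illustrates the principle in the simplest possible setting, the Moyal-Weyl star product on $\mathbb{R}^{2}$ truncated at first order in $\hbar$ (an illustration of the general notion only; this is not the REA quantisation of the paper):

```python
import sympy as sp

x, p, hbar = sp.symbols('x p hbar')

def star(f, g):
    # Moyal-Weyl star product, truncated at first order in hbar.
    return f * g + hbar / 2 * (sp.diff(f, x) * sp.diff(g, p)
                               - sp.diff(f, p) * sp.diff(g, x))

def poisson(f, g):
    # Canonical Poisson bracket on R^2 with coordinates (x, p).
    return sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)

f, g = x**2, x * p

# [f, g]_star / hbar mod hbar equals the Poisson bracket {f, g}.
commutator = sp.expand(star(f, g) - star(g, f))
semiclassical = sp.expand(commutator / hbar)  # exact here: no higher orders
assert sp.simplify(semiclassical - poisson(f, g)) == 0
```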
We present a reformulation of the Poisson structure on
$\mathcal{M}_{\rho}^{\circ}(\Sigmait)$ that will prove useful for what
follows. Let $r=\omega+t$ be the decomposition of the classical r-matrix into
an anti-symmetric part $\omega$ and an invariant symmetric element $t$. For a
given automorphism $\kappa\in\operatorname{Out}(G)$, define the bivector field
$\displaystyle\pi^{\kappa}_{\text{STS}}\coloneqq\omega^{\operatorname{ad}(\kappa),\operatorname{ad}(\kappa)}+t^{R,L(\kappa)}-t^{L(\kappa),R}\
\ ,$ (4.6)
where the superscripts indicate that the action by left-invariant vector
fields is twisted by the automorphism $\kappa$, and we used the notation
$x^{\operatorname{ad}(\kappa)}=x^{R}-\kappa_{*}x^{L}$ for the vector field
generated by the element $x\in\mathfrak{g}$ via the twisted adjoint action
$h\longmapsto gh\kappa(g^{-1})$ of $G$ on itself. In the case $\kappa=e$, the
bivector field $\pi^{e}_{\text{STS}}$ agrees with the Semenov-Tian-Shansky
(STS) Poisson structure on $G$, see [STS94]. Using the decorated gluing
pattern $(P,\\{\kappa_{1},\dots,\kappa_{2g+r-1}\\})$ for $\Sigmait$, we define
the bivector
$\displaystyle\pi=\sum_{\alpha\in
E}\pi_{\text{STS}}^{\kappa_{\alpha}}+\sum_{\alpha<\beta}\left(\pi_{\alpha,\beta}-\pi_{\beta,\alpha}\right)\
\ ,$ (4.7)
where $\pi_{\alpha,\beta}$ is a 2-tensor, acting on the $\alpha$-component of
the first factor and on the $\beta$-component of the second factor of
$G^{E}\times G^{E}$, and is defined by
$\displaystyle\pi_{\alpha,\beta}\coloneqq\begin{cases}-r_{2,1}^{\operatorname{ad}(\kappa_{\alpha}),\operatorname{ad}(\kappa_{\beta})}&\text{,
if $\alpha$ and $\beta$ are positively unlinked}\\\
-r_{2,1}^{\operatorname{ad}(\kappa_{\alpha}),\operatorname{ad}(\kappa_{\beta})}-2t^{L(\kappa_{\alpha}),R}&\text{,
if $\alpha$ and $\beta$ are positively linked}\\\
-r_{2,1}^{\operatorname{ad}(\kappa_{\alpha}),\operatorname{ad}(\kappa_{\beta})}-2t^{L(\kappa_{\alpha}),R}+2t^{L(\kappa_{\alpha}),L(\kappa_{\beta})}&\text{,
if $\alpha$ and $\beta$ are positively nested}\end{cases}$ (4.8)
And similarly, the 2-tensor $\pi_{\beta,\alpha}$ acts on the $\beta$-component
of the first factor and on the $\alpha$-component of the second factor of
$G^{E}\times G^{E}$ and is defined as
$\pi_{\beta,\alpha}=\tau(\pi_{\alpha,\beta})$, where $\tau$ swaps the two
tensor factors. Similarly, for the remaining three cases, we define
$\displaystyle\pi_{\alpha,\beta}\coloneqq\begin{cases}r_{1,2}^{\operatorname{ad}(\kappa_{\alpha}),\operatorname{ad}(\kappa_{\beta})}&\text{,
if $\alpha$ and $\beta$ are negatively unlinked}\\\
r_{1,2}^{\operatorname{ad}(\kappa_{\alpha}),\operatorname{ad}(\kappa_{\beta})}+2t^{R,L(\kappa_{\beta})}&\text{,
if $\alpha$ and $\beta$ are negatively linked}\\\
r_{1,2}^{\operatorname{ad}(\kappa_{\alpha}),\operatorname{ad}(\kappa_{\beta})}+2t^{R,L(\kappa_{\beta})}-2t^{L(\kappa_{\alpha}),L(\kappa_{\beta})}&\text{,
if $\alpha$ and $\beta$ are negatively nested}\end{cases}$ (4.9)
and set again $\pi_{\beta,\alpha}=\tau(\pi_{\alpha,\beta})$. A direct
computation shows that $\pi$ agrees with the twisted Fock-Rosly Poisson
structure defined in Proposition 4.3.
###### Theorem 4.4.
The algebra $a_{P}^{\kappa_{1},\dots,\kappa_{2g+r-1}}$ is a quantisation of
the twisted Fock-Rosly Poisson structure on
$\mathcal{M}^{\circ}_{\rho}(\Sigmait)\cong G^{2g+r-1}$. Its subalgebra of
$U_{\hbar}(\mathfrak{g})$-invariants does not depend on the choice of the
gluing pattern $P$ and is a quantisation of the Poisson structure on the
affine quotient $\mathcal{M}_{\rho}^{\circ}(\Sigmait)\text{//}G$.
###### Proof.
First, we show that the quasi-classical limit of the commutator of two
quantised functions in $\mathcal{O}_{\hbar}^{\kappa}(G)$ agrees with the
$\kappa$-twisted STS Poisson structure $\pi_{\text{STS}}^{\kappa}$. We recall
from Example 3.2 that the multiplication in the $\kappa$-twisted REA
$\mathcal{O}_{\hbar}^{\kappa}(G)$ is related to the multiplication in the FRT-
algebra via a twisting cocycle given in terms of R-matrices. The commutator in
the (untwisted) FRT-algebra $H^{\circ}$, $H=U_{\hbar}(\mathfrak{g})$, can be
computed by acting with
$(1\otimes^{\operatorname{rev}}1)\boxtimes(1\otimes
1)-(\mathcal{R}_{2}^{-1}\otimes^{\operatorname{rev}}\mathcal{R}^{-1}_{1})\boxtimes(\mathcal{R}^{\prime}_{2}\otimes\mathcal{R}^{\prime}_{1})$
on the components $V^{\vee}\otimes^{\operatorname{rev}}W^{\vee}\boxtimes
V\otimes W$, for $V,W\in{\mathsf{Rep}}_{\hbar}(G)$, since the multiplication
in the FRT-algebra is given by the Hopf pairing $\langle-,-\rangle$ between
$H^{\circ}$ and $H$:
$\langle
m_{\text{FRT}}(\phi\psi),h\rangle=\langle\phi\otimes\psi,\Delta(h)\rangle,\quad\phi,\psi\in
H^{\circ},h\in H$
and $\Delta(-)=\mathcal{R}^{-1}\Delta^{\text{op}}(-)\mathcal{R}$. Now we take
into account the twist by $\kappa$, as well as the twisting cocycle
$\mathcal{R}^{\prime}_{1}\otimes\kappa.\mathcal{R}_{1}\otimes\mathcal{R}^{\prime}_{2}\mathcal{R}_{2}\otimes
1$, to compute the commutator in $\mathcal{O}_{\hbar}^{\kappa}(G)$ component-
wise by acting with
$\displaystyle(\mathcal{R}^{\prime}_{1}\otimes^{\operatorname{rev}}\mathcal{R}^{\prime}_{2}\mathcal{R}_{2})\boxtimes(\kappa.\mathcal{R}_{1}\otimes
1)-C\circ(\mathcal{R}^{\prime}_{2}\mathcal{R}_{2}\otimes^{\operatorname{rev}}\mathcal{R}^{\prime}_{1})\boxtimes(1\otimes\kappa.\mathcal{R}_{1})$
(4.10) $\displaystyle\text{where
}C=(\mathcal{R}_{2}^{-1}\otimes^{\operatorname{rev}}\mathcal{R}^{-1}_{1})\boxtimes(\kappa.\mathcal{R}^{\prime}_{2}\otimes\kappa.\mathcal{R}^{\prime}_{1})$
(4.11)
on $V^{\vee}\otimes^{\operatorname{rev}}W^{\vee}\boxtimes V\otimes W$. To
compute the quasi-classical limit of the action (4.10), we use that in the
limit $\exp(\hbar)\longrightarrow 1$, the R-matrix has the following
expansion: $\mathcal{R}=1+\hbar r+\mathcal{O}(\hbar^{2})$, where
$r=r_{1}\otimes r_{2}\in\mathfrak{g}^{\otimes 2}$ is the classical r-matrix.
Explicitly, the quasi-classical limit of (4.10) is
$r^{3(\kappa),2}+r^{1,2}-r^{4(\kappa),1}-r^{2,1}+r^{2,1}-r^{4,3}\in
U(\mathfrak{g})^{\otimes 4}\ \ ,$
where for instance $r^{3(\kappa),2}=1\otimes r_{2}\otimes
r_{1}^{\kappa}\otimes 1\in U(\mathfrak{g})^{\otimes 4}$ and the superscript
$\kappa$ means that the respective action will be twisted by $\kappa$. More
explicitly, the first two copies of $U(\mathfrak{g})^{\otimes 4}$ act on
$\mathcal{O}^{\kappa}(G)$ via $x\longmapsto x^{r}$, for $x\in\mathfrak{g}$,
and the last two copies act via $x\longmapsto-\kappa_{*}x^{L}$. Thus, we find
that the quasi-classical limit of the commutator is the bivector field on $G$
given by
$\displaystyle-r^{L(\kappa),R}+r^{R,R}+r_{2,1}^{R,L(\kappa)}-r_{2,1}^{L,L}$
$\displaystyle=\omega^{\operatorname{ad}(\kappa),\operatorname{ad}(\kappa)}+t^{R,L(\kappa)}-t^{L(\kappa),R}$
$\displaystyle=\pi_{\text{STS}}^{\kappa}\ \ ,$
where we used that $r^{R,R}-r_{2,1}^{L,L}=\omega^{R,R}+\omega^{L,L}$.
Next, we prove the claim for two positively unlinked edges $\alpha<\beta$. We
recall that the crossing morphism for two unlinked edges $\alpha<\beta$ is
given by acting on
$\mathcal{O}^{\kappa_{\beta}}_{\hbar}(G)\otimes\mathcal{O}^{\kappa_{\alpha}}_{\hbar}(G)$
with
$\displaystyle U^{+}$ $\displaystyle=\tau_{12,34}\circ(\mathcal{R}_{1}\otimes
1\otimes
1\otimes\kappa_{\alpha}.\mathcal{R}_{2})(1\otimes\kappa_{\beta}.\mathcal{R}_{1}\otimes
1\otimes\kappa_{\alpha}.\mathcal{R}_{2})(\mathcal{R}_{1}\otimes
1\otimes\mathcal{R}_{2}\otimes
1)(1\otimes\kappa_{\beta}.\mathcal{R}_{1}\otimes\mathcal{R}_{2}\otimes 1)$
$\displaystyle\coloneqq\tau_{12,34}\circ\widetilde{U}^{+}$
Hence, the commutator on components
$\phi\otimes\kappa_{\alpha}^{*}v\in\mathcal{O}^{\kappa_{\alpha}}_{\hbar}(G)$
and $\psi\otimes\kappa_{\beta}^{*}w\in\mathcal{O}^{\kappa_{\beta}}_{\hbar}(G)$
can be computed via
$(m_{\mathcal{O}^{\kappa_{\alpha}}_{\hbar}(G)}\otimes
m_{\mathcal{O}^{\kappa_{\beta}}_{\hbar}(G)})\circ(1-(U^{+})^{7,8,1,2})(\phi\otimes\kappa_{\alpha}^{*}v\otimes
1^{\otimes 4}\otimes\psi\otimes\kappa_{\beta}^{*}w)\ \ .$
Taking the quasi-classical limit of this action thus amounts to
$\frac{1-\tau(\widetilde{U}^{+})}{\hbar}~{}\text{mod}(\hbar)=-r^{3,2(\kappa_{\alpha})}-r^{4(\kappa_{\beta}),2(\kappa_{\alpha})}-r^{3,1}-r^{4(\kappa_{\beta}),1}\in
U(\mathfrak{g})^{\otimes 4}\ \,$ (4.12)
where this time the first and third copy in $U(\mathfrak{g})^{\otimes 4}$ act
via $x\longmapsto x^{R}$ and the second and the fourth copy via
$x\longmapsto-\kappa_{*}x^{L}$, so that the right hand side of (4.12) acts on
$\mathcal{O}^{\kappa_{\alpha}}(G)\otimes\mathcal{O}^{\kappa_{\beta}}(G)$ via
$-r_{2,1}^{\operatorname{ad}(\kappa_{\alpha}),\operatorname{ad}(\kappa_{\beta})}$,
which agrees with $\pi_{\alpha,\beta}$ as claimed. Similarly, for two
positively linked edges we have
$\frac{1-\tau(\widetilde{L}^{+})}{\hbar}\text{
mod}(\hbar)=r^{2(\kappa_{\alpha}),3}-r^{4(\kappa_{\beta}),2(\kappa_{\alpha})}-r^{3,1}-r^{4(\kappa_{\beta}),1}\
\,$
and we see that in the positively linked case the 2-tensor
$\pi_{\alpha,\beta}$ differs from the unlinked case by adding a term
$-2t^{L(\kappa_{\alpha}),R}$. Lastly, for two positively nested edges we find
$\frac{1-\tau(\widetilde{N}^{+})}{\hbar}\text{
mod}(\hbar)=r^{2(\kappa_{\alpha}),3}+r^{2(\kappa_{\alpha}),4(\kappa_{\beta})}-r^{3,1}-r^{4(\kappa_{\beta}),1}\
\,$
which differs from the linked case by adding the term
$2t^{L(\kappa_{\alpha}),L(\kappa_{\beta})}$, which ends the proof for the
positively unlinked, linked, and nested cases. The remaining three cases can be
worked out analogously. ∎
## References
* [ADPW91] S. Axelrod, S. Della Pietra, E. Witten. Geometric quantization of Chern-Simons gauge theory. Journal of Differential Geometry (1991). 33(3):787–902.
* [AF19] D. Ayala, J. Francis. A factorization homology primer. arXiv:1903.10961 (2019).
* [AF15] D. Ayala, J. Francis. Factorization homology of topological manifolds. Journal of Topology (2015). 8(4):1045–1084.
* [AFT17] D. Ayala, J. Francis, H. L. Tanaka. Factorization homology of stratified spaces. Selecta Mathematica (2017). 23(1):293–362.
* [BD95] J. C. Baez, J. Dolan. Higher dimensional algebra and topological quantum field theory. Journal of Mathematical Physics (1995). 36:6073–6105.
* [BJS21] A. Brochier, D. Jordan, N. Snyder. On dualizability of braided tensor categories. Compositio Mathematica (2021). 157(3):435–483.
* [BMS21] S. Bunk, L. Müller, and R. J. Szabo. Smooth 2-group extensions and symmetries of bundle gerbes. Communications in Mathematical Physics (2021). 384:1829–1911.
* [Bro12] A. Brochier. A Kohno-Drinfeld theorem for the monodromy of cyclotomic KZ connections. Communications in Mathematical Physics (2012). 311:55–96.
* [Bro13] A. Brochier. Cyclotomic associators and finite type invariants for tangles in the solid torus. Algebraic & Geometric Topology (2013). 13:3365–3409.
* [BY15] P. Boalch, D. Yamakawa. Twisted wild character varieties. arXiv:1512.08091 (2015).
* [BZBJ18a] D. Ben-Zvi, A. Brochier, D. Jordan. Integrating quantum groups over surfaces. Journal of Topology (2018). 11(4):874–917.
* [BZBJ18b] D. Ben-Zvi, A. Brochier, D. Jordan. Quantum character varieties and braided monoidal categories. Selecta Mathematica (2018). 24(5):4711–4748.
* [BZN13] D. Ben-Zvi, D. Nadler. Loop spaces and representations. Duke Mathematical Journal (2013). 162(9):1587–1619.
* [BZN16] D. Ben-Zvi, D. Nadler. Betti geometric Langlands. arXiv:1606.08523 (2016).
* [CG20] D. Calaque, M. Gonzalez. A moperadic approach to cyclotomic associators. arXiv:2004.00572 (2020).
* [CP95] V. Chari, A. N. Pressley. A guide to quantum groups. Cambridge University Press, Cambridge, (1995).
* [DM03] J. Donin, A. Mudrov. Reflection equation, twist, and equivariant quantization. Israel Journal of Mathematics (2003). 136:11–28.
* [DSPS20] C. L. Douglas, C. Schommer-Pries, N. Snyder. Dualizable tensor categories. Memoirs of the American Mathematical Society (2020). 268(1308).
* [Enr08] B. Enriquez. Quasi-reflection algebras and cyclotomic associators. Selecta Mathematica (2008). 13:391–463.
* [FPSV15] J. Fuchs, J. Priel, C. Schweigert, A. Valentino. On the Brauer groups of symmetries of abelian Dijkgraaf-Witten theories. Communications in Mathematical Physics (2015). 339(2):385–405.
* [FR98] V. V. Fock, A. A. Rosly. Poisson structure on moduli of flat connections on Riemann surfaces and $r$-matrix. arXiv:math/9802054 (1998).
* [Fre17-I] B. Fresse. Homotopy of operads and Grothendieck-Teichmüller groups. Part 1: The Algebraic Theory and its Topological Background. Mathematical Surveys and Monographs 217, American Mathematical Society, Providence, RI (2017).
* [FSS17] J. Fuchs, G. Schaumann, C. Schweigert. A trace for bimodule categories. Applied Categorical Structures (2017). 25:227–268.
* [FSV15] J. Fuchs, C. Schweigert, A. Valentino. Bicategories for boundary conditions and for surface defects in 3-d TFT. Communications in Mathematical Physics (2015). 321:543–575.
* [Gal17] C. Galindo. Coherence for monoidal $G$-categories and braided $G$-crossed categories. Journal of Algebra (2017). 487:118–137.
* [Gan18] I. Ganev. The wonderful compactification for quantum groups. Journal of the London Mathematical Society (2018). 99(2):778–806.
* [Gin15] G. Ginot. Notes on factorization algebras, factorization homology and applications. In: D. Calaque, T. Strobl (eds.). Mathematical Aspects of Quantum Field Theories, Mathematical Physics Studies, 429–552. Springer International Publishing (2015).
* [GNN09] S. Gelaki, D. Naidu, D. Nikshych. Centers of graded fusion categories. Algebra & Number Theory (2009). 3(8):959–990.
* [Saf21] P. Safronov. A categorical approach to quantum moment maps. Theory and Applications of Categories (2021). 37(24):818–862.
* [GS18] O. Gwilliam, C. Scheimbauer. Duals and adjoints in the factorization higher Morita category. arxiv:1804.10924 (2018).
* [GS21] S. Galatius, G. Szűcs. The equivariant cobordism category. Journal of Topology (2021). 14:215–257.
* [Hau17] R. Haugseng. The higher Morita category of $E_{n}$-algebras. Geometry and Topology (2017). 21(3):1631–1730.
* [Hit90] N. J. Hitchin. Flat connections and geometric quantization. Communications in Mathematical Physics (1990). 131(2):347–380.
* [HPT16] A. Henriques, D. Penneys, J. Tener. Categorified trace for module tensor categories over braided tensor categories. Documenta Mathematica (2016). 21:1089–1149.
* [Hum90] J. E. Humphreys. Reflection groups and Coxeter groups. Cambridge University Press (1990).
* [Idr17] N. Idrissi. Swiss-cheese operad and Drinfeld center. Israel Journal of Mathematics (2017). 221(2):941–972.
* [JFS17] T. Johnson-Freyd, C. Scheimbauer. (Op)lax natural transformations, twisted quantum field theories, and “even higher” Morita categories. Advances in Mathematics (2017). 307:147–223.
* [KW05] A. Kapustin, E. Witten. Electric-magnetic duality and the geometric Langlands program. Communications in Number Theory and Physics (2005). 1:1–236.
* [Lur] J. Lurie. Higher algebra. Preprint available at: https://www.math.ias.edu/~lurie/
* [Lur09] J. Lurie. On the classification of topological field theories. Current Developments in Mathematics (2009). 129–280.
* [Lyu95] V. Lyubashenko. Modular transformations for tensor categories. Journal of Pure and Applied Algebra (1995). 98(3):279–327.
* [Mei17] E. Meinrenken. Convexity for twisted conjugation. Mathematical Research Letters (2017). 24:1797–1818.
* [MS20] L. Müller, R. J. Szabo. ’t Hooft anomalies of discrete gauge theories and non-abelian group cohomology. Communications in Mathematical Physics (2020). 375:1581–1627.
* [MSS22] L. Müller, L. Szegedy, R. J. Szabo. Symmetry defects and orbifolds of two-dimensional Yang-Mills theory. Letters in Mathematical Physics (2022). 112(2).
* [MW20a] L. Müller, L. Woike. Equivariant higher Hochschild homology and topological field theories. Homology, Homotopy and Applications (2020). 22(1):27–54.
* [MW20b] L. Müller, L. Woike. The little bundles operad. Algebraic & Geometric Topology (2020). 20(4):2029–2070.
* [MW22] L. Müller, L. Woike. Cyclic framed little disks algebras, Grothendieck-Verdier duality and handlebody group representations. The Quarterly Journal of Mathematics (2022).
* [Ost03] V. Ostrik. Module categories, weak Hopf algebras and modular invariants. Transformation Groups (2003). 8:177–206.
* [SW03] P. Salvatore, N. Wahl. Framed discs operads and Batalin-Vilkovisky algebras. The Quarterly Journal of Mathematics (2003). 54(2):213–231.
* [Sch14] C. Scheimbauer. Factorization homology as a fully extended topological field theory. Ph.D. thesis, ETH Zurich (2014).
* [STS94] M. A. Semenov-Tian-Shansky. Poisson Lie groups, quantum duality principle, and the quantum double. Contemporary Mathematics (1994). 175:219–248.
* [Tur00] V. Turaev. Homotopy field theory in dimension 3 and crossed group-categories. arXiv:math/0005291 (2000).
* [Tur10] V. Turaev. Homotopy quantum field theory. With appendices by M. Müger and A. Virelizier. European Mathematical Society (2010).
* [Vor99] A. A. Voronov. The Swiss-cheese operad. Homotopy invariant algebraic structures, 239:365–373, in Contemporary Mathematics, American Mathematical Society, Providence, RI (1999).
* [Was20] T. A. Wasserman. The Drinfeld centre of a symmetric fusion category is 2-fold monoidal. Advances in Mathematics (2020). 366.
* [Wee20] T. A. N. Weelinck. Equivariant factorization homology of global quotient orbifolds. Advances in Mathematics (2020). 366.
* [Wil16] T. Willwacher. The homotopy braces formality morphism. Duke Mathematical Journal (2016). 165(10):1815–1964.
* [Wit91] E. Witten. On quantum gauge theories in two dimensions. Communications in Mathematical Physics (1991). 141:153–209.
* [Woi20] L. Woike. Higher categorical and operadic concepts for orbifold constructions - A study at the interface of topology and representation theory. Ph.D. thesis available at: https://ediss.sub.uni-hamburg.de/handle/ediss/8444 (2020).
* [WWW18] J. C. Wang, X. G. Wen, E. Witten. Symmetric gapped interfaces of SPT and SET states: Systematic constructions. Physical Review X (2018). 8(3):031048.
* [Zer21] A. J. Zerouali. Twisted moduli spaces and Duistermaat-Heckman measures. Journal of Geometry and Physics (2021). 161.
arXiv:2107.12348 (Corina Keller, Lukas Müller; submitted 2021-07-26; CC BY 4.0)

arXiv:2107.12349
# Imaging Sources in the Third Realization of the International Celestial
Reference Frame
Lucas Hunt United States Naval Observatory
3450 Massachusetts Ave NW
Washington, DC 20392, USA Computational Physics, Inc.
8001 Braddock Road, Suite 210
Springfield, VA 22151-2110 Megan C. Johnson United States Naval Observatory
3450 Massachusetts Ave NW
Washington, DC 20392, USA Phillip J. Cigan United States Naval Observatory
3450 Massachusetts Ave NW
Washington, DC 20392, USA George Mason University
4400 University Dr
Fairfax, VA 22030 David Gordon United States Naval Observatory
3450 Massachusetts Ave NW
Washington, DC 20392, USA George Mason University
4400 University Dr
Fairfax, VA 22030 John Spitzak Computational Physics, Inc.
8001 Braddock Road, Suite 210
Springfield, VA 22151-2110
###### Abstract
The third iteration of the International Celestial Reference Frame (ICRF3) is
made up of 4536 quasars observed at S/X bands using Very Long Baseline
Interferometry (VLBI). These sources are high-redshift quasars, typically at
redshifts $1<z<2$, that are believed to host active galactic nuclei (AGN) at
their centers. The positions of compact radio sources can be determined more
precisely than those of sources with large amounts of extended radio
structure. Here we report information on a series of 20 observations from
January 2017 through December 2017 which were designed for precise astrometry
and to monitor the structure of sources included in the ICRF3.
editorials, notices — miscellaneous — catalogs — surveys
journal: ApJ; software: AOFlagger, Difmap
## 1 Introduction
The International Celestial Reference Frame (ICRF) is the definitive standard
framework for precise astronomical positions at radio wavelengths; it
underpins a wide range of applications, including navigation and the GPS
satellite network, and defines the right ascension and declination coordinates
used in astronomy.
The ICRF is determined from positions of compact quasars, observed using the
Very Long Baseline Interferometry (VLBI) technique, and has undergone three
realizations (hereafter ICRF1, ICRF2, and ICRF3) (Ma et al. (1998); Fey et al.
(2015); Charlot et al. (2020), respectively). The third realization (ICRF3;
Charlot et al., 2020) was adopted in January 2019 by the International
Astronomical Union (IAU) as the international standard reference frame.
ICRF1 (Ma et al., 1998) contained positions of 608 sources, 212 of which were
“defining sources,” so called because they serve to define the axis of the
frame. Observations for ICRF1 were carried out between 1979 and 1995. ICRF2
(Fey et al., 2015) built upon ICRF1, having incorporated sources from the Very
Long Baseline Array (VLBA) Calibrator Survey (VCS; Beasley et al., 2002;
Gordon et al., 2016), and ultimately included positions of 3,414 total
sources, 295 of which were defining sources. The list of ICRF2 defining
sources was formed by first selecting quasars with the most stable and well-
determined positions and then, second, by selecting sources that formed an
isotropic distribution across the sky. The ICRF3 improves upon previous
iterations and contains 4536 sources at S/X bands, with improved astrometry
over the ICRF2 catalog primarily due to incorporation of VLBA 24-hour
astrometric and geodetic sessions that make up $\sim$68% of the ICRF3 data
(Charlot et al., 2020). The ICRF3 contains 303 defining sources that were
selected based on the following criteria (in order of priority): (i) an
isotropic distribution over the celestial sphere, (ii) positional stability,
and (iii) compactness. Notably, ICRF3 includes for the first time observations at multiple
radio frequencies including K-band and X/Ka-band reference frames in addition
to the legacy S/X-band reference frame.
As described in Charlot et al. (2020), the ideal ICRF sources are compact,
point-like objects, with astrometrically stable positions, which allow for
high positional accuracy. Selected sources are distant, radio-loud quasars;
the great distances to these sources translate to very small proper motions,
which satisfies the requirement of a quasi-inertial reference frame. They are
bright enough in the radio to require only short integration times; $\sim$300
sources can be observed during a single 24-hour observing epoch. ICRF imaging
campaigns have been carried out since 1995, and since the advent of the VLBA
in 1993, that array has been used for almost all of them. These
campaigns are critical to refining, monitoring, maintaining, and improving
ICRF source selection.
Though the VLBI technique allows for precisely measured positions of radio
loud quasars, its unmatched resolution also means that some of these quasars
are resolved and this effect increases as a function of increasing frequency.
Furthermore, the spatial scales at which we observe these sources, and the
turbulent nature of the active galactic nuclei (AGN) means that the sources
are variable on timescales of hours, days, months, and years (e.g., 3C48).
Changes in source structure can affect astrometric source positions, reducing
the precision and adversely affecting the accuracy of the ICRF. To mitigate
the effects of variability and maintain the integrity of the ICRF, it is
important to image these sources and to continuously monitor them for changes
in source structure.
In January 2017, the United States Naval Observatory (USNO) entered into an
agreement with the National Science Foundation to contribute 50% of the
operating costs for the NRAO’s VLBA in exchange for 50% of the observing time.
With this time, USNO and the National Aeronautics and Space Administration
Goddard Space Flight Center (NASA GSFC) have carried out a joint observing
campaign of ICRF3 sources in order to improve positions for those that had a
limited number of past observations, and to image those sources to establish a
snapshot set of observations in order to begin monitoring and understanding
their positional and intensity variability at radio frequencies as well as
their spatial structure. This is an ongoing USNO effort to continue monitoring
these sources to contribute improved astrometric positions to the ICRF and
look for changes in source structure. We are providing the image FITS files,
calibrated $uv$-data, and other image products through a web-based interface
for use by the astronomical and geodetic community.
This paper is laid out as follows: In § 2 we describe the observations,
calibration, and imaging of the data. In § 3 we introduce the database of
images and other data products, Fundamental Reference Image Data Archive
(FRIDA). In § 4 we explore global properties of the ICRF sources, including
fluxes and band-to-band spectral indices. Finally, we provide a summary in §
5.
## 2 Data
### 2.1 Observations and Scheduling
The observations were made using the VLBA S/X dual frequency system, providing
compatibility with earlier VLBA astrometry sessions and with nearly 40 years
of astrometric/geodetic VLBI. A description of the VLBA system can be found in
Napier (1994). The simultaneous recording of data at both S and X-band, often
used for a more accurate ionosphere calibration that is important for
astrometric and geodetic experiments, is enabled by a dichroic mirror that is
deployed over the S-band receiver and reflects the higher frequency radiation
to a deflector that then leads to the X-band receiver.
The primary goal of this VLBA observing campaign is to improve the ICRF3
(Charlot et al., 2020) with improved source positions and with images to help
in the selection of defining sources. The setup was very similar to that of
the VCS-II campaign (Gordon et al., 2016). Frequencies and bandwidths were
identical to VCS-II, with twelve 32-MHz channel windows at X-band and four at
S-band, using 2-bit sampling for a total recording rate of 2 Gbps. Table 1
gives the observing parameters and list of sessions observed during 2017. The
target list was created with most sources being previously observed at S/X
bands in only three or fewer sessions, and some being sources not previously
detected. Schedules were made using the NRAO
SCHED program (http://www.aoc.nrao.edu/software/sched/index.html).
Schedules were written using the SCHED dynamic mode, with each taking
approximately one sidereal day, allowing them to be run at any day and
starting time. To minimize slewing time, the sources were split into groups of
4 nearby sources, to be observed sequentially along with an ICRF2 defining
source for troposphere calibration and for ties into the ICRF. We scheduled
$\sim$300 target sources in each session, with integration times between 60
and 160 seconds. Table 1 lists the observing properties of all 20 sessions.
Most sources north of $-20^{\circ}$ declination were observed with three
scans, and those south of $-20^{\circ}$ with two. The declination limit was
approximately $-45^{\circ}$. There were two sessions per month during the
first ten months of 2017, for a total of 20 sessions. The distribution of the
number of visits to individual sources over the 20 included observing sessions
and their positions on the sky are shown in Figure 1 and Figure 2,
respectively.
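The scheduling strategy of splitting sources into groups of four nearby targets to minimize slewing can be illustrated with a simple greedy sketch. This is a hypothetical reconstruction for illustration only, not the algorithm used by SCHED; the function names `ang_sep` and `greedy_groups` are ours.

```python
import math

def ang_sep(ra1, dec1, ra2, dec2):
    """Angular separation in degrees between two sky positions (degrees)."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(d1) * math.sin(d2)
               + math.cos(d1) * math.cos(d2) * math.cos(r1 - r2))
    # clamp to guard against rounding just outside [-1, 1]
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

def greedy_groups(sources, group_size=4):
    """Greedily group (name, ra, dec) tuples with their nearest neighbours."""
    remaining = list(sources)
    groups = []
    while remaining:
        seed = remaining.pop(0)
        # sort the rest by distance to the seed and take the closest ones
        remaining.sort(key=lambda s: ang_sep(seed[1], seed[2], s[1], s[2]))
        groups.append([seed] + remaining[:group_size - 1])
        remaining = remaining[group_size - 1:]
    return groups
```

A real scheduler would additionally weave a troposphere calibrator into each group, as described above.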
Table 1: Observation Parameters All Sessions
---
Parameter | Value
Backend System | Polyphase Filterbank (PFB)
Total channel windows | 16
Single channel window bandwidth (MHz) | 32
No. of spectral channels per window | 64
Total bandwidth at X-band (MHz) | 384
Total bandwidth at S-band (MHz) | 128
Frequency resolution (MHz) | 0.5
Polarization | Right-hand circular
Data rate (Gbps) | 2
Sampling rate (bits) | 2
X-band channel frequencies (MHz) | 8460.0, 8492.0, 8524.0, 8556.0, 8620.0, 8652.0,
| 8716.0, 8748.0, 8812.0, 8844.0, 8876.0, 8908.0
S-band channel frequencies (MHz) | 2220.0, 2252.0, 2284.0, 2348.0
Individual Sessions
Session | Antennas in arraya | # of Sources | # of Scans | Obs. Date Range (2017)
UF001A | BR, FD, HN, KP, LA, MK, NL, OV, PT, SC | 297 | 798 | Jan-16 23:26$-$Jan-17 23:21
UF001B | BR, FDb, HN, LA, MK, NL, OV, PT, SC | 309 | 780 | Jan-21 23:06$-$Jan-22 23:00
UF001C | BR, FD, HN, KP, LA, MK, NL, OV, PTb, SC | 292 | 769 | Feb-19 21:12$-$Feb-20 21:06
UF001D | BR, FD, HN, KP, LA, MK, NL, OV, PT, SC | 281 | 773 | Feb-24 20:52$-$Feb-25 20:47
UF001E | BR, FD, HN, KP, LA, MK, NL, OV, PT, SC | 284 | 785 | Mar-23 19:06$-$Mar-24 19:01
UF001F | BR, FD, HN, KP, LA, MK, NL, OV, PT, SC | 281 | 785 | Mar-27 07:02$-$Mar-28 06:57
UF001G | BR, FD, HN, KP, LA, MK, NL, OV, PT, SC | 289 | 765 | Apr-28 16:45$-$Apr-29 16:40
UF001H | BR, FD, HN, KP, LA, MK, NL, OV, PT, SC | 281 | 752 | May-01 16:33$-$May-02 16:28
UF001I | BR, FD, HN, KP, LA, MK, NL, OV, PT, SC | 287 | 765 | May-27 22:02$-$May-28 21:57
UF001J | BR, FD, HN, KP, LA, MK, NL, OV, PT, SC | 297 | 777 | May-31 04:34$-$Jun-01 04:28
UF001K | BR, FD, HN, KP, LA, MK, NL, OV, PT, SC | 291 | 772 | Jun-10 03:08$-$Jun-11 03:03
UF001L | BR, FD, HN, KP, LA, MK, NL, OV, PT, SC | 280 | 760 | Jun-15 19:15$-$Jun-16 19:11
UF001M | BR, FD, HN, KP, LA, MK, NL, OV, PT, SC | 293 | 776 | Jul-09 05:08$-$Jul-10 05:01
UF001N | BR, FD, HN, KP, LA, MK, NL, OV, PT, SC | 297 | 763 | Jul-16 10:00$-$Jul-17 09:54
UF001O | BR, FD, HN, KP, LA, MK, NL, OV, PT, SC | 281 | 742 | Aug-05 04:40$-$Aug-06 04:34
UF001P | BR, FD, HN, KP, LA, MK, NL, OV, PT, SC | 286 | 759 | Aug-12 15:11$-$Aug-13 15:05
UF001Q | BRb, FD, HN, KP, LA, MKb, NL, OV, PT | 285 | 751 | Sep-18 19:50$-$Sep-19 19:44
UF001R | BR, FD, HN, KP, LA, MK, NL, OV, PT | 175 | 352 | Sep-26 07:11$-$Sep-27 07:06
UF001S | BR, FD, HN, KP, LA, MK, NL, OVb, PT | 251 | 523 | Oct-09 12:03$-$Oct-10 11:59
UF001T | BR, FD, HN, KP, LA, MK, NL, OV, PT | 253 | 525 | Oct-21 13:56$-$Oct-22 13:48
Notes: | a Antennas listed were present in at least one scan.
Figure 1: Histogram of the number of times a given source was observed. Most
sources were observed only once; stronger sources, which were used to tie the
other objects to the ICRF, were observed in multiple sessions.
Figure 2: Distribution of imaged sources on the sky. Objects highlighted in
orange are ICRF3 defining sources.
### 2.2 Calibration
The raw data were correlated at the Array Operations Center in Socorro, New
Mexico with the Socorro-DiFX correlator (Deller et al., 2011). Initial
amplitude and phase calibration was performed using Common Astronomy Software
Applications (CASA; McMullin et al., 2007). The calibration and imaging steps
used in our CASA pipeline are briefly described in the following sections.
Section 2.2.1 covers how the amplitude calibration was carried out. Section
2.2.2 outlines some of the flags that were used to ensure data quality.
Section 2.2.3 outlines the phase calibration steps, and Section 2.2.4 outlines
the imaging and self-calibration steps. A flowchart of our pipeline is shown
in Figure 3, with the importing tasks in blue squares, the amplitude
calibration steps in yellow squares, the flagging steps in red squares, the
phase calibration steps in gray squares, and the imaging and self-calibration
steps in white squares. For a given step, CASA tasks are italicized while
separate programs are listed in bold text.
Figure 3: A flowchart outlining our calibration routine. The CASA task used
for each step is written in italics. Programs run within the script but
outside of CASA are written in bold italics. The colors of the squares
correspond to categories for the steps: blue for importing data; yellow for
amplitude calibration; red for flagging; gray for phase calibration; and white
for imaging and self-calibration. The final image is selected (green) as the
highest S/N image created in the process described in Section 2.2.4.
#### 2.2.1 Amplitude Calibration
The first step carried out by our pipeline is to ingest the data from a
standard format to a format that can be modified within the CASA environment.
We use the CASA task importfitsidi to convert the idifits files to the CASA
measurement set format. We do not use phase referencing so our observations do
not require corrected earth orientation parameters, and we currently do not
make corrections for the ionosphere.
The steps required to calibrate the amplitudes with our pipeline are
highlighted in yellow in Figure 3. The first step in amplitude calibration is
to correct for errors in sampler thresholds when the analog signal is
converted to a digital signal. This correction can be calculated by
determining how much the autocorrelation spectrum deviates from unity and
applying that scaling factor at a later time.
The second step in amplitude calibration is to determine amplitudes from the
antenna information. The VLBI technique allows us to probe some of the
smallest physical scales in all of observational astronomy, which means that
source amplitudes can, and do, appear to vary on short timescales. We do not
have calibrators of constant, known brightness at these resolutions, and so we
use the system equivalent flux density (SEFD) for each antenna to calibrate
amplitudes. The SEFD is defined as the flux of a radio source which doubles
the antenna’s system temperature and can be written as
${\rm SEFD}=\frac{T_{sys}}{{\rm DPFU}\cdot{\rm gc}}$ (1)
where $T_{sys}$ is the system temperature measured at the telescope in K,
DPFU is the "degrees per flux unit," the antenna gain in units K Jy-1,
which relates the measured system temperature to a flux density at a specific
elevation, and gc is the gain curve, which describes how the telescope's gain
changes as a function of elevation. The correlated flux density of a source
on a given baseline can then be calculated using
$S_{c,ij}=C^{acc}_{i}C^{acc}_{j}\sqrt{SEFD_{i}SEFD_{j}}$ (2)
where $S_{c,ij}$ is the flux density of the source on the baseline $ij$ in
units of Jy, $C^{acc}_{n}$ is the correction for correlator offsets for
antenna $n$, and SEFD is defined above.
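Equations (1) and (2) can be sketched directly in code. This is a minimal illustration of the two formulas, not the pipeline's implementation; the function and argument names are ours.

```python
import math

def sefd(t_sys_K, dpfu_K_per_Jy, gain_curve):
    """System equivalent flux density in Jy (Eq. 1):
    SEFD = T_sys / (DPFU * gc)."""
    return t_sys_K / (dpfu_K_per_Jy * gain_curve)

def correlated_flux(c_i, c_j, sefd_i, sefd_j):
    """Flux-density scale on baseline ij (Eq. 2):
    S = C_i * C_j * sqrt(SEFD_i * SEFD_j),
    where C_n are the per-antenna correlator-offset corrections."""
    return c_i * c_j * math.sqrt(sefd_i * sefd_j)
```

For example, an antenna with $T_{sys}$ = 30 K and DPFU = 0.1 K Jy-1 at unit gain curve has an SEFD of 300 Jy (illustrative numbers, not VLBA values).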
#### 2.2.2 Separating Observing Bands and Flagging Data
This section covers the tasks marked by red boxes in Figure 3, and starts with
the separation of the S-band (4 spectral windows) and X-band (12 spectral
windows) into different measurement set files. All of the following steps,
through separating the individual sources into their own files, are applied to
each band. We flag the data using the automated flagging program AOFlagger
(Offringa et al., 2012). We then flag the autocorrelations because they are
not needed. Next we flag 7 channels from each edge of each spectral window
where the sensitivity drops off dramatically. These channels do not calibrate
well, and contribute little to the overall signal. Finally, we flag one second
at the beginning and end of each scan. The task that calibrates the phases,
fringefit, uses a Fourier transform code that crashes the script when
encountering a stray one second integration. In these cases, the phases do not
properly calibrate, resulting in corrupted initial images. We found that
flagging one second at the beginning and end of each scan removed these errant
one-second integrations and enabled the fringefit task to run without error.
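The two deterministic flagging rules above (seven edge channels per spectral window, one second at each scan boundary) can be expressed as boolean masks. This is an illustrative sketch under our own naming, not the pipeline's code.

```python
import numpy as np

def edge_channel_flags(n_chan=64, n_edge=7):
    """Boolean mask per spectral window: True = flagged edge channel."""
    flags = np.zeros(n_chan, dtype=bool)
    flags[:n_edge] = True    # low-frequency edge
    flags[-n_edge:] = True   # high-frequency edge
    return flags

def scan_edge_flags(times, t_start, t_end, pad=1.0):
    """Flag integrations within `pad` seconds of the scan boundaries."""
    times = np.asarray(times, dtype=float)
    return (times < t_start + pad) | (times > t_end - pad)
```

With the defaults, 14 of the 64 channels in each window are discarded, matching the description above.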
#### 2.2.3 Phase and Delay Calibration
Accurate correlation of interferometric data requires a good estimate of the
difference in time it takes for the same wave front to reach each telescope,
based on numerous factors including the distance between the telescopes and
the rotation of the Earth. Even with careful estimation, it is often not
possible to accurately calculate all of the factors that contribute to that
time offset: for example, the atmosphere above each telescope can be notably
different, or the signal may have to travel through different hardware paths
to get to the correlator. These differences manifest as changes in phase with
time and frequency and can cause loss of coherence when the data are averaged
in time and frequency during the imaging steps. We use a calibration technique
known as fringe fitting to remove these errors after correlation.
Phase calibration is done in two steps. The first is to remove the change in
phase over frequency due to different path lengths through telescope
electronics for each spectral window. This is done by finding a strong source
and correcting the slope as a function of frequency. Our code does this by
running the CASA task fringefit on scans that include all antennas, and
selecting the scan where the fringefit task finds solutions to all antennas.
This calibration step is only applied to the change in phase over frequency,
and is applied to all scans.
The second phase calibration step corrects the phase, change in phase as a
function of frequency, and change in phase as a function of time, for each
source individually. Since the previous calibration step removed the change in
phase with frequency, we can average spectral windows together, over the whole
scan, in order to increase the chance of finding a solution.
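The core idea of fringe fitting, finding the residual delay as the slope of phase with frequency, can be sketched as an FFT fringe search over a single baseline spectrum. This is a conceptual illustration of the technique, not CASA's fringefit implementation; it assumes uniformly spaced channels and a single dominant delay.

```python
import numpy as np

def estimate_delay(vis, freqs):
    """Estimate the residual delay (seconds) of a visibility spectrum:
    the delay is the lag that maximizes |FFT(vis)|, i.e. the slope of
    phase versus frequency."""
    n = len(vis)
    dnu = freqs[1] - freqs[0]          # channel width in Hz (assumed uniform)
    pad = 64 * n                       # zero-padding refines the delay grid
    lags = np.fft.fftfreq(pad, d=dnu)  # candidate delays in seconds
    spectrum = np.fft.fft(vis, pad)
    return lags[np.argmax(np.abs(spectrum))]
```

Applied to a noiseless spectrum with a 5 ns delay across a 32-MHz band of 64 channels, the search recovers the delay to a fraction of a nanosecond.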
#### 2.2.4 Imaging and Self-calibration
$\Theta_{{\rm hpbw}}=\frac{\lambda}{D}$ (3)
A model of clean components was built using the auto-masking feature of tclean
to generate a mask that selected only the brightest emission in the field.
This was done using the Högbom algorithm (Högbom, 1974) and setting the Briggs
robust weighting equal to 2, traditionally defined as natural weighting in the
tclean. The masks created using the automatic masking feature in tclean were
controlled by the auto-multithresh parameter (Kepley et al., 2020). Masking is
mainly controlled by the noisethreshold parameter which masks peaks in the
residual image that are some multiple of the noise. The sources in our survey
have varying flux densities so it is impossible to predict what value for
noisethreshold will create a conservative clean mask. We therefore start with
the noisethreshold value set to 15, and continue to check if a clean mask has
been made. If it has not, the code lowers the threshold and tries again until
it reaches a noisethreshold of 5. If no mask is made at that point, the
pipeline ends with the dirty image.
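The descending noisethreshold loop can be sketched as follows. The step size between retries is not stated in the text, so unit steps are an assumption of this illustration; the function name is ours.

```python
def choose_noisethreshold(peak_snr, start=15, stop=5):
    """Sketch of the descending auto-masking loop: return the first
    noisethreshold (from `start` down to `stop`, assumed unit steps) at
    which the residual peak would be masked, or None if only a dirty
    image can be made."""
    for thresh in range(start, stop - 1, -1):
        if peak_snr >= thresh:
            return thresh
    return None
```

A bright source is masked on the first pass at 15, a moderate one at some intermediate threshold, and a source below 5 sigma yields only the dirty image.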
Self-calibration was attempted next — a process of successively using models
produced from previous images to improve the calibration for the phases,
delays, and amplitudes, which can often yield images of higher fidelity and
improved S/N. The clean component model from the initial image was fed back
into the calibration task gaincal, to calibrate the delays and phases. An
image was produced from this "phase-only" self-calibration, using the same
parameters as the initial image. This in turn was used to repeat the process,
but this time calibrating for phase and amplitude for each antenna. The self-
calibration process was repeated until the brightest residual emission in
tclean reached a threshold of three times the noise. The image with the
highest S/N from among the initial images, phase-only self-calibration, and
phase+amplitude self-calibration was then selected as the final image to
include in the sample.
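The final selection step, keeping the highest-S/N image among the three candidates, is simple enough to state directly. This is an illustrative sketch; the dictionary layout is our own.

```python
def best_image(candidates):
    """Select the final image: the candidate (initial, phase-only, or
    phase+amplitude self-calibrated) with the highest S/N = peak / rms."""
    return max(candidates, key=lambda c: c["peak"] / c["rms"])

# Illustrative candidates (made-up numbers):
cands = [
    {"name": "initial",    "peak": 0.50, "rms": 0.0020},
    {"name": "phase-only", "peak": 0.52, "rms": 0.0015},
    {"name": "phase+amp",  "peak": 0.53, "rms": 0.0010},
]
```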
### 2.3 Comparing CASA to other software
#### 2.3.1 Calibration Comparison to AIPS
Previous large VLBI campaigns including the VLBA Imaging and Polarimetry
Survey (VIPS; Helmboldt et al., 2007), VCS (Beasley et al., 2002; Petrov et
al., 2006; Gordon et al., 2016), and Monitoring Of Jets in Active galactic
nuclei with VLBA Experiments (MOJAVE; Lister & Homan, 2005) have used the
Astronomical Image Processing System (AIPS; Greisen, 2003) to apply the
initial calibrations to their data, and Difmap (Shepherd et al., 1994) for
imaging and self-calibration. We considered using AIPS and Difmap to carry out
calibration and imaging in this campaign as well, but due to having thousands
of sources and plans for continued observations, we needed to ensure that the
software package we used was easy to script, continued to receive new features
and updates, and allowed simple use of outside packages. We chose CASA as a
better fit to our requirements than AIPS and Difmap, but we still needed to
compare CASA to AIPS and Difmap to ensure that the results were consistent.
The calibration steps carried out using CASA were outlined above in Section
2.2. We carried out the same calibration tasks using the equivalent AIPS
tasks. We did not carry out the initial fringe fitting, where we corrected the
phase for instrumental errors, nor did we run AOFlagger on the data.
To carry out a detailed calibration and imaging comparison, we selected three
sources from the data which represent a stronger source (0955+476), a source
with complex structure (0733+261), and a weak source (1257+839). Figure 4
shows images of each source, and Table 3 shows the rms and peak values in Jy
bm-1 extracted from each image. The main focus of this section is the initial
calibration comparison, so we focus on the statistics for which both
calibration paths were imaged in CASA.
#### 2.3.2 Imaging Comparison to Difmap
Difmap (https://sites.astro.caltech.edu/~tjp/citvlb/; Shepherd et al.,
1994) is part of the Caltech VLBI software package, and was designed to
quickly make images from VLBI data. Difmap does not contain calibration
packages like those described in 2.2 and 2.3.1, but rather it is strictly a
software package for imaging, and is optimized for VLBI data. It has been used
in conjunction with calibrated data from AIPS to make the images for the
aforementioned large VLBI survey campaigns and is considered an industry
standard. In Figure 4 we show images of sources mentioned in 2.3.1. The layout
for each source image set is, from upper-left to lower-right: AIPS
calibration, CASA imaging; AIPS Calibration, Difmap imaging; CASA Calibration,
CASA imaging; and CASA calibration, Difmap imaging.
The self-calibration steps using CASA are outlined above in section 2.2.4. The
Difmap script starts with a uniform weighted image. It runs 50 iterations of
CLEAN, then calibrates the phases in a loop, until the peak in the residual
image is 8 times the noise. It then creates a naturally weighted image in the
same way. Finally it carries out a similar process but calibrates the
amplitude per antenna and channel window and calibrates the phases.
Though the steps are different, the results from imaging and self-calibration
in CASA and Difmap are the same for 0955+476 and 0733+261. For the weaker
source, 1257+839, the peak flux from Difmap is higher. This is likely due to
the differences in the self-calibration procedure.
Table 2: Imaging parameters compared between CASA and Difmap | CASA | Difmap
---|---|---
Final map size | 512 | 512a
Number of iterations per cycleb | 1000 | 100
Stopping criteria | 5$\sigma$ in residual | 5$\sigma$ in residual
Cell size | 0.165 mas | 0.165 mas
Clean gain | 0.05 | 0.05
Solution Interval | Scan | Scan
a Difmap map size starts at 1024
b always reach stopping criteria before all clean iterations
Figure 4: Comparison of RR polarization images made using different combinations of initial calibration and imaging/self-calibration steps. These images were made from the UF001K dataset observed on 10 June 2017. We show the software used for calibration and imaging in the upper left hand corner of each image, and the rms measured in the image away from the source at the bottom of the image. The restoring beam is shown in gray in the lower left hand corner of the image. The contours in each image are $3\times~{}4^{n}\sigma$. Figures (a), (b), and (c) show that data imaged with CASA and Difmap produce very similar images for stronger sources (0955+476), complex sources (0733+261), and weak sources (1257+839). Panel (d) shows the amplitude vs $uv$-distance plot for sources that have been calibrated in AIPS (blue) and CASA (green), both imaged in CASA. The plots show that the amplitude calibrations in AIPS and CASA are nearly identical.
Table 3: Comparison of sample source RMS and peak flux density in Jy bm$^{-1}$ when processed with CASA, AIPS, and Difmap | AIPS/CASA | AIPS/Difmap | CASA/CASA | CASA/Difmap
---|---|---|---|---
Source | $\sigma_{\rm obs}$ | Peak | $\sigma_{\rm obs}$ | Peak | $\sigma_{\rm obs}$ | Peak | $\sigma_{\rm obs}$ | Peak
0955+476 (Compact) | $0.01$ | $1.01$ | $0.0012$ | $0.97$ | $0.0015$ | $1.02$ | $0.0014$ | $0.96$
0733+261 (Complex) | $0.0013$ | $0.135$ | $0.0005$ | $0.127$ | $0.0012$ | $0.133$ | $0.0005$ | $0.123$
1257+839 (Weak) | $0.0005$ | $0.040$ | $0.0002$ | $0.058$ | $0.0002$ | $0.045$ | $0.0002$ | $0.057$
## 3 Fundamental Reference Image Data Archive (FRIDA)
As part of USNO's membership in the International VLBI Service for Geodesy
and Astrometry (IVS), USNO is an official IVS Analysis Center and an Analysis
Center for Source Structure. In support of these
international arrangements, USNO has historically been responsible for
providing images of ICRF sources to the community through what was previously
known as the Radio Reference Frame Image Database (RRFID). USNO has been
undergoing many updates and changes to our networks and computer systems, and
thus, access to RRFID has been unavailable for the past few years. However,
USNO has taken this opportunity to develop a new interactive web-based
interface called the Fundamental Reference Image Data Archive (FRIDA). FRIDA
will debut in 2021 and it will contain all archival images from the RRFID
along with the images presented in this work. Currently, the USNO images of
ICRF sources span frequencies from 2.3 to 42 GHz with the majority of images
at 2.3 and 8.6 GHz. FRIDA will host FITS files for all images as well as
calibrated $uv$-data files and ancillary image quality diagnostic files such
as amplitude versus $uv$-distance and $uv$ sky coverage plots for individual
sources. Users will be able to download all data available through the
interactive website.
With the calibration and imaging pipeline developed in CASA, we aim to have
new images populate FRIDA in an automated or semi-automated manner after each
24-hour VLBA session is correlated. FRIDA is planned to grow by including the
Research and Development with VLBA (RDV) sessions, an IVS-sponsored series
which combine the VLBA with other IVS stations. The goal of the RDV sessions
is to monitor and maintain information on faint or non-detected sources, and
this series is a collaborative effort between USNO and Bordeaux Observatory.
In fact, the Bordeaux VLBI Image Database
(BVID; http://bvid.astrophy.u-bordeaux.fr/database.html) contains images from
half of all the RDV sessions and it is USNO’s goal to host the remaining half
on FRIDA. In addition to the VLBA-only sessions, the RDV sessions, and
archival images from RRFID, USNO plans to host K-band images from the USNO-
sponsored UD001 series for multi-wavelength radio images of ICRF sources.
Imaging ICRF sources is paramount for monitoring the physical properties
intrinsic to the quasars — these may lead to astrometric uncertainties, which
in turn may contribute to uncertainties in geodetic measurements.
Characteristics such as source structure, variability in flux and position,
core shift, and other physical phenomena make it difficult to maintain
precise astrometry for each target over long time spans. Therefore,
it is vital to image ICRF sources regularly in order to monitor any changes
that might lead to problems in astrometric or geodetic measurements. Figure 5
shows example images of four sources in S and X bands, demonstrating the
variety of source features within the sample.
Figure 5: Examples highlighting the variety of source features in the ICRF
catalog, in image pairs at S and X bands for selected sources. The field of
view is narrower for the X band images, and the equivalent extent is denoted
by the dashed boxes in the S band images, along with a 0$\farcs$2 size bar in
each for reference. (a) 0339-683, a point-like source in both bands. (b)
1413+349, with extended emission. (c) 0829+187, showing multiple components.
(d) 1313-333, with a clear detection and extended structure at X band but a
low S/N detection at S band.
## 4 Global Properties of ICRF Sources
### 4.1 Sources included in analysis
We have observed 3,627 sources between one and 20 times, at two frequencies,
for a possible 11,220 images. For sources that were observed in more than one
24-hour observation session, we select the image that has the highest dynamic
range at X-Band, leaving us with 7,254 images. Our imaging pipeline
automatically creates clean masks for residual images with emission brighter
than fifteen times the noise level. Sources with S/N lower than this level
will not have any cleaning attempted, and only a ‘dirty image’ will be
created. We exclude the low S/N sources where only a dirty image has been made
in this global property analysis. The figures below include 3,371 sources for
plots made at X-band, 2,659 sources at S-band, and 2,576 sources where
information from both bands is included.
### 4.2 Flux properties
We have used the CASA task imstat to determine the noise in a region of each
image that is free of emission, and the peak flux density of the image both in
units Jy bm-1. We calculated the theoretical RMS noise ($\sigma_{\rm theor}$),
in units Jy bm-1, for each image using the following equation from Wrobel &
Walker (1999); the sensitivity calculation is also documented by NRAO at
https://science.nrao.edu/facilities/vlba/docs/manuals/oss/imag-sens:
$\sigma_{\rm theor}=\frac{\rm SEFD}{\eta_{c}(N(N-1)\delta\nu~{}t_{\rm int})^{1/2}}~{}{\rm Jy~{}bm^{-1}}$ (5)
where SEFD is the system equivalent flux density in Jy, the overall system
noise defined as the flux density of a source that doubles the system
temperature, $\eta_{c}$ is the correlator efficiency (0.75 for the VLBA, per
https://science.nrao.edu/facilities/vlba/docs/manuals/oss/bsln-sens), $N$ is
the number of antennas, $\delta\nu$ is
the bandwidth in Hz and $t_{\rm int}$ is the total on source integration time.
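Equation (5) can be sketched directly. The sample numbers in the usage below (SEFD = 300 Jy, 180 s on source) are illustrative, not values from this survey; the function name is ours.

```python
import math

def sigma_theor(sefd_jy, n_ant, bandwidth_hz, t_int_s, eta_c=0.75):
    """Theoretical image noise in Jy/beam (Eq. 5):
    sigma = SEFD / (eta_c * sqrt(N * (N - 1) * dnu * t_int)),
    with eta_c the correlator efficiency, N the number of antennas,
    dnu the bandwidth in Hz, and t_int the on-source time in seconds."""
    return sefd_jy / (eta_c * math.sqrt(n_ant * (n_ant - 1)
                                        * bandwidth_hz * t_int_s))
```

For a 10-antenna array with 384 MHz of bandwidth, 180 s of integration on a 300-Jy-SEFD system gives a noise of roughly 0.16 mJy/beam.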
We calculated the signal-to-noise ratio (S/N) using the peak flux and observed
noise $\sigma_{\rm obs}$. We used the CASA task imfit to fit a 2-D Gaussian to
the center-most point source of each image. We label the flux density of the
Gaussian $S_{Gauss}$. Past studies have calculated the total flux density in
an image by fitting a Gaussian to each component (ex. Pushkarev & Kovalev,
2012; Fey & Charlot, 2000) and summing the flux density from all components.
Fitting a Gaussian to each component by hand for over 10,000 images is
prohibitively time consuming, but Pushkarev & Kovalev (2012) show that the
total flux density estimated from fitting 2-D Gaussians to components in an
image is approximately equal to the sum of the flux density in the clean
components in an image. Therefore we estimate the total flux density,
$S_{\nu}$ by summing the flux density of clean components in an image. All of
these source properties calculated for each observation are included in Table
5 along with the source name, the ICRF3 Right Ascension and Declination to the
median uncertainty of ICRF3 of 0.1 mas (Charlot et al., 2020), and the date of
each observation. The first 15 sources are included as an example and the full
table will be available as a supplement in a machine-readable format.
Figure 6: The percent difference between the flux estimated for the Gaussian
fit to the brightest component in an image using the CASA tasks uvmodelfit and
imfit. For most sources the percent difference is less than 1%, and only a
small number of weak sources have a percent difference greater than 5%.
We show the distribution of $S_{\nu}$ at both S and X-bands in Figures 7 and
8, respectively. Each of these figures has two histograms: the left histogram
shows the total flux density for all sources up to 1 Jy, while the right
histogram shows the distribution of the sources whose total flux density is
greater than 1 Jy. For sources that were imaged more than once over the full
duration of the 20 observing sessions, the value of $S_{\nu}$ comes from the
image with the highest dynamic range.
As mentioned in Section 1, unresolved, compact point sources typically provide
better astrometric precision than sources with structure. Any extended
emission can cause the astrometry solution to degrade in accuracy, especially
when aggregated over long time spans. As such, we aim to include as
many compact, point-like sources as possible in the ICRF, and imaging
campaigns such as this one provide a great way to study the nature of source
structure in large numbers of quasars at high spatial resolutions. We estimate
the compactness, or core dominance of sources in our survey as the ratio of
the Gaussian model flux density, $S_{Gauss}$, to total flux density,
$S_{\nu}$. We show this distribution in Figure 9. This ratio has been referred
to previously as the core dominance (Pushkarev & Kovalev, 2012; Fey & Charlot,
2000), but because we do not want to draw any conclusions about the source of
the emission for any given source we will refer to the ratio as the
compactness ratio.
The calculated compactness ratio is sometimes greater than one because the
imfit task in CASA can produce a flux density value that is slightly larger
than the sum of the clean components for that source if the wings of the
Gaussian model fit to the central component are below the noise limit in the
image. Since we used the sum of the clean components to estimate $S_{\nu}$,
and the model flux density, $S_{Gauss}$ is estimated using imfit, a point
source could have a compactness ratio greater than one. After visually
inspecting $\sim$200 sources with a compactness ratio between 0.95 and 1.0, we
find that sources with a compactness ratio greater than 0.975 are usually
compact, while those below 0.975 have some extended emission.
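As a sketch, the compactness classification described above amounts to a simple ratio and threshold test (function names are ours, not from the CASA pipeline):

```python
def compactness_ratio(s_gauss, s_total):
    """Ratio of the Gaussian-model flux density S_Gauss to the
    total (clean-component) flux density S_nu.

    Values slightly above 1 can occur when the wings of the fitted
    Gaussian fall below the image noise floor, so the imfit model
    flux exceeds the sum of the clean components.
    """
    return s_gauss / s_total


def is_compact(s_gauss, s_total, threshold=0.975):
    # Threshold found by visually inspecting ~200 sources (see text).
    return compactness_ratio(s_gauss, s_total) >= threshold
```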
As mentioned above, core dominance (referred to here as compactness ratio) was
measured by Fey & Charlot (2000) and shown to be correlated with Source
Structure Index, a measurement of how much time the source structure adds to
the group delay measurement (Charlot, 1990). Though our measurement of the
compactness ratio does not directly match the value from Fey & Charlot (2000)
due to our use of the clean flux as opposed to their method of model fitting
for every component in an image, we have compared sources that are in both
samples and found that sources in our study that have a compactness greater
than 0.975 typically have a source structure index of 1 or 2 in the Fey &
Charlot (2000) sample, a good indication that these sources are reliable for
use in the ICRF.
### 4.3 Spectral Index
We measured the spectral index, $\alpha$, using the total flux density
$S_{\nu}$ from each band and assuming that the flux density scales as
$S_{\nu}\propto\nu^{\alpha}$, where $S_{\nu}$ is the flux density at a given
frequency $\nu$. With simultaneous S/X band observations, our measurements of
the spectral index of sources are free from errors that might otherwise arise
from variable fluxes between different epochs. We do acknowledge that the
spectral index calculated is biased due to the different spatial resolution of
the images at different frequencies. The images generated at 8.7 GHz are not
sensitive to some of the extended emission that may be detected at 2.3 GHz,
therefore the flux density at 2.3 GHz may include emission that would not be
detectable at 8.7 GHz. The spectral index in such a case would be steeper than
the actual spectral index of the source. For a point-like source, which we
expect most of these sources to be, all of the emission would be detected at
both frequencies, and any difference is due to the spectral index of the
source.
There will, however, be some number of sources in this catalog for which the
spectral index we’ve measured is steeper than the actual spectral index of the
source. The distribution of spectral index values measured across our sample
is shown in Figure 10. Spectral indices vary from $-1.82$ to $1.85$ with a
median value of $-0.02$. We find that 2315 of the 2587 sources ($89\%$)
detected in both bands have a spectral index greater than $-0.5$ and are
defined as flat spectrum sources; the other 272 sources have a spectral index
less than $-0.5$.
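The two-point spectral index used here follows directly from $S_{\nu}\propto\nu^{\alpha}$; a minimal sketch, with the flat spectrum cutoff of $-0.5$ adopted in the text (function names are ours):

```python
import math

def spectral_index(s_low, s_high, nu_low=2.3, nu_high=8.7):
    """Two-point spectral index alpha, assuming S_nu scales as
    nu**alpha. Default frequencies are the S- and X-band values in GHz.
    """
    return math.log(s_high / s_low) / math.log(nu_high / nu_low)


def is_flat_spectrum(alpha, cutoff=-0.5):
    return alpha > cutoff
```

A source with equal flux density in both bands has $\alpha = 0$ and is counted as flat spectrum.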
Previous imaging campaigns such as Fey & Charlot (2000); Pushkarev & Kovalev
(2012) that used a similar frequency setup and observed sources used in
different versions of the ICRF found that most of their sources were also flat
spectrum sources. The median spectral index cited by Fey & Charlot (2000) is
$-0.28$. While the total spectral index was calculated and a histogram was
presented in Pushkarev & Kovalev (2012) (see, e.g., their Figure 13), no
median value was given, though it lies somewhere in the range between $-0.2$
and $0$. We find similar spectral index trends in our sample which contains
roughly eight times the number of sources. The spectral index distribution
found here is also similar to the 153 sources detected in the VLBA survey of a
complete north polar cap sample at 2.3 and 8.6 GHz (Popkov et al., 2021).
Figure 7: Left: Distribution of total flux density, $S_{\nu}$, for each source
at S-Band. Sources with flux density larger than 1 Jy are counted in the final
bin on the right. Right: the distribution of sources with flux density greater
than 1 Jy. These histograms show the distribution of flux densities measured
from a single observation for each source. For sources that were observed and
imaged more than once, the flux density from the image with the highest
dynamic range was used. These histograms do not include sources that were not
imaged or sources whose only image had a dynamic range of less than 15.
Figure 8: Same as Figure 7 for all sources at X-band.
Figure 9: The compactness ratios of observed sources. This serves as an
estimate of how point-like an object is, where a source with a Gaussian model
flux ratio of 1 has all of the flux contained within the beam (unresolved).
The total flux is measured by summing the flux from the clean-component map.
Figure 10: Distribution of band-to-band spectral indices.
## 5 Summary
We have presented results from our imaging campaign targeting 3,627 sources in
ICRF3 at 2.3 and 8.7 GHz. We have used a CASA pipeline to successfully image
2697 sources at 2.3 GHz and 3209 sources at 8.7 GHz. We imaged 2615 of those
sources simultaneously at both frequencies.
We found that the median flux density of our sample is 0.13 Jy at 2.3 GHz and
0.09 Jy at 8.7 GHz. We found that $70\%$ of the sources have a compactness
ratio greater than 0.975, indicating that there is little or no emission
coming from outside the central, bright component. Finally, we found that the
spectral index of sources in our sample ranges from $-1.8$ to $1.8$ with a
median value of $-0.02$. Approximately $90\%$ of the sources in our campaign
that were detected at both frequencies have a spectral index greater than
$-0.5$, the cutoff for flat spectrum sources.
Table 4: Radio Properties of ICRF Sources
Source | RA | Dec | date | $\alpha$ | $\nu$ | $\sigma_{\rm theor}$ | $\sigma_{\rm obs}$ | Peak | $S_{\rm\nu}$ | $S_{\rm Gauss}$ | S/N
---|---|---|---|---|---|---|---|---|---|---|---
 | (deg) | (deg) | | | (GHz) | (mJy bm$^{-1}$) | (mJy bm$^{-1}$) | (Jy bm$^{-1}$) | (Jy) | (Jy) |
0000-160 | $0.86360065$ | $-15.78484871$ | 2017 Feb 19 | 0.2 | 2.3 GHz | $0.27$ | $0.839$ | $0.062291$ | $0.06085$ | $0.065444$ | $74.2$
| | | | | 8.6 GHz | $0.191$ | $0.289$ | $0.067377$ | $0.074851$ | $0.075682$ | $233.3$
0000-197 | $0.82781262$ | $-19.45620993$ | 2017 Feb 19 | -0.3 | 2.3 GHz | $0.27$ | $0.866$ | $0.082143$ | $0.095579$ | $0.103313$ | $94.9$
| | | | | 8.6 GHz | $0.191$ | $0.648$ | $0.057467$ | $0.067454$ | $0.06853$ | $88.6$
0000-199 | $0.81645585$ | $-19.69733383$ | 2017 Feb 19 | -0.3 | 2.3 GHz | $0.304$ | $2.376$ | $0.196167$ | $0.205944$ | $0.209789$ | $82.6$
| | | | | 8.6 GHz | $0.215$ | $0.456$ | $0.089882$ | $0.140468$ | $0.122478$ | $197.0$
0001+459 | $1.06719853$ | $46.25499185$ | 2017 Mar 23 | -0.0 | 2.3 GHz | $0.352$ | $0.818$ | $0.15733$ | $0.157872$ | $0.163444$ | $192.3$
| | | | | 8.6 GHz | $0.249$ | $0.365$ | $0.150068$ | $0.155065$ | $0.15553$ | $411.1$
0001+478 | $0.94183991$ | $48.11781535$ | 2017 Mar 23 | -1.2 | 2.3 GHz | $0.269$ | $1.933$ | $0.157052$ | $0.238967$ | $0.259524$ | $81.3$
| | | | | 8.6 GHz | $0.19$ | $1.582$ | $0.05662$ | $0.051817$ | $0.056724$ | $35.8$
0001-120 | $1.02047917$ | $-11.81621839$ | 2017 Feb 24 | -0.0 | 2.3 GHz | $0.238$ | $1.223$ | $0.540535$ | $0.624813$ | $0.60506$ | $442.1$
| | | | | 8.6 GHz | $0.168$ | $0.817$ | $0.462684$ | $0.60403$ | $0.588762$ | $566.1$
0002+051 | $1.33423129$ | $5.403001$ | 2017 Jul 16 | -0.5 | 2.3 GHz | $0.313$ | $2.031$ | $0.17243$ | $0.175952$ | $0.173318$ | $84.9$
| | | | | 8.6 GHz | $0.221$ | $1.211$ | $0.083285$ | $0.093682$ | $0.086632$ | $68.8$
0002+200 | $1.14899286$ | $20.32842159$ | 2017 May 27 | 0.1 | 2.3 GHz | $0.238$ | $1.815$ | $0.286337$ | $0.33006$ | $0.322637$ | $157.8$
| | | | | 8.6 GHz | $0.168$ | $0.541$ | $0.252224$ | $0.362979$ | $0.365341$ | $466.4$
0002+541 | $1.26818061$ | $54.47359014$ | 2017 Mar 27 | 0.3 | 2.3 GHz | $0.27$ | $2.362$ | $0.263699$ | $0.256789$ | $0.264302$ | $111.7$
| | | | | 8.6 GHz | $0.191$ | $0.671$ | $0.343928$ | $0.390912$ | $0.388459$ | $512.8$
0002-170 | $1.32472412$ | $-16.8012996$ | 2017 Feb 24 | -0.2 | 2.3 GHz | $0.338$ | $0.865$ | $0.1509$ | $0.169368$ | $0.172039$ | $174.4$
| | | | | 8.6 GHz | $0.239$ | $0.373$ | $0.091568$ | $0.130629$ | $0.11989$ | $245.7$
0002-350 | $1.27468791$ | $-34.76379337$ | 2017 Jan 16 | $\cdots$ | 2.3 GHz | $0.292$ | $30.877$ | $0.081618$ | $0.0$ | $\cdots$ | $2.6$
| | | | | 8.6 GHz | $0.206$ | $0.425$ | $0.098578$ | $0.102574$ | $0.103216$ | $232.0$
| | | 2017 Jan 22 | $\cdots$ | 2.3 GHz | $0.577$ | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$
| | | | | 8.6 GHz | $0.407$ | $1.517$ | $0.094508$ | $0.089684$ | $0.098981$ | $62.3$
Notes: $^{a}$ indicates an ICRF3 Defining Source.
Table 5: The Source is the source name in the observation. RA and Dec are the
Right Ascension and Declination of the source taken from the ICRF3 catalog
(Charlot et al., 2020), given to a precision matching the median uncertainty
of the frame, 0.1 mas. Date is the observation date for a given image.
$\alpha$ is the 2.3 GHz to 8.7 GHz spectral index of a source, and $\nu$ is
the frequency at which the values in the subsequent columns are measured.
$\sigma_{\rm theor}$ and
$\sigma_{\rm obs}$ are the theoretical and measured RMS in the image. Peak is
the peak flux density of the brightest pixel in an image in Jy bm-1.
$S_{\rm\nu}$ is the total flux density of the image determined by summing the
clean components. $S_{\rm Gauss}$ is the flux density calculated from fitting
a Gaussian to the brightest component in an image. S/N is the signal-to-noise
ratio of an image calculated as the peak flux density divided by $\sigma_{\rm
obs}$. The full, machine readable table will be available through the journal.
## 6 Acknowledgements
We sincerely thank Justin Linford for help with our CASA/AIPS/Difmap
calibration comparison. We thank Bob Zavala, Brian Luzum, Mike Dutka, and
Bryan Hemingway for helpful feedback.
The National Radio Astronomy Observatory is a facility of the National Science
Foundation operated under cooperative agreement by Associated Universities,
Inc. The authors acknowledge use of the Very Long Baseline Array under the US
Naval Observatory’s time allocation. This work supports USNO’s ongoing
research into the celestial reference frame and geodesy.
Figure 11: Violin plots showing the distribution and span of the gain
corrections calculated from self-calibration for each antenna for each scan at
8.7 GHz. The median value of the gain corrections is indicated by the
horizontal line in the center, and shown as text in each plot.
Figure 12: Violin plots showing the distribution and span of the gain
corrections calculated from self-calibration for each antenna for each scan at
2.3 GHz. The median value of the gain corrections is indicated by the
horizontal line in the center, and shown as text in each plot.
## References
* Beasley et al. (2002) Beasley, A. J., Gordon, D., Peck, A. B., et al. 2002, The Astrophysical Journal Supplement Series, 141, 13
* Charlot (1990) Charlot, P. 1990, The Astronomical Journal, 99, 1309
* Charlot et al. (2020) Charlot, P., Jacobs, C. S., Gordon, D., et al. 2020, Astronomy & Astrophysics, 644, A159. https://www.aanda.org/10.1051/0004-6361/202038368
* Deller et al. (2011) Deller, A. T., Brisken, W. F., Phillips, C. J., et al. 2011, Publications of the Astronomical Society of the Pacific, 123, 275
* Fey & Charlot (2000) Fey, A. L., & Charlot, P. 2000, The Astrophysical Journal Supplement Series, 128, 17
* Fey et al. (2015) Fey, A. L., Gordon, D., Jacobs, C. S., et al. 2015, Astronomical Journal, 150, 58. http://stacks.iop.org/1538-3881/150/i=2/a=58?key=crossref.e497c460708f157dbe1bba4036c43bc2
* Gordon et al. (2016) Gordon, D., Jacobs, C., Beasley, A., et al. 2016, The Astronomical Journal, 151, 154. http://stacks.iop.org/1538-3881/151/i=6/a=154?key=crossref.68494a42d907a1bfa435d151d7591017
* Greisen (2003) Greisen, E. W. 2003, in Information Handling in Astronomy - Historical Vistas (Dordrecht: Springer Netherlands), 109–125. http://link.springer.com/10.1007/0-306-48080-8{_}7
* Helmboldt et al. (2007) Helmboldt, J. F., Taylor, G. B., Tremblay, S., et al. 2007, The Astrophysical Journal, 658, 203. http://stacks.iop.org/0004-637X/658/i=1/a=203
* Högbom (1974) Högbom, J. 1974, Astronomy and Astrophysics Supplement, 15, 417. https://ui.adsabs.harvard.edu/abs/1974A{&}AS...15..417H
* Kepley et al. (2020) Kepley, A. A., Tsutsumi, T., Brogan, C. L., et al. 2020, Publications of the Astronomical Society of the Pacific, 132, 24505. http://dx.doi.org/10.1088/1538-3873/ab5e14
* Lister & Homan (2005) Lister, M. L., & Homan, D. C. 2005, The Astronomical Journal, 130, 1389. https://iopscience.iop.org/article/10.1086/518654https://iopscience.iop.org/article/10.1086/432969
* Ma et al. (1998) Ma, C., Arias, E. F., Eubanks, T. M., et al. 1998, The Astronomical Journal, 116, 516. http://stacks.iop.org/1538-3881/116/i=1/a=516
* McMullin et al. (2007) McMullin, J., Waters, B., Schiebel, D., Young, W., & Golap, K. 2007, in Astronomical Data Analysis Software and Systems XVI, Vol. 376, 127. https://ui.adsabs.harvard.edu/abs/2007ASPC..376..127M/abstract
* Napier (1994) Napier, P. 1994, in Very High Angular Resolution Imaging, ed. J. G. Robertson & W. J. Tango (Dordrecht: Springer Netherlands), 117–124. https://ui.adsabs.harvard.edu/abs/1994IAUS..158..117N/abstract
* Offringa et al. (2012) Offringa, A. R., Van De Gronde, J. J., & Roerdink, J. B. 2012, Astronomy and Astrophysics, 539, arXiv:1201.3364
* Petrov et al. (2006) Petrov, L., Kovalev, Y. Y., Fomalont, E. B., & Gordon, D. 2006, The Astronomical Journal, 131, 1872
* Popkov et al. (2021) Popkov, A. V., Kovalev, Y. Y., Petrov, L. Y., & Kovalev, Y. A. 2021, The Astronomical Journal, 161, 88. https://iopscience.iop.org/article/10.3847/1538-3881/abd18c
* Pushkarev & Kovalev (2012) Pushkarev, A. B., & Kovalev, Y. Y. 2012, Astronomy & Astrophysics, 544, A34. http://arxiv.org/abs/1205.5559http://dx.doi.org/10.1051/0004-6361/201219352http://www.aanda.org/10.1051/0004-6361/201219352
* Shepherd et al. (1994) Shepherd, M. C., Pearson, T. J., & Taylor, G. B. 1994, {DIFMAP:} an interactive program for synthesis imaging., Vol. 26, 987–989. http://adsabs.harvard.edu/abs/1994BAAS...26..987S
* Wrobel & Walker (1999) Wrobel, J. M., & Walker, R. C. 1999, in Synthesis Imaging in Radio Astronomy II, A Collection of Lectures from the Sixth NRAO/NMIMT Synthesis Imaging Summer School., ed. G. B. Taylor, C. L. Carilli, & R. A. Perley (ASP Conference Series), 171. https://ui.adsabs.harvard.edu/abs/1999ASPC..180..171W/abstract
# Genetic Networks Encode Secrets of Their Past
Peter Crawford-Kahrl1,†, Robert R. Nerem2,†, Bree Cummins1, and Tomas Gedeon1
###### Significance Statement
The study of gene regulatory networks has expanded in recent years as an
abundance of experimentally derived networks has become publicly available.
The sequence of evolutionary steps that produced these networks is usually
through gene duplication and gene interaction removal from features introduced
through other mechanisms. We develop tools to distinguish these network
features and in doing so, give methods for studying ancestral networks through
the analysis of present-day networks.
###### Abstract
Research shows that gene duplication followed by either repurposing or removal
of duplicated genes is an important contributor to evolution of gene and
protein interaction networks. We aim to identify which characteristics of a
network can arise through this process, and which must have been produced in a
different way. To model the network evolution, we postulate vertex duplication
and edge deletion as evolutionary operations on graphs. Using the novel
concept of an _ancestrally distinguished subgraph_ , we show how features of
present-day networks require certain features of their ancestors. In
particular, ancestrally distinguished subgraphs cannot be introduced by vertex
duplication. Additionally, if vertex duplication and edge deletion are the
only evolutionary mechanisms, then a graph’s ancestrally distinguished
subgraphs must be contained in all of the graph’s ancestors. We analyze two
experimentally derived genetic networks and show that our results accurately
predict the lack of large ancestrally distinguished subgraphs, despite this
feature being statistically improbable in associated random networks. This
observation is consistent with the hypothesis that these networks evolved
primarily via vertex duplication. The tools we provide open the door for
analysing ancestral networks using current networks. Our results apply to
edge-labeled (e.g. signed) graphs which are either undirected or directed.
## 1 Introduction
††1Department of Mathematical Sciences,
Montana State University, Bozeman, Montana, USA
2Institute for Quantum Science and Technology,
University of Calgary, Alberta T2N 1N4, Canada
†These authors contributed equally to this work
Gene duplication is one of the most important mechanisms governing genetic
network growth and evolution [li1997molecular, ohno2013evolution,
patthy2009protein]. Another important process is the elimination of
interactions between existing genes, and even entire genes themselves. These
two mechanisms are often linked, whereby a duplication event is followed by
the removal of some of the interactions between the new gene and existing
genes in the network [conant2003asymmetric, dokholyan2002expanding, Janwa2019,
taylor2004duplication, vazquez2003modeling, wolfe]. De novo establishment of
new interactions or addition of new genes into the network by horizontal gene
transfer is also possible, but significantly less likely [Wagner03].
Figure 1: An illustration of
vertex duplication. The left graph is $G$, and the right graph is
$G^{\prime}=\mathscr{D}_{2}(G)$. Here, vertex 2 is duplicated, resulting in
the addition of vertex $2^{\prime}$ and new edges, all of which are shown in
grey. Vertex $2^{\prime}$ inherits all of the connections of vertex 2. Since
$2$ possesses a self-loop, we add connections between $2$ and $2^{\prime}$.
A common description of protein-protein interaction networks and genetic
regulatory networks is that of a graph. Several papers study how gene
duplication, edge removal and vertex removal affect the global structure of
the interaction network from a graph theoretic perspective [vazquez03,
dorog01, sole02, Wagner01, Wagner03]. They study the effects that the
probability of duplication and removal have on various network
characteristics, such as the degree distribution of the network. These papers
conclude that by selecting proper probability rates of vertex doubling,
deletion of newly created edges after vertex doubling, and addition of new
edges, one can recover the degree distribution observed in inferred genetic
networks in the large graph limit. This seems to be consistent with the data
from Saccharomyces cerevisiae [Wagner01, Wagner03] but since regulatory
networks are finite, the degree distributions of genetic networks are by
necessity only approximations to the theoretical power-law distributions.
Other investigations are concerned with general statistical descriptors of
large networks. These descriptors include the distribution of path lengths,
number of cyclic paths, and other graph characteristics [albert02, Barabasi99,
Jeong01, watts99]. These methods are generally applicable to any type of
network (social interactions, online connections, etc) and are often used to
compare networks across different scientific domains.
We take a novel approach to analyzing biological network evolution. We pose
the following question:
###### Question 1.
Given a current network, with no knowledge of its evolutionary path, can one
recover structural traces of its ancestral network?
To answer this question we formulate a general model of graph evolution, with
two operations: the duplication of a vertex and removal of existing vertices
or edges. The effect of vertex duplication, shown in Figure 1, is defined by a
vertex and its duplicate sharing the same adjacencies. This model puts no
constraints on which vertices or edges may be removed or on the order of
evolutionary operations, nor does it limit the number of operations of either
type.
Previous investigations of the evolution of networks under vertex duplication
study special cases of our model [conant2003asymmetric,
dokholyan2002expanding, taylor2004duplication, vazquez2003modeling].
Suppose that a particular sequence of evolutionary operations transforms a
graph $G$ into a graph $G^{\prime}$. We seek to discover which characteristics
and features of the ancestor $G$ may be recovered from knowledge of
$G^{\prime}$. Although this work is motivated by biological applications, the
results in our paper apply to any edge-labeled directed or undirected graph.
Our results point in two related directions. First, we introduce the concept
of an ancestrally distinguished subgraph and show that $G$ must contain all
(ancestrally) distinguished subgraphs of $G^{\prime}$. This implies that
vertex duplication and edge deletion cannot introduce distinguished
subgraphs. Next, we define the distinguishability of a graph as the size of
its largest distinguished subgraph. Our theoretical analysis suggests that
small distinguishability is a signature of networks that evolve primarily via
vertex duplication. We confirm this by showing that two published biological
networks, as well as artificial networks evolved by simulated vertex
duplication, exhibit distinguishability that is smaller than expected under
random edge relabeling.
## 2 Main Results
### 2.1 Ancestral Networks Contain Distinguished Subgraphs
We begin by introducing a new graph property that we call _ancestral
distinguishability_ (Definition 4.7) shortened to distinguishability
hereafter. We say two vertices are distinguishable if there exists a mutual
neighbor for which the edges connecting the vertices to this neighbor have
different edge labels. In a directed graph, a mutual neighbor is either a
predecessor of both vertices or a successor of both vertices. Since, by
definition of duplication, a vertex and its duplicate must be connected to
each of their neighbors by edges with the same label (Figure 1, Definition
4.6), we show that a vertex and its duplicate can never be distinguishable.
Additionally, deletion of edges cannot create distinguishability between two
vertices.
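The pairwise test can be sketched for a directed, edge-labeled graph stored as a dict mapping `(source, target)` pairs to labels (our own minimal encoding, not from the paper):

```python
def distinguishable(u, v, edges):
    """True if u and v have a mutual neighbor joined to them by
    differently labeled edges; a mutual neighbor is a common
    predecessor or a common successor.
    """
    nodes = {x for edge in edges for x in edge}
    for w in nodes:
        # mutual successor w: compare labels of u->w and v->w
        if (u, w) in edges and (v, w) in edges and edges[(u, w)] != edges[(v, w)]:
            return True
        # mutual predecessor w: compare labels of w->u and w->v
        if (w, u) in edges and (w, v) in edges and edges[(w, u)] != edges[(w, v)]:
            return True
    return False
```

A vertex and its duplicate are joined to every neighbor by identically labeled edges, so this test can never return True for such a pair.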
We combine these results to prove that vertex duplication and edge deletion
cannot create new subgraphs for which every pair of vertices is
distinguishable. This observation yields our first main result that any such
_distinguished subgraph_ in the current network $G^{\prime}$, must have also
occurred in the ancestral network $G$ (Corollary 4.10). In fact this result is
a corollary of a stronger theorem regarding the existence of a certain graph
homomorphism from $G^{\prime}$ to $G$ (Theorem 4.9).
###### Main Result 1.
If $G^{\prime}$ is a network formed from $G$ by vertex duplication and edge
deletion, then all distinguished subgraphs of $G^{\prime}$ are isomorphic to
distinguished subgraphs of $G$. In other words, no distinguished subgraph of
$G^{\prime}$ could have been introduced by vertex duplication and edge
deletion.
We develop Main Result 1 in the setting for which vertex duplication and edge
deletion are the only evolutionary mechanisms. However, if there are
evolutionary mechanisms other than vertex duplication and edge deletion, the
the second formulation of Main Result 1 offers an important insight. If a
sequence of arbitrary evolutionary steps (vertex duplication, edge deletion,
or some other mechanism) takes a network $G$ to a network $G^{\prime}$
containing a distinguished subgraph $H$, then either $H$ is isomorphic to a
subgraph of $G$ or at least one step in the evolutionary sequence was not
vertex duplication or edge deletion.
### 2.2 A Robust Signature of Duplication
Figure 2: Colored points represent 500 directed graphs generated from random
25-vertex seed graphs by repeated random vertex duplication and subsequent
edge deletion until a predetermined number of edges is achieved. Color
indicates final number of edges after deletion. Each of the 500 grey points
represents a randomly generated ER-graph with number of vertices, positive
edges, and negative edges equal to that of a corresponding evolved graph. The
corresponding figure for undirected graphs is Figure 4(a) in the SI.
We next aim to determine if the effects of evolution by vertex duplication and
edge deletion can be identified in biological networks. We consider the
_distinguishability_ of a graph, which is the number of vertices in its
largest distinguished subgraph. Since vertex duplication and edge deletion
cannot create distinguishability, the distinguishability of a graph cannot
increase under this model of evolution (Corollary 4.12). Since observations
indicate that evolution is dominated by duplication and removal, we predict
that genetic networks exhibit low distinguishability.
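Finding the distinguishability of a graph amounts to a maximum-clique search in the auxiliary graph whose edges join distinguishable pairs; a brute-force sketch (exponential, but adequate for small networks; the encoding is ours):

```python
from itertools import combinations

def distinguishability(nodes, edges):
    """Number of vertices in the largest subset in which every pair
    is distinguishable (edges: dict (source, target) -> label)."""
    def pair_ok(u, v):
        # u, v are distinguishable via some mutual neighbor w
        for w in nodes:
            if ((u, w) in edges and (v, w) in edges
                    and edges[(u, w)] != edges[(v, w)]):
                return True
            if ((w, u) in edges and (w, v) in edges
                    and edges[(w, u)] != edges[(w, v)]):
                return True
        return False

    # largest-first search over vertex subsets
    for k in range(len(nodes), 1, -1):
        for subset in combinations(nodes, k):
            if all(pair_ok(u, v) for u, v in combinations(subset, 2)):
                return k
    return 1 if nodes else 0
```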
To quantify the degree to which the distinguishability of a graph $G$ is low,
we compute the _distinguishability deviation_ of $G$: the difference between
the distinguishability of $G$ and the expected distinguishability of $G$ under
random edge relabeling (Equation 7). Since low distinguishability is a
signature of vertex duplication, we expect random relabeling to remove this
signature and therefore increase distinguishability. In other words, we expect
networks evolved by vertex duplication and edge deletion to have negative
distinguishability deviation.
We calculate the distinguishability deviation of networks constructed by
simulated evolution via vertex duplication and edge deletion. These networks
are formed in two stages from 25-vertex Erdős–Rényi graphs (ER-graphs [ER])
with two edge labels denoting positive and negative interaction. First, vertex
duplication is applied 225 times, each time to a random vertex. Next, edges
are randomly deleted until some target final number of edges is reached. The
deletions simulate both evolutionary steps and the effect of incomplete data
in experimentally derived networks. We note that the operations of vertex
duplication and edge removal commute, in the sense that any graph that can be
built by applying these operations in an arbitrary order can also be built by
performing all duplications first and then an appropriate number of
deletions. Our construction is therefore general.
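One evolutionary step of this scheme can be sketched as follows, under the convention of Figure 1 that a self-loop on the duplicated vertex also yields edges between the vertex and its duplicate (encoding and function names are ours):

```python
import random

def duplicate_vertex(nodes, edges, v):
    """Duplicate vertex v: the new vertex inherits every labeled
    edge of v; a self-loop on v also links v to its duplicate."""
    new = max(nodes) + 1
    nodes.append(new)
    for (a, b), lab in list(edges.items()):
        if a == v:
            # a self-loop (v, v) becomes a self-loop (new, new)
            edges[(new, new if b == v else b)] = lab
        if b == v and a != v:
            edges[(a, new)] = lab
    if (v, v) in edges:
        edges[(v, new)] = edges[(new, v)] = edges[(v, v)]
    return new

def delete_random_edges(edges, target, rng=random.Random(0)):
    """Randomly remove edges until `target` edges remain."""
    while len(edges) > target:
        edges.pop(rng.choice(sorted(edges)))
```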
As shown in Figure 2, these simulations indicate that networks evolved by
vertex duplication have negative distinguishability deviation. For each graph
represented by a colored point in Figure 2, we construct an ER-graph with the
same number of vertices, positive edges, and negative edges. These graphs are
represented by grey points and show that ER-graphs exhibit near-zero
distinguishability deviation. This negativity is robust against edge deletion;
even graphs that had 80% of their edges deleted after vertex duplication
exhibited statistically significant negative distinguishability deviation.
Having established evidence that graphs evolved by vertex duplication exhibit
negative distinguishability deviation, we evaluate if this property is
observable in biological networks. We consider two networks. The first is a D.
melanogaster protein-protein interaction network developed by [vin14],
represented by an edge-labeled undirected graph. Second, we investigate the
directed human blood cell regulatory network recorded in [Collombet2017]. Both
networks have label set $L=\\{-1,+1\\}$, signifying negative and positive
regulation, respectively.
The distinguishability deviations of these networks confirm our predictions.
Respectively, the distinguishabilities of the D. melanogaster and blood cell
networks are 7 and 4, and their expected distinguishabilities, approximated by
100 random edge-sign relabelings, are $31.2\pm.7\;\mbox{ and }\;5.6\pm.6$. Thus,
these networks have distinguishability deviations of
$-24.2\pm.7\;\mbox{ and }\;-1.6\pm.6$ (1)
with statistical significance of $34.6$ and $2.3$ standard deviations,
respectively. These results are consistent with the hypothesis that biological
networks inferred from experimental data are subject to long sequences of
vertex duplication and edge removal without the evolutionary operation of
novel vertex or edge addition.
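The deviations and significances quoted above come from a simple comparison against the relabeled ensemble; as a sketch:

```python
def deviation_significance(observed, expected_mean, expected_std):
    """Distinguishability deviation (Equation 7) and its significance
    in standard deviations of the random-relabeling ensemble."""
    deviation = observed - expected_mean
    return deviation, abs(deviation) / expected_std
```

For the D. melanogaster network, `deviation_significance(7, 31.2, 0.7)` gives a deviation of about $-24.2$ at roughly $34.6$ standard deviations.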
The joint evidence of negative distinguishability deviations in both simulated
and observed data leads to the following result.
###### Main Result 2.
Negative distinguishability deviation is a likely signature of evolution via
vertex duplication and edge deletion.
While we do not offer a rigorous mathematical proof, in Subsection 4.4 we give
evidence for a conjecture (Conjecture 4.15) which, if true, would prove that
vertex duplication always decreases distinguishability deviation. SI Section D
gives a detailed description of the simulated evolution scheme we used in
Figure 2. For completeness, we show in this section that negative
distinguishability deviation cannot be fully explained by the single vertex
characteristics (i.e. signed degree sequence) or small world properties of the
networks.
## 3 Discussion
We introduce the concept of distinguished subgraphs, in which every vertex has
regulatory interactions that differentiate it from every other vertex in the
subgraph. We show that distinguished subgraphs cannot be created by vertex
duplication and edge deletion. Remarkably, this implies that any of a
network’s distinguished subgraphs must appear in all of its ancestors under a
model of network evolution that allows duplication and removal, but does not
allow for the addition of new vertices or edges.
In biological networks the addition of regulatory interactions between
existing genes (neofunctionalization [Force1999]), or the addition of entirely
new genes via horizontal gene transfer [Wagner03] are possible, but are
considered less likely than gene duplication or loss of function of a
regulatory interaction [Bergthorsson2007]. With this in mind, we consider a
model of network evolution in which long sequences of vertex duplication and
edge removal are interspersed by infrequent additions of new edges or
vertices. Under this model, Main Result 1 (Corollary 4.10) applies to any
sequence of consecutive vertex duplications and edge removals.
We investigate whether the predicted features of vertex duplication can be
found in biological networks inferred from experimental observations. Using
the metric of distinguishability deviation we show that two inferred
biological networks and a population of simulated networks evolved by vertex
duplication exhibit negative distinguishability deviation that is
statistically improbable in associated random networks. We propose that
negative distinguishability deviation is a marker of evolution by vertex
duplication and edge removal.
One potential application of this result is a method of checking the
suitability of random graph models. Often, random statistical models are
developed to generate graphs that match properties of social networks
[newman2002random], properties of biological networks [saul2007exploring], or
general graph theoretic properties [fosdick2018configuring]. For example, the
discovery of small-world phenomena [Milgram1967, watts99] led to the
development of the Watts-Strogatz model [Watts1998]. Our results imply that an
accurate random graph model for signed biological networks, or more generally
edge-labeled networks that primarily evolved via vertex duplication, should
generate networks with negative distinguishability deviation. Additionally,
distinguishability deviation could inform the development of new models that
more closely agree with experimentally derived networks.
As an illustration of the utility of Main Result 1, we consider the following
example. Certain network motifs, i.e. 3-4 vertex subgraphs, have been shown to
appear at statistically higher rates in inferred biological networks
[milo2002network]. Motifs seem to be a byproduct of convergent evolution,
being repeatedly selected for based on their underlying biological function,
and appearing in organisms and systems across various biological applications
[alon2007network].
Vertex duplication and edge removal can easily create new motifs. For example,
consider the feed-forward loop, any three vertex subgraph isomorphic to a
directed graph with edge set $\\{(i,j),(j,k),(i,k)\\}$ (see
[shen2002network]). In Figure 1, no feed-forward loops can be found in $G$,
but there are two in $G^{\prime}$, both of which contain the vertices $1$,
$2$, and $2^{\prime}$. In contrast, the introduction of motifs that are also
distinguished subgraphs by vertex duplication and edge deletion is forbidden
by Main Result 1. Indeed, the feed-forward loops created in Figure 1 are not
distinguished subgraphs. This ability to identify which motifs could not have
arisen from vertex duplication and edge deletion could provide new insight
into the origin of specific motifs and, potentially, their biological
importance. Similarly, identifying genes in subgraphs that cannot arise from
vertex duplication and edge deletion could be useful for finding genes that
were introduced by mechanisms outside of these operations, such as horizontal
gene transfer.
Finally, our mathematical results are general enough to survey network models
beyond genetics to discern if vertex duplication may have played a role in
their evolution. For example, current ecological networks reflect past
speciation events, where a new species initially shares the ecological
interactions of their predecessors. This can be viewed as vertex duplication
and therefore ecological networks may exhibit significant negative
distinguishability deviation. Evaluating the distinguishability deviation of
ecological networks could indicate if the duplication process has been a
significant factor in their evolution. More broadly, the study of the
evolutionary processes that produce networks has been used to understand why
networks from distinct domains, be they social, biological, genetic, internet
connections, etc., have properties unique to their domain (e.g. exponents of
power law distributions [Graham2003]). Distinguishability deviation is yet
another tool to understand the effect evolutionary processes have on networks.
## 4 Methods
We proceed with preliminary definitions to familiarize the reader with the
language and notation used in this paper.
### 4.1 Definitions
Throughout this paper we fix an _edge label set_ $L$. We assume that
$\left|L\right|\geq 2$, otherwise the results are trivial. For example, to
consider signed regulatory networks with both activating and inhibiting
interactions one could take $L=\\{+1,-1\\}$. We use this choice in examples,
along with the notation $\dashv$ and $\to$ to represent directed edges with
labels $-1$ and $+1$ respectively.
###### Definition 4.1.
A _graph_ is the 3-tuple $G:=(V,E,\ell)$ where $V$ is a set of vertices,
$E\subseteq\\{(i,j):i,j\in V\\}$ is a set of directed edges, and $\ell:E\to L$
is a map labeling edges with elements of $L$.
Our results apply to both directed graphs and undirected graphs. To facilitate
this, we use graph to mean either an undirected or directed graph, and view
undirected graphs as a special case of directed graphs, as seen in the
following definition.
###### Definition 4.2.
A graph $G=(V,E,\ell)$ is _undirected_ if $(i,j)\in E$ and $\ell(i,j)=a$ if
and only if $(j,i)\in E$ and $\ell(j,i)=a$. For an unlabeled graph,
$\ell=\emptyset$.
###### Definition 4.3.
A _subgraph_ of a graph $G=(V,E,\ell)$ is a graph
$H=(V^{\prime},E^{\prime},\ell|_{E^{\prime}})$ such that $V^{\prime}\subseteq
V$ and $E^{\prime}\subseteq E\cap V^{\prime}\times V^{\prime}$. If $H$ is
undirected, we require that $G$ is also undirected, i.e. $E^{\prime}$
satisfies $(i,j)\in E$ if and only if $(j,i)\in E$.
###### Definition 4.4.
Let $(V,E,\ell)$ be a graph. We say $j\in V$ is a _neighbor_ of $i\in V$ if
either $(j,i)\in E$ or $(i,j)\in E$.
###### Definition 4.5.
Let $G^{\prime}=(V^{\prime},E^{\prime},\ell^{\prime})$ and $G=(V,E,\ell)$ be
two graphs. A map $\Phi\colon V^{\prime}\to V$ is a _graph homomorphism (from
$G^{\prime}$ to $G$)_ if $\forall i,j\in V^{\prime}$, if $(i,j)\in
E^{\prime}$, then $(\Phi(i),\Phi(j))\in E$ and
$\ell^{\prime}(i,j)=\ell(\Phi(i),\Phi(j))$. In other words, a graph
homomorphism is a map on vertices that respects edges and edge labels.
The following definition specifies an operation on a graph which duplicates a
vertex $d$, producing a new graph that is identical in all respects except for
the addition of one new vertex, $d^{\prime}$, that copies the edge connections
of $d$. This definition captures the behavior of gene duplication in genetic
networks.
###### Definition 4.6.
Given a graph $G=(V,E,\ell)$ and a vertex $d\in V$, we define the _vertex
duplication of $d$_ as the graph operation which constructs a new graph,
denoted
${\mathscr{D}_{d}(G):=G^{\prime}=(V^{\prime},E^{\prime},\ell^{\prime})}$,
where $V^{\prime}:=V\cup\\{d^{\prime}\\}$, and $(i,j)\in E^{\prime}$ with
$\ell^{\prime}(i,j)=a$ if and only if either
1. $(i,j)\in E$ with $\ell(i,j)=a$,
2. $j=d^{\prime}$ and $(i,d)\in E$ with $\ell(i,d)=a$,
3. $i=d^{\prime}$ and $(d,j)\in E$ with $\ell(d,j)=a$,
4. or $j=i=d^{\prime}$ and $(d,d)\in E$ with $\ell(d,d)=a$.
An example of vertex duplication is shown in Figure 1.
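Definition 4.6 is straightforward to mechanize. The following Python sketch is illustrative only (the authors' code lives in the repository cited as [35]); the edge-dict representation and the name `duplicate_vertex` are our assumptions.

```python
def duplicate_vertex(vertices, edges, d, d_new):
    """Vertex duplication D_d(G) of Definition 4.6.

    `edges` maps directed pairs (i, j) to labels; `d_new` names the copy d'.
    The four cases of the definition are handled by the (non-exclusive) tests
    below; a self-loop (d, d) triggers all of them, yielding (d, d'), (d', d),
    and (d', d') with the same label.
    """
    new_vertices = set(vertices) | {d_new}
    new_edges = dict(edges)                      # case 1: keep every old edge
    for (i, j), label in edges.items():
        if j == d:                               # case 2: copy incoming edges of d
            new_edges[(i, d_new)] = label
        if i == d:                               # case 3: copy outgoing edges of d
            new_edges[(d_new, j)] = label
        if i == d and j == d:                    # case 4: copy a self-loop on d
            new_edges[(d_new, d_new)] = label
    return new_vertices, new_edges

# A small signed network with L = {+1, -1} (activation / inhibition):
V = {1, 2, 3}
E = {(1, 2): +1, (1, 3): -1, (2, 3): +1}
V2, E2 = duplicate_vertex(V, E, 2, "2'")         # duplicate vertex 2
```

After the call, `E2` contains the three original edges plus `(1, "2'")` and `("2'", 3)`, mirroring how the copy inherits the connections of the duplicated vertex.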
### 4.2 Distinguishability
We now introduce an important invariant property under vertex duplication and
edge removal.
###### Definition 4.7.
Let $G=(V,E,\ell)$ be a graph. Two vertices $i,j\in V$ are _distinguishable
(in $G$)_ if and only if there exists a vertex $k$ that is a neighbor of both
$i$ and $j$ such that either
$(i,k),(j,k)\in E\text{ and }\ell(i,k)\neq\ell(j,k)$ (2)
or
$(k,i),(k,j)\in E\text{ and }\ell(k,i)\neq\ell(k,j).$ (3)
We say that $k$ is a _distinguisher_ of $i$ and $j$. It is worth noting that
there may be multiple distinguishers of $i$ and $j$, i.e. distinguishers need
not be unique. Furthermore, if $G$ is undirected, Equation (2) holds for a
vertex $k$ if and only if Equation (3) also holds.
We say $U\subseteq V$ is a _distinguishable set (in G)_ if for all $i,j\in U$
with $i\neq j$, the vertices $i$ and $j$ are distinguishable. Similarly, we
refer to any subgraph whose vertex set is distinguishable as a _distinguished
subgraph_.
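Checking whether a single pair of vertices is distinguishable is a direct transcription of Equations (2) and (3). The helper below is an illustrative sketch, not the authors' implementation; it stores a labeled digraph as a dict mapping directed pairs (i, j) to labels, and the name `is_distinguishable` is ours.

```python
def is_distinguishable(edges, i, j):
    """True if some vertex k distinguishes i and j (Definition 4.7).

    `edges` maps directed pairs (u, v) to labels. The definition does not
    require k to be distinct from i or j.
    """
    # Candidate distinguishers: every vertex adjacent to i or to j.
    candidates = ({v for (u, v) in edges if u in (i, j)}
                  | {u for (u, v) in edges if v in (i, j)})
    for k in candidates:
        # Equation (2): common out-neighbor reached with differing labels.
        if (i, k) in edges and (j, k) in edges and edges[(i, k)] != edges[(j, k)]:
            return True
        # Equation (3): common in-neighbor pointing in with differing labels.
        if (k, i) in edges and (k, j) in edges and edges[(k, i)] != edges[(k, j)]:
            return True
    return False

# With L = {+1, -1}: vertex 1 distinguishes 2 and 3 in this example.
E = {(1, 2): +1, (1, 3): -1, (2, 3): +1}
```

For this `E`, the pairs (1, 2) and (2, 3) are distinguishable but (1, 3) is not, so {1, 2} and {2, 3} are distinguishable sets while {1, 2, 3} is not.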
###### Remark 4.8.
As long as $\left|L\right|\geq 2$, any graph $G$ can be embedded as a
distinguished subgraph of a larger graph $G^{\prime}$. To see this, start from
$G$ and, for each pair $i,j\in G$, add a new vertex $k$ and edges
$\\{(i,k),(j,k)\\}$ with different labels, so that
$\ell(i,k)\neq\ell(j,k)$. Then every pair $i$ and $j$ is distinguishable, and
$G$ is embedded as a distinguished subgraph of the resulting graph $G^{\prime}$.
To illustrate the concept of distinguishable sets, consider the two graphs
shown in Figure 1. The leftmost graph has distinguishable sets $\\{1,2\\}$ and
$\\{2,3\\}$. Here, $2$ is a distinguisher of $1$ and $2$, and $1$ is a
distinguisher of $2$ and $3$. However, in the rightmost graph, $2$ and
$2^{\prime}$ are not distinguishable. Any mutual neighbor of $2$ and
$2^{\prime}$ shares exactly the same edges with matching labels. The last
insight, that the duplication of a gene $d$ produces an indistinguishable pair
$d$ and $d^{\prime}$, is general and leads to our main result in Theorem 4.9.
### 4.3 Distinguished Subgraphs
Fix two graphs $G$ and $G^{\prime}$. Suppose that $G$ is an _ancestor_ of
$G^{\prime}$, that is, there exists a sequence of graphs $G_{1},\dots,G_{M}$
with $G_{m}:=(V_{m},E_{m},\ell_{m})$, such that $G=G_{1}$, $G^{\prime}=G_{M}$,
and for each $m\in\\{1,\dots,M-1\\}$, either $G_{m+1}$ is a subgraph of $G_{m}$,
or $G_{m+1}=\mathscr{D}_{d_{m}}(G_{m})$, for some $d_{m}\in V_{m}$.
To address Question 1, we present Theorem 4.9. It states that whenever $G$ is
an ancestor of $G^{\prime}$, then there must exist a graph homomorphism from
$G^{\prime}$ to its ancestor $G$ such that the homomorphism is injective on
distinguishable sets of vertices. This result allows us to conclude several
corollaries that characterize the properties of the ancestor network.
The proof of the following theorem makes use of Lemma A.1 in Appendix A.
###### Theorem 4.9.
Let $G=(V,E,\ell)$ be an ancestor of
$G^{\prime}=(V^{\prime},E^{\prime},\ell^{\prime})$. Then there is a graph
homomorphism $\Phi\colon V^{\prime}\to V$ such that for all distinguishable
sets $U\subseteq V^{\prime}$, the restriction $\Phi|_{U}$ is 1-to-1, and
$\Phi(U)$ is a distinguishable set in $G$.
###### Proof.
Let $G_{1},\dots,G_{M}$ be the evolutionary path connecting ancestor $G$ with
the current graph $G^{\prime}$, where $G_{m}:=(V_{m},E_{m},\ell_{m})$. At each
step, we construct a map $\Phi_{m}$ from $G_{m+1}$ to $G_{m}$ satisfying the
required conditions. The composition $\Phi:=\Phi_{1}\circ\dots\circ\Phi_{M-1}$
then verifies the desired result.
We now construct $\Phi_{m}$. If $G_{m+1}$ is a subgraph of $G_{m}$, let
$\Phi_{m}$ be the inclusion map $\iota\colon V_{m+1}\hookrightarrow V_{m}$.
The inclusion map is obviously a graph homomorphism, and is injective on all
of $V_{m+1}$. Let $i,j\in V_{m+1}$ be distinguishable vertices in $G_{m+1}$,
and let $k$ be a distinguisher of $i$ and $j$. Since $\iota$ is a
homomorphism, $\iota(k)=k\in V_{m}$ is a distinguisher of
$\iota(i),\iota(j)\in V_{m}$.
If $G_{m+1}=\mathscr{D}_{d_{m}}(G_{m})$, let $\Phi_{m}\colon V_{m+1}\to V_{m}$
be defined as
$\Phi_{m}(i):=\begin{cases}d_{m}&\text{if }i=d_{m}^{\prime}\\\
i&\text{otherwise}\end{cases}\ .$
Lemma A.1 in Appendix A, proved using Definition 4.6, verifies that this map
satisfies the required properties. ∎
It is worth noting that the proof of Theorem 4.9 is constructive; however, the
construction relies on knowledge of the specific evolutionary path, i.e. the
sequence of events that forms the graph sequence $G_{1},\dots,G_{M}$. In almost
all applications, this sequence is unknown or only partially understood.
However, the existence of the homomorphism allows us to infer features of
$G$ from knowledge of the graph $G^{\prime}$.
###### Corollary 4.10.
Let $G$ be the ancestor of $G^{\prime}$. Any distinguished subgraph of
$G^{\prime}$ is isomorphic to a subgraph of $G$.
###### Proof.
Consider a distinguished subgraph of $G^{\prime}$ with vertex set $U\subseteq
V^{\prime}$. Since $U$ is distinguishable, by Theorem 4.9 $\Phi|_{U}$ is an
injective graph homomorphism, so it is an isomorphism onto its image.
Therefore, $\Phi|_{U}$ is the desired isomorphism. ∎
This result describes structures that must have been present in any ancestor
graph $G$, and puts a lower bound on the size of $G$.
###### Definition 4.11.
The _distinguishability_ of a graph $G=(V,E,\ell)$ is the size of a maximum
distinguishable subset $U\subseteq V$. Let $\mathtt{D}(G)$ denote the
distinguishability of a graph $G$.
###### Corollary 4.12.
Let $G$ be the ancestor of $G^{\prime}$. The distinguishability of $G$ is
greater than or equal to the distinguishability of $G^{\prime}$,
$\mathtt{D}(G)\geq\mathtt{D}(G^{\prime}).$
###### Proof.
Let $U\subseteq V^{\prime}$ be a distinguishable set in $G^{\prime}$. Then
$\Phi(U)$ is distinguishable in $G$, and since $\Phi|_{U}$ is injective,
$\left|\Phi(U)\right|=\left|U\right|$. ∎
Identifying distinguishable sets can be computationally challenging, and so we
recast the problem of finding distinguishable sets in terms of a more familiar
computational problem. We construct a new graph whose cliques are
distinguishable sets of the original graph.
###### Definition 4.13.
The _distinguishability graph_ of $G=(V,E,\ell)$ is an undirected graph
$D(G):=(V,E^{\ast},\emptyset)$ where $(i,j)\in E^{\ast}$ if and only if $i$
and $j$ are distinguishable in $G$.
Recall that a set of vertices is distinguishable if and only if each pair of
vertices in that set is distinguishable. Therefore distinguishable sets in $G$
are exactly the cliques in the distinguishability graph $D(G)$; see SI Section
C. We also prove that the clique problem is efficiently reducible to computing
the distinguishability of a graph, so computing distinguishability is
$\mathcal{NP}$-hard. Since the associated decision problem is easily seen to
lie in $\mathcal{NP}$, computing the distinguishability is
$\mathcal{NP}$-complete.
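For small graphs, $\mathtt{D}(G)$ can be computed exactly by brute force, mirroring the reduction just described: build the pairwise distinguishability relation and search for a largest clique. A sketch under an assumed edge-dict representation (exponential time, consistent with $\mathcal{NP}$-completeness; not the authors' code, whose repository is cited as [35]):

```python
from itertools import combinations

def distinguishability(vertices, edges):
    """Exact D(G) by exhaustive search; practical only for small graphs.

    Builds the edge set of the distinguishability graph D(G) of Definition
    4.13, then finds a maximum clique by trying every vertex subset.
    """
    def distinguishes(k, i, j):
        return (((i, k) in edges and (j, k) in edges
                 and edges[(i, k)] != edges[(j, k)])
                or ((k, i) in edges and (k, j) in edges
                    and edges[(k, i)] != edges[(k, j)]))

    # Edges of the distinguishability graph: the distinguishable pairs.
    pair_ok = {frozenset(p) for p in combinations(vertices, 2)
               if any(distinguishes(k, *p) for k in vertices)}

    best = 1 if vertices else 0    # a single vertex is trivially a clique
    for size in range(2, len(vertices) + 1):
        for subset in combinations(vertices, size):
            if all(frozenset(p) in pair_ok for p in combinations(subset, 2)):
                best = size
    return best
```

On the three-vertex signed graph with edges (1, 2): +1, (1, 3): -1, (2, 3): +1 this returns 2: the pairs (1, 2) and (2, 3) are distinguishable but (1, 3) is not, so no distinguishable set of size 3 exists. For larger graphs, a library clique solver (e.g. networkx [36]) can replace the inner subset search.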
### 4.4 Distinguishability Deviation
We now search for consequences of Corollary 4.12 in inferred biological
networks. To do so, we seek a metric that evaluates how the distinguishability
of a network compares with expected distinguishability in an appropriately
selected class of random graphs. Since vertex duplication cannot increase
distinguishability, we expect genetic networks to exhibit low
distinguishability when compared with similar random graphs. The most obvious
graphs to compare against are those with the same structure as $G$, and with
the same expected fraction of positive and negative edges as $G$, but in which
each edge has a randomly assigned label. Before formalizing this notion in
Definition 4.14, we adjust our perspective on undirected graphs in order to
reduce notational complexity. For the rest of this manuscript, we adopt the
convention that if $E$ is an edge set for an undirected graph, then
$E\subseteq\\{\\{i,j\\}:i,j\in V\\}$, i.e. edges of undirected graphs are
unordered pairs of vertices. The notation $e\in E$ then refers to $e=(i,j)$ in
a directed graph and $e=\\{i,j\\}$ in an undirected graph.
###### Definition 4.14.
Let $G=(V,E,\ell)$ be a graph. We define the probability of each label in $G$
by counting its relative edge label abundance
$\mathbf{p}_{G}(a):=\frac{\left|\\{e\in E:\ell(e)=a\\}\right|}{|E|}\ .$ (4)
Let $\\{\ell_{r}\\}_{r\in R}$ be the set of all possible edge label maps,
$\ell_{r}\colon E\to L$, where $R$ is an index set. Denote
$G_{r}:=(V,E,\ell_{r})$ to be the graph with the same vertices and edges as
$G$ but with edge labels determined by $\ell_{r}$. We define the _expected
distinguishability of $G$_ as
$\left\langle\mathtt{D}(G)\right\rangle:=\sum_{r\in
R}P(G_{r})\mathtt{D}(G_{r}),$ (5)
where
$P(G_{r})=\prod_{e\in E}\mathbf{p}_{G}(\ell_{r}(e)).$ (6)
We interpret $P(G_{r})$ as the probability of the graph $G_{r}$ conditioned on
using the unlabeled structure of $G$.
In addition, we define the _distinguishability deviation_ of $G$ as the
difference between its distinguishability and its expected distinguishability,
i.e.
$\mathtt{D}(G)-\langle\mathtt{D}(G)\rangle.$ (7)
Expected distinguishability $\langle\mathtt{D}(G)\rangle$ can be approximated
by randomly relabeling $G$ with probability according to Equation (6) and
calculating the distinguishability of the resultant graph. Repeating the
process multiple times and averaging yields an approximation of expected
distinguishability. We utilize this method in our calculations of
distinguishability deviation in Section 2. In particular, the
distinguishability deviations in Figure 2 were calculated by averaging over 10
random graphs. The distinguishability deviations of the biological networks in
Equation (1) were found by averaging over 100 random graphs.
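The resampling procedure just described can be sketched as follows. This is our illustrative reimplementation, with a brute-force distinguishability helper inlined so the snippet stands alone; the names and the edge-dict representation are assumptions, not the authors' code.

```python
import random
from itertools import combinations

def _distinguishability(vertices, edges):
    """Brute-force D(G): maximum distinguishable-set size (small graphs only)."""
    def dis(k, i, j):
        return (((i, k) in edges and (j, k) in edges
                 and edges[(i, k)] != edges[(j, k)])
                or ((k, i) in edges and (k, j) in edges
                    and edges[(k, i)] != edges[(k, j)]))
    ok = {frozenset(p) for p in combinations(vertices, 2)
          if any(dis(k, *p) for k in vertices)}
    best = 1 if vertices else 0
    for size in range(2, len(vertices) + 1):
        for sub in combinations(vertices, size):
            if all(frozenset(p) in ok for p in combinations(sub, 2)):
                best = size
    return best

def distinguishability_deviation(vertices, edges, trials=100, seed=0):
    """Monte Carlo estimate of Equation (7), D(G) - <D(G)>.

    Each trial relabels every edge independently; drawing uniformly from the
    multiset of observed labels reproduces the relative abundances p_G of
    Equation (4), so trial graphs are sampled with probability (6).
    """
    rng = random.Random(seed)
    labels = list(edges.values())
    if not labels:
        return 0.0
    mean = sum(
        _distinguishability(vertices, {e: rng.choice(labels) for e in edges})
        for _ in range(trials)
    ) / trials
    return _distinguishability(vertices, edges) - mean
```

Increasing `trials` reduces the sampling error of the $\langle\mathtt{D}(G)\rangle$ estimate; the manuscript averages over 10 relabelings for Figure 2 and 100 for the networks in Equation (1).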
The results of distinguishability deviation calculations in published
biological networks and simulated networks lead us to the following
conjecture.
###### Conjecture 4.15.
Let $\mathcal{G}_{n}$ be the set of all graphs $G=(V,E,\ell)$ with $n$
vertices. Let $\mathcal{U}_{n}\subseteq\mathcal{G}_{n}$ be the set of those
graphs for which
$\frac{1}{\left|V\right|}\sum_{d\in
V}\left\langle\mathtt{D}(\mathscr{D}_{d}(G))\right\rangle-\left\langle\mathtt{D}(G)\right\rangle>0;$
(8)
that is, the set of graphs for which the expected distinguishability increases
under vertex duplication. Then the fraction of graphs with this property
approaches $1$ for large graphs
$\lim_{n\to\infty}\frac{|\mathcal{U}_{n}|}{|\mathcal{G}_{n}|}=1.$
If Conjecture 4.15 is true, it would imply that vertex duplication decreases
distinguishability deviation on average for the majority of large graphs. This
follows from Corollary 4.12 which shows duplication does not increase
distinguishability. Therefore, if duplication increases expected
distinguishability, it must decrease distinguishability deviation. Part of the
difficulty in proving Conjecture 4.15 arises because the distribution of edge
labels in $G^{\prime}=\mathscr{D}_{d}(G)$ and $G$ may be significantly
different, which causes the probabilities of edge label assignments $\ell_{r}$
to change significantly between $G$ and $G^{\prime}$.
However, as evidence in support of the conjecture we prove a version of
Conjecture 4.15 in SI Section B for a modified expected distinguishability
that is taken over a fixed probability of edge labels. To provide the main
idea of the proof, fix a probability of edge labels, which will be used for both
$G$ and $G^{\prime}=\mathscr{D}_{d}(G)$. Let $\\{\ell_{r}\\}$ and
$\\{\ell^{\prime}_{s}\\}$ be the sets of all possible edge label maps of $G$
and $G^{\prime}$ respectively, and denote $G_{r}:=(V,E,\ell_{r})$ and
$G^{\prime}_{s}:=(V^{\prime},E^{\prime},\ell_{s}^{\prime})$. For this fixed
labeling probability, if we randomize the labels of $G^{\prime}$, then the
total probability of all labelings $\ell_{s}:E^{\prime}\to L$ satisfying
$\ell_{s}|_{E}=\ell_{r}$ equals the probability of the labeling
$\ell_{r}:E\to L$ when the labels of $G$ are randomized. Then, noting that
each such $G_{r}$ is a subgraph of $G_{s}^{\prime}$, it follows from Corollary
4.12 with $G_{s}^{\prime}$ as an ancestor of $G_{r}$ that
$\mathtt{D}(G_{s}^{\prime})\geq\mathtt{D}(G_{r})$, as required.
This shows that if the expected distinguishability is taken over a fixed
labeling probability, then the expected distinguishability of a graph $G$
cannot be more than that of $G^{\prime}$. In fact, we show in SI Section B
that under this assumption as long as $d^{\prime}$ has at least one neighbor,
then the modified expected distinguishability of $G^{\prime}$ is strictly
greater than that of $G$.
## Appendix A Proof of Lemma A.1
###### Lemma A.1.
Let $G=(V,E,\ell)$ be a graph. Let
$G^{\prime}=\mathscr{D}_{d}(G)=(V^{\prime},E^{\prime},\ell^{\prime})$, for
some $d\in V$. Let $\phi\colon V^{\prime}\to V$ be the map defined as
$\phi(i):=\begin{cases}d&\text{if }i=d^{\prime}\\\
i&\text{otherwise}\end{cases}\ .$
Then $\phi$ is a graph homomorphism such that for all distinguishable sets
$U\subseteq V^{\prime}$, the restriction $\phi|_{U}$ is 1-to-1, and $\phi(U)$
is a distinguishable set in $G$.
###### Proof.
We first show $\phi$ is a graph homomorphism. Let $i,j\in V^{\prime}$. If
$i,j\neq d^{\prime}$, then $(\phi(i),\phi(j))=(i,j)$. Inspecting Definition
4.6 we see $(i,j)\in E$ if and only if $(i,j)\in E^{\prime}$, and
$\ell(i,j)=\ell^{\prime}(i,j)$.
Now suppose $i=d^{\prime}$ and $j\neq d^{\prime}$. The case where $i\neq
d^{\prime}$ and $j=d^{\prime}$ follows a symmetric argument. Suppose that
$(d^{\prime},j)\in E^{\prime}$. Then $(\phi(d^{\prime}),\phi(j))=(d,j)$, and
from the construction of $E^{\prime}$ in Definition 4.6 we see that
$(d^{\prime},j)\in E^{\prime}$ if and only if $(d,j)\in E$. Finally, by
definition, $\ell^{\prime}(d^{\prime},j)=\ell(d,j)$. When $i=j=d^{\prime}$,
the proof follows similarly.
To prove the properties of $\phi$ on a distinguishable set, we first show that
$d$ and $d^{\prime}$ are not distinguishable. Suppose by way of contradiction
that $k$ is a distinguisher of $d$ and $d^{\prime}$ in $G^{\prime}$. From the
definition of vertex duplication, if $(d,k)\in E^{\prime}$, then
$(d^{\prime},k)\in E^{\prime}$, and
$\ell^{\prime}(d,k)=\ell^{\prime}(d^{\prime},k)$. Similarly, $(k,d)\in
E^{\prime}$, then $(k,d^{\prime})\in E^{\prime}$, and
$\ell^{\prime}(k,d)=\ell^{\prime}(k,d^{\prime})$. Therefore, neither (2) nor
(3) in Definition 4.7 can be satisfied, a contradiction. We conclude that $d$
and $d^{\prime}$ are not distinguishable.
Let $U\subseteq V^{\prime}$ be a distinguishable set. Then since $d$ and
$d^{\prime}$ are not distinguishable, $U$ can contain at most one of them.
Notice that $\phi$ is 1-to-1 on $V\setminus\\{d\\}$, as well as on
$V\setminus\\{d^{\prime}\\}$. Consequently $\phi|_{U}$ is 1-to-1.
Finally, we show that $\phi(U)$ is distinguishable. Let $i,j\in U$. Let $k$ be
a distinguisher of $i$ and $j$. Then since $\phi$ is a graph homomorphism, it
respects edge labels, so $\phi(k)$ is a distinguisher of $\phi(i)$ and
$\phi(j)$. ∎
## Acknowledgements
TG was partially supported by National Science Foundation grant DMS-1839299
and National Institutes of Health grant 5R01GM126555-01. PCK and RRN were
supported by the National Institutes of Health grant 5R01GM126555-01. BC was
supported by National Science Foundation grant DMS-1839299. We acknowledge the
Indigenous nations and peoples who are the traditional owners and caretakers
of the land on which this work was undertaken at the University of Calgary and
Montana State University.
## References
* [1] W. Li et al., Molecular evolution. Sinauer Associates, 1997.
* [2] S. Ohno, Evolution by gene duplication. Springer-Verlag Berlin Heidelberg, 1970.
* [3] L. Patthy, Protein evolution. John Wiley & Sons, 2009.
* [4] G. C. Conant and A. Wagner, “Asymmetric sequence divergence of duplicate genes,” Genome research, vol. 13, no. 9, pp. 2052–2058, 2003.
* [5] N. V. Dokholyan, B. Shakhnovich, and E. I. Shakhnovich, “Expanding protein universe and its origin from the biological big bang,” Proceedings of the National Academy of Sciences, vol. 99, no. 22, pp. 14132–14136, 2002.
* [6] H. Janwa, S. Massey, J. Velev, and B. Mishra, “On the origin of biomolecular networks,” Frontiers in Genetics, vol. 10, 2019.
* [7] J. S. Taylor and J. Raes, “Duplication and divergence: the evolution of new genes and old ideas,” Annu. Rev. Genet., vol. 38, pp. 615–643, 2004.
* [8] A. Vázquez, A. Flammini, A. Maritan, and A. Vespignani, “Modeling of protein interaction networks,” Complexus, vol. 1, no. 1, pp. 38–44, 2003.
* [9] K. H. Wolfe, “Origin of the yeast whole-genome duplication,” PLOS Biology, vol. 13, pp. 1–7, 08 2015.
* [10] A. Wagner, “How the global structure of protein interaction networks evolves,” Proc. R. Soc. Lond. B, vol. 270, pp. 457–466, 2003.
* [12] S. Dorogovtsev and J. Mendes, “Evolution of networks,” Adv. Phys., vol. 51, p. 1079, 2002.
* [13] R. Solé, R. Pastor-Satorras, E. Smith, and T. Kepler, “A model of large-scale proteome evolution,” Advances in Complex Systems, vol. 5, no. 1, pp. 43–54, 2002.
* [14] A. Wagner, “The Yeast Protein Interaction Network Evolves Rapidly and Contains Few Redundant Duplicate Genes,” Molecular Biology and Evolution, vol. 18, pp. 1283–1292, 07 2001.
* [15] R. Albert and A.-L. Barabási, “Statistical mechanics of complex networks,” Reviews of Modern Physics, vol. 74, pp. 47–97, 2002.
* [16] A.-L. Barabási and R. Albert, “Emergence of scaling in random networks,” Science, vol. 286, no. 5439, pp. 509–512, 1999.
* [17] H. Jeong, S. P. Mason, A.-L. Barabási, and Z. N. Oltvai, “Lethality and centrality in protein networks,” Nature, vol. 411, pp. 41–42, May 2001.
* [18] D. J. Watts, “Networks, dynamics, and the small‐world phenomenon,” American Journal of Sociology, vol. 105, no. 2, pp. 493–527, 1999.
* [19] P. Erdős and A. Rényi, “On random graphs I.,” Publ. Math. Debrecen, vol. 6, pp. 290–297, 1959.
* [20] A. Vinayagam, J. Zirin, C. Roesel, Y. Hu, B. Yilmazel, A. Samsonova, R. A. Neumüller, S. Mohr, and N. Perrimon, “Integrating protein-protein interaction networks with phenotypes reveals signs of interactions,” Nature Methods, vol. 11, no. 1, pp. 94–9, 2014.
* [21] S. Collombet, C. V. van Oevelen, J. L. S. Ortega, W. Abou-Jaoudé, B. D. Stefano, M. Thomas-Chollier, T. Graf, and D. Thieffry, “Logical modeling of lymphoid and myeloid cell specification and transdifferentiation,” Proceedings of the National Academy of Sciences, vol. 114, pp. 5792–5799, 2017.
* [22] A. Force, M. Lynch, F. B. Pickett, A. Amores, Y. Yan, and J. Postlethwait, “Preservation of duplicate genes by complementary, degenerative mutations.,” Genetics, vol. 151 4, pp. 1531–45, 1999.
* [23] U. Bergthorsson, D. Andersson, and J. Roth, “Ohno’s dilemma: Evolution of new genes under continuous selection,” Proceedings of the National Academy of Sciences, vol. 104, pp. 17004 – 17009, 2007.
* [24] M. E. Newman, D. J. Watts, and S. H. Strogatz, “Random graph models of social networks,” Proceedings of the national academy of sciences, vol. 99, pp. 2566–2572, 2002.
* [25] Z. M. Saul and V. Filkov, “Exploring biological network structure using exponential random graph models,” Bioinformatics, vol. 23, no. 19, pp. 2604–2611, 2007.
* [26] B. K. Fosdick, D. B. Larremore, J. Nishimura, and J. Ugander, “Configuring random graph models with fixed degree sequences,” SIAM Review, vol. 60, no. 2, pp. 315–355, 2018.
* [27] S. Milgram, “The small world problem,” Psychology today, vol. 2, pp. 60–67, 1967.
* [28] D. Watts and S. Strogatz, “Collective dynamics of ‘small-world’ networks,” Nature, vol. 393, pp. 440–442, 1998.
* [29] R. Milo, S. Shen-Orr, S. Itzkovitz, N. Kashtan, D. Chklovskii, and U. Alon, “Network motifs: simple building blocks of complex networks,” Science, vol. 298, no. 5594, pp. 824–827, 2002.
* [30] U. Alon, “Network motifs: theory and experimental approaches,” Nature Reviews Genetics, vol. 8, no. 6, pp. 450–461, 2007.
* [31] S. S. Shen-Orr, R. Milo, S. Mangan, and U. Alon, “Network motifs in the transcriptional regulation network of escherichia coli,” Nature genetics, vol. 31, no. 1, pp. 64–68, 2002.
* [32] F. Graham, L. Lu, T. Dewey, and D. Galas, “Duplication models for biological networks,” Journal of Computational Biology, vol. 10, no. 5, pp. 677–687, 2003.
* [33] S. Arora and B. Barak, Computational complexity: a modern approach. Cambridge University Press, 2016.
* [34] R. Karp, “Reducibility among combinatorial problems,” in Complexity of Computer Computations, 1972.
* [35] R. R. Nerem, “Distinguishability.” github.com/Rnerem/distinguishability, 2021.
* [36] A. Hagberg, P. Swart, and D. S. Chult, “Exploring network structure, dynamics, and function using NetworkX,” tech. rep., Los Alamos National Laboratory (LANL), Los Alamos, NM (United States), 2008.
## Appendix B SI: Toward Conjecture 4.15
We now prove a restricted version of Conjecture 4.15.
The difficulty in proving Conjecture 4.15 arises because the distribution of
edge labels in a graph may significantly change after a vertex is duplicated.
To avoid this, we present a more manageable version of expected
distinguishability, where the expected value is taken over the same
probability both before and after the vertex duplication. Recall Equation (6),
which is repeated below
$P(G_{r})=\prod_{e\in E}\mathbf{p}_{G}(\ell_{r}(e)).$
Notice that $P(G_{r})$ depends implicitly on the original graph $G$, as the
probabilities $\mathbf{p}_{G}$ are determined from $\ell$ in Equation (4). To
simplify, we fix a set of probabilities $\\{\mathbf{p}(a)\\}_{a\in L}$ with
$\mathbf{p}(a)\geq 0$ for all $a\in L$, and $\sum_{a\in L}\mathbf{p}(a)=1$,
and such that there are at least two labels $a,b\in L$ with $a\neq b$ such
that $\mathbf{p}(a)>0$ and $\mathbf{p}(b)>0$. Using this set, we redefine
$P(G_{r})$, the probability of choosing a graph $G_{r}=(V,E,\ell_{r})$, as
$P(G_{r}):=\prod_{e\in E}\mathbf{p}(\ell_{r}(e))\ .$
We are now equipped to present and prove a restricted version of Conjecture
4.15. Under the redefined probabilities, Equation (9) in the following lemma
is analogous to showing all terms in the sum of Equation (8) in the manuscript
are non-negative. Furthermore, as long as the graph $G$ contains at least one
edge, at least one term is strictly greater than zero. Of course, as the size
of the graph goes to infinity, the fraction of graphs with at least one edge
approaches one.
###### Lemma B.1.
Let $G=(V,E,\ell)$. Let $d\in V$ be arbitrary. Let
$G^{\prime}=(V^{\prime},E^{\prime},\ell^{\prime})=\mathscr{D}_{d}(G)$. Fix
$\\{\mathbf{p}(a)\\}_{a\in L}$ as above. Let $\\{\ell_{r}\\}_{r\in R}$ and
$\\{\ell^{\prime}_{s}\\}_{s\in S}$ be the set of all possible edge label maps
of $G$ and $G^{\prime}$ respectively, for some $R$ and $S$ index sets. Denote
$G_{r}:=(V,E,\ell_{r})$ and
$G^{\prime}_{s}:=(V^{\prime},E^{\prime},\ell_{s}^{\prime})$. Then
$\sum_{r\in R}P(G_{r})\mathtt{D}(G_{r})\leq\sum_{s\in
S}P(G^{\prime}_{s})\mathtt{D}(G^{\prime}_{s}).$ (9)
Furthermore, the inequality is strict if $d$ has at least one neighbor.
###### Proof.
We expand the right hand side of (9) as
$\displaystyle\sum_{s\in S}P(G^{\prime}_{s})\mathtt{D}(G^{\prime}_{s})$
$\displaystyle=\sum_{s\in S}\left(\prod_{e\in
E^{\prime}}\mathbf{p}(\ell^{\prime}_{s}(e))\right)\mathtt{D}(G^{\prime}_{s})$
$\displaystyle=\sum_{s\in S}\left(\prod_{e\in
E}\mathbf{p}(\ell^{\prime}_{s}(e))\right)\left(\prod_{e\in E^{\prime}\setminus
E}\mathbf{p}(\ell^{\prime}_{s}(e))\right)\mathtt{D}(G^{\prime}_{s})$
We now make a key observation: for each $s\in S$, there exists a unique $r\in
R$ such that $\ell_{r}=\ell^{\prime}_{s}|_{E}$, so define a map $\xi:S\to R$
via $\xi(s)=r$ if and only if $\ell_{r}=\ell^{\prime}_{s}|_{E}$. In what
follows we use the more compact notation $\xi s=r$. With this insight, note
that
$\prod_{e\in E}\mathbf{p}(\ell^{\prime}_{s}(e))=\prod_{e\in
E}\mathbf{p}(\ell_{\xi s}(e))=P(G_{\xi s})$
We continue to rewrite the right hand side as
$\displaystyle\sum_{s\in S}\left(\prod_{e\in
E}\mathbf{p}(\ell^{\prime}_{s}(e))\right)\left(\prod_{e\in E^{\prime}\setminus
E}\mathbf{p}(\ell^{\prime}_{s}(e))\right)\mathtt{D}(G^{\prime}_{s})$
$\displaystyle=\sum_{s\in S}P(G_{\xi s})\left(\prod_{e\in E^{\prime}\setminus
E}\mathbf{p}(\ell^{\prime}_{s}(e))\right)\mathtt{D}(G^{\prime}_{s})$
$\displaystyle=\sum_{r\in R}\sum_{\begin{subarray}{c}s\in S\\\ \xi
s=r\end{subarray}}P(G_{\xi s})\left(\prod_{e\in E^{\prime}\setminus
E}\mathbf{p}(\ell^{\prime}_{s}(e))\right)\mathtt{D}(G^{\prime}_{s})$
$\displaystyle=\sum_{r\in R}P(G_{r})\sum_{\begin{subarray}{c}s\in S\\\ \xi
s=r\end{subarray}}\left(\prod_{e\in E^{\prime}\setminus
E}\mathbf{p}(\ell^{\prime}_{s}(e))\right)\mathtt{D}(G^{\prime}_{s})$
Note that $G_{\xi s}$ is a subgraph of $G^{\prime}_{s}$. Then applying
Corollary 4.12 we have $\mathtt{D}(G^{\prime}_{s})\geq\mathtt{D}(G_{\xi s})$.
Therefore
$\displaystyle\sum_{r\in R}P(G_{r})\sum_{\begin{subarray}{c}s\in S\\\ \xi
s=r\end{subarray}}\left(\prod_{e\in E^{\prime}\setminus
E}\mathbf{p}(\ell^{\prime}_{s}(e))\right)\mathtt{D}(G^{\prime}_{s})$
$\displaystyle\geq\sum_{r\in R}P(G_{r})\sum_{\begin{subarray}{c}s\in S\\\ \xi
s=r\end{subarray}}\left(\prod_{e\in E^{\prime}\setminus
E}\mathbf{p}(\ell^{\prime}_{s}(e))\right)\mathtt{D}(G_{\xi s})$
$\displaystyle=\sum_{r\in R}P(G_{r})\sum_{\begin{subarray}{c}s\in S\\\ \xi
s=r\end{subarray}}\left(\prod_{e\in E^{\prime}\setminus
E}\mathbf{p}(\ell^{\prime}_{s}(e))\right)\mathtt{D}(G_{r})$
$\displaystyle=\sum_{r\in
R}P(G_{r})\mathtt{D}(G_{r})\sum_{\begin{subarray}{c}s\in S\\\ \xi
s=r\end{subarray}}\left(\prod_{e\in E^{\prime}\setminus
E}\mathbf{p}(\ell^{\prime}_{s}(e))\right)$ $\displaystyle=\sum_{r\in
R}P(G_{r})\mathtt{D}(G_{r})$
which is the desired result. The last equality holds because for any fixed
$r$
$\displaystyle\sum_{\begin{subarray}{c}s\in S\\\ \xi
s=r\end{subarray}}\left(\prod_{e\in E^{\prime}\setminus
E}\mathbf{p}(\ell^{\prime}_{s}(e))\right)=1$
since the sum can be thought of as over all possible relabelings of
$E^{\prime}\setminus E$, of which the total probability is $1$.
We now show that the inequality is strict if $d$ has at least one neighbor. To
do so we construct a specific pair $\ell^{\prime}_{q}$ and $\ell_{\xi q}$ so
that $\mathtt{D}(G^{\prime}_{q})=\mathtt{D}(G_{\xi q})+1$. Let $a,b\in L$ be
two labels with $\mathbf{p}(a)$ and $\mathbf{p}(b)$ non-zero. Let $k\in V$ be
a neighbor of $d$. Let $\ell^{\prime}_{q}$ be the map that is constant with
value $a\in L$, except on the edge between $d^{\prime}$ and $k$, where it
takes the value $b\in L$. Then
$\mathtt{D}(G^{\prime}_{q})=2$, as $d$ and $d^{\prime}$ are distinguishable
and no other vertex can be distinguishable from both $d$ and $d^{\prime}$.
Since $\ell_{\xi q}$ is the constant map on $a$, $\mathtt{D}(G_{\xi q})=1$,
completing the proof. ∎
## Appendix C SI: Computational Complexity
For notational convenience we return to Definition 4.2 for our definition of
undirected graph.
###### Definition C.1.
Let $G=(V,E,\ell)$ be an undirected graph. A subset $U\subseteq V$ is a
_clique_ if for all $i,j\in U$ the vertices $i$ and $j$ are neighbors. We
refer to a clique $U$ of size $|U|=m$ as an $m$-clique.
###### Problem C.2 (Distinguishability).
Given a graph $G=(V,E,\ell)$, a number $m\in{\mathbb{N}}$, and a label set $L$
with $|L|\leq|E|$, decide if $G$ contains a distinguishable set of size $m$.
Let Distinguishability be the associated decision problem represented as a
language.
Although we do not specify a particular way of encoding graphs as strings in
the language Distinguishability, any encoding that is polynomial in $|V|$ is
sufficient. For discussion on decision problems as languages and graph
encodings see [Barak2016].
###### Theorem C.3.
A graph $G$ has a distinguishable set of size $m$ if and only if $D(G)$ has an
$m$-clique.
###### Proof.
$(\Rightarrow)$ Let $U\subset V$ be a distinguishable set in $G$ of size $m$.
Then all pairs $i,j\in U$ are distinguishable, which implies $(i,j)\in E^{*}$
for all $i,j\in U$. Thus $U$ is an $m$-clique of $D(G)$.
$(\Leftarrow)$ Let $U\subset V$ be an $m$-clique in $D(G)$. Then all pairs
$i,j\in U$ are distinguishable in $G$. Thus $U$ is a distinguishable set in
$G$ of size $m$. ∎
The following Corollary is immediate.
###### Corollary C.4.
A graph $G$ has distinguishability $m$ if and only if the size of a maximum
clique in $D(G)$ is $m$.
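Corollary C.4 reduces computing distinguishability to a clique-number computation on $D(G)$. The Python sketch below makes this concrete for small graphs; note that the pairwise test (a mutual neighbor reached through differently labeled edges) is our reading of the paper's earlier definitions, which lie outside this appendix, and the `labels` encoding is purely illustrative.

```python
from itertools import combinations

def dist_graph_edges(vertices, labels):
    """Edge set of D(G): u and v are distinguishable iff some mutual
    neighbor w sees them through differently labeled edges.
    `labels` maps frozenset({u, w}) -> label, one entry per edge of G."""
    adj = {v: set() for v in vertices}
    for e in labels:
        u, w = tuple(e)
        adj[u].add(w)
        adj[w].add(u)
    return {frozenset({u, v}) for u, v in combinations(vertices, 2)
            if any(labels[frozenset({u, w})] != labels[frozenset({v, w})]
                   for w in adj[u] & adj[v])}

def distinguishability(vertices, labels):
    """D(G) as the clique number of D(G) (Corollary C.4), by brute
    force -- the problem is NP-complete, so this suits only small graphs."""
    dedges = dist_graph_edges(vertices, labels)
    best = 1
    for m in range(2, len(vertices) + 1):
        for U in combinations(vertices, m):
            if all(frozenset(p) in dedges for p in combinations(U, 2)):
                best = m
    return best

# A star with center 0, leaves 1..3, and pairwise distinct edge labels:
# the center distinguishes every pair of leaves, so D(G) = 3.
star = {frozenset({0, i}): i for i in (1, 2, 3)}
print(distinguishability([0, 1, 2, 3], star))  # -> 3
```

With a constant label map no pair is distinguishable and the routine returns 1, matching the baseline used in the proof of Lemma B.1.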
We now show that finding the distinguishability is $\mathcal{NP}$-complete.
###### Lemma C.5.
$\textsc{Distinguishability }\in\mathcal{NP}$
###### Proof.
Let $S\in\textsc{Distinguishability}$ be an instance of distinguishability
with graph $G=(V,E,\ell)$, label set $L$, and number $m$. Let the certificate
for $S$ be a list of $m$ vertices that constitute a distinguishable set $U$.
Clearly this certificate has length polynomial in $|V|$. A deterministic
algorithm to verify this certificate is to check if $i$ and $j$ are
distinguishable for every $i,j\in U$ by iterating over all mutual neighbors of
$i$ and $j$. This algorithm has running time polynomial in $|V|$. Therefore,
$\text{{Distinguishability}}\in\mathcal{NP}$. ∎
###### Problem C.6 (Clique Number).
Given a simple undirected graph $G$ and a number $m\in{\mathbb{N}}$, decide if
$G$ has a clique of size $m$. Let Clique be the associated decision problem
represented as a language.
###### Lemma C.7.
Distinguishability is $\mathcal{NP}$-hard
###### Proof.
We proceed by showing a many-to-one deterministic polynomial-time reduction of
Clique, which is $\mathcal{NP}$-complete [Karp1972], to Distinguishability.
That is, we show a map from instances of Clique to instances of
Distinguishability that is efficiently computable, and that maps ‘yes’
instances of Clique to ‘yes’ instances of Distinguishability and maps ‘no’
instances of Clique to ‘no’ instances of Distinguishability.
Consider an arbitrary instance of Clique with a simple undirected input graph
$G=(V,E,\emptyset)$ and input number $m$. We aim to construct a new graph
which has distinguishability equal to the size of the largest clique in $G$.
Consider the undirected graph $H=(E\cup V,E_{H},\ell)$ with label set $L=V$
where
$(i,(i,j)),(j,(i,j))\in E_{H}\text{ iff }(i,j)\in E$ (10)
and
$\ell((i,j),i)=\ell(i,(i,j))=i.$ (11)
In other words, for each (undirected) edge $(i,j)\in E$ of the graph $G$, we
add to $H$ an edge between $i\in V$ and $(i,j)\in E$ and an edge between
$j\in V$ and $(i,j)\in E$. There are no edges in $H$ between $i\in V$ and
$j\in V$ and no edges between $(i,j)\in E$ and $(k,s)\in E$, and therefore
$H$ is bipartite with vertex partition $(V,E)$.
We now show that $D(H)$ has an $m$-clique if and only if $G$ does. Consider
the distinguishability graph $D(H)=(E\cup V,E_{D(H)},\emptyset)$. First notice
that, because $H$ is bipartite, there is no edge in $D(H)$ between a vertex
$j\in V$ and a vertex $(i,k)\in E$ as $(i,k)$ and $j$ cannot have a mutual
neighbor in $H$. Furthermore, there is no edge in $D(H)$ between two vertices
$(i,k),(r,s)\in E$ because if $(i,k)$ and $(r,s)$ have a mutual neighbor $j$
then $j$ is not a distinguisher of $(i,k)$ and $(r,s)$ due to all edge labels
being identical, i.e.
$\displaystyle j$ $\displaystyle=\ell(j,(r,s))$ $\displaystyle=\ell((i,k),j)$
$\displaystyle=\ell((r,s),j)$ $\displaystyle=\ell(j,(i,k)).$
Now we show for any $i,j\in V$ there is an edge $(i,j)\in E_{D(H)}$ if and
only if $(i,j)\in E$. If $(i,j)\in E$ then $(i,(i,j)),(j,(i,j))\in E_{H}$.
Also $\ell(i,(i,j))=i$ and $\ell(j,(i,j))=j$ so
$\ell(i,(i,j))\neq\ell(j,(i,j))$ which means $i$ and $j$ are distinguishable
in $H$. Therefore, $(i,j)\in E_{D(H)}$.
Now suppose $(i,j)\notin E$. Then $i$ and $j$ have no mutual neighbors in $H$
and so they are not distinguishable in $H$. This implies $(i,j)\notin
E_{D(H)}$. We have shown that the only edges in $D(H)$ connect $i,j\in V$ such
that $(i,j)\in E$. Furthermore, since $(i,j)\in E_{D(H)}$ if and only if
$(i,j)\in E$, the subgraph of $D(H)$ induced by $V$ is isomorphic to $G$.
Therefore, there is an $m$-clique in $D(H)$ if and only if there is an
$m$-clique in $G$.
The many-to-one reduction is given by the function
$\phi:(G,m)\mapsto(H,m).$ (12)
This function, which amounts to constructing the graph $H$, can be computed by
an algorithm which iterates over the set $(V\cup E)\times(V\cup E)$ of
possible edges in $H$. This algorithm takes time polynomial in $|V|+|E|$, and
so $\phi$ is efficiently computable. ∎
Note that the graph $H$ is undirected, so $\mathcal{NP}$-hardness holds even
if the problem Distinguishability is restricted to undirected graphs.
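The reduction $\phi$ can be sketched and checked on a triangle. As before, the pairwise distinguishability test and the graph encoding are illustrative assumptions, not the authors' code:

```python
from itertools import combinations

def clique_to_dist(n, edges):
    """The reduction phi: from a Clique instance G = ([n], edges) build
    the labeled graph H with one extra vertex per edge of G; the edge of
    H joining endpoint i to edge-vertex (i, j) carries label i."""
    labels = {}
    for i, j in edges:
        ev = ("e", i, j)  # the vertex of H standing for edge (i, j)
        labels[frozenset({i, ev})] = i
        labels[frozenset({j, ev})] = j
    verts = list(range(n)) + [("e", i, j) for i, j in edges]
    return verts, labels

def dist_edges(verts, labels):
    """Edges of the distinguishability graph D(H): pairs with a mutual
    neighbor reached through differently labeled edges (our reading of
    the paper's definition)."""
    adj = {v: set() for v in verts}
    for e in labels:
        u, w = tuple(e)
        adj[u].add(w)
        adj[w].add(u)
    return {frozenset({u, v}) for u, v in combinations(verts, 2)
            if any(labels[frozenset({u, w})] != labels[frozenset({v, w})]
                   for w in adj[u] & adj[v])}

# On a triangle, the only edges of D(H) connect original vertices i, j
# with (i, j) an edge of G -- exactly as the proof claims.
edges = [(0, 1), (1, 2), (0, 2)]
verts, labels = clique_to_dist(3, edges)
print(dist_edges(verts, labels) == {frozenset(e) for e in edges})  # -> True
```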
###### Corollary C.8.
Distinguishability is ${\mathcal{NP}\textit{-complete}}$.
## Appendix D SI: Numerical Simulations
(a) 500 graphs, each with 250 vertices, generated by taking the evolved
graphs $G_{i}$ (as in Figure 2) and generating for each a new graph $J_{i}$
that has the same signed degree distribution as $G_{i}$ but is otherwise
randomized. Colors and axes are the same as in Figure 2.
(b) Point-wise difference between the distinguishability change in Figure 2
and that in Figure 3(a). This difference shows the change in
distinguishability of the evolved graphs in Figure 2 that cannot be
attributed to single-vertex characteristics.
Figure 3: Distinguishability deviation of directed graphs.
In this section we give a complete description of our numerical simulations
and give evidence that negative distinguishability deviation cannot be
explained solely through a graph’s signed degree distribution or by small
world properties.
We first describe the procedure for generating the evolved graphs and their
corresponding ER-graphs. The distinguishability deviation of these graphs are
shown in Figure 2. Our implementation of this procedure, which can be found at
[gitcode], employs the NetworkX Python package [networkx]. The following
description uses the convention $[n]:=\\{1,2,\dots,n\\}$.
1. 1.
Randomly generate 500 ER-graphs, each with 25 vertices, where the fraction of
positive edges is chosen uniformly at random from $(0.25,0.75)$ and the edge
density $2|E|/\left(|V|(|V|-1)\right)$ is chosen uniformly at random from
$[1/(2\cdot 25),\,2/25]$ (rounding up to the nearest whole edge).
2. 2.
Perform duplication on a randomly chosen vertex 225 times for each graph,
generating 500 graphs, each with 250 vertices.
3. 3.
Divide this set of 500 graphs into 5 sets of 100. Randomly remove edges from
graphs in the first set until a final edge count of 250 is reached for each
graph. Repeat for the last four sets using final edge counts of 500, 750,
1000, and 1250 respectively. We refer to the set of graphs
$\\{G_{i}\\}_{i\in[500]}$ as the evolved graphs.
4. 4.
For each evolved graph $G_{i}$, calculate its distinguishability
$\mathtt{D}(G_{i})$.
5. 5.
For each evolved graph $G_{i}$ randomly generate 10 new graphs $G_{i,j}$ with
probability $P_{i}(G_{i,j})$ (the probability distribution in Definition 4.14)
that have the same adjacencies but with a random edge labeling. Estimate their
expected distinguishability by
$\langle\mathtt{D}(G_{i})\rangle\approx\langle\mathtt{D}(G_{i})\rangle_{\approx}:=\frac{1}{10}\sum_{j\in[10]}\mathtt{D}(G_{i,j}).$
(13)
6. 6.
Calculate the approximate distinguishability deviation
$\mathtt{D}(G_{i})-\langle\mathtt{D}(G_{i})\rangle_{\approx}$ of the evolved
graphs.
7. 7.
For each evolved graph $G_{i}$, randomly generate an ER-graph $H_{i}$ with the
same number of vertices, edges, and positive and negative labels as $G_{i}$,
and calculate its distinguishability $\mathtt{D}(H_{i})$.
8. 8.
For each graph $H_{i}$ compute $\langle\mathtt{D}(H_{i})\rangle_{\approx}$ as in Step 5.
9. 9.
Calculate the distinguishability deviation
$\mathtt{D}(H_{i})-\langle\mathtt{D}(H_{i})\rangle_{\approx}$ of the ER-graphs
and compute the standard deviation.
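Steps 4–6 can be sketched in plain Python as follows; the brute-force distinguishability routine stands in for the NetworkX implementation at [gitcode], and the pairwise test (a mutual neighbor with differing edge labels) is our reading of the paper's definitions:

```python
import random
from itertools import combinations

def distinguishability(vertices, labels):
    """Brute-force D(G); `labels` maps frozenset({u, w}) -> label."""
    adj = {v: set() for v in vertices}
    for e in labels:
        u, w = tuple(e)
        adj[u].add(w)
        adj[w].add(u)
    def disting(u, v):
        return any(labels[frozenset({u, w})] != labels[frozenset({v, w})]
                   for w in adj[u] & adj[v])
    best = 1
    for m in range(2, len(vertices) + 1):
        for U in combinations(vertices, m):
            if all(disting(u, v) for u, v in combinations(U, 2)):
                best = m
    return best

def deviation(vertices, labels, pool=(1, -1), trials=10, rng=None):
    """Approximate distinguishability deviation D(G) - <D(G)> of Eq. (13):
    keep the adjacencies, redraw the edge labels uniformly from `pool`."""
    rng = rng or random.Random(0)
    edges = list(labels)
    avg = sum(distinguishability(vertices,
                                 {e: rng.choice(pool) for e in edges})
              for _ in range(trials)) / trials
    return distinguishability(vertices, labels) - avg
```

For a signed star on four vertices whose center sees one leaf through a negative edge and the others through positive edges, `distinguishability` returns 2, and the deviation estimate lies between 0 and 1 for any relabeling sample.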
In Figures 2 and 4(a), point color indicates the final number of edges after
edge deletion for the evolved graphs $\\{G_{i}\\}$. Grey points represent the
ER-graphs $\\{H_{i}\\}$. The vertical axis denotes distinguishability deviation.
The horizontal axis gives the fraction of edges which are removed in the
deletion process. Note that the above procedure applies to both directed and
undirected graphs, the data for which are given in Figures 2 and 4(a)
respectively.
It is natural to ask if these distinguishability deviations can be explained
entirely through a graph’s signed degree distribution. The idea is that
distinguishers must have two edges of differing sign which, in the directed
case, are either both incoming or both outgoing. As a result, we expect graphs
for which most vertices have all edges of the same sign to have low
distinguishability. When the edge labels of these networks are randomized, the
homogeneity of signed degree can be removed, potentially creating higher
distinguishability.
To address this question we compute, for each evolved graph, a random network
which has the same signed degree as the original network. To generate these
networks we use the following algorithm in which we randomly connect edge
stubs of matching sign. Starting from a list of edge stubs for each vertex and
a graph with no edges, we randomly choose two edge stubs, remove them from
their respective lists, and add an edge between their respective vertices.
Note that when we randomly choose edge stubs we do not allow choice of edge
stubs where the introduced edge would create a multigraph. In the undirected
case this means we do not pick edge stubs between already connected vertices.
If the only edge stubs that remain are on two vertices $v_{1}$ and $u_{1}$
such that adding an edge between these vertices would create a multigraph, a
random rewiring is performed. That is, a randomly chosen previously added edge
$(v_{2},u_{2})$ is removed from the graph and the edges $(v_{1},u_{2})$ and
$(v_{2},u_{1})$ are added. If for all such edges $(v_{2},u_{2})$ this rewiring
would create a multigraph, then the random graph generation is restarted. This
process is used for both directed and undirected graphs with the difference
being that for directed graphs in-edge stubs are matched with out-edge stubs.
Python code for generating these graphs is provided at [gitcode].
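A minimal sketch of this stub-matching procedure follows. It is simplified relative to the description above: on any conflict (self-loop or repeated edge) it restarts the whole matching instead of performing the single-edge rewiring step, so it is illustrative rather than a faithful reimplementation of the code at [gitcode]:

```python
import random

def stub_match(signed_edges, rng=None, max_restarts=10000):
    """Signed-degree-preserving randomization of an undirected signed
    graph by stub matching. Edges are (u, v, sign) triples with
    sign in {+1, -1}; restarts outright on any conflict."""
    rng = rng or random.Random()
    for _ in range(max_restarts):
        new_edges, ok = set(), True
        for sign in (1, -1):
            # two stubs per incident edge of this sign, randomly paired
            stubs = [x for u, v, s in signed_edges if s == sign
                     for x in (u, v)]
            rng.shuffle(stubs)
            for u, v in zip(stubs[::2], stubs[1::2]):
                key = frozenset({u, v})
                if u == v or (key, 1) in new_edges or (key, -1) in new_edges:
                    ok = False  # self-loop or multi-edge: restart
                    break
                new_edges.add((key, sign))
            if not ok:
                break
        if ok:
            return [(*sorted(key), s) for key, s in new_edges]
    raise RuntimeError("no conflict-free matching found")
```

By construction every vertex keeps its signed degree, which is the invariant the comparison in Figure 3 relies on.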
For directed graphs, we plot distinguishability deviation of the graphs
generated by this procedure in Figure 3(a). Note that each point in this
figure is in one-to-one correspondence with the colored points of Figure 2.
From this data we see that the signed degree preserved randomizations exhibit
significant negative distinguishability deviation. However, this negative
distinguishability deviation is less strong than the distinguishability
deviation observed for the evolved graphs. Investigating further, Figure 3(b)
shows the point-by-point difference between distinguishability deviations of
the evolved graphs and their randomized signed degree-preserving counterparts.
From this figure it is apparent that, almost always, the randomized versions
of the evolved graphs exhibit lower distinguishability deviation. These
results are replicated in Figures 4(b) and 4(c) for undirected graphs. We
conclude that the large distinguishability deviation observed in the evolved
graphs cannot be explained solely through the signed degree distribution.
(a) Same as Figure 2 but with undirected graphs
(b) Same as Figure 3(a) but with undirected graphs
(c) Point-wise difference between distinguishabilities in Figure 4(a) and in
Figure 4(b)
Figure 4: Distinguishability deviation of undirected graphs.
We also computed the distinguishability deviation in the experimentally
derived networks of [vin14] and [Collombet2017] along with their signed degree
preserved randomizations, as shown in Table 1. These results agree with
simulations since both networks exhibited stronger negative distinguishability
deviation than their signed degree-preserving randomizations. We conclude
that the signed degree distribution of a network cannot entirely predict its
distinguishability deviation.
This table also includes the distinguishability deviation of both directed and
undirected Erdős–Rényi graphs (ER) and Watts–Strogatz graphs [Watts1998] with
characteristics similar to the published biological networks. For the ER-
graphs, number of vertices and number of positive and negative edges are the
same as the corresponding biological networks. For the Watts-Strogatz graphs,
we picked the number of vertices and mean degree $k$ to be the same as the
biological networks. We used a rewiring probability of $\beta=.1$ to target
the small-world regime of low path length and high clustering observed in
[Watts1998]. In the Watts–Strogatz model we randomly assigned edge signs with
the probabilities with which they occur in the original network. For both models,
direction is randomly assigned to edges when generating directed graphs.
Since the generation of these random graph models assigns edge labels
randomly, we expect near zero average distinguishability deviation. However,
we are interested in the standard deviation of the distinguishability
deviation since this describes the likelihood to produce outliers with large
negative distinguishability deviation in these random models. The observed
small standard deviation suggests that these models are unlikely to produce
graphs with distinguishability deviation near that observed in the biological
networks. The distinguishability deviation of graphs generated by the Watts-
Strogatz model is nearly the same as the ER-graphs, suggesting that small
world properties have little to no effect on distinguishability deviation.
|  | D. Melanogaster |  | Blood Cell |  |
|---|---|---|---|---|
|  | Dist. | Dist. deviation | Dist. | Dist. deviation |
| Original graphs | 7 | $-24.2\pm 0.7$ | 4 | $-1.6\pm 0.6$ |
| Preserved signed degree | $4.94\pm 0.4$ | $-1.0\pm 0.6$ | $4.9\pm 0.6$ | $-0.3\pm 0.8$ |
| ER-graph | $3.0\pm 0.2$ | $0.0\pm 0.2$ | $3.4\pm 0.5$ | $-0.1\pm 0.7$ |
| Watts–Strogatz | $5.0\pm 0.1$ | $0.0\pm 0.2$ | $3.0\pm 0.4$ | $0.0\pm 0.6$ |
Table 1: Distinguishability and distinguishability deviation for two
experimentally derived networks and random graph models. For the random graph
models, values are averages over 100 random graphs. Note that the blood cell
network contained a single multi-edge, which was ignored in the calculation
of these values. Graphs in the D. Melanogaster columns are undirected; graphs
in the Blood Cell columns are directed.
---
arxiv-papers | created 2021-07-26T17:46:37 | added 2024-09-04
License: Creative Commons Attribution 4.0 (https://creativecommons.org/licenses/by/4.0/)
Authors: Peter Crawford-Kahrl, Robert R. Nerem, Bree Cummins, and Tomas Gedeon
Submitter: Robert Nerem
URL: https://arxiv.org/abs/2107.12352
---

arXiv:2107.12353
# Vincular pattern avoidance on cyclic permutations
Rupert Li Massachusetts Institute of Technology, 77 Massachusetts Avenue,
Cambridge, MA 02139, USA [email protected]
###### Abstract.
Pattern avoidance for permutations has been extensively studied, and has been
generalized to vincular patterns, where certain elements can be required to be
adjacent. In addition, cyclic permutations, i.e., permutations written in a
circle rather than a line, have been frequently studied, including in the
context of pattern avoidance. We investigate vincular pattern avoidance on
cyclic permutations. In particular, we enumerate many avoidance classes of
sets of vincular patterns of length 3, including a complete enumeration for
all single patterns of length 3. Further, several of the avoidance classes
corresponding to a single vincular pattern of length 4 are enumerated by the
Catalan numbers. We then study more generally whether sets of vincular
patterns of an arbitrary length $k$ can be avoided for arbitrarily long cyclic
permutations, in particular investigating the boundary cases of minimal
unavoidable sets and maximal avoidable sets.
###### Key words and phrases:
pattern avoidance, cyclic permutations, vincular patterns
###### 2020 Mathematics Subject Classification:
05A05
## 1\. Introduction
Pattern containment and avoidance for permutations is a well-established
branch of enumerative combinatorics; see Kitaev [16] for a further
introduction. The study of pattern avoidance was generalized to vincular
patterns in 2000 by Babson and Steingrímsson [2], where vincular patterns can
additionally require some elements to be adjacent when considering whether a
permutation contains the pattern; see Steingrímsson [27] for a survey of the
study of vincular patterns, which he refers to as generalized patterns.
A frequently studied variant of permutations is cyclic permutations, where the
permutation is written in a circle rather than a line. Cyclic permutations are
frequently encountered outside of the context of pattern avoidance; for a
recent example, Kim and Williams [15] used cyclic permutations to study the
inhomogeneous totally asymmetric simple exclusion process on a ring. In 2002,
Callan [7] initiated the study of pattern avoidance in cyclic permutations,
enumerating the avoidance classes for all patterns of length 4. In 2021,
Domagalski, Liang, Minnich, Sagan, Schmidt, and Sietsema [10] extended
Callan’s work, enumerating the avoidance classes for all sets of patterns of
length 4, where a permutation avoids a set of patterns if it avoids each
pattern in the set. Menon and Singh [21] extend these results to patterns of
higher length, principally investigating pairs of patterns, one of length 4
and the other of length $k$.
Domagalski et al. additionally initiated the study of vincular pattern
avoidance in cyclic permutations, showing that the Catalan numbers appear as
the sizes of a particular avoidance class of cyclic permutations.
In this paper, we provide a more thorough, foundational investigation of
vincular pattern avoidance on cyclic permutations. In Section 3, we enumerate
the avoidance classes of all vincular cyclic patterns of length 3, the first
nontrivial length to enumerate. We extend this analysis in Section 4, where we
enumerate the avoidance classes of all sets of at least three vincular cyclic
patterns of length 3, as well as some of the doubleton sets of patterns. In
particular, in Section 4.1 we find one of the doubleton sets is equinumerous
to the set of up-down permutations, and in Section 4.2 we find the unique
nonzero Wilf class of tripleton sets of patterns of length 3 has enumeration
equivalent to finding the cardinality of the set of total extensions of a
certain partial cyclic order, a circular analog of a poset, for which a
recurrence is known. In Section 5, we enumerate six of the eight trivial Wilf
equivalence classes of vincular cyclic patterns of length 4 with a single
vinculum, demonstrating that there are five Wilf equivalence classes;
combining this with the result by Domagalski, Liang, Minnich, Sagan, Schmidt,
and Sietsema [10], this leaves only one of the eight trivial Wilf equivalence
classes unresolved. Notably, we show that the cyclic permutations of a given
length avoiding a member of either of two further trivial Wilf classes are
enumerated by the Catalan numbers.
In Sections 6 and 7, we investigate vincular pattern avoidance on cyclic
permutations for patterns of general lengths. In particular, we consider
whether a given set of totally vincular patterns, i.e., patterns where the
entire subsequence must be consecutive, is unavoidable, meaning no
sufficiently long cyclic permutation can avoid this set. In Section 6, we
consider the boundary cases of this property, namely the minimal unavoidable
sets, which are sets of totally vincular patterns that are unavoidable but for
which any proper subset is avoidable. We demonstrate that one of the most
natural families of unavoidable sets, namely the sets of patterns with a 1 at
position $i$ for a fixed $i$, is minimal. And in Section 7, we consider the
dual question, maximal avoidable sets, which are sets of totally vincular
patterns that are avoidable but adding any additional pattern makes the set
unavoidable. We determine the maximum cardinality of any avoidable set.
Finally, we conclude in Section 8 with some remarks on areas for further
research.
## 2\. Preliminaries
Let $a,b\in\mathbb{Z}$. When we write the interval $[a,b]$, we will only be
referring to the integers in that interval. As is standard, we will use $[a]$
to denote the set $\\{1,2,\dots,a\\}$.
Let $S_{n}$ denote the set of permutations on $[n]$ for a positive integer
$n$. Any permutation $\pi\in S_{n}$ is said to have _length_ $n$, denoted by
$|\pi|=n$. A permutation $\pi$ will often be written in the one-line notation
$\pi=\pi_{1}\cdots\pi_{n}$, where commas may be optionally inserted such as
$\pi=\pi_{1},\dots,\pi_{n}$ for sake of readability.
In particular, two of the simplest permutations of length $n$ are the
increasing and decreasing permutations, which will appear throughout the
paper, denoted
$\iota_{n}=12\cdots n$
and
$\delta_{n}=n\cdots 21,$
respectively.
Two sequences of distinct integers $\pi=\pi_{1}\cdots\pi_{k}$ and
$\sigma=\sigma_{1}\cdots\sigma_{k}$ of the same length are _order isomorphic_
, denoted $\pi\cong\sigma$, if $\pi_{i}<\pi_{j}$ if and only if
$\sigma_{i}<\sigma_{j}$ for all $1\leq i,j\leq k$. The _reduction_ of a sequence
of distinct integers $\pi=\pi_{1}\cdots\pi_{n}$ is the unique permutation
$\sigma\in S_{n}$ such that $\pi\cong\sigma$; in other words, the values of
the elements of $\pi$ are mapped to $[n]$ while preserving their relative
order.
We now define pattern avoidance on permutations. If $\sigma\in S_{n}$ and
$\pi\in S_{k}$ for $k\leq n$, then $\sigma$ _contains_ $\pi$ as a pattern if
there is a subsequence $\sigma^{\prime}$ of $\sigma$ with
$|\sigma^{\prime}|=k$ such that $\sigma^{\prime}\cong\pi$. If no such
subsequence exists, then $\sigma$ is said to _avoid_ $\pi$. The _avoidance
class_ of $\pi$ is
$\operatorname{Av}_{n}(\pi)=\\{\sigma\in S_{n}\mid\sigma\text{ avoids
}\pi\\}.$
We extend this definition to avoidance classes of sets of permutations $\Pi$
by defining
$\operatorname{Av}_{n}(\Pi)=\bigcap_{\pi\in\Pi}\operatorname{Av}_{n}(\pi).$
The _reverse_ of a permutation $\pi=\pi_{1}\cdots\pi_{n}$ is
$\pi^{r}=\pi_{n}\cdots\pi_{1}$. We define the _plot_ of a permutation $\pi$ to
be the sequence of points $(i,\pi_{i})$ in the Cartesian plane. Reversal then
corresponds to reflecting the plot of a permutation across a vertical axis.
Similarly, reflection across a horizontal axis corresponds to the _complement_
of $\pi$, given by
$\pi^{c}=n+1-\pi_{1},n+1-\pi_{2},\dots,n+1-\pi_{n}.$
Combining these two operations gives the _reverse complement_
$\pi^{rc}=n+1-\pi_{n},\dots,n+1-\pi_{1},$
which corresponds to rotation by 180 degrees. We can apply any of these three
operations to sets of permutations $\Pi$ by applying them to each element of
$\Pi$. In other words, we have
$\displaystyle\Pi^{r}$ $\displaystyle=\\{\pi^{r}\mid\pi\in\Pi\\}$
$\displaystyle\Pi^{c}$ $\displaystyle=\\{\pi^{c}\mid\pi\in\Pi\\}$
$\displaystyle\Pi^{rc}$ $\displaystyle=\\{\pi^{rc}\mid\pi\in\Pi\\}.$
We say that two patterns $\pi$ and $\pi^{\prime}$ are _Wilf equivalent_ ,
written $\pi\equiv\pi^{\prime}$, if for all $n\geq 1$, we have
$\left|\operatorname{Av}_{n}(\pi)\right|=\left|\operatorname{Av}_{n}(\pi^{\prime})\right|$.
We extend this definition naturally to sets of patterns, denoted
$\Pi\equiv\Pi^{\prime}$. It is easy to see that
$\pi\equiv\pi^{r}\equiv\pi^{c}\equiv\pi^{rc}$ for any pattern $\pi$, so these
are called _trivial Wilf equivalences_. This naturally generalizes to trivial
Wilf equivalences for sets of patterns:
$\Pi\equiv\Pi^{r}\equiv\Pi^{c}\equiv\Pi^{rc}$. These relations form _trivial
Wilf equivalence classes_.
For $\sigma=\sigma_{1}\cdots\sigma_{n}\in S_{n}$, let a _rotation_ of $\sigma$
be any permutation $\tau\in S_{n}$ of the form
$\tau=\sigma_{k}\sigma_{k+1}\cdots\sigma_{n}\sigma_{1}\cdots\sigma_{k-1}$
for some $k\in[n]$. We define the _cyclic permutation_ corresponding to
$\sigma\in S_{n}$ to be the set of all rotations of $\sigma$, denoted
$[\sigma]=\\{\sigma_{1}\cdots\sigma_{n},\sigma_{2}\cdots\sigma_{n}\sigma_{1},\dots,\sigma_{n}\sigma_{1}\cdots\sigma_{n-1}\\}.$
For example,
$[123]=\\{123,231,312\\}=[231]=[312].$
We use square brackets to denote the cyclic analog of objects defined in the
linear case, and using this notation, we let $[S_{n}]$ denote the set of
cyclic permutations of length $n$. Notice that $\left|[S_{n}]\right|=(n-1)!$.
To avoid confusion, we may refer to permutations from $S_{n}$ as _linear_
permutations, as opposed to cyclic permutations. The _length_ of a cyclic
permutation $[\sigma]$ is simply the length of $\sigma$, so any permutation
$[\sigma]\in[S_{n}]$ has length $n$, even though one may view a cyclic
permutation as being arranged in a circle and thus lacking endpoints.
We now define pattern avoidance on cyclic permutations, the natural analog to
the definition of pattern avoidance in the linear case. If
$[\sigma]\in[S_{n}]$ and $[\pi]\in[S_{k}]$ for $k\leq n$, then $[\sigma]$
_contains_ $[\pi]$ as a pattern if some element $\sigma^{\prime}\in[\sigma]$,
i.e., some rotation $\sigma^{\prime}$ of $\sigma$, contains $\pi$ as a
pattern, using the linear definition of pattern avoidance. Otherwise,
$[\sigma]$ is said to _avoid_ $[\pi]$. Intuitively, a cyclic permutation can
be written in a circle rather than a line as in the linear case, and a cyclic
permutation contains a pattern if some subsequence, going once around the
circle, is order isomorphic to the pattern. The formal definition of a cyclic
permutation being the set of rotations of a linear permutation enforces that one
cannot loop around the circle multiple times; if this were allowed, then any
cyclic permutation of length $n$ would trivially contain any pattern of length
$k$ for $k\leq n$.
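The definitions above admit a direct brute-force check, which can be useful for testing small cases (illustrative Python, not part of the paper):

```python
from itertools import combinations, permutations

def contains_linear(sigma, pi):
    """Classical (non-vincular) linear pattern containment."""
    k = len(pi)
    for idx in combinations(range(len(sigma)), k):
        sub = [sigma[i] for i in idx]
        if all((sub[a] < sub[b]) == (pi[a] < pi[b])
               for a in range(k) for b in range(a + 1, k)):
            return True
    return False

def contains_cyclic(sigma, pi):
    """[sigma] contains [pi] iff some rotation of sigma contains pi."""
    n = len(sigma)
    return any(contains_linear(sigma[k:] + sigma[:k], pi) for k in range(n))

# |[S_4]| = (4-1)! = 6: one representative per rotation class, starting with 1.
reps = [(1,) + rest for rest in permutations((2, 3, 4))]
print(len(reps))  # -> 6
print(contains_cyclic((4, 3, 2, 1), (1, 2, 3)))  # -> False
```

The last line checks that the cyclic decreasing permutation $[\delta_{4}]$ avoids $[123]$: no rotation of $4321$ contains an increasing subsequence of length 3.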
The _avoidance class_ of $[\pi]$ is similarly defined to be
$\operatorname{Av}_{n}[\pi]=\\{[\sigma]\in[S_{n}]\mid[\sigma]\text{ avoids
}[\pi]\\}.$
As before, we extend this definition to avoidance classes of sets of cyclic
permutations $[\Pi]$ by defining
$\operatorname{Av}_{n}[\Pi]=\bigcap_{[\pi]\in[\Pi]}\operatorname{Av}_{n}[\pi].$
For simplicity, when working with an explicit set of patterns we may omit the
curly brackets for the set; for example,
$\operatorname{Av}_{n}[1234,1243]=\operatorname{Av}_{n}[\\{1234,1243\\}].$
Wilf equivalences for cyclic permutations and sets of cyclic permutations are
defined analogously to the linear case. In particular, we still have the
trivial Wilf equivalences: for all $[\pi]$ and $[\Pi]$, we have
$\displaystyle[\pi]\equiv[\pi^{r}]$
$\displaystyle\equiv[\pi^{c}]\equiv[\pi^{rc}]$
$\displaystyle[\Pi]\equiv[\Pi^{r}]$
$\displaystyle\equiv[\Pi^{c}]\equiv[\Pi^{rc}].$
Lastly, we introduce vincular patterns and vincular pattern avoidance. We
consider $\pi$ as a vincular pattern if, when determining which permutations
$\sigma$ avoid $\pi$, we only consider subsequences $\sigma^{\prime}$ of
$\sigma$ where certain adjacent elements of $\pi$ are also adjacent in
$\sigma^{\prime}$ when $\sigma^{\prime}$ is embedded within $\sigma$. Such
adjacent elements are overlined in $\pi$, and each adjacency requirement,
i.e., each adjacent pair of elements that are overlined together, is referred
to as a vinculum. When two vincula are themselves adjacent, the overlines are
combined into one longer overline. For example, $\sigma=34251$ contains two
subsequences order isomorphic to $\pi=213$, namely $325$ and $425$, but only
$425$ is a copy of $\pi^{\prime}=\overline{21}3$, because the 4 and the 2 are
adjacent. In fact, $425$ is a copy of $\pi^{\prime\prime}=\overline{213}$ as
well, where $\pi^{\prime}=\overline{21}3$ is said to have one vinculum and
$\pi^{\prime\prime}=\overline{213}$ has two vincula. Notice that the vincula
do not have to be adjacent themselves, as we can have a vincular pattern like
$\pi=\overline{12}\,\overline{34}5\overline{678}$, which has four vincula.
Classical patterns can be seen as vincular patterns with no vincula.
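The examples above can be checked mechanically. In the following illustrative sketch a vincular pattern is encoded as a permutation together with the list of (0-indexed) positions where a vinculum starts:

```python
from itertools import combinations

def contains_vincular(sigma, pi, vincula):
    """Linear vincular containment: `vincula` lists the 0-indexed
    positions a of pi at which pi[a] and pi[a+1] must map to adjacent
    entries of sigma."""
    k = len(pi)
    for idx in combinations(range(len(sigma)), k):
        if any(idx[a + 1] != idx[a] + 1 for a in vincula):
            continue  # a required adjacency is violated
        sub = [sigma[i] for i in idx]
        if all((sub[a] < sub[b]) == (pi[a] < pi[b])
               for a in range(k) for b in range(a + 1, k)):
            return True
    return False

sigma = (3, 4, 2, 5, 1)
print(contains_vincular(sigma, (2, 1, 3), []))      # 213: True (via 325 and 425)
print(contains_vincular(sigma, (2, 1, 3), [0]))     # one vinculum: True (via 425)
print(contains_vincular(sigma, (2, 1, 3), [0, 1]))  # two vincula: True (via 425)
print(contains_vincular((1, 4, 2, 3), (1, 2, 3), [0]))  # False
```

The last call confirms the observation made below: $1423$ avoids $\overline{12}3$, even though its subsequence $123$ contains it.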
Vincular pattern avoidance, avoidance classes, and Wilf equivalences are
defined analogously. In particular, these vincular notions apply to cyclic
patterns and permutations as well, without change. For example, Domagalski,
Liang, Minnich, Sagan, Schmidt, and Sietsema [10] proved that
$\left|\operatorname{Av}_{n}[13\overline{24}]\right|$, the number of cyclic
permutations of $[n]$ avoiding $[13\overline{24}]$, equals $C_{n-1}$, where
$C_{n}$ denotes the $n$th Catalan number.
We would like to note one difference between pattern avoidance in the vincular
case, as opposed to the non-vincular case. This observation applies to both
linear and cyclic permutations. Without vincula, we have the simple but
oftentimes useful property that if $\sigma$ avoids a pattern $\pi$, then any
subsequence $\sigma^{\prime}$ of $\sigma$ also avoids $\pi$. But this is not
necessarily the case when $\pi$ is a vincular pattern, as removing elements
from $\sigma$ changes the adjacency relations of the other elements. For
example, $\sigma=1423$ avoids $\overline{12}3$, but the subsequence
$\sigma^{\prime}=123$ contains $\overline{12}3$. However, with some additional
caution, this reasoning can sometimes still be applied: for an example, see
the proof of Theorem 5.5.
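For linear permutations, vincular containment can be checked directly by brute force over subsequences, testing the adjacency requirements against the embedding. A minimal Python sketch (ours; the function names are illustrative) reproduces the examples from this section:

```python
from itertools import combinations

def order_iso(seq, pat):
    # True if seq is order isomorphic to the pattern pat
    return all((seq[i] < seq[j]) == (pat[i] < pat[j])
               for i in range(len(seq)) for j in range(len(seq)))

def contains_vincular(sigma, pat, vincula):
    # vincula lists the 0-based positions p where pat[p] and pat[p+1]
    # must be adjacent when the subsequence is embedded in sigma
    n, k = len(sigma), len(pat)
    return any(all(idx[p + 1] == idx[p] + 1 for p in vincula)
               and order_iso([sigma[i] for i in idx], pat)
               for idx in combinations(range(n), k))

# 34251 contains 213, and its copy 425 also realizes the vinculum of 21-3
# (first two elements overlined); 1423 avoids 12-3 even though its
# subsequence 123 contains it
assert contains_vincular([3, 4, 2, 5, 1], (2, 1, 3), [])
assert contains_vincular([3, 4, 2, 5, 1], (2, 1, 3), [0])
assert not contains_vincular([1, 4, 2, 3], (1, 2, 3), [0])
assert contains_vincular([1, 2, 3], (1, 2, 3), [0])
```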
## 3\. Vincular cyclic patterns of length 3
Before we address vincular cyclic patterns of length 3, let us address the two
vincular cyclic patterns of length 2, $[\overline{12}]$ and $[\overline{21}]$.
It is easy to see that every $[\sigma]$ of length $|\sigma|\geq 2$ contains
both $[\overline{12}]$ and $[\overline{21}]$, for every cyclic permutation of
length at least two must contain at least one ascent and one descent. Hence,
we have the following proposition.
###### Proposition 3.1.
We have
$\left|\operatorname{Av}_{n}[\overline{12}]\right|=\left|\operatorname{Av}_{n}[\overline{21}]\right|=\begin{cases}1&n=1\\ 0&n\geq 2.\end{cases}$
We now address vincular cyclic patterns of length 3 with a single vinculum.
###### Theorem 3.2.
For vincular cyclic patterns $\pi$ of length 3 containing one vinculum, we
have $\left|\operatorname{Av}_{n}[\pi]\right|=1$ for all $n\geq 1$.
###### Proof.
The vincular cyclic patterns of length 3 with one vinculum are
$[\overline{12}3]\equiv[\overline{21}3]\equiv[\overline{23}1]\equiv[\overline{32}1]$
and
$[\overline{13}2]\equiv[\overline{31}2],$
where these Wilf equivalences are trivial Wilf equivalences.
For $n<3$, we have only one cyclic permutation, so the result holds. So we now
assume $n\geq 3$.
We first address $[\overline{12}3]$. Consider $[\sigma]\in[S_{n}]$ that avoids
$[\overline{12}3]=[3\overline{12}]$. As 1 must be followed by an ascent, this
ascent must be to $n$, as otherwise 1, the element just after it, and $n$ form
a $[\overline{12}3]$ pattern. Consider 2 and the element just after it. This
element cannot be $n$ as 1 is just before $n$, but if it is an ascent then we
have a $[\overline{12}3]$ pattern using $n$ as our 3, so 2 must be just before
1. Continuing this logic, we find that
$[\sigma]=[(n-1),\dots,2,1,n]=[\delta_{n}]$ is the only
$[\overline{12}3]$-avoiding cyclic permutation, and hence
$\left|\operatorname{Av}_{n}[\overline{12}3]\right|=1$.
Now we address $[\overline{31}2]$. Consider $[\sigma]\in[S_{n}]$ that avoids
$[\overline{31}2]=[2\overline{31}]$. Note that 1 and the element just before
it must form a descent, and so to avoid $[\overline{31}2]$ the element just
before 1 must be 2, as otherwise these two elements together with 2 form a
$[\overline{31}2]$ pattern. Similarly, 2 and the element just before it must
also form a descent, and we find 3 must be just before 2. Inductively, we find
that $[\sigma]=[\delta_{n}]$ is the only possibility, so
$\left|\operatorname{Av}_{n}[\overline{31}2]\right|=1$. ∎
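The argument for $[\overline{12}3]$ can be checked mechanically: a cyclic permutation avoids $[\overline{12}3]$ exactly when every cyclic ascent goes directly to $n$ (any ascent topping out below $n$ forms the pattern together with $n$). A brute-force sketch in Python (ours, not from the paper) confirms that $[\delta_{n}]$ is the unique avoider:

```python
from itertools import permutations

def avoids_12v3(c):
    # [12-3] is avoided iff every cyclic ascent ascends directly to n: if some
    # ascent tops out at b < n, then that pair together with n forms the pattern
    n = len(c)
    return all(c[(i + 1) % n] == n or c[i] > c[(i + 1) % n] for i in range(n))

for n in range(3, 8):
    avoiders = [(1,) + rest for rest in permutations(range(2, n + 1))
                if avoids_12v3((1,) + rest)]
    # the unique avoider is [delta_n] = [(n-1), ..., 2, 1, n], rotated to start at 1
    assert avoiders == [(1, n) + tuple(range(n - 1, 1, -1))]
```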
Next, we address vincular cyclic patterns of length 3 with two vincula. There
are six such vincular cyclic patterns, namely
$[\overline{123}]\equiv[\overline{321}]$ and
$[\overline{132}]\equiv[\overline{213}]\equiv[\overline{231}]\equiv[\overline{312}]$,
where these Wilf equivalences are trivial. Viewing each cyclic permutation $[\sigma]$ as starting at $\sigma_{1}=1$, avoiding $[\overline{123}]$, i.e., two consecutive cyclic ascents, is equivalent to the linear permutation $\sigma_{2}\cdots\sigma_{n}$ avoiding both a double ascent and an initial ascent $\sigma_{2}<\sigma_{3}$, since no element can ascend to 1. Bergeron, Flajolet, and Salvy [4] showed that this sequence gives the exponential generating function
(1) $\sum_{n\geq 0}\left|\operatorname{Av}_{n+1}[\overline{123}]\right|\frac{z^{n}}{n!}=\frac{1}{2}+\frac{\sqrt{3}}{2}\tan\left(\frac{\sqrt{3}}{2}z+\frac{\pi}{6}\right),$
which satisfies the differential equation
(2) $E^{\prime}=E^{2}-E+1.$
This resolves the first half of [10, Conjecture 6.4], although we had to
change the indices of the exponential generating function to use
$\left|\operatorname{Av}_{n+1}[\overline{123}]\right|$ in order for the
conjectured differential equation to hold. Elizalde and Sagan [12, Corollary 2] (the paper [12] will be merged with [10] in a later version) concurrently and independently proved a more general result that implies Eq. 2, which thus
has the explicit solution given by Eq. 1; their proof method relates
$[\overline{123}]$-avoiding cyclic permutations to $\overline{123}$-avoiding
linear permutations, contrasting with our method of relating it to linear
permutations without double ascents or an initial ascent. We note that a
result by Ehrenborg [11, Theorem 3.3] implies the following closed form for
$\left|\operatorname{Av}_{n}[\overline{123}]\right|$:
$\left|\operatorname{Av}_{n}[\overline{123}]\right|=(n-1)!\sum_{k=-\infty}^{\infty}\left(\frac{\sqrt{3}}{2\pi(k+1/3)}\right)^{n}.$
In order for $[\sigma]$ to avoid $[\overline{132}]$, assuming $\sigma_{1}=1$, note that $\sigma_{1}$ can play the role of neither the 3 nor the 2 in the vincular pattern $[\overline{132}]$, so this is equivalent to the linear permutation $\sigma$, which starts with 1, avoiding $\overline{132}$. From the description in the corresponding OEIS entry, it is easy to see that this is counted by sequence A052319, which gives the exponential generating function
(3) $\sum_{n\geq 1}\left|\operatorname{Av}_{n}[\overline{132}]\right|\frac{z^{n}}{n!}=-\ln\left(1-\sqrt{\frac{\pi}{2}}\operatorname{erf}\left(\frac{z}{\sqrt{2}}\right)\right),$
where $\operatorname{erf}(z)$ is the error function
$\operatorname{erf}(z)=\frac{2}{\sqrt{\pi}}\int_{0}^{z}e^{-t^{2}}dt.$
This exponential generating function satisfies the differential equation
(4) $E^{\prime}=e^{E-z^{2}/2},$
resolving the second half of [10, Conjecture 6.4]. Elizalde and Sagan [12,
Corollary 4] also concurrently and independently resolved this second half of
the conjecture by proving a more general result that implies Eq. 4, which in
turn implies Eq. 3.
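The differential equation $E^{\prime}=E^{2}-E+1$ translates into the recurrence $a_{n+1}=\sum_{k}\binom{n}{k}a_{k}a_{n-k}-a_{n}$ (for $n\geq 1$, with $a_{0}=a_{1}=1$) on $a_{n}=\left|\operatorname{Av}_{n+1}[\overline{123}]\right|$, which can be checked against brute force. A Python sketch (ours; not part of the paper):

```python
from itertools import permutations
from math import comb

def count_avoid_123bar(n):
    # count cyclic permutations of [n] with no two consecutive cyclic ascents,
    # fixing 1 in the first slot to pick one representative per cyclic class
    total = 0
    for rest in permutations(range(2, n + 1)):
        c = (1,) + rest
        if not any(c[i] < c[(i + 1) % n] < c[(i + 2) % n] for i in range(n)):
            total += 1
    return total

# a_n = |Av_{n+1}[123 with both vincula]|, from E' = E^2 - E + 1, E(0) = 1
a = [1]
for n in range(7):
    nxt = sum(comb(n, k) * a[k] * a[n - k] for k in range(n + 1)) - a[n] \
          + (1 if n == 0 else 0)
    a.append(nxt)

assert [count_avoid_123bar(n) for n in range(2, 8)] == a[1:7]
```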
## 4\. Multiple vincular cyclic patterns of length 3
Similar to the study of [10] for non-vincular cyclic patterns, we analyze the
sizes of the avoidance classes for sets of multiple vincular cyclic patterns.
Consider a set $\Pi$ of (potentially vincular) cyclic patterns of length 3. By
Theorem 3.2, if $\Pi$ contains a vincular pattern with one vinculum, then
$\left|\operatorname{Av}_{n}[\Pi]\right|\leq 1$, where if
$\operatorname{Av}_{n}[\Pi]$ is nonempty then either
$\operatorname{Av}_{n}[\Pi]=\{[\iota_{n}]\}$ or
$\operatorname{Av}_{n}[\Pi]=\{[\delta_{n}]\}$. Moreover, the non-vincular
cyclic patterns of length 3 are $[321]$ and $[123]$, which are only avoided by
$[\iota_{n}]$ and $[\delta_{n}]$, respectively. We see that $[\iota_{n}]$
avoids the set of vincular patterns
$\{[321],[\overline{13}2],[\overline{21}3],[\overline{32}1],[\overline{132}],[\overline{213}],[\overline{321}]\}$,
i.e., the set of possibly vincular cyclic patterns of length 3 that when “de-
vincularized” are equal to $[321]$, and $[\delta_{n}]$ avoids
$\{[123],[\overline{12}3],[\overline{23}1],[\overline{31}2],[\overline{123}],[\overline{231}],[\overline{312}]\}$,
the set of possibly vincular cyclic patterns of length 3 that when de-
vincularized are equal to $[123]$; in addition, each contains all the patterns
in the other set. Hence, assuming $\Pi$ contains a vincular pattern with at most one vinculum, if $\Pi$ is a subset of either of these two sets, then $\left|\operatorname{Av}_{n}[\Pi]\right|=1$, with $\operatorname{Av}_{n}[\Pi]$ consisting of the respective $[\iota_{n}]$ or $[\delta_{n}]$; otherwise, $\left|\operatorname{Av}_{n}[\Pi]\right|=0$ for $n\geq 3$.
It remains to consider $\Pi$ containing only vincular patterns of length 3
with two vincula. This case is far more interesting, as there are many more
cyclic permutations avoiding such patterns.
### 4.1. Doubleton sets
There are $\binom{6}{2}=15$ doubleton sets, i.e., sets containing two
elements, that contain vincular patterns of length 3 with two vincula. We
first address the doubleton sets that do not admit any cyclic permutations.
###### Proposition 4.1.
For $\Pi$ equaling any of the following six doubleton sets
$\{[\overline{123}],[\overline{132}]\},\,\,\,\{[\overline{123}],[\overline{213}]\},\,\,\,\{[\overline{321}],[\overline{231}]\},\,\,\,\{[\overline{321}],[\overline{312}]\},\,\,\,\{[\overline{132}],[\overline{231}]\},\,\,\,\{[\overline{213}],[\overline{312}]\},$
we have $\left|\operatorname{Av}_{n}[\Pi]\right|=\begin{cases}1&n\leq 2\\ 0&n\geq 3.\end{cases}$
###### Proof.
For $n\leq 2$, there is one cyclic permutation, which yields the desired
result.
All six of these doubleton sets consist of two patterns that share either a 1 in the same position or a 3 in the same position. Consider the consecutive subsequence of three elements that uses 1 or $n$, respectively, in that shared position. Whether the other two values $x$ and $y$ satisfy $x<y$ or $x>y$, the subsequence forms a copy of one of the two patterns.
For example, consider the first set $\{[\overline{123}],[\overline{132}]\}$.
Consider an arbitrary cyclic permutation of length $n\geq 3$, and consider the
consecutive subsequence of three elements that starts with $1$, i.e., of the
form $1xy$. If $x<y$, then this is order isomorphic to $[\overline{123}]$, and
otherwise it is order isomorphic to $[\overline{132}]$, so no permutation can
avoid both patterns. ∎
We now address the case of $\Pi=\{[\overline{123}],[\overline{321}]\}$,
which we show is equinumerous with the up-down permutations. Recall that an
up-down permutation on $n$ elements is a permutation
$\sigma=\sigma_{1}\cdots\sigma_{n}$ where
$\sigma_{1}<\sigma_{2}>\sigma_{3}<\sigma_{4}>\cdots$, i.e., the permutation
alternates between ascents and descents. Let $U_{n}$ be the number of up-down
permutations on $n$ elements. Up-down permutations are also referred to as alternating permutations; however, this terminology is used inconsistently, as some authors define alternating permutations to also include the analogously defined down-up permutations.
###### Proposition 4.2.
For all $n\geq 1$,
$\left|\operatorname{Av}_{n}[\overline{123},\overline{321}]\right|=\begin{cases}1&n=1\\ 0&n\geq 3\text{ is odd}\\ U_{n-1}&n\geq 2\text{ is even}.\end{cases}$
###### Proof.
For $n\leq 2$, we have one cyclic permutation, so the result holds; in
particular, $U_{1}=1$. For $n\geq 3$, notice that in order to avoid two
consecutive cyclic ascents and two consecutive cyclic descents, the cyclic
permutation must be alternating, i.e., alternate between cyclic ascents and
descents. This is clearly impossible for odd $n$ due to the cyclic nature of
the permutation, so no permutations avoid both vincular cyclic patterns.
It remains to resolve the even case for $n\geq 4$. Consider a cyclic
permutation $[\sigma]$ that avoids $\{[\overline{123}],[\overline{321}]\}$
of length $n$. Without loss of generality we may assume $\sigma_{1}=n$. In
particular, we must descend from $n$, and ascend to $n$, so the linear
permutation $\sigma_{2}\cdots\sigma_{n}$ is an up-down permutation of length
$n-1$. This is a necessary and sufficient condition for $[\sigma]$ to avoid
$\{[\overline{123}],[\overline{321}]\}$. Hence, there are $U_{n-1}$ such
cyclic permutations. ∎
###### Remark 4.3.
Up-down permutations were first studied by André [1], who showed that the
exponential generating function of the number of up-down permutations $U_{n}$
is $\tan(x)+\sec(x)$, where $\tan(x)$ provides the terms with odd degree and
$\sec(x)$ provides the terms with even degree. Thus, including the $n=1$ term,
we find
$\sum_{n\geq 0}\left|\operatorname{Av}_{n+1}[\overline{123},\overline{321}]\right|\frac{z^{n}}{n!}=1+\tan(z),$
or equivalently
$\sum_{n\geq 1}\left|\operatorname{Av}_{n}[\overline{123},\overline{321}]\right|\frac{z^{n}}{n!}=\int_{0}^{z}(1+\tan(x))dx=z-\ln(\cos(z)).$
The asymptotic proportion of permutations that are up-down is
$\frac{U_{n}}{n!}=2\left(\frac{2}{\pi}\right)^{n+1}+O\left(\left(\frac{2}{3\pi}\right)^{n}\right);$
see Stanley [26].
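The numbers $U_{n}$ can be computed quickly via Entringer's boustrophedon recurrence and compared with a brute-force count of the alternating cyclic permutations of Proposition 4.2. A Python sketch (ours, for verification only):

```python
from itertools import permutations

def updown_numbers(m):
    # Entringer's boustrophedon recurrence: returns [U_0, ..., U_m], the numbers
    # of up-down permutations; the EGF is tan(x) + sec(x)
    U, row = [1], [1]
    for n in range(1, m + 1):
        new = [0]
        for k in range(1, n + 1):
            new.append(new[k - 1] + row[n - k])
        row = new
        U.append(row[-1])
    return U

def count_alternating_cyclic(n):
    # cyclic permutations avoiding both [123] and [321] (fully vincular),
    # i.e., alternating between cyclic ascents and cyclic descents
    total = 0
    for rest in permutations(range(2, n + 1)):
        c = (1,) + rest
        asc = [c[i] < c[(i + 1) % n] for i in range(n)]
        if all(asc[i] != asc[(i + 1) % n] for i in range(n)):
            total += 1
    return total

U = updown_numbers(6)
assert U == [1, 1, 1, 2, 5, 16, 61]
# Proposition 4.2: U_{n-1} avoiders for even n, none for odd n >= 3
assert [count_alternating_cyclic(n) for n in range(2, 8)] == [U[1], 0, U[3], 0, U[5], 0]
```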
The remaining doubleton sets form three Wilf equivalence classes under the
trivial Wilf equivalences:
(A) $\{[\overline{123}],[\overline{231}]\}\equiv\{[\overline{123}],[\overline{312}]\}\equiv\{[\overline{321}],[\overline{132}]\}\equiv\{[\overline{321}],[\overline{213}]\}$
(B) $\{[\overline{132}],[\overline{213}]\}\equiv\{[\overline{231}],[\overline{312}]\}$
(C) $\{[\overline{132}],[\overline{312}]\}\equiv\{[\overline{213}],[\overline{231}]\}.$
A computer search demonstrates that no two of these three classes are Wilf equivalent, and provides the data in Table 1 on the number of cyclic permutations avoiding a member of each of these three Wilf equivalence classes. Currently, none of these three sequences appear in the OEIS [25]. We leave the enumeration of these three Wilf equivalence classes as an open problem.
$n$ | $\left|\operatorname{Av}_{n}[(\mathrm{A})]\right|$ | $\left|\operatorname{Av}_{n}[(\mathrm{B})]\right|$ | $\left|\operatorname{Av}_{n}[(\mathrm{C})]\right|$
---|---|---|---
1 | 1 | 1 | 1
2 | 1 | 1 | 1
3 | 1 | 1 | 0
4 | 1 | 1 | 1
5 | 4 | 3 | 2
6 | 14 | 12 | 6
7 | 54 | 46 | 20
8 | 278 | 218 | 86
9 | 1524 | 1206 | 416
10 | 9460 | 7272 | 2268
11 | 66376 | 49096 | 13598
12 | 504968 | 366547 | 89924
13 | 4211088 | 2970945 | 649096
Table 1. Size of avoidance classes of doubleton sets of vincular cyclic
patterns of length 3.
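The computer search behind Table 1 is straightforward to reproduce: a fully vincular cyclic pattern of length 3 occurs exactly when some cyclically consecutive triple is order isomorphic to it. A Python sketch (ours; pattern triples are written as tuples):

```python
from itertools import permutations

def pattern_of(triple):
    # order-isomorphism type of three distinct values, e.g. (4, 1, 3) -> (3, 1, 2)
    ranked = sorted(triple)
    return tuple(ranked.index(v) + 1 for v in triple)

def count_avoiders(n, forbidden):
    # count cyclic permutations of [n] none of whose cyclically consecutive
    # triples is order isomorphic to a forbidden (fully vincular) pattern;
    # fixing 1 first picks one representative per cyclic class
    total = 0
    for rest in permutations(range(2, n + 1)):
        c = (1,) + rest
        if all(pattern_of((c[i], c[(i + 1) % n], c[(i + 2) % n])) not in forbidden
               for i in range(n)):
            total += 1
    return total

# row n = 5 of Table 1, using representatives of classes (A), (B), (C)
assert count_avoiders(5, {(1, 2, 3), (2, 3, 1)}) == 4
assert count_avoiders(5, {(1, 3, 2), (2, 1, 3)}) == 3
assert count_avoiders(5, {(1, 3, 2), (3, 1, 2)}) == 2
```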
### 4.2. Three or more patterns
As observed previously in Section 4, if a set $\Pi$ of cyclic patterns of
length 3 contains a pattern with at most one vinculum, then we have
$\left|\operatorname{Av}_{n}[\Pi]\right|\leq 1$ and its exact value can be
easily determined. Thus we will only consider $\Pi$ consisting of three or
more vincular patterns of length 3, all with two vincula. There are
$\binom{6}{3}=20$ sets of three such patterns. All but two of these contain a
set from Proposition 4.1, so it follows that for $n\geq 3$, no cyclic
permutations in $[S_{n}]$ avoid all three patterns in each of these 18 sets.
The two remaining sets are
$\{[\overline{123}],[\overline{231}],[\overline{312}]\}\equiv\{[\overline{132}],[\overline{213}],[\overline{321}]\}.$
Adding any other such pattern will yield one of the sets from Proposition 4.1
contained in $\Pi$, so we see that for any $\Pi$ consisting of more than three
patterns, $\left|\operatorname{Av}_{n}[\Pi]\right|=0$ for $n\geq 3$, and 1
otherwise.
We now provide a bijection between
$\operatorname{Av}_{n}[\overline{132},\overline{213},\overline{321}]$ and the
set of total cyclic orders extending a particular partial cyclic order. We
first define the necessary concepts regarding partial cyclic orders, which can
be seen as a circular analog of a poset. We refer readers to Megiddo [20] for
a more comprehensive introduction of cyclic orders.
###### Definition 4.4.
A partial cyclic order on a set $X$ is a ternary relation $Z\subset X^{3}$
satisfying the following conditions:
1. $\forall x,y,z\in X,\,(x,y,z)\in Z\Rightarrow(y,z,x)\in Z$ (cyclicity),
2. $\forall x,y,z\in X,\,(x,y,z)\in Z\Rightarrow(z,y,x)\not\in Z$ (antisymmetry),
3. $\forall x,y,z,u\in X,\,(x,y,z)\in Z$ and $(x,z,u)\in Z\Rightarrow(x,y,u)\in Z$ (transitivity).
A partial cyclic order $Z$ on a set $X$ is a total cyclic order if for any
triple $(x,y,z)$ of distinct elements, either $(x,y,z)\in Z$ or $(z,y,x)\in
Z$.
A partial cyclic order $Z^{\prime}$ extends another partial cyclic order $Z$ if $Z\subseteq Z^{\prime}$. The problem of determining whether a given partial cyclic order can be extended to a total cyclic order is NP-complete, as shown by Galil and Megiddo [14].
One way in which cyclic orders can be seen as a circular analog of partial
orders is that a totally ordered set can be organized into a chain, where an
element $y$ is larger than $x$ if $y$ is above $x$; on the other hand, a total
cyclic order can be visually represented by placing the elements of $X$ on a
circle, so that $(x,y,z)\in Z$ if and only if, starting from $x$ and going in
the positive (i.e., counterclockwise) direction, one encounters $y$ before
encountering $z$.
We now use a simplification of the notation of Ramassamy [23], where we use
$\mathcal{R}_{n}$ in place of $\mathcal{R}_{+^{n}}^{+,+}$ as used in his
paper.
###### Definition 4.5.
For any positive integer $n$, let $\mathcal{R}_{n}$ denote the set of total
cyclic orders $Z$ on the set $X=[n+2]$ such that $(i,i+1,i+2)\in Z$ for all
$1\leq i\leq n$, as well as $(n+1,n+2,1)\in Z$ and $(n+2,1,2)\in Z$.
Ramassamy [23] proved a recurrence relation that enumerates
$\left|\mathcal{R}_{n}\right|$. This recurrence relation is quite complicated,
however, so for sake of brevity and clarity we do not include this recurrence,
and refer readers to [23, Theorem 3.5]. No closed form or nice expression for
a generating function is currently known for this sequence of numbers
$\left|\mathcal{R}_{n}\right|$, which is OEIS sequence A295264 [25]. However,
the recurrence does allow for the construction of an algorithm that calculates
the first $n$ values of this sequence in polynomial time, compared to the
super-exponential complexity associated with a complete search over all
permutations.
We now provide a bijection to demonstrate
$\left|\operatorname{Av}_{n+2}[\overline{132},\overline{213},\overline{321}]\right|=\left|\mathcal{R}_{n}\right|$.
###### Theorem 4.6.
For all $n\geq 1$, we have
$\left|\operatorname{Av}_{n+2}[\overline{132},\overline{213},\overline{321}]\right|=\left|\mathcal{R}_{n}\right|.$
###### Proof.
Consider an element
$[\sigma]\in\operatorname{Av}_{n+2}[\overline{132},\overline{213},\overline{321}]$,
where we may assume that $\sigma_{1}=1$. We construct a bijection
$\phi:\operatorname{Av}_{n+2}[\overline{132},\overline{213},\overline{321}]\to\mathcal{R}_{n}$.
The cyclic order $\phi([\sigma])\in\mathcal{R}_{n}$ has
$(i,j,k)\in\phi([\sigma])$ if and only if $\sigma_{i}\sigma_{j}\sigma_{k}$ is
order isomorphic to 123, 231, or 312.
We first verify $\phi([\sigma])\in\mathcal{R}_{n}$. Clearly this satisfies
cyclicity, as well as antisymmetry. Proving transitivity is more involved.
Suppose $(x,y,z),(x,z,u)\in\phi([\sigma])$. We split into three cases
depending on whether $\sigma_{x}\sigma_{y}\sigma_{z}$ is order isomorphic to
123, 231, or 312.
1. Suppose $\sigma_{x}\sigma_{y}\sigma_{z}\cong 123$.
Then as $\sigma_{x}<\sigma_{z}$ and $(x,z,u)\in\phi([\sigma])$, either
$\sigma_{x}\sigma_{z}\sigma_{u}\cong 123$ or
$\sigma_{x}\sigma_{z}\sigma_{u}\cong 231$. In the former case, notice that
this implies $\sigma_{x}\sigma_{y}\sigma_{z}\sigma_{u}\cong 1234$, and thus
$\sigma_{x}\sigma_{y}\sigma_{u}\cong 123$, so $(x,y,u)\in\phi([\sigma])$. In
the latter case, this implies $\sigma_{x}\sigma_{y}\sigma_{z}\sigma_{u}\cong
2341$, and thus $\sigma_{x}\sigma_{y}\sigma_{u}\cong 231$, so
$(x,y,u)\in\phi([\sigma])$.
2. Suppose $\sigma_{x}\sigma_{y}\sigma_{z}\cong 231$.
Then in order to have $(x,z,u)\in\phi([\sigma])$, we must have
$\sigma_{x}\sigma_{z}\sigma_{u}\cong 312$, so
$\sigma_{x}\sigma_{y}\sigma_{z}\sigma_{u}\cong 3412$, and thus
$\sigma_{x}\sigma_{y}\sigma_{u}\cong 231$, so $(x,y,u)\in\phi([\sigma])$.
3. Suppose $\sigma_{x}\sigma_{y}\sigma_{z}\cong 312$.
Then in order to have $(x,z,u)\in\phi([\sigma])$, we must have
$\sigma_{x}\sigma_{z}\sigma_{u}\cong 312$, so
$\sigma_{x}\sigma_{y}\sigma_{z}\sigma_{u}\cong 4123$, and thus
$\sigma_{x}\sigma_{y}\sigma_{u}\cong 312$, so $(x,y,u)\in\phi([\sigma])$.
It is easy to see $\phi([\sigma])$ is a total cyclic order. Lastly, we must
show that $(i,i+1,i+2)\in\phi([\sigma])$ for all $1\leq i\leq n$, as well as
$(n+1,n+2,1),(n+2,1,2)\in\phi([\sigma])$. As
$[\sigma]\in\operatorname{Av}_{n+2}[\overline{132},\overline{213},\overline{321}]$,
we have that any such triple of indices must avoid the vincular patterns
$[\overline{132}],[\overline{213}]$, and $[\overline{321}]$, so in particular
these three cyclically consecutive values must be order isomorphic to 123,
231, or 312, as desired. Hence, $\phi([\sigma])\in\mathcal{R}_{n}$.
We now construct the inverse map
$\psi:\mathcal{R}_{n}\to\operatorname{Av}_{n+2}[\overline{132},\overline{213},\overline{321}]$,
which we will then show satisfies $\phi\circ\psi=\operatorname{id}$. Given a
total cyclic order $Z\in\mathcal{R}_{n}$, construct $[\sigma]\in[S_{n+2}]$
with $\sigma_{1}=1$ by enforcing $\sigma_{i}<\sigma_{j}$ for distinct $i,j>1$
if and only if $(1,i,j)\in Z$. Notice that for any such $\sigma_{i}$ and
$\sigma_{j}$, either $\sigma_{i}<\sigma_{j}$ or $\sigma_{j}<\sigma_{i}$ as $Z$
is a total cyclic order. We claim that a unique assignment of the elements of
$[2,n+2]$ to $\sigma_{2},\dots,\sigma_{n+2}$ exists that satisfies all of
these prescribed relations. It suffices to show that a directed cycle of self-contradictory relations
$\sigma_{i_{1}}<\sigma_{i_{2}}<\cdots<\sigma_{i_{k}}<\sigma_{i_{1}}$ cannot
occur, as this implies the relations form a total order, which can be mapped
order isomorphically to $[2,n+2]$. Suppose for the sake of contradiction that
such a directed cycle existed. Then by construction of $\psi$, we have
$(1,i_{1},i_{2}),(1,i_{2},i_{3}),\dots,(1,i_{k-1},i_{k})\in Z$, and applying
transitivity yields $(1,i_{1},i_{k})\in Z$, so by antisymmetry
$(1,i_{k},i_{1})\not\in Z$. But our cycle’s final relation
$\sigma_{i_{k}}<\sigma_{i_{1}}$ implies $(1,i_{k},i_{1})\in Z$, reaching a
contradiction. Hence a unique assignment of the elements of $[2,n+2]$ to
$\sigma_{2},\dots,\sigma_{n+2}$ exists, which yields a unique
$[\sigma]\in[S_{n+2}]$. We define $\psi(Z)=[\sigma]$.
We first show that
$\psi(Z)\in\operatorname{Av}_{n+2}[\overline{132},\overline{213},\overline{321}]$.
Let $[\sigma]=\psi(Z)$. Consider three cyclically consecutive elements
$\sigma_{i}\sigma_{i+1}\sigma_{i+2}$ of $\psi(Z)$, where the indices are taken
modulo $n+2$. By definition of $\mathcal{R}_{n}$, we know $(i,i+1,i+2)\in Z$.
As $Z$ is a total cyclic order, we split into two cases depending on whether
$(1,i,i+2)\in Z$ or not.
Case 1: $(1,i,i+2)\in Z$. Then $(i,i+1,i+2),(i,i+2,1)\in Z$, which implies
$(i,i+1,1)\in Z$, so $\sigma_{i}<\sigma_{i+1}$. Moreover,
$(i+2,1,i),(i+2,i,i+1)\in Z$, which implies $(i+2,1,i+1)\in Z$, so
$\sigma_{i+1}<\sigma_{i+2}$, so $\sigma_{i}\sigma_{i+1}\sigma_{i+2}\cong 123$,
and thus it avoids the three forbidden patterns.
Case 2: $(1,i+2,i)\in Z$. Then $\sigma_{i+2}<\sigma_{i}$. If $(1,i,i+1)\in Z$,
then $\sigma_{i}<\sigma_{i+1}$, and thus
$\sigma_{i}\sigma_{i+1}\sigma_{i+2}\cong 231$, which avoids the three
forbidden patterns. Otherwise, we have $(1,i+1,i)\in Z$. Then we have
$(i+1,i+2,i),(i+1,i,1)\in Z$, so transitivity implies $(i+1,i+2,1)\in Z$, and
thus $\sigma_{i+1}<\sigma_{i+2}$. Thus $\sigma_{i+1}<\sigma_{i+2}<\sigma_{i}$,
so $\sigma_{i}\sigma_{i+1}\sigma_{i+2}\cong 312$, which avoids the three
forbidden patterns.
Hence, we find
$\psi(Z)\in\operatorname{Av}_{n+2}[\overline{132},\overline{213},\overline{321}]$.
Finally, it suffices to show that $\phi\circ\psi(Z)=Z$ for all
$Z\in\mathcal{R}_{n}$. Let $[\sigma]=\psi(Z)$. Suppose
$\sigma_{i}\sigma_{j}\sigma_{k}$ is order isomorphic to 123, 231, or 312. It
suffices to show that $(i,j,k)\in Z$. By applying cyclicity afterwards, it
suffices to address the case $\sigma_{i}\sigma_{j}\sigma_{k}\cong 123$. As
$\sigma_{i}<\sigma_{j}<\sigma_{k}$, by definition of $\psi$ we know
$(1,i,j),(1,j,k)\in Z$. So $(j,k,1)$ and $(j,1,i)$ are both in $Z$, and
transitivity yields $(j,k,i)\in Z$, or equivalently $(i,j,k)\in Z$, as
desired. Hence $\phi\circ\psi(Z)=Z$, and $\phi$ is a bijection between
$\operatorname{Av}_{n+2}[\overline{132},\overline{213},\overline{321}]$ and
$\mathcal{R}_{n}$, which proves the result. ∎
This completes the classification for all sets $\Pi$ containing at least three
vincular patterns of length 3.
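The map $\phi$ of Theorem 4.6 is easy to verify experimentally for small $n$: enumerate the avoiders, apply $\phi$, and check that the images are pairwise distinct total relations containing the prescribed consecutive triples. A Python sketch (ours; indices are 1-based as in the proof):

```python
from itertools import permutations

GOOD = {(1, 2, 3), (2, 3, 1), (3, 1, 2)}

def patt(t):
    s = sorted(t)
    return tuple(s.index(v) + 1 for v in t)

def avoiders(m):
    # cyclic permutations of [m] avoiding [132], [213], [321] (all fully vincular):
    # every cyclically consecutive triple is order isomorphic to 123, 231, or 312
    out = []
    for rest in permutations(range(2, m + 1)):
        s = (1,) + rest
        if all(patt((s[i], s[(i + 1) % m], s[(i + 2) % m])) in GOOD
               for i in range(m)):
            out.append(s)
    return out

def phi(s):
    # the map of Theorem 4.6: (i, j, k) lies in phi([sigma]) iff
    # sigma_i sigma_j sigma_k is order isomorphic to 123, 231, or 312
    m = len(s)
    return frozenset((i + 1, j + 1, k + 1)
                     for i in range(m) for j in range(m) for k in range(m)
                     if len({i, j, k}) == 3 and patt((s[i], s[j], s[k])) in GOOD)

m = 5
images = [phi(s) for s in avoiders(m)]
assert len(set(images)) == len(images)      # phi is injective
for Z in images:                            # each image contains the required triples
    assert all((i, i % m + 1, (i + 1) % m + 1) in Z for i in range(1, m + 1))
```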
## 5\. Vincular cyclic patterns of length 4
There are four ways to place vincula into a vincular pattern of length 4: one
vinculum $[\overline{ab}cd]$, two disjoint vincula
$[\overline{ab}\,\overline{cd}]$, two consecutive vincula $[\overline{abc}d]$,
and three vincula $[\overline{abcd}]$. We investigate the case of a single
vinculum.
There are $4!=24$ vincular cyclic patterns of length 4 with one vinculum,
grouped into the following equivalence classes via trivial Wilf equivalences.
(A) $[\overline{12}34]\equiv[\overline{21}43]\equiv[\overline{34}12]\equiv[\overline{43}21]$
(B) $[\overline{12}43]\equiv[\overline{21}34]\equiv[\overline{34}21]\equiv[\overline{43}12]$
(C) $[\overline{13}24]\equiv[\overline{24}13]\equiv[\overline{31}42]\equiv[\overline{42}31]$
(D) $[\overline{13}42]\equiv[\overline{24}31]\equiv[\overline{31}24]\equiv[\overline{42}13]$
(E) $[\overline{14}23]\equiv[\overline{41}32]$
(F) $[\overline{14}32]\equiv[\overline{41}23]$
(G) $[\overline{23}14]\equiv[\overline{32}41]$
(H) $[\overline{23}41]\equiv[\overline{32}14]$
A computer search provides the data in Table 2 on the sizes of the avoidance classes for each trivial Wilf equivalence class.
$n$ | $\left|\operatorname{Av}_{n}[(\mathrm{A})]\right|=\left|\operatorname{Av}_{n}[(\mathrm{B})]\right|$ | $\left|\operatorname{Av}_{n}[(\mathrm{C})]\right|=\left|\operatorname{Av}_{n}[(\mathrm{E})]\right|=\left|\operatorname{Av}_{n}[(\mathrm{F})]\right|=C_{n-1}$ | $\left|\operatorname{Av}_{n}[(\mathrm{D})]\right|$ | $\left|\operatorname{Av}_{n}[(\mathrm{G})]\right|$ | $\left|\operatorname{Av}_{n}[(\mathrm{H})]\right|$
---|---|---|---|---|---
1 | 1 | 1 | 1 | 1 | 1
2 | 1 | 1 | 1 | 1 | 1
3 | 2 | 2 | 2 | 2 | 2
4 | 5 | 5 | 5 | 5 | 5
5 | 14 | 14 | 13 | 14 | 15
6 | 43 | 42 | 35 | 42 | 50
7 | 144 | 132 | 97 | 133 | 180
8 | 523 | 429 | 275 | 442 | 690
9 | 2048 | 1430 | 794 | 1537 | 2792
10 | 8597 | 4862 | 2327 | 5583 | 11857
11 | 38486 | 16796 | 6905 | 21165 | 52633
12 | 182905 | 58786 | 20705 | 83707 | 243455
OEIS | A047970 | A000108 | A025242 | A346660$^{*}$ | A346661$^{*}$
Table 2. Size of avoidance classes of vincular cyclic patterns of length 4
with one vinculum. Trivial Wilf equivalence classes that are enumerated by the
same values for $n\leq 12$ are condensed into the same column; all of these
nontrivial Wilf equivalences are proven in this paper. OEIS references are
included; the last two, marked with asterisks, are new and come from this
work.
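Since each of these patterns has a single vinculum, any cyclic occurrence can be read starting from the vinculum pair, so the computer search reduces to rotating to each adjacent pair and scanning pairs of later elements. A Python sketch (ours; it reproduces entries of Table 2 but is not the authors' code):

```python
from itertools import combinations, permutations

def patt(t):
    s = sorted(t)
    return tuple(s.index(v) + 1 for v in t)

def contains(cyc, pat):
    # pat = (a, b, c, d) encodes a vincular cyclic pattern with the vinculum
    # on a, b: some cyclically adjacent pair, followed in cyclic order by two
    # further elements, must be order isomorphic to pat
    n = len(cyc)
    for i in range(n):
        rot = cyc[i:] + cyc[:i]          # read an occurrence from the vinculum
        rest = rot[2:]
        for j, k in combinations(range(n - 2), 2):
            if patt((rot[0], rot[1], rest[j], rest[k])) == pat:
                return True
    return False

def count_avoiders(n, pat):
    # one representative per cyclic class, with 1 in the first slot
    return sum(not contains((1,) + rest, pat)
               for rest in permutations(range(2, n + 1)))

# spot checks against Table 2: classes (C), (A), (D), (H) at small n
assert count_avoiders(4, (1, 3, 2, 4)) == 5      # Catalan number C_3
assert count_avoiders(5, (1, 2, 3, 4)) == 14
assert count_avoiders(5, (1, 3, 4, 2)) == 13
assert count_avoiders(5, (2, 3, 4, 1)) == 15
```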
We first enumerate (A) and (B), demonstrating that they are Wilf equivalent.
In order to do this, we first define the Zeilberger statistic and set of a
permutation, and then of a cyclic permutation.
###### Definition 5.1.
The _Zeilberger set_ of a permutation $\sigma\in S_{n}$ is the longest subsequence of the form $n,n-1,\dots,i$ for some $i$. (The terminology of a Zeilberger set was suggested by Colin Defant.) The _Zeilberger statistic_ of a permutation $\sigma$, denoted $\operatorname{zeil}(\sigma)$, is the length of the Zeilberger set, i.e., the largest integer $m$ such that $n,n-1,\dots,n-m+1$ is a subsequence of $\sigma$.
The Zeilberger statistic originated in Zeilberger’s study of stack-sortable
permutations [29], and has been studied in articles such as [5, 17, 6]. We
extend this definition to cyclic permutations.
###### Definition 5.2.
The _Zeilberger set_ of a cyclic permutation $[\sigma]\in[S_{n}]$ is the
longest subsequence of the form $n,n-1,\dots,i$ for some $i$ appearing within
some permutation $\sigma^{\prime}\in[\sigma]$, i.e., some rotation of
$\sigma$. The _Zeilberger statistic_ of a cyclic permutation $[\sigma]$,
denoted $\operatorname{zeil}[\sigma]$, is the cardinality of the Zeilberger
set of $[\sigma]$, i.e., the largest integer $m$ such that $n,n-1,\dots,n-m+1$
is a subsequence of some rotation of $\sigma$. Notice that
$\operatorname{zeil}[\sigma]$ is the maximum of
$\operatorname{zeil}(\sigma^{\prime})$ for all rotations $\sigma^{\prime}$ of
$\sigma$.
For example, the Zeilberger set of $[136254]$ is the subsequence 6543 because
it is a subsequence of the rotation 625413.
###### Definition 5.3.
The _reverse Zeilberger set_ of a cyclic permutation $[\sigma]\in[S_{n}]$ is
the longest subsequence of the form $i,i+1,\dots,n$ for some $i$ appearing
within some permutation $\sigma^{\prime}\in[\sigma]$.
Notice that the reverse Zeilberger set of $[\sigma]$ corresponds to the
Zeilberger set of $[\sigma^{r}]$, and the _reverse Zeilberger statistic_ , the
cardinality of the reverse Zeilberger set, is simply
$\operatorname{zeil}[\sigma^{r}]$, so no new notation is needed.
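Both statistics are computable by a greedy left-to-right scan: match $n$, then $n-1$, and so on, and take the maximum over rotations in the cyclic case. A short Python sketch (ours) reproduces the example above:

```python
def zeil_linear(sigma):
    # greedily match n, n-1, n-2, ... left to right; the number matched is zeil(sigma)
    n = len(sigma)
    target = n
    for v in sigma:
        if v == target:
            target -= 1
    return n - target

def zeil_cyclic(sigma):
    # Zeilberger statistic of the cyclic permutation [sigma]:
    # the maximum of zeil over all rotations of sigma
    return max(zeil_linear(sigma[i:] + sigma[:i]) for i in range(len(sigma)))

# the example from the text: the Zeilberger set of [136254] is 6543,
# realized in the rotation 625413
assert zeil_linear((6, 2, 5, 4, 1, 3)) == 4
assert zeil_cyclic((1, 3, 6, 2, 5, 4)) == 4
```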
###### Theorem 5.4.
For all $n\geq 2$, we have
$\left|\operatorname{Av}_{n}[\overline{12}34]\right|=\left|\operatorname{Av}_{n}[\overline{12}43]\right|=1+\sum_{i=0}^{n-2}i(i+1)^{n-i-2}.$
###### Proof.
For $2\leq n\leq 3$, we verify the result directly, as all cyclic permutations of length $n$ avoid both $[\overline{12}34]$ and $[\overline{12}43]$.
For general $n\geq 4$, we first address $[\overline{12}43]$. We claim that the
following criterion is a necessary and sufficient condition for a cyclic
permutation $[\sigma]\in[S_{n}]$ to avoid $[\overline{12}43]$: for any cyclic
ascent in $[\sigma]$, i.e., two adjacent elements $\sigma_{i}<\sigma_{i+1}$
with indices taken modulo $n$, say from $a$ to $b>a$, we have that $b$ must be
in the reverse Zeilberger set of $[\sigma]$. To see that this is sufficient,
if $[\sigma]$ satisfies this criterion, then for any cyclic ascent from $a$ to
$b$, as $b$ itself is within the reverse Zeilberger set, reading from $b$, we
encounter the elements strictly greater than $b$ in increasing order, and thus
we cannot get a copy of $[\overline{12}43]$ using this cyclic ascent as our
$\overline{12}$. As $\overline{12}$ must come from a cyclic ascent and our
cyclic ascent was chosen arbitrarily, we find $[\sigma]$ avoids
$[\overline{12}43]$. To see that this criterion is necessary, if there exists
a cyclic ascent in $[\sigma]$, say from $a$ to $b>a$, such that $b$ is not in
the reverse Zeilberger set of $[\sigma]$, then reading $[\sigma]$ starting
from $b$, we cannot encounter the elements strictly greater than $b$ in
increasing order. This means there exist $c$ and $d$ where $b<c<d$ and $d$ is
encountered before $c$, from which we find $abdc$ forms a copy of
$[\overline{12}43]$.
Using this equivalent criterion, we determine
$\left|\operatorname{Av}_{n}[\overline{12}43]\right|$ by casework on the
reverse Zeilberger statistic. Clearly, for any $[\sigma]\in[S_{n}]$, we have
$2\leq\operatorname{zeil}[\sigma^{r}]\leq n$. Suppose
$\operatorname{zeil}[\sigma^{r}]=i+1$ for some $1\leq i\leq n-1$. If $i=n-1$,
then $[\sigma]=[\iota_{n}]$, which satisfies the criterion. For $1\leq i\leq
n-2$, the elements $n-i,\dots,n$, in that order, separate $[\sigma]$ into
$i+1$ regions between these $i+1$ elements, in which all other elements
$1,\dots,n-i-1$ must be placed. As $n-i-1$ is not in the reverse Zeilberger
set, it cannot be placed in the region between $n$ and $n-i$, so there are $i$
options on which region it can be placed into. All other $n-i-2$ elements can
be placed into any of the $i+1$ regions, yielding a total of $i(i+1)^{n-i-2}$
assignments. In each region, the elements must be in decreasing order so that
all cyclic ascents have the larger element in the reverse Zeilberger set. This
is necessary and sufficient to satisfy the criterion, and each of the
$i(i+1)^{n-i-2}$ assignments yields a unique permutation with this property.
Summing over all $1\leq i\leq n-2$ gives
$\left|\operatorname{Av}_{n}[\overline{12}43]\right|=1+\sum_{i=1}^{n-2}i(i+1)^{n-i-2},$
as desired.
The proof for $[\overline{12}34]$ is essentially identical to the
$[\overline{12}43]$ argument, where we instead replace the reverse Zeilberger
set with the Zeilberger set. ∎
We note that
$\left|\operatorname{Av}_{n+2}[\overline{12}34]\right|=1+\sum_{i=1}^{n}i(i+1)^{n-i}$
has a pattern avoidance interpretation using barred patterns; see Pudwell
[22].
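For small $n$, these enumerations can be confirmed by exhaustive search. The following Python sketch (the representation and helper names are ours, not from the text) fixes $\sigma_{1}=1$ to select one representative per cyclic class, and tests containment of a cyclic vincular pattern directly from the definition, taking the vinculum to be the first two entries of the pattern:

```python
from itertools import combinations, permutations

def contains(p, pat):
    """True iff the cyclic permutation [p] contains the cyclic vincular
    pattern [pat], whose first two entries form the vinculum and hence
    must be cyclically adjacent in any occurrence."""
    n, m = len(p), len(pat)
    order = lambda s: tuple(sorted(s).index(v) for v in s)
    target = order(pat)
    for i in range(n):  # position of the vinculum's first element
        pair = (p[i], p[(i + 1) % n])
        # remaining elements, read cyclically after the vinculum
        tail = [p[(i + 1 + t) % n] for t in range(1, n - 1)]
        if any(order(pair + rest) == target
               for rest in combinations(tail, m - 2)):
            return True
    return False

def count_avoiders(n, pat):
    # one representative per cyclic class: fix the first entry to 1
    return sum(not contains((1,) + rest, pat)
               for rest in permutations(range(2, n + 1)))

def formula(n):
    # 1 + sum_{i=1}^{n-2} i * (i+1)^{n-i-2}
    return 1 + sum(i * (i + 1) ** (n - i - 2) for i in range(1, n - 1))
```

For $4\leq n\leq 6$, both `count_avoiders(n, (1, 2, 4, 3))` and `count_avoiders(n, (1, 2, 3, 4))` agree with `formula(n)`, consistent with the common enumeration of $[\overline{12}43]$ and $[\overline{12}34]$.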
The size of the avoidance class for class (C) was found in [10] to be the
Catalan numbers: $\left|\operatorname{Av}_{n}[\overline{13}24]\right|=C_{n-1}$.
We now enumerate Wilf equivalence class (D), but first we provide the
necessary definitions for this sequence. The sequence $D_{n}$, studied
previously in [18, 24], is given by the Catalan-like recurrence
$D_{n}=\sum_{k=1}^{n-3}D_{k}D_{n-k}$
for $n\geq 4$, with $D_{1}=2$, $D_{2}=1$, and $D_{3}=1$. The next few values
are $D_{4}=2$, $D_{5}=5$, and $D_{6}=13$. For $n>1$, the number $D_{n}$ counts
the number of Dyck paths of semilength $n-1$ that avoid UUDD; see Sapounakis,
Tasoulas, and Tsikouras [24]. Sapounakis et al. also proved the following
expression for $D_{n+1}$:
$D_{n+1}=\sum_{j=0}^{\left\lfloor n/2\right\rfloor}\frac{(-1)^{j}}{n-j}\binom{n-j}{j}\binom{2n-3j}{n-j-1}.$
Mansour and Shattuck [18] determined that the ordinary generating function of
this sequence is
$\sum_{n\geq 1}D_{n}z^{n}=\frac{(z+1)^{2}-\sqrt{1-4z+2z^{2}+z^{4}}}{2}.$
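The recurrence and the alternating-sum expression can be cross-checked numerically; the following Python sketch (function names ours) uses exact rational arithmetic for the alternating sum:

```python
from fractions import Fraction
from math import comb

def D_list(N):
    """D_1, ..., D_N via the Catalan-like recurrence
    D_n = sum_{k=1}^{n-3} D_k D_{n-k}, with D_1 = 2, D_2 = D_3 = 1."""
    D = [0, 2, 1, 1] + [0] * max(0, N - 3)
    for n in range(4, N + 1):
        D[n] = sum(D[k] * D[n - k] for k in range(1, n - 2))
    return D[1:N + 1]

def D_explicit(n):
    """The Sapounakis-Tasoulas-Tsikouras alternating sum for D_{n+1};
    math.comb returns 0 when the lower index exceeds the upper one."""
    return sum(Fraction((-1) ** j, n - j)
               * comb(n - j, j) * comb(2 * n - 3 * j, n - j - 1)
               for j in range(n // 2 + 1))
```

Running both confirms, for instance, $D_{1},\dots,D_{7}=2,1,1,2,5,13,35$.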
We show that the size of the avoidance class for class (D) is enumerated by
the sequence $D_{n}$.
###### Theorem 5.5.
For all $n\geq 1$, we have
$\left|\operatorname{Av}_{n}[\overline{13}42]\right|=D_{n+1}$.
###### Proof.
The result clearly holds for $n\leq 4$. For $n>4$, we must show that
$\displaystyle\left|\operatorname{Av}_{n}[\overline{13}42]\right|$
$\displaystyle=\sum_{k=1}^{n-2}\left|\operatorname{Av}_{k-1}[\overline{13}42]\right|\left|\operatorname{Av}_{n-k}[\overline{13}42]\right|$
$\displaystyle=2\left|\operatorname{Av}_{n-1}[\overline{13}42]\right|+\sum_{k=1}^{n-3}\left|\operatorname{Av}_{k}[\overline{13}42]\right|\left|\operatorname{Av}_{n-k-1}[\overline{13}42]\right|,$
where $\left|\operatorname{Av}_{0}[\overline{13}42]\right|$ is defined to be
$D_{1}=2$. Consider $[\sigma]$ of length $n>4$ that avoids
$[\overline{13}42]$, where we assume $\sigma_{1}=1$. Let $\sigma_{2}=m$, where
$2\leq m\leq n$. We will proceed by casework on the value of $m$, where each
of the $n-1$ values of $m$ corresponds to one of the $n-3$ terms in the
desired sum, or one of the two copies of
$\left|\operatorname{Av}_{n-1}[\overline{13}42]\right|$. In this casework, it
will be important to reference specific elements of the cyclic permutation
$[\sigma]$, so for clarity we will use indices with respect to the linear
permutation $\sigma$, which is well-defined from $[\sigma]$ as assuming
$\sigma_{1}=1$ fixes the particular rotation, i.e., the particular element of
$[\sigma]$, that we use. However, we still must avoid the cyclic pattern
$[\overline{13}42]$, so when concerning ourselves with pattern avoidance we
will return to discussing the cyclic permutation $[\sigma]$, rather than the
linear permutation $\sigma$.
If $m=2$, then the only additional $[\overline{13}42]$ patterns that could
arise from removing 1 would be when the vinculum $\overline{13}$ bridges over
the removed 1, i.e., when 2 plays the role of 3 in the pattern
$\overline{13}42$. However, as removing 1 would make 2 the smallest element,
this is not possible. So removing 1 and reducing takes $[\sigma]$ to a
$[\overline{13}42]$-avoiding cyclic permutation of length $n-1$. Conversely,
increasing the values of all elements in a $[\overline{13}42]$-avoiding cyclic
permutation of length $n-1$ by one and then inserting a 1 before the 2 yields
a $[\overline{13}42]$-avoiding cyclic permutation of length $n$, as the only
additional $[\overline{13}42]$ patterns that could potentially arise from this
insertion would require 1 to be part of the pattern, in which it would have to
play the role of 1, but then 2 could not play the role of 3 for no element has
value strictly between 1 and 2. Thus, there is a natural bijection from the
$m=2$ subcase to $\operatorname{Av}_{n-1}[\overline{13}42]$ via removing 1 and
reducing, i.e., subtracting 1 from each element, which yields
$\left|\operatorname{Av}_{n-1}[\overline{13}42]\right|$ possibilities.
Similarly, if $m=n$, then clearly 1 cannot be part of a $[\overline{13}42]$
pattern, and removing 1 cannot create additional $[\overline{13}42]$ patterns
with $n$ playing the role of 3, as $n$ is the largest element. Hence there is
a natural bijection from the $m=n$ subcase to
$\operatorname{Av}_{n-1}[\overline{13}42]$ via removing 1 and reducing. This
yields the second copy of
$\left|\operatorname{Av}_{n-1}[\overline{13}42]\right|$ needed in our
recursion.
We now address $3\leq m\leq n-1$. Notice that all elements of $[2,m-1]$ must
come before all elements of $[m+1,n]$ in $\sigma$: otherwise some $y>m$ would
precede some $z\in[2,m-1]$, and then $1,m,y,z$ forms a copy of
$[\overline{13}42]$ using the cyclic ascent from 1 to $m$ as the
$\overline{13}$. Hence $\sigma$ is of the form $1m\rho\tau$ where $\rho$ is a
permutation of $[2,m-1]$ and $\tau$
is a permutation of $[m+1,n]$. To avoid a $[\overline{13}42]$ pattern
in $[\sigma]$ where the last element of $\rho$ plays the role of 1, we require
$\tau_{1}=n$, as otherwise we can use the last element of $\rho$ as the 1,
$\tau_{1}$ as the 3, $n$ as the 4, and $m$ as the 2 in an occurrence of
$[\overline{13}42]$. So $\sigma$ is of the form $1m\rho n\tau^{\prime}$ where
$\tau^{\prime}$ is a permutation of $[m+1,n-1]$. Clearly if $[\sigma]$ is
$[\overline{13}42]$-avoiding, then so too are $[m\rho]$ and
$[n\tau^{\prime}]$, as the additional ascent from the last element of $\rho$
to $m$ cannot be the $\overline{13}$ in a $\overline{13}42$ pattern, as there
is no element higher than $m$ in $m\rho$ to be the 4 in the pattern, and
similarly the additional ascent from the last element of $\tau^{\prime}$ to
$n$ cannot be the $\overline{13}$ in a $\overline{13}42$ pattern.
We now show that if $[m\rho]$ and $[n\tau^{\prime}]$ are both
$[\overline{13}42]$-avoiding for $\rho$ a permutation of $[2,m-1]$ and
$\tau^{\prime}$ a permutation of $[m+1,n-1]$, then $[1m\rho
n\tau^{\prime}]\in[S_{n}]$ is also $[\overline{13}42]$-avoiding. We proceed by
casework on the position of the cyclic ascent playing the role of
$\overline{13}$ in a potential $[\overline{13}42]$ pattern in $[1m\rho
n\tau^{\prime}]$, and show we cannot find a pair of elements to be the 4 and
the 2. If the cyclic ascent is 1 to $m$, then as all elements smaller than $m$
are before all the elements greater than $m$ in $1m\rho n\tau^{\prime}$, we
cannot form a $[\overline{13}42]$ pattern. The pair of adjacent elements $m$
and $\rho_{1}$ forms a cyclic descent, so we continue onward and consider a
cyclic ascent in $\rho$. If a $[\overline{13}42]$ pattern existed in $[1m\rho
n\tau^{\prime}]$ where the $\overline{13}$ is within $\rho$, then clearly the
2 in the pattern must also come from $\rho$. If the 4 also comes from $\rho$,
then $[m\rho]$ would have contained $[\overline{13}42]$, contradicting the
assumption that $[m\rho]$ avoids $[\overline{13}42]$. Otherwise, the 4 comes
from outside $\rho$, and notice that using $m$ in place of this element still
yields a $[\overline{13}42]$ pattern, which again implies $[m\rho]$ contains
$[\overline{13}42]$, a contradiction. The cyclic ascent from the last element
of $\rho$ to $n$ cannot be the $\overline{13}$ in a $[\overline{13}42]$
pattern, as there is no element larger than $n$ to be the 4. We have a cyclic
descent from $n$ to $\tau^{\prime}_{1}$, as well as a cyclic descent from the
last element of $\tau^{\prime}$ to 1, so the last case to consider is a cyclic
ascent within $\tau^{\prime}$ being the $\overline{13}$. If a
$[\overline{13}42]$ pattern existed in $[1m\rho n\tau^{\prime}]$ where the
$\overline{13}$ is within $\tau^{\prime}$, then clearly the 2 in the pattern
must also come from $\tau^{\prime}$. If the 4 also comes from $\tau^{\prime}$,
then this same subsequence is in $[n\tau^{\prime}]$, contradicting the
assumption that $[n\tau^{\prime}]$ avoids $[\overline{13}42]$. Otherwise, the
4 comes from outside $\tau^{\prime}$, and notice that using $n$ in place of
this element still yields a $[\overline{13}42]$ pattern, which again implies
$[n\tau^{\prime}]$ contains $[\overline{13}42]$, a contradiction. Thus,
$[1m\rho n\tau^{\prime}]$ avoids $[\overline{13}42]$.
In particular, each $[\overline{13}42]$-avoiding permutation of $[2,m]$ has a
unique linear representation as $m\rho$ and each $[\overline{13}42]$-avoiding
permutation of $[m+1,n]$ has a unique linear representation as
$n\tau^{\prime}$, which is crucial as our expression of $\sigma$ as $1m\rho
n\tau^{\prime}$ is a linear expression. As $m\rho$ is of length $m-1$ and
$n\tau^{\prime}$ is of length $n-m$, letting $k=n-m$ yields $n\tau^{\prime}$
is of length $k$ and $m\rho$ is of length $n-k-1$, where $1\leq k\leq n-3$ as
$m$ ranges within $3\leq m\leq n-1$, which gives us the sum from the desired
recurrence. ∎
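Theorem 5.5 can likewise be verified exhaustively for small $n$. The sketch below (helper names ours) reuses a direct containment test, with the vinculum taken as the first two entries of the pattern, and compares against the recurrence for $D_{n}$:

```python
from itertools import combinations, permutations

def contains(p, pat):
    # True iff the cyclic permutation [p] contains the cyclic vincular
    # pattern [pat] (vinculum = first two entries of pat)
    n, m = len(p), len(pat)
    order = lambda s: tuple(sorted(s).index(v) for v in s)
    target = order(pat)
    for i in range(n):
        pair = (p[i], p[(i + 1) % n])
        tail = [p[(i + 1 + t) % n] for t in range(1, n - 1)]
        if any(order(pair + rest) == target
               for rest in combinations(tail, m - 2)):
            return True
    return False

def count_avoiders(n, pat):
    # one representative per cyclic class: fix the first entry to 1
    return sum(not contains((1,) + rest, pat)
               for rest in permutations(range(2, n + 1)))

def D_list(N):
    # D_1 = 2, D_2 = D_3 = 1, and D_n = sum_{k=1}^{n-3} D_k D_{n-k}
    D = [0, 2, 1, 1] + [0] * max(0, N - 3)
    for n in range(4, N + 1):
        D[n] = sum(D[k] * D[n - k] for k in range(1, n - 2))
    return D[1:N + 1]
```

For $n=4,5,6$ the brute-force count of $\left|\operatorname{Av}_{n}[\overline{13}42]\right|$ equals $D_{n+1}=5,13,35$.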
We now enumerate classes (E) and (F), demonstrating that (C), (E), and (F) are
Wilf equivalent, namely being enumerated by the Catalan numbers. We first
introduce Catalan’s triangle.
###### Definition 5.6 (Entries of Catalan’s triangle).
For $0\leq k\leq n$, define $T(n,k)=\frac{n-k+1}{n+1}\binom{n+k}{n}$.
This creates a triangle of entries, where rows correspond to values of $n$ and
columns correspond to values of $k$. It is well-known that the row-sums are
the Catalan numbers.
###### Lemma 5.7 ([3, Lemma 1]).
For all $n\geq 0$, we have
$\sum_{k=0}^{n}T(n,k)=C_{n+1}.$
It is also well-known that $T(n,k)$ satisfies the following recurrence. We
will use the convention that $T(n,n+1)=0$, which is consistent with the
original definition $T(n,k)=\binom{n+k}{n}\frac{n-k+1}{n+1}$ when $k=n+1$.
###### Lemma 5.8 ([3, Lemma 1]).
For all $n\geq 1$ and $0\leq k\leq n$, we have
$T(n,k)=\sum_{j=0}^{k}T(n-1,j).$
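Both lemmas, as well as the convention $T(n,n+1)=0$, are easy to sanity-check numerically directly from Definition 5.6; the function names in the following sketch are ours:

```python
from math import comb

def T(n, k):
    # Catalan's triangle: T(n, k) = (n - k + 1)/(n + 1) * C(n + k, n);
    # the division is always exact
    return (n - k + 1) * comb(n + k, n) // (n + 1)

def catalan(n):
    # the n-th Catalan number C_n
    return comb(2 * n, n) // (n + 1)
```

For example, row $n=2$ is $1,2,2$, summing to $C_{3}=5$.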
We first enumerate class (E).
###### Theorem 5.9.
For all $n\geq 1$, we have
$\left|\operatorname{Av}_{n}[\overline{14}23]\right|=C_{n-1}$.
###### Proof.
For $n\leq 3$, we directly verify the result, where all cyclic permutations of
length $n$ avoid $[\overline{14}23]$.
We now assume $n\geq 4$. Let $i$ be a positive integer satisfying $1\leq i\leq
n-1$. Let $A(n,i)$ denote the set of cyclic permutations
$[\sigma]\in\operatorname{Av}_{n}[\overline{14}23]$ that have $i$ directly
before $n$. We will show that $|A(n,i)|=T(n-2,i-1)$. Summing over $i$ would
then yield
$\left|\operatorname{Av}_{n}[\overline{14}23]\right|=\sum_{i=1}^{n-1}T(n-2,i-1)=C_{n-1},$
using Lemma 5.7. We prove this by induction on $n$, where it is trivial to
verify this for the base case $n=3$.
For the inductive step, assume the result holds for $n-1$; using Lemma 5.8, it
suffices to show that $A(n,i)$ is in bijection with
$\bigcup_{j=1}^{i}A(n-1,j)$, which we denote $B(n,i)$. Our bijection
$\phi:A(n,i)\to B(n,i)$ will be to simply delete $n$ from a cyclic permutation
of length $n$.
We first show that this mapping takes cyclic permutations in $A(n,i)$ to
cyclic permutations in $B(n,i)$. As we have a cyclic ascent from $i$ to $n$,
in order for $[\sigma]\in A(n,i)$ to avoid $[\overline{14}23]$, we must have
the elements strictly between $i$ and $n$ appearing in decreasing order when
reading from $n$, which implies that the Zeilberger set of $[\sigma]$ is equal
to the interval $[i,n]$. If the element preceding $n-1$ in $[\sigma]$ is $n$,
then the element preceding $n-1$ in $\phi([\sigma])$ is $i$. If the element
preceding $n-1$ in $[\sigma]$ is not $n$, then this element is strictly less
than $i$, for any element $k$ where $i\leq k<n$ occurs before $n$ when reading
from $n-1$. Therefore, the element preceding $n-1$ in $\phi([\sigma])$ is some
$j$ for $1\leq j\leq i$. To show that $\phi([\sigma])$ is
$[\overline{14}23]$-avoiding, as $[\sigma]$ is $[\overline{14}23]$-avoiding,
it suffices to show that the newly created adjacency between $i$ and the
element after it after removing $n$ cannot create a copy of
$[\overline{14}23]$ where $i$ plays the role of 1. In order for this to be a
copy of $[\overline{14}23]$, we require $i$ to ascend to some element $k>i$,
and as $[i,n-1]$ is arranged in descending order, this means $i$ may only
ascend to $n-1$, but in this case we cannot find elements to play the roles of
2 and 3 for $[i,n-1]$ is in descending order. Hence, $\phi([\sigma])$ is
$[\overline{14}23]$-avoiding and $\phi([\sigma])\in B(n,i)$.
To prove $\phi$ is a bijection, we provide the inverse map $\psi$, which we
claim is the operation that adds $n$ after $i$. It suffices to show that this
mapping sends elements of $B(n,i)$ to elements of $A(n,i)$, as once this is
shown it clearly follows from the definition of $\psi$ that
$\psi\circ\phi([\sigma])=[\sigma]$ for all $[\sigma]\in A(n,i)$ and
$\phi\circ\psi([\tau])=[\tau]$ for all $[\tau]\in B(n,i)$. Consider some
cyclic permutation $[\tau]\in B(n,i)$. Clearly, we have $i$ directly before
$n$ in $\psi([\tau])$. To show that $\psi([\tau])$ is still
$[\overline{14}23]$-avoiding, as $n$ can only ever play the role of 4, it
suffices to show that the cyclic ascent from $i$ to $n$ cannot be a part of a
copy of $[\overline{14}23]$. Equivalently, we must show that the elements in
$[i,n]$ appear in decreasing order, when starting from $n$. As $[\tau]$ is
$[\overline{14}23]$-avoiding with a cyclic ascent from $j\leq i$ to $n-1$, we
know $[j,n-1]$ appears in decreasing order, and thus $[i,n-1]$ appears in
decreasing order in $[\tau]$ as well as $\psi[\tau]$. As $n$ is inserted right
after $i$ and thus between $i$ and $n-1$, we find the elements of $[i,n]$
appear in decreasing order.
Thus $\phi$ is a bijection, and the proof is complete. ∎
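The refinement $|A(n,i)|=T(n-2,i-1)$ used in this proof can be checked by brute force for small $n$; in the sketch below (helper names ours), the vinculum is the first two entries of the pattern tuple:

```python
from itertools import combinations, permutations
from math import comb

def contains(p, pat):
    # True iff the cyclic permutation [p] contains the cyclic vincular
    # pattern [pat] (vinculum = first two entries of pat)
    n, m = len(p), len(pat)
    order = lambda s: tuple(sorted(s).index(v) for v in s)
    target = order(pat)
    for i in range(n):
        pair = (p[i], p[(i + 1) % n])
        tail = [p[(i + 1 + t) % n] for t in range(1, n - 1)]
        if any(order(pair + rest) == target
               for rest in combinations(tail, m - 2)):
            return True
    return False

def T(n, k):
    # Catalan's triangle entry
    return (n - k + 1) * comb(n + k, n) // (n + 1)

def A_counts(n):
    # counts[i] = number of [sigma] in Av_n[\overline{14}23]
    # with i directly before n
    counts = [0] * n
    for rest in permutations(range(2, n + 1)):
        p = (1,) + rest
        if not contains(p, (1, 4, 2, 3)):
            counts[p[p.index(n) - 1]] += 1
    return counts
```

For $n=4$ this gives $|A(4,1)|,|A(4,2)|,|A(4,3)|=1,2,2=T(2,0),T(2,1),T(2,2)$.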
We now enumerate class (F). The proof is similar in structure to that of
Theorem 5.9, but is slightly more involved and uses a different refinement
than $A(n,i)$ from the previous proof.
###### Theorem 5.10.
For all $n\geq 1$, we have
$\left|\operatorname{Av}_{n}[\overline{14}32]\right|=C_{n-1}$.
###### Proof.
For $n\leq 3$, we directly verify the result, where all cyclic permutations of
length $n$ avoid $[\overline{14}32]$.
We now assume $n\geq 4$. Let $i$ be a positive integer satisfying $0\leq i\leq
n-2$. Let $E(n,i)$ denote the set of cyclic permutations
$[\sigma]\in\operatorname{Av}_{n}[\overline{14}32]$ that have
$\operatorname{zeil}[\sigma^{r}]=n-i$. We will show by induction on $n$ that
$|E(n,i)|=T(n-2,i)$. As $2\leq\operatorname{zeil}[\sigma^{r}]\leq n$, summing
over all $i$ satisfying $0\leq i\leq n-2$ would then yield
$\left|\operatorname{Av}_{n}[\overline{14}32]\right|=\sum_{i=0}^{n-2}T(n-2,i)=C_{n-1},$
by Lemma 5.7. It is trivial to verify the base case $n=3$.
For the inductive step, assume the result holds for $n-1$; using Lemma 5.8, it
suffices to show that $E(n,i)$ is in bijection with
$\bigcup_{j=0}^{i}E(n-1,j)$, which we denote $F(n,i)$. Our bijection
$\phi:E(n,i)\to F(n,i)$ will be to simply delete $n$ from $[\sigma]$, similar
to the proof of Theorem 5.9.
We first show that this mapping takes $[\sigma]\in E(n,i)$ to some $[\tau]\in
F(n,i)$. As $\operatorname{zeil}[\sigma^{r}]=n-i$, we find $i+1,i+2,\dots,n$
is a subsequence of $[\sigma]$, and thus $\phi([\sigma])$ has $i+1,\dots,n-1$
as a subsequence, so $n-1-i\leq\operatorname{zeil}[\phi([\sigma])^{r}]\leq
n-1$. Thus $\operatorname{zeil}[\phi([\sigma])^{r}]=n-1-j$ for some $0\leq
j\leq i$. To show that
$\phi([\sigma])\in\operatorname{Av}_{n-1}[\overline{14}32]$, suppose the
element preceding $n$ is $a$ and the element following $n$ is $b$. It suffices
to show that the removal of $n$ cannot have caused a copy of
$[\overline{14}32]$ with $a,b$ forming the vinculum $\overline{14}$. As
$[\sigma]$ is $[\overline{14}32]$-avoiding and $a,n$ form a cyclic ascent, we
know that reading from $a$, the elements in $[a+1,n-1]$ are encountered in
increasing order. Hence in order for $a,b$ to be a cyclic ascent, $b$ must
equal $a+1$, but then $a,b$ cannot form $\overline{14}$ as there are no
elements with values between $a$ and $b=a+1$. Thus, $\phi([\sigma])\in
F(n,i)$.
To prove $\phi$ is a bijection, we provide the inverse map $\psi$. For
$[\tau]\in F(n,i)$ with $\operatorname{zeil}[\tau^{r}]=n-1-j$ for $0\leq j\leq
i$, if $j=i$ then define $\psi([\tau])$ to be the cyclic permutation obtained
by inserting $n$ immediately after $n-1$, and if $j<i$ define $\psi([\tau])$
to be the cyclic permutation obtained by inserting $n$ immediately after $i$.
We first show that this mapping sends elements of $F(n,i)$ to elements of
$E(n,i)$. Notice that $[\tau]$ has $j+1,j+2,\dots,n-1$ as a subsequence, as
$\operatorname{zeil}[\tau^{r}]=n-1-j$.
If $j=i$, then adding $n$ immediately after $n-1$ means
$\operatorname{zeil}[\psi([\tau])^{r}]=n-i$, as desired. To show that
$\psi([\tau])$ is $[\overline{14}32]$-avoiding, it suffices to show that $n$
cannot be a part of a copy of $[\overline{14}32]$. If it were, then $n$ must
be the 4 in the $[\overline{14}32]$ pattern, but then $n-1$ would have to play
the role of 1, leaving no possible elements to play the roles of 3 and 2.
Hence $\psi([\tau])\in E(n,i)$.
Otherwise if $j<i$, then we have that $j+1,\dots,i,n,i+1,\dots,n-1$ is a
subsequence of $\psi([\tau])$, from which we can immediately see that the
reverse Zeilberger set is the subsequence $i+1,\dots,n-1,n$, and thus
$\operatorname{zeil}[\psi([\tau])^{r}]=n-i$. To show that $\psi([\tau])$ is
$[\overline{14}32]$-avoiding, it suffices to show that $n$ cannot be a part of
a copy of $[\overline{14}32]$, where it would have to play the role of 4, and
thus $i$ would have to play the role of 1. Reading from $n$, we encounter the
elements of $[i+1,n-1]$ in increasing order, and thus we cannot find two
elements in $[i+1,n-1]$ to complete the $[\overline{14}32]$ pattern. Thus
$\psi([\tau])\in E(n,i)$.
It clearly follows from the definitions of $\phi$ and $\psi$ that
$\phi\circ\psi([\tau])=[\tau]$ for all $[\tau]\in F(n,i)$. To show
$\psi\circ\phi([\sigma])=[\sigma]$ for all $[\sigma]\in E(n,i)$, notice that
$[\sigma]$ has $i+1,i+2,\dots,n-1,n$ as a subsequence.
If $\phi([\sigma])\in E(n-1,i)$, then $\psi\circ\phi([\sigma])$ is obtained by
inserting $n$ immediately after $n-1$ in $\phi([\sigma])$. So, we must show
that there does not exist an element between $n-1$ and $n$ in $[\sigma]$. If
$i=0$, then $[\sigma]=[\iota_{n}]$ and this holds. Otherwise, element $i\geq
1$ is not located between $n-1$ and $i+1$ (reading in the forward direction),
as $\operatorname{zeil}[\phi([\sigma])^{r}]=n-1-i$. So $i$ is somewhere
between $i+1$ and $n-1$. If there was an element between $n-1$ and $n$,
suppose the element immediately before $n$ was $x$. Notice that $x<i$, so
$x,n,i+1,i$ forms a $[\overline{14}32]$ pattern, contradicting the fact that
$[\sigma]$ avoids $[\overline{14}32]$. Hence, there is no element between
$n-1$ and $n$ in $[\sigma]$, which yields $\psi\circ\phi([\sigma])=[\sigma]$.
Otherwise $\operatorname{zeil}[\phi([\sigma])^{r}]>n-1-i$, so $i$ is somewhere
between $n-1$ and $i+1$. As $\operatorname{zeil}[\sigma^{r}]=n-i$, however, $i$
cannot be between $n$ and $i+1$, so this means $i$ is somewhere between $n-1$
and $n$. Hence $[\sigma]$ has $i+1,i+2,\dots,n-1,i,n$ as a subsequence. In
this case, $\psi\circ\phi([\sigma])$ is obtained by inserting $n$ immediately
after $i$ in $\phi([\sigma])$, so we must show that there does not exist an
element between $i$ and $n$ in $[\sigma]$. If there were, suppose the element
immediately before $n$ was $x$. Note that $x<i$, so $x,n,i+1,i$ forms a
$[\overline{14}32]$ pattern, contradicting the fact that $[\sigma]$ avoids
$[\overline{14}32]$. Therefore, there is no element between $i$ and $n$ in
$[\sigma]$, which yields $\psi\circ\phi([\sigma])=[\sigma]$.
Hence, $\phi$ is a bijection, and the proof is complete. ∎
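The refinement $|E(n,i)|=T(n-2,i)$ can also be checked by brute force. In the sketch below (helper names ours), the reverse Zeilberger statistic is computed via the criterion used earlier: $b$ lies in the reverse Zeilberger set precisely when $b,b+1,\dots,n$ occurs as a cyclic increasing subsequence read from $b$.

```python
from itertools import combinations, permutations
from math import comb

def contains(p, pat):
    # True iff [p] contains the cyclic vincular pattern [pat]
    # (vinculum = first two entries of pat)
    n, m = len(p), len(pat)
    order = lambda s: tuple(sorted(s).index(v) for v in s)
    target = order(pat)
    for i in range(n):
        pair = (p[i], p[(i + 1) % n])
        tail = [p[(i + 1 + t) % n] for t in range(1, n - 1)]
        if any(order(pair + rest) == target
               for rest in combinations(tail, m - 2)):
            return True
    return False

def T(n, k):
    # Catalan's triangle entry
    return (n - k + 1) * comb(n + k, n) // (n + 1)

def rzeil(p):
    # largest t such that n-t+1, ..., n occurs as a cyclic increasing
    # subsequence read from n-t+1; this is zeil[p^r]
    n = len(p)
    for t in range(n, 1, -1):
        j = p.index(n - t + 1)
        seq = (p[(j + s) % n] for s in range(n))
        if all(v in seq for v in range(n - t + 1, n + 1)):
            return t
    return 1

def E_counts(n):
    # counts[i] = number of [sigma] in Av_n[\overline{14}32]
    # with zeil[sigma^r] = n - i
    counts = [0] * (n - 1)
    for rest in permutations(range(2, n + 1)):
        p = (1,) + rest
        if not contains(p, (1, 4, 3, 2)):
            counts[n - rzeil(p)] += 1
    return counts
```

For $n=4$ this gives $|E(4,0)|,|E(4,1)|,|E(4,2)|=1,2,2=T(2,0),T(2,1),T(2,2)$.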
Lastly, we enumerate class (G) in terms of the number of strongly monotone
partitions of $[n]$, which we now define.
###### Definition 5.11.
A partition of $[n]$ is _strongly monotone_ if, when its parts are sorted so
that their minimum elements are in increasing order, then their maximum
elements are also in increasing order.
Let $A_{n}$ denote the number of strongly monotone partitions of $[n]$.
Claesson and Mansour [9] showed this sequence has the ordinary generating
function
$\sum_{n\geq 0}A_{n}x^{n}=\frac{1}{1-x-x^{2}B^{*}(x)},$
where $B^{*}(x)$ denotes the ordinary generating function of the Bessel
numbers, originally introduced by Flajolet and Schott [13]. While the size of
the avoidance classes of (G) does not previously appear as a sequence in the
OEIS [25], it can be expressed in terms of $A_{n}$.
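Definition 5.11 is straightforward to test by brute force. The following Python sketch (helper names ours) enumerates set partitions recursively and filters the strongly monotone ones; the first values $A_{0},\dots,A_{4}=1,1,2,4,9$ are consistent with the expansion of the generating function above.

```python
def set_partitions(elems):
    """Generate all set partitions of the list elems."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        yield [[first]] + part                         # new singleton block
        for i in range(len(part)):                     # or join an existing block
            yield part[:i] + [[first] + part[i]] + part[i + 1:]

def strongly_monotone(part):
    # sort blocks by minimum; the maxima must then also be increasing
    blocks = sorted(part, key=min)
    maxima = [max(b) for b in blocks]
    return maxima == sorted(maxima)

def A(n):
    return sum(strongly_monotone(p)
               for p in set_partitions(list(range(1, n + 1))))
```

For instance, of the 15 set partitions of $[4]$, exactly 9 are strongly monotone ($13|24$ is, while $14|23$ is not).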
###### Theorem 5.12.
For all $n\geq 2$, we have
$\left|\operatorname{Av}_{n}[\overline{23}14]\right|=\sum_{i=0}^{n-2}\binom{n-2}{i}A_{i}.$
###### Proof.
We directly verify the result for $n\leq 3$, where $A_{0}=A_{1}=1$.
For general $n\geq 4$, consider
$[\sigma]\in\operatorname{Av}_{n}[\overline{23}14]$. Without loss of
generality suppose that $\sigma_{n}=1$, and say $n$ is at index $k$ for $1\leq
k\leq n-1$, i.e., $\sigma_{k}=n$. The linear permutation
$\sigma^{(n)}=\sigma_{k+1}\cdots\sigma_{n}\sigma_{1}\cdots\sigma_{k-1}$, i.e.,
the linear permutation obtained by rotating $\sigma$ so that $n$ is the last
element and then removing $n$, must be $\overline{23}1$-avoiding. We may
partition $\sigma^{(n)}$ into blocks of consecutive elements, where each block
is a maximal decreasing sequence, so that the transition from one block to the
next is an ascent. Claesson [8, Proposition 5] characterized the
$1\overline{32}$-avoiding permutations, where after partitioning a permutation
into blocks of maximal increasing sequences, the permutation avoids
$1\overline{32}$ if and only if the minima of the blocks are in decreasing
order. By reversing this characterization, we find the minimum elements of
each block of $\sigma^{(n)}$ must be in increasing order.
Similarly, the linear permutation $\sigma^{(1)}=\sigma_{1}\cdots\sigma_{n-1}$
must be $3\overline{12}$-avoiding, as any copy of $3\overline{12}$ in
$\sigma^{(1)}$ along with $\sigma_{n}=1$ would yield a copy of
$14\overline{23}$, or a copy of $[\overline{23}14]$ in $[\sigma]$. We
partition $\sigma^{(1)}$ into blocks of maximal decreasing consecutive
sequences. By complementing Claesson’s characterization [8] of
$1\overline{32}$-avoiding permutations, we find the maximum elements of each
block of $\sigma^{(1)}$ must be in increasing order.
Combining these two observations, we find that partitioning $\sigma$ into
blocks of maximal decreasing consecutive sequences, the maximum elements of
each block of $\sigma$ are in increasing order, the minimum elements of all
but the last block of $\sigma$ are in increasing order, and the last block
starts with $n$ and ends with 1. See Fig. 1 for a schematic diagram of this
characterization of $[\sigma]$.
Figure 1. Schematic diagram of the plot of a $[\overline{23}14]$-avoiding
cyclic permutation.
Our previous argument demonstrates that this characterization is a necessary
condition for $[\sigma]$ to be $[\overline{23}14]$-avoiding. We now show that
this characterization is sufficient. In order to have a copy of
$[\overline{23}14]$ in such a cyclic permutation $[\sigma]$, we must have a
cyclic ascent to play the role of $\overline{23}$, and in particular an ascent
only occurs in between two blocks. If the ascent is to the last block, it
ascends to $n$, which cannot play the role of 3, but rather must play the role
of 4; likewise, if the ascent is from the last block, it ascends from 1, which
cannot play the role of 2, but rather must play the role of 1. Otherwise the
ascent is between two blocks, neither of which is the last block. So our
ascent is from the minimum element $a$ of block $j$ to the maximum element $b$
of block $j+1$, for some $j$. In order to get an element smaller than $a$ to
act as our 1, clearly we must go at least to the last block, the block from
$n$ to 1. But then it is impossible to get an element smaller than $b$ to act
as our 4 before crossing $a$ again.
Thus, we now have a characterization of all $[\overline{23}14]$-avoiding
permutations, as depicted in Fig. 1. Suppose there are $i$ total elements in
the blocks excluding the last block from $n$ to $1$, so that there are $n-i$
elements in the last block. As the last block contains at least two elements,
namely $n$ and 1, we find $0\leq i\leq n-2$. There are $\binom{n-2}{i}$ ways
to pick the $i$ elements in these blocks, and the remaining $n-2-i$ elements
other than $n$ and 1 must be in decreasing order between $n$ and 1. Consider
the sequence $\sigma^{\prime}=\sigma_{1}\cdots\sigma_{i}$, i.e., the sequence
consisting of all but the last block of $\sigma$. The reduction of
$\sigma^{\prime}$ is a permutation on $[i]$, and it is easy to see that the
partitioning of $\sigma^{\prime}$ into its blocks, which satisfy the
increasing minima and maxima conditions, provides a one-to-one correspondence
to the strongly monotone partitions of $[i]$, where each block corresponds to
one of the sets in a partitioning of $[i]$. Hence there are $A_{i}$ ways to
arrange the $i$ elements before the block from $n$ to 1. Summing over all
$0\leq i\leq n-2$ thus yields
$\left|\operatorname{Av}_{n}[\overline{23}14]\right|=\sum_{i=0}^{n-2}\binom{n-2}{i}A_{i},$
as desired. ∎
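Theorem 5.12 can be checked by brute force for small $n$; the sketch below (helper names ours) combines the containment test, with the vinculum as the first two pattern entries, with a direct count of strongly monotone partitions:

```python
from itertools import combinations, permutations
from math import comb

def contains(p, pat):
    # True iff [p] contains the cyclic vincular pattern [pat]
    # (vinculum = first two entries of pat)
    n, m = len(p), len(pat)
    order = lambda s: tuple(sorted(s).index(v) for v in s)
    target = order(pat)
    for i in range(n):
        pair = (p[i], p[(i + 1) % n])
        tail = [p[(i + 1 + t) % n] for t in range(1, n - 1)]
        if any(order(pair + rest) == target
               for rest in combinations(tail, m - 2)):
            return True
    return False

def count_avoiders(n, pat):
    # one representative per cyclic class: fix the first entry to 1
    return sum(not contains((1,) + rest, pat)
               for rest in permutations(range(2, n + 1)))

def set_partitions(elems):
    # all set partitions of the list elems
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        yield [[first]] + part
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]

def strongly_monotone(part):
    blocks = sorted(part, key=min)
    maxima = [max(b) for b in blocks]
    return maxima == sorted(maxima)

def A(n):
    return sum(strongly_monotone(p)
               for p in set_partitions(list(range(1, n + 1))))

def rhs(n):
    # right-hand side of Theorem 5.12
    return sum(comb(n - 2, i) * A(i) for i in range(n - 1))
```

For $n=4,5,6$ both sides equal $5,14,42$.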
We leave the enumeration of (H), which currently does not appear in the OEIS
[25], as an open problem.
###### Remark 5.13.
Since the writing of the original version of this paper, the case of avoiding
$[\overline{23}41]$ has been dealt with by Mansour and Shattuck [19], where an
explicit formula for the ordinary generating function enumerating the class
has been found.
## 6\. Minimal unavoidable sets of totally vincular patterns
We call a vincular pattern of length $n$ with $n-1$ vincula _totally
vincular_. A totally vincular pattern corresponds to requiring that all $n$
elements are adjacent, and thus when considering whether a permutation avoids
a totally vincular pattern, we only consider consecutive blocks of $n$
elements. Let $\overline{S_{k}}$ denote the set of totally vincular patterns
of length $k$. For a permutation $\pi\in S_{k}$, let the corresponding totally
vincular pattern be denoted $\overline{\pi}\in\overline{S_{k}}$.
In this section we consider which sets $\Pi\subseteq\overline{S_{k}}$ cannot
be avoided by sufficiently long permutations, i.e.,
$\left|\operatorname{Av}_{n}[\Pi]\right|=0$ for all sufficiently large $n$. We
call such a set of patterns _unavoidable_ , and otherwise a set of patterns is
_avoidable_. We say a set of patterns $\Pi$ is a _minimal_ unavoidable set if
$\Pi$ is unavoidable, but any proper subset $\Pi^{\prime}\subset\Pi$ is
avoidable.
Very little is known about unavoidable pattern sets for ordinary permutations:
see, for example, Wilf [28, Section 10]. On the other hand, the unavoidability
of patterns is a well-studied and frequently-asked question for pattern
avoidance of words, where letters come from a fixed finite alphabet but are
allowed to be repeated; in the study of words, almost all pattern avoidance is
implicitly totally vincular.
Recall that Proposition 4.1 along with the enumeration in Section 3 implies
that the sets $\Pi$ consisting of the two totally vincular patterns of length
3 containing a 1 in the $i$th position for a fixed $i$, or the element-wise
complements of these sets, i.e., the sets consisting of all possible patterns
containing a 3 in the $i$th position, were minimal unavoidable sets. We show
that for general $k$, not just $k=3$, the same construction yields minimal
unavoidable sets.
In particular, let $\Pi_{i,k}$ denote the set of all totally vincular patterns
$\overline{\pi}\in\overline{S_{k}}$ where $\overline{\pi}_{i}=1$ for $1\leq
i\leq k$. This naturally means $\Pi_{i,k}^{c}$ is the set of all totally
vincular patterns $\overline{\pi}\in\overline{S_{k}}$ where
$\overline{\pi}_{i}=k$. We then have the following theorem.
###### Theorem 6.1.
For all positive integers $k$ and $i$ where $1\leq i\leq k$, the sets
$\Pi_{i,k}$ and $\Pi_{i,k}^{c}$ are minimal unavoidable sets.
###### Proof.
We prove the result only for $\Pi_{i,k}$, as the $\Pi_{i,k}^{c}$ case then
follows from trivial Wilf equivalence. Likewise, by reversing $\Pi_{i,k}$ as
necessary, we may assume $i\leq\frac{k+1}{2}$, as the other cases follow from
trivial Wilf equivalence.
The case $k=1$ is trivial: then $i=1$, every element of a permutation of
length $n\geq 1$ is order isomorphic to $1$, and $\Pi_{1,1}$ is a singleton
set, so any proper subset of $\Pi_{1,1}$ is empty and is avoided by every
permutation.
The case $k=2$ is also trivial, as any cyclic permutation of length $n\geq 2$
must contain at least one copy of $[\overline{12}]$, which is the sole element
of $\Pi_{1,2}$. The case $k=3$ follows from Proposition 4.1 and the
enumerations of $\operatorname{Av}_{n}[\overline{123}]$ and
$\operatorname{Av}_{n}[\overline{132}]$ from Section 3, where, in particular,
these contain $[\delta_{n}]$ and $[\iota_{n}]$, respectively, showing that
$\Pi_{i,k}$ is a minimal unavoidable set.
For general $k\geq 4$, we first show $\Pi_{i,k}$ is unavoidable for all $1\leq
i\leq\frac{k+1}{2}$, by showing no cyclic permutation of length at least $k$
avoids $\Pi_{i,k}$. For any arbitrary $[\sigma]\in[S_{n}]$ for $n\geq k$,
consider the consecutive subsequence of $k$ elements of $[\sigma]$ where the 1
is in position $i$ of the subsequence. This consecutive subsequence must be
order isomorphic to some element of $\Pi_{i,k}$, so no cyclic permutation of
length $n\geq k$ can avoid $\Pi_{i,k}$.
To prove minimality of $\Pi_{i,k}$, we split into three cases: first, $i=1$;
second, $1<i<\frac{k+1}{2}$; and third, $i=\frac{k+1}{2}$ for odd $k$.
Case 1. We first prove minimality for $i=1$. It suffices to show that
$\Pi_{1,k}\setminus\\{\overline{\pi}\\}$ for some
$\overline{\pi}\in\overline{S_{k}}$ with $\overline{\pi}_{1}=1$ is avoidable.
For $n\geq k$, consider $[\sigma]\in[S_{n}]$ given by
$\sigma=1,n-k+\overline{\pi}_{2},n-k+\overline{\pi}_{3},\dots,n-k+\overline{\pi}_{k},n-k+1,n-k,\dots,2.$
In other words, $\sigma$ consists of 1, followed by the permutation of
$[n-k+2,n]$ that is order isomorphic to
$\overline{\pi}_{2}\cdots\overline{\pi}_{k}$, followed by the elements of
$[2,n-k+1]$ in decreasing order. See Fig. 2 for a schematic diagram of this
construction of $\sigma$. We claim that $[\sigma]$ avoids
$\Pi_{1,k}\setminus\\{\overline{\pi}\\}$. We must show that for any cyclically
consecutive sequence of $k$ elements of $\sigma$ for which the first element
is the minimum element among these $k$ elements, this sequence is order
isomorphic to $\overline{\pi}$; otherwise, it would be order isomorphic to
some other pattern in $\overline{S_{k}}$ that begins with 1, which by
definition is an element of $\Pi_{1,k}\setminus\\{\overline{\pi}\\}$. As
$n\geq k$, we find the $k$ consecutive elements starting at $\sigma_{1}=1$ are
order isomorphic to $\overline{\pi}$. For any block of $k$ consecutive
elements starting at $\sigma_{j}=n-k+\overline{\pi}_{j}$ for $2\leq j\leq k$,
we have $\sigma_{k+1}=n-k+1<n-k+\overline{\pi}_{j}$ is in this block of $k$
consecutive elements, so the first element $\sigma_{j}$ is not the minimum
element among these $k$ consecutive elements. For any block of $k$ consecutive
elements starting at index $j$ for $k+1\leq j\leq n$, i.e., starting in the
portion decreasing from $n-k+1$ to 2, the second element is one smaller than
the first element, so the first element is not the minimum element among these
$k$ elements. Hence, $[\sigma]$ avoids
$\Pi_{1,k}\setminus\\{\overline{\pi}\\}$, which is thus avoidable, implying
$\Pi_{1,k}$ is a minimal unavoidable set of patterns.
Figure 2. Schematic diagram for the plot of $\sigma$ in Case 1 when
$\overline{\pi}=\overline{12534}$.
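The Case 1 construction is easy to verify computationally; the following sketch (helper names ours) builds $\sigma$ and checks that every cyclic window of length $k$ whose first entry is its minimum is order isomorphic to $\overline{\pi}$, so that $[\sigma]$ avoids every other pattern of $\Pi_{1,k}$:

```python
def windows(p, k):
    # all cyclically consecutive k-element windows of p
    n = len(p)
    return [tuple(p[(i + t) % n] for t in range(k)) for i in range(n)]

def pattern(w):
    # order-isomorphism type of w, written with values 1..k
    return tuple(sorted(w).index(v) + 1 for v in w)

def case1_sigma(n, pi):
    # Case 1 construction: 1, then the order-isomorphic copy of
    # pi_2 ... pi_k on [n-k+2, n], then n-k+1, n-k, ..., 2
    k = len(pi)
    return (1,) + tuple(n - k + v for v in pi[1:]) + tuple(range(n - k + 1, 1, -1))
```

For $\overline{\pi}=\overline{12534}$ (as in Fig. 2) and $n=9$, the construction gives $\sigma=1,6,9,7,8,5,4,3,2$, and the window beginning at $1$ is the only one whose first entry is its minimum.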
Case 2. Next, we prove minimality for $1<i<\frac{k+1}{2}$. It suffices to show
that $\Pi_{i,k}\setminus\\{\overline{\pi}\\}$ for some
$\overline{\pi}\in\overline{S_{k}}$ with $\overline{\pi}_{i}=1$ is avoidable.
Express $\overline{\pi}$ in the form $\overline{\pi}=\overline{\rho 1\tau}$,
where $|\rho|=i-1<k-i=|\tau|$ since $i<\frac{k+1}{2}$. Consider the
permutation $\sigma\in S_{n}$ where we take
$\overline{\pi}$ and map it to an order isomorphic permutation of
$\{1\}\cup[n-k+2,n]$, and then append $[2,n-k+1]$ in decreasing order. In
other words,
$\sigma=n-k+\overline{\pi}_{1},\dots,n-k+\overline{\pi}_{i-1},1,n-k+\overline{\pi}_{i+1},\dots,n-k+\overline{\pi}_{k},n-k+1,n-k,\dots,2.$
See Fig. 3 for a schematic diagram of this construction of $\sigma$. We claim
$[\sigma]$ avoids $\Pi_{i,k}\setminus\{\overline{\pi}\}$. Consider a
consecutive subsequence of $k$ elements of $[\sigma]$. If the $i$th element is
1, then the subsequence is order isomorphic to $\overline{\pi}$, by
construction. It suffices to show if the $i$th element is not 1, then the
$i$th element is not the minimum element in this consecutive subsequence, as
then this subsequence cannot be order isomorphic to a pattern in
$\Pi_{i,k}\setminus\{\overline{\pi}\}$. We now split into cases depending on
the value of the $i$th element.
1. (1)
If the $i$th element is in $[n-k+2,n]$, then it is either in the $\rho$ or
$\tau$ portion of the permutation of $\{1\}\cup[n-k+2,n]$ order isomorphic
to $\overline{\pi}$. If it is in the $\tau$ portion, then this subsequence
contains an element in $[2,n-k+1]$, which is smaller than the $i$th element.
If it is in the $\rho$ portion, as $|\rho|<|\tau|$, the subsequence contains
the next $|\tau|$ elements, which must include the 1, which is smaller than
the $i$th element.
2. (2)
If the $i$th element is in $[3,n-k+1]$, then the element of index $i+1$, which
is part of the subsequence as $i<\frac{k+1}{2}\leq k$, is one less than it.
3. (3)
If the $i$th element is 2, then the next $|\tau|>|\rho|$ elements are also in
this subsequence, and namely this includes the 1, which is $|\rho|+1$ steps
after the 2.
Figure 3. Schematic diagram for the plot of $\sigma$ in Case 2 when
$\overline{\pi}=\overline{51342}$.
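The same mechanical check works for Case 2; this sketch (ours) uses $\overline{\pi}=\overline{51342}$ from Figure 3, for which $i=2$:

```python
def pattern(seq):
    """Order-isomorphism type of seq: the rank of each entry (1-based)."""
    return tuple(sorted(seq).index(x) + 1 for x in seq)

def case2_sigma(pi, n):
    """Case 2 construction: map pi to an order-isomorphic permutation of
    {1} union [n-k+2, n], then append [2, n-k+1] in decreasing order."""
    k = len(pi)
    head = [1 if p == 1 else n - k + p for p in pi]
    return head + list(range(n - k + 1, 1, -1))

pi = (5, 1, 3, 4, 2)                      # the pattern 51342 of Figure 3
k, n = len(pi), 9
i = pi.index(1) + 1                       # i = 2 < (k+1)/2
sigma = case2_sigma(pi, n)                # [9, 1, 7, 8, 6, 5, 4, 3, 2]
assert sorted(sigma) == list(range(1, n + 1))

# A window can only realize a pattern of Pi_{i,k} if its i-th entry is the
# window minimum; every such window must be order isomorphic to pi.
for s in range(n):
    window = [sigma[(s + j) % n] for j in range(k)]
    if window[i - 1] == min(window):
        assert pattern(window) == pi
```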
Case 3. Third and finally, we prove minimality for $i=\frac{k+1}{2}$ when $k$
is odd. It suffices to show that $\Pi_{i,k}\setminus\{\overline{\pi}\}$ for
some $\overline{\pi}\in\overline{S_{k}}$ with $\overline{\pi}_{i}=1$ is
avoidable. Express $\overline{\pi}$ in the form $\overline{\pi}=\overline{\rho
1\tau}$ for $|\rho|=|\tau|$. We will prove the result when 2 is an element in
$\rho$, as then by reversing, we prove the result for when 2 is an element in
$\tau$. Consider the permutation $\sigma$ where we take $\overline{\pi}$ and
map it to an order isomorphic permutation of $[1,2]\cup[n-k+3,n]$ and then
append $[3,n-k+2]$ in decreasing order. See Fig. 4 for a schematic diagram of
this construction.
Figure 4. Schematic diagram for the plot of $\sigma$ in Case 3 when
$\overline{\pi}=\overline{7251364}$.
We claim $[\sigma]$ avoids $\Pi_{i,k}\setminus\{\overline{\pi}\}$. Consider
a consecutive subsequence of $k$ elements of $[\sigma]$. If the $i$th element
is 1, then the subsequence is order isomorphic to $\overline{\pi}$, by
construction. It suffices to show if the $i$th element is not 1, then the
$i$th element is not the minimum element in this consecutive subsequence. We
now split into cases depending on the value of the $i$th element.
1. (1)
If the $i$th element is in $[n-k+3,n]$, then it is either in the $\rho$ or
$\tau$ portion of the permutation of $[1,2]\cup[n-k+3,n]$ order isomorphic to
$\overline{\pi}$. If it is in the $\tau$ portion, then this subsequence
contains an element in $[3,n-k+2]$, which is smaller than the $i$th element.
If it is in the $\rho$ portion, as $|\rho|=|\tau|$, the subsequence contains
the next $|\tau|$ elements, which must include the 1, which is smaller than
the $i$th element.
2. (2)
If the $i$th element is in $[4,n-k+2]$, then the next element is one smaller
than it, so it is not the minimum element in this subsequence.
3. (3)
If the $i$th element is 3, then this block contains the 2 as the 2 is in the
$|\rho|=|\tau|$ elements before the 1 and after the 3, so is not the minimum
element.
4. (4)
If the $i$th element is 2, as the 2 is in the $\rho$ portion of the
permutation on the set $[1,2]\cup[n-k+3,n]$ order isomorphic to
$\overline{\pi}$, the subsequence contains the next $|\tau|=|\rho|$ elements,
which must include the 1, so 2 is not the minimum.
Hence, all $\Pi_{i,k}$ for $1\leq i\leq k$ are minimal unavoidable sets. ∎
These sets $\Pi_{i,k}$ and $\Pi_{i,k}^{c}$ when $k=3$ correspond to the sets
from Proposition 4.1. It is easy to see from our analysis in Sections 3 and 4
that these are the only minimal unavoidable subsets of $\overline{S_{3}}$.
For $k\geq 2$, viewing the power set of $\overline{S_{k}}$ as a Boolean
lattice ordered by inclusion, it is clear that any chain can contain at most
one minimal unavoidable set, as unavoidability is a monotone property. Thus
the number of minimal unavoidable sets is bounded above by the size of the
maximum antichain of this lattice, which is $\binom{k!}{k!/2}$. This yields
the following proposition.
###### Proposition 6.2.
For $k\geq 2$, the number of minimal unavoidable subsets of $\overline{S_{k}}$
is bounded above by $\binom{k!}{k!/2}$.
We conjecture that the sets from Theorem 6.1, which have cardinality $(k-1)!$,
are the smallest unavoidable subsets of $\overline{S_{k}}$. Intuitively, this
states that these sets are the “most efficient” at preventing permutations
from avoiding them.
###### Conjecture 6.3.
For all $k\geq 1$, the minimum cardinality of an unavoidable subset of
$\overline{S_{k}}$ is $(k-1)!$.
## 7\. Maximum avoidable sets of totally vincular patterns
The dual notion for a minimal unavoidable set $\Pi\subseteq\overline{S_{k}}$
is a _maximal avoidable_ set of patterns, an avoidable subset of
$\overline{S_{k}}$ that, when any other vincular pattern
$\overline{\pi}\in\overline{S_{k}}$ is added, is no longer avoidable.
In this section, we determine the size of a _maximum avoidable_ set
$\Pi\subseteq\overline{S_{k}}$, an avoidable subset of $\overline{S_{k}}$
whose cardinality is greater than or equal to any other avoidable subset of
$\overline{S_{k}}$. Clearly, all maximum avoidable sets are maximal avoidable
sets.
###### Theorem 7.1.
The maximum cardinality of an avoidable subset of $\overline{S_{k}}$ is
$k!-k$.
###### Proof.
We first show $k!-k$ is an upper bound on the cardinality of an avoidable set.
Suppose $\Pi\subseteq\overline{S_{k}}$ has cardinality $|\Pi|\geq k!-k+1$.
Partition $\overline{S_{k}}$ into $k$ sets depending on the index of the 1; in
other words, using the notation from Section 6, partition $\overline{S_{k}}$
into
$\overline{S_{k}}=\bigcup_{i=1}^{k}\Pi_{i,k}.$
Each $\Pi_{i,k}$ has cardinality $(k-1)!$, and as $\Pi$ has cardinality
$|\Pi|\geq k((k-1)!-1)+1$, we have $\Pi$ contains $\Pi_{i,k}$ for some $1\leq
i\leq k$. This set is unavoidable by Theorem 6.1. Hence, any avoidable subset
has cardinality at most $k!-k$.
We now show $k!-k$ can be achieved as the cardinality of an avoidable set.
Consider the set
$\Pi=\overline{S_{k}}\setminus\{\overline{12\cdots k},\overline{23\cdots
k1},\overline{34\cdots k12},\dots,\overline{k12\cdots(k-1)}\}.$
In other words, to avoid $\Pi$, every consecutive subsequence of $k$ elements
must be order isomorphic to a rotation of $\iota_{k}=12\cdots k$. But observe
$\iota_{n}=12\cdots n$ possesses this property, so it avoids $\Pi$. Thus
$\Pi$, which has cardinality $k!-k$, is a maximum avoidable set contained in
$\overline{S_{k}}$. ∎
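The avoidance half of this proof can be confirmed directly: every cyclic $k$-window of the identity $\iota_{n}$ is order isomorphic to a rotation of $12\cdots k$. A small sketch (ours), here with $k=3$ and $n=8$:

```python
def pattern(seq):
    """Order-isomorphism type of seq: the rank of each entry (1-based)."""
    return tuple(sorted(seq).index(x) + 1 for x in seq)

def word_rotations(pi):
    """All rotations of pi read as a word, e.g. 123 -> 123, 231, 312."""
    k = len(pi)
    return {tuple(pi[(j + r) % k] for j in range(k)) for r in range(k)}

k, n = 3, 8
iota = list(range(1, n + 1))
allowed = word_rotations(tuple(range(1, k + 1)))

# Each cyclic k-window of iota_n is order isomorphic to a rotation of
# 12...k, i.e. to one of the k patterns removed from S_k-bar to form Pi.
for s in range(n):
    window = [iota[(s + j) % n] for j in range(k)]
    assert pattern(window) in allowed
```

For instance, the wrap-around window $7,8,1$ has pattern $231$, one of the removed rotations.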
For $\pi\in S_{k}$, let $[\overline{\pi}]$ denote the set of totally vincular
patterns in $\overline{S_{k}}$ corresponding to the rotations of $\pi$, i.e.,
corresponding to $[\pi]$. We have the following proposition on maximum
avoidable subsets of $\overline{S_{k}}$.
###### Proposition 7.2.
For $\pi\in S_{k}$, the set $\Pi=\overline{S_{k}}\setminus[\overline{\pi}]$ is
a maximum avoidable set contained in $\overline{S_{k}}$.
###### Proof.
As $|\Pi|=k!-k$, by Theorem 7.1 it suffices to show that $\Pi$ is avoidable,
i.e., for arbitrarily large lengths $n$, we have
$\left|\operatorname{Av}_{n}[\Pi]\right|>0$. We will show this holds when $n$
is a multiple of $k$.
Suppose $n=mk$ for a positive integer $m$. Then consider the cyclic
permutation $[\sigma]\in[S_{n}]$ given by
$\sigma=m(\pi_{1}-1)+1,\dots,m(\pi_{k}-1)+1,\;m(\pi_{1}-1)+2,\dots,m(\pi_{k}-1)+2,\;\dots,\;m(\pi_{1}-1)+m,\dots,m(\pi_{k}-1)+m.$
See Fig. 5 for a schematic diagram of this construction of $\sigma$.
Figure 5. Schematic diagram for the plot of $\sigma$ when $\pi=1342$ and
$m=4$.
Essentially $\sigma$ consists of $\pi$ repeated $m$ times, vertically scaled
with some slight shifts in order to not repeat any elements. It is easy to see
that any consecutive subsequence of $k$ elements in $[\sigma]$ is order
isomorphic to some element of $[\overline{\pi}]$, and thus
$[\sigma]\in\operatorname{Av}_{n}[\Pi]$, so $\Pi$ is avoidable. ∎
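The construction can be checked mechanically; the sketch below (ours) reproduces the example of Figure 5, $\pi=1342$ with $m=4$, and confirms that every cyclic $k$-window of $\sigma$ realizes a rotation of $\pi$:

```python
def pattern(seq):
    """Order-isomorphism type of seq: the rank of each entry (1-based)."""
    return tuple(sorted(seq).index(x) + 1 for x in seq)

def word_rotations(pi):
    k = len(pi)
    return {tuple(pi[(j + r) % k] for j in range(k)) for r in range(k)}

def repeated_sigma(pi, m):
    """pi repeated m times, vertically scaled: the i-th copy of pi_j
    becomes m*(pi_j - 1) + i, so no element repeats."""
    return [m * (p - 1) + i for i in range(1, m + 1) for p in pi]

pi, m = (1, 3, 4, 2), 4                    # the example of Figure 5
k, n = len(pi), len(pi) * m
sigma = repeated_sigma(pi, m)
assert sorted(sigma) == list(range(1, n + 1))

# Every cyclic k-window is order isomorphic to some rotation of pi, so
# [sigma] avoids Pi = S_k-bar minus the rotations of pi.
allowed = word_rotations(pi)
for s in range(n):
    window = [sigma[(s + j) % n] for j in range(k)]
    assert pattern(window) in allowed
```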
We pose the following question.
###### Question 7.3.
Are there any other maximum avoidable subsets of $\overline{S_{k}}$, or are
all maximum avoidable sets $\Pi\subset\overline{S_{k}}$ of the form
$\Pi=\overline{S_{k}}\setminus[\overline{\pi}]$ for some $\pi\in S_{k}$?
## 8\. Conclusion and open questions
There are numerous avenues for further research regarding vincular pattern
avoidance of cyclic permutations. The enumeration corresponding to the last
three Wilf equivalence classes of pairs of length 3 totally vincular cyclic
patterns, listed in Section 4.1, is an open area for research. One may also
work on enumerating vincular cyclic patterns of length 4 with more than one
vinculum, as well as sets of these length 4 patterns. Non-vincular cyclic
patterns of length 5 have not been enumerated yet, so this is also an open
area of research, and a good first step before addressing vincular cyclic
patterns of length 5. Lastly, one could also consider enumerating avoidance
classes of sets of patterns that do not all have the same length.
Regarding unavoidable sets, a characterization of unavoidable sets or even
minimal unavoidable sets may be quite difficult, so a good first step would be
to understand minimum unavoidable sets. To this end, we leave Conjecture 6.3
as an open problem to determine the minimum cardinality of an unavoidable set
$\Pi\subseteq\overline{S_{k}}$. Finding better bounds on the number of minimal
unavoidable subsets of $\overline{S_{k}}$, i.e., improving Proposition 6.2,
could also be of interest. Similarly, it may be quite difficult to
characterize all the avoidable sets or even all the maximal avoidable sets,
though empirical data suggest there are significantly fewer maximal avoidable
sets than there are minimal unavoidable sets, and hence classifying maximal
avoidable sets may be more tractable. Characterizing the more restrictive
class of maximum avoidable subsets is thus a good first step towards better
understanding the avoidability of sets $\Pi\subseteq\overline{S_{k}}$, and we
leave Question 7.3 as an open area for research.
## Acknowledgements
We sincerely thank Amanda Burcroff and Benjamin Gunby for their input
throughout the research process. We also thank Daniel Zhu and Colin Defant for
helpful ideas and input. We would like to thank Prof. Joe Gallian for his
editing feedback and operation of the Duluth REU program. This research was
conducted at the University of Minnesota Duluth Mathematics REU and was
supported, in part, by NSF-DMS Grant 1949884 and NSA Grant H98230-20-1-0009.
Additional support was provided by the CYAN Mathematics Undergraduate
Activities Fund.
1 Technical University Munich, Munich, Germany; 2 Helmholtz AI, Neuherberg,
Germany; 3 Institute for Computational Biology, Helmholtz Zentrum Munich,
Germany; 4 Munich School of Data Science (MuDS), Munich, Germany;
5 ContextVision AB, Stockholm, Sweden
# Structure-Preserving Multi-Domain Stain Color Augmentation using Style-
Transfer with Disentangled Representations
Sophia J. Wagner1,2,4, Nadieh Khalili5, Raghav Sharma3, Melanie Boxberg1,4,
Carsten Marr3, Walter de Back5, Tingying Peng1,2,4
###### Abstract
In digital pathology, different staining procedures and scanners cause
substantial color variations in whole-slide images (WSIs), especially across
different laboratories. These color shifts result in a poor generalization of
deep learning-based methods from the training domain to external pathology
data. To increase test performance, stain normalization techniques are used to
reduce the variance between training and test domain. Alternatively, color
augmentation can be applied during training leading to a more robust model
without the extra step of color normalization at test time. We propose a novel
color augmentation technique, HistAuGAN, that can simulate a wide variety of
realistic histology stain colors, thus making neural networks stain-invariant
when applied during training. Based on a generative adversarial network (GAN)
for image-to-image translation, our model disentangles the content of the
image, i.e., the morphological tissue structure, from the stain color
attributes. It can be trained on multiple domains and, therefore, learns to
cover different stain colors as well as other domain-specific variations
introduced in the slide preparation and imaging process. We demonstrate that
HistAuGAN outperforms conventional color augmentation techniques on a
classification task on the publicly available dataset Camelyon17 and show that
it is able to mitigate present batch effects. Code and model weights are
available at https://github.com/sophiajw/HistAuGAN.
###### Keywords:
color augmentation · style-transfer · disentangled representations
## 1 Introduction
Modern cancer diagnosis relies on the expert analysis of tumor specimens and
biopsies. To highlight its structure and morphological properties,
conventionally, the tissue is stained with hematoxylin and eosin (H&E) [5].
However, the path from the raw tissue to the final digitized slide consists
of many different processing steps that can introduce variance, such as the
tissue fixation duration, the age and composition of the H&E stain, or
scanner settings. Therefore, histological images show a large variety of
colors, not only differing between laboratories but also within one laboratory
[3].
This variability can lead to poor generalization of algorithms that are
trained on WSIs from a single source. One strategy to account for this is
stain color normalization. Traditionally, this is either done by aligning the
color distribution of the test images to a reference tile in the training
domain [12] or by decomposing the color space of a reference tile into
hematoxylin and eosin components [10, 17]. Then, H&E components of the test
tiles can be aligned while keeping the structure intact.
Recently, the focus shifted toward the application of style-transfer methods
such as cycle-consistent generative adversarial networks, CycleGAN [19], for
stain normalization [16]. However, these models aim to match the target
distribution possibly leading to undesired changes in the morphological
structure [6]. To circumvent this, other approaches propose color space
transformations [14], structural similarity loss functions [9], or residual
learning [4].
We propose a novel histological color transfer model, HistAuGAN, based on a
GAN architecture for image-to-image translation. In contrast to previous
approaches, HistAuGAN disentangles the content of a histological image, i.e.,
the morphological tissue structure, from the stain color attributes, hence
preserving the structure while altering the color. Therefore, HistAuGAN can be
used as a stain augmentation technique during training of a task-specific
convolutional neural network (CNN). We demonstrate that this helps to render
the trained network color-invariant and makes it transferable to external
datasets without an extra normalization step at test time. Applied as an
augmentation technique, HistAuGAN significantly outperforms other color
augmentation techniques on a binary tumor-classification task. Furthermore,
clustering results suggest that HistAuGAN can capture sources of domain shifts
beyond color variations, such as noise and artifacts introduced in the
staining or digitization process, e.g., image compression or blurring.
To the best of our knowledge, HistAuGAN is the first GAN-based color
augmentation technique that generates realistic histological color variations.
## 2 Method
### 2.1 Model architecture
Figure 1: We propose HistAuGAN for structure-preserving multi-domain stain
color augmentation. (a) Histological slides from different laboratories
(domains) exhibit color variations. (b) Model architecture. Here, the domain
information flow is visualized by colored arrows. (c) At inference, HistAuGAN
can be used as an augmentation technique by sampling attribute $z_{a}$ and
domain $d$.
We build our model based on a multi-domain GAN using disentangled
representations, inspired by DRIT++ [8]. Originally designed for image-to-
image translation of natural images using a predefined style, we propose its
application on histological images to disentangle the morphological tissue
structure from the visual appearance. In contrast to previous CycleGAN-based
color normalization methods that use only a single encoder, HistAuGAN is able
to separate two essential image properties from each other as visualized in
Figure 1b: the domain-invariant content encoder $E_{c}$ encodes the
histopathological structure of the tissue, e.g., size and position of the
nuclei, whereas the domain-specific attribute encoder $E_{a}$ learns the
domain-specific color appearance. The model can be trained on data from
multiple domains and thereby captures both inter-laboratory variability
between multiple domains and intra-laboratory variability within each domain
at the same time. Finally, the generator $G$ takes as input a content vector
$z_{c}$, an attribute vector $z_{a}$, and the one-hot-encoded domain vector
$d$ and outputs a simulated histological image. The objective function is
given by
$L_{total}=w_{cc}L^{cc}+w_{c}L^{c}+w_{d}L^{d}+w_{recon}L^{recon}+w_{latent}L^{latent}+w_{KL}L^{KL},$
(1)
where $L^{cc}$ is the cycle-consistency loss, $L^{c}$ and $L^{d}$ are
adversarial losses for the content and the attribute encoder, $L^{recon}$ is
an $L_{1}$-loss for image reconstruction, $L^{latent}$ is an $L_{1}$-loss for
latent space reconstruction, and $L^{KL}$ enforces the latent attribute space
to be distributed according to the standard normal distribution. Please refer
to [8] for a detailed explanation of each loss and the precise hyperparameter
setting.
Figure 2: Overview of the color variation in the dataset and the augmentation
techniques used in this paper using the framed image as example tile.
At inference, using the fixed content encoding of the input image $z_{c}$, we
can sample the attribute vector $z_{a}$ and the one-hot encoded domain vector
$d$ as visualized in Figure 1c. Hence, we can map one image to many different
structure-preserving augmentations. More specifically, we sample a random
color attribute $z_{a}$ from a normal distribution that parametrizes the stain
color variabilities in one domain. Figure 2b shows randomly sampled outcomes
of intra-domain augmentations. Additionally, we can change the one-hot-encoded
domain vector $d$ to project the input image into multiple target domains as
visualized in Figure 2c. In addition to sampling from the training domains, we
can also interpolate between these domains to obtain an even broader variety
of realistic color appearances for histopathological images. Figure 2d
demonstrates this by linearly interpolating the domain from domain RUMC to
domain UMCU according to
$d=(1-t)\cdot d_{\mathrm{RUMC}}+t\cdot d_{\mathrm{UMCU}},\quad\mathrm{for}\
t\in[0,1].$ (2)
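The sampling step at inference is lightweight; the following sketch (plain Python; variable names and the domain ordering are our illustrative assumptions, not taken from the released code) mimics drawing $z_{a}$ and $d$ for the generator:

```python
import random

N_DOMAINS, ATTR_DIM = 5, 8     # five Camelyon17 centers; z_a in R^8

def sample_augmentation(t=None):
    """Sample (z_a, d) for the generator G(z_c, z_a, d).

    z_a ~ N(0, I) models intra-domain stain color variability.  d is a
    one-hot training domain or, if t in [0, 1] is given, the linear
    interpolation (1 - t) * d_RUMC + t * d_UMCU of Eq. (2).  Which index
    denotes which center is an assumption made for this sketch.
    """
    z_a = [random.gauss(0.0, 1.0) for _ in range(ATTR_DIM)]
    if t is None:                              # discrete target domain
        d = [0.0] * N_DOMAINS
        d[random.randrange(N_DOMAINS)] = 1.0
    else:                                      # interpolate RUMC -> UMCU
        d_rumc = [1.0, 0.0, 0.0, 0.0, 0.0]
        d_umcu = [0.0, 0.0, 1.0, 0.0, 0.0]
        d = [(1 - t) * a + t * b for a, b in zip(d_rumc, d_umcu)]
    return z_a, d

z_a, d = sample_augmentation(t=0.25)           # interpolated domain vector
assert abs(sum(d) - 1.0) < 1e-9 and len(z_a) == ATTR_DIM
```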
### 2.2 Competing methods for stain color augmentation
Most existing stain color transfer methods are used for stain normalization,
i.e., to transfer the stain color of the test domain to that of the training
domain. Recently, it has been shown that simple stain color augmentations,
such as perturbing the HSV color space of the histological images, perform
better and lead to more robust models than traditional and network-based
normalization techniques [15]. Therefore, we compare our HistAuGAN to the HSV
augmentations used in [15]. Besides HSV augmentation, there is a more
elaborate augmentation technique based on the Wasserstein distance between
different domains [11], but it is much slower than HSV and HistAuGAN and
therefore difficult to use as an on-the-fly augmentation technique.
For a quantitative evaluation of our augmentation technique, we consider the
following augmentation methods:
* •
Geometric augmentations: vertical and horizontal flipping, as well as
$90^{\circ}$, $180^{\circ}$, and $270^{\circ}$ rotations.
* •
HSV color augmentations: geometric augmentations with Gaussian blur and
contrast and brightness perturbations applied with probability 0.25 and 0.5,
respectively. We tried both light and strong color augmentations, as suggested
in [15]. Strong color augmentations can generate unrealistic color
appearances. However, applying hue and saturation jittering with factor 0.5
and probability 0.5, which results in relatively strong color perturbations
as shown in Figure 2e, performed best for us.
* •
HistAuGAN: geometric augmentations combined with our augmentation technique
applied to half of the images during training. For each image, we randomly
pick a target domain from the training domains and sample a color attribute
vector $z_{a}\in\mathbb{R}^{8}$ from the standard normal distribution.
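As a rough illustration of the hue/saturation jittering described above, the following standard-library sketch applies one random hue shift and saturation scaling per tile (our simplified stand-in for the implementation of [15]; in practice this is typically done with an image-library color-jitter transform):

```python
import colorsys
import random

def hsv_jitter(rgb_pixels, jitter=0.5):
    """Jitter hue and saturation of a tile, leaving structure untouched.

    rgb_pixels: iterable of (r, g, b) floats in [0, 1].  One hue shift and
    one saturation factor (strength 0.5, cf. Section 2.2) are drawn per
    tile and applied to all pixels; the value channel is kept as-is.
    """
    dh = random.uniform(-jitter, jitter)          # hue shift, wraps around
    ds = random.uniform(1 - jitter, 1 + jitter)   # saturation scale
    out = []
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        h = (h + dh) % 1.0
        s = min(max(s * ds, 0.0), 1.0)
        out.append(colorsys.hsv_to_rgb(h, s, v))
    return out

random.seed(0)
tile = [(0.8, 0.4, 0.6), (0.7, 0.3, 0.5)]         # two H&E-like pixels
augmented = hsv_jitter(tile)
# Brightness (the V channel, i.e. the max RGB component) is preserved:
assert all(abs(max(a) - max(p)) < 1e-9 for a, p in zip(augmented, tile))
```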
### 2.3 Evaluation
We evaluate HistAuGAN on three different aspects, in particular, i) whether it
can remove batch effects present in histological images collected from
multiple medical laboratories, ii) how it affects the out-of-domain
generalization of a deep learning model trained for a specific down-stream
task, and iii) how HistAuGAN preserves morphological structure during
augmentation. For ii), we choose a binary classification task of classifying
WSI tiles into the classes tumor versus non-tumor as described in more detail
in Section 3.3. Question iii) is evaluated by asking a pathology expert to
check image similarity before and after augmentation. To explore how
generalizable our model is, we extend the HistAuGAN training data (lymph
nodes) by tiles from unseen tissue and tumor types, in particular, breast
tissue [13].
## 3 Results and Discussion
### 3.1 Dataset
For the quantitative evaluation of HistAuGAN, we choose the publicly available
Camelyon17 dataset [1] that provides WSIs from five different medical centers
(denoted by RUMC, CWZ, UMCU, RST, and LPON) with different scanning properties
and stain colors as shown in Figure 2a. Pixel-wise annotations are given for
50 WSIs in total, 10 from each medical center. To create the training patches,
we first threshold the images with naive RGB thresholding combined with Otsu
thresholding and then patch the tissue regions of each WSI at the highest
resolution based on a grid into tiles of size $512\times 512$ pixels. Each
tile is labeled as tumor if the ratio of pixels annotated as tumor pixels is
larger than 1%, otherwise, it is labeled as non-tumor. The tiled dataset has
an imbalanced class distribution, i.e., overall, 7% of the tiles are labeled
as tumor and the ratio of tumor tiles is in the same order of magnitude across
all medical centers.
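The tile-labeling rule is a simple threshold on the annotated tumor-pixel ratio; a minimal sketch (ours, with a flat 0/1 mask standing in for the pixel-wise annotations):

```python
def label_tile(tumor_mask, threshold=0.01):
    """Label a tile 'tumor' if more than 1% of its pixels are annotated
    as tumor (Section 3.1).  tumor_mask is a flat sequence of 0/1 pixel
    annotations, e.g. of length 512 * 512 for a full tile."""
    ratio = sum(tumor_mask) / len(tumor_mask)
    return "tumor" if ratio > threshold else "non-tumor"

# 3% annotated tumor pixels -> tumor; exactly 1% is still non-tumor.
assert label_tile([1] * 30 + [0] * 970) == "tumor"
assert label_tile([1] * 10 + [0] * 990) == "non-tumor"
```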
### 3.2 Evaluation of batch-effect removal
Figure 3: Effect of color augmentation on batch effects in color statistics.
(a-d) UMAP embeddings of color statistics of training data, color-coded by
source domains. (e) The quantification of mixing based on mean local diversity
(mLD, higher is better) suggests HistAuGAN effectively mitigates batch
effects.
To evaluate how color augmentation mitigates batch effects, we quantify the
mixing of images from different medical centers with respect to their color
statistics. A random set of 1,000 image tiles was extracted from the WSIs
from each center and analyzed in terms of the average values of each component
after transformation to various color spaces (RGB, HSV, LAB, HED, grayscale).
To visually observe batch effects, we reduced the dimensionality to 2D using
UMAP [2] and labeled points according to their domain as shown in Figure 3a-d.
To quantify the mixing of different domains, we measured the mean over the
local diversity (mLD) for all $k$-nearest neighborhoods ($k=10$) in the 2D
projection using Shannon’s equitability which varies between 0 for non-mixed
and 1 for perfectly mixed populations (cf. Figure 3e).
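The mLD score can be implemented directly from this description; below is a brute-force sketch (ours, standard library only) that computes Shannon's equitability over the $k$-nearest neighborhood of each embedded point:

```python
import math

def mean_local_diversity(points, labels, k=10):
    """Mean local diversity over k-nearest neighborhoods in 2D.

    For each point, take its k nearest neighbors (excluding itself),
    compute the Shannon entropy H of their label distribution, and
    normalize by log(S), where S is the total number of distinct labels
    (Shannon's equitability).  Returns the mean: 0 for fully separated
    domains, 1 for perfectly mixed ones.  Assumes S >= 2.
    """
    S = len(set(labels))
    total = 0.0
    for i, (x, y) in enumerate(points):
        order = sorted(range(len(points)),
                       key=lambda j: (points[j][0] - x) ** 2
                                     + (points[j][1] - y) ** 2)
        neigh = [labels[j] for j in order[1:k + 1]]
        H = -sum((neigh.count(lab) / len(neigh))
                 * math.log(neigh.count(lab) / len(neigh))
                 for lab in set(neigh))
        total += H / math.log(S)
    return total / len(points)

# Two well-separated domain clusters give no mixing at all:
pts = [(i, 0) for i in range(20)] + [(i, 100) for i in range(20)]
labs = ["A"] * 20 + ["B"] * 20
assert mean_local_diversity(pts, labs, k=10) == 0.0
```

The brute-force neighbor search is quadratic; for the 5,000 tiles analyzed here, a k-d tree or a library nearest-neighbor routine would be the practical choice.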
Without color augmentation, we observe a clear batch effect: tiles from
different domains form distinct clusters ($\mathrm{mLD}=0.2$, Figure 3a). HSV
augmentations improve data mixing, but domain-correlated clusters are still
visible ($\mathrm{mLD}=0.48$, Figure 3b) and single domains, e.g. LPON, are
not mixed with other domains. In contrast, HistAuGAN mixes data from multiple
domains (Figure 3c,d) with a high local diversity ($\mathrm{mLD}=0.85$). If
HistAuGAN is used to transfer colors to discrete domains, the distinct domain
clusters are retained, but each cluster contains well-mixed image samples
transferred from all domains (Figure 3c). When HistAuGAN is used to randomly
interpolate between domains, a continuous well-mixed color subspace is
obtained without any clustering structure (Figure 3d).
These results show that HistAuGAN is highly effective in removing batch
effects present in color statistics of images sampled from different medical
centers.
### 3.3 Evaluation on a down-stream classification task
Figure 4: Precision-recall AUC (left) and F1-score (right) of our binary
classification task. The bold bars depict the results on the out-of-domain
centers averaged across all runs. The right-most, pale bars denote the
in-domain test performance of the classifiers trained with geometric
augmentations.
To evaluate the effect of our proposed augmentation method, we train a CNN on
a binary tumor classification task and compare the performance on different
out-of-domain test sets based on the Camelyon17 dataset. Due to the relatively
small size of our dataset, in particular the small number of tumor tiles, we
choose a small CNN, namely, a pre-trained ResNet18 [7], and fine-tune the last
two ResNet-blocks together with the fully-connected layer on our dataset. For
training, we use weighted cross-entropy-loss to rebalance the contribution of
each class, with a learning rate of 1e-5 and an $L_{2}$-regularization of 1e-5
across all runs and for all augmentation techniques. Furthermore, we used
random erasing as regularization on all augmentation techniques [18]. Since
our dataset is highly imbalanced, we report the F1-score of the tumor class in
addition to the area under the precision-recall curve (PR-AUC).
Figure 4 shows the results of the quantitative evaluation of different
augmentation techniques on the binary tumor-classification task. For each
medical center, we trained three classifiers, one for each augmentation type,
and aggregated the results evaluated on the test domains. All experiments were
repeated three times. On both metrics, HistAuGAN shows better performance on
all of the out-of-domain test sets. As visualized in Figure 2, the appearance
of images from medical centers UMCU and LPON deviates strongly from the other
centers, explaining their lower scores. In comparison to HSV color
augmentation, HistAuGAN performs better in handling the stain color
discrepancy between training and test domain and is therefore able to generate
a more robust classification model that generalizes better to out-of-domain
test sets. This can also be measured in the standard deviation of the results across the out-of-domain test centers. For our model, the standard deviation of the PR-AUC for the tumor class amounts to 0.08, whereas it is higher for geometric (0.22) and color (0.14) augmentations, respectively, which demonstrates that our model is more robust to underlying stain color
variations. The right-most group shows the in-domain test results for
geometric augmentations. It can be seen as an upper bound for any stain
normalization technique, and thus shows that HistAuGAN can even outperform
stain normalization techniques on some of the five domains.
### 3.4 Qualitative evaluation by an expert pathologist
We further check the quality of HistAuGAN by an expert pathologist on the
structural similarity of original and augmented WSI tiles from the training
set, i.e., the Camelyon17 dataset, and an unseen dataset of breast tissue
[13]. We define three levels of similarity: a) “High similarity”: a
pathologist would find it difficult to distinguish the original tile from the
augmented tile. b) “Moderate similarity”: some structural variations are
observed, but do not affect pathological diagnosis. c) “Low similarity”: the augmented tiles cannot be used for diagnostic purposes. As shown in Figure 5, most of the augmented images do not have a structural modification that
affects diagnosis and over half of them can even fool an expert pathologist.
It is worth mentioning that HistAuGAN is not trained on any of the breast
cancer images but is still able to transfer its color in a structure-
preserving manner as shown in Figure 6 on a sample tile.
Figure 5: Expert evaluation.
Tissue type | High | Moderate | Low | Total
---|---|---|---|---
Lymph nodes | 10 | 7 | 3 | 20
Breast | 14 | 4 | 2 | 20
Figure 6: HistAuGAN on unseen tissue.
## 4 Conclusion
In summary, we propose a novel GAN-based technique, HistAuGAN, for color
augmentation of histopathological images. Based on the disentangled
representations of content and style, HistAuGAN is able to change the color
appearance of a histological image while preserving its morphological
structure. Moreover, HistAuGAN captures both intra-domain and inter-domain
color variations. It is able to interpolate between domains and can therefore
span a continuous color space covering a large variety of realistic stain
colors. When applied as an augmentation technique, HistAuGAN yields a robust
down-stream classifier that generalizes better to out-of-domain test sets than
other color augmentation techniques and, therefore, renders additional stain
normalization steps unnecessary. Finally, HistAuGAN can mitigate batch effects
present in histopathological data which suggests that it is also able to cover
domain shifts beyond color variations, such as noise and artifacts introduced
in image compression. The code is publicly available at
https://github.com/sophiajw/HistAuGAN together with a model of HistAuGAN
trained on the five medical centers of the Camelyon17 dataset.
## References
* [1] Bandi, P., Geessink, O., Manson, Q., Van Dijk, M., Balkenhol, M., Hermsen, M., Ehteshami Bejnordi, B., Lee, B., Paeng, K., Zhong, A., Li, Q., Zanjani, F.G., Zinger, S., Fukuta, K., Komura, D., Ovtcharov, V., Cheng, S., Zeng, S., Thagaard, J., Dahl, A.B., Lin, H., Chen, H., Jacobsson, L., Hedlund, M., Cetin, M., Halici, E., Jackson, H., Chen, R., Both, F., Franke, J., Kusters-Vandevelde, H., Vreuls, W., Bult, P., van Ginneken, B., van der Laak, J., Litjens, G.: From detection of individual metastases to classification of lymph node status at the patient level: The CAMELYON17 challenge. IEEE Trans. Med. Imaging 38(2), 550–560 (Feb 2019)
* [2] Becht, E., McInnes, L., Healy, J., Dutertre, C.A., Kwok, I.W.H., Ng, L.G., Ginhoux, F., Newell, E.W.: Dimensionality reduction for visualizing single-cell data using UMAP. Nat. Biotechnol. (Dec 2018)
* [3] Bejnordi, B.E., Litjens, G., Timofeeva, N., Otte-Holler, I., Homeyer, A., Karssemeijer, N., van der Laak, J.A.: Stain specific standardization of Whole-Slide histopathological images (2016)
* [4] de Bel, T., Bokhorst, J.M., van der Laak, J., Litjens, G.: Residual cyclegan for robust domain transformation of histopathological tissue slides. Med. Image Anal. 70, 102004 (May 2021)
* [5] Chan, J.K.C.: The wonderful colors of the Hematoxylin–Eosin stain in diagnostic surgical pathology. Int. J. Surg. Pathol. 22(1), 12–32 (Feb 2014)
  * [6] Cohen, J.P., Luck, M., Honari, S.: Distribution matching losses can hallucinate features in medical image translation. In: Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G. (eds.) Medical Image Computing and Computer Assisted Intervention - MICCAI 2018 - 21st International Conference, Granada, Spain, September 16-20, 2018, Proceedings, Part I. Lecture Notes in Computer Science, vol. 11070, pp. 529–536. Springer (2018). https://doi.org/10.1007/978-3-030-00928-1_60
* [7] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (June 2016)
  * [8] Lee, H.Y., Tseng, H.Y., Mao, Q., Huang, J.B., Lu, Y.D., Singh, M., Yang, M.H.: DRIT++: Diverse Image-to-Image translation via disentangled representations (2020)
* [9] Liang, H., Plataniotis, K.N., Li, X.: Stain style transfer of histopathology images via Structure-Preserved generative learning. In: Machine Learning for Medical Image Reconstruction. pp. 153–162. Springer International Publishing (2020)
* [10] Macenko, M., Niethammer, M., Marron, J.S., Borland, D., Woosley, J.T., Guan, X., Schmitt, C., Thomas, N.E.: A method for normalizing histology slides for quantitative analysis. In: 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro. pp. 1107–1110. IEEE (2009)
* [11] Nadeem, S., Hollmann, T., Tannenbaum, A.: Multimarginal wasserstein barycenter for stain normalization and augmentation. Med. Image Comput. Comput. Assist. Interv. 12265, 362–371 (Oct 2020)
* [12] Reinhard, E., Adhikhmin, M., Gooch, B., Shirley, P.: Color transfer between images. IEEE Comput. Graph. Appl. 21(5), 34–41 (Jul 2001)
* [13] Roux, L.: Mitos-atypia-14 grand challenge. https://mitos-atypia-14.grand-challenge.org/, accessed: 2021-03-03
  * [14] Shaban, M.T., Baur, C., Navab, N., Albarqouni, S.: StainGAN: Stain style transfer for digital histological images (2019)
* [15] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.M., Ciompi, F., van der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Med. Image Anal. 58, 101544 (Dec 2019)
* [16] Tschuchnig, M.E., Oostingh, G.J., Gadermayr, M.: Generative adversarial networks in digital pathology: A survey on trends and future potential. Patterns (N Y) 1(6), 100089 (Sep 2020)
* [17] Vahadane, A., Peng, T., Sethi, A., Albarqouni, S., Wang, L., Baust, M., Steiger, K., Schlitter, A.M., Esposito, I., Navab, N.: Structure-Preserving color normalization and sparse stain separation for histological images. IEEE Trans. Med. Imaging 35(8), 1962–1971 (Aug 2016)
* [18] Zhong, Z., Zheng, L., Kang, G., Li, S., Yang, Y.: Random erasing data augmentation. Proceedings of the AAAI Conference on Artificial Intelligence 34(07), 13001–13008 (Apr 2020). https://doi.org/10.1609/aaai.v34i07.7000, https://ojs.aaai.org/index.php/AAAI/article/view/7000
* [19] Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE international conference on computer vision. pp. 2223–2232 (2017)
# Constraining spinning primordial black holes with global 21-cm signal
Pravin Kumar Natwariya ([email protected], [email protected]), Physical Research Laboratory, Theoretical Physics Division, Ahmedabad, Gujarat 380 009, India; Department of Physics, Indian Institute of Technology, Gandhinagar, Palaj, Gujarat 382 355, India
Alekha C. Nayak ([email protected]), National Institute of Technology, Meghalaya, Shillong, Meghalaya 793 003, India
Tripurari Srivastava ([email protected]), Department of Physics and Astrophysics, University of Delhi, Delhi 110007, India
###### Abstract
We study projected upper bounds on the fraction of dark matter in the form of primordial black holes (PBHs) with non-zero spin by using the absorption feature in the global 21-cm signal at redshift $z\approx 17$. The mass and spin are fundamental properties of a black hole, and they can substantially affect its evaporation rate. An evaporating black hole can inject energy into the intergalactic medium and heat the gas. Subsequently, it can modify the absorption amplitude in the global 21-cm signal. Therefore, the absorption feature in the 21-cm signal can provide a robust bound on PBHs. We analyse the projected constraints on the dark matter fraction in the form of both spinning and non-spinning PBHs. The constraints are more stringent for spinning PBHs than for non-spinning ones. We also compare these bounds with other observations and find the most stringent lower bound on the mass of PBHs that are allowed to constitute the entire dark matter: $6.7\times 10^{17}$ g for extremally spinning PBHs.
Primordial Black Hole, Dark Matter, 21-cm signal
## I Introduction
About 85 percent of the total matter content of the Universe is in the form of dark matter (DM) Planck Collaboration VI (2020). In the last decade, many DM models, such as collision-less cold DM Peebles (1982), fuzzy cold DM Hu _et al._ (2000), warm DM Dodelson and Widrow (1994); Boyarsky _et al._ (2019); Bulbul _et al._ (2014), and self-interacting DM Spergel and Steinhardt (2000); Natwariya _et al._ (2020), have been proposed to explain various astrophysical observations. However, the microscopic nature of dark matter is still unknown. One interesting, well-motivated proposal is that a fraction or all of the DM is in the form of PBHs (Carr and Kühnel (2020); Dasgupta _et al._ (2020); Frampton _et al._ (2010); Khlopov (2010); Belotsky _et al._ (2019) and references therein). Recently, PBHs have gathered much attention after the detection of black-hole binary mergers by the LIGO and Virgo collaborations. These events suggest that PBHs may constitute a fraction of DM Bird _et al._ (2016); Abbott et al. (2016a, b, 2017a, 2017b); Sasaki _et al._ (2016).
PBHs may have originated in the early Universe due to initial inhomogeneities
Zel’dovich and Novikov (1967); Hawking (1971); Carr and Hawking (1974); Carr
(1975), Higgs potential instability at a scale of $10^{11}$ GeV Espinosa _et
al._ (2018), hybrid inflation Frampton _et al._ (2010); Clesse and García-
Bellido (2015), etc. Depending on the formation time $t$, PBHs can have a wide range of masses, $M_{\rm PBH}\sim 10^{15}\,\big{[}t/(10^{-23}\,{\rm s})\big{]}$ g Carr _et al._ (2010). PBHs with masses larger than $10^{15}$ g can survive
the Hawking evaporation and account for present-day DM density Hawking (1975).
The presence of PBHs may be responsible for ultra-luminous X-ray sources, provide seeds for supermassive black holes at the centres of galaxies Clesse and García-Bellido (2015), and provide seeds for the present-day observed structures Clesse and García-Bellido (2015); García-Bellido (2017). There are
several hints that indicate the presence of PBHs, such as the dynamics and star clusters of ultra-faint dwarf galaxies, correlations between the X-ray and infrared cosmic backgrounds, etc. (for a detailed review, see Ref. Clesse and
García-Bellido (2018)). The presence of evaporating PBHs can explain the
galactic/extra-galactic $\gamma$-ray background radiation Wright (1996);
Lehoucq _et al._ (2009); Carr (1976); Page and Hawking (1976), short-duration
$\gamma$-ray bursts Cline _et al._ (1997); Green (2001), and reionization by
injection of $\gamma$ and $e^{\pm}$ radiations into Inter-Galactic-Medium
(IGM) Belotsky _et al._ (2014); Belotsky and Kirillov (2015).
During the cosmic dawn era, the evolution of the gas temperature and ionization fraction of the Universe is well known Seager _et al._ (1999, 2000). The addition of any exotic source of energy during the cosmic dawn era
can significantly impact the ionization and thermal history of the Universe.
Therefore, we can constrain the properties of such exotic sources from the
observations during the cosmic dawn era. Evaporating PBHs can heat the gas and
modify the free electron fraction in the IGM Laha _et al._ (2021); Kim
(2021). Rotating PBHs can emit more particles into the IGM and affect its evolution substantially compared to non-rotating PBHs Chandrasekhar and Detweiler (1977); Page and Hawking (1976); Page (1976a). Therefore, it is
important to study the properties of spinning PBHs. In the present work, we
consider the Hawking emission of PBHs into background radiations (photons and
electron/positron) and provide the projected constraints on the fraction of DM
in the form of PBHs, $f_{\rm PBH}=\Omega_{\rm PBH}/\Omega_{\rm DM}$, as a
function of mass and spin. Here, $\Omega_{\rm PBH}$ and $\Omega_{\rm DM}$ are
the dimensionless density parameters for PBHs and DM.
Recently, the EDGES experiment detected a large absorption signal in the 21-cm
line in the redshift range $15-20$ Bowman _et al._ (2018); Pritchard and Loeb
(2012). The 21-cm signal appears to be a treasure trove to provide constraints
on various cosmological phenomena such as the formation of the first stars,
galaxies or any exotic source of energy injection. The 21 cm line corresponds
to the wavelength of the hyperfine transition between the $1S$ singlet and
triplet states of the neutral hydrogen atom. The EDGES absorption signal is
nearly two times larger than the theoretical prediction based on the standard
model of cosmology ($\Lambda$CDM) at the redshift $z\approx 17$ Bowman _et
al._ (2018); Pritchard and Loeb (2012). In the $\Lambda$CDM model, the gas and
cosmic-microwave-background (CMB) temperatures vary adiabatically as $T_{\rm
gas}\propto(1+z)^{2}$ and $T_{\rm CMB}\propto(1+z)$ during the cosmic dawn
era. Subsequently, at $z=17$, one gets the gas and CMB temperatures to be
$\sim 6.7$ K and $\sim 49.1$ K, respectively, and it implies
$T_{21}\approx-220$ mK Seager _et al._ (1999, 2000). In contrast, the EDGES collaboration reported $T_{21}=-0.5_{-0.5}^{+0.2}$ K with 99% confidence intervals at a centre frequency of $78\pm 1$ MHz, corresponding to $z\simeq 17$ Bowman _et al._ (2018). To resolve this discrepancy, one has to either increase the background radio radiation or decrease the gas temperature. Both scenarios have been explored in a number of works Ewall-Wice _et al._ (2018); Jana
_et al._ (2018); Feng and Holder (2018); Lawson and Zhitnitsky (2019);
Natwariya (2021); Lawson and Zhitnitsky (2013); Levkov _et al._ (2020);
Natwariya and Bhatt (2020); Brandenberger _et al._ (2019); Chianese _et al._
(2019); Bhatt _et al._ (2020); Tashiro _et al._ (2014); Barkana (2018);
Sikivie (2019); Mirocha and Furlanetto (2019); Ghara and Mellema (2019).
However, the non-standard mechanisms that would increase the background radio radiation or cool the IGM gas are not well understood and remain debated Muñoz
and Loeb (2018); Sean Fraser et al. (2018); Bransden _et al._ (1958); Barkana
_et al._ (2018); Berlin _et al._ (2018); Kovetz _et al._ (2018); Muñoz _et
al._ (2018); Slatyer and Wu (2018); D’Amico _et al._ (2018); Mitridate and
Podo (2018). Therefore, we have not considered any methods to increase the
background radiation above the CMB or cooling of the IGM gas. In this work, we study projected bounds on spinning PBHs such that the 21-cm differential brightness temperature does not change by more than a factor of $1/4$ from the theoretical prediction based on the $\Lambda$CDM framework.
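The adiabatic scalings quoted above can be checked in a few lines. The normalization below (gas–CMB thermal decoupling near $z_{\rm dec}\approx 131$, chosen so that the quoted 6.7 K is reproduced) is an illustrative value inferred from the text, not the output of a recombination code:

```python
# Adiabatic temperature scalings during the dark ages:
# T_CMB ∝ (1+z) and, after thermal decoupling, T_gas ∝ (1+z)^2.
T_CMB0 = 2.725                # CMB temperature today, K
z = 17
T_cmb = T_CMB0 * (1 + z)
# Normalize T_gas so that T_gas = T_CMB at an (illustrative) thermal
# decoupling redshift z_dec ~ 131, which reproduces the quoted 6.7 K.
z_dec = 131
T_gas = T_CMB0 * (1 + z_dec) * ((1 + z) / (1 + z_dec)) ** 2
print(round(T_cmb, 1), "K,", round(T_gas, 1), "K")  # ~49.1 K and ~6.7 K
```

With these two temperatures, the standard prediction $T_{21}\approx-220$ mK at $z\approx 17$ follows from the brightness-temperature formula given in Section III.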
The fraction of DM in the form of PBHs is constrained from various
astrophysical observations and theoretical predictions. PBHs with masses smaller than $\sim\mathcal{O}(10^{15}~{}{\rm g})$ may have evaporated by now and can be constrained from the impact of the evaporated particles on big bang nucleosynthesis, the background radiation, etc. Higher-mass PBHs can be constrained by
the effect on large-scale structures, gravitational wave and lensing, and
impact on thermal and ionization history of the IGM (for details, see the
recent reviews Carr _et al._ (2021); Green and Kavanagh (2021); Carr and
Kühnel (2020) and references therein). In the context of the 21-cm signal, the
upper bound on the $f_{\rm PBH}$ can be found in Refs. Hektor _et al._
(2018); Clark _et al._ (2018); Mena _et al._ (2019); Yang (2020a); Halder
and Banerjee (2021); Tashiro and Kadota (2021); Yang (2020b); Villanueva-
Domingo and Ichiki (2021). Angular momentum is a fundamental property of a
black hole, and it can drastically modify Hawking evaporation. In the case of rotating PBHs, the authors of Refs. Dasgupta _et al._ (2020); Laha _et al._ (2021) have reported various bounds on $f_{\rm PBH}$ as a
function of PBH mass and spin. The future All-sky Medium Energy Gamma-ray Observatory (AMEGO)111https://asd.gsfc.nasa.gov/amego/index.html will be able to constrain part of the parameter space for rotating PBHs Ray _et al._ (2021).
## II Thermal History Of IGM
A rotating black hole with angular momentum $J_{\rm PBH}$ and mass $M_{\rm PBH}$ can be characterized by the dimensionless rotation parameter $a_{*}=J_{\rm PBH}/(G\,M_{\rm PBH}^{2})$ Page (1976a), where $G$ is the gravitational
constant. Black holes can acquire spin through their formation mechanism, mergers or accretion Kesden _et al._ (2010); Cotner and Kusenko (2017); Harada
_et al._ (2021); Luca _et al._ (2019, 2020); Harada _et al._ (2017); Kühnel
(2020); Flores and Kusenko (2021); Arbey _et al._ (2020a); He and Suyama
(2019); Cotner _et al._ (2019). PBHs with higher masses can have lifetimes larger than or comparable to the age of the Universe. Therefore, they have enough time to accrete mass and spin up Dong _et al._ (2016). A rotating black hole with higher spin ($a_{*}\rightarrow 1$) injects more energy into the IGM and evaporates faster than a non-rotating one Chandrasekhar and Detweiler (1977); Taylor _et al._ (1998); Arbey _et al._ (2020b, 2021). Therefore, we expect the bounds on $f_{\rm PBH}$ to be more stringent than for non-rotating PBHs. The energy injection per unit volume per unit time due to $e^{\pm}$ and
photons into IGM, for monochromatic mass distribution of PBHs, can be written
as Laha _et al._ (2021); Mittal _et al._ (2021),
$\displaystyle\Gamma_{\rm PBH}^{e^{\pm}}(z,a_{*})$
$\displaystyle=2\int\left[f_{c}^{e}(E-m_{e},z)\,(E-m_{e})\left(\frac{d^{2}N_{e}}{dt\,dE}\right)\,\right]\,n_{\rm
PBH}\,dE\,,$ (1) $\displaystyle\Gamma_{\rm PBH}^{\gamma}(z,a_{*})$
$\displaystyle=\int\left[\
f_{c}^{\gamma}(E,z)\,E\,\left(\frac{d^{2}N_{\gamma}}{dt\,dE}\right)\
\right]\,n_{\rm PBH}\ dE\,.$ (2)
Energy injection into IGM happens by three processes: heating, ionization, and
excitation of the gas Slatyer (2016a, b); Liu _et al._ (2020). $f_{c}^{i}$
represents the energy deposition efficiency into the IGM. Here, $c$ stands for the above-mentioned three channels and $i\equiv({\rm electron/positron,\,photon})$
stands for different types of injected particles. The factor of 2 in equation
(1) accounts for the total contribution of electrons and positrons. $n_{\rm
PBH}=f_{\rm PBH}\,(\rho_{\rm DM}/M_{\rm PBH})$ is the number density of the
PBHs, and $\rho_{\rm DM}$ is the dark matter energy density.
$d^{2}N^{i}/(dt\,dE)\equiv d^{2}N^{i}/(dt\,dE)\,\big{(}E,M_{\rm
PBH},a_{*}\big{)}$ represents the number of particles emitted by black hole
per unit time per unit energy Page (1976a); MacGibbon and Webber (1990); Laha
_et al._ (2021); Arbey and Auffinger (2019). We use the BlackHawk
code222https://blackhawk.hepforge.org/ to calculate the spectra due to photons
and electrons/positrons; we take both the primary and secondary Hawking
evaporation spectra into account Arbey and Auffinger (2019, 2021).
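For orientation, the PBH number density entering Eqs. (1) and (2) follows directly from $n_{\rm PBH}=f_{\rm PBH}\,\rho_{\rm DM}/M_{\rm PBH}$. The sketch below uses the cosmological parameters quoted later in the text, with $\Omega_{\rm DM}$ taken as $\Omega_{\rm M}-\Omega_{\rm b}$ and the standard critical density $\rho_{\rm crit}=1.878\times 10^{-29}\,h^{2}$ g/cm$^{3}$; it is an illustration, not the paper's pipeline:

```python
# PBH number density for a monochromatic mass function,
# n_PBH = f_PBH * rho_DM / M_PBH (cgs units throughout).
h = 0.674
Omega_M, Omega_b = 0.315, 0.049
rho_crit = 1.878e-29 * h**2               # critical density today, g/cm^3
rho_DM0 = (Omega_M - Omega_b) * rho_crit  # present-day DM mass density
f_PBH, M_PBH = 1e-7, 1e15                 # example values from Figure 1a
n_PBH0 = f_PBH * rho_DM0 / M_PBH          # per cm^3 today
z = 17.2
n_PBH = n_PBH0 * (1 + z)**3               # physical density at redshift z
print(f"{n_PBH:.2e} cm^-3")
```

Even at these tiny number densities, the integrated Hawking emission over cosmic time is enough to heat the IGM appreciably, which is what drives the bounds below.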
In the presence of Hawking radiation, the thermal evolution of the gas is given by Chluba _et al._ (2015); D’Amico _et al._ (2018),
$\displaystyle\frac{dT_{\rm gas}}{dz}=2\,\frac{T_{\rm
gas}}{1+z}+\frac{\Gamma_{c}}{(1+z)\,H}(T_{\rm gas}-T_{\rm CMB})-\frac{2\
\,\Gamma_{\rm PBH}\,}{3\,N_{\rm tot}(1+z)\,H}\,,$ (3)
here, $\Gamma_{\rm PBH}=\Gamma_{\rm PBH}^{e^{\pm}}+\Gamma_{\rm PBH}^{\gamma}$
is the total energy injection per unit time and per unit volume into IGM,
$N_{\rm tot}=N_{\rm H}\,(1+f_{\rm He}+X_{e})$ is the total number density of
the gas, $N_{\rm H}$ is the hydrogen number density, $f_{\rm He}=N_{\rm
He}/N_{\rm H}$, $N_{\rm He}$ is the helium number density, $X_{e}=N_{e}/N_{\rm
H}$ is the free electron fraction and $N_{e}$ is the free electron number
density. $\Gamma_{c}$ stands for the Compton scattering rate Schleicher _et
al._ (2008); Natwariya and Bhatt (2020). We consider the following numerical
values of the cosmological parameters: $h=0.674$, $\Omega_{\rm M}=0.315$,
$\Omega_{\rm b}=0.049$ and $T_{\rm CMB}|_{z=0}=2.725$ K Planck Collaboration
VI (2020); Fixsen (2009). To compute the energy deposition efficiency, thermal
and ionization history of the Universe, we use
DarkHistory333https://darkhistory.readthedocs.io/en/master/ package with
necessary modifications Liu _et al._ (2020).
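As a sanity check on Eq. (3), dropping the Compton and PBH terms leaves only the adiabatic term, $dT_{\rm gas}/dz=2\,T_{\rm gas}/(1+z)$, whose solution scales as $T_{\rm gas}\propto(1+z)^{2}$. A minimal Euler integration (illustrative, not the DarkHistory solver) reproduces this scaling:

```python
# Sanity check on Eq. (3): with the Compton and PBH terms dropped,
# dT_gas/dz = 2 T_gas / (1+z), whose solution scales as (1+z)^2.
z, T = 100.0, 1.0      # arbitrary normalization at the starting redshift
dz = -1e-3
while z > 17.0:        # integrate downwards in redshift with explicit Euler
    T += dz * 2.0 * T / (1.0 + z)
    z += dz
analytic = ((1.0 + 17.0) / (1.0 + 100.0)) ** 2  # T(17)/T(100) for T ∝ (1+z)^2
print(T, analytic)     # the two agree to better than 0.1%
```

The full calculation, of course, retains the Compton and PBH heating terms, which is why DarkHistory is used rather than this closed-form limit.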
(a)
(b)
Figure 1: The gas temperature evolution with redshift for evaporating primordial black holes. The red dashed lines represent the CMB temperature evolution. The black dashed lines depict $T_{\rm gas}$ when there are no PBHs. The shaded region corresponds to the redshift range $15\leq z\leq 20$ (EDGES observed signal). In plot (1a), we set the PBH mass and $f_{\rm PBH}$ to $1\times 10^{15}$ g and $10^{-7}$, respectively, and vary the spin of the PBHs. In plot (1b), we keep $M_{\rm PBH}=1\times 10^{15}$ g and $a_{*}=0.5$ constant and vary $f_{\rm PBH}$. Figure 2: The caption is the same as in Figure (1), except that here we vary the mass of the PBHs and keep the spin and $f_{\rm PBH}$ fixed to $0.5$ and $10^{-7}$, respectively.
## III Results and Discussion
Following the Refs. Mesinger and Furlanetto (2007); Mesinger _et al._ (2011);
Pritchard and Loeb (2012); Mittal and Kulkarni (2020), we write the global
21-cm differential brightness temperature as,
$\displaystyle T_{21}=27\,X_{\rm HI}\,\left(1-\frac{T_{\rm R}}{T_{\rm
S}}\right)\,\left(\frac{0.15}{\Omega_{\rm
m}}\,\frac{1+z}{10}\right)^{1/2}\left(\frac{\Omega_{\rm
b}h}{0.023}\right)~{}{\rm mK}\,,$ (4)
here, $X_{\rm HI}=N_{\rm HI}/N_{\rm H}$ is the fraction of neutral hydrogen in
the Universe, and $N_{\rm HI}$ is the neutral hydrogen number density. $T_{\rm
S}$ is the spin temperature, and it is characterized by the number density
ratio of $1S$ triplet and singlet hyperfine states of the neutral hydrogen
atom. In cosmological scenarios, there are mainly three processes that can
affect the spin temperature: background radio radiation, Ly$\alpha$ radiation
from the first stars and collisions of a hydrogen atom with another hydrogen
atom, residual electron or proton. In the detailed balance between the
population of $1S$ singlet and triplet state, one can write the spin
temperature as Field (1958); Pritchard and Loeb (2012),
$\displaystyle T_{\rm S}^{-1}=\frac{T_{\rm
R}^{-1}+x_{\alpha}\,T_{\alpha}^{-1}+x_{c}\,T_{\rm
gas}^{-1}}{1+x_{\alpha}+x_{c}}\,,$ (5)
here, $T_{\alpha}$ is the Ly$\alpha$ colour temperature. $x_{\alpha}$ is the
Ly$\alpha$ coupling coefficient due to Wouthuysen-Field effect Wouthuysen
(1952); Field (1958). $x_{c}$ is the collisional coupling coefficient due to
scattering between hydrogen atoms or scattering of hydrogen atoms with other
species such as electrons and protons (Pritchard and Loeb (2012) and
references therein). The colour temperature can be taken as gas temperature,
$T_{\alpha}\simeq T_{\rm gas}$, due to the repeated scattering between
Ly$\alpha$ photons and gas Field (1958, 1959); Pritchard and Loeb (2012).
After the formation of the first stars ($z\sim 30$), their Ly$\alpha$
radiation causes the hyperfine transition in the neutral hydrogen atom, and
the $x_{\alpha}$ starts dominating over other couplings Pritchard and Loeb
(2012). Therefore, at the redshift 17.2, spin temperature can be approximated
as $T_{\rm S}\simeq T_{\rm gas}$, when the background radiation temperature
$T_{\rm R}=T_{\rm CMB}$ Pritchard and Loeb (2012); Natwariya (2021). In the
standard cases, the background radiation contribution is assumed solely by CMB
radiation, $T_{\rm R}=T_{\rm CMB}$. In the present work, we do not consider
X-ray heating of the gas due to the uncertainty of the known physics of the
first stars. The inclusion of X-ray heating will further strengthen our
projected constraints. Here, it is to be noted that the gas temperature may
increase due to the energy transfer from the background radiation to the
thermal motions of the gas mediated by Ly$\alpha$ radiation from the first
stars Venumadhav _et al._ (2018). However, due to the uncertainty in the known physics of first star formation, we do not include this effect either; its inclusion would further strengthen our projected bounds on $f_{\rm PBH}$. Depending on the ratio $T_{\rm CMB}/T_{\rm S}$, there can be
three scenarios: absorption ($T_{\rm CMB}>T_{\rm S}$), emission ($T_{\rm
CMB}<T_{\rm S}$) or no signal ($T_{\rm CMB}=T_{\rm S}$). At redshift $17.2$,
to get $T_{21}\leq-150$ mK, we require $T_{\rm gas}\leq 9.62$ K. Here, $X_{\rm
HI}\simeq 1-X_{e}$, and in our case, at the required redshift, we get
$X_{e}\lesssim\mathcal{O}(10^{-3})$. Therefore, $X_{\rm HI}$ can be regarded
as unity.
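The quoted threshold can be verified directly from Eq. (4) with $X_{\rm HI}\simeq 1$, $T_{\rm S}\simeq T_{\rm gas}$ and $T_{\rm R}=T_{\rm CMB}$: solving $T_{21}=-150$ mK for $T_{\rm gas}$ at $z=17.2$ with the cosmological parameters given above (a quick numerical check, not part of the paper's code):

```python
# Invert Eq. (4) for T_gas at z = 17.2, with X_HI = 1, T_S = T_gas and
# T_R = T_CMB = 2.725 (1+z) K, to find the gas temperature that gives
# T21 = -150 mK.
h, Omega_m, Omega_b = 0.674, 0.315, 0.049
z = 17.2
T_cmb = 2.725 * (1 + z)
prefac = 27.0 * ((0.15 / Omega_m) * (1 + z) / 10.0) ** 0.5 * (Omega_b * h / 0.023)
T21 = -150.0                          # target brightness temperature, mK
T_gas = T_cmb / (1.0 - T21 / prefac)  # from T21 = prefac * (1 - T_cmb/T_gas)
print(round(T_gas, 2), "K")           # ~9.62 K, the threshold quoted above
```

Any PBH parameter combination that heats the gas above this threshold at $z=17.2$ is excluded in the analysis that follows.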
(a)
(b)
Figure 3: The upper projected bounds on the fraction of dark matter in the form of PBHs ($f_{\rm PBH}=\Omega_{\rm PBH}/\Omega_{\rm DM}$) as a function of PBH mass for varying spin ($a_{*}$). The shaded regions are excluded
from our analysis for $f_{\rm PBH}$ when $a_{*}=0$ (dotted black line), 0.5
(dot-dashed black line), 0.9 (dashed black line) and 0.9999 (solid black
line). The dashed blue curve depicts the upper constraint on $f_{\rm PBH}$ by
observations of the diffuse Isotropic Gamma-Ray Background (IGRB) for
$a_{*}=0.9$ Arbey _et al._ (2020b). The double-dot-dashed blue curve
represents the upper constraint on $f_{\rm PBH}$ from Diffuse Supernova
Neutrino Background (DSNB) searches at Super-Kamiokande, while the solid blue
line represents the constraint on $f_{\rm PBH}$ from the INTErnational Gamma-Ray Astrophysical Laboratory (INTEGRAL) observation of the 511 keV $\gamma$-ray line at the Galactic centre for $a_{*}=0.9$ Dasgupta _et al._ (2020). The
double-dot-dashed magenta (red) line represents the AMEGO forecast for
$a_{*}=0\ (a_{*}=0.9999)$ Ray _et al._ (2021). In the near future, the AMEGO collaboration will be able to probe the parameter space above the magenta (red) double-dot-dashed curve for $a_{*}=0\ (a_{*}=0.9999)$. The solid green
line stands for 95% confidence level bound from INTEGRAL observation of
Galactic gamma-ray flux for non-spinning PBHs Laha _et al._ (2020). The solid cyan curve depicts the upper bound from the observation of the 511 keV $\gamma$-ray line at the Galactic centre, assuming all the PBHs lie within a 3 kpc radius of the Galactic centre, for non-spinning PBHs Laha (2019). The magenta solid line
represents the Planck constraint Clark _et al._ (2017). The red solid line
depicts the dwarf galaxy Leo T constraint Kim (2021) and the green dashed line
shows the COMPTEL bound Coogan _et al._ (2021) for non-spinning PBHs.
In Figures (1) and (2), we present IGM gas temperature evolution as a function
of redshift for different PBH masses, spins and fractions of DM in the form of
PBHs. The shaded region corresponds to the redshift range $15-20$. The red dashed
curves in all plots depict the CMB temperature evolution, while the black
dashed line represents the gas temperature when there are no evaporating PBHs.
In Figure (1a), we fix the mass and $f_{\rm PBH}$ to $1\times 10^{15}$ g and $10^{-7}$, respectively, and vary the spin of the PBHs. As expected, when we
increase the spin of PBHs, the gas temperature rises significantly in the
shaded region. The solid violet curve represents the case when the spin of
PBHs is 0. Increasing the spin to 0.5 (solid green line), the gas temperature increases. Increasing the spin further to 0.99 (solid cyan line), the gas temperature rises further. In Figure (1b), we keep $M_{\rm PBH}=1\times 10^{15}$ g and the spin at 0.5, and vary $f_{\rm PBH}$. In this plot, as we increase
the $f_{\rm PBH}$ from $10^{-8}$ (solid cyan line) to $10^{-6}$ (solid violet
line), the IGM heating rises rapidly. If the gas temperature becomes larger
than the CMB temperature in the shaded region, it can erase the 21 cm
absorption signal; instead, it may give an emission signal. Therefore, at
desired redshift (in our scenario $z=17.2$), one has to keep $T_{\rm
gas}<T_{\rm CMB}$ to get an absorption signal. Increasing $f_{\rm PBH}$, for a
given mass, the number density of PBHs increases, resulting in more energy
injection into IGM by PBHs Hawking evaporation. Therefore, increasing the
$f_{\rm PBH}$, the gas temperature rises. In Figure (2), we vary the mass of the PBHs and keep the spin and $f_{\rm PBH}$ fixed at $0.5$ and $10^{-7}$, respectively. In this plot, as we increase the mass of PBHs from $1\times
10^{15}$ g (solid violet line) to $5\times 10^{15}$ g (solid cyan line), the
gas temperature decreases. It happens for two reasons: (i) Increasing the mass
of PBHs leads to a decrease in the total power contributions from Hawking
evaporation of PBHs MacGibbon and Webber (1990). (ii) Ignoring the integral
dependency in equations (1) and (2), $\Gamma_{\rm PBH}^{e^{\pm}}$ and
$\Gamma_{\rm PBH}^{\gamma}$ are proportional to $n_{\rm PBH}=f_{\rm
PBH}\,(\rho_{\rm DM}/M_{\rm PBH})$. For a fixed dark-matter energy density and
$f_{\rm PBH}$, the number density of PBHs increases by decreasing the black
hole mass. Thus, energy injection into IGM per unit volume and time
($\Gamma_{\rm PBH}$) increases, and one gets more heating of the gas.
In Figure (3), we plot the upper projected bounds on the fraction of DM in the
form of PBHs as a function of PBH mass for different spins. Here, we have required that the 21-cm differential brightness temperature, $T_{21}$, does not become shallower than $-150$ mK at redshift $z=17.2$. We vary the mass of PBHs from $10^{15}$ g to
$10^{18}$ g. The shaded regions in both the plots are excluded for the
corresponding PBH spins. The dashed blue curve represents the upper constraint
on $f_{\rm PBH}$ by observations of the diffuse Isotropic Gamma-Ray Background
(IGRB) Arbey _et al._ (2020b). The double-dot-dashed blue curve represents
the upper constraint on $f_{\rm PBH}$ from Diffuse Supernova Neutrino
Background (DSNB) searches at Super-Kamiokande, while the solid blue line
represents the constraint on $f_{\rm PBH}$ from the INTErnational Gamma-Ray Astrophysical Laboratory (INTEGRAL) observation of the 511 keV $\gamma$-ray line at the Galactic centre for $a_{*}=0.9$ Dasgupta _et al._ (2020). For $a_{*}=0$, the
observation at the Jiangmen Underground Neutrino Observatory (JUNO) will be
able to place a 20 times stronger bound on the upper allowed value of $f_{\rm
PBH}$ for $M_{\rm PBH}=10^{15}$ g compared to Super-Kamiokande Wang _et al._
(2021); Dasgupta _et al._ (2020). The double-dot-dashed magenta (red) line
represents the AMEGO forecast for $a_{*}=0\ (a_{*}=0.9999)$ Ray _et al._
(2021). In the near future, AMEGO collaboration will be able to probe the
parameter-space above the magenta (red) double-dot-dashed curve for $a_{*}=0\
(a_{*}=0.9999)$. The solid green line represents the 95% confidence-level
bound from the INTEGRAL observation of the Galactic $\gamma$-ray flux for
non-spinning PBHs Laha
_et al._ (2020). The solid cyan curve depicts the upper bound from the
observation of 511 keV $\gamma$-ray lines at the Galactic centre by assuming
all the PBHs lie within a 3 kpc radius of the Galactic centre for non-spinning
PBHs Laha (2019). For the comparison, we have also plotted the bounds from
Planck Clark _et al._ (2017), Leo T Kim (2021) and COMPTEL Coogan _et al._
(2021) observations for non-spinning PBHs. In Figure (3a), $f_{\rm PBH}$
varies from $1\times 10^{-10}$ to $1\times 10^{-5}$, while, in Figure (3b), it
varies from $1\times 10^{-5}$ to its maximum allowed value 1 ($\Omega_{\rm
PBH}=\Omega_{\rm DM}$). In Figure (3), as we increase the value of spin from
$0$ to its extremal value, $0.9999$, the upper bounds become more stringent.
This is because the evaporation rate of PBHs increases with spin, which
results in more energy injection into the IGM Page (1976a, b, 1977). As
discussed earlier, the energy injection into the IGM decreases as the PBH mass
increases. Consequently, there is more room to increase the gas temperature or
$f_{\rm PBH}$, and the upper bound becomes weaker. Therefore, in Figure (3),
the upper bound on $f_{\rm PBH}$ weakens as we increase the mass. Our projected
upper constraint on $f_{\rm PBH}$ for $a_{*}=0.9$ is comparable to the INTEGRAL
observation of 511 keV $\gamma$-ray lines for PBH masses larger than $\sim
8\times 10^{16}$ g and becomes stronger for smaller PBH masses. Also, compared
to IGRB Arbey _et al._ (2020b) and DSNB Dasgupta _et al._ (2020), our
projected bounds are more stringent over the considered PBH mass range. We
find the strongest projected lower bound on the mass of PBHs allowed to
constitute the entire dark matter to be $1.5\times 10^{17}$ g,
$1.9\times 10^{17}$ g, $3.9\times 10^{17}$ g and $6.7\times 10^{17}$ g for PBH
spins 0, 0.5, 0.9 and 0.9999, respectively. The lower bound on $M_{\rm PBH}$
for $\Omega_{\rm PBH}=\Omega_{\rm DM}$ is nearly four times larger for
extremally spinning PBHs than for non-spinning PBHs.
## IV Conclusions
Spinning primordial black holes can substantially affect the ionization and
thermal history of the Universe. In particular, they can modify the 21-cm
absorption signal in the cosmic-dawn era through the energy injected by
Hawking evaporation. We study the projected upper bounds on the fraction of
dark matter in the form of PBHs as a function of mass and spin, requiring that
the 21-cm differential brightness temperature does not change by more than a
factor of 1/4 from the theoretical prediction based on the $\Lambda$CDM
framework. Our projected constraints are more stringent than those from DSNB,
the INTEGRAL observation of the 511 keV line, IGRB, Planck, Leo T and COMPTEL.
In the near future, the AMEGO collaboration will be able to probe some of the
parameter space in our considered PBH mass range. In the present work, we have
assumed a monochromatic mass distribution of PBHs. The allowed parameter space
can also be explored for other PBH mass distributions, such as log-normal,
power-law and critical collapse Arbey and Auffinger (2019). We note that we
have not included X-ray heating of the IGM gas by the first stars, given the
poorly known physics of the first stars. Including X-ray heating would further
strengthen our projected bounds.
## V Acknowledgements
The authors would like to acknowledge Prof. Jitesh R Bhatt and Ranjan Laha for
valuable comments and suggestions, and the TEQIP-III sponsored Workshop on
Astroparticle Physics and Cosmology at the National Institute of Technology
Meghalaya. We thank Alexandre Arbey and Jérémy Auffinger for providing the new
version of the BlackHawk code in advance. T. S. would like to acknowledge the
support from the Dr. D. S. Kothari Postdoctoral fellowship scheme No.
F.4-2/2006 (BSR)/PH/20-21/0163. Finally, the authors would like to thank the
Referee for the suggestions and a detailed report that significantly improved
the quality of the manuscript.
## References
* Planck Collaboration VI (2020) Planck Collaboration VI, Astronomy & Astrophysics 641, A6 (2020).
* Peebles (1982) P. J. E. Peebles, ApJL 263, L1 (1982).
* Hu _et al._ (2000) W. Hu, R. Barkana, and A. Gruzinov, Phys. Rev. Lett. 85, 1158 (2000).
* Dodelson and Widrow (1994) S. Dodelson and L. M. Widrow, Phys. Rev. Lett. 72, 17 (1994).
* Boyarsky _et al._ (2019) A. Boyarsky, M. Drewes, T. Lasserre, S. Mertens, and O. Ruchayskiy, Progress in Particle and Nuclear Physics 104, 1 (2019).
* Bulbul _et al._ (2014) E. Bulbul, M. Markevitch, A. Foster, R. K. Smith, M. Loewenstein, and S. W. Randall, The Astrophysical Journal 789, 13 (2014).
* Spergel and Steinhardt (2000) D. N. Spergel and P. J. Steinhardt, Phys. Rev. Lett. 84, 3760 (2000).
* Natwariya _et al._ (2020) P. K. Natwariya, J. R. Bhatt, and A. K. Pandey, Eur. Phys. J. C 80 (2020), 10.1140/epjc/s10052-020-8341-8.
* Carr and Kühnel (2020) B. Carr and F. Kühnel, Annual Review of Nuclear and Particle Science 70, 355 (2020), https://doi.org/10.1146/annurev-nucl-050520-125911 .
* Dasgupta _et al._ (2020) B. Dasgupta, R. Laha, and A. Ray, Phys. Rev. Lett. 125, 101101 (2020).
* Frampton _et al._ (2010) P. H. Frampton, M. Kawasaki, F. Takahashi, and T. T. Yanagida, J. Cosmol. Astropart. Phys. 2010, 023 (2010).
* Khlopov (2010) M. Y. Khlopov, Research in Astronomy and Astrophysics 10, 495 (2010).
* Belotsky _et al._ (2019) K. M. Belotsky, V. I. Dokuchaev, Y. N. Eroshenko, E. A. Esipova, M. Y. Khlopov, L. A. Khromykh, A. A. Kirillov, V. V. Nikulin, S. G. Rubin, and I. V. Svadkovsky, Eur. Phys. J. C 79 (2019), 10.1140/epjc/s10052-019-6741-4.
* Bird _et al._ (2016) S. Bird, I. Cholis, J. B. Muñoz, Y. Ali-Haïmoud, M. Kamionkowski, E. D. Kovetz, A. Raccanelli, and A. G. Riess, Phys. Rev. Lett. 116, 201301 (2016).
* Abbott et al. (2016a) B. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration), Phys. Rev. Lett. 116, 061102 (2016a).
* Abbott et al. (2016b) B. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration), Phys. Rev. Lett. 116, 241103 (2016b).
* Abbott et al. (2017a) B. Abbott et al. (LIGO Scientific and Virgo Collaboration), Phys. Rev. Lett. 118, 221101 (2017a).
* Abbott et al. (2017b) B. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration), Phys. Rev. Lett. 119, 141101 (2017b).
* Sasaki _et al._ (2016) M. Sasaki, T. Suyama, T. Tanaka, and S. Yokoyama, Phys. Rev. Lett. 117, 061101 (2016).
* Zel’dovich and Novikov (1967) Y. B. Zel’dovich and I. D. Novikov, Soviet Astronomy 10, 602 (1967).
* Hawking (1971) S. Hawking, MNRAS 152, 75 (1971).
* Carr and Hawking (1974) B. J. Carr and S. W. Hawking, Monthly Notices of the Royal Astronomical Society 168, 399 (1974).
* Carr (1975) B. J. Carr, ApJ 201, 1 (1975).
* Espinosa _et al._ (2018) J. R. Espinosa, D. Racco, and A. Riotto, Phys. Rev. Lett. 120, 121301 (2018).
* Clesse and García-Bellido (2015) S. Clesse and J. García-Bellido, Phys. Rev. D 92, 023524 (2015).
* Carr _et al._ (2010) B. J. Carr, K. Kohri, Y. Sendouda, and J. Yokoyama, Phys. Rev. D 81, 104019 (2010).
* Hawking (1975) S. W. Hawking, Commun. Math. Phys. 43, 199 (1975), [Erratum: Commun.Math.Phys. 46, 206 (1976)].
* García-Bellido (2017) J. García-Bellido, Journal of Physics: Conference Series 840, 012032 (2017).
* Clesse and García-Bellido (2018) S. Clesse and J. García-Bellido, Physics of the Dark Universe 22, 137 (2018).
* Wright (1996) E. L. Wright, ApJ 459, 487 (1996), arXiv:astro-ph/9509074 [astro-ph] .
* Lehoucq _et al._ (2009) R. Lehoucq, M. Cassé, J. M. Casandjian, and I. Grenier, A&A 502, 37 (2009), arXiv:0906.1648 [astro-ph.HE] .
* Carr (1976) B. J. Carr, ApJ 206, 8 (1976).
* Page and Hawking (1976) D. N. Page and S. W. Hawking, ApJ 206, 1 (1976).
* Cline _et al._ (1997) D. B. Cline, D. A. Sanders, and W. Hong, The Astrophysical Journal 486, 169 (1997).
* Green (2001) A. M. Green, Phys. Rev. D 65, 027301 (2001).
* Belotsky _et al._ (2014) K. M. Belotsky, A. E. Dmitriev, E. A. Esipova, V. A. Gani, A. V. Grobov, M. Y. Khlopov, A. A. Kirillov, S. G. Rubin, and I. V. Svadkovsky, Modern Physics Letters A 29, 1440005 (2014).
* Belotsky and Kirillov (2015) K. Belotsky and A. Kirillov, J. Cosmol. Astropart. Phys. 2015, 041 (2015).
* Seager _et al._ (1999) S. Seager, D. D. Sasselov, and D. Scott, Ast. J. 523, L1 (1999).
* Seager _et al._ (2000) S. Seager, D. D. Sasselov, and D. Scott, ApJ 128, 407 (2000).
* Laha _et al._ (2021) R. Laha, P. Lu, and V. Takhistov, Physics Letters B 820, 136459 (2021).
* Kim (2021) H. Kim, Monthly Notices of the Royal Astronomical Society 504, 5475 (2021).
* Chandrasekhar and Detweiler (1977) S. Chandrasekhar and S. L. Detweiler, Proc. Roy. Soc. Lond. A 352, 325 (1977).
* Page (1976a) D. N. Page, Phys. Rev. D 13, 198 (1976a).
* Bowman _et al._ (2018) J. D. Bowman, A. E. E. Rogers, R. A. Monsalve, T. J. Mozdzen, and N. Mahesh, Nature 555, 67 (2018).
* Pritchard and Loeb (2012) J. R. Pritchard and A. Loeb, Rep. Prog. Phys 75, 086901 (2012).
* Ewall-Wice _et al._ (2018) A. Ewall-Wice, T.-C. Chang, J. Lazio, O. Doré, M. Seiffert, and R. A. Monsalve, The Astrophysical Journal 868, 63 (2018).
* Jana _et al._ (2018) R. Jana, B. B. Nath, and P. L. Biermann, Monthly Notices of the Royal Astronomical Society 483, 5329 (2018).
* Feng and Holder (2018) C. Feng and G. Holder, The Astrophysical Journal 858, L17 (2018).
* Lawson and Zhitnitsky (2019) K. Lawson and A. Zhitnitsky, Physics of the Dark Universe 24, 100295 (2019).
* Natwariya (2021) P. K. Natwariya, Eur. Phys. J. C 81 (2021), 10.1140/epjc/s10052-021-09155-z.
* Lawson and Zhitnitsky (2013) K. Lawson and A. R. Zhitnitsky, Physics Letters B 724, 17 (2013).
* Levkov _et al._ (2020) D. G. Levkov, A. G. Panin, and I. I. Tkachev, Phys. Rev. D 102, 023501 (2020).
* Natwariya and Bhatt (2020) P. K. Natwariya and J. R. Bhatt, MNRAS: Letters 497, L35 (2020).
* Brandenberger _et al._ (2019) R. Brandenberger, B. Cyr, and R. Shi, J. Cosmol. Astropart. Phys. 2019, 009 (2019).
* Chianese _et al._ (2019) M. Chianese, P. Di Bari, K. Farrag, and R. Samanta, Physics Letters B 790, 64 (2019).
* Bhatt _et al._ (2020) J. R. Bhatt, P. K. Natwariya, A. C. Nayak, and A. K. Pandey, Eur. Phys. J. C 80, 334 (2020).
* Tashiro _et al._ (2014) H. Tashiro, K. Kadota, and J. Silk, Phys. Rev. D 90, 083522 (2014).
* Barkana (2018) R. Barkana, Nature 555, 71 (2018).
* Sikivie (2019) P. Sikivie, Physics of the Dark Universe 24, 100289 (2019).
* Mirocha and Furlanetto (2019) J. Mirocha and S. R. Furlanetto, MNRAS 483, 1980 (2019).
* Ghara and Mellema (2019) R. Ghara and G. Mellema, MNRAS 492, 634 (2019).
* Muñoz and Loeb (2018) J. B. Muñoz and A. Loeb, Nature 557, 684 (2018).
* Fraser _et al._ (2018) S. Fraser _et al._, Phys. Lett. B 785, 159 (2018).
* Bransden _et al._ (1958) B. H. Bransden, A. Dalgarno, T. L. John, and M. J. Seaton, Proc. Phys. Soc. 71, 877 (1958).
* Barkana _et al._ (2018) R. Barkana, N. J. Outmezguine, D. Redigolo, and T. Volansky, Phys. Rev. D 98, 103005 (2018).
* Berlin _et al._ (2018) A. Berlin, D. Hooper, G. Krnjaic, and S. D. McDermott, Phys. Rev. Lett. 121, 011102 (2018).
* Kovetz _et al._ (2018) E. D. Kovetz, V. Poulin, V. Gluscevic, K. K. Boddy, R. Barkana, and M. Kamionkowski, Phys. Rev. D 98, 103529 (2018).
* Muñoz _et al._ (2018) J. B. Muñoz, C. Dvorkin, and A. Loeb, Phys. Rev. Lett. 121, 121301 (2018).
* Slatyer and Wu (2018) T. R. Slatyer and C.-L. Wu, Phys. Rev. D 98, 023013 (2018).
* D’Amico _et al._ (2018) G. D’Amico, P. Panci, and A. Strumia, Phys. Rev. Lett. 121, 011103 (2018).
* Mitridate and Podo (2018) A. Mitridate and A. Podo, J. Cosmol. Astropart. Phys. 2018, 069 (2018).
* Carr _et al._ (2021) B. Carr, K. Kohri, Y. Sendouda, and J. Yokoyama, Rep. Prog. Phys. 84, 116902 (2021).
* Green and Kavanagh (2021) A. M. Green and B. J. Kavanagh, Journal of Physics G: Nuclear and Particle Physics 48, 043001 (2021).
* Hektor _et al._ (2018) A. Hektor, G. Hütsi, L. Marzola, M. Raidal, V. Vaskonen, and H. Veermäe, Phys. Rev. D 98 (2018), 10.1103/physrevd.98.023503.
* Clark _et al._ (2018) S. J. Clark, B. Dutta, Y. Gao, Y.-Z. Ma, and L. E. Strigari, Phys. Rev. D 98 (2018), 10.1103/physrevd.98.043006.
* Mena _et al._ (2019) O. Mena, S. Palomares-Ruiz, P. Villanueva-Domingo, and S. J. Witte, Phys. Rev. D 100 (2019), 10.1103/physrevd.100.043540.
* Yang (2020a) Y. Yang, Phys. Rev. D 102 (2020a), 10.1103/physrevd.102.083538.
* Halder and Banerjee (2021) A. Halder and S. Banerjee, Phys. Rev. D 103 (2021), 10.1103/physrevd.103.063044.
* Tashiro and Kadota (2021) H. Tashiro and K. Kadota, Phys. Rev. D 103 (2021), 10.1103/physrevd.103.123532.
* Yang (2020b) Y. Yang, The European Physical Journal Plus 135 (2020b), 10.1140/epjp/s13360-020-00710-3.
* Villanueva-Domingo and Ichiki (2021) P. Villanueva-Domingo and K. Ichiki, “21 cm forest constraints on primordial black holes,” (2021), arXiv:2104.10695 [astro-ph.CO] .
* Ray _et al._ (2021) A. Ray, R. Laha, J. B. Muñoz, and R. Caputo, Phys. Rev. D 104, 023516 (2021).
* Kesden _et al._ (2010) M. Kesden, G. Lockhart, and E. S. Phinney, Phys. Rev. D 82 (2010), 10.1103/physrevd.82.124045.
* Cotner and Kusenko (2017) E. Cotner and A. Kusenko, Phys. Rev. D 96 (2017), 10.1103/physrevd.96.103002.
* Harada _et al._ (2021) T. Harada, C.-M. Yoo, K. Kohri, Y. Koga, and T. Monobe, The Astrophysical Journal 908, 140 (2021).
* Luca _et al._ (2019) V. D. Luca, V. Desjacques, G. Franciolini, A. Malhotra, and A. Riotto, J. Cosmol. Astropart. Phys. 2019, 018 (2019).
* Luca _et al._ (2020) V. D. Luca, G. Franciolini, P. Pani, and A. Riotto, J. Cosmol. Astropart. Phys. 2020, 052 (2020).
* Harada _et al._ (2017) T. Harada, C.-M. Yoo, K. Kohri, and K.-I. Nakao, Phys. Rev. D 96 (2017), 10.1103/physrevd.96.083517.
* Kühnel (2020) F. Kühnel, Eur. Phys. J. C 80 (2020), 10.1140/epjc/s10052-020-7807-z.
* Flores and Kusenko (2021) M. M. Flores and A. Kusenko, Phys. Rev. D 104 (2021), 10.1103/physrevd.104.063008.
* Arbey _et al._ (2020a) A. Arbey, J. Auffinger, and J. Silk, Monthly Notices of the Royal Astronomical Society 494, 1257 (2020a).
* He and Suyama (2019) M. He and T. Suyama, Phys. Rev. D 100 (2019), 10.1103/physrevd.100.063520.
* Cotner _et al._ (2019) E. Cotner, A. Kusenko, M. Sasaki, and V. Takhistov, J. Cosmol. Astropart. Phys. 2019, 077 (2019).
* Dong _et al._ (2016) R. Dong, W. H. Kinney, and D. Stojkovic, J. Cosmol. Astropart. Phys. 2016, 034 (2016).
* Taylor _et al._ (1998) B. E. Taylor, C. M. Chambers, and W. A. Hiscock, Phys. Rev. D 58 (1998), 10.1103/physrevd.58.044012.
* Arbey _et al._ (2020b) A. Arbey, J. Auffinger, and J. Silk, Phys. Rev. D 101 (2020b), 10.1103/physrevd.101.023010.
* Arbey _et al._ (2021) A. Arbey, J. Auffinger, P. Sandick, B. Shams Es Haghi, and K. Sinha, Phys. Rev. D 103 (2021), 10.1103/physrevd.103.123549.
* Mittal _et al._ (2021) S. Mittal, A. Ray, G. Kulkarni, and B. Dasgupta, “Constraining primordial black holes as dark matter using the global 21-cm signal with x-ray heating and excess radio background,” (2021), arXiv:2107.02190 [astro-ph.CO] .
* Slatyer (2016a) T. R. Slatyer, Phys. Rev. D 93, 023521 (2016a).
* Slatyer (2016b) T. R. Slatyer, Phys. Rev. D 93, 023527 (2016b).
* Liu _et al._ (2020) H. Liu, G. W. Ridgway, and T. R. Slatyer, Phys. Rev. D 101 (2020), 10.1103/physrevd.101.023530.
* MacGibbon and Webber (1990) J. H. MacGibbon and B. R. Webber, Phys. Rev. D 41, 3052 (1990).
* Arbey and Auffinger (2019) A. Arbey and J. Auffinger, Eur. Phys. J. C 79 (2019), 10.1140/epjc/s10052-019-7161-1.
* Arbey and Auffinger (2021) A. Arbey and J. Auffinger, Eur. Phys. J. C 81 (2021), 10.1140/epjc/s10052-021-09702-8.
* Chluba _et al._ (2015) J. Chluba, D. Paoletti, F. Finelli, and J. A. Rubiño-Martín, MNRAS 451, 2244 (2015).
* Schleicher _et al._ (2008) D. R. G. Schleicher, R. Banerjee, and R. S. Klessen, Phys. Rev. D 78, 083005 (2008).
* Fixsen (2009) D. J. Fixsen, ApJ 707, 916 (2009).
* Mesinger and Furlanetto (2007) A. Mesinger and S. Furlanetto, ApJ 669, 663 (2007).
* Mesinger _et al._ (2011) A. Mesinger, S. Furlanetto, and R. Cen, MNRAS 411, 955 (2011).
* Mittal and Kulkarni (2020) S. Mittal and G. Kulkarni, Monthly Notices of the Royal Astronomical Society 503, 4264 (2020).
* Field (1958) G. B. Field, Proceedings of the IRE 46, 240 (1958).
* Wouthuysen (1952) S. A. Wouthuysen, ApJ 57, 31 (1952).
* Field (1959) G. B. Field, ApJ 129, 536 (1959).
* Venumadhav _et al._ (2018) T. Venumadhav, L. Dai, A. Kaurov, and M. Zaldarriaga, Phys. Rev. D 98, 103513 (2018).
* Laha _et al._ (2020) R. Laha, J. B. Muñoz, and T. R. Slatyer, Phys. Rev. D 101, 123514 (2020).
* Laha (2019) R. Laha, Phys. Rev. Lett. 123, 251101 (2019).
* Clark _et al._ (2017) S. J. Clark, B. Dutta, Y. Gao, L. E. Strigari, and S. Watson, Phys. Rev. D 95 (2017), 10.1103/physrevd.95.083006.
* Coogan _et al._ (2021) A. Coogan, L. Morrison, and S. Profumo, Physical Review Letters 126 (2021), 10.1103/physrevlett.126.171101.
* Wang _et al._ (2021) S. Wang, D.-M. Xia, X. Zhang, S. Zhou, and Z. Chang, Phys. Rev. D 103 (2021), 10.1103/physrevd.103.043010.
* Page (1976b) D. N. Page, Phys. Rev. D 14, 3260 (1976b).
* Page (1977) D. N. Page, Phys. Rev. D 16, 2402 (1977).
(arXiv:2107.12358; authors: Pravin Kumar Natwariya, Alekha C. Nayak and
Tripurari Srivastava; license: CC BY 4.0)

arXiv:2107.12361
# Convergent least-squares optimisation methods for variational data
assimilation
C. Cartis (Mathematical Institute, University of Oxford, UK)
M. H. Kaouri (Department of Mathematics and Statistics, University of Reading,
UK; corresponding author. Address: Department of Mathematics and Statistics,
University of Reading, PO Box 220, Reading, RG6 6AX, UK. Email:
[email protected])
A. S. Lawless (Department of Mathematics and Statistics, University of Reading,
UK; Department of Meteorology, University of Reading, UK; National Centre for
Earth Observation, Reading, UK)
N. K. Nichols (Department of Mathematics and Statistics, University of Reading,
UK; Department of Meteorology, University of Reading, UK; National Centre for
Earth Observation, Reading, UK)
###### Abstract
Data assimilation combines prior (or background) information with observations
to estimate the initial state of a dynamical system over a given time-window.
A common application is in numerical weather prediction where a previous
forecast and atmospheric observations are used to obtain the initial
conditions for a numerical weather forecast. In four-dimensional variational
data assimilation (4D-Var), the problem is formulated as a nonlinear least-
squares problem, usually solved using a variant of the classical Gauss-Newton
(GN) method. However, we show that GN may not converge if poorly initialised.
In particular, we show that this may occur when there is greater uncertainty
in the background information compared to the observations, or when a long
time-window is used in 4D-Var allowing more observations. The difficulties GN
encounters may lead to inaccurate initial state conditions for subsequent
forecasts. To overcome this, we apply two convergent GN variants (line search
and regularisation) to the long time-window 4D-Var problem and investigate the
cases where they locate a more accurate estimate compared to GN within a given
budget of computational time and cost. We show that these methods are able to
improve the estimate of the initial state, which may lead to a more accurate
forecast.
Keywords: Data assimilation, Gauss-Newton, least squares, line search,
optimisation, regularisation
Highlights:
* Poor initialisation of the Gauss-Newton method may result in failure to converge.
* Safeguarded Gauss-Newton improves the initial state estimate within limited time/cost.
* Results use twin experiments with long time-windows and chaotic Lorenz models.
* We apply state-of-the-art least-squares convergence theory to data assimilation.
* Improvements to the initial state estimate may lead to a more accurate forecast.
## 1 Introduction
Four-dimensional variational data assimilation (4D-Var) aims to solve a
nonlinear least-squares problem that minimises the error in a prior estimate
of the initial state of a dynamical system together with the errors between
observations and numerical model estimates of the states of the system over
time. In Numerical Weather Prediction (NWP), 4D-Var is used to estimate the
initial conditions for a weather forecast [24]. The 4D-Var scheme is able to
incorporate information from a previous forecast along with observations over
both temporal and spatial domains, weighted by the uncertainties in the prior
and the observations. From a Bayesian point of view the solution is the
maximum a posteriori estimate of the initial state [35]. The nonlinear least-
squares objective function is minimised using an iterative method. The quality
of the estimate and the subsequent forecast depends on how accurately the
4D-Var problem is solved within the time and computational cost available.
In this paper, we investigate the application of globally convergent
optimisation methods to the 4D-Var problem; such methods use safeguards to
guarantee convergence from an arbitrary initial estimate by ensuring a
sufficient, monotonic/strict decrease in the objective function at each
iteration. We focus on the strong-constraint 4D-Var problem where we assume
that the numerical model of the system perfectly represents the true dynamics
of the system or the model errors are small enough to be neglected. This
results in the formulation of variational data assimilation as an
unconstrained nonlinear least-squares problem and is employed by many
operational meteorological centres [37], including the Meteorological Service
of Canada [14], the European Centre for Medium-range Weather Forecasting
(ECMWF) [8, 38] and the Met Office [39].
Ideally in large-scale unconstrained optimisation, we seek a fast rate of
convergence, which can be achieved in nondegenerate cases using a Newton-type
method. However, these methods require the use of second order derivatives of
the objective function, which are too costly to compute and store
operationally. Therefore, optimisation methods that approximate the higher-order
terms, such as limited memory Quasi-Newton [15, 27, 40, 44], Inexact Newton
[10], Truncated Newton [23, 42], Adjoint Newton [41], Hessian-free Newton [9],
Gauss-Newton [11, 36] and Approximate Gauss-Newton [18] methods have been
considered. To compute efficiently the first derivatives of the objective
function required by these techniques, the adjoint of the numerical model is
generally used [24]. More recently, optimisation methods that do not require
the first derivatives of the objective function are being examined to avoid
the development and maintenance costs associated with using the adjoint [17].
Alternative data assimilation techniques that use ensemble methods to
approximate the objective function gradients, rather than using the adjoint,
are also being investigated [2, 26].
The incremental 4D-Var technique, used commonly in operational centres,
approximately solves a sequence of linear least-squares problems and has been
shown to be equivalent to the Gauss-Newton (GN) method under standard
conditions [22]. In the GN (or incremental) method the linearized problem is
solved in an inner loop of the algorithm; the solution to the nonlinear
problem is then updated in an outer loop and the problem is re-linearized. The
accuracy with which the inner loop is solved is known to affect the
convergence of the outer loop [21, 22]. In our work, we focus on the
convergence of the outer loop, where the exact gradient is used (as is the
case when an adjoint is available) and we assume that the inner loop linear
least-squares problem is solved exactly. Furthermore, we use a variable
transformation usually applied in operational 4D-Var to precondition the
optimisation problem, see [3].
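A minimal sketch of this outer/inner-loop structure, under the assumption stated above that the inner linear least-squares subproblem is solved exactly (here with a dense `lstsq` solve on a toy residual, not the 4D-Var residual):

```python
import numpy as np

def gauss_newton(r, J, v0, n_outer=20, tol=1e-10):
    """Plain GN: re-linearise in the outer loop, solve the linearised
    subproblem min_dv ||J(v) dv + r(v)||_2 exactly in the inner loop."""
    v = v0.astype(float)
    for _ in range(n_outer):
        dv, *_ = np.linalg.lstsq(J(v), -r(v), rcond=None)  # inner loop
        v = v + dv                                          # outer update
        if np.linalg.norm(dv) < tol:
            break
    return v

# Toy zero-residual problem: r(v) = [v_1 - 1, 10(v_2 - v_1^2)].
r = lambda v: np.array([v[0] - 1.0, 10.0 * (v[1] - v[0] ** 2)])
J = lambda v: np.array([[1.0, 0.0], [-20.0 * v[0], 10.0]])
v_star = gauss_newton(r, J, np.array([0.5, 0.5]))
```

On this toy problem GN converges to the exact minimiser $(1,1)$ in a few outer iterations; with a poor initial guess no such convergence is guaranteed, which motivates the safeguards discussed next.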
A general drawback of the GN method is that given a poor initialisation, it is
not guaranteed to converge to a solution, known as the ‘analysis’ state, of
the 4D-Var problem [11]. In NWP, the initial guess for the minimisation is
generally chosen to be the predicted initial state from a previous forecast,
known as the ‘prior’ or ‘background’ state. However, for some applications of
4D-Var this choice may not be a good enough estimate of the analysis. We show
that in such cases, the GN minimisation procedure may fail to converge. There
are three main strategies that safeguard GN and make it convergent from an
arbitrary initial guess: line-search, regularisation and trust-region [7, 36].
These can all be regarded as variants of the original Levenberg-Marquardt
algorithm for solving nonlinear least-squares problems [25, 30]. In this work
we investigate GN method with regularisation (REG) and compare its performance
to GN with backtracking Armijo line-search (LS) and GN alone, applied to the
preconditioned 4D-Var problem when there is limited computational time and
evaluations available, such as in NWP.
In previous work, the use of a line-search strategy in combination with a
Quasi-Newton approach was implemented in the ECMWF NWP system to solve the
4D-Var problem and was found to improve the minimisation of the objective
function [15, 38]. This method uses the Wolfe line-search conditions [43] to
safeguard the convergence. However, the Wolfe conditions require additional
evaluations of both the objective function and its gradient, which is
computationally costly. Here we instead use the Armijo condition [1], which
only requires additional evaluations of the objective function and not the
gradient. We pair GN with backtracking Armijo line-search and use a fixed
number of computational evaluations to guarantee a reduction in the outer loop
objective function (assuming the inner loop is solved to a high accuracy). We
compare this method to the GN method and to the GN method safeguarded by
quadratic regularisation (REG), using a simple, inexpensive updating strategy.
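The backtracking Armijo safeguard can be sketched as follows; the value of $c_1$, the shrink factor and the toy objective are illustrative choices, not the paper's settings. Note that backtracking needs only extra objective evaluations, not extra gradient evaluations:

```python
import numpy as np

def armijo_backtrack(Jfun, gradJ, v, dv, c1=1e-4, shrink=0.5, max_iter=50):
    """Accept the first step length alpha satisfying the Armijo condition
    J(v + alpha*dv) <= J(v) + c1*alpha*gradJ(v)^T dv, halving otherwise."""
    J0, slope = Jfun(v), gradJ(v) @ dv  # slope < 0 for a descent direction
    alpha = 1.0
    for _ in range(max_iter):
        if Jfun(v + alpha * dv) <= J0 + c1 * alpha * slope:
            return alpha
        alpha *= shrink  # backtrack: only an objective evaluation per trial
    return alpha

# Quadratic toy objective J(v) = 0.5 ||v||^2 with a steepest-descent step.
Jfun = lambda v: 0.5 * v @ v
gradJ = lambda v: v
v = np.array([2.0, -1.0])
dv = -gradJ(v)
alpha = armijo_backtrack(Jfun, gradJ, v, dv)
```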
Using two test models within the 4D-Var framework, we show that where there is
more uncertainty in the background information compared to the observations,
the GN method may fail to converge, yet the convergent methods, LS and REG,
are able to improve the estimate of the analysis. Assimilation over long time
windows is of particular interest. We use accuracy profiles to show
numerically that in the long time-window case and in cases where there is
higher uncertainty in the background information versus the observations, the
globally convergent methods are able to solve more problems than GN in the
limited cost available. By ‘solve’ we mean satisfying a criterion requiring a
reduction in the objective function within a set number of evaluations. We
also show the effect that poor background information has on the quality of
the estimate obtained. We consider the case where the background information
is highly inaccurate compared to the observations and find that the
convergence of all three methods is improved when more observations are
included along the time-window. Finally, for the case where GN performs well,
we recommend further research into the parameter updating strategies used
within the globally convergent methods.
The structure of this paper is organised as follows. In Section 2 we outline
the strong-constraint 4D-Var problem as a nonlinear least-squares problem and
the GN method that is frequently used to solve it. In Section 3 we outline the
globally convergent methods used within this paper. In Section 4 we describe
the experimental design including the dynamical models used. In Section 5 we
present the numerical results obtained when applying GN and the globally
convergent methods to the 4D-Var problem with different features. Finally, we
conclude our findings in Section 6. In an appendix we detail the proofs of
convergence for the REG and LS methods.
## 2 Variational data assimilation
### 2.1 4D-Var: least-squares formulation
In four-dimensional variational data assimilation (4D-Var), the analysis
$\mathbf{x}_{0}^{a}\in\mathbb{R}^{n}$ is obtained by minimising an objective
function consisting of two terms, the background term and the observation
term, namely
$\mathcal{J}(\mathbf{x}_{0})=\frac{1}{2}(\mathbf{x}_{0}-\mathbf{x}_{0}^{b})^{T}\mathbf{B}^{-1}(\mathbf{x}_{0}-\mathbf{x}_{0}^{b})+\frac{1}{2}\sum_{i=0}^{N}(\mathbf{y}_{i}-\mathcal{H}_{i}(\mathbf{x}_{i}))^{T}\mathbf{R}_{i}^{-1}(\mathbf{y}_{i}-\mathcal{H}_{i}(\mathbf{x}_{i})).$
(1)
The background term measures the difference between the initial state of the
system and the background state vector $\mathbf{x}_{0}^{b}\in\mathbb{R}^{n}$,
which contains prior information. The observation term measures the difference
between information from observations at times $t_{i}$ in the observation
vector $\mathbf{y}_{i}\in\mathbb{R}^{p_{i}}$ and the model state vector
$\mathbf{x}_{i}\in\mathbb{R}^{n}$ at the same time through use of the
observation operator
$\mathcal{H}_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{p_{i}}$ that maps from
the model state space to the observation space. Both terms are weighted by
their corresponding covariance matrices to represent the uncertainty in the
respective measures, the background error covariance matrix
$\mathbf{B}\in\mathbb{R}^{n\times n}$ and the observation error covariance
matrices at times $t_{i}$, $\mathbf{R}_{i}\in\mathbb{R}^{p_{i}\times p_{i}}$,
which are assumed to be symmetric positive definite. We note that observations
are distributed both in time and space, and there are usually fewer
observations available than state variables, so that $p<n$, where
$p=\sum_{i=0}^{N}p_{i}$. The 4D-Var objective function (1) is subject to the
nonlinear dynamical model equations which contain the physics of the system
$\mathbf{x}_{i}=\mathcal{M}_{0,i}(\mathbf{x}_{0}),$ (2)
where the nonlinear model
$\mathcal{M}_{0,i}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}$ evolves the state
vector from the initial time point $t_{0}$ to the time point $t_{i}$.
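As an illustration of (1)-(2), the following sketch evaluates the strong-constraint objective by stepping a toy model between observation times; the model, observation operator, dimensions and values are all placeholders, not the paper's experimental setup:

```python
import numpy as np

def cost(x0, xb, Binv, ys, Rinvs, M, H):
    """J(x0) = 0.5 (x0-xb)^T B^{-1} (x0-xb)
             + 0.5 sum_i (y_i - H(x_i))^T R_i^{-1} (y_i - H(x_i)),
    with x_i = M_{0,i}(x0) generated by repeatedly stepping the model M."""
    d = x0 - xb
    J = 0.5 * d @ Binv @ d          # background term
    x = x0.copy()
    for i, (y, Rinv) in enumerate(zip(ys, Rinvs)):
        if i > 0:
            x = M(x)                # advance state to time t_i (Eq. 2)
        r = y - H(x)                # innovation at t_i
        J += 0.5 * r @ Rinv @ r     # observation term
    return J

# Toy linear model, identity observation operator, identity covariances.
M = lambda x: 0.9 * x
H = lambda x: x
n = 3
xb = np.zeros(n)
J0 = cost(xb, xb, np.eye(n), [np.ones(n)] * 2, [np.eye(n)] * 2, M, H)
```

With `x0 = xb`, only the observation term contributes to `J0`, as the background term vanishes.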
We precondition the 4D-Var problem using a variable transform, which has been
shown to improve the conditioning of the variational optimisation problem [19,
20]. To be able to use the inverse square root of $\mathbf{B}$ in our
variable transformation, we first require the assumption that the matrix
$\mathbf{B}$ is full rank. This assumption is satisfied for our choices of
$\mathbf{B}$ in Section 5. We define a new variable $\mathbf{v}$ as
$\mathbf{v}=\mathbf{B}^{-1/2}(\mathbf{x}_{0}-\mathbf{x}_{0}^{b}).$ (3)
The 4D-Var objective function can then be written in terms of $\mathbf{v}$,
known as the control variable in data assimilation (DA), and minimised with
respect to this instead. Furthermore, by including the model information
within the objective function, we are able to write the constrained
optimisation problem (1)-(2) in the form of an unconstrained optimisation
problem and apply the minimisation methods described later in this paper. The
preconditioned 4D-Var objective function is given by
$\mathcal{J}(\mathbf{v})=\frac{1}{2}\mathbf{v}^{T}\mathbf{v}+\frac{1}{2}\sum_{i=0}^{N}(\mathbf{y}_{i}-\mathcal{H}_{i}(\mathcal{M}_{0,i}(\mathbf{B}^{1/2}\mathbf{v}+\mathbf{x}_{0}^{b})))^{T}\mathbf{R}_{i}^{-1}(\mathbf{y}_{i}-\mathcal{H}_{i}(\mathcal{M}_{0,i}(\mathbf{B}^{1/2}\mathbf{v}+\mathbf{x}_{0}^{b}))).$
(4)
We note that the function (4) is continuously differentiable if the operators
$\mathcal{H}_{i}$ and $\mathcal{M}_{0,i}$ are continuously differentiable. To
save both computational cost and time in 4D-Var, tangent linear approximations
of the nonlinear operators in (4) are used in the inner loop [8]. The tangent
linear model (TLM) and tangent linear observation operator are usually derived
by linearising the discrete nonlinear model equations.
In a nonlinear least-squares problem, the function
$\mathcal{J}:\mathbb{R}^{n}\rightarrow\mathbb{R}$ has a special form, as
defined in the following,
$\min_{\mathbf{v}}\mathcal{J}(\mathbf{v})=\frac{1}{2}\|\mathbf{r}(\mathbf{v})\|_{2}^{2},$
(5)
where $\mathbf{r}(\mathbf{v})=[r_{1}(\mathbf{v}),...,r_{n+p}(\mathbf{v})]^{T}$
and each $r_{j}:\mathbb{R}^{n}\rightarrow\mathbb{R}$, for $j=1,2,\dots,n+p$,
is referred to as a residual. In (5), $\|\cdot\|_{2}$ denotes the $l_{2}$-norm,
which will be used throughout this paper. Equation (4) is equivalent to (5)
where the residual vector $\mathbf{r}(\mathbf{v})\in\mathbb{R}^{(n+p)}$ and
its Jacobian $\mathbf{J}(\mathbf{v})$ are given by
$\mathbf{r}(\mathbf{v})=\begin{pmatrix}\mathbf{v}\\ \mathbf{R}_{0}^{-1/2}(\mathbf{y}_{0}-\mathcal{H}_{0}(\mathbf{B}^{1/2}\mathbf{v}+\mathbf{x}_{0}^{b}))\\ \mathbf{R}_{1}^{-1/2}(\mathbf{y}_{1}-\mathcal{H}_{1}(\mathcal{M}_{0,1}(\mathbf{B}^{1/2}\mathbf{v}+\mathbf{x}_{0}^{b})))\\ \vdots\\ \mathbf{R}_{N}^{-1/2}(\mathbf{y}_{N}-\mathcal{H}_{N}(\mathcal{M}_{0,N}(\mathbf{B}^{1/2}\mathbf{v}+\mathbf{x}_{0}^{b})))\end{pmatrix}\text{ and }\mathbf{J}(\mathbf{v})=\begin{pmatrix}\mathbf{I}\\ -\mathbf{R}_{0}^{-1/2}\mathbf{H}_{0}\mathbf{B}^{1/2}\\ -\mathbf{R}_{1}^{-1/2}\mathbf{H}_{1}\mathbf{M}_{0,1}\mathbf{B}^{1/2}\\ \vdots\\ -\mathbf{R}_{N}^{-1/2}\mathbf{H}_{N}\mathbf{M}_{0,N}\mathbf{B}^{1/2}\end{pmatrix},$ (6)
where
$\mathbf{M}_{0,i}=\frac{\partial\mathcal{M}_{0,i}}{\partial\mathbf{x}_{0}}\big{|}_{\mathbf{B}^{1/2}\mathbf{v}+\mathbf{x}_{0}^{b}}\text{ and }\mathbf{H}_{i}=\frac{\partial\mathcal{H}_{i}}{\partial\mathbf{x}}\big{|}_{\mathcal{M}_{0,i}(\mathbf{B}^{1/2}\mathbf{v}+\mathbf{x}_{0}^{b})}$
(7)
are the Jacobian matrices of the model operator $\mathcal{M}_{0,i}$ and
observation operator $\mathcal{H}_{i}$ respectively, where
$\mathbf{M}_{0,i}\in\mathbb{R}^{n\times n}$ is the tangent linear of
$\mathcal{M}_{0,i}$ and $\mathbf{H}_{i}\in\mathbb{R}^{p_{i}\times n}$ is the
tangent linear of $\mathcal{H}_{i}$ [35]. In practice, an adjoint method is
used to calculate the gradient of (4), defined as
$\nabla\mathcal{J}(\mathbf{v})=\mathbf{J}(\mathbf{v})^{T}\mathbf{r}(\mathbf{v}).$
(8)
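A standard sanity check on a gradient assembled as (8), for example from an adjoint code, is a comparison against central finite differences. The sketch below uses an illustrative two-dimensional residual, not the 4D-Var one.

```python
import numpy as np

# Illustrative residual with a hand-coded Jacobian (not the 4D-Var one).
res = lambda v: np.array([v[0] - 1.0, v[0] * v[1]])
jac = lambda v: np.array([[1.0, 0.0], [v[1], v[0]]])
cost = lambda v: 0.5 * res(v) @ res(v)

v = np.array([0.3, -0.7])
grad = jac(v).T @ res(v)                  # gradient via (8)

# Central finite differences should agree with the J^T r gradient.
eps = 1e-7
fd = np.array([(cost(v + eps * e) - cost(v - eps * e)) / (2.0 * eps)
               for e in np.eye(2)])
```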
The Hessian is the matrix of second-order partial derivatives of (4),
$\nabla^{2}\mathcal{J}(\mathbf{v})=\mathbf{J}(\mathbf{v})^{T}\mathbf{J}(\mathbf{v})+\sum\limits_{j=1}^{n+p}r_{j}(\mathbf{v})\nabla^{2}r_{j}(\mathbf{v}).$
(9)
In data assimilation, the second-order terms in (9) are often difficult to
calculate in the time and cost available and too large to store, and so one
cannot easily use Newton-type methods for 4D-Var. Therefore, a first-order
approximation to the Hessian of the objective function (4) is used, resulting
in a GN method, and is given by
$\mathbf{S}=\mathbf{J}(\mathbf{v})^{T}\mathbf{J}(\mathbf{v})=\mathbf{I}+\sum_{i=0}^{N}\mathbf{B}^{1/2}\mathbf{M}_{0,i}^{T}\mathbf{H}_{i}^{T}\mathbf{R}_{i}^{-1}\mathbf{H}_{i}\mathbf{M}_{0,i}\mathbf{B}^{1/2},$
(10)
which is, by construction, full rank and symmetric positive definite. The
condition number in the $l_{2}$-norm of (10), $\kappa(\mathbf{S})$, is the
ratio of its largest and smallest eigenvalues and is related to the number of
iterations used for the linear minimisation problems in 4D-Var and how
sensitive the estimate of the initial state is to perturbations of the data.
We can use $\kappa(\mathbf{S})$ to indicate how quickly and accurately the
optimisation problem can be solved [16].
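The following Python sketch (random stand-in matrices, purely illustrative) assembles (10) for a single observation time and computes $\kappa(\mathbf{S})$ from the eigenvalues, which is legitimate here because $\mathbf{S}$ is symmetric positive definite.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 3                            # toy state and observation sizes
B_sqrt = np.eye(n)                     # B^{1/2} = I for simplicity
HM = rng.standard_normal((p, n))       # stand-in for a single H_i M_{0,i}
R_inv = np.eye(p)

# First-order Hessian approximation (10): identity plus observation term.
S = np.eye(n) + B_sqrt @ HM.T @ R_inv @ HM @ B_sqrt

eigs = np.linalg.eigvalsh(S)           # S is symmetric positive definite
kappa = eigs.max() / eigs.min()        # condition number in the l2-norm
```

Because the observation term is positive semi-definite, every eigenvalue of $\mathbf{S}$ is at least one, so $\kappa(\mathbf{S})\geq 1$.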
### 2.2 4D-Var implementation
The incremental 4D-Var method, which was first proposed for practical
implementation of the NWP problem in [8], has been shown to be equivalent to
the GN method when an exact TLM is used in the inner loop. When an approximate
TLM is used, the method is equivalent to an inexact GN method [18, 22]. A
summary of the GN method is given next.
Algorithm 2.1: GN algorithm applied to (5) [11].
Step $0$: Initialisation. Given $\mathbf{v}^{(0)}\in\mathbb{R}^{n}$ and some stopping criteria. Set $k=0$.
Step $1$: Check stopping criteria. While the stopping criteria are not satisfied, do:
Step $2$: Step computation. Compute a step $\mathbf{s}^{(k)}$ that satisfies
$\mathbf{J}(\mathbf{v}^{(k)})^{T}\mathbf{J}(\mathbf{v}^{(k)})\mathbf{s}^{(k)}=-\mathbf{J}(\mathbf{v}^{(k)})^{T}\mathbf{r}(\mathbf{v}^{(k)}).$ (11)
Step $3$: Iterate update. Set $\mathbf{v}^{(k+1)}=\mathbf{v}^{(k)}+\mathbf{s}^{(k)}$, $k:=k+1$ and go to Step 1.
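The GN iteration can be sketched in a few lines of Python; the residual below is a toy zero-residual problem for illustration, and numpy's direct solver stands in for whatever linear solver a real system would use.

```python
import numpy as np

def gauss_newton(v0, residual, jacobian, max_iter=20, tol=1e-8):
    """Plain Gauss-Newton (Algorithm 2.1) for min 0.5 * ||r(v)||^2.

    No step-length control: the full step is always taken, so
    convergence is only local.
    """
    v = v0.copy()
    for _ in range(max_iter):
        r, J = residual(v), jacobian(v)
        s = np.linalg.solve(J.T @ J, -J.T @ r)   # step equation (11)
        v = v + s
        if np.linalg.norm(s) < tol:
            break
    return v

# Illustrative zero-residual problem with solution v = (1, 2).
res = lambda v: np.array([v[0] - 1.0, v[0] * v[1] - 2.0])
jac = lambda v: np.array([[1.0, 0.0], [v[1], v[0]]])
v_star = gauss_newton(np.array([2.0, 1.0]), res, jac)
```

From this starting point the full GN step happens to be well scaled and the iteration lands on the solution after two steps.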
In Algorithm 2.1, the updated control variable $\mathbf{v}^{(k+1)}$ is
computed by finding a step $\mathbf{s}^{(k)}$ that satisfies (11), which is
known as the preconditioned linearised subproblem. By substituting
$\mathbf{v}^{(k+1)}$ into (3) and rearranging, we obtain the current estimate
$\mathbf{x}_{0}^{(k+1)}$ of the initial state to the original nonlinear 4D-Var
problem.
To reduce the computational cost in large DA systems and to solve the DA
problem in real time, the series of problems (11) can be solved approximately
in the inner loop using iterative optimisation methods such as Conjugate
Gradient (CG) where a limited number of CG iterations are allowed and an exact
or approximate $\mathbf{J}$ is used [18]. We do not focus on this here and
assume that (11) is solved exactly.
We note that the step calculation (11) uniquely defines $\mathbf{s}^{(k)}$,
and $\mathbf{s}^{(k)}$ is a descent direction when $\mathbf{J}(\mathbf{v})$ is
full column rank. This is the case in 4D-Var as the Jacobian,
$\mathbf{J}(\mathbf{v})$ in (6) is full column rank due to the presence of the
identity matrix, thus ensuring that $\mathbf{s}^{(k)}$ is a descent direction.
The definitions of two solution types, namely, local and global minima, are
stated in Appendix A, along with a brief explanation of the local convergence
property of GN. Although the GN method benefits from local convergence
properties, convergence can only be guaranteed if the initial guess
$\mathbf{v}^{(0)}$ of the algorithm is in some neighbourhood around an
(unknown) local solution $\mathbf{v}^{*}$, that is, convergence from an
arbitrary initial guess is not guaranteed [11]. Even if the GN method does
converge, it may not necessarily converge to the global minimum due to the
fact that multiple local minima of a nonlinear least-squares objective
function may exist.
GN has no way of adjusting the length of the step $\mathbf{s}^{(k)}$ and
hence may take steps that are too long, failing to decrease the objective
function value and thus to converge; see Example 10.2.5 in [11] and
Section 5, where the poor performance of GN is demonstrated. As GN only
guarantees local convergence, we are interested in investigating methods that
converge when $\mathbf{v}^{(0)}$ is far away from a local minimiser
$\mathbf{v}^{*}$. We refer to these methods as ‘globally convergent’.
Mathematical theory on global strategies can be found in [36] and [11]. Two
globally convergent methods are GN with line search and GN with quadratic
regularisation, which use a strategy within the GN framework to achieve
convergence to a stationary point given an arbitrary initial guess by
adjusting the length of the step. These methods will be presented in the next
section.
## 3 Globally convergent methods
Within this section, we outline the two globally convergent algorithms that we
apply in Section 5 to the preconditioned 4D-Var problem.
### 3.1 Gauss-Newton with line search (LS)
A line search method aims to restrict the step $\mathbf{s}^{(k)}$ in (11) so
as to guarantee a decrease in the value of $\mathcal{J}$. Within our work, an
inexact line search method known as the backtracking-Armijo (bArmijo)
algorithm is used within the inner loop of GN to find a step length $\alpha>0$
that satisfies the Armijo condition [1]. The Gauss-Newton with backtracking-
Armijo line search (LS) method is as follows.
Algorithm 3.1: LS algorithm applied to (5) [36].
Step $0$: Initialisation. Given $\mathbf{v}^{(0)}\in\mathbb{R}^{n}$, $\tau\in(0,1)$, $\beta\in(0,1)$, $\alpha_{0}>0$ and some stopping criteria. Set $k=0$.
Step $1$: Check stopping criteria. While the stopping criteria are not satisfied, do:
Step $2$: Step computation. Compute a step $\mathbf{s}^{(k)}$ that satisfies
$\mathbf{J}(\mathbf{v}^{(k)})^{T}\mathbf{J}(\mathbf{v}^{(k)})\mathbf{s}^{(k)}=-\mathbf{J}(\mathbf{v}^{(k)})^{T}\mathbf{r}(\mathbf{v}^{(k)})$ (12)
and set $\alpha^{(k)}=\alpha_{0}$.
Step $3$: Check Armijo condition. While the following (Armijo) condition is not satisfied
$\mathcal{J}(\mathbf{v}^{(k)}+\alpha^{(k)}\mathbf{s}^{(k)})\leq\mathcal{J}(\mathbf{v}^{(k)})+\beta\alpha^{(k)}{\mathbf{s}^{(k)}}^{T}\nabla\mathcal{J}(\mathbf{v}^{(k)}),$ (13)
do:
Step $4$: Shrink stepsize. Set $\alpha^{(k)}:=\tau\alpha^{(k)}$ and go to Step 3.
Step $5$: Iterate update. Set $\mathbf{v}^{(k+1)}=\mathbf{v}^{(k)}+\alpha^{(k)}\mathbf{s}^{(k)}$, $k:=k+1$ and go to Step 1.
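Algorithm 3.1 can be sketched as follows in Python; the toy residual is illustrative, and the safeguard on very small $\alpha$ is an addition not present in the algorithm statement.

```python
import numpy as np

def gn_line_search(v0, residual, jacobian, alpha0=1.0, tau=0.5, beta=0.1,
                   max_iter=50):
    """Gauss-Newton with backtracking-Armijo line search (Algorithm 3.1)."""
    cost = lambda v: 0.5 * residual(v) @ residual(v)
    v = v0.copy()
    for _ in range(max_iter):
        r, J = residual(v), jacobian(v)
        grad = J.T @ r                           # gradient (8)
        if np.linalg.norm(grad) < 1e-10:
            break
        s = np.linalg.solve(J.T @ J, -grad)      # GN step, cf. (12)
        alpha = alpha0
        # Shrink alpha (Step 4) until the Armijo condition (13) holds.
        while cost(v + alpha * s) > cost(v) + beta * alpha * (s @ grad):
            alpha *= tau
            if alpha < 1e-12:                    # safeguard against stalling
                break
        v = v + alpha * s
    return v

# Illustrative zero-residual problem with solution v = (1, 2).
res = lambda v: np.array([v[0] - 1.0, v[0] * v[1] - 2.0])
jac = lambda v: np.array([[1.0, 0.0], [v[1], v[0]]])
v_ls = gn_line_search(np.array([2.0, 1.0]), res, jac)
```

When the full GN step already satisfies (13), $\alpha^{(k)}=\alpha_{0}=1$ is never shrunk and LS reproduces the GN iterates exactly.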
In Algorithm 3.1, the control parameter $\beta$ in (13) is typically chosen to
be small (see [36]). The step equation (12) is the same as the GN step
equation (11); thus when $\alpha^{(k)}=1$, the LS iterate coincides with the
GN iterate. The use of condition (13) in this method
ensures that the accepted steps produce a sequence of strictly decreasing
function values given
$\nabla\mathcal{J}(\mathbf{v}^{(k)})^{T}\mathbf{s}^{(k)}<0$. This
latter condition is satisfied by $\mathbf{s}^{(k)}$ defined in (12) whenever
$\mathbf{J}(\mathbf{v}^{(k)})$ is full column rank (which is the case here),
as mentioned in Section 2 [36].
Despite its global convergence property (see Appendix A.1), the LS method has
some disadvantages. We remark that the use of the step length $\alpha^{(k)}$
may sometimes unnecessarily shorten the step $\mathbf{s}^{(k)}$, slowing down
the convergence. Furthermore, LS may be computationally costly due to the need
to calculate the value of the function $\mathcal{J}$ each time $\alpha^{(k)}$
is adjusted, although more sophisticated updating strategies for $\alpha$ may
be used to try to reduce this effect.
Other line search strategies are possible such as Wolfe, Goldstein-Armijo and
more [36], but they are more involved and potentially more computationally
costly. As LS requires the re-evaluation of the outer loop objective function
each time it adjusts its line search parameter, its applicability to real
systems has been in doubt due to the computational cost limitations in 4D-Var
[38]. In Section 5, we show that, given the same cost as the GN method, the
LS method can in some cases better minimise the preconditioned 4D-Var
objective function.
### 3.2 Gauss-Newton with regularisation (REG)
The GN method may also be equipped with a globalisation strategy by including
a regularisation term $\gamma^{(k)}\mathbf{s}^{(k)}$ in the step calculation
(11) of Algorithm 2.1. This ensures that the accepted steps produce a sequence
of monotonically decreasing function values. This is a common variation of the
GN method known as the Levenberg-Marquardt method, proposed in [25] and [30].
The effect of the regularisation parameter $\gamma^{(k)}$ is to implicitly
control the length of the step $\mathbf{s}^{(k)}$. Increasing $\gamma^{(k)}$
shortens the steps, thus increasing the possibility that the procedure will
decrease the objective function in the next iteration. The REG method can be
summarised as follows.
Algorithm 3.2: REG algorithm applied to (5) [33].
Step $0$: Initialisation. Given $\mathbf{v}^{(0)}\in\mathbb{R}^{n}$, $1>\eta_{2}\geq\eta_{1}>0$, $\gamma^{(0)}>0$ and some stopping criteria. Set $k=0$.
Step $1$: Check stopping criteria. While the stopping criteria are not satisfied, do:
Step $2$: Step computation. Compute a step $\mathbf{s}^{(k)}$ that satisfies
$\left(\mathbf{J}(\mathbf{v}^{(k)})^{T}\mathbf{J}(\mathbf{v}^{(k)})+\gamma^{(k)}\mathbf{I}\right)\mathbf{s}^{(k)}=-\mathbf{J}(\mathbf{v}^{(k)})^{T}\mathbf{r}(\mathbf{v}^{(k)}).$ (14)
Step $3$: Iterate update. Compute the ratio
$\rho^{(k)}=\frac{\mathcal{J}(\mathbf{v}^{(k)})-\mathcal{J}(\mathbf{v}^{(k)}+\mathbf{s}^{(k)})}{\mathcal{J}(\mathbf{v}^{(k)})-m(\mathbf{s}^{(k)})},$ (15)
where
$m(\mathbf{s}^{(k)})=\frac{1}{2}\|\mathbf{J}(\mathbf{v}^{(k)})\mathbf{s}^{(k)}+\mathbf{r}(\mathbf{v}^{(k)})\|_{2}^{2}+\frac{1}{2}\gamma^{(k)}\|\mathbf{s}^{(k)}\|_{2}^{2}.$ (16)
Set
$\mathbf{v}^{(k+1)}=\begin{cases}\mathbf{v}^{(k)}+\mathbf{s}^{(k)},&\text{if $\rho^{(k)}\geq\eta_{1}$},\\ \mathbf{v}^{(k)},&\text{otherwise}.\end{cases}$ (17)
Step $4$: Regularisation parameter update. Set
$\gamma^{(k+1)}=\begin{cases}\frac{1}{2}\gamma^{(k)},&\text{if $\rho^{(k)}\geq\eta_{2}$ (very successful iteration)},\\ \gamma^{(k)},&\text{if $\eta_{1}\leq\rho^{(k)}<\eta_{2}$ (successful iteration)},\\ 2\gamma^{(k)},&\text{otherwise (unsuccessful iteration)},\end{cases}$ (18)
Let $k:=k+1$ and go to Step 1.
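A minimal Python sketch of Algorithm 3.2 follows, again on an illustrative toy residual; the gradient-based stopping test is an assumption added for the sketch.

```python
import numpy as np

def gn_regularised(v0, residual, jacobian, gamma0=1.0,
                   eta1=0.1, eta2=0.9, max_iter=50):
    """Gauss-Newton with quadratic regularisation (Algorithm 3.2)."""
    cost = lambda v: 0.5 * residual(v) @ residual(v)
    v, gamma = v0.copy(), gamma0
    eye = np.eye(v0.size)
    for _ in range(max_iter):
        r, J = residual(v), jacobian(v)
        grad = J.T @ r
        if np.linalg.norm(grad) < 1e-10:
            break
        s = np.linalg.solve(J.T @ J + gamma * eye, -grad)   # step (14)
        # Ratio (15) of actual to predicted reduction via the model (16).
        m = 0.5 * np.sum((J @ s + r) ** 2) + 0.5 * gamma * (s @ s)
        rho = (cost(v) - cost(v + s)) / (cost(v) - m)
        if rho >= eta1:                  # accept the step, see (17)
            v = v + s
        # Update the regularisation parameter following (18).
        if rho >= eta2:
            gamma *= 0.5                 # very successful iteration
        elif rho < eta1:
            gamma *= 2.0                 # unsuccessful iteration
    return v

# Illustrative zero-residual problem with solution v = (1, 2).
res = lambda v: np.array([v[0] - 1.0, v[0] * v[1] - 2.0])
jac = lambda v: np.array([[1.0, 0.0], [v[1], v[0]]])
v_reg = gn_regularised(np.array([2.0, 1.0]), res, jac)
```

On this benign problem the iterations are very successful, so $\gamma^{(k)}$ is repeatedly halved and the steps approach plain GN steps.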
As in Algorithms 2.1 and 3.1, the step equation (14) is solved directly in the
numerical experiments in Section 5. We note that when $\gamma^{(k)}=0$ in
(14), the REG step in (14) is the same as the GN step in (11). By comparing
(14) with (11), we are able to see how the REG step differs from the GN step.
The diagonal entries of the Hessian of the 4D-Var objective function (4) are
increased by the regularisation parameter $\gamma^{(k)}$ at each iteration of
the REG method. The method is able to vary its step between a GN and a
gradient descent step by adjusting $\gamma^{(k)}$ (see [36]) but may be costly
due to the need to calculate the value of the function $\mathcal{J}$ to assess
the step. Note that other choices of the factors $\frac{1}{2}$ and $2$ for
updating $\gamma^{(k)}$ in (18) are possible and even more sophisticated
variants for choosing $\gamma^{(k)}$ have been proposed. The proof of global
convergence of the REG method is presented in Appendix A.2.
## 4 Experimental design
Before evaluating the GN, LS and REG methods numerically, we first explain the
experimental design.
Twin experiments are commonly used to test DA methods. They use error
statistics that satisfy the DA assumptions as well as synthetic observations
generated by running the nonlinear model forward in time to produce a
reference state (not generally a local minimum of (5)). Within this section,
we define our choices for the twin experimental design. We begin by briefly
outlining two commonly used dynamical models, which are sensitive to initial
conditions (i.e. chaotic), a property shared with NWP models.
### 4.1 Models
Lorenz 1963 model (L63) Proposed in [28], the Lorenz 63 model (L63) is a
popular experimental dynamical system that represents meteorological processes
using a simple model. The model consists of three nonlinear, ordinary
differential equations given as
$\displaystyle\frac{dx}{dt}$ $\displaystyle=\sigma(y-x),$ (19)
$\displaystyle\frac{dy}{dt}$ $\displaystyle=x(\rho-z)-y,$
$\displaystyle\frac{dz}{dt}$ $\displaystyle=xy-\beta z,$
where the state vector consists of $n=3$ time-dependent variables
$\mathbf{x}=[x(t),y(t),z(t)]^{T}\in\mathbb{R}^{3}$. The scalar parameters are
chosen to be $\sigma=10$, $\rho=28$ and $\beta=\frac{8}{3}$, making the system
chaotic. A second-order Runge-Kutta method is used to discretise the model
equations using a time step $\Delta t=0.025$.
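The model and its time stepper can be sketched as follows; Heun's method is assumed here as the second-order Runge-Kutta variant, which the text does not specify.

```python
import numpy as np

def l63_rhs(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz 63 equations (19)."""
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def rk2_step(f, x, dt=0.025):
    """One second-order Runge-Kutta step (Heun's method)."""
    k1 = f(x)
    k2 = f(x + dt * k1)
    return x + 0.5 * dt * (k1 + k2)

def integrate(f, x0, n_steps, dt=0.025):
    """Nonlinear model M_{0,i}: evolve x0 over n_steps steps of size dt."""
    x = x0.copy()
    for _ in range(n_steps):
        x = rk2_step(f, x, dt)
    return x

# One time unit (40 steps of dt = 0.025) from an arbitrary starting point.
x_end = integrate(l63_rhs, np.array([1.0, 1.0, 1.0]), n_steps=40)
```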
Lorenz 1996 model (L96) Another popular experimental system is the atmospheric
Lorenz 96 model (L96) [29] given by the following $n$ equations,
$\frac{dx_{j}}{dt}=-x_{j-2}x_{j-1}+x_{j-1}x_{j+1}-x_{j}+F,$ (20)
where $j=1,2,\ldots,n$ is a spatial coordinate. For a forcing term $F=8$ and
$n=40$ state variables, the system is chaotic [29]. The variables are evenly
distributed over a circle of latitude of the Earth, forming a cyclic domain of
$n$ points, and a single time unit is equivalent to approximately 5
atmospheric days. A fourth-order Runge-Kutta method is used to discretise the
model equations using a time step $\Delta t=0.025$ (approximately 3 hours).
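An analogous sketch for L96, where `np.roll` implements the cyclic indexing of (20); the near-equilibrium starting point is an illustrative choice.

```python
import numpy as np

def l96_rhs(x, F=8.0):
    """Right-hand side of the Lorenz 96 equations (20); np.roll supplies
    the cyclic neighbours x_{j-1}, x_{j+1} and x_{j-2}."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(f, x, dt=0.025):
    """One classical fourth-order Runge-Kutta step (about 3 hours)."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

x = np.full(40, 8.0)   # the x_j = F equilibrium...
x[0] += 0.01           # ...slightly perturbed, so the dynamics develop
for _ in range(40):    # one model time unit (about 5 atmospheric days)
    x = rk4_step(l96_rhs, x)
```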
For both the L63 and L96 models, the time-window length $t_{a}$ is varied in
the numerical experiments in Section 5.1. We will now outline how we formulate
the twin experiments, beginning with generating the reference state.
### 4.2 Twin experiments
The reference state at time $t_{0}$, $\mathbf{x}^{ref}_{0}$ is used as the
basis of a twin experiment in the definition of the background state (the
initial guess for the optimisation algorithms) as well as to generate the
observations using a nonlinear model run called the ‘nature’ run. We begin by
explaining how we obtain $\mathbf{x}^{ref}_{0}$.
Reference state A vector of length $n$ is drawn from the uniform distribution
and used as the initial vector of state variables $\mathbf{x}^{rand}$. For the
L63 model, $\mathbf{x}^{rand}$ is integrated forward using a second-order
Runge-Kutta method, which is spun-up over 1000 time steps to obtain the
reference state on the model attractor for the L63 twin experiments,
$\mathbf{x}^{ref}_{0}\in\mathbb{R}^{3}$. This is the same for the L96 model
except a fourth-order Runge-Kutta method is used to obtain
$\mathbf{x}^{ref}_{0}\in\mathbb{R}^{40}$. The reference state at time $t_{0}$,
$\mathbf{x}^{ref}_{0}$ can then be used to obtain the full nonlinear model
trajectory.
We next explain how we obtain the background state vector used within our twin
experiments to be used as the initial guess for the optimisation algorithms.
Background In 4D-Var, the initial guess for the optimisation algorithm is
taken to be the background state at time $t_{0}$, $\mathbf{x}_{0}^{b}$, which
incorporates information from previous forecasts. In our experiments, the
background state vector $\mathbf{x}^{b}_{0}$ is generated by adding Gaussian
noise
$\mathbf{\varepsilon_{b}}\sim\mathcal{N}(0,\mathbf{B}),$ (21)
to the reference state at time $t_{0}$, $\mathbf{x}^{ref}_{0}$. For the
background error covariance matrix, we choose
$\mathbf{B}=\sigma_{b}^{2}\mathbf{I}_{n}$ where $\sigma_{b}^{2}$ is the
background error variance. The standard deviations of the errors from the
reference state for each model are based on the average order of magnitude of
the entries of $\mathbf{x}^{ref}_{0}$. For the L63 experiments,
$\sigma_{b}^{2}=0.25,1,6.25$ and $25$ represent a $5\%,10\%,25\%$ and $50\%$
error respectively. Similarly for the L96 experiments we set
$\sigma_{b}^{2}=0.0625,0.25,1.5625$ and $6.25$.
As previously mentioned, we generate synthetic observations from a nonlinear
model run using the reference state at time $t_{0}$, $\mathbf{x}^{ref}_{0}$.
We next describe the choices we made when specifying these observations.
Observations We consider both the spatial and temporal locations of the
observations. We assume that for both models observations of single state
variables are taken and $\mathbf{H}_{i}$ are the exact observation operators
at times $t_{i}$ used to map to observation space. For the L63 model, we
consider the case where we have $p=2$ observations, one of $x$ and one of $z$,
per observation location in time. For the L96 model, we consider the case
where we observe the first half of the state variables at each observation
location in time. This choice mimics what we may expect in reality, where
observations are concentrated in one part of the globe. For both models, we
first consider the case where there is only one set of observations, at time
$t_{N}$ (Nobs1), and
then show the effect of using more observations along the time-window in later
experiments. We use imperfect observations where the observations
$\mathbf{y}_{i}$ are generated by adding Gaussian noise
$\mathbf{\varepsilon_{o}}\sim\mathcal{N}(0,\mathbf{R}_{i}),$ (22)
to $\mathbf{H}_{i}\mathbf{x}^{ref}_{i}$ for each observation location in time.
For the observation error covariance matrix we choose
$\mathbf{R}_{i}=\sigma_{o}^{2}\mathbf{I}_{p}$ where $\sigma_{o}^{2}$ is the
observation error variance. We expect the problem (4) to be more ill-
conditioned, thus difficult to solve accurately, when the ratio
$\frac{\sigma_{b}}{\sigma_{o}}$ (23)
is large [19, 20]. The ratio (23) controls the influence of the observation
term in the preconditioned objective function (4). For all experiments, we set
the standard deviation of the observation error to be $10\%$ of the average
order of magnitude of the entries of $\mathcal{H}(\mathbf{x}^{ref}_{i})$ for
both models. For the L63 model, this is $\sigma_{o}^{2}=1$ and for the L96
model, this is $\sigma_{o}^{2}=0.25$. We vary the background error variance
$\sigma_{b}^{2}$ above and below $\sigma_{o}^{2}$ such that the ratio (23)
varies. This can be thought of as having more confidence in the observations
compared to background when $\sigma_{b}>\sigma_{o}$ and vice versa.
Furthermore, as the initial guess is set to be the background state vector,
which is dependent on the value of $\sigma_{b}$, by varying $\sigma_{b}^{2}$
we are essentially varying the initial guess of the algorithms, thus
eliminating starting point bias from our results [4]. It is important to
recall here that under certain conditions, the GN method is known for its fast
convergence properties when in close vicinity to a local minimum, see [11]. By
choosing a small value of $\sigma_{b}^{2}$, we expect the performance of GN to
beat that of both LS and REG as it does not require the adjustment of the
additional parameters $\alpha^{(k)}$ and $\gamma^{(k)}$. Also, when assuming
that the observations are more accurate than the background, the use of more
observation locations in time means that we are constraining the estimate of
the initial state more tightly to the reference state in the twin experiment
design. The effect this has on the convergence of the optimisation methods
will be investigated. We next outline the algorithmic choices we have made.
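The background and observation generation described above can be sketched for the L96 configuration as follows; the seed and the uniform stand-in for the spun-up reference state are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)           # illustrative seed
n, p = 40, 20                             # L96 state size, observed components
sigma_b2, sigma_o2 = 6.25, 0.25           # 50% background, 10% observation error

# Stand-in for the spun-up reference state x^ref_0 (a uniform draw here only
# for brevity; the paper spins this up onto the model attractor first).
x_ref0 = rng.uniform(size=n)

H = np.eye(n)[:p]                         # observe the first half of the state

# Background (21): reference state plus N(0, B) noise with B = sigma_b^2 I.
x_b0 = x_ref0 + np.sqrt(sigma_b2) * rng.standard_normal(n)

# Observations (22): mapped reference plus N(0, R) noise with R = sigma_o^2 I.
y = H @ x_ref0 + np.sqrt(sigma_o2) * rng.standard_normal(p)
```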
### 4.3 Algorithmic choices
Stopping criteria We now outline the criteria used to terminate Algorithms
2.2, 3.1 and 3.2. Due to the limited time and computational cost available in
practice, the GN method is not necessarily run to convergence and a stopping
criterion is used to limit the number of iterations. Each calculation of the
residual vector $\mathbf{r}(\mathbf{v})$ requires the non-linear model to be
run forward to obtain the state at each observation location in time. This can
then be used to calculate the value of the objective function. Furthermore,
one run of the adjoint model is required to calculate the gradient.
To reduce computational cost in practical implementations of 4D-Var, the inner
loop problem is solved at a lower resolution than the outer loop problem [13].
However, as the dimensions of the problems used within this paper are
relatively small compared to DA systems in practice, we solve the full
resolution inner loop problem using the full resolution residual and Jacobian
given in (6) and solve the inner loop problem using MATLAB’s backslash
operator where an appropriate solver is chosen according to the properties of
the Hessian matrix $\nabla^{2}\mathcal{J}(\mathbf{v})$ (see [31] for more
details). The limit on the total number of function and Jacobian evaluations
is achieved by using the following criterion
$k_{J}+l\leq\tau_{e},$ (24)
where $k_{J}$ is the total number of Jacobian evaluations (which is equivalent
to the number of outer iterations $k$ in 4D-Var), $l$ is the total number of
function evaluations and $\tau_{e}$ is the tolerance. The tolerance $\tau_{e}$
can be chosen according to the maximum number of evaluations desired. We note
that for GN, $k_{J}=l$ as the method requires as many Jacobian evaluations as
function evaluations. However, for both LS and REG there could be more than
one function evaluation per Jacobian evaluation since for unsuccessful steps,
the Jacobian is not updated so $k_{J}\leq l$.
To ensure that the algorithms are stopped before the function values stagnate,
the following common termination criterion based on the relative change in the
function at each iteration is also used
$\frac{|\mathcal{J}(\mathbf{v}^{(k-1)})-\mathcal{J}(\mathbf{v}^{(k)})|}{1+\mathcal{J}(\mathbf{v}^{(k)})}\leq\tau_{s},$
(25)
for $k\geq 1$, where $\tau_{s}$ is the tolerance, chosen to be $10^{-5}$. The
criterion (25) is used throughout Section 5 unless indicated otherwise.
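As a small illustration, criterion (25) reduces to a one-line test:

```python
def stop_relative_change(J_prev, J_curr, tau_s=1e-5):
    """Relative-change termination test (25), applied for k >= 1."""
    return abs(J_prev - J_curr) / (1.0 + J_curr) <= tau_s
```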
We expect the norm of the gradient of the objective function,
$\|\nabla\mathcal{J}(\mathbf{v}^{(k)})\|$ to be close to zero at a stationary
point. The following termination criterion will be used in Section 5.2 to
identify whether or not a given method has located a stationary point
$\|\nabla\mathcal{J}(\mathbf{v}^{(k)})\|\leq 10^{-5}.$ (26)
Parameter choices For the LS method, we choose $\alpha_{0}=1$ so that the
first step assessed by the bArmijo rule is the GN step. We set $\beta=0.1$ and
to adjust the step length, $\tau=0.5$.
For the REG method, we select the initial regularisation parameter to be
$\gamma^{(0)}=1$ so that the condition in Algorithm 3.2, $\gamma^{(0)}>0$, is
satisfied and the REG step differs from the GN step. Furthermore, we choose
$\eta_{1}=0.1$ and $\eta_{2}=0.9$ to assess how well the model (16)
approximates the true function value at the next iteration.
For all three optimisation methods, we set $\tau_{e}=8,100$ or $1000$
depending on the experiment. The choice of $\tau_{e}=8$ comes from that which
is used operationally in the ECMWF Integrated Forecasting System [12], whereas
the choice of $\tau_{e}=100$ or $1000$ is used to measure the performance of
the optimisation methods when closer to convergence.
In order to best present our results, we use accuracy profiling described as
follows.
Accuracy profiles An accuracy profile shows the proportion of problems a given
method can solve within a fixed amount of work ($\tau_{e}$) and a given
tolerance ($\tau_{f}$) of the change in the function value [34]. To ensure the
robustness of our results, we apply the three optimisation methods to a series
of $n_{r}$ randomly generated problems, where the randomness occurs through
the background and observation error vectors, $\mathbf{\varepsilon_{b}}$ and
$\mathbf{\varepsilon_{o}}$. For each realisation, a new
$\mathbf{\varepsilon_{b}}$ and $\mathbf{\varepsilon_{o}}$ are generated from
their respective distributions, (21) and (22). The following criterion
proposed in [34] is used to flag that an estimate of the initial state has
been obtained by an optimisation method
$\frac{\mathcal{J}(\mathbf{v}_{0}^{(l)})-\mathcal{J}(\mathbf{v}_{0}^{t})}{\mathcal{J}(\mathbf{v}_{0}^{(0)})-\mathcal{J}(\mathbf{v}_{0}^{t})}\leq\tau_{f},$
(27)
where $\mathbf{v}_{0}^{t}$ is a solution of (4) referred to as the ‘truth’ and
$\tau_{f}$ is the tolerance. The measure (27) compares the optimality gap
$\mathcal{J}(\mathbf{v}_{0}^{(l)})-\mathcal{J}(\mathbf{v}_{0}^{t})$ relative
to the best reduction
$\mathcal{J}(\mathbf{v}_{0}^{(0)})-\mathcal{J}(\mathbf{v}_{0}^{t})$ [34]. This
ensures that the 4D-Var problem is only flagged as solved by the optimisation
method once the value of the objective function is within some error
($\tau_{f}$) of the truth.
For our problems, the truth is unknown. We only know that, due to the
nonlinearity of the 4D-Var problem, there may exist many values of
$\mathbf{v}_{0}$ that could minimise (4) locally. We are interested in the
estimate $\mathbf{v}_{0}^{t}$ that gives the greatest reduction in (4) that
any of the three methods can obtain. Therefore, we set the truth to be the
$\mathbf{v}_{0}^{(l)}$ obtained by any of the three methods that gives the
smallest function value within the given number of evaluations. Using this
criterion allows us to benchmark the methods against each other using accuracy
profiles.
For each experiment, we plot the proportion of the same $n_{r}=100$
realisations solved by each method against the relative accuracy obtained,
$\tau_{f}$. The relative accuracy obtained is varied using $\tau_{f}=10^{-i}$,
where $i=0,0.01,0.02,\dots,5$.
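The profile computation itself reduces to evaluating (27) over the realisations; the objective values below are hypothetical, for illustration only.

```python
import numpy as np

def accuracy_profile(J_final, J_init, J_truth, tau_f_grid):
    """Proportion of realisations solved to relative accuracy tau_f, cf. (27)."""
    gap = (J_final - J_truth) / (J_init - J_truth)   # relative optimality gap
    return np.array([np.mean(gap <= tau_f) for tau_f in tau_f_grid])

# Hypothetical objective values for three realisations of one method.
J_init = np.array([10.0, 10.0, 10.0])    # J at the background (initial guess)
J_truth = np.array([1.0, 1.0, 1.0])      # best value found by any method
J_final = np.array([1.0, 1.9, 5.5])      # final values; gaps 0, 0.1, 0.5
profile = accuracy_profile(J_final, J_init, J_truth, [1.0, 0.1, 0.01])
```

As $\tau_{f}$ shrinks, fewer realisations qualify as solved, which is exactly the shape traced out in the accuracy profiles of Section 5.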
## 5 Numerical results
In this section, we present the results when applying GN, LS and REG using the
experimental design described in the previous section. We begin by conducting
experiments showing the effect of the length of the assimilation time-window
on the convergence of the three methods.
### 5.1 Effect of time-window length
We produce accuracy profiles for different time-window lengths to understand
the effect this has on the convergence of each method while limiting the
number of function and Jacobian evaluations to $\tau_{e}=8$. We choose a
background error of $50\%$ and an observation error of $10\%$ so that the
ratio (23) is large relative to the other cases we consider. For both the L63
and L96 models, we consider both short and long time-window lengths of 6 hours
($t_{a}=0.05$), 12 hours ($t_{a}=0.1$), 1 day ($t_{a}=0.2$) and 5 days
($t_{a}=1$) with the results shown in Figure 1.
Figure 1: Accuracy profiles for the GN (black), LS (red) and REG (blue)
methods applied to the L63 and L96 problems using different time-window
lengths $t_{a}$. These show the proportion of $n_{r}=100$ problems solved by
each of the methods against the specified accuracy $-\log(\tau_{f})$ when
$\tau_{e}=8$. The GN line is below the LS line in (a), (b), (e), (f) and (g).
From Figure 1, we see that as the length of the time-window of both the L63
and L96 problems is increased, the performance of the GN, LS and REG methods
suffers.
For the L63 problems, Figures 1(a) and 1(b) show that GN and LS perform
similarly and solve more problems to the highest accuracy than REG. However,
as $\tau_{f}$ is increased, REG is solving all of the problems, so the REG
estimate must be close to that of GN and LS. In Figure 1(c), both LS and REG
solve fewer problems compared to GN, even for relatively large choices of
$\tau_{f}$. However, there is a choice of $\tau_{f}$ where all three methods
are solving all problems, again indicating that the LS and REG estimates are
close to the GN estimate. The initial guess for the three methods (the
background) appears to be close enough to the solution and so the GN step is
able to attain a sufficient decrease in the objective function as predicted by
its local convergence properties. LS and REG are inadvertently shortening the
GN step, which is a good step in the short time-window case. As we know, LS
and REG need to adjust their respective parameters, $\alpha^{(k)}$ and
$\gamma^{(k)}$ to attain GN’s fast local convergence, so LS and REG are
requiring more evaluations than GN to achieve the same result. For the L96
short time-window results in Figures 1(e), 1(f) and 1(g), this is not the
case. In fact, REG is outperforming GN and LS and it appears that LS is
mimicking the behaviour of GN quite closely as the GN step is attaining a
sufficient decrease in the objective function. However the decrease that the
REG step is achieving appears to be much greater for the L96 problems.
Therefore, REG is able to solve a greater number of problems within a higher
level of accuracy, which explains the difference between the L63 results in
Figures 1(a), 1(b) and the L96 results in 1(e) and 1(f).
The long time-window results for the L63 and L96 problems are shown in Figures
1(d) and 1(h), respectively. In both figures, LS is outperforming GN. For the
L63 problems, the performance of GN does not differ much from the performance
of REG. However, comparing the performance of GN in 1(c) with 1(d), we can see
that performance of GN has deteriorated greatly when increasing the length of
the time-window. In fact, in the results where even longer time-windows are
used (not included here), LS and REG outperform the GN method for the L63
problems, as in 1(h).
For the remainder of our experiments, we set $t_{a}=1$ in order to consider a
long time-window case only, as this is where we expect to see the greatest
benefit from the globally convergent methods.
### 5.2 Behaviour of methods and stagnation of GN
In order to gain an understanding of how the globally convergent methods, LS
and REG, compare with GN, we next demonstrate the behaviour of GN, LS and REG
when applied to typical preconditioned 4D-Var L63 and L96 problems, where the
background error is large and the time-window length is long.
Figure 2 shows the convergence plots for two typical realisations when using
the GN, LS and REG methods to obtain a solution to the preconditioned 4D-Var
problem with the L63 and L96 models. In this figure, the total number of
function and Jacobian evaluations allowed is set to $\tau_{e}=100$ for both
the L63 and the L96 problems to see if any progress is made beyond the number
of evaluations allowed in practice. We recall that GN updates the gradient (8)
when the function (4) is updated, so there are as many function evaluations as
Jacobian evaluations. However, both LS and REG only update the Jacobian on
successful iterations when there is a reduction in the objective function.
Therefore, the total number of evaluations used by each of the methods could
consist of a different combination of function and Jacobian evaluations. As in
Section 5.1, we set the ratio (23) to be large. It is in this case that we are
able to best demonstrate the benefit of the globally convergent methods, LS
and REG. In Figure 2, we set $\tau_{s}=10^{-3}$ to ensure that the methods
stop before the function values stagnate. As Figure 2 includes function
evaluations for both successful and unsuccessful step calculations, it is
natural to see jumps in the function values of LS and REG while their
respective parameters, $\alpha^{(k)}$ and $\gamma^{(k)}$, are adjusted to
guarantee a reduction in the function.
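The bookkeeping behind these stopping rules can be sketched as follows. The exact forms of criteria (24)-(26) are not reproduced in this excerpt, so the tests and default thresholds below are illustrative assumptions, not the paper's precise definitions:

```python
import numpy as np

def should_stop(J_prev, J_curr, grad, n_evals,
                tau_s=1e-3, tau_g=1e-5, tau_e=100):
    """Illustrative stopping tests, in the spirit of criteria (24)-(26):
    an evaluation budget, a relative change in the objective, and a
    gradient-norm test. Returns which criterion fired, or None."""
    if n_evals >= tau_e:                              # budget, cf. (24)
        return "budget"
    if abs(J_prev - J_curr) <= tau_s * abs(J_curr):   # relative change, cf. (25)
        return "function"
    if np.linalg.norm(grad) <= tau_g:                 # gradient norm, cf. (26)
        return "gradient"
    return None
```

A small `tau_s` such as the $\tau_{s}=10^{-3}$ used in Figure 2 stops a run before the function values stagnate, while the gradient test is the one that certifies a stationary point.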
Figure 2: Convergence plots showing the value of the objective function at
each iteration (including unsuccessful iterations) of the GN (black), LS (red)
and REG (blue) methods when applied to a L63 problem (a) and a L96 problem
(b).
For the L63 problems (Figure 2(a)), all three methods stop when the relative
change in the function criterion (25) is satisfied and before the limit on the
total number of function and Jacobian evaluations (24) is met. Table 1
supports this figure by showing the algorithmic output for each of the GN, LS
and REG methods when two different stopping criteria are used. From these
results, we see that both LS and REG stop at the same function value, although
REG requires fewer evaluations to do so, and that GN is converging towards a
larger value of the objective function (4) than LS and REG. By instead
stopping on the criterion (26) and setting $\tau_{e}=1000$, we see in Table 1
that all three methods are still making progress at the gradient and iterate
level, indicating that the methods are in fact locating stationary points
despite only a small further change in the function value beyond what is shown
in Figure 2.
Table 1: Table of algorithmic output when applying GN, LS and REG to a typical realisation of the L63 problems, corresponding to Figure 2(a).

Criteria | Method | $l$ | $k_{J}$ | $\mathcal{J}(\mathbf{v}^{(k_{J})})$ | $\|\mathbf{v}^{(k_{J})}-\mathbf{v}^{(k_{J}-1)}\|$ | $\|\nabla\mathcal{J}(\mathbf{v}^{(k_{J})})\|$
---|---|---|---|---|---|---
(25) | GN | 20 | 20 | 81.55 | 0.42 | 86.35
(25) | LS | 27 | 14 | 8.69 | 0.03 | 5.18
(25) | REG | 14 | 14 | 8.69 | 0.05 | 1.00
(26) | GN | 101 | 101 | 78.87 | $3.54\times 10^{-8}$ | $8.47\times 10^{-6}$
(26) | LS | 43 | 27 | 8.69 | $8.21\times 10^{-7}$ | $8.31\times 10^{-6}$
(26) | REG | 66 | 66 | 8.69 | $7.34\times 10^{-7}$ | $9.24\times 10^{-6}$
For the L96 problems (Figure 2(b)), LS and REG stop when (25) is satisfied and
before (24) is satisfied, whereas GN only satisfies (24). Table 2 supports
this figure by showing the algorithmic output for each of the GN, LS and REG
methods when two different stopping criteria are used. From these results, we
see that both GN and LS are stopping at a larger value of the objective
function (4) than REG. Recall that the norm of the gradient criterion (26) can
be used to identify whether or not a given method has located a stationary
point. The values of $\|\nabla\mathcal{J}(\mathbf{v}^{(k_{J})})\|$ for LS and
REG when the relative change in the function criterion (25) is used are much
smaller than that of GN. However, when we instead use the norm of the gradient
criterion (26) and limit the number of evaluations to $\tau_{e}=1000$, the
methods stop on the evaluation limit. Therefore, unlike for the L63 problems,
our results do not indicate that the estimates of LS and REG are stationary
points of the objective function. However, LS and REG are able to make some
improvement (REG more so than LS) at the gradient-norm level, unlike GN, which
appears to fluctuate at the gradient level even after $\tau_{e}=1000$
evaluations.
Table 2: Table of algorithmic output when applying GN, LS and REG to a typical realisation of the L96 problems, corresponding to Figure 2(b).

Criteria | Method | $l$ | $k_{J}$ | $\mathcal{J}(\mathbf{v}^{(k_{J})})$ | $\|\mathbf{v}^{(k_{J})}-\mathbf{v}^{(k_{J}-1)}\|$ | $\|\nabla\mathcal{J}(\mathbf{v}^{(k_{J})})\|$
---|---|---|---|---|---|---
(25) | GN | 50 | 50 | 1728.99 | 20.02 | 5758.47
(25) | LS | 24 | 14 | 12.72 | 0.07 | 10.09
(25) | REG | 19 | 16 | 5.52 | 0.08 | 1.89
(26) | GN | 500 | 500 | 960.32 | 15.88 | 8015.13
(26) | LS | 967 | 32 | 12.71 | 0 | 10.09
(26) | REG | 967 | 32 | 5.51 | 0 | 0.03
Table 2 shows that as LS and REG iterate beyond what is shown in Figure 2(b),
there is very little change in the value of the cost function, despite making
some change at the iterate and/or gradient level. Although progress is made at
those levels, rounding error means that the function value may stagnate
because of limitations in computer precision and the conditioning of the
problem. The condition number of the Hessian $\kappa(\mathbf{S})$ can be used
to indicate the accuracy we can expect to achieve. In our work, both the L63
and L96 problems are well-conditioned.
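As a minimal numerical illustration of this point, a Hessian of the form "identity plus a non-negative definite term" (the structure of the preconditioned Hessian described later in the paper) has a condition number that can be computed directly. The matrix below is a random stand-in, not the paper's actual $\mathbf{S}$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3
Hhat = rng.standard_normal((m, n))   # hypothetical stand-in for the scaled observation term
S = np.eye(n) + Hhat.T @ Hhat        # identity plus a non-negative definite matrix
kappa = np.linalg.cond(S)            # condition number kappa(S) in the 2-norm

# The eigenvalues of S are 1 + eig(Hhat^T Hhat), so
# kappa = (1 + lambda_max) / (1 + lambda_min) >= 1, i.e. the identity shift
# bounds the smallest eigenvalue away from zero.
lam = np.linalg.eigvalsh(Hhat.T @ Hhat)   # ascending order
```

The identity shift is what keeps such problems away from singularity: even if `Hhat.T @ Hhat` is rank-deficient, the smallest eigenvalue of `S` is at least 1.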
The observed behaviour in this section is partly due to the fact that there is
no mechanism in GN to force it to converge as there is in LS and REG. The
benefit of these mechanisms is clearly shown in Figure 2(b) where the GN
method is stagnating while the LS and REG methods are converging, further
motivating our investigation of these methods.
### 5.3 Effect of background error variance
In this section, we study the effect on the performance of the three methods
when the uncertainty in the background information is increased whilst the
uncertainty in the observations is fixed. Figure 3 shows the accuracy profiles
used to benchmark the performance of the GN, LS and REG methods as the
tolerance $\tau_{f}$ is reduced, where $\tau_{e}=8$, while Figure 4 allows
$\tau_{e}$ to increase for both models with the increase chosen relative to
the dimension of the models, i.e. a larger increase in $\tau_{e}$ is allowed
for the L63 problems, where $n=3$, than for the L96 problems, where $n=40$. From
both these figures, we generally see that as the error in the background is
reduced, the performance of all three methods improves. The conditioning of
the problem has been shown to depend on the ratio of the standard deviations
of the background and observation errors (23) [19, 20]. Therefore, the
estimate obtained by any of the optimisation methods may not be accurate
enough to produce a reliable forecast if the ratio (23) is large. The accuracy
of the estimate obtained by each method is investigated further later in the
paper.
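The accuracy profiles used throughout this section can be computed with a few lines of array code. The exact form of criterion (27) is not reproduced in this excerpt, so the convergence test below follows the Moré-Wild style test (solve to within a fraction `tau_f` of the best achievable decrease) as an assumption about its shape:

```python
import numpy as np

def accuracy_profile(F, f0, f_best, tau_f):
    """Fraction of problems 'solved' by each method to tolerance tau_f,
    assuming a More-Wild style test (our guess at the form of criterion (27)):
    problem p is solved when F[method, p] <= f_best[p] + tau_f*(f0[p]-f_best[p]).

    F      : (n_methods, n_problems) final objective values per method
    f0     : (n_problems,) objective values at the initial guess
    f_best : (n_problems,) best objective value found by any method
    """
    F = np.asarray(F, dtype=float)
    f0 = np.asarray(f0, dtype=float)
    f_best = np.asarray(f_best, dtype=float)
    thresh = f_best + tau_f * (f0 - f_best)      # per-problem target value
    return (F <= thresh[None, :]).mean(axis=1)   # fraction solved per method
```

Sweeping `tau_f` downwards then traces out one curve per method, as in Figures 3 and 4.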
Figures 3(a) and 3(e) show that a globally convergent method is able to find a
smaller function value than GN. As the ratio (23) is reduced, from Figures
3(b), 3(c), 3(f) and 3(g) we see that the REG method is able to solve the most
problems at the highest level of accuracy. When there is less uncertainty in
the background than in the observations, Figure 3(d) shows that for the L63
problems all three methods solve close to all of the problems at a high level
of accuracy; the problem is well-conditioned in this regime, which could
explain this result. However, for the L96 problems, Figure 3(h) shows that the
GN and LS methods solve the majority of the problems, while REG does not
perform as well at higher levels of background accuracy. The performance of
REG improves for the L96 problems when more evaluations are allowed, as shown
in Figure 4(h).
In Figure 4, where more evaluations are allowed than in Figure 3, we see a
much greater difference between the globally convergent methods and GN when
the background error is larger than the observation error. In Figures 4(a),
4(b), 4(e) and 4(f), it appears that when more evaluations are allowed, the
performance of GN worsens relative to LS and REG in the case when $\sigma_{b}$
is large. The globally convergent methods are able to locate estimates of the
initial states for the preconditioned 4D-Var problem, which when compared to
GN, better minimise the objective function (4). When the background error is
the same as the observation error in Figure 4(c), it is GN that is performing
better than LS and REG for the L63 problems. For LS, this could be because LS
is unnecessarily shortening the GN step, causing slower convergence. For the
REG method, the regularisation parameter must be shrunk and therefore, REG
requires more iterations to benefit from GN’s fast convergence property.
In Figure 4(d), all three methods solve essentially the same number of
problems, with a slight decrease in success for REG, which again could be due
to the need to adjust the regularisation parameter. For the L96 problems, we
see a slightly different result. Figures 4(g) and 4(h) show that a globally
convergent method is solving more problems, more accurately than GN despite
the background error being at most equal to the observation error. This is an
interesting result for this higher-dimensional model, as we would expect GN to
converge locally at a faster rate than the globally convergent methods since
GN does not need to adjust any parameters; however, we find that this is not
the case.
Figure 3: Accuracy profiles for the GN (black), LS (red) and REG (blue)
methods applied to the L63 problems in (a)-(d) and the L96 problems in (e)-(h)
where $n_{r}=100$, $\tau_{e}=8$ and where there is one observation at the end
of the time-window. The observation error is $10\%$ and the background error
is varied above and below this, as indicated in the plot captions. The GN line
is below the LS line in (c), (d), (g) and (h).
In DA, we are interested in the accuracy of the estimate obtained because, in
applications such as NWP, the estimate is used as the initial condition for a
forecast, and so the quality of that forecast depends on the errors in the
estimate. In the following section, we quantify and compare the errors in the
estimates obtained by each method.
Figure 4: Accuracy profiles for the GN (black), LS (red) and REG (blue)
methods applied to the L63 problems where $\tau_{e}=1000$ in (a)-(d) and the
L96 problems where $\tau_{e}=100$ in (e)-(h). We set $n_{r}=100$ and there is
one observation at the end of the time-window. The observation error is $10\%$
and the background error is varied above and below this, as indicated in the
plot captions.
### 5.4 Quality of the analysis
We recall that the initial guess of the algorithms is the reference state
$\mathbf{x}_{0}^{ref}$ perturbed by the background error
$\mathbf{\varepsilon_{b}}$. In order to compare the quality of the estimate
obtained by each method, we compare each estimate to the reference state
$\mathbf{x}_{0}^{ref}$ to measure how far it has deviated from the reference.
The analysis error for each state variable is
given by $\varepsilon^{a}_{i}=x^{a}_{i}-x^{ref}_{i}$. For each realisation, we
calculate the root mean square error (RMSE) of the analysis error, which is
the difference between the reference state and the estimate obtained by each
method,
$RMSE=\frac{1}{\sqrt{n}}\|\varepsilon^{a}\|_{2}.$ (28)
For each method, we plot the percentage of problems solved (according to the
criterion (27) where $\tau_{f}=10^{-3}$) within a specified tolerance of the
RMSE (28). We acknowledge in this work that the code for the RMSE profiles has
been adapted from the code for the data profiles used in [34].
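Equation (28) translates directly into code; a minimal sketch:

```python
import numpy as np

def rmse(x_analysis, x_ref):
    """RMSE of the analysis error, eq. (28): (1/sqrt(n)) * ||x^a - x^ref||_2."""
    err = np.asarray(x_analysis, dtype=float) - np.asarray(x_ref, dtype=float)
    return np.linalg.norm(err) / np.sqrt(err.size)
```

Dividing by $\sqrt{n}$ makes the measure comparable between the L63 ($n=3$) and L96 ($n=40$) problems, since it removes the dependence of the 2-norm on the state dimension.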
The results for the L63 and L96 problems are in Figure 5, which coincides with
the case shown in Figure 3 where $\tau_{f}=10^{-3}$. From this, we see that
the GN method solves fewer problems within the same level of RMSE accuracy as
LS and REG when the background error is large in Figures 5(a), 5(b), 5(e) and
5(f). Furthermore, we see how the RMSE of the analyses successfully found by
each method reduces as the background error variance is reduced. This can be
seen in the scale of the x axis in Figures 5(a), 5(b), 5(c) and 5(d) for the
L63 problems and Figures 5(e), 5(f), 5(g) and 5(h) for the L96 problems. For
both models, the concentration of points in Figures 5(a) and 5(e) shows us
that the LS method is solving more problems than GN and REG within the same
RMSE tolerance. A similar result can be seen for REG in Figures 5(b), 5(c),
5(f) and 5(g). In Figures 5(d) and 5(h), we see that all three methods perform
similarly; the RMSEs of the analyses are very close together.
Figure 5: RMSE plots for the GN (black), LS (red) and REG (blue) methods
applied to the L63 problems in (a)-(d) and the L96 problems in (e)-(h) where
$n_{r}=100$, $\tau_{e}=8$, $\tau_{f}=10^{-3}$ and where there is one
observation at the end of the time-window. The observation error is $10\%$ and
the background error is varied above and below this, as indicated in the plot
captions.
Including more observations constrains the solution to be closer to the
reference state when the observation error is small. We next examine how
including more observations affects the three methods, and whether this
improves their performance when the background error is much larger than the
observation error.
### 5.5 Effect of observations
Within this section, we show how the use of more observation locations in time
affects the performance of the three methods. We take the worst case for the
three methods when there is a $50\%$ error in the background and see if
including more observations in time with a $10\%$ error affects the
performance of the methods. For both models, we consider only equally spaced
observations in time, one set of observations at time $N$ (Nobs1), times $N/2$
and $N$ (Nobs2), times $N/4,N/2,3N/4$ and $N$ (Nobs3) and the even time points
(Nobs4), where $N=40$. For the Nobs1 case, observations are based on the
reference state at the end of the time-window and more observations are
included over time in the Nobs2, Nobs3 and Nobs4 cases. This not only
increases the condition number of the problem but also constrains the estimate
more tightly to the reference state.
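The four observation configurations can be written down as index sets. Our reading of "the even time points" for Nobs4 is every second time step up to $N$; that interpretation, and the helper below, are illustrative:

```python
def observation_times(N=40):
    """Equally spaced observation times for the Nobs1-Nobs4 cases.
    Nobs4 is taken here to be all even time points up to N (our reading
    of the text); N is assumed divisible by 4."""
    return {
        "Nobs1": [N],                              # one set at the end
        "Nobs2": [N // 2, N],                      # midpoint and end
        "Nobs3": [N // 4, N // 2, 3 * N // 4, N],  # quarter points
        "Nobs4": list(range(2, N + 1, 2)),         # even time points
    }
```

Each successive case nests the previous one, which is why moving from Nobs1 to Nobs4 both tightens the constraint on the estimate and increases the condition number of the problem.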
For the L63 problems from Figures 6(a), 6(b), 6(c) and 6(d), we see that as
the number of observation locations in time is increased, all three methods
are solving more problems at a higher level of accuracy. This is more apparent
when more evaluations are allowed as shown in Figure 7(a), 7(b), 7(c) and
7(d). Here, the performance of GN improves drastically between the Nobs1 and
Nobs2 cases (Figures 6(a) and 6(b)), while the change in the behaviour of LS
and REG is less significant. In Figure 6(d), we see that GN is able to solve
more problems than LS and REG. Again, this could be because the LS and REG
methods require more iterations to converge when GN is performing well due to
the need to adjust their parameters. This argument coincides with Figure 7(d)
where more evaluations are allowed and the LS and REG methods are able to
perform as well as or better than GN. For the L96 problems, we see a different
result. From Figure 6, we only see a significant improvement in the
performance of GN in the Nobs4 case (Figure 6(h)). Otherwise, there is little
effect. This conclusion can also be drawn from Figure 7(g) and 7(h) where more
evaluations are allowed.
Figure 6: Accuracy profiles where $n_{r}=100$ and $\tau_{e}=8$ for the L63
problems in (a)-(d) and the L96 problems in (e)-(h) for different observation
locations in time, as indicated in the plot captions, where the background
error is $50\%$ and the observation error is $10\%$.
Figure 7: Accuracy profiles where $n_{r}=100$ for the L63 problems where
$\tau_{e}=1000$ in (a)-(d) and the L96 problems where $\tau_{e}=100$ in
(e)-(h) for different observation locations in time, as indicated in the plot
captions, where the background error is $50\%$ and the observation error is
$10\%$. The GN line is below the LS line in (d).
Similar studies were carried out on the performance of GN, LS and REG when
applied to the preconditioned 4D-Var problem where we instead choose
$\mathbf{B}=\sigma_{b}^{2}\mathbf{C}_{B}$, where $\mathbf{C}_{B}$ is a
correlation matrix; similar conclusions are drawn but due to space
constraints, are not included within this paper.
## 6 Conclusion
We have shown that the globally convergent methods, LS and REG, have the
capacity to improve current estimates of the DA analysis within the limited
time and cost available in DA, through the use of safeguards within GN that
guarantee convergence of the method from any initial guess.
Using the L63 and L96 models in the preconditioned 4D-Var framework, we have
shown that when there is more uncertainty in the background information
compared to the observations, the GN method may fail to converge in the long
time-window case yet the globally convergent methods LS and REG are able to
improve the estimate of the initial state. We compare the quality of the
estimate obtained using the RMSE of the analysis and show that even in the
case where the background is highly inaccurate compared to the observations,
the globally convergent methods find estimates with an RMSE less than or equal
to the RMSE of the estimates GN obtains. We take the case where the background
is highly inaccurate compared to the observations and find that the
convergence of all three methods is improved when more observations are
included along the time-window. In addition to the numerical results, the
assumptions made in the global convergence theorems of both LS and REG when
applied to a general nonlinear least-squares problem, together with a
discussion of whether these assumptions are satisfied in DA, are presented in
the appendix. We note that preconditioning the second derivative matrix is not
necessary for these results to hold, although this is the case we have focused
on within our work.
Our findings are important in DA as they show that in cases where the accuracy
of the prior information is poor and when there is limited computational
budget, the globally convergent methods are able to minimise the 4D-Var
objective function, unlike GN. We recommend that these methods be tested on
DA problems with realistic models and for different applications, to
understand whether these conclusions continue to hold. In particular, one
should consider problems where an accurate initial guess for the algorithms is
unavailable and a long assimilation time-window is used, as we found that it
is in this case that LS and REG have an advantage over GN.
Within this paper, the 4D-Var inner-loop problem is solved exactly. In
practice, it must be solved inexactly, owing to the size of the control
vector, using approximations to meet the computational and time constraints.
Improving the quality of the assimilation analysis and the speed of
convergence of the algorithms in this setting is a common area of research in
the DA community. Furthermore, in the case where GN performs
better than LS and REG, further research is needed on updating the
globalisation parameters (stepsize $\alpha^{(k)}$ and regularisation parameter
$\gamma^{(k)}$) to speed up convergence.
Acknowledgements This work has been funded in part by the UK Engineering and
Physical Sciences Research Council Centre for Doctoral Training in Mathematics
of Planet Earth, the University of Reading EPSRC studentship (part of
Grant/Award Number: EP/N509723/1) and by the NERC National Centre for Earth
Observation. We acknowledge in this work that the code for the Lorenz 1996
model was developed by Adam El-Said.
Declarations of interest None.
## Appendix A Convergence theorems
In this section, we outline some existing global convergence results for the
LS and REG methods and discuss whether the assumptions made hold in DA. We
first state the definitions of a local and global minimum of an optimisation
problem $\min_{\mathbf{v}\in\mathbb{R}^{n}}f(\mathbf{v})$ where
$f:\mathbb{R}^{n}\rightarrow\mathbb{R}$ and $\mathbf{v}\in\mathbb{R}^{n}$.
###### Definition A.1 (Local minimiser [36]).
A point $\mathbf{v}^{*}$ is a local minimiser of $f$ if there is a
neighbourhood $\mathcal{N}$ of $\mathbf{v}^{*}$ such that
$f(\mathbf{v}^{*})\leq f(\mathbf{v})$ for all $\mathbf{v}\in\mathcal{N}$.
###### Definition A.2 (Global minimiser [36]).
A point $\mathbf{v}^{*}$ is a global minimiser of
$f:\mathbb{R}^{n}\rightarrow\mathbb{R}$ if $f(\mathbf{v}^{*})\leq
f(\mathbf{v})$ for all $\mathbf{v}\in\mathbb{R}^{n}$.
A global solution is difficult to locate in most cases due to the nonlinearity
of the problems. Therefore, a local solution is often sought by algorithms for
nonlinear optimisation.
We focus on nonlinear least-squares optimisation problems of the form (5) for
the remainder of this section. The GN method can only guarantee local
convergence under certain conditions and not necessarily global convergence.
This depends on how close the initial guess is to the local minimum the
algorithm locates, and on whether or not the residual vector $\mathbf{r}$ of
(5) is a zero vector at a solution $\mathbf{v}^{*}$. Furthermore, the region of
local convergence depends on problem constants not known a priori, such as
Lipschitz constants of the gradient.
A local convergence result for the GN method can be found in Theorem 10.2.1 of
[11] where the performance of GN is shown to be dependent on whether or not
the second-order terms in (9) evaluated at the solution $\mathbf{v}^{*}$ are
close to zero. Another local convergence result can be found in Theorem 4 of
[18] where GN is treated as an inexact Newton method. The theorem guarantees
convergence of the GN method if for each iteration $k=0,1,\ldots,$ the norm of
the ratio of $\mathbf{Q}(\mathbf{v}^{(k)})$ and
$\mathbf{J}(\mathbf{v}^{(k)})^{T}\mathbf{J}(\mathbf{v}^{(k)})$, the second and
first terms of (9) respectively, is less than or equal to some constant
$\hat{\eta}$ where $0\leq\hat{\eta}\leq 1$.
It is important to note here that the globally convergent methods we are
concerned with, namely LS and REG, can only guarantee global convergence to a
local minimum under certain conditions and not necessarily to a global
minimum.
Before we list the assumptions for the global convergence theorems, we first
state the definition of the Lipschitz continuity property of a general
function $g$ as this is widely used in the theorems.
###### Definition A.3 (Lipschitz continuous function (see [36] A.42)).
Let $g$ be a function where $g:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}$ for
general $n$ and $m$. The function $g$ is said to be Lipschitz continuous on
some set $\mathcal{N}\subset\mathbb{R}^{n}$ if there exists a constant $L>0$
such that,
$\|g(\mathbf{v})-g(\mathbf{w})\|\leq
L\|\mathbf{v}-\mathbf{w}\|,\qquad\forall\mathbf{v},\mathbf{w}\in\mathcal{N}.$
(29)
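A Lipschitz constant can be bounded from below empirically by sampling difference quotients of $g$; the helper below is a hypothetical illustration of Definition A.3, not part of the paper's machinery:

```python
import numpy as np

def lipschitz_lower_bound(g, points):
    """Empirical lower bound on the Lipschitz constant of g over the sampled
    points: the maximum of ||g(v) - g(w)|| / ||v - w|| over all pairs.
    Any valid Lipschitz constant L must be at least this large."""
    best = 0.0
    for i, v in enumerate(points):
        for w in points[i + 1:]:
            denom = np.linalg.norm(v - w)
            if denom > 0:
                best = max(best, np.linalg.norm(g(v) - g(w)) / denom)
    return best
```

For example, $g(v)=\sin v$ is Lipschitz on $\mathbb{R}$ with $L=1$, and sampled difference quotients never exceed that bound.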
The following assumptions are used to prove global convergence of both the LS
and REG methods.
###### A1.
$\mathbf{r}$ is uniformly bounded above by $\omega>0$ such that
$\|\mathbf{r}(\mathbf{v})\|\leq\omega$.
###### A2.
$\mathbf{r}\in\mathcal{C}^{1}(\mathbb{R}^{n})$ is Lipschitz continuous on
$\mathbb{R}^{n}$ with Lipschitz constant $L_{r}>0$.
###### A3.
$\mathbf{J}$ is Lipschitz continuous on $\mathbb{R}^{n}$ with Lipschitz
constant $L_{J}>0$.
We remark that for the LS method, we can weaken assumptions A2 and A3 using
the open set $\mathcal{N}$ containing the level set
$\mathcal{L}=\left\{\mathbf{v}\in\mathbb{R}^{n}\,|\,\mathcal{J}(\mathbf{v})\leq\mathcal{J}(\mathbf{v}^{(0)})\right\}.$
(30)
In order to achieve the sufficient decrease property of the LS method, the
following assumption must be made.
###### A4.
$\mathbf{J}(\mathbf{v})$ in (6) is uniformly full rank for all
$\mathbf{v}\in\mathbb{R}^{n}$, that is, the singular values of
$\mathbf{J}(\mathbf{v})$ are uniformly bounded away from zero, so there exists
a constant $\nu$ such that
$\|\mathbf{J}(\mathbf{v})\mathbf{z}\|\geq\nu\|\mathbf{z}\|$ for all
$\mathbf{v}$ in a neighbourhood $\mathcal{N}$ of the level set $\mathcal{L}$
where $\mathbf{z}\in\mathbb{R}^{n}$.
In 4D-Var practice, it is reasonable to assume that the physical quantities
are bounded. Therefore, we can say that both $\mathbf{x}_{0}-\mathbf{x}^{b}$
and the innovation vector $\mathbf{y}-\mathcal{H}(\mathbf{x})$ are bounded in
practice, thus satisfying assumption A1. In 4D-Var, we must assume that the
nonlinear model $\mathcal{M}_{0,i}$ is Lipschitz continuous in order for A2 to
hold. As discussed in [32], this is a common assumption in meteorological
applications. However, we cannot say that this is necessarily the case in
4D-Var practice.
For the Jacobian $\mathbf{J}$ to be Lipschitz continuous, we require its
derivative to be bounded, with the bound serving as the Lipschitz constant.
Therefore, for assumption A3 to hold, we require $\mathbf{r}$ to be twice
continuously differentiable, which is a common assumption in 4D-Var, and also
that these second derivatives of $\mathbf{r}$ are bounded above.
As mentioned in Section 2, the preconditioned 4D-Var Hessian (10) is full rank
by construction as it consists of the identity matrix and a non-negative
definite term. Therefore, the Jacobian of the residual of the preconditioned
problem in (6) is full rank and assumption A4 holds. This is also the case for
the standard 4D-Var problem (1), because of the presence of $\mathbf{B}^{1/2}$
in its Jacobian.
We now outline the global convergence theorems for the LS and REG methods,
using these assumptions.
### A.1 Global convergence of the LS method
Nocedal et al. outline the proof for the GN method with Wolfe line search
conditions in [36], which uses the Zoutendijk condition. This proof can be
adapted to prove the global convergence theorem of the LS method, Algorithm
3.1, given as follows.
###### Theorem A.4 (Global convergence for the Gauss-Newton with Armijo line
search method, Algorithm 3.1).
Suppose we have a function $\mathcal{J}=\frac{1}{2}\mathbf{r}^{T}\mathbf{r}$
and its gradient $\nabla\mathcal{J}=\mathbf{J}^{T}\mathbf{r}$ where
$\mathbf{r}\in\mathcal{C}^{1}(\mathbb{R}^{n})$ and $\mathbf{J}$ is the
Jacobian of $\mathbf{r}$. Assume A1-A4 hold. Then if the iterates
$\{\mathbf{v}^{(k)}\}$ are generated by the GN method with stepsizes
$\alpha^{(k)}$ that satisfy the Armijo condition (13), we have
$\lim_{k\rightarrow\infty}\mathbf{J}(\mathbf{v}^{(k)})^{T}\mathbf{r}(\mathbf{v}^{(k)})=0.$
(31)
That is, the gradient norms converge to zero, and so the Gauss-Newton method
with Armijo line search is globally convergent.
The proof of Theorem A.4 requires the Armijo-chosen stepsizes $\alpha^{(k)}$
to be bounded below, which can be derived using assumptions A1-A3. Using
this lower bound, as well as assumption A4, we are able to prove the
Zoutendijk condition (as in [36]) and its variant
$\sum_{k\geq
0}\cos(\theta^{(k)})\|\nabla\mathcal{J}(\mathbf{v}^{(k)})\|_{2}\|\mathbf{s}^{(k)}\|_{2}<\infty$
(32)
hold. Both the Zoutendijk condition and its variant (32) use the angle between
$\mathbf{s}^{(k)}$ (the GN search direction) and
$-\nabla\mathcal{J}(\mathbf{v}^{(k)})$ (the steepest descent direction),
$\theta^{(k)}$, which is given by
$\cos(\theta^{(k)})=\frac{(-\nabla\mathcal{J}(\mathbf{v}^{(k)}))^{T}\mathbf{s}^{(k)}}{\|\nabla\mathcal{J}(\mathbf{v}^{(k)})\|_{2}\|\mathbf{s}^{(k)}\|_{2}}.$
(33)
By showing that the angle is uniformly bounded away from zero with $k$, one
can show that GN with line search is a globally convergent method.
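A minimal sketch of GN with a backtracking Armijo line search, in the spirit of Algorithm 3.1; the parameter values (`c1`, `rho`), the least-squares solve for the GN direction, and the backtracking rule are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def gauss_newton_armijo(r, J, v0, c1=1e-4, rho=0.5, tol=1e-8, max_iter=50):
    """Sketch of Gauss-Newton with backtracking Armijo line search for
    minimising 0.5*||r(v)||^2. r(v) returns the residual vector, J(v) its
    Jacobian. Names and defaults here are illustrative."""
    v = np.asarray(v0, dtype=float)
    for _ in range(max_iter):
        rv, Jv = r(v), J(v)
        grad = Jv.T @ rv                       # gradient J^T r
        if np.linalg.norm(grad) <= tol:
            break
        # GN direction: minimise ||Jv s + rv||, i.e. solve J^T J s = -J^T r
        s, *_ = np.linalg.lstsq(Jv, -rv, rcond=None)
        f = 0.5 * rv @ rv
        alpha = 1.0
        # backtrack until the Armijo sufficient-decrease condition, cf. (13)
        while (0.5 * np.linalg.norm(r(v + alpha * s))**2
               > f + c1 * alpha * (grad @ s)) and alpha > 1e-12:
            alpha *= rho
        v = v + alpha * s
    return v
```

For a full-rank Jacobian the GN direction satisfies $\nabla\mathcal{J}^{T}\mathbf{s}=-\|\mathbf{r}\|^{2}<0$, so it is a descent direction and the backtracking loop terminates, which is exactly what makes the angle $\theta^{(k)}$ in (33) bounded away from $\pi/2$.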
We will next present the global convergence theorem for the REG method. The
REG method has no sufficient decrease condition as in the LS method.
Therefore, the use of the level set (30) is not required. The assumptions for
convergence are similar to the LS method aside from the requirement of
$\mathbf{J}(\mathbf{v})$ being full rank.
### A.2 Global convergence of the REG method
The global convergence theorem for the GN with quadratic regularisation
method, Algorithm 3.2, is given as follows.
###### Theorem A.5 (Global convergence for the Gauss-Newton with
regularisation method, Algorithm 3.2).
Suppose we have a function $\mathcal{J}=\frac{1}{2}\mathbf{r}^{T}\mathbf{r}$
and its gradient $\nabla\mathcal{J}=\mathbf{J}^{T}\mathbf{r}$ where
$\mathbf{r}\in\mathcal{C}^{1}(\mathbb{R}^{n})$ and $\mathbf{J}$ is the
Jacobian of $\mathbf{r}$. Assume A1-A3 hold. Then if the iterates
$\{\mathbf{v}^{(k)}\}$ are generated by the Gauss-Newton with regularisation
method, we have that
$\lim_{k\rightarrow\infty}\mathbf{J}(\mathbf{v}^{(k)})^{T}\mathbf{r}(\mathbf{v}^{(k)})=0.$
(34)
That is, the gradient norms converge to zero, and so the Gauss-Newton method
with regularisation is globally convergent.
We first note that some adaptations of the lemmas from the global convergence
proof of the Adaptive Regularisation algorithm using Cubics (ARC method) are
used to prove Theorem A.5, see [5] and [6]. We begin the proof by deriving an
expression for the predicted model decrease in terms of the gradient. We
require the use of an upper bound on $\gamma^{(k)}$, denoted as
$\gamma_{\max}$, which is derived using a property of Lipschitz continuous
gradients. We show that $\gamma^{(k)}\leq\gamma_{\max}$ for all $k\geq 0$ by
first showing that if $\gamma^{(k)}$ is large enough, then we have a
successful step so that $\gamma^{(k)}$ can stop increasing due to unsuccessful
steps in Algorithm 3.2. We use the expression for $\gamma_{\max}$ to prove
global convergence of the REG method under assumptions A1-A3 by showing that
the gradient norms converge to zero as we iterate.
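The accept/reject mechanism with an adaptive $\gamma^{(k)}$ can be sketched as follows. The specific update factors, the acceptance ratio `eta`, and the dense solve are illustrative choices, not Algorithm 3.2 verbatim:

```python
import numpy as np

def reg_gn(r, J, v0, gamma0=1.0, eta=0.1, inc=2.0, dec=0.5,
           tol=1e-8, max_iter=100):
    """Sketch of Gauss-Newton with quadratic regularisation, in the spirit of
    Algorithm 3.2: each trial step solves (J^T J + gamma I) s = -J^T r;
    gamma is decreased on successful steps and inflated on unsuccessful ones.
    Update rules and defaults here are illustrative assumptions."""
    v = np.asarray(v0, dtype=float)
    gamma = gamma0
    for _ in range(max_iter):
        rv, Jv = r(v), J(v)
        grad = Jv.T @ rv
        if np.linalg.norm(grad) <= tol:
            break
        n = v.size
        s = np.linalg.solve(Jv.T @ Jv + gamma * np.eye(n), -grad)
        f_old = 0.5 * rv @ rv
        f_new = 0.5 * np.linalg.norm(r(v + s))**2
        # predicted decrease of the regularised GN model at s
        pred = -(grad @ s + 0.5 * s @ (Jv.T @ Jv) @ s + 0.5 * gamma * s @ s)
        if pred > 0 and (f_old - f_new) / pred >= eta:
            v = v + s                       # successful: accept, relax gamma
            gamma = max(dec * gamma, 1e-12)
        else:
            gamma *= inc                    # unsuccessful: inflate gamma
    return v
```

A large `gamma` shrinks the step towards steepest descent (guaranteeing eventual success, which is why $\gamma^{(k)}\leq\gamma_{\max}$ in the proof), while `gamma` $\to 0$ recovers the fast local GN step.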
Note that for both the LS and REG methods, if
$\mathbf{r}(\mathbf{v}^{(k)})\rightarrow 0$, i.e. (5) is a zero-residual
problem, then (31) and (34) hold as $|\mathcal{J}(\mathbf{v}^{(k)})|$ is
uniformly bounded. However, in practice the variational problem is not usually
a zero-residual problem.
## References
* [1] L. Armijo. Minimization of functions having Lipschitz continuous first partial derivatives. Pacific Journal of Mathematics, 16(1):1–3, 1966.
* [2] R. Bannister. A review of operational methods of variational and ensemble-variational data assimilation. Quarterly Journal of the Royal Meteorological Society, 143(703):607–633, 2017.
* [3] R. N. Bannister. A review of forecast error covariance statistics in atmospheric variational data assimilation. II: Modelling the forecast error covariance statistics. Quarterly Journal of the Royal Meteorological Society, 134(637):1971–1996, 2008.
* [4] V. Beiranvand, W. Hare, and Y. Lucet. Best practices for comparing optimization algorithms. Optimization and Engineering, 18(4):815–848, 2017.
* [5] C. Cartis, N. I. Gould, and P. L. Toint. Adaptive cubic regularisation methods for unconstrained optimization. Part I: motivation, convergence and numerical results. Mathematical Programming, 127(2):245–295, 2011.
* [6] C. Cartis, N. I. Gould, and P. L. Toint. Adaptive cubic regularisation methods for unconstrained optimization. Part II: worst-case function- and derivative-evaluation complexity. Mathematical Programming, 130(2):295–319, 2011.
* [7] A. R. Conn, N. I. Gould, and P. L. Toint. Trust Region Methods. SIAM, 2000.
* [8] P. Courtier, J.-N. Thépaut, and A. Hollingsworth. A strategy for operational implementation of 4D-Var, using an incremental approach. Quarterly Journal of the Royal Meteorological Society, 120(519):1367–1387, 1994.
* [9] D. N. Daescu and I. M. Navon. An analysis of a hybrid optimization method for variational data assimilation. International Journal of Computational Fluid Dynamics, 17(4):299–306, 2003.
* [10] R. S. Dembo, S. C. Eisenstat, and T. Steihaug. Inexact Newton methods. SIAM Journal on Numerical Analysis, 19(2):400–408, 1982.
* [11] J. E. Dennis Jr and R. B. Schnabel. Numerical methods for unconstrained optimization and nonlinear equations. SIAM, 1996.
* [12] ECMWF. ECMWF Newsletter No.158 Winter 2018/19. (158):21–26, 2019.
* [13] M. Fisher, J. Nocedal, Y. Trémolet, and S. J. Wright. Data assimilation in weather forecasting: a case study in PDE-constrained optimization. Optimization and Engineering, 10(3):409–426, 2009.
* [14] P. Gauthier, M. Tanguay, S. Laroche, S. Pellerin, and J. Morneau. Extension of 3DVAR to 4DVAR: Implementation of 4DVAR at the Meteorological Service of Canada. Monthly Weather Review, 135(6):2339–2354, 2007.
* [15] J. C. Gilbert and C. Lemaréchal. Some numerical experiments with variable-storage quasi-Newton algorithms. Mathematical Programming, 45(1-3):407–435, 1989.
* [16] G. H. Golub and C. F. Van Loan. Matrix computations, volume 3. Johns Hopkins University Press, 2012.
* [17] S. Gratton, P. Laloyaux, and A. Sartenaer. Derivative-free optimization for large-scale nonlinear data assimilation problems. Quarterly Journal of the Royal Meteorological Society, 140(680):943–957, 2014.
* [18] S. Gratton, A. S. Lawless, and N. K. Nichols. Approximate Gauss–Newton methods for nonlinear least squares problems. SIAM Journal on Optimization, 18(1):106–132, 2007.
* [19] S. A. Haben, A. S. Lawless, and N. K. Nichols. Conditioning and preconditioning of the variational data assimilation problem. Computers & Fluids, 46(1):252–256, 2011.
* [20] S. A. Haben, A. S. Lawless, and N. K. Nichols. Conditioning of incremental variational data assimilation, with application to the Met Office system. Tellus A: Dynamic Meteorology and Oceanography, 63(4):782–792, 2011\.
* [21] S. Laroche and P. Gauthier. A validation of the incremental formulation of 4D variational data assimilation in a nonlinear barotropic flow. Tellus A: Dynamic Meteorology and Oceanography, 50(5):557–572, 1998\.
* [22] A. S. Lawless, S. Gratton, and N. K. Nichols. An investigation of incremental 4D-Var using non-tangent linear models. Quarterly Journal of the Royal Meteorological Society, 131(606):459–476, 2005.
* [23] F.-X. Le Dimet, I. M. Navon, and D. N. Daescu. Second-order information in data assimilation. Monthly Weather Review, 130(3):629–648, 2002.
* [24] F.-X. Le Dimet and O. Talagrand. Variational algorithms for analysis and assimilation of meteorological observations: theoretical aspects. Tellus A: Dynamic Meteorology and Oceanography, 38(2):97–110, 1986\.
* [25] K. Levenberg. A method for the solution of certain non-linear problems in least squares. Quarterly of Applied Mathematics, 2(2):164–168, 1944.
* [26] C. Liu, Q. Xiao, and B. Wang. An ensemble-based four-dimensional variational data assimilation scheme. Part I: Technical formulation and preliminary test. Monthly Weather Review, 136(9):3363–3373, 2008.
* [27] D. C. Liu and J. Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1-3):503–528, 1989.
* [28] E. N. Lorenz. Deterministic nonperiodic flow. Journal of the Atmospheric Sciences, 20(2):130–141, 1963.
* [29] E. N. Lorenz. Predictability: A problem partly solved. In Proc. Seminar on predictability, volume 1, 1996.
* [30] D. W. Marquardt. An algorithm for least-squares estimation of nonlinear parameters. Journal of the Society for Industrial and Applied Mathematics, 11(2):431–441, 1963.
* [31] MathWorks. mldivide documentation. https://www.mathworks.com/help/matlab/ref/mldivide.html, 2021. [Online; accessed 30-May-2021].
* [32] A. J. Moodey, A. S. Lawless, R. W. Potthast, and P. J. Van Leeuwen. Nonlinear error dynamics for cycled data assimilation methods. Inverse Problems, 29(2):025002, 2013.
* [33] J. J. Moré. The Levenberg-Marquardt algorithm: implementation and theory. In Numerical analysis, pages 105–116. Springer, 1978.
* [34] J. J. Moré and S. M. Wild. Benchmarking derivative-free optimization algorithms. SIAM Journal on Optimization, 20(1):172–191, 2009.
* [35] N. K. Nichols. Mathematical concepts of data assimilation. In W. Lahoz, R. Swinbank, and B. Khattatov, editors, Data Assimilation: Making Sense of Observations, pages 13–39. Springer, 2010.
* [36] J. Nocedal and S. J. Wright. Numerical Optimization 2nd Edition. Springer, 2006.
* [37] F. Rabier. Overview of global data assimilation developments in numerical weather-prediction centres. Quarterly Journal of the Royal Meteorological Society, 131(613):3215–3233, 2005.
* [38] F. Rabier, J.-N. Thépaut, and P. Courtier. Extended assimilation and forecast experiments with a four-dimensional variational assimilation system. Quarterly Journal of the Royal Meteorological Society, 124(550):1861–1887, 1998.
* [39] F. Rawlins, S. Ballard, K. Bovis, A. Clayton, D. Li, G. Inverarity, A. Lorenc, and T. Payne. The Met Office global four-dimensional variational data assimilation scheme. Quarterly Journal of the Royal Meteorological Society, 133(623):347–362, 2007.
* [40] D. F. Shanno and K.-H. Phua. Remark on “Algorithm 500: Minimization of unconstrained multivariate functions [e4]”. ACM Transactions on Mathematical Software (TOMS), 6(4):618–622, 1980.
* [41] Z. Wang, K. Droegemeier, and L. White. The adjoint Newton algorithm for large-scale unconstrained optimization in meteorology applications. Computational Optimization and Applications, 10(3):283–320, 1998\.
* [42] Z. Wang, I. Navon, X. Zou, and F. Le Dimet. A truncated Newton optimization algorithm in meteorology applications with analytic Hessian/vector products. Computational Optimization and Applications, 4(3):241–262, 1995\.
* [43] P. Wolfe. Convergence conditions for ascent methods. SIAM Review, 11(2):226–235, 1969.
* [44] X. Zou, I. M. Navon, M. Berger, K. H. Phua, T. Schlick, and F.-X. Le Dimet. Numerical experience with limited-memory quasi-Newton and truncated Newton methods. SIAM Journal on Optimization, 3(3):582–608, 1993.
# Pressure Test: Quantifying the impact of positive stress on companies from
online employee reviews
Sanja Šćepanović, Nokia Bell Labs, Cambridge, United Kingdom
Marios Constantinides, Nokia Bell Labs, Cambridge, United Kingdom
Daniele Quercia, Nokia Bell Labs, Cambridge, United Kingdom; CUSP, King’s College London, United Kingdom ([email protected])
Seunghyun Kim, Georgia Tech, USA
###### Abstract
Workplace stress is often considered to be negative, yet lab studies on
individuals suggest that not all stress is bad. There are two types of stress:
_distress_ refers to harmful stimuli, while _eustress_ refers to healthy,
euphoric stimuli that create a sense of fulfillment and achievement. Telling
the two types of stress apart is challenging, let alone quantifying their
impact across corporations. By leveraging a dataset of 440K reviews about S&P
500 companies published during twelve successive years, we developed a deep
learning framework to extract stress mentions from these reviews. We proposed
a new methodology that places each company on a stress-by-rating quadrant
(based on its overall stress score and overall rating on the site), and
accordingly classifies the company as, on average, a _low stress_,
_passive_, _negative stress_, or _positive stress_ company. We found that
(former) employees of positive stress companies tended to describe high-growth
and collaborative workplaces in their reviews, and that such companies’ stock
valuations grew, on average, 5.1 times in 10 years (2009-2019), as opposed to
the companies of the other three stress types, which grew, on average, 3.7 times
in the same period. We also found that the four stress scores, aggregated
every year from 2008 to 2020, closely followed the unemployment rate in the
U.S.: a year of positive stress (2008) was rapidly followed by several years
of negative stress (2009-2015), which peaked during the Great Recession
(2009-2011). These results suggest that automated analyses of the language
used by employees on corporate social-networking tools offer yet another way
of tracking workplace stress, allowing quantification of its impact on
corporations.
## Introduction
According to the American Institute of Stress, 40% of workers consider their
jobs to be stressful, a number that has significantly increased during the
COVID-19 pandemic [1]. The World Health Organization treats stress as the
number one health threat in the U.S., with more than 60% of doctor visits
being due to a stress-related issue [2]. Workplace stress is often linked to
lower motivation, poor performance, and a decline in employees’ well-being [3],
and it is estimated to amount to 190 billion dollars in healthcare costs in the
U.S. alone [4]. To track how its employees deal with stress, a large
company would currently have to recruit consultants to administer surveys
tailored to the company’s situation; such surveys end up being costly [5]
and restricted to a limited pool of self-selected participants [6, 7, 8]. This
situation points to the need for more research.
Stress is defined as “a set of physical and psychological responses to
external conditions or influences, known as stressors” [9]. According to
Lazarus [10], “any change, either good (eustress) or bad (distress), is
stressful, and whether it is a positive or a negative change, the
physiological response is the same.” To cope with workplace stress, an
employee has to cognitively acknowledge that a situation causes stress before
even attempting to manage it [11]. Kobasa’s framework of psychological
hardiness offers three main categories of coping strategies [12]: _commitment_
(having an active involvement in one’s own work with a sense of purpose),
_control_ (believing and acting instead of feeling helpless in front of
adversity), and _challenge_ (believing that change could be a source of
improvement). Kobasa posited that these categories could help individuals face
challenges and, as such, individuals could turn stressful events into
opportunities for personal growth [12]. However, despite having explored the
relation between stress and job performance for decades, researchers have not
yet established whether stress and performance are in a negative linear
relation, a positive linear relation, or an inverted-U relation [13].
To tackle this gap, we draw upon two streams of previous literature, that is,
literature about stress in the corporate context, and literature on how to
gauge stress from online data. In the literature about stress in the corporate
context, stress is often portrayed as negative [14], and as a leading
cause of death, poor work performance, and diminished well-being [3]. More
recently, however, researchers have advocated that there exists another type of
stress: _positive stress_. The idea is that whether stress is positive or
negative depends on how an individual reacts to a stressor [15]: _“One’s
stress mindset can be conceptualized as the extent to which one holds the
belief that stress has enhancing consequences for various stress-related
outcomes such as performance and productivity, health and well-being, and
learning and growth, or holds the belief that stress has debilitating
consequences for those outcomes”_ [15]. It is important to distinguish
appraisal from stress mindset. Stress mindset describes the evaluation of the
nature of stress itself as positive or negative (i.e., enhancing or
debilitating) [15], whereas appraisal is about the evaluation of a particular
stressor as more or less stressful [16]. For example, one may appraise a
difficult task as highly stressful and have a stress debilitating mindset,
which, in turn, leads the individual to experience the situation as draining
(negative stress). By contrast, another individual may again consider the task
as highly stressful but have a stress enhancing mindset, leading the
individual to experience the situation as an opportunity for growth and
development (positive stress). While stress is often linked to depression [17,
18], several accounts posit that certain stressful experiences may
fundamentally change individuals for the better—a phenomenon referred to as
stress-related growth [15]. The experience of stress can enhance the
development of mental toughness, greater appreciation for life, and an
increased sense of meaningfulness [19, 20]. However, as Crum et al. [15]
pointed out, these conflicting findings in the stress literature suggest a
nuanced view of stress. A view that recognizes the debilitating nature of
stress on health and performance, but can also account for its enhancing
nature in specific circumstances. We hypothesized that the presence of both
positive and negative stress can be measured from digital data based on
previous literature that has done just that with different techniques upon
different datasets. Guntuku et al. [21] used the Linguistic Inquiry and Word
Count (LIWC) [22] dictionary’s features (e.g., topical categories, emotions,
parts-of-speech) to predict stress of social media (Facebook and Twitter)
users. Saha and De Choudhury [23] conducted a similar study on Reddit, in the
context of gun violence events, and found specific stress markers to
be associated with linguistic changes about _“higher self pre-occupation and
death-related discussion.”_ Similar to our study, Vedant et al. [24] showed
that the use of language in employee reviews can be used to operationalize
organizational culture: the collection of values, expectations, and practices
that guide and inform employees’ actions within a company.
Based on these preliminary findings, we hypothesized that workplace stress is
reflected in company reviews. To explore this hypothesis, we placed companies
on a 2x2 coordinate system, based on their overall stress scores and overall
ratings on the company review site. This stress-by-rating quadrant effectively
divided companies into four stress types that we termed low stress, passive,
negative stress, and positive stress (Table 1 shows example reviews of
companies of each stress type). Low stress companies enjoy high overall
ratings and low stress scores. These are usually established organizations
that offer workplace flexibility, good pay, and bonuses. Passive companies are
characterized by low overall ratings and a small proportion of posts
mentioning stress. They tend to have high turnover, due to repetitive workload
and non-motivated employees. Negative stress companies are characterized by
low overall ratings and a high proportion of posts mentioning stress.
Employees of these companies are particularly unhappy as, in addition to the
unsatisfactory conditions, they also experience high pressure. Finally,
positive stress companies enjoy high overall ratings but also high stress
scores. These tend to be inspiring, reputable workplaces that attract
employees because of the collaborative atmosphere and career prospects despite
the pressure employees are subject to. The project website for our study can be
found at https://social-dynamics.net/positive-stress.
### Data
After obtaining the U.S. unemployment rates between 2008 and 2020 from the
U.S. Bureau of Labor Statistics [25] and the S&P 500 stock market data
(comprising the 500 large-cap U.S. companies, whose cumulative market
capitalization amounts to around 70-80% of the country’s total) from the Yahoo
Finance portal [26], we matched that data with our company reviews. More
specifically, we obtained 440K geo-referenced posts on Glassdoor
(https://www.glassdoor.com), a popular company reviewing site about the S&P
500 companies published during twelve successive years, from 2008 to 2020. On
this site, current and, more likely, former employees of companies write
reviews about their own corporate experience, ranging from job interviews to
salaries to workplace culture. The site provides an overall rating of each
company based on employees’ reviews. As of 2021, there are 50M monthly
visitors on the platform, and 70M reviews about 1.3M companies. To ensure
quality reviews, the site employs the following three mechanisms. First, an
automatic (proprietary) and manual content moderation system is paired with
the possibility for users to flag content. Such a combined system partly
prevents fake reviews (e.g., a company unfairly forcing employees to leave
positive reviews). Second, every user wanting to browse others’ content has to
take the time to write one review. This requirement encourages the presence of
neutral reviews and partly prevents the so-called _non-response bias_ , a
situation in which the opinions of respondents are very different from those
of non-respondents. Third, the site allows for a maximum of one review per
employee per company per year, preventing any employee from contributing a
disproportionate number of reviews, and, in so doing, discouraging _sampling
bias_ , a situation in which the opinions of some members are more represented
in the data than those of others.
Table 1: Example reviews of companies of each stress type.

Stress Type | Review excerpt
---|---
Low stress | My company walks its talk. It _[the company]_ takes care of customers and employees.
Negative stress | There is a feeling of scarcity due to the constant reorganizations, pressure, and surprise layoffs. _[…]_ You could imagine how toxic the environment is.
Passive | There is no regard for how the remaining work will get done, just how the bottom line looks at that moment in time. People are not treated as respected contributors to the organization. _[…]_ This is a very unstable, unhealthy, volatile, stressed out environment, with incredibly poor leadership.
Positive stress | You have to be a very driven and self-motivated person to be successful here. If you are willing to commit and put in the extra effort and hard work, it will be extremely worth it. _[…]_ Every day is very busy and it can be stressful at times but it’s all worth it!
## Methods
Figure 1: Placing companies on a stress-by-rating quadrant by detecting stress
mentions in reviews about a company using a state-of-the-art NLP deep-learning
framework (_Step 1_), placing the company in the stress-by-rating quadrant,
and computing its association with its stress type (i.e., with its quadrant)
(_Step 2_). To see how the association is computed, consider company $c$ shown
in _(b)_ to be of positive stress. $c$ is placed according to its
$z_{rating}(c,T)$ along the $x$-axis, and to its $z_{stress}(c,T)$ along the
$y$-axis. $R$ is the radius from the center to $c$’s point; $\alpha$ is the
angle between the radius line and the $x$-axis; $\beta$ is the angle between
the radius line and the $y$-axis; and $\gamma$ is the angle between the radius
line and the diagonal shown as a dotted line. The function $f(c,s,T)$ combines
$R$, $\alpha$, $\beta$, and $\gamma$, and accordingly scores $c$ to have a
high association weight with positive stress $s$ during period $T$ (darker
shades of colors), as $c$ is close to the quadrant’s diagonal, and distant
from the intersection point.
We extracted mentions related to stress using an NLP deep-learning tool, which
is trained to extract medical conditions from free-form text (Figure 1(a)). We
then designed a new methodology that placed the 500 S&P companies on a stress-
by-rating quadrant based on their overall ratings on the reviewing site on one
axis, and the presence of mentions related to stress in their reviews on the
other axis (Figure 1(b)). In so doing, we classified each company to be, on
average, either a _low stress_ , _passive_ , _negative stress_ , or _positive
stress_ company. We finally computed each company’s strength of membership to
its quadrant depending on whether the company is placed both close to the
diagonal and far from the (0,0) intersection point. The function $f(c,s,T)$,
which is expressed in Equation (3) and graphically depicted in Figure 1(b),
assigns a higher weight to company $c$ of stress type $s$, if $c$ is both
closer to the quadrant’s diagonal (i.e., it is farther from the remaining
quadrants) and more distant from the two axes’ intersection (i.e., it has
higher absolute overall rating and stress score values). We call $f(c,s,T)$
company $c$’s association with stress type $s$ during $T$: the higher its
value, the more strongly $c$ is associated with stress type $s$.
Extracting stress mentions from posts. To extract stress mentions, we used the
MedDL entity extraction module [27] (the left rectangle in Figure 1 (a)).
MedDL uses _contextual embeddings_ and a _deep BiLSTM-CRF sequence labeling
architecture_ (we used the default parameters as specified in [27]). The model
was pre-trained and evaluated on a labeled dataset of Reddit posts called
MedRed [27]. The MedRed dataset was split into train (50%), dev (25%), and
test (25%) sets. We evaluated MedDL using the strict and relaxed $F1$-scores,
two commonly used performance metrics for entity extraction models. The
_strict $F1$-score_ counts only the _exact_ matches with the true labels as
correct, while the _relaxed $F1$-score_ takes into account the partially
matching extracted entities. We provide formulae for the two scores in
_Supplementary Information_ (SI Equation (1)). MedDL was compared against two
well-known entity extraction tools: MetaMap (a well-established tool [28] and
a de-facto baseline method for NLP studies related to health [29]) and
TaggerOne (a machine learning tool using semi-Markov models and a medical
lexicon [30]). The MedDL method achieved a strict/relaxed $F1$-score of
$.71$/$.85$ when extracting symptoms (Figure S4), outperforming both MetaMap
and TaggerOne by a large margin (the two have $F1$-scores of $.17/.48$ and
$.31/.58$, respectively). Furthermore, MedDL has shown generalizability when
applied to datasets (e.g., dream reports [31]) different from those it was
trained on (i.e., Reddit data).
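The exact formulae for the two scores are deferred to the paper’s _Supplementary Information_. As a rough, hypothetical illustration of the distinction only (not the authors’ definitions), a strict match can be taken as exact span equality and a relaxed match as any character-level overlap with a gold span:

```python
def entity_f1(pred_spans, gold_spans, relaxed=False):
    """Toy strict/relaxed F1 over (start, end) entity spans.

    Illustrative simplification, not the evaluation code used in the
    paper (its formulae are in the Supplementary Information).
    """
    def is_hit(p):
        if relaxed:
            # Relaxed: credit any overlap with some gold span.
            return any(p[0] < g[1] and g[0] < p[1] for g in gold_spans)
        # Strict: only exact span matches count.
        return p in gold_spans

    true_pos = sum(1 for p in pred_spans if is_hit(p))
    if true_pos == 0:
        return 0.0
    precision = true_pos / len(pred_spans)
    recall = true_pos / len(gold_spans)
    return 2 * precision * recall / (precision + recall)

gold = [(0, 6), (20, 27)]           # e.g., spans of "stress", "burnout"
pred = [(0, 6), (21, 27)]           # second span only partially matches
print(entity_f1(pred, gold))                # strict: 0.5
print(entity_f1(pred, gold, relaxed=True))  # relaxed: 1.0
```

Partial overlaps are exactly where the two scores diverge, which is why the relaxed score reported for MedDL ($.85$) is higher than the strict one ($.71$).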
Table 2: Top 15 most frequent stress-related mentions identified on a company review site, and their frequency counts.

Condition related to stress | # mentions | Example mention
---|---|---
stress | 3473 | _“Great company to work for, if you can handle stress.”_
high stress | 710 | _“High stress work environment, long work hours.”_
pressure | 447 | _“a lot of pressure to get things done.”_
burnout | 277 | _“[…], the ones who made the cut to stay are suffering from burnout.”_
understaffing | 99 | _“Somewhat job stability due to understaffing.”_
heavy workload | 58 | _“Lack of work/life balance, extremely heavy workload.”_
exhaustion | 58 | _“You will be pushed to the point of exhaustion […].”_
stress levels | 57 | _“[…] stress levels peak insanely when the store manager […].”_
overworked | 45 | _“At times, you can feel overworked and undervalued.”_
tension | 38 | _“There’s a lot of tension between coworkers because of commission.”_
high workload | 38 | _“[…] seeing many large set-backs which cause very high workload”_
extreme stress | 33 | _“Beware: extreme stress and pressure.”_
mental stress | 23 | _“[…] ends up giving you a lot of mental stress.”_
overload | 17 | _“No work life balance […], overloaded and benefits are not good.”_
pressure to perform | 9 | _“[…] a lot of pressure to perform, long working hours”_
We extracted stress mentions in three steps (further detailed in
_Supplementary Information_). First, we detected over 21K posts that mentioned
over 5K unique medical conditions. Most frequent medical conditions identified
include _stress_ , _pain_ , _headache_ , and _depression_. Second, we
inspected the top 200 most mentioned conditions and manually selected $31$ of
them that specifically reflect workplace stress (top 15 are shown in Table 2).
Third, we extracted all reviews mentioning any of the $31$ conditions. This
resulted in $7,338$ posts related to stress, which accounted for $1\%$ of all
posts. Despite this seemingly low number of posts, when aggregated, these
posts returned statistically significant results for our metrics, which are
described next.
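The third step above amounts to a keyword filter over the reviews. A minimal sketch follows; the condition list here is an illustrative subset drawn from Table 2, not the full set of 31 conditions used in the study, and the naive substring matching is a simplification of the entity-based extraction:

```python
# Illustrative subset of stress-related conditions (see Table 2).
STRESS_CONDITIONS = {"stress", "high stress", "pressure", "burnout", "overworked"}

def stress_posts(posts, conditions=STRESS_CONDITIONS):
    """Return the subset of posts mentioning any stress-related condition."""
    found = []
    for post in posts:
        text = post.lower()
        # Naive substring matching; the study extracted entities with MedDL.
        if any(cond in text for cond in conditions):
            found.append(post)
    return found

reviews = [
    "Great company to work for, if you can handle stress.",
    "Good pay and flexible scheduling.",
    "High stress work environment, long work hours.",
]
print(len(stress_posts(reviews)))  # 2 of the 3 posts mention a condition
```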
Associating stress types with companies. We placed each S&P 500 company on a
stress-by-rating quadrant. More specifically, for each company $c$, we
computed its average review rating and its stress score:
$\displaystyle\textit{rating}(c,T)$ $\displaystyle=c\text{'s average review rating during }T,$
$\displaystyle\textit{stress}(c,T)$ $\displaystyle=\frac{\#\text{ of }c\text{'s posts related to stress during }T}{\text{total }\#\text{ of }c\text{'s posts during }T}.$
where $T$ is set to initially include all the years under study (2009-2019).
To then ease comparability, we $z$-scored these two values:
$\displaystyle z_{rating}(c,T)$
$\displaystyle=\frac{rating(c,T)-\mu_{rating}(T)}{\sigma_{rating}(T)},$
$\displaystyle z_{stress}(c,T)$
$\displaystyle=\frac{stress(c,T)-\mu_{stress}(T)}{\sigma_{stress}(T)}.$
where $\mu_{rating}(T)$ and $\sigma_{rating}(T)$ are the average and standard
deviation of the review ratings for all companies (regardless of their stress
types) during the whole period $T$ (readily available on the company review
site), and $\mu_{stress}(T)$ and $\sigma_{stress}(T)$ are the average and
standard deviation of the stress scores for all companies during the whole
period $T$.
Each S&P 500 company was assigned to one of the four quadrants based on the
signs of their two z-scores (Figure 1(b)). For example, a company $c$ with a
negative $z_{rating}(c,T)$ and a positive $z_{stress}(c,T)$ would be placed in
the negative stress quadrant, while a company with a positive
$z_{rating}(c,T)$ and a positive $z_{stress}(c,T)$ would be placed in the
positive stress quadrant. The resulting quadrants are consequently four:
* _Low stress companies_ – These enjoy high overall ratings and low stress scores. Their employees tended to think very positively about their workplace experience, with comparatively fewer posts mentioning stress conditions.
* _Passive companies_ – These are characterized by low overall ratings and a small proportion of posts mentioning stress. Their employees were mostly not satisfied with their jobs, but they also showed comparatively fewer signs of stress in their reviews.
* _Negative stress companies_ – These are characterized by high stress scores and low overall ratings. Their employees mentioned stress conditions, while also scoring their workplace experience low.
* _Positive stress companies_ – These enjoy high ratings despite high stress scores. Their employees mentioned stress yet did so in the context of high-pressure and highly rewarding work environments.
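The $z$-scoring and quadrant assignment just described can be sketched with toy numbers (the ratings and stress scores below are made up for illustration, and the handling of a $z$-score of exactly zero is an arbitrary choice, not specified in the paper):

```python
import statistics

def z_scores(values):
    # z-score each company's value against the population mean and
    # standard deviation, mirroring z_rating and z_stress in the text.
    mu, sigma = statistics.mean(values), statistics.pstdev(values)
    return [(v - mu) / sigma for v in values]

def stress_type(z_rating, z_stress):
    # The signs of the two z-scores pick one of the four quadrants.
    if z_rating >= 0:
        return "positive stress" if z_stress >= 0 else "low stress"
    return "negative stress" if z_stress >= 0 else "passive"

# Toy data: average rating and stress score for four fictitious companies.
ratings = [4.5, 2.0, 4.5, 2.0]
stress = [0.030, 0.030, 0.001, 0.001]
types = [stress_type(zr, zs)
         for zr, zs in zip(z_scores(ratings), z_scores(stress))]
print(types)  # ['positive stress', 'negative stress', 'low stress', 'passive']
```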
Once a company $c$ is placed in its quadrant (i.e., associated with its stress
type $s$), we needed to estimate its association with this quadrant, i.e.,
with $s$. For example, company $c$ with ($z_{rating}(c,T),z_{stress}(c,T)$)
equal to $(3,3)$ is more strongly associated with positive stress than a
company with $(0.5,0.5)$ would be. To estimate $c$’s association with $s$, we
combined $c$’s two $z$-scores concerning review rating and stress score as
follows (and as depicted in Figure 1(b)):
$\displaystyle f(c,s,T)=\left\{\begin{array}{ll}l(z_{rating}(c,T),z_{stress}(c,T))=R/(\gamma+\pi),&\text{if }c\in s\text{ during }T;\\ 0,&\text{if }c\notin s\text{, or }c\text{ has no review during }T;\end{array}\right.$ (3)

where:

$\displaystyle R$ $\displaystyle=\sqrt{z_{rating}(c,T)^{2}+z_{stress}(c,T)^{2}},$
$\displaystyle\gamma$ $\displaystyle=\max(\alpha-\pi/4,\;\beta-\pi/4),$
$\displaystyle\alpha$ $\displaystyle=\arccos(|z_{rating}(c,T)|/R),$
$\displaystyle\beta$ $\displaystyle=\arccos(|z_{stress}(c,T)|/R).$
where $T$ is initially set to include all the years under study, from 2009 to
2019. To ease understanding of the above formula, consider that function $l$,
on input of the two $z$-scores (i.e., the company’s two coordinates in the
quadrant), computes the extent to which company $c$ is on the diagonal and far
from the (0,0) intersection point (Figure 1(b)). It gives higher weights to
companies that are both closer to the quadrant’s diagonal (i.e., which are
farthest from the remaining quadrants) and more distant from the axes’
intersection (i.e., which have higher absolute rating/stress score values).
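A minimal sketch of the non-zero branch of Equation (3) (the helper name is ours; the zero cases for companies outside $s$ or without reviews are omitted):

```python
import math

def association(z_rating, z_stress):
    # Equation (3), non-zero branch: f = R / (gamma + pi). gamma measures
    # how far the company's point deviates from its quadrant's diagonal,
    # so points on the diagonal and far from the origin score highest.
    R = math.hypot(z_rating, z_stress)
    if R == 0.0:
        return 0.0
    alpha = math.acos(abs(z_rating) / R)  # angle to the rating axis
    beta = math.acos(abs(z_stress) / R)   # angle to the stress axis
    gamma = max(alpha - math.pi / 4, beta - math.pi / 4)
    return R / (gamma + math.pi)

# The (3, 3) company from the text is weighted more than (0.5, 0.5):
print(association(3, 3) > association(0.5, 0.5))  # True
# At equal distance from the origin, the diagonal beats an axis point:
print(association(2 ** 0.5, 2 ** 0.5) > association(2, 0))  # True
```

Since $\alpha+\beta=\pi/2$, $\gamma$ is zero exactly on the diagonal and grows toward $\pi/4$ at either axis, so the weight decreases smoothly away from the diagonal.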
Computing stress scores over the years. For each year $y$, we quantified the
amount of a given stress type $s$ expressed in the posts produced in that
year. More specifically, we computed:
$\displaystyle m(s,y)=\sum_{c\in s}f(c,s,y)\times w(c,y,s),$ (4)
For all the companies of stress type $s$, we summed each company’s association
$f(c,s,y)$ with $s$ during year $y$ weighted by the presence of posts about
the company during $y$ (giving higher weights to companies whose employees
contributed more reviews in that year):
$w(c,y,s)=\left\{\begin{array}{ll}\frac{\#\text{ of }c\text{'s posts in year }y}{\text{total }\#\text{ of posts in year }y},&\text{if }c\in s\text{ in year }y;\\ 0,&\text{if }c\text{ has no reviews in year }y.\end{array}\right.$
(5)
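Equations (4) and (5) combine into a single weighted sum. The sketch below uses made-up numbers; each tuple is a hypothetical (association, post-count) pair for one company of the stress type:

```python
def yearly_stress_score(companies, total_posts_in_year):
    # Equation (4): sum each company's association f(c, s, y) with the
    # stress type, weighted by its share of the year's posts, Equation (5).
    # Companies with no reviews that year simply do not appear in the list.
    return sum(
        assoc * (n_posts / total_posts_in_year)
        for assoc, n_posts in companies
    )

# Two companies of the same stress type, with associations 1.0 and 0.5,
# each contributing 50 of the year's 100 posts.
print(yearly_stress_score([(1.0, 50), (0.5, 50)], 100))  # 0.75
```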
Associating topical categories with stress types. To identify relevant words
for each stress type, we ran BERTopic [32], a state-of-the-art topic
modeling algorithm. A topic modeling algorithm is an unsupervised technique to
extract topics that appear frequently in a piece of text (in our case, a
post). The algorithm works in four sequential steps:
1. converts each post into a 512-dimensional vector (called an embedding) of
numerical values using a pre-trained BERT-based sentence transformer [33] (in
our case, we used the default model, the “paraphrase-MiniLM-L6-v2”).
BERT (Bidirectional Encoder Representations from Transformers) is a state-of-
the-art transformer-based machine learning technique for natural language
processing (NLP), which takes into account the context of each word.
2. reduces the dimensionality of every embedding using UMAP [34] (Uniform
Manifold Approximation and Projection), as many clustering algorithms handle
high dimensionality poorly. UMAP is arguably among the best performing
dimensionality reduction algorithms, as it retains a significant portion of
the high-dimensional structure in the lower-dimensional space.
3. clusters the UMAP-reduced embeddings using HDBSCAN [35], so that similar
posts end up in the same cluster. HDBSCAN is a density-based algorithm that
works well with UMAP because the structure is preserved in the lower-
dimensional space. Additionally, HDBSCAN does not force data points into
clusters; it treats such points as outliers.
4. identifies keywords using the c-TF-IDF [32] score (Equation 6) and, using
that score, derives topics from the identified keywords. To create a topic
representation, we took the top 3 keywords per topic based on their c-TF-IDF
scores. The higher the score, the more representative the keyword is, as the
score is a proxy of information density.
$\textrm{c-TF-IDF}_{l}=\frac{k_{l}}{o_{l}}\times\frac{p}{\sum_{j}^{q}k_{j}}$
(6)
where $k_{l}$ is the frequency of keyword $k$ in topic $l$ and $o_{l}$ is the
total number of keywords in that topic; this within-topic frequency is
multiplied by the total number of posts $p$ divided by the total frequency of
keyword $k$ across all $q$ topics.
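Equation (6) reduces to a one-line function. The sketch below implements the formula as the paper writes it (the BERTopic library’s own implementation differs in details, e.g., it applies a logarithm to the second factor):

```python
def c_tf_idf(freq_in_topic, n_keywords_in_topic, n_posts, freq_across_topics):
    # Equation (6): within-topic keyword frequency (k_l / o_l), scaled by
    # how rare the keyword is across all topics (p / sum_j k_j).
    return (freq_in_topic / n_keywords_in_topic) * (n_posts / freq_across_topics)

# A keyword frequent in one topic but rare overall scores high ...
print(c_tf_idf(10, 100, 1000, 20))   # 5.0
# ... while the same keyword spread evenly over many topics scores low.
print(c_tf_idf(10, 100, 1000, 500))  # 0.2
```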
### Analysis plan
Our analysis plan unfolded in three steps. First, as an initial validation
step, we ascertained that stress was paraphrased in a company’s reviews
differently according to the company’s stress type. Second, we tested whether
the evolution of each stress score over the years tallied with large-scale
exogenous events such as the Great Recession. Third, we tested that a
company’s stress type is partly associated with its stock growth.
## Results
### Topics discussed in reviews of companies of different stress types
To ascertain whether the content of the reviews captured aspects specific to
the four stress types, we identified the top relevant words for each type by
running a topic modeling algorithm called BERTopic [32], and did so on four
distinct sets of reviews: each set contained reviews of all the companies of a
given stress type. This algorithm found the emergent topics in the four sets,
and Table 3 lists the top three words for each topic. The top 10 topics for
each quadrant are statistically associated with that quadrant: based on
chi-square tests, each topic $l$ associated with quadrant $s$ has a frequency
in $s$ that is always above zero (i.e., it is dependent on $s$) and is
independent of any quadrant other than $s$. As detailed in _Supplementary
Information_,
by inspecting these groups of words and corresponding representative reviews,
six annotators identified the emergence of three workplace themes:
_Career drivers_ (first set of rows in Table 3). Negative stress companies
were associated with words such as ‘overtime’, ‘mandatory’, and ‘shifts’, and the
typical workplace described in the reviews, according to our annotators, was
characterized by considerable emotional pressure. On the other hand, passive
companies were associated with words such as ‘vacation’, ‘pto’, and
‘vacation/sick’, and the corresponding reviews tended to deflect from the day-
to-day work and focus on activities outside work such as vacation and time
off. Low stress companies were associated with words such as ‘scheduling’,
‘flexibility’, and ‘autonomy’, and the typical workplace described in the
reviews was one in which employees cherished their sense of control over their
work. Finally, positive stress companies were associated with words such as
‘teamwork’, ‘supportive’, and ‘collaborative’, and the typical workplace in
the reviews was one with a collaborative and supportive culture.
_Industry or benefits_ (second set of rows in Table 3). Negative stress
companies were associated with words such as ‘discounts’, ‘sale’, ‘coupons’,
while positive stress companies were associated with words such as ‘gain’,
‘billions’, and ‘software’. Their reviews effectively mentioned the
industry sectors they referred to: Consumer Discretionary (e.g., retail shops)
for the reviews of negative stress companies, and Information Technology for
those of positive stress ones. On the other hand, passive companies were
associated with words such as ‘insurance’, ‘espp’, and ‘hsa’, and, similarly,
low stress ones with words such as ‘401k’, ‘bonus’, and ‘retirement’; the
corresponding reviews indicated workplaces in which concerns about long-term
financial benefits rather than the presence of implicit incentives in one’s
own work were at the forefront.
_Emotional Aspects_ (third set of rows in Table 3). Negative stress companies
were associated with words such as ‘horrible’, ‘terrible’, and ‘awful’,
confirming, once again, the presence of emotional pressure. Passive companies
were instead associated with words such as ‘repetitive’, ‘turnover’, and
‘workload’, confirming the tedious nature of those workplaces. Low stress
companies were associated with words such as ‘fair’, ‘friendly/good’, and
‘pays’, and the corresponding reviews described a good work-life balance.
Finally, positive stress companies were associated with words such as
‘prestige’, ‘boost’, and ‘reputation’, and their reviews described high
performing, dynamic, and fast-paced workplaces.
 | Negative stress | Passive | Low stress | Positive stress
---|---|---|---|---
Career drivers | overtime | vacation | scheduling | teamwork
 | mandatory | pto | flexibility | supportive
 | shifts | vacation/sick | autonomy | collaborative
Industry or benefits | discounts | insurance | 401k | gain
 | sale | espp | bonus | billions
 | coupons | hsa | retirement | software
Emotional aspects | horrible | repetitive | fair | prestige
 | terrible | turnover | friendly/good | boost
 | awful | workload | pays | reputation
Table 3: Three-word groups present in the reviews of companies of the four
stress types. These groups were automatically found by BERTopic and speak to
three main workplace characteristics: career drivers, industry and benefits,
and emotional aspects. For each group, the top three words are shown, ranked
by their normalized word importance. Abbreviations of words describing
monetary benefits include pto (paid time off); espp (employee stock purchase
plan); hsa (health savings account); 401k (a retirement savings and investing
plan that employers offer).
### Evolution of stress types and the Great Recession
After the preliminary validation step in which we ascertained that stress was
paraphrased in reviews differently according to the stress type, we tested
whether the evolution of each stress score over the years tallied with large-
scale exogenous events such as the Great Recession. We plotted the amount
$m(s,y)$ of each stress score $s$ in each year $y$ (as per Equation (9)), from
2008 to 2020 (top panel in Figure 2). The overall changes closely followed the
unemployment rates from the U.S. Bureau of Labor Statistics (bottom panel in
Figure 2): a year of positive stress (2008) was rapidly followed by several
years of negative stress (2009-2015), which peaked during the Great Recession
(2009-2011), when the U.S. went through a loss of over 7.5 million jobs and
high unemployment rates [36].
Figure 2: The evolution of: _(top)_ the four types of stress; and _(bottom)_
the unemployment rate in the U.S., with the horizontal dashed line reflecting
pre-recession rate. The stress score per year is calculated using Equation
(9), and its standard deviations are shown with shaded lines.
(a)
(b)
Figure 3: _(a)_ Distribution across companies of the logarithm of stock growth
values, computed from the average stock price in 2009 and that in 2019
(${stock\\_growth}_{[09-19]}=stock_{2019}/stock_{2009}$), showing that stock
growth is log-normally distributed. The average stock price for year $y$
($stock_{y}$) is calculated as the average of the daily Adjusted Closing
Prices for that year. _(b)_ Geometric mean of the stock growth values
$\bar{GM}({stock\\_growth}_{[09-19]})$ for increasing stress-score percentiles
for the companies of a given stress type. Error bars represent the geometric
standard error
$GSE({stock\\_growth}_{[09-19]})=\bar{GM}({stock\\_growth}_{[09-19]})/\sqrt{N}\cdot\sigma(\log({stock\\_growth}_{[09-19]}))$.
### Stock growth of companies of different stress types
Finally, we hypothesized that a company’s way of dealing with workplace stress
was partly _associated_ with performance. Given our data, we cannot study
whether stress _causes_ (poor) performance but can only study how the two are
associated. Also, there is no company performance measure that is solely
affected by a company’s stress culture. As such, our stress scores are
unlikely to be predictive of any company-wide performance measure. We opted
for long-term stock growth as our performance measure, not least because it is
publicly available and standardized across companies. However, such a growth
is partly affected by a company’s culture, and conflates endogenous factors
(e.g., productivity) with exogenous ones (e.g., financial cycles). Yet we
expected our stress measures to qualitatively describe different forms of
financial success, at least in the long term. To that end, we computed stock
growth during the full period of study, that is, from 2009 to 2019:
$\textrm{stock growth}_{[09-19]}(c)=\frac{stock(c)_{2019}}{stock(c)_{2009}}$
(7)
where $stock(c)_{i}$ is the average adjusted closing price of company $c$’s
stock in year $i$. We chose long-term growth instead of short-term growth
(e.g., that pertaining to 2018-2019) to partly account for any potential
influence of
exogenous events (e.g., Great Recession, market manipulation, incidental
growth/decline [37]). In _Supplementary Information_ , we show that the
results do not qualitatively change when considering the narrower 5-year
period from 2014 to 2019. Given a stress type $s$, we computed company $c$’s
_association_ $f(c,s,T)$ with $s$ during time period $T$ (initially set to the
whole period of study), consequently grouping all the companies of a given
stress type into their stress score percentiles (Figure 3b). As the
distribution of stock growth values across companies is heavy-tailed (Figure
3a), we used the geometric mean to average these values across companies. That
is, $GM({\textrm{stock
growth}_{[09-19]}})=\left(\prod_{c}\textrm{stock
growth}_{[09-19]}(c)\right)^{1/n}$, where $c$ ranges over the companies in a
specific _(stress type, percentile)_ bin, and $n$ is the number of companies
in that bin.
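Equation 7 and the geometric-mean aggregation above amount to only a few lines of code. The following sketch is ours (the function names are hypothetical); it also includes the geometric standard error used for the error bars in Figure 3b, assuming the sample standard deviation of the logs:

```python
import math

def stock_growth(stock_by_year, start=2009, end=2019):
    # Equation 7: ratio of the average adjusted closing prices of two years
    return stock_by_year[end] / stock_by_year[start]

def geometric_mean(values):
    # exp(mean(log x)) == (prod x)^(1/n); robust for heavy-tailed growth values
    return math.exp(sum(math.log(v) for v in values) / len(values))

def geometric_standard_error(values):
    # GSE = GM / sqrt(N) * sigma(log(values)), as in the Figure 3 caption
    # (sample standard deviation assumed here)
    n = len(values)
    logs = [math.log(v) for v in values]
    mean_log = sum(logs) / n
    sigma = math.sqrt(sum((x - mean_log) ** 2 for x in logs) / (n - 1))
    return geometric_mean(values) / math.sqrt(n) * sigma
```

For a company whose average price went from \$61 in 2009 to \$485 in 2019, `stock_growth` returns $485/61\approx 7.95$; averaging in log space keeps a few extreme growers from dominating the per-bin mean.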
Positive stress companies enjoyed the highest stock growth with an average
value across all percentiles of $\bar{GM}(\textrm{stock
growth}_{[09-19]})=5.07$ (Figure 3b), while the average stock growth across
the other three types of companies was noticeably lower
($\bar{GM}({\textrm{stock growth}_{[09-19]}})=3.70$), with passive stress
companies exhibiting the lowest growth ($\bar{GM}({\textrm{stock
growth}_{[09-19]}})=3.42$). To ease the interpretation of such values,
consider the example of Equinix, a digital infrastructure company
headquartered in California, which our approach labeled a “positive stress”
company. Its stock traded at \$61 in 2009 and climbed by over 695% (i.e., its
$\textrm{stock growth}_{[09-19]}$ was 7.95), trading at \$485 ten years later.
## Discussion
### Limitations
This work has five main limitations. The first concerns the inability to
study whether stress causes performance differences, given the absence of
cross-lagged data linking performance to a stress-related company-wide
indicator. Theoretically, we could run a lagged analysis as a linear
regression where the dependent variable is the company’s growth at different
time intervals. However, such an analysis is hard for two main reasons:
_a)_ no fine-grained temporal granularity for reviews is possible, as reviews
might be temporally misaligned since they could be posted after an employee
leaves the company, and _b)_ many, mostly smaller, companies have joined the
public reviewing site at later points in time, thus reviews will not cover all
12 years of analysis.
The second limitation is that the decreasing trend of stock growth may depend
on two main aspects: company ratings and industry sector. These
two have little to do with the hypothesized relationship between stress and
performance. We therefore repeated our analyses by considering a company’s
overall website rating and its industry sector. As for ratings, we indeed
found increasing stock growth with increasing review ratings; still, even
among highly-rated companies, positive stress companies experienced the
highest growth (Figure S6 in _Supplementary Information_). As for industry
sectors, we
showed that tech companies were over-represented in the positive stress set,
and stock growth was partly driven by them (Figure S7 in _Supplementary
Information_). However, by separating companies by industry sector, we still
observed that positive stress companies grew more than the other three types
(Figure S8 in _Supplementary Information_).
The third limitation concerns data biases related to temporal granularity and
geographic representativeness. As new data becomes available, future studies
could examine workplace stress outside the U.S., allowing for cross-cultural
comparisons.
The fourth limitation has to do with nuances when rating a company (e.g.,
being satisfied with the use of the overall company rating and not its
composing dimensions). While on Glassdoor there are several rating fields
available, only the overall rating field was mandatory and hence provided
sufficient coverage for our analysis.
The fifth limitation is that the deep-learning model used to detect stress
mentions in posts is not always accurate. Our medical entity extraction model
has two main limitations. First, the model’s strict/relaxed accuracy is
.71/.85, and, even though it outperformed competitive baselines by a large
margin, it still is not perfect. To partly address this issue, our method
limits itself to textual mentions pertaining to stress at work. Second, entity
extraction models such as ours are not always able to tell apart personal from
figurative health mentions (e.g., _‘I felt pain’ vs. ‘He was such a pain to
work with’_). This is still an active area of research. Yet our model relies
on a large transformer model (i.e., RoBERTa contextual embeddings) and, as
such, is less likely to make such errors than a simpler, keyword-matching
technique. Future studies could use some of the newly published
social media datasets [38] to further train our model to distinguish between
different _types of health mentions_.
### Implications
To place our work in the broader context of the literature, we point out three
main findings. Our first was that _company reviews contain linguistic markers
of four stress types_. Previous work found that stress of social media users
can be detected by analyzing their textual content, both on Twitter and Reddit
[21]. Another study by Saha and De Choudhury [23] found that high levels of
stress markers were present in the language of Reddit comments posted by
university students who experienced gun violence events at their campuses.
This work showed that such linguistic changes are sufficiently subtle to
reflect four _different types of stress_ , that is, low, passive, positive,
and negative stress. Our second finding was that _stress over the years
tallied with large-scale exogenous events_. In particular, negative stress was
the most prevalent among the four stress types in recession years (both great
and mini recessions). This finding is in line with the literature linking
economic downturn with stress and mental health issues caused by job
instability [39], and speaks to the presence of linguistic markers reflecting
negative stress associated with country-level economic performance. Our third
finding was that _company stock growth is associated with positive stress._
This is a new finding, not least because of lack of data in the past. While
stock growth conflates endogenous factors (e.g., productivity) with exogenous
ones (e.g., financial cycles), we found that positive stress companies enjoyed
significantly stronger stock growth.
However, more work is needed to understand how to change a company’s culture
into one in which stressors could be used for one’s growth and self-
development. Given the recent wave of Great Resignation (i.e., the elevated
rate at which U.S. workers have quit their jobs [40]), questions relating to
corporate culture [41] and ways of retaining top talent are of utmost
importance. A recent study from Mercer, an American asset management firm,
found that elevated levels of employee turnover are due not to a lack of
engagement at work but rather to workplace culture and heightened
stressors. Therefore, organizations need to take immediate action by
(re)assessing their workplace culture first and by then shifting it when
deemed appropriate, through training that fosters psychological safety and
cultivates one’s mindset towards positive stress. Traditionally, employee
well-being has been tracked with tailored surveys. Automated analyses of the
language used by employees on corporate social-networking tools might offer
yet another way of tracking workplace stress, which is sufficiently granular
to assess the impact of interventions in a company. Beyond the immediate use
of these findings for individual companies, several other stakeholders could
benefit from our methodology including government officials. As the
performance of the S&P 500 companies affects the broader U.S. economy,
recommended workplace practices could be established at state- or national-
level to improve work conditions.
## Data availability
We made our code and data available in a readily usable format on GitHub
(https://github.com/sanja7s/positive_stress_in_companies) to allow for
reproducibility. For each company, we shared the following attributes: company
name, #total reviews, #stress reviews, average rating, rating of work-life
balance, rating of career prospects, rating of the company, rating of the
culture, rating of the management, stress type, strength of association with
the stress type, stock values/growth for: 2009, 2012, 2014, 2019, and industry
sector.
## References
* [1] Sugar, A. Stay cool under pressure — without appearing cold. _Harvard Business Review_ (2020).
* [2] Nerurkar, A., Bitton, A., Davis, R. B., Phillips, R. S. & Yeh, G. When physicians counsel about stress: Results of a national study. _JAMA Internal Medicine_ 173, 76–77 (2013).
* [3] Cartwright, S. & Cooper, C. L. _Managing Workplace Stress_ , vol. 1 (Sage, 1997).
* [4] Pal, P. Battling the physical symptoms of stress. _Harvard Business Review_ (2016).
* [5] Vaske, J. J. Advantages and disadvantages of internet surveys: Introduction to the special issue. _Human Dimensions of Wildlife_ 16, 149–153 (2011).
* [6] Duda, M. D. & Nobile, J. L. The fallacy of online surveys: No data are better than bad data. _Human Dimensions of Wildlife_ 15, 55–64 (2010).
* [7] Gigliotti, L. M. Comparison of an internet versus mail survey: A case study. _Human Dimensions of Wildlife_ 16, 55–62 (2011).
* [8] Fricker, R. D. & Schonlau, M. Advantages and disadvantages of internet research surveys: Evidence from the literature. _Field methods_ 14, 347–367 (2002).
* [9] Selye, H. _The Stress of Life_ (McGraw-Hill, 1956).
* [10] Lazarus, R. S. Toward better research on stress and coping. _American Psychological Association_ (2000).
* [11] Colligan, T. W. & Higgins, E. M. Workplace stress: Etiology and consequences. _Workplace Behavioral Health_ 21, 89–97 (2006).
* [12] Kobasa, S. C. Stressful life events, personality, and health: An inquiry into hardiness. _Personality and Social Psychology_ 37, 1 (1979).
* [13] Muse, L., Harris, S. & Feild, H. Has the Inverted-U Theory of Stress and Job Performance Had a Fair Test? _Journal of Human Performance_ 16, 349–364 (2003).
* [14] Wallis, C., Mehrtens, R. & Thompson, D. Stress: can we cope? _Time_ 121, 48–54 (1983).
* [15] Crum, A. J., Salovey, P. & Achor, S. Rethinking stress: The role of mindsets in determining the stress response. _Personality and Social Psychology_ 104, 716 (2013).
* [16] Cohen, S., Kamarck, T. & Mermelstein, R. A global measure of perceived stress. _Journal of Health and Social Behavior_ 385–396 (1983).
* [17] Hammen, C. Stress and depression. _Annual Review of Clinical Psychology_ 1, 293–319 (2005).
* [18] Wang, J. Work stress as a risk factor for major depressive episode (s). _Psychological medicine_ 35, 865–871 (2005).
* [19] Park, C. L. & Helgeson, V. S. Introduction to the special section: growth following highly stressful life events–current status and future directions. _Journal of consulting and clinical psychology_ 74, 791 (2006).
* [20] Tedeschi, R. G. & Calhoun, L. G. Posttraumatic growth: conceptual foundations and empirical evidence. _Psychological inquiry_ 15, 1–18 (2004).
* [21] Guntuku, S. C., Buffone, A., Jaidka, K., Eichstaedt, J. C. & Ungar, L. H. Understanding and measuring psychological stress using social media. In _Proceedings of the International AAAI Conference on Web and Social Media_ , vol. 13, 214–225 (2019).
* [22] Pennebaker, J. W., Francis, M. E. & Booth, R. J. Linguistic inquiry and word count: Liwc 2001. _Mahway: Lawrence Erlbaum Associates_ 71, 2001 (2001).
* [23] Saha, K. & De Choudhury, M. Modeling stress with social media around incidents of gun violence on college campuses. _Proceedings of the ACM on Human-Computer Interaction_ 1, 1–27 (2017).
* [24] Das Swain, V. _et al._ Modeling organizational culture with workplace experiences shared on glassdoor. In _Proceedings of the ACM CHI Conference on Human Factors in Computing Systems (CHI)_ , 1–15 (2020).
* [25] U.S. Bureau of Labor Statistics. https://www.bls.gov.
* [26] Yahoo Finance portal. https://finance.yahoo.com.
* [27] Šćepanović, S., Martín-López, E., Quercia, D. & Baykaner, K. Extracting medical entities from social media. In _Proceedings of the ACM Conference on Health, Inference, and Learning (CHIL)_ , 170–181 (2020).
* [28] Aronson, A. R. & Lang, F.-M. An overview of metamap: historical perspective and recent advances. _Journal of the American Medical Informatics Association_ 17, 229–236 (2010).
* [29] Tutubalina, E., Miftahutdinov, Z., Nikolenko, S. & Malykh, V. Medical concept normalization in social media posts with recurrent neural networks. _Journal of Biomedical Informatics_ (2018).
* [30] Leaman, R. & Lu, Z. Taggerone: joint named entity recognition and normalization with semi-markov models. _Bioinformatics_ 32, 2839–2846 (2016).
* [31] Šćepanović, S., Aiello, L. M., Barrett, D. & Quercia, D. Epidemic dreams: dreaming about health during the covid-19 pandemic. _Royal Society open science_ 9, 211080 (2022).
* [32] Grootendorst, M. BERTopic: Leveraging BERT and c-TF-IDF to create easily interpretable topics. https://doi.org/10.5281/zenodo.4381785.
* [33] Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_ (2018).
* [34] McInnes, L., Healy, J. & Melville, J. Umap: Uniform manifold approximation and projection for dimension reduction. _arXiv preprint arXiv:1802.03426_ (2018).
* [35] McInnes, L., Healy, J. & Astels, S. hdbscan: Hierarchical density based clustering. _Journal of Open Source Software_ 2, 205 (2017).
* [36] Grusky, D. B., Western, B. & Wimer, C. The consequences of the great recession. _The Great Recession_ 3–20 (2011).
* [37] Gamestop shares surge 100% in a day, reddit group rejoices (2021).
* [38] Naseem, U., Kim, J., Khushi, M. & Dunn, A. G. Identification of disease or symptom terms in reddit to improve health mention classification. In _Proceedings of the ACM Web Conference 2022_ , 2573–2581 (2022).
* [39] Mehta, K. _et al._ Depression in the us population during the time periods surrounding the great recession. _The Journal of clinical psychiatry_ 76, 4221 (2015).
* [40] How to manage the great resignation (2021).
* [41] Reading corporate culture from the outside (2022).
* [42] Huang, Z., Xu, W. & Yu, K. Bidirectional lstm-crf models for sequence tagging. _arXiv preprint arXiv:1508.01991_ (2015).
* [43] Straková, J., Straka, M. & Hajic, J. Neural architectures for nested ner through linearization. In _Proceedings of the Conference of the Association for Computational Linguistics_ , 5326–5331 (2019).
* [44] Akbik, A., Bergmann, T. & Vollgraf, R. Pooled contextualized embeddings for Named Entity Recognition. In _Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1_ , 724–728 (2019).
* [45] Pennington, J., Socher, R. & Manning, C. GloVe: Global vectors for word representation. In _Proceedings of the conference on empirical methods in natural language processing_ , 1532–1543 (Association for Computational Linguistics, 2014).
* [46] Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S. & Dean, J. Distributed representations of words and phrases and their compositionality. In _In Proceedings of the Advances in neural information processing systems conference_ , 3111–3119 (2013).
* [47] Braun, V. & Clarke, V. Using thematic analysis in psychology. _Qualitative Research in Psychology_ 3, 77–101 (2006).
## Supplementary Information
Figure 4: Number of posts (log) in our dataset between 2008 and 2020. There
is no trend difference between all posts and those containing mentions related
to stress.
## Details of the dataset
We collected a total of 713,018 posts published on a company reviewing site
from the start of 2008 up until the first quarter of 2020 for the S&P 500
companies. We filtered out posts belonging to non-US based companies, yielding
a total of 439,163 posts across $399$ unique S&P 500 companies. The average
rating across companies ranged from a minimum value of $1.62$ up to a maximum
value of $5$ ($\mu=3.37,\sigma=0.40)$. While the overall fraction of stress
posts per company was $1.11\%$, this value ranged from $0\%$ up to $9.52\%$
across companies ($\mu=1\%,\sigma=1\%)$.
## Data representativeness
A total of 439,163 posts were analyzed. These posts are about companies
distributed across all 50 U.S. states and the District of Columbia (Table 4).
The highest number of
posts were found in California (i.e., a total of 69,968 posts), while the
lowest in Wyoming (i.e., a total of 222 posts). The posts span across 11
industries classified according to the Global Industry Classification Standard
(GICS), with the highest number of posts for companies in Information
Technology, and the least number in Real Estate (Table 5). The posts were
written by managers, sales associates, software engineers, analysts, among
others (Table 6). Current employees make up 56% of the reviews, and the
remaining reviews are predominantly by former employees who held the job
within the last five years. The maximum annual number of posts between 2008
and 2020 was observed in 2016, while the lowest was observed in 2009
(Figure 4).
Table 4: Number of posts and number of offices on the company reviewing site
across U.S. States, ranked by the number of posts published between 2008 and
2020 in descending order. The state of California had the most published
posts, while the state of Wyoming had the least published posts. The Pearson
correlation between the log number of posts and the number of companies per
state in our data is $.98$, while the correlation between the log number of
posts in our data and the log of population size across states is $.93$.
U.S. State | # posts | # offices
---|---|---
CA | 69968 | 340
TX | 43629 | 342
NY | 37515 | 313
IL | 25157 | 290
FL | 24082 | 283
GA | 17888 | 275
WA | 15672 | 239
NC | 14072 | 268
PA | 14064 | 271
OH | 12447 | 263
MA | 12355 | 253
AZ | 11834 | 228
NJ | 11561 | 245
VA | 11320 | 235
CO | 9408 | 249
MN | 8437 | 196
MI | 7953 | 237
MO | 7657 | 230
TN | 7165 | 221
OR | 6704 | 206
MD | 6610 | 207
IN | 5727 | 222
WI | 5006 | 181
CT | 4567 | 186
KY | 4224 | 190
UT | 3925 | 187
U.S. State | # posts | # offices
---|---|---
OK | 3772 | 174
DC | 3596 | 180
KS | 3459 | 169
SC | 3362 | 194
NV | 2815 | 163
LA | 2631 | 175
AL | 2546 | 171
DE | 2067 | 113
IA | 1849 | 133
RI | 1676 | 103
AR | 1666 | 149
NH | 1627 | 127
NE | 1564 | 139
ID | 1208 | 112
MS | 1138 | 127
NM | 1097 | 125
WV | 803 | 114
ME | 604 | 105
HI | 600 | 85
ND | 344 | 74
VT | 324 | 54
MT | 322 | 70
AK | 300 | 57
SD | 297 | 57
WY | 222 | 72
Figure 5: Number of posts (log) in our dataset versus state population (log). Washington, D.C. and Rhode Island have more posts than what their population sizes would suggest. The line of best linear fit is shown in gray. U.S. states are shown with their two-letter abbreviations.
Table 5: Number of posts across the Global Industry Classification Standard (GICS) sectors. More posts are generally found in sectors having more companies, as one expects.
GICS Sector | # posts | # companies
---|---|---
Information Technology | 63198 | 52
Consumer Discretionary | 62395 | 40
Financials | 49955 | 42
Health Care | 36308 | 41
Consumer Staples | 26471 | 28
Industrials | 24074 | 43
Communication Services | 13842 | 13
Energy | 5510 | 20
Materials | 5269 | 21
Utilities | 3172 | 19
Real Estate | 2228 | 16
Table 6: Number of posts across roles and statuses.
Employee Title | # posts
---|---
Sales Associate | 8006
Manager | 4536
Software Engineer | 4191
Customer Service Representative | 4058
Cashier | 3726
Director | 2819
Project Manager | 2365
Senior Manager | 2225
Senior Software Engineer | 2019
Associate | 1969
Store Manager | 1963
Assistant Manager | 1930
Pharmacy Technician | 1747
Analyst | 1660
Delivery Driver | 1619
Employee Status | # posts
---|---
Current Employee | 190876
Former Employee | 146449
Former Intern | 5207
Former Contractor | 3380
Current Intern | 2858
Figure 6: Correlation between the number of headquarters in each state in the
Fortune 500 list and the number of headquarters in each state in our dataset
(Spearman $r=.90$). The states of Nebraska (NE) and Arkansas (AR) have fewer
headquarters than what the Fortune list would suggest.
## Description and evaluation of the deep-learning framework
To extract stress mentions, we used the MedDL entity extraction module [27]
(the left rectangle in Figure 1(a)). MedDL uses _contextual embeddings_ and a
_BiLSTM-CRF sequence labeling architecture_. The BiLSTM-CRF architecture [42]
is the deep-learning method commonly employed for accurately extracting
entities from text [43, 44], and consists of two layers. The first layer is a
BiLSTM network (the dashed rectangle in Figure 1(a)), where BiLSTM stands for
Bidirectional Long Short-Term Memory (LSTM). The outputs of the BiLSTM are then
passed to the second layer: the CRF layer (enclosed in the other dashed
rectangle). The predictions of the second layer (the white squares in Figure
1(a)) represent the output of the entity extraction module. To extract the
medical entities of symptoms and drug names, BiLSTM-CRF takes as input
representations of words (i.e., embeddings). The most commonly used embeddings
are Global Vectors for Word Representation (GloVe) [45] and Distributed
Representations of Words (word2vec) [46]. However, these do not take into
account a word’s context. The word ‘pressure’, for example, could be a stress
symptom at the workplace (e.g., ‘I felt constant pressure to deliver results’)
or could be used in the physics context (e.g., ‘The solid material found in
the centre of some planets at extremely high temperature and pressure’). To
account for context, _contextual embeddings_ are generally used. MedDL uses
RoBERTa embeddings, as they outperformed several other contextual
embeddings, including ELMo, BioBERT, and Clinical BERT [27].
Our evaluation metric is the $F1$ score, which is the harmonic mean of
precision $P$ and recall $R$:
$F1=2\frac{P\cdot R}{P+R}$ (8)
where
$P=\frac{\\#\textrm{correctly classified medical entities}}{\\#\textrm{total entities classified as being medical}}$
and
$R=\frac{\\#\textrm{correctly classified medical entities}}{\\#\textrm{total medical entities}}.$
For the strict $F1$ score, we counted as “correctly classified” only the
entities that exactly matched the ground-truth labels. For the relaxed version,
partially matching entities are also counted as correctly classified (e.g., if
the model extracts the entity “_pain_” given the full mention of
“_strong pain_”). Also, given that our data comes with class imbalance (i.e.,
text tokens do not correspond equally to symptoms or non-medical entities),
we corrected for that by computing $P$ and $R$ using micro-averages [ash13].
In so doing, we were able to compare Med-DL’s F1 scores with those of two
well-known entity extraction tools: MetaMap and TaggerOne. MetaMap is a well-
established tool for extracting medical concepts from text using symbolic NLP
and computational-linguistic techniques [28], and has become a de-facto
baseline method for NLP studies related to health [29]. TaggerOne is a machine
learning tool using semi-Markov models to jointly perform two tasks: entity
extraction and entity normalization. The tool does so using a medical lexicon
[30]. The MedDL pre-trained model was evaluated on a labeled dataset of Reddit
posts called MedRed. The MedRed dataset was split into train (50%), dev (25%),
and test (25%) sets. The MedDL method achieved a strict/relaxed $F1$-score of
$.71$/$.85$ when extracting symptoms (Figure 7), outperforming both MetaMap
and TaggerOne by a large margin (the two have $F1$-scores of $.17/.48$ and
$.31/.58$, respectively).
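The strict versus relaxed matching and the micro-averaged scores described above can be sketched as follows. This is our illustrative code, not the MedDL implementation; a greedy one-to-one matching between predicted and gold entity spans is assumed:

```python
def micro_f1(pred_ents, gold_ents, relaxed=False):
    # pred_ents / gold_ents: per-post lists of extracted / ground-truth entities.
    # Strict: an entity counts only on an exact match with a gold span.
    # Relaxed: a partial (substring) overlap also counts,
    # e.g. 'pain' matches the full mention 'strong pain'.
    tp = fp = fn = 0
    for pred, gold in zip(pred_ents, gold_ents):
        remaining = list(gold)
        for p in pred:
            hit = next((g for g in remaining
                        if p == g or (relaxed and (p in g or g in p))), None)
            if hit is not None:
                tp += 1                # correctly classified entity
                remaining.remove(hit)  # each gold span matched at most once
            else:
                fp += 1                # spurious extraction
        fn += len(remaining)           # missed gold entities
    # Micro-averaging: pool counts over all posts, then compute P, R, F1 (Eq. 8)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
```

Under strict matching the example pair (‘pain’ vs. ‘strong pain’) is a miss, while under relaxed matching it counts, which is why the relaxed score (.85) exceeds the strict one (.71).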
Figure 7: MedDL strict/relaxed F1-score results when extracting medical
symptoms on the MedRed dataset, compared with two competitive alternatives:
MetaMap and TaggerOne.
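The micro-averaged metric in Equation (8) can be sketched as follows; the entity spans and documents below are toy data for illustration, not the MedRed corpus.

```python
def micro_prf(gold_by_doc, pred_by_doc):
    """Micro-averaged precision, recall and F1 over entity spans (Equation 8)."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_by_doc, pred_by_doc):
        gold, pred = set(gold), set(pred)
        tp += len(gold & pred)   # correctly classified medical entities
        fp += len(pred - gold)   # classified as medical, but not in the gold labels
        fn += len(gold - pred)   # gold medical entities that were missed
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Entity spans as (start, end) character offsets; two toy posts.
gold = [[(0, 4), (10, 14)], [(3, 7)]]
pred = [[(0, 4)], [(3, 7), (8, 9)]]
p, r, f1 = micro_prf(gold, pred)   # strict matching: spans must agree exactly
```

A relaxed variant would count a predicted span as correct when it overlaps a gold span instead of requiring exact equality.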
(a)
(b)
Figure 8: _(a)_ Distribution across companies of the logarithm of the stock
growth values from the average stock price in 2014 to that of 2019
(${stock\_growth}_{[14-19]}=stock_{2019}/stock_{2014}$), showing that stock
growth is log-normally distributed. The average stock price for year $y$
($stock_{y}$) is calculated as the average of the daily Adjusted Closing
Prices for the year. _(b)_ Geometric mean of the stock growth values
$\bar{GM}({stock\_growth}_{[14-19]})$ for increasing stress score percentiles
for the companies of a given stress type. Error bars represent the geometric
standard error
$GSE({stock\_growth}_{[14-19]})=\bar{GM}({stock\_growth}_{[14-19]})/\sqrt{N}\cdot\sigma(\log({stock\_growth}_{[14-19]}))$.
Figure 9: Geometric mean of the stock growth values
$\bar{GM}({stock\_growth}_{[09-19]})$ for different rating percentiles for
companies of the four stress types. Error bars represent the geometric
standard error
$GSE({stock\_growth}_{[09-19]})=\bar{GM}({stock\_growth}_{[09-19]})/\sqrt{N}\cdot\sigma(\log({stock\_growth}_{[09-19]}))$.
(a) Low stress
(b) Passive
(c) Negative stress
(d) Positive stress
Figure 10: The number of companies per industry sector for the four stress
types. IT is more prominent among positive stress companies, while Health Care
is more prominent among negative stress companies.
(a)
(b)
(c)
Figure 11: Geometric mean of the stock growth values
$\bar{GM}({stock\_growth}_{[09-19]})$ for increasing stress score percentiles
for the companies in each of the three most present industry sectors: (a)
Information Technology, (b) Consumer Discretionary, and (c) Health Care. The
three sectors have sufficient data to ensure statistical significance for each
percentile bin. Error bars represent the geometric standard error
$GSE({stock\_growth}_{[09-19]})=\bar{GM}({stock\_growth}_{[09-19]})/\sqrt{N}\cdot\sigma(\log({stock\_growth}_{[09-19]}))$.
Annotations of the words BERTopic found. For each topic, we identified the
three most representative words and submitted the reviews mentioning them to
six annotators. For example, we picked three reviews containing the words
‘overtime’, ‘mandatory’, and ‘shift’ for negative stress companies, and asked
six annotators to read them and describe what type of workplaces these reviews
would suggest. Upon collecting a total of 72 free-form responses (i.e., each
annotator described the reviews corresponding to the 12 topics), we conducted
a thematic analysis [47]. To identify overarching themes, we used a
combination of open coding and axial coding. We first applied open coding to
identify key concepts. Specifically, one of the authors read the responses and
marked them with keywords. We then used axial coding to identify relationships
between the most frequent keywords to summarize them into semantically
cohesive themes.
We found three high-level themes: _career drivers_ , _industry or benefits_ ,
and _emotional aspects_. In the reviews, each theme was paraphrased
differently depending on the four types of company stress, allowing us to
identify sub-themes. The _career drivers_ theme described what motivated
employees to go to work. Its sub-themes concerned companies whose employees
experienced ‘considerable emotional pressure’ (negative stress), tended to
‘focus on activities outside the work’ (passive), cherished ‘their sense of
control over their work’ (low stress), and enjoyed ‘a collaborative and
supportive workplace culture’ (positive stress). In the _industry or benefits_
theme, we identified sub-themes mentioning either the industry sectors of the
corresponding companies (e.g., Consumer Discretionary for negative stress, and
Information Technology for positive stress) or aspects concerning long-term
financial benefits (e.g., passive and low stress). Finally, in the _emotional
aspects_ theme, we identified sub-themes suggesting employees who experienced
‘emotional pressure’ (negative stress), ‘tedious work’ (passive), ‘good work-
life balance’ (low stress), or a ‘fast-paced, high-performing, and dynamic
workplace environment’ (positive stress).
## Evaluation of BERTopic results
We ran the topic modeling algorithm BERTopic [32] separately on the four sets
of reviews (each set containing reviews of the companies of a given stress
type). The fact that BERTopic discovered distinct topics in the four sets
reveals that stress is paraphrased differently in the sets. We calculated the
topical overlapping values for the different combinations of the four sets
(using the Jaccard similarity on the sets of keywords from the top ten topics
of each stress type), and found them to be (on average) as low as 0.08 (on a
scale ranging from 0 to 1).
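The overlap measure is the standard Jaccard similarity between keyword sets; the keywords below are illustrative stand-ins, not the actual BERTopic output.

```python
def jaccard(a, b):
    """Jaccard similarity: |intersection| / |union| of two keyword sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Toy keyword sets drawn from top topics of two stress types (made-up words)
negative_kw = {"overtime", "mandatory", "shift", "pressure"}
positive_kw = {"fast-paced", "dynamic", "collaborative", "pressure"}
overlap = jaccard(negative_kw, positive_kw)   # 1 shared word out of 7 distinct
```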
## Evaluation of the four quadrants
To test whether the quadrant division of companies into four types was
meaningful, we manually inspected 30 posts taken at random from companies with
high stress, and found stress mentions in companies with low ratings to be
qualitatively different from those in companies with high ratings (e.g., a
review from a lowly rated company _“The pressure is constantly high, while
your work is not appreciated […] and it feels like the managers do not know
what they are doing.”_ versus a review from a highly rated company _“Happy
Employee. Best culture I have experienced, especially in a stressful job. […]
The job is hard, but nothing worth having comes easy.”_). Similarly, we found
qualitatively different reviews between companies with low stress and high
versus low ratings (e.g., a review from a highly rated company _“Solid company
offering Work From Home. […] decent options to choose for hours worked, great
tech support, all equipment supplied, always feel connected to team, strong
work ethic. ”_ , versus a review from a lowly rated company _“Sinking Ship due
to Horribly Managed […] Merger. At legacy X office, they managed to retain
some of the positive company culture leftover from the X days. The people are
still the best part of that office, but with the increasing turnover, layoffs
and “Hunger Games” management style, that is in danger of ending… ”_). As a
final validity check, we arranged companies along the two axes and clustered
them in an unsupervised way. We found four to be the best number of clusters.
More specifically, we applied k-means clustering, and searched for the optimal
number of clusters using the elbow method (Figure 12). The method involves
calculating the sum of squared distances between data points and the $k$
assigned clusters’ centroids, for an increasing number of clusters $k$. Once
this value stops decreasing significantly, the optimal number of clusters has
been reached.
Figure 12: Inertia of Cosine k-Means versus number of clusters having the
“elbow” at k=$4$.
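A minimal version of this elbow procedure is sketched below with a hand-rolled k-means (the authors may well have used a clustering library; the 2-D points here are synthetic, generated from four well-separated centers).

```python
import random

def farthest_point_init(points, k):
    # Deterministic seeding: start at the first point, then repeatedly add the
    # point farthest from all chosen centroids (robust for separated clusters).
    cents = [points[0]]
    while len(cents) < k:
        cents.append(max(points, key=lambda p: min(
            (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in cents)))
    return cents

def kmeans_inertia(points, k, iters=50):
    cents = farthest_point_init(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i:
                    (p[0] - cents[i][0]) ** 2 + (p[1] - cents[i][1]) ** 2)
            clusters[j].append(p)
        cents = [(sum(q[0] for q in c) / len(c), sum(q[1] for q in c) / len(c))
                 if c else cents[i] for i, c in enumerate(clusters)]
    # inertia: sum of squared distances of points to their assigned centroid
    return sum(min((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in cents)
               for p in points)

rng = random.Random(1)
centers = [(0, 0), (10, 0), (0, 10), (10, 10)]       # four true clusters
points = [(cx + rng.gauss(0, 0.5), cy + rng.gauss(0, 0.5))
          for cx, cy in centers for _ in range(25)]
inertias = {k: kmeans_inertia(points, k) for k in range(1, 7)}
# inertia drops steeply up to k = 4, then flattens: the elbow is at k = 4
```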
## Sensitivity of the results
Weighting the scores. We explored the effects of weighting the yearly scores
in:
$\displaystyle m{(s,y)}=\sum_{c\in s}f(c,s,y)\times w{(c,y,s)},$ (9)
by plotting the temporal scores without weights, i.e., where $w=1$. The result
is shown in Figure 13. The simple aggregation skews the results towards (the
long tail of) small companies as it considers a small company equal to a big
one.
Figure 13: The effects of weighting the yearly scores. _(top)_ The evolution
of temporal scores without weights, i.e., where $w=1$ for the four types of
stress; and _(bottom)_ the unemployment rate in the U.S., with the horizontal
dashed line reflecting pre-recession rate. The stress score per year is
calculated using Equation (9) with $w=1$.
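To see why the weighting matters, consider a toy instance of Equation (9), with hypothetical company scores $f(c,s,y)$ and size-based weights $w$; all names and numbers below are made up for illustration.

```python
# Hypothetical scores f(c, s, y) for one stress type s and year y, and weights
# w(c, y, s) proportional to company size; illustrative values only.
scores = {"BigCo": 0.2, "SmallCo1": 0.9, "SmallCo2": 0.8}
weights = {"BigCo": 0.90, "SmallCo1": 0.05, "SmallCo2": 0.05}

unweighted = sum(scores.values()) / len(scores)         # w = 1, as in Figure 13
weighted = sum(scores[c] * weights[c] for c in scores)  # Equation (9)
# The unweighted mean is dominated by the two small companies.
```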
Shorter-term growth. To test that our results on stock growth are not
affected by exogenous events such as the Great Recession, we computed stock
growth for the narrower 5-year period between 2014 and 2019:
${stock\_growth}_{[14-19]}=\frac{stock_{2019}}{stock_{2014}}$ (10)
where $stock_{i}$ is the average adjusted closing price of a company’s stock in
year $i$. Figure 8 shows that the trend remains qualitatively the same as that
in Figure 2, even when removing the Great Recession period. Positive stress
companies enjoyed the highest stock growth (with average value across all
percentiles being $\bar{GM}(\textrm{stock growth}_{[14-19]})=1.97$ as per
Figure 8 on the right), low stress companies had the second highest
($\bar{GM}(\textrm{stock growth}_{[14-19]})=1.53)$, while passive and negative
stress companies enjoyed the lowest growth ($\bar{GM}({\textrm{stock
growth}_{[14-19]}})=1.46$, and $1.45$, respectively).
Interaction effects between stress scores and review ratings. We tested
whether our observed stock growth was genuinely associated with positive
stress companies rather than being simply associated with highly-rated
companies. To this end, for each stress type, we plotted
$\bar{GM}({stock\_growth}_{[09-19]})$ against different rating percentiles
(Figure 9). Highly rated companies experienced stock growth, yet there are
still significant differences across companies of different stress types: in
particular, positive stress companies of varying rating percentiles
consistently enjoyed the highest growth (the yellow line in Figure 9 is
consistently above the other three lines).
Growth per industry sectors. To test whether a specific industry sector is
predominant for a given stress type, we first plotted the number of companies
per industry sector according to the GICS classification (Figure 10).
Information Technology was more prominent among positive stress and low stress
companies, Health Care and Financials among negative stress ones, and
Industrials and Consumer Discretionary among passive ones. To then check
whether the distribution of industry sectors across the four types of stress
affected our findings for stock growth, we computed stock growth between 2009
and 2019, and did so for the three most frequent industry sectors separately
(i.e., Information Technology, Consumer Discretionary, and Health Care). We
chose those three sectors because each individually contained a sufficient
number of companies and, as such, allowed us to obtain statistically
significant results.
Stock growth was computed as $\bar{GM}({\textrm{stock
growth}_{[09-19]}})=\left(\prod_{c}\textrm{stock growth}_{[09-19]}(c)\right)^{1/n}$, where $c$
is each company from a given industry sector (e.g., Information Technology) in
a specific _(stress type,percentile)_ bin, and $n$ is the number of the
companies in such a bin. For the three industry sectors, we plotted
$\bar{GM}({stock\_growth})$ against different stress score percentiles
(Figure 11). In all three sectors, we observed that positive stress companies
had consistently higher stock growth compared to the other three stress types.
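The geometric mean and the geometric standard error used for the error bars in Figures 8, 9, and 11 can be computed as below; the growth ratios are made up for illustration.

```python
import math

def geometric_mean(xs):
    # GM = (prod x_i)^(1/n), computed via logs for numerical stability
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def geometric_standard_error(xs):
    # GSE = GM / sqrt(N) * sigma(log x), as in the error-bar definition
    n = len(xs)
    logs = [math.log(x) for x in xs]
    mu = sum(logs) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / n)
    return geometric_mean(xs) / math.sqrt(n) * sigma

growth = [1.2, 1.8, 0.9, 2.5]     # illustrative stock-growth ratios for one bin
gm, gse = geometric_mean(growth), geometric_standard_error(growth)
```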
Percentage of stress posts. To test the sensitivity of our results to the
percentage of stress posts being considered, we repeated our analyses by
including only the companies with at least $r$ reviews. We determined the
optimal threshold to be $r=280$ as follows. To include at least half of the
total S&P 500 companies, the minimum number of reviews per company had to be
below $r=350$. Then, for each $r=1,...,350$, we subset the companies
having at least $r$ reviews, and calculated the correlation between a
company’s rating and its positive stress score (for positive stress companies)
or its negative stress score (for negative stress companies), and did so for
each subset. We found that the absolute values of the correlations increased
with the number of reviews (Figure 14), as expected, and there was a phase
shift at $r=280$ for positive stress companies ($\rho($company_rating,
positive_stress_association)=$.75$). The same applied to negative stress
companies (Figure 14). At this threshold, we were left with $287$ companies
out of $380$ companies in total. We repeated the calculations on this subset
of companies and, compared to our previously reported results, found even
stronger associations between: i) negative stress scores in the whole U.S. and
the Great Recession, and ii) a company’s positive stress score and its stock
growth.
Figure 14: Threshold selection. Correlation values between each of the two
stress scores and a company’s website overall rating ($y$-axis) for the
companies with at least $r$ reviews ($x$-axis). These values have a phase
shift at $r=280$ for positive stress companies (blue), matching the value of
the correlation for negative stress companies (red).
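The threshold scan amounts to recomputing a correlation over progressively smaller company subsets. A sketch with a hand-rolled Pearson correlation follows; the company tuples are fabricated, not the paper's data.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# (number of reviews, rating, positive-stress score) per company -- fabricated
companies = [(50, 3.0, 0.20), (120, 3.6, 0.35), (290, 4.4, 0.80),
             (310, 4.6, 0.90), (400, 4.1, 0.70), (520, 4.8, 0.95)]

def corr_for_threshold(r):
    # correlation between rating and stress score among companies with >= r reviews
    sub = [c for c in companies if c[0] >= r]
    return pearson([c[1] for c in sub], [c[2] for c in sub])

scan = {r: corr_for_threshold(r) for r in (1, 280)}
```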
Combining stress and review scores. We fit three Ordinary Least Squares (OLS)
models to predict stock growth (Figure 15). In each model, we used the (log of
the) number of reviews as a control variable. In addition, (a) $M_{r}$ uses
the average rating score as the additional independent variable (baseline
model), (b) $M_{s}$ uses the stress score, and (c) $M_{r+s}$ uses both the
rating score and the stress score as additional independent variables. We
applied bootstrapping to ascertain the statistical significance of the results
by randomly subsampling a set of 120 companies 10 times. We observed a 78% and
a 192% increase in $M_{s}$ and in $M_{r+s}$ over the baseline model,
respectively.
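The three regressions can be reproduced in outline with an adjusted-$R^2$ computation on synthetic data; the data-generating process below, in which stress rather than rating drives growth, is our assumption for illustration and is not the paper's dataset.

```python
import numpy as np

def adjusted_r2(X, y):
    # OLS via least squares; adjusted R^2 penalises the number of predictors
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1 - float(resid @ resid) / float(((y - y.mean()) ** 2).sum())
    n, p = X1.shape
    return 1 - (1 - r2) * (n - 1) / (n - p)

rng = np.random.default_rng(0)
n = 120
log_reviews = rng.normal(size=n)                 # control variable
stress = rng.normal(size=n)
rating = 0.3 * stress + rng.normal(size=n)       # rating partly reflects stress
growth = 0.5 * stress + 0.2 * log_reviews + rng.normal(scale=0.5, size=n)

m_r = adjusted_r2(np.column_stack([log_reviews, rating]), growth)    # baseline M_r
m_s = adjusted_r2(np.column_stack([log_reviews, stress]), growth)    # M_s
m_rs = adjusted_r2(np.column_stack([log_reviews, rating, stress]), growth)  # M_{r+s}
```

Bootstrapping, as in the paper, would repeat this on random 120-company subsamples and average the adjusted $R^2$ values.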
Figure 15: Adjusted $R^{2}$ values of three OLS models with different
predictors: r is the rating score; s is the stress score; r+s is the rating
score and stress score. We applied bootstrapping to ascertain the statistical
significance of the results by randomly subsampling a set of 120 companies 10
times. Average values and standard deviations are reported. We observed a 78%
and a 192% increase in $M_{s}$ and in $M_{r+s}$ over the baseline model,
respectively.
|
arxiv-papers
| 2021-07-26T17:58:12 |
2024-09-04T03:07:19.552420
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Sanja \\v{S}\\'cepanovi\\'c, Marios Constantinides, Daniele Quercia,\n Seunghyun Kim",
"submitter": "Marios Constantinides",
"url": "https://arxiv.org/abs/2107.12362"
}
|
2107.12366
|
# $L$-series of harmonic Maass forms and a summation formula for harmonic
lifts
Nikolaos Diamantis University of Nottingham
[email protected] , Min Lee University of Bristol
[email protected] , Wissam Raji American University of Beirut
[email protected] and Larry Rolen Vanderbilt University
[email protected]
###### Abstract.
We introduce an $L$-series associated with harmonic Maass forms and prove
their functional equations. We establish converse theorems for these
$L$-series and, as an application, we formulate and prove a summation formula
for the holomorphic part of a harmonic lift of a given cusp form.
## 1\. Introduction
The theory of harmonic Maass forms has been a centre of attention in recent
years, having led to various striking results. To mention just one example,
the following harmonic Maass form with Nebentypus of weight $1/2$ for
$\Gamma_{0}(144)$ was a key to the proof of the Andrews-Dragonette conjecture
and deep insight into Dyson’s ranks [BO10, BO06]:
(1.1)
$q^{-1}+\sum_{n=1}^{\infty}\frac{q^{24n^{2}-1}}{(1+q^{24})^{2}(1+q^{48})^{2}\dots(1+q^{24n})^{2}}+\int_{-24\bar{z}}^{i\infty}\frac{\theta(\tau)\,d\tau}{\sqrt{-i(\tau+24z)}}.$
Here $q:=e^{2\pi iz}$ and $\theta(\tau)$ is a certain weight $3/2$ theta
series.
However, in contrast to the classical theory, where the deeper study of
holomorphic modular and Maass forms is often driven by the study of their
$L$-series, Dirichlet series have not yet featured prominently in the case of
harmonic Maass forms. An $L$-series has been associated to special classes of
harmonic Maass forms, namely the weakly holomorphic forms, and interesting
results about them have been proved [BFK14], but this $L$-series has not been
studied as intensely as the modular objects themselves. Also, to our knowledge,
the definition has not been extended to all harmonic Maass forms, that is, to
harmonic Maass forms which are non-holomorphic. In particular, with the
exception of a result in that direction we will discuss in the next section, a
converse theorem for $L$-series of general harmonic Maass forms does not seem
to have been formulated and proved.
In this paper, inspired by the ideas in [Boo15], we address this state of
affairs by proposing a definition of $L$-series of general harmonic Maass
forms. With this definition, we succeed in establishing a converse theorem. To
illustrate the idea more clearly, we will outline it in the special case of
weakly holomorphic modular forms on $\operatorname{SL}_{2}(\mathbb{Z})$.
First, we let $\mathcal{L}$ be the Laplace transform mapping each smooth
function $\varphi\colon\mathbb{R}_{+}\to\mathbb{C}$ to
(1.2) $(\mathcal{L}\varphi)(s)=\int_{0}^{\infty}e^{-st}\varphi(t)dt$
for each $s\in\mathbb{C}$ for which the integral converges absolutely.
Let $f$ be a weakly holomorphic cusp form of even weight $k$ for
$\operatorname{SL}_{2}(\mathbb{Z})$ (see §3 for a definition) with expansion
(1.3) $f(z)=\sum_{\begin{subarray}{c}n=-n_{0}\\\ n\neq
0\end{subarray}}^{\infty}a(n)e^{2\pi inz}.$
Let $\mathcal{F}_{f}$ be the space of test functions
$\varphi\colon\mathbb{R}_{+}\to\mathbb{C}$ such that
(1.4) $\sum_{\begin{subarray}{c}n=-n_{0}\\\ n\neq
0\end{subarray}}^{\infty}|a(n)|(\mathcal{L}|\varphi|)(2\pi n)$
converges. Because of the growth of $a(n)$ (see (3.11) below), the space
$\mathcal{F}_{f}$ contains the compactly supported smooth functions on
$\mathbb{R}_{+}$. Then we define the $L$-series map
$L_{f}\colon\mathcal{F}_{f}\to\mathbb{C}$ by
(1.5) $L_{f}(\varphi)=\sum_{\begin{subarray}{c}n=-n_{0}\\\ n\neq
0\end{subarray}}^{\infty}a(n)(\mathcal{L}\varphi)(2\pi n).$
The relation of this definition with the $L$-series associated to holomorphic
cusp forms and weakly holomorphic modular forms will be discussed in the next
section.
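Definition (1.5) is straightforward to evaluate numerically once the coefficient sum is truncated. The sketch below uses Simpson's rule for the Laplace transform (1.2) and a toy coefficient list with one principal-part term; the coefficients are illustrative and do not come from any particular form.

```python
import math

def laplace(phi, s, upper=60.0, n=6000):
    # (L phi)(s) = int_0^infty e^{-st} phi(t) dt, by composite Simpson's rule.
    # For s < 0 (principal-part terms) the integral only converges because phi
    # has compact support, so skip nodes where phi vanishes.
    h = upper / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        v = phi(t)
        if v == 0.0:
            continue
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        total += w * math.exp(-s * t) * v
    return total * h / 3

def L_f(coeffs, phi):
    # L_f(phi) = sum over n != 0 of a(n) * (L phi)(2 pi n), truncated to the
    # finitely many coefficients supplied in the dictionary {n: a(n)}
    return sum(a * laplace(phi, 2 * math.pi * m) for m, a in coeffs.items())

# smooth bump supported on (0, 2): a valid test function in F_f
bump = lambda t: math.exp(-1 / (t * (2 - t))) if 0 < t < 2 else 0.0
coeffs = {-1: 1.0, 1: 24.0, 2: 252.0}   # toy a(n), with a principal part at n = -1
val = L_f(coeffs, bump)
```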
We will now state our converse theorem in the special case of weakly
holomorphic cusp forms for $\operatorname{SL}_{2}(\mathbb{Z})$. The general
statement for all harmonic Maass forms of all levels (Theorem 5.1) and its
proof will be given in §5.
###### Theorem 1.1.
Let $(a(n))_{n\geq-n_{0}}$ be a sequence of complex numbers such that
$a(n)=O(e^{C\sqrt{n}})$ as $n\to\infty$, for some $C>0.$ For each
$z\in\mathbb{H}$, set
(1.6) $f(z)=\sum_{\begin{subarray}{c}n=-n_{0}\\\ n\neq
0\end{subarray}}^{\infty}a(n)e^{2\pi inz}.$
Suppose that the function $L_{f}(\varphi)$ defined, for each compactly
supported smooth $\varphi:\mathbb{R}_{+}\to\mathbb{C}$ , by (1.5) satisfies
(1.7) $L_{f}(\varphi)=i^{k}L_{f}(\check{\varphi})$
where $\check{\varphi}$ is given by
(1.8) $\check{\varphi}(x):=x^{k-2}\varphi(1/x).$
Then $f$ is a weakly holomorphic cusp form of weight $k\in\mathbb{Z}$ for
$\operatorname{SL}_{2}(\mathbb{Z})$.
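As a numerical illustration of the symmetry (1.7), take the discriminant form $\Delta$ (weight $k=12$; a holomorphic cusp form, hence in particular a weakly holomorphic cusp form) with its first ten Ramanujan $\tau$ coefficients, and a smooth bump $\varphi$ supported on $[0.5,2]$. Truncating the coefficient sum at $n=10$ is harmless here because $e^{-2\pi nt}$ is already negligible on the support of $\varphi$ for larger $n$.

```python
import math

TAU = {1: 1, 2: -24, 3: 252, 4: -1472, 5: 4830,
       6: -6048, 7: -16744, 8: 84480, 9: -113643, 10: -115920}  # Ramanujan tau

def laplace_on_support(phi, s, a=0.5, b=2.0, n=4000):
    # Simpson's rule over [a, b], which contains the support of the test function
    h = (b - a) / n
    total = 0.0
    for i in range(n + 1):
        t = a + i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        total += w * math.exp(-s * t) * phi(t)
    return total * h / 3

def L_delta(phi):
    # truncated L-series of Delta in the sense of (1.5)
    return sum(c * laplace_on_support(phi, 2 * math.pi * m)
               for m, c in TAU.items())

k = 12
phi = lambda t: math.exp(-1 / ((t - 0.5) * (2 - t))) if 0.5 < t < 2 else 0.0
phi_check = lambda x: x ** (k - 2) * phi(1 / x)   # Equation (1.8)

lhs = L_delta(phi)
rhs = L_delta(phi_check)   # i^k = i^12 = 1, so (1.7) reads L(phi) = L(phi_check)
```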
As an example of the way the functional equations and the converse theorem we
have established can be used, we present an alternative proof of the classical
fact that the $(k-1)$-th derivative of a weight $2-k$ weakly holomorphic form
is a weight $k$ weakly holomorphic form (Proposition 5.5).
The main application of our constructions and methods is a summation formula
for harmonic lifts via the operator $\xi_{2-k}$. This operator maps a weight
$2-k$ harmonic Maass form $f$ to its “shadow” weight $k$ holomorphic cusp form
(1.9) $\xi_{2-k}f:=2iy^{2-k}\overline{\frac{\partial f}{\partial\bar{z}}}$
where $z=x+iy$. As Bruinier and Funke showed in [BF04], the operator
$\xi_{2-k}$ is surjective, and finding a preimage for a given cusp form is a
fundamental problem in the theory of harmonic Maass forms with many arithmetic
applications (see, e.g., [BFOR17]). However, it is not known in general how to
compute explicitly a “holomorphic part” (see (1.11)) of a harmonic Maass form
$g$ with a known shadow. Our summation formula then provides information about
the behaviour of that “holomorphic part” upon the action of test functions, in
terms of the given shadow. Here we state it in the special case of level $1$
and even weight, but in Section 5.3 we will state and prove it in general.
###### Theorem 1.2.
Let $f$ be a weight $k\in 2\mathbb{N}$ holomorphic cusp form with Fourier
expansion
(1.10) $f(z)=\sum_{n=1}^{\infty}a(n)e^{2\pi inz}.$
Suppose that $g$ is a weight $2-k$ harmonic Maass form such that
$\xi_{2-k}g=f$ with Fourier expansion
(1.11) $g(z)=\sum_{\begin{subarray}{c}n\geq-
n_{0}\end{subarray}}c^{+}(n)e^{2\pi
inz}+\sum_{\begin{subarray}{c}n<0\end{subarray}}c^{-}(n)\Gamma(k-1,-4\pi
ny)e^{2\pi inz}.$
where $\Gamma(a,z)$ is the incomplete Gamma function. Then, for every smooth,
compactly supported $\varphi\colon\mathbb{R}_{+}\to\mathbb{R}$, we have
(1.12) $\sum_{\begin{subarray}{c}n\geq-
n_{0}\end{subarray}}c^{+}(n)\int_{0}^{\infty}\varphi(y)\left(e^{-2\pi
ny}-(-iy)^{k-2}e^{-2\pi n/y}\right)dy\\\
=\sum_{l=0}^{k-2}\sum_{n>0}\overline{a(n)}\bigg{(}\frac{(k-2)!}{l!}(4\pi
n)^{1-k+l}\int_{0}^{\infty}e^{-2\pi ny}y^{l}\varphi(y)dy\\\
+\frac{2^{l+1}}{(k-1)}(8\pi n)^{-\frac{k+1}{2}}\int_{0}^{\infty}e^{-\pi
ny}y^{\frac{k}{2}-1}\varphi(y)M_{1-\frac{k}{2}+l,\frac{k-1}{2}}(2\pi
ny)dy\bigg{)},$
where $M_{\kappa,\mu}(z)$ is the Whittaker hypergeometric function.
As usual with summation formulas (see, e.g., [MS04] for an overview), the
formulation and derivation of our formula is based on the use of $L$-series,
test functions and integral transforms which are the main features of our
overall method.
As far as we are aware, this is one of the first instances that summation
formulas have appeared in the study of harmonic Maass forms and we are
currently working on possible applications of our formula. The applications we
are aiming for include information about the growth of the individual
coefficients $c^{+}(n)$ and asymptotic formulas for their moments.
## Acknowledgements
We are thankful to the referees for their careful reading of the manuscript
and their insightful comments. We thank K. Bringmann and J. Lagarias for very
useful feedback and suggestions for further work, as well as Ken Ono for his
helpful remarks and encouragement. The first author is partially supported by
EPSRC grant EP/S032460/1. The second author was supported by Royal Society
University Research Fellowship “Automorphic forms, $L$-functions and trace
formulas”. The third author is grateful for the support of the Center for
Advanced Mathematical Sciences (CAMS) at AUB. This work was supported by a
grant from the Simons Foundation (853830, LR). The fourth author is also
grateful for support from a 2021-2023 Dean’s Faculty Fellowship from
Vanderbilt University and to the Max Planck Institute for Mathematics in Bonn
for its hospitality and financial support.
## 2\. Context and previous work
We comment on the relation of our $L$-series with the classical $L$-series of
holomorphic cusp forms. For $s\in\mathbb{C}$, let
(2.1) $I_{s}(x):=(2\pi)^{s}x^{s-1}\frac{1}{\Gamma\left(s\right)}.$
Then, for $u>0$ and $\Re(s)>0$,
(2.2)
$(\mathcal{L}I_{s})(u)=\frac{(2\pi)^{s}}{\Gamma\left(s\right)}\int_{0}^{\infty}e^{-ut}t^{s-1}dt=\left(\frac{2\pi}{u}\right)^{s}.$
Here the Laplace transform of $I_{s}$ continues as an entire function of $s$.
Let $f$ be a holomorphic cusp form for $\Gamma_{0}(N)$ of weight
$k\in\mathbb{Z}$ with Fourier expansion (1.10). Since
$a(n)=O_{f,\epsilon}(n^{\frac{k-1}{2}+\epsilon})$, for any $s\in\mathbb{C}$
with $\Re(s)>\frac{k-1}{2}$, we have $I_{s}\in\mathcal{F}_{f}$ and
(2.3) $L_{f}(I_{s})=\sum_{n=1}^{\infty}\frac{a(n)}{n^{s}},$
is the usual $L$-series of $f$.
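The identity (2.2) is easy to check numerically; for instance, $(\mathcal{L}I_{2})(4\pi)=(2\pi/4\pi)^{2}=1/4$. This is a sanity-check sketch, with the transform computed by Simpson's rule.

```python
import math

def I(s):
    # I_s(t) = (2 pi)^s t^{s-1} / Gamma(s), as in (2.1)
    return lambda t: (2 * math.pi) ** s * t ** (s - 1) / math.gamma(s)

def laplace(phi, u, upper=20.0, n=20000):
    # composite Simpson's rule for (L phi)(u) = int_0^infty e^{-ut} phi(t) dt
    h = upper / n
    total = 0.0
    for i in range(n + 1):
        t = max(i * h, 1e-12)          # avoid evaluating t^(s-1) at t = 0
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        total += w * math.exp(-u * t) * phi(t)
    return total * h / 3

val = laplace(I(2), 4 * math.pi)      # should be close to (2*pi/(4*pi))^2 = 0.25
```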
The relation with the $L$-series of a weakly holomorphic cusp form $f$ is more
subtle. In this case, $f$ can be expressed in terms of the Fourier expansion
(1.6) where $n_{0}$ is the largest integer such that $a(-n_{0})\neq 0$. The
associated $L$-series is defined in [BFK14, (1.5)], for any fixed $t_{0}>0$,
by
(2.4) $L(s,f):=\sum_{\begin{subarray}{c}n\geq-n_{0}\\\ n\neq
0\end{subarray}}\frac{a(n)\Gamma(s,2\pi nt_{0})}{(2\pi
n)^{s}}+i^{k}\sum_{\begin{subarray}{c}n\geq-n_{0}\\\ n\neq
0\end{subarray}}\frac{a(n)\Gamma\left(k-s,\frac{2\pi n}{t_{0}}\right)}{(2\pi
n)^{k-s}}$
for all $s\in\mathbb{C}$. The value of $L(s,f)$ is independent of $t_{0}$.
Here $\Gamma(s,x)$ is the incomplete gamma function
(2.5) $\Gamma(s,x)=\int_{x}^{\infty}t^{s-1}e^{-t}dt\quad(\Re(s)>0)$
which continues to an entire function of $s\in\mathbb{C}$ for $x\neq 0$.
For a fixed $T>0$, we define the characteristic function
(2.6) $\mathbf{1}_{T}(x):=\begin{cases}1&\text{ when }x>T,\\\ 0&\text{
otherwise. }\end{cases}$
Then, with $I_{s}$ defined as in (2.1), we have, for $t_{0}>0$ and $u>0,$
(2.7)
$\mathcal{L}(I_{s}\mathbf{1}_{t_{0}})(u)=\int_{0}^{\infty}e^{-ut}I_{s}(t)\mathbf{1}_{t_{0}}(t)dt=\frac{(2\pi)^{s}}{\Gamma\left(s\right)}u^{-s}\int_{ut_{0}}^{\infty}e^{-t}t^{s-1}dt=\frac{\Gamma\left(s,ut_{0}\right)}{\Gamma\left(s\right)}\left(\frac{2\pi}{u}\right)^{s}.$
Although the integral defining $\Gamma\left(s,ut_{0}\right)$ diverges when
$u<0$, the incomplete gamma function has an analytic continuation giving an
entire function of $s$, when $u\neq 0$. Therefore, we interpret
$\mathcal{L}(I_{s}\mathbf{1}_{t_{0}})(u)$ as the analytic continuation of
$\Gamma\left(s,ut_{0}\right)$. By (3.9) and (3.11) below, combined with the
asymptotic behaviour of the incomplete gamma function and the Fourier
coefficients $a(n)$, we deduce that, for any $t_{0}>0$,
(2.8) $L_{f}(I_{s}\mathbf{1}_{t_{0}})=\sum_{\begin{subarray}{c}n\geq-n_{0}\\\
n\neq
0\end{subarray}}\frac{a(n)}{n^{s}}\frac{\Gamma\left(s,2\pi nt_{0}\right)}{\Gamma\left(s\right)}$
converges absolutely and gives a non-symmetrised form of the $L$-series (2.4).
Although the definition of $L$-series of weak Maass forms given in [BFK14]
(see (2.4)) addresses the problem of the exponential growth of the forms and
of their Fourier coefficients, the fact that the functional equation of the
definition (2.4) was “built into” its defining formula prevented the
meaningful formulation of a converse theorem for such $L$-series.
The construction we present here makes a converse theorem possible by defining
the $L$-series on a broader class of test functions than on
$\\{I_{s}\mathbf{1}_{t_{0}}:s\in\mathbb{C}\\}$ or, equivalently, the parameter
$s\in\mathbb{C}$. Furthermore the dependence on the test function goes through
the Laplace transform, the essential use of which becomes clearer in the
applications (Proposition 5.5, Theorem 5.6). Our approach should be compared
to that of Miyazaki et al. [MSSU20] in our respective uses of test functions
and of integral transforms (Fourier, in their work, and Laplace in ours). The
results we establish here complement theirs, because the latter deal with
standard Maass forms whereas we cover functions of exponential growth and
harmonic Maass forms. Our approach seems to be also related to Miller and
Schmid’s philosophy of automorphic distributions (see e.g. [MS04]) and we
intend to investigate the connection more precisely in future work.
Recently [DSKS21], a converse theorem for harmonic Maass forms was announced,
but again its focus was on the special case of harmonic Maass forms of
polynomial growth, which, in particular, does not cover the function (1.1).
Our theorem, by addressing the case of exponential growth, accounts for the
situation of a typical harmonic Maass form. For the same reason, the
techniques introduced here should be more broadly applicable to the various
modular objects of non-polynomial growth that have increasingly been
attracting attention in the last several years, including Brown’s real-analytic
modular forms [Bro18, DD20] and higher depth weak Maass forms. In relation to
the latter, we aim to investigate the connection of our $L$-series with the
sesquiharmonic Maass forms associated, in [BDR13], to non-critical values of
classical $L$-functions.
In this paper, we concentrate on foundational analytic aspects of our
$L$-series, but the theory is amenable to the study of specific invariants,
such as their special values. For example, in [DR22], the hypothetical
“central $L$-value” attached to the classical $j$-invariant in [BFI15] is
interpreted as an actual value of the $L$-series defined here.
Finally, a remark on the unusual lack of reference to meromorphic continuation
both in Theorem 4.5 and in Theorems 5.1, 5.4. The reason for this is that the
$L$-series in this paper is defined on a broad family of test functions that
contains the compactly supported functions $\varphi$. As a result, both
$\varphi$ and its “contragredient” $\check{\varphi}$ (1.8) belong to the
domain of absolute convergence of the $L$-series $L_{f}$. This cannot happen
in the case of standard $L$-series of holomorphic cusp forms because there is
no value of $s$ for which both $I_{s}$ (in (2.1)) and $\check{I_{s}}$ belong
to the domain of absolute convergence of the $L$-series.
However, it is possible to define our $L$-series on classes of test functions
for which the above property does not hold automatically. Then, the problem of
meromorphic continuation arises naturally and can lead to many interesting
questions and applications. Theorem 4.6 indicates what form a statement
involving meromorphic continuation can take in our setting. For the initial
applications we are concerned with here, though, the main issues lie in other
aspects and thus the problem of continuation is not relevant.
## 3\. Harmonic Maass forms
We recall the definition and basic properties of harmonic Maass forms. For
$k\in\frac{1}{2}\mathbb{Z}$ we let $\Delta_{k}$ denote the weight $k$
hyperbolic Laplacian on $\mathbb{H}$ given by
(3.1) $\Delta_{k}:=-4y^{2}\frac{\partial}{\partial
z}\frac{\partial}{\partial\bar{z}}+2iky\frac{\partial}{\partial\bar{z}},$
where $z=x+iy$ with $x,y\in\mathbb{R}$.
For $k\in\mathbb{Z}$, we consider the action $|_{k}$ of
$\operatorname{SL}_{2}(\mathbb{R})$ on smooth functions
$f\colon\mathbb{H}\to\mathbb{C}$ on the complex upper half-plane $\mathbb{H}$,
given by
(3.2) $(f|_{k}\gamma)(z):=(cz+d)^{-k}f(\gamma z),\qquad\text{for
$\gamma=\begin{pmatrix}a&b\\\ c&d\end{pmatrix}\in$ SL${}_{2}(\mathbb{R})$}.$
Here $\gamma z=\frac{az+b}{cz+d}$ is the Möbius transformation.
Now we define the action $|_{k}$ for $k\in\frac{1}{2}+\mathbb{Z}$. We let
$\left(\frac{c}{d}\right)$ be the Kronecker symbol. For an odd integer $d$, we
set
(3.3) $\epsilon_{d}:=\begin{cases}1&\text{ if }d\equiv 1\bmod{4},\\\ i&\text{
if }d\equiv 3\bmod{4},\end{cases}$
so that $\epsilon_{d}^{2}=\left(\frac{-1}{d}\right)$. We set the implied
logarithm to equal its principal branch so that $-\pi<\arg(z)\leq\pi$. We
define the action $|_{k}$ of $\Gamma_{0}(N)$, for $4|N$, on smooth functions
$f\colon\mathbb{H}\to\mathbb{C}$ as follows:
(3.4)
$(f|_{k}\gamma)(z):=\left(\frac{c}{d}\right)\epsilon_{d}^{2k}(cz+d)^{-k}f(\gamma
z)\qquad\text{ for all }\gamma=\begin{pmatrix}*&*\\\
c&d\end{pmatrix}\in\Gamma_{0}(N).$
In the case of half-integral weight, Shimura [Shi73] uses the formalism of the
full metaplectic group for the definition of the action. From that more
general framework, in the sequel we will only need the following special cases
(see, e.g. the proof of [Shi73, Proposition 5.1]): Let
$W_{M}=\left(\begin{smallmatrix}0&-\sqrt{M}^{-1}\\\
\sqrt{M}&0\end{smallmatrix}\right)$ for $M\in\mathbb{N}$. We have
(3.5) $(f|_{k}W_{M})(z)=(f|_{k}W^{-1}_{M})(z)=f(W_{M}z)(-i\sqrt{M}z)^{-k}.$
For $a\in\mathbb{R}_{+}$ and $b\in\mathbb{R}$, we have
(3.6) $\left(f\Big{|}_{k}\begin{pmatrix}\frac{1}{a}&b\\\
0&a\end{pmatrix}\right)(z)=a^{-k}f\left(\frac{z+ba}{a^{2}}\right).$
Notice the extra $-i$ in the formula (3.5) in the half-integral weight case.
With this notation we now state the definition for harmonic Maass forms.
###### Definition 3.1.
Let $N\in\mathbb{N}$ and suppose that $4|N$ when $k\in\frac{1}{2}+\mathbb{Z}.$
Let $\psi$ be a Dirichlet character modulo $N$. A _harmonic Maass form of
weight $k$ and character $\psi$ for $\Gamma_{0}(N)$_ is a smooth function
$f\colon\mathbb{H}\to\mathbb{C}$ such that:
1. i).
For all $\gamma=\left(\begin{smallmatrix}*&*\\\
*&d\end{smallmatrix}\right)\in\Gamma_{0}(N)$, we have $f|_{k}\gamma=\psi(d)f$.
2. ii).
$\Delta_{k}(f)=0$.
3. iii).
For each $\gamma=\left(\begin{smallmatrix}*&*\\\
c&d\end{smallmatrix}\right)\in\operatorname{SL}_{2}(\mathbb{Z})$, there is a
polynomial $P(z)\in\mathbb{C}[e^{-2\pi iz}]$, such that
(3.7) $f(\gamma z)(cz+d)^{-k}-P(z)=O(e^{-\epsilon y}),\qquad\text{as
$y\to\infty$, for some $\epsilon>0.$}$
We let $H_{k}(N,\psi)$ be the space of weight $k$ harmonic Maass forms with
character $\psi$ for $\Gamma_{0}(N)$. On replacing (3.7) with $f(\gamma
z)(cz+d)^{-k}=O(e^{\epsilon y})$ we obtain a space denoted by
$H^{\prime}_{k}(N,\psi)$.
To describe the Fourier expansions of the elements of $H_{k}(N,\psi)$, we
recall the definition and the asymptotic behaviour of the incomplete Gamma
function. For $r,z\in\mathbb{C}$ with $\Re(r)>0$, we define the incomplete
Gamma function as
(3.8) $\Gamma(r,z):=\int_{z}^{\infty}e^{-t}t^{r}\,\frac{dt}{t}.$
When $z\neq 0$, $\Gamma(r,z)$ is an entire function of $r$ (see [OLBC10,
§8.2(ii)]). We note the asymptotic relation for $x\in\mathbb{R}$ (see [OLBC10,
(8.11.2)])
(3.9) $\Gamma(s,x)\sim x^{s-1}e^{-x}\qquad\text{as $|x|\to\infty.$ }$
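As a quick sanity check of (3.8) and (3.9), one can consider the case $r=1$, where the incomplete Gamma function has a closed form:

```latex
% Worked check of (3.8)-(3.9) at r = 1:
\Gamma(1,x) = \int_{x}^{\infty} e^{-t}\,dt = e^{-x} = x^{1-1}e^{-x},
% so the asymptotic relation (3.9) holds with equality in this case.
% More generally, integration by parts in (3.8) yields the recursion
\Gamma(r+1,x) = r\,\Gamma(r,x) + x^{r}e^{-x},
% from which (3.9) follows inductively for positive integer r.
```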
With this notation we can state the following theorem, due to Bruinier and
Funke [BF04, (3.2)].
###### Theorem 3.2 ([BF04]).
Let $k\in\frac{1}{2}\mathbb{Z}$. Each $f\in H_{k}(N,\psi)$ has an absolutely
convergent Fourier expansion
(3.10) $f(z)=\sum_{\begin{subarray}{c}n\geq-n_{0}\end{subarray}}a(n)e^{2\pi
inz}+\sum_{\begin{subarray}{c}n<0\end{subarray}}b(n)\Gamma(1-k,-4\pi
ny)e^{2\pi inz}$
for some $a(n),b(n)\in\mathbb{C}$ and $n_{0}\in\mathbb{N}.$ Analogous
expansions hold at the other cusps.
A subspace of particular importance is the space $S_{k}^{!}(N,\psi)$ of
_weakly holomorphic cusp forms with weight $k\in 2\mathbb{Z}$ and character
$\psi$ for $\Gamma_{0}(N)$_. It consists of $f\in H_{k}(N,\psi)$ which are
holomorphic and have vanishing constant terms at all cusps.
We finally note (cf. [BF04, Lemma 3.4]) that
(3.11) $a(n)=O(e^{C\sqrt{n}}),\quad b(-n)=O(e^{C\sqrt{n}})\qquad\text{as
$n\to\infty$ for some $C>0$}.$
## 4\. $L$-series associated to harmonic Maass forms
Let $C(\mathbb{R},\mathbb{C})$ be the space of piecewise smooth complex-
valued functions on $\mathbb{R}$. We recall the notation $\mathcal{L}\varphi$
for the Laplace transform of the function $\varphi$ on $\mathbb{R}_{+}$ given
in (1.2), when the integral is absolutely convergent. For $s\in\mathbb{C}$, we
define
(4.1) $\varphi_{s}(x):=\varphi(x)x^{s-1}.$
Note that $\varphi_{1}=\varphi$.
Let $M$ be a positive integer and $k\in\frac{1}{2}\mathbb{Z}$. For each
function $f$ on $\mathbb{H}$ given by the absolutely convergent series
(4.2) $f(z)=\sum_{n\geq-n_{0}}a(n)e^{2\pi
in\frac{z}{M}}+\sum_{n<0}b(n)\Gamma\left(1-k,\frac{-4\pi ny}{M}\right)e^{2\pi
in\frac{z}{M}},$
let $\mathcal{F}_{f}$ be the space of functions $\varphi\in
C(\mathbb{R},\mathbb{C})$ such that the integral defining
$(\mathcal{L}\varphi)(s)$ (resp. $(\mathcal{L}\varphi_{2-k})(s)$) converges
absolutely for all $s$ with $\Re(s)\geq-2\pi n_{0}$ (resp. $\Re(s)>0$), and
the following series converges:
(4.3) $\sum_{\begin{subarray}{c}n\geq-
n_{0}\end{subarray}}|a(n)|(\mathcal{L}|\varphi|)\left(2\pi\frac{n}{M}\right)+\sum_{n<0}|b(n)|\left(\frac{4\pi|n|}{M}\right)^{1-k}\int_{0}^{\infty}\frac{(\mathcal{L}|\varphi_{2-k}|)\left(\frac{-2\pi
n(2t+1)}{M}\right)}{(1+t)^{k}}dt.$
This definition expresses the condition required to guarantee absolute and
uniform convergence in the setting in which we will be working. We note that
this construction is possible for any real $k$.
###### Remark 4.1.
In the proof of Theorem 4.5, we will see that, for the functions $f$ we will
be considering, the space $\mathcal{F}_{f}$ contains the compactly supported
functions.
With this notation we state the following definition.
###### Definition 4.2.
Let $M$ be a positive integer and $k\in\frac{1}{2}\mathbb{Z}$. Let $f$ be a
function on $\mathbb{H}$ given by the Fourier expansion (4.2). The $L$-series
of $f$ is defined to be the map $L_{f}\colon\mathcal{F}_{f}\to\mathbb{C}$ such
that, for $\varphi\in\mathcal{F}_{f}$,
(4.4) $L_{f}(\varphi)=\sum_{n\geq-n_{0}}a(n)(\mathcal{L}\varphi)(2\pi n/M)\\\
+\sum_{n<0}b(n)(-4\pi
n/M)^{1-k}\int_{0}^{\infty}\frac{(\mathcal{L}\varphi_{2-k})(-2\pi
n(2t+1)/M)}{(1+t)^{k}}dt.$
###### Remark 4.3.
As mentioned in §2, this definition is related to previously defined and
studied $L$-series. See page 2.4 for details on the precise relation. The
domain of the map $L_{f}$ can be extended to a larger class of test functions
$\varphi$ to account more directly for series such as (2.8). However, for the
purposes of this work, $\mathcal{F}_{f}$ is sufficient.
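To orient the reader, the following short computation shows how Definition 4.2 recovers the classical $L$-function. It is formal: we take $f$ a holomorphic cusp form of level one (so $b(n)=0$ and $a(n)=0$ for $n\leq 0$) and the test function $\varphi(x)=x^{s-1}$, which is not compactly supported and lies outside $\mathcal{F}_{f}$ as defined above (cf. the discussion preceding Theorem 4.6).

```latex
% Laplace transform of the power test function, for Re(s), t > 0:
(\mathcal{L}\,x^{s-1})(t) = \int_{0}^{\infty} x^{s-1} e^{-tx}\,dx
                          = \Gamma(s)\,t^{-s}.
% Hence Definition 4.2 gives, for Re(s) large enough,
L_{f}(\varphi) = \sum_{n \ge 1} a(n)\,\Gamma(s)\,(2\pi n)^{-s}
               = \frac{\Gamma(s)}{(2\pi)^{s}} \sum_{n \ge 1} \frac{a(n)}{n^{s}},
% i.e. the completed classical L-function of f.
```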
To prove the converse theorem in the case of non-holomorphic elements of
$H_{k}(N,\psi)$, we will also need the following renormalised version of the
partial derivative in terms of $x$, where $z=x+iy\in\mathbb{H}$:
(4.5) $(\delta_{k}f)(z):=z\frac{\partial f}{\partial x}(z)+\frac{k}{2}f(z).$
The reason for introducing this operator is that, in contrast to the
holomorphic case, to ensure the vanishing of a general eigenfunction $F$ of
the Laplacian it is not enough to show vanishing on the imaginary axis: it is
also required that $\partial F/\partial x\equiv 0$ on the imaginary axis. The
operator $\delta_{k}$ enables us to formulate a condition in the converse
theorem that leads to that vanishing.
Recalling the Fourier expansion given in (4.2), we have
(4.6) $(\delta_{k}f)(z)=\frac{k}{2}f(z)+\sum_{n\geq-n_{0}}a(n)\left(2\pi
in\frac{z}{M}\right)e^{2\pi in\frac{z}{M}}\\\ +\sum_{n<0}b(n)\left(2\pi
in\frac{z}{M}\right)\Gamma\left(1-k,\frac{-4\pi ny}{M}\right)e^{2\pi
in\frac{z}{M}}.$
Although the expansion of $\delta_{k}f$ is not of the form (4.2), we can still
assign a class of functions $\mathcal{F}_{\delta_{k}f}$ and an $L$-series map
$L_{\delta_{k}f}:\mathcal{F}_{\delta_{k}f}\to\mathbb{C}$ to it. Specifically
we let $\mathcal{F}_{\delta_{k}f}$ consist of $\varphi\in
C(\mathbb{R},\mathbb{C})$ such that the following series converges:
(4.7) $2\pi\sum_{n\geq-n_{0}}|a(n)n|(\mathcal{L}|\varphi_{2}|)(2\pi n/M)\\\
+2\pi\sum_{n<0}|b(n)n|(-4\pi
n/M)^{1-k}\int_{0}^{\infty}\frac{(\mathcal{L}|\varphi_{3-k}|)(-2\pi
n(2t+1)/M)}{(1+t)^{k}}dt.$
Then, we let $L_{\delta_{k}f}$ be such that, for
$\varphi\in\mathcal{F}_{\delta_{k}f}$,
(4.8)
$L_{\delta_{k}f}(\varphi):=\frac{k}{2}L_{f}(\varphi)-\frac{2\pi}{M}\sum_{n\geq-
n_{0}}a(n)n(\mathcal{L}\varphi_{2})(2\pi n/M)\\\
-\frac{2\pi}{M}\sum_{n<0}b(n)n(-4\pi
n/M)^{1-k}\int_{0}^{\infty}\frac{(\mathcal{L}\varphi_{3-k})(-2\pi
n(2t+1)/M)}{(1+t)^{k}}dt.$
This converges absolutely.
###### Lemma 4.4.
Let $f$ be a function on $\mathbb{H}$ given by the series (4.2). For
$\varphi\in\mathcal{F}_{f}$, the $L$-series $L_{f}(\varphi)$ can be written as
(4.9) $L_{f}(\varphi)=\int_{0}^{\infty}f(iy)\varphi(y)dy.$
Similarly, for $\varphi\in\mathcal{F}_{\delta_{k}f}$,
(4.10)
$L_{\delta_{k}f}(\varphi)=\int_{0}^{\infty}(\delta_{k}f)(iy)\varphi(y)dy,$
where $\delta_{k}f$ is defined in (4.5) and $L_{\delta_{k}f}$ in (4.8).
###### Proof.
By Definition 4.2, for $\varphi\in\mathcal{F}_{f}$,
(4.11) $L_{f}(\varphi)=\sum_{n\geq-n_{0}}a(n)(\mathcal{L}\varphi)(2\pi n/M)\\\
+\sum_{n<0}b(n)(-4\pi
n/M)^{1-k}\int_{0}^{\infty}\frac{(\mathcal{L}\varphi_{2-k})(-2\pi
n(2t+1)/M)}{(1+t)^{k}}dt$
and this series converges absolutely. Since $\varphi\in\mathcal{F}_{f}$, we
can interchange the order of summation and integration and write the
“holomorphic” part of the series $L_{f}(\varphi)$, according to
(4.12) $(\mathcal{L}\varphi)\left(\frac{2\pi
n}{M}\right)=\int_{0}^{\infty}\varphi(y)e^{-2\pi n\frac{y}{M}}dy.$
For the remaining part, thanks to
(4.13)
$\Gamma(a,z)=z^{a}e^{-z}\int_{0}^{\infty}\frac{e^{-zt}}{(1+t)^{1-a}}dt\qquad\text{(valid
for $\Re(z)>0$)}$
(cf. [OLBC10, (8.6.5)]) we can interchange the order of integration to re-
write the “non-holomorphic” part of the series $L_{f}(\varphi)$, according to
(4.14) $\int_{0}^{\infty}\Gamma\left(1-k,-4\pi n\frac{y}{M}\right)e^{-2\pi
n\frac{y}{M}}\varphi(y)dy=\left(\frac{-4\pi
n}{M}\right)^{1-k}\int_{0}^{\infty}\frac{\mathcal{L}\varphi_{2-k}\left(\frac{-2\pi
n(2t+1)}{M}\right)}{(1+t)^{k}}dt.$
The same proof works for $L_{\delta_{k}f}(\varphi)$. ∎
Our goal in the remainder of this section is to state and prove the functional
equation of the $L$-series $L_{f}(\varphi)$, when $f\in H_{k}(N,\psi)$.
Let $f$ be a function on $\mathbb{H}$ with the given Fourier expansion (4.2)
with $M=1$. Let $D$ be a positive integer and let $\chi$ be a Dirichlet
character modulo $D$. We define the “twist” $f_{\chi}$ by the Dirichlet
character $\chi$ which has a similar series expansion (4.17) given below, with
$M=D$ in (4.2), and then we have the corresponding $L$-series
$L_{f_{\chi}}(\varphi)$ as in (4.18) below. Then, under the assumption that
$f$ is an element of the space $H_{k}(N,\psi)$ of weight $k$ harmonic Maass
forms for level $N$ and character $\psi$, we state and prove the functional
equation of the $L$-series of $f_{\chi}$. Note that $\chi$ is not necessarily
primitive.
For a Dirichlet character $\chi$ modulo $D$, for each $n\in\mathbb{Z}$, we
define the generalized Gauss sum
(4.15) $\tau_{\chi}(n):=\sum_{u\bmod D}\chi(u)e^{2\pi in\frac{u}{D}}.$
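For later use (for instance, the boundedness of $\tau_{\bar{\chi}}(n)$ invoked in the proof of Theorem 4.5), we record two standard facts about the sums (4.15):

```latex
% Trivial bound, valid for every Dirichlet character chi modulo D:
|\tau_{\chi}(n)| \le \sum_{u \bmod D} |\chi(u)| \le D
\qquad \text{for all } n \in \mathbb{Z}.
% When chi is primitive modulo D, substituting u -> n^{-1}u for
% gcd(n, D) = 1 (with a separate argument when gcd(n, D) > 1) gives
\tau_{\chi}(n) = \overline{\chi}(n)\,\tau_{\chi}(1).
```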
Let $f$ be a function on $\mathbb{H}$ with the Fourier expansion (4.2) with
$M=1$:
(4.16) $f(z)=\sum_{n\geq-n_{0}}a(n)e^{2\pi inz}+\sum_{n<0}b(n)\Gamma(1-k,-4\pi
ny)e^{2\pi inz}.$
Then we define the twisted functions $f_{\chi}$ as
(4.17)
$f_{\chi}(z):=D^{\frac{k}{2}}\sum_{u\bmod{D}}\overline{\chi(u)}\left(f\big{|}_{k}\begin{pmatrix}\frac{1}{\sqrt{D}}&\frac{u}{\sqrt{D}}\\\
0&\sqrt{D}\end{pmatrix}\right)(z)\\\ =\sum_{n\geq-
n_{0}}a(n)\tau_{\bar{\chi}}(n)e^{2\pi
in\frac{z}{D}}+\sum_{n<0}b(n)\tau_{\bar{\chi}}(n)\Gamma\left(1-k,-4\pi
n\frac{y}{D}\right)e^{2\pi in\frac{z}{D}}.$
Then the $L$-series for $f_{\chi}$ and $\delta_{k}f_{\chi}$ are
(4.18) $L_{f_{\chi}}(\varphi)=\sum_{\begin{subarray}{c}n\geq-
n_{0}\end{subarray}}\tau_{\bar{\chi}}(n)a(n)(\mathcal{L}\varphi)(2\pi n/D)\\\
+\sum_{n<0}\tau_{\bar{\chi}}(n)b(n)(-4\pi
n/D)^{1-k}\int_{0}^{\infty}\frac{\mathcal{L}(\varphi_{2-k})(-2\pi
n(2t+1)/D)}{(1+t)^{k}}dt$
and
(4.19)
$L_{\delta_{k}f_{\chi}}(\varphi)=\frac{k}{2}L_{f_{\chi}}(\varphi)-\frac{2\pi}{D}\sum_{n\geq-
n_{0}}n\tau_{\bar{\chi}}(n)a(n)(\mathcal{L}\varphi_{2})(2\pi n/D)\\\
-\frac{2\pi}{D}\sum_{\begin{subarray}{c}n<0\end{subarray}}n\tau_{\bar{\chi}}(n)b(n)(-4\pi
n/D)^{1-k}\int_{0}^{\infty}\frac{(\mathcal{L}\varphi_{3-k})(-2\pi
n(2t+1)/D)}{(1+t)^{k}}dt,$
for $\varphi\in\mathcal{F}_{f_{\chi}}\cap\mathcal{F}_{\delta_{k}(f_{\chi})}$.
By Lemma 4.4, we have
(4.20) $\displaystyle L_{f_{\chi}}(\varphi)=\int_{0}^{\infty}f_{\chi}(iy)\varphi(y)dy,$
(4.21) $\displaystyle L_{\delta_{k}f_{\chi}}(\varphi)=\int_{0}^{\infty}(\delta_{k}f_{\chi})(iy)\varphi(y)dy.$
Before stating the functional equation of $L_{f_{\chi}}$, we introduce
another piece of notation. For each $a\in\frac{1}{2}\mathbb{Z},$ $M\in\mathbb{N}$ and
$\varphi\colon\mathbb{R}_{+}\to\mathbb{C}$, we define (note the change in sign
convention from earlier in this paper for the action of $W_{M}$ on functions
on $\mathbb{H}$)
(4.22)
$(\varphi|_{a}W_{M})(x):=(Mx)^{-a}\varphi\left(\frac{1}{Mx}\right)\qquad\text{for
all $x>0$}.$
Here recall that $W_{M}=\left(\begin{smallmatrix}0&-\sqrt{M}^{-1}\\\
\sqrt{M}&0\end{smallmatrix}\right)$. Since this action applies to functions on
$\mathbb{R}_{+}$ and the action (3.5) to complex functions, the use of the
same notation should not cause confusion, but some caution is advised.
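As a consistency check of (4.22), applying the action twice rescales the test function by a constant; the resulting identity is exactly the one used at the end of the proof of Theorem 4.6 below:

```latex
% Direct computation from (4.22):
((\varphi|_{a}W_{M})|_{a}W_{M})(x)
  = (Mx)^{-a}\,(\varphi|_{a}W_{M})\!\left(\tfrac{1}{Mx}\right)
  = (Mx)^{-a}\left(\tfrac{M}{Mx}\right)^{-a}\varphi(x)
  = M^{-a}\,\varphi(x).
% With a = 1-k and M = N this reads
% ((varphi|_{1-k} W_N)|_{1-k} W_N)(x) = N^{k-1} varphi(x).
```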
We also define a set of “test functions” we will be using in most of the
remaining results. Let $S_{c}(\mathbb{R}_{+})$ be a set of complex-valued,
compactly supported and piecewise smooth functions on $\mathbb{R}_{+}$ which
satisfy the following condition: for any $y\in\mathbb{R}_{+}$, there exists
$\varphi\in S_{c}(\mathbb{R}_{+})$ such that $\varphi(y)\neq 0$.
We can now prove the functional equation of our $L$-function $L_{f}(\varphi)$
and its twists.
###### Theorem 4.5.
Fix $k\in\frac{1}{2}\mathbb{Z}$. Let $N\in\mathbb{N}$ and let $\psi$ be a
Dirichlet character modulo $N$. When $k\in\frac{1}{2}+\mathbb{Z}$, assume that
$4|N$. Suppose that $f$ is an element of $H_{k}(N,\psi)$ with expansion (4.2)
and that $\chi$ is a character modulo $D$ with $(D,N)=1$. Consider the maps
$L_{f_{\chi}},L_{\delta_{k}f_{\chi}}\colon\mathcal{F}_{f_{\chi}}\cap\mathcal{F}_{\delta_{k}f_{\chi}}\to\mathbb{C}$
given in (4.18) and (4.19). Set
(4.23) $g:=f|_{k}W_{N}$
and
$\mathcal{F}_{f,g}:=\left\\{\varphi\in\mathcal{F}_{f}\cap\mathcal{F}_{\delta_{k}f}\;:\;\varphi|_{2-k}W_{N}\in\mathcal{F}_{g}\cap\mathcal{F}_{\delta_{k}g}\right\\}.$
Then $\mathcal{F}_{f,g}\neq\\{0\\}$ and we have the following functional
equations. For each $\varphi\in\mathcal{F}_{f,g}$, if $k\in\mathbb{Z}$,
(4.24) $\displaystyle L_{f_{\chi}}(\varphi)$
$\displaystyle=i^{k}\frac{\chi(-N)\psi(D)}{N^{k/2-1}}L_{g_{\bar{\chi}}}(\varphi|_{2-k}W_{N}),$
(4.25) $\displaystyle L_{\delta_{k}f_{\chi}}(\varphi)$
$\displaystyle=-i^{k}\frac{\chi(-N)\psi(D)}{N^{k/2-1}}L_{\delta_{k}g_{\bar{\chi}}}(\varphi|_{2-k}W_{N}).$
For each $\varphi\in\mathcal{F}_{f,g}$, if $k\in\frac{1}{2}+\mathbb{Z}$,
(4.26) $\displaystyle L_{f_{\chi}}(\varphi)$
$\displaystyle=\psi_{D}(-1)^{k-\frac{1}{2}}\psi_{D}(N)\frac{\chi(-N)\psi(D)}{\epsilon_{D}N^{-1+k/2}}L_{g_{\bar{\chi}\psi_{D}}}(\varphi|_{2-k}W_{N}),$
(4.27) $\displaystyle L_{\delta_{k}f_{\chi}}(\varphi)$
$\displaystyle=-\psi_{D}(-1)^{k-\frac{1}{2}}\psi_{D}(N)\frac{\chi(-N)\psi(D)}{\epsilon_{D}N^{-1+k/2}}L_{\delta_{k}g_{\bar{\chi}\psi_{D}}}(\varphi|_{2-k}W_{N}).$
Here $\psi_{D}(u)=\left(\frac{u}{D}\right)$ is the real Dirichlet character
modulo $D$, given by the Kronecker symbol.
###### Proof.
We first note that, exactly as in the classical case, we can show that $g\in
H_{k}(N,\bar{\psi})$, if $k\in\mathbb{Z}$ and $g\in
H_{k}(N,\bar{\psi}\left(\frac{N}{\bullet}\right))$, if
$k\in\frac{1}{2}+\mathbb{Z}$. We further observe that $\mathcal{F}_{f,g}$ is
non-zero because, clearly, $S_{c}(\mathbb{R}_{+})$ is closed under the action
of $W_{N}$ and each $\mathcal{F}_{f}$ and $\mathcal{F}_{\delta_{k}f}$ contains
$S_{c}(\mathbb{R}_{+})$. Indeed, if $\varphi\in S_{c}(\mathbb{R}_{+}),$ with
${\rm Supp}(\varphi)\subset(c_{1},c_{2})$ ($c_{1},c_{2}>0$), then, for all
$x>0$,
(4.28)
$\mathcal{L}(|\varphi|)(x)=\int_{c_{1}}^{c_{2}}|\varphi(y)|e^{-xy}dy\ll_{c_{1},c_{2},\varphi}e^{-xc_{1}}$
and thus, using (3.11), we deduce that the series in (4.3) are convergent.
We further note that if $\varphi\in\mathcal{F}_{f}$, then
$\varphi\in\mathcal{F}_{f_{\chi}}$, for all $\chi$. This follows from (4.18)
and the boundedness of $\tau_{\bar{\chi}}(n).$
Now we prove the functional equations for $L_{f_{\chi}}(\varphi)$ and
$L_{\delta_{k}f_{\chi}}(\varphi)$. Since they depend on whether
$k\in\mathbb{Z}$ or $k\in\frac{1}{2}+\mathbb{Z}$, we consider the two cases
separately.
_Case I: $k\in\mathbb{Z}$._ As in the classical case, the definition of
$g=f|_{k}W_{N}$ and the identity
(4.29) $W_{N}\begin{pmatrix}\frac{1}{\sqrt{D}}&\frac{u}{\sqrt{D}}\\\
0&\sqrt{D}\end{pmatrix}W_{N}^{-1}=W_{N}^{-1}\begin{pmatrix}\frac{1}{\sqrt{D}}&\frac{u}{\sqrt{D}}\\\
0&\sqrt{D}\end{pmatrix}W_{N}=\begin{pmatrix}D&-v\\\
-Nu&\frac{1+Nuv}{D}\end{pmatrix}\begin{pmatrix}\frac{1}{\sqrt{D}}&\frac{v}{\sqrt{D}}\\\
0&\sqrt{D}\end{pmatrix},$
valid for $u,v\in\mathbb{Z}$ with $\gcd(u,D)=1$ and $Nuv\equiv-1\bmod{D}$,
imply that
(4.30) $f_{\chi}|_{k}W_{N}=\chi(-N)\psi(D)g_{\bar{\chi}}.$
By (4.20), by changing the variable $y$ to $\frac{1}{Ny}$, and then applying
the identity (4.30),
(4.31)
$L_{f_{\chi}}(\varphi)=\int_{0}^{\infty}f_{\chi}\left(i\frac{1}{Ny}\right)\varphi\left(\frac{1}{Ny}\right)N^{-1}y^{-2}dy\\\
=\frac{\chi(-N)\psi(D)i^{k}}{N^{\frac{k}{2}-1}}\int_{0}^{\infty}g_{\bar{\chi}}(iy)(\varphi|_{2-k}W_{N})(y)dy=\frac{\chi(-N)\psi(D)i^{k}}{N^{\frac{k}{2}-1}}L_{g_{\bar{\chi}}}(\varphi|_{2-k}W_{N}).$
This gives the first equality of (4.24).
For the second equality (4.25), we apply the operator $\delta_{k}$ to both
sides of (4.30):
(4.32)
$(\delta_{k}(f_{\chi}|_{k}W_{N}))(z)=\frac{k}{2}(f_{\chi}|_{k}W_{N})(z)+z\frac{\partial}{\partial
x}(f_{\chi}|_{k}W_{N})(z)=\chi(-N)\psi(D)(\delta_{k}g_{\bar{\chi}})(z).$
For the left hand side, we claim that the differential operator $\delta_{k}$
and action of $W_{N}$ via $|_{k}$ almost commute with each other:
(4.33)
$(\delta_{k}(f_{\chi}|_{k}W_{N}))(z)=\frac{k}{2}(f_{\chi}|_{k}W_{N})(z)+z\frac{\partial}{\partial
x}\bigg{(}(\sqrt{N}z)^{-k}f_{\chi}\left(-\frac{1}{Nz}\right)\bigg{)}=-((\delta_{k}f_{\chi})|_{k}W_{N})(z).$
Then we get
(4.34)
$((\delta_{k}f_{\chi})|_{k}W_{N})(z)=-\chi(-N)\psi(D)(\delta_{k}g_{\bar{\chi}})(z).$
As above, applying (4.21) and using the identity above, we get
(4.35)
$L_{\delta_{k}f_{\chi}}(\varphi)=i^{k}N^{-\frac{k}{2}+1}\int_{0}^{\infty}((\delta_{k}f_{\chi})|_{k}W_{N})(iy)(\varphi|_{2-k}W_{N})(y)dy\\\
=-\chi(-N)\psi(D)i^{k}N^{-\frac{k}{2}+1}\int_{0}^{\infty}(\delta_{k}g_{\bar{\chi}})(iy)(\varphi|_{2-k}W_{N})(y)dy\\\
=-\chi(-N)\psi(D)i^{k}N^{-\frac{k}{2}+1}L_{\delta_{k}g_{\bar{\chi}}}(\varphi|_{2-k}W_{N}).$
_Case II: $k\in\frac{1}{2}+\mathbb{Z}$._ Recall that in this case we assume
that $4\mid N$. We first note that $g=f|_{k}W_{N}$ is a modular form of weight
$k$ with character $\bar{\psi}\cdot\left(\frac{N}{\bullet}\right)$ for
$\Gamma_{0}(N)$. Indeed, for each $\gamma=\left(\begin{smallmatrix}a&b\\\
c&d\end{smallmatrix}\right)\in\Gamma_{0}(N)$, the identity
(4.36) $W_{N}\gamma=\begin{pmatrix}d&-\frac{c}{N}\\\ -bN&a\end{pmatrix}W_{N}$
implies
(4.37) $g(\gamma
z)(cz+d)^{-k}=\psi(a)\epsilon_{a}^{-2k}\left(\frac{-bN}{a}\right)(f|_{k}W_{N})(z)=\overline{\psi(d)}\epsilon_{d}^{-2k}\left(\frac{c}{d}\right)\left(\frac{N}{d}\right)g(z)$
since $a\equiv d\mod 4,$ $ad\equiv 1\mod(-bN)$ and $-bc\equiv 1\mod d$.
Now, according to Shimura [Shi73, Proposition 5.1], we have
(4.38)
$f_{\chi}\left(-\frac{1}{Nz}\right)\left(-i\sqrt{N}z\right)^{-k}=\psi_{D}(-1)^{k-\frac{1}{2}}\psi_{D}(N)\frac{\chi(-N)\psi(D)}{\epsilon_{D}}g_{\bar{\chi}\psi_{D}}(z).$
With this, we obtain, similarly to Case I, the functional equation (4.26) and
the functional equation (4.27). ∎
As pointed out in the introduction, meromorphic continuation does not play a
role in Theorem 4.5 and in its converse theorem, Theorem 5.1. However, it is
possible, depending on the application one has in mind, to consider a setting
for the theorem that makes meromorphic continuation relevant. To illustrate
this point we describe such a setting and prove a theorem where meromorphic
continuation is part of the conclusion.
Specifically, the sets of test functions for which the series
$L_{f}(\varphi)$ converges absolutely and for which the integral
$\int_{0}^{\infty}f(iy)\varphi(y)dy$ converges (absolutely) are different.
When $f$ is a holomorphic cusp form of
weight $k$ then $\varphi(y)=y^{s+\frac{k-1}{2}-1}$ makes the series
$L_{f}(\varphi)$ converge absolutely for $\Re(s)>1$, but the integral
$\int_{0}^{\infty}f(iy)\varphi(y)dy$ converges and defines a meromorphic
function for any $s\in\mathbb{C}$, which gives analytic continuation for
$L_{f}(\varphi)$ to any $s\in\mathbb{C}$. We discuss the analogue of this
phenomenon of the $L$-series in the remainder of this section.
Recall that $\varphi_{s}(x)=\varphi(x)x^{s-1}$. Then, for $y>0$ and
$s\in\mathbb{C}$ with $\Re(s)>\frac{1}{2}$, by the Cauchy–Schwarz inequality,
(4.39)
$(\mathcal{L}|\varphi_{s}|)(y)\leq\left(\mathcal{L}(|\varphi|^{2})(y)\right)^{\frac{1}{2}}y^{-\Re(s)+\frac{1}{2}}\left(\Gamma(2\Re(s)-1)\right)^{\frac{1}{2}}.$
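The estimate (4.39) follows by pairing the factors $|\varphi(x)|e^{-xy/2}$ and $x^{\Re(s)-1}e^{-xy/2}$ in the Laplace integral; for completeness:

```latex
% Cauchy-Schwarz applied to the Laplace integral of |varphi_s|:
(\mathcal{L}|\varphi_{s}|)(y)
  = \int_{0}^{\infty} |\varphi(x)|\,x^{\Re(s)-1} e^{-xy}\,dx
  \le \left(\int_{0}^{\infty} |\varphi(x)|^{2} e^{-xy}\,dx\right)^{\frac{1}{2}}
      \left(\int_{0}^{\infty} x^{2\Re(s)-2} e^{-xy}\,dx\right)^{\frac{1}{2}}.
% The substitution t = xy evaluates the second factor, for Re(s) > 1/2, as
\left(\Gamma(2\Re(s)-1)\right)^{\frac{1}{2}} y^{\frac{1}{2}-\Re(s)},
% which gives (4.39).
```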
Now, for a given function $f$ on $\mathbb{H}$ with the series expansion (4.2)
with $M=1$, consider $\varphi\in\mathcal{F}_{f}$. In particular,
(4.40) $\sum_{\begin{subarray}{c}n\geq-
n_{0}\end{subarray}}|a(n)|\left((\mathcal{L}|\varphi|^{2})(2\pi
n)\right)^{\frac{1}{2}}+\sum_{n<0}|b(n)||(-4\pi
n)|^{1-k}\int_{0}^{\infty}\frac{\left((\mathcal{L}|\varphi_{2-k}|^{2})(-2\pi
n(2t+1))\right)^{\frac{1}{2}}}{(1+t)^{k}}dt$
converges. Then, with (4.39), we have $\varphi_{s}\in\mathcal{F}_{f}$ for
$\Re(s)>\frac{1}{2}$.
###### Theorem 4.6.
Let $k\in\mathbb{Z}$ and $f\in H_{k}(N,\psi)$. Set $g=f|_{k}W_{N}$ and let
$n_{0}\in\mathbb{N}$ be such that $f(z)$ and $g(z)$ are $O(e^{2\pi n_{0}y})$
as $y=\Im(z)\to\infty$. Suppose that $\varphi\in C(\mathbb{R},\mathbb{C})$ is
a non-zero function such that, for some $\epsilon>0$, $\varphi(x)$ and
$\varphi(x^{-1})$ are $o(e^{-2\pi(n_{0}+\epsilon)x})$ as $x\to\infty$. We
further assume that series (4.40) converges. Then the series
(4.41) $L(s,f,\varphi):=L_{f}(\varphi_{s})$
converges absolutely for $\Re(s)>\frac{1}{2}$, has an analytic continuation to
all $s\in\mathbb{C}$ and satisfies the functional equation
(4.42) $L(s,f,\varphi)=N^{-s-\frac{k}{2}+1}i^{k}L(1-s,g,\varphi|_{1-k}W_{N}).$
###### Proof.
By the assumption on the growth of $\varphi(y)$ we deduce that
$\mathcal{L}(|\varphi|^{2})(y)$ converges absolutely for $y\geq-2\pi n_{0}$.
This, combined with the convergence of (4.40) and the remarks before the
statement of the theorem, implies that $\varphi_{s}\in\mathcal{F}_{f}$ for
$\Re(s)>\frac{1}{2}$. Therefore, recalling the integral representation of
$L_{f}(\varphi_{s})=L(s,f,\varphi)$ in (4.9), separating the integral at
$\sqrt{N}^{-1}$, and then changing variables, we get
(4.43)
$L(s,f,\varphi)=\int_{\sqrt{N}^{-1}}^{\infty}f(i(Nx)^{-1})\varphi((Nx)^{-1})(Nx)^{-s}\frac{dx}{x}+\int_{\sqrt{N}^{-1}}^{\infty}f(ix)\varphi(x)x^{s}\frac{dx}{x}.$
Recall that
(4.44)
$f(i(Nx)^{-1})=(f|_{k}W_{N})(ix)(\sqrt{N}ix)^{k}=g(ix)i^{k}N^{\frac{k}{2}}x^{k}$
and
(4.45) $\varphi((Nx)^{-1})=(\varphi|_{a}W_{N})(x)(Nx)^{a}$
for any $a\in\frac{1}{2}\mathbb{Z}$. With $a=1-k$, we get, for
$\Re(s)>\frac{1}{2}$
(4.46)
$L(s,f,\varphi)=i^{k}N^{-\frac{k}{2}+1-s}\int_{\sqrt{N}^{-1}}^{\infty}g(ix)(\varphi|_{1-k}W_{N})(x)x^{1-s}\frac{dx}{x}+\int_{\sqrt{N}^{-1}}^{\infty}f(ix)\varphi(x)x^{s}\frac{dx}{x}.$
Because of the growth conditions on $\varphi$ at $0$ and $\infty$, the
integrals on the right-hand side are well-defined for all $s\in\mathbb{C}$
and define a holomorphic function.
Since $g|_{k}W_{N}=f|_{k}W_{N}^{2}=(-1)^{k}f$ and
$((\varphi|_{1-k}W_{N})|_{1-k}W_{N})(x)=N^{-1+k}\varphi(x)$, we obtain the
functional equation (4.42). ∎
## 5\. The converse theorem
To state and prove the converse of Theorem 4.5, we recall some further
notation from previous sections.
For each $a,b\in\mathbb{R}$ such that $a<b$, we denote by
$\mathbf{1}_{[a,b]}(x)$ the characteristic function of the closed interval
$[a,b]$. Further, for each $s\in\mathbb{C}$ and
$\varphi\colon\mathbb{R}_{+}\to\mathbb{C}$, we have defined
$\varphi_{s}:\mathbb{R}_{+}\to\mathbb{C}$ so that
$\varphi_{s}(x)=x^{s-1}\varphi(x)$ for all $x\in\mathbb{R}_{+}.$ Finally, let
$S_{c}(\mathbb{R}_{+})$ be a set of complex-valued, compactly supported and
piecewise smooth functions on $\mathbb{R}_{+}$ which satisfy the following
condition: for any $y\in\mathbb{R}_{+}$, there exists $\varphi\in
S_{c}(\mathbb{R}_{+})$ such that $\varphi(y)\neq 0$.
###### Theorem 5.1.
Let $N$ be a positive integer and $\psi$ be a Dirichlet character modulo $N$.
For $j\in\\{1,2\\}$, let $(a_{j}(n))_{n\geq-n_{0}}$ for some integer $n_{0}$
and $(b_{j}(n))_{n<0}$ be sequences of complex numbers such that
$a_{j}(n),b_{j}(n)=O(e^{C\sqrt{|n|}})$ as $|n|\to\infty$ for some constant
$C>0$. We define smooth functions $f_{j}:\mathbb{H}\to\mathbb{C}$ given by the
following Fourier expansions associated to the given sequences:
(5.1) $f_{j}(z)=\sum_{n\geq-n_{0}}a_{j}(n)e^{2\pi
inz}+\sum_{n<0}b_{j}(n)\Gamma\left(1-k,-4\pi ny\right)e^{2\pi inz}.$
For each $D\in\\{1,2,\ldots,N^{2}-1\\}$ with $\gcd(D,N)=1$, let $\chi$ be a
Dirichlet character modulo $D$. For any such $D$ and $\chi$ and any
$\varphi\in S_{c}(\mathbb{R}_{+})$, we assume that
(5.2)
$L_{f_{1\chi}}(\varphi)=i^{k}\frac{\chi(-N)\psi(D)}{N^{\frac{k}{2}-1}}L_{f_{2\overline{\chi}}}(\varphi|_{2-k}W_{N})$
and
(5.3)
$L_{\delta_{k}(f_{1\chi})}(\varphi)=-i^{k}\frac{\chi(-N)\psi(D)}{N^{\frac{k}{2}-1}}L_{\delta_{k}(f_{2\overline{\chi}})}(\varphi|_{2-k}W_{N}),$
if $k\in\mathbb{Z}$, and
(5.4)
$L_{f_{1\chi}}(\varphi)=\psi_{D}(-1)^{k-\frac{1}{2}}\psi_{D}(N)\frac{\chi(-N)\psi(D)}{\epsilon_{D}N^{\frac{k}{2}-1}}L_{f_{2\overline{\chi}\psi_{D}}}(\varphi|_{2-k}W_{N})$
and
(5.5)
$L_{\delta_{k}(f_{1\chi})}(\varphi)=-\psi_{D}(-1)^{k-\frac{1}{2}}\psi_{D}(N)\frac{\chi(-N)\psi(D)}{\epsilon_{D}N^{\frac{k}{2}-1}}L_{\delta_{k}(f_{2\overline{\chi}\psi_{D}})}(\varphi|_{2-k}W_{N})$
if $k\in\frac{1}{2}+\mathbb{Z}$. Here $\psi_{D}(u)=\left(\frac{u}{D}\right)$
is the real quadratic Dirichlet character given by the Kronecker symbol.
Then, the function $f_{1}$ belongs to $H^{\prime}_{k}(\Gamma_{0}(N),\psi)$ and
$f_{2}=f_{1}|_{k}W_{N}$.
###### Remark 5.2.
There is some freedom in the choice of “test functions” $\varphi$ in this
theorem. The compactly supported functions we use in this formulation allow
for a cleaner statement and suffice for our applications. Other choices may
be more appropriate for different goals and then, additional aspects, such as
meromorphic continuation (cf. Theorem 4.6), may become important.
In a different direction, we can reduce the size of the set of the test
functions required in the converse theorem. For instance, we may assume that
our functional equations hold only for the family of test functions
$\varphi_{s}(x)=x^{s-1}\varphi(x)$ ($s\in\mathbb{C}$) for a single $\varphi\in
S_{c}(\mathbb{R}_{+})$. The converse theorem in this setting can be proved in
an essentially identical way as below.
###### Proof.
With the bounds for $a_{j}(n),b_{j}(-n)$ and the asymptotic behaviour of
$\Gamma(s,x)$ given in (3.9), we have that $f_{j}(z)$ converges absolutely to
a smooth function on $\mathbb{H}$ for $j\in\\{1,2\\}$. By the form of the
Fourier expansion, $f_{1}$ and $f_{2}$ satisfy condition (ii) and condition
(iii) at $\infty$ of Definition 3.1. Likewise, for any Dirichlet character
$\chi$ modulo $D$, recall that, by definition
(5.6) $f_{j\chi}(z)=\sum_{\begin{subarray}{c}n\geq-
n_{0}\end{subarray}}\tau_{\bar{\chi}}(n)a_{j}(n)e^{2\pi
inz/D}+\sum_{\begin{subarray}{c}n<0\end{subarray}}\tau_{\bar{\chi}}(n)b_{j}(n)\Gamma(1-k,-4\pi
ny/D)e^{2\pi inz/D}$
and
(5.7) $\delta_{k}(f_{j\chi})(z)=z\frac{\partial}{\partial
x}f_{j\chi}(z)+\frac{k}{2}f_{j\chi}(z),$
for $j\in\\{1,2\\}$, are absolutely convergent.
Our first aim is to show that those functions satisfy the relation (4.30) (if
$k\in\mathbb{Z}$):
(5.8) $(f_{1\chi}|_{k}W_{N})(z)=\chi(-N)\psi(D)f_{2\overline{\chi}}(z)$
and (4.38) (if $k\in\frac{1}{2}+\mathbb{Z}$):
(5.9)
$(f_{1\chi}|_{k}W_{N})(z)=\psi_{D}(-1)^{k-\frac{1}{2}}\psi_{D}(N)\frac{\chi(-N)\psi(D)}{\epsilon_{D}}f_{2\overline{\chi}\psi_{D}}(z).$
Note that for any $s\in\mathbb{C}$ and $\varphi\in S_{c}(\mathbb{R}_{+})$,
$\varphi_{s}(y)=y^{s-1}\varphi(y)\in S_{c}(\mathbb{R}_{+})$. We first show
that $\varphi_{s}$ satisfies (4.3) for $f_{j\chi}$ and hence belongs to
$\mathcal{F}_{f_{1\chi}}\cap\mathcal{F}_{f_{2}\overline{\chi}}$. Indeed, since
$\varphi\in S_{c}(\mathbb{R}_{+})$, there exist $0<c_{1}<c_{2}$ and $C>0$ such
that ${\rm Supp}(\varphi)\subset[c_{1},c_{2}]$ and $|\varphi(y)|\leq C$ for
any $y>0$. Then, for $j\in\\{1,2\\}$ and $n>0$,
(5.10) $|a_{j}(n)|(\mathcal{L}|\varphi_{s}|)\left(\frac{2\pi n}{D}\right)\leq
C|a_{j}(n)|\int_{c_{1}}^{c_{2}}y^{\Re(s)}e^{-2\pi\frac{n}{D}y}\frac{dy}{y}\\\
\leq
C|a_{j}(n)|e^{-2\pi\frac{n}{D}c_{1}}(c_{2}-c_{1})\max\\{c_{1}^{\Re(s)-1},c_{2}^{\Re(s)-1}\\}.$
Thus,
(5.11) $\sum_{n\geq-
n_{0}}|\tau_{\bar{\chi}}(n)||a_{j}(n)|(\mathcal{L}|\varphi_{s}|)(2\pi
n/D)\leq\sum_{n=-n_{0}}^{0}|\tau_{\bar{\chi}}(n)||a_{j}(n)|(\mathcal{L}|\varphi_{s}|)(2\pi
n/D)\\\
+C(c_{2}-c_{1})\max\\{c_{1}^{\Re(s)-1},c_{2}^{\Re(s)-1}\\}\sum_{n=1}^{\infty}|\tau_{\bar{\chi}}(n)||a_{j}(n)|e^{-2\pi\frac{n}{D}c_{1}}<\infty,$
for any $s\in\mathbb{C}$ and for any Dirichlet character $\chi$ modulo $D$.
Likewise, for $n<0$, $t>0$:
$(\mathcal{L}|\varphi_{s+1-k}|)\left(\frac{-2\pi
n(2t+1)}{D}\right)\ll\int_{c_{1}}^{c_{2}}e^{\frac{2\pi
ny(2t+1)}{D}}y^{\Re(s)}\frac{dy}{y^{k}}\ll e^{\frac{2\pi
nc_{1}(2t+1)}{D}}\max\\{c_{1}^{\Re(s)-k},c_{2}^{\Re(s)-k}\\}$
and therefore
(5.12) $\sum_{n<0}|\tau_{\bar{\chi}}(n)||b_{j}(n)|\left|\frac{4\pi
n}{D}\right|^{1-k}\int_{0}^{\infty}\frac{(\mathcal{L}|\varphi_{s+1-k}|)\left(-\frac{2\pi
n(2t+1)}{D}\right)}{(1+t)^{k}}dt\\\
\ll\max\\{c_{1}^{\Re(s)-k},c_{2}^{\Re(s)-k}\\}\left(\int_{0}^{\infty}e^{\frac{-4\pi
tc_{1}}{D}}(1+t)^{-k}dt\right)\sum_{n<0}|\tau_{\bar{\chi}}(n)||b_{j}(n)|\left(\frac{4\pi|n|}{D}\right)^{1-k}e^{\frac{-2\pi|n|c_{1}}{D}}$
converge for any $s\in\mathbb{C}$ and for any Dirichlet character $\chi$
modulo $D$. Thus
$\varphi_{s}\in\mathcal{F}_{f_{1\chi}}\cap\mathcal{F}_{f_{2\bar{\chi}}}$ and,
by Weierstrass's theorem, we see that $L_{f_{j\chi}}(\varphi_{s})$ is an
analytic function of $s\in\mathbb{C}$.
This allows us to interchange summation and integration as in Lemma 4.4 and,
with Mellin inversion,
(5.13) $f_{j\chi}(iy)\varphi(y)=\frac{1}{2\pi
i}\int_{(\sigma)}L_{f_{j\chi}}(\varphi_{s})y^{-s}ds,$
for all $\sigma\in\mathbb{R}$. In the same way, we see that
$L_{\delta_{k}(f_{j\chi})}(\varphi_{s})$ is an analytic function for
$s\in\mathbb{C}$ and deduce
(5.14) $\delta_{k}(f_{j\chi})(iy)\varphi(y)=\frac{1}{2\pi
i}\int_{(\sigma)}L_{\delta_{k}(f_{j\chi})}(\varphi_{s})y^{-s}ds.$
Now we will show that $L_{f_{j\chi}}(\varphi_{s})\to 0$ as
$|\Im(s)|\to\infty$, uniformly as $\Re(s)$ ranges over any compact set.
Indeed, with an integration by parts, we have
(5.15)
$L_{f_{1\chi}}(\varphi_{s})=\int_{0}^{\infty}f_{1\chi}(iy)\varphi(y)y^{s}\frac{dy}{y}=-\frac{1}{s}\int_{0}^{\infty}\frac{d}{dy}\big{(}f_{1\chi}(iy)\varphi(y)\big{)}y^{s}dy$
since $\varphi(y)$ vanishes in $(0,\epsilon)\cup(1/\epsilon,\infty)$ for some
$\epsilon>0.$ Then
(5.16)
$\left|L_{f_{1\chi}}(\varphi_{s})\right|\leq\frac{1}{|s|}\int_{0}^{\infty}\left|\frac{d}{dy}\big{(}f_{1\chi}(iy)\varphi(y)\big{)}\right|y^{\Re(s)}dy\to
0,$
as $|\Im(s)|\to\infty$. The corresponding fact for
$L_{\delta_{k}(f_{1\chi})}(\varphi_{s})$ is verified in the same way.
We can therefore move the line of integration in (5.13) from $\Re(s)=\sigma$
to $\Re(s)=k-\sigma$ and then change the variable $s$ to $k-s$, to get
(5.17) $f_{1\chi}(iy)\varphi(y)=\frac{1}{2\pi
i}\int_{(k-\sigma)}L_{f_{1\chi}}(\varphi_{s})y^{-s}ds=\frac{1}{2\pi
i}\int_{(\sigma)}L_{f_{1\chi}}(\varphi_{k-s})y^{-k+s}ds.$
Similarly, we also have
(5.18) $\delta_{k}(f_{1\chi})(iy)\varphi(y)=\frac{1}{2\pi
i}\int_{(\sigma)}L_{\delta_{k}(f_{1\chi})}(\varphi_{k-s})y^{-k+s}ds.$
To proceed we separate two cases: $k\in\mathbb{Z}$ or
$k\in\mathbb{Z}+\frac{1}{2}$.
_Case I: $k\in\mathbb{Z}$._ Applying (5.2) to (5.17), we get
(5.19)
$f_{1\chi}(iy)\varphi(y)=i^{k}\frac{\chi(-N)\psi(D)}{N^{\frac{k}{2}-1}}\frac{1}{2\pi
i}\int_{(\sigma)}L_{f_{2\overline{\chi}}}(\varphi_{k-s}|_{2-k}W_{N})y^{-k+s}ds.$
We have that
$\varphi_{k-s}|_{2-k}W_{N}\in\mathcal{F}_{f_{1\chi}}\cap\mathcal{F}_{f_{2\bar{\chi}}}$
and, for each $y>0$,
(5.20)
$(\varphi_{k-s}|_{2-k}W_{N})(y)=(Ny)^{k-2}\varphi_{k-s}\left(\frac{1}{Ny}\right)=(Ny)^{s-1}\varphi\left(\frac{1}{Ny}\right).$
So we get
(5.21)
$L_{f_{2\bar{\chi}}}(\varphi_{k-s}|_{2-k}W_{N})=\int_{0}^{\infty}f_{2\bar{\chi}}(iy)(\varphi_{k-s}|_{2-k}W_{N})(y)dy=\int_{0}^{\infty}f_{2\bar{\chi}}(iy)(Ny)^{s-1}\varphi\left(\frac{1}{Ny}\right)dy.$
Then, by the Mellin inversion,
(5.22)
$N^{-1}f_{2\bar{\chi}}\left(-\frac{1}{iNy}\right)\varphi(y)=\frac{1}{2\pi
i}\int_{(\sigma)}L_{f_{2\bar{\chi}}}(\varphi_{k-s}|_{2-k}W_{N})y^{s}ds.$
Therefore,
(5.23)
$f_{1\chi}(iy)\varphi(y)=i^{k}\frac{\chi(-N)\psi(D)}{N^{\frac{k}{2}}}y^{-k}f_{2\bar{\chi}}\left(-\frac{1}{iNy}\right)\varphi(y).$
Similarly, applying (5.3) to (5.18), we get
(5.24)
$\delta_{k}(f_{1\chi})(iy)\varphi(y)=i^{k+2}\frac{\chi(-N)\psi(D)}{N^{\frac{k}{2}}}y^{-k}\delta_{k}(f_{2\bar{\chi}})\left(-\frac{1}{iNy}\right)\varphi(y).$
Therefore, for $y\in\mathbb{R}_{+}$ such that $\varphi(y)\neq 0$, we have
(5.25)
$f_{1\chi}(iy)=i^{k}\frac{\chi(-N)\psi(D)}{N^{\frac{k}{2}}}y^{-k}f_{2\bar{\chi}}\left(-\frac{1}{iNy}\right)$
and
(5.26)
$\delta_{k}(f_{1\chi})(iy)=i^{k+2}\frac{\chi(-N)\psi(D)}{N^{\frac{k}{2}}}y^{-k}\delta_{k}(f_{2\bar{\chi}})\left(-\frac{1}{iNy}\right).$
Because of the defining property of the set $S_{c}(\mathbb{R}_{+})$, the
relations above hold for all $y>0$.
We now define
(5.27)
$F_{\chi}(z):=f_{1\chi}(z)-\chi(-N)\psi(D)(f_{2\bar{\chi}}|_{k}W_{N}^{-1})(z).$
The equations (5.25) and (5.26) imply that $F_{\chi}(iy)=0$ and
$\frac{\partial}{\partial x}F_{\chi}(iy)=0$. Now, $F_{\chi}$ is an
eigenfunction of the Laplace operator because $f_{1\chi}$ and
$f_{2\bar{\chi}}$ are eigenfunctions of the Laplace operator with the same
eigenvalue. Recall that $f_{1\chi}$ and $f_{2\bar{\chi}}$ are eigenfunctions
of the Laplace operator because they are defined as Fourier series in the
terms $e^{2\pi inz}$ and $\Gamma(1-k,-4\pi ny)e^{2\pi inz}$. Therefore (cf.
e.g. [Bum97, Lemma 1.9.2]), the vanishing of $F_{\chi}$ and
$\frac{\partial}{\partial x}F_{\chi}$
on the imaginary axis implies that $F_{\chi}\equiv 0$, and then
(5.28) $f_{1\chi}=\chi(-N)\psi(D)(f_{2\bar{\chi}}|_{k}W_{N}^{-1}).$
By (4.29) and the identity $f_{1}=f_{2}|_{k}W_{N}^{-1}$ (deduced by applying
(5.28) with $D=1$) we get
(5.29) $f_{2\bar{\chi}}|_{k}W_{N}^{-1}=\sum_{\begin{subarray}{c}v\bmod{D}\\\
\gcd(v,D)=1,-Nuv\equiv
1\bmod{D}\end{subarray}}\chi(v)f_{1}\bigg{|}_{k}\begin{pmatrix}D&-u\\\
-Nv&\frac{1+Nuv}{D}\end{pmatrix}\begin{pmatrix}\frac{1}{\sqrt{D}}&\frac{u}{\sqrt{D}}\\\
0&\sqrt{D}\end{pmatrix}.$
With the definition of $f_{1\chi}$ and $f_{1}=f_{2}|_{k}W_{N}^{-1}$, we have
(5.30) $f_{1\chi}=\sum_{\begin{subarray}{c}u\bmod{D}\\\
\gcd(u,D)=1\end{subarray}}\overline{\chi(u)}f_{1}\bigg{|}_{k}\begin{pmatrix}\frac{1}{\sqrt{D}}&\frac{u}{\sqrt{D}}\\\
0&\sqrt{D}\end{pmatrix}\\\ =\psi(D)\sum_{\begin{subarray}{c}v\bmod{D}\\\
\gcd(v,D)=1,-Nuv\equiv
1\bmod{D}\end{subarray}}\overline{\chi(u)}f_{1}\bigg{|}_{k}\begin{pmatrix}D&-u\\\
-Nv&\frac{1+Nuv}{D}\end{pmatrix}\begin{pmatrix}\frac{1}{\sqrt{D}}&\frac{u}{\sqrt{D}}\\\
0&\sqrt{D}\end{pmatrix},$
since $\chi(-Nv)=\overline{\chi(u)}$. By the orthogonality of the multiplicative
characters, after taking the sum over all characters modulo $D$, we deduce
that, for each $u$ and $v$ such that $-Nuv\equiv 1\bmod{D}$, we have
(5.31) $f_{1}=\psi(D)f_{1}\bigg{|}_{k}\begin{pmatrix}D&-u\\\
-Nv&\frac{1+Nuv}{D}\end{pmatrix},$
which is equivalent to
(5.32) $f_{1}\bigg{|}_{k}\begin{pmatrix}\frac{1+Nuv}{D}&u\\\
Nv&D\end{pmatrix}=\psi(D)f_{1}.$
This implies that $f_{1}$ is invariant with the character $\psi$ for the
entire $\Gamma_{0}(N)$ because, by [Raz77], the following set of matrices
generates $\Gamma_{0}(N)$:
(5.33) $\bigcup_{m=1}^{N}S_{m}\cup\\{\pm I_{2}\\},$
where, for each positive $m\in\mathbb{Z}$, $S_{m}$ is the set consisting of
one $\left(\begin{smallmatrix}t&s\\\
Nm&D\end{smallmatrix}\right)\in\Gamma_{0}(N)$ for each $D$ in a set of
congruence classes modulo $Nm$. Finally, working as in the classical case, we
deduce that $f_{1}$ is of at most exponential growth at all cusps.
_Case II: $k\in\frac{1}{2}+\mathbb{Z}.$_ Applying (5.4) to (5.13), we have
(5.34) $f_{1\chi}(iy)\varphi(y)=\frac{1}{2\pi
i}\int_{(\sigma)}L_{f_{1\chi}}(\varphi_{s})y^{-s}ds\\\
=\psi_{D}(-1)^{k-\frac{1}{2}}\psi_{D}(N)\frac{\chi(-N)\psi(D)}{\epsilon_{D}N^{\frac{k}{2}-1}}\frac{1}{2\pi
i}\int_{(\sigma)}L_{f_{2\bar{\chi}\psi_{D}}}(\varphi_{k-s}|_{2-k}W_{N})y^{-k+s}ds.$
By (5.20) (holding both for $k\in\mathbb{Z}$ and $k\not\in\mathbb{Z}$) and
Mellin inversion,
(5.35)
$f_{1\chi}(iy)\varphi(y)=\psi_{D}(-1)^{k-\frac{1}{2}}\psi_{D}(N)\frac{\chi(-N)\psi(D)}{\epsilon_{D}N^{\frac{k}{2}}}y^{-k}f_{2\bar{\chi}\psi_{D}}\left(-\frac{1}{iNy}\right)\varphi(y).$
This is true for any $\varphi\in S_{c}(\mathbb{R}_{+})$. Because of our choice
of $S_{c}(\mathbb{R}_{+})$, for any $y>0$, we have
(5.36) $\displaystyle f_{1\chi}(iy)$
$\displaystyle=\psi_{D}(-1)^{k-\frac{1}{2}}\psi_{D}(N)\frac{\chi(-N)\psi(D)}{\epsilon_{D}N^{\frac{k}{2}}}y^{-k}f_{2\bar{\chi}\psi_{D}}\left(-\frac{1}{iNy}\right)$
$\displaystyle=\psi_{D}(-1)^{k-\frac{1}{2}}\psi_{D}(N)\frac{\chi(-N)\psi(D)}{\epsilon_{D}}(f_{2\bar{\chi}\psi_{D}}|_{k}W_{N}^{-1})(iy).$
Similarly, by the functional equation for
$L_{\delta_{k}(f_{1\chi})}(\varphi_{s})$ given in (5.5), and applying the
above arguments, for any $y>0$, we have
(5.37)
$\delta_{k}(f_{1\chi})(iy)=-\psi_{D}(-1)^{k-\frac{1}{2}}\psi_{D}(N)\frac{\chi(-N)\psi(D)}{\epsilon_{D}N^{\frac{k}{2}}}y^{-k}\delta_{k}(f_{2\bar{\chi}\psi_{D}})\left(-\frac{1}{iNy}\right).$
We define
(5.38)
$F_{\chi}(z)=f_{1\chi}(z)-\psi_{D}(-1)^{k-\frac{1}{2}}\psi_{D}(N)\chi(-N)\psi(D)\epsilon_{D}^{-1}(f_{2\bar{\chi}\psi_{D}}|_{k}W_{N}^{-1})(z).$
The equations (5.36) and (5.37) imply that $F_{\chi}(iy)=0$ and
$\frac{\partial}{\partial x}F_{\chi}(iy)=0$. As in Case I (for
$k\in\mathbb{Z}$), since $F_{\chi}(z)$ is a Laplace eigenfunction, we deduce
that $F_{\chi}(z)=0$ for any Dirichlet character $\chi$ modulo $D$, and we get
(5.39)
$f_{1\chi}=\psi_{D}((-1)^{k-\frac{1}{2}}N)\chi(-N)\psi(D)\epsilon_{D}^{-1}f_{2\bar{\chi}\psi_{D}}|_{k}W_{N}^{-1}.$
With similar arguments as in Case I we get
(5.40) $\sum_{\begin{subarray}{c}u\bmod{D}\\\
\gcd(u,D)=1\end{subarray}}\overline{\chi(u)}f_{1}\bigg{|}_{k}\begin{pmatrix}\frac{1}{\sqrt{D}}&\frac{u}{\sqrt{D}}\\\
0&\sqrt{D}\end{pmatrix}\\\ =\psi(D)\sum_{\begin{subarray}{c}v\bmod{D}\\\
\gcd(v,D)=1\\\ -Nuv\equiv
1\bmod{D}\end{subarray}}\overline{\chi(u)}f_{1}\bigg{|}_{k}\begin{pmatrix}D&-u\\\
-Nv&\frac{1+Nuv}{D}\end{pmatrix}\begin{pmatrix}\frac{1}{\sqrt{D}}&\frac{u}{\sqrt{D}}\\\
0&\sqrt{D}\end{pmatrix}.$
By the orthogonality of the multiplicative characters, after taking the sum
over all characters $\chi$ modulo $D$, we deduce that, for each $u$ and $v$
such that $-Nuv\equiv 1\bmod{D}$,
(5.41) $f_{1}=\psi(D)f_{1}\bigg{|}_{k}\begin{pmatrix}D&-u\\\
-Nv&\frac{1+Nuv}{D}\end{pmatrix}.$
Therefore
(5.42) $f_{1}\bigg{|}_{k}\begin{pmatrix}\frac{1+Nuv}{D}&u\\\
Nv&D\end{pmatrix}=\psi(D)f_{1}.$
The fact that the set (5.33) generates $\Gamma_{0}(N)$ implies the theorem in
this case too. ∎
###### Corollary 5.3.
With the notation of Theorem 5.1, let $(a_{j}(n))_{n\geq-n_{0}}$ ($j=1,2$) be
sequences of complex numbers such that $a_{j}(n)=O(e^{C\sqrt{n}})$ as
$n\to\infty$, for some $C>0.$ Define holomorphic functions
$f_{j}:\mathbb{H}\to\mathbb{C}$ by the following Fourier expansions:
(5.43) $f_{j}(z)=\sum_{n\geq-n_{0}}a_{j}(n)e^{2\pi inz}.$
For all $D\in\\{1,2,\ldots,N^{2}-1\\}$, $\gcd(D,N)=1$, let $\chi$ be a
Dirichlet character modulo $D$. For each $D$, $\chi$ and any $\varphi\in
S_{c}(\mathbb{R}_{+})$, we assume that,
(5.44)
$L_{f_{1\chi}}(\varphi)=i^{k}\frac{\chi(-N)\psi(D)}{N^{\frac{k}{2}-1}}L_{f_{2\overline{\chi}}}(\varphi|_{2-k}W_{N})$
if $k\in\mathbb{Z}$, and
(5.45)
$L_{f_{1\chi}}(\varphi)=\psi_{D}(-1)^{k-\frac{1}{2}}\psi_{D}(N)\frac{\chi(-N)\psi(D)}{\epsilon_{D}N^{\frac{k}{2}-1}}L_{f_{2\overline{\chi}\psi_{D}}}(\varphi|_{2-k}W_{N})$
if $k\in\frac{1}{2}+\mathbb{Z}$. Then, the function $f_{1}$ is a weakly
holomorphic form with weight $k$ and character $\psi$ for $\Gamma_{0}(N)$, and
$f_{2}=f_{1}|_{k}W_{N}$.
###### Proof.
The proof is identical to that of the theorem except that, thanks to the
holomorphicity of $f_{1}$ and $f_{2}$, (5.26) is not necessary and thus we do
not need the functional equations of $L_{\delta_{k}(f_{j\chi})}(\varphi)$. ∎
In the case of $N=1$ and the trivial character $\psi$ mod $1$, this corollary
becomes Theorem 1.1.
### 5.1. Alternative converse theorem for integral weight.
In the case of integer weight, it is possible to formulate the converse
theorem so that only primitive characters are required in the statement.
However, the number of primitive characters needed would be infinite and the
extension to half-integral weights less transparent. We state this theorem and
prove it with emphasis on the parts where it differs from Theorem 5.1. In
particular, we will use the recent “three circles” method of [NO20], which
extends, to real-analytic functions, the classical vanishing result under a
condition on the action of infinite-order elliptic elements.
We first introduce the following notation for the Gauss sum of a character
$\chi$ modulo $D$:
(5.46) $\tau(\chi):=\sum_{m\mod D}\chi(m)e^{2\pi i\frac{m}{D}}.$
We recall that, when $\chi$ is primitive, we have
$\tau_{\bar{\chi}}(n)=\chi(n)\tau(\bar{\chi})$, where
$\tau_{\bar{\chi}}(n):=\sum_{m\bmod D}\bar{\chi}(m)e^{2\pi i\frac{mn}{D}}$.
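For completeness, here is a sketch of the standard verification of this identity; it uses only the primitivity of $\chi$:

```latex
% For gcd(n,D)=1, substitute m = m'n^{-1} (inverse taken mod D):
\tau_{\bar{\chi}}(n)
   = \sum_{m \bmod D} \bar{\chi}(m)\, e^{2\pi i \frac{mn}{D}}
   = \sum_{m' \bmod D} \bar{\chi}(m'n^{-1})\, e^{2\pi i \frac{m'}{D}}
   = \chi(n)\,\tau(\bar{\chi}).
% For gcd(n,D)>1, the primitivity of \chi forces both sides to vanish,
% so the identity holds for every integer n.
```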
###### Theorem 5.4.
Let $k\in\mathbb{Z},$ $N\in\mathbb{N}$ and $\psi$ be a Dirichlet character
modulo $N$. For $j\in\\{1,2\\}$, let $(a_{j}(n))_{n\geq-n_{0}}$ for some
integer $n_{0}$ and $(b_{j}(n))_{n<0}$ be sequences of complex numbers such
that $a_{j}(n),b_{j}(n)=O(e^{C\sqrt{|n|}})$ as $|n|\to\infty$ for some
constant $C>0$. We define smooth functions $f_{j}:\mathbb{H}\to\mathbb{C}$
given by the following Fourier expansions associated to the given sequences:
(5.47) $f_{j}(z)=\sum_{n\geq-n_{0}}a_{j}(n)e^{2\pi
inz}+\sum_{n<0}b_{j}(n)\Gamma\left(1-k,-4\pi ny\right)e^{2\pi inz}.$
For all $D\in\mathbb{N}$ ($\gcd(D,N)=1$), all _primitive_ Dirichlet characters
$\chi$ modulo $D$ and all $\varphi\in S_{c}(\mathbb{R}_{+})$, we assume that,
(5.48) $\displaystyle L_{f_{1\chi}}(\varphi)$
$\displaystyle=i^{k}\frac{\chi(-N)\psi(D)}{N^{\frac{k}{2}-1}}L_{f_{2\overline{\chi}}}(\varphi|_{2-k}W_{N})\,\,\text{and}$
$\displaystyle L_{\delta_{k}(f_{1\chi})}(\varphi)$
$\displaystyle=-i^{k}\frac{\chi(-N)\psi(D)}{N^{\frac{k}{2}-1}}L_{\delta_{k}(f_{2\overline{\chi}})}(\varphi|_{2-k}W_{N}).$
Then, the function $f_{1}$ belongs to $H^{\prime}_{k}(\Gamma_{0}(N),\psi)$ and
$f_{2}=f_{1}|_{k}W_{N}$.
###### Proof.
The deduction of (5.28) in the proof of Theorem 5.1 does not depend on whether
the character $\chi$ is primitive. Therefore, since the other assumptions of
the theorems are the same, we deduce
(5.49) $(f_{1\chi}|_{k}W_{N})(z)=\chi(-N)\psi(D)f_{2\bar{\chi}}(z).$
Applying this with $Dz$ instead of $z$, we obtain
$\tilde{f}_{1\chi}|_{k}W_{ND^{2}}=\chi(-N)\psi(D)\frac{\tau(\chi)}{\tau(\bar{\chi})}\tilde{f}_{2\bar{\chi}}$
where, for $j=1,2,$
$\tilde{f}_{j\chi}(z):=\frac{\chi(-1)\tau(\chi)}{D}f_{j\chi}(Dz)$. This
coincides with equation [Bum97, (5.13)] which, by matrix operations, implies
that, for each map $r\mapsto c(r)$ on the non-zero classes modulo $D$ such
that $\sum_{r}c(r)=0,$ we have
(5.50) $\sum_{\begin{subarray}{c}r\mod D\\\
(r,D)=1\end{subarray}}c(r)f_{2}|_{k}\begin{pmatrix}D&-r\\\
-Nm&t\end{pmatrix}\begin{pmatrix}1&\frac{r}{D}\\\
0&1\end{pmatrix}=\sum_{\begin{subarray}{c}r\mod D\\\
(r,D)=1\end{subarray}}c(r)\psi(D)f_{2}|_{k}\begin{pmatrix}1&\frac{r}{D}\\\
0&1\end{pmatrix},$
where, for each $r$, the integers $t$ and $m$ are such that $Dt-Nmr=1$. We
note that, once we have such an identity for a given choice of the parameters
$r$, $t$ and $m$, then it will hold for _any_ other $r$, $t$ such that
$Dt-Nmr=1$. In the proof of [Bum97, Theorem 1.5.1], equation (5.50) implies that
(5.51) $g:=f_{2}|_{k}\gamma-\psi(D)f_{2},\qquad\text{ where
}\gamma=\begin{pmatrix}D&r\\\
Nm&t\end{pmatrix},$
satisfies $g=g|_{k}M$ for the elliptic element of infinite order
(5.52) $M=\begin{pmatrix}1&\frac{2r}{D}\\\
-\frac{2Nm}{t}&-3+\frac{4}{Dt}\end{pmatrix}.$
Since the argument in [Bum97] relies exclusively on algebraic manipulations in
$\mathbb{C}[\Gamma_{0}(N)]$, it applies in our case as well. Therefore,
$g_{1}:=g|_{k}\gamma^{-1}=f_{2}-\psi(D)f_{2}|_{k}\gamma^{-1}$ satisfies
(5.53) $g_{1}=g_{1}|_{k}\gamma M\gamma^{-1}.$
As mentioned above, this holds for any $r$ and $t$ such that $Dt-Nmr=1$. Let
$h_{1}:=f_{2}-\psi(D)f_{2}|_{k}\tilde{\gamma}^{-1}$ where
(5.54) $\tilde{\gamma}=\begin{pmatrix}D&r+D\\\ Nm&t+Nm\end{pmatrix}=\gamma T.$
Here $T=\left(\begin{smallmatrix}1&1\\\ 0&1\end{smallmatrix}\right)$ is the
usual translation matrix. Let
(5.55) $\tilde{M}=\begin{pmatrix}1&\frac{2(r+D)}{D}\\\
-\frac{2Nm}{(t+Nm)}&-3+\frac{4}{D(t+Nm)}\end{pmatrix}.$
Then we have
(5.56) $h_{1}=h_{1}|_{k}\tilde{\gamma}\tilde{M}\tilde{\gamma}^{-1}.$
Now, since $f_{2}$ satisfies $f_{2}=f_{2}|_{k}T,$ we have that
(5.57) $h_{1}=f_{2}-\psi(D)f_{2}|_{k}T^{-1}\gamma^{-1}=g_{1}.$
We claim that the elliptic elements of infinite order
$\tilde{\gamma}\tilde{M}\tilde{\gamma}^{-1}$ and $\gamma M\gamma^{-1}$ do not
have any fixed points in common. Clearly this is equivalent to the claim that
$T\tilde{M}T^{-1}$ and $M$ do not share any fixed points. Indeed, the former
has fixed points
(5.58) $\frac{1}{DmN}\left(1-Dt\pm\sqrt{1-D(2+mNr)(mN+t)+D^{2}t(mN+t)}\right)$
and the latter:
(5.59) $\frac{1}{DmN}\left(1-Dt\pm\sqrt{1-D(2+mNr)t+D^{2}t^{2}}\right).$
Their discriminants differ by $DNm\neq 0$.
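Indeed, the difference of the two expressions under the square roots can be checked directly using $Dt-Nmr=1$:

```latex
\bigl(1-D(2+mNr)(mN+t)+D^{2}t(mN+t)\bigr)
   -\bigl(1-D(2+mNr)t+D^{2}t^{2}\bigr) \\
 = mN\bigl(-D(2+mNr)+D^{2}t\bigr)
 = DmN\bigl((Dt-Nmr)-2\bigr)
 = -DmN \neq 0,
```

so the two fixed-point pairs are indeed distinct.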
Therefore, the real-analytic function $g_{1}$ is invariant under two infinite
order elliptic elements with distinct fixed points and, by [NO20, Theorem
3.11], it vanishes. The completion of the proof is identical to that of
[Bum97, Theorem 1.5.1]. ∎
### 5.2. Example of using the converse theorem
Using the above two theorems, we can give an alternative proof of the classical
fact that, if $k\in\mathbb{N}$ and $f$ is a weight $2-k$ weakly holomorphic
cusp form, then the $(k-1)$-th derivative of $f$ is a weakly holomorphic cusp
form of weight $k$ (see [BFOR17, Lemma 5.3]). Our purpose is to give a “proof
of concept” of the way our constructions work.
###### Proposition 5.5.
Let $k\in 2\mathbb{N},$ and let $f\in S_{2-k}^{!}$ for
$\operatorname{SL}_{2}(\mathbb{Z})$ with Fourier expansion (1.3). Then the
function $f_{1}$ given by
(5.60) $f_{1}(z)=\sum_{\begin{subarray}{c}n=-n_{0}\\\ n\neq
0\end{subarray}}^{\infty}a(n)(2\pi n)^{k-1}q^{n}$
is an element of $S_{k}^{!}$.
###### Proof.
Since $f\in S^{!}_{2-k},$ $n^{k-1}a(n)=O(e^{C\sqrt{n}})$ as $n\to\infty$ for
some $C>0$. For $\varphi\in S_{c}(\mathbb{R}_{+})$,
(5.61) $L_{f_{1}}(\varphi)=\sum_{\begin{subarray}{c}n=-n_{0}\\\ n\neq
0\end{subarray}}^{\infty}(2\pi n)^{k-1}a(n)(\mathcal{L}\varphi)(2\pi
n)=\sum_{\begin{subarray}{c}n=-n_{0}\\\ n\neq
0\end{subarray}}^{\infty}a(n)(\mathcal{L}(\alpha(\varphi)))(2\pi
n)=L_{f}(\alpha(\varphi))$
where
(5.62)
$\alpha(\varphi)(x):=\mathcal{L}^{-1}(u^{k-1}(\mathcal{L}\varphi)(u))(x).$
Now, we note that [EMOT54, 4.1(8)] gives
$(\mathcal{L}\varphi^{(k-1)})(u)=u^{k-1}(\mathcal{L}\varphi)(u)-u^{k-2}\varphi(0)-u^{k-3}\varphi^{\prime}(0)-\dots=u^{k-1}(\mathcal{L}\varphi)(u)$
since $\varphi$ is supported in $(c_{1},c_{2})\subset\mathbb{R}_{>0}$. Then
(5.63)
$\alpha(\varphi)=\mathcal{L}^{-1}(u^{k-1}(\mathcal{L}\varphi)(u))=\mathcal{L}^{-1}(\mathcal{L}\varphi^{(k-1)})=\varphi^{(k-1)}$
and hence $\alpha(\varphi)\in\mathcal{F}_{f}$. Therefore, Theorem 4.5
applies to give (for $f\in S^{!}_{2-k}$)
(5.64)
$L_{f_{1}}(\varphi)=L_{f}(\alpha(\varphi))=i^{2-k}L_{f}(\alpha(\varphi)|_{k}W_{1}).$
Here, recall that
$(\alpha(\varphi)|_{k}W_{1})(x)=x^{-k}\alpha(\varphi)(x^{-1})$. On the other
hand,
(5.65) $L_{f_{1}}(\varphi|_{2-k}W_{1})=L_{f}(\alpha(\varphi|_{2-k}W_{1})).$
We claim that
(5.66) $\alpha(\varphi)|_{k}W_{1}=-\alpha(\varphi|_{2-k}W_{1}),$
which is equivalent to
(5.67)
$-u^{k-1}(\mathcal{L}(\varphi|_{2-k}W_{1}))(u)=\mathcal{L}(\alpha(\varphi)|_{k}W_{1})(u).$
Since both sides are holomorphic in $u$, it suffices to prove the above
identity for $u>0$. To this end, we let $p_{\ell}(x)=x^{\ell}$, for
$\ell\in\mathbb{Z}$, $\ell\geq 1$. By [EMOT54, 4.2(3)], for $u>0$, we have
$\frac{1}{\ell!}(\mathcal{L}p_{\ell})(u)=p_{-\ell-1}(u)=u^{-\ell-1}$. By
(5.62), we have
(5.68)
$\varphi(x)=\mathcal{L}^{-1}\bigl{(}p_{-k+1}\cdot\mathcal{L}\alpha(\varphi)\bigr{)}(x)=\frac{1}{(k-2)!}\mathcal{L}^{-1}\bigl{(}\mathcal{L}p_{k-2}\cdot\mathcal{L}\alpha(\varphi)\bigr{)}(x).$
By applying the convolution theorem, we get
(5.69)
$\varphi(x)=\frac{1}{(k-2)!}\int_{0}^{x}(x-t)^{k-2}\alpha(\varphi)(t)dt.$
Then, by two changes of variables,
(5.70)
$\mathcal{L}(\varphi|_{2-k}W_{1})(u)=\int_{0}^{\infty}x^{k-2}\varphi(x^{-1})e^{-ux}dx=\frac{1}{(k-2)!}\int_{0}^{\infty}\int_{0}^{x^{-1}}(1-tx)^{k-2}\alpha(\varphi)(t)dte^{-ux}dx\\\
=\frac{1}{(k-2)!}\int_{0}^{\infty}\int_{0}^{1}x^{-1}(1-t)^{k-2}\alpha(\varphi)(tx^{-1})dte^{-ux}dx\\\
=\frac{1}{(k-2)!}\int_{0}^{\infty}x^{-1}\alpha(\varphi)(x^{-1})\biggl{(}\int_{0}^{1}(1-t)^{k-2}e^{-uxt}dt\biggr{)}dx.$
With [OLBC10, 8.2.7] and [OLBC10, 8.4.7] we deduce
(5.71)
$\mathcal{L}(\varphi|_{2-k}W_{1})(u)=\int_{0}^{\infty}x^{-1}\alpha(\varphi)(x^{-1})(-ux)^{-k+1}e^{-ux}\biggl{(}1-e^{ux}\sum_{j=0}^{k-2}\frac{(-ux)^{j}}{j!}\biggr{)}dx\\\
=(-u)^{-k+1}\mathcal{L}(\alpha(\varphi)|_{k}W_{1})(u)-\sum_{j=0}^{k-2}\frac{(-u)^{-k+1+j}}{j!}\int_{0}^{\infty}x^{k-2-j}\alpha(\varphi)(x)dx.$
Since $\varphi$ is compactly supported we have
$\int_{0}^{\infty}\varphi^{(\ell)}(x)dx=0$ for $\ell\geq 1$. Since
$\alpha(\varphi)=\varphi^{(k-1)}$, for each $j\in[0,k-2]$, by applying
integration by parts, we get
(5.72) $\int_{0}^{\infty}\alpha(\varphi)(x)x^{j}dx=0.$
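The identity (5.72) follows by integrating by parts $j$ times; no boundary terms appear because $\varphi$, and hence each of its derivatives, is supported in a compact subset of $(0,\infty)$:

```latex
\int_{0}^{\infty}\alpha(\varphi)(x)\,x^{j}\,dx
  = \int_{0}^{\infty}\varphi^{(k-1)}(x)\,x^{j}\,dx
  = (-1)^{j}\, j!\int_{0}^{\infty}\varphi^{(k-1-j)}(x)\,dx = 0,
\qquad 0\le j\le k-2,
% since k-1-j \ge 1, so the last integrand is itself a derivative
% of a compactly supported function.
```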
Then, since $k$ is even, we have shown (5.67) and thus (5.66). Combining this
with (5.64) and (5.65), we deduce
$L_{f_{1}}(\varphi)=i^{k}L_{f_{1}}(\varphi|_{2-k}W_{1})$, which, by Corollary
5.3, implies that $f_{1}$ is a weakly holomorphic form of weight $k$ for
$\operatorname{SL}_{2}(\mathbb{Z})$. ∎
### 5.3. A summation formula for harmonic lifts
Let $N$ be a positive integer, $\chi$ a Dirichlet character modulo $N$ and
$\bar{\chi}$ the complex conjugate of the character $\chi$. We restrict to
integers $k\geq 2$ and let $S_{k}(N,\bar{\chi})$ denote the space of standard
holomorphic cusp forms of weight $k$ for $\Gamma_{0}(N)$ and the central
character $\bar{\chi}$. We recall [BF04] that the “shadow operator”
$\xi_{2-k}\colon H_{2-k}(N,\chi)\to S_{k}(N,\bar{\chi})$ is given by
(5.73) $\xi_{2-k}:=2iy^{2-k}\overline{\frac{\partial}{\partial\bar{z}}}.$
It is an important fact, first proved in [BF04], that $\xi_{2-k}$ is
surjective. The main object in the next theorem is the inverse image of a
given element of $S_{k}(N,\bar{\chi})$.
###### Theorem 5.6.
Let $k\in 2\mathbb{N}$ and let $f\in S_{k}(N,\bar{\chi})$ with Fourier
expansion
(5.74) $f(z)=\sum_{n=1}^{\infty}a_{f}(n)e^{2\pi inz}.$
Suppose that $g$ is an element of $H_{2-k}(N,\chi)$ such that $\xi_{2-k}g=f$
with Fourier expansion
(5.75) $g(z)=\sum_{\begin{subarray}{c}n\geq-
n_{0}\end{subarray}}c_{g}^{+}(n)e^{2\pi
inz}+\sum_{\begin{subarray}{c}n<0\end{subarray}}c_{g}^{-}(n)\Gamma(k-1,-4\pi
ny)e^{2\pi inz}.$
Then, for every $\varphi$ in the space $C^{\infty}_{c}(\mathbb{R},\mathbb{R})$
of smooth, compactly supported functions on $\mathbb{R}$ with values
in $\mathbb{R}$, we have
(5.76) $\sum_{\begin{subarray}{c}n\geq-
n_{0}\end{subarray}}c_{g}^{+}(n)\int_{0}^{\infty}\varphi(y)e^{-2\pi
ny}dy-N^{\frac{k}{2}-1}\sum_{\begin{subarray}{c}n\geq-
n_{0}\end{subarray}}c_{g|_{2-k}W_{N}}^{+}(n)\int_{0}^{\infty}\varphi(y)(-iy)^{k-2}e^{-2\pi
n/(Ny)}dy\\\
=\sum_{l=0}^{k-2}\sum_{n>0}\overline{a_{f}(n)}\Big{(}\frac{(k-2)!}{l!}(4\pi
n)^{1-k+l}\int_{0}^{\infty}e^{-2\pi ny}y^{l}\varphi(y)dy\\\
+\frac{2^{l+1}}{(k-1)}(8\pi n)^{-\frac{k+1}{2}}\int_{0}^{\infty}e^{-\pi
ny}y^{\frac{k}{2}-1}\varphi(y)M_{1-\frac{k}{2}+l,\frac{k-1}{2}}(2\pi
ny)dy\Big{)}$
where $M_{\kappa,\mu}(z)$ is the Whittaker hypergeometric function (for its
properties, see [OLBC10, §13.14]).
###### Remark 5.7.
Directly from the definition of $\xi_{2-k}$ we see that
$a_{f}(n)=-\overline{c_{g}^{-}(-n)}(4\pi n)^{k-1}$, for each $n\in\mathbb{N}$.
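The remark follows from a short standard computation (cf. [BF04]); we sketch it under the normalizations of (5.73) and (5.75). The holomorphic part of $g$ is annihilated by $\partial/\partial\bar{z}$, while for the non-holomorphic terms

```latex
\frac{\partial}{\partial\bar{z}}\Bigl(\Gamma(k-1,-4\pi ny)\,e^{2\pi inz}\Bigr)
   = \frac{i}{2}\,\frac{d}{dy}\,\Gamma(k-1,-4\pi ny)\;e^{2\pi inz}
   = \frac{i}{2}\cdot 4\pi n\,(-4\pi ny)^{k-2}e^{4\pi ny}\,e^{2\pi inz}.
% Applying \xi_{2-k}=2iy^{2-k}\,\overline{\partial/\partial\bar z}
% term by term (n<0, write m=-n>0) gives
\xi_{2-k}(g)(z)
   = -\sum_{m>0}\overline{c_{g}^{-}(-m)}\,(4\pi m)^{k-1}\,e^{2\pi imz},
% so comparing coefficients with f=\xi_{2-k}(g) yields
% a_{f}(m) = -\overline{c_{g}^{-}(-m)}\,(4\pi m)^{k-1}.
```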
###### Proof.
One can check that
$C^{\infty}_{c}(\mathbb{R},\mathbb{R})\subset\mathcal{F}_{f}\cap\mathcal{F}_{g}$.
With (4.14), we deduce that the $L$-series of $g$ can be written, for each
$\varphi\in C^{\infty}_{c}(\mathbb{R},\mathbb{R})$ as
(5.77) $L_{g}(\varphi)=L_{g}^{+}(\varphi)-\sum_{n>0}\overline{a_{f}(n)}(4\pi
n)^{1-k}\int_{0}^{\infty}\Gamma(k-1,4\pi ny)e^{2\pi ny}\varphi(y)dy$
where $L_{g}^{+}$ denotes the part corresponding to the holomorphic part of
$g$:
(5.78) $L_{g}^{+}(\varphi):=\sum_{n\geq-
n_{0}}c_{g}^{+}(n)\mathcal{L}\varphi(2\pi n).$
The second sum in (5.77) can be written as
(5.79) $\sum_{n>0}\overline{a_{f}(n)}\mathcal{L}(\Phi(\varphi))(2\pi
n)=\overline{L_{f}(\Phi(\varphi))}$
where
(5.80)
$\Phi(\varphi)=\mathcal{L}^{-1}\left((2u)^{1-k}\int_{0}^{\infty}\Gamma(k-1,2uy)e^{uy}\varphi(y)dy\right).$
Therefore,
(5.81)
$L_{g}(\varphi)=L_{g}^{+}(\varphi)-\overline{L_{f}(\Phi(\varphi))}=L_{g}^{+}(\varphi)-\overline{L_{\xi_{2-k}g}(\Phi(\varphi))}.$
It is clear from its derivation that this identity holds for any weight $2-k$
harmonic Maass form $g$ and, in particular, also for $g|_{2-k}W_{N}.$
Now, Theorem 4.5 applied to $L_{g}$ implies that
$L_{g}(\varphi)=i^{2-k}N^{k/2}L_{g|_{2-k}W_{N}}(\varphi|_{k}W_{N})$. Therefore
(5.82)
$L_{g}^{+}(\varphi)-\overline{L_{f}(\Phi(\varphi))}=i^{2-k}N^{k/2}\left(L_{g|_{2-k}W_{N}}^{+}(\varphi|_{k}W_{N})-\overline{L_{\xi_{2-k}(g|_{2-k}W_{N})}(\Phi(\varphi|_{k}W_{N}))}\right).$
Similarly, Theorem 4.5 and the identity
(5.83)
$\xi_{2-k}(g|_{2-k}W_{N})|_{k}{W_{N}}=\xi_{2-k}(g)|_{k}W_{N}|_{k}W_{N}=(-1)^{k}f$
imply that
(5.84)
$L_{\xi_{2-k}(g|_{2-k}W_{N})}(\Phi(\varphi|_{k}W_{N}))=i^{-k}N^{1-k/2}L_{f}(\Phi(\varphi|_{k}W_{N})|_{2-k}W_{N}).$
Therefore, (5.82) becomes
(5.85)
$L_{g}^{+}(\varphi)-\overline{L_{f}(\Phi(\varphi))}=i^{2-k}N^{k/2}L_{g|_{2-k}W_{N}}^{+}(\varphi|_{k}W_{N})+N\overline{L_{f}(\Phi(\varphi|_{k}W_{N})|_{2-k}W_{N})}.$
To simplify $L_{f}(\Phi(\varphi|_{k}W_{N})|_{2-k}W_{N})$, we first note that
a change of variables followed by an application of [EMOT54, 4.1(4)] gives
(5.86)
$\mathcal{L}^{-1}\left((2u)^{1-k}\int_{0}^{\infty}\Gamma(k-1,2uy)e^{uy}\varphi\left(\frac{1}{Ny}\right)\frac{dy}{(Ny)^{k}}\right)\left(\frac{1}{Nx}\right)\\\
=N^{-k}\mathcal{L}^{-1}\left((2u/N)^{1-k}\int_{0}^{\infty}\Gamma(k-1,2(u/N)y)e^{(u/N)y}\varphi\left(\frac{1}{y}\right)\frac{dy}{y^{k}}\right)\left(\frac{1}{x}\right)\\\
=N^{1-k}\mathcal{L}^{-1}\left((2u)^{1-k}\int_{0}^{\infty}\Gamma(k-1,2uy)e^{uy}\varphi\left(\frac{1}{y}\right)\frac{dy}{y^{k}}\right)\left(\frac{1}{x}\right).$
Then, with [EMOT54, 4.1(25)], we obtain
(5.87) $\mathcal{L}\left(\Phi(\varphi|_{k}W_{N})|_{2-k}W_{N}\right)(2\pi n)\\\
=\mathcal{L}\left((Nx)^{k-2}\mathcal{L}^{-1}\left((2u)^{1-k}\int_{0}^{\infty}\Gamma(k-1,2uy)e^{uy}\varphi\left(\frac{1}{Ny}\right)\frac{dy}{(Ny)^{k}}\right)\left(\frac{1}{Nx}\right)\right)(2\pi
n)\\\ =\frac{(2\pi
n)^{\frac{1-k}{2}}}{N}\int_{0}^{\infty}u^{\frac{k-1}{2}}J_{k-1}(\sqrt{8\pi
nu})(2u)^{1-k}\int_{0}^{\infty}\Gamma(k-1,2uy)e^{uy}\varphi(1/y)y^{-k}dydu\\\
=\frac{2^{1-k}(2\pi
n)^{\frac{1-k}{2}}}{N}\int_{0}^{\infty}\varphi(y)y^{k-2}\int_{0}^{\infty}u^{\frac{1-k}{2}}J_{k-1}(\sqrt{8\pi
nu})\Gamma(k-1,2u/y)e^{u/y}dudy.$
The formula [OLBC10, (8.4.8)] for the incomplete Gamma function implies that
the last expression equals
(5.88) $\displaystyle\frac{(8\pi
n)^{\frac{1-k}{2}}}{N}\sum_{l=0}^{k-2}\frac{2^{l}(k-2)!}{l!}\int_{0}^{\infty}\varphi(y)y^{k-2-l}\int_{0}^{\infty}u^{\frac{1-k}{2}+l}J_{k-1}(\sqrt{8\pi
nu})e^{-u/y}dudy$ $\displaystyle=\frac{(8\pi
n)^{\frac{1-k}{2}}}{N}\sum_{l=0}^{k-2}\frac{2^{l+1}(k-2)!}{l!}\int_{0}^{\infty}\varphi(y)y^{k-2-l}\int_{0}^{\infty}u^{2-k+2l}J_{k-1}(\sqrt{8\pi
n}u)e^{-u^{2}/y}dudy$ $\displaystyle=\frac{(8\pi
n)^{\frac{-k}{2}}}{N\sqrt{8\pi
n}(k-1)}\sum_{l=0}^{k-2}2^{l+1}\int_{0}^{\infty}\varphi(y)y^{\frac{k}{2}-1}e^{-\pi
ny}M_{1-\frac{k}{2}+l,\frac{k-1}{2}}(2\pi ny)dy$
where, for the last equality we used [EMOT54, 6.8(8)].
Finally, with [OLBC10, (8.4.8)] again, we deduce
(5.89) $\displaystyle L_{f}(\Phi(\varphi))$
$\displaystyle=\sum_{n>0}a_{f}(n)\mathcal{L}(\Phi(\varphi))(2\pi n)$
$\displaystyle=\sum_{l=0}^{k-2}\sum_{n>0}a_{f}(n)\frac{(k-2)!}{l!}(4\pi
n)^{1-k+l}\int_{0}^{\infty}e^{-2\pi ny}y^{l}\varphi(y)dy.$
Substituting (5.88) and (5.89) into (5.85), we derive the theorem. ∎
## References
* [BDR13] Kathrin Bringmann, Nikolaos Diamantis, and Martin Raum, _Mock period functions, sesquiharmonic Maass forms, and non-critical values of $L$-functions_, Adv. Math. 233 (2013), 115–134.
* [BF04] Jan Hendrik Bruinier and Jens Funke, _On two geometric theta lifts_ , Duke Math. J. 125 (2004), no. 1, 45–90.
* [BFI15] Jan H. Bruinier, Jens Funke, and Özlem Imamoḡlu, _Regularized theta liftings and periods of modular functions_ , J. Reine Angew. Math. 703 (2015), 43–93.
* [BFK14] Kathrin Bringmann, Karl-Heinz Fricke, and Zachary A. Kent, _Special $L$-values and periods of weakly holomorphic modular forms_, Proc. Amer. Math. Soc. 142 (2014), no. 10, 3425–3439.
* [BFOR17] Kathrin Bringmann, Amanda Folsom, Ken Ono, and Larry Rolen, _Harmonic Maass forms and mock modular forms: theory and applications_ , American Mathematical Society Colloquium Publications, vol. 64, American Mathematical Society, Providence, RI, 2017.
* [BO06] Kathrin Bringmann and Ken Ono, _The $f(q)$ mock theta function conjecture and partition ranks_, Invent. Math. 165 (2006), no. 2, 243–266.
* [BO10] Kathrin Bringmann and Ken Ono, _Dyson’s ranks and Maass forms_ , Ann. of Math. (2) 171 (2010), no. 1, 419–449.
* [Boo15] Andrew R. Booker, _$L$-functions as distributions_ , Math. Ann. 363 (2015), no. 1-2, 423–454.
* [Bro18] Francis Brown, _A class of non-holomorphic modular forms I_ , Res. Math. Sci. 5 (2018), no. 1, Paper No. 7, 40.
* [Bum97] Daniel Bump, _Automorphic forms and representations_ , Cambridge Studies in Advanced Mathematics, vol. 55, Cambridge University Press, Cambridge, 1997.
* [DD20] Nikolaos Diamantis and Joshua Drewitt, _Period functions associated to real-analytic modular forms_ , Res. Math. Sci. 7 (2020), no. 3, Paper No. 21, 23.
* [DR22] Nikolaos Diamantis and L. Rolen, _L-values of harmonic Maass forms_ , arXiv:2201.10193v3 (2022).
* [DSKS21] K. Deo Shankhadhar and R. Kumar Singh, _An analogue of Weil’s Converse Theorem for harmonic Maass forms of polynomial growth_ , arXiv:2101.03101. (2021), 1–28.
* [EMOT54] A. Erdélyi, W. Magnus, F. Oberhettinger, and F. G. Tricomi, _Tables of integral transforms. Vol. I_ , McGraw-Hill Book Company, Inc., New York-Toronto-London, 1954, Based, in part, on notes left by Harry Bateman.
* [MS04] Stephen D. Miller and Wilfried Schmid, _Summation formulas, from Poisson and Voronoi to the present_ , Noncommutative harmonic analysis, Progr. Math., vol. 220, Birkhäuser Boston, Boston, MA, 2004, pp. 419–440.
* [MSSU20] Tadashi Miyazaki, Fumihiro Sato, Kazunari Sugiyama, and Takahiko Ueno, _Converse theorems for automorphic distributions and Maass forms of level $N$_, Res. Number Theory 6 (2020), no. 1, Paper No. 6, 59.
* [NO20] Michael Neururer and Thomas Oliver, _Weil’s converse theorem for Maass forms and cancellation of zeros_ , Acta Arith. 196 (2020), no. 4, 387–422.
* [OLBC10] Frank W. J. Olver, Daniel W. Lozier, Ronald F. Boisvert, and Charles W. Clark (eds.), _NIST handbook of mathematical functions_ , U.S. Department of Commerce, National Institute of Standards and Technology, Washington, DC; Cambridge University Press, Cambridge, 2010, With 1 CD-ROM (Windows, Macintosh and UNIX).
* [Raz77] Michael J. Razar, _Modular forms for $G_{0}(N)$ and Dirichlet series_, Trans. Amer. Math. Soc. 231 (1977), no. 2, 489–495.
* [Shi73] Goro Shimura, _On modular forms of half integral weight_ , Ann. of Math. (2) 97 (1973), 440–481.
arxiv-papers | 2021-07-26T17:59:15 | 2024-09-04T03:07:19.570199
{
    "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
    "authors": "Nikolaos Diamantis, Min Lee, Wissam Raji, Larry Rolen",
    "submitter": "Nikolaos Diamantis",
    "url": "https://arxiv.org/abs/2107.12366"
}

2107.12368
# Revealing the Vertical Cloud Structure of a young low-mass Brown Dwarf, an
analog to the $\beta$-Pictoris b directly-imaged exoplanet, through Keck
I/MOSFIRE spectro-photometric variability
Elena Manjavacas
AURA for the European Space Agency (ESA), ESA Office, Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
W. M. Keck Observatory, 65-1120 Mamalahoa Hwy., Kamuela, HI, USA
Theodora Karalidi
Department of Physics, University of Central Florida, 4000 Central Florida Blvd., Orlando, FL 32816, USA
Johanna M. Vos
Department of Astrophysics, American Museum of Natural History, Central Park West at 79th Street, New York, NY 10024, USA
Beth A. Biller
SUPA, Institute for Astronomy, University of Edinburgh, Blackford Hill, Edinburgh EH9 3HJ, UK
Centre for Exoplanet Science, University of Edinburgh, Edinburgh, UK
Ben W. P. Lew
Lunar and Planetary Laboratory, The University of Arizona, 1640 E. University Blvd., Tucson, AZ 85721, USA
Department of Astronomy and Steward Observatory, The University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721, USA
Corresponding author: Elena Manjavacas, [email protected]
###### Abstract
Young brown dwarfs are analogs to giant exoplanets, as they share effective
temperatures, near-infrared colors and surface gravities. Thus, the detailed
characterization of young brown dwarfs might shed light on the study of giant
exoplanets, which we are currently unable to observe with sufficient
signal-to-noise to allow a precise characterization of their atmospheres.
2MASS J22081363+2921215 is a young L3 brown dwarf, member of the
$\beta$-Pictoris young moving group (23$\pm$3 Myr), that shares its effective
temperature and mass with the $\beta$ Pictoris b giant exoplanet. We performed
a $\sim$2.5 hr spectro-photometric $J$-band monitoring of 2MASS
J22081363+2921215 with the MOSFIRE multi-object spectrograph, installed at the
Keck I telescope. We measured a minimum variability amplitude of
3.22$\pm$0.42% for its $J$-band light curve. The ratio between the maximum and
the minimum flux spectra of 2MASS J22081363+2921215 shows a weak wavelength
dependence, and a potentially enhanced variability amplitude in the alkali
lines. Further analysis suggests that the variability amplitude in the alkali
lines is higher than the overall variability amplitude (4.5–11%, depending on
the lines). The variability amplitudes in these lines are lower if we degrade
the resolution of the original MOSFIRE spectra to R$\sim$100, which explains
why this potentially enhanced variability in the alkali lines has not been found
previously in HST/WFC3 light curves. Using radiative-transfer models, we
obtained the different cloud layers that might be introducing the spectro-
photometric variability we observe for 2MASS J22081363+2921215, which further
supports the measured enhanced variability amplitude in the alkali lines. We
provide an artistic recreation of the vertical cloud structure of this
$\beta$-Pictoris b analog.
stars: brown dwarfs
Facilities: MOSFIRE (W. M. Keck Observatory). Software: astropy (Astropy
Collaboration et al., 2013), Pypeit (Prochaska et al., 2019, 2020).
## 1 Introduction
Brown dwarfs are substellar objects that are unable to sustain hydrogen
fusion, contracting as they cool down over their lifetime. Thus, younger brown
dwarfs have larger radii and lower surface gravity than their older
counterparts. Young brown dwarfs and giant exoplanet atmospheres share similar
colours, temperatures, and surface gravities (Chauvin et al., 2004; Marois et
al., 2008; Faherty et al., 2013). Nevertheless, young brown dwarfs, unlike
giant exoplanets, are found in isolation, which makes them technically easier
to observe with current instrumentation. Thus, the characterization of young
free-floating brown dwarfs might improve our understanding of the atmospheres
of imaged young giant exoplanets. Some examples of this class of objects are
2MASS J00452143+1634446 (L2, $\sim$50 Myr, Kendall et al. 2004), PSO 318.5-22
(L7, 23$\pm$3 Myr, Liu et al. 2013), 2MASS J00470038+6803543 (L7, 130$\pm$20
Myr, Gizis et al. 2012), 2MASS J035523.37+113343.7 (L5, $\sim$120 Myr, Reid &
Walkowicz 2006), and 2MASS J22081363+2921215 (L3, 23$\pm$3 Myr, Cruz et al.
2009), among others.
Photometric or spectro-photometric variability surveys with ground and space-
based data have shown that the majority of brown dwarfs have signs of low-
level variability across all spectral types, most likely due to the existence
of different layers of heterogeneous clouds in their atmospheres that evolve
as they rotate (Radigan, 2014; Buenzli et al., 2014; Metchev et al., 2015).
For example, Metchev et al. (2015) monitored 23 L-dwarfs and 16 T-dwarfs
using the Spitzer telescope, and concluded that $\sim$61% of the L-dwarfs in
their sample show signs of photometric variability with amplitudes $>$0.2%, and
that at least 31% of the T-dwarfs also show signs of low-level variability. In
addition, Metchev et al. (2015) suggested that variability amplitudes for
low-gravity brown dwarfs might be enhanced in comparison with the field brown
dwarf population.
Using the New Technology Telescope (NTT) and the United Kingdom Infrared
Telescope (UKIRT), Vos et al. (2019) photometrically monitored a sample of 36
young brown dwarfs with spectral types between L0 and L8.5, finding that
$\mathrm{30^{+16}_{-8}}$% of the young brown dwarfs were variable. In
contrast, Radigan (2014) found that only $\mathrm{11^{+13}_{-4}}$% of the
field brown dwarfs with the same spectral types are variable, also using
ground-based data. These results suggest that the variability may be enhanced for
low-gravity/low-mass exoplanet analogs. In fact, for free-floating young
planetary mass objects like WISEP J004701+680352, VHS 1256-1267ABb and PSO
J318.5-22, very high variability amplitudes have been measured (Lew et al.,
2016; Biller et al., 2018; Zhou et al., 2020; Bowler et al., 2020).
Finally, photometric and spectro-photometric variability has also been
predicted for giant exoplanets (Komacek & Showman, 2020; Tan & Showman, 2019,
respectively). Their source of photometric variability is expected to be
atmospheric dynamics, as for brown dwarfs. Some attempts to measure
photometric variability in giant exoplanets have been performed using Extreme
Adaptive Optics instrumentation. For example, Apai et al. (2016), and Biller
et al. (2021), attempted to use VLT/SPHERE to measure the photometric
variability of the HR 8799 system, but, due to the lack of a long data
baseline, only upper limits on the variability amplitude could be provided. In
conclusion, detecting photometric and spectro-photometric variability in giant
exoplanets is challenging due to instrumental limitations. Thus, as young
brown dwarfs and giant exoplanets share several physical characteristics, and
given the easier observability of young brown dwarfs, and the higher chances
of finding detectable variability, spectro-photometric monitoring of these
objects can provide insights on the heterogeneous cloud coverage of exoplanet
atmospheres, and the vertical pressure levels at which those are found.
This paper is structured as follows: in Section 2, we introduce the key
properties of 2MASS J22081363+2921215. In Section 3, we describe the details
of the Keck I/MOSFIRE spectro-photometric monitoring we performed for 2MASS
J22081363+2921215. In Section 4, we describe the data reduction. In Section 5
we explain how the light curve production and correction was performed using
the calibration stars. In Section 6 we account for the potential influence of
systematics in the target’s light curve. In Section 7 we present the results
for photometric and spectro-photometric variability for 2MASS
J22081363+2921215. Finally, in Section 8, we describe the interpretation of
the spectro-photometric variability found for 2MASS J22081363+2921215 using
radiative-transfer models, and we provide a general picture of the cloud
structure the object might have, given the measured spectro-photometric
variability.
## 2 2MASS J22081363+2921215
2MASS J22081363+2921215 (2M2208+2921), $\mathrm{M_{J}}$ = 15.8, was one of the
first peculiar early L objects found (Kirkpatrick et al., 2000), because of
its weak alkali lines. It was spectrally classified in the optical by
Kirkpatrick et al. (2000), as an L2 object. Its peculiarity was later
explained as an effect of low-surface gravity (Kirkpatrick et al., 2008). Cruz
et al. (2009) classified it as an L3$\gamma$ in the near infrared. Allers &
Liu (2013) classified it as a very-low surface gravity object using spectral
indices. Using the BT-Settl atmospheric models with solar metallicity,
Manjavacas et al. (2014) estimated its effective temperature at 1800 K and
its surface gravity at log g$\sim$4.0.
Zapatero Osorio et al. (2014) provided a trigonometric parallax for
2M2208+2921 of $\pi$ = 21.2$\pm$0.7 mas, proper motions of
$\mu_{\alpha}\cos\delta$ = 90.7$\pm$3.0 mas/yr, and $\mu_{\delta}$ =
-16.2$\pm$3.7 mas/yr, and a luminosity of $\mathrm{\log(L/L_{\odot}}$) =
-3.71$\pm$0.10. Gagné et al. (2014) found, with a modest probability of 53.8%,
that 2M2208+2921 belongs to the $\beta$-Pictoris young moving group (23$\pm$3
Myr, Mamajek & Bell 2014). Dupuy et al. (2018) confirmed 2M2208+2921 to be a
likely member of the $\beta$-Pictoris moving group, also using radial-velocity
measurements from Vos et al. (2017). In this case, 2M2208+2921 would have an estimated mass
between 9 and 11 $\mathrm{M_{Jup}}$, being an analog of the planet/brown dwarf
companion $\beta$ Pictoris b (Lagrange et al., 2009). $\beta$ Pictoris b was
one of the first directly-imaged planets detected. It is a companion to the
$\beta$-Pictoris star at 8-14 AU, with a spectral type $\mathrm{L2\pm 2}$
(Bonnefoy et al., 2013), and with a dynamical mass of
$\mathrm{13^{+0.3}_{-0.4}M_{Jup}}$ (Dupuy et al., 2019). Dupuy et al. (2019)
also showed that 2M2208+2921 and $\beta$-Pictoris b share a similar position
in the color-magnitude diagram, further confirming the similarity of the two
objects.
Metchev et al. (2015) measured a rotational period of 3.5$\pm$0.2 h for
2M2208+2921 using Spitzer [3.6] and [4.5] bands, with variability amplitudes
of 0.69$\pm$0.07%, and 0.54$\pm$0.11%, respectively. Miles-Páez et al. (2017)
measured low values of $J$-band polarization for the object. Finally, Vos et
al. (2017) measured an inclination of $i$ = 55$\pm$10 deg.
## 3 Observations
Performing spectro-photometric monitoring observations from the ground using
single-slit spectrographs is technically challenging, since at least one
calibration star is needed for spectral calibration, to account for telluric
contamination, changes in the airmass, humidity and temperature variations in
the atmosphere, etc., which might potentially introduce spurious variability
signals. Normally, brown dwarfs are isolated, and no other object is found
close enough to be observed simultaneously as a spectro-photometric calibrator
together with the target within the few-arcsecond-long slits of single-slit
spectrographs. This is only possible in the case of well-resolved binary brown
dwarfs like Luhman-16AB (Kellogg et al., 2017). Since brown dwarfs are in
their majority single objects (Bouy et al., 2003; Burgasser et al., 2003;
Luhman et al., 2007), near-infrared multi-object spectrographs like MOSFIRE
(McLean et al., 2010, 2012) are needed to perform spectro-photometric
monitoring of brown dwarfs from the ground. MOSFIRE is installed at the
Cassegrain focus of Keck I, and it performs simultaneous spectroscopy of up to
46 objects in a 6.1’x 6.1’ field of view, using the Configurable Slit Unit
(CSU), a cryogenic robotic slit mask system that is reconfigurable
electronically in less than 5 minutes without any thermal cycling of the
instrument. A single photometric band is covered in each instrument setting
($Y$, $J$, $H$ or $K$).
We observed 2M2208+2921 on UT 2019-10-13 with MOSFIRE at the Keck I telescope
during half a night. We obtained in total 13 spectra of 2M2208+2921 in the
$J$-band (1.153–1.352 $\mu$m) using an ABBA pattern during a total of
$\sim$2.5 h of monitoring. We used wide slits of 4.5” to avoid slit losses for
all 10 calibration stars and the target, obtaining a spectral resolution of
R$\sim$1000. In Table 1 we show the list of objects used as calibrators,
their coordinates, and their $J$-band magnitudes. In general, the calibration
stars had similar magnitudes as the target. In Figure 1, we show the
configuration of the CSU mask, with the position of the target and the
calibration stars. We used exposure times of 150 s for each nod position in
the first ABBA, and 180 s for each nod position of the rest of the ABBA
patterns. We observed over an airmass range of 1.01 to 1.35.
For data reduction purposes, 13 dome flats of 11 s exposure were obtained. Due
to challenges in producing a successful wavelength calibration using sky lines
with 4.5” slits, we obtained on UT 2020-03-05 four $J$-band ”sky” spectra
using the same configuration for the multiobject mask as for the observations,
but using 1.0” slits to obtain higher resolution sky lines. The 1.0” slits
provided spectra of the skylines with enough resolution to allow the pipeline
to produce an accurate wavelength calibration.
Table 1: Information about the calibration objects in the field of 2M2208+2921.

Num. mask | Num. obj. | RA | DEC | $\mathrm{M_{J}}$
---|---|---|---|---
20 | 1 | 22:08:13.962 | 29:23:19.62 | 16.14
21 | 2 | 22:08:05.925 | 29:22:34.83 | 16.32
15 | 3 | 22:08:05.925 | 29:22:34.83 | 15.61
7 | 4 | 22:08:18.266 | 29:21:56.62 | 15.83
2 | 5 | 22:08:15.857 | 29:21:41.9 | 15.86
2M2208 | 2M2208 | 22:08:13.631 | 29:21:21.54 | 15.80
3 | 6 | 22:08:11.258 | 29:20:55.81 | 15.88
9 | 7 | 22:08:10.257 | 29:20:19.74 | 15.16
26 | 8 | 22:08:07.798 | 29:19:37.43 | 16.43
18 | 9 | 22:08:14.681 | 29:19:27.78 | 16.49
30 | 10 | 22:08:10.930 | 29:19:05.76 | 16.04
Figure 1: Illustration of the positioning of the CSU bars on MOSFIRE to obtain
simultaneous multi-object spectroscopy of the field of 2M2208+2921 as produced
by MAGMA, the MOSFIRE Automatic GUI-based Mask Application. Our target (named
as 2M2208) is placed in the center of the field. The position of the
comparison stars as shown in Table 1 are also marked. The round colored points
show the expected positions of the target and calibration stars. The yellow
squares show the position of the stars used for the alignment of the mask.
## 4 Data Reduction
We used the version 1.0 of PypeIt111https://github.com/pypeit/PypeIt to reduce
the multi-object spectroscopic data acquired with MOSFIRE in the $J$-band.
PypeIt is a Python-based data reduction pipeline for spectroscopic data,
applicable to a variety of spectrographs in different facilities (Prochaska et
al., 2019, 2020). The pipeline corrected all the raw images for dark current,
and generated a bad-pixel mask. The edges of the slits were traced using
the dome flats. A master flat was also created. PypeIt produced a wavelength
calibration for our data using the sky arc frames taken using the same
multiobject mask we employed for our observations, but with narrower slits of
1.0”, to obtain well-resolved skylines that would allow PypeIt to find a
wavelength solution automatically. The wavelength calibration accounted for
the spectral tilt across the slit. The calibrations were applied to our
science frames, and the sky was subtracted using the A-B or B-A frames
following Kelson (2003). The 1D science spectra were extracted from the 2D
sky-corrected frames. Finally, we coadded the A-B and B-A extracted 1D science
spectra to obtain a signal-to-noise of $\sim$65 at 13000 $\AA$ for our science
target. The signal-to-noise achieved for each object on the field is
summarized in Table 2. No telluric calibration was performed for these
spectra, since the spectral types of the calibration stars, necessary to
perform this correction, could not be determined. Instead, for the upcoming
analysis, we have used the wavelength range between 12200 and 13200 $\AA$,
avoiding the most prominent telluric contamination.
## 5 Production and Correction of Light Curves
We produce a $J$-band light curve for each object in the field, restricting
the wavelength range of the spectra between 12200 and 13200 $\AA$, to avoid
the most prominent telluric contamination that might introduce spurious
variability for the objects in the field.
As these data were obtained from the ground, there might be other additional
sources of non-astrophysical contamination affecting the shape of the light
curve extracted for each object, such as varying atmospheric transparency,
change in the water vapor content of the atmosphere, the seeing, variations in
outside temperature during the $\sim$2.5 h of the observation, wind speed and
direction variations, airmass variations, etc. Thus, the science target light
curve needs to be corrected for those potential sources of contamination.
To perform the light curve correction, we followed a similar approach to
Radigan (2014), but with more conservative criteria to select the best
calibration stars. We corrected each light curve by dividing it by a
calibration light curve produced median combining the relative-flux light
curves of all the other objects in the field, besides the science target.
First, we normalized the light curves of all objects to the median flux for
each of them. For each reference star, a calibration light curve was created
by median combining the light curves of all the other objects, besides the
science target. Then, the raw light curve of each calibration star was divided
by its corresponding calibration light curve to obtain a corrected light
curve. Finally, we measured the standard deviation, $\sigma$, of each
corrected light curve for each calibration star. In Table 2 we show the
standard deviation of all the calibration stars and our science target before
and after correcting each light curve. To perform an optimal correction of the
2M2208+2921 light curve, we first chose which calibration stars are least
likely to show intrinsic astrophysical variability due to star spots and/or
flares. To select the most stable calibration stars, we kept those whose
standard deviation is at most half the standard deviation of
the target’s light curve ($\sigma_{star}<\sigma_{target}/2$). Using this
criterion, we selected five stable calibration stars (stars 1, 4, 5, 6, and 8),
that coincide with the calibration stars showing a smaller degree of
variability amplitude after they were corrected using the rest of the
calibration stars in the field (see Table 2). We show the uncorrected and
corrected light curves for the calibration stars in the Appendix, Figures 20
and 21. The uncertainties for the data points in the light curves are the
formal instrumental uncertainties provided by the PypeIt pipeline.
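The correction procedure described above can be sketched in a few lines of Python. This is an illustrative implementation under our own naming assumptions (`raw_fluxes`, a `"target"` key), not the actual analysis code:

```python
import numpy as np

def correct_light_curves(raw_fluxes, target="target"):
    """raw_fluxes: dict mapping object name -> 1D flux array (one value
    per epoch). Returns median-normalized, calibration-divided curves."""
    # Normalize every raw light curve to its own median flux.
    norm = {k: np.asarray(v, float) / np.median(v)
            for k, v in raw_fluxes.items()}
    corrected = {}
    for name in norm:
        # Calibration curve: median combination of all *other* objects,
        # always excluding the science target.
        others = [v for k, v in norm.items() if k not in (name, target)]
        corrected[name] = norm[name] / np.median(others, axis=0)
    return corrected

def stable_stars(corrected, target="target"):
    """Keep calibration stars with sigma_star < sigma_target / 2."""
    sigma_target = np.std(corrected[target])
    return [k for k, v in corrected.items()
            if k != target and np.std(v) < sigma_target / 2]
```

Dividing by the median-combined curve removes any flux modulation common to the whole field (transparency, airmass), while the standard-deviation cut keeps only stars with little residual scatter.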
For comparison, we also explored for our sample the calibration-star selection
criterion used by Radigan (2014), in which a shifted version of the corrected
light curve of each calibration star is subtracted from itself and divided by
$\sqrt{2}$
($\sigma_{s}=[f_{cal}-f_{cal\\_shifted}]/\sqrt{2}$). Radigan (2014) then
identified poor-quality calibration stars as those where
$\sigma_{s}>1.5\times\sigma_{target}$. This criterion did not reject any of the
calibration stars in our field, thus, we used the more conservative method
detailed above to choose the most stable calibration stars.
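For reference, the shifted-difference noise estimate $\sigma_{s}$ can be written as follows. This is a sketch under our reading of the prescription: "shifted" taken to mean offset by one epoch, and $\sigma_{s}$ taken as the standard deviation of the differenced curve:

```python
import numpy as np

def sigma_s(flux):
    """Point-to-point scatter: std of (f - f_shifted) / sqrt(2).
    For white noise of standard deviation s this returns ~s, while
    being largely insensitive to slow astrophysical trends."""
    flux = np.asarray(flux, float)
    return np.std((flux[1:] - flux[:-1]) / np.sqrt(2.0))
```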
The formal instrumental uncertainties provided by PypeIt probably
underestimate the uncertainties of the 2M2208+2921 light curve, since they do
not necessarily account for spurious variability introduced by changes in the
Earth’s atmosphere during the observation. Thus, we use a similar approach as
Radigan (2014) to estimate the uncertainties for each point of the light
curve. We used the mean of the $\sigma_{s}$ calculated for the target and the
selected calibration stars as the uncertainty for each point in the light
curve of the target. This method accounts for any residual uncorrected
atmospheric contamination variability in the target’s light curve. The non-
corrected light curve of 2M2208+2921 is shown in Fig. 2, left, and the
corrected light curve in Fig. 2, right.
Figure 2: Normalized non-corrected (left) and corrected (right) light curves of 2M2208+2921.

Table 2: Statistics of the light curves of 2M2208+2921 and the calibration stars on its field. We highlight in bold face the best reference stars, selected as those with $\sigma_{calibration\\_stars}$ $<$ $\sigma_{2M2208}/2$.

Object Number | SNR at 13000 $\AA$ | $\sigma$ non-corrected light curve | $\sigma$ corrected light curve | Variability after correction
---|---|---|---|---
2M 2208+2921 | 64.3 | 1.12 x $10^{-2}$ | 9.70 x $10^{-3}$ | 3.22 %
Object 1 | 48.5 | 5.30 x $10^{-3}$ | 4.17 x $10^{-3}$ | 1.32 %
Object 2 | 36.2 | 1.07 x $10^{-2}$ | 5.15 x $10^{-3}$ | 2.00 %
Object 3 | 36.8 | 7.03 x $10^{-3}$ | 5.76 x $10^{-3}$ | 2.10 %
Object 4 | 98.6 | 5.93 x $10^{-3}$ | 3.73 x $10^{-3}$ | 1.22 %
Object 5 | 55.3 | 6.31 x $10^{-3}$ | 3.55 x $10^{-3}$ | 1.18 %
Object 6 | 53.0 | 5.59 x $10^{-3}$ | 3.40 x $10^{-3}$ | 1.39 %
Object 8 | 76.4 | 8.31 x $10^{-3}$ | 3.72 x $10^{-3}$ | 1.26 %
Object 9 | 40.1 | 1.03 x $10^{-2}$ | 5.79 x $10^{-3}$ | 2.24 %
Object 10 | 43.4 | 1.02 x $10^{-2}$ | 6.93 x $10^{-3}$ | 2.21 %
### 5.1 BIC Test for Significant Variability
To test the significance of the observed fluctuations in the light curve of
2M2208+2921, we use the Bayesian Information Criterion (BIC). The BIC is
defined as
$\mathrm{BIC}=-2~{}\mathrm{ln}~{}\mathcal{L}_{\mathrm{max}}+k~{}\mathrm{ln}~{}N$
(1)
where $\mathcal{L}_{\mathrm{max}}$ is the maximum likelihood achievable by the
model, $k$ is the number of parameters in the model and $N$ is the number of
data points used in the fit (Schwarz, 1978). In our case, we calculate
$\Delta\mathrm{BIC}=\mathrm{BIC}_{flat}-\mathrm{BIC}_{\mathrm{sin}}$ to assess
whether a variable sinusoidal or non-variable flat model is favored by the
data. This method has previously been used for identifying brown dwarf
variability by Naud et al. (2017) and Vos et al. (2020). The BIC penalizes the
sinusoidal model for having additional parameters compared with the flat
model. The sinusoidal and flat models are shown in Fig. 3. A
$\Delta\mathrm{BIC}$ value of 37 implies that the variable (sinusoidal) model
is very strongly preferred over the flat model.
Figure 3: The light curve of 2M2208+2921 is shown by the orange points, with
the best-fit non-variable (flat) and variable (sinusoidal) models shown in
grey. The BIC test shows that the variable model is strongly favored by the
light curve.
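Equation 1 can be evaluated directly once each model is fit by least squares. The sketch below assumes a Gaussian likelihood with a common fitted variance, so that $-2\,\mathrm{ln}\,\mathcal{L}_{\mathrm{max}}$ reduces to $N\,\mathrm{ln}(\mathrm{RSS}/N)$ up to constants that cancel in $\Delta\mathrm{BIC}$; the function names and fitting details are our assumptions, not the authors' code:

```python
import numpy as np
from scipy.optimize import curve_fit

def bic(residuals, k):
    """BIC = -2 ln Lmax + k ln N for Gaussian residuals with a fitted
    common variance; additive constants drop out of Delta-BIC."""
    n = residuals.size
    return n * np.log(np.sum(residuals ** 2) / n) + k * np.log(n)

def delta_bic(t, flux):
    """Delta BIC = BIC_flat - BIC_sin; large positive values strongly
    favor the variable (sinusoidal) model."""
    def sine(t, amp, period, phase, offset):
        return amp * np.sin(2 * np.pi * t / period + phase) + offset
    # Try several initial periods so curve_fit does not get stuck in a
    # local minimum, and keep the best-fitting solution.
    best = None
    for p0_period in np.linspace(0.5, 2 * (t[-1] - t[0]), 8):
        try:
            popt, _ = curve_fit(
                sine, t, flux,
                p0=[np.ptp(flux) / 2, p0_period, 0.0, flux.mean()],
                maxfev=5000)
        except RuntimeError:
            continue
        rss = np.sum((flux - sine(t, *popt)) ** 2)
        if best is None or rss < best[0]:
            best = (rss, popt)
    flat_resid = flux - flux.mean()        # flat model: k = 1 (offset)
    sin_resid = flux - sine(t, *best[1])   # sinusoid:   k = 4
    return bic(flat_resid, 1) - bic(sin_resid, 4)
```

On the conventional Kass and Raftery scale, $\Delta\mathrm{BIC}>10$ counts as very strong evidence, consistent with the value of 37 quoted above.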
## 6 Systematics Corrections
### 6.1 Comparison of Variability between Blue and Red Half of Spectrum
Telluric contamination in the $J$-band spectra asymmetrically affects the blue
and the red edges, and also some intermediate wavelengths (Rudolf et al.,
2016), that could potentially influence the variability amplitude we measure
for 2M2208+2921. To test the potential influence of telluric contamination in
the light curve of 2M2208+2921, we produced two different light curves using
only the first and the second half of the wavelength range of spectra. The
first half spectra light curve was produced using the spectra between 12200
and 12700 $\AA$, and the second half light curve was produced using the range
between 12700 and 13200 $\AA$ (see both light curves in the Appendix, Fig.
15). Both light curves looked visually similar, but to quantitatively test
that both light curves are similar, and thus that the telluric contamination
does not significantly affect the spectra at the blue and red ends, we ran a
Mann-Whitney U test, a non-parametric test that checks the similarity of the
distributions of two samples (Neuhäuser, 2011). The Mann-Whitney U test
does not assume a normal distribution of the data. The null hypothesis asserts
that the medians of the two samples are identical. We calculated the U
statistic and compared it to a tabulated $\mathrm{U_{critical}}$ given by the
number of points in the sample. If U $>$ $\mathrm{U_{critical}}$, the null
hypothesis $H_{0}$ (the samples are similar) is accepted. For the case of our target, the
calculated U = 94.5 $>$ $\mathrm{U_{critical}}$ = 45, for a sample of 13
points, as in the case of the 2M2208+2921 light curve. We also calculated the
Kendall $\tau$ non-parametric correlation test (Puka, 2011) between the
target's light curves produced with the first-half and second-half wavelength
ranges. We chose the Kendall $\tau$ correlation test since it is a
non-parametric measure of correlation, and more robust than other correlation
tests such as the Spearman $\rho$ test (Langfelder & Horvath, 2012). We obtained a
Kendall $\tau$ coefficient of 0.85, (significance =
5.2$\times$$\mathrm{10^{-6}}$), indicating a strong correlation between both
light curves, supporting the U-test result.
### 6.2 Correlation between Stars and Target Light Curves
To evaluate the effects of potential contamination on the target’s light curve
due to atmospheric effects, and potential thin clouds, we investigate the
correlation between the non-corrected light curve of the target, and the
comparison stars. The Kendall $\tau$ coefficients suggest a weak to moderate
correlation between the light curves, depending on the ”good”
comparison star. The Kendall $\tau$ correlation coefficients vary between 0.18
(significance = 0.43) and 0.46 (significance = 0.03). In Fig. 16 in the
Appendix, we show the correlation plots between the target and each of the
stars that we use for calibration.
After correcting the 2M2208+2921 light curve using the method explained in
Section 5, we run the Kendall $\tau$ non-parametric correlation test again,
finding correlation coefficients that range between 0.05 (significance = 0.85)
and -0.33 (significance = 0.12), suggesting no correlation to a weak anticorrelation
for some of the ”good” comparison stars (see Fig. 17 in the Appendix).
### 6.3 Correlation with Full Width Half Maximum of the Spectra
We obtained spectra following an ABBA pattern; thus, the slit losses might
vary slightly at the A and B positions of the pattern, potentially influencing the
measured variability of the target. Thus, we investigated a potential
relationship between the variability found for 2M2208+2921, and the Full Width
Half Maximum (FWHM) of the target spectra taken during the 2.5 hr of
monitoring with MOSFIRE. We measured the FWHM at three different positions of
the coadded ABBA spectra in the spectral direction (x direction): across pixel
x = 683, across pixel x = 1042, and pixel x = 1365, and then we calculated the
mean of the FWHM at those three positions for each ABBA coadd. We obtained a
median FWHM of 0.84$\pm$0.15 arcsec during the 2.5 hr of monitoring (see Fig.
18, left, in the Appendix). We calculated the Kendall $\tau$ correlation of
the mean FWHM for each spectrum and the flux evolution of the spectra with
time, obtaining a very weak negative correlation between both quantities
($\tau$ = -0.077, significance = 0.76, Fig. 18, right).
### 6.4 Correlation with Atmospheric Parameters
The evolution of atmospheric conditions during the observation might influence
the amount of flux collected by MOSFIRE, affecting simultaneously the target
and the calibration stars. Namely, the most relevant factors that might
potentially affect our observations are: the humidity content, the external
temperature, and the airmass (Fig. 19 in the Appendix). The evolution of these
parameters is recorded in the header, and/or on the Mauna Kea Weather
Center webpage (http://mkwc.ifa.hawaii.edu). We calculated the Kendall $\tau$
correlation coefficient between the non-corrected light curve for 2M2208+2921,
and each of the atmospheric parameters mentioned above. We found no
correlation between the target’s light curve and the humidity content ($\tau$
= -0.08, significance = 0.72), a weak correlation with the external
temperature (0.35, significance = 0.09), and a weak anti-correlation with the
airmass ($\tau$ = -0.20, significance = 0.37).
Since these correlations are weak or not statistically significant, we
conclude that there is no correlation between the external conditions and the
target’s light curve.
## 7 Results
### 7.1 Photometric variability
As we did not cover the entire known rotational period of the target
(3.5$\pm$0.2 hr, Metchev et al. 2015) with our MOSFIRE spectro-photometric
observations, we are only able to provide a minimum variability amplitude for
this light curve in the $J$-band, which we found to be 3.22$\pm$0.42%. As
expected, this minimum variability amplitude is higher than the variability
amplitude measured by Spitzer in the [3.6] and [4.5] channels (Metchev et al.,
2015), which were 0.69$\pm$0.07%, and 0.54$\pm$0.11%, respectively. The
$J$-band is tracing deeper layers of the atmosphere of 2M2208+2921 than the
[3.6] and [4.5] bands, and thus, a higher variability amplitude is expected,
assuming that the variability amplitudes measured with Spitzer have not
changed significantly between epochs (Yang et al., 2016).
We do not have enough time coverage to observe a full rotational period of the
target (3.5$\pm$0.2 hr, Metchev et al. 2015), but we still searched for other
periods in the $J$-band light curve using a Lomb-Scargle periodogram (Lomb,
1976; Scargle, 1982; Horne & Baliunas, 1986) and a Bayesian Generalized
Lomb–Scargle (BGLS) periodogram (Mortier et al., 2015); neither found any
periodicity in the $J$-band light curve.
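A periodogram search of this kind can be sketched with scipy. The light curve below is synthetic (13 epochs over $\sim$2.5 h, as in the observations; the injected 1.2 h period and amplitudes are arbitrary illustration values, not measurements):

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic 13-epoch, ~2.5 h light curve with an injected 1.2 h period.
t = np.linspace(0.0, 2.5, 13)                         # hours
rng = np.random.default_rng(1)
flux = 1.0 + 0.02 * np.sin(2 * np.pi * t / 1.2) + rng.normal(0, 0.002, t.size)

periods = np.linspace(0.5, 5.0, 2000)                 # trial periods (h)
ang_freq = 2 * np.pi / periods                        # angular frequencies
power = lombscargle(t, flux - flux.mean(), ang_freq)  # LS periodogram
best_period = periods[np.argmax(power)]               # periodogram peak
```

With only 13 points over a baseline shorter than the 3.5 h rotational period, a real peak at the rotational period would not be recoverable, consistent with the null result quoted above.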
### 7.2 Spectral variability
We explored the amplitude of the variability as a function of the wavelength
by comparing the maximum and the minimum flux spectra among the 13 spectra
obtained. In Figure 4, left, we show the brightest and faintest spectrum,
indicating the molecular and atomic absorption features for 2M2208+2921.
Figure 4: Left: We show the spectrum corresponding to the maximum flux
obtained in the 2M2208+2921 light curve in blue, and the spectrum of the
minimum spectrum in orange. Right: Ratio between the maximum and minimum flux
spectrum of 2M2208+2921 (blue). The uncertainties of the ratio are in light
blue. The fitted slope to the ratio is shown in red.
In Figure 4, right we show the ratio between the maximum and the minimum flux
spectra, i.e. the relative amplitude across the spectral wavelength range,
with its uncertainties, and indicating as well the molecular and atomic
absorption features for our target. We fit a line to the ratio of the maximum
and minimum flux spectra using the numpy.polyfit Python library, obtaining a
negative slope to the ratio ($ratio=1.2589\pm 0.0666-[1.8714\pm 0.5225]\times
10^{-5}\lambda$, see Fig. 4, right), suggesting that the variability amplitude
decreases monotonically from 12200 to 13200 $\AA$, as has been found for
other L-dwarfs like WISE0047+6803 ($ratio=1.19\pm 0.01-[0.7\pm 0.1]\times
10^{-5}\lambda$) or LP261-75B ($ratio=1.05\pm 0.01-[0.27\pm 0.05]\times
10^{-5}\lambda$). As proposed for WISE0047+6803 in Lew et al. (2016), the
variability amplitude, and wavelength dependence for 2M2208+2921 could be
explained by the existence of hazes and dust particles in the atmosphere of
the object. Hiranaka et al. (2016) proposed the existence of sub-micron sized
particles in the atmospheres of L0-L6 brown dwarfs. For L3 dwarfs, Hiranaka et
al. (2016) find an effective radius of $\sim$0.27 $\mu$m, slightly smaller
particles than for the WISE0047+6803 atmosphere (0.3-0.4 $\mu$m, Lew et al. 2016).
For the same number of particles, smaller particles imply smaller variability
amplitude, and a stronger wavelength dependence of the variability (Hiranaka
et al., 2016), which is what we find for 2M2208+2921 when compared to
WISE0047+6803. WISE0047+6803 has a variability amplitude of $\sim$8%, higher
than the 3.22$\pm$0.42% for 2M2208+2921, and a weaker wavelength dependence
than 2M2208+2921.
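The weighted linear fit to the max/min ratio can be reproduced with numpy.polyfit, as named in the text. The error propagation for the ratio (quadrature sum of relative uncertainties) and the function name are our assumptions:

```python
import numpy as np

def amplitude_slope(wavelength, f_max, f_min, sig_max, sig_min):
    """Fit ratio = a + b * lambda to the max/min spectrum ratio.
    Returns (slope b, intercept a)."""
    ratio = f_max / f_min
    # Propagate the flux uncertainties into the ratio (quadrature sum).
    sig_ratio = ratio * np.sqrt((sig_max / f_max) ** 2 +
                                (sig_min / f_min) ** 2)
    # numpy.polyfit weights are 1/sigma, applied to the y values.
    coeffs = np.polyfit(wavelength, ratio, 1, w=1.0 / sig_ratio)
    return coeffs[0], coeffs[1]
```

A negative slope, as found here, corresponds to a variability amplitude that decreases toward longer wavelengths.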
### 7.3 Potential enhanced variability in the alkali lines
In Figure 4 we found potentially prominent peaks at the wavelengths where the
K I, and Na I alkali lines are located, suggesting a potential enhanced
variability amplitude around those wavelengths. In the following, we
investigate in depth the potential enhanced variability amplitude on those
wavelengths. For this purpose, in the following sections we measure the
amplitude of variability inside the K I doublet and the Na I alkali lines, and
the variability amplitude of the blue and the red continuum of those lines.
Finally, we compare those variability amplitudes with each other and with the
overall $J$-band variability, and assess whether they are significantly
different.
#### 7.3.1 Variability of flux inside the K I and Na I lines
We investigated the variability of the flux inside the K I doublet and the Na
I line themselves, creating light curves using the flux inside these lines. We
used the range 12400–12463 $\AA$ for the K I line at 12430 $\AA$, the range
12495–12540 $\AA$ for the K I line at 12525 $\AA$, and the range
12675–12683 $\AA$ for the Na I line at 12682 $\AA$. To correct the
light curves for the K I doublet and Na I lines from potential non-
astrophysical contamination, we follow the same approach to correct the light
curve as for the $J$-band light curve (see Section 5), but using only the
wavelength ranges of the calibration stars spectra corresponding to the K I
doublet and Na I wavelengths specified above (see correction light curves in
Appendix, Fig. 22, 23, 24, 25, and 26). This correction accounts for potential
telluric contamination at those specific wavelengths. This correction is
particularly important for the Na I continuum and line, since there is a
$\mathrm{O_{2}}$ telluric absorption between 12600 $\AA$ and 12750 $\AA$
(Vernet et al., 2008; Sameshima et al., 2018). In spite of our efforts to
correct for telluric contamination at those wavelengths, we acknowledge that
some contamination might remain uncorrected.
In Figure 5, right panel, we show the corrected light curves corresponding to
the alkali lines. We find that the variability of the flux for the K I lines
at 12430 $\AA$ and 12525 $\AA$ (4.60$\pm$0.54% and 4.48$\pm$0.54%,
respectively) is 2-3$\sigma$ higher than the variability found for the
$J$-band of 2M2208+2921. Finally, the variability for the Na I line,
10.93$\pm$3.17%, is about 2$\sigma$ higher than for the overall $J$-band, and
also than the variability of the continuum.
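The line light curves described above amount to summing the flux inside each wavelength window at every epoch and normalizing by the median. A minimal sketch follows; the window dictionary repeats the ranges quoted in the text, while the array names and structure are our assumptions:

```python
import numpy as np

# Wavelength windows from the text, in Angstroms.
LINE_WINDOWS = {
    "K I 12430": (12400.0, 12463.0),
    "K I 12525": (12495.0, 12540.0),
    "Na I 12682": (12675.0, 12683.0),
}

def line_light_curve(wavelength, spectra, window):
    """spectra: array of shape (n_epochs, n_pixels).  Returns the
    median-normalized light curve of the flux summed inside `window`."""
    lo, hi = window
    mask = (wavelength >= lo) & (wavelength <= hi)
    flux = spectra[:, mask].sum(axis=1)
    return flux / np.median(flux)
```

The same function applied to the calibration-star spectra over the same windows yields the correction curves used to remove telluric contamination at those wavelengths.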
#### 7.3.2 Variability of alkali lines continuum flux
We measured the variability of the continuum on the blue and the red side of
each line, extending 40 $\AA$ on each side. The wavelength range used as
continuum for the K I line at 12430 $\AA$ is 12360–12400 $\AA$ on the blue
side and 12463–12503 $\AA$ on the red side. For the K I line at 12525 $\AA$, we have
used as blue side continuum the wavelength range between 12455–12495 $\AA$,
and as red side continuum 12540–12580 $\AA$. Finally, for the Na I line at
12682 $\AA$, we have used as blue end continuum the wavelength between
12635–12675 $\AA$, and as red end continuum the range between 12720–12760
$\AA$. We corrected the K I doublet and Na I continuum light curves as
explained in Section 5 (see correction light curves in Appendix, Fig. 22, 23,
24, 25, and 26).
In Figure 5, left and middle panel, we show the normalized continuum flux
variability. As we observe in Fig. 5, the variability amplitude of the
continuum of the alkali lines, and the variability inside the alkali lines
themselves is similar within the uncertainties for the K I lines. For the Na I
line the variability of the line is 1–2$\sigma$ higher than the variability of
the continuum. In any case, the variability amplitude found for the continuum
of the K I doublet and Na I alkali lines is slightly higher (1-2 $\sigma$)
than the overall variability found in the $J$-band for 2M2208+2921
(3.22$\pm$0.42%).
Figure 5: Variability of the K I doublet, and the Na I lines and their blue
and red continuum. The continuum width used is 40 $\AA$ in both ends.
#### 7.3.3 Comparison to low resolution spectro-photometric data
Although some spectro-photometric data for other brown dwarfs of similar
spectral types to 2M2208+2921 have already been collected using the Hubble
Space Telescope (HST), and its Wide Field Camera 3 (WFC3) with the G141 grism
(R$\sim$100) (e.g. 2MASS J17502484-0016151, an L4.5 brown dwarf from Buenzli et
al. 2014, and 2MASS J18212815+1414010, an L5.0 brown dwarf from Yang et al. 2015), no
enhanced variability amplitude has been found for the alkali lines in the
$J$-band for those objects. Thus, we investigate if the enhanced variability
inside those lines is washed out when the spectral resolution of the
MOSFIRE/Keck I spectra is degraded to the resolution of the HST/WFC3 + G141
grism spectra. For this purpose, we degraded the MOSFIRE/Keck I spectra
resolution (R$\sim$1000) to the resolution of HST/WFC3 + G141 (R$\sim$100)
using a Gaussian convolution. We reproduced Figures 4 and 5 using the
R$\sim$100 resolution spectra, after correcting the light curves following the
same procedure as in Section 5 (see correction light curves in Appendix,
Fig. 27, 28, 29, 30, and 31), and we compare the variability amplitude found
for the continuum and inside the K I doublet and Na I alkali line.
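Degrading the spectra from R$\sim$1000 to R$\sim$100 with a Gaussian convolution can be sketched as follows. This assumes a uniform wavelength grid and a single kernel width evaluated at the central wavelength (a wavelength-dependent kernel would be more rigorous), with Gaussian widths added in quadrature:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def degrade_resolution(wavelength, flux, r_in=1000.0, r_out=100.0):
    """Convolve a spectrum with the Gaussian kernel whose FWHM takes
    it from resolution r_in down to r_out."""
    lam0 = np.mean(wavelength)
    dlam = wavelength[1] - wavelength[0]          # uniform grid assumed
    # Kernel FWHM so that input and kernel widths add to the output width.
    fwhm = lam0 * np.sqrt(1.0 / r_out ** 2 - 1.0 / r_in ** 2)
    sigma_pix = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / dlam
    return gaussian_filter1d(flux, sigma_pix)
```

At 12700 $\AA$ this kernel has a FWHM of roughly 126 $\AA$, which smears out narrow features such as the alkali lines over the surrounding continuum.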
In Figure 6, similar to Figure 4, we show the comparison between, and the
ratio of, the maximum and minimum flux spectra in the 2M2208+2921 light curve.
As in Fig. 4, we mark the atomic and molecular features of the $J$-band
spectrum. We show the minimum spectrum in orange (corresponding to the second
point in the $J$-band light curve in Fig. 2) and the maximum spectrum in blue
(corresponding to the 8th point in the $J$-band light curve in Fig. 2). In
Fig. 6, right, we observe that the maximum and minimum spectra overlap within
the uncertainties, and in Fig. 6, left, we observe a wavelength-dependent
slope, as in Fig. 4, but no remarkable peaks indicating potential enhanced
variability amplitude at particular wavelengths. Nevertheless, the overall
maximum and/or minimum in the spectral lines does not necessarily coincide
with the maximum and/or minimum of the $J$-band light curve. The values of the
linear fit to the ratio between the maximum and minimum spectra are consistent
with those in Fig. 4 (right).
Figure 6: Same as Fig. 4 for R$\sim$100 spectra, similar to the HST/WFC3 +
G141 grism. Left: the spectrum corresponding to the maximum flux in the
2M2208+2921 light curve in blue, and the minimum-flux spectrum in orange, for
R$\sim$100. Right: ratio between the maximum and minimum flux spectra of
2M2208+2921 for R$\sim$100.
In Figure 7, similar to Fig. 5, we show the variability inside the K I doublet
lines and the Na I alkali line, measured as for the original-resolution
MOSFIRE/Keck I spectra but on the MOSFIRE/Keck I spectra degraded to a
resolution similar to that of the HST/WFC3 + G141 spectra. For the K I doublet
lines, the variability amplitude inside the lines is similar, within the
uncertainties, for the original-resolution and the degraded spectra. For the
K I line at 12430 $\AA$, the variability amplitude inside the line is
3.95$\pm$0.54% at the original resolution and 3.90$\pm$0.53% at R$\sim$100.
For the K I line at 12525 $\AA$, the variability amplitude is 4.80$\pm$0.54%
at the original resolution and 4.27$\pm$0.53% at R$\sim$100.
Finally, for the Na I line at 12682 $\AA$, the variability amplitude differs
depending on whether it is measured on the original-resolution spectra or on
the degraded-resolution spectra. At the original resolution, the variability
of the Na I line is 10.93$\pm$3.17%, while measured on the R$\sim$100 spectra
it is 4.63$\pm$2.38%, consistent with the variability amplitude measured for
the overall $J$-band. Therefore, this result suggests that the enhanced
variability inside the Na I line is partially washed out when the resolution
of the spectra is low and the individual alkali lines cannot be resolved, as
is the case for the HST/WFC3 + G141 grism spectra. This would explain why
enhanced variability in the Na I line has not been found in HST/WFC3 + G141
grism spectra of brown dwarfs of similar spectral type.
In Figure 7 we also show the variability of the continuum measured 40 $\AA$
around the alkali lines, as done previously, but for the MOSFIRE/Keck I
spectra smoothed to R$\sim$100. In Fig. 7 we observe that, for both the blue
and red sides of the continuum of the K I doublet and the Na I line, the
variability amplitudes are consistent, within the uncertainties, with those
found for the continuum at the original resolution of the MOSFIRE/Keck I
spectra. Thus, degrading the resolution of the spectra does not significantly
influence the measured variability amplitude of the continuum around the K I
doublet and the Na I line.
Figure 7: Same as Fig. 5 for R$\sim$100 spectra, similar to the HST/WFC3 +
G141 grism spectra. Variability of the K I doublet and the Na I line, and
their blue and red continua, for R$\sim$100. The continuum width used is 40
$\AA$ on both sides.
## 8 Interpretation
### 8.1 Description of radiative-transfer models
The emergent flux at diverse wavelengths of the $J$-band MOSFIRE spectrum
traces different pressure levels of the atmosphere of 2M2208+2921, providing
information about the cloud coverage at different levels of the atmosphere of
the object. Spectro-photometric variability at those wavelengths can be used
to trace the various cloud layers in the atmosphere of the target. We used a
state-of-the-art radiative-transfer model to calculate the flux contribution
of the different modeled pressure levels. We adopted the effective temperature
and surface gravity estimated for 2M2208+2921 by Manjavacas et al. (2014) from
a VLT/ISAAC spectrum covering the $J$, $H$, and $K$ bands. Manjavacas et al.
(2014) used the BT-Settl models (Allard et al., 2001, 2003, 2012a, 2012b) in
two different released versions (2010 and 2013) to estimate effective
temperatures and surface gravities for 2M2208+2921. The adopted atmospheric
parameters were $\mathrm{T_{eff}}$ = 1800$\pm$100 K, and log g = 4.0$\pm$0.5.
Further details on how the spectral fitting was performed can be found in
Manjavacas et al. (2014).
To obtain the contribution functions for 2M2208+2921, we followed a similar
approach to Yang et al. (2016), using standard radiative-convective
equilibrium atmosphere thermal structure models following the approach of
Saumon & Marley (2008). Then, a temperature perturbation was applied
consecutively at different pressure levels of the atmosphere of the object,
each time generating a new temperature profile and a new emergent spectrum.
The ratio between the emission spectrum generated for each perturbation at
each pressure level and the spectrum of the baseline case provides the
sensitivity of each wavelength range to temperature perturbations at different
pressure levels.
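As a toy illustration of this perturbation procedure (not the actual Saumon & Marley radiative-transfer code), the sketch below blends Planck functions from a few layers with illustrative wavelength weights standing in for the real opacity structure, perturbs one layer at a time, and takes the ratio of the perturbed to the baseline spectrum.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(wl, T):
    """Planck spectral radiance B_lambda(T); wavelengths in metres."""
    return (2.0 * H * C**2 / wl**5) / np.expm1(H * C / (wl * KB * T))

wl = np.linspace(1.10e-6, 1.35e-6, 200)          # J band
temps = np.array([1700.0, 1800.0, 1900.0])       # toy layer temperatures
centers = np.array([1.15e-6, 1.22e-6, 1.30e-6])  # where each layer "shows"
g = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / 4e-8) ** 2)
weights = g / g.sum(axis=0)                      # normalized per wavelength

def emergent(layer_temps):
    """Emergent spectrum: wavelength-weighted blend of the layer spectra."""
    return (weights * planck(wl, np.asarray(layer_temps)[:, None])).sum(axis=0)

base = emergent(temps)
sensitivity = []
for i in range(len(temps)):
    perturbed = temps.copy()
    perturbed[i] += 50.0                          # bump one layer by 50 K
    sensitivity.append(emergent(perturbed) / base)  # per-wavelength response
```

Each `sensitivity[i]` curve exceeds unity most strongly at the wavelengths that see layer `i`, which is exactly how the contribution functions identify the pressure range each spectral feature probes.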
As in Yang et al. (2016), this procedure was repeated at different pressure
levels between 1.8 $\times\mathrm{10^{-4}}$ bar and $\sim$23 bar, obtaining
the flux contributions for the wavelengths covered by the MOSFIRE $J$-band,
after applying the MOSFIRE $J$-band bandpass, and also for the K I and Na I
alkali lines, which trace slightly different, narrower pressure ranges. As in
Yang et al. (2016), the results strictly apply only to variations in
atmospheric temperature, but they reflect the atmospheric region to which the
spectra at a given wavelength are most sensitive.
### 8.2 Cloud Layers probed by alkali lines and $J$-band flux
In Figure 8, we show the result of the radiative transfer model for the
different atmospheric pressure levels traced by the MOSFIRE $J$-band spectrum,
and the K I and the Na I alkali lines. We also include an uncertainty for the
pressures probed by assigning an error-bar equal to the average pressure
difference probed between the core and edge of the wings of the lines for the
K I and the Na I alkali lines. For the $J$-band we use half the average
pressure range probed in the band. We overplot the predicted condensate mixing
ratio (mole fraction) for three different types of silicate clouds:
$\mathrm{Mg_{2}SiO_{4}}$, $\mathrm{MgSiO_{3}}$, and $\mathrm{Al_{2}O_{3}}$.
The pressure level where the condensate mixing ratio reaches a maximum
indicates the bottom of that silicate cloud. Above that pressure level, the
condensate mixing ratio decreases as the pressure decreases. The bottom of the
$\mathrm{Mg_{2}SiO_{4}}$ cloud is around 1.0 bar; for the
$\mathrm{MgSiO_{3}}$ cloud it is around 0.58 bar, and for the
$\mathrm{Al_{2}O_{3}}$ cloud around 1.7 bar.
As observed in Figure 8, the radiative-transfer models predict that the K I
lines trace the pressure levels around 0.55 bar and above, the Na I line
traces the pressure levels around 0.9 bar and above, and the $J$-band traces
the pressure levels around 1.5 bar and above. Thus, with the integrated
$J$-band light curve, we are observing the blended cloud maps of the three
silicate cloud layers ($\mathrm{Mg_{2}SiO_{4}}$, $\mathrm{MgSiO_{3}}$, and
$\mathrm{Al_{2}O_{3}}$). With the integrated flux over the Na I line, we are
sensitive to the top two cloud layers ($\mathrm{Mg_{2}SiO_{4}}$ and
$\mathrm{MgSiO_{3}}$). Finally, with the integrated flux over the K I doublet,
we are tracing the uppermost layer ($\mathrm{MgSiO_{3}}$) of the atmosphere of
2M2208+2921.
### 8.3 Modeling the amplitudes and wavelength-dependence of spectral
variability
The smaller variability amplitude measured in our MOSFIRE spectra for the
$J$-band, in comparison to the alkali lines, can be due to a more homogeneous
cloud deck in the lower $\mathrm{Al_{2}O_{3}}$ cloud, which would reduce the
observed variability. The larger number of cloud layers probed, which together
produce a more “homogeneous” cloud coverage, can also affect the observed
amplitude of the $J$-band. To test the assumption that the different number of
cloud layers probed could affect the observed variability in the $J$-band
versus the alkali lines, we modeled the $J$-band, Na I, and K I light curves
produced from cloud maps at these three different pressure layers. To produce
the light curves we
used pixelated maps (similar to Karalidi et al., 2015) and compared their
disk-integrated light curve shapes and variability amplitudes. Fig. 9 shows
the light curves produced at the top of the atmosphere by blending three
random, independent maps for three clouds layers of our model atmosphere. We
randomly assigned two to four spots in each cloud layer and placed them in
different, random locations on the map. To calculate the contrast ratio of the
cloud features to the background atmospheric layer, we used information from
the temperature–pressure profile of our model atmosphere ($\mathrm{T_{eff}}$ =
1800 K and $\log g$ = 4.0). We then calculated the average light curve we
would observe at the top of the atmosphere by blending the individual light
curves using the contribution function information as a weight for each one.
The relative shape of all light curves appears the same, in agreement with our
MOSFIRE K I, Na I and $J$-band light curves. The light curve that would
correspond to the $J$-band observation has the smallest peak-to-trough
amplitude as the chances of a peak of one layer’s light curve coinciding with
a trough of another (i.e., a cloud clearing of one coinciding with a cloud-
decked area of another layer) are larger. This prediction actually agrees with
the spectro-photometric variability amplitudes detected in the MOSFIRE data,
as described previously in Section 7.
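A minimal sketch of this blending step is given below, with hypothetical layer light curves and contribution weights; the real calculation uses the pixelated (Aeolus-style) maps and the 2M2208+2921 contribution functions, not these toy values.

```python
import numpy as np

# Toy model: three cloud layers, each with its own light curve, combined
# with contribution-function weights (all numbers are illustrative).
t = np.linspace(0.0, 2.5, 13)                    # hours
rng = np.random.default_rng(1)
phases = rng.uniform(0.0, 2.0 * np.pi, size=3)   # random spot phasing
layer_lcs = np.array([1.0 + 0.03 * np.sin(2.0 * np.pi * t / 3.5 + p)
                      for p in phases])

def blended(weights):
    """Weighted average of the layer light curves seen by one band."""
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * layer_lcs).sum(axis=0) / w.sum()

def amplitude(lc):
    """Peak-to-trough amplitude of a normalized light curve."""
    return lc.max() - lc.min()

lc_j = blended([0.40, 0.35, 0.25])   # J band: sees all three layers
lc_ki = blended([1.00, 0.02, 0.0])   # K I: dominated by the top layer
```

Because the blended curve is a weighted average, its peak-to-trough amplitude can never exceed the largest single-layer amplitude, which reproduces the prediction that the multi-layer $J$-band light curve is the flattest.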
In Fig. 10 we show an illustrative representation of the vertical structure of
the atmosphere of 2M2208+2921, using the outcome of the radiative-transfer
models, which indicates at which pressure levels the different silicate clouds
condense. In addition, we include the pressure levels that our light curves
for the K I doublet, the Na I line, and the entire $J$-band trace.
Figure 8: Condensate mixing ratio (mole fraction) for different silicate
clouds ($\mathrm{Mg_{2}SiO_{4}}$, blue dotted line, $\mathrm{MgSiO_{3}}$, red
dotted line, and $\mathrm{Al_{2}O_{3}}$, green dotted line) versus vertical
pressure in the atmosphere of a model comparable to 2M2208+2921. The grey band
indicates the pressure levels that the $J$-band traces, the blue band
indicates the pressure levels traced by the Na I line, and the red band
indicates the pressure levels traced by the K I doublet.
Figure 9: Simulated light curves “observed” at three different pressure layers
(i.e., different wavelength bands) for a toy-model atmosphere with three cloud
layers. We assumed random maps for each cloud layer and used information from
the contribution function of 2M2208+2921 to create the “observed” light curve
at the top of the atmosphere for each band.
Figure 10: Vertical cloud structure of the atmosphere of 2M2208+2921, with the
different heterogeneous cloud layers found at different vertical pressures. We
include the pressures that the $J$-band, the K I doublet, and the Na I line
trace. The arrows indicate the maximum pressures of the atmosphere that each
spectral characteristic traces.
Figure 11: Best-fit model to the ratio between the maximum and minimum flux
spectra of 2M2208+2921 (Figure 4). We show the best-fit model ratio (blue
line) and best-fit slope (orange line).
We modeled the wavelength dependence of the ratio between the maximum and
minimum spectra of 2M2208+2921 at low resolution (similar to Fig. 6, right).
We modeled the low-resolution ratio because the slope is not affected by the
resolution of the spectra, while the radiative-transfer models converge faster
to a best fit. We used a grid of cloudy and truncated-cloud models similar to
Morley et al. (2014) and Lew et al. (2020). We found that the best-fit model
to the ratio of the maximum and minimum 2M2208+2921 spectra is a combination
of $T_{\mathrm{eff}}=1800$ K and $T_{\mathrm{eff}}=1650$ K models with a
coverage fraction $\delta A$ of 0.22. This means that 22% of the atmosphere
has $T_{\mathrm{eff}}=1650$ K and 78% has $T_{\mathrm{eff}}=1800$ K. In Fig.
11 we show the best-fit model to the ratio of the maximum divided by the
minimum spectrum (as in Fig. 4, blue line) and the best fit to the slope (as
in Fig. 4, orange line), plotted between 1.10 and 1.32 $\mu$m for clarity. The
linear fit of the best-fit model is $1.2483-1.366\times10^{-5}\lambda$, which
agrees within the error bars with the slope of our MOSFIRE observations in
Fig. 4, right panel.
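Schematically, the two-component mixture and the linear fit to the max/min ratio can be written as below. The two mock spectra and the choice of coverage states are placeholders for illustration, not the Morley et al. (2014) model grid; only the mixing rule and the fit mirror the procedure in the text.

```python
import numpy as np

# Mock stand-ins for the hot (1800 K) and cooler model spectra: the cooler
# component is fainter overall but relatively brighter toward the red.
wl = np.linspace(1.10, 1.32, 300)            # microns
f_hot = 1.00 - 0.05 * (wl - 1.10)
f_cold = 0.80 * (1.00 + 0.30 * (wl - 1.10))

def mix(delta_a):
    """Disk-integrated spectrum with a fraction delta_a of cool regions."""
    return delta_a * f_cold + (1.0 - delta_a) * f_hot

# Ratio of a fully hot state to one with 22% cool coverage, and its slope
ratio = mix(0.00) / mix(0.22)
slope, intercept = np.polyfit(wl, ratio, 1)
```

With a cooler component that is relatively redder, the fitted slope of the ratio is negative, matching the observed trend of the variability amplitude decreasing toward redder wavelengths.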
To test our approach to retrieving the spectro-photometric variability of
2M2208+2921, we then modeled a heterogeneous atmosphere that produces a light
curve with an amplitude comparable to that of 2M2208+2921. We note that our
aim was to test the validity of our method, not to map the atmosphere of
2M2208+2921, so we did not seek the best-fit phase-resolved combination of
models reproducing the observed MOSFIRE $J$-band light curve, but only a light
curve with a comparable amplitude. Our best-fit model combination consisted of
a $T_{\mathrm{eff}}=1800$ K and a $T_{\mathrm{eff}}=1600$ K model with clouds
with $\mathrm{f_{sed}}=1$ and 3, respectively, and $\delta A$=0.13. Note that
this model combination is slightly different from our best-fit model
combination for the spectral slope mentioned before (1800 K and 1650 K). The
linear fit of this model is $1.0292-1.5879\times10^{-5}\lambda$, which is a
better fit than our best-fit model combination for the spectral slope in Figs.
4 and 6, right panels. We blended the models in 13 time steps to create time-
resolved simulated “observations” forming a sinusoidal-like light curve with a
variability amplitude of $\sim$3%, i.e., comparable to that of our MOSFIRE
observations (see Figure 12). Each of the 13 model spectra was assigned random
Poissonian noise to mimic the corresponding uncertainties. We then used the
same method as for our MOSFIRE observations to obtain the modeled “observed”
variability in the K I doublet and the Na I alkali lines.
Figure 13 shows the variability of the K I doublet and the Na I alkali lines,
and their respective blue and red continua, measured in the modeled spectra
following the same methodology as for our observed MOSFIRE spectra in Sections
7.3.1 and 7.3.2. In Figure 13 we observe that the variability amplitudes of
the alkali lines and their blue and red continua are between 3.6 and 5.1%, in
general inconsistent with the variability amplitude of $\sim$3% in the
simulated $J$-band light curve. The enhanced variability amplitude predicted
by the modeled spectra for the K I doublet is consistent, in amplitude value,
with the enhanced variability amplitude measured in Sections 7.3.1 and 7.3.2
for the observed MOSFIRE spectra at their original resolution. For the Na I
line, we measured a variability amplitude of 10.93$\pm$3.17% in the observed
MOSFIRE spectra. Since such an enhanced variability amplitude is not predicted
by the models, we suspect that there might be uncorrected telluric
contamination remaining in the Na I light curve, even after the correction
performed using the other calibration stars in the field. Nevertheless,
qualitatively, the radiative-transfer models still predict that the
variability amplitude of the Na I line is enhanced.
Finally, as an illustration, we tested the effect of the cloud properties on
the retrieved variability for the K I lines. Fig. 14 shows the retrieved
amplitude of the K I line as a function of $f_{\mathrm{sed}}$ for a
combination of 1800 K and 1650 K clouds, as in our best-fit slope model.
Changes in $f_{\mathrm{sed}}$ correspond to changes in the cloud properties,
and thus should correspond to changes in the retrieved variability. Indeed,
Fig. 14 shows that the average retrieved variability of the model K I line
changes slightly with the reduction of the optical thickness across our model
atmospheres, although the variability amplitude for the three
$f_{\mathrm{sed}}$ values is similar within the error bars.
Note that Zhou et al. (2020) found subdued variability in the alkali lines of
VHS 1256b, but their target was a cooler, L7 atmosphere with a different cloud
structure than our target. Changes in the temperature of the atmosphere
affect the cloud structure and expected variability both in the $J$-band and
Spitzer channels (Vos et al., 2017, 2020) as well as in the alkali lines (see
also Morley et al., 2014, for T and Y atmospheres). Our result thus does not
contradict that of Zhou et al. (2020), but complements it with another
spectral type. Future JWST observations that constrain the changes of alkali
variability versus continuum variability as a function of atmospheric
temperature would be important to map the changes in cloud structures as these
atmospheres cool down.
Our observations highlight the importance of high-resolution spectroscopy for
understanding the atmospheric variability and 3D structures of brown dwarfs
and giant exoplanets, both from the ground, with multi-object spectrographs
like Keck I/MOSFIRE or EMIR at the Gran Telescopio de Canarias (GTC), and from
space-based telescopes like HST/WFC3. In the near future, the James Webb Space
Telescope (JWST) will be launched, and it is expected to produce
ground-breaking discoveries in the field of brown dwarfs and exoplanets.
NIRSpec (Near Infrared Spectrograph) and NIRISS (Near Infrared Imager and
Slitless Spectrograph) on board JWST will provide high signal-to-noise,
high-resolution, broad-wavelength spectroscopic observations that will enable
the detection of variability at multiple pressure layers, allowing us to probe
the vertical structure of brown dwarf and imaged-exoplanet atmospheres with
unprecedented accuracy.
Figure 12: Simulated $J$-band light curve using radiative-transfer models,
with a variability amplitude of nearly 3%, similar to our MOSFIRE $J$-band
light curve. The best-fit model combination to reproduce a light curve with
$\sim$3.5% variability amplitude consisted of a 1800 K and a 1650 K model with
clouds with $\mathrm{f_{sed}}$ = 3.
Figure 13: Variability of the K I and Na I lines and their blue and red
continua as measured on the modeled spectra for $f_{\mathrm{sed}}$ = 3.
Figure 14: Variability of the K I lines and their blue and red continua as
measured on the modeled spectra for $f_{\mathrm{sed}}$ = 1, 2, and 3.
## 9 Conclusions
1. We have used MOSFIRE at the Keck I telescope to monitor 2M2208+2921 over
$\sim$2.5 hr; it is a young L3 brown dwarf, a member of the $\beta$ Pictoris
young moving group, and an analog to the directly-imaged giant exoplanet
$\beta$ Pictoris b.
2. We found significant spectro-photometric variability in the $J$-band using
MOSFIRE spectroscopy, with a minimum variability amplitude of 3.22$\pm$0.42%.
3. The ratio between the maximum and minimum spectra of 2M2208+2921 shows a
slight wavelength dependence, with the variability amplitude decreasing toward
redder wavelengths. It also shows potentially enhanced variability amplitude
in the K I doublet and the Na I alkali lines.
4. A more detailed analysis of the variability amplitude of the continuum and
of the flux inside the K I and Na I lines further supports the enhanced
variability amplitude inside those lines. The enhanced variability partially
disappears if we degrade the resolution of the spectra to R$\sim$100,
especially for the Na I line, coinciding with the spectral resolution of the
HST/WFC3 + G141 grism, which explains why enhanced variability amplitude has
not been found in previous works using low-resolution data for brown dwarfs of
similar spectral type.
5. We used radiative-transfer models to predict the different heterogeneous
cloud layers that might be introducing the detected spectro-photometric
variability, and their composition.
6. Using radiative-transfer models, we produced simulated $J$-band spectra for
an object with the same $\mathrm{T_{eff}}$ and $\log g$ as 2M2208+2921, and
with the same $J$-band variability amplitude and rotational period. We
measured the variability amplitude of the K I doublet and the Na I alkali
lines and their respective continua, finding enhanced variability for the
alkali lines, in agreement with our observations.
7. Using the Aeolus code to produce brown dwarf maps, we are able to reproduce
the finding that the $J$-band light curve has a smaller variability amplitude
than the K I or Na I line light curves, in agreement with our observations.
8. We produced an artistic representation of the vertical structure of
2M2208+2921, showing the different cloud layers and their compositions as
proposed by the radiative-transfer models, and the different pressure levels
that each spectral characteristic (the $J$-band, the K I lines, and the Na I
line) traces in the atmosphere of 2M2208+2921, an analog to the $\beta$
Pictoris b exoplanet.
We thank our anonymous referee for the constructive comments provided for our
manuscript, that helped to improve it. The authors wish to recognize and
acknowledge the very significant cultural role and reverence that the summit
of Mauna Kea has always had within the indigenous Hawaiian community. We are
most fortunate to have the opportunity to conduct observations from this
mountain. We would like to acknowledge the PypeIt Development team for
developing a pipeline that was able to reduce our challenging MOSFIRE data
with extremely wide slits, in particular to Dr. Joe Hennawi for his efficient
support. We acknowledge the MOSFIRE/Keck I Instrument Scientist, Dr. Josh
Walawender, for his advice and recommendations on the preparation of the
observations and the reduction of the data. Thanks to his idea of taking
“skylines spectra” of our mask with narrower slits, we were able to
wavelength-calibrate the spectra presented in this paper. We acknowledge the
W. M. Keck Observatory Chief Scientist, Dr. John O’Meara, for investing some
of his granted time in taking the “skylines spectra” of our masks, which made
the wavelength calibration possible. We acknowledge Dr. Daniel Apai and his
group
for their comments and suggestions on the analysis and interpretation of these
data.
## Appendix A Correlation between parameters
Figure 15: Light curves obtained using only the first half of the wavelength
range and using only the second half of the wavelength range.
Figure 16: Correlation between the target’s non-corrected light curve, and the
non-corrected calibration stars light curves.
Figure 17: Correlation between the target’s corrected light curve, and the
corrected calibration stars light curves.
Figure 18: Left: Evolution of the FWHM with time. Right: Correlation between
FWHM and 2M2208 light curve.
Figure 19: Left: Correlation between the target’s light curve and relative
external humidity (RH). Right: Correlation between the target’s light curve
and the external temperature. Bottom: Correlation between the target’s light
curve and the airmass.
## Appendix B $J$-band Light curves of the calibration stars before and after
correction
Figure 20: Normalized non-corrected light curves of the calibration stars on
the field of 2M2208+2921.
Figure 21: Normalized corrected light curves of the calibration stars on the
field of 2M2208+2921.
## Appendix C Light curves of the calibration stars at the wavelength of the
K I doublet and the Na I alkali lines
Figure 22: Variability inside the wavelength range of the blue and red
continuum, and inside the alkali lines wavelength range for the calibration
star 1 for spectra with the original $J$-band MOSFIRE resolution.
Figure 23: Variability inside the wavelength range of the blue and red
continuum, and inside the alkali lines wavelength range for the calibration
star 4 for spectra with the original $J$-band MOSFIRE resolution.
Figure 24: Variability inside the wavelength range of the blue and red
continuum, and inside the alkali lines wavelength range for the calibration
star 5 for spectra with the original $J$-band MOSFIRE resolution.
Figure 25: Variability inside the wavelength range of the blue and red
continuum, and inside the alkali lines wavelength range for the calibration
star 6 for spectra with the original $J$-band MOSFIRE resolution.
Figure 26: Variability inside the wavelength range of the blue and red
continuum, and inside the alkali lines wavelength range for the calibration
star 8 for spectra with the original $J$-band MOSFIRE resolution.
Figure 27: Variability inside the wavelength range of the blue and red
continuum, and inside the alkali lines wavelength range for the calibration
star 1 for spectra with R$\sim$100.
Figure 28: Variability inside the wavelength range of the blue and red
continuum, and inside the alkali lines wavelength range for the calibration
star 4 for spectra with R$\sim$100.
Figure 29: Variability inside the wavelength range of the blue and red
continuum, and inside the alkali lines wavelength range for the calibration
star 5 for spectra with R$\sim$100.
Figure 30: Variability inside the wavelength range of the blue and red
continuum, and inside the alkali lines wavelength range for the calibration
star 6 for spectra with R$\sim$100.
Figure 31: Variability inside the wavelength range of the blue and red
continuum, and inside the alkali lines wavelength range for the calibration
star 8 for spectra with R$\sim$100.
## References
* Allard et al. (2001) Allard, F., Hauschildt, P. H., Alexander, D. R., Tamanai, A., & Schweitzer, A. 2001, ApJ, 556, 357, doi: 10.1086/321547
* Allard et al. (2012a) Allard, F., Homeier, D., & Freytag, B. 2012a, Royal Society of London Philosophical Transactions Series A, 370, 2765, doi: 10.1098/rsta.2011.0269
* Allard et al. (2012b) Allard, F., Homeier, D., Freytag, B., & Sharp, C. M. 2012b, in EAS Publications Series, Vol. 57, EAS Publications Series, ed. C. Reylé, C. Charbonnel, & M. Schultheis, 3–43, doi: 10.1051/eas/1257001
* Allard et al. (2003) Allard, N. F., Allard, F., Hauschildt, P. H., Kielkopf, J. F., & Machin, L. 2003, A&A, 411, L473, doi: 10.1051/0004-6361:20031299
* Allers & Liu (2013) Allers, K. N., & Liu, M. C. 2013, ApJ, 772, 79, doi: 10.1088/0004-637X/772/2/79
* Apai et al. (2016) Apai, D., Kasper, M., Skemer, A., et al. 2016, ApJ, 820, 40, doi: 10.3847/0004-637X/820/1/40
* Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33, doi: 10.1051/0004-6361/201322068
* Biller et al. (2018) Biller, B. A., Vos, J., Buenzli, E., et al. 2018, AJ, 155, 95, doi: 10.3847/1538-3881/aaa5a6
* Biller et al. (2021) Biller, B. A., Apai, D., Bonnefoy, M., et al. 2021, MNRAS, doi: 10.1093/mnras/stab202
* Bonnefoy et al. (2013) Bonnefoy, M., Boccaletti, A., Lagrange, A.-M., et al. 2013, A&A, 555, A107, doi: 10.1051/0004-6361/201220838
* Bouy et al. (2003) Bouy, H., Brandner, W., Martín, E. L., et al. 2003, AJ, 126, 1526, doi: 10.1086/377343
* Bowler et al. (2020) Bowler, B. P., Zhou, Y., Morley, C. V., et al. 2020, ApJ, 893, L30, doi: 10.3847/2041-8213/ab8197
* Buenzli et al. (2014) Buenzli, E., Apai, D., Radigan, J., Reid, I. N., & Flateau, D. 2014, ApJ, 782, 77, doi: 10.1088/0004-637X/782/2/77
* Burgasser et al. (2003) Burgasser, A. J., Kirkpatrick, J. D., Reid, I. N., et al. 2003, ApJ, 586, 512, doi: 10.1086/346263
* Chauvin et al. (2004) Chauvin, G., Lagrange, A. M., Dumas, C., et al. 2004, A&A, 425, L29, doi: 10.1051/0004-6361:200400056
* Cruz et al. (2009) Cruz, K. L., Kirkpatrick, J. D., & Burgasser, A. J. 2009, AJ, 137, 3345, doi: 10.1088/0004-6256/137/2/3345
* Dupuy et al. (2019) Dupuy, T. J., Brandt, T. D., Kratter, K. M., & Bowler, B. P. 2019, ApJ, 871, L4, doi: 10.3847/2041-8213/aafb31
* Dupuy et al. (2018) Dupuy, T. J., Liu, M. C., Allers, K. N., et al. 2018, AJ, 156, 57, doi: 10.3847/1538-3881/aacbc2
* Faherty et al. (2013) Faherty, J. K., Rice, E. L., Cruz, K. L., Mamajek, E. E., & Núñez, A. 2013, AJ, 145, 2, doi: 10.1088/0004-6256/145/1/2
* Gagné et al. (2014) Gagné, J., Lafrenière, D., Doyon, R., Malo, L., & Artigau, É. 2014, ApJ, 783, 121, doi: 10.1088/0004-637X/783/2/121
* Gizis et al. (2012) Gizis, J. E., Faherty, J. K., Liu, M. C., et al. 2012, AJ, 144, 94, doi: 10.1088/0004-6256/144/4/94
* Hiranaka et al. (2016) Hiranaka, K., Cruz, K. L., Douglas, S. T., Marley, M. S., & Baldassare, V. F. 2016, ApJ, 830, 96, doi: 10.3847/0004-637X/830/2/96
* Horne & Baliunas (1986) Horne, J. H., & Baliunas, S. L. 1986, ApJ, 302, 757, doi: 10.1086/164037
* Karalidi et al. (2015) Karalidi, T., Apai, D., Schneider, G., Hanson, J. R., & Pasachoff, J. M. 2015, ApJ, 814, 65, doi: 10.1088/0004-637X/814/1/65
* Kellogg et al. (2017) Kellogg, K., Metchev, S., Heinze, A., Gagné, J., & Kurtev, R. 2017, ApJ, 849, 72, doi: 10.3847/1538-4357/aa8e4f
* Kelson (2003) Kelson, D. D. 2003, PASP, 115, 688, doi: 10.1086/375502
* Kendall et al. (2004) Kendall, T. R., Delfosse, X., Martín, E. L., & Forveille, T. 2004, A&A, 416, L17, doi: 10.1051/0004-6361:20040046
* Kirkpatrick et al. (2000) Kirkpatrick, J. D., Reid, I. N., Liebert, J., et al. 2000, AJ, 120, 447, doi: 10.1086/301427
* Kirkpatrick et al. (2008) Kirkpatrick, J. D., Cruz, K. L., Barman, T. S., et al. 2008, ApJ, 689, 1295, doi: 10.1086/592768
* Komacek & Showman (2020) Komacek, T. D., & Showman, A. P. 2020, ApJ, 888, 2, doi: 10.3847/1538-4357/ab5b0b
# Training Energy-Efficient Deep Spiking Neural Networks with Single-Spike
Hybrid Input Encoding
Gourav Datta, Souvik Kundu, Peter A. Beerel
Ming Hsieh Department of Electrical and Computer Engineering
University of Southern California
Los Angeles, California 90089, USA
{gdatta, souvikku, pabeerel}@usc.edu
###### Abstract
Spiking Neural Networks (SNNs) have emerged as an attractive alternative to
traditional deep learning frameworks, since they provide higher computational
efficiency in event-driven neuromorphic hardware. However, state-of-the-art
(SOTA) SNNs suffer from high inference latency, resulting from inefficient
input encoding and training techniques. The most widely used input coding
schemes, such as Poisson-based rate coding, do not leverage the temporal
learning capabilities of SNNs. This paper presents a training framework for
low-latency energy-efficient SNNs that uses a hybrid encoding scheme at the
input layer in which the analog pixel values of an image are directly applied
during the first timestep and a novel variant of spike temporal coding is used
during subsequent timesteps. In particular, neurons in every hidden layer are
restricted to fire at most once per image which increases activation sparsity.
To train these hybrid-encoded SNNs, we propose a variant of the gradient
descent based spike timing dependent backpropagation (STDB) mechanism using a
novel cross entropy loss function based on both the output neurons’ spike time
and membrane potential. The resulting SNNs have reduced latency and high
activation sparsity, yielding significant improvements in computational
efficiency. In particular, we evaluate our proposed training scheme on image
classification tasks from CIFAR-10 and CIFAR-100 datasets on several VGG
architectures. We achieve top-1 accuracy of $66.46$% with $5$ timesteps on the
CIFAR-100 dataset with ${\sim}125\times$ less compute energy than an
equivalent standard ANN. Additionally, our proposed SNN performs
$5$-$300\times$ faster inference compared to other state-of-the-art rate or
temporally coded SNN models.
###### Index Terms:
SNN, STDB, Input encoding, Energy-efficient SNNs
## I Introduction
Artificial Neural Networks (ANNs) have contributed to a number of impressive
success stories in Artificial General Intelligence (AGI) [1, 2, 3, 4, 5].
However, their superior performance has come at the cost of high computational
and memory requirements [6, 7]. While convolutional neural networks (CNNs) on
general purpose high-performance compute platforms such as GPUs are now
ubiquitous [8], there has been increasing interest in domain-specific hardware
accelerators [9] and alternate types of neural networks. In particular,
Spiking Neural Network (SNN) accelerators have emerged as a potential low
power alternative for AGI [10, 11, 12, 13]. SNNs attempt to emulate the
remarkable energy-efficiency of the brain with event-driven neuromorphic
hardware. Neurons in an SNN exchange information via discrete binary events,
resulting in a significant paradigm shift from traditional CNNs.
Because SNNs receive and transmit information through spikes, analog values
must be encoded into a sequence of spikes. There has been a plethora of
encoding methods proposed, including rate coding [14, 15], temporal coding
[16, 17, 18, 19], rank-order coding [20], phase coding [21], [22] and other
exotic coding schemes [23]. Among these, rate-coding has shown competitive
performance on complex tasks [14, 15] while others are either generally
limited to simple tasks such as learning the XOR function and classifying
digits from the MNIST dataset or require a large number of spikes for
inference. In rate coding, the analog value is converted to a spike train
using a Poisson generator function with a rate proportional to the input pixel
value. The number of timesteps in each train is inversely proportional to the
quantization error in the representation, as illustrated in Fig. 1(b). Low
error requirements force a large number of timesteps at the expense of high
inference latency and low activation sparsity [15]. Temporal coding, on the
other hand, has higher sparsity and can more explicitly represent correlations
in inputs. However, temporal coding is challenging to scale [20] to vision
tasks and often requires kernel-based spike response models [17] which are
computationally expensive compared to the traditional leaky-integrate-and-fire
(LIF) or integrate-and-fire (IF) models. Recently, the authors in [24]
proposed direct input encoding, where they feed the analog pixel values
directly into the first convolutional layer, which treats them as input
currents to LIF neurons. Another recently proposed temporal encoding scheme
uses the discrete cosine transform (DCT) to distribute the spatial pixel
information over time for learning low-latency SNNs [25]. However, up to now,
there has been no attempt to combine both spatial (captured by rate or direct
encoding) and temporal information processed by the SNNs.
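As an illustration, the Poisson rate coding described above can be sketched in a few lines of NumPy; the function name and array shapes are ours, not from the paper. With more timesteps the empirical firing rate approaches the pixel intensity, mirroring Fig. 1(b).

```python
import numpy as np

def poisson_encode(pixels, timesteps, rng=None):
    """Rate-code normalized pixels in [0, 1] as Bernoulli spike trains.

    At each timestep a pixel fires with probability equal to its
    intensity, so the empirical firing rate approximates the pixel value.
    """
    rng = rng or np.random.default_rng(0)
    p = np.clip(pixels, 0.0, 1.0)
    # shape: (timesteps, *pixels.shape), entries in {0, 1}
    return (rng.random((timesteps,) + p.shape) < p).astype(np.float32)

pixels = np.array([0.1, 0.5, 0.9])
for T in (10, 1000):
    spikes = poisson_encode(pixels, T)
    rate = spikes.mean(axis=0)          # reconstructed intensity
    err = np.abs(rate - pixels).max()   # sampling/quantization error
    print(T, err)
```

The trade-off is visible directly: driving the error down requires many timesteps, which is exactly the latency cost the paper's hybrid encoding seeks to avoid.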
In addition to accommodating the various forms of input encoding,
supervised learning algorithms for SNNs have overcome many roadblocks
associated with the discontinuous derivative of the spike activation function
[26, 27, 28]. However, effective SNN training remains a challenge, as seen by
the fact that SNNs still lag behind ANNs in terms of latency and accuracy in
traditional classification tasks [29, 15]. A single feed-forward pass in an
ANN corresponds to multiple forward passes in an SNN, one per timestep over a
fixed number of timesteps. In spike-based backpropagation, the backward pass
requires the gradients to be integrated over every timestep which increases
computation and memory complexity [26, 30]. It requires multiple iterations,
is memory intensive (for backward pass computations), and energy-inefficient,
and thus has been mainly limited to small datasets (e.g. CIFAR-10) on simple
shallow convolutional architectures [30]. Researchers have also observed high
spiking activity and energy consumption in these trained SNN models [31],
which further hinders their deployment in edge applications. Thus, the current
challenges in SNN models are high inference latency and spiking activity, long
training time, and high training costs in terms of memory and computation.
To address these challenges, this paper makes the following contributions:
* •
Hybrid Spatio-Temporal Encoding: We employ a hybrid input encoding technique
where the real-valued image pixels are fed to the SNN during the first
timestep. During the subsequent timesteps, the SNN follows a single-spike
temporal coding scheme, where the arrival time of the input spike is inversely
proportional to the pixel intensity. While the direct encoding in the first
timestep helps the SNN achieve low inference latency, the temporal encoding
increases activation sparsity.
* •
Single Spike LIF Model: To further harness the benefits of temporal coding, we
propose a modified LIF model, where neurons in every hidden layer fire at most
once over all the timesteps. This leads to higher activation sparsity and
compute efficiency.
* •
Novel Loss Function: We also propose a variant of the gradient descent based
spike timing dependent backpropagation mechanism to train SNNs with our
proposed encoding technique. In particular, we employ a hybrid cross entropy
loss function to capture both the accumulated membrane potential and the spike
time of the output neurons.
The remainder of our paper is structured as follows. In Section II we present
the necessary background. Section III describes our proposed input encoding
technique, and Section IV presents our proposed training scheme. We present
our detailed experimental evaluation of the
classification accuracy and latency in Section V. We show the energy
improvement of our proposed framework in Section VI and finally present
conclusions in Section VII.
## II Background
### II-A SNN Fundamentals
An SNN consists of a network of neurons that communicate through a sequence of
spikes modulated by synaptic weights. The spiking dynamics of a neuron are
typically represented using either Integrate-and-Fire (IF) [32] or Leaky-
Integrate-and-Fire (LIF) model [33]. Fig. 1(a) illustrates a basic SNN
architecture with IF neurons processing rate-coded inputs. Both IF and LIF
neurons integrate the input current into their respective states referred to
as membrane potentials. The key difference between the models is that the
membrane potential of an IF neuron does not change during the time period
between successive input spikes while the LIF neuronal membrane potential
leaks with a finite time constant. In this work, we use the LIF model to
convert ANNs trained with ReLU activations to SNNs, because the leaky
behaviour provides improved robustness to noisy spike-inputs and better
generalization compared to those with no leak [34]. Moreover, the leak term
provides a tunable control knob, which can be leveraged to improve inference
accuracy, latency, and spiking activity in SNNs.
To characterize the LIF model, we use the following differential equation
Figure 1: (a) Feedforward fully-connected SNN architecture with Integrate and
Fire (IF) spiking dynamics, (b) The spike input generated over several
timesteps through Poisson generator. It is clear that having more timesteps
yields a better approximation of the input image.
$C\frac{dU_{i}^{t}}{dt}+GU_{i}^{t}=I_{i}^{t}=\sum_{j}W_{ij}\cdot{S_{j}^{t}}$
(1)
where $C$ and $G$ are the membrane capacitance and conductance respectively.
$U_{i}^{t}$ and $I_{i}^{t}$ are the membrane potential and input synaptic
current of the $i^{th}$ neuron at time $t$. Note that $U_{i}^{t}$ integrates
the incoming (pre-neuron) spikes $S_{j}^{t}$ modulated by weights $W_{ij}$ and
leaks with a time constant equal to $\frac{C}{G}$. The post-neuron generates
an output spike when $U_{i}$ exceeds the firing threshold $V$. However,
because of its continuous-time formulation, Eq. 1 is not suitable for
implementation in popular Machine Learning (ML) frameworks (e.g., PyTorch).
Hence, we convert Eq. 1 into an iterative discrete-time version, as shown in
Eq. 2 [30], in which spikes are characterized as binary values (1 represents
the presence of a spike). Note that $\lambda$ represents the leak term which
reduces $U_{i}$ by a factor of $(1-\lambda)$ in every timestep.
$U_{i}^{t}=\lambda U_{i}^{t-1}+\sum_{j}W_{ij}{S_{j}^{t}}-V{O_{i}^{t-1}}$ (2)
The binary output spike at timestep $t$ is given as
$O_{i}^{t}=\begin{cases}1,&\text{if }U_{i}^{t}>V\\\
0,&\text{otherwise}\end{cases}$ (3)
Note that the last term in Eq. 2 represents soft reset that reduces the
membrane potential $U_{i}$ by the threshold $V$ at timestep $t$ in response to
an output spike generated at timestep $(t-1)$. In contrast, hard reset means
resetting $U_{i}$ to $0$ after an output spike is generated. Soft reset
minimises the information loss by allowing the spiking neuron to carry forward
the surplus potential above the firing threshold to the subsequent timestep
[30, 35] and is adopted in this work.
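A minimal NumPy sketch of the discrete-time LIF update of Eq. (2)-(3), including the soft reset; variable names are illustrative, not from the paper.

```python
import numpy as np

def lif_step(u_prev, o_prev, weights, spikes_in, leak, v_th):
    """One discrete LIF update per Eq. (2)-(3): leaky integration of
    weighted input spikes, with soft reset (subtract v_th if the neuron
    spiked at the previous timestep)."""
    u = leak * u_prev + weights @ spikes_in - v_th * o_prev
    o = (u > v_th).astype(np.float32)   # Eq. (3): binary output spike
    return u, o

# Single neuron, two input synapses, constant input spikes.
w = np.array([[0.6, 0.5]])
u = np.zeros(1); o = np.zeros(1)
for t in range(4):
    u, o = lif_step(u, o, w, np.ones(2), leak=0.9, v_th=1.0)
    print(t, float(u), float(o))
```

With `leak=1.0` this reduces to the IF model; the soft reset carries the surplus potential above threshold into the next timestep rather than discarding it, as described above.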
### II-B SNN Training Techniques
Recent research on training supervised deep SNNs can be broadly divided into
three categories: i) Indirect learning; ii) Direct Learning; iii) Hybrid
Learning.
#### II-B1 Indirect Learning
Recent works have demonstrated that SNNs can be efficiently converted from
ANNs by approximating the activation value of ReLU neurons with the firing
rate of spiking neurons [12, 36, 37, 15, 38]. This technique uses the standard
backpropagation algorithm for training in the ANN domain, and helps SNNs
achieve SOTA results on various challenging inference tasks, particularly in
image recognition [36, 15]. Moreover, ANN-SNN conversion simplifies the
training procedures compared to approximate gradient techniques, since it
involves only a single forward pass to process a single input. However, a
disadvantage of ANN-SNN conversion is that it yields SNNs with an order of
magnitude higher latency than other training techniques [15]. In this work, we
use ANN-SNN conversion as an initial step in our proposed framework because it
yields high classification accuracy on deep networks. We then leverage direct
encoding in the first timestep to reduce the number of synaptic operations and
thus improve the SNN’s energy efficiency.
#### II-B2 Direct Learning
The discontinuous and non-differentiable nature of a spiking neuron makes it
difficult to implement gradient descent based backpropagation. Consequently,
several approximate training methodologies have been proposed that leverage
the temporal dynamics of SNNs [39, 26, 40, 41, 42, 43]. The basic idea of
these works is to approximate the spiking neuron functionality with a
continuous differentiable model or use surrogate gradients to approximate real
gradients. However, STDB requires the gradients to be integrated over all
timesteps, increasing computation and memory requirements significantly,
particularly for deep networks.
#### II-B3 Hybrid Learning
Authors in [30] proposed a hybrid training methodology that consists of ANN-
SNN conversion, followed by approximate gradient descent on the initialized
network to obtain the final trained SNN model. The authors claimed that
combining the two training techniques helps SNNs converge within a few epochs
and require fewer timesteps. Another recent paper [24] proposes a training
scheme for deep SNNs in which the membrane leak and the firing threshold along
with other network parameters (weights) are updated at the end of every batch
via gradient descent after ANN-SNN conversion. Moreover, instead of converting
the image pixel values into spike trains using Poisson rate coding described
above, the authors directly feed the analog pixel values in the first
convolutional layer, which emits spikes using the LIF neuron model. This
reduces the number of timesteps required compared to Poisson rate coding. In this
work, we employ a variant of the hybrid learning technique (ANN-SNN
Conversion, followed by STDB with trainable weights, threshold and leak) to
train deep SNNs.
## III Hybrid Spike Encoding
We propose a hybrid encoding scheme to convert the real-valued pixel
intensities of input images into SNN inputs over the total number of timesteps
dictated by the desired inference accuracy. As is typical, input images fed to
the ANN are normalized to zero mean and unit standard deviation. In our
proposed coding technique, we feed the analog pixel value in the input layer
of the SNN in the $1^{st}$ timestep. Next, we convert the real-valued pixels
into a spike train starting from the $2^{nd}$ timestep representing the same
information. Considering a grayscale image with pixel intensity values in the range
$[I_{min},I_{max}]$, each input neuron encodes the temporal information of
its corresponding pixel value as a single spike time in the range $[2,T]$,
where $T$ is the total number of timesteps. The firing time of the $i^{th}$
input neuron, $T_{i}$, is computed based on the $i^{th}$ pixel intensity
value, $I_{i}$, as follows
$T_{i}=\lfloor
T+\left(\frac{2-T}{I_{max}-I_{min}}\right)\cdot(I_{i}-I_{min})\rceil$ (4)
where $\lfloor.\rceil$ represents the nearest integer function. Eq. 4 is
represented as the point-slope form of the linear relationship shown in Fig.
2(b) and $\lfloor.\rceil$ is applied because $T_{i}$ should be integral. Note
that Eq. 4 also implies that the spike train starts from the $2^{nd}$ timestep
$2$. The encoded value of the $i^{th}$ neuron in the input layer is thus
expressed as
$\displaystyle X_{i}(t)=\begin{cases}I_{i},&\text{if }t=1\\\ 1,&\text{else if
}t=T_{i}\\\ 0,&\text{otherwise}\end{cases}$ (5)
which is further illustrated in Fig. 2(b). Brighter image pixels have higher
intensities, and hence, lower $T_{i}$. Neurons at the subsequent layers fire
as soon as they reach their threshold, and both the membrane potential and the
time to reach the threshold in the output layer determine the network decision.
The analog pixel value in the $1^{st}$ time step influences the membrane
potential of the output neurons, while the firing times of the input neurons
based on the pixel intensities are responsible for the spike times of the
output neurons.
Figure 2: (a) Hybrid coded input to the SNN (b) Mapping between the pixel
intensity of images and the firing time of individual neurons where
$\lfloor.\rceil$ denotes the nearest integer function
Notably, this hybrid encoding scheme captures both the intensity and the temporal
nature of the input neurons, and it does not need any preprocessing steps such as
the Gabor filters commonly used in SNNs trained with spike-timing-dependent
plasticity (STDP) [44, 45]. Moreover, our proposed encoding
technique is compatible with event-driven cameras which capture actual pixel
value first, and subsequently emit spikes based on the changes in pixel
intensity [46]. Lastly, our proposal ensures that there is a single input
spike per pixel, and hence, the obtained spike train is sparser than that
observed in rate/direct coded techniques.
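The hybrid encoding of Eq. (4)-(5) can be sketched as follows; this is an illustrative NumPy implementation (function name ours), assuming pixels normalized to $[0,1]$ and 1-indexed timesteps stored in a 0-indexed array.

```python
import numpy as np

def hybrid_encode(pixels, T, i_min=0.0, i_max=1.0):
    """Hybrid spatio-temporal input per Eq. (4)-(5): analog pixel values at
    t = 1, then a single spike per pixel at a time inversely related to
    intensity (bright pixels spike early, at t = 2)."""
    # Eq. (4): linear map [i_min, i_max] -> [T, 2], rounded to the nearest integer.
    t_spike = np.rint(T + (2 - T) / (i_max - i_min) * (pixels - i_min)).astype(int)
    x = np.zeros((T, len(pixels)), dtype=np.float32)
    x[0] = pixels                        # t = 1: direct (analog) encoding
    for i, ti in enumerate(t_spike):
        x[ti - 1, i] = 1.0               # t = T_i in [2, T]: single temporal spike
    return x, t_spike

x, t_spike = hybrid_encode(np.array([0.0, 0.5, 1.0]), T=5)
print(t_spike)   # darkest pixel spikes last (t = 5), brightest first (t = 2)
```

Since $T_i \geq 2$ by construction, the temporal spike never collides with the analog value fed at the first timestep.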
## IV Proposed Training Scheme
We employ a modified version of the LIF model illustrated in Section II to
train energy-efficient SNNs. In our proposed training framework, neurons in
all the hidden convolutional and fully-connected layers (except the output
layer) spike at most once over all the timesteps. During inference, once a
neuron emits a spike, it is shut off, and does not participate in the
remaining LIF computations. However, during training, the neurons in the
hidden layers follow the model illustrated in Eq. (6)-(8) which shows that
even though each neuron can fire at most once, it still needs to perform
computations following the LIF model. This ensures that the error gradients
are still non-zero following the spike time and enables our proposed training
framework to avoid the dead neuron problem where learning does not happen in
the absence of a spike.
$\displaystyle\mbox{\boldmath$U$}_{l}^{t}$
$\displaystyle=\lambda_{l}{\mbox{\boldmath$U$}_{l}^{t-1}}+W_{l}{\mbox{\boldmath$O$}_{l-1}^{t}}-V_{l}\cdot(\mbox{\boldmath$z$}_{l}^{t-1}>0)$
(6) $\displaystyle\mbox{\boldmath$z$}_{l}^{t}$
$\displaystyle=\frac{\mbox{\boldmath$U$}_{l}^{t}}{V_{l}}-1$ (7)
$\mbox{\boldmath$O$}_{l}^{t}=\begin{cases}1,&\text{if
}\mbox{\boldmath$z$}_{l}^{t}>0\text{ and }\mbox{\boldmath$z$}_{l}^{t_{i}}\leq
0\ {\forall}t_{i}\in[1,t)\\\ 0,&\text{otherwise }\end{cases}$ (8)
Note that $\mbox{\boldmath$U$}_{l}^{t}$ and $\mbox{\boldmath$O$}_{l-1}^{t}$ are
vectors containing the membrane potentials of the neurons of layer
$l$ at timestep $t$ and the spike signals from layer $({l}-{1})$, respectively,
and $W_{l}$ is the weight matrix connecting layers $l$ and $({l}-{1})$. Also note that
$(\mbox{\boldmath$z$}_{l}^{t-1}>0)$ in Eq. 6 denotes a Boolean vector of size
equal to the number of neurons in layer $l$. The leak and threshold voltage
for all the neurons in layer $l$ are represented by $\lambda_{l}$ and $V_{l}$
respectively. In our training framework, both these parameters (same for the
neurons in a particular layer) are trained with backpropagation along with the
weights to optimize both accuracy and latency.
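A sketch of the single-spike hidden-layer dynamics of Eq. (6)-(8), with illustrative names; note that, faithful to Eq. (6), the soft reset is driven by $\mbox{\boldmath$z$}_{l}^{t-1}>0$, while the output spike is additionally gated so that each neuron fires at most once.

```python
import numpy as np

def single_spike_layer(inputs, w, leak, v_th):
    """Hidden-layer dynamics of Eq. (6)-(8): LIF integration with soft
    reset, but each neuron emits at most one spike across all timesteps."""
    T = inputs.shape[0]
    n = w.shape[0]
    u = np.zeros(n)
    z_prev = np.full(n, -1.0)            # no reset before the first step
    fired = np.zeros(n, dtype=bool)
    out = np.zeros((T, n))
    for t in range(T):
        u = leak * u + w @ inputs[t] - v_th * (z_prev > 0)   # Eq. (6)
        z = u / v_th - 1.0                                   # Eq. (7)
        o = (z > 0) & ~fired             # Eq. (8): only the first crossing spikes
        fired |= o
        out[t] = o
        z_prev = z
    return out

inp = np.ones((6, 2))
out = single_spike_layer(inp, w=np.array([[0.4, 0.3]]), leak=1.0, v_th=1.0)
print(out.ravel(), out.sum())  # at most one spike per neuron
```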
The neurons in the output layer accumulate the incoming inputs without any
leakage as shown in Eq. 9. However, unlike previous works [24, 30], the output
neurons in our proposed framework emit spikes following a model shown in Eq.
10 where $\mbox{\boldmath$T$}_{l}$ denote the vector containing the spike
times of the output neurons and $T$ is the total number of timesteps.
$\displaystyle\mbox{\boldmath$U$}_{l}^{t}$
$\displaystyle=\mbox{\boldmath$U$}_{l}^{t-1}+W_{l}\mbox{\boldmath$O$}_{l-1}^{t}$
(9) $\displaystyle\mbox{\boldmath$T$}_{l}$
$\displaystyle=\begin{cases}T,&\text{if }\mbox{\boldmath$U$}_{l}^{T}<V_{l}\\\
\mbox{\boldmath$t$}\ s.t.\ \mbox{\boldmath$U$}_{l}^{t}\ {\geq}\ {V_{l}}\ \&\
\mbox{\boldmath$U$}_{l}^{t-1}<{V_{l}},&\text{otherwise}\end{cases}$ (10)
The output layer only triggers an output spike if there was no spike in the
earlier timesteps and the corresponding membrane potential crosses the
threshold. Also, an output neuron is forced to fire at the last timestep if it
was unable to emit a spike in any of the timesteps. This ensures that all the
neurons in the output layer have a valid $T_{l}$ which can be included in the
loss function.
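The output-layer model of Eq. (9)-(10) can be sketched as follows (illustrative NumPy, names ours; the spike time is taken as the first threshold crossing, with a forced spike at $T$ if no crossing occurs):

```python
import numpy as np

def output_spike_times(currents, v_th):
    """Output layer per Eq. (9)-(10): leak-free accumulation; each neuron's
    spike time is its first threshold crossing, or T if it never crosses."""
    T, n = currents.shape
    u = np.cumsum(currents, axis=0)          # Eq. (9): no leak, no reset
    t_spike = np.full(n, T)                  # default: forced spike at t = T
    for i in range(n):
        crossed = np.flatnonzero(u[:, i] >= v_th)
        if crossed.size:
            t_spike[i] = crossed[0] + 1      # first crossing (1-indexed time)
    return u[-1], t_spike

u_final, t_spike = output_spike_times(
    np.array([[0.5, 0.1], [0.6, 0.1], [0.2, 0.1]]), v_th=1.0)
print(u_final, t_spike)
```

Both returned quantities, the final membrane potentials and the spike times, feed the hybrid loss derived next.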
Let us now derive the expressions to compute the gradients of the trainable
parameters of all the layers. We perform the spatial and temporal credit
assignment by unrolling the network along the temporal axis and employing
backpropagation through time (BPTT) [30].
Output Layer: The loss function is defined on both
$\mbox{\boldmath$U$}_{l}^{T}$ and $\mbox{\boldmath$T$}_{l}$ to correctly
capture both the direct and temporal information presented at the input layer.
Therefore, we employ two softmax functions of the $i^{th}$ output neuron shown
in Eq. 11, where $N$ denotes the total number of classes, $U_{i}^{T}$ and
$t_{i}$ represent the accumulated membrane potential after the final timestep
and the firing time of the $i^{th}$ neuron, respectively.
$\tilde{U_{i}}=\frac{e^{U_{i}^{T}}}{\sum_{j=1}^{N}e^{U_{j}^{T}}},\ \
\tilde{t_{i}}={\frac{e^{-t_{i}}}{\sum_{j=1}^{N}e^{-t_{j}}}}$ (11)
The resulting hybrid cross-entropy loss ($\mathcal{L}$) and its gradient with
respect to the accumulated membrane potential vector
($\frac{\partial\mathcal{L}}{\partial\mbox{\boldmath$U$}_{l}^{T}}$) are thus
defined as
$\mathcal{L}=-\sum_{i=1}^{N}{y_{i}log(\tilde{U_{i}}\tilde{t_{i}})},\quad\frac{\partial\mathcal{L}}{\partial\mbox{\boldmath$U$}_{l}^{T}}=\mbox{\boldmath$\tilde{U}$}_{l}^{T}-\mbox{\boldmath$y$}$
(12)
where $\mbox{\boldmath$\tilde{U}$}_{l}^{T}$ is the vector containing the
softmax values $\tilde{U}_{i}$, and $y$ is the one-hot encoded vector of the
correct class. Similarly, the gradient with respect to the firing time vector
($\frac{\partial\mathcal{L}}{\partial\mbox{\boldmath$T$}_{l}}$) is
$(\mbox{\boldmath$\tilde{T}$}_{l}-\mbox{\boldmath$y$})$. Now, we compute the
weight update as
$\displaystyle W_{l}$ $\displaystyle=W_{l}-\eta\Delta{W_{l}}$ (13)
$\displaystyle\Delta{W_{l}}$
$\displaystyle=\sum_{t}\frac{\partial\mathcal{L}}{\partial
W_{l}}=\sum_{t}\frac{\partial\mathcal{L}}{\partial\mbox{\boldmath$U$}_{l}^{t}}\frac{\partial\mbox{\boldmath$U$}_{l}^{t}}{\partial
W_{l}}$
$\displaystyle=\frac{\partial\mathcal{L}}{\partial\mbox{\boldmath$U$}_{l}^{T}}\sum_{t}\frac{\partial\mbox{\boldmath$U$}_{l}^{t}}{\partial
W_{l}}=(\mbox{\boldmath$\tilde{U}$}_{l}^{T}-\mbox{\boldmath$y$})\sum_{t}\mbox{\boldmath$O$}_{l-1}^{t}$
(14)
where $\eta$ is the learning rate (LR). In order to evaluate the threshold
update at the output layer, we rewrite Eq. (10) as
$\displaystyle\mbox{\boldmath$T$}_{l}=\sum_{t=1}^{T-1}(t\mathcal{H}(a)\mathcal{H}(b))+T\mathcal{H}(c)$
(15)
where $\mathcal{H}$ denotes the Heaviside step function,
$\mbox{\boldmath$a$}=\mbox{\boldmath$U$}_{l}^{t}-\mbox{\boldmath$V$}_{l}$,
$\mbox{\boldmath$b$}=\mbox{\boldmath$V$}_{l}-\mbox{\boldmath$U$}_{l}^{t-1}$,
and $\mbox{\boldmath$c$}=\mbox{\boldmath$V$}_{l}-\mbox{\boldmath$U$}_{l}^{T}$.
Note that $\mbox{\boldmath$V$}_{l}$ represents a vector of repeated elements
of the threshold voltage of the output layer. The derivative
$(\frac{\partial\mbox{\boldmath$T$}_{l}}{\partial V_{l}})$ can then be
represented as
$\displaystyle\frac{\partial\mbox{\boldmath$T$}_{l}}{\partial
V_{l}}=\sum_{t=1}^{T-1}t(\mathcal{H}(\mbox{\boldmath$a$})\delta(\mbox{\boldmath$b$})-\mathcal{H}(\mbox{\boldmath$b$})\delta(\mbox{\boldmath$a$}))+T\delta(\mbox{\boldmath$c$})$
(16)
where $\delta$ represents the Dirac-delta function. Since the delta function
is zero almost everywhere, it provides no usable gradient of
$\mbox{\boldmath$T$}_{l}$ with which to train $V_{l}$. Hence, we approximate
Eq. (16) as
$\displaystyle\sum_{t=1}^{T-1}t\left(\mathcal{H}(\mbox{\boldmath$a$})(|\mbox{\boldmath$b$}|{<}\mbox{\boldmath$\beta$}){-}\mathcal{H}(\mbox{\boldmath$b$})(|\mbox{\boldmath$a$}|{<}\mbox{\boldmath$\beta$})\right){+}T(|\mbox{\boldmath$c$}|{<}\mbox{\boldmath$\beta$})$
(17)
where $\beta$ is a vector of size equal to the number of output neurons,
consisting of the repeated elements of a training hyperparameter that controls
the gradient of $\mbox{\boldmath$T$}_{l}$. Note that
$(|\mbox{\boldmath$a$}|{<}\mbox{\boldmath$\beta$})$, $(|\mbox{\boldmath$b$}|{<}\mbox{\boldmath$\beta$})$,
and $(|\mbox{\boldmath$c$}|{<}\mbox{\boldmath$\beta$})$ are all Boolean
vectors of the same size as $\beta$. We then compute the threshold update as
$\displaystyle V_{l}=V_{l}-\eta\Delta{V_{l}},\
\Delta{V_{l}}=\frac{\partial\mathcal{L}}{\partial
V_{l}}=\frac{\partial\mathcal{L}}{\partial\mbox{\boldmath$T$}_{l}}\frac{\partial\mbox{\boldmath$T$}_{l}}{\partial
V_{l}}$ (18)
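The softmax pair of Eq. (11) and the hybrid loss and membrane-potential gradient of Eq. (12) can be sketched as follows (illustrative NumPy; the max/min shifts are a standard numerical-stability trick that leaves the softmax values unchanged):

```python
import numpy as np

def hybrid_loss(u_final, t_spike, y):
    """Hybrid cross-entropy of Eq. (11)-(12): softmax over final membrane
    potentials and over negated spike times; the loss couples both terms."""
    su = np.exp(u_final - u_final.max())
    su /= su.sum()                        # \tilde{U}_i of Eq. (11)
    st = np.exp(-t_spike + t_spike.min())
    st = st / st.sum()                    # \tilde{t}_i: earlier spike -> larger value
    loss = -np.sum(y * np.log(su * st))   # -sum_i y_i log(\tilde{U}_i \tilde{t}_i)
    grad_u = su - y                       # Eq. (12): dL/dU_l^T = \tilde{U} - y
    return loss, grad_u

u = np.array([2.0, 0.5, 0.1])
t = np.array([2.0, 5.0, 5.0])
y = np.array([1.0, 0.0, 0.0])
loss, grad_u = hybrid_loss(u, t, y)
print(loss, grad_u)
```

As in any softmax cross-entropy, the gradient components sum to zero, pushing up the correct class and down the others.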
Hidden Layers: The weight update of the $l^{th}$ hidden layer is calculated
from Eq. (6)-(8) as
$\displaystyle\Delta{W_{l}}$
$\displaystyle=\sum_{t}\frac{\partial\mathcal{L}}{\partial
W_{l}}=\sum_{t}\frac{\partial\mathcal{L}}{\partial\mbox{\boldmath$z$}_{l}^{t}}\frac{\partial\mbox{\boldmath$z$}_{l}^{t}}{\partial\mbox{\boldmath$O$}_{l}^{t}}\frac{\partial\mbox{\boldmath$O$}_{l}^{t}}{\partial\mbox{\boldmath$U$}_{l}^{t}}\frac{\partial\mbox{\boldmath$U$}_{l}^{t}}{\partial
W_{l}}$
$\displaystyle=\sum_{t}\frac{\partial\mathcal{L}}{\partial\mbox{\boldmath$z$}_{l}^{t}}\frac{\partial\mbox{\boldmath$z$}_{l}^{t}}{\partial\mbox{\boldmath$O$}_{l}^{t}}\frac{1}{V_{l}}\mbox{\boldmath$O$}_{l-1}^{t}$
(19)
$\frac{\partial\mbox{\boldmath$z$}_{l}^{t}}{\partial\mbox{\boldmath$O$}_{l}^{t}}$ is the
non-differentiable gradient, which can be approximated with the surrogate
gradient proposed in [42].
$\displaystyle\frac{\partial\mbox{\boldmath$z$}_{l}^{t}}{\partial\mbox{\boldmath$O$}_{l}^{t}}=\gamma\cdot{max(0,1-|\mbox{\boldmath$z$}_{l}^{t}|)}$
(20)
where $\gamma$ is a hyperparameter denoting the maximum value of the gradient.
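Eq. (20) is a simple triangular function of $\mbox{\boldmath$z$}_{l}^{t}$; an illustrative NumPy sketch:

```python
import numpy as np

def surrogate_grad(z, gamma=0.3):
    """Piecewise-linear surrogate of Eq. (20): replaces the undefined
    derivative of the spike step with gamma * max(0, 1 - |z|)."""
    return gamma * np.maximum(0.0, 1.0 - np.abs(z))

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(surrogate_grad(z))  # peaks at z = 0, zero for |z| >= 1
```

The surrogate passes gradient only near the threshold ($z=0$, i.e. $U=V_l$), which is what lets backpropagation through time assign credit to spikes.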
The threshold update is then computed as
$\displaystyle\Delta{V_{l}}$
$\displaystyle=\sum_{t}\frac{\partial\mathcal{L}}{\partial
V_{l}}=\sum_{t}\frac{\partial\mathcal{L}}{\partial\mbox{\boldmath$O$}_{l}^{t}}\frac{\partial\mbox{\boldmath$O$}_{l}^{t}}{\partial\mbox{\boldmath$z$}_{l}^{t}}\frac{\partial\mbox{\boldmath$z$}_{l}^{t}}{\partial
V_{l}}$
$\displaystyle=\sum_{t}\frac{\partial\mathcal{L}}{\partial\mbox{\boldmath$O$}_{l}^{t}}\frac{\partial\mbox{\boldmath$O$}_{l}^{t}}{\partial\mbox{\boldmath$z$}_{l}^{t}}\left(\frac{-V_{l}\cdot(\mbox{\boldmath$z$}_{l}^{t-1}>0)-\mbox{\boldmath$U$}_{l}^{t}}{(V_{l})^{2}}\right)$
(21)
Given that the threshold is the same for all neurons in a particular layer, it may
seem redundant to train both the weights and threshold together. However, our
experimental evaluation detailed in Section VI shows that the number of
timesteps required to obtain the state-of-the-art classification accuracy
decreases with this joint optimization. We hypothesize that this is because
the optimizer is able to reach an improved local minimum when both parameters
are tunable. Finally, the leak update is computed as
$\displaystyle\lambda_{l}$ $\displaystyle=\lambda_{l}-\eta\Delta{\lambda_{l}}$
(22)
$\displaystyle\Delta\lambda_{l}=\sum_{t}\frac{\partial\mathcal{L}}{\partial\lambda_{l}}$
$\displaystyle=\sum_{t}\frac{\partial\mathcal{L}}{\partial\mbox{\boldmath$O$}_{l}^{t}}\frac{\partial\mbox{\boldmath$O$}_{l}^{t}}{\partial\mbox{\boldmath$z$}_{l}^{t}}\frac{\partial\mbox{\boldmath$z$}_{l}^{t}}{\partial\mbox{\boldmath$U$}_{l}^{t}}\frac{\partial\mbox{\boldmath$U$}_{l}^{t}}{\partial\lambda_{l}}$
$\displaystyle=\sum_{t}\frac{\partial\mathcal{L}}{\partial\mbox{\boldmath$O$}_{l}^{t}}\frac{\partial\mbox{\boldmath$O$}_{l}^{t}}{\partial\mbox{\boldmath$z$}_{l}^{t}}\frac{1}{V_{l}}\mbox{\boldmath$U$}_{l}^{(t-1)}$
(23)
## V Experiments
This section first describes how we evaluate the efficacy of our proposed
encoding and training framework and then presents the inference accuracy on
CIFAR-10 and CIFAR-100 datasets with various VGG model variants.
### V-A Experimental Setup
#### V-A1 ANN Training for Initialization
To train our ANNs, we used the standard data-augmented input set for each
model. For ANN training with various VGG models, we imposed a number of
constraints that lead to near-lossless SNN conversion [15]. In particular,
our models are trained without the bias term because it complicates parameter-space
exploration, which increases conversion difficulty and tends to increase
conversion loss. The absence of the bias term implies that Batch Normalization
[47] cannot be used as a regularizer during the training process. Instead, we
use Dropout [48] as the regularizer for both ANN and SNN training. Also, our
pooling operations use average pooling because for binary spike based
activation layers, max pooling incurs significant information loss. We
performed the ANN training for $200$ epochs with an initial LR of $0.01$ that
decays by a factor of $0.1$ after $120$, $160$, and $180$ epochs.
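The step schedule above can be expressed as a small helper (illustrative; the function name is ours):

```python
def ann_lr(epoch, base_lr=0.01, milestones=(120, 160, 180), decay=0.1):
    """Step LR schedule used for ANN training: start at 0.01 and decay
    by a factor of 0.1 after epochs 120, 160 and 180 (200 epochs total)."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= decay
    return lr

for e in (0, 119, 120, 160, 180, 199):
    print(e, ann_lr(e))
```

In a PyTorch setup the same schedule is what `torch.optim.lr_scheduler.MultiStepLR` with `milestones=[120, 160, 180]` and `gamma=0.1` would produce.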
#### V-A2 ANN-SNN Conversion and SNN Training
Previous works [15, 30] set the threshold of the first hidden layer by
computing the maximum input received by any of its neurons across all $T$
timesteps for a set of input images [15]. The thresholds of the subsequent
layers are sequentially computed in a similar manner taking the maximum across
all neurons and timesteps. However, in our proposed framework, the threshold
for each layer is computed sequentially as the $99.7$ percentile (instead of
the maximum) of the neuron input distribution at each layer, which improves
the SNN classification accuracy [24]. During threshold computation, the leak
in the hidden layers is set to unity and the analog pixel values of an image
are directly applied to the input layer [24]. We considered only $512$ input
images to limit conversion time and used a threshold scaling factor of $0.4$
for SNN training and inference, following the recommendations in [30].
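A minimal sketch of this percentile-based threshold initialization (nearest-rank percentile over the recorded neuron inputs; a library routine such as `numpy.percentile` with interpolation would serve equally well, and the function name is hypothetical):

```python
def layer_threshold(neuron_inputs, pct=99.7):
    """Set the layer firing threshold to the pct-th percentile of the
    observed neuron-input distribution, instead of the maximum."""
    xs = sorted(neuron_inputs)
    # nearest-rank percentile, clamped to valid indices
    k = int(round(pct / 100.0 * len(xs))) - 1
    k = max(0, min(len(xs) - 1, k))
    return xs[k]
```

Using the percentile rather than the maximum makes the threshold robust to a few outlier inputs, which would otherwise inflate it and suppress spiking.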
Initialized with these layer thresholds and the trained ANN weights, we
performed our proposed SNN training with the hybrid input encoding scheme for
$150$ epochs on both CIFAR-10 and CIFAR-100, where we jointly
optimize the weights, the membrane leak, and the firing thresholds of each
layer as described in Section IV. We set $\gamma$ = $0.3$ [42], $\beta=0.2$,
and used a starting LR of $10^{-4}$ which decays by a factor of $0.1$ every
$10$ epochs.
TABLE I: Model performance with single-spike hybrid-encoded SNN training on CIFAR-10 and CIFAR-100 after (a) ANN training, (b) ANN-to-SNN conversion, and (c) SNN training.

| Architecture | a. ANN accuracy ($\%$) | b. Accuracy ($\%$) after ANN-SNN conversion ($T=200$) | c. Accuracy ($\%$) after proposed SNN training ($T=5$) |
|---|---|---|---|
| _Dataset: CIFAR-10_ | | | |
| VGG-6 | 90.22 | 89.98 | 88.89 |
| VGG-11 | 91.02 | 91.77 | 90.66 |
| VGG-16 | 93.24 | 93.16 | 91.41 |
| _Dataset: CIFAR-100_ | | | |
| VGG-16 | 71.02 | 70.38 | 66.46 |
### V-B Classification Accuracy & Latency
We evaluated the performance of these networks on multiple VGG architectures,
namely VGG-6, VGG-11, and VGG-16 for CIFAR-10 and VGG-16 for CIFAR-100.
Column-$2$ in Table I shows the ANN accuracy; column-$3$ shows
the accuracy after ANN-SNN conversion with $200$ timesteps. Note that we need
$200$ timesteps to evaluate the thresholds of the SNN for VGG architectures
without any significant loss in accuracy. Column-4 in Table I shows the
accuracy when we perform our proposed training with our hybrid input encoding
discussed in Section III. The performance of the SNNs trained via our proposed
framework is compared with the current state-of-the-art SNNs with various
encoding and training techniques in Table II. Our proposal requires only $5$
timesteps for both SNN training and inference to obtain state-of-the-art test
accuracy, and hence represents a $5$-$300\times$ improvement in inference
latency compared to other rate- or temporally-coded spiking networks. Note that the direct
encoding in the first timestep is crucial for SNN convergence: temporal
coding alone leads to a test accuracy of only ${\sim}10\%$ and ${\sim}1\%$ on
CIFAR-10 and CIFAR-100, respectively, for all the network architectures.
TABLE II: Performance comparison of the proposed single-spike hybrid-encoded SNN with state-of-the-art deep SNNs on CIFAR-10 and CIFAR-100. TTFS denotes time-to-first-spike coding.

| Authors | Training type | Input encoding | Architecture | Accuracy ($\%$) | Timesteps |
|---|---|---|---|---|---|
| _Dataset: CIFAR-10_ | | | | | |
| Sengupta et al. (2019) [15] | ANN-SNN conversion | Rate | VGG-16 | 91.55 | 2500 |
| Wu et al. (2019) [27] | Surrogate gradient | Direct | 5 CONV, 2 linear | 90.53 | 12 |
| Rathi et al. (2020) [30] | Conversion + STDB training | Rate | VGG-16 | 91.13 / 92.02 | 100 / 200 |
| Garg et al. (2020) [25] | Conversion + STDB training | DCT | VGG-9 | 89.94 | 48 |
| Kim et al. (2018) [21] | ANN-SNN conversion | Phase | VGG-16 | 91.2 | 1500 |
| Park et al. (2019) [22] | ANN-SNN conversion | Burst | VGG-16 | 91.4 | 1125 |
| Park et al. (2020) [18] | STDB training | TTFS | VGG-16 | 91.4 | 680 |
| Kim et al. (2020) [28] | Surrogate gradient | Rate | VGG-9 | 90.5 | 25 |
| Rathi et al. (2020) [24] | Conversion + STDB training | Direct | VGG-16 | 92.70 / 93.10 | 5 / 10 |
| This work | Conversion + STDB training | Hybrid | VGG-16 | 91.41 | 5 |
| _Dataset: CIFAR-100_ | | | | | |
| Lu et al. (2020) [32] | ANN-SNN conversion | Direct | VGG-16 | 63.20 | 62 |
| Garg et al. (2020) [25] | Conversion + STDB training | DCT | VGG-11 | 68.3 | 48 |
| Park et al. (2019) [22] | ANN-SNN conversion | Burst | VGG-16 | 68.77 | 3100 |
| Park et al. (2020) [18] | STDB training | TTFS | VGG-16 | 68.8 | 680 |
| Kim et al. (2020) [28] | Surrogate gradient | Rate | VGG-9 | 66.6 | 50 |
| Rathi et al. (2020) [24] | Conversion + STDB training | Direct | VGG-16 | 69.67 | 5 |
| This work | Conversion + STDB training | Hybrid | VGG-16 | 66.46 | 5 |
## VI Improvement in Energy-efficiency
### VI-A Reduction in Spiking Activity
To model energy consumption, we assume a generated SNN spike consumes a fixed
amount of energy [12]. Based on this assumption, earlier works [30, 15] have
adopted the average spiking activity (also known as average spike count) of an
SNN layer $l$, denoted ${\zeta}^{l}$, as a measure of compute-energy of the
model. In particular, ${\zeta}^{l}$ is computed as the ratio of the total
spike count in $T$ steps over all the neurons of the layer $l$ to the total
number of neurons in that layer. Thus, the lower the spiking activity, the
better the energy efficiency.
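The definition of $\zeta^{l}$ translates directly into code; this is a hypothetical sketch in which each neuron is represented by its binary spike train over the $T$ timesteps:

```python
def average_spiking_activity(spike_trains):
    """zeta_l: total spike count over T timesteps across all neurons of
    layer l, divided by the number of neurons in the layer."""
    total_spikes = sum(sum(train) for train in spike_trains)
    return total_spikes / len(spike_trains)

# Two neurons over T = 5 timesteps: 3 spikes total / 2 neurons = 1.5
zeta = average_spiking_activity([[1, 0, 0, 0, 0], [0, 1, 0, 1, 0]])
assert zeta == 1.5
```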
Fig. 3 compares the average number of spikes in each layer of VGG-16 under
our proposed single-spike hybrid encoding and under direct encoding,
evaluated on 1500 samples from the CIFAR-10 test set. The average
$\zeta^{l}$ is computed by summing all the spikes in a layer over the $T=5$
timesteps and dividing by the number of neurons in that layer. For example,
the average spike count of the $11^{th}$ convolutional layer of the
direct-encoded SNN is $0.78$, which implies that over a $5$-timestep period
each neuron in that layer spikes $0.78$ times on average over all input
samples. As we can see, the spiking activity of almost all the layers
reduces significantly with our proposed encoding technique.
Figure 3: Comparison of average spiking activity per layer for VGG-16 on
CIFAR-10 and CIFAR-100 with both direct and hybrid input encoding.
To compare our proposed work with the SOTA SNNs, we perform hybrid training
(ANN-SNN conversion, along with STDB) on spiking networks with (a) IF neurons
with Poisson rate encoding [30], (b) IF neurons with DCT-based input encoding
[25], and (c) LIF neurons with direct encoding [24]. We employ trainable leak
and threshold in these SNNs for fair comparison. We also evaluate the impact
of the hybrid spatio-temporal encoding with the modified loss function and the
single-spike constraint individually on the average spike rate and latency
under similar accuracy and conditions (trainable threshold and leak). In
particular, we train three additional spiking networks: (d) SNN with LIF
neuron and proposed hybrid-encoding, (e) SNN with LIF neuron and direct
encoding with the single-spike constraint over all the layers, and (f) single-
spike hybrid encoded SNN with LIF neuron. All the six networks achieve test
accuracies between $90$-$93\%$ for VGG-16 on CIFAR-10. Fig. 4 shows the
average spiking rate and the number of timesteps required to obtain the SOTA
test accuracy of all these SNNs. Both (d) and (e) result in lower average
spiking activity compared to all the SOTA SNNs, with at most the same number
of timesteps. Finally, (f) generates an even lower average number of spikes
($2\times$, $17.2\times$, and $94.8\times$ fewer compared to direct, DCT, and
rate coding, respectively), with the lowest inference latency reported to
date for deep SNN architectures [24] and no significant reduction in test
accuracy. The improvement stems both from the hybrid input encoding, which
reduces spiking activity in the initial few layers, and from our single-spike
constraint, which reduces the average spike rate throughout the network,
particularly in the later layers. Because the neurons in the earlier layers
cannot fire multiple times and only $5$ timesteps are used for
classification, it becomes increasingly difficult for the membrane potentials
of the convolutional layers deep in the network to rise sufficiently to emit
a spike.
Figure 4: Effect of Poisson rate encoding, DCT encoding, direct encoding, and single-spike hybrid input encoding on the average spike rate and latency for the VGG-16 architecture on CIFAR-10.

TABLE III: Convolutional and fully-connected layer FLOPs for ANN and SNN models.

| Model | Notation | Convolutional layer $l$ | Fully-connected layer $l$ |
|---|---|---|---|
| ANN | $F_{ANN}^{l}$ | $(k^{l})^{2}\times H_{o}^{l}\times W_{o}^{l}\times C_{o}^{l}\times C_{i}^{l}$ | $f_{i}^{l}\times f_{o}^{l}$ |
| SNN | $F_{SNN}^{l}$ | $(k^{l})^{2}\times H_{o}^{l}\times W_{o}^{l}\times C_{o}^{l}\times C_{i}^{l}\times{\zeta}^{l}$ | $f_{i}^{l}\times f_{o}^{l}\times{\zeta}^{l}$ |
### VI-B Reduction in FLOPs and Compute Energy
Let us assume a convolutional layer $l$ having weight tensor
${\textbf{W}^{l}}\in{\mathbb{R}^{k^{l}\times k^{l}\times C_{i}^{l}\times
C_{o}^{l}}}$ that operates on an input activation tensor
$\textbf{I}^{l}\in\mathbb{R}^{H_{i}^{l}\times W_{i}^{l}\times C_{i}^{l}}$,
where $H_{i}^{l},W_{i}^{l}$, $C_{i}^{l}$ and $C_{o}^{l}$ are the input tensor
height, width, number of channels, and filters, respectively. $k^{l}$
represents both filter height and width. We now quantify the energy consumed
to produce the corresponding output activation tensor
$\textbf{O}^{l}\in\mathbb{R}^{H_{o}^{l}\times W_{o}^{l}\times C_{o}^{l}}$ for
an ANN and SNN, respectively. Our model can be extended to fully-connected
layers with $f_{i}^{l}$ and $f_{o}^{l}$ as the number of input and output
features, respectively. In particular, for an ANN, the total number of FLOPs
for layer $l$, denoted $F_{ANN}^{l}$, is shown in row 1 of Table III [49, 50].
The formula can be easily adjusted for an SNN in which the number of FLOPs at
layer $l$ is a function of the average spiking activity at the layer
$(\zeta^{l})$ denoted as $F_{SNN}^{l}$ in Table III. Thus, as the activation
output gets sparser, the compute energy decreases.
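The FLOP counts of Table III can be expressed as simple helpers (illustrative function names; the dimensions follow the notation of Table III):

```python
def conv_flops_ann(k, h_out, w_out, c_out, c_in):
    """F_ANN^l for a convolutional layer: (k^l)^2 * H_o * W_o * C_o * C_i."""
    return k * k * h_out * w_out * c_out * c_in

def conv_flops_snn(k, h_out, w_out, c_out, c_in, zeta):
    """F_SNN^l scales the ANN count by the layer's average spiking activity."""
    return conv_flops_ann(k, h_out, w_out, c_out, c_in) * zeta

def fc_flops_ann(f_in, f_out):
    """F_ANN^l for a fully-connected layer: f_i * f_o."""
    return f_in * f_out

def fc_flops_snn(f_in, f_out, zeta):
    """F_SNN^l for a fully-connected layer: f_i * f_o * zeta."""
    return fc_flops_ann(f_in, f_out) * zeta
```

With $\zeta^{l}<1$, the SNN performs strictly fewer operations per layer than the iso-architecture ANN.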
For ANNs, FLOPs primarily consist of multiply-accumulate (MAC) operations in
the convolutional and linear layers. In contrast, for SNNs, all layers except
the first and last require only accumulate (AC) operations, since the spikes
are binary and thus simply indicate which weights need to be accumulated at
the post-synaptic neurons. The first layer requires MAC units because it
consumes the analog input at timestep one. (For the hybrid-coded input, the
first layer performs MACs at $t=1$ and ACs during the remaining timesteps;
for the direct-coded input, a MAC during the first timestep suffices, since
neither the inputs nor the weights change during the remaining timesteps,
i.e., $2\leq t\leq 5$.) Hence, the compute energy for an ANN $(E_{ANN})$ and
an iso-architecture SNN model $(E_{SNN})$ can be written as
$$E_{ANN}=\Big(\sum_{l=1}^{L}F^{l}_{ANN}\Big)\cdot E_{MAC}\qquad(24)$$

$$E_{SNN}=F^{1}_{ANN}\cdot E_{MAC}+\Big(\sum_{l=2}^{L}F^{l}_{SNN}\Big)\cdot E_{AC}\qquad(25)$$
where $L$ is the total number of layers. Note that $E_{MAC}$ and $E_{AC}$ are
the energy consumption for a MAC and AC operation respectively. As shown in
Table IV, $E_{AC}$ is $\mathord{\sim}32\times$ lower than $E_{MAC}$ [51] in
$45$ nm CMOS technology. This number may vary for different technologies, but
generally, in most technologies, an AC operation is significantly cheaper than
a MAC operation.
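Equations (24)-(25), combined with the Table IV energy figures, give a back-of-the-envelope compute-energy model. This sketch assumes the per-layer FLOP counts are already known; the function names are illustrative.

```python
E_MAC = 3.2  # pJ per 32-bit MAC, 45 nm CMOS (Table IV)
E_AC = 0.1   # pJ per 32-bit AC

def ann_energy(flops_ann):
    """E_ANN = (sum_l F_ANN^l) * E_MAC   (Eq. 24)."""
    return sum(flops_ann) * E_MAC

def snn_energy(flops_ann_first, flops_snn_rest):
    """E_SNN = F_ANN^1 * E_MAC + (sum_{l>=2} F_SNN^l) * E_AC   (Eq. 25)."""
    return flops_ann_first * E_MAC + sum(flops_snn_rest) * E_AC
```

With sparse spiking ($\zeta^{l}\ll 1$) and ACs roughly $32\times$ cheaper than MACs, $E_{SNN}$ falls well below $E_{ANN}$ for the same architecture.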
Fig. 5 illustrates the energy consumption and FLOPs for ANN and SNN models of
VGG-16 while classifying the CIFAR datasets, where the energy is normalized to
that of an equivalent ANN. The number of FLOPs for SNNs obtained by our
proposed training framework is smaller than that for an ANN with a similar
number of parameters. Moreover, because ACs consume significantly less
energy than MACs, as shown in Table IV, SNNs are significantly more energy
efficient. In particular, for CIFAR-10 our proposed SNN consumes
$\mathord{\sim}70\times$ less compute energy than a comparable iso-
architecture ANN with similar parameters and $\mathord{\sim}1.2\times$ less
compute energy than a comparable SNN with direct encoding technique and
trainable threshold/leak [24] parameters. For CIFAR-100 with hybrid encoding
and our single-spike constraint, the energy-efficiency can reach up to
$\mathord{\sim}125\times$ and $\mathord{\sim}1.8\times$, respectively,
compared to ANN and direct-coded SNN models [24] having similar parameters and
architecture. Note that we did not consider the memory access energy in our
evaluation because it is dependent on the underlying system architecture.
Although SNNs incur significant data movement because the membrane potentials
need to be fetched at every timestep, there have been many proposals to reduce
the memory cost by data buffering [52], computing in non-volatile crossbar
memory arrays [53], and data reuse with energy-efficient dataflows [9]. All
these techniques can be applied to the SNNs obtained by our proposed training
framework to address the memory cost.
Figure 5: Comparison of normalized compute cost on CIFAR-10 and CIFAR-100 for
VGG-16 of ANN and SNN with direct and hybrid input encoding.
## VII Conclusions
SNNs that operate with discrete spiking events can potentially unlock the
energy wall in deep learning for edge applications. Towards this end, we
presented a training framework that leads to low latency, energy-efficient
spiking networks with high activation sparsity. We initialize the parameters
of our proposed SNN taken from a trained ANN, to speed-up the training with
spike-based backpropagation. The image pixels are applied directly as input to
the network during the first timestep, while they are converted to a sparse
spike train with firing times proportional to the pixel intensities in
subsequent timesteps. We also employ a modified version of the LIF model for
the hidden and output layers of the SNN, in which all the neurons fire at most
once per image. Both of these lead to high activation sparsity in the input,
convolutional, and dense layers of the network. Moreover, we employ a hybrid
cross entropy loss function to account for the spatio-temporal encoding in the
input layer and train the network weights, firing threshold, and membrane leak
via spike-based backpropagation to optimize both accuracy and latency. The
high sparsity combined with low inference latency reduces the compute energy
by ${\sim}70$-$125\times$ and ${\sim}1.2$-$1.8\times$ compared to an
equivalent ANN and a direct-encoded SNN, respectively, at similar accuracy.
SNNs obtained by our proposed framework achieve accuracy similar to other
state-of-the-art rate- or temporally-coded SNN models with $5$-$300\times$
fewer timesteps. Future work includes power and performance evaluation of our
energy-efficient models on neuromorphic chips such as Loihi [54] and the
exploration of neuromorphic datasets [55] to leverage the temporal learning
ability of our training framework.
TABLE IV: Estimated energy costs for MAC and AC operations in a 45 nm CMOS process at 0.9 V [51].

| Serial No. | Operation | Energy ($pJ$) |
|---|---|---|
| 1. | 32-bit $int$ multiplication | $3.1$ |
| 2. | 32-bit $int$ addition | $0.1$ |
| 3. | 32-bit MAC | $3.2$ (#1 + #2) |
| 4. | 32-bit AC | $0.1$ (#2) |
## VIII Acknowledgements
This work was supported in part by the NSF CCF-1763747 award.
## References
* [1] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” _Nature_ , vol. 521, pp. 436–44, 05 2015.
* [2] L. Deng, J. Li, J.-T. Huang, K. Yao, D. Yu, F. Seide, M. Seltzer, G. Zweig, X. He, J. Williams, Y. Gong, and A. Acero, “Recent advances in deep learning for speech research at Microsoft,” in _2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , 2013, pp. 8604–8608.
* [3] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in _Advances in Neural Information Processing Systems 25_ , F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, Eds. Curran Associates, Inc., 2012, pp. 1097–1105.
* [4] C. Chen, A. Seff, A. Kornhauser, and J. Xiao, “DeepDriving: Learning affordance for direct perception in autonomous driving,” in _2015 IEEE International Conference on Computer Vision (ICCV)_ , vol. 1, no. 1, 2015, pp. 2722–2730.
* [5] A. Rezvantalab, H. Safigholi, and S. Karimijeshni, “Dermatologist level dermoscopy skin cancer classification using different deep learning convolutional neural networks algorithms,” _arXiv preprint arXiv:1810.10348_ , 2018.
* [6] S. Han, J. Pool, J. Tran, and W. J. Dally, “Learning both weights and connections for efficient neural networks,” in _Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1_. MIT Press, 2015, p. 1135–1143.
* [7] S. Han, H. Mao, and W. J. Dally, “Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding,” _arXiv preprint arXiv:1510.00149_ , 2015.
* [8] S. Lym, D. Lee, M. O’Connor, N. Chatterjee, and M. Erez, “DeLTA: GPU performance model for deep learning applications with in-depth memory system traffic analysis,” _2019 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS)_ , Mar 2019.
* [9] Y.-H. Chen, J. Emer, and V. Sze, “Eyeriss: A spatial architecture for energy-efficient dataflow for convolutional neural networks,” in _ACM SIGARCH Computer Architecture News_ , vol. 44, 06 2016.
* [10] G. Indiveri and T. Horiuchi, “Frontiers in neuromorphic engineering,” _Frontiers in Neuroscience_ , vol. 5, 2011.
* [11] M. Pfeiffer and T. Pfeil, “Deep learning with spiking neurons: Opportunities and challenges,” _Frontiers in Neuroscience_ , vol. 12, p. 774, 2018.
* [12] Y. Cao, Y. Chen, and D. Khosla, “Spiking deep convolutional neural networks for energy-efficient object recognition,” _International Journal of Computer Vision_ , vol. 113, pp. 54–66, 05 2015.
* [13] A. Sengupta, A. Banerjee, and K. Roy, “Hybrid spintronic-CMOS spiking neural network with on-chip learning: Devices, circuits, and systems,” _Phys. Rev. Applied_ , vol. 6, Dec 2016.
* [14] P. U. Diehl, G. Zarrella, A. Cassidy, B. U. Pedroni, and E. Neftci, “Conversion of artificial recurrent neural networks to spiking neural networks for low-power neuromorphic hardware,” in _2016 IEEE International Conference on Rebooting Computing (ICRC)_. IEEE, 2016, pp. 1–8.
* [15] A. Sengupta, Y. Ye, R. Wang, C. Liu, and K. Roy, “Going deeper in spiking neural networks: VGG and residual architectures,” _Frontiers in Neuroscience_ , vol. 13, p. 95, 2019.
* [16] I. M. Comsa, K. Potempa, L. Versari, T. Fischbacher, A. Gesmundo, and J. Alakuijala, “Temporal coding in spiking neural networks with alpha synaptic function,” in _ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , vol. 1, no. 1, 2020, pp. 8529–8533.
* [17] S. Zhou, X. LI, Y. Chen, S. T. Chandrasekaran, and A. Sanyal, “Temporal-coded deep spiking neural network with easy training and robust performance,” _arXiv preprint arXiv:1909.10837_ , 2020.
* [18] S. Park, S. Kim, B. Na, and S. Yoon, “T2FSNN: Deep spiking neural networks with time-to-first-spike coding,” _arXiv preprint arXiv:2003.11741_ , 2020\.
* [19] M. Zhang, J. Wang, B. Amornpaisannon, Z. Zhang, V. Miriyala, A. Belatreche, H. Qu, J. Wu, Y. Chua, T. E. Carlson, and H. Li, “Rectified linear postsynaptic potential function for backpropagation in deep spiking neural networks,” 2020.
* [20] S. R. Kheradpisheh and T. Masquelier, “Temporal backpropagation for spiking neural networks with one spike per neuron,” _International Journal of Neural Systems_ , vol. 30, no. 06, May 2020.
* [21] J. Kim, H. Kim, S. Huh, J. Lee, and K. Choi, “Deep neural networks with weighted spikes,” _Neurocomputing_ , vol. 311, pp. 373–386, 2018.
* [22] S. Park, S. Kim, H. Choe, and S. Yoon, “Fast and efficient information transmission with burst spikes in deep spiking neural networks,” in _2019 56th ACM/IEEE Design Automation Conference (DAC)_ , vol. 1, no. 1, 2019, pp. 1–6.
* [23] D. Almomani, M. Alauthman, M. Alweshah, O. Dorgham, and F. Albalas, “A comparative study on spiking neural network encoding schema: implemented with cloud computing,” _Cluster Computing_ , vol. 22, 06 2019.
* [24] N. Rathi and K. Roy, “DIET-SNN: Direct input encoding with leakage and threshold optimization in deep spiking neural networks,” _arXiv preprint arXiv:2008.03658_ , 2020.
* [25] I. Garg, S. S. Chowdhury, and K. Roy, “DCT-SNN: Using DCT to distribute spatial information over time for learning low-latency spiking neural networks,” _arXiv preprint arXiv:2010.01795_ , 2020.
* [26] J. H. Lee, T. Delbruck, and M. Pfeiffer, “Training deep spiking neural networks using backpropagation,” _Frontiers in Neuroscience_ , vol. 10, 2016\.
* [27] Y. Wu, L. Deng, G. Li, J. Zhu, Y. Xie, and L. Shi, “Direct training for spiking neural networks: Faster, larger, better,” in _Proceedings of the AAAI Conference on Artificial Intelligence_ , vol. 33, 2019, pp. 1311–1318.
* [28] Y. Kim and P. Panda, “Revisiting batch normalization for training low-latency deep spiking neural networks from scratch,” _arXiv preprint arXiv:2010.01729_ , 2020.
* [29] A. Tavanaei, M. Ghodrati, S. R. Kheradpisheh, T. Masquelier, and A. Maida, “Deep learning in spiking neural networks,” _Neural Networks_ , vol. 111, p. 47–63, Mar 2019.
* [30] N. Rathi, G. Srinivasan, P. Panda, and K. Roy, “Enabling deep spiking neural networks with hybrid conversion and spike timing dependent backpropagation,” _arXiv preprint arXiv:2005.01807_ , 2020.
* [31] S. Kundu, G. Datta, M. Pedram, and P. A. Beerel, “Spike-thrift: Towards energy-efficient deep spiking neural networks by limiting spiking activity via attention-guided compression,” in _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)_ , January 2021, pp. 3953–3962.
* [32] S. Lu and A. Sengupta, “Exploring the connection between binary and spiking neural networks,” _arXiv preprint arXiv:2002.10064_ , 2020.
* [33] C. Lee, S. S. Sarwar, P. Panda, G. Srinivasan, and K. Roy, “Enabling spike-based backpropagation for training deep neural network architectures,” _Frontiers in Neuroscience_ , vol. 14, p. 119, 2020.
* [34] S. S. Chowdhury, C. Lee, and K. Roy, “Towards understanding the effect of leak in spiking neural networks,” _arXiv preprint arXiv:2006.08761_ , 2020.
* [35] E. Ledinauskas, J. Ruseckas, A. Juršėnas, and G. Buračas, “Training deep spiking neural networks,” _arXiv preprint arXiv:2006.04436_ , 2020.
* [36] B. Rueckauer, I.-A. Lungu, Y. Hu, M. Pfeiffer, and S.-C. Liu, “Conversion of continuous-valued deep networks to efficient event-driven networks for image classification,” _Frontiers in Neuroscience_ , vol. 11, p. 682, 2017.
* [37] P. U. Diehl, D. Neil, J. Binas, M. Cook, S. Liu, and M. Pfeiffer, “Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing,” in _2015 International Joint Conference on Neural Networks (IJCNN)_ , vol. 1, no. 1, 2015, pp. 1–8.
* [38] Y. Hu, H. Tang, and G. Pan, “Spiking deep residual network,” _arXiv preprint arXiv:1805.01352_ , 2018.
* [39] P. O’Connor, D. Neil, S.-C. Liu, T. Delbruck, and M. Pfeiffer, “Real-time classification and sensor fusion with a spiking deep belief network,” _Frontiers in neuroscience_ , vol. 7, p. 178, 2013.
* [40] C. Lee, P. Panda, G. Srinivasan, and K. Roy, “Training deep spiking convolutional neural networks with STDP-based unsupervised pre-training followed by supervised fine-tuning,” _Frontiers in Neuroscience_ , vol. 12, 2018.
* [41] P. Panda and K. Roy, “Unsupervised regenerative learning of hierarchical features in spiking deep networks for object recognition,” _arXiv preprint arXiv:1602.01510_ , 2016.
* [42] G. Bellec, D. Salaj, A. Subramoney, R. Legenstein, and W. Maass, “Long short-term memory and learning-to-learn in networks of spiking neurons,” _arXiv preprint arXiv:1803.09574_ , 2018.
* [43] E. O. Neftci, H. Mostafa, and F. Zenke, “Surrogate gradient learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks,” _IEEE Signal Processing Magazine_ , vol. 36, no. 6, pp. 51–63, 2019.
* [44] S. R. Kheradpisheh, M. Ganjtabesh, S. J. Thorpe, and T. Masquelier, “STDP-based spiking deep convolutional neural networks for object recognition,” _Neural Networks_ , vol. 99, p. 56–67, Mar 2018. [Online]. Available: http://dx.doi.org/10.1016/j.neunet.2017.12.005
* [45] M. Mozafari, S. R. Kheradpisheh, T. Masquelier, A. Nowzari-Dalini, and M. Ganjtabesh, “First-spike-based visual categorization using reward-modulated STDP,” _IEEE Transactions on Neural Networks and Learning Systems_ , vol. 29, no. 12, pp. 6178–6190, 2018.
* [46] G. Gallego, T. Delbruck, G. M. Orchard, C. Bartolozzi, B. Taba, A. Censi, S. Leutenegger, A. Davison, J. Conradt, K. Daniilidis, and D. Scaramuzza, “Event-based vision: A survey,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 1, no. 1, pp. 1–1, 2020.
* [47] N. Bjorck, C. P. Gomes, B. Selman, and K. Q. Weinberger, “Understanding batch normalization,” in _Advances in Neural Information Processing Systems_ , 2018, pp. 7694–7705.
* [48] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” _Journal of Machine Learning Research_ , vol. 15, pp. 1929–1958, 06 2014\.
* [49] S. Kundu, M. Nazemi, M. Pedram, K. M. Chugg, and P. A. Beerel, “Pre-defined sparsity for low-complexity convolutional neural networks,” _IEEE Transactions on Computers_ , vol. 69, no. 7, pp. 1045–1058, 2020.
* [50] S. Kundu, S. Prakash, H. Akrami, P. A. Beerel, and K. M. Chugg, “pSConv: A pre-defined sparse kernel based convolution for deep CNNs,” in _2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton)_. IEEE, 2019, pp. 100–107.
* [51] M. Horowitz, “1.1 Computing’s energy problem (and what we can do about it),” in _2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC)_. IEEE, 2014, pp. 10–14.
* [52] Y. Shen, M. Ferdman, and P. Milder, “Escher: A cnn accelerator with flexible buffering to minimize off-chip transfer,” in _2017 IEEE 25th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM)_ , vol. 1, no. 1, 2017, pp. 93–100.
* [53] B. Chen, F. Cai, J. Zhou, W. Ma, P. Sheridan, and W. D. Lu, “Efficient in-memory computing architecture based on crossbar arrays,” in _2015 IEEE International Electron Devices Meeting (IEDM)_ , vol. 1, no. 1, 2015, pp. 1–4.
* [54] M. Davies, N. Srinivasa, T. H. Lin, G. Chinya, Y. Cao, S. H. Choday, G. Dimou, P. Joshi, N. Imam, S. Jain, Y. Liao, C. K. Lin, A. Lines, R. Liu, D. Mathaikutty, S. McCoy, A. Paul, J. Tse, G. Venkataramanan, Y. H. Weng, A. Wild, Y. Yang, and H. Wang, “Loihi: A neuromorphic manycore processor with on-chip learning,” _IEEE Micro_ , vol. 38, no. 1, pp. 82–99, 2018.
* [55] S. Miao, G. Chen, X. Ning, Y. Zi, K. Ren, Z. Bing, and A. Knoll, “Neuromorphic vision datasets for pedestrian detection, action recognition, and fall detection,” _Frontiers in Neurorobotics_ , vol. 13, p. 38, 2019.
Geometric Deep Learning on Molecular Representations
Kenneth Atz1,†, Francesca Grisoni2,1,†∗, Gisbert Schneider1,3∗
1ETH Zurich, Dept. Chemistry and Applied Biosciences, RETHINK, Vladimir-
Prelog-Weg 4, 8093 Zurich, Switzerland.
2Eindhoven University of Technology, Dept. Biomedical Engineering, Groene
Loper 7, 5612AZ Eindhoven, Netherlands.
3ETH Singapore SEC Ltd, 1 CREATE Way, $\\#$06-01 CREATE Tower, Singapore,
Singapore.
$\dagger$ these authors contributed equally to this work
*[email protected], [email protected]
###### Abstract
Geometric deep learning (GDL), which is based on neural network architectures
that incorporate and process symmetry information, has emerged as a recent
paradigm in artificial intelligence. GDL bears particular promise in molecular
modeling applications, in which various molecular representations with
different symmetry properties and levels of abstraction exist. This review
provides a structured and harmonized overview of molecular GDL, highlighting
its applications in drug discovery, chemical synthesis prediction, and quantum
chemistry. Emphasis is placed on the relevance of the learned molecular
features and their complementarity to well-established molecular descriptors.
This review provides an overview of current challenges and opportunities, and
presents a forecast of the future of GDL for molecular sciences.
## Introduction
Recent advances in deep learning, which is an instance of artificial
intelligence (AI) based on neural networks [1, 2], have led to numerous
applications in the molecular sciences, e.g., in drug discovery [3, 4],
quantum chemistry [5], and structural biology [6, 7]. Two characteristics of
deep learning render it particularly promising when applied to molecules.
First, deep learning methods can cope with "unstructured" data
representations, such as text sequences [8, 9], speech signals [10, 11],
images [12, 13, 14], and graphs [15, 16]. This ability is particularly useful
for molecular systems, for which chemists have developed many models (i.e.,
"molecular representations") that capture molecular properties at varying
levels of abstraction (Figure 1). The second key characteristic is that deep
learning can perform feature extraction (or feature learning) from the input
data, that is, produce data-driven features from the input data without the
need for manual intervention. These two characteristics are promising for deep
learning as a complement to “classical” machine learning applications (e.g.,
Quantitative Structure-Activity Relationship [QSAR]), in which molecular
features (i.e., "molecular descriptors" [17]) are encoded a priori with rule-
based algorithms. The capability to learn from unstructured data and obtain
data-driven molecular features has led to unprecedented applications of AI in
the molecular sciences.
One of the most promising advances in deep learning is geometric deep learning
(GDL). Geometric deep learning is an umbrella term encompassing emerging
techniques which generalize neural networks to Euclidean and non-Euclidean
domains, such as graphs, manifolds, meshes, or string representations [15]. In
general, GDL encompasses approaches that incorporate a geometric prior, i.e.,
information on the structure space and symmetry properties of the input
variables. Such a geometric prior is leveraged to improve the quality of the
information captured by the model. Although GDL has been increasingly applied
to molecular modeling [18, 19, 5], its full potential in the field is still
untapped.
Figure 1: Exemplary molecular representations for a selected molecule (i.e.,
the penam substructure of penicillin)
a. Two-dimensional (2D) depiction (Kekulé structure).
b. Molecular graph (2D), composed of vertices (atoms) and edges (bonds).
c. SMILES string [20], in which atom type, bond type and connectivity are
specified by alphanumerical characters.
d. Three-dimensional (3D) graph, composed of vertices (atoms), their position
($x$, $y$, $z$ coordinates) in 3D space, and edges (bonds).
e. Molecular surface represented as a mesh colored according to the respective
atom types.
The aim of this review is to (i) provide a structured and harmonized overview
of the applications of GDL on molecular systems, (ii) delineate the main
research directions in the field, and (iii) provide a forecast of the future
impact of GDL. Three fields of application are highlighted, namely drug
discovery, quantum chemistry, and computer-aided synthesis planning (CASP),
with particular attention to the data-driven molecular features learned by GDL
methods. A glossary of selected terms can be found in Box 1.
## Principles of geometric deep learning
The term geometric deep learning was coined in 2017 [15]. Although GDL was
originally used for methods applied to non-Euclidean data [15], it now extends
to all deep learning methods that incorporate geometric priors [21], that is,
information about the structure and symmetry of the system of interest.
Symmetry is a crucial concept in GDL, as it encompasses the properties of the
system with respect to manipulations (transformations), such as translation,
reflection, rotation, scaling, or permutation (Box 2).
Symmetry is often recast in terms of invariance and equivariance to express
the behavior of any mathematical function with respect to a transformation
$\mathcal{T}$ (e.g. rotation, translation, reflection or permutation) of an
acting symmetry group [22]. Here, the mathematical function is a neural
network $\mathcal{F}$ applied to a given molecular input $\mathcal{X}$.
$\mathcal{F}(\mathcal{X})$ can therein transform equivariantly, invariantly or
neither with respect to $\mathcal{T}$, as described below:
* •
Equivariance. A neural network $\mathcal{F}$ applied to an input $\mathcal{X}$
is equivariant to a transformation $\mathcal{T}$ if the transformation of the
input $\mathcal{X}$ commutes with the transformation of
$\mathcal{F}(\mathcal{X})$, via a transformation $\mathcal{T}^{\prime}$ of the
same symmetry group, such that:
$\mathcal{F}(\mathcal{T}(\mathcal{X}))=\mathcal{T}^{\prime}\mathcal{F}(\mathcal{X})$.
Neural networks are therefore equivariant to the actions of a symmetry group
on their inputs if and only if each layer of the network transforms
"equivalently" under any transformation of that group.
* •
Invariance. Invariance is a special case of equivariance, where
$\mathcal{F}(\mathcal{X})$ is invariant to $\mathcal{T}$ if
$\mathcal{T}^{\prime}$ is the trivial group action (i.e., identity):
$\mathcal{F}(\mathcal{T}(\mathcal{X}))=\mathcal{T}^{\prime}\mathcal{F}(\mathcal{X})=\mathcal{F}(\mathcal{X})$.
* •
$\mathcal{F}(\mathcal{X})$ is neither equivariant nor invariant to
$\mathcal{T}$ when the transformation of the input $\mathcal{X}$ does not
commute with the transformation of $\mathcal{F}(\mathcal{X})$:
$\mathcal{F}(\mathcal{T}(\mathcal{X}))\neq\mathcal{T}^{\prime}\mathcal{F}(\mathcal{X})$.
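These definitions can be checked numerically. The sketch below is our own illustrative example (function names and the toy point cloud are invented, not taken from the reviewed literature): a pairwise-distance-based function is invariant to a rotation of the input coordinates, while a centering map is equivariant to it.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))          # toy "molecule": 5 atoms in 3D

# A random orthogonal matrix T (QR decomposition of a Gaussian matrix).
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))

def f_invariant(X):
    """Sum of all pairwise distances: invariant under rotation."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return d.sum()

def f_equivariant(X):
    """Centering the point cloud: rotation-equivariant
    (rotating the input rotates the output)."""
    return X - X.mean(axis=0)

# Invariance: F(T(X)) == F(X)
assert np.isclose(f_invariant(X @ Q.T), f_invariant(X))

# Equivariance: F(T(X)) == T'(F(X)), with T' = T here
assert np.allclose(f_equivariant(X @ Q.T), f_equivariant(X) @ Q.T)
```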
The symmetry properties of a neural network architecture vary depending on the
network type and the symmetry group of interest and are individually discussed
in the following sections. Readers can find an in-depth treatment of
equivariance and group equivariant layers in neural networks elsewhere [23,
24, 25, 26].
The concept of equivariance and invariance can also be used in reference to
the molecular features obtained from a given molecular representation
($\mathcal{X}$), depending on their behavior when a transformation is applied
to $\mathcal{X}$. For instance, many molecular descriptors are invariant to
the rotation and translation of the molecular representation by design [17],
e.g., the Moriguchi octanol-water partitioning coefficient [27], which relies
only on the occurrence of specific molecular substructures for calculation.
The symmetry properties of molecular features extracted by a neural network
depend on both the symmetry properties of the input molecular representation
and of the utilized neural network.
Many relevant molecular properties (e.g., equilibrium energies, atomic
charges, or physicochemical properties such as permeability, lipophilicity or
solubility) are invariant to certain symmetry operations (Box 2). In many
tasks in chemistry, it is thus desirable to design neural networks that
transform equivariantly under the actions of pre-defined symmetry groups.
Exceptions occur if the targeted property changes upon a symmetry
transformation of the molecules (e.g., chiral properties which change under
inversion of the molecule, or vector properties which change under rotation of
the molecule). In such cases, the inductive bias (learning bias) of
equivariant neural networks would not allow for the differentiation of
symmetry-transformed molecules.
While neural networks can be considered as universal function approximators
[28], incorporating prior knowledge such as reasonable geometric information
(geometric priors) has evolved as a core design principle of neural network
modeling [21]. By incorporating geometric priors, GDL makes it possible to
increase the quality of the model and to bypass several bottlenecks related to
the need to force the data into Euclidean geometries (e.g., by feature
engineering).
Moreover, GDL provides novel modeling opportunities, such as data augmentation
in low data regimes [29, 30].
### Box 1: Glossary of selected terms
CoMFA and CoMSIA. Comparative Molecular Field Analysis (CoMFA) [31] and
Comparative Molecular Similarity Indices Analysis (CoMSIA) [32] are popular 3D
QSAR methods developed in the 1980s and 1990s, in which three-dimensional
grids are used to capture the distributions of molecular features (e.g.,
steric, hydrophobic, and electrostatic properties). The obtained molecular
descriptors serve as inputs to a regression model for quantitative bioactivity
prediction.

Convolution. Operation within a neural network that transforms a feature
space into a new feature space and thereby captures the local information
found in the data. Convolutions were first introduced for pixels in images
[33, 34], but the term "convolution" is now used for neural network
architectures covering a variety of data structures such as graphs, point
clouds, spheres, grids, or manifolds.

Density Functional Theory (DFT). A quantum mechanical modeling approach used
to investigate the electronic structure of molecules.

Data augmentation. Artificial increase of the data volume available for model
training, often achieved by leveraging symmetrical properties of the input
data which are not captured by the model (e.g., rotation or permutation).

Feature. An individually measurable or computationally obtainable
characteristic of a given sample (e.g., molecule), in the form of a scalar. In
this review, the term refers to a numeric value characterizing a molecule.
Such molecular features can be computed with rule-based algorithms ("molecular
descriptors") or generated automatically by deep learning from a molecular
representation ("hidden" or "learned" features).

Geometric prior. An inductive bias incorporating information on the symmetric
nature of the system of interest into the neural network architecture. Also
known as symmetry prior.

Inductive bias. Set of assumptions that a learning algorithm (e.g., a neural
network) uses to learn the target function and to make predictions on
previously unseen data points.

One-hot encoding. Method for representing categorical variables as numerical
arrays by obtaining a binary variable (0, 1) for each category. It is often
used to convert sequences (e.g., SMILES strings) into numerical matrices,
suitable as inputs and/or outputs of deep learning models (e.g., chemical
language models).

Quantitative Structure-Activity Relationship (QSAR). Machine learning
techniques aimed at finding an empirical relationship between the molecular
structure (usually encoded as molecular descriptors) and experimentally
determined molecular properties, such as pharmacological activity or toxicity.

Reinforcement learning. A technique used to steer the output of a machine
learning algorithm toward user-defined regions of optimality via a predefined
reward function [35].

Transfer learning. Transfer of knowledge from an existing deep learning model
to a related task for which fewer training samples are available [36].

Unstructured data. Data that are not arranged as vectors of (typically
handcrafted) features. Examples of unstructured data include graphs, images,
and meshes. Molecular representations are typically unstructured, whereas
numerical molecular descriptors (e.g., molecular properties, molecular
"fingerprints") are examples of structured data.

Voxel. Element of a regularly spaced, 3D grid (equivalent to a pixel in 2D
space).
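The one-hot encoding described in the glossary can be sketched in a few lines. This is a minimal illustration of our own, using character-level tokenization over a small hand-picked vocabulary (real SMILES tokenizers handle multi-character tokens, and the vocabulary is built from the training corpus):

```python
import numpy as np

# Hypothetical toy vocabulary; a trained model derives it from the data.
vocab = ["C", "N", "O", "=", "(", ")", "1"]
token_to_idx = {t: i for i, t in enumerate(vocab)}

def one_hot(smiles: str) -> np.ndarray:
    """Encode a SMILES string as a (sequence_length, vocab_size) binary matrix."""
    m = np.zeros((len(smiles), len(vocab)), dtype=int)
    for pos, token in enumerate(smiles):
        m[pos, token_to_idx[token]] = 1
    return m

m = one_hot("C=O")   # formaldehyde fragment as a toy input
# each row contains exactly one 1, in the column of its token
assert m.shape == (3, 7) and (m.sum(axis=1) == 1).all()
```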
Table 1: Summary of selected geometric deep learning (GDL) approaches for molecular modeling. For each approach, the utilized molecular representation(s) and selected applications are reported. 1D, one-dimensional; 2D, two-dimensional; 3D, three-dimensional.

GDL approach | Molecular representation(s) | Applications
---|---|---
Graph neural networks (GNNs) | 2D and 3D molecular graph, and 3D point cloud. | Molecular property prediction in drug discovery [37, 38] and in quantum chemistry for energies [39, 40, 41], forces [42, 41, 43] and wave-functions [44], CASP [45, 46], and generative molecular design [47, 48].
3D convolutional neural networks (3D CNNs) | 3D grid. | Structure-based drug design and property prediction [49, 50].
Mesh convolutional neural networks (geodesic CNNs or 3D GNNs) | Surface (mesh) encoded as a 2D grid or 3D graph. | Protein-protein interaction prediction and ligand-pocket fingerprinting [18].
Recurrent neural networks (RNNs) | String notation (1D grid). | Generative molecular design [19, 51], synthesis planning [52], protein structure prediction [53] and prediction of properties in drug discovery [54, 55].
Transformers | String notation encoded as a graph. | Synthesis planning [56], prediction of reaction yields [57], generative molecular design [58], prediction of properties in drug discovery [59], and protein structure prediction [6, 7].
## Molecular GDL
The application of GDL to molecular systems is challenging, in part because
there are multiple valid ways of representing the same molecular entity.
Molecular representations can be categorized based on their different levels
of abstraction and the physicochemical and geometrical aspects they capture.
(Note that in this review the term "representation" is used solely to denote
human-made models of molecules, e.g., molecular graphs, 3D conformers, and
SMILES strings. To avoid confusion with other usages of the word
"representation" in deep learning, we will use the term "feature" whenever
referring to any numerical description of molecules, obtained either with
rule-based algorithms, i.e., molecular descriptors, or learned (extracted) by
neural networks.)
Importantly, all of these representations are models of the same reality and
are thus "suitable for some purposes, not for others" [60]. GDL provides the
opportunity to experiment with different representations of the same molecule
and leverages their intrinsic geometrical features to increase the quality of
the model. Moreover, GDL has repeatedly proven useful in providing insights
into relevant molecular properties for the task at hand, thanks to its feature
extraction (feature learning) capabilities. In the following sections, we
delineate the most prevalent molecular GDL approaches and their applications
in chemistry, grouped according to the respective molecular representations
used for deep learning: molecular graphs, grids, strings, and surfaces.
### Box 2: Euclidean symmetries in molecular systems
Molecular systems (and three-dimensional representations thereof) can be
considered as objects in Euclidean space. In such a space, one can apply
several symmetry operations (transformations) that are (i) performed with
respect to three symmetry elements (i.e., line, plane, point), and (ii) rigid,
that is, they preserve the Euclidean distance between all pairs of atoms
(i.e., isometry). The Euclidean transformations are as follows:
* •
Rotation. Movement of an object with respect to the radial orientation to a given point.
* •
Translation. Movement of every point of an object by the same distance in a given direction.
* •
Reflection. Mapping of an object to itself through a point (inversion), a line or a plane (mirroring).
All three transformations and their arbitrary finite combinations are included
in the Euclidean group [E(3)]. The special Euclidean group [SE(3)] comprises
only translations and rotations. Molecules are always symmetric in the SE(3)
group, i.e., their intrinsic properties (e.g., biological and physicochemical
properties, and equilibrium energy) are invariant to coordinate rotation and
translation, and combinations thereof. Several molecules are chiral, that is,
some of their (chiral) properties depend on the absolute configuration of
their stereogenic centers, and are thus non-invariant to molecule reflection.
Chirality plays a key role in chemical biology; relevant examples of chiral
molecules are DNA, and several drugs whose enantiomers exhibit markedly
different pharmacological and toxicological properties [61].
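The difference between E(3) and SE(3) symmetry can be illustrated numerically (a toy example of our own, not from the reviewed works): reflecting four points standing in for the substituents around a stereocenter preserves every pairwise distance, yet flips the sign of the oriented tetrahedron volume, which is exactly the information that separates two enantiomers.

```python
import numpy as np

# Four non-coplanar points, a stand-in for substituents around a stereocenter.
X = np.array([[0., 0., 0.],
              [1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.]])

mirror = np.diag([1., 1., -1.])      # reflection through the xy-plane
Xm = X @ mirror

def pairwise(X):
    """Matrix of Euclidean distances between all pairs of points."""
    return np.linalg.norm(X[:, None] - X[None, :], axis=-1)

def signed_volume(X):
    """Oriented volume of the tetrahedron spanned by the four points."""
    return np.linalg.det(X[1:] - X[0]) / 6.0

assert np.allclose(pairwise(X), pairwise(Xm))            # E(3)-invariant
assert np.isclose(signed_volume(X), -signed_volume(Xm))  # flips under reflection
```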
### Learning on molecular graphs
Figure 2: Deep learning on molecular graphs.
a. Message passing graph neural networks applied to two-dimensional (2D)
molecular graphs: 2D molecular graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$
with its labeled vertex (atom) features
($\textbf{v}_{i}\in\mathbb{R}^{d_{v}}$), and edge (bond) features
($\textbf{e}_{ij}\in\mathbb{R}^{d_{e}}$). Vertex features are updated by
iterative message passing for a defined number of time steps $T$ across each
pair of vertices ${v}_{i}$ and ${v}_{j}$, connected via an edge $e_{j,i}$.
After the last message passing convolution, the final vertex
$\textbf{v}_{i}^{t}$ can be (i) mapped to a bond ($y_{ij}$) or atom ($y_{i}$)
property, or (ii) aggregated to form molecular features (that can be mapped to
a molecular property $y$).
b. E(3)-equivariant message passing graph neural networks applied to three-
dimensional (3D) molecular graphs: 3D graphs
$\mathcal{G}_{3}=(\mathcal{V},\mathcal{E},\mathcal{R})$ that are labeled with
atom features ($\textbf{v}_{i}\in\mathbb{R}^{d_{v}}$), their absolute
coordinates in 3D space ($\textbf{r}_{i}\in\mathbb{R}^{3}$) and their edge
features ($\textbf{e}_{ij}\in\mathbb{R}^{d_{e}}$). Iterative spherical
convolutions are used to obtain data-driven atomic features
($\textbf{v}_{i}^{t}$), which can be mapped to atomic properties or
aggregated, and mapped to molecular properties (${y}_{i}$ and ${y}$,
respectively).
#### Molecular graphs
Graphs are among the most intuitive ways to represent molecular structures
[62]. Any molecule can be thought of as a mathematical graph
$\mathcal{G}=(\mathcal{V},\mathcal{E})$, whose vertices
($\textbf{v}_{i}\in\mathcal{V}$) represent atoms, and whose edges
($\textbf{e}_{i,j}\in\mathcal{E}$) represent their connections (Figure 3.1).
In many deep learning applications, molecular graphs can be further
characterized by a set of vertex and edge features.
#### Graph neural networks
Deep learning methods devoted to handling graphs as input are commonly
referred to as graph neural networks (GNNs). When applied to molecules, GNNs
allow for feature extraction by progressively aggregating information from
atoms and their molecular environments (Figure 2a, [63, 64]). Different
architectures of GNNs have been introduced [65], the most popular of which
fall under the umbrella term of message passing neural networks [66, 67, 5].
Such networks iteratively update the vertex features of the $l$-th network layer
($\textbf{v}_{i}^{l}\rightarrow\textbf{v}_{i}^{l+1}$) via graph convolutional
operations, employing at least two learnable functions $\psi$ and $\phi$, and
a local permutation-invariant aggregation operator (e.g., sum):
$\textbf{v}_{i}^{l+1}=\phi\left(\textbf{v}_{i}^{l},\bigoplus_{j\in\mathcal{N}(i)}\psi\left(\textbf{v}_{i}^{l},\textbf{v}_{j}^{l}\right)\right)$.
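A single message-passing update of this form can be sketched with NumPy, using small randomly initialized linear layers as the learnable functions $\psi$ and $\phi$ and a sum as the permutation-invariant aggregator. This is our own illustrative sketch, not a production GNN layer; the permutation check at the end mirrors the symmetry discussion above.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4                                  # vertex feature dimension
# Toy molecular graph: 3 atoms in a chain, given as an adjacency matrix.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
V = rng.normal(size=(3, d))            # vertex (atom) features v_i

W_psi = rng.normal(size=(2 * d, d))    # weights of the message function psi
W_phi = rng.normal(size=(2 * d, d))    # weights of the update function phi
relu = lambda x: np.maximum(x, 0)

def message_passing_step(V, A):
    V_new = np.empty_like(V)
    for i in range(len(V)):
        # messages psi(v_i, v_j) from all neighbors j of i, aggregated by a sum
        msgs = [relu(np.concatenate([V[i], V[j]]) @ W_psi)
                for j in np.flatnonzero(A[i])]
        agg = np.sum(msgs, axis=0)
        # update phi(v_i, aggregated message) -> v_i^{l+1}
        V_new[i] = relu(np.concatenate([V[i], agg]) @ W_phi)
    return V_new

V1 = message_passing_step(V, A)
assert V1.shape == V.shape

# Permutation equivariance: relabeling the atoms relabels the output features.
P = np.array([2, 0, 1])
assert np.allclose(message_passing_step(V[P], A[P][:, P]), V1[P])
```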
Since their introduction as a means to predict quantum chemical properties of
small molecules at the density functional theory (DFT) level [5], GNNs have
found many applications in quantum chemistry [68, 69, 70, 71, 72], drug
discovery [73, 37, 74], CASP [75], and molecular property prediction [76, 77].
When applied to quantum chemistry tasks, GNNs often use E(3)-invariant 3D
information by including radial and angular information into the edge features
of the graph [78, 68, 69, 43, 72], thereby improving the prediction accuracy
of quantum chemical forces and energies for equilibrium and non-equilibrium
molecular conformations, as in the case of SchNet [79, 80] and PaiNN [43].
SchNet-like architectures were used to predict quantum mechanical wave-
functions in the form of Hartree-Fock and DFT density matrices [81], and
differences in quantum properties obtained by DFT and coupled cluster level-
of-theory calculations [82].
GNNs for molecular property prediction have been shown to outperform human-
engineered molecular descriptors for several biologically relevant properties
[83]. Although including 3D information into molecular graphs generally
improved the prediction of drug-relevant properties, no marked difference was
observed between using a single or multiple molecular conformers for network
training [84]. Because of their natural connection with molecular
representations, GNNs seem particularly suitable in the context of explainable
AI (XAI) [85], where they have been used to interpret models predicting
molecular properties of preclinical relevance [38] and quantum chemical
properties [86].
GNNs have been used for de novo molecule generation [87, 88, 89, 47], for
example by performing vertex and edge addition from an initial vertex [87]
(Figure 2b). GNNs have also been combined with variational autoencoders [48,
89, 88, 90] and reinforcement learning [91, 92, 47]. Finally, GNNs have been
applied to CASP [75, 45, 93]; however, the current approaches are limited to
reactions in which one bond is removed between the products and the reactants.
#### Equivariant message passing
A recent area of development of graph-based methods is SE(3)- and
E(3)-equivariant GNNs (equivariant message passing networks) which deal with
the absolute coordinate systems of 3D graphs [94, 95] (Figure 2b). Thus, these
networks may be particularly well-suited to be applied to 3D molecular
representations. Such networks exploit Euclidean symmetries of the system (Box
2).
3D molecular graphs $\mathcal{G}_{3D}=(\mathcal{V},\mathcal{E},\mathcal{R})$,
in addition to their vertex and edge features ($\textbf{v}_{i}\in\mathcal{V}$
and $\textbf{e}_{ij}\in\mathcal{E}$, respectively), also encode information on
the vertex position in a 3D coordinate system
($\textbf{r}_{i}\in\mathcal{R}$). By employing E(3)- [41] and
SE(3)-equivariant [94] convolutions, such networks have shown high accuracy
for predicting several quantum chemical properties such as energies [40, 96,
39, 42, 97, 43, 98], interatomic potentials for molecular dynamics simulations
[42, 99, 41], and wave-functions [44]. SE(3)-equivariant neural networks do
not commute with reflections of the input (i.e., they are non-equivariant to
reflections), which enables SE(3)-equivariant models to distinguish between
stereoisomers of chiral molecules, including enantiomers [94].
E(3)-equivariant neural networks, by contrast, transform equivariantly under
reflections, which allows E(3)-equivariant models to distinguish only between
diastereomers and not enantiomers. SE(3) neural networks are computationally
expensive due to their use of spherical harmonics [100] and Wigner D-functions
[101] to compute learnable weight kernels. E(3)-equivariant neural networks
are computationally more efficient and have been shown to perform on par
with, or better than, SE(3)-equivariant networks, e.g., for the modeling of quantum
chemical properties and dynamic systems [41]. Equivariant message passing
networks have been applied to predict the quantum mechanical wave-function of
nuclei and electron-based representations in an end-to-end fashion [102, 103,
104]. However, such networks are currently limited to small molecular systems
because of the large size of the learned matrices, which scale quadratically
with the number of electrons in the system.
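The coordinate update at the heart of such equivariant message passing can be sketched as follows. This is our own simplified, EGNN-style illustration with random weights (not the exact formulation of any cited work): invariant scalars (features and squared distances) modulate the equivariant difference vectors, so rotating, translating, or reflecting the input coordinates transforms the output coordinates identically.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 4, 5
V = rng.normal(size=(n, d))            # invariant vertex features
R = rng.normal(size=(n, 3))            # 3D coordinates r_i
W = rng.normal(size=(2 * d + 1, 1))    # weights of a scalar message function

def coord_update(V, R):
    """One E(3)-equivariant coordinate update on a fully connected graph."""
    R_new = R.copy()
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dist2 = np.sum((R[i] - R[j]) ** 2)              # invariant input
            m = np.tanh(np.concatenate([V[i], V[j], [dist2]]) @ W)
            R_new[i] += (R[i] - R[j]) * m                   # equivariant direction
    return R_new

Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal transform
t = rng.normal(size=3)                         # random translation

# E(3) equivariance: update(Q R + t) == Q update(R) + t
assert np.allclose(coord_update(V, R @ Q.T + t),
                   coord_update(V, R) @ Q.T + t)
```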
### Learning on grids
Grids capture the properties of a system at regularly spaced intervals. Based
on the number of dimensions included in the system, grids can be 1D (e.g.,
sequences), 2D (e.g., RGB images), 3D (e.g., cubic lattices), or higher-
dimensional. Grids are defined by a Euclidean geometry and can be considered
as a graph with a special adjacency, where (i) the vertices have a fixed
ordering that is defined by the spatial dimensions of the grid, and (ii) each
vertex has an identical number of adjacent edges and is therefore
indistinguishable from all other vertices structure-wise [21]. These two
properties render local convolutions applied to a grid inherently permutation
invariant, and provide a strong geometric prior for translation invariance
(e.g. by weight sharing in convolutions). These grid properties have
critically determined the success of convolutional neural networks (CNNs),
e.g., in computer vision [34, 33], natural language processing [105, 9], and
speech recognition [10, 11].
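The translation prior mentioned above can be made concrete: a convolution with shared weights commutes with shifts of its input. The snippet below is our own minimal illustration (circular padding is chosen so the property holds exactly at the boundaries):

```python
import numpy as np

def circular_conv(x, w):
    """1D cross-correlation with circular padding and weights w shared
    across all grid positions."""
    n, k = len(x), len(w)
    return np.array([sum(w[j] * x[(i + j) % n] for j in range(k))
                     for i in range(n)])

rng = np.random.default_rng(2)
x = rng.normal(size=8)       # signal on a 1D grid
w = rng.normal(size=3)       # convolution kernel

shift = 3
# Translation equivariance: conv(shift(x)) == shift(conv(x))
assert np.allclose(circular_conv(np.roll(x, shift), w),
                   np.roll(circular_conv(x, w), shift))
```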
#### Molecular grids
Molecules can be represented as grids in different ways. 2D grids (e.g.,
molecular structure drawings) are generally more useful for visualization
rather than prediction, with few exceptions [106]. In analogy with popular
pre-deep-learning approaches, for example Comparative Molecular Field Analysis
(CoMFA) [31] and Comparative Molecular Similarity Indices Analysis (CoMSIA)
[32], 3D grids are often used to capture the spatial distribution of the
properties within one (or more) molecular conformers. Such representations are
then used as inputs to 3D CNNs. 3D CNNs are characterized by a greater
resource efficiency than equivariant GNNs, which until now have mainly been
applied to molecules with fewer than approximately 1000 atoms. Thus, 3D CNNs
have often been the method of choice when the protein structure has to be
considered, e.g., for protein-ligand binding affinity prediction [49, 50, 107,
108, 109], or active site recognition [110].
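Constructing such a 3D grid from atomic coordinates amounts to voxelization. Below is a minimal occupancy-grid sketch of our own (a deliberate simplification; real pipelines typically smear atoms with Gaussian densities and use one channel per atom type or property):

```python
import numpy as np

def voxelize(coords, box=8.0, resolution=1.0):
    """Map 3D atom coordinates to a binary occupancy grid centered on the
    molecule. Units and box size are illustrative assumptions."""
    n = int(box / resolution)
    grid = np.zeros((n, n, n), dtype=int)
    centered = coords - coords.mean(axis=0)        # center the conformer
    idx = np.floor(centered / resolution).astype(int) + n // 2
    idx = np.clip(idx, 0, n - 1)                   # clip atoms outside the box
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

coords = np.array([[0.0, 0.0, 0.0],
                   [1.5, 0.0, 0.0],
                   [0.0, 1.5, 0.0]])   # toy 3-atom conformer
g = voxelize(coords)
assert g.shape == (8, 8, 8) and g.sum() == 3      # three occupied voxels
```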
### Learning on molecular surfaces
Molecular surfaces can be defined by the surface enclosing the 3D structure of
a molecule at a certain distance from each atom center. Each point on such a
continuous surface can be further characterized by its chemical (e.g.,
hydrophobic, electrostatic) and geometric features (e.g., local shape,
curvature). From a geometrical perspective, molecular surfaces are considered
as 3D meshes, i.e., a set of polygons (faces) that describe how the mesh
coordinates exist in the 3D space [111]. Their vertices can be represented by
a 2D grid structure (where four vertices on the mesh define a pixel) or by a
3D graph structure. The grid- and graph-based structures of meshes enable
applications of 2D CNNs, geodesic CNNs and GNNs to learn on mesh-based
molecular surfaces. Recently, geodesic (2D) CNNs have been applied to learn on
mesh-based representations of protein surfaces to predict protein-protein
interactions and recognize corresponding binding sites [18]. This approach
generated data-driven fingerprints that are relevant for specific biomolecular
interactions. Approaches like 2D CNNs applied to meshes come with certain
limitations, such as the need for rotational data augmentation (due to their
non-equivariance to rotations) and for enforcing a homogeneous mesh resolution
(i.e., uniform spacing of all the points in the mesh). Recently introduced
GNNs for mesh-based representations have been shown to incorporate rotational
equivariance into their network architecture and allow for heterogeneous mesh
resolution [112]. Such GNNs are computationally efficient and have potential
for modeling macromolecular structures; however, they have not yet found
applications to molecular systems. Other studies have used 3D voxel-based
surface representations of (macro)molecules as inputs to 3D CNNs, e.g., for
protein-ligand affinity [113] and protein binding-site [114] prediction.
### Learning on string representations
#### Molecular strings
Molecules can be represented as molecular strings, i.e., linear sequences of
alphanumeric symbols. Molecular strings were originally developed as manual
ciphering tools to complement systematic chemical nomenclature [115, 116] and
later became suitable for data storage and retrieval. Some of the most popular
string-based representations are the Wiswesser Line Notation [117], the Sybyl
line notation [118], the International Chemical Identifier (InChI) [119],
Hierarchical Editing Language for Macromolecules [120], and the Simplified
Molecular Input Line Entry System (SMILES) [20].
Each type of linear representation can be considered as a "chemical language."
In fact, such notations possess a defined syntax, i.e., not all possible
combinations of alphanumerical characters will lead to a “chemically valid”
molecule. Furthermore, these notations possess semantic properties: depending
on how the elements of the string are combined, the corresponding molecule
will have different physicochemical and biological properties. These
characteristics make it possible to extend the deep learning methods developed
for language and sequence modeling to the analysis of molecular strings for
"chemical language modeling" [121, 122].
SMILES strings – in which letters are used to represent atoms, and symbols and
numbers are used to encode bond types, connectivity, branching, and
stereochemistry (Figure 3a) – have become the most frequently employed data
representation method for sequence-based deep learning [19, 52]. Whereas
several other string representations have been tested in combination with deep
learning, e.g., InChI [123], DeepSMILES [124], and self-referencing embedded
strings (SELFIES) [125], SMILES remains the de facto representation of choice
for chemical language modeling [30]. The following text introduces the most
prominent chemical language modeling methods, along with selected examples of
their application to chemistry.
Figure 3: Chemical language modeling.
a. SMILES strings, in which atom types are represented by their element
symbols, and bond types and branching are indicated by other predefined
alphanumeric symbols. For each molecule, the SMILES algorithm produces a
string of $T$ symbols ("tokens")
($\textbf{s}=\\{{s}_{1},{s}_{2},\ldots,{s}_{T}\\}$), which encodes the
molecular connectivity, herein illustrated via the color that indicates the
corresponding atomic position in the graph (left) and string (right). A
molecule can be encoded via different SMILES strings depending on the chosen
starting atom. Three random permutations incorporating identical molecular
information are presented.
b. Recurrent neural networks, at any sequence position $t$, learn to predict the
next token ${s}_{t+1}$ of a sequence s given the current sequence
($\\{{s}_{1},{s}_{2},\ldots,{s}_{t}\\}$) and hidden state ${h}_{t}$.
c. Transformer-based language models, in which the input sequence is
structured as a graph. Vertices are featurized according to their token
identity (e.g., via token embedding, $\textbf{v}_{i}\in\mathbb{R}^{d_{v}}$)
and their position in the sequence (e.g., via sinusoidal positional encoding,
$\textbf{p}_{i}\in\mathbb{R}^{d_{v}}$). During transformer learning, the
vertices are updated via residual attention blocks. After passing $T$
attention layers, an individual feature representation $\textbf{s}_{t}^{T}$
for each token is obtained.
#### Chemical language models
Chemical language models are machine learning methods that can handle
molecular sequences as inputs and/or outputs. The most common algorithms for
chemical language modeling are Recurrent neural networks (RNNs) and
Transformers:
* •
RNNs (Figure 3b) [126] are neural networks that process sequence data as
Euclidean structures, usually via one-hot-encoding. RNNs model a dynamic
system in which the hidden state (${h}_{t}$) of the network at any $t$-th time
point (i.e., at any t-th position in the sequence) depends on both the current
observation (${s}_{t}$) and the previous hidden state (${h}_{t-1}$). RNNs can
process sequence inputs of arbitrary lengths and provide outputs of arbitrary
lengths. RNNs are often used in an "auto-regressive" fashion, i.e., to predict
the probability distribution over the next possible elements (tokens) at the
time step $t+1$, given the current hidden state (${h}_{t}$) and the preceding
portions of the sequence. Several RNN architectures have been proposed to
solve the gradient vanishing or exploding problems of "vanilla" RNNs [127,
128], such as long short-term memory [105] and gated recurrent units [129].
* •
Transformers (Figure 3c) process sequence data as non-Euclidean structures, by
encoding sequences as either (i) a fully connected graph, or (ii) a
sequentially connected graph, where each token is only connected to the
previous tokens in the sequence. The former approach is often used for feature
extraction in general (e.g., in a Transformer-encoder), whereas the latter is
employed for next-token prediction (e.g., in a Transformer-decoder). The
positional information of tokens is usually encoded by positional embedding or
sinusoidal positional encoding [8]. Transformers combine graph-like processing
with the so-called attention layers. Attention layers allow Transformers to
focus on ("pay attention to") the perceived relevant tokens for each
prediction. Transformers have been particularly successful in sequence-to-
sequence tasks, such as language translation.
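The auto-regressive decoding loop shared by RNN and Transformer-decoder language models can be sketched independently of the network architecture. In this toy example of our own, a hand-written bigram table plays the role of the trained model's next-token distribution, and greedy decoding picks the most probable token until the end-of-sequence symbol:

```python
# Toy stand-in for a trained chemical language model: bigram probabilities
# over a tiny, invented token set ("^" = start token, "$" = end token).
probs = {
    "^": {"C": 0.8, "O": 0.2},
    "C": {"O": 0.5, "C": 0.3, "$": 0.2},
    "O": {"$": 0.6, "C": 0.4},
}

def greedy_decode(max_len=10):
    """Auto-regressive generation: repeatedly predict the next token given
    the sequence so far, stopping at the end token."""
    tokens = ["^"]
    while len(tokens) < max_len:
        dist = probs[tokens[-1]]
        nxt = max(dist, key=dist.get)   # greedy: most probable next token
        if nxt == "$":
            break
        tokens.append(nxt)
    return "".join(tokens[1:])          # drop the start token

print(greedy_decode())  # → "CO"
```

A trained RNN would compute `dist` from its hidden state, and a Transformer-decoder from attention over all previous tokens; sampling from `dist` instead of taking the argmax yields diverse de novo molecules.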
### Box 3: Structure-activity landscape modeling with geometric deep learning
This worked example shows how geometric deep learning (GDL) can be used to
interpret the structure-activity landscape learned by a trained model.
Starting from a publicly available molecular dataset containing estrogen
receptor binding information [130], we trained an E(3)-equivariant graph
neural network (six hidden layers, 128 hidden neurons per layer) and analyzed
the learned features and their relationship to ligand binding to the estrogen
receptor. The figure shows an analysis of the learned molecular features
(third hidden layer, analyzed via principal component analysis; the first two
principal components are shown), and how these features relate to the density
of active and inactive molecules in the chemical space. The network
successfully separated the molecules based on both their experimental
bioactivity and their structural features (e.g., atom scaffolds [131]) and
might offer novel opportunities for explainable AI with GDL.
Extending early studies [132, 19, 133], RNNs for next-token prediction have
been routinely applied to the de novo generation of molecules with desired
biological or physicochemical properties, in combination with transfer [19,
134, 135, 136] or reinforcement learning [137, 138]. In this context, RNNs
have shown remarkable capability to learn the SMILES syntax [19, 134], and
capture high-level molecular features ("semantics"), such as physicochemical
[19, 134] and biological properties [135, 136, 139, 132]. In this context,
data augmentation based on SMILES randomization [140, 133] or bidirectional
learning [141] has proven effective for improving the quality of the
chemical language learned by RNNs. Most published studies have used SMILES
strings or derivative representations. In a few studies, one-letter amino acid
sequences were employed for peptide design [142, 143, 144, 51, 145]. RNNs have
also been applied to predict ligand–protein interactions and the
pharmacokinetic properties of drugs [55, 54], protein secondary structure [53,
146], and the temporal evolution of molecular trajectories [147]. RNNs have
been applied for molecular feature extraction [148, 149], showing that the
learned features outperformed both traditional molecular descriptors and
graph-convolution methods for virtual screening and property prediction [148].
The Fréchet ChemNet distance [150], which is based on the physicochemical and
biological features learned by an RNN model, has become the de facto reference
method to capture molecular similarity in this context.
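The Fréchet ChemNet distance compares two sets of molecules through Gaussians fitted to their learned activations. A minimal sketch of that underlying comparison is shown below, simplified to diagonal covariances; the published metric uses full covariance matrices and a matrix square root, and all function names here are illustrative.

```python
def gaussian_stats(activations):
    """Per-dimension mean and standard deviation of a set of activation vectors."""
    n, d = len(activations), len(activations[0])
    mu = [sum(a[j] for a in activations) / n for j in range(d)]
    var = [sum((a[j] - mu[j]) ** 2 for a in activations) / n for j in range(d)]
    return mu, [v ** 0.5 for v in var]

def frechet_distance_diag(mu1, sd1, mu2, sd2):
    """Squared Fréchet (Wasserstein-2) distance between two Gaussians,
    simplified to diagonal covariances:
        d^2 = |mu1 - mu2|^2 + sum_j (sd1_j - sd2_j)^2
    """
    return (sum((a - b) ** 2 for a, b in zip(mu1, mu2))
            + sum((a - b) ** 2 for a, b in zip(sd1, sd2)))
```

Each activation set would come from passing one molecule collection (e.g., generated vs. reference) through the pretrained network; identical distributions give a distance of zero.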
Molecular Transformers have been applied to CASP, which can be cast as a
sequence-to-sequence translation task, in which the string representations of
the reactants are mapped to those of the corresponding product, or vice versa.
Since their initial applications [56], Transformers have been employed to
predict multi-step syntheses [151], regio- and stereoselective reactions
[152], enzymatic reaction outcomes [153], and reaction yields and classes [57,
154]. Recently, Transformers have been applied to molecular property
prediction [155, 59] and optimization [156]. Transformers have also been used
for de novo molecule design by learning to translate the target protein
sequence into SMILES strings of the corresponding ligands [58].
Representations learned from SMILES strings by Transformers have shown promise
for property prediction in low-data regimes [157]. Furthermore, Transformers
have recently been combined with E(3) and SE(3) equivariant layers to learn
the 3D structures of proteins from their amino-acid sequence [6, 7]. These
equivariant Transformers achieve state-of-the-art performance in protein
structure prediction.
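The casting of CASP as sequence-to-sequence translation, mentioned above, starts from reaction SMILES, which follow the standard `reactants>agents>products` convention. A minimal sketch of turning one reaction into a (source, target) training pair (the function name and the agent handling are illustrative; the Molecular Transformer's tokenization details are omitted):

```python
def reaction_to_pair(rxn_smiles, retro=False):
    """Turn a reaction SMILES ('reactants>agents>products') into a
    (source, target) string pair for sequence-to-sequence training.

    Forward prediction maps reactants (plus agents, if any) to the product;
    retro=True swaps the roles for single-step retrosynthesis.
    """
    reactants, agents, products = rxn_smiles.split(">")
    src = reactants + ("." + agents if agents else "")
    return (products, reactants) if retro else (src, products)
```

A model trained on such pairs learns forward reaction prediction; the `retro=True` variant yields the inverse mapping used for retrosynthetic planning.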
Other deep learning approaches have relied on string-based representations for
de novo design, e.g., conditional generative adversarial networks [158, 159,
160] and variational autoencoders [161, 162]. Most of these models, however,
have shown limited or, at best, comparable ability to learn the SMILES syntax
relative to RNNs. 1D CNNs [163, 164] and self-attention networks [165, 166,
167] have been used with SMILES for property prediction. Recently, deep
learning on amino acid sequences for property prediction was shown to perform
on par with approaches based on human-engineered features [168].
## Conclusions and outlook
Geometric deep learning in chemistry has allowed researchers to leverage the
symmetries of different unstructured molecular representations, resulting in a
greater flexibility and versatility of the available computational models for
molecular structure generation and property prediction. Such approaches
represent a valid alternative to classical chemoinformatics approaches that
are based on molecular descriptors or other human-engineered features. For
modeling tasks that are usually characterized by the need for highly
engineered rules (e.g., chemical transformations for de novo design, and
reactive site specification for CASP), the benefits of GDL have been
consistently shown. In published applications of GDL, each molecular
representation has shown characteristic strengths and weaknesses.
Molecular strings, like SMILES, have proven particularly suited for generative
deep learning tasks, such as de novo design and CASP. This success may be due
to the relatively simple syntax of such a chemical language, which facilitates
next-token and sequence-to-sequence prediction. For molecular property
prediction, however, SMILES strings can be limiting because of their
non-univocity (the same molecule can be written as many valid strings).
Molecular graphs have shown particular usefulness for property prediction,
partly because of their human interpretability and ease of inclusion of
desired edge and node features. The incorporation of 3D information (e.g.,
with equivariant message passing) is useful for quantum chemistry related
modeling, whereas in drug discovery applications, the resulting gains have
often failed to clearly outweigh the increased model complexity.
E(3)-equivariant graph neural networks have also been applied for
conformation-aware de novo design [169], but prospective experimental
validation studies have not yet been published.
Molecular grids have become the de facto standard for 3D representations of
large molecular systems, due to (i) their ability to capture information at a
user-defined resolution (voxel density) and (ii) the Euclidean structure of
the input grid.
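Both properties listed above can be made concrete with a toy voxelization routine: atoms are binned into a regular grid whose resolution (voxel edge length) is chosen by the user, yielding a Euclidean structure that standard CNNs can consume. A pure-Python sketch with illustrative names, using simple occupancy counts as the channel (real pipelines typically use per-element channels and Gaussian smearing):

```python
def voxelize(atoms, origin, resolution, shape):
    """Map 3D atom coordinates onto a sparse occupancy grid.

    atoms:      iterable of (x, y, z) coordinates
    origin:     (x0, y0, z0) corner of the grid
    resolution: voxel edge length (user-defined)
    shape:      (nx, ny, nz) number of voxels per axis
    Atoms falling outside the grid are ignored.
    """
    grid = {}  # sparse storage: (i, j, k) -> atom count
    for x, y, z in atoms:
        idx = tuple(int((c - o) // resolution)
                    for c, o in zip((x, y, z), origin))
        if all(0 <= i < s for i, s in zip(idx, shape)):
            grid[idx] = grid.get(idx, 0) + 1
    return grid
```

Halving `resolution` doubles the number of voxels per axis, which is exactly the user-controlled trade-off between spatial detail and input size noted in point (i).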
Finally, molecular surfaces are currently at the forefront of GDL. We expect
many interesting applications of GDL on molecular surfaces in the near future.
To further the application and impact of GDL in chemistry, an evaluation of
the optimal trade-off between algorithmic complexity, performance, and model
interpretability will be required. These aspects are crucial for reconciling
the “two QSARs” [170] and for connecting the computer science and chemistry
communities.
We encourage GDL practitioners to include aspects of interpretability in their
models (e.g., via XAI [85]) whenever possible and transparently communicate
with domain experts. The feedback from domain experts will also be crucial to
develop new "chemistry-aware" architectures, and further the potential of
molecular GDL for concrete prospective applications.
The potential of GDL for molecular feature extraction has not yet been fully
explored. Several studies have shown the benefits of learned representations
compared to classical molecular descriptors, but in other cases, GDL failed to
live up to its promise in terms of superior learned features. Although there
are several benchmarks for evaluating machine learning models for property
prediction [171, 172] and molecule generation [173, 174], at present, there is
no such framework to enable the systematic evaluation of the usefulness of
data-driven features learned by AI. Such benchmarks and systematic studies are
key to obtaining an unvarnished assessment of deep representation learning.
Moreover, investigating the relationships between the learned features and the
physicochemical and biological properties of the input molecules will augment
the interpretability and applicability of GDL, e.g., to modeling
structure–function relationships like structure–activity landscapes (Box 3).
Compared to conventional QSAR approaches, in which the assessment of the
applicability domain (i.e., the region of the chemical space where model
predictions are considered reliable) has been routinely performed,
contemporary GDL studies lack such an assessment. This systematic gap might
constitute one of the limiting factors to the more widespread use of GDL
approaches for prospective studies, as it could lead to unreliable
predictions, e.g., for molecules with different mechanisms of action,
functional groups, or physicochemical properties than the training data. In
the future, it will be necessary to devise “geometry-aware” approaches for
applicability domain assessment.
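As one example of what such an assessment could look like, a simple distance-to-training-data check in descriptor or learned-feature space flags queries that lie far from the training distribution. This is a classical kNN-style baseline rather than the "geometry-aware" approach called for above; the function names, the choice of k, and the threshold are all illustrative.

```python
def knn_mean_distance(query, train, k=3):
    """Mean Euclidean distance from a query vector to its k nearest
    training vectors (in descriptor or learned-feature space)."""
    dists = sorted(sum((a - b) ** 2 for a, b in zip(query, t)) ** 0.5
                   for t in train)
    return sum(dists[:k]) / k

def in_domain(query, train, threshold, k=3):
    """Crude applicability-domain check: a prediction is flagged as
    unreliable when the query lies farther from the training data than
    a calibrated threshold."""
    return knn_mean_distance(query, train, k) <= threshold
```

In practice, the threshold would be calibrated on the training set itself (e.g., from the distribution of leave-one-out kNN distances), and predictions for out-of-domain queries would be reported with an explicit reliability warning.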
Another opportunity will be to leverage less explored molecular
representations for GDL. For instance, the electronic structure of molecules
has vast potential for tasks such as CASP, molecular property prediction, and
prediction of macromolecular interactions (e.g., protein–protein interactions).
Although accurate statistical and quantum mechanical simulations are
computationally expensive, modern quantum machine learning models [175, 176]
trained on large quantum data collections [177, 178, 179] allow quantum
information to be accessed much faster with high accuracy. This aspect could
enable quantum and electronic featurization of extensive molecular datasets,
to be used as input molecular representations for the task of interest.
Deep learning can be applied to a multitude of biological and chemical
representations. The corresponding deep neural network models have the
potential to augment human creativity, paving the way for new scientific
studies that were previously unfeasible. However, research has only explored
the tip of the iceberg. One of the most significant catalysts for the
integration of deep learning in molecular sciences may be the responsibility
of academic institutions to foster interdisciplinary collaboration,
communication, and education. Picking the "high-hanging fruit" will only be
possible with a deep understanding of both chemistry and computer science,
along with out-of-the-box thinking and collaborative creativity. In such a
setting, we expect molecular GDL to increase the understanding of molecular
systems and biological phenomena.
## Acknowledgements
This research was supported by the Swiss National Science Foundation (SNSF,
grant no. 205321_182176) and the ETH RETHINK initiative.
## Competing interest
G.S. declares a potential financial conflict of interest as co-founder of
inSili.com LLC, Zurich, and in his role as scientific consultant to the
pharmaceutical industry.
## List of abbreviations
AI: Artificial Intelligence
CASP: Computer-aided Synthesis Planning
CNN: Convolutional Neural Network
DFT: Density Functional Theory
E(3): Euclidean Symmetry Group
GDL: Geometric Deep Learning
GNN: Graph Neural Network
QSAR: Quantitative Structure-Activity Relationship
RNN: Recurrent Neural Network
SE(3): Special Euclidean Symmetry Group
SMILES: Simplified Molecular Input Line Entry System
XAI: Explainable Artificial Intelligence
1D: One-dimensional
2D: Two-dimensional
3D: Three-dimensional
## References
* [1] Yann LeCun, Yoshua Bengio and Geoffrey Hinton “Deep learning” In _Nature_ 521.7553 Nature Publishing Group, 2015, pp. 436–444
* [2] Jürgen Schmidhuber “Deep learning in neural networks: An overview” In _Neural Networks_ 61 Elsevier, 2015, pp. 85–117
* [3] Erik Gawehn, Jan A Hiss and Gisbert Schneider “Deep learning in drug discovery” In _Molecular Informatics_ 35.1 Wiley Online Library, 2016, pp. 3–14
* [4] José Jiménez-Luna, Francesca Grisoni, Nils Weskamp and Gisbert Schneider “Artificial intelligence in drug discovery: Recent advances and future perspectives” In _Expert Opinion on Drug Discovery_ Taylor & Francis, 2021, pp. 1–11
* [5] Justin Gilmer et al. “Neural message passing for quantum chemistry” In _International Conference on Machine Learning_ , 2017, pp. 1263–1272 PMLR
* [6] John Jumper et al. “Highly accurate protein structure prediction with AlphaFold” In _Nature_ Nature Publishing Group, 2021, pp. 1–11
* [7] Minkyung Baek et al. “Accurate prediction of protein structures and interactions using a three-track neural network” In _Science_ American Association for the Advancement of Science, 2021
* [8] Ashish Vaswani et al. “Attention is all you need” In _Advances in Neural Information Processing Systems_ , 2017, pp. 5998–6008
* [9] Tom B Brown et al. “Language models are few-shot learners” In _arXiv:2005.14165_ , 2020
* [10] Geoffrey Hinton et al. “Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups” In _IEEE Signal Processing Magazine_ 29.6 IEEE, 2012, pp. 82–97
* [11] Tomáš Mikolov et al. “Strategies for training large scale neural network language models” In _2011 IEEE Workshop on Automatic Speech Recognition & Understanding_, 2011, pp. 196–201 IEEE
* [12] Alex Krizhevsky, Ilya Sutskever and Geoffrey E Hinton “Imagenet classification with deep convolutional neural networks” In _Communications of the ACM_ 60.6 ACM New York, NY, USA, 2017, pp. 84–90
* [13] Clement Farabet, Camille Couprie, Laurent Najman and Yann LeCun “Learning hierarchical features for scene labeling” In _IEEE transactions on pattern analysis and machine intelligence_ 35.8 IEEE, 2012, pp. 1915–1929
* [14] Jonathan J Tompson, Arjun Jain, Yann LeCun and Christoph Bregler “Joint training of a convolutional network and a graphical model for human pose estimation” In _Advances in Neural Information Processing Systems_ , 2014, pp. 1799–1807
* [15] Michael M Bronstein et al. “Geometric deep learning: going beyond euclidean data” In _IEEE Signal Processing Magazine_ 34.4 IEEE, 2017, pp. 18–42
* [16] Federico Monti et al. “Fake news detection on social media using geometric deep learning” In _arXiv:1902.06673_ , 2019
* [17] Roberto Todeschini and Viviana Consonni “Molecular descriptors for chemoinformatics: volume I: alphabetical listing/volume II: appendices, references” John Wiley & Sons, 2009
* [18] Pablo Gainza et al. “Deciphering interaction fingerprints from protein molecular surfaces using geometric deep learning” In _Nature Methods_ 17.2 Nature Publishing Group, 2020, pp. 184–192
* [19] Marwin HS Segler, Thierry Kogej, Christian Tyrchan and Mark P Waller “Generating focused molecule libraries for drug discovery with recurrent neural networks” In _ACS Central Science_ 4.1 ACS Publications, 2018, pp. 120–131
* [20] David Weininger “SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules” In _Journal of Chemical Information and Computer Sciences_ 28.1 ACS Publications, 1988, pp. 31–36
* [21] Michael M Bronstein, Joan Bruna, Taco Cohen and Petar Veličković “Geometric deep learning: Grids, groups, graphs, geodesics, and gauges” In _arXiv:2104.13478_ , 2021
* [22] Jerrold Marsden and Alan Weinstein “Reduction of symplectic manifolds with symmetry” In _Reports on mathematical physics_ 5.1 Elsevier, 1974, pp. 121–130
* [23] Taco S Cohen and Max Welling “Group equivariant convolutional networks” In _Proceedings of the 33rd International Conference on International Conference on Machine Learning-Volume 48_ , 2016, pp. 2990–2999
* [24] Taco S Cohen and Max Welling “Steerable cnns” In _arXiv:1612.08498_ , 2016
* [25] Taco S Cohen, Mario Geiger, Jonas Köhler and Max Welling “Spherical CNNs” In _International Conference on Learning Representations_ , 2018
* [26] Risi Kondor and Shubhendu Trivedi “On the generalization of equivariance and convolution in neural networks to the action of compact groups” In _International Conference on Machine Learning_ , 2018, pp. 2747–2755 PMLR
* [27] Ikuo Moriguchi et al. “Simple method of calculating octanol/water partition coefficient” In _Chemical and pharmaceutical bulletin_ 40.1 The Pharmaceutical Society of Japan, 1992, pp. 127–130
* [28] George Cybenko “Approximation by superpositions of a sigmoidal function” In _Mathematics of control, signals and systems_ 2.4 Springer, 1989, pp. 303–314
* [29] Igor V Tetko, Pavel Karpov, Ruud Van Deursen and Guillaume Godin “State-of-the-art augmented NLP transformer models for direct and single-step retrosynthesis” In _Nature communications_ 11.1 Nature Publishing Group, 2020, pp. 1–11
* [30] Michael A Skinnider, R Greg Stacey, David S Wishart and Leonard J Foster “Chemical language models enable navigation in sparsely populated chemical space” In _Nature Machine Intelligence_ Nature Publishing Group, 2021, pp. 1–12
* [31] Richard D Cramer, David E Patterson and Jeffrey D Bunce “Comparative molecular field analysis (CoMFA). 1. Effect of shape on binding of steroids to carrier proteins” In _Journal of the American Chemical Society_ 110.18 ACS Publications, 1988, pp. 5959–5967
* [32] Gerhard Klebe “Comparative molecular similarity indices analysis: CoMSIA” In _3D QSAR in drug design_ Springer, 1998, pp. 87–104
* [33] Yann LeCun and Yoshua Bengio “Convolutional networks for images, speech, and time series” In _The handbook of brain theory and neural networks_ 3361.10, 1995, pp. 1995
* [34] Yann LeCun, Léon Bottou, Yoshua Bengio and Patrick Haffner “Gradient-based learning applied to document recognition” In _Proceedings of the IEEE_ 86.11 IEEE, 1998, pp. 2278–2324
* [35] Richard S Sutton and Andrew G Barto “Reinforcement learning: An introduction” MIT press, 2018
* [36] Sinno Jialin Pan and Qiang Yang “A survey on transfer learning” In _IEEE Transactions on knowledge and data engineering_ 22.10 IEEE, 2009, pp. 1345–1359
* [37] Evan N Feinberg et al. “PotentialNet for molecular property prediction” In _ACS Central Science_ 4.11 ACS Publications, 2018, pp. 1520–1530
* [38] José Jiménez-Luna, Miha Skalic, Nils Weskamp and Gisbert Schneider “Coloring molecules with explainable artificial intelligence for preclinical relevance assessment” In _Journal of Chemical Information and Modeling_ 61.3 ACS Publications, 2021, pp. 1083–1094
* [39] Benjamin Kurt Miller, Mario Geiger, Tess E Smidt and Frank Noé “Relevance of rotationally equivariant convolutions for predicting molecular properties” In _arXiv:2008.08461_ , 2020
* [40] Brandon Anderson, Truong Son Hy and Risi Kondor “Cormorant: Covariant Molecular Neural Networks” In _Advances in Neural Information Processing Systems_ 32, 2019, pp. 14537–14546
* [41] Victor Garcia Satorras, Emiel Hoogeboom and Max Welling “E (n) Equivariant Graph Neural Networks” In _arXiv:2102.09844_ , 2021
* [42] Fabian Fuchs, Daniel Worrall, Volker Fischer and Max Welling “SE (3)-Transformers: 3D Roto-Translation Equivariant Attention Networks” In _Advances in Neural Information Processing Systems_ 33, 2020
* [43] Kristof T Schütt, Oliver T Unke and Michael Gastegger “Equivariant message passing for the prediction of tensorial properties and molecular spectra” In _arXiv:2102.03150_ , 2021
* [44] Oliver T Unke et al. “SE (3)-equivariant prediction of molecular wavefunctions and electronic densities” In _arXiv:2106.02347_ , 2021
* [45] Connor W Coley et al. “A graph-convolutional neural network model for the prediction of chemical reactivity” In _Chemical Science_ 10.2 Royal Society of Chemistry, 2019, pp. 370–377
* [46] Wengong Jin, Connor Coley, Regina Barzilay and Tommi Jaakkola “Predicting organic reaction outcomes with weisfeiler-lehman network” In _Advances in Neural Information Processing Systems_ , 2017, pp. 2607–2616
* [47] Zhenpeng Zhou et al. “Optimization of molecules via deep reinforcement learning” In _Scientific Reports_ 9.1 Nature Publishing Group, 2019, pp. 1–10
* [48] Wengong Jin, Regina Barzilay and Tommi Jaakkola “Junction tree variational autoencoder for molecular graph generation” In _International Conference on Machine Learning_ , 2018, pp. 2323–2332 PMLR
* [49] José Jiménez, Miha Skalic, Gerard Martinez-Rosell and Gianni De Fabritiis “K deep: Protein–ligand absolute binding affinity prediction via 3d-convolutional neural networks” In _Journal of Chemical Information and Modeling_ 58.2 ACS Publications, 2018, pp. 287–296
* [50] Matthew Ragoza et al. “Protein–ligand scoring with convolutional neural networks” In _Journal of Chemical Information and Modeling_ 57.4 ACS Publications, 2017, pp. 942–957
* [51] Francesca Grisoni et al. “Designing anticancer peptides by constructive machine learning” In _ChemMedChem_ 13.13 Wiley Online Library, 2018, pp. 1300–1302
* [52] Philippe Schwaller et al. “Found in Translation: predicting outcomes of complex organic chemistry reactions using neural sequence-to-sequence models” In _Chemical Science_ 9.28 Royal Society of Chemistry, 2018, pp. 6091–6098
* [53] Andrew W Senior et al. “Protein structure prediction using multiple deep neural networks in the 13th Critical Assessment of Protein Structure Prediction (CASP13)” In _Proteins: Structure, Function, and Bioinformatics_ 87.12 Wiley Online Library, 2019, pp. 1141–1148
* [54] Xiting Wang et al. “Optimizing Pharmacokinetic Property Prediction Based on Integrated Datasets and a Deep Learning Approach” In _Journal of Chemical Information and Modeling_ 60.10 ACS Publications, 2020, pp. 4603–4613
* [55] Shuangjia Zheng et al. “Predicting drug–protein interaction using quasi-visual question answering system” In _Nature Machine Intelligence_ 2.2 Nature Publishing Group, 2020, pp. 134–140
* [56] Philippe Schwaller et al. “Molecular transformer: A model for uncertainty-calibrated chemical reaction prediction” In _ACS Central Science_ 5.9 ACS Publications, 2019, pp. 1572–1583
* [57] Philippe Schwaller, Alain C Vaucher, Teodoro Laino and Jean-Louis Reymond “Prediction of chemical reaction yields using deep learning” In _Machine Learning: Science and Technology_ 2.1 IOP Publishing, 2021, pp. 015016
* [58] Daria Grechishnikova “Transformer neural network for protein-specific de novo drug generation as a machine translation problem” In _Scientific Reports_ 11.1 Nature Publishing Group, 2021, pp. 1–13
* [59] Paul Morris, Rachel St., William Edward Hahn and Elan Barenholtz “Predicting Binding from Screening Assays with Transformer Network Embeddings” In _Journal of Chemical Information and Modeling_ 60.9, 2020, pp. 4191–4199
* [60] Roald Hoffmann and Pierre Laszlo “Representation in chemistry” In _Angewandte Chemie International Edition in English_ 30.1 Wiley Online Library, 1991, pp. 1–16
* [61] Lien Ai Nguyen, Hua He and Chuong Pham-Huy “Chiral drugs: an overview” In _International Journal of Biomedical Science: IJBS_ 2.2 Master Publishing Group, 2006, pp. 85
* [62] Thomas N Kipf and Max Welling “Semi-supervised classification with graph convolutional networks” In _arXiv:1609.02907_ , 2016
* [63] Peter Battaglia, Razvan Pascanu, Matthew Lai and Danilo Jimenez Rezende “Interaction Networks for Learning about Objects, Relations and Physics” In _Advances in Neural Information Processing Systems_ , 2016, pp. 4502–4510
* [64] Peter W Battaglia et al. “Relational inductive biases, deep learning, and graph networks” In _arXiv:1806.01261_ , 2018
* [65] Jie Zhou et al. “Graph neural networks: A review of methods and applications” In _AI Open_ 1 Elsevier, 2020, pp. 57–81
* [66] Floris Geerts, Filip Mazowiecki and Guillermo A Pérez “Let’s Agree to Degree: Comparing Graph Convolutional Networks in the Message-Passing Framework” In _arXiv:2004.02593_ , 2020
* [67] David Duvenaud et al. “Convolutional networks on graphs for learning molecular fingerprints” In _Proceedings of the 28th International Conference on Neural Information Processing Systems-Volume 2_ , 2015, pp. 2224–2232
* [68] Johannes Klicpera, Janek Groß and Stephan Günnemann “Directional Message Passing for Molecular Graphs” In _International Conference on Learning Representations_ , 2019
* [69] Shuo Zhang, Yang Liu and Lei Xie “Molecular Mechanics-Driven Graph Neural Network with Multiplex Graph for Molecular Structures” In _arXiv:2011.07457_ , 2020
* [70] Michael Withnall, Edvard Lindelöf, Ola Engkvist and Hongming Chen “Building attention and edge message passing neural networks for bioactivity and physical–chemical property prediction” In _Journal of Cheminformatics_ 12.1 Springer, 2020, pp. 1
* [71] Bowen Tang et al. “A self-attention based message passing neural network for predicting molecular lipophilicity and aqueous solubility” In _Journal of Cheminformatics_ 12.1 BioMed Central, 2020, pp. 1–9
* [72] Yi Liu et al. “Spherical message passing for 3d graph networks” In _arXiv:2102.05013_ , 2021
* [73] Jonathan M Stokes et al. “A deep learning approach to antibiotic discovery” In _Cell_ 180.4 Elsevier, 2020, pp. 688–702
* [74] Wen Torng and Russ B Altman “Graph convolutional neural networks for predicting drug-target interactions” In _Journal of Chemical Information and Modeling_ 59.10 ACS Publications, 2019, pp. 4131–4149
* [75] Vignesh Ram Somnath et al. “Learning Graph Models for Retrosynthesis Prediction” In _arXiv:2006.07038_ , 2020
* [76] Junying Li, Deng Cai and Xiaofei He “Learning graph-level representation for drug discovery” In _arXiv:1709.03741_ , 2017
* [77] Ke Liu et al. “Chemi-Net: a molecular graph convolutional network for accurate drug property prediction” In _International Journal of Molecular Sciences_ 20.14 Multidisciplinary Digital Publishing Institute, 2019, pp. 3389
* [78] Oliver T Unke and Markus Meuwly “PhysNet: a neural network for predicting energies, forces, dipole moments, and partial charges” In _Journal of Chemical Theory and Computation_ 15.6 ACS Publications, 2019, pp. 3678–3693
* [79] Kristof T Schütt et al. “SchNet–A deep learning architecture for molecules and materials” In _The Journal of Chemical Physics_ 148.24 AIP Publishing LLC, 2018, pp. 241722
* [80] Kristof T Schütt et al. “Quantum-chemical insights from deep tensor neural networks” In _Nature Communications_ 8.1 Nature Publishing Group, 2017, pp. 1–8
* [81] KT Schütt et al. “Unifying machine learning and quantum chemistry with a deep neural network for molecular wavefunctions” In _Nature Communications_ 10.1 Nature Publishing Group, 2019, pp. 1–10
* [82] Mihail Bogojeski et al. “Quantum chemical accuracy from density functional approximations via machine learning” In _Nature Communications_ 11.1 Nature Publishing Group, 2020, pp. 1–11
* [83] Kevin Yang et al. “Analyzing learned molecular representations for property prediction” In _Journal of Chemical Information and Modeling_ 59.8 ACS Publications, 2019, pp. 3370–3388
* [84] Simon Axelrod and Rafael Gomez-Bombarelli “Molecular machine learning with conformer ensembles” In _arXiv:2012.08452_ , 2020
* [85] José Jiménez-Luna, Francesca Grisoni and Gisbert Schneider “Drug discovery with explainable artificial intelligence” In _Nature Machine Intelligence_ 2.10 Nature Publishing Group, 2020, pp. 573–584
* [86] Thomas Schnake et al. “XAI for Graphs: Explaining Graph Neural Network Predictions by Identifying Relevant Walks” In _arXiv:2006.03589_ , 2020
* [87] Yujia Li et al. “Learning deep generative models of graphs” In _arXiv:1803.03324_ , 2018
* [88] Martin Simonovsky and Nikos Komodakis “Graphvae: Towards generation of small graphs using variational autoencoders” In _International Conference on Artificial Neural Networks_ , 2018, pp. 412–422 Springer
* [89] Nicola De Cao and Thomas Kipf “MolGAN: An implicit generative model for small molecular graphs” In _arXiv:1805.11973_ , 2018
* [90] Daniel Flam-Shepherd, Tony C Wu and Alan Aspuru-Guzik “MPGVAE: Improved Generation of Small Organic Molecules using Message Passing Neural Nets” In _Machine Learning: Science and Technology_ IOP Publishing, 2021
* [91] Jiaxuan You et al. “Graph convolutional policy network for goal-directed molecular graph generation” In _Advances in Neural Information Processing Systems_ , 2018, pp. 6410–6421
* [92] Wengong Jin, Regina Barzilay and Tommi Jaakkola “Multi-objective molecule generation using interpretable substructures” In _International Conference on Machine Learning_ , 2020, pp. 4849–4859 PMLR
* [93] Tao Lei, Wengong Jin, Regina Barzilay and Tommi Jaakkola “Deriving neural architectures from sequence and graph kernels” In _arXiv:1705.09037_ , 2017
* [94] Nathaniel Thomas et al. “Tensor field networks: Rotation-and translation-equivariant neural networks for 3d point clouds” In _arXiv:1802.08219_ , 2018
* [95] Tess E Smidt, Mario Geiger and Benjamin Kurt Miller “Finding symmetry breaking order parameters with Euclidean neural networks” In _Physical Review Research_ 3.1 APS, 2021, pp. L012002
* [96] Tess E Smidt “Euclidean symmetry and equivariance in machine learning” In _Trends in Chemistry_ Elsevier, 2020
* [97] Michael Hutchinson et al. “LieTransformer: Equivariant self-attention for Lie Groups” In _arXiv:2012.10885_ , 2020
* [98] Oliver T Unke et al. “Spookynet: Learning force fields with electronic degrees of freedom and nonlocal effects” In _arXiv:2105.00304_ , 2021
* [99] Simon Batzner et al. “SE (3)-Equivariant Graph Neural Networks for Data-Efficient and Accurate Interatomic Potentials” In _arXiv:2101.03164_ , 2021
* [100] Claus Müller “Spherical harmonics” Springer, 2006
* [101] Tevian Dray “A unified treatment of Wigner D functions, spin-weighted spherical harmonics, and monopole harmonics” In _Journal of mathematical physics_ 27.3 American Institute of Physics, 1986, pp. 781–792
* [102] Jan Hermann, Zeno Schätzle and Frank Noé “Deep-neural-network solution of the electronic Schrödinger equation” In _Nature Chemistry_ 12.10 Nature Publishing Group, 2020, pp. 891–897
* [103] David Pfau, James S Spencer, Alexander GDG Matthews and W Matthew C Foulkes “Ab initio solution of the many-electron Schrödinger equation with deep neural networks” In _Physical Review Research_ 2.3 APS, 2020, pp. 033429
* [104] Kenny Choo, Antonio Mezzacapo and Giuseppe Carleo “Fermionic neural-network states for ab-initio electronic structure” In _Nature Communications_ 11.1 Nature Publishing Group, 2020, pp. 1–7
* [105] Sepp Hochreiter and Jürgen Schmidhuber “Long short-term memory” In _Neural Computation_ 9.8 MIT Press, 1997, pp. 1735–1780
* [106] Kohulan Rajan, Achim Zielesny and Christoph Steinbeck “DECIMER: towards deep learning for chemical image recognition” In _Journal of Cheminformatics_ 12.1 Springer, 2020, pp. 1–9
* [107] Yanjun Li, Mohammad A Rezaei, Chenglong Li and Xiaolin Li “Deepatom: A framework for protein-ligand binding affinity prediction” In _2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)_ , 2019, pp. 303–310 IEEE
* [108] Mostafa Karimi, Di Wu, Zhangyang Wang and Yang Shen “DeepAffinity: interpretable deep learning of compound–protein affinity through unified recurrent and convolutional neural networks” In _Bioinformatics_ 35.18 Oxford University Press, 2019, pp. 3329–3338
* [109] José Jiménez et al. “DeltaDelta neural networks for lead optimization of small molecule potency” In _Chemical Science_ 10.47 Royal Society of Chemistry, 2019, pp. 10911–10918
* [110] José Jiménez et al. “DeepSite: protein-binding site predictor using 3D-convolutional neural networks” In _Bioinformatics_ 33.19 Oxford University Press, 2017, pp. 3036–3042
* [111] Eman Ahmed et al. “A survey on deep learning advances on different 3D data representations” In _arXiv:1808.01462_ , 2018
* [112] Tobias Pfaff, Meire Fortunato, Alvaro Sanchez-Gonzalez and Peter Battaglia “Learning Mesh-Based Simulation with Graph Networks” In _International Conference on Learning Representations_ , 2020
* [113] Qinqing Liu et al. “OctSurf: Efficient hierarchical voxel-based molecular surface representation for protein-ligand affinity prediction” In _Journal of Molecular Graphics and Modelling_ 105 Elsevier, 2021, pp. 107865
* [114] Stelios K Mylonas, Apostolos Axenopoulos and Petros Daras “DeepSurf: A surface-based deep learning approach for the prediction of ligand binding sites on proteins” In _arXiv:2002.05643_ , 2020
* [115] John M Barnard “Representation of Molecular Structures-Overview” In _Handbook of Chemoinformatics: From Data to Knowledge in 4 Volumes_ , 2003, pp. 27–50
* [116] William J Wiswesser “Historic development of chemical notations” In _Journal of Chemical Information and Computer Sciences_ 25.3 ACS Publications, 1985, pp. 258–263
2107.12378
# Anomaly Ratio Distributions of Hadronic Axion Models with Multiple Heavy
Quarks
Vaisakh Plakkot [email protected] and Sebastian Hoof [email protected]
Institut für Astrophysik, Georg-August-Universität Göttingen, Friedrich-Hund-Platz 1, 37077 Göttingen, Germany

(September 2021)
###### Abstract
We consider hadronic axion models that extend the Standard Model by one
complex scalar field and one or more new heavy quarks, i.e.
$N_{\mathcal{Q}}\geq 1$. We review previously suggested selection criteria as
well as categorize and catalog all possible models for $N_{\mathcal{Q}}\leq
9$. In particular, allowing for $N_{\mathcal{Q}}>1$ can introduce models that
spoil the axion solution of the strong CP problem. Demanding that Landau poles
do not appear below some energy scale limits the number of preferred models to
a finite number. For our choice of criteria, we find that $N_{\mathcal{Q}}\leq
28$ and only 820 different anomaly ratios $E/N$ exist (443 when considering
additive representations, 12 when all new quarks transform under the same
representation). We analyze the ensuing $E/N$ distributions, which can be used
to construct informative priors on the axion-photon coupling. The hadronic
axion model band may be defined as the central region of one of these
distributions, and we show how the band for equally probable, preferred models
compares to present and future experimental constraints.
## I Introduction
QCD axions [1, 2], initially proposed as a solution to the strong CP problem
[3, 4], are excellent cold dark matter (DM) candidates [5, 6, 7, 8, 9].
Numerous experimental searches are currently underway to find such particles
[10]. One major challenge of axion detection is that the axion mass is set by
an unknown parameter, the axion decay constant $f_{a}$, which can range across
many orders of magnitude. Moreover, the axion’s interactions with the Standard
Model (SM) are usually model-dependent, and a UV axion model has to be
constructed in order to determine the exact relationship of $f_{a}$ and the
axion couplings.
One class of such UV models consists of hadronic (also called KSVZ-type) axion models [11, 12], which extend the SM by a new complex scalar field and $N_{\mathcal{Q}}\geq 1$ heavy, exotic quarks. For a given value of
$N_{\mathcal{Q}}$ there exist multiple, discrete models, which trace out lines
in the axion mass and axion-photon coupling parameter space. The locations of
these lines are determined by the anomaly ratio $E/N$ and a model-independent
contribution from axion-meson mixing.
To map and restrict the resulting landscape of axion models, it has been
suggested that phenomenological selection criteria can be used to single out
_preferred_ models [13, 14]. This allows us to restrict the parameter space
and helps experiments to assess their sensitivity requirements. However, so
far only the case of $N_{\mathcal{Q}}=1$ has been fully cataloged, which is
why we want to study models with $N_{\mathcal{Q}}>1$ as far as this is
feasible. First, we summarize the construction of KSVZ-type axion models and
phenomenological selection criteria in Secs. II and III. Subsequently, a
catalog of all possible models with $N_{\mathcal{Q}}\leq 9$ is presented and
the resulting $E/N$ distributions are discussed. We catalog all _preferred_
models, for which we find that the maximum possible number of $\mathcal{Q}$s
is $N_{\mathcal{Q}}=28$. In Sec. V we outline how the catalog of models can be
used to construct informative prior distributions on $E/N$. These can be used
to define the KSVZ axion model band and we show how it compares to current and
future experimental constraints. Finally, we summarize our work and end with
some closing remarks. Model catalogs and further supplementary material are
available on Zenodo [15].
## II Hadronic axion models
Let us denote a representation of a particle as
$(\mathcal{C},\mathcal{I},\mathcal{Y})$, where $\mathcal{C}$ and $\mathcal{I}$
are the $\mathrm{SU}(3)_{\mathcal{C}}$ color and
$\mathrm{SU}(2)_{\mathcal{I}}$ isospin representations, respectively, while
$\mathcal{Y}$ denotes the particle’s $\mathrm{U}(1)_{\mathcal{Y}}$
hypercharge.
For example, the traditional KSVZ axion model contains a heavy chiral quark
$\mathcal{Q}=\mathcal{Q}_{L}+\mathcal{Q}_{R}\sim(3,1,0)$, charged under the
$\mathrm{U}(1)_{\text{PQ}}$ Peccei-Quinn (PQ) symmetry with charge
$\mathcal{X}=\mathcal{X}_{L}-\mathcal{X}_{R}=\pm 1$, and the complex scalar
field $\Phi\sim(1,1,0)$ with PQ charge normalized to $\mathcal{X}_{\Phi}=1$.
All SM fields are uncharged under the PQ symmetry in the KSVZ model, and the
relevant part of the Lagrangian is
$\displaystyle\mathcal{L}\supset i\,\overline{\mathcal{Q}}\,\gamma^{\mu}D_{\mu}\mathcal{Q}-\left(y_{\mathcal{Q}}\overline{\mathcal{Q}}_{L}\mathcal{Q}_{R}\Phi+\text{h.c.}\right)-\lambda_{\Phi}\left(|\Phi|^{2}-\frac{v_{a}^{2}}{2}\right)^{2},$
where $y_{\mathcal{Q}}$ is the Yukawa coupling constant and the last term is a
potential for the complex scalar field with order parameter $v_{a}$. The
Lagrangian is invariant under a chiral $\mathrm{U}(1)_{\text{PQ}}$
transformation $\Phi\mapsto\mathrm{e}^{i\alpha}\Phi$,
$\mathcal{Q}_{L/R}\mapsto\mathrm{e}^{\pm i\alpha/2}\mathcal{Q}_{L/R}$. The
field $\Phi$ attains a non-zero value at the minimum of the potential,
resulting in a spontaneously broken PQ symmetry. Expanding $\Phi$ around its
vacuum expectation value gives the axion as the corresponding angular degree
of freedom, with value in the interval $[0,2\pi v_{a})$. The mass of
$\mathcal{Q}$ is then $m_{\mathcal{Q}}=y_{\mathcal{Q}}v_{a}/\sqrt{2}$.
Performing a chiral $\mathrm{U}(1)$ transformation such that
$\mathcal{Q}_{L/R}\mapsto\mathrm{e}^{\pm ia/(2v_{a})}\mathcal{Q}_{L/R}$, the
mass term for $\mathcal{Q}$ can be made independent of the axion field phase.
This transformation adds an anomalous $G\widetilde{G}$ term to Eq. (1) as well
as an $F\widetilde{F}$ term, where $G$ and $F$ are the gluon and photon field
strength tensors, respectively, and the tilde denotes their duals. With the
electromagnetic (EM) and color anomaly contributions due to the
$\mathrm{U}(1)_{\text{PQ}}$ charged quarks labeled $E$ and $N$ respectively,
the coupling terms become
$\displaystyle\mathcal{L}\supset\frac{N\alpha_{\text{s}}}{4\pi}\frac{a}{v_{a}}G\widetilde{G}+\frac{E\alpha_{\text{em}}}{4\pi}\frac{a}{v_{a}}F\widetilde{F}=\frac{\alpha_{\text{s}}}{8\pi f_{a}}aG\widetilde{G}+\frac{\alpha_{\text{em}}}{8\pi f_{a}}\frac{E}{N}aF\widetilde{F}\,,$ (2)
where $f_{a}=v_{a}/(2N)$. The axion-photon coupling is thus parameterized by
the anomaly ratio $E/N$ alone.
More precisely, the mass and coupling to photons for QCD axion models are
given by [16, 17]
$\displaystyle m_{a}=\frac{\chi_{0}^{1/2}}{f_{a}}=(5.69\pm 0.05)\,\mathrm{\mu eV}\left(\frac{10^{12}\,\mathrm{GeV}}{f_{a}}\right)\,,$ (3)
$\displaystyle g_{a\gamma\gamma}=\frac{\alpha_{\text{em}}}{2\pi f_{a}}\,C_{a\gamma\gamma}=\frac{\alpha_{\text{em}}}{2\pi f_{a}}\left[\frac{E}{N}-C_{a\gamma\gamma}^{(0)}\right]=\frac{\alpha_{\text{em}}}{2\pi f_{a}}\left[\frac{E}{N}-(1.92\pm 0.04)\right]\,.$ (4)
For some representation $r$ under which the heavy quark $\mathcal{Q}$ in the
KSVZ axion model transforms, the EM and color anomalies can be calculated as
$\displaystyle E=\mathcal{X}\,d(\mathcal{C})\,\mathrm{tr}(q^{2})=\mathcal{X}\,d(\mathcal{C})\,d(\mathcal{I})\left(\frac{d(\mathcal{I})^{2}-1}{12}+\mathcal{Y}^{2}\right)\,,$ (5a)
$\displaystyle N=\mathcal{X}\,d(\mathcal{I})\,T(\mathcal{C})\,,$ (5b)
where $d(\cdot)$ denotes the dimension of a representation,
$q=\mathcal{I}^{(3)}-\mathcal{Y}$ is the EM charge of $\mathcal{Q}$, and
$T(\mathcal{C})$ is the $\mathrm{SU}(3)_{\mathcal{C}}$ Dynkin index (see Ref.
[18]).
In KSVZ-type models, only $\mathcal{Q}$ is charged under the PQ symmetry
(apart from $\Phi$) and e.g. for $\mathcal{Q}\sim(3,1,0)$ we have
$N=\mathcal{X}/2$ and $E=3\mathcal{X}\,\mathrm{tr}(q^{2})$, using that
$T(3)=1/2$. In general, one finds for a single $\mathcal{Q}$ that
$\frac{E}{N}=6\,\mathrm{tr}(q^{2})=6q^{2}\,,$ (6)
where the last equality holds only when $\mathcal{Q}$ is a singlet under
$\mathrm{SU}(2)_{\mathcal{I}}$. This e.g. leads to the well-known result that
the original KSVZ model has $E/N=0$.
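As a concrete illustration of Eqs. (5a), (5b) and (7), the sketch below (our own, restricted to color triplets with $T(3)=1/2$) evaluates anomaly ratios in exact rational arithmetic:

```python
from fractions import Fraction

def anomalies(dC, dI, Y, X=1):
    """E and N of one heavy quark Q in representation (C, I, Y) with PQ
    charge X, via Eqs. (5a)-(5b). Only the color triplet is handled here,
    using T(3) = 1/2; other SU(3) representations would need their own
    Dynkin indices."""
    if dC != 3:
        raise NotImplementedError("sketch covers color triplets only")
    E = X * dC * dI * (Fraction(dI**2 - 1, 12) + Fraction(Y) ** 2)
    N = X * dI * Fraction(1, 2)
    return E, N

def anomaly_ratio(reps):
    """Overall E/N of a model with several Qs, Eq. (7); X = -1 in a
    4-tuple encodes the "ominus" (opposite PQ charge) combinations."""
    Es, Ns = zip(*(anomalies(*r) for r in reps))
    return Fraction(sum(Es)) / sum(Ns)  # raises if N = 0 (cf. Sec. III.3)

print(anomaly_ratio([(3, 1, 0)]))               # 0  (original KSVZ)
print(anomaly_ratio([(3, 2, Fraction(1, 6))]))  # 5/3
```

Exact fractions avoid the floating-point collisions that would otherwise blur distinct $E/N$ values when cataloging many models.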
When considering models with multiple $\mathcal{Q}_{i}$, which have
representations $r_{i}$ and anomaly coefficients $E_{i}$ and $N_{i}$ given by
Eqs. (5a) and (5b), respectively, the overall anomaly ratio is simply
$\frac{E}{N}=\frac{\sum_{i}E_{i}}{\sum_{i}N_{i}}\,,$ (7)
where the index $i$ runs over the different quarks, labeled $i=1,\dots,n$.
Note that, when labeling a tuple of $\mathcal{Q}$s in a model, there exists a
“relabeling symmetry.” For example, assume that two $\mathcal{Q}$s with the
same $\mathrm{U}(1)_{\text{PQ}}$ charge respectively transform under
representations $r_{1}$ and $r_{2}$, denoted by $r_{1}\oplus r_{2}$. Then there is an equivalence relation such that $r_{1}\oplus r_{2}\sim r_{2}\oplus r_{1}$, in the sense that both trivially give the same anomaly ratio $E/N$.
Similarly, we can also consider combinations of representations with
“$\ominus$”, the symbol we use to denote $\mathcal{Q}$s with opposite
$\mathrm{U}(1)_{\text{PQ}}$ charges such that $r_{i}\ominus
r_{j}\Rightarrow\mathcal{X}_{i}=-\mathcal{X}_{j}$. Here we have e.g.
$r_{1}\oplus r_{2}\ominus r_{2}\sim r_{1}\ominus r_{2}\oplus r_{2}\sim
r_{2}\ominus\left(r_{1}\oplus r_{2}\right)$, as all three models trivially
give the same overall anomaly ratio.
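The effect of these identifications can be illustrated with a toy enumeration (our own sketch, with placeholder labels r1–r3 rather than the actual representation catalog): ordering is quotiented out by using multisets, and a global PQ-charge flip $\mathcal{X}_{i}\to-\mathcal{X}_{i}$, which leaves $E/N$ unchanged, identifies each model with its $\ominus$-mirror:

```python
from itertools import combinations_with_replacement

def non_equivalent_models(reps, n_Q):
    """Enumerate sign-assigned multisets of representations for n_Q heavy
    quarks, modulo (i) relabeling (order is irrelevant, hence multisets)
    and (ii) a global PQ-charge flip X -> -X, which leaves E/N unchanged."""
    signed = [(r, s) for r in reps for s in (+1, -1)]
    seen = set()
    for combo in combinations_with_replacement(sorted(signed), n_Q):
        flipped = tuple(sorted((r, -s) for r, s in combo))
        seen.add(min(combo, flipped))  # canonical representative per orbit
    return seen

models = non_equivalent_models(["r1", "r2", "r3"], 2)
print(len(models))  # 12: (21 multisets - 3 self-conjugate) / 2 + 3
```

For three representations and $N_{\mathcal{Q}}=2$ this leaves 12 classes out of 21 sign-assigned multisets; the actual catalog additionally imposes the Landau-pole and $N\neq 0$ criteria discussed below.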
The relabeling symmetry allows us to simplify the presentation of the catalog,
and we refer to a list of models where this symmetry has been accounted for as
“non-equivalent.” It may also play a role in the statistical interpretation of the catalog: if not all $\mathcal{Q}$s are indistinguishable, the multiplicity arising from the equivalence relation must be taken into account. We comment on this further in Section V.1.
## III Phenomenological selection criteria
Let us now review the various selection criteria for _preferred_ axion models,
most of which have already been proposed and discussed extensively in Refs.
[13, 14]. Here, we focus on their applicability in the pre- and post-inflationary PQ symmetry-breaking scenarios and observe that $N_{\mathcal{Q}}>1$ allows for the existence of a new criterion related to the axion’s ability to solve the strong CP problem.
### III.1 Dark matter constraints
A natural requirement is to demand that axions do not produce more DM than the observed amount, $\Omega_{\text{c}}h^{2}\lesssim 0.12$ [19]. For QCD axions this results in an upper bound on $f_{a}$, and previous studies of _preferred_ axion models used $f_{a}<5\times 10^{11}\,\mathrm{GeV}$ [13, 14], assuming a post-inflationary cosmology with realignment axion production.
Let us extend this discussion and make a few comments regarding the different
cosmological scenarios and their impact on the $f_{a}$ bound.
First, in the pre-inflationary PQ symmetry breaking scenario, the initial
misalignment angle of the axion field, denoted by $\theta_{\text{i}}$, is a
random variable. Since any topological defects are inflated away, realignment
production is the only relevant contribution and the limit on $f_{a}$ depends
on its “naturalness.” While this is not a uniquely defined concept, using the usual assumption of uniformly distributed angles, $\theta_{\text{i}}\sim\mathcal{U}(-\pi,\pi)$, the code developed in Ref. [20] finds $f_{a}<4\times 10^{12}\,\mathrm{GeV}$ for the 95% credible region of the posterior density.¹Note that we used a prior of $\log_{10}(f_{a}/\mathrm{GeV})\sim\mathcal{U}(6,16)$, which introduces some prior dependence, and also included QCD nuisance parameters [20]. This limit on $f_{a}$ effectively relies on the naturalness being encoded automatically in the prior on $\theta_{\text{i}}$.
Second, when topological defects can be neglected in the post-inflationary
symmetry breaking, the relic axion density is determined by an average of
misalignment angles over many causally-disconnected patches. This corresponds
to the benchmark scenario of Refs. [13, 14]. Again using the code developed in
Ref. [20], we obtain $f_{a}<2\times 10^{11}\,\mathrm{GeV}$ (at the 95% CL).
The third and last case is the post-inflationary scenario including a significant contribution from topological defects, i.e. cosmic strings and domain walls (DWs). In fact, recent studies indicate that the production of axions via topological defects dominates over vacuum realignment production [21, 22]. For models with domain wall number $N_{\text{\tiny DW}}\equiv 2N=1$ (cf. Section III.5), the authors find that $f_{a}\lesssim 10^{10}\,\mathrm{GeV}$, while models with $N_{\text{\tiny DW}}>1$ reduce the value of $f_{a}$ by a factor of $\mathcal{O}(N_{\text{\tiny DW}})$ [22]. For the _preferred_ models considered in this work, $N_{\text{\tiny DW}}\leq 28$, such that the bound might be tightened to about $f_{a}\lesssim 3\times 10^{8}\,\mathrm{GeV}$. It should be noted that these results rely on extrapolating the outcome of numerical simulations over more than 60 orders of magnitude, and they are hence potentially subject to large systematic uncertainties.
In summary, the upper limit on $f_{a}$, and hence the results presented in what follows, very much depend on the cosmological scenario at hand. To simplify the discussion, to avoid the potentially large uncertainties mentioned above, and to better compare with the previous work of Ref. [14], we also adopt $f_{a}<5\times 10^{11}\,\mathrm{GeV}$.
However, we stress again that a different choice of $f_{a}$ will affect the number of _preferred_ models, as $f_{a}$ is one of the factors that determine the value of $m_{\mathcal{Q}}$. This is because $m_{\mathcal{Q}}=y_{\mathcal{Q}}\,v_{a}/\sqrt{2}=y_{\mathcal{Q}}\,N_{\text{\tiny DW}}f_{a}/\sqrt{2}$, such that $f_{a}$ provides an upper bound on $m_{\mathcal{Q}}$. Moreover, a universal bound on $m_{\mathcal{Q}}$ (up to the Yukawa couplings) requires that all $\mathcal{Q}$s couple to the $\Phi$ field in the same way, so that a single $v_{a}$ parameter applies. As long as the coupling is $y_{\mathcal{Q}}\sim\mathcal{O}(1)$ or lower, the upper bound on $f_{a}=v_{a}/N_{\text{\tiny DW}}$ is indeed an upper limit on $m_{\mathcal{Q}}$. Larger values of the coupling require fine-tuning of parameters and are hence deemed undesirable from a theoretical viewpoint. In what follows, we choose $m_{\mathcal{Q}}=5\times 10^{11}\,\mathrm{GeV}$ as a conservative value for all $\mathcal{Q}$ masses (see Sec. III.4 for more details on the influence on Landau pole constraints).
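The mass relation $m_{\mathcal{Q}}=y_{\mathcal{Q}}N_{\text{\tiny DW}}f_{a}/\sqrt{2}$ is trivial to evaluate; a minimal numerical sketch (our own, with the assumption $y_{\mathcal{Q}}=1$):

```python
import math

def m_Q(f_a_GeV, N_DW, y_Q=1.0):
    """Heavy-quark mass m_Q = y_Q * v_a / sqrt(2) with v_a = N_DW * f_a,
    assuming all Qs couple to the same Phi field (a single v_a)."""
    return y_Q * N_DW * f_a_GeV / math.sqrt(2)

# f_a = 5e11 GeV, N_DW = 1, y_Q = 1  ->  m_Q ~ 3.5e11 GeV
print(f"{m_Q(5e11, 1):.2e}")
```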
Finally, note that the $\mathcal{Q}$s themselves contribute to the matter
content in the Universe, and we need to consider the possibility that their
abundance exceeds $\Omega_{\text{c}}h^{2}$. Since this issue can be avoided if
the lifetime of the quarks is short enough, we discuss this in the next
section.
### III.2 Lifetimes
Other than the possibility that the $\mathcal{Q}$s’ abundances exceed
$\Omega_{\text{c}}h^{2}$, there also exist additional experimental and
observational constraints, which have already been discussed before [13, 14].
To avoid the DM constraints, we require the $\mathcal{Q}$s to decay into SM particles with a reasonably short lifetime. Heavy quarks with $m_{\mathcal{Q}}\gg 1\,\mathrm{TeV}$ and lifetimes $0.01\,\mathrm{s}<\tau_{\mathcal{Q}}<10^{12}\,\mathrm{s}$ are severely constrained, as they would also affect Big Bang Nucleosynthesis and observations of the Cosmic Microwave Background [23, 24]. Fermi-LAT excludes $10^{13}\,\mathrm{s}<\tau_{\mathcal{Q}}<10^{26}\,\mathrm{s}$, thus excluding lifetimes greater than even the age of the Universe ($\sim 10^{17}\,\mathrm{s}$) [25]. As a result, for heavy quarks ($m_{\mathcal{Q}}\gg 1\,\mathrm{TeV}$), only representations with $\tau_{\mathcal{Q}}<10^{-2}\,\mathrm{s}$ are considered to be part of the _preferred_ window. Lighter relics would be excluded by experimental bounds, e.g. at the LHC [26].
Such a constraint on the $\mathcal{Q}$ lifetime, when applied to the heavy-quark decay rate, translates into restrictions on the dimensionality of the possible $\mathcal{Q}$-to-SM-fermion decay operators. With $m_{\mathcal{Q}}\lesssim 5\times 10^{11}\,\mathrm{GeV}$, the lifetime constraints in turn restrict the operators to dimensions $d\leq 5$ [13, 14]. This implies a total of 20 possible representations for
$\mathcal{Q}$, all charged under $\mathrm{SU}(3)_{\mathcal{C}}$ and
$\mathrm{U}(1)_{\mathcal{Y}}$. The lifetime constraint has no further
consequence on cases with $N_{\mathcal{Q}}>1$ under the assumption that the
different $\mathcal{Q}_{i}$ do not interact among themselves or decay into
particles other than SM fermions.
As noted before [14], the lifetime constraints are typically not required in
the pre-inflationary PQ symmetry breaking scenario. This is because the
$\mathcal{Q}$s can get diluted by inflation, which prevents them from becoming
cosmologically dangerous relics after they freeze out. Without these
constraints, many more models with even higher-dimensional operators can
exist, and restricting ourselves to at most five-dimensional operators
therefore only becomes an assumption in this case.
### III.3 Failure to solve the strong CP problem
This criterion is specific to models with $N_{\mathcal{Q}}>1$ that allow the $\mathcal{Q}$s to have opposite $\mathrm{U}(1)_{\text{PQ}}$ charges. It is clear from Eq. (5b) that the addition of multiple heavy quarks can lead to a smaller overall $N$ than the individual $N_{i}$, but only when one or more of the quarks carry a negative (relative) $\mathrm{U}(1)_{\text{PQ}}$ charge. In some cases a total cancellation of the $N_{i}$ terms occurs ($N=0$). While these models give rise to massless axion-like particles with a coupling to photons governed by $E$, they do not solve the strong CP problem: as can be seen from Eq. (2), $N=0$ means that there is no $G\widetilde{G}$ contribution in the Lagrangian. Considering that the primary objective of QCD axion models is to solve the strong CP problem, we propose that only models with $N\neq 0$ should be considered _preferred_.
### III.4 Landau poles
The single most powerful criterion amongst the ones proposed by Refs. [13, 14]
in the context of this work comes from the observation that representations
with large $\mathcal{C}$, $\mathcal{I}$, or $\mathcal{Y}$ can induce Landau
poles (LPs) at energies well below the Planck mass. At an LP, the value of a
coupling mathematically tends to infinity, signaling a breakdown of the
theory. Since quantum gravity effects are only expected to appear at energies
near the Planck mass, a breakdown of the theory before that point can be
regarded as problematic or undesirable.
It has thus been proposed that _preferred_ models have LPs at energy scales $\Lambda_{\text{LP}}\gtrsim 10^{18}\,\mathrm{GeV}$. Of the 20 representations mentioned previously, only 15 fulfil this criterion [13, 14]; we refer to these as “LP-allowed” models and label them $r_{1}$ to $r_{15}$ (as per Table II in Ref. [13]).
The running of the couplings is computed at the two-loop level with the renormalization group equation [27, 28]
$\displaystyle\frac{\mathrm{d}}{\mathrm{d}\mathrm{t}}\alpha_{i}^{-1}=-a_{i}-\frac{b_{ij}}{4\pi}\alpha_{j}\,,$ (8)
where
$\displaystyle a_{i}$
$\displaystyle=-\frac{11}{3}\,C_{2}(G_{i})+\frac{4}{3}\sum_{F}\kappa\,T(F_{i})+\frac{1}{3}\sum_{S}\eta\,T(S_{i})\,,$
(9a) $\displaystyle b_{ij}$
$\displaystyle=\left[-\frac{34}{3}\,\big{(}C_{2}(G_{i})\big{)}^{2}+\sum_{F}\left(4C_{2}(F_{i})+\frac{20}{3}C_{2}(G_{i})\right)\kappa\,T(F_{i})+\sum_{S}\left(4C_{2}(S_{i})+\frac{2}{3}C_{2}(G_{i})\right)\eta\,T(S_{i})\right]\delta_{ij}$
$\displaystyle+4\left(1-\delta_{ij}\right)\left[\sum_{F}\kappa\,C_{2}(F_{j})\,T(F_{i})+\sum_{S}\eta\,C_{2}(S_{j})\,T(S_{i})\right]\,,$
(9b)
with $i,j\in\{1,2,3\}$ for the three gauge groups, $\alpha_{i}=g_{i}^{2}/4\pi$, and $\mathrm{t}=\frac{1}{2\pi}\ln(\mu/m_{Z})$ for energy scale $\mu$ and $Z$-boson mass $m_{Z}$, while $a_{i}$ and $b_{ij}$ are the one- and two-loop beta-function coefficients. $C_{2}$ and $T$ are the quadratic Casimir and Dynkin indices of the corresponding gauge group, respectively, and $F$ and $S$ denote fermionic and scalar fields. $G_{i}$ denotes the adjoint representation of the gauge group, and $\kappa=\frac{1}{2},1$ for Weyl and Dirac fermions, while $\eta=1$ for complex scalars.²The case of $\eta=\frac{1}{2}$ for real scalars is not relevant for the present study. Also note that the expression for $b_{ij}$ in Ref. [28] is slightly erroneous, since the second term applies only to the non-diagonal elements of $b_{ij}$, as found when comparing with the SM beta functions in Ref. [27]. Adding multiple $\mathcal{Q}$s to the theory increases the coefficients of the beta functions through the fermionic terms. As a consequence, the couplings diverge faster, i.e. they induce LPs at lower energy scales, as has been anticipated before [14].
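At one-loop order and neglecting the mixing term $b_{ij}$, Eq. (8) integrates to $\alpha_{i}^{-1}(\mathrm{t})=\alpha_{i}^{-1}(0)-a_{i}\mathrm{t}$, so an LP ($\alpha_{i}^{-1}\to 0$) appears at $\mu_{\text{LP}}=m_{Z}\exp\!\left(2\pi\,\alpha_{i}^{-1}(m_{Z})/a_{i}\right)$ for $a_{i}>0$. A minimal sketch (our own, with illustrative input values, ignoring the threshold at $m_{\mathcal{Q}}$ and the two-loop terms actually used here):

```python
import math

M_Z = 91.19  # GeV

def landau_pole(alpha_inv_mZ, a):
    """One-loop LP scale from Eq. (8) with b_ij dropped: alpha^-1 runs
    linearly in t = ln(mu/m_Z)/(2 pi) and hits zero at
    mu_LP = m_Z * exp(2 pi alpha^-1(m_Z) / a)."""
    if a <= 0:
        return math.inf  # asymptotically free: no one-loop LP
    return M_Z * math.exp(2 * math.pi * alpha_inv_mZ / a)

# Illustrative: hypercharge-like coupling with alpha^-1(m_Z) ~ 98, a = 41/6.
lp_sm = landau_pole(98.0, 41 / 6)
# Extra heavy fermions raise a by (4/3) * kappa * T(F) per field,
# pulling the LP down (the "+2.0" shift is purely illustrative):
lp_extra = landau_pole(98.0, 41 / 6 + 2.0)
print(f"{lp_sm:.1e} -> {lp_extra:.1e} GeV")
```

This captures the qualitative statement in the text: any addition to the fermionic content monotonically lowers $\mu_{\text{LP}}$ of a non-asymptotically-free coupling.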
Since the addition of more particles with a given representation to a gauge theory only worsens the running of the corresponding gauge coupling, it is possible to find the number of copies of a particle that can be included in the theory before an LP is induced below $10^{18}\,\mathrm{GeV}$. This drastically reduces the number of possible LP-allowed combinations.
Integrating all $\mathcal{Q}_{i}$ in at $m_{\mathcal{Q}}=5\times 10^{11}\,\mathrm{GeV}$, we find that there are 59,066 non-equivalent combinations of $\mathcal{Q}_{i}$ from the representations $r_{1},r_{2},\dots,r_{15}$ that do not induce LPs below $10^{18}\,\mathrm{GeV}$.
As the $\mathcal{Q}_{i}$ contribute to the beta functions above the energy
scale $m_{\mathcal{Q}}$, the running of the gauge coupling begins to deviate
from the SM only at this scale. Different values of $m_{\mathcal{Q}}$ are
bound to produce different results for the LPs; the lower $m_{\mathcal{Q}}$
is, the earlier an LP appears. As an example, consider
$m_{\mathcal{Q}}=10^{10}\,\mathrm{GeV}$: for $N_{\mathcal{Q}}=3$,
we find that 888 models are _preferred_ (they have
$\Lambda_{\text{LP}}>10^{18}\,\mathrm{GeV}$ and $N\neq 0$ as per
the discussion in III.3), compared to 1,442 models when
$m_{\mathcal{Q}}=5\times 10^{11}\,\mathrm{GeV}$. Furthermore,
we use the same mass for all $\mathcal{Q}_{i}$ in the models, which may not be
the case in reality (due to different $y_{\mathcal{Q}_{i}}$ or, e.g., in
multi-axion models). However, setting the masses to the highest possible value in
the _preferred_ window allows us to keep the number of disfavored models to a
minimum. Without further information on the values of $f_{a}$ and individual
$m_{\mathcal{Q}_{i}}$, excluding fewer models may be advantageous in the sense
of presenting a more inclusive $E/N$ catalog.
### III.5 Other interesting model properties
Let us summarize a few other possible model properties, already discussed in
Refs. [13, 14].
Since the axion is the angular degree of freedom of the PQ scalar field, it
has a periodic potential and several degenerate vacua, given by the domain
wall number $N_{\text{\tiny DW}}=2N$. During PQ symmetry breaking, the axion
field can settle into any of these degenerate minima in different Hubble
patches, giving rise to domain walls. The energy density contained in such
topological defects can far exceed the energy density of the Universe [29] in
the post-inflationary PQ breaking scenario. However, in models with
$N_{\text{\tiny DW}}=1$, the string-domain wall configuration would be
unstable [30], which presents a possible solution and makes $N_{\text{\tiny
DW}}=1$ a desirable property of such models.
However, the DW problem can be avoided by allowing for a soft breaking of the
PQ symmetry [29]. Moreover, in a pre-inflationary PQ symmetry breaking
scenario, the patches and the topological defects are inflated away [31]. In
line with Refs. [13, 14], we therefore do not impose this criterion.
Among the 15 LP-allowed representations, only two have $N_{\text{\tiny
DW}}=1$. When all $\mathcal{Q}_{i}$ have the same $\mathrm{U}(1)_{\text{PQ}}$
charges, such a restriction would forbid any models with multiple heavy
quarks. With this in mind, a constraint on $N_{\text{\tiny DW}}$ is not used
to exclude $N_{\mathcal{Q}}>1$ models. In cases where the $\mathcal{Q}_{i}$
are permitted to have opposite $\mathrm{U}(1)_{\text{PQ}}$ charges, more
complicated models with $N_{\text{\tiny DW}}=1$ can be built by choosing the
$\mathcal{Q}_{i}$ such that $\sum_{i}N_{i}=1/2$. Even then, the number of such
models is small compared to the whole set of LP-allowed models.
Another intriguing property is the unification of the gauge couplings due to
the presence of the $\mathcal{Q}$s. The authors of Refs. [13, 14] note that
one of the 15 LP-allowed representations induces a significant improvement in
unification. While we do not investigate this further, we expect to find more
models that improve unification for higher $N_{\mathcal{Q}}$, which might be
an interesting topic for a future study.
## IV Model catalog and anomaly ratio distributions
Table 1: Selected statistics for the complete set of models with $N_{\mathcal{Q}}\leq 9$. We include information about the $E/N$ ratios that give rise to the largest axion-photon coupling, i.e. $\widehat{E/N}\equiv\mathrm{argmax}_{E/N}(|E/N-1.92|)$, photophobic models ($|E/N-1.92|<0.04$), and _preferred_ (LP-allowed and $N\neq 0$) models.

$N_{\mathcal{Q}}$ | Total #models | $\widehat{E/N}$ | LP-allowed [% of total] | $N\neq 0$ [% of total] | #_preferred_ | $\widehat{E/N}$ among _preferred_ | photophobic among _preferred_
---|---|---|---|---|---|---|---
1 | 20 | $44/3$ | 75.00 | 100.00 | 15 | $44/3$ | 0.00%
2 | 420 | $-184/3$ | 49.52 | 91.67 | 189 | $122/3$ | 1.59%
3 | 5,740 | $368/3$ | 25.98 | 97.40 | 1,442 | $170/3$ | 1.11%
4 | 61,810 | $-538/3$ | 11.60 | 97.37 | 6,905 | $-136/3$ | 1.29%
5 | 543,004 | $698/3$ | 4.42 | 98.13 | 23,198 | $-148/3$ | 1.27%
6 | 4,073,300 | $-928/3$ | 1.50 | 98.32 | 58,958 | $-160/3$ | 1.28%
7 | 26,762,340 | $-1108/3$ | 0.47 | 98.55 | 120,240 | $164/3$ | 1.33%
8 | 157,233,175 | $1292/3$ | 0.14 | 98.68 | 207,910 | $-166/3$ | 1.34%
9 | 838,553,320 | $-1312/3$ | 0.04 | 98.79 | 312,360 | $-142/3$ | 1.37%
Figure 1: Number of non-equivalent models with different properties as a
function of $N_{\mathcal{Q}}$. We show the number of all possible, additive,
LP-allowed, $N_{\text{\tiny DW}}=1$, $N=0$, and photophobic ($|E/N-1.92|<0.04$)
models, as well as the number of different and of unique $E/N$ values (unique
meaning that no other non-equivalent model has the same $E/N$ value, such that
the underlying model is uniquely identifiable).
Let us discuss a few key findings and properties of the model catalog created
in this work, which we summarize in Table 1 and Fig. 1.
To structure the discussion, we single out two subsets of the total model
space: one where all $\mathcal{Q}_{i}$ transform under the same representation
and one where the representations are arbitrary but the
$\mathrm{U}(1)_{\text{PQ}}$ charges of the quarks have the same sign (we call
these “additive models”).
### IV.1 Subset I. Identical representations
First, consider the case where only representations of the form
$\bigoplus_{j=1}^{N_{\mathcal{Q}}}r_{i}$ with fixed $i\in[1,20]$ are allowed.
The number of possible models for a given $N_{\mathcal{Q}}$ is then simply
$N_{r}=20$, such that the total number of models up to and including some
$N_{\mathcal{Q}}$ is $N_{\text{tot}}=N_{r}\,N_{\mathcal{Q}}$.
Given that all quarks in such models have the same representation and
$\mathrm{U}(1)_{\text{PQ}}$ charge, only twelve discrete values of $E/N$ are
allowed when the LP criterion is taken into account [13]. However, the
relative distribution is determined by the effect of each representation on
the gauge group beta functions. We find that $\bigoplus_{j=1}^{28}r_{1}$ is
the only LP-allowed model for $N_{\mathcal{Q}}=28$ and that there are in total
79 _preferred_ models in this subset.
### IV.2 Subset II. Allowing different additive representations
Next, consider the case where we can have arbitrary additive representations,
written in such a way that they respect the relabeling symmetry:
$\bigoplus_{i=1}^{20}\bigoplus_{j=1}^{n_{i}}r_{i}$, where
$\sum_{i}n_{i}=N_{\mathcal{Q}}$ with $n_{i}\geq 0$. The number of models in
this subset is
$\displaystyle N(N_{\mathcal{Q}})=\binom{N_{\mathcal{Q}}+N_{r}-1}{N_{\mathcal{Q}}}\,,$ (10)

$\displaystyle N_{\text{tot}}=\sum_{n}\binom{n+N_{r}-1}{n}=\binom{N_{\mathcal{Q}}+N_{r}}{N_{\mathcal{Q}}}\,.$ (11)
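Eqs. (10) and (11) are standard multiset ("stars and bars") counts and can be checked directly; the sketch below uses $N_r=20$ as in the text, and the variable names are our own.

```python
from math import comb

N_R = 20  # number of allowed representations r_1 ... r_20

def n_additive(n_q: int) -> int:
    """Eq. (10): multisets of n_q heavy quarks drawn from N_R representations."""
    return comb(n_q + N_R - 1, n_q)

def n_additive_total(n_q_max: int) -> int:
    """Eq. (11): cumulative count up to n_q_max (including the empty n = 0 case)."""
    return comb(n_q_max + N_R, n_q_max)

# The sum over n reproduces the closed form (hockey-stick identity):
assert sum(n_additive(n) for n in range(10)) == n_additive_total(9)
print(n_additive(1), n_additive(2))  # -> 20 210
```

For $N_{\mathcal{Q}}=1$ this recovers the 20 single-quark models, and for $N_{\mathcal{Q}}=2$ the 210 additive two-quark models (before opposite PQ charges are allowed).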
We find that, after applying the selection criteria, there are 59,066
_preferred_ models for $N_{\mathcal{Q}}\leq 28$. In particular, for
$N_{\mathcal{Q}}=28$, there are only nine LP-allowed models, none of which can
be extended by another quark while preserving the criterion. The highest
freedom in this subset is found for $N_{\mathcal{Q}}=10$, where 5,481 models
fall in the _preferred_ region.
Among these models, the smallest and largest anomaly ratios are 1/6 and 44/3
respectively, both of which come from $N_{\mathcal{Q}}=1$ models. The median
of the distribution of this set of models is $\mathrm{med}(E/N)\approx 1.87$,
indicating that $|C_{a\gamma\gamma}|\sim 0$ is a real possibility for a larger
fraction of the model space. Indeed, there are several models that have an
$E/N$ ratio close to the nominal value of the model-independent parameter
$C_{a\gamma\gamma}^{(0)}$. We define models as “photophobic” if their $E/N$
ratio is within one standard deviation of the nominal
$C_{a\gamma\gamma}^{(0)}$ value:
$\left|E/N-1.92\right|<0.04\,.$ (12)
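The criterion in Eq. (12) is trivial to apply to a list of anomaly ratios; the sample values below are purely illustrative and not drawn from the actual catalog.

```python
from fractions import Fraction

def is_photophobic(e_over_n, c0: float = 1.92, sigma: float = 0.04) -> bool:
    """Eq. (12): E/N lies within one sigma of the nominal C^(0)_agamma value."""
    return abs(float(e_over_n) - c0) < sigma

# Illustrative values only:
print(is_photophobic(Fraction(23, 12)))  # 23/12 ~ 1.917, within the band -> True
print(is_photophobic(Fraction(44, 3)))   # far from 1.92 -> False
```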
We find that 3,255 models ($\approx 5.5\%$) among the 59,066 non-equivalent
models are photophobic. Considering all _preferred_ additive models up to
$N_{\mathcal{Q}}\leq 28$, there are 443 different $E/N$ values. Out of these,
28 are unique in the sense that no other non-equivalent model shares the same
anomaly ratio $E/N$, making the underlying model uniquely identifiable.
### IV.3 Complete set
Figure 2: Example histogram of the anomaly ratio $E/N$ for non-equivalent
$N_{\mathcal{Q}}=5$ models. Blue bars correspond to the “additive” subset and
red bars to the complete set of models, i.e. also allowing for opposite
$\mathrm{U}(1)_{\text{PQ}}$ charges.
Finally, let us comment on the complete set of possible models where we may
also subtract representations, denoted by “$\ominus$.” Allowing
$\mathrm{U}(1)_{\text{PQ}}$ charges to have one of the two possible values for
each $\mathcal{Q}_{i}$, we open the window to a much wider range of possible
$E/N$ values. In particular, the anomaly ratio, and thus the axion-photon
coupling, can become negative (see Fig. 2) and, as mentioned before, the
solution to the strong CP problem can be spoilt in models with $N=0$.
For $n_{\oplus}+n_{\ominus}=N_{\mathcal{Q}}$, where $n_{\oplus}$ and
$n_{\ominus}$ are the number of $\mathcal{Q}$s with “positive” and “negative”
$\mathrm{U}(1)_{\text{PQ}}$ charges, respectively (we remind the reader that
“positive” and “negative” are only relative concepts, in the sense that we
consider two models equivalent if the only difference between them is that the
$\mathrm{U}(1)_{\text{PQ}}$ charges of _all_ quarks are flipped going from one
to the other), the number of models with $n_{\oplus}>n_{\ominus}$ is simply

$\displaystyle N(n_{\oplus},n_{\ominus})=\binom{n_{\oplus}+N_{r}-1}{n_{\oplus}}\,\binom{n_{\ominus}+N_{r}-1}{n_{\ominus}}\,.$ (13)
In the case where $n\equiv n_{\oplus}=n_{\ominus}$, the anomaly ratio depends
only on the relative $\mathrm{U}(1)_{\text{PQ}}$ charges of the
$\mathcal{Q}$s, leading to equivalences of the type
$(r_{i}~{}\oplus~{}r_{j})~{}\ominus~{}(r_{k}~{}\oplus~{}r_{l})~{}\sim~{}(r_{k}~{}\oplus~{}r_{l})~{}\ominus~{}(r_{i}~{}\oplus~{}r_{j})$.
Taking care not to double-count models exhibiting this symmetry gives

$\displaystyle N(n,n)=\frac{1}{2}\,\binom{n+N_{r}-1}{n}\left[\binom{n+N_{r}-1}{n}+1\right]\,.$ (14)
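Combining Eqs. (13) and (14) over all charge splittings reproduces the total model counts per $N_{\mathcal{Q}}$, e.g. the "Total #models" column of Table 1. A minimal sketch (function names are our own):

```python
from math import comb

N_R = 20  # number of allowed representations

def n_signed(n_plus: int, n_minus: int) -> int:
    """Eq. (13): models with n_plus 'positive' and n_minus 'negative'
    PQ-charge quarks, for n_plus > n_minus."""
    return comb(n_plus + N_R - 1, n_plus) * comb(n_minus + N_R - 1, n_minus)

def n_balanced(n: int) -> int:
    """Eq. (14): n_plus == n_minus == n, corrected for the overall
    charge-flip equivalence."""
    m = comb(n + N_R - 1, n)
    return m * (m + 1) // 2

def n_total(n_q: int) -> int:
    """All non-equivalent models with exactly n_q heavy quarks."""
    total = sum(n_signed(p, n_q - p) for p in range(n_q // 2 + 1, n_q + 1))
    if n_q % 2 == 0:
        total += n_balanced(n_q // 2)
    return total

# Reproduces the 'Total #models' column of Table 1:
print([n_total(n) for n in (1, 2, 3, 4)])  # -> [20, 420, 5740, 61810]
```

The rapid growth of these counts is why the complete analysis is restricted to $N_{\mathcal{Q}}\leq 9$ in the text.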
With this, we find that the number of models grows very fast as
$N_{\mathcal{Q}}$ increases. This also makes it computationally difficult to
compute and store all of the different combinations – let alone check the
criteria for _preferred_ models. We therefore restrict the complete analysis
in this case to $N_{\mathcal{Q}}\leq 9$.
The anomaly ratio distribution in the complete set exhibits a peak near zero,
and we expect the trend to continue even for larger $N_{\mathcal{Q}}$.
However, in general care should be taken when interpreting the “trends”
visible in Fig. 1. For example, the number of LP-allowed models will
eventually go down again as we move towards $N_{\mathcal{Q}}=28$, despite the
quickly growing total number of possible models. One may speculate that the
number of uniquely identifiable $E/N$ ratios could exhibit a similar behavior
as the number of LP-allowed models, while the number of different $E/N$ might
eventually saturate.
Allowing for opposite $\mathrm{U}(1)_{\text{PQ}}$ charges gives rise to models
with large axion-photon coupling; the largest and smallest values of $E/N$
found, $170/3$ and $-166/3$ respectively, give larger $|C_{a\gamma\gamma}|$
than what is possible in the previously discussed subsets. Note that the
$N_{\mathcal{Q}}=8$ model for $E/N=-166/3$ ($r_{2}\oplus r_{2}\oplus
r_{5}\oplus r_{6}\oplus r_{7}\ominus r_{1}\ominus r_{9}\ominus r_{9}$) was not
reported in Refs. [13, 14] as giving the highest possible
$|C_{a\gamma\gamma}|$; instead the authors indicated that $E/N=170/3$ led to
the largest absolute value of the coupling. We find that among the complete
set of 5,753,012 _preferred_ models, there are 81,502 photophobic models and
820 different anomaly ratios, with 79 out of those also being from uniquely
identifiable models.
## V Impact on axion searches
In this section, we discuss possible statistical interpretations of the
hadronic axion model catalog and show the impact of these on the mass-coupling
parameter space.
### V.1 On constructing E/N prior distributions
The catalog of KSVZ models – even after applying the selection criteria – is
but a list of _possible_ models. It does not inherently contain information
about how _probable_ each model is. The model with $E/N=-166/3$ gives the
largest $|C_{a\gamma\gamma}|\approx 57$, which will place an upper bound on
the axion-photon coupling and delimit the upper end of the KSVZ axion band. On
the other end, complete decoupling with photons ($C_{a\gamma\gamma}\approx 0$)
is also possible within the theoretical errors. Since any of the models might
be realized in Nature, perhaps due to a deeper underlying reason that is not
obvious at present, one might be satisfied with this picture.
However, the boundaries of the band are extreme cases and do not take into
account where the bulk of possible models can be found. For example, defining
a desired target sensitivity for an experiment becomes non-trivial in the face
of $C_{a\gamma\gamma}$ potentially being extremely close to zero. We propose
instead that covering a certain fraction of all possible models or
constructing a prior volume might be more meaningful ways to define such a
target.
To directly interpret an $E/N$ histogram as a distribution implicitly makes
the assumption that each model is equally likely to be realized in Nature.
While this interpretation might be considered “fair,” one could argue that
models with many $\mathcal{Q}$s are more “contrived” and consequently
introduce a weighting factor that penalizes models with $N_{\mathcal{Q}}\gg
1$. This could be achieved with e.g. exponential suppression via a weighting
factor $\propto\mathrm{e}^{-N_{\mathcal{Q}}}$, or $\propto
2^{-N_{\mathcal{Q}}}$. Another option could be to choose models that are
minimal extensions ($N_{\mathcal{Q}}=1$) or similar to the family structure of
the SM ($N_{\mathcal{Q}}=3$ or e.g. a weighting $\propto
3^{N_{\mathcal{Q}}}/N_{\mathcal{Q}}!$).
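To illustrate how such weightings behave, one could normalize each factor over a chosen $N_{\mathcal{Q}}$ range; the range and scheme names below are our own choices, not prescriptions from the text.

```python
import math

def normalized_weights(weight, n_max: int = 9):
    """Normalize a weighting factor w(N_Q) over N_Q = 1 .. n_max."""
    w = [weight(n) for n in range(1, n_max + 1)]
    total = sum(w)
    return [x / total for x in w]

schemes = {
    "equal":   lambda n: 1.0,                              # every model equally likely
    "exp":     lambda n: math.exp(-n),                     # penalize many quarks
    "half":    lambda n: 2.0 ** (-n),                      # milder suppression
    "SM-like": lambda n: 3.0 ** n / math.factorial(n),     # maximal near N_Q = 3
}

for name, fn in schemes.items():
    print(name, [round(x, 3) for x in normalized_weights(fn)])
```

The "SM-like" factor $3^{N_{\mathcal{Q}}}/N_{\mathcal{Q}}!$ is maximal at $N_{\mathcal{Q}}=2$ and $3$, mimicking a preference for family-like structures.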
Such considerations are aligned with the Bayesian interpretation of statistics
and will probably meet criticism for this reason. However, as pointed out in
Ref. [20], at least in the pre-inflationary PQ symmetry breaking scenario,
which is fundamentally probabilistic in nature, the Bayesian approach is well
motivated. Furthermore, Ref. [20] also proposed that the discrete nature of
KSVZ models should be reflected in the prior choice of $E/N$. Such a
physically-motivated prior should further reflect the combinatorics of KSVZ
model building by including the multiplicity of $E/N$ ratios. As mentioned at
the end of Section II, this multiplicity also depends on whether or not the
$\mathcal{Q}_{i}$ are distinguishable by e.g. having different masses.
Figure 3: Anomaly ratio distributions for all _preferred_ additive KSVZ
models, using different weightings. For equal weighting, we show the
underlying histogram (blue shading) and a smooth Gaussian kernel density
estimate of the distribution (blue line), while for others we only show the
latter for simplicity.
With this in mind, we show different statistical interpretations of the
anomaly ratio in Fig. 3. For visualization purposes, we show kernel density
estimates of the distributions for different weighting factors mentioned
above, while reminding the reader that the underlying histograms and
distributions are actually discrete and not continuous.
From Fig. 3 it becomes clear that the different weightings can change the
width of the distribution, introducing a prior dependence in an analysis.
However, the modes of the distributions remain around $E/N\sim 2$, which means
that a partial cancellation of the axion-photon coupling $C_{a\gamma\gamma}$
is typically possible, as already observed in Fig. 2.
### V.2 Experimental constraints on _preferred_ KSVZ axion models
Figure 4: The KSVZ axion band as defined by the 68% and 95% central regions of
$|C_{a\gamma\gamma}|=|E/N-C_{a\gamma\gamma}^{(0)}|$, drawing $E/N$ from a
distribution of all _preferred_ KSVZ axion models (each representation assumed
to be equally probable). The grey line marks the highest possible absolute
value of the coupling ($E/N=-166/3$), while the black line indicates the
classical KSVZ model ($E/N=0$). For context, we show various present (shaded
regions) and future (dashed lines) haloscope (blue) and helioscope (red)
limits and forecasts [32] as well as bounds from hot dark matter [33], energy
loss in SN1987A [34], and recent string simulations [22].
Of course, a possible partial cancellation of the axion-photon coupling has
consequences on the various astrophysical, cosmological, and laboratory
searches (see e.g. Ref. [10]) for axions. The most powerful analyses combine
the results of different experiments to place joint limits on the properties
of different types of axions (e.g. Refs. [35, 36, 20]).
To investigate this further, consider e.g. a prior on $E/N$ where all
_preferred_ (LP-allowed models with $d\leq 5$ operators and $N\neq 0$),
non-equivalent KSVZ models are considered equally probable (recall that the
$d\leq 5$ condition is due to the lifetime constraints of Section III.2 in
the post-inflationary scenario, while it is only an assumption for the
pre-inflationary case, potentially reasonable as a minimal extension of the
SM). We can then generate samples for
$C_{a\gamma\gamma}=E/N-C_{a\gamma\gamma}^{(0)}$, where $E/N$ is drawn from its
discrete distribution and
$C_{a\gamma\gamma}^{(0)}\sim\mathcal{N}(1.92,\,0.04)$ i.e. follows a normal
distribution with mean 1.92 and standard deviation 0.04.
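The sampling procedure is straightforward to sketch. The short $E/N$ list below is a hypothetical stand-in for the full discrete distribution; the actual catalog is available on Zenodo [15].

```python
import random

random.seed(42)

# Hypothetical stand-in for the discrete E/N values (equal weights assumed):
e_over_n_values = [0.0, 2 / 3, 5 / 3, 8 / 3, -4 / 3, 44 / 3]

# |C_agamma| = |E/N - C^(0)_agamma| with C^(0)_agamma ~ N(1.92, 0.04)
samples = sorted(
    abs(random.choice(e_over_n_values) - random.gauss(1.92, 0.04))
    for _ in range(100_000)
)

def central_interval(sorted_vals, frac):
    """Central `frac` region of an already-sorted sample."""
    n = len(sorted_vals)
    lo = sorted_vals[int(n * (1 - frac) / 2)]
    hi = sorted_vals[int(n * (1 + frac) / 2) - 1]
    return lo, hi

print("68%:", central_interval(samples, 0.68))
print("95%:", central_interval(samples, 0.95))
```

Running the same procedure over the full catalog of _preferred_ models yields the 68% and 95% regions quoted below.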
We find that the central 68% region of the ensuing distribution corresponds to
$|C_{a\gamma\gamma}|\in[0.39,5.22]$, while the 95% region is
$|C_{a\gamma\gamma}|\in[0.06,17.30]$. The corresponding model bands in the
mass-coupling plane are shown in Fig. 4, and the bulk of these models can be
constrained by present and future experiments. In fact, while complete
cancellation of $C_{a\gamma\gamma}$ is possible within the theoretical
uncertainty for some $E/N$ values, we find that the bulk of models is at worst
somewhat suppressed. This is very encouraging for experimental searches.
Had we only considered additive models, the 95% region would be
$|C_{a\gamma\gamma}|\in[0.02,1.67]$, such that the upper end of the band would
be lower than the traditional KSVZ model with $E/N=0$. This can be readily
understood from the $E/N$ distributions in Fig. 3, whose mode is typically
close to $C_{a\gamma\gamma}^{(0)}$ such that the value of
$|g_{a\gamma\gamma}|$ is lower than what would be expected for
$|C_{a\gamma\gamma}|\sim\mathcal{O}(1)$. In this case, the planned future
experiments would _not_ be able to probe large parts of the band, indicating
that the choice of prior – even if physically-motivated – can induce a
noticeable impact on the results.
## VI Summary and conclusions
We provide a catalog of all hadronic, or KSVZ, axion models with
$N_{\mathcal{Q}}\leq 9$, featuring 1,027,233,129 non-equivalent models in
total. When we apply the selection criteria for _preferred_ models, we find a
limit of $N_{\mathcal{Q}}\leq 28$ and that only 5,753,012 non-equivalent
models with 820 different $E/N$ values exist (59,066 non-equivalent models
with 443 different $E/N$ values for additive representations). While relaxing
existing or adding new criteria can increase or reduce these numbers, we
generically expect that the Landau pole (LP) criterion will be a powerful tool
to limit the number of possible models – even with modified constraints or in
other axion models. This is similar to the Standard Model, where the number of
families can also be restricted by demanding that LPs do not appear below some
energy scale. We further propose that only models with QCD anomaly $N\neq 0$
be considered _preferred_.
Our model catalog can be a useful, searchable database for researchers wishing
to study the KSVZ axion model space. It allows one to, e.g., make statements
about what fraction of possible models a given experiment is sensitive to. We made
catalogs, histograms, and example Python scripts available on the Zenodo
platform for this purpose [15].
Some models in the catalog might be considered “contrived” as they add many
new particles to the theory. Of course, in the case of a discovery, or if
another appealing reason for a seemingly more complicated model is put
forward, this perception might change. In the absence of such reasons, the
$E/N$ values may be
interpreted as statistical distributions, which encode assumptions about the
probability of the different models. We generally outlined how prior
distributions can be constructed from the catalog and gave concrete examples
of such choices.
For the specific choice of equally probable _preferred_ models, we consider
the consequences for axion searches and the definition of the KSVZ axion band.
Here we suggest that the latter may be defined as the central 95% region of
all models, taking into account uncertainties from the model-independent
contribution to the axion-photon coupling. If only “additive models” are
considered, the bulk of the _preferred_ models can unfortunately not be probed
by current or future experiments since the anomaly ratio distributions in this
case tend to peak around $E/N\sim 2$. In general, using the discrete $E/N$
distributions improves on unphysical prior choices considered in the past
(e.g. Ref. [20]).
Even when ignoring the statistical perspective, it is useful for axion
searches to know that the _preferred_ models only admit 820 different $E/N$
values. In case of an axion detection, one may therefore test these discrete
models against each other to see which models are most compatible with the
detected signal. One could further test them against a generic axion-like
particle or other QCD axion models. In an ideal scenario, this might even
allow an experiment to infer the underlying high-energy structure of a model,
which highlights the known property of axion models to connect high-energy
physics to low-energy observables.
In summary, the powerful LP criterion restricts the number of KSVZ models to a
finite value. In that sense, the catalog presented here is a complete list of
all _preferred_ KSVZ models, which may be used as input for axion searches and
forecasts. Since KSVZ models could e.g. be extended by also considering
multiple complex scalar fields or feature more complex couplings to the SM,
and since there are other kinds of QCD axion models such as the DFSZ-type
models, this work presents another step forward in mapping the landscape of
all phenomenologically interesting axion models.
###### Acknowledgements.
We thank Maximilian Berbig, Joerg Jaeckel, and David ‘Doddy’ J. E. Marsh for
useful comments and discussions. This paper is based on results from VP’s
ongoing M.Sc. project. SH is supported by the Alexander von Humboldt
Foundation and the German Federal Ministry of Education and Research. We used
the Scientific Computing Cluster at GWDG, the joint data center of Max Planck
Society for the Advancement of Science (MPG) and the University of Göttingen.
## References
* [1] S. Weinberg, A new light boson?, _Physical Review Letters_ 40 (1978) 223.
* [2] F. Wilczek, Problem of strong P and T invariance in the presence of instantons, _Physical Review Letters_ 40 (1978) 279.
* [3] R.D. Peccei and H.R. Quinn, CP conservation in the presence of pseudoparticles, _Physical Review Letters_ 38 (1977) 1440.
* [4] R.D. Peccei and H.R. Quinn, Constraints imposed by CP conservation in the presence of pseudoparticles, _Phys. Rev. D_ 16 (1977) 1791.
* [5] J. Preskill, M.B. Wise and F. Wilczek, Cosmology of the invisible axion, _Physics Letters B_ 120 (1983) 127.
* [6] L.F. Abbott and P. Sikivie, A cosmological bound on the invisible axion, _Physics Letters B_ 120 (1983) 133.
* [7] M. Dine and W. Fischler, The not-so-harmless axion, _Physics Letters B_ 120 (1983) 137.
* [8] M.S. Turner, Coherent scalar-field oscillations in an expanding universe, _Phys. Rev. D_ 28 (1983) 1243.
* [9] M.S. Turner, Cosmic and local mass density of “invisible” axions, _Physical Review D_ 33 (1986) 889.
* [10] I.G. Irastorza and J. Redondo, New experimental approaches in the search for axion-like particles, _Progress in Particle and Nuclear Physics_ 102 (2018) 89 [1801.08127].
* [11] J.E. Kim, Weak-interaction singlet and strong CP invariance, _Phys. Rev. Lett._ 43 (1979) 103.
* [12] M.A. Shifman, A.I. Vainshtein and V.I. Zakharov, Can confinement ensure natural CP invariance of strong interactions?, _Nuclear Physics B_ 166 (1980) 493.
* [13] L. Di Luzio, F. Mescia and E. Nardi, Redefining the Axion Window, _Physical Review Letters_ 118 (2017) 031801 [1610.07593].
* [14] L. Di Luzio, F. Mescia and E. Nardi, Window for preferred axion models, _Phys. Rev. D_ 96 (2017) 075003 [1705.05370].
* [15] V. Plakkot and S. Hoof, “Model catalogues and histograms of KSVZ axion models with multiple heavy quarks.” Published on Zenodo, 2021. DOI: 10.5281/zenodo.5091707.
* [16] G.G. di Cortona, E. Hardy, J.P. Vega and G. Villadoro, The QCD axion, precisely, _JHEP_ 1 (2016) 34 [1511.02867].
* [17] M. Gorghetto and G. Villadoro, Topological susceptibility and QCD axion mass: QED and NNLO corrections, _JHEP_ 2019 (2019) 33 [1812.01008].
* [18] R. Slansky, Group theory for unified model building, _Phys. Rep._ 79 (1981) 1.
* [19] Planck Collaboration, N. Aghanim, Y. Akrami, M. Ashdown, J. Aumont, C. Baccigalupi et al., Planck 2018 results. VI. Cosmological parameters, _A &A_ 641 (2020) A6 [1807.06209].
* [20] S. Hoof, F. Kahlhoefer, P. Scott, C. Weniger and M. White, Axion global fits with Peccei-Quinn symmetry breaking before inflation using GAMBIT, _JHEP_ 3 (2019) 191 [1810.07192].
* [21] M. Gorghetto, E. Hardy and G. Villadoro, Axions from strings: the attractive solution, _JHEP_ 7 (2018) 151 [1806.04677].
* [22] M. Gorghetto, E. Hardy and G. Villadoro, More Axions from Strings, _SciPost Physics_ 10 (2021) 050 [2007.04990].
* [23] M. Kawasaki, K. Kohri and T. Moroi, Big-Bang nucleosynthesis and hadronic decay of long-lived massive particles, _Phys. Rev. D_ 71 (2005) 083502 [astro-ph/0408426].
* [24] J. Chluba, Distinguishing different scenarios of early energy release with spectral distortions of the cosmic microwave background, _Mon. Not. Roy. Astron. Soc._ 436 (2013) 2232 [1304.6121].
* [25] Fermi-LAT collaboration, Fermi LAT Search for Dark Matter in Gamma-ray Lines and the Inclusive Photon Spectrum, _Phys. Rev. D_ 86 (2012) 022002 [1205.2739].
* [26] S. Jäger, S. Kvedaraitė, G. Perez and I. Savoray, Bounds and prospects for stable multiply charged particles at the LHC, _JHEP_ 04 (2019) 041 [1812.03182].
* [27] M.E. Machacek and M.T. Vaughn, Two Loop Renormalization Group Equations in a General Quantum Field Theory. 1. Wave Function Renormalization, _Nucl. Phys. B_ 222 (1983) 83.
* [28] L. Di Luzio, R. Gröber, J.F. Kamenik and M. Nardecchia, Accidental matter at the LHC, _JHEP_ 07 (2015) 074 [1504.00359].
* [29] P. Sikivie, Axions, Domain Walls, and the Early Universe, _Phys. Rev. Lett._ 48 (1982) 1156.
* [30] S.M. Barr, K. Choi and J.E. Kim, Axion Cosmology in Superstring Models, _Nucl. Phys. B_ 283 (1987) 591.
* [31] J.E. Kim, Light pseudoscalars, particle physics and cosmology., _Phys. Rep._ 150 (1987) 1.
* [32] C. O’Hare, “cajohare/AxionLimits: AxionLimits.” Published on Zenodo, 2020. DOI: 10.5281/zenodo.3932430.
* [33] W. Giaré, E.D. Valentino, A. Melchiorri and O. Mena, New cosmological bounds on hot relics: Axions & Neutrinos, _MNRAS_ (2021) [2011.14704].
* [34] P. Carenza, T. Fischer, M. Giannotti, G. Guo, G. Martínez-Pinedo and A. Mirizzi, Improved axion emissivity from a supernova via nucleon-nucleon bremsstrahlung, _JCAP_ 2019 (2019) 016 [1906.11844].
* [35] M. Giannotti, I.G. Irastorza, J. Redondo, A. Ringwald and K. Saikawa, Stellar recipes for axion hunters, _JCAP_ 10 (2017) 010 [1708.02111].
* [36] L. Visinelli and S. Vagnozzi, Cosmological window onto the string axiverse and the supersymmetry breaking scale, _Phys. Rev. D_ 99 (2019) 063517 [1809.06382].
a Department of Physics, University of Toronto, Toronto, Ontario, Canada M5S 1A7
b Institut für Experimentalphysik, Universität Hamburg, Germany
c Institut für Theoretische Physik, Universität Heidelberg, Germany
# Unsupervised Hadronic SUEP at the LHC
Jared Barron,a [email protected]; David Curtin,a [email protected];
Gregor Kasieczka,b [email protected]; Tilman Plehn,c [email protected];
and Aris Spourdalakis,a [email protected]
###### Abstract
Confining dark sectors with pseudo-conformal dynamics produce SUEPs, or Soft
Unclustered Energy Patterns, at colliders: isotropic dark hadrons with soft
and democratic energies. We target the experimental nightmare scenario, SUEPs
in exotic Higgs decays, where all dark hadrons decay promptly to SM hadrons.
First, we identify three promising observables: the charged particle
multiplicity, the event ring isotropy, and the matrix of geometric distances
between charged tracks. Their patterns can be exploited through a
cut-and-count search, supervised machine learning, or an unsupervised autoencoder. We
find that the HL-LHC will probe exotic Higgs branching ratios at the per-cent
level, even without a detailed knowledge of the signal features. Our
techniques can be applied to other SUEP searches, especially the unsupervised
strategy, which is independent of overly specific model assumptions and the
corresponding precision simulations.
## 1 Introduction
Hidden sectors are one of the most interesting and generic paradigms for
physics beyond the Standard Model (BSM). These kinds of new particles and
forces are not only plausible from a bottom-up point of view, but also arise
in many top-down BSM theories Strassler:2006im ; Strassler:2008fv ;
Chacko:2005pe ; Schabinger:2005ei ; Patt:2006fw ; Espinosa:2007qk ;
March-Russell:2008lng ; Alimena:2019zri ; Curtin:2021alk ; Holdom:1985ag ;
Abel:2008ai ; Batell:2009yf ; Jaeckel:2010ni ; Foot:2014mia ; Feldman:2007wj ;
Pospelov:2007mp ; Dudas:2013sia ; An:2012va ; Kribs:2018ilo ; Knapen:2021eip ;
Cvetic:2002qa ; Hur:2007uz ; Bai:2013xga ; Grossman:2010iq . In most scenarios
of interest, hidden sector particles couple to SM fields via feeble
interactions or heavy messengers, and the nature of these portal couplings
determines their collider phenomenology. A case of special interest is that of
hidden valleys Strassler:2006im , referring to hidden sectors with a confining gauge
group which gives rise to rich infrared (IR) dynamics from very simple
ultraviolet (UV) theory structures. The production of dark quarks at the LHC
leads to a dark shower and high-multiplicity production of dark hadrons, in
analogy to QCD jets. Depending on the portal by which the dark hadrons are
produced and decay, these dark showers produce a wide variety of LHC
signatures, which have been the subject of intense theoretical and
experimental study in the last decade Han:2007ae ; Strassler:2008fv ;
Buschmann:2015awa ; Arkani-Hamed:2008kxc ; Ellis:2012zp ; Toro:2012sv ;
ATLAS:2018dfo ; Alimena:2019zri ; Cohen:2015toa ; Burdman:2018ehe ;
Schwaller:2015gea ; Cohen:2017pzm ; Cohen:2020afv .
We consider one of the most challenging varieties of dark showers, Soft
Unclustered Energy Patterns (SUEPs). If a hidden valley possesses a large
gauge coupling that is pseudo-conformal above its confinement scale, then
large-angle emission is unsuppressed for most of the parton shower evolution.
This means the dark hadrons are not arranged in narrow QCD-like jets, but
emitted approximately isotropically in the shower centre-of-mass frame
Strassler:2008bv ; Knapen:2016hky . This defines the SUEP final state as a
high-multiplicity spherically-symmetric shower of hidden sector states. While
existing searches can be sensitive to SUEPs produced at high energy scales, or
with dark hadrons that decay to sufficiently conspicuous final states like
leptons or Long-Lived Particles (LLPs) Knapen:2016hky ; Alimena:2019zri , no
dedicated SUEP searches exist to date. Furthermore, SUEPs _without_
conspicuous final states are not captured by any existing search and represent
an unusually cruel signal, since their soft, isotropic distributions can mimic
the ubiquitous pile-up produced by simultaneous LHC collisions.
To ensure that all types of SUEP signals can be discovered at the LHC, we
focus on a well-motivated SUEP nightmare scenario, where SUEP is produced in
exotic Higgs decays and the dark hadrons decay promptly and exclusively to SM
hadrons. The modest energy scale of exotic Higgs decays and the lack of
conspicuous final states forces us to rely on the kinematics of the resulting
SM hadrons to extract the signal from the overwhelming QCD background. This
production mode also allows us to side-step the problem of how to trigger on
SUEPs Knapen:2016hky by using leptons in associated $Vh$ production. The
analysis techniques we develop will not only allow for the detection of this
SUEP nightmare scenario, but should also increase the LHC experiments’
sensitivity to all other SUEP possibilities.
An acute obstacle for SUEP searches is the lack of rigorous predictions and
simulations for the strongly coupled pseudo-conformal dark sectors. Rather
than hoping for conspicuous final states, we utilize the kinematics of the SM
hadrons without relying on fine details of the signal beyond the robust SUEP
characteristics of isotropic, soft, and democratic dark hadron energies. We
therefore simulate SUEP production using a simple QCD fireball model of
thermal dark hadron emission Knapen:2021eip , and find robust observables and
event representations. An important observable for SUEP searches is the inter-
particle distance matrix $\Delta R_{ij}$ between charged hadrons. This matrix
encodes the essential geometric differences between QCD-like and SUEP-like
hadron production. It forms the backbone of all our analysis strategies,
together with known variables like event isotropy Cesarotti:2020hwb and the
charged particle multiplicity.
To demonstrate the distinguishing power of our observables, as well as the
drastic improvements from more sophisticated techniques, we examine three
strategies for systematic SUEP searches at the HL-LHC. First, we simulate a
simple cut-and-count analysis, which will turn out to allow for impressive
sensitivities to $\mathrm{Br}(h\to\mathrm{SUEP})\sim 1\%$ at the HL-LHC. We
anticipate that a realistic analysis with data-driven background estimation
will perform even better, since our study is limited by background simulation
statistics. To improve on this, we utilize supervised as well as unsupervised
machine learning (ML). Unsupervised analysis concepts along the lines of
autoencoder neural networks Rumelhart1986 have the potential to transform LHC
analyses Asadi:2017qon ; Metodiev:2017vrx ; Andreassen:2018apy ;
Collins:2018epr ; DeSimone:2018efk ; Heimel:2018mkt ; Farina:2018fyg ;
Roy:2019jae ; Cheng:2020dal ; Nachman:2020lpy ; MdAli:2020yzb ;
Bortolato:2021zic ; Dillon:2021nxw ; Finke:2021sdf ; Kasieczka:2021xcg ;
Aarrestad:2021oeb ; Cerri:2018anq , including searches for dark showers
Heimel:2018mkt . We point out how unsupervised methods are especially
appealing for difficult-to-simulate signals like SUEP since they only rely on
the known QCD background for training, while yielding at least several times
greater SUEP sensitivity than the cut-and-count analysis.
Our investigation establishes that SUEP searches need not rely on conspicuous
SM final states for excellent sensitivity at the HL-LHC. The unique dark
hadron kinematics, which robustly follows from their origin in pseudo-
conformal strongly coupled dynamics, allows for the SUEP final state to be
distinguished from its overwhelming QCD background. Our techniques can be
applied to all SUEP searches to dramatically enhance their sensitivity,
regardless of energy scale or SM final state.
This paper is structured as follows. In Section 2, we briefly review the SUEP
theory and define the benchmark scenario for our study. Signal and background
simulation is discussed in Section 3. In Section 4 we define the relevant
observables to distinguish SUEP from QCD and discuss supervised and
unsupervised machine learning techniques. In Section 5 we present our results,
including projections for the $\mathrm{Br}(h\to\mathrm{SUEP})$ sensitivity at
the HL-LHC, and we conclude in Section 6.
## 2 Theory of SUEPs
Hidden valley models are a large class of BSM theories in which the SM is
extended by additional gauge groups under which SM particles are neutral. New
particles that are charged under non-Abelian extensions can give rise to a
wide range of interesting hidden sector dynamics Strassler:2006im ;
Strassler:2008fv ; Han:2007ae and various challenging SM signatures. These
models appear in the context of many top-down constructions Morrissey:2009tf ,
including string theory, of course Cvetic:2002qa , and are compatible with
various potential resolutions to the hierarchy problem such as supersymmetry
Arkani-Hamed:2005zuc , little Higgs models, TeV extra dimensions and Randall-
Sundrum scenarios Strassler:2006im ; Arkani-Hamed:2001nha ; Randall:1999ee ;
Randall:1999vf . In Neutral Naturalness scenarios, hidden valleys actually
solve the little hierarchy problem directly Chacko:2005pe . Models containing
a hidden valley have also been studied in the context of dark matter
Hur:2007uz , matter-antimatter asymmetry Bai:2013xga and the origin of
neutrino masses Grossman:2010iq .
In most hidden sector scenarios with a confining gauge force, the dark parton
shower is qualitatively similar to QCD: the asymptotic freedom of the running
gauge coupling enhances soft and collinear emission, resulting in the
production of hidden sector states in collimated jets. If the hidden ’t Hooft
coupling is large ($\lambda\equiv g^{2}N_{c}\gg 1$) and approximately constant
over a significant energy range, the distribution of the produced dark hadron
momenta will be much more democratic than the hierarchical jet-like behaviour
we see in QCD. This is the SUEP class of signals, characterized by relatively
soft, isotropic emission of dark hadrons Strassler:2008bv ; Knapen:2016hky .
It is worth keeping in mind that strongly coupled hidden sector dynamics is
not the only scenario that can lead to SUEPs. Similar final states can be
produced in many-step cascade decays in the hidden sector Elor:2015tva , or in
theories with extra spatial dimensions (see e.g. Cesarotti:2020uod ;
Costantino:2020msc ; Park:2012fe ). We now describe the general features of
production, evolution and decay that constitute the SUEP signal, and define a
benchmark scenario for our study.
### 2.1 Dark Hadron Production in Exotic Higgs Decays
Production of SUEP can occur through various portals coupling SM particles to
SM-singlet states charged under a dark gauge group. Higgs and vector boson
portals are commonly studied examples. Alternatively, a new particle charged
under both SM and the dark gauge group could be produced Knapen:2021eip . We
assume that hidden sector states $\psi_{D}$ can be produced via the Higgs
portal:
$\mathcal{O}_{\text{production}}\sim|H|^{2}\psi_{D}\bar{\psi}_{D}\;.$ (1)
$\psi_{D}$ could represent a fermion or scalar dark quark charged under the
hidden gauge group. In the fermion case, the above operator is actually
dimension 5 with a coupling ${\sim}1/\Lambda$ for some UV scale $\Lambda$. This
operator will give rise to the production of hidden states in exotic Higgs
decays, as long as the dark hadrons are kinematically accessible. The dark
quarks hadronize into a large multiplicity of dark hadrons.
For a confining gauge theory, the evolution of the system from the hard scale
$Q$ at which the first partons charged under the gauge group are generated to
the IR confinement scale $\Lambda$ governs the hadron multiplicity generated
during showering. $Q$ is of the order $m_{h}$ in our case. For QCD jets, this
evolution can be reliably simulated as a parton shower, but for SUEPs
different strategies are required.
The average hadron multiplicity is given by
$\langle n(Q)\rangle=\int_{0}^{1}F(x,Q)dx$ (2)
where $F$ is the fragmentation function describing the final state
distribution of momenta Ellis:1996mzs , and $x$ is the momentum fraction of a
given splitting. Calculating its evolution from the scale $Q$ down to an IR
scale $\Lambda$ involves resumming the divergent contributions from the
anomalous dimensions, with the leading contribution obtained from just the
first ($j=1$) Mellin transform of the anomalous dimension. If the theory is
conformal between $Q$ and $\Lambda$, the running of the coupling can (by
definition) be neglected. In the limit where the ’t Hooft coupling is large
$\lambda\gg(1-x)$ it was first shown in Ref. Hatta:2008tn that
$\langle n\rangle\sim\left(\frac{Q}{\Lambda}\right)^{2\gamma_{T}(1)}\;,$ (3)
where $\gamma_{T}(1)$ is the first Mellin moment of the time-like anomalous
dimension of the fragmentation function. Further expanding to zeroth order in
the coupling yields
$\langle n\rangle\sim\left(\frac{Q}{\Lambda}\right)^{1+\mathcal{O}(1/\sqrt{\lambda})}\;.$ (4)
Note that the small momentum fraction $x$ carried by each individual splitting
follows from $\lambda\gg(1-x)$. In this strongly coupled regime, branching is expected to
yield emissions with $x\sim\Lambda/Q$ that are relatively isotropic in
direction and democratic in momentum Hatta:2008tn ; Knapen:2016hky . Thus,
with a large enough scale separation $Q\gg\Lambda$, low-$x$, high-multiplicity
final states are generated. Branching terminates after
$N_{\text{final}}\sim\mathrm{log}{\langle n\rangle}$ splittings at
$Q_{N_{\text{final}}}\sim Q/2^{N_{\text{final}}}\sim\Lambda$, at which point
hadronization takes over.
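As a back-of-the-envelope illustration of this scaling (not a statement about any specific model), one can take $Q=m_h=125$ GeV and an assumed dark confinement scale $\Lambda=1$ GeV:

```python
import math

# Illustrative numbers for the scaling of Eq. (4): <n> ~ Q/Lambda in the
# large 't Hooft coupling limit, with branching terminating after
# ~log2(<n>) splittings.  Q = m_h; Lambda = 1 GeV is an assumed value.
Q = 125.0                            # hard scale of the exotic Higgs decay (GeV)
Lam = 1.0                            # assumed dark confinement scale (GeV)

n_avg = Q / Lam                      # leading-order dark hadron multiplicity
n_splittings = math.log2(n_avg)      # branchings before hadronization takes over
Q_final = Q / 2 ** n_splittings      # scale at which branching terminates

print(round(n_avg), round(n_splittings), round(Q_final, 3))  # -> 125 7 1.0
```

With a large scale separation the shower thus produces $\mathcal{O}(100)$ soft dark hadrons after only a handful of splittings, which is the qualitative origin of the SUEP final state.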
Hadron production in QCD at high temperatures is a close analogue to our
situation, and statistical models have consistently shown that hadron
multiplicities follow a thermal distribution Fermi:1950jd ; Hagedorn:1965st ;
PhysRevD.1.1416 ; Blanchard:2004du ; Hatta:2008tn ; Becattini:2001fg ;
Cleymans:2012wm ; Becattini:2008tx ; Becattini:2010sk ; Becattini:2009ee ;
Ferroni:2011fh . We use this picture as a toy model of dark hadron production
in SUEP, modelling the distribution of dark meson momenta as a relativistic
Boltzmann distribution
$\frac{dN}{d^{3}\mathbf{p}}\sim\exp\left(-\frac{\sqrt{\mathbf{p}^{2}+m_{D}^{2}}}{T_{D}}\right)\;,$ (5)
where $m_{D}$ is the mass of the final dark states and $T_{D}$ acts as the
Hagedorn temperature of the hidden confining gauge force, with
$T_{D}\sim\Lambda$ Knapen:2016hky ; Blanchard:2004du . This temperature
controls the kinetic energy of the dark hadron distribution.
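A minimal sketch of sampling dark-meson momenta from Eq. (5) via rejection sampling is shown below; this is a schematic toy, not the SUEP_Generator implementation, and the tail truncation at $m_D+10\,T_D$ is our assumption.

```python
import math
import random

def sample_boltzmann_momentum(m_D, T_D, rng):
    """Draw one dark-meson momentum 3-vector (in GeV) from the relativistic
    Boltzmann distribution of Eq. (5), dN/d^3p ~ exp(-E/T_D), using simple
    rejection sampling.  A schematic sketch, not the SUEP_Generator code."""
    # radial density p(|p|) ~ |p|^2 exp(-sqrt(p^2 + m^2)/T); bound it on a grid
    p_max = m_D + 10.0 * T_D                      # assumed truncation of the tail
    f = lambda p: p * p * math.exp(-math.hypot(p, m_D) / T_D)
    f_bound = 1.05 * max(f(p_max * i / 1000.0) for i in range(1, 1001))
    while True:
        p = rng.uniform(0.0, p_max)
        if rng.uniform(0.0, f_bound) < f(p):
            break
    # directions are isotropic in the shower centre-of-mass frame
    cos_t = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    return (p * sin_t * math.cos(phi), p * sin_t * math.sin(phi), p * cos_t)
```

Sampling many momenta for, e.g., $m_D=T_D=0.4$ GeV reproduces the soft, isotropic kinematics described above.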
### 2.2 Dark Hadron Decay
Generically the decay of the dark hadrons $\phi_{D}$ into the SM will occur
through some effective coupling of the form
$\mathcal{O}_{\text{decay}}=\phi_{D}\mathcal{O}_{\text{SM}}$ (6)
where $\mathcal{O}_{\text{SM}}$ contains fields charged under the SM gauge
group. The phenomenology of the SUEP signal is determined to a large extent by
the dominant decay portal. For example, if the dark hadrons decay to massive
dark photons that mix with the SM photon, then the SM final state contains
both hadrons and leptons in roughly gauge-ordered proportions, though for
various dark photon masses in the GeV-regime, hadrons can dominate
Knapen:2016hky . Alternatively, if the dark hadrons decay through the Higgs
portal, the final state will contain hadrons and leptons in roughly Yukawa-
ordered proportions. For sufficiently small portal coupling, the decay length
of the dark hadrons may also be macroscopic, resulting in LLP signatures
Alimena:2019zri .
It is also possible for the hidden sector states to decay purely hadronically,
which is the most experimentally challenging case at the LHC. A very simple
example is the gluon portal as described in Knapen:2021eip :
$\mathcal{L}\supset-\frac{1}{2}m_{a}^{2}a^{2}-\frac{\alpha_{2}}{8\pi}\frac{1}{f_{a}}aG_{\mu\nu}\tilde{G}^{\mu\nu}-iy_{\psi_{D}}a\psi_{D}\psi_{D}^{*}\;.$ (7)
Here, $a$ is a heavy elementary pseudo-scalar in the dark sector and
$\psi_{D}$ is the dark quark. Dark hadrons, which are bound states of
$\psi_{D}$, could then decay to SM hadrons via an effective operator
$\phi_{D}G\tilde{G}$. Another example is the hadrophilic (or leptophobic)
$Z^{\prime}$ portal Bernreuther:2019pfb , where a new heavy gauge boson
couples to SM quarks but not leptons, allowing dark hadrons to decay via an
effective operator like $\phi_{D}q\bar{q}$.
### 2.3 Prompt Hadronic Benchmark Scenario
The production of hidden sector states via the Higgs portal in general, and in
exotic Higgs decays in particular, is one of the best-motivated and most
plausible discovery scenarios for new physics Curtin:2013fra . It is therefore
vital that our experimental search strategies cover all possibilities for a signal
that our experimental search strategies cover all possibilities for a signal
at the LHC. This is especially urgent since for final states that are not
covered by existing searches, branching fractions of ${\sim}10\%$ are easily
allowed by current measurements of Higgs couplings and invisible decays
ATLAS:2018bnv ; ATLAS:2019cid ; ATLAS:2019nkf ; CMS:2018yfx ;
Biekoetter:2018ypq .
We therefore focus on the experimental worst-case scenario for SUEP produced
in exotic Higgs decays: purely hadronic and prompt decays, with a particular
interest in low dark hadron masses that make resonance searches Pierce:2017taw
; ATLAS:2020ahi or applications of jet substructure techniques Park:2017rfb
challenging. While the simplest gluon portal scenarios suggest that dark
hadrons lighter than ${\sim}10\;\mathrm{GeV}$ have macroscopic decay lengths
Knapen:2021eip (which could allow for the use of long-lived particle search
techniques Schwaller:2015gea ; Alimena:2019zri ), other possibilities can
easily realize prompt, purely hadronic decays over a much wider range of dark
hadron masses. For example, the hadrophilic (leptophobic) vector portal
Bernreuther:2019pfb with a hypothetical confining sector where the lightest
dark hadron is a dark-rho-like vector $\rho_{D}$ would allow dark hadrons
lighter than a GeV to decay promptly into SM hadrons. In focusing on the
prompt case we can develop techniques that allow SUEP production to be
identified using only the geometrical and momentum distribution of its SM
final states. These techniques will enhance our sensitivity for any kind of
general SUEP, in addition to whatever other features of the final state, like
displaced decays, leptons, or photons, can also be exploited.
Figure 1: Cartoon of our benchmark scenario for SUEP produced in Higgs
decays with prompt decay of dark hadrons into SM hadrons. The SUEP description
applies in the purple area, for $T$ within a factor of a few of the dark
hadron mass. In the green region, dark hadron production is not thermal, but
described by processes more analogous to chiral QCD. In the orange and green
regions, searches for Higgs decays to a few resonances would be sensitive to
this dark sector. The parameter space region indicated with the darker purple
rectangle is the focus of our analysis. Our cuts are optimized for the most
archetypically SUEP-like final states, schematically indicated by the lower-
left corner of this rectangle, demarcated with the diagonal line.
The parameter space of our benchmark scenario is shown schematically in Figure
1. The Higgs production portal sets the high scale for the event to
$Q=125\;\mathrm{GeV}$. With this scale fixed, the distribution of final states
is in principle determined by the dark hadron mass $m_{D}$ and the dark
Hagedorn temperature $T_{D}$ of Eq. (5), shown on the vertical and horizontal
axes of Figure 1. In reality, there may be different dark hadrons with
different spins, decays, and distributions, but this simplified description is
sufficient for our purposes. On the left and right, the relevant parameter
space is kinematically bounded by the red-hatched areas. Dark hadron
production in Higgs decays requires $m_{D}<m_{h}/2$. As we explain below, we
focus on dark hadrons that decay to SM hadrons, which requires
$m_{D}>(2-3)m_{\pi}$, depending on the exact decay portal.
Even within that mass range, hidden sectors with pseudo-conformal dynamics do
not always manifest as SUEP signatures in exotic Higgs decays. If
$m_{D}>{m_{h}}/{3}$ (blue area in Figure 1), then dark hadrons are only pair-
produced in Higgs decays, making this scenario equivalent to standard exotic
Higgs decays to pairs of various new particles (see e.g. Curtin:2013fra ). If
the dark hadron mass is fairly large, $m_{D}\sim m_{h}/(\mathrm{few})$ (yellow
area), or the dark Hagedorn temperature is comparable to or above the Higgs
mass $T\gtrsim m_{h}$ (green area), then exotic Higgs decay would produce only
a small multiplicity of dark hadrons that are either fairly hard or fairly
heavy. In both cases, dark hadron production is not thermal but is described
by processes more akin to chiral QCD. These regions are likely accessible
through modified searches for resonances in exotic Higgs decays Pierce:2017taw
; ATLAS:2020ahi , and we do not focus on them here. The gray area where
$T/m\ll 1$ is not expected to be realized by any pseudo-conformal hidden
sector, since the dark hadron mass and temperature are both related to the
strong coupling scale $\Lambda$. On the other hand, $T/m\gg 1$ is possible in
what we call the “dark pion regime”, where the dark hadrons are pseudo-
Goldstone bosons of an approximate symmetry, meaning their mass can be much
smaller than $\Lambda$. We do not focus on this region, but it would be an
interesting target for future investigations.
This leaves us with the actual SUEP regime for dark hadron production in
exotic Higgs decays, indicated by the light purple area. In this
investigation, we will focus on dark hadron masses below 8 GeV and $T/m$ in
the reasonable range of $\sim$ 0.25 to 4. This target SUEP parameter space is
marked out as the darker purple rectangle. Our cuts will be particularly
optimized for the lower-left region of the rectangle, demarcated with the
thick diagonal line. This is the region of low dark hadron mass and/or
temperature, corresponding to the softest SM final states that are most
difficult to search for using existing techniques.
## 3 Simulation
We briefly outline how we simulate our SUEP signal and the most important QCD
backgrounds, where the latter is necessary to develop our analysis techniques,
even though a realistic experimental analysis would rely on data-driven
background estimation.
### 3.1 Signal
We generate event samples for exotic Higgs decay into SUEP using the
SUEP_Generator plugin Knapen:2021eip for Pythia 8.243 Sjostrand:2014zea ,
which models the dark shower as a spherical distribution of dark pseudo-scalar
mesons with momenta drawn from the relativistic Boltzmann distribution Eq.
(5). As in Ref. Knapen:2016hky , we make the simplifying assumption that there
is only one flavor of dark meson produced in the exotic Higgs decay. The free
parameters of the SUEP shower are the dark hadron mass $m_{D}$ and the
effective temperature $T_{D}$, which sets the typical energy scale at which
dark hadrons are produced.
We simulate associated Higgs production at the 14 TeV HL-LHC, $pp\to
Vh,V\to\ell\ell/\ell\nu$, in Pythia 8. The Higgs is decayed to the SUEP final
state of dark mesons, which then decay directly to a $u\bar{u}$ quark pair
that in turn undergoes SM hadronization. The exact choice of hadronic decay
mode does not significantly affect our analysis, so we use this single channel
as a stand-in for other purely hadronic portals. The events are then passed
through the simplified detector simulation code Delphes 3 deFavereau:2013fsa
with CMS detector settings. The simulated detector-level objects output by
Delphes are used for our analysis. Our SUEP search will only use charged-track
information. Since charged tracks can be traced back to the primary vertex,
they are very robust with respect to pile-up contamination. We can therefore
neglect the effects of pile-up in the remainder of our study.
In our event samples we cover $m_{D}$ from 400 MeV to 8 GeV, and $T_{D}/m_{D}$
from 0.25 to 4. The lower bound on the dark hadron mass ensures that the dark
mesons are kinematically allowed to decay to two pions. The upper bound is
chosen to show where our search loses sensitivity. The range of temperatures
is chosen to satisfy $T_{D}\sim m_{D}$, which is the regime where the thermal
picture of SUEP production is valid. The signal cross section is
Cepeda:2019klc
$\sigma(pp\to e\nu/\mu\nu+\mathrm{SUEP})=(0.34\ \mathrm{pb})\cdot\mathrm{Br}(h\to\mathrm{SUEP})$ (8)
$\sigma(pp\to ee/\mu\mu+\mathrm{SUEP})=(0.066\ \mathrm{pb})\cdot\mathrm{Br}(h\to\mathrm{SUEP})$
for SUEP production in exotic Higgs decays in association with leptonic
$W/Z$-bosons that decay into electrons or muons. In total, we generate
$2.0\times 10^{5}$ $Zh$ and $Wh$ signal events, proportional to their
respective cross section, for each set of signal parameters
$(m_{D},T_{D}/m_{D})$.
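For orientation, these cross sections translate into the following raw signal yields; the HL-LHC integrated luminosity of $3\,\mathrm{ab}^{-1}$ and $\mathrm{Br}(h\to\mathrm{SUEP})=1\%$ used here are illustrative assumptions of this sketch, not results of the analysis.

```python
# Raw signal yields implied by Eq. (8); luminosity and branching ratio
# below are illustrative assumptions, not outputs of the analysis.
lumi_pb = 3.0e6   # 3 ab^-1 expressed in pb^-1 (assumed HL-LHC luminosity)
br = 0.01         # assumed Br(h -> SUEP)

n_wh = 0.34 * br * lumi_pb    # e nu / mu nu + SUEP events
n_zh = 0.066 * br * lumi_pb   # ee / mumu + SUEP events
print(round(n_wh), round(n_zh))  # -> 10200 1980
```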
### 3.2 Background
The dominant background to our signal is production of one or two leptons in
association with any number of QCD jets. It is highly challenging to model
reliably, and in a realistic study, data-driven background estimation would be
employed, see Section 4.4. However, for the purpose of developing our analysis
techniques, we simulate QCD+leptons background samples using MadGraph5_aMC@NLO
2.6.6 and Pythia 8.243.
Ideally, one should simulate fully matched multi-jet $+$ $\ell\ell/\ell\nu$
samples to capture the background distribution as closely as possible.
However, due to the large statistics needed for our analysis, and the fact
that such a simulation is anyway unlikely to be a perfect representation of
the detailed hadronic distributions at the relevant high multiplicities and
relatively low energy scale of the Higgs mass, this approach is not practical.
Instead, we simulate $nj+\ell\ell/\ell\nu$, where $n=2,3,4$ without jet
matching and $p_{T}>15$ GeV at generator level, to determine the effect of jet
multiplicity at the hard event level on our analysis. We find that $n>2$ leads
to lower cross section while being _more distinguishable_ from the SUEP signal
using the analysis techniques we develop here. Therefore, to be conservative,
we simulate $2j+\ell\ell/\ell\nu$ as our background samples for $Zh$ and $Wh$
production and decay into SUEP, respectively. In total, we use $10^{8}$
background events to represent the
$\sigma(\mathrm{QCD}\ +\ \ell\ell/\ell\nu)\approx 3.7\times 10^{3}{\ \rm pb}$
(9)
lowest-order MadGraph5 cross section for this background. While this is
sufficient to develop our analysis techniques, the Monte Carlo background
sample has $\sim 1/100$ the statistics of the full HL-LHC dataset. This is
important for the interpretation of our results in Section 5.
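The quoted factor of ${\sim}1/100$ can be checked directly, again assuming an HL-LHC dataset of $3\,\mathrm{ab}^{-1}$ (our assumption for this estimate):

```python
# MC-to-data statistics check for the QCD + leptons background of Eq. (9),
# assuming 3 ab^-1 of HL-LHC data (an assumption of this estimate).
sigma_pb = 3.7e3                   # lowest-order background cross section
lumi_pb = 3.0e6                    # 3 ab^-1 in pb^-1
n_expected = sigma_pb * lumi_pb    # ~1.1e10 background events in data
ratio = 1.0e8 / n_expected         # simulated sample vs. expected data
print(ratio)                       # ~0.009, i.e. roughly 1/100
```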
## 4 Analysis
The goal of this paper is to devise strategies for extracting SUEP signals
from a large number of background events without relying on the details of the simulated
signal. We first describe our trigger assumptions and baseline cuts, and
define a cut-based classifier to establish how sensitive such a simple
approach can be. Section 4.2 introduces a supervised neural network
classifier, to demonstrate both the advantages and limitations of the
supervised approach for our physics problem. We then introduce our primary
tool in Section 4.3 — an unsupervised neural network that we employ as an
anomaly detector for SUEP.
### 4.1 SUEP Observables
We define our trigger pre-selection by requiring that all events have at least
one electron or muon with $p_{T}\geq 40$ GeV, or two opposite-sign
charged leptons with $p_{T}\geq 30(20)$ GeV. We also require that the scalar
$p_{T}$-sum of hadronic charged tracks from Delphes is above $30$ GeV. Both
signal and background have a trigger efficiency of ${\approx}40\%$, relative
to the cross sections in Eq. (8) and Eq. (9).
We focus on the most challenging region of SUEP parameter space, with either
low dark hadron masses or low dark shower temperatures. This gives rise to the
most archetypically SUEP-like final states with a high multiplicity of
isotropically distributed, relatively soft SM hadrons. The three observables
that best capture the characteristics of this signal are the charged particle
multiplicity $N_{\mathrm{charged}}$, the event isotropy $\mathcal{I}$
Cesarotti:2020hwb , and the interparticle distance. In all steps of the
analysis that follow, we only use charged particle tracks with $p_{T}\geq 300$
MeV from the primary vertex, excluding the one or two hard leptons associated
with the decaying gauge boson.
The event isotropy observable $\mathcal{I}\in(0,1)$ Cesarotti:2020hwb
quantifies the energy mover’s distance between a collider event and an
idealized isotropic event with uniform energy distribution, so $\mathcal{I}=0$
indicates a fully isotropic event. The original work lays out three
definitions of the event isotropy, based on spherical, cylindrical, or
two-dimensional ring geometries. We compute the _ring isotropy_ of
the set of charged hadronic tracks of each event, since at a $pp$ collider we
have no way to know the longitudinal boost of the Higgs that decays to SUEP.
Since the SUEP is isotropic in the Higgs rest frame, we boost the hadronic
charged track system of each event into its transverse rest-frame before
computing the ring isotropy. To do this we assume that all hadronic charged
tracks belong to particles with the pion mass, but this is sufficient to
significantly separate signal and background events.
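The boost into the transverse rest frame can be sketched as follows; this is our schematic reconstruction of the pre-processing step described above, not the code used for the published results.

```python
import math

M_PION = 0.13957  # charged-pion mass in GeV, assumed for every track

def four_vector(pt, eta, phi):
    """(E, px, py, pz) of a track under the pion-mass assumption."""
    px, py = pt * math.cos(phi), pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    e = math.sqrt(px * px + py * py + pz * pz + M_PION * M_PION)
    return [e, px, py, pz]

def boost_to_transverse_rest_frame(tracks):
    """Boost the (pt, eta, phi) charged-track system so that its summed
    transverse momentum vanishes; longitudinal momenta are untouched."""
    p4 = [four_vector(*t) for t in tracks]
    e_tot = sum(v[0] for v in p4)
    bx = sum(v[1] for v in p4) / e_tot     # transverse boost velocity
    by = sum(v[2] for v in p4) / e_tot
    b2 = bx * bx + by * by
    gamma = 1.0 / math.sqrt(1.0 - b2)
    boosted = []
    for e, px, py, pz in p4:
        bp = bx * px + by * py             # beta . p for this track
        coef = (gamma - 1.0) * bp / b2 - gamma * e if b2 > 0.0 else 0.0
        boosted.append([gamma * (e - bp), px + coef * bx, py + coef * by, pz])
    return boosted
```

After the boost, the summed $p_x$ and $p_y$ of the track system vanish, so the ring isotropy is evaluated in the (approximate) transverse rest frame of the Higgs.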
The variable that we introduce for the specific purpose of studying SUEP
events is the interparticle distance matrix $\Delta R_{ij}$ for charged hadron
tracks in the lab frame. It captures the unique topology of SUEP events while
being very suitable for machine-learning applications, since it is invariant
under re-definitions of the azimuthal angle around the beam axis. It is also
useful to define the mean $\overline{\Delta R}$ of all matrix entries.
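A minimal sketch of both quantities, using the standard $\Delta R_{ij}=\sqrt{\Delta\eta_{ij}^{2}+\Delta\phi_{ij}^{2}}$ definition:

```python
import math

def delta_r_matrix(etas, phis):
    """Interparticle distance matrix Delta R_ij = sqrt(d_eta^2 + d_phi^2)
    for the charged-hadron tracks of one event (lab frame)."""
    n = len(etas)
    dr = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            deta = etas[i] - etas[j]
            # wrap the azimuthal difference into (-pi, pi]
            dphi = (phis[i] - phis[j] + math.pi) % (2.0 * math.pi) - math.pi
            dr[i][j] = dr[j][i] = math.hypot(deta, dphi)
    return dr

def mean_delta_r(dr):
    """Mean of the off-diagonal entries: the bar{Delta R} observable."""
    n = len(dr)
    return sum(dr[i][j] for i in range(n) for j in range(i + 1, n)) / (n * (n - 1) / 2)
```

The matrix is symmetric and invariant under global rotations of the azimuthal angle, which is what makes it a convenient machine-learning input.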
Figure 2 shows distributions of $N_{\text{charged}}$, $\mathcal{I}$, and
$\overline{\Delta R}$ for the QCD background and a variety of SUEP benchmark
points after trigger selection. The separation between signal and background
is clear, with SUEP having higher multiplicity, more isotropic distribution of
tracks, and a significantly wider spread of inter-particle distances. To
understand how these observables change across SUEP parameter space, we show
their average values (across the whole sample after trigger selection) as a
function of $m_{D}$ and $T_{D}$ in Figure 3. The pairwise correlations between
each of these observables are included in the Appendix in Figure 8,
demonstrating that each of these three variables encode distinct information
about each event.
Figure 2: Comparison of QCD and SUEP distributions of selected observables.
In addition to the trigger requirements, we therefore impose the following
pre-selection cuts on all events:
$N_{\text{charged}}\geq 70\;,\quad\mathcal{I}<0.07\;,\quad\overline{\Delta R}<3\;.$ (10)
These cuts target the most SUEP-like parts of signal parameter space with low
dark Hagedorn temperature and/or low dark hadron mass (see Fig. 1). All but
2.2% of the post-trigger background is eliminated, while $31.8\%$ of the
signal is retained for $m_{D}=0.4$ GeV and $T_{D}=0.4$ GeV. These requirements are less
optimal for larger dark hadron masses or temperatures — for example, the
signal efficiency is only $1.1\%$ for $m_{D}=5$ GeV, $T_{D}=20$ GeV. However,
larger temperatures and masses generally lead to higher-energy final states or
separable resonances, and are therefore not the focus of our present analysis.
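A minimal implementation of the pre-selection of Eq. (10), applied here to two hypothetical event summaries (the second uses the QCD average values quoted in the Figure 3 caption):

```python
def passes_preselection(n_charged, isotropy, mean_dr):
    """Baseline pre-selection of Eq. (10), applied after the trigger cuts."""
    return n_charged >= 70 and isotropy < 0.07 and mean_dr < 3.0

# hypothetical per-event summaries (N_charged, I, mean Delta R):
events = [(82, 0.03, 2.1),    # SUEP-like event: passes
          (51, 0.25, 2.7)]    # QCD-average values (Fig. 3): fails
kept = [e for e in events if passes_preselection(*e)]
print(len(kept))  # -> 1
```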
Figure 3: Average values of charged particle multiplicity, event isotropy and
mean-interparticle-distance as a function of $m_{D}$ and $T_{D}$ for SUEP. QCD
average values: $\langle{N}_{\text{charged}}\rangle=51$,
$\langle{\mathcal{I}}\rangle=0.25$, $\langle\overline{\Delta R}\rangle=2.7$.
We now consider three options for SUEP searches at the HL-LHC, using events
which pass the baseline pre-selection as a starting point:
1.
The simplest strategy is a cut-and-count analysis using high-level
observables. It will serve as a baseline for more sophisticated machine-
learning techniques, and can be implemented very easily and effectively with a
stricter cut on $\overline{\Delta R}$ compared to Eq. (10). Varying the cut
threshold yields a significance improvement curve (SIC) of the signal
efficiency vs the background efficiency for each point in signal parameter
space, which we will then be able to compare to results using machine learning
methods. As we will see, this already yields very promising SUEP
sensitivities.
2.
A supervised ML-classifier requires detailed knowledge of the signal, since it
is trained on signal and background. This makes supervised techniques unlikely
to be a realistic analysis candidate for broad SUEP searches. However, we
perform a simple supervised study in Section 4.2 to demonstrate the best-case
scenario for SUEP sensitivity if the signal was very well understood.
3.
In Section 4.3, we use an unsupervised autoencoder trained only on the
background as an anomaly detector to improve on the sensitivity of the cut-
and-count analysis. This is likely to be a realistic analysis candidate since
it can be performed using data-driven background estimation techniques without
precise knowledge of the signal.
### 4.2 Supervised ML-Classifier
While a supervised classifier is explicitly dependent on the characteristics
of the signal (and background) simulation used in its training, which we
cannot trust in detail, it can still provide a useful comparison to an
unsupervised network, and give an indication of how sensitive a search based
on similar methods could be with improved modelling of the SUEP signal. An
additional limitation is that even with reliable signal simulation, the
parameters of the real SUEP are unknown, and a supervised network trained to
recognize SUEP with one set of parameters may fail if the true parameters
change. As we will see, this is indeed the case.
The supervised network architecture we choose is a dynamical graph
convolutional neural network wang2019dynamic ; Bernreuther:2021gds . We
implement the network in PyTorch. The input feature representation for both
the supervised and unsupervised networks is the interparticle distance matrix,
with the redundant half above the diagonal set to zero and track $p_{T}$
information added to the diagonal:
$\Delta\tilde{R}_{ij}\equiv\begin{cases}\Delta R_{ij}&i>j\\ p_{T,i}/\mathrm{GeV}&i=j\\ 0&i<j\end{cases}$ (11)
All events are required to have at least $N_{\text{charged}}\geq 70$ tracks,
and all events are truncated to keep only the $70$ highest-$p_{T}$ tracks to
ensure each event has the same dimensionality in the analysis.
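The construction of this input representation can be sketched as follows; `feature_matrix` and its argument layout are our illustrative names, and the convention follows Eq. (11) literally (non-zero $\Delta R_{ij}$ entries below the diagonal).

```python
import math

def feature_matrix(tracks, n_keep=70):
    """Input representation of Eq. (11) for one event: entries below the
    diagonal hold Delta R_ij, the diagonal holds the track p_T in GeV,
    and the remaining entries are zero.  `tracks` is a list of
    (pt, eta, phi) tuples; only the n_keep hardest tracks are kept."""
    tracks = sorted(tracks, key=lambda t: -t[0])[:n_keep]
    n = len(tracks)
    m = [[0.0] * n for _ in range(n)]
    for i in range(n):
        m[i][i] = tracks[i][0]                       # p_T on the diagonal
        for j in range(i):                           # Delta R_ij for i > j
            deta = tracks[i][1] - tracks[j][1]
            dphi = (tracks[i][2] - tracks[j][2] + math.pi) % (2.0 * math.pi) - math.pi
            m[i][j] = math.hypot(deta, dphi)
    return m
```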
The input matrix $\Delta R_{ij}$ is used to generate graph edges between each
particle (node) and its $k=7$ nearest neighbours in $\Delta R$ space. The node
features for each particle are the 70-dimensional vector of $\Delta R$
distances to all other particles in the event. The graph network has two
EdgeConv blocks, each comprising a three-layer perceptron with leaky ReLU
activation Qu:2019gqs . The EdgeConv operation updates the node features
$x_{i}$ of each particle as
$\vec{x}_{i}^{{}^{\prime}}=\frac{1}{k}\sum_{j=1}^{k}\vec{h}_{\theta}(\vec{x}_{i},\vec{x}_{i}-\vec{x}_{j})\;,$
(12)
where $k$ is the number of neighbours assigned to each node, and $h_{\theta}$
is a non-linear function of learned parameters $\theta$, implemented as a
three-layer perceptron. The graph edges are re-computed between the first and
second blocks using the Euclidean distance between the feature vectors of each
node to determine its $14$ nearest neighbours. The first block has feature
dimension 64, while the second block has feature dimension 32. Batch
normalization follows each layer. The output of the graph layers is averaged,
then passed through two fully connected layers, first expanding to dimension
128, then down to output dimension 2. The loss function is the cross-entropy
loss between the output and the true class label of each event.
$\mathcal{L}(x,\text{class})=-\log\frac{x[\text{class}]}{\sum_{j}x[j]}$ (13)
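In code, the EdgeConv update of Eq. (12) amounts to averaging a learned function of each node and its neighbour differences. A minimal, framework-free sketch (in the network itself $h_{\theta}$ is the trained three-layer perceptron; here it is passed in as an arbitrary callable):

```python
def edgeconv_update(x, neighbours, h_theta):
    """One EdgeConv pass, Eq. (12): x is a list of per-node feature vectors,
    neighbours[i] holds the indices of node i's k nearest neighbours, and
    h_theta stands in for the learned perceptron h_theta(x_i, x_i - x_j)."""
    out = []
    for i, xi in enumerate(x):
        msgs = [h_theta(xi, [a - b for a, b in zip(xi, x[j])])
                for j in neighbours[i]]
        dim = len(msgs[0])
        # average the k messages, Eq. (12)
        out.append([sum(m[d] for m in msgs) / len(msgs) for d in range(dim)])
    return out
```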
To test how the performance of the model depends on the choice of training
parameters, we train twelve neural networks on twelve different choices of
$(m_{D},T_{D}/m_{D})$, with $m_{D}=0.5,1,2\;\mathrm{GeV}$ and
$T_{D}/m_{D}=0.5,1,2,4$, and evaluate their effectiveness over the whole
signal parameter space. We also evaluate the efficacy of a ‘cocktail approach’
Aguilar-Saavedra:2017rzt ; Knapp:2020dde ; Bernreuther:2021gds ; Baldi:2016fzo
by training on a mixed sample including signal events from each of the twelve
parameter choices. A conditional training Louppe:2016ylz ; CMS:2019dqq on the
signal parameters would be a similar approach. Each network is trained for 10
epochs with a decaying learning rate. Longer training periods were tested and
found to be unnecessary. Loss values for each test sample event are obtained
for the model realizations of the last 5 training epochs before being
averaged.
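The per-event averaging over the last training epochs, used here and again for the autoencoder, is a simple reduction; a sketch with our own function name:

```python
def averaged_losses(per_epoch_losses, n_last=5):
    """Average each test event's loss over the model snapshots from the
    last n_last training epochs, as described above."""
    last = per_epoch_losses[-n_last:]
    n_events = len(last[0])
    return [sum(ep[i] for ep in last) / len(last) for i in range(n_events)]
```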
### 4.3 Unsupervised Autoencoder
Supervised ML-classification is an extremely powerful analysis tool, but it
only works for signals with well-defined and universal features and
corresponding precision simulations. At the LHC, this is not always the case,
and SUEP with its toy shower is a perfect example for a more broadly defined
signal. Here we prefer not to train a classifier on signal simulations.
Instead, we employ anomaly detection methods, where we train a network only on
the well-understood background dataset, so that it can flag events that are
anomalous in comparison. We use an autoencoder Rumelhart1986 ; Heimel:2018mkt
; Farina:2018fyg and train it on data without class labels, with a loss
function that incentivizes its output to be as close as possible to the input.
The intermediate network layers have a restricted number of nodes compared to
the dimension of the input and output, forcing the network to compress the
information in the input, and then decompress it to recover the output. The
principle of the autoencoder’s use as an anomaly detector is that it should
fail to accurately reconstruct events that are anomalous compared to the
dataset it was trained on. A high reconstruction loss flags an event as being
potentially anomalous, or in collider physics parlance, a signal (SUEP)
candidate event.
The autoencoder uses the same modified $\Delta\tilde{R}_{ij}$ matrix input as
used by the supervised network, see Eq. (11). Other representations were
tested, including the matrices of both $k_{T}$ and anti-$k_{T}$ distances
$d_{ij}=\min(k_{ti}^{\pm 2},k_{tj}^{\pm 2})(\Delta\tilde{R}_{ij})^{2}/R$
between particles Cacciari:2008gp , the high-level observables used in the
pre-selection cuts, as well as the raw $p_{T},\phi,\eta$ values for each
particle; all yielded results that were much less useful than the analysis we
present here. This emphasizes the importance of choosing the correct input
representation over sophisticated network architecture for SUEP searches.
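The generalized-$k_T$ pairwise distance tested above is a one-liner; a sketch (parameter names are ours, with $p=+1$ for $k_T$ and $p=-1$ for anti-$k_T$, following the formula in the text):

```python
def gen_kt_distance(pt_i, pt_j, dr_ij, p=1, R=1.0):
    """d_ij = min(pt_i^{2p}, pt_j^{2p}) * dr_ij**2 / R, one of the
    alternative input representations tested for the networks."""
    return min(pt_i ** (2 * p), pt_j ** (2 * p)) * dr_ij ** 2 / R
```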
The matrix $\Delta\tilde{R}_{ij}$ is flattened into a vector of length
$N_{\text{charged}}^{2}$ and fed into the autoencoder. The neural network
comprises five fully connected layers, with the number of nodes decreasing to
the bottleneck size in the third layer, then increasing back to
$N_{\text{charged}}^{2}=4900$. An alternative 3-layer network performs only
slightly worse. For the bottleneck size we find that larger bottlenecks
consistently lead to better performance than smaller ones, so we use
$N_{\text{bottleneck}}=1000$. Each layer of the network has a leaky ReLU
activation with slope $-0.2$ for negative values of $x$, except the final layer
which has a ReLU activation. The loss function of the network measures the
difference between the network’s input and output as
$\mathcal{L}(y^{\text{in}},y^{\text{out}})=\frac{1}{N_{\text{charged}}}\sum_{i}\frac{|y^{\text{out}}_{i}-\sigma(y^{\text{in}}_{i})|^{m}}{|\sigma(y^{\text{in}}_{i})|^{n}}$
(14)
where $\sigma(x)=1/(1+e^{-x})$. Among the different values of $m$ and $n$ we
tested, starting from the usual mean squared error ($m=2$, $n=0$), the
best-performing choice is $m=3$, $n=0$. The sigmoid normalization of the input is essential to the
network’s success. Without it, the autoencoder encodes SUEP events with
slightly _lower_ loss than the QCD background on which it was trained, and
completely fails to identify anomalous events. We hypothesize that this is
because, unlike many experimental signatures, SUEP is less complex than its
QCD background, and it has smaller values of $\Delta R$ than QCD. This
complexity bias has been noted before Heimel:2018mkt ; Dillon:2021nxw ;
Finke:2021sdf . The sigmoid function reduces sensitivity to large values of
$\Delta R$ and $p_{T}$ by mapping them to values very close to 1, while
remaining approximately linear for small input values, but offset to a minimum
value of 0.5. These effects make it easier to accurately reconstruct QCD
events while enhancing the network’s sensitivity to deviations from the input
on the SUEP events with characteristically smaller absolute values of the
input features.
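The reconstruction loss of Eq. (14), including the sigmoid normalization of the input, can be sketched directly (function names are ours; the default exponents are the best-performing choice quoted above):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def reconstruction_loss(y_in, y_out, m=3, n=0, n_charged=70):
    """Autoencoder loss of Eq. (14): deviation of the network output from
    the sigmoid-normalized input, summed over the flattened matrix and
    normalized by N_charged."""
    total = 0.0
    for yi, yo in zip(y_in, y_out):
        s = sigmoid(yi)
        total += abs(yo - s) ** m / abs(s) ** n
    return total / n_charged
```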
Out of $8.8\times 10^{5}$ background Monte Carlo events that pass the pre-
selection cuts, $2.4\times 10^{5}$ are used for training, $5\times 10^{4}$ for
validation when tuning network hyperparameters, and $5.9\times 10^{5}$ for
testing. The number of signal events that pass the cuts varies with $m_{D}$
and $T_{D}$, but generally a few $\times 10^{4}$ events remain at each
parameter point to be used for testing. SUEP events with $m_{D}=1$ GeV,
$T_{D}=0.5$ GeV are used for validation purposes.
The network is trained for 15 epochs with a decaying learning rate. Longer
training periods were tested and found to be unnecessary. Loss values for each
test sample event are obtained for the model realizations of the last 5
training epochs before being averaged.
Other architectures were investigated, including variational autoencoders
kingma2014autoencoding and a graph convolutional autoencoder utilizing the
same EdgeConv operations as the supervised network, which itself led to the
use of the $\Delta\tilde{R}_{ij}$ event representation. Interestingly, the
simpler, fully connected architecture consistently delivered much better
background rejection than any of the more sophisticated graph networks in the
unsupervised approach.
### 4.4 Data-driven Background Estimation
Following the logic of this section further, we briefly describe how a data-
driven background sample could be derived for use in a realistic experimental
analysis based on our study. The total background cross-section can be
measured directly (and compared against simulation), since the leptons+QCD
signal region is completely background dominated for SUEP production in exotic
Higgs decays. The background efficiency of the classifier, whether it is based
on cuts, an unsupervised neural network, or a supervised one, can be estimated
by evaluating it on a control region.
A variety of control regions are possible, but perhaps the most promising is
defined by replacing the lepton criterion by a mono-photon criterion. Such a
sample should be free of signal contamination, and its hadronic content is
extremely similar to the hadronic background to our SUEP search, since it is
unconstrained in its detailed production channel except that it recoils off a
hard electroweak boson. The QCD distributions in this mono-photon-plus-jets
sample should therefore closely match those in the $Z$+jets signal region,
especially if the control region sample is reweighted to match the shape of
the photon $p_{T}$ spectrum to the dilepton $p_{T}$ spectrum in the $Z$+jets
signal sample. A variant of this approach can likely be adapted to estimate
the background in the $W$+jets channel as well, by using simulation to compute
a transfer function from $p_{T,W}$ to $p_{T,\ell}$ and applying it to
$p_{T,\gamma}$ in the control region before applying the $p_{T,\ell}$
reweighting.
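Per bin, the spectrum reweighting described above reduces to the ratio of normalized target and control densities. A hypothetical sketch (binning choices and helper names are ours):

```python
from bisect import bisect_right

def bin_counts(values, edges):
    """Histogram counts for the given bin edges; values outside are dropped."""
    counts = [0] * (len(edges) - 1)
    for v in values:
        i = bisect_right(edges, v) - 1
        if 0 <= i < len(counts):
            counts[i] += 1
    return counts

def bin_weights(control_vals, target_vals, edges):
    """Per-bin weight = normalized target density / normalized control
    density, e.g. mapping the control-region photon pT spectrum onto the
    signal-region dilepton pT spectrum."""
    c = bin_counts(control_vals, edges)
    t = bin_counts(target_vals, edges)
    nc, nt = sum(c), sum(t)
    return [(ti / nt) / (ci / nc) if ci else 0.0 for ci, ti in zip(c, t)]
```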
Based on the assumption that such a strategy can be implemented, we therefore
will use benchmark estimates of $1\%$ and $10\%$ for the systematic background
uncertainty in our analysis to estimate the final physics reach.
## 5 Results
Figure 4: Some examples of Significance Improvement Curves (SIC), relative to trigger selection, of the autoencoder (black solid), cut on $\overline{\Delta R}$ (black dashed), and supervised graph networks (black dash-dotted and colored) for SUEP with test parameters of $(m_{D},T_{D}/m_{D})=$ (1.04, 0.435), (1.04, 1), (0.56, 0.435), (0.56, 1). For the autoencoder, the peak sensitivity improvement we are able to probe reliably with the statistics of our Monte Carlo samples (see text) corresponds to signal and background efficiencies of $(1.6\times 10^{-2},1\times 10^{-7})$, $(3.7\times 10^{-4},1.5\times 10^{-7})$, $(6.1\times 10^{-2},1\times 10^{-7})$ and $(2.2\times 10^{-2},1\times 10^{-7})$ relative to trigger selection, respectively.
Figure 5: Same SIC curves as Fig. 4, but with background efficiency on the
horizontal axis. This demonstrates that with the full HL-LHC dataset,
significantly harder cuts could increase sensitivity beyond the limitations of
what we can demonstrate with our simulated background dataset.
Figure 4 shows significance improvement curves for the cut on
$\overline{\Delta R}$, the autoencoder, and a selection of supervised networks
trained on signal samples with a variety of different dark hadron masses and
temperatures $m_{\text{train}}$ and $T_{\text{train}}$. The horizontal axis
shows either signal efficiency (after triggering) or number of remaining
signal events at the HL-LHC as the threshold is varied. The vertical axis
shows signal significance improvement relative to the trigger sample. From
these specific examples, a few general features are apparent:
* •
The autoencoder consistently outperforms the simple $\overline{\Delta R}$ cut
significantly;
* •
The supervised networks outperform the autoencoder for signal parameters close
to their training values, but can perform much worse for different parameters;
* •
Our analysis is not optimized for larger dark hadron masses and temperatures;
* •
For lower dark hadron masses or temperatures, both the cut-and-count and
autoencoder analysis strategies are very powerful, yielding orders of
magnitude improvement in signal significance compared to the baseline
preselection cuts of Eq. (10).
In fact, the statistics of our QCD background sample is insufficient to
capture the true power of our analysis techniques. We would expect the
significance improvement to increase as the signal efficiency decreases, but
only until the SI curve turns around when the cut becomes too harsh. This is
clearly the case in the top-right panel of Fig. 4, for signal parameters for
which our analysis is not optimized. However, for the three other examples, we
never reach this turn-over point before running out of simulated background
events. It is not even clear if this turn-over is reached with the full
statistics of the HL-LHC (100 times greater than our simulated background
sample).
To understand how much harder we might be able to cut on the background, Fig.
5 shows the same SIC curves but with background efficiency on the horizontal
axis. Naively extrapolating, we can anticipate that a realistic autoencoder
search with a fully data-driven background sample at the HL-LHC might be able
to reach the very-low background regime while retaining enough signal to probe
$\mathrm{Br}(h\to\ \mathrm{SUEP})$ as much as an order of magnitude smaller
than the sensitivity estimates we can rigorously derive here.
As a result, the reach projections we present in this paper will be very
conservative. Furthermore, the limited statistics of our background sample
means that reach projections will be very similar for the three analysis
methods despite their obvious differences in performance in Figs. 4 and 5. It
is therefore important to additionally evaluate our classifiers using a
somewhat orthogonal metric.
The area under the curve (AUC) of the signal efficiency as a function of
background rejection can be computed for the events kept after the baseline
preselection cuts, see Eq. (10). This metric is a standard,
threshold-independent measure of classifier performance. Fig. 6 shows the AUC
achieved by the cut-based classifier, the fully connected autoencoder, and the
supervised graph network trained on a cocktail of signal events, where for the
latter the training parameters are indicated with red dots in the
$(m_{D},T_{D}/m_{D})$-plane. Unsurprisingly,
the highest AUC values are achieved at low $m_{D}$ or $T_{D}/m_{D}$, with the
supervised graph network modestly outperforming the autoencoder, which
outperforms the simple cut, across the SUEP parameter space. The performance
of supervised networks trained on single signal parameter points are presented
in the Appendix.
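The AUC can also be read as the probability that a randomly chosen signal event outscores a randomly chosen background event; a minimal sketch of this rank-statistic form (an $O(N_s N_b)$ illustration, not what one would run at scale):

```python
def auc(signal_scores, background_scores):
    """Area under the ROC curve via the rank statistic: the fraction of
    signal/background pairs where the signal event scores higher, with
    ties counting one half."""
    wins = 0.0
    for s in signal_scores:
        for b in background_scores:
            if s > b:
                wins += 1.0
            elif s == b:
                wins += 0.5
    return wins / (len(signal_scores) * len(background_scores))
```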
Figure 6: AUC for (a) cut on $\overline{\Delta R}$, (b) fully connected
autoencoder, (c) supervised graph network trained using cocktail of signal
parameter choices (training parameters indicated with red dots). These plots
illustrate the significant performance improvements of the autoencoder
relative to the simple $\overline{\Delta R}$ cut, and of the supervised
cocktail approach relative to the autoencoder.
Figure 7: Minimum excludable $\mathrm{Br}(h\to\mathrm{SUEP})$ at the HL-LHC,
assuming $1\%$ systematic uncertainty on QCD background for (a) cut on
$\overline{\Delta R}$, (b) fully connected autoencoder, (c) supervised graph
network trained using cocktail of signal parameter choices (indicated with red
dots). Note that the limited statistics of our QCD background sample leads
these projections to be very conservative while also de-emphasizing the
performance differences between the three methods.
A key physics result is the actual sensitivity of HL-LHC searches to the SUEP
final state. We therefore extract the smallest branching ratio
$\mathrm{Br}(h\to\mathrm{SUEP})$ for which $S/\sqrt{B+u_{sys}B^{2}}>2$, where
$u_{sys}$ gives the systematic uncertainty on the background. Because of the
high degree of background rejection required to be sensitive to relevant Higgs
branching ratios, and the limited size of our background dataset, very few or
no simulated background events remain when the classifier’s threshold is set
to maximize sensitivity to low branching ratios. This introduces a significant
statistical uncertainty to the estimated LHC reach. We can decouple our
estimate somewhat from these limitations by demanding the statistical
uncertainty from the limited size of the background dataset to remain below
$50\%$. This is conservative, since a realistic analysis will employ even
harsher cuts with better significance improvement. For the same reason, the
difference in reach between our three methods is likely underestimated. The
performance gaps between the $\overline{\Delta R}$ cut, autoencoder, and
supervised network are more clearly shown by the individual significance
improvement curves and the AUC differences.
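The reach criterion above can be inverted for the branching ratio; a hypothetical sketch, assuming the signal yield at unit branching ratio after all cuts is known (that yield is an input of this sketch, not a quantity quoted in the text):

```python
import math

def min_excludable_br(s_at_br1, b_events, u_sys):
    """Smallest Br(h -> SUEP) satisfying S / sqrt(B + u_sys * B**2) > 2,
    with S = Br * s_at_br1, where s_at_br1 is the signal yield at Br = 1
    passing all cuts and u_sys the systematic background uncertainty."""
    return 2.0 * math.sqrt(b_events + u_sys * b_events ** 2) / s_at_br1
```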
Finally, Fig. 7 shows the SUEP sensitivity achievable by the $\overline{\Delta
R}$ cut, the autoencoder, and the cocktail-trained supervised network, all
assuming 1% systematic uncertainty on the background. Both the cut and the
autoencoder can probe %-level branching ratios. In the Appendix we show
sensitivity projections assuming a much larger 10% systematic background
uncertainty, with only very modest degradation in reach. This shows that the
overwhelming QCD background has been reduced to low enough levels to make the
search statistics limited, and speaks to the robustness of our results.
## 6 Conclusion
SUEPs represent a highly plausible but extremely challenging experimental
signature of confining hidden sectors, typically resulting in a high
multiplicity of soft SM final states. To date, there are no targeted LHC
searches for SUEP, and most existing searches have limited or no sensitivity.
Furthermore, since SUEP is produced by hidden sectors featuring fairly
strongly-coupled, approximately conformal dynamics with a wide variety of
possible dark hadronization scenarios, modelling the detailed production of
SUEP is fraught with uncertainties. Existing proposals for SUEP searches
Knapen:2016hky ; Alimena:2019zri target conspicuous, qualitative features of
the final state, such as displaced vertices or leptons. Their existence is a
prediction for some decay portals of the dark hadrons, and targeting them with
searches is fairly robust with respect to modelling uncertainties.
An important but previously unaddressed question is how well we can look for
SUEP using only its basic kinematic features, without any conspicuous SM final
states. This is not only an experimental challenge, the design of such a
search also has to be very mindful of uncertainties in the signal modelling.
We investigated this worst-case scenario by studying SUEPs from dark hadrons
produced in exotic Higgs decays, which decay promptly and purely hadronically.
Our choice of model is motivated by the Higgs portal being one of the most
plausible production modes for new physics. The modest energy scale of the
Higgs decay also eliminates high-energy or high-mass observables as
discriminants. Finally, Higgs production in association with a $W/Z$
circumvents the problem of triggering on the SUEP final state directly
Knapen:2016hky .
Our first results are observables which capture the essential SUEP features
from the track momenta. We focused on the charged particle multiplicity
$N_{\text{charged}}$, the event ring-isotropy Cesarotti:2020hwb
$\mathcal{I}$, and the interparticle distance $\Delta R_{ij}$. All three are
robust with respect to modelling uncertainties of the SUEP final state, since
they capture the essential model features: high multiplicity of final states,
and dark hadron production that is more isotropic and democratic in momentum
than the QCD background. The charged track interparticle distance matrix
$\Delta R_{ij}$ is particularly suitable for ML applications.
Based on these observables, we devised three strategies for $h\to\
\mathrm{SUEP}$ searches. The first is based on a simple cut on the
interparticle distance matrix. The second assumes that our naive signal
simulation tools can be trusted, and uses supervised ML techniques. The third
is an unsupervised ML approach using a fully connected autoencoder trained
only on simulated QCD events as an anomaly detector. Both ML approaches used
the slightly modified interparticle distance matrix of Eq. (11) as the event
representation. The cut-and-count approach serves as a simple baseline, over
which the unsupervised ML approach represents a significant improvement, even
without detailed knowledge of the signal. This is to be compared to the higher
sensitivity of the supervised machine learning approach, which is unlikely to
be robust with respect to modeling uncertainties or choice of training
parameters.
All three approaches will probe exotic Higgs decays to prompt, hadronic SUEP
at the HL-LHC for branching fractions at the percent level, with both the
autoencoder and supervised approaches probing rates as low as 1%. We assumed a
systematic background uncertainty of one percent, but increasing this to ten
percent only modestly decreases sensitivity, signaling that our analyses have
sufficient differentiation power to reduce the enormous QCD background to a
statistics-dominated level. These estimates are conservative, since our
simulated background samples were still too small to fully cover the
exceedingly large background rejection. A realistic search combining the
autoencoder with data-driven background estimates will achieve significantly
higher sensitivities.
Our results show that even without a detailed theoretical description of the
SUEP showering process, an analysis using an unsupervised neural network can
be highly sensitive to exotic Higgs decays to SUEP. Our observables and
techniques can equally well be used in SUEP searches using leptons, photons or
displaced vertices, to significantly enhance their sensitivity based on the
inherently SUEP-y kinematics. Generalizations of our methods, for instance
including searches for explicit dark hadron resonances, should allow for
sensitivity to SUEPs with higher dark hadron masses or dark Hagedorn
temperatures than those targeted by our analysis. Our new, unsupervised search
strategy can be applied to a wide range of LHC scenarios to discover new
physics, even if the true BSM model should differ from our exact theory
expectations.
#### Acknowledgements
The authors would like to thank Anja Butter, Simon Knapen, Jessie Shelton, and
Jennifer Thompson for helpful conversations. The research of JB and DC is
supported in part by a Discovery Grant from the Natural Sciences and
Engineering Research Council of Canada, and by the Canada Research Chair
program. JB also acknowledges funding from a Postgraduate Doctoral Scholarship
(PGS D) provided by the Natural Sciences and Engineering Research Council of
Canada. GK acknowledges the support of the Deutsche Forschungsgemeinschaft
(DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC
2121 “Quantum Universe” – 390833306. The research of TP is supported by the
Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant
396021762 – TRR 257 Particle Physics Phenomenology after the Higgs Discovery.
Computations were performed on the Niagara supercomputer at the SciNet HPC
Consortium Niagara ; SciNetLessons . SciNet is funded by: the Canada
Foundation for Innovation; the Government of Ontario; Ontario Research Fund -
Research Excellence; and the University of Toronto. This research was enabled
in part by support provided by Compute Canada (www.computecanada.ca).
## Appendix A Additional Results
Figure 8: Two-dimensional unit-normalized histograms showing pairwise
correlations between $N_{charged}$, $\mathcal{I}$, and $\overline{\Delta R}$
for the background sample and the same four choices of signal parameters as in
Figure 4.
Figure 9: AUC for supervised graph networks trained using different
signal parameter choices. Red dots indicate the training parameters in each
plot.
Figure 10: Minimum branching ratio excludable by supervised graph
networks trained on different choices of ($m_{D}$,$T_{D}$). Red dots indicate
the training parameters in each plot.
Figure 8 shows the two-dimensional correlations between each pair of the
observables $N_{charged}$, $\mathcal{I}$, and $\overline{\Delta R}$. While the
average values of $N_{charged}$ and $\overline{\Delta R}$ vary in a similar
fashion with $m_{D}$ and $T_{D}$, their distributions for a fixed choice of
signal parameters are not highly correlated.
Figure 9 shows the AUC for each of the twelve supervised graph networks
trained on a single sample of signal events, with parameters indicated by the
red dot in each plot. This vividly demonstrates how the parameter choice
of the signal dataset on which the supervised network is trained has a large
impact on the regime of parameter space where it can achieve high AUC,
providing a strong argument for using the cocktail approach.
The sensitivity projection for these supervised networks with one percent
background systematic is shown in Fig. 10. The peak sensitivity for each
network is also at the 1%-level, similar to the other approaches, but for
potentially a different range of SUEP parameters, depending greatly on the
training parameters of each network.
Figures 11 and 12 show sensitivity projections for 10% systematic background
uncertainty.
Figure 11: Same as Fig. 7 but for 10% systematic background uncertainty.
Figure 12: Same as Fig. 10 but for 10% systematic background uncertainty.
## References
* (1) M. J. Strassler and K. M. Zurek, _Echoes of a hidden valley at hadron colliders_ , _Phys. Lett. B_ 651 (2007) 374–379, [hep-ph/0604261].
* (2) M. J. Strassler, _On the Phenomenology of Hidden Valleys with Heavy Flavor_ , 0806.2385.
* (3) Z. Chacko, H.-S. Goh and R. Harnik, _The Twin Higgs: Natural electroweak breaking from mirror symmetry_ , _Phys. Rev. Lett._ 96 (2006) 231802, [hep-ph/0506256].
* (4) R. M. Schabinger and J. D. Wells, _A Minimal spontaneously broken hidden sector and its impact on Higgs boson physics at the large hadron collider_ , _Phys. Rev. D_ 72 (2005) 093007, [hep-ph/0509209].
* (5) B. Patt and F. Wilczek, _Higgs-field portal into hidden sectors_ , hep-ph/0605188.
* (6) J. R. Espinosa and M. Quiros, _Novel Effects in Electroweak Breaking from a Hidden Sector_ , _Phys. Rev. D_ 76 (2007) 076004, [hep-ph/0701145].
* (7) J. March-Russell, S. M. West, D. Cumberbatch and D. Hooper, _Heavy Dark Matter Through the Higgs Portal_ , _JHEP_ 07 (2008) 058, [0801.3440].
* (8) J. Alimena et al., _Searching for long-lived particles beyond the Standard Model at the Large Hadron Collider_ , _J. Phys. G_ 47 (2020) 090501, [1903.04497].
* (9) D. Curtin and S. Gryba, _Twin Higgs Portal Dark Matter_ , 2101.11019.
* (10) B. Holdom, _Two U(1)’s and Epsilon Charge Shifts_ , _Phys. Lett. B_ 166 (1986) 196–198.
* (11) S. A. Abel, M. D. Goodsell, J. Jaeckel, V. V. Khoze and A. Ringwald, _Kinetic Mixing of the Photon with Hidden U(1)s in String Phenomenology_ , _JHEP_ 07 (2008) 124, [0803.1449].
* (12) B. Batell, M. Pospelov and A. Ritz, _Probing a Secluded U(1) at B-factories_ , _Phys. Rev. D_ 79 (2009) 115008, [0903.0363].
* (13) J. Jaeckel and A. Ringwald, _The Low-Energy Frontier of Particle Physics_ , _Ann. Rev. Nucl. Part. Sci._ 60 (2010) 405–437, [1002.0329].
* (14) R. Foot, _Mirror dark matter: Cosmology, galaxy structure and direct detection_ , _Int. J. Mod. Phys. A_ 29 (2014) 1430013, [1401.3965].
* (15) D. Feldman, Z. Liu and P. Nath, _The Stueckelberg Z-prime Extension with Kinetic Mixing and Milli-Charged Dark Matter From the Hidden Sector_ , _Phys. Rev. D_ 75 (2007) 115001, [hep-ph/0702123].
* (16) M. Pospelov, A. Ritz and M. B. Voloshin, _Secluded WIMP Dark Matter_ , _Phys. Lett. B_ 662 (2008) 53–61, [0711.4866].
* (17) E. Dudas, L. Heurtier, Y. Mambrini and B. Zaldivar, _Extra U(1), effective operators, anomalies and dark matter_ , _JHEP_ 11 (2013) 083, [1307.0005].
* (18) H. An, X. Ji and L.-T. Wang, _Light Dark Matter and $Z^{\prime}$ Dark Force at Colliders_, _JHEP_ 07 (2012) 182, [1202.2894].
* (19) G. D. Kribs, A. Martin, B. Ostdiek and T. Tong, _Dark Mesons at the LHC_ , _JHEP_ 07 (2019) 133, [1809.10184].
* (20) S. Knapen, J. Shelton and D. Xu, _Perturbative benchmark models for a dark shower search program_ , _Phys. Rev. D_ 103 (2021) 115013, [2103.01238].
* (21) M. Cvetic, P. Langacker and G. Shiu, _Phenomenology of a three family standard like string model_ , _Phys. Rev. D_ 66 (2002) 066004, [hep-ph/0205252].
* (22) T. Hur, D.-W. Jung, P. Ko and J. Y. Lee, _Electroweak symmetry breaking and cold dark matter from strongly interacting hidden sector_ , _Phys. Lett. B_ 696 (2011) 262–265, [0709.1218].
* (23) Y. Bai and P. Schwaller, _Scale of dark QCD_ , _Phys. Rev. D_ 89 (2014) 063522, [1306.4676].
* (24) Y. Grossman and D. J. Robinson, _Composite Dirac Neutrinos_ , _JHEP_ 01 (2011) 132, [1009.2781].
* (25) T. Han, Z. Si, K. M. Zurek and M. J. Strassler, _Phenomenology of hidden valleys at hadron colliders_ , _JHEP_ 07 (2008) 008, [0712.2041].
* (26) M. Buschmann, J. Kopp, J. Liu and P. A. N. Machado, _Lepton Jets from Radiating Dark Matter_ , _JHEP_ 07 (2015) 045, [1505.07459].
* (27) N. Arkani-Hamed and N. Weiner, _LHC Signals for a SuperUnified Theory of Dark Matter_ , _JHEP_ 12 (2008) 104, [0810.0714].
* (28) S. D. Ellis, T. S. Roy and J. Scholtz, _Phenomenology of Photon-Jets_ , _Phys. Rev. D_ 87 (2013) 014015, [1210.3657].
* (29) N. Toro and I. Yavin, _Multiphotons and photon jets from new heavy vector bosons_ , _Phys. Rev. D_ 86 (2012) 055005, [1202.6377].
* (30) ATLAS collaboration, M. Aaboud et al., _A search for pairs of highly collimated photon-jets in $pp$ collisions at $\sqrt{s}$ = 13 TeV with the ATLAS detector_, _Phys. Rev. D_ 99 (2019) 012008, [1808.10515].
* (31) T. Cohen, M. Lisanti and H. K. Lou, _Semivisible Jets: Dark Matter Undercover at the LHC_ , _Phys. Rev. Lett._ 115 (2015) 171804, [1503.00009].
* (32) G. Burdman and G. Lichtenstein, _Displaced Vertices from Hidden Glue_ , _JHEP_ 08 (2018) 146, [1807.03801].
* (33) P. Schwaller, D. Stolarski and A. Weiler, _Emerging Jets_ , _JHEP_ 05 (2015) 059, [1502.05409].
* (34) T. Cohen, M. Lisanti, H. K. Lou and S. Mishra-Sharma, _LHC Searches for Dark Sector Showers_ , _JHEP_ 11 (2017) 196, [1707.05326].
* (35) T. Cohen, J. Doss and M. Freytsis, _Jet Substructure from Dark Sector Showers_ , _JHEP_ 09 (2020) 118, [2004.00631].
* (36) M. J. Strassler, _Why Unparticle Models with Mass Gaps are Examples of Hidden Valleys_ , 0801.0629.
# Searching for time-dependent high-energy neutrino emission from X-ray
binaries with IceCube
The IceCube Collaboration
(a complete list of authors can be found at the end of the proceedings)
###### Abstract
X-ray binaries are long-standing source candidates of Galactic cosmic rays and
neutrinos. The compact object in a binary system can be the site for cosmic-
ray acceleration, while high-energy neutrinos can be produced by the
interactions of cosmic rays in the jet of the compact object, the stellar
wind, or the atmosphere of the companion star. We report a time-dependent
study of high-energy neutrinos from X-ray binaries with IceCube using 7.5
years of muon neutrino data and X-ray observations. In the absence of
significant correlation, we report upper limits on the neutrino fluxes from
these sources and provide a comparison with theoretical predictions.
Corresponding authors: Qinrui Liu1∗, Ali Kheirandish2
1 Wisconsin IceCube Particle Astrophysics Center (WIPAC) and Department of
Physics, University of Wisconsin-Madison, Madison, WI 53706, USA
2 Department of Physics; Department of Astronomy & Astrophysics; Center for
Multimessenger Astrophysics, Institute for Gravitation & the Cosmos, The
Pennsylvania State University, University Park, PA 16802, USA
∗ Presenter
## 1 Introduction
Cosmic rays (CRs) with energies up to several PeV, the "knee" in the CR
spectrum, are believed to be of Galactic origin. However, where and how these
CRs are accelerated remains an open question. Interactions of very high energy
CRs in the Galaxy will lead to the production of pions, which subsequently
decay into gamma rays and neutrinos, with energies reaching hundreds of TeV.
As electromagnetic processes could also contribute to high-energy gamma-ray
emission, only the detection of high-energy neutrinos would be a smoking gun
for such CR interactions (i.e., hadronic interactions) as they are the only
way to produce neutrinos. The sources of the vast majority of high-energy
neutrinos detected by IceCube are yet to be identified. The isotropic
distribution of their arrival directions suggests dominant
contributions from extragalactic sources. The Galactic contribution to the
diffuse neutrino flux is constrained to $\sim$14% above 1 TeV [1]. Studies
have been conducted to identify Galactic point-like sources, extended regions,
and the diffuse emission produced by CRs interacting with the interstellar
medium. Nevertheless, recent searches for correlations have not yet revealed
significant signals [2, 3, 4].

X-ray binaries (XRBs) are binary systems consisting of
a compact object (neutron star (NS) or black hole (BH)) and a non-compact
companion star. These systems are bright in X-rays and sometimes in gamma
rays. XRBs have been proposed as sites of CR acceleration and hadronic
interactions since the 1980s. XRBs with jets, often regarded as a smaller
version of quasars and referred to as microquasars, have been widely discussed
in the context of hadronic processes in jets. Protons can be accelerated in
the jet, and pions are generated through interactions with the external
radiation field of the accretion disk and/or internal synchrotron photons.
Other discussions focus on hadronuclear interactions, e.g., jet-cloud/wind
interactions when the jet is traversing the matter field of the ejected clouds
or stellar wind from the companion star. For other XRBs where there is no
collimated beam present, hadronic interactions can happen in a wider shocked
region. CR acceleration can take place in the magnetosphere of a spinning NS
and CRs can then further interact with matter from either the accretion disk
or the companion star. See e.g. [5, 6, 7, 8] for theories of neutrino
production in XRBs. Some XRBs have been observed at TeV energies, which
illustrates the capability of these sources to accelerate particles to high
energies.
XRBs are known for their outbursts and periodic emission. Thus, it is
reasonable to hypothesize that the possible neutrino emission is related to
either the periodicity or the X-ray outburst activity, which might stem from a
change in the power or target material. Time-dependent analyses can be
performed based on such hypotheses and benefit from the suppression of the
background, which is dominated by the atmospheric neutrino flux. Both time-
integrated and time-dependent analyses searching for high-energy neutrino
emission have been performed by IceCube and ANTARES, e.g., [9, 10], without
significant detection. Here, we present a study focusing on XRBs using the
IceCube muon track data, searching for a correlation of the possible neutrino
flux from XRBs with their X-ray outbursts and persistent emission, covering an
ample list of sources.
## 2 Analysis
Figure 1: 90% CL sensitivity and 5$\sigma$ discovery potential for the flaring
source V404 Cyg when varying the threshold (Bayesian-blocked light curve in
Fig. 2) and a comparison to the time-integrated case, which indicates an
improvement in sensitivity. The spectrum shown here is $E^{-2}$.
This search uses an unbinned maximum likelihood method, which follows the one
described in [11, 12], to seek an excess of neutrino events (signal) above the
background. In both the time-dependent and the time-integrated analyses, the
likelihood function describing the signal includes both spatial clustering and
energy information. In the time-dependent analysis, an additional temporal
term is incorporated in the likelihood, encoding the correlation between
neutrinos and the X-ray light curves. As the majority of the data is expected to
be background events that are uniform in time, the likelihood function of the
background is constructed with time-randomized data for the time-dependent
analysis and right ascension randomized data for the time-integrated analysis.
The test statistic is obtained by maximizing the likelihood function w.r.t. a
set of parameters, which include the number of signal events ($n_{s}$) and the
spectral index ($\gamma$) for both analyses. For the time-dependent analysis,
in addition to $n_{s}$ and $\gamma$, time-related parameters introduced are
the threshold of a light curve $f_{th}$ for picking flares and the time lag
$T_{lag}$ between the X-ray and the neutrino emission.
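As an illustration of the unbinned likelihood-ratio method described above, the following toy sketch fits the number of signal events on top of a uniform background. It is a one-dimensional stand-in, not the IceCube implementation: the Gaussian signal PDF, the injected event count, and the grid scan for $\hat{n}_{s}$ are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
# Background: events uniform over a 10-unit strip around the source;
# signal: Gaussian-distributed about the source (1-D stand-in for the
# spatial term; the energy and temporal terms are omitted here).
x = rng.uniform(-5.0, 5.0, N)
x[:8] = rng.normal(0.0, 0.5, 8)          # inject 8 signal-like events

S = np.exp(-0.5 * (x / 0.5) ** 2) / (0.5 * np.sqrt(2.0 * np.pi))  # signal PDF
B = np.full(N, 1.0 / 10.0)                                        # background PDF

def neg_log_L(ns):
    # Mixture likelihood: each event is signal with probability ns/N.
    return -np.sum(np.log((ns / N) * S + (1.0 - ns / N) * B))

# Grid scan for the best-fit number of signal events (the real analysis
# also fits the spectral index and, for flares, f_th and T_lag).
ns_grid = np.linspace(0.0, 100.0, 2001)
nll = np.array([neg_log_L(ns) for ns in ns_grid])
ns_hat = ns_grid[np.argmin(nll)]
TS = 2.0 * (neg_log_L(0.0) - nll.min())   # likelihood-ratio test statistic
print(f"n_s_hat = {ns_hat:.2f}, TS = {TS:.2f}")
```

Because the grid includes $n_{s}=0$, the resulting test statistic is non-negative by construction, mirroring the one-sided likelihood-ratio test.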
The time-dependent analysis focuses on searching for a correlation between the
neutrino emission and the X-ray activity of a source. For this purpose, hard
X-ray light curves are used to construct the time probability density function
(PDF). Light curves are obtained from hard X-ray data reported by Swift/BAT in
the energy range 15-50 keV
(https://swift.gsfc.nasa.gov/results/transients/index.html) [13] and MAXI in
the energy range 10-20 keV (http://maxi.riken.jp/top/slist.html) [14]. The
X-ray light curve data are binned in days, and a Bayesian block algorithm is
applied to find the optimal segmentation of the data and identify flares [15].
After the light curves are divided into blocks, the value of each block can be
fitted as a constant, taking into account the uncertainty of each data point.
The normalized blocked light curves then act as the temporal PDF. Fig. 1 shows
the sensitivity of the time-dependent analysis compared to the sensitivity of
the time-integrated analysis from the direction of V404 Cyg, where the
expected improvement is demonstrated.
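The construction of the temporal PDF can be sketched with a compact implementation of the Bayesian-block "measures" fitness of Scargle et al. [15] for point measurements with errors. The synthetic light curve and the fixed change-point prior (`ncp_prior`) are illustrative assumptions; a production analysis would tune the prior (e.g., via a false-alarm probability) rather than hard-code it.

```python
import numpy as np

def bayesian_blocks_measures(t, x, sigma, ncp_prior=4.0):
    """Optimal piecewise-constant segmentation of measured data
    ('measures' fitness of Scargle et al. 2013), via dynamic programming."""
    n = len(t)
    edges = np.concatenate([t[:1], 0.5 * (t[1:] + t[:-1]), t[-1:]])
    w, wx = 1.0 / sigma**2, x / sigma**2
    best = np.zeros(n)
    last = np.zeros(n, dtype=int)
    for r in range(n):
        # Sufficient statistics for every candidate block ending at r.
        a = 0.5 * np.cumsum(w[:r + 1][::-1])[::-1]
        b = -np.cumsum(wx[:r + 1][::-1])[::-1]
        fit = b * b / (4.0 * a) - ncp_prior
        fit[1:] += best[:r]
        last[r] = np.argmax(fit)
        best[r] = fit[last[r]]
    cps, ind = [n], n                 # backtrack the optimal change points
    while ind > 0:
        ind = last[ind - 1]
        cps.append(ind)
    return edges[np.array(cps[::-1])]

rng = np.random.default_rng(1)
t = np.arange(200.0)                        # daily bins
rate = np.full(200, 0.002)
rate[90:103] += 0.05                        # a ~13-day flare
obs = rate + rng.normal(0.0, 0.001, 200)    # noisy measured rates
err = np.full(200, 0.001)

edges = bayesian_blocks_measures(t, obs, err)
# Fit each block as a constant and normalize into a temporal PDF.
idx = np.clip(np.searchsorted(edges, t, side="right") - 1, 0, len(edges) - 2)
levels = np.array([obs[idx == k].mean() for k in range(len(edges) - 1)])
widths = np.diff(edges)
pdf = np.clip(levels, 0.0, None)
pdf /= np.sum(pdf * widths)
print(f"{len(edges) - 1} blocks, flare block level ~ {levels.max():.3f}")
```

The `bayesian_blocks` routine in `astropy.stats` provides an equivalent, more general reference implementation of the same algorithm.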
The sources studied are from the Galactic high-mass XRB (HMXB) catalog [16]
and the Galactic low-mass XRB (LMXB) catalog [17], which include 301 sources.
TeV sources from TeVCat (http://tevcat.uchicago.edu) [18] which are not in
the HMXB or LMXB catalog are added. Starting from the initial source list,
sources without available Swift/BAT or MAXI hard X-ray light curves are
removed. As we are only interested in sources with flaring or variable X-ray
behavior, the variability and excess variance of the light curves are
evaluated such that sources with weak emission are excluded. This step uses
only the X-ray data in the time frame overlapping the neutrino data sample.
If both the Swift/BAT and MAXI light curves of a source pass the selection
criteria, the Swift/BAT data are used. After this selection, 102 sources from
the initial list remain to be analyzed.
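One standard definition of the excess variance used for this kind of variability cut is the normalized excess variance of Nandra et al. (1997); the sketch below applies it to synthetic light curves. The threshold value and the test data are illustrative assumptions, as the actual selection criteria are not specified here.

```python
import numpy as np

def normalized_excess_variance(x, err):
    """Variance of a light curve in excess of measurement noise,
    normalized by the squared mean (Nandra et al. 1997)."""
    mu = x.mean()
    return np.sum((x - mu) ** 2 - err ** 2) / (len(x) * mu ** 2)

rng = np.random.default_rng(2)
err = np.full(500, 0.001)
quiet = 0.002 + rng.normal(0.0, 0.001, 500)       # noise-dominated light curve
flaring = quiet.copy()
flaring[200:215] += 0.05                          # a 15-day outburst

for name, lc in [("quiet", quiet), ("flaring", flaring)]:
    nxs = normalized_excess_variance(lc, err)
    print(f"{name}: sigma2_NXS = {nxs:.3f}, keep = {nxs > 1.0}")
```

A noise-dominated light curve yields an excess variance scattered around zero, while genuine outbursts drive it well above any reasonable threshold.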
Figure 2: The temporal PDF before normalization (Bayesian blocks) and the
event distribution within $1.5^{\circ}$ around V404 Cyg in the 2015 data
sample, with time given in MJD. The Bayesian blocks have been shifted by the
best-fit time lag ($-0.5$ days) and the dashed gray line indicates the best-fit
threshold (0.011 cm$^{-2}$ s$^{-1}$). Vertical lines represent neutrino events. The
color shows the energy proxy while the height shows the weight of each event
in the likelihood function.
We complement the study with a time-integrated search for neutrino signals
from four notable sources: Cyg X-3, LS 5039, LS I +61 303, and SS 433.
Additionally, two time-integrated stacking tests are conducted for
microquasars and TeV sources separately, with the method used in [4] and an
equal weighting scheme when considering the relative contribution of each
source.
For all searches, we use 7.5 years of all-sky muon track data collected
between 2011-05-13 and 2018-10-14, corresponding to a livetime of 2711 days.
The data sample being used consists of high-quality through-going muon track
events from the entire sky, yielding a total of 1502612 events. Details of the
data sample are described in [19].
## 3 Results & Discussion
Analysis | Name | TS | $\hat{n}_{s}$ | $\hat{\gamma}$ | $p$-value | 90% CL upper limits
---|---|---|---|---|---|---
Flare | V404 Cyg | 8.3 | 5.4 | 4.0 | 0.754 (0.014) | 0.91
Time-integrated | Cyg X-3 | 6.8 | 44.6 | 3.3 | 0.036 (0.009) | 1.51
TeV XRB stacking | - | 0.1 | 7.7 | 3.5 | 0.587 | 1.22
microquasar stacking | - | 0 | 0 | - | 1 | 7.32
Table 1: The most significant source in the flare/time-integrated analysis
with its TS and the best-fit $\hat{n}_{s}$ and $\hat{\gamma}$. Both post-trial
and pre-trial (bracketed) $p$-values are shown. The results of the two
stacking tests are also listed. The 90% CL upper limits are parameterized as
$dN_{\nu_{\mu}+\bar{\nu}_{\mu}}/dE_{\nu}=\mathcal{F}_{\nu_{\mu}+\bar{\nu}_{\mu}}\left(E_{\nu}/\rm{TeV}\right)^{-2}\,\cdot
10^{-4}\;\rm{TeV^{-1}\,cm^{-2}}$ for the flare analysis and
$dN_{\nu_{\mu}+\bar{\nu}_{\mu}}/dE_{\nu}=\phi_{\nu_{\mu}+\bar{\nu}_{\mu}}\left(E_{\nu}/\rm{TeV}\right)^{-2}\,\cdot
10^{-12}\;\rm{TeV^{-1}\,cm^{-2}\,s^{-1}}$ for the time-integrated analyses.
Figure 3:
The relation between the time-integrated flux at 1 TeV and the flaring time for
V404 Cyg. The dashed black line is the flaring time converted from the best-
fit threshold and the red triangle shows the 90% CL upper limit. The orange
line is the 5$\sigma$ discovery potential in IceCube. Purple lines illustrate
the estimated sensitivity at 90% CL and 5$\sigma$ discovery potential in
IceCube-Gen2. The shaded regions are the time-integrated neutrino flux
prediction assuming an $E^{-2}$ spectrum with an energy cutoff at 100 TeV,
estimated following the jet model [20]. The uncertainties stem from the flux
densities at different frequencies in VLA radio measurements during the
flaring period in 2015. The two colors correspond to varying the energy
fraction of the jet carried by accelerated protons, $\eta_{p}$.
Figure 4: Red and purple lines
indicate a comparison between current upper limits and estimated 10 yr
sensitivity (light) & discovery potential (dark) in IceCube-Gen2 for Cyg X-3.
As the energies of the nearby high-energy neutrino events cut off at several
TeV, an exponential cutoff at 5 TeV is also applied when computing upper
limits. The shaded regions show predictions of the $pp$ [21] and $p\gamma$ [22]
scenarios. The inclusion of a cutoff is also to be compared to the shaded pink
region, which assumes a CR energy cutoff at 100 TeV with a spectral index
ranging from 2.4 to 2.7; 3.25 corresponds to the best-fit spectral index. The
gray shaded region shows
the uncertainty from the collision radius.
In the search for a correlation between high-energy neutrinos and the flaring
activity of XRBs, the lowest $p$-value is found for the microquasar V404 Cyg,
a low-mass BH XRB, with a pre-trial $p$-value of 0.014.
However, the $p$-value increases to 0.754 after taking into account the trials
for the number of sources in the catalog. V404 Cyg underwent a major X-ray
flaring episode in 2015. There are 5 sub-TeV neutrino events within
$1.5^{\circ}$ of the source during the time of this flare, and the best-fit
threshold indicates a time duration of 11 days, as shown in Fig. 2. This giant
flare was observed with a duration of approximately 13 days by Swift/BAT [23].
In the time-integrated analysis, both the tests on individual sources and
the stacking searches find no signal with sufficient statistical significance.
The most prominent excess in the point-source search is found for Cyg X-3,
which exhibits a pre-trial $p$-value of $9\times 10^{-3}$, leading to a
post-trial $p$-value of 0.036 after considering the 4 trials. In the flare
analysis, Cyg X-3 has a pre-trial $p$-value of 0.09, less significant than the
time-integrated
results. Within $1^{\circ}$ around the source location, there are 44 events
above 1 TeV, and the most energetic among them has a deposited energy of about
5 TeV, leading to a soft best-fit spectrum. Since no significant signal is
found, we set 90% confidence level (CL) upper limits on the neutrino flux from
the sources studied. A summary is shown in Table 1.
For microquasars, relativistic jets are expected to be the CR acceleration
sites. Possible neutrino emission is expected from the beam dump on either
radiation from the compact object itself or gas from the companion star.
Parameters for the neutrino flux prediction in [20], based on the
photohadronic model of [5], can be constrained for some microquasars.
Nevertheless, the
simplified estimation has large uncertainties. For V404 Cyg, the X-ray flare
in June 2015 was observed in multiple wavelengths, and the jet activity during
that outburst was studied, e.g., in [24, 25]. A simple estimation of the
neutrino flux using the jet model can be performed with the radio jet
information when the source is in an outburst state. The upper limits reported
here are compared to the time-integrated flux estimation in Fig. 3. The
collision region is estimated from the flaring duration. The values of jet
parameters in the estimation are from [24, 25], and the spectrum is assumed to
be a power-law with an index of 2 and an exponential cutoff of 100 TeV.
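The assumed spectral shape can be evaluated numerically; the sketch below computes the energy fluence implied by an $E^{-2}$ power law with an exponential cutoff at 100 TeV. The normalization `F` is an arbitrary placeholder, not a value fitted in the analysis.

```python
import numpy as np

F = 1.0e-4        # TeV^-1 cm^-2, placeholder time-integrated normalization
E_cut = 100.0     # TeV, exponential cutoff energy

def dN_dE(E):
    """Time-integrated differential flux: F * (E/TeV)^-2 * exp(-E/E_cut)."""
    return F * (E / 1.0) ** -2 * np.exp(-E / E_cut)

# Energy fluence between 1 TeV and 1 PeV, trapezoid rule on a log grid.
E = np.logspace(0.0, 3.0, 4000)
y = E * dN_dE(E)
fluence = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(E))
print(f"energy fluence, 1 TeV - 1 PeV: {fluence:.3e} TeV cm^-2")
```

Note how the cutoff makes the energy fluence integral converge: for a pure $E^{-2}$ spectrum each energy decade contributes equally, whereas the exponential suppression removes essentially all contribution above a few hundred TeV.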
For Cyg X-3, one of the microquasars identified as a gamma-ray source in early
observations, many predictions have been calculated in the past decades
depending on different models for microquasars. For a comparison to the upper
limits, we take [22] and [21], which discussed the general $p\gamma$ and $pp$
scenarios based on the AGILE observations, respectively, as shown in Fig. 4.
It should be noted that Cyg X-3 lies in the direction of the Cygnus X region,
close in projection to the Cygnus OB2 association but at a larger distance
than the Cygnus X region. The possibility of contamination from the
Cygnus X complex cannot be excluded.
The next generation of the IceCube experiment, IceCube-Gen2, will provide a
factor-of-eight increase in volume [26], leading to an expected $\sim$5-fold
increase in effective area compared to IceCube and a corresponding improvement
in sensitivity, which will advance the identification of neutrino sources.
Here, we extend the study to IceCube-Gen2 and estimate the sensitivity and
discovery potential for V404 Cyg, as an example of a flaring source, and Cyg
X-3, as an example of a persistently emitting source.
The estimated improvement can be seen in Fig. 3 and Fig. 4. The effective
areas of muon tracks are computed from the proposed IceCube-Gen2
configuration, and the projection is evaluated similarly to that in [26],
without considering a contribution from the existing IceCube detector. The
comparison with theoretical calculations demonstrates the future power of
IceCube-Gen2 to either identify those sources or rule out models.
## 4 Summary
A Galactic contribution to the high-energy neutrino flux observed by IceCube
is expected. We present a study of neutrino emission from XRBs, long-standing
candidates for the Galactic sources of CRs and neutrinos. We performed a time-
dependent analysis based on the assumption of flaring neutrino emission. In
parallel, a time-integrated search was performed on 4 notable sources and 2
stacked source lists. In the absence of any significant excess, we set upper limits
on the neutrino emission in the scenarios discussed. The results of the most
significant sources in this search are compared to models of neutrino
production in XRBs. Our estimation of the improved detectability by IceCube-
Gen2, owing to higher neutrino event statistics, demonstrates the potential
for future detection and presents a promising outlook for identifying Galactic
cosmic-ray accelerators in the upcoming years.
## References
* [1] IceCube Collaboration, M. G. Aartsen et al. Astrophys. J. 849 no. 1, (2017) 67.
* [2] ANTARES, IceCube Collaboration, A. Albert et al. Astrophys. J. Lett. 868 no. 2, (2018) L20.
* [3] IceCube, HAWC Collaboration, A. Kheirandish and J. Wood PoS ICRC2019 (2020) 932.
* [4] IceCube Collaboration, M. G. Aartsen et al. Astrophys. J. 898 no. 2, (2020) 117.
* [5] A. Levinson and E. Waxman Phys. Rev. Lett. 87 (2001) 171101.
* [6] L. A. Anchordoqui, D. F. Torres, T. P. McCauley, G. E. Romero, and F. A. Aharonian Astrophys. J. 589 (2003) 481–486.
* [7] G. E. Romero, D. F. Torres, M. M. K. Bernado, and I. F. Mirabel Astron. Astrophys. 410 (2003) L1–L4.
* [8] W. Bednarek Astrophys. J. 631 (2005) 466.
* [9] IceCube Collaboration, M. G. Aartsen et al. Astrophys. J. 807 no. 1, (2015) 46.
* [10] A. Albert et al. JCAP 04 (2017) 019.
* [11] J. Braun, J. Dumm, F. De Palma, C. Finley, A. Karle, and T. Montaruli Astropart. Phys. 29 (2008) 299–305.
* [12] J. Braun, M. Baker, J. Dumm, C. Finley, A. Karle, and T. Montaruli Astropart. Phys. 33 (2010) 175–181.
* [13] H. A. Krimm et al. Astrophys. J. Suppl. 209 (2013) 14.
* [14] M. Matsuoka, K. Kawasaki, S. Ueno, H. Tomida, M. Kohama, M. Suzuki, Y. Adachi, M. Ishikawa, T. Mihara, M. Sugizaki, et al. Publications of the Astronomical Society of Japan 61 no. 5, (2009) 999–1010.
* [15] J. D. Scargle, J. P. Norris, B. Jackson, and J. Chiang Astrophys. J. 764 (2013) 167.
* [16] Q. Z. Liu, J. van Paradijs, and E. P. J. v. d. Heuvel Astron. Astrophys. 455 (2006) 1165.
* [17] Q. Z. Liu, J. van Paradijs, and E. P. J. v. d. Heuvel Astron. Astrophys. 469 (2007) 807.
* [18] S. P. Wakely and D. Horan, “Tevcat: an online catalog for very high energy gamma-ray astronomy,” in International Cosmic Ray Conference, vol. 3, pp. 1341–1344. 2008\.
* [19] IceCube Collaboration, M. G. Aartsen et al. Astropart. Phys. 92 (2017) 30–41.
* [20] C. Distefano, D. Guetta, E. Waxman, and A. Levinson Astrophys. J. 575 (2002) 378–383.
* [21] N. Sahakyan, G. Piano, and M. Tavani Astrophys. J. 780 (2014) 29.
* [22] P. Baerwald and D. Guetta Astrophys. J. 773 (2013) 159.
* [23] A. Segreto, M. Del Santo, A. D’Aí, V. La Parola, G. Cusumano, T. Mineo, and J. Malzac The Astronomer’s Telegram 7755 (2015) 1.
* [24] J. C. A. Miller-Jones et al. Nature 569 (2019) 374–377.
* [25] A. J. Tetarenko et al. Mon. Not. Roy. Astron. Soc. 482 no. 3, (2019) 2950–2972.
* [26] IceCube Gen2 Collaboration, M. G. Aartsen et al.
## Full Author List: IceCube Collaboration
R. Abbasi17, M. Ackermann59, J. Adams18, J. A. Aguilar12, M. Ahlers22, M.
Ahrens50, C. Alispach28, A. A. Alves Jr.31, N. M. Amin42, R. An14, K.
Andeen40, T. Anderson56, G. Anton26, C. Argüelles14, Y. Ashida38, S. Axani15,
X. Bai46, A. Balagopal V.38, A. Barbano28, S. W. Barwick30, B. Bastian59, V.
Basu38, S. Baur12, R. Bay8, J. J. Beatty20, 21, K.-H. Becker58, J. Becker
Tjus11, C. Bellenghi27, S. BenZvi48, D. Berley19, E. Bernardini59, 60, D. Z.
Besson34, 61, G. Binder8, 9, D. Bindig58, E. Blaufuss19, S. Blot59, M.
Boddenberg1, F. Bontempo31, J. Borowka1, S. Böser39, O. Botner57, J.
Böttcher1, E. Bourbeau22, F. Bradascio59, J. Braun38, S. Bron28, J. Brostean-
Kaiser59, S. Browne32, A. Burgman57, R. T. Burley2, R. S. Busse41, M. A.
Campana45, E. G. Carnie-Bronca2, C. Chen6, D. Chirkin38, K. Choi52, B. A.
Clark24, K. Clark33, L. Classen41, A. Coleman42, G. H. Collin15, J. M.
Conrad15, P. Coppin13, P. Correa13, D. F. Cowen55, 56, R. Cross48, C. Dappen1,
P. Dave6, C. De Clercq13, J. J. DeLaunay56, H. Dembinski42, K. Deoskar50, S.
De Ridder29, A. Desai38, P. Desiati38, K. D. de Vries13, G. de Wasseige13, M.
de With10, T. DeYoung24, S. Dharani1, A. Diaz15, J. C. Díaz-Vélez38, M.
Dittmer41, H. Dujmovic31, M. Dunkman56, M. A. DuVernois38, E. Dvorak46, T.
Ehrhardt39, P. Eller27, R. Engel31, 32, H. Erpenbeck1, J. Evans19, P. A.
Evenson42, K. L. Fan19, A. R. Fazely7, S. Fiedlschuster26, A. T. Fienberg56,
K. Filimonov8, C. Finley50, L. Fischer59, D. Fox55, A. Franckowiak11, 59, E.
Friedman19, A. Fritz39, P. Fürst1, T. K. Gaisser42, J. Gallagher37, E.
Ganster1, A. Garcia14, S. Garrappa59, L. Gerhardt9, A. Ghadimi54, C. Glaser57,
T. Glauch27, T. Glüsenkamp26, A. Goldschmidt9, J. G. Gonzalez42, S. Goswami54,
D. Grant24, T. Grégoire56, S. Griswold48, M. Gündüz11, C. Günther1, C.
Haack27, A. Hallgren57, R. Halliday24, L. Halve1, F. Halzen38, M. Ha Minh27,
K. Hanson38, J. Hardin38, A. A. Harnisch24, A. Haungs31, S. Hauser1, D.
Hebecker10, K. Helbing58, F. Henningsen27, E. C. Hettinger24, S. Hickford58,
J. Hignight25, C. Hill16, G. C. Hill2, K. D. Hoffman19, R. Hoffmann58, T.
Hoinka23, B. Hokanson-Fasig38, K. Hoshina38, 62, F. Huang56, M. Huber27, T.
Huber31, K. Hultqvist50, M. Hünnefeld23, R. Hussain38, S. In52, N. Iovine12,
A. Ishihara16, M. Jansson50, G. S. Japaridze5, M. Jeong52, B. J. P. Jones4, D.
Kang31, W. Kang52, X. Kang45, A. Kappes41, D. Kappesser39, T. Karg59, M.
Karl27, A. Karle38, U. Katz26, M. Kauer38, M. Kellermann1, J. L. Kelley38, A.
Kheirandish56, K. Kin16, T. Kintscher59, J. Kiryluk51, S. R. Klein8, 9, R.
Koirala42, H. Kolanoski10, T. Kontrimas27, L. Köpke39, C. Kopper24, S.
Kopper54, D. J. Koskinen22, P. Koundal31, M. Kovacevich45, M. Kowalski10, 59,
T. Kozynets22, E. Kun11, N. Kurahashi45, N. Lad59, C. Lagunas Gualda59, J. L.
Lanfranchi56, M. J. Larson19, F. Lauber58, J. P. Lazar14, 38, J. W. Lee52, K.
Leonard38, A. Leszczyńska32, Y. Li56, M. Lincetto11, Q. R. Liu38, M.
Liubarska25, E. Lohfink39, C. J. Lozano Mariscal41, L. Lu38, F. Lucarelli28,
A. Ludwig24, 35, W. Luszczak38, Y. Lyu8, 9, W. Y. Ma59, J. Madsen38, K. B. M.
Mahn24, Y. Makino38, S. Mancina38, I. C. Mariş12, R. Maruyama43, K. Mase16, T.
McElroy25, F. McNally36, J. V. Mead22, K. Meagher38, A. Medina21, M. Meier16,
S. Meighen-Berger27, J. Micallef24, D. Mockler12, T. Montaruli28, R. W.
Moore25, R. Morse38, M. Moulai15, R. Naab59, R. Nagai16, U. Naumann58, J.
Necker59, L. V. Nguyễn24, H. Niederhausen27, M. U. Nisa24, S. C. Nowicki24, D.
R. Nygren9, A. Obertacke Pollmann58, M. Oehler31, A. Olivas19, E.
O’Sullivan57, H. Pandya42, D. V. Pankova56, N. Park33, G. K. Parker4, E. N.
Paudel42, L. Paul40, C. Pérez de los Heros57, L. Peters1, J. Peterson38, S.
Philippen1, D. Pieloth23, S. Pieper58, M. Pittermann32, A. Pizzuto38, M.
Plum40, Y. Popovych39, A. Porcelli29, M. Prado Rodriguez38, P. B. Price8, B.
Pries24, G. T. Przybylski9, C. Raab12, A. Raissi18, M. Rameez22, K. Rawlins3,
I. C. Rea27, A. Rehman42, P. Reichherzer11, R. Reimann1, G. Renzi12, E.
Resconi27, S. Reusch59, W. Rhode23, M. Richman45, B. Riedel38, E. J. Roberts2,
S. Robertson8, 9, G. Roellinghoff52, M. Rongen39, C. Rott49, 52, T. Ruhe23, D.
Ryckbosch29, D. Rysewyk Cantu24, I. Safa14, 38, J. Saffer32, S. E. Sanchez
Herrera24, A. Sandrock23, J. Sandroos39, M. Santander54, S. Sarkar44, S.
Sarkar25, K. Satalecka59, M. Scharf1, M. Schaufel1, H. Schieler31, S.
Schindler26, P. Schlunder23, T. Schmidt19, A. Schneider38, J. Schneider26, F.
G. Schröder31, 42, L. Schumacher27, G. Schwefer1, S. Sclafani45, D. Seckel42,
S. Seunarine47, A. Sharma57, S. Shefali32, M. Silva38, B. Skrzypek14, B.
Smithers4, R. Snihur38, J. Soedingrekso23, D. Soldin42, C. Spannfellner27, G.
M. Spiczak47, C. Spiering59, 61, J. Stachurska59, M. Stamatikos21, T.
Stanev42, R. Stein59, J. Stettner1, A. Steuer39, T. Stezelberger9, T.
Stürwald58, T. Stuttard22, G. W. Sullivan19, I. Taboada6, F. Tenholt11, S.
Ter-Antonyan7, S. Tilav42, F. Tischbein1, K. Tollefson24, L. Tomankova11, C.
Tönnis53, S. Toscano12, D. Tosi38, A. Trettin59, M. Tselengidou26, C. F.
Tung6, A. Turcati27, R. Turcotte31, C. F. Turley56, J. P. Twagirayezu24, B.
Ty38, M. A. Unland Elorrieta41, N. Valtonen-Mattila57, J. Vandenbroucke38, N.
van Eijndhoven13, D. Vannerom15, J. van Santen59, S. Verpoest29, M. Vraeghe29,
C. Walck50, T. B. Watson4, C. Weaver24, P. Weigel15, A. Weindl31, M. J.
Weiss56, J. Weldert39, C. Wendt38, J. Werthebach23, M. Weyrauch32, N.
Whitehorn24, 35, C. H. Wiebusch1, D. R. Williams54, M. Wolf27, K. Woschnagg8,
G. Wrede26, J. Wulff11, X. W. Xu7, Y. Xu51, J. P. Yanez25, S. Yoshida16, S.
Yu24, T. Yuan38, Z. Zhang51
1 III. Physikalisches Institut, RWTH Aachen University, D-52056 Aachen,
Germany
2 Department of Physics, University of Adelaide, Adelaide, 5005, Australia
3 Dept. of Physics and Astronomy, University of Alaska Anchorage, 3211
Providence Dr., Anchorage, AK 99508, USA
4 Dept. of Physics, University of Texas at Arlington, 502 Yates St., Science
Hall Rm 108, Box 19059, Arlington, TX 76019, USA
5 CTSPS, Clark-Atlanta University, Atlanta, GA 30314, USA
6 School of Physics and Center for Relativistic Astrophysics, Georgia
Institute of Technology, Atlanta, GA 30332, USA
7 Dept. of Physics, Southern University, Baton Rouge, LA 70813, USA
8 Dept. of Physics, University of California, Berkeley, CA 94720, USA
9 Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
10 Institut für Physik, Humboldt-Universität zu Berlin, D-12489 Berlin,
Germany
11 Fakultät für Physik & Astronomie, Ruhr-Universität Bochum, D-44780 Bochum,
Germany
12 Université Libre de Bruxelles, Science Faculty CP230, B-1050 Brussels,
Belgium
13 Vrije Universiteit Brussel (VUB), Dienst ELEM, B-1050 Brussels, Belgium
14 Department of Physics and Laboratory for Particle Physics and Cosmology,
Harvard University, Cambridge, MA 02138, USA
15 Dept. of Physics, Massachusetts Institute of Technology, Cambridge, MA
02139, USA
16 Dept. of Physics and Institute for Global Prominent Research, Chiba
University, Chiba 263-8522, Japan
17 Department of Physics, Loyola University Chicago, Chicago, IL 60660, USA
18 Dept. of Physics and Astronomy, University of Canterbury, Private Bag 4800,
Christchurch, New Zealand
19 Dept. of Physics, University of Maryland, College Park, MD 20742, USA
20 Dept. of Astronomy, Ohio State University, Columbus, OH 43210, USA
21 Dept. of Physics and Center for Cosmology and Astro-Particle Physics, Ohio
State University, Columbus, OH 43210, USA
22 Niels Bohr Institute, University of Copenhagen, DK-2100 Copenhagen, Denmark
23 Dept. of Physics, TU Dortmund University, D-44221 Dortmund, Germany
24 Dept. of Physics and Astronomy, Michigan State University, East Lansing, MI
48824, USA
25 Dept. of Physics, University of Alberta, Edmonton, Alberta, Canada T6G 2E1
26 Erlangen Centre for Astroparticle Physics, Friedrich-Alexander-Universität
Erlangen-Nürnberg, D-91058 Erlangen, Germany
27 Physik-department, Technische Universität München, D-85748 Garching,
Germany
28 Département de physique nucléaire et corpusculaire, Université de Genève,
CH-1211 Genève, Switzerland
29 Dept. of Physics and Astronomy, University of Gent, B-9000 Gent, Belgium
30 Dept. of Physics and Astronomy, University of California, Irvine, CA 92697,
USA
31 Karlsruhe Institute of Technology, Institute for Astroparticle Physics,
D-76021 Karlsruhe, Germany
32 Karlsruhe Institute of Technology, Institute of Experimental Particle
Physics, D-76021 Karlsruhe, Germany
33 Dept. of Physics, Engineering Physics, and Astronomy, Queen’s University,
Kingston, ON K7L 3N6, Canada
34 Dept. of Physics and Astronomy, University of Kansas, Lawrence, KS 66045,
USA
35 Department of Physics and Astronomy, UCLA, Los Angeles, CA 90095, USA
36 Department of Physics, Mercer University, Macon, GA 31207-0001, USA
37 Dept. of Astronomy, University of Wisconsin–Madison, Madison, WI 53706, USA
38 Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center,
University of Wisconsin–Madison, Madison, WI 53706, USA
39 Institute of Physics, University of Mainz, Staudinger Weg 7, D-55099 Mainz,
Germany
40 Department of Physics, Marquette University, Milwaukee, WI, 53201, USA
41 Institut für Kernphysik, Westfälische Wilhelms-Universität Münster, D-48149
Münster, Germany
42 Bartol Research Institute and Dept. of Physics and Astronomy, University of
Delaware, Newark, DE 19716, USA
43 Dept. of Physics, Yale University, New Haven, CT 06520, USA
44 Dept. of Physics, University of Oxford, Parks Road, Oxford OX1 3PU, UK
45 Dept. of Physics, Drexel University, 3141 Chestnut Street, Philadelphia, PA
19104, USA
46 Physics Department, South Dakota School of Mines and Technology, Rapid
City, SD 57701, USA
47 Dept. of Physics, University of Wisconsin, River Falls, WI 54022, USA
48 Dept. of Physics and Astronomy, University of Rochester, Rochester, NY
14627, USA
49 Department of Physics and Astronomy, University of Utah, Salt Lake City, UT
84112, USA
50 Oskar Klein Centre and Dept. of Physics, Stockholm University, SE-10691
Stockholm, Sweden
51 Dept. of Physics and Astronomy, Stony Brook University, Stony Brook, NY
11794-3800, USA
52 Dept. of Physics, Sungkyunkwan University, Suwon 16419, Korea
53 Institute of Basic Science, Sungkyunkwan University, Suwon 16419, Korea
54 Dept. of Physics and Astronomy, University of Alabama, Tuscaloosa, AL
35487, USA
55 Dept. of Astronomy and Astrophysics, Pennsylvania State University,
University Park, PA 16802, USA
56 Dept. of Physics, Pennsylvania State University, University Park, PA 16802,
USA
57 Dept. of Physics and Astronomy, Uppsala University, Box 516, S-75120
Uppsala, Sweden
58 Dept. of Physics, University of Wuppertal, D-42119 Wuppertal, Germany
59 DESY, D-15738 Zeuthen, Germany
60 Università di Padova, I-35131 Padova, Italy
61 National Research Nuclear University, Moscow Engineering Physics Institute
(MEPhI), Moscow 115409, Russia
62 Earthquake Research Institute, University of Tokyo, Bunkyo, Tokyo 113-0032,
Japan
### Acknowledgements
USA – U.S. National Science Foundation-Office of Polar Programs, U.S. National
Science Foundation-Physics Division, U.S. National Science Foundation-EPSCoR,
Wisconsin Alumni Research Foundation, Center for High Throughput Computing
(CHTC) at the University of Wisconsin–Madison, Open Science Grid (OSG),
Extreme Science and Engineering Discovery Environment (XSEDE), Frontera
computing project at the Texas Advanced Computing Center, U.S. Department of
Energy-National Energy Research Scientific Computing Center, Particle
astrophysics research computing center at the University of Maryland,
Institute for Cyber-Enabled Research at Michigan State University, and
Astroparticle physics computational facility at Marquette University; Belgium
– Funds for Scientific Research (FRS-FNRS and FWO), FWO Odysseus and Big
Science programmes, and Belgian Federal Science Policy Office (Belspo);
Germany – Bundesministerium für Bildung und Forschung (BMBF), Deutsche
Forschungsgemeinschaft (DFG), Helmholtz Alliance for Astroparticle Physics
(HAP), Initiative and Networking Fund of the Helmholtz Association, Deutsches
Elektronen Synchrotron (DESY), and High Performance Computing cluster of the
RWTH Aachen; Sweden – Swedish Research Council, Swedish Polar Research
Secretariat, Swedish National Infrastructure for Computing (SNIC), and Knut
and Alice Wallenberg Foundation; Australia – Australian Research Council;
Canada – Natural Sciences and Engineering Research Council of Canada, Calcul
Québec, Compute Ontario, Canada Foundation for Innovation, WestGrid, and
Compute Canada; Denmark – Villum Fonden and Carlsberg Foundation; New Zealand
– Marsden Fund; Japan – Japan Society for Promotion of Science (JSPS) and
Institute for Global Prominent Research (IGPR) of Chiba University; Korea –
National Research Foundation of Korea (NRF); Switzerland – Swiss National
Science Foundation (SNSF); United Kingdom – Department of Physics, University
of Oxford.
|
arxiv-papers
| 2021-07-26T18:00:01 |
2024-09-04T03:07:19.664077
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Qinrui Liu, Ali Kheirandish (for the IceCube Collaboration)",
"submitter": "Qinrui Liu",
"url": "https://arxiv.org/abs/2107.12383"
}
|
2107.12384
|
# Can the Blandford-Znajek mechanism power steady jets?
A.R. King School of Physics and Astronomy, University of Leicester, Leicester,
LE1 7RH, UK Astronomical Institute Anton Pannekoek, University of Amsterdam,
Science Park 904, NL-1098 XH Amsterdam, Netherlands Leiden Observatory,
Leiden University, Niels Bohrweg 2, NL-2333 CA Leiden, Netherlands J.E.
Pringle Institute of Astronomy, University of Cambridge, Madingley Road,
Cambridge CB3 0HA, UK
###### Abstract
We consider the Blandford-Znajek (BZ) mechanism for extracting black hole spin
energy to drive astrophysical jets. Analyses of the BZ mechanism generally
take no account of any electric charge on the black hole. But, as noted by
Wald and others, if the medium surrounding the black hole is an ionised plasma
with mobile charges, then a spinning hole quickly acquires an electric charge.
The effect of this charge is to nullify the electric field structures which
drive the BZ mechanism. Since jets are now observed in a wide variety of
classes of accreting objects, most of which do not contain a central black
hole, it seems likely that the jet driving mechanism in all astrophysical
objects uses energy directly from the accretion disc, rather than black hole
spin.
astrophysical jets – accretion discs – black hole physics
## 1 Introduction
Early maps of double–lobe radio galaxies (e.g. Mitton & Ryle, 1969) showed
amorphous blobs of radio emission symmetrically placed each side of the
central galaxy. Rees (1971) suggested that an unknown object in the galactic
nucleus channels energy to the radio lobes through jets. There is now almost
universal agreement that the central object in radio galaxies is a
supermassive black hole (Rees 1984), and that the high energy activity in the
nucleus is powered by accretion (Salpeter, 1964), most likely through an
accretion disc (Lynden–Bell, 1969). Thus the relativistic jets seen to emanate
from the nucleus are ultimately powered by accretion of matter on to the
central black hole (see the reviews by Begelman, Blandford & Rees, 1984;
Heckman & Best, 2014; Blandford, Meier & Readhead, 2019).
Blandford & Znajek (1977, hereafter BZ77) proposed a radical new mechanism in
which the jets from galactic nuclei are powered by direct electromagnetic
extraction of the spin energy from the central black hole. This is the
Blandford–Znajek (BZ) mechanism.
In Section 2.1 we provide a brief overview of this mechanism, which invokes a
spinning black hole situated in an aligned magnetic field. We note that the
central black hole is tacitly assumed always to have negligible electric
charge, that is, any current entering the hole is assumed to be balanced by a
precisely opposing current entering elsewhere, so that there is an effective
current through the hole. In Section 2.2 we draw attention to the analysis by
Wald (1974; see also King et al., 1975; Petterson, 1975; Gibbons et al., 2013)
in which he finds that if the black hole is surrounded by a plasma which
contains mobile charges, it will acquire a specific electric charge. The
charge is exactly such as to render the BZ mechanism inoperative. We provide a
brief discussion in Section 3.
## 2 The Blandford–Znajek mechanism
### 2.1 The basic mechanism; the uncharged black hole
Black holes are generally assumed to have zero net electric charge. The reason
is that if for example a Schwarzschild (non–rotating) black hole is given a
charge, the neighbourhood of the hole then acquires an electric field. If (and
only if) charge separation is allowed, charged particles in the surrounding
astrophysical plasma move parallel or antiparallel to the electric field. In
this way the black hole selectively acquires charges so that it moves quickly
towards zero net charge. Charge separation is well known to occur in
electrical media in which the charge carriers are able to move independently.
It occurs for example in electrolytes (Debye & Hückel, 1923) and in ionized
astrophysical plasmas (Salpeter, 1954), and it affects the rates of nuclear
burning in stars (e.g. Clayton, 1968). The timescale to reach zero net charge
is typically quite short in realistic astrophysical environments (see, for
example, the discussion by Cardoso et al., 2016). Thus, Blandford (1987)
asserts that “charged, Kerr–Newman black holes are irrelevant to astronomy”.
Soon after black holes were recognised as a realistic astrophysical
possibility, Wald (1974) devised an elegant method to compute the effect on
the electromagnetic field structure when a black hole, with zero net charge,
is placed in a uniform magnetic field. King et al. (1975) extended this result
to consider more general aligned magnetic field structures. Wald (1974) showed
that for a spinning (Kerr) black hole, with zero net electric charge, and with
spin aligned with an external uniform magnetic field ${\bf B}$ in a vacuum,
the spin induces an electric field with ${\bf E\cdot B\neq 0}$ (for a
classical analogy, see Ruffini & Treves, 1973). Wald assumed a field uniform
at infinity, but King et al. (1975) showed that the field structure near the
hole is essentially the same for any realistic aligned field. Wald also noted
that an electric field with $\bf E\cdot B\neq 0$ would lead to movement of any
external charged particles. King et al. (1975) showed that the induced
electric field structure is such that $\bf E\cdot B$ has opposite signs near
the poles and near the equator of the spinning hole. This implies that charges
of one sign would be attracted towards the poles, whereas charges of the
opposite sign would be attracted to a band near the equator.
This result led BZ77 to consider a spinning black hole at the centre of an
accretion disc with the disc providing the required currents to give rise to
an aligned magnetic field. The Bardeen–Petterson effect (Bardeen & Petterson,
1975) ensures that the central disc and black hole spin are generally aligned,
and so by symmetry the magnetic field should also be aligned with the spin.
BZ77 then noted that if the black hole is surrounded by a conducting medium
the structure of the induced electric field found by Wald (1974) and by King
et al. (1975) must produce an effective electric current through the black
hole, returning through the surrounding medium (see, for example, Thorne &
Blandford, 1982; Blandford, 1987). Any dissipation of this current’s energy in
the surrounding material then taps the spin energy of the black hole.
This is the BZ–mechanism. It provides, in principle, a mechanism for the
continuous extraction of energy from a spinning black hole. BZ77 speculated
that it could be used to power the astrophysical jets seen to emanate from
active galactic nuclei (see also Blandford, Meier & Readhead, 2019). The
luminosity $L_{\rm BZ}$ produced by this process is, on dimensional grounds,
$L_{\rm BZ}\sim a^{2}\times\frac{B^{2}}{8\pi}\times\frac{4\pi R_{\rm s}^{3}}{3}\times\frac{c}{R_{\rm s}},$ (1)
for a black hole of mass $M$, Schwarzschild radius $R_{\rm s}\sim 2GM/c^{2}$,
dimensionless spin parameter $a$, where $0\leq a^{2}\leq 1$, placed in a
magnetic field of strength $B$, i.e. approximately the magnetic energy
contained by the formal ‘volume’ of the hole, emitted every light crossing
time of the hole, and moderated by the amount of spin energy available.
For a ‘fiducial’ field strength (Begelman, Blandford & Rees, 1984; but see
Ghosh & Abramowicz, 1997) of
$B\sim 2\times 10^{4}M_{8}^{-1/2}\>{\rm G},$ (2)
this gives a luminosity
$L_{\rm BZ}\sim 2\times 10^{45}a^{2}M_{8}\>{\rm erg\,s^{-1}},$ (3)
where $M_{8}$ is the mass of the black hole in units of $10^{8}M_{\odot}$.
This is comparable to the Eddington luminosity
$L_{E}\sim 1.3\times 10^{46}M_{8}\>{\rm erg\,s^{-1}}.$ (4)
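As a quick sanity check of the scalings in equations (1)–(4), the short Python sketch below (our own, not from the paper) evaluates the BZ and Eddington luminosities in CGS units for the fiducial field of equation (2); the constants and function names are ours, chosen for illustration.

```python
import math

# Order-of-magnitude check of eqs. (1)-(4) in CGS units.
G = 6.674e-8         # gravitational constant, cm^3 g^-1 s^-2
c = 2.998e10         # speed of light, cm s^-1
M_SUN = 1.989e33     # solar mass, g
SIGMA_T = 6.652e-25  # Thomson cross-section, cm^2
M_P = 1.673e-24      # proton mass, g

def bz_luminosity(M8=1.0, a=1.0):
    """Eq. (1): magnetic energy in the hole's 'volume', emitted once per
    light-crossing time, moderated by the dimensionless spin a."""
    M = 1e8 * M8 * M_SUN
    R_s = 2.0 * G * M / c**2      # Schwarzschild radius, ~3e13 * M8 cm
    B = 2e4 * M8**-0.5            # fiducial field of eq. (2), gauss
    return a**2 * (B**2 / (8 * math.pi)) * (4 * math.pi * R_s**3 / 3) * (c / R_s)

def eddington_luminosity(M8=1.0):
    """Standard Eddington luminosity for ionised hydrogen, cf. eq. (4)."""
    M = 1e8 * M8 * M_SUN
    return 4 * math.pi * G * M * M_P * c / SIGMA_T

print(f"L_BZ  ~ {bz_luminosity():.1e} erg/s")          # ~2e45, cf. eq. (3)
print(f"L_Edd ~ {eddington_luminosity():.1e} erg/s")   # ~1.3e46, cf. eq. (4)
```

For $M_{8}=a=1$ this recovers $L_{\rm BZ}\sim 2\times 10^{45}$ and $L_{\rm E}\approx 1.3\times 10^{46}\,{\rm erg\,s^{-1}}$, matching equations (3) and (4); $L_{\rm BZ}$ scales linearly with $M_{8}$ because $B^{2}R_{\rm s}^{2}\propto M_{8}$.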
BZ77 investigated this process using the force–free approximation in a
charge–separated plasma in which particle inertia and interparticle collision
terms can be ignored (see also Komissarov, 2004). Since then a number of
authors have investigated this mechanism assuming that the surrounding medium
could be modelled using the MHD approximation in which collision terms
dominate, often by numerical means (see for example the reviews by Davis &
Tchekhovskoy, 2020; Komissarov & Porth, 2021). However, in all these
investigations, it is implicitly assumed that the movements of charges in any
surrounding plasma do not permit a change in the net electrical charge of the
black hole.
### 2.2 The acquisition of charge and its implication
The discussion of Section 2.1 above assumes, as is usually the case in
astronomy, that the spinning black hole has negligible net electric charge.
However, in his seminal paper, Wald (1974) reasoned that a black hole would be
surrounded by a standard astrophysical plasma in which the possibility of
charge separation would exist. In this case, it is to be expected that the
electric field drives the charges in such a way as to lead to a drop in
electric potential along the magnetic field lines (cf. Komissarov, 2004). He
argued further that for a spinning black hole in an aligned magnetic field
${\bf B}$ the movements of charge carriers (i.e. currents) induced by the
electric field with ${\bf E\cdot B\neq 0}$, together with the fact that the
charge carriers are individually mobile, would lead to the hole selectively
accreting net charge in such a way as to nullify the effects of the electric
field.
Specifically, Wald showed that the net charge reaches the value
$Q=2BJ$ (5)
in geometrized units (where $J=Ma$ is the hole’s total angular momentum). Once this charge is reached it remains constant, quenching the currents which might otherwise drive the BZ effect.
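To gauge the size of the Wald charge in conventional units, equation (5) can be restored to Gaussian-CGS form as $Q\simeq 2GBJ/c^{3}$; taking $J=aGM^{2}/c$ for dimensionless spin $a$, the sketch below (our own unit conversion and variable names, not from the paper) evaluates $Q$ for the fiducial parameters, together with the geometrized charge-to-mass ratio $Q/(\sqrt{G}M)$.

```python
import math

# Wald charge Q = 2BJ (eq. 5) restored to Gaussian-CGS units: Q = 2*G*B*J/c**3,
# with J = a*G*M**2/c for dimensionless spin a. The conversion is our own;
# the numbers are the paper's fiducial M = 1e8 M_sun, B = 2e4 gauss.
G = 6.674e-8       # cm^3 g^-1 s^-2
c = 2.998e10       # cm s^-1
M_SUN = 1.989e33   # g

def wald_charge(M8=1.0, a=1.0, B=2e4):
    """Net charge acquired by the spinning hole, in esu."""
    M = 1e8 * M8 * M_SUN
    J = a * G * M**2 / c
    return 2.0 * G * B * J / c**3

def charge_to_mass(M8=1.0, a=1.0, B=2e4):
    """Geometrized ratio Q/(sqrt(G)*M); Q'^2 << M^2 iff this is << 1."""
    M = 1e8 * M8 * M_SUN
    return wald_charge(M8, a, B) / (math.sqrt(G) * M)

print(f"Q ~ {wald_charge():.1e} esu; Q/(G^0.5 M) ~ {charge_to_mass():.1e}")
```

The result, $Q\sim 10^{31}$ esu with $Q/(\sqrt{G}M)\sim 10^{-7}$, illustrates why the charge is gravitationally negligible ($Q^{\prime 2}\ll M^{2}$) while still able to dominate the dynamics of individual charged particles.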
This result was confirmed and generalised by Petterson (1975), who showed that
the precise value $Q^{\prime}$ of the critical charge in units of $Q$ depends
on the distribution of the source currents of the magnetic field.
The charge $Q^{\prime}$ is utterly negligible ($Q^{\prime 2}\ll M^{2}$) in
gravitational terms (Wald, 1974; Zajacek et al., 2018; Zajacek & Tursunov,
2019), so the spacetime metric is still to a high approximation uncharged
Kerr, in agreement with the remark by Blandford (1987) quoted above. In
particular the motion of uncharged particles is effectively identical to the
case $Q^{\prime}=0$. But charged particle motion is very different, and
strongly influenced by the charge $Q^{\prime}$. This is another illustration
of how extremely weak gravity is by comparison with electromagnetism. Thus, if
the surrounding conducting medium is treated as a realistic space plasma which
permits a net flow of charge into the black hole, then rather than providing a
continuous process for removing spin energy from the hole, the induced
electric currents are an initial transient effect which continues only until
the black hole acquires the charge $Q^{\prime}\simeq 2BJ$ (cf. Zajacek et al.,
2018; Zajacek & Tursunov, 2019). (For all conceivable boundary conditions far
from the hole, its net charge tends monotonically to $Q^{\prime}$.)
The energy released in this transient is, using the above numbers,
$E_{Q}\sim L_{\rm BZ}\times\frac{R_{\rm s}}{c}\sim 2\times
10^{48}a^{2}M_{8}^{2}\>{\rm erg},$ (6)
i.e. the transient emits the luminosity $L_{\rm BZ}$ for a time $\sim R_{\rm
s}/c\sim 10^{3}M_{8}\,{\rm s}$. Once the black hole acquires this charge, the
torque on it vanishes and no more spin energy can be extracted.
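A minimal numerical version of the transient estimate in equation (6), again in CGS with our own variable names and using the scaling of equation (3) for $L_{\rm BZ}$:

```python
# Transient released while the hole charges up to Q' ~ 2BJ (eq. 6): the BZ
# luminosity is emitted only for about one light-crossing time R_s/c.
G = 6.674e-8       # cm^3 g^-1 s^-2
c = 2.998e10       # cm s^-1
M_SUN = 1.989e33   # g

def transient(M8=1.0, a=1.0):
    """Return (E_Q in erg, duration in s) for the charging transient."""
    M = 1e8 * M8 * M_SUN
    R_s = 2.0 * G * M / c**2
    t_cross = R_s / c              # ~1e3 * M8 seconds
    L_bz = 2e45 * a**2 * M8        # eq. (3), erg/s
    return L_bz * t_cross, t_cross

E_Q, t = transient()
print(f"E_Q ~ {E_Q:.0e} erg, emitted over ~{t:.0f} s")  # ~2e48 erg, ~1e3 s
```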
More recently numerical modelling of the BZ–mechanism has been undertaken
using Particle–In–Cell (PIC) plasma methods, which permit independent mobility
of individual charges. In principle these techniques should be able to test
Wald’s fundamental hypothesis that a spinning black hole immersed in a
magnetic field should acquire a net charge. The same technique has also been
applied to ionised plasma surrounding rotating neutron stars (e.g.
Kalapotharakos et al., 2018). However, in contrast to the neutron star case,
where Kalapotharakos et al. (2018) treat the inner boundary with some care,
noting that they ensure “current closure of charge carriers that reach the
stellar surface”, modelling of the black hole case is often done with inner
boundary conditions which either prevent or ignore the acquisition of charge
by the black hole. For example, Parfrey et al. (2019; see also Crinquand et
al., 2021) do not comment on charge acquisition, and Hirotani et al. (2021)
impose that ${\bf E\cdot B}=0$ and that both the radial component of the
electric field and the meridional component of the magnetic field vanish at
the inner boundary. (Note added in proof: Parfrey, in a private communication,
informs us that, during the timespan of the simulations presented in Parfrey
et al. (2019), the black hole both acquires an electric charge and exhibits an
outflow of electromagnetic energy.)
## 3 Discussion
The BZ mechanism is a standardly cited mechanism for producing steady jets in
objects that contain accreting black holes – galactic nuclei and some X–ray
binaries. Indeed, the process is often invoked in papers which concern
numerical MHD simulations of jets and outflows produced by magnetic accretion
numerical MHD simulations of jets and outflows produced by magnetic accretion
discs (see the review by Davis & Tchekhovskoy, 2020). We have argued above, in
line with the original ideas of Wald (1974; see also Gibbons et al., 2013)
that in a realistic space plasma which permits a net flow of charge into the
black hole the BZ mechanism cannot tap the spin energy of a black hole
continuously, and is therefore not a viable mechanism for powering continuous
astrophysical jets.
This is primarily because the conducting medium surrounding the black hole
should be treated as an astrophysical plasma with mobile charge carriers. When
charge separation is allowed, along with the possibility of a net flux of
charge into the black hole, any spinning black hole quickly acquires the net
electrical charge $Q^{\prime}$, and the electric fields which drive the
currents required for the BZ mechanism are nullified. The same process of
charge separation which ensures that a non–rotating black hole has zero charge
also ensures that a rotating hole acquires the charge $Q^{\prime}$ that makes
the BZ mechanism inoperative. We have noted that these ideas need to be
tested, for example using PIC plasma simulation techniques: it might be that
collective plasma effects serve to counteract the tendency of the black hole
to acquire charge. (We thank the referee for stressing this possibility.)
There are, of course, many other kinds of astrophysical objects which do not
contain black holes and nevertheless produce jets (Burgarella et al., 1993;
Smith, 2012). The jets emitted by young stellar objects are particularly
spectacular (see the review by Ray & Ferreira, 2021). Thus the application of
Occam’s razor (‘Do not multiply hypotheses’, or put simply, ‘don’t invent two
theories for the same thing’) has long suggested that the BZ mechanism, even
if it were viable, is in fact not required for the production of astrophysical
jets (Livio, 1997; see also Pringle, 1993; Price, Pringle & King, 2003). In
addition, Russell, Gallo & Fender (2013) have shown that the jet production
mechanism in binary X–ray sources is not consistent with the prediction of the
BZ mechanism that the jet power should depend on the square of the
dimensionless black-hole spin parameter (see equation 3).
Thus, following Occam, if one is forced to choose a single mechanism capable
of producing all astrophysical jets which emanate from accreting objects, then
the most likely choice would be some form of MHD process resulting from
poloidal magnetic field threading the accretion disc (Livio, 1997; Livio et
al., 1999). A mechanism like this is already discussed by Blandford & Znajek
(1977), and early ideas on this process are given by Blandford & Payne (1982)
and, in the protostellar case, by Pudritz & Norman (1983, 1986).
## Acknowledgments
We thank Bob Carswell, Gary Gibbons, Chris Nixon, Colin Norman, Kyle Parfrey,
Roger Blandford and Roman Znajek for helpful comments, and the referee for a
thoughtful report.
## REFERENCES
Bardeen, J.M., & Petterson, J.A., 1975, ApJL, 195, L65
Begelman, M.C., Blandford, R.D., & Rees, M.J., 1984, Rev. Mod. Phys., 56, 255
Blandford, R.D., 1987, in Three Hundred Years of Gravitation, Cambridge
University Press, eds S. Hawking & W. Israel, pp. 277 – 329
Blandford, R.D., Meier, D. & Readhead, A. 2019, ARA&A, 57, 467
Blandford, R.D., & Payne, D.G., 1982, MNRAS, 199, 883
Blandford, R.D., & Znajek, R. L. 1977, MNRAS 179, 433
Burgarella, D., Livio, M., & O’Dea, C.P. (eds), 1993, Astrophysical Jets,
Cambridge University Press
Cardoso, V., Macedo, C.F.B., Pani, P., & Ferrari, V., 2016, JCAP, 05, 054
Clayton, D.D., 1968, Principles of Stellar Evolution and Nucleosynthesis,
Univ. of Chicago Press
Crinquand, B., Cerutti, B., Dubus, G., Parfrey, K., Philippov, A., 2021, A&A,
650, A163
Debye, P., & Hückel, E., 1923, Physikalische Zeitschrift, 24, 185
Ghosh, P, & Abramowicz, M.A., 1997, MNRAS, 292, 887
Gibbons, G. W., Mujtaba, A. H., & Pope, C. N. 2013, Classical and Quantum
Gravity, 30, 125008
Heckman, T.M., & Best, P. N., 2014, ARA&A, 52, 589
Hirotani, K., Krasnopolsky, R., Shang, H., Nishikawa, K., Watson, M., 2021,
ApJ, 908, 88
Kalapotharakos, C., Brambilla, G. Timokhin, A. Harding, A. K. & Kazanas, D.,
2018, ApJ, 857, 44
King, A.R., Lasota, J.P., & Kundt, W., 1975, Phys. Rev. D,12, 3037
Komissarov, S.S., 2004, MNRAS, 350, 427
Komissarov, S.S., & Porth, O., 2021, NewAR, 92, 101610
Livio, M., 1997, in Accretion Phenomena and Related Outflows eds D.T.
Wickramasinghe, L. Ferrario, & G.V. Bicknell, ASP Conference Series 121, 845
Livio, M., Ogilvie, G.I., & Pringle, J.E., 1999, ApJ, 512, 100
Lynden–Bell, D., 1969, Nature, 223, 690
Mitton., S., & Ryle, M., 1969, MNRAS, 146, 221
Parfrey, K., Philippov, A., & Cerutti, B., 2019, Phys. Rev. Lett., 122, 035101
Petterson, J. A. 1975, Phys Rev D,12, 2218
Price, D.J., Pringle, J.E., & King, A.R., 2003, MNRAS 339, 1223
Pringle, J.E., 1993, in Astrophysical Jets, Cambridge University Press, eds
Burgarella, D., Livio, M., & O’Dea, C.P.
Pudritz, R.E., & Norman, C.A., 1983, ApJ, 274, 677
Pudritz, R.E., & Norman, C.A., 1986, ApJ, 301, 571
Ray, T.P, & Ferreira, J. 2021, NewAR, 93, 101615
Rees, M.J., 1971, Nature, 229, 312
Rees, M.J., 1984, ARA&A, 22, 471
Ruffini, R. & Treves, A. 1973, Astrophys. Lett., 13, 109
Russell, D.M., Gallo, E., & Fender, R.P., 2013, MNRAS, 431, 405
Salpeter, E.E., 1954, Aust. J. Phys., 7, 373
Salpeter, E.E., 1964, ApJ, 140, 796
Smith, M.D., 2012, Astrophysical Jets and Beams Cambridge University Press
Thorne, K.S., & Blandford, R.D., 1982, in Proc IAU Symp 97 eds. Heeschen, D.S.
& Wade, C.M., 255
Wald, R.M., 1974, Phys Rev D, 10, 1680
Zajacek, M., Tursunov, A., Eckart, A., & Britzen, S., 2018, MNRAS, 480, 4408
Zajacek, M., & Tursunov, A., 2019, The Observatory, 139, 231
|
arxiv-papers
| 2021-07-26T18:00:01 |
2024-09-04T03:07:19.675373
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "A.R. King and J.E. Pringle",
"submitter": "Andrew King",
"url": "https://arxiv.org/abs/2107.12384"
}
|
2107.12388
|
# Non-Abelian bosonization in a $(3+1)$-d Kondo semimetal via quantum
anomalies
Colin Rylands Joint Quantum Institute and Condensed Matter Theory Center,
University of Maryland, College Park, MD 20742, USA Alireza Parhizkar Joint
Quantum Institute and Condensed Matter Theory Center, University of Maryland,
College Park, MD 20742, USA Victor Galitski Joint Quantum Institute and
Condensed Matter Theory Center, University of Maryland, College Park, MD
20742, USA
###### Abstract
Kondo lattice models have established themselves as an ideal platform for
studying the interplay between topology and strong correlations such as in
topological Kondo insulators or Weyl-Kondo semimetals. The nature of these
systems requires the use of non-perturbative techniques which are few in
number, especially in high dimensions. Motivated by this we study a model of
Dirac fermions in $3+1$ dimensions coupled to an arbitrary array of spins via
a generalization of functional non-Abelian bosonization. We show that there
exists an exact transformation of the fermions which allows us to write the
system as decoupled free fermions and interacting spins. This decoupling
transformation consists of a local chiral, Weyl and Lorentz transformation
parameterized by solutions to a set of nonlinear differential equations which
order by order takes the form of Maxwell’s equations with the spins acting as
sources. Owing to its chiral and Weyl components this transformation is
anomalous and generates a contribution to the action. From this we obtain the
effective action for the spins and expressions for the anomalous transport in
the system. In the former we find that the coupling to the fermions generates
kinetic terms for the spins, a long ranged interaction and a Wess-Zumino like
term. In the latter we find generalizations of the chiral magnetic and Hall
effects.
## I Introduction
Quantum impurity models are a prime example of strongly correlated condensed
matter systems, facilitating our understanding of many physical phenomena
including the ubiquitous Kondo effect. When many impurities are present as is
the case in a Kondo lattice, hybridization between the localized spins and the
itinerant fermions leads to a variety of effects including rich heavy-fermion
physics Hewson (1993); Coleman (2015). More recently, the Kondo lattice has
been the focus of attention for its possible topological properties and in
particular the potential to study the interplay between topology and strong
correlations. In those cases, the strong correlations result in the emergence
of topological phases of matter including topological Kondo insulators Dzero
_et al._ (2010, 2016) and Weyl-Kondo semimetals Dzsaber _et al._ (2017); Lai
_et al._ (2017); Dzsaber _et al._ (2021); Chang and Coleman (2018). In the
latter case, due to the Kondo effect, the low energy excitations of the system
are Weyl fermions. Weyl semimetals are of great interest in and of themselves
providing concrete realizations in a solid-state system of physical phenomena
historically associated with particle physics. Chief amongst these is the
chiral anomaly, the breaking of classical chiral symmetry Adler (1969); Bell
and Jackiw (1969); Fujikawa and Suzuki (2004) in a quantum theory which gives
rise to several distinct transport features in free Nielsen and Ninomiya
(1983); Wan _et al._ (2011); Burkov and Balents (2011); Yang _et al._
(2011); Xu _et al._ (2011); Halász and Balents (2012); Aji (2012); Weng _et
al._ (2015); Lv _et al._ (2015, 2015); Xu _et al._ (2015); Huang _et al._
(2015); Son and Spivak (2013); Goswami and Tewari (2013) and interacting
systems Raines and Galitski (2017); Rylands _et al._ (2021).
Motivated by these considerations and in particular the effects of quantum
anomalies in strongly correlated systems, we study a system of
$3+1$-dimensional Dirac fermions coupled to an arbitrary array of spins. In
lower dimensions there exist many analytic, non-perturbative or exact
techniques to study Kondo models including conformal field theory Affleck
(1995); Fradkin _et al._ (1990), Bethe Ansatz Andrei _et al._ (1983);
Tsvelick and Wiegmann (1983); Rylands and Andrei (2017, 2016, 2018); Pasnoori
_et al._ (2020, 2021) and bosonization Giamarchi (2003); Gogolin _et al._
(2004); Fradkin _et al._ (1989); Tsvelik (1994); Goswami and Si (2011); Lobos
_et al._ (2015). In higher dimensions there is a lack of non-perturbative
techniques and typically a slave particle approach is adopted Read and Newns
(1983); Coleman (1984); Hewson (1993); Coleman (2015); Chang and Coleman
(2018). In this work we will take an alternative approach. Our method will
follow that of the anomaly based formulation of functional bosonization,
appropriately generalized to the present situation. In the original
formulation one considers Dirac fermions in $1+1$ dimensions, with either
Abelian or non-Abelian symmetry, coupled to some fluctuating field, e.g. a
Hubbard-Stratonovich field. The fermions are decoupled from this field via a
judiciously chosen local chiral and gauge rotation, after which the system
consists of free fermions and a decoupled fluctuating field whose effective
action is calculated using the chiral anomaly Gamboa Saraví _et al._ (1981);
Furuya _et al._ (1982); Naón (1985); Lee and Chen (1988). The remaining
fermionic degrees of freedom are easily integrated out resulting in an
effective bosonic theory. We shall follow the same procedure for our system,
with the spin-momentum locking of Dirac fermions necessitating a non-Abelian
formulation. In addition, the increase in dimensions shall make the formalism
more complex and ultimately will not end in an exact solution which can be the
case in lower dimensions Giamarchi (2003); Gogolin _et al._ (2004). In spite
of this, the approach provides us access to some exact results including that
of the anomalous transport in the system.
This paper is organized as follows: In Section II we introduce the model and
discuss the relation between our method and the standard chiral anomaly
treatment of Weyl semimetals. In Section III we formulate the decoupling
transformation and show how it can reduce the system to free fermions and a
decoupled interacting spin system. In the subsequent section we present an
iterative scheme for finding this transformation. In Section V we calculate
the contributions of the chiral and Weyl anomalies to the action for our
model. These are then used in Section VI to determine the low energy effective
action for the spin system. Following this we determine the anomalous
transport, finding a modification to the quantum Hall current and a chiral
magnetic effect (CME) due to the fluctuations of the spins. In the last
section we summarize and conclude.
## II Model
Our system is described by the path integral
$\displaystyle
Z=\int\mathcal{D}\psi\mathcal{D}\bar{\psi}\mathcal{D}\mathbf{S}\,e^{i\mathcal{S}[\psi,\bar{\psi},\mathbf{S}]}$
(1)
where the action is $\mathcal{S}=\mathcal{S}_{D}+\mathcal{S}_{\text{spin}}$
with
$\displaystyle\mathcal{S}_{D}=\int
d^{4}x\,\bar{\psi}(x)i\gamma^{\mu}\left[\partial_{\mu}-ieA_{\mu}(x)-iJ\gamma_{5}S_{\mu}(x)\right]\psi(x).$
(2)
Here $\bar{\psi}(x),\psi(x)$ are four-component Dirac fermions (spinor
indices suppressed for the moment) describing the low energy sector of a
semimetal. They are coupled via a spin exchange of strength $J$ to a system of
spins, $S_{\mu}(x)=(0,\mathbf{S}(x))$, governed by the action
$\mathcal{S}_{\text{spin}}$, and to a gauge field $A_{\mu}(x)$. Specific cases of
this model have been studied previously, including the Kondo effect for a
single impurity Mitchell and Fritz (2015) and also the Ruderman-Kittel-Kasuya-
Yosida (RKKY) interaction for two impurities Chang _et al._ (2015). Our
approach here does not depend upon the form of $\mathcal{S}_{\text{spin}}$;
$\mathbf{S}(x)$ can be an arbitrary vector field, classical or quantum as in
(1).
This action, (2), can also be viewed as that of a low energy description of a
semimetal subject to strain fields Arjona and Vozmediano (2018); Chernodub and
Vozmediano (2019). The strain induces chiral gauge fields in the action for
which the field $\mathbf{S}(x)$ plays the role of a vector potential e.g. a
rotational strain will induce a chiral magnetic field with strength
$\mathbf{\nabla}\times\mathbf{S}(x)$. Our results are also applicable to these
strained semimetals; however, in the remainder of the paper we restrict the
terminology and perspective to that of the Kondo semimetal.
We shall perform a rotation on the fermionic degrees of freedom such that
$\mathbf{S}(x)$ is removed from $\mathcal{S}_{D}$. For a simple Weyl
semimetal, when $\mathbf{S}$ is constant this is easily carried out via a
local chiral gauge transformation
$\psi(x)\to
e^{i\gamma_{5}\mathbf{x}\cdot\mathbf{S}}\psi(x),~{}\bar{\psi}(x)\to\bar{\psi}(x)e^{i\gamma_{5}\mathbf{x}\cdot\mathbf{S}}$
which transforms the action $\mathcal{S}_{D}\to\mathcal{S}_{D}^{\prime}=\int
dx\,\bar{\psi}(x)\,i\gamma^{\mu}\left[\partial_{\mu}-ieA_{\mu}(x)\right]\psi(x)$
Zyuzin and Burkov (2012). Such a transformation is known to be anomalous Adler
(1969); Bell and Jackiw (1969), meaning that it results in a nontrivial
Jacobian in the path integral measure i.e.
$\mathcal{D}\psi\mathcal{D}\bar{\psi}\to\mathcal{D}\psi\mathcal{D}\bar{\psi}\,e^{i\mathcal{S}_{\mathcal{A}}}.$
The anomalous contribution to the transformed action,
$\mathcal{S}_{\mathcal{A}}$ is straightforwardly calculated using the method
of Fujikawa Fujikawa (1979, 1980). It takes the form of an axion-like term
$\displaystyle\mathcal{S}_{\mathcal{A}}=J\frac{e^{2}}{4\pi^{2}}\int
d^{4}x\,\epsilon^{\nu\mu\rho\sigma}S_{\mu}A_{\nu}\partial_{\rho}A_{\sigma}.$
(3)
Using this one may then calculate the anomalous transport of the fermions by
varying the action with respect to $A_{\mu}(x)$. This gives the Hall current
$\left<\mathbf{j}\right>=J\frac{e^{2}}{2\pi^{2}}\mathbf{S}\times\mathbf{E}$ and
density $\left<\rho\right>=J\frac{e^{2}}{2\pi^{2}}\mathbf{S}\cdot\mathbf{B}$, where
$\mathbf{E},\,\mathbf{B}$ are external electric and magnetic fields. When
$S_{0}\neq 0$, inversion symmetry is broken and the chiral magnetic effect
occurs in the presence of a magnetic field Fukushima _et al._ (2008); Chen
_et al._ (2013); Zyuzin and Burkov (2012).
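For reference, these expressions can be sketched directly from the axion term (3): for constant $S_{\mu}$, the two contributions from varying $A_{\nu}$ and $\partial_{\rho}A_{\sigma}$ add after an integration by parts, giving

```latex
j^{\lambda}(x)=\frac{\delta\mathcal{S}_{\mathcal{A}}}{\delta A_{\lambda}(x)}
  =J\frac{e^{2}}{2\pi^{2}}\,\epsilon^{\lambda\mu\rho\sigma}S_{\mu}\partial_{\rho}A_{\sigma}.
```

The $\lambda=0$ component then assembles into $\left<\rho\right>\propto\mathbf{S}\cdot\mathbf{B}$ and the spatial components into $\left<\mathbf{j}\right>\propto\mathbf{S}\times\mathbf{E}$, with the $e^{2}/2\pi^{2}$ normalization that also appears in (29) and (30).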
When $\mathbf{S}$ is not constant a local chiral gauge transformation is no
longer sufficient to decouple the fermions from the spins. As we will show
below it is still possible but the transformation which does this is non-
Abelian, consisting of a combination of local chiral, Weyl and Lorentz
transformations. The first two of these are anomalous and will generate a
contribution to the action, which includes interactions between the spins and also
provides a route to calculating the exact anomalous transport. For simplicity
we restrict ourselves to zero chemical potential and zero chiral chemical
potential (i.e. $S_{0}=0$, which is also the case for a strained semimetal);
however, both can be straightforwardly accommodated within our approach. In
addition we treat only the zero temperature and infinite volume case.
## III Decoupling Transformation
For arbitrary $\mathbf{S}(x)$ the appropriate decoupling transformation is
$\psi(x)\to U(x)\psi(x)$ and $\bar{\psi}(x)\to\bar{\psi}(x)\overline{U}(x)$
with
$\displaystyle
U(x)=e^{i\gamma_{5}\phi(x)+\Omega(x)+i\gamma_{5}\mathcal{F}_{\mu\nu}(x)\sigma^{\mu\nu}}$
(4)
and $\overline{U}(x)=\gamma_{0}U^{\dagger}(x)\gamma_{0}$ where
$\sigma^{\mu\nu}=[\gamma^{\mu},\gamma^{\nu}]/2$ are the generators of Lorentz
transformations in the spinor representation. Heuristically, we can understand
the form of this transformation in the following way. We envision an array of
spins, each of arbitrary length and orientation. Using a local Lorentz
transformation we can locally rotate to a frame where the spins are parallel
but of differing length. They can then be rescaled in length to be the same
using the local Weyl transformation and following this they can then be
decoupled through the local chiral transformation.
More specifically, the real functions $\phi(x),~{}\Omega(x)$,
$\mathcal{F}_{\mu\nu}(x)$ which parameterize the local chiral, Weyl and
Lorentz transformations respectively are determined by solving
$\displaystyle
i\left[\not{\partial}U(x)\right]U^{-1}(x)=J\gamma_{5}\not{S}(x),$ (5)
where we have employed Dirac slash notation;
$\not{C}\equiv\gamma^{\mu}C_{\mu}$. As opposed to the case of constant
$\mathbf{S}$, the non-Abelian nature of $U(x)$ now makes this a nontrivial
task, which we shall address in the next section. Using this transformation in
(2) the action is transformed as $\mathcal{S}_{D}\to\mathcal{S}^{\prime}_{D}$,
$\displaystyle\mathcal{S}^{\prime}_{D}$ $\displaystyle=$ $\displaystyle\int
d^{4}x\,\bar{\psi}(x)\overline{U}(x)\gamma^{\mu}U(x)\left[\partial_{\mu}-iA_{\mu}\right]\psi(x)$
(6) $\displaystyle=$ $\displaystyle\int
d^{4}x\,\bar{\psi}(x)e^{2\Omega(x)}\Lambda^{\mu}_{\nu}(x)\gamma^{\nu}\left[\partial_{\mu}-iA_{\mu}\right]\psi(x).$
(7)
In the second line we have introduced
$\Lambda^{\mu}_{\nu}(x)=[e^{i\gamma_{5}\mathcal{F}_{\alpha\beta}(x)\omega^{\alpha\beta}}]^{\mu}_{\,\nu}$
with $\omega^{\alpha\beta}$ being the generators of Lorentz transformations in
the vector representation. We then perform a coordinate transformation $x\to
y(x)$ such that,
$\displaystyle\frac{\text{d}y^{\mu}(x)}{\text{d}x^{\nu}}=e^{-2\Omega(x)/3}\Lambda^{\mu}_{\nu}(x)$
(8)
This transformation does not have unit determinant due to the $\Omega(x)$
term; however, the coefficient in the exponent is chosen such that the Jacobian
of this transformation is cancelled. Ultimately, we obtain
$\displaystyle\mathcal{S}^{\prime}_{D}=\int
d^{4}y\,\bar{\psi}(y)\gamma^{\mu}\left[\partial_{\mu}-i\tilde{A}_{\mu}\right]\psi(y).$
(9)
which is the action of free Dirac fermions coupled to a rotated and rescaled
gauge field
$\displaystyle\tilde{A}_{\mu}(y)=e^{2\Omega(x)/3}\Lambda^{\nu}_{\mu}(x)A_{\nu}(x).$
(10)
In the absence of the gauge field the fermion and spin system have been
decoupled. Therefore, provided that a solution to (5) exists, our path integral
is transformed to
$\displaystyle
Z=\int\mathcal{D}\psi\mathcal{D}\bar{\psi}\mathcal{D}\mathbf{S}\,e^{i\mathcal{S_{D}^{\prime}}[\psi,\bar{\psi}]+i\mathcal{S}_{\mathcal{A}}[\mathbf{S}]+i\mathcal{S}_{\text{spin}}[\mathbf{S}]}$
(11)
where $\mathcal{S}_{\mathcal{A}}$ comes from the Jacobian of the chiral and
Weyl transformations which depends upon $\mathbf{S}(x)$ and $A_{\mu}(x)$.
We note that the defining equation for the transformation (5) is a Dirac
equation of the type which the untransformed fermion obeys. In the
noninteracting case, $J=0$, $U(x)$ should reduce to the identity and so we can
view it as the operator which locally transforms the field from the Heisenberg
to the interaction picture. We expect on general grounds that this is
generically possible to implement. In contrast to the standard procedure
however we carry out this transformation in the path integral which turns out
to be anomalous.
## IV Iterative Solution
The task now is to solve (5) for $\phi(x),\Omega(x)$ and
$\mathcal{F}_{\mu\nu}(x)$ in terms of $\mathbf{S}$. To do this we introduce
$\mathcal{E}_{i}=\mathcal{F}_{0i}$ and
$\mathcal{B}_{i}=-\frac{1}{2}\epsilon_{ijk}\mathcal{F}^{jk}$ with Latin
indices reserved for spatial components. Inserting this form into (5) and
using standard vector calculus identities we obtain a set of nonlinear
differential equations for our unknown functions,
$\phi,\Omega,\bm{\mathbf{\mathcal{E}}}$ and $\bm{\mathbf{\mathcal{B}}}$ which
resemble the equations for a driven two level system Galitski (2011);
Gangopadhyay _et al._ (2010). We solve these by expanding in powers of $J$
i.e. $\phi(x)=\sum_{n=1}^{\infty}J^{n}\phi^{(n)}$ and proceeding iteratively.
The leading order equations resemble Maxwell’s equations with magnetic source
terms. Therein, $\bm{\mathbf{\mathcal{E}}}^{(1)}$ and
$\bm{\mathbf{\mathcal{B}}}^{(1)}$ play the role of pseudo-electric and pseudo-
magnetic fields and $\mathbf{S},~{}\phi^{(1)},~{}\Omega^{(1)}$ provide the
sources,
$\displaystyle\partial_{t}\bm{\mathbf{\mathcal{E}}}^{(1)}-\bm{\mathbf{\nabla}}\times\bm{\mathbf{\mathcal{B}}}^{(1)}$
$\displaystyle=$ $\displaystyle\mathbf{S}-\bm{\mathbf{\nabla}}\phi^{(1)}$ (12)
$\displaystyle\partial_{t}\bm{\mathbf{\mathcal{B}}}^{(1)}+\bm{\mathbf{\nabla}}\times\bm{\mathbf{\mathcal{E}}}^{(1)}$
$\displaystyle=$ $\displaystyle-\bm{\mathbf{\nabla}}\Omega^{(1)}$ (13)
$\displaystyle\bm{\mathbf{\nabla}}\cdot\bm{\mathbf{\mathcal{B}}}^{(1)}=\partial_{t}\Omega^{(1)},~{}\bm{\mathbf{\nabla}}\cdot\bm{\mathbf{\mathcal{E}}}^{(1)}$
$\displaystyle=$ $\displaystyle\partial_{t}\phi^{(1)}.$ (14)
The solution of these equations is known from classical electromagnetism;
$\phi^{(1)}(x)=\bm{\mathbf{\nabla}}\cdot\left[G*\mathbf{S}(x)\right]$,
$\bm{\mathbf{\mathcal{E}}}^{(1)}(x)=\partial_{t}\left[G*\mathbf{S}(x)\right]$
and
$\bm{\mathbf{\mathcal{B}}}^{(1)}(x)=-\bm{\mathbf{\nabla}}\times\left[G*\mathbf{S}(x)\right]$
in addition to $\Omega^{(1)}(x)=0$. Here $G(x)$ is the Green’s function for
the d’Alembertian,
$[\partial_{t}^{2}-\bm{\mathbf{\nabla}}^{2}]G(x)=\delta^{(4)}(x)$ and $*$
stands for convolution, $G*\mathbf{S}(x)=\int d^{4}zG(x-z)\mathbf{S}(z)$. Note
that since $\Omega^{(1)}(x)$ vanishes, no Weyl transformation is required at
this order and (12)-(14) reduce to Maxwell’s equations without magnetic
monopole terms.
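The stated solution can be checked symbolically. Taking the divergence of (12) and using (14) gives $[\partial_{t}^{2}-\bm{\nabla}^{2}]\phi^{(1)}=\bm{\nabla}\cdot\mathbf{S}$, and for a plane-wave spin texture the convolution $G*\mathbf{S}$ takes a closed form. A minimal sympy sketch (the plane-wave ansatz is an illustrative assumption, not from the text):

```python
# Symbolic check: for a plane-wave spin texture S = S0 exp(i(k.x - w t)),
# the convolution G*S equals S/(|k|^2 - w^2), and phi^(1) = div(G*S)
# then obeys the sourced wave equation box(phi) = div(S), which follows
# from taking the divergence of (12) and using (14).
import sympy as sp

t, x, y, z, w = sp.symbols('t x y z omega', real=True)
k1, k2, k3 = sp.symbols('k1 k2 k3', real=True)
S1, S2, S3 = sp.symbols('S1 S2 S3')  # constant plane-wave amplitudes

phase = sp.exp(sp.I*(k1*x + k2*y + k3*z - w*t))
S = [S1*phase, S2*phase, S3*phase]      # plane-wave spin field
knorm2 = k1**2 + k2**2 + k3**2
GS = [s/(knorm2 - w**2) for s in S]     # G*S in closed form

def box(f):
    # d'Alembertian: dt^2 - laplacian
    return sp.diff(f, t, 2) - sp.diff(f, x, 2) - sp.diff(f, y, 2) - sp.diff(f, z, 2)

# [dt^2 - lap](G*S) = S, componentwise
checks = [sp.simplify(box(gs) - s) for gs, s in zip(GS, S)]
print(checks)  # [0, 0, 0]

# phi^(1) = div(G*S) satisfies box(phi) = div(S)
phi = sp.diff(GS[0], x) + sp.diff(GS[1], y) + sp.diff(GS[2], z)
divS = sp.diff(S[0], x) + sp.diff(S[1], y) + sp.diff(S[2], z)
print(sp.simplify(box(phi) - divS))  # 0
```

A general texture is a superposition of such plane waves, so the check extends by linearity.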
We may express this linearized solution in a more elegant form. To do this we
recall that $G(x)$ can be related to the Green’s function, $\mathcal{G}(x)$,
for the massless Dirac equation through
$\mathcal{G}(x)\equiv\not{\partial}G(x)$. Using this we have that to linear
order
$\displaystyle U(x)=e^{iJ\gamma_{5}\mathcal{G}*\not{S}(x)}.$ (15)
The higher order corrections to this, $\bm{\mathbf{\mathcal{E}}}^{(n)}$ and
$\bm{\mathbf{\mathcal{B}}}^{(n)}$ are also solutions to Maxwell’s equations
but with sources which are determined by the lower order terms. For example at
second order
$\displaystyle\partial_{t}\bm{\mathbf{\mathcal{E}}}^{(2)}-\bm{\mathbf{\nabla}}\times\bm{\mathbf{\mathcal{B}}}^{(2)}$
$\displaystyle=$
$\displaystyle\text{Re}[\mathbf{S}^{(1)}]-\bm{\mathbf{\nabla}}\phi^{(2)},$
(16)
$\displaystyle\partial_{t}\bm{\mathbf{\mathcal{B}}}^{(2)}+\bm{\mathbf{\nabla}}\times\bm{\mathbf{\mathcal{E}}}^{(2)}$
$\displaystyle=$
$\displaystyle\text{Im}[\mathbf{S}^{(1)}]-\bm{\mathbf{\nabla}}\Omega^{(2)},$
(17) $\displaystyle\bm{\mathbf{\nabla}}\cdot\bm{\mathbf{\mathcal{B}}}^{(2)}$
$\displaystyle=$
$\displaystyle\partial_{t}\Omega^{(2)}-\text{Im}[S^{(1)}_{0}],$ (18)
$\displaystyle\bm{\mathbf{\nabla}}\cdot\bm{\mathbf{\mathcal{E}}}^{(2)}$
$\displaystyle=$ $\displaystyle\partial_{t}\phi^{(2)}-\text{Re}[S^{(1)}_{0}]$
(19)
where we have introduced
$\bm{\mathbf{S}}^{(1)}=\bm{\mathbf{X}}^{(1)}\times\left[\partial_{t}+i\bm{\mathbf{\nabla}}\times\right]\bm{\mathbf{X}}^{(1)}$
and
$S_{0}^{(1)}=\bm{\mathbf{X}}^{(1)}\cdot\bm{\mathbf{\nabla}}\times\bm{\mathbf{X}}^{(1)}$
with
$\bm{\mathbf{X}}^{(1)}=\bm{\mathbf{\mathcal{E}}}^{(1)}+i\bm{\mathbf{\mathcal{B}}}^{(1)}$.
The solution to these can be found from a straightforward generalization of
the linear order solution, i.e. derivative operators acting on terms like
$G*S_{\mu}^{(1)}$. Combining this with (15) we have that up to second order
$U(x)=e^{J\mathcal{G}*\left(J\text{Im}[\not{S}^{(1)}]-i\gamma_{5}(\not{S}+J\text{Re}[\not{S}^{(1)}])\right)}$.
All higher orders proceed along similar lines and we can write the full
solution as
$\displaystyle
U(x)=e^{J\mathcal{G}*(\text{Im}[\not{\mathbb{S}}(x)]-i\gamma_{5}\text{Re}[\not{\mathbb{S}}(x)])}$
(20)
where $\mathbb{S}_{\mu}=\sum_{n=0}^{\infty}J^{n}S_{\mu}^{(n)}$ and
$S_{\mu}^{(0)}=S_{\mu}$. Matching this to (4) then gives
$\phi(x)=\frac{J}{4}\text{tr}(\mathcal{G}*\text{Re}[\not{\mathbb{S}}])$,
$\Omega(x)=\frac{J}{4}\text{tr}(\mathcal{G}*\text{Im}[\not{\mathbb{S}}])$ and
$\mathcal{F}^{\mu\nu}(x)=-\frac{J}{8}\text{tr}[\sigma^{\mu\nu}\mathcal{G}*(\gamma_{5}\text{Re}[\not{\mathbb{S}}]-i\text{Im}[\not{\mathbb{S}}])]$.
The corrections to $\mathbb{S}_{\mu}$ naturally become more complicated at
higher orders. Notably, they contain an increasing number of derivatives each
time, i.e. $S^{(n)}$ contains at least $n$ derivatives acting on $\mathbf{S}$.
Accordingly, if for some $n$, $S^{(n)}$ is constant then no further terms are
generated. For instance, if $\mathbf{S}$ is constant then only the first order
is required. We can view this as a gradient expansion which can be truncated
if one is interested in the long wavelength physics of the system.
## V Anomalous Action
We turn now to calculating the anomalous contribution to the action. Following
Fujikawa’s method, we switch to Euclidean space and suppose that we have
partially performed our transformation so that
$\mathcal{S}_{D}\to\mathcal{S}_{D}(\tau)=\int
d^{4}y\,\bar{\psi}(y)\not{D}(\tau)\psi(y)$ with $\tau\in[0,1]$ and
$\not{D}(\tau)=\gamma^{\mu}\left[\partial_{\mu}-i\tilde{A}_{\mu}(y;\tau)-iJ(1-\tau)\tilde{S}_{\mu}(y;\tau)\right].$
(21)
Here we have introduced the partially rotated and rescaled field
$\tilde{A}_{\mu}(y;\tau)$, (c.f. (10)) which coincides with the original gauge
field at $\tau=0$, $\tilde{A}_{\mu}(y;0)=A_{\mu}(x)$ and the final one at
$\tau=1$, $\tilde{A}_{\mu}(y;1)=\tilde{A}_{\mu}(y)$. A similar definition is
true for $\tilde{S}_{\mu}(y;\tau)$. This partially rotated action coincides
with the initial action, $\mathcal{S}_{D}$ and final action
$\mathcal{S}^{\prime}_{D}$ also at $\tau=0,1$ respectively. The anomalous
contribution is found by considering an infinitesimal rotation such that
$\mathcal{S}_{D}(\tau)\to\mathcal{S}_{D}(\tau+d\tau)$, calculating the
Jacobian due to the transformation on the fields and then integrating this
from $\tau=0$ to $\tau=1$. The result is Fujikawa and Suzuki (2004)
$\mathcal{S}_{\mathcal{A}}=2i\int_{0}^{1}d\tau\int d^{4}x\,\Big\{\Omega(x;\tau)\text{Tr}[\mathbb{1}]+i\phi(x;\tau)\text{Tr}[\gamma_{5}]\Big\}$
(22)
which is the sum of standard Weyl and chiral anomaly terms. Here the Tr[ ]
denotes a trace over the Hilbert space as well as over spinor indices. The
Hilbert space sum is naively divergent but can be regularized in the standard
heat-kernel way,
$\text{Tr}[\mathcal{O}]=\lim_{M\to\infty}\int\frac{d^{4}k}{(2\pi)^{4}}e^{-ik_{\mu}x^{\mu}}\text{tr}\left[\mathcal{O}e^{-\not{D}^{2}(\tau)/M^{2}}\right]e^{ik_{\mu}x^{\mu}}$
(23)
with $\mathcal{O}=\mathbb{1},\gamma_{5}$ and tr[ ] denoting a trace over
spinor indices only. We note that in contrast to normal circumstances the
generators of the chiral and Weyl transformations $\phi(x;\tau)$,
$\Omega(x;\tau)$ themselves depend upon $\tau$.
To calculate (23) it is sufficient to expand the exponential up to at most
fourth order, as all other terms will be suppressed in the $M\to\infty$ limit.
After a straightforward but tedious calculation we find
$\displaystyle\text{Tr}[\gamma_{5}]=iJ(1-\tau)\left[\frac{M^{2}}{4\pi^{2}}+\frac{[J(1-\tau)]^{2}\tilde{S}^{2}}{2\pi^{2}}-\frac{\partial_{\mu}\partial^{\mu}}{24\pi^{2}}\right]\partial_{\alpha}\tilde{S}^{\alpha}$
$\displaystyle+\frac{\epsilon^{\mu\nu\rho\sigma}}{8\pi^{2}}\left[\frac{[2J(1-\tau)]^{2}}{3}\partial_{\mu}\tilde{S}_{\nu}\partial_{\rho}\tilde{S}_{\sigma}+e^{2}\tilde{F}_{\mu\nu}\tilde{F}_{\rho\sigma}\right]$
(24)
where
$\tilde{F}_{\mu\nu}=\partial_{\mu}\tilde{A}_{\nu}(y;\tau)-\partial_{\nu}\tilde{A}_{\mu}(y;\tau)$
and $\tilde{S}^{2}=\tilde{S}_{\mu}(y;\tau)\tilde{S}^{\mu}(y;\tau)$. The last
term above is the standard chiral anomaly term. A similar term also appears in
the second line but is constructed purely from the spins. Amongst the
remaining terms we note the divergent term
$iJ(1-\tau)\frac{M^{2}}{4\pi^{2}}\partial_{\alpha}\tilde{S}^{\alpha}$ which we
shall discuss further below. For the Weyl contribution we have
$\displaystyle\text{Tr}[\mathbb{1}]$ $\displaystyle=$
$\displaystyle\frac{M^{4}}{4\pi^{2}}-\frac{J^{2}(1-\tau)^{2}}{24\pi^{2}}\bigg[12M^{2}\tilde{S}^{2}+2\partial_{\mu}\tilde{S}_{\nu}\partial^{\mu}\tilde{S}^{\nu}-9\tilde{S}^{4}$
(25)
$\displaystyle+4\tilde{S}_{\mu}\partial_{\nu}\partial^{\nu}\tilde{S}^{\mu}-\left(\partial_{\mu}\tilde{S}^{\mu}\right)^{2}\bigg]+\frac{e^{2}}{24\pi^{2}}\tilde{F}_{\mu\nu}\tilde{F}^{\mu\nu}$
Again we see the presence of the usual Weyl anomaly contribution in the first
and last terms. The divergent term is typically discarded when considering the
Weyl anomaly as it does not depend upon $\tilde{S}$ or $\tilde{A}$ but when
calculating (22) it should be retained.
## VI Effective spin action
Combining (22) with (24) and (25) we have the exact anomalous action. To fully
determine this requires us to perform the rather daunting $\tau$ integral in
(22), which we do not attempt here. To get some understanding of the form this
takes, however, we consider the case where the spin field takes the form
$\displaystyle\mathbf{S}(x)=\bar{\mathbf{S}}+\mathbf{\delta S}(x)$ (26)
where $\bar{\mathbf{S}}$ is constant and $\mathbf{\delta S}$ describes the
fluctuations about this, and then proceed by computing the anomalous action
using only the linearized solution (15),
$\displaystyle
U(x)=e^{i\gamma_{5}J\left[\mathbf{x}\cdot\bar{\mathbf{S}}+\mathcal{G}*\delta\not{S}\right]}.$
(27)
The first term in the exponent is the standard chiral transformation used for
Weyl semimetals which was discussed earlier and the second arises due to the
fluctuations. We now make the assumption that (27) provides a reasonable
approximation to the transformation for the purpose of computing the low
energy effective anomalous action. Using (24) we then find
$\displaystyle\mathcal{S}_{\mathcal{A}}=-\int d^{4}x\Bigg\{\frac{e^{2}J}{4\pi^{2}}\left[\bar{S}_{\mu}+\frac{1}{4}\partial_{\mu}\text{tr}\left(\mathcal{G}*\delta\not{S}\right)\right]\epsilon^{\mu\nu\rho\sigma}A_{\nu}\partial_{\rho}A_{\sigma}$
$\displaystyle+\frac{J^{3}}{18\pi^{2}}\left[\bar{S}_{\mu}+\frac{1}{4}\partial_{\mu}\text{tr}\left(\mathcal{G}*\delta\not{S}\right)\right]\epsilon^{\mu\nu\rho\sigma}\delta S_{\nu}\partial_{\rho}\delta S_{\sigma}$
$\displaystyle+\frac{J^{2}}{12\pi^{2}}\left[\mathbf{\nabla}\cdot\mathbf{S}(x)\right]^{2}+\int d^{4}y\,S_{i}(x)V^{ij}(x-y)S_{j}(y)\Bigg\}$ (28)
where $V_{ij}(z)=\mathcal{J}\partial_{i}\partial_{j}G(z)$. Adding this to
$\mathcal{S}_{\text{spin}}$ we arrive at the approximate effective action for
the spin system. The first term here is the typical chiral anomaly term, now
modified to include the effect of the fluctuations; it represents a fermion-
mediated coupling of the spins to the gauge field. The second has the same
form as the first, arising from a standard chiral anomaly term but built using
spins. Such a three-spin term suggests a connection with the Wess-Zumino term
occurring in the low energy action of fermions coupled to local moments
Altland and Simons (2010); Goswami and Si (2011); Tsvelik (1994); Goswami and
Si (2014). The third is a kinetic term for the spins generated from the
coupling to the itinerant fermions. Lastly, we have a long-range RKKY
interaction between the spins. The coupling constant depends explicitly on the
cutoff introduced earlier, $\mathcal{J}=\frac{J^{2}M^{2}}{2\pi^{2}}$. (In
(24) a term
$\sim\tilde{S}_{\mu}\tilde{S}^{\mu}\partial_{\alpha}\tilde{S}^{\alpha}$ is
present. Since we are dealing with a spin system, however,
$\mathbf{S}\cdot\mathbf{S}$ is a scalar of order one; this term contributes to
$\mathcal{J}$ but is negligible in comparison to $J^{2}M^{2}/2\pi^{2}$.)
The appearance of this divergence is natural in models such as ours and is
akin to the well-known divergence of the vacuum polarization in QED, which is
governed by the same set of diagrams. In a condensed matter context,
deviations from a linear dispersion will cure this divergence giving a finite
but non-universal coupling constant. From this we can determine the leading
order renormalization group (RG) flow of this RKKY coupling
$\frac{\text{d}\mathcal{J}}{\text{d}l}=2\mathcal{J}$ with $l=\log{M}$
indicating it is relevant in an RG sense.
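The quoted flow is immediate from the explicit cutoff dependence of $\mathcal{J}$:

```latex
\mathcal{J}(M)=\frac{J^{2}M^{2}}{2\pi^{2}},\qquad l=\log M
\;\Longrightarrow\;
\frac{\text{d}\mathcal{J}}{\text{d}l}=M\frac{\text{d}\mathcal{J}}{\text{d}M}
  =2\,\frac{J^{2}M^{2}}{2\pi^{2}}=2\mathcal{J}.
```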
If we were to include terms beyond the linear approximation in our
transformation then this would result in four-spin terms as well as terms
involving higher derivatives, which are typically dropped when computing
an effective action. For these reasons we content ourselves with the
linearized approximation but note that the presence of the Weyl transformation
at higher orders provides a means to determine the RG flow of the terms
present in (28).
## VII Transport
We turn our attention now to calculating the anomalous transport in the
system, which we will be able to do without resorting to the approximations of
the previous section. In principle this requires evaluating the integral (22);
fortunately, however, this turns out not to be necessary. To see this we note
that the anomalous current is found by varying $\mathcal{S}_{\mathcal{A}}$
with respect to $A_{\mu}(x)=\tilde{A}_{\mu}(x,\tau=0)$. Thus
$\displaystyle\left<j^{\mu}(x)\right>$ $\displaystyle=$
$\displaystyle\frac{\partial\mathcal{S}_{\mathcal{A}}}{\partial\tilde{A}_{\mu}(x,0)}=-2\phi^{(1)}(x)\frac{\partial\text{Tr}[\gamma_{5}]|_{\tau=0}}{\partial
A_{\mu}(x,0)}$
where the second equality follows from the fact that the variation is carried
out at $\tau=0$ along with $\phi(x,0)=\phi^{(1)}(x)$ and
$\Omega(x,0)=\Omega^{(1)}(x)=0$. From this we find the density response to be
$\rho(x)=\frac{e^{2}}{2\pi^{2}}\bm{\mathbf{\nabla}}\phi^{(1)}(x)\cdot\mathbf{B}$
or in Fourier space,
$\displaystyle\rho(\mathbf{q},\nu)=\frac{e^{2}J}{2\pi^{2}}\int_{\mathbf{k}\omega}\frac{k_{i}k_{j}}{|k|^{2}-\omega^{2}}\left<S^{j}(\mathbf{k},\omega)\right>_{S}B^{i}(\mathbf{q}-\mathbf{k},\nu-\omega)$
(29)
where we have used the shorthand $\int_{\mathbf{k}\omega}=\int
d^{3}k\,d\omega/(2\pi)^{4}$ and $B^{i}(\mathbf{k},\omega)$ is the applied
magnetic field in Fourier space. The expectation value on the right is taken
with respect to the effective spin action (28) or alternatively could
represent some imposed, mean-field spin configuration. This generalizes the
result for a Weyl semimetal to the case of non-constant $\mathbf{S}(x)$. It
describes the response of the system to a density perturbation in the presence
of an arbitrary magnetic field.
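As a sketch of how the kernel in (29) arises from the linearized solution (the overall sign depends on the Fourier-transform convention, which we do not fix here):

```latex
% Since [\partial_t^2-\bm{\nabla}^2]G=\delta^{(4)}, in Fourier space
% \hat{G}(\mathbf{k},\omega)=(|k|^{2}-\omega^{2})^{-1}, so that
\widehat{\phi^{(1)}}(\mathbf{k},\omega)
  =\widehat{\bm{\nabla}\cdot\left[G*\mathbf{S}\right]}
  =\frac{ik_{j}\,\hat{S}^{j}(\mathbf{k},\omega)}{|k|^{2}-\omega^{2}},
\qquad
\widehat{\partial_{i}\phi^{(1)}}
  =\frac{(ik_{i})(ik_{j})\,\hat{S}^{j}}{|k|^{2}-\omega^{2}}.
% Inserting this into \rho=\frac{e^{2}}{2\pi^{2}}\bm{\nabla}\phi^{(1)}\cdot\mathbf{B},
% with \hat{S}^{j} replaced by its expectation value over the spin action, the
% convolution theorem for the product \bm{\nabla}\phi^{(1)}\cdot\mathbf{B}
% reproduces the kernel k_{i}k_{j}/(|k|^{2}-\omega^{2}) of (29).
```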
Similarly the current is
$\displaystyle\mathbf{j}(x)=\frac{e^{2}}{2\pi^{2}}\bm{\mathbf{\nabla}}\phi^{(1)}\\!\times\mathbf{E}-\frac{e^{2}}{2\pi^{2}}\partial_{t}\phi^{(1)}\mathbf{B}$
(30)
or in Fourier space,
$\displaystyle
j^{l}(\mathbf{q},\nu)=\frac{e^{2}J}{2\pi^{2}}\int_{\mathbf{k}\omega}\frac{k_{i}\left<S^{i}(\mathbf{k},\omega)\right>_{S}}{|k|^{2}-\omega^{2}}\left[\epsilon^{ljs}k_{j}E_{s}(\mathbf{q}-\mathbf{k},\nu-\omega)\right.$
$\displaystyle\left.+\omega
B^{l}(\mathbf{q}-\mathbf{k},\nu-\omega)\right]$
(31)
In the first line we can recognize the generalization of the standard Hall
current expression to the case of non-constant $\mathbf{S}(x)$. In addition to
this we note the presence of the magnetic field term, which gives rise to a
chiral magnetic effect. This is in contrast to the simple Weyl case discussed
above, wherein the CME requires that $S_{0}\neq 0$, which can occur in the
absence of inversion symmetry. That in turn results in a time dependent chiral
rotation $\sim e^{iJ\gamma_{5}S_{0}t}$ and a corresponding term in the
anomalous action. In the current circumstances, although $S_{0}=0$ and the
symmetry is not broken, a CME is still generated via the time dependent nature
of the transformation.
## VIII Discussion & Conclusion
In this paper we have presented an alternative approach to interacting
semimetals based on the technique of functional bosonization from $1+1$
dimensions, generalized to $3+1$ dimensions. We have focused here on the case
of a Kondo-semimetal wherein the semimetal is coupled to an array of spins,
although our method can be applied to strained semimetals also. Our method
relies on the existence of a non-Abelian transformation of the fermions which
decouples them from the spin system. This transformation is anomalous, due to
the presence of the chiral and Weyl anomalies, and by calculating its non-
trivial Jacobian the low energy effective action for the spin system can be
determined in addition to the anomalous transport.
This approach can also be used for the evaluation of correlation functions.
For instance, the fermionic Green's function is given by
$\displaystyle i\left<\psi_{\alpha}(x)\bar{\psi}_{\beta}(0)\right>$
$\displaystyle=$
$\displaystyle\left<U_{\alpha}^{\alpha^{\prime}}(x)\bar{U}_{\beta}^{\beta^{\prime}}(0)\right>_{S}\mathcal{G}_{\alpha^{\prime}\beta^{\prime}}(x)$
where once again $\left<\right>_{S}$ denotes the expectation value with
respect to the spin system only and we have restored spinor indices. The
factorization of the correlation functions into a free fermionic part,
$\mathcal{G}(x)$, and a bosonic part is a hallmark of the bosonization method
and in (1+1)-d provides a simple route to finding non-Fermi liquid behaviour
Naón (1985); Lee and Chen (1988); Giamarchi (2003); Gogolin _et al._ (2004).
Upon adopting the linear approximation $U(x)\approx
e^{i\gamma_{5}J\mathcal{G}*\not{S}(x)}$ this expression simplifies, and the
exponential form of the spin factor can facilitate evaluation of the
expectation value and, potentially, of non-Fermi-liquid correlations.
###### Acknowledgements.
This work was supported by the U.S. Department of Energy, Office of Science,
Basic Energy Sciences under Award No. DE-SC0001911 and the Simons Foundation.
## References
* Hewson (1993) A. C. Hewson, _The Kondo Problem to Heavy Fermions_, Cambridge Studies in Magnetism (Cambridge University Press, 1993).
* Coleman (2015) P. Coleman, _Introduction to Many-Body Physics_ (Cambridge University Press, 2015).
* Dzero _et al._ (2010) M. Dzero, K. Sun, V. Galitski, and P. Coleman, Phys. Rev. Lett. 104, 106408 (2010).
* Dzero _et al._ (2016) M. Dzero, J. Xia, V. Galitski, and P. Coleman, Annual Review of Condensed Matter Physics 7, 249–280 (2016).
* Dzsaber _et al._ (2017) S. Dzsaber, L. Prochaska, A. Sidorenko, G. Eguchi, R. Svagera, M. Waas, A. Prokofiev, Q. Si, and S. Paschen, Phys. Rev. Lett. 118, 246601 (2017).
* Lai _et al._ (2017) H.-H. Lai, S. E. Grefe, S. Paschen, and Q. Si, Proceedings of the National Academy of Sciences 115, 93–97 (2017).
* Dzsaber _et al._ (2021) S. Dzsaber, X. Yan, M. Taupin, G. Eguchi, A. Prokofiev, T. Shiroka, P. Blaha, O. Rubel, S. E. Grefe, H.-H. Lai, Q. Si, and S. Paschen, Proceedings of the National Academy of Sciences 118 (2021), 10.1073/pnas.2013386118.
* Chang and Coleman (2018) P.-Y. Chang and P. Coleman, Phys. Rev. B 97, 155134 (2018).
* Adler (1969) S. L. Adler, Phys. Rev. 177, 2426 (1969).
* Bell and Jackiw (1969) J. S. Bell and R. Jackiw, Nuovo Cimento A Serie 60, 47 (1969).
* Fujikawa and Suzuki (2004) K. Fujikawa and H. Suzuki, _Path Integrals and Quantum Anomalies_, Vol. 122 (Oxford University Press, 2004).
* Nielsen and Ninomiya (1983) H. B. Nielsen and M. Ninomiya, Physics Letters B 130, 389 (1983).
* Wan _et al._ (2011) X. Wan, A. M. Turner, A. Vishwanath, and S. Y. Savrasov, Phys. Rev. B 83, 205101 (2011).
* Burkov and Balents (2011) A. A. Burkov and L. Balents, Phys. Rev. Lett. 107, 127205 (2011).
* Yang _et al._ (2011) K.-Y. Yang, Y.-M. Lu, and Y. Ran, Phys. Rev. B 84, 075129 (2011).
* Xu _et al._ (2011) G. Xu, H. Weng, Z. Wang, X. Dai, and Z. Fang, Phys. Rev. Lett. 107, 186806 (2011).
* Halász and Balents (2012) G. B. Halász and L. Balents, Phys. Rev. B 85, 035103 (2012).
* Aji (2012) V. Aji, Phys. Rev. B 85, 241101 (2012).
* Weng _et al._ (2015) H. Weng, C. Fang, Z. Fang, B. A. Bernevig, and X. Dai, Phys. Rev. X 5, 011029 (2015).
* Lv _et al._ (2015) B. Q. Lv, N. Xu, H. M. Weng, J. Z. Ma, P. Richard, X. C. Huang, L. X. Zhao, G. F. Chen, C. E. Matt, F. Bisti, V. N. Strocov, J. Mesot, Z. Fang, X. Dai, T. Qian, M. Shi, and H. Ding, Nature Physics 11, 724 (2015).
* Lv _et al._ (2015) B. Q. Lv, H. M. Weng, B. B. Fu, X. P. Wang, H. Miao, J. Ma, P. Richard, X. C. Huang, L. X. Zhao, G. F. Chen, Z. Fang, X. Dai, T. Qian, and H. Ding, Phys. Rev. X 5, 031013 (2015).
* Xu _et al._ (2015) S.-Y. Xu, I. Belopolski, N. Alidoust, M. Neupane, G. Bian, C. Zhang, R. Sankar, G. Chang, Z. Yuan, C.-C. Lee, and et al., Science 349, 613 617 (2015).
* Huang _et al._ (2015) S.-M. Huang, S.-Y. Xu, I. Belopolski, C.-C. Lee, G. Chang, B. Wang, N. Alidoust, G. Bian, M. Neupane, C. Zhang, S. Jia, A. Bansil, H. Lin, and M. Z. Hasan, Nature Communications 6, 7373 (2015).
* Son and Spivak (2013) D. T. Son and B. Z. Spivak, Phys. Rev. B 88, 104412 (2013).
* Goswami and Tewari (2013) P. Goswami and S. Tewari, Phys. Rev. B 88, 245107 (2013).
* Raines and Galitski (2017) Z. M. Raines and V. M. Galitski, Phys. Rev. B 96, 161115 (2017).
* Rylands _et al._ (2021) C. Rylands, A. Parhizkar, A. A. Burkov, and V. Galitski, Phys. Rev. Lett. 126, 185303 (2021).
* Affleck (1995) I. Affleck, Acta Phys. Polon. B 26, 1869 (1995).
* Fradkin _et al._ (1990) E. Fradkin, C. von Reichenbach, and F. A. Schaposnik, Nuclear Physics B 340, 692 (1990).
* Andrei _et al._ (1983) N. Andrei, K. Furuya, and J. H. Lowenstein, Rev. Mod. Phys. 55, 331 (1983).
* Tsvelick and Wiegmann (1983) A. M. Tsvelick and P. B. Wiegmann, Advances in Physics 32, 453 (1983).
* Rylands and Andrei (2017) C. Rylands and N. Andrei, Phys. Rev. B 96, 115424 (2017).
* Rylands and Andrei (2016) C. Rylands and N. Andrei, Phys. Rev. B 94, 115142 (2016).
* Rylands and Andrei (2018) C. Rylands and N. Andrei, Phys. Rev. B 97, 155426 (2018).
* Pasnoori _et al._ (2020) P. R. Pasnoori, C. Rylands, and N. Andrei, Phys. Rev. Research 2, 013006 (2020).
* Pasnoori _et al._ (2021) P. R. Pasnoori, N. Andrei, C. Rylands, and P. Azaria, arXiv e-prints , arXiv:2111.05909 (2021), arXiv:2111.05909 [cond-mat.str-el] .
* Giamarchi (2003) T. Giamarchi, _Quantum Physics in One Dimension_ , International Series of Monographs on Physics (Clarendon Press, 2003).
* Gogolin _et al._ (2004) A. Gogolin, A. Nersesyan, and A. Tsvelik, _Bosonization and Strongly Correlated Systems_ (Cambridge University Press, 2004).
* Fradkin _et al._ (1989) E. Fradkin, C. von Reichenbach, and F. A. Schaposnik, Nuclear Physics B 316, 710 (1989).
* Tsvelik (1994) A. M. Tsvelik, Phys. Rev. Lett. 72, 1048 (1994).
* Goswami and Si (2011) P. Goswami and Q. Si, Phys. Rev. Lett. 107, 126404 (2011).
* Lobos _et al._ (2015) A. M. Lobos, A. O. Dobry, and V. Galitski, Phys. Rev. X 5, 021017 (2015).
* Read and Newns (1983) N. Read and D. M. Newns, Journal of Physics C: Solid State Physics 16, 3273 (1983).
* Coleman (1984) P. Coleman, Phys. Rev. B 29, 3035 (1984).
* Gamboa Saraví _et al._ (1981) R. E. Gamboa Saraví, F. A. Schaposnik, and J. E. Solomin, Nuclear Physics B 185, 239 (1981).
* Furuya _et al._ (1982) K. Furuya, R. E. G. Saraví, and F. A. Schaposnik, Nuclear Physics B 208, 159 (1982).
* Naón (1985) C. M. Naón, Phys. Rev. D 31, 2035 (1985).
* Lee and Chen (1988) D. Lee and Y. Chen, J. Phys. A 21, 4155 (1988).
* Mitchell and Fritz (2015) A. K. Mitchell and L. Fritz, Phys. Rev. B 92, 121109 (2015).
* Chang _et al._ (2015) H.-R. Chang, J. Zhou, S.-X. Wang, W.-Y. Shan, and D. Xiao, Phys. Rev. B 92, 241103 (2015).
* Arjona and Vozmediano (2018) V. Arjona and M. A. H. Vozmediano, Phys. Rev. B 97, 201404 (2018).
* Chernodub and Vozmediano (2019) M. N. Chernodub and M. A. H. Vozmediano, Phys. Rev. Research 1, 032040 (2019).
* Zyuzin and Burkov (2012) A. A. Zyuzin and A. A. Burkov, Phys. Rev. B 86, 115133 (2012).
* Fujikawa (1979) K. Fujikawa, Phys. Rev. Lett. 42, 1195 (1979).
* Fujikawa (1980) K. Fujikawa, Phys. Rev. D 22, 1499 (1980).
* Fukushima _et al._ (2008) K. Fukushima, D. E. Kharzeev, and H. J. Warringa, Phys. Rev. D 78, 074033 (2008).
* Chen _et al._ (2013) Y. Chen, S. Wu, and A. A. Burkov, Phys. Rev. B 88, 125105 (2013).
* Galitski (2011) V. Galitski, Phys. Rev. A 84, 012118 (2011).
* Gangopadhyay _et al._ (2010) A. Gangopadhyay, M. Dzero, and V. Galitski, Phys. Rev. B 82, 024303 (2010).
* Altland and Simons (2010) A. Altland and B. D. Simons, _Condensed Matter Field Theory_, 2nd ed. (Cambridge University Press, 2010).
* Goswami and Si (2014) P. Goswami and Q. Si, Phys. Rev. B 89, 045124 (2014).
* Note (1) In (24) a term $\sim\mathaccentV{tilde}07E{S}_{\mu}\mathaccentV{tilde}07E{S}^{\mu}\partial_{\alpha}\mathaccentV{tilde}07E{S}^{\alpha}$ is present. Since we are dealing with a spin system however $\mathbf{S}\cdot\mathbf{S}$ is a scalar of order one. This term contributes to $\mathcal{J}$ but it is negligible in comparison to $J^{2}M^{2}/2\pi^{2}$.
1 Max Planck Institute for Physics, Föhringer Ring 6, 80805 München, Germany
2 INFN, Sezione di Pavia, Via Bassi 6, 27100 Pavia, Italy
3 Technische Universität München, Physik-Department, 85748 Garching, Germany
# Searching for pseudo Nambu-Goldstone boson dark matter production in
association with top quarks
Ulrich Haisch1, Giacomo Polesello2, and Stefan Schulte1,3
[email protected] [email protected] [email protected]
###### Abstract
Pseudo Nambu-Goldstone bosons (pNGBs) are attractive dark matter (DM)
candidates, since they couple to the Standard Model (SM) predominantly through
derivative interactions. Thereby they naturally evade the strong existing
limits inferred from DM direct detection experiments. Working in an effective
field theory that includes both derivative and non-derivative DM-SM operators,
we perform a detailed phenomenological study of the Large Hadron Collider
reach for pNGB DM production in association with top quarks. Drawing on
motivated benchmark scenarios as examples, we compare our results to other
collider limits as well as the constraints imposed by DM (in)direct detection
experiments and the relic abundance. We furthermore explore implications on
the viable parameter space of pNGB DM. In particular, we demonstrate that DM
direct detection experiments become sensitive to many pNGB DM realisations
once loop-induced interactions are taken into account. The search strategies
and pNGB DM benchmark models that we discuss can serve as a starting point for
dedicated experimental analyses by the ATLAS and the CMS collaborations.
Preprint: MPP-2021-115
## 1 Introduction
Weakly interacting massive particles (WIMPs) have been the prime dark matter
(DM) candidate for more than three decades because they can give rise to the
correct abundance of DM today via thermal freeze-out production. However, the
null results from DM direct and indirect detection experiments (see for
instance Klasen _et al._ (2015); Schumann (2019)) along with the failure to
observe anomalous missing transverse energy ($E_{T}^{\mathrm{miss}}$)
production at the Large Hadron Collider (LHC) (see Aaboud _et al._ (2019a)
for an experimental status report) have by now ruled out large portions of the
parameter space of the simplest WIMP hypotheses such as the neutralino in
supersymmetric theories.
Compelling examples of still viable WIMP models are provided by scenarios in
which DM consists of composite pseudo Nambu-Goldstone bosons (pNGBs). Models
of this type can address simultaneously the electroweak (EW) hierarchy problem
of the Standard Model (SM) and the DM puzzle Frigerio _et al._ (2012), and as
a result have received notable attention in recent years Barger _et al._
(2009, 2010); Chala (2013); Marzocca and Urbano (2014); Barnard _et al._
(2015); Fonseca _et al._ (2015); Brivio _et al._ (2016); Kim _et al._
(2016); Chala _et al._ (2016); Barducci _et al._ (2017); Wu _et al._
(2017); Balkin _et al._ (2017, 2018a); Gross _et al._ (2017); Alanne _et
al._ (2018); Balkin _et al._ (2018); Ishiwata and Toma (2018); Huitu _et
al._ (2019); Karamitros (2019); Davoli _et al._ (2019); Ruhdorfer _et al._
(2020); Ramos (2020); Arina _et al._ (2020); Abe _et al._ (2020); Okada _et
al._ (2021a); Xing _et al._ (2021); Okada _et al._ (2021b); Coito _et al._
(2021). In models in which both the SM Higgs boson and DM emerge from a TeV-
scale strongly-coupled sector as pNGBs, one key feature is that the leading
coupling between the SM and DM is provided by higher-dimensional, derivative
interactions with the Higgs field. The derivative Higgs portal mediates
$s$-wave annihilation to SM particles, but leads to a strong suppression of
the DM scattering rate on ordinary matter. Thermal freeze-out can therefore
yield the observed relic density for a DM mass of the order of $100\,{\rm
GeV}$, while the current severe limits of DM direct detection experiments are
naturally evaded. Probes of composite pNGB DM include indirect detection
searches and collider experiments. The collider reach on the derivative Higgs
portal has been recently analysed in vector-boson-fusion (VBF) Higgs
production Ruhdorfer _et al._ (2020), finding a limited sensitivity at the
LHC. This motivates studies of the indirect constraints on the derivative
Higgs portal that arise from off-shell single-Higgs and on-shell double-Higgs
production at hadron colliders Haisch _et al._ (2020, ).
Besides the derivative Higgs portal, composite pNGB DM models necessarily
contain additional interactions to provide a potential and Yukawa couplings
for the Higgs boson and a mass for the DM candidate. A theoretically motivated
situation is one in which DM couples most strongly to the third generation of
SM fermions. At the level of dimension-six operators, such interactions can
either be of Yukawa type or involve the product of a DM and a SM current.
Detailed studies of the DM phenomenology of composite pNGB models where the
Goldstone shift symmetry of DM is broken by the top or the bottom Yukawa
coupling can be found in Balkin _et al._ (2017, 2018). These analyses show
that scenarios in which the shift symmetry is broken in the bottom sector are
significantly less constrained by DM direct detection than those in which the
top sector provides the leading symmetry breaking. In composite pNGB models
with sizeable DM-SM Yukawa couplings and a successful DM phenomenology, the
leading $E_{T}^{\rm miss}$ signature is therefore expected to be DM production
in association with bottom quarks. Unfortunately, this process is only poorly
constrained at the LHC Sirunyan _et al._ (2017a); Aaboud _et al._
(2018a); Aad _et al._ (2021a). If, on the other hand, effective current-
current interactions provide a relevant portal between the dark and the
visible sector, large DM-top couplings are compatible with both the bounds
from DM (in)direct detection and the observed relic abundance if DM is
sufficiently heavy Ruhdorfer _et al._ (2020). As a result, such composite
pNGB DM models can be tested at the LHC by searching for DM production in
association with top-quark pairs
$\big{(}t\bar{t}+E_{T}^{\mathrm{miss}}\big{)}$ or a top quark and a $W$ boson
$\big{(}tW+E_{T}^{\mathrm{miss}}\big{)}$. These mono-$X$ channels, from now on
referred to as $tX+E_{T}^{\mathrm{miss}}$, have received a lot of attention
from the DM collider community Lin _et al._ (2013); Buckley _et al._ (2015);
Haisch and Re (2015); Arina _et al._ (2016); Haisch _et al._ (2017);
Sirunyan _et al._ (2017a); Aaboud _et al._ (2018a); Sirunyan _et al._
(2018, 2019a); Haisch and Polesello (2019a); Sirunyan _et al._ (2019b); Aad
_et al._ (2021b, c).
The main goal of this article is to analyse the LHC reach of the
$tX+E_{T}^{\mathrm{miss}}$ channels and to constrain the parameter space of
composite pNGB DM models. To keep our discussion as model-independent as
possible we will work in an effective field theory focusing on the subset of
operators that lead to DM production in association with top quarks. Through
loops such operators also lead to a $j+E_{T}^{\mathrm{miss}}$ signal, and we
study the limits on the parameter space of the pNGB DM effective field theory
that are imposed by the corresponding mono-jet searches. We then offer a
comprehensive discussion of the phenomenological features of pNGB DM models,
including an analysis of the DM direct and indirect detection constraints as
well as of the physics of thermal freeze-out. The search strategies and pNGB
DM benchmark models that we discuss are meant to set the stage for dedicated
experimental analyses by ATLAS and CMS.
Our work is organised as follows. In Section 2 we describe the structure of
the composite pNGB DM models that we consider. Our Monte Carlo (MC) generation
and our detector simulation are spelled out in Section 3, while Section 4
describes the analysis strategies to search for the relevant mono-$X$ signals.
In Section 5 we examine the sensitivity of the studied pNGB DM signatures at
upcoming LHC runs. The present and future constraints on the pNGB DM effective
field theory that arise from invisible Higgs decays are discussed in Section
6. The relevant non-collider limits are presented in Section 7. We discuss our
main results and give an outlook in Section 8. The impact of the assumed
systematic background uncertainties on our $tX+E_{T}^{\mathrm{miss}}$
projections is studied in the supplementary material that can be found in
Appendix A.
## 2 Theoretical framework
Throughout this article we will consider theories in which both the SM Higgs
doublet $H$ and the DM candidate $\chi$ arise as light pNGBs from a strongly-
coupled sector. The DM candidate is a singlet under the SM gauge group and we
assume it to be a complex scalar. The terms of the interaction Lagrangian
relevant for the further discussion can be written as Ruhdorfer _et al._
(2020)
$\begin{split}{\cal L}_{\chi H}&=\frac{c_{d}}{f^{2}}\,\partial_{\mu}|\chi|^{2}\,\partial^{\mu}|H|^{2}-\lambda\,|\chi|^{2}|H|^{2}\,,\\ {\cal L}_{\chi\psi}&=\frac{|\chi|^{2}}{f^{2}}\left(c_{t}\,y_{t}\,\bar{q}_{L}\tilde{H}t_{R}+{\rm h.c.}\right)+\frac{i}{f^{2}}\,\chi^{\ast}\overset{\leftrightarrow}{\partial_{\mu}}\chi\sum_{\psi=q_{L},t_{R},b_{R}}d_{\psi}\,\bar{\psi}\gamma^{\mu}\psi\,.\end{split}$ (1)
Here the terms in ${\cal L}_{\chi H}$ correspond to the derivative and
marginal Higgs portal, respectively, while the terms in ${\cal L}_{\chi\psi}$
correspond to the Yukawa-type DM-top coupling and the current-current type
interactions between DM and the third-generation SM quarks, respectively. The
common decay constant of the pNGBs is denoted by $f$, while the coefficients
$c_{i}$, $\lambda$ and $d_{j}$ are $O(1)$ constants that we assume to be real
such that CP is conserved. In (1) we have furthermore used the definition
$\chi^{\ast}\overset{\leftrightarrow}{\partial_{\mu}}\chi=\chi^{\ast}\partial_{\mu}\chi-\chi\partial_{\mu}\chi^{\ast}$,
and $q_{L}=(t_{L},b_{L})^{T}$ denotes the left-handed third-generation quark
doublet, $t_{R}$ ($b_{R}$) is the right-handed top-quark (bottom-quark)
singlet, $y_{t}=\sqrt{2}m_{t}/v$ is the top Yukawa coupling with $m_{t}\simeq
163\,{\rm GeV}$ the top mass and $v\simeq 246\,{\rm GeV}$ the Higgs vacuum
expectation value (VEV), and we have defined
$\tilde{H}^{i}=\varepsilon_{ij}\big{(}H^{j}\big{)}^{\ast}$
with $\varepsilon_{ij}$ totally antisymmetric and $\varepsilon_{12}=1$. Notice
that the current-current type operator in ${\cal L}_{\chi\psi}$ is absent if
hidden-charge conjugation (i.e. $\chi\to-\chi^{\ast}$ and $\psi\to\psi$) is
preserved as in all explicit pNGB DM models studied in Balkin _et al._
(2018). Moreover, this operator vanishes trivially if the DM candidate is a
real scalar.
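As a quick numerical cross-check of these conventions, the quoted values $m_{t}\simeq 163\,{\rm GeV}$ and $v\simeq 246\,{\rm GeV}$ indeed give an $O(1)$ top Yukawa coupling:

```python
import math

# Cross-check of y_t = sqrt(2) m_t / v with the values quoted in the text
m_t = 163.0  # GeV (top mass as used above)
v = 246.0    # GeV (Higgs vacuum expectation value)
y_t = math.sqrt(2.0) * m_t / v
print(f"y_t = {y_t:.3f}")  # an O(1) number, consistent with c_t being O(1)
```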
Besides the four types of interactions introduced in (1), the full pNGB DM
effective field theory can contain additional dimension-six operators such as
$\chi^{\ast}\overset{\leftrightarrow}{\partial_{\mu}}\chi\,\partial_{\nu}B^{\mu\nu}$ and
$|\chi|^{2}\,V_{\mu\nu}V^{\mu\nu}$. Here
$V_{\mu\nu}=B_{\mu\nu},W^{i}_{\mu\nu},G^{a}_{\mu\nu}$ denotes the $U(1)_{Y}$,
$SU(2)_{L}$ and $SU(3)_{C}$ field-strength tensor, respectively. Since
the latter two types of operators do not lead to a relevant
$tX+E_{T}^{\mathrm{miss}}$ signal at tree level, such terms are not directly
testable in DM production in association with top quarks. In contrast, the
presence of DM couplings with gauge bosons may have an important impact on the
calculation of the DM (in)direct detection bounds and on the derivation of the
DM relic density. To highlight the complementarity of collider and non-
collider bounds in a simple fashion, we therefore restrict our analysis to the
subclass of models in which the leading effects at the scale at which DM and
the Higgs boson emerge as composite pNGBs are well captured by the effective
Lagrangians ${\cal L}_{\chi H}$ and ${\cal L}_{\chi\psi}$. However, we will
discuss and include pNGB DM interactions with gauge bosons that are generated
from (1) once radiative corrections are included, whenever these yield
significant contributions (see Section 7).
We finally mention that under the assumption that the cancellation of gauge
anomalies only depends on the SM fermion representations and not on the
structure of the pNGB DM effective field theory $\big{(}$in particular the
coefficients $d_{\psi}$ in (1)$\big{)}$, the current-current type DM-top
operator does not lead to a $j+E_{T}^{\mathrm{miss}}$ signal. In practice this
requires one to introduce local counterterms that cancel the anomalous
contributions in the five-point diagrams like the one shown on the right-hand
side in Figure 2 — see Durieux _et al._ (2018); Bonnefoy _et al._ (2021);
Feruglio (2021) for related discussions of gauge anomalies in the context of
the so-called SMEFT. Since we envisage that (1) describes new-physics
scenarios in which the full SM gauge symmetry is preserved, a matching
calculation in the full theory will always result in the required anomaly
cancellation, and consequently a cancellation of the current-current type
contributions to the mono-jet signature for any value of the parameters
$d_{\psi}$.
## 3 MC generation and detector simulation
In our work we study the $t\bar{t}+E_{T}^{\mathrm{miss}}$, the
$tW+E_{T}^{\mathrm{miss}}$ and the $j+E_{T}^{\mathrm{miss}}$ signatures that
arise from insertions of the pNGB DM operators introduced in (1). Examples of
leading-order (LO) diagrams that involve DM-Higgs and DM-top operators are
displayed in Figure 1 and Figure 2, respectively. Notice that only DM-top
operators can lead to a LO mono-jet signal as illustrated by the graph shown
on the right-hand side in Figure 2. All our signal predictions assume proton-
proton ($pp$) collisions at a centre-of-mass (CM) energy of $14\,{\rm TeV}$
and are calculated using a FeynRules 2 Alloul _et al._ (2014) implementation
of the Lagrangian (1) in the UFO format Degrande _et al._ (2012). The
generation and showering of the mono-$X$ samples is performed with
MadGraph5_aMC@NLO Alwall _et al._ (2014) at LO and PYTHIA 8.2 Sjöstrand _et
al._ (2015), respectively, using NNPDF3.0 parton distribution functions (PDFs)
Ball _et al._ (2015). In order to preserve both spin correlations and finite-
width effects, final-state top quarks and $W$ bosons are decayed with MadSpin
Artoisenet _et al._ (2013).
In the case of the $tX+E_{T}^{\mathrm{miss}}$ signatures, all SM processes
that contain at least two charged leptons ($\ell=e,\mu$) coming from the decay
of an EW gauge boson $V=W,Z$ are included in the background simulation. We do
not consider backgrounds with either fake electrons from jet misidentification
or with real non-isolated leptons from the decay of heavy-flavoured hadrons. A
reliable estimate of these backgrounds depends on a detailed simulation of
detector effects beyond the scope of this article. For the most recent ATLAS
analyses involving leptonic final states Aad _et al._ (2021c, b), the
background from non-prompt leptons is a few percent of the total background.
The backgrounds from $t\bar{t}$ Campbell _et al._ (2015), $tW$ Re (2011),
$WW$, $WZ$ and $ZZ$ production Melia _et al._ (2011); Nason and Zanderighi
(2014) are all generated at the next-to-leading order (NLO) in QCD with POWHEG
BOX Alioli _et al._ (2010). The $V+{\rm jets}$ backgrounds are generated at
LO using MadGraph5_aMC@NLO and include up to four additional jets.
MadGraph5_aMC@NLO is also used to simulate the $t\bar{t}V$ backgrounds with a
multiplicity of up to two jets, while the $tZ$ and $tWZ$ backgrounds are
obtained at LO with the same MC generator. All partonic events are showered
with PYTHIA 8.2. The samples produced with POWHEG BOX are normalised to the
corresponding NLO QCD cross sections, except for $t\bar{t}$, which is
normalised to the cross section obtained at the next-to-next-to-leading order
(NNLO) in QCD plus next-to-next-to-leading logarithmic QCD corrections Czakon
and Mitov (2014); Czakon _et al._ (2013). The $V+{\rm jets}$ samples are
normalised to the NNLO QCD cross sections Anastasiou _et al._ (2004); Gavin
_et al._ (2013) and the $t\bar{t}V$ samples are normalised to the NLO QCD
cross section as calculated by MadGraph5_aMC@NLO.
Figure 1: Examples of diagrams with insertions of the DM-Higgs operators
(filled red circles) in (1) that lead to a $t\bar{t}+E_{T}^{\mathrm{miss}}$
(left) and $tW+E_{T}^{\mathrm{miss}}$ (right) signal. The black dots indicate
SM interactions.
For the $j+E_{T}^{\mathrm{miss}}$ signature, the dominant SM backgrounds arise
from $V+{\rm jets}$ production. The only relevant process not included in the
$tX+E_{T}^{\mathrm{miss}}$ backgrounds described above is the
$Z+\mathrm{jets}$ channel followed by the decay $Z\to\nu\bar{\nu}$. As in
the earlier works Haisch and Polesello (2019b, 2021), the corresponding
background is generated at LO with MadGraph5_aMC@NLO, and can contain up to
two additional jets. The generation is performed in slices of the vector-boson
transverse momentum ($p_{T}$), and the resulting events are showered with
PYTHIA 8.2 employing a Catani-Krauss-Kuhn-Webber jet matching procedure Catani
_et al._ (2001). The inclusive signal region IM3 of the ATLAS analysis Aad
_et al._ (2021d) requires $E_{T}^{\mathrm{miss}}>350\,{\rm GeV}$, and for
these selections the background from $V+{\rm jets}$ production amounts to
around 95% of the total SM background. The $V+{\rm jets}$ samples are
normalised such that the different contributions match the number of events in
the IM3 signal region as estimated by ATLAS scaled from a CM energy of
$13\,{\rm TeV}$ to $14\,{\rm TeV}$ and to the appropriate integrated
luminosity. The additional minor backgrounds from $t\bar{t}$, $tW$ and diboson
production are the same as in the $tX+E_{T}^{\mathrm{miss}}$ case.
The actual physics analyses use experimentally identified electrons, muons,
photons, jets ($j$) and $E_{T}^{\mathrm{miss}}$. These objects are constructed
from the stable particles in the generator output. Jets are built out of the
momenta of all the stable particles depositing energy in the calorimeter
except for muons using the anti-$k_{t}$ algorithm Cacciari _et al._ (2008)
with a radius parameter of $R=0.4$, as implemented in FastJet Cacciari _et
al._ (2012). Jets originating from the hadronisation of bottom quarks
($b$-jets) are experimentally identified (i.e. $b$-tagged) with high
efficiency. The $\vec{p}_{T}^{\,{\rm miss}}$ vector with magnitude
$E_{T}^{\mathrm{miss}}$ is constructed from the transverse momenta of all the
invisible particles in the event. Detector effects are simulated by smearing
the momenta of the analysis objects and by applying efficiency factors where
applicable. The used smearing and efficiency functions are tuned to reproduce
the performance of the ATLAS detector Aad _et al._ (2008, 2009). In
particular, the performance of the ATLAS $b$-tagging algorithm is taken from
Aad _et al._ (2019). For the mono-$X$ analyses performed in this article, a
$b$-tagging working point is chosen that yields a $b$-tagging efficiency of
77%, a $c$-jet rejection of 5 and a light-flavour jet rejection of 110. More
details on our detector simulation can be found in the earlier papers Haisch
_et al._ (2017); Haisch and Polesello (2018).
Figure 2: Assortment of graphs with insertions of the DM-top operators
(filled green circles) entering (1) that give rise to a
$t\bar{t}+E_{T}^{\mathrm{miss}}$ (left), $tW+E_{T}^{\mathrm{miss}}$ (middle)
and $j+E_{T}^{\mathrm{miss}}$ (right) signature.
## 4 Mono-$X$ analysis strategies
Below we describe the analysis strategies to target the
$tX+E_{T}^{\mathrm{miss}}$ and $j+E_{T}^{\mathrm{miss}}$ signals that are due
to the interactions described by (1). For each analysis strategy we define the
signal regions, spell out all selection criteria and quantify the systematic
uncertainties that plague the search strategy in question.
### 4.1 $tX+E_{T}^{\mathrm{miss}}$ final states
The considered signal events include the decays of two $W$ bosons. We address
the final states where only one or both of the $W$ bosons decay into charged
leptons, which hereafter will be called semileptonic or fully-leptonic,
respectively. Our $tX+E_{T}^{\mathrm{miss}}$ analysis is based on the
definition of three orthogonal signal regions. The first two signal regions
target the associated production of a $t\bar{t}$ pair and DM with SR1 (SR2)
selecting semileptonic (fully-leptonic) events. The third signal region called
SR3 instead considers the associated production of a top quark, a $W$ boson
and DM, which is searched for in fully-leptonic events. The corresponding
final states therefore involve a single isolated charged lepton and two
$b$-tagged jets (SR1), two isolated charged leptons and two $b$-tagged jets
(SR2) or two isolated charged leptons and a single $b$-tagged jet (SR3).
Notice that $tW+E_{T}^{\mathrm{miss}}$ production typically has a smaller
cross section than $t\bar{t}+E_{T}^{\mathrm{miss}}$ production. However, in
the case of the two-lepton final state, it has been shown in Haisch and
Polesello (2019a) that it is possible to devise a selection strategy that
combines the $t\bar{t}+E_{T}^{\mathrm{miss}}$ and the
$tW+E_{T}^{\mathrm{miss}}$ channels and has a significantly larger
sensitivity than $t\bar{t}+E_{T}^{\mathrm{miss}}$ alone. Such a selection is
based on the observation that events produced by a fully-leptonic $t\bar{t}$
decay contain two $\ell b$ pairs for both of which the invariant mass $m_{\ell
b}$ is bounded from above by $\sqrt{m_{t}^{2}-M_{W}^{2}}\simeq 153\,{\rm
GeV}$. This is not the case for the $tW$ production which contains only one
$\ell b$ pair satisfying this bound. The two processes can thus be separated
by defining the variable
$m_{b\ell}^{t}=\mathrm{min}\Big(\mathrm{max}\big(m_{\ell_{1}j_{a}},m_{\ell_{2}j_{b}}\big)\Big)\,,$ (2)
and putting a cut on $m_{b\ell}^{t}$ of around $160\,{\rm GeV}$ to separate
$t\bar{t}$ from $tW$ events. In (2) the variables $m_{\ell_{1}j_{a}}$ and
$m_{\ell_{2}j_{b}}$ denote the invariant masses of the leading and subleading
leptons $\ell_{1}$ and $\ell_{2}$ paired with the jets $j_{a}$ and $j_{b}$. The
minimisation with respect to the jet pairs $j_{a}$ and $j_{b}$ runs over all
of the $b$-tagged jets if the number of $b$-tagged jets satisfies $N_{b}\geq
3$ or over the $b$-tagged jets and the untagged jet with the highest
$b$-tagging weight if $N_{b}\leq 2$. Since the three signal regions are
designed to have no events in common, the final search sensitivity of the
$tX+E_{T}^{\mathrm{miss}}$ channel will be calculated after the statistical
combination of SR1, SR2 and SR3. The selection criteria corresponding to the
three signal regions are summarised in Tables 1 and 2.
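The minimisation in (2) can be sketched in a few lines. The snippet below is an illustrative reimplementation (the function names are ours, not part of the analysis code) under the assumption that leptons and jets are supplied as massless $(p_{T},\eta,\phi)$ triplets:

```python
import itertools
import math

def inv_mass(p1, p2):
    """Invariant mass of two massless objects given as (pT, eta, phi) triplets."""
    pt1, eta1, phi1 = p1
    pt2, eta2, phi2 = p2
    return math.sqrt(2.0 * pt1 * pt2 * (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2)))

def m_bl_t(leptons, jets):
    """Eq. (2): minimise max(m(l1, ja), m(l2, jb)) over ordered pairs of distinct jets."""
    l1, l2 = leptons
    return min(
        max(inv_mass(l1, ja), inv_mass(l2, jb))
        for ja, jb in itertools.permutations(jets, 2)
    )
```

In the actual analysis the candidate pairs $(j_{a},j_{b})$ are restricted to the $b$-tagged jets (plus the untagged jet with the highest $b$-tagging weight for $N_{b}\leq 2$) as described above; the sketch accepts any jet list.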
Variable | SR1 selection
---|---
$N_{\ell}$ | $=1\,,$ $p_{T}(\ell)>25\,{\rm GeV}\,,$ $|\eta(\ell)|<2.5$
$N_{j}$ | $\geq 4\,,$ $p_{T}(j)>(80,60,30,25)\,{\rm GeV}\,,$ $|\eta(j)|<2.5$
$N_{b}$ | $\geq 2\,,$ $p_{T}(b)>(80,25)\,{\rm GeV}\,,$ $|\eta(b)|<2.5$
$E_{T}^{\mathrm{miss}}$ | $>550\,{\rm GeV}$
$m_{T}^{\ell}$ | $>180\,{\rm GeV}$
Topness | $>8$
$m_{\rm top}^{\rm reclustered}$ | $>150\,{\rm GeV}$
$H_{T,{\rm sig}}^{{\rm miss}}$ | $>15$
$|\Delta\phi_{\ell,{\rm miss}}|$ | $>1.3$
$|\Delta\phi_{\rm min}|$ | $>0.9$
$|\Delta\phi_{bb}|$ | $<2.5$
Table 1: Definition of the signal region SR1. The number of charged leptons,
light-flavoured jets and $b$-tagged jets are denoted by $N_{\ell}$, $N_{j}$
and $N_{b}$, respectively. For further details consult the text.
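The multiplicity and kinematic thresholds of Table 1 can be expressed as a simple event predicate. The sketch below is illustrative only (function and argument names are ours; the $|\eta|$ requirements and the topological variables are omitted):

```python
def passes_sr1_preselection(lep_pts, jet_pts, bjet_pts, met):
    """Multiplicity and pT/MET thresholds of Table 1 (GeV, lists sorted descending).

    Covers N_l = 1, N_j >= 4 with staggered jet thresholds, N_b >= 2 and
    E_T^miss > 550 GeV; the remaining Table 1 cuts are not implemented here.
    """
    jet_thresholds = (80.0, 60.0, 30.0, 25.0)
    return (
        len(lep_pts) == 1 and lep_pts[0] > 25.0
        and len(jet_pts) >= 4
        and all(pt > thr for pt, thr in zip(jet_pts, jet_thresholds))
        and len(bjet_pts) >= 2 and bjet_pts[0] > 80.0 and bjet_pts[1] > 25.0
        and met > 550.0
    )
```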
In the case of SR1 the selection requirements are similar to the ones imposed
in the signal region DM of Aad _et al._ (2021b). However, some variables have
been modified and the values of the cuts have been optimised using our MC
simulations of both the signal and the background for the high-luminosity
upgrade of the LHC (HL-LHC). The basic selection requires exactly one
isolated charged lepton and at least four jets of which exactly two must be
tagged as $b$-jets. Furthermore, jets tagged as hadronic decays of a $\tau$
lepton are vetoed. The employed cuts on the $p_{T}$ and pseudorapidities
($\eta)$ of the leptons and jets can be found in Table 1. After the initial
selections the dominant background is $t\bar{t}$ production with one top quark
decaying leptonically and the other one decaying hadronically. This background
is strongly reduced by demanding $E_{T}^{\mathrm{miss}}>550\,{\rm GeV}$ and
requiring a lower limit of $180\,{\rm GeV}$ on the transverse mass of the
charged lepton defined as
$m_{T}^{\ell}=\sqrt{2\,|\vec{p}_{T}(\ell)|\,|\vec{p}_{T}^{\,{\rm miss}}|\left(1-\cos\Delta\phi_{\ell,{\rm miss}}\right)}\,.$ (3)
Here $\vec{p}_{T}(\ell)$ denotes the components of the lepton momentum
transverse to the beam, $\vec{p}_{T}^{\,{\rm miss}}$ is the vector sum of the
transverse momenta of the invisible particles and $\Delta\phi_{\ell,{\rm
miss}}=\Delta\phi(\vec{p}_{T}(\ell),\vec{p}_{T}^{\,{\rm miss}})$ is the
azimuthal angular separation between these two vectors. To reject events which
are incompatible with top-quark decays, selections on the variables $\rm
topness$ Graesser and Shelton (2013) and $m_{\rm top}^{\rm reclustered}$ Aad
_et al._ (2021b) are imposed. An additional rejection of the SM background is
achieved with a selection on $H_{T,{\rm sig}}^{\rm miss}$, i.e. the
$E_{T}^{\mathrm{miss}}$ reconstructed as the vector sum of the momenta of all
the signal jets and leptons in the event, reduced by $100\,{\rm GeV}$ and
divided by its experimental resolution Aad _et al._ (2014); ATL (2018). Finally, cuts
on the azimuthal angular separations $\Delta\phi_{\ell,{\rm miss}}$,
$\Delta\phi_{\rm min}$ between $\vec{p}_{T}(j)$ and $\vec{p}_{T}^{\,{\rm
miss}}$ for the four leading jets and on $\Delta\phi_{bb}$ between the two
$b$-tagged jets are imposed as detailed in Table 1.
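The transverse-mass requirement can be sketched as a generic reimplementation of (3) (this is not the experiment's code; names are ours):

```python
import math

def transverse_mass(pt_lep, pt_miss, dphi):
    """Eq. (3): m_T of the lepton--E_T^miss system (momenta in GeV)."""
    return math.sqrt(2.0 * pt_lep * pt_miss * (1.0 - math.cos(dphi)))
```

For a leptonic $W$ decay $m_{T}^{\ell}$ is bounded from above by $M_{W}$ up to resolution effects, which is why the $m_{T}^{\ell}>180\,{\rm GeV}$ cut strongly suppresses the semileptonic $t\bar{t}$ background.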
Variable | SR2 selection | SR3 selection
---|---|---
$N_{\ell}$ | $=2\,,$ $p_{T}(\ell)>(25,20)\,{\rm GeV}\,,$ $|\eta(\ell)|<2.5$
$m_{\ell\ell}$ | $>20\,{\rm GeV}\,,$ $Z$-boson veto for OS leptons
$N_{b}$ | $\geq 1$, $p_{T}(b)>30\,{\rm GeV}$, $|\eta(b)|<2.5$
$m_{b\ell}^{t}$ | $<160\,{\rm GeV}$ | $>160\,{\rm GeV}$ or $N_{j}=1$
$E_{T}^{\mathrm{miss}}$ | $>550\,{\rm GeV}$ | $>350\,{\rm GeV}$
$|\Delta\phi_{\rm min}|$ | n/a | $>0.8$
$|\Delta\phi_{\rm boost}|$ | $<1.5$ | $<2.5$
$M_{\rm scal}$ | n/a | $<500\,{\rm GeV}$
$m_{T2}$ | $>100\,{\rm GeV}$, shape fit | $>170\,{\rm GeV}$
Table 2: As Table 1 but for the signal regions SR2 and SR3. More details can
be found in the main text.
The basic event selection is common to the signal regions SR2 and SR3. It
requires exactly two isolated opposite-sign (OS) leptons, whose invariant
mass has to fulfil
$m_{\ell\ell}>20\,{\rm GeV}$. If the charged leptons are of the same flavour,
events with $71\,{\rm GeV}<m_{\ell\ell}<111\,{\rm GeV}$ are discarded to
suppress backgrounds where the lepton pair arises from the decay
$Z\to\ell^{+}\ell^{-}$. Furthermore, each event is required to contain at
least one $b$-tagged jet. The relevant $p_{T}$ and $\eta$ selections of the OS
leptons and $b$-jets are specified in Table 2. The first selection that
differs between the two signal regions is a cut on the $m_{b\ell}^{t}$
observable defined in (2), which for SR2 (SR3) is required to be smaller
(larger) than $160\,{\rm GeV}$. The variable $m_{b\ell}^{t}$ is only defined
for events with at least two reconstructed jets and events with only one
reconstructed jet are assigned to SR3. Further selections are used to optimise
the rejection of the SM backgrounds. In the case of SR2 (SR3) we require
$E_{T}^{\mathrm{miss}}>550\,{\rm GeV}$ ($E_{T}^{\mathrm{miss}}>350\,{\rm
GeV}$). The four leading jets furthermore have to satisfy $|\Delta\phi_{\rm
min}|>0.8$ in the signal region SR3. The variable $\Delta\phi_{\rm boost}$
defined as the azimuthal angle difference between $\vec{p}_{T}^{\,{\rm miss}}$
and the vector sum of $\vec{p}_{T}^{\,{\rm miss}}$, $\vec{p}_{T}(\ell_{1})$
and $\vec{p}_{T}(\ell_{2})$, must satisfy the requirement $|\Delta\phi_{\rm
boost}|<1.5$ ($|\Delta\phi_{\rm boost}|<2.5$) for SR2 (SR3). In the case of
the signal region SR3, we additionally demand that the scalar sum $M_{\rm
scal}$ of the transverse momenta of all the jets observed in the event
satisfies $M_{\rm scal}<500\,{\rm GeV}$. Finally, in the signal region SR2 we
require $m_{T2}>100\,{\rm GeV}$ and fit the shape of the $m_{T2}$ distribution
(see for instance Haisch and Polesello (2019a)), whereas for the signal region
SR3 we impose the cut $m_{T2}>170\,{\rm GeV}$. Here $m_{T2}$ denotes the
stransverse mass introduced in Lester and Summers (1999).
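The stransverse mass $m_{T2}$ minimises, over all splittings of $\vec{p}_{T}^{\,{\rm miss}}$ into two invisible transverse momenta, the larger of the two resulting transverse masses. A brute-force numerical sketch for massless visible and invisible particles (a simple refined grid scan; dedicated bisection algorithms are used in practice):

```python
import math

def m_t(l, q):
    """Transverse mass of a massless visible particle l = (px, py)
    paired with an invisible transverse momentum q = (qx, qy)."""
    lmag, qmag = math.hypot(*l), math.hypot(*q)
    return math.sqrt(max(0.0, 2.0 * (lmag * qmag - l[0] * q[0] - l[1] * q[1])))

def m_t2(l1, l2, met, half=300.0, steps=61, refinements=4):
    """Brute-force m_T2: minimise max(m_T(l1, q), m_T(l2, met - q)) over
    the splitting q of the missing transverse momentum, refining the
    grid around the current best point."""
    cx, cy = met[0] / 2.0, met[1] / 2.0   # start from the symmetric split
    best = float("inf")
    for _ in range(refinements):
        offs = [half * (2.0 * i / (steps - 1) - 1.0) for i in range(steps)]
        for dx in offs:
            for dy in offs:
                q = (cx + dx, cy + dy)
                val = max(m_t(l1, q), m_t(l2, (met[0] - q[0], met[1] - q[1])))
                if val < best:
                    best, bx, by = val, q[0], q[1]
        cx, cy, half = bx, by, half / 10.0
    return best

# Two identical leptons recoiling against the missing momentum: the
# optimal split is symmetric and m_T2 = 2*sqrt(pT * MET/2) = 200 GeV here.
print(m_t2((100.0, 0.0), (100.0, 0.0), (-200.0, 0.0)))
```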
Assuming an integrated luminosity of $3\,{\rm ab}^{-1}$ at a CM energy of
$14\,{\rm TeV}$, the number of background events surviving the discussed
requirements amounts to 123, 34 and 48 in the case of SR1, SR2 and SR3,
respectively. The signal efficiency depends on the DM mass and on the specific
pNGB DM model, and in the considered cases it ranges from a few percent to a
few tens of percent. Given the relatively large number of surviving
background events, the experimental reach will depend sensitively on the
systematic uncertainty of the estimated SM backgrounds. The size of these
uncertainties depends on the detector performance and the techniques used for
the background evaluation, which are typically based on a mixed MC and data-
driven approach. Existing LHC analyses addressing signatures and a phase space
similar to our $tX+E_{T}^{\mathrm{miss}}$ strategy have background
uncertainties of 10% to 30% $\big{(}$see Aaboud _et al._ (2018a); Aad _et
al._ (2021c, b)$\big{)}$. In our numerical analysis we will assume a 15%
uncertainty on the backgrounds and a 5% uncertainty on the pNGB DM signals.
The latter uncertainty should account for the effect of scale variations and
PDF uncertainties on the signal modelling.
In addition to the analysis strategy described in detail above, we have also
studied the sensitivity of the fully-leptonic signal regions SRt3 of Aaboud
_et al._ (2018a) and ${\rm SR}^{\text{2-body}}$ of Aad _et al._ (2021c), the
semileptonic signal region DM of Aad _et al._ (2021b) and the fully-hadronic
signal regions SRt1 and SRt2 of Aaboud _et al._ (2018a) and SRA-TT of Aad
_et al._ (2020) to the parameter space of the pNGB DM effective field theory.
Our analyses rely in these cases on CheckMATE 2 Dercks _et al._ (2017), which
uses DELPHES 3 de Favereau _et al._ (2014) as a fast detector simulation. We
find that for what concerns leptonic final states, the best limits on the
parameters of (1) follow either from the signal region DM or ${\rm
SR}^{\text{2-body}}$, while in the case of a fully-hadronic search the
strategies SRt2 and SRA-TT fare equally well. It furthermore turns out that
the event selections employed in Aaboud _et al._ (2018a); Aad _et al._
(2021b, 2020, c) perform at most as well as, but not better than, our optimised
$tX+E_{T}^{\mathrm{miss}}$ search strategy. We finally observe that for
comparable sets of selection criteria the results from our parametrised
simulation and the recast of the ATLAS analyses are in good agreement, which
validates our simulation approach.
### 4.2 $j+E_{T}^{\mathrm{miss}}$ final state
In the case of the $j+E_{T}^{\mathrm{miss}}$ final state, the relevant pNGB DM
signal consists of a single high-transverse momentum jet and
$E_{T}^{\mathrm{miss}}$ associated to the production of a pair of DM
particles. The signature therefore resembles the canonical mono-jet signal,
which has received a significant amount of experimental Aaboud _et al._
(2016, 2018b); Sirunyan _et al._ (2017b); ATL (2020a) and theoretical Lindert
_et al._ (2017) attention at the LHC, resulting in high-precision estimates of
the dominant $E_{T}^{\mathrm{miss}}$ backgrounds that are associated to the
production of an EW gauge boson accompanied by at least one high-transverse
momentum jet.
In our article we rely on the latest ATLAS mono-jet analysis Aad _et al._
(2021d). Specifically, we employ $E_{T}^{\mathrm{miss}}>350\,{\rm GeV}$ and
require a high-transverse momentum jet with $p_{T}(j)>150\,{\rm GeV}$ within
$|\eta(j)|<2.4$, and no more than four jets with $p_{T}(j)>30\,{\rm GeV}$
within $|\eta(j)|<2.8$. The selection $|\Delta\phi_{\rm min}|>0.4$ is used to
fully suppress the multi-jet background. All events containing a reconstructed
electron, muon, or hadronically decaying tau lepton are rejected. Our
selection thus closely resembles the signal region IM3 of Aad _et al._
(2021d). The systematic uncertainty quoted by ATLAS in IM3 is 1.4%, and we
adopt this value as the systematic uncertainty on the total number of
background events. Since we perform a multi-bin comparison of the shape of the
$E_{T}^{\mathrm{miss}}$ variable, we also need to take into account
uncertainties related to the $E_{T}^{\mathrm{miss}}$ shape. For each of the
$E_{T}^{\mathrm{miss}}$ bins considered in the analysis, ATLAS gives an
uncertainty which increases from around 1.4% to 4% between $350\,{\rm GeV}$ to
$1.2\,{\rm TeV}$. We apply these systematic uncertainties as bin-by-bin shape
uncertainties in our $j+E_{T}^{\mathrm{miss}}$ analysis. For the bins between
$1.5\,{\rm TeV}$ and $2\,{\rm TeV}$ we furthermore assume an uncertainty of
$5\%$, while we take an uncertainty of $8\%$ for the total number of events in
the overflow bin with $E_{T}^{\mathrm{miss}}>2\,{\rm TeV}$. Notice that our
uncertainty treatment corresponds to taking the uncertainties among different
$E_{T}^{\mathrm{miss}}$ bins to be uncorrelated. In addition, since the
statistical uncertainties of the control regions that are used to constrain
the background will be reduced with more luminosity, the systematic
uncertainties are also expected to decrease with larger data samples. We thus
believe that our mono-jet study provides conservative results when applied to
the full data set of the HL-LHC.
## 5 Constraints from $tX+E_{T}^{\mathrm{miss}}$ and
$j+E_{T}^{\mathrm{miss}}$ searches at the LHC
On the basis of the selection criteria given in Section 4, we will study the
LHC sensitivity to the discussed mono-$X$ signatures. For each signature and
each studied pNGB DM benchmark, we evaluate the value of the cross section
which can be excluded at 95% confidence level (CL) normalised to the nominal
LO cross section for the relevant model realisation as calculated by
MadGraph5_aMC@NLO. The experimental sensitivity is evaluated using a test
statistic based on a profiled likelihood ratio and we make use of the CLs
method Read (2002) as implemented in RooStats Moneta _et al._ (2010).
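While our limits rely on the full CLs machinery, the impact of the assumed background uncertainty can be gauged with the single-bin Asimov significance of Cowan, Cranmer, Gross and Vitells; the following sketch uses the SR2 background yield quoted above with a hypothetical signal count, and is only a rough stand-in for the RooStats computation:

```python
import math

def z_asimov(s, b, db):
    """Median Asimov significance for a signal s on top of a background b
    with absolute background uncertainty db (db = 0 recovers the familiar
    sqrt(2*((s+b)*ln(1+s/b)-s)) expression)."""
    if db == 0.0:
        return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))
    v = db * db
    t1 = (s + b) * math.log((s + b) * (b + v) / (b * b + (s + b) * v))
    t2 = (b * b / v) * math.log(1.0 + v * s / (b * (b + v)))
    return math.sqrt(2.0 * (t1 - t2))

# SR2 background of 34 events with a 15% uncertainty: a hypothetical
# signal of 20 events corresponds to roughly a 2.3 sigma effect, clearly
# below the ~3.2 sigma obtained for a perfectly known background.
print(z_asimov(20.0, 34.0, 0.15 * 34.0))
print(z_asimov(20.0, 34.0, 0.0))
```

The degradation with growing `db` illustrates why the reach derived below depends sensitively on the background systematics.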
In Table 3 we present the 95% CL bounds that derive from our
$tX+E_{T}^{\mathrm{miss}}$ analysis for seven different DM masses in the range
from $70\,{\rm GeV}$ to $1\,{\rm TeV}$. DM masses $m_{\chi}<m_{h}/2$, where
$m_{h}\simeq 125\,{\rm GeV}$ is the SM Higgs mass, are not considered because
in this case invisible Higgs decays generically represent the best way to
probe pNGB DM (see the discussion in Section 6). The shown limits correspond
to the full data set of $3\,{\rm ab}^{-1}$ that the HL-LHC is expected to
collect at a CM energy of $14\,{\rm TeV}$. Only one free pNGB DM effective
field theory parameter is allowed at a time. One observes that HL-LHC
$tX+E_{T}^{\mathrm{miss}}$ searches are most sensitive to the current-current
type DM-fermion operators followed by the derivative Higgs portal operator and
the Yukawa-type DM-top operator. The most difficult operator to probe is the
marginal Higgs portal, since, compared to the other pNGB DM effective field
theory interactions in (1), it leads to softer kinematic distributions, which
generically makes background suppression harder. Notice that in the case of the
marginal Higgs portal we have indicated the limits that correspond to a non-
perturbative coupling, i.e. $|\lambda|>4\pi$, by putting parentheses around
the corresponding results. We finally add that for $m_{\chi}=1\,{\rm TeV}$ the
bounds on $f/\sqrt{|c_{d}|}$ and $f/\sqrt{|c_{t}|}$ following from our
$tX+E_{T}^{\mathrm{miss}}$ search strategy are so low that an effective field
theory description might not be valid. The corresponding exclusion limits are
therefore only indicative.
| DM mass
---|---
Parameter | $70\,{\rm GeV}$ | $100\,{\rm GeV}$ | $200\,{\rm GeV}$ | $300\,{\rm GeV}$ | $400\,{\rm GeV}$ | $500\,{\rm GeV}$ | $1\,{\rm TeV}$
$f/\sqrt{|c_{d}|}$ | $165\,{\rm GeV}$ | $154\,{\rm GeV}$ | $138\,{\rm GeV}$ | $123\,{\rm GeV}$ | $109\,{\rm GeV}$ | $96\,{\rm GeV}$ | $51\,{\rm GeV}$
$|\lambda|$ | 2.4 | 6.0 | (23) | (55) | (107) | (198) | (2315)
$f/\sqrt{|c_{t}|}$ | $153\,{\rm GeV}$ | $150\,{\rm GeV}$ | $137\,{\rm GeV}$ | $122\,{\rm GeV}$ | $107\,{\rm GeV}$ | $96\,{\rm GeV}$ | $50\,{\rm GeV}$
$f/\sqrt{|d_{t_{R}}|}$ | $325\,{\rm GeV}$ | $324\,{\rm GeV}$ | $305\,{\rm GeV}$ | $278\,{\rm GeV}$ | $255\,{\rm GeV}$ | $231\,{\rm GeV}$ | $129\,{\rm GeV}$
Table 3: 95% CL bounds that derive from the $tX+E_{T}^{\mathrm{miss}}$ search strategy described in Section 4.1 for seven different DM masses. All bounds assume $3\,{\rm ab}^{-1}$ of integrated luminosity collected at a CM energy of $14\,{\rm TeV}$. Only the parameter shown in each line is taken into account, while all the remaining couplings in (1) are set to zero. See text for further explanations.
| DM mass
---|---
Parameter | $70\,{\rm GeV}$ | $100\,{\rm GeV}$ | $200\,{\rm GeV}$ | $300\,{\rm GeV}$ | $400\,{\rm GeV}$ | $500\,{\rm GeV}$ | $1\,{\rm TeV}$
$f/\sqrt{|c_{t}|}$ | $96\,{\rm GeV}$ | $95\,{\rm GeV}$ | $90\,{\rm GeV}$ | $81\,{\rm GeV}$ | $74\,{\rm GeV}$ | $65\,{\rm GeV}$ | $36\,{\rm GeV}$
Table 4: As Table 3 but for the $j+E_{T}^{\mathrm{miss}}$ search strategy
described in Section 4.2.
The 95% CL bounds that follow from our $j+E_{T}^{\mathrm{miss}}$ search
strategy are collected in Table 4. As discussed at the end of Section 2, mono-
jet searches only allow one to test the Wilson coefficient $c_{t}$ of the Yukawa-
type DM-top operator in (1). It is evident from the shown results that the
mono-jet bounds on $f/\sqrt{|c_{t}|}$ are not competitive with those obtained
from $tX+E_{T}^{\mathrm{miss}}$. We add that neglecting the uncertainty on the
shape of the $E_{T}^{\mathrm{miss}}$ distribution (see Section 4.2) in our
$j+E_{T}^{\mathrm{miss}}$ analysis would improve the given 95% CL limits by
around 35%. However, even then the mono-jet limits on $f/\sqrt{|c_{t}|}$ fall
short of the bounds obtained from our $tX+E_{T}^{\mathrm{miss}}$ search
strategy. Like in the case of the $tX+E_{T}^{\mathrm{miss}}$ bounds, at high
DM mass the $j+E_{T}^{\mathrm{miss}}$ limits should only be taken as
indicative, because an effective field theory description may not be
applicable in this regime. Benchmark scenarios with more than one non-zero
pNGB DM effective field theory coefficient $c_{i}$, $\lambda$ and $d_{j}$ are
discussed in Section 8.
## 6 Constraints from invisible Higgs decays at the LHC
The terms in the first line of (1) will lead to invisible Higgs decays at tree
level if this process is kinematically allowed, i.e. for $m_{\chi}<m_{h}/2$.
The relevant partial Higgs decay width reads
$\Gamma\left(h\to\chi^{\ast}\chi\right)=\frac{v^{2}}{16\hskip
0.35565pt\pi\hskip 0.35565ptm_{h}}\,\sqrt{1-\frac{4\hskip
0.7113ptm_{\chi}^{2}}{m_{h}^{2}}}\,\left(\frac{m_{h}^{2}\hskip
0.7113ptc_{d}}{f^{2}}-\lambda\right)^{2}\,.$ (4)
This formula can be used to translate experimental limits on the Higgs
invisible branching ratio ${\rm BR}\left(h\to{\rm inv}\right)$ into
constraints on $f/\sqrt{|c_{d}|}$ and $|\lambda|$. In fact, in the limit
$m_{\chi}\ll m_{h}/2$ one obtains the 95% CL exclusion limits
$\frac{f}{\sqrt{|c_{d}|}}>1.5\,{\rm TeV}\,,\qquad|\lambda|<7.2\cdot
10^{-3}\qquad(\text{LHC Run II})\,,$ (5)
by employing the best existing LHC bound of ${\rm BR}\left(h\to{\rm
inv}\right)<0.11$ ATL (2020b). At the HL-LHC it may be possible to set a limit
on the Higgs invisible branching ratio of ${\rm BR}\left(h\to{\rm
inv}\right)<2.5\cdot 10^{-2}$ Cepeda _et al._ (2019). This implies that the
bounds (5) may be improved to
$\frac{f}{\sqrt{|c_{d}|}}>2.2\,{\rm TeV}\,,\qquad|\lambda|<3.3\cdot
10^{-3}\qquad(\text{HL-LHC})\,.$ (6)
Similar limits have also been given in Ruhdorfer _et al._ (2020). Although
the exclusion limits (5) and (6) have been derived under the assumption that
either $c_{d}$ or $\lambda$ is non-zero but not both, the obtained stringent
limits indicate that invisible Higgs decays are the main avenue to probe the
pNGB DM couplings $c_{d}$ and $\lambda$ for DM masses $m_{\chi}<m_{h}/2$.
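The translation of a limit on ${\rm BR}\left(h\to{\rm inv}\right)$ into the bounds (5) and (6) can be sketched numerically from (4); here we take $m_{\chi}\ll m_{h}/2$, switch on one coupling at a time and assume the SM Higgs width $\Gamma_{\rm SM}\simeq 4.07\,{\rm MeV}$:

```python
import math

V, MH, GAMMA_SM = 246.0, 125.0, 4.07e-3   # vev, Higgs mass, SM width [GeV]

def f_bound(br_limit):
    """Smallest f (|c_d| = 1, lambda = 0, m_chi << m_h/2) compatible with
    BR(h -> inv) = Gamma_inv / (Gamma_SM + Gamma_inv) < br_limit, using
    Gamma_inv = v^2 m_h^3 c_d^2 / (16 pi f^4) from Eq. (4)."""
    g_max = br_limit / (1.0 - br_limit) * GAMMA_SM
    return (V**2 * MH**3 / (16.0 * math.pi * g_max)) ** 0.25

def lam_bound(br_limit):
    """Largest |lambda| (c_d = 0, m_chi << m_h/2) compatible with the same
    limit, using Gamma_inv = v^2 lambda^2 / (16 pi m_h) from Eq. (4)."""
    g_max = br_limit / (1.0 - br_limit) * GAMMA_SM
    return math.sqrt(16.0 * math.pi * MH * g_max / V**2)

print(f_bound(0.11), lam_bound(0.11))     # ~1.5 TeV, ~7.2e-3: Eq. (5)
print(f_bound(0.025), lam_bound(0.025))   # ~2.2 TeV, ~3.3e-3: Eq. (6)
```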
At the loop level the first interaction term in the second line of (1) can
also lead to invisible Higgs decays, because the Yukawa-type DM-top operator
mixes into the marginal Higgs portal operator through fermion loops (see the
left Feynman diagram in Figure 3). Assuming that the marginal Higgs portal
coupling vanishes at the scale $\mu_{f}=O\left(f\right)$, we obtain the
following leading-logarithmic (LL) result
$\lambda=-\frac{3\hskip 0.7113ptm_{h}^{2}\hskip 0.7113pty_{t}^{2}\hskip
0.7113ptc_{t}}{8\hskip 0.7113pt\pi^{2}\hskip 0.7113ptf^{2}}\,\ln\hskip
1.42262pt\frac{\mu_{f}}{\mu_{h}}\,,$ (7)
for the marginal Higgs portal coupling at the EW scale
$\mu_{h}=O\left(m_{h}\right)$. Notice that despite the fact that the
contributions of the Yukawa-type DM-top operator to the invisible decays of
the Higgs are loop suppressed the resulting constraints can still be important
given the stringent bounds on ${\rm BR}\left(h\to{\rm inv}\right)$ that the
HL-LHC is expected to set. For instance, taking as an example $c_{t}=1$,
$y_{t}\simeq 0.94$, $\mu_{f}=f$ and $\mu_{h}=m_{h}$, we find numerically that
the bound on $|\lambda|$ quoted in (6) leads to the limit
$f>450\,{\rm GeV}\qquad(c_{t}=1\,,\text{HL-LHC})\,,$ (8)
on the suppression scale of the Yukawa-type DM-top interactions introduced in
(1). In contrast to the Yukawa-type DM-top operator, the current-current type
DM-quark operators do not mix into the DM-Higgs operators appearing in (1)
since the sum over all one-loop diagrams of the type shown on the right-hand
side of Figure 3 vanishes. The pNGB DM current-current type interactions
therefore cannot be constrained by invisible Higgs decays even if
$m_{\chi}<m_{h}/2$.
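The limit (8) follows from inserting the LL expression (7) into the HL-LHC projection $|\lambda|<3.3\cdot 10^{-3}$ of (6); a quick numerical check with the inputs quoted above:

```python
import math

MH, YT = 125.0, 0.94   # Higgs mass [GeV] and top Yukawa coupling

def lam_ll(f, ct=1.0):
    """LL-induced marginal portal coupling at mu_h = m_h, Eq. (7),
    assuming lambda(mu_f = f) = 0 with mu_f = f and mu_h = m_h."""
    return (-3.0 * MH**2 * YT**2 * ct / (8.0 * math.pi**2 * f**2)
            * math.log(f / MH))

# The HL-LHC projection |lambda| < 3.3e-3 is saturated at f ~ 450 GeV,
# reproducing the bound (8):
print(abs(lam_ll(450.0)))   # ~3.3e-3
```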
Figure 3: Left: An example of a diagram that describes the mixing of the
Yukawa-type DM-top operator into the marginal Higgs portal operator. Right:
Example graph that could lead to a mixing of the current-current type DM-top
operator into the DM-Higgs operators in (1). See text for further
explanations.
## 7 Constraints from DM (in)direct detection and the relic density
Even under the assumption that the interactions in (1) provide the leading
new-physics effects at the scale $\mu_{f}$ at which the spin-$0$ fields emerge
as composite pNGBs, the inclusion of radiative corrections can spoil this
picture at the low energies probed in DM-nucleon scattering or DM annihilation
(see Hisano _et al._ (2010); Freytsis and Ligeti (2011); Hisano _et al._
(2011); Hill and Solon (2012); Frandsen _et al._ (2012); Haisch and
Kahlhoefer (2013); Hill and Solon (2014); Crivellin _et al._ (2014);
Crivellin and Haisch (2014); D’Eramo and Procura (2015); D’Eramo _et al._
(2016); Bishara _et al._ (2020) for further examples of relevant loop
corrections in DM interactions). In fact, in the case at hand, we find that
loop diagrams like those displayed in Figure 4 induce couplings between DM and
the $U(1)_{Y}$ gauge boson or a pair of gluons. After EW symmetry breaking the
DM gauge-boson interactions relevant for DM-nucleon scattering can be cast
into the form
${\cal L}_{\chi V}=\frac{i\hskip 0.7113pte\hskip 0.7113ptc_{A}}{16\hskip
0.7113pt\pi^{2}\hskip 0.7113ptf^{2}}\hskip
1.42262pt\chi^{\ast}\overset{\leftrightarrow}{\partial_{\mu}}\hskip
1.42262pt\chi\hskip 0.7113pt\partial_{\nu}F^{\mu\nu}+\frac{g_{s}^{2}\hskip
0.7113ptd_{G}}{16\hskip 0.7113pt\pi^{2}\hskip 0.7113ptf^{2}}\hskip
1.42262pt|\chi|^{2}\hskip 0.7113ptG_{\mu\nu}^{a}G^{a,\mu\nu}\,,$ (9)
where $e\simeq 0.3$ is the elementary electromagnetic charge, $g_{s}\simeq
1.2$ denotes the strong coupling constant and $F_{\mu\nu}$ represents the
electromagnetic field strength tensor. The leading contributions to the Wilson
coefficients of the operators in (9) read
$c_{A}=\frac{4}{3}\left(d_{q_{L}}+2d_{t_{R}}-d_{b_{R}}\right)\,\ln\hskip
1.42262pt\frac{\mu_{f}}{\mu_{h}}\,,\qquad d_{G}=-\frac{c_{t}}{3}\,.$ (10)
Notice that the Wilson coefficient $c_{A}$ contains only the LL correction
associated to operator mixing, while the result for $d_{G}$ corresponds to a
finite matching correction obtained in the limit of infinite top-quark mass.
Including the tree-level contributions that arise from the marginal Higgs
portal operator appearing in (1) as well as loop-induced interactions
described by (10), the spin-independent (SI) DM-nucleon cross section can be
written as
$\sigma_{\rm SI}=\frac{1}{\pi}\left(\frac{m_{\chi}\hskip
0.7113ptm_{N}}{m_{\chi}+m_{N}}\right)^{2}\frac{1}{A^{2}}\hskip
0.7113pt\left\\{\hskip 1.42262pt\frac{A\hskip 0.7113ptm_{N}}{2\hskip
0.7113ptm_{\chi}}\left[\left(1-\frac{7\hskip
0.7113ptf^{N}_{T_{G}}}{9}\right)\frac{\lambda}{m_{h}^{2}}-\frac{2\hskip
0.7113ptf^{N}_{T_{G}}\hskip 0.35565ptd_{G}}{9\hskip
0.7113ptf^{2}}\right]+\frac{Z\hskip 0.7113pte^{2}\hskip
0.7113ptc_{A}}{16\hskip 0.7113pt\pi^{2}\hskip 0.7113ptf^{2}}\hskip
1.42262pt\right\\}^{2}\,.$ (11)
Here $A$ ($Z$) is the mass (atomic) number of the nucleus, $m_{N}\simeq
0.939\,{\rm GeV}$ denotes the average nucleon mass, and
$f^{N}_{T_{G}}=1-\sum_{q=u,d,s}f_{T_{q}}^{N}\simeq 0.89$ is the effective
gluon-nucleon coupling; its numerical value corresponds to the values
$f_{T_{u}}^{N}\simeq 0.019$, $f_{T_{d}}^{N}\simeq 0.045$ and
$f_{T_{s}}^{N}\simeq 0.043$ Junnarkar and Walker-Loud (2013); Hoferichter _et
al._ (2015) for the quark-nucleon matrix elements. Furthermore, notice that
the contribution in (11) proportional to $c_{A}$ arises from $t$-channel
photon exchange and that the corresponding form factors simply count the
number of valence quarks of the nucleons, i.e. $f^{p}_{V_{u}}=f^{n}_{V_{d}}=2$
and $f^{p}_{V_{d}}=f^{n}_{V_{u}}=1$.
Figure 4: Left: Example diagram that describes the LL contribution of the
current-current type DM-fermion operators to the Wilson coefficient of the DM-
photon operator appearing in (9). Right: A possible graph involving the
insertion of the Yukawa-type DM-top operator that leads to a finite matching
correction to the Wilson coefficient of the DM-gluon operator in (9). See text
for further details.
For $m_{\chi}=100\,{\rm GeV}$ the latest XENON1T 90% CL upper limit on the SI
DM-nucleon cross section reads $\sigma_{\rm SI}<9.12\cdot 10^{-47}\,{\rm
cm^{2}}$ Aprile _et al._ (2018). Using (11) with $A=131$ and $Z=54$ for
xenon, this bound can be readily translated into limits on the Wilson
coefficients of the relevant pNGB DM operators in (1). In the case of the
marginal Higgs portal, we find in agreement with Ruhdorfer _et al._ (2020)
the 90% CL exclusion limit
$|\lambda|<1.0\cdot 10^{-2}\,.$ (12)
Setting $c_{t}=1$ in (7) and (10), using $\mu_{f}=f$ and
$\mu_{h}=m_{h}$, and setting $d_{q_{L}}=d_{t_{R}}=d_{b_{R}}=1$ in (10), we
obtain in addition the lower bounds
$\begin{split}&f>510\,{\rm GeV}\qquad(c_{t}=1)\,,\\\\[5.69054pt] &f>1.3\,{\rm
TeV}\qquad\hskip 4.2679pt(d_{q_{L}}=d_{t_{R}}=d_{b_{R}}=1)\,,\end{split}$ (13)
on the suppression scale of the Yukawa-type and the current-current type DM-
fermion interactions entering (1), respectively. Although we have considered
in all cases only the effect of one type of pNGB DM operator at the scale
$\mu_{f}$ at a time, the limits (12) and (13) show that the null results of
the DM direct detection experiments generically allow one to set stringent
bounds on the Wilson coefficients of the marginal Higgs portal and the pNGB
DM-fermion operators in (1). In contrast, the derivative Higgs portal operator
remains unconstrained by DM direct detection even after one-loop corrections
are included in the calculation of the SI DM-nucleon cross section.
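As a numerical cross-check of (12), the SI cross section (11) can be evaluated keeping only the tree-level term proportional to $\lambda$ (xenon target, inputs as quoted in the text):

```python
import math

GEV2_TO_CM2 = 3.894e-28          # (hbar c)^2 in cm^2 GeV^2
MN, MH, FTG = 0.939, 125.0, 0.89  # nucleon mass, Higgs mass, f_TG^N

def sigma_si_lambda(lam, mchi, A=131):
    """SI DM-nucleon cross section of Eq. (11), keeping only the
    tree-level marginal-portal term proportional to lambda."""
    mu = mchi * MN / (mchi + MN)                               # reduced mass
    amp = A * MN / (2.0 * mchi) * (1.0 - 7.0 * FTG / 9.0) * lam / MH**2
    return mu**2 / (math.pi * A**2) * amp**2 * GEV2_TO_CM2

# |lambda| = 1.0e-2 at m_chi = 100 GeV gives ~9.2e-47 cm^2, right at the
# XENON1T limit of 9.12e-47 cm^2 quoted in the text, reproducing Eq. (12).
print(sigma_si_lambda(1.0e-2, 100.0))
```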
In order to understand the physics of DM indirect detection and thermal
freeze-out in composite pNGB DM models, we first write the velocity-averaged
cross section for annihilation of DM into a SM final state $X$ as
$\left\langle\sigma\left(\chi^{\ast}\chi\to
X\right)v\right\rangle\left(T\right)=a_{X}+T\hskip 0.7113ptb_{X}\,.$ (14)
Here $T$ denotes the DM temperature and thus the coefficient $a_{X}$ ($b_{X}$)
describes the $s$-wave ($p$-wave) contribution. Notice that in today’s
Universe $T_{0}\simeq 0$, while at freeze-out $T_{f}\simeq m_{\chi}/25$. This
means that the $p$-wave coefficient $b_{X}$ can usually be neglected in the
calculation of the DM indirect detection constraints, while it can be relevant
in the computation of the relic abundance $\Omega_{\chi}h^{2}$, in particular
if the corresponding $s$-wave coefficient $a_{X}$ is parametrically
suppressed.
An example where such a parametric suppression is at work in the context of
(1) is the annihilation of DM into a bottom-antibottom quark pair, i.e.
$\chi^{\ast}\chi\to b\bar{b}$. In this case, we find that the relevant
$s$-wave and $p$-wave coefficients are well approximated by
$a_{b\bar{b}}\simeq\frac{3\hskip 0.7113ptm_{b}^{2}}{4\pi}\left|\hskip
0.7113pt\frac{1}{4\hskip 0.7113ptm_{\chi}^{2}-m_{h}^{2}+i\hskip
0.7113ptm_{h}\hskip 0.7113pt\Gamma_{h}}\,\left(\frac{4\hskip
0.7113ptm_{\chi}^{2}\hskip 0.7113ptc_{d}}{f^{2}}-\lambda\right)\hskip
0.7113pt\right|^{2}\,,\qquad b_{b\bar{b}}\simeq\frac{3\hskip
0.7113ptm_{\chi}}{8\hskip 0.35565pt\pi}\frac{d_{q_{L}}^{\hskip
0.35565pt2}+d_{b_{R}}^{\hskip 0.35565pt2}}{f^{4}}\,,$ (15)
if the DM mass is sufficiently above the bottom-quark threshold at
$m_{\chi}=m_{b}\simeq 4.2\,{\rm GeV}$. In the above expression for
$a_{b\bar{b}}$, the total decay width of the Higgs boson including
contributions from $h\to\chi^{\ast}\chi$ $\big{(}$see Section 6$\big{)}$ is
denoted by $\Gamma_{h}$. For $m_{b}<m_{\chi}\lesssim m_{W}$ with the $W$-boson
mass $m_{W}\simeq 80.4\,{\rm GeV}$, the $\chi^{\ast}\chi\to b\bar{b}$ channel
generically provides the dominant mechanism to set $\Omega_{\chi}h^{2}$ in
composite pNGB DM models described by (1). In fact, it turns out that for
$m_{\chi}\ll m_{h}/2$ the velocity suppression of the $p$-wave contribution in
(15) is less severe than the bottom-mass suppression of the $s$-wave
contribution in (15). The current-current type DM-fermion operators introduced
in (1) can therefore play an important role in thermal freeze-out for
$m_{\chi}<m_{h}/2$.
For $m_{\chi}\gtrsim m_{W}$ the $\chi^{\ast}\chi\to W^{+}W^{-},ZZ,hh,t\bar{t}$
channels dominate DM annihilation. These processes all receive unsuppressed
$s$-wave contributions, rendering the associated $p$-wave contributions
phenomenologically irrelevant. For DM masses sufficiently far above the EW
scale, we find the following approximations for the $s$-wave coefficients
$\begin{split}a_{X}\simeq\frac{N_{X}\hskip
0.7113ptm_{\chi}^{2}}{4\pi}\left[\frac{c_{d}}{f^{2}}-\frac{\lambda}{4\hskip
0.7113ptm_{\chi}^{2}}\right]^{2}\,,\qquad a_{t\bar{t}}\simeq\frac{3\hskip
0.7113ptm_{t}^{2}}{4\pi}\left[\frac{c_{d}+c_{t}}{f^{2}}-\frac{\lambda}{4\hskip
0.7113ptm_{\chi}^{2}}\right]^{2}\,,\end{split}$ (16)
where $X=W^{+}W^{-},ZZ,hh$ and $N_{W^{+}W^{-}}=2$, $N_{ZZ}=N_{hh}=1$. The
above results can be shown to agree with the calculations performed in
McDonald (1994) after taking the limit of large DM mass. Notice that in this
limit, DM annihilation to $W$ and $Z$ bosons reduces to three times the
contribution from annihilation to the Higgs boson, as expected in the
$S\\!U(2)_{L}\times U(1)_{Y}$ symmetric limit. Given that the size of the
marginal Higgs portal coupling $\lambda$ is strongly constrained by DM direct
detection $\big{(}$see (12)$\big{)}$, the expressions (16) also imply that in
viable composite pNGB DM models the derivative Higgs portal operator
generically provides the dominant contribution to DM annihilation for
$m_{\chi}\gg m_{t}$. As a result, thermal freeze-out becomes a
model-independent prediction in this limit, in the sense that the value of
$\Omega_{\chi}h^{2}$ to first approximation depends only on $m_{\chi}$ and
$f/\sqrt{|c_{d}|}$.
Figure 5: Example diagrams that lead to the process
$\chi^{\ast}\chi\to\gamma\gamma$. Further details can be found in the text.
In addition to the DM annihilation channels discussed so far, DM annihilation
into monochromatic photons can provide a relevant indirect-detection
signature in composite pNGB DM models. As shown in Figure 5, this signature
receives two types of contributions. The first is associated to $s$-channel
exchange of a Higgs boson with subsequent decay of the Higgs into a pair of
photons, i.e. $\chi^{\ast}\chi\to h\to\gamma\gamma$, and proceeds through the
insertion of a DM-Higgs operator and a loop of top quarks (left diagram) or
$W$ bosons (middle diagram). The corresponding form factors describing fermion
and gauge-boson loops are given by
$\begin{split}F_{\psi}\hskip 0.7113pt(\tau)&=\frac{3\hskip
0.7113pt\tau}{2}\left[1+\left(1-\tau\right)\arctan^{2}\frac{1}{\sqrt{\tau-1}}\right]\,,\\\\[5.69054pt]
F_{V}\hskip 0.7113pt(\tau)&=\frac{1}{7}\left[2+3\hskip 0.7113pt\tau+3\hskip
0.7113pt\tau\left(2-\hskip
0.7113pt\tau\right)\arctan^{2}\frac{1}{\sqrt{\tau-1}}\right]\,,\end{split}$
(17)
respectively, and are normalised such that $F_{\psi}\hskip
0.7113pt(\infty)=F_{V}\hskip 0.7113pt(\infty)=1$. The second type of
contributions involves the insertion of the Yukawa-type DM-top operator
introduced in (1) and leads directly to the $\chi^{\ast}\chi\to\gamma\gamma$
transition via a top-quark loop (right diagram in Figure 5). Including both
types of contributions, the $s$-wave coefficient corresponding to
$\chi^{\ast}\chi\to\gamma\gamma$ annihilation can be written as
$a_{\gamma\gamma}=\frac{\alpha^{2}m_{\chi}^{2}}{8\hskip
0.7113pt\pi^{3}}\left|\hskip 0.7113pt\frac{1}{4\hskip
0.7113ptm_{\chi}^{2}-m_{h}^{2}+i\hskip 0.7113ptm_{h}\hskip
0.7113pt\Gamma_{h}}\left(\frac{4\hskip 0.7113ptm_{\chi}^{2}\hskip
0.7113ptc_{d}}{f^{2}}-\lambda\right)\left[\frac{8\hskip 0.7113ptF_{\psi}\hskip
0.7113pt(\tau_{t})}{9}-\frac{7\hskip 0.7113ptF_{V}\hskip
0.7113pt(\tau_{W})}{2}\right]+\frac{8\hskip
0.7113ptc_{t}}{9f^{2}}F_{\psi}\hskip 0.7113pt(\tau_{t})\hskip
0.7113pt\right|^{2}\,,$ (18)
where $\tau_{i}=m_{i}^{2}/m_{\chi}^{2}-i\varepsilon$ with $\varepsilon$ being
a positive infinitesimal real number. Notice that the $s$-channel Higgs
exchange contribution in (18) is resonantly enhanced at $m_{\chi}=m_{h}/2$,
and as a result the DM indirect detection constraints from the observation of
$\gamma$-ray lines are generically most stringent in the vicinity of the Higgs
pole.
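The loop functions (17) are straightforward to implement for $\tau>1$, i.e. for DM masses below the corresponding particle threshold; the analytic continuation to $\tau<1$ via the $-i\varepsilon$ prescription of (18) is omitted in this sketch:

```python
import math

def F_psi(tau):
    """Fermion-loop form factor of Eq. (17), valid for tau > 1."""
    a = math.atan(1.0 / math.sqrt(tau - 1.0))
    return 1.5 * tau * (1.0 + (1.0 - tau) * a * a)

def F_V(tau):
    """Gauge-boson-loop form factor of Eq. (17), valid for tau > 1."""
    a = math.atan(1.0 / math.sqrt(tau - 1.0))
    return (2.0 + 3.0 * tau + 3.0 * tau * (2.0 - tau) * a * a) / 7.0

# Both are normalised to unity in the heavy-loop limit tau -> infinity:
print(F_psi(1.0e8), F_V(1.0e8))   # both ~1
```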
Based on (14) to (16), the present abundance of DM in the Universe is
approximately given by the following formula
$\frac{\Omega_{\chi}h^{2}}{0.12}\simeq\frac{3\cdot 10^{-26}\,{\rm cm}^{3}/{\rm
s}}{\langle\sigma v\rangle_{f}}\,,\qquad\langle\sigma
v\rangle_{f}=\frac{1}{2}\sum_{X}\left\langle\sigma\left(\chi^{\ast}\chi\to
X\right)v\right\rangle\big{(}T_{f}\big{)}\,,$ (19)
where the sum over $X$ involves all annihilation channels that are
kinematically accessible at a given DM mass. Notice that the factor of $1/2$
in the definition of $\langle\sigma v\rangle_{f}$ takes into account that DM
is not self-conjugate in our case. The same factor of $1/2$ appears when one
calculates the $\gamma$-ray flux from the annihilation cross section (18).
While (19) represents a useful expression to estimate $\Omega_{\chi}h^{2}$, we
will use micrOMEGAs Bélanger _et al._ (2018) in our numerical analysis of the
constraints on the pNGB DM parameter space following from the requirement to
reproduce the relic abundance of $\Omega_{\chi}h^{2}=0.120\pm 0.001$ as
measured by PLANCK Aghanim _et al._ (2020). micrOMEGAs is also used to
determine the DM indirect detection exclusion limits.
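Combining (16) and (19), the relic density of the derivative Higgs portal can be estimated without the full micrOMEGAs machinery; the following rough sketch sets $\lambda=c_{t}=0$, assumes $m_{t}=173\,{\rm GeV}$ and neglects threshold and resonance effects:

```python
import math

GEV2_TO_CM3S = 1.167e-17   # (hbar c)^2 * c in cm^3/s per GeV^-2
MT = 173.0                 # top-quark mass [GeV]

def sigma_v_freezeout(mchi, f, cd=1.0):
    """<sigma v> at freeze-out from the s-wave coefficients of Eq. (16):
    X = WW, ZZ, hh (N_X = 2, 1, 1) plus t tbar, including the factor 1/2
    of Eq. (19) for non-self-conjugate DM."""
    a_vv_hh = 4.0 * mchi**2 / (4.0 * math.pi) * (cd / f**2) ** 2
    a_tt = 3.0 * MT**2 / (4.0 * math.pi) * (cd / f**2) ** 2
    return 0.5 * (a_vv_hh + a_tt) * GEV2_TO_CM3S

def omega_h2(mchi, f, cd=1.0):
    """Eq. (19): Omega h^2 ~ 0.12 * 3e-26 cm^3/s / <sigma v>_f."""
    return 0.12 * 3.0e-26 / sigma_v_freezeout(mchi, f, cd)

# Larger f means weaker annihilation and hence a larger relic density
# (Omega h^2 scales as f^4 in this approximation):
print(omega_h2(500.0, 1000.0))   # under-abundant
print(omega_h2(500.0, 2000.0))
```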
## 8 Discussion
In Figures 6 to 8 we summarise the most important constraints in the
$m_{\chi}\hskip 0.7113pt$–$\hskip 0.7113ptf$ plane for the three benchmark
models with $c_{d}=1$, $c_{d}=c_{t}=1$ and
$c_{d}=d_{q_{L}}=d_{t_{R}}=d_{b_{R}}=1$. Similar benchmark models have also
been considered in Ruhdorfer _et al._ (2020). The pNGB DM effective field
theory parameters not shown in the headline of each figure are set to zero to
obtain the displayed results. The dark red and blue regions are excluded by
the projected HL-LHC limit on the Higgs invisible branching ratio of ${\rm
BR}\left(h\to{\rm inv}\right)<2.5\cdot 10^{-2}$ Cepeda _et al._ (2019) and by
the 90% CL bounds on the SI DM-nucleon cross section set by XENON1T Aprile
_et al._ (2018), respectively. The vertical grey bands indicate the DM mass
ranges that are excluded at 95% CL by the $\gamma$-ray observations of dwarf
spheroidal galaxies (dSphs) of the Fermi-LAT and DES collaborations in Albert
_et al._ (2017). The experimental bounds employed assume DM annihilation into
$b\bar{b}$ final states and that the measured relic density is reproduced. The
latest Fermi-LAT search for $\gamma$-ray lines Ackermann _et al._ (2015)
leads to weaker constraints, excluding only the DM mass range
$62.5\,{\rm GeV}\lesssim m_{\chi}\lesssim 64\,{\rm GeV}$, compared to
$\chi^{\ast}\chi\to b\bar{b}$ even if a favourable DM distribution
$\big{(}$such as an adiabatically contracted Navarro-Frenk-White profile
Navarro _et al._ (1996)$\big{)}$ is used to calculate the limits. These
bounds are hence not shown in Figures 6 to 9. The green curves correspond to
the PLANCK value $\Omega_{\chi}h^{2}=0.12$ Aghanim _et al._ (2020) of the DM
relic abundance. The orange regions displayed in the figures correspond to the
95% CL exclusion limits found in Ruhdorfer _et al._ (2020) from a HL-LHC
study of off-shell invisible Higgs production in the VBF channel. The magenta
domains finally correspond to the 95% CL constraints obtained by the
$tX+E_{T}^{\mathrm{miss}}$ analysis strategy discussed in Section 4.1.
Figure 6: Constraints in the $m_{\chi}\hskip 0.7113pt$–$\hskip 0.7113ptf$
plane for the derivative Higgs portal model. The pNGB DM effective field
theory parameters not shown in the headline of the plot are set to zero to
obtain the displayed results. The dark red region is excluded by the projected
HL-LHC 95% CL limit on the Higgs invisible branching ratio of ${\rm
BR}\left(h\to{\rm inv}\right)<2.5\cdot 10^{-2}$ Cepeda _et al._ (2019). The
vertical grey band displays the DM mass range that is excluded at 95% CL by
the dSphs analysis of Fermi-LAT and DES Albert _et al._ (2017) assuming
$\chi^{\ast}\chi\to b\bar{b}$ annihilation. The green curve corresponds to the
value $\Omega_{\chi}h^{2}=0.12$ of the DM relic density as determined by
PLANCK Aghanim _et al._ (2020). In the parameter space above the green curves
the Universe is overclosed. The orange region indicates the 95% CL exclusion
limit derived in Ruhdorfer _et al._ (2020) from a study of off-shell
invisible Higgs production in the VBF channel at the HL-LHC, while the magenta
region represents the corresponding exclusion limit obtained by our
$tX+E_{T}^{\mathrm{miss}}$ search strategy. Consult the main text for further
details. Figure 7: As Figure 6 but for the pNGB DM benchmark model with
$c_{d}=c_{t}=1$. The blue region is excluded by the 90% CL bound on the SI DM-
nucleon cross section $\sigma_{\rm SI}$ as determined by XENON1T Aprile _et
al._ (2018). Figure 8: As Figure 7 but for the pNGB DM benchmark model with
$c_{d}=d_{q_{L}}=d_{t_{R}}=d_{b_{R}}=1$. Figure 9: Constraints in the
$m_{\chi}$–$|\lambda|$ plane for the marginal
Higgs portal model. Apart from the fact that in the parameter space below the
green curve the Universe is overclosed, the meaning and colour coding of the
shown constraints match those of Figure 7.
In the case of the derivative Higgs portal model, one observes from Figure 6
that in the Higgs on-shell region corresponding to $m_{\chi}<m_{h}/2$, HL-LHC
measurements of invisible Higgs decays exclude large parts of the parameter
space that leads to the correct DM relic density via standard thermal freeze-
out. Only a narrow corridor around the Higgs resonance survives this
constraint, which is however excluded by DM indirect detection measurements.
Since the DM-nucleon scattering rate is momentum suppressed, the stringent
limits from DM direct detection experiments do not put constraints on the pNGB
DM benchmark model with only $c_{d}=1$. This opens up the possibility to test
such models with $m_{\chi}>m_{h}/2$ using mono-$X$ searches at the HL-LHC,
though only if these models lead to a DM underabundance, i.e.
$\Omega_{\chi}h^{2}<0.12$. Given that the VBF limits taken from Ruhdorfer _et
al._ (2020) are around $30\%$ better than the $tX+E_{T}^{\mathrm{miss}}$
bounds on $f$, the best test of the derivative Higgs portal model in the Higgs
off-shell region seems to be provided by invisible Higgs production in the VBF
channel. In this context it is however important to realise that the study
Ruhdorfer _et al._ (2020) assumes a systematic uncertainty on the relevant SM
background of $1\%$, while the shown $tX+E_{T}^{\mathrm{miss}}$ exclusion is
based on a systematic uncertainty on the relevant SM background of $15\%$ (see
Section 4.1). Assuming a reduction of the systematic background uncertainties
in $tX+E_{T}^{\mathrm{miss}}$ down to $5\%$ would bring the VBF and
$tX+E_{T}^{\mathrm{miss}}$ exclusion limits closer together. See Appendix A
for details.
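The qualitative behaviour of these exclusion limits under different systematic assumptions can be illustrated with a back-of-the-envelope significance estimate. The sketch below is our own illustration, not the analysis of the paper: the actual limits are derived with the CLs method, the yields are hypothetical, and `significance` uses the simple approximation $Z = S/\sqrt{B + (\varepsilon B)^2}$ for a fractional systematic background uncertainty $\varepsilon$.

```python
import math

def significance(s, b, eps):
    # Simple approximation Z = S / sqrt(B + (eps*B)^2); the paper's actual
    # limits use the CLs method, so this only illustrates the trend.
    return s / math.sqrt(b + (eps * b) ** 2)

s, b = 20.0, 100.0  # hypothetical signal and background yields
for eps in (0.15, 0.05, 0.01):
    print(f"eps = {eps:4.2f}: Z = {significance(s, b, eps):.2f}")
```

Once $\varepsilon B$ drops below the statistical term $\sqrt{B}$, further reductions of $\varepsilon$ gain little, which mirrors why lowering the systematic uncertainty from 15% to 5% matters much more than going from 5% to 1%.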
As can be seen from Figures 7 and 8, the HL-LHC potential to test viable
models through mono-$X$ searches is less favourable in the case of the pNGB DM
benchmarks with $c_{d}=c_{t}=1$ or $c_{d}=d_{q_{L}}=d_{t_{R}}=d_{b_{R}}=1$
since in these cases the limits from DM direct detection, though loop
suppressed, turn out to be still severe. In the first case the LL corrections
to $\lambda$ in (7) and the finite matching correction to $d_{G}$ in (10) are
both relevant, while in the second case the LL corrections to $c_{A}$ in (10)
play an essential role in determining the correct DM direct detection limits.
The above LL corrections have not been discussed in the work Ruhdorfer _et
al._ (2020), but it is known (see for example Hill and Solon (2012); Frandsen
_et al._ (2012); Haisch and Kahlhoefer (2013); Crivellin _et al._ (2014);
Crivellin and Haisch (2014); D’Eramo and Procura (2015); D’Eramo _et al._
(2016); Bishara _et al._ (2020)) that the inclusion of radiative corrections
can have important effects in the calculation of $\sigma_{\rm SI}$. Comparing
the VBF and $tX+E_{T}^{\mathrm{miss}}$ constraints, one sees that in both
cases $c_{d}=c_{t}=1$ and $c_{d}=d_{q_{L}}=d_{t_{R}}=d_{b_{R}}=1$ the limits
on $f$ derived here are stronger than the bounds that have been obtained in
Ruhdorfer _et al._ (2020). This result follows straightforwardly from the
fact that invisible VBF Higgs off-shell production is only sensitive to
$c_{d}$, while the $tX+E_{T}^{\mathrm{miss}}$ signature receives contributions
from $c_{d}$ but also from $c_{t}$, $d_{q_{L}}$ and $d_{t_{R}}$.
In Figure 9 we finally summarise the constraints on the marginal Higgs portal
model set by DM (in)direct detection experiments, the relic density and future
HL-LHC searches. One observes that the constraints on $|\lambda|$ from DM
direct detection and the HL-LHC are comparable for DM masses
$m_{\chi}<m_{h}/2$. However, in the case $m_{\chi}>m_{h}/2$ the bounds that
follow from $\sigma_{\rm SI}$ are by more than two orders of magnitude
stronger than those that one can hope to obtain at the HL-LHC from mono-$X$
searches. Like in the case of the derivative Higgs portal model, off-shell
invisible Higgs production in the VBF channel Ruhdorfer _et al._ (2020) again
seems to be the best way to probe the marginal Higgs portal model at the LHC
if $m_{\chi}>m_{h}/2$. This conclusion once more depends on the actual size of
systematic background uncertainties of the VBF and $tX+E_{T}^{\mathrm{miss}}$
channels in the HL-LHC environment. Combining the two mono-$X$ channels as
done in the case of the LHC searches for the invisible Higgs boson decays (see
for instance ATL (2020b); Aaboud _et al._ (2019b); Sirunyan _et al._
(2019c); CMS (2019)) can be expected to improve the ultimate HL-LHC reach.
Performing an actual combination of the VBF and $tX+E_{T}^{\mathrm{miss}}$
channels is however beyond the scope of this article. We add that the
potential of the high-energy option of the LHC, the future circular hadron-
hadron collider, the compact linear collider and a muon collider in
constraining the marginal Higgs portal through VBF off-shell Higgs production
has been studied in the article Ruhdorfer _et al._ (2020). See also Matsumoto
_et al._ (2010); Kanemura _et al._ (2011); Chacko _et al._ (2014); Craig
_et al._ (2016); Ko and Yokoya (2016) for similar analyses.
pNGB DM models in which both the SM Higgs boson as well as the DM candidate
are composites of a TeV-scale strongly-coupled sector provide a simultaneous
explanation of the EW hierarchy problem and the DM puzzle. Key features in
this class of beyond the SM theories are that the SM Higgs boson and the DM
particle are both naturally light, and that the leading coupling between DM
and the SM is the derivative Higgs portal. This portal is strongly suppressed
in the regime of small momentum transfer that is probed by DM scattering with
heavy nuclei, making this type of WIMP easily compatible with the existing
strong constraints from DM direct detection experiments. At the same time, the
interaction strength of DM annihilation turns out to be in the right range to
obtain the observed relic density through thermal freeze-out without tuning.
However, as we have shown in our work, this simple and attractive picture can
be significantly altered by explicit symmetry breaking effects that lead to
pNGB DM interactions beyond the derivative Higgs portal. In fact, once
radiative effects are taken into account, only pNGB DM realisations of the
form (1) with $c_{d}\neq 0$ and all other pNGB DM effective field theory
parameters sufficiently small typically survive the constraints from DM direct
detection experiments. In such scenarios, collider searches for DM production
are the only known direct way to explore the pNGB DM parameter space. If the
DM candidate is kinematically accessible, searches for invisible Higgs boson
decays play a key role in such explorations, while DM masses above the Higgs
threshold can be probed by studying mono-$X$ signatures. In our article, we
have extended the earlier study of off-shell invisible Higgs production via
VBF Ruhdorfer _et al._ (2020) by developing a search strategy that allows one to
probe pNGB DM using $tX+E_{T}^{\mathrm{miss}}$ signatures. The
$tX+E_{T}^{\mathrm{miss}}$ channels are complementary to VBF Higgs production
since they are able to test pNGB DM interactions like the Yukawa-type DM-top
coupling and the current-current type interactions in (1) that are not
accessible via the latter mode. Together with Ruhdorfer _et al._ (2020) the
work presented here provides the blueprints to search for pNGB DM at the LHC,
and we encourage ATLAS and CMS to perform dedicated experimental searches and
interpretations of the relevant mono-$X$ signatures.
###### Acknowledgements.
We thank Maximilian Ruhdorfer, Ennio Salvioni and Andreas Weiler for useful
discussions, for their helpful comments on this manuscript and for providing
us with the computer codes employed in their paper Ruhdorfer _et al._ (2020)
to determine the DM indirect detection and relic density constraints on
composite pNGB DM models. Our analytic calculations made use of the
Mathematica packages FeynArts Hahn (2001), FormCalc Hahn and Perez-Victoria
(1999); Hahn _et al._ (2016) and FeynCalc Mertig _et al._ (1991);
Shtabovenko _et al._ (2016, 2020). This research has been partially supported
by the Collaborative Research Center SFB1258.
## Appendix A Supplementary material
Figure 10: 95% CL constraints in the $m_{\chi}$–$f$ plane for the derivative
Higgs portal model (upper panel) and in the $m_{\chi}$–$|\lambda|$ plane for the
marginal Higgs portal model (lower panel). The orange regions correspond to
the 95% CL exclusion limits determined in Ruhdorfer _et al._ (2020) from a
HL-LHC study of off-shell invisible Higgs production in the VBF channel, while
the magenta contours represent the results of our $tX+E_{T}^{\mathrm{miss}}$
search assuming a systematic background uncertainty of 15% (solid curves), 5%
(dashed curves) and 1% (dotted curves).
In this appendix we present HL-LHC projections based on alternative, more
aggressive assumptions about the systematic uncertainties of our
$tX+E_{T}^{\mathrm{miss}}$ search strategy. Anticipating improvements in
detector performance and modelling of SM background processes, we assume that
the systematic uncertainties on the number of expected events in the signal
regions SR1, SR2 and SR3 are reduced from 15% to 5% and 1%. In Figure 10 we
show the 95% CL constraints in the $m_{\chi}$–$f$ plane for the derivative
Higgs portal model (upper panel) and in the $m_{\chi}$–$|\lambda|$ plane for the
marginal Higgs portal model (lower panel). The orange regions indicate the
exclusion limits derived in the study of off-shell invisible Higgs production
in the VBF channel Ruhdorfer _et al._ (2020). The displayed results assume a
1% systematic uncertainty on the relevant SM backgrounds. For comparison we
show in magenta the 95% CL limits that derive from the
$tX+E_{T}^{\mathrm{miss}}$ search strategy discussed in Section 4.1. Here the
solid, dashed and dotted contours correspond to assumed systematic background
uncertainties of 15%, 5% and 1%, respectively. It is evident from both panels
that reducing the systematic uncertainties from 15% to 5% has a visible impact
on the obtained $tX+E_{T}^{\mathrm{miss}}$ exclusion limits, while a further
uncertainty reduction to 1% has only a minor effect on the bounds in the shown
parameter planes. Notice that a reduction of the systematic uncertainties to
5% may be possible given the steady progress of both experiment and theory. In
the case of the marginal Higgs portal, such an improvement would lead to a
reach in the $tX+E_{T}^{\mathrm{miss}}$ channel that is very similar to the
one of VBF invisible Higgs production in the off-shell region.
## References
* Klasen _et al._ (2015) M. Klasen, M. Pohl, and G. Sigl, Prog. Part. Nucl. Phys. 85, 1 (2015), arXiv:1507.03800 [hep-ph].
* Schumann (2019) M. Schumann, J. Phys. G 46, 103003 (2019), arXiv:1903.03026 [astro-ph.CO].
* Aaboud _et al._ (2019a) M. Aaboud _et al._ (ATLAS), JHEP 05, 142 (2019a), arXiv:1903.01400 [hep-ex].
* Frigerio _et al._ (2012) M. Frigerio, A. Pomarol, F. Riva, and A. Urbano, JHEP 07, 015 (2012), arXiv:1204.2808 [hep-ph].
* Barger _et al._ (2009) V. Barger, P. Langacker, M. McCaskey, M. Ramsey-Musolf, and G. Shaughnessy, Phys. Rev. D 79, 015018 (2009), arXiv:0811.0393 [hep-ph].
* Barger _et al._ (2010) V. Barger, M. McCaskey, and G. Shaughnessy, Phys. Rev. D 82, 035019 (2010), arXiv:1005.3328 [hep-ph].
* Chala (2013) M. Chala, JHEP 01, 122 (2013), arXiv:1210.6208 [hep-ph].
* Marzocca and Urbano (2014) D. Marzocca and A. Urbano, JHEP 07, 107 (2014), arXiv:1404.7419 [hep-ph].
* Barnard _et al._ (2015) J. Barnard, T. Gherghetta, T. S. Ray, and A. Spray, JHEP 01, 067 (2015), arXiv:1409.7391 [hep-ph].
* Fonseca _et al._ (2015) N. Fonseca, R. Zukanovich Funchal, A. Lessa, and L. Lopez-Honorez, JHEP 06, 154 (2015), arXiv:1501.05957 [hep-ph].
* Brivio _et al._ (2016) I. Brivio, M. Gavela, L. Merlo, K. Mimasu, J. No, R. del Rey, and V. Sanz, JHEP 04, 141 (2016), arXiv:1511.01099 [hep-ph].
* Kim _et al._ (2016) M. Kim, S. J. Lee, and A. Parolini, (2016), arXiv:1602.05590 [hep-ph].
* Chala _et al._ (2016) M. Chala, G. Nardini, and I. Sobolev, Phys. Rev. D 94, 055006 (2016), arXiv:1605.08663 [hep-ph].
* Barducci _et al._ (2017) D. Barducci, A. Bharucha, N. Desai, M. Frigerio, B. Fuks, A. Goudelis, S. Kulkarni, G. Polesello, and D. Sengupta, JHEP 01, 078 (2017), arXiv:1609.07490 [hep-ph].
* Wu _et al._ (2017) Y. Wu, T. Ma, B. Zhang, and G. Cacciapaglia, JHEP 11, 058 (2017), arXiv:1703.06903 [hep-ph].
* Balkin _et al._ (2017) R. Balkin, M. Ruhdorfer, E. Salvioni, and A. Weiler, JHEP 11, 094 (2017), arXiv:1707.07685 [hep-ph].
* Balkin _et al._ (2018a) R. Balkin, G. Perez, and A. Weiler, Eur. Phys. J. C 78, 104 (2018a), arXiv:1707.09980 [hep-ph].
* Gross _et al._ (2017) C. Gross, O. Lebedev, and T. Toma, Phys. Rev. Lett. 119, 191801 (2017), arXiv:1708.02253 [hep-ph].
* Alanne _et al._ (2018) T. Alanne, D. Buarque Franzosi, M. T. Frandsen, and M. Rosenlyst, JHEP 12, 088 (2018), arXiv:1808.07515 [hep-ph].
* Balkin _et al._ (2018) R. Balkin, M. Ruhdorfer, E. Salvioni, and A. Weiler, JCAP 11, 050 (2018), arXiv:1809.09106 [hep-ph].
* Ishiwata and Toma (2018) K. Ishiwata and T. Toma, JHEP 12, 089 (2018), arXiv:1810.08139 [hep-ph].
* Huitu _et al._ (2019) K. Huitu, N. Koivunen, O. Lebedev, S. Mondal, and T. Toma, Phys. Rev. D 100, 015009 (2019), arXiv:1812.05952 [hep-ph].
* Karamitros (2019) D. Karamitros, Phys. Rev. D 99, 095036 (2019), arXiv:1901.09751 [hep-ph].
* Davoli _et al._ (2019) A. Davoli, A. De Simone, D. Marzocca, and A. Morandini, JHEP 10, 196 (2019), arXiv:1905.13244 [hep-ph].
* Ruhdorfer _et al._ (2020) M. Ruhdorfer, E. Salvioni, and A. Weiler, SciPost Phys. 8, 027 (2020), arXiv:1910.04170 [hep-ph].
* Ramos (2020) M. Ramos, JHEP 07, 128 (2020), arXiv:1912.11061 [hep-ph].
* Arina _et al._ (2020) C. Arina, A. Beniwal, C. Degrande, J. Heisig, and A. Scaffidi, JHEP 04, 015 (2020), arXiv:1912.04008 [hep-ph].
* Abe _et al._ (2020) Y. Abe, T. Toma, and K. Tsumura, JHEP 05, 057 (2020), arXiv:2001.03954 [hep-ph].
* Okada _et al._ (2021a) N. Okada, D. Raut, and Q. Shafi, Phys. Rev. D 103, 055024 (2021a), arXiv:2001.05910 [hep-ph].
* Xing _et al._ (2021) C.-Y. Xing, L.-X. Xu, and S.-h. Zhu, Phys. Rev. D 103, 113006 (2021), arXiv:2011.06264 [hep-ph].
* Okada _et al._ (2021b) N. Okada, D. Raut, Q. Shafi, and A. Thapa, (2021b), arXiv:2105.03419 [hep-ph].
* Coito _et al._ (2021) L. Coito, C. Faubel, J. Herrero-Garcia, and A. Santamaria, (2021), arXiv:2106.05289 [hep-ph].
* Haisch _et al._ (2020) U. Haisch, M. Ruhdorfer, E. Salvioni, E. Venturini, and A. Weiler, JHEP 04, 164 (2020), [Erratum: JHEP 07, 066 (2020)], arXiv:2003.05936 [hep-ph].
* (34) U. Haisch, M. Ruhdorfer, E. Salvioni, E. Venturini, and A. Weiler, in preparation.
* Sirunyan _et al._ (2017a) A. Sirunyan _et al._ (CMS), Eur. Phys. J. C 77, 845 (2017a), arXiv:1706.02581 [hep-ex].
* Aaboud _et al._ (2018a) M. Aaboud _et al._ (ATLAS), Eur. Phys. J. C 78, 18 (2018a), arXiv:1710.11412 [hep-ex].
* Aad _et al._ (2021a) G. Aad _et al._ (ATLAS), (2021a), arXiv:2101.12527 [hep-ex].
* Lin _et al._ (2013) T. Lin, E. W. Kolb, and L.-T. Wang, Phys. Rev. D 88, 063510 (2013), arXiv:1303.6638 [hep-ph].
* Buckley _et al._ (2015) M. R. Buckley, D. Feld, and D. Goncalves, Phys. Rev. D 91, 015017 (2015), arXiv:1410.6497 [hep-ph].
* Haisch and Re (2015) U. Haisch and E. Re, JHEP 06, 078 (2015), arXiv:1503.00691 [hep-ph].
* Arina _et al._ (2016) C. Arina _et al._, JHEP 11, 111 (2016), arXiv:1605.09242 [hep-ph].
* Haisch _et al._ (2017) U. Haisch, P. Pani, and G. Polesello, JHEP 02, 131 (2017), arXiv:1611.09841 [hep-ph].
* Sirunyan _et al._ (2018) A. M. Sirunyan _et al._ (CMS), Phys. Rev. D 97, 032009 (2018), arXiv:1711.00752 [hep-ex].
* Sirunyan _et al._ (2019a) A. M. Sirunyan _et al._ (CMS), Phys. Rev. Lett. 122, 011803 (2019a), arXiv:1807.06522 [hep-ex].
* Haisch and Polesello (2019a) U. Haisch and G. Polesello, JHEP 02, 029 (2019a), arXiv:1812.00694 [hep-ph].
* Sirunyan _et al._ (2019b) A. M. Sirunyan _et al._ (CMS), JHEP 03, 141 (2019b), arXiv:1901.01553 [hep-ex].
* Aad _et al._ (2021b) G. Aad _et al._ (ATLAS), JHEP 04, 174 (2021b), arXiv:2012.03799 [hep-ex].
* Aad _et al._ (2021c) G. Aad _et al._ (ATLAS), JHEP 04, 165 (2021c), arXiv:2102.01444 [hep-ex].
* Durieux _et al._ (2018) G. Durieux, J. Gu, E. Vryonidou, and C. Zhang, Chin. Phys. C 42, 123107 (2018), arXiv:1809.03520 [hep-ph].
* Bonnefoy _et al._ (2021) Q. Bonnefoy, L. Di Luzio, C. Grojean, A. Paul, and A. N. Rossia, JHEP 05, 153 (2021), arXiv:2012.07740 [hep-ph].
* Feruglio (2021) F. Feruglio, JHEP 03, 128 (2021), arXiv:2012.13989 [hep-ph].
* Alloul _et al._ (2014) A. Alloul, N. D. Christensen, C. Degrande, C. Duhr, and B. Fuks, Comput. Phys. Commun. 185, 2250 (2014), arXiv:1310.1921 [hep-ph].
* Degrande _et al._ (2012) C. Degrande, C. Duhr, B. Fuks, D. Grellscheid, O. Mattelaer, and T. Reiter, Comput. Phys. Commun. 183, 1201 (2012), arXiv:1108.2040 [hep-ph].
* Alwall _et al._ (2014) J. Alwall, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer, H. S. Shao, T. Stelzer, P. Torrielli, and M. Zaro, JHEP 07, 079 (2014), arXiv:1405.0301 [hep-ph].
* Sjöstrand _et al._ (2015) T. Sjöstrand, S. Ask, J. R. Christiansen, R. Corke, N. Desai, P. Ilten, S. Mrenna, S. Prestel, C. O. Rasmussen, and P. Z. Skands, Comput. Phys. Commun. 191, 159 (2015), arXiv:1410.3012 [hep-ph].
* Ball _et al._ (2015) R. D. Ball _et al._ (NNPDF), JHEP 04, 040 (2015), arXiv:1410.8849 [hep-ph].
* Artoisenet _et al._ (2013) P. Artoisenet, R. Frederix, O. Mattelaer, and R. Rietkerk, JHEP 03, 015 (2013), arXiv:1212.3460 [hep-ph].
* Campbell _et al._ (2015) J. M. Campbell, R. K. Ellis, P. Nason, and E. Re, JHEP 04, 114 (2015), arXiv:1412.1828 [hep-ph].
* Re (2011) E. Re, Eur. Phys. J. C 71, 1547 (2011), arXiv:1009.2450 [hep-ph].
* Melia _et al._ (2011) T. Melia, P. Nason, R. Röntsch, and G. Zanderighi, JHEP 11, 078 (2011), arXiv:1107.5051 [hep-ph].
* Nason and Zanderighi (2014) P. Nason and G. Zanderighi, Eur. Phys. J. C 74, 2702 (2014), arXiv:1311.1365 [hep-ph].
* Alioli _et al._ (2010) S. Alioli, P. Nason, C. Oleari, and E. Re, JHEP 06, 043 (2010), arXiv:1002.2581 [hep-ph].
* Czakon and Mitov (2014) M. Czakon and A. Mitov, Comput. Phys. Commun. 185, 2930 (2014), arXiv:1112.5675 [hep-ph].
* Czakon _et al._ (2013) M. Czakon, P. Fiedler, and A. Mitov, Phys. Rev. Lett. 110, 252004 (2013), arXiv:1303.6254 [hep-ph].
* Anastasiou _et al._ (2004) C. Anastasiou, L. J. Dixon, K. Melnikov, and F. Petriello, Phys. Rev. D 69, 094008 (2004), arXiv:hep-ph/0312266.
* Gavin _et al._ (2013) R. Gavin, Y. Li, F. Petriello, and S. Quackenbush, Comput. Phys. Commun. 184, 208 (2013), arXiv:1201.5896 [hep-ph].
* Haisch and Polesello (2019b) U. Haisch and G. Polesello, JHEP 02, 128 (2019b), arXiv:1812.08129 [hep-ph].
* Haisch and Polesello (2021) U. Haisch and G. Polesello, JHEP 05, 057 (2021), arXiv:2012.11474 [hep-ph].
* Catani _et al._ (2001) S. Catani, F. Krauss, R. Kuhn, and B. Webber, JHEP 11, 063 (2001), arXiv:hep-ph/0109231.
* Aad _et al._ (2021d) G. Aad _et al._ (ATLAS), Phys. Rev. D 103, 112006 (2021d), arXiv:2102.10874 [hep-ex].
* Cacciari _et al._ (2008) M. Cacciari, G. P. Salam, and G. Soyez, JHEP 04, 063 (2008), arXiv:0802.1189 [hep-ph].
* Cacciari _et al._ (2012) M. Cacciari, G. P. Salam, and G. Soyez, Eur. Phys. J. C 72, 1896 (2012), arXiv:1111.6097 [hep-ph].
* Aad _et al._ (2008) G. Aad _et al._ (ATLAS), JINST 3, S08003 (2008).
* Aad _et al._ (2009) G. Aad _et al._ (ATLAS), (2009), arXiv:0901.0512 [hep-ex].
* Aad _et al._ (2019) G. Aad _et al._ (ATLAS), Eur. Phys. J. C 79, 970 (2019), arXiv:1907.05120 [hep-ex].
* Haisch and Polesello (2018) U. Haisch and G. Polesello, JHEP 09, 151 (2018), arXiv:1807.07734 [hep-ph].
* Graesser and Shelton (2013) M. L. Graesser and J. Shelton, Phys. Rev. Lett. 111, 121802 (2013), arXiv:1212.4495 [hep-ph].
* Aad _et al._ (2014) G. Aad _et al._ (ATLAS), JHEP 11, 118 (2014), arXiv:1407.0583 [hep-ex].
* ATL (2018) _Object-based missing transverse momentum significance in the ATLAS detector_, Tech. Rep. ATLAS-CONF-2018-038 (CERN, Geneva, 2018).
* Lester and Summers (1999) C. G. Lester and D. J. Summers, Phys. Lett. B 463, 99 (1999), arXiv:hep-ph/9906349.
* Aad _et al._ (2020) G. Aad _et al._ (ATLAS), Eur. Phys. J. C 80, 737 (2020), arXiv:2004.14060 [hep-ex].
* Dercks _et al._ (2017) D. Dercks, N. Desai, J. S. Kim, K. Rolbiecki, J. Tattersall, and T. Weber, Comput. Phys. Commun. 221, 383 (2017), arXiv:1611.09856 [hep-ph].
* de Favereau _et al._ (2014) J. de Favereau, C. Delaere, P. Demin, A. Giammanco, V. Lemaître, A. Mertens, and M. Selvaggi (DELPHES 3), JHEP 02, 057 (2014), arXiv:1307.6346 [hep-ex].
* Aaboud _et al._ (2016) M. Aaboud _et al._ (ATLAS), Phys. Rev. D 94, 032005 (2016), arXiv:1604.07773 [hep-ex].
* Aaboud _et al._ (2018b) M. Aaboud _et al._ (ATLAS), JHEP 01, 126 (2018b), arXiv:1711.03301 [hep-ex].
* Sirunyan _et al._ (2017b) A. M. Sirunyan _et al._ (CMS), JHEP 07, 014 (2017b), arXiv:1703.01651 [hep-ex].
* ATL (2020a) _Search for new phenomena in events with jets and missing transverse momentum in $pp$ collisions at $\sqrt{s}$ = 13 TeV with the ATLAS detector_, Tech. Rep. ATLAS-CONF-2020-048 (CERN, Geneva, 2020).
* Lindert _et al._ (2017) J. Lindert _et al._, Eur. Phys. J. C 77, 829 (2017), arXiv:1705.04664 [hep-ph].
* ATL (2020b) _Combination of searches for invisible Higgs boson decays with the ATLAS experiment_, Tech. Rep. ATLAS-CONF-2020-052 (CERN, Geneva, 2020).
* Cepeda _et al._ (2019) M. Cepeda _et al._ , CERN Yellow Rep. Monogr. 7, 221 (2019), arXiv:1902.00134 [hep-ph].
* Read (2002) A. L. Read, _Advanced Statistical Techniques in Particle Physics. Proceedings, Conference, Durham, UK, March 18-22, 2002_, J. Phys. G28, 2693 (2002).
* Moneta _et al._ (2010) L. Moneta, K. Belasco, K. S. Cranmer, S. Kreiss, A. Lazzaro, D. Piparo, G. Schott, W. Verkerke, and M. Wolf, _Proceedings, 13th International Workshop on Advanced computing and analysis techniques in physics research (ACAT2010)_, PoS ACAT2010, 057 (2010), arXiv:1009.1003 [physics.data-an].
* Hisano _et al._ (2010) J. Hisano, K. Ishiwata, and N. Nagata, Phys. Rev. D 82, 115007 (2010), arXiv:1007.2601 [hep-ph].
* Freytsis and Ligeti (2011) M. Freytsis and Z. Ligeti, Phys. Rev. D 83, 115009 (2011), arXiv:1012.5317 [hep-ph].
* Hisano _et al._ (2011) J. Hisano, K. Ishiwata, N. Nagata, and T. Takesako, JHEP 07, 005 (2011), arXiv:1104.0228 [hep-ph].
* Hill and Solon (2012) R. J. Hill and M. P. Solon, Phys. Lett. B 707, 539 (2012), arXiv:1111.0016 [hep-ph].
* Frandsen _et al._ (2012) M. T. Frandsen, U. Haisch, F. Kahlhoefer, P. Mertsch, and K. Schmidt-Hoberg, JCAP 10, 033 (2012), arXiv:1207.3971 [hep-ph].
* Haisch and Kahlhoefer (2013) U. Haisch and F. Kahlhoefer, JCAP 04, 050 (2013), arXiv:1302.4454 [hep-ph].
* Hill and Solon (2014) R. J. Hill and M. P. Solon, Phys. Rev. Lett. 112, 211602 (2014), arXiv:1309.4092 [hep-ph].
* Crivellin _et al._ (2014) A. Crivellin, F. D’Eramo, and M. Procura, Phys. Rev. Lett. 112, 191304 (2014), arXiv:1402.1173 [hep-ph].
* Crivellin and Haisch (2014) A. Crivellin and U. Haisch, Phys. Rev. D 90, 115011 (2014), arXiv:1408.5046 [hep-ph].
* D’Eramo and Procura (2015) F. D’Eramo and M. Procura, JHEP 04, 054 (2015), arXiv:1411.3342 [hep-ph].
* D’Eramo _et al._ (2016) F. D’Eramo, B. J. Kavanagh, and P. Panci, JHEP 08, 111 (2016), arXiv:1605.04917 [hep-ph].
* Bishara _et al._ (2020) F. Bishara, J. Brod, B. Grinstein, and J. Zupan, JHEP 03, 089 (2020), arXiv:1809.03506 [hep-ph].
* Junnarkar and Walker-Loud (2013) P. Junnarkar and A. Walker-Loud, Phys. Rev. D 87, 114510 (2013), arXiv:1301.1114 [hep-lat].
* Hoferichter _et al._ (2015) M. Hoferichter, J. Ruiz de Elvira, B. Kubis, and U.-G. Meißner, Phys. Rev. Lett. 115, 092301 (2015), arXiv:1506.04142 [hep-ph].
* Aprile _et al._ (2018) E. Aprile _et al._ (XENON), Phys. Rev. Lett. 121, 111302 (2018), arXiv:1805.12562 [astro-ph.CO].
* McDonald (1994) J. McDonald, Phys. Rev. D 50, 3637 (1994), arXiv:hep-ph/0702143.
* Bélanger _et al._ (2018) G. Bélanger, F. Boudjema, A. Goudelis, A. Pukhov, and B. Zaldivar, Comput. Phys. Commun. 231, 173 (2018), arXiv:1801.03509 [hep-ph].
* Aghanim _et al._ (2020) N. Aghanim _et al._ (Planck), Astron. Astrophys. 641, A6 (2020), arXiv:1807.06209 [astro-ph.CO].
* Albert _et al._ (2017) A. Albert _et al._ (Fermi-LAT, DES), Astrophys. J. 834, 110 (2017), arXiv:1611.03184 [astro-ph.HE].
* Ackermann _et al._ (2015) M. Ackermann _et al._ (Fermi-LAT), Phys. Rev. D 91, 122002 (2015), arXiv:1506.00013 [astro-ph.HE].
* Navarro _et al._ (1996) J. F. Navarro, C. S. Frenk, and S. D. M. White, Astrophys. J. 462, 563 (1996), arXiv:astro-ph/9508025.
* Aaboud _et al._ (2019b) M. Aaboud _et al._ (ATLAS), Phys. Rev. Lett. 122, 231801 (2019b), arXiv:1904.05105 [hep-ex].
* Sirunyan _et al._ (2019c) A. M. Sirunyan _et al._ (CMS), Phys. Lett. B 793, 520 (2019c), arXiv:1809.05937 [hep-ex].
* CMS (2019) _First constraints on invisible Higgs boson decays using $t\bar{t}H$ production at $\sqrt{s}$ = 13 TeV_, Tech. Rep. CMS-PAS-HIG-18-008 (CERN, Geneva, 2019).
* Matsumoto _et al._ (2010) S. Matsumoto, K. Fujii, T. Honda, S. Kanemura, T. Nabeshima, N. Okada, Y. Takubo, and H. Yamamoto, in _International Linear Collider Workshop_ (2010) arXiv:1006.5268 [hep-ph].
* Kanemura _et al._ (2011) S. Kanemura, S. Matsumoto, T. Nabeshima, and H. Taniguchi, Phys. Lett. B 701, 591 (2011), arXiv:1102.5147 [hep-ph].
* Chacko _et al._ (2014) Z. Chacko, Y. Cui, and S. Hong, Phys. Lett. B 732, 75 (2014), arXiv:1311.3306 [hep-ph].
* Craig _et al._ (2016) N. Craig, H. K. Lou, M. McCullough, and A. Thalapillil, JHEP 02, 127 (2016), arXiv:1412.0258 [hep-ph].
* Ko and Yokoya (2016) P. Ko and H. Yokoya, JHEP 08, 109 (2016), arXiv:1603.04737 [hep-ph].
* Hahn (2001) T. Hahn, Comput. Phys. Commun. 140, 418 (2001), arXiv:hep-ph/0012260 [hep-ph] .
* Hahn and Perez-Victoria (1999) T. Hahn and M. Perez-Victoria, Comput. Phys. Commun. 118, 153 (1999), arXiv:hep-ph/9807565.
* Hahn _et al._ (2016) T. Hahn, S. Paßehr, and C. Schappacher, PoS LL2016, 068 (2016), arXiv:1604.04611 [hep-ph].
* Mertig _et al._ (1991) R. Mertig, M. Böhm, and A. Denner, Comput. Phys. Commun. 64, 345 (1991).
* Shtabovenko _et al._ (2016) V. Shtabovenko, R. Mertig, and F. Orellana, Comput. Phys. Commun. 207, 432 (2016), arXiv:1601.01167 [hep-ph].
* Shtabovenko _et al._ (2020) V. Shtabovenko, R. Mertig, and F. Orellana, Comput. Phys. Commun. 256, 107478 (2020), arXiv:2001.04407 [hep-ph].
# General parameter-shift rules for quantum gradients
David Wierichs Xanadu, Toronto, ON, M5G 2C8, Canada Institute for
Theoretical Physics, University of Cologne, Germany [email protected]
Josh Izaac Xanadu, Toronto, ON, M5G 2C8, Canada Cody Wang AWS Quantum
Technologies, Seattle, Washington 98170, USA Cedric Yen-Yu Lin AWS Quantum
Technologies, Seattle, Washington 98170, USA
###### Abstract
Variational quantum algorithms are ubiquitous in applications of noisy
intermediate-scale quantum computers. Due to the structure of conventional
parametrized quantum gates, the evaluated functions typically are finite
Fourier series of the input parameters. In this work, we use this fact to
derive new, general parameter-shift rules for single-parameter gates, and
provide closed-form expressions to apply them. These rules are then extended
to multi-parameter quantum gates by combining them with the stochastic
parameter-shift rule. We perform a systematic analysis of quantum resource
requirements for each rule, and show that a reduction in resources is possible
for higher-order derivatives. Using the example of the quantum approximate
optimization algorithm, we show that the generalized parameter-shift rule can
reduce the number of circuit evaluations significantly when computing
derivatives with respect to parameters that feed into many gates. Our approach
additionally reproduces reconstructions of the evaluated function up to a
chosen order, leading to known generalizations of the Rotosolve optimizer and
new extensions of the quantum analytic descent optimization algorithm.
## 1 Introduction
With the advent of accessible, near-term quantum hardware, the ability to
rapidly test and prototype quantum algorithms has never been as approachable
[1, 2, 3, 4]. However, many of the canonical quantum algorithms developed over
the last three decades remain unreachable in practice — requiring a large
number of error-corrected qubits and significant circuit depth. As a result, a
new class of quantum algorithms — variational quantum algorithms (VQAs) [5, 6]
— have come to shape the noisy intermediate-scale quantum (NISQ) era. First
rising to prominence with the introduction of the variational quantum
eigensolver (VQE) [7], they have evolved to cover topics such as optimization
[8], quantum chemistry [9, 10, 11, 12, 13], integer factorization [14],
compilation [15], quantum control [16], matrix diagonalization [17, 18], and
variational quantum machine learning [19, 20, 21, 22, 23, 24, 25, 26, 27, 28,
29, 30, 31].
These algorithms have a common structure: a parametrized circuit is executed
and a cost function is composed from expectation values measured in the
resulting state. A classical optimization routine is then used to optimize the
circuit parameters by minimizing said cost function. Initially, gradient-free
optimization methods, such as Nelder-Mead and COBYLA, were common. However,
gradient-based optimization provides significant advantages, from convergence
guarantees [32] to the availability of workhorse algorithms (e.g., stochastic
gradient descent) and software tooling developed for machine learning [33, 34,
35, 36, 37].
The so-called parameter-shift rule [16, 23, 38, 39] can be used to estimate
the gradient for these optimization techniques, without additional hardware
requirements and — in contrast to naïve numerical methods — without bias; the
cost function is evaluated at two shifted parameter positions, and the
rescaled difference of the results forms an unbiased estimate of the
derivative. However, this two-term parameter-shift rule is restricted to gates
with two distinct eigenvalues, potentially requiring expensive decompositions
in order to compute hardware-compatible quantum gradients [40]. While various
extensions to the shift rule have been discovered, they remain restricted to
gates with a particular number of distinct eigenvalues [10, 41].
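The two-term rule described above can be checked in a few lines. The following numpy sketch (our own illustration; the function names are not from the paper) evaluates $f(x)=\langle 0|\,R_X(x)^\dagger Z\, R_X(x)\,|0\rangle=\cos x$ for a single-qubit rotation, whose generator has two distinct eigenvalues, and verifies that the shifted-evaluation formula reproduces the exact derivative.

```python
import numpy as np

def expval(x):
    # <0| RX(x)^dagger Z RX(x) |0>, which equals cos(x)
    rx = np.array([[np.cos(x/2), -1j*np.sin(x/2)],
                   [-1j*np.sin(x/2), np.cos(x/2)]])
    state = rx @ np.array([1.0, 0.0])
    return np.real(state.conj() @ np.diag([1.0, -1.0]) @ state)

def param_shift_grad(f, x, s=np.pi/2):
    # Two-term parameter-shift rule; exact (not a finite difference)
    # for any shift s with sin(s) != 0, s = pi/2 being the common choice.
    return (f(x + s) - f(x - s)) / (2*np.sin(s))

x = 0.7
assert abs(param_shift_grad(expval, x) - (-np.sin(x))) < 1e-12
assert abs(param_shift_grad(expval, x, s=0.3) - (-np.sin(x))) < 1e-12
```

Unlike a numerical finite difference, the result is independent of the shift size: both `s` values above return the exact gradient up to floating-point error.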
In this manuscript, we use the observation that the restriction of a
variational cost function to a single parameter is a finite Fourier series
[42, 43, 44, 45]; as a result, the restricted cost function can be
_reconstructed_ from circuit evaluations at shifted positions using a discrete
Fourier transform (DFT). By analytically computing the derivatives of the
Fourier series, we extract general parameter-shift rules for arbitrary quantum
gates and provide closed-form expressions to apply them. In the specific case
of unitaries with equidistant eigenvalues, the general parameter-shift rule
recovers known parameter-shift rules from the literature, including the
original two-term parameter-shift rule. We then generalize our approach in two
steps: first from equidistant to arbitrary eigenvalues of the quantum gate,
and from there — by making use of stochastic parameter shifts — to more
complicated unitaries like multi-parameter gates. This enables us to cover
_all_ practically relevant quantum gates. An overview of the existing
parameter-shift rules and our new results is shown in Fig. 1.
Afterwards, we perform an extensive resource analysis to compare the
computational expenses required by both the general shift rule presented here,
and decomposition-based approaches. In particular, we note that evaluating the
cost of gradient recipes by comparing the number of unique executed circuits
leads to fundamentally different conclusions on the optimal differentiation
technique than when comparing the total number of measurements.
Figure 1: Overview of existing and new parameter-shift rules for first-order
univariate derivatives as Venn diagram on the space of quantum gates. Each
rule produces the analytic derivative for a set of gates, with more general
rules reproducing the more specific ones. For gates of the form
$U(x)=\exp(ixG)$ the rules are deterministic (_left_) whereas more involved
gates of the form $U_{F}=\exp(i(xG+F))$ require stochastic evaluations of
shifted values (_right_). See Sec. 2.2 for a summary of previously known shift
rules. The fermionic four-term shift rule in Ref. [41] covers the same gates
as the shown four-term rule (_purple_).
Our analysis is not only fruitful for understanding the structure of
variational cost functions, but also has several practical advantages.
Firstly, second-order derivatives (such as the Hessian [46] and the Fubini-
Study metric tensor [47, 48]) can be computed with fewer evaluations compared
to naïvely iterating the two-term parameter-shift rule. We also show, using
the example of the _quantum approximate optimization algorithm_ (QAOA), that
the generalized parameter-shift rule can reduce the number of quantum circuit
evaluations required for ansätze with repeated parameters.
Finally, we generalize the _quantum analytic descent_ (QAD) algorithm [49]
using the reconstruction of variational cost functions discussed here. We also
reproduce the known generalizations of _Rotosolve_ [50, 51] from single Pauli
rotations to groups of rotations controlled by the same parameter [42, 45];
reconstructing functions with _arbitrary_ spectrum extends this algorithm even
further. Furthermore, the cost reduction for the gradient that we present in
the context of QAOA applies to Rotosolve as well. Similarly, based on the
analysis presented here, future improvements that reduce the cost of gradient
computations might improve the efficiency of these model-based algorithms.
This manuscript is structured as follows. In Sec. 2, we lay out the setting
for our results by deriving the general functional form for variational cost
functions, followed by a survey of existing parameter-shift rules. In Sec. 3
we show how to fully reconstruct univariate variational cost functions from a
finite number of evaluations assuming an equidistant frequency spectrum, and
derive parameter-shift rules for arbitrary-order univariate derivatives,
including a generalization of the stochastic parameter-shift rule. In Sec. 4
we demonstrate how to compute second-order derivatives, in particular the
Hessian and the metric tensor, more cheaply compared to existing methods. In
Sec. 5 we discuss applications, applying the new generalized parameter-shift
rules to QAOA, and using the full univariate reconstruction to extend existing
model-based optimization methods. We end the main text in Sec. 6 with a
discussion of our work and potential future directions. Finally, in the
appendix we summarize some technical derivations (App. A), and extend the
results to more general frequency spectra (App. B). The general stochastic
parameter-shift rule and details on quantum analytic descent can be found in
Apps. C and D.
_Related work:_ In Ref. [42], the functions of VQAs were considered as Fourier
series and parameter-shift rules were derived. Regarding the shift rules, the
authors of Ref. [42] consider integer eigenvalues and derive a rule with
$2R+1$ evaluations for equidistant eigenvalues. In particular, the two-term
and four-term shift rules are reviewed and formulated as special cases with
_fewer_ evaluations than the general result presented there. In contrast, our
work results in the exact generalization of those shift rules, which requires
$2R$ evaluations. Remarkably, Refs. [42, 45] also propose a generalized
Rotosolve algorithm prior to its eponymous paper.
In addition, during the final stages of preparation of this work, a related
work considering algebraic extensions of the parameter-shift rule appeared
online [52]. The general description of quantum expectation values in Sec. 2.1
of the present work, along with its initial consequences in Sec. 3.1, are
shown in Sec. II A of this preprint. We present a simpler derivation and
further explore the implications this description has. The generalization of
the parameter-shift rule in Ref. [52] is obtained by decomposing the gate
generator using Cartan subalgebras, which can yield fewer shifted evaluations
than decompositions of the gate itself. In particular, decompositions into
non-commuting terms, which do not lead to a gate decomposition into native
quantum gates directly, can be used in this approach.
At a similar time, yet another work appeared [53], presenting a derivation
similar to Sec. 2.1 and parameter-shift rules for the first order derivative.
These rules are based on the ideas discussed here in Secs. 3.1 and 3.2.
## 2 Background
We start by deriving the form of a VQA cost function of a single parameter for
a general single-parameter quantum gate. Then we review known parameter-shift
rules and briefly discuss resource measures to compare these gradient recipes.
### 2.1 Cost functions arising from quantum gates
Let us first consider the expectation value for a general gate
$U(x)=\exp(ixG)$, defined by a Hermitian generator $G$ and parametrized by a
single parameter $x$. Let $\ket{\psi}$ denote the quantum state that $U$ is
applied to, and $B$ the measured observable111Here we consider any pure state
in the Hilbert space; in the context of VQAs, $\ket{\psi}$ is the state
prepared by the subcircuit prior to $U(x)$. Similarly, $B$ includes the
subcircuit following up on $U(x)$.. The eigenvalues of $U(x)$ are given by
$\left\\{\exp(i\omega_{j}x)\right\\}_{j\in[d]}$ with real-valued
$\\{\omega_{j}\\}_{j\in[d]}$ where we denote $[d]\coloneqq\\{1,\dots,d\\}$ and
have sorted the $\omega_{j}$ to be non-decreasing. Thus, we have:
$\displaystyle E(x)$
$\displaystyle\coloneqq\bra{\psi}U^{\dagger}(x)BU(x)\ket{\psi}$ (1)
$\displaystyle=\sum_{j,k=1}^{d}\overline{\psi_{j}e^{i\omega_{j}x}}b_{jk}\psi_{k}e^{i\omega_{k}x}$ (2)
$\displaystyle=\sum_{\begin{subarray}{c}j,k=1\\\ j<k\end{subarray}}^{d}\left[\overline{\psi_{j}}b_{jk}\psi_{k}e^{i(\omega_{k}-\omega_{j})x}+\psi_{j}\overline{b_{jk}\psi_{k}e^{i(\omega_{k}-\omega_{j})x}}\right]+\sum_{j=1}^{d}|\psi_{j}|^{2}b_{jj},$ (3)
where we have expanded $B$ and $\ket{\psi}$ in the eigenbasis of $U$, denoted
by $b_{jk}$ and $\psi_{j}$, respectively.
We can collect the $x$-independent part into coefficients
$c_{jk}\coloneqq\overline{\psi_{j}}b_{jk}\psi_{k}$ and introduce the $R$
_unique positive_ differences
$\\{\Omega_{\ell}\\}_{\ell\in[R]}\coloneqq\\{\omega_{k}-\omega_{j}|j,k\in[d],\omega_{k}>\omega_{j}\\}$.
Note that the differences are not necessarily equidistant, and that for
$r=\left|\\{\omega_{j}\\}_{j\in[d]}\right|$ _unique_ eigenvalues of the gate
generator, there are at most $R\leq\frac{r(r-1)}{2}$ unique differences.
However, many quantum gates will yield $R\leq r$ _equidistant_ differences
instead; a common example for this is
$\displaystyle G=\sum_{k=1}^{\mathcal{P}}\pm P_{k}$ (4)
for commuting Pauli words $P_{k}$ ($P_{k}P_{k^{\prime}}=P_{k^{\prime}}P_{k}$),
which yields the frequencies $[\mathcal{P}]$ and thus $R=\mathcal{P}$.
In the following, we implicitly assume a mapping between the two indices
$j,k\in[d]$ and the frequency index $\ell\in[R]$ such that
$c_{\ell}=c_{\ell(j,k)}$ is well-defined222That is,
$\ell(j,k)=\ell(j^{\prime},k^{\prime})\Leftrightarrow\omega_{k}-\omega_{j}=\omega_{k^{\prime}}-\omega_{j^{\prime}}$..
We can then write the expectation value as a trigonometric polynomial (a
finite-term Fourier series):
$\displaystyle E(x)$
$\displaystyle=a_{0}+\sum_{\ell=1}^{R}c_{\ell}e^{i\Omega_{\ell}x}+\sum_{\ell=1}^{R}\overline{c_{\ell}}e^{-i\Omega_{\ell}x}$
(5)
$\displaystyle=a_{0}+\sum_{\ell=1}^{R}a_{\ell}\cos(\Omega_{\ell}x)+b_{\ell}\sin(\Omega_{\ell}x),$
(6)
with frequencies given by the differences $\\{\Omega_{\ell}\\}$, where we
defined $c_{\ell}\eqqcolon\frac{1}{2}(a_{\ell}-ib_{\ell})\;\forall\ell\in[R]$
with $a_{\ell},b_{\ell}\in\mathbb{R}$, and
$a_{0}\coloneqq\sum_{j}|\psi_{j}|^{2}b_{jj}\in\mathbb{R}$.
Since $E(x)$ is a finite-term Fourier series, the coefficients
$\\{a_{\ell}\\}$ and $\\{b_{\ell}\\}$ can be obtained from a finite number of
evaluations of $E(x)$ through a _discrete Fourier transform_. This observation
(and variations thereof in Sec. 3) forms the core of this work: we can obtain
the full functional form of $E(x)$ from a finite number of evaluations of
$E(x)$, from which we can compute arbitrary order derivatives.
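To make this concrete, the following pure-Python sketch (with hypothetical eigenvalues, state amplitudes, and observable entries chosen purely for illustration; none of the numbers come from the paper) compares a directly computed two-level expectation value, Eq. (2), against its Fourier form, Eq. (5):

```python
import cmath

# Hypothetical two-level example: generator eigenvalues w, state amplitudes
# psi, and Hermitian observable B, all expressed in the eigenbasis of G.
w = (0.0, 1.5)
psi = (0.6, complex(0.64, 0.48))           # normalized: 0.36 + 0.64 = 1
B = [[0.3, complex(0.2, -0.5)],
     [complex(0.2, 0.5), -0.7]]            # Hermitian 2x2 observable

def E(x):
    """Direct expectation value <psi| U^dag(x) B U(x) |psi>, as in Eq. (2)."""
    phi = [psi[j] * cmath.exp(1j * w[j] * x) for j in range(2)]
    return sum(phi[j].conjugate() * B[j][k] * phi[k]
               for j in range(2) for k in range(2)).real

# Fourier form of Eq. (5): a single frequency Omega = w[1] - w[0], with
# c = conj(psi_0) b_01 psi_1 and the real constant a0.
c = psi[0].conjugate() * B[0][1] * psi[1]
a0 = abs(psi[0])**2 * B[0][0].real + abs(psi[1])**2 * B[1][1].real
Omega = w[1] - w[0]

def E_fourier(x):
    return a0 + 2 * (c * cmath.exp(1j * Omega * x)).real
```

The two functions agree for every $x$, confirming that the expectation value is a trigonometric polynomial with frequencies given by eigenvalue differences.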
### 2.2 Known parameter-shift rules
_Parameter-shift rules_ relate derivatives of a quantum function to
evaluations of the function itself at different points. In this subsection, we
survey known parameter-shift rules in the literature.
For functions of the form (6) with a single frequency $\Omega_{1}=\Omega$
(i.e., $G$ has two eigenvalues), the derivative can be computed via the
parameter-shift rule [16, 23, 38]
$\displaystyle E^{\prime}(0)=\frac{\Omega}{2\sin(\Omega
x_{1})}[E(x_{1})-E(-x_{1})],$ (7)
where $x_{1}$ is a freely chosen shift angle from $(0,\pi)$ 333The position
$0$ for the derivative is chosen for convenience but the rule can be applied
at any position. To see this, note that shifting the argument of $E$ does not
change its functional form..
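As a minimal numerical illustration (with made-up coefficients, not taken from the paper), Eq. (7) can be checked against the analytic derivative of a single-frequency trigonometric polynomial:

```python
import math

# Hypothetical single-frequency cost function, Eq. (6) with R = 1:
# E(x) = a0 + a1 cos(Omega x) + b1 sin(Omega x), so E'(0) = b1 * Omega.
a0, a1, b1, Omega = 0.2, -0.7, 0.4, 1.0

def E(x):
    return a0 + a1 * math.cos(Omega * x) + b1 * math.sin(Omega * x)

def dE_shift(x1):
    """Two-term parameter-shift estimate of E'(0), Eq. (7)."""
    return Omega / (2 * math.sin(Omega * x1)) * (E(x1) - E(-x1))

exact = b1 * Omega
```

Any shift angle $x_{1}\in(0,\pi)$ reproduces the exact derivative; only the statistical variance of a sampled estimate depends on the choice.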
This rule was generalized to gates with eigenvalues $\\{-1,0,1\\}$, which
leads to $R=2$ frequencies, in Refs. [41, 10] in two distinct ways. The rule
in Ref. [10] is an immediate generalization of the one above:
$\displaystyle E^{\prime}(0)$ $\displaystyle=y_{1}[E(x_{1})-E(-x_{1})]$ (8)
$\displaystyle-y_{2}[E(x_{2})-E(-x_{2})],$
with freely chosen shift angles $x_{1,2}$ and corresponding coefficients
$y_{1,2}$, requiring four evaluations to obtain $E^{\prime}(0)$. A
particularly symmetric choice of shift angles is $x_{1,2}=\pi/2\mp\pi/4$ with
coefficients $y_{1,2}=\frac{\sqrt{2}\pm 1}{2\sqrt{2}}$. In contrast, the rule
in Ref. [41] makes use of an auxiliary gate to implement slightly altered
circuits, leading to a structurally different rule:
$\displaystyle
E^{\prime}(0)=\frac{1}{4}[E^{+}_{+}-E^{+}_{-}+E^{-}_{+}-E^{-}_{-}],$ (9)
where $E^{\alpha}_{\pm}$ is the measured energy when replacing the gate $U(x)$
in question by $U(x\pm\pi/2)\exp(\mp\alpha i\frac{\pi}{4}P_{0})$ and $P_{0}$
is the projector onto the zero-eigenspace of the generator of $U$. Remarkably,
this structure allows a reduction of the number of distinct circuit
evaluations to two if the circuit and the Hamiltonian are real-valued, which
is often the case for simulations of fermionic systems and forms a unique
feature of this approach. This second rule is preferable whenever this
condition is fulfilled, the auxiliary gates $\exp(\pm i\frac{\pi}{4}P_{0})$
are available, and simultaneously the number of distinct circuits is the
relevant resource measure.
Furthermore, the two-term parameter-shift rule Eq. (7) was generalized to
gates with the more complicated gate structure $U_{F}(x)=\exp(i(xG+F))$ via
the _stochastic parameter-shift rule_ [39]
$\displaystyle E^{\prime}(x_{0})=\frac{\Omega}{2\sin(\Omega
x_{1})}\int_{0}^{1}[E_{+}(t)-E_{-}(t)]\mathrm{d}t.$ (10)
Here, $E_{\pm}(t)$ is the energy measured in the state prepared by a modified
circuit that splits $U_{F}(x_{0})$ into $U_{F}(tx_{0})$ and
$U_{F}((1-t)x_{0})$, and interleaves these two gates with $U_{F=0}(\pm
x_{1})$. See Sec. 3.6 and App. C for details. The first-order parameter-shift
rules summarized here and their relationship to each other is also visualized
in Fig. 1.
A parameter-shift rule for higher-order derivatives based on repeatedly
applying the original rule has been proposed in Ref. [46]. The shift can be
chosen smartly so that two function evaluations suffice to obtain the second-
order derivative:
$\displaystyle E^{\prime\prime}(0)=\frac{1}{2}[E(\pi)-E(0)],$ (11)
which like Eq. (7) is valid for single-frequency gates. Various expressions to
compute combinations of derivatives with few evaluations were explored in Ref.
[54].
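The two-evaluation second-derivative rule of Eq. (11) is easy to confirm on a synthetic single-frequency function (coefficients again hypothetical):

```python
import math

# Hypothetical single-frequency cost function with integer frequency:
# E(x) = a0 + a1 cos(x) + b1 sin(x), so E''(0) = -a1.
a0, a1, b1 = 0.2, -0.7, 0.4

def E(x):
    return a0 + a1 * math.cos(x) + b1 * math.sin(x)

d2_shift = 0.5 * (E(math.pi) - E(0.0))    # Eq. (11): two evaluations
d2_exact = -a1
```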
### 2.3 Resource measures for shift rules
While the original parameter-shift rule Eq. (7) provides a unique, unbiased
method to estimate the derivative $E^{\prime}(0)$ via evaluations of $E$ if it
contains a single frequency, we will need to compare different shift rules for
the general case. To this end, we consider two resource measures. Firstly, the
number of distinct circuits that need to be evaluated to obtain all terms of a
shift rule, $N_{\text{eval}}$. This is a meaningful quantity both on
simulators that readily produce many measurement samples after executing each
unique circuit once, and on quantum hardware devices that are available via
cloud services. In the latter case, quantum hardware devices are typically
billed and queued per unique circuit, and as a result $N_{\text{eval}}$ often
dictates both the financial and time cost. Note that the overhead due to
circuit compilation and optimization scales with this quantity as well.
Secondly, we consider the overall number $N$ of measurements — or _shots_ —
irrespective of the number of unique circuits they are distributed across. To
this end, we approximate the physical (one-shot) variance $\sigma^{2}$ of the
cost function $E$ to be constant across its domain444As it is impossible in
general to compute $\sigma^{2}$ analytically, we are forced to make this
potentially very rough approximation.. For an arbitrary quantity $\Delta$
computed from $\mathcal{M}$ values of $E$ via a shift rule,
$\displaystyle\Delta=\sum_{\mu}^{\mathcal{M}}y_{\mu}E(\boldsymbol{x}_{\mu}),$
(12)
we obtain the variance for the estimate of $\Delta$ as
$\displaystyle\varepsilon^{2}=\sum_{\mu}^{\mathcal{M}}|y_{\mu}|^{2}\frac{\sigma^{2}}{N_{\mu}},$
(13)
where $N_{\mu}$ expresses the number of shots used to measure
$E(\boldsymbol{x}_{\mu})$. For a total budget of $N$ shots, the optimal shot
allocation is $N_{\mu}=N|y_{\mu}|/\lVert\boldsymbol{y}\rVert_{1}$ such that
$\displaystyle
N=\frac{\sigma^{2}\lVert\boldsymbol{y}\rVert^{2}_{1}}{\varepsilon^{2}}.$ (14)
This can be understood as the number of shots needed to compute $\Delta$ to a
tolerable standard deviation $\varepsilon$.
The number of shots $N$ is a meaningful quantity for simulators whose runtime
scales primarily with the number of requested samples (e.g., Amazon Braket’s
TN1 tensor network simulator [1]), and for actual quantum devices when
artificial resource measures like pricing per unique circuit and queueing time
do not play a role.
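Under the constant-variance assumption, the optimal shot allocation is straightforward to compute. The sketch below (with hypothetical shift-rule coefficients $y_\mu$) checks that allocating $N_{\mu}\propto|y_{\mu}|$ attains the target variance $\varepsilon^{2}$ of Eq. (13) with the total budget of Eq. (14):

```python
# Hypothetical shift rule Delta = sum_mu y_mu E(x_mu), with one-shot
# variance sigma^2 and target standard deviation eps (Eqs. (12)-(14)).
y = [0.5, -0.5, 0.25, -0.25]
sigma, eps = 1.0, 0.01

l1 = sum(abs(c) for c in y)                  # ||y||_1
N = sigma**2 * l1**2 / eps**2                # total budget, Eq. (14)
shots = [N * abs(c) / l1 for c in y]         # optimal allocation N_mu

# Resulting estimator variance, Eq. (13); it matches eps^2 exactly.
var = sum(abs(c)**2 * sigma**2 / n for c, n in zip(y, shots))
```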
In this work we will mostly use $N_{\text{eval}}$ to compare the requirements
of different parameter-shift rules: it is more accessible, it does not rely on
the assumption of constant physical variance that $N$ requires, and the
coefficients $\boldsymbol{y}$ needed to estimate $N$ are simply not known
analytically in the most general cases. For the case of equidistant
frequencies and shift angles as discussed in Sec. 3.4 we will additionally
compare the number of shots $N$ in Sec. 3.5.
## 3 Univariate cost functions
In this section we study how a quantum cost function, which in general depends
on multiple parameters, varies if only one of these parameters is changed. The
results of this section will be sufficient to evaluate the gradient as well as
the diagonal of the Hessian of a quantum function. We restrict ourselves to
functions that can be written as the expectation value of an observable with
respect to a state that is prepared using a unitary $U(x)=\exp(ixG)$ —
capturing the full dependence on $x$. That is, all parameters but $x$ are
fixed and the operations they control are considered as part of the prepared
state and the observable. As shown in Sec. 2.1, this yields a trigonometric
polynomial, i.e.,
$\displaystyle
E(x)=a_{0}+\sum_{\ell=1}^{R}a_{\ell}\cos(\Omega_{\ell}x)+b_{\ell}\sin(\Omega_{\ell}x).$
(15)
In the following, we will assume the frequencies to be equidistant, i.e.,
$\Omega_{\ell}=\ell\Omega$, and generalize to arbitrary frequencies in App. B.
While it is easy to construct gate sequences that do not lead to equidistant
frequencies, many conventional gates and layers of gates do yield such a
regular spectrum. The equidistant frequency case has two major advantages over
the general case: we can derive closed-form parameter-shift rules (Sec. 3.4);
and the number of circuits required for the parameter-shift rule scales much
better (Sec. 3.5).
Without loss of generality, we further restrict the frequencies to integer
values, i.e., $\Omega_{\ell}=\ell$. For $\Omega\neq 1$, we may rescale the
function argument to achieve $\Omega_{\ell}=\ell$ and once we reconstruct the
rescaled function, the original function is available, too.
### 3.1 Determining the full dependence on $x$
As we have seen, the functional form of $E(x)$ is known exactly. We can thus
determine the function by computing the $2R+1$ coefficients $\\{a_{\ell}\\}$
and $\\{b_{\ell}\\}$. This is the well-studied problem of _trigonometric
interpolation_ (see e.g., [55, Chapter X]).
To determine $E(x)$ completely, we can simply evaluate it at $2R+1$ distinct
points $x_{\mu}\in[-\pi,\pi)$. We obtain a set of $2R+1$ equations
$\displaystyle E(x_{\mu})=a_{0}+\sum_{\ell=1}^{R}a_{\ell}\cos(\ell
x_{\mu})+b_{\ell}\sin(\ell x_{\mu}),\;\mu\in[2R]_{0}$
where we denote $[2R]_{0}\coloneqq\\{0,1,\dots,2R\\}$. We can then solve these
linear equations for $\\{a_{\ell}\\}$ and $\\{b_{\ell}\\}$; this process is in
fact a nonuniform _discrete Fourier transform (DFT)_.
A reasonable choice is $x_{\mu}=\frac{2\pi\mu}{2R+1},\mu=-R,\dots,R$, in which
case the transform is the usual (uniform) DFT. For this choice, an explicit
reconstruction for $E$ follows directly from [55, Chapter X]; we reproduce it
in App. A.1.1.
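For the uniform choice of points, the DFT reduces to explicit sums. The following sketch (with invented coefficients) reconstructs a degree-$R$ trigonometric polynomial exactly from its $2R+1$ values:

```python
import math

# Hypothetical target function with R = 3 integer frequencies:
# a[0] is the constant term; b[0] is unused padding.
R = 3
a = [0.1, -0.6, 0.3, 0.2]
b = [0.0, 0.5, -0.2, 0.4]

def E(x):
    return a[0] + sum(a[l] * math.cos(l * x) + b[l] * math.sin(l * x)
                      for l in range(1, R + 1))

# Evaluate at the 2R+1 uniform points x_mu = 2 pi mu / (2R+1), mu = -R..R,
# and invert with the (real) uniform DFT.
xs = [2 * math.pi * m / (2 * R + 1) for m in range(-R, R + 1)]
Es = [E(x) for x in xs]
a0_rec = sum(Es) / (2 * R + 1)
a_rec = [2 / (2 * R + 1) * sum(Em * math.cos(l * x) for Em, x in zip(Es, xs))
         for l in range(1, R + 1)]
b_rec = [2 / (2 * R + 1) * sum(Em * math.sin(l * x) for Em, x in zip(Es, xs))
         for l in range(1, R + 1)]
```

The recovered coefficients match the originals to machine precision, since the uniform points make the sine/cosine basis exactly orthogonal over the sample set.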
### 3.2 Determining the odd part of $E(x)$
It is often the case in applications that we only need to determine the odd
part of $E$,
$\displaystyle E_{\text{odd}}(x)$ $\displaystyle=\frac{1}{2}(E(x)-E(-x))$ (16)
$\displaystyle=\sum_{\ell=1}^{R}b_{\ell}\sin(\ell x).$ (17)
For example, calculating odd-order derivatives of $E(x)$ at $x=0$ only
requires knowledge of $E_{\text{odd}}(x)$, since those derivatives of the even
part vanish. Note that the reference point with respect to which
$E_{\text{odd}}$ is odd may be chosen arbitrarily, and does not have to be
$0$.
The coefficients in $E_{\text{odd}}$ can be determined by evaluating
$E_{\text{odd}}$ at $R$ distinct points $x_{\mu}$ with $0<x_{\mu}<\pi$. This
gives us a system of $R$ equations
$\displaystyle E_{\text{odd}}(x_{\mu})=\sum_{\ell=1}^{R}b_{\ell}\sin(\ell
x_{\mu}),\quad\mu\in[R]$ (18)
which we can use to solve for the $R$ coefficients $\\{b_{\ell}\\}$.
Using Eq. (16) we see that each evaluation of $E_{\text{odd}}$ can be done
with two evaluations of $E(x)$. Thus, the odd part of $E$ can be completely
determined with $2R$ evaluations of $E$, saving one evaluation compared to the
general case. Note however that the saved $E(0)$ evaluation is evaluated
regardless in many applications, and may be used to recover the full
reconstruction — so, in effect, this saving does not have a significant
impact555If $E(0)$ is available, we can recover the full function, allowing us
to, for example, evaluate its second derivative $E^{\prime\prime}(0)$ “for
free”. However, in practice many more repetitions may be needed for reasonable
accuracy. This fact was already noted in [46] for the $R=1$ case..
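The odd-part procedure can be sketched as follows. For the uniform points $x_{\mu}=\frac{2\pi\mu}{2R+1}$ the sine coefficients follow from an explicit sum; the cost function below is hypothetical, and its even part is deliberately non-trivial to confirm it drops out:

```python
import math

R = 3
b = [0.0, 0.4, -0.3, 0.25]           # target sine coefficients b_1..b_R

def E(x):
    # Hypothetical cost function; the even part (constant and cos terms)
    # cancels in E_odd and is irrelevant for the sine coefficients.
    return 1.3 + 0.8 * math.cos(2 * x) + sum(b[l] * math.sin(l * x)
                                             for l in range(1, R + 1))

xs = [2 * math.pi * m / (2 * R + 1) for m in range(1, R + 1)]
E_odd = [0.5 * (E(x) - E(-x)) for x in xs]      # 2R evaluations of E in total
b_rec = [4 / (2 * R + 1) * sum(Eo * math.sin(l * x)
                               for Eo, x in zip(E_odd, xs))
         for l in range(1, R + 1)]
```

Only $2R$ evaluations of $E$ are used, in line with the counting above.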
### 3.3 Determining the even part of $E(x)$
We might similarly want to obtain the even part of $E$,
$\displaystyle E_{\text{even}}(x)$ $\displaystyle=\frac{1}{2}(E(x)+E(-x))$
(19) $\displaystyle=a_{0}+\sum_{\ell=1}^{R}a_{\ell}\cos(\ell x),$ (20)
which can be used to compute even-order derivatives of $E$.
Determining $E_{\text{even}}(x)$ requires $R+1$ evaluations of
$E_{\text{even}}$, which leads to $2R+1$ evaluations of $E$ for arbitrary
frequencies. However, in the case where $\Omega_{\ell}$ are integers, $R+1$
evaluations of $E_{\text{even}}$ can be obtained with $2R$ evaluations of
$E(x)$ by using periodicity:
$\displaystyle E_{\text{even}}(0)$ $\displaystyle=E(0)$ (21)
$\displaystyle E_{\text{even}}(x_{\mu})$ $\displaystyle=\frac{1}{2}(E(x_{\mu})+E(-x_{\mu})),\quad 0<x_{\mu}<\pi,\ \mu\in[R-1]$ (22)
$\displaystyle E_{\text{even}}(\pi)$ $\displaystyle=E(\pi).$ (23)
Thus, in this case $2R$ evaluations of $E(x)$ suffice to determine its even
part, saving one evaluation over the general case. In contrast to the odd
part, this saving genuinely reduces the required computations as $E(0)$ is
also used in the cheaper computation of $\\{a_{\ell}\\}$; therefore, if $E(0)$
is already known, we only require $2R-1$ new evaluations.
We note that even though both the odd and the even part of $E(x)$ require $2R$
evaluations, the full function can be obtained at the price of $2R+1$
evaluations.
### 3.4 Explicit parameter-shift formulas
Consider again the task of determining $E_{\text{odd}}$ ($E_{\text{even}}$)
based on its value at the shifted points $\\{x_{\mu}\\}$ with $\mu\in[R]$
($\mu\in[R]_{0}$). This can be done by linearly combining elementary functions
that vanish on all but one of the $\\{x_{\mu}\\}$, i.e., kernel functions,
using the evaluation $E(x_{\mu})$ as coefficients. If we restrict ourselves to
evenly spaced points $x_{\mu}=\frac{2\mu-1}{2R}\pi$
($x_{\mu}=\frac{\mu}{R}\pi$), we can choose these functions to be Dirichlet
kernels. In addition to a straightforward reconstruction of the odd (even)
function this delivers the _general parameter-shift rules_ , which we derive
in App. A.1:
$\displaystyle E^{\prime}(0)$
$\displaystyle=\sum_{\mu=1}^{2R}E\left(\frac{2\mu-1}{2R}\pi\right)\frac{(-1)^{\mu-1}}{4R\sin^{2}\left(\frac{2\mu-1}{4R}\pi\right)},$
(24) $\displaystyle E^{\prime\prime}(0)$
$\displaystyle=-E(0)\frac{2R^{2}+1}{6}+\sum_{\mu=1}^{2R-1}E\left(\frac{\mu\pi}{R}\right)\frac{(-1)^{\mu-1}}{2\sin^{2}\left(\frac{\mu\pi}{2R}\right)}.$
(25)
We remark that derivatives of higher order can be obtained in an analogous
manner, and with the same function evaluations for all odd (even) orders.
Furthermore, this result reduces to the known two-term (Eq. (7)) and four-term
(Eq. (8)) parameter-shift rules for $R=1$ and $R=2$, respectively, as well as
the second-order derivative for $R=1$ (Eq. (11)).
We again note that the formulas above use different evaluation points for the
first and second derivatives ($2R$ evaluations for each derivative). Closed-
form parameter-shift rules that use $2R+1$ shared points can be obtained by
differentiating the reconstruction formula Eq. (57).
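The general shift rules of Eqs. (24) and (25) translate directly into code. The sketch below implements both and checks them against the analytic derivatives of a hypothetical $R=3$ trigonometric polynomial:

```python
import math

R = 3
a = [0.1, -0.6, 0.3, 0.2]            # cosine coefficients a_0..a_R
b = [0.0, 0.5, -0.2, 0.4]            # sine coefficients b_1..b_R (b[0] unused)

def E(x):
    return a[0] + sum(a[l] * math.cos(l * x) + b[l] * math.sin(l * x)
                      for l in range(1, R + 1))

def d1(E, R):
    """First derivative at 0 via Eq. (24): 2R evaluations of E."""
    return sum(E((2 * m - 1) / (2 * R) * math.pi) * (-1)**(m - 1)
               / (4 * R * math.sin((2 * m - 1) / (4 * R) * math.pi)**2)
               for m in range(1, 2 * R + 1))

def d2(E, R):
    """Second derivative at 0 via Eq. (25): 2R evaluations including E(0)."""
    return (-E(0.0) * (2 * R**2 + 1) / 6
            + sum(E(m * math.pi / R) * (-1)**(m - 1)
                  / (2 * math.sin(m * math.pi / (2 * R))**2)
                  for m in range(1, 2 * R)))

exact1 = sum(l * b[l] for l in range(1, R + 1))          # E'(0)
exact2 = -sum(l * l * a[l] for l in range(1, R + 1))     # E''(0)
```

For $R=1$ the two functions collapse to Eqs. (7) and (11), as stated in the text.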
### 3.5 Resource comparison
As any unitary may be compiled from (single-qubit) Pauli rotations, which
satisfy the original parameter-shift rule, and CNOT gates, an alternative
approach to compute $E^{\prime}(0)$ is to decompose $U(x)$ into such gates and
combine the derivatives based on the elementary gates. As rotation gates about
any multi-qubit Pauli word satisfy the original parameter-shift rule as well,
a more coarse-grained decomposition might be possible and yield fewer
evaluations for this approach.
For instance, for the $\operatorname{\textsc{MaxCut}}$ QAOA ansatz666A more
detailed description of the QAOA ansatz can be found in Sec. 5.1. on a graph
$G=(\mathcal{V},\mathcal{E})$ with vertices $\mathcal{V}$ and edges
$\mathcal{E}$, one of the operations is to evolve under the problem
Hamiltonian:
$\displaystyle U_{P}(x)$
$\displaystyle\propto\exp\left(-i\frac{x}{2}\sum_{(a,b)\in\mathcal{E}}Z_{a}Z_{b}\right)$
(26)
$\displaystyle=\prod_{(a,b)\in\mathcal{E}}\exp\left(-i\frac{x}{2}Z_{a}Z_{b}\right).$
(27)
Eq. (26) treats $U_{P}(x)$ as a single operation with at most
$M=|\mathcal{E}|$ frequencies $1,\dots,R\leq M$, and we can apply the
generalized parameter-shift rules of this section. Alternatively, we could
decompose $U_{P}(x)$ with Eq. (27), apply the two-term parameter-shift rule to
each $R_{ZZ}$ rotation, and sum up the contributions using the chain rule.
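The number of frequencies $R$ of $U_{P}(x)$ can be found for a small graph by brute force: enumerate the eigenvalues of $G=\frac{1}{2}\sum_{(a,b)\in\mathcal{E}}Z_{a}Z_{b}$ over all computational basis states and collect the unique positive differences. The graph below is a hypothetical example chosen for illustration:

```python
from itertools import product

# Hypothetical graph: 4 vertices, 5 edges (a 4-cycle plus one chord).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

# Eigenvalues of G = 1/2 sum_{(a,b)} Z_a Z_b, enumerated over z in {+1,-1}^n.
eigvals = {sum(z[a] * z[b] for a, b in edges) / 2
           for z in product((1, -1), repeat=n)}
diffs = {w2 - w1 for w1 in eigvals for w2 in eigvals if w2 > w1}
R = len(diffs)       # number of unique positive differences, R <= |edges|
```

For this graph the frequencies come out as $\{1,2,3,4\}$, i.e., $R=4<\mathcal{P}=|\mathcal{E}|=5$, so treating $U_{P}(x)$ as a single operation needs fewer shifted evaluations ($2R=8$) than the decomposition-based approach ($2\mathcal{P}=10$).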
#### 3.5.1 Number of unique circuits
If there are $\mathcal{P}$ gates that depend on $x$ in the decomposition, this
approach requires $2\mathcal{P}$ unique circuit evaluations; as a result, the
general parameter-shift rule is cheaper if $R<\mathcal{P}$. The evaluations
used in the decomposition-based approach cannot be expressed through $E$
directly, because the parameter is shifted in only one of the $\mathcal{P}$
gates per evaluation. This makes the general parameter-shift rule more
convenient; it may also reduce the compilation overhead on quantum hardware
and the number of operations on simulators.
In order to compute $E^{\prime\prime}(0)$ via the decomposition, we need to
obtain and sum the full Hessian of all elementary gates that depend on $x$
(see App. A.4.2), which requires $2\mathcal{P}^{2}-\mathcal{P}+1$ evaluations,
including $E(0)$, and thus is significantly more expensive than the $2R$
evaluations for the general parameter-shift rule.
While the derivatives can be calculated from the functional form of
$E_{\text{odd}}$ or $E_{\text{even}}$, the converse is not true for $R>1$,
i.e., the full functional dependence on $x$ cannot be extracted from the first
and second derivative alone. Therefore, the decomposition-based approach would
demand a full multivariate reconstruction for all $\mathcal{P}$ parametrized
elementary gates to obtain this dependence, requiring
$\mathcal{O}(2^{\mathcal{P}})$ evaluations. The approach shown here allows us
to compute the dependence in $2R+1$ evaluations and thus is the only method
for which the univariate reconstruction is viable.
#### 3.5.2 Number of shots
For equidistant evaluation points, we explicitly know the coefficients of the
first and second-order shift rule given in Eqs. (24, 25), and thus can compare
the variance of the derivatives in the context and under the assumptions of
Sec. 2.3.
The coefficients satisfy (see App. A.4.1)
$\displaystyle\sum_{\mu=1}^{2R}\left(4R\sin^{2}\left(\frac{2\mu-1}{4R}\pi\right)\right)^{-1}$
$\displaystyle=R$
$\displaystyle\frac{2R^{2}+1}{6}+\sum_{\mu=1}^{2R-1}\left(2\sin^{2}\left(\frac{\mu\pi}{2R}\right)\right)^{-1}$
$\displaystyle=R^{2}.$
This means that the variance-minimizing shot allocation requires a shot budget
of
$\displaystyle N_{\text{genPS, 1}}$
$\displaystyle=\frac{\sigma^{2}R^{2}}{\varepsilon^{2}}$ (28) $\displaystyle
N_{\text{genPS, 2}}$ $\displaystyle=\frac{\sigma^{2}R^{4}}{\varepsilon^{2}}$
(29)
using the generalized parameter-shift rule for the first and second
derivative, respectively.
Assuming integer-valued frequencies in the cost function typically means, in
the decomposition-based approach, that $x$ enters the elementary gates without
any additional prefactors777Of course, one can construct less efficient
decompositions that do not satisfy this rule of thumb.. Thus, optimally all
evaluations for the first-order derivative rule are performed with the same
portion of shots; whereas the second-order derivative requires an adapted shot
allocation which, in particular, measures $E(0)$ with high precision as it
enters $E^{\prime\prime}(0)$ with the prefactor $\mathcal{P}/2$. This yields
(see App. A.4.2)
$\displaystyle N_{\text{decomp, 1}}$
$\displaystyle=\frac{\sigma^{2}\mathcal{P}^{2}}{\varepsilon^{2}}$ (30)
$\displaystyle N_{\text{decomp, 2}}$
$\displaystyle=\frac{\sigma^{2}\mathcal{P}^{4}}{\varepsilon^{2}}.$ (31)
Comparing with $N_{\text{genPS, 1}}$ and $N_{\text{genPS, 2}}$ above, we see
that the shot budgets are equal at $\mathcal{P}=R$. That is, for both the
first and second derivative, the general parameter-shift rule does not show
lower shot requirements in general, in contrast to the previous analysis that
showed a significantly smaller number of unique circuits for the second
derivative. This shows that the comparison of recipes for gradients and
higher-order derivatives crucially depends on the chosen resource measure. In
specific cases we may be able to give tighter upper bounds on $R$ so that
$R<\mathcal{P}$ (see Sec. 5.1) and the general shift rule becomes favourable
regarding the shot count as well.
### 3.6 General stochastic parameter-shift rule
Next, we combine our general shift rule with the _stochastic parameter-shift
rule_. For this section we do _not_ assume the frequencies to be equidistant
but address arbitrary spectra directly. Additionally, we make explicit the
reference point $x_{0}$ at which the derivative is computed.
In Ref. [39], the authors derive the stochastic parameter-shift rule for gates
of the form
$\displaystyle U_{F}(x)=\exp(i(xG+F))$ (32)
where $G$ is a Hermitian operator with eigenvalues $\pm 1$ (so that
$G^{2}=\mathds{1}$), e.g., a Pauli word. $F$ is any other Hermitian operator,
which may not necessarily commute with $G$888If $GF=FG$, the exponential may
be split into $\exp(ixG)$ and $\exp(iF)$ and we are back at the situation
$\exp(ixG)$.. Key to the derivation of the stochastic rule is an identity
relating the derivative of the quantum channel
$\mathcal{U}_{F}(x)[\rho]=U_{F}^{\dagger}(x)\rho U_{F}(x)$ to the derivative
of the generator channel $\mathcal{G}(x)[\rho]=i[(xG+F),\rho]$. We may extend
this directly to the general parameter-shift rule for the case when
$G^{2}=\mathds{1}$ is no longer satisfied (see App. C for the derivation):
$\displaystyle E^{\prime}(x_{0})$
$\displaystyle=\int_{0}^{1}\sum_{\mu=1}^{R}y_{\mu}[E_{\mu}(x_{0},t)-E_{-\mu}(x_{0},t)]\mathrm{d}t$
(33) $\displaystyle E_{\pm\mu}(x_{0},t)$ $\displaystyle\coloneqq\langle
B\rangle_{U_{F}(tx_{0})U(\pm x_{\mu})U_{F}((1-t)x_{0})\ket{\psi}}.$
The integration is implemented in practice by sampling values for $t$ for each
measurement of $E_{\mu}(x_{0},t)$ and $E_{-\mu}(x_{0},t)$.
The stochastic parameter-shift rule in combination with the generalized shift
rule in Eq. (24) allows for the differentiation of any unitary with
equidistant frequencies. As $F$ in $U_{F}(x)$ above is allowed to contain
terms that depend on other variational parameters, this includes multi-
parameter gates in particular. Furthermore, combining Eq. (33) with the
generalized shift rule for arbitrary frequencies in Eq. (90) allows us to
compute the derivative of _any_ quantum gate as long as the frequencies of
$U_{F=0}(x)$ are known. We thus obtain an improved rule for $U_{F\neq 0}(x)$
over the original stochastic shift rule whenever the generalized shift rule is
beneficial for $U(x)=U_{F=0}(x)$, compared to the decomposition-based
approach.
## 4 Second-order derivatives
As noted in Sec. 3.3, higher-order derivatives of univariate functions are
easily computed using the even or odd part of the function. In the following
sections, we will extend our discussion to multivariate functions
$E(\boldsymbol{x})$, where derivatives may be taken with respect to different
variables. Each single parameter dependence is assumed to be of the form Eq.
(5), with equidistant (and by rescaling integer-valued) frequencies
$\\{\Omega_{\ell}^{(k)}\\}_{\ell\in[R_{k}]}=[R_{k}]$ for the $k$th parameter.
We may collect the numbers of frequencies in a vector
$(\boldsymbol{R})_{k}=R_{k}$. It will again be useful in the following to make
the reference point $\boldsymbol{x}_{0}$, at which these derivatives are
computed, explicit.
### 4.1 Diagonal shift rule for the Hessian
Here we show how to compute the Hessian $H$ of a multivariate function
$E(\boldsymbol{x})$ at some reference point $\boldsymbol{x}_{0}$ using the
Fourier series representation of $E$. We allow for single-parameter gates
$U(x)=\exp(ixG)$ with equidistant frequencies and will use fewer evaluations
of $E$ than known schemes. An indication that this may be possible for gates
with two eigenvalues was made in [54, Eq. (37)].
First, for the $k$th diagonal entry
$H_{kk}=\partial^{2}_{k}E(\boldsymbol{x}_{0})$ of the Hessian, we previously
noted in Sec. 3.3 that $2R_{k}$ evaluations are sufficient as it is the second
derivative of a univariate restriction of $E$. Recall that one of the $2R_{k}$
evaluations is $E(\boldsymbol{x}_{0})$; we can reuse this evaluation for all
diagonal entries of $H$, and thus require
$1+\sum_{k=1}^{n}(2R_{k}-1)=2\lVert\boldsymbol{R}\rVert_{1}-n+1$ evaluations
for the full diagonal. Further, if we compute the Hessian diagonal
$(\boldsymbol{\nabla}^{\odot 2}E)_{k}\coloneqq\partial_{k}^{2}E$ in addition
to the gradient, we may reuse the $2\lVert\boldsymbol{R}\rVert_{1}$
evaluations computed for the gradient, only requiring a single additional
function value, namely $E(\boldsymbol{x}_{0})$. In this case, we do not make
use of the periodicity
$E(\boldsymbol{x}_{0}+\pi\boldsymbol{v}_{k})=E(\boldsymbol{x}_{0}-\pi\boldsymbol{v}_{k})$,
where $\boldsymbol{v}_{k}$ is the $k$th canonical basis vector, because this
shift is not used in the gradient evaluation (see Sec. 3.2).
Next, for an off-diagonal entry
$H_{km}=\partial_{k}\partial_{m}E(\boldsymbol{x}_{0})$, consider the
_univariate_ trigonometric function that shifts the two parameters $x_{k}$ and
$x_{m}$ _simultaneously_ :
$\displaystyle E^{(km)}(x)\coloneqq E(\boldsymbol{x}_{0}+x\boldsymbol{v}_{k,m}),$ (34)
where we abbreviated
$\boldsymbol{v}_{k,m}\coloneqq\boldsymbol{v}_{k}+\boldsymbol{v}_{m}$. We show
in App. A.2 that $E^{(km)}$ again is a Fourier series of $x$ with
$R_{km}=R_{k}+R_{m}$ equidistant frequencies. This means that we can compute
${E^{(km)}}^{\prime\prime}(0)$ via Eq. (25) with $R=R_{km}$, using $2R_{km}-1$
evaluations of $E$ (as we may reuse $E(\boldsymbol{x}_{0})$ from the diagonal
computation). Note that
$\displaystyle\left.\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}E^{(km)}(x)\right|_{x=0}=H_{kk}+H_{mm}+2H_{km},$
(35)
and that we have already computed the diagonal entries. We thus may obtain
$H_{km}$ via the _diagonal parameter-shift rule_
$\displaystyle H_{km}=\frac{1}{2}\left({E^{(km)}}^{\prime\prime}(0)-H_{kk}-H_{mm}\right).$ (36)
In Fig. 2, we visually compare the computation of $H_{km}$ via the diagonal
shift rule to the chained application of univariate parameter-shift rules for
$x_{k}$ and $x_{m}$.
Figure 2: Visual representation of two approaches to compute a Hessian entry
$H_{km}$ at the position $\boldsymbol{x}_{0}$ (_red cross_). The parameters
$x_{k}$ and $x_{m}$ lie on the coordinate axes and the heatmap displays the
cost function $E(\boldsymbol{x})$. We may either combine the general shift
rule for $x_{k}$ and $x_{m}$ (_grey triangles_) or compute the univariate
derivative ${E^{(km)}}^{\prime\prime}(0)$ and extract $H_{km}$ via Eq. (36)
(_green circles_).
As an example, consider the case when $R_{k}=R_{m}=1$ (e.g., where all
parametrized gates are of the form $\exp(ix_{k}G_{k}/2)$ with
$G_{k}^{2}=\mathds{1}$). By setting $R=2$ in Eq. (25), we obtain the explicit
formula for ${E^{(km)}}^{\prime\prime}(0)$,
$\displaystyle{E^{(km)}}^{\prime\prime}(0)=-\frac{3}{2}E(\boldsymbol{x}_{0})-\frac{1}{2}E(\boldsymbol{x}_{0}+\pi\boldsymbol{v}_{k,m})+E\left(\boldsymbol{x}_{0}+\frac{\pi}{2}\boldsymbol{v}_{k,m}\right)+E\left(\boldsymbol{x}_{0}-\frac{\pi}{2}\boldsymbol{v}_{k,m}\right),$ (37)
which can be combined with Eq. (36) to give an explicit formula for the
Hessian. This formula (for $R_{k}=R_{m}=1$) was already discovered in [54, Eq.
(37)].
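A small numerical sketch illustrates Eqs. (36) and (37) for this $R_{k}=R_{m}=1$ case. The cost function below is an artificial stand-in with a single frequency per parameter, and the evaluation point is arbitrary; all derivatives are obtained from shifted evaluations only:

```python
import numpy as np

# Toy cost function: each parameter enters with a single frequency (R_k = R_m = 1).
def E(xk, xm):
    return np.cos(xk) * np.cos(xm) + 0.5 * np.sin(xk) + 0.3 * np.cos(xm)

xk0, xm0 = 0.3, -0.7

# Diagonal Hessian entries via the univariate R = 1 rule, reusing E(x0):
E0 = E(xk0, xm0)
H_kk = (E(xk0 + np.pi, xm0) - E0) / 2
H_mm = (E(xk0, xm0 + np.pi) - E0) / 2

# Second derivative of the simultaneous restriction E^(km), Eq. (37) with R = 2:
d2 = (-1.5 * E0
      - 0.5 * E(xk0 + np.pi, xm0 + np.pi)
      + E(xk0 + np.pi / 2, xm0 + np.pi / 2)
      + E(xk0 - np.pi / 2, xm0 - np.pi / 2))

# Diagonal shift rule, Eq. (36):
H_km = 0.5 * (d2 - H_kk - H_mm)
print(H_km, np.sin(xk0) * np.sin(xm0))  # both match the analytic mixed derivative
```

For this toy function the mixed derivative is $\sin(x_{k})\sin(x_{m})$, which the shift-rule value reproduces to machine precision.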
The computation of $H_{km}$ along the main diagonal in the
$x_{k}$-$x_{m}$-plane can be modified by making use of the second diagonal as
well: define
$\overline{\boldsymbol{v}}_{k,m}\coloneqq\boldsymbol{v}_{k}-\boldsymbol{v}_{m}$
and $\overline{E}^{(km)}(x)\coloneqq
E(\boldsymbol{x}_{0}+x\overline{\boldsymbol{v}}_{k,m})$, and compute
$\displaystyle\left.\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}\overline{E}^{(km)}(x)\right|_{x=0}=H_{kk}+H_{mm}-2H_{km},$ (38)
$\displaystyle H_{km}=\frac{1}{4}\left({E^{(km)}}^{\prime\prime}(0)-{\overline{E}^{(km)}}^{\prime\prime}(0)\right).$
This means we can replace the dependence on the diagonal elements $H_{kk}$ and
$H_{mm}$ by another univariate second-order derivative on the second diagonal.
We will not analyze the resources required by this method in detail but note
that for many applications it forms a compromise between the two approaches
shown in Fig. 2.
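The anti-diagonal variant of Eq. (38) can be checked with the same kind of single-frequency toy function (the function and evaluation point are again arbitrary choices):

```python
import numpy as np

# Toy cost function with one frequency per parameter (R_k = R_m = 1).
def E(xk, xm):
    return np.cos(xk) * np.cos(xm) + 0.5 * np.sin(xk) + 0.3 * np.cos(xm)

xk0, xm0 = 0.3, -0.7

def second_derivative(f):
    # Eq. (37): second derivative at 0 of a trigonometric polynomial with R = 2
    return -1.5 * f(0.0) - 0.5 * f(np.pi) + f(np.pi / 2) + f(-np.pi / 2)

main = lambda x: E(xk0 + x, xm0 + x)   # shift along v_k + v_m (main diagonal)
anti = lambda x: E(xk0 + x, xm0 - x)   # shift along v_k - v_m (second diagonal)

# Eq. (38): the two univariate second derivatives isolate H_km directly.
H_km = 0.25 * (second_derivative(main) - second_derivative(anti))
print(H_km)  # matches the analytic mixed derivative sin(xk0) * sin(xm0)
```

Note that no diagonal Hessian entries are needed here, in exchange for the second univariate reconstruction.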
We note that an idea similar to the ones presented here can be used for
higher-order derivatives, but possibly requires more than one additional
univariate reconstruction per derivative.
### 4.2 Resource comparison
For the Hessian computation, we will again look at the number of unique
circuit evaluations $N_{\text{eval}}$ and the number of shots $N$, as
introduced in Sec. 2.3.
#### 4.2.1 Number of unique circuits
Quantity | Decomposition | Gen. shift rule, equidistant | Gen. shift rule
---|---|---|---
$E(\boldsymbol{x}_{0})$ | $1$ | $1$ | $1$
$\partial_{k}E(\boldsymbol{x}_{0})$ | $2\mathcal{P}_{k}$ | $2R_{k}$ | $2R_{k}$
$\boldsymbol{\nabla}E(\boldsymbol{x}_{0})$ | $2\lVert\boldsymbol{\mathcal{P}}\rVert_{1}$ | $2\lVert\boldsymbol{R}\rVert_{1}$ | $2\lVert\boldsymbol{R}\rVert_{1}$
$\partial_{k}^{2}E(\boldsymbol{x}_{0})$ | $2\mathcal{P}_{k}^{2}-\mathcal{P}_{k}+1$ | $2R_{k}$ | $2R_{k}+1$
$\boldsymbol{\nabla}^{\odot 2}E(\boldsymbol{x}_{0})$ | $2\lVert\boldsymbol{\mathcal{P}}\rVert_{2}^{2}-\lVert\boldsymbol{\mathcal{P}}\rVert_{1}+1$ | $2\lVert\boldsymbol{R}\rVert_{1}-n+1$ | $2\lVert\boldsymbol{R}\rVert_{1}+1$
$\partial_{k}\partial_{m}E(\boldsymbol{x}_{0})$ | $4\mathcal{P}_{k}\mathcal{P}_{m}$ | $2(R_{k}+R_{m})-1^{(\ast)}$ | $4R_{k}R_{m}+2R_{k}+2R_{m}-4^{(\ast)}$
$\boldsymbol{\nabla}^{\otimes 2}E(\boldsymbol{x}_{0})$ | $2\lVert\boldsymbol{\mathcal{P}}\rVert_{1}^{2}-\lVert\boldsymbol{\mathcal{P}}\rVert_{1}+1$ | $2n\lVert\boldsymbol{R}\rVert_{1}-\frac{1}{2}(n^{2}+n-2)$ | $2\left(\lVert\boldsymbol{R}\rVert_{1}^{2}-\lVert\boldsymbol{R}\rVert_{2}^{2}+n\lVert\boldsymbol{R}\rVert_{1}\right)-2n(n-1)+1$
$\partial_{k}E(\boldsymbol{x}_{0})$ & $\partial_{k}^{2}E(\boldsymbol{x}_{0})$ | $2\mathcal{P}_{k}^{2}+1$ | $2R_{k}+1$ | $2R_{k}+1$
$\boldsymbol{\nabla}E(\boldsymbol{x}_{0})$ & $\boldsymbol{\nabla}^{\odot 2}E(\boldsymbol{x}_{0})$ | $2\lVert\boldsymbol{\mathcal{P}}\rVert_{2}^{2}+1$ | $2\lVert\boldsymbol{R}\rVert_{1}+1$ | $2\lVert\boldsymbol{R}\rVert_{1}+1$
$\boldsymbol{\nabla}E(\boldsymbol{x}_{0})$ & $\boldsymbol{\nabla}^{\otimes 2}E(\boldsymbol{x}_{0})$ | $2\lVert\boldsymbol{\mathcal{P}}\rVert_{1}^{2}+1$ | $2n\lVert\boldsymbol{R}\rVert_{1}-\frac{1}{2}(n^{2}-n-2)$ | $2\left(\lVert\boldsymbol{R}\rVert_{1}^{2}-\lVert\boldsymbol{R}\rVert_{2}^{2}+n\lVert\boldsymbol{R}\rVert_{1}\right)-2n(n-1)+1$
Table 1: Number of distinct circuit evaluations $N_{\text{eval}}$ for
measuring combinations of derivatives of a parametrized expectation value
function $E$ at parameter position $\boldsymbol{x}_{0}$. The compared
approaches include decomposition of the unitaries together with the original
parameter-shift rule (_left_), and the generalized parameter-shift rule Eq.
(24) together with the diagonal shift rule for the Hessian in Eq. (36). The
requirements for the latter differ significantly for equidistant (_center_)
and arbitrary frequencies (_right_ , see App. B.2). A third approach is to
repeat the general parameter-shift rule, the cost of which can be read off by
replacing $\boldsymbol{\mathcal{P}}$ by $\boldsymbol{R}$ in the left column.
Here, $n$ is the number of parameters in the circuit, $\mathcal{P}_{k}$ is the
number of elementary gates with two eigenvalues in the decomposition of the
$k$th parametrized unitary, and $R_{k}$ denotes the number of frequencies for
the $k$th parameter. The asterisk (∗) indicates that the derivatives
$\partial^{2}_{k}E$ and $\partial^{2}_{m}E$ need to be known in order to
obtain the mixed derivative at the shown price (see main text). The evaluation
numbers take savings into account that are based on using evaluated energies
for multiple derivative quantities; hence, they are not additive in general.
In Tab. 1, we summarize the number of distinct circuit evaluations required to
compute several combinations of derivatives of $E(\boldsymbol{x})$, either by
decomposing the gate or by using the general parameter-shift rule together
with the diagonal shift rule for the Hessian. We also include the generalized
case of non-equidistant frequencies covered in App. B.2 for completeness. To
obtain the cost for the repeated general shift rule, i.e., without the
diagonal shift rule for the Hessian or decomposition, simply replace
$\boldsymbol{\mathcal{P}}$ by $\boldsymbol{R}$ in the left column.
For equidistant frequencies, the diagonal shift rule for $H_{km}$ requires
$2(R_{k}+R_{m})-1$ evaluations, assuming the diagonal and thus
$E(\boldsymbol{x}_{0})$ to be known already. Like the gradient, $H_{km}$ may
instead be computed by decomposing $U_{k}(x_{k})$ and $U_{m}(x_{m})$ into
$\mathcal{P}_{k}$ and $\mathcal{P}_{m}$ elementary gates, respectively, and
repeating the parameter-shift rule twice [46, 56]. All combinations of
parameter shifts are required, leading to $4\mathcal{P}_{k}\mathcal{P}_{m}$
evaluations. Finally, as a third option, one may repeat the general parameter-
shift rule in Eq. (24) twice, leading to $4R_{k}R_{m}$ evaluations (note that
these shifted evaluations are _not_ simultaneous shifts in both directions of
the form Eq. (34)).
The repeated general shift rule requires strictly more circuit evaluations
than the diagonal shift rule, since
$\displaystyle 2\lVert\boldsymbol{R}\rVert_{1}^{2}-\lVert\boldsymbol{R}\rVert_{1}+1>2n\lVert\boldsymbol{R}\rVert_{1}-\frac{1}{2}(n^{2}+n-2).$ (39)
Similar to the discussion for the scaling of gradient computations, the
optimal approach depends on $R_{k,m}$ and $\mathcal{P}_{k,m}$, but
$\mathcal{P}$ and $R$ are often linearly related, so that for many cost
functions the diagonal shift rule is significantly cheaper than decomposing
the unitaries.
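The evaluation counts from Tab. 1 are easy to tabulate. The hypothetical helper below (a convenience for this comparison, not from the paper) contrasts the full-Hessian cost of the diagonal shift rule with the repeated general shift rule and checks the inequality in Eq. (39) for a few frequency profiles:

```python
def n_eval_diag(R):
    # Diagonal shift rule, full Hessian, equidistant frequencies (Tab. 1)
    n, S = len(R), sum(R)
    return 2 * n * S - (n**2 + n - 2) // 2  # n^2 + n - 2 is always even

def n_eval_repeated(R):
    # Repeated general shift rule: replace P by R in the decomposition column
    S = sum(R)
    return 2 * S**2 - S + 1

for R in ([1, 1, 1], [2, 3, 1, 4], [5] * 6):
    assert n_eval_repeated(R) > n_eval_diag(R)  # Eq. (39), for n >= 2
    print(R, n_eval_diag(R), n_eval_repeated(R))
```

For example, for three parameters with $R_{k}=1$ the diagonal rule needs $13$ unique circuits versus $16$ for the repeated rule, and the gap widens quickly for larger frequency counts.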
#### 4.2.2 Number of shots
Next we compare the numbers of measurements required to reach a precision
$\varepsilon$. While the approach via repeated shift rules uses distinct
circuit evaluations for each Hessian entry, the diagonal shift rule in Eq.
(36) reuses entries of the Hessian and thus correlates the optimal shot
allocations and the statistical errors of the Hessian entries. We therefore
consider an error measure on the full Hessian matrix instead of a single
entry, namely the root mean square of the Frobenius norm of the difference
between the true and the estimated Hessian. This norm is computed in App. A.5
for the three presented approaches, and we conclude the number of shots
required to achieve a norm of $\varepsilon$ to be
$\displaystyle N_{\text{diag}}$ $\displaystyle=\frac{\sigma^{2}}{2\varepsilon^{2}}\Big[\bigl(\sqrt{n+1}+n-2\bigr)\lVert\boldsymbol{R}\rVert_{2}^{2}+\lVert\boldsymbol{R}\rVert_{1}^{2}\Big]^{2}$ (40)
$\displaystyle N_{\text{genPS}}$ $\displaystyle=\frac{\sigma^{2}}{2\varepsilon^{2}}\Big[\bigl(\sqrt{2}-1\bigr)\lVert\boldsymbol{R}\rVert_{2}^{2}+\lVert\boldsymbol{R}\rVert_{1}^{2}\Big]^{2}$ (41)
$\displaystyle N_{\text{decomp}}$ $\displaystyle=\frac{\sigma^{2}}{2\varepsilon^{2}}\Big[\bigl(\sqrt{2}-1\bigr)\lVert\boldsymbol{\mathcal{P}}\rVert_{2}^{2}+\lVert\boldsymbol{\mathcal{P}}\rVert_{1}^{2}\Big]^{2}$ (42)
In general, the diagonal shift rule for the Hessian is significantly less
efficient than the repeated execution of the general parameter-shift rule if
the shot count is the relevant resource measure. This is in sharp contrast to
the number of unique circuits, which is strictly smaller for the diagonal
shift rule. We note that the two resource measures yield _incompatible_
recommendations for the computation of the Hessian. The overhead of the
diagonal shift rule reduces to a (to leading order in $n$) constant prefactor
if $R_{k}=R$ for all $k\in[n]$: in this case, we know
$\lVert\boldsymbol{R}\rVert_{1}=n=\lVert\boldsymbol{R}\rVert_{2}^{2}$ and
therefore
$\displaystyle\frac{N_{\text{diag}}}{N_{\text{genPS}}}=\frac{2n+\sqrt{n+1}-2}{n+\sqrt{2}-1}\underset{n\to\infty}{\longrightarrow}2.$
(43)
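The expression in Eq. (43) can be evaluated directly; this short, illustrative check confirms the limiting value of $2$ for large $n$:

```python
import numpy as np

def overhead(n):
    # Ratio from Eq. (43) for the uniform case R_k = R for all k
    return (2 * n + np.sqrt(n + 1) - 2) / (n + np.sqrt(2) - 1)

for n in (2, 10, 100, 10_000, 10**8):
    print(n, overhead(n))  # tends to 2 as n grows
```

Note that the ratio is below $2$ for very small $n$ and approaches the limit from above for large $n$.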
### 4.3 Metric tensor
The Fubini-Study metric tensor $\mathcal{F}$ is the natural metric on the
manifold of (parametrized) quantum states, and the key ingredient in quantum
natural gradient descent [48]. The component of the metric belonging to the
parameters $x_{k}$ and $x_{m}$ can be written as
$\displaystyle\mathcal{F}_{km}(\boldsymbol{x}_{0})=\real{\braket{\partial_{k}\psi(\boldsymbol{x})}{\partial_{m}\psi(\boldsymbol{x})}}\Big|_{\boldsymbol{x}=\boldsymbol{x}_{0}}-\braket{\partial_{k}\psi(\boldsymbol{x})}{\psi(\boldsymbol{x})}\braket{\psi(\boldsymbol{x})}{\partial_{m}\psi(\boldsymbol{x})}\Big|_{\boldsymbol{x}=\boldsymbol{x}_{0}},$ (44)
or, alternatively, as a Hessian [46]:
$\displaystyle\mathcal{F}_{km}(\boldsymbol{x}_{0})=-\frac{1}{2}\partial_{k}\partial_{m}|\\!\braket{\psi(\boldsymbol{x})}{\psi(\boldsymbol{x}_{0})}\\!|^{2}\Big|_{\boldsymbol{x}=\boldsymbol{x}_{0}}\eqqcolon\partial_{k}\partial_{m}f(\boldsymbol{x}_{0}).$ (45)
It follows that we can compute the metric using the same method as for the
Hessian, with $f(\boldsymbol{x})$ as the cost function. We know the value of
$f$ without shift as
$\displaystyle f(\boldsymbol{x}_{0})=-\frac{1}{2}|\\!\braket{\psi(\boldsymbol{x}_{0})}{\psi(\boldsymbol{x}_{0})}\\!|^{2}=-\frac{1}{2}.$ (46)
The values with shifted argument can be calculated as the probability of the
zero bitstring $\mathbf{0}$ when measuring the state
$V^{\dagger}(\boldsymbol{x})V(\boldsymbol{x}_{0})\ket{\mathbf{0}}$ in the
computational basis, which requires circuits with up to doubled depth compared
to the original circuit $V(\boldsymbol{x})$. Alternatively, we may use a
Hadamard test to implement $f$, requiring an auxiliary qubit, two operations
controlled by that qubit as well as a measurement on it, but only halved depth
on average (see App. A.3). With either of these methods, the terms for the
shift rule in Eq. (36) and thus the metric tensor can be computed via the
parameter-shift rule.
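As a minimal example (a single-qubit $R_{Y}$ rotation, chosen purely for illustration), the overlap-based cost $f$ can be simulated directly, and the diagonal metric entry follows from the univariate second-derivative rule for $R=1$, which needs only $f(\boldsymbol{x}_{0})=-1/2$ and one shifted evaluation:

```python
import numpy as np

def ry(x):
    # Single-qubit rotation R_Y(x) = exp(-i x Y / 2), a real orthogonal matrix
    c, s = np.cos(x / 2), np.sin(x / 2)
    return np.array([[c, -s], [s, c]])

x0 = 0.8
ket0 = np.array([1.0, 0.0])

def f(shift):
    # f(x) = -1/2 |<psi(x0 + x)|psi(x0)>|^2 via the doubled-depth overlap circuit
    overlap = ket0 @ ry(x0 + shift).T @ ry(x0) @ ket0
    return -0.5 * abs(overlap) ** 2

# For a single-frequency function, f''(0) = (f(pi) - f(0)) / 2:
F_kk = (f(np.pi) - f(0.0)) / 2
print(F_kk)  # ≈ 0.25, the analytic Fubini-Study metric entry for R_Y on |0>
```

Here $f(x)=-(1+\cos x)/4$, so the shift rule recovers the known value $\mathcal{F}_{kk}=1/4$ exactly.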
The metric can also be computed analytically without parameter shifts via a
_linear combination of unitaries (LCU)_ [57, 58], which also employs Hadamard
tests. As it uses the generator as an operation in the circuit, any non-
unitary generator needs to be decomposed into Pauli words for this method to
be available on quantum hardware, similar to a gate decomposition. Afterwards,
this method uses one circuit evaluation per pair of Pauli words from the $k$th
and $m$th generator to compute the entry $\mathcal{F}_{km}$. A modification of
all approaches that use a Hadamard test is possible by replacing it with
projective measurements [56].
Metric entries that belong to operations that commute _within the circuit_
(for example, operations on distinct wires commute in general, but not
necessarily within the circuit if entangling operations are carried out
between them) can be computed block-wise without any auxiliary qubits,
additional operations or deeper circuits [48]. For a given block, we execute
the subcircuit $V_{1}$ prior to the group of mutually commuting gates and
measure the covariance matrix of the generators $\\{G_{k}\\}$ of these gates:
$\displaystyle\mathcal{F}_{km}=\bra{\mathbf{0}}V_{1}^{\dagger}G_{k}G_{m}V_{1}\ket{\mathbf{0}}-\bra{\mathbf{0}}V_{1}^{\dagger}G_{k}V_{1}\ket{\mathbf{0}}\bra{\mathbf{0}}V_{1}^{\dagger}G_{m}V_{1}\ket{\mathbf{0}}.$ (47)
By grouping the measurement bases of all $\\{G_{k}G_{m}\\}$ and $\\{G_{k}\\}$
of the block, the covariance matrix can typically be measured with only a few
unique circuit evaluations (for a layer of simultaneous single-qubit rotations
on all $N$ qubits, even a single measurement basis suffices for the
corresponding $N\times N$ block), making this method the best choice for
the block-diagonal. One may then either use the result as an approximation to
the full metric tensor, or use one of the other methods to compute the off-
block-diagonal entries; the approximation has been shown to work well for some
circuit structures [48], but not for others [59]. The methods to obtain the
metric tensor and their resource requirements are shown in Tab. 2.
| | Parameter-shift rule (Overlap) | Parameter-shift rule (Hadamard) | LCU | Covariance
---|---|---|---|---
Aux. qubits | $0$ | $1$ | $1$ | $0$
off-block-diag. | $\checkmark$ | $\checkmark$ | $\checkmark$ | |
Depth (avg) | $\sim\frac{4}{3}D_{V}$ | $\sim\frac{2}{3}D_{V}$ | $\sim\frac{2}{3}D_{V}$ | $\frac{2}{3}D_{V}$
Depth (max) | $2D_{V}$ | $\sim D_{V}$ | $\sim D_{V}$ | $D_{V}$
$N_{\text{eval}}(\mathcal{F}_{kk})$ | $\begin{cases}2R_{k}-1\\\ 2R_{k}\\\ \end{cases}$ | $\begin{cases}2R_{k}-1\\\ 2R_{k}\\\ \end{cases}$ | $\mathcal{Q}_{k}\leq\frac{1}{2}(\mathcal{P}_{k}^{2}-\mathcal{P}_{k})$ | $\overline{\mathcal{P}}_{k}\leq\mathcal{P}_{k}$
$N_{\text{eval}}(\mathcal{F}_{km})$ | $\begin{cases}2(R_{k}+R_{m})-1\\\ 2(2R_{k}R_{m}+R_{k}+R_{m}-2)\\\ \end{cases}$ | $\begin{cases}2(R_{k}+R_{m})-1\\\ 2(2R_{k}R_{m}+R_{k}+R_{m}-2)\\\ \end{cases}$ | $\mathcal{P}_{k}\mathcal{P}_{m}$ | $\overline{\mathcal{P}}_{km}\leq\mathcal{P}_{k}\mathcal{P}_{m}$
$N_{\text{eval}}(\mathcal{F})$ | $\begin{cases}2n\lVert\boldsymbol{R}\rVert_{1}-\frac{1}{2}(n^{2}+n)\\\ 2\left(\lVert\boldsymbol{R}\rVert_{1}^{2}-\lVert\boldsymbol{R}\rVert_{2}^{2}+n(\lVert\boldsymbol{R}\rVert_{1}-n+1)\right)\\\ \end{cases}$ | $\begin{cases}2n\lVert\boldsymbol{R}\rVert_{1}-\frac{1}{2}(n^{2}+n)\\\ 2\left(\lVert\boldsymbol{R}\rVert_{1}^{2}-\lVert\boldsymbol{R}\rVert_{2}^{2}+n(\lVert\boldsymbol{R}\rVert_{1}-n+1)\right)\\\ \end{cases}$ | $\frac{1}{2}\left(\lVert\boldsymbol{\mathcal{P}}\rVert_{1}^{2}-\lVert\boldsymbol{\mathcal{P}}\rVert_{2}^{2}\right)+\lVert\boldsymbol{\mathcal{Q}}\rVert_{1}$ | —
Table 2: Quantum hardware-ready methods to compute the Fubini-Study metric
tensor and their resource requirements. The cost function $f(\boldsymbol{x})$
(see Eq. (45)) for the parameter-shift rule can be implemented with increased
depth by applying the adjoint of the original circuit to directly realize the
overlap (_left_) or with an auxiliary qubit and Hadamard tests (_center left_
, App. A.3). The LCU method (_center right_) is based on Hadamard tests as
well and both these methods can spare the auxiliary qubit and instead employ
projective measurements [56]. The cheapest method is via measurements of the
covariance of generators (_right_) but it can only be used for the block-
diagonal of the tensor, i.e., not for all $\mathcal{F}_{km}$. We denote the
depth of the original circuit $V$ by $D_{V}$ and the number of Pauli words in
the decomposition of $G_{k}$ and its square with $\mathcal{P}_{k}$ and
$\mathcal{Q}_{k}$, respectively. The $\mathcal{P}_{k}$ Pauli words of $G_{k}$
can be grouped into $\overline{\mathcal{P}}_{k}$ groups of pairwise commuting
words; the number of groups of pairwise commuting Pauli words in the product
$G_{k}G_{m}$ similarly is $\overline{\mathcal{P}}_{km}$. For the covariance-
based approach, we overestimate the number of required circuits, as typically
many of the measurement bases of the entries in the same block will be
compatible. The number of unique circuits to be evaluated for a diagonal
element $\mathcal{F}_{kk}$, an off-diagonal element $\mathcal{F}_{km}$, and
the full tensor $\mathcal{F}$ is given in terms of the number of frequencies
$R_{k}$ and of $\mathcal{Q}_{k}$, $\mathcal{P}_{k}$,
$\overline{\mathcal{P}}_{k}$, and $\overline{\mathcal{P}}_{km}$. The entries
for $N_{\text{eval}}$ in the first and second row of the braces refer to
equidistant (main text) and arbitrary frequencies (see App. B.2),
respectively.
Since we run a different circuit for the metric tensor than for the cost
function itself, the $2R_{k}-1$ evaluations at shifted positions needed for
the $k$th diagonal entry cannot reuse any prior circuit evaluations, as is the
case for the cost function Hessian. Consequently, the natural gradient of a
(single-term) expectation value function $E$,
$\displaystyle\boldsymbol{\nabla}\\!_{\text{n}}\,E(\boldsymbol{x})\coloneqq\mathcal{F}^{-1}(\boldsymbol{x})\boldsymbol{\nabla}E(\boldsymbol{x}),$ (48)
with $\boldsymbol{\nabla}E$ referring to the Euclidean gradient, requires more
circuit evaluations than its Hessian and gradient together.
However, the utility of the metric tensor becomes apparent upon observing that
it depends solely on the _ansatz_ , and not the observable being measured.
This means that if a cost function has multiple terms, like in VQEs, the
metric only needs to be computed once per epoch, rather than once per term, as
is the case for the cost function Hessian. Therefore, an epoch of quantum
natural gradient descent can be cheaper for such cost functions than an epoch
of optimizers using the Hessian of the cost function. In addition, the block-
diagonal of the metric tensor can be obtained with few circuit evaluations per
block for conventional gates without any further requirements and with reduced
average circuit depth.
## 5 Applications
In this section, we will present QAOA as a concrete application of our general
parameter-shift rule, which reduces the required resources significantly when
computing derivatives. Afterwards, we use the approach of trigonometric
interpolation to generalize the Rotosolve algorithm. This makes it applicable
to arbitrary quantum gates with equidistant frequencies, which reproduces the
results in Refs. [42, 45], and extends them further to more general frequency
spectra. In addition, we make quantum analytic descent (QAD) available for
arbitrary quantum gates with equidistant frequencies, which previously
required a higher-dimensional Fourier reconstruction and thus was infeasible.
### 5.1 QAOA and Hamiltonian time evolution
In Eq. (24) we presented a generalized parameter-shift rule that makes use of
$2R$ function evaluations for $R$ frequencies in $E$. A particular example for
single-parameter unitaries with many frequencies are layers of single- or two-
qubit rotation gates, as found, e.g., in QAOA circuits or digitized
Hamiltonian time-evolution algorithms.
The quantum approximate optimization algorithm (QAOA) was first proposed in
2014 by Farhi, Goldstone and Gutmann to solve classical combinatorial
optimization problems on near-term quantum devices [8]. Since then, it has
been investigated analytically [60, 61, 62], numerically [63, 64], and on
quantum computers [65, 66].
In general, given a problem Hamiltonian $H_{P}$ that encodes the solution to
the problem of interest onto $N$ qubits, QAOA applies two types of layers
alternatingly to an initial state $\ket{+}^{\otimes N}$:
$\displaystyle V_{\text{QAOA}}(\boldsymbol{x})=\prod_{j=p}^{1}U_{M}(x_{2j})U_{P}(x_{2j-1}),$ (49)
where $p$ is the number of blocks which determines the depth of the circuit,
$U_{M}(x)=\exp\left(-ixH_{M}\right)$ with $H_{M}=\sum_{k=1}^{N}X_{k}$ is the
so-called _mixing layer_ , and $U_{P}(x)=\exp(-ixH_{P})$ is the time evolution
under $H_{P}$. The parameters $\boldsymbol{x}$ can then be optimized to try to
minimize the objective function
$\displaystyle E(\boldsymbol{x})=\bra{+}^{\otimes N}V^{\dagger}_{\text{QAOA}}(\boldsymbol{x})H_{P}V_{\text{QAOA}}(\boldsymbol{x})\ket{+}^{\otimes N}.$ (50)
Here we focus on the layer $U_{P}$, and we look at the example of
$\operatorname{\textsc{MaxCut}}$ in particular. The corresponding problem
Hamiltonian for an unweighted graph $G=(\mathcal{V},\mathcal{E})$ with $N$
vertices $\mathcal{V}$ and $M$ edges $\mathcal{E}$ reads
$\displaystyle H_{P}=\sum_{(a,b)\in\mathcal{E}}\frac{1}{2}(1-Z_{a}Z_{b}),$
(51)
and $U_{P}$ correspondingly contains $M$ two-qubit Pauli-$Z$ rotations
$R_{ZZ}$.
We note that $H_{M}$ has eigenvalues $-N,-N+2,\cdots,N$, which means the
corresponding frequencies (differences of eigenvalues) are $2,\cdots,2N$.
Thus, treating $U_{M}(x_{2j})$ as a single operation, Eq. (6) implies that
$E(\boldsymbol{x})$ can be considered a trigonometric polynomial of order $N$
in $x_{2j}$, and the parameter-shift rules we derive in Sec. 3 will apply with
$R=N$. Similarly, $H_{P}$ has corresponding frequencies in the set $[M]$, and
it will obey the parameter-shift rule for $R=M$, although we may be able to
give better upper bounds $\lambda$ for $R$. Thus the unique positive
differences $\\{\Omega_{\ell}\\}$ for those layers, i.e., the frequencies of
$E(\boldsymbol{x})$ with respect to the parameter $\\{x_{2j-1}\\}_{j\in[p]}$,
take integer values within the interval $[0,\lambda]$ as well. We may
therefore use Eq. (24), with the knowledge that $R\leq\lambda\leq M$.
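The frequency count for the mixing layer is easy to confirm numerically: for $H_{M}=\sum_{k}X_{k}$ on a few qubits, the positive differences of the eigenvalues are exactly $\{2,4,\ldots,2N\}$, i.e., $R=N$. A quick sanity check:

```python
import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=float)
I2 = np.eye(2)

def mixer(N):
    # H_M = sum_k X_k on N qubits, built as a sum of Kronecker products
    terms = []
    for k in range(N):
        ops = [X if j == k else I2 for j in range(N)]
        terms.append(reduce(np.kron, ops))
    return sum(terms)

N = 4
evals = np.unique(np.round(np.linalg.eigvalsh(mixer(N)), 8))
print(evals)  # eigenvalues -N, -N+2, ..., N

diffs = {round(b - a, 8) for a in evals for b in evals if b > a}
print(sorted(diffs))  # frequencies 2, 4, ..., 2N, i.e. R = N of them
```

The same construction applied to $H_{P}$ would reveal its frequency set, but, as discussed above, that requires spectral knowledge one does not have for large instances.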
Note that knowing all frequencies of $E(x)$ requires knowledge of the full
spectrum of $H_{P}$, and in particular of $\lambda$, which in turn is the
solution of $\operatorname{\textsc{MaxCut}}$ itself and would make performing
QAOA obsolete. Therefore, in general we cannot assume to know
$\\{\Omega_{\ell}\\}$ (or even $R$), but instead require
upper bounds $\varphi(G)\geq\operatorname{\textsc{MaxCut}}(G)=\lambda$ which
can be used to bound the largest frequency, and thus the number of frequencies
$R$ and subsequently the number of terms in the parameter-shift rule. It is
noteworthy that even if the _largest_ frequency $\lambda$ is known exactly via
a tight bound, which restricts the Fourier spectrum to the integers
$[\lambda]$, not _all_ integers smaller than $\lambda$ need to be present in
the set of frequencies $\\{\Omega_{\ell}\\}$, so that the estimate for $R$ may
be too large. A simple example is the case of $2k$-regular graphs: here,
$H_{P}$ only has even eigenvalues, and therefore all frequencies are even as
well; given an upper bound $\varphi$, we thus know the number of frequencies
to satisfy $R\leq\varphi/2$.
One way to obtain an upper bound uses analytic results based on the Laplacian
of the graph of interest [67, 68], for which automatic bound generating
programs exist [69]. An alternative approach uses semi-definite programs
(SDPs) that solve relaxations of the $\operatorname{\textsc{MaxCut}}$ problem,
the most prominent being the _Goemans-Williamson (GW)_ algorithm [70] and
recent extensions thereof that provide tighter upper bounds [71, 72]. The
largest eigenvalue is guaranteed to lie within a factor of $\sim 0.878$ of
these SDP upper bounds.
To demonstrate the above strategy, we summarize the number of evaluations
required for the gradient and Hessian of an $n$-parameter QAOA circuit on $N$
qubits for $\operatorname{\textsc{MaxCut}}$ in Tab. 3, comparing the approach
via decomposing the circuit, to the one detailed above based on $\varphi$ and
the improved Hessian measurement scheme in Sec. 4.1. Here, we take into
account that half of the layers are of the form $U_{P}$, and the other half
are mixing layers with $R=N$ frequencies. We systematically observe the number
of evaluations for the gradient to be cut in half, and those for the gradient
and Hessian together to scale with halved order in $N$ (and $k$, for regular
graphs).
Graph type | Decomposition: $\boldsymbol{\nabla}E$ | Decomposition: $\boldsymbol{\nabla}E\ \&\ \boldsymbol{\nabla}^{\otimes 2}E$ | Bound $\varphi$ | Gen. shift rule: $\boldsymbol{\nabla}E$ | Gen. shift rule: $\boldsymbol{\nabla}E\ \&\ \boldsymbol{\nabla}^{\otimes 2}E$
---|---|---|---|---|---
General | $(M+N)n$ | $\mathcal{O}(n^{2}(M+N)^{2})$ | $\varphi$ | $n(\varphi+N)$ | $\mathcal{O}(n^{2}(\varphi+N))$
Complete | $\frac{1}{2}n(N^{2}+N)$ | $\mathcal{O}(n^{2}N^{4})$ | $\left\lfloor\frac{N^{2}}{4}\right\rfloor$ | $n\left(\left\lfloor\frac{N^{2}}{4}\right\rfloor+N\right)$ | $\mathcal{O}(n^{2}N^{2})$
$2k$-regular | $(k+1)nN$ | $\mathcal{O}(k^{2}n^{2}N^{2})$ | $kN$ | $\frac{k+2}{2}nN$ | $\mathcal{O}(kn^{2}N)$
$(2k+1)$-regular | $\frac{2k+3}{2}nN$ | $\mathcal{O}(k^{2}n^{2}N^{2})$ | $\frac{2k+1}{2}N$ | $\frac{2k+3}{2}nN$ | $\mathcal{O}(kn^{2}N)$
Table 3: Evaluation numbers for the gradient, or both the gradient and the
Hessian, for QAOA circuits for $\operatorname{\textsc{MaxCut}}$ on several
types of graphs. Each graph has $N$ vertices and a graph type-specific number
$M$ of edges, and the (even) number of parameters is denoted as $n$. For
$K$-regular graphs, we know $M=\min\\{(N^{2}-N)/2,KN/2\\}$, and the latter
value is used in the displayed evaluation costs; if the former value forms the
minimum, the graph is in fact complete. The left column is based on
decomposing the circuit, applying the conventional two-term parameter-shift
rule per elementary gate and iterating it for the Hessian. The right column
employs the generalized parameter-shift rule Eq. (24) combined with an upper
bound $\varphi$ for the largest eigenvalue $\lambda$ of the problem
Hamiltonian, as well as the reduced number of evaluations for Hessian off-
diagonal terms from Sec. 4.1. The bound $\varphi$ for complete graphs can be
found in Ref. [67].
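As a concrete instance of the entries in Tab. 3: for complete graphs, the gradient cost of the bound-based shift rule, $n(\lfloor N^{2}/4\rfloor+N)$, approaches half the decomposition-based cost $\frac{1}{2}n(N^{2}+N)$ as $N$ grows. A quick check:

```python
# Gradient evaluation counts from Tab. 3 for complete graphs on N vertices
def decomposition(n, N):
    return n * (N**2 + N) // 2   # N^2 + N is always even

def gen_shift_rule(n, N):
    return n * (N**2 // 4 + N)

n = 6
for N in (4, 8, 16, 32):
    ratio = gen_shift_rule(n, N) / decomposition(n, N)
    print(N, gen_shift_rule(n, N), decomposition(n, N), ratio)  # ratio tends to 1/2
```

The same halving shows up for regular graphs, where the per-parameter ratio is $(k+2)/(2(k+1))$.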
In addition, we display the numbers of circuit evaluations from Tab. 3
together with SDP-based bounds for $\lambda$ and the true minimal number of
evaluations required for the parameter-shift rule in Fig. 3. For this, we
sampled random unweighted graphs of the corresponding type and size and ran
the GW algorithm as well as an improvement thereof to obtain tighter bounds
[71]. On one hand we observe the advantage of the generalized parameter-shift
rule and the cheaper Hessian method that can be read off already from the
scalings in Tab. 3. On the other hand, we find both SDP-based upper bounds to
provide an exact estimate of the largest eigenvalue in the $N\leq 20$ regime,
as can be seen from the cut values obtained from the GW algorithm that
coincide with the upper bound. In cases in which the frequencies
$\\{\Omega_{\ell}\\}$ occupy all integers in $[R]$, this leads to an exact
estimate of $R$ and of the number of evaluations in the shift rule. For all graph types but
complete graphs, the SDP-based upper bounds yield a better estimate for the
number of terms than the respective analytic bound $\varphi$, which improves
the generalized shift rule further.
In summary, we find the generalized parameter-shift rule to offer a constant
prefactor improvement when computing the gradient and an improvement of at
least $\mathcal{O}(N)$ when computing both the gradient and the Hessian. For
certain graph types, knowledge about the structure of the spectrum and tight
analytic bounds provide this advantage already, whereas for other graph types
the SDP-based bounds reduce the evaluation numbers significantly.
Figure 3: Evaluation numbers $N_{\text{eval}}$ for the gradient (_left_) or
both the gradient and the Hessian (_right_) for $n=6$ parameter QAOA circuits
for $\operatorname{\textsc{MaxCut}}$ on graphs of several types and sizes.
Using numerical upper bounds together with our new parameter-shift rule (GW –
_purple triangles_ and its generalization – _dashed turquoise_) reduces the
resource requirements for both quantities significantly, compared to the
previously available decomposition-based method (_solid orange_). The rows
correspond to the various considered graph types (_top to bottom_): complete,
$5$-regular, $6$-regular and (up to) $4N$ randomly sampled edges. The
requirements for the decomposition-based approach and the analytic upper bound
(_dotted blue_) correspond to the results in the left and right column of Tab.
3, respectively. The numerical _upper_ bounds both use the minimized objective
value of SDPs for relaxations of $\operatorname{\textsc{MaxCut}}$ to obtain
the bound $\varphi$, which depends on the graph instance. The GW-based _lower_
bound (_pink triangles_) is obtained by randomly mapping the output state of
the GW algorithm to $10$ valid cuts and choosing the one with the largest cut
value. Note that $K$-regular graphs are only defined for $N>K$ and $NK\bmod
2=0$, and that graphs with $\kappa N$ sampled edges are complete for $N\leq
2\kappa+1$, leading to a change in the qualitative behaviour in the last row
at $N=2\kappa+2=10$.
### 5.2 Rotosolve
The _Rotosolve_ algorithm is a coordinate descent algorithm for minimizing
quantum cost functions. It has been discovered independently multiple times
[42, 45, 51, 50]: Ref. [50] gave the algorithm its name but, like Ref. [51],
considered only parametrized Pauli rotations, while Refs. [42, 45] also cover
other unitaries with integer-valued generator eigenvalues.
The Rotosolve algorithm optimizes the rotation angles sequentially: for one
variational parameter $x_{k}$ at a time, the cost function is reconstructed as
a function of that parameter using $2R_{k}+1$ evaluations, the minimum of the
reconstruction is calculated, and then the parameter is updated to the
minimizing angle. For the case of Pauli rotation gates this minimum can be
found via a closed-form expression. Recent studies have shown such coordinate
descent methods to work well on many tasks [73, 50, 45, 74], although there
are limited cases where these methods fail [75].
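For a single parametrized Pauli rotation ($R_{k}=1$), one such coordinate-descent step can be sketched explicitly: three evaluations determine the reconstruction $E(x)=a\cos x+b\sin x+c$, whose minimizer has a closed form. The toy cost and function names below are our own:

```python
import numpy as np

def rotosolve_step(f):
    """One Rotosolve update for a single-frequency (Pauli-rotation) parameter.

    Three evaluations E(0), E(+pi/2), E(-pi/2) fix the reconstruction
    E(x) = a*cos(x) + b*sin(x) + c = r*cos(x - phi) + c, which is
    minimized exactly at x = phi + pi.
    """
    e0, ep, em = f(0.0), f(np.pi / 2), f(-np.pi / 2)
    c = (ep + em) / 2
    a, b = e0 - c, (ep - em) / 2
    x_min = np.arctan2(b, a) + np.pi
    return (x_min + np.pi) % (2 * np.pi) - np.pi  # wrap into (-pi, pi]

# Toy single-frequency cost; its exact minimum value is c - sqrt(a^2 + b^2).
f = lambda x: 0.7 * np.cos(x) - 0.4 * np.sin(x) + 0.1
x_star = rotosolve_step(f)
print(x_star, f(x_star))  # f(x_star) equals 0.1 - hypot(0.7, 0.4)
```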
While Rotosolve is not gradient-based, our cost reduction for the gradient
presented in Sec. 5.1 stems from a cost reduction for function reconstruction,
and hence is applicable to Rotosolve as well.
As shown in Sec. 3.1, the univariate objective function can also be fully
reconstructed if the parametrized unitaries are more complicated than Pauli
rotations, using the function value itself and the evaluations from the
generalized parameter-shift rule. Since the generalized parameter-shift rule
also applies for non-equidistant frequencies (see App. B), the reconstruction
works in the same way for arbitrary single-parameter gates. This extends our
generalization of Rotosolve beyond the previously known integer-frequency case
[42, 45], although the number of frequencies, and thus the cost of the
reconstruction, typically increases significantly for non-integer frequencies.
While the minimizing angle might not be expressible in closed form, as it is
for a single frequency, the one-dimensional minimization can be carried out
numerically to high precision, via grid search or semi-definite programming
[76, Chapter 4.2].
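As a sketch of this numerical route for integer frequencies, the $2R+1$ equidistant evaluations determine all Fourier coefficients of the univariate restriction via a discrete Fourier transform, after which a dense grid search locates the minimum (the toy cost is our own choice):

```python
import numpy as np

R = 2                                     # two integer frequencies
rng = np.random.default_rng(1)
coef = rng.normal(size=2 * R + 1)

def E(x):                                 # toy degree-R trigonometric cost
    return coef[0] + sum(coef[2 * l - 1] * np.cos(l * x)
                         + coef[2 * l] * np.sin(l * x) for l in range(1, R + 1))

# 2R+1 equidistant evaluations determine all Fourier coefficients exactly.
xs = 2 * np.pi * np.arange(2 * R + 1) / (2 * R + 1)
fhat = np.fft.fft(E(xs)) / (2 * R + 1)    # fhat[l] is the coefficient of exp(i*l*x)

# Evaluate the exact reconstruction on a dense grid and pick the minimizer.
grid = np.linspace(-np.pi, np.pi, 20001)
recon = np.real(sum(fhat[l] * np.exp(1j * l * grid) for l in range(-R, R + 1)))
x_min = grid[np.argmin(recon)]
```

Since the reconstruction is exact, the grid resolution alone controls the precision of the minimizer.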
### 5.3 Quantum analytic descent
Quantum analytic descent (QAD) [49] also approaches the optimization problem
in VQAs via trigonometric interpolation. In contrast to Rotosolve, it
considers a model of all parameters simultaneously and includes second-order
derivatives, but this model is only a _local approximation_ of the full cost
function. Additionally, QAD has been developed for circuits that exclusively
contain Pauli rotations as parametrized gates.
The algorithm evaluates the cost function $E$ at $2n^{2}+n+1$ points around a
reference point $\boldsymbol{x}_{0}$ and then constructs a trigonometric model
of the form (we slightly modify the trigonometric basis functions from Ref.
[49] so that they have leading-order coefficients $1$)

$$\hat{E}(\boldsymbol{x}_{0}+\boldsymbol{x})=A(\boldsymbol{x})\left[E^{(A)}+2\boldsymbol{E}^{(B)}\cdot\tan\left(\frac{\boldsymbol{x}}{2}\right)+2\boldsymbol{E}^{(C)}\cdot\tan\left(\frac{\boldsymbol{x}}{2}\right)^{\odot 2}+4\tan\left(\frac{\boldsymbol{x}}{2}\right)\cdot E^{(D)}\cdot\tan\left(\frac{\boldsymbol{x}}{2}\right)\right].\quad(52)$$
Here, we introduced
$A(\boldsymbol{x})\coloneqq\prod_{k}\cos^{2}\left(\frac{x_{k}}{2}\right)$ and,
as for the Hessian diagonal, the element-wise square of a vector
$\boldsymbol{v}$, $(\boldsymbol{v}^{\odot 2})_{k}\coloneqq v_{k}^{2}$. The coefficients
$E^{(A/B/C/D)}$ are derived from the circuit evaluations, taking the form of a
scalar, two vectors and an upper triangular matrix. More precisely, the
expansion basis is chosen such that
$\boldsymbol{E}^{(B)}=\boldsymbol{\nabla}E(\boldsymbol{x}_{0})$,
$\boldsymbol{E}^{(C)}=\boldsymbol{\nabla}^{\odot 2}E(\boldsymbol{x}_{0})$, and
$E^{(D)}$ is the strictly upper triangular part of the Hessian. Note that for
this model $2n^{2}+n+1$ evaluations are used to obtain $n^{2}/2+3n/2+1$
parameters. In the presence of statistical noise from these evaluations, it
turns out that building the model to a desired precision and inferring
modelled gradients close to the reference point $\boldsymbol{x}_{0}$ has
resource requirements similar to measuring the gradient directly [49].
This model coincides with $E(\boldsymbol{x})$ at $\boldsymbol{x}_{0}$ up to
second order, and in the vicinity its error scales with the third order of the
largest parameter deviation [49]. After the construction phase, the model cost
is minimized in an inner optimization loop, which only requires classical
operations. For an implementation and demonstration of the optimization, we
also refer the reader to [77] and [78].
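To make the construction concrete, the sketch below builds a QAD-style model of the form of Eq. (52) for a two-parameter, single-frequency toy cost, with all coefficients obtained from parameter-shift evaluations. Note that we fix the diagonal coefficients so that the restrictions to the coordinate axes are reproduced exactly; the toy cost and this particular parametrization of the coefficients are our own choices, not taken from Ref. [49]:

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.normal(size=(3, 3))               # random two-parameter, R = 1 toy cost
basis = lambda t: np.array([1.0, np.cos(t), np.sin(t)])
E = lambda x1, x2: basis(x1) @ C @ basis(x2)

# Coefficients at x0 = 0 from standard R = 1 parameter-shift evaluations.
s = np.pi / 2
E0 = E(0, 0)
grad = np.array([(E(s, 0) - E(-s, 0)) / 2, (E(0, s) - E(0, -s)) / 2])
hdiag = np.array([(E(np.pi, 0) - E0) / 2, (E(0, np.pi) - E0) / 2])
hoff = (E(s, s) - E(s, -s) - E(-s, s) + E(-s, -s)) / 4  # mixed 2nd derivative

def qad_model(x):
    t = np.tan(x / 2)
    A = np.prod(np.cos(x / 2) ** 2)
    # Diagonal coefficients chosen so the univariate (axis) restrictions
    # of the model are exact (our own convention for this sketch).
    gamma = hdiag + E0 / 2
    return A * (E0 + 2 * grad @ t + 2 * gamma @ t**2 + 4 * hoff * t[0] * t[1])

# The model agrees with E exactly along each coordinate axis and with the
# full cost up to second order around x0 = 0.
print(qad_model(np.array([1.3, 0.0])) - E(1.3, 0.0))  # numerically zero
```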
In the light of the parameter-shift rules and reconstruction methods, we
propose three (alternative) modifications of QAD. The first change is to
reduce the required number of evaluations. As the coefficients $E^{(A/B/C/D)}$
consist of the gradient and Hessian, they allow us to exploit the reduced
resource requirements presented in Tab. 1 (in addition, we may skip the $n$
evaluations with shift angle $\pi$ proposed in Ref. [49] and instead measure
the Hessian diagonal as discussed in Sec. 4.1). In the case originally
considered by the authors, i.e., for Pauli rotations only, this reduces the
number of evaluations from $2n^{2}+n+1$ to $(3n^{2}+n)/2+1$.
A second, alternative modification of QAD is to keep all evaluations as
originally proposed to obtain the full second-order terms; i.e., we may
combine the shift angles for each pair of parameters and use them to determine
the coefficients of additional higher-order terms. This extended model (see
App. D.1) has the form

$$\mathring{E}(\boldsymbol{x}_{0}+\boldsymbol{x})=\hat{E}(\boldsymbol{x}_{0}+\boldsymbol{x})+4A(\boldsymbol{x})\tan\left(\frac{\boldsymbol{x}}{2}\right)^{\odot 2}\cdot\left[E^{(F)}\cdot\tan\left(\frac{\boldsymbol{x}}{2}\right)+E^{(G)}\cdot\tan\left(\frac{\boldsymbol{x}}{2}\right)^{\odot 2}\right],\quad(53)$$
where $E^{(F)}$ is symmetric with zeros on its diagonal and $E^{(G)}$ is a
strictly upper triangular matrix. This extended model has $2n^{2}+1$ degrees
of freedom, which matches the number of evaluations exactly.
While the QAD model reconstructs the univariate restrictions of $E$ to the
coordinate axes correctly, the extended model $\mathring{E}$ does so for the
bivariate restrictions to the planes spanned by any pair of coordinate axes.
It remains to be investigated whether, and for which applications, the
extension yields better optimization behaviour; for cost landscapes in which
pairs of parameters already yield a good local approximation, it might provide
an improvement.
The third modification we consider is to generalize the previous, extended QAD
model to general single-parameter quantum gates. This can be done via a full
trigonometric interpolation to second order, which is detailed in App. D.2,
exactly reconstructing the energy function when restricted to any coordinate
plane at the price of
$2(\lVert\boldsymbol{R}\rVert_{1}^{2}-\lVert\boldsymbol{R}\rVert_{2}^{2}+\lVert\boldsymbol{R}\rVert_{1})+1$
evaluations.
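As a quick sanity check of this evaluation count (the frequency vector below is our own example), note that for Pauli rotations, where every $R_{k}=1$, the formula reduces to the $2n^{2}+1$ evaluations of the extended model above:

```python
import numpy as np

# Evaluation count 2*(||R||_1^2 - ||R||_2^2 + ||R||_1) + 1 of the
# second-order trigonometric interpolation, for an example frequency vector.
Rs = np.array([1, 1, 2, 3])
n_eval = 2 * (np.sum(Rs) ** 2 - np.sum(Rs**2) + np.sum(Rs)) + 1
print(n_eval)  # 83

# With all R_k = 1 (n Pauli rotations) this reduces to 2n^2 + 1.
n = 5
ones = np.ones(n, dtype=int)
assert 2 * (np.sum(ones) ** 2 - np.sum(ones**2) + np.sum(ones)) + 1 == 2 * n**2 + 1
```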
Using toy model circuits and Hamiltonians, we demonstrate the qualitative
difference between the QAD model, its extension $\mathring{E}$, and the
generalization to multiple frequencies in Fig. 4.
Figure 4: The QAD model (_left_), its extension $\mathring{E}$, see Eq. (53),
that includes full second-order terms (_center left_), and the second-order
trigonometric interpolation model (_center right_), as well as the original
expectation value $E$ (_right_). The original function is generated from toy
Hamiltonians in a two-parameter example circuit, with one frequency (_top_)
and two frequencies (_bottom_) per parameter. The QAD model produces a local
approximation to $E$ that deviates from $E$ away from $\boldsymbol{x}_{0}$,
slowly for $R=1$ but faster for $R=2$. The extension $\mathring{E}$ reuses the
evaluations made for the Hessian to capture the full bivariate dependence for
a single frequency, but it cannot model multiple frequencies either.
Finally, the trigonometric interpolation generalizes $\mathring{E}$. This
means it coincides with $\mathring{E}$ for $R=1$, but also reproduces the full
bivariate function for $R>1$.
## 6 Discussion
In this work, we derive interpolation rules to exactly express quantum
functions $E(x)$ as a linear combination of evaluations $E(x_{\mu})$, assuming
$E(x)$ derives from parametrized gates of the form $U(x)=\exp(ixG)$. Our
method relies on the observation that $E(x)$ can be expressed as a
trigonometric polynomial in $x$, characterized by a set of $R$ _frequencies_
that correspond
to distinct differences in the eigenvalues of $G$. This observation allows us
to derive our results using trigonometric interpolation methods.
In addition to a full reconstruction of $E(x)$, the presented approach offers
parameter-shift rules for derivatives of arbitrary order and recipes to
evaluate multivariate derivatives more cheaply. Using the concept of the
stochastic parameter-shift rule, quantum gates of the form
$U_{F}(x)=\exp(i(xG+F))$ can be differentiated as well.
Nevertheless, much remains unknown about the practicality of our new
parameter-shift rules. For the common case that we have $R$ equidistant
frequencies, Sec. 3.5 shows that the scaling of the required resources is
similar between naïvely applying our generalized parameter-shift rules, and
applying parameter-shift rules to a decomposition of $U(x)$. This holds for
the first derivative and also for the required shot budget when computing the
second derivative, whereas the number of unique circuits is significantly
smaller for the new, generalized shift rule.
Our observations lead to several open questions:

* In which situations can we obtain better bounds on the number of frequencies? We investigated an example for QAOA in Sec. 5.1, but are there other examples?
* For general $G$ (e.g., $G=\sum_{j}c_{j}P_{j}$ with real $c_{j}$ and Pauli words $P_{j}$), the frequencies will not be equidistant, and in fact $R$ may scale quadratically in the size of $U$. Naïvely applied, our method would then scale poorly compared to decomposing $G$. Can we apply an approximate or stochastic parameter-shift rule with a better scaling?
* Would it ever make sense to _truncate_ these parameter-shift rules to keep only the terms corresponding to smaller frequencies? This is inspired by the idea of using low-pass filters to smooth out rapid changes of a signal.
* Our work on function reconstruction extends QAD to all gates with equidistant frequencies. Similarly, it allows Rotosolve, which has been shown to work remarkably well on some applications, to be used with all quantum gates with arbitrary frequencies. Is there a classification of problems on which these model-based algorithms work well? And can we reduce the optimization cost based on the above ideas?
* More generally, can we apply the machinery of Fourier analysis more broadly, e.g., to improve optimization methods in the presence of noise?
We hope that this work serves as an impetus for future work that will further
apply signal processing methods to the burgeoning field of variational quantum
computing.
## Acknowledgements
We would like to thank Nathan Killoran, Maria Schuld, Matthew Beach, and Eric
Kessler for helpful comments on the manuscript, as well as Christian Gogolin
and Gian-Luca Anselmetti for valuable discussions.
## Code availability
The scripts used to create the data and plots for Figs. 3 and 4 can be found
at [79].
## References
* [1] Amazon Web Services. “Amazon Braket”. url: aws.amazon.com/braket/.
* [2] J.M. Arrazola, V. Bergholm, K. Brádler, T.R. Bromley, M.J. Collins, I. Dhand, A. Fumagalli, T. Gerrits, A. Goussev, L.G. Helt, J. Hundal, T. Isacsson, R.B. Israel, J. Izaac, S. Jahangiri, R. Janik, N. Killoran, S.P. Kumar, J. Lavoie, A.E. Lita, D.H. Mahler, M. Menotti, B. Morrison, S.W. Nam, L. Neuhaus, H.Y. Qi, N. Quesada, A. Repingon, K.K. Sabapathy, M. Schuld, D. Su, J. Swinarton, A. Száva, K. Tan, P. Tan, V.D. Vaidya, Z. Vernon, Z. Zabaneh, and Y. Zhang. “Quantum circuits with many photons on a programmable nanophotonic chip”. Nature 591, 54–60 (2021).
* [3] IBM Corporation. “IBM Quantum”. url: quantum-computing.ibm.com/.
* [4] Microsoft. “Azure Quantum”. url: azure.microsoft.com/../quantum/.
* [5] Marcello Benedetti, Erika Lloyd, Stefan Sack, and Mattia Fiorentini. “Parameterized quantum circuits as machine learning models”. Quantum Science and Technology 4, 043001 (2019).
* [6] Marco Cerezo, Andrew Arrasmith, Ryan Babbush, Simon C. Benjamin, Suguru Endo, Keisuke Fujii, Jarrod R. McClean, Kosuke Mitarai, Xiao Yuan, Lukasz Cincio, and Patrick J. Coles. “Variational quantum algorithms”. Nature Reviews Physics 3, 625–644 (2021).
* [7] Alberto Peruzzo, Jarrod McClean, Peter Shadbolt, Man-Hong Yung, Xiao-Qi Zhou, Peter J. Love, Alán Aspuru-Guzik, and Jeremy L. O’Brien. “A variational eigenvalue solver on a photonic quantum processor”. Nature Communications 5, 4213 (2014).
* [8] Edward Farhi, Jeffrey Goldstone, and Sam Gutmann. “A quantum approximate optimization algorithm” (2014). arXiv:1411.4028.
* [9] Tyson Jones, Suguru Endo, Sam McArdle, Xiao Yuan, and Simon C. Benjamin. “Variational quantum algorithms for discovering Hamiltonian spectra”. Phys. Rev. A 99, 062304 (2019).
* [10] Gian-Luca R Anselmetti, David Wierichs, Christian Gogolin, and Robert M Parrish. “Local, expressive, quantum-number-preserving VQE ansätze for fermionic systems”. New Journal of Physics 23, 113010 (2021).
* [11] Harper R. Grimsley, Sophia E. Economou, Edwin Barnes, and Nicholas J. Mayhall. “An adaptive variational algorithm for exact molecular simulations on a quantum computer”. Nature communications 10, 1–9 (2019).
* [12] Ken M. Nakanishi, Kosuke Mitarai, and Keisuke Fujii. “Subspace-search variational quantum eigensolver for excited states”. Phys. Rev. Research 1, 033062 (2019).
* [13] Alain Delgado, Juan Miguel Arrazola, Soran Jahangiri, Zeyue Niu, Josh Izaac, Chase Roberts, and Nathan Killoran. “Variational quantum algorithm for molecular geometry optimization”. Phys. Rev. A 104, 052402 (2021).
* [14] Eric Anschuetz, Jonathan Olson, Alán Aspuru-Guzik, and Yudong Cao. “Variational quantum factoring”. In International Workshop on Quantum Technology and Optimization Problems. Pages 74–85. Springer (2019).
* [15] Sumeet Khatri, Ryan LaRose, Alexander Poremba, Lukasz Cincio, Andrew T. Sornborger, and Patrick J. Coles. “Quantum-assisted quantum compiling”. Quantum 3, 140 (2019).
* [16] Jun Li, Xiaodong Yang, Xinhua Peng, and Chang-Pu Sun. “Hybrid quantum-classical approach to quantum optimal control”. Phys. Rev. Lett. 118, 150503 (2017).
* [17] Ryan LaRose, Arkin Tikku, Étude O’Neel-Judy, Lukasz Cincio, and Patrick J. Coles. “Variational quantum state diagonalization”. npj Quantum Information 5, 1–10 (2019).
* [18] Benjamin Commeau, Marco Cerezo, Zoë Holmes, Lukasz Cincio, Patrick J. Coles, and Andrew Sornborger. “Variational Hamiltonian diagonalization for dynamical quantum simulation” (2020). arXiv:2009.02559.
* [19] Jonathan Romero, Jonathan P. Olson, and Alan Aspuru-Guzik. “Quantum autoencoders for efficient compression of quantum data”. Quantum Science and Technology 2, 045001 (2017).
* [20] Guillaume Verdon, Michael Broughton, and Jacob Biamonte. “A quantum algorithm to train neural networks using low-depth circuits” (2017). arXiv:1712.05304.
* [21] Edward Farhi and Hartmut Neven. “Classification with quantum neural networks on near term processors” (2018). arXiv:1802.06002.
* [22] Maria Schuld and Nathan Killoran. “Quantum machine learning in feature Hilbert spaces”. Phys. Rev. Lett. 122, 040504 (2019).
* [23] Kosuke Mitarai, Makoto Negoro, Masahiro Kitagawa, and Keisuke Fujii. “Quantum circuit learning”. Phys. Rev. A 98, 032309 (2018).
* [24] Maria Schuld, Alex Bocharov, Krysta M. Svore, and Nathan Wiebe. “Circuit-centric quantum classifiers”. Phys. Rev. A 101, 032308 (2020).
* [25] Edward Grant, Marcello Benedetti, Shuxiang Cao, Andrew Hallam, Joshua Lockhart, Vid Stojevic, Andrew G. Green, and Simone Severini. “Hierarchical quantum classifiers”. npj Quantum Information 4, 1–8 (2018).
* [26] Jin-Guo Liu and Lei Wang. “Differentiable learning of quantum circuit Born machines”. Phys. Rev. A 98, 062324 (2018).
* [27] Vojtěch Havlíček, Antonio D. Córcoles, Kristan Temme, Aram W. Harrow, Abhinav Kandala, Jerry M. Chow, and Jay M. Gambetta. “Supervised learning with quantum-enhanced feature spaces”. Nature 567, 209–212 (2019).
* [28] Hongxiang Chen, Leonard Wossnig, Simone Severini, Hartmut Neven, and Masoud Mohseni. “Universal discriminative quantum neural networks”. Quantum Machine Intelligence 3, 1–11 (2021).
* [29] Nathan Killoran, Thomas R. Bromley, Juan Miguel Arrazola, Maria Schuld, Nicolás Quesada, and Seth Lloyd. “Continuous-variable quantum neural networks”. Phys. Rev. Research 1, 033063 (2019).
* [30] Gregory R. Steinbrecher, Jonathan P. Olson, Dirk Englund, and Jacques Carolan. “Quantum optical neural networks”. npj Quantum Information 5, 1–9 (2019).
* [31] Andrea Mari, Thomas R. Bromley, Josh Izaac, Maria Schuld, and Nathan Killoran. “Transfer learning in hybrid classical-quantum neural networks”. Quantum 4, 340 (2020).
* [32] Ryan Sweke, Frederik Wilde, Johannes Meyer, Maria Schuld, Paul K. Faehrmann, Barthélémy Meynard-Piganeau, and Jens Eisert. “Stochastic gradient descent for hybrid quantum-classical optimization”. Quantum 4, 314 (2020).
* [33] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek G. Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. “TensorFlow: a system for large-scale machine learning”. In OSDI. Volume 16, pages 265–283. Berkeley, CA, USA (2016). USENIX Association. url: dl.acm.org/..3026877.3026899.
* [34] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. “Automatic differentiation in PyTorch”. NIPS 2017 Workshop Autodiff (2017). url: openreview.net/forum?id=BJJsrmfCZ.
* [35] Dougal Maclaurin, David Duvenaud, and Ryan P. Adams. “Autograd: Effortless gradients in NumPy”. In ICML 2015 AutoML Workshop. (2015). url: indico.ijclab.in2p3.fr/..
* [36] Atılım Güneş Baydin, Barak A. Pearlmutter, Alexey Andreyevich Radul, and Jeffrey Mark Siskind. “Automatic differentiation in machine learning: a survey”. Journal of Machine Learning Research 18, 1–153 (2018). url: http://jmlr.org/papers/v18/17-468.html.
* [37] Ville Bergholm, Josh Izaac, Maria Schuld, Christian Gogolin, M. Sohaib Alam, Shahnawaz Ahmed, Juan Miguel Arrazola, Carsten Blank, Alain Delgado, Soran Jahangiri, Keri McKiernan, Johannes Jakob Meyer, Zeyue Niu, Antal Száva, and Nathan Killoran. “PennyLane: Automatic differentiation of hybrid quantum-classical computations” (2020). arXiv:1811.04968.
* [38] Maria Schuld, Ville Bergholm, Christian Gogolin, Josh Izaac, and Nathan Killoran. “Evaluating analytic gradients on quantum hardware”. Phys. Rev. A 99, 032331 (2019).
* [39] Leonardo Banchi and Gavin E. Crooks. “Measuring analytic gradients of general quantum evolution with the stochastic parameter shift rule”. Quantum 5, 386 (2021).
* [40] Gavin E. Crooks. “Gradients of parameterized quantum gates using the parameter-shift rule and gate decomposition” (2019). arXiv:1905.13311.
* [41] Jakob S. Kottmann, Abhinav Anand, and Alán Aspuru-Guzik. “A feasible approach for automatically differentiable unitary coupled-cluster on quantum computers”. Chemical Science 12, 3497–3508 (2021).
* [42] Javier Gil Vidal and Dirk Oliver Theis. “Calculus on parameterized quantum circuits” (2018). arXiv:1812.06323.
* [43] Francisco Javier Gil Vidal and Dirk Oliver Theis. “Input redundancy for parameterized quantum circuits”. Frontiers in Physics 8, 297 (2020).
* [44] Maria Schuld, Ryan Sweke, and Johannes Jakob Meyer. “Effect of data encoding on the expressive power of variational quantum-machine-learning models”. Phys. Rev. A 103, 032430 (2021).
* [45] Ken M. Nakanishi, Keisuke Fujii, and Synge Todo. “Sequential minimal optimization for quantum-classical hybrid algorithms”. Phys. Rev. Research 2, 043158 (2020).
* [46] Andrea Mari, Thomas R. Bromley, and Nathan Killoran. “Estimating the gradient and higher-order derivatives on quantum hardware”. Phys. Rev. A 103, 012405 (2021).
* [47] Johannes Jakob Meyer. “Fisher information in noisy intermediate-scale quantum applications”. Quantum 5, 539 (2021).
* [48] James Stokes, Josh Izaac, Nathan Killoran, and Giuseppe Carleo. “Quantum natural gradient”. Quantum 4, 269 (2020).
* [49] Bálint Koczor and Simon C. Benjamin. “Quantum analytic descent” (2020). arXiv:2008.13774.
* [50] Mateusz Ostaszewski, Edward Grant, and Marcello Benedetti. “Structure optimization for parameterized quantum circuits”. Quantum 5, 391 (2021).
* [51] Robert M. Parrish, Joseph T. Iosue, Asier Ozaeta, and Peter L. McMahon. “A Jacobi diagonalization and Anderson acceleration algorithm for variational quantum algorithm parameter optimization” (2019). arXiv:1904.03206.
* [52] Artur F. Izmaylov, Robert A. Lang, and Tzu-Ching Yen. “Analytic gradients in variational quantum algorithms: Algebraic extensions of the parameter-shift rule to general unitary transformations”. Phys. Rev. A 104, 062443 (2021).
* [53] Oleksandr Kyriienko and Vincent E. Elfving. “Generalized quantum circuit differentiation rules”. Phys. Rev. A 104, 052417 (2021).
* [54] Thomas Hubregtsen, Frederik Wilde, Shozab Qasim, and Jens Eisert. “Single-component gradient rules for variational quantum algorithms” (2021). arXiv:2106.01388v1.
* [55] Antoni Zygmund. “Trigonometric series, Volume II”. Cambridge University Press (1988).
* [56] Kosuke Mitarai and Keisuke Fujii. “Methodology for replacing indirect measurements with direct measurements”. Phys. Rev. Research 1, 013006 (2019).
* [57] Sam McArdle, Tyson Jones, Suguru Endo, Ying Li, Simon C. Benjamin, and Xiao Yuan. “Variational ansatz-based quantum simulation of imaginary time evolution”. npj Quantum Information 5 (2019).
* [58] Ying Li and Simon C. Benjamin. “Efficient variational quantum simulator incorporating active error minimization”. Phys. Rev. X 7, 021050 (2017).
* [59] David Wierichs, Christian Gogolin, and Michael Kastoryano. “Avoiding local minima in variational quantum eigensolvers with the natural gradient optimizer”. Phys. Rev. Research 2, 043246 (2020).
* [60] Mauro E. S. Morales, Jacob D. Biamonte, and Zoltán Zimborás. “On the universality of the quantum approximate optimization algorithm”. Quantum Information Processing 19, 1–26 (2020).
* [61] Seth Lloyd. “Quantum approximate optimization is computationally universal” (2018). arXiv:1812.11075.
* [62] Matthew B. Hastings. “Classical and quantum bounded depth approximation algorithms” (2019). arXiv:1905.07047.
* [63] Zhihui Wang, Stuart Hadfield, Zhang Jiang, and Eleanor G. Rieffel. “Quantum approximate optimization algorithm for MaxCut: A fermionic view”. Phys. Rev. A 97, 022304 (2018).
* [64] Wen Wei Ho and Timothy H. Hsieh. “Efficient variational simulation of non-trivial quantum states”. SciPost Phys 6, 29 (2019).
* [65] Leo Zhou, Sheng-Tao Wang, Soonwon Choi, Hannes Pichler, and Mikhail D. Lukin. “Quantum approximate optimization algorithm: Performance, mechanism, and implementation on near-term devices”. Phys. Rev. X 10, 021067 (2020).
* [66] Matthew P. Harrigan, Kevin J. Sung, Matthew Neeley, Kevin J. Satzinger, Frank Arute, Kunal Arya, Juan Atalaya, Joseph C. Bardin, Rami Barends, Sergio Boixo, et al. “Quantum approximate optimization of non-planar graph problems on a planar superconducting processor”. Nature Physics 17, 332–336 (2021).
* [67] Charles Delorme and Svatopluk Poljak. “The performance of an eigenvalue bound on the MaxCut problem in some classes of graphs”. Discrete Mathematics 111, 145–156 (1993).
* [68] William N. Anderson Jr. and Thomas D. Morley. “Eigenvalues of the Laplacian of a graph”. Linear and Multilinear Algebra 18, 141–145 (1985).
* [69] Vladimir Brankov, Pierre Hansen, and Dragan Stevanović. “Automated conjectures on upper bounds for the largest Laplacian eigenvalue of graphs”. Linear Algebra and its Applications 414, 407–424 (2006).
* [70] Michel X. Goemans and David P. Williamson. “Improved approximation algorithms for Maximum Cut and satisfiability problems using semidefinite programming”. J. ACM 42, 1115–1145 (1995).
* [71] Miguel F. Anjos and Henry Wolkowicz. “Geometry of semidefinite MaxCut relaxations via matrix ranks”. Journal of Combinatorial Optimization 6, 237–270 (2002).
* [72] Liu Hongwei, Sanyang Liu, and Fengmin Xu. “A tight semidefinite relaxation of the MaxCut problem”. J. Comb. Optim. 7, 237–245 (2003).
* [73] Andrea Skolik, Jarrod R. McClean, Masoud Mohseni, Patrick van der Smagt, and Martin Leib. “Layerwise learning for quantum neural networks”. Quantum Machine Intelligence 3, 1–11 (2021).
* [74] Marcello Benedetti, Mattia Fiorentini, and Michael Lubasch. “Hardware-efficient variational quantum algorithms for time evolution”. Phys. Rev. Research 3, 033083 (2021).
* [75] Ernesto Campos, Aly Nasrallah, and Jacob Biamonte. “Abrupt transitions in variational quantum circuit training”. Phys. Rev. A 103, 032607 (2021).
* [76] Aharon Ben-Tal and Arkadi Nemirovski. “Lectures on modern convex optimization: Analysis, algorithms, and engineering applications”. SIAM (2001).
* [77] Elies Gil-Fuster and David Wierichs. “Quantum analytic descent (demo)”. url: pennylane.ai/qml/demos/.. (accessed: 2022-01-23).
* [78] Bálint Koczor (2021). code: balintkoczor/quantum-analytic-descent.
* [79] David Wierichs, Josh Izaac, Cody Wang, and Cedric Yen-Yu Lin (2022). code: dwierichs/General-Parameter-Shift-Rules.
* [80] Leonard Benjamin William Jolley. “Summation of series”. Dover Publications (1961).
* [81] falagar. “Prove that $\sum\limits_{k=1}^{n-1}\tan^{2}\frac{k\pi}{2n}=\frac{(n-1)(2n-1)}{3}$”. url: math.stackexchange.com/q/2343. (accessed: 2022-01-23).
## Appendix A Technical derivations
### A.1 Derivation of explicit parameter-shift rules
Here we derive the trigonometric interpolation via Dirichlet kernels.
#### A.1.1 Full reconstruction
We start out by exactly determining $E(x)$ given its value at the points
$\{x_{\mu}=\frac{2\mu}{2R+1}\pi\}$, $\mu\in\{-R,\dots,R\}$. This is a well-
known problem [55, Chapter X]; we reproduce the result below for completeness.
Consider the _Dirichlet kernel_

$$D(x)=\frac{1}{2R+1}+\frac{2}{2R+1}\sum_{\ell=1}^{R}\cos(\ell x)=\frac{\sin\left(\frac{2R+1}{2}x\right)}{(2R+1)\sin\left(\frac{1}{2}x\right)},\quad(54,\,55)$$

where the limit $x\rightarrow 0$ is taken when evaluating $D(0)$. The
functions $D(x-x_{\mu})$ are linear combinations of the basis functions
$\{\sin(\ell x)\}_{\ell\in[R]}$ and $\{\cos(\ell x)\}_{\ell\in[R]_{0}}$, and
they satisfy $D(x_{\mu^{\prime}}-x_{\mu})=\delta_{\mu\mu^{\prime}}$. Therefore
it is evident that

$$E(x)=\sum_{\mu=-R}^{R}E(x_{\mu})D(x-x_{\mu})=\frac{\sin\left(\frac{2R+1}{2}x\right)}{2R+1}\sum_{\mu=-R}^{R}E\left(x_{\mu}\right)\frac{(-1)^{\mu}}{\sin\left(\frac{x-x_{\mu}}{2}\right)}.\quad(56,\,57)$$
As an example, for $R=1$ (e.g., when the generator $G$ satisfies
$G^{2}=\mathds{1}$) we have the formula

$$E(x)=\frac{\sin\left(\frac{3}{2}x\right)}{3}\left[-\frac{E(-\frac{2}{3}\pi)}{\sin\left(\frac{x}{2}+\frac{\pi}{3}\right)}+\frac{E(0)}{\sin\left(\frac{x}{2}\right)}-\frac{E(\frac{2}{3}\pi)}{\sin\left(\frac{x}{2}-\frac{\pi}{3}\right)}\right].\quad(58)$$
Derivatives of $E(x)$ can be straightforwardly extracted from this full
reconstruction.
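This reconstruction is straightforward to verify numerically. The sketch below samples a random trigonometric polynomial of degree $R$ at the $2R+1$ points and rebuilds it via Eq. (56), using the cosine-series form (54) of the kernel to avoid the removable singularity of (55); the toy function is our own choice:

```python
import numpy as np

R = 3
rng = np.random.default_rng(2)
c = rng.normal(size=2 * R + 1)

def E(x):                                 # random trigonometric polynomial, degree R
    return c[0] + sum(c[2 * l - 1] * np.cos(l * x)
                      + c[2 * l] * np.sin(l * x) for l in range(1, R + 1))

def D(x):                                 # Dirichlet kernel, Eq. (54)
    return (1 + 2 * sum(np.cos(l * x) for l in range(1, R + 1))) / (2 * R + 1)

x_mu = 2 * np.pi * np.arange(-R, R + 1) / (2 * R + 1)

def E_recon(x):                           # Eq. (56)
    return sum(E(xm) * D(x - xm) for xm in x_mu)

grid = np.linspace(-np.pi, np.pi, 1001)
print(np.max(np.abs(E_recon(grid) - E(grid))))  # numerically zero: exact
```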
#### A.1.2 Odd kernels
We now consider the problem of determining $E_{\text{odd}}$ given its value at
the evenly spaced points $\{x_{\mu}=\frac{2\mu-1}{2R}\pi\}_{\mu\in[R]}$.
(Unlike for Sec. A.1.1, we are not aware of a prior reference for the
derivations in this subsection, which reconstructs the odd part, and in the
next, which reconstructs the even part.) Consider the _modified Dirichlet
kernel_

$$D^{\ast}(x)=\frac{1}{2R}+\frac{1}{2R}\cos(Rx)+\frac{1}{R}\sum_{\ell=1}^{R-1}\cos(\ell x)=\frac{\sin(Rx)}{2R\tan\left(\frac{1}{2}x\right)},\quad(59,\,60)$$
where we again assume that the limit $x\rightarrow 0$ is taken when evaluating
$D^{\ast}(0)$. This kernel satisfies the relations

$$D^{\ast}(x_{\mu^{\prime}}-x_{\mu})=\delta_{\mu\mu^{\prime}},\qquad D^{\ast}(x_{\mu^{\prime}}+x_{\mu})=0,\quad(61)$$
but unfortunately, $D^{\ast}(x)$ is a linear combination of cosines, not
sines; it is an even function, not an odd one. We therefore instead consider
the linear combinations

$$\tilde{D}_{\mu}(x)\coloneqq D^{\ast}(x-x_{\mu})-D^{\ast}(x+x_{\mu})=\frac{\sin(R(x-x_{\mu}))}{2R\tan\left(\frac{1}{2}(x-x_{\mu})\right)}-\frac{\sin(R(x+x_{\mu}))}{2R\tan\left(\frac{1}{2}(x+x_{\mu})\right)}=\frac{1}{R}\left[\sin(Rx_{\mu})\sin(Rx)+2\sum_{\ell=1}^{R-1}\sin(\ell x_{\mu})\sin(\ell x)\right].\quad(62)$$
Similarly to $D^{\ast}$, this kernel satisfies
$\tilde{D}_{\mu}(x_{\mu^{\prime}})=\delta_{\mu\mu^{\prime}}$, but it is a
linear combination of the odd basis functions $\sin(\ell x)$, $\ell\in[R]$.
From these two properties it follows that

$$E_{\text{odd}}(x)=\sum_{\mu=1}^{R}E_{\text{odd}}(x_{\mu})\tilde{D}_{\mu}(x)=\sum_{\mu=1}^{R}\frac{E_{\text{odd}}(x_{\mu})}{2R}\left[\frac{\sin(R(x-x_{\mu}))}{\tan\left(\frac{1}{2}(x-x_{\mu})\right)}-\frac{\sin(R(x+x_{\mu}))}{\tan\left(\frac{1}{2}(x+x_{\mu})\right)}\right],\quad(63)$$

so that we can reconstruct $E_{\text{odd}}$ from the $R$ evaluations
$E_{\text{odd}}(x_{\mu})$.
We can also extract from here a closed-form formula for the derivative at
$x=0$, as it only depends on the odd part of $E$. We arrive at the _general
parameter-shift rule_:

$$E^{\prime}(0)=\sum_{\mu=1}^{R}E_{\text{odd}}(x_{\mu})\tilde{D}_{\mu}^{\prime}(0)=\sum_{\mu=1}^{R}E_{\text{odd}}(x_{\mu})\frac{\sin(Rx_{\mu})}{2R\sin^{2}\left(\frac{1}{2}x_{\mu}\right)}=\sum_{\mu=1}^{R}E_{\text{odd}}\left(\frac{2\mu-1}{2R}\pi\right)\frac{(-1)^{\mu-1}}{2R\sin^{2}\left(\frac{2\mu-1}{4R}\pi\right)}.\quad(64,\,65)$$
Similarly, as the higher-order derivatives of $\tilde{D}_{\mu}$ can be
computed analytically, we may obtain derivatives of $E$ of higher odd orders.
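The rule of Eq. (65) can be checked directly against a random trigonometric polynomial of degree $R$, for which $E^{\prime}(0)$ is known in closed form (the toy function is our own choice):

```python
import numpy as np

R = 4
rng = np.random.default_rng(3)
c = rng.normal(size=2 * R + 1)

def E(x):                                 # random trigonometric polynomial, degree R
    return c[0] + sum(c[2 * l - 1] * np.cos(l * x)
                      + c[2 * l] * np.sin(l * x) for l in range(1, R + 1))

E_odd = lambda x: (E(x) - E(-x)) / 2      # only the odd part enters Eq. (65)

mu = np.arange(1, R + 1)
x_mu = (2 * mu - 1) * np.pi / (2 * R)
weights = (-1.0) ** (mu - 1) / (2 * R * np.sin(x_mu / 2) ** 2)
dE0 = np.sum(E_odd(x_mu) * weights)       # general parameter-shift rule, Eq. (65)

exact = sum(l * c[2 * l] for l in range(1, R + 1))  # E'(0) analytically
print(dE0 - exact)                        # numerically zero
```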
#### A.1.3 Even kernels
Next we reconstruct the even part $E_{\text{even}}$, again using the kernel
$D^{\ast}(x)$ from above but choosing the $R+1$ points $x_{\mu}=\mu\pi/R$ for
$\mu\in[R]_{0}$. As the spacing between these points is the same as between
the previous $\{x_{\mu}\}$, we again have
$D^{\ast}(x_{\mu^{\prime}}-x_{\mu})=\delta_{\mu\mu^{\prime}}$; but note that
we cannot directly use $D^{\ast}(x-x_{\mu})$ as our kernel, because
$D^{\ast}(x-x_{\mu})$ is an even function in $x-x_{\mu}$ but not in $x$.
Instead we take the even linear combinations

$$\hat{D}_{\mu}(x)\coloneqq\begin{cases}D^{\ast}(x)&\text{if }\mu=0,\\ D^{\ast}(x-x_{\mu})+D^{\ast}(x+x_{\mu})&\text{if }0<\mu<R,\\ D^{\ast}(x-\pi)&\text{if }\mu=R.\end{cases}$$

Then the $\hat{D}_{\mu}$ are even functions and satisfy
$\hat{D}_{\mu}(x_{\mu^{\prime}})=\delta_{\mu\mu^{\prime}}$, leading to

$$E_{\text{even}}(x)=\sum_{\mu=0}^{R}E_{\text{even}}(x_{\mu})\hat{D}_{\mu}(x).\quad(66)$$
The second derivative of $D^{\ast}$ is
$\displaystyle{D^{\ast}}^{\prime\prime}(x)$
$\displaystyle=\frac{\sin(Rx)\left[1-2R^{2}\sin^{2}(\frac{1}{2}x)\right]}{4R\tan(\frac{1}{2}x)\sin^{2}(\frac{1}{2}x)}-\frac{\cos(Rx)}{2\sin^{2}\left(\frac{1}{2}x\right)}$
and if we take the limit $x\rightarrow 0$:
$\displaystyle{D^{\ast}}^{\prime\prime}(0)=-\frac{2R^{2}+1}{6}.$ (67)
This yields the explicit parameter-shift rule for the second derivative:
$\displaystyle E^{\prime\prime}(0)$
$\displaystyle=-E_{\text{even}}(0)\frac{2R^{2}+1}{6}+E_{\text{even}}(\pi)\frac{(-1)^{R-1}}{2}$
$\displaystyle\quad+\sum_{\mu=1}^{R-1}E_{\text{even}}\left(\frac{\mu\pi}{R}\right)\frac{(-1)^{\mu-1}}{\sin^{2}\left(\frac{\mu\pi}{2R}\right)}.$
(68)
Again, derivatives of $E$ of higher even order can be computed in a similar
manner, using the same evaluations
$E_{\text{even}}\left(\frac{\mu\pi}{R}\right)$.
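The same kind of numerical check works for the second-derivative rule in Eq. (68); again, the toy-function coefficients below are illustrative only:

```python
import math

R = 3
a = [0.4, -0.8, 0.6, 1.1]   # cos coefficients (a[0] is the constant term)
b = [0.0, 0.5, -0.3, 0.2]   # sin coefficients

def E(x):
    return sum(a[l] * math.cos(l * x) + b[l] * math.sin(l * x)
               for l in range(R + 1))

def E_even(x):
    # even part of E, obtained from two evaluations
    return 0.5 * (E(x) + E(-x))

# second-derivative parameter-shift rule, Eq. (68)
second = (
    -E_even(0.0) * (2 * R**2 + 1) / 6
    + E_even(math.pi) * (-1) ** (R - 1) / 2
    + sum(E_even(mu * math.pi / R) * (-1) ** (mu - 1)
          / math.sin(mu * math.pi / (2 * R)) ** 2
          for mu in range(1, R))
)

exact = -sum(l**2 * a[l] for l in range(R + 1))  # E''(0) analytically
```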
### A.2 Hessian parameter-shift rule
Here we consider the spectrum of the function
$\displaystyle E^{(km)}(x)\coloneqq
E(\boldsymbol{x}_{0}+x\boldsymbol{v}_{k,m}),$ (69)
with $\boldsymbol{v}_{k,m}=\boldsymbol{v}_{k}+\boldsymbol{v}_{m}$. Without
loss of generality, we assume $U_{k}$ to act first within the circuit and set
$\boldsymbol{x}_{0}=\boldsymbol{0}$. As for the univariate case in Sec. 2.1,
we may explicitly write the cost function as
$\displaystyle E^{(km)}(x)$
$\displaystyle=\bra{\psi}U_{k}^{\dagger}(x)V^{\dagger}U_{m}^{\dagger}(x)BU_{m}(x)VU_{k}(x)\ket{\psi}$
$\displaystyle=\sum_{j_{1},\dots
j_{4}=1}^{d}\overline{\psi_{j_{1}}v_{j_{2}j_{1}}}b_{j_{2}j_{3}}v_{j_{3}j_{4}}\psi_{j_{4}}$
(70)
$\displaystyle\times\exp\left({i\left(\omega_{j_{4}}^{(k)}-\omega_{j_{1}}^{(k)}+\omega_{j_{3}}^{(m)}-\omega_{j_{2}}^{(m)}\right)x}\right),$
where $\omega^{(k,m)}$ are the eigenvalues of the generators of $U_{k}$ and
$U_{m}$, respectively, and we denoted the entries of matrices by lowercase
letters as before. We may read off the frequencies occurring in this Fourier
series in terms of the unique positive differences $\Omega^{(k,m)}$: they take
the form $\delta\Omega_{l_{1}l_{2}}=\pm\Omega_{l_{1}}^{(k)}\pm\Omega_{l_{2}}^{(m)}$.
We again only collect the positive values, as they come in pairs (for any
$\delta\Omega$, the Fourier series also contains $-\delta\Omega$, and the
representation as a real-valued function subsumes the two frequencies).
In case of integer-valued frequencies, there are $R_{km}=R_{k}+R_{m}$ such
positive frequencies, namely all integers in $[R_{k}+R_{m}]$. For arbitrary
frequencies, all $\\{\delta\Omega\\}$ might be unique and we obtain up to
$R_{km}=2R_{k}R_{m}+R_{k}+R_{m}$ frequencies. Rescaling the smallest frequency
enforces a small degree of redundancy, so that
$R_{km}=2R_{k}R_{m}+R_{k}+R_{m}-2$ is always achievable; for some scenarios,
specific rescaling factors might drastically reduce $R_{km}$ (recall that we
used rescaling in the equidistant-frequency case to arrive at integer-valued
$\\{\Omega\\}$, which in turn made the significant reduction above possible).
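The frequency counting above can be illustrated directly; the sketch below (with illustrative frequency sets) counts the distinct positive values among $\Omega^{(k)}$, $\Omega^{(m)}$ and $\pm\Omega^{(k)}\pm\Omega^{(m)}$:

```python
import math

def count_positive_freqs(Ok, Om):
    """Count distinct positive frequencies of E^{(km)}: the original
    Omega^{(k)} and Omega^{(m)} plus all positive sums and differences."""
    vals = set()
    for w in Ok + Om:
        vals.add(round(w, 9))
    for wk in Ok:
        for wm in Om:
            for v in (wk + wm, abs(wk - wm)):
                if v > 1e-9:          # drop the zero frequency
                    vals.add(round(v, 9))
    return len(vals)

# integer-valued case collapses to R_km = R_k + R_m
R_int = count_positive_freqs([1, 2, 3], [1, 2])

# generic incommensurate frequencies saturate 2 R_k R_m + R_k + R_m
R_gen = count_positive_freqs([1.0, math.sqrt(2), math.sqrt(3)],
                             [math.sqrt(5), math.sqrt(7)])
```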
### A.3 Hadamard tests for the metric tensor
In order to compute the metric tensor as the Hessian of the overlap
$f(\boldsymbol{x})=-\frac{1}{2}|\\!\braket{\psi(\boldsymbol{x})}{\psi(\boldsymbol{x}_{0})}\\!|^{2}$,
we need to evaluate it at shifted positions
$\boldsymbol{x}=\boldsymbol{x}_{0}+x\boldsymbol{v}_{k,m}$. This can be done by
executing the circuit $V(\boldsymbol{x}_{0})$ and the adjoint circuit
$V^{\dagger}(\boldsymbol{x})$ at the shifted position, and returning the
probability of measuring the $\mathbf{0}$ bitstring in the computational basis.
As all operations after the latter of the two parametrized gates of interest
cancel between the two circuits, those operations can be spared, but the
maximal circuit depth is still (almost) twice that of $V$.
Alternatively, we may use a Hadamard test as derived in the appendix of Ref.
[57]. There, it was designed to realize the derivative overlaps
$\real{\braket{\partial_{k}\psi(\boldsymbol{x})}{\partial_{m}\psi(\boldsymbol{x})}}$
for the metric tensor directly, assuming the generator to be a Pauli word and
therefore unitary. However, it can also be used to calculate the real or
imaginary part of
$\displaystyle\braket{\psi(\boldsymbol{x})}{\psi(\boldsymbol{x}_{0})}$
$\displaystyle=\bra{\boldsymbol{0}}U_{1}^{\dagger}((\boldsymbol{x}_{0})_{1})\cdots
U_{k}^{\dagger}((\boldsymbol{x}_{0})_{k}+x)$ $\displaystyle\cdots
U_{m-1}^{\dagger}((\boldsymbol{x}_{0})_{m-1})U_{m}^{\dagger}(x)U_{m-1}((\boldsymbol{x}_{0})_{m-1})$
$\displaystyle\cdots U_{1}((\boldsymbol{x}_{0})_{1})\ket{\boldsymbol{0}}.$
(71)
by measuring the auxiliary qubit in the $Z$ or $Y$ basis. The corresponding
circuit is shown in Fig. LABEL:fig:hadamard_test_parshift.
While the original proposal has to split up the generators into Pauli words
and implement one circuit per combination of Pauli words from $x_{k}$ and
$x_{m}$, the number of circuits here is dictated by the number of evaluations
in the parameter-shift rule. In order to measure $f(\boldsymbol{x})$, the real
and the imaginary part both have to be measured, doubling the number of
circuits.
### A.4 Coefficient norms for univariate derivatives via equidistant shifts
The $\ell_{1}$-norm of the coefficients in parameter-shift rules dictates the
number of shots required to reach certain precision (see Sec. 2.3). Here, we
explicitly compute this norm for both the general and decomposition-based
parameter-shift rule for the first- and second-order univariate derivative.
For the entire analysis, we approximate the single-shot variance $\sigma^{2}$
to be constant as detailed in the main text.
#### A.4.1 Norm for general parameter-shift rule
For the case of equidistant shift angles, we can compute the norm of the
coefficient vector $\boldsymbol{y}^{(1,2)}$ in the parameter-shift rules in
Eqs. (24,25) explicitly, in order to estimate the required shot budget for the
obtained derivative. For the first order, we note that the evaluations of $E$
come in pairs, with the same coefficient up to a relative sign. This yields
(recalling that $x_{\mu}=\frac{2\mu-1}{4R}\pi$):
$\displaystyle\lVert\boldsymbol{y}^{(1)}\rVert_{1}$
$\displaystyle=\frac{1}{2R}\sum_{\mu=1}^{R}\frac{1}{\sin^{2}(x_{\mu})}=R,$
(72)
which follows from $\sin^{-2}(x_{\mu})=\cot^{2}(x_{\mu})+1$ and [80, Formula
(445)]:
$\displaystyle\sum_{\mu=1}^{R}\cot^{2}(x_{\mu})$ $\displaystyle=2R^{2}-R.$
(73)
A derivation for Eq. (73) can be adapted from Ref. [81], which we present
below for completeness:
$\displaystyle-i(-1)^{\mu}$ $\displaystyle=\exp(i2Rx_{\mu})$
$\displaystyle=\Big{(}\cos(x_{\mu})+i\sin(x_{\mu})\Big{)}^{2R}$
$\displaystyle=\sum_{r=0}^{2R}\binom{2R}{r}\left(\cos(x_{\mu})\right)^{2R-r}\left(i\sin(x_{\mu})\right)^{r}$
$\displaystyle\Rightarrow\quad 0$
$\displaystyle=\sum_{r=0}^{R}\binom{2R}{2r}\left(\cos(x_{\mu})\right)^{2R-2r}\left(i\sin(x_{\mu})\right)^{2r}$
$\displaystyle=\sum_{r=0}^{R}\binom{2R}{2r}\Big{(}-\cot^{2}(x_{\mu})\Big{)}^{R-r}$
Here we have applied the binomial theorem, extracted the real part, and
divided by $(i\sin(x_{\mu}))^{2R}$ (note that $0<x_{\mu}<\pi/2$). From the
last equation above, we see that $\cot^{2}(x_{\mu})$ is a root of the function
$g(\chi)=\sum_{r=0}^{R}\binom{2R}{2r}(-\chi)^{R-r}$ for all $\mu\in[R]$. As
$g$ is a polynomial of degree $R$, we thus know _all_ its roots and may use
the simplest of Vieta’s formulas:
$\displaystyle\sum_{\mu=1}^{R}\tau_{\mu}$
$\displaystyle=-\frac{g_{R-1}}{g_{R}}$ (74)
with roots $\\{\tau_{\mu}\\}_{\mu}$ of $g$, and $g_{j}$ the $j$th order Taylor
coefficient of $g$. Plugging in the known roots and coefficients we get
$\displaystyle\sum_{\mu=1}^{R}\cot^{2}(x_{\mu})$
$\displaystyle=-\frac{(-1)^{R-1}\binom{2R}{2}}{(-1)^{R}\binom{2R}{0}}$ (75)
$\displaystyle=2R^{2}-R.$ (76)
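Both Eq. (72) and Eq. (73) are easy to verify numerically for small $R$; a plain-Python sketch:

```python
import math

# check Eqs. (72) and (73), with x_mu = (2 mu - 1) pi / (4R)
for R in range(1, 9):
    x = [(2 * mu - 1) * math.pi / (4 * R) for mu in range(1, R + 1)]
    cot_sq_sum = sum(1 / math.tan(xm) ** 2 for xm in x)
    norm = sum(1 / math.sin(xm) ** 2 for xm in x) / (2 * R)
    assert abs(cot_sq_sum - (2 * R**2 - R)) < 1e-9, R   # Eq. (73)
    assert abs(norm - R) < 1e-9, R                      # Eq. (72)
```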
For the second order, we may repeat the above computation with small
modifications (recall that the shift angles differ between the two
derivatives), arriving at
$g(\chi)=\sum_{r=0}^{R-1}\binom{2R}{2r+1}(-\chi)^{R-r}$ and therefore at
$\displaystyle\lVert\boldsymbol{y}^{(2)}\rVert_{1}$
$\displaystyle=\frac{2R^{2}+1}{6}+\frac{1}{2}+(R-1)-\frac{(-1)^{R-1}\binom{2R}{3}}{(-1)^{R}\binom{2R}{1}}$
$\displaystyle=R^{2}.$ (77)
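The closed form $\lVert\boldsymbol{y}^{(2)}\rVert_{1}=R^{2}$ of Eq. (77) can likewise be checked by summing the coefficient magnitudes appearing in Eq. (68):

```python
import math

# sum of the coefficient magnitudes in the second-derivative rule, Eq. (68)
for R in range(1, 9):
    norm2 = (2 * R**2 + 1) / 6 + 0.5 + sum(
        1 / math.sin(mu * math.pi / (2 * R)) ** 2 for mu in range(1, R))
    assert abs(norm2 - R**2) < 1e-9, R
```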
#### A.4.2 Norm for decomposition
If we compute the first- and second-order derivatives via a decomposition that
contains $\mathcal{P}$ parametrized elementary gates, we need to apply the
original two-term parameter-shift rule to each of these gates separately. For
the first-order derivative, we simply sum all elementary derivatives. For
integer-valued frequencies, $x$ typically feeds without prefactor into the
gates in the decomposition, so that the decomposition-based shift rule reads
$\displaystyle
E^{\prime}(0)=\frac{1}{2\sin(x_{1})}\sum_{k=1}^{\mathcal{P}}[E^{(k)}(x_{1})-E^{(k)}(-x_{1})],$
(78)
where $E^{(k)}$ denotes the cost function based on the decomposition, in which
only the parameter of the $k$th elementary gate is set to the shifted angle
$x_{1}$ and to $0$ in all other gates. To maximize $\sin(x_{1})$, we choose
$x_{1}=\pi/2$; as a result, all $2\mathcal{P}$ coefficients have magnitude
$1/2$, and therefore
$\displaystyle\lVert\boldsymbol{y}_{\text{decomp}}^{(1)}\rVert_{1}=\mathcal{P}.$
(79)
Due to all coefficients being equal, the optimal shot allocation is
$N/(2\mathcal{P})$ for all terms.
For the second-order derivative, the full Hessian has to be computed from the
decomposition as described in Ref. [46] and all elements have to be summed
(here we do not anticipate the cheaper Hessian evaluation from Sec. 4.1):
$\displaystyle E^{\prime\prime}(0)$
$\displaystyle=\frac{1}{2\sin^{2}(x_{1})}\sum_{\begin{subarray}{c}k,m=1\\\
k<m\end{subarray}}^{\mathcal{P}}$ (80)
$\displaystyle\bigg{[}E^{(km)}(x_{1},x_{1})-E^{(km)}(-x_{1},x_{1})$
$\displaystyle-E^{(km)}(x_{1},-x_{1})+E^{(km)}(-x_{1},-x_{1})\bigg{]}$
$\displaystyle+\frac{1}{2}\sum_{k=1}^{\mathcal{P}}[E^{(k)}(\pi)-E(0)]$
where $E^{(km)}(x_{1},x_{2})$ is defined analogously to $E^{(k)}$ but the
shift angles put into the $k$th and $m$th elementary gate may differ. Fixing
the shift angle to $\pi/2$ again, we have $2\mathcal{P}(\mathcal{P}-1)$
coefficients of magnitude $1/2$ for the off-diagonal terms, $\mathcal{P}$
coefficients of magnitude $1/2$ for the $E^{(k)}(\pi)$ and one coefficient
with magnitude $\mathcal{P}/2$ for $E(0)$, summing to
$\displaystyle\lVert\boldsymbol{y}_{\text{decomp}}^{(2)}\rVert_{1}=2\mathcal{P}(\mathcal{P}-1)\frac{1}{2}+\mathcal{P}\frac{1}{2}+\frac{\mathcal{P}}{2}=\mathcal{P}^{2}.$
(81)
Here the optimal shot allocation is to measure all shifted terms with
$N/(2\mathcal{P}^{2})$ shots, and $E(0)$ with $N/(2\mathcal{P})$ shots.
### A.5 Coefficient norms for the Hessian
Similarly to the previous section, we compute the coefficient norms for three
methods of computing the Hessian for equidistant frequencies and shifts: we may
use the diagonal shift rule in Eq. (36), repeat the general parameter-shift
rule, or decompose the circuit and repeat the original parameter-shift rule.
For the first approach, the diagonal entries of the Hessian—and thus the
shifted evaluations for those entries—are reused to compute the off-diagonal
ones, whereas the shifted evaluations for the repeated shift rule are distinct
for all Hessian entries. This difference makes the cost comparison for a
single Hessian entry difficult. We therefore consider the root mean square of
the Frobenius norm of the difference between the true and the estimated
Hessian as quality measure. The matrix of expected deviations is given by the
standard deviations $\sigma_{km}$ so that we need to compute
$\displaystyle\varepsilon=\sqrt{\sum_{k,m=1}^{n}\sigma_{km}^{2}}=\sqrt{\sum_{k=1}^{n}\sigma_{k}^{2}+\sum_{k<m}2\sigma_{km}^{2}}\
.$ (82)
#### A.5.1 Hessian shift rule
The variance for a Hessian diagonal entry $H_{kk}$ is
$\sigma^{2}R_{k}^{4}/N_{kk}$ if we use $N_{kk}$ shots to estimate it (see Eq.
(29); recall that $\sigma^{2}$ is the single-shot variance). For an
off-diagonal element $H_{km}$ computed via the diagonal shift rule in Eq.
(36), the variance is
$\displaystyle\sigma_{km}^{2}=\frac{1}{4}\left(\frac{\sigma^{2}(R_{k}+R_{m})^{4}}{N_{km}}+\frac{\sigma^{2}R_{k}^{4}}{N_{kk}}+\frac{\sigma^{2}R_{m}^{4}}{N_{mm}}\right),$
(83)
where we used that $R_{km}=R_{k}+R_{m}$ for equidistant frequencies. Overall,
this yields
$\displaystyle\varepsilon^{2}=\sum_{k=1}^{n}\frac{\sigma^{2}R_{k}^{4}}{N_{kk}}\frac{n+1}{2}+\sum_{k<m}\frac{\sigma^{2}(R_{k}+R_{m})^{4}}{2N_{km}}$
(84)
If we allocate $N_{\text{diag}}$ shots optimally, that is $N_{km}$ is
proportional to the square root of the coefficient of $N_{km}^{-1}$, we
require
$\displaystyle N_{\text{diag}}$
$\displaystyle=\frac{\sigma^{2}}{\varepsilon^{2}}\left[\sum_{k=1}^{n}R_{k}^{2}\sqrt{\frac{n+1}{2}}+\sum_{k<m}\frac{1}{\sqrt{2}}(R_{k}+R_{m})^{2}\right]^{2}$
$\displaystyle=\frac{\sigma^{2}}{2\varepsilon^{2}}\Big{[}\bigl{(}\sqrt{n+1}+n-2\bigr{)}\lVert\boldsymbol{R}\rVert_{2}^{2}+\lVert\boldsymbol{R}\rVert_{1}^{2}\Big{]}^{2}$
(85)
shots to estimate $H$ to a precision of $\varepsilon$.
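To compare shot budgets, the explicit sums and the closed forms can be evaluated side by side; the sketch below (for an arbitrary illustrative vector $\boldsymbol{R}$, with $\sigma^{2}/\varepsilon^{2}=1$) checks that the two lines of Eq. (85), and of Eq. (88) below, agree:

```python
import math

def N_diag(R):
    # explicit sum inside the brackets of Eq. (85), then squared
    n = len(R)
    s = sum(Rk**2 * math.sqrt((n + 1) / 2) for Rk in R)
    s += sum((R[k] + R[m]) ** 2 / math.sqrt(2)
             for k in range(n) for m in range(k + 1, n))
    return s**2

def N_genPS(R):
    # explicit sum inside the brackets of Eq. (88), then squared
    n = len(R)
    s = sum(Rk**2 for Rk in R)
    s += sum(math.sqrt(2) * R[k] * R[m]
             for k in range(n) for m in range(k + 1, n))
    return s**2

R = [1, 2, 3, 2]                      # illustrative example
n = len(R)
l1 = sum(R)                           # ||R||_1
l2sq = sum(x * x for x in R)          # ||R||_2^2

# closed forms from Eqs. (85) and (88)
closed_diag = 0.5 * ((math.sqrt(n + 1) + n - 2) * l2sq + l1**2) ** 2
closed_gen = 0.5 * ((math.sqrt(2) - 1) * l2sq + l1**2) ** 2
```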
#### A.5.2 Repeated general parameter-shift rule
Without the diagonal shift rule, we compute $H_{km}$ by executing the
univariate general parameter-shift rule in Eq. (24) for $x_{k}$ and $x_{m}$
successively, i.e., we apply the rule for $x_{m}$ to all terms from the rule
for $x_{k}$. This leads to $4R_{k}R_{m}$ terms with their coefficients arising
from the first-order shift rule coefficients by multiplying them together:
$\displaystyle\lVert\boldsymbol{y}^{(km)}\rVert_{1}$
$\displaystyle=\frac{1}{4R_{k}R_{m}}\sum_{\mu=1}^{R_{k}}\frac{1}{\sin^{2}(x_{\mu})}\sum_{\mu^{\prime}=1}^{R_{m}}\frac{1}{\sin^{2}(x_{\mu^{\prime}})}$
$\displaystyle=R_{k}R_{m},$ (86)
where we used Eq. (72). Correspondingly, the variance for $H_{km}$ computed by
this method with an optimal shot allocation of $N_{km}$ shots is
$\sigma_{km}^{2}=\sigma^{2}R_{k}^{2}R_{m}^{2}/N_{km}$. The mean square of the
Frobenius norm then is
$\displaystyle\varepsilon^{2}=\sum_{k=1}^{n}\frac{\sigma^{2}R_{k}^{4}}{N_{kk}}+\sum_{k<m}\frac{2\sigma^{2}R_{k}^{2}R_{m}^{2}}{N_{km}}$
(87)
and an optimal shot allocation across the entries of the Hessian to achieve a
precision of $\varepsilon$ will require
$\displaystyle N_{\text{genPS}}$
$\displaystyle=\frac{\sigma^{2}}{\varepsilon^{2}}\left[\sum_{k=1}^{n}R_{k}^{2}+\sum_{k<m}\sqrt{2}R_{k}R_{m}\right]^{2}$
$\displaystyle=\frac{\sigma^{2}}{2\varepsilon^{2}}\Big{[}\bigl{(}\sqrt{2}-1\bigr{)}\lVert\boldsymbol{R}\rVert_{2}^{2}+\lVert\boldsymbol{R}\rVert_{1}^{2}\Big{]}^{2}$
(88)
shots in total.
#### A.5.3 Decomposition and repeated original shift rule
For the third approach, we only require the observation that again all
(unique) Hessian entries are estimated independently and that the coefficients
arise from all products of two coefficients from the separate shift rules for
$x_{k}$ and $x_{m}$. This yields $4\mathcal{P}_{k}\mathcal{P}_{m}$
coefficients with magnitude $1/4$, so that the calculation of $\varepsilon$ is
the same as for the previous approach, replacing $\boldsymbol{R}$ by
$\mathcal{P}$. The required shot budget for a precision of $\varepsilon$ is
thus
$\displaystyle N_{\text{decomp}}$
$\displaystyle=\frac{\sigma^{2}}{2\varepsilon^{2}}\Big{[}\bigl{(}\sqrt{2}-1\bigr{)}\lVert\boldsymbol{\mathcal{P}}\rVert_{2}^{2}+\lVert\boldsymbol{\mathcal{P}}\rVert_{1}^{2}\Big{]}^{2}$
(89)
## Appendix B Generalization to arbitrary spectra
Throughout this work, we mostly focused on cost functions $E$ with equidistant
— and thus, by rescaling, integer-valued — frequencies $\\{\Omega_{\ell}\\}$.
Here we will discuss the generalization to arbitrary frequencies, mostly
considering the changed cost.
### B.1 Univariate functions
The nonuniform DFT used to reconstruct the full function $E$ in Sec. 3.1, and
its modifications for the odd and even part in Secs. 3.2 and 3.3, can be used
straightforwardly for arbitrary frequencies. However, choosing equidistant
shift angles $\\{x_{\mu}\\}$ will no longer make the DFT uniform, as was the
case for equidistant frequencies. Correspondingly, the explicit parameter-
shift rules for $E^{\prime}(0)$ and $E^{\prime\prime}(0)$ in Eqs. (24, 25) do
not apply and in general we do not know a closed-form expression for the DFT
or the parameter-shift rules. Symbolically, the parameter-shift rule takes the
form
$\displaystyle E^{\prime}(0)$
$\displaystyle=\sum_{\mu=1}^{R}y^{(1)}_{\mu}[E(x_{\mu})-E(-x_{\mu})]$ (90)
$\displaystyle E^{\prime\prime}(0)$
$\displaystyle=y^{(2)}_{0}E(0)+\sum_{\mu=1}^{R}y^{(2)}_{\mu}[E(x_{\mu})+E(-x_{\mu})].$
(91)
Regarding the evaluation cost, the odd part and thus odd-order derivatives can
be obtained at the same price of $2R$ evaluations of $E$ as before, but the
even part might no longer be periodic in general; as a consequence,
$\displaystyle E_{\text{even}}(\pi)=\frac{1}{2}(E(\pi)+E(-\pi))\neq E(\pi)$
(92)
actually may require two evaluations of $E$, leading to $2R+1$ evaluations
overall. If the even part is periodic with some period $T$, which is
equivalent to all involved frequencies being commensurable, evaluating
$E_{\text{even}}(T/2)$ allows us to skip the additional evaluation.
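In the absence of a closed form, the coefficients $y^{(1)}_{\mu}$ in Eq. (90) can be obtained by solving a small linear system: exactness on every basis function $\sin(\Omega_{\ell}x)$ requires $\sum_{\mu}2y_{\mu}\sin(\Omega_{\ell}x_{\mu})=\Omega_{\ell}$. The following plain-Python sketch (frequencies and shift angles are arbitrary illustrative values) sets up such a rule and verifies it on a toy cost function:

```python
import math

def solve(A, c):
    """Gauss-Jordan elimination with partial pivoting for a small dense system."""
    n = len(c)
    M = [row[:] + [c[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                fac = M[r][col] / M[col][col]
                for k in range(col, n + 1):
                    M[r][k] -= fac * M[col][k]
    return [M[i][n] / M[i][i] for i in range(n)]

Omega = [1.0, 2.7, 3.5]      # arbitrary (non-equidistant) frequencies
shifts = [0.4, 1.1, 2.0]     # arbitrary distinct shift angles
R = len(Omega)

# E'(0) = sum_mu y_mu [E(x_mu) - E(-x_mu)] requires, for every l,
# sum_mu 2 y_mu sin(Omega_l x_mu) = Omega_l
A = [[2 * math.sin(w * x) for x in shifts] for w in Omega]
y = solve(A, Omega)

# toy cost function with exactly these frequencies (illustrative coefficients)
a = [0.3, -0.9, 0.4]
b = [1.2, 0.5, -0.7]

def E(x):
    return sum(a[l] * math.cos(Omega[l] * x) + b[l] * math.sin(Omega[l] * x)
               for l in range(R))

rule = sum(y[mu] * (E(shifts[mu]) - E(-shifts[mu])) for mu in range(R))
exact = sum(Omega[l] * b[l] for l in range(R))
```

The chosen shift angles must keep the sine matrix nonsingular; for generic angles this holds, but poorly placed shifts can inflate the coefficient norm and hence the shot cost.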
When comparing to the first derivative based on a decomposition into
$\mathcal{P}$ parametrized elementary gates, the break-even point for the
number of unique circuits remains at $R=\mathcal{P}$ as for equidistant
frequencies, but we note that e.g., a decomposition of the form
$\displaystyle U(x)=\prod_{k=1}^{\mathcal{P}}U_{k}(\beta_{k}x),$ (93)
namely where $x$ is rescaled individually in each elementary gate by some
$\beta_{k}\in\mathbb{R}$, in general will result in $R=\mathcal{P}^{2}$
frequencies of $E$, making the decomposition-based parameter-shift rule
beneficial. For the second-order derivative, the number of evaluations $2R+1$
might be quadratic in $\mathcal{P}$ in the same way, but the decomposition
requires $2\mathcal{P}^{2}-\mathcal{P}+1$ as well, so that the requirements
are similar if $R=\mathcal{P}$.
Regarding the required number of shots, we cannot make concrete statements for
the general case, as we do not have a closed-form expression for the
coefficients $\boldsymbol{y}$; note, however, that for the decomposition approach,
rescaling factors like the $\\{\beta_{k}\\}$ in Eq. (93) above have to be
factored in via the chain rule, leading to a modified shot requirement.
An example for unitaries with non-equidistant frequencies would be the QAOA
layer that implements the time evolution under the problem Hamiltonian (see
Eq. (26)) for $\operatorname{\textsc{MaxCut}}$ on _weighted_ graphs with non-
integer weights.
For the stochastic parameter-shift rule in Sec. 3.6 we did not restrict
ourselves to equidistant frequencies and derive it in App. C for general
unitaries of the form $U_{F}=\exp(i(xG+F))$ directly.
### B.2 Multivariate functions
While the univariate functions do not differ strongly for equidistant and
arbitrary frequencies in $E$ and mostly the expected relation between $R$ and
$\mathcal{P}$ changes, the shift rule for the Hessian and the metric tensor
are affected heavily by generalizing the spectrum. First, the univariate
restriction $E^{(km)}(x)$ in Eq. (34) still can be used to compute the off-
diagonal entry $H_{km}$ of the Hessian but this may require up to
$2R_{km}+1=4R_{k}R_{m}+2R_{k}+2R_{m}-3$ evaluations (see App. A.2), in
contrast to $2R_{km}=2(R_{k}+R_{m})$ in the equidistant case. Compared to the
resource requirements of the decomposition-based approach,
$4\mathcal{P}_{k}\mathcal{P}_{m}$, this makes our general parameter-shift rule
more expensive if $R_{k}\gtrsim\mathcal{P}_{k}$.
As we use the same method to obtain the metric tensor $\mathcal{F}$, the
number of evaluations grows in the same manner, making the decomposition-based
shift rule more feasible for unitaries with non-equidistant frequencies. As
$f(\boldsymbol{x}_{0})$ does not have to be evaluated, an off-diagonal element
$\mathcal{F}_{km}$ requires one evaluation fewer than $H_{km}$, namely
$4R_{k}R_{m}+2R_{k}+2R_{m}-4$.
## Appendix C General stochastic shift rule
In this section we describe a stochastic variant of the general parameter-
shift rule which follows immediately from combining the rule for single-
parameter gates in Eq. (90) with the result from Ref. [39].
First, note that any shift rule
$\displaystyle E^{\prime}(x_{0})=\sum_{\mu}y_{\mu}E(x_{0}+x_{\mu}),$ (94)
with coefficients $\\{y_{\mu}\\}$ and shift angles $\\{x_{\mu}\\}$ for a
unitary $U(x)=\exp(ixG)$, implies that we can implement the commutator with
$G$:
$\displaystyle i[G,\rho]=\sum_{\mu}y_{\mu}U(x_{\mu})\rho
U^{\dagger}(x_{\mu}),$ (95)
since the commutator of $G$ with the density matrix directly expresses the
derivative of the expectation value $E^{\prime}(0)$ on the operator level, and
shift rules hold for arbitrary states.
Now consider the extension $U_{F}(x)=\exp(i(xG+F))$ of the above unitary. In
the original stochastic parameter-shift rule, the authors show (to be precise,
we here combine Eqs. (11)-(13) of Ref. [39] into a general expression for
$E^{\prime}$)
$\displaystyle E^{\prime}(x_{0})=$
$\displaystyle\int_{0}^{1}\mathrm{d}t\;\operatorname{tr}\bigg{\\{}U_{F}^{\dagger}(tx_{0})B\,U_{F}(tx_{0})$
(96) $\displaystyle\times i\left[G\ ,\
U_{F}\bigl{(}(1-t)x_{0}\bigr{)}\ket{\psi}\\!\\!\bra{\psi}U_{F}^{\dagger}\bigl{(}(1-t)x_{0}\bigr{)}\right]\bigg{\\}}$
where we again denoted the state prepared by the circuit before $U_{F}$ by
$\ket{\psi}$ and the observable transformed by the circuit following $U_{F}$
by $B$. By using Eq. (95) to express the commutator, we obtain
$\displaystyle E^{\prime}(x_{0})$
$\displaystyle=\int_{0}^{1}\mathrm{d}t\;\sum_{\mu}y_{\mu}\operatorname{tr}\bigg{\\{}U_{F}^{\dagger}(tx_{0})B\,U_{F}(tx_{0})$
(97) $\displaystyle\times
U(x_{\mu})U_{F}\bigl{(}(1-t)x_{0}\bigr{)}\ket{\psi}\\!\\!\bra{\psi}U_{F}^{\dagger}\bigl{(}(1-t)x_{0}\bigr{)}U^{\dagger}(x_{\mu})\bigg{\\}}.$
We abbreviate the interleaved unitaries
$\displaystyle U_{F,\mu}(x_{0},t)\coloneqq
U_{F}(tx_{0})U(x_{\mu})U_{F}\bigl{(}(1-t)x_{0}\bigr{)}$ (98)
and denote the cost function that uses $U_{F,\mu}(x_{0},t)$ instead of
$U_{F}(x_{0})$ as
$\displaystyle E_{\mu}(x_{0},t)\coloneqq\operatorname{tr}\left\\{B\,U_{F,\mu}(x_{0},t)\ket{\psi}\\!\\!\bra{\psi}U_{F,\mu}^{\dagger}(x_{0},t)\right\\}.$
Rewriting Eq. (97) then yields the _generalized stochastic parameter-shift
rule_
$\displaystyle
E^{\prime}(x_{0})=\int_{0}^{1}\mathrm{d}t\sum_{\mu}y_{\mu}E_{\mu}(x_{0},t).$
(99)
It can be implemented by sampling values for the splitting time $t$, combining
the shifted energies $E_{\mu}(x_{0},t)$ for each sampled $t$ with the
coefficients $y_{\mu}$, and averaging over the results.
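As a minimal single-qubit sketch of this procedure, take $G=Z$ (so that the two-term rule with shifts $\pm\pi/4$ and coefficients $\pm 1$ implements the commutator) and $F=fX$. The values of $x_{0}$ and $f$ below are illustrative, and $U_{F}(tx_{0})$ is read here in the Duhamel sense, i.e., as $\exp(it(x_{0}G+F))$ with the whole exponent scaled by the splitting time. The integral over $t$ is done by a midpoint rule rather than by sampling, and the result is compared against a finite-difference derivative:

```python
import math
import cmath

x0, f = 0.3, 0.8   # evaluation point and strength of the extra term F = f*X

Z = [[1, 0], [0, -1]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dag(A):
    return [[complex(A[j][i]).conjugate() for j in range(2)] for i in range(2)]

def expmat(az, ax):
    """exp(i(az*Z + ax*X)) via exp(i r n.sigma) = cos(r) I + i sin(r) n.sigma."""
    r = math.hypot(az, ax)
    if r == 0:
        return [[1, 0], [0, 1]]
    c, s = math.cos(r), math.sin(r)
    return [[c + 1j * s * az / r, 1j * s * ax / r],
            [1j * s * ax / r, c - 1j * s * az / r]]

def expval(U):
    """<0| U^dagger Z U |0>."""
    M = mul(dag(U), mul(Z, U))
    return M[0][0].real

def E(x):
    return expval(expmat(x, f))   # cost with U_F(x) = exp(i(x Z + f X))

# stochastic rule: two-term commutator for G = Z, midpoint rule in t
T = 1000
deriv = 0.0
for j in range(T):
    t = (j + 0.5) / T
    for y, theta in ((1.0, math.pi / 4), (-1.0, -math.pi / 4)):
        Us = [[cmath.exp(1j * theta), 0], [0, cmath.exp(-1j * theta)]]
        V = mul(expmat(t * x0, t * f), mul(Us, expmat((1 - t) * x0, (1 - t) * f)))
        deriv += y * expval(V) / T

# finite-difference reference for E'(x0)
h = 1e-5
fd = (E(x0 + h) - E(x0 - h)) / (2 * h)
```

On hardware, the deterministic $t$-grid would be replaced by uniformly sampled splitting times, and the average over samples estimates the same integral.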
## Appendix D Details on QAD
In this section we provide details on the latter two of the three
modifications of the QAD algorithm discussed in Sec. 5.3.
### D.1 Extended QAD model for Pauli rotations
The QAD model introduced in Ref. [49] contains trigonometric functions up to
second (leading) order. The free parameters of the model cannot be extracted
with one function evaluation per degree of freedom, because unlike standard
monomials in a Taylor expansion, the trigonometric basis functions mix the
orders in the input parameters. This leads to a mismatch: $2n^{2}+n+1$
(original QAD) or $3n^{2}/2+n/2+1$ (see above) evaluations are required to
obtain $n^{2}/2+3n/2+1$ model parameters. We note that the QAD model contains full
univariate reconstructions at optimal cost, extracting the $2n+1$ model
parameters $E^{(A)}$, $\boldsymbol{E}^{(B)}$ and $\boldsymbol{E}^{(C)}$ from
$2n+1$ function evaluations. The doubly shifted evaluations, however, are used
for the Hessian entry only:
$\displaystyle
E^{(D)}_{km}=\frac{1}{4}\left[E^{++}_{km}-E^{+-}_{km}-E^{-+}_{km}+E^{--}_{km}\right],$
(100)
where
$E^{\pm\pm}_{km}=E(\boldsymbol{x}_{0}\pm\frac{\pi}{2}\boldsymbol{v}_{k}\pm\frac{\pi}{2}\boldsymbol{v}_{m})$
and we recall that this QAD model is restricted to Pauli rotations only.
Let us now consider a slightly larger truncation of the cost function than the
one presented in App. A 2 in [49]:
$\displaystyle\mathring{E}(\boldsymbol{x}_{0}+\boldsymbol{x})$
$\displaystyle=A(\boldsymbol{x})\biggl{[}E^{(A)}$
$\displaystyle+2\boldsymbol{E}^{(B)}\cdot\tan\left(\frac{\boldsymbol{x}}{2}\right)+2\boldsymbol{E}^{(C)}\cdot\tan\left(\frac{\boldsymbol{x}}{2}\right)^{\odot
2}$
$\displaystyle+4\tan\left(\frac{\boldsymbol{x}}{2}\right)E^{(D)}\tan\left(\frac{\boldsymbol{x}}{2}\right)$
(101)
$\displaystyle+4\tan\left(\frac{\boldsymbol{x}}{2}\right)E^{(F)}\tan^{2}\left(\frac{\boldsymbol{x}}{2}\right)$
$\displaystyle+4\tan^{2}\left(\frac{\boldsymbol{x}}{2}\right)E^{(G)}\tan^{2}\left(\frac{\boldsymbol{x}}{2}\right)\biggr{]}$
with $A(\boldsymbol{x})=\prod_{k}\cos^{2}(x_{k}/2)$. $E^{(F)}$ and $E^{(G)}$
have zeros on their diagonals because there are no terms of the form
$\sin^{3}(x_{k}/2)$ or $\sin^{4}(x_{k}/2)$ in the cost function, and for
$E^{(G)}$ we only require the strictly upper triangular entries due to
symmetry. The higher-order terms contain at least three distinct variables
$x_{k}$, $x_{l}$ and $x_{m}$ because all bivariate terms are captured in the
above truncation. Using
$\displaystyle
A\left(\pm\frac{\pi}{4}\boldsymbol{v}_{k}\pm\frac{\pi}{4}\boldsymbol{v}_{m}\right)=\frac{1}{4}\;\text{
and }\;\tan\left(\pm\frac{\pi}{4}\right)=\pm 1,$
we now can compute:
$\displaystyle E^{++}_{km}-E^{-+}_{km}+E^{+-}_{km}-E^{--}_{km}$
$\displaystyle=E^{(B)}_{k}+E^{(F)}_{km}$ $\displaystyle
E^{++}_{km}+E^{-+}_{km}+E^{+-}_{km}+E^{--}_{km}$
$\displaystyle=E^{(A)}+2E^{(C)}_{k}$
$\displaystyle+2E^{(C)}_{m}+4E^{(G)}_{km}.$
This means that the $4$ function evaluations $E^{\pm\pm}_{km}$ that are used
for $E_{km}^{(D)}$ in the original QAD can be recycled to obtain the $3$
parameters $E_{km}^{(F)}$, $E_{mk}^{(F)}$ and $E_{km}^{(G)}$. The
corresponding model is of the form of Eq. (101) and therefore includes _all_
terms that depend on two parameters only. Consequently, the constructed
model exactly reproduces the cost function not only on the coordinate axes but
also on all coordinate planes spanned by any two of the axes. The number of
model parameters is $2n^{2}+1$, which matches the total number of function
evaluations.
### D.2 Trigonometric interpolation for QAD
Both the original QAD algorithm and the extension introduced above assume
the parametrized quantum circuit to consist of Pauli rotation gates
exclusively. In the spirit of the generalized function reconstruction and
parameter-shift rule, we would like to relax this assumption and generalize
the QAD model. However, there is no obvious unique way to do this, because the
correspondence between the gradient and $\boldsymbol{E}^{(B)}$ and between the
Hessian and $\boldsymbol{E}^{(C,D)}$ is not preserved for multiple
frequencies. Instead, the uni- and bivariate Fourier coefficients of $E$ form
the model parameters, and the derivative quantities are obtained by contracting
these coefficients with the frequencies. There are multiple ways in which we
could generalize QAD to multiple frequencies.
The first way to generalize QAD is to compute the gradient and Hessian with
the generalized parameter-shift rule Eq. (24) and the shift rule for Hessian
entries Eq. (36) and to construct a single-frequency model as in original QAD.
Even though we know the original energy function to contain multiple
frequencies, this would yield a local model with the correct second-order
expansion at $\boldsymbol{x}_{0}$ that exploits the evaluations savings shown
in this work. As QAD is supposed to use the model only in the neighbourhood of
$\boldsymbol{x}_{0}$, this might be sufficient for the optimization.
As a second generalization we propose a full trigonometric interpolation of
$E$ up to second order, similar to the univariate reconstruction in Sec. 3.1.
First we consider the univariate part of the model: Start by evaluating $E$ at
positions shifted in the $k$th coordinate by equidistant points and subtract
$E(\boldsymbol{x}_{0})$,
$\displaystyle E_{\mu}^{(k)}$ $\displaystyle\coloneqq
E(\boldsymbol{x}_{0}+x_{\mu}\boldsymbol{v}_{k})-E(\boldsymbol{x}_{0})$ (102)
$\displaystyle x_{\mu}$
$\displaystyle\coloneqq\frac{2\mu\pi}{2R_{k}+1},\quad\mu\in[2R_{k}].$ (103)
Then consider the (shifted) Dirichlet kernels
$\displaystyle D_{\mu}^{(k)}(x)$
$\displaystyle=\frac{1}{2R_{k}+1}\left(1+2\sum_{\ell=1}^{R_{k}}\cos(\ell(x-x_{\mu}))\right)$
(104)
$\displaystyle=\frac{\sin\left(\frac{1}{2}(2R_{k}+1)(x-x_{\mu})\right)}{(2R_{k}+1)\sin\left(\frac{1}{2}(x-x_{\mu})\right)}$
(105)
which satisfy $D^{(k)}_{\mu}(x_{\mu^{\prime}})=\delta_{\mu\mu^{\prime}}$ and
are Fourier series with integer frequencies up to $R_{k}$. Therefore, the
function (we subtract $E(\boldsymbol{x}_{0})$ only to add it back into the
reconstruction below because this avoids duplicating the term when the
univariate and bivariate contributions of all parameters are summed up later
on)
$\displaystyle\hat{E}^{(k)}(x)=\sum_{\mu=1}^{2R_{k}}E_{\mu}^{(k)}D^{(k)}_{\mu}(x)$
(106)
coincides with
$E(\boldsymbol{x}_{0}+x\boldsymbol{v}_{k})-E(\boldsymbol{x}_{0})$ at
$2R_{k}+1$ points and is a trigonometric polynomial with the same $R_{k}$
frequencies.
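A plain-Python sketch of this univariate reconstruction (toy coefficients, $R_{k}=2$) confirms that Eq. (106), together with the constant $E(\boldsymbol{x}_{0})$, recovers $E$ exactly on the whole axis:

```python
import math

Rk = 2
a = [0.2, -0.7, 0.5]   # cos coefficients (a[0]: constant term), illustrative
b = [0.0, 1.3, -0.4]   # sin coefficients

def E(x):
    return sum(a[l] * math.cos(l * x) + b[l] * math.sin(l * x)
               for l in range(Rk + 1))

def x_node(mu):
    return 2 * mu * math.pi / (2 * Rk + 1)          # Eq. (103)

def D(mu, x):
    # shifted Dirichlet kernel, Eq. (104)
    return (1 + 2 * sum(math.cos(l * (x - x_node(mu)))
                        for l in range(1, Rk + 1))) / (2 * Rk + 1)

def E_hat(x):
    # E(x0) + Eq. (106): the mu = 0 term vanishes since E_0 = E(x0) - E(x0) = 0
    return E(0.0) + sum((E(x_node(mu)) - E(0.0)) * D(mu, x)
                        for mu in range(1, 2 * Rk + 1))
```

Because both sides are trigonometric polynomials of degree $R_{k}$ agreeing at $2R_{k}+1$ points, `E_hat` matches `E` everywhere, not just at the nodes.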
Similarly, the product kernels
$D_{\mu\mu^{\prime}}^{(km)}(x_{k},x_{m})=D_{\mu}^{(k)}(x_{k})D_{\mu^{\prime}}^{(m)}(x_{m})$
can be used to reconstruct the bivariate restriction of $E$ to the
$x_{k}-x_{m}$ plane. For this, evaluate the function at doubly shifted
positions and subtract both, $E(\boldsymbol{x}_{0})$ and the univariate parts:
$\displaystyle E_{\mu\mu^{\prime}}^{(km)}$ $\displaystyle\coloneqq
E(\boldsymbol{x}_{0}+x_{\mu}\boldsymbol{v}_{k}+x_{\mu^{\prime}}\boldsymbol{v}_{m})$
(107)
$\displaystyle-\hat{E}^{(k)}(x_{\mu})-\hat{E}^{(m)}(x_{\mu^{\prime}})-E(\boldsymbol{x}_{0})$
(108)
Then, the bivariate Fourier series
$\displaystyle\hat{E}^{(km)}(x_{k},x_{m})=\sum_{\mu,\mu^{\prime}=1}^{2R_{k},2R_{m}}E_{\mu\mu^{\prime}}^{(km)}D_{\mu\mu^{\prime}}^{(km)}(x_{k},x_{m})$
(109)
coincides with
$E(\boldsymbol{x}_{0}+x_{k}\boldsymbol{v}_{k}+x_{m}\boldsymbol{v}_{m})-E(\boldsymbol{x}_{0})-\hat{E}^{(k)}(x_{k})-\hat{E}^{(m)}(x_{m})$
on the entire coordinate plane spanned by $\boldsymbol{v}_{k}$ and
$\boldsymbol{v}_{m}$.
As we constructed the terms such that they do not contain the respective lower
order terms, we finally can combine them to the full trigonometric
interpolation:
$\displaystyle\hat{E}_{\text{interp}}(\boldsymbol{x})=E(\boldsymbol{x}_{0})$
$\displaystyle+\sum_{k=1}^{n}\hat{E}^{(k)}(x_{k})$ (110)
$\displaystyle+\sum_{k<m}\hat{E}^{(km)}(x_{k},x_{m}).$
This model has as many parameters as function evaluations, namely
$2(\lVert\boldsymbol{R}\rVert_{1}^{2}-\lVert\boldsymbol{R}\rVert_{2}^{2}+\lVert\boldsymbol{R}\rVert_{1})+1$,
and therefore, the trigonometric interpolation is the generalization of the
extended QAD model in App. D.1. Indeed, for $R_{k}=1$ for all $k$ we get back
$2(n^{2}-n+n)+1=2n^{2}+1$ evaluations and model parameters.
We note that the trigonometric interpolation can be implemented for non-
equidistant evaluation points in a similar manner and with the same number of
evaluations, although the elementary functions are no longer Dirichlet kernels
but take the form
$\displaystyle\mathring{D}_{\mu}^{(k)}(x)=\frac{\sin\left(\frac{1}{2}x\right)}{\sin\left(\frac{1}{2}x_{\mu}\right)}\prod_{\begin{subarray}{c}\mu^{\prime}=1\\\ \mu^{\prime}\neq\mu\end{subarray}}^{2R_{k}}\frac{\sin\left(\frac{1}{2}(x-x_{\mu^{\prime}})\right)}{\sin\left(\frac{1}{2}(x_{\mu}-x_{\mu^{\prime}})\right)}.$
(111)
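Reading the product in Eq. (111) as running over $\mu^{\prime}\neq\mu$ (required for the denominators to be nonzero), these Lagrange-type kernels can be checked numerically for randomly drawn, non-equidistant nodes:

```python
import math
import random

random.seed(7)
Rk = 3
# 2*Rk distinct nodes in (0, 2*pi), deliberately non-equidistant
nodes = sorted(random.uniform(0.1, 2 * math.pi - 0.1) for _ in range(2 * Rk))

def D_ring(mu, x):
    # Eq. (111): the prefactor pins the value 0 at x = 0,
    # the product over mu' != mu pins the values at the other nodes
    val = math.sin(x / 2) / math.sin(nodes[mu] / 2)
    for nu, xnu in enumerate(nodes):
        if nu != mu:
            val *= math.sin((x - xnu) / 2) / math.sin((nodes[mu] - xnu) / 2)
    return val

# delta property at the nodes, and vanishing at the expansion point x = 0
checks = []
for mu in range(2 * Rk):
    checks.append(abs(D_ring(mu, 0.0)) < 1e-9)
    for nu in range(2 * Rk):
        target = 1.0 if nu == mu else 0.0
        checks.append(abs(D_ring(mu, nodes[nu]) - target) < 1e-9)
```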
Authors: David Wierichs, Josh Izaac, Cody Wang, and Cedric Yen-Yu Lin (arXiv:2107.12390, CC BY 4.0).
2107.12395
|
# Constraining dark matter annihilation with cosmic ray antiprotons using
neural networks
Felix Kahlhoefer, Michael Korsmeier, Michael Krämer, Silvia Manconi, and
Kathrin Nippel
###### Abstract
The interpretation of data from indirect detection experiments searching for
dark matter annihilations requires computationally expensive simulations of
cosmic-ray propagation. In this work we present a new method based on
Recurrent Neural Networks that significantly accelerates simulations of
secondary and dark matter Galactic cosmic ray antiprotons while achieving
excellent accuracy. This approach allows for an efficient profiling or
marginalisation over the nuisance parameters of a cosmic ray propagation model
in order to perform parameter scans for a wide range of dark matter models. We
identify importance sampling as particularly suitable for ensuring that the
network is only evaluated in well-trained parameter regions. We present
resulting constraints using the most recent AMS-02 antiproton data on several
models of Weakly Interacting Massive Particles. The fully trained networks are
released as DarkRayNet together with this work and achieve a speed-up of the
runtime by at least two orders of magnitude compared to conventional
approaches.
## 1 Introduction
The central prediction of the Weakly Interacting Massive Particles (WIMP)
paradigm is that Dark Matter (DM) particles should have a thermally averaged
annihilation cross section of $\langle\sigma v\rangle\sim
10^{-26}\,\mathrm{cm^{3}\,s^{-1}}$ during freeze-out. In many DM models, the
present-day annihilation cross section in astrophysical systems is predicted
to be of a similar magnitude, providing a clear target for indirect detection
experiments searching for the products of DM annihilation processes.
While the most robust constraints on the DM annihilation cross section stem
from observations of the CMB [1] and of the $\gamma$-ray sky, in particular
from Fermi-LAT [2, 3, 4], highly complementary information can be obtained by
precisely measuring the flux of charged anti-particles arriving on Earth. Very
recently, AMS-02 has released results from the first seven years of data
taking [5], which include in particular the flux of antiprotons with
unprecedented precision. Theoretical predictions for this flux however require
detailed modelling of the production and propagation of charged cosmic rays
(CRs) in the Galaxy, which are subject to significant uncertainties and are
currently constrained using CR data (see e.g. Ref. [6]), as well as their non-
thermal emissions (see e.g. Ref. [7]).
While various numerical codes, such as Galprop [8] and Dragon [9], exist to
address this challenge and simulate the propagation of CRs, they require as
input a large number of parameters that need to be varied to assess their
impact on the predictions. As a result, these simulations are typically so computationally expensive that they become prohibitive in the context of a global analysis of DM models, where the fundamental model parameters also need to be varied [10]. Recent analyses of the AMS-02 antiproton data have therefore typically focused on simplified DM models with only a single annihilation channel, see e.g. Refs. [11, 12, 13, 14].
In the present work we explore the potential of artificial neural networks
(ANNs) to solve this issue and substantially speed up the calculation of
predictions for the primary antiproton flux for a very broad range of DM
models. (For other recent works on the use of machine learning for cosmic ray propagation in the context of DM we refer to Refs. [15, 16].) Specifically, we
employ recurrent neural networks (RNNs), which are particularly well suited
for the prediction of continuous spectra. The network is trained on a large
sample of antiproton fluxes based on propagation parameters that are chosen to
broadly agree with recent AMS-02 data, and a general parametrisation of the
properties of the DM particle in terms of its mass and the branching fractions
for various different final states. Using the same approach we have also
developed and trained ANNs to accurately predict further CR species, like
secondary antiprotons, protons or helium.
The predictions of the network can then be used to calculate the likelihood of
the AMS-02 data for a given DM model and varying propagation parameters in
order to calculate exclusion limits using a range of frequentist or Bayesian
methods. However, it is important to ensure that the resulting constraints are
not biased by regions of parameter space for which the ANN has not been
sufficiently trained. In the Bayesian approach this potential pitfall is
avoided by evaluating the likelihood for a fixed sample of propagation
parameter points drawn from the posterior probability distribution in the
absence of a DM signal. The marginalisation over propagation uncertainties can
then be performed via importance sampling, i.e. by appropriately reweighting
and combining the points in the sample. This approach is particularly well
suited for the analysis of antiproton data, since the propagation parameters
are rather tightly constrained by the proton flux and the secondary antiproton
flux, so that the presence of a DM signal does not dramatically shift the
relevant regions of parameter space.
We emphasise that, while the initial generation of a sample from the posterior
is computationally expensive, it does not require specific assumptions on the
properties of DM and therefore only needs to be carried out once in advance.
Moreover, the same simulation step can be used to set up the training data for
the ANN, ensuring that the network is trained specifically on the most
interesting regions of parameter space. Once training is completed, the
remaining steps are computationally cheap and can be performed for a large
number of DM parameters. Indeed, the full marginalisation over propagation
parameters can be performed in a similar amount of time as it would take to
simulate a single parameter point in the conventional approach.
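Schematically, this marginalisation by importance sampling amounts to reweighting a fixed posterior sample by the likelihood ratio with and without a DM contribution. The following sketch uses placeholder likelihoods throughout (it is not the DarkRayNet interface):

```python
import math
import random

random.seed(1)

# Fixed sample of propagation-parameter points drawn (offline, once) from the
# posterior without a DM signal, with their background log-likelihoods.
# Everything here is schematic placeholder data.
n_sample = 1000
logL_bkg = [random.gauss(-10.0, 1.0) for _ in range(n_sample)]

def logL_with_dm(j):
    """Placeholder for the (network-accelerated) likelihood of sample point j
    when a DM signal is added on top of the background."""
    return logL_bkg[j] + random.gauss(0.0, 0.1)

# Importance weights: likelihood ratio with/without the DM contribution,
# normalised to the largest weight for numerical stability.
logw = [logL_with_dm(j) - logL_bkg[j] for j in range(n_sample)]
wmax = max(logw)
w = [math.exp(lw - wmax) for lw in logw]

# Marginalised likelihood of the DM model = weighted average over the sample.
L_marg = math.exp(wmax) * sum(w) / n_sample
```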
We apply our fully trained ANN to a number of cases of particular interest.
For the case of DM annihilations exclusively into bottom quarks we show that
the most recent AMS-02 data leads to results that are compatible with previous
studies. In particular, we recover a notable excess for DM masses around 100
GeV in the case that no correlations in the AMS-02 data are considered. We
also present new constraints on the well-studied model of scalar singlet DM
and find that antiproton data places competitive constraints on this model.
However, we emphasise that the ANN is not limited to these cases and can be
applied to a wide variety of DM models. Moreover, the general approach that we
present can be extended to consider different propagation models (provided a
suitable simulator exists), systematic uncertainties (such as correlations in
the AMS-02 data) or cross-section uncertainties, enabling the community to
fully explore the wealth of the available CR data.
The remainder of this work is structured as follows. In section 2 we briefly
review the fundamental concepts of CR production and propagation and present
the specific implementation that we adopt in the present work. We also carry
out a first analysis of the most recent AMS-02 data and perform a parameter
scan to identify the most interesting regions of parameter space. In section 3
we introduce our machine learning approach to simulating CRs and discuss how
we train and validate our ANNs. Finally, in section 4 we apply the fully
trained ANNs to constrain DM models. We present the relevant statistical
methods and discuss the resulting exclusion limits.
## 2 Cosmic-ray antiprotons in the Galaxy
For the following discussion it is useful to distinguish between primary and
secondary CRs. Primary CRs are directly accelerated and emitted by standard
astrophysical sources like supernova remnants or pulsars, but more exotic scenarios, such as the production of (anti)particles by DM annihilation or decay, are also considered to be of primary origin. Protons provide the dominant
contribution to primary CRs (about 90%) while helium (He) makes up about 10%.
Heavier nuclei only contribute at the percent level. On the other hand,
secondary CRs are produced during the propagation of primary CRs by
fragmentation or decay. More specifically, when the primary CRs interact with
the gas in the Galactic disc, commonly called interstellar medium (ISM),
secondary particles are produced. Because of the different production
mechanism, secondaries are suppressed with respect to primary CRs. It is
commonly believed that CR antiprotons do not have standard astrophysical
sources (we note that the possibility of primary antiprotons that are directly produced and accelerated at supernova remnants [17, 18, 19, 20] is also discussed in the literature), such that their dominant contribution comes
from secondary production. As a consequence, antiprotons are suppressed by 4–5
orders of magnitude with respect to protons, which makes them (together with
other antimatter CRs, e.g. antideuterons [21, 22]) a promising channel for
constraining DM signals.
In this section we first discuss the production of antiprotons in the
annihilation of dark matter particles in our Galaxy, followed by a discussion
of backgrounds from secondary antiprotons. We then present the framework that
we use to simulate CR propagation and the strategy to fit the resulting
spectra to data. Finally, we perform a scan over the propagation parameters in
order to create the training set for the machine learning approach introduced
in section 3.
### 2.1 Antiprotons from dark matter annihilation
CR antiprotons are a long-standing target used to search for signals of WIMP
DM in our Galaxy [23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37,
38, 39]. More recently, there has been a discussion of an antiproton excess at
about 20 GeV, which could be fitted with a primary DM source [11, 12, 13, 40,
14]. However, the excess might also be accounted for by a combination of
systematic effects [41, 42, 43]. If DM particles annihilate into standard
model particle final states $f$ within the diffusion halo of our Galaxy as
${\rm DM}+{\rm DM}\to f+\bar{f}$, we expect a corresponding flux
contribution to antiprotons in CRs, coming from the subsequent decay of for
example $q\\!+\\!\bar{q}$ modes (see e.g. [44]). The source term of this
primary antiproton component, $q_{\bar{p}}^{(\mathrm{DM})}$, is a function of
the Galactic coordinates $\bm{x}$ and the antiproton kinetic energy
$E_{\mathrm{kin}}$. For a generic combination of standard model final states
$f$ it reads:
$\displaystyle
q_{\bar{p}}^{(\mathrm{DM})}(\bm{x},E_{\mathrm{kin}})=\frac{1}{2}\left(\frac{\rho(\bm{x})}{m_{\mathrm{DM}}}\right)^{2}\sum_{f}\left\langle\sigma
v\right\rangle_{f}\frac{\mathrm{d}N^{f}_{\bar{p}}}{\mathrm{d}E_{\mathrm{kin}}}\;.$
(2.1)
The factor $1/2$ in eq. (2.1) corresponds to Majorana fermion DM. Furthermore,
$m_{\mathrm{DM}}$ is the DM mass, $\rho(\bm{x})$ the DM halo energy density
profile, and $\langle\sigma v\rangle_{f}$ is the thermally averaged
annihilation cross section for the individual final states $f$. In the
following, we fix $\langle\sigma v\rangle$ to be independent of $f$ and account for
this by assigning branching fractions into the relevant final states. Finally,
$\mathrm{d}N^{f}_{\bar{p}}/\mathrm{d}E_{\mathrm{kin}}$ denotes the energy
spectrum of antiprotons for a single DM annihilation. This quantity depends on
the DM mass and the standard model final state. Here we implement the widely
used tabulated results for the antiproton energy spectrum presented in Ref.
[44], which include electroweak corrections. (If DM annihilates into a pair of $W$ or $Z$ bosons, it is possible to produce one of them off-shell. This possibility is not taken into account in the original tables; we extend the tables for $W$ and $Z$ bosons to lower DM masses using the tables from Ref. [45].)
We assume that the DM density in our Galaxy follows an NFW profile [46]
$\rho_{\mathrm{NFW}}(r)=\rho_{h}\,r_{h}/r\,\left(1+r/r_{h}\right)^{-2}$, with
a scale radius of $r_{h}=20\;$kpc and a characteristic halo density,
$\rho_{h}$, which is normalised such that the local DM density at the solar
position of $8.5\;$kpc is fixed to $0.43\;$GeV/cm${}^{3}$ [47], compatible also with
more recent estimates [48]. We note that the NFW profile is only one of many
viable DM profiles currently investigated. Other profiles can have a
significantly different behavior towards the Galactic center, see e.g. the
discussion in Ref. [49]. However, we stress that choosing a different DM
density profile only has a small impact on the results presented in this paper
since CR antiprotons from DM annihilation dominantly arrive from the local
environment. Therefore they are mostly sensitive to the local DM density and
the resulting flux depends only weakly on the shape of the DM density profile
at the Galactic center. More specifically, the impact of changing the cuspy
NFW profile to the cored Burkert profile [50] has been quantified in Ref.
[51]; it was found that a core radius of $5\;$kpc only weakens DM limits by
about 20%.
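The normalisation of the profile can be reproduced directly from the quoted numbers; in the sketch below, $\rho_h$ is solved for such that the local density at $8.5\;$kpc equals $0.43\;$GeV/cm³:

```python
R_H = 20.0        # NFW scale radius r_h in kpc
R_SUN = 8.5       # solar position in kpc
RHO_LOCAL = 0.43  # local DM density in GeV/cm^3

def rho_nfw(r, rho_h, r_h=R_H):
    """NFW profile rho_h * (r_h / r) * (1 + r / r_h)**-2."""
    return rho_h * (r_h / r) * (1.0 + r / r_h) ** -2

# Characteristic halo density fixed by the local-density normalisation:
rho_h = RHO_LOCAL / ((R_H / R_SUN) * (1.0 + R_SUN / R_H) ** -2)

assert abs(rho_nfw(R_SUN, rho_h) - RHO_LOCAL) < 1e-9
print(f"rho_h = {rho_h:.3f} GeV/cm^3")  # ~0.37 GeV/cm^3
```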
### 2.2 Secondary antiprotons
The ISM consists of roughly 90% hydrogen (H) and 10% He. Thus secondary
antiprotons are mostly produced by the interaction of $p$ and He CRs with the
H and He components of the ISM. The source term for the secondary antiprotons
$q_{\bar{p}}^{(\mathrm{sec})}$ is thus given by the convolution of the primary
CR fluxes $\phi$ of isotope $i$, the ISM density $n_{\mathrm{ISM}}$ of
component $j\in\{\mathrm{H},\mathrm{He}\}$, and the energy-differential
production cross section
$\mathrm{d}\sigma_{ij\rightarrow\bar{p}}/\mathrm{d}E_{\mathrm{kin},\bar{p}}$:
$\displaystyle q_{\bar{p}}^{(\mathrm{sec})}({\bm{x}},E_{\mathrm{kin},\bar{p}})=\sum\limits_{j\in\{\mathrm{H},\mathrm{He}\}}4\pi\,n_{\mathrm{ISM},j}({\bm{x}})\sum\limits_{i}\int\mathrm{d}E_{\mathrm{kin},i}\,\phi_{i}(E_{\mathrm{kin},i})\,\frac{\mathrm{d}\sigma_{ij\rightarrow\bar{p}}}{\mathrm{d}E_{\mathrm{kin},\bar{p}}}(E_{\mathrm{kin},i},E_{\mathrm{kin},\bar{p}})\,.$
(2.2)
By construction, secondaries are suppressed with respect to primary CRs. In
the case of antiprotons, the experimentally observed suppression compared to
protons is 5 orders of magnitude at 1 GV and decreases to about 4 orders of
magnitude above 10 GV. Since secondary CRs constitute the dominant
contribution of the measured antiproton flux, considering standard
astrophysical sources only already results in a good fit to the data [52, 11,
41], see also discussion in section 2.4.
The cross section of secondary antiproton production is a very important
ingredient of eq. (2.2), which has been discussed by various groups recently
[53, 54, 55, 56]. In general there are two different strategies to determine
this cross section. On the one hand, Monte Carlo generators, which are tuned
to the relevant cross section data [56], can be used to infer the relevant
cross section. On the other hand, a parametrisation of the Lorentz invariant
cross section can be fitted to all available cross section data. Then the
required energy-differential cross section is obtained by an angular
integration [53, 54, 55]. We follow the second approach and use the analytic
cross section parametrisation from Ref. [54] with the updated parameters from
Ref. [55]. An important advantage of the analytic cross section
parametrisation is that it is explicitly tuned to cross-section data at low
energies, and therefore more reliable below antiproton energies of $\sim 10$
GeV as discussed in Ref. [57].
Finally, we consider that secondary antiprotons may scatter inelastically with
the ISM and lose energy. This antiproton contribution, commonly referred to as
tertiary [58], is suppressed with respect to the secondaries.
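As an illustration of the structure of eq. (2.2), the following toy sketch folds a power-law projectile flux with a placeholder production cross section (all shapes and normalisations here are invented for illustration, not physics inputs):

```python
import math

def phi_p(E):
    """Toy primary proton flux, falling roughly like E^-2.7."""
    return 1.8e4 * E ** -2.7

def dsigma_dE(E_proj, E_pbar):
    """Illustrative stand-in for dsigma_{pH -> pbar}/dE_kin,pbar,
    with a crude kinematic threshold."""
    return 1e-33 / E_proj if E_proj > 7.0 * E_pbar else 0.0

def q_sec(E_pbar, n_ism=0.9, E_min=10.0, E_max=1e4, n_grid=400):
    """Trapezoidal integration of 4*pi*n_ISM * int dE phi_p * dsigma/dE
    on a log-spaced grid of projectile energies, as in eq. (2.2)."""
    grid = [E_min * (E_max / E_min) ** (k / (n_grid - 1)) for k in range(n_grid)]
    f = [phi_p(E) * dsigma_dE(E, E_pbar) for E in grid]
    integral = sum(0.5 * (f[k] + f[k + 1]) * (grid[k + 1] - grid[k])
                   for k in range(n_grid - 1))
    return 4.0 * math.pi * n_ism * integral

# The steeply falling projectile flux suppresses high-energy secondaries.
assert q_sec(100.0) < q_sec(10.0)
```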
### 2.3 Propagation in the Galaxy and solar modulation
The sources, acceleration and propagation of Galactic CRs are research topics in their own right [59, 60]. Rapid progress has been driven in recent years by newly available, very precise data from AMS-02 [5], PAMELA [61] and Voyager [62]. Some recent developments include the studies of
uncertainties from solar modulation, correlated experimental data points,
secondary production/fragmentation cross sections as well as detailed studies
of propagation phenomena below a rigidity of 10 GV to disentangle diffusion
and reacceleration [63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 6], where the
rigidity $R$ of a CR particle is defined as its momentum divided by the
absolute value of its charge. Here we will not explore these exciting
directions and instead focus on one standard setup of CR propagation, which
was already studied in the context of DM searches with antiprotons in Ref. [40].
The machine learning approach and the statistical methods introduced below can
however be readily applied also to alternative assumptions and more refined
descriptions. We briefly summarise below the main ingredients of this specific
approach and refer to Ref. [40] for a more detailed discussion.
Charged CRs propagate within a diffusion halo assumed to be cylindrically
symmetric, which extends a few kpc above and below the Galactic plane. In
particular, it has a fixed radial extent of 20 kpc, while the half height of
the diffusion halo is denoted by $z_{\mathrm{h}}$ and typically enters CR fits as a free parameter (see section 2.4). When CRs cross the boundary of
the diffusion halo they escape from the Galaxy, while the propagation within
the halo is described by a chain of coupled diffusion equations.
The diffusion equation for the CR number density per volume and absolute
momentum $\psi_{i}(\bm{x},p,t)$ of CR species $i$ at position $\bm{x}$ and
momentum $p$ is given by [73]:
$\displaystyle\frac{\partial\psi_{i}(\bm{x},p,t)}{\partial t}=q_{i}(\bm{x},p)+\bm{\nabla}\cdot\left(D_{xx}\bm{\nabla}\psi_{i}-\bm{V}\psi_{i}\right)+\frac{\partial}{\partial p}\,p^{2}D_{pp}\frac{\partial}{\partial p}\frac{1}{p^{2}}\psi_{i}-\frac{\partial}{\partial p}\left(\frac{\mathrm{d}p}{\mathrm{d}t}\psi_{i}-\frac{p}{3}(\bm{\nabla}\cdot\bm{V})\psi_{i}\right)-\frac{1}{\tau_{f,i}}\psi_{i}-\frac{1}{\tau_{r,i}}\psi_{i}\;.$
(2.3)
We briefly describe each of the terms in eq. (2.3) below. To solve these
equations numerically we employ Galprop 56.0.2870 [8, 74] and Galtoollibs 855 (available at https://galprop.stanford.edu/download.php) with a few custom modifications as described in Ref. [40]. Alternatively, solutions might be obtained analytically, utilizing various simplifying assumptions [75, 76], or using other fully numerical codes like Dragon [9, 77] or Picard [78]. Galprop
assumes that CRs are in a steady state and solves the diffusion equations on a
3-dimensional grid. Two dimensions describe the spatial distribution of CRs,
the radial distance $r$ from the Galactic center and distance $z$
perpendicular to the plane, and one dimension contains the CR’s kinetic
energy. The grid points of the spatial dimensions are spaced linearly with
step size of $\Delta r=1$ kpc and $\Delta z=0.1$ kpc, respectively, while the
grid is spaced logarithmically in kinetic energy with a ratio between
successive grid points of 1.4.
The source term $q_{i}$ in eq. (2.3) depends on the CR species. For secondary
antiprotons and antiprotons from DM annihilation it takes the form of eq.
(2.2) and eq. (2.1), respectively. For primary CRs the source term factorizes
into a spatial and a rigidity-dependent term. The spatial term traces the
distribution of supernova remnants. (We use the default prescription of Galprop where the parameters of the source term distribution are fixed to $\alpha=0.5$, $\beta=2.2$, $r_{s}=8.5$ kpc, and $z_{0}=0.2$ kpc. This is slightly different from recent values in the literature [79]. We note, however, that nuclei are only very weakly sensitive to the chosen distribution, as discussed in Ref. [52].) On the other hand, the rigidity dependence is
modeled as a smoothly broken power-law:
$\displaystyle q_{R}(R)$ $\displaystyle=$
$\displaystyle\left(\frac{R}{R_{0}}\right)^{-\gamma_{1}}\left(\frac{R_{0}^{1/s}+R^{1/s}}{2\,R_{0}^{1/s}}\right)^{-s(\gamma_{2}-\gamma_{1})},$
(2.4)
where $R_{0}$ is the break position and $\gamma_{1,i}$ and $\gamma_{2,i}$ are the spectral indices below and above the break for the CR species $i$,
respectively. The parameter $s$ regulates the amount of smoothing at the
break. In the following analysis we will assume that all primary nuclei except
for protons have a universal injection spectrum such that we adopt
$\gamma_{1,i}=\gamma_{1}$ and $\gamma_{2,i}=\gamma_{2}$. For protons we allow
different spectral behaviour and keep the subscript $i=p$. The broken power-
law spectrum in eq. (2.4) is a widely used phenomenological approximation
which describes well the data in the considered rigidity range. All CR species
are affected by several processes that contribute to CR propagation, which are
diffusion, reacceleration, convection, and energy losses. We assume that
diffusion is spatially homogeneous and isotropic. In this case, the diffusion
coefficient, $D_{xx}$, can be modeled as a broken power-law in rigidity
$\displaystyle D_{xx}=\begin{cases}\beta D_{0}\left(\frac{R}{4\,\mathrm{GV}}\right)^{\delta}&\text{if}\;R<R_{1}\,,\\ \beta D_{0}\left(\frac{R_{1}}{4\,\mathrm{GV}}\right)^{\delta}\left(\frac{R}{R_{1}}\right)^{\delta_{h}}&\text{otherwise}\,,\end{cases}$
(2.5)
where $D_{0}$ is an overall normalisation and $\delta$ and $\delta_{h}$ are
the power-law indices below and above the break at position $R_{1}$. At low
energies the diffusion coefficient is proportional to the velocity $\beta=v/c$
of the CRs. We allow for a diffusive reacceleration of CRs by scattering off
Alfvén magnetic waves. The amount of reacceleration is then determined by the
velocity $v_{\mathrm{Alfven}}$ of the waves [80, 81]:
$\displaystyle
D_{pp}=\frac{4\left(p\,v_{\mathrm{Alfven}}\right)^{2}}{3(2-\delta)(2+\delta)(4-\delta)\,\delta\,D_{xx}}\;.$
(2.6)
The terms proportional to $\bm{V}(\bm{x})$ in eq. (2.3) describe convective winds which drive the CRs away from the Galactic plane. They are taken to be constant and
orthogonal to the Galactic plane, such that $\bm{V}(\bm{x})={\rm
sign}(z)\,v_{0,{\rm c}}\,{\bm{e}}_{z}$. The remaining terms describe different
contributions of energy losses, for which we adopt the default Galprop
implementation. In particular, continuous energy losses like ionisation or
bremsstrahlung are included in the term $\mathrm{d}p/\mathrm{d}t$, while
catastrophic energy losses by fragmentation or decay are modeled by the last
two terms. The parameters $\tau_{f}$ and $\tau_{r}$ are the corresponding
lifetimes.
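To make the parametrisations concrete, eqs. (2.4)–(2.6) can be sketched as follows (the parameter values are illustrative, not the fit results of table 1):

```python
def q_R(R, R0, g1, g2, s):
    """Smoothly broken power-law injection spectrum, eq. (2.4)."""
    return (R / R0) ** -g1 * \
        ((R0 ** (1 / s) + R ** (1 / s)) / (2 * R0 ** (1 / s))) ** (-s * (g2 - g1))

def D_xx(R, beta, D0, delta, delta_h, R1=300.0):
    """Broken power-law diffusion coefficient, eq. (2.5); R, R1 in GV."""
    if R < R1:
        return beta * D0 * (R / 4.0) ** delta
    return beta * D0 * (R1 / 4.0) ** delta * (R / R1) ** delta_h

def D_pp(p, v_alfven, delta, Dxx):
    """Reacceleration coefficient, eq. (2.6)."""
    return 4 * (p * v_alfven) ** 2 / \
        (3 * (2 - delta) * (2 + delta) * (4 - delta) * delta * Dxx)

# The injection spectrum passes through q_R(R0) = 1 by construction,
assert abs(q_R(7900.0, 7900.0, 1.8, 2.4, 0.4) - 1.0) < 1e-9
# and D_xx is continuous at the break R1:
delta, delta_h = 0.42, 0.42 - 0.12   # delta_h = delta - 0.12 (section 2.4)
lo = D_xx(300.0 * (1 - 1e-12), 1.0, 2e28, delta, delta_h)
hi = D_xx(300.0 * (1 + 1e-12), 1.0, 2e28, delta, delta_h)
assert abs(lo - hi) / lo < 1e-9
```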
We emphasise again that the setup of CR propagation described above reflects
one specific choice. The available measurements of CR nuclei are also
described well by other setups. In particular, a model without diffusive re-
acceleration, but with an additional break in the diffusion coefficient
between 5 and 10 GeV is currently discussed in the literature [63, 68, 6].
CRs measured at the top of the atmosphere have to traverse a significant part
of the heliosphere where they are deflected and decelerated by solar winds.
The strength of this effect varies in a 22-year cycle and is commonly known as
solar modulation. It mostly affects low-energetic CRs; in practice the impact
on the spectra above a few tens of GV is negligible. We use the common force-
field approximation [82] to model the impact on the CR spectra:
$\displaystyle\phi_{\oplus,i}(E_{\oplus,i})$ $\displaystyle=$
$\displaystyle\frac{E_{\oplus,i}^{2}-m_{i}^{2}}{E_{\text{LIS},i}^{2}-m_{i}^{2}}\phi_{\text{LIS},i}(E_{\text{LIS},i})\,,$
(2.7) $\displaystyle E_{\oplus,i}$ $\displaystyle=$ $\displaystyle
E_{\text{LIS},i}-e|Z_{i}|\varphi_{i}\,.$ (2.8)
Here $\phi$ and $E$ label the energy-differential flux and the kinetic energy,
respectively. The subscripts on the energy or flux denote the position which
can either be local interstellar (LIS) or top of the atmosphere ($\oplus$).
Furthermore, $Z_{i}$ is the charge number, $e$ is the elementary charge, and
$\varphi_{i}$ is the solar modulation potential. The potential is known to be
time and charge-sign dependent. We note that the force-field approximation is
probably an oversimplified treatment of solar modulation. (In more sophisticated models, solar modulation is described by propagation equations similar to eq. (2.3) but tuned to the environment of the heliosphere; these are typically solved numerically [83, 84, 85, 86, 87].) To minimise systematic
impacts from solar modulation on our results we will exclude data below 5 GV
from our analysis. Furthermore, we allow a different solar modulation
potential for antiprotons to account for a possible charge-sign dependence.
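A minimal sketch of the force-field mapping of eqs. (2.7)–(2.8), written in terms of total energies so that $E^{2}-m^{2}=p^{2}$ (the potential value below is illustrative):

```python
def force_field(E_lis, phi_lis, m, Z, varphi):
    """Force-field approximation, eqs. (2.7)-(2.8): map one point of a local
    interstellar (LIS) flux to the top of the atmosphere (TOA).
    E are total energies in GeV, varphi is the modulation potential in GV,
    so that e * |Z| * varphi is an energy shift in GeV."""
    E_toa = E_lis - abs(Z) * varphi                            # eq. (2.8)
    phi_toa = (E_toa**2 - m**2) / (E_lis**2 - m**2) * phi_lis  # eq. (2.7)
    return E_toa, phi_toa

# Example: a LIS proton flux point at 10 GeV total energy, varphi = 0.6 GV.
m_p = 0.938  # proton mass in GeV
E_toa, phi_toa = force_field(10.0, 1.0, m_p, 1, 0.6)
# Modulation shifts the energy down and suppresses the flux.
assert abs(E_toa - 9.4) < 1e-9 and 0.0 < phi_toa < 1.0
```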
### 2.4 Fit to AMS-02 data
In the following we summarise very briefly the considered data sets and the
fit strategies, where the latter are directly adopted from Ref. [40]. The most
precise measurement of CR antiprotons above 1 GV is currently provided by the
AMS-02 experiment [5]. We consider the data sets of proton, helium, and the
antiproton-to-proton ratio from AMS-02 [5] collected over 7 years from 2011 to
2018 and complement them with low-energy data for protons and helium from Voyager
[88]. When fitting the CR data with the model outlined below, the CR
likelihood is defined by
$\displaystyle-2\,\log{{\cal L}_{{\rm CR}}}(\bm{\theta})=\chi^{2}_{\rm
CR}(\bm{\theta})=\sum\limits_{e,s,i}\left(\frac{\phi^{(e)}_{{s},i}-\phi^{(\text{m})}_{s,i,e}(\bm{\theta})}{\sigma^{\left(e\right)}_{s,i}}\right)^{2}\,,$
(2.9)
where $\phi^{(e)}_{{s},i}$ denotes the flux of the CR species $s$ that was
measured by the experiment $e$ at the rigidity $R_{i}$ or energy
$E_{\mathrm{kin},i}$, while $\phi^{(\text{m})}_{s,i,e}(\theta)$ is the flux
computed with Galprop for the corresponding species and energy. Finally,
$\sigma^{\left(e\right)}_{s,i}$ is the experimental uncertainty of the flux
measurement. The AMS-02 experiment provides separate statistical and
systematic uncertainties. Here we assume that the systematic uncertainties are
uncorrelated and add the two contributions in quadrature. This is certainly a
simplified treatment. In particular, it was shown that the significance of a
potential excess can depend critically on the assumptions made for the
correlations of this uncertainty [41, 42]. However, we expect that the impact
on DM limits is less severe, because of two opposing effects: The covariance
matrices as modeled in Refs. [41, 42] contain contributions with both large
and small correlation lengths. A large correlation length corresponds to a
change of the overall normalisation which looks very different from a peaked
signature such as expected from DM annihilation. This potentially leads to a
stronger DM limit. On the other hand, a small correlation length allows for
signatures similar to those from DM and, therefore, potentially weakens the
limit. Overall, we expect these two effects to partly cancel each other. We
leave the study of different systematics and correlations within the new
methods introduced in this paper to future investigation.
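The likelihood of eq. (2.9), with statistical and (assumed uncorrelated) systematic uncertainties added in quadrature, can be sketched as:

```python
import math

def chi2_cr(data):
    """Chi-square of eq. (2.9): sum over experiments, species and rigidity
    bins of squared residuals, with statistical and systematic uncertainties
    added in quadrature (the uncorrelated-systematics assumption)."""
    chi2 = 0.0
    for phi_meas, phi_model, sig_stat, sig_sys in data:
        sigma = math.hypot(sig_stat, sig_sys)  # quadrature sum
        chi2 += ((phi_meas - phi_model) / sigma) ** 2
    return chi2

# Toy example: two bins deviating by 1 sigma and 2 sigma, respectively.
bins = [(1.00, 0.95, 0.03, 0.04),   # sigma = 0.05 -> (0.05/0.05)^2 = 1
        (2.00, 1.80, 0.06, 0.08)]   # sigma = 0.10 -> (0.20/0.10)^2 = 4
assert abs(chi2_cr(bins) - 5.0) < 1e-9
```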
Within the phenomenological description of CR injection and propagation
outlined in section 2.3, the parameters of eq. (2.3) are largely unconstrained
a priori and are directly inferred from CR data. We allow for a total of 15
parameters to describe CR injection and propagation. To sample this large
parameter space efficiently we use the Nested Sampling algorithm implemented
in the MultiNest code [89]. The computing efficiency is increased even further
by exploiting a hybrid strategy where only a subset of parameters is sampled
by MultiNest (“slow parameters”) and the remaining parameters are profiled on-
the-fly (“fast parameters”). The slow parameters are the ones that are needed
as input for Galprop and thus changing them is time consuming. More
specifically, these are the following eleven parameters: the slopes of the
primary injection spectra $\gamma_{1,p}$, $\gamma_{1}$, $\gamma_{2,p}$, and
$\gamma_{2}$, the break position $R_{0}$ and its smoothing $s$, the
normalisation $D_{0}$ and slope $\delta$ of the diffusion coefficient, the
half-height of the diffusion halo $z_{\mathrm{h}}$, and the velocities of
convection $v_{0c}$ and Alfvén magnetic waves $v_{\mathrm{Alfven}}$. The
scan ranges for all of these parameters are summarised in table 1. In the
following we will give results in the frequentist and the Bayesian
interpretation. For the Bayesian interpretation we assume flat priors in the
scan ranges.
The four remaining parameters describe the normalisation of the proton
($A_{p}$) and helium ($A_{\mathrm{He}}$) fluxes and the solar modulation
potentials ($\varphi_{\text{AMS-02},p,{\rm He}}$ for $p$ and He and
$\varphi_{\text{AMS-02},{\bar{p}}}$ for ${\bar{p}}$). These are the fast
parameters, which are treated in a simplified way in our analysis and
therefore can be varied much more easily. Instead of explicitly including them
in the MultiNest parameter scans, we profile over them on-the-fly at each
likelihood evaluation of MultiNest, i.e. we maximise the likelihood over the
fast parameters using Minuit [90]. A very weak Gaussian prior is applied to
$\varphi_{\text{AMS-02},{\bar{p}}}$ by adding to the main likelihood the term
$-2\,\log({{\cal L}_{{\rm SM}}})=(\varphi_{\text{AMS-02},p,{\rm
He}}-\varphi_{\text{AMS-02},{\bar{p}}})^{2}/\sigma_{\varphi}^{2}$ where
$\sigma_{\varphi}=100$ MV (this prior expresses that the solar modulation potential of antiprotons and the one of protons and helium are related, even if they are not forced to be the same; in Ref. [91] the average difference between the potential of electrons and positrons was found to be around 100 MV), while no priors are applied on $\varphi_{\text{AMS-02},p,{\rm He}}$.
We truncate the rigidity range of the fit to the range from 5 to 300 GV. As mentioned above, data below 5 GV is excluded to avoid a strong bias from our modeling of solar modulation. (It was shown in Ref. [40] that the cut at $R=5$ GV does not artificially enhance the significance of a potential DM signal.) At high energies, the spectra of CR nuclei show a break at $R\sim
300$ GV, which is more pronounced in secondaries with respect to primaries
[92, 93]. While in general it would be possible to introduce spectral breaks
in the injection spectrum or in the diffusion coefficient, only the latter
naturally explains the different behavior of the primaries and secondaries
[94]. We therefore fix the parameters of eq. (2.5) to $R_{1}=300$ GV and
$\delta_{h}=\delta-0.12$. The proton and helium data of AMS-02 is described
well by this choice. Truncating our fit at $R\sim 300$ GV avoids unnecessary
bias. (As an alternative, $R_{1}$ and $\delta_{h}-\delta$ could be treated as free parameters in the fit, which would, however, increase the complexity of the already high-dimensional parameter fit.)
Table 1: Results of the CR fits to AMS-02 and Voyager data of protons, helium, and antiprotons. The parameter ranges for the MultiNest scan are stated in column 2. In the remaining columns we state the best-fit parameter values and their uncertainty at the 68% C.L. for a fit with and without a DM signal. Results are given both in the frequentist and the Bayesian interpretation.

Parameter | Scan ranges | Frequentist w/o DM | Frequentist w/ DM | Bayesian w/o DM | Bayesian w/ DM
---|---|---|---|---|---
$\gamma_{1,p}$ | $[1.2,2]$ | ${1.80}^{+0.04}_{-0.03}$ | ${1.79}^{+0.07}_{-0.06}$ | ${1.77}^{+0.07}_{-0.04}$ | ${1.68}^{+0.14}_{-0.07}$
$\gamma_{1}$ | $[1.2,2]$ | ${1.79}^{+0.04}_{-0.04}$ | ${1.74}^{+0.08}_{-0.06}$ | ${1.75}^{+0.07}_{-0.04}$ | ${1.63}^{+0.15}_{-0.07}$
$\gamma_{2,p}$ | $[2.3,2.6]$ | ${2.405}^{+0.013}_{-0.007}$ | ${2.48}^{+0.02}_{-0.03}$ | ${2.41}^{+0.01}_{-0.01}$ | ${2.48}^{+0.02}_{-0.03}$
$\gamma_{2}$ | $[2.3,2.6]$ | ${2.357}^{+0.014}_{-0.005}$ | ${2.42}^{+0.02}_{-0.03}$ | ${2.366}^{+0.009}_{-0.012}$ | ${2.42}^{+0.02}_{-0.02}$
$R_{0}\,\mathrm{[10^{3}\;MV]}$ | $[1,20]$ | ${7.92}^{+0.82}_{-0.80}$ | ${7.32}^{+1.16}_{-0.83}$ | ${7.06}^{+0.93}_{-1.04}$ | ${6.42}^{+0.97}_{-1.13}$
$s$ | $[0.1,0.9]$ | ${0.37}^{+0.03}_{-0.03}$ | ${0.40}^{+0.03}_{-0.04}$ | ${0.38}^{+0.04}_{-0.04}$ | ${0.44}^{+0.04}_{-0.06}$
$D_{0}\,\mathrm{[10^{28}\;cm^{2}/s]}$ | $[0.5,10]$ | ${2.05}^{+1.48}_{-0.39}$ | ${2.92}^{+2.09}_{-0.96}$ | ${3.58}^{+1.30}_{-0.73}$ | ${5.37}^{+1.52}_{-1.78}$
$\delta$ | $[0.2,0.6]$ | ${0.419}^{+0.009}_{-0.012}$ | ${0.35}^{+0.03}_{-0.02}$ | ${0.42}^{+0.01}_{-0.01}$ | ${0.33}^{+0.03}_{-0.03}$
$v_{\mathrm{Alfven}}\,\mathrm{[km/s]}$ | $[0,30]$ | ${8.84}^{+1.45}_{-2.58}$ | ${10.25}^{+2.12}_{-2.06}$ | ${6.02}^{+3.57}_{-2.51}$ | ${7.70}^{+4.15}_{-3.10}$
$v_{0,\mathrm{c}}\,\mathrm{[km/s]}$ | $[0,60]$ | ${0.09}^{+1.08}_{-0.08}$ | ${0.90}^{+6.77}_{-0.78}$ | ${2.48}^{+0.32}_{-2.48}$ | ${13.36}^{+2.44}_{-13.36}$
$z_{\mathrm{h}}\,\mathrm{[kpc]}$ | $[2,7]$ | ${2.60}^{+2.25}_{-0.48}$ | ${2.79}^{+2.87}_{-0.75}$ | ${4.70}^{+2.30}_{-0.86}$ | ${4.84}^{+2.13}_{-0.75}$
$\log_{10}(m_{\mathrm{DM}}/\mathrm{MeV})$ | $[4,7]$ | - | ${5.07}^{+0.03}_{-0.05}$ | - | ${5.08}^{+0.04}_{-0.05}$
$\log_{10}({\langle}\sigma v{\rangle}\mathrm{s/cm^{3}})$ | $[-27,-22]$ | - | ${-25.42}^{+0.22}_{-0.48}$ | - | ${-25.76}^{+0.13}_{-0.26}$
$\varphi_{\mathrm{AMS-02,pHe}}\,\mathrm{[GV]}$ | | ${0.26}^{+0.04}_{-0.03}$ | ${0.25}^{+0.05}_{-0.03}$ | ${0.30}^{+0.04}_{-0.05}$ | ${0.28}^{+0.04}_{-0.06}$
$(\varphi_{\bar{p}}-\varphi_{p})_{\mathrm{AMS-02}}\,\mathrm{[GV]}$ | | ${0.200}^{+0.000}_{-0.036}$ | ${0.13}^{+0.07}_{-0.12}$ | ${0.177}^{+0.023}_{-0.001}$ | ${0.09}^{+0.11}_{-0.03}$
$A_{\mathrm{p,AMS-02}}$ | | ${1.173}^{+0.004}_{-0.003}$ | ${1.173}^{+0.003}_{-0.004}$ | ${1.178}^{+0.004}_{-0.004}$ | ${1.177}^{+0.004}_{-0.004}$
$A_{\mathrm{He,AMS-02}}$ | | ${1.257}^{+0.006}_{-0.014}$ | ${1.20}^{+0.02}_{-0.01}$ | ${1.253}^{+0.010}_{-0.010}$ | ${1.20}^{+0.02}_{-0.02}$
$\chi^{2}_{\mathrm{p,AMS-02}}$ | | $7.2$ | $6.2$ | |
$\chi^{2}_{\mathrm{He,AMS-02}}$ | | $3.2$ | $2.1$ | |
$\chi^{2}_{\mathrm{pbar/p,AMS-02}}$ | | $35.0$ | $21.5$ | |
$\chi^{2}_{\mathrm{p,Voyager}}$ | | $7.9$ | $4.1$ | |
$\chi^{2}_{\mathrm{He,Voyager}}$ | | $3.9$ | $3.2$ | |
$\chi^{2}$ | | $57.2$ | $37.1$ | |
To gain a first understanding of the allowed regions of parameter space, we
perform a CR fit as detailed above. The scan is conducted using 1000 live
points, a stopping criterion of tol=0.1 and an enlargement factor efr=0.7. The
final efficiency of the scan is found to be around 9%, with about 350 000
likelihood evaluations in total. The fit is heavily parallelised, using 96
cores simultaneously: the Galprop code is parallelised with OpenMP, while the
nested sampling algorithm of MultiNest can be distributed over multiple MPI
tasks. We follow a hybrid strategy with 24 MPI tasks using 4 cores each. We
have verified that the parallelisation efficiency lies above 70%. In total the
fit requires about 5.5 days to converge and consumes 12,500 CPU hours, which
means that a single likelihood evaluation takes on average about 130 CPU
seconds. During the fit, MultiNest starts by broadly sampling the entire
parameter space and then continuously shrinks towards the allowed region. The
result is an ensemble of parameter points which is
denser in the most interesting parameter region. We will make use of this
property in the following section, where our goal is to train the ANNs in such
a way that they perform particularly well in the parameter range preferred by
data. Thus, we save all the sample points during the fit and then use them as
a starting point for the training in section 3.
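The scan settings quoted above translate directly into MultiNest keyword arguments. The following sketch uses the naming of the PyMultiNest wrapper as an assumption; the paper does not state which interface was used:

```python
# Sketch of the MultiNest scan configuration described above, expressed via
# the (assumed) PyMultiNest keyword names; the paper does not specify the
# wrapper that was actually used.
scan_settings = dict(
    n_live_points=1000,        # "1000 live points"
    evidence_tolerance=0.1,    # stopping criterion tol = 0.1
    sampling_efficiency=0.7,   # enlargement factor efr = 0.7
)

# The actual call would look roughly like the following (it requires
# pymultinest plus a compiled MultiNest library, so it is only indicated):
#
# import pymultinest
# pymultinest.run(loglike, prior_transform, n_dims=11, **scan_settings)
```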
In table 1 we summarise the best-fit (most probable) values of the various
parameters based on a frequentist (Bayesian) interpretation as well as their
68% confidence intervals (credible intervals). The best-fit point corresponds
to $\chi^{2}=57.2$ for the AMS-02 data. These results broadly agree with the
ones from Ref. [40] even though we use the more recent 7-year AMS-02 data for
$p$, He, and $\bar{p}$. As expected, for the parameters that are well-
constrained by data, there is good agreement between the frequentist and the
Bayesian approach. For less constrained parameters (such as for example
$z_{\mathrm{h}}$) there can be sizeable differences between the best-fit point
(obtained by maximizing the profile likelihood) and the most probable point
(obtained by maximizing the marginalised likelihood). We will return to this
issue in section 4.
Previous analyses have discussed a potential DM signal that could be
accommodated at antiproton energies between 10 and 20 GeV where the antiproton
flux shows a small anomaly at the level of a few percent. This potential
signal corresponds, for example, to DM particles with a mass of about 80 GeV
that self-annihilate into $b\bar{b}$ final states at a thermal cross section.
However, the significance of this potential signal has been discussed
controversially in the literature. The most recent works suggest that the
anomaly is well explained by the combination of several systematic
uncertainties, namely uncertainties in the secondary antiproton production
cross section, correlated systematics in the AMS-02 data, and some additional
freedom in the CR propagation model [41, 42, 43], which we do not include
here.
The focus of this work lies instead on developing new methods for exploiting
ANNs and importance sampling to derive DM limits. In contrast to a DM signal,
we expect the limits to be only weakly dependent on those systematics and
leave their investigation to future studies. Nevertheless, for comparison we
also perform one fit where antiprotons from DM annihilation are included. We
choose DM annihilation into a pair of $b\bar{b}$ quarks as our benchmark. In
this case, two further parameters are considered, the mass of the DM particle
$m_{\rm DM}$ and the thermally averaged annihilation cross section
$\langle\sigma v\rangle$ (see eq. (2.2)). We explore values of $m_{\rm DM}$
from 10 GeV to 10 TeV and values of $\langle\sigma v\rangle$ between
$10^{-27}$ and $10^{-22}\;\mathrm{cm^{3}/s}$, with our results being
independent of the precise choice of these ranges. These two additional
parameters are sampled with MultiNest.
The results of the additional fit are also shown in table 1. Including a DM
signal formally improves the $\chi^{2}$ by 20.1 which, however, given the
discussion above should not be interpreted as significant. Nonetheless, we can
take this value as a point of comparison for the performance of the ANN in
section 4. We furthermore observe that, while the additional DM signal affects
most CR propagation parameters only marginally, there is a sizeable shift of
the preferred parameter regions for $\gamma_{2}$, $\gamma_{2,p}$ and $\delta$.
While this shift is likely overestimated in our analysis for the reasons
mentioned earlier, it highlights the challenges for the training of the ANN
(see section 3) and for the statistical inference via importance sampling (see
section 4).
In the previous paragraph, we focused on a specific case of DM annihilation
into a pair of bottom quarks which serves as an example and a point of
comparison. In general, much more complex scenarios with a range of different
final states and combinations at different branching fractions are possible.
The naive approach to obtain results would be to perform an entirely new
parameter scan for each case of interest which obviously requires a
substantial amount of computational resources. Instead, in the following we
will discuss methods to speed up the calculation of CR spectra in a model-
independent fashion to quickly obtain constraints for any given DM model.
## 3 Deep neural network setup and training
Our aim is to predict the output of Galprop for a wide range of input
parameters representing both uncertainties in the propagation model and the
unknown properties of DM. This output can then be used to calculate
experimental likelihoods as described in section 2.4 without computationally
expensive simulations. To achieve this goal, we build and train suitable ANNs
and validate their performance. Considering the two different contributions to
the antiproton flux (i.e. primary and secondary CRs), we construct two
separate ANNs to provide fast predictions of each component based on the
relevant physical parameters. We will refer to the networks for the DM
component and the secondary component as DMNet and sNet, respectively. As the
underlying method in the development of the neural networks is the same, both
ANNs will be presented in parallel in this section.
### 3.1 Training Set
The information that a neural network should be able to learn, in general, has
to be represented in the data that is used to train the network. This allows
for the interpolation of data within the parameter space that would, in a
conventional approach, require new simulations. To remain impartial on the
specific parameters of the DM model, we consider a wide range in the mass of
the DM particle from 5 GeV to 5 TeV and randomly sample from a logarithmic
distribution in this range. A similar approach is taken for the branching
fractions, where we consider all SM final states that give a non-negligible
contribution to a CR antiproton flux [44]: $q\bar{q}$, $c\bar{c}$, $b\bar{b}$,
$t\bar{t}$, $W^{+}W^{-}$, $ZZ$, $hh$ and $gg$. We logarithmically sample each
branching fraction in the range $[10^{-5},1]$ and then normalise the result
such that the sum of all branching fractions equals one. The DM
annihilation cross section is fixed to $\langle\sigma v\rangle=3\times
10^{-26}\,\text{cm}^{3}\,\text{s}^{-1}$ in the complete training set, as
variations in this parameter can be included at a later stage by an
appropriate rescaling of the flux. These DM parameters, which we will
collectively denote by $\mathbf{x}_{\text{DM}}$, are only relevant to the DM
component of the antiproton flux and the corresponding neural network, while
the secondary flux is independent of $\mathbf{x}_{\text{DM}}$ and hence these
parameters will not be used as inputs to the sNet.
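The sampling of the DM parameters described above (a log-uniform mass between 5 GeV and 5 TeV, and log-uniform branching fractions normalised to unit sum) can be sketched as follows; this is an illustrative re-implementation, not the authors' code, and the final-state labels are shorthand:

```python
import math
import random

FINAL_STATES = ["qq", "cc", "bb", "tt", "WW", "ZZ", "hh", "gg"]

def sample_dm_parameters(rng=random):
    """Draw one DM training point as described in the text
    (illustrative re-implementation, not the authors' code)."""
    # DM mass: log-uniform between 5 GeV and 5 TeV
    log_m = rng.uniform(math.log10(5.0), math.log10(5000.0))
    m_dm = 10.0 ** log_m
    # Branching fractions: log-uniform in [1e-5, 1], then normalised to sum to 1
    raw = [10.0 ** rng.uniform(-5.0, 0.0) for _ in FINAL_STATES]
    total = sum(raw)
    fractions = {fs: r / total for fs, r in zip(FINAL_STATES, raw)}
    return m_dm, fractions
```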
Figure 1: Triangle plot: One and two dimensional histograms showing the
frequency of propagation parameters used in the training set, constructed in
such a way that the highest density is achieved in the regions most favoured
by the combination of AMS-02 proton, antiproton and helium data without DM
signal. Top right: One dimensional histogram of the training set for each of
the branching fractions $\chi\chi\rightarrow$ SM SM.
For the propagation parameters we face the significant challenge that the
parameter space introduced in section 2 is 11-dimensional and that only a very
small volume of this parameter space gives an acceptable fit to AMS-02 data.
If we were to simply perform a grid scan or draw random samples from this
parameter space, we would include large regions of parameter space for which
accurate predictions are unnecessary, as they will in any case be strongly
excluded. Conversely, in the preferred regions of parameter space, we want to
achieve an accuracy that is significantly better than the typical relative
errors of about 5% in AMS-02 data, which requires large amounts of training
data.
To obtain sufficiently accurate network predictions in the most interesting
regions of parameter space without spending unnecessary time on the simulation
and training of less interesting regions, we want to make use of the AMS-02
data already for the creation of the training set. Indeed, we can directly use
the MultiNest scan described in section 2.4 to obtain a sample of propagation
parameters (denoted by $\bm{\theta}_{\text{prop}}$ in the following) that is
focused on the regions of parameter space with the highest likelihood (see
also Ref. [95]). Since in the following we will be most interested in the
calculation of exclusion limits, we will base our training on the MultiNest
scan without DM signal. For a detailed investigation of the excess the same
procedure outlined below could be applied to the sample of propagation
parameters from the MultiNest scan with DM signal.
For the creation of the training set we exclude any parameter point in the
MultiNest sample that gives a poor fit to AMS-02 data, specifically with
$\Delta\chi^{2}\geq 30$ compared to the best-fit point. This results in a
total of 117,335 remaining parameter points, which we show in figure 1. We
emphasise that for each parameter the training data extends well beyond the
68% confidence/credible intervals without DM annihilations quoted in table 1.
To ensure a sufficiently good coverage also of the DM parameter space, we
sample 8 combinations of DM parameters for each propagation parameter point,
leading to a very large simulation set of $\mathcal{O}(10^{6})$ parameter
points.
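The construction of the training set can be sketched as follows, with the $\Delta\chi^{2}<30$ cut and the 8 DM combinations per propagation point taken from the text; the structure of the sample points and the placeholder DM draw are assumptions:

```python
import random

def build_training_points(multinest_sample, chi2_best, n_dm_combos=8, rng=random):
    """Keep propagation points with Delta chi^2 < 30 relative to the best fit
    and attach n_dm_combos randomly drawn DM parameter sets to each point
    (schematic version of the procedure described in the text)."""
    kept = [p for p in multinest_sample if p["chi2"] - chi2_best < 30.0]
    training = []
    for point in kept:
        for _ in range(n_dm_combos):
            # placeholder DM draw; see the sampling sketch in section 3.1
            dm_params = {"log10_m": rng.uniform(0.7, 3.7)}
            training.append({"prop": point["theta"], "dm": dm_params})
    return training
```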
### 3.2 Neural Network Architectures
Although the two networks that we use to predict the two components of the
antiproton flux can be set up and trained in a similar way, we face distinct
challenges in each component. For the DMNet the key challenge is the very
large number of input parameters, namely the DM mass plus 8 branching
fractions in $\mathbf{x}_{\text{DM}}$ and a total of 11 propagation parameters
in $\bm{\theta}_{\text{prop}}$, each with a different physical effect on the
output, i.e. the antiproton flux. As we want to have accurate predictions for
variations in each of the parameters, we treat the DM mass, the branching
fractions, and the propagation parameters as three distinct sets of inputs,
which are first processed by three independent dense networks before combining
the outputs (see below).
For the sNet the key challenge is to achieve sufficient accuracy in the
prediction of the secondary antiproton flux, which is tightly constrained by
AMS-02 data. Given these constraints, the secondary antiproton flux only
exhibits relatively small variations across the training set, which
nevertheless need to be accurately captured using the sNet. To achieve the
desired accuracy, we provide an increased number of trainable parameters that
define the network. As we will show in the following sections, the training
duration consequently increases with respect to the DMNet, but very good
accuracy is achieved.
Rather than directly feeding the physical parameters as inputs to the network,
we map the logarithm of $\mathbf{x}_{\text{DM}}$ to values in the range
$\left[0,1\right]$ and the remaining parameters $\bm{\theta}_{\text{prop}}$ to
a distribution with a mean of 0 and a standard deviation of 1. Each of the
networks is then trained in a supervised approach. The simulated fluxes serve
as training labels or ‘true’ fluxes to which the network output can be
compared.
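A minimal sketch of the two input transformations, assuming per-parameter log-ranges for $\mathbf{x}_{\text{DM}}$ and training-set statistics for $\bm{\theta}_{\text{prop}}$ (function names are illustrative):

```python
import math

def scale_dm_inputs(x_dm, log_lo, log_hi):
    """Map log10 of each DM input linearly to [0, 1], given the log-range
    [log_lo, log_hi] of the training set for each parameter."""
    return [(math.log10(v) - lo) / (hi - lo)
            for v, lo, hi in zip(x_dm, log_lo, log_hi)]

def standardise(theta_samples):
    """Shift/rescale each propagation parameter to zero mean and unit
    (population) standard deviation across the training set."""
    n = len(theta_samples)
    dim = len(theta_samples[0])
    means = [sum(s[j] for s in theta_samples) / n for j in range(dim)]
    stds = [math.sqrt(sum((s[j] - means[j]) ** 2 for s in theta_samples) / n)
            for j in range(dim)]
    return [[(s[j] - means[j]) / stds[j] for j in range(dim)]
            for s in theta_samples]
```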
Given the large variations in the CR fluxes that are desired as the output of
the ANNs, here we choose a natural scaling of the original (simulated) flux
$\Phi(E)$ for the sNet outputs,
$\tilde{\Phi}_{\text{s}}(E)=\log_{10}\left(\Phi(E)\,E^{2.7}\right)\,.$ (3.1)
The $\log_{10}$ further decreases the variations in the flux values, which
would otherwise cover several orders of magnitude. The energies and their
respective fluxes are binned values, identical to the output from the
simulations, which extend over the energy range of the AMS-02 antiproton
measurement. Consequently, we have sequences of distinct values in the scaled
flux as training labels. The transformation in eq. (3.1) is easily invertible
and thus allows for direct comparison of the network output to the simulated
spectra.
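Eq. (3.1) and its inverse amount to:

```python
import math

def transform_secondary(E, phi):
    """Forward scaling of eq. (3.1): Phi -> log10(Phi * E^2.7)."""
    return math.log10(phi * E ** 2.7)

def invert_secondary(E, phi_tilde):
    """Inverse of eq. (3.1), recovering the physical flux."""
    return 10.0 ** phi_tilde / E ** 2.7
```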
As the DM component of the flux predominantly scales with the DM mass, we
choose a different scaling for that flux component,
$\tilde{\Phi}_{\text{DM}}(x)=\log_{10}\left(m_{\text{DM}}^{3}\,x\,\Phi(E)\right)\,,$ (3.2)
where $x=E/m_{\text{DM}}$ is a dimensionless quantity. We use a grid in $x$
with 40 points logarithmically spaced in the interval $[10^{-3.7},1]$, on
which we evaluate the training labels and DMNet output. The advantage of this
scaling compared to eq. (3.1) is that it substantially reduces the impact of
changing the DM mass and therefore leads to much less variation across the
training set. (To first approximation the DM component of the antiproton
flux follows the source term
$\Phi_{\overline{p},\text{DM}}\left(E\right)\propto q_{\text{DM}}\propto
m_{\text{DM}}^{-2}\,\mathrm{d}N/\mathrm{d}E\propto
m_{\text{DM}}^{-3}x^{-1}\,\mathrm{d}N/\mathrm{d}\log_{10}x$, where
$\mathrm{d}N/\mathrm{d}\log_{10}x$ depends only very mildly on
$m_{\text{DM}}$.) This is illustrated in figure 2, which shows the resulting
DM antiproton fluxes $\tilde{\Phi}_{\text{DM}}$ as a function of $x$ for a
representative set of final state combinations and DM masses in the training
set. We find that for each combination of input parameters we obtain a slowly-
varying function of $x$ that reaches a maximum and then drops towards $x\to
1$. The general trend is similar across the entire range of DM masses that we
consider, but some information on the DM mass is retained. We find that this
approach significantly improves the training of the DMNet compared to the
scaling in eq. (3.1).
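The grid in $x$ and the scaling of eq. (3.2) can be written as:

```python
import math

def x_grid(n=40, log_lo=-3.7, log_hi=0.0):
    """40 logarithmically spaced points in x = E/m_DM on [10^-3.7, 1]."""
    step = (log_hi - log_lo) / (n - 1)
    return [10.0 ** (log_lo + i * step) for i in range(n)]

def transform_dm(m_dm, x, phi):
    """Forward scaling of eq. (3.2): Phi -> log10(m_DM^3 * x * Phi)."""
    return math.log10(m_dm ** 3 * x * phi)
```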
Figure 2: Transformed DM antiproton fluxes following eq. (3.2) for our
training set, which varies propagation parameters, branching fractions and DM
masses as discussed in sec. 3.1. The modest amount of variation across
different parameter points results in a more easily processable version of the
input GALPROP simulated flux for the DMNet.
Subsequent to the pre-processing of the input, the ANNs contain densely
connected ('dense') layers that process the information from the inputs. To
address the individual challenges for the networks we set up the architecture
as depicted in figure 3. In the DMNet we provide dense layers for each of the
different inputs, which are concatenated in the next step and followed by
large dense layers. In the sNet the pre-processed input is fed through a more
intricate set of dense layers, specifically with (56, 28, 14, 28, 56) nodes.
We use ReLU activations and add a small dropout probability of 0.1% between
the layers. The precise values of these hyperparameters do not significantly
affect the training performance.
The main feature of each of the networks is a recurrent layer. The choice to
work with a recurrent setup instead of other network architecture types has
led to significant improvements in the architecture development process. Even
though the typical application for RNNs is time-series data, we find that our
spectra as functions of energy are handled just as well by this network type.
In particular, the information on the flux contained in a specific energy bin
is highly correlated with that in the neighbouring bins, so a network
architecture that is able to propagate information between neighbouring units
is very beneficial for the task at hand. We chose a GRU layer as proposed in
Ref. [96] for the DMNet and an LSTM layer following Ref. [97] for the sNet.
Each of these layer types is suited to long data sequences and far-reaching
information propagation without leading to vanishing or exploding gradients
during training. While both methods can in principle be used for either
network, the final implementations that achieved the best results were based
on different layer types. A final dense layer produces the network output. We
build the networks using the deep learning API Keras [98], which uses
TensorFlow [99] as backend.
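The topology in figure 3 might be sketched in Keras roughly as follows. This is an illustrative reconstruction, not the published implementation: the DMNet branch widths, the GRU/LSTM unit counts, and the reshaping into a sequence are assumptions; only the sNet dense stack (56, 28, 14, 28, 56), the ReLU activations, and the 0.1% dropout are taken from the text.

```python
# Illustrative Keras sketch of the two topologies described above.
# Branch widths of the DMNet and the recurrent unit counts are placeholders.
from tensorflow import keras
from tensorflow.keras import layers

def build_dmnet(n_bins=40):
    # Three separate input branches: DM mass, branching fractions, propagation
    m_in = keras.Input(shape=(1,), name="dm_mass")
    bf_in = keras.Input(shape=(8,), name="branching_fractions")
    prop_in = keras.Input(shape=(11,), name="propagation")
    branches = [layers.Dense(16, activation="relu")(m_in),
                layers.Dense(16, activation="relu")(bf_in),
                layers.Dense(16, activation="relu")(prop_in)]
    x = layers.Concatenate()(branches)
    x = layers.Dense(n_bins * 8, activation="relu")(x)
    x = layers.Dropout(0.001)(x)               # 0.1% dropout
    x = layers.Reshape((n_bins, 8))(x)         # turn features into a sequence
    x = layers.GRU(8, return_sequences=True)(x)  # recurrent core (GRU)
    out = layers.Dense(1)(x)                   # one value per x bin
    return keras.Model([m_in, bf_in, prop_in], out)

def build_snet(n_bins):
    prop_in = keras.Input(shape=(11,), name="propagation")
    x = prop_in
    for width in (56, 28, 14, 28, 56):         # dense stack from the text
        x = layers.Dense(width, activation="relu")(x)
        x = layers.Dropout(0.001)(x)
    x = layers.RepeatVector(n_bins)(x)         # broadcast to the energy bins
    x = layers.LSTM(32, return_sequences=True)(x)  # recurrent core (LSTM)
    out = layers.Dense(1)(x)                   # one value per energy bin
    return keras.Model(prop_in, out)
```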
Hyperparameters
---
Activation | ReLU
Dropout fraction | 0.1 %
Optimizer | Adam, learning rate scheduling $l\in[10^{-2},10^{-5}]$, patience 10 epochs
Loss | Mean squared error (MSE)
Batch size | 500
Validation split | 20 %
Early stopping | Monitor val. loss, patience = 40
Figure 3: Schematic of the network structure. Top left: Architecture set up
for handling the complete set of inputs. This network type can be used to be
trained on the DM component of the $\overline{p}$ flux (DMNet). Top right:
Simplified architecture for networks that require only the CR propagation
parameters as input. This network architecture is designed for learning the
$\overline{p}_{\text{secondary}}$ fluxes (sNet) and can be employed to train
on proton and helium spectra as well (see appendix A). Bottom: The
hyperparameters used during the training process for each of these networks.
### 3.3 Training process
We use approximately $75$% of the previously described simulation set for the
network training. (Note that the sNet has a smaller training set than the
DMNet, as here fewer unique spectra follow from our parameter sampling for
simulating the training set.) The remainder is used as a test set on which
network performance evaluations can be conducted. Within the
training set, a validation split of $20$% is used during training to monitor
the generalisation capabilities of the network. Unlike the training loss, the
loss calculated on the validation set is not used to update the model
parameters during the optimisation process.
The network training was conducted using the Adam optimizer [100] and a mean
squared error (MSE) loss. The initial learning rate of $10^{-2}$ is decreased
during the training process, based on the behaviour of the validation loss,
for an optimal convergence to a minimal loss. After the learning rate reaches
its predefined minimum (lr = $10^{-5}$) the training process is terminated
after 40 epochs without improvement of the validation loss, using an early
stopping mechanism. This process helps ensure the convergence of the network
optimisation. The MSE loss for both the training and validation loss over the
training epochs is shown in figure 4 for both ANNs.
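The combination of learning-rate reduction on a plateau and early stopping can be illustrated with a schematic stand-in for the corresponding Keras callbacks. The initial and minimal learning rates and the patience values are taken from the text and the hyperparameter table; the reduction factor of 0.1 is an assumption:

```python
def train_with_schedule(losses, lr0=1e-2, lr_min=1e-5, factor=0.1,
                        lr_patience=10, stop_patience=40):
    """Replay a sequence of validation losses through the schedule described
    in the text: reduce the learning rate on a plateau and, once lr has
    reached its minimum, stop after `stop_patience` epochs without
    improvement. Returns (stopping epoch, final learning rate)."""
    lr, best, wait = lr0, float("inf"), 0
    for epoch, loss in enumerate(losses):
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
        if wait >= lr_patience and lr > lr_min:
            lr = lr * factor                   # plateau: reduce learning rate
            if lr < lr_min * 1.0001:
                lr = lr_min                    # clamp to the predefined minimum
            wait = 0
        elif wait >= stop_patience and lr <= lr_min:
            return epoch, lr                   # early stopping triggered
    return len(losses) - 1, lr
```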
Figure 4: Evolution of the MSE loss for the DMNet (left) and sNet (right) over
the training epochs.
We performed the training on a V100-SXM2 GPU. Given the depth of the
individual networks, this resulted in training durations of about $4$ minutes
per epoch of the DMNet and about $12$ minutes per epoch for the sNet.
### 3.4 Validation of the Network Performance
Training performance measures, such as the loss based on the training set, can
be helpful while adjusting the architecture and hyperparameters of the
networks. Deploying the networks, however, requires an evaluation of their
ability to replace the simulations. Using the fully trained networks we can
compare the simulated spectra from Galprop within the test set to the network
predictions based on the same parameter point. An example of such a
comparison is shown in figure 5. We show the simulated spectra and the output
of the respective ANNs for both the secondary and DM component of the
antiproton flux (as well as for their sum). In the top panel we depict the
fluxes in physical space alongside the AMS-02 antiproton data, demonstrating
that the network provides fluxes that are extremely similar to the
corresponding simulations. This is illustrated even more clearly in the bottom
panel, which shows the relative differences between the ANN and the simulation
with respect to the simulated total antiproton flux, compared to the relative
uncertainties of the AMS-02 antiproton data. Prior to plotting each CR flux we
infer the solar modulation potential and overall normalization by maximizing
the likelihood for the AMS-02 data, as outlined in section 2.4. This enables a fit
to the data measured within the heliosphere and is automatically applied to
each CR flux evaluated in the following. As this is not computationally
expensive, it is not necessary to already include this step in the training
process for the ANN. The parameters inferred for the Galprop and ANN fluxes
respectively are in agreement with each other. The parameter point for the
specific example presented in figure 5 was randomly selected from the
extensive test set.
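One ingredient of this inference, the overall normalization for a fixed model flux, has a closed form under a Gaussian $\chi^{2}$. The following sketch shows it; the function names are illustrative, the solar-modulation potential would be profiled by an additional one-dimensional scan (omitted here), and the exact procedure of section 2.4 is not reproduced:

```python
def best_normalisation(model, data, sigma):
    """Closed-form chi^2-minimising overall normalisation A for a fixed
    model flux: A = sum(d*m/s^2) / sum(m^2/s^2)."""
    num = sum(d * m / s ** 2 for d, m, s in zip(data, model, sigma))
    den = sum(m * m / s ** 2 for m, s in zip(model, sigma))
    return num / den

def chi2(model, data, sigma, A):
    """Gaussian chi^2 of the rescaled model against the data."""
    return sum(((d - A * m) / s) ** 2 for d, m, s in zip(data, model, sigma))
```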
Figure 5: Exemplary comparison of the ANN and Galprop antiproton fluxes for
the DM component alone and for the combination of secondary and DM
components, where the listed parameters and simulated fluxes are randomly
sampled from the test set. Each component of the neural network flux is
predicted by the individual
networks. The lower panel depicts the relative difference between the Galprop
(‘true’) and ANN (‘predicted’) fluxes with respect to the Galprop flux
compared to the relative AMS-02 uncertainty. The listed solar modulation
potential and overall normalization were inferred based on the AMS-02 data for
each combined antiproton flux as described in section 2.4.
We conclude that the accuracy of the sNet is fully sufficient: the relative
differences between the fluxes predicted by the ANN and the simulated Galprop
fluxes are always well below the relative uncertainty of the AMS-02
measurements. The architecture and training process used for the sNet can
analogously be applied to train an ANN on proton and helium spectra based on
the same Galprop simulation set, achieving a comparable accuracy. We provide
additional details on these networks in appendix A.
Given that the DMNet is trained on a parameter space of much higher
dimensionality, it is unsurprising that its predictions are on average less
accurate than those of the sNet. Indeed, when calculating the relative
differences between simulations and network predictions for the DM component
alone, we find that only 72% of samples lie on average within the AMS-02
relative uncertainties. However, it is essential to realise that in any
realistic setting the DM component will only constitute a subdominant
contribution to the antiproton flux. Indeed, if the DM contribution in a given
bin significantly exceeds the uncertainty of the AMS-02 data (which is
typically at the level of 5%) the model is expected to be excluded.
In order to provide more realistic estimates of the general accuracy and
stability of the DMNet performance within the test set, we therefore focus on
DM signals that contribute 5% to the total antiproton flux in the bin where
the relative DM contribution is largest. We then calculate the differences
between simulations and network output relative to the total antiproton flux.
This approach shows that even if the DMNet itself is only accurate at the
level of 10%, the total antiproton flux can still be predicted with an
accuracy at the sub-percent level for allowed DM models.
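The rescaling underlying this accuracy estimate can be sketched as follows; the function and its arguments are illustrative, not the authors' code:

```python
def rescaled_deviation(phi_dm_true, phi_dm_pred, phi_total, target=0.05):
    """Rescale the DM component so that its largest relative contribution to
    the total flux equals `target` (5%), then return the network error
    relative to the total flux in each bin (sketch of the accuracy estimate
    described in the text)."""
    peak = max(dm / tot for dm, tot in zip(phi_dm_true, phi_total))
    scale = target / peak
    return [scale * (pred - true) / tot
            for true, pred, tot in zip(phi_dm_true, phi_dm_pred, phi_total)]
```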
In figure 6, we show this accuracy estimate for a total of 3000 DM component
samples from the test set (1000 samples each for three different mass bins
corresponding to the three different rows). Here we compute the deviation
between DMNet prediction and Galprop simulation to the corresponding total
antiproton flux, as in the lower panel of figure 5. Since the deviations are
found to be minuscule when compared to the AMS-02 relative uncertainties, we
provide in the right column of figure 6 a zoomed-in version. It can be seen
that the uncertainty bands (containing the central 68% of the network
predictions) are typically at the level of 0.1% and do not exhibit any
systematic shifts nor any significant dependence on the DM mass.
In the following we will be interested in comparing the total antiproton flux
to data in order to determine which DM signals are allowed by observations.
The comparison between the network accuracy and the AMS-02 uncertainties in
figure 6 clearly shows that it is fully sufficient for this purpose to use the
ANNs instead of running Galprop. Indeed, we will show explicitly in the next
section that both approaches lead to very similar values for the $\chi^{2}$
statistic described in section 2.4.
Figure 6: Relative deviations between predicted and simulated fluxes for the
DM component, binned into three mass bins, together with their central 68%
bands. Each panel shows 1000 samples from the test set. As in the lower
panels of figure 5, the left column again compares this to the benchmark of
the relative AMS-02 uncertainties. The right column shows a zoomed-in version
of the left column in the interval [-0.010, 0.010].
The fully trained networks, as described in this section and appendix A, are
publicly available as DarkRayNet at https://github.com/kathrinnp/DarkRayNet.
In this repository, we provide an interface to easily access flux predictions
for the corresponding CR species. This tool can for example be used for
indirect DM searches as we outline in our analysis in the subsequent section.
## 4 Constraining the dark matter annihilation cross section
### 4.1 Statistical method
The ANNs described in the previous section enable us to obtain predictions of
the primary and secondary antiproton flux as a function of the DM parameters
$\mathbf{x}_{\text{DM}}$ and the propagation parameters
$\bm{\theta}_{\text{prop}}$. Given data from observations we can then
construct a likelihood function
$\mathcal{L}(\mathbf{x}_{\text{DM}},\bm{\theta}_{\text{prop}})$. We emphasise
that, given suitable predictions for the CR fluxes, this likelihood is quick
to evaluate and therefore does not need to be predicted by the ANN. This has
the significant advantage that the ANN does not need to learn the various
fluctuations that may be present in the data.
The likelihood function can then be used to constrain both
$\mathbf{x}_{\text{DM}}$ and $\bm{\theta}_{\text{prop}}$. In the present work
we primarily focus on the constraints on the DM parameter space, meaning that
we will treat the propagation parameters simply as nuisance parameters that
need to be varied in order to draw robust conclusions. The two main ways to
achieve this are to either calculate the profile likelihood
$\hat{\mathcal{L}}(\mathbf{x}_{\text{DM}})=\mathcal{L}(\mathbf{x}_{\text{DM}},\hat{\bm{\theta}}_{\text{prop}}(\mathbf{x}_{\text{DM}}))\;,$
(4.1)
where $\hat{\bm{\theta}}_{\text{prop}}(\mathbf{x}_{\text{DM}})$ denote the
propagation parameters that maximise the likelihood for given DM parameters
$\mathbf{x}_{\text{DM}}$, or to calculate the marginalised likelihood
$\bar{\mathcal{L}}(\mathbf{x}_{\text{DM}})=\int\mathcal{L}(\mathbf{x}_{\text{DM}},\bm{\theta}_{\text{prop}})p(\bm{\theta}_{\text{prop}})\mathrm{d}\bm{\theta}_{\text{prop}}\;,$
(4.2)
where $p(\bm{\theta}_{\text{prop}})$ denotes the prior probability for the
propagation parameters. Given sufficiently constraining data, the profile
likelihood and the marginalised likelihood are expected to be similar and the
dependence of the result on the chosen priors should be small. We find that
this is largely true for the case considered here, with some notable
exceptions to be discussed below.
From the point of view of our machine learning approach, however, the two ways
of varying the nuisance parameters are very different. The profile likelihood
depends only on the antiproton flux for a single value of
$\bm{\theta}_{\text{prop}}$, meaning that highly accurate predictions are
needed close to the maximum of the likelihood. For extreme choices of the DM
parameters this maximum may be pushed to corners of parameter space where the
network has not been sufficiently trained. A single outlier in the prediction
will then completely bias the result and lead to numerical instabilities when
sampling the parameter space. This makes accurate calculations of the profile
likelihood a highly challenging task.
The marginalised likelihood, on the other hand, depends on the likelihood
across a range of propagation parameters, which should have substantial
overlap with the parameter regions seen during training. The impact of
individual outliers in the predictions is also reduced significantly compared
to the case of the profile likelihood, making the calculation of marginalised
likelihoods based on ANN predictions more robust. Nevertheless, the challenge
remains to ensure that the results are not biased by regions of parameter space
in which little training has been performed. In the present work we address
this challenge using the technique of importance sampling [101], which we
describe in the following. (For a different approach to Bayesian analyses
of cosmic-ray propagation with the help of neural networks we refer to Ref.
[37].)
First of all, we note that an approximate marginalisation can be performed by
drawing a random sample of parameter points $\bm{\theta}_{i}$ from the prior
probability $p(\bm{\theta}_{\text{prop}})$ and calculating the sum
$\bar{\mathcal{L}}(\mathbf{x}_{\text{DM}})\approx\frac{1}{N}\sum_{i=1}^{N}\mathcal{L}(\mathbf{x}_{\text{DM}},\bm{\theta}_{i})\;.$
(4.3)
In fact, the same can be done by drawing a random sample from any probability
distribution function $q(\bm{\theta}_{\text{prop}})$ provided the individual
points are reweighted accordingly (so-called importance sampling):
$\bar{\mathcal{L}}(\mathbf{x}_{\text{DM}})\approx\frac{\sum_{i=1}^{N}\mathcal{L}(\mathbf{x}_{\text{DM}},\bm{\theta}_{i})\frac{p(\bm{\theta}_{i})}{q(\bm{\theta}_{i})}}{\sum_{i=1}^{N}\frac{p(\bm{\theta}_{i})}{q(\bm{\theta}_{i})}}\;.$
(4.4)
A particularly interesting case is the one in which $q(\bm{\theta}_{\text{prop}})$ is
taken to be the posterior probability for the propagation parameters in the
absence of a DM signal, i.e.
$q(\bm{\theta}_{\text{prop}})\propto\mathcal{L}(\mathbf{x}_{\text{DM}}=0,\bm{\theta}_{\text{prop}})\,p(\bm{\theta}_{\text{prop}})\equiv\mathcal{L}_{0}(\bm{\theta}_{\text{prop}})\,p(\bm{\theta}_{\text{prop}})\,.$
(4.5)
In this case $p(\bm{\theta}_{i})/q(\bm{\theta}_{i})\propto
1/\mathcal{L}_{0}(\bm{\theta}_{i})$ and hence
$\bar{\mathcal{L}}(\mathbf{x}_{\text{DM}})\approx\frac{\sum_{i=1}^{N}\frac{\mathcal{L}(\mathbf{x}_{\text{DM}},\bm{\theta}_{i})}{\mathcal{L}_{0}(\bm{\theta}_{i})}}{\sum_{i=1}^{N}\frac{1}{\mathcal{L}_{0}(\bm{\theta}_{i})}}\;.$
(4.6)
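The self-normalised estimator in eq. (4.6) is easy to sketch numerically. The snippet below is a toy illustration with a one-dimensional "propagation parameter" and Gaussian stand-ins for the likelihoods; the functions `L0` and `L_sig` and all numerical values are invented for the example and do not correspond to the actual ANN or AMS-02 likelihoods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: one propagation parameter theta with prior p(theta) = N(0, 1).
# L0 is the likelihood without a DM signal, L_sig the one with a signal;
# both are Gaussian stand-ins for the ANN-based likelihood evaluations.
def L0(theta):
    return np.exp(-0.5 * (theta - 0.5) ** 2 / 2.0 ** 2)

def L_sig(theta):
    return np.exp(-0.5 * (theta - 0.6) ** 2 / 2.0 ** 2)

# Sample theta_i from q ∝ L0 * p (the no-signal posterior). In this
# Gaussian toy the product is again Gaussian with known mean/variance.
var_q = 1.0 / (1.0 + 1.0 / 2.0 ** 2)   # posterior variance = 0.8
mean_q = var_q * (0.5 / 2.0 ** 2)      # posterior mean     = 0.1
theta = rng.normal(mean_q, np.sqrt(var_q), size=200_000)

# Eq. (4.6): importance weights 1/L0(theta_i) undo the reweighting by q,
# so the self-normalised sum approximates the prior-marginalised likelihood.
w = 1.0 / L0(theta)
L_bar = np.sum(L_sig(theta) * w) / np.sum(w)

# Cross-check against brute-force marginalisation over the prior, eq. (4.3).
theta_prior = rng.normal(0.0, 1.0, size=200_000)
L_bar_direct = np.mean(L_sig(theta_prior))
print(L_bar, L_bar_direct)   # both ≈ 0.863 for this toy model
```

The key point of the sketch is that the likelihood is only evaluated at the posterior sample `theta`, i.e. exactly where a well-trained ANN is reliable, while the weights recover the prior-marginalised average.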
The great advantage of this approach is that the likelihood is only evaluated
for plausible values of the propagation parameters, i.e. values that
give a large posterior probability in the absence of a DM signal. These are
exactly the same parameter regions on which we have focused for the training
of the ANNs described above. Indeed, it is possible to generate the training
data and sample the posterior probability using the same prior probabilities
and the same MultiNest runs such that a large overlap between the two is
ensured. (We emphasise that the posterior sample is not part of the
training data, i.e. the ANN is never evaluated on the exact same values seen
during training.) Another significant advantage is that it is straightforward
to include additional constraints on the propagation parameters that are
independent of the DM parameters and therefore not part of the ANN training.
For example, to also include likelihoods for proton data $\mathcal{L}_{p}$ and
He data $\mathcal{L}_{\text{He}}$, it is sufficient to draw a sample from the
joint posterior
$q(\bm{\theta}_{\text{prop}})\propto\mathcal{L}_{0}(\bm{\theta}_{\text{prop}})\,\mathcal{L}_{p}(\bm{\theta}_{\text{prop}})\,\mathcal{L}_{\text{He}}(\bm{\theta}_{\text{prop}})\,p(\bm{\theta}_{\text{prop}})\,.$
(4.7)
To conclude this discussion, we note that in the case that the likelihood can
be written in terms of a $\chi^{2}$ function, $\mathcal{L}\propto
e^{-\chi^{2}/2}$, we can define a marginalised $\chi^{2}$ function as
$\bar{\chi}^{2}(\mathbf{x}_{\text{DM}})\equiv-2\log\bar{\mathcal{L}}(\mathbf{x}_{\text{DM}})$.
Importance sampling then yields
$\bar{\chi}^{2}(\mathbf{x}_{\text{DM}})=-2\log{\frac{\sum_{i=1}^{N}\exp{\left(-\frac{\Delta\chi^{2}(\mathbf{x}_{\text{DM}},\bm{\theta}_{i})}{2}\right)}}{\sum_{i=1}^{N}\exp{\left(\frac{\chi^{2}_{0}(\bm{\theta}_{i})}{2}\right)}}}\,,$
(4.8)
where
$\chi^{2}_{0}(\bm{\theta}_{\text{prop}})=\chi^{2}(\mathbf{x}_{\text{DM}}=0,\bm{\theta}_{\text{prop}})$
and
$\Delta\chi^{2}(\mathbf{x}_{\text{DM}},\bm{\theta}_{\text{prop}})=\chi^{2}(\mathbf{x}_{\text{DM}},\bm{\theta}_{\text{prop}})-\chi^{2}_{0}(\bm{\theta}_{\text{prop}})$.
To calculate confidence intervals and exclusion limits for the DM parameters,
we then define
$\Delta\bar{\chi}^{2}(\mathbf{x}_{\text{DM}})=\bar{\chi}^{2}(\mathbf{x}_{\text{DM}})-\bar{\chi}^{2}_{0}\,.$
(4.9)
Hence, $\Delta\bar{\chi}^{2}<0$ corresponds to a preference for a DM signal,
while parameter points with $\Delta\bar{\chi}^{2}>3.84$ can be excluded at 95%
confidence level. (Note that although our treatment of nuisance parameters
is motivated by Bayesian statistics, we still interpret the resulting
marginalised likelihood using frequentist methods, such that there is no need
to choose priors for the DM parameters.)
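In practice eq. (4.8) is best evaluated with the log-sum-exp trick to avoid underflow of the exponentials. A minimal sketch follows; the helper names are ours and do not refer to a released code. Since $\Delta\chi^{2}=0$ at $\mathbf{x}_{\text{DM}}=0$, the $\chi^{2}_{0}$ sums cancel in eq. (4.9), leaving $\Delta\bar{\chi}^{2}=-2\left[\operatorname{logsumexp}(-\Delta\chi^{2}/2)-\log N\right]$.

```python
import numpy as np

def logsumexp(a):
    """Numerically stable log(sum(exp(a)))."""
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))

def delta_chi2_bar(delta_chi2):
    """Marginalised Delta chi^2 of eqs. (4.8)-(4.9) from a posterior sample.

    delta_chi2: array of Delta chi^2(x_DM, theta_i), one entry per
    propagation-parameter point theta_i drawn from q. Because
    Delta chi^2 = 0 at x_DM = 0, the chi^2_0 sums cancel and
        Delta chi^2_bar = -2 * (logsumexp(-delta_chi2 / 2) - log N).
    """
    d = np.asarray(delta_chi2, dtype=float)
    return -2.0 * (logsumexp(-0.5 * d) - np.log(d.size))

# If every theta_i gives the same Delta chi^2, marginalisation returns it:
print(delta_chi2_bar(np.full(1000, 5.0)))                    # 5.0

# The average of exp(-Delta chi^2 / 2) is dominated by the smallest values:
print(delta_chi2_bar(np.array([0.0, 100.0, 100.0, 100.0])))  # ≈ 2.77 = 2 ln 4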
### 4.2 Example A: Single Dark Matter Annihilation Channel
Figure 7: One- and two-dimensional histograms of $\Delta\chi^{2}$ for the
AMS-02 antiproton measurement, based on the antiproton fluxes provided by the
neural network and Galprop for different combinations of propagation
parameters. We consider the annihilation of DM particles with
$m_{\text{DM}}=100$ GeV (left) and 1 TeV (right) into $b\overline{b}$ with a
cross section of $\langle\sigma v\rangle=10^{-26}\,\mathrm{cm^{3}\,s^{-1}}$.
The values of $\Delta\bar{\chi}^{2}$ indicated by the black dashed lines
represent the marginalised values obtained by the importance-sampling
technique described in section 4.1.
Let us first consider a frequently used benchmark scenario and assume that the
DM particles annihilate exclusively into pairs of bottom quarks, such that the
injection spectrum is fully characterised by the (velocity-independent)
annihilation cross section $\langle\sigma v\rangle$ and the DM mass
$m_{\text{DM}}$. As a first step, we can then calculate
$\Delta\chi^{2}(m_{\text{DM}},\langle\sigma
v\rangle,\bm{\theta}_{\text{prop}})$ for different values of the propagation
parameters. Figure 7 compares the results that we obtain when using the ANN
predictions of the antiproton flux and when employing Galprop. The two panels
correspond to different values of the DM mass and use the same 10122 sets of
propagation parameters drawn randomly from the posterior distribution
$q(\bm{\theta}_{\text{prop}})$ as discussed above. In both cases we find a
very strong correlation between the two ways of calculating $\Delta\chi^{2}$
($r>0.98$). Indeed, for 95% of parameter points the absolute difference in
$\Delta\chi^{2}$ is smaller than $2.1$ ($0.9$) for
$m_{\text{DM}}=100\,\mathrm{GeV}$ ($m_{\text{DM}}=1\,\mathrm{TeV}$),
confirming the excellent performance of our ANN.
In each case we use a dashed line to indicate $\Delta\bar{\chi}^{2}$ as
defined in eq. (4.9). We emphasise that, since we average over
$\exp(-\Delta\chi^{2}/2)$, the final result is dominated by the points with
the smallest $\Delta\chi^{2}$. Again, we find very good agreement between the
marginalised $\Delta\chi^{2}$ obtained from the ANN and from Galprop. The
values obtained in the left panel correspond to a substantial preference for a
DM signal, while the parameter point considered in the right panel is slightly
disfavoured by data. Although the value $\Delta\bar{\chi}^{2}=-31.5$ ($-32.7$)
that we obtain for $m_{\text{DM}}=100\,\mathrm{GeV}$ from the ANN (Galprop)
would at face value correspond to quite a significant excess, we emphasise
that our set-up is not designed to provide an accurate characterisation of
this excess. In particular we caution the reader that due to our simplified
implementation of AMS-02 data (in particular neglecting correlations) this
number should be interpreted with care. We expect that a more detailed
analysis of AMS-02 data would lead to a much lower significance.
Comparing the evaluations of the marginalised $\Delta\chi^{2}$ with the ANN
and Galprop respectively, the reduction of the computational cost achieved
with our neural network method becomes apparent. For the ANN the prediction of
the set of CR fluxes for each of the specific DM parameter points takes only
$\mathcal{O}(1)$ CPU second in total for the 10122 parameter points, but the
calculation of the respective $\chi^{2}$ while inferring the solar modulation
potential takes up the majority of the computation time ($\mathcal{O}(10)$ CPU
seconds in total). This time is however negligible compared to the Galprop
simulations, which take $\mathcal{O}(10)$ CPU hours to obtain the same number
of CR fluxes.
Figure 8: $\Delta\chi^{2}$ for the AMS-02 antiproton measurement, based on the
antiproton fluxes provided by the neural network and Galprop, as a function of
$\langle\sigma v\rangle$ and for different values of $m_{\text{DM}}$. We
assume dominant $\text{DM}\,\text{DM}\rightarrow b\overline{b}$ annihilation in each
case. Left: Propagation parameters are fixed to the best-fit values in a
frequentist setup when only secondary antiprotons are considered (see table
1). Right: Propagation parameters are marginalised over using importance
sampling. We also include the 95% CL upper bounds on the annihilation
cross section following eq. (4.9).
A complementary perspective to the results in figure 7 is provided in figure
8, which shows $\Delta\chi^{2}$ as a function of $\langle\sigma v\rangle$ for
different values of the DM mass. In the left panel we fix the propagation
parameters to their best-fit values in the absence of a DM signal (see table
1), while in the right panel we marginalise over all propagation parameters
using importance sampling. Solid (dotted) curves correspond to the ANN
(Galprop) predictions and again show excellent agreement. The horizontal
dashed lines indicate the 95% confidence level upper bound on $\langle\sigma
v\rangle$ obtained following eq. (4.9).
As expected, allowing variations in the propagation parameters generally leads
to smaller values of $\Delta\chi^{2}$ and hence relaxes the upper bounds on
the annihilation cross section. This effect is most dramatic for the case
$m_{\text{DM}}=100\,\mathrm{GeV}$ (blue line), where there is a preference for
a DM signal in the data and hence the exclusion limit is relaxed by about an
order of magnitude. The small bumps in the blue curve in the right panel
result from the finite size of the propagation-parameter sample used for the
marginalisation, i.e. from the approximation made in eq. (4.4).
Repeating this procedure for different values of the DM mass, we can obtain
exclusion limits on $\langle\sigma v\rangle$ as a function of $m_{\text{DM}}$.
These are shown in figure 9 for the case of fixed propagation parameters
(left) and when marginalising over propagation parameters (right). The colour
shading indicates parameter regions where $\Delta\chi^{2}>0$, such that a DM
signal is disfavoured, while greyscale is used to indicate parameter regions
where $\Delta\chi^{2}<0$ such that a DM signal is preferred. We find that this
is the case for DM masses in the range $50\text{--}250\,\mathrm{GeV}$. Again,
marginalisation leads to relaxed exclusion bounds and an increased preference
for a DM signal. We reiterate however that the magnitude of this preference is
likely overestimated in our analysis.
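The limit-setting step itself, finding the largest $\langle\sigma v\rangle$ for which $\Delta\bar{\chi}^{2}<3.84$, is a one-dimensional root search. The sketch below uses bisection on a toy quadratic $\Delta\bar{\chi}^{2}$ curve; the toy function merely stands in for the ANN-plus-data pipeline, and its numbers are invented for illustration.

```python
CL95 = 3.84  # 95% CL threshold for one parameter of interest

def toy_delta_chi2_bar(sv):
    """Invented stand-in for the marginalised Delta chi^2 as a function of
    <sigma v> (in units of 1e-26 cm^3/s): quadratic with a best fit at
    sv = 0.5 and Delta chi^2 = 0 at sv = 0, mimicking the shape of the
    curves in the right panel of figure 8."""
    return ((sv - 0.5) ** 2 - 0.25) / 0.09

def upper_limit(f, lo=0.0, hi=10.0, tol=1e-8):
    """Largest sv with f(sv) < CL95, found by bisection. Assumes
    f(lo) < CL95 < f(hi) and a single upward crossing in [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < CL95:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

sv95 = upper_limit(toy_delta_chi2_bar)
print(sv95)   # ≈ 1.272, i.e. 0.5 + sqrt(3.84 * 0.09 + 0.25)
```

Because the marginalisation uses a fixed sample of propagation parameters, each evaluation of `toy_delta_chi2_bar`'s real counterpart is cheap, and repeating the bisection over a grid of DM masses directly traces out the exclusion curve of figure 9.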
Figure 9: $\Delta\chi^{2}$ for the AMS-02 antiproton measurement as a function
of $\langle\sigma v\rangle$ and $m_{\text{DM}}$ using the fixed propagation
parameters specified in table 1 (_left_) and performing the marginalisation
via importance sampling (_right_). The dashed lines represent the 95% CL
upper bounds on the annihilation cross section. The white regions in the upper
part of each panel correspond to $\Delta\chi^{2}>1000$ and are excluded to
improve numerical stability.
To assess the impact of marginalisation let us finally compare our results
with those obtained using a profile likelihood. As discussed in section 4.1,
special care needs to be taken when using the ANN predictions to calculate a
profile likelihood in order to ensure that the result is not dominated by
regions of parameter space with insufficient training data. We achieve this
goal by restricting the allowed parameter regions as follows: $0.1<s<0.6$,
$1\,\mathrm{GV}<R_{0}<10\,\mathrm{GV}$, $0.35<\delta<0.6$ and
$2.3<\gamma_{2,(p)}<2.5$. We then use MultiNest to explore the remaining
parameter space for fixed values of the DM mass and varying annihilation cross
section in order to find the largest value of $\langle\sigma v\rangle$ such
that $\Delta\hat{\chi}^{2}(m_{\text{DM}},\langle\sigma
v\rangle)\equiv-2\Delta\log\hat{\mathcal{L}}(m_{\text{DM}},\langle\sigma
v\rangle)<3.84$. Repeating this procedure for different values of
$m_{\text{DM}}$ then yields the exclusion limit.
The results are shown in figure 10 together with the exclusion limits obtained
for fixed propagation parameters and when marginalising over propagation
parameters as shown in figure 9. We find that in most regions of parameter
space the profile likelihood approach yields somewhat weaker exclusion limits
than the marginalisation. Such a difference is to be expected whenever
substantial tuning in the propagation parameters is required in order to
accommodate a DM signal. For example, for $m_{\text{DM}}=1\,\mathrm{TeV}$ and
$\langle\sigma v\rangle=5\times 10^{-26}\,\mathrm{cm^{3}\,s^{-1}}$ we find
that $\Delta\hat{\chi}^{2}<3.84$ can be achieved only if $D_{0}$, $v_{0,c}$
and $z_{\mathrm{h}}$ all take values close to their lower bounds. Such a
tuning is not penalised in the profile likelihood, but the contribution of
these solutions to the marginalised likelihood will be suppressed according to
the small volume of the viable parameter space. The same conclusion can be
reached from the right panel of figure 7: Although there are sets of
propagation parameters that yield $\Delta\chi^{2}\approx 0$, most parameter
combinations give significantly larger $\Delta\chi^{2}$, such that
marginalisation leads to $\Delta\bar{\chi}^{2}\approx 2.6$, close to the 95%
confidence level upper bound. In other words, the difference between the two
approaches is a direct consequence of the different statistical methods and
not an artefact of the ANN predictions.
In general, the dependence of the DM limit on the chosen value of the halo
height is well known. To first order the normalisation of the DM flux is
proportional to $z_{\mathrm{h}}$, and thus the DM limit scales inversely with
$z_{\mathrm{h}}$, as demonstrated in a very recent analysis [102].
The CR fit conducted in section 2 varies $z_{\mathrm{h}}$ between 2 and 7 kpc.
Because of the well-known $z_{\mathrm{h}}$--$D_{0}$ degeneracy, the resulting
posterior of $z_{\mathrm{h}}$ is almost flat across the entire fit range. The DM
limit derived from the marginalised $\Delta\bar{\chi}^{2}$ should be
understood to refer to 4.8 kpc, namely the average value of $z_{\mathrm{h}}$
in the posterior. This is in perfect agreement with recent analyses of
secondary fluxes by AMS-02 [65, 69, 71, 6]. On the other hand, when limits are
derived in a frequentist approach and in the absence of a DM preference,
$z_{\mathrm{h}}$ values are pushed towards the lower boundary of the fit range
at 2 kpc. This again explains the difference between the marginalised and
profiled limits in figure 10. One possible way to study the
$z_{\mathrm{h}}$ dependence explicitly in the marginalisation framework is to
further restrict the range of $z_{\mathrm{h}}$.
Figure 10: A comparison of the 95% CL exclusion bounds in figure 9 (blue and
light blue) with the bounds obtained when profiling over the propagation
parameters using the CR spectra provided by our ANNs (green). The black dashed
line indicates the thermal annihilation cross section for WIMPs from [103].
The differences between marginalised and profiled limits are particularly
relevant given how they affect the conclusions drawn from figure 10. When
using the marginalised likelihood we find that the thermal cross section
(indicated by the black dashed line) can be excluded for DM masses in the
range $300\text{--}2000\,\mathrm{GeV}$, implying that WIMP models in this mass
range can only be viable if the injection of antiprotons is suppressed. When
using the profile likelihood, on the other hand, almost the entire mass range
above $70\,\mathrm{GeV}$ is found to be viable. We note that the agreement
between the frequentist and Bayesian approach will improve with a better
determination of $z_{\mathrm{h}}$ as expected from the analysis of the
forthcoming Be isotope measurements by AMS-02 [104].
In addition to the reduction in computing time achieved when using the ANN
instead of Galprop, we find that the use of importance sampling leads to
another improvement compared to the more conventional profiling approach.
Crucially, our marginalisation using importance sampling is based on a fixed
set of 10122 data points in the propagation model, which can be evaluated in
parallel. The ANN therefore gives a negligible contribution to the time needed
to calculate the upper bound on the annihilation cross section for each of the
100 mass bins shown in figures 9 and 10. For the profiling approach, on the
other hand, the parameter points are proposed sequentially by the sampler, so
the ANN evaluations cannot be batched in parallel. This increases the
computation time, such that the speed-up when using the ANN instead of
Galprop is reduced to two orders of magnitude (rather than the three orders of
magnitude achieved with importance sampling).
### 4.3 Example B: Scalar Singlet Dark Matter
We now illustrate the use of the ANN for the analysis of a specific model of
DM with a singlet scalar field $S$. Imposing a $Z_{2}$ symmetry, $S\to-S$, the
scalar particle is stable and thus a DM candidate. The Lagrangian of this
scalar singlet DM (SSDM) model reads [105, 106, 107]
${\cal L}={\cal
L}_{\text{SM}}+\frac{1}{2}\partial_{\mu}S\,\partial^{\mu}S-\frac{1}{2}m_{S,0}^{2}S^{2}-\frac{1}{4}\lambda_{S}S^{4}-\frac{1}{2}\lambda_{HS}\,S^{2}H^{\dagger}H\,,$
(4.10)
where ${\cal L}_{\text{SM}}$ is the Standard Model Lagrangian and $H$ is the
Standard Model Higgs field. After electroweak symmetry breaking, the last
three terms of the Lagrangian become
${\cal
L}\supset-\frac{1}{2}m_{S}^{2}\,S^{2}-\frac{1}{4}\lambda_{S}\,S^{4}-\frac{1}{4}\lambda_{HS}\,h^{2}S^{2}-\frac{1}{2}\lambda_{HS}\,vhS^{2}\,,$
(4.11)
with $H=(h+v,0)/\sqrt{2}\,$, $v=246\,$GeV, and where we introduced the
physical mass of the singlet field,
$m_{S}^{2}=m_{S,0}^{2}+\lambda_{HS}\,v^{2}/2$. The DM phenomenology of the
SSDM has been extensively studied in the literature, see e.g. [108, 109, 110,
45, 111, 112] and references therein.
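For completeness, the step from eq. (4.10) to eq. (4.11) follows from expanding the portal term after electroweak symmetry breaking, $H^{\dagger}H\to(h+v)^{2}/2$:
$-\frac{1}{2}\lambda_{HS}\,S^{2}H^{\dagger}H\;\longrightarrow\;-\frac{1}{4}\lambda_{HS}\,S^{2}(h+v)^{2}=-\frac{1}{4}\lambda_{HS}\,h^{2}S^{2}-\frac{1}{2}\lambda_{HS}\,vhS^{2}-\frac{1}{4}\lambda_{HS}\,v^{2}S^{2}\,,$
where the last term combines with the bare mass term $-\frac{1}{2}m_{S,0}^{2}S^{2}$ to give the physical mass $m_{S}^{2}=m_{S,0}^{2}+\lambda_{HS}\,v^{2}/2$.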
The DM phenomenology of the SSDM is fully specified by the mass of the DM
particle, $m_{S}=m_{\rm DM}$, and the strength of the coupling between the DM
and Higgs particles, $\lambda_{HS}$. Below the Higgs-pair threshold,
$m_{S}<m_{h}$, DM annihilation proceeds through $s$-channel Higgs exchange
only, and the relative weight of the different SM final states is determined
by the SM Higgs branching ratios, independent of the Higgs-scalar coupling
$\lambda_{HS}$. Above the Higgs-pair threshold, $m_{S}\geq m_{h}$, the $hh$
final state opens up. The strength of the annihilation into Higgs pairs, as
compared to $W$, $Z$ or top-quark pairs, depends on the size of the Higgs-
scalar coupling. For our specific analysis we require that the SSDM provides
the correct DM relic density, $\Omega h^{2}=0.1198\pm 0.0015$ [113], which in
turn determines the size of $\lambda_{HS}$ for any given DM mass $m_{S}$.
The corresponding branching fractions for DM annihilation within the SSDM are
shown in figure 11 (left panel) as a function of the DM mass.
Using the ANN we analyse the $\Delta\chi^{2}$ distribution of the model,
marginalising over propagation uncertainties as described in section 4.1. The
result is shown in figure 11 (right panel). Comparing figure 11 with the
analogous result for the single annihilation channel into $b\bar{b}$, figure 9
(right panel), we observe a similar overall shape of the $\Delta\chi^{2}$
distribution.
For light DM the SSDM annihilates dominantly into bottom final states, so one
expects results that are very similar to the case of the single $b\bar{b}$
channel. However, for the smallest DM masses that we consider
($m_{S}\approx 10\,\mathrm{GeV}$) we find that the constraints become
considerably stronger when including even a sub-dominant contribution from
$c\bar{c}$. The reason is that in this mass range, most antiprotons resulting
from annihilation into bottom quarks have energies below $5\,\mathrm{GeV}$ and
therefore do not contribute to our fits. Annihilation into charm
quarks, on the other hand, can give rise to more energetic antiprotons,
leading to stronger constraints. For DM masses above about $50\,\mathrm{GeV}$,
a variety of SM final states contributes in the SSDM, including in particular
$WW$, $hh$ and $ZZ$. However, as shown in Ref. [51], the limits for heavy DM
are similar for these final states and for annihilation into bottom quarks, so
that the overall constraints for the SSDM are comparable to those for
annihilation into bottom quarks only.
Figure 11: Left: Mass dependence of the branching fractions of
$SS\rightarrow\text{SM}\,\text{SM}$ in the SSDM model for
$\lambda_{HS}$ fixed by the relic-density requirement. Right:
Marginalised $\Delta\chi^{2}$ distribution in the $\langle\sigma v\rangle$--$m_{\text{DM}}$
parameter space of the SSDM model.
## 5 Conclusions
The analysis of cosmic ray (CR) antiprotons is a powerful method for the
indirect detection of dark matter (DM). The accurate experimental
measurements, in particular from AMS-02, make it possible to probe DM annihilation
cross sections close to the value predicted by thermal freeze-out for a wide range
of DM masses. However, a precise description of CR propagation through the
Galaxy is required to exploit the potential of the experimental data. The
propagation models depend on a large number of parameters, and the standard
numerical simulation tools, such as Galprop, are computationally expensive.
Therefore, global analyses of generic models of DM can only be carried out
with an immense computational effort, if at all.
In this work we have developed an artificial neural network (ANN) that allows
extremely fast and accurate predictions of the cosmic ray flux for generic DM
models. Specifically, we have employed recurrent neural networks (RNNs) to
predict the CR energy spectrum. RNNs are particularly well suited to learn the
correlations between the fluxes contained in neighbouring energy bins.
Additional improvements in performance are achieved by grouping input
parameters that have similar physical origin and by performing a suitable
rescaling of the output spectra.
We have trained the ANN with a large set of antiproton fluxes simulated with
Galprop, where the propagation parameters have been chosen to be broadly
compatible with the most recent AMS-02 data, and a generic parametrisation of
the dark matter model in terms of the DM mass and the branching fractions for
the annihilation into various Standard Model final states. We emphasise that
the contribution of different DM models to the antiproton flux only has a
marginal impact on the preferred range of the propagation parameters. It is
therefore possible to focus the training of the ANN on the relevant range of
propagation parameters without specifying the details of the DM model in
advance. We have validated the performance and accuracy of the network by
comparing both the predicted antiproton fluxes and the resulting AMS-02
likelihoods to the ones obtained from explicit Galprop simulations for a range
of different propagation and DM model parameters.
We have then used the neural network predictions to test specific DM models
against current AMS-02 data. We have focused on the DM parameter space and
treated the propagation parameters as nuisance parameters by calculating both
the corresponding profile and marginalised likelihoods. While the former
approach requires an explicit restriction of the parameter space to the
regions where the ANN has been sufficiently trained, this requirement can be
automatically fulfilled in the latter case by employing importance sampling.
Comparing the ANN to Galprop we find a speed-up in runtime of about two
(three) orders of magnitude when using profiling (importance sampling).
For DM annihilation into bottom quarks we have obtained results that are
consistent with previous studies based on simulations and a profile likelihood
approach. We find more stringent bounds on the DM parameter space when using
the marginalised likelihood; here a thermal cross section can be excluded for
DM annihilating fully into bottom quarks for DM masses in the range between
approximately 300 GeV and 2 TeV. To illustrate the flexibility of our
approach, we have also used the ANN to derive constraints on scalar singlet
DM, for which DM annihilation results in a variety of Standard Model final
states with branching fractions that depend strongly on the DM mass.
The ANN developed in this work, and the corresponding method for efficient
training, can also be used to study more closely the potential DM
interpretation of the antiproton excess around 20 GeV, for example regarding
the impact of correlations in AMS-02 data. Moreover, it can be easily extended
to alternative propagation models and can be applied to a wide class of DM
scenarios. It will thus be possible to fully exploit the potential of current
and future cosmic-ray data in global analyses of general DM models. In future
work, the ANNs could be transformed into Bayesian neural networks, which would
enable more in-depth studies of the uncertainties of the network predictions.
The fully
trained networks together with a suitable user interface are publicly
available as DarkRayNet at https://github.com/kathrinnp/DarkRayNet.
## Acknowledgments
We thank Thorben Finke and Christoph Weniger for discussions, Alessandro Cuoco
and Jan Heisig for helpful comments on the manuscript and Sven Guenther for
testing DarkRayNet. F.K. is supported by the Deutsche Forschungsgemeinschaft
(DFG) through the Emmy Noether Grant No. KA 4662/1-1. M.Ko. is partially
supported by the Swedish National Space Agency under contract 117/19 and the
European Research Council under grant 742104. Simulations and ANN training
were performed with computing resources granted by RWTH Aachen University
under project jara0184 and rwth0754.
## Appendix A Predicting proton and helium spectra
When simulating the antiproton fluxes as described in section 3.1 we can also
obtain the CR spectra of protons, deuterium, and helium ($^{3}$He and $^{4}$He) without
significant additional computational cost, owing to the setup of Galprop. The task
of modelling these spectra with an ANN closely resembles the task
fulfilled by the sNet. We have thus examined the ability of the sNet
architecture (as described in sec. 3.2) to also accurately predict proton and
helium spectra. The inputs of the sNet remain the same, but we have extended
the length of the final output layer to accommodate a wider energy range,
appropriate for the proton and helium AMS-02 and Voyager data. Using the
same training process (see sec. 3.3) we achieve an accuracy similar to that for the
secondary antiprotons, with each prediction deviating from the
simulations only marginally with respect to the experimental uncertainties. In
figures 12 and 13 we show exemplary results for protons and helium,
respectively, and their individual components, analogous to figure 5.
Figure 12: Exemplary comparison of the simulated and predicted proton flux,
showing the individual proton and deuterium components and their sum, where
the listed parameters and simulated fluxes are randomly sampled from the
test set. Each component of the neural-network flux is predicted by an
individual network. Lower panel as in figure 5. Figure 13: Exemplary comparison
of the simulated and predicted He flux, showing the individual $^{3}$He and
$^{4}$He components and their sum, where the listed parameters and simulated
fluxes are randomly sampled from the test set. Each component of the neural-network
flux is predicted by an individual network. Lower panel as in figure 5.
## References
* [1] Planck Collaboration, N. Aghanim et al., Planck 2018 results. VI. Cosmological parameters, 1807.06209.
* [2] Fermi-LAT Collaboration, M. Ackermann et al., Dark Matter Constraints from Observations of 25 Milky Way Satellite Galaxies with the Fermi Large Area Telescope, Phys. Rev. D89 (2014) 042001, [1310.0828].
* [3] Fermi-LAT Collaboration, M. Ackermann et al., Searching for Dark Matter Annihilation from Milky Way Dwarf Spheroidal Galaxies with Six Years of Fermi Large Area Telescope Data, Phys. Rev. Lett. 115 (2015), no. 23 231301, [1503.02641].
* [4] Fermi-LAT Collaboration, M. Ackermann et al., Updated search for spectral lines from Galactic dark matter interactions with pass 8 data from the Fermi Large Area Telescope, Phys. Rev. D91 (2015), no. 12 122002, [1506.00013].
* [5] AMS Collaboration, M. Aguilar et al., The Alpha Magnetic Spectrometer (AMS) on the international space station: Part II — Results from the first seven years, Phys. Rept. 894 (2021) 1–116.
* [6] M. Korsmeier and A. Cuoco, Implications of Lithium to Oxygen AMS-02 spectra on our understanding of cosmic-ray diffusion, Phys. Rev. D 103 (2021), no. 10 103016, [2103.09824].
* [7] E. Orlando, Imprints of Cosmic Rays in Multifrequency Observations of the Interstellar Emission, Mon. Not. Roy. Astron. Soc. 475 (2018), no. 2 2724–2742, [1712.07127].
* [8] A. W. Strong, I. V. Moskalenko, and O. Reimer, Diffuse continuum gamma-rays from the galaxy, Astrophys. J. 537 (2000) 763–784, [astro-ph/9811296]. [Erratum: Astrophys. J.541,1109(2000)].
* [9] C. Evoli, D. Gaggero, D. Grasso, and L. Maccione, Cosmic-Ray Nuclei, Antiprotons and Gamma-rays in the Galaxy: a New Diffusion Model, JCAP 0810 (2008) 018, [0807.4730]. [Erratum: JCAP1604,no.04,E01(2016)].
* [10] F. Ambrogi, C. Arina, M. Backovic, J. Heisig, F. Maltoni, et al., MadDM v.3.0: a Comprehensive Tool for Dark Matter Studies, Phys. Dark Univ. 24 (2019) 100249, [1804.00044].
|
arxiv-papers
| 2021-07-26T18:00:04 |
2024-09-04T03:07:19.743407
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Felix Kahlhoefer, Michael Korsmeier, Michael Kr\\\"amer, Silvia Manconi,\n Kathrin Nippel",
"submitter": "Kathrin Nippel",
"url": "https://arxiv.org/abs/2107.12395"
}
|
2107.12405
|
# Moonshine at Landau-Ginzburg points
Andrei Căldăraru Yunfan He Shengyuan Huang Mathematics Department,
University of Wisconsin–Madison, 480 Lincoln Drive, Madison, WI 53706–1388,
USA
[email protected], [email protected], [email protected]
###### Abstract
We formulate a conjecture predicting unexpected relationships among
the coefficients of the elliptic expansions of Klein’s modular $j$-function
around $j=0$ and $j=1728$. Our conjecture is inspired by recent developments
in mirror symmetry, in particular by work of Tu [Tu19] computing categorical
enumerative invariants of matrix factorization categories and by work of Li-
Shen-Zhou [LSZ20] computing FJRW invariants of elliptic curves.
1. The conjecture
The Monstrous Moonshine conjecture describes a surprising relationship,
discovered in the late 1970s, between the coefficients of the Fourier
expansion of Klein’s $j$-function around the cusp
$j(\tau)=\frac{1}{q}+744+196884q+21393760q^{2}+864299970q^{3}+20235856256q^{4}+\cdots$
and dimensions of irreducible representations of the Monster group. Fourier
expansions of other modular forms around the cusp are critically important in
number theory and algebraic geometry. In particular such expansions appear
directly in computations of Gromov-Witten invariants of elliptic curves
[Dij95].
In this note we study the elliptic expansion of the $j$-function around the
hexagonal point $j=0$ and the square point $j=1728$, instead of around the
cusp $j=\infty$. At $j=0$ the elliptic curve is the Fermat cubic, cut out in
$\mathbb{P}^{2}$ by $x^{3}+y^{3}+z^{3}=0$, while at $j=1728$ it is given by
$x^{4}+y^{4}+z^{2}=0$ in the weighted projective space
$\mathbb{P}^{2}_{1,1,2}$.
From an enumerative geometry perspective the fact that we work around the
hexagonal and square points instead of around the cusp suggests that we are
working with Fan-Jarvis-Ruan-Witten (FJRW) invariants instead of Gromov-Witten
invariants. See the discussion of flat coordinates in Section 2 for details.
Let $\mathbb{H}$ and $\mathbb{D}$ denote the upper half plane and the unit disk
in the complex plane, respectively. Fix $\tau_{*}=e^{\pi{\mathtt{i}}/3}$ or
$\tau_{*}={\mathtt{i}}$ as the points in $\mathbb{H}$ around which to carry out
the expansion. (Any other point in the $\operatorname{SL}(2,\mathbb{Z})$ orbit
of $\tau_{*}$ works equally well, with only minor changes in the constants
below.)
The uniformizing map $S$ around $\tau_{*}$ is
$S:\mathbb{H}\to\mathbb{D},\qquad S(\tau)=\frac{\tau-\tau_{*}}{\tau-{\bar{\tau}}_{*}},$
with inverse
$S^{-1}:\mathbb{D}\to\mathbb{H},\qquad S^{-1}(w)=\frac{\tau_{*}-{\bar{\tau}}_{*}w}{1-w}.$
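These Möbius maps are straightforward to sanity-check numerically; a minimal sketch in Python (the test point is arbitrary):

```python
import cmath

TAU_HEX = cmath.exp(1j * cmath.pi / 3)  # tau_* at the hexagonal point

def S(tau, tau_star=TAU_HEX):
    # Uniformizing map H -> D sending tau_* to 0
    return (tau - tau_star) / (tau - tau_star.conjugate())

def S_inv(w, tau_star=TAU_HEX):
    # Inverse map D -> H
    return (tau_star - tau_star.conjugate() * w) / (1 - w)

tau = 0.3 + 1.7j                         # arbitrary point of the upper half plane
assert abs(S(tau)) < 1                   # S(tau) lands in the unit disk
assert abs(S(TAU_HEX)) < 1e-15           # tau_* maps to the origin
assert abs(S_inv(S(tau)) - tau) < 1e-12  # round trip
```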
The elliptic expansion of $j$ around $\tau_{*}$ is simply the Taylor expansion
of $j\circ S^{-1}$ around $w=0$. Its coefficients are closely related [Zag08,
Proposition 17] to the values of the higher modular derivatives
$\partial^{n}j(\tau_{*})$,
$j\left(S^{-1}(w)\right)=\sum_{n=0}^{\infty}\frac{(4\pi\operatorname{Im}\tau_{*})^{n}\partial^{n}j(\tau_{*})}{n!}w^{n}.$
The values of the higher modular derivatives of $j$ can be computed
term-by-term by a well-known recursive procedure. The results are rational
multiples of products of powers of $\pi$ and of the Chowla-Selberg period
$\Omega$. (The exact value of $\Omega$ is unimportant; for reference,
$\Omega=1/\sqrt{6\pi}\left(\Gamma(1/3)/\Gamma(2/3)\right)^{3/2}$ for the
hexagonal point and $\Omega=1/\sqrt{8\pi}\left(\Gamma(1/4)/\Gamma(3/4)\right)$
for the square point.)
Let $s(w)=2\pi\Omega^{2}\cdot S(w)$ denote the rescaling of $S$ by the factor
$2\pi\Omega^{2}$. Then around $\tau_{*}=\exp(\pi{\mathtt{i}}/3)$ we have
$j\left(s^{-1}(w)\right)=13824w^{3}-39744w^{6}+\frac{1920024}{35}w^{9}-\frac{1736613}{35}w^{12}+\cdots,$
while around $\tau_{*}={\mathtt{i}}$ we have
$j\left(s^{-1}(w)\right)=1728+20736w^{2}+105984w^{4}+\frac{1594112}{5}w^{6}+\frac{3398656}{5}w^{8}+\cdots.$
In his study of categorical Saito theory of Fermat cubics [Tu19,
Section 4] Tu introduced the following two power series with rational
coefficients:
$\displaystyle g(t)$
$\displaystyle=\sum_{n=0}^{\infty}(-1)^{n}\frac{\left((3n-2)!!!\right)^{3}}{(3n)!}t^{3n},$
$\displaystyle h(t)$
$\displaystyle=\sum_{n=0}^{\infty}(-1)^{n}\frac{\left((3n-1)!!!\right)^{3}}{(3n+1)!}t^{3n+1}.$
He argued that the ratio $h(t)/g(t)$ gives a flat coordinate on the moduli
space of versal deformations $x^{3}+y^{3}+z^{3}+3txyz=0$ of the Fermat cubic.
Similarly, for the elliptic quartic we introduce the two power series below
$\displaystyle g(t)$
$\displaystyle=\sum_{n=0}^{\infty}\frac{\left((4n-3)!!!!\right)^{2}}{(2n)!}t^{2n},$
$\displaystyle h(t)$
$\displaystyle=\sum_{n=0}^{\infty}\frac{\left((4n-1)!!!!\right)^{2}}{(2n+1)!}t^{2n+1}.$
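Here $k!!!$ and $k!!!!$ denote triple and quadruple factorials; we assume the usual convention that they equal $1$ for $k\le 0$. Under that assumption the first few coefficients of the cubic-case $g$ and $h$ can be generated exactly, e.g.:

```python
from fractions import Fraction
from math import factorial

def multifac(k, step):
    # k-step multifactorial; returns 1 for k <= 0 (assumed convention)
    out = 1
    while k > 0:
        out *= k
        k -= step
    return out

def g_coeff(n):
    # Coefficient of t^(3n) in g(t) for the Fermat cubic
    return Fraction((-1) ** n * multifac(3 * n - 2, 3) ** 3, factorial(3 * n))

def h_coeff(n):
    # Coefficient of t^(3n+1) in h(t) for the Fermat cubic
    return Fraction((-1) ** n * multifac(3 * n - 1, 3) ** 3, factorial(3 * n + 1))

# g(t) = 1 - t^3/6 + 4 t^6/45 - ...,  h(t) = t - t^4/3 + 25 t^7/126 - ...
```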
Even though the notation $g,h$ appears overloaded, it should be evident from
context which power series we refer to.
Our main result is the following conjecture.
(a) Around the hexagonal point the elliptic expansion of the $j$-function
satisfies
$\displaystyle j\left(s^{-1}\left(\frac{h(t)}{g(t)}\right)\right)$
$\displaystyle=27t^{3}\left(\frac{8-t^{3}}{1+t^{3}}\right)^{3}$
$\displaystyle=13824t^{3}-46656t^{6}+99144t^{9}-171315t^{12}+263169t^{15}-\cdots.$
(b) Around the square point the elliptic expansion of the $j$-function
satisfies
$\displaystyle j\left(s^{-1}\left(\frac{h(t)}{g(t)}\right)\right)$
$\displaystyle=(192+256t^{2})\left(\frac{3+4t^{2}}{1-4t^{2}}\right)^{2}$
$\displaystyle=1728+20736t^{2}+147456t^{4}+851968t^{6}+4456448t^{8}+\cdots.$
## Notes.
It is remarkable that the coefficients in the above power series are all
integers, despite $j(s^{-1}(w))$ only having rational coefficients. We were
unable to find other modular forms with this property. We verified the
validity of the conjectures up to $t^{24}$ in both cases, by computer
calculations.
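Reproducing the check up to $t^{24}$ requires the elliptic expansion of $j$ itself, but the Taylor coefficients of the closed-form right-hand sides in (a) and (b) can be regenerated independently with exact integer arithmetic. A minimal sketch (for the square point we expand $64(3+4t^{2})^{3}/(1-4t^{2})^{2}$, where $64(3+4t^{2})=192+256t^{2}$, which reproduces the even-power coefficients listed in (b)):

```python
from math import comb

def hexagonal_coeffs(N):
    # Series of 27*u*((8-u)/(1+u))^3 in u = t^3: entry n is the
    # coefficient of t^(3(n+1)).
    num = [512, -192, 24, -1]                                # (8 - u)^3
    inv = [comb(n + 2, 2) * (-1) ** n for n in range(N + 1)]  # (1 + u)^(-3)
    return [27 * sum(num[k] * inv[n - k] for k in range(min(n, 3) + 1))
            for n in range(N + 1)]

def square_coeffs(N):
    # Series of 64*(3+4v)^3/(1-4v)^2 in v = t^2: entry n is the
    # coefficient of t^(2n).
    num = [27, 108, 144, 64]                                 # (3 + 4v)^3
    inv = [(n + 1) * 4 ** n for n in range(N + 1)]           # (1 - 4v)^(-2)
    return [64 * sum(num[k] * inv[n - k] for k in range(min(n, 3) + 1))
            for n in range(N + 1)]

assert hexagonal_coeffs(4) == [13824, -46656, 99144, -171315, 263169]
assert square_coeffs(4) == [1728, 20736, 147456, 851968, 4456448]
```

In particular, the integrality of the coefficients noted above is visible directly: both series are integer combinations of integer sequences.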
## Acknowledgments.
We would like to thank Junwu Tu, Jie Zhou, Michael Martens, and Ken Ono for
helping out at various stages of the project.
This work was partially supported by the National Science Foundation through
grant number DMS-1811925.
2. Mirror symmetry origin of the conjecture
The original statement of mirror symmetry is formulated as the equality of two
power series associated to a pair $(X,\check{X\,}\\!)$ of mirror symmetric
families of Calabi-Yau varieties. These two power series are
1. (a)
the generating series, in a formal variable $Q$, of the enumerative invariants
of the family $X$ (the A-model potential);
2. (b)
the Taylor expansion of a Hodge-theoretic function (the period) on the moduli
space of complex structures $M^{\mathsf{cx}}$ of the mirror family
$\check{X\,}\\!$, with respect to a flat coordinate $q$ on this moduli space
(the B-model potential).
In order to compare the two power series, the variables $q$ and $Q$ are
identified via an invertible map $\psi$ called the mirror map.
In physics the formal variable $Q$ is viewed as a flat coordinate on the (ill-
defined mathematically) complexified Kähler moduli space
$M^{\mathsf{K\ddot{a}h}}$, and the mirror map is interpreted as an isomorphism
$\psi:M^{\mathsf{cx}}\to M^{\mathsf{K\ddot{a}h}}$
between germs of $M^{\mathsf{cx}}$ and $M^{\mathsf{K\ddot{a}h}}$ around
special points. Traditionally these special points are the large volume and
large complex limit points, respectively.
The original mirror symmetry computation of [COGP91] follows this pattern. It
predicts a formula for the generating series of genus zero Gromov-Witten
invariants of the quintic $X$, by equating it to the expansion of a period
(solution of the Picard-Fuchs equation) for the family of mirror quintics
$\check{X\,}\\!$. The equality of the two sides allows one to calculate the
genus zero Gromov-Witten invariants, by expanding the period map of the family
$\check{X\,}\\!$ with respect to a certain flat coordinate on the moduli space
of complex structures of mirror quintics.
As another example consider a two-torus $X$ (elliptic curve with arbitrary
choice of complex structure). The $(1,1)$ Gromov-Witten invariant of degree
$d\geq 1$ with insertion the Poincaré dual class of a point counts in this
case the number of isogenies of degree $d$ to a fixed elliptic curve. As such
it satisfies
$\langle[{\mathsf{pt}}]^{\mathsf{PD}}\rangle_{1,1}^{X,d}=\sum_{k|d}k=\sigma_{1}(d),$
and hence the generating series of these invariants (including the $d=0$ case)
is $-\frac{1}{24}E_{2}(Q)$ where $E_{2}$ denotes the quasi-modular Eisenstein
form of weight two. The main result of [CT17] is that this equals the
expansion in $q=\exp(2\pi{\mathtt{i}}\tau)$, around $q=0$, of the function of
categorical enumerative $(1,1)$ invariants for the corresponding family
$\check{X\,}\\!$ of mirror elliptic curves.
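The divisor sum $\sigma_{1}(d)$ and the standard $q$-expansion $E_{2}(Q)=1-24\sum_{d\geq 1}\sigma_{1}(d)Q^{d}$ make this generating series easy to tabulate; a small illustrative sketch:

```python
from fractions import Fraction

def sigma1(d):
    # Sum of the divisors of d: the degree-d isogeny count above
    return sum(k for k in range(1, d + 1) if d % k == 0)

# Coefficients of -E_2(Q)/24 = -1/24 + sum_{d>=1} sigma_1(d) Q^d
coeffs = [Fraction(-1, 24)] + [Fraction(sigma1(d)) for d in range(1, 7)]
# coeffs[1:] begins 1, 3, 4, 7, 6, 12
```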
Implicit in the above calculation for elliptic curves are the two facts that
1. (a)
$q$ is the flat coordinate, around the cusp, on the moduli space of elliptic
curves;
2. (b)
the mirror map $\psi$ for elliptic curves identifies $q$ with $Q$.
The main intuition behind the conjecture of Section 1 is
a similar set of assumptions, but for the flat coordinates around the
hexagonal or square points instead of around the cusp. Below we will give
precise conjectural descriptions of the flat coordinates $q$ and $Q$ around
the hexagonal point $\check{F\,}\\!\in M^{\mathsf{cx}}$ and its mirror $F\in
M^{\mathsf{K\ddot{a}h}}$. The analysis for the square point is entirely
similar.
To understand these flat coordinates we need good descriptions of
$M^{\mathsf{K\ddot{a}h}}$ and $M^{\mathsf{cx}}$ around $F$ and
$\check{F\,}\\!$. We will review first the classical situation (around the
cusp) described in the work of Polishchuk-Zaslow [PZ98].
Polishchuk-Zaslow take the space $M^{\mathsf{K\ddot{a}h}}$ on a two-torus to
be the quotient of $\mathbb{H}$, with coordinate $\rho$, by $\rho\sim\rho+1$.
For each $\rho\in M^{\mathsf{K\ddot{a}h}}$ they construct a Fukaya category
$\mathcal{F}^{0}(X^{\rho})$ on the two-torus $X^{\rho}$ endowed with this
structure. The quotient above is precisely the same as the neighborhood of the
cusp on the moduli space $M^{\mathsf{cx}}$ of complex structures on a two-
torus. (We ignore the stack structure of $M^{\mathsf{cx}}$, which only adds an
extra $\mathbb{Z}/2\mathbb{Z}$ stabilizer.) For Polishchuk-Zaslow the mirror
map is simply the identity $\tau\leftrightarrow\rho$: the complex elliptic
curve $\check{X\,}\\!^{\tau}$ with modular parameter $\tau$ corresponds to the
two-torus $X^{\rho}$ with complexified Kähler structure $\rho=\tau$.
Even without explicitly constructing $M^{\mathsf{K\ddot{a}h}}$ as
a moduli space of geometric objects we could have understood its structure
around the large volume limit point through mirror symmetry. Indeed, we could
have simply taken $M^{\mathsf{K\ddot{a}h}}$ to be the neighborhood of the
large complex structure limit point in $M^{\mathsf{cx}}$, a space we
understand. With this point of view the mirror map is always the identity.
The same approach makes sense around the hexagonal point $\check{F\,}\\!\in
M^{\mathsf{cx}}$ and its mirror $F\in M^{\mathsf{K\ddot{a}h}}$. The germ of
$M^{\mathsf{cx}}$ around $\check{F\,}\\!$ is the quotient of $\mathbb{H}$ by
$\tau\sim\frac{\tau-1}{\tau},$
exhibiting the germ of $\mathbb{H}$ around $\tau_{*}$ as a triple cover of
$M^{\mathsf{cx}}$ branched over $\check{F\,}\\!$. As above, we will define the
germ of $M^{\mathsf{K\ddot{a}h}}$ around $F$ to be the quotient of
$\mathbb{H}$ (with coordinate $\rho$) by $\rho\sim(\rho-1)/\rho$. We think of
$\rho\in\mathbb{H}$ as giving an (ill-defined) “complexified Kähler class” on
the two torus, and write $X^{\rho}$ for this symplectic geometry object. The
mirror map is, as before, $\tau\leftrightarrow\rho$.
In the A-model we conjecture that $Q=s(\rho)^{3}$ is a
flat coordinate on $M^{\mathsf{K\ddot{a}h}}$. The justification for this comes
from work of Li-Shen-Zhou [LSZ20], where the authors suggest that the natural
way to interpret the generating series of FJRW invariants for two-tori as a
function of $\rho$ is via the map $s$ (with a different rescaling from ours).
It would be natural to guess from their work that $s(\rho)$ is the flat
coordinate. However, since $\rho$ is only defined up to the equivalence
$\rho\sim(\rho-1)/\rho$, the equality
$s\left(\frac{\rho-1}{\rho}\right)^{3}=s(\rho)^{3}$
implies that $Q$ descends to a coordinate on $M^{\mathsf{K\ddot{a}h}}$, which
we conjecture to be the flat coordinate around $F$. (This is not the only
modification of $s(\tau)$ that descends to a coordinate on
$M^{\mathsf{K\ddot{a}h}}$; the others in general will not be flat. The same
issue appears in the B-model.)
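Since the rescaling factor $2\pi\Omega^{2}$ is a real constant that simply cubes along with $S$, the displayed invariance can be checked numerically with the unrescaled map (under the identification, $S$ picks up a primitive cube root of unity, so its cube is invariant); a minimal sketch:

```python
import cmath

tau_star = cmath.exp(1j * cmath.pi / 3)   # the hexagonal point

def S(tau):
    # Uniformizing map around tau_*
    return (tau - tau_star) / (tau - tau_star.conjugate())

def identify(rho):
    # The order-3 identification rho ~ (rho - 1)/rho
    return (rho - 1) / rho

for rho in (0.2 + 0.9j, -1.3 + 2.4j, 1.7 + 0.3j):
    # S(identify(rho)) = zeta * S(rho) with zeta^3 = 1, so the cubes agree
    assert abs(S(identify(rho)) ** 3 - S(rho) ** 3) < 1e-9
```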
In the B-model Tu [Tu19, Section 4] argued that $h(t)/g(t)$ gives a flat
coordinate on the base $\mathbb{A}^{1}_{t}$ of the Hesse pencil of elliptic
curves,
$E_{t}:~{}\quad x^{3}+y^{3}+z^{3}+3txyz=0.$
Tu’s work was motivated by a study of categories of graded matrix
factorizations, but via Orlov’s correspondence [Orl06] these are equivalent to
the derived categories of the above elliptic curves.
Again, $h(t)/g(t)$ does not give a coordinate on $M^{\mathsf{cx}}$ because
locally $\mathbb{A}^{1}_{t}$ is a branched triple cover of $M^{\mathsf{cx}}$
around $\check{F\,}\\!$. Its replacement $q=\left(h(t)/g(t)\right)^{3}$ does
descend to a coordinate on $M^{\mathsf{cx}}$ around $\check{F\,}\\!$, and we
conjecture it is flat.
By our construction of $M^{\mathsf{K\ddot{a}h}}$ the mirror map $\psi$ is the
identity, so the mirror of the complex curve $\check{X\,}\\!^{\tau}$ with
modular parameter $\tau$ is the symplectic object $X^{\rho}$ with
$\rho=\psi(\tau)=\tau$. (Despite being equal we prefer to keep $\rho$ and
$\tau$ distinct since they represent different geometric objects.)
Flat coordinates are unique up to multiplication by a scalar when the moduli
spaces $M^{\mathsf{K\ddot{a}h}}$ and $M^{\mathsf{cx}}$ are one-dimensional.
(The rescaling factor $2\pi\Omega^{2}$ in Section 1 was chosen so
that this constant equals one.) It follows that the flat coordinates of
$X^{\rho}$ and $\check{X\,}\\!^{\tau}$ are equal for $\rho=\tau$.
Consider a Hesse elliptic curve $E_{t}$ for some value of $t$. It can be
written as $\check{X\,}\\!^{\tau}$ for some (non-unique) modular parameter
$\tau\in\mathbb{H}$. The mirror of this curve is $X^{\rho}$ for $\rho=\tau$.
(We think of $\rho\in M^{\mathsf{K\ddot{a}h}}$, so the ambiguity in $\tau$
disappears.) It follows that
$\left(\frac{h(t)}{g(t)}\right)^{3}=q(\check{X\,}\\!^{\tau})=Q(X^{\rho})=s(\rho)^{3},$
or, using the fact that $s$ is invertible,
$s^{-1}\left(\frac{h(t)}{g(t)}\right)\sim\rho$
where $\sim$ is the equivalence relation used above to define
$M^{\mathsf{cx}}$. Applying the $j$-function to both sides and noting that
it is $\sim$-invariant we get
$j\left(s^{-1}\left(\frac{h(t)}{g(t)}\right)\right)=j(\rho)=j(E_{t}).$
For the Hesse pencil the $j$-function can be computed easily [AD09] and the
result is
$j(E_{t})=27t^{3}\left(\frac{8-t^{3}}{1+t^{3}}\right)^{3}.$
This is the statement of the conjecture.
## References
* [AD09] Artebani, M., Dolgachev, I., The Hesse pencil of plane cubic curves, Enseign. Math. (2) 55 (2009), no. 3-4, 235-273.
* [CT17] Căldăraru, A., Tu, J., Computing a categorical Gromov-Witten invariant, Compos. Math. 156 (2020), no. 7, 1275-1309.
* [COGP91] Candelas, P., de la Ossa, X., C., Green, P., S., Parkes L., A pair of Calabi-Yau manifolds as an exact soluble superconformal theory, Nucl. Phys. B 359 (1991), 21-74.
* [Dij95] Dijkgraaf, R., Mirror symmetry and elliptic curves, The moduli space of curves, Progress in Mathematics, vol 129\. Birkhäuser Boston, 1995, 149-163.
* [LSZ20] Li, J., Shen, Y., Zhou, J., Higher genus FJRW invariants of a Fermat cubic, preprint, arXiv:2001.00343
* [Orl06] Orlov, D., O., Triangulated categories of singularities, and equivalences between Landau-Ginzburg models, Mat. Sb. 197 (2006), no. 12, 117-132; translation in Sb. Math. 197 (2006), no. 11-12, 1827-1840
* [PZ98] Polishchuk, A., Zaslow, E., Categorical mirror symmetry: the elliptic curve, Adv. Theor. Math. Phys. 2 (1998), 443-470.
* [Tu19] Tu, J., Categorical Saito theory, II: Landau-Ginzburg orbifolds, preprint, arXiv:1910.00037
* [Zag08] Zagier, D., Elliptic modular forms and their applications, The 1-2-3 of modular forms, 1-103, Universitext, Springer, Berlin, 2008.
ORCID: 0000-0003-4139-5670, 0000-0003-2028-6782, 0000-0002-7190-1581, 0000-0001-6749-0022
# JUNO’s prospects for determining
the neutrino mass ordering
David V. Forero ([email protected]), Universidad de Medellín, Carrera 87 N° 30-65, Medellín, Colombia
Stephen J. Parke ([email protected]), Theoretical Physics Dept., Fermi National Accelerator Laboratory, Batavia, IL, USA
Christoph A. Ternes ([email protected]), INFN, Sezione di Torino, Via P. Giuria 1, I-10125 Torino, Italy
Renata Zukanovich Funchal ([email protected]), Instituto de Física, Universidade de São Paulo, São Paulo, Brazil
###### Abstract
The flagship measurement of the JUNO experiment is the determination of the
neutrino mass ordering. Here we revisit its prospects to make this
determination by 2030, using the current global knowledge of the relevant
neutrino parameters as well as current information on the reactor
configuration and the critical parameters of the JUNO detector. We pay
particular attention to the non-linear detector energy response. Using the
measurement of $\theta_{13}$ from Daya Bay, but without information from other
experiments, we estimate the probability of JUNO determining the neutrino mass
ordering at $\geq$ 3$\sigma$ to be 31% by 2030. As this probability is
particularly sensitive to the true values of the oscillation parameters,
especially $\Delta m^{2}_{21}$, JUNO’s improved measurements of
$\sin^{2}\theta_{12}$, $\Delta m^{2}_{21}$ and $|\Delta m^{2}_{ee}|$, obtained
after a couple of years of operation, will allow an updated estimate of the
probability that JUNO alone can determine the neutrino mass ordering by the
end of the decade. Combining JUNO’s measurement of $|\Delta m^{2}_{ee}|$ with
other experiments in a global fit will most likely lead to an earlier
determination of the mass ordering.
Keywords: Neutrino Physics, JUNO.
Preprint: FERMILAB-PUB-21-201-T
###### Contents
1. I Introduction
2. II The $\bar{\nu}_{e}$ survival probability
3. III Simulation of a medium baseline reactor experiment
4. IV Mean (or Average) Determination of the neutrino mass ordering
1. IV.1 Effect of the Reactor Distribution and Backgrounds
2. IV.2 Effect of bin to bin Flux Uncertainties
3. IV.3 Effect of varying the number of Energy Bins
4. IV.4 Effect of varying the Energy Resolution
5. V Effect of varying the true values of the Neutrino Oscillation Parameters
6. VI Non-linear detector energy response
7. VII Fluctuations about the Mean for the neutrino mass ordering determination
8. VIII Combining JUNO with the Global Fit
9. IX Conclusions
10. A Artificial Constraints
11. B $\nu_{e}$ Disappearance Probability in Vacuum
12. C Verification of our code
13. D On the contribution to the determination of $|\Delta m^{2}_{ee}|$ from the $|\Delta m^{2}_{\mu\mu}|$ sensitive experiments
## I Introduction
After the first observation of the so-called solar neutrino puzzle by the
Homestake experiment in the late 1960s, it took about 30 years to establish
that neutrino flavor oscillations are prevalent in nature, impacting
cosmology, astrophysics as well as nuclear and particle physics. In the last
20 years we have consolidated our understanding of neutrino oscillations both
at the experimental as well as the theoretical level. We know now, thanks to a
great number of experimental efforts involving solar, atmospheric, accelerator
and reactor neutrino oscillation experiments, that neutrino oscillations are
genuine three flavor phenomena driven by two independent mass squared
differences ($\Delta m^{2}_{21}$ and $\Delta m^{2}_{32}$) and three mixing
angles ($\theta_{12}$, $\theta_{13}$ and $\theta_{23}$) and possibly a charge-
parity violating phase ($\delta_{\rm CP}$). See Nunokawa:2007qh for a review
of how these parameters are defined.
Today a single class of experiments dominates the precision of the measurement
of each of these aforementioned parameters deSalas:2020pgw ; Capozzi:2021fjo ;
Esteban:2020cvm . In the solar sector ($12$), $\Delta m^{2}_{21}$ is
determined to less than 3% mainly by the KamLAND Gando:2013nba long-baseline
$\bar{\nu}_{e}$ disappearance reactor experiment, while $\sin^{2}\theta_{12}$
is determined by the combination of solar neutrino experiments
Cleveland:1998nv ; Kaether:2010ag ; Abdurashitov:2009tn ; Bellini:2011rx ;
Bellini:2013lnn ; Hosaka:2005um ; Cravens:2008aa ; Abe:2010hy ; Nakano:PhD ;
yasuhiro_nakajima_2020_4134680 ; Aharmim:2011vm ; Ahmad:2002jz to $\sim 4\%$.
In the atmospheric sector ($23$), $\Delta m^{2}_{32}$ (or $\Delta m^{2}_{31}$)
and $\sin^{2}\theta_{23}$ are dominantly determined by the $\nu_{\mu}$ and
$\bar{\nu}_{\mu}$ disappearance accelerator experiments MINOS Adamson:2013ue ,
NOvA Acero:2019ksn ; alex_himmel_2020_3959581 and T2K Abe:2021gky , with
corresponding precision of better than 1.5% and 8%, respectively. The mixing
$\sin^{2}\theta_{13}$, which connects the solar and atmospheric sectors, is
determined by the short-baseline $\bar{\nu}_{e}$ disappearance reactor
experiments Daya Bay Adey:2018zwh , RENO Bak:2018ydk ;
jonghee_yoo_2020_4123573 and Double Chooz DoubleChooz:2019qbj to a precision
of $\sim 3\%$. Regarding the CP phase $\delta_{\rm CP}$, there is a small
tension between the determinations of the current experiments T2K and NOvA
deSalas:2020pgw ; Esteban:2020cvm ; Kelly:2020fkv . The determination of
$\delta_{\rm CP}$ remains an open problem that will probably have to be
addressed by the next generation of long-baseline neutrino experiments, such
as DUNE and Hyper-K. There is, nevertheless, an important open question that
affects the precise determination of some of these parameters: what is the
neutrino mass ordering?
If one defines the mass eigenstates $\nu_{1}$, $\nu_{2}$, $\nu_{3}$ in terms
of decreasing amount of electron neutrino flavor content, then the results of
the SNO solar neutrino experiment determined that $m_{1}<m_{2}$. However, the
available information does not allow us to know the complete ordering yet:
both $m_{1}<m_{2}<m_{3}$ (normal ordering, NO) and $m_{3}<m_{1}<m_{2}$
(inverted ordering, IO) are compatible with the current data deSalas:2020pgw ;
Esteban:2020cvm ; Kelly:2020fkv . The measurement of the neutrino mass
ordering is one of the most pressing and delicate challenges of our time.
Besides its direct impact on the precise knowledge of the oscillation
parameters, the neutrino mass ordering affects the sum of neutrino masses
inferred from cosmology, the search for neutrinoless double-$\beta$ decay
and, ultimately, our understanding of the pattern of masses and mixing in the
leptonic sector.
The use of $\bar{\nu}_{e}$ from nuclear reactors with a medium-baseline
detector to determine the mass ordering, exploring genuine three generation
effects as long as $\sin^{2}\theta_{13}\gtrsim$ few %, was first proposed in
Petcov:2001sy . This idea was further investigated in Choubey:2003qx for a
general experiment and more recently in Bilenky:2017rzu , specifically for
JUNO. In all three of these papers, different artificial constraints were
imposed on the $\Delta m^{2}_{3i}$’s when comparing the NO and IO spectra. As
we will see, any and all of these artificial constrains increases the
difference between the NO and IO, see appendix A for a more detailed
discussion. In fact, it was shown in Ref. Minakata:2007tn that what these
experiments can precisely measure is the effective combination Nunokawa:2005nx
$\Delta m^{2}_{ee}\equiv\cos^{2}\theta_{12}\Delta
m^{2}_{31}+\sin^{2}\theta_{12}\Delta m^{2}_{32}\,,$ (1)
and the sign ($+$ for NO, $-$ for IO) of a phase ($\Phi_{\odot}$) that depends
on the solar parameters. This subtlety is of crucial importance in correctly
assessing the sensitivity to the neutrino mass ordering.
The Jiangmen Underground Neutrino Observatory (JUNO) An:2015jdp , a 20 kton
liquid scintillator detector located in the Guangdong Province at about 53 km
from the Yangjiang and Taishan nuclear power plants in China, will be the
first experiment to implement this idea. This medium-baseline facility offers
the unprecedented opportunity to access in a single experiment four of the
oscillation parameters: $\Delta m^{2}_{21}$, $\sin^{2}\theta_{12}$, $|\Delta
m^{2}_{ee}|$ and $\sin^{2}\theta_{13}$, plus the sign of the phase advance,
$\Phi_{\odot}(L/E)$, which determines the mass ordering. JUNO aims in the
first few years to measure $\Delta m^{2}_{21}$, $\sin^{2}\theta_{12}$ and
$|\Delta m^{2}_{ee}|$ with a precision $\lesssim 1\%$, to be finally able,
after approximately 8 years (assuming 26.6 GW of reactor power; the original
estimate of 6 years assumed 35.8 GW), to determine the neutrino mass ordering
at the 3$\sigma$ confidence level (C.L.).
Many authors have studied the neutrino mass ordering determination at medium-
baseline reactor neutrino experiments such as JUNO. In Ref. Zhan:2008id a
Fourier analysis was proposed, but no systematic effects were considered. The
effect of energy resolution was investigated in Ref. Ge:2012wj . The
importance of also taking into account non-linear effects in the energy
spectrum reconstruction was first pointed out in Ref. Parke:2008cz and
addressed in Ref. Qian:2012xh ; Li:2013zyd , where limited impact on the mass
ordering was observed. Matter effects, geo-neutrino background, energy
resolution, energy-scale and spectral shape uncertainties were investigated in
Capozzi:2013psa . The impact of the energy-scale and flux-shape uncertainties
was further explored in Capozzi:2015bpa . The benefits of a near detector for
JUNO were demonstrated in Forero:2017vrg , and in Cheng:2020ivh the impact of
the sub-structures in the reactor antineutrino spectrum, due to Coulomb
effects in beta decay, was studied in light of a near detector under various
assumptions for the detector energy resolution. This was further explored in
Capozzi:2020cxm . In Blennow:2013oma the test statistic used to address the
mass ordering determination was proven to be normally distributed, and this
result was also applied to quantify the JUNO sensitivity. It was also shown
that, without statistical fluctuations, this test statistic is equivalent to
the widely adopted $\Delta\chi^{2}$
approach used in sensitivity studies. Finally, the combined sensitivity of
JUNO and PINGU was also recently studied by the authors of Bezerra:2019dao ,
while a combined sensitivity study of JUNO and T2K or NOvA was performed in
Cabrera:2020own .
One can appreciate the difficulty in establishing the mass ordering with this
setup by noticing that after 8 years (2400 days) of data taking, the
difference in the number of events for NO and IO is only a few tens of events
per energy bin, which is smaller than the statistical uncertainty in each bin.
It is clear that this formidable endeavor depends on stringent requirements on
the experiment’s systematic uncertainties, but also on the actual values of
the oscillation parameters as well as on statistical fluctuations. This is why
we think it is meaningful to revisit the prospect that JUNO can obtain a
3$\sigma$ preference for the neutrino mass ordering by 2030. This is the task
we undertake in this paper.
Our paper is structured as follows. In Sec. II we describe the $\bar{\nu}_{e}$
survival probability in a way that highlights the physics that is relevant for
medium-baseline reactor neutrino experiments and how it depends on the
oscillation parameters. In Sec. III we explain how we simulate the experiment
and show the statistical challenges associated with extracting the mass
ordering in a medium-baseline reactor experiment. Sec. IV addresses how the
following experimental details affect the determination power of the neutrino
mass ordering of the JUNO experiment: (A) reactor distribution and
backgrounds, (B) bin to bin flux uncertainties, (C) the number of energy bins
used in the analysis, (D) the size of the energy resolution. In Sec. V we show
how varying the true values of the neutrino oscillation parameters improves or
reduces the prospects for JUNO’s determination of the neutrino mass ordering.
Sec. VI addresses the effects of the non-linear detector response on the mass
ordering determination. In Sec. VII we simulate 60 k experiments consistent
with the current best fit values and uncertainties of the oscillation
parameters. From this simulation we estimate the probabilities that JUNO can
determine the mass ordering at $\geq 3\sigma$ with 4, 8 and 16 years of data
taking. In Sec. VIII we show how JUNO’s measurement of $\Delta m^{2}_{ee}$,
when combined with other experiments can determine the mass ordering after a
few years of data taking. Finally in Sec. IX we draw our conclusions. There
are four Appendices: Appendix A addresses the effects of imposing artificial
constraints on the $\Delta m^{2}$’s, Appendix B gives a derivation of the
oscillation probability used in this paper, Appendix C compares our analysis
with the JUNO collaboration’s analysis and Appendix D discusses the current
impact of T2K, NOvA and the atmospheric neutrino data on the determination of
$|\Delta m^{2}_{ee}|$.
## II The $\bar{\nu}_{e}$ survival probability
The neutrino survival probability for reactor experiments in vacuum is given
by
$P_{\overline{\nu}_{e}\to\overline{\nu}_{e}}=1-\sin^{2}2\theta_{13}\left[\cos^{2}\theta_{12}\sin^{2}\Delta_{31}+\sin^{2}\theta_{12}\sin^{2}\Delta_{32}\right]-P_{\odot}\,,$
(2)
where the kinematic phases are $\Delta_{ij}\equiv\Delta m_{ij}^{2}L/(4E)$ and
$P_{\odot}=\sin^{2}2\theta_{12}\cos^{4}\theta_{13}\sin^{2}\Delta_{21}$. This
survival probability was first rewritten, without approximation, in a more
useful way for the medium baseline reactor experiments in Minakata:2007tn , as
$P_{\overline{\nu}_{e}\to\overline{\nu}_{e}}=1-\frac{1}{2}\sin^{2}2\theta_{13}\left[1-\sqrt{1-\sin^{2}2\theta_{12}\sin^{2}\Delta_{21}}\,\cos(2|\Delta_{ee}|\pm\Phi_{\odot})\right]-P_{\odot}\,,$
(3)
where $\Delta m^{2}_{ee}$, defined in Eq. (1), is the effective atmospheric
$\Delta m^{2}$ for $\nu_{e}$ disappearance, see Nunokawa:2005nx ;
Parke:2016joa . The mass ordering is determined by the sign in front of
$\Phi_{\odot}$, ‘+’ (‘–’) for NO (IO). The phase advance or retardation
$\Phi_{\odot}$ is
$\Phi_{\odot}=\arctan\left(\cos 2\theta_{12}\tan\Delta_{21}\right)-\Delta_{21}\cos 2\theta_{12}\,.$ (4)
Note that the survival probability depends only on four of the oscillation
parameters, $\theta_{12}$, $\theta_{13}$, $\Delta m_{21}^{2}$ and $|\Delta
m_{ee}^{2}|$, plus the sign corresponding to the mass ordering. The
determination by SNO Aharmim:2005gt that $\Delta m_{21}^{2}\cos
2\theta_{12}>0$ is crucial for this measurement. See Appendix B for more
details on the survival
probability.
For $\Delta_{21}\ll\pi/2$, the phase advance/retardation can be approximated
by
$\Phi_{\odot}\approx\frac{1}{3}\,\sin^{2}2\theta_{12}\,\cos
2\theta_{12}\,\Delta^{3}_{21}+{\cal O}(\Delta^{5}_{21})\,,$ (5)
and then rises rapidly near $\Delta_{21}\approx 1$, so that
$\Phi_{\odot}(\Delta_{21}=\pi/2)=\pi\sin^{2}\theta_{12}\approx 1\,.$ (6)
This behavior is illustrated in Fig. 1, using the central and 1$\sigma$ bands
for the solar parameters given in Table 1 taken from the recent global fit
deSalas:2020pgw . Here, we show the advance/retardation as a function of $E$
at $L=52.5$ km, and in Appendix B also as a function of $L/E$.
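The exact rewriting in Eq. (3) and the behavior of $\Phi_{\odot}$ can be checked numerically. The following Python sketch is illustrative only and is not part of the paper's GLoBES-based analysis; the parameter values are taken from Table 1:

```python
import math

s12sq, s13sq = 0.318, 0.022             # sin^2(theta_12), sin^2(theta_13), Table 1
c2t12 = 1.0 - 2.0*s12sq                 # cos(2 theta_12)
sin2_2t12 = 4.0*s12sq*(1.0 - s12sq)     # sin^2(2 theta_12)
sin2_2t13 = 4.0*s13sq*(1.0 - s13sq)     # sin^2(2 theta_13)

def phi_sol(d21):
    """Phase advance/retardation Phi_sol of Eq. (4); d21 = Delta_21."""
    return math.atan(c2t12*math.tan(d21)) - d21*c2t12

def phi_sol_approx(d21):
    """Leading term of the small-Delta_21 expansion, Eq. (5)."""
    return sin2_2t12*c2t12*d21**3/3.0

def p_sun(d21):
    """P_sol = sin^2(2 theta_12) cos^4(theta_13) sin^2(Delta_21)."""
    return sin2_2t12*(1.0 - s13sq)**2*math.sin(d21)**2

def p_ee_mass_basis(d21, dee):
    """Survival probability, Eq. (2), for NO:
    Delta_31 = Delta_ee + s12^2 Delta_21, Delta_32 = Delta_ee - c12^2 Delta_21."""
    d31 = dee + s12sq*d21
    d32 = dee - (1.0 - s12sq)*d21
    atm = (1.0 - s12sq)*math.sin(d31)**2 + s12sq*math.sin(d32)**2
    return 1.0 - sin2_2t13*atm - p_sun(d21)

def p_ee_effective(d21, dee):
    """Survival probability, Eq. (3), with the '+' sign (NO)."""
    depth = math.sqrt(1.0 - sin2_2t12*math.sin(d21)**2)
    osc = 1.0 - depth*math.cos(2.0*dee + phi_sol(d21))
    return 1.0 - 0.5*sin2_2t13*osc - p_sun(d21)
```

For any $(\Delta_{21},\Delta_{ee})$ the two expressions agree to machine precision, confirming that Eq. (3) is an exact rewriting of Eq. (2), and $\Phi_{\odot}(\pi/2)=\pi\sin^{2}\theta_{12}\approx 0.999$ as in Eq. (6).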
Figure 1: The kinematic phase advance/retardation, $\Phi_{\odot}$, of the
survival probability as a function of $E$ at $L=52.5$ km. The blue band is
obtained from the exact formula, while the red curve shows the approximation
for values of $L/E<10$ km/MeV. The dashed vertical and horizontal lines mark
the solar oscillation minimum, i.e. $\Delta_{21}=\pi/2$, where
$\Phi_{\odot}=\pi\,\sin^{2}\theta_{12}\approx 0.999$. The gray and blue bands
are obtained by varying the solar parameters in their corresponding 1$\sigma$
intervals as given in Table 1. $\Phi_{\odot}$ as a function of $L/E$ is given in
Appendix B.
Normal Ordering
---
Parameter | Nominal Value | $1\sigma$
---|---|---
$\sin^{2}\theta_{12}$ | 0.318 | $\pm 0.016$
$\Delta m^{2}_{21}$ [$10^{-5}$ eV${}^{2}$] | $7.50$ | $\pm 0.21$
$\sin^{2}\theta_{13}$ | 0.02200 | $\pm 0.00065$
$\Delta m^{2}_{ee}$ [$10^{-3}$ eV${}^{2}$] | $2.53$ | $+0.03$/$-0.02$
Table 1: Nominal values and uncertainties of the neutrino oscillation
parameters used to simulate data in this paper. Throughout the paper we
simulate data assuming NO. These values were taken from the global fit to
neutrino oscillation data found in Ref. deSalas:2020pgw .
Matter effects are important in the determination of the best fit values of
the solar parameters $\Delta m^{2}_{21}$ and $\sin^{2}\theta_{12}$. The size
of these effects, which does not satisfy naive expectations, was first given
in a numerical simulation in Li:2016txk , and later explained in a semi-
analytical way in Khan:2019doq . For $\Delta m^{2}_{21}$ and
$\sin^{2}\theta_{12}$, the sizes of these shifts are -1.1% and 0.20%,
respectively. Since we are interested only in sensitivities, we can ignore
matter effects in the propagation of neutrinos in this paper and will use here
the vacuum expression for the survival probability, Eq. (3). However, in a
full analysis of real data, matter effects must be included. In the next
section we will describe details of our simulation of the JUNO reactor
experiment.
## III Simulation of a medium baseline reactor experiment
For the simulation of JUNO we use the information given in Refs. An:2015jdp ;
Bezerra:2019dao but with the updates of baselines, efficiencies and
backgrounds provided in Ref. Abusleme:2021zrw . In order to simulate the event
numbers and to perform the statistical analysis we use the GLoBES software
Huber:2004ka ; Huber:2007ji . We start with an idealized configuration where
all reactors that provide the 26.6 GW${}_{\text{th}}$ total thermal power are
at 52.5 km baseline from the detector. The antineutrinos are mainly created in
the fission of four isotopes, 235U (56.1%), 238U (7.6%), 239Pu (30.7%) and
241Pu (5.6%) Abusleme:2020bzt . For our simulation we use the Huber-Mueller
flux predictions Mueller:2011nm ; Huber:2011wv for each isotope. The
$\bar{\nu}_{e}$ propagate to the JUNO detector and are observed via inverse
beta decay $\overline{\nu}_{e}+p\rightarrow e^{+}+n$ Vogel:1999zy . We assume
a liquid scintillator detector with a 20 kton fiducial mass and a running time
of 2400 days (8 years at 82% live time; an exposure of 26.6
GW${}_{\text{th}}$ for 2400 days is equivalent to 35.8 GW${}_{\text{th}}$ for
1800 days, i.e. 6 years at 82%, as used in An:2015jdp ). We include a
detector energy resolution of 3.0% unless otherwise stated.
The aforementioned quantities affect the calculation of the event numbers. The
number of events, $N_{i}$, in the $i$-th bin corresponding to the
reconstructed neutrino energy $E_{i}$ is given by
$N_{i}=\mathcal{N_{T}}\int dE\int_{E_{i}^{\text{min}}}^{E_{i}^{\text{max}}}dE^{\prime}\,\phi_{\overline{\nu}_{e}}(E)\,P_{\overline{\nu}_{e}\to\overline{\nu}_{e}}(E,L)\,\sigma(E)\,R(E,E^{\prime})\,.$
(7)
Here, $\mathcal{N_{T}}$ is a normalization constant taking into account the
exposure time, efficiency, fiducial mass of the detector and reactor-detector
distance, $\phi_{\overline{\nu}_{e}}(E)$ is the antineutrino flux,
$P_{\overline{\nu}_{e}\to\overline{\nu}_{e}}(E,L)$ is the survival probability
in Eq. (3), $\sigma(E)$ is the cross section, and $R(E,E^{\prime})$ is the
energy resolution function
$R(E,E^{\prime})=\frac{1}{\sqrt{2\pi}\sigma_{E}(E)}\exp\left(-\frac{(E-E^{\prime})^{2}}{2\sigma_{E}^{2}(E)}\right)\,,$
(8)
which relates the reconstructed and true neutrino energies. The energy
resolution is given by
$\sigma_{E}(E)=\epsilon~{}\sqrt{E_{p}/\rm MeV}~{}\text{MeV}\,,$ (9)
where the prompt energy $E_{p}$ is given by
$E_{p}=E-\Delta M,\quad{\rm with}\quad\Delta M\equiv
m_{n}-m_{p}-m_{e}=0.78~{}\text{MeV}.$
The variable $\epsilon$ is the detector energy resolution. In this paper we
will use $\epsilon=3.0\%$ except when discussing the effects of varying this
parameter in Sec. IV D, where we also will use 2.9% and 3.1%.
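The structure of Eqs. (7)–(9) can be sketched by applying the resolution function as a matrix acting on a binned true-energy spectrum. The spectrum below is a toy Gaussian stand-in only; the actual analysis folds the Huber-Mueller fluxes with the inverse-beta-decay cross section:

```python
import numpy as np

EPS = 0.030        # resolution parameter (3.0% at 1 MeV of prompt energy)
DM = 0.78          # Delta M = m_n - m_p - m_e [MeV]

E = np.linspace(1.9, 8.0, 1500)          # true neutrino energy grid [MeV]
dE = E[1] - E[0]
spec = np.exp(-0.5*((E - 4.5)/0.7)**2)   # toy (unnormalized) true-energy spectrum

# Eq. (9): sigma_E(E) = eps * sqrt(E_p / MeV) MeV, prompt energy E_p = E - DM
sig = EPS*np.sqrt(E - DM)

# Eq. (8): Gaussian response R(E_rec, E_true); column j smears true energy E[j]
R = np.exp(-0.5*((E[:, None] - E[None, :])/sig[None, :])**2)
R /= np.sqrt(2.0*np.pi)*sig[None, :]

smeared = (R @ spec)*dE                  # reconstructed-energy spectrum
```

Each column of `R` integrates to one over reconstructed energy, so the smearing redistributes events between bins without changing the total rate, up to small edge effects.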
Figure 2: In the upper left panel we show the oscillated spectra for NO
(blue) and for IO (red) for 8 years (2,400 live days) of data using 26.6
GW${}_{\text{th}}$ with all core-detector baselines set at 52.5 km. No
systematic effects and no backgrounds are included. There are 200 bins between
1.8 and 8.0 MeV, with a bin size of 31 keV, and 3.0% resolution was used.
While $\Delta m^{2}_{ee}~{}[\text{NO}]$ is the input, $\Delta
m^{2}_{ee}~{}[\text{IO}]$ is chosen to minimize the statistical
$\overline{\chi^{2}}$ between the two spectra, see the right panel
($\overline{\chi^{2}}_{\rm min}[\text{IO}]=14.5$). The
parameters $\sin^{2}\theta_{13}$, $\sin^{2}\theta_{12}$ and $\Delta
m^{2}_{21}$ are from Table 1. In the left lower panel, the difference between
the two oscillated spectra in each bin (green), $N^{\rm NO}_{i}-N^{\rm
IO}_{i}$, is given, as well as plus/minus statistical uncertainty in each
oscillated bin (orange band), $\pm\sqrt{N^{\rm NO}_{i}}\approx\pm\sqrt{N^{\rm
IO}_{i}}$. Note that the difference is always within the statistical uncertainty
for that bin.
In Fig. 2 we have plotted the event spectrum for JUNO using 200 bins for 8
years (2,400 live days) of data taking and 26.6 GW${}_{\text{th}}$. In the top
panel, the blue and red spectra correspond to
$\Delta m_{ee}^{2}~{}~{}[\text{NO}]=2.530\times
10^{-3}~{}\text{eV}^{2}~{}~{}\text{and}~{}~{}\Delta
m_{ee}^{2}~{}~{}[\text{IO}]=-2.548\times 10^{-3}~{}\text{eV}^{2}\,,$
respectively (note that the value for $\Delta m_{ee}^{2}~{}[\text{IO}]$ does
not correspond to any of the artificial constraints on the atmospheric mass
splitting imposed in Refs. Petcov:2001sy ; Choubey:2003qx ; Bilenky:2017rzu ;
see Appendix A for more details). The $\Delta m_{ee}^{2}$ for NO is an input
whereas the value for IO is chosen so as to minimize the
$\overline{\Delta\chi^{2}}=\overline{\chi^{2}}_{\text{min}}[\rm
IO]-\overline{\chi^{2}}_{\text{min}}[\rm NO]$ between the two spectra. By
construction $\overline{\chi^{2}}_{\text{min}}[\rm NO]=0$, so minimizing
$\overline{\Delta\chi^{2}}$ is equivalent to minimizing
$\overline{\chi^{2}}_{\text{min}}[\rm IO]$. In the lower panel, we plot the
difference in event spectra obtained for NO and IO. Note that this difference
is less than 20 events/bin. Also shown is the statistical uncertainty in each
oscillated bin (orange band), which for all bins exceeds the difference
between the NO and IO event spectra444Caveat: if one halves the bin size in
this figure the difference between the NO and IO goes down by a factor of 2
whereas the statistical uncertainty by only $\sqrt{2}$ making the difference
more challenging to observe. If one doubles the bin size the statistical
uncertainty increases by the $\sqrt{2}$ whereas the difference would increase
by 2, this improves the situation except for the fact that at low energy there
is some washing out of the difference.. This figure demonstrates the
statistical challenges for JUNO to determine the mass ordering and will be
addressed in more detail in Sec. VII. For reference, we also show on the right
panel of Fig. 2 the corresponding $\overline{\chi^{2}}$ distributions.
Throughout this paper we will use dashed (solid) lines for the fit with NO
(IO).
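The bin-size trade-off described in the caveat above can be illustrated with a toy model: a smooth spectrum carrying a fast NO-IO wiggle, with the purely statistical significance estimated as $\sum_i d_i^{2}/N_i$. All numbers here are illustrative and unrelated to the actual JUNO spectra:

```python
import numpy as np

E = np.linspace(1.8, 8.0, 20001)                 # fine energy grid [MeV]
rate = 1.0e4*np.exp(-0.5*((E - 4.0)/1.2)**2)     # toy smooth event rate
diff = 2.0e-3*rate*np.sin(2.0*np.pi*E/0.3)       # toy fast NO-IO difference

def chi2_stat(nbins):
    """Purely statistical sum over bins of (difference)^2 / (expected counts)."""
    edges = np.linspace(1.8, 8.0, nbins + 1)
    idx = np.clip(np.digitize(E, edges) - 1, 0, nbins - 1)
    N = np.bincount(idx, weights=rate, minlength=nbins)
    d = np.bincount(idx, weights=diff, minlength=nbins)
    return float(np.sum(d**2/N))
```

Coarse bins average the wiggle away (the low-energy washout mentioned in the caveat), while once the bin width is well below the oscillation period the significance saturates, mirroring the marginal gain above 200 bins reported later in Sec. IV.3.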
Note that including systematic uncertainties as well as the real distribution
of core-reactor distances and backgrounds will further decrease the difference
between the two spectra. But first let us address the simulation details and
systematic uncertainties.
To perform the statistical analysis we create a spectrum of fake data
$N_{i}^{\text{dat}}$ for some set of oscillation parameters. Next we try to
reconstruct this spectrum varying the relevant oscillation parameters
$\vec{p}$. For each set $\vec{p}$ we calculate a $\chi^{2}$ function
$\chi^{2}(\vec{p})=\min_{\vec{\alpha}}\sum_{i}\frac{(N_{i}^{\text{dat}}-N_{i}(\vec{p},\vec{\alpha}))^{2}}{N_{i}^{\text{dat}}}+\sum_{j}\left(\frac{\alpha_{j}}{\sigma_{j}}\right)^{2}+\chi^{2}_{\text{NL}},$
(10)
where $N_{i}(\vec{p},\vec{\alpha})$ is the predicted number of events
(including the background events extracted from Ref. An:2015jdp ) for
parameters $\vec{p}$, and $\vec{\alpha}=(\alpha_{1},\alpha_{2},\ldots)$ are
the systematic uncertainties with their corresponding standard deviations
$\sigma_{j}$. $\chi^{2}_{\text{NL}}$ is the penalty for the non-linear
detector response and will be discussed in more detail in Sec. VI.
As in Ref. An:2015jdp , we included systematic uncertainties concerning the
flux, the detector efficiency (which are normalizations correlated among all
bins, i.e. $N_{i}\rightarrow\alpha N_{i}$) and a bin-to-bin uncorrelated shape
uncertainty. The shape uncertainty is simply introduced as an independent
normalization for each bin in reconstructed energy, i.e.
$N_{i}\rightarrow\alpha_{i}N_{i}$.
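For a single, fully correlated normalization pull, the minimization over $\alpha$ in Eq. (10) can be done in closed form, since $\chi^{2}$ is quadratic in $\alpha$. The sketch below uses a toy spectrum and a hypothetical 5% normalization uncertainty (not the paper's actual systematics budget) and checks the analytic minimum against the definition:

```python
import numpy as np

rng = np.random.default_rng(1)
Ebin = np.linspace(2.0, 8.0, 50)
model = 1000.0*np.exp(-0.5*((Ebin - 4.5)/1.0)**2) + 50.0  # predicted events/bin
data = rng.poisson(1.02*model).astype(float)  # fake data with a 2% flux offset
sigma = 0.05                                  # normalization pull uncertainty

def chi2(alpha):
    """Eq. (10) restricted to one correlated normalization, N_i -> (1+alpha) N_i."""
    return float(np.sum((data - (1.0 + alpha)*model)**2/data) + (alpha/sigma)**2)

# Setting d(chi2)/d(alpha) = 0 gives the profiled pull in closed form:
alpha_hat = np.sum(model*(data - model)/data)/(np.sum(model**2/data) + 1.0/sigma**2)
```

With many pulls and shape uncertainties, GLoBES performs this minimization numerically; the closed form is useful for verifying that the fitted normalization absorbs the injected offset up to statistical fluctuations and the pull penalty.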
In the next section we will discuss in detail how some experimental issues can
affect JUNO’s ability to determine the neutrino mass ordering (for a
verification of our simulation, see Appendix C). We will concentrate on the
impact of the real reactor core distribution, the inclusion of background
events, the bin to bin flux uncertainty, the number of equal-size energy bins
of data and the detector energy resolution. We leave the discussion of the
dependence on the true value of the neutrino oscillation parameters, on the
non-linearity of the detector energy response and on statistical fluctuations
for later sections.
## IV Mean (or Average) Determination of the neutrino mass ordering
In the following subsections we will discuss in which way the following
quantities affect the determination power of the neutrino mass ordering of the
JUNO experiment:
A. Effect of the reactor distribution and backgrounds,
B. Effect of bin to bin flux uncertainties,
C. Effect of varying the number of energy bins,
D. Effect of varying the energy resolution.
Unless otherwise stated, we generate fake data fixing the neutrino oscillation
parameters as in Tab. 1 and assume the nominal values for the energy
resolution, number of data bins and total exposure for JUNO given in Tab. 2.
Quantity | Nominal Value | Lowest Value | Largest Value
---|---|---|---
$\epsilon$ (resolution @ 1MeV) | 3.0% | 2.9% | 3.1%
b2b | 1% | 0% | 3%
$\sigma_{\text{bias}}$ | 0.7% | 0% | no penalty
number of bins | 200 | 100 | 300
exposure (years) @ 26.6 GW${}_{\text{th}}$ | 8 | 2 | 16
Table 2: Nominal values, as well as lowest and largest values, assumed in this
paper for the JUNO energy resolution, systematic uncertainties (b2b=bin to bin
and the energy scale bias $\sigma_{\text{bias}}$), number of energy data bins
and exposure. One year is 300 days of live time.
### IV.1 Effect of the Reactor Distribution and Backgrounds
The real position of the reactor cores and background events are expected to
impact JUNO’s sensitivity. Fig. 3 shows the reduction in
$\overline{\Delta\chi^{2}}$ as one goes from the ideal reactor core-detector
disposition (all cores at 52.5 km) with no backgrounds included to the real
reactor core-detector baseline distribution given in Table 3 with all
backgrounds taken into account. The blue lines, labeled “ideal, wo BG”, are
the same as on the right panel of Fig. 2.
There are two types of background events at JUNO: one from remote reactors
(Daya Bay and Huizhou) and the other includes accidental events, cosmogenic
decays and geo-neutrinos. The first we compute, the latter we take from
An:2015jdp .
Figure 3: The effects of the real reactor core-detector baseline distribution
as well as of the two types of backgrounds: from the distant reactors Daya Bay
(DB) and Huizhou (HZ) as well as from other sources (accidental, cosmogenic,
etc.). Going from the ideal distribution (all cores at 52.5 km) with no
backgrounds (blue) to the real distribution (Table 3) with all backgrounds
(dark yellow) the $\overline{\chi^{2}}_{\text{min}}[\text{IO}]$ goes from 14.5
to 9.1, i.e. a reduction of more than 5 units. Here “wo” is an abbreviation for
“without”.
Notice the $\overline{\chi^{2}}_{\text{min}}[\text{IO}]$ goes from 14.5
(ideal, wo BG) down to 9.1 (real, all BG), a decrease of more than 5 units.
The real core positioning alone causes a reduction in sensitivity of 2.8 and
the background events an extra 2.6 (1.8 from DB and HZ). We use the real
baseline distribution and include all backgrounds in the rest of this paper.
Reactor | YJ-C1 | YJ-C2 | YJ-C3 | YJ-C4 | YJ-C5 | YJ-C6 | TS-C1 | TS-C2 | DB | HZ
---|---|---|---|---|---|---|---|---|---|---
Power (GW${}_{\text{th}}$) | 2.9 | 2.9 | 2.9 | 2.9 | 2.9 | 2.9 | 4.6 | 4.6 | 17.4 | 17.4
Baseline (km) | 52.74 | 52.82 | 52.41 | 52.49 | 52.11 | 52.19 | 52.77 | 52.64 | 215 | 265
Table 3: The thermal power and core-detector baselines for the Yangjiang (YJ)
and Taishan (TS) reactors, see Abusleme:2021zrw . The total power is 26.6
GW${}_{\text{th}}$. The remote reactors Daya Bay (DB) and Huizhou (HZ) produce
background events for the neutrino mass ordering.
### IV.2 Effect of bin to bin Flux Uncertainties
There is uncertainty related to the exact shape of the reactor $\bar{\nu}_{e}$
flux, inherent to the flux calculation. This uncorrelated bin to bin (b2b)
shape uncertainty is included in our analysis by varying each predicted event
bin with a certain penalty. The primary purpose of the TAO near detector is to
reduce this bin to bin shape uncertainty, see Abusleme:2020bzt .
The effect of this systematic bias is shown in Fig. 4. The lines labeled “stat
only” are the same as those labeled “real, all BG” in Fig. 3. We find
$\overline{\chi^{2}}_{\text{min}}[\text{IO}]=8.5,7.1$ and $5.6$, respectively,
for 1%, 2% and 3%. When the b2b systematic uncertainty is not included, we
recall, $\overline{\chi^{2}}_{\text{min}}[\text{IO}]=9.1$. So if the shape
uncertainty is close to $1\%$ (the nominal value), the sensitivity to the
neutrino mass ordering is barely affected. However, for 2% and 3% we see a
clear loss in sensitivity. This is because increasing the uncorrelated
uncertainty for each bin makes it easier to shift from a NO spectrum into an
IO one and vice versa. We use a 1% b2b uncertainty in the rest of the paper.
Figure 4: The effect of the bin to bin (b2b) systematic uncertainty on the
$\overline{\chi^{2}}$. The real distribution of reactors is used and all
backgrounds are included. A 1% b2b uncertainty is expected to be achieved with
the TAO near detector Abusleme:2020bzt .
### IV.3 Effect of varying the number of Energy Bins
Here we examine the impact on $\overline{\chi^{2}}$ of changing the size of
the neutrino energy bins in the range [1.8, 8.0] MeV. In Fig. 5, we show the
result obtained from varying the number of energy bins. We obtain
$\overline{\chi^{2}}_{\text{min}}[\text{IO}]=6.0,8.5$ and 8.9, respectively,
for 100, 200 and 300 bins. So increasing the number of bins above 200 causes a
marginal improvement, whereas lowering the number of bins below 200 reduces
the significance of the neutrino mass ordering determination. The background
per bin from Ref. An:2015jdp is re-scaled as we vary the number of bins. The
red lines in Fig. 5 (200 bins) are the same as the blue lines in Fig. 4 (1%
b2b). We always use 200 bins elsewhere in this paper.
Figure 5: The effect of varying the number of neutrino energy bins in the
range [1.8, 8.0] MeV on the $\overline{\chi^{2}}$. Below 200 bins the
$\overline{\chi^{2}}_{\text{min}}[\text{IO}]$ decreases substantially whereas
above 200 the increase is marginal.
### IV.4 Effect of varying the Energy Resolution
Next we consider variations of the detector energy resolution, allowing it to
be slightly better or slightly worse than the nominal 3.0%. Small variations
of the resolution have large impacts on the
determination of the neutrino mass ordering, as shown in Fig. 6. The red line
corresponds to the nominal energy resolution of 3.0$\%$. The blue and green
lines are obtained for 2.9$\%$ and $3.1\%$, respectively, with corresponding
$\overline{\chi^{2}}_{\text{min}}[\text{IO}]=9.7$ and 7.5. Clearly
$\overline{\chi^{2}}_{\text{min}}[\text{IO}]$ is quite sensitive to the exact value of
the resolution that will be achieved by JUNO. Therefore even a small
improvement on the energy resolution would have a sizable impact on the
determination potential of the neutrino mass ordering. However, it appears
challenging for JUNO to reach an energy resolution even slightly better than
3.0%, see Abusleme:2020lur . Note the red lines in Fig. 6 (3.0% res.) are also
the same as the blue lines in Fig. 4 (1% b2b). We always use 3.0% resolution
elsewhere in this paper.
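The sensitivity to the resolution comes from Gaussian smearing washing out the fast atmospheric oscillation pattern. A sketch of such a smearing step, assuming the usual liquid-scintillator parametrization $\sigma(E)=a\sqrt{E/\text{MeV}}$ (the $1/\sqrt{E}$ form is an assumption here; the text only quotes the 3% figure):

```python
import numpy as np
from math import erf

def smear(spectrum, edges, a_res=0.03):
    """Fold a binned visible-energy spectrum with a Gaussian detector
    response of width sigma(E) = a_res * sqrt(E / MeV)."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    sigma = a_res * np.sqrt(centers)
    ncdf = np.vectorize(lambda x: 0.5 * (1.0 + erf(x / np.sqrt(2.0))))
    # Response matrix R[i, j]: probability that an event with true energy
    # in bin j is observed in bin i.
    R = (ncdf((edges[1:, None] - centers[None, :]) / sigma[None, :])
         - ncdf((edges[:-1, None] - centers[None, :]) / sigma[None, :]))
    return R @ spectrum

edges = np.linspace(1.8, 8.0, 201)   # 200 bins, as used in the text
centers = 0.5 * (edges[:-1] + edges[1:])
spec = np.exp(-0.5 * ((centers - 4.0) / 0.05) ** 2)  # a narrow toy feature
smeared = smear(spec, edges)
print(smeared.max() < spec.max())    # the feature is broadened and lowered
```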
Figure 6: Here we show the effect of varying the detector energy resolution
away from the nominal 3.0%. A 0.1% reduction (increase) in this resolution increases
(decreases) the $\overline{\chi^{2}}_{\text{min}}[\text{IO}]$ by approximately
1 unit.
## V Effect of varying the true values of the Neutrino Oscillation
Parameters
In this section we explore how varying the true values of the neutrino
oscillation parameters improves or reduces the prospects for JUNO’s
determination of the neutrino mass ordering. We first consider the variation
of single parameters with the others held fixed and then consider the
correlations varying both $\Delta m^{2}_{21}$ and $\sin^{2}\theta_{12}$ with
$\Delta m^{2}_{ee}$ and $\sin^{2}\theta_{13}$ held fixed and vice versa.
We start by creating fake data sets using the upper and lower 1$\sigma$ bounds
obtained in Ref. deSalas:2020pgw (see Tab. 1), always varying one parameter at
a time. The result of these analyses is shown in Fig. 7, where in each panel
we vary one of the parameters as indicated. Here again solid (dashed) lines
are used for IO (NO). As can be seen, changes in any of the oscillation
parameters can have large effects on the determination power of the neutrino
mass ordering. Especially remarkable is the effect of a smaller $\Delta
m_{21}^{2}$, which shifts $\overline{\chi^{2}}_{\text{min}}[\text{IO}]$ from
8.5 to 7.1. Note that the best fit value obtained from the analysis of solar
neutrino data from Super-K yasuhiro_nakajima_2020_4134680 is even smaller
than the one considered here and therefore the determination would then be
even more difficult. On the other hand, a larger value of the solar mass
splitting improves significantly the determination of the mass ordering. In
this case we obtain $\overline{\chi^{2}}_{\text{min}}[\text{IO}]=10.2$. The
effect of the other parameters is not as pronounced as in the case of the
solar mass splitting, but still appreciable: $\Delta
m^{2}_{ee}$/$\sin^{2}\theta_{13}$/ $\sin^{2}\theta_{12}$, within 1$\sigma$ of
their current best fit value, can move
$\overline{\chi^{2}}_{\text{min}}[\text{IO}]$ by approximately $\pm$ 0.5.
Figure 7: How the $\overline{\chi^{2}}$ dependency on the true values of the
neutrino oscillation parameters can impact the neutrino mass ordering
determination. The curves for the global best fit value (red) and the curves
for a value 1$\sigma$ above (below) from the global best fit are shown in blue
(green), according to Tab. 1. Only the labeled parameter is varied in each
plot, the others are held at their best fit values. Here we use the nominal
values for resolution, b2b systematics, number of energy bins and exposure
given in Tab. 2 and include all backgrounds.
In Fig. 8 we show the correlated variation of the
$\overline{\chi^{2}}_{\text{min}}[\text{IO}]$ as a function of
($\sin^{2}\theta_{12}$, $\Delta m^{2}_{21}$) holding ($\sin^{2}\theta_{13}$,
$\Delta m^{2}_{ee}$) fixed as well as a function of ($\sin^{2}\theta_{13}$,
$\Delta m^{2}_{ee}$) holding ($\sin^{2}\theta_{12}$, $\Delta m^{2}_{21}$)
fixed. Even varying these parameters within 3$\sigma$ of their current best
fit, there are very significant changes to the
$\overline{\chi^{2}}_{\text{min}}[\text{IO}]$ contour plots. This implies that
JUNO’s prospect for the determination of the neutrino mass ordering could be
improved or weakened by Nature’s choice for the true values of these
oscillation parameters. The values that were used in Li:2013zyd (also in
An:2015jdp ) are shown by the gray stars in these figures.
Figure 8: Contours of $\overline{\chi^{2}}_{\text{min}}[\text{IO}]$ as the
oscillation parameters are varied: left panel varying ($\sin^{2}\theta_{12}$,
$\Delta m^{2}_{21}$) holding ($\sin^{2}\theta_{13}$, $\Delta m^{2}_{ee}$)
fixed at their best fit values, right panel varying ($\sin^{2}\theta_{13}$,
$\Delta m^{2}_{ee}$) holding ($\sin^{2}\theta_{12}$, $\Delta m^{2}_{21}$)
fixed at their best fit values. The red cross is the current best fit point
whereas the gray star is the value of the parameters used in Li:2013zyd (also
in An:2015jdp ). Even for a variation of about 3$\sigma$ around the best fit
values of Tab. 1, we see substantial change in the
$\overline{\chi^{2}}_{\text{min}}[\text{IO}]$. Here we use the nominal values
for resolution, b2b systematics, number of energy bins and exposure given in
Tab. 2 and include all backgrounds.
## VI Non-linear detector energy response
In a liquid scintillator detector, the true prompt energy, $E_{p}$ (positron
energy plus $m_{e}$), is not a linear function of the visible energy,
$E^{\text{vis}}$, in the detector. The main energy-dependent effects are the
intrinsic non-linearity related to the light emitting mechanisms
(scintillation and Cherenkov emission) and instrumental non-linearities. The
non-linear detector response can be modeled by a four parameter function
An:2013zwz ; An:2016ses ; Abusleme:2020lur which relates the true prompt
energy to the visible detector energy according to
$\displaystyle E_{p}$
$\displaystyle=\frac{E^{\text{vis}}}{f_{\text{NL}}(a_{1},a_{2},a_{3},a_{4};E_{p})}\quad{\rm
where}\quad
f_{\text{NL}}(a_{1},a_{2},a_{3},a_{4};E_{p})\equiv\frac{a_{1}+a_{2}\,(E_{p}/\text{MeV})}{1+a_{3}\,\exp\left(-a_{4}\,(E_{p}/\text{MeV})\right)}\,,$
(11)
and the coefficients $(a_{1},a_{2},a_{3},a_{4})$ are determined by prompt
energy calibration techniques. We use the prompt energy scale calibration
curve shown in Fig. 1 of Ref. Abusleme:2020lur , which can be well described
by the $f_{\text{NL}}$ given in Eq. (11) with
$\bar{a}_{1}=1.049,\quad\bar{a}_{2}=2.062\times
10^{-4},\quad\bar{a}_{3}=9.624\times 10^{-2},\quad\bar{a}_{4}=1.184\,.$
The true neutrino energy, $E$, is then constructed as $E=E_{p}+\Delta M$.
To allow for deviations from this calibration, we use in our simulation the
reconstructed prompt energy, $E^{\,\prime}_{p}$, given by
$\displaystyle\frac{E^{\,\prime}_{p}}{E_{p}}=\frac{f_{\text{NL}}(\bar{a}_{1},\bar{a}_{2},\bar{a}_{3},\bar{a}_{4};E_{p})}{f_{\text{NL}}(a_{1},a_{2},a_{3},a_{4};E_{p})}.$
(12)
Note that, with this definition, $E^{\text{vis}}$ is held fixed as we change the
$a_{i}$’s from their calibration values, $\bar{a}_{i}$. In the simulation, we
generate a distribution of $E_{p}$’s for the true mass ordering and use Eq.
(12) to generate a distribution of $E^{\,\prime}_{p}$’s for the test mass
ordering (for the neutrino energy, the equivalent expression is
$\frac{E^{\,\prime}}{E}=\frac{f_{\text{NL}}(\bar{a}_{1},\bar{a}_{2},\bar{a}_{3},\bar{a}_{4};E-\Delta M)}{f_{\text{NL}}(a_{1},a_{2},a_{3},a_{4};E-\Delta M)}\left(1-\frac{\Delta M}{E}\right)+\frac{\Delta M}{E}$).
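The non-linearity model of Eqs. (11)–(12) is straightforward to encode directly. A sketch using the calibration coefficients quoted above:

```python
import numpy as np

A_CAL = (1.049, 2.062e-4, 9.624e-2, 1.184)  # calibration coefficients

def f_nl(e_p, a1, a2, a3, a4):
    """Non-linearity function of Eq. (11); e_p in MeV."""
    return (a1 + a2 * e_p) / (1.0 + a3 * np.exp(-a4 * e_p))

def e_vis(e_p, a=A_CAL):
    """Visible energy from true prompt energy: E_vis = f_NL * E_p."""
    return f_nl(e_p, *a) * e_p

def e_p_reco(e_p, a_test, a_cal=A_CAL):
    """Reconstructed prompt energy of Eq. (12): E_vis is held fixed while
    the coefficients move away from their calibration values."""
    return e_p * f_nl(e_p, *a_cal) / f_nl(e_p, *a_test)

# With the calibration coefficients, reconstruction is exact:
print(e_p_reco(3.0, A_CAL))  # -> 3.0
```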
The allowed range of the $a_{i}$’s is constrained by including a penalty term
for the deviation
$\frac{|E^{\,\prime}_{p}-E_{p}|}{E_{p}}\,,$
when fitting the simulated spectra to the test mass ordering. Explicitly, we
allow the $a_{i}$’s to vary from their calibration values and then penalize
the fit by using the simplified $\chi^{2}_{\text{NL}}$ defined, as in Ref.
Capozzi:2015bpa , as
$\chi^{2}_{\text{NL}}=\max_{E_{p}}\left(\frac{f_{\text{NL}}(\bar{a}_{1},\bar{a}_{2},\bar{a}_{3},\bar{a}_{4};E_{p})}{f_{\text{NL}}(a_{1},a_{2},a_{3},a_{4};E_{p})}-1\right)^{2}\biggr{/}(\sigma_{\text{bias}})^{2}\,,$
(13)
where $\{a_{i},\,i=1,\ldots,4\}$ are the best-fit values of these parameters for the
test mass ordering spectra, $\sigma_{\text{bias}}$ is the uncertainty on the
energy scale, and $\displaystyle\max_{E_{p}}$ indicates that we take only the
maximal difference, which occurs at $E\sim 2.75$ MeV (see Fig. 9). We consider
the following sizes for the bias, $\sigma_{\text{bias}}=0.0,0.2,0.4,0.7\%$, as
well as no penalty, i.e. $\chi^{2}_{\text{NL}}=0$. Using Fig. 2 of Ref.
Abusleme:2020lur , we see that JUNO expects an approximately energy
independent systematic uncertainty on the energy scale of about 0.7%, mainly
due to instrumental non-linearity and position dependent effects. Therefore,
$\sigma_{\text{bias}}=0.7\%$ is our nominal value from here on.
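The simplified penalty of Eq. (13) can be sketched in a few lines; the prompt-energy grid below is an arbitrary choice for the maximization:

```python
import numpy as np

def chi2_nl(a_test, a_cal=(1.049, 2.062e-4, 9.624e-2, 1.184),
            sigma_bias=0.007, e_grid=np.linspace(1.0, 9.0, 801)):
    """Simplified energy-scale penalty of Eq. (13): the maximal relative
    shift of the reconstructed prompt energy over the grid, squared and
    divided by sigma_bias^2."""
    f = lambda a, e: (a[0] + a[1] * e) / (1.0 + a[2] * np.exp(-a[3] * e))
    ratio = f(a_cal, e_grid) / f(a_test, e_grid)
    return np.max((ratio - 1.0) ** 2) / sigma_bias ** 2

# No penalty at the calibration point, a growing penalty away from it:
print(chi2_nl((1.049, 2.062e-4, 9.624e-2, 1.184)))  # -> 0.0
```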
On the left panel of Fig. 9 we show the ratio $E_{p}^{\prime}/E_{p}$ as a
function of $E$ for the corresponding $f_{\text{NL}}$ coefficients listed in
Tab. 4 obtained for the best fit to IO of the NO input spectra. On the right
panel we see the effect of the uncertainty on the energy scale on
$\overline{\chi^{2}}$. In particular, as the uncertainty increases
$\overline{\chi^{2}}_{\text{min}}[\rm IO]$ goes from 8.5 (no NL effect) down
to 8.0 (0.2%), 7.5 (0.4%) and 7.2 (0.7%). Note that if we introduce the non-
linearity shift with no penalty $\overline{\chi^{2}}_{\text{min}}[\rm
IO]=6.8$. Even with the nominal 0.7% bias, this is a significant effect,
reducing $\overline{\chi^{2}}_{\text{min}}[\rm IO]$ by more than 1 unit (8.5
to 7.2) and in this manner further lowering the mass ordering discrimination
power.
We also observe that the precision on the determination of $|\Delta
m^{2}_{ee}|$ is notably degraded when the non-linearity in the energy scale is
included. In addition, the best fit value for $|\Delta m^{2}_{ee}|[\rm IO]$
moves slightly toward the best fit value for $|\Delta m^{2}_{ee}|[\rm NO]$.
This means that the fit for IO adjusts the $f_{\text{NL}}$ coefficients in
order to get a value for $|\Delta m^{2}_{ee}|[\rm IO]$ closer to the input
value of $|\Delta m^{2}_{ee}|[\rm NO]$.
Figure 9: On the left panel we show the ratio between the reconstructed prompt energy $E^{\prime}_{p}$ and the true prompt energy $E_{p}$ as a function of the neutrino energy $E$, for $\sigma_{\text{bias}}=0.2\%$ (yellow), $0.4\%$ (green) and $0.7\%$ (red) and no penalty (magenta) for the best fit to IO for the NO spectra. In blue we show the line for perfect reconstruction (no NL) as a reference. On the right panel we show the changes to $\overline{\chi^{2}}$ caused by the addition of the corresponding $\chi^{2}_{\text{NL}}$.
| $a_{1}$ | $a_{2}\times 10^{4}$ | $a_{3}\times 10^{1}$ | $a_{4}$
---|---|---|---|---
Calibration | 1.049 | 2.062 | 0.9624 | 1.184
0.2% | 1.049 | 1.918 | 1.156 | 1.347
0.4% | 1.049 | 1.633 | 1.424 | 1.534
0.7% | 1.050 | 0.474 | 1.614 | 1.627
No Penalty | 1.051 | -1.148 | 1.840 | 1.716
Table 4: Values of the coefficients of the function $f_{\text{NL}}$ for the
calibration and 0.2, 0.4, 0.7% bias, as well as no penalty.
## VII Fluctuations about the Mean for the neutrino mass ordering
determination
It has already been pointed out that statistical fluctuations are important
for JUNO; see for instance Ref. Ge:2012wj , where the statistical
uncertainty on $\overline{\Delta\chi^{2}}$ is estimated by an analytical expression and a
Monte Carlo simulation. The calculation was performed just after the first
measurement of $\sin^{2}\theta_{13}$ by RENO and Daya Bay, under different
detector resolution and systematic assumptions. It is timely to reevaluate
this here.
We have already shown in Fig. 2 that the difference between the spectra for NO
and IO is smaller than the statistical uncertainty in each bin. We consider
here the effects of fluctuating the number of events in each bin. We evaluate
the impact of these fluctuations on the mass ordering determination by
performing a simulation of 60000 JUNO pseudo-experiments for each exposure and
obtain the distributions given in Fig. 10. To generate this figure, we create
a fake data set {$N^{0}_{i},i=1,...,N_{\text{bins}}$} using the neutrino
oscillation parameters in Tab. 1. The fluctuated spectrum
{$N^{f}_{i},i=1,...,N_{\text{bins}}$} is generated by drawing, for each bin, a
normally distributed random value with mean $N^{0}_{i}$ and standard deviation
$\sqrt{N^{0}_{i}}$. We analyze
this fluctuated spectrum for NO and IO and add the corresponding
$\Delta\chi^{2}\equiv\chi^{2}_{\text{min}}[\rm IO]-\chi^{2}_{\text{min}}[\rm
NO]$ value to a histogram. Note that here, because of the statistical
fluctuations, $\chi^{2}_{\text{min}}$[NO] is not necessarily zero;
$\Delta\chi^{2}<0$ means
$\chi^{2}_{\text{min}}$[NO]$>\chi^{2}_{\text{min}}$[IO], i.e. the wrong mass
ordering is selected in this case.
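The pseudo-experiment loop can be sketched with a toy model (the two spectra below are illustrative stand-ins, not the real JUNO NO/IO predictions, and no oscillation-parameter fit is performed):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-ins for the NO and IO predicted spectra: two slightly phase-
# shifted fine wiggles on a smooth background, mimicking the small
# atmospheric-oscillation difference of Fig. 2.
x = np.linspace(0.0, 2.0 * np.pi, 200)
base = 400.0 * (1.0 + 0.3 * np.sin(x / 2.0))
pred_no = base * (1.0 + 0.02 * np.sin(24.0 * x))
pred_io = base * (1.0 + 0.02 * np.sin(24.0 * x + 0.4))

def chi2(data, pred):
    return np.sum((data - pred) ** 2 / pred)

# 60000 pseudo-experiments: fluctuate the NO spectrum bin by bin with
# Gaussian noise of width sqrt(N), then test both hypotheses.
dchi2 = np.empty(60000)
for t in range(dchi2.size):
    data = pred_no + rng.normal(0.0, np.sqrt(pred_no))
    dchi2[t] = chi2(data, pred_io) - chi2(data, pred_no)

print(dchi2.mean(), dchi2.std(), (dchi2 < 0).mean())
```

Even in this toy, the $\Delta\chi^{2}$ histogram is approximately Gaussian with a non-negligible tail below zero, i.e. trials that prefer the wrong hypothesis.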
We use the nominal values for the systematic uncertainties and energy
resolution given in Tab. 2 for three exposures: 4, 8 and 16 years. The
corresponding $\Delta\chi^{2}$ distributions are shown in Fig. 10. These
distributions are Gaussian (as was proven analytically in Ref. Blennow:2013oma
) with corresponding central values $\Delta\chi^{2}=3.4,6.7$ and 12.4 and
standard deviations 3.4, 4.7 and 6.1, respectively. Our pseudo-experiments
reveal that after 8 years JUNO can determine the neutrino mass ordering at the
level of 3$\sigma$ or better in only 31% of the trials. We also find that
there is even a non negligible probability ($\sim$8%) to obtain the wrong mass
ordering, i.e., $\Delta\chi^{2}<0$. For a shorter (longer) exposure of 4 (16)
years, $5\%$ $(71\%)$ of the pseudo-experiments rule out IO at 3$\sigma$ or
more. In these cases, the IO is preferred in about 16% (2%) of the trials.
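Given the Gaussianity of these distributions, the quoted percentages follow directly from the central values and widths above, taking $\Delta\chi^{2}>9$ as the 3$\sigma$ threshold:

```python
from math import erf, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Central values and widths of the Delta-chi^2 distributions quoted in
# the text for 4, 8 and 16 years of exposure.
for years, mu, sd in [(4, 3.4, 3.4), (8, 6.7, 4.7), (16, 12.4, 6.1)]:
    p_3sigma = 1.0 - norm_cdf((9.0 - mu) / sd)   # Delta-chi^2 > 9
    p_wrong = norm_cdf((0.0 - mu) / sd)          # Delta-chi^2 < 0
    print(f"{years:2d} yr: >=3 sigma in {100 * p_3sigma:.0f}% of trials, "
          f"wrong ordering in {100 * p_wrong:.0f}%")
```

This reproduces the 5%/31%/71% success rates and the 16%/8%/2% wrong-ordering rates quoted in the text.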
Figure 10: Distributions of the $\Delta\chi^{2}\equiv\chi^{2}_{\text{min}}[\rm
IO]-\chi^{2}_{\text{min}}[\rm NO]$ values obtained in the analyses of 60 k
trial pseudo-experiments where statistical fluctuations of the trial data have
been taken into account for three different exposures: 4 (green), 8 (red) and
16 (blue) years. We use the neutrino oscillation parameters at the values
given in Tab. 1 and take into account the experimental nominal systematic
uncertainties and energy resolution given in Tab. 2.
## VIII Combining JUNO with the Global Fit
In the previous section we have shown that the significant impact of
statistical fluctuations, on top of the detector systematic effects, can make
it very challenging for JUNO by itself to determine the neutrino mass ordering
at 3$\sigma$ or more even after 16 years. However, as was shown in
Nunokawa:2005nx , muon disappearance experiments measure (up to a small
correction whose leading term depends on
$\cos\delta\sin\theta_{13}\sin 2\theta_{12}\tan\theta_{23}\Delta m^{2}_{21}$
and whose magnitude is less than $10^{-5}~\text{eV}^{2}$; this term is
included in all our numerical calculations)
$\Delta m^{2}_{\mu\mu}\equiv\sin^{2}\theta_{12}\Delta
m^{2}_{31}+\cos^{2}\theta_{12}\Delta m^{2}_{32}\,,$ (14)
whose relationship to $|\Delta m^{2}_{ee}|$ is given by
$|\Delta m^{2}_{ee}|=|\Delta m^{2}_{\mu\mu}|\pm\cos 2\theta_{12}\Delta
m^{2}_{21}\,,$ (15)
where the positive (negative) sign is for NO (IO). Therefore, by using muon
disappearance measurements we have a constraint on the allowed $|\Delta
m^{2}_{ee}|$’s for the two mass orderings,
$|\Delta m^{2}_{ee}|\,[{\rm NO}]-|\Delta m^{2}_{ee}|\,[{\rm IO}]=2\cos
2\theta_{12}\Delta m^{2}_{21}\approx 0.06\times 10^{-3}\,\text{eV}^{2}\,,$
(16)
i.e. $|\Delta m^{2}_{ee}|\,[{\rm IO}]$ is 2.4% smaller than $|\Delta
m^{2}_{ee}|\,[{\rm NO}]$. In contrast, because of the phase advance (NO) or
retardation (IO) given in Eq. (4), the medium baseline reactor experiments
give $|\Delta m^{2}_{ee}|\,[{\rm IO}]$ about 0.7% larger than $|\Delta
m^{2}_{ee}|\,[{\rm NO}]$. Of course, the measurement uncertainty on $|\Delta
m^{2}_{\mu\mu}|$ must be smaller than this 3.1% difference for this
measurement to impact the confidence level at which the false mass ordering is
eliminated. The short baseline reactor experiments, Daya Bay and RENO, measure
the same $|\Delta m^{2}_{ee}|$ for both orderings with uncertainties much
larger than JUNO’s uncertainty.
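The size of the split in Eq. (16) can be checked with representative oscillation parameters ($\sin^{2}\theta_{12}=0.32$ and $\Delta m^{2}_{21}=7.5\times 10^{-5}~\text{eV}^{2}$ are assumptions here, close to but not quoted from Tab. 1):

```python
# Hedged numeric check of Eqs. (15)-(16).
s2_12 = 0.32                        # assumed sin^2(theta_12)
dm2_21 = 7.5e-5                     # assumed Dm2_21 in eV^2
dm2_ee_no = 2.53e-3                 # representative |Dm2_ee|[NO] in eV^2
cos2t12 = 1.0 - 2.0 * s2_12

# |Dm2_ee|[NO] - |Dm2_ee|[IO] at fixed |Dm2_mumu|, Eq. (16):
split = 2.0 * cos2t12 * dm2_21
print(split, split / dm2_ee_no)     # a few times 1e-5 eV^2, i.e. a ~2% shift
```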
This physics is illustrated in Fig. 11 where we show the allowed region in the
plane $\Delta m^{2}_{21}$ versus $|\Delta m^{2}_{ee}|$ by JUNO for NO (blue)
and IO (red) after 2 years of data taking and the corresponding 1$\sigma$ CL
allowed region by the current global fit constraint on $|\Delta
m^{2}_{\mu\mu}|$. We see that the global fit and JUNO NO regions overlap while
the corresponding IO regions do not. This tension between the position of the
best fit values of $|\Delta m^{2}_{ee}|$ for IO with respect to NO gives extra
leverage to the data combination.
Figure 11: The ellipses are the allowed regions for JUNO in the $\Delta
m^{2}_{21}$ versus $|\Delta m^{2}_{ee}|$ plane for NO (blue, 2 and 3$\sigma$
CL) and IO (red, 2 and 3$\sigma$ CL) after 2 years. The best fit for NO (IO)
is depicted by a black star (dot). We assume NO here and the
$\Delta\chi^{2}$’s are determined with respect to NO best fit point. We use
the neutrino oscillation parameters at the values given in Tab. 1 and take
into account the experimental nominal systematic uncertainties and energy
resolution given in Tab. 2. We also show, as red (for IO) and blue (for NO)
bands, the 1$\sigma$ CL allowed regions by the current global fit constraint
on $|\Delta m^{2}_{\mu\mu}|$. Note that the allowed regions for IO do not
overlap.
Figure 12: On the left panel, we show the mean $\chi^{2}$ distributions,
$\overline{\chi^{2}}$, for the current global fit and for our predictions for
JUNO after 2 years. Fits to NO, the assumed true mass ordering, are shown as
dashed lines; fits to IO are shown as solid lines. On the right panel we show
the distributions of the $\Delta\chi^{2}\equiv\chi^{2}_{\text{min}}[\rm
IO]-\chi^{2}_{\text{min}}[\rm NO]$ values obtained combining the current
global fit $\chi^{2}$ distributions with 60 k trial pseudo-experiments where
statistical fluctuations of the trial data have been taken into account for
three different exposures: 2 (yellow), 4 (green) and 8 (red) years. For JUNO
we use the neutrino oscillation parameters at the values given in Tab. 1 and
take into account the experimental nominal systematic uncertainties and energy
resolution given in Tab. 2.
Therefore, combining JUNO’s measurement of $|\Delta m^{2}_{ee}|$ with other
experiments, in particular T2K and NOvA, expressed by the current global fits,
see Refs. deSalas:2020pgw ; Capozzi:2021fjo ; Esteban:2020cvm , turns out to
be very powerful in unraveling the neutrino mass ordering at a high confidence
level, as shown in the left panel of Fig. 12 for 2 years of JUNO data. As we
can see, the combined $\overline{\chi^{2}}_{\rm min}[\rm IO]$ (green solid line)
turns out to be about 16. As a result, with only two years of JUNO data taking,
the mass ordering is determined at better than 3$\sigma$ in 99% of the trials;
see right panel of Fig. 12. Of course, the actual value of
$\overline{\chi^{2}}_{\rm min}[\rm IO]$ will depend on the value of $|\Delta
m^{2}_{ee}|$ measured by JUNO and the updates of the other experiments used in
the global fit. In Appendix D we discuss the separate contributions from T2K,
NOvA and the atmospheric neutrino data (Super-Kamiokande and DeepCore) to the
$\overline{\chi}^{2}$ distribution for the global fit determination of
$|\Delta m^{2}_{ee}|$ and the corresponding impact on the combination with
JUNO, for completeness and comparison with Fig. 5 of Ref. Cabrera:2020own .
So even though JUNO cannot determine the ordering alone, a couple of years
after the start of the experiment its precise measurement of $|\Delta
m^{2}_{ee}|$ will allow us to know the mass ordering at better than 3$\sigma$
when the measurements of $|\Delta m^{2}_{\mu\mu}|$ from other neutrino
oscillation experiments are combined in a global analysis.
## IX Conclusions
The neutrino mass ordering is one of the most pressing open questions in
neutrino physics. It will be most likely measured at different experiments,
using atmospheric neutrinos at ORCA Adrian-Martinez:2016fdl ; Capozzi:2017syc
, PINGU Aartsen:2014oha ; Winter:2013ema ; Capozzi:2015bxa , Hyper-K
Abe:2018uyc or DUNE Ternes:2019sak or accelerator neutrinos at T2HK
Ishida:2013kba or DUNE Abi:2020qib . It also is a flagship measurement for
the up-coming JUNO experiment. This is why we have examined here in detail the
impact of various factors on the determination power of the neutrino mass
ordering by JUNO.
We have assumed NO as the true mass ordering, but our general conclusions do
not depend on this assumption. In this case the power of discrimination can be
encoded in the value of $\overline{\chi^{2}}_{\rm min}[\rm IO]$: the larger it
is, the larger the confidence level at which one can discriminate between the
two mass orderings using JUNO.
We have determined that the real reactor distribution and backgrounds account
for a reduction in sensitivity of more than 5 units (i.e.
$\overline{\chi^{2}}_{\rm min}[\rm IO]$ going from 14.5 to 9.1), and that the
bin to bin flux uncertainty, at its nominal value of 1%, leads to an extra
reduction of 0.6, down to $\overline{\chi^{2}}_{\rm min}[\rm IO]=8.5$, both
assuming 3% energy resolution and 200 energy bins. Note that an improvement on
the energy
resolution from 3% to 2.9%, a challenging feat to achieve, would represent an
increase of $\overline{\chi^{2}}_{\rm min}[\rm IO]$ from 8.5 to 9.7.
The values of neutrino oscillation parameters that will impact JUNO’s
measurement are currently known within a few % uncertainty. We have determined
the effect of these uncertainties on the mass ordering discrimination. We
remark, in particular, the influence of the true value of $\Delta m^{2}_{21}$,
a smaller (larger) value than the current bet fit could shift
$\overline{\chi^{2}}_{\rm min}[\rm IO]$ from 8.5 to 7.1 (10.2). Another
important factor is the non-linear energy response of the detector. Assuming a
bias of 0.7% we have verified that this would decrease
$\overline{\chi^{2}}_{\rm min}[\rm IO]$ further from 8.5 to 7.2.
We have also examined the consequence of statistical fluctuations of the data
by performing 60 k Monte Carlo simulated JUNO pseudo-experiments. Using them
we have determined that after 8 (16) years JUNO can determine the neutrino
mass ordering at 3$\sigma$ or more in only 31% (71%) of the trials. This
means that JUNO by itself will have difficulty determining the mass ordering.
However, JUNO can still be used for a plethora of different interesting
physics analyses An:2015jdp ; Ohlsson:2013nna ; Khan:2013hva ; Li:2014rya ;
Bakhti:2014pva ; Chan:2015mca ; Abrahao:2015rba ; Liao:2017awz ; Li:2018jgd ;
Anamiati:2019maf ; Porto-Silva:2020gma ; deGouvea:2020hfl ; Cheng:2020jje . In
particular, JUNO will be able to measure $\Delta m^{2}_{21}$,
$\sin^{2}\theta_{12}$ and $|\Delta m^{2}_{ee}|$ with unmatched precision. This
will be very useful to improve our understanding of the pattern of neutrino
oscillations and to guide future experiments.
Finally, this inauspicious prediction is mitigated by combining JUNO’s
$|\Delta m^{2}_{ee}|$ measurement into the current global fits, in particular
the measurement of $|\Delta m^{2}_{\mu\mu}|$. As we have shown, this
combination will most likely result in the determination of the mass ordering
at better than 3$\sigma$ with only two years of JUNO data. Our conclusion for
the global-fit combination is consistent with the results of Cabrera:2020own . So
we can predict that in approximately two years after the start of JUNO we will
finally know, via global analyses, the order of the neutrino mass spectrum,
i.e. whether the lightest neutrino mass eigenstate has the most $\nu_{e}$
($\nu_{1}$) or the least $\nu_{e}$ ($\nu_{3}$).
###### Acknowledgements.
We would like to thank Pedro Machado for useful comments on a preliminary
version of this paper. CAT and RZF are very thankful for the hospitality of
the Fermilab Theoretical Physics Department, where this work was initiated.
Fermilab is operated by the Fermi Research Alliance under contract no. DE-
AC02-07CH11359 with the U.S. Department of Energy. CAT is supported by the
research grant “The Dark Universe: A Synergic Multimessenger Approach” number
2017X7X85K under the program “PRIN 2017” funded by the Ministero
dell’Istruzione, Università e della Ricerca (MIUR). RZF is partially supported
by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) and Conselho
Nacional de Ciência e Tecnologia (CNPq). This project has received
funding/support from the European Union’s Horizon 2020 research and innovation
programme under the Marie Sklodowska-Curie grant agreement No 860881-HIDDeN.
## Appendix A Artificial Constraints
Using $\Delta m^{2}_{ee}~{}[\text{NO}]=2.530\times 10^{-3}~{}\text{eV}^{2}$:
1. For the artificial constraint that $|\Delta
m^{2}_{32}~{}[\text{IO}]|=\Delta m^{2}_{32}~{}[\text{NO}]$ Petcov:2001sy ,
$|\Delta m^{2}_{ee}~{}[\text{IO}]|=\Delta
m^{2}_{ee}~{}[\text{NO}]-2\cos^{2}\theta_{12}\Delta m^{2}_{21}=2.428\times
10^{-3}~{}\text{eV}^{2}\,;$
2. For the artificial constraint that $|\Delta
m^{2}_{31}~{}[\text{IO}]|=\Delta m^{2}_{31}~{}[\text{NO}]$ Choubey:2003qx ,
$|\Delta m^{2}_{ee}~{}[\text{IO}]|=\Delta
m^{2}_{ee}~{}[\text{NO}]+2\sin^{2}\theta_{12}\Delta m^{2}_{21}=2.578\times
10^{-3}~{}\text{eV}^{2}\,;$
3. For the artificial constraint that $|\Delta
m^{2}_{32}~{}[\text{IO}]|=\Delta m^{2}_{31}~{}[\text{NO}]$ Bilenky:2017rzu
(see also Ref. Tanabashi:2018oca , Neutrino review, Section 14.7, Eq. 14.48),
$|\Delta m^{2}_{ee}~{}[\text{IO}]|=\Delta m^{2}_{ee}~{}[\text{NO}]-\cos
2\theta_{12}\Delta m^{2}_{21}=2.503\times 10^{-3}~{}\text{eV}^{2}\,.$
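The three values above can be verified numerically; the sketch below uses $\sin^{2}\theta_{12}=0.32$ and $\Delta m^{2}_{21}=7.5\times 10^{-5}~\text{eV}^{2}$, values inferred from consistency with the numbers quoted here rather than taken from Tab. 1:

```python
# Numeric check of the three artificial-constraint values.
dm2_ee_no = 2.530e-3                     # eV^2
dm2_21 = 7.5e-5                          # eV^2 (assumed)
s2, c2 = 0.32, 0.68                      # sin^2, cos^2 of theta_12 (assumed)
cos2t12 = c2 - s2

case1 = dm2_ee_no - 2.0 * c2 * dm2_21    # |Dm2_32[IO]| = Dm2_32[NO]
case2 = dm2_ee_no + 2.0 * s2 * dm2_21    # |Dm2_31[IO]| = Dm2_31[NO]
case3 = dm2_ee_no - cos2t12 * dm2_21     # |Dm2_32[IO]| = Dm2_31[NO]
print(case1, case2, case3)               # 2.428e-3, 2.578e-3, 2.503e-3 eV^2
```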
The actual $\chi^{2}$ minimum, obtained numerically in Fig. 2, is when
$|\Delta m^{2}_{ee}~{}[\text{IO}]|\approx 2.548\times 10^{-3}~{}\text{eV}^{2}$
i.e. midway between the $|\Delta m^{2}_{31}~{}[\text{IO}]|=\Delta
m^{2}_{31}~{}[\text{NO}]$ and the $|\Delta m^{2}_{ee}~{}[\text{IO}]|=\Delta
m^{2}_{ee}~{}[\text{NO}]$ artificial constraints. It is also easy to see from
Fig. 2 that imposing any of these artificial constraints significantly
increases the size of the $\overline{\Delta\chi^{2}}$ between the fits of the
two mass orderings and therefore gives misleading confidence levels for the
determination of the neutrino mass ordering. Note that all of the following
give equivalent $\Delta m^{2}_{ij}$’s:
$\displaystyle\Delta m^{2}_{ee}~{}[\text{NO}]=2.530\times
10^{-3}~{}\text{eV}^{2},\quad$ $\displaystyle|\Delta
m^{2}_{ee}~{}[\text{IO}]|\approx 2.548\times 10^{-3}~{}\text{eV}^{2},$ (17)
$\displaystyle\Delta m^{2}_{32}~{}[\text{NO}]=2.479\times
10^{-3}~{}\text{eV}^{2},\quad$ $\displaystyle|\Delta
m^{2}_{32}~{}[\text{IO}]|\approx 2.581\times 10^{-3}~{}\text{eV}^{2},$ (18)
$\displaystyle\Delta m^{2}_{31}~{}[\text{NO}]=2.554\times
10^{-3}~{}\text{eV}^{2},\quad$ $\displaystyle|\Delta
m^{2}_{31}~{}[\text{IO}]|\approx 2.506\times 10^{-3}~{}\text{eV}^{2}\,.$ (19)
When minimizing the $\chi^{2}$ difference for Fig. 2, the change in ($|\Delta
m^{2}_{ee}|$, $|\Delta m^{2}_{32}|$, $|\Delta m^{2}_{31}|$) going from NO to
IO is (+0.7%, +4.0%, -1.9%) respectively, i.e. the minimal difference is for
$|\Delta m^{2}_{ee}|$.
## Appendix B $\nu_{e}$ Disappearance Probability in Vacuum
Figure 13: The kinematic phase advance/retardation for the survival
probability, $\Phi_{\odot}$, as a function of $L/E$ (left) and $E$ at $L=52.5$
km (right). The blue band is obtained from the exact formula, while the red
curve shows the approximation for values of $L/E<10$ km/MeV. The dashed
vertical and horizontal lines mark the solar oscillation minimum, i.e.
$\Delta_{21}=\pi/2$ where $\Phi_{\odot}=\pi~{}\sin^{2}\theta_{12}\approx
0.999$. The gray bands are obtained by varying the solar parameters in their
corresponding 1$\sigma$ intervals as given in Table 1.
We start from the usual expression for the $\nu_{e}$ disappearance probability
in vacuum,
$\displaystyle P_{\overline{\nu}_{e}\to\overline{\nu}_{e}}=1$ $\displaystyle-$
$\displaystyle\sin^{2}2\theta_{12}\cos^{4}\theta_{13}\sin^{2}\Delta_{21}$ (20)
$\displaystyle-$
$\displaystyle\sin^{2}2\theta_{13}\left[\cos^{2}\theta_{12}\sin^{2}\Delta_{31}+\sin^{2}\theta_{12}\sin^{2}\Delta_{32}\right]\,.$
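Eq. (20) is straightforward to evaluate numerically. A sketch with representative parameter values (placeholders, not the Tab. 1 entries) and the standard conversion $\Delta_{ij}=1.267\,\Delta m^{2}_{ij}[\text{eV}^{2}]\,L[\text{km}]/E[\text{GeV}]$:

```python
import numpy as np

def p_ee_vac(e_mev, l_km=52.5, s2_12=0.32, s2_13=0.022,
             dm2_21=7.5e-5, dm2_31=2.554e-3):
    """Vacuum electron-antineutrino survival probability of Eq. (20).
    All default parameter values are representative placeholders."""
    dm2_32 = dm2_31 - dm2_21
    # With E in MeV the conversion factor 1.267 (for E in GeV) becomes 1.267e3.
    k = 1.267e3 * l_km / e_mev
    d21, d31, d32 = k * dm2_21, k * dm2_31, k * dm2_32
    c2_12, c2_13 = 1.0 - s2_12, 1.0 - s2_13
    return (1.0
            - 4.0 * s2_12 * c2_12 * c2_13 ** 2 * np.sin(d21) ** 2
            - 4.0 * s2_13 * c2_13 * (c2_12 * np.sin(d31) ** 2
                                     + s2_12 * np.sin(d32) ** 2))

e = np.linspace(1.8, 8.0, 400)
p = p_ee_vac(e)
print(p.min(), p.max())  # the deep solar dip sits near E ~ 3 MeV at 52.5 km
```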
Using the methods from Ref. Parke:2016joa , the simplest way to show that
$\displaystyle\cos^{2}\theta_{12}\sin^{2}\Delta_{31}+\sin^{2}\theta_{12}\sin^{2}\Delta_{32}$
$\displaystyle=$
$\displaystyle\frac{1}{2}\biggl{(}1-\sqrt{1-\sin^{2}2\theta_{12}\sin^{2}\Delta_{21}}~{}\cos\Omega\biggr{)}$
(21)
with
$\displaystyle\Omega$ $\displaystyle=$ $\displaystyle
2\Delta_{ee}+{\Phi_{\odot}},$ (22) $\displaystyle{\rm where}\quad\Delta
m^{2}_{ee}$ $\displaystyle\equiv$
$\displaystyle\frac{\partial~{}\Omega}{\partial(L/2E)}\left|{}_{\frac{L}{E}\rightarrow
0}\right.=\cos^{2}\theta_{12}\Delta m^{2}_{31}+\sin^{2}\theta_{12}\Delta
m^{2}_{32}$ (23) $\displaystyle{\rm and}\quad\quad{\Phi_{\odot}}$
$\displaystyle\equiv$ $\displaystyle\Omega-2\Delta_{ee}=\arctan(\cos
2\theta_{12}\tan\Delta_{21})-\Delta_{21}\cos 2\theta_{12},$ (24)
as shown in Fig. 13, is to write
$\displaystyle c^{2}_{12}\sin^{2}\Delta_{31}+s^{2}_{12}\sin^{2}\Delta_{32}$
$\displaystyle=$ $\displaystyle\frac{1}{2}\biggl{(}1-(c^{2}_{12}\cos
2\Delta_{31}+s^{2}_{12}\cos 2\Delta_{32})\biggr{)},$ (25)
using $c^{2}_{12}\equiv\cos^{2}\theta_{12}$ and
$s^{2}_{12}\equiv\sin^{2}\theta_{12}$. Then, if we rewrite $2\Delta_{31}$ and
$2\Delta_{32}$ in terms of $(\Delta_{31}+\Delta_{32})$ and $\Delta_{21}$, we
have
$\displaystyle c^{2}_{12}\cos 2\Delta_{31}+s^{2}_{12}\cos 2\Delta_{32}$
$\displaystyle=$ $\displaystyle
c^{2}_{12}\cos(\Delta_{31}+\Delta_{32}+\Delta_{21})+s^{2}_{12}\cos(\Delta_{31}+\Delta_{32}-\Delta_{21})$
$\displaystyle=$
$\displaystyle\cos(\Delta_{31}+\Delta_{32})\cos\Delta_{21}-\sin(\Delta_{31}+\Delta_{32})\cos
2\theta_{12}\sin\Delta_{21}.$
Since
$\displaystyle\cos^{2}\Delta_{21}+\cos^{2}2\theta_{12}\sin^{2}\Delta_{21}=1-\sin^{2}2\theta_{12}\sin^{2}\Delta_{21}$
we can then write
$\displaystyle c^{2}_{12}\cos 2\Delta_{31}+s^{2}_{12}\cos 2\Delta_{32}$
$\displaystyle=$
$\displaystyle\sqrt{1-\sin^{2}2\theta_{12}\sin^{2}\Delta_{21}}~{}\cos\Omega,$
(26)
where
$\displaystyle\Omega$ $\displaystyle=$
$\displaystyle\Delta_{31}+\Delta_{32}+\arctan(\cos
2\theta_{12}\tan\Delta_{21}).$
To separate $\Omega$ into an effective $2\Delta$ and a phase, $\Phi_{\odot}$,
we have
$\displaystyle\frac{\partial~{}\Omega}{\partial(L/2E)}\left|{}_{\frac{L}{E}\rightarrow
0}\right.$ $\displaystyle=$ $\displaystyle\cos^{2}\theta_{12}\Delta
m^{2}_{31}+\sin^{2}\theta_{12}\Delta m^{2}_{32}=\Delta m^{2}_{ee}$
$\displaystyle{\rm and}\quad\Phi_{\odot}$ $\displaystyle=$
$\displaystyle\Omega-2\Delta_{ee}=\arctan(\cos
2\theta_{12}\tan\Delta_{21})-\Delta_{21}\cos 2\theta_{12}\,.$
Thus
$\displaystyle\Omega$ $\displaystyle=$ $\displaystyle
2\Delta_{ee}+(\arctan(\cos 2\theta_{12}\tan\Delta_{21})-\Delta_{21}\cos
2\theta_{12}).$ (27)
Since $\Omega$ appears only as $\cos\Omega$, one could use
$\Omega=2|\Delta_{ee}|\pm\Phi_{\odot}$ as in Eq. (3).
The factor $\sqrt{1-\sin^{2}2\theta_{12}\sin^{2}\Delta_{21}}$ in front of
$\cos\Omega$ in Eq. (26) modulates the amplitude of the $\theta_{13}$
oscillations, as this factor varies from 1 to $\cos 2\theta_{12}\approx 0.4$ as
$\Delta_{21}$ goes from 0 to $\pi/2$. So the $\sqrt{(\cdots)}$ factor modulates
the amplitude and $\Phi_{\odot}$ shifts the phase of the $\theta_{13}$
oscillations.
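As a quick numerical sanity check of the amplitude–phase identity in Eq. (26) (ours, not part of the original analysis; the value of $\theta_{12}$ and the ratio $\Delta_{31}/\Delta_{21}$ are illustrative), the identity holds exactly for $\Delta_{21}\in(0,\pi/2)$:

```python
import numpy as np

# Numerical check of the identity in Eq. (26),
#   c12^2 cos(2 D31) + s12^2 cos(2 D32)
#     = sqrt(1 - sin^2(2 th12) sin^2(D21)) * cos(Omega),
# with Omega = D31 + D32 + arctan(cos(2 th12) tan(D21)).
th12 = 0.5837                          # illustrative theta_12 (radians)
D21 = np.linspace(0.01, 1.5, 50)       # Delta_21 in (0, pi/2)
D31 = 30.0 * D21                       # illustrative ratio Delta_31/Delta_21
D32 = D31 - D21                        # Delta_32 = Delta_31 - Delta_21

c2, s2 = np.cos(th12) ** 2, np.sin(th12) ** 2
lhs = c2 * np.cos(2 * D31) + s2 * np.cos(2 * D32)
Omega = D31 + D32 + np.arctan(np.cos(2 * th12) * np.tan(D21))
rhs = np.sqrt(1 - np.sin(2 * th12) ** 2 * np.sin(D21) ** 2) * np.cos(Omega)
assert np.allclose(lhs, rhs)
```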
## Appendix C Verification of our code
In this appendix, we show that our code reproduces earlier results obtained by
the JUNO collaboration. In particular, we compare with the results from Ref.
Bezerra:2019dao . Note that some of the experimental features have improved
since that analysis was performed, in particular the overall detection
efficiency and a reduction of accidental background events.
We assume 6 years of exposure time (1800 days). No NL effects are included in
the analysis and the 1% shape uncertainty is included as a modification of the
denominator of the $\chi^{2}$ function Bezerra:2019dao . In particular, we use
for this cross check
$\chi^{2}(\vec{p})=\min_{\vec{\alpha}}\sum_{i}\frac{(N_{i}^{\text{dat}}-N_{i}(\vec{p},\vec{\alpha}))^{2}}{N_{i}(\vec{p},\vec{\alpha})+\sigma_{s}^{2}N_{i}(\vec{p},\vec{\alpha})^{2}}+\sum_{j}\left(\frac{\alpha_{j}}{\sigma_{j}}\right)^{2},$
(28)
in accordance with Ref. Bezerra:2019dao , but slightly different from our Eq.
(10). Here, $\sigma_{s}=0.01$. In Fig. 14 we compare the results from our
analysis (dashed lines) with the lines extracted directly from Ref.
Bezerra:2019dao (solid lines). As can be seen, the results agree very well. In
perfect agreement with the collaboration, we obtain
$\chi^{2}_{\rm min}[{\text{IO}}]=7.3$.
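For concreteness, the statistical part of Eq. (28) can be sketched in a few lines (ours, not the collaboration's code; the minimization over $\vec{\alpha}$ is omitted, and `N_dat`, `N_pred`, and the pull parameters are illustrative placeholders, not actual JUNO spectra):

```python
import numpy as np

# Sketch of the chi^2 of Eq. (28): the 1% shape uncertainty sigma_s
# enters through the denominator, while the alpha_j are the usual
# pull parameters. All inputs here are illustrative.
def chi2_shape(N_dat, N_pred, alphas=(), sigmas=(), sigma_s=0.01):
    stat = np.sum((N_dat - N_pred) ** 2
                  / (N_pred + sigma_s ** 2 * N_pred ** 2))
    pulls = sum((a / s) ** 2 for a, s in zip(alphas, sigmas))
    return stat + pulls

# e.g. a single bin with 110 observed vs 100 predicted events
val = chi2_shape(np.array([110.0]), np.array([100.0]))
```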
Figure 14: Here we reproduce Figs. 4 and 11 from Ref. Bezerra:2019dao , using
the oscillation parameters and technical details of that reference. Our code,
written for this paper, gives the dashed lines, whereas the results extracted
from the above reference are the solid lines; normal (inverted) ordering is in
blue (red).
## Appendix D On the contribution to the determination of $|\Delta
m^{2}_{ee}|$ from the $|\Delta m^{2}_{\mu\mu}|$ sensitive experiments
Figure 15: Separate contributions of T2K data (upper left panel), NOvA data
(upper right panel) and Super-K and DeepCore atmospheric data, labeled ATM,
(lower panel) to the $\overline{\chi}^{2}$ fit of $|\Delta m^{2}_{ee}|$ to NO
(dashed lines) and IO (solid lines) included in the global fit (blue) and in
the combination of the current global fit with 2 years of JUNO data (green).
JUNO fit only is in red.
It is informative to examine the contributions of the $|\Delta
m^{2}_{\mu\mu}|$ sensitive experiments included in the global fit to the final
determination of $|\Delta m^{2}_{ee}|$. We will focus here on the major
players: T2K, NOvA and the atmospheric neutrino oscillation experiments
Super-Kamiokande and DeepCore (ATM). The analyses of T2K, NOvA and ATM data
shown in this section correspond to the analyses performed in Ref.
deSalas:2020pgw .
For this purpose we show in Fig. 15 the separate contributions to the
determination of $|\Delta m^{2}_{ee}[\rm NO]|$ and $|\Delta m^{2}_{ee}[\rm
IO]|$ coming from T2K (upper left panel), NOvA (upper right panel) and the ATM
(lower panel) neutrino oscillation data. We show their effect on the global
fit and on the corresponding global fit combination with 2 years of JUNO data.
From these plots we see that T2K prefers $|\Delta m^{2}_{ee}[\rm NO,IO]|$
close to the global-fit best-fit values, while NOvA (ATM) prefers lower
(higher) values. Note, however, that both accelerator neutrino oscillation
experiments prefer $|\Delta m^{2}_{ee}[\rm IO]|$ smaller than the value JUNO
will prefer (NO assumed true). Since none of the $\overline{\chi}^{2}$
distributions is very Gaussian at this point, the combined
$\overline{\chi}^{2}_{\rm min}[\rm IO]$ results from broad distributions
pulling toward different minima, which, at JUNO’s best-fit value for $|\Delta
m^{2}_{ee}[\rm IO]|$, contribute increases of about 7 (NOvA), 3 (T2K) and 5
(ATM) units to $\overline{\chi}^{2}_{\rm min}[\rm IO]$, producing the final
power of the combination.
The addition to the global fit used in this paper of the atmospheric data and,
to a minor extent, of MINOS data (which is compatible with NOvA) explains the
difference of about 4 units between the boost in the mass-ordering
determination shown here and the one predicted in Fig. 5 of Ref.
Cabrera:2020own , where only simulated data from T2K and NOvA were used.
## References
* (1) H. Nunokawa, S. J. Parke, and J. W. F. Valle, “CP Violation and Neutrino Oscillations,” Prog. Part. Nucl. Phys. 60 (2008) 338–402, arXiv:0710.0554 [hep-ph].
* (2) P. F. de Salas, D. V. Forero, S. Gariazzo, P. Martínez-Miravé, O. Mena, C. A. Ternes, M. Tórtola, and J. W. F. Valle, “2020 Global reassessment of the neutrino oscillation picture,” JHEP 21 (2020) 071, arXiv:2006.11237 [hep-ph].
* (3) F. Capozzi, E. Di Valentino, E. Lisi, A. Marrone, A. Melchiorri, and A. Palazzo, “The unfinished fabric of the three neutrino paradigm,” arXiv:2107.00532 [hep-ph].
* (4) I. Esteban, M. C. Gonzalez-Garcia, M. Maltoni, T. Schwetz, and A. Zhou, “The fate of hints: updated global analysis of three-flavor neutrino oscillations,” JHEP 09 (2020) 178, arXiv:2007.14792 [hep-ph].
* (5) KamLAND Collaboration, A. Gando et al., “Reactor On-Off Antineutrino Measurement with KamLAND,” Phys. Rev. D 88 no. 3, (2013) 033001, arXiv:1303.4667 [hep-ex].
* (6) B. Cleveland et al., “Measurement of the solar electron neutrino flux with the Homestake chlorine detector,” Astrophys.J. 496 (1998) 505–526.
* (7) F. Kaether, W. Hampel, G. Heusser, J. Kiko, and T. Kirsten, “Reanalysis of the GALLEX solar neutrino flux and source experiments,” Phys.Lett.B 685 (2010) 47–54, arXiv:1001.2731 [hep-ex].
* (8) SAGE Collaboration, J. N. Abdurashitov et al., “Measurement of the solar neutrino capture rate with gallium metal. III: Results for the 2002–2007 data-taking period,” Phys.Rev.C 80 (2009) 015807, arXiv:0901.2200 [nucl-ex].
* (9) G. Bellini et al., “Precision measurement of the 7Be solar neutrino interaction rate in Borexino,” Phys. Rev. Lett. 107 (2011) 141302, arXiv:1104.1816 [hep-ex].
* (10) Borexino Collaboration, G. Bellini et al., “Final results of Borexino Phase-I on low energy solar neutrino spectroscopy,” Phys.Rev.D 89 (2014) 112007, arXiv:1308.0443 [hep-ex].
* (11) Super-Kamiokande Collaboration, J. Hosaka et al., “Solar neutrino measurements in super-Kamiokande-I,” Phys.Rev.D 73 (2006) 112001, arXiv:hep-ex/0508053 [hep-ex].
* (12) Super-Kamiokande Collaboration, J. Cravens et al., “Solar neutrino measurements in Super-Kamiokande-II,” Phys.Rev.D 78 (2008) 032002, arXiv:0803.4312 [hep-ex].
* (13) Super-Kamiokande Collaboration, K. Abe et al., “Solar neutrino results in Super-Kamiokande-III,” Phys.Rev.D 83 (2011) 052010, arXiv:1010.0118 [hep-ex].
* (14) Y. Nakano, “PhD Thesis, University of Tokyo.” http://www-sk.icrr.u-tokyo.ac.jp/sk/_pdf/articles/2016/doc_thesis_naknao.pdf, 2016.
* (15) Y. Nakajima, “Recent results and future prospects from Super-Kamiokande,” June, 2020. https://doi.org/10.5281/zenodo.4134680.
* (16) SNO Collaboration, B. Aharmim et al., “Combined Analysis of all Three Phases of Solar Neutrino Data from the Sudbury Neutrino Observatory,” Phys. Rev. C 88 (2013) 025501, arXiv:1109.0763 [nucl-ex].
* (17) SNO Collaboration, Q. Ahmad et al., “Direct evidence for neutrino flavor transformation from neutral current interactions in the Sudbury Neutrino Observatory,” Phys.Rev.Lett. 89 (2002) 011301, nucl-ex/0204008.
* (18) MINOS Collaboration, P. Adamson et al., “Electron neutrino and antineutrino appearance in the full MINOS data sample,” Phys. Rev. Lett. 110 no. 17, (2013) 171801, arXiv:1301.4581 [hep-ex].
* (19) NOvA Collaboration, M. Acero et al., “First measurement of neutrino oscillation parameters using neutrinos and antineutrinos by NOvA,” Phys.Rev.Lett. 123 (2019) 151803, arXiv:1906.04907 [hep-ex].
* (20) A. Himmel, “New Oscillation Results from the NOvA Experiment,” Jul, 2020. https://doi.org/10.5281/zenodo.3959581.
* (21) T2K Collaboration, K. Abe et al., “Improved constraints on neutrino mixing from the T2K experiment with $\mathbf{3.13\times 10^{21}}$ protons on target,” Phys. Rev. D 103 no. 11, (2021) 112008, arXiv:2101.03779 [hep-ex].
* (22) Daya Bay Collaboration, D. Adey et al., “Measurement of the Electron Antineutrino Oscillation with 1958 Days of Operation at Daya Bay,” Phys. Rev. Lett. 121 no. 24, (2018) 241805, arXiv:1809.02261 [hep-ex].
* (23) RENO Collaboration, G. Bak et al., “Measurement of Reactor Antineutrino Oscillation Amplitude and Frequency at RENO,” Phys.Rev.Lett. 121 (2018) 201801, arXiv:1806.00248 [hep-ex].
* (24) J. Yoo, “RENO,” June, 2020. https://doi.org/10.5281/zenodo.4123573.
* (25) Double Chooz Collaboration, H. de Kerret et al., “Double Chooz $\theta_{13}$ measurement via total neutron capture detection,” Nature Phys. 16 no. 5, (2020) 558–564, arXiv:1901.09445 [hep-ex].
* (26) K. J. Kelly, P. A. N. Machado, S. J. Parke, Y. F. Perez-Gonzalez, and R. Zukanovich Funchal, “Neutrino mass ordering in light of recent data,” Phys. Rev. D 103 no. 1, (2021) 013004, arXiv:2007.08526 [hep-ph].
* (27) S. T. Petcov and M. Piai, “The LMA MSW solution of the solar neutrino problem, inverted neutrino mass hierarchy and reactor neutrino experiments,” Phys. Lett. B 533 (2002) 94–106, arXiv:hep-ph/0112074.
* (28) S. Choubey, S. T. Petcov, and M. Piai, “Precision neutrino oscillation physics with an intermediate baseline reactor neutrino experiment,” Phys. Rev. D 68 (2003) 113006, arXiv:hep-ph/0306017.
* (29) S. M. Bilenky, F. Capozzi, and S. T. Petcov, “An alternative method of determining the neutrino mass ordering in reactor neutrino experiments,” Phys. Lett. B 772 (2017) 179–183, arXiv:1701.06328 [hep-ph]. [Erratum: Phys.Lett.B 809, 135765 (2020)].
* (30) H. Minakata, H. Nunokawa, S. J. Parke, and R. Zukanovich Funchal, “Determination of the Neutrino Mass Hierarchy via the Phase of the Disappearance Oscillation Probability with a Monochromatic $\bar{\nu}_{e}$ Source,” Phys. Rev. D 76 (2007) 053004, arXiv:hep-ph/0701151. [Erratum: Phys.Rev.D 76, 079901 (2007)].
* (31) H. Nunokawa, S. J. Parke, and R. Zukanovich Funchal, “Another possible way to determine the neutrino mass hierarchy,” Phys. Rev. D 72 (2005) 013009, arXiv:hep-ph/0503283.
* (32) JUNO Collaboration, F. An et al., “Neutrino Physics with JUNO,” J. Phys. G43 no. 3, (2016) 030401, arXiv:1507.05613 [physics.ins-det].
* (33) L. Zhan, Y. Wang, J. Cao, and L. Wen, “Determination of the Neutrino Mass Hierarchy at an Intermediate Baseline,” Phys. Rev. D78 (2008) 111103, arXiv:0807.3203 [hep-ex].
* (34) S.-F. Ge, K. Hagiwara, N. Okamura, and Y. Takaesu, “Determination of mass hierarchy with medium baseline reactor neutrino experiments,” JHEP 05 (2013) 131, arXiv:1210.8141 [hep-ph].
* (35) S. J. Parke, H. Minakata, H. Nunokawa, and R. Zukanovich Funchal, “Mass Hierarchy via Mossbauer and Reactor Neutrinos,” Nucl. Phys. B Proc. Suppl. 188 (2009) 115–117, arXiv:0812.1879 [hep-ph].
* (36) X. Qian, D. A. Dwyer, R. D. McKeown, P. Vogel, W. Wang, and C. Zhang, “Mass Hierarchy Resolution in Reactor Anti-neutrino Experiments: Parameter Degeneracies and Detector Energy Response,” Phys. Rev. D 87 no. 3, (2013) 033005, arXiv:1208.1551 [physics.ins-det].
* (37) Y.-F. Li, J. Cao, Y. Wang, and L. Zhan, “Unambiguous Determination of the Neutrino Mass Hierarchy Using Reactor Neutrinos,” Phys. Rev. D88 (2013) 013008, arXiv:1303.6733 [hep-ex].
* (38) F. Capozzi, E. Lisi, and A. Marrone, “Neutrino mass hierarchy and electron neutrino oscillation parameters with one hundred thousand reactor events,” Phys. Rev. D89 no. 1, (2014) 013001, arXiv:1309.1638 [hep-ph].
* (39) F. Capozzi, E. Lisi, and A. Marrone, “Neutrino mass hierarchy and precision physics with medium-baseline reactors: Impact of energy-scale and flux-shape uncertainties,” Phys. Rev. D92 no. 9, (2015) 093011, arXiv:1508.01392 [hep-ph].
* (40) D. V. Forero, R. Hawkins, and P. Huber, “The benefits of a near detector for JUNO,” arXiv:1710.07378 [hep-ph].
* (41) Z. Cheng, N. Raper, W. Wang, C. F. Wong, and J. Zhang, “Potential impact of sub-structure on the determination of neutrino mass hierarchy at medium-baseline reactor neutrino oscillation experiments,” Eur. Phys. J. C 80 no. 12, (2020) 1112, arXiv:2004.11659 [hep-ex].
* (42) F. Capozzi, E. Lisi, and A. Marrone, “Mapping reactor neutrino spectra from TAO to JUNO,” Phys. Rev. D 102 no. 5, (2020) 056001, arXiv:2006.01648 [hep-ph].
* (43) M. Blennow, P. Coloma, P. Huber, and T. Schwetz, “Quantifying the sensitivity of oscillation experiments to the neutrino mass ordering,” JHEP 03 (2014) 028, arXiv:1311.1822 [hep-ph].
* (44) IceCube-Gen2, JUNO members Collaboration, M. G. Aartsen et al., “Combined sensitivity to the neutrino mass ordering with JUNO, the IceCube Upgrade, and PINGU,” Phys. Rev. D 101 no. 3, (2020) 032006, arXiv:1911.06745 [hep-ex].
* (45) A. Cabrera et al., “Earliest Resolution to the Neutrino Mass Ordering?,” arXiv:2008.11280 [hep-ph].
* (46) S. Parke, “What is $\Delta m^{2}_{ee}$ ?,” Phys. Rev. D 93 no. 5, (2016) 053008, arXiv:1601.07464 [hep-ph].
* (47) SNO Collaboration, B. Aharmim et al., “Electron energy spectra, fluxes, and day-night asymmetries of B-8 solar neutrinos from measurements with NaCl dissolved in the heavy-water detector at the Sudbury Neutrino Observatory,” Phys. Rev. C 72 (2005) 055502, arXiv:nucl-ex/0502021.
* (48) Y.-F. Li, Y. Wang, and Z.-z. Xing, “Terrestrial matter effects on reactor antineutrino oscillations at JUNO or RENO-50: how small is small?,” Chin. Phys. C40 no. 9, (2016) 091001, arXiv:1605.00900 [hep-ph].
* (49) A. N. Khan, H. Nunokawa, and S. J. Parke, “Why matter effects matter for JUNO,” Phys. Lett. B 803 (2020) 135354, arXiv:1910.12900 [hep-ph].
* (50) JUNO Collaboration, A. Abusleme et al., “JUNO Physics and Detector,” arXiv:2104.02565 [hep-ex].
* (51) P. Huber, M. Lindner, and W. Winter, “Simulation of long-baseline neutrino oscillation experiments with GLoBES (General Long Baseline Experiment Simulator),” Comput. Phys. Commun. 167 (2005) 195, arXiv:hep-ph/0407333 [hep-ph].
* (52) P. Huber, J. Kopp, M. Lindner, M. Rolinec, and W. Winter, “New features in the simulation of neutrino oscillation experiments with GLoBES 3.0: General Long Baseline Experiment Simulator,” Comput. Phys. Commun. 177 (2007) 432–438, arXiv:hep-ph/0701187 [hep-ph].
* (53) JUNO Collaboration, A. Abusleme et al., “TAO Conceptual Design Report: A Precision Measurement of the Reactor Antineutrino Spectrum with Sub-percent Energy Resolution,” arXiv:2005.08745 [physics.ins-det].
* (54) T. Mueller et al., “Improved Predictions of Reactor Antineutrino Spectra,” Phys. Rev. C 83 (2011) 054615, arXiv:1101.2663 [hep-ex].
* (55) P. Huber, “On the determination of anti-neutrino spectra from nuclear reactors,” Phys. Rev. C 84 (2011) 024617, arXiv:1106.0687 [hep-ph]. [Erratum: Phys.Rev.C 85, 029901 (2012)].
* (56) P. Vogel and J. F. Beacom, “The angular distribution of the neutron inverse beta decay, $\overline{\nu}_{e}+p\rightarrow e^{+}+n$,” Phys. Rev. D60 (1999) 053003, hep-ph/9903554.
* (57) JUNO Collaboration, A. Abusleme et al., “Calibration Strategy of the JUNO Experiment,” JHEP 03 (2021) 004, arXiv:2011.06405 [physics.ins-det].
* (58) Daya Bay Collaboration, F. P. An et al., “Spectral measurement of electron antineutrino oscillation amplitude and frequency at Daya Bay,” Phys. Rev. Lett. 112 (2014) 061801, arXiv:1310.6732 [hep-ex].
* (59) Daya Bay Collaboration, F. P. An et al., “Measurement of electron antineutrino oscillation based on 1230 days of operation of the Daya Bay experiment,” Phys. Rev. D 95 no. 7, (2017) 072006, arXiv:1610.04802 [hep-ex].
* (60) KM3NeT Collaboration, S. Adrian-Martinez et al., “Letter of intent for KM3NeT 2.0,” J. Phys. G43 no. 8, (2016) 084001, arXiv:1601.07459 [astro-ph.IM].
* (61) F. Capozzi, E. Lisi, and A. Marrone, “Probing the neutrino mass ordering with KM3NeT-ORCA: Analysis and perspectives,” J. Phys. G45 no. 2, (2018) 024003, arXiv:1708.03022 [hep-ph].
* (62) IceCube PINGU Collaboration, M. G. Aartsen et al., “Letter of Intent: The Precision IceCube Next Generation Upgrade (PINGU),” arXiv:1401.2046 [physics.ins-det].
* (63) W. Winter, “Neutrino mass hierarchy determination with IceCube-PINGU,” Phys. Rev. D88 no. 1, (2013) 013013, arXiv:1305.5539 [hep-ph].
* (64) F. Capozzi, E. Lisi, and A. Marrone, “PINGU and the neutrino mass hierarchy: Statistical and systematic aspects,” Phys. Rev. D91 (2015) 073011, arXiv:1503.01999 [hep-ph].
* (65) Hyper-Kamiokande Collaboration, K. Abe et al., “Hyper-Kamiokande Design Report,” arXiv:1805.04163 [physics.ins-det].
* (66) C. A. Ternes, S. Gariazzo, R. Hajjar, O. Mena, M. Sorel, and M. Tórtola, “Neutrino mass ordering at DUNE: An extra $\nu$ bonus,” Phys. Rev. D 100 no. 9, (2019) 093004, arXiv:1905.03589 [hep-ph].
* (67) Hyper-Kamiokande Working Group Collaboration, T. Ishida, “T2HK: J-PARC upgrade plan for future and beyond T2K,” in 15th International Workshop on Neutrino Factories, Super Beams and Beta Beams. 11, 2013. arXiv:1311.5287 [hep-ex].
* (68) DUNE Collaboration, B. Abi et al., “Long-baseline neutrino oscillation physics potential of the DUNE experiment,” Eur. Phys. J. C 80 no. 10, (2020) 978, arXiv:2006.16043 [hep-ex].
* (69) T. Ohlsson, H. Zhang, and S. Zhou, “Nonstandard interaction effects on neutrino parameters at medium-baseline reactor antineutrino experiments,” Phys. Lett. B728 (2014) 148–155, arXiv:1310.5917 [hep-ph].
* (70) A. N. Khan, D. W. McKay, and F. Tahir, “Sensitivity of medium-baseline reactor neutrino mass-hierarchy experiments to nonstandard interactions,” Phys. Rev. D88 (2013) 113006, arXiv:1305.4350 [hep-ph].
* (71) Y.-F. Li and Z.-h. Zhao, “Tests of Lorentz and CPT Violation in the Medium Baseline Reactor Antineutrino Experiment,” Phys. Rev. D90 no. 11, (2014) 113014, arXiv:1409.6970 [hep-ph].
* (72) P. Bakhti and Y. Farzan, “Shedding light on LMA-Dark solar neutrino solution by medium baseline reactor experiments: JUNO and RENO-50,” JHEP 07 (2014) 064, arXiv:1403.0744 [hep-ph].
* (73) Y.-L. Chan, M.-C. Chu, K. M. Tsui, C. F. Wong, and J. Xu, “Wave-packet treatment of reactor neutrino oscillation experiments and its implications on determining the neutrino mass hierarchy,” Eur. Phys. J. C 76 no. 6, (2016) 310, arXiv:1507.06421 [hep-ph].
* (74) T. Abrahão, H. Minakata, H. Nunokawa, and A. A. Quiroga, “Constraint on Neutrino Decay with Medium-Baseline Reactor Neutrino Oscillation Experiments,” JHEP 11 (2015) 001, arXiv:1506.02314 [hep-ph].
* (75) J. Liao, D. Marfatia, and K. Whisnant, “Nonstandard interactions in solar neutrino oscillations with Hyper-Kamiokande and JUNO,” Phys. Lett. B771 (2017) 247–253, arXiv:1704.04711 [hep-ph].
* (76) Y.-F. Li, Z.-z. Xing, and J.-y. Zhu, “Indirect unitarity violation entangled with matter effects in reactor antineutrino oscillations,” Phys. Lett. B 782 (2018) 578–588, arXiv:1802.04964 [hep-ph].
* (77) G. Anamiati, V. De Romeri, M. Hirsch, C. A. Ternes, and M. Tórtola, “Quasi-Dirac neutrino oscillations at DUNE and JUNO,” Phys. Rev. D100 no. 3, (2019) 035032, arXiv:1907.00980 [hep-ph].
* (78) Y. P. Porto-Silva, S. Prakash, O. L. G. Peres, H. Nunokawa, and H. Minakata, “Constraining visible neutrino decay at KamLAND and JUNO,” Eur. Phys. J. C 80 no. 10, (2020) 999, arXiv:2002.12134 [hep-ph].
* (79) A. de Gouvea, V. de Romeri, and C. A. Ternes, “Probing neutrino quantum decoherence at reactor experiments,” JHEP 08 (2020) 018, arXiv:2005.03022 [hep-ph].
* (80) Z. Cheng, W. Wang, C. F. Wong, and J. Zhang, “Studying the neutrino wave-packet effects at medium-baseline reactor neutrino oscillation experiments and the potential benefits of an extra detector,” Nucl. Phys. B 964 (2021) 115304, arXiv:2009.06450 [hep-ph].
* (81) Particle Data Group Collaboration, M. Tanabashi et al., “2018 Review of Particle Physics,” Phys. Rev. D 98 no. 3, (2018) 030001.
# Existence of solutions to reaction cross diffusion systems
Matt Jacobs
###### Abstract.
Reaction cross diffusion systems are a two species generalization of the
porous media equation. These systems play an important role in the mechanical
modelling of living tissues and tumor growth. Due to their mixed parabolic-
hyperbolic structure, even proving the existence of solutions to these
equations is challenging. In this paper, we exploit the parabolic structure of
the system to prove the strong compactness of the pressure gradient in
$L^{2}$. The key ingredient is the energy dissipation relation, which along
with some compensated compactness arguments, allows us to upgrade weak
convergence to strong convergence. As a consequence of the pressure
compactness, we are able to prove the existence of solutions in a very general
setting and pass to the Hele-Shaw/incompressible limit in any dimension.
## 1. Introduction
In this paper, we consider the following two species reaction cross diffusion
system
(1.1) $\begin{cases}\partial_{t}\rho_{1}-\nabla\cdot(\rho_{1}(\nabla p-V))=\rho_{1}F_{1,1}(p,n)+\rho_{2}F_{1,2}(p,n),\\ \partial_{t}\rho_{2}-\nabla\cdot(\rho_{2}(\nabla p-V))=\rho_{1}F_{2,1}(p,n)+\rho_{2}F_{2,2}(p,n),\\ \rho p=z(\rho)+z^{*}(p),\\ \partial_{t}n-\alpha\Delta n=-n(c_{1}\rho_{1}+c_{2}\rho_{2}),\end{cases}$
on the spacetime domain $Q_{\infty}:=[0,\infty)\times\mathbb{R}^{d}$. The
study of these systems has become extremely important in the modelling of
tissue growth and cancer [BKMP03, PT08, RBE+10] and has drawn substantial
interest from the mathematical community [PQV14, PV15, GPŚG19, KT20, BCP20,
BPPS19, JKT21, AKY14, BM14]. The equations model the growth and death of two
populations of cells whose densities are given by $\rho_{1},\rho_{2}$. The
densities are linked through a convex energy $z$ (and its convex dual
$z^{*}$), which opposes the concentration of the total density
$\rho=\rho_{1}+\rho_{2}$. The energy induces a pressure function $p$, which
dissipates energy by pushing the densities down the pressure gradient $\nabla
p$. In addition, the
control the growth/death of the two populations depend on both the pressure
and a nutrient variable $n$. The nutrient evolves through a coupled equation
that accounts for both diffusion and consumption.
Throughout the paper, we assume that $V\in
L^{2}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}(\mathbb{R}^{d}))$ and
$\nabla\cdot V\in L^{\infty}(Q_{\infty})$. We will also have the following
assumptions on the energy $z$:
1. (z1) $z:\mathbb{R}\to\mathbb{R}\cup\{+\infty\}$ is proper, lower semicontinuous, and convex,
2. (z2) $z(a)=+\infty$ if $a<0$ and $z(0)=0$,
3. (z3) there exists $a>0$ such that $z$ is differentiable at $a$ and $\sup\partial z(0)<z^{\prime}(a)$,
as well as the following assumptions on the source terms:
1. (F1) the $F_{i,j}$ are continuous on $\mathbb{R}\times[0,\infty)$ and uniformly bounded,
2. (F2) the cross terms $F_{1,2},F_{2,1}$ are nonnegative.
In certain cases, we will need the additional assumption:
1. (F3) for $n$ fixed, $p\mapsto(F_{1,1}(p,n)+F_{2,1}(p,n))$ and $p\mapsto(F_{1,2}(p,n)+F_{2,2}(p,n))$ are decreasing.
Constructing weak solutions to the system (1.1) is challenging due to the
highest order nonlinear terms $\rho_{1}\nabla p,\rho_{2}\nabla p$. Given a
sequence of approximate solutions, one needs either strong convergence of the
densities or of the pressure gradient to pass to the limit. Due to the
hyperbolic character of the first two equations, the regularity of the
individual densities need not improve over time. Furthermore, it is not clear
if densities with BV initial data will remain BV in dimensions $d>1$ (see
[CFSS18] and [BPPS19] for results in one dimension). On the other hand,
summing the first two equations, one sees that the pressure $p$ and the
_total_ density $\rho$ satisfy the parabolic equation
(1.2) $\partial_{t}\rho-\nabla\cdot(\rho(\nabla
p-V))=\rho_{1}\big{(}F_{1,1}(p,n)+F_{2,1}(p,n)\big{)}+\rho_{2}\big{(}F_{1,2}(p,n)+F_{2,2}(p,n)\big{)},$
(note (1.2) needs to be coupled with the duality relation $\rho
p=z(\rho)+z^{*}(p)$ in order to fully appreciate the parabolic structure).
Hence, attacking the problem through the pressure appears to be more
promising.
Indeed, recently, several authors have been able to construct solutions to
certain cases of (1.1) by exploiting (1.2) to obtain strong convergence of the
pressure gradient [GPŚG19, BCP20]. The strategy of these approaches is to use
the parabolic structure to obtain a priori estimates on the pressure that are
strong enough to guarantee compactness. In particular, following these
approaches, one tries to bound the pressure Laplacian in at least $L^{1}$ and
then obtain some additional (arbitrarily weak) time regularity. As it turns
out, both space and time regularity can be problematic. It is not clear
whether spatial regularity can hold without some structural assumptions on the
source terms $F_{i,j}$ or in the presence of a non-zero vector field $V$.
Time regularity also becomes problematic in the (important) special case where
the energy $z$ enforces the incompressibility constraint $\rho\leq 1$. Indeed,
in the incompressible case, the coupling between the total density $\rho$ and
the pressure $p$ is degenerate and it is not clear how to convert time
regularity for $\rho$ (easy) into time regularity for $p$ (hard).
In this paper, rather than establish the strong convergence of the pressure
gradient through regularity, we instead prove it directly by exploiting the
energy dissipation relation associated to (1.2). In order to explain our
strategy more fully, we need to introduce a change of variables that will make
our subsequent analysis easier. Thanks to the duality relation $\rho
p=z(\rho)+z^{*}(p)$, the term $\rho\nabla p$ is equivalent to $\nabla
z^{*}(p)$. This suggests the natural change of variables $q=z^{*}(p)$. Since
the pressure is only relevant on the set $\rho>0$, we can essentially treat
$z^{*}$ as a strictly increasing function. As a result, we can completely
rewrite the system (1.1) and the parabolic equation (1.2) in terms of $q$
instead of $p$ (cf. Sections 2 and 5 for the rigorous justification). Doing
so, we get the equivalent system
(1.3) $\begin{cases}\partial_{t}\rho_{1}-\nabla\cdot(\frac{\rho_{1}}{\rho}\nabla q)+\nabla\cdot(\rho_{1}V)=\rho_{1}F_{1,1}\big((z^{*})^{-1}(q),n\big)+\rho_{2}F_{1,2}\big((z^{*})^{-1}(q),n\big),\\ \partial_{t}\rho_{2}-\nabla\cdot(\frac{\rho_{2}}{\rho}\nabla q)+\nabla\cdot(\rho_{2}V)=\rho_{1}F_{2,1}\big((z^{*})^{-1}(q),n\big)+\rho_{2}F_{2,2}\big((z^{*})^{-1}(q),n\big),\\ \rho q=e(\rho)+e^{*}(q),\\ \partial_{t}n-\alpha\Delta n=-n(c_{1}\rho_{1}+c_{2}\rho_{2}),\end{cases}$
where $e$ is the unique convex function such that
$e(a)=\begin{cases}az(a)-2\int_{0}^{a}z(s)\,ds&\textup{if}\;\;z(a)\neq+\infty,\\ +\infty&\textup{otherwise.}\end{cases}$
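As a worked example (our own computation, included as a sanity check of this formula): for the porous media energy $z_{m}(a)=\frac{1}{m-1}a^{m}$ that appears later in the introduction, the definition gives

```latex
e_{m}(a)
  \;=\; a\cdot\frac{a^{m}}{m-1}\;-\;\frac{2}{m-1}\int_{0}^{a}s^{m}\,ds
  \;=\; \frac{a^{m+1}}{m-1}\left(1-\frac{2}{m+1}\right)
  \;=\; \frac{a^{m+1}}{m+1}.
```

In particular, $e_{m}'(\rho)=\rho^{m}$, so along $q=\rho^{m}$ the duality relation $\rho q=e_{m}(\rho)+e_{m}^{*}(q)$ holds with $e_{m}^{*}(q)=\frac{m}{m+1}\,q^{(m+1)/m}$.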
It is worth noting that the change of variables from $p$ to $q$ is essentially
the reverse direction of Otto’s celebrated interpretation of the porous media
equation as a $W_{2}$ gradient flow [Ott01]. Indeed, the $p$ variable can be
interpreted as a Kantorovich potential for the quadratic optimal transport
distance, while the $q$ variable is instead the dual potential for an $H^{-1}$
distance. While the optimal transport interpretation of the system is more
physically natural, the linearity of the $H^{-1}$ structure is advantageous
for our arguments. Indeed, summing the first two equations of (1.3), we get a
more linear analogue of (1.2):
(1.4) $\partial_{t}\rho-\Delta q+\nabla\cdot(\rho V)=\mu,$
where we have defined
$\mu:=\rho_{1}\big{(}F_{1,1}\big{(}(z^{*})^{-1}(q),n\big{)}+F_{2,1}\big{(}(z^{*})^{-1}(q),n\big{)}\big{)}+\rho_{2}\big{(}F_{1,2}\big{(}(z^{*})^{-1}(q),n\big{)}+F_{2,2}\big{(}(z^{*})^{-1}(q),n\big{)}\big{)}$
for convenience.
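To make (1.4) concrete, here is a minimal 1-D illustration of ours (not from the paper): with $V=0$, no source terms, and the porous media energy $z_{m}$, one has $q=\rho^{m}$, so (1.4) reduces to the porous medium equation $\partial_{t}\rho=\Delta\rho^{m}$. An explicit finite-difference sketch, checking only that mass is conserved:

```python
import numpy as np

# Explicit finite differences for d/dt rho = Lap(rho^m) in 1-D
# (equation (1.4) with V = 0, mu = 0, and q = rho^m; illustrative only).
m = 3
L, N = 4.0, 201
dx = L / (N - 1)
x = np.linspace(-L / 2, L / 2, N)
rho = np.maximum(1.0 - x ** 2, 0.0)      # compactly supported initial bump
mass0 = rho.sum() * dx

dt = 0.1 * dx ** 2                       # small step for explicit stability
for _ in range(2000):
    q = rho ** m
    lap = (np.roll(q, 1) - 2 * q + np.roll(q, -1)) / dx ** 2
    rho = rho + dt * lap                 # periodic boundaries via roll
mass = rho.sum() * dx                    # conserved up to round-off
```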
Now we are ready to give an outline of our strategy. As we mentioned earlier,
the key idea is to exploit the energy dissipation relation associated to
(1.4). Given any nonnegative test function $\omega\in
W^{1,\infty}_{c}([0,\infty))$ depending on time only, the dissipation relation
states that
(1.5) $\int_{Q_{\infty}}\omega|\nabla q|^{2}-e(\rho)\partial_{t}\omega+\omega
e^{*}(q)\nabla\cdot V-\omega\mu q=\int_{\mathbb{R}^{d}}\omega(0)e(\rho^{0})$
where $\rho^{0}$ is the initial total density and we recall that
$Q_{\infty}=[0,\infty)\times\mathbb{R}^{d}$ is the full space-time domain.
Suppose we have a sequence $(\rho_{k},q_{k},\mu_{k})$ of solutions to equation
(1.4) with the same initial data $\rho^{0}$ that converges weakly to a limit
point $(\bar{\rho},\bar{q},\bar{\mu})$. Thanks to the linearity of (1.4), the
limit point $(\bar{\rho},\bar{q},\bar{\mu})$ will also be a solution of (1.4).
As a result, we can expect that both $(\rho_{k},q_{k},\mu_{k})$ and
$(\bar{\rho},\bar{q},\bar{\mu})$ satisfy the dissipation relation (1.5).
Hence, we can conclude that
$\int_{Q_{\infty}}\omega|\nabla q_{k}|^{2}-e(\rho_{k})\partial_{t}\omega+\omega e^{*}(q_{k})\nabla\cdot V-\omega\mu_{k}q_{k}=\int_{Q_{\infty}}\omega|\nabla\bar{q}|^{2}-e(\bar{\rho})\partial_{t}\omega+\omega e^{*}(\bar{q})\nabla\cdot V-\omega\bar{\mu}\bar{q}.$
If we can prove that $e(\rho_{k}),e^{*}(q_{k})$ converge weakly to
$e(\bar{\rho}),e^{*}(\bar{q})$ respectively and
(1.6)
$\limsup_{k\to\infty}\int_{Q_{\infty}}\omega\mu_{k}q_{k}\leq\int_{Q_{\infty}}\omega\bar{\mu}\bar{q},$
then we have the upper semicontinuity property
(1.7) $\limsup_{k\to\infty}\int_{Q_{\infty}}\omega|\nabla q_{k}|^{2}\leq\int_{Q_{\infty}}\omega|\nabla\bar{q}|^{2},$
which, combined with the weak lower semicontinuity of the $L^{2}$ norm, forces
convergence of the norms; since weak convergence together with convergence of
norms implies strong convergence in a Hilbert space, $\nabla q_{k}$ converges
strongly in
$L^{2}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}(\mathbb{R}^{d}))$ to
$\nabla\bar{q}$. Thus, the energy dissipation relation gives us a way to
upgrade some weak convergence properties into strong gradient convergence.
Of course, in order to exploit this idea, we need:
1. (i) enough regularity to ensure that the dissipation relation (1.5) is valid,
2. (ii) enough compactness to prove the weak convergence of the energies $e(\rho_{k}),e^{*}(q_{k})$,
3. (iii) enough compactness to verify the nonlinear limit (1.6).
The amount of a priori regularity needed for (i) is very low; thus, this point
does not pose much of a problem. However, obtaining the compactness needed for
points (ii) and (iii) is more delicate. Exploiting convex duality, the weak
convergence of the energies $e(\rho_{k}),e^{*}(q_{k})$ is essentially
equivalent to the weak convergence of the product $\rho_{k}q_{k}$ (cf.
Proposition 3.2). While we may not know strong convergence of either
$\rho_{k}$ or $q_{k}$ separately, we can still obtain the weak convergence of
the product through compensated compactness arguments (cf. Lemma 3.3). When
$e^{*}$ is strictly convex, the weak convergence of the energy $e^{*}(q_{k})$
to $e^{*}(\bar{q})$ actually implies that $q_{k}$ converges to $\bar{q}$
locally in measure. Thus, in this case, verifying the limit (1.6) becomes
trivial. When the strict convexity of $e^{*}$ fails, we will still be able to
verify the limit (1.6) as long as we add the additional structural assumption
(F3) on the source terms.
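To see why point (iii) requires care, here is a toy illustration of ours (unrelated to the specific system): a weakly convergent sequence whose square does not converge weakly to the square of the limit, which is exactly the failure mode that compensated compactness arguments are used to rule out.

```python
import numpy as np

# u_k(x) = sin(k x) converges weakly to 0 on [0, 2*pi]: averages against
# any fixed test function tend to 0. Yet u_k^2 converges weakly to 1/2,
# not to 0^2 = 0, so weak limits cannot simply be moved inside nonlinear
# expressions such as the product in (1.6).
x = np.linspace(0.0, 2.0 * np.pi, 200001)
phi = np.exp(-((x - np.pi) ** 2))       # fixed smooth test function
k = 200
u = np.sin(k * x)
weak_u = np.mean(u * phi) / np.mean(phi)         # ~ 0
weak_u2 = np.mean(u ** 2 * phi) / np.mean(phi)   # ~ 1/2
```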
Once we have obtained the strong convergence of the pressure gradient,
constructing solutions to the system (1.3) (and hence the system (1.1)) is
straightforward via a vanishing viscosity approach (note adding viscosity to
the system is compatible with our energy dissipation based argument).
Furthermore, the above strategy works even when the energy is allowed to
change along the approximating sequence. Hence, we can also use the above
arguments to show that solutions to the system (1.1) with the porous media
energy $z_{m}(a)=\frac{1}{m-1}a^{m}$ converge to the incompressible limit
system with the energy $z_{\infty}(a)=0$ if $a\in[0,1]$ and $+\infty$
otherwise.
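As a numerical aside (our own computation, not taken from the paper): for $z_{m}$ one can compute $e_{m}(a)=a^{m+1}/(m+1)$ with convex dual $e_{m}^{*}(q)=\frac{m}{m+1}q^{(m+1)/m}$, and the duality relation $\rho q=e(\rho)+e^{*}(q)$ appearing in (1.3) can then be verified along $q=e_{m}'(\rho)=\rho^{m}$:

```python
import numpy as np

# Check the duality rho*q = e(rho) + e*(q) for the porous media energy
# z_m(a) = a^m/(m-1). Here e_m(a) = a^(m+1)/(m+1) and
# e_m*(q) = m/(m+1) * q^((m+1)/m) (our computation), with q = e_m'(rho).
m = 4.0
rho = np.linspace(0.1, 2.0, 25)
q = rho ** m                                   # q = e_m'(rho) = rho^m
e = rho ** (m + 1) / (m + 1)
e_star = m / (m + 1) * q ** ((m + 1) / m)
gap = np.max(np.abs(rho * q - (e + e_star)))   # should vanish
```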
### 1.1. Main results
For the reader’s convenience, in this subsection, we collect some of our main
results. To prevent the introduction from becoming too bloated, we shall state
our results somewhat informally. The rigorous analogues of these results can
be found in Section 5.
Our first result concerns the case where the density-pressure coupling is
non-degenerate, i.e., $z$ is differentiable on $(0,\infty)$.
###### Theorem 1.1.
Suppose that $z$ is an energy satisfying assumptions (z1-z3) such that
$\partial z(a)$ is a singleton for all $a>0$ and suppose that the source terms
satisfy assumptions (F1-F2). Given initial data
$\rho_{1}^{0},\rho_{2}^{0},n^{0}$ such that $e(\rho_{1}^{0}+\rho_{2}^{0})\in
L^{1}(\mathbb{R}^{d})$, there exists a weak solution $(\rho_{1},\rho_{2},p,n)$
to the system (1.1).
When the density-pressure coupling becomes degenerate, we need to add the
additional assumption (F3) on the source terms.
###### Theorem 1.2.
Suppose that $z$ is an energy satisfying assumptions (z1-z3) and suppose that
the source terms satisfy assumptions (F1-F3). Given initial data
$\rho_{1}^{0},\rho_{2}^{0},n^{0}$ such that $e(\rho_{1}^{0}+\rho_{2}^{0})\in
L^{1}(\mathbb{R}^{d})$, there exists a weak solution $(\rho_{1},\rho_{2},p,n)$
to the system (1.1).
In addition to our existence results, we also show that solutions of the
system with the porous media energy $z_{m}(a):=\frac{1}{m-1}a^{m}$ converge to
a solution of the system with the incompressible energy
$z_{\infty}(a):=\begin{cases}0&\textup{if}\;\;a\in[0,1]\\\
+\infty&\textup{otherwise}\\\ \end{cases}$
as $m\to\infty$.
###### Theorem 1.3.
Let $\rho_{1}^{0},\rho_{2}^{0},n^{0}$ be initial data such that
$\rho_{1}^{0}+\rho_{2}^{0}\leq 1$ almost everywhere. Suppose that the source
terms satisfy (F1-F3). If $(\rho_{1,m},\rho_{2,m},p_{m},n_{m})$ is a sequence
of solutions to the system (1.1) with the energy $z_{m}$ and the fixed initial
data $(\rho_{1}^{0},\rho_{2}^{0},n^{0})$, then there exists a limit point of
the sequence $(\rho_{1,\infty},\rho_{2,\infty},p_{\infty},n_{\infty})$ that
solves the system (1.1) with the incompressible energy $z_{\infty}$.
Theorem 1.3 is just a special case of our more general convergence result,
Theorem 5.5, which shows that one can extract limit solutions for essentially
any reasonable sequence of energies. Nonetheless, the statement of Theorem 5.5
is a bit too complicated to be cleanly summarized in the introduction, so we
leave it to be stated for the first time in Section 5.
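The pointwise convergence $z_{m}\to z_{\infty}$ behind this incompressible limit is easy to observe numerically; the following is a quick sanity check, not part of the argument, and the sample points and exponents are our own choices:

```python
# Sanity check (illustrative only): the porous-media energies
# z_m(a) = a^m / (m - 1) converge pointwise to z_inf, which is
# 0 on [0, 1] and +infinity for a > 1.
def z_m(a, m):
    return a**m / (m - 1)

ms = [2, 10, 100, 1000]
inside = [z_m(0.5, m) for m in ms]   # a in [0, 1): values shrink to 0
at_one = [z_m(1.0, m) for m in ms]   # a = 1: values decay like 1/(m - 1)
outside = [z_m(1.1, m) for m in ms]  # a > 1: values blow up
```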
### 1.2. Limitations and other directions
Unfortunately, our approach cannot handle the more challenging case where
$\rho_{1},\rho_{2}$ have different mobilities or where $\rho_{1},\rho_{2}$
flow along different vector fields $V_{1},V_{2}$. These cases are known to be
extremely difficult; however, see [KM18] and [KT20] for some partial results.
When the mobilities are different, the analogue of (1.4) is a nonlinear
parabolic equation with potentially discontinuous coefficients. As a result,
one cannot do much with the limiting variables $\bar{\rho},\bar{q}$. When the
densities flow along different vector fields, verifying the upper
semicontinuity property (1.7) requires proving the weak convergence of the
terms $\rho_{1,k}\nabla q_{k}$ and $\rho_{2,k}\nabla q_{k}$. Since this
essentially requires knowing strong compactness for $\nabla q_{k}$ in the
first place, it completely defeats the purpose of the argument.
Nonetheless, it would be interesting to see if this strategy could be applied
to other systems of equations that have some parabolic structure. For
instance, if $\\{W_{i,j}\\}_{i,j\in\\{1,2\\}}$ are convolution kernels whose
symbols are dominated by $(-\Delta)^{1/2}$ i.e.
$\limsup_{|\xi|\to\infty}\frac{|\hat{W}_{i,j}(\xi)|}{|\xi|}=0$, then it should
be possible to extend our arguments to the more general system
(1.8)
$\begin{cases}\partial_{t}\rho_{1}-\nabla\cdot(\frac{\rho_{1}}{\rho}\nabla
q)+\nabla\cdot(\rho_{1}V)+W_{1,1}*\rho_{1}+W_{1,2}*\rho_{2}=\rho_{1}F_{1,1}\big{(}(z^{*})^{-1}(q),n\big{)}+\rho_{2}F_{1,2}\big{(}(z^{*})^{-1}(q),n\big{)},\\\
\partial_{t}\rho_{2}-\nabla\cdot(\frac{\rho_{2}}{\rho}\nabla
q)+\nabla\cdot(\rho_{2}V)+W_{2,1}*\rho_{1}+W_{2,2}*\rho_{2}=\rho_{1}F_{2,1}\big{(}(z^{*})^{-1}(q),n\big{)}+\rho_{2}F_{2,2}\big{(}(z^{*})^{-1}(q),n\big{)},\\\
\rho q=e(\rho)+e^{*}(q),\\\ \partial_{t}n-\alpha\Delta
n=-n(c_{1}\rho_{1}+c_{2}\rho_{2}),\end{cases}$
(perhaps with some other mild requirements on the $W_{i,j}$); however, we will
not pursue this line of inquiry further in this work.
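To make the symbol condition on the $W_{i,j}$ concrete, a Gaussian kernel satisfies it, since its Fourier transform decays faster than any power. A minimal symbolic check (the Gaussian is our own example, not one singled out above):

```python
import sympy as sp

xi = sp.symbols('xi', positive=True)
W_hat = sp.exp(-xi**2)  # symbol of a Gaussian convolution kernel

# The growth condition on the symbol: |W_hat(xi)| / |xi| -> 0 as |xi| -> oo.
decay = sp.limit(W_hat / xi, xi, sp.oo)
```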
### 1.3. Paper outline
The rest of the paper is organized as follows. In Section 2, we explore some
of the consequences of the change of variables $q=z^{*}(p)$. After this
section, we will focus only on the transformed system (1.3) until Section 5.
In Section 3, we provide some generic convex analysis and compensated
compactness arguments needed for the weak convergence of the primal and dual
energies. In Section 4, we analyze parabolic PDEs, establishing basic
estimates and the energy dissipation relation. Finally, in Section 5, we
combine our work to prove the main results of the paper.
## 2\. The transformation $q=z^{*}(p)$
In this section, we will explore some of the consequences of the
transformation $q=z^{*}(p)$. Note that the full verification of the
equivalence between the systems (1.1) and (1.3) will not occur until the final
section, Section 5. Before we begin our work in this section, let us give a
bit more motivation for introducing this change of variables. First of all,
the spatial derivative in the parabolic equation (1.4) is linear with respect
to $q$, whereas the spatial derivative in the parabolic equation for the $p$
variable (1.2) is not. As a result, establishing the strong $L^{2}$ gradient
compactness for $q$ is simpler than for $p$. Furthermore, the $q$ variable is
always nonnegative, while certain choices of $z$ will lead to a $p$ variable
that is not bounded from below. The lack of lower bounds on $p$ leads to some
very annoying integrability issues that are completely absent when one works
with $q$ instead.
We begin by establishing the fundamental properties of the transformation
$q=z^{*}(p)$. In particular, we will show that the transformation is
essentially invertible.
###### Lemma 2.1.
If $z$ is an energy satisfying (z1-z3), then $z^{*}$ is nonnegative,
nondecreasing, and $(z^{*})^{-1}$ is well defined and Lipschitz on
$z^{*}(\mathbb{R})\cap(0,\infty)$.
###### Proof.
Given any $b\in\mathbb{R}$, we have
$z^{*}(b)=\sup_{a\in\mathbb{R}}ab-z(a)\geq 0-z(0)=0.$
It is also clear that $\inf\partial z^{*}(b)\geq 0$ since $z(a)=+\infty$ for
any $a<0$. If $b_{1}<b_{2}$, then $z^{*}(b_{2})-z^{*}(b_{1})\geq
a_{1}(b_{2}-b_{1})\geq 0$ where $a_{1}$ is any element of $\partial
z^{*}(b_{1})$. Thus, $z^{*}$ is both nonnegative and nondecreasing.
Since $z$ is proper, we know that $z(a)\neq-\infty$ for all $a$. Thus given
some $a_{0}>0$, there must exist some $b_{0}\in\mathbb{R}$ such that
$b_{0}\leq\frac{z(a_{0})}{a_{0}}$. It then follows that for all $a\geq a_{0}$
$ab_{0}-z(a)\leq
ab_{0}-z(a_{0})-(a-a_{0})\frac{z(a_{0})}{a_{0}}=a(b_{0}-\frac{z(a_{0})}{a_{0}})\leq
0.$
Therefore, for all $b\leq b_{0}$
$\sup_{a\in\mathbb{R}}ab-z(a)=\sup_{a\in[0,a_{0}]}ab-z(a).$
Fix $\epsilon>0$ and let $a_{n}\in[0,a_{0}]$ be a decreasing sequence such
that $z^{*}(-n)\leq\epsilon-na_{n}-z(a_{n})$ (note that from the above logic
such choices of $a_{n}$ must exist once $n$ is sufficiently large). Since
$a_{n}$ is decreasing and bounded from below, it must converge to a limit
point $\bar{a}$ as $n\to\infty$. Thus,
$0\leq\liminf_{n\to\infty}z^{*}(-n)\leq\epsilon-z(\bar{a})-\limsup_{n\to\infty}na_{n},$
which immediately implies that $\bar{a}=0$. We can then rewrite the above as
$\liminf_{n\to\infty}z^{*}(-n)\leq\epsilon-\limsup_{n\to\infty}na_{n}\leq\epsilon.$
Therefore, $\liminf_{n\to\infty}z^{*}(-n)=0.$
It now follows that if $z^{*}(b)\in(0,\infty)$, then there must exist some
$b_{0}<b$ such that $2z^{*}(b_{0})\leq z^{*}(b)$. We then have
$\inf\partial z^{*}(b)\geq\frac{z^{*}(b)}{2(b-b_{0})}>0.$
Thus, $z^{*}$ is strictly increasing at $b$ whenever $z^{*}(b)\in(0,\infty)$.
Hence $(z^{*})^{-1}$ is well defined and Lipschitz on
$z^{*}(\mathbb{R})\cap(0,\infty)$. ∎
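The conclusions of Lemma 2.1 can also be observed numerically by approximating the supremum in the Legendre transform on a grid; the sketch below uses the $m=2$ porous-media energy $z(a)=a^{2}-a$ as our own illustrative example:

```python
import numpy as np

# Approximate z*(b) = sup_{a >= 0} (a*b - z(a)) on a grid, for the
# m = 2 porous-media energy z(a) = a^2 - a (our illustrative choice).
a = np.linspace(0.0, 10.0, 10001)
b = np.linspace(-3.0, 3.0, 601)
z = a**2 - a
z_star = np.max(np.outer(b, a) - z, axis=1)

# Closed form of the conjugate: z*(b) = max((b + 1)/2, 0)^2.
closed_form = np.maximum((b + 1) / 2, 0.0)**2
```

On this grid one can confirm that $z^{*}$ is nonnegative and nondecreasing, as the lemma asserts, and that it vanishes on $(-\infty,-1]$, the region where invertibility fails.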
While the invertibility of $q=z^{*}(p)$ can fail when $z^{*}(p)=0$, this will
not cause a problem for our study of the systems (1.1) and (1.3), as the
failure cannot happen on the support of $\rho$.
###### Lemma 2.2.
Suppose that $z$ satisfies assumptions (z1-z3). If $(z^{*})^{-1}$ cannot be
extended to a continuous function on $[0,\infty)\cap z^{*}(\mathbb{R})$, then
$\partial z^{*}(p)=\\{0\\}$ whenever $z^{*}(p)=0$.
###### Proof.
Let $p_{0}=\sup\\{p\in\mathbb{R}:z^{*}(p)=0\\}$. If $p_{0}=-\infty$, then the
statement is vacuously true.
Otherwise, we define $(z^{*})^{-1}(0)=p_{0}$. If
$z^{*}(\mathbb{R})\cap[0,\infty)=\\{0\\}$, then $(z^{*})^{-1}$ is trivially
continuous on $[0,\infty)\cap z^{*}(\mathbb{R})$. Thus, we only need to worry
about the case where $z^{*}(\mathbb{R})\cap(0,\infty)\neq\varnothing$ and
there exists $a_{0}\in\partial z^{*}(p_{0})$ such that $a_{0}>0$. Convexity
then implies that for any $p>p_{0}$ with $z^{*}(p)\neq+\infty$ we have
$\inf\partial z^{*}(p)\geq a_{0}$. Thus, the Lipschitz constant of
$(z^{*})^{-1}$ must be bounded in a neighborhood of zero and therefore the
extension $(z^{*})^{-1}(0)=p_{0}$ must be continuous. ∎
Perhaps the most significant aspect of the change of variables $q=z^{*}(p)$ is
the change in the energy controlling the primal and dual coupling. Recall that
we defined the new energy $e$ through the formula
(2.1)
$e(a)=\begin{cases}az(a)-2\int_{0}^{a}z(s)\,ds&\textup{if}\;\;z(a)\neq+\infty,\\\
+\infty&\textup{otherwise.}\end{cases}$
While this formula appears somewhat mysterious, $e$ is the unique (up to an
irrelevant additive constant) convex function such that $\partial
e(a)=z^{*}\circ\partial z(a)$ when $\partial z(a)\neq\varnothing$. Thus, when
$p\in\partial z(\rho)$ we will know that $q\in\partial e(\rho)$. Note that the
monotonicity of $z^{*}$ is key, otherwise $e$ would fail to be convex. The
following Lemma records the properties that $e$ inherits from $z$.
###### Lemma 2.3.
Suppose that $z$ is an energy satisfying (z1-z3). If we define
$e:\mathbb{R}\to\mathbb{R}\cup\\{+\infty\\}$ according to (2.1), then $e$
satisfies the following properties
1. (e1)
$e:\mathbb{R}\to\mathbb{R}\cup\\{+\infty\\}$ is proper, convex, and lower
semicontinuous.
2. (e2)
$e(a)=+\infty$ if $a<0$, $e(0)=0$, and $e$ is increasing on
$e^{-1}(\mathbb{R})$.
3. (e3)
$\limsup_{a\to 0^{+}}\frac{e(a)}{a}=0,$ and
$\liminf_{b\to\infty}\frac{e^{*}(b)}{b}>0$.
Furthermore, if $a\neq 0$, we have
$\partial e(a)=\\{ab-z(a):b\in\partial z(a)\\}=\\{z^{*}(b):b\in\partial
z(a)\\},$
and so $\partial e(a)$ is a singleton if and only if $\partial z(a)$ is a
singleton.
###### Proof.
It is clear that $e(0)=0$ and $e(a)=+\infty$ if $z(a)=+\infty$.
Given any two points $a_{0},a_{1}\in z^{-1}(\mathbb{R})$, convexity implies
that
(2.2) $2(a_{1}-a_{0})z(\frac{a_{1}+a_{0}}{2})\leq
2\int_{a_{0}}^{a_{1}}z(s)\,ds\leq(a_{1}-a_{0})(z(a_{0})+z(a_{1})).$
Thus, if $z(a)\neq+\infty$, then
$0\leq e(a)\leq az(a)-2az(\frac{a}{2})<\infty.$
Therefore $e(a)=+\infty$ if and only if $z(a)=+\infty$. Thus, the set
$e^{-1}(\mathbb{R})$ is an interval. Furthermore, the above inequalities
combined with $(z3)$ clearly imply that $\limsup_{a\to
0^{+}}\frac{e(a)}{a}=0.$
Again using (2.2),
$e(a_{1})-e(a_{0})=a_{0}(z(a_{1})-z(a_{0}))+(a_{1}-a_{0})z(a_{1})-2\int_{a_{0}}^{a_{1}}z(s)\,ds\geq
a_{0}(z(a_{1})-z(a_{0}))-(a_{1}-a_{0})z(a_{0})$
If $b_{0}\in\partial z(a_{0})$, then
$e(a_{1})-e(a_{0})\geq(a_{1}-a_{0})\big{(}a_{0}b_{0}-z(a_{0})\big{)}.$
Thus, $b\in\partial z(a)$ implies that $ab-z(a)\in\partial e(a)$ whenever
$a\in e^{-1}(\mathbb{R})$. Thus, the subdifferential of $e$ is nonempty
whenever the subdifferential of $z$ is nonempty. Combining this with the
equality $z^{-1}(\mathbb{R})=e^{-1}(\mathbb{R})$, it follows that $e$ is
convex, lower semicontinuous and proper.
Note that $b\in\partial z(a)$ implies that $z^{*}(b)=ab-z(a)$. Therefore,
$\\{ab-z(a):b\in\partial z(a)\\}=\\{z^{*}(b):b\in\partial z(a)\\}$. Since
$\int_{0}^{a}z(s)\,ds$ is everywhere differentiable on the interior of
$z^{-1}(\mathbb{R})$, every element of $\partial e(a)$ must have the form
$ab-z(a)$ for $b\in\partial z(a)$. Convexity implies that
$ab-z(a)\geq-z(0)=0$, thus $e$ is increasing on the interior of
$e^{-1}(\mathbb{R})$.
It remains to show that $\liminf_{b\to\infty}\frac{e^{*}(b)}{b}>0$. Since
$\limsup_{a\to 0^{+}}\frac{e(a)}{a}=0$, there must exist some $a_{0}>0$ such
that $e(a_{0})<\infty$. Thus,
$\liminf_{b\to\infty}\frac{e^{*}(b)}{b}\geq\liminf_{b\to\infty}a_{0}-\frac{e(a_{0})}{b}=a_{0}.$
∎
Parameter | $z$ energy $a\in[0,\infty)$ | $z^{*}$ energy $b\in\mathbb{R}$ | $e$ energy $a\in[0,\infty)$ | $e^{*}$ energy $b\in\mathbb{R}$
---|---|---|---|---
$m\in(0,\infty]\setminus\\{1\\}$ | $\frac{1}{m-1}(a^{m}-a)$ | $\max(\frac{(m-1)b+1}{m},0)^{m/(m-1)}$ | $\frac{1}{m+1}a^{m+1}$ | $\frac{m}{m+1}\max(b,0)^{\frac{m+1}{m}}$
$m\to 1$ | $a\log(a)-a$ | $\exp(b)$ | $\frac{1}{2}a^{2}$ | $\frac{1}{2}\max(b,0)^{2}$
Table 1. Some examples of the transformation from $z$ to $e$.
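The $e$-column of Table 1 can be checked symbolically against the defining formula (2.1). The snippet below verifies the porous-media row and the $m\to 1$ row for $a>0$; it is a sanity check, not part of any proof:

```python
import sympy as sp

a, s, m = sp.symbols('a s m', positive=True)

def e_from_z(z_of_s):
    # Formula (2.1): e(a) = a*z(a) - 2 * Integral_0^a z(s) ds
    return a*z_of_s.subs(s, a) - 2*sp.integrate(z_of_s, (s, 0, a))

# Porous-media row: z(s) = (s^m - s)/(m - 1)  gives  e(a) = a^(m+1)/(m+1)
e_pm = e_from_z((s**m - s)/(m - 1))
assert sp.simplify(e_pm - a**(m + 1)/(m + 1)) == 0

# m -> 1 row: z(s) = s*log(s) - s  gives  e(a) = a^2/2
e_log = e_from_z(s*sp.log(s) - s)
assert sp.simplify(e_log - a**2/2) == 0
```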
Now that we have established the properties of the transformation
$q=z^{*}(p)$, we can temporarily forget about the original system (1.1) and
focus on (1.3). We
will eventually return to (1.1) in the final section, where we show that
solutions to (1.3) can be transformed into solutions to (1.1). Until then, our
efforts will be concentrated on establishing the energy dissipation strategy
described in the introduction.
## 3\. Convex analysis and compensated compactness
In this section, we collect some results that we will need to establish the
weak convergence of the primal and dual energy terms. We begin by defining
some convex spaces that we will work with throughout the paper.
###### Definition 3.1.
Given an energy $e$ satisfying (e1-e3), we define
$X(e):=\\{\rho\in L^{1}_{\operatorname{\textup{loc}}}(Q_{\infty}):e(\rho)\in
L^{\infty}_{\operatorname{\textup{loc}}}([0,\infty);L^{1}(\mathbb{R}^{d}))\\},$
$Y(e^{*}):=\\{q\in L^{1}_{\operatorname{\textup{loc}}}(Q_{\infty}):e^{*}(q)\in
L^{1}_{\operatorname{\textup{loc}}}([0,\infty);L^{1}(\mathbb{R}^{d}))\\}.$
We are now ready to introduce a result that is one of the cornerstones of our
argument.
###### Proposition 3.2.
Let $e:\mathbb{R}\to\mathbb{R}\cup\\{+\infty\\}$ be an energy satisfying
$(e1-e3)$. Let $e_{k}:\mathbb{R}\to\mathbb{R}\cup\\{+\infty\\}$ be a sequence
of energies satisfying (e1-e3) such that $e_{k}$ converges pointwise
everywhere to $e$. Suppose we have a sequence of nonnegative density and
pressure functions $\rho_{k}\in X(e_{k})$, $q_{k}\in Y(e^{*}_{k})$ such that
$\rho_{k}q_{k}=e_{k}(\rho_{k})+e^{*}_{k}(q_{k})$ almost everywhere and
$\rho_{k},q_{k}$ converge weakly in
$L^{1}_{\operatorname{\textup{loc}}}(Q_{\infty})$ to limits $\rho,q\in
L^{1}_{\operatorname{\textup{loc}}}(Q_{\infty})$ respectively. If $\rho q\in
L^{1}_{\operatorname{\textup{loc}}}([0,\infty);L^{1}(\mathbb{R}^{d}))$ and for
every nonnegative $\varphi\in C^{\infty}_{c}(Q_{\infty})$
$\limsup_{k\to\infty}\int_{Q_{\infty}}\varphi\rho_{k}q_{k}\leq\int_{Q_{\infty}}\varphi\rho
q,$
then $\rho\in X(e),q\in Y(e^{*})$, $\rho q=e(\rho)+e^{*}(q)$ almost
everywhere, and $\rho_{k}q_{k},e_{k}(\rho_{k}),e_{k}^{*}(q_{k})$ converge
weakly in
$L^{1}_{\operatorname{\textup{loc}}}([0,\infty);L^{1}(\mathbb{R}^{d}))$ to
$\rho q,e(\rho),e^{*}(q)$ respectively.
###### Proof.
Given some nonnegative $\varphi\in C_{c}^{\infty}(Q_{\infty})$, let $D$ be a
compact set containing the support of $\varphi$. From our assumptions, we have
$\int_{Q_{\infty}}\varphi\rho
q\geq\limsup_{k\to\infty}\int_{Q_{\infty}}\varphi\rho_{k}q_{k}=\limsup_{k\to\infty}\int_{Q_{\infty}}\varphi
e_{k}(\rho_{k})+\varphi e^{*}_{k}(q_{k}).$
Fix some simple functions $g_{1},g_{2}\in L^{\infty}(D)$ such that every value
of $g_{1}$ is a value where $e_{k}^{*}$ converges to $e^{*}$ (c.f. Lemma A.1).
It then follows that
$\limsup_{k\to\infty}\int_{Q_{\infty}}\varphi\big{(}e_{k}(\rho_{k})+e^{*}_{k}(q_{k})\big{)}\geq\limsup_{k\to\infty}\int_{Q_{\infty}}\varphi\big{(}g_{1}\rho_{k}-e^{*}_{k}(g_{1})+g_{2}q_{k}-e_{k}(g_{2})\big{)}=\int_{Q_{\infty}}\varphi\big{(}g_{1}\rho-e^{*}(g_{1})+g_{2}q-e(g_{2})\big{)}.$
Taking a supremum over $g_{1},g_{2}$, we can conclude that
$\int_{Q_{\infty}}\varphi\rho
q\geq\limsup_{k\to\infty}\int_{Q_{\infty}}\varphi\big{(}e_{k}(\rho_{k})+e^{*}_{k}(q_{k})\big{)}\geq\int_{Q_{\infty}}\varphi\big{(}e(\rho)+e^{*}(q)\big{)}.$
On the other hand, Young’s inequality immediately implies that
$\rho q\leq e(\rho)+e^{*}(q)$
almost everywhere. Thus, $\rho q=e(\rho)+e^{*}(q)$ almost everywhere. This
also now implies that $\rho\in X(e)$ and $q\in Y(e^{*})$.
The previous calculation shows that $e_{k}(\rho_{k})+e_{k}^{*}(q_{k})$ is
uniformly bounded in
$L^{1}_{\operatorname{\textup{loc}}}([0,\infty);L^{1}(\mathbb{R}^{d}))$. Thus,
for any time $T>0$, there exist $w_{1},w_{2}\in C(Q_{T})^{*}$ such that
$e_{k}(\rho_{k}),e_{k}^{*}(q_{k})$ converge (along a subsequence that we will
not relabel) to $w_{1},w_{2}$ respectively. Arguing as in the first paragraph,
it follows that
$\int_{Q_{T}}\varphi w_{1}=\liminf_{k\to\infty}\int_{Q_{T}}\varphi
e_{k}(\rho_{k})\geq\int_{Q_{T}}\varphi e(\rho),\quad\int_{Q_{T}}\varphi
w_{2}=\liminf_{k\to\infty}\int_{Q_{T}}\varphi
e_{k}^{*}(q_{k})\geq\int_{Q_{T}}\varphi e^{*}(q).$
Hence,
$\int_{Q_{T}}\varphi|w_{1}-e(\rho)|+\varphi|w_{2}-e^{*}(q)|=\int_{Q_{T}}\varphi\big{(}w_{1}-e(\rho)+w_{2}-e^{*}(q)\big{)}=$
$\limsup_{k\to\infty}\int_{Q_{T}}\varphi\big{(}e_{k}(\rho_{k})+e^{*}_{k}(q_{k})-e(\rho)-e^{*}(q)\big{)}=\limsup_{k\to\infty}\int_{Q_{T}}\varphi\big{(}\rho_{k}q_{k}-\rho
q\big{)}\leq 0.$
Thus, $w_{1}=e(\rho)$ and $w_{2}=e^{*}(q)$. Since $w_{1},w_{2}$ and $T>0$ were
arbitrary, it follows that $e(\rho),e^{*}(q)$ are the only weak limit points
of $e_{k}(\rho_{k}),e_{k}^{*}(q_{k})$ in
$L^{1}_{\operatorname{\textup{loc}}}([0,\infty);L^{1}(\mathbb{R}^{d}))$. Thus,
the full sequences $e_{k}(\rho_{k}),e_{k}^{*}(q_{k})$ must converge weakly in
$L^{1}_{\operatorname{\textup{loc}}}([0,\infty);L^{1}(\mathbb{R}^{d}))$ to
$e(\rho)$ and $e^{*}(q)$ respectively. The weak
$L^{1}_{\operatorname{\textup{loc}}}([0,\infty);L^{1}(\mathbb{R}^{d}))$
convergence of $\rho_{k}q_{k}$ to $\rho q$ is an immediate consequence.
∎
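The pointwise duality relation $\rho q=e(\rho)+e^{*}(q)$ used above is exactly the equality case of Young's inequality, i.e. $q\in\partial e(\rho)$. For the quadratic energy $e(a)=\frac{1}{2}a^{2}$ on $a\geq 0$, with conjugate $e^{*}(b)=\frac{1}{2}\max(b,0)^{2}$, this can be checked numerically (the choice of energy and sample points is ours, purely for illustration):

```python
import numpy as np

# Equality case of Young's inequality for the quadratic energy
# e(a) = a^2/2 on a >= 0, with conjugate e*(b) = max(b, 0)^2 / 2
# (our illustrative choice of energy).
def e(a):
    return 0.5 * a**2

def e_star(b):
    return 0.5 * np.maximum(b, 0.0)**2

rho = np.array([0.0, 0.5, 1.0, 2.0])
gap_eq = e(rho) + e_star(rho) - rho * rho           # q = rho lies in d e(rho)
gap_off = e(rho) + e_star(rho + 0.3) - rho * (rho + 0.3)

# Young's inequality: both gaps are >= 0; the gap vanishes exactly
# when q lies in the subdifferential of e at rho.
```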
Of course, to even be able to use Proposition 3.2, we somehow need to know an
upper semicontinuity type property for the product $\rho_{k}q_{k}$. In
practice, this seems to require establishing the weak convergence of
$\rho_{k}q_{k}$ to $\rho q$. Luckily, the following “compensated
compactness”-type Lemma shows that the weak convergence of the product can
hold even when the strong convergence of both $\rho_{k}$ and $q_{k}$ is
unknown. Unlike typical compensated compactness arguments that decompose the
codomain of the function, the following compensated compactness argument is
based on a decomposition of the domain of the functions. Indeed, we show that
if $\rho_{k}$ has some time regularity and $q_{k}$ has some space regularity
then their product weakly converges. This argument was inspired by the proof
of the main Theorem in [MRCS10], although we would not be surprised if this
result was already established in an earlier work.
###### Lemma 3.3.
Fix some $r\in(1,\infty)$ and let $r^{\prime}$ be the Hölder conjugate of $r$.
Let $Z_{r}=L^{r}_{\operatorname{\textup{loc}}}(Q_{\infty})\times
L^{r^{\prime}}_{\operatorname{\textup{loc}}}(Q_{\infty})$ and let $\eta$ be a
spatial mollifier. Suppose that $(u_{k},v_{k})\in Z_{r}$ is a sequence that
converges weakly in $Z_{r}$ to a limit point $(u,v)\in Z_{r}$. If $u_{k}$ is
equicontinuous with respect to space in
$L^{r}_{\operatorname{\textup{loc}}}(Q_{\infty})$ and for any $\epsilon>0$,
$\eta_{\epsilon}*v_{k}$ is equicontinuous with respect to space and time in
$L^{r^{\prime}}_{\operatorname{\textup{loc}}}(Q_{\infty})$, then $u_{k}v_{k}$
converges weakly in $(C_{c}(Q_{\infty}))^{*}$ to $uv$.
###### Proof.
Define $v_{k,\epsilon}:=\eta_{\epsilon}*v_{k}$ and
$v_{\epsilon}:=\eta_{\epsilon}*v$. For $\epsilon>0$ fixed and any compact set
$D\subset Q_{\infty}$, the Riesz-Fréchet-Kolmogorov compactness theorem
implies that $v_{k,\epsilon}$ converges strongly in $L^{r^{\prime}}(D)$ to
$v_{\epsilon}$ as $k\to\infty$.
Given $\varphi\in C_{c}^{\infty}(Q_{\infty})$, we must have
$\lim_{\epsilon\to 0}\int_{Q_{\infty}}\varphi(v-v_{\epsilon})u=0,$
and
$\lim_{k\to\infty}\int_{Q_{\infty}}\varphi(v_{k,\epsilon}-v_{\epsilon})u_{k}+v_{\epsilon}(u-u_{k})=0.$
Thus, to prove the weak convergence of $u_{k}v_{k}$ to $uv$, it will suffice
to show that
$\lim_{\epsilon\to
0}\lim_{k\to\infty}\int_{Q_{\infty}}\varphi(v_{k}-v_{k,\epsilon})u_{k}=0.$
Rearranging the convolution, this is equivalent to showing
$\lim_{\epsilon\to
0}\lim_{k\to\infty}\int_{Q_{\infty}}v_{k}\big{(}\eta_{\epsilon}*\varphi
u_{k}-\varphi u_{k}\big{)}=0.$
Choose some compact set $D\subset Q_{\infty}$ such that for any $\epsilon$
sufficiently small, the supports of $\varphi$ and $\eta_{\epsilon}*\varphi$ are
contained in $D$. We then have the estimate
$\Big{|}\int_{Q_{\infty}}v_{k}\big{(}\eta_{\epsilon}*\varphi u_{k}-\varphi
u_{k}\big{)}\Big{|}\lesssim\lVert
v_{k}\rVert_{L^{r^{\prime}}(D)}\big{(}\lVert\varphi\rVert_{L^{\infty}(Q_{\infty})}\lVert
u_{k}-\eta_{\epsilon}*u_{k}\rVert_{L^{r}(D)}+\epsilon\lVert
u_{k}\rVert_{L^{r}(D)}\lVert\nabla\varphi\rVert_{L^{\infty}(Q_{\infty})}\big{)}.$
The weak convergence of $(u_{k},v_{k})$ to $(u,v)$ in $Z_{r}$ implies that
$\lVert u_{k}\rVert_{L^{r}(D)}+\lVert v_{k}\rVert_{L^{r^{\prime}}(D)}$ is
bounded with respect to $k$. Spatial equicontinuity gives us
$\lim_{\epsilon\to 0}\sup_{k}\lVert
u_{k}-\eta_{\epsilon}*u_{k}\rVert_{L^{r}(D)}=0.$
Thus, it follows that
$\lim_{\epsilon\to
0}\sup_{k}\Big{|}\int_{Q_{\infty}}v_{k}\big{(}\eta_{\epsilon}*\varphi
u_{k}-\varphi u_{k}\big{)}\Big{|}=0,$
and so we can conclude that $u_{k}v_{k}$ converges in
$(C_{c}(Q_{\infty}))^{*}$ to $uv$. ∎
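A finite-grid toy illustration of why the domain-splitting hypotheses matter (the oscillating sequences below are our own example, not taken from the text): a factor oscillating only in time, paired with a factor oscillating only in space, produces the correct weak limit of the product, while two factors oscillating in time do not.

```python
import numpy as np

# Toy illustration of the domain-splitting hypotheses on the unit
# space-time square (the oscillating sequences are our own example).
N, k = 1000, 50
t = (np.arange(N) + 0.5) / N
T, X = np.meshgrid(t, t, indexing="ij")

u = np.sin(2 * np.pi * k * T)        # u_k: oscillates in time, smooth in space
v_space = np.sin(2 * np.pi * k * X)  # v_k: oscillates in space only
v_time = np.sin(2 * np.pi * k * T)   # oscillates in time: hypotheses fail

# All three sequences converge weakly to 0, so the product should
# integrate to ~0 when the lemma applies:
I_ok = (u * v_space).mean()   # ~ 0, matching (weak lim u) * (weak lim v)
I_bad = (u * v_time).mean()   # ~ 1/2, not the product of the weak limits
```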
## 4\. Energy dissipation and estimates
We will now begin to analyze the parabolic structure of the equation (1.4). In
order to do this, we will need to upgrade the spaces $X(e),Y(e^{*})$ into
spaces that are more appropriate for solving PDEs.
###### Definition 4.1.
Given an energy $e$ satisfying (e1-e3), we define
$\mathcal{X}(e):=\\{\rho\in X(e):\rho\in
L^{\infty}_{\operatorname{\textup{loc}}}([0,\infty);L^{1}(\mathbb{R}^{d})\cap
L^{\infty}(\mathbb{R}^{d}))\cap
H^{1}_{\operatorname{\textup{loc}}}([0,\infty);H^{-1}(\mathbb{R}^{d}))\\},$
$\mathcal{Y}(e^{*}):=\\{q\in Y(e^{*}):q\in
L^{\frac{2d+4}{d+4}}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}_{\operatorname{\textup{loc}}}(\mathbb{R}^{d}))\cap
L^{2}_{\operatorname{\textup{loc}}}([0,\infty);\dot{H}^{1}(\mathbb{R}^{d}))\\}.$
Note that the seemingly strange choice of time integrability for $\mathcal{Y}$
will become clear later.
###### Proposition 4.2.
Given an energy $e:\mathbb{R}\to\mathbb{R}\cup\\{+\infty\\}$ satisfying
(e1-e3), suppose that $e(\rho^{0})\in L^{1}(\mathbb{R}^{d})\cap
L^{\infty}(\mathbb{R}^{d})$ and $\rho^{0}\in L^{1}(\mathbb{R}^{d})$. Let
$\rho\in\mathcal{X}(e)$ be a density function and $q\in\mathcal{Y}(e^{*})$ a
pressure function that satisfy the duality relation $\rho q=e(\rho)+e^{*}(q)$
almost everywhere. Suppose that $\mu\in L^{\infty}(\frac{1}{\rho})$ is a
growth rate and $V\in
L^{2}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}(\mathbb{R}^{d}))$ is a
vector field such that $\nabla\cdot V\in L^{\infty}(Q_{\infty})$. If
$\rho,q$ satisfy, for every $\psi\in
W^{1,1}_{c}([0,\infty);\mathcal{Y}(e^{*}))$, the weak formulation of the
parabolic equation
(4.1) $\int_{\mathbb{R}^{d}}\psi(0,x)\rho^{0}(x)\,dx=\int_{Q_{\infty}}\nabla
q\cdot\nabla\psi-\rho\partial_{t}\psi-\rho V\cdot\nabla\psi-\mu\psi,$
then for any nonnegative $\omega\in W^{1,\infty}_{c}([0,\infty))$ that depends
only on time, we have the dissipation relation
(4.2)
$\int_{\mathbb{R}^{d}}\omega(0)e(\rho^{0}(x))\,dx=\int_{Q_{\infty}}-e(\rho)\partial_{t}\omega+\omega|\nabla
q|^{2}+\omega e^{*}(q)\nabla\cdot V-\omega\mu q.$
###### Proof.
Let $\tilde{q}\in C^{\infty}_{c}(\mathbb{R}^{d})$ such that
$e^{*}(\tilde{q})\in L^{1}(\mathbb{R}^{d})$. Extend $q$ backwards in time by
defining $q(-t,x)=\tilde{q}(x)$ for all $t\in(0,\infty)$. Fix $\epsilon>0$,
and define
$q_{\epsilon}(t,x):=\frac{1}{\epsilon}\int_{t-\epsilon}^{t}q(s,x)\,ds$
for all $(t,x)\in\mathbb{R}\times\mathbb{R}^{d}$.
By Jensen’s inequality, $q_{\epsilon}\in\mathcal{Y}(e^{*})$, and a direct
computation shows that $\partial_{t}q_{\epsilon}$ is a linear combination of
two $\mathcal{Y}(e^{*})$ functions for any $\epsilon>0$. Given any nonnegative
$\omega\in W^{1,\infty}_{c}([0,\infty))$ that is a function of time only, it
now follows that $q_{\epsilon}\omega$ is a valid test function for the weak
equation (4.1). Thus, we have
(4.3)
$\int_{\mathbb{R}^{d}}q_{\epsilon}(0,x)\omega(0)\rho^{0}(x)\,dx=\int_{Q_{\infty}}-\rho\partial_{t}(\omega
q_{\epsilon})+(\nabla q-\rho V)\cdot\nabla(q_{\epsilon}\omega)-\mu\omega
q_{\epsilon}.$
Note that for almost every $(t,x)\in Q_{\infty}$
$\rho\partial_{t}(\omega
q_{\epsilon})=\rho(t,x)q_{\epsilon}(t,x)\partial_{t}\omega(t,x)+\omega(t,x)\frac{q(t,x)-q(t-\epsilon,x)}{\epsilon}\rho(t,x).$
Hence, we can apply Young’s inequality to deduce that
(4.4)
$(\frac{q(t,x)-q(t-\epsilon,x)}{\epsilon})\rho(t,x)\geq\frac{e^{*}(q(t,x))-e^{*}(q(t-\epsilon,x))}{\epsilon}$
By defining
$(e^{*}(q))_{\epsilon}:=\frac{1}{\epsilon}\int_{t-\epsilon}^{t}e^{*}(q(s,x))\,ds$
we can write the above inequality in the more compact form
$\rho\partial_{t}q_{\epsilon}\geq\partial_{t}(e^{*}(q))_{\epsilon}$
Plugging this into (4.3), we get the inequality
$\int_{\mathbb{R}^{d}}q_{\epsilon}(0,x)\omega(0)\rho^{0}(x)\,dx\leq\int_{Q_{\infty}}-\rho
q_{\epsilon}\partial_{t}\omega-\omega\partial_{t}(e^{*}(q))_{\epsilon}+(\nabla
q-\rho V)\cdot\nabla(q_{\epsilon}\omega)-\mu\omega q_{\epsilon}.$
Moving time derivatives back on to $\omega$, we get the equivalent inequality
(4.5)
$\int_{\mathbb{R}^{d}}\omega(0)\Big{(}q_{\epsilon}(0,x)\rho^{0}(x)-e^{*}\big{(}q_{\epsilon}(0,x)\big{)}\Big{)}\,dx$
$\leq\int_{Q_{\infty}}\partial_{t}\omega((e^{*}(q))_{\epsilon}-\rho
q_{\epsilon})+(\nabla q-\rho V)\cdot\nabla(q_{\epsilon}\omega)-\mu\omega
q_{\epsilon}.$
Note that we also have
$\int_{\mathbb{R}^{d}}\omega(0)\Big{(}q_{\epsilon}(0,x)\rho^{0}(x)-e^{*}\big{(}q_{\epsilon}(0,x)\big{)}\Big{)}\,dx=\int_{\mathbb{R}^{d}}\omega(0)\Big{(}\tilde{q}(x)\rho^{0}(x)-e^{*}\big{(}\tilde{q}(x)\big{)}\Big{)}\,dx$
thanks to our construction of $q_{\epsilon}$.
Since all of the time derivatives are now on $\omega$, we can safely send
$\epsilon\to 0$. Thus, it follows that
$\int_{\mathbb{R}^{d}}\omega(0)\Big{(}\tilde{q}(x)\rho^{0}(x)-e^{*}\big{(}\tilde{q}(x)\big{)}\Big{)}\,dx$
$\leq\int_{Q_{\infty}}\partial_{t}\omega(e^{*}(q)-\rho q)+\omega|\nabla
q|^{2}+\omega e^{*}(q)\nabla\cdot V-\mu\omega q$
where we have used the fact that $\nabla e^{*}(q)=\rho\nabla q$ (note that
this is just a consequence of the chain rule for Sobolev functions).
Exploiting the duality relation $\rho q=e(\rho)+e^{*}(q)$, we have arrived at
the inequality
(4.6)
$\int_{\mathbb{R}^{d}}\omega(0)\Big{(}\tilde{q}(x)\rho^{0}(x)-e^{*}\big{(}\tilde{q}(x)\big{)}\Big{)}\leq\int_{Q_{\infty}}-e(\rho)\partial_{t}\omega+\omega|\nabla
q|^{2}+\omega e^{*}(q)\nabla\cdot V-\omega\mu q.$
Since $\tilde{q}$ was arbitrary, taking a supremum over $\tilde{q}$ we obtain
one direction of the dissipation relation.
To get the other direction, we instead smooth $q$ forwards in time by defining
$\bar{q}_{\epsilon}:=\frac{1}{\epsilon}\int_{t}^{t+\epsilon}q(s,x)\,ds.$
The argument will then proceed identically to the above except that the
forward-in-time smoothing does not allow us to conclude that
$\bar{q}_{\epsilon}(0,x)=\tilde{q}$. Luckily, Young’s inequality is now in our
favor and so we just use
$\int_{\mathbb{R}^{d}}\omega(0)\Big{(}\bar{q}_{\epsilon}(0,x)\rho^{0}(x)-e^{*}\big{(}\bar{q}_{\epsilon}(0,x)\big{)}\Big{)}\,dx\leq\int_{\mathbb{R}^{d}}\omega(0)e(\rho^{0}(x))dx.$
∎
In the next proposition we collect some a priori estimates for solutions to
(1.4). In fact, we will consider a slightly more general equation where we add
an additional viscosity term $-\gamma\Delta\rho$ where the constant $\gamma$
is possibly zero. As we will see, the estimates will give us uniform control
when we consider sequences of solutions.
###### Proposition 4.3.
Let $e$ be an energy function satisfying (e1-e3), let $V\in
L^{2}_{\operatorname{\textup{loc}}}(Q_{\infty})$ be a vector field such that
$\nabla\cdot V\in L^{\infty}(Q_{\infty})$, let $\frac{\mu}{\rho}\in
L^{\infty}(Q_{\infty})$ and let $\gamma\geq 0$ be a constant. Suppose that
$\rho\in\mathcal{X}(e)\cap
L^{2}_{\operatorname{\textup{loc}}}([0,\infty);H^{1}(\mathbb{R}^{d}))$,
$q\in\mathcal{Y}(e^{*})$ satisfy the duality relation $\rho
q=e(\rho)+e^{*}(q)$ almost everywhere. If $e(\rho^{0})\in
L^{1}(\mathbb{R}^{d})$ and the variables satisfy the weak equation
(4.7)
$\int_{\mathbb{R}^{d}}\psi(0,x)\rho^{0}(x)\,dx=\int_{Q_{\infty}}\gamma\nabla\rho\cdot\nabla\psi+\nabla
q\cdot\nabla\psi-\rho\partial_{t}\psi-\rho V\cdot\nabla\psi-\mu\psi,$
for every test function $\psi\in
W^{1,1}_{c}([0,\infty);L^{1}(\rho)\cap\dot{H}^{1}(\mathbb{R}^{d}))$, then for
any nonnegative $\omega\in W^{1,\infty}_{c}([0,\infty))$ that depends only on
time and for every $m\in(1,\infty)$, we have the dissipation inequalities
(4.8) $\int_{Q_{\infty}}-e(\rho)\partial_{t}\omega+\omega|\nabla q|^{2}+\omega
e^{*}(q)\nabla\cdot V-\omega\mu
q\leq\int_{\mathbb{R}^{d}}\omega(0)e(\rho^{0}(x))\,dx$ (4.9)
$\int_{Q_{\infty}}\omega\gamma(m-1)\rho^{m-2}|\nabla\rho|^{2}-\rho^{m}\big{(}\frac{1}{m}\partial_{t}\omega+\omega(\frac{\mu}{\rho}-\frac{m-1}{m}\nabla\cdot
V)\big{)}\leq\int_{\mathbb{R}^{d}}\frac{\omega(0)}{m}(\rho^{0})^{m}\,dx$
and if we set $\beta=\inf\\{b\in\mathbb{R}:e^{*}(b)\geq 1\\}$ then the
following estimates hold for almost all $T\in[0,\infty)$:
(4.10)
$\gamma\lVert\nabla\rho\rVert_{L^{2}(Q_{T})}^{2}\leq\lVert\rho^{0}\rVert_{L^{2}(\mathbb{R}^{d})}^{2}+\lVert\rho\rVert_{L^{2}(Q_{T})}^{2}\big{(}\lVert\frac{\mu}{\rho}\rVert_{L^{\infty}(Q_{T})}+\lVert\nabla\cdot
V\rVert_{L^{\infty}(Q_{\infty})}),$ (4.11)
$\lVert\rho(T,\cdot)\rVert_{L^{1}(\mathbb{R}^{d})}\leq\lVert\rho^{0}\rVert_{L^{1}(\mathbb{R}^{d})}\exp(T\lVert\frac{\mu}{\rho}\rVert_{L^{\infty}(Q_{T})})$
(4.12)
$\lVert\partial_{t}\rho\rVert_{L^{2}([0,T];H^{-1}(\mathbb{R}^{d}))}\leq\gamma\lVert\nabla\rho\rVert_{L^{2}(Q_{T})}+\lVert\nabla
q\rVert_{L^{2}(Q_{T})}+\lVert\mu\rVert_{L^{2}(Q_{T})}+\lVert\rho
V\rVert_{L^{2}(Q_{T})}$ (4.13)
$\lVert\rho(T,\cdot)\rVert_{L^{\infty}(\mathbb{R}^{d})}\leq\lVert\rho^{0}\rVert_{L^{\infty}(\mathbb{R}^{d})}\exp\big{(}2T(\lVert\nabla\cdot
V\rVert_{L^{\infty}(Q_{T})}+\lVert\frac{\mu}{\rho}\rVert_{L^{\infty}(Q_{T})})\big{)},$
(4.14) $\lVert\nabla
q\rVert_{L^{2}(Q_{T})}^{2}\lesssim_{d}\int_{\mathbb{R}^{d}}e(\rho^{0})\,dx+\max(\beta,1)\Big{(}\lVert\rho\rVert_{L^{1}(Q_{T})}+\lVert\rho\rVert_{L^{\infty}([0,T];L^{1}(\mathbb{R}^{d}))}^{\frac{2}{d}}\lVert\rho\rVert_{L^{2}(Q_{T})}^{2}\Big{)}\big{(}1+\lVert\frac{\mu}{\rho}\rVert_{L^{\infty}(Q_{T})}+\lVert\nabla\cdot
V\rVert_{L^{\infty}(Q_{T})}\big{)}^{2},$ (4.15) $\lVert
e^{*}(q)\rVert_{L^{1}(Q_{T})}+\lVert
e(\rho)\rVert_{L^{1}(Q_{T})}\lesssim_{d}\beta\lVert\rho\rVert_{L^{1}(Q_{T})}+(\beta\lVert\rho\rVert_{L^{\infty}([0,T];L^{1}(\mathbb{R}^{d}))})^{\frac{1}{d}}\big{(}\lVert\rho\rVert_{L^{2}(Q_{T})}\lVert\nabla
q\rVert_{L^{2}(Q_{T})}\big{)},$ (4.16) $\lVert
e^{*}(q)\rVert_{L^{\frac{2d+4}{d+4}}([0,T];L^{2}(\mathbb{R}^{d}))}\lesssim_{d}\lVert
e^{*}(q)\rVert_{L^{1}(Q_{T})}^{\frac{2}{d+2}}\lVert\nabla
q\rVert_{L^{2}(Q_{T})}^{\frac{d}{d+2}}\lVert\rho\rVert_{L^{\infty}(Q_{T})}^{\frac{d}{d+2}}.$
and for any compact set $K\subset\mathbb{R}^{d}$,
(4.17) $\lVert q\rVert_{L^{\frac{2d+4}{d+4}}([0,T];L^{2}(K))}\lesssim_{d}\beta
T|K|+\beta\lVert e^{*}(q)\rVert_{L^{1}(Q_{T})}^{\frac{2}{d+2}}\lVert\nabla
q\rVert_{L^{2}(Q_{T})}^{\frac{d}{d+2}}\lVert\rho\rVert_{L^{\infty}(Q_{T})}^{\frac{d}{d+2}}.$
###### Proof.
The dissipation inequalities (4.8) and (4.9) follow from choosing the test
functions $q$ and $\rho^{m-1}$, respectively. These test functions do not have
the required time regularity; however, this technicality can be overcome by an
argument identical to that of Proposition 4.2. In addition, note that in
both inequalities we have dropped a term involving $\nabla\rho\cdot\nabla q$,
which is nonnegative thanks to the duality relation.
Estimates (4.11) and (4.12) are straightforward consequences of the weak
equation (4.7). Estimate (4.10) follows from (4.9) with $m=2$. Estimate (4.13)
follows from applying a Gronwall argument to (4.9) and then sending
$m\to\infty$.
The estimates (4.14)-(4.17) are all linked. We begin by fixing a time
$T\in[0,\infty)$ and considering $\lVert\rho q\rVert_{L^{1}(Q_{T})}$. Define
$\tilde{q}:=\max(q,\beta)-\beta$. It is then clear that
$\lVert\rho
q\rVert_{L^{1}(Q_{T})}\leq\beta\lVert\rho\rVert_{L^{1}(Q_{T})}+\lVert\rho\tilde{q}\rVert_{L^{1}(Q_{T})},\quad\lVert\nabla\tilde{q}\rVert_{L^{2}(Q_{T})}\leq\lVert\nabla
q\rVert_{L^{2}(Q_{T})}.$
Working in Fourier space, we have
$\lVert\rho\tilde{q}\rVert_{L^{1}(Q_{T})}\leq\int_{0}^{T}\int_{\mathbb{R}^{d}}|\hat{\rho}(t,\xi)\hat{\tilde{q}}(t,\xi)|\,d\xi\,dt\leq\int_{0}^{T}|B_{R}|\lVert\rho(t,\cdot)\rVert_{L^{1}(\mathbb{R}^{d})}\lVert\tilde{q}(t,\cdot)\rVert_{L^{1}(\mathbb{R}^{d})}+\int_{|\xi|>R}|\hat{\rho}(t,\xi)\hat{\tilde{q}}(t,\xi)|\,d\xi\,dt$
$\leq
T|B_{R}|\lVert\rho\rVert_{L^{\infty}([0,T];L^{1}(\mathbb{R}^{d}))}\lVert\tilde{q}\rVert_{L^{1}(Q_{T})}+R^{-1}\lVert\rho\rVert_{L^{2}(Q_{T})}\lVert\nabla\tilde{q}\rVert_{L^{2}(Q_{T})}$
where $R>0$ and $B_{R}$ is the ball of radius $R$. Optimizing over $R$ and
dropping dimensional constants, it follows that
$\int_{Q_{T}}\rho\tilde{q}\lesssim_{d}\big{(}\lVert\rho\rVert_{L^{\infty}([0,T];L^{1}(\mathbb{R}^{d}))}\lVert\tilde{q}\rVert_{L^{1}(Q_{T})}\big{)}^{\frac{1}{d+1}}\big{(}\lVert\rho\rVert_{L^{2}(Q_{T})}\lVert\nabla
q\rVert_{L^{2}(Q_{T})}\big{)}^{\frac{d}{d+1}}.$
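The optimization over $R$ in the step above is elementary. Since $|B_{R}|\lesssim_{d}R^{d}$, one is minimizing a function of the form $g(R)=aR^{d}+bR^{-1}$ over $R>0$; a sketch of the computation (with $a,b$ standing for the two norm products in the previous display):

```latex
% Minimize g(R) = a R^d + b R^{-1} over R > 0, where (up to T and |B_1|)
%   a = \lVert\rho\rVert_{L^\infty([0,T];L^1)}\lVert\tilde q\rVert_{L^1(Q_T)},
%   b = \lVert\rho\rVert_{L^2(Q_T)}\lVert\nabla\tilde q\rVert_{L^2(Q_T)}.
g'(R) = d\,a R^{d-1} - b R^{-2} = 0
\quad\Longrightarrow\quad
R_{*} = \Big(\frac{b}{d\,a}\Big)^{\frac{1}{d+1}},
\qquad
g(R_{*}) \lesssim_{d} a^{\frac{1}{d+1}} b^{\frac{d}{d+1}}.
```

This produces exactly the exponents $\frac{1}{d+1}$ and $\frac{d}{d+1}$ appearing in the display above.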
If $b>\beta$ and $e^{*}(b)\neq+\infty$, then it follows from the definition of
$\beta$ that $\beta^{-1}=\liminf_{\epsilon\to
0^{+}}\frac{e^{*}(\beta+\epsilon)-e^{*}(0)}{\beta+\epsilon}\leq\inf\partial
e^{*}(b)$. Therefore,
$\max(e^{*}(q)-e^{*}(\beta),0)\geq\beta^{-1}\tilde{q}.$
It then follows that
$\lVert\tilde{q}\rVert_{L^{1}(Q_{T})}\leq\beta\lVert
e^{*}(q)\rVert_{L^{1}(Q_{T})}\leq\beta\lVert\rho q\rVert_{L^{1}(Q_{T})}.$
As a result,
$\lVert\rho
q\rVert_{L^{1}(Q_{T})}\lesssim_{d}\beta\lVert\rho\rVert_{L^{1}(Q_{T})}+\big{(}\beta\lVert\rho\rVert_{L^{\infty}([0,T];L^{1}(\mathbb{R}^{d}))}\lVert\rho
q\rVert_{L^{1}(Q_{T})}\big{)}^{\frac{1}{d+1}}\big{(}\lVert\rho\rVert_{L^{2}(Q_{T})}\lVert\nabla
q\rVert_{L^{2}(Q_{T})}\big{)}^{\frac{d}{d+1}}.$
Now using Young’s inequality (suboptimally), it follows that
$\lVert\rho
q\rVert_{L^{1}(Q_{T})}\lesssim_{d}\beta\lVert\rho\rVert_{L^{1}(Q_{T})}+(\beta\lVert\rho\rVert_{L^{\infty}([0,T];L^{1}(\mathbb{R}^{d}))})^{\frac{1}{d}}\big{(}\lVert\rho\rVert_{L^{2}(Q_{T})}\lVert\nabla
q\rVert_{L^{2}(Q_{T})}\big{)}.$
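The Young step between the last two displays can be made explicit (the shorthand letters here are ours): with $X=\lVert\rho q\rVert_{L^{1}(Q_{T})}$, $A=\beta\lVert\rho\rVert_{L^{1}(Q_{T})}$, $B=\beta\lVert\rho\rVert_{L^{\infty}([0,T];L^{1}(\mathbb{R}^{d}))}$, and $C=\lVert\rho\rVert_{L^{2}(Q_{T})}\lVert\nabla q\rVert_{L^{2}(Q_{T})}$, the bound before applying Young reads $X\lesssim_{d}A+(BX)^{\frac{1}{d+1}}C^{\frac{d}{d+1}}$, and Young's inequality with exponents $d+1$ and $\frac{d+1}{d}$ gives

```latex
(BX)^{\frac{1}{d+1}}C^{\frac{d}{d+1}}
  = \big(\epsilon (BX)^{\frac{1}{d+1}}\big)\big(\epsilon^{-1}C^{\frac{d}{d+1}}\big)
  \leq \frac{\epsilon^{d+1}}{d+1}\,BX+\frac{d}{d+1}\,\epsilon^{-\frac{d+1}{d}}C.
```

Choosing $\epsilon^{d+1}=\frac{d+1}{2B}$ makes the first term equal to $\frac{X}{2}$, which is absorbed into the left-hand side, while $\epsilon^{-\frac{d+1}{d}}\sim B^{\frac{1}{d}}$; this is how the exponent degrades from $\frac{1}{d+1}$ to $\frac{1}{d}$ (hence "suboptimally") in the display above.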
Since
$\lVert e(\rho)\rVert_{L^{1}(Q_{T})}+\lVert
e^{*}(q)\rVert_{L^{1}(Q_{T})}=\lVert\rho q\rVert_{L^{1}(Q_{T})}$
we have obtained the bound in (4.15).
Now we turn to estimating $\lVert\nabla q\rVert_{L^{2}(Q_{T})}$. From the
dissipation relation (4.8), we have
$\int_{Q_{\infty}}\omega|\nabla
q|^{2}-e(\rho)(\partial_{t}\omega+\frac{\mu}{\rho}\omega)+\omega
e^{*}(q)(\nabla\cdot
V-\frac{\mu}{\rho})\leq\int_{\mathbb{R}^{d}}\omega(0)e(\rho^{0})\,dx$
for any nonnegative $\omega\in W^{1,\infty}_{c}([0,\infty))$. Fix a time $T>0$
that is a Lebesgue point for the mapping $T\mapsto\lVert\nabla
q\rVert_{L^{2}(Q_{T})}$. Assume that $\omega$ is a decreasing function
supported on $[0,T]$ and $\omega\leq 1$ everywhere. We can then eliminate the
term $-e(\rho)\partial_{t}\omega$. Thus, it follows from our previous work
that
$\int_{Q_{\infty}}\omega|\nabla
q|^{2}\leq\int_{\mathbb{R}^{d}}e(\rho^{0})\,dx+\lVert\rho
q\rVert_{L^{1}(Q_{T})}\big{(}\lVert\frac{\mu}{\rho}\rVert_{L^{\infty}(Q_{T})}+\lVert\nabla\cdot
V\rVert_{L^{\infty}(Q_{T})}\big{)}$
$\lesssim_{d}\int_{\mathbb{R}^{d}}e(\rho^{0})\,dx+\max(\beta,1)\Big{(}\lVert\rho\rVert_{L^{1}(Q_{T})}+\lVert\rho\rVert_{L^{\infty}([0,T];L^{1}(\mathbb{R}^{d}))}^{\frac{1}{d}}\big{(}\lVert\rho\rVert_{L^{2}(Q_{T})}\lVert\nabla
q\rVert_{L^{2}(Q_{T})}\big{)}\Big{)}\big{(}\lVert\frac{\mu}{\rho}\rVert_{L^{\infty}(Q_{T})}+\lVert\nabla\cdot
V\rVert_{L^{\infty}(Q_{T})}\big{)}.$
If we let $\omega$ approach the characteristic function of $[0,T]$, then we
deduce that $\lVert\nabla q\rVert_{L^{2}(Q_{T})}^{2}$ is
$\lesssim_{d}\int_{\mathbb{R}^{d}}e(\rho^{0})\,dx+\Big{(}\beta\lVert\rho\rVert_{L^{1}(Q_{T})}+(\beta\lVert\rho\rVert_{L^{\infty}([0,T];L^{1}(\mathbb{R}^{d}))})^{\frac{1}{d}}\lVert\rho\rVert_{L^{2}(Q_{T})}\lVert\nabla
q\rVert_{L^{2}(Q_{T})}\Big{)}\big{(}\lVert\frac{\mu}{\rho}\rVert_{L^{\infty}(Q_{T})}+\lVert\nabla\cdot
V\rVert_{L^{\infty}(Q_{T})}\big{)}.$
Now we can use Young’s inequality (suboptimally again) to get (4.14):
$\lVert\nabla
q\rVert_{L^{2}(Q_{T})}^{2}\lesssim_{d}\int_{\mathbb{R}^{d}}e(\rho^{0})\,dx+\max(\beta,1)\Big{(}\lVert\rho\rVert_{L^{1}(Q_{T})}+\lVert\rho\rVert_{L^{\infty}([0,T];L^{1}(\mathbb{R}^{d}))}^{\frac{2}{d}}\lVert\rho\rVert_{L^{2}(Q_{T})}^{2}\Big{)}\big{(}1+\lVert\frac{\mu}{\rho}\rVert_{L^{\infty}(Q_{T})}+\lVert\nabla\cdot
V\rVert_{L^{\infty}(Q_{T})}\big{)}^{2}.$
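This absorption has the simple quadratic form (the letters $X,A,B$ are our shorthand): writing $X=\lVert\nabla q\rVert_{L^{2}(Q_{T})}$, $A$ for the initial-energy and $\beta\lVert\rho\rVert_{L^{1}}$ contributions, and $B$ for the coefficient of $X$ in the previous display,

```latex
X^{2} \leq A + BX
     \leq A + \tfrac{1}{2}X^{2} + \tfrac{1}{2}B^{2}
\quad\Longrightarrow\quad
X^{2} \leq 2A + B^{2}.
```

Squaring $B$ is what produces the exponents $\frac{2}{d}$, $\lVert\rho\rVert_{L^{2}(Q_{T})}^{2}$, and the squared divergence/growth factor in (4.14).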
Finally, working in Fourier space again, it follows that for any exponent
$r\in[1,\frac{d+2}{2})$ and radius $R>0$,
$\lVert
e^{*}(q)\rVert_{L^{r}([0,T];L^{2}(\mathbb{R}^{d}))}^{r}\lesssim_{d}\int_{0}^{T}\Big{(}R^{d}\lVert
e^{*}(q(t,\cdot))\rVert_{L^{1}(\mathbb{R}^{d})}^{2}+R^{-2}\lVert\nabla
e^{*}(q(t,\cdot))\rVert_{L^{2}(\mathbb{R}^{d})}^{2}\Big{)}^{r/2}\,dt.$
Once again optimizing over $R$, we have
$\lVert
e^{*}(q)\rVert_{L^{r}([0,T];L^{2}(\mathbb{R}^{d}))}^{r}\lesssim_{d}\int_{0}^{T}\lVert
e^{*}(q(t,\cdot))\rVert_{L^{1}(\mathbb{R}^{d})}^{\frac{2r}{(d+2)}}\lVert\nabla
e^{*}(q(t,\cdot))\rVert_{L^{2}(\mathbb{R}^{d})}^{\frac{dr}{(d+2)}}\,dt$
$\lesssim_{d}\lVert e^{*}(q)\rVert_{L^{1}(Q_{T})}^{\frac{2r}{d+2}}\lVert\nabla
e^{*}(q)\rVert_{L^{\frac{dr}{d+2-2r}}([0,T];L^{2}(\mathbb{R}^{d}))}^{\frac{dr}{d+2}}.$
Thus,
$\lVert e^{*}(q)\rVert_{L^{r}([0,T];L^{2}(\mathbb{R}^{d}))}\lesssim_{d}\lVert
e^{*}(q)\rVert_{L^{1}(Q_{T})}^{\frac{2}{d+2}}\lVert\nabla
e^{*}(q)\rVert_{L^{\frac{dr}{d+2-2r}}([0,T];L^{2}(\mathbb{R}^{d}))}^{\frac{d}{d+2}}.$
If we choose $r=\frac{2d+4}{d+4}$ we get
$\lVert
e^{*}(q)\rVert_{L^{\frac{2d+4}{d+4}}([0,T];L^{2}(\mathbb{R}^{d}))}\lesssim_{d}\lVert
e^{*}(q)\rVert_{L^{1}(Q_{T})}^{\frac{2}{d+2}}\lVert\nabla
e^{*}(q)\rVert_{L^{2}(Q_{T})}^{\frac{d}{d+2}}.$
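The choice $r=\frac{2d+4}{d+4}$ is precisely the one that turns the time exponent on $\nabla e^{*}(q)$ into $2$; the arithmetic is:

```latex
d+2-2r \;=\; \frac{(d+2)(d+4)-2(2d+4)}{d+4} \;=\; \frac{d^{2}+2d}{d+4} \;=\; \frac{d(d+2)}{d+4},
\qquad
\frac{dr}{d+2-2r} \;=\; \frac{d\cdot\frac{2d+4}{d+4}}{\frac{d(d+2)}{d+4}} \;=\; \frac{2d+4}{d+2} \;=\; 2.
```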
Finally, since $\nabla e^{*}(q)=\rho\nabla q$ by the chain rule for Sobolev
functions, we have
$\lVert
e^{*}(q)\rVert_{L^{\frac{2d+4}{d+4}}([0,T];L^{2}(\mathbb{R}^{d}))}\lesssim_{d}\lVert\rho\rVert_{L^{\infty}(Q_{T})}^{\frac{d}{d+2}}\lVert
e^{*}(q)\rVert_{L^{1}(Q_{T})}^{\frac{2}{d+2}}\lVert\nabla
q\rVert_{L^{2}(Q_{T})}^{\frac{d}{d+2}}.$
Fixing a compact set $K\subset\mathbb{R}^{d}$, we also have
$\lVert q\rVert_{L^{\frac{2d+4}{d+4}}([0,T];L^{2}(K))}\leq\beta
T|K|+\lVert\tilde{q}\rVert_{L^{\frac{2d+4}{d+4}}([0,T];L^{2}(\mathbb{R}^{d}))}\leq\beta
T|K|+\beta\lVert
e^{*}(q)\rVert_{L^{\frac{2d+4}{d+4}}([0,T];L^{2}(\mathbb{R}^{d}))},$
and (4.17) follows from the previous estimate.
∎
## 5\. Main results
At last, we are ready to combine our work to prove the main results of this
paper. We will begin by constructing solutions to the system (1.3) and then we
will show that these can be converted into solutions to the original system
(1.1).
The construction of solutions to (1.3) is based on a vanishing viscosity
approach. To that end, we consider a viscous analogue of system (1.3) where we
add viscosity to both of the species $\rho_{1},\rho_{2}$. Given a viscosity
parameter $\gamma\geq 0$, we introduce the system:
(5.1)
$\begin{cases}\partial_{t}\rho_{1}-\gamma\Delta\rho_{1}-\nabla\cdot(\frac{\rho_{1}}{\rho}\nabla
q)+\nabla\cdot(\rho_{1}V)=\rho_{1}F_{1,1}\big{(}(z^{*})^{-1}(q),n\big{)}+\rho_{2}F_{1,2}\big{(}(z^{*})^{-1}(q),n\big{)},\\\
\partial_{t}\rho_{2}-\gamma\Delta\rho_{2}-\nabla\cdot(\frac{\rho_{2}}{\rho}\nabla
q)+\nabla\cdot(\rho_{2}V)=\rho_{1}F_{2,1}\big{(}(z^{*})^{-1}(q),n\big{)}+\rho_{2}F_{2,2}\big{(}(z^{*})^{-1}(q),n\big{)},\\\
\rho q=e(\rho)+e^{*}(q),\\\ \partial_{t}n-\alpha\Delta
n=-n(c_{1}\rho_{1}+c_{2}\rho_{2}).\end{cases}$
We define weak solutions to this system as follows.
###### Definition 5.1.
Given a viscosity parameter $\gamma\geq 0$ and initial data
$\rho_{1}^{0},\rho_{2}^{0}\in X(e)$ and $n^{0}\in L^{2}(\mathbb{R}^{d})$, we
say that
$(\rho_{1},\rho_{2},q,n)\in\mathcal{X}(e)\times\mathcal{X}(e)\times\mathcal{Y}(e^{*})\times
L^{2}_{\operatorname{\textup{loc}}}([0,\infty);H^{1}(\mathbb{R}^{d}))$ is a
weak solution to the system (5.1) with initial data
$(\rho_{1}^{0},\rho_{2}^{0},n^{0})$, if $\rho q=e(\rho)+e^{*}(q)$ almost
everywhere, $\gamma\nabla\rho_{1},\gamma\nabla\rho_{2}\in
L^{2}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}(\mathbb{R}^{d}))$, and
for every test function $\psi\in H^{1}_{c}([0,\infty);H^{1}(\mathbb{R}^{d}))$
(5.2)
$\int_{\mathbb{R}^{d}}\psi(0,x)\rho_{1}^{0}=\int_{Q_{\infty}}\nabla\psi\cdot\big{(}\frac{\rho_{1}}{\rho}\nabla
q+\gamma\nabla\rho_{1}-\rho_{1}V\big{)}-\rho_{1}\partial_{t}\psi-\psi\big{(}\rho_{1}F_{1,1}\big{(}(z^{*})^{-1}(q),n\big{)}+\rho_{2}F_{1,2}\big{(}(z^{*})^{-1}(q),n\big{)}\big{)},$
(5.3)
$\int_{\mathbb{R}^{d}}\psi(0,x)\rho_{2}^{0}=\int_{Q_{\infty}}\nabla\psi\cdot\big{(}\frac{\rho_{2}}{\rho}\nabla
q+\gamma\nabla\rho_{2}-\rho_{2}V\big{)}-\rho_{2}\partial_{t}\psi-\psi\big{(}\rho_{1}F_{2,1}\big{(}(z^{*})^{-1}(q),n\big{)}+\rho_{2}F_{2,2}\big{(}(z^{*})^{-1}(q),n\big{)}\big{)},\\\
$ (5.4)
$\int_{\mathbb{R}^{d}}\psi(0,x)n^{0}=\int_{Q_{\infty}}\alpha\nabla\psi\cdot\nabla
n-n\partial_{t}\psi+n(c_{1}\rho_{1}+c_{2}\rho_{2})\psi$
where $\rho=\rho_{1}+\rho_{2}$.
When $\gamma>0$, the existence of weak solutions to (5.1) is straightforward,
as the individual densities will be bounded in
$L^{2}_{\operatorname{\textup{loc}}}([0,\infty);H^{1}(\mathbb{R}^{d}))\cap
H^{1}_{\operatorname{\textup{loc}}}([0,\infty);H^{-1}(\mathbb{R}^{d}))$. Since
this space is compact in
$L^{2}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}(\mathbb{R}^{d}))$, one
can construct the solutions as limits of an even more regularized system (with
enough regularity, existence of solutions can be shown via a standard but
tedious Picard iteration). Thus, we can assume the existence of a sequence
$(\rho_{1,k},\rho_{2,k},q_{k},n_{k})$ such that for each $k$ the variables are
a weak solution to (5.1) with viscosity parameter $\gamma_{k}>0$. We will then
use our efforts from the past two sections to show that when $\gamma_{k}\to 0$
we can still pass to the limit in equations (5.2-5.4) to obtain a solution to
(1.3). In fact, we will show that we can pass to the limit even when the
underlying energy function $e_{k}$ is changing along the sequence.
We begin with the strong precompactness for the pressure gradient.
###### Proposition 5.2.
Let $e_{k}$ be a sequence of energy functions satisfying (e1-e3) and suppose
there exists an energy $e$ satisfying (e1-e3) such that $e_{k}$ converges
pointwise everywhere to $e$. Let
$\rho_{k}\in\mathcal{X}(e_{k}),q_{k}\in\mathcal{Y}(e_{k}^{*})$, and
$\mu_{k}\in L^{\infty}(\frac{1}{\rho_{k}})$ be sequences of densities,
pressure, and growth terms that converge weakly in
$L^{1}_{\operatorname{\textup{loc}}}(Q_{\infty})$ to limits
$\rho\in\mathcal{X}(e),q\in\mathcal{Y}(e^{*}),\mu\in
L^{\infty}(\frac{1}{\rho})$. If $\rho_{k}q_{k}$ converges weakly in
$L^{1}_{\operatorname{\textup{loc}}}(Q_{\infty})$ to $\rho q$ and for every
$\omega\in W^{1,\infty}_{c}([0,\infty))$
(5.5) $\int_{Q_{\infty}}-e_{k}(\rho_{k})\partial_{t}\omega+\omega|\nabla
q_{k}|^{2}+\omega e^{*}_{k}(q_{k})\nabla\cdot
V-\omega\mu_{k}q_{k}\leq\int_{\mathbb{R}^{d}}\omega(0)e_{k}(\rho_{k}(0,x))\,dx,$
(5.6)
$\int_{\mathbb{R}^{d}}\omega(0)e(\rho(0,x))\,dx\leq\int_{Q_{\infty}}-e(\rho)\partial_{t}\omega+\omega|\nabla
q|^{2}+\omega e^{*}(q)\nabla\cdot V-\omega\mu q,$
and
(5.7)
$\limsup_{k\to\infty}\int_{\mathbb{R}^{d}}\omega(0)e_{k}(\rho_{k}(0,x))+\int_{Q_{\infty}}\omega
q_{k}\mu_{k}\leq\int_{\mathbb{R}^{d}}\omega(0)e(\rho(0,x))+\int_{Q_{\infty}}\omega
q\mu,$
then $\nabla q_{k}$ converges strongly in
$L^{2}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}(\mathbb{R}^{d}))$ to
$\nabla q$.
###### Proof.
If we combine (5.5), (5.7) and (5.6), we get the string of inequalities
$\limsup_{k\to\infty}\int_{Q_{\infty}}-e_{k}(\rho_{k})\partial_{t}\omega+\omega|\nabla
q_{k}|^{2}+\omega e^{*}_{k}(q_{k})\nabla\cdot V$
$\leq\limsup_{k\to\infty}\int_{\mathbb{R}^{d}}\omega(0)e_{k}(\rho_{k}(0,x))+\int_{Q_{\infty}}\omega
q_{k}\mu_{k}\leq\int_{\mathbb{R}^{d}}\omega(0)e(\rho(0,x))+\int_{Q_{\infty}}\omega\mu
q$ $\leq\int_{Q_{\infty}}-e(\rho)\partial_{t}\omega+\omega|\nabla
q|^{2}+\omega e^{*}(q)\nabla\cdot V.$
Thanks to Proposition 3.2, the weak convergence of $\rho_{k}q_{k}$ to $\rho q$
implies that $e_{k}(\rho_{k}),e^{*}_{k}(q_{k})$ converge weakly in
$L^{1}_{\operatorname{\textup{loc}}}(Q_{\infty})$ to $e(\rho),e^{*}(q)$
respectively. Therefore,
(5.8) $\limsup_{k\to\infty}\int_{Q_{\infty}}\omega|\nabla
q_{k}|^{2}\leq\int_{Q_{\infty}}\omega|\nabla q|^{2}<\infty.$
The $L^{2}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}(\mathbb{R}^{d}))$
boundedness of $\nabla q_{k}$ along with the weak
$L^{1}_{\operatorname{\textup{loc}}}(Q_{\infty})$ convergence of $q_{k}$ to
$q$ implies that $\nabla q_{k}$ converges weakly in
$L^{2}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}(\mathbb{R}^{d}))$ to
$\nabla q$. Combining the weak convergence with the upper semicontinuity
property (5.8), it now follows that $\nabla q_{k}$ converges strongly in
$L^{2}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}(\mathbb{R}^{d}))$ to
$\nabla q$. ∎
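The final step in the proof above is the standard Hilbert-space fact (a Radon–Riesz-type argument) that weak convergence together with an upper bound on the limiting norms gives strong convergence; expanding the square,

```latex
\int_{Q_{\infty}}\omega\,|\nabla q_{k}-\nabla q|^{2}
 = \int_{Q_{\infty}}\omega\,|\nabla q_{k}|^{2}
 - 2\int_{Q_{\infty}}\omega\,\nabla q_{k}\cdot\nabla q
 + \int_{Q_{\infty}}\omega\,|\nabla q|^{2},
```

the cross term converges to $-2\int\omega|\nabla q|^{2}$ by weak convergence, so the upper semicontinuity property (5.8) forces the left-hand side to vanish in the limit.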
The next two lemmas are technical results that will help us guarantee that we
can pass to the limit in all of the terms in (5.2) and (5.3).
###### Lemma 5.3.
Let $e_{k}$ be a sequence of energies satisfying (e1-e3) and suppose there
exists an energy $e$ satisfying (e1-e3) such that $e_{k}$ converges pointwise
everywhere to $e$. Let
$\rho_{k}\in\mathcal{X}(e_{k}),q_{k}\in\mathcal{Y}(e^{*}_{k})$ be sequences of
uniformly bounded density and pressure variables that satisfy the duality
relation $\rho_{k}q_{k}=e_{k}(\rho_{k})+e^{*}_{k}(q_{k})$ almost everywhere.
If $q_{k}$ converges strongly in
$L^{2}_{\operatorname{\textup{loc}}}([0,\infty);\dot{H}^{1}(\mathbb{R}^{d}))\cap
L^{\frac{2d+4}{d+4}}_{\operatorname{\textup{loc}}}(Q_{\infty})$ to a limit $q$
and $\rho_{k}$ converges weakly in
$L^{2}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}(\mathbb{R}^{d}))$ to a
limit $\rho$, then
$\limsup_{k\to\infty}\int_{D}|\rho-\rho_{k}||\nabla q|^{2}=0$
for any compact set $D\subset Q_{\infty}$.
###### Proof.
Clearly for any $\varphi\in C^{\infty}_{c}(Q_{\infty})$ we have
$\limsup_{k\to\infty}\int_{Q_{\infty}}\varphi\rho_{k}q_{k}=\int_{Q_{\infty}}\varphi\rho
q.$
Thus, by Proposition 3.2, the limiting variables satisfy the duality relation
$\rho q=e(\rho)+e^{*}(q)$ almost everywhere.
Let $M=\sup_{k}\lVert\rho_{k}\rVert_{L^{\infty}(D)}<\infty$. Define
$\bar{e}_{k}^{*}$ and $\bar{e}^{*}$ such that
$\bar{e}_{k}^{*}(0)=0,\bar{e}^{*}(0)=0$, and
$\partial\bar{e}_{k}^{*}(b)=\\{\min(a,M):a\in\partial
e^{*}_{k}(b)\\},\quad\partial\bar{e}^{*}(b)=\\{\min(a,M):a\in\partial
e^{*}(b)\\}.$
Let $\bar{e}_{k}=(\bar{e}_{k}^{*})^{*}$ and $\bar{e}=(\bar{e}^{*})^{*}$.
Clearly, we still have the duality relations
$\rho_{k}q_{k}=\bar{e}(\rho_{k})+\bar{e}^{*}(q_{k})$ and $\rho
q=\bar{e}(\rho)+\bar{e}^{*}(q)$ almost everywhere. It also follows that
$\bar{e}_{k}^{*},\bar{e}^{*}$ are uniformly Lipschitz on the entire real line
and uniformly bounded on compact subsets of $\mathbb{R}$. As a result,
$\bar{e}_{k}^{*}$ must converge uniformly on compact subsets of $\mathbb{R}$
to $\bar{e}^{*}.$
Fix some $\delta>0$. Convexity and the duality relation imply that
$\rho_{k}\leq\frac{\bar{e}^{*}_{k}(q_{k}+\delta)-\bar{e}^{*}_{k}(q_{k})}{\delta},\quad\rho\leq\frac{\bar{e}^{*}(q+\delta)-\bar{e}^{*}(q)}{\delta},$
and
$\rho_{k}\geq\frac{\bar{e}^{*}_{k}(q_{k})-\bar{e}^{*}_{k}(q_{k}-\delta)}{\delta},\quad\rho\geq\frac{\bar{e}^{*}(q)-\bar{e}^{*}(q-\delta)}{\delta}.$
Therefore,
$\int_{D}|\rho-\rho_{k}||\nabla q|^{2}$
$\leq\int_{D}\Big{(}|\frac{\bar{e}^{*}_{k}(q_{k}+\delta)+\bar{e}^{*}(q-\delta)-\bar{e}^{*}_{k}(q_{k})-\bar{e}^{*}(q)}{\delta}|+|\frac{\bar{e}^{*}(q+\delta)+\bar{e}_{k}^{*}(q_{k}-\delta)-\bar{e}^{*}_{k}(q_{k})-\bar{e}^{*}(q)}{\delta}|\Big{)}|\nabla
q|^{2}.$
Thus, it follows that
$\limsup_{k\to\infty}\int_{D}|\rho-\rho_{k}||\nabla q|^{2}\leq
2\int_{D}|\frac{\bar{e}^{*}(q+\delta)+\bar{e}^{*}(q-\delta)-2\bar{e}^{*}(q)}{\delta}||\nabla
q|^{2}.$
If $\bar{e}^{*}$ is continuously differentiable at a point $b\in\mathbb{R}$,
then
$\lim_{\delta\to
0}\frac{\bar{e}^{*}(b+\delta)+\bar{e}^{*}(b-\delta)-2\bar{e}^{*}(b)}{\delta}=0.$
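This limit is immediate from splitting the symmetric second difference into two one-sided difference quotients:

```latex
\frac{\bar{e}^{*}(b+\delta)+\bar{e}^{*}(b-\delta)-2\bar{e}^{*}(b)}{\delta}
 = \frac{\bar{e}^{*}(b+\delta)-\bar{e}^{*}(b)}{\delta}
 - \frac{\bar{e}^{*}(b)-\bar{e}^{*}(b-\delta)}{\delta}
 \;\xrightarrow[\delta\to 0]{}\;
 (\bar{e}^{*})'(b)-(\bar{e}^{*})'(b)=0.
```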
The singular set $S\subset\mathbb{R}$ of values where $\bar{e}^{*}$ is not
continuously differentiable is at most countable. Therefore, $|\nabla q|$ is
zero almost everywhere on the set $\\{(t,x)\in D:q(t,x)\in S\\}$. Hence, by
dominated convergence,
$\lim_{\delta\to
0}2\int_{D}|\frac{\bar{e}^{*}(q+\delta)+\bar{e}^{*}(q-\delta)-2\bar{e}^{*}(q)}{\delta}||\nabla
q|^{2}=0.$
∎
###### Lemma 5.4.
Let $z_{k}$ be a sequence of energies satisfying (z1-z3) and suppose there
exists an energy $z$ satisfying (z1-z3) such that $z_{k}$ converges pointwise
everywhere to $z$. Define $e_{k},e$ by formula (2.1). Suppose that
$(\rho_{1,k},\rho_{2,k},q_{k},n_{k})\in\mathcal{X}(e_{k})\times\mathcal{X}(e_{k})\times\mathcal{Y}(e^{*}_{k})\times
L^{2}_{\operatorname{\textup{loc}}}([0,\infty);H^{1}(\mathbb{R}^{d}))$ is a
sequence such that
$(\rho_{1,k}+\rho_{2,k})q_{k}=e_{k}(\rho_{1,k}+\rho_{2,k})+e^{*}_{k}(q_{k})$
almost everywhere. Suppose that $\rho_{1,k},\rho_{2,k}$ converge weakly in
$L^{r}_{\operatorname{\textup{loc}}}([0,\infty);L^{r}(\mathbb{R}^{d}))$ to
limits $\rho_{1},\rho_{2}\in\mathcal{X}(e)$, $q_{k}$ converges strongly in
$L^{\frac{2d+4}{d+4}}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}_{\operatorname{\textup{loc}}}(\mathbb{R}^{d}))\cap
L^{2}_{\operatorname{\textup{loc}}}([0,\infty);\dot{H}^{1}(\mathbb{R}^{d}))$
to a limit $q$, and $n_{k}$ converges strongly in
$L^{2}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}(\mathbb{R}^{d}))$ to a
limit $n$. If the growth terms $F_{i,j}$ satisfy assumptions (F1-F2), then
$\rho_{j,k}F_{i,j}\big{(}(z_{k}^{*})^{-1}(q_{k}),n_{k}\big{)}$ converges weakly in
$L^{r}_{\operatorname{\textup{loc}}}([0,\infty);L^{r}(\mathbb{R}^{d}))$ to
$\rho_{j}F_{i,j}\big{(}(z^{*})^{-1}(q),n\big{)}$ for all $i,j\in\\{1,2\\}$ and any
$r<\infty$.
###### Proof.
It suffices to prove the convergence of
$\rho_{1,k}F_{1,1}\big{(}(z_{k}^{*})^{-1}(q_{k}),n_{k}\big{)}$ to
$\rho_{1}F_{1,1}\big{(}(z^{*})^{-1}(q),n\big{)}$; the argument for the other terms
is identical. Let $\varphi\in C_{c}^{\infty}(Q_{\infty})$ and let $D\subset
Q_{\infty}$ be a compact set containing the support of $\varphi$. For
$N\in\mathbb{R}$ define $S_{k,N}:=\\{(t,x)\in D:q_{k}(t,x)+n_{k}(t,x)>N\\}.$
From the uniform bounds on the norms of $q_{k},n_{k}$ it follows that
$\lim_{N\to\infty}\sup_{k}|S_{k,N}|=0.$ Thus, we can assume without loss of
generality that $q_{k},n_{k}$ are uniformly bounded by some $M>0$ (and of
course this same logic applies to $q,n$ as well).
Let $b_{\infty}=\sup\\{b\in\mathbb{R}:z^{*}(b)<\infty\\}$. Fix
$\epsilon\in(0,z^{*}(b_{\infty})/2)$ and let
$q_{k,\epsilon}=\min(\max(\epsilon,q_{k}),z^{*}(b_{\infty})-\epsilon),q_{\epsilon}=\min(\max(\epsilon,q),z^{*}(b_{\infty})-\epsilon)$.
It now follows that
$(z_{k}^{*})^{-1}(q_{k,\epsilon}),(z^{*})^{-1}(q_{\epsilon})$ are uniformly
bounded in $L^{\infty}(D)$. Thanks to Lemma A.1, we know that
$(z_{k}^{*})^{-1}$ converges uniformly to $(z^{*})^{-1}$ on
$(\epsilon,z^{*}(b_{\infty})-\epsilon)$. Combining this with properties
(F1-F2), and the various convergence properties of $q_{k},n_{k},\rho_{1,k}$ it
follows that
$\limsup_{k\to\infty}\Big{|}\int_{Q_{\infty}}\varphi\Big{(}\rho_{1,k}F_{1,1}\big{(}(z^{*}_{k})^{-1}(q_{k,\epsilon}),n_{k}\big{)}-\rho_{1}F_{1,1}\big{(}(z^{*})^{-1}(q_{\epsilon}),n\big{)}\Big{)}\Big{|}=0.$
Thus, it remains to show that
(5.9) $\lim_{\epsilon\to
0^{+}}\Big{|}\int_{Q_{\infty}}\varphi\rho_{1}\Big{(}F_{1,1}\big{(}(z^{*})^{-1}(q_{\epsilon}),n\big{)}-F_{1,1}\big{(}(z^{*})^{-1}(q),n\big{)}\Big{)}\Big{|}=0$
and
(5.10) $\lim_{\epsilon\to
0^{+}}\limsup_{k\to\infty}\Big{|}\int_{Q_{\infty}}\varphi\rho_{1,k}\Big{(}F_{1,1}\big{(}(z^{*}_{k})^{-1}(q_{k,\epsilon}),n_{k}\big{)}-F_{1,1}\big{(}(z^{*}_{k})^{-1}(q_{k}),n_{k}\big{)}\Big{)}\Big{|}=0.$
To do this we will exploit the density pressure duality relationship. Thanks
to the relationship between $e$ and $z$, we can express the duality relation
as
$(\rho_{1,k}+\rho_{2,k})(z^{*}_{k})^{-1}(q_{k})=z_{k}(\rho_{1,k}+\rho_{2,k})+q_{k}$.
Fix some $\delta>0$ and split the support of $\rho_{1,k}$ into the sets
$\rho_{1,k}<\delta$ and $\rho_{1,k}\geq\delta$. Again using duality, we have
$0\leq\rho_{1,k}\leq\rho_{1,k}+\rho_{2,k}\in\partial
z_{k}^{*}\big{(}(z_{k}^{*})^{-1}(q_{k})\big{)}.$
Thus, for almost every $(t,x)$ where $\rho_{1,k}(t,x)\geq\delta$, it follows
that $(z_{k}^{*})^{-1}$ is at worst $\delta^{-1}$ Lipschitz at the value
$q_{k}(t,x)$ and $(z_{k}^{*})^{-1}(q_{k}(t,x))$ is uniformly bounded with
respect to $k$. Thus,
$\Big{|}\int_{Q_{\infty}}\varphi\rho_{1,k}\Big{(}F_{1,1}\big{(}(z^{*}_{k})^{-1}(q_{k,\epsilon}),n_{k}\big{)}-F_{1,1}\big{(}(z^{*}_{k})^{-1}(q_{k}),n_{k}\big{)}\Big{)}\Big{|}$
$\leq
B\delta\lVert\varphi\rVert_{L^{1}(D)}+\omega_{\delta}(2\epsilon\delta^{-1})\lVert\rho_{1,k}\rVert_{L^{1}(D)}\lVert\varphi\rVert_{L^{\infty}(D)}+\lVert\rho_{1,k}\varphi\rVert_{L^{\infty}(D)}|D_{k,\epsilon}|$
where $B$ is a bound on $F_{1,1}$ and $\omega_{\delta}$ is the modulus of
continuity of $F_{1,1}$ on the bounded set
$\Big{(}\bigcup_{k}\\{(z_{k}^{*})^{-1}(q_{k}(t,x)):\rho_{1,k}(t,x)\geq\delta\\}\Big{)}\times[0,M]$
and $D_{k,\epsilon}=\\{(t,x)\in D:q_{k}(t,x)>z^{*}(b_{\infty})+\epsilon\\}$.
The convergence of $z_{k}$ to $z$ implies that
$\limsup_{k\to\infty}|D_{k,\epsilon}|=0$ for all fixed $\epsilon>0$. Thus,
sending $k\to\infty$, then $\epsilon\to 0^{+}$, and then $\delta\to 0^{+}$, we
get (5.10). The strong convergence of $q_{k}$ implies that the duality
relation $(\rho_{1}+\rho_{2})(z^{*})^{-1}(q)=z(\rho_{1}+\rho_{2})+q$ holds,
thus we can use a similar argument to obtain (5.9).
∎
At last, we are ready to prove our main result, which will let us pass to the
limit when we consider sequences of weak solutions to (5.1). Note that the
following theorem applies in the case where the viscosity is decreasing to
zero along the sequence, as well as when the viscosity is zero along the
entire sequence.
###### Theorem 5.5.
Let $z_{k}$ be a sequence of energies satisfying (z1-z3). Suppose there exists
an energy $z$ satisfying (z1-z3) such that $z_{k}$ converges pointwise
everywhere to $z$. Define $e_{k},e$ by formula (2.1). Let
$\rho_{1}^{0},\rho_{2}^{0}\in L^{1}(\mathbb{R}^{d})\cap
L^{\infty}(\mathbb{R}^{d}),n^{0}\in L^{2}(\mathbb{R}^{d})$ be initial data
such that $e(\rho_{1}^{0}+\rho_{2}^{0})\in L^{1}(\mathbb{R}^{d})$. Let $V\in
L^{2}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}(\mathbb{R}^{d}))$ be a
vector field such that $\nabla\cdot V\in L^{\infty}(Q_{\infty})$ and let
$F_{i,j}$ be source terms satisfying (F1-F2). Let
$\rho_{1,k},\rho_{2,k}\in\mathcal{X}(e_{k})$,
$q_{k}\in\mathcal{Y}(e^{*}_{k})$, $n_{k}\in
L^{2}_{\operatorname{\textup{loc}}}([0,\infty);H^{1}(\mathbb{R}^{d}))$ be
sequences of density, pressure, and nutrient variables such that
$\nabla\rho_{1,k},\nabla\rho_{2,k}\in
L^{2}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}(\mathbb{R}^{d}))$.
Suppose that for each $k$, the variables $(\rho_{1,k},\rho_{2,k},q_{k},n_{k})$
are weak solutions to the system (5.1) with energy $e_{k}$, viscosity constant
$\gamma_{k}\geq 0$, and initial data $(\rho_{1}^{0},\rho_{2}^{0},n^{0})$. If
$\gamma_{k}$ converges to $0$ and at least one of the following two conditions
hold:
1. (a)
$\partial z(a)$ is a singleton for all $a\in(0,\infty)$,
2. (b)
the source terms satisfy the additional condition (F3),
then any limit point $(\rho_{1},\rho_{2},q,n)$ of the sequence is a solution
of (1.3).
###### Proof.
Step 1: Uniform bounds, basic convergence properties, and parabolic structure.
Summing the first two equations of (5.1) together, we see that for any test
function $\psi\in W^{1,1}_{c}([0,\infty);H^{1}(\mathbb{R}^{d}))$
$\rho_{k},q_{k}$ are weak solutions to the parabolic equation
(5.11)
$\int_{\mathbb{R}^{d}}\psi(0,x)\rho^{0}=\int_{Q_{\infty}}-\rho_{k}\partial_{t}\psi+\nabla\psi\cdot(\nabla
q_{k}+\gamma_{k}\nabla\rho_{k})-\rho_{k}\nabla\psi\cdot V-\psi\mu_{k}$
where $\rho_{k}=\rho_{1,k}+\rho_{2,k}$, $\mu_{k}=\mu_{1,k}+\mu_{2,k}$ and
$\mu_{i,k}=\rho_{1,k}F_{i,1}\big{(}(z_{k}^{*})^{-1}(q_{k}),n_{k}\big{)}+\rho_{2,k}F_{i,2}\big{(}(z_{k}^{*})^{-1}(q_{k}),n_{k}\big{)}$.
Thanks to Proposition 4.3, $\rho_{k},q_{k},\mu_{k}$ must satisfy the energy
dissipation inequality
$\int_{Q_{\infty}}-e_{k}(\rho_{k})\partial_{t}\omega+\omega|\nabla
q_{k}|^{2}+\omega e_{k}^{*}(q_{k})\nabla\cdot
V-\omega\mu_{k}q_{k}\leq\int_{\mathbb{R}^{d}}\omega(0)e_{k}(\rho^{0}(x))\,dx,$
for every nonnegative $\omega\in W^{1,\infty}_{c}([0,\infty))$ and the estimates
(4.10)-(4.17). After plugging estimate (4.10) into estimate (4.12), it follows
that all of the estimates (4.11-4.17) are independent of $k$ and only depend
on $\rho^{0}$, $V$ and the bounds on $F_{i,j}$. Thus, $\rho_{k},q_{k}$ are
uniformly bounded in the norms estimated in (4.11)-(4.17). As a result, there
must exist $\rho\in\mathcal{X}(e)$, $q\in\mathcal{Y}(e^{*})$ and $\mu\in
L^{\infty}_{\operatorname{\textup{loc}}}([0,\infty);L^{\infty}(\mathbb{R}^{d})\cap
L^{1}(\mathbb{R}^{d}))$ such that $\rho_{k},q_{k},\mu_{k}$ converge weakly in
$L^{\frac{2d+4}{d+4}}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}_{\operatorname{\textup{loc}}}(\mathbb{R}^{d}))$
(along a subsequence that we do not relabel) to $\rho,q,\mu$ respectively.
Note that for $\rho_{k},\mu_{k}$ the weak convergence in fact holds in
$L^{r}_{\operatorname{\textup{loc}}}(Q_{\infty})$ for any $r<\infty$.
Property (F2) implies that $0\leq\rho_{1,k},\rho_{2,k}\leq\rho_{k}$. Hence,
$\rho_{1,k},\rho_{2,k}$ are uniformly bounded in
$L^{\infty}_{\operatorname{\textup{loc}}}([0,\infty);L^{1}(\mathbb{R}^{d})\cap
L^{\infty}(\mathbb{R}^{d}))$ and there exist limit points $\rho_{1},\rho_{2}$
(and a subsequence that we do not relabel) such that $\rho_{1,k},\rho_{2,k}$
converge weakly in
$L^{r}_{\operatorname{\textup{loc}}}([0,\infty);L^{1}(\mathbb{R}^{d})\cap
L^{r}(\mathbb{R}^{d}))$ to $\rho_{1},\rho_{2}$ respectively for any
$r<\infty$. Furthermore, the bounds on $\rho_{1,k},\rho_{2,k}$ combined with
standard results for the heat equation imply that $n_{k}$ is uniformly bounded
in $L^{2}_{\operatorname{\textup{loc}}}([0,\infty);H^{1}(\mathbb{R}^{d}))\cap
H^{1}_{\operatorname{\textup{loc}}}([0,\infty);H^{-1}(\mathbb{R}^{d}))$.
Hence, the Aubin-Lions Lemma implies that there exists a limit point $n\in
L^{2}_{\operatorname{\textup{loc}}}([0,\infty);H^{1}(\mathbb{R}^{d}))$ and a
subsequence (that we do not relabel) such that $n_{k}$ converges to $n$ in
$L^{2}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}(\mathbb{R}^{d}))$.
Thanks to the linear structure of equation (5.11), the convergence properties
we have established are strong enough to send $k\to\infty$. Thus, $\rho,q,\mu$
satisfy the weak equation
(5.12) $\int_{\mathbb{R}^{d}}\psi(0,x)\rho^{0}(x)\,dx=\int_{Q_{\infty}}\nabla
q\cdot\nabla\psi-\rho\partial_{t}\psi-\rho V\cdot\nabla\psi-\mu\psi.$
for any $\psi\in W^{1,1}_{c}([0,\infty);H^{1}(\mathbb{R}^{d}))$. After taking
the limit, the bounds on $\rho,q,\mu$ inherited from the estimates (4.11)-(4.17)
allow us to conclude that (5.12) holds for any $\psi\in
W^{1,1}_{c}([0,\infty);L^{1}(\rho)\cap\dot{H}^{1}(\mathbb{R}^{d}))$. Thus,
Proposition 4.2 implies that for every $\omega\in
W^{1,\infty}_{c}([0,\infty))$ the limit variables $\rho,\mu,q$ satisfy the
energy dissipation relation
$\int_{\mathbb{R}^{d}}\omega(0)e(\rho(0,x))\,dx=\int_{Q_{\infty}}-e(\rho)\partial_{t}\omega+\omega|\nabla
q|^{2}+\omega e^{*}(q)\nabla\cdot V-\omega\mu q.$
Step 2: Weak convergence of the products $\rho_{1,k}q_{k},\rho_{2,k}q_{k}$.
We want to use Lemma 3.3 to prove that $\rho_{i,k}q_{k}$ converges weakly to
$\rho_{i}q$ for $i=1,2$. This will imply that $\rho_{k}q_{k}$ converges weakly
to $\rho q$. Fix some $\epsilon>0$ and let $\eta_{\epsilon}$ be a spatial
mollifier. Define $\rho_{i,k,\epsilon}=\eta_{\epsilon}*\rho_{i,k}$ and
$\rho_{i,\epsilon}=\eta_{\epsilon}*\rho_{i}$. Thanks to estimates (4.11-4.13),
it follows that
$\sup_{k}\;\;\lVert\partial_{t}\rho_{i,k,\epsilon}\rVert_{L^{2}(Q_{T})}+\lVert\nabla\rho_{i,k,\epsilon}\rVert_{L^{2}(Q_{T})}\lesssim_{\epsilon}\sup_{k}\;\lVert\rho_{i,k}\rVert_{L^{2}(Q_{T})}+\lVert\rho_{i,k}\rVert_{H^{1}([0,T];H^{-1}(\mathbb{R}^{d}))}<\infty.$
Thus, for $\epsilon>0$ fixed, $\rho_{i,k,\epsilon}$ is uniformly
equicontinuous in $L^{2}(Q_{T})$. The uniform bounds (4.11) and (4.13)
automatically upgrade this to uniform equicontinuity in $L^{r}(Q_{T})\cap
L^{1}(Q_{T})$ for any $r<\infty$. In addition, the estimates (4.17) and (4.14)
imply that $q_{k}$ is spatially equicontinuous in
$L^{\frac{2d+4}{d+4}}_{\operatorname{\textup{loc}}}(Q_{\infty})$. Thus, we can
apply Lemma 3.3 to conclude that $\rho_{i,k}q_{k}$ converges weakly in
$(C_{c}(Q_{\infty}))^{*}$ to $\rho_{i}q$ for $i=1,2$. The uniform boundedness
of $\rho_{i,k}q_{k}$ in
$L_{\operatorname{\textup{loc}}}^{\frac{2d+4}{d+4}}([0,\infty);L^{2}(\mathbb{R}^{d}))$
gives us the automatic upgrade to weak convergence in
$L_{\operatorname{\textup{loc}}}^{\frac{2d+4}{d+4}}([0,\infty);L^{2}(\mathbb{R}^{d}))$.
Now Proposition 3.2 implies that $\rho q=e(\rho)+e^{*}(q)$ almost everywhere
and $e(\rho_{k})$ and $e^{*}(q_{k})$ converge weakly to $e(\rho)$ and
$e^{*}(q)$ respectively.
Step 3: Strong convergence of $\nabla q_{k}$ to $\nabla q$ in
$L^{2}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}(\mathbb{R}^{d}))$.
We now want to use Proposition 5.2 to prove the strong convergence of the
pressure gradient. Note that the pointwise everywhere convergence of $z_{k}$
to $z$ implies the pointwise everywhere convergence of $e_{k}$ to $e$. We have
already shown that $\rho_{k}q_{k}$ converges weakly to $\rho q$ and verified
the inequalities (5.5) and (5.6). Thus it remains to show that the upper
semicontinuity property (5.7) holds. To verify this condition, we will need to
consider the scenarios (a) and (b) separately.
Step 3a: Scenario (a) holds. When $\partial z(a)$ is a singleton for all
$a\in(0,\infty)$, it follows that $\partial e(a)$ is a singleton for all
$a\in(0,\infty)$ and hence $e^{*}$ must be strictly convex on
$(0,\infty)\cap(e^{*})^{-1}(\mathbb{R})$. Thus, Lemma A.3 implies that $q_{k}$
converges in measure to $q$. Since $q_{k}$ is uniformly bounded in
$L^{\frac{2d+4}{d+4}}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}_{\operatorname{\textup{loc}}}(\mathbb{R}^{d}))$,
we can upgrade the convergence in measure to strong convergence in
$L^{r}_{\operatorname{\textup{loc}}}(Q_{\infty})$ for any
$r<\frac{2d+4}{d+4}$. From the strong convergence, it is automatic that
$\limsup_{k\to\infty}\int_{Q_{\infty}}\omega\mu_{k}q_{k}=\int_{Q_{\infty}}\omega\mu
q$
for any $\omega\in W^{1,\infty}_{c}([0,\infty))$.
Step 3b: Scenario (b) holds.
Without strict convexity of the dual energy, the weak convergence of
$e^{*}_{k}(q_{k})$ does not give us strong convergence of $q_{k}$. Thus, to
prove (5.7) we will need a more delicate argument that exploits the structure
of the product $q_{k}\mu_{k}$.
We begin by fixing some $\delta>0$ and letting $J_{\delta}$ be a space time
mollifier. Set $q_{k,\delta}:=J_{\delta}*q_{k}$ and
$q_{\delta}:=q*J_{\delta}$. It is clear that $q_{k,\delta}$ converges strongly
to $q_{\delta}$ in
$L^{2}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}_{\operatorname{\textup{loc}}}(\mathbb{R}^{d}))$
and $q_{\delta}$ converges strongly to $q$ in
$L^{\frac{2d+4}{d+4}}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}_{\operatorname{\textup{loc}}}(\mathbb{R}^{d}))$.
Thus, it will be enough to show that
$\liminf_{\delta\to
0}\limsup_{k\to\infty}\int_{Q_{\infty}}\omega(q_{k}-q_{k,\delta})\mu_{i,k}\leq
0,$
for $i=1,2$.
We focus on the case $i=1$ (the argument for $i=2$ is identical). Assumption
(F3) and the monotonicity of $(z_{k}^{*})^{-1}$ guarantee that $q\mapsto
F_{1,1}\big{(}(z^{*}_{k})^{-1}(q),n\big{)}+F_{1,2}\big{(}(z^{*}_{k})^{-1}(q),n\big{)}$
is decreasing for each fixed value of $n$. As a result, there must exist a
function $f_{k}:[0,\infty)\times[0,\infty)\to\mathbb{R}$ such that for each
fixed value of $n$, we have $f_{k}(0,n)=0$, $q\mapsto f_{k}(q,n)$ is convex,
and
$-\partial_{q}f_{k}(q,n)=F_{1,1}\big{(}(z^{*}_{k})^{-1}(q),n\big{)}+F_{1,2}\big{(}(z^{*}_{k})^{-1}(q),n\big{)}$.
The structure of $\mu_{1,k}$ combined with the convexity of $f_{k}$ implies
that
$\int_{Q_{\infty}}\omega(q_{k}-q_{k,\delta})\mu_{1,k}\leq\int_{Q_{\infty}}\omega\rho_{1,k}\big{(}f_{k}(q_{k,\delta},n_{k})-f_{k}(q_{k},n_{k})\big{)}.$
Since $F_{1,1}+F_{1,2}$ is uniformly bounded over
$\mathbb{R}\times[0,\infty)$, it follows that $f_{k}$ is uniformly Lipschitz
in the first argument. Uniform equicontinuity in the second argument is clear
when $q=0$. For $q>0$, fix some $\epsilon\in(0,q)$ and consider
$n_{1},n_{2}\geq 0$. We see that
$|f_{k}(q,n_{1})-f_{k}(q,n_{2})|\leq\sum_{i=1}^{2}\int_{0}^{q}|F_{1,i}\big{(}(z_{k}^{*})^{-1}(a),n_{1}\big{)}-F_{1,i}\big{(}(z_{k}^{*})^{-1}(a),n_{2}\big{)}|\,da\leq
2B\epsilon+q\sup_{b\in[(z_{k}^{*})^{-1}(\epsilon),(z^{*}_{k})^{-1}(q)]}\sum_{i=1}^{2}|F_{1,i}\big{(}b,n_{1}\big{)}-F_{1,i}\big{(}b,n_{2}\big{)}|,$
where $B$ is a bound on $F_{1,1}+F_{1,2}$. Assumption (z3) and the pointwise
everywhere convergence of $z_{k}$ to $z$ imply that
$(z_{k}^{*})^{-1}(\epsilon),(z^{*}_{k})^{-1}(q)$ are uniformly bounded with
respect to $k$. Thus, it now follows that $f_{k}$ is uniformly equicontinuous
in the second argument on compact subsets of $[0,\infty)^{2}$. As a result,
$f_{k}$ must converge uniformly on compact subsets of $[0,\infty)^{2}$ to a
limit function $f$ that is convex in the first variable and continuous in the
second.
For all $k$ we have $|f_{k}(q,n)|\leq Bq$. Thus, it is now clear that
$\liminf_{\delta\to
0}\limsup_{k\to\infty}\int_{Q_{\infty}}\omega\rho_{1,k}\Big{(}|f_{k}(q_{k,\delta},n_{k})-f(q,n)|+|f_{k}(q_{k},n_{k})-f(q_{k},n_{k})|+|f(q_{k},n)-f(q_{k},n_{k})|\Big{)}=0.$
It remains to prove that
$\limsup_{k\to\infty}\int_{Q_{\infty}}\omega\rho_{1,k}\big{(}f(q,n)-f(q_{k},n))\leq
0.$
Let $f^{*}(a,n)=\sup_{q\in[0,\infty)}aq-f(q,n)$. Given any smooth function
$\psi\in C^{\infty}_{c}(Q_{\infty})$, we have
$\int_{Q_{\infty}}\omega\rho_{1,k}\big{(}f(q,n)-f(q_{k},n))\leq\int_{Q_{\infty}}\omega\rho_{1,k}\big{(}f(q,n)-q_{k}\psi)+\omega\rho_{1,k}f^{*}(\psi,n).$
Using the weak convergence of the product $\rho_{1,k}q_{k}$ to $\rho_{1}q$ we
see that
$\limsup_{k\to\infty}\int_{Q_{\infty}}\omega\rho_{1,k}\big{(}f(q,n)-q_{k}\psi)+\rho_{1,k}f^{*}(\psi,n)=\int_{Q_{\infty}}\omega\rho_{1}\big{(}f(q,n)-q\psi)+\omega\rho_{1}f^{*}(\psi,n).$
Taking an infimum over $\psi$, we get
$\limsup_{k\to\infty}\int_{Q_{\infty}}\omega\rho_{1,k}\big{(}f(q,n)-f(q_{k},n)\big{)}\leq
0,$
as desired.
Step 4: Passing to the limit in the weak equations.
Now that we have obtained the strong convergence of the pressure gradient, we
are ready to pass to the limit in the weak equations. In Lemma 5.4, we showed
that the source terms converge weakly to the desired limit under the
convergence properties that we have established. The weak convergence of the
remaining terms is clear except for the weak convergence of the product
$\frac{\rho_{i,k}}{\rho_{k}}\nabla q_{k}$ to $\frac{\rho_{i}}{\rho}\nabla q$.
Given some $\delta>0$, it follows from Lemma 5.3 that
$\frac{1}{\rho_{k}+\delta}\nabla q_{k}$ converges strongly in
$L^{2}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}(\mathbb{R}^{d}))$ to
$\frac{1}{\rho+\delta}\nabla q$. Thus, if we can show that
(5.13) $\liminf_{\delta\to
0}\Big{(}\int_{Q_{T}}\frac{\delta\rho_{i}}{\rho(\rho+\delta)}|\nabla
q|^{2}+\limsup_{k\to\infty}\int_{Q_{T}}\frac{\delta\rho_{i,k}}{\rho_{k}(\rho_{k}+\delta)}|\nabla
q_{k}|^{2}\Big{)}=0,$
then it will follow that $\frac{\rho_{i,k}}{\rho_{k}}\nabla q_{k}$ converges
weakly in
$L^{2}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}(\mathbb{R}^{d}))$ to
$\frac{\rho_{i}}{\rho}\nabla q$.
Since $\rho_{i,k}\leq\rho_{k}$ and $\rho_{i}\leq\rho$, the left-hand side of
(5.13) is bounded above by
$\liminf_{\delta\to 0}\Big{(}\int_{Q_{T}}\frac{\delta}{\rho+\delta}|\nabla
q|^{2}+\limsup_{k\to\infty}\int_{Q_{T}}\frac{\delta}{\rho_{k}+\delta}|\nabla
q_{k}|^{2}\Big{)}$ $=\liminf_{\delta\to
0}\int_{Q_{T}}\frac{2\delta}{\rho+\delta}|\nabla q|^{2},$
where we have used Lemma 5.3 to go from the first line to the second. The
property $\limsup_{a\to 0^{+}}\frac{e(a)}{a}=0$ combined with the duality
relation implies that $q=0$ whenever $\rho=0$. As a result, $|\nabla q|$ gives
no mass to the set of points where $\rho=0$. By dominated convergence
$\liminf_{\delta\to 0}\int_{Q_{T}}\frac{2\delta}{\rho+\delta}|\nabla
q|^{2}=0.$
∎
###### Corollary 5.6.
Let $e$ be an energy satisfying (e1-e3) such that $\partial e(a)$ is a
singleton for all $a\in(0,\infty)$. Let $F_{i,j}$ be source terms satisfying
(F1-F2). Given initial data $\rho_{1}^{0},\rho_{2}^{0}\in
L^{1}(\mathbb{R}^{d})\cap L^{\infty}(\mathbb{R}^{d}),n^{0}\in
L^{2}(\mathbb{R}^{d})$ such that $e(\rho_{1}^{0}+\rho_{2}^{0})\in
L^{1}(\mathbb{R}^{d})$, there exists a weak solution
$(\rho_{1},\rho_{2},q,n)\in\mathcal{X}(e)\times\mathcal{X}(e)\times\mathcal{Y}(e^{*})\times
L^{2}_{\operatorname{\textup{loc}}}([0,\infty);H^{1}(\mathbb{R}^{d}))$ to the
system (1.3).
###### Proof.
For $\gamma_{k}=\frac{1}{k}$, the existence of a solution to the system (5.1)
for the fixed energy $e$ is straightforward. Using these solutions, we can
pass to the limit as $k\to\infty$ using Theorem 5.5. ∎
###### Corollary 5.7.
Let $e$ be an energy satisfying (e1-e3) and let $F_{i,j}$ be source terms
satisfying (F1-F3). Given initial data $\rho_{1}^{0},\rho_{2}^{0}\in
L^{1}(\mathbb{R}^{d})\cap L^{\infty}(\mathbb{R}^{d}),n^{0}\in
L^{2}(\mathbb{R}^{d})$ such that $e(\rho_{1}^{0}+\rho_{2}^{0})\in
L^{1}(\mathbb{R}^{d})$, there exists a weak solution
$(\rho_{1},\rho_{2},q,n)\in\mathcal{X}(e)\times\mathcal{X}(e)\times\mathcal{Y}(e^{*})\times
L^{2}_{\operatorname{\textup{loc}}}([0,\infty);H^{1}(\mathbb{R}^{d}))$ to the
system (1.1).
###### Proof.
See Corollary 5.6. ∎
###### Corollary 5.8.
Let $F_{i,j}$ be source terms satisfying (F1-F3). Given initial data
$\rho_{1}^{0},\rho_{2}^{0}\in L^{1}(\mathbb{R}^{d})\cap
L^{\infty}(\mathbb{R}^{d}),n^{0}\in L^{2}(\mathbb{R}^{d})$ such that
$\rho_{1}^{0}+\rho_{2}^{0}\leq 1$ almost everywhere, let
$(\rho_{1,m},\rho_{2,m},q_{m},n_{m})\in\mathcal{X}(e)\times\mathcal{X}(e)\times\mathcal{Y}(e^{*})\times
L^{2}_{\operatorname{\textup{loc}}}([0,\infty);H^{1}(\mathbb{R}^{d}))$ be weak
solutions of the system (1.3) with the energy $e_{m}(a)=\frac{1}{m}a^{m}$. As
$m\to\infty$, any weak limit point of the sequence
$(\rho_{1,m},\rho_{2,m},q_{m},n_{m})$ is a solution to the system (1.3) with
the incompressible energy
$e_{\infty}(a)=\begin{cases}0&\textup{if}\;\;a\in[0,1],\\\
+\infty&\textup{otherwise.}\end{cases}$
###### Proof.
It is clear that $e_{m}$ converges pointwise everywhere to $e_{\infty}$. We
can use Corollary 5.6 to construct weak solutions of (1.3) for each $m>0$. We
can then use Theorem 5.5 to pass to the limit $m\to\infty$. ∎
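As a concrete check of this incompressible limit (a direct computation we add for illustration, not taken from the paper), the conjugate energies can be evaluated explicitly:

```latex
e_m(a)=\tfrac{1}{m}a^{m}
\;\Longrightarrow\;
e_m^{*}(q)=\sup_{a\geq 0}\Big(aq-\tfrac{1}{m}a^{m}\Big)
          =\tfrac{m-1}{m}\,q^{\frac{m}{m-1}} \quad (q\geq 0),
\qquad
e_\infty^{*}(q)=\sup_{a\in[0,1]}aq=\max(q,0).
```

For every $q\geq 0$ one has $e_m^{*}(q)\to e_\infty^{*}(q)$ as $m\to\infty$, consistent with the pointwise convergence of conjugates in Lemma A.1.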
At last, we will show that weak solutions to (1.3) can be easily converted
into weak solutions to (1.1).
###### Proposition 5.9.
Let $z$ be an energy satisfying (z1-z3) and define $e$ by formula (2.1).
Suppose that $\rho_{1}^{0},\rho_{2}^{0}\in L^{1}(\mathbb{R}^{d})\cap
L^{\infty}(\mathbb{R}^{d}),n^{0}\in L^{2}(\mathbb{R}^{d})$ is initial data
such that $e(\rho_{1}^{0}+\rho_{2}^{0}),z(\rho_{1}^{0}+\rho_{2}^{0})\in
L^{1}(\mathbb{R}^{d})$. If
$(\rho_{1},\rho_{2},q,n)\in\mathcal{X}(e)\times\mathcal{X}(e)\times\mathcal{Y}(e^{*})\times
L^{2}_{\operatorname{\textup{loc}}}([0,\infty);H^{1}(\mathbb{R}^{d}))$ is a
weak solution to the system (1.3) and we set $p=(z^{*})^{-1}(q)$, then
$(\rho_{1},\rho_{2},p,n)\in\mathcal{X}(e)\times\mathcal{X}(e)\times
L^{\frac{2d+4}{d+4}}_{\operatorname{\textup{loc}}}([0,\infty);L^{1}_{\operatorname{\textup{loc}}}(\rho))\cap
L^{2}_{\operatorname{\textup{loc}}}([0,\infty);\dot{H}^{1}(\rho))\times
L^{2}_{\operatorname{\textup{loc}}}([0,\infty);H^{1}(\mathbb{R}^{d}))$ is a
weak solution of (1.1).
###### Proof.
The duality relation $\rho q=e(\rho)+e^{*}(q)$ is equivalent to
$p\rho=z(\rho)+z^{*}(p)=z(\rho)+q$. Given a compact subset $D\subset
Q_{\infty}$ we have
$\int_{D}\rho|p|\leq\int_{D}|z(\rho)|+q.$
Thus, $p\in L^{1}_{\operatorname{\textup{loc}}}(\rho)$.
If $\sup\partial e^{*}(0)>0$, then $(z^{*})^{-1}$ is uniformly Lipschitz on
all of $[0,\infty)$ and $\rho$ is bounded away from zero on $q>0$. By the
duality relation and the chain rule for Sobolev functions, we have $\nabla
p=\frac{1}{\rho}\nabla q$ and $\nabla p\in
L^{2}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}(\mathbb{R}^{d}))$. In
this case, it is now clear that $(\rho_{1},\rho_{2},p,n)$ is a weak solution
to (1.1).
Otherwise, we are in the case where $q=0$ implies that $\rho=0$ and we cannot
extend $(z^{*})^{-1}$ to be uniformly Lipschitz on $[0,\infty)$. Fix some
$\delta>0$ and let $\eta_{\delta}:[0,\infty)\to\mathbb{R}$ be a smooth
increasing function such that $\eta_{\delta}(a)=0$ if $a\leq\delta$ and
$\eta_{\delta}(a)=1$ if $a\geq 2\delta$. Since $\limsup_{a\to
0^{+}}\frac{e(a)}{a}=0$, it follows that $\frac{1}{\rho}$ is bounded on
$q\geq\delta$. Given any test function $\varphi\in
L^{\infty}_{c}([0,\infty);W^{1,\infty}_{c}(\mathbb{R}^{d}))$ we see that
$\int_{Q_{\infty}}p\nabla\cdot(\varphi\eta_{\delta}(q))=\int_{Q_{\infty}}p\rho\frac{\eta_{\delta}(q)}{\rho}\nabla\cdot\varphi+p\rho\frac{\eta_{\delta}^{\prime}(q)}{\rho}\nabla
q\cdot\varphi$
Since $p$ must be bounded on the support of $\eta_{\delta}^{\prime}(q)$, it
follows that the above integral is well defined. Define
$q_{\delta}:=\max(q,\delta)$, and $p_{\delta}:=(z^{*})^{-1}(q_{\delta})$.
Since $(z^{*})^{-1}$ is Lipschitz on $[\delta,\infty)$, the chain rule for
Sobolev functions allows us to compute $\nabla
p_{\delta}=\frac{\chi_{\delta}(q)}{\rho}\nabla q$ where $\chi_{\delta}$ is the
characteristic function of $[\delta,\infty)$. Furthermore, on the support of
$\eta_{\delta},\eta_{\delta}^{\prime}$ it follows that $p=p_{\delta}$. Hence,
(5.14)
$\int_{Q_{\infty}}p\nabla\cdot(\varphi\eta_{\delta}(q))=\int_{Q_{\infty}}p_{\delta}\rho\frac{\eta_{\delta}(q)}{\rho}\nabla\cdot\varphi+p_{\delta}\rho\frac{\eta_{\delta}^{\prime}(q)}{\rho}\nabla
q\cdot\varphi=\int_{Q_{\infty}}\frac{\eta_{\delta}(q)}{\rho}\nabla
q\cdot\varphi.$
Thus, $\nabla p$ is well defined as a distribution against any test vector
field of the form $\eta_{\delta}(q)\psi$ where $\psi\in
L^{\infty}_{c}([0,\infty);W^{1,\infty}_{c}(\mathbb{R}^{d}))$ and when tested
against these fields we have $\nabla p=\frac{1}{\rho}\nabla q$. Examining
equation (5.14), we see that we can in fact relax $\varphi$ to belong to
$L^{2}_{c}([0,\infty);L^{2}(\mathbb{R}^{d}))$.
It is now clear that if $g$ is some function such that $0\leq g\leq\rho$ then
we have $g\nabla p=\frac{g}{\rho}\nabla q$ on the support of
$\eta_{\delta}(q)$. Since $\eta_{\delta}(q)\frac{g}{\rho}\nabla q\in
L^{2}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}(\mathbb{R}^{d}))$
independently of $\delta$, it follows that $g\nabla p\in
L^{2}_{\operatorname{\textup{loc}}}([0,\infty);L^{2}(\mathbb{R}^{d}))$. Thus,
we can conclude that
$\frac{\rho_{i}}{\rho}\nabla q\cdot\varphi=\rho_{i}\nabla p\cdot\varphi$
where $\varphi$ is any element of
$L^{2}_{c}([0,\infty);L^{2}(\mathbb{R}^{d}))$. It now follows that
$(\rho_{1},\rho_{2},p,n)$ is a solution to the system (1.1). The regularity of
$p$ can then be improved by arguing as in Propositions 4.2 and 4.3. ∎
The proofs of Theorems 1.1, 1.2, and 1.3 are now just corollaries of the
previous proposition, Theorem 5.5, and Corollaries 5.6 and 5.7.
## Appendix A Some Convergence results for sequences of convex functions
###### Lemma A.1.
Let $f:\mathbb{R}\to\mathbb{R}\cup\\{+\infty\\}$ be a proper, lower
semicontinuous, convex function such that $f^{-1}(\mathbb{R})$ is not a
singleton. If $f_{k}:\mathbb{R}\to\mathbb{R}\cup\\{+\infty\\}$ is a sequence
of proper, lower semicontinuous convex functions such that $f_{k}$ converges
pointwise everywhere to $f$ then the following properties hold:
1. (1)
If $f$ is differentiable at a point $a\in\mathbb{R}$, then
$\limsup_{k\to\infty}\max\Big{(}|\sup\partial
f_{k}(a)-f^{\prime}(a)|\;,\;|\inf\partial f_{k}(a)-f^{\prime}(a)|\Big{)}=0.$
2. (2)
The convergence of $f_{k}$ to $f$ is uniform on compact subsets of the
interior of $f^{-1}(\mathbb{R})$.
3. (3)
$f_{k}^{*}$ converges pointwise everywhere to $f^{*}$ except possibly at the
two exceptional values
$b^{+}_{\infty}=\sup\\{b\in\mathbb{R}:f^{*}(b)<\infty\\},b^{-}_{\infty}=\inf\\{b\in\mathbb{R}:f^{*}(b)<\infty\\}$.
4. (4)
If $f^{*}$ is differentiable at a point $b\in\mathbb{R}$, then
$\limsup_{k\to\infty}\max\Big{(}|\sup\partial
f_{k}^{*}(b)-f^{*\,\prime}(b)|\;,\;|\inf\partial
f_{k}^{*}(b)-f^{*\,\prime}(b)|\Big{)}=0,$
and the convergence of $f_{k}^{*}$ to $f^{*}$ is uniform on compact subsets of
the interior of $(f^{*})^{-1}(\mathbb{R})$.
###### Proof.
Let $a$ be a point of differentiability for $f$. Since $f^{\prime}(a)$ exists
and is finite, there exists $\delta_{0}>0$ such that $f$ is finite on
$[a-\delta_{0},a+\delta_{0}]$. Fix some $\delta\in(0,\delta_{0})$. The
convergence of $f_{k}$ to $f$ implies that there must exist some $N,B$
sufficiently large such that
$|f_{k}(a)|,|f_{k}(a-\delta)|,|f_{k}(a+\delta)|<B$ for all $k>N$. Now we can
use convexity to bound
$\frac{f_{k}(a)-f_{k}(a-\delta)}{\delta}\leq\inf\partial
f_{k}(a)\leq\sup\partial f_{k}(a)\leq\frac{f_{k}(a+\delta)-f_{k}(a)}{\delta}.$
Thus,
$\limsup_{k\to\infty}\max\Big{(}|\sup\partial
f_{k}(a)-f^{\prime}(a)|\;,\;|\inf\partial
f_{k}(a)-f^{\prime}(a)|\Big{)}\leq|\frac{f(a)-f(a-\delta)}{\delta}-f^{\prime}(a)|+|\frac{f(a+\delta)-f(a)}{\delta}-f^{\prime}(a)|.$
Sending $\delta\to 0$ and using the fact that $f$ is differentiable at $a$, we
get the desired result.
Now suppose that $[a_{0},a_{1}]$ is an interval in the interior of
$f^{-1}(\mathbb{R})$ and choose some $\delta>0$ such that
$[a_{0}-\delta,a_{1}+\delta]$ is still in the interior of $f^{-1}(\mathbb{R})$
and $f$ is differentiable at $a_{0}-\delta,a_{1}+\delta$. Given any
$a\in[a_{0},a_{1}]$, we have
$f^{\prime}(a_{0}-\delta)\leq\inf\partial f(a)\leq\sup\partial f(a)\leq
f^{\prime}(a_{1}+\delta).$
It then follows from our above work that $\partial f_{k}(a)$ is uniformly
bounded on $[a_{0},a_{1}]$ for all $k$ sufficiently large. Hence, $f_{k}$ is
uniformly equicontinuous on $[a_{0},a_{1}]$ and thus converges uniformly to
$f$.
Now we consider $f^{*}$. Fix some $b\in\mathbb{R}$. If $f^{*}(b)=+\infty$,
then for each $j\in\mathbb{Z}_{+}$ there exists $a_{j}\in\mathbb{R}$ such that
$a_{j}b-f(a_{j})>j$. We can then compute
$\liminf_{k\to\infty}f_{k}^{*}(b)\geq\liminf_{k\to\infty}a_{j}b-f_{k}(a_{j})>j.$
Thus, $\liminf_{k\to\infty}f_{k}^{*}(b)=+\infty$.
If $b_{\infty}^{-}=b_{\infty}^{+}$ then we are already done. Otherwise, given
$b\in(b_{\infty}^{-},b_{\infty}^{+})$, let $a_{0},a_{1}$ be the infimum and
supremum of the set $\\{a\in\mathbb{R}:b\in\partial f(a)\\}$ respectively.
Since $b\in(b_{\infty}^{-},b_{\infty}^{+})$, $a_{0},a_{1}$ must exist and are
finite. Furthermore, for any $a\in[a_{0},a_{1}]$ we have $f^{*}(b)=ab-f(a)$.
If we fix some $\delta>0$, it follows that
$\frac{f(a_{0})-f(a_{0}-\delta)}{\delta}<b<\frac{f(a_{1}+\delta)-f(a_{1})}{\delta}$,
and hence for all $k$ sufficiently large
$\frac{f_{k}(a_{0})-f_{k}(a_{0}-\delta)}{\delta}<b<\frac{f_{k}(a_{1}+\delta)-f_{k}(a_{1})}{\delta}$
Hence, for all $k$ sufficiently large
$f_{k}^{*}(b)=\sup_{a\in[a_{0}-\delta,a_{1}+\delta]}ab-f_{k}(a).$
It is now clear that $\liminf_{k\to\infty}f_{k}^{*}(b)\geq f^{*}(b)$.
If $a_{0}<a_{1}$ then $f$ is differentiable at all $a\in(a_{0},a_{1})$ and
$f^{\prime}(a)=b$. Therefore if we fix some $a^{\prime}\in(a_{0},a_{1})$ and
for each $k$ choose some $b_{k}\in\partial f_{k}(a^{\prime})$ then
$f_{k}^{*}(b)\leq\sup_{a\in[a_{0}-\delta,a_{1}+\delta]}ab-
f_{k}(a^{\prime})-b_{k}(a-a^{\prime})\leq\max\Big{(}(a_{0}-\delta)b-b_{k}(a_{0}-\delta-a^{\prime})\,,\,(a_{1}+\delta)b-b_{k}(a_{1}+\delta-a^{\prime})\Big{)}-f_{k}(a^{\prime})$
Since $b_{k}\to b$, we get
$\limsup_{k\to\infty}f_{k}^{*}(b)\leq a^{\prime}b-f(a^{\prime})=f^{*}(b).$
Otherwise if $a_{0}=a_{1}$, then since $f^{-1}(\mathbb{R})$ is not a
singleton, we can find a sequence $a_{j}$ converging to $a_{0}$ such that $f$
is differentiable at $a_{j}$ for all $j$. For each $j,k$ choose some
$b_{j,k}\in\partial f_{k}(a_{j})$ and note that
$\lim_{k\to\infty}b_{j,k}=f^{\prime}(a_{j})$. Thus, we can compute
$\limsup_{k\to\infty}f_{k}^{*}(b)\leq\limsup_{k\to\infty}\sup_{a\in[a_{0}-\delta,a_{0}+\delta]}ab-
f_{k}(a_{j})-b_{j,k}(a-a_{j})$ $\leq
a_{0}b-f(a_{j})-f^{\prime}(a_{j})(a_{0}-a_{j})+\delta(|b|+|f^{\prime}(a_{j})|)$
Sending $\delta\to 0$ and then $j\to\infty$, it follows that
$\limsup_{k\to\infty}f^{*}_{k}(b)\leq a_{0}b-f(a_{0})=f^{*}(b)$
as desired.
We have now shown that $\lim_{k\to\infty}f^{*}_{k}(b)=f^{*}(b)$ except
possibly at $b=b_{\infty}^{+},b_{\infty}^{-}$. Since
$b_{\infty}^{+},b_{\infty}^{-}$ do not lie in the interior of
$(f^{*})^{-1}(\mathbb{R})$, we can use the same argument we used to establish
properties (1) and (2) to establish property (4). ∎
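Property (3) is easy to probe numerically; the following sketch (the functions, grid, and evaluation point are illustrative choices of ours, not from the paper) approximates the conjugates by a discrete supremum:

```python
# Discrete Legendre transform f*(b) = sup_a (a*b - f(a)), approximated on a grid.
# Illustrative example: f(a) = a^2 with f*(b) = b^2 / 4, perturbed by |a| / k.
def conjugate(f, b, lo=-5.0, hi=5.0, n=20001):
    step = (hi - lo) / (n - 1)
    return max((lo + i * step) * b - f(lo + i * step) for i in range(n))

f = lambda a: a * a                   # limit convex function
fk = lambda a, k: a * a + abs(a) / k  # f_k -> f pointwise as k -> infinity

b = 1.3
exact = b * b / 4                     # f*(1.3) = 0.4225
approx = [conjugate(lambda a: fk(a, k), b) for k in (1, 10, 100, 1000)]
# approx increases toward exact, illustrating f_k*(b) -> f*(b)
```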
###### Lemma A.2.
Let $z:\mathbb{R}\to\mathbb{R}\cup\\{+\infty\\}$ be an energy satisfying
(z1-z3) and let $z_{k}:\mathbb{R}\to\mathbb{R}\cup\\{+\infty\\}$ be a sequence
of energies satisfying (z1-z3) such that $z_{k}$ converges pointwise
everywhere to $z$. If we set
$b_{\infty}=\inf\\{b\in\mathbb{R}:z^{*}(b)=+\infty\\}$ then $(z_{k}^{*})^{-1}$
converges uniformly to $(z^{*})^{-1}$ on compact subsets of
$\big{(}0,z^{*}(b_{\infty})\big{)}$.
###### Proof.
If $z^{*}(b_{\infty})=0$, then there is nothing to prove. Otherwise, given
$\epsilon\in(0,z^{*}(b_{\infty}))$ there must exist
$b_{\epsilon/2}<b_{\epsilon}\in\mathbb{R}$ such that
$z^{*}(b_{\epsilon/2})=\epsilon/2$ and $z^{*}(b_{\epsilon})=\epsilon.$ It then
follows that for all $b\geq b_{\epsilon}$ and $k$ sufficiently large
$\frac{\epsilon}{4(b_{\epsilon}-b_{\epsilon/2})}\leq\inf\partial
z^{*}_{k}(b).$
As a result, $(z_{k}^{*})^{-1}$ is uniformly Lipschitz, with some constant
$L_{\epsilon}$, on $[\epsilon,z^{*}(b_{\infty}))$. Choose some value
$a\in[\epsilon,z^{*}(b_{\infty}))$ and let $\bar{b}=(z^{*})^{-1}(a)$. Let
$a_{k}=z^{*}_{k}(\bar{b})$ and note that once $k$ is sufficiently large we
must have $a\in z_{k}^{*}(\mathbb{R})$. Thus,
$|(z^{*})^{-1}(a)-(z^{*}_{k})^{-1}(a)|=|\bar{b}-(z^{*}_{k})^{-1}(a_{k}+a-a_{k})|\leq
L_{\epsilon}|a-a_{k}|=L_{\epsilon}|z^{*}(\bar{b})-z^{*}_{k}(\bar{b})|$
Now the uniform convergence of $z^{*}_{k}$ to $z^{*}$ on compact subsets of
$(-\infty,b_{\infty})$ combined with the Lipschitz bound implies the uniform
convergence of $(z_{k}^{*})^{-1}$ to $(z^{*})^{-1}$ on compact subsets of
$(0,z^{*}(b_{\infty}))$.
∎
###### Lemma A.3.
Suppose that $f_{k}:\mathbb{R}\to\mathbb{R}$ is a sequence of proper, lower
semicontinuous, convex functions that converge pointwise everywhere to a
function $f:\mathbb{R}\to\mathbb{R}$ that is also proper, lower
semicontinuous, and convex with
$a_{0}:=\inf\\{a\in\mathbb{R}:f(a)<\infty\\}<\sup\\{a\in\mathbb{R}:f(a)<\infty\\}=:a_{1}$.
Let $u_{k}\in L^{1}_{\operatorname{\textup{loc}}}(Q_{\infty})$ be a sequence
of uniformly integrable functions such that $u_{k}$ converges weakly in
$L^{1}_{\operatorname{\textup{loc}}}(Q_{\infty})$ to a limit $u\in
L^{1}_{\operatorname{\textup{loc}}}(Q_{\infty})$. Suppose in addition that the
sequence $f_{k}(u_{k})$ converges weakly in
$L^{1}_{\operatorname{\textup{loc}}}(Q_{\infty})$ to $f(u)\in
L^{1}_{\operatorname{\textup{loc}}}([0,\infty);L^{1}(\mathbb{R}^{d}))$. If
there exists $v\in L^{\infty}_{\operatorname{\textup{loc}}}(Q_{\infty})$ such
that $v\in\partial f(u)$ and $f$ is strictly convex on the interior of
$f^{-1}(\mathbb{R})$, then $u_{k}$ converges locally in measure to $u$.
###### Proof.
Fix a compact set $D\subset Q_{\infty}$. Fix $\epsilon>0$ and let
$S_{k,\epsilon}=\\{(t,x)\in D:u_{k}>a_{1}+\epsilon\\}$. Choose some value
$a\in(a_{0},a_{1})$. Since $f(a)$ is finite and $f_{k}(a_{1}+\epsilon)$ must
approach $\infty$ as $k\to\infty$, it follows that $f_{k}$ is increasing at
$a_{1}+\epsilon$ for all $k$ sufficiently large. Therefore,
$\limsup_{k\to\infty}|S_{k,\epsilon}|f_{k}(a_{1}+\epsilon)\leq\limsup_{k\to\infty}\int_{S_{k,\epsilon}}f_{k}(u_{k})\leq\limsup_{k\to\infty}\int_{D}|f_{k}(u_{k})|<\infty,$
where in the last inequality we used the fact that the sequence $f_{k}(u_{k})$
is uniformly bounded in $L^{1}_{\operatorname{\textup{loc}}}(Q_{\infty})$. Of
course the above inequality is only possible if
$\limsup_{k\to\infty}|S_{k,\epsilon}|=0$. A similar argument shows that the
measure of the sets $\\{(t,x)\in D:u_{k}(t,x)<a_{0}-\epsilon\\}$ also vanishes
in the $k\to\infty$ limit.
Given some $\delta<(a_{1}-a_{0})/2$, define
$u_{k,\delta}=\max(a_{0}+\delta,\min(u_{k},a_{1}-\delta))$ and
$v_{k,\delta}=\inf\partial f_{k}(u_{k,\delta})$. From the convergence
properties of $u_{k}$ and $f_{k}(u_{k})$ we have
$\lim_{k\to\infty}\int_{D}f_{k}(u_{k})-f(u)-v(u_{k}-u)=0.$
Therefore,
$0\geq\limsup_{k\to\infty}\int_{D}f_{k}(u_{k,\delta})+v_{k,\delta}(u_{k}-u_{k,\delta})-f(u)-v(u_{k}-u).$
For $\delta>0$ fixed, Lemma A.1 implies that
$f_{k}(u_{k,\delta})-f(u_{k,\delta})$ converges uniformly to zero. Hence,
$0\geq\limsup_{k\to\infty}\int_{D}v_{k,\delta}(u_{k}-u_{k,\delta})+f(u_{k,\delta})-f(u)-v(u_{k}-u).$
For $\delta$ sufficiently small, either $v_{k,\delta}(u_{k}-u_{k,\delta})$ is
positive or $v_{k,\delta}$ is bounded. Either way, from the uniform
integrability of $u_{k}$ and our work in the first paragraph, it follows that
$\lim_{\delta\to
0}\limsup_{k\to\infty}\int_{D}v_{k,\delta}(u_{k}-u_{k,\delta})+v(u_{k}-u_{k,\delta})\geq
0.$
Thus,
(A.1) $0\geq\lim_{\delta\to
0}\limsup_{k\to\infty}\int_{D}f(u_{k,\delta})-f(u)-v(u_{k,\delta}-u).$
Given $\epsilon>0$, let $D_{k,\delta,\epsilon}=\\{(t,x)\in
D:|u_{k,\delta}-u|>\epsilon\\}$. The integrand in (A.1) is the Bregman
divergence of the strictly convex function $f$; therefore,
$\lim_{\delta\to 0}\limsup_{k\to\infty}|D_{k,\delta,\epsilon}|=0.$
If we let $u^{\prime}_{k}=\max(\min(a_{1},u_{k}),a_{0})$ and
$D^{\prime}_{k,\epsilon}=\\{(t,x)\in D:|u_{k}^{\prime}-u|>\epsilon\\}$ then it
is clear that $\limsup_{k\to\infty}|D^{\prime}_{k,\epsilon}|=0.$ Thus, $u_{k}^{\prime}$
converges locally in measure to $u$. From our work in the first paragraph we
know that $u_{k}-u_{k}^{\prime}$ converges locally in measure to zero, thus we
are done.
∎
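The mechanism behind (A.1) — a vanishing Bregman divergence of a strictly convex function forcing convergence in measure — can be illustrated with a toy computation (the choice $f(a)=a^{2}$ and the random data below are ours, not the paper's):

```python
import random

random.seed(0)

# Bregman divergence D_f(x, y) = f(x) - f(y) - f'(y)(x - y); for f(a) = a^2
# this equals (x - y)^2, so a small average divergence bounds, via Markov's
# inequality, the measure of the set where |u_k - u| is large.
f = lambda a: a * a
df = lambda a: 2 * a

n = 100_000
u = [random.uniform(0.0, 1.0) for _ in range(n)]  # the limit "function"
divs, fracs = [], []
for eps in (0.5, 0.1, 0.01):                      # shrinking perturbations u_k - u
    uk = [x + eps * random.gauss(0.0, 1.0) for x in u]
    divs.append(sum(f(y) - f(x) - df(x) * (y - x) for x, y in zip(u, uk)) / n)
    fracs.append(sum(abs(y - x) > 0.05 for x, y in zip(u, uk)) / n)
# fracs[j] <= divs[j] / 0.05**2 holds exactly, and both tend to zero.
```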
# Phase delays in $\omega-2\omega$ above-threshold ionization
S. D. López1, S. Donsa2, S. Nagele2, D. G. Arbó1,3, and J. Burgdörfer2
1Institute for Astronomy and Space Physics - IAFE (CONICET-UBA), CC 67, Suc. 28, C1428ZAA, Buenos Aires, Argentina
2Institute for Theoretical Physics, Vienna University of Technology, Wiedner Hauptstr. 8-10/E136, A-1040 Vienna, Austria, EU
3Facultad de Ciencias Exactas y Naturales y Ciclo Básico Común, Universidad de Buenos Aires, Argentina
###### Abstract
Relative phases of atomic above-threshold ionization wavepackets have been
investigated in a recent experiment [L. J. Zipp, A. Natan, and P. H.
Bucksbaum, Optica 1, 361-364 (2014)] exploiting interferences between
different pathways in a weak probe field at half the frequency of the strong
ionization pulse. In this work we theoretically explore the extraction of
phase delays and time delays of attosecond wavepackets formed in strong-field
ionization. We perform simulations solving the time-dependent Schrödinger
equation and compare these results with the strong-field and Coulomb-Volkov
approximations. In order to disentangle short- from long- ranged effects of
the atomic potential we also perform simulations for atomic model potentials
featuring a Yukawa-type short-range potential. We find significant deviations
of the ab-initio phase delays between different photoelectron pathways from
the predictions by the strong-field approximation even at energies well above
the ionization threshold. We identify similarities but also profound
differences to the well-known interferometric extraction of phase- and time
delays in one-photon ionization.
###### pacs:
32.80.Rm,32.80.Fb,03.65.Sq
## I Introduction
Measuring and analyzing ionization phases and timing information on electron
wavepackets ionized by absorption of an XUV photon represents one of the major
advances enabled by attosecond pulses and phase-controlled femtosecond laser
pulses during the last decade [2, 1]. Such XUV pulses in combination with
near-infrared or visible (NIR/V) laser light permit the control of electronic
motion on the shortest accessible timescales [3, 4, 5, 6]. Pump-probe
techniques such as attosecond streaking [7, 8, 9] and reconstruction of
attosecond harmonic beating by interference of two-photon transitions (RABBIT)
[10, 11] have been employed to measure attosecond time-resolved electron
emission from noble gas atoms [12, 13, 14, 15, 16, 17], molecules [18, 19],
and solids [20, 21, 22]. Whereas attosecond streaking of electrons ionized by
an XUV pulse can be understood in terms of a classical time-resolved shift in
momentum and energy by the probing IR field [7, 23, 24, 25, 26, 27], RABBIT
employs two interfering quantum paths to the same final state in the continuum
called sideband [13, 27]. This sideband energy can be reached through a two-
photon process involving absorption of photons from one of two adjacent
harmonic orders of a high-order harmonic generation (HHG) radiation followed
by absorption or emission of an IR photon of the fundamental driving frequency
$\omega$ [28, 29, 30].
Two-color ($\omega-2\omega$) laser fields with well-controlled relative phases
between both colors have been experimentally and theoretically studied since
the last decade of the last century [31, 32, 33, 34, 35]. Recently, they have
also been employed as an alternative tool to extract information on ionization
phases and time delays [36, 37, 38, 39]. One key feature is that the broken
inversion symmetry of the $\omega-2\omega$ field allows for interference
between odd and even partial waves of the outgoing photoelectron which leads
to a $(\theta\leftrightarrow\pi-\theta)$ asymmetry of the emission signal.
Recently, Zipp et al. [40] extended the measurement of ionization phases and
attosecond time delays to the strong-field multiphoton regime, providing new
perspectives on time-resolved strong-field ionization. In this novel
$\omega-2\omega$ interference protocol the role of electron wavepackets
emitted by absorption of a single photon from either one or two subsequent
harmonics in the RABBIT protocol is replaced by adjacent ATI peaks generated
by a strong driving field of frequency $2\omega$. The concomitant weaker
$\omega$ field opens up interfering pathways to sidebands in between
neighboring ATI peaks by absorbing or emitting one $\omega$ photon. Measuring
the photoelectron angular distribution as a function of the relative phase
$\phi$ between the $\omega$ and the $2\omega$ fields provides information on
the ATI amplitudes. This interferometric approach to multi-photon ionization
(Fig. 1a) resembles the original RABBIT protocol for the extraction of the
ionization phase in one-photon ionization (Fig. 1b). It promises new insights
into relative phases and, possibly, attosecond-scale timing information of
multi-photon strong-field processes. Somewhat simplified, it can address the
question of what additional phase delay is incurred, or how much longer it
takes, when a wavepacket is formed by absorbing $N+1$ rather than $N$ photons.
Some works based on the strong-field approximation were recently reported in
this direction [42, 41]. Indeed, first simulations employing semiclassical
trajectory methods [43, 44, 41] highlighted the role of transient trapping of
the wavepacket for the phase shift of the ATI peaks close to or even below the
threshold. A detailed analysis of the information encoded in the ionization
phases, their dependence on the intensities of the driving ($I_{2\omega}$) and
probing ($I_{\omega}$) field, and on the properties of the atomic potential
appears to be still missing.
Figure 1: Schematic comparison between (a) multi-photon strong-field
interference (MPSFI) and (b) the standard RABBIT protocol for two interfering
pathways from the initial bound state $\left|i\right\rangle$ to final states
$\left|f\right\rangle$ in the continuum. While RABBIT applies
to two pathways involving ionization by one photon with energies
$\left(2m-1\right)\omega$ and $\left(2m+1\right)\omega$ generated by HHG,
MPSFI involves (at least) two ATI peaks generated by absorbing $N$ or
$\left(N+1\right)$ photons from the strong pump field with frequency
$2\omega$. The final state $\left|f\right\rangle$ is reached in either case by
the absorption ($\omega$) or emission ($-\omega$) of one photon of the weak
probe field. Each arrow denotes a one-photon transition.
As will be shown in the following, multi-photon strong-field interference
(MPSFI, Fig. 1a) substantially differs from the standard RABBIT protocol as a
multitude of pathways with different numbers of photons and a broad range of
partial waves of the emerging electronic wavepacket contribute. Phase delays
can be extracted by this photoelectron interferometry not only at energies
near the so-called sidebands but also near the ATI peaks. Moreover, in the
strong-field setting phase delays are, unlike in the RABBIT protocol, found to
be remarkably sensitive to the probe field strength, rendering the separation
of the atomic-field and laser-field influences on the resulting phase and time
delay more challenging.
In this work, we theoretically investigate the phase delays in the multi-
photon regime accessible by such a $\omega-2\omega$ interference protocol for
two collinearly polarized laser fields. We find strong deviations of the
time-dependent Schrödinger equation (TDSE) results from the predictions of the
strong-field approximation (SFA), clearly indicating that the atomic potential
has a crucial influence on the ionization phase of ATI peaks in this
strong-field regime even at energies well above the ionization threshold. We
also present a simplified analytical description of
the MPSFI phase delays and discuss their potential to access timing
information.
In Sec. II we briefly introduce the simulation methods employed. In Sec. III
we present numerical results for quantum path interferences in multi-photon
ionization. An approximate analytical approach to the extraction of the
information on ionization phases, phase delays and time delays from such a
$\omega-2\omega$ protocol as well as numerical results for a model atom with a
short-ranged Yukawa-type atomic binding potential are discussed in Sec. IV.
The comparison with experimental data for argon described by a suitable model
potential [45] in single active electron (SAE) approximation [32, 46, 47] is
presented in Sec. V. Concluding remarks are given in Sec. VI. Atomic units are
used unless stated otherwise.
## II Methods
We consider a multi-femtosecond laser pulse with frequency $\omega$ and its
second harmonic $2\omega$ with the total electric field
$F(t)=f(t)\left[F_{2\omega}\sin(2\omega t+\phi)+F_{\omega}\sin(\omega
t)\right]\hat{\bm{z}},$ (1)
where $f(t)$ is the overall pulse envelope and $\hat{\bm{z}}$ is the
polarization direction of both fields. In the present $\omega-2\omega$
scenario $F_{2\omega}$ is the amplitude of the strong pump field giving rise
to ATI peaks and $F_{\omega}$ is the amplitude of the weak probe field, i.e.
$F_{\omega}\ll F_{2\omega}$. The relative phase $\phi$ between the $\omega$
and $2\omega$ fields is the experimentally accessible knob to control the
interference between different multi-photon pathways. In the following, we
will present results for the integral and angular differential photoelectron
spectra as a function of $\phi$. For the envelope function we choose the form
$f\left(t\right)=\sin^{2}\left(\frac{\pi t}{\tau}\right)$, where $\tau$ is the
pulse duration covering 16 cycles of the strong pump field or, equivalently,
eight cycles of the probe field, i.e., $\tau=16\pi/\omega$.
We solve the time-dependent Schrödinger equation (TDSE) in the single-active
electron (SAE) approximation in the length gauge [32, 46, 47],
$i\frac{\partial\psi(\vec{r},t)}{\partial
t}=\left(\frac{p^{2}}{2}+V_{a}(r)+\vec{r}\cdot\vec{F}(t)\right)\psi(\vec{r},t).$
(2)
In our simulation for argon to be compared with the experiment [40] we employ
as atomic potential $V_{a}$ in Eq. (2) the Muller model potential [45]. In
order to delineate the role of short-ranged and long-ranged potentials we
alternatively use a Yukawa-type atomic potential
$V_{a}\left(r\right)=-\frac{b}{r}e^{-r/a},$ (3)
with charge parameter $b$ and the screening length $a$.
In addition to full solutions of the TDSE, we employ two popular versions of
the distorted-wave Born approximation (DWBA) that allow us to account for
multi-photon and strong-field processes, namely the strong-field approximation (SFA)
[48, 49, 50] and the Coulomb-Volkov approximation (CVA) [51]. Accordingly, the
transition amplitude from an initial atomic state
$\left|\phi_{i}(t)\right\rangle$ to a final state
$\left|\varphi_{\vec{k}}\right\rangle$ with asymptotic momentum $\vec{k}$ in
the continuum, i.e.,
$a_{\vec{k}}(\phi)=\lim_{t\rightarrow\infty}\left\langle\varphi_{\vec{k}}\right|\left.\psi(t)\right\rangle$
in the DWBA, is given by
$a_{\vec{k}}(\phi)=-i\int\limits_{-\infty}^{+\infty}dt\
\langle\chi_{\vec{k}}^{\textsc{DW}}(t)|z\,F(t)\left|\phi_{i}(t)\right\rangle.$
(4)
From Eq. (4), the SFA follows when the Volkov state is used as the distorted
wave [48, 49, 50]
$\chi_{\vec{k}}^{(\textsc{DW})-}(\vec{r},t)=\chi_{\vec{k}}^{(\textsc{V})-}(\vec{r},t)=\frac{\exp\left[i(\vec{k}+\vec{A})\cdot\vec{r}\right]}{\left(2\pi\right)^{3/2}}\exp\left[-i\int_{t}^{+\infty}dt^{\prime}\frac{(\vec{k}+\vec{A}(t^{\prime}))^{2}}{2}\right].$ (5)
The CVA results when approximating the distorted wave by a product of the
Volkov solution and the Coulomb wave [51]
$\chi_{\vec{k}}^{(\textsc{DW})-}(\vec{r},t)=\chi_{\vec{k}}^{(CV)-}(\vec{r},t)=\chi_{\vec{k}}^{(V)-}(\vec{r},t)\;\mathcal{D}_{C}(Z_{T},\vec{k},t),$
(6)
where $\mathcal{D}_{C}(Z_{T},\vec{k},t)=N_{T}^{-}(k)\
_{1}F_{1}(-iZ_{T}/k,1,-ik\ r-i\vec{k}\cdot\vec{r})$ for a hydrogenic atom. The
Coulomb normalization factor $N_{T}^{-}(k)=\exp(\pi
Z_{T}/2k)\Gamma(1+iZ_{T}/k)$ coincides with the amplitude of the Coulomb wave
function at the origin, ${}_{1}F_{1}$ denotes the confluent hypergeometric
function, and $Z_{T}$ is the electric charge of the parent ion. Eq. (5)
describes the final state of a free electron wave in the strong laser field
while completely neglecting the atomic potential. The CVA in Eq. (6) includes
also the Coulomb scattering of the free electron but neglects the effect of
binding and of dynamical Stark shifts. These two DWBA approximations provide
points of reference for identifying dynamical multi-photon effects on
ionization phases.
Because of the azimuthal symmetry, the electron probability distribution
$P(\vec{k})=\left|a_{\vec{k}}\right|^{2}$ depends only on the electron
momentum parallel ($k_{z}$) and transverse ($k_{\perp}$) to the field
polarization direction or, alternatively, on the kinetic energy $E$ and the
polar emission angle $\theta$, i.e.,
$P\left({k_{\perp},k_{z},\phi}\right)=(2E)^{-1/2}P\left({E,\cos\theta,\phi}\right)$.
In the multiphoton regime, the photoelectron spectrum is composed of a series
of peaks positioned at energies $E_{n}$
$E_{n}=n\omega-(I_{p}+U_{p}),$ (7)
corresponding to the absorption of $n_{\omega}$ photons of
frequency $\omega$ and $N$ photons of frequency $2\omega$, such that
$n\omega=n_{\omega}\omega+N\left(2\omega\right)$. In Eq. (7), $I_{p}$ and
$U_{p}$ denote the ionization potential and the ponderomotive energy,
respectively. As a given peak $E_{n}$ can be reached by different combinations
of photon numbers $n_{\omega}$ and $N$, photoelectron interferometry in this
strong-field setting is characterized by multi-path interferences of partial
waves with opposite parity. Consequently, an important quantity for
characterizing interferences between partial waves of opposite parity and,
thus, to map out ionization phases in the $\omega-2\omega$ protocol is the
forward-backward ($\theta\leftrightarrow\pi-\theta$) asymmetry of the
photoelectron emission probability
$A(E,\phi)=\frac{S_{+}(E,\phi)-S_{-}(E,\phi)}{S_{+}(E,\phi)+S_{-}(E,\phi)},$
(8)
where the forward (backward) emission spectra $S_{+}$ ($S_{-}$) are obtained
by integrating the momentum distribution over the $+z$ ($-z$) hemisphere
$S_{+(-)}(E,\phi)=\int_{0(-1)}^{1(0)}\mathrm{d}\cos\theta\>P(E,\cos\theta,\phi).$
(9)
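As a minimal numerical illustration of Eqs. (8) and (9), the sketch below evaluates the hemispheric spectra and the asymmetry for a synthetic, purely hypothetical angular distribution at fixed $E$ and $\phi$; a real calculation would use the $P(E,\cos\theta,\phi)$ obtained from the TDSE:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal rule."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Hypothetical stand-in for P(E, cos(theta), phi) at fixed energy and phase:
# an isotropic part plus a parity-breaking term proportional to cos(theta).
ct = np.linspace(-1.0, 1.0, 2001)   # cos(theta) grid
P = 1.0 + 0.3 * ct

mid = len(ct) // 2                              # index of cos(theta) = 0
S_plus = trapezoid(P[mid:], ct[mid:])           # Eq. (9), +z hemisphere
S_minus = trapezoid(P[:mid + 1], ct[:mid + 1])  # Eq. (9), -z hemisphere

# Eq. (8): normalized forward-backward asymmetry; here (0.3/2)/1.
A = (S_plus - S_minus) / (S_plus + S_minus)
```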
The calculated (or measured) signal function, generically denoted by
$S(E,\phi)$, representing in the following either the photoemission
probability into one hemisphere, $S_{+(-)}$ [Eq. (9)] or the photoelectron
asymmetry $A(E,\phi)$ [Eq. (8)], can be written in terms of a Fourier series
in the relative phase $\phi$. The emission signal takes the form [52]
$S(E,\phi)=c_{0}(E)+\sum_{i=1}^{\infty}c_{i}(E)\cos(i\phi-\delta_{i}(E)),$
(10)
where the leading term ($i=1$) provides the information of the relative
ionization phase $\delta_{1}(E)=\delta(E)$ in analogy to the RABBIT protocol
[52]. Higher-order Fourier components $c_{i}(E)$ for $i=2,3,\ldots$ should
provide an error estimate of the fit.
Extending the analogy to RABBIT, Zipp et al. [40] introduced an
Eisenbud-Wigner-Smith (EWS)-type time delay [53, 54] by mapping the phase delay
$\delta(E)$ onto a time delay as
$\tau(E)=\frac{\delta(E)}{2\omega}.$ (11)
Eq. (11) can be viewed as a finite-difference approximation to the spectral
derivative $d\delta(E)/dE$ of the phase shift $\delta(E)$. We explore the
physical significance of $\delta(E)$ and $\tau(E)$ in more detail below.
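The extraction of $\delta(E)$ from Eqs. (10) and (11) amounts to a harmonic fit in $\phi$ at each energy. A self-contained least-squares sketch on synthetic data (the phase value is assumed purely for illustration):

```python
import numpy as np

omega = 0.057                    # probe frequency (a.u.)
delta_true = 0.7                 # assumed phase delay, for illustration only
phi = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
S = 1.0 + 0.4 * np.cos(phi - delta_true)  # signal of the form of Eq. (10), i = 1 only

# Fit c0 + a*cos(phi) + b*sin(phi); then c1 = hypot(a, b) and delta = atan2(b, a),
# since c1*cos(phi - delta) = c1*cos(delta)*cos(phi) + c1*sin(delta)*sin(phi).
M = np.column_stack([np.ones_like(phi), np.cos(phi), np.sin(phi)])
c0, a, b = np.linalg.lstsq(M, S, rcond=None)[0]
delta = np.arctan2(b, a)         # recovered phase delay delta(E)
tau = delta / (2 * omega)        # Eq. (11); 1 a.u. of time is about 24.19 as
```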
## III Energy dependence of phase delay in multi-photon ionization
As a representative example of $\omega-2\omega$ atomic ionization, we choose
the probe field with the fundamental frequency of a Ti:Sapphire laser of 800
nm wavelength in the near-infrared (NIR) region of the spectrum, and the pump
frequency as its second harmonic with a 400 nm wavelength in the visible (V)
region. In line with the experiment of Zipp et al. [40] we study atomic
ionization of argon by the two-color laser field in Eq. (1) with intensities
$I_{2\omega}=c/(8\pi)F_{2\omega}^{2}=8\times 10^{13}$ W/cm$^2$ and
$I_{\omega}=c/(8\pi)F_{\omega}^{2}=4\times 10^{11}$ W/cm$^2$. In Fig. 2 we
exhibit the results of our TDSE calculations in the SAE approximation [46,
47]. In Fig. 2a we show the variation of the total multiphoton spectrum
(integrated over all emission angles $\theta$) as a function of the relative
two-color phase $\phi$. We observe the typical multiphoton peak structure with
peak positions at energies predicted by Eq. (7), with the ionization potential
for argon of $I_{p}=15.78$ eV and ponderomotive energy $U_{p}=1.19$ eV. ATI
peaks at even multiples of $\omega$ result predominantly from absorption of
$N$ photons of frequency $2\omega$, while peaks at odd multiples of $\omega$
result from absorption or emission of at least one additional $\omega$ (probe)
photon. Following the convention of RABBIT [13, 27] we refer to the latter
group of peaks with energies near odd multiples of the NIR frequency $\omega$
as “sidebands” (SB). Unlike the total electron emission integrated over all
angles $\theta$ (Fig. 2a) whose $\phi$ dependence displays a $\pi$ periodicity
(emission near $\phi$ and $\phi+\pi$ are identical), the emission into the
forward hemisphere $S_{+}(E,\phi)$ given by Eq. (9) ($0\leq\theta\leq\pi/2$)
(Fig. 2b) and the asymmetry parameter $A(E,\phi)$ (Fig. 2c) display a $2\pi$
periodicity indicative of the parity-breaking contributions due to
$\omega-2\omega$ interferences. These structures are magnified in the
close-ups of Figs. 2d, 2e, and 2f.
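The quoted values of $U_{p}$ and the peak positions follow from a short unit-conversion exercise based on Eq. (7); the conversion constants below are standard, and the range of $n$ is arbitrary:

```python
import numpy as np

AU_INTENSITY = 3.50945e16   # W/cm^2 per atomic unit of intensity
HARTREE_EV = 27.2114        # eV per Hartree

omega = 0.057                         # probe frequency (a.u.)
I_pump = 8e13 / AU_INTENSITY          # pump intensity in a.u.
# Ponderomotive energy of the 2*omega pump field, U_p = I/(4*w^2): ~1.19 eV.
U_p_eV = I_pump / (4 * (2 * omega) ** 2) * HARTREE_EV

I_p_eV = 15.78                        # argon ionization potential (eV)
omega_eV = omega * HARTREE_EV

# Eq. (7): E_n = n*omega - (I_p + U_p); keep only peaks above threshold.
n = np.arange(1, 30)
E_n = n * omega_eV - (I_p_eV + U_p_eV)
E_n = E_n[E_n > 0]                    # adjacent peaks are spaced by omega
```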
Figure 2: (a) Total photoelectron spectrum (logarithmic scale), (b) forward
emission spectrum integrated over the $+z$ hemisphere (logarithmic scale) and
(c) asymmetry parameter $A$ (linear scale) for argon as a function of the
relative phase $\phi$ and the electron energy (in eV) calculated within the
TDSE. The laser intensities are $I_{2\omega}=8\times 10^{13}$ W/cm$^2$ and
$I_{\omega}=4\times 10^{11}$ W/cm$^2$ for the respective frequencies $2\omega$
and $\omega=0.057$ a.u. with pulse duration $\tau=881.85$ a.u., corresponding
to eight full cycles of the latter. (d-f) Zoom in linear scale corresponding
to (a-c), respectively.
From the fit of the variation of the numerical data for $S_{+}(E,\phi)$ and
$A(E,\phi)$ to the Fourier expansion [Eq. (10)] at fixed $E$, the relative
ionization phase $\delta(E)$ can be extracted as the phase shift of the
$\cos\phi$ oscillation. Because of the broad Fourier width of the ultrashort
pulse [Eq. (1)] the multiphoton electron spectrum (Fig. 2a) is a continuous
function of $E$. Accordingly, also the phase shift $\delta(E)$ can be viewed
as a continuous function of $E$. Results for the energy dependence of
$\delta(E)$ predicted by the TDSE, the SFA, and the CVA calculations of
$S(E,\phi)$ are shown in Fig. 3. Most strikingly, the SFA jumps almost
discontinuously and periodically between $\pi$ near the ATI energies [even $n$
in Eq. (7)] and $0$ in the vicinity of sidebands [odd $n$ in Eq. (7)]. The
CVA introduces modest variations to this SFA behavior which are a signature of
Coulomb scattering of the ionized electron. By contrast, the full TDSE
solution displays significant deviations from the SFA predictions indicating a
much more complex variation of the energy-dependent interference phase
$\delta(E)$. Even at relatively large energies above the threshold ($\sim 20$
eV), no clear indication of convergence towards the SFA limit assumed in
previous analyses [40, 43] emerges. These strong variations of $\delta(E)$
and deviations from SFA appearing in the TDSE results are the signature of
simultaneous interaction of the escaping electron with both the atomic force
field and the strong laser fields, in particular intermediate off-shell bound-
bound and continuum-continuum (cc) transitions between field-dressed atomic
states [55]. Such contributions are absent in the SFA and the CVA.
Figure 3: Continuum phase shifts $\delta(E)$ extracted from the asymmetry
$A(E,\phi)$ as a function of the emission energy from the TDSE (thick black
solid line), SFA (dashed blue line), and CVA (thin red solid line) results.
Thick solid vertical gray lines denote ATI peak energies and dashed vertical
lines sideband energies according to Eq. (7). The horizontal dashed line
corresponds to the strong-field limit for ATI phase shifts [$\delta(E)=\pi$].
Interpretation of the phase shift $\delta(E)$ of the forward (or backward)
emission or asymmetry signal [Eq. (10)] requires a more detailed analysis of
the interfering quantum paths. The key point is that in the present
$\omega-2\omega$ multi-photon strong-field interference (MPSFI) scenario a
multitude of pathways contribute, a few of them shown for argon in Fig. 4,
well beyond the subset invoked in the analogy to the RABBIT protocol (Fig.
1a). This renders a quantitative analysis more challenging. For example, the
side-band energy $E_{n}=E_{15}$ can be reached not only by the path pair
$P_{1}$ (Fig. 4a), which resembles the RABBIT protocol, but also by other path
pairs with different sequences of absorption and emission events to the same
first order in the weak probe field (e.g. $P_{2}$, $P_{3}$,…), or to different
orders in the pump field (e.g. $P_{1}^{\prime},P_{2}^{\prime}$). The (virtual)
intermediate states reached by the probe photon may involve continuum (e.g.
$P_{2},P_{1}^{\prime}$) or bound states (e.g. $P_{3},P_{2}^{\prime}$). The
latter are expected to be more important when the path proceeds via a bound-
state resonance.
Figure 4: Examples of pairs of quantum paths reaching the sideband energy
$E_{n}=E_{15}$ (a) or the ATI (or main) peak energy $E_{n}=E_{14}$ (b) for
argon. In (a), the set $P_{i}(i=1,...)$ features pairs, each with absorption
(left L) or emission (right R) of one weak probe photon $\omega$ and the set
$P_{i}^{\prime}$ features pairs with one additional absorption and emission of
the strong pump photon (right R) compared to the direct path (left L) while
both absorbing one weak probe photon $\omega$. The absorption of the probe
photons may occur in the continuum ($P_{1}$, $P_{2}$ and $P_{1}^{\prime}$) or
in virtual intermediate bound states ($P_{3}$ and $P_{2}^{\prime}$). In (b),
the direct ATI process can, to lowest order, interfere with path pairs $P_{1}$
involving absorption or emission of two $\omega$ photons; $P_{2}$ are examples
of processes involving four $\omega$ photons. $P_{0}^{\prime}$ represents one
contribution to the dressing of the ATI electron by the probe field.
Multi-photon path interferences can be analyzed not only near the sidebands
(Fig. 4a) but also near ATI (or main) peaks (Fig. 4b). For example, at ATI
energy $E_{n}=E_{14}$ the direct path $P_{0}$ from the initial state to the
final state with $E_{14}$ via absorption of $7$ photons with frequency
$2\omega$ can interfere with a multitude of paths involving two probe photons
$P_{1}^{\prime\prime}$ (Fig. 4b), which are of the same order in the weak
field as the dressing of the ATI electron by the IR field $P_{0}^{\prime}$.
For a stronger probe field, even higher-order contributions may become
important; examples involving the absorption or emission of two or four
$\omega$ photons are shown in Fig. 4b. It is important to realize that the set
of paths in Fig. 4 still does not fully reflect the complexity of the ensemble
of contributing interfering paths as the angular momentum degree of freedom is
omitted here for simplicity (see [56]). Each additional photon absorption or
emission process leads to a branching of paths to multiply degenerate states
of the same energy $E$ but different angular momenta
$\ell\rightarrow(\ell+1,\ell-1)$. Consequently, for an initial state with
angular momentum $\ell_{i}$ all partial waves $\ell$ within the interval
$\left[\max(0,\ell_{i}-N),\ell_{i}+N\right]$ can be coherently populated at
the final energy $E$ when the pulses are linearly polarized.
## IV Analytical model for quantum path interferences in multi-photon
ionization
In order to provide an intuitive guide towards interpreting the ionization
phase shift $\delta(E)$ extracted from the quantum path interferences
contributing to MPSFI we present a simplified analysis based on a (lowest-
order) perturbative multi-photon description. Accordingly, the contribution of
the path absorbing $N$ photons of frequency $2\omega$ in the visible (e.g.,
$P_{0}$ in Fig. 4b) to electron emission in the $\theta$ direction has the
complex amplitude
$C\left(E_{2N},N\right)=\sum_{\ell}A_{N,\ell}\exp\left[i\left(N\phi-N\frac{\pi}{2}-\ell\frac{\pi}{2}+\eta_{\ell}\left(E_{2N},F\right)\right)\right]Y_{\ell}^{0}\left(\theta\right).$
(12)
In Eq. (12) $A_{N,\ell}$ is the modulus of the $N$-photon absorption amplitude
and $\eta_{\ell}\left(E_{2N},F\right)$ is the atomic ionization phase at
energy $E=E_{2N}$. In the weak-field limit this phase is expected to approach
the one-photon atomic ionization phase at the same energy and angular momentum
$\eta_{\ell}\left(E_{2N},F\rightarrow
0\right)=\eta_{\ell}\left(E_{2N}\right)$. However, in the present strong-field
setting, deviations from this limit are expected. The sum in Eq. (12) extends
over all orbital quantum numbers fulfilling the inequality
$\max\left[0,\ell_{i}-N\right]\leq\ell\leq\ell_{i}+N$. For estimating the
phases in Eq. (12) we have used that each photon absorption or emission event
contributes a phase $\pi/2$, each angular momentum change $\Delta\ell$ adds
another $\Delta\ell\pi/2$, and each absorption of a $2\omega$ pump photon
includes an additional relative phase $\phi$ of the pump field relative to the
probe field [see Eq. (1)]. Applying now Eq. (12) to the left (L) path of pair
$P_{1}$ (Fig. 4a) contributing near the sideband energy $E_{2N+1}$, the
combined amplitude for absorbing $N$ visible (V) $2\omega$ photons followed by
absorbing one NIR $\omega$ photon reads
$C_{P_{1},L}\left(E_{2N+1}\right)=\sum_{\ell,\sigma=\pm 1}A_{N,\ell}^{\mathrm{V}}A_{1+,\sigma}^{\mathrm{NIR}}\exp\left\{i\left[N\phi-(N+1)\frac{\pi}{2}-(\ell+\sigma)\frac{\pi}{2}+\eta_{\ell}(E_{2N},F)+\varphi_{\ell+\sigma}^{cc,1+}(E_{2N},F)\right]\right\}Y_{\ell+\sigma}^{0}(\theta),$ (13)
with $\sigma=\Delta\ell=\pm 1$ the change in angular momentum due to the
absorption of an additional NIR photon. $A_{1+,\sigma}^{\mathrm{NIR}}$ denotes
the modulus and $\varphi_{\ell+\sigma}^{cc,1+}(E_{n-1},F)$ the corresponding
additional phase of the absorption of one additional ($1+$) NIR photon. It
describes the continuum-continuum transition to the angular momentum sector
$\ell+\sigma$ in the sideband reached by the absorption of $N$ photons of
frequency $2\omega$ and one additional photon of frequency $\omega$, i.e.,
$n=2N+1$. In the perturbative limit, this phase is the analogue to the
corresponding phase in RABBIT which depends, in general, on $\ell$ [17].
However, for probe fields beyond the perturbative limit, the
continuum-continuum phase is expected to depend also on $F_{\omega}$. When
both pump and probe fields are simultaneously present [Eq. (1)], the phases
will depend, in general, on the combined field $F$. The corresponding
expression for the right (R) path of the pair $P_{1}$ is accordingly given by
$C_{P_{1},R}\left(E_{2N+1}\right)=\sum_{\ell,\sigma=\pm 1}A_{N+1,\ell}^{\mathrm{V}}A_{1-,\sigma}^{\mathrm{NIR}}\exp\left\{i\left[(N+1)\phi-(N+2)\frac{\pi}{2}-(\ell+\sigma)\frac{\pi}{2}+\eta_{\ell}(E_{2(N+1)},F)+\varphi_{\ell+\sigma}^{cc,1-}(E_{2(N+1)},F)\right]\right\}Y_{\ell+\sigma}^{0}(\theta),$ (14)
where $A_{1-,\sigma}^{\mathrm{NIR}}$ denotes the modulus and
$\varphi_{\ell+\sigma}^{cc,1-}(E_{2(N+1)},F)$ the corresponding cc phase of
the emission amplitude of an NIR photon. Note that the range of $\ell$
included in Eq. (14) is different from that in Eq. (13) and includes
$\max\left[0,\ell_{i}-(N+1)\right]\leq\ell\leq\ell_{i}+N+1$. When, e.g., only
the path pair $P_{1}$ in Fig. 4a is considered, the emission probability near
the sideband $E=E_{2N+1}$ [Eq. (9)] is now given by the coherent sum of Eqs.
(13) and (14),
$S_{+(-)}(E_{n},\phi)=\int_{0(-1)}^{1(0)}\mathrm{d}\cos\theta\>\left|C_{P_{1},L}\left(E_{n}\right)+C_{P_{1},R}\left(E_{n}\right)\right|^{2}.$
(15)
The evaluation of Eq. (15) can be drastically simplified by including only the
dominant pathways along the so-called “yrast line” well known from beam-foil
spectroscopy [57, 58] or, equivalently, assuming that only the pathways
preferred by the Fano propensity rule [59, 60] are realized. Accordingly, each
photoabsorption leads predominantly to an increase
$\left(1+\leftrightarrow\sigma=1\right)$ and photoemission to a decrease
$\left(1-\leftrightarrow\sigma=-1\right)$ by one unit of angular momentum.
Including only these dominant paths eliminates the summation over $\ell$ and
$\sigma$ in Eqs. (13) and (14). We note that this approximate selection rule
is only applicable to resonant bound-bound or continuum-continuum transitions
but not to tunneling or above-threshold ionization. For ATI peaks close to
threshold (th), the dominant $\ell$ values are delimited by [61]
$\ell\leq\ell_{\mathrm{th}}\leq\left(2Z_{T}\alpha\gamma\right)^{1/2}=\left(2\sqrt{2}Z_{T}\sqrt{\frac{N_{\mathrm{th}}}{2\omega}}\right)^{1/2}$
(16)
where $\alpha$ is the quiver amplitude, $\gamma$ the Keldysh parameter of the
laser field with frequency $2\omega$, and $N_{\mathrm{th}}$ the minimum
number of photons of frequency $2\omega$ required to reach the continuum
($N_{\mathrm{th}}=6$ for argon). Accordingly, our TDSE calculations yield $f$
waves as dominant partial waves near threshold, which is very close to the
upper bound predicted by Eq. (16) $\ell_{\mathrm{th}}=4$ and well below the
prediction for the yrast line (or propensity rule [59, 62])
$\ell_{i}+N_{\mathrm{th}}=7$ as depicted in Fig. 5. The partial wave content
of the first ATI peak above threshold and starting point of the further spread
in angular momentum is thus centered at lower values of
$\ell\leq\ell_{\mathrm{th}}$. The evolution of the partial wave distribution
$p_{\ell}$ to higher partial waves with increasing ATI peak is discernible
(Fig. 5c). The first ATI peak exhibits a dominant angular momentum of
$\ell=3$, whereas for the second ATI peak the dominant angular
momentum is $\ell=4$. The combined contribution of the $d$ and $g$ waves of
the second ATI peak produces a dominant $f$ wave ($\ell=3$) for the third ATI
peak but with an appreciable $\ell=5$ contribution, i.e., $p_{5}\simeq
0.5p_{3}$. Applying the approximate propensity rule to Eqs. (13), (14), and
(15) yields, e.g.,
$S_{+}(E_{2N+1},\phi)=\int_{0}^{1}\mathrm{d}\cos\theta\left\{(A_{N,\ell}^{\mathrm{V}})^{2}(A_{1+}^{\mathrm{NIR}})^{2}(Y_{\ell+1}^{0}(\theta))^{2}+(A_{N+1,\ell+1}^{\mathrm{V}})^{2}(A_{1-}^{\mathrm{NIR}})^{2}(Y_{\ell}^{0}(\theta))^{2}+2A_{N,\ell}^{\mathrm{V}}A_{N+1,\ell+1}^{\mathrm{V}}A_{1+}^{\mathrm{NIR}}A_{1-}^{\mathrm{NIR}}Y_{\ell+1}^{0}(\theta)Y_{\ell}^{0}(\theta)\cos\left[\phi+\eta_{\ell+1}(E_{2(N+1)},F)-\eta_{\ell}(E_{2N},F)+\varphi^{cc,1-}_{\ell}(E_{2(N+1)},F)-\varphi^{cc,1+}_{\ell+1}(E_{2N},F)\right]\right\},$ (17)
with an analogous expression for $S_{-}\left(E,\phi\right)$. Consequently, the
asymmetry $A(E=E_{2N+1},\phi)$ given by Eq. (8) is proportional to
$A(E_{2N+1},\phi)\sim S_{+}(E_{2N+1},\phi)-S_{-}(E_{2N+1},\phi)\sim 2A_{N,\ell}^{\mathrm{V}}A_{N+1,\ell+1}^{\mathrm{V}}A_{1+}^{\mathrm{NIR}}A_{1-}^{\mathrm{NIR}}\int_{0}^{1}\mathrm{d}\cos\theta\,Y_{\ell+1}^{0}(\theta)Y_{\ell}^{0}(\theta)\cos\left[\phi+\eta_{\ell+1}(E_{2(N+1)},F)-\eta_{\ell}(E_{2N},F)+\varphi^{cc,1-}_{\ell}(E_{2(N+1)},F)-\varphi^{cc,1+}_{\ell+1}(E_{2N},F)\right].$ (18)
Comparison with Eq. (10) now yields an explicit, though approximate, analytic
expression for the phase delay between the two paths of the pair $P_{1}$ (Fig.
4a)
$\delta(E_{2N+1})\simeq\eta_{\ell}(E_{2N},F)-\eta_{\ell+1}(E_{2(N+1)},F)+\varphi_{\ell+1}^{cc,1+}(E_{2N},F)-\varphi_{\ell}^{cc,1-}(E_{2(N+1)},F).$
(19)
In the limit where all contributions to the phase of the wavepacket due to the
interplay with the atomic force field and the laser field can be neglected,
$\delta(E_{2N+1})\approx 0$, and Eq. (18) reduces to
$A(E_{2N+1},\phi)\propto S_{+}(E_{2N+1},\phi)-S_{-}(E_{2N+1},\phi)=C\cos\phi$
(20)
which agrees with the SFA result first given by Zipp et al. [40].
Figure 5: Energy spectrum and angular momentum distribution after strong-
field ionization of argon by the one-color $2\omega$ field with the same
parameters as in Fig. 2. (a) Photoelectron spectrum, (b) electron distribution
as a function of the energy and angular momentum on a logarithmic scale
covering three orders of magnitude, and (c) normalized $p_{\ell}$ (integrated
over energy) for the first three ATI peaks from threshold.
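The near-threshold bound of Eq. (16) can be evaluated directly for the argon parameters used here ($Z_{T}=1$, $N_{\mathrm{th}}=6$, pump frequency $2\omega=0.114$ a.u.):

```python
import math

Z_T = 1.0        # charge of the parent ion
omega = 0.057    # probe frequency (a.u.); the pump frequency is 2*omega
N_th = 6         # minimum number of 2w photons to reach the continuum in argon

# Eq. (16): l_th <= (2*sqrt(2) * Z_T * sqrt(N_th / (2*omega)))**(1/2)
l_th = math.sqrt(2.0 * math.sqrt(2.0) * Z_T * math.sqrt(N_th / (2.0 * omega)))
# l_th ~ 4.5: partial waves up to l = 4 are expected near threshold, consistent
# with the dominant f waves (l = 3) found in the TDSE results (Fig. 5).
```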
A similar analysis for a pair of paths contributing to the asymmetry near the
ATI energy $E_{n}$, where now $n=2N$, taking into account only interference
between the direct ATI path $P_{0}$ and the path $P_{1}^{\prime\prime}(R)$
(Fig. 4b) involving absorption of two NIR photons yields
$A(E_{2N},\phi)\sim S_{+}(E_{2N},\phi)-S_{-}(E_{2N},\phi)\sim A_{N,\ell}^{\mathrm{V}}A_{N-1,\ell-1}^{\mathrm{V}}A_{2+}^{\mathrm{NIR}}\int_{0}^{1}\mathrm{d}\cos\theta\,Y_{\ell+1}^{0}(\theta)Y_{\ell}^{0}(\theta)\cos\left[\phi+\pi+\eta_{\ell}(E_{2N},F)-\eta_{\ell-1}(E_{2(N-1)},F)-\varphi_{\ell+1}^{cc,2+}(E_{2(N-1)},F)\right],$ (21)
with $A_{2+}^{\mathrm{NIR}}$ $\left(\varphi_{\ell+1}^{cc,2+}\right)$ the
modulus (phase) of the two-photon transition amplitude from the ATI peak at
$E_{n-2}$ with $\ell-1$ to $\left(E_{n},\ell+1\right)$. Consequently, the
phase delay between these two paths $\delta(E)$ is given by
$\delta(E_{2N})\simeq-\pi-\eta_{\ell}(E_{2N},F)+\eta_{\ell-1}(E_{2(N-1)},F)+\varphi_{\ell+1}^{cc,2+}(E_{2(N-1)},F).$
(22)
In the limit that all atomic force field and laser field effects on the phase
delay can be neglected, the SFA limit would emerge as
$S_{+}(E_{2N},\phi)-S_{-}(E_{2N},\phi)=A\cos\left(\phi+\pi\right),$ (23)
which results, indeed, in a phase jump of $\pi$ between the sidebands [Eq.
(20)] and the ATI peaks [Eq. (23)] in agreement with our numerical results
(Fig. 3). Consequently, the deviations of the TDSE and CVA simulations from
this SFA limit are an unambiguous signature of the interplay between the
atomic force field and the laser fields in the atomic ionization phases. It
should be emphasized that the TDSE results include all paths contributing to
the multi-photon strong-field interference for photoelectrons, well beyond the
simple “two-path double-slit” model [Eqs. (18) and (23)] explicitly treated
above.
The two-path model can provide guidance as to which information can be
extracted from MPSFI spectra. For example, the phase contributions $\eta$ and
$\varphi^{cc}$ will, in general, depend on the field strengths
$F_{2\omega}$ and $F_{\omega}$ in a strong-field $\omega-2\omega$ scenario,
in marked contrast to the standard RABBIT protocol. Moreover, while the
resulting phase delay $\delta(E)$ is a continuous function of $E$ (see Fig. 3)
the mapping of a phase delay onto a time delay according to Eq. (11) depends
on the specific position within the spectrum. Near sideband energies
$E_{2N+1}$, Eq. (19) has the appearance of a finite-difference approximation
as implied by Eq. (11) and can thus be used to extract approximate time delays
$\tau=\delta(E_{2N+1})/2\omega$. Near ATI peaks [Eq. (22)], such an
interpretation in terms of a finite-difference approximation fails, as the
difference now involves different interfering zero- and two-NIR-photon paths.
Moreover, when all path pairs are included, a sum over many path pairs, each
giving rise to terms of the form of Eq. (18) for sidebands and of Eq. (21) for
ATI peaks, will contribute to $A(E,\phi)$, rendering the extraction of a
spectral derivative for a specific phase difficult. Only in
cases where one path pair strongly dominates, in particular the pair $P_{1}$
for the sideband, approximate EWS time delays for a given partial wave can be
unambiguously assigned. With this caveat in place, we also give $\tau(E)$ in
Figs. 6, 7, and 8 for illustrative purposes.
Figure 6: (a) Electron spectra for a Yukawa potential [Eq. (3)] with $a=4$ and
$b=0.629$ calculated for one-color 2$\omega$ (black line) and two-color
$\omega-2\omega$ (red line) laser fields with $\phi=0$. (b) Phase delays
$\delta(E)$ in units of $\pi$ calculated from the asymmetry $A(E,\phi)$
integrated over hemispheres [see Eq. (9)]. For reference we also convert the
phase delay into a time delay [Eq. (11)] (right axis). The laser
intensities are $I_{2\omega}=10^{11}$ W/cm$^2$ and $I_{\omega}=5\times 10^{8}$
W/cm$^2$. Other laser parameters are the same as in Fig. 2.
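The Yukawa model used in Fig. 6 ($a=4$, $b=0.629$) is shallow enough to support only a single $\ell=0$ bound state, a claim that can be verified with a simple finite-difference diagonalization of the radial Schrödinger equation. The sketch below is a minimal solver; the grid parameters are illustrative choices, not taken from the paper:

```python
import numpy as np

a, b = 4.0, 0.629            # Yukawa parameters of Eq. (3)
R, n = 60.0, 1500            # box radius and grid size (illustrative choices)
h = R / (n + 1)
r = h * np.arange(1, n + 1)  # radial grid with u(0) = u(R) = 0

V = -(b / r) * np.exp(-r / a)          # Eq. (3)

# Radial equation for u(r) = r*psi(r), l = 0: -u''/2 + V*u = E*u,
# discretized with the standard three-point Laplacian.
H = np.diag(1.0 / h**2 + V)
H += np.diag(np.full(n - 1, -0.5 / h**2), 1)
H += np.diag(np.full(n - 1, -0.5 / h**2), -1)
E = np.linalg.eigvalsh(H)              # eigenvalues in ascending order

E0 = E[0]                              # ground state, close to the quoted -0.08
n_bound = int(np.sum(E < -5e-3))       # number of l = 0 bound states
```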
Before comparing simulations with experimental data, we illustrate the
partial-wave path-interference structure for a strongly simplified model
system in which the number of contributing paths and, thus, the complexity of
the ionizing process is drastically reduced. We consider an electron bound by
a Yukawa potential [Eq. (3)] with parameters ($a=4$, $b=0.629$) chosen such
that a single $2\omega$ photon is sufficient to reach the continuum and the
shallow potential supports only one 1s-like bound state with $E_{1s}=-0.08$.
Consequently, the energetic position of the first ATI peak coincides in this
case with the position of the standard photoionization peak. For later
reference we note that the screening length of this potential ($a=4$) is
sufficiently large as to include, despite being asymptotically short-ranged,
some Coulomb-laser coupling (CLC) or cc phase contributions [63]. Moreover, we
choose the intensities of the fields sufficiently low ($I_{2\omega}=10^{11}$
W/cm2, $I_{\omega}=5\times 10^{8}$ W/cm2) to be strictly in the perturbative
multi-photon regime. The photoelectron spectra in the presence and in the absence of the weak probe field are displayed in Fig. 6a. Turning on the
$\omega$ field creates the sidebands, as expected, while the ATI peaks remain
largely unaffected by the probe field. The absorption of a single V
($2\omega$) photon from the bound $1s$ initial state ionizes the model atom
creating a $p-$wave electron of energy corresponding to the first peak (ATI1).
The second peak (ATI2) results from the absorption of two V ($2\omega$) photons and is composed of a superposition of $s$ and $d$ waves due to the
selection rule of angular momentum $\Delta\ell=\pm 1$. We have determined the
angular momentum composition of ATI2 to contain $9.8\%$ of s character and
$90.1\%$ of d character consistent with the propensity rule invoked above. The
lowest sideband SB1 between the first $2\omega$ photoionization peak ATI1 and
the second peak ATI2 can be reached by either absorption of two photons [one
V:($2\omega$) and one NIR:($\omega$)] or absorption of two V ($2\omega$)
photons and emission of one NIR ($\omega$) photon. For the first sideband SB1
the angular momentum composition is given by $9.4\%$, $0.8\%$, and $89.8\%$
for the $s$, $p$, and $d$ states, respectively. The population of $s-$ and
$d-$partial waves in SB1 is close to that of ATI2 also in line with the
propensity for two-photon absorption irrespective of the different frequencies
involved. This distribution indicates the dominance of the
one-V($2\omega$)-one-NIR($1\omega$) absorption path to the SB1 over the
two-V($2\omega$) absorption and one-NIR($1\omega$) emission path in the
perturbative regime, which is expected since the latter path involves one more
photon from a weak field than the former and, consequently, is a higher-order
photoionization process. However, the latter path provides a small but crucial
contribution giving rise to a non-vanishing $\phi$ dependent contribution from
which the phase delay $\delta(E)$ can be extracted (Fig. 6b).
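This extraction step can be illustrated with a minimal numerical sketch (not the authors' code; a single-harmonic dependence $A(\phi)\propto\cos(2\phi+\delta)$ is assumed here): a linear least-squares fit to the cosine and sine components of the asymmetry recovers the phase delay.

```python
import numpy as np

def extract_phase(phi, A, n=2):
    """Fit A(phi) = a*cos(n*phi) + b*sin(n*phi) and return delta such that
    A = |c| * cos(n*phi + delta), i.e. delta = atan2(-b, a)."""
    M = np.column_stack([np.cos(n * phi), np.sin(n * phi)])
    (a, b), *_ = np.linalg.lstsq(M, A, rcond=None)
    return np.arctan2(-b, a)

# synthetic asymmetry with a known phase delay plus a little noise
rng = np.random.default_rng(0)
phi = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
delta_true = 0.3 * np.pi
A = 0.7 * np.cos(2.0 * phi + delta_true) + 0.01 * rng.standard_normal(phi.size)

delta_fit = extract_phase(phi, A)
```

With clean data the fitted `delta_fit` reproduces the input phase to high accuracy; in practice the fit would be repeated at each electron energy $E$ to build up $\delta(E)$.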
Remarkably, whereas $\delta(E)$ near the ATI peaks closely follows the SFA
predictions $\delta(E_{n})\simeq\pi$ [Eq. (23)], near the sideband peaks
strong deviations can be observed in Fig. 6b. For the first sidebands for
which this phase could be reliably extracted we find $\delta(E_{n})\simeq
0.3\pi$. For reference we also convert the phase delay $\delta(E)$ into an
EWS-type time delay following Eq. (11) and find for the sideband, within a
fairly small energy window ($3$ eV $\leq E\leq$ $10$ eV), an almost energy-
independent time delay of about $\tau\approx 200$ attoseconds. Using the
approximate expressions [Eqs. (20) and (23)] for a qualitative analysis of the
two-path interference these results suggest that the phase delay near the ATI
peaks is strongly dominated by the SFA contribution ($\sim\pi$) corresponding
to a time delay of $660$ as while atomic field corrections play only a minor
role. By contrast, near the sideband peaks the phase differences induced by
the atomic-field
$\eta_{\ell+1}(E_{2(N+1)})-\eta_{\ell}(E_{2N})+\varphi_{\ell}^{cc,1-}-\varphi_{\ell+1}^{cc,1+}$
are clearly visible. We note that the presence of a non-vanishing contribution
to the phase delay by the one-photon continuum-continuum transition
$\varphi^{cc,1\pm}$ for the Yukawa potential is consistent with the fact that
with increasing screening length ($a=4$ in the present case) an increasing
part of the full long-range Coulomb-laser coupling is restored [63].
Therefore, we can use Eq. (19) to estimate this contribution to the sideband
phase delay as
$\varphi_{\ell+1}^{cc,1+}(E_{2N})-\varphi_{\ell}^{cc,1-}(E_{2(N+1)})\simeq\delta(E_{2N+1})+\eta_{\ell+1}(E_{2(N+1)})-\eta_{\ell}(E_{2N}),$
(24)
where we have dropped the label $F$ because we consider the perturbative limit
($F\rightarrow 0$). The atomic ionization phases $\eta_{\ell}$ can be obtained
by the one-photon atomic ionization phase in a partial-wave expansion for the
Yukawa potential. By using Eq. (24), we estimate the cc phase contribution to
SB1 as $\varphi_{2}^{cc,1+}-\varphi_{1}^{cc,1-}\simeq 0.45$, for SB2 as
$\varphi_{3}^{cc,1+}-\varphi_{2}^{cc,1-}\simeq 0.8$, and for SB3 as
$\varphi_{4}^{cc,1+}-\varphi_{3}^{cc,1-}\simeq 0.92$, corresponding to time
delay contributions of approximately $11$, $19$, and $22$ as, respectively.
These phase contributions shed some light on how the Yukawa potential affects the cc contributions to the time delays. Moreover, recent studies on the holographic angular streaking of electrons by corotating ($\omega-2\omega$) fields suggest that nonadiabatic effects in the ionization process could be responsible for such a difference of the time delay with respect to the strong-field approximation [64, 65]. The identification of non-adiabatic effects on time delays (which are included in the TDSE calculations) is beyond the scope of this paper. It is worth mentioning that, since the de Broglie wavelength of the electron is longer than the screening length of the short-range Yukawa potential, classical or semiclassical simulations are not valid in the energy region shown in Fig. 6b.
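The phase-to-delay conversions quoted earlier in this section can be checked with a short sketch. It assumes that Eq. (11) has the standard RABBIT form $\tau=\delta(E)/(2\omega)$ and that the NIR probe is at 800 nm (an assumption, since the wavelength is not restated in this section):

```python
import math

C = 2.99792458e8                       # speed of light (m/s)
LAM_NIR = 800e-9                       # assumed NIR probe wavelength (m)
OMEGA = 2.0 * math.pi * C / LAM_NIR    # NIR angular frequency (rad/s)

def phase_to_delay_as(delta):
    """Convert a phase delay (radians) to a time delay in attoseconds,
    using tau = delta / (2*omega)."""
    return delta / (2.0 * OMEGA) / 1e-18

tau_ati = phase_to_delay_as(math.pi)        # SFA limit at the ATI peaks
tau_sb = phase_to_delay_as(0.3 * math.pi)   # sideband value reported in Fig. 6b
```

Under these assumptions $\delta\simeq\pi$ maps to roughly 667 as and $\delta\simeq 0.3\pi$ to roughly 200 as, consistent with the $\sim 660$ as and $\sim 200$ as quoted in the text.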
## V Comparison with experiment
Figure 7: TDSE phase delays $\delta(E)$ calculated as a function of the
emission energy for (a) ATI peaks and (b) sidebands for the same pulse
parameters as in Fig. 2. Phase shifts are extracted from the forward-hemisphere yield $S_{+}(E,\phi)$ (squares) and from the asymmetry $A(E,\phi)$ (circles), with
integration over the energy window around each peak energy (full symbols) and
at the energy peak only (open symbols). Full green dots correspond to
experimental data by Zipp et al. [40] normalized to the TDSE result at the
highest sideband energy ($\sim 15$ eV).
For a comparison with the experiment of Zipp et al. [40] we extract the multi-
photon ionization interference phase shifts $\delta(E)$ from the TDSE
simulation (Fig. 2). In view of the rapid variation with the energy $E$ (Figs.
2 and 3), we evaluate $\delta(E)$ not only at the ATI or sideband peaks
$E=E_{n}$ [Eq. (7)] but integrate the spectrum over an energy window of width
$\Delta E=0.3\omega$ centered around the peak. We show in Fig. 7 fits to
$\delta(E)$ for emission into forward hemisphere $S_{+}(E,\phi)$ [Eq. (9)] and
for the asymmetry $A(E,\phi)$ [Eq. (8)]. While minor differences of the order
of less than $0.05\pi$ between the different read-outs of $\delta(E)$ (via
$S_{+}$ or $A$) appear, the overall trends observed are independent of the
particular read-out protocol demonstrating that unambiguous information on the
phase delay can be extracted.
For further analysis and interpretation of the results of Fig. 7, two key
points should be taken into account. First, the experimental data for
$\delta(E)$ presented in [40] were relative and set to coincide with the SFA
value ($\delta=0$) at the highest energy measured ($E=15$eV) (a similar
renormalization was used in [43]). However, we observe significant deviations
in $\delta(E)$ from the SFA limit. Therefore, we instead renormalize the
experimental data to the full TDSE result at the highest experimental energy
in order to preserve this additional information on the absolute value of
$\delta(E)$. Accordingly, in Figs. 7a and b the experimental results are set
to coincide with the TDSE phase shifts calculated by integration over the
energy windows around the peaks and all angles in the forward hemisphere.
Overall, the trend in the experimental data is well reproduced by the
simulations. The sharp rise of the phase shift $\delta(E)$ for the first ATI
peak seen close to threshold in both the experiment and simulations was
recently interpreted in terms of transient trapping of the electron in Rydberg
states by the $\omega-2\omega$ field [43].
The second key feature is that the data in Fig. 7 were extracted at a
moderately strong NIR probe field with $I_{\omega}=4\times 10^{11}$ W/cm2. For
the standard RABBIT protocol or attosecond streaking, field strengths
$F_{\omega}$ of that order of magnitude were found to be weak enough to
unambiguously extract atomic continuum-continuum or Coulomb-laser coupling
delays which are independent of the particular value of $I_{\omega}$ in line
with lowest-order perturbation theory [1]. However, in the present MPSFI
scenario the influence of the probe field $F_{\omega}$ beyond a lowest-order
perturbation theory must be considered.
Figure 8: Interference phase delay $\delta(E)$ as a function of the probe
laser intensity $I_{\omega}$ extracted from asymmetry parameter integrated
over hemispheres for three ATI peaks and three sidebands with energies as
indicated. All other laser parameters are the same as in Fig. 2. The
horizontal dashed line corresponds to the strong-field limit for ATI phase
shifts [$\delta(E)=\pi$], the SFA limit for the sidebands is $\delta(E)=0$
(not shown).
Indeed, exploring the variation of the extracted $\delta(E)$ at fixed pump
intensity $I_{2\omega}$ as a function of the probe intensity $I_{\omega}$
(Fig. 8) reveals a surprisingly strong dependence. The experimental value
$I_{\omega}=4\times 10^{11}$ W/cm2 is obviously well beyond the lowest-order
perturbative regime which precludes the direct applicability of a RABBIT-type
analysis. For sideband peaks, the phase shifts $\delta(E)$ appear to converge to
the perturbative field-independent limit only for considerably lower fields
$I_{\omega}\lesssim 10^{10}$ W/cm2. These converged values differ, however,
significantly from the SFA limit even at the highest energy measured ($E=15.5$
eV). Near ATI peaks, variations are present even at such low intensities and
the approach to converged field-independent values is not yet obvious. It
appears that for the highest energies measured, e.g., $E=17.1$ eV, and at the lowest probe fields $I_{\omega}\lesssim 10^{10}$ W/cm2, the phase near the ATI peak may approach the SFA limit $\delta(E)\simeq\pi$. It should be noted,
however, that the interference contributions to ATI peaks, which are
responsible for the phase shift $\delta(E)$, result from (at least) a two-
photon absorption or emission event in the probe field ($P_{1}^{\prime\prime}$
as depicted in Fig. 4b), which becomes very weak at low $I_{\omega}$ rendering
the phase extraction uncertain. The non-negligible probe field dependence of
the extracted MPSFI phase delays $\delta(E)$, also indicated in Eqs. (19) and
(22) emerges as an important new feature, absent in standard RABBIT or
streaking measurements, that remains to be explored, experimentally as well as
theoretically.
## VI Concluding remarks
We have presented simulations and the first detailed analysis of the phase
delays $\delta(E)$ in multi-photon ionization. They provide information on the
differences in ionization phases among different pathways open in a
$\omega-2\omega$ scenario for atomic ionization. We show that $\delta(E)$ is
determined by quantum path interferences between different sequences of photon
absorption and emission events. In the SFA limit these phases are given by
$\delta(E)=0$ at sideband energies and by $\delta(E)=\pi$ at the ATI peaks. We
find that the solutions of the time-dependent Schrödinger equation predict
phases strongly differing from these SFA limits even at relatively high
electron emission energies. We relate these phase shifts to the interplay
between the strong $\omega-2\omega$ field and the atomic force field not
accounted for by the SFA. We also point out the intrinsic difficulties to
relate the phase delays $\delta(E)$ to time delays in analogy to the standard
RABBIT protocol for one-photon ionization. A multitude of different
interfering pathways poses obstacles to a straightforward extraction of a
spectral derivative of the phase delay. We have found strong variation of
$\delta(E)$ with the intensities of the pump and probe fields. Our analysis
shows that further experimental insight into the multi-photon ionization phase
delay $\delta(E)$ can be gained by exploring its variation with both
$I_{2\omega}$ and $I_{\omega}$.
###### Acknowledgements.
This work was supported by CONICET PIP0386, PICT-2016-0296 PICT-2017-2945 and
PICT-2016-3029 of ANPCyT (Argentina), Austria-Argentina collaboration
AU/12/02, by the FWF special research programs SFB-041 (ViCoM), and doctoral
programme DK-W1243 (Solid4Fun), and by the European COST Action CA1822. The
computational results presented have been achieved using the Vienna Scientific
Cluster (VSC).
## References
* [1] R. Pazourek, S. Nagele, and J. Burgdörfer, Rev. Mod. Phys. 87, 765 (2015).
* [2] F. Krausz and M. Ivanov, Rev. Mod. Phys. 81, 163-234 (2009).
* [3] V. Véniard, R. Taïeb, and A. Maquet, Phys. Rev. Lett. 74, 4161 (1995).
* [4] J. M. Schins, P. Breger, P. Agostini, R. C. Constantinescu, H. G. Muller, A. Bouhal, G. Grillon, A. Antonetti, and A. Mysyrowicz, J. Opt. Soc. Am. B 13, 197 (1996).
* [5] T. E. Glover, R. W. Schoenlein, A. H. Chin, and C. V. Shank, Phys. Rev. Lett. 76, 2468 (1996).
* [6] J. Hummert, M. Kubin, S. D. López, J. I. Fuks, F. Morales, M. J. J. Vrakking, O. Kornilov, and D. G. Arbó, J. Phys. B: At. Mol. Opt. Phys. 53, 154003 (2020).
* [7] J. Itatani, F. Quéré, G. L. Yudin, M. Yu. Ivanov, F. Krausz and P. B. Corkum, Phys. Rev. Lett., 88, 173903 (2002).
* [8] E. Goulielmakis et al., Science 305, 1267 (2004).
* [9] E. Goulielmakis et al., Science 320, 1614 (2008).
* [10] V. Véniard, R. Taïeb, and A. Maquet, Phys. Rev. A54, 721 (1996).
* [11] P. M. Paul, E. S. Toma, P. Breger, G. Mullot, F. Augé, Ph. Balcou, H. G. Muller, and P. Agostini, Science 292, 1689 (2001).
* [12] M. Schultze, M. Fieß, N. Karpowicz, J. Gagnon, M. Korbman, M. Hofstetter, S. Neppl, A. L. Cavalieri, Y. Komninos, Th. Mercouris, C. A. Nicolaides, R. Pazourek, S. Nagele, J. Feist, J. Burgdörfer, A. M. Azzeer, R. Ernstorfer, R. Kienberger, U. Kleineberg, E. Goulielmakis, F. Krausz, and V. S. Yakovlev, Science 328, 1658 (2010).
* [13] K. Klünder, J. M. Dahlström, M. Gisselbrecht, T. Fordell, M. Swoboda, D. Guénot, P. Johnsson, J. Caillat, J. Mauritsson, A. Maquet, R. Taïeb, and A. L’Huillier, Phys. Rev. Lett. 106, 143002 (2011).
* [14] D. Guénot, K. Klünder, C. L. Arnold, D. Kroon, J. M. Dahlström , M. Miranda, T. Fordell, M. Gisselbrecht, P. Johnsson, J. Mauritsson, E. Lindroth, A. Maquet, R. Taïeb, A. L’Huillier, and A. S. Kheifets, Phys. Rev. A, 85, 053424, (2012).
* [15] D. Guénot, D. Kroon, E. Balogh, E. W. Larsen, M. Kotur, M. Miranda, T. Fordell, P. Johnsson, J. Mauritsson, M. Gisselbrecht, K. Varjú, C. L. Arnold, T. Carette, A. S. Kheifets, E. Lindroth, A. L’Huillier, and J. M. Dahlström, Journal of Physics B: Atomic, Molecular, Optical Physics 47, 245602, (2014).
* [16] J.M. Dahlström, D. Guénot, K. Klunder, M. Gisselbrecht, J. Mauritsson, A. L’Huillier, A. Maquet, and R. Taïeb, Chemical Physics 414, 53-64 (2013).
* [17] Jaco Fuchs, Nicolas Douguet, Stefan Donsa, Fernando Martin, Joachim Burgdörfer, Luca Argenti, Laura Cattaneo, and Ursula Keller, Optica 2, 154 (2020).
* [18] M. Huppert, I. Jordan, D. Baykusheva, A. von Conta, and H. J. Wörner et al., Phys. Rev. Lett 117, 093001 (2016).
* [19] S. Beaulieu et al., Science 358, 1288 (2017).
* [20] A. Cavalieri et al., Nature 449, 1029 (2007).
* [21] C. Lemell, S. Neppl, G. Wachter, K. Tokesi, R. Ernstorfer, P. Feulner, R. Kienberger, and J. Burgdörfer, Phys. Rev. B 91, 241101(R) (2015).
* [22] S. Haessler, T. Balciunas, G. Fan, T. Witting, R. Squibb, L. Chipperfield, A. Zaïr, G. Andriukaitis, A. Pugzlys, J. W. G. Tisch, J. P. Marangos, and A. Baltuska, Ultrafast Phenomena XIX, Springer Proceedings in Physics, Volume 162, p. 72. Springer International Publishing Switzerland, 2015.
* [23] S. Nagele, R. Pazourek, J. Feist, and J. Burgdörfer, Phys. Rev. A 85, 033401 (2012).
* [24] R. Pazourek, J. Feist, S. Nagele, and J. Burgdörfer, Phys. Rev. Lett. 108, 163001 (2012).
* [25] R. Della Picca, A. A. Gramajo, S. D. López, and D. G. Arbó, Journal of Physics: Conference Series 1412, 042002 (2020).
* [26] R. Della Picca, M. F. Ciappina, M. Lewenstein, and D. G. Arbó, Phys. Rev. A102, 043106 (2020).
* [27] J. M. Dahlström, A. L’Huillier, and A. Maquet, J. of Phys. B: Atomic, Molecular and Optical Physics 45, 183001, (2012).
* [28] A. S. Kheifets, Phys. Rev. A 87, 063404 (2013).
* [29] J. Feist, O. Zatsarinny, S. Nagele, R. Pazourek, J. Burgdörfer, X. Guan, K. Bartschat and B. I. Schneider, Phys. Rev. A 89, 033417, (2014).
* [30] Jing Su, Hongcheng Ni, Andreas Becker, and Agnieszka Jaron-Becker, Phys. Rev. A87, 033420 (2013).
* [31] D. W. Schumacher, F. Weihe, H. G. Muller, and P. H. Bucksbaum, Phys. Rev. Lett. 73, 1344 (1994).
* [32] D. G. Arbó, C. Lemell, S. Nagele, N. Camus, L. Fechner, A. Krupp, T. Pfeifer, S. D. López, R. Moshammer, and J. Burgdörfer Phys. Rev. A 92, 023402 (2015).
* [33] F. Ehlotzky, Phys. Rep. 345, 175 (2001).
* [34] X. Xie, S. Roither, D. Kartashov, E. Persson, D. G. Arbó, L. Zhang, S. Gräfe, M. S. Schöffler, J. Burgdörfer, A. Baltuška, and M. Kitzler, Phys. Rev. Lett., 108, 193004 (2012).
* [35] D. G. Arbó, S. Nagele, X.-M. Tong, X. Xie, M. Kitzler, and J. Burgdörfer, Phys. Rev. A, 89, 043414 (2014).
* [36] D. You, K. Ueda, E. V. Gryzlova, A. N. Grum-Grzhimailo, M. M. Popova, E. I. Staroselskaya, O. Tugs, Y. Orimo, T. Sato, K. L. Ishikawa, et al., Phys. Rev. X 10, 031070 (2020).
* [37] J. Fuchs et al., Phys. Rev. Lett. (2020, submitted) (https://arxiv.org/abs/2012.07426).
* [38] S. Donsa, N. Douguet, J. Burgdörfer, I. Brezinová, and L. Argenti, Phys. Rev. Lett. 123, 133203 (2019).
* [39] G. Laurent, W. Cao, H. Li, Z. Wang, I. Ben-Itzhak, and C. L. Cocke, Phys. Rev. Lett. 109, 083001 (2012).
* [40] L. J. Zipp, A. Natan, and P. H. Bucksbaum, Optica 1, 361-364 (2014).
* [41] Yudi Feng, Min Li, Siqiang Luo, Kun Liu, Baojie Du, Yueming Zhou, and Peixiang Lu, Phys. Rev. A 100, 063411 (2019).
* [42] M. Bertolino and J.M. Dahlström. Phys. Rev. Res 3, 013270 (2021).
* [43] Xiaohong Song, Guangluo Shi, Guojun Zhang, Jingwen Xu, Cheng Lin, Jing Chen, and Weifeng Yang, Phys. Rev. Lett. 121, 103201 (2018).
* [44] Anatoli S. Kheifets and Alexander W. Bray, Phys. Rev. A103, L011101 (2021).
* [45] H. G. Muller, Phys. Rev. A 60, 1341 (1999).
* [46] X.-M. Tong and Shih.-I. Chu, Chemical Physics, 217, 119 (1997).
* [47] X.-M. Tong and Shih.-I. Chu, Phys. Rev. A61, 031401(R) (2000).
* [48] L. V. Keldysh, Zh. Eksp. Teor. Fiz. 47, 1945 (1964); Sov. Phys. JETP 20, 1307 (1965).
* [49] F. H. M. Faisal, J. Phys. B 6, L89 (1973).
* [50] H. R. Reiss, Phys. Rev. A 22, 1786 (1980).
* [51] M. Jain and N. Tzoar, Phys. Rev. A 18, 538 (1978).
* [52] Stefan Donsa, Manuel Ederer, Renate Pazourek, Joachim Burgdörfer, and Iva Brezinová, Phys. Rev. A102, 033112 (2020).
* [53] E. P. Wigner, Phys. Rev. 98, 145 (1955).
* [54] F. T. Smith, Phys. Rev. 118, 349 (1960); erratum Phys. Rev. 119, 2098 (1960).
* [55] C.-H. Zhang and U. Thumm, Phys. Rev. A82, 043405 (2010).
* [56] Divya Bharti, David Atri-Schuller, Gavin Menning, Kathryn R. Hamilton, Robert Moshammer, Thomas Pfeifer, Nicolas Douguet, Klaus Bartschat, and Anne Harth, Phys. Rev. A103, 022834 (2021).
* [57] F. Bell, G. Trollmann, H. Böckl, H.-D. Betz, Nuclear Instruments and Methods in Physics Research, 194, 423-427 (1982).
* [58] F. Bell, G. Trollmann, H. D. Betz, Phys. Lett. A 88, 37-39 (1982).
* [59] U. Fano, Phys. Rev A 32, 617 (1985).
* [60] D. Busto, J. Vinbladh, S. Zhong, M. Isinger, S. Nandi, S. Maclot, et al., Phys. Rev. Lett. 123, 133201 (2019).
* [61] D. G. Arbó, C. Lemell, J., Burgdörfer, J. Phys. Conf. Ser. 635, 012003 (2015).
* [62] Mattias Bertolino, David Busto, Felipe Zapata, and Jan Marcus Dahlström, J. Phys. B 53, 144002 (2020).
* [63] S. Nagele, R. Pazourek, M. Wais, G. Wachter, J., Burgdörfer, J. Phys. Conf. Ser. 488, 012004 (2014).
* [64] S. Eckart, Phys. Rev. Res. 2, 033248 (2020).
* [65] D. Trabert, S. Brennecke, K. Fehre, N. Anders, A. Geyer, S. Grundmann, M. S. Schöffler, L. Ph. H. Schmidt, T. Jahnke, R. Dörner, M. Kunitski, and S. Eckart, Nat. Commun. 12, 1697 (2021).
(arXiv:2107.12414; S. D. López, S. Donsa, S. Nagele, D. G. Arbó, and J. Burgdörfer; CC BY 4.0)

arXiv:2107.12418
# Gravitational Radiation from Accelerating Jets
Elly Leiderschneider and Tsvi Piran ([email protected]), Racah Institute for Physics, The Hebrew University, Jerusalem, 91904, Israel
###### Abstract
Non-spherical rapid acceleration of mass (or energy) to a relativistic
velocity is a natural source of gravitational radiation. Such conditions arise
in both long and short gamma-ray bursts whose central engine ejects
relativistic jets. The resulting gravitational wave signal is of a memory
type, rising to a finite level (of order $4G{\cal E}/r$) over a duration that
corresponds to the longer of the injection time and the acceleration
time of the jet. We explore the properties of such signals and their potential
detectability. Unfortunately, the expected signals are below the frequency
band of Advanced LIGO-Virgo-KAGRA, and above that of LISA. However, they fall within
the range of the planned BBO and DECIGO. While current sensitivity is marginal
for the detection of jet gravitational wave signals from GRBs, hidden
relativistic jets that exist within some core collapse SNe could be detected.
Such a detection would reveal the acceleration mechanism and the activity of
the central engine, which cannot be explored directly in any other way.
## I Introduction
Gamma-ray bursts (GRBs) are extremely energetic, with typical energies of
${\cal E}=10^{51}$ erg. Jets associated with a GRB are accelerated to high
Lorentz factors, with ${\Gamma\gtrsim 100}$ being a typical value. They are
highly anisotropic, with the ejected material being confined to a cone with
opening angle ${\theta_{\rm j}\lesssim 10^{o}}$. These jets are accelerated
from rest within a short time, and they last for fraction of a second (in
short GRBs) to tens of seconds (in long ones). The acceleration of a
relativistic jet produces a memory-type gravitational wave (GW) signal [1, 2].
Observations of this GW signal will reveal the nature of the jets and the
acceleration process. Additionally, there is ample evidence for hidden jet activity within some supernovae [3, 4, 5, 6, 7, 8, 9]. The existence of
these jets can be inferred only indirectly. A detection of this kind of GW
signal is possibly the only direct way to identify these invisible jets and
learn about their hidden features.
While the estimated GW amplitudes [2, 10, 11], which are of order $10^{-24}-10^{-25}$ for reasonably nearby GRBs, and the relevant frequencies (which lie in the decihertz range) both make detection difficult, it is worthwhile to revisit this problem and explore in greater detail both the characteristics and the detection prospects of the GW signal.
Segalis and Ori [1] and Piran [2] considered an instantaneously accelerated
point particle, using the zero-frequency limit (ZFL). This approximation, which corresponds to infinite acceleration, is appropriate for describing the final
jump in the amplitude of the GW. However, this approximation misses,
naturally, the details of the temporal structure that are crucial for
consideration of detection feasibility.
Sago et al. [10] generalized this result for a GRB model based on a large
number of thin jets (“minijets”) [12] that are ejected at random angles within
a cone and random times within the duration of the GRB. Within this model each
minijet produces a single pulse and these pulses combine to form the GRB light
curve. In their model each minijet is described by an instantaneously
accelerated point particle generating a step function signal. The
superposition of the different step functions results in a complicated GW
light curve. The model captures the effects of the angular structure and of
the overall duration of the GRB resulting in a typical time scale for the
pulse rise time that is comparable to the duration of the burst.
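The qualitative point of the minijet picture, that a superposition of many step functions rises over roughly the burst duration, can be illustrated with a toy sketch (the number of minijets, their amplitudes, and the burst duration are arbitrary illustrative choices, not values from [10]):

```python
import numpy as np

rng = np.random.default_rng(1)
T_BURST = 10.0                                # assumed burst duration (s)
N_JETS = 200                                  # number of minijets
t_ej = rng.uniform(0.0, T_BURST, N_JETS)      # random ejection times
h_i = rng.uniform(0.5, 1.5, N_JETS)           # random step amplitudes (arb. units)

t = np.linspace(-1.0, 2.0 * T_BURST, 2000)
# each minijet contributes a step h_i * H(t - t_ej); the GW light curve is the sum
h = (h_i[None, :] * (t[:, None] >= t_ej[None, :])).sum(axis=1)

# the memory signal rises from zero to its final level over roughly T_BURST
t_rise_start = t[np.argmax(h > 0.01 * h[-1])]
t_rise_end = t[np.argmax(h > 0.99 * h[-1])]
rise_time = t_rise_end - t_rise_start
```

The summed signal starts at zero, saturates at the total memory amplitude, and its rise time tracks the spread of the ejection times, i.e. the duration of the burst.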
Birnholtz and Piran [11] relaxed the instantaneous acceleration approximation and developed a scheme for calculating the GW signal from a continuously accelerating axisymmetric jet. They considered, following the fireball model [13], an acceleration model in which the jet’s Lorentz factor increases
linearly with time (or distance) until it reaches its final value. They
considered different angular structures and observers at different viewing
angles, taking into account integration over equal-arrival-time surfaces. The combined effects of prolonged acceleration and of the integration over the arrival-time surface result in a temporal structure of the order of the acceleration time for viewing angles close to the jet axis, and longer at larger angles.
In this work, we calculate the GW emission from accelerating jets, combining the effects of prolonged acceleration and of a prolonged duration of jet ejection. We calculate properties of the GW that are universal and independent of the particular acceleration model, and combine them with a realistic model of outflow ejection in GRBs to derive typical amplitudes and detection prospects for GRBs and other astrophysical jets. In the following we use $G=c=1$, but at times we reintroduce these coefficients for clarity.
The structure of the paper is as follows. We outline in §II the general
description of the problem and following the methods of [11] (that consider
instantaneous injection) and some results of [10] (that consider instantaneous
acceleration) we describe the GW signals from systems with instantaneous
ejection or instantaneous acceleration. We explore in §III the temporal
structure focusing on the interplay between the two time scales that exist in
the system, the acceleration time scale, $t_{\rm acc}$, and the overall
duration of the activity of the central engine that accelerates the jet,
$t_{\rm inj}$. We consider in §IV an example in which we use the temporal structure of GRBs’ light curves as a proxy for the activity of the central engine. Following this example we consider in §V the detectability of these signals, and we summarize and discuss our results in §VI.
## II Instantaneous Ejection and Acceleration
We consider an idealized jet that is accelerated to an ultra-relativistic
velocity. The jet has energy ${\cal E}=m\Gamma$, with $m$ the jet’s mass and
$\Gamma$ its final Lorentz factor. To simplify the discussion we keep only the
essential features of the problem (see Fig. 1). The jet is an axisymmetric top
hat with an opening angle $\theta_{\rm j}$. The jet moves radially outwards,
and every particle emitted at the same time accelerates in the same manner.
Particles emitted at the same time maintain the shape of a radially expanding
infinitesimally thin spherical cap. The observer is located at a distance $r$
and at an angle $\theta_{\rm v}$, relative to the jet’s symmetry axis.
The energy (or mass) ejection function, $\dot{m}(t)$, describes the rate of
mass ejection, where $t$ is measured in the rest frame of the central engine,
and is the same in the observer’s rest frame. The function $\dot{m}(t)$ is
characterized by the timescale $t_{\rm inj}$. The acceleration is described by
the function $\Gamma(t)$, where $t$ is measured in the central engine’s frame
of reference. $\Gamma(t)$ is characterized by the acceleration timescale
$t_{\rm acc}$. The time of flight scale that characterizes the arrival time
from different angular regions of the jet is related to the acceleration time
as
$\tilde{t}_{\rm o}(\theta_{\rm v})=(1-\beta\cos\Delta\theta_{\rm v})t_{\rm
acc},$ (1)
where $\Delta\theta_{\rm v}$ is the “relevant” (as discussed later) angle
between the observer and the source and $\beta$ is the jet’s velocity. As the
critical time scale is the longer of the two we denote $t_{\rm c}\equiv{\rm
max}(\tilde{t}_{\rm o},t_{\rm inj})$. As $\tilde{t}_{\rm o}$ depends on the
viewing angle, the dominant time scale may be $t_{\rm inj}$ for some observers
and $t_{\rm acc}$ for others.
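Eq. (1) and the definition of $t_{\rm c}$ can be made concrete with a short sketch (the numbers $\Gamma=100$, $t_{\rm acc}=10$ s, $t_{\rm inj}=1$ s are illustrative choices, not values from the paper); it shows how the dominant time scale switches with viewing angle:

```python
import math

def t_obs(delta_theta, t_acc, gamma):
    """Time-of-flight scale of Eq. (1): (1 - beta*cos(delta_theta)) * t_acc."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return (1.0 - beta * math.cos(delta_theta)) * t_acc

def t_c(delta_theta, t_acc, t_inj, gamma):
    """Critical (dominant) time scale: t_c = max(t_obs, t_inj)."""
    return max(t_obs(delta_theta, t_acc, gamma), t_inj)

gamma, t_acc, t_inj = 100.0, 10.0, 1.0
# on axis t_obs ~ t_acc/(2*Gamma^2) is tiny, so t_inj dominates
t_on_axis = t_c(0.0, t_acc, t_inj, gamma)
# at a 30-degree viewing angle the time-of-flight term dominates instead
t_off_axis = t_c(math.radians(30.0), t_acc, t_inj, gamma)
```

For an on-axis observer $t_{\rm c}=t_{\rm inj}=1$ s here, while at $30^{\circ}$ the time-of-flight term $(1-\beta\cos\Delta\theta_{\rm v})t_{\rm acc}\approx 1.34$ s takes over.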
Figure 1: A schematic description of the jet. The top shell has reached the
final Lorentz factor at a distance $ct_{\rm acc}$ from the origin. The
duration of mass injection is $t_{\rm inj}$. A counter-jet is shown in light
colors.
Among the different approximations, the zero-frequency limit (ZFL) stands out
[1, 2]. This approximation ignores the detailed temporal structure of the
source and the corresponding GW signal. The acceleration and mass ejection are
instantaneous: $t_{\rm acc}=0$, and $t_{\rm inj}=0$. While non-physical, this
limit gives an idea of the emerging patterns. It is also relevant for low-
frequency detectors whose response is slower than the relevant timescales of
the system. The waveform, in this limit, is described by a Heaviside step
function:
$h(t,\theta_{\rm v})=h_{0}(\theta_{\rm v})\mathcal{H}(t)\ $ (2)
and its Fourier transform is given by
$\tilde{h}(f,\theta_{\rm v})={h_{0}(\theta_{\rm v})}/{f}\ .$ (3)
### II.1 A Point Particle - $\theta_{\rm j}=0$ and $t_{\rm inj}=0$
We begin considering a point particle of mass $m$ that is instantaneously
accelerated to a Lorentz factor $\Gamma$ so that the total energy is ${\cal
E}=m\Gamma$. The particle is moving at polar angles $\theta_{\rm v}$ and
$\phi$ in the observer’s frame of reference (see Fig. 1). The gravitational
wave amplitudes $h_{\rm+}$ and $h_{\rm x}$ of the two polarization modes are
given by [1]:
$h^{TT}(\theta_{\rm v})=h_{\rm+}+ih_{\rm x}=\frac{2{\cal
E}\beta^{2}}{r}\frac{\sin^{2}\theta_{\rm v}}{1-\beta\cos\theta_{\rm
v}}e^{2i\phi}.$ (4)
For a single point-particle, the phase, $2i\phi$, can be ignored. When
discussing the metric perturbation of an ensemble of particles, though, the
complex phase may lead to destructive interference, and one component of the
perturbation tensor may dominate over the other.
The angular dependence of the amplitude $h(\theta_{\rm v})$ exhibits anti-beaming: the GW amplitude vanishes along the direction of motion and remains small in a cone around it, reaching 50% of its maximal value at an opening angle $\Gamma^{-1}$. The function $h(\theta_{\rm v})$ attains a maximum of
$h_{\rm max}=\frac{4{{\cal E}}}{r},\ \ \ \ {\rm at}\ \ \ \theta_{\rm
max}=\sqrt{{2}/{\Gamma}}\ .$ (5)
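Eq. (5) can be verified numerically from Eq. (4) by scanning $h(\theta_{\rm v})$ on a fine grid; a minimal sketch in units of ${\cal E}/r$ (with $G=c=1$, and $\Gamma=100$ as an illustrative choice):

```python
import numpy as np

def h_over_E_r(theta, gamma):
    """|h| from Eq. (4) in units of E/r: 2*beta^2*sin^2(theta)/(1 - beta*cos(theta))."""
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    return 2.0 * beta**2 * np.sin(theta)**2 / (1.0 - beta * np.cos(theta))

gamma = 100.0
theta = np.linspace(1e-4, np.pi, 200_000)
h = h_over_E_r(theta, gamma)

theta_max = theta[np.argmax(h)]          # should approach sqrt(2/Gamma), Eq. (5)
h_max = h.max()                          # should approach 4 (i.e. 4*E/r)
h_half = h_over_E_r(1.0 / gamma, gamma)  # ~50% of the maximum at theta = 1/Gamma
```

For $\Gamma=100$ this gives $\theta_{\rm max}\approx 0.141\approx\sqrt{2/\Gamma}$ and $h_{\rm max}\approx 3.96$, approaching the limiting value $4{\cal E}/r$ of Eq. (5), and confirms the 50% level at $\theta=\Gamma^{-1}$.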
The total GW energy emitted is given by:
$E_{\rm GW}=\frac{1}{32\pi}\iint{\dot{h}}^{2}dtd\Omega\ ,$ (6)
where $\dot{\ }$ denotes time derivative. For an instantaneously accelerating
particle, this integral diverges. However, this divergence is not physical,
and it arises from the instantaneous approximation. For a finite acceleration
time $t_{\rm acc}$ or a finite injection time the temporal integral can be
calculated in Fourier space:
$E_{\rm GW}=\frac{1}{32\pi}\int d\Omega\int_{0}^{f(\theta_{\rm
v})}\tilde{h}(f)^{2}f^{2}df\ ,$ (7)
where $f(\theta_{\rm v})=\min(t_{\rm inj}^{-1},\tilde{t}_{\rm o}^{-1})$ (with
$\tilde{t}_{\rm o}$ calculated here using $\Delta\theta_{\rm v}=\theta_{\rm
v}$) is the angle-dependent upper cutoff on the frequency given by the finite
acceleration and injection times. Integrating we obtain [11]:
$E_{\rm GW}={\cal E}^{2}\begin{cases}\frac{1}{2t_{\rm acc}}\left[\frac{3-\beta^{2}}{\beta}\ln\frac{1+\beta}{1-\beta}-6\right]&\mbox{if }\tilde{t}_{\rm o}>t_{\rm inj}\ ,\\ \frac{2}{t_{\rm inj}}\left[\left(2-\frac{4\beta}{3}\right)+\frac{1-\beta^{2}}{\beta}\ln\frac{1+\beta}{1-\beta}\right]&\mbox{if }t_{\rm inj}>\tilde{t}_{\rm o}\ .\end{cases}$ (8)
When adding the coefficients $G$ and $c$ this expression becomes $E_{\rm
GW}\propto[G{\cal E}/c^{4}\max(t_{\rm acc},t_{\rm inj})]{\cal E}$.
The ratio of the GW emitted energy to the total energy of the particle, ${\cal
E}$, vanishes when $\beta\rightarrow 0$. However, if $t_{\rm acc}$ is the
dominant (longest) time scale it diverges when $\Gamma\rightarrow\infty$.
Namely, the accelerating engine deposits, in such a case, more energy in
generating gravitational radiation than in accelerating the jet. If the jet is
self-accelerating this is of course impossible, but then the acceleration
process has to be considered more carefully.¹ (¹Note that in this case $t_{\rm acc}\leq{\cal E}$; however, the term in square brackets can still be larger than unity.)
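The two limits quoted above can be made concrete with a short sketch evaluating the bracket of the first branch of Eq. 8, $B=\frac{3-\beta^{2}}{\beta}\ln\frac{1+\beta}{1-\beta}-6$ (the asymptotic forms quoted in the comments are our own expansion, not from the text):

```python
import numpy as np

# Limits of the bracket in the first branch of Eq. (8):
#   B = (3 - beta^2)/beta * ln((1+beta)/(1-beta)) - 6.
# For beta -> 0 it vanishes (a series expansion gives B ~ (8/15) beta^4),
# so E_GW/E -> 0; for Gamma -> infinity it grows ~ 4 ln(Gamma), i.e. the
# ratio diverges logarithmically when t_acc is the dominant timescale.
def bracket(gamma):
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    return (3 - beta**2) / beta * np.log((1 + beta) / (1 - beta)) - 6

for g in [10.0, 100.0, 1000.0, 10000.0]:
    print(g, bracket(g), 4 * np.log(g))  # B tracks 4 ln(Gamma) up to a constant
```

For $\Gamma\gg 1$ the bracket approaches $4\ln\Gamma+2\ln 4-6$, making the logarithmic divergence explicit.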
While the GW amplitude, $h$, is anti-beamed, the GW energy is beamed in the
forward direction (see Fig. 2). 50% of the GW energy is deposited in a cone
with an opening angle $\theta_{50\%}=\sqrt{{2}/{\Gamma}}$. This may seem
counter-intuitive at first. One must remember, however, that while the GW
amplitude decreases over an angular scale $\Gamma^{-1}$ around the axis, the
observed frequency of the GW is also boosted in this direction. When both
effects are taken into account we find that, while very little energy is
emitted within the anti-beamed cone of $\Gamma^{-1}$, the overall energy is
still beamed in the forward direction just around this inner cone.
Figure 2: The angular distribution of the normalized GW energy for three
different Lorentz factors. Energy is beamed in the forward direction, such
that 50% of the GW energy is confined in a cone with opening angle
$\sqrt{{2}/{\Gamma}}$. The area under all the distributions is normalized to
unity.
Instantaneous ejection of two point particles in opposite directions will lead
to a wave form that is the sum of the two
$h(\theta_{\rm v})=\frac{4{\cal E}\beta^{2}}{r}\frac{1-\cos^{2}\theta_{\rm v}}{1-\beta^{2}\cos^{2}\theta_{\rm v}}\ .$ (9)
$h(\theta_{\rm v})$ is almost flat apart from the minima along the two axes.
However, the energy is still beamed in cones of width $\sqrt{2/\Gamma}$, as
the contribution of the particle that is moving away from the observer will be
seen only at much lower frequencies than the one moving towards it.
### II.2 A Narrow stream - $\theta_{\rm j}=0$ and $t_{\rm inj}\neq 0$
Relaxing somewhat the ZFL approximation, we generalize the previous results to
a continuous ejection of a narrow stream over $t_{\rm inj}$. All one needs to
do is to integrate the single particle $h$ (Eq. 9) over the emission time. As
all particles contribute with the same phase, there is no destructive
interference. The final jump in the GW signal remains the same, and so do the angular structure and the maximal viewing angle. There are, though, two important differences. First, the amplitude increases following $m(t)$ on a time scale $t_{\rm inj}$ (see §III below). Second, this introduces a typical frequency of $1/t_{\rm inj}$ that determines both the temporal structure of $h$ and the total energy emitted, as already discussed in Eq. 8.
### II.3 A Spherical Cap - $\theta_{\rm j}\neq 0$ and $t_{\rm inj}=0$
We consider next a thin spherical cap of particles ejected simultaneously and
accelerated instantaneously. The cap is defined by its opening angle
$\theta_{\rm j}$, final Lorentz factor $\Gamma$, total energy ${\cal E}$, and
the angle between its center and the observer $\theta_{\rm v}$. We will assume
that the cap is wide, namely $\Gamma^{-1}\ll\theta_{\rm j}$; otherwise, if $\theta_{\rm j}\lesssim\Gamma^{-1}$, the signal converges to the point-particle limit.
We define the observer’s line of sight to the emitting source as the $z$ axis
of our coordinate system. The coordinates $\theta$ and $\phi$ are defined in
the observer’s coordinate system in the usual manner (see Fig. 1). Without
loss of generality, we define the direction of the jet as
$(\theta,\phi)=(\theta_{\rm v},0)$ in the observer’s frame of reference.
The axial symmetry implies a symmetry under the transformation
$\phi\rightarrow-\phi$. Therefore, the metric perturbation $h_{\rm x}$ (which
is now summed over the shell) vanishes identically (see Eq. 9, and Fig. 3). In
the following, we simply denote $h=h_{\rm+}$, the only non-vanishing component
of the metric perturbation tensor.
Figure 3: A schematic view of the jet (blue) for $\theta_{\rm v}<\theta_{\rm
j}$. Due to the symmetry, the contribution to the GW amplitude of the part of
the jet that is spherically symmetric around the observer (shown in red )
vanishes. The amplitude from partial rings with $\theta>\theta_{\rm
j}-\theta_{\rm v}$, is reduced compared to the amplitude of a point-particle
with the same energy and angle to the observer. The jet is symmetric under the
transformation $\phi\rightarrow-\phi$: hence, the metric perturbation
component $h_{\rm x}$ vanishes identically.
Integrating over the cap we find:
$h_{\rm cap}(\theta_{\rm v},\theta_{\rm j})=\frac{2{\cal E}\beta^{2}}{r\Delta\Omega}\int_{|\theta_{\rm v}-\theta_{\rm j}|}^{\min(\theta_{\rm j}+\theta_{\rm v},\pi)}\frac{\sin^{3}\theta\cdot\sin 2\Delta\phi}{1-\beta\cos\theta}d\theta\ ,$ (10)
where $\Delta\Omega\equiv 2\pi(1-\cos\theta_{\rm j})$ is the solid angle of the cap, and
$\Delta\phi\equiv\cos^{-1}\left[\frac{\cos\theta_{\rm j}-\cos\theta_{\rm
v}\cos\theta}{\sin\theta_{\rm v}\sin\theta}\right]\ .$ (11)
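Eqs. 10–11 can be evaluated by direct quadrature. A sketch, assuming units ${\cal E}=r=1$ and taking the point-particle reference amplitude to be $2{\cal E}\beta^{2}\sin^{2}\theta_{\rm v}/[r(1-\beta\cos\theta_{\rm v})]$ (the form implied by the prefactor of Eq. 10):

```python
import numpy as np

# Quadrature of Eqs. (10)-(11) in units E = r = 1, checking that a narrow cap
# (theta_j -> 0) recovers the point-particle amplitude, and that the amplitude
# is strongly suppressed for observers well inside the cap (anti-beaming).
def h_cap(theta_v, theta_j, gamma, n=200_000):
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    d_omega = 2 * np.pi * (1 - np.cos(theta_j))      # solid angle of the cap
    lo = abs(theta_v - theta_j)
    hi = min(theta_v + theta_j, np.pi)
    th = np.linspace(lo, hi, n)[1:-1]                # drop endpoints (sin -> 0 issues)
    arg = (np.cos(theta_j) - np.cos(theta_v) * np.cos(th)) / (
        np.sin(theta_v) * np.sin(th))
    dphi = np.arccos(np.clip(arg, -1.0, 1.0))        # Eq. (11)
    f = np.sin(th)**3 * np.sin(2 * dphi) / (1 - beta * np.cos(th))
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(th))  # trapezoid rule
    return 2 * beta**2 / d_omega * integral          # Eq. (10) with E = r = 1

gamma, theta_v = 100.0, 0.5
beta = np.sqrt(1.0 - 1.0 / gamma**2)
h_pp = 2 * beta**2 * np.sin(theta_v)**2 / (1 - beta * np.cos(theta_v))
print(h_cap(theta_v, 0.01, gamma) / h_pp)   # close to 1: point-particle limit
print(h_cap(0.01, 0.3, gamma))              # small: inside the anti-beamed cone
```

At both integration limits $\Delta\phi$ reaches $0$ or $\pi$, so the integrand vanishes smoothly there, which keeps the simple trapezoid rule well behaved.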
Figure 4: The angular distribution of $h$, the GW amplitude (top) and $dE_{\rm GW}/d\theta$, the normalized energy distribution (bottom) from an accelerating spherical cap with $\Gamma=100$. The anti-beaming region is $\approx 0.84\ \theta_{\rm j}$. Note the different angular scale of the two figures. The area under each energy distribution is normalized to unity. Dashed lines in the top figure represent the amplitudes of double-sided jets.
Figure 5: Specific viewing angles for a jet with $\Gamma=100$ as a function of $\theta_{\rm j}$: $\theta_{50\%}$ is the opening angle of the cone which contains 50% of the GW's energy, $\theta_{\rm max}$ is the viewing angle with the maximal observed GW amplitude, and $\theta_{a-b}$ is the anti-beaming angle, at which the GW amplitude drops to 50% of maximum. For $\Gamma^{-1}\ll\theta_{\rm j}$, all three angles are determined by $\theta_{\rm j}$. The intercepts with the $\theta_{\rm j}=0$ axis are determined by the point-particle results.
Figure 6: The GW energy beaming angle, $\theta_{50\%}$, as a function of the jet's opening angle, for jets with different Lorentz factors. The intercepts with the $\theta_{\rm j}=0$ axis correspond to $\sqrt{{2}/{\Gamma}}$, but for $\Gamma^{-1}\ll\theta_{\rm j}$, the angle $\theta_{50\%}$ is determined by $\theta_{\rm j}$. Note that the corresponding energy distribution is peaked around $\theta_{\rm j}$ and $\pi-\theta_{\rm j}$.
Figure 4 depicts $h_{\rm cap}(\theta_{\rm v},\theta_{\rm j})$ for different
opening angles. This angular behavior resembles the point-particle result,
with a major difference: the anti-beaming region, which was $\Gamma^{-1}$ in
the point-particle case, is now $\approx 0.84\ \theta_{\rm j}$ (see Fig. 5),
and it is independent of $\Gamma$. This is due to the fact that any region of
the cap which is axially symmetric around the observer would have no
contribution to the GW amplitude. For $\theta_{\rm v}<\theta_{\rm j}$, only
the outer region of the cap, with $\theta>\theta_{\rm j}-\theta_{\rm v}$,
contributes. The effect is twofold: regions of the cap with
$\theta<\theta_{\rm j}-\theta_{\rm v}$ have a vanishing contribution to the
amplitude, and even in the outer region, destructive interference between
symmetric regions will reduce the GW amplitude.
The maximal GW amplitude is now a function of $\theta_{\rm j}$ (compare with
Eq. 5). For small opening angles:
$h_{\rm max}(\theta_{\rm j})\approx\frac{4{\cal
E}}{r}(1-\frac{3}{4}\theta_{\rm j})\ .$ (12)
Using the amplitude $h_{\rm cap}(\theta_{\rm v},\theta_{\rm j})$, we calculate the total GW energy as a straightforward generalization of Eq. 8. For simplicity, we estimate it only in the regime dominated by $\tilde{t}_{\rm o}$, for which we use $\Delta\theta_{\rm v}=\theta_{\rm v}+\theta_{\rm j}$:
$E_{\rm cap}(\theta_{\rm j})=\frac{1}{16t_{\rm acc}}\int_{0}^{\pi}\frac{h_{\rm
cap}(\theta_{\rm v},\theta_{\rm j})^{2}}{1-\beta\cos(\theta_{\rm
v}+\theta_{\rm j})}\sin\theta_{\rm v}d\theta_{\rm v}\ .$ (13)
Again, the energy diverges for a strictly instantaneous acceleration. To estimate the energy in a realistic case, we have to introduce a frequency cutoff that depends on the acceleration time. Because of time-of-flight effects, it also depends on the relation between the viewing angle, the opening angle of the jet, and the final velocity: $\tilde{t}_{\rm o}=({1-\beta\cos(\theta_{\rm v}+\theta_{\rm j})}){t_{\rm acc}}\ .$
Similarly to the case of the GW amplitude anti-beaming angle, we find that the
angle of the cone which constrains 50% of the cap GW’s energy,
$\theta_{50\%}$, is determined by $\theta_{\rm j}$ and not by $\Gamma$. Fig. 5
depicts the GW amplitude’s anti-beaming angle, as well as $\theta_{50\%}$ and
the angle $\theta_{\rm max}$ where the observed GW amplitude is maximized, all
as a function of the jet’s opening angle $\theta_{\rm j}$. Fig. 6 shows
$\theta_{50\%}$ as a function of $\Gamma$. For $\Gamma^{-1}\ll\theta_{\rm j}$,
the energy beaming angle is determined only by $\theta_{\rm j}$. Fig. 4 shows
the angular distribution of the GW energy for jets with different opening
angles.
### II.4 Double-sided jets
The angular distribution of the GW signal changes drastically if the jet is
two-sided. We turn to examine two spherical caps of equal energy that are
accelerated along two opposite directions. In this case, the GW amplitude is a
monotonically increasing function of $\theta_{\rm v}$, up to $\pi/2$, where it
is maximal (see Fig. 4). The maximal GW amplitude is:
$h_{\rm max}(\theta_{\rm j})=\frac{4{\cal E}}{r}\cos\theta_{\rm j},$ (14)
where ${\cal E}$ is now the total energy of both caps. The result resembles that for two ejected point particles but, as in the single-cap case, the width of the suppressed region around the axes is now of order $0.84\theta_{\rm j}$, rather than of order $\sqrt{2/\Gamma}$.
## III The Temporal Structure
We turn now to consider the effect of the more detailed temporal structure of
the source and the acceleration process.
### III.1 Power Spectrum and Timescales
A memory-type signal, rising to an asymptotic value $h_{0}(\theta_{\rm v})$ over a timescale $t_{\rm c}=\max(t_{\rm acc},t_{\rm inj})$, has a characteristic Fourier transform:
$\tilde{h}(f,\theta_{\rm v})=\begin{cases}{h_{0}(\theta_{\rm v})}/{f},\quad
f\leq f_{\rm c}\\\ {h_{0}(\theta_{\rm v})f_{\rm c}}{g(f)},\ \ f\geq f_{\rm
c}\end{cases}$ (15)
where $f_{\rm c}\equiv{1}/{t_{\rm c}}$ is the crossover frequency and $g(f)$,
which depends on the nature of the source, decreases faster than $1/f$. As the total GW energy must be finite, the integral $\int_{0}^{\infty}{\dot{h}}^{2}(t)dt=\int_{0}^{\infty}f^{2}{\tilde{h}}^{2}(f)df$ yields an asymptotic bound $g\propto f^{-\alpha_{\infty}}$ with $\alpha_{\infty}>3/2$.
The Fourier transform is closely related to the spectral density, which is
typically used to characterize the signal-to-noise ratio of the GW:
$S(f)\equiv\tilde{h}(f)\cdot\sqrt{f}$ (16)
The combination of the crossover frequency, $f_{\rm c}$, and the spectral
density at this frequency, $S(f_{\rm c})$, is critical to determine the
detectability of the signal. The condition
$S_{\rm det}(f)<S(f_{\rm c})(f_{\rm c}/f)^{1/2}$ (17)
is a necessary but not sufficient condition for the detection of this signal.
For a low-frequency detector (with a typical frequency range below $f_{\rm c}$) this condition is sufficient, as it will detect such an event if Eq. 17 is satisfied for some frequency $f$ in its range. This detector will observe a step function. As $S(f)$ decreases faster than $f^{-1/2}$ above $f_{\rm c}$, Eq. 17 is not a sufficient condition for detection by a high-frequency detector. If it is sensitive enough, a high-frequency detector can detect the relevant and interesting temporal structure that exists beyond a simple step function.
As the signal can be characterized by the crossover frequency, the following
sections are concerned with identifying this frequency in the GW’s Fourier
spectrum. We discuss first the simplifying limit $t_{\rm inj}=0$, in which the
jet is emitted at once. We then examine the general case, of a finite $t_{\rm
inj}$.
### III.2 Instantaneous spherical cap - $t_{\rm acc}=0$, $t_{\rm inj}=0$
We consider here the GW signal of a single spherical cap, of angular size
$\theta_{\rm j}$, that is instantaneously injected and accelerated, but the
acceleration takes place at a radius $R$ rather than at the origin. We
decompose the spherical cap to concentric rings around the observer. The
signal from a full ring vanishes. The signal from a partial ring at an angle
$\theta$ to the observer is a Heaviside step function, whose magnitude and
phase are characterized by $l(\theta)e^{2i\Delta\phi}$, where $l(\theta)$ is
the fraction of the ring within the cap (see Fig. 3) and $\Delta\phi$, defined
by Eq. 11, is the corresponding phase.
The arrival time of the signal from this ring is $(1-\beta\cos\theta)R/c$.
Integration over these (partial) rings yields the GW signal:
$\displaystyle\tilde{h}_{\rm cap}(f)=\frac{2{\cal E}\beta^{2}}{r\Delta\Omega}\int_{|\theta_{\rm v}-\theta_{\rm j}|}^{\theta_{\rm v}+\theta_{\rm j}}d\theta\frac{\sin^{3}\theta}{1-\beta\cos\theta}l(\theta)e^{2i\Delta\phi}$ $\displaystyle\times\frac{i}{f}e^{if(1-\beta\cos\theta)R/c},$ (18)
where the lower integration limit is determined by the requirement that the GW
contribution of a whole ring vanishes, and
$\exp[{if(1-\beta\cos\theta)R/c}]/{f}$ is the Fourier transform of the
Heaviside function.
The crossover frequency for this GW signal is determined by the time delay
between the earliest and latest components of the signal:
$f_{\rm c}=\frac{1}{\cos(\theta_{\rm v}-\theta_{\rm j})-\cos(\theta_{\rm
v}+\theta_{\rm j})}\frac{c}{R}\ .$ (19)
Note that to generalize this result to a non-instantaneous acceleration, we can use $R=ct_{\rm acc}$.
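Since $\cos(\theta_{\rm v}-\theta_{\rm j})-\cos(\theta_{\rm v}+\theta_{\rm j})=2\sin\theta_{\rm v}\sin\theta_{\rm j}$, Eq. 19 reduces to $f_{\rm c}=c/(2R\sin\theta_{\rm v}\sin\theta_{\rm j})$. A quick numerical check (the values of $\theta_{\rm v}$, $\theta_{\rm j}$ and $R$ are illustrative, not from the text):

```python
import numpy as np

# Eq. (19): the crossover frequency is set by the time delay across the cap,
# [cos(theta_v - theta_j) - cos(theta_v + theta_j)] R/c = 2 sin(theta_v) sin(theta_j) R/c.
theta_v, theta_j = 0.4, 0.1         # rad (illustrative)
R, c = 3.0e12, 3.0e10               # cm and cm/s (illustrative radius)
delay = (np.cos(theta_v - theta_j) - np.cos(theta_v + theta_j)) * R / c
f_c = 1.0 / delay
print(f_c, c / (2 * R * np.sin(theta_v) * np.sin(theta_j)))  # identical
```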
### III.3 Continuously accelerating spherical cap - $t_{\rm acc}\neq 0$.
The signal from a cap that is accelerating continuously depends on the
specific acceleration model. Birnholtz & Piran [11] calculated $h(t)$ for a
cap accelerating according to the basic fireball GRB model [15, 16],
$\Gamma\propto R$.
Repeating their calculations for different $(\theta_{\rm v},\theta_{\rm j})$,
we find (see Fig. 8 and also Fig. 11 of [11]) that the corresponding crossover
frequency is given by the time delay between the earliest ($t=0$ at the
origin) and latest ($\theta=\theta_{\rm v}+\theta_{\rm j}$ at the end of the
acceleration phase) signals:
$f_{\rm c~{}|{t_{\rm inj}=0}}=\frac{1}{1-\beta\cos(\theta_{\rm v}+\theta_{\rm
j})}\frac{1}{t_{\rm acc}}$ (20)
While this result was derived for a specific acceleration model, Eq. 20 is
quite general, being derived purely from geometrical arguments. We plot in
Fig. 7 the Fourier transforms of GWs based on three different acceleration
models: the fireball model $\Gamma(t)-1=(\Gamma-1)t/t_{\rm acc}$; a constant
acceleration in the jet’s frame of reference
$\Gamma(t)^{2}-1=(\Gamma-1)^{2}(t/t_{\rm acc})^{2}$, and
$\Gamma(t)-1=(\Gamma-1)\tanh(t/t^{\prime}_{\rm acc})$. For all three models,
we find that the final jump in amplitude is indeed given by the ZFL limit in
Eq. 10, and that the crossover frequencies are given by Eq. 20.
For all three models considered, we find that the high-frequency behavior,
$g(f)$ is described by a power law $f^{-\alpha}$, with $\alpha\approx 2$. For
a constant acceleration, the low-frequency behavior coincides with the
fireball model. This is no surprise, since the equivalent long-timescale
acceleration in both cases is $\Gamma(t)\sim t$. For the third acceleration
model, the hyperbolic function’s typical timescale is not defined as clearly,
so we tuned its timescale parameter, $t^{\prime}_{\rm acc}$, such that the
high-frequency power law would coincide with the two other models.
Figure 7: The Fourier transforms of the GW signals from jets with three different acceleration models (with $\Gamma_{f}=100$, $\theta_{\rm j}=0.1$, $\theta_{\rm v}=0.9$). The time constant of the third model was chosen such that the high-frequency power laws would coincide.
Figure 8: The normalized Fourier waveform multiplied by the frequency for jets with the acceleration model $\Gamma\propto R$, based on the numerical code described in [11] for $\Gamma=100$ and $\theta_{\rm j}=0.1$. Below the crossover frequency, $\tilde{h}(f)\cdot f$ is a constant.
### III.4 The crossover diagram
The observed frequency of the GW can get boosted by a maximal factor of
$2\Gamma^{2}$ along the direction of motion of the jet. However, because of
the anti-beaming, the signal is minimal in that direction. Fig. 9 depicts the
crossover diagram, $S(f_{\rm c})$ vs. $f_{\rm c}$, for different jets. This
diagram represents how the observed spectral density varies as the source is viewed from different viewing angles.
Figure 9: The crossover diagrams of jets with different opening angles (and of a point particle), for $\Gamma=100$ and $t_{\rm acc}=1~{\rm sec}$. The spectral density is normalized by the value $S_{0}\equiv({2{\cal E}}/{r})\sqrt{t_{\rm acc}}$. Along the curves, both the crossover frequency, $f_{\rm c}$, and the spectral density at that frequency, $S(f_{\rm c})$, vary as a function of the observer angle $\theta_{\rm v}$. Some observer angles are indicated for reference.
Figure 9 demonstrates that the jet’s finite opening angle reduces the possible
boost in the crossover frequency from ${2\Gamma^{2}}$ to $({1-\cos\theta_{\rm
j}})^{-1}$. While the boost in frequency increases the jet’s crossover
frequency, it is always accompanied by a reduction in the observed spectral
density, since the angular region in which the frequency is boosted is well
within the GW’s anti-beaming region, meaning that the overall spectral density
at high frequencies is diminished. The spectral density is comparable to the
maximal value over a wide range of viewing angles. For example, for small
opening angles ($\theta_{\rm j}<0.3$), and with $\Gamma=100$, $S(f)$ is
maximal at an observer angle $\theta_{\rm v}\approx\cos^{-1}({1}/{3})=1.23$,
and it exceeds 50% of the maximum value for $0.38<\theta_{\rm v}<2.17$,
corresponding to 75% of the sky.
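The quoted sky fraction follows directly from the solid-angle measure; a one-line check (the bounds 0.38 and 2.17 rad are those given in the text):

```python
import numpy as np

# Fraction of the sky with 0.38 < theta_v < 2.17:
# integral of sin(theta) d(theta)/2 = (cos(0.38) - cos(2.17)) / 2.
frac = (np.cos(0.38) - np.cos(2.17)) / 2
print(frac)   # ~ 0.75, i.e. 75% of the sky
```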
The crossover diagram of a double-headed jet, consisting of two jets
propagating in two opposite directions, is rather similar to that of a single
jet. This is, again, due to the anti-beaming of the GW signal. The jump in
amplitude for each jet component is determined by Eq. 10. For small observer
angles, the amplitude of the jet propagating away from the observer will be
negligible compared to the amplitude of the jet heading towards the observer
(see Fig. 4). The two jets will have comparable amplitudes only in the
intermediate range of observer angles, $\theta_{\rm v}\approx{\pi}/{2}$. The
contribution of both jets in this angular range is slightly higher than that
of a single jet.
### III.5 $t_{\rm inj}\neq 0$
With the introduction of another timescale $t_{\rm inj}$, the problem becomes
more complex. The main point of the previous section, though, is unchanged:
the Fourier transform of the signal is monotonically decreasing, and the
crossover from $1/f$ behavior to a steeper decrease occurs at a frequency
$f_{\rm c}$. The only difference between this and the $t_{\rm inj}=0$ case is
the way in which the crossover frequency $f_{\rm c}$ is determined. The
situation is complicated, though, because the timescale determined by the
acceleration, $[1-\beta\cos(\theta_{\rm v}+\theta_{\rm j})]t_{\rm acc}$ is
angle-dependent. While $t_{\rm inj}$ can be larger for some angles, the
acceleration-related timescale can be larger for others.
To demonstrate the behavior we consider a toy model. In this model the signal from a single shell (i.e., a single accelerating spherical cap) is described by the function $h(t)$, and the mass ejection function, $\dot{m}(t)$, describes the rate of ejection of shells. We choose a simple non-trivial model which involves two timescales:
$h(t)=\begin{cases}0\quad&t<0\ ,\\\ [{t}/{\tilde{t}_{\rm o}(\theta_{\rm
v},\theta_{\rm j})}]h_{0}(\theta_{\rm v},\theta_{\rm j})\quad&0\leq
t\leq\tilde{t}_{\rm o}(\theta_{\rm v},\theta_{\rm j})\ ,\\\ h_{0}(\theta_{\rm
v},\theta_{\rm j})\quad&t>\tilde{t}_{\rm o}(\theta_{\rm v},\theta_{\rm j})\
;\end{cases}$ (21)
and
$\dot{m}(t)=\begin{cases}0\quad&t<0\ ,\\\ \dot{m}_{0},\quad&0\leq t\leq t_{\rm
inj}\ ,\\\ 0\quad&t>t_{\rm inj}\ .\end{cases}$ (22)
where $\tilde{t}_{\rm o}(\theta_{\rm v},\theta_{\rm
j})\equiv(1-\beta\cos(\theta_{\rm v}+\theta_{\rm j}))t_{\rm acc}$ and
$h_{0}(\theta_{\rm v},\theta_{\rm j})$ is the (ZFL) jump of the GW amplitude,
given by Eq. 10. The combined GW signal is given by the convolution of the two
functions. The amplitude of the Fourier transform at the crossover frequency
is $h_{0}(\theta_{\rm v},\theta_{\rm j})/{f_{\rm c}}$. The observed crossover
frequency is now determined by two timescales, and one of those,
$\tilde{t}_{\rm o}(\theta_{\rm v},\theta_{\rm j})$, varies with $\theta_{\rm
v}$. Fig. 10 depicts the crossover diagrams determined by the simple model of Eqs. 21-22.
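For the toy model of Eqs. 21-22, the Fourier amplitude of the convolved signal is the product of the two transforms: a linear ramp of duration $\tilde{t}_{\rm o}$ contributes $h_{0}|{\rm sinc}(f\tilde{t}_{\rm o})|/(2\pi f)$ and the normalized box-car contributes $|{\rm sinc}(f t_{\rm inj})|$ (with ${\rm sinc}(x)=\sin(\pi x)/(\pi x)$, the numpy convention). A sketch with illustrative timescales, showing that $f\,|\tilde{h}(f)|$ is flat below $f_{\rm c}=1/\max(\tilde{t}_{\rm o},t_{\rm inj})$ and falls off above it:

```python
import numpy as np

# Fourier amplitude of the convolution of Eqs. (21)-(22):
#   |H(f)| = (h0 / 2 pi f) |sinc(f t_o) sinc(f t_inj)|,
# so the longest timescale sets the crossover frequency f_c.
h0, t_o, t_inj = 1.0, 0.1, 1.0          # seconds; illustrative values
f = np.logspace(-2, 3, 2000)            # Hz
H = h0 / (2 * np.pi * f) * np.abs(np.sinc(f * t_o) * np.sinc(f * t_inj))
f_c = 1.0 / max(t_o, t_inj)

flat = 2 * np.pi * f * H                # ~ h0 for f << f_c, falls off above f_c
print(flat[f < 0.1 * f_c].min())        # ~ h0
print(flat[f > 100 * f_c].max())        # << h0
```

Setting $t_{\rm inj}\gg\tilde{t}_{\rm o}$ here reproduces the statement in the text that the injection time acts as an upper cutoff on the crossover frequency.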
For $t_{\rm inj}\ll\tilde{t}_{\rm o}$, we recover the previously described
crossover diagram. For $t_{\rm inj}>\tilde{t}_{\rm o}$, the injection time
acts as an upper cutoff on the crossover frequency. For $t_{\rm inj}\gg t_{\rm
acc}$ (which implies $t_{\rm inj}\gg\tilde{t}_{\rm o}$ for all observers) the
crossover diagram is reduced to a single frequency determined by $t_{\rm
inj}$, independent of $\theta_{\rm v}$. Clearly, if several timescales are
involved in the function $\dot{m}(t)$, it is the longest one that determines
the crossover frequency. The shorter timescales will only affect the higher-
frequency range of the Fourier spectrum.
## IV An Example - GWs from GRB light curves
The results of the previous section were based on a simplified model for the
mass flux of the jet $\dot{m}(t)$. Here, we examine a possibly more realistic
description. For this we consider GW emission from GRB jets assuming that the
GRB light curves follow $\dot{m}(t)$ to some extent.
Specifically, Kobayashi et al. [17] have shown that within the internal
shocks model [18, 19, 20] the GRB light curve is related to $\dot{m}(t)$. This
relation is not one-to-one and, moreover, current understanding suggests that
the temporal structure may originate in the interaction of the jet with
stellar material (in long GRBs) or with the ejecta (in short ones). Still, in
the following, we use the GRB light curves as indicators for $\dot{m}(t)$, and
estimate the corresponding GW signal. For a given acceleration model, the
Fourier transform of the GW signal will be proportional to the convolution of
the Fourier transform of the GRB light curve with the GW signal of a single
shell $h_{\rm cap}(t)$. We calculate, under these assumptions, the average GW
spectra for long and short GRBs observed by the Burst and Transient Source
Experiment (BATSE). We use a Fourier transform, $\tilde{h}(f)$, of a single
accelerating spherical cap (Eq. 15), with $\tilde{f}_{\rm
o}\equiv{1}/{\tilde{t}_{\rm o}(\theta_{\rm v},\theta_{\rm j})}$ and
$g(f)=f^{-\alpha}$ with $\alpha>3/2$ being the high-frequency power law from
the acceleration model Fourier transform. The following calculations will
proceed with a general $\alpha$, which is determined by the specific
acceleration model.
### IV.1 Long GRBs
Beloborodov et al. [21] calculated the average Fourier transform, $C_{\it
l}(f)$, of 527 long GRB light curves observed by BATSE:
$C_{\it l}(f)\propto\begin{cases}{\rm const.},\quad\quad f<f_{\it{l}}\\\ f^{-0.75},\ \ \quad f>f_{\it{l}}\end{cases}\ ,$ (23)
where the spectrum changes its slope at $f_{\it{l}}\approx 0.01~{\rm Hz}$. As such,
the Fourier transform of the GW signal, $\tilde{h}_{\it l}(f)$, will be (see
Fig. 13):
$\tilde{h}_{\it l}(f)=C_{\it
l}(f)\tilde{h}(f)\propto\begin{cases}f^{-1},\quad\quad\quad f<f_{\it{l}}\\\
f^{-1.75},\quad\quad f_{\it{l}}<f<\tilde{f}_{\rm o}\\\
f^{-0.75-\alpha},\quad\tilde{f}_{\rm o}<f\ .\end{cases}$ (24)
The low-frequency behavior of the Fourier transform always behaves like $1/f$.
The introduction of a new timescale means that there are two crossover
frequencies, between three different power laws. In the intermediate range
$f_{\it{l}}<f<\tilde{f}_{\rm o}$ the power law is determined purely by the GRB
light curve, namely by the mass injection function. The unknown high-frequency power law of the acceleration model, $\alpha$, appears only at frequencies higher than the acceleration model's crossover frequency.
### IV.2 Short GRBs
The temporal behavior of short GRBs is different from that of long ones. We repeated the above procedure, now using the TTE dataset from BATSE's measurements, which details the arrival times of individual photons. Using a bin size of $10$ msec, we find the average Fourier transform of the short GRBs:
$C_{\it s}(f)\propto\begin{cases}{\rm const.},\quad\quad f<f_{\it{s}}\\\ f^{-0.92},\ \ \quad f>f_{\it{s}}\end{cases}$ (25)
The high-frequency power law of the short GRBs' power spectrum is steeper, and their break frequency is higher, at $f_{\it{s}}\approx 1~{\rm Hz}$, corresponding to the timescale of an average short GRB (see Fig. 11). The Fourier transform of the corresponding GW signal of a short GRB takes the form:
$\tilde{h}_{\it s}(f)=C_{\it
s}(f)\tilde{h}(f)\propto\begin{cases}f^{-1},\quad\quad\quad f<f_{\it{s}}\\\
f^{-1.92},\quad\quad f_{\it{s}}<f<\tilde{f}_{\rm o}\\\
f^{-0.92-\alpha},\quad\tilde{f}_{\rm o}<f\end{cases}$ (26)
This result holds only if $f_{\it{s}}<\tilde{f}_{\rm o}$. The Fourier transform of the short GRBs, $C_{\it s}$, allows for the interesting scenario in which this is not the case, and the observed acceleration timescale $\tilde{t}_{\rm o}(\theta_{\rm v},\theta_{\rm j})$ may be longer than the mass ejection timescale $t_{\rm inj}$. In this case, the form of the GW's Fourier transform
will be slightly different. At low and very high frequencies, the Fourier
transform still behaves like $1/f$ and $f^{-0.92-\alpha}$, correspondingly.
However, in the intermediate frequency range $\tilde{f}_{\rm o}<f<f_{\it{s}}$,
the power law will be different:
$\tilde{h}_{\it s}(f)=C_{\it s}(f)\tilde{h}(f)\propto\begin{cases}f^{-1},\quad\quad\quad f<\tilde{f}_{\rm o}\\\ f^{-\alpha},\quad\quad\quad\tilde{f}_{\rm o}<f<f_{\it{s}}\\\ f^{-0.92-\alpha},\quad f_{\it{s}}<f\end{cases}$ (27)
The two cases are illustrated in Fig. 13, where we plot the Fourier transforms
of two short GRB’s GWs using the Fireball acceleration model: one with $t_{\rm
inj}>\tilde{t}_{\rm o}(\theta_{\rm v},\theta_{\rm j})$, and one with $t_{\rm
inj}<\tilde{t}_{\rm o}(\theta_{\rm v},\theta_{\rm j})$. As it turns out, for
$\alpha\approx 2$ the power laws of the intermediate frequency range in both
cases are quite similar.
Figure 10: The crossover diagrams for jets with both $t_{\rm acc}$ and $t_{\rm inj}$. We keep $t_{\rm acc}$ fixed, with $\Gamma=100$ and $\theta_{\rm j}=0.1$ for all diagrams, and vary $t_{\rm inj}$. The amplitude $h_{0}(\theta_{\rm v},\theta_{\rm j})$ is given by Eq. 10 and the frequency $f_{\rm c}$ is then extracted from the Fourier transform of Eq. 21.
Figure 11: The averaged Fourier transform of BATSE's long GRBs, vs. that of BATSE's TTE short GRB catalogue. Power-law fits are shown in dashed lines.
Figure 12: The Fourier transform of the light curve of GRB 930201, one of the brightest bursts observed by BATSE (blue). The averaged Fourier transforms of all bursts observed by BATSE (red). A power-law fit $f^{-n}$ for the average of the Fourier transforms, with $n=0.75$ (green). When averaging over many different bursts, the noise components cancel out. The frequency where the Fourier transform of GRB 930201 levels out to a constant is determined by the duration of the GRB.
Figure 13: The Fourier transform for a short GRB's GW calculated with $t_{\rm inj}=1~{\rm sec}$, $\tilde{t}_{\rm o}(\theta_{\rm v})=0.1~{\rm sec}$ (blue), and with $t_{\rm inj}=0.1~{\rm sec}$, $\tilde{t}_{\rm o}(\theta_{\rm v})=1~{\rm sec}$ (red). The power laws in the intermediate frequency region between $f_{\it{s}}$ and $\tilde{f}_{\rm o}$ are slightly different for the two cases: $\tilde{h}\propto f^{-1.92}$ for $t_{\rm inj}>\tilde{t}_{\rm o}(\theta_{\rm v})$, and $\tilde{h}\propto f^{-\alpha}$ for $t_{\rm inj}<\tilde{t}_{\rm o}(\theta_{\rm v})$.
## V Detectability
When estimating the detectability of a GW signal, we have to compare the
expected $S(f)$ to the detector’s sensitivity curve, $S_{\rm det}$, taking
into account both the amplitude and the relevant frequency range. As we have seen in §III, for jet GW signals $S(f)$ is always a decreasing function of the frequency. At the lowest frequency range $S(f)\propto f^{-1/2}$, while at higher frequencies (above the relevant crossover frequency) it decreases faster. Hence, a typical low-frequency detector will be most sensitive to a jet GW signal at the lowest end of its frequency response. For our purposes we can define this point as the lowest frequency below which $S_{\rm det}$ is steeper than $f^{-1/2}$. A similar condition holds for a high-frequency detector (that is, one operating above the crossover frequency), for which we replace the power $f^{-1/2}$ by the corresponding frequency dependence of the spectral density.
Not surprisingly, like almost any relativistic GW source, the maximal
amplitude of the jet GW is of order
$h\approx\frac{G{\cal E}}{c^{4}r}=3\times 10^{-25}\bigg{(}\frac{{\cal
E}}{10^{51}~{}{\rm erg}}\bigg{)}~{}\bigg{(}\frac{100{\rm Mpc}}{r}\bigg{)}\ .$
(28)
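Restoring $G$ and $c$, Eq. 28 can be evaluated directly; a sketch in cgs units (constant values assumed):

```python
# Evaluating Eq. (28) in cgs units:
#   h ~ G E / (c^4 r) for E = 1e51 erg at r = 100 Mpc.
G = 6.674e-8        # cm^3 g^-1 s^-2
c = 2.998e10        # cm / s
E = 1.0e51          # erg
Mpc = 3.086e24      # cm
r = 100 * Mpc

h = G * E / (c**4 * r)
print(h)            # ~ 3e-25, as in Eq. (28)
```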
For a one-sided jet this estimate is valid for an observer at the optimal angle, namely at $\theta_{\rm v}\approx\theta_{\rm j}$. For two-sided jets, this estimate is valid for most observers apart from those along the jets ($\theta_{\rm v}<\theta_{\rm j}$). Different observers will, however, observe different characteristic frequencies, as discussed earlier, with the relevant frequency being the lowest crossover frequency, $f_{\rm c}$.
### V.1 GW from GRB jets
GRB jets are the most natural sources for this kind of GW signal. For an optimal
observer near the jet, when considering the estimates based on the GRB light
curves discussed in §IV, the crossover frequency is dominated by $t_{\rm inj}$
for both long and short GRBs. Thus, $f_{\rm c}=f_{\it{l}}=0.01$ Hz for long GRBs and $f_{\rm c}=f_{\it{s}}=1$ Hz for short GRBs. This frequency range puts the events below the frequency limits of the current LIGO-Virgo-KAGRA detectors, but around the capability of the planned BBO [22] and DECIGO [23]. Observers further away
from the jet axis will see lower characteristic frequencies, which are even
more difficult to detect.
As seen in the crossover diagram (Fig. 9), any potential increase in the
frequency of the spectral density due to the boost of the crossover
frequencies for observers close to the jet’s axis will be more than balanced
out by the anti-beaming of the GW amplitude, such that the spectral density
never benefits from observation-angle effects. Short GRBs have higher crossover frequencies and hence are somewhat easier to detect. These bursts are intrinsically weaker and hence are typically observed from nearer distances, so their observed rate is lower; however, their intrinsic rate is larger than that of long GRBs by about a factor of ten. Still, since the lower frequency threshold of the current LIGO-Virgo-KAGRA detectors, in the tens-of-Hz range, is above the expected crossover frequencies of short GRBs and certainly of long GRBs, it is unlikely that a GW signal from either a short or a long GRB jet will be detected by these detectors. While these GRB-jet GW signals are within the frequency range of BBO and DECIGO, most GRBs take place at distances that are beyond the detection horizon.
### V.2 GW170817
At $\approx 40$ Mpc, GW170817A was an exceptionally nearby binary neutron star
merger. The merger GW signal was accompanied by a short (albeit atypical – see
e.g. [24, 24, 25]) GRB. The event and its afterglow signature were extremely
well observed, and we have good estimates for most of its parameters. The jet
properties are ${\cal E}\approx 10^{50}$erg, $\theta_{\rm v}\approx 20^{o}$,
$\theta_{\rm j}\approx 5^{o}$. Other parameters, and in particular $t_{\rm
acc}$ and $t_{\rm inj}$ that are most relevant for our analysis, are less
known. The injection duration $t_{\rm inj}$, is capped from above by the
duration of the observed $\gamma$-rays. However, as those arose from a cocoon
shock breakout [24, 26] the observed duration gives only an upper limit on
$t_{\rm inj}$. In the following we assume that $t_{\rm acc}<t_{\rm inj}=1$
sec. $\Gamma$ is also unknown, but it only factors into the result through
${\cal E}$, since $\Gamma^{-1}\ll\theta_{\rm j}$: hence, it is unimportant.
Given the viewing angle and the jet angle, it was also ideally positioned in
terms of the strength of the GW signal from its jet. That is, we were not
within the anti-beamed jet’s cone but not too far from it either. Still, the
jet GW that we consider here could not have been detected by current
detectors. Fig. 14 depicts the spectral density of GW170817 compared with the
sensitivity thresholds of GW detectors [27]. We find that the GW would have
been detectable by the Big Bang Observer (BBO) [28], and would have been
marginally detectable by DECIGO [29], as we discuss below.
Figure 14: The calculated spectral density for our fiducial model for
GW170817, $S(f)$, compared with the sensitivity thresholds of GW detectors
taken from http://gwplotter.com. The dashed line shows the GW emission from
the same source, only 10 times closer and 10 times more energetic. Such a
signal would correspond to a CCSN jet.
We quantify detection distances by considering the signal-to-noise ratio,
$\rho$, of a certain GW signal, with Fourier transform $\tilde{h}(f)$ [27]:
$\rho^{2}=4\int_{-\infty}^{\infty}\frac{|\tilde{h}(f)|^{2}}{S_{n}(f)^{2}}df,$
(29)
where $S_{n}(f)$ is the detector’s noise amplitude.
We find that the most suitable detector for observing jet GWs is BBO, with a
detection horizon of $r_{d}=75$ Mpc. DECIGO closely follows, with
$r_{d}=40$ Mpc. The Einstein Telescope has $r_{d}=600$ kpc, and LISA is at
$r_{d}=80$ kpc. Ultimate DECIGO, which will be about a hundred times more
sensitive than DECIGO, will detect such events from distances of a few Gpc,
that is, up to $z=0.5$. These distances scale linearly with the jet’s energy: a jet with a
short duration like GW170817 but with ${\cal E}=10^{51}$ erg will be detectable by
DECIGO up to a distance of $400$ Mpc, etc.
A higher GW crossover frequency, $f_{\rm c}$, increases the maximal detection
distance $r_{d}$. Notably, however, $r_{d}$ approaches an asymptotic value,
and increasing $f_{\rm c}$ above a certain detector-specific threshold does
not change that detector’s maximal detection distance. This is because the
integral in Eq. 29 is dominated by the part of the GW’s Fourier transform
which is within the detector’s frequency band. If $f_{\rm c}$ is higher than
this band, then the integral is dominated by the low-frequency $\sim 1/f$
behavior of the transform, which is independent of $f_{\rm c}$. When $f_{\rm
c}$ is within the detector’s frequency band, the SNR will be reduced, due to
the integration over the higher-frequency region of the GW, which behaves as
$\sim 1/f^{2}$.
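The saturation effect described above can be illustrated with a minimal numeric sketch of Eq. 29. All numbers here (band edges, amplitudes, the flat noise floor) are illustrative assumptions, not any detector's actual noise curve; the Fourier transform is modeled as $\propto 1/f$ below $f_{\rm c}$ and $\propto 1/f^{2}$ above it, as in the text.

```python
import math

def snr(amp, f_c, noise_amp, band=(1e-2, 1.0), n=5000):
    # Toy version of Eq. (29): |h~(f)| ~ amp/f below f_c and amp*f_c/f^2
    # above it, integrated over the detector band against a flat noise
    # amplitude (trapezoidal rule on a log-spaced frequency grid).
    lo, hi = math.log10(band[0]), math.log10(band[1])
    f = [10 ** (lo + (hi - lo) * i / (n - 1)) for i in range(n)]
    h = [amp / fi if fi < f_c else amp * f_c / fi**2 for fi in f]
    y = [(hv / noise_amp) ** 2 for hv in h]
    integral = sum(0.5 * (y[i] + y[i + 1]) * (f[i + 1] - f[i])
                   for i in range(n - 1))
    return math.sqrt(4.0 * integral)

# SNR grows while f_c lies inside the detector band ...
low = snr(1e-23, 0.05, 1e-20)
high = snr(1e-23, 0.5, 1e-20)
# ... but saturates once f_c exceeds the upper band edge, so pushing the
# crossover frequency higher no longer increases the detection distance:
sat1 = snr(1e-23, 2.0, 1e-20)
sat2 = snr(1e-23, 20.0, 1e-20)
```

Doubling `amp` (i.e., doubling ${\cal E}$ or halving $r$) doubles the SNR, consistent with the linear scaling of detection distances with jet energy noted above.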
### V.3 Jets in Core Collapse SNe and low-luminosity GRBs
The prospects for CCSNe-related GW detection are much more optimistic. Shortly
after the discovery of the first low-luminosity GRB 980425 (that was
associated with SN 1998bw), it was suggested [3, 4, 5] that the emission arose
from shock breakout following an energetic jet that was choked deep in the
accompanying star. Later on it was realized that, while the detection rate of
low-luminosity GRBs is much lower than that of regular long GRBs, their actual
rate is orders of magnitude larger [6, 7]. The detection rate is small
because, given their low luminosity, they are detected only from relatively
short distances. More recently, Piran et al. [30] have shown that a
significant fraction of CCSNe (that are not associated with GRBs) contain an
energetic ($\sim 10^{51}$ erg) choked relativistic jet. While this jet is
relativistic, it is choked inside the star, depositing its energy into a
cocoon. Upon breakout, the cocoon material is observed as high-velocity
(0.1-0.2c) material that engulfs the supernova and can be detected within the
first few days. Such signatures have been detected as early as 1997 [31] in SN
1997EF and in several other SNe since then. This suggestion was nicely
confirmed with the exquisite observations of this high velocity material in SN
2017iuk by [8, 9]. If such relativistic jets are associated with a significant
fraction of CCSNe then, as the supernova rate is significantly larger than the
GRB rate [16], we can expect much nearer jets that would be sources of such GWs.
Comparing relativistic SN jets with GRB jets, we estimate $h$ to be a factor
of 100-1000 larger than the one estimated for short GRBs: a factor of 10 in
the distance (tens of Mpc vs. hundreds of Mpc) and a factor of 10-100 in energy
($10^{51}$ erg vs. $10^{49-50}$ erg). Thus, we expect amplitudes of $3\times
10^{-24}$ (see Eq. 28). Unfortunately, for these events we don’t have a good
estimate of $t_{\rm inj}$. A best guess is that it will be of the same order as
the one estimated for long GRBs, namely a few tens of seconds. Thus,
the corresponding crossover frequency would be around 0.01 Hz. However, on
average we will observe these events from a large viewing angle, and in this
case the crossover frequency would be even lower. The exact value will depend
on $t_{\rm acc}$, and in turn on the unknown nature of the acceleration
process.
### V.4 Contribution to the GW background
The relativistic jets that arise from GRBs (both long and short) and hidden
jets in SNe produce a continuous background of jet GWs in the frequency range
of $\sim 0.01-1$ Hz, depending on the specific source. Both long and short GRBs
are rare and won’t make a significant contribution to such a background.
However, SNe take place at a rate of about one per second in the observable
Universe. If a significant fraction of SNe harbor energetic jets the time
between two such cosmological events, a few seconds, will be comparable to the
characteristic time scale of the GW signals from these jets (assuming that the
hidden jets in SNe are similar in nature to GRB jets). Depending on the ratio
of the time between events and the characteristic frequency of the jet-GW
signal we expect either a continuous background, as expected from the GW
background from merging binary neutron stars, or a popcorn-like signature, as
expected for the GW background from merging binary black holes [32]. With a
typical cosmological distance of a few Gpc the corresponding amplitude of this
jet-GW background is $h\approx 10^{-26}{\cal E}/(10^{51}{\rm erg})$.
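The quoted background amplitude can be checked at order-of-magnitude level from $h\approx G{\cal E}/c^{4}r$ (the scaling given in the Discussion below). The distance of $3$ Gpc is an illustrative choice for "a few Gpc":

```python
# Order-of-magnitude check of h ~ G*E/(c^4 * r), in CGS units.
G = 6.674e-8      # gravitational constant, cm^3 g^-1 s^-2
c = 2.998e10      # speed of light, cm s^-1
E = 1e51          # jet energy, erg
Gpc = 3.086e27    # one gigaparsec in cm
r = 3.0 * Gpc     # "a few Gpc", an illustrative choice

h = G * E / (c**4 * r)   # ~ 1e-26, matching the quoted amplitude
```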
## VI Discussion
We have obtained the qualitative and quantitative behavior of the amplitude,
the angular distribution of both $h$ and $dE_{\rm GW}/d\Omega$, and the
Fourier transform of the GW signal of an accelerated jet with an opening angle
$\theta_{\rm j}$. The signal is anti-beamed away from the direction of the
jet. The anti-beaming angle is $\max(\Gamma^{-1},\theta_{\rm j})$. Like
typical relativistic GW sources, the amplitude is of order $G{\cal E}/c^{4}r$.
However, unlike other sources, the signal here is of a memory type, rising to
this amplitude on a characteristic time scale. The signal can be approximated
as a step function when considering detectors whose typical response frequency
is much lower than the characteristic crossover frequency of the jet. This
last feature is of course problematic, as it might be difficult to distinguish
this signal from other step functions that may arise in GW detectors. We won’t
explore the experimental/observational aspects of this question.
The light curve depends on two timescales: the acceleration timescale $t_{\rm
acc}$, and the mass ejection time $t_{\rm inj}$. The spectral density $S(f)$
is monotonically decreasing with the frequency. It is broken into at least two
power laws: the lower frequency region is proportional to $f^{-1/2}$, and the
higher frequency region is proportional to $f^{-1/2-\alpha}$, with
$\alpha>3/2$. The spectral density is characterized by the crossover region,
$f_{\rm c}$, which corresponds to the longest relevant timescale. Since $S(f)$
decreases monotonically with frequency, the crossover region is a good
indicator as to whether a given GW’s signal can be measured by a specific
detector.
The universal form of the ’crossover diagrams’ describes how the frequency and
the amplitude of the spectral density shift due to the dependence of the
observed amplitude and frequency on $\theta_{\rm v}$. We calculated these
’crossover diagrams’ for a point particle, a jet with a finite opening angle,
a double-headed jet, as well as for jets with both $t_{\rm acc}$ and $t_{\rm
inj}$. For $t_{\rm inj}\gg t_{\rm acc}$, the crossover diagram is reduced to a
single characteristic frequency for observers at all angles.
Assuming that the observed GRB light curves are proportional to the jet’s mass
ejection function $\dot{m}(t)$ and assuming a specific acceleration model, we
calculated possible examples of expected GW signals from long and short GRB
jets. As expected, we find that the composite Fourier transforms are
monotonically decreasing, and that they are described by two crossover
frequencies, between three power laws. One crossover frequency is associated
with $t_{\rm inj}$, and the other is associated with $t_{\rm acc}$. It is
important to note, however, that these estimates should be considered just as
examples.
Recent understanding of jet propagation in dense media suggests that the
injection must be longer than the observed duration of the GRB [7]. Thus, the
latter puts a lower limit on $t_{\rm inj}$. However, the light curves of short
GRBs suggest that in many cases mergers produce jets that are choked inside the
merger ejecta. Those events are not accompanied by a short GRB [33]. In such a
case, $t_{\rm inj}$ can be much shorter (this is the reason that the jet was
choked), and the corresponding GW signal will have a higher frequency.
As an example, we calculated the gravitational waveform of the GW emitted by
the jet associated with GW170817 under the previous assumptions. Using the
event’s parameters, we found that the jet’s GW could have been observed by BBO
and DECIGO. Within the limiting assumptions that the duration of the burst and
the observed $\gamma$-ray light curve reflect the injection time, the relevant
frequencies are quite low, and indeed BBO and DECIGO are the most suitable
detectors for observing GWs from similar short GRB jets. Anti-beaming will,
however, make it unlikely that we would observe both the $\gamma$-rays and the
GWs. However, other multimessenger signals, and in particular GWs from the
merger itself, would accompany such an event, triggering our attention and
providing additional significance to the detected GW signal. It is
interesting to remark that the jet launching can be delayed by as much as a
second after the merger, and, as such, this GW signal can be easily separated
from the more “regular” pre-merger GW emission, and even from the post-merger
ringdown of the proto-neutron star and collapse to a black hole.
While the detection prospects of a jet GW signature from short or long GRBs
are not that promising, comparable or even more powerful relativistic jets
also take place within some core collapse SNe. The rate of these events is
much larger, and correspondingly within a given observing time frame they will
take place at much nearer distances. Here the detection prospects are very
promising once detectors in the sub-Hz range are available. A detection would reveal
features of jet acceleration in the vicinity of black holes that are
impossible to find in any other way.
###### Acknowledgements.
We thank Ofek Birnholtz for providing us his code and for helpful comments and
Ehud Nakar and Amos Ori for fruitful discussions. The research was supported
by an advanced ERC grant TReX.
## References
* Segalis and Ori [2001] E. B. Segalis and A. Ori, Emission of gravitational radiation from ultrarelativistic sources, Phys. Rev. D 64, 064018 (2001), arXiv:gr-qc/0101117 .
* Piran [2002] T. Piran, Gamma-Ray Bursts - a Primer for Relativists, in _General Relativity and Gravitation_, edited by N. T. Bishop and S. D. Maharaj (2002) pp. 259–275, arXiv:gr-qc/0205045 .
* Kulkarni _et al._ [1998] S. R. Kulkarni, D. A. Frail, M. H. Wieringa, R. D. Ekers, E. M. Sadler, R. M. Wark, J. L. Higdon, E. S. Phinney, and J. S. Bloom, Radio emission from the unusual supernova 1998bw and its association with the $\gamma$-ray burst of 25 April 1998, Nature (London) 395, 663 (1998).
* MacFadyen _et al._ [2001] A. I. MacFadyen, S. E. Woosley, and A. Heger, Supernovae, Jets, and Collapsars, Astrophys. J. 550, 410 (2001), arXiv:astro-ph/9910034 .
* Tan _et al._ [2001] J. C. Tan, C. D. Matzner, and C. F. McKee, Trans-Relativistic Blast Waves in Supernovae as Gamma-Ray Burst Progenitors, Astrophys. J. 551, 946 (2001), arXiv:astro-ph/0012003 .
* Soderberg _et al._ [2006] A. M. Soderberg, _et al._ , Relativistic ejecta from X-ray flash XRF 060218 and the rate of cosmic explosions, Nature (London) 442, 1014 (2006), arXiv:astro-ph/0604389 .
* Bromberg _et al._ [2011] O. Bromberg, E. Nakar, and T. Piran, Are Low-luminosity Gamma-Ray Bursts Generated by Relativistic Jets?, ApJL 739, L55 (2011), arXiv:1107.1346 .
* Izzo _et al._ [2019] L. Izzo, _et al._ , Signatures of a jet cocoon in early spectra of a supernova associated with a $\gamma$-ray burst, Nature (London) 565, 324 (2019), arXiv:1901.05500 .
* Nakar [2019] E. Nakar, Heart of a stellar explosion revealed, Nature (London) 565, 300 (2019).
* Sago _et al._ [2004] N. Sago, K. Ioka, T. Nakamura, and R. Yamazaki, Gravitational wave memory of gamma-ray burst jets, Phys. Rev. D 70, 104012 (2004), arXiv:gr-qc/0405067 .
* Birnholtz and Piran [2013] O. Birnholtz and T. Piran, Gravitational wave memory from gamma ray bursts’ jets, Phys. Rev. D 87, 123007 (2013), arXiv:1302.5713 .
* Yamazaki _et al._ [2004] R. Yamazaki, K. Ioka, and T. Nakamura, A Unified Model of Short and Long Gamma-Ray Bursts, X-Ray-rich Gamma-Ray Bursts, and X-Ray Flashes, ApJL 607, L103 (2004), arXiv:astro-ph/0401142 .
* Shemi and Piran [1990] A. Shemi and T. Piran, The Appearance of Cosmic Fireballs, ApJL 365, L55 (1990).
* Note [1] Note that in this case $t_{\rm acc}\leq{\cal E}$, however the term in square brackets can still be larger than unity.
* Goodman [1986] J. Goodman, Are gamma-ray bursts optically thick?, ApJL 308, L47 (1986).
* Piran [1999] T. Piran, Gamma-ray bursts and the fireball model, Phys. Rept. 314, 575 (1999), arXiv:astro-ph/9810256 .
* Kobayashi _et al._ [1997] S. Kobayashi, T. Piran, and R. Sari, Can Internal Shocks Produce the Variability in Gamma-Ray Bursts?, Astrophys. J. 490, 92 (1997), arXiv:astro-ph/9705013 .
* Narayan _et al._ [1992] R. Narayan, B. Paczynski, and T. Piran, Gamma-Ray Bursts as the Death Throes of Massive Binary Stars, ApJL 395, L83 (1992), arXiv:astro-ph/9204001 .
* Rees and Meszaros [1994] M. J. Rees and P. Meszaros, Unsteady Outflow Models for Cosmological Gamma-Ray Bursts, ApJL 430, L93 (1994), arXiv:astro-ph/9404038 .
* Sari and Piran [1997] R. Sari and T. Piran, Variability in Gamma-Ray Bursts: A Clue, Astrophys. J. 485, 270 (1997), arXiv:astro-ph/9701002 .
* Beloborodov [2000] A. M. Beloborodov, Power density spectra of gamma-ray bursts, AIP Conference Proceedings 10.1063/1.1361535 (2000).
* Crowder and Cornish [2005] J. Crowder and N. J. Cornish, Beyond LISA: Exploring future gravitational wave missions, Phys. Rev. D 72, 083005 (2005), arXiv:gr-qc/0506015 .
* Sato _et al._ [2017] S. Sato _et al._ , The status of DECIGO, in _Journal of Physics Conference Series_, Vol. 840 (2017) p. 012010.
* Kasliwal _et al._ [2017] M. M. Kasliwal, _et al._ , Illuminating gravitational waves: A concordant picture of photons from a neutron star merger, Science 358, 1559 (2017), arXiv:1710.05436 .
* Nakar [2020] E. Nakar, The electromagnetic counterparts of compact binary mergers, Phys. Rep 886, 1 (2020), arXiv:1912.05659 .
* Gottlieb _et al._ [2018] O. Gottlieb, E. Nakar, T. Piran, and K. Hotokezaka, A cocoon shock breakout as the origin of the $\gamma$-ray emission in GW170817, MNRAS 479, 588 (2018), arXiv:1710.05896 .
* Moore _et al._ [2014] C. J. Moore, R. H. Cole, and C. P. L. Berry, Gravitational-wave sensitivity curves, Classical and Quantum Gravity 32, 015014 (2014).
* Crowder and Cornish [2005] J. Crowder and N. J. Cornish, Beyond LISA: Exploring future gravitational wave missions, Phys. Rev. D 72, 083005 (2005), arXiv:gr-qc/0506015 .
* Sato _et al._ [2017] S. Sato _et al._ , The status of DECIGO, J. Phys. Conf. Ser. 840, 012010 (2017).
* Piran _et al._ [2019] T. Piran, E. Nakar, P. Mazzali, and E. Pian, Relativistic Jets in Core Collapse Supernovae, Astrophys. J. 871, L25 (2019), arXiv:1704.08298 .
* Mazzali _et al._ [2000] P. A. Mazzali, K. Iwamoto, and K. Nomoto, A Spectroscopic Analysis of the Energetic Type Ic Hypernova SN 1997EF, Astrophys. J. 545, 407 (2000), arXiv:astro-ph/0007222 .
* Abbott _et al._ [2018] B. P. Abbott, _et al._ , GW170817: Implications for the Stochastic Gravitational-Wave Background from Compact Binary Coalescences, Phys. Rev. Lett. 120, 091101 (2018), arXiv:1710.05837 .
* Moharana and Piran [2017] R. Moharana and T. Piran, Observational evidence for mass ejection accompanying short gamma-ray bursts, MNRAS 472, L55 (2017), arXiv:1705.02598 .
# Parallel Surrogate-assisted Optimization Using Mesh Adaptive Direct Search
Bastien Talgorn1, Stéphane Alarie2, and Michael Kokkolaras1
( 1McGill University, GERAD, Montréal, Québec, Canada
2Hydro-Québec’s Research Institute, GERAD, Montréal, Québec, Canada
)
###### Abstract
We consider computationally expensive blackbox optimization problems and
present a method that employs surrogate models and concurrent computing at the
search step of the mesh adaptive direct search (MADS) algorithm. Specifically,
we solve a surrogate optimization problem using locally weighted scatterplot
smoothing (LOWESS) models to find promising candidate points to be evaluated
by the blackboxes. We consider several methods for selecting promising points
from a large number of points. We conduct numerical experiments to assess the
performance of the modified MADS algorithm with respect to available CPU
resources by means of five engineering design problems.
## 1 Introduction
We consider the optimization problem
$\begin{array}[]{rl}\underset{\mathbf{x}\in\mathcal{X}}{\min}&f(\mathbf{x})\\\
\text{subject to}&c_{j}(\mathbf{x})\leq 0,~{}~{}j=1,2,\ldots,m,\end{array}$
($P$)
where $f(\mathbf{x})$ is the objective function,
$\mathbf{x}\in{\mathbb{R}}^{n}$ is the vector of decision variables,
$\mathcal{X}$ is a subset of ${\mathbb{R}}^{n}$, and $c_{j}(\mathbf{x})$ are
general nonlinear constraints. We assume that some (at least one) of the
functions $\\{f,c_{1},c_{2},\ldots,c_{m}\\}$ are evaluated using simulations
or other computational procedures that are blackboxes. In particular, we
consider the case where these blackboxes are computationally expensive,
possibly nonsmooth and/or nonconvex, and that the process used to evaluate
them may crash or fail to return a value. Finally, we assume that function
gradients either do not exist theoretically or, if they do, cannot be computed
or approximated with reasonable computational effort.
Metaheuristics and derivative-free search algorithms are commonly used for
solving ($P$). The former (e.g., genetic algorithms (GAs), particle swarm
optimization (PSO), tabu search (TS), etc.) are commonly used for global
exploration while the latter (e.g., generalized pattern search (GPS), mesh
adaptive direct search (MADS), and trust-region methods (DFO, COBYLA, CONDOR))
are local methods with convergence properties [1]. In this work, we use the
NOMAD implementation [2, 3] of the MADS algorithm [4] to solve ($P$).
Multiprocessor computers, supercomputers, cloud computing, or just a few
connected PCs can provide parallel (or concurrent) computing opportunities to
speed up so-called trajectory-based optimization algorithms. According to [5],
three ways are commonly used to achieve this: (i) parallel evaluation of
neighborhood solutions (distributed evaluations), (ii) parallel trajectories
from the same (or different) initial guess(es) (independent optimization
runs), (iii) the evaluation of a point $\mathbf{x}$ is performed in parallel
(i.e., the search is sequential). The implementation of (iii) depends only on
the blackbox, while the other two are related to the optimization algorithm.
NOMAD offers implementations of (i) and (ii) through p-MADS and Coop-MADS,
respectively [6]. In both cases, the parallel evaluations are handled by
NOMAD by means of MPI calls to the blackbox [3]. However, if one prefers to
take charge of the distribution of evaluations, one can implement p-MADS with
blocks by using NOMAD in batch mode [6]. In that case, instead of using MPI
calls, NOMAD writes all points to be evaluated in an input file, waits for all
evaluations to be completed, and reads the obtained values of $f(\mathbf{x})$
and $c_{j}(\mathbf{x})$ from an output file. Then, NOMAD either stops if a
local optimum is reached or submits a new block of points to evaluate. We use
here the block functionality of NOMAD, adopting option (i) for parallel
evaluations, because of its flexibility and generality.
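The block workflow can be sketched as follows. This is a hypothetical, simplified illustration, not NOMAD's actual API: real batch mode exchanges points and results through files, whereas here the block is evaluated in memory with a thread pool, and `blackbox` is a toy stand-in for an expensive simulation.

```python
from concurrent.futures import ThreadPoolExecutor

def blackbox(x):
    # Stand-in for an expensive simulation returning f(x) and one
    # constraint value c_1(x) <= 0 (toy functions, purely illustrative).
    f = sum(xi**2 for xi in x)
    c1 = 1.0 - x[0]
    return f, c1

def evaluate_block(points, workers=4):
    # Evaluate a whole block of trial points concurrently, mimicking
    # p-MADS with blocks; NOMAD batch mode instead writes the points to
    # an input file, waits for all evaluations, and reads the results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(blackbox, points))

results = evaluate_block([(0.0, 0.0), (1.0, 2.0)])
```

`pool.map` preserves the input order, so the results line up with the submitted block, as required when the optimizer reads the output back.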
The MADS algorithm includes two steps at each iteration, the SEARCH and the
POLL. The SEARCH step is flexible (defined by the user) and aims at
determining one or more new points $\mathbf{x}\in\mathcal{X}$ that improves
the current best solution. The POLL step is defined according to the
convergence analysis of the algorithm and generates trial points around the
current best solution. The number of poll points at each iteration is either
$2n$ or $n+1$, depending on the utilized pattern, with $n$ being the dimension
of $\mathbf{x}$.
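A minimal sketch of a $2n$-point poll is given below. It uses fixed coordinate directions ($\pm\delta\,e_i$) in the style of GPS; actual MADS draws poll directions from an asymptotically dense set and manages the mesh and poll sizes itself, so this only illustrates the point count.

```python
def poll_points(center, mesh_size):
    # Generate the 2n coordinate poll points x +/- delta * e_i around
    # the incumbent. MADS uses richer, randomly oriented direction
    # sets; this fixed-pattern version is purely illustrative.
    n = len(center)
    points = []
    for i in range(n):
        for sign in (1.0, -1.0):
            p = list(center)
            p[i] += sign * mesh_size
            points.append(tuple(p))
    return points

pts = poll_points((0.0, 0.0, 0.0), 0.5)   # 2n = 6 trial points for n = 3
```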
The number of points evaluated in the SEARCH step depends on the methods
chosen or defined by the user. Several techniques are already implemented and
available in NOMAD, including the speculative search (SS), the quadratic model
search (QUAD), the variable neighborhood search (VNS), the Latin hypercube
search (LHS), and the Nelder-Mead search (NMS). Users can also implement their
own technique if they so desire, which is called a _user search_ (US).
With the exception of LHS, all provided techniques usually return only one
trial point. When several techniques are used at once, they are called one
after the other along the SEARCH step, each technique providing its own trial
point, which is evaluated by the blackbox before proceeding to the next
technique. Assuming that $2n$ CPUs are available for solving ($P$), the POLL
step can make good use of these CPUs. However, since SEARCH step evaluations
are sequential, progress is slowed down with almost all CPUs being idle. One
may argue that we should then only use LHS since it can generate $2n$ points.
However, since LHS is random, its points will quickly become less promising
after a few iterations.
Considering that CPUs are now, particularly with the emergence of cloud
computing, relatively inexpensive and available in large numbers, we should
rethink the SEARCH step to be as effective as the POLL step in terms of CPU
use. In this work, we propose a SEARCH step technique that returns a large
number of diverse points for evaluation.
The paper is structured as follows. The general idea behind the proposed
technique is described in Section 2. In Section 3, six different methods are
presented for selecting various candidates from a large set of points. In
Section 4, the resulting model search for parallel computing is specified. In
Section 5, we test our SEARCH step technique on five engineering design
optimization problems using up to 64 processors. A discussion concludes the
paper.
## 2 Proposed SEARCH step technique
One of the practical challenges of the SEARCH step is that only one candidate
is obtained at a significant computational investment [7, 8, 9, 10].
Specifically, regardless of the number of available CPUs, only one CPU is used
in the SEARCH step for blackbox evaluations, with the exception of LHS. Before
presenting our idea for mitigating this practical challenge, we will assume
that computationally inexpensive surrogate models of the expensive blackboxes
are available. We can then consider the surrogate problem of problem ($P$)
$\begin{array}[]{rl}\underset{\mathbf{x}\in\mathcal{X}}{\min}&\hat{f}(\mathbf{x})\\\
\text{subject to}&\hat{c}_{j}(\mathbf{x})\leq
0,~{}~{}j=1,2,\ldots,m,\end{array}$ ($\hat{P}$)
where $\\{\hat{f},\hat{c}_{1},\hat{c}_{2},\ldots,\hat{c}_{m}\\}$ are surrogate
models of $\\{f,c_{1},c_{2},\ldots,c_{m}\\}$, respectively. Note that we only
need to ensure that the minimizers of ($P$) and ($\hat{P}$) are close enough,
and not that the surrogate models are good approximations of the blackboxes
globally. It then follows that a minimizer of ($\hat{P}$) will be a good
candidate for the solution of ($P$).
If both problems have the same minimizers, they may share features in other
areas of $\mathcal{X}$ as well. Since the evaluations of $\hat{f}(\mathbf{x})$
and $\hat{c}_{j}(\mathbf{x})$ are rather inexpensive compared to $f$ and
$c_{j}$, one can allow a very large budget of model evaluations to solve
($\hat{P}$), extending thus the number of design space areas that will be
visited. This is acceptable as long as the solution of ($\hat{P}$) is faster
than any single evaluation of the blackboxes. Considering there are $q$ CPUs
available for blackbox evaluations, one may then select $q$ points from the
available budget by solving ($\hat{P}$). The $q$ points can be selected to
consider areas of $\mathcal{X}$ that have been neglected until now in the
solution of ($P$).
The above approach employs the surrogate models
$\\{\hat{f},\hat{c}_{1},\hat{c}_{2},\ldots,\hat{c}_{m}\\}$ in a manner that is
not reported in [11], which mentions two ways of exploiting surrogates in the
context of parallelization. The simplest is to fit $q$ different surrogate
models at the same points already evaluated by the blackbox functions. This
allows obtaining $q$ different promising candidates and requires no uncertainty
quantification for the surrogate models. One can also combine the surrogates;
distance-based criteria can be added to ensure diversity between the
candidates. The other way is to use a single surrogate model and consider $q$
points at which the blackboxes should be evaluated to improve its accuracy.
We propose an intermediate approach. We use only one surrogate model for each
blackbox. The $q$ candidates are extracted from that single surrogate, but not
with the aim of improving it. Instead, the $q$ candidates are selected to be
the most interesting to advance the optimization process.
## 3 Methods for selecting candidate points
Let $\mathbf{X}=\\{\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{k}\\}$ be
the set of all points evaluated by the blackbox. Note that
$\mathbf{X}\subset\mathcal{X}\subset{\mathbb{R}}^{n}$. We denote
$\mathbf{\hat{X}}$ the _surrogate cache_ , i.e., the set of all points for
which $\\{\hat{f},\hat{c}_{1},\hat{c}_{2},\ldots,\hat{c}_{m}\\}$ have been
evaluated during the solution of ($\hat{P}$). Similarly,
$\mathbf{\hat{X}}\subset\mathcal{X}\subset{\mathbb{R}}^{n}$. Let $\mathbf{S}$
be the set of points that are selected by the SEARCH step to be evaluated with
the blackbox. The set $\mathbf{S}$ is initially empty and is built from the
points of $\mathbf{\hat{X}}$ (ensuring that
$\mathbf{S}\subset\mathbf{\hat{X}}$) with a greedy algorithm by means of up to
six selection methods, each one having a different goal.
* •
Method 1 selects the best point of $\mathbf{\hat{X}}$ not in
$\mathbf{X}\cup\mathbf{S}$;
* •
Method 2 selects the most distant point of $\mathbf{\hat{X}}$ from
$\mathbf{X}\cup\mathbf{S}$;
* •
Method 3 selects the best point of $\mathbf{\hat{X}}$ at a certain distance of
$\mathbf{X}\cup\mathbf{S}$;
* •
Method 4 selects the best point of $\mathbf{\hat{X}}$ under additional
constraints;
* •
Method 5 selects a point of $\mathbf{\hat{X}}$ that is a possible local
minimum of the surrogate problem;
* •
Method 6 selects a point of $\mathbf{\hat{X}}$ in a non-explored area.
Note that some selection methods may fail to return a candidate, particularly
methods 3 and 4. If this happens, the next method is used. We repeat and loop
through all methods until $\mathbf{S}$ contains as many points as there are
available CPUs. Some points $\mathbf{x}\in\mathbf{\hat{X}}$ can also belong to
the set $\mathbf{X}$; this is not an issue since all methods select a point
$\mathbf{x}\in\mathbf{\hat{X}}$ to be added to $\mathbf{S}$ only if
$\mathbf{x}\notin\mathbf{X}$. The selection methods are detailed below after
some definitions.
### 3.1 Definitions
Let $d(A,B)$ be the Euclidean distance between two subsets $A$ and $B$ of
${\mathbb{R}}^{n}$
$d(A,B)=\underset{a\in A}{\min}\;\underset{b\in B}{\min}\;\|a-b\|_{2}.$ (1)
As a convention, the distance to an empty set is infinite:
$d(A,\varnothing)=d(\varnothing,\varnothing)=+\infty$. By extension, we will
denote the distance between an element $a\notin B$ and the subset $B$ simply
by $d(a,B)$, which implies that $a$ also refers to the particular subset
containing only $a$, i.e., $\\{a\\}$.
Regarding feasibility, we consider the aggregate constraint violation function
used in [12], i.e.,
$h(\mathbf{x})=\sum_{j=1}^{m}\max\\{0,c_{j}(\mathbf{x})\\}^{2}$. The same
function is used in the _progressive barrier_ mechanism in NOMAD [13].
We also define the order operators between two points $\mathbf{x}$ and
$\mathbf{x}^{\prime}\in\mathcal{X}$:
$\displaystyle\mathbf{x}\prec\mathbf{x}^{\prime}\Leftrightarrow$
$\displaystyle\left\\{\begin{array}[]{l}h(\mathbf{x})<h(\mathbf{x}^{\prime})\\\
\text{or}\\\ h(\mathbf{x})=h(\mathbf{x}^{\prime})\text{ and
}f(\mathbf{x})<f(\mathbf{x}^{\prime})\end{array}\right.$ (5)
$\displaystyle\mathbf{x}\preceq\mathbf{x}^{\prime}\Leftrightarrow$
$\displaystyle\;\;{\tt not}(\mathbf{x}^{\prime}\prec\mathbf{x}),$ (6)
which are transitive. By those definitions, an incumbent solution
$\mathbf{x}^{\prime}$ of the original problem ($P$) is such that
$\mathbf{x}^{\prime}\preceq\mathbf{x},\;\forall\mathbf{x}\in\mathbf{X}$.
Similarly, a global minimizer $\mathbf{x}^{*}$ is such that
$\mathbf{x}^{*}\preceq\mathbf{x},\;\forall\mathbf{x}\in\mathcal{X}$. In the
same manner as we define $\prec$ and $\preceq$ for $f$ and $h$, we define
$\;\widehat{\prec}\;$ and $\;\widehat{\preceq}\;$ for $\hat{f}$ and $\hat{h}$.
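The violation function and ordering above can be sketched directly. The dict-based point representation (`{"f": ..., "c": [...]}`) is an assumption for illustration; it is not how NOMAD stores points.

```python
def hviol(cvals):
    # Aggregate constraint violation h(x) = sum_j max(0, c_j(x))^2,
    # as used by the progressive barrier mechanism.
    return sum(max(0.0, cj) ** 2 for cj in cvals)

def precedes(p, q):
    # p strictly precedes q in the ordering defined above: a strictly
    # smaller violation wins; ties on h are broken by the objective f.
    # Points are represented here as dicts {"f": ..., "c": [...]}.
    hp, hq = hviol(p["c"]), hviol(q["c"])
    return hp < hq or (hp == hq and p["f"] < q["f"])
```

For instance, a feasible point precedes any infeasible one regardless of its objective value, which is exactly the behavior the progressive barrier relies on.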
Finally, to simplify the description of the proposed selection methods, we
define $\mathbf{s}^{\infty}$ as a _virtual point_ (in the sense that it does
not have coordinates in ${\mathbb{R}}^{n}$), which represents the worst
possible candidate in ${\mathbb{R}}^{n}$:
$\hat{h}(\mathbf{s}^{\infty})=\hat{f}(\mathbf{s}^{\infty})=+\infty\text{ \;\;
and \;\; }d(\mathbf{s}^{\infty},\mathbf{X})=0.$ (7)
### 3.2 Detailed description of selection methods
Method 1. The first selection method selects the best point $\mathbf{s}$ of
$\mathbf{\hat{X}}$ under the constraint that
$d(\mathbf{s},\mathbf{X}\cup\mathbf{S})>0$, which means that $\mathbf{s}$ is
not in the set $\mathbf{X}$ of evaluated points nor already selected (i.e.,
$\notin\mathbf{S}$). This method reflects how surrogate models are typically
used for finding new candidate points.
$\mathbf{s}^{*}\leftarrow\mathbf{s}^{\infty}$
for all $\mathbf{s}\in\mathbf{\hat{X}}$ do:
    if $\mathbf{s}\;\widehat{\prec}\;\mathbf{s}^{*}$ and $d(\mathbf{s},\mathbf{X}\cup\mathbf{S})>0$ then:
        $\mathbf{s}^{*}\leftarrow\mathbf{s}$
end
if $\mathbf{s}^{*}\neq\mathbf{s}^{\infty}$ then:
    $\mathbf{S}\leftarrow\mathbf{S}\cup\{\mathbf{s}^{*}\}$
end
Algorithm 1 Selection of the best point (Method 1)
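Method 1 can be sketched as follows; the surrogate cache as a list of (point, $\hat{f}$, $\hat{h}$) triples and the reduction of $d(\mathbf{s},\mathbf{X}\cup\mathbf{S})>0$ to membership tests are illustrative assumptions, not the NOMAD implementation:

```python
# Sketch of Method 1 under simplified assumptions: the surrogate cache X_hat
# is a list of (point, f_hat, h_hat) triples, and d(s, X u S) > 0 reduces to
# membership tests on the point tuples.

def surrogate_prec(a, b):
    """Order of Eq. (5) applied to surrogate values of (point, f_hat, h_hat)."""
    (_, fa, ha), (_, fb, hb) = a, b
    return ha < hb or (ha == hb and fa < fb)

def select_method1(X_hat, X, S):
    """Best not-yet-evaluated, not-yet-selected candidate; None on failure."""
    best = None  # None plays the role of the virtual point s_infinity
    for cand in X_hat:
        if cand[0] in X or cand[0] in S:  # i.e., d(s, X u S) == 0
            continue
        if best is None or surrogate_prec(cand, best):
            best = cand
    if best is None:
        return None
    S.append(best[0])
    return best[0]
```

Calling the function repeatedly pops the successive best unselected candidates from the cache.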
Method 2. The second method aims to maximize the diversity of the candidates
to be evaluated. It selects the point $\mathbf{s}$ of $\mathbf{\hat{X}}$ that
maximizes the distance $d(\mathbf{s},\mathbf{X}\cup\mathbf{S})$, i.e., as far
as possible from points already evaluated.
$\mathbf{s}^{*}\leftarrow\mathbf{s}^{\infty}$
for all $\mathbf{s}\in\mathbf{\hat{X}}$ do:
    if $d(\mathbf{s},\mathbf{X}\cup\mathbf{S})>d(\mathbf{s}^{*},\mathbf{X}\cup\mathbf{S})$ then:
        $\mathbf{s}^{*}\leftarrow\mathbf{s}$
end
if $\mathbf{s}^{*}\neq\mathbf{s}^{\infty}$ then:
    $\mathbf{S}\leftarrow\mathbf{S}\cup\{\mathbf{s}^{*}\}$
end
Algorithm 2 Selection of the most distant point from $\mathbf{X}\cup\mathbf{S}$ (Method 2)
Method 3. This method selects the best point $\mathbf{s}$ of
$\mathbf{\hat{X}}$ under the constraint that
$d(\mathbf{s},\mathbf{X}\cup\mathbf{S})\geq d_{\min}$, where $d_{\min}$ is
initialized at 0 at the beginning of the selection process and increased
progressively as the method is applied. Method 3 may fail to select a
candidate $\mathbf{s}$ when $d_{\min}$ becomes too large. Since the selected
points $\mathbf{S}$ must be projected onto the current mesh
$\mathcal{M}=\{\mathbf{x}+\Delta^{\mathcal{M}}\mathbf{D}\mathbf{z}:\mathbf{z}\in{\mathbb{N}}^{n_{D}},\mathbf{x}\in\mathbf{X}\}$,
as required by MADS, incrementing $d_{\min}$ by the current mesh size
$\Delta^{\mathcal{M}}$ prevents several candidates from becoming identical
after the projection.
$\mathbf{s}^{*}\leftarrow\mathbf{s}^{\infty}$
if first use of Method 3 then: $d_{\min}\leftarrow 0$
for all $\mathbf{s}\in\mathbf{\hat{X}}$ do:
    if $\mathbf{s}\;\widehat{\prec}\;\mathbf{s}^{*}$ and $d(\mathbf{s},\mathbf{X}\cup\mathbf{S})\geq d_{\min}$ then:
        $\mathbf{s}^{*}\leftarrow\mathbf{s}$
end
if $\mathbf{s}^{*}\neq\mathbf{s}^{\infty}$ then:
    $\mathbf{S}\leftarrow\mathbf{S}\cup\{\mathbf{s}^{*}\}$
    $d_{\min}\leftarrow d_{\min}+\Delta^{\mathcal{M}}$
end
Algorithm 3 Selection of the best point with a constraint on the distance to $\mathbf{X}\cup\mathbf{S}$ (Method 3)
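A sketch of Method 3 follows; the persistent $d_{\min}$ kept in a `state` dictionary and the (point, $\hat{f}$, $\hat{h}$) triples are hypothetical interfaces used only for illustration:

```python
# Sketch of Method 3 (hypothetical interfaces): the threshold d_min persists
# across calls in `state` and grows by the mesh size after each selection.
import math

def dist(p, pts):
    """Euclidean distance from p to the closest point of pts (inf if empty)."""
    return min((math.dist(p, q) for q in pts), default=math.inf)

def prec(a, b):
    """Eq. (5) on surrogate (f_hat, h_hat) pairs: smaller h first, then f."""
    return a[1] < b[1] or (a[1] == b[1] and a[0] < b[0])

def select_method3(X_hat, X, S, state, mesh_size):
    """X_hat: list of (point, f_hat, h_hat). May fail once d_min gets large."""
    d_min = state.setdefault("d_min", 0.0)
    best = None
    for p, fh, hh in X_hat:
        if dist(p, X + S) >= d_min and (best is None or prec((fh, hh), best[1])):
            best = (p, (fh, hh))
    if best is None:
        return None
    S.append(best[0])
    state["d_min"] = d_min + mesh_size  # push later selections farther away
    return best[0]
```

Each successful call pushes the next selection at least one additional mesh size away from the evaluated and selected points.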
Method 4. Considering that the surrogate models $\hat{c}_{j}$ may fail to
predict correctly if $\mathbf{s}$ is feasible, the present method tries to
select points that will be likely to be feasible when evaluated by the
blackboxes $c_{j}$. This is done by selecting the best feasible point
$\mathbf{s}$ of $\mathbf{\hat{X}}$ under the constraint
$\hat{c}_{\max}(\mathbf{s})\leq\hat{c}_{\text{margin}}$, where
$\hat{c}_{\max}(\mathbf{s})$ is defined as being the most violated constraint
of $\mathbf{s}$, i.e.,
$\hat{c}_{\max}(\mathbf{s})=\underset{j=1,2,\dots,m}{\max}\;\hat{c}_{j}(\mathbf{s}),$
(8)
where $\hat{c}_{\text{margin}}$ is set as
$\hat{c}_{\text{margin}}\leftarrow\underset{\mathbf{s}\in\mathbf{\hat{X}},\;\hat{c}_{\max}(\mathbf{s})<0}{\max}\hat{c}_{\max}(\mathbf{s})$ (9)
and quantifies, among all points of $\mathbf{\hat{X}}$ predicted to be
feasible, the smallest margin by which the constraints are satisfied.
By definition, $\hat{c}_{\max}(\mathbf{s})\leq 0$ if $\mathbf{s}$ is predicted
to be feasible by the surrogate models. The more negative
$\hat{c}_{\max}(\mathbf{s})$ is, the more likely is $\mathbf{s}$ to be
feasible when evaluated by the blackboxes. Progressively decreasing the value
of $\hat{c}_{\text{margin}}$ after each call of this selection method favors
candidates that are increasingly likely to be feasible (but possibly with
worse objective function values).
$\mathbf{s}^{*}\leftarrow\mathbf{s}^{\infty}$
if first use of Method 4 then: $\hat{c}_{\text{margin}}\leftarrow\min\bigl\{0,\;\max\{\hat{c}_{\max}(\mathbf{s}):\mathbf{s}\in\mathbf{\hat{X}},\,\hat{c}_{\max}(\mathbf{s})<0\}\bigr\}$
for all $\mathbf{s}\in\mathbf{\hat{X}}$ do:
    if $\hat{c}_{\max}(\mathbf{s})\leq\hat{c}_{\text{margin}}$ and $\hat{f}(\mathbf{s})<\hat{f}(\mathbf{s}^{*})$ and $d(\mathbf{s},\mathbf{X}\cup\mathbf{S})>\Delta^{\mathcal{M}}$ then:
        $\mathbf{s}^{*}\leftarrow\mathbf{s}$
end
if $\mathbf{s}^{*}\neq\mathbf{s}^{\infty}$ then:
    $\mathbf{S}\leftarrow\mathbf{S}\cup\{\mathbf{s}^{*}\}$
    $\hat{c}_{\text{margin}}\leftarrow 2\,\hat{c}_{\max}(\mathbf{s}^{*})$
end
Algorithm 4 Selection of the best point with a constraint on the feasibility (Method 4)
Note that this method requires $\hat{c}_{\text{margin}}\leq 0$ to hold at all
times. Moreover, we assume that at least one $\mathbf{s}\in\mathbf{\hat{X}}$
is predicted to be feasible. If this is not the case, which may happen in the
first iteration, we would end up with $\hat{c}_{\text{margin}}>0$, and an
inappropriate candidate would be selected. To avoid this, we initialize
$\hat{c}_{\text{margin}}$ to $0$, so that in this situation the method simply
fails to return a candidate.
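Method 4 can be sketched as below; the layout where each candidate carries its surrogate objective $\hat{f}$ and a list of surrogate constraint values is an illustrative assumption:

```python
# Sketch of Method 4 (illustrative data layout): each candidate carries its
# surrogate objective f_hat and the list c_hat of surrogate constraint values.
import math

def dist(p, pts):
    """Euclidean distance from p to the closest point of pts (inf if empty)."""
    return min((math.dist(p, q) for q in pts), default=math.inf)

def select_method4(X_hat, X, S, state, mesh_size):
    """X_hat: list of (point, f_hat, c_hat). Prefers candidates predicted
    feasible by a margin of at least |c_margin|."""
    if "c_margin" not in state:  # first use: Eq. (9), clipped at 0
        feas = [max(c) for _, _, c in X_hat if max(c) < 0]
        state["c_margin"] = min(0.0, max(feas)) if feas else 0.0
    best = None
    for p, f_hat, c_hat in X_hat:
        c_max = max(c_hat)  # most violated constraint, Eq. (8)
        if (c_max <= state["c_margin"] and dist(p, X + S) > mesh_size
                and (best is None or f_hat < best[1])):
            best = (p, f_hat, c_max)
    if best is None:
        return None
    S.append(best[0])
    state["c_margin"] = 2.0 * best[2]  # demand twice the margin next time
    return best[0]
```

Doubling the (negative) margin after each success makes subsequent selections increasingly conservative with respect to predicted feasibility.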
Method 5. The isolation distance is used here to detect local minima of the
surrogate problem. This concept is inspired by the _topographic isolation_ of
a mountain summit, which measures the local significance of a summit and is
defined as the distance to the closest higher summit.¹ In mountaineering, the
topographic isolation of Mount Everest is infinite, and the summit with the
second-highest isolation is the Aconcagua in Argentina. The Aconcagua is not
the second-highest summit, but there is no higher mountain within a range of
16,518 km, making it the most important summit in the Americas and in the
southern hemisphere.
Transferred to optimization, the concept of topographic isolation is used to
quantify the importance of a local minimum. Its strict application is,
however, impossible, since it would require proving that no other point
within a certain distance of $\mathbf{x}$ is better than $\mathbf{x}$. We can
only compute the isolation distance of already-evaluated points.
Consequently, we define the isolation distance as the distance from
$\mathbf{s}$ to the closest point of $\mathbf{\hat{X}}$ that is better than
$\mathbf{s}$:
$d_{\text{iso}}(\mathbf{s})=\underset{\mathbf{s}^{\prime}\in\mathbf{\hat{X}},\;\mathbf{s}^{\prime}\,\widehat{\prec}\,\mathbf{s}}{\min}\;d(\mathbf{s},\mathbf{s}^{\prime}).$ (10)
Constraints are taken into account by using the order relationship defined in
Equation (5). As a convention, if no point of $\mathbf{\hat{X}}$ is better
than $\mathbf{s}$, then $d_{\text{iso}}(\mathbf{s})=+\infty$. With this
definition, the point of $\mathbf{\hat{X}}$ with the highest isolation
distance is also the best candidate in $\mathbf{\hat{X}}$. However, we have
observed that the other points with a high isolation distance are often poor
points far from any other point of $\mathbf{\hat{X}}$. To address this
problem, we define the _isolation number_ of $\mathbf{s}\in\mathbf{\hat{X}}$
as the number of points of $\mathbf{\hat{X}}$ within the ball of centre
$\mathbf{s}$ and radius $d_{\text{iso}}(\mathbf{s})$:
$n_{\text{iso}}(\mathbf{s})=\text{card}\big\{\mathbf{s}^{\prime}\in\mathbf{\hat{X}}:d(\mathbf{s},\mathbf{s}^{\prime})<d_{\text{iso}}(\mathbf{s})\big\}.$ (11)
To have a high isolation number, a point must be better than many of its
neighbors, which means that this criterion makes it possible to detect local
minima. Note that Equation (7) implies that
$d_{\text{iso}}(\mathbf{s}^{\infty})=n_{\text{iso}}(\mathbf{s}^{\infty})=0$.
Method 5 selects the point of $\mathbf{\hat{X}}$ that has the highest
isolation number not yet in $\mathbf{X}\cup\mathbf{S}$.
$\mathbf{s}^{*}\leftarrow\mathbf{s}^{\infty}$
for all $\mathbf{s}\in\mathbf{\hat{X}}$ do:
    if $n_{\text{iso}}(\mathbf{s})>n_{\text{iso}}(\mathbf{s}^{*})$ and $d(\mathbf{s},\mathbf{X}\cup\mathbf{S})>0$ then:
        $\mathbf{s}^{*}\leftarrow\mathbf{s}$
end
if $\mathbf{s}^{*}\neq\mathbf{s}^{\infty}$ then:
    $\mathbf{S}\leftarrow\mathbf{S}\cup\{\mathbf{s}^{*}\}$
end
Algorithm 5 Selection of the most isolated point (Method 5)
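The isolation quantities of Equations (10) and (11) and Method 5 can be sketched as follows; the (point, $\hat{f}$, $\hat{h}$) triples are an illustrative layout, not the NOMAD data structures:

```python
# Sketch of the isolation quantities of Eqs. (10)-(11) and of Method 5
# (illustrative): candidates are (point, f_hat, h_hat) triples.
import math

def prec(a, b):
    """Eq. (5) on surrogate (f_hat, h_hat) pairs: smaller h first, then f."""
    return a[1] < b[1] or (a[1] == b[1] and a[0] < b[0])

def d_iso(cand, X_hat):
    """Distance to the closest strictly better point of X_hat (Eq. 10)."""
    better = [math.dist(cand[0], c[0]) for c in X_hat if prec(c[1:], cand[1:])]
    return min(better) if better else math.inf

def n_iso(cand, X_hat):
    """Number of cache points strictly inside the isolation ball (Eq. 11)."""
    r = d_iso(cand, X_hat)
    return sum(1 for c in X_hat if math.dist(cand[0], c[0]) < r)

def select_method5(X_hat, X, S):
    """Candidate of X_hat with the highest isolation number, not in X u S."""
    best, best_n = None, 0
    for cand in X_hat:
        if cand[0] in X or cand[0] in S:
            continue
        n = n_iso(cand, X_hat)
        if n > best_n:
            best, best_n = cand, n
    if best is None:
        return None
    S.append(best[0])
    return best[0]
```

In the test below, a point that is far from a better cluster dominates its own neighborhood and therefore receives a high isolation number, as intended.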
Method 6. The purpose of this method is to select points in neglected areas of
the design space. To do so, it selects points in areas heavily explored while
solving ($\hat{P}$) but overlooked when solving ($P$). The _density number_ of
$\mathbf{s}\in\mathbf{\hat{X}}$ is defined as
$n_{\text{density}}(\mathbf{s})=\text{card}\big\{\mathbf{s}^{\prime}\in\mathbf{\hat{X}}:d(\mathbf{s},\mathbf{s}^{\prime})<d(\mathbf{s},\mathbf{X}\cup\mathbf{S})\big\}.$ (12)
Method 6 selects the point of $\mathbf{\hat{X}}$ with the highest density
number. Note that, as for $n_{\text{iso}}$, Equation (7) implies that
$n_{\text{density}}(\mathbf{s}^{\infty})=0$.
$\mathbf{s}^{*}\leftarrow\mathbf{s}^{\infty}$
for all $\mathbf{s}\in\mathbf{\hat{X}}$ do:
    if $n_{\text{density}}(\mathbf{s})>n_{\text{density}}(\mathbf{s}^{*})$ then:
        $\mathbf{s}^{*}\leftarrow\mathbf{s}$
end
if $\mathbf{s}^{*}\neq\mathbf{s}^{\infty}$ then:
    $\mathbf{S}\leftarrow\mathbf{S}\cup\{\mathbf{s}^{*}\}$
end
Algorithm 6 Selection of a point in a populated area (Method 6)
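The density number of Equation (12) and Method 6 admit a similarly compact sketch (illustrative; points are plain coordinate tuples):

```python
# Sketch of the density number of Eq. (12) and of Method 6 (illustrative;
# points are coordinate tuples).
import math

def dist(p, pts):
    """Euclidean distance from p to the closest point of pts (inf if empty)."""
    return min((math.dist(p, q) for q in pts), default=math.inf)

def n_density(p, X_hat_pts, X, S):
    """Number of surrogate-cache points closer to p than the nearest
    already-evaluated or already-selected point (Eq. 12)."""
    r = dist(p, X + S)
    return sum(1 for q in X_hat_pts if math.dist(p, q) < r)

def select_method6(X_hat_pts, X, S):
    """Candidate with the highest density number; None if all numbers are 0."""
    best, best_n = None, 0
    for p in X_hat_pts:
        n = n_density(p, X_hat_pts, X, S)
        if n > best_n:
            best, best_n = p, n
    if best is not None:
        S.append(best)
    return best
```

A candidate sitting in a cluster of surrogate-cache points far from any evaluated point gets the highest density number, which is the "heavily explored in ($\hat{P}$), overlooked in ($P$)" behavior described above.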
## 4 Parallel computing implementation
We now describe how we implement the proposed SEARCH step. We start with the
surrogate models and follow with the algorithms used for solving ($\hat{P}$).
We conclude with how all of this is integrated with the MADS algorithm to
solve ($P$).
### 4.1 Surrogate models
Several surrogate models are mentioned in [11] regarding their use in parallel
computing approaches, including Kriging, radial basis functions (RBF), support
vector regression (SVR), and polynomial response surfaces (PRSs). For a more
exhaustive description and comparison of surrogate models, see [14]. Based on
our previous work reported in [10], we choose to use the locally weighted
scatterplot smoothing (LOWESS) surrogate modeling approach [15, 16, 17, 18].
LOWESS models generalize PRSs and kernel smoothing (KS) models. PRSs are good
for small problems, but their efficacy decreases for larger, highly nonlinear,
or discrete problems. KS models tend to overestimate low function values and
underestimate high ones, but usually predict correctly which of two points
yields the smallest function value [9]. LOWESS models build a linear
regression of kernel functions around the point to estimate. They have been
shown to be suitable for surrogate-based optimization [10]. Their parameters
are chosen using an error metric called “aggregate order error with cross-
validation” (AOECV), which favors equivalence between the original problem
($P$) and the surrogate problem ($\hat{P}$) [9]. We use the SGTELIB
implementation of the surrogate models, which is now integrated as a surrogate
library in NOMAD version 3.8 [19]. Specifically, the considered LOWESS model
is defined in SGTELIB as follows.
TYPE Lowess DEGREE 1 RIDGE 0 SHAPE_COEF OPTIM KERNEL_TYPE OPTIM,
which means that the local regression is linear, the ridge (regularization)
coefficient is 0, and the kernel shape and kernel type are optimized to
minimize the aggregate order error (see [10]).
The LOWESS model is built as described in Appendix A. Only the Gaussian kernel
was considered in [10]. Six additional kernel functions have meanwhile been
implemented in SGTELIB. Accordingly, not only $\lambda$ (kernel shape) is
chosen to minimize AOECV, but also $\phi$ (kernel type).
### 4.2 Surrogate problem solution
The surrogate problem ($\hat{P}$) is solved by means of an inner instance of
MADS; it is initialized by a Latin hypercube search (LHS) [20, 21] and uses
variable neighborhood search (VNS) [22, 23] as the SEARCH step and a large
budget of function evaluations (10,000). The POLL step is performed using the
ORTHO 2N directions option. This inner MADS is implemented in the SEARCH step
of the outer MADS.
The LHS guarantees that there are surrogate cache points widely spread over
the design space. To ensure this, 30% of all function evaluations (i.e.,
3,000) are devoted to the LHS. Four additional points are considered: The
current best feasible point of the original problem, the current best
infeasible point of the original problem, the best feasible point obtained by
solving the most recent surrogate problem instantiation, and the best
infeasible point obtained from that same solve.
These points are used as initial guesses of the inner MADS problem, which will
be run until the remaining evaluation budget is exhausted. This budget will be
shared between the POLL step and the VNS in a default proportion where 75% is
devoted to VNS, which favors the exploration of multiple local attraction
basins. A large number of evaluations that build the surrogate cache
$\mathbf{\hat{X}}$ favors an accurate solution of the surrogate problem. Using
LHS, VNS, and a large number of function evaluations ensures that
$\mathbf{\hat{X}}$ contains highly promising candidates for the solution of
the original problem ($P$).
### 4.3 The modified MADS algorithm
Recall that each iteration of MADS includes a SEARCH step (performed first)
and a POLL step. Let $t$ denote the current iteration. Then, $\mathbf{X}_{t}$,
$\mathbf{\hat{X}}_{t}$, and $\mathbf{S}_{t}$ denote the sets $\mathbf{X}$,
$\mathbf{\hat{X}}$, and $\mathbf{S}$ considered by MADS at iteration $t$. Let
$q$ be the number of available CPUs, i.e., the number of blackbox evaluations
that can be performed in parallel. The proposed MADS for exploiting $q$ CPUs
is as follows. First, the SEARCH step proceeds by solving the surrogate
problem to populate the set $\mathbf{\hat{X}}_{t}$. From that set, $q$
candidates are selected and returned to be evaluated by the blackbox(es) in
parallel. The selection is made by cycling through a user-defined subset of
the six proposed selection methods (Section 3.2) until a total of $q$
candidates are selected, or until all selection methods consecutively fail
to add a candidate to $\mathbf{S}_{t}$. If $q$ is smaller than the number of
selection methods retained by the user, we do not necessarily go through all
the methods, but stop as soon as we have $q$ candidates.
If/when the SEARCH step fails to return a better objective function value, the
MADS algorithm proceeds to the POLL step. Let $\mathbf{P}_{t}$ be the set of
candidates produced by the polling directions at the iteration $t$. The
cardinality of $\mathbf{P}_{t}$ is denoted by $|\mathbf{P}_{t}|$. To keep all
the available CPUs continuously busy with evaluations, additional candidates
are added so that $|\mathbf{P}_{t}|$ is at least $q$ or a multiple of $q$.
This is accomplished by means of NOMAD’s intensification mechanism ORTHO 1.
If $|\mathbf{P}_{t}|=q$, all poll
candidates of $\mathbf{P}_{t}$ are evaluated concurrently, eliminating the
need to order them. If $|\mathbf{P}_{t}|>q$, then the points in
$\mathbf{P}_{t}$ are regrouped in several blocks of $q$ candidates. The blocks
are then evaluated sequentially and opportunistically, which means that if a
block evaluation leads to a success, the remaining blocks are not evaluated.
To increase the probability of success in the first block, and hence
avoid proceeding with the remaining ones, the candidates of
$\mathbf{P}_{t}$ are sorted using the surrogate models and distributed in the
blocks so that the more promising ones are in the first block.
Recall that $\mathcal{M}_{t}$ is the current mesh, $\Delta^{\mathcal{M}}_{t}$
is the associated mesh size parameter, $\Delta^{\text{P}}_{t}$ is the
corresponding mesh poll parameter, and $\mathbf{x}^{*}_{t}$ is the best
solution found at iteration $t$. Finally, the set $\mathbf{X}_{t}$ is updated
with all the points $\mathbf{x}$ in $\mathbf{S}_{t-1}$ and $\mathbf{P}_{t-1}$
that have been evaluated during the previous iteration. The process is
summarized in Algorithm 7.
[1] Initialization
    $t\leftarrow 0$
    Set initial poll and mesh sizes $\Delta^{\text{P}}_{0}\geq\Delta^{\mathcal{M}}_{0}>0$
    Initialize $\mathbf{X}_{0}$ with starting points
    Evaluate $\{f(\mathbf{x}),c_{1}(\mathbf{x}),c_{2}(\mathbf{x}),\ldots,c_{m}(\mathbf{x})\}$ for all $\mathbf{x}\in\mathbf{X}_{0}$
[2] Model search
    Use $\mathbf{X}_{t}$ to build $\hat{f}$ and $\{\hat{c}_{j}\}_{j\in J}$
    Solve the surrogate problem ($\hat{P}$) using the inner MADS instance
    $\mathbf{\hat{X}}_{t}\leftarrow$ set of points evaluated with the surrogate model while solving ($\hat{P}$)
    $\mathbf{S}_{t}\leftarrow$ cycle through the selection methods to select $q$ points of $\mathbf{\hat{X}}_{t}$
    $\mathbf{S}_{t}\leftarrow$ projection of the points of $\mathbf{S}_{t}$ onto the mesh $\mathcal{M}_{t}$
    Parallel evaluation of $\{f(\mathbf{x}),c_{1}(\mathbf{x}),c_{2}(\mathbf{x}),\ldots,c_{m}(\mathbf{x})\}$ for all $\mathbf{x}\in\mathbf{S}_{t}$
    If success, goto [4]
[3] Poll
    Build the poll set $\mathbf{P}_{t}$
    Sort $\mathbf{P}_{t}$ according to $\hat{f}$ and $\{\hat{c}_{j}\}_{j\in J}$
    Parallel evaluation of $\{f(\mathbf{x}),c_{1}(\mathbf{x}),c_{2}(\mathbf{x}),\ldots,c_{m}(\mathbf{x})\}$ for all $\mathbf{x}\in\mathbf{P}_{t}$
[4] Updates
    $t\leftarrow t+1$
    Update $\Delta^{\mathcal{M}}_{t}$, $\Delta^{\text{P}}_{t}$, $\mathbf{x}^{*}_{t}$, and $\mathbf{X}_{t}$
    If no stopping condition is met, goto [2]
Algorithm 7 The proposed MADS optimization algorithm
## 5 Numerical investigation
The proposed SEARCH step technique is tested using five optimization problems.
We first describe the algorithms considered for benchmarking. Next, numerical
results are presented and discussed for the five engineering design problems.
### 5.1 Compared algorithms
Five solvers are compared in our numerical experiments, all based on the MADS
algorithm and implemented using NOMAD 3.8 [3]. This avoids coding biases,
since the features are identical across solvers.
* •
MADS. Refers to the POLL step of MADS, without any SEARCH step, where $2n$
directions are generated and evaluated in parallel. If needed, $k$ additional
directions are generated such that $2n+k$ is a multiple of $q$.
* •
Multi-Start. Consists of $q$ parallel runs of MADS. They are totally
independent and each instance runs on its own CPU. Each instance proceeds to
its evaluations sequentially, one after the other. Only the POLL step is
executed and no cache is shared between running instances.
* •
LH Search. The MADS solver mentioned above using a Latin hypercube search
(LHS) at the SEARCH step, where $q$ candidates are generated and evaluated in
parallel.
* •
Lowess-A. The MADS solver mentioned above with the described surrogate
optimization conducted at the SEARCH step. The $q$ candidates are selected by
cycling through Methods 1 and 2, and then evaluated in parallel.
* •
Lowess-B. The MADS solver mentioned above with the proposed surrogate
optimization conducted at the SEARCH step. The $q$ search candidates are
selected by cycling through Methods 3, 4, 5, and 6, and then evaluated in
parallel.
Both LOWESS solvers follow Algorithm 7 exactly, except for the selection
methods used. The only difference between them and the “LH Search” solver is
that the latter replaces the surrogate optimization approach with an LHS at
the SEARCH step.
This should allow us to determine whether surrogate optimization has any
advantage over a random search. The MADS solver is used as the baseline.
Finally, the Multi-Start solver is considered to verify that it is not
preferable to run $q$ independent narrow trajectories instead of a single
trajectory performing $q$ evaluations in parallel.
### 5.2 Engineering design optimization problems
The above solvers are compared on five engineering design application
problems. A short description follows below for each problem. More details are
provided in Appendix B.
* •
TCSD. The Tension/Compression Spring Design problem consists of minimizing the
weight of a spring under mechanical constraints [24, 25, 26]. This problem has
three variables and four constraints. The design variables define the geometry
of the spring. The constraints concern shear stress, surge frequency, and
minimum deflection.
* •
Vessel. This problem considers the design of a compressed air storage tank and
has four design variables and four constraints [24, 27]. The variables define
the geometry of the tank and the constraints are related to the volume,
pressure, and solidity of the tank. The objective is to minimize the total
cost of the tank, including material and labour.
* •
Welded. The welded beam design problem (Version I) has four variables and six
constraints [24, 28]. It aims at minimizing the construction cost of a beam,
under shear stress, bending stress, and deflection constraints. The design
variables define the geometry and the characteristics of the welded joint.
* •
Solar 1. This optimization problem aims at maximizing the energy received over
a period of 24 hours under five constraints related to budget and heliostat
field area [29]. It has nine variables, including an integer one without an
upper bound.
* •
Solar 7. This problem aims at maximizing the efficiency of the receiver over a
period of 24 hours for a given heliostats field under six binary constraints
[29]. It has seven design variables, including an integer one without an upper
bound.
A progressive barrier is used to deal with the aggregated constraints [13].
The first three problems are easier than the last two. However, it
is difficult to find a feasible solution for the TCSD problem. Among all the
considered problems, Solar 1 is certainly the most difficult one.
### 5.3 Numerical experiments
We compare the efficiency of each solver for different values of block size
$q\in\\{1,2,4,8,16,32,64\\}$. As an example, we will use “Lowess-A 16” to
refer to the solver that relies on LOWESS models cycling over Methods 1 and 2
considering a block size $q=16$. For each problem, we generated 50 sets of 64
starting points with Latin hypercube sampling [20]. For all solvers other than
“Multi-Start”, only the first point of each set is used to perform
optimizations. Doing so, we get 50 runs from the same starting points for each
solver, each problem, and each value of $q$. For “Multi-Start”, since $q$
independent sequential runs of MADS are performed in parallel, we use the
first $q$ points of each set. Doing so, we still get 50 runs for each problem
and each $q$, while ensuring that all starting points are the same for all
solvers.
To prevent all “LH Search” runs from ending up with nearly identical
solutions for a given $q$, we use a random seed to initialize each LHS.
For the three relatively simple problems (TCSD, Vessel, and Welded), a budget
of 100 block evaluations is allocated. For the two relatively difficult
problems (Solar 1 and Solar 7), the budget is increased to 200 block
evaluations. This means that, for a given problem, all solvers will have the
same “wall-clock” time, but not necessarily the same resources (number of CPUs
available for block evaluations) nor the same total number of blackbox
evaluations.
#### Solution quality
Figures 1 and 2 represent the distribution of the final objective function
over the 50 runs for each problem, each solver, and each block size $q$. The
minimum and maximum objective values that we obtained from the runs are
indicated by circles in the figures. Lower and upper quartiles are delimited
by boxes, and median values are represented by a bar inside the boxes. The
further to the left a distribution lies, the better the combination of solver
and $q$. Since we are mostly interested in the best combinations, the figures
focus only on the smallest values; otherwise, it would be difficult to
discern the differences among the best combinations. All combinations whose
distribution is cut off on the right side perform poorly.
Figure 1: Performance summary for the TCSD, Vessel, and Welded problems over 50 runs
Figure 2: Performance summary for the Solar 1 and Solar 7 problems over 50 runs
For the three simpler problems (TCSD, Vessel, and Welded, Figure 1), the
LOWESS solvers (and in particular “Lowess-B”) are by far superior to the
solvers that do not rely on surrogate optimization.
For the TCSD problem, the “MADS” solver often failed to find a feasible design
(thus leading to infinite objective values), even with a large number of
evaluations per block. The four other solvers managed to find a feasible
point in at least 75% of the runs. The “Lowess-B” solver performs
better than any of the other ones. We see that “Lowess-A 64” is outperformed
by “Lowess-B 8”. As the TCSD problem is very constrained, the final objective
function value depends on the initial guess. This is why the “Multi-Start”
solver performs quite well on this problem.
The same trend is observed for the Vessel and Welded problems. “Lowess-B”
performs better than “Lowess-A”, which outperforms “LH Search” and “MADS”. In
particular, “Lowess-B 8” outperforms the solvers “Lowess-A 8/16/32”. As
expected, increasing the block size improves performance. However, for the
“Lowess-B” solver these three problems are easy to solve, so it is difficult
to see an advantage of using parallel computing because the global optimum is
found most of the time within 100 block evaluations for a block size of 16 or
more. The “Multi-Start” solver performs rather poorly on these two problems.
The numerical results generally follow the same trend for the two Solar
problems (Figure 2). For equal block sizes, the LOWESS solvers outperform
the other solvers, and “Lowess-B” outperforms “Lowess-A”.
#### Convergence rate
We now examine the convergence rate of the solvers for the case where $q=64$.
Figures 3 and 4 depict the evolution of the median objective function value of
50 runs as a function of the number of block evaluations. For each problem,
the plots on the left compare the convergence of the five solvers with blocks
of size $q=64$ while the plots on the right compare the convergence of the
best-performing solver i.e., “Lowess-B”, for block sizes ranging from $q=1$ to
64.
Figure 3: Results for the TCSD, Vessel, and Welded problems; median objective
value of 50 runs
Figure 4: Results for the Solar 1 and Solar 7 problems; median objective value
of 50 runs
We can conclude that “Lowess-B” yields the best solutions faster than any
other solver (for $q=64$). The worst-performing solvers are “Multi-Start” for
problems TCSD and Solar 1 and “LH Search” for problems Vessel, Welded, and
Solar 7. It is also notable that although “MADS” does not use the SEARCH step,
it performs generally well, except for Solar 1. “Lowess-A” performed well but
does not clearly outperform other solvers.
Considering the performance of “Lowess-B” as a function of $q$, we observe
that, as expected, convergence improves for larger values of $q$. Depending on
the problem, there may be a saturation point beyond which an increase of $q$
yields no further improvement. For example, saturation arises around $q=8$
or 16 for TCSD, Vessel, and Welded. In contrast, $q$ could be even larger
than 64 for Solar 1, as more CPUs can still be utilized.
#### Performance profiles
We now consider performance profiles, which indicate the percentage of runs
where the problem is solved within a deviation from the best known solution
$\tau$ under a budget of function evaluations [30]. Specifically, for each
solver $s$, each instance $r$, and each problem $p$, we compute the smallest
number of block evaluations $b_{s,p,r}(\tau)$ such that
$\frac{|f_{s,b,p,r}-f_{p}^{*}|}{|f_{p}^{*}|}\leq\tau,$ (13)
where $f_{p}^{*}$ is the best known objective value for problem $p$ and
$f_{s,b,p,r}$ is the value obtained with the solver $s$ after $b$ block
evaluations. Let $b^{\min}_{p,r}(\tau)$ be the smallest budget for solving the
instance $r$ of problem $p$ with deviation $\tau$, i.e.,
$b^{\min}_{p,r}(\tau)=\underset{s}{\min}\;b_{s,p,r}(\tau).$
Then, we can plot the proportion of runs of solver $s$ that satisfy Eq. (13)
at a multiple $\alpha$ of the smallest budget, i.e.,
$\alpha\,b^{\min}_{p,r}(\tau)$ block evaluations. Figure 5 depicts the
performance profiles of the five considered problems for $q=64$ over the 50
runs (instances) for $\tau$ values that range from $10^{-1}$ to $10^{-4}$.
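For a single run, the budget $b_{s,p,r}(\tau)$ of Equation (13) can be read off the run's convergence history; a minimal sketch (the per-block history layout is a hypothetical simplification):

```python
# Sketch: given one run's history of incumbent objective values (one entry per
# block evaluation, a hypothetical layout), find the smallest budget that
# reaches relative accuracy tau with respect to the best known value f_star.
import math

def blocks_to_solve(history, f_star, tau):
    """Smallest b with |f_{s,b,p,r} - f*| <= tau * |f*| (Eq. 13), or inf."""
    for b, f in enumerate(history, start=1):
        if abs(f - f_star) <= tau * abs(f_star):
            return b
    return math.inf  # the run never reached the required accuracy
```

The performance profile then plots, for each solver, the fraction of runs with $b_{s,p,r}(\tau)\leq\alpha\,b^{\min}_{p,r}(\tau)$ as a function of $\alpha$.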
Figure 5: Performance profiles for $q=64$ over the 50 runs of all five
problems (panels: $\tau=10^{-1}$, $10^{-2}$, $10^{-3}$, and $10^{-4}$)
Note that higher curves imply better solver performance. Moreover, a solver
that performs well for small values of $\alpha$ is a solver that can solve,
for the considered $\tau$, a large number of problems with a small evaluation
budget. Figure 5 confirms our previous observations: “Lowess-B” outperforms
all solvers, followed by “Lowess-A” and “LH Search” in second and third
position. “MADS” and “Multi-Start” come last. For large tolerances, “MADS”
does better than “Multi-Start”, and better than “LH Search” for small values
of $\alpha$ ($\leq 4$). For tighter tolerances, “MADS” is outperformed by the
other solvers.
The most interesting observation is the significant gap between the
performance curves of LOWESS solvers and the ones from the three other
solvers. The gap increases as $\tau$ decreases. From moderate to lower $\tau$
values ($\leq 10^{-2}$), “Lowess-B” systematically solves at least twice more
problems than “LH Search”, “Multi-Start” and “MADS”. It is unusual to observe
such clear differences on performance profiles. “Lowess-A” performs almost as
well as “Lowess-B”, particularly when $\tau$ is small (around $10^{-4}$), but
needs at least four times as many block evaluations to achieve this.
#### Scalability analysis
We wish to establish the reduction of wall-clock time when using additional
resources for each solver. To that end, we follow the methodology proposed in
[31]. We define $f_{s,b,p,r,q}$ as the value of the objective function
obtained by solver $s$ after $b$ block evaluations on instance $r$ of problem
$p$ when using $q$ CPUs. We also define the _reference_ objective value as the
best value achieved with only one CPU ($q=1$), i.e.,
$f^{\text{ref}}_{s,p,r}=\underset{b}{\min}f_{s,b,p,r,1},$ (14)
and $b^{\text{ref}}_{s,p,r,q}$ the number of block evaluations necessary to
reach $f^{\text{ref}}_{s,p,r}$ when $q$ CPUs are used, i.e.,
$b^{\text{ref}}_{s,p,r,q}=\min\{b:f_{s,b,p,r,q}\leq
f^{\text{ref}}_{s,p,r}\}.$ (15)
The _speed-up_ of solver $s$ when solving with $q$ CPUs is defined as
$\text{speed-up}(s,q)=\underset{p,r}{\text{geomean}}\left(\;\frac{b^{\text{ref}}_{s,p,r,1}}{b^{\text{ref}}_{s,p,r,q}}\;\right)$
(16)
and its _efficiency_ as
$\text{efficiency}(s,q)=\frac{\text{speed-up}(s,q)}{q}.$ (17)
Figure 6 depicts the speed-up and efficiency values obtained by our numerical
experiments.
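The scalability metrics above reduce to a geometric mean of per-instance block-evaluation ratios. Below is an illustrative sketch (not the authors' code); the `evals` mapping, which stores the number of block evaluations needed to reach $f^{\text{ref}}$ for each CPU count and instance, is an assumed data layout, and the ratio is written with the one-CPU count in the numerator so that perfect scalability yields a speed-up of $q$.

```python
# Sketch of speed-up and efficiency from per-instance block-evaluation
# counts. evals[q][(p, r)] = block evaluations needed with q CPUs to
# reach the reference value obtained with one CPU. Names are illustrative.
import math

def speed_up(evals, q):
    # ratio: time with 1 CPU over time with q CPUs, per instance
    ratios = [evals[1][i] / evals[q][i] for i in evals[1]]
    # geometric mean of the per-instance ratios
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

def efficiency(evals, q):
    return speed_up(evals, q) / q

evals = {1: {("p1", 1): 80, ("p2", 1): 120},
         4: {("p1", 1): 20, ("p2", 1): 40}}
s = speed_up(evals, 4)    # geomean(80/20, 120/40) = geomean(4, 3)
e = efficiency(evals, 4)
```

With these made-up numbers the speed-up is $\sqrt{12}\approx 3.46$ for $q=4$, i.e., an efficiency of about $0.87$.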
Figure 6: Speed-up and efficiency
Perfect scalability is obtained when the speed-up is equal to $q$ and the
efficiency is equal to 1. The speed-up curves show that the power introduced
by new CPUs decreases as their number increases. This was observed in Figures
3 and 4 where problems exhibited saturation around $q=8$ and 16. “Lowess-B”
achieves the best speed-up, followed by “MADS”. “Lowess-A” and “LH Search”
come next, followed by the worst-performing solver, namely “Multi-Start”. We
conclude that it is better and more productive to proceed with one search
performing $q$ parallel evaluations instead of conducting $q$ independent
searches consisting of a single evaluation each.
The efficiency curves decrease rapidly, except for “Lowess-B”, whose
efficiency curve exhibits a bump at $q=4$. For $q=2$, only methods
3 and 4 are used to generate candidates. For $q\geq 4$, methods 5 and 6 are
also used. The aforementioned bump highlights the important contributions of
these methods to the efficiency of “Lowess-B”.
## 6 Conclusion
Linear LOWESS models with optimized kernel shapes and coefficients seem to
provide high-performing surrogates of the blackboxes. The use of diverse
selection methods (3 to 6) enables an efficient exploration of the design
space, accelerates local convergence, and makes optimal use of additional CPU
resources. Methods 5 and 6 are particularly efficient, outperforming the other
selection methods. This means that the way surrogates are used by method 1 is
not effective. Similarly, the diversification strategy of method 2 is not
adequate to select points that lie far enough from the ones already evaluated.
We cannot draw a definite conclusion about whether method 5 or method 6 is
better. We believe that the good performance of “Lowess-B” is due to using
method 5.
The proposed selection methods are not specific to the LOWESS model considered
here; they are applicable to any surrogate. We believe that they will work
well with reduced-fidelity (or variable-fidelity) physics-based models, since
high- and low-fidelity models typically have similarly structured solution
domains. The selection methods are also applicable to other algorithms using
surrogates to identify promising points to evaluate.
## References
* [1] O. Kramer, D.E. Ciaurri, and S. Koziel. Derivative-free optimization. In S. Koziel and XS. Yang, editors, Computational Optimization, Methods and Algorithms, volume 356 of Studies in Computational Intelligence, pages 61–83. Springer, 2011.
* [2] C. Audet, S. Le Digabel, C. Tribes, and V. Rochon Montplaisir. The NOMAD project. Software available at https://www.gerad.ca/nomad.
* [3] S. Le Digabel. Algorithm 909: NOMAD: Nonlinear optimization with the MADS algorithm. ACM Transactions on Mathematical Software, 37(4):44:1–44:15, 2011.
* [4] C. Audet and J.E. Dennis, Jr. Mesh adaptive direct search algorithms for constrained optimization. SIAM Journal on Optimization, 17(1):188–217, 2006.
* [5] E. Alba, G. Luque, and S. Nesmachnow. Parallel metaheuristics: recent advances and new trends. International Transactions in Operational Research, 20(1):1–48, 2013.
* [6] S. Le Digabel, M.A. Abramson, C. Audet, and J.E. Dennis, Jr. Parallel versions of the MADS algorithm for black-box optimization. In Optimization days, Montreal, May 2010. GERAD. Slides available at http://www.gerad.ca/Sebastien.Le.Digabel/talks/2010_JOPT_25mins.pdf.
* [7] B. Talgorn, S. Le Digabel, and M. Kokkolaras. Statistical Surrogate Formulations for Simulation-Based Design Optimization. Journal of Mechanical Design, 137(2):021405–1–021405–18, 2015.
* [8] M. Pourbagian, B. Talgorn, W.G. Habashi, M. Kokkolaras, and S. Le Digabel. Constrained problem formulations for power optimization of aircraft electro-thermal anti-icing systems. Optimization and Engineering, pages 1–31, 2015.
* [9] C. Audet, M. Kokkolaras, S. Le Digabel, and B. Talgorn. Order-based error for managing ensembles of surrogates in derivative-free optimization. Journal of Global Optimization, 70(3):645–675, 2018.
* [10] B. Talgorn, C. Audet, M. Kokkolaras, and S. Le Digabel. Locally weighted regression models for surrogate-assisted design optimization. Optimization and Engineering, 19(1):213–238, 2018.
* [11] R.T. Haftka, D. Villanueva, and A. Chaudhuri. Parallel surrogate-assisted global optimization with expensive functions – a survey. Structural and Multidisciplinary Optimization, 54(1):3–13, 2016.
* [12] R. Fletcher and S. Leyffer. Nonlinear programming without a penalty function. Mathematical Programming, Series A, 91:239–269, 2002.
* [13] C. Audet and J.E. Dennis, Jr. A progressive barrier for derivative-free nonlinear programming. SIAM Journal on Optimization, 20(1):445–472, 2009.
* [14] R. Alizadeh, J.K. Allen, and F. Mistree. Managing computational complexity using surrogate models: a critical review. Research in Engineering Design, 2020.
* [15] W.S. Cleveland. Robust locally weighted regression and smoothing scatterplots. Journal of the American Statistical Association, 74:829–836, 1979.
* [16] W.S. Cleveland. LOWESS: A Program for Smoothing Scatterplots by Robust Locally Weighted Regression. The American Statistician, 35(1), 1981.
* [17] W.S. Cleveland and S.J. Devlin. Locally weighted regression: An approach to regression analysis by local fitting. Journal of the American Statistical Association, 83:596–610, 1988.
* [18] W.S. Cleveland, S.J. Devlin, and E. Grosse. Regression by local fitting: methods, properties, and computational algorithms. Journal of Econometrics, 37(1):87 – 114, 1988.
* [19] B. Talgorn. SGTELIB: Surrogate model library for derivative-free optimization. https://github.com/bbopt/sgtelib, 2019.
* [20] M.D. McKay, R.J. Beckman, and W.J. Conover. A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics, 21(2):239–245, 1979.
* [21] T.J. Santner, B.J. Williams, and W.I. Notz. The Design and Analysis of Computer Experiments, chapter 5.2.2, Designs Generated by Latin Hypercube Sampling, pages 127–132. Springer, New York, NY, 2003.
* [22] N. Mladenović and P. Hansen. Variable neighborhood search. Computers and Operations Research, 24(11):1097–1100, 1997.
* [23] P. Hansen and N. Mladenović. Variable neighborhood search: principles and applications. European Journal of Operational Research, 130(3):449–467, 2001.
* [24] H. Garg. Solving structural engineering design optimization problems using an artificial bee colony algorithm. Journal of Industrial and Management Optimization, 10(3):777–794, 2014.
* [25] J. Arora. Introduction to Optimum Design. Elsevier Science, 2004.
* [26] A.D. Belegundu. A Study of Mathematical Programming Methods for Structural Optimization. University of Iowa, 1982.
* [27] B.K. Kannan and S.N. Kramer. Augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. Journal of Mechanical Design, 65:103–112, 1993.
* [28] S.S. Rao. Engineering Optimization: Theory and Practice, 3rd Edition. Wiley-Interscience, 1996.
* [29] M. Lemyre Garneau. Modelling of a solar thermal power plant for benchmarking blackbox optimization solvers. Master’s thesis, École Polytechnique de Montréal, 2015.
* [30] E.D. Dolan and J.J. Moré. Benchmarking optimization software with performance profiles. Mathematical Programming, 91(2):201–213, 2002.
* [31] P. Jogalekar and M. Woodside. Evaluating the scalability of distributed systems. IEEE Transactions on Parallel and Distributed Systems, 11(6):589–603, 2000.
* [32] C.G. Atkeson, A.W. Moore, and S. Schaal. Locally weighted learning. Artificial Intelligence Review, pages 11–73, 1997.
## Appendix A LOWESS predictions
As a convention, we denote with
${\boldsymbol{\xi}}\in\mathcal{X}\subseteq{\mathbb{R}}^{n}$ the point of the
design space where we want to predict the value of the blackbox output.
Locally weighted scatterplot smoothing (LOWESS) models build a local linear
regression at the point ${\boldsymbol{\xi}}$ where the blackbox outputs
$[f\;c_{1}\ldots c_{m}]$ are to be estimated [32, 15, 16, 17, 18, 10]. This
local regression emphasizes data points that are close to
${\boldsymbol{\xi}}$. The interested reader can refer to [10] for details
about the method described below. We consider here only local linear
regressions; local quadratic regressions and Tikhonov regularization are
considered in [10]. In contrast, while only a Gaussian kernel was considered
in [10], six additional kernel functions are introduced here.
We define the output matrix $\mathbf{Y}\in{\mathbb{R}}^{p\times(m+1)}$, the
design matrix $\mathbf{Z}_{\boldsymbol{\xi}}\in{\mathbb{R}}^{p\times(n+1)}$,
and the weight matrix $\mathbf{W}_{\boldsymbol{\xi}}\in{\mathbb{R}}^{p\times
p}$:
$\mathbf{Y}=\left[\begin{array}{cccc}f(\mathbf{x}_{1})&c_{1}(\mathbf{x}_{1})&\ldots&c_{m}(\mathbf{x}_{1})\\ \vdots&\vdots&&\vdots\\ f(\mathbf{x}_{p})&c_{1}(\mathbf{x}_{p})&\ldots&c_{m}(\mathbf{x}_{p})\end{array}\right],\quad\mathbf{Z}_{\boldsymbol{\xi}}=\left[\begin{array}{cc}1&(\mathbf{x}_{1}-{\boldsymbol{\xi}})^{\top}\\ \vdots&\vdots\\ 1&(\mathbf{x}_{p}-{\boldsymbol{\xi}})^{\top}\end{array}\right],\quad\mathbf{W}_{\boldsymbol{\xi}}=\left[\begin{array}{ccc}w_{1}({\boldsymbol{\xi}})&&\\ &\ddots&\\ &&w_{p}({\boldsymbol{\xi}})\end{array}\right].$ (18)
The details of the computation of $w_{i}({\boldsymbol{\xi}})$ are described in
Section A.1. Then, we define
$\mathbf{u}_{\boldsymbol{\xi}}\in{\mathbb{R}}^{n+1}$ as the first column of
$(\mathbf{Z}_{\boldsymbol{\xi}}^{\top}\mathbf{W}_{\boldsymbol{\xi}}\mathbf{Z}_{\boldsymbol{\xi}})^{-1}$,
which means that $\mathbf{u}_{\boldsymbol{\xi}}$ is the solution of the linear
system
$\mathbf{Z}_{\boldsymbol{\xi}}^{\top}\mathbf{W}_{\boldsymbol{\xi}}\mathbf{Z}_{\boldsymbol{\xi}}\mathbf{u}_{\boldsymbol{\xi}}=\mathbf{e}_{1}$.
The prediction of the blackbox outputs at ${\boldsymbol{\xi}}$ is then
$\hat{\mathbf{y}}({\boldsymbol{\xi}})=\left[\begin{array}[]{c c c
c}\hat{f}({\boldsymbol{\xi}})&\hat{c}_{1}({\boldsymbol{\xi}})&\ldots&\hat{c}_{m}({\boldsymbol{\xi}})\end{array}\right]=\mathbf{u}_{\boldsymbol{\xi}}^{\top}\mathbf{Z}_{\boldsymbol{\xi}}^{\top}\mathbf{W}_{\boldsymbol{\xi}}\mathbf{Y}.$
(19)
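Concretely, Eqs. (18)–(19) amount to solving one small weighted least-squares system per prediction point. The following dependency-free sketch is illustrative only: it handles a single output $f$ and takes the weights $w_{i}({\boldsymbol{\xi}})$ as given (the weight computation of Section A.1 is not reproduced here), and all variable names are assumptions.

```python
# Sketch of the LOWESS prediction: build Z_xi with rows [1, (x_i - xi)^T],
# solve (Z^T W Z) u = e_1, then predict y_hat = u^T Z^T W y.

def lowess_predict(X, y, w, xi):
    """X: list of p points (lists of n floats); y: p outputs;
    w: p weights; xi: prediction point. Returns y_hat(xi)."""
    p, n = len(X), len(xi)
    Z = [[1.0] + [X[i][j] - xi[j] for j in range(n)] for i in range(p)]
    # A = Z^T W Z, an (n+1) x (n+1) matrix
    A = [[sum(w[i] * Z[i][a] * Z[i][b] for i in range(p))
          for b in range(n + 1)] for a in range(n + 1)]
    u = solve(A, [1.0] + [0.0] * n)          # A u = e_1
    # y_hat = u^T (Z^T W y)
    return sum(u[a] * sum(w[i] * Z[i][a] * y[i] for i in range(p))
               for a in range(n + 1))

def solve(A, b):
    """Tiny Gauss-Jordan elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [M[r][k] - f * M[c][k] for k in range(n + 1)]
    return [M[i][n] / M[i][i] for i in range(n)]

# With equal weights and data lying exactly on a line, the local linear
# model recovers it: y = 2x + 1, so the prediction at x = 1.5 is 4.
X = [[0.0], [1.0], [2.0], [3.0]]
y = [1.0, 3.0, 5.0, 7.0]
pred = lowess_predict(X, y, [1.0] * 4, [1.5])
```

Because the regression is centered at ${\boldsymbol{\xi}}$, the prediction is simply the intercept of the local fit, which is why only the first column of $(\mathbf{Z}_{\boldsymbol{\xi}}^{\top}\mathbf{W}_{\boldsymbol{\xi}}\mathbf{Z}_{\boldsymbol{\xi}})^{-1}$ is needed.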
The cross-validation value $\hat{\hat{y}}(\mathbf{x}_{i})$ (i.e., the value
of the LOWESS model at $\mathbf{x}_{i}$ when the data point $\mathbf{x}_{i}$
is not used to build the model) is computed by setting $w_{i}$ to 0.
Unfortunately, unlike for radial basis function (RBF) models or polynomial
response surfaces (PRSs), we do not know any computational shortcut allowing
a more efficient computation of the values of $\hat{\hat{y}}$. However, each
value $\hat{\hat{y}}(\mathbf{x}_{i})$ is computed at the same computational
cost as a prediction $\hat{y}(\mathbf{x}_{i})$.
### A.1 Weights computation in LOWESS models
The weight $w_{i}({\boldsymbol{\xi}})$ quantifies the relative importance of
the data point $\mathbf{x}_{i}$ in the construction of the local regression at
${\boldsymbol{\xi}}$. Like for kernel smoothing, it relies on a kernel
function $\phi$ and depends on the distance between ${\boldsymbol{\xi}}$ and
$\mathbf{x}_{i}$. In our method, we use
$w_{i}({\boldsymbol{\xi}})=\phi\left(\lambda\frac{\|{\boldsymbol{\xi}}-\mathbf{x}_{i}\|_{2}}{d_{n+1}({\boldsymbol{\xi}})}\right),$
(20)
where $\phi(d)$ is one of the kernel functions described in Table 1 and Figure
7. All kernel functions are normalized so that $\phi(0)=1$ and, if applicable,
$\int_{\mathbb{R}}\phi=1$. As the integral of the inverse multi-quadratic
kernel does not converge, the normalization constant 52.015 is introduced to
minimize the $\mathcal{L}^{2}$ distance between the inverse multi-quadratic
and the inverse quadratic kernel. The parameter $\lambda>0$ controls the
general shape of the model, and $d_{n+1}({\boldsymbol{\xi}})$ is a local
scaling coefficient that estimates the distance of the $(n+1)^{\text{th}}$
closest data point to ${\boldsymbol{\xi}}$. The kernel function $\phi$ and the
shape parameter $\lambda$ are chosen to minimize the aggregate order error
with cross-validation (AOECV) described in Section A.2. The fact that some of
the available kernel functions have a compact domain gives LOWESS models the
ability to ignore outliers or aberrant data points. For example, if the
blackbox fails to compute the objective function correctly for a given data
point, the value it returns might be arbitrarily high (e.g., $1.8\times
10^{308}$ for a C++ code returning the standard max double). With a
non-compact kernel function, this would perturb the LOWESS model over the
entire design space. However, if there are no such aberrant data points,
non-compact kernel functions tend to yield better results.
Table 1: Possible values for the kernel function $\phi$

# | Kernel name | $\phi:{\mathbb{R}}\rightarrow{\mathbb{R}}^{+}$ | Compact domain
---|---|---|---
1 | Tri-cubic | $\phi(d)=(1-|\frac{162}{140}d|^{3})^{3}\mathbb{1}_{|d|\leq\frac{140}{162}}$ | Yes
2 | Epanechnikov | $\phi(d)=(1-\frac{16}{9}d^{2})\mathbb{1}_{|d|\leq\frac{3}{4}}$ | Yes
3 | Bi-quadratic | $\phi(d)=(1-|\frac{16}{15}d|^{2})^{2}\mathbb{1}_{|d|\leq\frac{15}{16}}$ | Yes
4 | Gaussian | $\phi(d)=\exp(-\pi d^{2})$ | No
5 | Inverse quadratic | $\phi(d)=\frac{1}{1+\pi^{2}d^{2}}$ | No
6 | Inverse multi-quadratic | $\phi(d)=\frac{1}{\sqrt{1+52.015d^{2}}}$ | No
7 | Exp-root | $\phi(d)=\exp(-2\sqrt{|d|})$ | No
Figure 7: Representation of the 7 kernels listed in Table 1
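The weighting scheme of Eq. (20) can be sketched as follows. This is an illustrative simplification, not the paper's exact procedure: the Gamma-based estimate of $d_{n+1}({\boldsymbol{\xi}})$ from Eq. (22) is replaced by the empirical distance to the $(n+1)^{\text{th}}$ closest training point, and only two of the seven kernels of Table 1 are shown.

```python
# Sketch of Eq. (20): w_i(xi) = phi(lambda * ||xi - x_i|| / d_scale),
# with a Gaussian kernel (non-compact) and an Epanechnikov kernel
# (compact support |d| <= 3/4). Names and data are illustrative.
import math

def gaussian(d):
    return math.exp(-math.pi * d * d)

def epanechnikov(d):
    return max(0.0, 1.0 - (16.0 / 9.0) * d * d)

def weights(X, xi, lam, phi):
    n = len(xi)
    dists = sorted(math.dist(x, xi) for x in X)
    # simplification: empirical distance to the (n+1)-th closest point,
    # instead of the Gamma-based estimate of Eq. (22)
    d_scale = dists[min(n, len(X) - 1)]
    return [phi(lam * math.dist(x, xi) / d_scale) for x in X]

X = [[0.0], [1.0], [2.0], [5.0]]
w = weights(X, xi=[1.1], lam=1.0, phi=gaussian)
# points near xi get weights close to 1; distant ones decay toward 0
```

With the compact Epanechnikov kernel, any point farther than $\frac{3}{4}\,d_{\text{scale}}/\lambda$ receives a weight of exactly zero, which is the mechanism that lets LOWESS ignore aberrant data points.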
To obtain a model $\hat{y}$ that is differentiable everywhere, [10] defines
$d_{n+1}({\boldsymbol{\xi}})$ such that the expected number of training points
in a ball of center ${\boldsymbol{\xi}}$ and radius
$d_{n+1}({\boldsymbol{\xi}})$ is $n+1$:
${\mathbb{E}}\left[\text{card}\left\{\mathbf{x}_{i}:\mathbf{x}_{i}\in\mathbf{X},\;\|{\boldsymbol{\xi}}-\mathbf{x}_{i}\|_{2}\leq d_{n+1}({\boldsymbol{\xi}})\right\}\right]=n+1.$ (21)
Moreover, [10] observes that the values
$\left\{\|{\boldsymbol{\xi}}-\mathbf{x}_{i}\|_{2}^{2}\right\}_{i=1,\ldots,p}$
can be fitted well by a Gamma distribution and therefore defines the local
scaling parameter as
$d_{n+1}({\boldsymbol{\xi}})=\sqrt{g^{(-1)}\left(\frac{\mu_{\boldsymbol{\xi}}^{2}}{\sigma_{\boldsymbol{\xi}}^{2}},\frac{\sigma_{\boldsymbol{\xi}}^{2}}{\mu_{\boldsymbol{\xi}}};\frac{n+1}{p}\right)},$
(22)
where $\mu_{\boldsymbol{\xi}}$ (resp. $\sigma^{2}_{\boldsymbol{\xi}}$) denotes
the mean (resp. variance) of $\|{\boldsymbol{\xi}}-\mathbf{x}_{i}\|_{2}^{2}$
over $\mathbf{X}$, and $g^{(-1)}(k,\theta;\cdot)$ is the inverse of the
cumulative distribution function of a Gamma distribution with shape parameter
$k$ and scale parameter $\theta$.
### A.2 Aggregate Order Error with Cross-Validation
The AOECV is an error metric that aims at quantifying the quality of a _multi-
output_ surrogate model. Specifically, it aims at quantifying the discrepancy
between problems ($P$) and ($\hat{P}$) for a given surrogate model. We first
define the aggregate constraint violation function [12]
$h(\mathbf{x})=\sum_{j=1}^{m}\max\{0,c_{j}(\mathbf{x})\}^{2}$. Note that
other definitions of $h$ are possible (notably: number of violated
constraints, most violated constraint, etc.) but as the previous definition of
$h$ is used in the main MADS instance to solve ($P$), we need to use the same
aggregate constraint in our definition of the AOECV.
We then define the order operators
$\mathbf{x}\prec\mathbf{x}^{\prime}\;\Leftrightarrow\;h(\mathbf{x})<h(\mathbf{x}^{\prime})\;\text{ or }\;\left(h(\mathbf{x})=h(\mathbf{x}^{\prime})\text{ and }f(\mathbf{x})<f(\mathbf{x}^{\prime})\right),$ (26)
$\mathbf{x}\preceq\mathbf{x}^{\prime}\;\Leftrightarrow\;{\tt not}(\mathbf{x}^{\prime}\prec\mathbf{x}),$ (27)
which are transitive. In particular, the incumbent solution $\mathbf{x}^{t}$
of the original problem ($P$) is such that
$\mathbf{x}^{t}\preceq\mathbf{x},\;\forall\mathbf{x}\in\mathbf{X}$. Similarly,
a global minimizer $\mathbf{x}^{*}$ is such that
$\mathbf{x}^{*}\preceq\mathbf{x},\;\forall\mathbf{x}\in\mathcal{X}$. By the
same principle, we define the operator $\hat{\hat{\prec}}$ by using the
cross-validation values $\hat{\hat{f}}$ and
$\hat{\hat{h}}=\sum_{j=1}^{m}\max\{0,\hat{\hat{c}}_{j}(\mathbf{x})\}^{2}$
instead of $f$ and $h$. We then define the aggregate order error with cross-
validation (AOECV) metric:
$\mathcal{E}_{AOECV}=\frac{1}{p^{2}}\displaystyle\sum_{i=1}^{p}\sum_{j=1}^{p}{\tt xor}\Big(\mathbf{x}_{i}\prec\mathbf{x}_{j},\;\mathbf{x}_{i}\,\hat{\hat{\prec}}\,\mathbf{x}_{j}\Big),$ (28)
where ${\tt xor}$ is the exclusive-or operator (i.e., ${\tt xor}(A,B)=1$ if the
booleans $A$ and $B$ differ and 0 otherwise). The metric quantifies how often
the model is able to correctly decide which of two points is better.
The shape parameter $\lambda$ and the kernel function $\phi$ are then chosen
to minimize $\mathcal{E}_{AOECV}(\lambda,\phi)$. If two pairs
$(\lambda,\phi)$ lead to the same metric value (because of the
piecewise-constant nature of the metric), the pair with the smallest value of
$\lambda$ (i.e., the smoother model) is preferred.
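The AOECV metric follows directly from Eqs. (26)–(28): count, over all ordered pairs of cached points, how often the true order and the cross-validation order disagree. A small illustrative sketch follows; the data are made up for the example.

```python
# Sketch of the AOECV metric. `truth` holds (f, h) for each cached point
# and `cv` holds the cross-validation predictions (f^^, h^^).

def precedes(a, b):
    """Order of Eq. (26): a < b iff smaller h, or equal h and smaller f."""
    (fa, ha), (fb, hb) = a, b
    return ha < hb or (ha == hb and fa < fb)

def aoecv(truth, cv):
    p = len(truth)
    # xor of the two booleans, summed over all ordered pairs (Eq. (28))
    errors = sum(
        precedes(truth[i], truth[j]) != precedes(cv[i], cv[j])
        for i in range(p) for j in range(p)
    )
    return errors / p ** 2

truth = [(1.0, 0.0), (2.0, 0.0), (3.0, 1.0)]
cv    = [(2.5, 0.0), (2.0, 0.0), (3.0, 0.5)]  # model swaps points 0 and 1
```

Here the model misorders one pair of feasible points, in both directions, so 2 of the $3^{2}=9$ ordered comparisons disagree and the metric is $2/9$.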
## Appendix B Detailed description of the test problems
The five engineering design application problems considered are listed in
Table 2. Problem size is reflected by $n$ and $m$, where $n$ denotes the
number of design variables and $m$ the number of general nonlinear inequality
constraints. Table 2 also indicates whether any variables are integer or
unbounded and reports the best known value of the objective function.
Table 2: Summary of the five engineering design optimization problems

Problem name | $n$ | $m$ | Integer variables | Infinite bounds | Best objective function value
---|---|---|---|---|---
TCSD | 3 | 4 | No | No | 0.0126652
Vessel | 4 | 4 | No | No | 5,885.332
Welded | 4 | 6 | No | No | 2.38096
Solar 1 | 9 | 5 | Yes | Yes | –900,417
Solar 7 | 7 | 6 | Yes | Yes | –4,976.17
The Tension/Compression Spring Design (TCSD) problem consists of minimizing
the weight of a spring under mechanical constraints [24, 25, 26]. The design
variables define the geometry of the spring. The constraints concern shear
stress, surge frequency and minimum deflection. The best known solution,
denoted $\mathbf{x}^{*}$, and the bounds on the variables, denoted by
$\underline{\mathbf{x}}$ and $\bar{\mathbf{x}}$, are given in Table 3.
Table 3: Variables of the TCSD problem

Variable description | $\underline{\mathbf{x}}$ | $\bar{\mathbf{x}}$ | $\mathbf{x}^{*}$
---|---|---|---
Wire diameter | 0.05 | 2 | 0.051686696913218
Mean coil diameter | 0.25 | 1.3 | 0.356660815351066
Number of active coils | 2 | 15 | 11.292312882259289
The Vessel problem considers the optimal design of a compressed air storage
tank [24, 27]. The design variables define the geometry of the tank. The
constraints are related to the volume, pressure, and solidity of the tank. The
objective is to minimize the total cost of the tank, including material and
labour. Table 4 lists the variable bounds and the best known solution.
Table 4: Variables of the Vessel problem

Variable description | $\underline{\mathbf{x}}$ | $\bar{\mathbf{x}}$ | $\mathbf{x}^{*}$
---|---|---|---
Thickness of the vessel | 0.0625 | 6.1875 | 0.778168641330718
Thickness of the head | 0.0625 | 6.1875 | 0.384649162605973
Inner radius | 10 | 200 | 40.319618721803231
Length of the vessel without heads | 10 | 200 | 199.999999998822659
The Welded (or welded beam design) problem (Version I) consists of minimizing
the construction cost of a beam under shear stress, bending stress, load and
deflection constraints [24, 28]. The design variables define the geometry of
the beam and the characteristics of the welded joint. Table 5 lists the
variable bounds and the best known solution.
Table 5: Variables of the Welded problem

Variable description | $\underline{\mathbf{x}}$ | $\bar{\mathbf{x}}$ | $\mathbf{x}^{*}$
---|---|---|---
Thickness of the weld | 0.1 | 2 | 0.244368407428265
Length of the welded joint | 0.1 | 10 | 6.217496713101864
Width of the beam | 0.1 | 10 | 8.291517255567012
Thickness of the beam | 0.1 | 2 | 0.244368666449562
The Solar1 and Solar7 problems consider the optimization of a solar farm,
including the heliostat field and/or the receiver [29]. The Solar1
optimization problem aims at maximizing the energy received over a period of
24 hours under several constraints of budget and heliostat field area. This
problem has one integer variable that has no upper bound. Table 6 lists the
variable bounds and the best known solution.
Table 6: Variables of the Solar1 problem

Variable description | $\underline{\mathbf{x}}$ | $\bar{\mathbf{x}}$ | $\mathbf{x}^{*}$
---|---|---|---
Heliostat height | 1 | 40 | 6.165258994385601
Heliostat width | 1 | 40 | 10.571794049143792
Tower height | 20 | 250 | 91.948461670428486
Receiver aperture height | 1 | 30 | 6.056202026704944
Receiver aperture width | 1 | 30 | 11.674984434929991
Max number of heliostats (Integer) | 1 | $+\infty$ | 1507
Field maximum angular span | 1 | 89 | 51.762281627953051
Minimum distance to tower | 0.5 | 20 | 1.347318830713629
Maximum distance to tower | 1 | 20 | 14.876940809562798
The Solar7 problem aims at maximizing the efficiency of the receiver over a
period of 24 hours, for a given heliostats field, under 6 binary constraints
[29]. This problem has one integer variable that has no upper bound. The
objective function is the energy transferred to the molten salt. Table 7 lists
the variable bounds and the best known solution.
Table 7: Variables of the Solar7 problem

Variable description | $\underline{\mathbf{x}}$ | $\bar{\mathbf{x}}$ | $\mathbf{x}^{*}$
---|---|---|---
Aperture height | 1 | 30 | 11.543687848308958
Aperture width | 1 | 30 | 15.244236061098078
Outlet temperature | 793 | 995 | 803.000346734710888
Number of tubes (Integer) | 1 | $+\infty$ | 1292
Insulation thickness | 0.01 | 5 | 3.399190219909724
Tubes inside diameter | 0.005 | 0.1 | 0.010657067457678
Tubes outside diameter | 0.0055 | 0.1 | 0.011167646941518
arXiv:2107.12421 | arxiv-papers | 2021-07-26 | CC BY 4.0 | Bastien Talgorn, Stéphane Alarie, and Michael Kokkolaras

2107.12423
# HySec-Flow: Privacy-Preserving Genomic Computing with SGX-based Big-Data
Analytics Framework
Chathura Widanage1, Weijie Liu1, Jiayu Li1, Hongbo Chen1, XiaoFeng Wang2, Haixu Tang2, Judy Fox3
1,2Indiana University, 3University of Virginia
1{cdwidana,weijliu,jl145,hc50}@iu.edu 2{xw7,hatang}@indiana.edu 3{ckw9mp}@virginia.edu
###### Abstract
Trusted execution environments (TEE) such as Intel’s Software Guard Extension
(SGX) have been widely studied to boost security and privacy protection for
the computation of sensitive data such as human genomics. However, SGX often
introduces a performance hurdle, especially due to its small enclave memory.
In this paper, we propose a new Hybrid Secured Flow framework (called
“HySec-Flow”) for large-scale genomic data analysis using SGX
platforms. Here, the data-intensive computing tasks can be partitioned into
independent subtasks to be deployed into distinct secured and non-secured
containers, therefore allowing for parallel execution while alleviating the
limited size of the Enclave Page Cache (EPC) memory in each enclave. We
illustrate our contributions using a workflow supporting indexing, alignment,
dispatching, and merging the execution of SGX-enabled containers. We provide
details regarding the architecture of the trusted and untrusted components
and the underlying Scone and Graphene support as generic shielding execution
frameworks to port legacy code. We thoroughly evaluate the performance of our
privacy-preserving reads mapping algorithm using real human genome sequencing
data. The results demonstrate that the performance is enhanced by partitioning
the time-consuming genomic computation into subtasks compared to the
conventional execution of the data-intensive reads mapping algorithm in an
enclave. The proposed HySec-Flow framework is made available as open-source
software and can be adapted to the data-parallel computation of other
large-scale genomic tasks requiring security and scalable computational
resources.
###### Index Terms:
Privacy-preserving Computing; Software Guard Extension (SGX); Reads mapping.
## I Introduction
Security and privacy issues have received increasing attention in big-data
analytics performed on public or commercial clouds. In particular, personal
genomic data contain identifiable information concerning human individuals: it
has been shown that the identity of a participant in a human genome study
could be revealed from her genetic profile through searching online genealogy
databases [1]. As a result, biomedical researchers are cautious of moving the
intensive computation involving personal human genomic data onto the
commercial cloud.
Cryptographic techniques are available to protect data privacy on the cloud.
Homomorphic encryption (HE) [2] allows users to perform computation directly
on encrypted data. However, HE introduces several magnitudes of computational
overheads. A promising alternative has recently been presented by a new
generation of hardware supporting a trusted execution environment (TEE), in
which sensitive data are kept on secure storage and processed in an isolated
environment, called the enclave. A prominent example is the Intel Software
Guard Extension (SGX) [3], which has a set of instructions for establishment
and management of an enclave on Intel’s mainstream processors, which are
available in major cloud service providers such as Microsoft Azure [4].
Current benchmark experiments on data-intensive computing tasks[5] demonstrate
that SGX provides data protection against attacks from the host operating
system or even system administrators while introducing only moderate
computation overhead; therefore, it is widely considered to be suitable for
data-intensive computation, including the computing tasks involving personal
human genomic data.
Figure 1: Framework Overview.
Privacy-preserving algorithms have been developed for several genomic analysis
tasks, including genetic testing and variant searching using human genomic
profiles [6, 7, 8]. These tasks are relatively lightweight and do not require
extensive memory that exceeds the limited Enclave Page Cache (EPC) available in an
enclave. Hence, the efforts of these implementations were focused on the data
encryption/decryption and the protection of the data from side-channel
information leaks (e.g., using data oblivious algorithms [9]). More recently,
privacy-preserving algorithms [10, 11] were developed for Genome-wide
Association Studies (GWAS), a common computational approach to identifying the
associations between phenotypes and genetic variants [12]. These methods
exploited sketching algorithms to reduce the memory usage of GWAS computation
to be executed inside the enclave within the limits of EPC memory. However,
the sketching algorithms were customized for the specific computing task
(i.e., GWAS) and cannot be generalized to other tasks. Furthermore, privacy-
preserving approaches are still lacking for parallel data-intensive
computation using multiple enclaves enabled by SGX.
Figure 2: The conventional (Untrusted) workflow: the data with gray background
needs to be encrypted because it involves private information.
This paper presents a generic privacy-preserving data analytics framework for
developing large-scale genome computing tasks using SGX. A key challenge here
is that only limited resources are directly accessible by the code running
inside the enclave. Therefore, it is critical to devise a sophisticated method
to partition the target computation task into subtasks so that each subtask
can be executed efficiently using the enclave when necessary.
It is worth noting that HySec-Flow distinguishes itself from the current
approaches (e.g., Scone [13] and Graphene-SGX [14]) that provide runtime
environments to run existing programs inside the enclave. As shown in our
benchmark experiments, these approaches do not support either a hybrid
enclave/non-enclave architecture or the use of parallel computation with
multiple enclaves and so are not scalable for large data-intensive genome
computing tasks. Furthermore, the efficiency of running specific algorithms
inside the enclave is not optimized in the original applications. In addition
to subtasking, the other contributions of this paper include:
* •
We design a hybrid task scheduler to integrate secure and non-secure
containers into HySec-Flow for performing the subtasks in enclaves to address
scaling issues of SGX.
* •
We demonstrate the design strategy of our analytics framework using the
implementation of the reads mapping task (i.e., the alignment of millions of
short ($\approx 100$ bases long) DNA sequences (reads) acquired from a human
individual onto a reference human genome).
Reads mapping serves as a prerequisite step for many downstream analyses in
human genome computing (e.g., genome variation calling, genotyping, gene
expression analysis), and thus many software tools (e.g., BWA [15], Bowtie
[16]) have been developed for this fundamental task. Notably, the reads
acquired from a human individual contain identifiable information about the
donor and should be protected in a public cloud environment. Previously,
customized algorithms were proposed for privacy-preserving reads mapping using
cryptography approaches [17], which introduced significant computing overheads
and did not scale well with massive demands.
To the best of our knowledge, HySec-Flow is the first SGX-based privacy-
preserving solution for reads mapping that introduces reasonable computing
overhead while being highly parallelizable and scalable. Our novel hybrid task
scheduler with secure containers enables a workflow for complex analysis such
as a modified reads mapping and alignment algorithm. The end-to-end secure
analysis framework, as shown in Fig. 1, is released as open-source software at
[18].
## II Background
### II-A Intel SGX
Intel SGX is a set of x86 instruction extensions that offer hardware-based
memory encryption and isolation for application code and data. The protected
memory area (called an enclave) resides in an application’s address space,
providing confidentiality and integrity protection. SGX is a user-space TEE
characterized by flexible process-level isolation: a program component can get
into an enclave mode and be protected by execution isolation, memory
encryption, and data sealing against the threats from the untrusted OS and
processes running in other enclaves. More specifically, the memory of an
enclave is mapped to a special physical memory area called Enclave Page Cache
(EPC). It is encrypted by Memory Encryption Engine (MEE) and cannot be
directly accessed by other system software. Such protection, however, comes
with in-enclave resource constraints. In particular, only 128 MB (256 MB on
some newer processors) of encryption-protected EPC memory is reserved.
Although virtual memory support is available, it incurs significant paging
overhead.
### II-B SGX Remote Attestation and Data Sealing
SGX remote attestation allows a remote user to verify that the enclave is
correctly constructed and runs on a genuine SGX platform. In Intel’s
attestation model, three parties are involved: (1) The Independent Software
Vendor (ISV) who is registered to Intel as the enclave developer; (2) The
Intel Attestation Service (IAS) hosted by Intel which verifies the enclave;
and (3) The SGX platform, which operates the SGX enclaves. The attestation
begins with the ISV sending an attestation request challenge, which can be
generated by an enclave user who wants to perform the attestation of the
enclave. The attested enclave then generates a verification report including
the enclave measurement, which can be verified by an Intel-signed quoting
enclave (QE) through local attestation. The QE signs the report using the
attestation key, and the generated quote is forwarded to the Intel Attestation
Service (IAS). The IAS verifies the quote and signs the verification result
using the Intel private key. The ISV or the enclave user can then be
convinced by verifying the signature and comparing the enclave measurement.
When an enclave is instantiated, it protects the data by keeping
it within the enclave boundary. In general, the secrets provisioned to an
enclave are lost when the enclave is closed. However, if private data must be
preserved for future use within an enclave, it must be stored outside the
enclave boundary before the enclave is closed. To protect and preserve such
data, SGX provides a mechanism that allows enclave software to retrieve a key
unique to that enclave, which the enclave can only generate on that particular
platform. Using that key, the enclave software can encrypt data and store it
on the platform, or decrypt encrypted data already stored on the platform. SGX
refers to these encryption and decryption operations as sealing and unsealing,
respectively. With sealing, data within the enclave is encrypted using an
encryption key derived from the CPU hardware.
Intel SGX provides two policies for encryption keys: MRENCLAVE (enclave
identity) and MRSIGNER (signing identity). These policies affect the
derivation of the encryption key and are described in the documentation of
Intel SGX [19]. Developers can take advantage of sealing based on the Signing
Identity policy to share sensitive data via a sealed data blob between
multiple enclaves initiated by a single application and/or those by different
applications. To utilize Intel SGX’s data sealing feature, we use the set of
keys generated and stored in the processor’s fuse array. There are two
identities associated with an enclave. The first is the Enclave Identity and
is represented by the value of MRENCLAVE, which is a cryptographic hash of the
enclave log (measurement) as it goes through every step of the build and
initialization process. MRENCLAVE uniquely identifies a particular enclave,
so using the Enclave Identity would restrict access to the sealed data to
instances of that very enclave. Since our workflow shares sealed data between
enclaves, we use the other key policy provided by SGX, MRSIGNER, which
generates a key based on the value of the enclave’s MRSIGNER and the
enclave’s version. Specifically, we encapsulate the `sgx_seal_data()`
function to better leverage the key derived from the EGETKEY instruction. We
also implement utility code for encrypting the initial genome data.
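Sealing is, at its core, authenticated encryption under a key that only the platform can re-derive (via EGETKEY, parameterized by MRSIGNER or MRENCLAVE). As an illustration only, the stdlib-only sketch below emulates the seal/unseal round trip with an HMAC-derived key and keystream; real sealing uses AES-GCM inside the enclave via `sgx_seal_data()`, and the root key lives in the CPU fuses, not in software.

```python
import hashlib
import hmac
import os

def derive_seal_key(root_key: bytes, mrsigner: bytes, isv_svn: int) -> bytes:
    # Emulates EGETKEY under the MRSIGNER policy: the key depends on the
    # signing identity and enclave version, not on MRENCLAVE, so any enclave
    # from the same vendor can re-derive it.
    return hmac.new(root_key, mrsigner + isv_svn.to_bytes(2, "big"),
                    hashlib.sha256).digest()

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in
               zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # integrity tag
    return nonce + tag + ct

def unseal(key: bytes, blob: bytes) -> bytes:
    nonce, tag, ct = blob[:16], blob[16:48], blob[48:]
    if not hmac.compare_digest(
            tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("sealed blob was tampered with")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))

# Two enclaves signed by the same vendor derive the same MRSIGNER-policy key,
# so data sealed by one can be unsealed by the other.
root = b"stand-in for the per-CPU fuse key"
k1 = derive_seal_key(root, b"vendor-signing-identity", 1)
k2 = derive_seal_key(root, b"vendor-signing-identity", 1)
blob = seal(k1, b"ACGTACGT reads")
assert unseal(k2, blob) == b"ACGTACGT reads"
```

The per-blob random nonce mirrors the structure of a real sealed blob, where the key-derivation material and GCM tag accompany the ciphertext.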
### II-C Bloom filter
A Bloom filter is a space-efficient probabilistic data structure. It provides
membership queries over dynamic sets with an allowable false positive rate.
Figure 3: Conventional Bloom filter with $k=3$, illustrating a true positive
and a false positive.
An ordinary Bloom filter consists of a bit array $B$ of $m$ bits, which are
initially set to $0$, and $k$ hash functions, $h_{1},h_{2},...,h_{k}$, mapping
keys from a universe $U$ to the bit-array range $\\{1,2,...,m\\}$. To insert
an element $x$ from a set $S=\\{x_{1},x_{2},...,x_{n}\\}$ into the filter,
the bits at positions $h_{1}(x),h_{2}(x),...,h_{k}(x)$ are set to $1$. To
query whether an element $q$ is in the filter, all of the bits at positions
$h_{1}(q),h_{2}(q),...,h_{k}(q)$ are examined. If at least one bit is equal to
$0$, then $q$ is not in $S$. Otherwise, $q$ likely belongs to the set. The
false positive rate is
$F=(1-(1-\frac{1}{m})^{kn})^{k}\approx(1-\exp{(-k/r)})^{k},$ where $r=m/n$ is
the number of bits per element.
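A minimal sketch of such a filter (the hash choice and sizes are illustrative), which also checks the empirical false-positive rate against the formula above:

```python
import hashlib
import math

class BloomFilter:
    def __init__(self, m: int, k: int):
        self.m, self.k = m, k
        self.bits = bytearray((m + 7) // 8)

    def _positions(self, key: bytes):
        # k positions via salted SHA-256 (any k hash functions would work).
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(4, "big") + key).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key: bytes):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def test(self, key: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(key))

n, k = 1000, 3
m = 10 * n                        # r = m/n = 10 bits per element
bf = BloomFilter(m, k)
for i in range(n):
    bf.add(b"in-%d" % i)

assert all(bf.test(b"in-%d" % i) for i in range(n))     # no false negatives
trials = 10000
fp = sum(bf.test(b"out-%d" % i) for i in range(trials)) / trials
expected = (1 - math.exp(-k * n / m)) ** k              # ~1.7% for r=10, k=3
assert abs(fp - expected) < 0.02                        # formula holds in practice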
### II-D Threat Model
For HySec-Flow, we follow the classical SGX threat model. Denial-of-Service
(DoS), side-channel attacks, and physical attacks against the CPU are out of
scope [20, 21] and can be tackled by different techniques (e.g., mitigating
the negative effect of Hyper-threading [22, 23]). Similarly, enclaves are
trusted and free of vulnerabilities.
## III Architecture
Figure 4: The Workflow of Privacy-Preserving Genomic Computing Framework with
Hybrid Containers and Resources.
In the framework shown in Fig. 1, the driver (task scheduler) runs on the
central control node where a computing task first splits into subtasks, and
the subtasks are then deployed to worker nodes for execution. Subtasks are
deployed with APIs of existing orchestration tools on their distributed
platforms (e.g., Kubernetes and Docker Swarm) and the framework-specific
communication mechanism between the workers (secure/non-secure containers). In
this paper, we implement the reads mapping algorithm using the proposed
framework.
input : $G=\\{g_{1},g_{2},\dots,g_{p}\\}$ : Reference genome partitions;
$I$: Input DNA Sequence;
$args$: Program arguments
output : $Q=\\{q_{1},q_{2},\dots,q_{p}\\}$ : Partitioned user input;
$M=\\{m_{1},m_{2},\dots,m_{p}\\}$ : Reads mapping per partition;
$S$ : Final output
1
2Function _PIPLINE(_G, I, args_)_:
3 B = GenerateBloomFilters(_G, args_)
4 Q = [] // protected fs
5 for _p in $\\{1\dots G.length\\}$ distributed _ do
6 Q[p] = DISPATCH(_B[i], I, args_) // protected
7
8
9 M = [] // protected fs
10 for _p in $\\{1\dots G.length\\}$ distributed _ do
11
12 M[p] = ALIGNMENT(_G[p], Q[p]_) // protected
13
14 S = MERGE(_M_)// protected
Algorithm 1 Distributed Pipeline
We’ve devised a workflow-based approach to address the privacy issues in the
existing read mapping software tools while adding the capability to
parallelize the computation workload across a cluster of Intel SGX-enabled
nodes. The DIDA framework[24] for parallel execution inspires our
implementation of reads mapping on high-performance computing platforms while
designing the SGX-based implementation for the subtasks involving sensitive
data (i.e., input reads). It first partitions the reference genome into
multiple segments and then uses bloom filters to partition the input set of
reads into subsets, which is assigned to a segment that the reads are likely
mapped onto. In the next step, multiple subtasks are deployed and each
substask involves a segment mapping to a subset of reads. Notably, although
Intel SGX provides hardware-assisted encryption and protection of sensitive
data, it comes with a performance cost due to the limited size of the EPC
(Enclave Page Cache) available for the computation inside the enclave. Hence,
we have to be careful only to move data that needs to be protected into the
SGX enclaves and perform only the computations involving sensitive data inside
the SGX enclaves. This clear separation helps to minimize the data that needs
to be moved into the protected space and minimize the computation overhead
introduced by EPC swaps.
Fig. 4 shows the detailed workflow of our implementation for SGX-based secure
reads mapping. We have adapted four significant tasks from the DIDA framework
that will be executed to perform genome sequencing. The driver node accepts
the job, while the worker nodes are used for the data pre-processing and read
alignment. The partitioning of the reference genome into segments and their
indexing is a one-time process. Furthermore, the reference genome is public
and such a step can be performed without using SGX.
input : $G=\\{g_{1},g_{2},\dots,g_{p}\\}$ : Reference genome partitions;
$I$: Input DNA Sequence
1
2Function _DISPATCH(_b, I, args_)_:
3 $q=[]$
4
5 // reading sequences of the input
6 for _seq in I_ do
7 for _bmer in seq_ do
8 if _ $b$.test(bmer)_ then
9 $q$.append(i)
10
11
12 return q
13
14Function _ALIGNMENT(_g, q_)_:
15 return bwa(g, q)
16
17
18Function _MERGE(_M_)_:
19 $S$ = merge(_M_) // call DIDA merge
20 return S
Algorithm 2 Internal operations within the framework
However, the input reads need to be protected. Together with the partitioned
reference genome, inputs are fed into the dispatch process to get the same
number of dispatched reads as the partitioned reference genome. The
partitioned reference genome and the dispatched reads are then distributed to
a cluster of nodes for the parallel running of the actual alignment (or run
sequentially on one node). The partial results are then merged to form the
final output, which will also be encrypted.
In Figs. 4-7, all the processes running within SGX are depicted in blue boxes.
The alignment process in the worker nodes uses Scone to minimize the source
code revision, which is required to handle sensitive data securely. The input
and output SAM files are stored in a protected folder. They are handled
transparently by the Scone file protection feature [25]. All nodes are from
the same cluster and have access to a common shared file system where the
intermediate results between processes are all encrypted. Each process within
an SGX is undergoing the unsealing/sealing process to securely read the data
and write the output to the file system.
### III-A Trust Establishment
When the data owner wants to delegate a job to HySec-Flow, the owner needs to
know that the service provider truly provides the service on a trusted
platform. Therefore, the owner can initiate remote attestation to establish
mutual trust. Since the source code of HySec-Flow is public, the data owner
can easily know whether the remote service is running in a trusted control
node enclave or not through verifying the measurement, which can be derived
from the enclave source code. The RA-TLS protocol can be integrated into our
work for trust establishment and key exchange. After mutual trust between the
data owner and service provider is established via remote attestation, a key
$K_{D}$ can be generated by ECDH key exchange to securely communicate and
transfer data. It should be noted that the key agreement step can be done
using the attestation feature of Scone’s premium version or using Graphene-
SGX’s remote attestation plan, so we don’t implement it by ourselves.
The data owner can then transfer data files encrypted using this key, and
these files can then be decrypted in the work nodes’ enclaves at the server-
side. Notice that the enclaves between the control node and work nodes also
need to establish mutual trust, and the key $K_{D}$ to decrypt data files
should also be passed through a secure channel. Yet intermediate data files
can be securely stored in untrusted storage, such as in a shared file system,
and be transferred via an untrusted channel since they are encrypted. Finally,
the framework can encrypt and return the result to the data owner.
### III-B Partition
This stage is performed to split the reference genome sequence into multiple
partitions such that each partition can be individually indexed and searched
on different nodes of the cluster. The partitioner takes the reference genome
as the input and outputs p number of partitions as shown in steps 1 and 2 of
Fig. 4. The partition task works only on non-sensitive data and can run on a
single node for a simple pass through the reference genome sequence. For the
same p and same reference genome, partitioning will only execute once.
### III-C Indexing
The partitions generated are indexed using a popular read alignment tool like
BWA. This operation can be performed parallelly on each partition utilizing
the available computing resources of the cluster. Furthermore, this operation
does not require to be running in a secure environment. Hence this step reads
and writes to the non-secure shared file system as shown in steps 3 and 4 of
Fig. 4.
### III-D Dispatch
The dispatch stage is performed to reduce the search space of an input DNA
sequence within each partition. This can be performed by utilizing many
application-dependent techniques. We adapt an approach based on the bloom
filters from DIDA. We compute a bloom filter for each partition by inserting
sub-sequences of length ’b’ of the reference genome partition with overlaps of
length ’l’. Bloom filter generation works only on non-sensitive data. We
perform this part of the dispatch process entirely outside Intel SGX (step 5
of Fig. 4). Furthermore, generated bloomfilters can be reused for future
executions as long as ’b’ and ’l’ remain the same. Hence we persist generated
bloom-filters to the disk as a binary file(6 of Fig. 4). Bloom-filters
generated per each partition will be assigned with a uniquely identifiable
name generated based on the reference genome and the ’b’ and ’l’ arguments.
Having a separate binary file for each bloomfilter makes running dispatch
inside the limited enclave memory efficient.
Figure 5: Dispatch
We assume input DNA sequences are in the encrypted form when we receive them
into the framework. The next stage of the _dispatch_ task is looking up the
bloom-filters to determine the membership of the subsequences of the input DNA
sequence within each partition. Since dispatch involves sensitive data, we
have modified the DIDA framework to execute the bloom filter lookup logic
inside the SGX enclaves. Input partitioning is performed by first loading the
encrypted DNA sequence into the SGX enclave and then decrypting internally to
extract the unencrypted data. Then we create empty string builders within the
enclave (Line 6 of algorithm 2) to hold the input sequence for each partition.
Finally, bloom filter lookups are performed to determine the membership. In
case of a positive lookup in the bloom filter, we append the input subsequence
to the corresponding string builder. As shown in Fig. 3 and explained in
section II-C, we expect false positive responses for some of the lookups. But
overall, this approach reduces the search space for the alignment step
significantly. Furthermore, the false-positive rate can be controlled by
configuring the size of the bloom-filter. We then persist input partitions
into the disk by encrypting the files transparently using the file protection
features provided by Scone or Graphene.
The dispatch step can be parallelly run for each reference genome partition as
depicted in line 4-5 of Algorithm 2. The output will be saved back to the
protected file system as shown in step 9 of Fig. 4.
### III-E Alignment
Together with the corresponding index of the partitioned reference genome, the
dispatched reads file is assigned to the cluster’s worker nodes for the
alignment process (Step 10 of Fig. 4). This step could run sequentially on a
single node or distribute over multiple nodes. As a proof-of-concept, we use
BWA for the actual alignment of the reads with the Scone framework to leverage
the SGX capability. Minor changes on the BWA code are needed so it works with
the Scone file protection [25] setup. It provides transparent file decryption
and encryption of the input and output files for the alignment setup. The code
is compiled within a docker image from Scone that provides a cross compiler
toolset and run in a docker container. This approach is generally applicable
to other legacy applications, like BWA, to run within SGX. While using the BWA
tool for this step, other alternative tools could be used, or even customized
programs developed totally with SGX SDK, in which case Scone would not be
needed anymore.
Figure 6: Alignment
The results of this step are the partial SAM output (Step 11 of Fig. 4) from
each dispatched reads and partitioned reference. Once all the results are
ready, they will be merged in the following step to form the final results. In
the evaluation experiment, we considered the input reads files containing
either the paired-end reads, in which two reads were sequenced from a short
distance (i.e., 300-500 base pairs apart) in the genome, or the single-end
reads each read was sequenced independently. Thus, the reads alignment task
for these two types of input is referred to as the paired-end alignment and
the single-end alignment respectively.
### III-F Merge
This task expects multiple encrypted SAM files (Step 12 of Fig. 4) as the
input and performs merging techniques in the DIDA framework inside the SGX
enclaves. The encrypted input SAM files will be decrypted only within the SGX
enclave. Once the merging is done, the output will be sealed (Step 13 of Fig.
4) using the user’s shared key since this is the final output expected by the
user. Besides sealing the final output and unsealing the initial input, we
have delegated encryption and decryption of the intermediate inputs and
outputs to the transparent file system’s encryption mechanisms provided by
Scone or Graphene.
Figure 7: Merge encrypted SAM files
### III-G Pipeline
Algorithm 1 shows how we can run the above stages in a distributed pipeline to
leverage the resources available across the cluster. For a given user query,
the first task scheduled will generate the bloomfilters. If the bloomfilters
are already available in the disk for the provided arguments, this stage
completes immediately. The next task is to run the dispatch in a secure
environment. So the resource scheduler is configured to schedule dispatch
tasks into nodes having SGX hardware capabilities. Once the dispatched task is
completed, the alignment tasks can be parallelly scheduled on multiple SGX
nodes as shown in Algorithm 1, lines 7-8. Once all the dispatch tasks are
completed, the merge can be scheduled on a secure node to generate the final
output.
### III-H Data Sealing
All information that lies outside the trusted parts (enclave) in the workflow
should be in ciphertext state. Therefore we propose sealing/unsealing modules
inside the enclave to encrypt/decrypt intermediate data across nodes.
Assuming the remote attestation has been done before the data owner’s input is
uploaded to the framework, a session key can be retrieved to establish a
secure channel between the genomic data owner and the framework. HySec-Flow
can accept file input in plaintext and can do the initial encryption for the
user. Besides, to protect the data transferring between enclaves from the
outside attacker, we seal the output data and unseal the input data with the
same key. To this end, secure channels can be built.
## IV Evaluation
### IV-A Security Analysis
SGX Enclave can protect the code/data integrity even when the executable is
loaded into a library OS (e.g., Graphene-SGX can provide a measurement hash
against the loaded code/library for checking). Moreover, disk I/O has been
safeguarded by Scone/Graphene’s protected filesystem, which utilizes AES-GCM
to encrypt user data and immediate data during the computation. Under our
threat model, the only security risk is key delivery, which is protected by
the secure channels we built after trust establishment. Therefore, file
tampering attacks can be defeated.
Side channels have been considered to be a threat to trusted execution
environments, including SGX. There is a line of research that identifies such
security risks [20, 21, 26, 27]. In the meantime, prior research also shows
that most of the side channel risks can be detected or mitigated using certain
defense strategies [28, 29, 23, 30, 22]. Most prior studies on SGX-based
computing systems consider side channels to be outside their threat models
[13, 14, 31] with the continuous interest in the topic, as Intel assumes when
developing the TEE [32]. Our research follows this threat model and has not
implemented known protection (including those against a set of micro-
architectural threats) in our prototype. In future research, we will evaluate
the impacts of side-channel leaks on our computing frameworks and genomic data
analysis tasks and build into our system a proper protection when necessary.
### IV-B Experimental setup & data sets
Our experiments are conducted on a 10-nodes SGX-enabled cluster, with each
node has an Intel(R) Xeon(R) CPU E3-1280 v5 @ 3.70GHz CPU and 64G RAM. The SGX
enclaves are initialized with 8GB heap space with both Scone and Graphene. The
libraries are ported into Graphene include ld.so, libc.so, libm.so, libdl.so,
libutil.so, and librt.so. We also port libpthread.so for multi-threading
support. Scone containers are based on Scone’s alpine linux images running
Scone version 1. We use datasets from the 1000 Genome project [33] for the
testing. Without loss of generality, for single-end alignment, we use the
SRR062634.filt.fastq, which has $\sim$309K reads, with 100bp per read. For
paired-end alignment, we use SRR062634_1 and SRR062634_2. These files are
arbitrarily selected as a personal genome. The detailed data set and
specification are shown in Table I.
TABLE I: Dataset specification. | | |
---|---|---|---
Data Set | Source | # Reads | bp/read
SRR062634.filt.fastq | 1000 Genomes[33] | 309K | 100
SRR062634_1 | 1000 Genomes[33] | 24M | 100
SRR062634_2 | 1000 Genomes[33] | 24M | 100
| | |
### IV-C Accuracy
Although the bloomfilter-based dispatch step narrows down the search space for
subsequent steps greatly, that comes with an impact on the accuracy of the
final output. However, the scope of our approach is to perform reads mapping
with an acceptable accuracy securely. Hence we consider the accuracy of the
final outputs from DIDA’s approach as the baseline. We compare the output
files generated by the merge stage after running dispatch, alignment, and
merge in sequence on Scone and outside Scone. When Scone outputs are decrypted
to obtain the plain text output, it matches exactly with the output from the
non-Scone execution.
### IV-D Benchmark of SGX overhead
Using SGX could introduce overhead from multiple aspects.
#### IV-D1 Overhead from enclave initialization
Enclave initialization overhead is impacted by the heap size requested. We
measure the enclave initialization time by varying the HeapMaxSize (16M, 64M,
256M, 1024M, 4096M). The results show a good linear relationship with the
increasing max size of heap/stack. We observe that enclave initialization time
is about 0.04 seconds per MB of the configured maximum heap size.
When developing an SGX application using SGX SDK in enclave configuration file
Enclave.config.xml, we can set the parameters StackMaxSize and HeapMaxSize.
These parameters determine the estimated memory requirements of the generated
enclave.
#### IV-D2 Overhead from OCall/ECall
The SGX-enabled program defines an interface using Enclave Definition Language
(EDL), in which ECalls and OCalls are defined. A program can only invoke these
defined methods to cross the untrusted and trusted execution environment
boundary. We measured the overhead of the invocation of these calls. The
overhead of OCall and Ecall are 5.27 and 4.65 seconds per million calls
respectively. As a comparison, making the same calls within the untrusted
environment only costs 1.3 ms per million calls.
#### IV-D3 Overhead from EPC page swapping
An enclave can only utilize what a Processor Reserved Memory (PRM) can provide
at the current stage, which is 128MB. In actual use, the usable memory size
for an SGX application is only around 90MB, and the system uses the rest.
Therefore, enclave Page Cache (EPC) can only use this memory. When a larger
dataset may not fit into this space, an EPC page swap occurs, and this process
introduces high overhead. For example, for data access pattern within the
memory region of size $\sim$40MB, the results show that 1 billion runs of the
emulated code block, when very few page faults occurred ($\sim 10^{4}$), the
execution time is around 3 seconds. However, when we need to frequently access
data outside of that region and thus EPC page swap occurred more frequently
(more than $10^{7}$ times), the execution time is around 300 seconds, which is
about 100 times slower.
### IV-E Optimal partitions for splitting the reference genome
We have experimented with a different number of partitions for the reference
genome to find the optimal configuration. Fig. 8 shows the results. The
runtime is measured by sequentially run the alignment for dispatched reads on
one single node using SGX via Scone. We notice that with the increasing number
of partitions, the overall runtime decrease. However, when the number of
partitions is greater than 60, it got flattened. Considering the human
reference genome data we used is about 3.2 GB, this translates to the
reference partition size around or smaller than 50 MB. With the usable memory
space around 90 MB for SGX, this optimal configuration suggests that the
entire indexing table can fit into the SGX EPC to minimize the unnecessary EPC
swapping, thus improving the overall performance. We use reference genome
partition number 80 as the optimal number to run within SGX in our future
experiments.
Figure 8: Sequential run time within SGX for different number of partitioned
reference genomes.
### IV-F Execution times of dispatch and merge
TABLE II: Dispatch, Alignment and Merge for BWA (Single End Reads)
| | |
---|---|---|---
# Partitions | Dispatch (seconds) | Alignment (seconds) | Merge (seconds)
Non Secure | Secure | Non Secure | Secure | Non Secure | Secure
min | avg | max | min | avg | max | min | avg | max | min | avg | max | |
10 | 14.45 | 14.52 | 14.45 | 44.00 | 44.58 | 44.87 | 0.27 | 0.43 | 0.77 | 5.19 | 7.58 | 8.70 | 0.88 | 4.70
20 | 6.57 | 7.81 | 8.06 | 22.35 | 26.62 | 27.26 | 0.12 | 0.14 | 0.16 | 2.32 | 4.10 | 4.77 | 0.83 | 4.65
40 | 3.20 | 4.46 | 4.56 | 9.63 | 14.32 | 14.58 | 0.06 | 0.10 | 0.14 | 1.05 | 1.77 | 2.15 | 0.80 | 4.62
80 | 1.48 | 1.48 | 2.93 | 6.29 | 9.12 | 9.68 | 0.03 | 0.13 | 0.47 | 0.44 | 0.73 | 1.18 | 0.81 | 4.66
| | | | | | | | | | | | | |
The alignment step is parallelly executed across the cluster. Minimum, Average
and Maximum time reported by containers.
Table II shows the results for the proposed partition and dispatch. The
partition and dispatch approach shows a slowdown which greatly improves for
higher partition counts due to better EPC utilization. Our approach makes it
easier to run in parallel because of the pleasingly parallel nature of the
data and the application. In the best case(if resources are available), we can
run the single end alignment pipeline securely in 15.53 seconds (based on
9.68s in parallel dispatching, 1.18s in parallel alignment over 80 nodes and
4.66s in merging) by partitioning the problem into 80 subtasks. Even in the
worst case that has only one SGX enabled node, we can expect to complete the
alignment in 792.46 seconds running sequentially.
TABLE III: Non Secure Execution (one-time calculations) | | |
---|---|---|---
#Partitions | Partitioning (s) | Bloomfilter Building (seconds) | Indexing (seconds)
1 | 0 | 1985.09 | 4302
10 | 37.59 | 1211.65 | 3052
20 | 39.39 | 1113.9 | 2853
40 | 40.17 | 1147.48 | 2602
80 | 41.93 | 1316 | 2292
| | |
Table II lists the time used for the dispatch and merge steps (single-end
reads). Although bloom filter building seems to be dominating the entire
workflow, it is a one-time operation. The same set of bloomfilters can be used
for subsequent executions on different user inputs. Also, we notice that
partition size does not significantly impact the execution time as these two
stages of the workflow are not parallelized. However, the dispatch step can be
parallelized to run in parallel on Bfn and Query to produce corresponding
Queryn in contrast to the Dispatch step shown in Fig 5. If computing resources
are available, this should reduce the execution time by a factor of ’p’, where
’p’ is the number of partitions.
### IV-G Execution times of Data Sealing/Unsealing
We use RDTSC (returns a 64-bit time stamp counter (TSC), which is increased on
every clock cycle) to measure the time consumption outside the enclave of data
sealing/unsealing functions we built. Each test runs 10 times. As for sealing
inside, we implement an OCall for timing. The OCall checks the outside TSC
value and itself costs less than 0.01ms. Table IV shows the average execution
time when different datasets are given. When dealing with the single-end input
SRR062634, the sealing time is less than 3s. For the pair-ended data
(SRR062634_1 and SRR062634_2), the sealing time is less than 10s.
TABLE IV: Data Sealing/Unsealing with Intel SGX | |
---|---|---
Operation | Single End (seconds) | Pair End (seconds)
Sealing outside | 2.59 | 8.29
Unsealing inside | 2.59 | 8.30
Sealing inside | 2.59 | 8.31
| |
### IV-H Execution Time of Reads Mapping
Table V lists the execution time for the reads mapping tasks in different
settings, which includes single-end and paired-end execution times. The
overhead of using SGX and the speedup of our proposed solution is compared to
the direct Scone and Graphene solutions. The experiment setup and scripts can
be found at [34], [35] and [36].
TABLE V: BWA Alignment (Sequential)
| | |
---|---|---|---
Alignment Type | Containers | BWA Alignment (seconds) | Slowdown
Single-end | Non Secure | 91 | 1
SCONE | 3291 | 36.1
Graphene | 10603 | 117
Paired-end | Non Secure | 15423 | 1
SCONE | $>$173K | $>$41
Graphene | $>$173K | $>$41
| | |
A single-end reads file has 309K reads and each read is 100bp long. A pair end
of reads file has 24M reads each. Non secure refers to BWA execution w/o SGX.
TABLE VI: Comparison of HySec-Flow against Scone or Graphene for Running BWA | | | | | |
---|---|---|---|---|---|---
Number of Partitions | SCONE (Sequential) Total time (s) | HySec-Flow parallel SCONE Total time (s) | Speedup | Graphene (Sequential) Total time (s) | HySec-Flow parallel Graphene Total time (s) | Speedup
1 | 3291 | 3227.48 | 1.02 | 10603 | 11025.41 | 0.96
10 | 3291 | 58.27 | 56.47 | 10603 | 412.66 | 25.7
20 | 3291 | 36.68 | 89.71 | 10603 | 300.9 | 35.23
40 | 3291 | 21.34 | 154.19 | 10603 | 245.84 | 43.13
80 | 3291 | 15.52 | 211.94 | 10603 | 217.23 | 48.80
| | | | | |
(a) Total Secure Execution Time
(Dispatch, Alignment and Merge) (b) Dispatch (Sequential) (c) Dispatch
(Parallel) (d) Non-Secure Execution Time
(Partitioning, Indexing, and Bloom Filter) (e) Alignment (f) Merge
Figure 9: Comparison of the HySec-Flow execution time of Scone and Graphene in
different stages.
The file protection features of both Scone and Graphene are configured and
enabled, so the input fastq files and output SAM files are secured. The
overhead is measured against the non-SGX approach from Table V. The speedup
can be determined against the SGX Scone solution from the same table.
As shown in Table V, running BWA directly without SGX is fast, but data
privacy is not protected. Another approach is to run BWA within a Scone
container. This provides an easy way to utilize SGX, but the performance
penalty is severe due to the EPC size limitation of SGX and the frequent page
swapping when dealing with big data: we observe roughly a 40x-50x slowdown
compared to the non-SGX setup. Graphene-SGX performs worse than Scone because
loading more components, including the whole LibOS, into the enclave causes
additional paging overhead.
Although Bloom filter generation is mostly a one-time operation, taking that
time into account increases the execution times by only 1316 seconds. The
best-case HySec-Flow execution time (15.52 seconds) represents a 6x speedup
over non-SGX execution (91 seconds) and a 212x speedup over Scone execution
(3291 seconds), respectively. The total execution times and the variation of
speedups for the other parallelism configurations (10, 20, 40, 80 partitions)
are shown in Table VI and illustrated in Fig. 10.
Figure 10: Speedup of HySec-Flow over Scone and Graphene
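As a sanity check, the slowdown and speedup figures in Tables V and VI are
plain ratios of wall-clock times; a minimal sketch with values copied from the
tables:

```python
def slowdown(secure_s: float, baseline_s: float) -> float:
    """How many times slower the secure run is than the non-SGX baseline."""
    return secure_s / baseline_s

def speedup(sequential_s: float, parallel_s: float) -> float:
    """How many times faster the parallel run is than the sequential one."""
    return sequential_s / parallel_s

# Values copied from Tables V and VI (single-end reads, 80 partitions).
scone_slowdown = slowdown(3291, 91)    # ~36.2x, reported as 36.1
hysec_vs_scone = speedup(3291, 15.52)  # ~212x, reported as 211.94
hysec_vs_plain = speedup(91, 15.52)    # ~5.9x, reported as 6x
```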
## V Related Work
DIDA [24] is a distributed indexing-dispatched alignment framework for reads
mapping. Our approach is inspired by the DIDA framework but takes data privacy
fully into consideration: computation involving sensitive data is executed in
the SGX enclave, and the sensitive data remain encrypted outside the enclave.
We use customized data and computation partitioning to split the human
reference genome sequence into small segments so that each reads-mapping
subtask does not consume much memory, which yields better performance when
running inside an enclave. In contrast, the original DIDA framework only
supports a small number of subtask partitions, each comprising a long
reference sequence spanning a whole chromosome.
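The partitioning idea can be illustrated with a toy sketch (the segment length
and overlap below are illustrative parameters, not the values used by
HySec-Flow); each segment overlaps its neighbor so that reads spanning a
segment boundary are not lost:

```python
def partition_reference(seq: str, seg_len: int, overlap: int) -> list[str]:
    """Split a reference sequence into overlapping segments so that each
    alignment subtask only needs a small index in enclave memory."""
    if overlap >= seg_len:
        raise ValueError("overlap must be smaller than segment length")
    step = seg_len - overlap
    segments = []
    for start in range(0, len(seq), step):
        segments.append(seq[start:start + seg_len])
        if start + seg_len >= len(seq):
            break
    return segments

# Toy 100 bp reference split into 40 bp segments with 10 bp overlap.
segments = partition_reference("ACGT" * 25, seg_len=40, overlap=10)
```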
Scone [13] provides an easy-to-use container environment that can run
executable code within the SGX enclave. We use Scone to run the individual
alignment worker program in HySec-Flow. However, executing code in enclaves
using Scone alone may introduce significant performance overhead, since data
access is not optimized for the limited SGX EPC space. Our proposed framework
addresses this issue by splitting data into smaller segments and running
multiple jobs either sequentially in a single enclave or in parallel across
multiple enclaves.
Graphene-SGX is a practical library OS for running unmodified code on SGX. It
uses the Graphene LibOS [37] as its inner core to support binary-code
compatibility [38]. The enclave consists of the application to be protected
linked with the library OS. Graphene-SGX executes an application from a
manifest file that describes, among other things, the set of libraries the
application uses. Compared to Scone, Graphene provides more flexible
configuration of multithreading support.
Although existing SGX-based secure computing approaches often treat side
channels as an orthogonal research topic [39, 40, 31], side channels pose
serious threats to secure computing with SGX, since attackers can use them to
circumvent the explicit security defenses SGX implements. A rich literature
has focused on discovering SGX side channels [20, 21, 26, 27]. Notably,
HySec-Flow is also vulnerable to such threats. Fortunately, most known side
channels in SGX-based computation can be detected or mitigated using various
defense strategies [28, 29, 23, 30, 22].
## VI Conclusion
We have introduced an architecture for an end-to-end workflow of privacy-
preserving genomic data analytics using Intel’s SGX technology. We use the
reads mapping application (specifically the commonly used BWA algorithm) to
showcase the usability and performance of the framework. The naive Scone
solution achieves only a modest performance improvement on a single node, even
when using the partition and dispatch methods. HySec-Flow makes it possible to
run in parallel on multiple nodes while remaining secure. When tested with
single-end reads-mapping tasks, we observed a speedup of up to 212x (for 80
partitions) compared to the naive approach of directly executing BWA within
the Scone framework. The speedup comes mainly from process-level parallelism
together with the significantly reduced search space provided by the
Bloom-filter-based dispatch step.
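The dispatch step can be sketched as follows (a minimal Bloom filter, not the
production implementation; the bit-array size, hash construction, and k-mer
length are illustrative): reads sharing no k-mer with a segment's filter can
skip that segment entirely, shrinking the search space.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: membership test with possible false positives
    but no false negatives."""
    def __init__(self, num_bits: int = 1 << 16, num_hashes: int = 3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, item: str):
        # Derive independent bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

def kmers(seq: str, k: int = 5):
    return (seq[i:i + k] for i in range(len(seq) - k + 1))

# Index a toy reference segment, then dispatch a read only if it shares
# at least one k-mer with the segment.
bf = BloomFilter()
for km in kmers("ACGTACGTTTGACCA"):
    bf.add(km)
dispatch = any(km in bf for km in kmers("CGTTTGA"))  # read overlaps segment
```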
We stress that HySec-Flow can be easily adapted to many genomics applications
whose algorithms are pleasingly data-parallel, e.g., genome variation calling
[41, 9], gene expression analysis using RNA-seq data [42], and peptide
identification in clinical proteomics [43]. However, in each of these cases,
we need to devise a customized data-partition algorithm that assembles subsets
of the input data for subtasks so that the subtasks are performed most
efficiently.
## VII Future Work
Because the workloads are pleasingly parallel, the HySec-Flow framework can be
extended to handle multiple search tasks from different users by adding a new
'driver' component that securely accepts jobs from users and assigns
containers on demand from a heterogeneous pool of containers.
In future work, we will further integrate Harp [44, 45, 46, 47, 48, 49, 50,
51], a framework that utilizes MPI-style collective communication to handle
big data across the nodes of a cluster in an HPC-cloud environment, to support
SGX-enabled machine learning applications.
The HySec-Flow framework has been designed to support non-secure tasks, secure
tasks written directly against the Intel SGX API, and secure tasks on Scone or
Graphene. Hence hybrid (secure/non-secure) workflows beyond genome sequencing
can be ported into the framework and scaled out using a programmable API [18].
Reads mapping is a large, data-intensive computing task
compared to previously developed SGX-based solutions (e.g., variant searching
and GWAS). Therefore, the framework presented here can be extended to
implement privacy-preserving algorithms for other data-intensive genome
computing tasks such as genome variation calling [52] and gene expression
analyses [53] in future work.
## VIII Acknowledgment
This work is partially supported by NSF grant No. 1838083 on BIGDATA: IA:
Enabling Large-Scale, Privacy-Preserving Genomic Computing with a Hardware-
Assisted Secure Big-Data Analytics Framework, NSF grant CCF-1918626
Expeditions: Collaborative Research: Global Pervasive Computational
Epidemiology, NSF grant No. 1835631 CINES: A Scalable Cyberinfrastructure for
Sustained Innovation in Network Engineering and Science, and NIH R01HG010798:
Secure and Privacy-preserving Genome-wide and Phenome-wide Association Studies
via Intel Software Guard Extensions (SGX). We appreciate technical support
from Intel Inc. and would like to thank Robert Henderson and the system team
for their assistance with our experiments on the SGX cluster.
## References
* [1] M. Gymrek, A. L. McGuire, D. Golan, E. Halperin, and Y. Erlich, “Identifying personal genomes by surname inference,” _Science_ , vol. 339, no. 6117, pp. 321–324, 2013.
* [2] C. Fontaine and F. Galand, “A survey of homomorphic encryption for nonspecialists,” _EURASIP Journal on Information Security_ , vol. 2007, p. 15, 2007.
* [3] I. Anati, S. Gueron, S. Johnson, and V. Scarlata, “Innovative technology for cpu based attestation and sealing,” in _Proceedings of the 2nd international workshop on hardware and architectural support for security and privacy_ , vol. 13. Citeseer, 2013, p. 7.
* [4] M. Russinovich, “Introducing Azure confidential computing,” _Seattle, WA: Microsoft_ , 2017.
* [5] F. Shaon, M. Kantarcioglu, Z. Lin, and L. Khan, “Sgx-bigmatrix: A practical encrypted data analytic framework with trusted processors,” in _Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security_ , 2017, pp. 1211–1228.
* [6] F. Chen, C. Wang, W. Dai, X. Jiang, N. Mohammed, M. M. Al Aziz, M. N. Sadat, C. Sahinalp, K. Lauter, and S. Wang, “Presage: privacy-preserving genetic testing via software guard extension,” _BMC medical genomics_ , vol. 10, no. 2, pp. 77–85, 2017.
* [7] F. Chen, S. Wang, X. Jiang, S. Ding, Y. Lu, J. Kim, S. C. Sahinalp, C. Shimizu, J. C. Burns, V. J. Wright _et al._ , “Princess: Privacy-protecting rare disease international network collaboration via encryption through software guard extensions,” _Bioinformatics_ , vol. 33, no. 6, pp. 871–878, 2017.
* [8] S. Carpov and T. Tortech, “Secure top most significant genome variants search: idash 2017 competition,” _BMC medical genomics_ , vol. 11, no. 4, pp. 47–55, 2018.
* [9] A. Mandal, J. C. Mitchell, H. Montgomery, and A. Roy, “Data oblivious genome variants search on intel sgx,” in _Data Privacy Management, Cryptocurrencies and Blockchain Technology_. Springer, 2018, pp. 296–310.
* [10] C. Kockan, K. Zhu, N. Dokmai, N. Karpov, M. O. Kulekci, D. P. Woodruff, and S. C. Sahinalp, “Sketching algorithms for genomic data analysis and querying in a secure enclave,” _Nature methods_ , vol. 17, no. 3, pp. 295–301, 2020.
* [11] T. Pascoal, J. Decouchant, A. Boutet, and P. Esteves-Verissimo, “Dyps: Dynamic, private and secure gwas,” _Proceedings on Privacy Enhancing Technologies_ , 2021.
* [12] V. Tam, N. Patel, M. Turcotte, Y. Bossé, G. Paré, and D. Meyre, “Benefits and limitations of genome-wide association studies,” _Nature Reviews Genetics_ , vol. 20, no. 8, pp. 467–484, 2019.
* [13] S. Arnautov, B. Trach, F. Gregor, T. Knauth, A. Martin, C. Priebe, J. Lind, D. Muthukumaran, D. O’keeffe, M. L. Stillwell _et al._ , “SCONE: Secure Linux containers with Intel SGX,” in _12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16)_ , 2016, pp. 689–703.
* [14] C.-C. Tsai, D. E. Porter, and M. Vij, “Graphene-SGX: A practical library OS for unmodified applications on SGX,” in _2017 USENIX Annual Technical Conference (USENIX ATC 17)_ , 2017, pp. 645–658.
* [15] H. Li and R. Durbin, “Fast and accurate short read alignment with burrows–wheeler transform,” _bioinformatics_ , vol. 25, no. 14, pp. 1754–1760, 2009.
* [16] B. Langmead and S. L. Salzberg, “Fast gapped-read alignment with bowtie 2,” _Nature methods_ , vol. 9, no. 4, p. 357, 2012.
* [17] Y. Chen, B. Peng, X. Wang, and H. Tang, “Large-scale privacy-preserving mapping of human genomic sequences on hybrid clouds.” in _NDSS_ , 2012.
* [18] “Scalable and secure platform for hybrid task scheduling,” https://github.com/Data-ScienceHub/sgx-tasks, accessed: 2021-07-11.
* [19] V. Costan and S. Devadas, “Intel sgx explained.” _IACR Cryptol. ePrint Arch._ , vol. 2016, no. 86, pp. 1–118, 2016.
* [20] S. Lee, M.-W. Shih, P. Gera, T. Kim, H. Kim, and M. Peinado, “Inferring fine-grained control flow inside SGX enclaves with branch shadowing,” in _26th USENIX Security Symposium (USENIX Security 17)_ , 2017, pp. 557–574.
* [21] W. Wang, G. Chen, X. Pan, Y. Zhang, X. Wang, V. Bindschaedler, H. Tang, and C. A. Gunter, “Leaky cauldron on the dark land: Understanding memory side-channel hazards in sgx,” in _Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security_ , 2017, pp. 2421–2434.
* [22] G. Chen, W. Wang, T. Chen, S. Chen, Y. Zhang, X. Wang, T.-H. Lai, and D. Lin, “Racing in hyperspace: Closing hyper-threading side channels on sgx with contrived data races,” in _2018 IEEE Symposium on Security and Privacy (SP)_. IEEE, 2018, pp. 178–194.
* [23] O. Oleksenko, B. Trach, R. Krahn, M. Silberstein, and C. Fetzer, “Varys: Protecting SGX enclaves from practical side-channel attacks,” in _2018 USENIX Annual Technical Conference (USENIX ATC 18)_ , 2018, pp. 227–240.
* [24] H. Mohamadi, B. P. Vandervalk, A. Raymond, S. D. Jackman, J. Chu, C. P. Breshears, and I. Birol, “Dida: Distributed indexing dispatched alignment,” _PloS one_ , vol. 10, no. 4, p. e0126409, 2015.
* [25] “Scone file protection,” https://sconedocs.github.io/SCONE_Fileshield/, accessed: 2021-02-04.
* [26] J. Van Bulck, M. Minkin, O. Weisse, D. Genkin, B. Kasikci, F. Piessens, M. Silberstein, T. F. Wenisch, Y. Yarom, and R. Strackx, “Foreshadow: Extracting the keys to the Intel SGX kingdom with transient out-of-order execution,” in _27th USENIX Security Symposium (USENIX Security 18)_ , 2018, pp. 991–1008.
* [27] G. Chen, S. Chen, Y. Xiao, Y. Zhang, Z. Lin, and T. H. Lai, “Sgxpectre: Stealing intel secrets from sgx enclaves via speculative execution,” in _2019 IEEE European Symposium on Security and Privacy (EuroS&P)_. IEEE, 2019, pp. 142–157.
* [28] S. Shinde, Z. L. Chua, V. Narayanan, and P. Saxena, “Preventing page faults from telling your secrets,” in _Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security_ , 2016, pp. 317–328.
* [29] M.-W. Shih, S. Lee, T. Kim, and M. Peinado, “T-sgx: Eradicating controlled-channel attacks against enclave programs.” in _NDSS_ , 2017.
* [30] R. Sinha, S. Rajamani, and S. A. Seshia, “A compiler and verifier for page access oblivious computation,” in _Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering_ , 2017, pp. 649–660.
* [31] Y. Shen, H. Tian, Y. Chen, K. Chen, R. Wang, Y. Xu, Y. Xia, and S. Yan, “Occlum: Secure and efficient multitasking inside a single enclave of intel sgx,” in _Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems_ , 2020, pp. 955–970.
* [32] J. Van Bulck and F. Piessens, “Tutorial: Uncovering and mitigating side-channel leakage in intel sgx enclaves,” in _Proceedings of the 8th International Conference on Security, Privacy, and Applied Cryptography Engineering (SPACE’18)_. Springer, 2018.
* [33] N. Siva, “1000 genomes project,” 2008.
* [34] “BWA using Scone,” https://github.com/dsc-sgx/bwa-sgx-scone, accessed: 2021-02-05.
* [35] “BWA using Graphene-SGX,” https://github.com/StanPlatinum/graphene-bwa, accessed: 2021-06-19.
* [36] “Containerized dida & bwa on scone,” https://github.com/Data-ScienceHub/scone-dida-bwa, accessed: 2021-06-20.
* [37] C.-C. Tsai, K. S. Arora, N. Bandi, B. Jain, W. Jannen, J. John, H. A. Kalodner, V. Kulkarni, D. Oliveira, and D. E. Porter, “Cooperation and security isolation of library oses for multi-process applications,” in _Proceedings of the Ninth European Conference on Computer Systems_ , 2014, pp. 1–14.
* [38] K. Shanker, A. Joseph, and V. Ganapathy, “An evaluation of methods to port legacy code to sgx enclaves,” in _Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering_ , 2020, pp. 1077–1088.
* [39] R. Sinha, S. Rajamani, S. Seshia, and K. Vaswani, “Moat: Verifying confidentiality of enclave programs,” in _Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security_. ACM, 2015, pp. 1169–1184.
* [40] P. Subramanyan, R. Sinha, I. Lebedev, S. Devadas, and S. A. Seshia, “A Formal Foundation for Secure Remote Execution of Enclaves,” in _Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security_. ACM, 2017, pp. 2435–2450.
* [41] C. Lambert, M. Fernandes, J. Decouchant, and P. Esteves-Verissimo, “Maskal: Privacy preserving masked reads alignment using intel sgx,” in _2018 IEEE 37th Symposium on Reliable Distributed Systems (SRDS)_. IEEE, 2018, pp. 113–122.
* [42] K. V. Prasad, A. A. Abdel-Hameed, D. Xing, and A. S. Reddy, “Global gene expression analysis using rna-seq uncovered a new role for sr1/camta3 transcription factor in salt stress,” _Scientific reports_ , vol. 6, no. 1, pp. 1–15, 2016.
* [43] S. Decramer, A. G. de Peredo, B. Breuil, H. Mischak, B. Monsarrat, J.-L. Bascands, and J. P. Schanstra, “Urine in clinical proteomics,” _Molecular & cellular proteomics_ , vol. 7, no. 10, pp. 1850–1862, 2008.
* [44] B. Zhang, Y. Ruan, and J. Qiu, “Harp: Collective communication on hadoop,” in _2015 IEEE International Conference on Cloud Engineering_. IEEE, 2015, pp. 228–233.
* [45] B. Zhang, B. Peng, and J. Qiu, “High performance lda through collective model communication optimization,” _Procedia Computer Science_ , vol. 80, pp. 86–97, 2016.
* [46] L. Chen, B. Peng, B. Zhang, T. Liu, Y. Zou, L. Jiang, R. Henschel, C. Stewart, Z. Zhang, E. Mccallum _et al._ , “Benchmarking harp-daal: High performance hadoop on knl clusters,” in _2017 IEEE 10th International Conference on Cloud Computing (CLOUD)_. IEEE, 2017, pp. 82–89.
* [47] B. Peng, B. Zhang, L. Chen, M. Avram, R. Henschel, C. Stewart, S. Zhu, E. Mccallum, L. Smith, T. Zahniser _et al._ , “Harplda+: Optimizing latent dirichlet allocation for parallel efficiency,” in _2017 IEEE International Conference on Big Data (Big Data)_. IEEE, 2017, pp. 243–252.
* [48] B. Peng, L. Chen, J. Li, M. Jiang, S. Akkas, E. Smirnov, R. Israfilov, S. Khekhnev, A. Nikolaev, and J. Qiu, “Harpgbdt: Optimizing gradient boosting decision tree for parallel efficiency,” in _2019 IEEE International Conference on Cluster Computing (CLUSTER)_. IEEE, 2019, pp. 1–11.
* [49] B. Zhang, B. Peng, and J. Qiu, “Model-centric computation abstractions in machine learning applications,” in _Proceedings of the 3rd ACM SIGMOD Workshop on Algorithms and Systems for MapReduce and Beyond_ , 2016, pp. 1–4.
* [50] L. Chen, J. Li, C. Sahinalp, M. Marathe, A. Vullikanti, A. Nikolaev, E. Smirnov, R. Israfilov, and J. Qiu, “Subgraph2vec: Highly-vectorized tree-like subgraph counting,” in _2019 IEEE International Conference on Big Data (Big Data)_. IEEE, 2019, pp. 483–492.
* [51] B. Peng, J. Li, S. Akkas, T. Araki, O. Yoshiyuki, and J. Qiu, “Rank position forecasting in car racing,” in _2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS)_. IEEE, 2021, pp. 724–733.
* [52] P. Consortium, “A map of human genome variation from population-scale sequencing,” _Nature_ , vol. 467, no. 7319, p. 1061, 2010.
* [53] M. Alarcón, B. S. Abrahams, J. L. Stone, J. A. Duvall, J. V. Perederiy, J. M. Bomar, J. Sebat, M. Wigler, C. L. Martin, D. H. Ledbetter _et al._ , “Linkage, association, and gene-expression analyses identify cntnap2 as an autism-susceptibility gene,” _The American Journal of Human Genetics_ , vol. 82, no. 1, pp. 150–159, 2008.
# Twisted Hilbert schemes and division algebras
Eoin Mackall eoinmackall _at_ gmail.com www.eoinmackall.com
###### Abstract.
Let $\mathscr{X}/S$ be any Severi–Brauer scheme of constant relative dimension
$n$ over a Noetherian base scheme $S$. For each polynomial
$\phi(t)\in\mathbb{Q}[t]$, we construct a scheme
$\mathrm{Hilb}_{\phi(t)}^{\mathrm{tw}}(\mathscr{X}/S)$ that étale locally, on
a cover $S^{\prime}/S$ splitting $\mathscr{X}/S$, is the Hilbert scheme
$\mathrm{Hilb}_{\phi(t)}(\mathscr{X}_{S^{\prime}}/S^{\prime})$ of the
projective bundle $\mathscr{X}_{S^{\prime}}/S^{\prime}$.
We then study curves of small degree on a Severi–Brauer variety in order to
analyze examples. Our primary interest, in the case $X$ is a Severi–Brauer
variety with index $n>1$ over a field $k$, is the subscheme
$\mathrm{Ell}_{n}(X)$ of $\mathrm{Hilb}^{\mathrm{tw}}_{nt}(X/k)$ parametrizing
curves that are smooth, geometrically connected, and of genus 1.
###### Key words and phrases:
Hilbert schemes; division algebras
###### 1991 Mathematics Subject Classification:
14C05; 16K20
## 1\. Introduction
This work originated from the idea that one could study deformations of curves
on a Severi–Brauer variety to obtain algebraic information on the structure of
the associated central simple algebra. Specifically, this work was an attempt
to implement the following program:
1. (Step 1)
for each polynomial $\phi(t)$ of $\mathbb{Q}[t]$, construct a variant of the
Hilbert scheme parametrizing closed subschemes of a Severi–Brauer variety $X$
with Hilbert polynomial $\phi(t)$ geometrically;
2. (Step 2)
for a fixed polynomial $\phi(t)=rt+s$, classify possible curves $C\subset X$
defined over the ground field with Hilbert polynomial $\phi(t)$ over a
splitting field for $X$;
3. (Step 3)
using a bend-and-break type argument, deform an irreducible curve $C\subset X$
with this Hilbert polynomial to a curve $C^{*}\subset X$ having the same
Hilbert polynomial, and which, additionally, is geometrically a union of
rational curves;
4. (Step 4)
study the action of the absolute Galois group on the geometric irreducible
components of $C^{*}$ to obtain specific restrictions on the possible Galois
splitting fields of $X$.
In this paper, we complete (Step 1) in much broader generality and we provide
some initial analysis in (Step 2) for specific cases pertaining to curves of
minimal degree in a Severi–Brauer variety.
To be precise, we prove in Section 2 (culminating in Theorem 2.5) that for any
polynomial $\phi(t)\in\mathbb{Q}[t]$, and for any Severi–Brauer scheme
$\mathscr{X}/S$ of relative dimension $n$ over a Noetherian base scheme $S$,
there exists a scheme $\mathrm{Hilb}_{\phi(t)}^{\mathrm{tw}}(\mathscr{X}/S)$
parametrizing subschemes of $\mathscr{X}$ that are flat and proper over $S$
with Hilbert polynomial $\phi(t)$ over any étale splitting $S^{\prime}/S$ for
$\mathscr{X}/S$. It turns out that, with only minor changes, one can adapt the
proof of representability for the usual Hilbert scheme of a projective bundle
to this generalized setting of Severi–Brauer schemes.
We then turn, in Section 3, to the study of those Hilbert schemes
$\mathrm{Hilb}_{\phi(t)}^{\mathrm{tw}}(X/k)$, for a Severi–Brauer variety $X$
over a fixed field $k$, that are associated to linear polynomials $\phi(t)$,
i.e. to those Hilbert schemes parametrizing subschemes consisting of either
curves or curves-and-points. In some cases of minimal degree subschemes, we
get very precise information (e.g. in Example 3.11 we give a satisfying
picture for the components of $\mathrm{Hilb}_{5t}^{\mathrm{tw}}(X/k)$, for a
Severi–Brauer variety $X$ associated to a degree $5$ division algebra, that
can possibly have a rational point).
Of particular interest, to the author, is the subscheme $\mathrm{Ell}_{n}(X)$
of $\mathrm{Hilb}_{nt}^{\mathrm{tw}}(X/k)$ parametrizing smooth and
geometrically connected curves of genus one in a Severi–Brauer variety $X$ of
index $n$. When $X$ contains no twisted linear subvarieties, i.e. when $X$ is
associated to a division algebra, we observe (in Theorem 3.16) that
$\mathrm{Ell}_{n}(X)$ is geometrically irreducible of dimension $n^{2}$. For
the program outlined above, we also observe that, when $X$ is associated to a
division algebra of prime index $n=p>2$, the only geometrically reducible
curves $C\subset X$ appearing in the same component $\mathrm{Ell}_{n}(X)$ and
defined over the ground field are, geometrically, $p$-gons of lines (Lemma
3.12).
The relevance of this latter observation to the program outlined above was, in
the author’s eyes, critical. Indeed, under the assumption that one could carry
out (Step 3) of the above program to obtain such a curve $C^{*}$ defined over
the ground field, it would follow that one could deform a smooth curve
$C\subset X$ of degree $n$ and of genus $1$ (if one exists) to a curve $C^{*}$
containing a point $x$ of minimal degree and splitting $X$. The Galois closure
$E/k(x)$ of the residue field $k(x)$ of $x$ would then have Galois group
admitting a quotient that is a transitive subgroup of the automorphism group
of the $p$-gon $C^{*}_{E}$, i.e. either the cyclic group
$\mathbb{Z}/p\mathbb{Z}$ or the full dihedral group $D_{p}$. With some minimal
assumptions, this implies that the underlying division algebra is cyclic (i.e.
if $p\leq 7$, if the characteristic of $k$ is zero, and if $k$ contains a
$p$th root of unity, see [RS96, Theorem 4.2]).
The largest obstacle to implementing the above program is (Step 3). It’s often
possible to find a deformation of a given irreducible curve $C\subset X$, over
a rational curve defined over the ground field $k$, where one of the fibers is
a geometrically reducible curve having the same Hilbert polynomial over an
algebraic closure of $k$. However, it’s been challenging, for the author, to
show that the geometrically reducible curve one obtains is also defined over
the ground field $k$. This depends on how precisely one can control the
deformations of $C$.
This idea represents one step towards a converse to an avenue of current
research in this area. That is to say, there has been a lot of research done
towards determining when a Severi–Brauer variety either contains or, somewhat
weaker, is split by a smooth and geometrically connected curve of genus 1: in
some cases of small index [dJH12], if one fixes a Severi–Brauer surface and
attempts to determine all such curves [Sal21], if one asks instead about
splitting Brauer classes [CK12], or if one asks instead about splitting
$\mu_{n}$-gerbes [AA21]. One theme present in a handful of this work is
starting from structural assumptions on the underlying central simple algebra
(e.g. assuming that it is cyclic) and proceeding from there. Here we’ve
attempted (mostly unsuccessfully) to do the opposite.
Notation. We use the following notation throughout:
* •
if $k$ is a base field, then we write $\overline{k}$ to denote a fixed
algebraic closure of $k$ and $k^{s}$ to denote the separable closure of $k$
inside $\overline{k}$
Conventions. We use the following conventions throughout:
* •
a variety is an integral scheme that is separated and of finite type over a
base field
* •
a curve is a proper scheme of pure dimension one that is separated and of
finite type over a base field.
Acknowledgments. I’d like to thank both Nitin Chidambaram and Priyankur
Chaudhuri for our frequent meetings discussing the Hilbert scheme where I
learned most of the techniques contained in this paper. I’d also like to thank
Patrick Brosnan for stimulating conversations that gave me both the ideas and
motivation needed to start this work.
## 2\. Descent for Hilbert Schemes
Let $\mathscr{X}/S$ be a Severi–Brauer scheme of relative dimension $n$ over a
Noetherian scheme $S$. Concretely, this means there exists an étale cover
$S^{\prime}=\\{S_{i}\\}_{i\in I}$ of $S$ and isomorphisms
$\mathscr{X}_{S_{i}}=\mathscr{X}\times_{S}S_{i}\cong\mathbb{P}^{n}_{S_{i}}$.
We call the data consisting of an étale cover $S^{\prime}$ and isomorphisms
$\epsilon_{i}:\mathscr{X}_{S_{i}}\rightarrow\mathbb{P}^{n}_{S_{i}}$ a
splitting of $\mathscr{X}/S$. The splitting data $(S_{i},\epsilon_{i})_{i\in
I}$ of $\mathscr{X}/S$ determines a Čech $1$-cocycle $\xi$ giving rise to a
class in $\check{\mathrm{H}}^{1}_{\acute{e}t}(S,\mathrm{PGL}_{n+1/S})$.
Conversely, descent shows that every element $\xi$ of
$\check{\mathrm{H}}^{1}_{\acute{e}t}(S,\mathrm{PGL}_{n+1/S})$ is determined by
the splitting data $(S_{i},\epsilon_{i})_{i\in I}$, over some étale cover
$S^{\prime}=\\{S_{i}\\}_{i\in I}$ of $S$, of some Severi–Brauer scheme
$\mathscr{X}/S$ that is uniquely determined by $\xi$ up to isomorphism. For
each Čech 1-cocycle $\xi$ one can then choose splitting data
$(S_{i},\epsilon_{i})_{i\in I}$ and, for a polynomial
$\phi(t)\in\mathbb{Q}[t]$, descend the Hilbert schemes
$\mathrm{Hilb}_{\phi(t)}(\mathbb{P}^{n}_{S_{i}}/S_{i})$ defined over $S_{i}$
to a scheme $\mathrm{Hilb}^{\text{tw}}_{\phi(t)}(\mathscr{X}/S)$ defined over
$S$.
The scheme $\mathrm{Hilb}^{\text{tw}}_{\phi(t)}(\mathscr{X}/S)$ represents the
functor associating to any locally Noetherian $S$-scheme $T$ the set of all
subschemes of $\mathscr{X}_{T}$ which are flat and proper over $T$ and which,
locally for the étale cover $S^{\prime}/S$, have Hilbert polynomial $\phi(t)$.
The goal for this section is to prove the representability of
$\mathrm{Hilb}^{\text{tw}}_{\phi(t)}(\mathscr{X}/S)$, done in Theorem 2.5, by
extending the construction of the Hilbert scheme of a projective bundle, e.g.
as given in [Kol96], so that it also provides a construction of the scheme
$\mathrm{Hilb}^{\text{tw}}_{\phi(t)}(\mathscr{X}/S)$ for any Severi–Brauer
scheme $\mathscr{X}/S$.
To start, recall from [Qui73, §8.4] that Quillen has constructed a universal
vector bundle $\mathcal{J}$ on the Severi–Brauer scheme $\mathscr{X}/S$ having
the following property: locally for an étale cover $S^{\prime}/S$ splitting
$\mathscr{X}/S$, $\mathcal{J}$ admits isomorphisms
$\mathcal{J}|_{S_{i}}\cong\mathcal{O}_{\mathbb{P}^{n}_{S_{i}}}(-1)^{\oplus
n+1}\quad\mbox{for each }S_{i}\in S^{\prime}$
compatible with the isomorphisms
$\mathscr{X}_{S_{i}}\cong\mathbb{P}^{n}_{S_{i}}$ of the splitting. We write
$\mathcal{Q}=\mathcal{J}^{\vee}=\mathcal{H}om(\mathcal{J},\mathcal{O}_{\mathscr{X}})$
to denote the dual of $\mathcal{J}$ and we call $\mathcal{Q}$ the Quillen
bundle on the Severi–Brauer scheme $\mathscr{X}/S$.
###### Lemma 2.1.
Suppose that $S$ is connected and write $\pi:\mathscr{X}\rightarrow S$ for the
structure map of $\mathscr{X}/S$. Let $\mathcal{F}$ be an $S$-flat coherent
sheaf on $\mathscr{X}$. Then there exists a numerical polynomial
$\phi(t)\in\mathbb{Q}[t]$ and an integer $N$ so that the following equality
holds
$\mathrm{rk}(\pi_{*}(\mathcal{F}\otimes\mathcal{Q}^{\otimes
t}))=\phi(t)\cdot\mathrm{rk}(\mathcal{Q}^{\otimes t})$
for all integers $t\geq N$.
###### Proof.
Let $S^{\prime}=\\{S_{i}\\}_{i\in I}$ be an étale cover splitting
$\mathscr{X}/S$ and write $\pi_{i}:\mathscr{X}_{S_{i}}\rightarrow S_{i}$ for
the map coming from base change. Then, for all $t\geq 1$, there are isomorphisms
$\pi_{*}(\mathcal{F}\otimes\mathcal{Q}^{\otimes
t})|_{S_{i}}\cong\pi_{i*}(\mathcal{F}|_{S_{i}}\otimes(\mathcal{O}_{\mathbb{P}^{n}_{S_{i}}}(1)^{\oplus
n+1})^{\otimes t})\cong\pi_{i*}(\mathcal{F}|_{S_{i}}(t)^{\oplus(n+1)^{t}}).$
Since
$\pi_{i*}(\mathcal{F}|_{S_{i}}(t)^{\oplus(n+1)^{t}})\cong\pi_{i*}(\mathcal{F}|_{S_{i}}(t))^{\oplus(n+1)^{t}}$,
the $\phi(t)$ of the lemma is necessarily the Hilbert polynomial of
$\mathcal{F}|_{S_{i}}$ on $\mathscr{X}_{S_{i}}\cong\mathbb{P}^{n}_{S_{i}}$. ∎
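Unwinding the ranks in the proof (a routine check, recorded here for
convenience): since
$\mathcal{Q}|_{S_{i}}\cong\mathcal{O}_{\mathbb{P}^{n}_{S_{i}}}(1)^{\oplus
n+1}$, the bundle $\mathcal{Q}^{\otimes t}$ has rank $(n+1)^{t}$, and étale
locally
$\mathrm{rk}(\pi_{*}(\mathcal{F}\otimes\mathcal{Q}^{\otimes
t}))|_{S_{i}}=\mathrm{rk}(\pi_{i*}(\mathcal{F}|_{S_{i}}(t))^{\oplus(n+1)^{t}})=h_{\mathcal{F}|_{S_{i}}}(t)\cdot(n+1)^{t}\quad\mbox{for
}t\gg 0,$
so that $\phi(t)=h_{\mathcal{F}|_{S_{i}}}(t)$, independently of $i$ because
$S$ is connected.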
###### Definition 2.2.
Let $\mathscr{X}/S$ be a Severi–Brauer scheme over a base $S$. Let
$\mathcal{F}$ be an $S$-flat coherent sheaf on $\mathscr{X}$. For each
connected component $S_{\rho}\subset S$ we define the reduced Hilbert
polynomial of $\mathcal{F}$ on $S_{\rho}$ to be the numerical polynomial
$\mathrm{rh}_{\mathcal{F}}(t)\in\mathbb{Q}[t]$ guaranteed to exist by Lemma
2.1. In other words, $\mathrm{rh}_{\mathcal{F}}(t)$ is uniquely characterized
by the existence of an integer $N\geq 0$ and equality
$\mathrm{rk}(\pi_{*}(\mathcal{F}\otimes\mathcal{Q}^{\otimes
t})|_{S_{\rho}})=\mathrm{rh}_{\mathcal{F}}(t)\cdot\mathrm{rk}(\mathcal{Q}^{\otimes
t})\quad\mbox{for all $t\geq N$.}$
If the reduced Hilbert polynomial of $\mathcal{F}$ on $S_{\rho}$ is equal to
$\mathrm{rh}_{\mathcal{F}}(t)$ for all connected components $S_{\rho}\subset
S$, then we call $\mathrm{rh}_{\mathcal{F}}(t)$ the reduced Hilbert polynomial
of $\mathcal{F}$. When $\mathcal{F}=\mathcal{O}_{V}$ is the structure sheaf of
a subscheme $V\subset\mathscr{X}$ we write $\mathrm{rh}_{V}(t)$ instead of
$\mathrm{rh}_{\mathcal{O}_{V}}(t)$.
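As an illustration (a standard computation, stated here for orientation): if
$V\subset X$ is a curve in a Severi–Brauer variety over a field $k$ which,
after base change to a splitting field, has degree $d$ and arithmetic genus
$g$, then
$\mathrm{rh}_{V}(t)=dt+1-g.$
In particular, a smooth and geometrically connected curve of genus $1$ and
degree $n$ has $\mathrm{rh}_{V}(t)=nt$, the polynomial indexing the scheme
$\mathrm{Hilb}^{\mathrm{tw}}_{nt}(X/k)$ that contains $\mathrm{Ell}_{n}(X)$
from the introduction.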
###### Remark 2.3.
If $\mathscr{X}/S$ is a split Severi–Brauer scheme (i.e. if $\mathscr{X}/S$ is
isomorphic over $S$ with a projective bundle $\mathbb{P}_{S}(\mathcal{E})$ for
some vector bundle $\mathcal{E}$ on $S$) then, for any $S$-flat coherent sheaf
$\mathcal{F}$ on $\mathscr{X}$, the reduced Hilbert polynomial
$\mathrm{rh}_{\mathcal{F}}(t)$ is just the usual Hilbert polynomial
$h_{\mathcal{F}}(t)$ with respect to the line bundle
$\mathcal{O}_{\mathbb{P}_{S}(\mathcal{E})}(1)$.
###### Lemma 2.4.
Let $\mathscr{X}/S$ be a Severi–Brauer scheme over any scheme $S$. Let
$\mathcal{F}$ be a coherent sheaf on $\mathscr{X}$. Then for every polynomial
$\phi(t)\in\mathbb{Q}[t]$ there is a locally closed subscheme
$S_{\phi(t)}\subset S$ with the property:
* (f)
given a morphism $T\rightarrow S$, the pullback $\mathcal{F}_{T}$ on
$\mathscr{X}_{T}$ is flat over $T$ with reduced Hilbert polynomial
$\mathrm{rh}_{\mathcal{F}_{T}}(t)=\phi(t)$ if and only if $T\rightarrow S$
factors $T\rightarrow S_{\phi(t)}\subset S$.
###### Proof.
The lemma holds étale locally over the base $S$. More precisely, let
$S^{\prime}=\\{S_{i}\\}_{i\in I}$ be any étale cover splitting $\mathscr{X}/S$
with $I$ a finite set and let
$\epsilon_{i}:\mathscr{X}_{S_{i}}\rightarrow\mathbb{P}^{n}_{S_{i}}$ be
isomorphisms realizing the splitting. Write $T_{i}=T\times_{S}S_{i}$ and
$\mathcal{F}_{i}$ for the pullback of $\mathcal{F}$ to $\mathscr{X}_{T_{i}}$.
Then for each of the indices $i\in I$, there is a locally closed subscheme
$S_{i,\phi(t)}\subset S_{i}$ so that $\mathcal{F}_{i}$ is flat over $T_{i}$
with reduced Hilbert polynomial $\mathrm{rh}_{\mathcal{F}_{i}}(t)=\phi(t)$ if
and only if $T_{i}\rightarrow S_{i}$ factors $T_{i}\rightarrow
S_{i,\phi(t)}\subset S_{i}$. Because of Remark 2.3, the reduced Hilbert
polynomial $\mathrm{rh}_{\mathcal{F}_{i}}(t)$ is just the Hilbert polynomial
$h_{\epsilon_{i*}\mathcal{F}_{i}}(t)$ of $\epsilon_{i*}\mathcal{F}_{i}$, so the
existence of $S_{i,\phi(t)}$ follows from [Kol96, Theorem I.1.6], which
ultimately refers to [Mum66, Lecture 8].
To see that the lemma also holds over $S$, we note that it’s possible to
descend the $S_{i,\phi(t)}$ to a scheme $S_{\phi(t)}\subset S$ with
$S_{\phi(t)}\times_{S}S_{i}=S_{i,\phi(t)}$. Indeed, both of the schemes
$S_{i,\phi(t)}\times_{S}S_{j}$ and $S_{j,\phi(t)}\times_{S}S_{i}$ are uniquely
characterized as subschemes of $S_{i}\times_{S}S_{j}$ by the given property
with respect to the coherent sheaf
$\mathcal{F}_{i}|_{S_{i}\times_{S}S_{j}}\cong\mathcal{F}_{j}|_{S_{i}\times_{S}S_{j}}$
on $\mathscr{X}_{S_{i}\times_{S}S_{j}}$. As it’s clear that the cocycle
condition on any triple product $S_{i}\times_{S}S_{j}\times_{S}S_{k}$ is
satisfied, it follows that $S_{\phi(t)}$ exists as a scheme over $S$.
It remains to show that $S_{\phi(t)}$ has property (f). Both the flatness of
$\mathcal{F}_{T}$ and the computation for the reduced Hilbert polynomial
$\mathrm{rh}_{\mathcal{F}_{T}}(t)$ can be checked étale locally for the cover
$S^{\prime}/S$. The claim follows then from the construction of $S_{\phi(t)}$.
∎
For any locally Noetherian $S$-scheme $T$, write
$H^{\phi(t)}_{\mathscr{X}/S}(T)$ for the set
(1)
$H^{\phi(t)}_{\mathscr{X}/S}(T):=\left\\{V\subset\mathscr{X}_{T}\middle|\begin{array}[]{c}V\text{
is proper and flat over }T\\\ \text{and
}\mathrm{rh}_{V}(t)=\phi(t)\end{array}\right\\}.$
The association of $T$ to $H^{\phi(t)}_{\mathscr{X}/S}(T)$ defines a
contravariant functor from the category of locally Noetherian $S$-schemes to
the category of sets. For a morphism $\rho:T^{\prime}\rightarrow T$, the
associated map $H^{\phi(t)}_{\mathscr{X}/S}(T)\rightarrow
H^{\phi(t)}_{\mathscr{X}/S}(T^{\prime})$ sends a subscheme
$V\subset\mathscr{X}_{T}$ to
$V\times_{T}T^{\prime}\subset\mathscr{X}_{T^{\prime}}$ where the fiber product
is taken along the morphism $\rho$.
###### Theorem 2.5.
Let $\mathscr{X}/S$ be a Severi–Brauer scheme over a Noetherian base scheme
$S$. Then, for every polynomial $\phi(t)\in\mathbb{Q}[t]$, there exists an
$S$-scheme $\mathrm{Hilb}_{\phi(t)}^{\mathrm{tw}}(\mathscr{X}/S)$ which
represents the functor $H^{\phi(t)}_{\mathscr{X}/S}$ from (1).
In particular, there is a subscheme
$\mathrm{Univ}^{\mathrm{tw}}_{\phi(t)}(\mathscr{X}/S)\subset\mathscr{X}\times_{S}\mathrm{Hilb}_{\phi(t)}^{\mathrm{tw}}(\mathscr{X}/S)$
and, for any locally Noetherian $S$-scheme $T$, there is an equality
$\mathrm{Hom}_{S}(T,\mathrm{Hilb}_{\phi(t)}^{\mathrm{tw}}(\mathscr{X}/S))=H^{\phi(t)}_{\mathscr{X}/S}(T)$
where a map
$f:T\rightarrow\mathrm{Hilb}_{\phi(t)}^{\mathrm{tw}}(\mathscr{X}/S)$
corresponds to the subscheme
$V\cong\mathrm{Univ}^{\mathrm{tw}}_{\phi(t)}(\mathscr{X}/S)\times_{\mathscr{X}\times_{S}\mathrm{Hilb}_{\phi(t)}^{\mathrm{tw}}(\mathscr{X}/S)}T.$
###### Proof.
The proof we give here is, in essence, the same as [Kol96, Proof of Theorem
I.1.4]. We’re going to break the proof into several steps. First, we construct
an $S$-scheme $H$ together with a scheme $U\subset\mathscr{X}\times_{S}H$
which end up being $\mathrm{Hilb}_{\phi(t)}^{\mathrm{tw}}(\mathscr{X}/S)$ and
$\mathrm{Univ}^{\mathrm{tw}}_{\phi(t)}(\mathscr{X}/S)$ respectively. Once
we’ve constructed $H$ there will be an obvious functorial map
(2) $\mathrm{Hom}_{S}(T,H)\rightarrow H^{\phi(t)}_{\mathscr{X}/S}(T)$
defined as in the theorem statement. The next step will be to construct a map
in the other direction
(3) $H^{\phi(t)}_{\mathscr{X}/S}(T)\rightarrow\mathrm{Hom}_{S}(T,H).$
The proof will be complete once we show that these two maps are mutually
inverse.
Throughout the proof, we’ll refer to the following diagram.
$\begin{array}{ccccc}\mathscr{X}\times_{S}T&\xrightarrow{\ \tilde{\rho}_{\mathscr{X}}\ }&\mathscr{X}\times_{S}\mathscr{Y}&\xrightarrow{\ p_{1}\ }&\mathscr{X}\\ \downarrow{\scriptstyle\pi_{T}}&&\downarrow{\scriptstyle p_{2}}&&\downarrow{\scriptstyle\pi}\\ T&\xrightarrow{\ \tilde{\rho}\ }&\mathscr{Y}&\xrightarrow{\ \sigma\ }&S\end{array}$
Here $\rho_{\mathscr{X}}=p_{1}\circ\tilde{\rho}_{\mathscr{X}}:\mathscr{X}\times_{S}T\rightarrow\mathscr{X}$ and $\rho=\sigma\circ\tilde{\rho}:T\rightarrow S$.
For the first part of the proof, we use the following notation:
* •
$\pi:\mathscr{X}\rightarrow S$ is the $S$-structure map of the Severi–Brauer
scheme $\mathscr{X}/S$ of relative dimension $n$,
* •
$\phi(t)$ is a fixed polynomial from $\mathbb{Q}[t]$ and $N>0$ is an integer
(chosen to be divisible by $n+1$) so that $h^{i}(V,\mathcal{O}_{V}(N))=0$ for
any subscheme $V\subset\mathbb{P}^{n}$ with Hilbert polynomial $\phi(t)$
[Kol96, Theorem I.1.5],
* •
$\mathscr{Y}=\mathbf{Gr}_{S}(\phi(N),\pi_{*}\mathcal{L})$ is the Grassmannian
$S$-bundle of rank $\phi(N)$ quotient bundles of the locally free
$\pi_{*}\mathcal{L}$, where $\mathcal{L}=(\det\mathcal{Q})^{\otimes N/(n+1)}$
is the given tensor power of the determinant of the Quillen bundle, with
$S$-structure map $\sigma:\mathscr{Y}\rightarrow S$,
* •
and
$p_{1},p_{2}:\mathscr{X}\times_{S}\mathscr{Y}\rightarrow\mathscr{X},\mathscr{Y}$
are the first and second projections from the fiber product.
On $\mathscr{Y}$ there is a short exact sequence
$0\rightarrow\mathcal{U}\rightarrow\sigma^{*}\pi_{*}\mathcal{L}\rightarrow\mathcal{V}\rightarrow
0$
with $\mathcal{V}$ the universal quotient bundle of rank $\phi(N)$ and
$\mathcal{U}$ the universal subbundle. Pulling back to
$\mathscr{X}\times_{S}\mathscr{Y}$ we get a map
(4) $p_{2}^{*}\mathcal{U}\rightarrow
p_{2}^{*}\sigma^{*}\pi_{*}\mathcal{L}=p_{1}^{*}\pi^{*}\pi_{*}\mathcal{L}\rightarrow
p_{1}^{*}\mathcal{L}$
by composing with the $(\pi^{*},\pi_{*})$-adjunction map. Let $\mathcal{C}$ be
the cokernel of this composition. The projection
$p_{2}:\mathscr{X}\times_{S}\mathscr{Y}\rightarrow\mathscr{Y}$ realizes
$\mathscr{X}\times_{S}\mathscr{Y}$ as a Severi–Brauer scheme over
$\mathscr{Y}$ so that we can apply Lemma 2.4 to the sheaf
$\mathcal{C}\otimes(p_{1}^{*}\mathcal{L}^{\vee})$. In this way, we get a
subscheme $H\subset\mathscr{Y}$ fitting into a Cartesian diagram
$\begin{array}{ccc}\mathscr{X}\times_{S}H\cong\mathscr{X}\times_{S}\mathscr{Y}\times_{\mathscr{Y}}H&\xrightarrow{\ i^{\prime}\ }&\mathscr{X}\times_{S}\mathscr{Y}\\ \downarrow{\scriptstyle p_{2}^{\prime}}&&\downarrow{\scriptstyle p_{2}}\\ H&\xrightarrow{\ i\ }&\mathscr{Y}\end{array}$
so that $\mathcal{C}^{\prime}=i^{\prime*}(\mathcal{C}\otimes
p_{1}^{*}\mathcal{L}^{\vee})$ is flat over $H$ with reduced Hilbert polynomial
$\mathrm{rh}_{\mathcal{C}^{\prime}}(t)=\phi(t)$. Further, since
$\mathcal{C}^{\prime}$ is a quotient of
$\mathcal{O}_{\mathscr{X}\times_{S}H}$, the sheaf $\mathcal{C}^{\prime}$
defines a closed subscheme $U\subset\mathscr{X}\times_{S}H$ with
$\mathcal{O}_{U}=\mathcal{C}^{\prime}$.
Now let $\rho:T\rightarrow S$ be an arbitrary locally Noetherian $S$-scheme.
From the construction of $H$, any morphism $f:T\rightarrow H$ of $S$-schemes
produces an element of $H^{\phi(t)}_{\mathscr{X}/S}(T)$ by pulling back
$U\subset\mathscr{X}\times_{S}H$ along the induced
$f_{\mathscr{X}}:\mathscr{X}\times_{S}T\rightarrow\mathscr{X}\times_{S}H$.
This is the definition of (2).
Conversely, from any subscheme $V\subset\mathscr{X}\times_{S}T$ flat over $T$
with reduced Hilbert polynomial $\mathrm{rh}_{V}(t)=\phi(t)$ we can identify a
morphism $f:T\rightarrow H$ as follows. For this part of the proof, we use
additionally:
* •
$\rho_{\mathscr{X}}:\mathscr{X}\times_{S}T\rightarrow\mathscr{X}$ is the first
projection from the fiber product $\mathscr{X}\times_{S}T$ taken with respect
to $\rho$,
* •
and $\pi_{T}:\mathscr{X}\times_{S}T\rightarrow T$ is the second projection.
Tensoring the ideal sheaf sequence for $V$ with
$\rho_{\mathscr{X}}^{*}\mathcal{L}$ and pushing forward along $\pi_{T}$ gives
the exact sequence
(5)
$0\rightarrow\pi_{T*}(\mathcal{I}_{V}\otimes\rho_{\mathscr{X}}^{*}\mathcal{L})\rightarrow\pi_{T*}\rho_{\mathscr{X}}^{*}\mathcal{L}\rightarrow\pi_{T*}(\mathcal{O}_{V}\otimes\rho_{\mathscr{X}}^{*}\mathcal{L}).$
The rightmost arrow of this sequence is surjective since the coherent sheaf
$R^{1}\pi_{T*}(\mathcal{I}_{V}\otimes\rho_{\mathscr{X}}^{*}\mathcal{L})=0$
vanishes; this can be checked after splitting $\mathscr{X}/S$ and using our
choice of $N$, cf. [Kol96, Theorem I.1.5]. Moreover, each of the terms in (5)
is locally free by [Har77, Theorem III.12.11] and
$\mathrm{rk}(\pi_{T*}(\mathcal{O}_{V}\otimes\rho_{\mathscr{X}}^{*}\mathcal{L}))=\phi(N)$.
The base change map
$\rho^{*}\pi_{*}\mathcal{L}\rightarrow\pi_{T*}\rho_{\mathscr{X}}^{*}\mathcal{L}$
is an isomorphism, since it becomes one after an étale (in particular fppf)
extension of $S$, by [Nit05, Lemma 5.4]. Hence the surjection
$\psi:\rho^{*}\pi_{*}\mathcal{L}\rightarrow\pi_{T*}(\mathcal{O}_{V}\otimes\rho_{\mathscr{X}}^{*}\mathcal{L})$
defines a map
$\tilde{\rho}:T\rightarrow\mathscr{Y}\quad\mbox{with}\quad\tilde{\rho}^{*}(\sigma^{*}\pi_{*}\mathcal{L}\rightarrow\mathcal{V})=\psi$
by the functorial description of $\mathscr{Y}$. If we let
$\tilde{\rho}_{\mathscr{X}}:\mathscr{X}\times_{S}T\rightarrow\mathscr{X}\times_{S}\mathscr{Y}$
denote the map obtained by base change, then we find that
$\tilde{\rho}^{*}_{\mathscr{X}}\mathcal{C}=\tilde{\rho}^{*}_{\mathscr{X}}\,\mathrm{coker}\left(p_{2}^{*}\mathcal{U}\rightarrow p_{2}^{*}\sigma^{*}\pi_{*}\mathcal{L}=p_{1}^{*}\pi^{*}\pi_{*}\mathcal{L}\rightarrow p_{1}^{*}\mathcal{L}\right)=\mathrm{coker}\left(\pi_{T}^{*}\pi_{T*}(\mathcal{I}_{V}\otimes\rho_{\mathscr{X}}^{*}\mathcal{L})\rightarrow\pi_{T}^{*}\rho^{*}\pi_{*}\mathcal{L}=\rho_{\mathscr{X}}^{*}\pi^{*}\pi_{*}\mathcal{L}\rightarrow\rho_{\mathscr{X}}^{*}\mathcal{L}\right).$
The composition factors through the adjunction
$\pi_{T}^{*}\pi_{T*}(\mathcal{I}_{V}\otimes\rho_{\mathscr{X}}^{*}\mathcal{L})\rightarrow\mathcal{I}_{V}\otimes\rho_{\mathscr{X}}^{*}\mathcal{L},$
which induces the isomorphism
$\tilde{\rho}^{*}_{\mathscr{X}}(\mathcal{C}\otimes
p_{1}^{*}\mathcal{L}^{\vee})\cong\tilde{\rho}^{*}_{\mathscr{X}}\mathcal{C}\otimes\rho_{\mathscr{X}}^{*}\mathcal{L}\cong\mathcal{O}_{V}$.
Since $V$ is flat over $T$ with reduced Hilbert polynomial
$\mathrm{rh}_{V}(t)=\phi(t)$, this implies that $\rho=i\circ f$ factors via a
morphism $f:T\rightarrow H$ since $H$ satisfies property (f). The association
sending $V$ to $f$ defines the map in (3). With a moment’s thought (and also
noting that the factorization above is unique by [Sta19, Tag 01L7]), it’s
clear the maps (2) and (3) are mutually inverse. This completes the proof. ∎
###### Definition 2.6.
We’ll call $\mathrm{Hilb}_{\phi(t)}^{\mathrm{tw}}(\mathscr{X}/S)$ the Hilbert
scheme of $\mathscr{X}/S$ that parameterizes subschemes with reduced Hilbert
polynomial $\phi(t)$. The superscript $\mathrm{tw}$ is a reminder that this is
a twist of one of the usual Hilbert schemes of a projective bundle as the next
remark notes.
###### Remark 2.7.
If $\mathscr{X}/S$ is split, i.e. if $\mathscr{X}/S$ is a projective bundle
$\mathbb{P}_{S}(\mathcal{E})$ for some vector bundle $\mathcal{E}$ on $S$,
then the above theorem recovers the usual Hilbert scheme
$\mathrm{Hilb}_{\phi(t)}(\mathbb{P}_{S}(\mathcal{E})/S)$. This also shows the
following statement: if $\mathscr{X}/S$ is any Severi–Brauer scheme over a
Noetherian base scheme $S$, and if $S^{\prime}/S$ is an étale cover splitting
$\mathscr{X}/S$, then there are splitting isomorphisms
$\mathrm{Hilb}_{\phi(t)}^{\mathrm{tw}}(\mathscr{X}/S)\times_{S}S^{\prime}\cong\mathrm{Hilb}_{\phi(t)}(\mathscr{X}_{S^{\prime}}/S^{\prime})$
as claimed in the beginning of this section. Consequently, the scheme
$\mathrm{Hilb}_{\phi(t)}^{\mathrm{tw}}(\mathscr{X}/S)$ inherits any property
of $\mathrm{Hilb}_{\phi(t)}(\mathscr{X}_{S^{\prime}}/S^{\prime})$ that is
étale local on the base, e.g. properness, finite-typeness over $S$, smoothness
if it holds over $S^{\prime}$.
The infinitesimal theory of
$\mathrm{Hilb}_{\phi(t)}^{\mathrm{tw}}(\mathscr{X}/S)$ can also be checked on
an étale cover of the base, so we get the following corollary using the fact
that the scheme $\mathrm{Hilb}_{\phi(t)}^{\mathrm{tw}}(\mathscr{X}/S)$ is
étale locally, e.g. on a cover $S^{\prime}/S$ splitting $\mathscr{X}/S$,
isomorphic to
$\mathrm{Hilb}_{\phi(t)}(\mathbb{P}_{S^{\prime}}^{n}/S^{\prime})$.
###### Corollary 2.8.
Let $\mathscr{X}/S$ be a Severi–Brauer scheme over $S$. Let $s\in S$ be a
point, let $F$ be a field, and let $p:\mathrm{Spec}(F)\rightarrow s$ be a
morphism. Let $V\subset\mathscr{X}_{F}$ be a subscheme with ideal sheaf
$\mathcal{I}_{V}$ and reduced Hilbert polynomial $\mathrm{rh}_{V}(t)=\phi(t)$.
Then the following are true:
1. (1)
The Zariski tangent space of
$\mathrm{Hilb}_{\phi(t)}^{\mathrm{tw}}(\mathscr{X}_{F}/F)$ at the $F$-point
given by $V$ via Theorem 2.5 is naturally isomorphic to
$\mathrm{Hom}_{\mathcal{O}_{\mathscr{X}_{F}}}(\mathcal{I}_{V},\mathcal{O}_{V})=\mathrm{Hom}_{\mathcal{O}_{V}}(\mathcal{I}_{V}/\mathcal{I}_{V}^{2},\mathcal{O}_{V}).$
2. (2)
The dimension of every irreducible component of
$\mathrm{Hilb}_{\phi(t)}^{\mathrm{tw}}(\mathscr{X}_{F}/F)$ at the $F$-point
defined by $V$ is at least
$\mathrm{dim}_{F}\mathrm{Hom}_{\mathcal{O}_{\mathscr{X}_{F}}}(\mathcal{I}_{V},\mathcal{O}_{V})-\mathrm{dim}_{F}\mathrm{Ext}^{1}_{\mathcal{O}_{\mathscr{X}_{F}}}(\mathcal{I}_{V},\mathcal{O}_{V})+\mathrm{dim}_{s}S.$
3. (3)
If $V\subset\mathscr{X}_{F}$ is (étale) locally unobstructed, then the
dimension of every irreducible component of
$\mathrm{Hilb}_{\phi(t)}^{\mathrm{tw}}(\mathscr{X}/S)$ at any point in the
image of the point defined by $V$ is at least
$\mathrm{dim}_{F}\mathrm{Hom}_{\mathcal{O}_{V}}(\mathcal{I}_{V}/\mathcal{I}_{V}^{2},\mathcal{O}_{V})-\mathrm{dim}_{F}\mathrm{H}^{1}(V,\mathcal{H}om(\mathcal{I}_{V}/\mathcal{I}_{V}^{2},\mathcal{O}_{V}))+\mathrm{dim}_{s}S.$
Moreover, in either of the cases (2) or (3) above, if the lower bound given
for the dimension is equal to the dimension of every irreducible component of
$\mathrm{Hilb}^{\mathrm{tw}}_{\phi(t)}(\mathscr{X}/S)$ at the point defined by
$V$, then the map
$\mathrm{Hilb}^{\mathrm{tw}}_{\phi(t)}(\mathscr{X}/S)\rightarrow S$
is a local complete intersection morphism at that point.
###### Proof.
This is a combination of [Kol96, Theorems I.2.10 and I.2.15]. See [Kol96,
Definition I.2.11] for the definition of locally unobstructed subschemes. ∎
## 3\. Classifying subschemes
From now on, we work in the following setting: we fix a base field $k$, a
$k$-central simple $k$-algebra $A$, and we let $X=\mathbf{SB}(A)$ be the
associated Severi–Brauer variety of $A$. We use the triple $(d,n,m)$ to refer
to the degree, index, and exponent of $A$ respectively, i.e.
$d=\mathrm{deg}(A),\quad n=\mathrm{ind}(A),\quad m=\mathrm{exp}(A).$
In this section, we analyze the subschemes of $X$ corresponding to points in
the Hilbert scheme $\mathrm{Hilb}^{\mathrm{tw}}_{nt}(X/k)$. We assume
throughout this section that $d>2$ so that $X$ is not a curve itself.
###### Lemma 3.1.
Let $C\subset X$ be a curve. Let $p$ be a prime number. Then the degree
$\mathrm{deg}(C)$ satisfies the following
$v_{p}(\mathrm{deg}(C))\geq\begin{cases}v_{p}(n)&\mbox{if $p$ is odd}\\\
v_{p}(n)-1&\mbox{if $p=2$}.\end{cases}$
In other words, the integer $n$ divides $\mathrm{deg}(C)$ if $n$ is odd and
the integer $n/2$ divides $\mathrm{deg}(C)$ if $n$ is even.
###### Proof.
Let $F$ be any splitting field for $X$. The degree $\mathrm{deg}(C)$ is
defined as the unique integer so that there is an equality
$[C_{F}]=\mathrm{deg}(C)[L]$ inside the Chow group $\mathrm{CH}_{1}(X_{F})$
where $L\subset X_{F}\cong\mathbb{P}^{d-1}_{F}$ is any line. The degree
$\mathrm{deg}(C)$ is independent of the choice of splitting field $F$.
To check the given divisibility relations, it suffices to work $p$-locally in
the group $\mathrm{CH}_{1}(X_{F})\otimes\mathbb{Z}_{(p)}$. By a
corestriction-restriction argument, it therefore suffices to assume $n=p^{r}$
is a prime power of $p$. The case that $p$ is odd is [Mac21, Theorem 4.7]. So
we can assume that $p=2$.
By first replacing $k$ with some (possibly large) field extension that doesn’t
change the index of $A$, we can assume that $m=\mathrm{exp}(A)=2$. Let
$e\in\mathrm{CH}^{1}(X)$ be the class of a divisor that has degree
$m=\mathrm{exp}(A)$ over any splitting field for $X$. Let $x$ be any closed
point of $X$ with $[k(x):k]=n=2^{r}$ and set $F=k(x)$. Let $q\in X_{F}$ be any
$F$-rational point inside $x_{F}$. Then since $\mathrm{CH}_{0}(X)=\mathbb{Z}$
is generated by the class of the point $x$, we find (by restricting to $F$ the
relation $e\cdot[C]=a[x]$ that holds over $k$ for some integer $a\geq 1$) that
$e_{F}\cdot[C_{F}]=2\,\mathrm{deg}(C)[q]=a[x_{F}]=a2^{r}[q].$
In other words, we find $2^{r-1}$ divides the degree $\mathrm{deg}(C)$. ∎
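The equivalence stated in Lemma 3.1 between the valuation bound and plain divisibility (by $n$ if $n$ is odd, by $n/2$ if $n$ is even) is elementary arithmetic. As a purely illustrative sanity check, not part of the argument, the following Python snippet verifies the equivalence for small $n$:

```python
def v(p, m):
    """p-adic valuation v_p(m) for m > 0."""
    k = 0
    while m % p == 0:
        m //= p
        k += 1
    return k

def primes_up_to(N):
    # naive trial division; fine for the small range used here
    return [p for p in range(2, N + 1) if all(p % q for q in range(2, p))]

def satisfies_bound(n, deg):
    # Lemma 3.1: v_p(deg) >= v_p(n) for odd p, and >= v_2(n) - 1 for p = 2
    return all(
        v(p, deg) >= (v(p, n) - 1 if p == 2 else v(p, n))
        for p in primes_up_to(n)
    )

for n in range(2, 40):
    allowed = [d for d in range(1, 10 * n) if satisfies_bound(n, d)]
    step = n if n % 2 else n // 2   # n odd: multiples of n; n even: of n/2
    assert allowed == list(range(step, 10 * n, step))
```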
###### Remark 3.2.
In fact, when the index $n$ is a power of $2$, it follows that for any field
$F$ splitting $X$, the image of $\mathrm{CH}_{1}(X)$ inside
$\mathrm{CH}_{1}(X_{F})=\mathbb{Z}$ by restriction is exactly
$(n/2)\mathbb{Z}$. Generators of the image are exactly the restrictions of the
classes $c_{n-2}(\zeta_{X}(1))c_{n}(\zeta_{X}(1))^{(d/n)-1}$ and
$c_{1}(\zeta_{X}(m))^{d-2}$ constructed in [KM19, Appendix A].
###### Example 3.3.
In this example, we construct some curves in Severi–Brauer varieties
associated to central simple $k$-algebras of index $n=2^{r}$ with $2$-adic
valuation of the degree strictly smaller than $r$.
For an example of a curve $C$ with minimal possible degree in $X$, let
$A=Q_{1}\otimes Q_{2}$ be a biquaternion algebra of index $4$ split by a
biquadratic extension $F=k(\sqrt{a},\sqrt{b})$ with Galois group
$\mathrm{Gal}(F/k)=(\mathbb{Z}/2\mathbb{Z})^{\oplus 2}$. Let $p$ be a point on
$X$ with residue field $k(p)=F$ and identify the points in $p_{F}$ with
elements of $\mathrm{Gal}(F/k)$ so that the action of $\mathrm{Gal}(F/k)$ on $p_{F}$ is the
canonical one. In $X_{F}\cong\mathbb{P}^{3}_{F}$ let $L_{1}$ be the line
passing through the points $(0,0)$ and $(0,1)$ and let $L_{2}$ be the line
passing through the points $(1,0)$ and $(1,1)$. Then $L_{1}\cup L_{2}$ forms a
Galois orbit, hence it descends to the ground field $k$ to give a curve
$C\subset X$ with $\mathrm{deg}(C)=2$.
If the division algebra $A=Q_{1}\otimes\cdots\otimes Q_{r}$ is an $r$-fold
($r>2$) product of distinct quaternion algebras split by a multi-quadratic
extension $F/k$ with $\mathrm{Gal}(F/k)\cong(\mathbb{Z}/2\mathbb{Z})^{\oplus
r}$, then there is a point $p$ on $X$ with residue field $F$. One can pass a
Galois orbit of lines through $p_{F}$ which is essentially an $r$-dimensional
cube $I_{r}=[0,1]^{r}$ with lines replacing the edges of the cube. The curve
$C\subset X$ that one gets from this cube has degree
$\mathrm{deg}(C)=r2^{r-1}$ (equal to the number of edges of the $r$-cube). If
$r=3$ then this curve has arithmetic genus
$h^{1}(C,\mathcal{O}_{C})=2^{3-2}\binom{3}{2}-1$ (equal to the number of faces
of the $3$-cube minus 1).
In general, any division algebra $A$ of index $n=2^{r}$ is split by a
separable field extension $F/k$ of degree $[F:k]=2^{r}$. For this field $F$,
there is a point $p$ on $X$ with residue field $k(p)=F$ and $p_{k^{s}}$
contains $2^{r}$ points over a separable closure $k^{s}$ of $k$. Passing a
line between every pair of points in $p_{k^{s}}$ gives a
$\mathrm{Gal}(k^{s}/k)$ orbit that descends to a curve $C\subset X$. The
degree of $C$ is equal to the number of edges in the complete graph $K_{n}$,
i.e. $\mathrm{deg}(C)=\binom{n}{2}=2^{r-1}(2^{r}-1)$.
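The degree bookkeeping in Example 3.3 amounts to counting edges of graphs. A short Python check (illustrative only) confirms the edge counts and that each degree meets the $2$-adic lower bound of Lemma 3.1 for $n=2^{r}$:

```python
from math import comb

def v2(m):
    """2-adic valuation of m > 0."""
    k = 0
    while m % 2 == 0:
        m //= 2
        k += 1
    return k

for r in range(2, 12):
    cube_edges = r * 2 ** (r - 1)        # degree of the r-cube curve
    complete_edges = comb(2 ** r, 2)     # degree of the complete-graph curve
    assert complete_edges == 2 ** (r - 1) * (2 ** r - 1)
    # both degrees satisfy v_2(deg) >= v_2(n) - 1 = r - 1
    assert v2(cube_edges) >= r - 1
    assert v2(complete_edges) >= r - 1

# the r = 3 cube curve: 12 edges, arithmetic genus = faces of the 3-cube minus 1
assert 3 * 2 ** 2 == 12 and 2 ** (3 - 2) * comb(3, 2) - 1 == 5
```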
###### Lemma 3.4.
Assume that $A$ is a division $k$-algebra, i.e. assume $n=d$. Let $V\subset X$
be any subscheme of $X$ containing an irreducible component $C$ of dimension
$\mathrm{dim}(C)\geq 1$. Then $V$ is geometrically nondegenerate.
Furthermore, if $C\subset X$ is any geometrically integral curve in $X$ with
degree $\mathrm{deg}(C)=p$ for some integer $p\geq 1$, then the geometric
genus of $C$ is bounded above by
$g_{geom}(C)\leq(d-2)\frac{q(q-1)}{2}+qr$
where $q,r$ are the quotient and remainder of dividing $p-1$ by $d-2$, i.e.
where $p-1=q(d-2)+r$ and $0\leq r<d-2$.
###### Proof.
Let $\overline{k}$ be an algebraic closure of $k$ and $k^{s}$ a separable
closure of $k$ inside $\overline{k}$. To see that $V_{\overline{k}}$ is
nondegenerate in $X_{\overline{k}}\cong\mathbb{P}^{d-1}_{\overline{k}}$, it
suffices to show that $C_{\overline{k}}$ is contained in no hyperplane.
Assume to the contrary that there is a hyperplane $H^{\prime}$ containing
$C_{\overline{k}}$, and let $H$ be the unique hyperplane inside $X_{k^{s}}$
with $H\times_{k^{s}}\overline{k}=H^{\prime}$. Let $\alpha$ be a $1$-cocycle
of $G=\mathrm{Gal}(k^{s}/k)$ representing $X$ in
$\mathrm{H}^{1}(G,\mathrm{PGL}_{n}(k^{s}))$. Then $C_{k^{s}}$ is contained in
each of the (finitely many) Galois orbits of $H$ coming from $\alpha$. In
particular, we get an inclusion
$C_{k^{s}}\subset\bigcap_{g\in G}gH$
with the right hand side a Galois invariant linear subspace of $X_{k^{s}}$.
Since this subspace would necessarily descend to the field $k$, this
contradicts the fact that $A$ was assumed to be a division $k$-algebra (which
implies that $X$ has no twisted linear subvarieties).
Now suppose that $C$ is a geometrically integral curve in $X$. To bound the
geometric genus of $C$, it suffices to work over the algebraic closure. In
particular, we can assume that $C$ is an integral curve nondegenerate inside
of $\mathbb{P}^{d-1}$. Then the claim is just Castelnuovo’s bound [Har81]. ∎
###### Remark 3.5.
The above proof even shows the more general statement that, if $A$ is any
central simple $k$-algebra and if $V\subset X$ is any subscheme of $X$
containing an irreducible component $C$ of dimension $\mathrm{dim}(C)\geq 1$,
then $V_{\overline{k}}$ is set-theoretically contained in a hyperplane of
$X_{\overline{k}}$ if and only if $V$ is set-theoretically contained in a
twisted linear subvariety $Y\subsetneq X$.
In particular, if $A$ is a division $k$-algebra, then both $V_{\overline{k}}$
and $(V_{\overline{k}})_{red}$ are nondegenerate.
###### Remark 3.6.
Assume that $A$ is a $k$-division algebra so that $d=n$. Then for any
geometrically integral curve $C\subset X$ with $\mathrm{deg}(C)\leq n$, we
find $g_{geom}(C)\leq 1$ by applying Lemma 3.4 (and if $\mathrm{deg}(C)<n$
then also $g_{geom}(C)<1$).
Suppose $C\subset X$ as above has both $\mathrm{deg}(C)\leq n$ and
$g_{geom}(C)=0$. Then, in this case, since $C$ is geometrically integral, the
normalization $C^{\nu}$ of $C$ is smooth, geometrically connected, and
geometrically rational, hence a smooth conic. So there is a point $p$ on
$C^{\nu}$ with $[k(p):k]\leq 2$. As $C^{\nu}$ maps to $X$, and every closed
point of $X$ has degree divisible by $n$, this can only happen if $d=2$,
contrary to our standing assumption $d>2$.
It follows that any smooth and geometrically connected curve $C\subset X$ with
$\mathrm{deg}(C)\leq n$ has genus $g_{geom}(C)=h^{1}(C,\mathcal{O}_{C})=1$ and
$\mathrm{deg}(C)=n$.
Typically the arithmetic genus $h^{1}(C,\mathcal{O}_{C})$ gives more
information about a curve $C$ and its embedding $C\subset X$. The next two
lemmas give technical tools that can allow one to determine the possible
values for the arithmetic genera of curves in $X$ in some cases.
###### Lemma 3.7.
Let $p$ be a prime number. Suppose that $V\subset X$ is any subscheme whose
irreducible components $P_{i}\subset V$ have $\mathrm{dim}(P_{i})\leq 1$. Then
we have
$v_{p}(h^{0}(V,\mathcal{O}_{V})-h^{1}(V,\mathcal{O}_{V}))\geq\begin{cases}v_{p}(n)&\mbox{if
$p$ is odd}\\\ v_{p}(n)-1&\mbox{if $p=2$}.\end{cases}$
In other words, the integer $n$ divides $\chi(V,\mathcal{O}_{V})$ if $n$ is
odd and the integer $n/2$ divides $\chi(V,\mathcal{O}_{V})$ if $n$ is even.
###### Proof.
It follows from flat base change [Sta19, Tag 02KH] that this can be checked
geometrically. The situation fits into a commutative diagram
$\begin{array}{ccc}K(X)&\longrightarrow&\mathbb{Z}\\ \downarrow&&\downarrow\\ K(X_{\overline{k}})&\longrightarrow&\mathbb{Z}\end{array}$
where the horizontal arrows are pushforwards along the structure map (with the
identification $K(\mathrm{Spec}(k))=\mathbb{Z}=K(\mathrm{Spec}(\overline{k}))$
of Grothendieck rings) and the vertical arrows are induced by extension of
scalars. The class $[\mathcal{O}_{V}]$ in $K(X)$ sits in the topological
filtration $\tau_{1}(X)\subset K(X)$ generated by coherent sheaves supported
in dimension 1 or less.
The image of $\tau_{1}(X)$ under the left vertical map is given in [Mac21,
Theorem 4.7] under the assumptions that $p$ is odd and $n=p^{r}$. In that
particular case, $\tau_{1}(X)=p^{r}\tau_{1}(X_{\overline{k}})$ with generators
$p^{r}[\mathcal{O}_{\mathbb{P}^{1}}]$ and $p^{r}[\mathcal{O}_{q}]$ for the
class of a $\overline{k}$-rational point $q\in X_{\overline{k}}$. The
horizontal arrows in the diagram above take the class of a coherent sheaf
$[\mathcal{F}]$ to $\chi(X,\mathcal{F})$. Writing
$[\mathcal{O}_{V}]=ap^{r}[\mathcal{O}_{\mathbb{P}^{1}}]+bp^{r}[\mathcal{O}_{q}]$
it follows that
$p^{r}(a+b)=\chi(X,\mathcal{O}_{V})=\chi(V,\mathcal{O}_{V})=h^{0}(V,\mathcal{O}_{V})-h^{1}(V,\mathcal{O}_{V}).$
Since it suffices to check the claim with coefficients in $\mathbb{Z}_{(p)}$,
the case that $p$ is odd follows from the particular case above by a
restriction-corestriction argument. The case $p=2$ follows by a similar
argument using Lemma 3.1 to show $\tau_{1}(X)\subset
2^{r-1}\tau_{1}(X_{\overline{k}})$ when $n=2^{r}$. ∎
###### Lemma 3.8.
Suppose that $V\subset X$ is any subscheme whose irreducible components
$P_{i}\subset V$ have $\mathrm{dim}(P_{i})\leq 1$. Assume that
$\mathrm{rh}_{V}(t)=rt+s$ for some integers $r,s$ with $r\geq 1$. Then
$h^{1}(V,\mathcal{O}_{V})\leq\frac{1}{2}(r^{2}-3r)+h^{0}(V,\mathcal{O}_{V})$.
###### Proof.
This is a bit overkill but, since we have
$\mathrm{Hilb}^{\mathrm{tw}}_{rt+s}(X/k)\times_{k}\overline{k}\cong\mathrm{Hilb}_{rt+s}(\mathbb{P}^{d-1}_{\overline{k}}/\overline{k})$
it’s enough to show that the right hand side is empty whenever there is an
inequality
$h^{1}(V,\mathcal{O}_{V})>\frac{1}{2}(r^{2}-3r)+h^{0}(V,\mathcal{O}_{V})$.
This is proved in [Har66, Corollary 5.7]. More specifically, Hartshorne shows
there that
$\mathrm{Hilb}_{rt+s}(\mathbb{P}^{d-1}_{\overline{k}}/\overline{k})$ is
nonempty if and only if one has $m_{0}\geq m_{1}\geq 0$ when $rt+s$ is written
as
$rt+s=\binom{t}{1}-\binom{t-m_{0}}{1}+\binom{t+1}{2}-\binom{t+1-m_{1}}{2}=m_{0}+m_{1}t+\frac{1}{2}(m_{1}-m_{1}^{2}).$
Comparing coefficients in the above gives
$r=m_{1}\quad\mbox{and}\quad
s=\chi(V_{\overline{k}},\mathcal{O}_{V_{\overline{k}}})=\chi(V,\mathcal{O}_{V})=m_{0}+\frac{1}{2}(m_{1}-m_{1}^{2}).$
Equivalently, since
$\chi(V,\mathcal{O}_{V})=h^{0}(V,\mathcal{O}_{V})-h^{1}(V,\mathcal{O}_{V})$
this implies
$h^{0}(V,\mathcal{O}_{V})-s=h^{1}(V,\mathcal{O}_{V})=\frac{1}{2}(m_{1}^{2}-m_{1})-m_{0}+h^{0}(V,\mathcal{O}_{V}).$
Now $\mathrm{Hilb}_{rt+s}(\mathbb{P}^{d-1}_{\overline{k}}/\overline{k})$ is
nonempty if and only if $m_{0}\geq m_{1}\geq 0$ if and only if $r\geq 0$ and
$0\leq
h^{1}(V,\mathcal{O}_{V})\leq\frac{1}{2}(r^{2}-3r)+h^{0}(V,\mathcal{O}_{V}).$∎
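The binomial identity used in the proof, and the translation of Hartshorne's condition $m_{0}\geq m_{1}\geq 0$ into the stated bound on $h^{1}$, can both be verified numerically. The following Python snippet is an illustrative check (not part of the argument) over small values of $m_{0},m_{1}$:

```python
from fractions import Fraction

def b1(t):
    # binom(t, 1) as a polynomial value in t
    return Fraction(t)

def b2(t):
    # binom(t, 2) = t(t-1)/2 as a polynomial value in t
    return Fraction(t * (t - 1), 2)

# identity: binom(t,1) - binom(t-m0,1) + binom(t+1,2) - binom(t+1-m1,2)
#         = m0 + m1*t + (m1 - m1^2)/2
for m0 in range(8):
    for m1 in range(8):
        for t in range(-5, 10):
            lhs = b1(t) - b1(t - m0) + b2(t + 1) - b2(t + 1 - m1)
            rhs = m0 + m1 * t + Fraction(m1 - m1 ** 2, 2)
            assert lhs == rhs

# with r = m1 and s = m0 + (m1 - m1^2)/2, the condition m0 >= m1 is
# exactly h^1 <= (r^2 - 3r)/2 + h^0, where h^1 = h^0 - s
for m1 in range(8):
    for m0 in range(8):
        r, s = m1, Fraction(2 * m0 + m1 - m1 ** 2, 2)
        for h0 in range(1, 4):
            h1 = h0 - s
            assert (m0 >= m1) == (h1 <= Fraction(r * r - 3 * r, 2) + h0)
```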
###### Proposition 3.9.
Suppose that $A$ is a division $k$-algebra with index $n$. Let $V\subset X$ be
any subscheme with $\mathrm{rh}_{V}(t)=f(n)t+s$ where $f(n)=n$ if $n$ is odd
and $f(n)=n/2$ if $n$ is even. Then the following are true.
1. (1)
There is a unique irreducible component $C\subset V$ with $\mathrm{dim}(C)=1$.
Moreover, $C$ is generically reduced and $\mathrm{deg}(C)=f(n)$.
2. (2)
If the index $n=p$ is prime, then the curve $C\subset V$ is geometrically
generically reduced.
3. (3)
If $s=0$ and if the index $n=p$ is prime, then the curve $C\subset V$ is also
geometrically connected.
###### Proof.
As $\mathrm{rh}_{V}(t)=f(n)t+s$ has degree
$\mathrm{deg}(\mathrm{rh}_{V}(t))=1$, and since $\mathrm{rh}_{V}(t)$ is
geometrically the Hilbert polynomial of $V_{\overline{k}}$, the dimension of
any irreducible component $P_{i}$ of $V$ satisfies $\mathrm{dim}(P_{i})\leq
1$. If there were multiple components $P_{i}$ of $\mathrm{dim}(P_{i})=1$, then
it would follow that $\mathrm{deg}(P_{i})<f(n)$ which is impossible by Lemma
3.1.
Similarly, if the unique irreducible component $C\subset V$ of dimension
$\mathrm{dim}(C)=1$ had $\mathrm{length}_{k(C)}\mathcal{O}_{C,\eta}>1$ at the
generic point $\eta$ of $C$, then we would find
$\mathrm{deg}(C_{red})<\mathrm{deg}(C)$ which also contradicts Lemma 3.1.
Hence the curve $C\subset V$ is also generically reduced, proving (1).
To prove (2), i.e. to show that $C$ is geometrically generically reduced when
$n=p$ is prime, we can assume $n=p>2$. If $C$ is geometrically irreducible,
then geometrically we have
$\mathrm{deg}(C)=m_{C_{\overline{k}}}\mathrm{deg}(C_{\overline{k}})\quad\mbox{with}\quad
m_{C_{\overline{k}}}=\mathrm{length}_{\overline{k}(C_{\overline{k}})}\mathcal{O}_{C_{\overline{k}},\overline{\eta}}$
where $\overline{\eta}$ is the generic point of $C_{\overline{k}}$. If
$m_{C_{\overline{k}}}=p$, then $(C_{\overline{k}})_{red}$ is a line in
$X_{\overline{k}}\cong\mathbb{P}^{n-1}_{\overline{k}}$, hence set-theoretically
degenerate, contradicting Remark 3.5. So we must have $m_{C_{\overline{k}}}=1$
in which case $C_{\overline{k}}$ is generically reduced.
On the other hand, if $C$ is geometrically reducible, then the Galois group
$G=\mathrm{Gal}(k^{s}/k)$ acts transitively on the irreducible components of
$C_{\overline{k}}$ and all of these irreducible components have the same
degree. Since there are at least two irreducible components of
$C_{\overline{k}}$, there must be exactly $p$, say $C_{1},...,C_{p}$, each
with degree $\mathrm{deg}(C_{i})=1$. Hence $C$ is also geometrically
generically reduced in this case.
Suppose now that $s=0$ and $n=p>2$ is a prime number. For (3), we suppose that
the unique curve $C\subset V$ is geometrically disconnected and aim for a
contradiction. Since the degree of $C$ is $\mathrm{deg}(C)=p$, there are
exactly $p$ connected components $C_{1},...,C_{p}$ of $C_{\overline{k}}$ with
$\mathrm{deg}(C_{i})=1$. Hence
$(C_{i})_{red}\cong\mathbb{P}^{1}_{\overline{k}}$. Considering the ideal sheaf
sequence
$0\rightarrow\mathcal{N}\rightarrow\mathcal{O}_{C_{\overline{k}}}\rightarrow\mathcal{O}_{(C_{\overline{k}})_{red}}\rightarrow
0,$
we find that $\mathrm{Supp}(\mathcal{N})$ is a finite set of closed points.
Therefore
$h^{1}(C,\mathcal{O}_{C})=h^{1}(C_{\overline{k}},\mathcal{O}_{C_{\overline{k}}})=h^{1}((C_{\overline{k}})_{red},\mathcal{O}_{(C_{\overline{k}})_{red}})=0.$
But
$h^{1}(C,\mathcal{O}_{C})=h^{1}(V,\mathcal{O}_{V})=h^{0}(V,\mathcal{O}_{V})\neq
0$ by the assumption that the reduced Hilbert polynomial of $V$ is
$\mathrm{rh}_{V}(t)=pt$. Hence if $n=p>2$ is prime, then $C$ is geometrically
connected. ∎
###### Remark 3.10.
Proposition 3.9 implies that, if $A$ is a division $k$-algebra of prime index
$n=p$, then any reduced curve $C\subset X$ with $\mathrm{deg}(C)=p$ is
geometrically reduced [Sta19, Tag 04KS].
###### Example 3.11.
If $A$ is a division $k$-algebra with index $n=5$, then we can say something
about possible subschemes $V\subset X$ with $\mathrm{rh}_{V}(t)=5t$. Let
$C\subset V$ be the unique curve sitting in $V$ with degree
$\mathrm{deg}(C)=5$. Then $C$ is geometrically connected (by Proposition 3.9)
and $C$ is either reduced or nonreduced. If $C$ is reduced, then $C$ is
geometrically reduced (by Remark 3.10). In this case
$h^{0}(C,\mathcal{O}_{C})=1$ so that Lemma 3.8 implies
$h^{1}(C,\mathcal{O}_{C})\leq\frac{1}{2}(25-15)+1=6$. With Lemma 3.7 we get
that $5$ divides $1-h^{1}(C,\mathcal{O}_{C})$. So either
$h^{1}(C,\mathcal{O}_{C})=1$ or $h^{1}(C,\mathcal{O}_{C})=6$.
If $C$ is reduced and geometrically reducible, then $C_{\overline{k}}$ is the
union of $5$ irreducible components $C_{1},...,C_{5}$ each with
$\mathrm{deg}(C_{i})=1$. The singular points of $C_{\overline{k}}$ form a
Galois orbit, so that they span
$X_{\overline{k}}\cong\mathbb{P}^{4}_{\overline{k}}$ linearly. In particular,
$C_{\overline{k}}$ is a union of $5$ lines passing through at least $5$
points. We show in Lemma 3.12 below that $C_{\overline{k}}$ is essentially a
$5$-gon of lines with $h^{1}(C,\mathcal{O}_{C})=1$. Since
$\chi(C,\mathcal{O}_{C})=0$, this implies $C=V$.
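(To spell out this last step, which is not made explicit above: writing
$V=C\sqcup p$ with $p$ a possibly empty Artinian scheme, additivity of the
Euler characteristic together with $\mathrm{rh}_{V}(t)=5t$ gives
$0=\chi(V,\mathcal{O}_{V})=\chi(C,\mathcal{O}_{C})+h^{0}(p,\mathcal{O}_{p})=h^{0}(p,\mathcal{O}_{p}),$
so the Artinian part $p$ is empty and $C=V$.)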
Otherwise $C$ is geometrically integral and the normalization of $C$ is a
smooth genus 1 curve by Remark 3.6. Then either $C$ is smooth and
$h^{1}(C,\mathcal{O}_{C})=1$, or $h^{1}(C,\mathcal{O}_{C})=6$ and $C$ is
singular [Sta19, Tag 0CE4]. In the former case we again find $V=C$ and, in the
latter case, we find $V=C\cup p$ for some Artinian subscheme $p\subset X$ with
$h^{0}(p,\mathcal{O}_{p})=5$. But, since $5$ divides the degree of any closed
point of $X$, we find that $p$ is a closed point with $[k(p):k]=5$.
When $C$ is nonreduced, we consider instead the reduced subscheme
$C_{red}\subset C$ which still has $\mathrm{deg}(C_{red})=5$ since $C$ is
generically reduced. The scheme $C_{red}$ is both geometrically connected
(since this is true for $C$ by Proposition 3.9) and geometrically reduced
(from Remark 3.10). Hence $h^{0}(C_{red},\mathcal{O}_{C_{red}})=1$ and,
similar to the reduced case, we find that either
$h^{1}(C_{red},\mathcal{O}_{C_{red}})=1$ or
$h^{1}(C_{red},\mathcal{O}_{C_{red}})=6$ from both Lemma 3.8 and Lemma 3.7.
Now from the ideal sheaf sequence for $C_{red}\subset C$ we find
(in)equalities
$h^{0}(C,\mathcal{O}_{C})>h^{0}(C_{red},\mathcal{O}_{C_{red}})\quad\mbox{and}\quad
h^{1}(C,\mathcal{O}_{C})=h^{1}(C_{red},\mathcal{O}_{C_{red}}).$
It follows that if we write $V=C\sqcup p$ as the disjoint union of $C$ and a
possibly empty Artinian scheme $p$, then
$0=\chi(V,\mathcal{O}_{V})=\chi(C,\mathcal{O}_{C})+r>\chi(C_{red},\mathcal{O}_{C_{red}})+r$
with $r=h^{0}(p,\mathcal{O}_{p})\geq 0$. So
$\chi(C_{red},\mathcal{O}_{C_{red}})<0$ and
$h^{1}(C_{red},\mathcal{O}_{C_{red}})=6$. But then
$h^{0}(V,\mathcal{O}_{V})=6$ as well since we have
$h^{1}(C,\mathcal{O}_{C})=h^{1}(V,\mathcal{O}_{V})$. Since there are no closed
points on $X$ of degree less than $5$, it follows that $V=C$ and
$h^{0}(C,\mathcal{O}_{C})=6$.
###### Lemma 3.12.
Let $A$ be a division $k$-algebra of prime index $n=p>2$. Let $V\subset X$ be
any subscheme with $\mathrm{rh}_{V}(t)=pt$ and let $C\subset V$ be the unique
curve from Proposition 3.9. If $C$ is geometrically reducible, then
$C_{\overline{k}}$ is a $p$-gon of lines through $p$ points spanning
$X_{\overline{k}}\cong\mathbb{P}^{p-1}_{\overline{k}}$.
###### Proof.
Since $C$ is geometrically reducible and of degree $\mathrm{deg}(C)=p$, we
know that $(C_{red})_{\overline{k}}$ is the union of $p$ lines
$L_{1},...,L_{p}$. The Galois group $G=\mathrm{Gal}(k^{s}/k)$ acts
transitively on the set of these lines $\{L_{1},...,L_{p}\}$, giving a map
$G\rightarrow S_{p}$ whose image contains a $p$-cycle. Proposition 3.9 shows
that the curve $C$ is geometrically connected, so we can find an element $g\in
G$ so that $L_{1}\cap gL_{1}\neq\emptyset$. After possibly relabeling the
lines $L_{1},...,L_{p}$ we can assume
1. (1)
$L_{k}=g^{k-1}L_{1}$ for all $k\leq p$,
2. (2)
$L_{i}\cap L_{i+1}\neq\emptyset$ for all $i=1,...,p-1$,
3. (3)
$L_{p}\cap L_{1}\neq\emptyset$.
Assume that there is a hyperplane $H\subset
X_{\overline{k}}\cong\mathbb{P}^{p-1}_{\overline{k}}$ containing the lines
$L_{1},...,L_{p-1}$. Then, in particular, this $H$ contains the set of all
singular points of $C_{\overline{k}}$ which are the union of Galois orbits
under $G$. Since $H$ doesn’t contain $L_{p}$, we have that $gH$ doesn’t
contain $gL_{p}=L_{1}$. So the intersection of these translates is empty,
$\bigcap_{g\in G}gH=\emptyset.$
But this would imply that $C_{\overline{k}}$ is smooth, a contradiction since
$C_{\overline{k}}$, being a connected union of $p>1$ distinct lines, is
singular. Hence no hyperplane $H$ can contain $L_{1},...,L_{p-1}$.
Now we argue inductively. Starting with $L_{1}$, we add $L_{2}$ with $L_{1}\cap
L_{2}\neq\emptyset$ by our choice of labeling. We get that $L_{1}\cup L_{2}$
is contained in a linear subspace $H_{1}$ with $\mathrm{dim}(H_{1})=2$. Now we
consider $L_{3}$. Adding $L_{3}$ to $L_{1}\cup L_{2}$, we know that $L_{2}\cap
L_{3}\neq\emptyset$ so that $L_{1}\cup L_{2}\cup L_{3}$ is contained in a
linear subspace $H_{2}$ with $\mathrm{dim}(H_{2})=3$. Further, we know
$L_{1}\cup L_{2}\cup L_{3}$ is not contained in $H_{1}$ since, if it were,
then $L_{1}\cup\cdots\cup L_{p-1}$ would then be contained in a hyperplane. So
$\#(L_{3}\cap H_{1})=1$.
Repeating this process, we find for all $i<p-1$ linear subspaces $H_{i}$ of
dimension $\mathrm{dim}(H_{i})=i+1$ so that each $H_{k}$ contains
$L_{1}\cup\cdots\cup L_{k+1}$, and $\#(L_{k+1}\cap H_{k-1})=1$. Finally, we
have that $\#(L_{p}\cap(L_{1}\cup\cdots\cup L_{p-1}))\geq 2$ and we claim
that actually equality holds. If this inequality were strict, then $L_{p}\cap
L_{i}\neq\emptyset$ for some $i\neq 1,p-1$. But then
$gL_{p}\cap gL_{i}=L_{1}\cap L_{i+1}\neq\emptyset$
and $1<i+1\leq p-1$, contradicting that $\#(H_{i-1}\cap L_{i+1})=1$.
We think of $(C_{red})_{\overline{k}}$ as a graph with singular points as
vertices and lines as edges; the above shows that the graph associated to
$(C_{red})_{\overline{k}}$ is a $p$-gon on exactly $p$ vertices. These
$p$ vertices form a Galois orbit under $G$, so they must span
$X_{\overline{k}}$. Corollary 3.13 below shows that $C=C_{red}$, which
completes the proof. ∎
###### Corollary 3.13.
Let $A$ be a division $k$-algebra of prime index $n=p$. Let $V\subset X$ be
any subscheme with $\mathrm{rh}_{V}(t)=pt$ and let $C\subset V$ be the unique
curve found in Proposition 3.9. If $C$ is geometrically reducible, then $C$ is
reduced, $h^{1}(C,\mathcal{O}_{C})=1$, and $V=C$.
###### Proof.
The proof of Lemma 3.12 describes how to construct $(C_{red})_{\overline{k}}$
as a union of lines. One can use this construction to compute
$h^{1}(C,\mathcal{O}_{C})=1$ from the exact sequence (7) below and the
observation that
$h^{1}(C,\mathcal{O}_{C})=h^{1}(C_{red},\mathcal{O}_{C_{red}})=h^{1}((C_{red})_{\overline{k}},\mathcal{O}_{(C_{red})_{\overline{k}}})$
where the first equality comes from $C$ being generically reduced and the
second from flat base change.
Then
$1=h^{1}(C,\mathcal{O}_{C})=h^{1}(V,\mathcal{O}_{V})=h^{0}(V,\mathcal{O}_{V})$
as $\mathrm{rh}_{V}(t)=pt$. Since
$0\neq H^{0}(C,\mathcal{O}_{C})\subset H^{0}(V,\mathcal{O}_{V}),$
we find that $h^{0}(C,\mathcal{O}_{C})=1$. Hence $C$ is reduced and $V=C$. ∎
For general $X$ it can be difficult to give a description of the schemes
$\mathrm{Hilb}^{\mathrm{tw}}_{nt}(X/k)$ as complete as the one in Example
3.11. However, for some particular classes of Severi–Brauer varieties $X$, we
can still profitably analyze specific irreducible components of
$\mathrm{Hilb}^{\mathrm{tw}}_{nt}(X/k)$.
From now on we write
(6)
$\psi_{X}:\mathrm{Univ}^{\mathrm{tw}}_{\phi(t)}(X/k)\rightarrow\mathrm{Hilb}^{\mathrm{tw}}_{\phi(t)}(X/k)$
for the canonical map coming from the projection. (By slight abuse of
notation, we use the same $\psi_{X}$ regardless of the function $\phi(t)$
under consideration). For each irreducible component
$V\subset\mathrm{Hilb}^{\mathrm{tw}}_{\phi(t)}(X/k)$ we let $\eta_{V}$ denote
the generic point of $V$. If $\phi(t)=rt+s$ is linear then, for each such $V$,
the generic fiber $\psi_{X}^{-1}(\eta_{V})$ is the union of a curve and a
finite number of points.
A priori, there may be more than one irreducible curve in the fiber
$\psi_{X}^{-1}(\eta_{V})$; Proposition 3.9 can sometimes be used to show that
the curve in $\psi_{X}^{-1}(\eta_{V})$ is irreducible.
###### Corollary 3.14.
Suppose that $A$ is a division $k$-algebra of index $n$. Define $f(n)$ to be
the following function.
$f(n)=\begin{cases}n&\mbox{if $n$ is odd}\\ n/2&\mbox{if $n$ is even}\end{cases}$
Assume that $V\subset\mathrm{Hilb}^{\mathrm{tw}}_{f(n)t+s}(X/k)$ is an
irreducible component and let $V_{sm}\subset V$ denote the smooth locus of
$V$.
If $V$ has a smooth $k$-rational point, i.e. $V_{sm}(k)\neq\emptyset$, then
$\psi_{X}^{-1}(\eta_{V})$ contains a unique irreducible and geometrically
connected curve.
###### Proof.
Because of the isomorphisms
$\mathrm{Hilb}^{\mathrm{tw}}_{f(n)t+s}(X/k)\times_{k}k(\eta_{V})\cong\mathrm{Hilb}^{\mathrm{tw}}_{f(n)t+s}(X_{k(\eta_{V})}/k(\eta_{V})),$
the generic point $k(\eta_{V})$ corresponds to a subscheme
$\psi_{X}^{-1}(\eta_{V})\subset X_{k(\eta_{V})}$ with reduced Hilbert
polynomial $\mathrm{rh}_{V}(t)=f(n)t+s$.
The Severi–Brauer variety $X_{k(\eta_{V})}$ is associated to the central
simple algebra $A_{k(\eta_{V})}$ which, because of the assumption that
$V_{sm}(k)\neq\emptyset$, has index $n$ as well (apply Lemma A.1 to the
Azumaya algebra $A\otimes_{k}\mathcal{O}_{V,x}$ where $x\in V_{sm}(k)$). Now
the claim follows from Proposition 3.9. ∎
Of particular interest is the following component of
$\mathrm{Hilb}^{\mathrm{tw}}_{nt}(X/k)$.
###### Definition 3.15.
Let $\mathrm{Ell}_{n}(X)\subset\mathrm{Hilb}^{\mathrm{tw}}_{nt}(X/k)$ denote
the union of the irreducible components $V$ of
$\mathrm{Hilb}^{\mathrm{tw}}_{nt}(X/k)$ whose generic fiber
$\psi_{X}^{-1}(\eta_{V})$ is a smooth and geometrically connected curve of
genus $1$.
The scheme $\mathrm{Ell}_{n}(X)$ is never empty. To see this, note that over
an algebraic closure $\overline{k}$ one can always find a smooth genus 1 curve
$C$. If $D$ is any closed point on $C$, then the complete linear system
associated to the divisor $nD$ gives a closed immersion to
$\mathbb{P}^{n-1}_{\overline{k}}\subset X_{\overline{k}}$ whenever $n\geq 3$.
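(A brief justification, using only standard facts about genus $1$ curves: by
Riemann–Roch, $h^{0}(C,\mathcal{O}_{C}(nD))=n$ for $n\geq 1$, so the complete
linear system $|nD|$ maps $C$ to $\mathbb{P}^{n-1}_{\overline{k}}$, and for
$n\geq 3$ the divisor $nD$ has degree at least $2g+1=3$ and is therefore very
ample.)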
Let $x$ be any closed point of $H=\mathrm{Hilb}_{nt}^{\mathrm{tw}}(X/k)$
defined by a complete linear system as above. By base change we get a morphism
$\psi_{X}|_{\mathscr{C}}:\mathscr{C}=\psi_{X}^{-1}(\mathrm{Spec}(\mathcal{O}_{H,x}))\rightarrow\mathrm{Spec}(\mathcal{O}_{H,x})$
with special fiber $\mathscr{C}_{k(x)}$ geometrically isomorphic with the
curve $C$. By [Sta19, Tag 01V9] there is an open
$U^{\prime}\subset\mathrm{Spec}(\mathcal{O}_{H,x})$ and an open
$U\subset\mathscr{C}$ containing $\mathscr{C}_{k(x)}\subset U$ such that the
restriction
$\psi_{X}|_{U}:U\rightarrow U^{\prime}$
is smooth. The complement $\mathscr{C}\setminus U$ is closed in $\mathscr{C}$
and, since the map $\psi_{X}|_{\mathscr{C}}$ is proper, the image
$\psi_{X}|_{\mathscr{C}}(\mathscr{C}\setminus U)$ is closed in
$\mathrm{Spec}(\mathcal{O}_{H,x})$.
Now every nonempty closed subset of $\mathrm{Spec}(\mathcal{O}_{H,x})$
contains $x$ and, by construction, the set
$\psi_{X}|_{\mathscr{C}}(\mathscr{C}\setminus U)$ is closed and does not
contain $x$. Thus $\psi_{X}|_{\mathscr{C}}(\mathscr{C}\setminus U)=\emptyset$
and necessarily $\mathscr{C}=U$ so that $\psi_{X}|_{\mathscr{C}}$ is smooth.
Since $\psi_{X}|_{\mathscr{C}}$ is a smooth, proper, flat, and finitely
presented morphism it follows from [Sta19, Tag 0E0N] that
$\psi_{X}|_{\mathscr{C}}$ has geometrically connected fibers. The generic
fiber $\psi_{X}^{-1}(\eta_{V})$ for any irreducible component $V\subset H$
containing $x$ is therefore a smooth and geometrically connected curve of
genus $1$ implying that $V\subset\mathrm{Ell}_{n}(X)$.
###### Theorem 3.16.
Assume that $A$ is a division $k$-algebra of index $n$. Then the following are
true:
1. (1)
$\mathrm{Ell}_{n}(X)$ is geometrically irreducible with
$\mathrm{dim}(\mathrm{Ell}_{n}(X))=n^{2}$;
2. (2)
if $A$ is cyclic, or if $A$ contains a maximal subfield $F\subset A$
whose Galois closure $E/k$ is a Galois extension of degree $2n$ with dihedral
Galois group, then $\mathrm{Ell}_{n}(X)(k)\neq\emptyset$.
###### Proof.
We first prove (2). In either case, let $x$ be a point of $X$ with $k(x)$
either a cyclic Galois extension $E/k$ of $k$ of degree $n$ (in the first
case) or a maximal subfield $k(x)\subset A$ with Galois closure $E/k$ a
dihedral Galois extension of degree $2n$ (in the second case). The field $E$
splits $X$ and $k(x)\otimes_{k}E\cong E^{\oplus n}$ either way. Let
$H\subset\mathrm{Gal}(E/k)$ be a cyclic subgroup of order $n$. Pick an
$E$-rational point $p$ in $x_{E}$ and let $L$ be the line through $p$ and $gp$
for any generator $g$ of $H$.
The union of the $H$-translates of $L$ forms a $\mathrm{Gal}(E/k)$-orbit which
descends to a scheme $V\subset X$ defined over $k$. Geometrically, the scheme
$V_{\overline{k}}$ is an $n$-gon of lines through the points
$x_{\overline{k}}$. Hence $V$ has $\mathrm{rh}_{V}(t)=nt$. We claim the point
defined by $V$ in $\mathrm{Hilb}_{nt}^{\mathrm{tw}}(X/k)$ is contained in
$\mathrm{Ell}_{n}(X)$.
Actually, as $V_{\overline{k}}$ is the scheme-theoretic union of lines we can
use the exact sequence [Sta19, Tag 0C4J]
(7) $0\rightarrow\mathcal{O}_{C\cup
D}\rightarrow\mathcal{O}_{C}\oplus\mathcal{O}_{D}\rightarrow\mathcal{O}_{C\cap
D}\rightarrow 0$
where $V_{\overline{k}}=C\cup D$, with $C$ a chain of $n-1$ lines and $D$ a
line closing the $n$-gon, to compute that $h^{1}(V,\mathcal{O}_{V})=1$ and
that $h^{1}(V_{\overline{k}},\mathcal{O}_{V_{\overline{k}}}(1))=0$ by
tensoring the exact sequence with $\mathcal{O}_{X_{\overline{k}}}(1)$. Since
$V_{\overline{k}}$ has lci singularities, one can apply [Har10, Proposition
29.9] to find that $V_{\overline{k}}$ is smoothable.
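(As a sketch of the bookkeeping behind this computation: the chain $C$ of
$n-1$ lines is a tree of $\mathbb{P}^{1}$'s, so $\chi(\mathcal{O}_{C})=1$;
likewise $\chi(\mathcal{O}_{D})=1$; and $C\cap D$ consists of two reduced
points. The sequence (7) then gives
$\chi(\mathcal{O}_{V_{\overline{k}}})=\chi(\mathcal{O}_{C})+\chi(\mathcal{O}_{D})-\chi(\mathcal{O}_{C\cap D})=1+1-2=0,$
and since $V_{\overline{k}}$ is connected and reduced we have
$h^{0}(V_{\overline{k}},\mathcal{O}_{V_{\overline{k}}})=1$, whence
$h^{1}(V_{\overline{k}},\mathcal{O}_{V_{\overline{k}}})=1$ as claimed.)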
More precisely, we find that $\mathrm{Hilb}^{\mathrm{tw}}_{nt}(X/k)$ is smooth
at the $k$-rational point defined by $V\subset X$ and, over an algebraic
closure, there is an integral curve passing through both the point
corresponding to $V_{\overline{k}}\subset X_{\overline{k}}$ and the subset of
$\mathrm{Ell}_{n}(X_{\overline{k}})$ parametrizing smooth and connected
curves. In particular, the embedding $V\subset X$ defines a point of
$\mathrm{Ell}_{n}(X)(k)$ completing the proof of (2).
Now we prove part (1). First, if $n=3$, note that
$\mathrm{Hilb}^{\mathrm{tw}}_{3t}(X/k)$ is a form of $\mathbb{P}^{9}$. In this
case, the central division $k$-algebra $A$ associated to $X$ is cyclic
[KMRT98, Theorem 19.2]. By part (2) above, the set
$\mathrm{Hilb}^{\mathrm{tw}}_{3t}(X/k)(k)\neq\emptyset$ is nonempty. Hence
$\mathrm{Ell}_{3}(X)=\mathrm{Hilb}^{\mathrm{tw}}_{3t}(X/k)\cong\mathbb{P}^{9}$.
(Alternatively, one can avoid the use of (2) by using Bertini’s theorem).
So we can assume $n>3$. Let
$U^{\prime}\subset\mathrm{Hilb}_{nt}^{\mathrm{tw}}(X/k)\times_{k}\overline{k}$
be the open set consisting of all smooth and connected curves of degree $n$
and genus $1$. Write $U$ for the image of $U^{\prime}$ inside
$\mathrm{Hilb}_{nt}^{\mathrm{tw}}(X/k)$. Then $U\subset\mathrm{Ell}_{n}(X)$ is
open and irreducible since $U^{\prime}$ is [Ein86, Theorem 8]. We’ll show that
$U$ is dense in $\mathrm{Ell}_{n}(X)$; this will prove the first claim since
$U^{\prime}=U\times_{k}\overline{k}$.
Let $V$ be an irreducible component of $\mathrm{Ell}_{n}(X)$. Since the
generic fiber $\psi_{X}^{-1}(\eta_{V})$ is smooth and geometrically connected,
there is an open subset $W\subset V$ such that for any point $x$ in $W$, the
curve $\psi_{X}^{-1}(x)$ is smooth (see [Gro67, Proposition 17.7.11]) and
geometrically irreducible (by [Sta19, Tag 0559]) of degree $n$ by assumption
and, by Remark 3.6, of genus 1. In particular, we have $W\subset U$ showing
that $U$ is (topologically, but possibly not scheme-theoretically) dense in
$\mathrm{Ell}_{n}(X)$.
The dimension of $\mathrm{Ell}_{n}(X)$ can be determined geometrically, i.e.
over an algebraic closure, and this is done in [Ein86, Theorem 8].
Essentially, if $C\subset X_{\overline{k}}$ is smooth of degree $n$ and genus
1 then one can compute
$h^{0}(C,\mathcal{N}_{C/X_{\overline{k}}})=n^{2}\quad\mbox{and}\quad
h^{1}(C,\mathcal{N}_{C/X_{\overline{k}}})=0$
using the normal bundle sequence (and the Euler sequence for
$X_{\overline{k}}$). This shows both that
$\mathrm{dim}(\mathrm{Ell}_{n}(X))\leq n^{2}$, from Corollary 2.8 (1), and
that $\mathrm{dim}(\mathrm{Ell}_{n}(X))\geq n^{2}$, from Corollary 2.8 (3);
moreover this shows that $\mathrm{Ell}_{n}(X)$ is smooth along $U$. ∎
###### Remark 3.17.
The proof of (2) in Theorem 3.16 is an extension of an argument due to Jason
Starr, cf. [Sta17]. Starr’s original goal was to use the fact that $V$ defines
a smooth $k$-rational point on $\mathrm{Hilb}^{\mathrm{tw}}_{nt}(X/k)$ to
construct a smooth genus 1 curve on a Severi–Brauer variety $X$ over a large
(also called ample) field $k$ (e.g. a $p$-special field or the fraction field
of a Henselian DVR).
We can elaborate on Starr’s result in the setting of Theorem 3.16, i.e. when
$A$ is a division $k$-algebra satisfying the assumptions of (2). Indeed, the
scheme $\mathrm{Ell}_{n}(X)$ is projective so we can construct a smooth curve
$C$ with a $k$-rational point mapping to the $k$-point $x$ associated to the
$n$-gon $V$ constructed in the proof of Theorem 3.16 (2) as follows.
Let $y$ be any point of $\mathrm{Ell}_{n}(X)$ whose associated subscheme
$C\subset X$ is a smooth geometrically connected curve of genus 1. Let
$I=\{x,y\}$. Consider the blowup $\mathrm{Bl}_{I}(\mathrm{Ell}_{n}(X))$ with
center the points $I\subset\mathrm{Ell}_{n}(X)$. For any embedding
$\mathrm{Ell}_{n}(X)\subset\mathbb{P}^{N}$, we get an embedding of the blowup
$\mathrm{Bl}_{I}(\mathrm{Ell}_{n}(X))\subset\mathbb{P}^{N}\times\mathbb{P}^{n^{2}-1}\subset\mathbb{P}^{M}$
by composing with the Segre embedding. Now a general linear section of the
correct codimension intersects $\mathrm{Bl}_{I}(\mathrm{Ell}_{n}(X))$ in a
curve (smooth over $x$) by Bertini’s theorem [Jou83, Théorème 6.10 et
Corollaire 6.11]. A general linear section of the same codimension also
intersects the exceptional divisor
$\mathbb{P}^{n^{2}-1}\subset\mathrm{Bl}_{I}(\mathrm{Ell}_{n}(X))$ over $x$ in
a $k$-rational point and the exceptional divisor over $y$ in some number of
points. So we can choose a linear section
$E\subset\mathrm{Bl}_{I}(\mathrm{Ell}_{n}(X))$ doing all three things at once.
The normalization $E^{\nu}$ of $E$ is a curve with all the stated properties.
Over a large (also called ample) field $k$, any irreducible curve having a
smooth $k$-rational point has infinitely many $k$-rational points. Thus the
curve $E^{\nu}$ has infinitely many $k$-rational points and the image along
the composition of the normalization and blowdown
$E^{\nu}\rightarrow
E\rightarrow\mathrm{Bl}_{I}(\mathrm{Ell}_{n}(X))\rightarrow\mathrm{Ell}_{n}(X)$
has nontrivial intersection with the open subset of $\mathrm{Ell}_{n}(X)$
consisting of smooth and geometrically connected genus 1 curves.
###### Example 3.18.
If $A$ is a cyclic division $k$-algebra of index $n$, there are many field
extensions $F/k$ where $X_{F}$ contains a smooth geometrically connected curve
of genus 1 and where the algebra $A_{F}$ has index $n$. When $n=p^{r}$ is a
power of a prime $p$, Remark 3.17 shows this holds for a minimal $p$-special
field $F/k$ contained in an algebraic closure $\overline{k}/k$. When the index
$n$ is arbitrary one can instead use the field $k((t))$, which is the fraction
field of a Henselian DVR, and apply Remark 3.17. The index remains $n$ here
since $A_{k((t))}$ specializes to $A$ (Lemma A.1).
One can also construct “generic” examples for an arbitrary division algebra
$A$ of index $n$ as follows. If $n=p$ is prime, then one can first replace the
base field $k$ by an extension $F/k$ with $A_{F}$ a cyclic division algebra of
index $p$ if necessary. Next, one again extends the base field but now to the
function field $L=F(\mathrm{Ell}_{p}(X_{F}))$ of the scheme
$\mathrm{Ell}_{p}(X_{F})$. Since $\mathrm{Ell}_{p}(X_{F})$ has a smooth
$F$-rational point by Theorem 3.16, the algebra $A_{L}$ is nonsplit (hence of
index $p$) by [GS17, Lemma 5.4.7]. The generic fiber
$\psi_{X_{F}}^{-1}(\eta_{\mathrm{Ell}_{p}(X_{F})})$ is then a smooth and
geometrically connected genus 1 curve on $X_{L}$.
If $n$ is not prime, one can use [RTY08] to get a field extension $F/k$ with
$A_{F}$ cyclic of index $n$ and with the restriction
$\mathrm{Br}(k)\rightarrow\mathrm{Br}(F)$ an injection. Setting
$L=F(\mathrm{Ell}_{n}(X_{F}))$ then, as above, $X_{L}$ contains a smooth and
geometrically connected curve of genus 1. In this situation [GS17, Lemma
5.4.7] shows that the restriction $\mathrm{Br}(F)\rightarrow\mathrm{Br}(L)$ is
an injection and Lemma A.1 below shows that $A_{L}$ remains index $n$
(actually, both statements can be obtained from Lemma A.1).
The period of any curve $C$ constructed in this way is equal to the index $n$
of the division $k$-algebra $A$. The period is divisible by $n$, by Lemma 3.1,
since $C\subset X_{L}$ and $X_{L}$ has index $n$. But the map $C\subset X_{L}$
defines a rational point in $\mathbf{Pic}^{n}_{C/L}(L)$ so that $C$ has period
dividing $n$. The index of $C$ is then a multiple of $n$ in the interval
$n\leq\mathrm{ind}(C)\leq n^{2}$.
For an example when one can explicitly determine the index, note that there
exist smooth and geometrically connected curves $E$ of genus 1 with period $n$
and index $n^{2}$ over some number fields $k$ by [CS10]. For any Severi–Brauer
variety $X$ containing one of these curves $E\subset X$ as a smooth closed
subscheme, the generic curve $C$ constructed above has index $n^{2}$ since $C$
specializes to $E$ through a sequence of DVRs (cf. the proof of Lemma A.1) so
that one can apply [GLL13, Theorem 8.2].
However, if the exponent of $A$ is strictly less than $n$, the index of any
generic curve $C$, constructed as above, is then strictly less than $n^{2}$.
Indeed, if $H$ is a divisor of $X_{L}$ of degree $\mathrm{exp}(A)$ then
$[C\cap H]=[C][H]=m[p]$
holds in $\mathrm{CH}_{0}(X_{L})$ for some point $p$ of degree
$\mathrm{ind}(A)$ and $m=\mathrm{exp}(A)$. The left hand side of this equation
has degree a multiple of the index of $C$ whereas the right hand side has
degree $\mathrm{ind}(A)\mathrm{exp}(A)<\mathrm{ind}(A)^{2}$.
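(Explicitly: since $[C\cap H]$ is a zero-cycle supported on $C$, its degree is
a multiple of $\mathrm{ind}(C)$, so
$\mathrm{ind}(C)\leq\mathrm{deg}(m[p])=\mathrm{exp}(A)\,\mathrm{ind}(A)<\mathrm{ind}(A)^{2}=n^{2},$
giving the strict bound $\mathrm{ind}(C)<n^{2}$.)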
## Appendix A On Azumaya algebras
###### Lemma A.1.
Let $R$ be a Noetherian regular local ring with maximal ideal $\mathfrak{m}$,
residue field $k=R/\mathfrak{m}$, and fraction field $F$. Suppose that $A$ is
an Azumaya $R$-algebra. Then there is an inequality
$\mathrm{ind}(A_{k})\leq\mathrm{ind}(A_{F})$.
###### Proof.
We consider the $R$-schemes $X_{m}=\mathbf{SB}_{m}(A)$ which are étale forms
of the Grassmannian $R$-schemes $\mathbf{Gr}_{R}(m,n)$, where $n$ is the
square root of the rank of $A$, and for varying $m$. The $F$ and $k$ fibers of
the structure map over $R$ are canonically
$\mathbf{SB}_{m}(A_{F})\cong\mathbf{SB}_{m}(A)\times_{R}F\quad\mbox{and}\quad\mathbf{SB}_{m}(A_{k})\cong\mathbf{SB}_{m}(A)\times_{R}k,$
which have an $F$-rational point, or a $k$-rational point respectively, if and
only if the index $\mathrm{ind}(A_{F})$, or $\mathrm{ind}(A_{k})$
respectively, divides $m$ [Bla91, Proposition 3]. We’ll show that the
assumption $R$ is regular guarantees that
$\mathbf{SB}_{m}(A_{k})(k)\neq\emptyset$ whenever
$\mathbf{SB}_{m}(A_{F})(F)\neq\emptyset$.
For this, we first note that $R$ admits a sequence of discrete valuation rings
$R_{0},...,R_{t}$ with maximal ideals $\mathfrak{m}_{0},...,\mathfrak{m}_{t}$
for some $t\geq 0$ with the following properties:
1. (1)
$\mathrm{Frac}(R_{0})=F$,
2. (2)
$R_{i}/\mathfrak{m}_{i}\cong\mathrm{Frac}(R_{i+1})$
3. (3)
$R_{t}/\mathfrak{m}_{t}\cong k$.
One can take a regular sequence $(a_{0},...,a_{t-1})$ of generators for
$\mathfrak{m}$ and define $R_{i}=(R/(a_{0},...,a_{i-1}))_{(a_{i})}$ (cf.
[Sta19, Tag 00NQ, Tag 0AFS]). Now the valuative criterion for properness
[Har66, Theorem 4.7] shows
$(X_{m})_{R_{i}}(\mathrm{Frac}(R_{i}))\neq\emptyset\implies(X_{m})_{R_{i+1}}(\mathrm{Frac}(R_{i+1}))\neq\emptyset.$
One can conclude by induction. ∎
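(As a concrete illustration of this construction, not taken from the original
argument: for $R=k[[x,y]]$ with regular sequence $(x,y)$ one gets
$R_{0}=k[[x,y]]_{(x)}\quad\mbox{and}\quad R_{1}=\bigl(k[[x,y]]/(x)\bigr)_{(y)}\cong k[[y]],$
so that $\mathrm{Frac}(R_{0})=\mathrm{Frac}(k[[x,y]])=F$, the residue field of
$R_{0}$ is $k((y))\cong\mathrm{Frac}(R_{1})$, and the residue field of $R_{1}$
is $k$, as required by properties (1)–(3).)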
###### Example A.2.
The assumption that $R$ is regular cannot be dropped from the statement of
Lemma A.1. Here’s an example from Qixiao Ma. Fix a field $k$. Let $X/k$ be any
Severi–Brauer variety having $X(k)=\emptyset$. Let $x\in X$ be a closed point.
Consider the pushout $\tilde{X}$ in the cocartesian diagram below.
$\begin{array}{ccc}x&\longrightarrow&\mathrm{Spec}(k)\\ \downarrow&&\downarrow\\ X&\longrightarrow&\tilde{X}\end{array}$
Let $\tilde{x}\in\tilde{X}$ denote the canonical (singular) $k$-rational point
of $\tilde{X}$ and $\mathcal{O}_{\tilde{X},\tilde{x}}$ the local ring. If $A$
is the central simple algebra associated to $X$, then the Azumaya algebra
$A\otimes_{k}\mathcal{O}_{\tilde{X},\tilde{x}}$ is split over the generic
point and nontrivial over the closed point by construction.
## References
* [AA21] Benjamin Antieau and Asher Auel, _Explicit descent on elliptic curves and splitting Brauer classes_, preprint, 2021.
* [Bla91] Altha Blanchet, _Function fields of generalized Brauer-Severi varieties_ , Communications in Algebra 19 (1991), no. 1, 97–118.
* [CK12] Mirela Ciperiani and Daniel Krashen, _Relative Brauer groups of genus 1 curves_ , Israel J. Math. 192 (2012), no. 2, 921–949. MR 3009747
* [CS10] Pete L. Clark and Shahed Sharif, _Period, index and potential. III_ , Algebra Number Theory 4 (2010), no. 2, 151–174. MR 2592017
* [dJH12] Aise Johan de Jong and Wei Ho, _Genus one curves and Brauer-Severi varieties_ , Math. Res. Lett. 19 (2012), no. 6, 1357–1359. MR 3091612
* [Ein86] Lawrence Ein, _Hilbert scheme of smooth space curves_ , Ann. Sci. École Norm. Sup. (4) 19 (1986), no. 4, 469–478. MR 875083
* [GLL13] Ofer Gabber, Qing Liu, and Dino Lorenzini, _The index of an algebraic variety_ , Invent. Math. 192 (2013), no. 3, 567–626. MR 3049930
* [Gro67] Alexander Grothendieck, _Éléments de géométrie algébrique. IV. Étude locale des schémas et des morphismes de schémas IV_ , Inst. Hautes Études Sci. Publ. Math. (1967), no. 32, 361. MR 238860
* [GS17] Philippe Gille and Tamás Szamuely, _Central simple algebras and Galois cohomology_ , Cambridge Studies in Advanced Mathematics, vol. 165, Cambridge University Press, Cambridge, 2017, Second edition of [MR2266528]. MR 3727161
* [Har66] Robin Hartshorne, _Connectedness of the Hilbert scheme_ , Inst. Hautes Études Sci. Publ. Math. (1966), no. 29, 5–48. MR 213368
* [Har77] by same author, _Algebraic geometry_ , Springer-Verlag, New York-Heidelberg, 1977, Graduate Texts in Mathematics, No. 52. MR 0463157
* [Har81] Joe Harris, _A bound on the geometric genus of projective varieties_ , Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) 8 (1981), no. 1, 35–68. MR 616900
* [Har10] Robin Hartshorne, _Deformation theory_ , Graduate Texts in Mathematics, vol. 257, Springer, New York, 2010. MR 2583634
* [Jou83] J.-P. Jouanolou, _Théorèmes de Bertini et applications_ , Progress in Mathematics, vol. 42, Birkhäuser Boston, Inc., Boston, MA, 1983. MR 725671
* [KM19] Nikita Karpenko and Eoin Mackall, _On the K-theory coniveau epimorphism for products of Severi-Brauer varieties_ , Ann. K-Theory 4 (2019), no. 2, 317–344, With appendices by Mackall. MR 3990787
* [KMRT98] M.-A. Knus, A. Merkurjev, M. Rost, and J.-P. Tignol, _The book of involutions_ , American Mathematical Society Colloquium Publications, vol. 44, American Mathematical Society, Providence, RI, 1998, With a preface in French by J. Tits. MR 1632779
* [Kol96] János Kollár, _Rational curves on algebraic varieties_ , Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics], vol. 32, Springer-Verlag, Berlin, 1996. MR 1440180
* [Mac21] Eoin Mackall, _Algebraic connective $K$-theory of a Severi-Brauer variety with prescribed reduced behavior_, Preprint (2021), 1–14, Available on author’s webpage: https://www.eoinmackall.com/s/revconnectivek2.pdf.
* [Mum66] David Mumford, _Lectures on curves on an algebraic surface_ , Annals of Mathematics Studies, No. 59, Princeton University Press, Princeton, N.J., 1966, With a section by G. M. Bergman. MR 0209285
* [Nit05] Nitin Nitsure, _Construction of Hilbert and Quot schemes_ , Fundamental algebraic geometry, Math. Surveys Monogr., vol. 123, Amer. Math. Soc., Providence, RI, 2005, pp. 105–137. MR 2223407
* [Qui73] Daniel Quillen, _Higher algebraic $K$-theory. I_, Algebraic $K$-theory, I: Higher $K$-theories (Proc. Conf., Battelle Memorial Inst., Seattle, Wash., 1972), pp. 85–147, Lecture Notes in Math., Vol. 341, Springer, Berlin, 1973. MR 0338129
* [RS96] Louis H. Rowen and David J. Saltman, _Semidirect product division algebras_ , Israel J. Math. 96 (1996), no. part B, 527–552. MR 1433706
* [RTY08] U. Rehmann, S. V. Tikhonov, and V. I. Yanchevskiĭ, _Symbol algebras and the cyclicity of algebras after a scalar extension_ , Fundam. Prikl. Mat. 14 (2008), no. 6, 193–209. MR 2533621
* [Sal21] David J. Saltman, _Genus 1 curves in Severi–Brauer surfaces_, preprint, 2021.
* [Sta17] Jason Starr, _For each $n$: show there is a genus $1$ curve over some field $k$ with no points of degree less than $n$, (simple argument / best reference)?_, MathOverflow, 2017, https://mathoverflow.net/q/286458 (version: 2017-11-20).
* [Sta19] The Stacks project authors, _The stacks project_ , https://stacks.math.columbia.edu, 2019.
(arXiv:2107.12434; author: Eoin Mackall; submitted 2021-07-26; license: CC BY 4.0.)

2107.12435
# A Comprehensive Study on Colorectal Polyp Segmentation with ResUNet++,
Conditional Random Field and Test-Time Augmentation
Debesh Jha, Pia H. Smedsrud, Dag Johansen, Thomas de Lange, Håvard D.
Johansen,
Pål Halvorsen, and Michael A. Riegler A preliminary version of this paper was
presented in [1].Manuscript received xxxx-xx-xx; revised xxxx-xx-xx; accepted
xxxx-xx-xx; Date of Publication xxxx-xx-xx. The authors are with the
SimulaMet, Norway, Augere Medical AS, Norway, UiT The Arctic University of
Norway, University of Oslo, Norway, Oslo Metropolitan University, Norway,
Sahlgrenska University Hospital, Mölndal, Sweden, and Bærum Hospital, Vestre
Viken, Norway (Corresponding author: Debesh Jha (e-mail: [email protected]))
###### Abstract
Colonoscopy is considered the gold standard for detection of colorectal cancer
and its precursors. Existing examination methods are, however, hampered by a
high overall miss-rate, and many abnormalities are left undetected. Computer-
Aided Diagnosis systems based on advanced machine learning algorithms are
touted as a game-changer that can identify regions in the colon overlooked by
the physicians during endoscopic examinations, and help detect and
characterize lesions. In previous work, we have proposed the ResUNet++
architecture and demonstrated that it produces more efficient results compared
with its counterparts U-Net and ResUNet. In this paper, we demonstrate that
further improvements to the overall prediction performance of the ResUNet++
architecture can be achieved by using Conditional Random Field (CRF) and Test-
Time Augmentation (TTA). We have performed extensive evaluations and validated
the improvements using six publicly available datasets: Kvasir-SEG, CVC-
ClinicDB, CVC-ColonDB, ETIS-Larib Polyp DB, ASU-Mayo Clinic Colonoscopy Video
Database, and CVC-VideoClinicDB. Moreover, we compare our proposed
architecture and resulting model with other State-of-the-art methods. To
explore the generalization capability of ResUNet++ on different publicly
available polyp datasets, so that it could be used in a real-world setting, we
performed an extensive cross-dataset evaluation. The experimental results show
that applying CRF and TTA improves the performance on various polyp
segmentation datasets both on the same dataset and cross-dataset. To check the
model’s performance on difficult to detect polyps, we selected, with the help
of an expert gastroenterologist, $196$ sessile or flat polyps that are less
than ten millimeters in size. This additional data has been made available as
a subset of Kvasir-SEG. Our approaches showed good results for flat or sessile
and smaller polyps, which are known to be one of the major reasons for high
polyp miss-rates. This is one of the significant strengths of our work and
indicates that our methods should be investigated further for use in clinical
practice.
###### Index Terms:
Colonoscopy, polyp segmentation, ResUNet++, conditional random field, test-
time augmentation, generalization
## I Introduction
Cancer is a primary health problem of contemporary society, with colorectal
cancer (CRC) being the third most prevailing type in terms of cancer incidence
and second in terms of mortality globally [2]. Colorectal polyps are the
precursors for the CRC. Early detection of polyps through high-quality
colonoscopy and regular screening are cornerstones for the prevention of
colorectal cancer [3], since adenomas can be found and resected before
transforming to cancer and subsequently reducing CRC morbidity and mortality.
Figure 1: Example images showing the variations in shape, size, color, and
appearance of polyps from the Kvasir-SEG [4].
Despite the success of colonoscopy in lowering the cancer burden, the
estimated adenoma miss-rate is around 6-27% [5]. A recent pooled analysis of 8
randomized tandem colonoscopy studies showed that polyps smaller than $10$ mm,
as well as sessile and flat polyps [6], are the ones most often missed [7].
Another reason polyps are missed may be that the polyp either was not in the
visual field or was not recognized despite being in the visual field, owing to
fast withdrawal of the colonoscope [8]. The adenoma miss-rate could be reduced
by improving the quality of bowel preparation, applying optimal observation
techniques, and ensuring a colonoscopy withdrawal time of at least six minutes
[8]. Moreover, adenoma detection rate can also be improved by using advanced
techniques or devices, for example, auxiliary imaging devices, colonoscopes
with increased field of view, add-on-devices, and colonoscopes with integrated
inflatable, reusable balloon [3].
The structure and characteristics of a colorectal polyp change over time
across its development stages. Polyps have different shapes, sizes, colors, and
appearances, which makes them challenging to analyze (see Figure 1). Moreover,
there are challenges such as the presence of image artifacts like blurriness,
surgical instruments, intestinal contents, flares, and low-quality images that
can cause errors during segmentation.
Polyp segmentation is of crucial relevance in clinical applications to focus
on the particular area of the potential lesion, extract detailed information,
and possibly remove the polyp if necessary. A Computer-Aided Diagnosis (CADx)
system for polyp segmentation can assist in monitoring and increasing the
diagnostic ability by increasing the accuracy, precision, and reducing manual
intervention. Moreover, it could lead to fewer segmentation errors than
subjective manual delineation. Such systems could reduce the doctors’ workload and
improve clinical workflow. Lumen segmentation helps clinicians navigate
through the colon during screening, and it can be useful to establish a
quality metric for the explored colon wall [9]. Thus, an automated CADx system
could be used as a supporting tool to reduce the miss-rate of the overlooked
polyps.
A CADx system could be used in a clinical setting if it addresses two common
challenges: (i) Robustness (i.e., the ability of the model to consistently
perform well on both easy and challenging images), and (ii) Generalization
(i.e., a model trained on specific intervention in a specific hospital should
generalize across different hospitals) [10]. Addressing these challenges is
key to designing a powerful semantic segmentation system for medical images.
The generalization capability reflects the usefulness of the model across
different datasets coming from different hospitals, and must finally be
confirmed in multi-center randomized trials. A well-generalizing model would
be a significant step toward an acceptable clinical system. A cross-dataset
evaluation is crucial to check the model on unseen polyps from other sources
and to test its generalizability.
Toward developing a robust CADx system, we have previously proposed ResUNet++
[1]: an initial encoder-decoder based deep-learning architecture for
segmentation of medical images, which we trained, validated, and tested on the
publicly available Kvasir-SEG [4] and CVC-ClinicDB [11] datasets. In this
paper, we describe how the ResUNet++ architecture can be extended by applying
Conditional Random Field (CRF) and Test-Time Augmentation (TTA) to further
improve its prediction performance on segmented polyps. We have tested our
approaches on six publicly available datasets, including both image datasets
and video datasets. We have intentionally incorporated video datasets from
colonoscopies to underline the clinical significance. Still-image datasets
usually contain at least one polyp per frame, whereas video frames may contain
either polyps or normal mucosa. Therefore, we have tested the model on these video
datasets and provided a new benchmark for the segmentation task. We have used
extensive data augmentation to increase the training sample and used a
comprehensive hyperparameter search to find optimal hyperparameters for the
dataset. We have provided a more in-depth evaluation by including more
evaluation metrics, and added justification for the ResUNet++, CRF, and TTA.
Additionally, we have performed extensive experiments on the cross-data
evaluation, in-depth analysis of best performing and worst performing cases,
and comparison of the proposed method with other recent works. Moreover, we
have pointed out the necessity of addressing the miss-detection of flat and
sessile polyps, and showed that our combined approach can detect such
overlooked polyps with high efficiency, which could be of significant
importance in clinical settings. For this, we have also publicly released a
dataset consisting of sessile or flat polyps.
emphasized the use of cross-dataset evaluation by training and testing the
model with images coming from various sources to achieve the generalizability
goal.
In summary, the main contributions are as follows:
1.
We have extended the ResUNet++ deep-learning architecture [1] for automatic
polyp segmentation with CRF and TTA to achieve better performance. The
quantitative and qualitative results show that applying CRF and TTA is
effective.
2.
We validate the extended architecture on a large range of datasets, i.e.,
Kvasir-SEG [4], CVC-ClinicDB [11], CVC-ColonDB [12], ETIS-Larib [13], ASU-Mayo
Clinic Colonoscopy Video Database [14] and CVC-VideoClinicDB [15, 16], and we
compare our proposed approaches with recent State-of-the-art (SOTA)
algorithms and set a new baseline. Moreover, we have compared our work with
other recent works, which is often lacking in comparable studies.
3.
We selected $196$ flat or sessile polyps that are usually missed during
colonoscopy examination [7] from the Kvasir-SEG with the help of an expert
gastroenterologist. We have conducted experiments on this separate dataset to
show how well our model performs on challenging polyps. Moreover, we release
these polyp images and segmentation masks as a part of the Kvasir-SEG dataset
so that researchers can build novel architectures and improve the results.
4.
Our model better detects smaller and flat or sessile polyps, which are
frequently missed during colonoscopy [7]; this is a major strength compared to
existing works.
5.
In clinical practice, generalizable models are essential to cover the target
patient population. Our work focuses on generalizability, which has previously
not been much explored in the community. To promote generalizable Deep Learning (DL)
models, we have trained our models on Kvasir-SEG and CVC-ClinicDB and tested
and compared the results on five publicly available, diverse, unseen polyp
datasets. Moreover, we have mixed two diverse datasets and conducted further
experiments on other unseen datasets to show the behaviour of the model on the
images captured using different devices.
## II Related Work
Over the past decades, researchers have made several efforts at developing
CADx prototypes for automated polyp segmentation. Most of the prior polyp
segmentation approaches were based on analyzing either the polyp’s edge or its
texture. More recent approaches used Convolutional Neural Network (CNN) and
pre-trained networks. Bernal et al. [11] introduced a novel method for polyp
localization that used WM-DOVA energy maps to accurately highlight polyps,
irrespective of their type and size. Pozdeev et al. [17] presented a
fully automated polyp segmentation framework using pixel-wise prediction based
upon the Fully Convolutional Network (FCN). Bernal et al. [18] hosted the
automatic polyp detection in colonoscopy videos sub-challenge, and later on,
they presented a comparative validation of different methods for automatic
polyp detection and concluded that the SOTA CNN based methods provide the most
promising results.
Akbari et al. [19] used the FCN-8S network and Otsu’s thresholding method for
automated colon polyp segmentation. Wang et al. [20] used the SegNet [21]
architecture to detect polyps. They obtained high sensitivity, specificity,
and receiver operating characteristic (ROC) curve value. Their algorithm could
achieve a speed of 25 frames per second with some latency during real-time
video analysis. Guo et al. [22] used a Fully Convolutional Neural Network
(FCNN) model for the Gastrointestinal Image ANAlysis (GIANA) polyp
segmentation challenge. The proposed method won first place in the 2017 GIANA
challenge for both standard definition (SD) and high definition image and won
second place in the SD image segmentation task in the 2018 GIANA challenge.
Yamada et al. [23] developed a CADx support system that can be used for the
real-time detection of polyps reducing the number of missed abnormalities
during colonoscopy.
Poorneshwaran et al. [24] used a Generative Adversarial Network (GAN) for
polyp image segmentation. Kang et al. [25] used Mask R-CNN, which relies on
ResNet50 and ResNet101, as a backbone structure for automatic polyp detection
and segmentation. Ali et al. [26] presented various detection and segmentation
methods that could classify, segment, and localize artifacts. Additionally,
there are several other recent studies on polyp segmentation [27, 28, 29, 30]
that represent useful steps toward building an automated polyp segmentation
system. There are also works that hypothesize that coupling an existing
architecture with careful post-processing techniques can improve model
performance [1, 31].
From the presented related work, we observe that automatic CADx systems in the
area of polyp segmentation are becoming mature. Researchers are conducting a
variety of studies with different designs ranging from a retrospective study,
prospective study, to post hoc examination of the prospectively obtained
dataset. Some of the models achieve very high performance with smaller
training and test datasets [32, 20, 1]. The algorithms used for building the
models rely on handcrafted, CNN, or pre-trained features from ImageNet [33],
where DL-based algorithms are outperforming and gradually replacing the
traditional handcrafted or machine learning (ML) approaches. Additionally, the
performance of the models improves with the use of advanced DL algorithms
specially designed for the polyp segmentation task or similar biomedical image
segmentation tasks. Moreover, there is growing interest in testing the
proposed architectures on more than one dataset [20, 1].
The main drawback in the field is the minimal effort put into testing the
generalizability of CADx systems, which can be assessed through cross-dataset
tests. Additionally, there is almost no effort towards designing a universal
model that can accurately segment polyps coming from different sources, which
is critical for the development of CADx systems for automated polyp
segmentation. Besides, most current works propose algorithms that are tested
on single, often small, imbalanced, and explicitly handpicked datasets. This
renders conclusions regarding the performance of the algorithms almost useless
(compared to other areas of ML, such as natural-image classification or action
recognition, where the common practice is to test on more than one dataset and
to make source code and datasets publicly available). Additionally, the
datasets used are often not publicly available (restricted and difficult to
access), and the total number of images and videos used is not sufficient to
conclude that the systems are robust and generalizable enough for clinical
trials. For instance, a model can produce segmentation maps with high
sensitivity and precision on a particular dataset and completely fail on
images from another modality. Moreover, existing works often use small
training and test datasets. These limitations make it harder to develop robust
and generalizable systems.
Therefore, we aim to develop a CADx-based support system that can achieve high
performance irrespective of the dataset. To achieve this goal, we have
performed extensive experiments on various colonoscopy image and video
datasets. Additionally, we have mixed datasets from multiple centers and
tested on other diverse unseen datasets, with the goal of building a
generalizable and robust CADx system with as few segmentation errors as
possible. Moreover, we set a new benchmark for the publicly available datasets
that can be improved upon in the future.
Figure 2: ResUNet++ architecture [1]
## III The ResUNet++ Architecture
ResUNet++ is a semantic segmentation deep neural network designed for medical
image segmentation. The backbone of the ResUNet++ architecture is ResUNet
[34], an encoder-decoder network based on U-Net [35]. The proposed
architecture benefits from residual blocks, squeeze-and-excitation blocks
[36], atrous spatial pyramid pooling (ASPP) [37], and attention blocks [38].
What distinguishes ResUNet++ from ResUNet is the use of squeeze-and-excitation
blocks (marked in dark gray) in the encoder, the ASPP block (marked in dark
red) at the bridge and after the decoder, and the attention blocks (marked in
light green) in the decoder (see Figure 2).
In the ResUNet++ model, we introduce the sequence of squeeze and excitation
block to the encoder part of the network. Additionally, we replace the bridge
of ResUNet with ASPP. In the decoder stage, we introduce a sequence of
attention blocks and nearest-neighbor up-sampling, and concatenate the result
with the relevant feature map from the residual block of the encoder through a
skip connection. This process is followed by a residual unit with identity
mapping, as shown in Figure 2.
We also introduce a series of additional skip connections from the residual
unit of the encoder section to the attention block of the decoder section. We
assign $[32,64,128,256,512]$ filters to the successive levels of the encoder
section; these filter combinations achieved the best results in our ResUNet++
experiments. In the decoder section, the filter counts are reversed, and the
sequence becomes $[512,256,128,64,32]$. As the semantic gap between the
feature maps of the encoder and decoder blocks is supposed to decrease, the
number of filters in the convolution layers of the decoder blocks is also
decreased to achieve better semantic coverage. Through this, we ensure that
the overall quality of the feature maps is closer to the ground-truth mask. This is especially
important as the loss in semantic space is likely to decrease, and therefore
it will become more feasible to find a meaningful representation in semantic
space.
The overall ResUNet++ architecture consists of one stem block with three
encoder blocks, an ASPP between the encoder and the decoder, and three decoder
blocks. All the encoder and decoder blocks use the standard residual learning
approach. Skip connections are introduced between encoder and decoder for the
propagation of information. The output of the last decoder block is passed
through the ASPP, followed by a $1\times 1$ convolution and a sigmoid
activation function. All convolutional layers except for the output layer are
batch normalized [39] and are activated by a Rectified Linear Unit (ReLU)
activation function [40]. Finally, we get the output as binary segmentation
maps. A brief explanation of each block is provided in the following sub-
sections.
### III-A Residual Blocks
Training a deep neural network by expanding network depth can potentially
improve overall performance. Nevertheless, simply stacking CNN layers can also
hamper the training process and cause exploding or vanishing gradients during
backpropagation [41]. Residual connections facilitate the training process by
routing the input information directly to the output, preserving the integrity
of the gradient flow. The residual function simplifies the
objective of optimization without any additional parameters and boosts the
performance, which is the inspiration behind the deeper residual-based network
[42]. Equation (1) below shows the working principle.
$\displaystyle y_{n}={F(x_{n},W_{n})+x_{n}}$ (1)
Here, $x_{n}$ is the input, $W_{n}$ the weights, $y_{n}$ the output, and $F(\cdot)$ the residual function. The
residual units consist of numerous combinations of Batch Normalization (BN),
ReLU, and convolution layers. A detailed description of the combinations used
and their impact can be found in the work of He et al. [43]. We have employed
the concept of a pre-activation residual unit in the ResUNet++ architecture
from ResUNet.
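As an illustration of Equation (1), the following sketch implements a pre-activation residual unit in plain NumPy. It is a simplification of the unit used in the network: the two convolutions are reduced to per-pixel $1\times 1$ matrix multiplies, and batch normalization to per-channel standardization; the function name and weight shapes are illustrative only.

```python
import numpy as np

def pre_activation_residual_unit(x, w1, w2, eps=1e-5):
    """Toy pre-activation residual unit y = F(x) + x on an (H, W, C) map.

    Each branch is BN -> ReLU -> "conv", where the conv is a 1x1 per-pixel
    matrix multiply and BN is replaced by per-channel standardization.
    """
    def bn(z):  # stand-in for BatchNorm: per-channel standardization
        mu = z.mean(axis=(0, 1), keepdims=True)
        var = z.var(axis=(0, 1), keepdims=True)
        return (z - mu) / np.sqrt(var + eps)

    relu = lambda z: np.maximum(z, 0.0)
    h = relu(bn(x)) @ w1          # BN -> ReLU -> 1x1 conv
    h = relu(bn(h)) @ w2          # BN -> ReLU -> 1x1 conv
    return h + x                  # identity shortcut: y = F(x) + x

# With zero residual weights the unit reduces exactly to the identity mapping,
# which is the property that keeps gradients flowing through deep stacks.
x = np.random.rand(8, 8, 32)
y = pre_activation_residual_unit(x, np.zeros((32, 32)), np.zeros((32, 32)))
```

The identity shortcut is what makes the optimization objective easier: the unit only needs to learn the residual $F(x_{n},W_{n})$ on top of its input.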
### III-B Squeeze and Excitation block
The squeeze and excitation (SE) block is the building block for the CNN that
re-calibrates channel-wise feature response by explicitly modeling
interdependencies between the channels [36]. The SE block learns the channel
weights through global spatial information that increases the sensitivity of
the effective feature maps, whereas it suppresses the irrelevant feature maps
[1]. The feature maps produced by the convolution have only access to the
local information, meaning they have no access to the global information left
by the local receptive field. To address this limitation, we perform a squeeze
operation on the feature maps using the global average pooling to generate a
global representation. We then use the global representation and perform
sigmoid activation that helps us to learn a non-linear interaction between the
channels, and capture the channel-wise dependencies. Here, the sigmoid
activation output acts as a simple gating mechanism that ensures us to
adaptively recalibrate the feature maps produced by the convolution. The
adaptive recalibration or excitation operation explicitly models the
interdependencies between the feature channels. The SE net has the capability
of generalizing exceptionally well across various datasets [36]. In the
ResUNet++ architecture, we have stacked the SE block together with the
residual block for improving the performance of the network, increasing the
effective generalization across different medical datasets.
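The squeeze-and-excitation operation described above can be sketched in NumPy as follows. The function name, weight shapes, and reduction ratio are illustrative; in the actual network the SE blocks are learned layers inside the Keras model.

```python
import numpy as np

def squeeze_excite(x, w1, b1, w2, b2):
    """Toy squeeze-and-excitation block on an (H, W, C) feature map.

    Squeeze: global average pooling yields one descriptor per channel.
    Excite:  a two-layer bottleneck (ReLU, then sigmoid) yields per-channel
             gates in (0, 1) that rescale the input channel-wise.
    """
    z = x.mean(axis=(0, 1))                      # squeeze: (C,)
    s = np.maximum(z @ w1 + b1, 0.0)             # bottleneck + ReLU
    s = 1.0 / (1.0 + np.exp(-(s @ w2 + b2)))     # sigmoid gate: (C,)
    return x * s                                 # channel-wise recalibration

rng = np.random.default_rng(0)
C, r = 32, 8                                     # r is the reduction ratio
x = rng.random((16, 16, C))
s_out = squeeze_excite(x,
                       rng.standard_normal((C, C // r)), np.zeros(C // r),
                       rng.standard_normal((C // r, C)), np.zeros(C))
```

Because the sigmoid gate lies strictly in $(0,1)$, each channel of the output is a damped copy of the input channel, with informative channels suppressed less than irrelevant ones once the weights are trained.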
### III-C Atrous Spatial Pyramidal Pooling
Since the introduction of Atrous convolution by Chen et al. [44] to control
the field-of-view to capture contextual information at multi-scale precisely,
it has shown promising results for semantic image segmentation. Later, Chen et
al. [45] proposed ASPP, which is a parallel atrous convolution block to
capture multiple-scale information simultaneously. ASPP captures the
contextual information at different scales, and multiple parallel atrous
convolutions with varying rates in the input feature map are fused [45]. In
ResUNet++, we use ASPP as a bridge between the encoder and the decoder
sections, and after the final decoder block. We adopt ASPP in ResUNet++ to
capture the useful multi-scale information between the encoder and the
decoder.
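The idea of parallel atrous branches can be sketched in one dimension as below. The fusion here is a plain sum, whereas the real ASPP concatenates the branch outputs and projects them with a $1\times 1$ convolution; the kernel, rates, and function names are illustrative only.

```python
import numpy as np

def atrous_conv1d(x, w, rate):
    """1-D atrous (dilated) convolution with zero padding ("same" output size).

    A kernel of length k with dilation rate r covers a receptive field of
    (k - 1) * r + 1 samples without adding any parameters.
    """
    k = len(w)
    pad = (k - 1) * rate // 2
    xp = np.pad(x, pad)
    return np.array([sum(w[j] * xp[i + j * rate] for j in range(k))
                     for i in range(len(x))])

def aspp_1d(x, w, rates=(1, 6, 12, 18)):
    """Toy ASPP: parallel atrous branches over the same input, fused by
    summation (the real block concatenates and applies a 1x1 projection)."""
    return sum(atrous_conv1d(x, w, r) for r in rates)

x = np.arange(16, dtype=float)
w = np.array([1.0, 1.0, 1.0])     # illustrative 3-tap kernel
y = aspp_1d(x, w)
```

Each branch sees the same input at a different scale: the rate-1 branch aggregates immediate neighbors, while the rate-18 branch spans almost the whole signal, which is how ASPP captures multi-scale context simultaneously.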
### III-D Attention Units
Chen et al. [46] proposed an attention model that can segment natural images
by multi-scale input processing. The attention model improves over average-
and max-pooling baselines and allows visualizing feature importance at
different scales and positions [46]. With the success of
attention mechanisms, various medical image segmentation methods have
integrated an attention mechanism into their architecture [47, 1, 48, 49]. The
attention block gives importance to the subset of the network to highlight the
most relevant information. We believe that the attention mechanism in our
architecture will boost the effectiveness of the feature maps of the network
by capturing the relevant semantic class and filtering out irrelevant
information. Motivated by the recent achievement of attention mechanism in the
field of medical image segmentation and computer vision in general, we have
integrated an attention block at the decoder part of the ResUNet++ model.
### III-E Conditional Random Field
Conditional Random Field (CRF) is a popular statistical modeling method used
when the class labels for different inputs are not independent (e.g., image
segmentation tasks). CRF can model useful geometric characteristics like
shape, region connectivity, and contextual information [50]. Therefore, the
use of CRF can further improve the model’s capability to capture contextual
information of the polyps and thus improve overall results. We apply a dense
CRF as a post-processing step to the test predictions to obtain more refined
segmentation results.
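As a toy illustration of the mean-field inference underlying CRF post-processing, the sketch below refines a binary probability map with a Potts pairwise term over 4-neighborhoods. This is a deliberate simplification: the dense CRF we actually use has fully connected Gaussian pairwise potentials over position and color, and the weight `w` here is an arbitrary illustrative value.

```python
import numpy as np

def mean_field_potts(prob_fg, w=2.0, iters=5):
    """Toy mean-field inference for a binary grid CRF with a Potts model.

    prob_fg: (H, W) network foreground probabilities (the unary evidence).
    Returns refined foreground probabilities in which isolated labels that
    disagree with their neighborhood are smoothed away.
    """
    eps = 1e-9
    unary = np.stack([-np.log(1 - prob_fg + eps),    # label 0 (background)
                      -np.log(prob_fg + eps)])        # label 1 (foreground)
    q = np.exp(-unary)
    q /= q.sum(axis=0, keepdims=True)
    for _ in range(iters):
        # message: per label, total marginal mass of the four neighbors
        nb = np.zeros_like(q)
        nb[:, 1:, :] += q[:, :-1, :]
        nb[:, :-1, :] += q[:, 1:, :]
        nb[:, :, 1:] += q[:, :, :-1]
        nb[:, :, :-1] += q[:, :, 1:]
        disagree = nb[::-1]           # neighbor mass on the *other* label
        logits = -unary - w * disagree
        q = np.exp(logits - logits.max(axis=0, keepdims=True))
        q /= q.sum(axis=0, keepdims=True)
    return q[1]

# A single confident foreground pixel inside a confident background region
# gets smoothed out, illustrating how the pairwise term enforces coherence.
p = np.full((7, 7), 0.1)
p[3, 3] = 0.8
refined = mean_field_potts(p)
```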
### III-F Test Time Augmentation
Test-Time Augmentation (TTA) is a technique of performing reasonable
modifications to the test dataset to improve the overall prediction
performance. In TTA, augmentation is applied to each test image, and multiple
augmented images are created. After that, we make predictions on these
augmented images, and the average over the predictions is taken as the final
output. Inspired by the improvements of the recent SOTA
[22], we have used TTA in our work. In this paper, we utilize both horizontal
and vertical flip for TTA.
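The TTA procedure can be sketched as follows. Note that, in addition to the horizontal and vertical flips used in our pipeline, this sketch also averages in the identity and the combined flip; `model` stands for any callable that maps an image to a probability mask and is purely illustrative.

```python
import numpy as np

def predict_with_tta(model, image):
    """Test-time augmentation with flips: predict on each flipped copy,
    map the prediction back into the original frame, and average."""
    variants = [
        (lambda a: a, lambda m: m),                 # identity
        (np.fliplr,   np.fliplr),                   # horizontal flip
        (np.flipud,   np.flipud),                   # vertical flip
        (lambda a: np.flipud(np.fliplr(a)),
         lambda m: np.fliplr(np.flipud(m))),        # both flips
    ]
    preds = [invert(model(aug(image))) for aug, invert in variants]
    return np.mean(preds, axis=0)

# With a flip-equivariant dummy model (channel mean), every augmented
# prediction maps back onto the same mask, so TTA reproduces the
# single-pass output exactly; for a real network the averaged predictions
# differ and the ensemble effect smooths the final mask.
dummy_model = lambda img: img.mean(axis=-1)
img = np.random.rand(4, 4, 3)
mask = predict_with_tta(dummy_model, img)
```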
## IV Experiments
Figure 3: Example polyp and corresponding ground truth from the Kvasir-SEG
### IV-A Datasets
TABLE I: The biomedical segmentation datasets used in our experiments

| Dataset | Images | Input size | Availability |
|---|---|---|---|
| Kvasir-SEG [4] | 1000 | Variable | Public |
| CVC-ClinicDB [11] | 612 | $384\times 288$ | Public |
| CVC-ColonDB [12] | 380 | $574\times 500$ | Public |
| ETIS Larib Polyp DB [13] | 196 | $1225\times 966$ | Public |
| CVC-VideoClinicDB [15, 16]†⋄ | 11,954 | $384\times 288$ | Public |
| ASU-Mayo Clinic dataset [14]† | 18,781 | $688\times 550$ | Copyrighted |
| Kvasir-Sessile∙ | 196 | Variable | Public |

† Ground truth for test data not available. ⋄ Ground truth oval- or circle-shaped. ∙ Part of Kvasir-SEG [4], only sessile polyps.
We have used six different datasets of segmented polyps with ground truths in
our experiments as shown in Table I, i.e., Kvasir-SEG [4], CVC-ClinicDB [11],
CVC-ColonDB [12], ETIS Larib Polyp DB [13], CVC-VideoClinicDB [15, 16] and
ASU-Mayo Clinic dataset [14]. They vary e.g., regarding number of images,
image resolution, availability, devices used for capturing and the accuracy of
the segmentation masks. One example is given from the Kvasir-SEG in Figure 3.
The Kvasir-SEG dataset includes 196 polyps smaller than 10 mm classified as
Paris class 1 sessile or Paris class IIa. We have released this dataset
separately as a subset of Kvasir-SEG. Note that for CVC-VideoClinicDB, we have
only used the data from the CVC-VideoClinicDBtrainvalid folder since only
these data have ground truth masks. Moreover, the ASU-Mayo Clinic dataset,
which was made available at the “Automatic Polyp Detection in Colonoscopy
Videos” sub-challenge at Endovis 2015 had ten normal videos (negative shots)
and ten videos with polyps. However, the test subset is not available because
of issues related to licensing. In our experiments, when training, validating,
and testing with an 80:10:10 split on the ASU-Mayo dataset, we used all 20
videos. However, for the cross-dataset test (i.e., Tables X and XI), we only
tested on the ten positive polyp videos.
### IV-B Evaluation Method
To evaluate polyp segmentation methods, where individual pixels should be
identified and marked, we use metrics used in earlier research [18, 22, 20,
26, 4, 51] and in competitions like GIANA111https://giana.grand-
challenge.org/, comparing the correctly and wrongly identified pixels of
findings. The Dice coefficient (DSC) and the Intersection over Union (IoU) are
the most commonly used metrics. We use the DSC to compare the similarity
between the produced segmentation results and the original ground truth.
Similarly, the IoU is used to compare the overlap between the output mask and
original ground-truth mask of the polyp. The mean Intersection over Union
(mIoU) computes the IoU of each semantic class of the image and takes the mean
over all classes. DSC and mIoU are correlated; however, we
calculate both the metrics to provide a comprehensive results analysis that
could lead to better understanding of the results.
Moreover, other often-used metrics for the binary classification are recall
(true positive rate) and precision (positive predictive value). For the polyp
segmentation, precision is the ratio of the number of correctly segmented
pixels to the total number of segmented (predicted) pixels. Similarly, recall
is the ratio of correctly segmented pixels to the total number of pixels
present in the ground truth. In polyp image segmentation, precision and recall are
used to indicate over-segmentation and under-segmentation. For formal
definitions and formulas, see the definitions in for example [4, 51]. Finally,
the receiver operating characteristic (ROC) curve analysis is also an
important metric to characterize the performance of the binary classification
system. In our study, we therefore calculate DSC, mIoU, recall, precision, and
ROC when evaluating the segmentation models.
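For concreteness, the pixel-wise metrics can be computed as in the following minimal NumPy sketch (formal definitions are given in, e.g., [4, 51]; the epsilon guard for empty masks is an implementation detail of this sketch):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Pixel-wise DSC, IoU, recall, and precision for binary masks.

    pred, gt: 0/1 arrays of the same shape (prediction and ground truth).
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()      # correctly segmented pixels
    fp = np.logical_and(pred, ~gt).sum()     # over-segmented pixels
    fn = np.logical_and(~pred, gt).sum()     # under-segmented pixels
    eps = 1e-9                               # guards against empty masks
    dsc = 2 * tp / (2 * tp + fp + fn + eps)
    iou = tp / (tp + fp + fn + eps)
    recall = tp / (tp + fn + eps)            # true positive rate
    precision = tp / (tp + fp + eps)         # positive predictive value
    return dsc, iou, recall, precision

# A 2x2 ground-truth polyp versus a prediction that over-segments one column.
gt = np.zeros((4, 4), dtype=int); gt[1:3, 1:3] = 1       # 4 positive pixels
pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:4] = 1   # 6 predicted pixels
dsc, iou, recall, precision = segmentation_metrics(pred, gt)
```

In this example the prediction covers the whole polyp (recall 1.0) but spills over by two pixels, which lowers precision and, with it, DSC and IoU.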
### IV-C Data Augmentation
Data augmentation is a crucial step in increasing the number of polyp samples.
This alleviates the data insufficiency problem, improves the performance of
the model, and helps to reduce over-fitting. We have used a large number of
different data augmentation techniques to increase the training sample. We
divide all the polyp datasets into training, validation, and testing sets
using the ratio of 80:10:10 based on the random distribution except for the
mixed datasets. After splitting the dataset, we apply data augmentation
techniques such as center crop, random rotation, transpose, elastic transform,
grid distortion, optical distortion, vertical flip, horizontal flip,
grayscale, random brightness, random contrast, hue-saturation-value shift, RGB
shift, coarse dropout, and different types of blur. For cropping the images,
we have used a crop size of $256\times 256$ pixels. For the experiments, we
have resized the complete training, validation, and testing datasets to
$256\times 256$ pixels to reduce the computational complexity. We have only
augmented the training dataset; the validation data is not augmented, and the
test datasets are augmented only during evaluation with TTA.
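A dependency-free sketch of a few of the geometric augmentations listed above, applied identically to the image and its mask. For simplicity this sketch restricts random rotation to axis-aligned 90° turns; the actual pipeline uses a richer set of transforms, and the function names are illustrative.

```python
import numpy as np

def center_crop(img, size=256):
    """Center crop, assuming both spatial sides are at least `size` pixels."""
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def augment(img, mask, rng):
    """Apply the same random geometric transform to image and mask so the
    segmentation labels stay aligned with the pixels."""
    if rng.random() < 0.5:                       # horizontal flip
        img, mask = np.fliplr(img), np.fliplr(mask)
    if rng.random() < 0.5:                       # vertical flip
        img, mask = np.flipud(img), np.flipud(mask)
    k = rng.integers(4)                          # random 90-degree rotation
    img, mask = np.rot90(img, k), np.rot90(mask, k)
    return img, mask

rng = np.random.default_rng(42)
img = rng.random((288, 384, 3))                  # CVC-ClinicDB-sized input
mask = (rng.random((288, 384)) > 0.5).astype(np.uint8)
img_c, mask_c = center_crop(img), center_crop(mask)
img_a, mask_a = augment(img_c, mask_c, rng)
```

Because all of these transforms only permute pixels, the number of positive mask pixels is preserved, which is a quick sanity check for any geometric augmentation.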
### IV-D Implementation and Hardware Details
We have implemented all the models using the Keras framework [52] with
Tensorflow [53] as a backend. Source code of our implementation and
information about our experimental setup are made publicly available on
Github222https://github.com/DebeshJha/ResUNet-with-CRF-and-TTA. Our
experiments were performed using a Volta 100 Tensor Core GPU on a Nvidia DGX-2
AI system capable of 2-petaFLOPS tensor performance. We used Ubuntu 18.04.3
LTS with CUDA 10.1.243 installed. We have
performed different experiments with different sets of hyperparameters
manually on the same dataset in order to select the optimal set of
hyperparameters for the ResUNet++. Our model performed well with the batch
size of $16$, Nadam as an optimizer, binary cross-entropy as the loss
function, and learning rate of $1\mathrm{e}{-5}$. The dice loss function was
also competitive. These hyperparameters were chosen based on the empirical
evaluation. All the models were trained for $300$ epochs. We have used early
stopping to prevent the model from over-fitting. To further improve the
results, we have used stochastic gradient descent with warm restarts (SGDR).
All the hyperparameters were kept the same except the learning rate, which was
adjusted as required. We have also used TensorBoard for the
analysis and visualization of the results.
## V Results
In our previous work, we showed that ResUNet++ outperforms the SOTA UNet [35]
and ResUNet [34] models trained on the Kvasir-SEG and CVC-ClinicDB datasets
[1]. In this work, we aim to improve the results of ResUNet++ by
utilizing further hyperparameter optimization, CRF and TTA. In this section,
we present and compare the results of ResUNet++ with CRF, TTA, and both
approaches combined on the same dataset, mixed dataset, and cross-dataset.
Although a direct comparison of approaches from the literature is difficult
due to different testing mechanisms used by various authors, we nonetheless
compare the results with the recent work for the evaluation.
TABLE II: Results comparison on Kvasir-SEG

| Method | DSC | mIoU | Recall | Precision |
|---|---|---|---|---|
| UNet [35] | 0.7147 | 0.4334 | 0.6306 | 0.9222 |
| ResUNet [34] | 0.5144 | 0.4364 | 0.5041 | 0.7292 |
| ResUNet-mod [34] | 0.7909 | 0.4287 | 0.6909 | 0.8713 |
| ResUNet++ [1] | 0.8119 | 0.8068 | 0.8578 | 0.7742 |
| ResUNet++ + CRF | 0.8129 | 0.8080 | 0.8574 | 0.7775 |
| ResUNet++ + TTA | 0.8496 | 0.8318 | 0.8760 | 0.8203 |
| ResUNet++ + TTA + CRF | 0.8508 | 0.8329 | 0.8756 | 0.8228 |
### V-A Results comparison on Kvasir-SEG dataset
Table II and Figure 4 show the quantitative and qualitative results
comparison. Figure 7 shows the ROC curve for all the models. As seen in the
quantitative results (Table II), qualitative results (Figure 4), and ROC curve
(Figure 7), our proposed methods outperform ResUNet++ on the Kvasir-SEG
dataset. The improvement in results demonstrates the advantage of using TTA,
CRF, and their combination.
Figure 4: Qualitative results comparison of the proposed models with UNet,
ResUNet, and ResUNet++. The figure shows examples of polyps that are usually
missed during colonoscopy examination. We see that there is a high
similarity between ground truth and predicted mask for the proposed models.
Figure 5: Result of model trained on CVC-ClinicDB and tested on Kvasir-SEG
Figure 6: Example images where the proposed models fail on Kvasir-SEG
Figure 7: ROC curve of proposed models on the Kvasir-SEG
Figure 8: ROC curve for all the models trained and tested on CVC-ClinicDB

TABLE III: Results comparison on CVC-ClinicDB

| Method | DSC | mIoU | Recall | Precision |
|---|---|---|---|---|
| MultiResUNet⋄ [31] | - | 0.8497 | - | - |
| cGAN† [24] | 0.8848 | 0.8127 | - | - |
| SegNet [20] | - | - | 0.8824 | - |
| FCN∙ [54] | - | - | 0.7732 | 0.8999 |
| CNN [55] | (0.62-0.87) | - | - | - |
| MSPBψ CNN [56] | 0.8130 | - | 0.7860 | 0.8090 |
| UNet [35] | 0.6419 | 0.4711 | 0.6756 | 0.6868 |
| ResUNet [34] | 0.4510 | 0.4570 | 0.5775 | 0.5614 |
| PraNet [57] | 0.8980 | 0.8400 | - | - |
| ResUNet-mod [34] | 0.7788 | 0.4545 | 0.6683 | 0.8877 |
| ResUNet++ [1] | 0.9199 | 0.8892 | 0.9391 | 0.8445 |
| ResUNet++ + CRF | 0.9203 | 0.8898 | 0.9393 | 0.8459 |
| ResUNet++ + TTA | 0.9020 | 0.8826 | 0.9065 | 0.8539 |
| ResUNet++ + TTA + CRF | 0.9017 | 0.8828 | 0.9060 | 0.8549 |

† Conditional generative adversarial network. ⋄ Data augmentation. ∙ Fully convolutional network. ψ Multi-scale patch-based.
### V-B Results comparison on CVC-ClinicDB
CVC-ClinicDB is a commonly used dataset for polyp segmentation. Therefore, it
is important to bring different works from the literature together and compare
the proposed algorithms against the existing SOTA. Table III demonstrates that
the combination of ResUNet++ and CRF achieves a DSC of 0.9203 and an mIoU of
0.8898, improvements of 2.23% in DSC and 4.98% in mIoU over PraNet [57],
respectively; the proposed methods thus set the SOTA result on CVC-ClinicDB.
The ROC curve measures the performance of a classifier across decision
thresholds; for producing binary masks we set the probability threshold to
$0.5$. The combination of ResUNet++ and TTA has the maximum Area Under the
Curve - Receiver Operating Characteristic (AUC-ROC) of 0.9814, as shown in
Figure 8. Therefore, the results in Table III and Figure 8 show that applying
TTA gives an improvement on CVC-ClinicDB.
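The pixel-wise AUC-ROC values reported here can be understood via the rank interpretation of AUC: the probability that a randomly chosen polyp pixel receives a higher score than a randomly chosen background pixel. A small illustrative sketch (our own function, not the paper's evaluation code):

```python
def roc_auc(scores, labels):
    """AUC-ROC via the rank statistic: fraction of (positive, negative)
    pairs in which the positive sample outscores the negative one,
    counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfectly separated ranking gives an AUC of 1.0; one mis-ranked pair out of four gives 0.75.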
TABLE IV: Results comparison on CVC-ColonDB
Method | DSC | mIoU | Recall | Precision
---|---|---|---|---
FCN-8S + Otsu [19] | 0.8100 | - | 0.7480 | -
FCN-8s + Texton [58] | 0.7014 | - | 0.7566 | -
SA-DOVA Descriptor [12] | 0.5533 | - | 0.6191 | -
PraNet [57] | 0.7090 | 0.6400 | - | -
ResUNet++ [1] | 0.8469 | 0.8456 | 0.8511 | 0.8003
ResUNet++ + CRF | 0.8458 | 0.8456 | 0.8497 | 0.7767
ResUNet++ + TTA | 0.8474 | 0.8466 | 0.8434 | 0.8118
ResUNet++ + TTA + CRF | 0.8452 | 0.8459 | 0.8411 | 0.8125
### V-C Results comparison on CVC-ColonDB dataset
Our results on the CVC-ColonDB dataset are presented in Table IV. The table
shows that the proposed method of combining ResUNet++ and TTA achieves the
highest DSC of 0.8474, which is 3.74% higher than the SOTA [19], and an mIoU
of 0.8466, which is 20.66% higher than [57]. The recall and precision of all
three proposed methods are acceptable. Compared with ResUNet++, there is an
improvement of 1.22% in precision. The differences in recall are negligible,
with ResUNet++ slightly outperforming the others.
TABLE V: Results on ETIS-Larib Polyp DB
Method | DSC | mIoU | Recall | Precision
---|---|---|---|---
PraNet [57] | 0.6280 | 0.5670 | - | -
ResUNet++ [1] | 0.6364 | 0.7534 | 0.6346 | 0.6467
ResUNet++ + CRF | 0.6228 | 0.7520 | 0.6242 | 0.5648
ResUNet++ + TTA | 0.6136 | 0.7458 | 0.5996 | 0.6565
ResUNet++ + TTA + CRF | 0.6018 | 0.7426 | 0.5914 | 0.5755
### V-D Results comparison on ETIS-Larib Polyp DB
Table V shows the results of the proposed models on the ETIS-Larib Polyp DB.
In this case, we do not compare the results with UNet and ResUNet, but compare
the models directly with ResUNet++ as it already showed superior performance
on Kvasir-SEG and CVC-ClinicDB [1]. Here, there are only marginal differences
in the results of ResUNet++, “ResUNet++ + CRF”, “ResUNet++ + TTA”, and
“ResUNet++ + CRF + TTA”. However, ResUNet++ achieves the maximum DSC of
0.6364, which is a 0.84% improvement over the SOTA [57], and an mIoU of
0.7534, which is an 18.64% improvement over [57]. The recall of ResUNet++ is
0.6346, slightly higher than that of the proposed methods, whereas the
precision of combining ResUNet++ and TTA is higher than that of ResUNet++.
From the results, we can say that the performance of the architectures is
data-specific. Our proposed methods outperformed the SOTA over five
independent datasets; however, ResUNet++ shows better results than the
combinational approaches on the ETIS-Larib dataset. Still, the precision of
combining ResUNet++ and TTA is slightly higher than that of ResUNet++. It
should be noted that ETIS-Larib contains only $196$ images, of which only
$156$ are used for training. Even with this small training set, the models
perform satisfactorily compared to the SOTA [57], with a significant margin in
mIoU, which can be considered a strength of the algorithm.
TABLE VI: Results on Kvasir-Sessile
Method | DSC | mIoU | Recall | Precision
---|---|---|---|---
ResUNet++ [1] | 0.4600 | 0.64086 | 0.4382 | 0.5838
ResUNet++ + CRF | 0.4522 | 0.6394 | 0.4326 | 0.5708
ResUNet++ + TTA | 0.5042 | 0.6606 | 0.4851 | 0.6796
ResUNet++ + TTA + CRF | 0.4901 | 0.6565 | 0.4766 | 0.6277
### V-E Results on Kvasir-Sessile
As this is the first work on Kvasir-Sessile, we compare the proposed methods
with ResUNet++ only. Table VI shows that combining ResUNet++ and TTA gives a
DSC of 0.5042 and an mIoU of 0.6606, which can be considered decent scores on
a dataset of this size. The dataset contains small, diverse polyps, which are
difficult to generalize to with very few training samples.
TABLE VII: Results comparison on CVC-VideoClinicDB
Method | DSC | mIoU | Recall | Precision
---|---|---|---|---
ResUNet++ [1] | 0.8798 | 0.8730 | 0.7749 | 0.6702
ResUNet++ + CRF | 0.8811 | 0.8739 | 0.7743 | 0.6706
ResUNet++ + TTA | 0.8125 | 0.8467 | 0.6896 | 0.6421
ResUNet++ + TTA + CRF | 0.8130 | 0.8477 | 0.6875 | 0.6276
### V-F Results comparison on CVC-VideoClinicDB
Table VII shows the results of the proposed models on CVC-VideoClinicDB. From
the results, we can see that all models perform well on this dataset despite
the fact that the masks are not pixel-perfect. One reason for the high
performance is the $11,954$ polyp and normal video frames used in training and
testing. The combination of ResUNet++ and CRF obtained a DSC of $0.8811$, an
mIoU of $0.8739$, a recall of $0.7743$, and a precision of $0.6706$, which is
acceptable for a segmentation task on this type of dataset. In
CVC-VideoClinicDB, the ground truth is marked with an oval or circle shape;
pixel-precise annotation of this dataset would require great manual effort
from expert endoscopists and engineers.
TABLE VIII: Results comparison on ASU-Mayo Clinic
Method | DSC | mIoU | Recall | Precision
---|---|---|---|---
ResUNet++ [1] | 0.8743 | 0.8569 | 0.6534 | 0.4896
ResUNet++ + CRF | 0.8850 | 0.8635 | 0.6504 | 0.4858
ResUNet++ + TTA | 0.8553 | 0.8535 | 0.6162 | 0.4912
ResUNet++ + TTA + CRF | 0.8550 | 0.8551 | 0.6107 | 0.4743
### V-G Results comparison on ASU-Mayo ClinicDB
Table VIII shows the results of the proposed models on the ASU-Mayo ClinicDB.
ASU-Mayo contains 18,781 frames, comprising both polyp and non-polyp images.
The combination of ResUNet++ and CRF obtained a DSC of 0.8850 and an mIoU of
0.8635. Models trained on this type of dataset are more meaningful for real
clinical settings, as it contains both polyp and non-polyp frames. The
capability to achieve good performance on such a challenging dataset is one of
the strengths of the proposed method. This is supported by the fact that the
dataset also contains a sufficient number of images to enable adequate
training.
TABLE IX: Results comparison using (Kvasir-SEG + CVC-ClinicDB) as the training set
Test set | Method | DSC | mIoU | Recall | Precision
---|---|---|---|---|---
CVCColonDB | ResUNet++ [1] | 0.4974 | 0.6800 | 0.4787 | 0.6019
ResUNet++ + CRF | 0.4920 | 0.6788 | 0.4744 | 0.5636
ResUNet++ + TTA | 0.5084 | 0.6859 | 0.4795 | 0.5973
ResUNet++ + TTA + CRF | 0.5061 | 0.6852 | 0.4775 | 0.5770
CVC-VideoClinicDB | ResUNet++ [1] | 0.3460 | 0.6348 | 0.2272 | 0.3383
ResUNet++ + CRF | 0.3552 | 0.6412 | 0.2228 | 0.3065
ResUNet++ + TTA | 0.3573 | 0.6440 | 0.2104 | 0.3338
ResUNet++ + TTA + CRF | 0.3603 | 0.6468 | 0.2068 | 0.3038
### V-H Results comparison on mixed dataset
To check the performance of the proposed approaches on images captured using
different devices, we mixed Kvasir-SEG and CVC-ClinicDB and used them for
training. The models were tested on CVC-ColonDB and CVC-VideoClinicDB. Table
IX shows the results on both test datasets. The combination of ResUNet++ and
TTA obtains a DSC of 0.5084 and an mIoU of 0.6859 on CVC-ColonDB. The
combination of ResUNet++, CRF, and TTA obtains a DSC of 0.3603 and an mIoU of
0.6468 on CVC-VideoClinicDB.
From the table, we can see that the combination of ResUNet++, CRF, and TTA
performs better or is very competitive on both still images and video frames.
It is also evident that a model trained on the smaller datasets (Kvasir-SEG
and CVC-ClinicDB), which do not include non-polyp images, does not perform
well on larger and more diverse datasets (CVC-VideoClinicDB) that contain both
polyp and non-polyp frames. Additionally, for CVC-VideoClinicDB, the provided
ground truth is imperfect (oval- or circle-shaped). As the models trained on
Kvasir-SEG and CVC-ClinicDB have pixel-precise annotations, they are good at
predicting precisely shaped masks. When we make predictions on
CVC-VideoClinicDB with its imperfect masks, even good predictions may not
score highly because of the difference between the provided ground truth and
the predicted masks.
TABLE X: Cross-dataset results using Kvasir-SEG as the training set
Test set | Method | DSC | mIoU | Recall | Precision
---|---|---|---|---|---
CVC-ClinicDB | ResUNet++ [1] | 0.6468 | 0.7311 | 0.6984 | 0.6510
ResUNet++ + CRF | 0.6458 | 0.7321 | 0.6955 | 0.6425
ResUNet++ + TTA | 0.6737 | 0.7507 | 0.7108 | 0.6833
ResUNet++ + TTA + CRF | 0.6712 | 0.7506 | 0.7078 | 0.6680
ETIS-Larib Polyp DB | ResUNet++ [1] | 0.4017 | 0.6415 | 0.4412 | 0.3925
ResUNet++ + CRF | 0.4012 | 0.6427 | 0.4379 | 0.3755
ResUNet++ + TTA | 0.4014 | 0.6468 | 0.4294 | 0.4014
ResUNet++ + TTA + CRF | 0.3997 | 0.6466 | 0.4267 | 0.3710
CVC-ColonDB | ResUNet++ [1] | 0.5135 | 0.6742 | 0.5398 | 0.5461
ResUNet++ + CRF | 0.5122 | 0.6748 | 0.5367 | 0.5285
ResUNet++ + TTA | 0.5593 | 0.7030 | 0.5626 | 0.5944
ResUNet++ + TTA + CRF | 0.5563 | 0.7024 | 0.5595 | 0.5811
CVC-VideoClinicDB | ResUNet++ [1] | 0.3175 | 0.6082 | 0.2915 | 0.3299
ResUNet++ + CRF | 0.3334 | 0.6185 | 0.2862 | 0.3141
ResUNet++ + TTA | 0.3505 | 0.6337 | 0.2601 | 0.3488
ResUNet++ + TTA + CRF | 0.3601 | 0.6402 | 0.2555 | 0.3252
ASU-Mayo | ResUNet++ [1] | 0.3482 | 0.6346 | 0.2196 | 0.2021
ResUNet++ + CRF | 0.3747 | 0.6516 | 0.2136 | 0.1797
ResUNet++ + TTA | 0.3823 | 0.6583 | 0.1962 | 0.2165
ResUNet++ + TTA + CRF | 0.3950 | 0.6681 | 0.1890 | 0.1781
### V-I Cross-dataset result evaluation on Kvasir-SEG
For the cross-dataset evaluation, we trained the models on the Kvasir-SEG
dataset and tested them on the other five independent datasets. Table X shows
the cross-data generalizability results of ResUNet++ alone and with the CRF
and TTA techniques. The models trained on Kvasir-SEG produce an average best
mIoU of 0.6817 and an average best DSC of 0.4779 over both image and video
datasets. From the table, we can see that the proposed combinational
approaches perform competitively. For the image datasets, the combination of
ResUNet++ and TTA performs better, and for the video datasets, the combination
of ResUNet++, CRF, and TTA performs best. It should be noted that we are
training a model on 1,000 pixel-segmented Kvasir-SEG polyps and testing on
datasets (for example, 11,954 frames) with oval-shaped polyp ground truth.
Here, even if the predictions are correct, the evaluation scores will not be
high because of the oval/circle-shaped ground truth. Moreover, datasets such
as ASU-Mayo and CVC-VideoClinicDB are heavily imbalanced, whereas every
Kvasir-SEG training image contains at least one polyp. This may also have
caused the poor performance.
TABLE XI: Cross-dataset results on CVC-ClinicDB as the training set
Test set | Method | DSC | mIoU | Recall | Precision
---|---|---|---|---|---
Kvasir-SEG | ResUNet++ [1] | 0.6876 | 0.7374 | 0.7027 | 0.7354
ResUNet++ + CRF | 0.6877 | 0.7389 | 0.7004 | 0.7371
ResUNet++ + TTA | 0.7218 | 0.7616 | 0.7225 | 0.7855
ResUNet++ + TTA + CRF | 0.7208 | 0.7621 | 0.7204 | 0.7831
CVC-ColonDB | ResUNet++ [1] | 0.5489 | 0.6942 | 0.5577 | 0.5816
ResUNet++ + CRF | 0.5470 | 0.6949 | 0.5546 | 0.5727
ResUNet++ + TTA | 0.5686 | 0.7080 | 0.5702 | 0.5935
ResUNet++ + TTA + CRF | 0.5667 | 0.7081 | 0.5687 | 0.5773
ETIS-Larib Polyp DB | FCN-VGG [59] | 0.7023 | 0.5420 | - | -
ResUNet++ [1] | 0.4012 | 0.6398 | 0.4232 | 0.4013
ResUNet++ + CRF | 0.3990 | 0.6403 | 0.4191 | 0.3974
ResUNet++ + TTA | 0.4027 | 0.6522 | 0.3969 | 0.4235
ResUNet++ + TTA + CRF | 0.3973 | 0.6514 | 0.3906 | 0.4078
CVC-VideoClinicDB | ResUNet++ [1] | 0.3666 | 0.6422 | 0.2568 | 0.3632
ResUNet++ + CRF | 0.3788 | 0.6500 | 0.2530 | 0.3399
ResUNet++ + TTA | 0.3941 | 0.6582 | 0.2516 | 0.3829
ResUNet++ + TTA + CRF | 0.3988 | 0.6616 | 0.2481 | 0.3542
ASU-Mayo | ResUNet++ [1] | 0.2797 | 0.6113 | 0.1627 | 0.1443
ResUNet++ + CRF | 0.3167 | 0.6323 | 0.1591 | 0.1348
ResUNet++ + TTA | 0.3085 | 0.6331 | 0.1265 | 0.1571
ResUNet++ + TTA + CRF | 0.3233 | 0.6426 | 0.1225 | 0.1270
### V-J Cross-dataset evaluation on CVC-ClinicDB
To further test generalizability, we trained the models on CVC-ClinicDB and
tested them across five independent, diverse image and video datasets. Table
XI shows the cross-data generalizability results. As in the previous test on
Kvasir-SEG, the results follow the same pattern, with the combination of
ResUNet++ and TTA outperforming the others on the image datasets and the
combination of ResUNet++, CRF, and TTA outperforming its competitors on the
video datasets; ResUNet++ with TTA still remains competitive there. Moreover,
the DSC and mIoU values of the best model are similar for both the
CVC-VideoClinicDB and the ASU-Mayo Clinic datasets. We have also compared the
results with the existing work that used CVC-ClinicDB for training and
ETIS-Larib for testing; our model achieves the highest mIoU of 0.6522.
### V-K Result summary
In summary, from all obtained results (i.e., qualitative, quantitative, and
ROC curve), the following main observations can be drawn: (i) the proposed
ResUNet++ is capable of segmenting the smaller, larger and regular polyps;
(ii) the combination of ResUNet++ with CRF achieves the best performance in
terms of DSC, mIoU, recall and precision when trained and tested on the same
dataset (see Table III, Table VII, and Table VIII) whereas it remains
competitive when tested on other datasets; (iii) the combination of ResUNet++
and TTA and the combination of ResUNet++, CRF and TTA performs similar for the
mixed datasets; (iv) the combination of ResUNet++ and TTA outperforms others
on still images; (v) the combination of ResUNet++, CRF and TTA shows
improvement on all the video datasets compared to ResUNet++; (vi) all the
models perform better when the images have higher contrast; (vii) ResUNet++ is
particularly good at segmenting smaller and flat or sessile polyps, which is a
prerequisite for developing an ideal CADx polyp detection system [1]; (viii)
ResUNet++ fails especially on images that contain over-exposed (saturated) or
poor-contrast regions (see Figure 6); (ix) ResUNet and ResUNet-mod in
particular showed over-segmented or under-segmented results (see Figure 4).
## VI Discussion
TABLE XII: Total number of trainable parameters
Model | Trainable parameters
---|---
U-Net | 5,400,289
ResUNet | 8,221,121
ResUNet-mod | 2,058,465
ResUNet++ | 16,228,001
### VI-A General Performance
The tables and figures suggest that applying CRF and TTA improved the
performance of ResUNet++ on the same datasets, mixed datasets and cross-
datasets. Specifically, the combination of ResUNet++ and TTA, and the
combination of ResUNet++, CRF and TTA are more generalizable for all the
datasets, where TTA with ResUNet++ performs best on the still images, and the
combinations of ResUNet++, CRF, and TTA are outperforming others on video
datasets. For all of the proposed models, the AUC value is greater than
$0.93$, which indicates that our models are good at distinguishing between
polyp and non-polyp pixels and suggests that they achieve sufficient
sensitivity.
The total number of trainable parameters increases with the number of blocks
in the networks (see Table XII). However, in ResUNet++ there is a significant
performance gain that compensates for the longer training time, and our model
requires fewer parameters compared with models that use pre-trained encoders.
### VI-B Cross Dataset Performance
A cross-dataset test is an excellent technique for determining the
generalization capability of a model. The presented work is an initiative
towards improving the generalizability of segmentation methods. Our
contribution towards generalizability is to train on one dataset and test on
several other public datasets that may come from different centers and use
different scope manufacturers. We believe that, to tackle this issue,
out-of-sample multicenter data must be used to test the built methods. This
work is a step forward in raising the issue of method interpretability, and we
also raise questions about the generalizability and domain adaptation of
supervised methods in general.
From the result analyses, we can see that the different proposed algorithms
perform well on different types of datasets. For instance, CRF outperformed
the others in Tables III, VII, and VIII, while TTA showed improvements in
Tables IV, IX, X, and XI. CRF performs better than TTA when trained and tested
on video datasets (see Tables VII and VIII), and it also outperformed TTA on
most of the image datasets, although TTA remains competitive. On the mixed
dataset and in the cross-dataset tests, TTA performs better than CRF on all
the datasets. On the mixed datasets and in the cross-dataset tests on videos,
the combination of ResUNet++, CRF, and TTA remains the best choice (see Tables
IX, X, and XI). There is a performance improvement over ResUNet++ when adding
CRF, TTA, or the combination of CRF and TTA.
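Test-time augmentation as used here averages the model's probability maps over transformed copies of the input image, undoing each transform before averaging. A minimal NumPy sketch (the function name and the exact augmentation set are our illustration, not the authors' code, which may use a different set of transforms):

```python
import numpy as np

def predict_with_tta(model_fn, image):
    """Average a model's probability output over simple flip augmentations.

    `model_fn` is any callable mapping an HxW array to an HxW probability
    map. Each augmented input is predicted, the prediction is mapped back
    to the original orientation, and the results are averaged.
    """
    variants = [
        (image, lambda m: m),                                    # identity
        (np.flip(image, axis=1), lambda m: np.flip(m, axis=1)),  # horizontal flip
        (np.flip(image, axis=0), lambda m: np.flip(m, axis=0)),  # vertical flip
    ]
    preds = [undo(model_fn(aug)) for aug, undo in variants]
    return np.mean(preds, axis=0)
```

Because each flip is its own inverse, a flip-equivariant model produces the same map for all variants, and averaging leaves it unchanged; for a real network the variants differ slightly, and the average smooths out orientation-dependent errors.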
However, no single method significantly outperforms the others; the results
are typically data-dependent. Since the proposed methods perform well on video
frames, they may work better in the clinic, as the output from a colonoscope
is a video stream. Thus, it becomes critical to show the results of all three
approaches on each dataset. Therefore, we provide extensive experiments
showing both success (Figure 4, Figure 5) and failure cases (Figure 6) and
present an overall analysis.
### VI-C Challenges
There are several challenges associated with segmenting polyps, such as the
quality of bowel preparation during colonoscopy, the camera angle,
superfluous information, and varying polyp morphology, all of which can affect
the overall performance of a DL model. For some images, there is even
variation in the decisions of different endoscopists. While ResUNet++ with CRF
and TTA also struggles to produce satisfactory segmentation maps for these
images, it performs considerably better than our previous model and also
outperforms another SOTA algorithm.
The quality of a colonoscopy examination is largely determined by the
experience and skill of the endoscopist [23]. Our proposed model can help in
two ways: (i) it can be used to segment a detected polyp, providing an extra
pair of eyes to the endoscopist; and (ii) it performs well on both flat and
small polyps, which are often missed during endoscopic examinations. The
qualitative analysis (see Figure 4) and the quantitative analyses from the
above tables and figures support this argument. This is a major strength of
our work and makes it a candidate for clinical testing.
### VI-D Possible Limitations
A possible limitation of this work is that it is a retrospective study. A
prospective clinical evaluation is essential because the data analyzed in a
retrospective study differ from those of a prospective study (for example,
missing data should be handled on the basis of best-case and worst-case
scenarios) [60]. Also, all data in these experiments are curated, while a
prospective clinical trial would mean testing on full colonoscopy videos.
During model training, we resized all the images to $256\times 256$ to reduce
the computational complexity, which incurs a loss of information and can
affect the overall performance. We have worked on optimizing the code, but
further optimizations may exist that could potentially improve the performance
of the model.
## VII Conclusion
In this paper, we have presented the ResUNet++ architecture for semantic polyp
segmentation. We took inspiration from the residual block, ASPP, and attention
block to design the novel ResUNet++ architecture. Furthermore, we applied CRF
and TTA to improve the results even more. We have trained and validated the
combination of ResUNet++ with CRF and TTA using six publicly available
datasets, and analyzed and compared the results with the SOTA algorithm on
specific datasets. Moreover, we analyzed the cross-data generalizability of
the proposed model towards developing generalizable semantic segmentation
models for automatic polyp segmentation. A comprehensive evaluation of the
proposed model, trained and tested on six different datasets, showed good
performance of ResUNet++ with CRF on the image datasets, and of ResUNet++ with
TTA and ResUNet++ with CRF and TTA in the mixed-dataset and cross-dataset
settings. Further, a detailed study of the cross-dataset generalizability of
the models trained on Kvasir-SEG and CVC-ClinicDB and tested on five
independent datasets confirmed the robustness of the proposed ResUNet++ + TTA
method for cross-dataset evaluation.
A strength of our method is that it successfully detected smaller and flat
polyps, which are usually missed during colonoscopy examinations [61, 20]. Our
model can also detect polyps that would be difficult for endoscopists to
identify without careful investigation. Therefore, we believe that the
ResUNet++ architecture, along with the additional CRF and TTA steps, is a
promising direction to investigate, especially for overlooked polyps. We also
point out the lack of generalization of the models, evidenced by the
unsatisfactory results for cross-dataset evaluation in most cases. In the
future, our CADx system should also be investigated on other bowel conditions,
and a prospective trial should be conducted with image and video datasets.
## Acknowledgement
This work is funded in part by the Research Council of Norway, project number
263248. The experiments were performed on the Experimental Infrastructure for
Exploration of Exascale Computing (eX3), supported by the Research Council of
Norway under contract 270053.
## References
* [1] D. Jha _et al._ , “Resunet++: An advanced architecture for medical image segmentation,” in _Proc. of IEEE ISM._ , 2019, pp. 225–230.
* [2] F. Bray, J. Ferlay, I. Soerjomataram, R. L. Siegel, L. A. Torre, and A. Jemal, “Global cancer statistics 2018: Globocan estimates of incidence and mortality worldwide for 36 cancers in 185 countries,” _CA: a cancer journal for clinicians_ , vol. 68, no. 6, pp. 394–424, 2018.
* [3] T. Matsuda, A. Ono, M. Sekiguchi, T. Fujii, and Y. Saito, “Advances in image enhancement in colonoscopy for detection of adenomas,” _Nat. Revi. Gastroenter. & Hepato._, vol. 14, no. 5, pp. 305–314, 2017.
* [4] D. Jha _et al._ , “Kvasir-seg: A segmented polyp dataset,” in _Proc. of MMM_ , 2020, pp. 451–462.
* [5] S. B. Ahn, D. S. Han, J. H. Bae, T. J. Byun, J. P. Kim, and C. S. Eun, “The miss rate for colorectal adenoma determined by quality-adjusted, back-to-back colonoscopies,” _Gut and liver_ , vol. 6, no. 1, pp. 64–70, 2012.
* [6] D. o. Heresbach, “Miss rate for colorectal neoplastic polyps: a prospective multicenter study of back-to-back video colonoscopies,” _Endoscopy_ , vol. 40, no. 04, pp. 284–290, 2008.
* [7] Zimmermann-Fraedrich _et al._ , “Right-sided location not associated with missed colorectal adenomas in an individual-level reanalysis of tandem colonoscopy studies,” _Gastroenterology_ , vol. 157, no. 3, pp. 660–671, 2019.
* [8] A. Shaukat _et al._ , “Longer withdrawal time is associated with a reduced incidence of interval cancer after screening colonoscopy,” _Gastroenterology_ , vol. 149, no. 4, pp. 952–957, 2015.
* [9] D. Vázquez _et al._ , “A benchmark for endoluminal scene segmentation of colonoscopy images,” _Journal of healthcare engineering_ , vol. 2017, 2017.
* [10] T. Roß _et al._ , “Robust medical instrument segmentation challenge 2019,” _arXiv preprint arXiv:2003.10299v1_ , 2020.
* [11] J. Bernal, F. J. Sánchez, G. Fernández-Esparrach, D. Gil, C. Rodríguez, and F. Vilariño, “Wm-dova maps for accurate polyp highlighting in colonoscopy: Validation vs. saliency maps from physicians,” _Computeri. Med. Imag. and Graph._ , vol. 43, pp. 99–111, 2015.
* [12] J. Bernal, J. Sánchez, and F. Vilarino, “Towards automatic polyp detection with a polyp appearance model,” _Patt. Recognit._ , vol. 45, no. 9, pp. 3166–3182, 2012.
* [13] J. Silva, A. Histace, O. Romain, X. Dray, and B. Granado, “Toward embedded detection of polyps in wce images for early diagnosis of colorectal cancer,” _Int. Jour. of Comput. Assis. Radiol. and Surg._ , vol. 9, no. 2, pp. 283–293, 2014.
* [14] N. Tajbakhsh, S. R. Gurudu, and J. Liang, “Automated polyp detection in colonoscopy videos using shape and context information,” _IEEE Trans. Med. Imag._ , vol. 35, no. 2, pp. 630–644, 2015.
* [15] Q. Angermann _et al._ , “Towards real-time polyp detection in colonoscopy videos: Adapting still frame-based methodologies for video sequences analysis,” in _Comput. Assis. and Robot. Endos. and Clin. Image-Based Proced._ , 2017, pp. 29–41.
* [16] J. Bernal _et al._ , “Polyp detection benchmark in colonoscopy videos using gtcreator: A novel fully configurable tool for easy and fast annotation of image databases,” in _Proceedings of CARS conference_ , 2018.
* [17] A. A. Pozdeev, N. A. Obukhova, and A. A. Motyko, “Automatic analysis of endoscopic images for polyps detection and segmentation,” in _Proc. of EIConRus_ , 2019, pp. 1216–1220.
* [18] J. Bernal _et al._ , “Comparative validation of polyp detection methods in video colonoscopy: results from the miccai 2015 endoscopic vision challenge,” _IEEE Trans. Med. Imag._ , vol. 36, no. 6, pp. 1231–1249, 2017.
* [19] M. Akbari _et al._ , “Polyp segmentation in colonoscopy images using fully convolutional network,” in _Proc. of EMBC_ , 2018, pp. 69–72.
* [20] P. Wang _et al._ , “Development and validation of a deep-learning algorithm for the detection of polyps during colonoscopy,” _Nat. biomed. engineer._ , vol. 2, no. 10, pp. 741–748, 2018.
* [21] V. Badrinarayanan, A. Kendall, and R. Cipolla, “Segnet: A deep convolutional encoder-decoder architecture for image segmentation,” _IEEE trans. on patt. analys. and mach. intellige._ , vol. 39, no. 12, pp. 2481–2495, 2017.
* [22] Y. B. Guo and B. Matuszewski, “Giana polyp segmentation with fully convolutional dilation neural networks,” in _Proc. of VISIGRAPP_ , 2019, pp. 632–641.
* [23] M. Yamada _et al._ , “Development of a real-time endoscopic image diagnosis support system using deep learning technology in colonoscopy,” _Scienti. repo._ , vol. 9, no. 1, pp. 1–9, 2019.
* [24] J. Poomeshwaran, K. S. Santhosh, K. Ram, J. Joseph, and M. Sivaprakasam, “Polyp segmentation using generative adversarial network,” in _Proc. of EMBC_ , 2019, pp. 7201–7204.
* [25] J. Kang and J. Gwak, “Ensemble of instance segmentation models for polyp segmentation in colonoscopy images,” _IEEE Access_ , vol. 7, pp. 26 440–26 447, 2019.
* [26] S. Ali _et al._ , “Endoscopy artifact detection (ead 2019) challenge dataset,” _arXiv preprint arXiv:1905.03209_ , 2019.
* [27] N.-Q. Nguyen and S.-W. Lee, “Robust boundary segmentation in medical images using a consecutive deep encoder-decoder network,” _IEEE Access_ , vol. 7, pp. 33 795–33 808, 2019.
* [28] V. de Almeida Thomaz, C. A. Sierra-Franco, and A. B. Raposo, “Training data enhancements for robust polyp segmentation in colonoscopy images,” in _Proc. of CBMS_ , 2019, pp. 192–197.
* [29] X. Sun, P. Zhang, D. Wang, Y. Cao, and B. Liu, “Colorectal polyp segmentation by u-net with dilation convolution,” _arXiv preprint arXiv:1912.11947_ , 2019.
* [30] D. Jha, M. A. Riegler, D. Johansen, P. Halvorsen, and H. D. Johansen, “Doubleu-net: A deep convolutional neural network for medical image segmentation,” in _Proc. of IEEE CBMS_ , 2020.
* [31] N. Ibtehaz and M. S. Rahman, “Multiresunet: Rethinking the u-net architecture for multimodal biomedical image segmentation,” _Neural Networks_ , vol. 121, pp. 74–87, 2020.
* [32] P. Brandao _et al._ , “Towards a computed-aided diagnosis system in colonoscopy: automatic polyp segmentation using convolution neural networks,” _Jour. of Medi. Robot. Resear._ , vol. 3, no. 02, p. 1840002, 2018\.
* [33] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in _Proc. of CVPR_ , 2009, pp. 248–255.
* [34] Z. Zhang, Q. Liu, and Y. Wang, “Road extraction by deep residual u-net,” _IEEE Geosci. and Remo. Sens. Lett._ , vol. 15, no. 5, pp. 749–753, 2018\.
* [35] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in _Proc. of MICCAI_ , 2015, pp. 234–241.
* [36] J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in _Proc. of CVPR_ , 2018, pp. 7132–7141.
* [37] L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam, “Rethinking atrous convolution for semantic image segmentation,” _arXiv preprint arXiv:1706.05587_ , 2017.
* [38] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in _Proc. of NIPS_ , 2017, pp. 5998–6008.
* [39] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” _arXiv preprint arXiv:1502.03167_ , 2015.
* [40] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” _nature_ , vol. 521, no. 7553, pp. 436–444, 2015.
* [41] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _Proc. of CVPR_ , 2016, pp. 770–778.
* [42] L. Wang, R. Chen, S. Wang, N. Zeng, X. Huang, and C. Liu, “Nested dilation network (ndn) for multi-task medical image segmentation,” _IEEE Access_ , vol. 7, pp. 44 676–44 685, 2019.
|
arxiv-papers
| 2021-07-26T18:55:58 |
2024-09-04T03:07:19.893462
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Debesh Jha, Pia H. Smedsrud, Dag Johansen, Thomas de Lange, H{\\aa}vard\n D. Johansen, P{\\aa}l Halvorsen, and Michael A. Riegler",
"submitter": "Debesh Jha",
"url": "https://arxiv.org/abs/2107.12435"
}
|
2107.12436
|
# Feature Synergy, Redundancy, and Independence in Global Model Explanations
using SHAP Vector Decomposition
Jan Ittner Lukasz Bolikowski Konstantin Hemker Ricardo Kennedy
###### Abstract
We offer a new formalism for global explanations of pairwise feature
dependencies and interactions in supervised models. Building upon Shap values
and Shap interaction values, our approach decomposes feature contributions
into synergistic, redundant and independent components (S-R-I decomposition of
SHAP vectors). We propose a geometric interpretation of the components and
formally prove its basic properties. Finally, we demonstrate the utility of
synergy, redundancy and independence by applying them to a constructed data
set and model.
Machine Learning, ICML
## 1 Introduction
Understanding how and why a model produces its output is an essential part of
building a robust machine learning solution. There are various reasons why
data scientists opt to “unpack” their models, including
1. 1.
Diagnostic: ensuring that good model performance is not a result of data
leakage, the evaluation protocol is not compromised, and the model has learned
to properly generalise from the training data.
2. 2.
Validation: checking that relationships discovered by the model are also
plausible from the perspective of domain experts.
3. 3.
Feature selection: pruning redundant features with low or no marginal impact
while protecting groups of synergistic features.
4. 4.
Fairness and compliance: detecting a model’s direct or indirect use of
protected attributes to avoid discriminatory bias or the violation of other
regulatory requirements.
Some machine learning models, by design, offer limited insights into their
decision making process. Examples include comparing coefficients of linear
regression models, counting how often a feature is used in random forest
models, or tracking neuron activations under various inputs in neural
networks. Still, the most valuable explanatory frameworks are those that can
unpack an arbitrary “black box” model without the need to access its
internals.
Model explanation typically takes the form of attributing importance to input
features, individually or by groups. Several approaches have been proposed to
date, with Shap (Lundberg & Lee, 2017) being the most popular.
However, Shap focuses primarily on quantifying _local_ contributions of one
or more features; it is not designed to explain global relationships among
features from the perspective of a given model: Does the model combine
information from groups of features, meaning that any feature of that group
would be less impactful in the absence of its counterparts? Which features are
fully or partially redundant with respect to the target variable, and could
therefore be substituted for each other with little or no loss of model
performance?
This paper offers new answers to questions such as the above, proposing an
approach with favourable mathematical properties to quantify dependencies and
interactions between features in a model: given any pair of features $x_{i}$
and $x_{j}$, we interpret their Shap values across multiple observations as
vectors, then decompose them into multiple subvectors representing different
types of relationships, and quantify the strength of these relationships by
the magnitudes of the vectors. We distinguish three types of relationships:
_synergy_ , _redundancy_ , and _independence_.
1. 1.
The _synergy_ of feature $x_{i}$ relative to another feature $x_{j}$
quantifies the degree to which predictive contributions of $x_{i}$ rely on
information from $x_{j}$. As an example, two features representing coordinates
on a map need to be used synergistically to predict distances from arbitrary
points on the map.
2. 2.
The _redundancy_ of feature $x_{i}$ with feature $x_{j}$ quantifies the degree
to which the predictive contribution of $x_{i}$ uses information that is also
available through $x_{j}$. For example, the temperature and pressure measured
in a vessel are highly redundant features since both are mutually dependent
owing to the ideal gas law.
3. 3.
The _independence_ of feature $x_{i}$ relative to feature $x_{j}$ quantifies
the degree to which the predictive contribution of $x_{i}$ is neither
synergistic nor redundant with $x_{j}$.
Synergy, redundancy, and independence are expressed as percentages of feature
importance. They are additive, and sum up to 100% for any pair of features.
Importantly, none of these relationships is necessarily symmetric: While one
feature may replicate or complement some or all of the information provided by
another feature, the reverse need not be the case.
## 2 State of the Art
Model interpretability has been a subject of intensive research in recent years.
However, the very notion of interpretability can be understood in different
ways. Doshi-Velez & Kim (Doshi-Velez & Kim, 2017), as well as Lipton (Lipton,
2018), and Gilpin et al. (Gilpin et al., 2018) worked towards clarifying
related terminology, as well as listing motivations for, and flavors of,
interpretability.
Pioneering works of Strumbelj & Kononenko (Štrumbelj & Kononenko, 2014) and
Local Interpretable Model-agnostic Explanations (LIME) by Ribeiro et al.
(Ribeiro et al., 2016) were refined into a unified framework called SHapley
Additive exPlanation (Shap) by Lundberg & Lee (Lundberg & Lee, 2017) which is
a foundation for most of the currently developed approaches. In a follow-up
article, higher-order Shap values, so-called Shap interaction values were
introduced (Lundberg et al., 2018). Efficient Shap implementations for tree
ensemble models were also found (Lundberg et al., 2019).
As Shap became a reference framework for model explanation, several authors
turned to exploring the utility of Shap and expanding it. Rathi (Rathi, 2019)
showed how to generate GDPR-compliant counterfactual and contrastive
explanations using Shap. Merrick & Taly (Merrick & Taly, 2020) demonstrated
how to calculate confidence intervals of attributions. Shapley Additive Global
importancE (SAGE) (Covert et al., 2020) was proposed for quantifying a
model’s dependence on its features. Sundararajan & Najmi (Sundararajan &
Najmi, 2020) explored axioms and desired properties of various attribution
methods.
Naturally, critical analysis of Shap revealed its limitations. Kumar et al.
(Kumar et al., 2020b) pointed to certain mathematical shortcomings of Shap
(including the question of addressing causality) and the fact that Shapley
values represent only a summary of a game. The same authors (Kumar et al.,
2020a) offered a concept of Shapley residuals, vectors capturing information
lost by Shapley values. Their approach is based on work of Stern & Tettenhorst
(Stern & Tettenhorst, 2019), who have shown a way of decomposing an arbitrary
game and the relation of such decompositions to Shapley values.
## 3 Preliminaries
Let us start by briefly recalling the key concepts upon which the S-R-I
decomposition is founded.
### 3.1 Original Shapley Values
Shapley values were originally introduced as a concept in game theory to
describe the distribution of the total surplus among coalitions of players in
an $n$-person game. As each player contributes differently in different
coalitions, Shapley values provide a way of modeling the marginal
contribution of each player to the overall outcome of the
game. Formally, Shapley (Shapley, 1953) expresses the amount allocated to
player $i$ in a collaborative game with players $N$ and outcomes $f_{x}(S)$
for any subset _(coalition)_ of players $S\subseteq N$ as:
$\displaystyle{\phi}_{i}$ $\displaystyle=\sum_{S\subseteq
N\setminus\\{i\\}}{\frac{|S|!\;(|N|-|S|-1)!}{|N|!}}\nabla_{i}(S)$ (1)
where
$\displaystyle\nabla_{i}(S)$ $\displaystyle=f_{x}(S\cup\\{i\\})-f_{x}(S)$ (2)
${\phi}_{i}$ expresses the average incremental contribution of player $i$ when
added to all possible coalitions $S\subseteq N\setminus\\{i\\}$, averaged over
all orderings of the players.
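To make Eq. (1) concrete, the following is a minimal brute-force sketch that enumerates every coalition of a toy three-player game. The game `v` and all names here are illustrative, not part of any Shap library:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values via Eq. (1): for each player i, average the
    marginal contribution v(S | {i}) - v(S) over all coalitions S of the
    remaining players, weighted by |S|!(|N|-|S|-1)!/|N|!."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                S = set(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (v(S | {i}) - v(S))
        phi[i] = total
    return phi

# Toy game: players 0 and 1 are worth 10 only together; player 2 adds 5 alone.
v = lambda S: 10.0 * (0 in S and 1 in S) + 5.0 * (2 in S)
phi = shapley_values([0, 1, 2], v)
print({i: round(p, 6) for i, p in phi.items()})  # {0: 5.0, 1: 5.0, 2: 5.0}
```

Note that the values sum to $v(N)=15$, the efficiency property of Shapley values.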
### 3.2 Shap Vectors
Shap values are an application of Shapley values for a predictive model
$f:\mathbb{R}^{n}\to\mathbb{R}$. In this context, the game outcome $f_{x}$ is
the model evaluated for a sample $x\in\mathbb{R}^{n}$ with different sets of
features present. “Players” are the features used in the model and
“coalitions” of features correspond to subsets of features that are provided
to the model to make predictions. The term $f_{x}(S)$ in (1) is defined to be
the original model $f$ restricted to use only features in $S$, by taking the
expectation value over features not in $S$. In the notation of (Chen et al.,
2020):
$f_{x}(S)=\mathbb{E}[f(x)|S]$ (3)
In particular,
$\displaystyle f_{x}(N)$ $\displaystyle=\mathbb{E}[f(x)|N]=f(x)$ (4)
and
$\displaystyle f_{x}(\emptyset)$ $\displaystyle=\mathbb{E}[f(x)|\emptyset]=\mathbb{E}[f(x)]$ (5)
Given $m$ samples in the training corpus for the model, we can calculate the
Shap value ${\phi}^{u}_{i}$ for each feature $\mathit{x}_{i}$ and observation
$u$, resulting in an $n\times m$ Shap value matrix.
In turn, we define the _Shap vector_ as
${\mathbf{p}}_{i}=({\phi}^{1}_{i},\dots,{\phi}^{m}_{i})$ (6)
being the Shap values for samples $u=1\dots m$ for feature $i$.
### 3.3 Shap Interaction Vectors
Shap interaction effects (Lundberg et al., 2018) quantify the interactions
between any pair of features $x_{i}$ and $x_{j}$ by calculating the difference
between the Shap value for feature $i$ when $j$ is present, and the Shap value
for feature $i$ when $j$ is absent. Formally, this relationship is captured by
$\nabla_{ij}$ in (7) and (3.3).
$\displaystyle{\phi}_{ij}$ $\displaystyle=\sum_{S\subseteq
N\setminus\\{i,j\\}}{\frac{|S|!(|N|-|S|-2)!}{2(|N|-1)!}\nabla_{ij}(S)}$ (7)
$\displaystyle\nabla_{ij}$
$\displaystyle=f_{x}(S\cup\\{i,j\\})-f_{x}(S\cup\\{i\\})$
$\displaystyle-(f_{x}(S\cup\\{j\\})-f_{x}(S))$ (8)
where $S$ is a coalition of features representing a subset of all features
$N$. The summation extends over all possible coalitions of $N$ that don’t
contain the feature pair $\\{i,j\\}$. In (7) the Shap interaction value is
split equally between features $x_{i}$ and $x_{j}$ hence
${\phi}_{ij}={\phi}_{ji}$. We can isolate the _main effect_ ${\phi}_{ii}$ for
feature $x_{i}$ by subtracting the interaction values for all $j\neq i$ from
the Shap value ${\phi}_{i}$:
$\displaystyle{\phi}_{ii}$ $\displaystyle={\phi}_{i}-\sum_{j\neq
i}{\phi}_{ij}$ (9)
Similarly to _Shap vectors_, we define the _Shap interaction vector_ as the
vector of Shap interaction values for samples $u=1,\dots,m$ given a pair of features
$\\{x_{i},x_{j}\\}$:
$\displaystyle{\mathbf{p}}_{ij}$
$\displaystyle=({\phi}^{1}_{ij},\dots,{\phi}^{m}_{ij})\quad\forall i,j\in
N\times N$ (10)
From (9) in conjunction with (6) and (10), it follows that all interaction
vectors for feature $x_{i}$ add up to the Shap vector for $x_{i}$:
$\displaystyle{\mathbf{p}}_{i}$ $\displaystyle=\sum_{j\in
N}{\mathbf{p}}_{ij}\quad\forall i$ (11)
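Eqs. (7)–(8) can be enumerated the same way as Eq. (1). The sketch below (toy game and names are ours) computes a pairwise interaction value and shows that, in this game, all of player 0's credit comes from its interaction with player 1, so its main effect per Eq. (9) is zero:

```python
from itertools import combinations
from math import factorial

def interaction(i, j, N, v):
    """Shap interaction value phi_ij per Eq. (7)-(8), summing over all
    coalitions S that contain neither i nor j."""
    n = len(N)
    others = [p for p in N if p not in (i, j)]
    total = 0.0
    for r in range(len(others) + 1):
        for S in combinations(others, r):
            S = set(S)
            weight = factorial(len(S)) * factorial(n - len(S) - 2) / (2 * factorial(n - 1))
            nabla = (v(S | {i, j}) - v(S | {i})) - (v(S | {j}) - v(S))
            total += weight * nabla
    return total

# Same toy game as above: 0 and 1 are worth 10 only together; 2 adds 5 alone.
v = lambda S: 10.0 * (0 in S and 1 in S) + 5.0 * (2 in S)
N = [0, 1, 2]
phi_01 = interaction(0, 1, N, v)  # 5.0 -- half of the joint effect, per side
phi_02 = interaction(0, 2, N, v)  # 0.0 -- no interaction between 0 and 2
# Main effect via Eq. (9): phi_00 = phi_0 - phi_01 - phi_02 = 5 - 5 - 0 = 0
```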
## 4 Synergy, Redundancy, and Independence
In this section we introduce and examine various $m$-dimensional
vectors, where $m$ is the number of observations. Vectors representing Shap
values (6) and Shap interaction values (10) will be our building material,
from which we will construct other informative vectors. Without loss of
generality, at all times, we will focus on one feature, $\mathit{x}_{i}$ (with
corresponding Shap vector ${\mathbf{p}}_{i}$), and explore its relationship
with one other feature $\mathit{x}_{j}$ (with Shap vector ${\mathbf{p}}_{j}$
and Shap interaction vector ${\mathbf{p}}_{ij}$).
We will be concerned with angles between vectors in the $m$-dimensional space.
The smaller the angle between two vectors, the more information is shared by
them. Our goal will often be to decompose vectors into orthogonal components
(see Figure 1).
Figure 1: Geometric interpretation of synergy, redundancy and independence of
feature $\mathit{x}_{i}$ relative to feature $\mathit{x}_{j}$. In this
3-dimensional representation, vectors ${\mathbf{p}}_{i}$, ${\mathbf{p}}_{ij}$,
${\mathbf{s}}_{ij}$ and ${\mathbf{a}}_{ij}$ are co-planar and in the plane of
the paper. Vectors ${\mathbf{a}}_{ij}$, ${\mathbf{a}}_{ji}$,
${\mathbf{r}}_{ij}$ and ${\mathbf{i}}_{ij}$ are co-planar and in a plane
orthogonal to the paper (for better visibility, the perspective is slightly
skewed sideways). Feature vector ${\mathbf{p}}_{i}$ is projected on
interaction vector ${\mathbf{p}}_{ij}$ to obtain synergy vector
${\mathbf{s}}_{ij}$. Autonomy vector ${\mathbf{a}}_{ij}$ is orthogonal to
${\mathbf{s}}_{ij}$ and the two add up to ${\mathbf{p}}_{i}$. Redundancy
vector ${\mathbf{r}}_{ij}$ is a projection of ${\mathbf{a}}_{ij}$ onto
${\mathbf{a}}_{ji}$ (${\mathbf{a}}_{ji}$ is the autonomy vector from the
perspective of feature $\mathit{x}_{j}$). Independence vector
${\mathbf{i}}_{ij}$ is orthogonal to ${\mathbf{r}}_{ij}$ and the two add up to
${\mathbf{a}}_{ij}$.
### 4.1 Vector Representation
###### Definition 1 (Synergy vector)
${\mathbf{s}}_{ij}=\frac{\langle{\mathbf{p}}_{i},{\mathbf{p}}_{ij}\rangle}{\|{\mathbf{p}}_{ij}\|^{2}}{\mathbf{p}}_{ij}\quad\forall
i\neq j$ (12)
Geometrically speaking, the synergy vector for $\mathit{x}_{i}$ and
$\mathit{x}_{j}$ is a projection of ${\mathbf{p}}_{i}$ on ${\mathbf{p}}_{ij}$.
Synergy represents the advantage that feature $\mathit{x}_{i}$ receives when
aided by $\mathit{x}_{j}$.
For example, if features $\mathit{x}_{i}$ and $\mathit{x}_{j}$ represent
geographic latitude and longitude, and our function is elevation above mean
sea level, then both features work synergistically and neither can determine
the outcome without the other.
Note that the definition is asymmetric, hence ${\mathbf{s}}_{ji}$ need not
equal ${\mathbf{s}}_{ij}$.
###### Definition 2 (Autonomy vector)
${\mathbf{a}}_{ij}={\mathbf{p}}_{i}-{\mathbf{s}}_{ij}\quad\forall i\neq j$
(13)
Autonomy is the converse of synergy. As such, autonomy represents the
predictive contributions $\mathit{x}_{i}$ makes without help from
$\mathit{x}_{j}$, either because it is redundant, or independent (subsequent
definitions will help us distinguish between these two cases).
Geometrically, the autonomy vector is perpendicular to the synergy vector, and
both add up to ${\mathbf{p}}_{i}$.
###### Definition 3 (Redundancy vector)
${\mathbf{r}}_{ij}=\frac{\langle{\mathbf{a}}_{ij},{\mathbf{a}}_{ji}\rangle}{\|{\mathbf{a}}_{ji}\|^{2}}{\mathbf{a}}_{ji}\quad\forall
i\neq j$ (14)
The redundancy vector represents information in $\mathit{x}_{i}$ that is
replicated by $\mathit{x}_{j}$. Geometrically, this is the projection of
vector ${\mathbf{a}}_{ij}$ onto vector ${\mathbf{a}}_{ji}$.
For example, distance in kilometres and distance in miles are perfectly
redundant features, whereas a child’s age and height are partially (but not
fully) redundant.
###### Definition 4 (Independence vector)
${\mathbf{i}}_{ij}={\mathbf{a}}_{ij}-{\mathbf{r}}_{ij}\quad\forall i\neq j$
(15)
Independence represents the information in feature $\mathit{x}_{i}$ that has
no synergy or redundancy with feature $\mathit{x}_{j}$. Geometrically,
${\mathbf{i}}_{ij}$ and ${\mathbf{r}}_{ij}$ are orthogonal, and together they
add up to ${\mathbf{a}}_{ij}$.
Let us sum up basic properties of the vectors introduced above. First of all,
it follows directly from the definitions that:
$\displaystyle{\mathbf{p}}_{i}$
$\displaystyle={\mathbf{s}}_{ij}+{\mathbf{a}}_{ij}={\mathbf{s}}_{ij}+{\mathbf{r}}_{ij}+{\mathbf{i}}_{ij}$
(16)
$\displaystyle{\mathbf{s}}_{ij}\perp{\mathbf{r}}_{ij}\perp{\mathbf{i}}_{ij}\perp{\mathbf{s}}_{ij}$
(17)
Thanks to the above, we also have:
$\displaystyle\|{\mathbf{p}}_{i}\|^{2}$
$\displaystyle=\|{\mathbf{s}}_{ij}\|^{2}+\|{\mathbf{a}}_{ij}\|^{2}=\|{\mathbf{s}}_{ij}\|^{2}+\|{\mathbf{r}}_{ij}\|^{2}+\|{\mathbf{i}}_{ij}\|^{2}$
(18)
For any $\mathit{x}_{i}$ and $\mathit{x}_{j}$, the vectors ${\mathbf{p}}_{i}$,
${\mathbf{p}}_{ij}$, ${\mathbf{s}}_{ij}$ and ${\mathbf{a}}_{ij}$ are co-
planar. Another important plane, orthogonal to the first one, contains the
vectors ${\mathbf{a}}_{ij}$, ${\mathbf{a}}_{ji}$, ${\mathbf{r}}_{ij}$ and
${\mathbf{i}}_{ij}$ (also see Figure 1).
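Definitions 1–4 are plain orthogonal projections, so properties (16)–(18) can be verified numerically. In the sketch below the vectors are random stand-ins for real Shap and interaction vectors, and `proj` is our own helper:

```python
import numpy as np

def proj(v, w):
    """Orthogonal projection of v onto w, as used in Definitions 1 and 3."""
    return (v @ w) / (w @ w) * w

rng = np.random.default_rng(0)
m = 50                                              # number of observations
p_i, p_j = rng.normal(size=m), rng.normal(size=m)   # stand-in Shap vectors
p_ij = rng.normal(size=m)                           # stand-in interaction vector

s_ij = proj(p_i, p_ij)                   # Definition 1: synergy
a_ij = p_i - s_ij                        # Definition 2: autonomy
a_ji = p_j - proj(p_j, p_ij)             # autonomy from x_j's perspective
r_ij = proj(a_ij, a_ji)                  # Definition 3: redundancy
i_ij = a_ij - r_ij                       # Definition 4: independence

# Eq. (16): the components add back up to p_i
assert np.allclose(p_i, s_ij + r_ij + i_ij)
# Eq. (17): pairwise orthogonality
assert abs(s_ij @ r_ij) < 1e-8 and abs(r_ij @ i_ij) < 1e-8 and abs(i_ij @ s_ij) < 1e-8
# Eq. (18): Pythagoras
assert np.isclose(p_i @ p_i, s_ij @ s_ij + r_ij @ r_ij + i_ij @ i_ij)
```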
### 4.2 Scalar Representation and $S$, $R$, $I$ Values
For practical reasons, instead of working with the full vectors, we introduce
their scalar counterparts. For each of the scalar values $S_{ij}$, $R_{ij}$
and $I_{ij}$ we have three equivalent characterisations:
* •
geometrically, as the relative length of the projection onto
${\mathbf{p}}_{i}$,
* •
as the ratio of squared norms
$\frac{\|\cdot\|^{2}}{\|{\mathbf{p}}_{i}\|^{2}}$,
* •
as the square of the uncentered correlation coefficient $\frac{\langle
v,w\rangle^{2}}{\|v\|^{2}\|w\|^{2}}$.
###### Definition 5 (Synergy value)
$S_{ij}=\frac{\langle{\mathbf{s}}_{ij},{\mathbf{p}}_{i}\rangle}{\|{\mathbf{p}}_{i}\|^{2}}=\frac{\|{\mathbf{s}}_{ij}\|^{2}}{\|{\mathbf{p}}_{i}\|^{2}}=\frac{\langle{\mathbf{p}}_{i},{\mathbf{p}}_{ij}\rangle^{2}}{\|{\mathbf{p}}_{i}\|^{2}\|{\mathbf{p}}_{ij}\|^{2}}\quad\forall
i\neq j$ (19)
###### Definition 6 (Redundancy value)
$R_{ij}=\frac{\langle{\mathbf{r}}_{ij},{\mathbf{p}}_{i}\rangle}{\|{\mathbf{p}}_{i}\|^{2}}=\frac{\|{\mathbf{r}}_{ij}\|^{2}}{\|{\mathbf{p}}_{i}\|^{2}}=(1-S_{ij})\frac{\langle{\mathbf{a}}_{ij},{\mathbf{a}}_{ji}\rangle^{2}}{\|{\mathbf{a}}_{ij}\|^{2}\|{\mathbf{a}}_{ji}\|^{2}}\quad\forall
i\neq j$ (20)
###### Definition 7 (Independence value)
$I_{ij}=\frac{\langle{\mathbf{i}}_{ij},{\mathbf{p}}_{i}\rangle}{\|{\mathbf{p}}_{i}\|^{2}}=\frac{\|{\mathbf{i}}_{ij}\|^{2}}{\|{\mathbf{p}}_{i}\|^{2}}=1-S_{ij}-R_{ij}\quad\forall
i\neq j$ (21)
In the appendix, we derive the equivalence between the three characterizations
for each scalar value in eqs. (19), (20), and (21) respectively.
We have thus defined scalar values quantifying synergy, redundancy and
independence from a global perspective. The three values are non-negative and
sum up to unity:
$\displaystyle S_{ij}+R_{ij}+I_{ij}=1$ (22)
$\displaystyle 0\leq S_{ij}\leq 1$ (23)
$\displaystyle 0\leq R_{ij}\leq 1$ (24)
$\displaystyle 0\leq I_{ij}\leq 1$ (25)
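The scalar values of Definitions 5–7 then fall out as squared-norm ratios. The helper below is our own sketch (not the FACET API), using random stand-in vectors to check Eqs. (22)–(25):

```python
import numpy as np

def sri_values(p_i, p_j, p_ij):
    """Scalar synergy, redundancy, independence (Definitions 5-7) for a
    feature pair, from Shap vectors p_i, p_j and interaction vector p_ij."""
    proj = lambda v, w: (v @ w) / (w @ w) * w
    s_ij = proj(p_i, p_ij)               # synergy vector
    a_ij = p_i - s_ij                    # autonomy vector
    a_ji = p_j - proj(p_j, p_ij)
    r_ij = proj(a_ij, a_ji)              # redundancy vector
    i_ij = a_ij - r_ij                   # independence vector
    n2 = p_i @ p_i
    return s_ij @ s_ij / n2, r_ij @ r_ij / n2, i_ij @ i_ij / n2

rng = np.random.default_rng(1)
S, R, I = sri_values(rng.normal(size=40), rng.normal(size=40), rng.normal(size=40))
assert min(S, R, I) >= 0 and abs(S + R + I - 1) < 1e-10  # Eqs. (22)-(25)
```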
### 4.3 Orthogonality Correction
Shap interaction vectors representing main effects ${\mathbf{p}}_{ii}$ are not
guaranteed to be orthogonal to pairwise interaction vectors
${\mathbf{p}}_{ij}$. In order to split the main effects from the interaction
vectors, we correct Shap interaction values by projecting them onto the
subspace that is orthogonal to ${\mathbf{p}}_{ii}$ and ${\mathbf{p}}_{jj}$. In
other words, we determine constants $\alpha$ and $\beta$ such that
$\displaystyle{\mathbf{p^{\prime}}}_{ij}:={\mathbf{p}}_{ij}-\alpha{\mathbf{p}}_{ii}-\beta{\mathbf{p}}_{jj}$
(26)
$\displaystyle{\mathbf{p}}_{ii}\perp{\mathbf{p^{\prime}}}_{ij}\perp{\mathbf{p}}_{jj}$
(27)
and apply the S-R-I calculations based on the corrected vectors
${\mathbf{p^{\prime}}}_{ij}$. A further formalisation of this preprocessing
step is part of our current research (see also the outlook in section 6).
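One way to realise Eqs. (26)–(27) is a least-squares projection: $\alpha$ and $\beta$ are the coefficients of ${\mathbf{p}}_{ij}$ on the span of ${\mathbf{p}}_{ii}$ and ${\mathbf{p}}_{jj}$, so the residual is orthogonal to both, even when ${\mathbf{p}}_{ii}$ and ${\mathbf{p}}_{jj}$ are not orthogonal to each other. A sketch with random stand-in vectors:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 30
p_ii, p_jj = rng.normal(size=m), rng.normal(size=m)  # stand-in main-effect vectors
p_ij = rng.normal(size=m)                            # stand-in raw interaction vector

# Eq. (26): remove the components of p_ij along p_ii and p_jj.
# Least squares makes the residual orthogonal to both columns of A.
A = np.column_stack([p_ii, p_jj])
(alpha, beta), *_ = np.linalg.lstsq(A, p_ij, rcond=None)
p_ij_corr = p_ij - alpha * p_ii - beta * p_jj

# Eq. (27): orthogonality of the corrected interaction vector
assert abs(p_ij_corr @ p_ii) < 1e-8 and abs(p_ij_corr @ p_jj) < 1e-8
```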
## 5 Experimental Results
Table 1: Synergy, redundancy and independence values for pairs of features of
the examined model.
$S_{ij}$ | $x_{1}$ | $x_{2}$ | $x_{3}$ | $x_{4}$ | $x_{5}$
---|---|---|---|---|---
$x_{1}$ | - | 1.00 | 1.00 | 0.00 | 0.00
$x_{2}$ | 0.79 | - | 0.00 | 0.00 | 0.00
$x_{3}$ | 0.79 | 0.00 | - | 0.00 | 0.00
$x_{4}$ | 0.00 | 0.00 | 0.00 | - | 0.00
$x_{5}$ | 0.00 | 0.00 | 0.00 | 0.00 | -
$R_{ij}$ | $x_{1}$ | $x_{2}$ | $x_{3}$ | $x_{4}$ | $x_{5}$
$x_{1}$ | - | 0.00 | 0.00 | 0.00 | 0.00
$x_{2}$ | 0.00 | - | 1.00 | 0.00 | 0.00
$x_{3}$ | 0.00 | 1.00 | - | 0.00 | 0.00
$x_{4}$ | 0.00 | 0.00 | 0.00 | - | 0.00
$x_{5}$ | 0.00 | 0.00 | 0.00 | 0.00 | -
$I_{ij}$ | $x_{1}$ | $x_{2}$ | $x_{3}$ | $x_{4}$ | $x_{5}$
$x_{1}$ | - | 0.00 | 0.00 | 1.00 | 1.00
$x_{2}$ | 0.21 | - | 0.00 | 1.00 | 1.00
$x_{3}$ | 0.21 | 0.00 | - | 1.00 | 1.00
$x_{4}$ | 1.00 | 1.00 | 1.00 | - | 1.00
$x_{5}$ | 1.00 | 1.00 | 1.00 | 1.00 | -
Let us now examine how S-R-I decomposition works in practice to gain a deeper
understanding of the relationships between model features.
Consider $m=1\,000$ observations of $n=5$ features, represented by
$m$-dimensional vectors: $\\{\mathbf{x}_{1},\dots,\mathbf{x}_{n}\\}$. Each
value of $\mathbf{x}_{1}$, $\mathbf{x}_{2}$, $\mathbf{x}_{4}$ and
$\mathbf{x}_{5}$ is drawn independently from the uniform distribution on $[0,1]$,
while $\mathbf{x}_{3}=\mathbf{x}_{2}$. Consider a model (see Figure 2):
$\displaystyle f(\mathbf{x}):=$
$\displaystyle\sin(2\pi\mathbf{x}_{1})\sin(2\pi\frac{\mathbf{x}_{2}+\mathbf{x}_{3}}{2})+\mathbf{x}_{4}+\mathbf{x}_{5}$
(28)
Figure 2: Function used in the experiment, plotted against the first feature
on the x-axis, and the second and third features (duplicated) on the y-axis.
In other words, features $\mathbf{x}_{2}$ and $\mathbf{x}_{3}$ are identical,
redundant copies. Features $\mathbf{x}_{4}$ and $\mathbf{x}_{5}$ impact the
model independently of each other and of any other feature. Impact of feature
$\mathbf{x}_{1}$ is linked to that of features $\mathbf{x}_{2}$,
$\mathbf{x}_{3}$, as neither can increase the function’s value without “co-
operation” with the others (there is a large degree of synergy between them).
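The synthetic data and the model of Eq. (28) can be reproduced in a few lines (the seed and variable names are ours):

```python
import numpy as np

m = 1_000
rng = np.random.default_rng(42)
x1, x2, x4, x5 = rng.uniform(0.0, 1.0, size=(4, m))  # independent U[0, 1] draws
x3 = x2.copy()                                       # exact duplicate of x2

# Eq. (28): x1 interacts with (x2 + x3)/2; x4 and x5 enter additively.
f = np.sin(2 * np.pi * x1) * np.sin(2 * np.pi * (x2 + x3) / 2) + x4 + x5
```

Applying an exact Shap explainer to this data should reproduce the pattern of Table 1.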
We have calculated exact Shap values for each observation, applied
orthogonality correction described in 4.3, and then calculated S-R-I
decomposition for feature pairs. Table 1 presents synergy, redundancy and
independence values for each pair of features.
Investigating the results we notice that $S_{12}=S_{13}=1$, indicating that
$\mathbf{x}_{1}$ can provide the “missing piece of information” to
$\mathbf{x}_{2}$ and $\mathbf{x}_{3}$. At the same time, $S_{21}=S_{31}=0.79$,
meaning that $\mathbf{x}_{2}$ can also reinforce $\mathbf{x}_{1}$, but is
limited by $\mathbf{x}_{3}$ (and vice versa).
Looking at $R_{ij}$, the only pair of redundant features is $\mathbf{x}_{2}$
and $\mathbf{x}_{3}$, with $R_{23}=R_{32}=1$. We have $I_{4\cdot}=I_{\cdot
4}=I_{5\cdot}=I_{\cdot 5}=1$, expressing the fact that the last two features
contribute fully independently to the overall outcome. Lastly, as expected, in
all cases $S_{ij}+R_{ij}+I_{ij}=1$.
To sum up, we have observed that synergy, redundancy and independence values,
as defined in this paper, are intuitive and quantifiable reflections of their
respective notions.
## 6 Conclusions
In this work we have shown that an interaction between any two features in a
model can be decomposed into three components: synergy (S), redundancy (R) and
independence (I). We have characterized S-R-I using geometric properties, and
have proven equivalence between alternative formulations. We have also used an
example using a synthetic dataset to demonstrate how a global explanation
using S-R-I decomposition can enhance our understanding of the relationships
among model features.
The three values are defined in terms of Shap values and Shap interaction
values. They can be efficiently calculated, so that the marginal cost of the
S-R-I decomposition is negligible. We have released an open-source
implementation of S-R-I decomposition in our Explainable AI software library
FACET: https://github.com/BCG-Gamma/facet.
The notion of global explanations using orthogonal vectors in the space of
observations deserves further attention. Our current research focuses on
determining desirable geometric properties of interaction values, and
proposing relevant orthogonalisation steps.
## Appendix
As discussed in section 4.2, each scalar value for synergy, redundancy and
independence has three equivalent characterizations:
* •
geometrically, as the relative length of the projection onto
${\mathbf{p}}_{i}$,
* •
as the ratio of squared norms
$\frac{\|\cdot\|^{2}}{\|{\mathbf{p}}_{i}\|^{2}}$,
* •
as the square of the uncentered correlation coefficient $\frac{\langle
v,w\rangle^{2}}{\|v\|^{2}\|w\|^{2}}$.
Here, we derive the equivalence between the three characterizations for each
scalar value as stated for $S_{ij}$ in eq. (19), for $R_{ij}$ in eq. (20), and
for $I_{ij}$ in eq. (21) respectively.
Starting with $S_{ij}$, the equivalence in eq. (19) can be shown as follows:
$\displaystyle\frac{\langle{\mathbf{s}}_{ij},{\mathbf{p}}_{i}\rangle}{\|{\mathbf{p}}_{i}\|^{2}}$
$\displaystyle=\frac{\langle{\mathbf{s}}_{ij},{\mathbf{s}}_{ij}+{\mathbf{a}}_{ij}\rangle}{\|{\mathbf{p}}_{i}\|^{2}}=\frac{\langle{\mathbf{s}}_{ij},{\mathbf{s}}_{ij}\rangle}{\|{\mathbf{p}}_{i}\|^{2}}=\frac{\|{\mathbf{s}}_{ij}\|^{2}}{\|{\mathbf{p}}_{i}\|^{2}}$
(29)
$\displaystyle\frac{\langle{\mathbf{s}}_{ij},{\mathbf{p}}_{i}\rangle}{\|{\mathbf{p}}_{i}\|^{2}}$
$\displaystyle=\frac{\langle{\mathbf{p}}_{ij},{\mathbf{p}}_{i}\rangle}{\|{\mathbf{p}}_{i}\|^{2}}\frac{\langle{\mathbf{p}}_{i},{\mathbf{p}}_{ij}\rangle}{\|{\mathbf{p}}_{ij}\|^{2}}=\frac{\langle{\mathbf{p}}_{i},{\mathbf{p}}_{ij}\rangle^{2}}{\|{\mathbf{p}}_{i}\|^{2}\|{\mathbf{p}}_{ij}\|^{2}}$
(30)
For $R_{ij}$, the equivalence in eq. (20) is due to:
$\displaystyle\frac{\langle{\mathbf{r}}_{ij},{\mathbf{p}}_{i}\rangle}{\|{\mathbf{p}}_{i}\|^{2}}$
$\displaystyle=\frac{\langle{\mathbf{r}}_{ij},{\mathbf{s}}_{ij}+{\mathbf{r}}_{ij}+{\mathbf{i}}_{ij}\rangle}{\|{\mathbf{p}}_{i}\|^{2}}$
$\displaystyle=\frac{\langle{\mathbf{r}}_{ij},{\mathbf{r}}_{ij}\rangle}{\|{\mathbf{p}}_{i}\|^{2}}=\frac{\|{\mathbf{r}}_{ij}\|^{2}}{\|{\mathbf{p}}_{i}\|^{2}}$
(31)
$\displaystyle\frac{\langle{\mathbf{r}}_{ij},{\mathbf{p}}_{i}\rangle}{\|{\mathbf{p}}_{i}\|^{2}}$
$\displaystyle=\frac{\langle{\mathbf{p}}_{i},{\mathbf{a}}_{ji}\rangle}{\|{\mathbf{p}}_{i}\|^{2}}\frac{\langle{\mathbf{a}}_{ij},{\mathbf{a}}_{ji}\rangle}{\|{\mathbf{a}}_{ji}\|^{2}}$
$\displaystyle=\frac{\langle{\mathbf{s}}_{ij}+{\mathbf{a}}_{ij},{\mathbf{a}}_{ji}\rangle}{\|{\mathbf{p}}_{i}\|^{2}}\frac{\langle{\mathbf{a}}_{ij},{\mathbf{a}}_{ji}\rangle}{\|{\mathbf{a}}_{ji}\|^{2}}$
$\displaystyle=\frac{\langle{\mathbf{a}}_{ij},{\mathbf{a}}_{ji}\rangle}{\|{\mathbf{p}}_{i}\|^{2}}\frac{\langle{\mathbf{a}}_{ij},{\mathbf{a}}_{ji}\rangle}{\|{\mathbf{a}}_{ji}\|^{2}}$
$\displaystyle=\frac{\|{\mathbf{a}}_{ij}\|^{2}}{\|{\mathbf{p}}_{i}\|^{2}}\frac{\langle{\mathbf{a}}_{ij},{\mathbf{a}}_{ji}\rangle^{2}}{\|{\mathbf{a}}_{ij}\|^{2}\|{\mathbf{a}}_{ji}\|^{2}}$
$\displaystyle=(1-\frac{\|{\mathbf{s}}_{ij}\|^{2}}{\|{\mathbf{p}}_{i}\|^{2}})\frac{\langle{\mathbf{a}}_{ij},{\mathbf{a}}_{ji}\rangle^{2}}{\|{\mathbf{a}}_{ij}\|^{2}\|{\mathbf{a}}_{ji}\|^{2}}$
$\displaystyle=(1-S_{ij})\frac{\langle{\mathbf{a}}_{ij},{\mathbf{a}}_{ji}\rangle^{2}}{\|{\mathbf{a}}_{ij}\|^{2}\|{\mathbf{a}}_{ji}\|^{2}}$
(32)
For $I_{ij}$, the equivalence in eq. (21) is due to:
$\displaystyle\frac{\langle{\mathbf{i}}_{ij},{\mathbf{p}}_{i}\rangle}{\|{\mathbf{p}}_{i}\|^{2}}$
$\displaystyle=\frac{\langle{\mathbf{i}}_{ij},{\mathbf{s}}_{ij}+{\mathbf{r}}_{ij}+{\mathbf{i}}_{ij}\rangle}{\|{\mathbf{p}}_{i}\|^{2}}$
$\displaystyle=\frac{\langle{\mathbf{i}}_{ij},{\mathbf{i}}_{ij}\rangle}{\|{\mathbf{p}}_{i}\|^{2}}=\frac{\|{\mathbf{i}}_{ij}\|^{2}}{\|{\mathbf{p}}_{i}\|^{2}}$
(33) $\displaystyle\frac{\|{\mathbf{i}}_{ij}\|^{2}}{\|{\mathbf{p}}_{i}\|^{2}}$
$\displaystyle=\frac{\|{\mathbf{p}}_{i}\|^{2}-\|{\mathbf{s}}_{ij}\|^{2}-\|{\mathbf{r}}_{ij}\|^{2}}{\|{\mathbf{p}}_{i}\|^{2}}=1-S_{ij}-R_{ij}$
(34)
## References
* Chen et al. (2020) Chen, H., Janizek, J. D., Lundberg, S., and Lee, S.-I. True to the model or true to the data? _arXiv preprint arXiv:2006.16234_ , 2020.
* Covert et al. (2020) Covert, I., Lundberg, S., and Lee, S.-I. Understanding global feature contributions with additive importance measures. _Advances in Neural Information Processing Systems_ , 33, 2020.
* Doshi-Velez & Kim (2017) Doshi-Velez, F. and Kim, B. Towards a rigorous science of interpretable machine learning. _arXiv preprint arXiv:1702.08608_ , 2017.
* Gilpin et al. (2018) Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., and Kagal, L. Explaining explanations: An overview of interpretability of machine learning. In _2018 IEEE 5th International Conference on data science and advanced analytics (DSAA)_ , pp. 80–89. IEEE, 2018.
* Kumar et al. (2020a) Kumar, I. E., Scheidegger, C., Venkatasubramanian, S., and Friedler, S. Shapley residuals: Quantifying the limits of the Shapley value for explanations. In _ICML Workshop on Workshop on Human Interpretability in Machine Learning (WHI)_ , 2020a.
* Kumar et al. (2020b) Kumar, I. E., Venkatasubramanian, S., Scheidegger, C., and Friedler, S. Problems with Shapley-value-based explanations as feature importance measures. In _International Conference on Machine Learning_ , pp. 5491–5500. PMLR, 2020b.
* Lipton (2018) Lipton, Z. C. The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. _Queue_ , 16(3):31–57, 2018.
* Lundberg & Lee (2017) Lundberg, S. M. and Lee, S.-I. A unified approach to interpreting model predictions. _Advances in Neural Information Processing Systems_ , 30:4765–4774, 2017.
* Lundberg et al. (2018) Lundberg, S. M., Erion, G. G., and Lee, S.-I. Consistent individualized feature attribution for tree ensembles. _arXiv preprint arXiv:1802.03888_ , 2018.
* Lundberg et al. (2019) Lundberg, S. M., Erion, G., Chen, H., DeGrave, A., Prutkin, J. M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., and Lee, S.-I. Explainable AI for trees: From local explanations to global understanding. _arXiv preprint arXiv:1905.04610_ , 2019.
* Merrick & Taly (2020) Merrick, L. and Taly, A. The explanation game: Explaining Machine Learning models using Shapley values. In _International Cross-Domain Conference for Machine Learning and Knowledge Extraction_ , pp. 17–38. Springer, 2020.
* Rathi (2019) Rathi, S. Generating counterfactual and contrastive explanations using SHAP. _arXiv preprint arXiv:1906.09293_ , 2019.
* Ribeiro et al. (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. “Why should I trust you?” Explaining the predictions of any classifier. In _Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining_ , pp. 1135–1144, 2016.
* Shapley (1953) Shapley, L. A value for n-person games. _Contributions to the Theory of Games_ , pp. 31–40, 1953.
* Stern & Tettenhorst (2019) Stern, A. and Tettenhorst, A. Hodge decomposition and the Shapley value of a cooperative game. _Games and Economic Behavior_ , 113:186–198, 2019.
* Štrumbelj & Kononenko (2014) Štrumbelj, E. and Kononenko, I. Explaining prediction models and individual predictions with feature contributions. _Knowledge and information systems_ , 41(3):647–665, 2014.
* Sundararajan & Najmi (2020) Sundararajan, M. and Najmi, A. The many Shapley values for model explanation. In _International Conference on Machine Learning_ , pp. 9269–9278. PMLR, 2020.
|
arxiv-papers
| 2021-07-26T18:56:31 |
2024-09-04T03:07:19.908427
|
{
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"authors": "Jan Ittner, Lukasz Bolikowski, Konstantin Hemker and Ricardo Kennedy",
"submitter": "{\\L}ukasz Bolikowski",
"url": "https://arxiv.org/abs/2107.12436"
}
|
2107.12437
|
# PyCharge: An open-source Python package for self-consistent electrodynamics
simulations of Lorentz oscillators and moving point charges
Matthew J. Filipovich and Stephen Hughes
Department of Physics, Engineering Physics and Astronomy, Queen’s University, Kingston, ON K7L 3N6, Canada
###### Abstract
PyCharge is a computational electrodynamics Python simulator that can
calculate the electromagnetic fields and potentials generated by moving point
charges and can self-consistently simulate dipoles modeled as Lorentz
oscillators. To calculate the total fields and potentials along a discretized
spatial grid at a specified time, PyCharge computes the retarded time of the
point charges at each grid point, which are subsequently used to compute the
analytical solutions to Maxwell’s equations for each point charge. The Lorentz
oscillators are driven by the electric field in the system and PyCharge self-
consistently determines the reaction of the radiation on the dipole moment at
each time step. PyCharge treats the two opposite charges in the dipole as
separate point charge sources and calculates their individual contributions to
the total electromagnetic fields and potentials. The expected coupling that
arises between dipoles is captured in the PyCharge simulation, and the
modified radiative properties of the dipoles (radiative decay rate and
frequency shift) can be extracted using the dipole’s energy at each time step
throughout the simulation. The modified radiative properties of two dipoles
separated in the near-field, which require a full dipole response to yield the
correct physics, are calculated by PyCharge and shown to be in excellent
agreement with the analytical Green’s function results ($<0.2\%$ relative
error, over a wide range of spatial separations). Moving dipoles can also be
modeled by specifying the dipole’s origin position as a function of time.
PyCharge includes a parallelized version of the dipole simulation method to
enable the parallel execution of computationally demanding simulations on high
performance computing environments to significantly improve run time.
###### keywords:
Computational Electrodynamics , Nano-Optics , Electromagnetic Field Solver ,
Open Source , Python.
Journal: Computer Physics Communications
PROGRAM SUMMARY
Program Title: PyCharge
CPC Library link to program files: (to be added by Technical Editor)
Developer’s repository link:
github.com/MatthewFilipovich/pycharge
Code Ocean capsule: (to be added by Technical Editor)
Licensing provisions: GPLv3
Programming language: Python 3.7 or newer
Supplementary material:
Documentation is available at pycharge.readthedocs.io. The PyCharge package
and its dependencies can be installed from PyPI: pypi.org/project/pycharge
Nature of problem:
Calculating the electromagnetic fields and potentials generated by complex
geometries of point charges, as well as the self-consistent simulation of
Lorentz oscillators.
Solution method:
PyCharge calculates the individual contributions from each point charge in the
system and sums them to obtain the total electromagnetic fields and
potentials; the dipole moment of each Lorentz oscillator is computed at each
time step by solving its governing equation of motion.
Additional comments including restrictions and unusual features:
The parallel simulation method is implemented using the mpi4py package [1].
## References
* [1] L. D. Dalcin, R. R. Paz, P. A. Kler, A. Cosimo, Parallel distributed computing using python, Advances in Water Resources 34 (9) (2011) 1124–1139.
## 1 Introduction
The majority of electrodynamics problems can be divided into two distinct
classes (a further class of source-free problems, used to obtain the
underlying modes of a system, is not considered here): (i) one in which the
goal is to solve for the electromagnetic (EM)
fields generated by specified sources of charge and current (e.g., antennas,
radiation from multipole sources), and (ii) one in which the motion of the
charges and currents are to be determined based on the known fields in the
system (e.g., motion of charges in electric and magnetic fields, energy-loss
phenomena) [1]. However, there exists another class of electrodynamics
problems where the solution requires that the fields and sources are treated
self-consistently. That is, a correct treatment of the problem must include
the reaction of the radiation on the motion of the sources. The self-
consistent treatment of sources and fields is an old and difficult problem
that stems from one of the most fundamental aspects of physics: the nature of
an elementary particle. This problem of self-consistency is not only limited
to classical electrodynamics, as these difficulties also arise in quantum-
mechanical discussions and modelings of these systems [2].
Motivated by the need for an electrodynamics simulator that self-consistently
treats the reaction of the radiation on the real-time motion of the point
charge sources, we developed the open-source Python package PyCharge. PyCharge
can calculate the EM fields and potentials generated by sources in a system at
specified grid points in space and time, which can then be visualized using a
plotting library such as Matplotlib [3]. To calculate these fields and
potentials, PyCharge exploits the principle of superposition in classical
electrodynamics by determining the individual contributions from each source
and then calculating the sum. The equations describing the scalar and vector
potentials generated by a single moving point charge in a vacuum are given by
the Liénard–Wiechert potentials, and the complete and relativistically correct
equations for the time-varying electric and magnetic fields can be derived
from these potentials [4].
PyCharge currently supports two types of sources: point charges that have
predefined trajectories (specified as parametric equations of motion in the
$x$, $y$, and $z$ directions as functions of time), and Lorentz oscillators
(i.e., oscillating electric dipoles). The Lorentz oscillators (LOs) consist of
two equal and opposite point charges that oscillate around the origin position
(center of mass) along the axis of polarization, with a dipole moment that is
dynamically calculated at each time step by solving the governing harmonic
oscillator differential equation. The LOs are driven by the electric field
component along the direction of polarization generated by the other sources
in the system (which includes its own scattered field). As well, the LOs are
naturally damped since they radiate energy as they oscillate, which dissipates
kinetic energy (classically caused by radiation reaction) and decreases the
dipole moment [5]. This damping allows PyCharge to calculate the self-
consistent radiative decay rates from LOs in arbitrary motion and also in the
presence of interactions with other LOs, including collective effects such as
superradiance and subradiance.
The scattering of EM waves by LOs can be solved using a closed scalar and
dyadic Green’s function approach, where the LOs are treated as point-like
objects such that their structure cannot be resolved on the scale of the
wavelength of light [6]. However, this method requires a full dipole response
and cannot account for certain LO configurations (e.g., moving LOs). PyCharge
simulations provide an alternative numerical method to this standard approach
that yield highly accurate results and can model systems that cannot be solved
analytically. Our approach also has notable advantages over other
self-consistent EM solvers such as the finite-difference time-domain (FDTD)
method [7], which requires a very careful treatment of the LO’s divergent
nature when modeled as a point dipole; otherwise, this divergence leads to
unphysical frequency shifts that depend on the grid size.
PyCharge was designed to be accessible for a wide range of use cases: first,
it can be used as a pedagogical tool for undergraduate and graduate-level EM
theory courses to provide an intuitive understanding of the EM waves generated
by moving point charges, and second, it can also be used by researchers in the
field of nano-optics to investigate the complex interactions of light in
nanoscale environments, including interactions with moving point charges and
chains of resonant LOs.
We have also implemented a parallelized version of the PyCharge simulation
method, using the standard Message Passing Interface (MPI) for Python package
(mpi4py) [8], which can be executed on high performance computing environments
to significantly improve the run time of computationally demanding simulations
(e.g., involving multiple dipoles). The PyCharge package can be installed
directly from PyPI on systems running Python 3.7 or newer. Further
documentation, including Python script examples and the API reference, is
available at pycharge.readthedocs.io.
The rest of our paper is organized as follows: in Sec. 2, we discuss the
relevant theoretical background and the applied numerical methods for
calculating the EM fields and potentials generated by moving point charges; as
well, we introduce the LO model for simulating dipoles and review the known
effects of coupling between LOs using a photonic Green’s function theory. In
Sec. 3, we present the general framework of the PyCharge package including the
relevant classes and methods, as well as the MPI implementation. In Sec. 4, we
demonstrate several electrodynamics simulations that can be performed with
PyCharge and provide minimal Python listings that demonstrate PyCharge’s user
interface. We also verify the accuracy of simulating two coupled dipoles by
comparing the calculated radiative properties and dipole energies with the
known analytical solutions. Finally, we present our conclusions in Sec. 5.
In addition, we provide three appendices: A presents the Green’s function for
a free-space medium and the master equation for coupled point dipoles in a
Born-Markov approximation. From these, we obtain the key quantum
electrodynamics (QED) expressions for the radiative decay rates and coupling
parameters of point dipoles. We then provide an explicit solution to the
master equation for initially excited dipoles treated as two level systems
(TLSs), as these solutions demonstrate equivalence in the limit of weak
excitation (linear response) with the decay dynamics of coupled LOs simulated
with PyCharge. B presents the derivation of the free-space spontaneous
emission (SE) rate from the standard Fermi’s golden rule approach. C presents
the exact EM fields generated by an oscillating electric dipole as functions
of space and time, which we use to benchmark the accuracy of our code.
## 2 Background and methods
### 2.1 Moving point charges
The charge and current densities of a point charge $q$ at the position
$\mathbf{r}_{p}(t)$ with velocity $c\boldsymbol{\beta}(t)$ are, respectively,
$\rho\left(\mathbf{r},t\right)=q\delta\left[\mathbf{r}-\mathbf{r}_{p}\right]$
(1)
and
$\mathbf{J}\left(\mathbf{r},t\right)=qc\boldsymbol{\beta}\delta\left[\mathbf{r}-\mathbf{r}_{p}\right],$
(2)
where $c$ is the vacuum speed of light.
The scalar and vector potentials of a moving point charge in the Lorenz gauge,
known as the Liénard–Wiechert potentials [9], are derived from Maxwell’s
equations as
$\Phi(\mathbf{r},t)=\frac{q}{4\pi\epsilon_{0}}\left[\frac{1}{\kappa
R}\right]_{\mathrm{ret}}$ (3)
and
$\mathbf{A}(\mathbf{r},t)=\frac{\mu_{0}q}{4\pi}\left[\frac{\boldsymbol{\beta}}{\kappa
R}\right]_{\mathrm{ret}},$ (4)
where $\epsilon_{0}$ and $\mu_{0}$ are the vacuum permittivity and
permeability, respectively, $R=|\mathbf{r}-\mathbf{r}_{p}(t^{\prime})|$, and
$\kappa=1-\mathbf{n}(t^{\prime})\cdot\boldsymbol{\beta}(t^{\prime})$ such that
${\mathbf{n}=(\mathbf{r}-\mathbf{r}_{p}(t^{\prime}))/R}$ is a unit vector from
the position of the charge to the field point, and the quantity in brackets is
to be evaluated at the retarded time $t^{\prime}$, given by
$t^{\prime}=t-\frac{R(t^{\prime})}{c}.$ (5)
The physical (gauge-invariant) relativistically-correct, time-varying electric
and magnetic fields generated by a moving point charge are, respectively,
$\mathbf{E}\left(\mathbf{r},t\right)=\frac{q}{4\pi\epsilon_{0}}\Bigg{[}\frac{\left(\mathbf{n}-\boldsymbol{\beta}\right)\left(1-\beta^{2}\right)}{\kappa^{3}R^{2}}+\frac{\mathbf{n}}{c\kappa^{3}R}\times\left[\left(\mathbf{n}-\boldsymbol{\beta}\right)\times\boldsymbol{\dot{\beta}}\right]\Bigg{]}_{\mathrm{ret}}$
(6)
and
$\mathbf{B}\left(\mathbf{r},t\right)=\frac{1}{c}\left[\mathbf{n}\times\mathbf{E}\right]_{\mathrm{ret}},$
(7)
where $\boldsymbol{\dot{\beta}}$ is the derivative of $\boldsymbol{\beta}$
with respect to $t^{\prime}$ [1].
The first term in Eq. (6) is known as the electric Coulomb field and is
independent of acceleration, while the second term is known as the electric
radiation field and is linearly dependent on $\boldsymbol{\dot{\beta}}$:
$\mathbf{E}_{\mathrm{Coul}}\left(\mathbf{r},t\right)=\frac{q}{4\pi\epsilon_{0}}\left[\frac{\left(\mathbf{n}-\boldsymbol{\beta}\right)\left(1-\beta^{2}\right)}{\kappa^{3}R^{2}}\right]_{\mathrm{ret}}$
(8)
and
$\mathbf{E}_{\mathrm{rad}}\left(\mathbf{r},t\right)=\frac{q}{4\pi\epsilon_{0}c}\left[\frac{\mathbf{n}}{\kappa^{3}R}\times\left[\left(\mathbf{n}-\boldsymbol{\beta}\right)\times\boldsymbol{\dot{\beta}}\right]\right]_{\mathrm{ret}}.$
(9)
The magnetic Coulomb and radiation field terms can be determined by
substituting Eqs. (8) and (9) into Eq. (7). Notably, the Coulomb field falls
off as $1/R^{2}$, similar to the static field, while the radiation field
decreases as $1/R$ [4]. (The conventional notation of the EM fields and
potentials presented in this paper follows Ref. 1 (Jackson); however, the
PyCharge package implements these equations using the notation from Ref. 4
(Griffiths).)
### 2.2 Computing the fields and potentials
PyCharge can directly calculate the EM fields and potentials generated by a
moving point charge along a discretized spatial grid at a specified time. At
each point on the spatial grid, the retarded time of the moving point charge,
which is determined by the point charge’s trajectory, is calculated using the
secant method (from the SciPy package [10]) to find the approximate solution
of Eq. (5). Then, the retarded position, velocity, and acceleration of the
point charge at each grid point are determined. Finally, the scalar and vector
potentials are calculated from Eqs. (3) and (4), and the total, Coulomb, and
radiation fields are computed using Eqs. (6), (8), and (9) for the respective
electric fields; the corresponding magnetic fields are calculated from Eq.
(7).
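The procedure above can be sketched in a few lines of standalone NumPy/SciPy code. This is an illustrative re-implementation rather than PyCharge's internal code; the `trajectory` and `velocity` callables and all variable names are assumptions of this sketch, but it follows the same steps: solve Eq. (5) for the retarded time with SciPy's secant method, then evaluate the Liénard–Wiechert scalar potential of Eq. (3).

```python
import numpy as np
from scipy.optimize import newton

EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)
C = 2.99792458e8         # vacuum speed of light (m/s)

def retarded_time(r_field, t, trajectory):
    """Solve Eq. (5), t' = t - R(t')/c, for the retarded time t'.

    Calling scipy.optimize.newton without a derivative uses the secant
    method, as described in Sec. 2.2.
    """
    def residual(tp):
        R = np.linalg.norm(r_field - trajectory(tp))
        return t - tp - R / C
    return newton(residual, x0=t)

def scalar_potential(q, r_field, t, trajectory, velocity):
    """Liénard–Wiechert scalar potential of Eq. (3) at (r_field, t)."""
    tp = retarded_time(r_field, t, trajectory)
    R_vec = r_field - trajectory(tp)
    R = np.linalg.norm(R_vec)
    n = R_vec / R                       # unit vector from charge to field point
    beta = velocity(tp) / C
    kappa = 1.0 - np.dot(n, beta)       # kappa = 1 - n . beta
    return q / (4 * np.pi * EPS0) / (kappa * R)
```

For a stationary charge this reduces to the static Coulomb potential, which provides a quick sanity check of the retarded-time solver.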
In systems with multiple point charges, PyCharge exploits the superposition
principle for electrodynamics simulations: the fields and potentials generated
by each source are calculated using the previously described approach, and the
total fields and potentials are given by the sum of the individual point
charge contributions. A continuous charge density $\rho$ can be approximated
in PyCharge using many point charges within the volume, where the charge value
of each point charge depends on $\rho$. Similarly, a continuous current
density, described by $\mathbf{J}=\rho\mathbf{v}$, can be approximated using
evenly spaced point charges traveling along a path, where the charge value of
each point charge depends on $\mathbf{J}$. The accuracy of the calculated
fields and potentials generated by these approximated continuous densities is
dependent on both the number of point charges used in the simulation and the
distance of the field point from the sources [11].
As previously discussed, PyCharge can simulate point charges that have
specified trajectories defined by a parametric equation
$\mathbf{r}(t)=\left(x\left(t\right),y\left(t\right),z\left(t\right)\right)$,
as well as dipoles (which consist of two point charges) that are modeled as
LOs with a dipole moment that is dynamically determined at each time step. In
previous work [11], we simulated several interesting systems of point charges
with predefined trajectories using a similar computational approach, including
magnetic dipoles, oscillating and linearly accelerating point charges,
synchrotron radiation, and Bremsstrahlung. The simulation of LOs in PyCharge
is discussed in the next section.
### 2.3 Lorentz oscillator model
The optical interactions between light and matter at the nanometer scale are
important phenomena for a variety of research fields, and a rigorous
understanding of these interactions requires the use of QED theory. However,
nanometer-scale structures are often too complex to be solved rigorously using
only QED; in these cases, a classical approach that invokes the results of QED
in a phenomenological way can be applied [5]. PyCharge uses the LO model,
which is an approximation from quantum theory that can be derived (e.g., from
the time-dependent Schrödinger equation or a quantum master equation, see A)
to simulate the interaction of a bound charge (e.g., an electron) with light
[12].
In the classical model, an oscillating dipole produces EM radiation which
dissipates energy and modifies the self-consistent dipole moment. The recoil
force, $\mathbf{F}_{\mathrm{r}}$, acting on the accelerating point charges in
the dipole is called the radiation reaction or radiation damping force. The
equation of motion for an undriven LO (e.g., in a vacuum) that includes the
radiation reaction force is given by
$m\mathbf{\ddot{r}_{\rm
dip}}(t)+\omega_{0}^{2}m\mathbf{r_{\mathrm{dip}}}(t)=\mathbf{F}_{\mathrm{r}}(t),$
(10)
where $\mathbf{r}_{\rm dip}$ is the displacement from the LO’s negative charge
to positive charge and $\mathbf{\ddot{r}_{\rm dip}}$ is its second derivative
with respect to time, $m$ is the effective mass of the LO (further discussed
below), and $\omega_{0}$ is the natural angular frequency of the LO [5].
The radiation reaction force, $\mathbf{F}_{\mathrm{r}}$, acting on the
accelerating point charges in the dipole is described by the Abraham-Lorentz
formula for non-relativistic velocities:
$\mathbf{F}_{\mathrm{r}}(t)=\frac{q^{2}}{6\pi\epsilon_{0}c^{3}}\mathbf{\dddot{r}_{\mathrm{dip}}}(t),$
(11)
where $\mathbf{\dddot{r}_{\mathrm{dip}}}$ is the third derivative of the
displacement between the two charges [4]. We can perform the approximation
${\mathbf{\dddot{r}}_{\mathrm{dip}}\approx-\omega_{0}^{2}\mathbf{\dot{r}}_{\mathrm{dip}}}$
in Eq. (11) if the damping on the point charges introduced by the radiation
reaction force is negligible (i.e.,
$|\mathbf{F}_{\mathrm{r}}|\ll\omega_{0}^{2}m|\mathbf{r}_{\mathrm{dip}}|$),
such that the following condition is satisfied:
$\frac{q^{2}\omega_{0}}{m}\ll 6\pi\epsilon_{0}c^{3}.$ (12)
In an inhomogeneous environment, an oscillating electric dipole will
experience the external electric field $\mathbf{E}_{\mathrm{d}}$ as a driving
force, which is the component of the total electric field in the polarization
direction at the dipole’s origin (center of mass) position $\mathbf{R}$
generated by the other sources in the system and its own scattered field. If
the condition in Eq. (12) is satisfied, the equation of motion for a driven LO
is
$\mathbf{\ddot{d}}(t)+\gamma_{0}\mathbf{\dot{d}}(t)+\omega_{0}^{2}{\bf
d}(t)=\frac{q^{2}}{m}\mathbf{E}_{\mathrm{d}}(\mathbf{R},t),$ (13)
where ${\bf d}=q{\bf r_{\rm dip}}$ is the dipole moment, $\mathbf{\dot{d}}$
and $\mathbf{\ddot{d}}$ are the first and second derivatives of $\mathbf{d}$,
and $\gamma_{0}$ is the free-space energy decay rate given by
$\gamma_{0}=\frac{q^{2}\omega_{0}^{2}}{6\pi\epsilon_{0}c^{3}m}.$ (14)
This equation of motion for an LO corresponds to a Lorentzian atom model with
transition frequency $\omega_{0}$ and linewidth $\gamma_{0}$ (where
$\gamma_{0}\ll\omega_{0}$), and is limited to non-relativistic velocities as
it does not account for relativistic mass [5].
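As an illustration of Eq. (13), the following standalone sketch integrates the driven-LO equation of motion with a fourth-order Runge–Kutta (RK4) step, the same integration scheme PyCharge uses (see Sec. 3.2). The function names, the scalar (one-component) dipole moment, and the default unit choice $q^{2}/m=1$ are illustrative assumptions of this sketch, not PyCharge's API.

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge–Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate_lo(omega0, gamma0, d0, dt, steps,
                E_drive=lambda t: 0.0, q2_over_m=1.0):
    """Integrate Eq. (13) for a single (scalar) LO dipole moment d(t).

    State vector y = (d, d_dot); with E_drive = 0 the LO rings down at the
    free-space decay rate gamma0. Returns arrays of times and d(t).
    """
    def rhs(t, y):
        d, ddot = y
        return np.array([ddot,
                         q2_over_m * E_drive(t) - gamma0 * ddot
                         - omega0**2 * d])
    y = np.array([d0, 0.0])
    ts, ds = [0.0], [d0]
    for i in range(steps):
        y = rk4_step(rhs, i * dt, y, dt)
        ts.append((i + 1) * dt)
        ds.append(y[0])
    return np.array(ts), np.array(ds)
```

With no driving field, the numerical solution can be checked against the analytic underdamped-oscillator result $d(t)=d_{0}e^{-\gamma_{0}t/2}[\cos\omega_{d}t+(\gamma_{0}/2\omega_{d})\sin\omega_{d}t]$ with $\omega_{d}=\sqrt{\omega_{0}^{2}-\gamma_{0}^{2}/4}$.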
The effective mass $m$ (also called the reduced mass) of the dipole is given
by
$m=\frac{m_{1}m_{2}}{m_{1}+m_{2}},$ (15)
where $m_{1}$ and $m_{2}$ are the masses of the two point charges in the
dipole [12]. These charges oscillate around the center of mass position
$\mathbf{R}$, defined by
$\mathbf{R}=\frac{m_{1}\mathbf{r}_{1}+m_{2}\mathbf{r}_{2}}{m_{1}+m_{2}},$ (16)
where $\mathbf{r}_{1}$ and $\mathbf{r}_{2}$ are the positions of the two point
charges. The point charge positions can therefore be defined in terms of the
displacement between the two charges $\mathbf{r}_{\mathrm{dip}}$:
$\mathbf{r}_{1}=\mathbf{R}+\frac{m_{2}}{m_{1}+m_{2}}\mathbf{r}_{\mathrm{dip}}$
(17)
and
$\mathbf{r}_{2}=\mathbf{R}-\frac{m_{1}}{m_{1}+m_{2}}\mathbf{r}_{\mathrm{dip}}.$
(18)
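Equations (17) and (18) are simple enough to sketch directly; the function and variable names below are illustrative, not PyCharge's API.

```python
import numpy as np

def charge_positions(R, r_dip, m1, m2):
    """Positions of the dipole's two point charges, Eqs. (17) and (18),
    from the center of mass R, displacement r_dip, and charge masses."""
    r1 = R + (m2 / (m1 + m2)) * r_dip
    r2 = R - (m1 / (m1 + m2)) * r_dip
    return r1, r2
```

By construction, the mass-weighted average of `r1` and `r2` recovers the center of mass `R` (Eq. (16)), and their difference recovers `r_dip`.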
It is also useful to discuss how the decay dynamics of LOs are related to
those of a quantum TLS, in certain limits. Specifically, in the limit of weak
excitation (linear response), we can connect the quantum mechanical equations
of motion for a TLS to the classical equations of motion by replacing
$q^{2}/m$ with $q^{2}f/m$, where $f$ is the oscillator strength, defined by
$f=\frac{2m\omega_{0}d_{0}^{2}}{\hbar q^{2}},$ (19)
where $d_{0}=|\mathbf{d}(t=0)|$. We thus recover the usual expression for the
SE rate $\gamma_{0,\mathrm{TLS}}$ from an excited TLS,
$\gamma_{0,\mathrm{TLS}}=\frac{\omega_{0}^{3}d_{0}^{2}}{3\pi\epsilon_{0}\hbar
c^{3}}.$ (20)
An alternative argument to relate the dipole moment with the radiative decay
rate is to connect the total mean energy of the LO to the ground state energy
of a quantized harmonic oscillator, so that
$\frac{m\omega_{0}^{2}d_{0}^{2}}{q^{2}}=\frac{\hbar\omega_{0}}{2},$ (21)
yielding $q^{2}/m=2\omega_{0}d_{0}^{2}/\hbar$, as expected from Eq. (19). As
well, the decay rate can be derived using a Fermi’s golden rule approach (see
B) from the interaction Hamiltonian $H_{\mathrm{int}}=-{\bf d}\cdot\hat{\bf
E}$, which leads to the following rate equations for the populations of an
isolated TLS in a vacuum (note that we ignore thermal excitation processes,
which is an excellent approximation at optical frequencies, since
$\hbar\omega_{0}\gg k_{\rm B}T$, where $k_{\rm B}$ is the Boltzmann constant):
$\dot{n}_{\mathrm{e}}(t)=-\gamma_{0}n_{\mathrm{e}}(t)$ (22)
and
$\dot{n}_{\mathrm{g}}(t)=\gamma_{0}n_{\mathrm{e}}(t),$ (23)
where $n_{g}$ and $n_{e}$ are the populations of the ground and excited states
($n_{g}+n_{e}=1$), respectively, and we neglect all other processes. In this
picture, $\gamma_{0}$ is also identical to the well known Einstein A
coefficient [12]. Therefore, the energy decay rate is equivalent to the
population decay rate. We stress again that we can only make the connection
between LO dynamics and populations of TLS states in a regime of weak
excitation.
The total energy $\mathcal{E}$ of a dipole, which is the sum of its kinetic
and potential energies, is calculated by PyCharge using
$\mathcal{E}(t)=\frac{m\omega_{0}^{2}}{2q^{2}}d^{2}(t)+\frac{m}{2q^{2}}\dot{d}^{2}(t),$
(24)
where $\dot{d}=|\mathbf{\dot{d}}|$. Since the total energy ${\cal E}$ of a
dipole is proportional to $n_{e}$, PyCharge can determine the population of
the excited state from the normalized total energy:
$n_{e}(t)=\frac{\mathcal{E}(t)}{\max(\mathcal{E})}.$ (25)
### 2.4 Coupled Lorentz oscillators
It is well known that an atom’s surrounding environment modifies its radiative
properties. In the classical model, the modification of the SE rate is
generated by the scattering of the atomic field (as the LO is driven by the
electric field at its origin position), while in QED theory the SE rate is
stimulated by vacuum field fluctuations or radiation reaction, which partly
depends on the ordering of the quantum field operators [2]. Regardless, in the
weak coupling regime (where the atom-field coupling constant is much less than
the photon decay rate inside the cavity), the interactions can be treated
perturbatively such that QED and classical theory yield the same results for
the modification of the SE rate [5]. An exception is when the surrounding
medium contains gain [13]. The modification of radiative properties for two
coupled LOs in close vicinity is given in A by invoking QED theory and using
the dyadic Green’s function for a dipole.
The classical analogs of the superradiant and subradiant states of two coupled
TLSs (where the dipoles are quantized) occur when the LOs are polarized along
the same axis and begin either in phase (the directions of the two dipole
moments are equal) or out of phase (the directions of the two dipole moments
are opposite), respectively. PyCharge can calculate the frequency shift $\delta_{12}$ and SE
rate $\gamma^{\pm}$ of two coupled LOs in either collective state by curve
fitting the discretized kinetic energy (KE) values, which are calculated by
PyCharge at each time step, to the expected harmonic equation (which also
connects to the master equation solutions shown in A)
$\mathrm{KE}=Ae^{-(\gamma^{\pm}t)}\sin\left((\omega_{0}\pm\delta_{12})t+\phi\right)^{2},$
(26)
where $A$ and $\phi$ are constants necessary to accurately fit the function
and are dependent on the initial conditions of the simulation. The curve fit
should be performed using the kinetic energy values after a number of time
steps have elapsed in the simulation to allow the scattered fields to
propagate back to the LO’s origin position. When the two coupled LOs are in
the superradiant or subradiant states, the population of their excited state
and their total energy $\mathcal{E}$ (related by Eq. (25)) are exponentially
decaying functions with a decay rate of $\gamma^{+}$ or $\gamma^{-}$,
respectively.
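The curve-fitting step described above can be sketched with `scipy.optimize.curve_fit`. This is a minimal illustration of fitting Eq. (26) to discretized kinetic-energy samples; the function signature and initial-guess handling are assumptions of this sketch and may differ from PyCharge's own fitting routine.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_collective_rate(t, ke, omega0, p0):
    """Fit kinetic-energy samples to Eq. (26),
    KE = A exp(-gamma t) sin^2((omega0 + delta) t + phi),
    extracting the collective decay rate gamma (i.e., gamma^+/-) and the
    frequency shift delta (i.e., +/- delta_12).

    p0 = (A, gamma, delta, phi) is an initial guess; as noted in the text,
    the fit is sensitive to initial conditions, so p0 should be reasonable.
    """
    def model(t, A, gamma, delta, phi):
        return A * np.exp(-gamma * t) * np.sin((omega0 + delta) * t + phi)**2
    popt, _ = curve_fit(model, t, ke, p0=p0)
    return popt  # (A, gamma, delta, phi)
```

On synthetic noiseless data with a known decay rate and shift, the fit recovers both parameters to high precision, which is a useful self-test before applying it to simulated KE traces.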
It is also useful to note that the total EM power radiated by an accelerating
point charge in a vacuum (at non-relativistic speeds) can be calculated using
the Larmor formula [14]:
$P(t)=\frac{q^{2}a^{2}(t)}{6\pi\epsilon_{0}c^{3}}.$ (27)
The power radiated by a dipole can also be calculated using the above equation
by replacing $q^{2}a^{2}$ with $|\mathbf{\ddot{d}}|^{2}$. Assuming that the
dipoles begin oscillating at $t=0$ s, the radiated energy at time $t^{\prime}$
can be calculated by integrating the radiated power from $t=0$ s to
$t=t^{\prime}$ (which can be approximated with PyCharge using a discrete
integration). As well, if there are two or more dipoles in a system that
interact, then each dipole will ‘absorb’ a certain amount of energy
$W_{\mathrm{abs}}$ radiated from the other dipoles. The total (constant)
energy of a system that contains $N$ dipoles is the sum of the energy gains
and losses of all the dipoles, given by
$W_{\mathrm{total}}=\sum_{\mathrm{i}=1}^{N}\left(\mathcal{E}_{i}(t^{\prime})-W_{\mathrm{abs},\,i}(t^{\prime})+\int_{0}^{t^{\prime}}P_{i}(t)\,dt\right),$
(28)
where $\mathcal{E}_{i}$ is the total energy (sum of the kinetic and potential
energies) of the $i$th dipole in the system, defined by Eq. (24).
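The discrete integration of the radiated power mentioned above can be sketched as follows, using the trapezoidal rule on sampled values of $|\mathbf{\ddot{d}}|$; the function and variable names are illustrative assumptions.

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)
C = 2.99792458e8         # vacuum speed of light (m/s)

def radiated_energy(ddot_d, t):
    """Energy radiated by a dipole up to t[-1]: the Larmor power, Eq. (27)
    with q^2 a^2 replaced by |d-double-dot|^2, integrated over the sample
    times t with the trapezoidal rule."""
    power = np.abs(ddot_d)**2 / (6 * np.pi * EPS0 * C**3)
    dt = np.diff(t)
    return float(np.sum(0.5 * (power[1:] + power[:-1]) * dt))
```

For a harmonically oscillating dipole moment $d(t)=d_{0}\cos\omega t$, the energy radiated over one period is $d_{0}^{2}\omega^{4}T/(12\pi\epsilon_{0}c^{3})$, which the discrete integration reproduces closely on a fine grid.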
## 3 PyCharge package overview
Figure 1: The Simulation object is instantiated with a list of the sources in
the system (i.e., Dipole and subclasses of Charge). The Simulation object can
calculate the EM fields and potentials at points along a spatial grid at a
specified time $t$. The Simulation object can also run (parallel) simulations
to calculate the trajectory of the Dipole objects over a range of time steps.
PyCharge uses an object-oriented framework for representing the sources in the
system and for executing simulations. All of the sources present in the system
must be instantiated as objects, which include point charges with predefined
trajectories and LOs (i.e., oscillating dipoles) which have dipole moments
that are determined dynamically at each time step. An overview of the classes
and methods implemented in the PyCharge package is shown in Fig. 1.
### 3.1 Electromagnetic sources
Point charge objects with predefined trajectories are instantiated from
subclasses of the `Charge` abstract parent class, which contains the charge
$q$ as an attribute and abstract methods for the position in the $x$, $y$, and
$z$ directions as functions of time. The `Charge` class also has methods for
the velocity and acceleration as functions of time which return the respective
derivatives using finite difference approximations; however, the user can
specify the exact velocity and acceleration equations in the subclasses if
desired. The `Charge` class also contains the method `solve_time` which
returns Eq. (5) in a modified form and is used by PyCharge to calculate the
retarded time at specified spatial points using the secant method, as
discussed in Sec. 2.2. Several point charge classes are included with PyCharge
(e.g., `StationaryCharge`, `OscillatingCharge`), where features of these
charge trajectories (e.g., angular frequency, radius) can be modified when
instantiated. Users can also create their own custom subclasses of `Charge` to
specify unique point charge trajectories.
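A custom trajectory might look like the following sketch of a charge on a circular orbit. Note that in PyCharge such a class would subclass `pycharge.Charge`; the base class is omitted here so the sketch is self-contained, and the method names (`xpos`, `ypos`, `zpos`) are assumptions that should be checked against the documentation at pycharge.readthedocs.io.

```python
import numpy as np

class CircularCharge:
    """Sketch of a custom point-charge trajectory: a charge moving on a
    circle of given radius with angular frequency omega in the x-y plane.
    (Hypothetical stand-in for a pycharge.Charge subclass.)"""

    def __init__(self, q, radius, omega):
        self.q = q            # charge (C)
        self.radius = radius  # orbit radius (m)
        self.omega = omega    # angular frequency (rad/s)

    def xpos(self, t):
        return self.radius * np.cos(self.omega * t)

    def ypos(self, t):
        return self.radius * np.sin(self.omega * t)

    def zpos(self, t):
        return 0.0 * t  # planar motion (works for scalar or array t)
```

Because the position methods accept either scalar times or NumPy arrays, the velocity and acceleration can be obtained by finite differences, consistent with the default behavior of the `Charge` class described above.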
The LO sources are instantiated from the `Dipole` class, which represents a
pair of oscillating point charges with a dipole moment that is dynamically
determined at each time step from Eq. (13); the positions of the point charges
are then calculated using the dipole moment (Eqs. (17) and (18)). In PyCharge,
the positive and negative charge pair are represented as `_DipoleCharge`
objects (which is a subclass of the `Charge` class); however, they are not
directly accessed by the user. The `Dipole` objects are instantiated with the
natural angular frequency $\omega_{0}$, the origin position, the initial
displacement ${\mathbf{r}}_{\mathrm{dip}}(t=0)$ between the two point charges
in the dipole, the charge magnitude $q$ (default is $e=1.602\times
10^{-19}\,$C) of the charges, and the mass of each charge ($m_{1}$ and
$m_{2}$); the default mass for both charges is $m_{e}$ (with
$m_{e}=9.109\times 10^{-31}$ kg) such that the dipole has an effective mass of
$m_{e}/2$ (see Eq. (15)).
The origin position (center of mass) of the dipole can either be stationary or
specified as a function of time. The `Dipole` object also contains the dipole
moment and its derivatives as attributes (stored as NumPy arrays), which are
calculated and saved at each time step during the simulation. The dipole
moment and origin position determine the motion of its two `_DipoleCharge`
objects, which are also updated at each time step. Unlike the point charge
objects that have predefined trajectories (implemented as continuous
functions), the position and related derivatives of the `_DipoleCharge`
objects are stored as discrete values at each time step; linear interpolation
is used to calculate the values between the discrete time steps.
### 3.2 Simulations
Algorithm 1 Simulation.run
1. Initialize sources in simulation
2. for $t$ in range(0, $t_{\mathrm{max}}$, $dt$):
3.     for dipole in sources:
4.         Calculate $\mathbf{E}_{\mathrm{d}}$ and solve Eq. (13) using RK4 at $t+dt$
5.         Update trajectory arrays of dipole at $t+dt$
6.     end for
7. end for
8. Save Simulation and Dipole objects with trajectories
The core features of the PyCharge package, including calculating the EM fields
and potentials and running simulations with `Dipole` objects, are executed
using the `Simulation` class. The `Simulation` object is instantiated with the
source objects that are present in the system. The `Simulation` object can
calculate the electric and magnetic fields, as well as the scalar and vector
potentials generated by the sources at specified spatial points at time $t$
using the methods `calculate_E`, `calculate_B`, `calculate_V`, and
`calculate_A`. Additionally, the specific EM field type (Coulomb, radiation,
or total field) to be calculated by the `calculate_E` and `calculate_B`
methods can be specified. These calculations are performed using the numerical
approach described in Sec. 2.2, and have a time complexity of $\mathcal{O}(N)$
with respect to both the number of sources in the simulation and the number of
spatial points in the grid. However, the trajectories of all the sources must
be defined at time $t$; therefore, the dipole moments of any `Dipole` objects
in the system must be known at $t$.
`Dipole` objects can be simulated in a system over a specified period of time
using the `run` method from the `Simulation` object. The `run` method
calculates the dipole moment and corresponding derivatives at each time step
by solving the equation of motion given in Eq. (13) using the Runge-Kutta
(RK4) method. The dipoles only begin oscillating after the first time step,
and have stationary dipole moments for $t\leq 0$ s. To calculate the driving
field $\mathbf{E}_{\mathrm{d}}$ of each `Dipole` object at a given time, the
electric field generated by all of the other sources in the system is
calculated at the dipole’s origin position using the `calculate_E` method.
Since the electric field generated by the `Dipole` object must be excluded in
the total field calculation to determine its own driving field
$\mathbf{E}_{\mathrm{d}}$, the `Dipole` object is passed as a parameter to the
`calculate_E` method, which ensures that it does not contribute to the total
field. Once the simulation is complete and the dipole trajectories are
calculated at each time step, the `Simulation` object and its instantiated
source objects can optionally be saved using Python object serialization into
an external file. The objects in the file can then be loaded by the
`Simulation` object for future analysis. An overview of the `run` method is
given in Algorithm 1.
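The RK4 update used at each step of Algorithm 1 can be sketched generically; the damped oscillator below is an illustrative stand-in for the dipole equation of motion (Eq. (13)), not PyCharge's internal implementation:

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """One classical Runge-Kutta (RK4) step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt/2, y + dt/2 * k1)
    k3 = f(t + dt/2, y + dt/2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

# Damped oscillator as a stand-in equation of motion; state y = (x, v).
omega0, gamma = 2*np.pi, 0.1
def eom(t, y):
    x, v = y
    return np.array([v, -omega0**2 * x - gamma * v])

y = np.array([1.0, 0.0])
t, dt = 0.0, 1e-3
for _ in range(1000):  # integrate one oscillation period (t = 0 to 1)
    y = rk4_step(eom, t, y, dt)
    t += dt
```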
When the `run` method is called, the number of time steps and size of the time
steps ($dt$) must be specified. The size of $dt$ must be appropriate for the
simulation being performed: the minimum requirement is that $dt$ must be small
enough such that the generated radiation does not reach the other dipoles in a
single time step, and in general a smaller $dt$ value reduces the amount of
error in the simulation. Other optional arguments include the name of the
external file where the `Simulation` object is saved after the simulation is
complete (alternatively where the `Simulation` object is loaded from if the
simulation has already been performed), a boolean indicating whether the
driving field $\mathbf{E}_{\mathrm{d}}$ at each time step is saved (which
increases memory usage), and the maximum possible velocity achieved by the
dipole’s charges as the LO model does not account for relativistic effects
(PyCharge raises an error if the velocity becomes larger; default is $c/100$).
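The minimum requirement on $dt$ can be checked up front. The helper below is a hypothetical utility (not part of the PyCharge API) that verifies light cannot cross the smallest dipole separation in a single step:

```python
from itertools import combinations
from math import dist

c = 2.998e8  # speed of light (m/s)

def dt_is_valid(origins, dt):
    """True if light travels less than the smallest separation
    between dipole origins in a single time step of size dt."""
    min_sep = min(dist(a, b) for a, b in combinations(origins, 2))
    return c * dt < min_sep

origins = [(0, 0, 0), (80e-9, 0, 0)]  # two dipoles 80 nm apart
```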
The run time over 100 time steps as a function of the number of simulated
`Dipole` objects is shown in Fig. 2.
Figure 2: The average run time of the run method over 100 time steps with
respect to the number of Dipole objects in the simulation. Simulations were
performed using an Intel Xeon Processor E7-4800 v3 CPU.
### 3.3 MPI implementation
Algorithm 2 Simulation.run_mpi

```
 1:  Initialize sources in simulation
 2:  process_dipoles = []
 3:  for i in range(MPI.rank, len(dipoles), MPI.size) do
 4:      process_dipoles.append(dipoles[i])
 5:  end for
 6:  for t in range(0, t_max, dt) do
 7:      for dipole in process_dipoles do
 8:          Calculate E_d and solve Eq. (13) using RK4 at t+dt
 9:          Update trajectory arrays of dipole at t+dt
10:      end for
11:      Broadcast process_dipoles trajectories at t+dt
12:      Receive and update trajectories from other dipoles
13:  end for
14:  Save Simulation and Dipole objects with trajectories
```
Simulating the LOs using the previously described approach is embarrassingly
parallel, as the task of solving the equation of motion (Eq. (13)) for the
dipoles at each time step can be distributed across multiple processes.
Ideally, each process will be tasked to calculate the trajectory of a single
`Dipole` object at each time step. However, if there are more `Dipole` objects
in the simulation than available processes, the set of `Dipole` objects can be
evenly distributed among the processes; in this case, the trajectories of the
`Dipole` objects are calculated sequentially. Once the processes have finished
calculating the trajectories of their assigned `Dipole` object(s), the
trajectories are broadcasted to all of the other processes. The trajectories
of the other dipoles, received from the other processes, are then updated for
the given time step. A description of this MPI implementation is provided in
Algorithm 2.
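The round-robin distribution of dipoles among ranks in Algorithm 2 amounts to striding through the dipole list; a minimal sketch of the indexing (no MPI required to illustrate it):

```python
def assign_round_robin(n_dipoles, n_procs):
    """Round-robin assignment of dipole indices to MPI ranks,
    mirroring the loop in Algorithm 2 (rank, rank+size, ...)."""
    return {rank: list(range(rank, n_dipoles, n_procs))
            for rank in range(n_procs)}

# Five dipoles over two processes: rank 0 -> [0, 2, 4], rank 1 -> [1, 3].
assignment = assign_round_robin(5, 2)
```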
The original implementation of the simulation using the `run` method is
executed in $\mathcal{O}(N^{2})$ time for $N$ `Dipole` objects, since the
driving electric field $\mathbf{E}_{\mathrm{d}}$ of each dipole requires the
calculation of the field contributions from the other $N-1$ dipoles. By taking
advantage of the parallel computations, the ideal time complexity of our MPI
implementation (using $N$ processes for $N$ `Dipole` objects) is
$\mathcal{O}(N)$. However, since each process must store the trajectory arrays
of the $N$ dipoles, the MPI implementation has a space complexity of
$\mathcal{O}(N^{2})$, while the space complexity of the original
implementation is $\mathcal{O}(N)$. The average speedup offered by the MPI
method using up to 128 processes is shown in Fig. 3.
Future improvements to the MPI implementation could potentially reduce the
space complexity to $\mathcal{O}(N)$ by pooling the dipole trajectory arrays
into a single location. However, this could significantly increase the time
required to fetch these trajectory values from memory. As well, the number of
broadcast operations could be reduced since it is not necessary to send the
trajectory information to the other processes at each time step; instead, the
trajectory values could be broadcast in batches only when they are required by
the other processes, which would improve run time.
Figure 3: The average speedup of the run_mpi method simulating 128 Dipole
objects as a function of the number of MPI processes. Simulations were
performed using an Intel Xeon Processor E7-4800 v3 CPU.
### 3.4 Performance and accuracy
There are two main sources of numerical error in the PyCharge package:
calculating the retarded time of the sources at a given position (for
determining the EM fields and potentials) by solving Eq. (5) using the secant
method, and determining the dipole moment at each time step for the `Dipole`
objects by solving Eq. (13) using the RK4 method.
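The retarded-time condition is a one-dimensional root-finding problem; the helper below is an illustrative sketch (not PyCharge's internal routine) using SciPy's derivative-free `newton` solver, which falls back to the secant method when no derivative is supplied:

```python
from scipy.optimize import newton

c = 2.998e8  # speed of light (m/s)

def retarded_time(t, obs_x, charge_x):
    """Solve the retarded-time condition t - t_r = |x_obs - x_s(t_r)| / c
    (the 1D analogue of Eq. (5)) with the secant method.
    charge_x: position of the source as a function of time."""
    def f(t_r):
        return (t - t_r) - abs(obs_x - charge_x(t_r)) / c
    return newton(f, x0=t)  # initial guess: the present time

# Stationary charge at the origin, observer 1 m away: t_r = t - (1 m)/c.
t_r = retarded_time(t=1.0, obs_x=1.0, charge_x=lambda t: 0.0)
```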
The tolerance of the secant method (from the SciPy package) can be set as an
initialization argument of the `Simulation` object. However, the default value
should be satisfactory for most simulations, typically yielding a relative
error less than $10^{-6}\%$ for the fields and potentials (see Fig. 7). Extra
consideration is required if the point charges are moving at relativistic
velocities, as the secant method could yield a significant error. The compute
time required by the `calculate_E`, `calculate_B`, `calculate_V`, and
`calculate_A` methods is dependent on several factors: as previously
mentioned, the methods have a time complexity $\mathcal{O}(N)$ with respect to
both the number of sources in the simulation and the number of spatial points
in the grid, and also depend on the spacing of the spatial points in the grid
and the trajectory of the point charges. In general, the computation time for
these methods using a grid with $10^{6}$ spatial points and a single `Charge`
object is 0.5–2 s (compute times recorded using an Intel Xeon Processor
E7-4800 v3 CPU).
The RK4 numerical method used by the `run` method exhibits fourth-order
convergence with respect to the time step size. For calculating the modified radiative
properties of two coupled dipoles, we found that choosing a time step value
$dt$ such that there are at least 10,000 time steps per dipole period yields
relative errors less than 0.2% (see Fig. 8). In general, the choice of
$\gamma_{0}$ (which is dependent on $q$, $m$, and $\omega_{0}$) must satisfy
$\gamma_{0}\ll\omega_{0}$ (see Eq. (12)), and the simulation error increases
with respect to the ratio $\gamma_{0}/\omega_{0}$. To accurately curve fit the
kinetic energy values to the expected harmonic motion, as shown in Listing 11,
we ran simulations for four dipole periods (40,000 time steps) and used the
energy values after a single dipole period (10,000 time steps) had elapsed.
Using the `run` method, this simulation had a run time of approximately ten
minutes (Intel Xeon Processor E7-4800 v3 CPU). Saving the simulation data into
a file requires approximately 2.1 kB of memory per time step for each `Dipole`
object in the simulation.
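The curve-fitting step can be illustrated with synthetic data: the sketch below fits an exponential envelope to stand-in kinetic-energy samples with a known decay rate (the real fit is performed on the simulated dipole's kinetic energy, which also oscillates at $2\omega_{0}$):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-in for the kinetic-energy envelope with a known SE rate.
gamma_true = 2.0e7                  # decay rate to recover (1/s)
t = np.linspace(0, 2e-7, 2000)      # sample times (s)
energy = np.exp(-gamma_true * t)    # envelope of the kinetic energy

def envelope(t, a, gamma):
    """Exponential decay model fitted to the energy samples."""
    return a * np.exp(-gamma * t)

(a_fit, gamma_fit), _ = curve_fit(envelope, t, energy, p0=(1.0, 1e7))
```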
## 4 Example simulations
Figure 4: The scalar potential and electric field components (shown as arrows)
generated by two stationary, opposite point charges (shown as red dots). The
sources (with charge magnitude $e$) are separated by 20 nm along the $x$ axis.
The scalar potential is plotted on a symmetrical logarithmic scale that is
linear between $-10^{-2}$ V and $10^{-2}$ V.
In this section, we demonstrate three different electrodynamics simulations
performed using PyCharge: calculating the EM fields and potentials generated
by moving point charges with predefined trajectories, simulating two coupled
dipoles and determining their modified radiative properties, and instantiating
moving dipoles for use in simulations. We also provide minimal Python listings
that showcase the succinctness of the PyCharge interface. The Python scripts
used to create the following figures can be found in the PyCharge package
repository, and further examples and tutorials are available in the
documentation.
### 4.1 Point charges with predefined trajectories
The EM fields and potentials generated by time-dependent point charge
geometries can be complex and counterintuitive compared to their static
counterparts. The calculation of the analytical solution, if one exists, often
requires sophisticated vector calculus techniques that can obscure an
individual’s understanding and appreciation of the final result. However,
using only a few lines of code, the PyCharge package allows users to calculate
and visualize the full solutions to Maxwell’s equations for complicated point
charge geometries.
In the first example, we calculate the total electric field and scalar
potential generated by two stationary, opposite point charges (i.e., a
stationary electric dipole). The corresponding program code is shown in
Listing 6. The sources (two `StationaryCharge` objects) are separated by 20 nm
along the $x$ axis and have equal and opposite charges of magnitude $e$. The
program code calculates the electric field components and scalar potential (at
$t=0$ s) at each point on a $1001\times 1001$ spatial grid, which is generated
using the NumPy `meshgrid` method. The grid is centered at the origin and
extends 50 nm along the $x$ and $y$ axes. A plot of the calculated electric
field components (shown as arrows) and scalar potential is shown in Fig. 4.
Figure 5: The magnitude of the Poynting vector of the EM fields generated by
two harmonically oscillating, opposite point charges (shown as red dots). The
sources (with charge magnitude $e$) oscillate around the origin with an
amplitude of 2 nm and an angular frequency $\omega_{0}$ of $7\times 10^{16}$
rad/s.
```python
import pycharge as pc
from numpy import linspace, meshgrid
from scipy.constants import e

sources = (pc.StationaryCharge((10e-9, 0, 0), e),
           pc.StationaryCharge((-10e-9, 0, 0), -e))
simulation = pc.Simulation(sources)
coord = linspace(-50e-9, 50e-9, 1001)
x, y, z = meshgrid(coord, coord, 0, indexing='ij')
Ex, Ey, Ez = simulation.calculate_E(0, x, y, z)
V = simulation.calculate_V(0, x, y, z)
```
Figure 6: Calculates the electric field components and scalar potential
generated by two stationary point charges along a 2D spatial grid.

Figure 7: The $x$ component of the numerically computed electric field (top)
and the respective relative error (bottom) generated by an oscillating dipole
located at the origin as a function of $z$, which is scaled by the dipole’s
wavelength ($\lambda_{0}$). The theoretical values are given in Eq. (52). The
electric dipole has an angular frequency $\omega_{0}$ of $7\times 10^{16}$
rad/s and an initial dipole moment $d_{0}$ of $4e\times 10^{-9}$ C$\cdot$m.

Figure 8: The simulated and theoretical frequency shift $\delta_{12}$ (top)
and SE rate $\gamma^{+}$ (bottom) of superradiant s and p dipoles as functions
of separation. The frequency shift and SE rate are scaled by the free-space
decay rate $\gamma_{0}$, and the separation is scaled by the dipole’s
wavelength $\lambda_{0}$. The value of $\gamma_{0}$ for the dipoles is 19.791
MHz ($q=e$, $m=m_{e}/2$, and $\omega_{0}=200\pi\times 10^{12}$ rad/s). The
frequency shift is plotted on a symmetrical logarithmic scale that is linear
between $-10^{-1}$ $\gamma_{0}$ and $10^{1}$ $\gamma_{0}$. The theoretical
values for $\gamma^{+}$ and $\delta_{12}$ are calculated by PyCharge using
Eqs. (33) and (35). The average relative errors of the $\delta_{12}$ and
$\gamma^{+}$ values for the p dipoles are 0.15% and 0.04%, and for the s
dipoles are 0.19% and 0.13%.
The fields and potentials generated by different charge configurations can be
simulated using the same code by instantiating other types of sources. For
example, we can simulate a harmonically oscillating electric dipole by
instantiating two `OscillatingCharge` objects with opposite charge values
($q$) in the simulation. Users can also instantiate point charges with custom
trajectories by creating a subclass of the `Charge` class and defining its
motion along the $x$, $y$, and $z$ directions as functions of time.
Once the electric and magnetic fields in the system have been determined by
PyCharge, we can calculate the Poynting vector $\mathbf{S}$ (the directional
energy flux of the EM fields), defined by
$\mathbf{S}=\frac{1}{\mu_{0}}\mathbf{E}\times\mathbf{B}.$ (29)
The magnitude of the Poynting vector from the EM fields generated by an
oscillating electric dipole with an initial dipole moment $d_{0}$ of $4e\times
10^{-9}$ C$\cdot$m and an angular frequency $\omega_{0}$ of $7\times 10^{16}$
rad/s is shown in Fig. 5.
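Eq. (29) is a direct cross product of the field arrays; a minimal sketch with NumPy (the plane-wave-like field values are illustrative):

```python
import numpy as np

mu_0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def poynting(E, B):
    """Poynting vector S = (1/mu_0) E x B, Eq. (29).
    E and B are arrays with the Cartesian components in the last axis."""
    return np.cross(E, B) / mu_0

# E along x, B along y -> S along z with magnitude E*B/mu_0.
E = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1e-8, 0.0])
S = poynting(E, B)
```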
Additionally, the $x$ component of the electric field generated by the
oscillating electric dipole along the $z$ axis, and its relative error
compared to the known analytical solution (given in Eq. (52)), are shown in
Fig. 7. The analytical solution describes an idealized electric dipole where
the separation between the charges is infinitesimal; thus, to reduce the
relative error in the near-field, the PyCharge simulation uses a separation of
$4\times 10^{-14}$ m and a charge value $q$ of $10^{5}e$, which recovers the
initial dipole moment $d_{0}$ of $4e\times 10^{-9}$ C$\cdot$m. With these
values, the relative error remains less than $10^{-6}\%$ but diverges in the
very near-field of the dipole; this is expected, since PyCharge simulates a
physical dipole with a non-infinitesimal separation.
### 4.2 Two coupled dipoles
In this section, we simulate two coupled dipoles (modeled as LOs) in a system
and calculate their modified radiative properties. An example program code for
simulating two s dipoles (transverse), which are polarized along the $y$ axis
and separated by 80 nm along the $x$ axis, is shown in Listing 11. The two
dipoles have a natural angular frequency $\omega_{0}$ of $200\pi\times
10^{12}$ rad/s and are simulated over 40,000 time steps (with a time step $dt$
of $10^{-18}$ s). The two charges in the dipole both have a mass of $m_{e}$
(the effective mass of the dipole is $m_{e}/2$) and a charge magnitude of $e$.
Once the simulation is complete, the `Simulation` and related source objects
are saved into the file `s_dipoles.dat`, which can be accessed for analyses.
The dipoles begin oscillating in phase with an initial charge displacement
$\mathbf{r}_{\mathrm{dip}}$ of 1 nm, resulting in superradiance and a modified
SE rate $\gamma^{+}$. The rate $\gamma^{+}$ and frequency shift $\delta_{12}$
are then calculated in PyCharge by curve fitting the kinetic energy of the
dipole (using the kinetic energy values after the 10,000th time step), as
discussed in Sec. 2.4. As well, the theoretical values for $\gamma_{12}$
(related to $\gamma^{+}$ by Eq. (34)) and $\delta_{12}$ are calculated by
PyCharge using Eqs. (33) and (35).
Figure 9: The normalized populations of the excited states of two dipoles $a$
and $b$, where dipole $a$ is initially excited ($\rho_{aa}(0)=1$) and dipole
$b$ is not ($\rho_{bb}(0)=0$). The dipoles are separated by 80 nm (0.053
$\lambda_{0}$) and have a natural angular frequency $\omega_{0}$ of
$400\pi\times 10^{12}$ rad/s. The free-space decay rate $\gamma_{0}$ of the
dipoles is 7.916 GHz ($q=20e$ and $m=m_{e}/2$). The total energy is calculated
using Eq. (24), and the analytical solutions for the excited state populations
are given in Eqs. (42) and (43).

Figure 10: The dipole moment in the frequency domain for one isolated LO
(free-space decay) and two coupled LOs in free space, where the latter
response clearly shows the subradiant (lower-energy resonance) and
superradiant (higher-energy resonance) states. The two LOs are separated by 80
nm and both have angular frequencies of $400\pi\times 10^{12}$ rad/s, and the
theoretical (scaled) frequency shift $\delta_{12}$ is 18.86 $\gamma_{0}$.
The radiative properties of two coupled dipoles as a function of separation
can be calculated by repeatedly running the previous simulation while sweeping
across a range of dipole separation values. Using this technique, the modified
rate $\gamma^{+}$ and frequency shift $\delta_{12}$ for in-phase
(superradiant) s and p dipoles, scaled by the free-space emission rate
$\gamma_{0}$, are plotted in Fig. 8. The theoretical results from QED theory
are also shown in the figure, and the relative error values ($<0.2\%$) are
provided.
```python
import pycharge as pc
from numpy import pi

timesteps = 40000
dt = 1e-18
omega_0 = 100e12*2*pi
origins = ((0, 0, 0), (80e-9, 0, 0))
init_r = (0, 1e-9, 0)
sources = (pc.Dipole(omega_0, origins[0], init_r),
           pc.Dipole(omega_0, origins[1], init_r))
simulation = pc.Simulation(sources)
simulation.run(timesteps, dt, 's_dipoles.dat')
d_12, g_plus = pc.calculate_dipole_properties(
    sources[0], first_index=10000)
d_12_th, g_12_th = pc.s_dipole_theory(
    r=1e-9, d_12=80e-9, omega_0=omega_0)
```
Figure 11: Runs the simulation of two coupled (in phase) s dipoles and
calculates their radiative properties, as well as the theoretical radiative
results from QED theory. From the code: $\delta_{12}=156.919$,
$\delta_{12,\mathrm{th}}=156.926$, $\gamma^{+}=1.997$, and
$\gamma_{12,\mathrm{th}}=0.994$ (scaled in units of $\gamma_{0}$).
We can also plot the normalized populations of the excited states of two
coupled dipoles, $\rho_{aa}(t)$ and $\rho_{bb}(t)$, using the normalized total
energy of the dipoles at each time step (Eqs. (24) and (25)). This yields
particularly interesting results for coupled dipoles with small separations
when one dipole is initially excited ($\rho_{aa}(0)=1$) and the other is not
($\rho_{bb}(0)=0$). In this scenario, the populations are a linear combination
of the superradiant and subradiant states, which leads to the observed energy
transfer between dipoles known as Förster coupling, as further discussed in
Appendix A. (The solution calculated by PyCharge is more general, as we also
include dynamical coupling terms beyond the usual $1/|r|^{3}$ static coupling
regime, but the Förster coupling is fully recovered; indeed, for chains of
coupled dipoles, the retardation effects become essential to include [15].)
This phenomenon can be simulated in PyCharge by initializing the excited
dipole with a much larger dipole moment (and total energy) than the other. The
simulation results and analytical solution, given in Eqs. (42) and (43), are
shown in Fig. 9.
Additionally, the dipole moment of dipole $a$ in the frequency domain is shown
in Fig. 10, which clearly shows the frequency peaks of the subradiant and
superradiant states. (An identical frequency plot could also be created using
the dipole moment of dipole $b$.) The dipole moment of an isolated LO in the
frequency domain is also shown for comparison.
### 4.3 Moving dipoles
In addition to stationary dipoles, PyCharge can self-consistently simulate
moving dipoles (e.g., oscillating) with a time-dependent origin (center of
mass) position. Other direct EM simulation approaches (e.g., the FDTD method)
cannot accurately model moving dipoles, which can have practical importance
for nano-scale interactions as real atoms are rarely stationary. Thus,
PyCharge can be used to explore new physics phenomena that arise from this
additional dipole motion (e.g., phonons in dipole chains). Simulations with
moving dipoles are performed in PyCharge by creating a function that accepts
the time $t$ as a parameter and returns the dipole’s origin position at $t$
as a three-element array ($x$, $y$, $z$). This function is
then passed as a parameter when instantiating the `Dipole` object. An example
of instantiating a `Dipole` object with a time-dependent origin is given in
Listing 12. A detailed analysis of moving dipoles using the PyCharge package
will appear in future work.
```python
from numpy import pi, cos
import pycharge as pc

def fun_origin(t):
    x = 1e-10*cos(1e12*2*pi*t)
    return (x, 0, 0)

omega_0 = 100e12*2*pi
init_d = (0, 1e-9, 0)
source = pc.Dipole(omega_0, fun_origin, init_d)
```
Figure 12: Instantiates a Dipole object with a time-dependent origin position
that oscillates along the $x$ axis with an amplitude of 0.1 nm and an angular
frequency of $2\pi\times 10^{12}$ rad/s.
## 5 Conclusions
PyCharge was developed as an open-source simulation package to allow both
novice and experienced users to model a wide range of classical electrodynamics
systems using point charges. PyCharge can calculate the time-dependent,
relativistically correct EM fields and potentials generated by moving point
charges with predefined trajectories. The user can create custom point charge
objects in PyCharge by defining the $x$, $y$, and $z$ charge positions as
functions of time. PyCharge can also self-consistently simulate the motion of
LOs (dipoles), which are driven by the electric field generated by the other
sources in the system. With only a few lines of code to set up the simulation,
PyCharge can return the calculated modified radiative properties of the LOs
(SE rates and frequency shift) in the system.
Simulating multiple LOs in PyCharge is numerically exact and does not rely on
a Markov approximation, which has clear advantages for scaling to multiple
dipoles where analytically solving chains of atoms via coupling rates and
master equations becomes tedious and eventually intractable. As well, the
origin position of the LOs can be stationary or time-dependent, and the latter
is often very difficult to calculate analytically. We hope that PyCharge will
prove useful as a novel simulator in the rapidly advancing field of
computational electrodynamics, and expect that future versions of PyCharge
will be improved by implementing new ideas from the open-source research
community.
## Acknowledgements
This work was supported by the Natural Sciences and Engineering Research
Council of Canada, Queen’s University, and the Canadian Foundation for
Innovation. We also acknowledge support from CMC Microsystems, Xanadu Quantum
Technologies, and Mitacs, as well as from the Centre for Advanced Computing
(CAC) at Queen’s University.
## Appendix A Green’s functions, quantum master equations, and analytical
expressions for the radiative decay rates and coupling parameters
### A.1 Green’s function for free-space
To describe the general theory of light emission, we first define the dyadic
Green’s function $\mathbf{G}(\mathbf{r},\mathbf{r^{\prime}};\omega)$, which
describes the field response at $\mathbf{r}$ to an oscillating polarization
dipole at $\mathbf{r^{\prime}}$ as a function of frequency. The Green’s
function is the solution to the wave equation [6, 16, 17]
$\left[\nabla\times\nabla\times-\frac{\omega^{2}}{c^{2}}\epsilon(\mathbf{r})\right]\mathbf{G}\left(\mathbf{r},\mathbf{r}^{\prime},\omega\right)=\frac{\omega^{2}}{c^{2}}\mathbf{I}\delta\left(\mathbf{r}-\mathbf{r}^{\prime}\right),$
(30)
where $\mathbf{I}$ is the unit dyadic and $\epsilon=n^{2}$ is the dielectric
constant, which we assume to be lossless (real); we also assume a non-magnetic
material. For a homogeneous dielectric with a refractive index $n$
(where $n=1$ in a free-space medium), the homogeneous Green’s function can be
written analytically given the wavevector in the background medium $k=\omega
n/c$:
$\displaystyle{\mathbf{G}}_{\mathrm{hom}}(R;\omega)=\left({\mathbf{I}}+\frac{\nabla\nabla}{k^{2}}\right)\frac{k_{0}^{2}e^{ikR}}{4\pi R}=\frac{{\mu_{0}k_{0}^{2}}\exp\left(ikR\right)}{4\pi R}\left[\left(1+\frac{ikR-1}{k^{2}R^{2}}\right){\mathbf{I}}+\left(\frac{3-3ikR-k^{2}R^{2}}{k^{2}R^{2}}\right)\frac{\mathbf{R}\otimes\mathbf{R}}{R^{2}}\right]+\frac{\delta(R)}{3n^{2}}{\mathbf{I}},$ (31)
where $R=|\mathbf{R}|=|\mathbf{r}-\mathbf{r^{\prime}}|$ and $k_{0}=\omega/c$.
Although it is possible to analytically define the exact time-dependent
solution from a Fourier transform of an exact Dyson solution in the presence
of a finite number of quantum emitters (treated as quantized harmonic
oscillators), and thus obtain an exact solution to the emitted spectrum [18,
19], below we present a simpler and more common solution (by invoking a Markov
approximation) that immediately connects to the main physics regimes studied
in this paper.
### A.2 Quantum master equation for coupled two level systems and the
coupling rates
In QED, treating the atoms as TLSs, one can use a Born-Markov approximation to
derive the master equation for the reduced density $\rho$, where the decay
rates $\gamma_{ij}$ appear directly as Lindblad superoperators, and
$\delta_{12}$ is a simple frequency shift
$\omega_{i}\rightarrow\omega_{i}+\delta_{ij}$. For two coupled TLSs, $a$ and
$b$, the resulting master equation (in the interaction picture) is [20, 21,
22]
$\displaystyle\frac{d\rho}{dt}=\sum_{\alpha,\,\beta=a,\,b}\frac{\gamma_{\alpha\beta}(\omega_{\alpha})}{2}\left[2\sigma^{-}_{\alpha}\rho\sigma^{+}_{\beta}-\sigma^{+}_{\alpha}\sigma^{-}_{\beta}\rho-\rho\sigma^{+}_{\alpha}\sigma^{-}_{\beta}\right]-i\left[\left(\delta_{ab}(\omega_{b})\sigma^{+}_{a}\sigma^{-}_{b}+\delta_{ba}(\omega_{a})\sigma^{+}_{b}\sigma^{-}_{a}\right),\rho\right],$ (32)
where $\sigma^{\pm}_{\alpha}$ and $\sigma^{\pm}_{\beta}$ are the Pauli
operators for the TLSs (i.e.,
$\sigma^{+}_{\alpha}=\ket{e_{\alpha}}\bra{g_{\alpha}}$ and
$\sigma^{-}_{\alpha}=\ket{g_{\alpha}}\bra{e_{\alpha}}$). The master equation
accounts for the interactions between the quantum emitters and the surrounding
environment, and we have also used a rotating wave approximation.
For two coupled TLSs (which recover the same model as two quantized harmonic
oscillators in the weak excitation approximation), $a$ and $b$, in close
vicinity with equal resonance frequencies
($\omega_{0}=\omega_{a}=\omega_{b}$), the self ($\gamma_{aa,bb}$) and cross
($\gamma_{ab,ba}$) decay rates are obtained from [20, 21, 23]
$\displaystyle\gamma_{\alpha\beta}$ $\displaystyle=\frac{2{\bf
d}_{\alpha}^{\dagger}\cdot{\rm Im}{\bf G}({\bf r}_{\alpha},{\bf
r}_{\beta},\omega_{0})\cdot{\bf d}_{\beta}}{\epsilon_{0}\hbar}.$ (33)
Assuming the two TLSs are identical ($\mathbf{d}_{a}=\mathbf{d}_{b}$ and
$\omega_{a}=\omega_{b}$), we define the on-resonance Markovian decay rates as
$\gamma_{0}\equiv\gamma_{aa}=\gamma_{bb}$ and
$\gamma_{12}\equiv\gamma_{ab}=\gamma_{ba}$. The hybrid system (in the presence
of coupling) can then form superradiant or subradiant states [24], defined
from $\ket{\psi^{+}}=1/\sqrt{2}\,(\ket{e_{a},g_{b}}+\ket{g_{a},e_{b}})$ and
$\ket{\psi^{-}}=1/\sqrt{2}\,(\ket{e_{a},g_{b}}-\ket{g_{a},e_{b}})$,
respectively, which decay with the modified rates
$\gamma^{\pm}=\gamma_{0}\pm\gamma_{12}.$ (34)
The so-called virtual photon transfer rate (or dipole-dipole induced frequency
shift) between two TLSs with equal resonance frequencies is
$\delta_{\alpha\beta}|_{\alpha\neq\beta}=-\frac{{\bf
d}_{\alpha}^{\dagger}\cdot{\rm Re}{\bf G}({\bf r}_{\alpha},{\bf
r}_{\beta},\omega_{0})\cdot{\bf d}_{\beta}}{\epsilon_{0}\hbar}.$ (35)
This “exchange” term fully recovers Förster coupling and can yield
superradiant and subradiant states (Dicke states) for two coupled TLSs at
small separation distances [24].
Although the expressions in terms of the photonic Green’s function are general
for any medium, to recover the free-space dipole problem in the main text, we
simply replace ${\bf G}$ by ${\bf G}_{\rm hom}$ and obtain these rates
analytically (within a Markov approximation, i.e., evaluated at a single
frequency). We can rewrite the quantum master equation for two coupled TLSs
with equal resonance frequencies as
$\displaystyle\frac{d\rho}{dt}=\sum_{\alpha,\,\beta=a,\,b}\frac{\gamma_{\alpha\beta}}{2}\left[2\sigma^{-}_{\alpha}\rho\sigma^{+}_{\beta}-\sigma^{+}_{\alpha}\sigma^{-}_{\beta}\rho-\rho\sigma^{+}_{\alpha}\sigma^{-}_{\beta}\right]-i\delta_{12}\left[\left(\sigma^{+}_{a}\sigma^{-}_{b}+\sigma^{+}_{b}\sigma^{-}_{a}\right),\rho\right].$ (36)
From the master equation, we can easily derive the equation of motion for any
observable of interest, i.e., from $\braket{\dot{O}}={\rm Tr}(\dot{\rho}O)$.
For example, the population equations of motion for the two coupled dipoles are
$\displaystyle\dot{\rho}_{aa}=-\gamma_{aa}\rho_{aa}-\gamma_{ab}\rho_{ab}-i\delta_{ab}\rho_{ab},$ (37)
$\displaystyle\dot{\rho}_{bb}=-\gamma_{bb}\rho_{bb}-\gamma_{ba}\rho_{ba}+i\delta_{ba}\rho_{ba},$ (38)
where the density matrix elements are $\rho_{\alpha\beta}=\bra{\alpha}\rho\ket{\beta}$.
The coherence between the TLSs, accounted for by the terms $\rho_{ab}$ and
$\rho_{ba}$ (whose equations can be derived similarly), can significantly
affect the radiative decay rates, allowing various collective solutions such as
superradiant and subradiant decays. For example, given the initial conditions
$\rho_{aa}(0)=1$ and $\rho_{bb}(0)=\rho_{ab}(0)=\rho_{ba}(0)=0$, and assuming
the dipoles are identical, the excited state populations have a non-trivial
time dependence with oscillatory dynamics, as shown in Eqs. (42) and (43).
In the next section, we will solve the density matrix equations in a different
basis (using the dressed states), which both simplifies their solution and
clearly shows the collective modified decay rates for the superradiant and
subradiant states – which decay with the rates $\gamma^{+}$ and $\gamma^{-}$,
respectively.
### A.3 Time-dependent solution to the master equation for initially excited
atoms
With no initial driving field included, the reduced master equation (Eq.
(36)) can be solved analytically. To make this clear, we can restrict the
size of the basis to include the following four states:
$\ket{I}=\ket{g_{a},g_{b}}$, $\ket{II}=\ket{e_{a},e_{b}}$, and
$\ket{\pm}=1/\sqrt{2}\,(\ket{e_{a},g_{b}}\pm\ket{g_{a},e_{b}})$, where $g$ and
$e$ label the ground and excited states of the TLSs. If the initial excitation
only involves the density matrix elements $\rho_{++}$, $\rho_{--}$,
$\rho_{+-}$, and $\rho_{-+}$ (so only the atoms are excited, i.e., the fields
are in a vacuum state, $\ket{\phi}_{\rm fields}=\ket{\{0\}}$), with
$\rho_{\alpha\beta}=\braket{\alpha}{\rho}{\beta}$, then we have the following
density matrix equations for two identical TLSs:
$\displaystyle\dot{\rho}_{++}$
$\displaystyle=-(\gamma_{0}+\gamma_{12})\rho_{++},$ (39)
$\displaystyle\dot{\rho}_{--}$
$\displaystyle=-(\gamma_{0}-\gamma_{12})\rho_{--},$
$\displaystyle\dot{\rho}_{+-}$
$\displaystyle=-(\gamma_{0}+i2\delta_{12})\rho_{+-},$
$\displaystyle\dot{\rho}_{-+}$
$\displaystyle=-(\gamma_{0}-i2\delta_{12})\rho_{-+},$
which have the explicit solutions
$\displaystyle\rho_{++}(t)$
$\displaystyle=\rho_{++}(0)e^{-(\gamma_{0}+\gamma_{12})t},$ (40)
$\displaystyle\rho_{--}(t)$
$\displaystyle=\rho_{--}(0)e^{-(\gamma_{0}-\gamma_{12})t},$
$\displaystyle\rho_{+-}(t)$
$\displaystyle=\rho_{+-}(0)e^{-(\gamma_{0}+2i\delta_{12})t},$
$\displaystyle\rho_{-+}(t)$
$\displaystyle=\rho_{-+}(0)e^{-(\gamma_{0}-2i\delta_{12})t},$
which is a particular case of weak excitation, so the two-quantum state
($\ket{II}$) is decoupled. Consequently, this coupled-TLS solution recovers
the solution of coupled quantized harmonic oscillators, which is also why
the radiative decay of classical LOs is identical in this limit.
These decay solutions are precisely the cases of superradiant decay,
subradiant decay, and a linear combination of superradiant and subradiant
decay. The latter case will cause population beatings that oscillate with a
beating time of $T_{\rm beat}=\pi/\delta_{12}$. Although we have derived these
equations in a Markov approximation, we note that this is not necessary in
general, and the full time-dependent quantum dynamics can also be worked out
analytically in a weak excitation approximation [19]. The PyCharge simulations
are also numerically exact and do not rely on a Markov approximation, and have
clear advantages for scaling to multiple dipoles, where analytically solving
chains of atoms via coupling rates and master equations becomes tedious and
eventually intractable.
The expectation values for observables in the original basis are derived in
the usual way, e.g., for the excited population in the TLS $a$, we have
$\rho_{aa}=\braket{\sigma^{+}_{a}\sigma^{-}_{a}}=\sum_{i,j}\bra{j}\sigma^{+}_{a}\sigma^{-}_{a}\ket{i}\rho_{ji},$
(41)
where $i,j$ sums over states $\ket{I},\ket{II},$ and $\ket{\pm}$, and
similarly for $\rho_{bb}$. For an initial condition of $\rho_{aa}(0)=1$ and
$\rho_{bb}(0)=\rho_{ba}(0)=\rho_{ab}(0)=0$, this is equivalent to having
$\rho_{++}(0)=\rho_{--}(0)=\rho_{+-}(0)=\rho_{-+}(0)=1/2$. The explicit
time-dependent solutions for the population decays, from Eq. (40), are
$\rho_{aa}(t)=\frac{1}{4}\left(e^{-(\gamma_{0}-\gamma_{12})t}+e^{-(\gamma_{0}+\gamma_{12})t}+2\cos(2\delta_{12}t)e^{-\gamma_{0}t}\right)$
(42)
and
$\rho_{bb}(t)=\frac{1}{4}\left(e^{-(\gamma_{0}-\gamma_{12})t}+e^{-(\gamma_{0}+\gamma_{12})t}-2\cos(2\delta_{12}t)e^{-\gamma_{0}t}\right).$
(43)
Finally, in the limit of very small dipole separations, where
$\gamma_{12}\approx\gamma_{0}$, we have the approximate solutions
$\rho_{aa}(t)\approx\frac{1}{4}\left(1+e^{-2\gamma_{0}t}+2\cos(2\delta_{12}t)e^{-\gamma_{0}t}\right)$
(44)
and
$\rho_{bb}(t)\approx\frac{1}{4}\left(1+e^{-2\gamma_{0}t}-2\cos(2\delta_{12}t)e^{-\gamma_{0}t}\right).$
(45)
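The beating dynamics of Eqs. (42) and (43) are easy to check directly. A minimal Python sketch follows; the rate and shift values ($\gamma_{0}$, $\gamma_{12}$, $\delta_{12}$) are purely illustrative and not taken from the text:

```python
import math

def rho_aa(t, g0, g12, d12):
    """Excited-state population of TLS a, Eq. (42)."""
    return 0.25 * (math.exp(-(g0 - g12) * t) + math.exp(-(g0 + g12) * t)
                   + 2.0 * math.cos(2.0 * d12 * t) * math.exp(-g0 * t))

def rho_bb(t, g0, g12, d12):
    """Excited-state population of TLS b, Eq. (43)."""
    return 0.25 * (math.exp(-(g0 - g12) * t) + math.exp(-(g0 + g12) * t)
                   - 2.0 * math.cos(2.0 * d12 * t) * math.exp(-g0 * t))

# Illustrative rates in units of gamma_0: strong coupling, large exchange shift.
g0, g12, d12 = 1.0, 0.8, 5.0

# All excitation starts in TLS a ...
assert abs(rho_aa(0.0, g0, g12, d12) - 1.0) < 1e-12
assert abs(rho_bb(0.0, g0, g12, d12)) < 1e-12

# ... and after half a beat period, T_beat/2 = pi/(2 delta_12), it has been
# transferred to TLS b (modulo the overall radiative decay envelope).
t_half = 0.5 * math.pi / d12
assert rho_bb(t_half, g0, g12, d12) > rho_aa(t_half, g0, g12, d12)
```

The total excited population $\rho_{aa}+\rho_{bb}=\frac{1}{2}(e^{-(\gamma_{0}-\gamma_{12})t}+e^{-(\gamma_{0}+\gamma_{12})t})$ is beat-free, as the cosine terms cancel.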
## Appendix B Fermi’s Golden Rule for the Free-Space Spontaneous Emission
Rate
Here, we briefly show the standard Fermi’s golden rule approach for
calculating the free-space SE rate. Fermi’s golden rule is written as
$\gamma_{i\rightarrow f}(\omega_{f})=\frac{2\pi}{\hbar}\left|\braket{i}{H_{\rm
int}}{f}\right|^{2}D(\omega_{f}),$ (46)
where $D$ is the density of states (assumed to be approximately constant over
the region of emission), and $i$ and $f$ are the initial and final states,
respectively. Consistent with the Markov approximation in the density-matrix
approach, this is also a long-time Markovian “rate”.
The dipole interaction Hamiltonian $H_{\rm int}$ has the usual form
$\displaystyle H_{\rm int}=-\sum_{{\bf k},\eta}\sqrt{\frac{\hbar\omega_{\bf
k}}{2\epsilon_{0}}}\left(\sigma^{+}+\sigma^{-}\right){\bf
d}_{ge}\cdot\left({\bf f}_{{\bf k},\eta}\hat{a}_{{\bf k},\eta}+{\bf
f}^{*}_{{\bf k},\eta}\hat{a}^{\dagger}_{{\bf k},\eta}\right),$ (47)
where $\hat{a}^{\dagger}_{{\bf k},\eta}$ and $\hat{a}_{{\bf k},\eta}$ are the
creation and annihilation operators for the fields at wave vector ${\bf k}$
with polarization $\eta$. The classical normal modes can be written as
${\bf f}_{{\bf k},\eta}=\frac{1}{\sqrt{V}}\hat{\varepsilon}_{{\bf
k},\eta}e^{i{\bf k}\cdot{\bf r}},$ (48)
where $V$ is an arbitrary volume.
Beginning in the excited state, $\ket{i}=\ket{e,\{0\}}$ and evolving to
the final state $\ket{f}=\ket{g,{\bf 1}_{{\bf k},\eta}}$, the relevant matrix
element for photon emission is
$\braket{e,\{0\}}{H_{\rm int}}{g,{\bf 1}_{{\bf
k},\eta}}=\sqrt{\frac{\hbar\omega_{\bf
k}}{2\epsilon_{0}V}}\left(\hat{\varepsilon}_{{\bf k},\eta}\cdot{\bf
d}_{ge}\right)e^{i{\bf k}\cdot{\bf r}}.$ (49)
Computing the free-space density of states in the usual way, namely with
periodic boundary conditions, we have
$D(\omega_{0})=\frac{\omega_{0}^{2}V}{\pi^{2}\hbar c^{3}}.$ (50)
Finally, using $\omega_{\bf k}\approx\omega_{0}$ and $|\hat{\varepsilon}_{{\bf
k},\eta}\cdot{\bf d}_{ge}|^{2}=|{d}_{ge}|^{2}/3$ (isotropic averaging), the SE
rate is given by
$\gamma_{0}=\frac{\omega_{0}^{3}|{\bf d}_{ge}|^{2}}{3\pi\epsilon_{0}\hbar
c^{3}},$ (51)
which is identical to the $\gamma_{0}$ expressions in the main text (Eq.
(20)), and also with Eq. (33) when using the free-space Green’s function. Note
that in the quantum case, the dipole matrix element is formally defined as
${\bf d}_{ge}=\braket{g}{{\bf\hat{d}}}{e}$.
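Equation (51) is straightforward to evaluate numerically. The sketch below uses a hypothetical optical transition (600 nm wavelength, dipole moment of one elementary charge times one angstrom, values chosen only for illustration) and checks the $\omega^{3}$ and $|{\bf d}_{ge}|^{2}$ scalings:

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity (F/m)
hbar = 1.054571817e-34    # reduced Planck constant (J s)
c    = 2.99792458e8       # speed of light (m/s)

def gamma0(omega, d_ge):
    """Free-space SE rate, Eq. (51): omega^3 |d_ge|^2 / (3 pi eps0 hbar c^3)."""
    return omega**3 * d_ge**2 / (3.0 * math.pi * eps0 * hbar * c**3)

# Hypothetical transition: lambda = 600 nm, |d_ge| = 1 e * 1 Angstrom.
omega = 2.0 * math.pi * c / 600e-9
d = 1.602176634e-19 * 1e-10
rate = gamma0(omega, d)

# Scaling checks implied directly by Eq. (51):
assert abs(gamma0(2.0 * omega, d) / rate - 8.0) < 1e-9   # rate ~ omega^3
assert abs(gamma0(omega, 2.0 * d) / rate - 4.0) < 1e-9   # rate ~ |d_ge|^2
# Tens-of-nanoseconds lifetime, the right order for strong atomic transitions.
assert 1e6 < rate < 1e9
```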
## Appendix C Electromagnetic fields generated by an oscillating electric
dipole
The exact equations that define the electric and magnetic fields generated by
an idealized oscillating electric dipole located at the origin are given by
$\begin{split}\mathbf{E}(\mathbf{r},t)=\frac{1}{4\pi\epsilon_{0}}\Bigg{[}&k^{2}(\hat{\mathbf{r}}\times\mathbf{d})\times\hat{\mathbf{r}}\frac{e^{ikr}}{r}\\\
&+[3(\hat{\mathbf{r}}\cdot\mathbf{d})\hat{\mathbf{r}}-\mathbf{d}]\left(\frac{1}{r^{3}}-\frac{ik}{r^{2}}\right)e^{ikr}\Bigg{]}\end{split}$
(52)
and
$\mathbf{B}(\mathbf{r},t)=\frac{\mu_{0}}{4\pi}\Bigg{[}ck^{2}(\hat{\mathbf{r}}\times\mathbf{d})\frac{e^{ikr}}{r}\left(1-\frac{1}{ikr}\right)\Bigg{]},$
(53)
where $k=\omega/c$ and $\omega$ is the angular frequency of the oscillating
dipole, $\mathbf{d}=\mathbf{d}_{0}e^{-i\omega t}$ is the time-dependent dipole moment,
${r=|\mathbf{r}|}$, and $\hat{\mathbf{r}}=\mathbf{r}/r$ [1].
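Equations (52) and (53) can be evaluated directly as complex phasors. The sketch below (all numerical values are illustrative) checks two standard far-field properties at $kr\gg 1$: $|\mathbf{E}|=c|\mathbf{B}|$ and $\mathbf{E}\perp\hat{\mathbf{r}}$:

```python
import math, cmath

eps0 = 8.8541878128e-12    # F/m
mu0  = 4e-7 * math.pi      # H/m (pre-2019 defined value; accurate to ~1e-9)
c    = 2.99792458e8        # m/s

def cross(a, b):
    return [a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]]

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def dipole_fields(r_vec, d_vec, omega):
    """Complex E and B phasors of an oscillating dipole at the origin,
    Eqs. (52)-(53); multiply by exp(-i*omega*t) for the time dependence."""
    k = omega / c
    r = math.sqrt(dot(r_vec, r_vec))
    rhat = [x / r for x in r_vec]
    phase = cmath.exp(1j * k * r)
    # radiation term: k^2 (rhat x d) x rhat * e^{ikr}/r
    rad = [k**2 * x * phase / r for x in cross(cross(rhat, d_vec), rhat)]
    # near + intermediate term: [3(rhat.d)rhat - d](1/r^3 - ik/r^2) e^{ikr}
    rd = dot(rhat, d_vec)
    near = [(3*rd*rh - dv) * (1/r**3 - 1j*k/r**2) * phase
            for rh, dv in zip(rhat, d_vec)]
    E = [(a + b) / (4*math.pi*eps0) for a, b in zip(rad, near)]
    B = [mu0/(4*math.pi) * c * k**2 * x * phase / r * (1 - 1/(1j*k*r))
         for x in cross(rhat, d_vec)]
    return E, B

# d along z, observation at r = 1 mm along x, optical frequency: kr ~ 1e4.
omega = 2*math.pi*c / 600e-9
E, B = dipole_fields([1e-3, 0.0, 0.0], [0.0, 0.0, 1e-29], omega)

Emag = math.sqrt(sum(abs(x)**2 for x in E))
Bmag = math.sqrt(sum(abs(x)**2 for x in B))
assert abs(Emag - c*Bmag) / Emag < 1e-3   # |E| = c|B| in the radiation zone
assert abs(E[0]) / Emag < 1e-3            # E transverse to rhat = x-hat
```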
## References
* [1] J. D. Jackson, Classical electrodynamics, 3rd Edition, Wiley, 1999.
* [2] P. Milonni, Semiclassical and quantum-electrodynamical approaches in nonrelativistic radiation theory, Physics Reports 25 (1) (1976) 1–81.
URL https://doi.org/10.1016/0370-1573(76)90037-5
* [3] J. D. Hunter, Matplotlib: A 2D graphics environment, Computing in Science & Engineering 9 (3) (2007) 90–95. doi:10.1109/MCSE.2007.55.
* [4] D. J. Griffiths, Introduction to Electrodynamics, 4th Edition, Cambridge University Press, 2017.
* [5] L. Novotny, B. Hecht, Principles of nano-optics, Cambridge University Press, 2006.
URL http://www.books24x7.com/marc.asp?bookid=44009
* [6] P. de Vries, D. V. van Coevorden, A. Lagendijk, Point scatterers for classical waves, Rev. Mod. Phys. 70 (1998) 447–466. doi:10.1103/RevModPhys.70.447.
URL https://link.aps.org/doi/10.1103/RevModPhys.70.447
* [7] E. Schelew, R.-C. Ge, S. Hughes, J. Pond, J. F. Young, Self-consistent numerical modeling of radiatively damped Lorentz oscillators, Phys. Rev. A 95 (2017) 063853. doi:10.1103/PhysRevA.95.063853.
URL https://link.aps.org/doi/10.1103/PhysRevA.95.063853
* [8] L. D. Dalcin, R. R. Paz, P. A. Kler, A. Cosimo, Parallel distributed computing using Python, Advances in Water Resources 34 (9) (2011) 1124–1139. doi:10.1016/j.advwatres.2011.04.013.
* [9] E. Wiechert, Elektrodynamische elementargesetze, Annalen der Physik 309 (4) (1901) 667–689. doi:10.1002/andp.19013090403.
* [10] P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, et al., SciPy 1.0: fundamental algorithms for scientific computing in Python, Nature Methods 17 (3) (2020) 261–272. doi:10.1038/s41592-019-0686-2.
* [11] M. J. Filipovich, S. Hughes, Space-time computation and visualization of the electromagnetic fields and potentials generated by moving point charges, American Journal of Physics 89 (5) (2021) 482–489.
URL https://aapt.scitation.org/doi/10.1119/10.0003207
* [12] P. W. Milonni, J. H. Eberly, Laser Physics, John Wiley & Sons, 2010.
* [13] S. Franke, J. Ren, M. Richter, A. Knorr, S. Hughes, Fermi’s golden rule for spontaneous emission in absorptive and amplifying media, Phys. Rev. Lett. 127 (2021) 013602. doi:10.1103/PhysRevLett.127.013602.
URL https://link.aps.org/doi/10.1103/PhysRevLett.127.013602
* [14] J. Larmor, LXIII. On the theory of the magnetic influence on spectra; and on the radiation from moving ions, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 44 (271) (1897) 503–512. doi:10.1080/14786449708621095.
* [15] D. S. Citrin, Coherent transport of excitons in quantum-dot chains: role of retardation, Optics Letters 20 (8) (1995) 901. doi:10.1364/ol.20.000901.
URL https://doi.org/10.1364/ol.20.000901
* [16] P. Yao, V. S. C. Manga Rao, S. Hughes, On-chip single photon sources using planar photonic crystals and single quantum dots: On-chip single photon sources using planar photonic crystals, Laser & Photonics Reviews 4 (4) (2010) 499–516. doi:10.1002/lpor.200810081.
* [17] C. P. Van Vlack, Dyadic Green functions and their applications, Ph.D. thesis, Queen’s University Canada (2012).
* [18] M. Wubs, L. G. Suttorp, A. Lagendijk, Multiple-scattering approach to interatomic interactions and superradiance in inhomogeneous dielectrics, Physical Review A 70 (5) (2004) 053823. doi:10.1103/PhysRevA.70.053823.
* [19] P. Yao, S. Hughes, Macroscopic entanglement and violation of Bell’s inequalities between two spatially separated quantum dots in a planar photonic crystal system, Optics Express 17 (14) (2009) 11505. doi:10.1364/oe.17.011505.
URL https://doi.org/10.1364/oe.17.011505
* [20] G. S. Agarwal, Quantum electrodynamics in the presence of dielectrics and conductors. iv. general theory for spontaneous emission in finite geometries, Phys. Rev. A 12 (1975) 1475–1497. doi:10.1103/PhysRevA.12.1475.
URL https://link.aps.org/doi/10.1103/PhysRevA.12.1475
* [21] G. Angelatos, S. Hughes, Entanglement dynamics and Mollow nonuplets between two coupled quantum dots in a nanowire photonic-crystal system, Phys. Rev. A 91 (2015) 051803. doi:10.1103/PhysRevA.91.051803.
URL https://link.aps.org/doi/10.1103/PhysRevA.91.051803
* [22] S. A. H. Gangaraj, A. Nemilentsau, G. W. Hanson, S. Hughes, Transient and steady-state entanglement mediated by three-dimensional plasmonic waveguides, Optics Express 23 (17) (2015) 22330. doi:10.1364/oe.23.022330.
URL https://doi.org/10.1364/oe.23.022330
* [23] H. T. Dung, L. Knöll, D.-G. Welsch, Resonant dipole-dipole interaction in the presence of dispersing and absorbing surroundings, Phys. Rev. A 66 (2002) 063810. doi:10.1103/PhysRevA.66.063810.
URL https://link.aps.org/doi/10.1103/PhysRevA.66.063810
* [24] R. H. Dicke, Coherence in spontaneous radiation processes, Phys. Rev. 93 (1954) 99–110. doi:10.1103/PhysRev.93.99.
URL https://link.aps.org/doi/10.1103/PhysRev.93.99
|
arxiv-papers
| 2021-07-26T18:56:42 |
2024-09-04T03:07:19.919357
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Matthew J. Filipovich, Stephen Hughes",
"submitter": "Matthew Filipovich",
"url": "https://arxiv.org/abs/2107.12437"
}
|
2107.12439
|
# Proof of non-convergence of the short-maturity expansion for the SABR model
Alan L. Lewis111Newport Beach, California, USA; email: [email protected]
and Dan Pirjol222School of Business, Stevens Institute of Technology, Hoboken,
NJ 07030; email:[email protected]
(26 July 2021)
###### Abstract
We study the convergence properties of the short maturity expansion of option
prices in the uncorrelated log-normal ($\beta=1$) SABR model. In this model
the option time-value can be represented as an integral of the form
$V(T)=\int_{0}^{\infty}e^{-\frac{u^{2}}{2T}}g(u)du$ with $g(u)$ a “payoff
function” which is given by an integral over the McKean kernel $G(s,t)$. We
study the analyticity properties of the function $g(u)$ in the complex
$u$-plane and show that it is holomorphic in the strip $|\Im(u)|<\pi$. Using
this result we show that the $T$-series expansion of $V(T)$ and implied
volatility are asymptotic (non-convergent for any $T>0$). In a certain limit
which can be defined either as the large-volatility limit
$\sigma_{0}\to\infty$ at fixed $\omega=1$, or as the small vol-of-vol limit
$\omega\to 0$ at fixed $\omega\sigma_{0}$, the short-maturity
$T$-expansion for the implied volatility has a finite convergence radius
$T_{c}=\frac{1.32}{\omega\sigma_{0}}$.
## 1 Introduction and motivation
The SABR model is a versatile stochastic volatility model which has proved
very popular with practitioners since its introduction almost 20 years ago
[5]. It was originally introduced to model interest rate volatilities, but its
application has been extended later also to other asset classes, such as FX
and commodities. The model is described by the diffusion
$\displaystyle dS_{t}=\sigma_{t}\mathcal{C}(S_{t})dW_{t}$ (1) $\displaystyle
d\sigma_{t}=\omega\sigma_{t}dZ_{t}$ (2)
where $(W_{t},Z_{t})$ are standard Brownian motions correlated with
correlation $\rho\leq 0$. The volatility of volatility (vol-of-vol) parameter
$\omega$ determines the curvature of the implied volatility, and the backbone
function $\mathcal{C}(S_{t})$ is introduced such that the model captures the
smile dynamics of the ATM (“at-the-money”) implied volatility under spot price
changes.
In the original SABR paper [5] the backbone function was chosen as a power
function (CEV-like) $\mathcal{C}(S)=S^{\beta}$ with $0\leq\beta\leq 1$. In
practice more complicated forms are used, such as the three-regime backbone of
de Guillaume, Rebonato and Pogudin [4], reflecting the empirically observed
backbone behavior of swaption volatilities. Since in the academic literature
the SABR model is typically defined with a CEV-like backbone, we adopt the
same convention and refer to the exponent $\beta$ in the SABR model as the
CEV backbone parameter.
The leading order in the short maturity expansion for the implied volatility
for the SABR model was obtained in [5]. The subleading $O(T)$ correction was
also computed in this paper at the ATM point. The result has a simple
analytical form, and is easily implemented in practice for model simulation
and calibration. This feature contributed to the widespread adoption and
popularity of the model.
Higher-order corrections to the short-maturity expansion of the implied
volatility in the SABR model were obtained by Henry-Labordère [6] and Paulot
[14]. The complete $O(T^{2})$ contribution was obtained in [14], although its
evaluation involves numerical integration for some terms. A systematic
algorithm for expanding the implied volatility in a double series expansion in
log-strike $x=\log(K/S_{0})$ and maturity $T$ was mentioned in [8], and is
used here once to generate (3) below. (However, none of our subsequent results
rely upon this algorithm.)
Throughout, we work with the so-called “log-normal” SABR model: $\beta=1$. The
(Black-Scholes) implied volatility in this model has the full parametric
dependence $\sigma_{\rm BS}=\sigma_{\rm BS}(x,T,\sigma_{0},\omega,\rho)$. Here
$x=\log(K/S_{0})$, so ATM means $x=0$, and $T$ is the time to option maturity.
For reference purposes we give here the expansion of the ATM implied
volatility to $O(T^{2})$ (to our knowledge the full $O(T^{2})$ term is new
[9]):
$\displaystyle\hskip 30.0pt\frac{1}{\sigma_{0}}\sigma_{\rm
BS}(0,T,\sigma_{0},\omega,\rho)=1+\frac{1}{24}\sigma_{0}\omega
T\Big{[}6\rho+\frac{\omega}{\sigma_{0}}(2-3\rho^{2})\Big{]}$
$\displaystyle\,\,+\frac{1}{1920}\omega^{2}\sigma_{0}^{2}T^{2}\Big{[}(-80+240\rho^{2})+\frac{\omega}{\sigma_{0}}\rho(240-180\rho^{2})+\frac{\omega^{2}}{\sigma_{0}^{2}}(-12+60\rho^{2}-45\rho^{4})\Big{]}$
$\displaystyle\qquad+O(T^{3})\,.$ (3)
From here on, we work with $\omega=1$; the general case can be recovered as
$\sigma_{\rm BS}(x,T,\sigma_{0},\omega,\rho)=\omega\times\sigma_{\rm
BS}\left(x,\omega^{2}T,\frac{\sigma_{0}}{\omega},1,\rho\right).$ (4)
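The scaling identity (4) can be checked numerically against the truncated ATM expansion (3); the identity holds term by term, so the truncated series satisfies it to machine precision. A Python sketch (parameter values are arbitrary test inputs):

```python
def sigma_bs_atm(T, sigma0, omega, rho):
    """ATM implied-vol expansion, Eq. (3), truncated at O(T^2)."""
    w = omega / sigma0
    t1 = (1.0 / 24.0) * sigma0 * omega * T * (6 * rho + w * (2 - 3 * rho**2))
    t2 = (1.0 / 1920.0) * omega**2 * sigma0**2 * T**2 * (
        (-80 + 240 * rho**2) + w * rho * (240 - 180 * rho**2)
        + w**2 * (-12 + 60 * rho**2 - 45 * rho**4))
    return sigma0 * (1.0 + t1 + t2)

# Eq. (4): sigma(x,T,sigma0,omega,rho) = omega * sigma(x, omega^2 T, sigma0/omega, 1, rho)
T, sigma0, omega, rho = 0.5, 0.3, 0.4, -0.25
lhs = sigma_bs_atm(T, sigma0, omega, rho)
rhs = omega * sigma_bs_atm(omega**2 * T, sigma0 / omega, 1.0, rho)
assert abs(lhs - rhs) < 1e-12 * abs(lhs)
```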
With $\omega=1$ and $x=\rho=0$ (suppressing the display of all three
parameters), a second reference expansion is:
$\displaystyle\quad\Sigma_{\rm
BS}^{2}(T,\sigma_{0})\equiv\left(\frac{\sigma_{\rm
BS}}{\sigma_{0}}\right)^{2}=1+\frac{1}{6}T-\frac{1}{180}T^{2}(1+15\sigma_{0}^{2})+\frac{1}{1680}T^{3}(4-161\sigma_{0}^{2})$
(5)
$\displaystyle\,\,-\frac{1}{453600}T^{4}(579+29980\sigma_{0}^{2}-7560\sigma_{0}^{4})+O(T^{5}).$
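As a consistency check, the low orders of (5) follow from squaring (3) at $x=\rho=0$, $\omega=1$. This can be verified with exact rational arithmetic; the sketch below represents each $T$-coefficient as a polynomial in $\sigma_{0}^{2}$:

```python
from fractions import Fraction as F

# From Eq. (3) at x = rho = 0, omega = 1:
#   sigma_BS/sigma0 = 1 + (1/12) T + (1/1920)(-80 sigma0^2 - 12) T^2 + O(T^3).
# Each T-coefficient is stored as {power of sigma0^2: rational coefficient}.
c1 = {0: F(1, 12)}
c2 = {1: F(-80, 1920), 0: F(-12, 1920)}

# Square the series (1 + c1*T + c2*T^2)^2 through O(T^2):
sq1 = {0: 2 * c1[0]}                        # T coefficient: 2*c1
sq2 = {k: 2 * v for k, v in c2.items()}     # T^2 coefficient: 2*c2 + c1^2
sq2[0] += c1[0] ** 2

# Compare with Eq. (5): Sigma_BS^2 = 1 + T/6 - T^2 (1 + 15 sigma0^2)/180 + ...
assert sq1 == {0: F(1, 6)}
assert sq2 == {0: F(-1, 180), 1: F(-1, 12)}   # i.e. -(1 + 15 sigma0^2)/180
```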
Although efficient numerical techniques are available for option pricing with
general strike and maturity [8, 1], series expansions are very convenient.
Thus, it is useful to explore their limits of applicability. We address the
following questions: what is the nature of the short-maturity expansion of the
implied volatility around the ATM point? Is it strictly asymptotic or
convergent? If convergent, is there a finite radius of convergence?
In this note, we show that the full expansion underlying (5) is strictly
asymptotic, via a careful analysis of the (closed-form) SABR option value
$V(T,\sigma_{0})$. Briefly, the argument is as follows. We show in (12) below
that the value function can be put into the form
$V(T,\sigma_{0})=Ce^{-T/8}\int_{0}^{\infty}e^{-u^{2}/2T}g(u,\sigma_{0})\,\frac{du}{\sqrt{T}}$,
calling $g$ a “payoff function”. Our key result, Theorem 2, establishes that
$g(u,\sigma_{0})$ admits an analytic continuation to the strip $|\Im(u)|<\pi$
in the complex $u$-plane. This function has singularities at $u=\pm i\pi$.
Theorem 2 may be of separate interest because $g(u,\sigma_{0})$ itself is
given by a non-trivial integral. From Theorem 2, the non-convergence of (5)
for all $T>0$ follows from term-by-term integration of $g$’s power series in
$u$.
Arguably, non-convergence may have been the expected result. However, there is
a curious and interesting “large $\sigma_{0}$ scaling limit” in which the
convergence story changes. This limit was introduced and first studied in
[15]. In that limit, under our current setup, take
$\sigma_{0}\rightarrow\infty$ and $T\rightarrow 0$, holding
$\tau\equiv\frac{1}{2}\sigma_{0}T$ fixed. (Under general $\omega$ this limit
corresponds to taking $\omega\to 0,\sigma_{0}\to\infty$ at fixed and arbitrary
$T$, holding $\sigma_{0}\omega$ fixed.) Substituting $T=2\tau/\sigma_{0}$ in
(5), one sees that
$\hat{\Sigma}_{\rm
BS}^{2}(\tau)\equiv\lim_{\sigma_{0}\rightarrow\infty}\Sigma_{\rm
BS}^{2}\left(\frac{2\tau}{\sigma_{0}},\sigma_{0}\right)=1-\frac{1}{3}\tau^{2}+\frac{4}{15}\tau^{4}-\frac{92}{315}\tau^{6}+O(\tau^{8}).$
(6)
Mechanically, large $\sigma_{0}$ scaling has the effect of (i) suppressing all
the odd $T$-powers in (5) and (ii) keeping only the largest power of
$\sigma_{0}$ in each even $T$-power. In the Appendix, we present a (new)
derivation of the closed-form relation for $\hat{\Sigma}_{\rm BS}^{2}(\tau)$,
a relation first obtained in [15] by a time discretization argument. The
series in (6) has a finite (non-zero) convergence radius in the complex
$\tau$-plane (see Sec. 6).
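The mechanics just described, suppressing odd $T$-powers and keeping the top power of $\sigma_{0}$ in each even one, can be checked with exact rational arithmetic, starting from the coefficients of (5):

```python
from fractions import Fraction as F

# Eq. (5): coefficient of T^n as a polynomial in s = sigma0^2, stored as
# {power of s: rational coefficient}.
coeffs = {
    1: {0: F(1, 6)},
    2: {0: F(-1, 180), 1: F(-15, 180)},
    3: {0: F(4, 1680), 1: F(-161, 1680)},
    4: {0: F(-579, 453600), 1: F(-29980, 453600), 2: F(7560, 453600)},
}

# Substitute T = 2 tau / sigma0 and let sigma0 -> infinity: the (T^n, s^p)
# term contributes (2 tau)^n s^{p - n/2}, so only p = n/2 (n even) survives.
limit = {}
for n, poly in coeffs.items():
    if n % 2 == 0 and n // 2 in poly:
        limit[n] = poly[n // 2] * F(2) ** n

assert limit[2] == F(-1, 3)    # -tau^2/3 in Eq. (6)
assert limit[4] == F(4, 15)    # +4 tau^4/15 in Eq. (6)
assert 3 not in limit          # odd T-powers are suppressed
```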
## 2 The SABR value function
Our starting point is the exact representation of the (time) value $V$ of call
option prices in the uncorrelated $\beta=1$ SABR model. From Eq. (3.103) in
[1]:
$\displaystyle V(K,T)\equiv\mathbb{E}[(S_{T}-K)^{+}]-(S_{0}-K)^{+}$ (7)
$\displaystyle\quad=\frac{2\sqrt{KS_{0}}}{\pi}\int_{s_{-}}^{\infty}\frac{G(T,s)}{\sinh
s}\sin\left(\frac{1}{2}\sigma_{0}\sqrt{\sinh^{2}s-\sinh^{2}s_{-}}\right)ds$
with $s_{-}=\frac{1}{\sigma_{0}}\log|K/S_{0}|$. This expression assumes
$\omega=1$. This parameter can be restored to a general value by replacing
$T\to\omega^{2}T$ and $\sigma_{0}\to\sigma_{0}/\omega$ as in (4).
The function $G(t,s)$ is related to the McKean heat kernel $\mathcal{G}(t,s)$
[11] for the Brownian motion on the Poincaré hyperbolic plane
$\mathbb{H}^{2}$. The precise relation is
$G(t,s)=2\pi\int_{s}^{\infty}\mathcal{G}(t,u)\sinh u\,du$. The function $G(t,s)$
is given explicitly by
$G(t,s)=\frac{e^{-t/8}}{\sqrt{\pi
t}}\int_{s}^{\infty}\frac{e^{-u^{2}/(2t)}\sinh u}{\sqrt{\cosh u-\cosh
s}}du\,.$ (8)
Geometrically, suppose $s(x,y)$ is the hyperbolic distance between two points
$x$ and $y$ in $\mathbb{H}^{2}$. Then, $\mathcal{G}(t,s(x,y))$ is the
transition density for a hyperbolic Brownian particle, starting at $x$, to
reach the point $y$ after time $t$. Thus $G(t,s)$ is similar to a
complementary distribution function – the tail probability for the particle to
move a hyperbolic distance greater than $s$ at time $t$. (See [1] for further
discussion.)
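Since $\mathcal{G}$ is a normalized transition density, the tail-probability reading of $G$ implies $G(t,0)=1$, which gives a useful check on a direct quadrature of (8). In the sketch below the substitution $u=s+v^{2}$ removes the inverse-square-root singularity of the integrand at $u=s$ (the cutoff and step count are ad hoc numerical choices):

```python
import math

def G_kernel(t, s, n=4000):
    """Tail function G(t, s) of Eq. (8) by quadrature, via u = s + v^2."""
    vmax = math.sqrt(max(8.0 * math.sqrt(t), 8.0))  # e^{-u^2/(2t)} negligible beyond
    h = vmax / n
    total = 0.0
    for i in range(1, n + 1):            # v = 0 contributes nothing
        v = i * h
        u = s + v * v
        w = math.cosh(u) - math.cosh(s)  # ~ sinh(s) v^2 near v = 0: integrable
        total += math.exp(-u * u / (2.0 * t)) * math.sinh(u) / math.sqrt(w) * 2.0 * v
    total *= h
    return math.exp(-t / 8.0) / math.sqrt(math.pi * t) * total

# Moving a hyperbolic distance > 0 is certain, so G(t, 0) = 1 for every t.
for t in (0.5, 1.0, 2.0):
    assert abs(G_kernel(t, 0.0) - 1.0) < 1e-3

# And G is decreasing in s, as a complementary distribution function must be.
assert G_kernel(1.0, 1.5) < G_kernel(1.0, 0.5)
```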
A similar integral expression to (7) holds in the uncorrelated SABR model with
$0<\beta<1$, with different upper and lower integration limits, and a
different form for the sine factor. We study here the $\beta=1$ case for
definiteness.
Combining (7) with the integral representation of $G(t,s)$ in Eq. (8) and
exchanging the order of integration, the option time value can be put into the
form
$V(K,T)=\frac{2\sqrt{KS_{0}}e^{-T/8}}{\pi^{3/2}\sqrt{T}}\int_{s_{-}}^{\infty}e^{-\frac{u^{2}}{2T}}g(u,s_{-})du$
(9)
with
$\displaystyle g(u,s_{-})\equiv\sinh u\int_{s_{-}}^{u}\frac{1}{\sqrt{\cosh
u-\cosh s}}h(s,s_{-})ds\,,$ (10) $\displaystyle
h(s,s_{-})\equiv\frac{\sin(\frac{\sigma_{0}}{2}\sqrt{\sinh^{2}s-\sinh^{2}s_{-}})}{\sinh
s}\,.$ (11)
At the ATM point $K=S_{0}$ we have $s_{-}=0$ and the expression (9) simplifies
as
$V(K=S_{0},T)=\frac{2S_{0}e^{-T/8}}{\pi^{3/2}\sqrt{T}}\int_{0}^{\infty}e^{-\frac{u^{2}}{2T}}g(u)du$
(12)
with
$g(u)\equiv\sinh u\int_{0}^{u}\frac{1}{\sqrt{\cosh u-\cosh s}}h(s)ds\,,\quad
h(s)\equiv\frac{\sin(\frac{\sigma_{0}}{2}\sinh s)}{\sinh s},\quad(u>0).$ (13)
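For small $u$, $\cosh u-\cosh s\approx(u^{2}-s^{2})/2$ and $h(s)\to\sigma_{0}/2$, so (13) gives $g(u)\approx\sinh u\cdot\frac{\sigma_{0}}{2}\sqrt{2}\cdot\frac{\pi}{2}\approx\frac{\pi\sigma_{0}}{2\sqrt{2}}u$. A quadrature sketch of $g(u)$ confirming this slope; the substitution $s=u\sin\theta$ with a midpoint rule tames the square-root singularity at $s=u$:

```python
import math

def g_atm(u, sigma0, n=2000):
    """ATM payoff function g(u) of Eq. (13), via s = u*sin(theta)."""
    total = 0.0
    for i in range(n):                       # midpoint rule avoids theta = pi/2
        th = (i + 0.5) * (math.pi / 2) / n
        s = u * math.sin(th)
        h = math.sin(0.5 * sigma0 * math.sinh(s)) / math.sinh(s)
        total += h * u * math.cos(th) / math.sqrt(math.cosh(u) - math.cosh(s))
    total *= (math.pi / 2) / n
    return math.sinh(u) * total

sigma0 = 0.7          # illustrative value
u = 1e-3
slope = g_atm(u, sigma0) / u
assert abs(slope - math.pi * sigma0 / (2.0 * math.sqrt(2.0))) < 1e-3
```

This linear small-$u$ behaviour is what reproduces the leading ATM value $V\approx S_{0}\sigma_{0}\sqrt{T/(2\pi)}$ when inserted into (12).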
We study the analyticity of $V(K,T)$ in the complex $T$ variable. It is
instructive to first generalize the situation.
## 3 Analyticity of a class of general value functions $V(T)$
Consider the analyticity of general value functions $V(T)$ in the complex
$T$-plane which can be represented in the form
$V(T)=c_{1}e^{-c_{2}T}\int_{0}^{\infty}e^{-\frac{u^{2}}{2T}}g(u)\frac{du}{\sqrt{2\pi
T}},$ (14)
and where $g(u)$ is any “analytic payoff function”. Here $c_{1,2}$ are two
model-dependent constants, irrelevant for analyticity. We introduce the
following definition.
A payoff function $g(u)$ is said to be an analytic payoff function if there
exists a function $G(z)$ of the complex variable $z$ which is regular
(analytic and single-valued) in the circle $|z|<R$ $(0<R\leq\infty)$ and a
constant $\Delta>0$ such that $G(u)=g(u)$ for $0\leq u<\Delta$. We call $G(z)$
the analytic continuation of $g(u)$.333 Our definition is in the spirit of
Lukacs [10] for “analytic characteristic functions”. A difference is that the
agreement between $G$ and $g$ need only hold here for (an interval of) the
_positive_ real axis. When $R=\infty$, then $G(z)$ is entire.
A further adopted restriction, very convenient for our problem, assumes $G(u)$
an odd function: $G(-u)=-G(u)$. As it turns out, this antisymmetry holds for
analytic continuations $G(u)$ under both SABR and Black-Scholes (BS) models,
with $G_{\rm{SABR}}(u)$ and $G_{\rm{BS}}(u)$ respectively. There is an
important (subtle) point regarding oddness. Since it is seen most simply in
the BS case, consider that first.
With $x_{T}=\log S_{T}$, a generic call option value function is
$V(T)=E[(e^{x_{T}}-K)^{+}|I_{0}]$, with $K$ the strike price and
$x^{+}\equiv\max(x,0)$. Here $I_{0}$ is the set of initial conditioning
information: $S_{0}$ in the BS model, $(S_{0},\sigma_{0})$ in the SABR model,
etc. Under BS, $S_{T}$ follows geometric Brownian motion with $x_{T}-x_{0}\sim
N(-\frac{1}{2}\sigma^{2}T,\sigma^{2}T)$. Here $N(\mu,v)$ denotes a normal
distribution with mean $\mu$ and variance $v$, and (for this section alone)
“$\sim$” denotes “is distributed as”. Taking ATM with $S_{0}=K=\sigma=1$,
$\displaystyle V_{BS}(T)\,\,$
$\displaystyle=\int_{0}^{\infty}e^{-(u+\frac{1}{2}T)^{2}/2T}(e^{u}-1)\frac{du}{\sqrt{2\pi
T}}$
$\displaystyle=2\,e^{-\frac{1}{8}T}\int_{0}^{\infty}e^{-\frac{u^{2}}{2T}}\sinh\left(\frac{u}{2}\right)\frac{du}{\sqrt{2\pi
T}}=\mbox{Erf}\left(\frac{T^{1/2}}{2^{3/2}}\right),$
showing a well-known result using the error function. Thus,
$G_{BS}(u)=\sinh\left(\frac{u}{2}\right)$, odd as advertised. But, of course
the “original” payoff function $g_{\rm BS}(u)$ was defined for $u<0$
($S_{T}<K$) and vanishes there. Arguably, $g_{\rm
BS}(u)=\{(\sinh(\tfrac{u}{2}))^{+}:u\in\mathbf{R}\}$, certainly not odd. Yet,
given the representation (14), we find $g(u)$ admits an _analytic
continuation_ $G(u)$, an odd function. Thus, $G_{\rm BS}(u)\neq g_{\rm BS}(u)$
for real negative $u$, a possibility already hinted at in footnote 3.
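The BS identity above, $V_{BS}(T)=\mbox{Erf}(T^{1/2}/2^{3/2})$ with $S_{0}=K=\sigma=1$, is easy to verify by direct quadrature of the $\sinh$ representation (the cutoff and step count below are ad hoc numerical choices):

```python
import math

def v_bs_quadrature(T, n=100000):
    """ATM BS time value: 2 e^{-T/8} int_0^inf e^{-u^2/(2T)} sinh(u/2) du / sqrt(2 pi T)."""
    umax = 10.0 * math.sqrt(T) + 5.0 * T     # integrand is negligible beyond
    h = umax / n
    total = sum(math.exp(-(i * h) ** 2 / (2.0 * T)) * math.sinh(0.5 * i * h)
                for i in range(1, n + 1)) * h
    return 2.0 * math.exp(-T / 8.0) * total / math.sqrt(2.0 * math.pi * T)

for T in (0.1, 1.0, 4.0):
    assert abs(v_bs_quadrature(T) - math.erf(math.sqrt(T) / 2 ** 1.5)) < 1e-6
```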
We belabor the point because a similar thing happens with the SABR model.
Manifestly from (13), $g(-u)=g(u)$, easily confirmed from a plot. But, the
power series (33) below is an odd series. The resolution of the apparent
discrepancy is that the analytic continuation of $g(u)$, to the negative real
axis via $G(u)$, is _not_ found by mechanically taking a negative value of $u$
in the integral of (13), even though that (extended) integral technically
exists. (Indeed, a mechanical plot of (13) over an interval $(u_{1},u_{2})$,
with $u_{1}<0<u_{2}$, shows a function not even differentiable at $u=0$).
Instead, the analytic continuation _enforces the antisymmetry_ of the power
series for $g(u)$. That should motivate part of our key Theorem 2 below. That
theorem will establish that $G_{\rm SABR}(u)$ is indeed an analytic payoff
function with finite convergence radius $R$.
Small-maturity expansion of the value function. Under our assumptions,
$G(u)=\sum_{k=0}^{\infty}a_{k}u^{2k+1}$ for some sequence of coefficients
$\\{a_{k}\\}$, as long as $|u|<R\leq\infty$. We allow finite $R$, or
$R=+\infty$ for entire functions. Integrating term-by-term gives a formal
expansion of a “normalized value function” $\hat{V}(T)$, defined by
$\displaystyle\int_{0}^{\infty}e^{-\frac{u^{2}}{2T}}\,G(u)\frac{du}{\sqrt{2\pi
T}}=\sum_{k=0}^{\infty}a_{k}\int_{0}^{\infty}e^{-\frac{u^{2}}{2T}}u^{2k+1}\frac{du}{\sqrt{2\pi
T}}$ (16)
$\displaystyle=\sqrt{\frac{T}{2\pi}}\sum_{k=0}^{\infty}a_{k}\,(2T)^{k}\,\Gamma(1+k)=\sqrt{\frac{T}{2\pi}}\,\hat{V}(T).$
$\hat{V}(T)$ differs from $V(T)$ by a factor of $\sqrt{T}$ and the pre-factor
$c_{1}e^{-c_{2}T}$ of (14). Our issue is the convergence, or not, of the power
series for $\hat{V}(T)$. Consider three cases:
1. $G(u)$ analytic with $R<\infty$. (Example: SABR with $R=\pi$; this will be shown below in Sec. 4.)
2. $G(u)$ entire and of exponential type $k$. (Example: BS with $k=\frac{1}{2}$.)
3. $G(u)$ entire of order 2 and type $k$.
Now, by the root test for convergence, if $\hat{V}(T)=\sum b_{n}T^{n}$
converges, its radius of convergence $r$ satisfies
$r^{-1}=\limsup_{n\rightarrow\infty}|b_{n}|^{1/n}$. Here
$b_{n}=2^{n}a_{n}\Gamma(1+n)$. Freely invoking Stirling’s approximation, we
obtain the following convergence properties for each of the cases enumerated.
Case 1. Since $G(u)$ is analytic with radius $R$,
$\limsup_{n\rightarrow\infty}|a_{n}|^{-1/(2n)}=R$. Thus, the convergence
radius of $\hat{V}(T)$ is
$\displaystyle\sqrt{r}\,\,$
$\displaystyle=\lim_{n\rightarrow\infty}|b_{n}|^{-1/(2n)}=2^{-1/2}\times\lim_{n\rightarrow\infty}|a_{n}|^{-1/(2n)}|\Gamma(1+n)|^{-1/(2n)}$
$\displaystyle=2^{-1/2}\times\lim_{n\rightarrow\infty}R\,e^{-\frac{1}{2}\log
n-\frac{1}{2}+O((\log n)/n)}=0\,.$
In words, the $T$-series for $\hat{V}(T)$, under Case 1, has zero radius of
convergence.
Case 2. Since $G(u)=\sum_{k=0}^{\infty}a_{k}u^{2k+1}$ and $G(u)=O(e^{ku})$ at
$\infty$, for large $n$, $|a_{n}|\sim k^{2n}/(2n)!$. Now
$\displaystyle\sqrt{r}\,\,$
$\displaystyle=\lim_{n\rightarrow\infty}|b_{n}|^{-1/(2n)}\sim\frac{1}{\sqrt{2}\,k}\times\lim_{n\rightarrow\infty}\left(\frac{\Gamma(1+n)}{\Gamma(1+2n)}\right)^{-1/(2n)}$
$\displaystyle=\frac{1}{\sqrt{2}\,k}\times\lim_{n\rightarrow\infty}\,e^{\frac{1}{2}\log
n+(\log 2-\frac{1}{2})+O((\log n)/n)}=+\infty\,.$
Thus, under Case 2, $\hat{V}(T)$ is entire. This agrees with the BS result
derived above: since $\mbox{Erf}(z)$ is entire and odd,
$\hat{V}(T)=\mbox{Erf}(c\sqrt{T})/\sqrt{T}$ is an entire function of $T$.
Case 3. If $G(u)=O(e^{ku^{2}})$ at $\infty$, then for large $n$, $|a_{n}|\sim
k^{n}/n!$. Now
$\displaystyle\sqrt{r}\,\,$
$\displaystyle=\lim_{n\rightarrow\infty}|b_{n}|^{-1/(2n)}\sim\frac{1}{\sqrt{2k}}\times\lim_{n\rightarrow\infty}\left(\frac{\Gamma(1+n)}{\Gamma(1+n)}\right)^{-1/(2n)}=\frac{1}{\sqrt{2k}}.$
For order 2, type $k$ payoffs, the $\hat{V}(T)$ series converges for
$T<\frac{1}{2k}$.
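The three asymptotic behaviours of $\sqrt{r}=\lim_{n}|b_{n}|^{-1/(2n)}$ can be seen numerically with model coefficient sequences chosen purely for illustration (logarithms avoid overflow in $\Gamma(1+n)$):

```python
import math

def sqrt_r(a, nmax=60):
    """|b_n|^{-1/(2n)} with b_n = 2^n a_n Gamma(1+n); its limit is sqrt(r)."""
    out = []
    for n in range(10, nmax):
        log_bn = n * math.log(2.0) + math.log(abs(a(n))) + math.lgamma(1.0 + n)
        out.append(math.exp(-log_bn / (2.0 * n)))
    return out

# Case 1: finite radius R = pi, a_n ~ R^{-(2n+1)}: sqrt(r) decays to 0 like 1/sqrt(n).
case1 = sqrt_r(lambda n: math.pi ** (-(2 * n + 1)))
assert case1[-1] < case1[0] / 2

# Case 2: exponential type k = 1/2, a_n = k^{2n+1}/(2n+1)!: sqrt(r) grows without bound.
case2 = sqrt_r(lambda n: 0.5 ** (2 * n + 1) / math.factorial(2 * n + 1))
assert case2[-1] > case2[0] and case2[-1] > 5.0

# Case 3: order 2, type k = 2, a_n = k^n/n!: b_n = (2k)^n, so sqrt(r) = 1/sqrt(2k) = 1/2.
case3 = sqrt_r(lambda n: 2.0 ** n / math.factorial(n))
assert abs(case3[-1] - 0.5) < 1e-6
```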
## 4 Analyticity of the SABR payoff function
In this section we study the extension of the function $g(u)$ defined by the
integral (13) to complex values of $u$. The integral (13) is well defined
along the real axis $\Re(u)>0$. We would like to construct a holomorphic
function $G(u)$ which reduces to $g(u)$ along the positive real axis, and
determine its maximal domain of holomorphicity. The limitations of this domain
are due to the singularities of the factor $\sqrt{\cosh(u)-\cosh(s)}$ in the
denominator. Defining this factor as a single-valued function for complex $u$
requires some care in the choice of the branch cut of the square root. We will
choose to define the square root with a cut along the real positive axis, and
denote it as $(\sqrt{z})_{+}$. Specifically, if $\sqrt{z}$ denotes the
standard square-root with a branch-cut along the negative real axis, then
$\displaystyle(\sqrt{z})_{+}=\begin{cases}\sqrt{z}&\Im(z)\geq
0,\\ -\sqrt{z}&\Im(z)<0.\end{cases}$ (19)
This choice is guided by the following lemma.
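Definition (19) simply moves the square root's branch cut from the negative to the positive real axis, which a few lines of Python make concrete:

```python
import cmath

def sqrt_plus(z):
    """Square root with branch cut on the positive real axis, Eq. (19):
    the principal sqrt for Im(z) >= 0, its negative for Im(z) < 0."""
    w = cmath.sqrt(z)
    return w if z.imag >= 0 else -w

# Always a square root of z:
for z in (1 + 2j, -3 + 0.5j, 2 - 1j, -1 - 4j):
    assert abs(sqrt_plus(z) ** 2 - z) < 1e-12

eps = 1e-9
# Continuous across the NEGATIVE real axis (the usual cut has moved away)...
assert abs(sqrt_plus(-1 + eps * 1j) - sqrt_plus(-1 - eps * 1j)) < 1e-8
# ...and discontinuous across the positive real axis, with jump 2*sqrt(x).
assert abs(sqrt_plus(1 + eps * 1j) - sqrt_plus(1 - eps * 1j) - 2.0) < 1e-8
```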
###### Lemma 1.
The equation $\cosh u-\cosh(wu)=z$ with $w\in[0,1]$ and real $z>0$ has no
solutions in the half-strip $\Re(u)\geq 0,0<\Im(u)<\pi$.
###### Proof.
Writing $u=x+iy$ we have $\cosh u-\cosh(wu)=r(x,y)+is(x,y)$ with
$\displaystyle r(x,y)=\cos y\cosh x-\cos(wy)\cosh(wx)$ (20) $\displaystyle
s(x,y)=\sin y\sinh x-\sin(wy)\sinh(wx)$ (21)
We distinguish the two cases:
i) $x=0$. We have $s(0,y)=0$ and $r(0,y)\leq 0$ for all $0<y<\pi$, so the
equation $\cosh(u)-\cosh(wu)=z>0$ clearly does not have a solution.
ii) $x>0$. Fix $x$ and vary $y$ in $[0,\pi]$. We will show that if
$s(x,y_{0})=0$ has a zero at $y_{0}\in(0,\pi)$, then $r(x,y_{0})<0$, which
proves the statement of the lemma.
Step 1. For $y\in[0,\frac{\pi}{2}]$ we have the lower bound
$s(x,y)>[\sin y-\sin(wy)]\sinh x>0\,,\mbox{ for }0<y<\frac{\pi}{2}$ (22)
since $\sin y$ is increasing on $[0,\frac{\pi}{2}]$. We used here
$\sinh(wx)<\sinh x$ which holds for all $x>0$. This implies that if $s(x,y)$
has a zero $y_{0}$ then it must lie in $[\frac{\pi}{2},\pi]$.
Step 2. For $y\in[\frac{\pi}{2},\pi]$, the function $r(x,y)$ is negative.
This follows from the upper bound
$r(x,y)<\cosh(wx)[\cos y-\cos(wy)]<0\,,\mbox{ for }\frac{\pi}{2}<y<\pi$ (23)
since $\cos y$ is decreasing on $[0,\pi]$. We used here $\cosh x>\cosh(wx)$
and $\cos y<0$. ∎
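Lemma 1 can also be probed numerically from (20)-(21): wherever the imaginary part $s(x,y)$ changes sign in $y\in(0,\pi)$ at fixed $x>0$, the real part $r(x,y)$ should be negative. A grid-based sketch (the sampled $w$ and $x$ values are arbitrary; $w=1$ is excluded since then $\cosh u-\cosh(wu)\equiv 0$ trivially):

```python
import math

def r_s(x, y, w):
    """Real and imaginary parts of cosh(u) - cosh(w u) at u = x + i y, Eqs. (20)-(21)."""
    r = math.cos(y) * math.cosh(x) - math.cos(w * y) * math.cosh(w * x)
    s = math.sin(y) * math.sinh(x) - math.sin(w * y) * math.sinh(w * x)
    return r, s

for w in (0.2, 0.5, 0.8, 0.95):
    for x in (0.1, 0.5, 1.0, 3.0):
        ys = [i * math.pi / 2000 for i in range(1, 2000)]
        vals = [r_s(x, y, w) for y in ys]
        for (r1, s1), (r2, s2) in zip(vals, vals[1:]):
            if s1 == 0.0 or s1 * s2 < 0.0:      # zero crossing of Im part
                assert r1 < 0.0 and r2 < 0.0    # Re part negative there
```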
We rely subsequently upon two textbook results for analytic continuation. The
first is the classic Schwarz reflection principle. The second is the
analyticity of certain functions defined via an integration. For ease of
reference we state the result below, in the formulation of Stein and Shakarchi
[17].
###### Theorem 1 (Stein and Shakarchi [17], Th. 5.4 Ch. 2).
Let $F(z,w)$ be defined for $(z,w)\in\Omega\times[0,1]$, where $\Omega$ is an
open set in ℂ. Suppose $F$ satisfies the following properties:
(i) $F(z,w)$ is holomorphic in $z$ for each $w$.
(ii) $F$ is continuous on $\Omega\times[0,1]$.
Then the function $f$ defined on $\Omega$ by $f(z)=\int_{0}^{1}F(z,w)\,dw$ is
holomorphic.
We shall construct a function $G(u)$ which is holomorphic in the entire strip
$-\pi<\Im(u)<\pi$, and which reduces to the function $g(u)$ defined by the
integral in (13) along the real positive $u$ axis.
###### Definition 1.
First, using $(\sqrt{z})_{+}$, define $G(u)\equiv G_{Q1}(u)$ in the interior
and on the boundaries of the upper half-strip in the first quadrant
$Q_{1}=\\{u\,|\,\arg(u)\in[0,\pi/2],\ \Im(u)<\pi\\}$ as the integral
$G_{\rm Q1}(u)\equiv\sinh u\int_{0}^{u}\frac{h(s)ds}{(\sqrt{\cosh u-\cosh
s})_{+}}\,,u\in Q_{1}\,.$ (24)
Elsewhere in the $|\Im(u)|<\pi$ strip, the function $G(u)$ is defined by
combined application of the odd symmetry property in $u$ and Schwarz
reflection principle
$\displaystyle
G(u)\equiv\left\\{\begin{array}[]{ccl}-(G_{Q1}(-u^{*}))^{*}&\,,&u\in
Q_{2}=\\{u\,|\,\arg(u)\in(\frac{\pi}{2},\pi],\ \Im(u)<\pi\\}\\\
-G_{Q1}(-u)&\,,&u\in
Q_{3}=\\{u\,|\,\arg(u)\in[-\pi,-\frac{\pi}{2}),\ |\Im(u)|<\pi\\}\\\
(G_{Q1}(u^{*}))^{*}&\,,&u\in
Q_{4}=\\{u\,|\,\arg(u)\in[-\frac{\pi}{2},0),\ |\Im(u)|<\pi\\}\\\
\end{array}\right.$ (28)
###### Theorem 2.
$G(u)$ is holomorphic (and hence analytic) in $|\Im(u)|<\pi$.
###### Proof.
The $s$-integral in (13) is well defined for positive real $u$. We would like
to define an analytic continuation of this integral to the complex $u$-plane,
in the strip $-\pi<\Im(u)<\pi$, which agrees with the original integral for
positive real $u$.
With $(\sqrt{z})_{+}$, along the upper side of the cut (the positive real
axis),
$(\sqrt{\cosh(u)-\cosh(wu)})_{+}$ is positive and real, and reproduces the
denominator in (13).
Figure 1: The function $G(u)$ is holomorphic in the strip $-\pi<\Im(u)<\pi$.
The convergence domain of the series expansion for $G(u)$ is the disc
$|u|<\pi$, and the closest singularities to $u=0$ are the branch points at
$\pm i\pi$.
Next introduce the integral $G_{Q1}(u)$ defined as in (24) for $u\in Q_{1}$.
This can be written equivalently as
$G_{Q1}(u)=u\sinh u\int_{0}^{1}\frac{h(wu)dw}{(\sqrt{\cosh
u-\cosh(wu)})_{+}}\,.$ (30)
The integrand is a holomorphic function for $u\in\mbox{Int}(Q_{1})$ for each
$w\in[0,1]$ and is jointly continuous in $u,w$. Continuity follows by Lemma 1
which ensures that the argument of the square root never crosses the cut for
all $u\in\mbox{Int}(Q_{1})$. This property is illustrated graphically in
Figure 2 which shows the mapping of the half-strip $0\leq\Im(u)<\pi,\Re(u)\geq
0$ by the function $\cosh u-\cosh(wu)$ at fixed $w=0.8$. As $w$ is varied in
$(0,1)$ the image of the half-strip does not cross the real positive axis,
which ensures continuity of $(\sqrt{\cosh u-\cosh(wu)})_{+}$ in $u$.
Figure 2: The mapping of the half-strip $0\leq\Re(u),0\leq\Im(u)<\pi$ by the
function $\cosh(u)-\cosh(wu)$ with $w=0.8$. Lines of constant $\Im(u)$ are
mapped to the radial curves extending from the center outwards.
By Theorem 1 it follows that:
(i) the function $G_{Q1}(u)$ is holomorphic for $u\in{\rm Int}(Q_{1})$.
In addition, we have:
(ii) The limits of $G_{Q1}(u)$ along the axes bordering $Q_{1}$ are continuous
with the interior values. Along the real $u$ axis this follows from the
equality $(\sqrt{z})_{+}=\sqrt{z}$ for real positive $z$, and along the
imaginary $u$ axis from Lemma 1.
Combining (i) and (ii), the Schwarz reflection principle provides the analytic
extension of $G(u)$ to $Q_{1}\cup Q_{4}$. This gives the last line in (28).
Enforcing the odd property $G(-u)=-G(u)$ and applying the Schwarz principle
again provides the remaining continuations to $Q_{2}$ and $Q_{3}$, and thus to
the entire open strip $-\pi<\Im(u)<\pi$.
∎
Comments. By construction, along the real positive $u$ axis, $G_{Q1}(u)$
reproduces the integral $g(u)$ in (13). In addition, $G(u)$ approaches $g(u)$
continuously as $u$ approaches the real axis from below
$g(u)=\lim_{\epsilon\to
0^{+}}G(u-i\epsilon)\,,\quad\Im(u)=0,\,\Re(u)>0\,.$ (31)
Of course, we more generally have that $G(u)$ is both continuous and
continuously differentiable as any axis is crossed in $|\Im(u)|<\pi$, since
$G$ is analytic there. Along the negative real axis we have $G(-u)=-G(u)$ by
the odd symmetry in $u$. All of this smoothness is evident in Fig. 3, which
shows plots of both the real and imaginary parts of $G(u)$ in a rectangle.
Figure 3: Plots of $\Re(G(u))$ (left) and $\Im(G(u))$ (right) for
$\sigma_{0}=0.1$, with $u=x+iy$ in the ranges $x:[-2,2],y:[-\pi,\pi]$.
### 4.1 The $G(u)$ power series
By Theorem 2, the function $G(u)$ can be expanded in a power series around
$u=0$ which is convergent for $|u|<\pi$. As advertised, the series contains
only odd powers
$G(u)=\sigma_{0}\sum_{k=0}^{\infty}a_{k}(\sigma_{0})u^{2k+1}\,.$ (32)
The first few terms are
$G(u)=\frac{\pi}{2\sqrt{2}}\sigma_{0}u\left(1+\frac{1}{48}(5-\sigma_{0}^{2})u^{2}+\frac{23-110\sigma_{0}^{2}+3\sigma_{0}^{4}}{15360}u^{4}+O(u^{6})\right)\,.$
(33)
### 4.2 Singularity structure
The function $G(u)$ has logarithmic singularities on the imaginary axis of the
form
$G(u)=\frac{\sigma_{0}}{2}\sqrt{2}\cdot(u-u_{\pm})\log\frac{1}{u-u_{\pm}}+\mbox{regular}\,,\quad
u\to u_{\pm}=\pm i\pi\,.$ (34)
In order to study this singularity, take $u=iy$ along the imaginary axis with
$y<\pi$. Denoting the integration variable $s=it$ and neglecting the factor
$h(s)$ which is regular as $s\to i\pi$, the integral is approximated as
$\int_{0}^{y}\frac{idt}{\sqrt{\cos y-\cos t}}=\int_{0}^{y}\frac{dt}{\sqrt{\cos
t-\cos y}}=\sqrt{2}\log\frac{8}{\pi-y}+O(\pi-y)\,.$ (35)
The convergence domain of the Taylor series of $G(u)$ around $u=0$ is
restricted by this singularity to the open disk $|u|<\pi$. See Fig. 1. The
root test – see Fig. 4 – confirms convergence of the series expansion for
$G(u)$ with a convergence radius $\pi$.
Figure 4: Root test for the convergence of the series expansion (32) for
$G(u)$, showing the reduced coefficients $|a_{n}|^{-1/n}$ vs $1/n$. The
horizontal line is at $\pi$. $\sigma_{0}=0.5$.
Application of “transfer results” in Flajolet and Sedgewick (see equation (26)
in Ch.VI.2 of [3]) gives the leading large-$n$ order asymptotics of the
coefficients in the series expansion (32)
$a_{k}(\sigma_{0})=(-1)^{k}\frac{\sqrt{2}}{\pi^{2k}(2k)(2k+1)}+O(k^{-3})\,.$
(36)
### 4.3 Non-convergence (revisited)
Recall from (14), dropping pre-factors:
$V(T)=\int_{0}^{\infty}e^{-\frac{u^{2}}{2T}}g(u)\frac{du}{\sqrt{2\pi
T}}=\sqrt{\frac{T}{2\pi}}\sum_{k=0}^{\infty}a_{k}(\sigma_{0})(2T)^{k}\Gamma(1+k).$
(37)
It is worth revisiting the non-convergence argument in light of the refined
asymptotics (36). The large-$k$ behavior of the terms in the series now takes the form
$a_{k}(2T)^{k}k^{k}e^{-k}\sim\left(\frac{2kT}{\pi^{2}e}\right)^{k}\,,\quad
k\to\infty\,.$ (38)
Again the root test shows that the $T$-series for the ATM option price has
zero radius of convergence.
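To make the zero radius concrete, one can tabulate the logarithm of the term magnitudes $|a_{k}|(2T)^{k}\Gamma(1+k)$ in (37), using the leading asymptotics (36) for the coefficients: for any fixed $T>0$ the terms decrease at first and then grow without bound. A rough numerical sketch (asymptotic coefficients only; helper names are ours):

```python
import math

def log_term(k, T):
    # log of |a_k| (2T)^k Gamma(1+k), with |a_k| taken from the
    # leading large-k asymptotics (36): |a_k| ~ sqrt(2)/(pi^{2k} (2k)(2k+1))
    log_ak = 0.5 * math.log(2.0) - 2 * k * math.log(math.pi) \
             - math.log(2 * k) - math.log(2 * k + 1)
    return log_ak + k * math.log(2 * T) + math.lgamma(1 + k)

# at T = 1 the terms shrink for small k, then diverge
terms = [log_term(k, 1.0) for k in range(1, 200)]
```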
### 4.4 Non-convergence of the short-maturity expansion of the implied
volatility function
This is our title result. Consider the expansion for the implied volatility
$\sigma_{\rm BS}(T)=\sum_{k=0}^{\infty}b_{k}T^{k}$. This is related to the ATM
option price as
$\frac{1}{\sqrt{T}}C(K=S_{0},T)=S_{0}\frac{1}{\sqrt{T}}\mbox{Erf}\left(\frac{\sigma_{\rm
BS}(T)\sqrt{T}}{2\sqrt{2}}\right)\,.$ (39)
We prove that a positive convergence radius of the series for the implied
variance $\sigma_{\rm BS}^{2}(T)$ would imply a positive convergence radius
for the series of the option price $\frac{1}{\sqrt{T}}C(K=S_{0},T)$. Since the
latter series is non-convergent, we conclude that the former must be
non-convergent as well. The proof proceeds in two steps.
Step 1. First we observe that the function
$f(z)\equiv\frac{1}{\sqrt{z}}\mbox{Erf}(\sqrt{z})$ is entire. This follows
from the application of the root test for convergence to its Taylor expansion
$f(z)=\frac{2}{\sqrt{\pi}}\sum_{n=0}^{\infty}(-1)^{n}\frac{1}{(2n+1)n!}z^{n}$.
The root test gives that the convergence radius of this series expansion is
infinite, which proves that $f$ is entire.
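Step 1 can be illustrated numerically: the Taylor series of $f(z)=\frac{1}{\sqrt{z}}\mbox{Erf}(\sqrt{z})$ converges rapidly even for moderately large $|z|$, consistent with $f$ being entire. A small sanity check in Python (helper names are ours):

```python
import math

def f_series(z, nmax=80):
    # Taylor series of Erf(sqrt(z))/sqrt(z) around z = 0
    return (2 / math.sqrt(math.pi)) * sum(
        (-1) ** n * z ** n / ((2 * n + 1) * math.factorial(n))
        for n in range(nmax)
    )

def f_exact(z):
    # direct evaluation for real z > 0
    return math.erf(math.sqrt(z)) / math.sqrt(z)

# the truncated series matches the exact values far from the origin
errs = [abs(f_series(z) - f_exact(z)) for z in (0.25, 1.0, 4.0, 9.0)]
```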
Step 2. Suppose $g(z)$ is an analytic function with finite convergence radius
$|z|<R$ and denote $R_{0}>0$ the radius of the largest disk centered on $z=0$
which is mapped by $z\to g(z)$ to a region which does not include the origin.
Then $h(z)\equiv\frac{1}{\sqrt{z}}\mbox{Erf}[g(z)\sqrt{z}]$ has the
convergence radius $\min(R,R_{0})$. This follows from writing
$h=\sqrt{g^{2}}\times f\circ(g^{2}z)$ as the composition of $f(z)$ with
$g^{2}(z)z$. The analyticity domain of $h$ is limited either by the
analyticity domain of $g$, or by the branch cut of $\sqrt{g^{2}(z)}$ starting
at the point where $g(z)=0$, and is thus the same as the disk
$|z|<\min(R,R_{0})$.
From the non-convergence of the series for $\frac{1}{\sqrt{T}}V(T)$, it
follows that the series for $\sigma_{\rm BS}(T)$ also has zero convergence
radius.
## 5 Numerical illustrations and error estimates
The asymptotic nature of the $T$-expansion of option prices and implied
volatility for the SABR model requires a careful application for practical
use. The $T$-series (37) for the value function $V(T)$ must be truncated to
some finite order $N$. Two issues must be addressed in relation to the use of
asymptotic series: i) what is the optimal truncation order $N_{*}$, and ii)
what is the best attainable error of the series,
$\varepsilon_{*}=\inf_{N}|\varepsilon_{N}(T)|$, where
$\varepsilon_{N}(T)=V(T)-V_{N}(T)$ is the truncation error.
We illustrate these issues with the example of the value function $V_{0}(T)$
defined by taking $h(s)=1$. This situation corresponds to the small volatility
regime $\sigma_{0}\ll 1$, when the $h(s)$ factor is well approximated by a
constant for $\sigma_{0}\sinh s\ll 1$. For this case the integral in (13) can
be evaluated exactly as
$g_{0}(u)\equiv\sinh u\int_{0}^{u}\frac{ds}{\sqrt{\cosh u-\cosh s}}=-2i\sinh
u\frac{F(\frac{1}{2}iu|-\mbox{cosech}^{2}(u/2))}{\sqrt{\cosh u-1}}\,,$ (40)
where $F(\phi|m)=\int_{0}^{\phi}(1-m\sin^{2}\theta)^{-1/2}d\theta$ is the
elliptic integral of the first kind. The value function $V_{0}(T)$ is defined
by (14) with the replacement $g(u)\to g_{0}(u)$.
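For a direct numerical check, the integral defining $g_{0}(u)$ can also be evaluated by quadrature; the substitution $s=u\sin\theta$ removes the inverse square-root singularity at $s=u$. The result matches the small-$u$ expansion $g_{0}(u)=\frac{\pi}{\sqrt{2}}u\left(1+\frac{5}{48}u^{2}+\dots\right)$, which is consistent with the $\sigma_{0}\to 0$ limit of (33) for constant $h$. A sketch using the midpoint rule (helper names are ours):

```python
import math

def g0(u, n=4000):
    # g0(u) = sinh(u) * int_0^u ds / sqrt(cosh u - cosh s),
    # computed via s = u sin(theta); the integrand is bounded on [0, pi/2]
    h = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * h   # midpoint rule avoids the endpoints
        s = u * math.sin(theta)
        total += u * math.cos(theta) / math.sqrt(math.cosh(u) - math.cosh(s))
    return math.sinh(u) * total * h

def g0_series(u):
    # leading terms of the small-u expansion of g0
    return (math.pi / math.sqrt(2)) * u * (1 + 5 * u ** 2 / 48)
```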
Figure 5 shows the numerical evaluation of $V_{0}(T)$ from the series
expansion (37) truncated to order $n$, plotted as a function of $n$, compared
with numerical evaluation of $V_{0}(T)$ using the exact result (40) for the
integrand. The different plots correspond to several values of the $T$
parameter.
Figure 5: Dots: The partial sum of $V_{0}(T)$ from the series expansion (37)
keeping terms up to $O(T^{n})$, vs $n$. Horizontal black line: numerical
evaluation of $V_{0}(T)$.
We note from these plots that the truncated series agrees best with the
numerical evaluation at that order $N_{*}$ where the last neglected term
$V_{N_{*}+1}-V_{N_{*}}$ reaches a minimum. This agrees with the typical
behavior of asymptotic series [2]. The optimal truncation order $N_{*}$ can be
estimated from the large-order asymptotics (38): the ratio of consecutive
terms is $\simeq 2T(k+1)/\pi^{2}$, so the terms reach their minimum at
$N_{*}\sim\frac{\pi^{2}}{2T}$. $N_{*}$ decreases as $T$ increases, and
approaches unity for $T\sim\frac{1}{2}\pi^{2}\simeq 4.9$. These arguments
show that the asymptotic series has a maximum range of validity and breaks
down for too large $T$.
An upper bound on the optimal truncation error of the series can be obtained
from a bound on the contribution to the integral (14) from the region $u>\pi$.
Denoting
$V_{\rm
err}(T)=\int_{\pi}^{\infty}e^{-\frac{u^{2}}{2T}}g(u)\frac{du}{\sqrt{2\pi T}}$
(41)
we have
$|V_{\rm
err}(T)|\leq\sqrt{2}\sigma_{0}\left\\{\sqrt{\frac{T}{2\pi}}e^{-\frac{\pi(\pi-T)}{2T}}+\frac{1}{2}e^{T/8}TN\left(\frac{\frac{1}{2}T-\pi}{\sqrt{T}}\right)\right\\}\,.$
(42)
with $N(x)=\int_{-\infty}^{x}e^{-\frac{1}{2}t^{2}}\frac{dt}{\sqrt{2\pi}}$ the
CDF of the standard normal distribution.
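The bound (42) is cheap to evaluate numerically; for instance, at $\sigma_{0}=0.1$ it is of order $10^{-5}$ for $T=0.5$ and grows rapidly with $T$. A direct transcription in Python (helper names are ours):

```python
import math

def normal_cdf(x):
    # N(x): CDF of the standard normal distribution
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def verr_bound(T, sigma0):
    # upper bound (42) on |V_err(T)|
    term1 = math.sqrt(T / (2 * math.pi)) \
            * math.exp(-math.pi * (math.pi - T) / (2 * T))
    term2 = 0.5 * math.exp(T / 8) * T \
            * normal_cdf((0.5 * T - math.pi) / math.sqrt(T))
    return math.sqrt(2.0) * sigma0 * (term1 + term2)
```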
Figure 6: The best attainable error (in percent), measured by the ratio of the
contribution to $V(T)$ from the region $u>\pi$ where the $g(u)$ series is not
convergent to the complete $V(T)$ integral. The relative error increases with
maturity $T$, and decreases with $\sigma_{0}$. The black curve corresponds to
$V_{0}(T)$ and the colored curves correspond to $\sigma_{0}=0.1,0.5,1.0$.
###### Proof.
The error bound (42) follows from an upper bound on $g(u)$: for any
$\sigma_{0}>0$, we have
$g(u)\leq\sigma_{0}\sqrt{2}u\cosh(u/2)\,,\quad u\geq 0\,.$ (43)
It is sufficient to prove this bound for $g_{0}(u)$ defined by setting
$h(s)\to 1$ in the definition (13), since we have
$|h(s)|\leq\frac{1}{2}\sigma_{0}$. The argument of the square root in the
denominator is a concave function of $s$ and thus is bounded from below as
$\cosh u-\cosh s\geq(1-s/u)[\cosh(u)-1]$. This gives the upper bound
$\int_{0}^{u}\frac{ds}{\sqrt{\cosh u-\cosh s}}\leq\frac{2u}{\sqrt{\cosh u-1}}$
(44)
This yields the bound (43).
The bound (43) can be used to obtain an upper bound on the truncation error
but the analytical result is lengthy. The simpler result (42) is obtained from
a weaker bound $g(u)\leq\sqrt{2}\sigma_{0}ue^{u/2}$.
∎
The bound (42) shows that the optimal truncation error is exponentially
suppressed as $\sim e^{-O(1)/T}$ for $T<\pi$. For $T>\pi$ the error may be
still small, but the bound (42) is not strong enough to guarantee it.
Numerical simulations suggest that this error bound overestimates the actual
error for larger $\sigma_{0}$.
Numerical evaluations of the error term $V_{\rm err}(T)$ in Fig. 6 confirm
that the relative error is negligibly small for $T<1.0$ and increases rapidly
with $T$. We also note that the error decreases with $\sigma_{0}$. This effect
is due to the factor $h(s)$, which oscillates at a rate that increases with
$\sigma_{0}$. For very large $\sigma_{0}$ the oscillations suppress the
contribution from the integration region $u>\pi$, and thus decrease the error
of the asymptotic series.
## 6 The large $\sigma_{0}$ scaling limit
In the large volatility limit $\sigma_{0}\to\infty$ the function $g(u)$
approaches a simple form
$\lim_{\sigma_{0}\to\infty}g(u)=\frac{\pi}{\sqrt{2}}\cosh(u/2)\equiv
g_{\infty}(u)$. This follows from the limit in distribution sense
$\lim_{\epsilon\to 0}\frac{\sin(x/\epsilon)}{\pi x}=\delta_{+}(x)$. Here
$\delta_{+}(x)$ is defined by
$\int_{0}^{u}\delta_{+}(x)f(x)dx=\frac{1}{2}f(0)$, with $f(x)$ some test
function defined on the positive axis. Taking
$h(s)\to\pi\frac{\sigma_{0}}{2}\delta_{+}(s)$ into the definition (13), the
integral is trivially evaluated with the result shown.
Taking $g(u)\to g_{\infty}(u)$ in (12) gives
$\lim_{\sigma_{0}\to\infty}C(K=S_{0})=S_{0}$. (The exchange of limit and
integration is justified by the Lebesgue dominated convergence theorem, using
that $g(u)$ is bounded from above as shown in (43).) What is the approach to
this limit? The answer to this question is related to the
$\sigma_{0}\to\infty$ asymptotics of the ATM implied volatility $\sigma_{\rm
BS}(0,T)$. This asymptotics takes a simple form when considered at fixed
product $\tau=\frac{1}{2}\sigma_{0}T$. Expressed in terms of $\tau$, the
Black-Scholes formula gives
$S_{0}-C\left(K=S_{0},T=\frac{2\tau}{\sigma_{0}}\right)=2S_{0}\,N\left(-\sqrt{\frac{1}{2}\sigma_{0}\tau}\,\,\Sigma_{BS}\left(\frac{2\tau}{\sigma_{0}},\sigma_{0}\right)\right)$
(45)
where $\Sigma_{\rm BS}(T,\sigma_{0})$ is defined in Eq. (5).
The large $\sigma_{0}$ asymptotics of the implied volatility function
$\Sigma_{BS}(T,\sigma_{0})$ turns out to depend only on $\tau$, and has a
calculable form, given by the following result.
###### Proposition 1.
We have the limit
$\lim_{\sigma_{0}\to\infty}\frac{1}{\sigma_{0}}\log\left[S_{0}-C\left(S_{0},\frac{2\tau}{\sigma_{0}}\right)\right]=-\frac{1}{4}\hat{\Sigma}^{2}_{\rm
BS}(\tau)\,\tau$ (46)
with
$\hat{\Sigma}^{2}_{\rm BS}(\tau)=\frac{\sin
2\lambda}{\lambda}-\frac{1}{2}(1+\cos(2\lambda))$ (47)
where $\lambda=\lambda(\tau)$ is the solution of
$\frac{\lambda}{\cos\lambda}=\tau\,.$ (48)
###### Proof.
See the Appendix. ∎
The result (47) reproduces the asymptotic implied volatility in the $\beta=1$
SABR model in the uncorrelated limit from [15].
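The limit in Proposition 1 is straightforward to evaluate: since $\lambda/\cos\lambda$ is increasing on $(0,\pi/2)$, equation (48) can be solved by bisection, and substituting in (47) gives $\hat{\Sigma}^{2}_{\rm BS}$; one finds $\hat{\Sigma}^{2}_{\rm BS}\to 1$ as $\tau\to 0$ and a decreasing limit variance. A sketch (function names are ours):

```python
import math

def lam(tau):
    # solve lambda / cos(lambda) = tau on (0, pi/2) by bisection, cf. (48);
    # the left-hand side is increasing from 0 to infinity on this interval
    lo, hi = 0.0, math.pi / 2 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid / math.cos(mid) < tau:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sigma_hat_sq(tau):
    # limiting implied variance (47)
    l = lam(tau)
    return math.sin(2 * l) / l - 0.5 * (1 + math.cos(2 * l))
```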
We show next that the convergence properties of the series expansion of
$\hat{\Sigma}_{\rm BS}(\tau)$ are better behaved than expected from the non-
convergence of the implied volatility $\sigma_{\rm BS}(0,T)$. For simplicity
we consider the series expansion of the implied variance
$\hat{\Sigma}_{BS}^{2}(\tau)=\sum_{n=0}^{\infty}a_{n}\tau^{n}\,.$ (49)
The convergence properties of this expansion are given by the following
result.
###### Proposition 2.
The series expansion (49) converges for $|\tau|<R_{\tau}$, with
$R_{\tau}=\frac{y_{0}}{\cosh y_{0}}\simeq 0.662743$ and $y_{0}\simeq 1.19968$
the positive solution of the equation $y\tanh y=1$.
###### Proof.
The function $\hat{\Sigma}_{\rm BS}^{2}(\tau)=g(\lambda(\tau))$ is the
composition of two functions, with $g(\lambda)$ defined by the function on the
right-hand side of (47) and $\lambda(\tau)$ the inverse of $\tau(\lambda)$ in
(48).
The function $\lambda(\tau)$ has two branch points of order $\frac{1}{2}$ on
the imaginary axis at $\tau_{\pm}=\pm i\tau_{0}$ where
$\tau_{0}=\frac{y_{0}}{\cosh y_{0}}\simeq 0.663$ and $y_{0}=1.19968$ is the
positive solution of the equation $y_{0}\tanh y_{0}=1$. This result follows
from a study of the critical points of $f(z):=\frac{z}{\cos z}$. It is known,
see e.g. Theorem 3.5.1 in [16], that the inversion of a complex function
$w=f(z)$ around a critical point $f^{\prime}(z_{0})=0$, gives a multivalued
function and thus $z(w)$ has a branch point at $w_{0}=f(z_{0})$. See also
Theorem VI.6 in Flajolet and Sedgewick [3]. For a similar application to a
more complex case see Sec. 2 in [12].
The critical points of $f$ are solutions of the equation
$f^{\prime}(z)=\frac{1}{\cos z}(1+z\tan z)=0$. This equation has two types of
solutions:
i) Solutions on the imaginary axis. They are at $z_{\pm}=\pm iy_{0}$ with
$y_{0}=1.19968$ the positive solution of $y_{0}\tanh y_{0}=1$. These points
are mapped to $\tau_{\pm}=\pm i\tau_{0}$ with $\tau_{0}=\frac{y_{0}}{\cosh
y_{0}}\simeq 0.663$.
ii) Infinitely many solutions along the real axis at $z_{k}$ given by the
solutions of $\tan z_{k}=-\frac{1}{z_{k}}$ with $k\in\mathbb{Z}$. These
solutions are mapped to $\tau_{k}=f(z_{k})$ which are further away from origin
than $\tau_{0}$.
In conclusion the dominant singularities of $\lambda(\tau)$ are the two branch
points at $\tau_{\pm}=\pm i\tau_{0}$. Since $g(\lambda)$ is entire, the
singularities of $\hat{\Sigma}_{\rm BS}^{2}(\tau)$ are the two branch points
at $\tau_{\pm}=\pm i\tau_{0}$, which thus determine the convergence radius of
the series (49).
∎
While we have worked throughout with $\omega=1$, under general $\omega$, the
expansion is in powers of $\tau=\frac{1}{2}\omega\sigma_{0}T$. Then Prop. 2
implies a corresponding convergence radius for the $T$-series expansion of the
implied variance $T_{c}=1.32/(\omega\sigma_{0})$.
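The constants in Proposition 2 follow from a single scalar root-find: $y_{0}$ solves $y\tanh y=1$ (the left-hand side is increasing in $y$), $R_{\tau}=y_{0}/\cosh y_{0}$, and at $\omega=\sigma_{0}=1$ the corresponding $T$-radius is $2R_{\tau}\approx 1.325$. A quick check in Python (helper names are ours):

```python
import math

def solve_y0():
    # positive root of y * tanh(y) = 1, found by bisection
    lo, hi = 0.5, 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid * math.tanh(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

Y0 = solve_y0()
R_TAU = Y0 / math.cosh(Y0)   # convergence radius in tau
T_C = 2.0 * R_TAU            # T-radius at omega = sigma0 = 1
```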
## 7 Summary and discussion
We studied the nature of the short maturity expansion for option prices and
implied volatility in the uncorrelated log-normal SABR model, and showed that
in general the expansion diverges for any maturity. This implies that the
series expansion is asymptotic, and that its application for numerical
evaluation has to consider issues such as optimal truncation order. The
optimal truncation error is exponentially suppressed for sufficiently small
maturity $\omega^{2}T<\pi$. For these maturities the first few terms of the
asymptotic series give generally a good approximation of the exact result, but
for longer maturity numerical approaches are preferable to the series
expansion evaluation.
Despite the asymptotic nature of the short maturity expansion in this model,
it is remarkable that a certain subset of the terms in the short maturity
expansion of the implied volatility can be resummed into a convergent series
with a finite convergence radius. This subset corresponds to the large
$\sigma_{0}$ scaling limit at $\omega=1$, or more generally to the limit
$\omega\to 0,\sigma_{0}\to\infty$ at fixed product $\omega\sigma_{0}$, and can
be summed in closed form.
Although the analysis focused on the ATM option prices and implied volatility,
the methods used are more general and may be used also for the study of the
short maturity expansion at fixed strike, or in various regimes of joint small
maturity-small log-strike.
Acknowledgements. We thank an anonymous referee for helpful suggestions that
improved the presentation of the paper.
## Appendix – Large $\sigma_{0}$ asymptotics
There are two components to the asymptotics for the $\sigma_{0}\to\infty$
limit. First, we have the large $\sigma_{0}$ asymptotics of $g(u)$.
###### Proposition 3.
The leading $\sigma_{0}\to\infty$ asymptotics of the function $g(u)$ is
$g_{\infty}(u)-g(u)\simeq\frac{2\sqrt{\pi}}{\sqrt{\sigma_{0}\sinh(2u)}}\cos\left(\frac{1}{2}\sigma_{0}\sinh
u+\frac{\pi}{4}\right)+O(\sigma_{0}^{-3/2})\,,\quad\sigma_{0}\to\infty$ (50)
where $g_{\infty}(u)\equiv\frac{\pi}{\sqrt{2}}\cosh(u/2)$.
###### Proof.
This follows from Laplace’s method for contour integrals; see Theorem 6.1 in
Chapter 4, §6 of Olver [13]. The leading contribution to the
$\sigma_{0}\to\infty$ asymptotics comes from the upper boundary of the
integration region in (13). ∎
Second, the asymptotics (50) is translated into an asymptotic result for the
integral
$\Delta
V(T,\sigma_{0})=\int_{0}^{\infty}e^{-\frac{u^{2}}{2T}}(g(u)-g_{\infty}(u))\frac{du}{\sqrt{2\pi
T}}$ (51)
which determines the price of a covered ATM call as
$S_{0}-C(K=S_{0},T)=\frac{2\sqrt{2}}{\pi}S_{0}\,e^{-T/8}\Delta
V(T,\sigma_{0})$ (52)
Substituting the leading approximation (50), the integral (51) can be written
equivalently as
$\displaystyle\Delta V(T,\sigma_{0})$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{\tau}}\int_{0}^{\infty}e^{-\frac{\sigma_{0}u^{2}}{4\tau}}\cos\left(\frac{1}{2}\sigma_{0}\sinh
u+\frac{\pi}{4}\right)\frac{du}{\sqrt{\sinh(2u)}}$ $\displaystyle=$
$\displaystyle\frac{1}{2\sqrt{\tau}}\int_{0}^{\infty}\left(e^{-\sigma_{0}\varphi_{+}(u)}\sqrt{i}+e^{-\sigma_{0}\varphi_{-}(u)}\sqrt{-i}\right)\frac{du}{\sqrt{\sinh(2u)}}$
where we denoted
$\varphi_{\pm}(u)=\frac{u^{2}}{4\tau}\mp\frac{i}{2}\sinh u\,.$ (54)
The two integrals can be evaluated using the saddle point method. They are
similar so it is sufficient to consider the first integral
$I_{+}(\tau)=\int_{0}^{\infty}e^{-\sigma_{0}\varphi_{+}(u)}\frac{du}{\sqrt{\sinh(2u)}}$
(55)
The function $\varphi_{+}(u)$ has saddle points given by the solutions of the
equation
$\varphi^{\prime}_{+}(u)=\frac{1}{2}(\frac{u}{\tau}-i\cosh u)=0$. This
equation has solutions on the imaginary axis. The closest saddle point is at
$u=i\lambda$ where $\lambda<\frac{\pi}{2}$ is the positive solution of the
equation $\frac{\lambda}{\cos\lambda}=\tau$. This establishes (48). At this
point we have
$\varphi^{\prime\prime}_{+}(i\lambda)=\frac{1}{2}(1/\tau+\sin\lambda)>0$.
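These saddle-point facts are easy to confirm with complex arithmetic. For $\tau=1$, (48) reduces to $\lambda=\cos\lambda$, which is solved by fixed-point iteration, and one verifies $\varphi'_{+}(i\lambda)=0$ and that $\varphi_{+}(i\lambda)$ is real. A sketch (helper names are ours):

```python
import cmath
import math

TAU = 1.0

def phi_plus(u):
    # phase function (54): u^2/(4 tau) - (i/2) sinh(u)
    return u * u / (4 * TAU) - 0.5j * cmath.sinh(u)

def dphi_plus(u):
    # derivative: (1/2)(u/tau - i cosh(u))
    return 0.5 * (u / TAU - 1j * cmath.cosh(u))

# for tau = 1, (48) reads lambda = cos(lambda); iterate to the fixed point
lam = 0.75
for _ in range(200):
    lam = math.cos(lam)

saddle = 1j * lam  # the saddle point u = i lambda
```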
The curves of steepest descent (ascent) from the saddle point are solutions of
the equation
$\Im\varphi_{+}(u)=\frac{xy}{\tau}-\cos y\sinh x=0\,,\quad u=x+iy$ (56)
This equation is satisfied along the imaginary axis $x=0$, and along a curve
given by
$y(x)=\lambda\left(\tau\frac{\sinh x}{x}\right)$ (57)
where $\lambda(\tau)$ is the solution of $\frac{\lambda}{\cos\lambda}=\tau$.
Since $\lim_{\tau\to\infty}\lambda(\tau)=\frac{\pi}{2}$, the curve $y(x)$
approaches $\pi/2$ as $|x|\to\infty$. $y(x)$ intersects the imaginary axis at
the saddle point $S=i\lambda$. See Fig. 7 for an illustration for $\tau=1$.
For the application of the saddle point method we deform the contour of
integration from the real positive axis such that it passes through the saddle
point $S$ as shown in Fig. 7 as the red curve. Along the vertical segment
$u\in[0,S]$ the function $\Re\varphi_{+}(u)$ increases as $u\to S$ and along
the curve in the first quadrant, $\Re\varphi_{+}(u)$ increases further as we
move away from the saddle point. We call the latter curve a steepest descent
path, and the former a steepest ascent path.
The integration contour can be deformed from the real axis to this path
without encountering any singularities of the integrand $1/\sqrt{\sinh(2u)}$.
The closest singularities of this function to the origin are branch points at
$\pm i\frac{\pi}{2}$, which are farther away than the saddle point.
Figure 7: Steepest descent paths $\Im\varphi_{+}(u)=0$ for $\tau=1.0$. The
integration contour is deformed from the real axis to the path shown in red.
Along the vertical piece of this path the function $\Re\varphi_{+}(u)$
increases as one approaches $S$, and along the curved portion it increases
further as one moves away from the saddle point. The contribution from the
vertical path along the imaginary axis cancels out in the final result. The
only contribution appears from the curved path.
The integral is written as a sum of two contributions from the two pieces of
the contour $I_{+}=\int_{0}^{S}+\int_{S}^{i\frac{\pi}{2}+\infty}$ where
$S=i\lambda$ is the saddle point. The first term is imaginary, and cancels
against an identical contribution from $I_{-}$. The second integral is
dominated by the contribution of the saddle point and is expressed as a
Laplace integral by introducing the new integration variable
$\zeta=\varphi_{+}(u)-\varphi_{+}(S)$. Since along the contour
$\Im\varphi_{+}(u)=0$, the variable $\zeta$ is real.
Expanding the integrand as $\zeta\to 0$ we obtain
$\displaystyle\int_{S}^{i\frac{\pi}{2}+\infty}e^{-\sigma_{0}\varphi_{+}(u)}\frac{du}{\sqrt{\sinh(2u)}}=e^{-\sigma_{0}\varphi_{+}(i\lambda)}\int_{0}^{\infty}e^{-\sigma_{0}\zeta}\frac{d\zeta}{\sqrt{\sinh(2u)}\varphi^{\prime}_{+}(u)}$
(58)
$\displaystyle=e^{-\sigma_{0}\varphi_{+}(i\lambda)}\frac{1}{\sqrt{2i\varphi^{\prime\prime}_{+}(S)\sin(2\lambda)}}\int_{0}^{\infty}e^{-\sigma_{0}\zeta}\frac{d\zeta}{\sqrt{\zeta}}(1+O(\sqrt{\zeta}))$
The integral can be evaluated term by term by Watson’s lemma. The leading
contribution is
$\int_{0}^{\infty}e^{-\sigma_{0}\zeta}\frac{d\zeta}{\sqrt{\sinh(2u)}\varphi^{\prime}_{+}(u)}=\sqrt{\frac{\pi}{2i\varphi^{\prime\prime}_{+}(S)\sin(2\lambda)}}\cdot\frac{1}{\sqrt{\sigma_{0}}}(1+O(\sigma_{0}^{-1/2}))$
(59)
Collecting all factors gives
$\Re[I_{+}\sqrt{i}]=C\frac{1}{\sqrt{\sigma_{0}}}e^{-\sigma_{0}\varphi_{+}(i\lambda)}(1+O(\sigma_{0}^{-1/2}))$
(60)
with $\varphi_{+}(i\lambda)=\frac{1}{4}(2\sin\lambda-\lambda\cos\lambda)$ and
$C=\sqrt{\frac{\pi}{2\varphi^{\prime\prime}_{+}(i\lambda)\sin(2\lambda)}}=\sqrt{\frac{\pi\lambda}{2\sin\lambda\cos^{2}\lambda(1+\lambda\tan\lambda)}}$
(61)
The same result is obtained for $I_{-}$. Adding their contributions reproduces
the exponential factor implicit in the logarithm on the left-hand side of (46).
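The match with (46) amounts to the identity $\varphi_{+}(i\lambda)=\frac{1}{4}\tau\,\hat{\Sigma}^{2}_{\rm BS}(\tau)$, equivalently $\frac{1}{4}(2\sin\lambda-\lambda\cos\lambda)=\frac{1}{4}\frac{\lambda}{\cos\lambda}\left[\frac{\sin 2\lambda}{\lambda}-\frac{1}{2}(1+\cos 2\lambda)\right]$, which holds for all $\lambda$ and can be confirmed numerically (helper names are ours):

```python
import math

def lam(tau):
    # invert tau = lambda / cos(lambda) on (0, pi/2) by bisection
    lo, hi = 0.0, math.pi / 2 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid / math.cos(mid) < tau:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def exponent_saddle(tau):
    # phi_plus(i lambda), the real exponent appearing in (60)
    l = lam(tau)
    return 0.25 * (2 * math.sin(l) - l * math.cos(l))

def exponent_prop1(tau):
    # (1/4) tau * Sigma_hat^2(tau) from (46)-(47)
    l = lam(tau)
    s2 = math.sin(2 * l) / l - 0.5 * (1 + math.cos(2 * l))
    return 0.25 * tau * s2
```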
## References
* [1] A. Antonov, M. Konikov and M. Spector, Modern SABR Analytics, Springer, New York 2019.
* [2] J.P. Boyd, The Devil’s Invention: Asymptotic, Superasymptotic and Hyperasymptotic Series, Acta Applicandae Mathematica 56 1-98 (1999)
* [3] P. Flajolet and R. Sedgewick, Analytic Combinatorics, Cambridge University Press, Cambridge, 2008
* [4] N. de Guillaume, R. Rebonato and A. Pogudin, The nature of the dependence of the magnitude of rate moves on the level of rates: A universal relationship, Quantitative Finance 13, 351-367 (2013)
* [5] P.S. Hagan, D. Kumar, A.S. Lesniewski and D.E. Woodward, Managing smile risk, Wilmott Magazine, Sept. 2002.
* [6] P. Henry-Labordére, Analysis, Geometry and Modeling in Finance: Advanced Methods in Option Pricing, Chapman and Hall, 2009
* [7] A. Lewis, Option Valuation under Stochastic Volatility, Finance Press, Newport 2000
* [8] A. Lewis, Option Valuation under Stochastic Volatility II, Finance Press, Newport Beach, 2016
* [9] A. Lewis, unpublished work, using the algorithm described in [8] (pg. 505).
* [10] E. Lukacs, Characteristic Functions, Griffin, Second Edition, 1970
* [11] H. McKean, An upper bound to the spectrum of $\Delta$ on a manifold of negative curvature, Journal of Differential Geometry 4, 359-366 (1970)
* [12] P. Nándori and D. Pirjol, On the distribution of the time-integral of the geometric Brownian motion, 2020.
* [13] F.W.J. Olver, Asymptotics and Special Functions, Academic Press 1974.
* [14] L. Paulot, Asymptotic implied volatility at the second order with application to the SABR model, arXiv:0906.0658[q-fin.PR]
* [15] D. Pirjol and L. Zhu, Asymptotics of the time-discretized log-normal SABR model: The implied volatility surface, arXiv:2001.09850, Probability in Engineering and Computational Sciences in print, arXiv:2001.09850[q-fin.MF]
* [16] B. Simon, A Comprehensive Course in Analysis, II.A Basic Complex Analysis, American Mathematical Society 2017.
* [17] E. Stein and R. Shakarchi, Princeton Lectures in Analysis, II Complex Analysis, Princeton University Press, Princeton 2003
2107.12443
# SeismographAPI: Visualising Temporal-Spatial Crisis Data
Raphael Lepuschitz [email protected] University of Innsbruck,
Innsbruck, Austria and Niklas Stoehr [email protected] ETH Zurich,
Zurich, Switzerland
###### Abstract.
Effective decision-making for crisis mitigation increasingly relies on
visualisation of large amounts of data. While interactive dashboards are more
informative than static visualisations, their development is far more time-
demanding and requires a range of technical and financial capabilities. There
are few open-source libraries available, which is blocking contributions from
low-resource environments and impeding rapid crisis responses. To address
these limitations, we present SeismographAPI, an open-source library for
visualising temporal-spatial crisis data on the country- and sub-country level
in two use cases — Conflict Monitoring Map and Pandemic Monitoring Map. The
library provides easy-to-use data connectors, broad functionality, clear
documentation and run time-efficiency.
Open-Source Software, Human-Centred Data Visualisation
CCS Concepts: Human-centered computing → Visualization toolkits; Information
systems → Spatial-temporal systems
## 1\. Introduction
For mitigating large-scale crises such as armed conflicts, pandemics and
natural disasters, incorporation of data in decision-making is becoming
indispensable (Beck et al., 2000; Sornette, 2006; Weidmann and Ward, 2010;
O’Brien, 2010; Falck et al., 2020). However, insights from large amounts of
data remain untapped if they are not detected and communicated by means of
intuitive, accurate and preferably interactive scientific visualisation
(Piburn et al., 2015; Kim et al., 2017). Particularly, the development of
interactive visualisation dashboards requires a broad skill set, ranging from
statistical, design and programming knowledge to domain expertise (Lam et al.,
2012). Academic environments, non-governmental and humanitarian aid
organisations often lack the required resources which hinders urgently needed
contributions. The demand for quick crisis responses stands in stark contrast
to time-consuming, expensive development stages. SeismographAPI is an actively
maintained, open-source library for the visualisation of temporal-spatial
crisis data that combines plug-and-play visualisations with versatile
functionality.
Figure 1. Technical overview of SeismographAPI
## 2\. Exemplary Use Cases
SeismographAPI is designed for data analysts to identify patterns in rapid
prototyping. Due to its run time and memory-efficiency, it can also be
deployed as a permanent visualisation tool for use by decision-makers. To
motivate and demonstrate SeismographAPI, we sketch out two practical use cases
that are inspired by real-world visualisation needs (Weidmann and Ward, 2010;
O’Brien, 2010; Hegre et al., 2013; Stephany et al., 2020; Dong et al., 2020).
#### Conflict Monitoring.
With the help of SeismographAPI, we visualise a large dataset comprising 20
years of conflict data on 141 countries, constructed from ACLED (Raleigh et
al., 2010) and UCDP GED (Sundberg and Melander, 2013) data. Per country and
month, our dataset features 60 socio-economic and political indicators, which
are all displayed in our Conflict Monitoring Map.
#### Pandemic Monitoring.
Our second demonstration case is the Pandemic Monitoring Map, a visualisation
of COVID-19 infection numbers. The data is borrowed from Johns Hopkins
University (Dong et al., 2020).
Figure 2. Conflict Monitoring Map and Pandemic Monitoring Map: two exemplary
use cases of the SeismographAPI
## 3\. Main Functionality
#### World Map (center).
The SVG Choropleth map represents the core part of SeismographAPI. It allows
visualising data at the country- and subcountry-level (political subdivisions)
based on the ISO-3166 and ISO-3166-2 norm. Additional information, such as
country-level infection numbers, can be easily displayed on click and hover as
exemplified in the Pandemic Monitoring Map.
#### Time Series Chart (bottom).
The time series chart not only visualises the temporal dimension but also
allows navigating it. For instance, the Conflict Monitoring Map features two time
lines, one showing the prediction and another showing the ground truth
conflict intensity. When hovering or clicking a point in time, all other
panels synchronise. With the help of the “play” controls, users can watch all
data panels as they change over time in a time-machine manner.
#### Auxiliary Information Panel (right).
At the top of the auxiliary information panel, our library provides a menu
for interactively customising the dashboard. Users can hide information
and panels, such as country names and the country list on the left-hand side,
zoom in, choose a night mode and open a “help” window. To simplify the
interface between analysis, report and decision-making, the library has built-
in functionality for screen recording. Due to tight integration with Chart.js,
any chart visualisation can be selected and displayed in the right-hand panel
based on data suitability and information needs. For instance, the Conflict
Monitoring Map displays the most important data features considered for
conflict prediction as a horizontal bar chart. The Pandemic Monitoring Map
relies on stacked line charts to map out infection numbers.
## 4\. Technical Background
#### Run time and Memory.
SeismographAPI builds upon two fast, open-source libraries, Chart.js and SVG
World Map JS. The time required for data loading is mainly determined by the
size of the central SVG world map: ~1.3 MB for ISO-3166-2 country-level and
~3.8 MB including all subdivision data. Depending on the chosen map, rendering
starts between 300 ms and 800 ms, document completion is done between 400 ms
and 2.6 s, and the full loading time varies from ~3 s to ~10 s. To optimise loading
and usability, SeismographAPI can also be initialised asynchronously using
JavaScript's async/await and promise resolution. After the first initialisation of the
map, this enables loading data chunks on demand, which improves smoothness.
This is demonstrated in the Conflict Monitoring Map, where all global conflict
data (~1.1 MB) is loaded at startup, but the large amount of detailed conflict
data (~80 KB per country, ~21 MB in total) is loaded asynchronously on request.
Thus, SeismographAPI is able to visualise more than $N=170{,}000$ data points
in the Conflict Monitoring Map in about 3 seconds, and nearly $N=400{,}000$ data
points in the Pandemic Monitoring Map in about 10 seconds.
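The asynchronous, on-demand loading described above can be sketched with promise memoisation (a minimal, hypothetical illustration in plain JavaScript; `fetchDetail` and `loadCountryDetail` are stand-ins, not the actual SeismographAPI internals):

```javascript
// Hypothetical sketch: global data is loaded once at startup, while
// per-country detail (~80 KB each) is fetched lazily and memoised.
const detailCache = new Map();

// Stand-in for a network request returning one country's detail data.
async function fetchDetail(countryCode) {
  return { country: countryCode, series: [] };
}

async function loadCountryDetail(countryCode) {
  if (!detailCache.has(countryCode)) {
    // Cache the promise itself so concurrent requests share a single fetch.
    detailCache.set(countryCode, fetchDetail(countryCode));
  }
  return detailCache.get(countryCode);
}
```

Caching the promise rather than the resolved value avoids duplicate requests when a country is clicked repeatedly before the first response arrives.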
#### Ease of Use.
With an intuitive interface and simple data connectors, SeismographAPI is
designed for ease of use in common visualisation tasks and workflows. Data can
be loaded directly via JSON, CSV or as an HTML table. We also offer a Pandas
extension to load Pandas DataFrames (as JSON) and Wikipedia tables. The
library features clear readme instructions and rich documentation.
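As an illustration of what such a data connector does, a minimal CSV-to-records parser could look like this (a hypothetical sketch, not the library's actual connector API):

```javascript
// Turn simple "header\nrow\nrow" CSV text into an array of row objects,
// e.g. for feeding a Chart.js dataset. Quoting/escaping is not handled.
function csvToSeries(csvText) {
  const [header, ...rows] = csvText.trim().split('\n');
  const keys = header.split(',');
  return rows.map((row) => {
    const cells = row.split(',');
    return Object.fromEntries(keys.map((key, i) => [key, cells[i]]));
  });
}
```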
## 5\. Conclusion
We presented SeismographAPI, an open-source library aimed at reducing resource
constraints and easing swift data visualisation, thereby improving data-driven
decision-making for humanitarian purposes. Future versions will include more
data connectors, default charts, more detailed deployment guidelines and
options for switching between different datasets within one map.
## References
* Beck et al. (2000) Nathaniel Beck, Gary King, and Langche Zeng. 2000\. Improving Quantitative Studies of International Conflict: A Conjecture. _American Political Science Review_ 94, 1 (2000), 21–35. https://doi.org/10.1017/S0003055400220078 Publisher: Cambridge University Press.
* Dong et al. (2020) Ensheng Dong, Hongru Du, and Lauren Gardner. 2020. An interactive web-based dashboard to track COVID-19 in real time. _Lancet Infectious Diseases_ (2020). https://doi.org/10.1016/S1473-3099(20)30120-1
* Falck et al. (2020) Fabian Falck, Julian Marstaller, Niklas Stoehr, Sören Maucher, Jeana Ren, Andreas Thalhammer, Achim Rettinger, and Rudi Studer. 2020\. Measuring Proximity Between Newspapers and Political Parties: The Sentiment Political Compass. _Policy & Internet_ 12, 3 (Sept. 2020), 367–399. https://doi.org/10.1002/poi3.222
* Hegre et al. (2013) Håvard Hegre, Joakim Karlsen, Håvard Mokleiv Nygård, Håvard Strand, and Henrik Urdal. 2013\. Predicting Armed Conflict, 2010–2050. _International Studies Quarterly_ 57, 2 (2013), 250–270. http://www.jstor.org/stable/24016137 Publisher: Wiley.
* Kim et al. (2017) Yea-Seul Kim, Katharina Reinecke, and Jessica Hullman. 2017\. Explaining the Gap: Visualizing One’s Predictions Improves Recall and Comprehension of Data. In _Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems_. ACM, Denver Colorado USA, 1375–1386. https://doi.org/10.1145/3025453.3025592
* Lam et al. (2012) Heidi Lam, Enrico Bertini, Petra Isenberg, Catherine Plaisant, and Sheelagh Carpendale. 2012\. Empirical Studies in Information Visualization: Seven Scenarios. _IEEE Transactions on Visualization and Computer Graphics_ 18, 9 (Sept. 2012), 1520–1536. https://doi.org/10.1109/TVCG.2011.279
* O’Brien (2010) Sean P. O’Brien. 2010\. Crisis Early Warning and Decision Support: Contemporary Approaches and Thoughts on Future Research. _International Studies Review_ 12, 1 (March 2010), 87–104. https://doi.org/10.1111/j.1468-2486.2009.00914.x
* Piburn et al. (2015) Jesse Piburn, Robert Steward, Aaron Myers, and Alexandre Sorokine. 2015. World Spatiotemporal Analytics and Mapping Project (WSTAMP): Discovering, Exploring, and Mapping Spatiotemporal Patterns across the World’s Largest Open Source Data Sets. In _ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences_ , Vol. II-4/W2. 95–102. https://doi.org/10.5194/isprsannals-II-4-W2-95-2015
* Raleigh et al. (2010) Clionadh Raleigh, Andrew Linke, Håvard Hegre, and Joakim Karlsen. 2010. Introducing ACLED-Armed Conflict Location and Event Data. _Journal of Peace Research_ 47, 5 (2010), 651–660. https://journals.sagepub.com/doi/10.1177/0022343310378914
* Sornette (2006) Didier Sornette. 2006\. Endogenous versus Exogenous Origins of Crises. In _Extreme Events in Nature and Society, The Frontiers Collection_. Center for Frontier Sciences. https://doi.org/10.1007/3-540-28611-X_5
* Stephany et al. (2020) Fabian Stephany, Niklas Stoehr, Philipp Darius, Leonie Neuhauser, Ole Teutloff, and Fabian Braesemann. 2020. The CoRisk-Index: A data-mining approach to identify industry-specific risk assessments related to COVID-19 in real-time. _arXiv_ 2003.12432 (2020). https://arxiv.org/abs/2003.12432
* Sundberg and Melander (2013) Ralph Sundberg and Erik Melander. 2013. Introducing the UCDP Georeferenced Event Dataset. _Journal of Peace Research_ 50, 4 (July 2013), 523–532. https://doi.org/10.1177/0022343313484347
* Weidmann and Ward (2010) Nils Weidmann and Michael Ward. 2010. Predicting Conflict in Space and Time. _Journal of Conflict Resolution_ 54, 6 (July 2010), 883–901. https://doi.org/10.1177/0022002710371669 Publisher: SAGE Publications Inc.
## Appendix A Appendix
### A.1. Links
SeismographAPI Github
https://github.com/conflict-AI/seismographAPI
Conflict Monitoring Map
https://conflict-ai.github.io/seismographAPI/conflict-map.html
Pandemic Monitoring Map
https://conflict-ai.github.io/seismographAPI/covid-map.html
# Dynamical control of nuclear isomer depletion via electron vortex beams
Yuanbin Wu Max-Planck-Institut für Kernphysik, Saupfercheckweg 1, D-69117
Heidelberg, Germany Simone Gargiulo Institute of Physics, Laboratory for
Ultrafast Microscopy and Electron Scattering, École Polytechnique Fédérale de
Lausanne, Station 6, Lausanne 1015, Switzerland Fabrizio Carbone Institute
of Physics, Laboratory for Ultrafast Microscopy and Electron Scattering, École
Polytechnique Fédérale de Lausanne, Station 6, Lausanne 1015, Switzerland
Christoph H. Keitel Max-Planck-Institut für Kernphysik, Saupfercheckweg 1,
D-69117 Heidelberg, Germany Adriana Pálffy Max-Planck-Institut für
Kernphysik, Saupfercheckweg 1, D-69117 Heidelberg, Germany Department of
Physics, Friedrich-Alexander-Universität Erlangen-Nürnberg, D-91058 Erlangen,
Germany
###### Abstract
Long-lived excited states of atomic nuclei can act as energy traps. These
states, known as nuclear isomers, can store a large amount of energy over long
periods of time, with a very high energy-to-mass ratio. Under natural
conditions, the trapped energy is only slowly released, limited by the long
isomer lifetimes. Dynamical external control of nuclear state population has
proven so far very challenging, despite ground-breaking incentives for a clean
and efficient energy storage solution. Here, we describe a protocol to achieve
the external control of the isomeric nuclear decay by using electrons whose
wavefunction has been especially designed and reshaped on demand.
Recombination of these electrons into the atomic shell around the isomer can
lead to the controlled release of the stored nuclear energy. Using the example of
93mMo, we show that the use of tailored electron vortex beams increases the
depletion by four orders of magnitude compared to the spontaneous nuclear
decay of the isomer. Furthermore, specific orbitals can sustain an enhancement
of the recombination cross section for vortex electron beams by as much as six
orders of magnitude, providing a handle for manipulating the capture
mechanism. These findings open new prospects for controlling the interplay
between atomic and nuclear degrees of freedom, with potential energy-related
and high-energy radiation sources applications.
Nuclear isomers are metastable, long-lived excited states of atomic nuclei.
Their direct decay to lower-lying levels is strongly suppressed, typically due
to large differences in either spin, nuclear shape or spin projection on the
nuclear symmetry axis Walker and Dracoulis (1999); Walker and Podolyák (2020).
In some nuclei with an advantageous configuration of the nuclear excited
states, an excitation to a level above the isomeric state (termed gateway
state) can lead to the nuclear decay directly to a level below the isomer
itself, thus reaching the ground state in a fast cascade.
Such a process is called isomer depletion, since it allows for the
depopulation of the isomeric state and thus a controlled release of the energy
stored in the metastable nucleus. A typical example is the case of the 2425
keV 93mMo isomer with a halflife of 6.8 h, for which we present the relevant
partial level scheme in Fig. 1. A 4.85 keV excitation from the isomer to the
gateway state at 2430 keV should release the entire stored energy within only
4 ns. This appealing example has been often mentioned in the context of
potential nuclear energy storage solutions without involving fission or fusion
Walker and Dracoulis (1999); Gunst et al. (2014); Pálffy et al. (2007a);
Chiara et al. (2018).
One of the most intriguing means to externally drive the transition from the
isomer to the gateway state is via coupling to the atomic shell. In the
process of nuclear excitation by electron capture (NEEC), an electron
recombining into an atomic vacancy of an ion transfers resonantly its energy
to the nucleus. The sum of the free electron energy and capture orbital
binding energy must thereby match, within the uncertainty relations, the
nuclear transition energy. This process, originally predicted in 1976
Goldanskii and Namiot (1976), attracted a number of theoretical studies Cue et
al. (1989); Yuan and Kimball (1993); Harston and Chemin (1999); Gosselin and
Morel (2004); Pálffy et al. (2006) prior to the first claim of experimental
observation in 2018 Chiara et al. (2018). Interestingly, the NEEC experiment
was investigating exactly the isomer depletion transition in 93Mo. As
theoretical works contradict the experimental results Wu et al. (2019a);
Rzadkiewicz et al. (2021), the subject is at present a matter of vivid debate
Guo et al. (2021); Chiara et al. (2021). Controversy aside, the overall
consensus is that due to the small nuclear transition energy to the gateway
state of 93mMo, NEEC should be stronger than photoexcitation.
So far, the NEEC process has been considered for the case of plane-wave
electrons captured by ions which are initially in their electronic ground
state. However, a few recent works have suggested that the NEEC cross section can be
influenced by the ion’s out-of-equilibrium conditions Wu et al. (2019b);
Gargiulo et al. (2021) or by a different shape of the electronic wave function
Madan et al. (2020). Here, we take an important step to investigate the
process of NEEC considering specially designed electron beams, which are
tailored to enhance the nuclear excitation. Our results show that capturing an
electron with a properly reshaped wavefunction can lead to an increase of the
NEEC cross section by a few orders of magnitude, depending on the specific
situation considered.
In recent years, the capability to fabricate phase masks with nanometer
precision has made it possible to control the coherent superposition of matter
waves, producing characteristic interference patterns by spatially reshaping a
particle’s wavefunction Uchida and Tonomura (2010); Verbeeck et al. (2010);
McMorran et al. (2011); Clark et al. (2015); Luski et al. (2021). Particularly
interesting is the case of so-called vortex beams, which consist of a stream
of particles whose wavefunction spatial profile has been modulated to become
chiral and carry an orbital angular momentum.
Optical vortices have been studied in the context of quantum communications,
nano-plasmonics and optical trapping Shen et al. (2019); Bliokh and Nori
(2015), while imparting chirality to massive composed particles has been
proposed as a method to study Lloyd et al. (2017); Bliokh et al. (2017);
Vanacore et al. (2019); Zhao et al. (2021) and even manipulate Larocque et al.
(2018); Clark et al. (2015); Kaminer et al. (2015); Madan et al. (2020) the
inner structure of neutrons, protons, ions and molecules. Electron vortex
beams carry both orbital angular momentum about their beam axis and the
electron’s intrinsic spin momentum. Experimentally, they are produced by a
number of techniques such as phase-plates, holographic gratings, magnetic
monopole fields or chiral plasmonic near fields Lloyd et al. (2017); Bliokh et
al. (2017); Uchida and Tonomura (2010); Verbeeck et al. (2010); McMorran et
al. (2011); Vanacore et al. (2019), with angular momenta of up to $1000$
$\hbar$ already demonstrated. The angular momentum aspect is particularly
important for nuclear transitions, which in the low-energy region mostly
display a dipole-forbidden character. The transition multipolarity, for
instance, electric quadrupole ($E2$) or magnetic dipole ($M1$), together with
the recombination orbital, impose strict selection rules on which angular
momentum components of the incoming electron beam will undergo NEEC. While
plane wave electron beams have a fixed partial wave expansion in all
multipoles, vortex beams can be shaped on purpose to enhance and control the
NEEC outcome.
Figure 1: NEEC and isomer depletion with an electron vortex beam (a) A plane-
wave electron beam incident on a forked mask generates the electron vortex beam.
Upon hitting on an ion beam with impact parameter $\mathbf{b}$, the electrons
recombine into atomic vacancies. (b) At the resonant continuum electron
energy, electron recombination (orange atomic shell levels on the left) will
be accompanied by nuclear excitation (magenta nuclear states on the right) in
the process of NEEC. (c) Partial level scheme of 93Mo. The nuclear isomeric
($I$), gateway ($GW$), intermediate ($F$) and ground state ($GS$) levels are
labeled by their spin, parity and energy in keV. The transitions
$I\rightarrow GW$ and $GW\rightarrow F$ are both of $E2$ type. Energy
intervals are not to scale.
A possible experimental implementation of this idea is depicted in Fig. 1(a).
A plane wave electron beam is incident on a phase mask which reshapes the
wavefunction generating an electron vortex beam. We illustrate here a so-
called forked mask as an example. The vortex beam is incident on ions with
atomic vacancies that facilitate the NEEC process. The electron energy is
chosen so as to resonantly match the nuclear transition energy upon
recombination into a chosen orbital as shown in Fig. 1(b). As examples we
consider the canonical case of 93Mo, whose partial level scheme is depicted in
Fig. 1(c). The NEEC transition between the isomer and gateway states has 4.85
keV and $E2$ multipolarity. A second example envisaging a 19.70 keV $M1$
transition from the 152mEu isomer at 45.60 keV to a gateway state will
also be considered. These examples are generic, and were chosen to demonstrate
the effect on the two most frequently occurring nuclear transition
multipolarities ($E2$ and $M1$) in the energy range relevant for NEEC. For a
plane-wave electron beam, the maximal NEEC cross section for depletion of
93mMo occurs for recombination into the $2p_{3/2}$ orbital of a Mo36+ ion Wu
et al. (2018); Gunst et al. (2018). This charge state is sufficient for
providing the maximum number of vacancies in the $2p_{3/2}$ orbital. On the
other hand, it ensures that the NEEC channel is allowed, with a resonance
continuum electron energy of only approx. 52 eV. A higher charge state would
close the NEEC channel due to the slight increase of electronic binding
energies.
We consider a vortex beam with the longitudinal linear momentum $p_{z}$, the
modulus of the transverse momentum $|{\bf{p}}_{\bot}|=\zeta$, and the
topological vortex charge, a quantity related to the electron orbital angular
momentum, denoted by $m$ Bliokh et al. (2017, 2011). The corresponding
electron wave function can be written as
$\psi_{s}({\bf{r}})=\int\frac{d^{2}{\bf{p}}_{\bot}}{(2\pi)^{2}}a_{\zeta
m}({\bf{p}}_{\bot})u_{{\bf{p}}s}e^{i{\bf{p}}\cdot{\bf{r}}},$ (1)
where $a_{\zeta
m}({\bf{p}}_{\bot})=(-i)^{m}e^{im\alpha_{p}}\delta(|{\bf{p}}_{\bot}|-\zeta)/\zeta$
and $u_{{\bf{p}}s}$ is the electron bispinor which corresponds to the plane-
wave solution with momentum $\bf{p}$ and the spin state $s$. The linear
momenta of the plane-wave components are given by
${\bf{p}}=({\bf{p}}_{\bot},p_{z})=(\zeta\cos{\alpha_{p}},\zeta\sin{\alpha_{p}},p_{z})$,
as sketched in Fig. 1. We choose the $Oz$ axis parallel to the incident
electron beam. To specify the lateral position of the ion with regard to the
central axis of the incident electron beam, the impact parameter ${\bf{b}}$ is
introduced Bliokh et al. (2017); Serbo et al. (2015). The advantage of the
vortex beam comes into play when restricting the impact parameter Bliokh et
al. (2017); Serbo et al. (2015). Otherwise, an average over arbitrary impact
parameters in the entire beam range will limit the enhancement factor for the
NEEC rate to a factor $p/p_{z}$. We therefore restrict the impact parameter
region to $|{\bf{b}}|\leqslant b$, with $b$ chosen accordingly as a function
of the incoming electron momentum. The incident electron current is averaged
over the impact parameter region.
In order to calculate the NEEC cross sections, the vortex beam is mapped onto
the partial wave expansion of the continuum electron wave function. The
resulting NEEC rate $Y_{neec}^{i\rightarrow g}$ can be written as a function
of the reduced transition probability for the nuclear transition, electronic
radial wave function integrals, and the vortex beam parameters $m$, $\zeta$
and $\alpha_{p}$ (see Methods). The total NEEC cross section can be written as
a function of the continuum electron energy $E$,
$\sigma_{neec}^{i\rightarrow
g}(E)=\frac{4\pi^{2}}{pJ_{z}}Y_{neec}^{i\rightarrow g}\mathcal{L}(E-E_{0}),$
(2)
where $p$ is the modulus of the continuum electron momentum, $J_{z}$ is the
total incident current which can be calculated via Ref. Bliokh et al. (2011),
and $\mathcal{L}(E-E_{0})$ a Lorentz profile centered on the resonance energy
$E_{0}$ and with a full width at half maximum given by the width of the nuclear
excited state. Typically, the nuclear widths are very narrow (for example,
$\Gamma_{g}=10^{-7}$ eV for the case of 93mMo), such that
$\mathcal{L}(E-E_{0})$ is approximated with a Dirac delta-like profile.
Integrating over the continuum electron energy, we obtain the so-called
resonance strength $S_{v}$. We compare this value with the resonance strength
$S_{p}$ obtained for the case of a plane wave electron beam.
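Because the nuclear width is so narrow (e.g. $\Gamma_{g}=10^{-7}$ eV for 93mMo), the unit-normalised Lorentz profile in Eq. (2) acts as a delta function under the energy integral, and the resonance strength reduces to the prefactor times the NEEC rate. A quick numerical check of this approximation (a sketch with illustrative numbers, not the authors' code):

```python
import math

def lorentzian_integral(gamma, e0, a, b):
    """Integral of the unit-normalised Lorentz profile
    L(E) = (gamma / (2*pi)) / ((E - e0)**2 + gamma**2 / 4)
    between energies a and b, via its analytic antiderivative."""
    def F(e):
        return math.atan(2.0 * (e - e0) / gamma) / math.pi
    return F(b) - F(a)

# Half of the profile lies within +/- gamma/2 of the resonance (the FWHM).
half = lorentzian_integral(1e-7, 52.0, 52.0 - 0.5e-7, 52.0 + 0.5e-7)

# For Gamma = 1e-7 eV, a +/- 1 eV window around the ~52 eV resonance
# already captures the whole profile to about one part in 10^7.
weight = lorentzian_integral(1e-7, 52.0, 51.0, 53.0)
```

Integrating Eq. (2) over $E$ therefore gives $S_{v}=4\pi^{2}\,Y_{neec}^{i\rightarrow g}/(pJ_{z})$ to excellent accuracy.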
We focus our attention first on the case of 93mMo. We consider NEEC into the
ground-state configuration of the Mo36+ ion, with electron recombination into
orbitals ranging from $2p_{3/2}$ to
$4f_{7/2}$. The continuum electron resonance energy for recombination into
$2p_{3/2}$ is $52$ eV, while for the higher shell orbitals the values lie
between $2.7$ keV and $2.9$ keV for the $M$ shell and between $3.6$ keV and
$3.8$ keV for the $N$ shell. The ratio $S_{v}/S_{p}$ as a function of the
capture orbital for three values of topological charge $m=3,\,4,\,5$ is
presented in Fig. 2(a). The vortex beam parameters are chosen such that
$\zeta=p_{z}$ for the impact parameter range $b=1/\zeta$. Figure 2(a) shows
that, depending on the recombination orbital, the tailored vortex electron
beam leads to an enhancement between two ($p$ orbitals) and six orders of
magnitude ($f$ orbitals) in the NEEC resonance strength. Although the
enhancement for the capture into $M$\- and $N$-shell orbitals is impressive,
these are not the capture orbitals with the largest cross section.
Provided that atomic vacancies are available, NEEC into the $2p_{3/2}$ orbital is the
most efficient isomer depletion channel. For an incident vortex beam, the
resonance strength for NEEC into this orbital is increased by two orders of
magnitude as compared to the plane wave electron beams so far considered in
the literature. This is demonstrated in Fig. 2(b) which shows the vortex beam
resonance strength scaled by the maximum value reached for a plane wave setup.
In the vortex beam setup, also NEEC into the $3d$ or $4d$ and $4f$ orbitals
exceeds the plane wave value for recombination into $2p_{3/2}$, however only
by one order of magnitude. Still, this might become advantageous to ease the
charge state requirements, or when the continuum electron energy cannot be
decreased to very small energies.
Angular momentum conservation in the NEEC process imposes selection rules for
the continuum electron partial wave (see Methods) as a function of
recombination orbital and nuclear transition multipolarity. These selection
rules reflect also upon and determine the most efficient vortex charge $m$ for
a particular NEEC process. For instance a vortex beam with $m>5$ would further
increase NEEC into $d$ and $f$ orbitals. However, increasing $m$ at values
above $m=5$ has less further enhancement effect on the NEEC resonance strength
for the $2p_{3/2}$ orbitals. Depending on the envisaged electron beam energy
(and therefore capture orbital), the proper choice of vortex beam topological
charge $m$ can maximize the NEEC resonance strength. The new aspect here,
specifically related to vortex beams, is that $m$ acts as a new degree of
freedom and can be dynamically controlled on an ultrafast timescale, as
detailed below.
Figure 2: NEEC integrated cross section enhancement for the $4.85$ keV nuclear
transition depleting 93mMo. (a) The enhancement ratio
$S_{v}(nl_{j})/S_{p}(nl_{j})$ comparing vortex and plane wave electron beams
for recombination orbitals in the range $2p_{3/2}$ to $4f_{7/2}$. (b) The
ratio $S_{v}(nl_{j})/S_{p}(2p_{3/2})$ of vortex beam versus maximal plane wave
NEEC resonance strengths corresponding to recombination into the $2p_{3/2}$
orbital (left-hand axis, grey dashed curve with circle), and the absolute
values of $S_{v}(nl_{j})$ (right-hand axis, vertical colored bars). We
consider three values of the topological charge $m=3,\,4,\,5$ (a) or just
$m=5$ (b), with $\zeta=p_{z}$ and impact parameter range $\zeta b=1$. The
resonant electron energy $E_{0}$ is presented in color coding.
We now turn to a different example which investigates NEEC for a $M1$ nuclear
transition in 152Eu. This isotope has an isomer with 9.3 h halflife lying
45.60 keV above the ground state. The envisaged gateway state lies at 65.30
keV and is connected by an $M1$ transition to the isomer. Once the gateway
state is reached, the nucleus will decay within approx. 1 $\mu$s with a
branching ratio of 0.42 to the ground state. For this case, we consider NEEC
occurring into a bare Eu ion. Table 1 displays the plane wave and vortex
electron beam NEEC resonance strengths for the cases of $m=3$ and $m=5$,
assuming $\zeta=p_{z}$ and $\zeta b=1$.
The enhancements compared to the equivalent plane wave case are less dramatic,
with factors between 1.4 and approx. 600. The lowest factor of 1.4 occurs in
the case of NEEC into the $2s_{1/2}$ orbital and stems mainly from the factor
$p/p_{z}$. However, the striking feature in the case of 152Eu is the ability
to change the most efficient capture orbital. For an $M1$ transition, the
strongest NEEC resonance strength for a plane wave electron beam occurs for
the recombination into the lowest available $s$ orbital. For the specific case
of 152Eu, with its nuclear transition and electronic binding energies, this
would be the $2s$ orbital. Surprisingly, the tailored vortex beam changes this
rule of thumb, as the strongest NEEC occurs for the $2p_{1/2}$ orbital (for
$m=3$) or for the $2p_{3/2}$ orbital ($m=5$). Thus, by manipulating the
wavefunction of the incident electronic beam, it is possible not only to
enhance rates but also to shift the maximum effect between orbitals.
In view of the many methods developed to produce specific atomic vacancies
Rudek et al. (2012); Steck and Litvinov (2020), this result can have important
consequences for our ability to manipulate the nuclear excitation. Vortex beam
angular momentum, electron energy and atomic vacancies can be dynamically and
simultaneously controlled to optimize isomer depletion. In fact, the
topological charge of the vortex beam impinging on the isomers, i.e., the
value of $m$, can be switched dynamically on an ultrafast timescale by
modulating the properties of plasmonic Vanacore et al. (2019); Kim et al.
(2010); Wang et al. (2019) and light phase masks Lembessis et al. (2014);
Lembessis (2017). Also when using physical phase plates such as the forked
mask in Fig. 1, deflector coils or apertures can select the desired vortex
topological charge Pohl et al. (2017). With such dynamical control to optimize
isomer depletion, clear experimental signals can be targeted, aiming at
efficient nuclear energy release from isomers.
$nl_{j}$ | $E_{0}$ [keV] | $S_{p}$ [b eV] | $S_{v}$ [b eV] ($m=3$) | $S_{v}$ [b eV] ($m=5$)
---|---|---|---|---
$2s_{1/2}$ | $5.20$ | $8.05\times 10^{-4}$ | $1.14\times 10^{-3}$ | $1.14\times 10^{-3}$
$2p_{1/2}$ | $5.19$ | $7.85\times 10^{-5}$ | $1.35\times 10^{-3}$ | $3.34\times 10^{-3}$
$2p_{3/2}$ | $6.02$ | $1.25\times 10^{-5}$ | $4.21\times 10^{-4}$ | $7.61\times 10^{-3}$
Table 1: NEEC resonance strength for isomer depletion of 152mEu for both plane
wave $S_{p}$ and vortex $S_{v}$ electron beams. We assume $\zeta=p_{z}$ and
$\zeta b=1$ and consider two values of the topological charge $m=3,\,5$.
Let us now finally turn to the magnitude of isomer depletion for the 93mMo
isomer. The isomers can be obtained in nuclear reactions such as
93Nb(p,n)93mMo Gunst et al. (2014) or 7Li$(^{90}$Zr, p3n)93Mo Chiara et al.
(2018). Since the resonance condition for electron recombination needs to be
fulfilled in the rest frame of the nucleus, the ion preparation is as
important as the vortex electron beam generation. The required ion
charge-state breeding, storage and cooling require, for instance, a storage
ring or an electron beam ion trap in conjunction with a radioactive beam facility.
Isomeric beams have been successfully produced and stored at facilities such
as the GSI Darmstadt Litvinov et al. (2013); Grieser et al. (2012); Dickel et
al. (2015). At a storage ring the condition $\zeta=p_{z}$ could be easily
fulfilled by exploiting the Lorentz boost of the ions. A dedicated electron
vortex beam setup needs to be designed in order to fulfill all experimental
requirements for isomer production, resonance condition match and dynamical
control of vortex beam properties.
Considering the most efficient capture orbital $2p_{3/2}$ and topological
charge $m=5$, the NEEC resonance strength reaches the value $\sim 1$ b eV. In
order to obtain a reaction rate per ion, we multiply this value by the vortex
beam flux. We assume here the generic flux of $10^{24}$ cm$^{-2}$ s$^{-1}$ eV$^{-1}$ Béché et
al. (2017); Reimer and Kohl (2008). Variations around this figure depend on
the exact continuum electron energy required by the resonance condition.
Electron energies below 1 keV will diminish the electron density, such that
additional compression would be required, whereas much larger energies can
even enhance the flux we are considering. The NEEC reaction rate per ion
reaches the value of approx. 1 s-1. Compared to the natural decay of the
isomer (halflife 6.8 h), this represents an enhancement of approx. 4 orders of
magnitude for the isomer depletion rate.
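The closing estimate can be reproduced in a few lines (a sketch of the arithmetic only; the ~1 b eV resonance strength and the generic $10^{24}$ cm$^{-2}$ s$^{-1}$ eV$^{-1}$ flux are the values quoted above):

```python
import math

BARN_TO_CM2 = 1e-24                       # 1 barn = 1e-24 cm^2
resonance_strength = 1.0 * BARN_TO_CM2    # ~1 b eV -> cm^2 eV (2p_{3/2}, m = 5)
flux = 1e24                               # vortex beam flux, cm^-2 s^-1 eV^-1

# Reaction rate per ion: resonance strength times spectral flux.
neec_rate = resonance_strength * flux          # ~1 s^-1

# Spontaneous decay constant of the 6.8 h isomer.
natural_rate = math.log(2.0) / (6.8 * 3600.0)  # ~2.8e-5 s^-1

enhancement = neec_rate / natural_rate         # ~3.5e4, i.e. approx. 4 orders
```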
Isomer depletion is a very desirable goal in view of the current search for
energy storage solutions Koningstein and Fork (2014); Prelas et al. (2016).
However, the potential of dynamically controlled vortex beams extends farther
than that. We anticipate new opportunities in nuclear physics, where
projectile beams, starting for instance from protons, neutrons or muons with
reshaped wave fronts Luski et al. (2021); Zhao et al. (2021) would enhance and
dynamically control nuclear reactions. The beam angular momentum is ideal to
specifically select reaction channels according to the final-state spin. This
would enable for instance the targeted production of isotopes or isomers for
medical applications Habs and Köster (2011); Pan et al. (2021) or the search
for dark matter Pospelov et al. (2020). Thus, nuclear physics and engineering
will benefit from the new opportunities raised by vortex beams with intense
flux and dynamical control of beam parameters. In addition, the experimental
methods described above, combining controlled atomic beams (be they electrons
or other particles) with tailored external handles, will offer a unique
perspective for the interplay between the nucleus and its surrounding
electronic shells, with potential also for chemistry and molecular physics
applications.
## I Methods
In order to derive the NEEC rate for vortex electron beams, we relate to the
plane wave results in Refs. Pálffy et al. (2006); Pálffy et al. (2007b); Gunst
et al. (2015) and expand the continuum electronic wave function into partial
waves of definite angular momentum. To specify the lateral position of the ion
with regard to the central axis of the incident electron beam, the impact
parameter ${\bf{b}}$ is introduced Bliokh et al. (2017); Serbo et al. (2015).
The NEEC rate can be written as
$Y_{neec}^{i\rightarrow g}=\int\mathcal{Y}_{neec}^{i\rightarrow
g}({\bf{p}},{\bf{k}})a_{\zeta m}({\bf{p}}_{\bot})a^{*}_{\zeta
m}({\bf{k}}_{\bot})e^{i({\bf{k}}_{\bot}-{\bf{p}}_{\bot}){\bf{b}}}\frac{d^{2}{\bf{p}}_{\bot}}{(2\pi)^{2}}\frac{d^{2}{\bf{k}}_{\bot}}{(2\pi)^{2}}d^{2}{\bf{b}},$
(3)
where $\mathcal{Y}_{neec}^{i\rightarrow g}({\bf{p}},{\bf{k}})$ is the squared
transition amplitude for incoming momenta ${\bf{p}}$ and ${\bf{k}}$. We
restrict the impact parameter region to $|{\bf{b}}|\leqslant b$. The NEEC rate
then takes the form
$Y_{neec}^{i\rightarrow g}=\frac{b^{2}}{4\pi}\int_{0}^{2\pi}\frac{d\alpha_{p}}{2\pi}\int_{0}^{2\pi}\frac{d\alpha_{k}}{2\pi}\,e^{im(\alpha_{p}-\alpha_{k})}\,\mathcal{Y}_{neec}^{i\rightarrow g}({\bf{p}},{\bf{k}})\,{}_{0}F_{1}(2;u)/\Gamma(2),$ (4)
with the condition $|{\bf{p}}_{\bot}|=|{\bf{k}}_{\bot}|=\zeta$, and the two
polar angles $\alpha_{p}$ and $\alpha_{k}$ spanning the interval $[0,2\pi)$.
The notation ${}_{0}F_{1}$ stands for the confluent hypergeometric limit
function, $u=-b^{2}\zeta^{2}\left[1-\cos{(\alpha_{k}-\alpha_{p})}\right]/2$,
and $\Gamma(2)$ is the Gamma function.
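The $_{0}F_{1}$ kernel of Eq. (4) is straightforward to evaluate numerically. The sketch below is illustrative code of our own (the names `hyp0f1` and `impact_kernel` are not from the original work), implementing the confluent hypergeometric limit function by its power series:

```python
import math

def hyp0f1(b, z, terms=80):
    """Confluent hypergeometric limit function 0F1(b; z), summed as the
    power series 0F1(b; z) = sum_n z^n / ((b)_n n!)."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= z / ((b + n) * (n + 1))
    return total

def impact_kernel(b_max, zeta, alpha_p, alpha_k):
    """Kernel 0F1(2; u)/Gamma(2) of Eq. (4), with the argument
    u = -b^2 zeta^2 [1 - cos(alpha_k - alpha_p)] / 2."""
    u = -b_max**2 * zeta**2 * (1.0 - math.cos(alpha_k - alpha_p)) / 2.0
    return hyp0f1(2.0, u) / math.gamma(2.0)

# When alpha_p = alpha_k the argument u vanishes and the kernel reduces to 1:
print(impact_kernel(1.0, 1.0, 0.3, 0.3))   # -> 1.0
```

As a cross-check, the series reproduces the Bessel-function identity ${}_{0}F_{1}(2;-x^{2}/4)=(2/x)J_{1}(x)$, e.g. ${}_{0}F_{1}(2;-1)=J_{1}(2)\approx 0.5767$.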
The remaining factor $\mathcal{Y}_{neec}^{i\rightarrow g}({\bf{p}},{\bf{k}})$
can be related to the plane-wave NEEC amplitude calculated in Refs. Pálffy et
al. (2006); Pálffy et al. (2007b)
$\displaystyle\mathcal{Y}_{neec}^{i\rightarrow g}({\bf{p}},{\bf{k}})$
$\displaystyle=$
$\displaystyle\frac{2\pi(4\pi)(2J_{g}+1)\rho_{i}}{2(2I_{i}+1)(2J_{i}+1)(2j_{g}+1)}$
$\displaystyle\times\sum_{M_{i}s}\sum_{M_{g}m_{g}}\langle
I_{g}M_{g},n_{g}\kappa_{g}m_{g}|H_{N}|I_{i}M_{i},{\bf{p}}s\rangle\langle
I_{g}M_{g},n_{g}\kappa_{g}m_{g}|H_{N}|I_{i}M_{i},{\bf{k}}s\rangle^{\dagger},$
where $H_{N}$ is the electron-nucleus interaction Hamiltonian, $J_{i}$ is the
total angular momentum of the initial electronic configuration of the ion,
$J_{g}$ the total angular momentum of the final electronic configuration of
the ion after NEEC, and $\rho_{i}$ the initial density of continuum electron
states, respectively. The nuclear initial state (final state after NEEC) is
determined by the total angular momentum $I_{i}$ ($I_{g}$) and its projection
$M_{i}$ ($M_{g}$). The bound electron in the capture orbital is determined by
the principal quantum number $n_{g}$, the Dirac angular momentum quantum
number $\kappa_{g}$, and projection $m_{g}$ of the angular momentum.
Furthermore, $j_{g}$ is the total angular momentum of the bound electron in
the capture orbital. The calculation of the electron matrix elements requires
the continuum electron states with definite asymptotic momentum ${\bf{p}}$ (or
${\bf{k}}$) and spin projection $s$ to be expanded in terms of partial waves
$|\varepsilon\kappa m_{j}\rangle$ Pálffy et al. (2006); Pálffy et al. (2007b),
where $\varepsilon$ is the kinetic energy, $\kappa$ is the Dirac angular
momentum quantum number, and $m_{j}$ is the projection of the total angular
momentum $j$. The contribution of each partial wave is given by Pálffy et al.
(2006); Pálffy et al. (2007b)
$\displaystyle\langle
I_{g}M_{g},n_{g}\kappa_{g}m_{g}|H_{N}|I_{i}M_{i},\varepsilon\kappa
m_{j}\rangle$ (6) $\displaystyle=$
$\displaystyle\frac{1}{R_{0}^{L+2}}\sum_{M}(-1)^{I_{g}+M_{i}+L+M+m_{j}+3j_{g}}\left[\frac{4\pi(2j_{g}+1)}{(2L+1)^{3}}\right]^{1/2}\langle
I_{g}||\mathcal{Q}_{L}||I_{i}\rangle$
$\displaystyle\times~{}C(I_{i}~{}I_{g}~{}L;-M_{i}~{}M_{g}~{}M)~{}C(j~{}J_{g}~{}L;-m_{j}~{}m_{g}~{}-M)~{}C(j_{g}~{}L~{}j;\frac{1}{2}~{}0~{}\frac{1}{2})R^{(E)}_{L,\kappa_{g},\kappa},$
for transitions of electric multipolarity $L$, and
$\displaystyle\langle
I_{g}M_{g},n_{g}\kappa_{g}m_{g}|H_{N}|I_{i}M_{i},\varepsilon\kappa
m_{j}\rangle$ (9) $\displaystyle=$
$\displaystyle\sum_{M}(-1)^{I_{i}-M_{i}+M+j-L-1/2}\left[\frac{4\pi(2j+1)}{L^{2}(2L+1)^{2}}\right]^{1/2}\langle
I_{g}||\mathcal{M}_{L}||I_{i}\rangle(\kappa+\kappa_{g})$
$\displaystyle\times~{}C(j~{}L~{}j_{g};m_{j}~{}-M~{}m_{g})~{}C(I_{g}~{}I_{i}~{}L;M_{g}~{}-M_{i}~{}M)\left(\begin{array}[]{ccc}j_{g}&j&L\\\
\frac{1}{2}&-\frac{1}{2}&0\end{array}\right)R^{(M)}_{L,\kappa_{g},\kappa},$
for transitions of magnetic multipolarity $L$. Here $\langle
I_{g}||\mathcal{Q}_{L}||I_{i}\rangle$ and $\langle
I_{g}||\mathcal{M}_{L}||I_{i}\rangle$ are the reduced matrix elements of the
electric and magnetic multipole moments, respectively. These are connected to
the reduced nuclear transition probabilities by the expression
$\mathcal{B}\uparrow(E/ML)=|\langle
I_{g}||\mathcal{Q}_{L}/\mathcal{M}_{L}||I_{i}\rangle|^{2}/(2I_{i}+1)$. Furthermore,
$R_{0}$ in Eq. (6) denotes the nuclear radius. The radial integrals
$R^{(E)}_{L,\kappa_{g},\kappa}$ and $R^{(M)}_{L,\kappa_{g},\kappa}$ for
electric and magnetic multipolarities, respectively, are given in Refs. Pálffy
et al. (2006); Pálffy et al. (2007b).
With the expansion of the continuum electronic wave function into partial
waves of definite angular momentum, and the above matrix elements for each
partial wave, we obtain the factor
$\mathcal{Y}_{neec}^{i\rightarrow g}({\bf{p}},{\bf{k}})=4\pi Y_{a}\sum_{\kappa,m_{l}}\frac{Y_{b}}{2l+1}Y^{*}_{lm_{l}}(\theta_{k},\varphi_{k})Y_{lm_{l}}(\theta_{p},\varphi_{p}),$ (10)
where $Y_{lm_{l}}$ stands for the spherical harmonics with quantum numbers $l$
and $m_{l}$. Furthermore, $\theta_{p}$ ($\theta_{k}$) and $\varphi_{p}$
($\varphi_{k}$) are the polar and azimuthal angles of the electron momentum
$\bf{p}$ ($\bf{k}$) in the spherical coordinate system of the ion. For NEEC
transitions of electric multipolarity $L$,
$Y_{a}=\frac{4\pi^{2}(2J_{g}+1)}{(2J_{i}+1)(2L+1)^{2}}\frac{1}{R_{0}^{2(L+2)}}\mathcal{B}\uparrow(EL)\rho_{i},$
(11)
and
$Y_{b}=\left[C(j_{g}~{}L~{}j;\frac{1}{2}~{}0~{}\frac{1}{2})\right]^{2}\left|R^{(E)}_{L,\kappa_{g},\kappa}\right|^{2}.$
(12)
For NEEC transitions of magnetic multipolarity $L$,
$Y_{a}=\frac{4\pi^{2}(2J_{g}+1)}{(2J_{i}+1)L^{2}(2L+1)^{2}}\mathcal{B}\uparrow(ML)\rho_{i},$
(13)
and
$Y_{b}=(2j+1)(\kappa_{g}+\kappa)^{2}\left(\begin{array}[]{ccc}j_{g}&j&L\\\
\frac{1}{2}&-\frac{1}{2}&0\end{array}\right)^{2}\left|R^{(M)}_{L,\kappa_{g},\kappa}\right|^{2}.$
(14)
In the equations above, $j$ is the total angular momentum of the continuum
electron, related to $\kappa$ via $j=|\kappa|-1/2$. The radial integrals
$R^{(E/M)}_{L,\kappa_{g},\kappa}$ that enter Eqs. (12) and (14) are calculated
numerically. We use relativistic Coulomb-Dirac wave functions for the
continuum electron and, for the bound electron, wave functions calculated with
the GRASP92 package Parpia et al. (1996) assuming a homogeneously charged
nucleus. The finite size of the nucleus does not significantly affect the
radial wave functions: the values of $R^{(E/M)}_{L,\kappa_{g},\kappa}$ are
nearly unchanged whether we account for the finite nuclear size or use pure
Coulomb-Dirac radial wave functions. However, the finite nuclear size does
have a noticeable effect on the energy levels of the bound electron. These
levels are calculated with GRASP92 and include quantum-electrodynamics
corrections.
## II Acknowledgements
The authors thank I. Madan and G. M. Vanacore for fruitful discussions. SG, FC
and AP acknowledge support from Google Inc. AP gratefully acknowledges the
Heisenberg Program of the Deutsche Forschungsgemeinschaft (DFG).
## III Author contributions
YW and AP developed the theoretical formalism. YW performed the analytical and
numerical calculations. SG and FC provided the input on experimental vortex
beam parameters. AP conducted the project. All authors discussed the results
and wrote the manuscript.
## References
* Walker and Dracoulis (1999) P. Walker and G. Dracoulis, Nature 399, 35 (1999).
* Walker and Podolyák (2020) P. Walker and Z. Podolyák, Physica Scripta 95, 044004 (2020).
* Gunst et al. (2014) J. Gunst, Y. A. Litvinov, C. H. Keitel, and A. Pálffy, Phys. Rev. Lett. 112, 082501 (2014).
* Pálffy et al. (2007a) A. Pálffy, J. Evers, and C. H. Keitel, Phys. Rev. Lett. 99, 172502 (2007a).
* Chiara et al. (2018) C. J. Chiara, J. J. Carroll, M. P. Carpenter, J. P. Greene, D. J. Hartley, R. V. F. Janssens, G. J. Lane, J. C. Marsh, D. A. Matters, M. Polasik, et al., Nature 554, 216 (2018).
* Goldanskii and Namiot (1976) V. I. Goldanskii and V. A. Namiot, Phys. Lett. B 62, 393 (1976).
* Cue et al. (1989) N. Cue, J.-C. Poizat, and J. Remillieux, Eurphys. Lett. 8, 19 (1989).
* Yuan and Kimball (1993) Z.-S. Yuan and J. Kimball, Phys. Rev. C 47, 323 (1993).
* Harston and Chemin (1999) M. R. Harston and J. F. Chemin, Phys. Rev. C 59, 2462 (1999).
* Gosselin and Morel (2004) G. Gosselin and P. Morel, Phys. Rev. C 70, 064603 (2004).
* Pálffy et al. (2006) A. Pálffy, W. Scheid, and Z. Harman, Phys. Rev. A 73, 012715 (2006).
* Wu et al. (2019a) Y. Wu, C. H. Keitel, and A. Pálffy, Phys. Rev. Lett. 122, 212501 (2019a).
* Rzadkiewicz et al. (2021) J. Rzadkiewicz, M. Polasik, K. Słabkowska, L. Syrocki, J. J. Carroll, and C. J. Chiara, Phys. Rev. Lett. 127, 042501 (2021).
* Guo et al. (2021) S. Guo, Y. Fang, X. Zhou, and C. M. Petrache, Nature 594, E1 (2021).
* Chiara et al. (2021) C. J. Chiara, J. J. Carroll, M. P. Carpenter, J. P. Greene, D. J. Hartley, R. V. F. Janssens, G. J. Lane, J. C. Marsh, D. A. Matters, M. Polasik, et al., Nature 594, E3 (2021).
* Wu et al. (2019b) Y. Wu, C. H. Keitel, and A. Pálffy, Phys. Rev. A 100, 063420 (2019b).
* Gargiulo et al. (2021) S. Gargiulo, I. Madan, and F. Carbone, arXiv:2102.05718 [nucl-th] (2021).
* Madan et al. (2020) I. Madan, G. M. Vanacore, S. Gargiulo, T. LaGrange, and F. Carbone, Applied Physics Letters 116, 230502 (2020).
* Uchida and Tonomura (2010) M. Uchida and A. Tonomura, Nature (London) 464, 737 (2010).
* Verbeeck et al. (2010) J. Verbeeck, H. Tian, and P. Schattschneider, Nature (London) 467, 301 (2010).
* McMorran et al. (2011) B. J. McMorran, A. Agrawal, I. A. Anderson, A. A. Herzing, H. J. Lezec, J. J. McClelland, and J. Unguris, Science 331, 192 (2011).
* Clark et al. (2015) C. Clark, R. Barankov, M. Huber, M. Arif, D. G. Cory, and D. A. Pushin, Nature (London) 525, 504 (2015).
* Luski et al. (2021) A. Luski, Y. Segev, R. David, O. Bitton, H. Nadler, A. R. Barnea, A. Gorlach, O. Cheshnovsky, I. Kaminer, and E. Narevicius, arXiv:2104.14619 [quantum-ph] (2021).
* Shen et al. (2019) Y. Shen, X. Wang, Z. Xie, C. Min, X. Fu, Q. Liu, M. Gong, and X. Yuan, Light: Science & Applications 8, 90 (2019).
* Bliokh and Nori (2015) K. Y. Bliokh and F. Nori, Physics Reports 592, 1 (2015).
* Lloyd et al. (2017) S. M. Lloyd, M. Babiker, G. Thirunavukkarasu, and J. Yuan, Rev. Mod. Phys. 89, 035004 (2017).
* Bliokh et al. (2017) K. Y. Bliokh, I. Ivanov, G. Guzzinati, L. Clark, R. Van Boxem, A. Béché, R. Juchtmans, M. A. Alonso, P. Schattschneider, F. Nori, et al., Physics Reports 690, 1 (2017).
* Vanacore et al. (2019) G. M. Vanacore, G. Berruto, I. Madan, E. Pomarico, P. Biagioni, R. J. Lamb, D. McGrouther, O. Reinhardt, I. Kaminer, B. Barwick, et al., Nature Materials 18, 573 (2019).
* Zhao et al. (2021) P. Zhao, I. P. Ivanov, and P. Zhang, arXiv preprint arXiv:2106.00345 (2021).
* Larocque et al. (2018) H. Larocque, I. Kaminer, V. Grillo, R. W. Boyd, and E. Karimi, Nature Physics 14, 1 (2018).
* Kaminer et al. (2015) I. Kaminer, J. Nemirovsky, M. Rechtsman, R. Bekenstein, and M. Segev, Nature Physics 11, 261 (2015).
* Wu et al. (2018) Y. Wu, J. Gunst, C. H. Keitel, and A. Pálffy, Phys. Rev. Lett. 120, 052504 (2018).
* Gunst et al. (2018) J. Gunst, Y. Wu, C. H. Keitel, and A. Pálffy, Phys. Rev. E 97, 063205 (2018).
* Bliokh et al. (2011) K. Y. Bliokh, M. R. Dennis, and F. Nori, Phys. Rev. Lett. 107, 174802 (2011).
* Serbo et al. (2015) V. Serbo, I. P. Ivanov, S. Fritzsche, D. Seipt, and A. Surzhykov, Phys. Rev. A 92, 012705 (2015).
* Rudek et al. (2012) B. Rudek, S.-K. Son, L. Foucar, S. W. Epp, B. Erk, R. Hartmann, M. Adolph, R. Andritschke, A. Aquila, N. Berrah, et al., Nature photonics 6, 858 (2012).
* Steck and Litvinov (2020) M. Steck and Y. A. Litvinov, Progress in Particle and Nuclear Physics 115, 103811 (2020).
* Kim et al. (2010) H. Kim, J. Park, S.-W. Cho, S.-Y. Lee, M. Kang, and B. Lee, Nano letters 10, 529 (2010).
* Wang et al. (2019) S. Wang, C. Zhao, and X. Li, Applied Sciences 9, 3297 (2019).
* Lembessis et al. (2014) V. E. Lembessis, D. Ellinas, M. Babiker, and O. Al-Dossary, Phys. Rev. A 89, 053616 (2014).
* Lembessis (2017) V. E. Lembessis, Phys. Rev. A 96, 013622 (2017).
* Pohl et al. (2017) D. Pohl, S. Schneider, P. Zeiger, J. Rusz, P. Tiemeijer, S. Lazar, K. Nielsch, and B. Rellinghaus, Scientific Reports 7, 934 (2017).
* Litvinov et al. (2013) Y. A. Litvinov, S. Bishop, K. Blaum, F. Bosch, C. Brandau, L. X. Chen, I. Dillmann, P. Egelhof, H. Geissel, R. E. Grisenti, et al., Nuclear Instruments and Methods in Physics Research B 317, 603 (2013).
* Grieser et al. (2012) M. Grieser, Y. A. Litvinov, R. Raabe, K. Blaum, Y. Blumenfeld, P. A. Butler, F. Wenander, P. J. Woods, M. Aliotta, A. Andreyev, et al., European Physical Journal Special Topics 207, 1 (2012).
* Dickel et al. (2015) T. Dickel, W. R. Plaß, S. Ayet San Andres, J. Ebert, H. Geissel, E. Haettner, C. Hornung, I. Miskun, S. Pietri, S. Purushothaman, et al., Physics Letters B 744, 137 (2015).
* Béché et al. (2017) A. Béché, R. Juchtmans, and J. Verbeeck, Ultramicroscopy 178, 12 (2017).
* Reimer and Kohl (2008) L. Reimer and H. Kohl, _Transmission Electron Microscopy_ (Springer Science+Business Media, New York, 2008).
* Koningstein and Fork (2014) R. Koningstein and D. Fork, IEEE Spectrum 51, 30 (2014).
* Prelas et al. (2016) M. Prelas, M. Matthew Boraas, F. De La Torre Aguilar, and J.-D. Seelig, _Nuclear Batteries and Radioisotopes_ (Springer, Cham, 2016).
* Habs and Köster (2011) D. Habs and U. Köster, Applied Physics B 103, 501 (2011).
* Pan et al. (2021) W.-T. Pan, S. T., L. H.-Y., Z.-G. Ma, J. Zhang, Z. Zhu, and W. Luo, Applied Radiation and Isotopes 168, 109534 (2021).
* Pospelov et al. (2020) M. Pospelov, S. Rajendran, and H. Ramani, Phys. Rev. D 101, 055001 (2020).
* Pálffy et al. (2007b) A. Pálffy, Z. Harman, and W. Scheid, Phys. Rev. A 75, 012709 (2007b).
* Gunst et al. (2015) J. Gunst, Y. Wu, N. Kumar, C. H. Keitel, and A. Pálffy, Physics of Plasmas 22, 112706 (2015).
* Parpia et al. (1996) F. A. Parpia, C. F. Fischer, and I. P. Grant, Computer Physics Communications 94, 249 (1996).
# Asymptotically exact photonic analogues of chiral symmetric topological
tight-binding models
S. Palmer1, Y. Ignatov1, R.V. Craster2 & M. Makwana2
1 The Blackett Laboratory, Imperial College London, London SW7 2AZ, United Kingdom
2 Department of Mathematics, Imperial College London, London SW7 2AZ, United Kingdom
###### Abstract
Topological photonic edge states, protected by chiral symmetry, are attractive
for guiding wave energy as they can allow for more robust guiding and greater
control of light than alternatives; however, for photonics, chiral symmetry is
often broken by long-range interactions. We look to overcome this difficulty
by exploiting the topology of networks, consisting of voids and narrow
connecting channels, formed by the spaces between closely spaced perfect
conductors. In the limit of low frequencies and narrow channels, these void-
channel systems have a direct mapping to analogous discrete mass-spring
systems in an asymptotically rigorous manner and therefore only have short-
range interactions. We demonstrate that the photonic analogues of topological
tight-binding models that are protected by chiral symmetries, such as the SSH
model and square-root semimetals, are reproduced for these void-channel
networks with appropriate boundary conditions. We anticipate, moving forward,
that this paper provides a basis from which to explore continuum photonic
topological systems, in an asymptotically exact manner, through the lens of a
simplified tight-binding model.
*
## 1 Introduction
The field of topological materials has revealed exotic phenomena such as
robust, unidirectional edge states that occur at the interfaces between
materials that belong to two different topological phases. The Nobel Prize in
Physics 2016 was awarded to Thouless, Haldane, and Kosterlitz, [1, 2, 3] for
predicting such phases in electronic systems where such topological edge
states promise to revolutionise electronics and quantum computing [4, 5, 6, 7,
8]. Although topological phases were first discovered in electronic systems,
the underlying principles are applicable to wave systems in general, including
photonic and acoustic systems [9, 10]. There is now great interest in
reproducing topological phases in photonics using _photonic crystals_:
periodic nanostructures with tunable photonic bands. In the short to medium
term, topological photonic materials may improve the performance of photonic
devices by reducing dissipation, especially when guiding light around sharp
corners, and in the longer term could offer a platform for fault-tolerant
quantum computers in photonics [10, 11, 12]. Applications that are unique to
topological photonic materials include the design of more efficient lasers,
where the topological edge mode acts as a cavity in which light propagates and
is amplified unidirectionally and coherently despite imperfections in the
crystal [13, 14, 15], and the cloaking of large photon sources from each other
using the polarisation of light [16].
Many topological materials are protected by chiral symmetry (also known as
sublattice symmetry) [17, 18, 19], both directly as in the SSH model [20] and
indirectly as in _square-root topological insulators_ [21] where the squared
Hamiltonian is block-diagonal and at least one of the blocks corresponds to a
known non-trivial system. Chiral symmetry acts on Bloch Hamiltonians as [5]
$\hat{\mathcal{S}}\hat{H}(\vec{k})\hat{\mathcal{S}}^{-1}=-\hat{H}(\vec{k}),$
(1)
where $\hat{\mathcal{S}}$ is a unitary operator that squares to $+1$. Chiral
symmetry is relatively common in tight-binding models of electronic systems,
however in photonics, chiral symmetry is often broken by long-range
interactions [22, 23]; despite this, it can be engineered in certain photonic
systems, examples include arrays of dielectric resonators [24] or grating
structures [25].
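Equation (1) can be verified explicitly for the prototypical chiral-symmetric model, the SSH Bloch Hamiltonian, using the sublattice operator $\hat{\mathcal{S}}=\mathrm{diag}(+1,-1)$. The following minimal sketch (our own illustrative code; hopping values are arbitrary) performs the check numerically:

```python
import cmath

def ssh_hamiltonian(k, v=1.0, w=2.0):
    """SSH Bloch Hamiltonian with intra-cell hopping v and inter-cell hopping w."""
    off = v + w * cmath.exp(-1j * k)
    return [[0.0, off], [off.conjugate(), 0.0]]

def is_chiral_symmetric(H, tol=1e-12):
    """Check Eq. (1) with the sublattice operator S = diag(+1, -1):
    S H S^{-1} = -H  <=>  S_i H_ij S_j + H_ij = 0 for all i, j."""
    S = (1.0, -1.0)
    return all(abs(S[i] * H[i][j] * S[j] + H[i][j]) < tol
               for i in range(2) for j in range(2))

print(is_chiral_symmetric(ssh_hamiltonian(0.7)))   # True: purely off-diagonal
```

The check passes because the Hamiltonian couples only the two sublattices; adding any equal on-site term to both diagonal entries breaks the condition, which is precisely the situation the frequency shift discussed below is designed to absorb.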
In this paper, we engineer chiral-symmetric photonic systems where transverse-
electric polarised light propagates in the voids and narrow connecting
channels located between perfect conductors, as shown in figure 1(a). In
section 2, we review why at low frequencies, this photonic system behaves like
a _discrete_ classical network of inductors and capacitors (or equivalently as
a classical network of masses and springs, as shown in figure 1(b)), in the
limit where the inclusions are closely spaced [26, 27]. The void-channel
networks have vanishing long-range interactions, as the channels are made
increasingly narrow; thus certain configurations of the void-channel networks
have a chiral-symmetric limit (up to a constant frequency shift).
In tight-binding models, terminating the lattice freely does not change the
onsite potential and therefore preserves chiral symmetry. In mass-spring and
void-channel models, the free boundary condition generally breaks chiral
symmetry [28, 29]. We propose that the chiral symmetry can be restored at the
interfaces by “capping” the mass-spring/void-channel networks with heavy
masses/large voids, respectively. In section 3.1 we use the well known SSH
model [20] to demonstrate this principle, and verify that the void-channel SSH
geometry features topological edge states. Although in this article we do not
concentrate upon the asymptotic theory of mapping continuum systems to
discrete models we note that there is considerable advantage in being able to
accurately map between them: the entire machinery and theory for topological
tight-binding systems then carries across into continuum systems. In the
simpler settings of connected acoustic tubes and straight channels, Refs. [30,
31] and [32] illustrate the power of being able to translate back and forth
between continuum and discrete networks; we use the asymptotic methodology of
[26, 27], which shows that curved thin channels can be employed for closely
spaced cylinders (and other smooth objects), and we note that a
three-dimensional network extension [33] is also available.
The void-channel geometries are also a promising platform to realise square-
root topological systems [21]. For example, Maimaiti _et al_ [28] showed that
the nearest-neighbour tight-binding model of the honeycomb-kagome lattice is a
_square-root semimetal_ that inherits the topology from the honeycomb sector
of the squared Hamiltonian. The authors also proposed an analogous mass-spring
model using a gravitational potential energy term to adjust the onsite terms
of the equations of motion [28]; it is not apparent to us if an analogue of
this gravitational term exists for the void-channel geometries. In section
3.2, we produce a photonic analogue of the square-root semimetal by capping
our void-channel network with large voids, thereby ensuring that the chiral
symmetry of the squared Hamiltonian is not broken by the interfaces. We study
the interfaces in a ribbon and a triangular metaparticle of the honeycomb-
kagome lattice, and observe that topologically protected edge and corner
states can be excited.
## 2 Methodology
Figure 1: (a) Two voids of vacuum (white) and a narrow connecting channel
embedded in perfect conductor (grey) will behave like a pair of inductors and
a capacitor for transverse-electric polarised light at low frequencies and for
narrow channels [26, 27]. The voids act as inductors with inductance $L$
because the currents, $I$, circulate around the surface of the voids and
induce out-of-plane magnetic fields within the voids. We label the magnetic
field within the left and right voids as $H_{z}^{-}$ and $H_{z}^{+}$,
respectively. The channel acts as a capacitor because the difference of
magnetic field across the channel generates an electric field, $\vec{E}$,
across the gap. This analogy can be extended to larger networks of voids and
channels. (b) The void-channel or inductor-capacitor network is also analogous
to a mass-spring network, where masses $m$ are connected by spring constants
$k$, and the masses oscillate in and out of the page.
We apply the method of Vanel _et al_ [26] to map the TE-polarised Maxwell’s
equations, within networks of voids and channels formed between closely spaced
perfect conductors (as shown in figure 1(a)), to equivalent networks of
resonators in an asymptotically exact manner. The voids behave as inductors
and the channels behave as capacitors; the difference in the out-of-plane
magnetic field across a channel, $H_{z}^{+}-H_{z}^{-}$, results in an electric
field, $\vec{E}$, perpendicular to the channel [26, 27]. Equivalently, we may
consider a mass-spring network where the voids are mapped to masses and the
channels are mapped to springs, as shown in figure 1(b).
The parameters of the discrete inductor-capacitor/mass-spring networks do not
rely upon lumped parameters and/or heuristic approximations; such approaches
are common in electrical engineering as lumped circuit models [34], or as
optimisation with databases [35], and they successfully take complex systems
across to networks. The advantage of the alternative approach here is that the
effective parameters are simple and explicit. Using matched asymptotic
expansions, Vanel _et al_ [26] demonstrated that the precise values of the
masses and spring constants, corresponding to a particular network of voids
and channels, can be determined in a remarkably simple manner at low
frequencies and in the limit of narrow channels,
$h/a\rightarrow 0$. The masses are proportional to the area of the voids,
$m_{i}=A_{i}\cdot m_{0}/a^{2},$ (2)
whilst the spring constants are a function of the half-width of the channel,
$h$, in addition to the radius of curvature of the two sides of the channel,
$R_{1}$ and $R_{2}$,
$k=\frac{1}{\pi}\sqrt{\frac{h}{R_{1}}+\frac{h}{R_{2}}}.$ (3)
Equations (2) and (3) allow us to accurately model the continuous void-channel
network using the much simpler equations of motion of a discrete mass-spring
system, without the need for any fine tuning or parameter fitting. The reverse
mapping allows us to propose new photonic void-channel models where the
coupling of the field between different voids is highly controllable. This
affords us complete control over the symmetries of the Hamiltonian and, hence,
suggests that void-channel networks could be a powerful platform for realising
symmetry-protected photonic topological phases.
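Equations (2) and (3) are explicit enough to evaluate directly. The sketch below (our own illustrative code) applies them to the SSH-like geometry of section 3.1, where $H=L/10$ and the channel half-widths are $H/400$ and $H/100$; for the sinusoidal wall profile used there the neck curvature satisfies $R_{1}=R_{2}=L^{2}/(2\pi^{2}H)$, so Eq. (3) reduces to $k=2\sqrt{Hh}/L$:

```python
import math

def void_mass(area, a, m0=1.0):
    """Eq. (2): effective mass of a void, m_i = A_i * m0 / a^2."""
    return area * m0 / a**2

def channel_spring(h, R1, R2):
    """Eq. (3): effective spring constant of a narrow channel of half-width h
    between walls with radii of curvature R1 and R2."""
    return math.sqrt(h / R1 + h / R2) / math.pi

# Geometry of the SSH-like void-channel chain of section 3.1 (units of L = a = 1):
L = 1.0
H = L / 10                           # void half-width
R = L**2 / (2 * math.pi**2 * H)      # wall radius of curvature at each channel neck
m = void_mass(H * L, L)              # bulk void area ~ H * L  ->  0.1 m0
k_thin = channel_spring(H / 400, R, R)   # narrower channel    ->  0.01 k0
k_wide = channel_spring(H / 100, R, R)   # wider channel       ->  0.02 k0
print(f"m = {m:.3f} m0, k_thin = {k_thin:.4f} k0, k_wide = {k_wide:.4f} k0")
```

The output reproduces the discrete parameters used in the paper's mass-spring comparisons ($m=0.1m_{0}$ and spring constants $0.01k_{0}$ and $0.02k_{0}$), with no fitting involved.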
In this paper, the void-channel solutions were obtained using the open source
finite element solver FreeFem++ [36]; we used this to solve the Helmholtz
equation for our void-channel model,
$\frac{\partial^{2}H_{z}}{\partial t^{2}}-c^{2}\nabla^{2}H_{z}=0,$ (4)
where $c$ is the speed of light in the space between the perfect conductor and
where we notably applied Neumann boundary conditions along the surface of the
perfect conductor [26, 27]. This was achieved using a modified version of a
set of FreeFem++ scripts that were originally used to model phononic crystals
[37].
## 3 Results
### 3.1 The Su-Schrieffer-Heeger chain
Figure 2: Squared frequency spectrum (normalised by
$\omega_{0}=\sqrt{k_{0}/m_{0}}$) of a mass-spring chain with equal masses,
$m=0.1m_{0}$, and alternating spring constants $k_{1}=0.01k_{0}$ and
$k_{2}=0.02k_{0}$, as shown in the inset. The width of the unit cell is $a$.
The spectrum closely resembles the energy spectrum of an electronic SSH tight-
binding model [38, 20], except that the squared frequency has been shifted by
$(k_{1}+k_{2})/m=0.3\omega_{0}^{2}$.
To explore the symmetries of photonic void-channel networks, let us first
consider the equations of motion of an SSH-like mass-spring network consisting
of equal masses, $m$, and alternating spring constants, $k_{1}$ and $k_{2}$,
as shown in the inset of figure 2,
$\left[\begin{array}[]{cc}\frac{k_{1}}{m}+\frac{k_{2}}{m}&-\frac{k_{1}}{m}-\frac{k_{2}}{m}e^{-ika}\\\
-\frac{k_{1}}{m}-\frac{k_{2}}{m}e^{+ika}&\frac{k_{1}}{m}+\frac{k_{2}}{m}\end{array}\right]\left[\begin{array}[]{c}\big{|}u_{1}\big{>}\\\
\big{|}u_{2}\big{>}\end{array}\right]=\omega^{2}(k)\left[\begin{array}[]{c}\big{|}u_{1}\big{>}\\\
\big{|}u_{2}\big{>}\end{array}\right].$ (5)
We see in figure 2 that the squared frequency spectrum of this chain resembles
the energy spectrum of the electronic SSH tight-binding model. Unlike in the
tight-binding model, the diagonal terms in equation (5) are non-zero; however,
as they are equal, they merely correspond to a shift of the squared frequency
by $(k_{1}+k_{2})/m$,
$\left[\begin{array}[]{cc}0&-\frac{k_{1}}{m}-\frac{k_{2}}{m}e^{-ika}\\\
-\frac{k_{1}}{m}-\frac{k_{2}}{m}e^{+ika}&0\end{array}\right]\left[\begin{array}[]{c}\big{|}u_{1}\big{>}\\\
\big{|}u_{2}\big{>}\end{array}\right]=\left(\omega^{2}(k)-\frac{k_{1}+k_{2}}{m}\right)\left[\begin{array}[]{c}\big{|}u_{1}\big{>}\\\
\big{|}u_{2}\big{>}\end{array}\right].$ (6)
This frequency shift allows for the chiral symmetry of the bulk equations to
be preserved.
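Diagonalising Eq. (5) gives the two bands in closed form, $\omega^{2}_{\pm}(k)=(k_{1}+k_{2})/m\pm|k_{1}+k_{2}e^{-ika}|/m$, which makes the constant shift of Eq. (6) explicit. A short numerical sketch (illustrative code of our own, using the parameters of figure 2):

```python
import cmath

def ssh_spring_bands(k, k1, k2, m, a=1.0):
    """Squared-frequency bands of Eq. (5):
    omega^2(k) = (k1 + k2)/m  ±  |k1 + k2 exp(-i k a)| / m."""
    shift = (k1 + k2) / m
    off = abs(k1 + k2 * cmath.exp(-1j * k * a)) / m
    return shift - off, shift + off

k1, k2, m = 0.01, 0.02, 0.1
lo0, hi0 = ssh_spring_bands(0.0, k1, k2, m)           # zone centre
lo_pi, hi_pi = ssh_spring_bands(cmath.pi, k1, k2, m)  # zone boundary
print(round(lo0, 6), round(hi0, 6))      # 0.0 0.6: full bandwidth around 0.3
print(round(lo_pi, 6), round(hi_pi, 6))  # 0.2 0.4: band gap straddling 0.3
```

The band edges at $0.2\omega_{0}^{2}$ and $0.4\omega_{0}^{2}$, centred on the shift $0.3\omega_{0}^{2}$, match the spectrum shown in figure 2.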
Figure 3: An SSH-like mass-spring chain with free and “wall” boundary
conditions. The mass-spring chain consists of equal masses, $m$, connected by
springs of alternating spring constants $k_{1}$ and $k_{2}$. (a‑b) Schematic
and squared frequency spectrum of an SSH-like mass-spring chain with free
boundary conditions. The chain is 20 masses long and we take
$k_{1}=0.02k_{0}$, $k_{2}=0.01k_{0}$, and $m=0.1m_{0}$. Although the winding
number of the bulk Hamiltonian is non-trivial, no edge states are observed
because the free boundary condition breaks the chiral symmetry. (c‑d)
Schematic and squared frequency spectrum of the same chain but with the “wall”
boundary condition, where the edges of the SSH-like mass-spring chain are
attached to immovable walls with springs of spring constant $k_{2}$. The wall
boundary condition restores the chiral symmetry of the chain and symmetry-
protected edge states are observed in the bulk band gap.
The mass-spring model differs from the original SSH tight-binding model [38,
20] because the forces on the masses are proportional to the _differences_ of
the mass displacements, whereas in tight-binding models the hopping is
proportional to the wavefunction amplitudes themselves [39]. In particular,
while the original SSH model is chiral symmetric for a finite chain with free
boundary conditions, this is not the case for the mass-spring chain with free
boundary conditions. The chiral symmetry is broken, in the latter, by the
non-zero terms at the ends of the diagonal of the matrix, and therefore there
are no topological edge states in the frequency spectrum (see figure 3(b)),
$\left[\begin{array}[]{ccccc}-\frac{k_{2}}{m}&-\frac{k_{1}}{m}\\\
-\frac{k_{1}}{m}&0&-\frac{k_{2}}{m}\\\ &-\frac{k_{2}}{m}&0&-\frac{k_{1}}{m}\\\
&&-\frac{k_{1}}{m}&0&\ddots\\\
&&&\ddots&\ddots\end{array}\right]\left[\begin{array}[]{c}u_{1}\\\ u_{2}\\\
u_{3}\\\ u_{4}\\\
\vdots\end{array}\right]=\left(\omega^{2}(k)-\frac{k_{1}+k_{2}}{m}\right)\left[\begin{array}[]{c}u_{1}\\\
u_{2}\\\ u_{3}\\\ u_{4}\\\ \vdots\end{array}\right].$ (7)
This distinction arises because the end masses are only connected to a
solitary spring; we can restore the chiral symmetry by anchoring the chain to
an immovable wall with a spring, of spring constant $k_{2}$, as shown in
figure 3(c). The equations of motion of this chain with the “wall” boundary
condition then become,
$\left[\begin{array}[]{ccccc}0&-\frac{k_{1}}{m}\\\
-\frac{k_{1}}{m}&0&-\frac{k_{2}}{m}\\\ &-\frac{k_{2}}{m}&0&-\frac{k_{1}}{m}\\\
&&-\frac{k_{1}}{m}&0&\ddots\\\
&&&\ddots&\ddots\end{array}\right]\left[\begin{array}[]{c}u_{1}\\\ u_{2}\\\
u_{3}\\\ u_{4}\\\
\vdots\end{array}\right]=\left(\omega^{2}(k)-\frac{k_{1}+k_{2}}{m}\right)\left[\begin{array}[]{c}u_{1}\\\
u_{2}\\\ u_{3}\\\ u_{4}\\\ \vdots\end{array}\right].$ (8)
We pictorially see, from figure 3(d), that the chiral symmetry is restored
and, hence, SSH-like edge states emerge at the mid-gap frequency,
$\sqrt{k_{1}/m+k_{2}/m}$.
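The contrast between the free chain of Eq. (7) and the wall-anchored chain of Eq. (8) can be reproduced numerically. The sketch below is our own illustrative code (pure Python, with a small classical Jacobi eigensolver so no external libraries are needed). It uses the bulk parameters of figure 2, taking the weak spring $0.01k_{0}$ for the bonds terminating the interior of the chain and the strong spring $0.02k_{0}$ for the alternating bonds and the wall anchors, so that every mass sees the same total stiffness; two states then sit at the mid-gap squared frequency $(k_{1}+k_{2})/m=0.3\omega_{0}^{2}$:

```python
import math

def jacobi_eigvals(A, max_rot=20000, tol=1e-11):
    """Eigenvalues of a real symmetric matrix via classical Jacobi rotations."""
    n = len(A)
    A = [row[:] for row in A]
    for _ in range(max_rot):
        # Zero out the largest off-diagonal element with a plane rotation.
        val, p, q = max((abs(A[i][j]), i, j)
                        for i in range(n) for j in range(i + 1, n))
        if val < tol:
            break
        theta = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
        c, s = math.cos(theta), math.sin(theta)
        for k in range(n):                     # A <- J^T A  (rows p, q)
            Apk, Aqk = A[p][k], A[q][k]
            A[p][k] = c * Apk - s * Aqk
            A[q][k] = s * Apk + c * Aqk
        for k in range(n):                     # A <- A J    (columns p, q)
            Akp, Akq = A[k][p], A[k][q]
            A[k][p] = c * Akp - s * Akq
            A[k][q] = s * Akp + c * Akq
    return sorted(A[i][i] for i in range(n))

def chain_matrix(n, kw, ks, m, wall):
    """omega^2 eigenproblem of an SSH-like chain of n equal masses m with
    alternating weak (kw) and strong (ks) springs, weak bonds at both ends.
    wall=True adds strong anchoring springs to immovable walls (Eq. (8));
    wall=False gives the free chain of Eq. (7)."""
    end = ks if wall else 0.0
    springs = [end] + [kw if i % 2 == 0 else ks for i in range(n - 1)] + [end]
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = (springs[i] + springs[i + 1]) / m
        if i + 1 < n:
            A[i][i + 1] = A[i + 1][i] = -springs[i + 1] / m
    return A

kw, ks, m = 0.01, 0.02, 0.1
mid = (kw + ks) / m                              # mid-gap squared frequency, 0.3
walled = jacobi_eigvals(chain_matrix(20, kw, ks, m, wall=True))
edge = [w2 for w2 in walled if abs(w2 - mid) < 0.05]
print(len(edge), [round(w2, 4) for w2 in edge])  # two states pinned near 0.3
```

With the wall anchors the shifted matrix is purely off-diagonal and the two mid-gap states appear; setting `wall=False` instead leaves defect diagonal entries at the ends, the situation of Eq. (7), where the chiral symmetry is broken.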
Figure 4: (a) The “wall” boundary condition of the SSH-like mass-spring chain,
introduced in figure 3(b), can be approximated by replacing the wall with
heavy masses, $M$. (b) The average frequency of the pair of edge states for
the mass-spring chain capped with heavy masses (dots) compared with that for
the mass-spring chain with the wall boundary condition (solid line), for the
same values of $m$, $k_{1}$, and $k_{2}$ as before. Each chain contains 20
masses of mass $m$, and the capped chain has two capping masses of mass $M$ on
each end for a total of 22 masses. For capping masses that are an order of
magnitude heavier than the masses in the rest of the chain, $M\gtrsim 10m$,
the squared frequencies of the edge states of the two chains agree within
about 3% error.
The wall boundary condition can be well approximated by capping mass-spring
models with heavy masses, as shown in figure 4. This allows us to propose a
photonic analogue of the SSH model that consists of a one-dimensional network
of voids and channels as shown in figure 5(a).
Figure 5: (a) Schematic of an SSH-like void-channel chain that is analogous to
the capped mass-spring chain introduced in figure 4. The grey region is
perfect conductor, and the white is air. Only the beginning of the chain is
shown. The bulk of the chain consists of equally sized voids of half-width $H$
and length $L$ connected by channels of alternating half-widths $h_{1}$ and
$h_{2}$. The precise shapes of the voids and channels are defined in the main
text. Note that the bulk region runs from $x=-L/4$ to $(N+1/4)L$ such that the
curvature of the walls is well defined at the narrowest point of each channel.
The half-width at the end of the bulk region is $h_{\mathrm{end}}$; the chain
is then capped by larger voids that are roughly circular with diameter $L$.
(b) Squared frequency spectrum of the void-channel chain (red crosses,
$H=L/10$, $h_{1}=H/400$, $h_{2}=H/100$) and analogous mass-spring chain (black
dots, $k_{1}=0.02$, $k_{2}=0.01$, $m=0.1m_{0}$, $M=\pi/4m_{0}$). The
frequencies are normalised by $\omega_{0}=\sqrt{k_{0}/m_{0}}$ for the mass-
spring model and $\omega_{0}=2\pi c_{0}/L$ for the void-channel model. The
chains consist of 20 masses/voids (or 22 including the pair of larger
masses/voids at the end). (c-d) The magnetic field (red positive, blue
negative) of the two edge modes of the void-channel chain. We see that the
chain preserves chiral symmetry well: the magnetic field is relatively weak in
the large capping voids and each edge mode is well localised to just one
sublattice.
To emulate equal masses connected by alternating spring constants we require
equally sized voids connected by relatively thin channels of alternating
widths. A simple choice for the shape of the upper and lower walls of the
geometry is
$y(x)=\pm\left[H\sin^{2}(\pi x/L)+h_{1}\sin^{2}(\pi x/2L)+h_{2}\cos^{2}(\pi
x/2L)\right],$ (9)
where we take the half-width of the void as $H=L/10$, and the alternating
half-widths of the channels as $h_{1}=H/400$ and $h_{2}=H/100$. Note that the
upper and lower walls run from $x=-\tfrac{1}{4}L$ to $x=(N+\tfrac{1}{4})L$ to
ensure that the radius of curvature of each channel from $x=0$ to $x=NL$ is
well defined. The local radius of curvature of the walls of each channel is
$R=L^{2}/(2\pi^{2}H)$. As $h_{1},h_{2}\ll H$, the area of each void in the
bulk region is approximately $A_{\mathrm{bulk}}\approx
2\int_{0}^{L}H\sin^{2}(\pi x/L)\mathrm{d}x=HL$.
The walls are capped by roughly circular voids of diameter $L$. On the left
hand side,
$\displaystyle x_{\mathrm{L}}(\theta)$
$\displaystyle=\frac{L}{2}\cos(\theta)-\frac{3}{4}L,$ (10) $\displaystyle
y_{\mathrm{L}}(\theta)$
$\displaystyle=\frac{L}{2}\sin(\theta)+h_{\mathrm{end}}\cos(\theta/2),$ (11)
for $\theta=[0,2\pi]$ and on the right hand side,
$\displaystyle x_{\mathrm{R}}(\theta)$
$\displaystyle=\frac{L}{2}\cos(\theta)+\left(N+\frac{3}{4}\right)L,$ (12)
$\displaystyle y_{\mathrm{R}}(\theta)$
$\displaystyle=\frac{L}{2}\sin(\theta)-h_{\mathrm{end}}\sin(\theta/2),$ (13)
for $\theta=[-\pi,\pi]$, where the $h_{\mathrm{end}}=y(-L/4)=y(NL+L/4)$ term
is included to ensure that the walls of the geometry are continuous. As
$h_{\mathrm{end}}\ll L$, we can take the area of the large capping voids
as approximately $A_{\mathrm{cap}}=\pi L^{2}/4$. Note that this is a slight
underestimate of the true area of the caps because we do not account for the
region $-L/4\leq x\leq 0$ or for the extra height of the void, described by
equations (10)-(13), compared to a circle of diameter $L$.
The alternating spring constants of the corresponding mass-spring network are
$\displaystyle k_{1}$ $\displaystyle=\frac{1}{\pi}\sqrt{2h_{1}/R}\cdot
k_{0}=0.01k_{0},$ (14) $\displaystyle k_{2}$
$\displaystyle=\frac{1}{\pi}\sqrt{2h_{2}/R}\cdot k_{0}=0.02k_{0},$ (15)
the masses in the bulk of the chain are
$\displaystyle m=A_{\mathrm{bulk}}\cdot m_{0}/L^{2}=0.1m_{0},$ (16)
and the larger capping masses are
$\displaystyle M=A_{\mathrm{cap}}\cdot m_{0}/L^{2}=\frac{\pi}{4}m_{0},$ (17)
such that $M/m\approx 8$, where $k_{1}$, $k_{2}$, $m$, and $M$ are as defined
earlier in figure 4. As the capping mass is roughly an order of magnitude
larger than the bulk masses, we expect that the chiral symmetry is largely
restored to the model. Indeed, figure 5(b) shows that the energy spectra of
both the void-channel network (red) and the mass-spring network (black) behave
as non-trivial SSH chains with a pair of topological edge states in the energy
gap.
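The mapped parameters of equations (14)-(17) can be reproduced directly from the geometry. A short sketch of this check (our own, in units where $L=m_{0}=k_{0}=1$; the bulk-void area is integrated numerically from the wall profile of equation (9)):

```python
import numpy as np

L = 1.0
H, h1, h2 = L / 10, L / 4000, L / 1000     # H = L/10, h1 = H/400, h2 = H/100
R = L**2 / (2 * np.pi**2 * H)              # radius of curvature at each channel

# Spring constants of equations (14)-(15), in units of k0
k1 = np.sqrt(2 * h1 / R) / np.pi
k2 = np.sqrt(2 * h2 / R) / np.pi

# Bulk-void area from the wall profile y(x) of equation (9)
x = np.linspace(0.0, L, 200001)
y = H * np.sin(np.pi * x / L)**2 + h1 * np.sin(np.pi * x / (2 * L))**2 \
    + h2 * np.cos(np.pi * x / (2 * L))**2
A_bulk = 2.0 * np.sum(y[:-1]) * (x[1] - x[0])   # rectangle-rule integral
m = A_bulk                                      # equation (16), in units of m0
M = np.pi / 4                                   # equation (17), A_cap = pi L^2/4
```

The numerically integrated area gives $m\approx 0.101m_{0}$, confirming the approximation $A_{\mathrm{bulk}}\approx HL$ up to the small $\mathcal{O}(h_{1},h_{2})$ channel corrections, and $M/m\approx 8$ as stated in the text.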
Overall, there is good agreement between the void-channel and mass-spring
models; in particular we see similar band gaps and a pair of topological edge
states in each model. The models agree well at lower frequencies because the
mapping between the models is valid below a cut-off frequency that scales as
$\omega_{\mathrm{cutoff}}^{2}\sim 1/h$ [27]. We shall explore this error in
greater detail when we study the honeycomb-kagome lattice.
The edge states are largely localised on separate sublattices and decay
quickly into the bulk, as shown for the void-channel model in figures 5(c‑d).
It is intuitive that in the mass-spring model the heavy capping masses will
oscillate with a smaller amplitude than the other masses, and we see that
correspondingly the fields in the capping voids of the void-channel model are
weak. The non-zero field within capping voids indicates that the chiral
symmetry is not perfectly restored. This could be improved by increasing the
size of the capping voids; however, this does not seem necessary, as the
chiral-symmetry violation is weak enough that the squared frequencies of the
edge states remain well centred within the band gap.
### 3.2 Flat edge states in the honeycomb-kagome lattice
Having established that the chiral symmetry of the mass-spring/void-channel
networks can be restored by capping the interfaces with sufficiently heavy
masses/large voids, we now turn to a more complex case of a square-root
semimetal where the topology is protected by the chiral symmetry of the
honeycomb sector of the squared Hamiltonian [21, 40]. We shall see that
despite the differences between the mass-spring/void-channel models and the
original tight-binding model, capping the interfaces again allows the
protecting symmetry to be restored and for the topological edge states to be
observed.
#### 3.2.1 Tight-binding model
Figure 6: (a) Schematic of the tight-binding Hamiltonian of the honeycomb-
kagome lattice, $\hat{H}^{\text{hk}}$, with nearest-neighbour hopping
parameter $t$, as introduced by [40]. The honeycomb and kagome sites are shown
in blue and red, respectively, and $\vec{a}_{1}$ and $\vec{a}_{2}$ are the
lattice parameters. (b) The energy spectrum of $\hat{H}^{\text{hk}}$ is
symmetric about $E=0$ because of the chiral symmetry. The flat band and Dirac
crossings at K are reminiscent of the nearest-neighbour tight-binding models
on the honeycomb and kagome lattices, $\hat{H}^{\mathrm{h}}$ and
$\hat{H}^{\mathrm{k}}$, respectively. This is because
$\left(\hat{H}^{\text{hk}}\right)^{2}$ is block-diagonal, with the blocks
proportional to the energy spectra of $\hat{H}^{\mathrm{h}}$ and
$\hat{H}^{\mathrm{k}}$, up to a constant shift of energy, as shown in (c‑d)
for the honeycomb and kagome sectors of
$\left(\hat{H}^{\text{hk}}\right)^{2}$, respectively. The energy spectrum of
$\hat{H}^{\text{hk}}$ therefore inherits features, including topology, from
$\hat{H}^{\mathrm{h}}$ and $\hat{H}^{\mathrm{k}}$. In particular,
$\hat{H}^{\mathrm{h}}$ is a topological semimetal, and $\hat{H}^{\text{hk}}$
is therefore known as a square-root topological semimetal [21, 40].
Before we introduce our mass-spring and void-channel models, let us review the
tight-binding model introduced by Mizoguchi _et al_ [40] and explain its
topological origins. Figure 6(a) shows the nearest-neighbour tight-binding
model of the honeycomb-kagome lattice, also known as the decorated honeycomb
lattice. The Hamiltonian has a block off-diagonal form [40],
$\underline{\underline{H}}_{\vec{k}}^{\text{hk}}\left[\begin{array}[]{c}u_{1}\\\
\vdots\\\
u_{5}\end{array}\right]=\left[\begin{array}[]{cc}\underline{\underline{0}}_{2\times
2}&t\underline{\underline{\Psi}}_{\vec{k}}^{\dagger}\\\
t\underline{\underline{\Psi}}_{\vec{k}}&\underline{\underline{0}}_{3\times
3}\end{array}\right]\left[\begin{array}[]{c}u_{1}\\\ \vdots\\\
u_{5}\end{array}\right],$ (18)
where $u_{1}$ and $u_{2}$ are amplitudes at the honeycomb sites, $u_{3}$,
$u_{4}$, and $u_{5}$ are amplitudes at the kagome sites, $t$ is the hopping
strength, $\underline{\underline{0}}_{n\times m}$ is an $n\times m$ matrix of
zeros, and
$\underline{\underline{\Psi}}_{\vec{k}}=\left[\begin{array}[]{cc}1&1\\\
e^{i\vec{k}\cdot\vec{a}_{1}}&1\\\
e^{i\vec{k}\cdot\vec{a}_{2}}&1\end{array}\right].$ (19)
As there are only hoppings between sites belonging to _different_ sublattices,
this tight-binding model is chiral symmetric. The unitary chiral symmetry
operator is
$\underline{\underline{\Gamma}}=\left[\begin{array}[]{cc}\underline{\underline{I}}_{2}&\underline{\underline{0}}_{2\times
3}\\\ \underline{\underline{0}}_{3\times
2}&-\underline{\underline{I}}_{3}\end{array}\right],$ (20)
where $\underline{\underline{I}}_{n}$ is an $n\times n$ identity matrix.
The energy bands of the tight-binding model, equation (18), are shown in
figure 6(b). The spectrum is symmetric about $E=0$ due to the chiral symmetry.
Curiously, the spectrum contains features of both the underlying honeycomb and
kagome lattices, such as the symmetry-protected Dirac cones at K [41] and the
flat band [42]. This is because equation (18) belongs to a class of
Hamiltonians known as square-root Hamiltonians, meaning that the square of
equation (18) is block diagonal,
$\left(\underline{\underline{H}}_{\vec{k}}^{\text{hk}}\right)^{2}=\left[\begin{array}[]{cc}t^{2}\underline{\underline{\Psi}}_{\vec{k}}^{\dagger}\underline{\underline{\Psi}}_{\vec{k}}&\underline{\underline{0}}_{2\times
3}\\\ \underline{\underline{0}}_{3\times
2}&t^{2}\underline{\underline{\Psi}}_{\vec{k}}\underline{\underline{\Psi}}_{\vec{k}}^{\dagger}\end{array}\right]=\left[\begin{array}[]{cc}\underline{\underline{H}}_{\vec{k}}^{\mathrm{h}}&\underline{\underline{0}}_{2\times
3}\\\ \underline{\underline{0}}_{3\times
2}&\underline{\underline{H}}_{\vec{k}}^{\mathrm{k}}\end{array}\right],$ (21)
where
$\underline{\underline{H}}_{\vec{k}}^{\mathrm{h}}=\left[\begin{array}[]{cc}3t^{2}&(1+e^{+i\vec{k}\cdot\vec{a}_{1}}+e^{+i\vec{k}\cdot\vec{a}_{2}})t^{2}\\\
(1+e^{-i\vec{k}\cdot\vec{a}_{1}}+e^{-i\vec{k}\cdot\vec{a}_{2}})t^{2}&3t^{2}\end{array}\right]$
(22)
is the tight-binding Hamiltonian of a honeycomb lattice with nearest-neighbour
hopping strength $t^{2}$ and on-site potential $3t^{2}$, and
$\underline{\underline{H}}_{\vec{k}}^{\mathrm{k}}=\left[\begin{array}[]{ccc}2t^{2}&(1+e^{-i\vec{k}\cdot\vec{a}_{1}})t^{2}&(1+e^{-i\vec{k}\cdot\vec{a}_{2}})t^{2}\\\
(1+e^{+i\vec{k}\cdot\vec{a}_{1}})t^{2}&2t^{2}&(1+e^{-i\vec{k}(\vec{a}_{2}-\vec{a}_{1})})t^{2}\\\
(1+e^{+i\vec{k}\cdot\vec{a}_{2}})t^{2}&(1+e^{+i\vec{k}(\vec{a}_{2}-\vec{a}_{1})})t^{2}&2t^{2}\end{array}\right]$
(23)
is the tight-binding Hamiltonian of a kagome lattice with nearest-neighbour
hopping strength $t^{2}$ and on-site potential $2t^{2}$ [40]. The squared
energy spectrum of the honeycomb and kagome sectors, of equation (21), are
shown in figures 6(c‑d), respectively.
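The block-diagonal structure of equation (21), and the chiral symmetry of equation (18), can be verified numerically at an arbitrary Bloch wavevector. A minimal sketch (our own check; the lattice vectors are assumed to be those of a triangular lattice with unit spacing):

```python
import numpy as np

t = 1.0
a1 = np.array([1.0, 0.0])               # assumed triangular-lattice vectors
a2 = np.array([0.5, np.sqrt(3) / 2])
k = np.array([0.37, 1.21])              # an arbitrary Bloch wavevector

# Off-diagonal block Psi_k of equation (19)
Psi = np.array([[1.0, 1.0],
                [np.exp(1j * k @ a1), 1.0],
                [np.exp(1j * k @ a2), 1.0]])

# Honeycomb-kagome Bloch Hamiltonian, equation (18)
H = np.block([[np.zeros((2, 2)), t * Psi.conj().T],
              [t * Psi, np.zeros((3, 3))]])
E = np.linalg.eigvalsh(H)               # symmetric about E = 0 (chiral symmetry)

H2 = H @ H                              # equation (21): block diagonal
Hh = H2[:2, :2]                         # honeycomb sector, equation (22)
Hk = H2[2:, 2:]                         # kagome sector, equation (23)
```

The off-diagonal blocks of `H2` vanish, the honeycomb and kagome sectors carry the on-site potentials $3t^{2}$ and $2t^{2}$, and the spectrum of `H` is symmetric about zero, as required.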
Arkinstall _et al_ [21] introduced a class of topological materials whose non-
trivial topology is inherited from the squared Hamiltonian. They named these
materials square-root topological insulators. The nearest-neighbour tight-
binding model of the honeycomb lattice is a topological semimetal. The
honeycomb-kagome lattice is therefore a square-root topological semimetal,
with the non-trivial topology inherited from the honeycomb sector of the
squared Hamiltonian [40, 43]. In the following sections, we introduce mass-
spring and void-channel analogues of the square-root topological semimetal on
the honeycomb-kagome lattice and study the symmetry-protected edge states.
#### 3.2.2 Mass-spring and void-channel models
Figure 7: Mass-spring and void-channel models of a honeycomb-kagome lattice
with nearest-neighbour coupling. (a) In the mass-spring model, the masses at
honeycomb sites ($m_{h}$, blue) and kagome sites ($m_{k}$, red) are connected
by springs of equal spring constants $k$. (b) In the void-channel model, voids
and channels are formed between flower-shaped perfectly conducting particles
arranged on a triangular lattice of lattice parameter $a$. The flower shapes
consist of six cylinders of radius $r$ arranged in a ring of radius $R$. The
voids at the honeycomb sites have area $A_{h}$ (blue voids) and the voids at
the kagome sites have area $A_{k}$ (red voids). The channels have equal
half-widths $h$. (c) Bulk frequency bands of the mass-spring model (black
points, $m_{h}=0.01104m_{0}$, $m_{k}=0.00736m_{0}$, $k=0.01978k_{0}$) and the
void-channel model (red points, $A_{h}=0.01104a^{2}$, $A_{k}=0.00736a^{2}$,
$2h=0.001a$). The frequencies are normalised by
$\omega_{0}=\sqrt{k_{0}/m_{0}}$ for the mass-spring model and $\omega_{0}=2\pi
c_{0}/a$ for the void-channel model. There is good agreement between the two
models, particularly at the lower frequencies. We chose the masses and areas
such that $m_{h}/m_{k}=A_{h}/A_{k}=3/2$ in order that the models resemble the
tight-binding model of figure 6(c) but with a shift of frequency, as discussed
in the main text.
Figure 7(a) shows a mass-spring model of the honeycomb-kagome lattice where
the honeycomb masses, $m_{h}$, and kagome masses, $m_{k}$, are connected by
springs of equal spring constant $k$,
$\left[\begin{array}[]{cc}\frac{3k}{m_{h}}\underline{\underline{I}}_{2\times
2}&-\frac{k}{m_{h}}\underline{\underline{\Psi}}_{\vec{k}}^{\dagger}\\\
-\frac{k}{m_{k}}\underline{\underline{\Psi}}_{\vec{k}}&\frac{2k}{m_{k}}\underline{\underline{I}}_{3\times
3}\end{array}\right]\left[\begin{array}[]{c}u_{1}\\\ u_{2}\\\ u_{3}\\\
u_{4}\\\
u_{5}\end{array}\right]=\omega^{2}(\vec{k})\left[\begin{array}[]{c}u_{1}\\\
u_{2}\\\ u_{3}\\\ u_{4}\\\ u_{5}\end{array}\right].$ (24)
First, note that the unsquared equations have eigenvalue $\omega^{2}$, and the
squared equations would have eigenvalue $\omega^{4}$. Next, we note that in
the mass-spring model the block-diagonal terms are non-zero and the two off-
diagonal block matrices are scaled by different factors, namely $m_{h}$ and
$m_{k}$. In a recent study of tight-binding and mass-spring honeycomb-kagome
lattices, Mizoguchi _et al_ [40] reproduced the tight-binding model by letting
$m_{h}=m_{k}$ and setting the block-diagonal of the matrix to zero by adding a
gravitational potential term in which the masses roll around in dents on a
floor. Our interest is in mapping the mass-spring models to void-channel
networks but no analogue of these dents for the void-channel network was
apparent to us. Regardless, we shall demonstrate that the squared tight-binding
and mass-spring models are analogous. First, we decompose the unsquared
mass-spring matrix equation as
$\left[\begin{array}[]{cc}\frac{3k}{m_{h}}\underline{\underline{I}}_{2\times
2}&-\frac{k}{m_{h}}\underline{\underline{\Psi}}_{\vec{k}}^{\dagger}\\\
-\frac{k}{m_{k}}\underline{\underline{\Psi}}_{\vec{k}}&\frac{2k}{m_{k}}\underline{\underline{I}}_{3\times
3}\end{array}\right]=\frac{\alpha k}{m_{0}}\underline{\underline{I}}_{5\times
5}+\left[\begin{array}[]{cc}+\frac{\beta
k}{m_{0}}\underline{\underline{I}}_{2\times
2}&-\frac{k}{m_{h}}\underline{\underline{\Psi}}_{\vec{k}}^{\dagger}\\\
-\frac{k}{m_{k}}\underline{\underline{\Psi}}_{\vec{k}}&-\frac{\beta
k}{m_{0}}\underline{\underline{I}}_{3\times 3}\end{array}\right],$ (25)
where
$\displaystyle\alpha$ $\displaystyle=\frac{3/m_{h}+2/m_{k}}{2}m_{0},$ (26)
$\displaystyle\beta$ $\displaystyle=\frac{3/m_{h}-2/m_{k}}{2}m_{0}.$ (27)
Taking the $\alpha k/m_{0}$ term to the right hand side of the equations of
motion, we obtain
$\left[\begin{array}[]{cc}+\frac{\beta
k}{m_{0}}\underline{\underline{I}}_{2\times
2}&-\frac{k}{m_{h}}\underline{\underline{\Psi}}_{\vec{k}}^{\dagger}\\\
-\frac{k}{m_{k}}\underline{\underline{\Psi}}_{\vec{k}}&-\frac{\beta
k}{m_{0}}\underline{\underline{I}}_{3\times
3}\end{array}\right]\left[\begin{array}[]{c}u_{1}\\\ \vdots\\\
u_{5}\end{array}\right]=\left(\omega^{2}(\vec{k})-\frac{\alpha
k}{m_{0}}\right)\left[\begin{array}[]{c}u_{1}\\\ \vdots\\\
u_{5}\end{array}\right].$ (28)
Note that the equations of motion are only chiral symmetric about
$\omega^{2}=\alpha k/m_{0}$ if we choose $2m_{h}=3m_{k}$ such that $\beta=0$.
The matrix of equation (28) squares to
$\displaystyle\left[\begin{array}[]{cc}+\frac{\beta
k}{m_{0}}\underline{\underline{I}}_{2\times
2}&-\frac{k}{m_{h}}\underline{\underline{\Psi}}_{\vec{k}}^{\dagger}\\\
-\frac{k}{m_{k}}\underline{\underline{\Psi}}_{\vec{k}}&-\frac{\beta
k}{m_{0}}\underline{\underline{I}}_{3\times 3}\end{array}\right]^{2}$
$\displaystyle=\left(\frac{\beta
k}{m_{0}}\right)^{2}\underline{\underline{I}}+\left[\begin{array}[]{cc}\frac{k^{2}}{m_{k}m_{h}}\underline{\underline{\Psi}}_{\vec{k}}^{\dagger}\underline{\underline{\Psi}}_{\vec{k}}&\underline{\underline{0}}_{2\times
3}\\\ \underline{\underline{0}}_{3\times
2}&\frac{k^{2}}{m_{k}m_{h}}\underline{\underline{\Psi}}_{\vec{k}}\underline{\underline{\Psi}}_{\vec{k}}^{\dagger}\end{array}\right],$
(33)
such that the ensuing squared equations of motion are
$\left[\begin{array}[]{cc}\frac{k^{2}}{m_{k}m_{h}}\underline{\underline{\Psi}}_{\vec{k}}^{\dagger}\underline{\underline{\Psi}}_{\vec{k}}&\underline{\underline{0}}_{2\times
3}\\\ \underline{\underline{0}}_{3\times
2}&\frac{k^{2}}{m_{k}m_{h}}\underline{\underline{\Psi}}_{\vec{k}}\underline{\underline{\Psi}}_{\vec{k}}^{\dagger}\end{array}\right]\left[\begin{array}[]{c}u_{1}\\\
\vdots\\\ u_{5}\end{array}\right]=\left[\left(\omega^{2}(\vec{k})-\frac{\alpha
k}{m_{0}}\right)^{2}-\left(\frac{\beta
k}{m_{0}}\right)^{2}\right]\left[\begin{array}[]{c}u_{1}\\\ \vdots\\\
u_{5}\end{array}\right],$ (34)
which is analogous to the squared tight-binding equation (21) with
$t^{2}\leftrightarrow k^{2}/(m_{k}m_{h})$ and
$E^{2}\leftrightarrow\left(\omega^{2}(\vec{k})-\frac{\alpha
k}{m_{0}}\right)^{2}-\left(\frac{\beta k}{m_{0}}\right)^{2}$. Note that for
the squared equations of motion, $\beta\neq 0$ simply corresponds to another
shift in frequency and does not break any symmetries of the squared equations.
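This correspondence can be confirmed numerically at an arbitrary Bloch wavevector: the eigenvalues $\omega^{2}(\vec{k})$ of equation (24) should map onto the eigenvalues of the two blocks of equation (34). A sketch of this check (our own illustration, in units $m_{0}=k_{0}=1$, using the parameter values of figure 7, for which $\beta=0$):

```python
import numpy as np

ks, mh, mk = 0.01978, 0.01104, 0.00736   # spring constant and masses (figure 7)
a1 = np.array([1.0, 0.0])                # assumed triangular-lattice vectors
a2 = np.array([0.5, np.sqrt(3) / 2])
kv = np.array([0.7, -0.3])               # an arbitrary Bloch wavevector

Psi = np.array([[1.0, 1.0],
                [np.exp(1j * kv @ a1), 1.0],
                [np.exp(1j * kv @ a2), 1.0]])

# Mass-spring matrix of equation (24); it is not Hermitian, but its
# eigenvalues omega^2 are real (it is similar to a symmetric matrix).
D = np.block([[3 * ks / mh * np.eye(2), -ks / mh * Psi.conj().T],
              [-ks / mk * Psi, 2 * ks / mk * np.eye(3)]])
w2 = np.sort(np.linalg.eigvals(D).real)

alpha = (3 / mh + 2 / mk) / 2            # equation (26), with m0 = 1
beta = (3 / mh - 2 / mk) / 2             # equation (27); zero for these masses

# Eigenvalue map of equation (34)
lhs = np.sort((w2 - alpha * ks)**2 - (beta * ks)**2)
blocks = ks**2 / (mh * mk)
rhs = np.sort(np.concatenate([
    np.linalg.eigvalsh(blocks * Psi.conj().T @ Psi),
    np.linalg.eigvalsh(blocks * Psi @ Psi.conj().T),
]))
```

The two sorted spectra coincide, confirming that the shifted and squared mass-spring frequencies reproduce the block eigenvalues of equation (34).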
Next, we propose a photonic analogue of this mass-spring network. Our network
of voids and channels is formed between “flower” shaped particles of perfect
conductors arranged on a triangular lattice with lattice parameter $a$; each
flower consists of six cylinders of radius $r$ that are distributed along a
ring of radius $R$ (see figure 7(b)). The voids at the honeycomb and kagome
sites (shown in blue and red, respectively) are connected by narrow channels
of half-width $h$, where
$R\cos\frac{\pi}{6}=\frac{a}{2}-r-h.$ (35)
We fix the surface-to-surface gap as $2h=a/1000$, which is similar to the
ratio we used for the SSH model in the previous section. While this is quite a
small ratio of $h/a$, we show in Appendix A that less extreme ratios of $h/a$
would also be viable. For a given value of $r$, we can then determine
$R=(a/2-h-r)/\cos(\pi/6)$ and numerically calculate the areas of the honeycomb
and kagome voids, $A_{h}$ and $A_{k}$. We settled on $r=0.259a$, for which
$R=0.27771a$, $A_{h}=0.01104a^{2}$, and $A_{k}=0.00736a^{2}$, such that
$A_{h}/A_{k}=3/2$. As shown in figure 7(c), there is good agreement between
the system of voids and channels (red) and the discrete system of masses and
springs (black) where
$\displaystyle m_{h}$ $\displaystyle=A_{h}\cdot m_{0}/a^{2}=0.01104m_{0},$
(36) $\displaystyle m_{k}$ $\displaystyle=A_{k}\cdot
m_{0}/a^{2}=0.00736m_{0},$ (37) $\displaystyle k$
$\displaystyle=\frac{1}{\pi}\sqrt{\frac{2h}{r}}\cdot k_{0}=0.01978k_{0}.$ (38)
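These numbers follow directly from the geometry. A quick check (our own, with $a=1$): equation (35) fixes the ring radius $R$, and the channel spring constant follows from the cylinder radius $r$:

```python
import numpy as np

a = 1.0
r = 0.259 * a            # cylinder radius chosen in the text
h = 0.0005 * a           # channel half-width, 2h = a/1000

R = (a / 2 - h - r) / np.cos(np.pi / 6)   # ring radius from equation (35)
k = np.sqrt(2 * h / r) / np.pi            # spring constant of equation (38), in k0
```

This reproduces the quoted values $R=0.27771a$ and $k=0.01978k_{0}$.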
We also verify in Appendix A that the agreement between the mass-spring and
void-channel networks improves as the channels are made narrower; this is in
line with expectations from the asymptotic model [26]. We have chosen these
particular parameters such that $A_{h}/A_{k}=m_{h}/m_{k}=3/2$ and therefore
$\beta=0$, in order that the mass-spring and void-channel models more closely
resemble the tight-binding model shown in figure 6. When we study the edge
states in the next section, we shall see that the topological edge states
persist even if $m_{h}/m_{k}\neq 3/2$ and $\beta\neq 0$.
Now that we have introduced our photonic geometry, let us compare and contrast
our work with some recent studies of the honeycomb-kagome lattice in photonics
and acoustics. Maimaiti _et al_ [28] studied the response of a triangular
array of metallic cylinders to microwave radiation. They noted that the voids
between the cylinders lie on a honeycomb-kagome lattice and they used Monte
Carlo methods to fit their model to a honeycomb-kagome tight-binding model.
Crucially, however, the cylinders were not closely spaced and the authors did
not consider any topological aspects of the array; it is likely that the
quality of the edge states in this system would be reduced by longer-ranged
coupling between voids and their next-nearest-neighbours. On the other hand,
Yan _et al_ [44] studied a honeycomb-kagome array of acoustic resonators
connected by narrow channels and considered the symmetry protected topology.
However, in their work the widths of the channels were alternated to produce a
square-root topological insulator where the topology was inherited from the
breathing kagome sector of the squared Hamiltonian, whereas we study the
lattice with equal channel widths, which is akin to the mass-spring/tight-
binding models of Mizoguchi _et al_ [40], where the non-trivial topology is
inherited from the honeycomb sector of the squared Hamiltonian.
#### 3.2.3 Edge states in a ribbon
Figure 8: (a) Schematic of a ribbon of the honeycomb-kagome void-channel
network introduced in figure 7, but terminated by slabs of perfect conductor
at the top and bottom. The large magenta voids at the boundary have area
$A_{\mathrm{cap}}$ and reduce the chiral symmetry breaking at the interfaces.
(b) Frequency bands of a ribbon that is $N_{\mathrm{cells}}=10$ unit cells
long. The frequencies are normalised by $\omega_{0}=\sqrt{k_{0}/m_{0}}$ for
the mass-spring model and $\omega_{0}=2\pi c_{0}/a$ for the void-channel
model. (c-e) Visualisations of the labelled eigenmodes in panel (b). For each
mode shown here, there is also an energy-degenerate inversion-symmetric
partner at the other edge. (c) The lowest pair of bands are excitations in the large
voids, whereas (d-e) are topological edge states protected by the chiral
symmetry of the squared Hamiltonian.
In order to produce topological edge states, we must introduce interfaces in a
manner that preserves (i) the block-diagonal nature of the squared equations
and (ii) the chiral symmetry of the honeycomb sector of the squared equations
[40]. When we take the square of the tight-binding model with free boundary
conditions, the sites at the edge of the model gain a different on-site
potential compared to the sites of the same sublattice in the bulk. In order
to retain the chiral symmetry of the honeycomb lattice, we impose that the
kagome sites of the tight-binding model are located at the edge of the model.
The edges of the mass-spring model therefore consist of
kagome sites capped by heavy masses to emulate the free boundary condition.
Figure 8(b) shows a comparison between the void-channel model (red) and the
mass-spring model (black, with the same values of $k$, $m_{h}$, and $m_{k}$ as
before, and capped with masses of $M=A_{\mathrm{cap}}\cdot m_{0}/a^{2}=0.12247m_{0}$ at the
honeycomb sites along the interface). As before, there is good agreement
between the mass-spring and void-channel models; however, the accuracy
decreases as the frequency increases. We observe several new edge states that
were not present in the bulk eigenmodes; these are marked in the dispersion
curves by the red circles c, d, and e in figure 8(b), and visualised in
figures 8(c-e), respectively.
We see from figure 8(c) that the lowest pair of bands correspond to
excitations within the large voids. As we increase the size of the capping
voids, these bands would flatten to zero frequency. On the other hand, figures
8(d-e) show the pair of topological edge states arising from the non-trivial
topology of the honeycomb sector of the squared equations [40].
If the squared system were exactly chiral symmetric then the topological edge
states would be flat [40]; instead there is a slight tilt, indicating a weak
symmetry breaking. Interestingly, the field within the large voids is weaker
for the higher energy eigenmode in figure 8(e), suggesting that the chiral
symmetry of the squared system is better preserved at the higher frequencies.
This is because the unwanted excitation in the capping voids (see figure 8(c))
is closer in frequency and character to the edge state with lower frequency
(figure 8(d), where honeycomb and kagome sites are in phase) than to the edge
state with higher frequency (figure 8(e), where honeycomb and kagome sites are
out of phase). The unwanted mode therefore hybridises more strongly with the
lower-frequency edge state.
Although we have chosen $m_{h}/m_{k}=3/2$, such that the mass-spring and void-
channel models more closely resemble the tight-binding model of figure 6, we
verify in Appendix C that the edge states persist even if $m_{h}/m_{k}\neq 3/2$
such that $\beta\neq 0$. We also verify in Appendix B that the edge states are
not protected without the presence of the large capping masses/voids, which
restore the chiral symmetry of the squared equations of motion.
#### 3.2.4 Edge and corner states in a triangular metaparticle
Figure 9: (a) Frequency eigenspectrum of a triangular metaparticle built from
the mass-spring honeycomb-kagome network introduced in figure 7. The edges are
capped with extremely heavy masses ($M=1000m_{0}$, black points) or realistic
masses ($M=0.12247m_{0}$, red points, as in figure 8). The frequencies are normalised by
$\omega_{0}=\sqrt{k_{0}/m_{0}}$. The upper left inset shows a schematic of the
triangular metaparticle with the heavy masses shown in magenta. The schematic
shows a particle with $N_{\mathrm{cells}}=7$ unit cells along each edge, but
$N_{\mathrm{cells}}=19$ was used in the calculations. The lower right inset
shows a zoom of the lower-frequency set of edge and corner states. (b)-(e)
show the steady-state fields obtained by driving the system with a harmonic
force at the honeycomb site at the centre of the highlighted blue region, at
the frequencies indicated in the lower-right inset of panel (a).
We now study corner and edge states in a large but finite “triangular
metaparticle” of the honeycomb-kagome lattice, as shown in the upper-left
inset of figure 9(a). Having established the validity of the mass-spring model
for the bulk and at the edges, we model the system using only the discrete
mass-spring equations as these are more accessible, far faster to solve and
still retain the crucial physics we are interested in. As with the ribbon, we
cap the ends with heavy masses at honeycomb sites to reduce the breaking of
the chiral symmetry in the honeycomb sector of the squared equations.
The main panel of figure 9(a) shows the energy spectrum of a triangular
metaparticle with $N_{\mathrm{cells}}=19$ unit cells along each edge for a
realistic capping mass ($M=0.12247m_{0}$, red) and for a very large capping
mass where the chiral symmetry of the honeycomb sector of the squared
equations is near-perfectly restored ($M=1000m_{0}$, black). We identify the
large flat region of eigenmodes at $\omega\approx 2.4\omega_{0}$ as the bulk
flat band inherited from the kagome lattice, and the smaller flat regions of
eigenmodes at $\omega\approx 1.25\omega_{0}$ and $\omega\approx 3.1\omega_{0}$
as the topological edge states inherited from the honeycomb lattice. The
lower-right inset of figure 9(a) shows the energy eigenmodes of the lower
frequency edge state in more detail. Although the edge state is extremely
flat for the unrealistically large value of $M$, there is an advantage to
using a more realistic value of $M$, for which the protecting symmetry is
weakly broken.
Figure 10: Visualisation of the triply degenerate eigenmodes of the triangular
metaparticle for the mode indices (a) 248, (b) 249, (c) 250 of figure 9(a).
The fields reveal that these eigenmodes are corner states. The modes are
degenerate because of the $\mathrm{C}_{3}$ symmetry of the triangle; the
eigensolver has therefore returned arbitrary linear superpositions of the
three corner eigenmodes.
Figures 9(b-e) show the steady-state solutions of the triangular mass-spring
metaparticle capped by realistic masses and driven by time-harmonic forces
centred at the honeycomb sites highlighted in light blue, for the four
frequencies labelled in the lower-right inset of figure 9(a). Note that we
forced the system at frequencies just below the resonances because the energy
of the closed mass-spring system solution diverges if we drive exactly at a
resonant frequency. In figure 9(b) the energy propagates freely through the
particle. This is because the two eigenmodes that are closest to the driving
frequency are actually bulk eigenmodes corresponding to the Dirac cones at
$\mathrm{K}$ and $-\mathrm{K}$ of figure 7(c). In figures 9(c-d) we see that
the energy propagates around the edge of the particle but not into the bulk.
As the energy increases, the modes become more localised to the edges. Figure
9(e) shows that the field is localised in all directions when driving at the
edge of the triangle at a frequency slightly below the group of triply
degenerate eigenstates. This is because these eigenstates are corner
eigenstates, as shown in figure 10. Crucially, the weak breaking of the chiral
symmetry of the squared equations has lifted the degeneracies between the bulk
states and the edge/corner states, allowing these to be excited at different
frequencies.
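The steady-state fields of figures 9(b-e) follow from solving the driven equations of motion, $(\underline{\underline{K}}-\omega^{2}\underline{\underline{M}})\vec{u}=\vec{f}$, at the driving frequency. The sketch below (our own illustration, not the paper's geometry) applies the same procedure to the one-dimensional SSH-like chain of section 3.1, driving the first mass just below the edge-state frequency and recovering a response localised at that edge:

```python
import numpy as np

m, k1, k2 = 0.1, 0.02, 0.01        # figure 5(b) values, in units of m0 and k0
N = 20

# Wall-bounded SSH-like chain: dynamical matrix D (eigenvalues omega^2)
D = np.diag(np.full(N, (k1 + k2) / m))
for n in range(N - 1):
    k = k2 if n % 2 == 0 else k1
    D[n, n + 1] = D[n + 1, n] = -k / m

# Time-harmonic force on the first mass; drive just below the edge-state
# frequency omega^2 = 0.3 (driving exactly on resonance diverges).
f = np.zeros(N)
f[0] = 1.0
w2_drive = 0.29
u = np.linalg.solve(D - w2_drive * np.eye(N), f / m)   # steady-state amplitudes
```

The response is dominated by the near-resonant edge state: the amplitude peaks at the driven end and decays exponentially into the bulk.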
## 4 Conclusions
In this paper we have shown that networks of voids and narrow connecting
channels between perfect conductors are a promising platform for mimicking
chiral or square-root topological tight-binding models within photonics. This
was done by mapping the tight-binding models to mass-spring models, and then
mapping these mass-spring models to their asymptotically exact continuum
analogue [26, 27], comprised of void-channel networks.
We found that although introducing interfaces to the mass-spring/void-channel
networks could break the symmetries that protected the topological edge
states, these symmetries could be restored by capping the interfaces with
heavy masses/large voids. We were able to create a photonic analogue of the 1D
SSH model [20] with a chain of equally sized voids connected by channels of
alternating widths, and a photonic analogue of a square-root topological
semimetal [21, 40] with voids positioned on a honeycomb-kagome lattice and
narrow channels connecting the nearest-neighbour voids.
More broadly, we hope that the asymptotic network approximations espoused here
will provide a direct mapping to other complex photonic crystal phenomena,
including and beyond topological physics. Discrete models are able to
encompass highly non-trivial phenomenology and hence our approach provides a
systematic and simplified route to engineer exotic responses in continuum
photonic structures in an asymptotically exact manner.
## 5 Acknowledgements
S.J.P. acknowledges his studentship from the Centre for Doctoral Training on
Theory and Simulation of Materials at Imperial College London funded by EPSRC
Grant Number EP/L015579/1. The support of the UK EPSRC through grant
EP/L024926/1 is acknowledged by RVC and MM, as is that of the ERC H2020
FETOpen project BOHEME under grant agreement No. 863179.
## Appendix A Validity of the mass-spring networks as an analogue of the
void-channel networks
### A.1 Convergence with decreasing gap size
Figure 11: Normalised frequency of the flat band of honeycomb-kagome networks
of masses and springs (black line) and the equivalent networks of voids and
channels (red points). The agreement between the two models improves as the
gap size is decreased.
Figure 11 shows the frequency of the flat band as a function of gap size in
the honeycomb-kagome network of masses and springs (black line) and voids and
channels (red points) that were originally introduced in figure 7. The
agreement between the two models improves as the gap size decreases, as
expected from the asymptotic analysis of Vanel _et al_ [26, 27].
### A.2 Operating frequencies and length scales
Let us consider the feasibility of manufacturing the honeycomb-kagome network
of voids and channels, and the frequencies and length scales at which it could
operate. We must find a balance between the size of the gaps, the size of the
particles, the frequencies at which the edge states occur, and the frequencies
at which the metals may be treated as perfect conductors.
For example, let us consider the parameters required to obtain edge states at
$\omega=1\,\mathrm{THz}$. The lower frequency edge states have a
normalised frequency of $\omega/\omega_{0}\approx 1.3$, where $\omega_{0}=2\pi
c_{0}/a$ and $c_{0}$ is the speed of light in vacuum, corresponding to a
lattice parameter of $a\approx 1.3\cdot 2\pi c_{0}/\omega=2.4\,\mathrm{mm}$.
This would correspond to channel half-widths of
$h=a/2000=1.2\,\mathrm{\mu m}$ using the ratio from earlier, although we have
seen in the previous section that this could be relaxed somewhat without the
mapping between the void-channel and mass-spring networks breaking down. Both
$a$ and $h$ are orders of magnitude greater than the skin depth of gold, which
is on the order of $50\,\mathrm{nm}$ at $\omega=1\,\mathrm{THz}$, and it would
therefore be reasonable to treat the gold particles as perfectly conducting.
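These scale estimates are easy to reproduce; the following illustrative sketch (the value of $c_{0}$ and the gold skin depth are rounded, and $\omega$ is taken as an angular frequency in rad/s, which is the reading consistent with the quoted numbers) recovers the lattice parameter and channel half-width above.

```python
# Back-of-the-envelope check of the operating scales quoted in the text.
import math

C0 = 2.998e8          # speed of light in vacuum, m/s
omega = 1.0e12        # target edge-state angular frequency, rad/s (~1 THz)
ratio = 1.3           # normalised edge-state frequency omega/omega_0

# omega/omega_0 = 1.3 with omega_0 = 2*pi*c0/a  =>  a = 1.3 * 2*pi*c0/omega
a = ratio * 2 * math.pi * C0 / omega
h = a / 2000          # channel half-width, using the h = a/2000 ratio

skin_depth = 50e-9    # approximate skin depth of gold at ~1 THz, m

print(f"lattice parameter a  = {a * 1e3:.2f} mm")   # -> 2.45 mm
print(f"channel half-width h = {h * 1e6:.2f} um")   # -> 1.22 um
print(f"h / skin depth       = {h / skin_depth:.0f}")
```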
## Appendix B No edge states in mass-spring/void-channel networks with free
boundary conditions
Figure 12: (a) Schematic of the same void-channel chain as in figure 5 but
without the large capping voids. (b) There is good agreement between the
squared frequency spectrum of the void-channel chain (red crosses) and the
corresponding mass-spring chain with free boundary conditions (black points).
The frequencies are normalised by $\omega_{0}=\sqrt{k_{0}/m_{0}}$ for the
mass-spring model and $\omega_{0}=2\pi c_{0}/L$ for the void-channel model.
Without the large capping voids/heavy capping masses chiral symmetry is broken
at the edges of these chains and there are no topological edge states in the
band gap.
We show in figure 12(a) the same SSH-like chain, as in figure 5, but without
the capping voids. Figure 12(b) shows the squared frequency spectrum of the
void-channel model (red crosses) and the corresponding mass-spring model with
free boundary conditions (black points). As expected, there are no edge states
because the chiral symmetry is strongly broken at the ends of the chains.
Similarly, we verify in figure 13 that the topological edge states are not
present in the ribbon of the honeycomb-kagome lattice without heavy capping
masses/large capping voids.
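The role of the capping masses can be illustrated with a minimal mass-spring computation. The sketch below (hypothetical parameter values $m=1$, $k_{w}=1$, $k_{s}=2$, $N=40$, not the parameters of the figures) builds the symmetric tridiagonal dynamical matrix of a wall-terminated SSH-like chain, i.e. the infinitely-heavy-capping limit, and counts squared-frequency eigenvalues inside the band gap with a Sturm-sequence sweep; the weak bonds at the ends produce the pair of midgap edge states.

```python
# Wall-terminated SSH-like mass-spring chain: equal masses m, springs
# alternating k_s, k_w, ..., k_s with fixed walls at both ends.  Squared
# frequencies are eigenvalues of the tridiagonal dynamical matrix
#   D[i][i] = (k_i + k_{i+1})/m,   D[i][i+1] = -k_{i+1}/m.
# Eigenvalues below x are counted with a Sturm-sequence (LDL^T) sweep.

def count_below(diag, off, x):
    """Number of eigenvalues of a symmetric tridiagonal matrix that are < x."""
    count, d = 0, 1.0
    for i in range(len(diag)):
        b2 = off[i - 1] ** 2 if i > 0 else 0.0
        d = diag[i] - x - b2 / d
        if d == 0.0:
            d = 1e-300          # guard against division by zero
        if d < 0.0:
            count += 1
    return count

m, kw, ks, N = 1.0, 1.0, 2.0, 40                 # masses, weak/strong springs
springs = [ks if i % 2 == 0 else kw for i in range(N + 1)]  # walls both ends
diag = [(springs[i] + springs[i + 1]) / m for i in range(N)]
off = [-springs[i + 1] / m for i in range(N - 1)]

# With these numbers the bulk bands of omega^2 are [0, 2] and [4, 6]; the two
# topological edge states sit exponentially close to omega^2 = 3 in the gap.
midgap = count_below(diag, off, 3.5) - count_below(diag, off, 2.5)
print("states in the gap:", midgap)   # -> 2 (the pair of edge states)
```

Terminating the chain so that the strong springs attach to the walls leaves the internal hopping pattern weak-first, which is the topological dimerisation; changing the termination removes the midgap pair, mirroring the free-boundary result of figure 12.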
Figure 13: Edge states of the ribbon of honeycomb-kagome mass-spring model
introduced in figure 8 with $2m_{h}=3m_{k}$ for (a) the ideal ‘wall’ boundary
condition on the kagome sites (infinitely heavy capping masses at the
honeycomb sites) and (b) the free boundary condition on the kagome sites (no
capping masses). The edge states in (b) are not pinned to a particular energy
because the honeycomb sector of the squared equations of motion is not chiral
symmetric.
## Appendix C Chiral symmetry of the squared honeycomb-kagome lattice for
$m_{h}\neq m_{k}$
Figure 14: The edge states of the square-root semimetal are protected by the
chiral symmetry of the squared equations, and may survive even as the chiral
symmetry of the unsquared equations is broken. We plot the frequencies
(normalised by $\omega_{0}=\sqrt{k_{0}/m_{0}}$) of the mass-spring ribbon from
figure 8 for (a) $2m_{h}<3m_{k}$, (b) $2m_{h}=3m_{k}$,
and (c) $2m_{h}>3m_{k}$, where $m_{h}$ and $m_{k}$ are the masses at the
honeycomb and kagome sites, respectively. The edge states are robust in all
three systems even though the unsquared equations are not chiral symmetric for
$2m_{h}\neq 3m_{k}$. This is because the topological edge states are protected
by the chiral symmetry of the honeycomb sector of the squared equations [40],
which can be preserved even as the chiral symmetry of the unsquared equations
is lost.
In figure 14 we plot the frequency bands of the mass-spring honeycomb-kagome
network with perfect wall boundary conditions, but relax the constraint
$2m_{h}=3m_{k}$ such that $\beta\neq 0$ where $\beta$ is defined in equation
(27). This breaks the chiral symmetry of the unsquared equations, yet the
edge states remain flat and robust against this perturbation, because the
honeycomb sector of the _squared_ equations remains chiral symmetric.
## References
* [1] J M Kosterlitz and D J Thouless. Ordering, metastability and phase transitions in two-dimensional systems. Journal of Physics C: Solid State Physics, 6(7):1181, 1973.
* [2] F D M Haldane. Continuum dynamics of the 1-d heisenberg antiferromagnet: Identification with the $o(3)$ nonlinear sigma model. Physics Letters A, 93(9):464–468, 1983.
* [3] F D M Haldane. Nonlinear field theory of large-spin heisenberg antiferromagnets: semiclassically quantized solitons of the one-dimensional easy-axis néel state. Physical Review Letters, 50(15):1153, 1983.
* [4] J G Checkelsky, J Ye, Y Onose, Y Iwasa, and Y Tokura. Dirac-fermion-mediated ferromagnetism in a topological insulator. Nature Physics, 8(10):729–733, 2012.
* [5] C-K Chiu, J C Y Teo, A P Schnyder, and S Ryu. Classification of topological quantum matter with symmetries. Reviews of Modern Physics, 88(3):035005, 2016.
* [6] A A Burkov and D G Hawthorn. Spin and charge transport on the surface of a topological insulator. Physical Review Letters, 105(6):066802, 2010.
* [7] M Vali, D Dideban, and N Moezi. A scheme for a topological insulator field effect transistor. Physica E: Low-dimensional Systems and Nanostructures, 69:360–363, 2015.
* [8] M He, H Sun, and Q L He. Topological insulator: Spintronics and quantum computations. Frontiers of Physics, 14(4):43401, 2019.
* [9] X Zhang, M Xiao, Y Cheng, M-H Lu, and J Christensen. Topological sound. Communications Physics, 1(1):1–13, 2018.
* [10] T Ozawa, H M Price, A Amo, N Goldman, M Hafezi, L Lu, M C Rechtsman, D Schuster, J Simon, O Zilberberg, et al. Topological photonics. Reviews of Modern Physics, 91(1):015006, 2019.
* [11] K von Klitzing, T Chakraborty, P Kim, V Madhavan, X Dai, J McIver, Y Tokura, L Savary, D Smirnova, A M Rey, et al. 40 years of the quantum Hall effect. Nature Reviews Physics, pages 1–5, 2020.
* [12] M Kim, Z Jacob, and J Rho. Recent advances in 2d, 3d and higher-order topological photonics. Light: Science & Applications, 9(1):1–30, 2020.
* [13] M S Rider, S J Palmer, S R Pocock, X Xiao, P Arroyo Huidobro, and V Giannini. A perspective on topological nanophotonics: current status and future challenges. Journal of Applied Physics, 125(12):120901, 2019.
* [14] M A Bandres, S Wittek, G Harari, M Parto, J Ren, M Segev, D N Christodoulides, and M Khajavikhan. Topological insulator laser: Experiments. Science, 359(6381), 2018.
* [15] Y Ota, R Katsumi, K Watanabe, S Iwamoto, and Y Arakawa. Topological photonic crystal nanocavity laser. Communications Physics, 1(1):1–8, 2018.
* [16] A B Khanikaev, S H Mousavi, W-K Tse, M Kargarian, A H MacDonald, and G Shvets. Photonic topological insulators. Nature Materials, 12(3):233–239, 2013.
* [17] A P Schnyder, S Ryu, A Furusaki, and A W W Ludwig. Classification of topological insulators and superconductors in three spatial dimensions. Physical Review B, 78(19):195125, 2008.
* [18] A Kitaev. Periodic table for topological insulators and superconductors. In AIP conference proceedings, volume 1134, pages 22–30. American Institute of Physics, 2009.
* [19] S Ryu, A P Schnyder, A Furusaki, and A W W Ludwig. Topological insulators and superconductors: tenfold way and dimensional hierarchy. New Journal of Physics, 12(6):065010, 2010.
* [20] J K Asbóth, L Oroszlány, and A Pályi. A short course on topological insulators. Lecture notes in physics, 919:997–1000, 2016.
* [21] J Arkinstall, M H Teimourpour, L Feng, R El-Ganainy, and H Schomerus. Topological tight-binding models from nontrivial square roots. Physical Review B, 95(16):165109, 2017.
* [22] S R Pocock, P A Huidobro, and V Giannini. Bulk-edge correspondence and long-range hopping in the topological plasmonic chain. Nanophotonics, 8(8):1337–1347, 2019.
* [23] S Pocock. Topological physics in one-dimensional chains of metallic nanoparticles. PhD thesis, Imperial College London, 2020.
* [24] C Poli, M Bellec, U Kuhl, F Mortessagne, and H Schomerus. Selective enhancement of topologically induced interface states in a dielectric resonator chain. Nature Communications, 6(1):1–5, 2015.
* [25] N Malkova, I Hromada, X Wang, G Bryant, and Z Chen. Observation of optical Shockley-like surface states in photonic superlattices. Optics Letters, 34(11):1633–1635, 2009.
* [26] A L Vanel, O Schnitzer, and R V Craster. Asymptotic network models of subwavelength metamaterials formed by closely packed photonic and phononic crystals. EPL (Europhysics Letters), 119(6):64002, 2017.
* [27] A Vanel. Asymptotic analysis of discrete and continuous periodic media. PhD thesis, Imperial College London, 2018.
* [28] W Maimaiti, B Dietz, and A Andreanov. Microwave photonic crystals as an experimental realization of a combined honeycomb-kagome lattice. Physical Review B, 102(21):214301, 2020.
* [29] H Wakao, T Yoshida, T Mizoguchi, and Y Hatsugai. Topological modes protected by chiral and two-fold rotational symmetry in a spring-mass model with a Lieb lattice structure. Journal of the Physical Society of Japan, 89(8):083702, 2020.
* [30] Li-Yang Zheng, Vassos Achilleos, Olivier Richoux, Georgios Theocharis, and Vincent Pagneux. Observation of edge waves in a two-dimensional su-schrieffer-heeger acoustic network. Phys. Rev. Applied, 12:034014, Sep 2019.
* [31] L-Y Zheng, V Achilleos, Z-G Chen, O Richoux, G Theocharis, Y Wu, J Mei, S Felix, V Tournat, and V Pagneux. Acoustic graphene network loaded with Helmholtz resonators: a first-principle modeling, Dirac cones, edge and interface waves. New Journal of Physics, 22(1):013029, jan 2020.
* [32] L-Y Zheng, X-J Zhang, M-H Lu, Y-F Chen, and J Christensen. Knitting topological bands in artificial sonic semimetals. Materials Today Physics, 16:100299, 2021.
* [33] A L Vanel, O Schnitzer, and R V Craster. Asymptotic modeling of phononic box crystals. SIAM J. Appl. Math., 79(2):506–524, 2019.
* [34] Y. Sun, B. Edwards, A. Alu, and N. Engheta. Experimental realization of optical lumped nanocircuits at infrared wavelengths. Nat. Mater., 11:208–212, 2012.
* [35] K H Matlack, M Serra-Garcia, A Palermo, S D Huber, and C Daraio. Designing perturbative metamaterials from discrete models. Nature Materials, 17(4):323–328, 2018.
* [36] F. Hecht. New development in FreeFem++. Journal of Numerical Mathematics, 20(3-4):251–265, 2012.
* [37] V Laude. Phononic crystals: artificial crystals for sonic, acoustic, and elastic waves. Number Vol. 26 in De Gruyter studies in mathematical physics. De Gruyter, Berlin, 2015.
* [38] W P Su, J R Schrieffer, and A J Heeger. Soliton excitations in polyacetylene. Physical Review B, 22(4):2099, 1980.
* [39] X Ni, M Weiner, A Alu, and A B Khanikaev. Observation of higher-order topological acoustic states protected by generalized chiral symmetry. Nature Materials, 18(2):113–120, 2019.
* [40] T Mizoguchi, T Yoshida, and Y Hatsugai. Square-root topological semimetals. Physical Review B, 103(4):045136, 2021.
* [41] A P Schnyder. Lecture notes on accidental and symmetry-enforced band crossings in topological semimetals. Topological Matter School, San Sebastian, Spain, 2018.
* [42] C Barreteau, F Ducastelle, and T Mallah. A bird’s eye view on the flat and conic band world of the honeycomb and Kagome lattices: towards an understanding of 2D metal-organic frameworks electronic structure. Journal of Physics: Condensed Matter, 29(46):465302, 2017.
* [43] P Delplace, D Ullmo, and G Montambaux. Zak phase and the existence of edge states in graphene. Physical Review B, 84(19):195452, 2011.
* [44] M Yan, X Huang, L Luo, J Lu, W Deng, and Z Liu. Acoustic square-root topological states. Physical Review B, 102(18):180102, 2020.
# Theoretical ground for precursors-based molecular spectroscopy
Alexander Makhlin1, Panagiotis Papoulias2, Eugene Surdutovich3
1 Rapid Research Inc, Southfield, Michigan 48076, USA
2 Science Seals, LLC, Ann Arbor, Michigan 48105, USA
3 Department of Physics, Oakland University, Rochester, Michigan 48309, USA
[email protected]
###### Abstract
A theory for excitation of molecular resonances by a train of precursors is
developed. Right at the vacuum-medium interface, a train of incident square
waves interacts with light electrons and is converted into a train of
precursors, which further excite molecular dipoles. Analytic calculations
indicate that these excited dipoles generate radiation, including secondary
precursors propagating in the backward direction. Encoded in this radiation
are proper frequencies of excited molecular dipoles allowing for spectroscopic
measurements. The frequency of the train of incident square pulses can be
several orders of magnitude smaller than the proper frequencies of molecular
resonances.
## I Introduction
The notion of precursors (the name adopted from seismology) as a physical
entity was introduced in optics by A. Sommerfeld as the solution to an
apparent paradox: in the domain of anomalous dispersion the group velocity can
exceed the speed of light, $c$, in vacuum. This was obviously in conflict with
the special theory of relativity First1 . Considering a semi-infinite
sinusoidal signal at the interface between vacuum and medium, Sommerfeld
proved that the leading part of the signal following the front propagates in
the medium with speed $c$. This leading part is known as a precursor. The
physics of this phenomenon was attributed to the dynamic nature of the
refraction index, $n(\omega)$. The latter cannot differ from unity until the
electronic polarization is engaged in response to electromagnetic wave. In
recent years, precursors have attracted attention of both experimentalists and
theorists. An extensive review is given in the books by K. Oughstun [BS3]. In this
paper we propose to use them for the purpose of spectroscopy.
Traditionally, spectroscopic measurements are conducted in continuous mode and
assume the availability of quasi-monochromatic sources of radiation. An
underlying assumption is that any properties of measured signals are encoded
in the dispersion law of the index of refraction and rely on the availability
of high resolution spectral devices, which may not always be the case, e.g. in
millimeter range radiation. In this paper, we propose another approach.
Reacting to steep wavefronts of the incident electromagnetic field, the medium
generates, right at the vacuum-medium interface, short pulses, precursors,
with their leading fronts traveling through a medium at the speed of light.
Precursors are insensitive to any properties of the medium, except for the
ubiquitous electronic polarization. However, a long train of “primary
precursors” can induce and substantially amplify oscillations in molecular
dipoles, which subsequently radiate not only in the forward, but also in the
backward direction with respect to the incident signal.
The current study is founded on the 1969 theoretical work by G.V. Skrotsky and
his group [precursor1969]. Impetus for their work was provided by advances
in the generation of ultrashort optical pulses with steep wavefronts and the
possibility of measuring time intervals down to the order of $10^{-14}$ sec.
(A successful direct measurement of precursors in a region of anomalous
dispersion was reported only in 2006 [Direct] by a group from Duke University.)
Ref. [precursor1969] studied the formation of a precursor during a traversal
of a vacuum-medium interface by the front of a light pulse and its passage
through a slab of matter. It was found that precursors can be completely
separated from an initial semi-infinite harmonic signal and that, sufficiently
close to the leading front, an “instantaneous frequency” of the precursor’s
electric field increases with the thickness of the slab, thus making them less
and less sensitive to the properties of a medium. Exactly at the leading
front, the amplitude of the electromagnetic field remains the same at any
distance of its propagation regardless of the number of slabs it crosses. In
the current study, we build on these physically important facts and suggest
that precursors may be utilized in spectroscopic studies of molecules or
detection of various chemical substances.
The approach taken in this study is prompted by a large difference of time
scales involved in the procedure of measurement and can be briefly described
as follows. Let a train of square pulses with sharp wavefronts be incident on
a vacuum-medium interface. Light electrons are immediately accelerated and
radiate even before they acquire velocity and displacement. The electronic
component of electric polarization at a time immediately following the
wavefront can be adequately described by the “plasma” refraction index,
$n_{e}(\omega)$. The scale of this process is determined by the Langmuir
frequency $\Omega_{e}\sim 10^{15}-10^{16}$ rad/sec corresponding to the
density of all electrons. The electric field of precursors produces an
external force in the mechanical equations of motion of elastic molecular
dipoles; these equations can be solved exactly. The scale of this process is
set by the proper frequency $\omega_{0}\sim 10^{12}$ rad/sec and the width
$\Gamma_{0}\ll\omega_{0}$ of a particular molecular resonance. The
acceleration of the dipole’s constituent charges results in a detectable
radiation. The field of this radiation is a sum of slowly varying (with the
proper frequency of the elastic dipole’s oscillation) electromagnetic fields
and of highly oscillating (with the electronic Langmuir frequency) fields of
precursors. The proper frequencies of molecular oscillations can be identified
by positions of maxima in intensity of backward radiation as functions of
duration $T$ of incident pulses (or the frequency $\nu_{0}\sim 10^{8}-10^{10}$
Hz of the pulses’ repetition in the incident train).
The paper is arranged as follows. In Sec. II we consider the first and fastest
process of formation of primary precursors at the vacuum-medium interface. We
begin with the simplest case of a single step and introduce mathematical
methods used throughout the paper. We derive an explicit expression for the
electric field of a single precursor and trace its evolution in the course of
its propagation inside the medium. Then, we consider its passage through an
interface and reflection of the incident signal in the form of a single square
pulse, which, having both leading and rear fronts, produces two precursors.
Finally, we examine propagation of a pulse through a slab of matter with
finite thickness.
In Sec. III and Appendix A we solve the equations of motion for elastically
bound charges in the field of a train of primary precursors, which originate
from an incident train of square pulses. We find an explicit time dependence
of the electric dipole moment of the molecule, as well as the ladder of
amplitudes of harmonic oscillations that are induced and amplified by the
train of precursors. Oscillating dipoles must radiate. Their radiation
propagates inside a dispersive medium and, eventually, escapes into the
vacuum. In Sec. IV we find the Green’s functions that solve the problem of
radiation and also explicitly account for the boundary conditions at the
interfaces between the medium and vacuum. We find that dipoles radiate both in
the forward and backward directions with respect to the direction of
propagation of the trains of incident pulses and of the primary precursors.
The expression for the electric field of the molecular dipoles’ radiation is
derived in Sec. V. The electric dipole polarization induced by the primary
precursors includes two distinct components. One of them is proportional to
the field of the entire train of primary precursors, which does not lead to
radiation and is a strict analytic result. Its presence can be accounted for
by small corrections to the purely electronic refraction index. The second
component, also found analytically, is due to abrupt jumps in the amplitude of
the elastic dipoles’ oscillations. It bears an anticipated harmonic pattern in
addition to yet another train of secondary precursors radiated in the backward
direction.
In Sec. VI.1 we analyze and interpret the results obtained in Sec. V. A
general discussion and outlook follow in Sec. VI.2. Appendices A, B and C
present some details of analytical calculations in Secs. III and V. In
Appendix D, a method allowing for numerical calculations elucidating the
analytical results obtained in Sec. V and presented in Sec. VI.1 is shown and
discussed.
## II Formation and propagation of precursors
In this section, we closely follow Ref. [precursor1969], gradually changing the
setup of the problem. We start with a semi-infinite incident step pulse
propagating from vacuum into a medium, then continue with a single rectangular
incident pulse. For the rest of the paper we consider a long train of incident
square pulses with alternating polarity. After any wavefront crosses an
interface, a purely electronic polarization transforms a signal into a
precursor.
For a front of a semi-infinite wave incident on a plane interface between the
vacuum and a medium, a steady state of propagation is reached after some time
has elapsed. The electromagnetic field of a steady state satisfies the
extinction theorem of Ewald and Oseen [Born; Rosenfeld]: two waves are formed
in the medium, a refracted wave with phase velocity $c/n$ and an
unrefracted wave propagating with the speed of light in vacuum. The latter wave
exactly cancels out the incident wave in the medium and only a refracted wave
is observed. However, a time interval, longer than the characteristic time
inherent to the medium, is required for the steady state to form. During this
interval immediately following the wavefront (before the refracted and non-
refracted waves are formed), a precursor propagating with the speed of light
in the direction of the incident wave is produced.
Traditionally, an electromagnetic signal in a medium is represented by a sum
of harmonics. Each harmonic is a stationary signal, which “knows nothing” of
its origin from a limited wave train, and behaves as a plane wave in a
dispersive medium. Its propagation is described by the stationary index of
refraction and stationary boundary conditions, as given by the Fresnel
formulas. The electromagnetic characteristics of the medium are determined by
natural frequencies $\omega_{q}$ of bound electrons and their relaxation
times, $\tau_{rel}\sim 1/\Gamma_{0}$. Within a time interval of about
$2\pi/\omega_{q}$ from the instant of arrival of the wave front at a given
point, excitation and relaxation processes play only a secondary role. From
the point of view of the damped classical oscillator model, electrons do not
have time to acquire either velocity or displacement with respect to their
equilibrium positions.
### II.1 Introductory calculations, an incident step signal
We examine the properties of precursors and consider the propagation of
various signals in the simplest case, i.e., when a medium has no molecular
resonances, while polarization due to light electrons completely determines
the index of refraction, $n_{e}(\omega)$,
$\displaystyle n_{e}^{2}(\omega)=1-\frac{\Omega_{e}^{2}}{\omega^{2}},\qquad \Omega_{e}^{2}=\frac{4\pi N_{e}e^{2}}{m_{e}},$ (1)
where $\Omega_{e}$ is the Langmuir (plasma) frequency. Even though in
anticipated experiments we expect the incident signal to be a long sequence of
alternating square pulses it is instructive to start with a single step of
unit amplitude, which has a well-known spectral representation,
$\displaystyle E_{0}(t,z)=\theta(t-z/c)=\frac{-1}{2\pi i}\int_{ia-\infty}^{ia+\infty}\frac{d\omega}{\omega}\,e^{-i\omega(t-z/c)},$ (2)
where $\omega/c=k_{0}$ is the wave vector of propagation in free space. After
the leading wavefront crosses the vacuum-medium interface, the amplitudes of
Fourier-components of a signal acquire the transmission factor
${\mathfrak{T}}(n_{e})$, while the wave vector $k_{0}=\omega/c$ changes for
$k(\omega)=\omega n_{e}(\omega)/c$. The electric field of such an incident
pulse inside the medium is
$\displaystyle E^{\prime}_{t}(t,z)=\frac{-1}{2\pi i}\int_{ia-\infty}^{ia+\infty}\frac{d\underline{\omega}}{\underline{\omega}}\,\mathfrak{T}[n_{e}(\omega)]\,e^{-i\Omega_{e}[\underline{\omega}t-\sqrt{\underline{\omega}^{2}-1}\,\tilde{z}]}\propto\theta(t-z/c)\,,$ (3)
where $\omega n_{e}(\omega)=\Omega_{e}\sqrt{\underline{\omega}^{2}-1}$, with
$\underline{\omega}=\omega/\Omega_{e}$ and $\tilde{z}=z/c$. The Fresnel
coefficients of transmission, ${\mathfrak{T}}$, and reflection,
${\mathfrak{R}}$, for partial monochromatic waves (on a plane boundary between
medium and vacuum and normal incidence) are well-known Born ,
$\displaystyle \mathfrak{T}(n_{e})=\frac{2}{1+n_{e}}=\frac{2\underline{\omega}}{\underline{\omega}+\sqrt{\underline{\omega}^{2}-1}}=\frac{1}{n_{e}}\mathfrak{T}\Big(\frac{1}{n_{e}}\Big),\qquad \mathfrak{R}(n_{e})=\frac{1-n_{e}}{1+n_{e}}=\frac{\underline{\omega}-\sqrt{\underline{\omega}^{2}-1}}{\underline{\omega}+\sqrt{\underline{\omega}^{2}-1}}=-\mathfrak{R}\Big(\frac{1}{n_{e}}\Big)\,.$ (4)
In the integrals like (3) the path $L$ of integration along the real axis of
$\underline{\omega}$ can be augmented with a semicircle having an infinite
radius, $C_{inf}$, in the lower half-plane of $\underline{\omega}$, thus
forming a closed clockwise contour $C_{\omega}$ (since $t-z/c>0$, the integral
over $C_{inf}$ is zero). The integrand has two branching points at
$\omega=\pm\Omega_{e}$ ($\underline{\omega}=\pm 1$) and is double-valued. It
will become single-valued after we cut the complex $\omega$-plane along the
segment of the real axis between the branching points. Since there are no
other singularities, one can take for $C_{\omega}$ any closed path
encapsulating the cut (see. Fig.1).
Figure 1: Cut in $\omega$-complex plane and the integration contour
$C_{\omega}$.
In order to compute this contour integral, we resort to the method originally
proposed by N.G. Denisov [Denisov] and used in Ref. [precursor1969]. (An
indisputable advantage of this method is that in many cases it yields analytic
solutions valid throughout the entire range of time $t$ and distance $z$. The
more popular asymptotic methods of saddle point or steepest descent [First1;
First2] provide reasonable approximations only at large times and/or
distances, whereas Denisov’s approach works even at the earliest moments of a
transient process, as reiterated and emphasized in a somewhat different
context in Ref. [waveguide]. A theoretical analysis along the traditional
guidelines of Sommerfeld and Brillouin, i.e., asymptotic calculation of the
spectral integrals, has been revisited more than once [BS1; BS2; BS3], where
the reader can also find an extensive critical review of many other papers.)
A new variable,
$\zeta=\underline{\omega}-\sqrt{\underline{\omega}^{2}-1}$, corresponds to
that branch of the conformal mapping, $\underline{\omega}=(\zeta+1/\zeta)/2$,
that maps the complex plane of $\underline{\omega}$ with the cut between
branching points $\underline{\omega}=\pm 1$ onto an exterior of a unit circle
$|\zeta|=1$ in the plane of complex $\zeta$. The integration contour in the
$\zeta$-plane is a circle with the center at its origin; in all cases
considered below, it does not enclose any singularities and it is traversed in
the counterclockwise direction. The upper and lower banks of the cut in the
$\underline{\omega}$-plane are mapped onto the upper and lower semicircle in
the $\zeta$-plane, respectively. The phases of complex functions
$\omega_{+}=\omega-\Omega$ and $\omega_{-}=\omega+\Omega$ are fixed in such a
way that for real $\omega>\Omega_{e}$, we have ${\rm arg}(\omega_{+})={\rm
arg}(\omega_{-})=0$. Then, for $|\omega|<\Omega_{e}$, we have ${\rm
arg}(\omega_{+})=+\pi$ and ${\rm arg}(\omega_{-})=0$ on the upper bank of the
cut, with ${\rm Re}k_{z}=0$ and ${\rm Im}k_{z}>0$, as expected.
It is straightforward to check the following formulae, which will often be
used throughout the paper,
$\displaystyle \underline{\omega}=\frac{1}{2}\Big(\frac{1}{\zeta}+\zeta\Big),\qquad \underline{\omega}\,n_{e}(\omega)=\sqrt{\underline{\omega}^{2}-1}=\frac{1}{2}\Big(\frac{1}{\zeta}-\zeta\Big),\qquad \frac{d\underline{\omega}}{\underline{\omega}}=-\frac{d\zeta}{\zeta}\,\frac{1-\zeta^{2}}{1+\zeta^{2}},\qquad \frac{d\underline{\omega}}{\underline{\omega}}\,\mathfrak{T}[n_{e}(\omega)]=-\frac{d\zeta}{\zeta}(1-\zeta^{2}).$ (5)
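The first two relations of Eq. (5) can be verified numerically for arbitrary complex $\zeta$; the sketch below (illustrative only, with arbitrarily chosen sample points) checks that $\underline{\omega}^{2}-(\underline{\omega}\,n_{e})^{2}=1$ holds identically under the mapping, consistent with $n_{e}^{2}=1-1/\underline{\omega}^{2}$ from Eq. (1).

```python
# Sanity check of the conformal mapping in Eq. (5):
# with w = (zeta + 1/zeta)/2 and w*n_e = (1/zeta - zeta)/2 (on the chosen
# branch), the identity w^2 - (w*n_e)^2 = 1 holds for every zeta != 0.
import cmath

for zeta in (0.5 * cmath.exp(0.7j), 0.2 + 0.1j, 0.9j, 2.0 - 1.5j):
    w = (zeta + 1.0 / zeta) / 2.0        # omega / Omega_e
    wn = (1.0 / zeta - zeta) / 2.0       # omega * n_e / Omega_e
    assert abs(w * w - wn * wn - 1.0) < 1e-12
print("mapping identities hold")
```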
The phase factor,
$e^{-i\Omega_{e}[\underline{\omega}t-\sqrt{\underline{\omega}^{2}-1}~{}\tilde{z}]}$,
in the integrand of (3) becomes
$e^{-i(\Omega_{e}/2)[(t-\tilde{z})/\zeta+(t+\tilde{z})\zeta]}$ and the
integral now reads as,
$\displaystyle E^{\prime}_{t}(t,z)=\frac{\theta(t-\tilde{z})}{2\pi i}\oint^{(0_{+})}\frac{d\zeta}{\zeta}(1-\zeta^{2})\exp\Big\{-i\frac{\Omega_{e}\tau}{2}\Big[\frac{\xi}{\zeta}+\frac{\zeta}{\xi}\Big]\Big\}=\frac{\theta(t-\tilde{z})}{2\pi i}\oint^{(0_{+})}\frac{d\zeta}{\zeta}(1-\xi^{2}\zeta^{2})\exp\Big\{-i\frac{\Omega_{e}\tau}{2}\Big[\frac{1}{\zeta}+\zeta\Big]\Big\},$ (6)
where $\tau^{2}=t^{2}-\tilde{z}^{2}$, $\xi^{2}=(t-\tilde{z})/(t+\tilde{z})$.
The factor $\exp\{-iq(\zeta+1/\zeta)/2\}$ in the integrand of (6) is the
generating function for the Bessel functions of integer order,
$\displaystyle \frac{1}{2\pi i}\oint^{(0_{+})}\frac{d\zeta}{\zeta^{1+n}}\,e^{-i(q/2)[\zeta+1/\zeta]}=(-i)^{n}J_{n}(q)=(+i)^{n}J_{-n}(q).$ (7)
(This representation differs from the one originally referred to by Denisov
[Denisov], and most often used in the literature, e.g. [Watson], §2.2 (4),
$J_{n}(q)=\frac{1}{2\pi i}\oint^{(0_{+})}\frac{dp}{p^{1+n}}\,e^{(q/2)[p-1/p]}=(-1)^{n}J_{-n}(q)$,
by the trivial change of variable $\zeta=ip$.)
The exact analytic answer reads
$\displaystyle E_{t}(t,z)\equiv E^{\prime}_{t}(t,z)=\theta(t-\tilde{z})\big[J_{0}(\Omega_{e}\tau)+\xi^{2}J_{2}(\Omega_{e}\tau)\big].$ (8)
The results of calculations for Eq. (8) are presented in Fig. 2, shown as
functions of time for different depths $z$ inside the medium.
Figure 2: Plots illustrating the time dependency of precursors formed
following a stepwise signal incident on the surface at $z=0$. Time is measured
in periods of plasma oscillations ($\tau_{e}=2\pi/\Omega_{e}$). Time
dependencies are shown for different depths $z$ (in units of $\lambda_{e}$).
Several observations reveal features of the precursors that will be important
for the rest of our study. First, the deeper the leading front penetrates the
medium, the sharper the first maximum and the more rapid the first
oscillations; in the course of propagation, the higher-frequency part of the
precursor’s spectrum grows, catching up to the leading front. The Langmuir
frequency $\Omega_{e}$ dominates the long tail of the precursor and its full
spectrum (see Ref. [precursor1969]). Second, regardless of the
depth $z$, the amplitude at the leading front, $ct=z$, stays the same and
equal to the amplitude of the incident signal. Third, the drop of the
amplitude of plasma oscillations with time at $ct>z$ decreases with increasing
depth.
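Eq. (8) is simple enough to evaluate directly; the minimal pure-Python sketch below (illustrative, with $\Omega_{e}$ set to unity and the Bessel functions summed from their power series) reproduces the second observation above, that the amplitude exactly at the leading front equals the unit amplitude of the incident step at any depth.

```python
# Direct evaluation of Eq. (8):
#   E_t(t, z) = theta(t - z/c) * [ J_0(Omega_e*tau) + xi^2 * J_2(Omega_e*tau) ],
# with tau^2 = t^2 - (z/c)^2 and xi^2 = (t - z/c)/(t + z/c).
import math

def bessel_j(n, x, terms=80):
    """J_n(x) from its power series (adequate for moderate |x|)."""
    total = 0.0
    for k in range(terms):
        total += (-1.0) ** k * (x / 2.0) ** (2 * k + n) / (
            math.factorial(k) * math.factorial(k + n))
    return total

def precursor_field(t, z_tilde, omega_e=1.0):
    """Eq. (8); t and z_tilde = z/c in the same units, omega_e = Omega_e."""
    if t <= z_tilde:
        return 0.0                      # theta(t - z/c): no field before the front
    tau = math.sqrt(t * t - z_tilde * z_tilde)
    xi2 = (t - z_tilde) / (t + z_tilde)
    return bessel_j(0, omega_e * tau) + xi2 * bessel_j(2, omega_e * tau)

# Just behind the leading front (t -> z/c + 0) the amplitude equals that of
# the incident unit step, independent of depth: the second observation above.
for z in (1.0, 5.0, 20.0):
    print(z, precursor_field(z + 1e-9, z))   # -> 1.0 to high accuracy
```

At $z=0$ the expression reduces to $J_{0}(\Omega_{e}t)+J_{2}(\Omega_{e}t)=2J_{1}(\Omega_{e}t)/(\Omega_{e}t)$ by the standard Bessel recurrence, which provides a convenient cross-check of the implementation.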
### II.2 Incident single rectangular pulse, reflection and transmission
Next, we consider several examples of interactions between a rectangular
incident pulse and a medium. Such a pulse is described as the difference of
two step-functions shifted in time by $T$. In the spectral representation, the
incident pulse is as follows,
$\displaystyle E_{0}(t,z)=\theta(t-z/c)-\theta(t-T-z/c)={-1\over 2\pi
i}\int_{ia-\infty}^{ia+\infty}{1-e^{i\omega
T}\over\omega}e^{-i\omega(t-z/c)}d\omega~{}.$ (9)
#### II.2.1 Passage and reflection of a pulse at the vacuum-medium interface
Substituting the spectral density (9) of a rectangular pulse into Eq. (3) yields,
$\displaystyle E_{t}(t,z)=E^{\prime}_{t}(t,z)-E^{\prime}_{t}(t-T,z)={-1\over
2\pi
i}\int_{ia-\infty}^{ia+\infty}{d\underline{\omega}\over\underline{\omega}}{\mathfrak{T}}[n_{e}(\omega)](1-e^{i\omega
T})e^{-i[\omega t-\omega n_{e}(\omega)z/c]}~{},$ (10)
where, according to Eq. (8),
$E^{\prime}_{t}(t,z)=\theta(t-\tilde{z})[J_{0}(\Omega_{e}\tau)+\xi^{2}J_{2}(\Omega_{e}\tau)]$
and, as previously, $\tau^{2}=t^{2}-\tilde{z}^{2}$ and
$\xi^{2}=(t-\tilde{z})/(t+\tilde{z})$. This result is shown in the left panel
of Fig.3 for two different values of $z$, $z=0$ and $z=1.5\lambda_{e}$. The
leading and the rear fronts of a rectangular pulse generate precursors of the
opposite sign.
Figure 3: Two plots illustrating time evolution of precursors produced by a
rectangular pulse. On the left, two fronts of an incident rectangular pulse at
two depths $z$. On the right, a plot of the reflected pulse in the case of
a normally incident wave.
The evolution of precursors with depth is similar to that observed in Fig. 2.
The spectral form for an electric field of a reflected (back to the vacuum)
pulse differs from Eq. (3) by replacement of the transmission coefficient
${\mathfrak{T}}$ with the reflection coefficient ${\mathfrak{R}}$ and
reversing the direction of propagation, $z\to-z$. Then, for the field
$E^{\prime}_{r}(t,z)$ reflected at the leading front of the incident pulse,
$\displaystyle E^{\prime}_{r}(t,z)={1\over 2\pi
i}\int_{ia-\infty}^{ia+\infty}{d\underline{\omega}\over\underline{\omega}}{\mathfrak{R}}(n_{e})e^{-i\Omega_{e}\underline{\omega}[t+z/c]}~{}.$
(11)
As previously, we resort to (5) to rewrite the integrand in terms of the
variable $\zeta$. Since ${\mathfrak{R}}(n_{e})=(1-n_{e})/(1+n_{e})=\zeta^{2}$,
we arrive at the following expression for the reflection of a step-like
signal,
$\displaystyle E^{\prime}_{r}(t,z)={\theta(t+\tilde{z})\over 2\pi
i}\oint^{(0_{+})}{d\zeta\over\zeta}~{}{\zeta^{2}-\zeta^{4}\over
1+\zeta^{2}}~{}\exp{\bigg{\\{}-i{\Omega_{e}(t+\tilde{z})\over
2}\big{[}\zeta+{1\over\zeta}\big{]}\bigg{\\}}}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
$\displaystyle={\theta(t+\tilde{z})\over 2\pi
i}\oint^{(0_{+})}{d\zeta\over\zeta}~{}\sum_{l=0}^{\infty}(-1)^{l}[\zeta^{2l+2}-\zeta^{2l+4}]~{}\exp{\bigg{\\{}-i{\Omega_{e}(t+\tilde{z})\over
2}\big{[}\zeta+{1\over\zeta}\big{]}\bigg{\\}}}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
(12)
$\displaystyle=-\theta(t+\tilde{z})\sum_{l=0}^{\infty}\big{[}J_{2l+2}(\Omega_{e}(t+\tilde{z}))+J_{2l+4}(\Omega_{e}(t+\tilde{z}))\big{]}=-\theta(t+\tilde{z})[1-2J_{1}(\Omega_{e}(t+\tilde{z}))/\Omega_{e}(t+\tilde{z})].$
Here the last transformation is based on the identities $1=J_{0}(x)+2J_{2}(x)+2J_{4}(x)+\dots$ and $J_{0}(x)+J_{2}(x)=2J_{1}(x)/x$ [Watson]. The exact analytic solution for the reflected field of an incident rectangular pulse is,
$\displaystyle
E_{r}(t,z)=E^{\prime}_{r}(t,z)-E^{\prime}_{r}(t-T,z),~{}~{}~{}E^{\prime}_{r}(t,z)=\theta(t+\tilde{z})[1-2{J_{1}(\Omega_{e}(t+\tilde{z}))\over\Omega_{e}(t+\tilde{z})}].$
(13)
This result is shown in the right panel of Fig. 3. The almost static field of a rectangular pulse cannot propagate in a medium with the refraction index (1) and is reflected. The negative sign of the reflected pulse is due to the boundary condition at the interface $z=0$, $E_{0}+E^{\prime}_{r}=E^{\prime}_{t}\approx 0$, which is evident from visual inspection of the two plots in Fig. 3.
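The two Bessel identities behind the last step of Eq. (12) can be checked numerically. A small sketch (Python with scipy; the test value of $x$ and the truncation at 200 terms are ad hoc choices):

```python
from scipy.special import jv

def reflected_sum(x, terms=200):
    """Truncated sum over l of J_{2l+2}(x) + J_{2l+4}(x), cf. Eq. (12);
    J_n(x) decays super-exponentially for n >> x, so the tail is negligible."""
    return sum(jv(2*l + 2, x) + jv(2*l + 4, x) for l in range(terms))

x = 7.3
closed_form = 1.0 - 2.0 * jv(1, x) / x          # right-hand side of Eq. (13)
print(reflected_sum(x) - closed_form)           # ~ 0

# the two identities quoted from Watson:
print(jv(0, x) + 2*sum(jv(2*k, x) for k in range(1, 200)))  # ~ 1
print(jv(0, x) + jv(2, x) - 2*jv(1, x)/x)                   # ~ 0
```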
#### II.2.2 Transmission of a pulse through a slab.
The more realistic problem of the passage of a pulse through a slab of thickness $d$ involves two transmission coefficients, one for each interface. For the first interface the Fresnel coefficient is, as before, ${\mathfrak{T}}(n)$; for the transmission of the pulse from the slab into the vacuum at $z=d$ it is ${\mathfrak{T}}(1/n)$,
$\displaystyle E^{\prime}_{d}(t,z)={1\over 2\pi
i}\int_{ia-\infty}^{ia+\infty}{d\omega\over\omega}{\mathfrak{T}}(n_{e}){\mathfrak{T}}(1/n_{e})e^{-i\omega
t+i\omega(\tilde{z}-\tilde{d})+i\omega n(\omega)\tilde{d}},$ (14)
where $\tilde{z}=z/c$ and $\tilde{d}=d/c$ [footnote 4: multiple reflections in the slab are ignored]. By virtue of Eqs. (4) and (5), in terms of the variable $\zeta$, the product
$(d\omega/\omega){\mathfrak{T}}(n_{e}){\mathfrak{T}}(1/n_{e})=4\sqrt{\underline{\omega}^{2}-1}(\underline{\omega}-\sqrt{\underline{\omega}^{2}-1})d\underline{\omega}$
becomes $-(d\zeta/\zeta)(\zeta^{2}-1)^{2}$.
Figure 4: Plot illustrating the time dependence of precursors transmitted through slabs of thickness $d=5\lambda_{e}$ and $d=25\lambda_{e}$. The duration of the incident pulse is $T=30\tau_{e}$.
Hence, the method outlined in Sec. II.1 yields,
$\displaystyle E^{\prime}_{d}(t,z)={\theta(t-\tilde{z})\over 2\pi
i}\oint^{(0_{+})}{d\zeta\over\zeta}(1-2\xi^{2}\zeta^{2}+\xi^{4}\zeta^{4})\exp{\bigg{\\{}\\!\\!-i{\Omega_{e}\tau\over
2}\big{[}{1\over\zeta}+\zeta\big{]}\bigg{\\}}}$
$\displaystyle=\theta(t-\tilde{z})[J_{0}(\Omega_{e}\tau)+2\xi^{2}J_{2}(\Omega_{e}\tau)+\xi^{4}J_{4}(\Omega_{e}\tau)],$
(15)
where $\tau^{2}=(t-\tilde{z})(t-\tilde{z}+2\tilde{d})$,
$\xi^{2}=(t-\tilde{z})/(t-\tilde{z}+2\tilde{d})$, $z\geq d$. For a rectangular pulse $E_{d}(t,z)=E^{\prime}_{d}(t,z)-E^{\prime}_{d}(t-T,z)$; see Fig. 4. This is precisely the result obtained in Ref. [precursor1969] under the assumption that the harmonic wave experiences total internal reflection at the second boundary of the slab. This can be expected, since the plasma is optically less dense than vacuum, $n_{e}(\omega)<1$, and only the precursor passes through. The figure also clearly shows that the thicker the slab, the sharper the leading and rear fronts of the precursors transmitted through it into the vacuum.
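For completeness, Eq. (15) in code form: a minimal sketch (Python with scipy; $\Omega_{e}=1$ units are our assumption) of the field transmitted through the slab and of the rectangular-pulse response shown in Fig. 4.

```python
import numpy as np
from scipy.special import jv

OMEGA_E = 1.0  # plasma frequency in our illustrative units (assumption)

def slab_precursor(t, z_t, d_t):
    """Field transmitted through a slab, Eq. (15), for z >= d:
    E'_d = theta(t - z~)[J0 + 2 xi^2 J2 + xi^4 J4](Omega_e tau), with
    tau^2 = (t - z~)(t - z~ + 2 d~) and xi^2 = (t - z~)/(t - z~ + 2 d~)."""
    u = t - z_t
    if u <= 0.0:
        return 0.0
    tau = np.sqrt(u * (u + 2.0*d_t))
    xi2 = u / (u + 2.0*d_t)
    return (jv(0, OMEGA_E*tau) + 2.0*xi2*jv(2, OMEGA_E*tau)
            + xi2**2 * jv(4, OMEGA_E*tau))

def pulse_through_slab(t, z_t, d_t, T):
    """Rectangular pulse of duration T: E_d = E'_d(t) - E'_d(t - T)."""
    return slab_precursor(t, z_t, d_t) - slab_precursor(t - T, z_t, d_t)

# the leading-front amplitude again equals 1, regardless of slab thickness:
print(slab_precursor(5.0 + 1e-9, 5.0, 5.0), slab_precursor(25.0 + 1e-9, 25.0, 25.0))
```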
## III Excitation of molecular resonances by primary precursors.
In this section we examine the behavior of the charges forming molecular dipoles in the field of primary precursors. These dipoles become sources of secondary radiation, which carries the desired information about the parameters of the dipoles and can be detected.
Let us consider a heavy elastic molecular dipole in the electric field
$E_{t}(t|z_{0})$ of a precursor created at the interface between vacuum and
medium by plasma oscillations of light electrons. Its equation of motion can be written as follows,
$\displaystyle\ddot{X}(t,z)+2\Gamma_{0}\dot{X}(t,z)+\omega_{m}^{2}X(t,z)=qE_{t}(t,z)/M~{},$
(16)
where $M$ and $q$ are the effective mass and charge of the dipole, $X$ is its displacement, and $\omega_{m}$ and $\Gamma_{0}$ are its proper frequency and width.
Let a train of square pulses of duration $T$ be incident perpendicularly on the boundary at $z=0$ at time $t=0$. Eq. (2) can now be generalized as,
$\displaystyle E_{0}(t,z)={\cal
E}_{0}[\theta(t-\tilde{z})-2\theta(t-T-\tilde{z})+2\theta(t-2T-\tilde{z})-\dots]={-{\cal
E}_{0}\over 2\pi
i}\int_{-\infty}^{+\infty}{d\omega\over\omega}\sum_{m=1}^{m_{p}}(-1)^{m}\epsilon_{m}e^{im\omega
T}e^{-i\omega(t-\tilde{z})}~{},~{}~{}~{}$ (17)
where $\epsilon_{m}$ is the so-called Neumann symbol: $\epsilon_{m}=1$ for $m=0$ and $\epsilon_{m}=2$ for $m\neq 0$; $m_{p}=m_{p}(t)$ is the number of pulses
that have passed the boundary $z=0$ by the time $t$. A dipole located at
$z_{0}$ inside the medium is exposed to the electric field,
$\displaystyle E_{t}(t,z_{0})={\cal
E}_{0}\sum_{m=0}^{m_{p}}\epsilon_{m}(-1)^{m}\theta(t_{\ast}-mT)E^{\prime}_{t}(t_{\ast}-mT),$
(18)
where $t_{\ast}=t-\tilde{z}_{0}>0$ and, according to Eqs. (3) and (8),
$\displaystyle E_{t}^{\prime}(u)={-{\cal E}_{0}\over 2\pi
i}\oint_{C^{-}_{\omega}}{d\underline{\omega}\over\underline{\omega}}{\mathfrak{T}}[n_{e}(\omega)]e^{-i\omega
u}e^{-i[\omega-\omega n_{e}(\omega)]\tilde{z}_{0}}={\cal
E}_{0}\theta(u)\big{[}J_{0}(\Omega_{e}\sqrt{u(u+2\tilde{z}_{0})})+{u\over
u+2\tilde{z}_{0}}J_{2}(\Omega_{e}\sqrt{u(u+2\tilde{z}_{0})})\big{]}.$
For a dipole located at a distance $z_{0}$ from the interface between vacuum
and the medium, the general solution of Eq.(16) reads as follows,
$\displaystyle X(t|z_{0})={q\over
M}e^{-\Gamma_{0}t}\int_{t_{0}}^{t}e^{\Gamma_{0}t^{\prime}}{\sin\omega_{0}(t-t^{\prime})\over\omega_{0}}E_{t}(t^{\prime},z_{0})dt^{\prime}+e^{-\Gamma_{0}t}[b_{c}(t_{0}|z_{0})\cos\omega_{0}t+b_{s}(t_{0}|z_{0})\sin\omega_{0}t]~{},$
(19)
where $\omega_{0}^{2}=\omega_{m}^{2}-\Gamma_{0}^{2}$ and constants $b_{c}$ and
$b_{s}$ are chosen to satisfy the initial conditions at
$t=t_{0}=z_{0}/c=\tilde{z}_{0}$. If the dipole was at rest before being exposed to the precursors’ field, then $X(\tilde{z}_{0}|z_{0})=\dot{X}(\tilde{z}_{0}|z_{0})=0$ and, consequently, $b_{c}=b_{s}=0$. Equation (19) for this dipole becomes
$\displaystyle X(t|z_{0})={q\over
M}\int_{\tilde{z}_{0}}^{t}e^{-\Gamma_{0}(t-t^{\prime})}{\sin\omega_{0}(t-t^{\prime})\over\omega_{0}}E_{t}(t^{\prime},z_{0})dt^{\prime}={q\over
M}\int_{0}^{t_{*}}e^{-\Gamma_{0}(t_{\ast}-t^{\prime}_{\ast})}{\sin\omega_{0}(t_{\ast}-t^{\prime}_{\ast})\over\omega_{0}}E_{t}(t^{\prime}_{*},z_{0})dt^{\prime}_{*}~{},$
(20)
where $t_{\ast}=t-\tilde{z}_{0}$ and
$t^{\prime}_{\ast}=t^{\prime}-\tilde{z}_{0}$.
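Equation (20) is a standard Duhamel integral and can be checked directly by quadrature. A minimal sketch (Python with scipy; the parameter values are arbitrary test choices, not taken from the text): for a constant driving field the dipole must relax to the static displacement $qE_{0}/(M\omega_{m}^{2})$.

```python
import numpy as np
from scipy.integrate import quad

GAMMA0, OMEGA_M = 0.05, 1.0           # arbitrary test width and proper frequency
OMEGA0 = np.sqrt(OMEGA_M**2 - GAMMA0**2)
Q_OVER_M, E0 = 1.0, 1.0               # q/M and incident amplitude (test values)

def displacement(t_star, E_of_t):
    """Duhamel integral, Eq. (20), for a dipole initially at rest:
    X = (q/M) int_0^{t*} exp(-Gamma0 (t*-t')) sin(omega0 (t*-t'))/omega0 E(t') dt'."""
    integrand = lambda tp: (np.exp(-GAMMA0*(t_star - tp))
                            * np.sin(OMEGA0*(t_star - tp)) / OMEGA0
                            * E_of_t(tp))
    val, _ = quad(integrand, 0.0, t_star, limit=400)
    return Q_OVER_M * val

# Sanity check: after the transient has decayed, a constant field gives
# the static displacement q E0 / (M omega_m^2):
x_late = displacement(200.0, lambda tp: E0)
print(x_late, Q_OVER_M * E0 / OMEGA_M**2)
```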
The source (18) in Eqs.(16) and (20) toggles sign abruptly with each passing
pulse and is piecewise continuous. In order for the general solution (19) of Eq. (16) to be continuous and differentiable over the entire time $t$, we
first associate the constants $b_{c}(t_{0})$ and $b_{s}(t_{0})$ with
$X_{(m_{p})}(m_{p}T)$ and $\dot{X}_{(m_{p})}(m_{p}T)$ (see Eqs.(A)). For the
time interval $m_{p}T<t_{\ast}<(m_{p}+1)T$ we obtain in (A) a continuous and
differentiable function for every $t_{\ast}$. At the end $(m_{p}+1)T$ of this
time interval (A) yields a recursion relation connecting
$X_{(m_{p})}((m_{p}+1)T)=X_{(m_{p}+1)}((m_{p}+1)T)$ and
$\dot{X}_{(m_{p})}((m_{p}+1)T)=\dot{X}_{(m_{p}+1)}((m_{p}+1)T)$ with
$X_{(m_{p})}(m_{p}T)$ and $\dot{X}_{(m_{p})}(m_{p}T)$.
The technical part of the cumbersome calculations for $X(t|z_{0})$ and its first time derivative $\dot{X}(t|z_{0})$ is described in Appendix A, where we also derive recurrence relations (A) connecting ${X}_{(m_{p})}(m_{p}|z_{0})$ and $\dot{X}_{(m_{p})}(m_{p}|z_{0})$ with adjacent numbers $m_{p}$. In this way we obtain a ladder of amplitudes $b_{c}(m_{p})$ and $b_{s}(m_{p})$ in Eqs. (19) and (V.2) for $m_{p}T<t<(m_{p}+1)T$.
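The structure of this ladder is easy to illustrate. In the sketch below (Python; the constant-per-interval forcing is a simplified stand-in for the precursor-modulated source (18), and all parameter values are arbitrary), each interval is solved in closed form, and the end state $(X,\dot{X})$ of interval $m_{p}$ becomes the initial condition of interval $m_{p}+1$, in the spirit of the recurrence relations of Appendix A.

```python
import numpy as np

GAMMA0, OMEGA_M, T = 0.05, 1.0, 4.0        # arbitrary test parameters
OMEGA0 = np.sqrt(OMEGA_M**2 - GAMMA0**2)

def step_response(x0, v0, F, dt):
    """Exact evolution over one interval of x'' + 2 Gamma0 x' + omega_m^2 x = F:
    particular (static) solution plus damped homogeneous motion whose
    amplitudes b_c, b_s are fixed by continuity of x and x' at the start."""
    xp = F / OMEGA_M**2                    # static particular solution
    bc = x0 - xp
    bs = (v0 + GAMMA0*bc) / OMEGA0
    e, c, s = np.exp(-GAMMA0*dt), np.cos(OMEGA0*dt), np.sin(OMEGA0*dt)
    x = xp + e*(bc*c + bs*s)
    v = e*((bs*OMEGA0 - GAMMA0*bc)*c - (bc*OMEGA0 + GAMMA0*bs)*s)
    return x, v

# climb the ladder: alternating-sign forcing, continuity at every boundary
x, v = 0.0, 0.0
for m in range(6):
    x, v = step_response(x, v, (-1.0)**m, T)
print(x, v)
```

By construction $X$ and $\dot{X}$ are continuous at every interval boundary, which is the content of the recursion described above.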
Computation of the radiation of molecular dipoles requires the determination of $\ddot{X}(t|z_{0})$ of the constituent charges; by virtue of (A),
$\displaystyle\ddot{X}_{(m_{p})}(t|z_{0})={{\cal E}_{0}q\over
M}\sum_{m=0}^{m_{p}(t_{*})}\epsilon_{m}(-1)^{m}\theta(t_{*}-mT)E^{\prime}_{t}(t_{*}-mT)~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
$\displaystyle-{{\cal E}_{0}q\over
M}\sum_{m=0}^{m_{p}(t_{*})}\epsilon_{m}(-1)^{m}\omega_{0}\int_{mT}^{t_{*}}e^{-\Gamma_{0}(t_{*}-t^{\prime})}\bigg{[}(1-{\Gamma_{0}^{2}\over\omega_{0}^{2}})\sin[\omega_{0}(t_{*}-t^{\prime})]+2{\Gamma_{0}\over\omega_{0}}\cos[\omega_{0}(t_{*}-t^{\prime})]\bigg{]}\theta(t^{\prime}-mT)E^{\prime}_{t}(t^{\prime}-mT)dt^{\prime}$
$\displaystyle+\omega_{0}^{2}\bigg{\\{}-\bigg{[}(1+{\Gamma_{0}^{2}\over\omega_{0}^{2}})X_{(m_{p})}(m_{p}T)+2{\Gamma_{0}\over\omega_{0}}{\dot{X}_{(m_{p})}(m_{p}T)\over\omega_{0}}\bigg{]}\cos\omega_{0}(t_{*}-m_{p}T)~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
(21)
$\displaystyle+\bigg{[}{\Gamma_{0}\over\omega_{0}}(1+{\Gamma_{0}^{2}\over\omega_{0}^{2}})X_{(m_{p})}(m_{p}T)-(1-{\Gamma_{0}^{2}\over\omega_{0}^{2}}){\dot{X}_{(m_{p})}(m_{p}T)\over\omega_{0}}\bigg{]}\sin\omega_{0}(t_{*}-m_{p}T)\bigg{\\}}e^{-\Gamma_{0}(t_{*}-m_{p}T)}~{}.$
The second derivative of the density of the dipole polarization is now $4\pi\ddot{\cal P}_{mol}(t)=4\pi q\langle N_{q}\ddot{X}(t)\rangle$. We group $\ddot{\cal P}_{mol}(t|z_{0})$ into three terms, $\ddot{\cal P}_{mol}(t|z_{0})=\ddot{\cal P}_{a}(t|z_{0})+\ddot{\cal P}_{b}(t|z_{0})+\ddot{\cal P}_{c}(t|z_{0})$,
$\displaystyle 4\pi\ddot{\cal P}_{a}(t|z_{0})=\Omega_{q}^{2}{\cal
E}_{0}\sum_{m=0}^{m_{p}(t_{*})}\epsilon_{m}(-1)^{m}E^{\prime}_{t}(t_{*}-mT),~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}{\rm(a)}$
$\displaystyle 4\pi\ddot{\cal P}_{b}(t|z_{0})=-\Omega_{q}^{2}{\cal
E}_{0}\sum_{m=0}^{m_{p}(t_{*})}\epsilon_{m}(-1)^{m}\omega_{0}\int_{mT}^{t_{*}}e^{-\Gamma_{0}(t_{*}-t^{\prime})}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
(22)
$\displaystyle\times\bigg{[}(1-{\Gamma_{0}^{2}\over\omega_{0}^{2}})\sin[\omega_{0}(t_{*}-t^{\prime})]+2{\Gamma_{0}\over\omega_{0}}\cos[\omega_{0}(t_{*}-t^{\prime})]\bigg{]}E^{\prime}_{t}(t^{\prime}-mT)dt^{\prime},~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}{\rm(b)}$
$\displaystyle 4\pi\ddot{\cal
P}_{c}(t|z_{0})=4\pi\Omega_{e}^{2}\underline{\omega}_{0}^{2}e^{-\Gamma_{0}(t_{*}-m_{p}T)}\bigg{\\{}C_{1}(m_{p}T)\cos\omega_{0}(t_{*}-m_{p}T)+C_{2}(m_{p}T)\sin\omega_{0}(t_{*}-m_{p}T)\bigg{\\}},~{}~{}~{}~{}~{}~{}~{}{\rm(c)}$
where $\Omega_{q}^{2}=4\pi q^{2}N_{q}/M$ and
$\displaystyle
C_{1}(m_{p}T)=-\bigg{[}(1+{\Gamma_{0}^{2}\over\omega_{0}^{2}}){\cal
P}(m_{p}T)+2{\Gamma_{0}\over\omega_{0}}{\dot{\cal
P}(m_{p}T)\over\omega_{0}}\bigg{]},~{}~{}C_{2}(m_{p}T)=\bigg{[}{\Gamma_{0}\over\omega_{0}}(1+{\Gamma_{0}^{2}\over\omega_{0}^{2}}){\cal
P}(m_{p}T)-(1-{\Gamma_{0}^{2}\over\omega_{0}^{2}}){\dot{\cal
P}(m_{p}T)\over\omega_{0}}\bigg{]},~{}~{}~{}$ (23)
For $t_{*}<T$ (and $m_{p}=0$) we have ${\cal P}_{c}(t|z_{0})=0$. The difference between these three parts of ${\cal P}_{mol}$ will be discussed in detail when we consider their contributions to the field of the dipole’s radiation in Sec. V.
## IV Radiation emitted by excited molecular resonances: General equations
The goal of this and the following sections is to find an explicit form for
the field of radiation caused by the polarization field derived in the
previous section. We consider the radiation due to uniformly distributed
molecular dipoles of number density $N_{q}$ in an infinitely thin slab of
thickness $\Delta z_{0}$ perpendicular to the $z$-axis. The electric field of their radiation, ${\cal E}_{rad}={\cal E}$, satisfies the wave equation,
$\displaystyle{\partial^{2}{\cal E}(t,z)\over\partial z^{2}}-{1\over
c^{2}}{\partial^{2}{\cal E}(t,z)\over\partial t^{2}}={4\pi\over
c^{2}}[\ddot{\cal P}_{e}(t,z)+\ddot{\cal P}_{mol}(t,z)]$ (24)
where ${\cal P}_{e}(t,z)$ and ${\cal P}_{mol}(t,z)$ are the electronic and
molecular components of the electric polarization, respectively. The former,
$\ddot{\cal P}_{e}(t,z)$, is determined from the equation of motion of free
charges,
$\displaystyle 4\pi\ddot{\cal P}_{e}(t,z)=4\pi eN_{e}\ddot{X}_{e}(t|z)=4\pi
eN_{e}~{}(e/m){\cal E}(t,z)=\Omega_{e}^{2}{\cal E}(t,z).~{}~{}~{}$
The latter, $\ddot{\cal P}_{mol}(t,z)$, was computed in Sec.III as the
response of molecular dipoles to the field of primary precursors. In the
adopted approximation, all effects of the electronic polarization can be
incorporated in the refraction index $n_{e}(\nu)$, so that ${\cal P}_{e}(\nu)=\kappa_{e}(\nu){\cal E}(\nu)$ and
$n_{e}^{2}(\nu)=1+4\pi\kappa_{e}(\nu)=1-\Omega_{e}^{2}/\nu^{2}$. Thus, we are
dealing not with the emission of electromagnetic field in vacuum, but rather
with the excitation of plasma waves that have well-defined wave fronts and
where electrons are involved in a collective process with the electric field.
The incident pulses excite these waves, producing primary precursors at the interface with the vacuum. When the precursors reach and excite molecular resonances in the interior of the medium, the latter must radiate. This radiation propagates in the dispersive medium and must cross an interface to exit into the vacuum. As will be shown in Sec. V, in some of its properties this secondary radiation resembles the primary precursors.
Let us assume, for the sake of simplicity, that molecular dipoles occupy an
infinitely thin layer at depth $z_{0}$, so that the source surface density in
Eq. (24) is $(4\pi/c^{2})\ddot{\cal
P}_{mol}(t,z)=(4\pi/c^{2})N_{q}q\ddot{X}(t|z)\delta(z-z_{0})\Delta z_{0}$.
After applying a Fourier transform with respect to time, equation (24) reads
as,
$\displaystyle{\partial^{2}{\cal E}(\nu,z|z_{0})\over\partial
z^{2}}+{\nu^{2}\over c^{2}}n_{e}^{2}(\nu){\cal E}(\nu,z|z_{0})={4\pi\over
c^{2}}\ddot{\cal{P}}_{mol}(\nu,z)\delta(z-z_{0})\Delta z_{0}$ (25)
where
$\displaystyle\ddot{\cal{P}}_{mol}(\nu,z)=\int_{-\infty}^{+\infty}\ddot{\cal{P}}_{mol}(t,z)e^{i\nu
t}dt=e^{i\nu\tilde{z}_{0}}\int_{-\infty}^{+\infty}\ddot{\cal{P}}_{mol}(t,z)e^{i\nu
t^{\ast}}dt^{\ast}.$
The solution of Eq. (24) can be found via its Green’s function $G(\tau,z;t,z_{0})$,
$\displaystyle{\cal E}_{rad}(\tau,z)=\int G(\tau,z;t,z_{0})\cdot\frac{4\pi}{c^{2}}\ddot{\cal{P}}_{mol}(t,z_{0})dz_{0}dt~{}.$
(26)
In order to find its explicit expression, let us perform the Fourier transform
of Eq.(25) with respect to coordinate $z$. This results in
$\displaystyle-k^{2}{\cal E}(\nu,k|z_{0})+{\nu^{2}\over
c^{2}}n_{e}^{2}(\nu){\cal E}(\nu,k|z_{0})={4\pi\over
c^{2}}\ddot{\cal{P}}_{mol}(\nu,z_{0})e^{-ikz_{0}}\Delta z_{0}~{},$ (27)
which is an algebraic equation with respect to ${\cal E}(\nu,k|z_{0})$. Hence,
the electric field inside the medium radiated by molecular dipoles at all
depths $z_{0}$ can be obtained as the double inverse Fourier transform of
(27), which then can be integrated over all the radiating dipoles,
$\displaystyle{\cal E}(\tau,z)={-4\pi\over(2\pi)^{2}}\int{\sf
d}z_{0}\int_{-\infty}^{+\infty}d\nu\int_{-\infty}^{+\infty}dk{e^{-i[\nu\tau-k(z-z_{0})]}\over
c^{2}k^{2}-\nu^{2}n_{e}^{2}(\nu)}\ddot{\cal{P}}_{mol}(\nu,z_{0}).$ (28)
We start the calculation of this integral with the integration over $k$ along the real $k$-axis, which can be reduced to an integral over a closed contour in the complex $k$-plane ($k=k^{\prime}+ik^{\prime\prime}$). The choice of a
contour depends on the direction of radiation from the layer of dipoles.
Indeed, since
$e^{ik(z-z_{0})}=e^{ik^{\prime}(z-z_{0})}e^{-k^{\prime\prime}(z-z_{0})}$, for
the emission in the forward direction, $z>z_{0}$, we choose to close the
contour of integration in the upper half-plane, where $k^{\prime\prime}>0$.
For the emission backwards, $z<z_{0}$, the contour should be closed in the
lower half-plane. Technically, these requirements can be implemented by
specifying the Green function in the $k$-plane as
$[c^{2}k^{2}-\nu^{2}+\Omega_{e}^{2}+i\varepsilon_{z}]^{-1}$, where
$i\varepsilon_{z}$ is an infinitesimal imaginary addition to the wave vector
$k$ (compare with the well-known causal Feynman’s Green’s function of QED and
also comprehensive analysis of the radiation principle in dispersive medium in
Ref. [Bolotovsky]). Then the poles corresponding to propagation in the forward and backward directions lie slightly below and above the real axis, respectively. Performing the $k$-integration by the method of residues in these two cases, we end up with
$\displaystyle\int_{-\infty}^{+\infty}dk{e^{ik(z-z_{0})}\over
c^{2}k^{2}-\nu^{2}n_{e}^{2}(\nu)}={2\pi i\over 2c\nu
n_{e}(\nu)}\big{[}\theta(z-z_{0})e^{i\nu
n_{e}(\nu)(z-z_{0})/c}+\theta(z_{0}-z)e^{-i\nu
n_{e}(\nu)(z-z_{0})/c}\big{]}~{},$ (29)
where the first and the second term in brackets correspond to the emission in
the forward and backward directions, respectively. We are interested in the
field outside the medium that occupies the slab $0<z<d$.
To get the field emitted forward, for $z>d$, we must cut off the propagation
in the slab at a depth $z=d$, incorporate an additional Fresnel coefficient
${\mathfrak{T}}[1/n_{e}(\nu)]$ and continue propagation for the extra distance
$z-d$ in free space. To get the field emitted backwards, for $z<0$, we must
account for the in-medium propagation for the distance $z_{0}$, incorporate an
additional Fresnel coefficient ${\mathfrak{T}}[1/n_{e}(\nu)]$ and continue
propagation for the extra distance, $z<0$, in free space. The electric field
for either direction reads,
$\displaystyle{\cal E}(\tau,z>d)=-{i\over c}\int_{0}^{d}{\sf
d}z_{0}\int_{-\infty}^{+\infty}{d\nu\over\nu
n_{e}(\nu)}e^{-i\nu\tau}\ddot{\cal{P}}_{mol}(\nu|z_{0})e^{i\nu
n_{e}(\nu)(d-z_{0})/c}\mathfrak{T}(1/n_{e}(\nu))e^{i\nu(z-d)/c},~{}~{}~{}~{}{\rm(a)}$
$\displaystyle{\cal E}(\tau,z<0)=-{i\over c}\int_{0}^{d}{\sf
d}z_{0}\int_{-\infty}^{+\infty}{d\nu\over\nu
n_{e}(\nu)}e^{-i\nu\tau}\ddot{\cal{P}}_{mol}(\nu|z_{0})e^{i\nu
n_{e}(\nu)\tilde{z}_{0}}\mathfrak{T}(1/n_{e}(\nu))e^{-i\nu
z/c},~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}{\rm(b)}$ (30)
where ${\mathfrak{T}}(1/n_{e})=n_{e}{\mathfrak{T}}(n_{e})$, and the integral over the real axis in the complex $\nu$-plane can be transformed into an integral over a clockwise contour $C_{\nu}^{-}$ closed by an arc of large radius in the lower half-plane. The path we took to obtain this result accounts for the fact that the normal modes of our problem are not plane waves in an infinite medium. They satisfy the boundary conditions at the interfaces $z=0$ and $z=d$, which violates translational symmetry in the $z$-direction. Furthermore, the radiation of the molecular dipoles depends, as does the field of the primary precursors, on the depth $z_{0}$ of a particular dipole.
If we express $\ddot{\cal P}(\nu|z_{0})$ in terms of $\ddot{\cal P}(t|z_{0})$, Eqs. (IV) acquire the form (26), where $G(\tau,z;t,z_{0})$ are the corresponding retarded Green’s functions that propagate the radiation of the source $\ddot{\cal{P}}_{mol}(t|z_{0})$, i.e., of the dipoles induced by precursors at ($z_{0}$, $t$), towards the points of observation ($z$, $\tau$) on either side of the slab,
$\displaystyle G(\tau,z>d;t,z_{0})=-{i\over
4\pi}\oint_{C_{\nu}^{-}}{d\nu\over\nu}\mathfrak{T}[n_{e}(\nu)]\cdot
e^{-i\nu[(\tau-t)-(z-d)/c]}e^{i\nu
n_{e}(\nu)(d-z_{0})/c},~{}~{}~{}~{}{\rm(a)}$ $\displaystyle
G(\tau,z<0;t,z_{0})=-{i\over
4\pi}\oint_{C_{\nu}^{-}}{d\nu\over\nu}\mathfrak{T}[n_{e}(\nu)]\cdot
e^{-i\nu(\tau-t-|z|/c)}e^{+i\nu
n_{e}(\nu)\tilde{z}_{0}}.~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}{\rm(b)}$
(31)
Equation (IV) describes propagation of the radiated electromagnetic field
accounting for the boundary conditions on each interface with the vacuum.
These expressions are similar to the integrals (3) and (6). They also set the upper limit $t_{max}$ of the subsequent integration over $dt$ in Eq. (26). These conditions, $\tau>t+|\tilde{z}|+\tilde{z}_{0}$ for the emission backward and $\tau>t+\tilde{z}-\tilde{z}_{0}$ for the dipole radiation forward, mean that there can be no signal until the leading front of the dipole radiation reaches the point $(\tau,z)$ of observation. Only the processes in the dipole that took place at $t<\tau-|\tilde{z}|-\tilde{z}_{0}$ can affect the detector at time $\tau$. In both cases the path of integration over $d\nu$ can be closed by a semicircle in the lower half-plane. The lower limit $t_{min}(m)=\tilde{z}_{0}+mT$ is the time when the $m$-th pulse hits the dipole. Notably, these Green’s functions depend only on the difference $\tau-t$.
Following the scheme of Sec.III, one can compute these integrals by mapping
the complex plane $\nu$ onto the exterior of a unit circle in the complex
plane $\zeta$, so that $\nu=(\Omega_{e}/2)\big{(}1/\zeta+\zeta\big{)}$ (c.f.
Eqs.(5) ). The result reads as follows,
$\displaystyle G(\tau,z;t,z_{0})={1\over 2}\cdot{1\over 2\pi
i}\oint{d\zeta\over\zeta}(1-\zeta^{2})e^{-i{\Omega_{e}\rho\over
2}[{\mu\over\zeta}+{\zeta\over\mu}]}={\theta(\mu\rho)\over
2}[J_{0}(\Omega_{e}\rho)+\mu^{2}J_{2}(\Omega_{e}\rho)],$ (32)
where $\rho^{2}=(\tau-t-|\tilde{z}|)^{2}-\tilde{z}_{0}^{2}$,
$\mu^{2}=(\tau-t-|\tilde{z}|-\tilde{z}_{0})/(\tau-t-|\tilde{z}|+\tilde{z}_{0})$,
$\mu\rho=\tau-t-(|\tilde{z}|+\tilde{z}_{0})$ for $G(\tau,z<0;t,z_{0})$ that
describes the propagation at the distance $z_{0}+|z|$ backward, and
$\rho^{2}=[(\tau-t)-(z-d)/c]^{2}-(d-z_{0})^{2}/c^{2}$,
$\mu^{2}=[(\tau-t)-(z-d)/c-(d-z_{0})/c]/[(\tau-t)-(z-d)/c+(d-z_{0})/c]$,
$\mu\rho=\tau-t-(\tilde{z}-\tilde{z}_{0})$ for $G(\tau,z>d;t,z_{0})$, which describes the propagation in the forward direction over a distance $z-z_{0}$ [footnote 5: for the dipole’s radiation inside the slab, the second term in $G(\tau,0<z<z_{0};t,z_{0})$, $\mu^{2}J_{2}(\Omega_{e}\rho)$, which originates from the transmission coefficient $\mathfrak{T}[1/n_{e}(\nu)]$, would be absent].
Up to trivial changes of the arguments, $\tau\to\rho$, $\xi\to\mu$, the result (32) for the Green’s function coincides with the expression (8) for the precursor field that excites the emission of the molecular resonance, plotted in Fig. 2.
## V Radiation emitted by excited molecular resonances: The electric field
of radiation.
The source $\ddot{\cal P}_{mol}$ and the field ${\cal E}_{rad}$ of its
radiation are grouped into the terms $\ddot{\cal P}_{a}+\ddot{\cal
P}_{b}+\ddot{\cal P}_{c}$ and ${\cal E}_{a}+{\cal E}_{b}+{\cal E}_{c}$,
respectively. The source $\ddot{\cal P}$ of a single layer of dipoles located
at depth $z_{0}$ is given by Eqs. (III). The Green’s function (IVb) is used.
### V.1 Structureless (singular) term ${\cal E}_{a}$ and regular term ${\cal
E}_{b}$
The terms ${\cal E}_{a}(\tau,z<0)$ and ${\cal E}_{b}(\tau,z<0)$ originate from
the $\ddot{\cal{P}}_{a}$ and $\ddot{\cal{P}}_{b}$ parts of polarization,
respectively. It is shown below that they do not contribute to the total
radiation. The singular part of the source, $\ddot{\cal{P}}_{a}$, is given by Eq. (IIIa). Using Eq. (26) for the backward-emitted part ${\cal E}_{a}(\tau,z<0|z_{0})$ of the electric field yields,
$\displaystyle{{\sf d}{\cal E}_{a}(\tau,z<0)\over{\sf
d}(\Omega_{e}\tilde{z}_{0})}=\int_{mT+\tilde{z}_{0}}^{\tau-|\tilde{z}|-\tilde{z}_{0}}G(\tau,z<0;t,z_{0})~{}4\pi\ddot{\cal{P}}_{a}(t|z_{0}){d(\Omega_{e}t)\over\Omega_{e}^{2}}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
$\displaystyle={\Omega_{q}^{2}\over\Omega_{e}^{2}}\int_{t^{*}_{min}(m)}^{t^{*}_{max}}\sum_{m=0}^{m_{p}(t)}(-1)^{m}\epsilon_{m}G(\tau,z<0;t_{*},z_{0})E^{\prime}_{t}(t_{*}-mT)d(\Omega_{e}t_{*}),$
(33)
where $\Omega_{q}^{2}=4\pi q^{2}N_{q}/M$, $t^{*}_{min}=mT$ and
$t^{*}_{max}=\tau-|\tilde{z}|-2\tilde{z}_{0}$ are the time it takes the
incident front to reach the dipole and the time it takes the front of the
dipole radiation to reach the point $z$ of observation outside the medium at
time $\tau$, respectively. In the same way, by virtue of (IIIb),
$\displaystyle{{\sf d}{\cal E}_{b}(\tau,z<0)\over{\sf
d}(\Omega_{e}\tilde{z}_{0})}=\int_{mT+\tilde{z}_{0}}^{\tau-|\tilde{z}|-\tilde{z}_{0}}G(\tau,z<0;t,z_{0})\cdot
4\pi\ddot{\cal
P}_{b}(t|z_{0}){d(\Omega_{e}t)\over\Omega_{e}^{2}}=-{\Omega_{q}^{2}\over\Omega_{e}^{2}}\sum_{m=0}^{m_{p}(t)}(-1)^{m}\epsilon_{m}\int_{t^{*}_{min}(m)}^{t^{*}_{max}}d(\Omega_{e}t_{*})G(\tau,z<0;t_{*},z_{0})$
$\displaystyle\times\omega_{0}\int_{mT}^{t_{*}}dt^{\prime}e^{-\Gamma_{0}(t_{*}-t^{\prime})}\bigg{[}(1-{\Gamma_{0}^{2}\over\omega_{0}^{2}})\sin[\omega_{0}(t_{*}-t^{\prime})]+2{\Gamma_{0}\over\omega_{0}}\cos[\omega_{0}(t_{*}-t^{\prime})]\bigg{]}E^{\prime}_{t}(t^{\prime}-mT).~{}~{}~{}~{}~{}~{}$
(34)
Here, according to (8) and (IVb),
$\displaystyle E^{\prime}_{t}(t_{*}-mT)={-{\cal E}_{0}\over 2\pi
i}\oint_{C^{-}_{\omega}}{d\underline{\omega}\over\underline{\omega}}{\mathfrak{T}}[n_{e}(\omega)]\cdot
e^{-i\omega(t_{*}-mT)}e^{-i[\omega-\omega n_{e}(\omega)]\tilde{z}_{0}}={\cal
E}_{0}[J_{0}(\Omega_{e}\tau_{m})+\gamma_{m}^{2}J_{2}(\Omega_{e}\tau_{m})],~{}~{}~{}~{}~{}~{}$
(35) $\displaystyle G(\tau,z<0;t_{*},z_{0})={-i\over
4\pi}\oint_{C_{\nu}^{-}}{d\nu\over\nu}\mathfrak{T}[n_{e}(\nu)]\cdot
e^{-i\nu(\tau-t_{*}-|\tilde{z}|-2\tilde{z}_{0})}e^{-i[\nu-\nu
n_{e}(\nu)]\tilde{z}_{0}}=[J_{0}(\Omega_{e}\rho)+\mu^{2}J_{2}(\Omega_{e}\rho)]/2~{},~{}~{}~{}~{}~{}$
(36)
where $\tau_{m}^{2}=(t_{*}-mT)(t_{*}-mT+2\tilde{z}_{0})$, $\gamma_{m}^{2}=(t_{*}-mT)/(t_{*}-mT+2\tilde{z}_{0})$, $\rho^{2}=(\tau-t_{*}-|\tilde{z}|-2\tilde{z}_{0})(\tau-t_{*}-|\tilde{z}|)$, and $\mu^{2}=(\tau-t_{*}-|\tilde{z}|-2\tilde{z}_{0})/(\tau-t_{*}-|\tilde{z}|)$. Noteworthy, the incident field (35) is, in fact, the Green’s function that transforms an incident field hitting the interface into the precursor field (8). The Green’s function (36) differs from the latter only by the replacement $t\to\tau-t-|\tilde{z}|$; it transforms the field of the dipole radiation into the wave outside the medium. Notably, there is no dependence on the parameters $\omega_{0}$ and $\Gamma_{0}$ of the molecular dipoles.
To compute the integrals (V.1) and (V.1) we will use the integral representations (35) and (36) for $E^{\prime}_{t}(t_{*}-mT)$ and $G(\tau,z<0;t_{*},z_{0})$, respectively. Splitting the sine and cosine in Eq. (V.1) into two exponentials and integrating over $dt^{\prime}$, we find that
$\displaystyle e^{i\omega
mT-\Gamma_{0}t_{*}}\int_{mT}^{t_{*}}e^{-i(\omega+i\Gamma_{0})t^{\prime}}e^{\pm
i\omega_{0}(t_{*}-t^{\prime})}dt^{\prime}={i\over\omega+i\Gamma_{0}\pm\omega_{0}}[e^{-i\omega(t_{*}-mT)}-e^{i(\pm\omega_{0}+i\Gamma_{0})(t_{*}-mT)}].$
(37)
Treated as functions of the complex variable $\omega$, these expressions are regular and have no poles at the points $\omega=\mp\omega_{0}-i\Gamma_{0}$. The second term in brackets, which comes from the lower limit of the integration, does not depend on $\omega$, and the entire exponent in the integral (35) over the contour $C_{\omega}^{-}$ reduces to $e^{-i[\omega-\omega n_{e}(\omega)]\tilde{z}_{0}}$. Its contribution to the contour integral equals zero. Indeed, after the conformal transformation (5), the contour $C_{\omega}^{-}$ becomes a circle around the origin in the $\zeta$-plane, while the exponent becomes a regular function, $e^{-i[\omega-\omega n_{e}(\omega)]\tilde{z}_{0}}\to e^{-i\Omega_{e}\tilde{z}_{0}\zeta}$.
Conversely, the exponent stemming from the first term brings in the factor
$e^{-i\omega(t_{*}-mT)}\to e^{-i\Omega_{e}(t_{*}-mT)(1/\zeta+\zeta)/2}$, which
has an essential singularity at $\zeta=0$. Assembling the exponents back into
sine and cosine and omitting the $\omega$-independent exponent in brackets
yields,
$\displaystyle e^{i\omega
mT-\Gamma_{0}t_{*}}\int_{mT}^{t_{*}}e^{-i(\omega+i\Gamma_{0})t^{\prime}}\genfrac{\\{}{\\}}{0.0pt}{}{\cos[\omega_{0}(t_{*}-t^{\prime})]}{\sin[\omega_{0}(t_{*}-t^{\prime})]}dt^{\prime}=r(\omega)e^{-i\omega(t_{*}-mT)}\genfrac{\\{}{\\}}{0.0pt}{}{i\omega-\Gamma_{0}}{-\omega_{0}},$
(38)
where $r(\omega)=[(\omega+i\Gamma_{0})^{2}-\omega_{0}^{2}]^{-1}$ is the resonance factor. As shown above, the residues at its poles in the $\omega$-plane are zero.
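The elementary integral (37) is easy to verify by direct quadrature. A sketch (Python with scipy; the real test values of $\omega$, $\omega_{0}$, $\Gamma_{0}$, $m$, $T$, and $t_{*}$ are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad

GAMMA0, OMEGA0 = 0.1, 1.3                 # arbitrary test width and frequency
omega, m, T, t_star = 0.7, 2, 3.0, 9.5    # arbitrary real test values

def both_sides(sign):
    """Left- and right-hand sides of Eq. (37) for the +/- omega0 branch."""
    f = lambda tp: (np.exp(-1j*(omega + 1j*GAMMA0)*tp)
                    * np.exp(sign*1j*OMEGA0*(t_star - tp)))
    # quad handles real integrands; split the complex integrand accordingly
    re, _ = quad(lambda tp: f(tp).real, m*T, t_star, limit=200)
    im, _ = quad(lambda tp: f(tp).imag, m*T, t_star, limit=200)
    lhs = np.exp(1j*omega*m*T - GAMMA0*t_star) * (re + 1j*im)
    rhs = (1j/(omega + 1j*GAMMA0 + sign*OMEGA0)
           * (np.exp(-1j*omega*(t_star - m*T))
              - np.exp(1j*(sign*OMEGA0 + 1j*GAMMA0)*(t_star - m*T))))
    return lhs, rhs

for sign in (+1, -1):
    lhs, rhs = both_sides(sign)
    print(sign, abs(lhs - rhs))   # ~ 0 for both branches
```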
Using spectral representation (36) for the Green’s function, we can cast
Eqs.(V.1) and (V.1) into two similar double spectral integrals,
$\displaystyle{{\sf d}{\cal E}_{a}(\tau,z<0)\over{\sf
d}(\Omega_{e}\tilde{z}_{0})}={\Omega^{2}_{q}{\cal E}_{0}\over 2\pi
i\Omega_{e}}\sum_{m=0}^{m_{p}}\epsilon_{m}(-1)^{m}\bigg{(}{-i\over
4\pi}\bigg{)}\oint_{C_{\nu}^{-}}{d\nu\over\nu}\mathfrak{T}[n_{e}(\nu)]e^{-i\nu(\tau-|\tilde{z}|)}e^{+i[\nu+\nu
n_{e}(\nu)]\tilde{z}_{0}}$
$\displaystyle\times\oint_{C_{\omega}^{-}}{d\omega\over\omega}\mathfrak{T}[n_{e}(\omega)]e^{i\omega
mT}e^{-i[\omega-\omega
n_{e}(\omega)]\tilde{z}_{0}}\cdot\int_{t^{*}_{min}(m)}^{t^{*}_{max}}e^{i(\nu-\omega)t_{*}}dt_{*}.~{}~{}~{}~{}~{}$
(39) $\displaystyle{{\sf d}{\cal E}_{b}(\tau,z<0)\over{\sf
d}(\Omega_{e}\tilde{z}_{0})}=-{\Omega^{2}_{q}{\cal E}_{0}\over 2\pi
i\Omega_{e}}\sum_{m=0}^{m_{p}}\epsilon_{m}(-1)^{m}\bigg{(}{-i\over
4\pi}\bigg{)}\oint_{C_{\nu}^{-}}{d\nu\over\nu}\mathfrak{T}[n_{e}(\nu)]e^{-i\nu(\tau-|\tilde{z}|)}e^{+i[\nu+\nu
n_{e}(\nu)]\tilde{z}_{0}}$
$\displaystyle\times\oint_{C_{\omega}^{-}}{d\omega\over\omega}\mathfrak{T}[n_{e}(\omega)]r(\omega)[2i\Gamma_{0}\omega-\omega_{m}^{2}]e^{i\omega
mT}e^{-i[\omega-\omega
n_{e}(\omega)]\tilde{z}_{0}}\cdot\int_{t^{*}_{min}(m)}^{t^{*}_{max}}e^{i(\nu-\omega)t_{*}}dt_{*}.~{}~{}~{}~{}~{}$
(40)
The last integral in the above equations is readily found to be
$\displaystyle\int_{t^{*}_{min}(m)}^{t^{*}_{max}}e^{i(\nu-\omega)t_{*}}dt_{*}={-i\over\nu-\omega}\bigg{[}e^{i(\nu-\omega)(\tau-|\tilde{z}|-2\tilde{z}_{0})}-e^{i(\nu-\omega)mT}\bigg{]}.$
(41)
Therefore, we can rewrite equations (V.1) and (V.1) as
$\displaystyle{{\sf d}{\cal E}_{a}(\tau,z<0)\over{\sf
d}(\Omega_{e}\tilde{z}_{0})}={\Omega^{2}_{q}{\cal E}_{0}\over 2\pi
i\Omega_{e}}\bigg{(}{-i\over
4\pi}\bigg{)}\oint_{C_{\nu}^{-}}{d\nu\over\nu}\mathfrak{T}[n_{e}(\nu)]e^{-i[\nu-\nu
n_{e}(\nu)]\tilde{z}_{0}}\oint_{C_{\omega}^{-}}{d\omega\over\omega}\mathfrak{T}[n_{e}(\omega)]e^{-i[\omega-\omega
n_{e}(\omega)]\tilde{z}_{0}}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
$\displaystyle\times\sum_{m=0}^{m_{p}}\epsilon_{m}(-1)^{m}{e^{-i\omega(\tau-|\tilde{z}|-2\tilde{z}_{0}-mT)}-e^{-i\nu(\tau-|\tilde{z}|-2\tilde{z}_{0}-mT)}\over\nu-\omega},~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
(42) $\displaystyle{{\sf d}{\cal E}_{b}(\tau,z<0)\over{\sf
d}(\Omega_{e}\tilde{z}_{0})}=-{\Omega^{2}_{q}{\cal E}_{0}\over 2\pi
i\Omega_{e}}\bigg{(}{-i\over
4\pi}\bigg{)}\oint_{C_{\nu}^{-}}{d\nu\over\nu}\mathfrak{T}[n_{e}(\nu)]e^{-i[\nu-\nu
n_{e}(\nu)]\tilde{z}_{0}}\oint_{C_{\omega}^{-}}{d\omega\over\omega}\mathfrak{T}[n_{e}(\omega)]r(\omega)[2i\Gamma_{0}\omega-\omega_{m}^{2}]~{}e^{-i[\omega-\omega
n_{e}(\omega)]\tilde{z}_{0}}$
$\displaystyle\times\sum_{m=0}^{m_{p}}\epsilon_{m}(-1)^{m}{e^{-i\omega(\tau-|\tilde{z}|-2\tilde{z}_{0}-mT)}-e^{-i\nu(\tau-|\tilde{z}|-2\tilde{z}_{0}-mT)}\over\nu-\omega}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
(43)
The integrands, as functions of the two complex variables $\omega$ and $\nu$, have
a removable singularity at $\omega=\nu$ in either of the two complex planes,
and the residues at these poles are zero. Further analytic calculations,
which are described in Appendix C, show that
$\displaystyle{{\sf d}{\cal E}_{a}(\tau,z<0)\over{\sf
d}(\Omega_{e}\tilde{z}_{0})}={{\sf d}{\cal E}_{b}(\tau,z<0)\over{\sf
d}(\Omega_{e}\tilde{z}_{0})}=0.$ (44)
This result could have been anticipated from the viewpoint of the rigorous
theory of dispersion [Born; Rosenfeld]. Indeed, the terms ${\cal P}_{a}$ and
${\cal P}_{b}$ represent a linear polarization response of the molecular
resonances to the external fields of the entire train (18) of primary
precursors with zero initial conditions,
$X(\tilde{z}_{0}|z_{0})=\dot{X}(\tilde{z}_{0}|z_{0})=0$, which is equivalent
to Eqs. (A.3). The $m$-dependent limits of integration in Eqs. (V.1) and (V.1)
come from the theta-functions $\theta(t_{\ast}-mT)$ in Eq.(18). The absence of
backward radiation from the components ${\cal P}_{a}$ and ${\cal P}_{b}$
indicates that they satisfy the homogeneous wave equation for the average
polarization ${\cal P}$ with a refraction index $n(\omega)$, which is the main
postulate of molecular optics (see §2.4 of the textbook by M. Born and E. Wolf
[Born] and, especially, Ch. VI of the lectures by L. Rosenfeld [Rosenfeld]).
### V.2 Oscillatory term ${\cal E}_{c}$
With $\ddot{\cal{P}}_{c}$ defined by Eq.(IIIc) and the Green’s function
(IVb), the component ${\cal E}_{c}$ of the radiation field reads
$\displaystyle{{\sf d}{\cal E}_{c}(\tau,z<0|z_{0})\over(\Omega_{e}{\sf
d}\tilde{z}_{0})}=\int G(\tau,z;t,z_{0})\cdot\Omega_{e}^{-2}4\pi\ddot{\cal
P}_{c}(t|z_{0})d(\Omega_{e}t)=4\pi{\underline{\omega}_{0}^{2}}\int_{t_{min}(m_{p})}^{t_{max}}d(\Omega_{e}t_{*})e^{-\Gamma_{0}(t_{*}-m_{p}T)}~{}~{}~{}~{}~{}$
$\displaystyle\times{-i\over
4\pi}\oint_{C_{\nu}^{-}}{d\nu\over\nu}\mathfrak{T}[n_{e}(\nu)]\cdot
e^{-i\nu(\tau-t_{*}-|\tilde{z}|)}e^{+i[\nu+\nu
n_{e}(\nu)]\tilde{z}_{0}}\bigg{[}C_{1}\cos[\omega_{0}(t_{*}-m_{p}T)]+C_{2}\sin[\omega_{0}(t_{*}-m_{p}T)]\bigg{]},$
(45)
where coefficients $C_{1}(m_{p}T)$ and $C_{2}(m_{p}T)$ are defined in Eqs.
(23) and, according to (A), all the ${\cal P}(m_{p}T)$ carry the same
dimensionless factor $\Omega_{q}^{2}/\Omega_{e}^{2}$. The final integration
$dt_{*}$ is carried out after the ladder of amplitudes of free oscillations
from $m=0$ to $m=m_{p}$ is built according to the recursive relations (A).
Taking the Green’s function in the form (IVb), we integrate over $t_{*}$
between $t^{*}_{min}=m_{p}T$ and
$t^{*}_{max}=\tau-|\tilde{z}|-2\tilde{z}_{0}$,
$\displaystyle
e^{+\Gamma_{0}m_{p}T}\int_{t^{*}_{min}}^{t^{*}_{max}}e^{i(\nu+i\Gamma_{0})t_{*}}[(C_{1}-iC_{2})e^{i\omega_{0}(t_{*}-m_{p}T)}+(C_{1}+iC_{2})e^{-i\omega_{0}(t_{*}-m_{p}T)}]dt_{*}$
(46)
Assembling the result (C.5) of integration into Eq.(V.2) we arrive at
$\displaystyle{{\sf d}{\cal E}_{c}(\tau,z<0|z_{0})\over\Omega_{e}{\sf
d}\tilde{z}_{0}}\\!=\\!{4\pi\underline{\omega}_{0}^{2}\Omega_{e}^{2}\over 4\pi
i}\\!\\!\oint_{C_{\nu}^{-}}\\!\\!{d\nu\over\nu}\mathfrak{T}[n_{e}(\nu)]r(\nu)e^{-i[\nu-\nu
n_{e}(\nu)]\tilde{z}_{0}}\bigg{[}[\ldots]\\!-\\!e^{-i\nu\tau_{*}}[(i\underline{\nu}-\underline{\Gamma}_{0})C_{1}(m_{p}T)\\!-\\!\underline{\omega}_{0}C_{2}(m_{p}T)]\bigg{]}\\!,~{}~{}~{}~{}~{}$
(47)
where $r(\nu)=[(\nu+i\Gamma_{0})^{2}-\omega_{0}^{2}]^{-1}$,
$\tau_{*}=t^{*}_{max}-t^{*}_{min}=\tau-|\tilde{z}|-2\tilde{z}_{0}-m_{p}T$ and
the expression $[\ldots]$ in brackets (originating from the upper limit of
integration in Eq.(46)) is a linear combination of products of the form
$C_{1,2}e^{-\Gamma_{0}\tau_{*}}e^{\pm i\omega_{0}\tau_{*}}$. This term does
not contain powers of $\nu$ higher than one. Here, the only $\nu$-dependent
exponent is $e^{-i[\nu-\nu n_{e}(\nu)]\tilde{z}_{0}}$. Hence,
$\oint_{C_{\nu}^{-}}{d\nu\over\nu}\mathfrak{T}[n_{e}(\nu)]r(\nu)e^{-i[\nu-\nu
n_{e}(\nu)]\tilde{z}_{0}}~{}\to~{}\oint^{(0+)}{d\zeta\over\zeta}(1-\zeta^{2})~{}r(\zeta)e^{-i\Omega_{e}\tilde{z}_{0}\zeta}~{}=0,$
and this integral turns to zero after integration over the contour (cf.
Appendix C). However, the terms associated with the lower limit, where the
product of exponents, $e^{-i[\nu-\nu
n_{e}(\nu)]\tilde{z}_{0}}e^{-i\nu(\tau-|\tilde{z}|-2\tilde{z}_{0}-m_{p}T)}$,
has an essential singularity in the $\zeta$-plane, must be retained.
Therefore,
$\displaystyle{{\sf d}{\cal E}_{c}(\tau,z<0|z_{0})\over{\sf
d}(\Omega_{e}\tilde{z}_{0})}\\!={4\pi\underline{\omega}_{0}^{2}\Omega_{e}^{2}\over
4\pi
i}\oint_{C_{\nu}^{-}}\\!{d\nu\over\nu}\mathfrak{T}[n_{e}(\nu)]r(\nu)e^{-i\nu(\tau-|\tilde{z}|-2\tilde{z}_{0}-m_{p}T)}e^{-i[\nu-\nu
n_{e}(\nu)]\tilde{z}_{0}}[\underline{\omega}_{0}a_{1}(m_{p}T)-i\underline{\nu}a_{2}(m_{p}T)],~{}~{}~{}~{}~{}
(48)
where
$\displaystyle
a_{1}(m_{p}T)=\bigg{(}1+{\Gamma_{0}^{2}\over\omega_{0}^{2}}\bigg{)}{\dot{\cal
P}(m_{p}T)\over\omega_{0}},~{}~{}a_{2}(m_{p}T)=\bigg{(}1+{\Gamma_{0}^{2}\over\omega_{0}^{2}}\bigg{)}{\cal
P}(m_{p}T)+2{\Gamma_{0}\over\omega_{0}}{\dot{\cal P}(m_{p}T)\over\omega_{0}}.$
(49)
In terms of variable $\zeta$ (cf. Eq.(5)) and with the resonance factor
$r(\zeta)$ given by Eq.(B), Eq.(48) reads as
$\displaystyle{{\sf d}{\cal E}_{c}(\tau,z<0|z_{0})\over{\sf
d}(\Omega_{e}\tilde{z}_{0})}={4\pi\underline{\omega}_{0}\over 2\pi
i}\oint^{(0+)}{d\zeta\over\zeta}\exp{\bigg{\\{}-i{\Omega_{e}\Lambda\over
2}\bigg{[}{\Xi\over\zeta}+{\zeta\over\Xi}\bigg{]}\bigg{\\}}}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
(50) $\displaystyle\times\sum_{l=1}^{\infty}\bigg{[}{\rm Re}\bigg{(}{\sin
2l\vartheta\over\sin\vartheta}\bigg{)}\zeta^{2l}+i{\rm
Im}\bigg{(}{\sin(2l+1)\vartheta\over\sin\vartheta}\bigg{)}\zeta^{2l+1}\bigg{]}\bigg{\\{}\underline{\omega}_{0}a_{1}(m_{p}T)(1-\zeta^{2})-a_{2}(m_{p}T){i\over
2}(\zeta^{-1}-\zeta^{3})\bigg{\\}},$
where
$\Lambda^{2}=(\tau-|\tilde{z}|-m_{p}T-\tilde{z}_{0})^{2}-\tilde{z}_{0}^{2}$
and
$\Xi^{2}=(\tau-|\tilde{z}|-m_{p}T-2\tilde{z}_{0})/(\tau-|\tilde{z}|-m_{p}T)$.
Using the integral representation (7) of the Bessel coefficients,
$\displaystyle{{\sf d}{\cal E}_{c}(\tau,z<0|z_{0})\over{\sf
d}(\Omega_{e}\tilde{z}_{0})}={{\cal
E}_{0}\Omega_{q}^{2}\over\Omega_{e}^{2}}\underline{\omega}_{0}\bigg{\\{}\underline{\omega}_{0}a_{1}(m_{p}T)[s_{1}(\Lambda,\Xi)+s_{3}(\Lambda,\Xi)]-{a_{2}(m_{p}T)\over
2}[s_{2}(\Lambda,\Xi)+s_{4}(\Lambda,\Xi)]\bigg{\\}},$ (51)
where the factor ${\cal E}_{0}\Omega_{q}^{2}/\Omega_{e}^{2}$, which is present
in every function $4\pi a_{j}(m_{p}T)$ starting from (A.3), has been factored
out. The functions $s_{j}(\Lambda,\Xi)$, defined by Eqs.(D), are sums of
products of the form $\sum_{l}\Xi^{2l}J_{2l}(\Omega_{e}\Lambda)$. Their behavior
depends critically on the distance $z_{0}$ to the interface and cannot be
understood without the foregoing analysis of the primary precursors’ propagation
presented in Sec.II. This is addressed in detail in Sec.VI.1 and Appendix D.
## VI Spectroscopy with precursors, results of calculations and discussion
### VI.1 Results of calculations
The main results of this study are represented by Eq. (51). This equation
makes explicit the dependence of the field radiated in the backward
direction on the time interval, $\Delta t=\tau-|\tilde{z}|-m_{p}T$, that it
takes the $m_{p}$-th pulse to travel from the interface to a detector located
at a distance $z$ from the vacuum-medium interface, where $T$ is the duration
of an individual pulse. The field (51) depends substantially on the depth
$z_{0}$ of the dipole’s location in the medium. The expression in curly
brackets is the sum of two terms, each of which is a product of an
$m_{p}$-dependent amplitude and a time-dependent signal.
Figure 5: Time dependence of ladder coefficients $a_{1}$ and $a_{2}$.
Amplitudes of the ladder parts of the signal level out with time at any value
of $T$. The maxima of combined signals determine the resonant values for $T$.
The amplitudes $a_{1}(m_{p}T)$ and $a_{2}(m_{p}T)$, shown in Fig. 5,
initially increase and eventually level off, forming a shape that resembles a
ladder. When $\Gamma_{0}\ll\omega_{0}$, then, according to Eqs.(49), the
parameters $a_{1}(m_{p}T)\propto\dot{\cal P}(m_{p}T)$ and
$a_{2}(m_{p}T)\propto{\cal P}(m_{p}T)$ approximately coincide with the
deviations of velocity and displacement from their zero equilibrium values at
the moment when the $m_{p}$-th pulse hits the dipoles. The precise timing of a
hit is critical for amplifying the amplitude of the dipole oscillation. The
maximum velocity after a hit is attained when the duration of a pulse is
$T=(n+1/2)T_{0}$, which is clearly seen in Fig. 5 (where $n=15$). Such a hit
coincides with the dipole passing through its equilibrium position, where
${\cal P}(m_{p}T)\approx 0$. The added $1/2$ accounts for the change of
polarity of successive pulses. Even a small deviation from the resonant value
of $T$ results in a visible decrease of the velocity amplitude,
$\propto a_{1}$, and an increase of the coordinate amplitude,
$\propto a_{2}$, so that $a_{1}$ and $a_{2}$ become comparable. Saturation of
both amplitudes with growing $m_{p}$ is
observed for all $T$. All calculations shown in Fig. 5 are done with
$\Omega_{e}=10^{15}$ rad/sec, $\omega_{0}=10^{12}$ rad/sec,
$\Gamma_{0}=4\times 10^{9}$ rad/sec, and $z_{0}=3\lambda_{e}$. For larger
$z_{0}$, the behavior of the ladder amplitudes remains qualitatively the same
except that their values decrease by orders of magnitude. This is illustrated
in Fig.6, where $a_{1}$ and $a_{2}$ are plotted for $z_{0}=3\lambda_{e}$ and
$z_{0}=7\lambda_{e}$ with a non-resonant value of $T$ chosen such that
$a_{1}$ and $a_{2}$ are comparable.
Figure 6: Comparison of ladder amplitudes for two values of $z_{0}$,
$3\lambda_{e}$ and $7\lambda_{e}$. The amplitudes of the ladder decrease by
the factor of 100 as the depth of the radiating layer increases from
$3\lambda_{e}$ to $7\lambda_{e}$.
In the final answer (51) for the electric field ${\cal E}_{c}(\tau,z<0|z_{0})$
of radiation, the ladder amplitudes $a_{1}$ and $a_{2}$ are multiplied by the
oscillating source functions $s_{1}+s_{3}$ and $s_{2}+s_{4}$, respectively.
The square of this field (proportional to the radiated power), calculated for
several values of $T$, is shown in Fig. 7 for a small $z_{0}=3\lambda_{e}$.
Figure 7: Square of radiated field from a single layer at $z_{0}=3\lambda_{e}$
for different values of $T$ in the vicinity of resonant $T=15.5T_{0}$. The
saturated intensity of the backward radiation is maximized at resonant values
of $T$; the resonances are rather sharp. Different colors (or shades of gray)
show the approach to resonance from $T=15.46T_{0}$ to $T=15.50T_{0}$, and
then a symmetric drop to $T=15.52T_{0}$.
Here, $T=15.50T_{0}$ is a resonant value of $T$. The general condition for the
resonance, $T_{res}=(n+1/2)T_{0}$, provides an opportunity to measure
$\omega_{0}$; the difference between adjacent resonances in $T$ is equal to
$T_{0}=2\pi/\omega_{0}$. The beginning of each pulse is accompanied by the
precursor, shown in the inset of Fig.7.
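The resonance mechanism can be reproduced with a toy numerical model. In the sketch below (written in Python, not the Mathematica used for the paper’s calculations), each precursor front is replaced by an idealized unit velocity kick of alternating polarity, and the damped oscillator is evolved exactly between kicks; the function name `ladder_amplitude` and all parameter values are illustrative, not taken from the paper.

```python
import math

def ladder_amplitude(T, omega0=1.0, gamma0=0.004, n_pulses=400):
    """Saturated oscillation amplitude of a damped oscillator
    x'' + 2*gamma0*x' + (omega0**2 + gamma0**2)*x = f(t),
    kicked by a unit impulse of alternating polarity at the start of
    each pulse of duration T (a stand-in for the precursor fronts)."""
    gam = gamma0 / omega0
    x, v = 0.0, 0.0                       # displacement and velocity
    for m in range(n_pulses):
        v += (-1) ** m                    # alternating-polarity kick
        # exact free evolution over one pulse duration T
        e = math.exp(-gamma0 * T)
        c, s = math.cos(omega0 * T), math.sin(omega0 * T)
        x, v = (e * (x * (c + gam * s) + v * s / omega0),
                e * (-x * omega0 * (1 + gam ** 2) * s + v * (c - gam * s)))
    return math.hypot(x, v / omega0)

T0 = 2 * math.pi                          # proper period for omega0 = 1
res = ladder_amplitude(15.5 * T0)         # resonant T = (n + 1/2) T0, n = 15
off = ladder_amplitude(15.2 * T0)         # detuned pulse duration
```

With these parameters the saturated amplitude at $T=15.5T_{0}$ exceeds the detuned value at $T=15.2T_{0}$ several times, mirroring the sharp resonances of Fig. 7.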
Figure 8: Time dependence of signals $s_{1}+s_{3}$ and $s_{2}+s_{4}$ for
shallow dipoles ($z_{0}=3\lambda_{e}$). Left panel: on a large scale the
signal (blue) coincides with a harmonic component (orange); on a small scale
shown in the inset, the difference due to the precursor is visible at an early
time. Right panel: same features as on the left panel, except that in
$s_{2}+s_{4}$ the precursor is more pronounced.
The deeper the molecular dipole is located, the smaller is the amplitude of
the associated harmonic oscillations and the more pronounced are the secondary
precursors of its backward radiation, which is triggered by the sharp fronts
of the primary precursors.
Figure 9: Comparison of time dependencies of signals $s_{1}+s_{3}$ (left
panel) and $s_{2}+s_{4}$(right panel) for different values of $z_{0}$. Each
signal is a sum of a harmonic signal (negative sine on the left and negative
cosine on the right) and an oscillating precursor. As the depth increases,
amplitudes of harmonic parts sharply decrease and the precursor parts
(starting from the same amplitude) attenuate less and less.
This can be seen in Figs. 8 and 9, where we plot, with the same parameters,
the signals $s_{1}+s_{3}$ and $s_{2}+s_{4}$ for shallow, $z_{0}=3\lambda_{e}$,
and deep, $z_{0}\geq 7\lambda_{e}$, dipoles, respectively. Every $s_{j}$ appears
to be a sum of a harmonic signal originating from an oscillating dipole and a
secondary precursor formed by the electronic polarization near the leading
front. The pattern can be qualitatively understood from the dependence of the
primary precursors on the distance their leading front has penetrated into the
medium, as given by Fig.2. For shallow dipoles, the impulse obtained from the
relatively smooth electric field of primary precursors is large and the
dipoles immediately begin a harmonic motion, as in Fig.8. For deeper dipoles,
the first peak of the electric field of primary precursors is too sharp to
excite harmonic oscillations of large amplitude. Instead, a secondary
precursor is produced, as in Fig.9. This behavior is confirmed by the
asymptotic formulae (D.15)-(D), which bear the pattern
$J_{0}(\Omega_{e}\tau)-\cos(\omega_{0}\tau)$, where a molecular harmonic and a
precursor are clearly visible.
### VI.2 Discussion
Although we intend to measure the same characteristics of matter that are
traditionally studied by means of spectroscopy, our approach is quite
different. We propose to probe the properties of matter with a train of square
pulses with sharp wavefronts. This approach does not rely on any kind of
spectral device and is motivated by the inherently large difference in the
scales of the physical processes that produce the observed signal and guide
its interpretation. These scales are associated with the Langmuir
frequency $\Omega_{e}\sim 10^{15}-10^{16}$ rad/sec of the electronic component
of polarization, the proper frequency $\omega_{0}\sim 10^{12}$ rad/sec and the
width $\Gamma_{0}$ of a molecular resonance, and, finally, the pulse-repetition
frequency $\nu_{0}\sim 10^{8}-10^{10}$ sec$^{-1}$ of the incident
train that is used to probe molecular resonances. Each of these processes
allows for an exact analytical treatment.
The first process is the formation of primary precursors at the vacuum-medium
interface. It depends only on the highest frequency $\Omega_{e}$ (1), which is
translated into the finest time interval $\tau_{e}=2\pi/\Omega_{e}$ and the
shortest distance $\lambda_{e}=c\tau_{e}$. Its sole parameter is the density
of all electrons in a medium. The electric field of primary precursors can be
found analytically as light electrons begin to radiate and develop collective
behavior forming an index of refraction nearly immediately. Therefore, this
stage is totally under the jurisdiction of the rigorous theory of dispersion
[Rosenfeld; Born]. Our calculations show that in the vicinity of the
interface ($z\lesssim 5\lambda_{e}$) the electric field near the leading
front oscillates smoothly. But the deeper the front penetrates into a
medium, the sharper the first oscillations become. Regardless of how deeply
the pulse penetrates a medium, its amplitude at the leading front stays the
same as the amplitude of the incident signal.
The second process is the excitation of oscillations in heavy molecular
dipoles. The field acting on heavy elastic dipoles is not an incident
monochromatic wave or a train of square pulses, but rather a field of
precursors formed by the electronic component of polarization. It takes many
periods $T_{0}=2\pi/\omega_{0}$ of proper oscillations to develop a collective
behavior of molecular dipoles that would have contributed to the refraction
index. In our case this limit is not reached. Instead, we address the problem
of driving the proper oscillations by a train of primary precursors directly.
We solve the equation of motion (16) for heavy elastic dipole, located at a
distance $z_{0}$ from the interface, in the presence of the electric field
(18) of the primary precursors. Within the model of a classical oscillator,
this problem allows for an analytic solution. We predict that, starting from
the first pulses, the amplitudes of the dipole’s oscillations form an
ascending ladder that eventually reaches a saturation level, unless the
damping $\Gamma_{0}=0$. The saturation level is maximal when the duration
$T=1/\nu$ of an individual incident pulse is
$T_{n}=(n+1/2)T_{0}=2\pi(n+1/2)/\omega_{0}$, which indicates the existence of
a resonance. The time interval, $T_{0}$, between two neighboring resonances
determines the frequency of proper oscillations, $\omega_{0}$.
The third process is the emission of secondary radiation by the oscillating
molecular dipoles. We have shown that, in the final answer for the measured
signal, the ladder amplitudes are multiplied by the time-dependent source
functions, which explicitly depend on time and the distance of a radiating
dipole from the interface, $z_{0}$. Numerical analysis confirms these
functions to be the sums of two distinctive parts. The first part is a
harmonic function oscillating with the proper frequency $\omega_{0}$ of a
molecular resonance. It dominates for the dipoles located close to the
interface, $z_{0}\lesssim 10\lambda_{e}$, and is due to the oscillations
excited by the leading and relatively smooth parts of primary precursors. The
second part of the radiated field oscillates with the Langmuir frequency
$\Omega_{e}$ and represents the train of secondary precursors, which propagate
not only forward, but also in the backward direction with respect to the
incident train. Secondary precursors come predominantly from deeply located
dipoles, where the near-front oscillations of primary precursors are very
fast. This could have been anticipated from a qualitative inspection of Fig.2.
All numerical calculations were performed with Wolfram Mathematica on a
standard PC. Because the Lommel functions are composed of highly oscillatory
Bessel functions, their numerical evaluation is difficult. To work around
this, we developed a special procedure, which is explained in Appendix D.
More accurate calculations may require faster computers.
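The kind of sum involved is easy to write down, though the special procedure of Appendix D is not reproduced here. As a minimal sketch (the helper names `bessel_j` and `s_sum` are ours, and the parameter values are illustrative), one can evaluate the Bessel coefficients from an integral representation and truncate the sum $\sum_{l}\Xi^{2l}J_{2l}(\Omega_{e}\Lambda)$, which converges quickly for $|\Xi|<1$:

```python
import math

def bessel_j(n, x, steps=2000):
    """Bessel coefficient J_n(x) from the integral representation
    J_n(x) = (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt,
    evaluated with the trapezoidal rule."""
    h = math.pi / steps
    total = 0.5 * (1.0 + math.cos(n * math.pi))   # endpoint contributions
    for k in range(1, steps):
        t = k * h
        total += math.cos(n * t - x * math.sin(t))
    return total * h / math.pi

def s_sum(lam, xi, terms=40):
    """Truncated sum of the type sum_l xi^(2l) * J_(2l)(lam) that
    builds the source functions s_j; converges fast for |xi| < 1."""
    return sum(xi ** (2 * l) * bessel_j(2 * l, lam)
               for l in range(1, terms + 1))
```

For large order and moderate argument the integrand oscillates rapidly, which is exactly the numerical difficulty mentioned above: quadrature-based evaluation becomes expensive long before the factor $\Xi^{2l}$ has cut off the tail.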
Finally, we would like to discuss the prospect of building a device
implementing the idea of precursor-based spectroscopy proposed here.
Generators that are able to deliver a train of square pulses with the
repetition frequency of 1-5 GHz are widely available. The harmonic part of the
backward radiation with frequency about a few terahertz can be rectified,
e.g., using a Schottky diode as a power detector, and measured with a
reasonable precision. The most intriguing possibility may exist due to the
secondary precursors in the backward radiation. These can be used to
synchronize the incident signal with the returned signal, possibly, even
forming a standing wave between the generator and a sample.
## Appendix A Calculation of the ladder of excitations
Let $m_{p}=m_{p}(t_{\ast})$ be the number of wavefronts that have crossed
$z_{0}$ by the time $t$, $m_{p}T\leqslant t<(m_{p}+1)T$, so that $m_{p}=0$
corresponds to the leading front, which enters a not-yet-polarized medium.
When the dipole is hit by the next pulse, the upper limit $m_{p}(t_{*})$
increases by one. In order to compute $X(t|z_{0})$, we substitute (18) into
Eq.(20),
which yields,
$\displaystyle X(t|z_{0})={{\cal E}_{0}q\over
M}\cdot\int_{0}^{t_{\ast}}K(t_{\ast}-t^{\prime}_{\ast})\bigg{[}\sum_{m=0}^{m_{p}(t_{\ast})}\epsilon_{m}(-1)^{m}\theta(t^{\prime}_{\ast}-mT)E^{\prime}_{t}(t^{\prime}_{\ast}-mT)\bigg{]}dt^{\prime}_{\ast}~{},$
(A.1)
where kernel $K(t-t^{\prime})$, the fundamental solution of differential
equation (16), and its first two time derivatives are as follows,
$\displaystyle K(x)$ $\displaystyle=$
$\displaystyle\theta(x)e^{-\Gamma_{0}x}{\sin\omega_{0}x\over\omega_{0}},~{}~{}~{}\dot{K}(x)={dK(x)\over
dx}=\theta(x)e^{-\Gamma_{0}x}\bigg{[}\cos\omega_{0}x-\Gamma_{0}{\sin\omega_{0}x\over\omega_{0}}\bigg{]},~{}~{}~{}~{}~{}~{}~{}$
$\displaystyle\ddot{K}(x)$ $\displaystyle=$ $\displaystyle{d^{2}K(x)\over
dx^{2}}=\delta(x)+\theta(x)e^{-\Gamma_{0}x}\bigg{[}-2\Gamma_{0}\cos\omega_{0}x-(\omega_{0}^{2}-\Gamma_{0}^{2}){\sin\omega_{0}x\over\omega_{0}}\bigg{]},~{}~{}~{}K(0)=0,~{}~{}~{}\dot{K}(0)=1~{}.$
(A.2)
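The closed forms in (A.2) can be checked numerically. The sketch below uses arbitrary test values for $\omega_{0}$ and $\Gamma_{0}$ and verifies, by finite differences, that $\dot{K}$ and the smooth part of $\ddot{K}$ are indeed the derivatives of $K$ for $x>0$, and that $K$ solves the damped-oscillator equation $\ddot{x}+2\Gamma_{0}\dot{x}+(\omega_{0}^{2}+\Gamma_{0}^{2})x=0$ there (this form of Eq.(16) is inferred from the kernel, since the equation itself is not reproduced in this section):

```python
import math

w0, g0 = 1.3, 0.2    # sample omega_0 and Gamma_0 (test values only)

def K(x):            # kernel of Eq. (A.2), for x > 0
    return math.exp(-g0 * x) * math.sin(w0 * x) / w0

def Kdot(x):         # first derivative, x > 0
    return math.exp(-g0 * x) * (math.cos(w0 * x) - g0 * math.sin(w0 * x) / w0)

def Kddot(x):        # smooth part of the second derivative, x > 0
    return math.exp(-g0 * x) * (-2 * g0 * math.cos(w0 * x)
                                - (w0 ** 2 - g0 ** 2) * math.sin(w0 * x) / w0)

h = 1e-4
for x in (0.7, 2.0, 5.0):
    # central differences reproduce the closed-form derivatives
    assert abs((K(x + h) - K(x - h)) / (2 * h) - Kdot(x)) < 1e-7
    assert abs((K(x + h) - 2 * K(x) + K(x - h)) / h ** 2 - Kddot(x)) < 1e-6
    # K solves the homogeneous damped-oscillator equation for x > 0
    assert abs(Kddot(x) + 2 * g0 * Kdot(x) + (w0 ** 2 + g0 ** 2) * K(x)) < 1e-12
```

The initial conditions $K(0)=0$ and $\dot{K}(0)=1$ follow directly from the same formulas.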
Since we assume that the dipole’s charges are initially at rest, then for
$0<t_{*}<T$,
$\displaystyle X_{(0)}(t_{*})={{\cal E}_{0}q\over
M}\int_{0}^{t_{*}}K(t_{\ast}-t^{\prime})E^{\prime}_{t}(t^{\prime})dt^{\prime},~{}~{}\dot{X}_{(0)}(t_{*})={{\cal
E}_{0}q\over
M}\int_{0}^{t_{*}}\dot{K}(t_{\ast}-t^{\prime})E^{\prime}_{t}(t^{\prime})dt^{\prime},~{}~{}X_{(0)}(0)=\dot{X}_{(0)}(0)=0.~{}~{}~{}$
(A.3)
If the $m_{p}$-th pulse is passing through a dipole, then for
$m_{p}T<t_{*}<(m_{p}+1)T$,
$\displaystyle X_{(m_{p})}(t_{*})={{\cal E}_{0}q\over
M}\int_{m_{p}T}^{t_{*}}K(t_{*}-t^{\prime})[\sum_{m=0}^{m_{p}(t_{*})}(-1)^{m}\epsilon_{m}\theta(t^{\prime}_{\ast}-mT)E^{\prime}_{t}(t^{\prime}-mT)]dt^{\prime}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
$\displaystyle+e^{-\Gamma_{0}t_{*}}[b_{c}(m_{p}T)\cos\omega_{0}t_{*}+b_{s}(m_{p}T)\sin\omega_{0}t_{*}],~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
(A.4) $\displaystyle\dot{X}_{(m_{p})}(t_{*})={{\cal E}_{0}q\over
M}\int_{m_{p}T}^{t_{*}}\dot{K}(t_{*}-t^{\prime})[\sum_{m=0}^{m_{p}(t_{*})}(-1)^{m}\epsilon_{m}\theta(t^{\prime}_{\ast}-mT)E^{\prime}_{t}(t^{\prime}-mT)]dt^{\prime}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
$\displaystyle+e^{-\Gamma_{0}t_{*}}\\{[\omega_{0}b_{s}(m_{p}T)-\Gamma_{0}b_{c}(m_{p}T)]\cos\omega_{0}t_{*}-[\omega_{0}b_{c}(m_{p}T)+\Gamma_{0}b_{s}(m_{p}T)]\sin\omega_{0}t_{*}\\}~{}.~{}~{}$
The coefficients $b_{c}(m_{p}T)$ and $b_{s}(m_{p}T)$ can be expressed in terms
of the dipole’s amplitude $X(m_{p}T)$ and its time derivative
$\dot{X}(m_{p}T)$. Coordinate and velocity at time $T$ are continuous, i.e.,
their values at the end of the first pulse, $m_{p}=0$, and at the beginning of
the second pulse, $m_{p}=1$, are equal. According to (A.3), these are
$\displaystyle X_{(0)}(T)=X_{(1)}(T)={{\cal E}_{0}q\over
M}\int_{0}^{T}K(T-t^{\prime})E^{\prime}_{t}(t^{\prime})dt^{\prime},~{}~{}~{}\dot{X}_{(0)}(T)=\dot{X}_{(1)}(T)={{\cal
E}_{0}q\over
M}\int_{0}^{T}\dot{K}(T-t^{\prime})E^{\prime}_{t}(t^{\prime})dt^{\prime}~{}.$
Similarly, if in Eq. (A) $t_{*}=m_{p}T$, at the beginning of the $m_{p}$-th
interval, the integrals become zero and
$\displaystyle
X_{(m_{p})}(m_{p}T)=e^{-\Gamma_{0}m_{p}T}[b_{c}(m_{p}T)\cos\omega_{0}m_{p}T+b_{s}(m_{p}T)\sin\omega_{0}m_{p}T],~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
(A.5)
$\displaystyle\dot{X}_{(m_{p})}(m_{p}T)=e^{-\Gamma_{0}m_{p}T}\\{[\omega_{0}b_{s}(m_{p}T)-\Gamma_{0}b_{c}(m_{p}T)]\cos\omega_{0}m_{p}T-[\omega_{0}b_{c}(m_{p}T)+\Gamma_{0}b_{s}(m_{p}T)]\sin\omega_{0}m_{p}T\\}.$
Now, we can trade $b_{c}(mT)$ and $b_{s}(mT)$ for $X(mT)$ and $\dot{X}(mT)$:
$\displaystyle\omega_{0}e^{-\Gamma_{0}m_{p}T}b_{c}(m_{p}T)=(\omega_{0}\cos\omega_{0}m_{p}T-\Gamma_{0}\sin\omega_{0}m_{p}T)\cdot
X_{(m_{p})}(m_{p}T)-\sin\omega_{0}m_{p}T\cdot\dot{X}_{(m_{p})}(m_{p}T)~{},$
$\displaystyle\omega_{0}e^{-\Gamma_{0}m_{p}T}b_{s}(m_{p}T)=(\omega_{0}\sin\omega_{0}m_{p}T+\Gamma_{0}\cos\omega_{0}m_{p}T)\cdot
X_{(m_{p})}(m_{p}T)+\cos\omega_{0}m_{p}T\cdot\dot{X}_{(m_{p})}(m_{p}T)~{}.$
(A.6)
Then for $m_{p}(t_{\ast})T<t_{\ast}<(m_{p}(t_{\ast})+1)T$,
$\displaystyle X_{(m_{p})}(t_{*})={{\cal E}_{0}q\over
M}\int_{m_{p}T}^{t_{*}}K(t_{\ast}-t^{\prime})~{}\sum_{m=0}^{m_{p}(t_{*})}(-1)^{m}\epsilon_{m}\theta(t^{\prime}-mT)E^{\prime}_{t}(t^{\prime}-mT)~{}dt^{\prime}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
$\displaystyle+e^{-\Gamma_{0}(t_{*}-m_{p}T)}\bigg{\\{}\bigg{[}\cos\omega_{0}(t_{*}-m_{p}T)+{\Gamma_{0}\over\omega_{0}}\sin\omega_{0}(t_{*}-m_{p}T)\bigg{]}X_{(m_{p})}(m_{p}T)+\sin\omega_{0}(t_{*}-m_{p}T){\dot{X}_{(m_{p})}(m_{p}T)\over\omega_{0}}\bigg{\\}},~{}~{}~{}~{}$
$\displaystyle{\dot{X}_{(m_{p})}(t_{*})\over\omega_{0}}={{\cal E}_{0}q\over
M}\int_{m_{p}T}^{t_{*}}{\dot{K}(t_{\ast}-t^{\prime})\over\omega_{0}}[\sum_{m=0}^{m_{p}(t_{*})}(-1)^{m}\epsilon_{m}\theta(t^{\prime}-mT)E^{\prime}_{t}(t^{\prime}-mT)]dt^{\prime}~{}+~{}e^{-\Gamma_{0}(t_{*}-m_{p}T)}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
(A.7)
$\displaystyle\times\bigg{\\{}-\bigg{(}1+{\Gamma_{0}^{2}\over\omega_{0}^{2}}\bigg{)}\sin\omega_{0}(t_{*}-m_{p}T)X_{(m_{p})}(m_{p}T)+\bigg{[}\cos\omega_{0}(t_{*}-m_{p}T)-{\Gamma_{0}\over\omega_{0}}\sin\omega_{0}(t_{*}-m_{p}T)\bigg{]}{\dot{X}_{(m_{p})}(m_{p}T)\over\omega_{0}}\bigg{\\}}~{}~{}~{}~{}~{}~{}~{}$
For $t_{*}=m_{p}T$, the above equations become identities. For
$t_{*}=(m_{p}+1)T$ in Eqs.(A), we obtain the recursion formula for the
coefficients ${X}_{(m_{p})}(m_{p}T)$, which form the “ladder of amplitudes” of
harmonic oscillations,
$\displaystyle X_{(m_{p})}[(m_{p}+1)T]={{\cal E}_{0}q\over
M}\int_{m_{p}T}^{(m_{p}+1)T}K[(m_{p}+1)T-t_{*}]\sum_{m=0}^{m_{p}(t_{*})}(-1)^{m}\epsilon_{m}\theta(t_{*}-mT)E^{\prime}_{t}(t_{*}-mT)dt_{*}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
$\displaystyle+e^{-\Gamma_{0}T}\bigg{\\{}[\cos\omega_{0}T+{\Gamma_{0}\over\omega_{0}}\sin\omega_{0}T]\cdot{X}_{(m_{p})}(m_{p}T)+\sin\omega_{0}T\cdot{\dot{X}_{(m_{p})}(m_{p}T)\over\omega_{0}}\bigg{\\}},~{}~{}~{}~{}~{}~{}$
$\displaystyle{\dot{X}_{(m_{p})}[(m_{p}+1)T]\over\omega_{0}}={{\cal
E}_{0}q\over
M}\int_{m_{p}T}^{(m_{p}+1)T}{\dot{K}[(m_{p}+1)T-t_{*}]\over\omega_{0}}\sum_{m=0}^{m_{p}(t_{*})}(-1)^{m}\epsilon_{m}\theta(t_{*}-mT)E^{\prime}_{t}(t_{*}-mT)dt_{*}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
(A.8)
$\displaystyle+e^{-\Gamma_{0}T}\bigg{\\{}-(1+{\Gamma_{0}^{2}\over\omega_{0}^{2}})\sin\omega_{0}T\cdot{X}_{(m_{p})}(m_{p}T)+[\cos\omega_{0}T-{\Gamma_{0}\over\omega_{0}}\sin\omega_{0}T]\cdot{\dot{X}_{(m_{p})}(m_{p}T)\over\omega_{0}}\bigg{\\}}.$
As expected, the amplitude of the ladder decreases with a greater duration $T$
of its steps, which is an obvious effect of $\Gamma_{0}\neq 0$. We remind the
reader that every term of the sequence $X_{(m_{p})}$, $m_{p}=1,2,\ldots$,
implicitly bears the factor ${\cal E}_{0}q/M$, which first appears in
Eqs.(A.3) for $m_{p}=0$ and is carried through by the recursion (A).
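The homogeneous part of the recursion (A) is a linear map on the pair $(X,\dot{X}/\omega_{0})$. The sketch below (with sample values for $\omega_{0}$ and $\Gamma_{0}$) checks two structural properties this map must have as the exact free evolution of a damped oscillator: composing two steps equals one longer step, and the phase-space area contracts only through the damping factor $e^{-2\Gamma_{0}T}$:

```python
import math

w0, g0 = 1.0, 0.01          # sample omega_0 and Gamma_0
gam = g0 / w0

def step(T):
    """Matrix of the homogeneous part of recursion (A.8), acting on
    the column (X, Xdot/omega_0): free damped evolution over time T."""
    e = math.exp(-g0 * T)
    c, s = math.cos(w0 * T), math.sin(w0 * T)
    return [[e * (c + gam * s),        e * s],
            [-e * (1 + gam ** 2) * s,  e * (c - gam * s)]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

T1, T2 = 0.7, 1.9
A, B, C = step(T1), step(T2), step(T1 + T2)
# evolving by T1 then T2 equals evolving by T1 + T2 (flow property)
assert all(abs(mul(B, A)[i][j] - C[i][j]) < 1e-12
           for i in range(2) for j in range(2))
# determinant e^{-2 Gamma_0 T}: area shrinks only through damping
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
assert abs(det - math.exp(-2 * g0 * T1)) < 1e-12
```

At $\Gamma_{0}=0$ the map reduces to a pure rotation, so the kicks supplied by the integral terms could raise the ladder indefinitely; the factor $e^{-2\Gamma_{0}T}$ is what enforces the saturation discussed above.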
## Appendix B Resonance factor ${\bm{r}(\omega)}$ in terms of
${\bm{\zeta}}$-variable
In terms of the variable $\zeta$, which is defined by the mapping (5),
$\underline{\omega}=(\zeta+1/\zeta)/2$, the denominator of the resonance
factor $r(\omega)=[(\omega+i\Gamma_{0})^{2}-\omega_{0}^{2}]^{-1}$ becomes a
fourth order polynomial with respect to $\zeta$. In what follows,
$\underline{\omega}_{0}=\omega_{0}/\Omega_{e}$ and
$\underline{\Gamma}_{0}=\Gamma_{0}/\Omega_{e}$,
$\displaystyle
r(\zeta)={\zeta\over\Omega_{e}^{2}\underline{\omega}_{0}}\bigg{(}{1\over(\zeta-\zeta_{1})(\zeta-\zeta_{2})}-{1\over(\zeta-\zeta_{3})(\zeta-\zeta_{4})}\bigg{)}$
(B.1)
with the roots
$\zeta_{1,2}=(\underline{\omega}_{0}-i\underline{\Gamma}_{0})\pm
i\sqrt{1-(\underline{\omega}_{0}-i\underline{\Gamma}_{0})^{2}}$ and
$\zeta_{3,4}=-(\underline{\omega}_{0}+i\underline{\Gamma}_{0})\pm
i\sqrt{1-(\underline{\omega}_{0}+i\underline{\Gamma}_{0})^{2}}$. Since the
function (37) has no poles in the $\omega$-plane and the radius of the contour
$C_{\zeta}$ can be made arbitrarily small, it is possible to expand the
resonance factor $r(\zeta)$ in ascending powers of small $\zeta$. The algebra
can be greatly simplified by introducing a complex angle,
$\vartheta=\vartheta^{\prime}+i\vartheta^{\prime\prime}$, such that
$\underline{\omega}_{0}-i\underline{\Gamma}_{0}=\cos\vartheta$,
$\sqrt{1-(\underline{\omega}_{0}-i\underline{\Gamma}_{0})^{2}}=\sin\vartheta$
and, hence,
$\zeta_{1}=e^{i\vartheta}=e^{i\vartheta^{\prime}-\vartheta^{\prime\prime}},~{}~{}\zeta_{2}=1/\zeta_{1},~{}~{}\zeta_{3}=-\zeta_{1}^{\ast}=-e^{-i\vartheta^{\ast}},~{}~{}\zeta_{4}=-\zeta_{2}^{\ast}=1/\zeta_{3}$.
Then, Eq. (B.1) can be rewritten as
$\displaystyle\Omega_{e}^{2}\underline{\omega}_{0}r(\zeta)={\zeta\over(1-\zeta
e^{i\vartheta})(1-\zeta e^{-i\vartheta})}-{\zeta\over(1+\zeta
e^{i\vartheta^{*}})(1+\zeta
e^{-i\vartheta^{*}})}=\sum_{k=2}^{\infty}\bigg{[}{\sin
k\vartheta\over\sin\vartheta}+(-1)^{k}{\sin
k\vartheta^{\ast}\over\sin\vartheta^{\ast}}\bigg{]}\zeta^{k}$
$\displaystyle=\sum_{l=1}^{\infty}\bigg{[}2\;{\rm Re}\bigg{(}{\sin
2l\vartheta\over\sin\vartheta}\bigg{)}\zeta^{2l}+2i\;{\rm
Im}\bigg{(}{\sin(2l+1)\vartheta\over\sin\vartheta}\bigg{)}\zeta^{2l+1}\bigg{]}~{},~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
(B.2)
where we have noticed that the terms with $k=0$ and $k=1$ or, equivalently,
with $l=0$ are zero. The Taylor series for $r(\zeta)$ thus begins with a term
$\propto\zeta^{2}$.
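The vanishing of the $k=0$ and $k=1$ terms, and hence the $\zeta^{2}$ leading order, can be verified numerically. A minimal sketch (the values of $\underline{\omega}_{0}$ and $\underline{\Gamma}_{0}$ are illustrative) comparing the partial-fraction form in (B.2) with the truncated power series:

```python
import cmath

w0u, g0u = 1e-3, 4e-6       # sample underlined omega_0 and Gamma_0

# complex angle defined by cos(theta) = w0u - i*g0u
theta = cmath.acos(w0u - 1j * g0u)

def r_closed(z):
    """Omega_e^2 * w0u * r(zeta): the partial-fraction form of (B.2),
    with the roots zeta_1..4 expressed through the complex angle theta."""
    tc = theta.conjugate()
    return (z / ((1 - z * cmath.exp(1j * theta)) * (1 - z * cmath.exp(-1j * theta)))
            - z / ((1 + z * cmath.exp(1j * tc)) * (1 + z * cmath.exp(-1j * tc))))

def r_series(z, kmax=60):
    """Same quantity as a power series; the k = 0, 1 terms vanish,
    so the sum starts at k = 2."""
    st, stc = cmath.sin(theta), cmath.sin(theta.conjugate())
    return sum((cmath.sin(k * theta) / st
                + (-1) ** k * cmath.sin(k * theta.conjugate()) / stc) * z ** k
               for k in range(2, kmax + 1))
```

Starting the series at $k=2$ reproduces the closed form to machine precision inside the unit circle, confirming that the Taylor series of $r(\zeta)$ begins at order $\zeta^{2}$.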
## Appendix C Calculation of some integrals
The double integral (V.1) for ${\cal E}_{a}$ is symmetric with respect to the
interchange $\omega\leftrightarrow\nu$. It is sufficient to consider only one
of the two terms in the numerator,
$\displaystyle{{\sf d}{\cal E}_{a}(\tau,z<0)\over{\sf d}(\Omega_{e}\tilde{z}_{0})}={2\Omega^{2}_{q}{\cal E}_{0}\over 2\pi i\Omega_{e}}\sum_{m=0}^{m_{p}}\epsilon_{m}(-1)^{m}\bigg({-i\over 4\pi}\bigg)\oint_{C_{\omega}^{-}}{d\omega\over\omega}\mathfrak{T}[n_{e}(\omega)]\,e^{-i[\omega-\omega n_{e}(\omega)]\tilde{z}_{0}}e^{-i\omega(\tau-|\tilde{z}|-2\tilde{z}_{0}-mT)}$
$\displaystyle\times\oint_{C_{\nu}^{-}}{d\nu\over\nu}\mathfrak{T}[n_{e}(\nu)]{e^{-i[\nu-\nu n_{e}(\nu)]\tilde{z}_{0}}\over\nu-\omega}.$
(C.1)
The double integrals (V.1) encountered for ${\cal E}_{b}$ differ slightly,
$\displaystyle I_{1}=\oint_{C_{\omega}^{-}}{d\omega\over\omega}\mathfrak{T}[n_{e}(\omega)]r(\omega)[2i\Gamma_{0}\omega-\omega_{m}^{2}]\,e^{-i[\omega-\omega n_{e}(\omega)]\tilde{z}_{0}}e^{-i\omega(\tau-|\tilde{z}|-2\tilde{z}_{0}-mT)}\oint_{C_{\nu}^{-}}{d\nu\over\nu}\mathfrak{T}[n_{e}(\nu)]{e^{-i[\nu-\nu n_{e}(\nu)]\tilde{z}_{0}}\over\nu-\omega},$
(C.2)
$\displaystyle I_{2}=\oint_{C_{\nu}^{-}}{d\nu\over\nu}\mathfrak{T}[n_{e}(\nu)]\,e^{-i[\nu-\nu n_{e}(\nu)]\tilde{z}_{0}}e^{-i\nu(\tau-|\tilde{z}|-2\tilde{z}_{0}-mT)}\oint_{C_{\omega}^{-}}{d\omega\over\omega}\mathfrak{T}[n_{e}(\omega)]r(\omega)[2i\Gamma_{0}\omega-\omega_{m}^{2}]\,{e^{-i[\omega-\omega n_{e}(\omega)]\tilde{z}_{0}}\over\nu-\omega}.$
(C.3)
We use transformation (5) to trade $\nu$ in Eqs. (C.1) and (C.2) for a new variable $\zeta$. This leads to an integral over a circle of arbitrarily small radius around the origin,
$\displaystyle\oint_{C_{\nu}^{-}}{d\nu\over\nu}\mathfrak{T}[n_{e}(\nu)]{e^{-i[\nu-\nu n_{e}(\nu)]\tilde{z}_{0}}\over\nu-\omega}~\to~\oint^{(0+)}d\zeta\,{1-\zeta^{2}\over\zeta^{2}-2\underline{\omega}\zeta+1}\,e^{-i\Omega_{e}\tilde{z}_{0}\zeta}=0.$
(C.4)
This integral equals zero simply because its integrand is a regular function inside the contour of integration. The integral in (C.3) differs from those in (C.1) and (C.2) by an additional factor $r(\omega)[2i\Gamma_{0}\omega-\omega_{m}^{2}]$ in the integrand. According to (B.2), the Taylor expansion of the resonant factor $r(\omega)$ (in terms of the variable $\zeta$) begins with $\zeta^{2}$, while $\omega\sim\zeta+\zeta^{-1}$, so that the Taylor expansion of the extra factor begins with $\zeta^{1}$. Hence, this integral is also zero.
Integration (46) is straightforward. Multiplying the result by the external factor $e^{-i\nu(\tau-|\tilde{z}|)}e^{+i[\nu+\nu n_{e}(\nu)]\tilde{z}_{0}}$ from Eq. (V.2), we obtain,
$\displaystyle e^{-i[\nu-\nu n_{e}(\nu)]\tilde{z}_{0}}\bigg[(C_{1}-iC_{2}){e^{i(\omega_{0}+i\Gamma_{0})\tau_{*}}-e^{-i\nu\tau_{*}}\over i(\nu+\omega_{0}+i\Gamma_{0})}+(C_{1}+iC_{2}){e^{i(-\omega_{0}+i\Gamma_{0})\tau_{*}}-e^{-i\nu\tau_{*}}\over i(\nu-\omega_{0}+i\Gamma_{0})}\bigg],$
(C.5)
where
$\tau_{*}=t^{*}_{max}-t^{*}_{min}=\tau-|\tilde{z}|-2\tilde{z}_{0}-m_{p}T$ is
the “full time of radiation”. This function has no poles in the complex
$\nu$-plane.
## Appendix D The source functions.
The results of calculations of Sec.VI are expressed via four functions
$s_{j}(\lambda,\xi)$, $j=1,2,3,4$. The functions $s_{j}(\lambda,\xi)$ are
defined as the sums of the following series,
$\displaystyle s_{1}(\lambda,\xi)=\sum_{l=1}^{\infty}(-1)^{l}{\rm Re}\bigg[{\sin 2l\vartheta\over\sin\vartheta}\bigg]\,[\xi^{2l}J_{2l}(\Omega_{e}\lambda)+\xi^{2l+2}J_{2l+2}(\Omega_{e}\lambda)],$
$\displaystyle s_{2}(\lambda,\xi)=\sum_{l=1}^{\infty}(-1)^{l}{\rm Re}\bigg[{\sin 2l\vartheta\over\sin\vartheta}\bigg]\,[\xi^{2l-1}J_{2l-1}(\Omega_{e}\lambda)-\xi^{2l+3}J_{2l+3}(\Omega_{e}\lambda)],$
$\displaystyle s_{3}(\lambda,\xi)=\sum_{l=1}^{\infty}(-1)^{l}{\rm Im}\bigg[{\sin(2l+1)\vartheta\over\sin\vartheta}\bigg]\,[\xi^{2l+1}J_{2l+1}(\Omega_{e}\lambda)+\xi^{2l+3}J_{2l+3}(\Omega_{e}\lambda)],$
$\displaystyle s_{4}(\lambda,\xi)=\sum_{l=1}^{\infty}(-1)^{l}{\rm Im}\bigg[{\sin(2l+1)\vartheta\over\sin\vartheta}\bigg]\,[\xi^{2l}J_{2l}(\Omega_{e}\lambda)-\xi^{2l+4}J_{2l+4}(\Omega_{e}\lambda)].$
(D.1)
These sums are intimately connected with the Lommel functions $U_{\nu}(w,z)$ of two variables (Watson),
$\displaystyle W_{\nu}(w,z)=\sum_{l=0}^{\infty}w^{2l}J_{2l+\nu}(z)\equiv(iw)^{-\nu}U_{\nu}(iwz,z).$
(D.2)
Consider the calculation of $s_{1}$ as an example. Trading the original $\vartheta$ for $\pi/2+\delta$ (so that $\sin\delta=-\underline{\omega}_{0}+i\underline{\Gamma}_{0}$) and exercising simple algebra, we arrive at,
$\displaystyle s_{1}(\lambda,\xi)=\sum_{l=1}^{\infty}{\rm Re}\bigg[{\sin 2l\delta\over\cos\delta}\bigg]\,[\xi^{2l}J_{2l}+\xi^{2l+2}J_{2l+2}]=\sum_{l^{\prime}=0}^{\infty}{\rm Re}\bigg[{\sin(2l^{\prime}+2)\delta+\sin 2l^{\prime}\delta\over\cos\delta}\bigg]\xi^{2l^{\prime}+2}J_{2l^{\prime}+2}$
$\displaystyle=2\xi^{2}{\rm Re}\bigg[\sum_{l^{\prime}=0}^{\infty}\sin[(2l^{\prime}+1)\delta]\,\xi^{2l^{\prime}}J_{2l^{\prime}+2}\bigg]=2\xi^{2}{\rm Re}\bigg\{{1\over 2i}\sum_{l^{\prime}=0}^{\infty}\bigg[e^{i\delta}(e^{i\delta}\xi)^{2l^{\prime}}-e^{-i\delta}(e^{-i\delta}\xi)^{2l^{\prime}}\bigg]J_{2l^{\prime}+2}\bigg\},$
(D.3)
where the argument $\Omega_{e}\lambda$ of the Bessel functions is omitted. Finally, referring to Eq. (D.2),
$\displaystyle s_{1}(\lambda,\xi)=-\xi^{2}{\rm Re}\big\{i\big[e^{i\delta}W_{2}(e^{i\delta}\xi,\lambda)-e^{-i\delta}W_{2}(e^{-i\delta}\xi,\lambda)\big]\big\}.$
(D.4)
In the same way, rearranging the first sum in $s_{2}$ as $l\to l^{\prime}+1$ and the second sum as $l\to l^{\prime}-1$ and subtracting the extra terms yields,
$\displaystyle s_{2}(\lambda,\xi)=2\underline{\omega}_{0}\xi J_{1}(\Omega_{e}\lambda)+4\,{\rm Re}\bigg[\sin\delta\sum_{l^{\prime}=0}^{\infty}\cos[2l^{\prime}\delta]\,\xi^{2l^{\prime}+1}J_{2l^{\prime}+1}\bigg],$
(D.5)
which can be written as,
$\displaystyle s_{2}(\lambda,\xi)=2\underline{\omega}_{0}\xi J_{1}(\Omega_{e}\lambda)+2\xi\,{\rm Re}\big\{\sin\delta\big[W_{1}(e^{i\delta}\xi,\lambda)+W_{1}(e^{-i\delta}\xi,\lambda)\big]\big\}.$
(D.6)
The remaining two functions, $s_{3}(\lambda,\xi)$ and $s_{4}(\lambda,\xi)$, are transformed into
$\displaystyle s_{3}(\lambda,\xi)=2\sum_{l^{\prime}=1}^{\infty}(-1)^{l^{\prime}}{\rm Im}[\cos 2l^{\prime}\vartheta]\,\xi^{2l^{\prime}+1}J_{2l^{\prime}+1}=2\xi\sum_{l^{\prime}=0}^{\infty}{\rm Im}[\cos 2l^{\prime}\delta]\,\xi^{2l^{\prime}}J_{2l^{\prime}+1}=\xi\,{\rm Im}\big[W_{1}(e^{i\delta}\xi,\lambda)+W_{1}(e^{-i\delta}\xi,\lambda)\big]$
(D.7)
and
$\displaystyle s_{4}(\lambda,\xi)=-4{\rm Im}\bigg[\cos\vartheta\sum_{l^{\prime}=0}^{\infty}(-1)^{l^{\prime}}[\cos(2l^{\prime}+1)\vartheta]\,\xi^{2l^{\prime}+2}J_{2l^{\prime}+2}\bigg]=-4\xi^{2}{\rm Im}\bigg[\sin\delta\sum_{l^{\prime}=0}^{\infty}\sin[(2l^{\prime}+1)\delta]\,\xi^{2l^{\prime}}J_{2l^{\prime}+2}\bigg]$
$\displaystyle=2\xi^{2}{\rm Im}\big\{i\sin\delta\big[e^{i\delta}W_{2}(e^{i\delta}\xi,\lambda)-e^{-i\delta}W_{2}(e^{-i\delta}\xi,\lambda)\big]\big\}.$
(D.8)
The functions $s_{j}(\Lambda,\Xi)$ show up in Eq. (51) for the backward dipole radiation with the following arguments: $\Lambda^{2}=\Omega_{e}^{2}[(\tau-|\tilde{z}|-\tilde{z}_{0}-m_{p}T)^{2}-\tilde{z}_{0}^{2}]$, $\Xi^{2}=(\tau-|\tilde{z}|-2\tilde{z}_{0}-m_{p}T)/(\tau-|\tilde{z}|-m_{p}T)$ and $\Lambda\Xi=\Omega_{e}(\tau-|\tilde{z}|-2\tilde{z}_{0}-m_{p}T)$. In this study, the computational problems are somewhat alleviated by the fact that we are interested in the functions $s_{j}(\Lambda,\Xi)$ only at relatively small values of $\Lambda\Xi$.
The Lommel functions of two variables, despite having been named long ago, are not studied as exhaustively as, e.g., the Bessel functions, and there are no tables (at least for complex-valued arguments) that could be used for numerical calculations. A priori, two practical methods seem obvious. One is to cut off the number of terms in the series (D.2). The other is to use the integral representation of the Lommel functions (see Eq. §16.53(1) in Ref. Watson),
$\displaystyle W_{\nu}(\xi,\lambda)=\sum_{l=0}^{\infty}\xi^{2l}J_{2l+\nu}(\Omega_{e}\lambda)=\Omega_{e}\lambda\int_{0}^{1}J_{\nu-1}(\Omega_{e}\lambda y)\cosh\big[{\Omega_{e}\lambda\xi\over 2}(1-y^{2})\big]y^{\nu}dy,\quad{\rm Re}(\nu)>0.$
(D.9)
In application to our problem, the major challenge in computing these functions stems from the fact that the Langmuir frequency $\Omega_{e}$ is very high ($\Omega_{e}/2\pi\sim 10^{15}$ Hz), so that the Bessel functions oscillate rapidly. Furthermore, in the integrand of (D.9), the amplitude of these oscillations grows exponentially as $y\to 0$. Therefore, it is difficult to estimate the accuracy of the possible approximations. Here, we attempt to combine the two methods. It is straightforward to check the following recursion formula,
$\displaystyle
W_{\nu}(\xi,\lambda)=J_{\nu}(\Omega_{e}\lambda)+\xi^{2}W_{\nu+2}(\xi,\lambda).$
(D.10)
By iterating the recursion formula (D.10) $N$ times and applying (D.9) to the $(N+1)$-st term, one readily obtains,
$\displaystyle W_{\nu}(\xi,\lambda)=\sum_{l=0}^{N-1}\xi^{2l}J_{2l+\nu}(\Omega_{e}\lambda)+\xi^{2N}\Omega_{e}\lambda\int_{0}^{1}J_{\nu+2N-1}(\Omega_{e}\lambda y)\cosh\big[{\Omega_{e}\lambda\xi\over 2}(1-y^{2})\big]y^{2N+\nu}dy.$
(D.11)
When $N$ is sufficiently large, the exponential growth of the hyperbolic cosine at $y\rightarrow 0$ and the rapid oscillations of the Bessel function at $y\rightarrow 1$ in the integrand of $W_{\nu+2N}$ given by (D.9) are both suppressed.
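A minimal numerical sketch of this hybrid scheme (illustrative parameters only, far below the physical $\Omega_{e}\lambda$; real $\xi$ here, whereas the text ultimately needs $\xi=e^{\pm i\delta}$) can be written with SciPy's Bessel routines:

```python
import numpy as np
from scipy.special import jv
from scipy.integrate import quad

def lommel_W(nu, xi, arg, N=200):
    """Truncated series (D.2): W_nu(xi, lambda), with arg = Omega_e*lambda."""
    l = np.arange(N)
    return np.sum(xi ** (2 * l) * jv(2 * l + nu, arg))

def lommel_W_split(nu, xi, arg, N):
    """Eq. (D.11): an N-term head sum plus the remainder integral (real xi)."""
    head = lommel_W(nu, xi, arg, N)
    integrand = lambda y: (jv(nu + 2 * N - 1, arg * y)
                           * np.cosh(0.5 * arg * xi * (1 - y ** 2))
                           * y ** (2 * N + nu))
    tail, _ = quad(integrand, 0.0, 1.0)
    return head + xi ** (2 * N) * arg * tail

# With 2N+nu-1 = 19 > arg = 10, the remainder integrand is free of Bessel
# zeros, and the split reproduces the full series.
assert np.isclose(lommel_W_split(2, 0.7, 10.0, N=8), lommel_W(2, 0.7, 10.0))
```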
In order to estimate an optimal value of the separation parameter $N$, let us note that the lowest zero $j_{\mu}$ of $J_{\mu}(z)$ and the first maximum $j^{\prime}_{\mu}$ of $J^{\prime}_{\mu}(z)$ are both greater than $\mu$ (see Watson, §15.3(1)),
$j_{\mu}>\mu,\qquad j^{\prime}_{\mu}>\mu.$
For functions of large order, a simple estimate of the smallest zero and the smallest maximum is as follows (Watson, §15.83),
$\displaystyle j_{\mu}=\mu+1.855757\,\mu^{1/3}+O(\mu^{-1/3}),\qquad j^{\prime}_{\mu}=\mu+0.808618\,\mu^{1/3}+O(\mu^{-1/3}).$
(D.12)
In order that there be no zeros of the Bessel function within the interval of integration over $y$ in (D.11), it is necessary that the argument of the Bessel function not exceed its smallest zero or its smallest maximum, i.e., $\Omega_{e}\lambda y<\Omega_{e}\lambda\leq j_{2N+\nu-1}$ or $\Omega_{e}\lambda\leq j^{\prime}_{2N+\nu-1}$. Hence, the integral accommodates that part of the sum where the order of the Bessel functions exceeds their argument. The simplest estimate of $N=N(m,T)$ is given by the equations,
$\Omega_{e}\lambda\leq j_{2N+\nu-1}\sim 2N,\qquad\Omega_{e}\lambda\leq j^{\prime}_{2N+\nu-1}\sim 2N.$
Numerical calculations show that, for a sufficiently large upper limit $N(T)$ of the sum over $l$, the integral in Eq. (D.11) is small.
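The resulting choice of $N$ can be sketched as follows (the numbers are illustrative; the assertion checks that, once the Bessel order $2N+\nu-1$ exceeds the argument, the neglected series tail is indeed negligible):

```python
import numpy as np
from scipy.special import jv

def truncation_level(arg, nu):
    """Smallest integer N with 2N + nu - 1 >= arg,
    from the estimate Omega_e*lambda <= j_{2N+nu-1} ~ 2N."""
    return max(1, int(np.ceil((arg - nu + 1) / 2.0)))

arg, nu, xi = 30.0, 2, 0.9            # illustrative values
N = truncation_level(arg, nu)
l = np.arange(200)
terms = xi ** (2 * l) * jv(2 * l + nu, arg)

# Once the order exceeds the argument, J_mu(z) decays super-exponentially,
# so a modest safety margin beyond N leaves a negligible tail.
assert 2 * N + nu - 1 >= arg
assert np.abs(terms[N + 10:]).sum() < 1e-6
```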
A few remarks regarding the asymptotic behavior of the source functions are in order; they clarify the origin of the behavior observed in Figs. 8 and 9, which are based on the numerical calculations presented in Sec. VI.1. The period of a plasma oscillation is $\tau_{e}=2\pi/\Omega_{e}\sim 5\cdot 10^{-16}-10^{-15}$ sec and the corresponding unit of length is $\lambda_{e}=c\tau_{e}\sim 10^{-5}-10^{-4}\,{\rm cm}\approx 10^{3}-10^{4}$ Å. By the nature of our problem, we are interested in the time interval $T_{0}=2\pi/\omega_{0}\sim 10^{3}\tau_{e}$, so that $\Xi^{2}=1-2\tilde{z}_{0}/\tau_{m}$ is very close to the constant value of $1$, while $\Lambda\approx\tau_{m}=\tau-|\tilde{z}|-m_{p}T$. Let us consider the limit $\Xi=1$ as the zero-order approximation when $\tau_{m}\gg z_{0}$. Curiously enough, it coincides with the exact solution at $z_{0}=0$, which corresponds to the radiating dipole located on the interface between the vacuum and the medium. Then the functions $W_{\nu}(e^{i\delta},\Lambda)$, which, according to (D.2), obey
$(ie^{i\delta})^{\nu}W_{\nu}(e^{i\delta},\Lambda)=U_{\nu}(ie^{i\delta}\Lambda,\Lambda),$
are the Lommel functions $y=U_{\nu}(c\Lambda,\Lambda)$ of two variables with $w=c\Lambda$, where $c$ is a constant. In our case $c=ie^{i\delta}$, so that $(c+c^{-1})^{2}=4\sin^{2}\delta$ and $y=(ie^{i\delta})^{\nu}W_{\nu}(e^{i\delta},\Lambda)$. These functions are particular integrals of the equation for $y=U_{\nu}(c\Lambda,\Lambda)$ (Watson, §16.52(7)). The functions $W_{\nu}(e^{i\delta},\Lambda)$ satisfy the following equation,
$\displaystyle 4\big\{{d^{2}W_{\nu}(e^{i\delta},\Lambda)\over d\Lambda^{2}}+\sin^{2}\delta\,W_{\nu}(e^{i\delta},\Lambda)\big\}=J_{\nu-2}(\Omega_{e}\Lambda)-e^{-2i\delta}J_{\nu}(\Omega_{e}\Lambda),$
(D.13)
which obviously has, among others, periodic solutions such as $\cos(\Omega_{e}\Lambda\sin\delta)=\cos(\omega_{0}\Lambda)$.
When $\tau\gg z_{0}$ it is instructive to present the functions $s_{j}(\Lambda,1)$ in a somewhat different form,
$\displaystyle s_{1}(\Lambda,1)=2{\rm Re}\bigg\{\sum_{l^{\prime}=0}^{\infty}\sin[(2l^{\prime}+1)\delta]J_{2l^{\prime}+2}(\Omega_{e}\Lambda)\bigg\}=2{\rm Re}\bigg\{\sum_{l^{\prime}=1}^{\infty}\sin[(2l^{\prime}-1)\delta]J_{2l^{\prime}}(\Omega_{e}\Lambda)\bigg\}$
$\displaystyle=2{\rm Re}\bigg\{-\sin\delta\sum_{l=1}^{\infty}\cos 2l\delta\,J_{2l}(\Omega_{e}\Lambda)+\cos\delta\sum_{l=0}^{\infty}\sin[(2l+2)\delta]\,J_{2l+2}(\Omega_{e}\Lambda)\bigg\},$
(D.14)
The first sum in the last equation is well known: $2\sum_{l=1}^{\infty}\cos 2l\delta\,J_{2l}(\Omega_{e}\Lambda)=\cos(\Omega_{e}\Lambda\sin\delta)-J_{0}(\Omega_{e}\Lambda)$, while the second sum differs from the original one by the replacement $\sin[(2l+1)\delta]\to\sin[(2l+2)\delta]$. The function $s_{1}(\Lambda,1)$ can be cast as
$\displaystyle s_{1}(\Lambda,1)={\rm Re}\big\{\sin\delta[J_{0}(\Omega_{e}\Lambda)-\cos(\Omega_{e}\Lambda\sin\delta)]\big\}-{\rm Re}\big\{i\cos\delta\big[e^{2i\delta}W_{2}(e^{i\delta},\Lambda)-e^{-2i\delta}W_{2}(e^{-i\delta},\Lambda)\big]\big\},$
(D.15)
where $\Lambda\approx\tau_{m}$ and $\Omega_{e}\sin\delta\approx\omega_{0}$. In agreement with the numerical calculations, the source function $s_{1}(\Lambda,1)$ contains the observed sum of a slow harmonic and a precursor of the dipole radiation.
In the same way, since in Eq. (D.5) $\cos 2l\delta=\cos\delta\cos(2l+1)\delta+\sin\delta\sin(2l+1)\delta$, the source function $s_{2}(\Lambda,1)$ can be written as
$\displaystyle s_{2}(\Lambda,1)=2\underline{\omega}_{0}J_{1}(\Omega_{e}\Lambda)+{\rm Re}\bigg\{4\sin^{2}\delta\sum_{l=0}^{\infty}\sin[(2l+1)\delta]J_{2l+1}(\Omega_{e}\Lambda)+2\sin 2\delta\sum_{l=0}^{\infty}\cos[(2l+1)\delta]J_{2l+1}(\Omega_{e}\Lambda)\bigg\}$
$\displaystyle=-2\underline{\omega}_{0}J_{1}(\Omega_{e}\Lambda)+2{\rm Re}\{\sin^{2}\delta\sin(\Omega_{e}\Lambda\sin\delta)\}+2{\rm Re}\big\{\sin 2\delta\big[e^{i\delta}W_{1}(e^{i\delta},\Lambda)+e^{-i\delta}W_{1}(e^{-i\delta},\Lambda)\big]\big\},$
(D.16)
where we employed another well-known result,
$2\sum_{l=0}^{\infty}\sin[(2l+1)\delta]J_{2l+1}(\Omega_{e}\Lambda)=\sin(\Omega_{e}\Lambda\sin\delta)\approx\sin(\omega_{0}\Lambda)$.
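Both closed sums used here are instances of the Jacobi–Anger expansion, which is easy to verify numerically (shown for real $\delta$ with illustrative values; in the text $\delta$ is complex, but the identities extend by analyticity):

```python
import numpy as np
from scipy.special import jv

z, delta = 12.3, 0.4                  # illustrative real argument and angle

# Even-order sum: 2*sum_{l>=1} cos(2l*delta) J_{2l}(z) = cos(z sin delta) - J_0(z)
l = np.arange(1, 200)
even = 2 * np.sum(np.cos(2 * l * delta) * jv(2 * l, z))

# Odd-order sum: 2*sum_{l>=0} sin((2l+1)*delta) J_{2l+1}(z) = sin(z sin delta)
k = np.arange(0, 200)
odd = 2 * np.sum(np.sin((2 * k + 1) * delta) * jv(2 * k + 1, z))

assert np.isclose(even, np.cos(z * np.sin(delta)) - jv(0, z))
assert np.isclose(odd, np.sin(z * np.sin(delta)))
```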
Using the same transformations, it is straightforward to obtain,
$\displaystyle s_{3}(\Lambda,1)=2\sum_{l=0}^{\infty}{\rm Im}[\cos 2l\delta]J_{2l+1}(\Omega_{e}\Lambda)=2{\rm Im}\bigg\{\sum_{l=0}^{\infty}[\cos\delta\cos(2l+1)\delta+\sin\delta\sin(2l+1)\delta]J_{2l+1}(\Omega_{e}\Lambda)\bigg\}$
$\displaystyle={\rm Im}\{\sin\delta\sin(\Omega_{e}\Lambda\sin\delta)\}+{\rm Im}\bigg\{\cos\delta\big[e^{i\delta}W_{1}(e^{i\delta},\Lambda)+e^{-i\delta}W_{1}(e^{-i\delta},\Lambda)\big]\bigg\}$
(D.17)
and
$\displaystyle s_{4}(\Lambda,1)=-4{\rm Im}\bigg\{\sin\delta\sum_{l^{\prime}=0}^{\infty}\sin[(2l^{\prime}+1)\delta]J_{2l^{\prime}+2}(\Omega_{e}\Lambda)\bigg\}=4{\rm Im}\bigg\{\sin\delta\sum_{l=0}^{\infty}[\sin\delta\cos 2l\delta-\cos\delta\sin 2l\delta]J_{2l}(\Omega_{e}\Lambda)\bigg\}$
$\displaystyle=2{\rm Im}\big\{\sin^{2}\delta[\cos(\Omega_{e}\Lambda\sin\delta)-J_{0}(\Omega_{e}\Lambda)]\big\}+2{\rm Im}\big\{i\sin 2\delta\big[e^{2i\delta}W_{2}(e^{i\delta},\Lambda)-e^{-2i\delta}W_{2}(e^{-i\delta},\Lambda)\big]\big\}.$
(D.18)
## References
* (1) A. Sommerfeld, Ann. Physik 44, 177 (1914); L. Brillouin, Ann. Physik 44, 203 (1914).
* (2) K.E. Oughstun, Electromagnetic and Optical Pulse Propagation, v.1: Spectral Representations in Temporally Dispersive Media; v.2: Temporal Pulse Dynamics in Dispersive Attenuative Media, Springer, 2019.
* (3) E.G. Skrotskaya, A. N. Makhlin, V. A. Kashin, and G.V. Skrotsky, Zh. Eksp. Teor. Fiz. 56, 220-226 (1969) (Sov. Phys. JETP 29, 123 (1969)).
* (4) H. Jeong, A.M.C. Dawes, and D.J. Gauthier, Direct Observation of Optical Precursors in a Region of Anomalous Dispersion, Phys. Rev. Letters 96, 143901 (2006)
* (5) L. Rosenfeld, Theory of electrons, North-Holland Pub., Amsterdam, 1951.
* (6) M. Born and E. Wolf, Principles of optics, Pergamon Press, 1964.
* (7) N.G. Denisov, Zh. Eksp. Teor. Fiz. 21, 1354 (1951).
* (8) L. Brillouin, Wave propagation and group velocity, Academic Press 1960; A. Sommerfeld, Optics, Academic Press 1954.
* (9) I.N. Onishchenko, D.Yu. Sidorenko, and G.V. Sotnikov, Structure of electromagnetic field excited by an electron bunch in a semi-infinite dielectric-filled waveguide, Phys. Rev. E 65, 066501 (2002).
* (10) E. Gitterman and M. Gitterman, Transient processes for incidence of a light signal on a vacuum-medium interface, Phys. Rev. A 13, 763 (1976).
* (11) W.R. LeFew, S. Venakides, D.J. Gauthier, Accurate description of optical precursors and their relation to weak-field coherent optical transients, https://arxiv.org/0705.4238.
* (12) B.M. Bolotovsky, S.N. Stoljarov, On radiation principles in a dispersive medium, in Problems of Theoretical Physics, A memorial volume to I.E. Tamm, p. 267. Nauka, Moscow 1972 (in Russian).
* (13) G.N. Watson, A treatise on the theory of Bessel functions, Cambridge University Press, 1995.
# Don’t Sweep your Learning Rate under the Rug:
A Closer Look at Cross-modal Transfer of Pretrained Transformers
Danielle Rothermel Margaret Li Tim Rocktäschel Jakob Foerster
###### Abstract
Self-supervised pre-training of large-scale transformer models on text corpora followed by finetuning has achieved state-of-the-art results on a number of natural language processing tasks. Recently, Lu et al. (2021) claimed that _frozen_
pretrained transformers (FPTs) match or outperform training from scratch as
well as unfrozen (fine-tuned) pretrained transformers in a set of transfer
tasks to _other modalities_. In our work, we find that this result is, in
fact, an artefact of not tuning the learning rates. After carefully
redesigning the empirical setup, we find that when tuning learning rates
properly, pretrained transformers do outperform or match training from scratch
in all of our tasks, but only as long as the _entire model_ is fine-tuned.
Thus, while transfer from pre-trained language models to other modalities does
indeed provide gains and hints at exciting possibilities for future work,
properly tuning hyperparameters is important for arriving at robust findings.
Machine Learning, ICML
Figure 1: Test accuracy on the CIFAR10 LRA task across the learning rate
sweep, with error bounds across 3 seeds. The learning rate reported by Lu et
al. (2021), $1\times 10^{-3}$, is marked with a dashed red line, demonstrating
that any lower learning rate would have given inverted results on this task.
Figure 2: [Left] Test accuracy of the best learning rate for all settings and
tasks, with error bounds over 3 seeds. Exact values reported in Table 1. The
Frozen variants under-perform across all tasks, but the Unfrozen Pretrained
variant matches or exceeds all other variants. [Right] The test accuracy on
the ListOps task across the learning rate sweep, error bounds over 3 seeds.
The learning rate reported by Lu et al. (2021), $1\times 10^{-3}$, is marked
with a dashed red line, demonstrating that any lower learning rate would have
given inverted results on this task. Each learning rate evaluated between
$1\times 10^{-5}$ and $1\times 10^{-3}$ leads to a different conclusion about
the best architecture.
## 1 Introduction
Transformer-based pretrained language models (LMs) have led to a revolution in
the area of natural language processing (NLP) in recent years (Vaswani et al.,
2017), a progress fueled by larger training data sets, bigger models and
increasing computational demand. Finetuning such pretrained LMs has led to
state-of-the-art performance across a variety of NLP benchmarks (Petroni et
al., 2021; Wang et al., 2018, 2019). In these settings, the LM is pretrained
on a large collection of natural language texts such as the Google Book Corpus
(Zhu et al., 2015) or the 1B Word Benchmark (Chelba et al., 2014).
Subsequently, the model is fine-tuned on a given task of interest, e.g.
sentiment analysis (Maas et al., 2011) or text classification (Zhang et al.,
2015).
While the ability to transfer language representations, e.g. word
representations (Mikolov et al., 2013; Pennington et al., 2014) or contextual
representations (Devlin et al., 2019; Radford et al., 2019), between different
language tasks has been well studied and revealed few-shot (Brown et al.,
2020) and zero-shot (Petroni et al., 2019) abilities, recent work has focused
on the exciting possibility of transfer between different modalities (Lu et
al., 2021). Successful transfer between different modalities (e.g. natural
language to images) can be interpreted as less of a pretraining of
transferable representations but instead transfer of the general
_computational structure_ inherent in language to other tasks (Lu et al.,
2021). Indeed, Lu et al. find that finetuning only the input and output layers
of a fully trained NLP transformer model, _frozen_ pretrained transformers
(FPTs), matches or outperforms training from scratch of the same model, across
a variety of tasks in different modalities.
In this paper, we report that the performance gains of FPTs disappear under fair tuning of the learning rate, and one of the main claims of the paper no longer holds. FPTs perform better than random frozen models but are significantly worse than training from scratch on all four tasks studied. Concretely, with a lower learning rate, the ordering of the performance between the unfrozen and frozen variants is inverted for all tasks (see Figure 1). The impact of hyperparameters on the empirical results in ML papers has been the subject of intense debate. It is not uncommon for careful finetuning to change the outcome of a paper or even undo years of “progress” in the field. For example, Melis et al. (2018) found that, when fairly optimizing for hyperparameters, vanilla LSTMs match more sophisticated, recently introduced RNN variants across a variety of NLP tasks.
That said, interestingly, we find that when _not_ frozen, Transformers do provide gains through transfer from text to other modalities. In particular, in the challenging CIFAR10-LRA task (Tay et al., 2021), which consists of sequentialized CIFAR images, finetuning the entire pretrained model outperforms training from scratch by a large margin. On MNIST (LeCun & Cortes, 2010), the gap is small but significant, and on the other two tasks finetuning the pretrained model matches the performance of training from scratch. This opens up exciting avenues for future research, and we hope that our work will help the community avoid some potential pitfalls around hyperparameter tuning in the pursuit of this work, ensuring that the findings will stand the test of time.
## 2 Problem Setting and Background
The recent work from Lu et al. (2021) investigates the capability of
transformers, pretrained on natural language, to generalize to other
modalities with minimal finetuning. They limit the finetuning by freezing the
majority of the weights in the residual layers, and report that this Frozen
Pretrained Transformer (FPT) architecture achieves comparable or better
results than transformers trained from scratch across their chosen tasks.
Lu et al. consider classification tasks across a range of modalities,
including bit manipulation (Miconi et al., 2018), equation evaluation (Tay et
al., 2021; Nangia & Bowman, 2018), image classification (Krizhevsky, 2012) and
protein classification (Rao et al., 2019; Fox et al., 2013; Hou et al., 2018).
The input is chunked into tokens for processing with the GPT2 architecture,
with the tokens for the vision tasks being a flattened representation of a 4
by 4 pixel patch. The authors also include a more challenging version of the
CIFAR10 task, CIFAR-LRA (Tay et al., 2021) where the patches are a single
pixel, resulting in longer sequences.
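The patch-token construction described above can be sketched as follows (a hypothetical helper, not the authors' code; setting $p=1$ recovers the single-pixel tokens of CIFAR10-LRA):

```python
import numpy as np

def patchify(img, p=4):
    """Split an HxWxC image into non-overlapping p x p patches and flatten
    each patch into one token of dimension p*p*C."""
    H, W, C = img.shape
    x = img.reshape(H // p, p, W // p, p, C).transpose(0, 2, 1, 3, 4)
    return x.reshape(-1, p * p * C)

img = np.zeros((32, 32, 3))                    # a CIFAR10-sized image
assert patchify(img, p=4).shape == (64, 48)    # 8x8 grid of 4x4x3 patch tokens
assert patchify(img, p=1).shape == (1024, 3)   # pixel tokens, as in CIFAR10-LRA
```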
FPTs as proposed by Lu et al. have the feedforward and multi-head attention
frozen in each of the residual blocks. Only the input and output layers, layer
norm parameters, and positional embeddings are finetuned. The authors compare
this performance with a Frozen Random transformer and an Unfrozen variant. For
the Unfrozen variant they report numbers from different architectures for each
task we consider. For CIFAR10-LRA and ListOps they report numbers from a
vanilla Transformer with tuned hyperparameters as provided in Tay et al.
(2021). For MNIST and CIFAR10 they report results from GPT2, with CIFAR10
using a 3 layer model due to instability in training the full sized version.
The authors report training with a single learning rate ($1\times 10^{-3}$) for all tasks except Homology, and appear to report the results from a single seed per variant. The code released along with the paper (https://github.com/kzl/universal-computation) does not use a validation set, and the authors report test accuracies from a held-out test set.
## 3 Methods
Training of deep neural networks can be highly sensitive to the learning rates
used for optimizing the network (Choi et al., 2020). Therefore, a natural
question is to ask whether the results reported in Lu et al. (2021) have been
impacted by the choice of using a fixed learning rate. To investigate this, we
rerun the experiments of Lu et al. while broadly sweeping the learning rate.
As we will see later, any given _fixed_ learning rate greatly changes the
results.
To investigate the effects of pretraining and freezing across tasks from
different modalities, we evaluate on four of the tasks explored by Lu et al.:
ListOps, MNIST, CIFAR10 and CIFAR10-LRA. We do not replicate the Bit tasks
because the transformers were able to perfectly solve them in the original
work. The Homology task is not supported in the released codebase so it would
be more difficult to ensure an accurate reproduction of their experimental
setting. We evaluate the performance on the base GPT-2 model, at 12 layers. As in their work, we experiment with transformer models pretrained on natural language, and with freezing the self-attention and feedforward layers, finetuning only the input and output layers, the layer norm and the positional embeddings. Specifically, we consider:
Frozen Pretrained: The Frozen Pretrained Transformer introduced by Lu et al.
Frozen Random: The transformer is randomly initialized and the self-attention
and feedforward layers are frozen before finetuning.
Unfrozen Pretrained: The transformer is initialized with a pretrained language
model and finetuned without freezing any layers.
Unfrozen Random: The transformer is randomly initialized and finetuned without
freezing any layers.
For each of these settings and tasks, we train using the Adam optimizer
(Kingma & Ba, 2015) and sweep the learning rate logarithmically from $1\times
10^{-6}$ to $0.01$. We use a batch size of eight and train up to a predefined
maximum number of gradient steps (see Appendix 5 for details). We determine
the training step for early stopping based on the performance on the
validation set to obtain the best model from each run and to identify the best
learning rate across the sweep. For each setting, we repeat the experiments
with three seeds and report the mean test accuracy along with the standard
error of the mean as measured on the held-out test set.
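The selection logic described above can be sketched as follows (a schematic, not the actual training code; the sweep grid density and the validation curves below are illustrative):

```python
import numpy as np

# Learning rates swept logarithmically from 1e-6 to 1e-2 (grid density is ours).
learning_rates = np.logspace(-6, -2, num=13)

def select_best(val_curves):
    """val_curves maps lr -> list of validation accuracies, one per checkpoint.
    Early stopping picks the best checkpoint of each run; the best learning
    rate is the one whose early-stopped checkpoint scores highest."""
    stopped = {lr: (int(np.argmax(c)), float(np.max(c)))
               for lr, c in val_curves.items()}
    best_lr = max(stopped, key=lambda lr: stopped[lr][1])
    return best_lr, stopped[best_lr][0]

# Synthetic curves: the mid-range lr peaks at step 1 and then degrades.
curves = {1e-5: [0.40, 0.55, 0.52],
          1e-4: [0.50, 0.70, 0.65],
          1e-3: [0.45, 0.60, 0.58]}
assert select_best(curves) == (1e-4, 1)
```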
For the ListOps task the validation split is provided as part of the dataset
released with the Long Range Arena (Tay et al., 2021), but for the CIFAR10 and
MNIST datasets we create our own validation set. The 50K image train split in
the CIFAR10 dataset is further subdivided into 5 training batches of 10K
images, each containing a balanced number of samples from each class. We
select one of these batches to be the validation set and train on the other
40K images. For MNIST, we split the 60K image training set into a 50K image
training split and a 10K image validation set by randomly selecting 1000
images from each class.
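A per-class hold-out of this kind might look as follows (a sketch, not the authors' code; shown on MNIST-shaped toy labels):

```python
import numpy as np

def stratified_val_split(labels, per_class, seed=0):
    """Hold out `per_class` randomly chosen examples of every class for
    validation; all remaining indices form the training split."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    val = np.concatenate([
        rng.choice(np.flatnonzero(labels == c), size=per_class, replace=False)
        for c in np.unique(labels)])
    train_mask = np.ones(len(labels), dtype=bool)
    train_mask[val] = False
    return np.flatnonzero(train_mask), val

# MNIST-shaped toy labels: 10 classes, 6000 examples each.
labels = np.repeat(np.arange(10), 6000)
train_idx, val_idx = stratified_val_split(labels, per_class=1000)
assert len(train_idx) == 50_000 and len(val_idx) == 10_000
assert all((labels[val_idx] == c).sum() == 1000 for c in range(10))
```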
For the ListOps task we note that the codebase released by Lu et al. is affected by issues only recently fixed in the Long Range Arena codebase (https://github.com/google-research/long-range-arena). Specifically,
the ListOps dataset includes sequences ranging from 500 to 2000 tokens, but
the dataset tokenization utility truncated all sequences to 512 tokens. In
addition, the “close” parenthesis for the list of operations was not tokenized
due to not being an alphanumeric character. Between these two issues it was
impossible to solve the long sequences of operations provided by the dataset.
We resolved these issues in the dataset tokenization utility and adapted the
architecture choices accordingly, by increasing the context length and the
number of positions used by the transformer architecture to 2000. This change
is possible because in all settings we fine-tune the positional embeddings.
The additional tokenized character increased the token dimension for the
ListOps task to 16.
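The two fixes can be illustrated with a hypothetical ListOps tokenizer (the regex and helper names are ours, from neither codebase): the close bracket "]" becomes a token in its own right despite being non-alphanumeric, and sequences are kept up to 2000 tokens rather than truncated at 512.

```python
import re

# Operators look like "[MAX", "[MIN", ...; operands are single digits;
# "]" closes a list of operations and must survive tokenization.
TOKEN_RE = re.compile(r"\[[A-Z]+|\]|\d")

def tokenize_listops(expr, max_len=2000):
    return TOKEN_RE.findall(expr)[:max_len]

toks = tokenize_listops("[MAX 4 [MIN 2 7 ] 0 ]")
assert toks == ["[MAX", "4", "[MIN", "2", "7", "]", "0", "]"]
```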
Lu et al. report that the model capacity impacts the performance of each of
the settings, with increased model capacity hurting the performance of the
Unfrozen variants and helping the performance of the Frozen variants,
resulting in some of the Unfrozen results being reported for models with 3
layers instead of the full 12 layer GPT2. To evaluate the impact of model
capacity and to provide a datapoint between the two model sizes, we also test
using a pretrained DistilGPT2 which is a 6 layer transformer distilled from
the full sized pretrained GPT2 model.
## 4 Results and Discussion
While at a high level, our work confirms the finding from Lu et al. that
transfer from NLP tasks to other modalities is indeed possible through
finetuning, our results contradict theirs regarding which parts of the model
should be fine-tuned. Our main finding is that while the _Unfrozen_ Pretrained
variant matches or outperforms all other settings across all tasks explored,
the _Frozen_ variants often greatly lag in performance comparatively, in
direct contradiction to their findings. Table 1 compares the different settings for each of the tasks across 3 seeds. For each task, the Frozen Pretrained setting outperforms the Frozen Random setting. However, in contrast to their results, the Unfrozen variants always outperform the Frozen variants, by a large margin for all tasks except MNIST.
Table 1: Comparison of test accuracy across initialization and finetuning
methods for the GPT2 architecture.
| Variant | ListOps | MNIST | CIFAR10 | CIFAR10-LRA
---|---|---|---|---
Frozen Random | 38.9 $\pm$ 0.3 | 98.0 $\pm$ 0.0 | 61.8 $\pm$ 0.2 | 44.2 $\pm$ 0.3
Frozen Pretrained | 46.1 $\pm$ 0.3 | 98.5 $\pm$ 0.1 | 66.3 $\pm$ 0.0 | 54.7 $\pm$ 1.4
Unfrozen Random | 57.6 $\pm$ 0.8 | 98.7 $\pm$ 0.0 | 77.8 $\pm$ 0.2 | 62.0 $\pm$ 0.7
Unfrozen Pretrained | 56.3 $\pm$ 0.9 | 99.0 $\pm$ 0.0 | 77.7 $\pm$ 0.1 | 67.8 $\pm$ 0.3
The differences between these results and the ones obtained and reported by Lu
et al. in their Sections 3.1, 3.2 and 3.11 can be explained by investigating
the test accuracy across the learning rate sweep, shown in Figure 1 for the
CIFAR10-LRA task. Notably, the learning rate impacts not only the absolute
performance but also the ordering between the settings. The learning rate
reported by Lu et al., $1\times 10^{-3}$, is marked with a vertical dashed red
line. Since $1\times 10^{-3}$ is just before a precipitous drop in the
performance of the unfrozen transformers, had the authors picked a lower
learning rate they would have arrived at very different conclusions.
When repeating this analysis for the ListOps task, in Figure 2, we see an even
greater dependence on the learning rate. Each of the LRs evaluated between
$1\times 10^{-5}$ and $1\times 10^{-3}$ results in different orderings and,
hence, conclusions about the optimal architecture variant. See Appendix A for
plots for MNIST and CIFAR10 tasks which uphold these findings.
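The selection procedure implied by the sweep can be sketched in a few lines of Python. The function names and the toy accuracy dictionary below are ours, for illustration only (the numbers are loosely modelled on Table 1; the actual sweep protocol is described in Appendix E):

```python
def log_lr_sweep(low_exp=-6, high_exp=-2, per_decade=2):
    """Logarithmically spaced learning-rate grid, e.g. 1e-6 ... 1e-2."""
    n = (high_exp - low_exp) * per_decade + 1
    return [10 ** (low_exp + i / per_decade) for i in range(n)]

def best_setting(results):
    """results: {variant: {lr: test_accuracy}} -> {variant: (best_lr, best_acc)}.
    Selecting the maximum over the sweep, rather than reading off a single
    fixed learning rate, is what changes the ordering between variants."""
    return {v: max(accs.items(), key=lambda kv: kv[1])
            for v, accs in results.items()}

# Illustrative accuracies for two variants at two learning rates:
toy = {
    "frozen_pretrained":   {1e-4: 53.0, 1e-3: 54.7},
    "unfrozen_pretrained": {1e-4: 67.8, 1e-3: 62.0},
}
print(best_setting(toy))
```

At the fixed rate $1\times 10^{-3}$ the frozen variant looks competitive in this toy example, but the sweep reveals the unfrozen variant's higher peak at a lower rate.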
The key shared finding between Lu et al. and our work is the benefit of
finetuning from a model pretrained on natural language, even for tasks of
different modalities. In their Section 3.2, Lu et al. find that the Frozen
Pretrained transformer is superior to a Frozen Random variant. We verify that
pretraining improves performance across all tasks for the Frozen transformers,
and in addition find that for some tasks pretraining provides benefits for
finetuning the Unfrozen variants. For the CIFAR10-LRA task, the Unfrozen
Pretrained variant outperforms all other variants by 4.8%, and on MNIST the
Unfrozen Pretrained variant outperforms the rest by a small margin. This
benefit from pretraining on some tasks, paired with matched performance on the
rest, suggests that it may be expedient to run initial experiments in new
settings by finetuning from a natural language pretrained model. However, the
varying success of pretraining across tasks raises an open question, for
future work, about which qualities of a task lead to benefits from
pretraining.
In their Section 3.4, Lu et al. compare the computation efficiency between the
Frozen Random and Frozen Pretrained variants by reporting the number of
gradient steps to convergence. When sweeping the learning rate, we see that
the final performance of the Frozen variants is much lower than the Unfrozen
variants. Thus, we instead compare computational efficiency by reporting the
number of gradient steps to match the performance of the Frozen Pretrained
variant in Appendix B. The Frozen Random variant does not match performance in
any of the tasks, verifying the authors’ assertion that pretraining improves
the computational efficiency of the Frozen variants. However, for all tasks
the Unfrozen variants require fewer gradient steps. For all tasks except
ListOps, the Unfrozen Pretrained variant requires the fewest gradient steps, demonstrating
that in some tasks pretraining helps not only final performance but also
computational efficiency.
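The steps-to-match criterion can be sketched as follows. The helper and the toy accuracy curves are hypothetical; only the target value 66.3 is taken from Table 1 (the Frozen Pretrained CIFAR10 accuracy):

```python
def steps_to_match(curve, target):
    """curve: list of (gradient_step, test_accuracy) pairs in training order.
    Return the first step at which accuracy >= target, or None if the
    variant never matches (as for Frozen Random in Table 2)."""
    for step, acc in curve:
        if acc >= target:
            return step
    return None

# Toy accuracy curves (illustrative numbers, not measurements from the paper):
target = 66.3  # best Frozen Pretrained test accuracy on CIFAR10 (Table 1)
unfrozen_random = [(3e4, 55.0), (7e4, 70.0)]
print(steps_to_match(unfrozen_random, target))   # 70000.0
print(steps_to_match([(1e5, 60.0)], target))     # None
```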
In their Section 3.6, Lu et al. report that the Frozen Pretrained models
underfit the CIFAR10 task. We validate these findings (in Appendix C) and
argue that this underfitting is likely what leads to the comparatively poor
performance of the Frozen variants in our experiments. In addition, while the
Frozen variants underfit on the MNIST and CIFAR10 tasks, the Frozen Pretrained
variant has the largest train/test gap of all settings on ListOps,
invalidating the hypothesis that the Frozen variants always underfit the data.
In their Section 3.7, Lu et al. report that increasing the model capacity of
the Frozen Pretrained setting improved the performance on the CIFAR10 task,
suggesting an easy way to achieve performance gains. We report results from
similar experiments, with the addition of the learning rate sweep, in Appendix
D, confirming their finding of performance gains from increased capacity for
the Frozen Pretrained variant on CIFAR10. Our results also verify that these
claims hold for the CIFAR10-LRA task, but also show that increasing model
capacity also benefits the Unfrozen Pretrained variant. Interestingly, the
increase in model capacity improves the Unfrozen Pretrained setting across all
tasks whereas it only improves the Unfrozen Random setting on CIFAR10-LRA.
In their Section 3.11, Lu et al. describe the impact of unfreezing part or all
of the pretrained model. They report that unfreezing both the feedforward
layers and the attention heads is detrimental to performance. This experiment
corresponds to our Unfrozen Pretrained variant which we find outperforms all
other variants when the learning rate is properly swept, contrary to their
findings.
## 5 Conclusion
Transformer architectures pretrained on natural language texts are some of the
most successful models in NLP in recent years. Therefore, investigating how
they can best be used as a starting point for solving tasks in other
modalities is an important research direction. In our work, we show that,
across a variety of tasks, the best results are obtained when finetuning all
of the weights of a pretrained model. These results directly contradict prior
work which concluded that freezing most of the model leads to superior
performance. We demonstrate that these prior conclusions were an artefact of
using a specific, fixed learning rate, and hope that our work will help pave
the way for robust investigations into cross-modal transfer in the future.
## 6 Acknowledgements
The authors wish to thank Edward Grefenstette, Hengyuan Hu and Patrick Lewis
for many helpful discussions during the initial stages of this project.
## References
* Brown et al. (2020) Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. Language models are few-shot learners. In _Advances in Neural Information Processing Systems_ , volume 33, pp. 1877–1901. Curran Associates, Inc., 2020.
* Chelba et al. (2014) Chelba, C., Mikolov, T., Schuster, M., Ge, Q., Brants, T., Koehn, P., and Robinson, T. One billion word benchmark for measuring progress in statistical language modeling. In _Fifteenth Annual Conference of the International Speech Communication Association_ , 2014.
* Choi et al. (2020) Choi, D., Shallue, C. J., Nado, Z., Lee, J., Maddison, C. J., and Dahl, G. E. On empirical comparisons of optimizers for deep learning. In _International Conference on Learning Representations_ , 2020.
* Devlin et al. (2019) Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pp. 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423.
* Fox et al. (2013) Fox, N. K., Brenner, S. E., and Chandonia, J.-M. Scope: Structural classification of proteins–extended, integrating scop and astral data and classification of new structures. In _Nucleic Acids Research_ , pp. D304–D309, 2013. doi: 10.1093/nar/gkt1240.
  * Hou et al. (2018) Hou, J., Adhikari, B., and Cheng, J. Deepsf: Deep convolutional neural network for mapping protein sequences to folds. In _Bioinformatics_ , pp. 1295–1303, 08 2018. doi: 10.1093/bioinformatics/btx780.
* Kingma & Ba (2015) Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In _International Conference on Learning Representations_ , 2015.
* Krizhevsky (2012) Krizhevsky, A. Learning multiple layers of features from tiny images. _University of Toronto_ , 05 2012.
  * LeCun & Cortes (2010) LeCun, Y. and Cortes, C. MNIST handwritten digit database. 2010. URL http://yann.lecun.com/exdb/mnist/.
  * Lu et al. (2021) Lu, K., Grover, A., Abbeel, P., and Mordatch, I. Pretrained transformers as universal computation engines. _arXiv:2103.05247_ , 2021.
* Maas et al. (2011) Maas, A. L., Daly, R. E., Pham, P. T., Huang, D., Ng, A. Y., and Potts, C. Learning word vectors for sentiment analysis. In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_ , pp. 142–150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics.
* Melis et al. (2018) Melis, G., Dyer, C., and Blunsom, P. On the state of the art of evaluation in neural language models. In _International Conference on Learning Representations_ , 2018.
* Miconi et al. (2018) Miconi, T., Stanley, K., and Clune, J. Differentiable plasticity: training plastic neural networks with backpropagation. In Dy, J. and Krause, A. (eds.), _Proceedings of the 35th International Conference on Machine Learning_ , volume 80 of _Proceedings of Machine Learning Research_ , pp. 3559–3568. PMLR, 10–15 Jul 2018.
* Mikolov et al. (2013) Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. Distributed representations of words and phrases and their compositionality. In Burges, C. J. C., Bottou, L., Welling, M., Ghahramani, Z., and Weinberger, K. Q. (eds.), _Advances in Neural Information Processing Systems_ , volume 26. Curran Associates, Inc., 2013.
* Nangia & Bowman (2018) Nangia, N. and Bowman, S. ListOps: A diagnostic dataset for latent tree learning. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop_ , pp. 92–99, New Orleans, Louisiana, USA, June 2018. Association for Computational Linguistics. doi: 10.18653/v1/N18-4013.
* Pennington et al. (2014) Pennington, J., Socher, R., and Manning, C. GloVe: Global vectors for word representation. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pp. 1532–1543, Doha, Qatar, October 2014. Association for Computational Linguistics. doi: 10.3115/v1/D14-1162.
* Petroni et al. (2019) Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., and Miller, A. Language models as knowledge bases? In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pp. 2463–2473, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1250.
* Petroni et al. (2021) Petroni, F., Piktus, A., Fan, A., Lewis, P., Yazdani, M., De Cao, N., Thorne, J., Jernite, Y., Karpukhin, V., Maillard, J., Plachouras, V., Rocktäschel, T., and Riedel, S. KILT: a benchmark for knowledge intensive language tasks. In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pp. 2523–2544, 2021.
* Radford et al. (2019) Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. Language models are unsupervised multitask learners. _OpenAI blog_ , 1(8):9, 2019.
* Rao et al. (2019) Rao, R., Bhattacharya, N., Thomas, N., Duan, Y., Chen, P., Canny, J., Abbeel, P., and Song, Y. Evaluating protein transfer learning with tape. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), _Advances in Neural Information Processing Systems_ , volume 32. Curran Associates, Inc., 2019.
* Tay et al. (2021) Tay, Y., Dehghani, M., Abnar, S., Shen, Y., Bahri, D., Pham, P., Rao, J., Yang, L., Ruder, S., and Metzler, D. Long range arena : A benchmark for efficient transformers. In _International Conference on Learning Representations_ , 2021.
  * Vaswani et al. (2017) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. Attention is all you need. In _Proceedings of the 31st International Conference on Neural Information Processing Systems_ , NIPS’17, pp. 6000–6010, Red Hook, NY, USA, 2017. Curran Associates Inc. ISBN 9781510860964.
* Wang et al. (2018) Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In _Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP_ , pp. 353–355, Brussels, Belgium, November 2018. Association for Computational Linguistics. doi: 10.18653/v1/W18-5446.
* Wang et al. (2019) Wang, A., Pruksachatkun, Y., Nangia, N., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. Superglue: A stickier benchmark for general-purpose language understanding systems. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), _Advances in Neural Information Processing Systems_ , volume 32. Curran Associates, Inc., 2019.
* Wolf et al. (2020) Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., von Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., and Rush, A. Transformers: State-of-the-art natural language processing. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pp. 38–45, Online, October 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-demos.6.
* Zhang et al. (2015) Zhang, X., Zhao, J., and LeCun, Y. Character-level convolutional networks for text classification. In Cortes, C., Lawrence, N., Lee, D., Sugiyama, M., and Garnett, R. (eds.), _Advances in Neural Information Processing Systems_ , volume 28. Curran Associates, Inc., 2015.
* Zhu et al. (2015) Zhu, Y., Kiros, R., Zemel, R., Salakhutdinov, R., Urtasun, R., Torralba, A., and Fidler, S. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In _Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV)_ , ICCV ’15, pp. 19–27, USA, 2015. IEEE Computer Society. ISBN 9781467383912. doi: 10.1109/ICCV.2015.11.
## Appendix A Learning Rate Sweeps
We report the impact of the learning rate on the test accuracy for all tasks:
MNIST, ListOps, CIFAR10 and CIFAR10-LRA. In all cases, the learning rate
reported by Lu et al. (2021) is marked with a dashed red line and it is clear
that reducing the reported learning rate would invert the findings.
Figure 3: Test accuracy of all tasks across the learning rate sweep, error
bars across 3 seeds.
## Appendix B Computational Efficiency
We investigate the impact of pretraining and freezing on the computational
efficiency of the architectures. Because the final performance varies
dramatically between variants, we compare the number of gradient steps
necessary to reach the best test accuracy of the Frozen Pretrained variant. We
find that the Frozen Random variant is never able to match this performance.
The Unfrozen settings require fewer gradient steps to match performance across
all tasks, with pretraining improving computational efficiency in three out of
four tasks.
Table 2: The impact of pretraining on compute efficiency, comparing the number
of gradient steps, per variant, to match the reported best mean test accuracy
of the Frozen Pretrained variant.
| ListOps | MNIST | CIFAR10 | CIFAR10-LRA
---|---|---|---|---
Unfrozen Random | $1.0\times 10^{5}$ | $1.1\times 10^{5}$ | $7.0\times 10^{4}$ | $1.8\times 10^{5}$
Frozen Random | - | - | - | -
Unfrozen Pretrained | $1.9\times 10^{5}$ | $4.6\times 10^{4}$ | $4.3\times 10^{4}$ | $5.3\times 10^{4}$
Frozen Pretrained | $1.9\times 10^{5}$ | $2.4\times 10^{5}$ | $3.8\times 10^{5}$ | $2.4\times 10^{5}$
## Appendix C Underfitting versus Overfitting
We investigate the extent to which each of the variants is able to fit the
data by reporting the train and test accuracy at the threshold where we ended
training, specified in Table 5 for each task. We find that across the MNIST
and CIFAR10 tasks the Frozen variants underfit the data. However, this trend
does not hold for the ListOps task where the Frozen Pretrained setting has the
largest train/test gap.
Table 3: Train versus Test accuracy at the maximum number of gradient steps
taken for each task as listed in Table 5.
| | ListOps | MNIST | CIFAR10 | CIFAR10-LRA
---|---|---|---|---|---
Unfrozen Random | Train | 58.1 | 99.9 | 96.4 | 74.1
| Test | 57.5 | 98.6 | 77.7 | 61.7
| Diff | 0.6 | 1.3 | 18.7 | 12.4
Frozen Random | Train | 39.7 | 98.8 | 62.9 | 44.9
| Test | 39.2 | 98.0 | 61.1 | 43.8
| Diff | 0.5 | 0.8 | 1.9 | 1.1
Unfrozen Pretrained | Train | 57.8 | 100.0 | 98.7 | 91.0
| Test | 55.7 | 99.0 | 77.1 | 67.0
| Diff | 2.1 | 1.0 | 21.7 | 24.0
Frozen Pretrained | Train | 52.2 | 99.3 | 67.8 | 55.6
| Test | 46.4 | 98.5 | 65.5 | 53.4
| Diff | 5.8 | 0.8 | 2.3 | 2.2
## Appendix D Scaling Model Capacity
We investigate the impact of scaling the model capacity across three of the
tasks, MNIST, CIFAR10 and CIFAR10-LRA. We compare the DistilGPT2 model at 6
layers against the GPT2 base model at 12 layers, both provided by the
HuggingFace Transformers library (Wolf et al., 2020). Scaling has little or no
impact on MNIST and the only variant to show improvement across all tasks with
increased model capacity is the Unfrozen Pretrained setting. The Frozen
Pretrained setting also improves with model capacity on both CIFAR10 tasks.
Table 4: The impact of scaling the size of the transformers across three of
the tasks, comparing the performance of the DistilGPT2 architecture with that
of the GPT2 architecture.
| | MNIST | CIFAR10 | CIFAR10-LRA
---|---|---|---|---
Frozen Random | distilgpt2 | 98.0 $\pm$ 0.1 | 60.1 $\pm$ 0.1 | 45.0 $\pm$ 0.1
| gpt2 | 98.0 $\pm$ 0.0 | 61.8 $\pm$ 0.2 | 44.2 $\pm$ 0.3
Frozen Pretrained | distilgpt2 | 98.5 $\pm$ 0.1 | 65.2 $\pm$ 0.5 | 51.1 $\pm$ 0.4
| gpt2 | 98.5 $\pm$ 0.1 | 66.3 $\pm$ 0.0 | 54.7 $\pm$ 1.4
Unfrozen Random | distilgpt2 | 98.6 $\pm$ 0.1 | 77.5 $\pm$ 0.1 | 59.7 $\pm$ 0.2
| gpt2 | 98.7 $\pm$ 0.0 | 77.8 $\pm$ 0.2 | 62.0 $\pm$ 0.7
Unfrozen Pretrained | distilgpt2 | 98.9 $\pm$ 0.0 | 76.8 $\pm$ 0.1 | 65.5 $\pm$ 0.5
| gpt2 | 99.0 $\pm$ 0.0 | 77.7 $\pm$ 0.1 | 67.8 $\pm$ 0.3
## Appendix E Evaluation Setup
We trained each of the tasks for a logarithmic sweep of learning rates, from
$1\times 10^{-6}$ to $1\times 10^{-2}$. Each task was run for a fixed number
of gradient steps, specified in Table 5. The validation accuracy was used to
perform early stopping and to identify the model in each run to evaluate and
the test accuracy from that model is reported.
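The checkpoint-selection rule described above can be sketched as follows (the helper and the checkpoint numbers are hypothetical illustrations of the protocol, not the authors' code):

```python
def report_test_accuracy(history, max_steps):
    """history: list of (step, val_acc, test_acc) checkpoints.
    Select the checkpoint with the best validation accuracy within the step
    budget and report its test accuracy (test is never used for selection)."""
    eligible = [h for h in history if h[0] <= max_steps]
    best = max(eligible, key=lambda h: h[1])
    return best[2]

# Illustrative checkpoints: (gradient step, validation acc, test acc)
hist = [(1e5, 0.90, 0.88), (2e5, 0.95, 0.92), (3e5, 0.94, 0.93)]
print(report_test_accuracy(hist, max_steps=3e5))  # 0.92 (best val at step 2e5)
```

Note that the later checkpoint with higher test accuracy (0.93) is correctly ignored, since its validation accuracy is lower.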
Table 5: Threshold number of gradient steps used to report test accuracy
results, per task and model type.
Task | Model Type | Number Gradient
---|---|---
| | Steps
ListOps | GPT2 | $3\times 10^{5}$
MNIST | GPT2 | $4\times 10^{5}$
CIFAR10 | GPT2 | $4\times 10^{5}$
CIFAR10 LRA | GPT2 | $3\times 10^{5}$
2107.12465
|
# Varying fundamental constants principal component analysis: additional hints
about the Hubble tension
Luke Hart1 and Jens Chluba1
1Jodrell Bank Centre for Astrophysics, Alan Turing Building, University of
Manchester, Manchester M13 9PL
Email: [email protected]
(Accepted –. Received –.)
###### Abstract
Varying fundamental constants (VFC) [e.g., the fine-structure constant,
$\alpha_{\rm EM}$] can arise in numerous extended cosmologies. Through their
effect on the decoupling of baryons and photons during last scattering and
reionisation, these models can be directly constrained using measurements of
the cosmic microwave background (CMB) temperature and polarization
anisotropies. Previous investigations focused mainly on time-independent
changes to the values of fundamental constants. Here we generalize to time-
dependent variations. Instead of directly studying various VFC
parameterizations, we perform a model-independent principal component analysis
(PCA), directly using an eigenmode decomposition of the varying constant
during recombination. After developing the formalism, we use _Planck_ 2018
data to obtain new VFC limits, showing that three independent VFC modes can be
constrained at present. No indications for significant departures from the
standard model are found with _Planck_ data. Cosmic variance limited modes are
also compared and simple forecasts for The Simons Observatory are carried out,
showing that in the future improvements of the current constraints by a factor
of $\simeq 3$ can be anticipated. Our modes focus solely on VFC at redshifts
$z\geq 300$. This implies that they do not capture some of the degrees of
freedom relating to the reionisation era. This aspect provides important new
insights into the possible origin of the Hubble tension, hinting that indeed a
combined modification of recombination and reionisation physics could be at
work. An extended PCA, covering both recombination and reionisation
simultaneously, could shed more light on this question, as we emphasize here.
###### keywords:
recombination – fundamental physics – cosmology – CMB anisotropies –
statistical techniques – dimensional reduction
pubyear: 2021; pagerange: Varying fundamental constants principal component
analysis: additional hints about the Hubble tension – References
## 1 Introduction
For the last few decades, modern cosmology has been dominated by the study and
observations of the cosmic microwave background (CMB) anisotropies. The
results from _Planck_ , ACT and SPT have transformed the way we look at the
microwave sky and cosmology (Planck Collaboration et al., 2015a, 2018b; Naess
et al., 2014; Keisler et al., 2015). These experiments have followed the fine
work of their predecessors _COBE_ and _WMAP_ (Bennett et al., 1996, 2013) and
the many ground and balloon-based experiments (e.g., Netterfield et al., 2002;
Rubiño-Martin et al., 2003; Pearson et al., 2003). Currently, attention is
turning to larger ground-based telescopes such as AdvancedACTPol (Henderson et
al., 2016), POLARBEAR (Ade et al., 2014), The Simons Observatory (Ade et al.,
2019) and CMB-Stage-VI (Abazajian et al., 2016; Carlstrom et al., 2019), which
will give us further insight into the CMB anisotropies, with unparalleled
precision for the polarisation power spectra and spanning a vast range of
angular scales.
Beyond the now well-established $\Lambda$CDM model, the immense experimental
progress also enabled us to probe new physics. This includes neutrino physics
through tests of the neutrino masses and relativistic degrees of freedom
(Gratton et al., 2008; Battye & Moss, 2014; Abazajian et al., 2015; Planck
Collaboration et al., 2018b). In addition, we have been able to consider a
variety of models linked to _dark matter annihilation_ and _decay_ (Chen &
Kamionkowski, 2004; Padmanabhan & Finkbeiner, 2005; Galli et al., 2009;
Slatyer et al., 2009; Hütsi et al., 2009; Chluba, 2010; Finkbeiner et al.,
2012; Slatyer & Wu, 2017; Chen & Wang, 2021) and _primordial magnetic fields_
(Sethi & Subramanian, 2005; Shaw & Lewis, 2010; Kunze & Komatsu, 2014; Chluba
et al., 2015; Paoletti et al., 2019; Jedamzik & Saveliev, 2019). CMB
anisotropies can furthermore be used to constrain more complex dark energy
theories, including _k-essence_ , _early_ and _interacting dark energy_
(Silvestri & Trodden, 2009; Di Valentino et al., 2017; Poulin et al., 2018;
Pace et al., 2019; Lin et al., 2020). Many of these extensions have been
proposed to alleviate the _Hubble constant tension_ that is currently
dominating discussions in the field of cosmology (Poulin et al., 2019; Di
Valentino et al., 2019; Knox & Millea, 2020; Schöneberg et al., 2021).
One of the interesting extensions to the standard cosmological model is
_varying fundamental constants_ (VFC). Whilst fundamental constants are
thought to be just that — constant, there are numerous theories that motivate
changes to these parameters at early and late times. Several exhaustive
reviews have discussed the mechanisms and motivations for such variations
(Uzan, 2003, 2011; Martins, 2017). Two compelling parameters that affect
electromagnetism in the early (and late) Universe are the fine structure
constant $\alpha_{\rm EM}$ and the effective electron mass $m_{\rm e}$
(strictly speaking, we allow the effective electron mass to vary so that, more
formally, we are varying the dimensionless electron-proton mass ratio $\mu$;
see Uzan, 2011, for a clearer motivation). These fundamental
constants can change across cosmic history through modifications to the
electromagnetic Lagrangian and the introduction of additional scalar fields or
particles (Bekenstein, 1982; Sandvik et al., 2002; Mota & Barrow, 2004; Barrow
& Graham, 2013).
Many previous studies have looked at constraining the variations to
$\alpha_{\rm EM}$ using astrophysical probes such as _quasar absorption
spectra_ (Bonifacio et al., 2014; Kotuš et al., 2017; Murphy & Cooksey, 2017;
Wilczynska et al., 2020), _thermonuclear supernovae_ (Negrelli et al., 2018),
_white dwarfs_ (Hu et al., 2020), _supermassive black holes_ (Hees et al.,
2020) and the _Magellanic Clouds_ (Levshakov et al., 2019). More recently,
studies have used the detailed structure of CO clouds to constrain the
electron-proton mass ratio during the epoch of reionisation ($z\sim 6$)
(Levshakov et al., 2020). These works all show that at late times both
$\alpha_{\rm EM}$ and $m_{\rm e}$ cannot depart by more than $\simeq
0.001-0.01\%$ from their standard lab values.
Given the clear connection between the decoupling of photons and the atomic
processes during recombination, several groups have furthermore studied the
changes in the CMB anisotropies arising from VFC (Kaplinghat et al., 1999;
Battye et al., 2001; Avelino et al., 2001; Scóccola et al., 2009; Menegoni et
al., 2009, 2012; Planck Collaboration et al., 2015b). These probe VFC mainly
at recombination, complementing the aforementioned late-time constraints and
limiting possible departures from the standard values to $\lesssim 0.1\%$ at
$z\simeq 10^{3}$ (see Hart & Chluba, 2020b, for most recent constraints).
These previous studies all focused on simple constant (i.e., time-independent)
departures of $\alpha_{\rm EM}$ and $m_{\rm e}$ from their standard values.
However, this picture is likely unphysical and does not follow from the
motivations given by the aforementioned theoretical frameworks. A more general
treatment is therefore desirable.
In Hart & Chluba (2018), the detailed effects of changes to $\alpha_{\rm EM}$
and $m_{\rm e}$ on the ionisation history were explored using the
recombination code CosmoRec (Chluba & Thomas, 2011; Shaw & Chluba, 2011). In
addition, Hart & Chluba (2018) considered a phenomenological time-dependence
to the VFC using a power-law around pivot redshift $z=1100$, showing
explicitly that more than just one model-parameter can be meaningfully
constrained using _Planck_ data. However, rather than propagating a
phenomenological variation of fundamental constants, we can also use
information about the recombination era (i.e., from the CMB anisotropies) to
constrain the most likely time-dependent variations of the constants using a
dimensional reduction technique known as _principal component analysis_ (PCA).
This kind of analysis has been frequently used in cosmology (see Mortonson &
Hu, 2008; Ishida & de Souza, 2011; Finkbeiner et al., 2012; Farhang et al.,
2012, 2013; Dai et al., 2018; Campeti et al., 2019; Sharma et al., 2020, for
various examples), but so far was not applied to VFC.
In Hart & Chluba (2020a, henceforth PCA20), we developed our own PCA
implementation code in C++ known as FEARec++ as a means to constrain the
strongest principal components in the free electron fraction, $X_{\rm e}$, as
a function of redshift. There we created highly orthogonal modes
optimized specifically for the _Planck_ 2015 likelihood, extending and
improving on the pioneering works of Farhang et al. (2012, 2013). In PCA20, we
also introduced a new parameter constraint apparatus coined the _direct
projection method_ , which allows one to obtain constraints on explicit model
parameters without the need to run the full analysis.
In this paper, we revisit the formalism from PCA20 and directly apply it to
the VFC modelling we developed for CosmoRec in Hart & Chluba (2018). The basic
formalism includes the generation of Gaussian basis functions and the
propagated responses to both the opacity and the CMB power spectra (Sect. 2).
In Sect. 3, we first generate the eigenmodes for a cosmic-variance-limited
(CVL) experimental setup and investigate the structure and propagation from
these variations in $\alpha_{\rm EM}$ and $m_{\rm e}$ to the CMB anisotropies.
The _direct likelihood_ method from PCA20 is utilised in Sect. 4 to constrain
the VFC principal components attainable from the _Planck_ 2018 likelihood
using a _selective sampling_ module patched onto CosmoMC (Lewis & Bridle,
2002). The obtained eigenmodes are included in a detailed MCMC analysis in
Sect. 5, where we present the marginalised results and contours using _Planck_
2018 baseline data. We find that with _Planck_ data, three VFC modes can be
constrained. No indication for significant departures from $\Lambda$CDM are
found (e.g., Tables 4 and 5).
Next, we briefly discuss the implications of the PCA for $m_{\rm e}$
variations on the Hubble tension (Sect. 5.4). The basic idea was discussed for
the _Planck_ 2018 likelihood in Hart & Chluba (2020b, henceforth referred to
as VFC20), where it was highlighted that $m_{\rm e}$ could play an important
role through its combined effect on recombination and reionisation. Finally,
in Sect. 6 we use simulated noise curves from The _Simons Observatory_ (SO)
forecasts with the analytic PCA method to generate predicted modes for this
future CMB ground-based experiment. Our conclusions are presented in Sect. 7.
Several Appendices support our analysis and for completeness also present the
latest $X_{\rm e}$-PCA for _Planck_ 2018, with marginal changes to PCA20,
which was based on _Planck_ 2015 data.
## 2 Recap of the formalism
In this section, we briefly recapitulate the PCA method used in PCA20 and
how it is carried forward for this work. We also discuss the differences
required for the fundamental constant analysis. For our study, the PCA
methodology is fully implemented in the software package FEARec++. Following
PCA20, we generate a complete set of basis functions, $\phi_{i}(z)$, over a
large redshift space $z_{i}\in\\{300,2000\\}$. These functions can be any
given continuous shape, even periodic (as shown in Farhang et al., 2012). The
functions used in this work are Gaussians centred on $z_{i}$. It is important
that they maximise _orthogonality_ (minimising overlap between neighbouring
functions) and _completeness_ (covering the function space as fully as
possible). In this analysis, we add these basis functions to
the fine-structure constant $\alpha_{\rm EM}$ and the effective electron mass
$m_{\rm e}$, such that
$\mathcal{C}(z,z_{i})=\mathcal{C}_{0}\left(1+\frac{\Delta\mathcal{C}}{\mathcal{C}_{0}}\left(z,z_{i}\right)\right)=\mathcal{C}_{0}\left[1+\phi_{i}(z)\right]$
(1)
where the fundamental constants $\mathcal{C}\in\\{\alpha_{\rm EM},m_{\rm
e}\\}$ are perturbed by the basis function around $z_{i}$.
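Eq. (1) can be illustrated with a short numerical sketch. The Gaussian width and the 1% perturbation amplitude below are illustrative assumptions, not values from the analysis:

```python
import math

def gaussian_basis(z, z_i, sigma=50.0, amplitude=0.01):
    """phi_i(z): Gaussian bump centred on z_i (the width sigma and the 1%
    amplitude are illustrative choices, not values from the paper)."""
    return amplitude * math.exp(-0.5 * ((z - z_i) / sigma) ** 2)

def perturbed_constant(C0, z, z_i, sigma=50.0):
    """Eq. (1): C(z, z_i) = C0 * [1 + phi_i(z)]."""
    return C0 * (1.0 + gaussian_basis(z, z_i, sigma))

alpha_EM0 = 1.0 / 137.035999  # standard value of the fine-structure constant
# At the centre of the bump the constant is shifted by the full amplitude;
# far from z_i it returns to its standard value:
print(perturbed_constant(alpha_EM0, z=1100.0, z_i=1100.0))
print(perturbed_constant(alpha_EM0, z=300.0, z_i=1100.0))
```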
Once these functions are added to a recombination code such as CosmoRec, they
induce a response in the CMB temperature and polarisation spectra, $C_{\ell}$,
due to the changes during recombination. The CMB anisotropies can be
calculated using a Boltzmann code: in our case CAMB (Lewis et al., 2000). If
we measure the relative difference between the _‘new’_ power spectra with the
added basis function and the fiducial power spectra as $\partial\ln
C_{\ell}/\partial{p_{i}}\equiv 1/C_{\ell}\left(\partial
C_{\ell}/\partial{p_{i}}\right)$, we can construct a _Fisher matrix_ of these
responses by using the fiducial cosmology and a given noise specification as
the effective covariance matrix for the experiment. This Fisher machinery can
be thought of as an $n$-dimensional signal-to-noise matrix, where $p_{i}$
defines the amplitudes of the Gaussian functions centred on $z_{i}$.
In Fig. 1, we have shown how the Gaussian changes in $\mathcal{C}$ propagate
through the opacity/free electron fraction and consequently project onto the
CMB power spectra (movies of these responses will be made available online at
https://cosmologyluke.github.io). From the Gaussian responses to the opacity,
there are sweeping negative variations in $\dot{\tau}$ (for
$\delta\mathcal{C}/\mathcal{C}>0$) that arise from the $X_{\rm e}$ variations.
The superposed peaks on both types of variations (positive peaks for
$\alpha_{\rm EM}$ and negative for $m_{\rm e}$) result from the $\sigma_{\rm
T}$ changes that affect the visibility function (see Hart & Chluba, 2018, and
Sect. 3.1). These variations translate into $\delta\mathcal{D}_{\ell}$
responses that are strongest around the redshift of most probable last
scattering, $z_{*}\simeq 1100$. Given that the $X_{\rm e}$ variations for a
given $\Delta\mathcal{C}$ are also largest around this epoch, the responses in
$\mathcal{D}_{\ell}^{\rm TT}$ and $\mathcal{D}_{\ell}^{\rm EE}$ are sharply
focused around it, diminishing strongly in the tails.
Figure 1: Responses in the weighted free electron fraction $\dot{\tau}$
_(central)_ and the $\ell$-weighted CMB temperature angular power spectra
$\mathcal{D}_{\ell}$ _(bottom)_ for Gaussian basis functions (example for
$\alpha_{\rm EM}$ shown in _top_ panel). These are given for $\alpha_{\rm EM}$
_(solid)_ and $m_{\rm e}$ _(dashed)_ around the pivot redshifts
$z_{i}=\\{900,1100,1400\\}$. All curves are plotted as relative differences
against the $\Lambda$CDM model.
### 2.1 Fisher matrices
The Fisher matrix can be written as the second derivative of the log-
likelihood function, $\ln\mathcal{L}\left(\vec{p}\,|\,{\bf d},M\right)$ around
the maximum likelihood location, where $\vec{p}$ are the parameter values of a
given model $M$ and ${\bf d}$ is the data (from an experiment such as
_Planck_). However, for a simple CMB-like experiment, we can simplify this
using the following equation:
$F_{ij}=-\left<\frac{\partial^{2}\ln\mathcal{L}}{\partial p_{i}\,\partial p_{j}}\right>=\sum_{\ell=0}^{\ell_{\rm max}}\frac{\partial\vec{C}_{\ell}}{\partial p_{i}}\cdot{\bf\Sigma}_{\ell}^{-1}\cdot\frac{\partial\vec{C}_{\ell}}{\partial p_{j}},$ (2)
where the CMB power spectra vector is given by,
$\vec{C}_{\ell}=\left(C_{\ell}^{TT},C_{\ell}^{EE},C_{\ell}^{TE}\right)$ (3)
and the covariance matrix for a given multipole $\ell$ is,
${\bf\Sigma}_{\ell}=\frac{2}{2\ell+1}\begin{bmatrix}C^{\rm TT^{2}}&C^{\rm TE^{2}}&C^{\rm TT}C^{\rm TE}\\\ C^{\rm TE^{2}}&C^{\rm EE^{2}}&C^{\rm TE}C^{\rm EE}\\\ C^{\rm TT}C^{\rm TE}&C^{\rm TE}C^{\rm EE}&\frac{1}{2}\left(C^{\rm TE^{2}}+C^{\rm TT}C^{\rm EE}\right)\end{bmatrix}_{\ell}.$ (4)
Note that here we have assumed that there are no cross-multipole correlations
($\ell\times\ell^{\prime}$ terms vanish), allowing us to use the summation in
Eq. (2). Effects of detector noise have been investigated for changes to
recombination in previous works (Farhang et al., 2012). This formalism of the
Fisher matrix has been used extensively in the literature (Tegmark et al.,
1997; Verde, 2010; Finkbeiner et al., 2012). It is also important to point
out that the derivative of the log-likelihood in Eq. (2) is taken as an
ensemble average, an important detail we have assumed for our data-driven
_direct likelihood_ approach.
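Eqs. (2)-(4) can be assembled directly once the spectral derivatives have been tabulated. The following numpy sketch is ours, not the FEARec++ implementation; `dC` is assumed to hold the derivatives of the $(C_\ell^{TT}, C_\ell^{EE}, C_\ell^{TE})$ vector of Eq. (3) with respect to each amplitude $p_i$.

```python
import numpy as np

def covariance(ctt, cee, cte, ell):
    """CVL covariance matrix for (C_TT, C_EE, C_TE) at one multipole, Eq. (4)."""
    pref = 2.0 / (2 * ell + 1)
    return pref * np.array([
        [ctt**2,    cte**2,    ctt * cte],
        [cte**2,    cee**2,    cte * cee],
        [ctt * cte, cte * cee, 0.5 * (cte**2 + ctt * cee)],
    ])

def fisher_matrix(dC, ctt, cee, cte, ells):
    """F_ij = sum_ell dC_ell/dp_i . Sigma_ell^{-1} . dC_ell/dp_j, Eq. (2).
    dC has shape (n_params, n_ell, 3) for the (TT, EE, TE) vector of Eq. (3)."""
    n_params = dC.shape[0]
    F = np.zeros((n_params, n_params))
    for k, ell in enumerate(ells):
        inv = np.linalg.inv(covariance(ctt[k], cee[k], cte[k], ell))
        F += dC[:, k, :] @ inv @ dC[:, k, :].T   # accumulate over multipoles
    return F
```

By construction the result is symmetric and positive semi-definite, as required for the eigenanalysis of Sect. 2.1.2.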
#### 2.1.1 Direct likelihood method
For the _Planck_ data, the likelihood function is directly sampled along with
the same basis functions and then the Fisher matrix is calculated using the
_finite difference method_ with a second-order stencil. The likelihood is
evaluated using CosmoMC and the current _Planck_ 2018 likelihood code (Lewis &
Bridle, 2002; Planck Collaboration et al., 2019). This is an effective way of
extracting eigenmodes whilst also removing correlations induced by
cosmological and nuisance parameters associated with the _Planck_ data (the
object-oriented nature of the FEARec++ code makes it malleable to any
alternative dataset, e.g. through the addition of nuisance parameters external
to _Planck_). Details on the implementation of
FEARec++ and the validation of the direct likelihood method are explained in
PCA20. The stability analysis of the _Planck_ likelihood code required for
this method, along with a comparison to the 2015 likelihood approach, is
included in Appendix A.
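The finite-difference construction can be sketched as follows. Here `loglike` is a stand-in for the CosmoMC likelihood evaluation and the step sizes `h` play the role of the tuned steps discussed in Appendix A; this is a schematic of the second-order central stencil, not the FEARec++ code.

```python
import numpy as np

def fisher_direct(loglike, p0, h):
    """Fisher matrix F_ij = -<d^2 lnL / dp_i dp_j> via central differences.
    loglike: callable p -> ln L; p0: expansion point; h: per-parameter steps."""
    n = len(p0)
    F = np.zeros((n, n))
    f0 = loglike(p0)
    for i in range(n):
        for j in range(i, n):
            ei = np.zeros(n); ei[i] = h[i]
            ej = np.zeros(n); ej[j] = h[j]
            if i == j:
                # Second-order central stencil for the diagonal terms.
                d2 = (loglike(p0 + ei) - 2 * f0 + loglike(p0 - ei)) / h[i]**2
            else:
                # Cross stencil for the off-diagonal terms.
                d2 = (loglike(p0 + ei + ej) - loglike(p0 + ei - ej)
                      - loglike(p0 - ei + ej) + loglike(p0 - ei - ej)) / (4 * h[i] * h[j])
            F[i, j] = F[j, i] = -d2
    return F
```

For a Gaussian likelihood the stencil is exact up to floating-point error; for the noisy _Planck_ likelihood, the step sizes control the trade-off between truncation and sampling noise, which is the subject of the stability analysis in Appendix A.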
#### 2.1.2 Principal components
To generate principal components, the Fisher matrix is diagonalised and
decomposed into its eigenbasis such that,
$F_{ij}=S_{im}\cdot\mathcal{F}_{mn}\cdot S_{nj},$ (5)
where $S_{im}$ is the matrix of eigenvectors of the Fisher matrix and
$\mathcal{F}_{mn}$ is the diagonal matrix of its eigenvalues. These
eigenvectors are recast as eigenfunctions using the basis functions generated
initially. For $N$ basis functions, this can be written formally as,
$E_{m}(z)=\sum_{i=1}^{N}S_{im}\,\phi_{i}(z).$ (6)
The $E_{m}(z)$ functions are the principal components we set out to generate,
ranked by their eigenvalues (i.e., the largest eigenvalue corresponds to the
best-constrained principal component). In practice, we take the amplitudes of
the matrix elements for a given function $E_{m}$ and interpolate over them,
since the resulting smooth function is easier for the Boltzmann code to
process. All the linear-algebra stages of this implementation are handled by
the efficient Eigen3 C++ libraries (Guennebaud et al., 2010). We can use the
Cramér-Rao inequality to estimate the error of each mode such that
$\sigma_{i}\gtrsim\sqrt{\left(\mathcal{F}^{-1}\right)_{ii}}$.
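Eqs. (5)-(6) and the Cramér-Rao error estimate amount to a single symmetric eigendecomposition. A numpy sketch of our own (the paper's implementation uses Eigen3 in C++):

```python
import numpy as np

def principal_components(F, phi):
    """Diagonalise F (Eq. 5) and recast the eigenvectors as eigenfunctions
    E_m(z) = sum_i S_im phi_i(z) (Eq. 6), ranked by decreasing eigenvalue.
    phi has shape (N, n_z): the basis functions sampled on a redshift grid."""
    evals, S = np.linalg.eigh(F)          # symmetric solver, ascending order
    order = np.argsort(evals)[::-1]       # best-constrained modes first
    evals, S = evals[order], S[:, order]
    E = S.T @ phi                         # eigenfunctions, shape (N, n_z)
    sigma = 1.0 / np.sqrt(evals)          # Cramér-Rao estimate per mode
    return E, sigma
```

In the eigenbasis, $(\mathcal{F}^{-1})_{ii}=1/\lambda_i$, so the per-mode error is simply $1/\sqrt{\lambda_i}$.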
### 2.2 Using Monte Carlo simulations to constrain the modes
Once the VFC modes have been constructed, the eigenmodes generated with both
the analytic and direct methods can be incorporated into a Markov chain Monte
Carlo (MCMC) simulation using amplitudes $\mu_{i}$ such that,
$\mathcal{C}\left(z\right)=\mathcal{C}_{0}\left(1+\sum_{i}^{M}\mu_{i}E_{i}(z)\right).$
(7)
Here, $\mu_{i}$ sets the relative strength of a given mode, where lower $i$
corresponds to the _more_ constrainable components. Equally, we can set $M$ as
the truncation point in the mode hierarchy beyond which the modes carry little
relevant information. The criterion here is the error estimate from the
Cramér-Rao inequality; in this analysis (as in the previous one), the first
three eigenmodes hold the majority of the information ($\simeq 99\%$). Our
configuration for the MCMC analysis is explained
in more detail in Sect. 5, with a focus on the _direct projection method_ in
Sect. 5.3.
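Eq. (7) and the truncation of the mode hierarchy can be sketched as follows. The $99\%$ criterion is expressed here as a cumulative inverse-variance fraction, which is our reading of the text rather than the exact FEARec++ criterion.

```python
import numpy as np

def constant_history(C0, mu, E):
    """C(z) = C0 * (1 + sum_i mu_i E_i(z)), Eq. (7).
    mu: first M mode amplitudes; E: eigenmodes on a z grid, shape (M, n_z)."""
    return C0 * (1.0 + mu @ E)

def truncate_modes(sigma, frac=0.99):
    """Number of modes M holding `frac` of the total inverse-variance
    information, with sigma the Cramer-Rao errors of the ranked modes."""
    info = 1.0 / np.asarray(sigma) ** 2
    cum = np.cumsum(info) / info.sum()
    return int(np.searchsorted(cum, frac) + 1)
```

With the error hierarchies of Table 1, this kind of criterion selects the first few modes, consistent with the choice of $M=3$ made in the text.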
### 2.3 Changes in the Fisher machinery for VFC
There have been several modifications to the approach from PCA20 to optimise
the analysis for time-varying fundamental constants. Firstly, as shown in Hart
& Chluba (2018), there is a sharp cut-off in the effects on the recombination
history, $X_{\rm e}$, when $z\gtrsim 1500$. This means that the responses in
the CMB effectively vanish above this redshift. For this reason, the number
of basis functions has been reduced to $N=80$ over a narrower range for the
generation of these eigenmodes. The modes have been created up to $z_{i}=2000$
for both constants. Even though there were small effects on helium
recombination coming from variations in $\alpha_{\rm EM}$ and $m_{\rm e}$, the
larger effects from around the peak of the Thomson visibility function ($z\sim
1100$), coupled with the weaker constraining power of the CMB anisotropies on
higher-redshift recombination features, wash out these variations. Since the
higher-order principal components have much larger errors (much smaller
eigenvalues), it is unlikely that these redshifts can be constrained with CMB
data as part of a principal component analysis.
#### 2.3.1 Propagating additional contributions from VFC
When adding the basis functions to the ionization history, the small
perturbations are propagated through to the CMB anisotropies as discussed in
previous papers (Farhang et al., 2012; Finkbeiner et al., 2012; Hart & Chluba,
2020a). However, when we include fundamental constants, the effects are not
exclusive to the free electron fraction. This was clarified in previous
studies of the _Planck_ 2015 data (Planck Collaboration et al., 2015b; Hart &
Chluba, 2018). We showed that there is a non-negligible contribution from the
rescaling of the Thomson cross section ($\sigma_{\rm T}$). As a result, one
can reparametrise the fundamental constant variations arising from
recombination using the _opacity_, also known as the differential Thomson
optical depth (elsewhere in the literature, $\dot{\tau}$ denotes a derivative
with respect to conformal time; here we restrict ourselves to redshift),
$\dot{\tau}$, where in this study,
$\dot{\tau}\equiv\frac{\mathrm{d}\tau}{\mathrm{d}z}=-\frac{N_{\rm H}(z)\,X_{\rm e}(z)\,\sigma_{\rm T}(z)\,c}{H(z)\,(1+z)},$ (8)
where $N_{\rm H}$ is the total hydrogen number density and the Hubble factor,
$H(z)$, is independent of the fundamental constants (strictly speaking, there
are models where fundamental constant variations affect the background energy
density, and by proxy $H(z)$, depending on the underlying mechanism; here we
only discuss phenomenological variations in $\alpha_{\rm EM}$ and $m_{\rm e}$
arising from recombination, and for further discussion of these theories we
point to the recent review by Martins, 2017). Therefore, if we measure the responses
in $\dot{\tau}$ we will extract the full variation with respect to the
fundamental constant basis functions. The opacity variations are illustrated
in Fig. 1 for both $\alpha_{\rm EM}$ and $m_{\rm e}$. The spikes in positive
or negative directions close to $z_{i}$ arise directly from the extra
$\left[1+\phi_{i}(z)\right]^{2}$ term in $\dot{\tau}$ that is convolved with
the variation arising from the free electron fraction $X_{\rm e}$. Note that
the Thomson cross section depends on the fundamental constants discussed such
that $\sigma_{\rm T}=\left(\alpha_{\rm EM}/\alpha_{\rm
EM,0}\right)^{2}\left(m_{\rm e}/m_{\rm e,0}\right)^{-2}$.
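Eq. (8) and the $\sigma_{\rm T}$ rescaling can be written out as a short sketch of our own; units are schematic (SI value of $c$) and the inputs are placeholders, not CosmoRec outputs. For an $\alpha_{\rm EM}$ basis function $1+\phi_i(z)$, the quadratic scaling below is precisely what supplies the extra $\left[1+\phi_{i}(z)\right]^{2}$ term in $\dot{\tau}$ mentioned above.

```python
import numpy as np

def sigma_T_rescaled(sigma_T0, alpha_ratio, me_ratio):
    """Thomson cross section under varying constants:
    sigma_T = sigma_T0 * (alpha/alpha0)^2 * (me/me0)^-2."""
    return sigma_T0 * alpha_ratio**2 / me_ratio**2

def opacity(z, Xe, NH, H, sigma_T):
    """Differential Thomson optical depth, Eq. (8):
    dtau/dz = -N_H(z) X_e(z) sigma_T(z) c / (H(z) (1+z))."""
    c = 2.998e8  # m/s
    return -NH * Xe * sigma_T * c / (H * (1.0 + z))
```

A 1% Gaussian bump in $\alpha_{\rm EM}$ therefore rescales $\sigma_{\rm T}$ by $\simeq 2\%$ locally, on top of the (opposite-signed) response it induces through $X_{\rm e}$, which is the origin of the superposed peaks in Fig. 1.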
#### 2.3.2 Amplitude normalisation for the MCMC code
In PCA20, we discussed the amplitude adjustments required for different
redshifts when generating basis functions for the direct likelihood method
with CosmoMC. Once the $\alpha_{\rm EM}$ and $m_{\rm e}$ modes have been
constructed for an idealised CVL experiment, the diagonal of the Fisher matrix
serves as the weighting function for the different redshift bins used in the
direct likelihood method. Given the delicate nature of the direct-likelihood
method, this was required to ensure numerical stability when generating the
eigenmodes. Though the responses in $X_{\rm e}$ are non-trivial when a
Gaussian is added to the $\alpha_{\rm EM}$ or $m_{\rm e}$ parameter during
recombination, these _weighting template functions_ were very similar between
the two constants and helped yield numerically stable modes such as those
presented in Sect. 4.
#### 2.3.3 Differences in the marginalisation
As in the previous paper, we remove the correlations of the principal
components from both cosmological and nuisance parameters by using the
identity,
$\left({\bf F}^{-1}\right)_{pp}=\left({\bf F}_{pp}-{\bf F}_{ps}{\bf
F}_{ss}^{-1}{\bf F}_{sp}\right)^{-1},$ (9)
where ${\bf F}_{pp}$ refers to the sub-matrix of the Fisher matrix pertaining
to the _principal components_ and ${\bf F}_{ss}$ refers to the sub-matrix
pertaining to the standard parameters: cosmological and nuisance. The nuisance
parameters are a combination of foregrounds and systematics from the data
processing of the _Planck_ data; however, the majority of this machinery
remains unchanged between 2015 and 2018. The only difference as far as the
simulations are concerned is that the 2018 baseline polarisation data include
no dust-contamination amplitude parameters (referred to as $A^{{\rm
dust}EE}_{\mathcal{F}}$ in Table C1 of PCA20). This is because the cosmology
is insensitive to the dust amplitudes of these particular parameters for the
$EE$ CMB power spectra (see Sect. 3.3.2 of Planck Collaboration et al., 2019,
for more details). For full transparency, this leads to $N_{s}=25$, six fewer
parameters than in our previous analysis.
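Eq. (9) is the Schur complement of the standard-parameter block; for a symmetric Fisher matrix, ${\bf F}_{sp}={\bf F}_{ps}^{\rm T}$. A minimal numpy sketch (ours, not the FEARec++ code):

```python
import numpy as np

def marginalised_block(F, p_idx, s_idx):
    """Marginalised covariance of the principal-component block, Eq. (9):
    (F^{-1})_pp = (F_pp - F_ps F_ss^{-1} F_sp)^{-1}.
    p_idx: indices of principal components; s_idx: standard parameters."""
    Fpp = F[np.ix_(p_idx, p_idx)]
    Fps = F[np.ix_(p_idx, s_idx)]
    Fss = F[np.ix_(s_idx, s_idx)]
    # F is symmetric, so F_sp = F_ps^T.
    return np.linalg.inv(Fpp - Fps @ np.linalg.inv(Fss) @ Fps.T)
```

The result agrees with inverting the full matrix and slicing out the principal-component block, which is a convenient consistency check.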
## 3 Cosmic variance limited (CVL) experiment
Figure 2: The fine-structure constant _(top)_ and electron-mass _(bottom)_
eigenmodes for a CVL-experiment with $\ell_{\rm max}=3500$. Here the redshift
associated with the last scattering surface, $z_{*}=1088$ is shown as a dashed
curve. Note here that the resultant modes for $m_{\rm e}$ have been multiplied
by $-1$ to compare symmetry with $\alpha_{\rm EM}$.
Figure 3: The differential optical depth (opacity) $\dot{\tau}$ variations
that are caused by the fundamental constant CVL modes from Fig. 2:
$\alpha_{\rm EM}$ _(top)_ and $m_{\rm e}$ _(bottom)_. The amplitudes of these
eigenmodes are lifted directly from the predicted errors of the Fisher matrix
calculation. As in Fig. 2, the $m_{\rm e}$ case has been multiplied by $-1$
for symmetry comparisons.
Figure 4: Responses of the CMB temperature angular power spectra according to
the fine-structure constant modes _(top)_ and electron mass modes _(bottom)_
constrained by a CVL-experiment with $\ell_{\rm max}=3500$ in Fig. 2. These
eigenmodes propagate through the Thomson optical depth (Fig. 3) and then onto
the CMB anisotropies. The grey lines correspond to the peaks of the _Planck_
2018 $\Lambda$CDM fiducial power spectra.
Figure 5: Responses of the CMB $EE$ polarisation angular power spectra
according to the fine-structure constant modes _(top)_ and electron mass modes
_(bottom)_ constrained by a CVL-experiment with $\ell_{\rm max}=3500$ in Fig.
2. The grey lines correspond to the peaks of the _Planck_ 2018 $\Lambda$CDM
fiducial polarisation power spectra.
As mentioned in Sect. 2, the simplest configuration for a PCA with the CMB
anisotropies simulates a CVL-like experiment. The covariance (effective noise)
of the Fisher matrix for this setup is made up solely from the fiducial CMB
$C_{\ell}$s. Here we present the results for the eigenanalysis. For this
section, we have flipped the sign of the $m_{\rm e}$ eigenmodes so they can be
more directly compared to the $\alpha_{\rm EM}$ variations. This has also been
propagated to the responses in the CMB power spectrum, $C_{\ell}$. We ask the
reader to bear this in mind when the full parameter constraints are shown in
Sect. 5.
The modes for $\alpha_{\rm EM}$ and $m_{\rm e}$ are shown in Fig. 2. In this
figure, they have been normalised as explained in Sect. 2.1.2 and the most
likely redshift for a photon to decouple, $z_{*}=1088$, is indicated by a
vertical dotted line.
In our previous paper, we found that the first eigenmodes in the hierarchy are
most sensitive around the FWHM of the Thomson visibility function
($970<z<1170$). The relative changes of the $\alpha_{\rm EM}$ and $m_{\rm e}$
modes in that window are remarkably similar. One key difference is the
behaviour of the two sets of modes at $z>1500$, leaking into the neutral
helium recombination era. In this epoch, the fine-structure constant modes
sharply fall to $\Delta\alpha_{\rm EM}\simeq 0$, whereas the $m_{\rm e}$
components level off at a small non-zero value at these higher redshifts. This
will be discussed further in Sect. 3.1. In the second eigenmode, $E_{2}(z)$,
we can see from Fig. 2 that the wiggly shape is more pronounced for the
$\alpha_{\rm EM}$ modes at $z\simeq 1300$.
At first glance, the similarity of the modes is surprising when compared to
the independent changes to $X_{\rm e}$ arising from variations in $\alpha_{\rm
EM}$ and $m_{\rm e}$. The $m_{\rm e}$ and $\alpha_{\rm EM}$ variations affect the
$X_{\rm e}$ fraction in distinctly different ways, particularly at lower
redshifts, $z<500$. However, the most constrainable eigenmodes in the
hierarchy are all centered around the recombination redshift, $z_{*}$. At this
redshift, the variations become almost indistinguishable, save for their
relative magnitudes (encoded in their eigenvalues, see Table 1).
The propagation of these modes into the opacity (differential optical depth,
$\dot{\tau}$ as previously mentioned) as a residual
$\Delta\dot{\tau}/\dot{\tau}$ are shown in Fig. 3. The responses from the
first three modes are almost identical, mirroring the mode structures in Fig.
2; however, the third opacity residual of $m_{\rm e}$ is slightly shifted to
higher redshifts. For both constants ($\alpha_{\rm EM}$ and $m_{\rm e}$), the
opacity responses are obtained from the modes scaled by their predicted Fisher
errors from Table 1. Since these errors are larger for $E_{3}$, its amplitude
is much higher; the corresponding responses in the CMB are nevertheless
similar in magnitude.
We have included the impact on both the temperature and $E$-mode polarisation
angular power spectra in Figs. 4 and 5 (note that in this work we use
$C_{\ell}$ and $\mathcal{D}_{\ell}$ spectra interchangeably, where
$\mathcal{D}_{\ell}\equiv\ell\left(\ell+1\right)C_{\ell}/\left(2\pi\right)$;
the function $\mathcal{D}_{\ell}$ highlights the smaller-scale features of the
CMB spectra more effectively). The responses for both
constants in the $TT$ power spectra show the relative changes with the same
$\sim\pi/2$ phase change with respect to the CMB acoustic peaks (grey lines in
Fig. 4). The magnitude of these responses is propagated from the same
responses in the opacity from Fig. 3, hence the similar magnitudes in
$\partial\ln\mathcal{D}_{\ell}$. The shift is consistent with a drift to
smaller multipoles (larger scales); however, the overall downward trend of the
residual corresponds to a sharper damping of the peaks. This mimics several
aspects of the modes discussed in PCA20, notably the CMB $TT$ spectral
responses that emerge when varying $n_{\rm s}$: increasing the spectral index
sharply modifies the Silk damping envelope of the CMB power spectra. This
effect is less prominent for the second and third modes; in particular, the
second mode gives a sinusoidal-like residual in the $TT$ power spectra,
indicating a shifting of the CMB acoustic peaks in phase with the variations
from $E_{1}$. The third mode is a complicated superposition of the
two effects where the damping effect becomes less prominent at higher
multipoles. Furthermore, the sinusoidal-like pattern of the $E_{3}$ residual
gradually goes out of phase with the first two eigenmodes at
$\ell>2000$. This reflects similar mode patterns in the CMB temperature
spectra from previous PCA analyses (Planck Collaboration et al., 2015a; Hart &
Chluba, 2020a). In Fig. 5, there is a similar effect on the polarisation
responses, $\partial\ln\mathcal{D}_{\ell}^{\rm EE}$. For the $EE$ polarisation
signal, the responses in the CMB behave similarly for modes $E_{1}$ to
$E_{3}$, with a larger residual envelope size. This is due to the smaller
magnitude of the $EE$ polarisation power spectra.
### 3.1 Effects on the Thomson cross section
As mentioned in Sect. 2.3.1, the Thomson cross section needs to be rescaled
when propagating the variations of $\alpha_{\rm EM}$ and $m_{\rm e}$ to the
CMB anisotropies. The effects of including this correction on the first three
eigenmodes for a CVL experiment are shown in Fig. 6. When the
$\sigma_{\rm T}$ rescaling is removed from the analysis, the eigenmodes for
$m_{\rm e}$ and $\alpha_{\rm EM}$ almost entirely overlap. However, when the
full correction is included, the first peaked features of $E_{1}$ at $z\sim
1050$ and $z\sim 1350$ begin to shift. For $\alpha_{\rm EM}$ the features
slightly drift to higher redshifts, whereas they drift to lower redshifts for
$m_{\rm e}$. There is also a peculiar feature at $z\gtrsim 1500$ where the
$m_{\rm e}$ mode tails off less sharply. From inspecting the responses in Fig.
1, this comes from the additional negative change to $\dot{\tau}$ from the
$\sigma_{\rm T}$ scaling, prolonging the effects of the basis functions at
higher redshifts. However, these high-redshift features were also seen in the
$X_{\rm e}$ eigenmodes in PCA20, and they were greatly suppressed when real
data such as _Planck_ were included (see Figs. 3-4 of Hart & Chluba, 2020a,
for comparison).
From the Fisher matrix eigenvalues (see Sect. 2), we see that the predicted
errors for both fundamental constants change when variations in $\sigma_{\rm
T}$ are included; it is this latter case that we show in Table 1. For
$\alpha_{\rm EM}$ the errors are $\simeq 8\%$ larger when $\sigma_{\rm T}$ is
included; by comparison, for $m_{\rm e}$ they are $\simeq 20\%$ larger. Though
these modes appear more constrainable, this becomes much harder to disentangle
when we generate data-driven eigenmodes and marginalise over the
cosmological/nuisance parameters.
Figure 6: The first three eigenmodes in a CVL experiment both with (_solid_) and without (_dashed_) $\sigma_{\rm T}$ changes arising from a varying $\alpha_{\rm EM}$ and $m_{\rm e}$. The two parameters shown are coloured _purple_ and _orange_ respectively. Outside of the range shown ($800<z<1500$) the observed differences between the modes are $<0.1\%$.

Parameter | Error | CVL | _Planck_ 2018 | SO (forecast)
---|---|---|---|---
$\alpha_{\rm EM}$ | $\sigma_{1}$ | 0.00039 | 0.0060 | 0.0015
| $\sigma_{2}$ | 0.00092 | 0.012 | 0.0040
| $\sigma_{3}$ | 0.0022 | 0.036 | 0.0079
$m_{\rm e}$ | $\sigma_{1}$ | 0.00076 | 0.0089 | 0.0022
| $\sigma_{2}$ | 0.0017 | 0.017 | 0.0060
| $\sigma_{3}$ | 0.0041 | 0.055 | 0.011
Table 1: Errors calculated from the eigenvalues $\lambda_{i}$ of the principal
components $E_{i}$ of $\alpha_{\rm EM}$ and $m_{\rm e}$. These are listed for
each of the configurations discussed in Sect. 3-6. Note that these values have
been used as the amplitudes for the $\dot{\tau}$ responses and the $C_{\ell}$
responses throughout this work (e.g., Figs. 3-5).
## 4 Eigenmodes constrained with _Planck_ data
Applying the direct-likelihood method described in Sect. 2 and Appendix A, we
can use the likelihood function from the _Planck_ dataset to find the most
constrainable eigenmodes. As an additional test for the method with this
particular dataset, we also reconstructed the $X_{\rm e}$ modes for the
_Planck_ 2018 dataset, since PCA20 was limited to _Planck_ 2015. The full
comparison and details of these modes, presented in Appendix B, show that the
modes have not varied significantly between datasets. The step-size choices
and stability checks in Appendix A, coupled with these consistent results,
indicate that the direct likelihood method has been optimally configured for
the following analysis (these eigenmodes are numerically stable and converged,
yet they are not 100% optimised, as discussed in PCA20; the noisiness of the
_Planck_ likelihood function limits the precision of the constraints, and
future studies could improve these limits by modifying the likelihood-sampling
method).
In this section, we will discuss the direct-likelihood constrained eigenmodes
of $\alpha_{\rm EM}$ and $m_{\rm e}$ and the resultant responses on the CMB
power spectrum. For illustration purposes, we have multiplied the second
eigenmode $E_{2}$ for $m_{\rm e}$ by $-1$ to highlight its similar structure
to the $\alpha_{\rm EM}$ mode. This has also been propagated to the
$\mathcal{D}_{\ell}$ responses in Figs. 8-9. As with the CVL case in
Sect. 3, this flipping has not been applied to the modes going into the MCMC
solver. The constrained eigenmodes are shown in Fig. 7. The predicted errors
from the eigensolver of these modes are shown in the second column of Table 1.
The fine-structure constant components are shown in the top panel of Fig. 7.
Much like in previous studies of the $X_{\rm e}$ components, sourcing the data
directly introduces unique features to the _Planck_ modes compared to the
simple CVL case shown in Fig. 2. For the most constrained
eigenmode, $E_{1}$, the features of the mode (i.e., dip and trough) have
shifted to lower redshifts. The higher redshift peak at $z\sim 1250$ is
considerably sharper than in the CVL case. In both the second and third
eigenmodes, the number of features in each mode has increased. The second mode
for the _Planck_ modes in Fig. 7 is more reminiscent of the third mode in the
CVL case (Fig. 2). Notably the kinks in $E_{2}$ we mentioned in Sect. 3 have
been removed, where they have been replaced by another peak at $z\sim 1350$.
We know from PCA20 that such features, when they are asymmetric with peaks
around $z_{*}$, create large degeneracies with $H_{0}$ (or $\theta_{\rm MC}$).
These are not present in $E_{2}$ for the direct-method eigenmodes in Fig. 7;
therefore, the degeneracies with the expansion rate should be removed via
marginalisation. In the case of $E_{3}$, similar to PCA20, there is higher-
order fine structure at $z\sim 1250-1500$ which seems to arise from the
marginalisation step of generating these modes.
The effective electron mass modes (also in Fig. 7) exhibit a very similar
behaviour to the $\alpha_{\rm EM}$ modes when created with the direct method.
However, the departures from the CVL modes are not identical to those of the
$\alpha_{\rm EM}$ modes. The first _Planck_ eigenmode for $m_{\rm e}$ does not
show the shift in peaks that is apparent in the CVL case between the two
fundamental constants. Instead, the differences predominantly manifest in the
third eigenmode, $E_{3}$. The peaks at $1200<z<1500$ in $E_{3}$ (Fig. 7) are
dampened for the $m_{\rm e}$ case. There is also a non-zero floor in $E_{3}$
for $z>1500$; the same floor is seen at low redshift, $z<600$. Both
of these features could point to more information locked in the fourth
eigenmode $E_{4}$, however the predicted errors on these components are still
very high and therefore, this may need a more rigorous analysis in the future,
potentially when an improved likelihood approach is introduced.
Figure 7: The first three principal components for $\alpha_{\rm EM}$ _(top)_
and $m_{\rm e}$ _(bottom)_ constrained with the _Planck_ 2018 data. The
eigenmodes are all normalised as the previous modes, such that
$\int|E_{i}(z)|^{2}\,\mathrm{d}z=1$. As in Figs. 2 and 3, the maximum of the
Thomson visibility function for a $\Lambda$CDM cosmology with _Planck_ data
has been included.
### 4.1 Differences in the CMB power spectrum responses
In Fig. 8, we show the $\mathcal{D}_{\ell}^{\rm TT}$ responses according to
the first 3 eigenmodes in $\alpha_{\rm EM}$ and $m_{\rm e}$. For consistency
and completeness, we also present how these eigenmodes affect the $E$-mode
polarisation power spectra encoded in the $\mathcal{D}_{\ell}^{\rm EE}$
variations. These are shown in Fig. 9. Similar to the responses for the CVL
modes in Figs. 4-5, the acoustic peaks of each CMB power spectra (assuming
$\Lambda$CDM) have also been included as grey lines. The first $\alpha_{\rm
EM}$ and $m_{\rm e}$ modes give similar responses in $\mathcal{D}_{\ell}^{\rm
TT}$ to their CVL counterparts, as shown in Fig. 8; however, the oscillatory
behaviour (which is associated with a slight shift in the peak positions) is
far smaller for the _Planck_ eigenmodes. This pattern emerges in the
polarisation responses in Fig. 9 as well.
The second component, $E_{2}$, starts to show differences between the CVL and
_Planck_ cases for both $\alpha_{\rm EM}$ and $m_{\rm e}$. Instead of creating
an average residual of $\partial\ln\mathcal{D}_{\ell}>0$, the responses start
to shift downwards. This extra ‘damping’ could be a result of the additional
bump in $E_{2}$ for the _Planck_ modes, where the accelerated recombination
has not only knocked the response out of phase, but also moved
$\partial\ln\mathcal{D}_{\ell}^{\rm TT}$ at $\ell=2500$ from $\sim 0.02\%$ to
$\sim-0.75\%$. The change in magnitude is related to the Fisher errors from
Table 1 being propagated through the $C_{\ell}$ calculation. The third mode
$E_{3}$ has a unique impact on the residual $\mathcal{D}_{\ell}$ power
spectrum, since the majority of the expansion-rate ($\theta_{\rm MC}$)
degeneracies are removed at marginalisation. Furthermore, the changes to
$E_{3}$ lead to both changes in the temperature (Fig. 8) and polarisation
(Fig. 9) where the peaks and troughs of the responses are now anti-aligned
with the second and first modes.
Whilst the second eigenmodes for Fig. 8 are similar for $\alpha_{\rm EM}$ and
$m_{\rm e}$, the third mode $E_{3}$ is shifted higher for the effective
electron mass compared to the downward effect seen in $\alpha_{\rm EM}$. The
most likely reason for this is the large degeneracy seen between $m_{\rm e}$
and the expansion rate parameters (i.e., $\theta_{\rm MC}$ or $H_{0}$). As
shown in VFC20, there is a mild degeneracy between $\alpha_{\rm EM}$ and
$H_{0}$ but a far larger geometric degeneracy line between $m_{\rm e}$ and
$H_{0}$. This arises from a small extra tilting in the residual
$\mathcal{D}_{\ell}$ for $m_{\rm e}$. To remove these degeneracies, $m_{\rm
e}$ requires a larger degree of marginalisation (the relevant correlation
coefficients in the Fisher matrix for the perturbations and $\theta_{\rm MC}$
will be larger) and therefore the CMB responses shown for $m_{\rm e}$ in Fig.
8 are more damped. Comparable effects can be seen in the polarisation spectral
residuals in Fig. 9 for $m_{\rm e}$, however these changes are much more
subtle.
The only key difference in the polarisation power spectra, for both
$\alpha_{\rm EM}$ and $m_{\rm e}$, is the breakdown of the periodic residuals
in $E_{2}$ for $\ell<1000$. Here the repetitive wavy pattern has been replaced
by a more complex response. In Hart & Chluba (2018), the shape of the VFC
effects on Silk damping and the location of the horizon $\theta_{*}$ were
explained in detail. However, the effects on the $EE$ polarisation spectra
have not been explored in as much detail. For the second eigenmode, the
$\mathcal{D}_{\ell}^{\rm TT}$ and $\mathcal{D}_{\ell}^{\rm EE}$ responses
(shown in Figs. 8 and 9) for both $\alpha_{\rm EM}$ and $m_{\rm e}$ have broad
similarities with the constant variations discussed in our previous papers
(Hart & Chluba, 2018, and VFC20). The changes in the CMB anisotropy power
spectra are degenerate with variations expected with a change in the horizon
size $\theta_{\rm MC}$. If those oscillatory variations at $\ell<1000$ were
removed, that degeneracy may be removed from the marginalised modes. This is
clear for the $\mathcal{D}_{\ell}^{\rm EE}$ variations for $\alpha_{\rm EM}$
and $m_{\rm e}$ in Fig. 9; this degeneracy may also account for the drop in
the damping variations for $E_{2}$ in the $\mathcal{D}_{\ell}^{\rm TT}$
variations at $\ell>1500$.
Figure 8: The responses in the CMB power spectra, similar to Fig. 4, however
arising from the _Planck_ converged modes in Fig. 7. Once again, the grey
vertical lines are peaks of the CMB spectra in fiducial $\Lambda$CDM cosmology
with _Planck_ 2018 parameters. All components have once again been multiplied
by the predicted Fisher errors shown in Table 1.
Figure 9: The responses in the CMB power spectra arising from the _Planck_
converged modes in Fig. 7. Once again, the grey vertical lines are peaks of
the CMB spectra in fiducial $\Lambda$CDM cosmology with _Planck_ 2018
parameters. All components have once again been multiplied by the predicted
Fisher errors shown in Table 1.
## 5 Constraining eigenmode amplitudes using Markov Chain Monte Carlo
In this section, we present the MCMC results for the $\alpha_{\rm EM}$ and
$m_{\rm e}$ components. With the modes for the two experimental
configurations, we can constrain the amplitudes of the eigenmodes $\mu_{i}$ as
explained in Sect. 2.2. For the CVL case, the first three modes contain all but a negligible fraction of the information ($99\%$); for the _Planck_ case this fraction is slightly smaller ($97\%$). Moreover, the $i>3$ modes in the hierarchy are numerically unstable using the direct likelihood method, a problem similar to the one found in PCA20. For these reasons, we restrict ourselves to the first 3 eigenmodes.
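The truncation criterion above can be sketched numerically: given the eigenvalue spectrum of a Fisher matrix, the information captured by the first $k$ modes is the cumulative fraction of the eigenvalue sum. The eigenvalues below are hypothetical, chosen only to illustrate the computation, not taken from the paper.

```python
import numpy as np

# Hypothetical Fisher-matrix eigenvalue spectrum (illustrative values only):
# the information captured by the first k modes is the cumulative fraction
# of the eigenvalue sum.
eigvals = np.array([1.8e4, 7.0e3, 2.5e3, 1.9e2, 6.0e1])
frac = np.cumsum(eigvals) / eigvals.sum()
k99 = int(np.argmax(frac >= 0.99)) + 1   # smallest k reaching 99%
print(frac.round(4), k99)
```

With this spectrum, three modes already carry 99% of the information, mirroring the truncation adopted in the text.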
The amplitudes $\mu_{i}$ are added as free parameters into CAMB and CosmoMC,
where the latter is used to sample over parameter space with the former as the
theoretical model for calculating the resultant likelihood. For the
recombination-era $X_{\rm e}$ eigenmodes, we have already concluded that the
addition of lensing or BAO likelihood information makes little difference to
the marginalised results in PCA20. When the BAO data is added, the error
values for the amplitudes $\mu_{i}$ have negligible differences
($\sigma_{\mu}\lesssim 2\%$) and the largest drift in amplitude is $\mu_{2}$
which shifts by $\simeq 0.2\sigma$. The only other drifts occur in
$\omega_{\rm c}$ and $n_{\rm s}$ which are consistent with the $\Lambda$CDM
variations found in the _Planck_ 2018 results (Planck Collaboration et al.,
2018b). Therefore, we will focus on the addition of the _Planck_ 2018 baseline
likelihood.
In sampling for the Markov chains, we used the standard _Planck_ priors and the same Gelman-Rubin convergence criterion as in PCA20, $\mathcal{R}-1\leq 0.01$. The parameters varied as part of the MCMC are the
standard 6 parameters: $\left\\{\omega_{\rm b},\omega_{\rm c},100\,\theta_{\rm
MC},\tau,n_{\rm s},\ln\left(10^{10}A_{\rm s}\right)\right\\}$. The nuisance
parameters that were varied in the construction of the _Planck_ modes (Sect.
4) will also be sampled over using the _fast-slow_ algorithm in the _Planck_
likelihood (Lewis, 2013).
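The Gelman-Rubin criterion quoted above can be sketched in a few lines. This is a minimal single-parameter version of the statistic (between-chain vs. within-chain variance); the posterior numbers are illustrative, loosely based on $n_{\rm s}$, not taken from the actual chains.

```python
import numpy as np

def gelman_rubin(chains):
    """R - 1 for one parameter from m chains of n samples (shape (m, n))."""
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)    # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()      # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n          # pooled posterior-variance estimate
    return np.sqrt(var_hat / W) - 1.0

rng = np.random.default_rng(0)
# Four well-mixed chains sampling the same Gaussian posterior:
# R - 1 sits well below the 0.01 convergence threshold.
chains = rng.normal(0.9649, 0.0041, size=(4, 5000))
print(gelman_rubin(chains))
```

Chains whose means disagree (i.e., unconverged sampling) drive $B$ up and push $\mathcal{R}-1$ far above the threshold.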
Parameter | _Planck_ 2018 TTTEEE + low-$\ell$ | \+ 1 CVL $\alpha_{\rm EM}$ mode | \+ 2 CVL $\alpha_{\rm EM}$ modes | \+ 3 CVL $\alpha_{\rm EM}$ modes
---|---|---|---|---
$\omega_{b}$ | $0.02237\pm 0.00015$ | $0.02234\pm 0.00019$ | $0.02233\pm 0.00018$ | $0.02223\pm 0.00022$
$\omega_{c}$ | $0.1199\pm 0.0012$ | $0.1202\pm 0.0014$ | $0.1203\pm 0.0015$ | $0.1194\pm 0.0018$
$100\theta_{MC}$ | $1.04088\pm 0.00031$ | $1.04089\pm 0.00043$ | $1.04097\pm 0.00091$ | $1.0370^{+0.0038}_{-0.0051}$
$\tau$ | $0.0542\pm 0.0074$ | $0.0544^{+0.0073}_{-0.0082}$ | $0.0539\pm 0.0080$ | $0.0542\pm 0.0080$
${\rm{ln}}(10^{10}A_{s})$ | $3.044\pm 0.014$ | $3.045\pm 0.016$ | $3.044\pm 0.016$ | $3.044\pm 0.017$
$n_{s}$ | $0.9649\pm 0.0041$ | $0.9642\pm 0.0059$ | $0.9641\pm 0.0060$ | $0.9670\pm 0.0065$
$\mu_{1}\;(\alpha_{\rm EM})$ | $--$ | $-0.0008\pm 0.0074$ | $-0.0014\pm 0.0096$ | $0.017^{+0.025}_{-0.019}$
$\mu_{2}\;(\alpha_{\rm EM})$ | $--$ | $--$ | $-0.002\pm 0.014$ | $0.039^{+0.053}_{-0.041}$
$\mu_{3}\;(\alpha_{\rm EM})$ | $--$ | $--$ | $--$ | $0.062^{+0.075}_{-0.060}$
$H_{0}$ | $67.36\pm 0.54$ | $67.27\pm 0.62$ | $67.24\pm 0.61$ | $66.2^{+1.2}_{-1.5}$
$\sigma_{8}$ | $0.8107\pm 0.0059$ | $0.8116\pm 0.0076$ | $0.8120\pm 0.0084$ | $0.806\pm 0.010$
Table 2: Marginalised results at the $68\%$ confidence level for the CVL $\alpha_{\rm EM}$ modes in Fig. 2. This is combined with the _Planck_ 2018 baseline dataset (Planck Collaboration et al., 2018b) and shown against the $\Lambda$CDM standard case. The comparison of all the standard $\Lambda$CDM parameters along with two derived parameters, $H_{0}$ and $\sigma_{8}$, are shown with the $\mu_{i}$ amplitudes. The Gelman-Rubin convergence metric for all the chains that generated these results satisfies $\mathcal{R}-1<0.01$.
Figure 10: Posterior distribution contours from varying $\alpha_{\rm EM}$ modes and the most correlated standard $\Lambda$CDM parameters ($\omega_{\rm b}$ and $\theta_{\rm MC}$) with _Planck_ 2018 data. The amplitudes of the $\alpha_{\rm EM}$ principal components are denoted by $\mu_{i}^{(\alpha)}$. Bands of the standard errors coming from _Planck_ TTTEEE + low-$\ell$ 2018 data are shown as well.
Parameter | _Planck_ 2018 TTTEEE + low-$\ell$ | \+ 1 CVL $m_{\rm e}$ mode | \+ 2 CVL $m_{\rm e}$ modes | \+ 3 CVL $m_{\rm e}$ modes
---|---|---|---|---
$\omega_{b}$ | $0.02237\pm 0.00015$ | $0.02234\pm 0.00019$ | $0.02235\pm 0.00019$ | $0.02220\pm 0.00022$
$\omega_{c}$ | $0.1199\pm 0.0012$ | $0.1202\pm 0.0014$ | $0.1202\pm 0.0015$ | $0.1195\pm 0.0016$
$100\theta_{MC}$ | $1.04088\pm 0.00031$ | $1.04088\pm 0.00040$ | $1.04096\pm 0.00091$ | $1.0374^{+0.0026}_{-0.0040}$
$\tau$ | $0.0542\pm 0.0074$ | $0.0543\pm 0.0079$ | $0.0542\pm 0.0080$ | $0.0539\pm 0.0079$
${\rm{ln}}(10^{10}A_{s})$ | $3.044\pm 0.014$ | $3.044\pm 0.016$ | $3.044\pm 0.016$ | $3.043\pm 0.017$
$n_{s}$ | $0.9649\pm 0.0041$ | $0.9642\pm 0.0059$ | $0.9644\pm 0.0059$ | $0.9654\pm 0.0060$
$\mu_{1}\;(m_{\rm e})$ | $--$ | $0.001\pm 0.014$ | $0.002\pm 0.018$ | $-0.024^{+0.025}_{-0.033}$
$\mu_{2}\;(m_{\rm e})$ | $--$ | $--$ | $0.003\pm 0.027$ | $-0.069^{+0.055}_{-0.082}$
$\mu_{3}\;(m_{\rm e})$ | $--$ | $--$ | $--$ | $-0.107^{+0.078}_{-0.11}$
$H_{0}$ | $67.36\pm 0.54$ | $67.26\pm 0.61$ | $67.29\pm 0.62$ | $66.2^{+1.0}_{-1.2}$
$\sigma_{8}$ | $0.8107\pm 0.0059$ | $0.8116\pm 0.0076$ | $0.8118\pm 0.0084$ | $0.8060\pm 0.0098$
Table 3: Marginalised results at the $68\%$ confidence level for the CVL $m_{\rm e}$ modes in Fig. 2. This is combined with the _Planck_ 2018 baseline likelihood. The standard 6 cosmological parameters are shown with $H_{0}$ and $\sigma_{8}$ as well as the eigenmode amplitude parameters, $\mu_{i}$. The Gelman-Rubin convergence metric for all the chains that generated these results satisfies $\mathcal{R}-1<0.01$.
Figure 11: Posterior distribution contours from varying $m_{\rm e}$ modes and the most correlated standard $\Lambda$CDM parameters ($\omega_{\rm b}$ and $\theta_{\rm MC}$) with _Planck_ 2018 data. The amplitudes of the $m_{\rm e}$ principal components are denoted by $\mu_{i}^{(m)}$. Bands of the standard errors coming from _Planck_ TTTEEE + low-$\ell$ 2018 data are shown as well.
Figure 12: Posterior contours showing the cross-correlations of the first 3 most constrainable components for $\alpha_{\rm EM}$, denoted by $\mu_{i}^{(\alpha)}$. Here the same _Planck_ TTTEEE + low-$\ell$ baseline data was used as in Fig. 10.
Figure 13: Contours for $m_{\rm e}$ modes with the same data source as before. The degeneracies between each of the first 3 eigenmodes are highlighted here.
### 5.1 Cosmic-variance-limited modes
Initially, we added the CVL modes into CosmoRec and CosmoMC to constrain their
amplitudes $\mu_{i}$ alongside the baseline _Planck_ parameters. Since these
modes are not optimized using the full data covariance matrix (as we have done
in Sect. 5.2), one expects significant correlations with standard parameters.
In Table 2, we present the marginalised results for the $\alpha_{\rm EM}$ modes in a CVL setup. The results indeed show that introducing the first eigenmode creates a substantial degeneracy with $n_{\rm s}$ and $\omega_{\rm b}$: the error on $n_{\rm s}$ increases by $\sim 44\%$ and that on $\omega_{\rm b}$ by $\sim 25\%$. When analysing the chains, we calculate
the correlations $\rho\,(\omega_{\rm b},\mu_{1}^{(\alpha_{\rm EM})})=0.59$ and
$\rho\,(n_{\rm s},\mu_{1}^{(\alpha_{\rm EM})})=0.67$. The physical origin of this correlation is the tilted spectra in $\mathcal{D}_{\ell}^{\rm TT}$ from Fig. 4 and $\mathcal{D}_{\ell}^{\rm EE}$ from Fig. 5, reminiscent of the tilted residuals from a varied $n_{\rm s}$. Since $n_{\rm s}$ and $\omega_{\rm b}$ are themselves correlated at the $\sim 50\%$ level, this explains the joint correlations reflected in the results of Table 2.
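The chain correlations quoted here, e.g. $\rho(\omega_{\rm b},\mu_{1})=0.59$, are plain Pearson coefficients computed from the MCMC samples. A minimal toy sketch, drawing correlated posterior samples with illustrative numbers (the means, widths and target correlation are taken from the text for illustration, not from the actual chains):

```python
import numpy as np

# Draw toy posterior samples for (mu_1, omega_b) with a target correlation
# of 0.59 (illustrative numbers), then measure it as one would from chains.
rng = np.random.default_rng(1)
n = 20000
target_rho, sig_mu, sig_wb = 0.59, 0.0074, 0.00019

mu1 = rng.normal(0.0, sig_mu, n)
omega_b = 0.02234 + target_rho * sig_wb / sig_mu * mu1 \
    + np.sqrt(1.0 - target_rho**2) * rng.normal(0.0, sig_wb, n)

# Pearson correlation: exactly what rho(omega_b, mu_1) denotes in the text
rho = np.corrcoef(mu1, omega_b)[0, 1]
print(round(rho, 2))
```

The construction guarantees unit-consistent marginals and recovers the target correlation up to sampling noise.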
In Fig. 10, the posterior contours for $\alpha_{\rm EM}$ also show this correlation with $\omega_{\rm b}$ for 1 mode (_purple_) very well. The oscillatory nature of these same residuals leads to a shift in the position of the sound horizon, and so this mode is correlated with $\theta_{\rm MC}$ (more formally $\theta_{*}$), where $\rho(\theta_{\rm MC},\mu_{1}^{(\alpha_{\rm EM})})=-0.69$. Cross-correlations lead to degeneracies between $\mu_{1}$ and the derived parameters $\sigma_{8}$ and $H_{0}$. When the second and third
modes are added, the degeneracies are predominantly related to $\theta_{\rm
MC}$. This is shown in Table 2 as the error on $\theta_{\rm MC}$ increases by
a factor of two when compared to the baseline _Planck_ case once 3 modes are
added. This is reinforced by the shapes of the posterior contours in Fig. 10
between $\theta_{\rm MC}$ and $\mu_{i}$. A sharp degeneracy line between $\mu_{2}$, $\mu_{3}$ and $\theta_{\rm MC}$ ($\rho(\theta_{\rm MC},\mu_{3}^{(\alpha_{\rm EM})})=-0.99$, $\rho(\theta_{\rm MC},\mu_{2}^{(\alpha_{\rm EM})})=-0.98$) leads to large jumps in the errors of all parameters that have medium-to-large degeneracies with $\theta_{\rm MC}$
(i.e., $\omega_{\rm b}$, $\omega_{\rm c}$, $n_{\rm s}$). Throughout all this analysis, the marginalised values and errors of $\tau$ and $A_{\rm s}$ are unaffected, which is consistent since the $\partial\mathcal{D}_{\ell}^{\rm TT}$ and $\partial\mathcal{D}_{\ell}^{\rm EE}$ spectra shown in Figs. 4-5 do not resemble overall amplitude shifts, for which the residual of $\mathcal{D}_{\ell}$ would be a flat, non-zero response (changes to the CMB spectra from $\tau$ and $A_{\rm s}$ do leave oscillation-like relics, but these are far smaller in scale than the overall amplification of the power spectra). The large degeneracies present for $\alpha_{\rm EM}$ spoil the orthogonality of the modes after MCMC sampling: whilst the eigenmodes are almost perfectly orthogonal ($>99.9\%$) to each other, they accrue degeneracies through the assorted cross-correlations previously mentioned. In Fig. 12, these correlations between the amplitude parameters are clearly shown and become most apparent when $\mu_{3}$ is added to the sampling.
Similarly, the marginalised constraints for the first 3 $m_{\rm e}$ mode
amplitudes being added to the _Planck_ baseline analysis are shown in Table 3.
The first difference between the two cases from this analysis is that the
errors for $m_{\rm e}$ are twice as large as those for $\alpha_{\rm EM}$
(i.e., $\sigma_{\mu}^{m_{\rm e}}\sim 2\sigma_{\mu}^{\alpha_{\rm EM}}$). This
is fairly consistent with the relative change in magnitudes between $\alpha_{\rm EM}$ and $m_{\rm e}$ variations explored in Hart & Chluba (2018), especially since the PCA is focussed on redshifts associated almost exclusively with hydrogen and helium recombination ($300<z<3000$). The
opposite signs of the marginalised values in Table 3 compared to Table 2 are related to the flipped symmetry of the modes output by the eigensolver, as mentioned in Sect. 3 (this flipping does not affect the orthonormalisation and therefore does not affect the results, only the sign of the mean amplitude value $\bar{\mu}_{i}$).
Aside from the normalised errors on the modes, the standard parameter values
and their marginalised errors are consistently similar to the results for
$\alpha_{\rm EM}$. One peculiar difference is that the $\theta_{\rm MC}$ contour for $\alpha_{\rm EM}$ is sharper than that for $m_{\rm e}$ ($\approx 35\%$ higher). For $m_{\rm e}$ specifically, the electron mass is once again correlated with the horizon size, such that $\rho(\theta_{\rm MC},\mu_{3}^{(m_{\rm e})})=0.96$. This is inconsistent with our previous analyses, but appears to be related to the degeneracies introduced by the first 2 modes. Additional marginalisation and generation of eigenmodes with the appropriate data (as discussed for the direct likelihood method in Sect. 2.1.1, with modes shown in Sect. 4) reduce these strong correlations. In
Fig. 13, there are similar contours as in Fig. 12 for $\alpha_{\rm EM}$;
however the contours are shifted into the opposite quadrant due to the
flipping of the eigenmodes. Note that the contours broaden out as $\mu_{1}$ and $\mu_{3}$ deviate further from $\mu_{i}=0$ (the $\Lambda$CDM case) due to all 3 modes being consistently correlated with $\theta_{\rm MC}$ (similar behaviour occurs for $\alpha_{\rm EM}$ in Fig. 12, but the effect is much more subtle and reversed).
Figure 14: Most correlated likelihood contours from the $\alpha_{\rm EM}$ _Planck_ modes shown in Fig. 7. These are the same correlations as in Fig. 10, except here we remove the $\mu_{2}^{(\alpha)}$ contour row because the degeneracies for this parameter are largely derived from $\mu_{1}$ and $\mu_{3}$.
As with all the contour plots for comparing $\Lambda$CDM parameters, the
standard cosmology _Planck_ results are represented by the dark bands. Figure
15: Correlations between the $\mu_{i}$ amplitude parameters with the _Planck_
likelihood generated $\alpha_{\rm EM}$ eigenmodes. This plot is comparable to
Fig. 12 except the modes are generated with the direct likelihood method from
Sect. 2.1.1 instead. The contours are much smaller and close to circular
because the modes have been marginalised (see Sect. 2.3.3).
### 5.2 Planck-data generated modes
Following the analysis with the CVL modes, we carried out a similar approach
with the _Planck_ direct-likelihood method. The marginalised values of the
standard parameters, eigenmode amplitudes and the derived parameters $H_{0}$
and $\sigma_{8}$ are shown in Table 4 for $\alpha_{\rm EM}$, mirroring the
previous analysis. The degeneracy between $\omega_{\rm b}$ and $\mu_{1}$ is slightly reduced, with a correlation of $\rho(\omega_{\rm b},\mu_{1})=0.50$.
Parameter | _Planck_ 2018 TTTEEE + low-$\ell$ | \+ 1 _Planck_ $\alpha_{\rm EM}$ mode | \+ 2 _Planck_ $\alpha_{\rm EM}$ modes | \+ 3 _Planck_ $\alpha_{\rm EM}$ modes
---|---|---|---|---
$\omega_{b}$ | $0.02237\pm 0.00015$ | $0.02234\pm 0.00018$ | $0.02234\pm 0.00019$ | $0.02227\pm 0.00020$
$\omega_{c}$ | $0.1199\pm 0.0012$ | $0.1201\pm 0.0014$ | $0.1202\pm 0.0015$ | $0.1202\pm 0.0016$
$100\theta_{MC}$ | $1.04088\pm 0.00031$ | $1.04087\pm 0.00034$ | $1.04091\pm 0.00046$ | $1.04173\pm 0.00063$
$\tau$ | $0.0542\pm 0.0074$ | $0.0541\pm 0.0079$ | $0.0538\pm 0.0078$ | $0.0535\pm 0.0077$
${\rm{ln}}(10^{10}A_{s})$ | $3.044\pm 0.014$ | $3.044\pm 0.016$ | $3.044\pm 0.016$ | $3.037\pm 0.017$
$n_{s}$ | $0.9649\pm 0.0041$ | $0.9642\pm 0.0060$ | $0.9643\pm 0.0060$ | $0.9599\pm 0.0065$
$\mu_{1}\;(\alpha_{\rm EM})$ | $--$ | $-0.0009\pm 0.0066$ | $-0.0006\pm 0.0066$ | $-0.0035\pm 0.0069$
$\mu_{2}\;(\alpha_{\rm EM})$ | $--$ | $--$ | $0.002\pm 0.012$ | $0.001\pm 0.012$
$\mu_{3}\;(\alpha_{\rm EM})$ | $--$ | $--$ | $--$ | $0.081\pm 0.049$
$H_{0}$ | $67.36\pm 0.54$ | $67.28\pm 0.63$ | $67.26\pm 0.64$ | $67.50\pm 0.68$
$\sigma_{8}$ | $0.8107\pm 0.0059$ | $0.8112\pm 0.0078$ | $0.8116\pm 0.0082$ | $0.8084\pm 0.0086$
Table 4: Marginalised results at the $68\%$ confidence level for the
$\alpha_{\rm EM}$ modes in Fig. 7 generated with _Planck_ data using the
direct likelihood method. This is combined with the _Planck_ 2018 baseline
dataset (Planck Collaboration et al., 2018b) and shown against the
$\Lambda$CDM standard case. The comparison of all the standard $\Lambda$CDM
parameters along with two derived parameters, $H_{0}$ and $\sigma_{8}$, are
shown with the $\mu_{i}$ amplitudes. The Gelman-Rubin convergence metric for
all the chains that generated these results satisfy $\mathcal{R}-1<0.01$.
Notably, the degeneracies between the parameters no longer grow with the number of added amplitudes. The marginalisation step introduced when creating the _Planck_ modes reduces the standard parameter dependencies; consequently, the inter-mode orthogonality is largely preserved. The degeneracy between $\mu_{3}$ and $\theta_{\rm MC}$ has not been totally removed, leaving some spurious correlations. This also translates into a $\approx 25\%$ increase in the error on $H_{0}$, given that the matter density parameters have changed very little with these _Planck_ modes. However, removing $\theta_{\rm MC}$ correlations from $\alpha_{\rm EM}$ variations is in general much more difficult through marginalisation, considering that a broadband, top-hat variation in $\alpha_{\rm EM}$ will sharply correlate with $\theta_{\rm MC}$ (see Hart & Chluba, 2018, for more details). This is not as rigorously decorrelated as in PCA20; however, it is still heavily improved, since the error on $\theta_{\rm MC}$ only increases by a factor of 2 when 3 modes are included. The errors on the first two amplitudes are remarkably consistent with the forecasted _Planck_ $\alpha_{\rm EM}$ errors shown in Table 1; however, the larger $\theta_{\rm MC}$ contour leads to the $\mu_{3}$ error being $36\%$ higher than the Fisher prediction. Though the modes are strongly decorrelated, one can see the influence of the $\theta_{\rm MC}$ degeneracy lines in the $\mu_{i}\times\mu_{j}$ correlation contours shown in Fig. 15.
Parameter | _Planck_ 2018 TTTEEE + low-$\ell$ | \+ 1 _Planck_ $m_{\rm e}$ mode | \+ 2 _Planck_ $m_{\rm e}$ modes | \+ 3 _Planck_ $m_{\rm e}$ modes
---|---|---|---|---
$\omega_{b}$ | $0.02237\pm 0.00015$ | $0.02235\pm 0.00018$ | $0.02233\pm 0.00019$ | $0.02226\pm 0.00020$
$\omega_{c}$ | $0.1199\pm 0.0012$ | $0.1201\pm 0.0014$ | $0.1203\pm 0.0015$ | $0.1199\pm 0.0015$
$100\theta_{MC}$ | $1.04088\pm 0.00031$ | $1.04086\pm 0.00032$ | $1.04089\pm 0.00039$ | $1.04040\pm 0.00056$
$\tau$ | $0.0542\pm 0.0074$ | $0.0541\pm 0.0078$ | $0.0542\pm 0.0079$ | $0.0535\pm 0.0080$
${\rm{ln}}(10^{10}A_{s})$ | $3.044\pm 0.014$ | $3.044\pm 0.016$ | $3.045\pm 0.016$ | $3.038\pm 0.017$
$n_{s}$ | $0.9649\pm 0.0041$ | $0.9642\pm 0.0057$ | $0.9643\pm 0.0057$ | $0.9619\pm 0.0061$
$\mu_{1}\;(m_{\rm e})$ | $--$ | $-0.001\pm 0.012$ | $-0.001\pm 0.012$ | $-0.003\pm 0.013$
$\mu_{2}\;(m_{\rm e})$ | $--$ | $--$ | $0.004\pm 0.023$ | $0.001\pm 0.023$
$\mu_{3}\;(m_{\rm e})$ | $--$ | $--$ | $--$ | $-0.116\pm 0.092$
$H_{0}$ | $67.36\pm 0.54$ | $67.27\pm 0.61$ | $67.22\pm 0.64$ | $67.12\pm 0.63$
$\sigma_{8}$ | $0.8107\pm 0.0059$ | $0.8113\pm 0.0077$ | $0.8121\pm 0.0081$ | $0.8075\pm 0.0089$
Table 5: Marginalised results at the $68\%$ confidence level for the $m_{\rm
e}$ modes in Fig. 7 generated with _Planck_ data using the direct likelihood
method. This is combined with the _Planck_ 2018 baseline dataset (Planck
Collaboration et al., 2018b). The comparison of all the standard $\Lambda$CDM
parameters along with two derived parameters, $H_{0}$ and $\sigma_{8}$, are
shown with the $\mu_{i}$ amplitudes. The Gelman-Rubin metric for all the
chains that generated these results satisfy $\mathcal{R}-1<0.01$.
The reduction in inter-correlated degeneracies can be clearly seen in Fig. 14,
where the $\theta_{\rm MC}$ vs. $\mu_{3}$ contour is far smaller than the case
in Fig. 10 for the suboptimal CVL modes. The decorrelation between $\mu_{2}$
and the other standard parameters is evident from the lack of change in the
contours when the second mode is added. This is corroborated by the column of Table 4 where 2 modes have been added. The comparison of
correlations between $\mu_{3}$ and $\theta_{\rm MC}$ for both the CVL and
_Planck_ modes is shown in Fig. 18. The reduction in the error on $\mu_{3}$
for both fundamental constants has induced a $\sim 1-1.5\sigma$ departure from
$\Lambda$CDM for $E_{3}$ (see Tables 4 and 5). This is a small deviation; however, it further supports the proposition that constant variations of $\alpha_{\rm EM}$ and $m_{\rm e}$ do not tell the full story. Physically-
motivated models of VFC with more oscillatory behaviour could prove more
detectable in future studies. For both $\alpha_{\rm EM}$ and $m_{\rm e}$, the
wider CVL contours show huge improvements when constrained with a
marginalisation step since the errors have shrunk by more than a factor of 5.
Figure 16: Most correlated likelihood contours from the $m_{\rm e}$ _Planck_
modes shown in Fig. 7. This is the same correlations as in Fig. 11 except here
we remove the $\mu_{2}^{(m_{\rm e})}$ contour row because the degeneracies for
this parameter are mainly derived from $\mu_{1}$ and $\mu_{3}$. Dark bands
represent the $\Lambda$CDM baseline errors.
In Table 5, we present the marginalised results for the $m_{\rm e}$ _Planck_ modes previously shown in Fig. 7. As in the CVL case, the $m_{\rm e}$ results are very similar to those for $\alpha_{\rm EM}$; however, the errors are slightly larger than the eigensolver predicts (see Table 1). The correlations between standard parameters and eigenmode amplitudes are also fairly consistent with those for $\alpha_{\rm EM}$: for example, when 3 modes are included, the fine-structure correlation is $\rho(\omega_{\rm b},\mu_{1}^{(\alpha)})=0.51$, while the electron-mass correlation is $\rho(\omega_{\rm b},\mu_{1}^{(m)})=0.54$. Interestingly, the errors on the
standard parameters are modified to a smaller degree when $m_{\rm e}$ modes are added, as shown in Table 5. Referring to Table 4 and Table 5,
$\sigma(H_{0})=0.68$ when we add $\alpha_{\rm EM}$ modes whereas the same
parameter error when adding $m_{\rm e}$ modes is $\sigma(H_{0})=0.63$. Though
these are very small changes, one can see the subtle differences in the contour deformations shown in Fig. 16. The electron-mass mode amplitudes $\mu_{1}$ and $\mu_{2}$ seem thoroughly decorrelated; however, the third mode has the same problem with $\theta_{\rm MC}$, which prevents full decorrelation.
More crucially though, the error contours are much narrower than the CVL case
thanks to the marginalisation step as illustrated in Fig. 18.
Figure 17: Correlations between the $\mu_{i}$ amplitude parameters with the
_Planck_ likelihood generated $m_{\rm e}$ eigenmodes. Contours are generated
from amplitudes using the marginalised eigenmodes as with $\alpha_{\rm EM}$.
One point of contention for $m_{\rm e}$, as with the CVL case, is the size of the error bars; specifically, the fact that the $\mu_{i}$ eigenmode amplitude errors are such neat multiples of those for the $\alpha_{\rm EM}$ modes. This is most clearly shown by the comparable _Planck_ contours between $\mu_{3}$ and the horizon size $\theta_{\rm MC}$. In Hart & Chluba (2018), the error bars for $m_{\rm e}$ as a constant variation blow up due to a degeneracy with $\theta_{\rm MC}$ (already discussed in Sect. 4.1); however, there we also showed that the majority of the anomaly relies on the rescaling of the Thomson visibility function. Yet, there is also an interplay between early and late redshifts (pre- and during recombination) which cannot be accounted for if the variations $\mathcal{C}(z)$ dissipate before later times (i.e., reionisation).
We will discuss this in more detail in Sect. 5.4, however, for now we want to
draw the reader’s attention to the lack of this geometric degeneracy which is
reflected in the contours in Fig. 16 and 18. The change in the horizon scale
error from $\sigma(\theta_{\rm MC})=0.00031$ in the _Planck_ baseline case to
$\sigma(\theta_{\rm MC})=0.00063$ when 3 modes are added, is far smaller than
the $\theta_{\rm MC}$ error jump expected for constant variations of $m_{\rm
e}$. By comparison, in VFC20 the error on the horizon size grows by 2 orders of magnitude, $\sigma(\theta_{\rm MC})=0.0003\rightarrow 0.036$, when including the electron mass variations. This indicates that the $m_{\rm e}$ eigenmodes lack important contributions from $z<300$, which in VFC20 opened the geometric degeneracy line that alleviated the Hubble tension.
Figure 18: Posterior contour for $\mu_{3}$ vs. $\theta_{\rm MC}$ when 3 mode
amplitudes are added into the MCMC sampling. Here we compare $\alpha_{\rm EM}$
(_darker_) with the $m_{\rm e}$ (_lighter_) modes generated with the _Planck_
likelihood (_solid_), against the wider CVL-like mode contours from Sect. 3
(_dashed_).
### 5.3 Direct projections for $\alpha_{\rm EM}$ and $m_{\rm e}$
For eigenmodes that are sufficiently decorrelated, we can recast the variations $\Delta\mathcal{C}/\mathcal{C}(z)$ as a small deviation from the fiducial cosmology and obtain excellent first-order estimates of the parameter values and their errors, before resorting to computationally expensive MCMCs (for certain cosmological problems). The main methodology of the projection formalism has been explained in detail in PCA20; however, we will briefly elucidate some of the key aspects. For the $X_{\rm e}$ eigenmodes, this approach has already been successfully applied in CMB spectral distortion analyses (Bolliet et al., 2020).
Firstly, we can create a generic variation in the fundamental constants
$\mathcal{C}$ as a function of eigenmodes constrained in the analytic or
direct-likelihood method such that,
$\frac{\Delta\mathcal{C}}{\mathcal{C}}\left(z\right)=\sum_{i}\rho_{i}\,E_{i}(z)\,,\qquad\rho_{i}=\int\frac{\Delta\mathcal{C}}{\mathcal{C}}(z)\cdot
E_{i}(z)\,{\,\rm d}z,$ (10)
where once again, $\rho_{i}$ is the projection of the fundamental constant
eigenmodes onto the given model that one is trying to constrain. We assume we are in the perturbative regime, where the relative change in the fundamental constant is proportional to the relative change in the model amplitude, i.e., $\Delta\mathcal{C}/\mathcal{C}\propto\Delta\mathcal{A}/\mathcal{A}$, where $\mathcal{A}$ is the magnitude of a certain model variation (see PCA20 for the full derivation and motivation of this method). Since $\Delta\ln\mathcal{C}$ is proportional to the relative change in the parameter, the projection is then multiplied by the new parameter change $\Delta\mathcal{A}$ and weighted by the original change $\Delta\mathcal{A}_{0}$.
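The projection integral of Eq. (10) can be sketched directly. The three eigenmodes below are illustrative low-order polynomials made orthonormal by Gram-Schmidt, not the actual _Planck_ eigenmodes of Fig. 7; a constant variation then projects only onto the constant mode, by orthogonality.

```python
import numpy as np

# Sketch of Eq. (10): project a model variation dC/C(z) onto orthonormal
# eigenmodes E_i(z). Modes here are illustrative polynomials, NOT the
# actual Planck eigenmodes.
z = np.linspace(300.0, 2000.0, 2000)
dz = z[1] - z[0]
integ = lambda f: f.sum(axis=-1) * dz    # simple Riemann integral over z

x = 2.0 * (z - z[0]) / (z[-1] - z[0]) - 1.0
modes = []
for b in [np.ones_like(x), x.copy(), x**2]:
    for m in modes:                      # orthogonalise against earlier modes
        b = b - integ(b * m) * m
    modes.append(b / np.sqrt(integ(b * b)))
E = np.array(modes)                      # now integ(E_i * E_j) = delta_ij

# Constant variation dC/C = 0.01; projections rho_i = ∫ (dC/C) E_i dz
dC_over_C = 0.01 * np.ones_like(z)
rho = integ(dC_over_C * E)
print(rho)  # only the constant mode picks up a non-zero projection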
For illustration, the projections of the eigenmodes onto the constant-variation and power-law models are shown in Table 6. For constant variations, the fine-structure-constant modes are strongly projected onto the first two modes, with a slightly weaker contribution from the third mode. In contrast, the $m_{\rm e}$ modes project predominantly onto the second eigenmode, with roughly double the projection of the third mode; furthermore, there is negligible projection onto $\mu_{1}$. For a power-law time dependence, the projections onto the $\alpha_{\rm EM}$ and $m_{\rm e}$ modes are very similar, with the strongest projections onto $E_{1}$ and $E_{3}$; however, both have much smaller projections onto the second mode, with opposite signs ($\rho_{2}\left(p;\alpha_{\rm EM}\right)=-0.77$, $\rho_{2}\left(p;m_{\rm e}\right)=0.64$).
We apply this projection as a $\chi^{2}$ residual with the $\mu_{i}$ amplitudes, using the MCMC covariance matrices, such that,
$\chi^{2}=\left(\Delta\mathcal{A}\,\rho_{i}-\mu_{i}\right)^{\rm
T}\Sigma_{ij}^{-1}\left(\Delta\mathcal{A}\,\rho_{j}-\mu_{j}\right).$ (11)
By solving for the minimum of this fit, one can find the allowed best-fit value, the accuracy of which is determined by the strength of the marginalisation used when generating the modes. If the goodness-of-fit is treated as a likelihood, i.e., Eq. (11) is transformed via $\mathcal{L}=\exp\left(-\chi^{2}/2\right)$, the $68\%$ and $95\%$ percentile errors can be found for the given parameter change $\mathcal{A}$ as well.
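Since Eq. (11) is quadratic in $\Delta\mathcal{A}$, the minimisation is a one-parameter weighted least-squares problem with a closed-form solution. The numbers below are illustrative (projections resembling Table 6, a toy diagonal covariance built from the amplitude errors of Table 4), not the actual MCMC covariance.

```python
import numpy as np

# Sketch of Eq. (11): best-fit amplitude change dA minimising
# chi^2 = (dA*rho - mu)^T Sigma^{-1} (dA*rho - mu).
rho = np.array([-2.01, 2.00, 1.37])        # projections (cf. Table 6, alpha_EM)
mu = np.array([-0.0035, 0.001, 0.081])     # sampled amplitudes (cf. Table 4)
Sigma = np.diag([0.0069, 0.012, 0.049])**2 # toy diagonal covariance

Sinv = np.linalg.inv(Sigma)
# d(chi^2)/d(dA) = 0  ->  dA = (rho^T Sinv mu) / (rho^T Sinv rho)
dA = (rho @ Sinv @ mu) / (rho @ Sinv @ rho)
# Gaussian 68% error from the curvature of chi^2 (Delta chi^2 = 1 at 1 sigma)
sigma_dA = 1.0 / np.sqrt(rho @ Sinv @ rho)
print(dA, sigma_dA)
```

For a non-diagonal MCMC covariance the same expressions hold; only $\Sigma^{-1}$ changes.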
Model | Parameter ($\mathcal{A}$) | $\Delta\mathcal{A}$ | ${\rho}_{1}$ | ${\rho}_{2}$ | ${\rho}_{3}$
---|---|---|---|---|---
Constant | $\alpha_{\rm EM}/\alpha_{\rm EM,0}$ | $0.01$ | $-2.01$ | $2.00$ | $1.37$
Power law | $p$ | $0.001$ | $2.58$ | $-0.77$ | $3.65$
Model | Parameter ($\mathcal{A}$) | $\Delta\mathcal{A}$ | ${\rho}_{1}$ | ${\rho}_{2}$ | ${\rho}_{3}$
Constant | $m_{\rm e}/m_{\rm e,0}$ | $0.01$ | $0.13$ | $-3.65$ | $-1.63$
Power law | $p$ | $0.001$ | $2.80$ | $0.64$ | $2.71$
Table 6: Projections $\rho_{i}$ of fundamental constant $\mathcal{C}$ changes onto the _Planck_ eigenmodes, alongside the parameter step size $\Delta\mathcal{A}$ used. Each value $\rho_{i}$ measures how strongly the physical variations from the constant and power-law models project onto our _Planck_ modes in Fig. 7.
In Table 7, the projection results for $\alpha_{\rm EM}$ and $m_{\rm e}$ are
compared against the simple constant relation and the phenomenological power
law from our previous work. It is important to point out that the MCMC parameter values are obtained from a best-fit algorithm, since this gives the clearest comparison with the minimisation of the $\chi^{2}$. From the
models given, the constant $\alpha_{\rm EM}$ results constrained by the
projections method are exceptionally close to the MCMC sampled value. This is
also the case for both the phenomenological power law cases where the
difference is $\sim 0.25\sigma$ for the $\alpha_{\rm EM}$ modes and $\lesssim
0.1\sigma$ for the $m_{\rm e}$ modes. The power-law variations were even tested with an added curvature term, $p\rightarrow p+\beta\ln\left[(1+z)/1100\right]$; however, the results were compatible to within $0.003\sigma$. Though the curvature term has higher physical consistency ($\mathcal{C}(z\rightarrow 0)\rightarrow 0$), it has a very small impact around the Thomson visibility function, where the recombination constraints are most sensitive. All these model projections lie far closer to the MCMC results than in PCA20, owing to the simple functional form these variations of $\mathcal{C}$ take compared to the free-electron fraction $X_{\rm e}$ and its complicated parameter dependencies.
The key difference is the constant $m_{\rm e}$ projection. As documented in
VFC20, the electron mass exposes a huge degeneracy line with $H_{0}$. This
leads to the $m_{\rm e}$ MCMC error being much higher than $\alpha_{\rm EM}$
(as shown in Table 7). However, here the projection error is an order of
magnitude smaller and the central value of $m_{\rm e}/m_{\rm e,0}$ is far
closer to unity. This suggests that something is amiss with the projection
method for $m_{\rm e}$, as we discuss now.
Fine structure constant variations ($\alpha_{\rm EM}$)
---
Model | $\alpha_{\rm EM}(z)$ | MCMC | Projections
Constant | $\alpha_{\rm EM}/\alpha_{\rm EM,0}$ | $1.0010\pm 0.0024$ | $1.0012\pm 0.0029$
Power law | $\left(\frac{1+z}{1100}\right)^{\,p}$ | $-0.0002\pm 0.0024$ | $0.0004\pm 0.0024$
Effective electron mass variations ($m_{\rm e}$)
Model | $m_{\rm e}(z)$ | MCMC | Projections
Constant | $m_{\rm e}/m_{\rm e,0}$ | $0.844\pm 0.059$ | $0.9995\pm 0.0062$
Power law | $\left(\frac{1+z}{1100}\right)^{\,p}$ | $-0.0006\pm 0.0042$ | $-0.0009\pm 0.0045$
Table 7: Projection results using the first 3 eigenmodes for $\alpha_{\rm EM}$
and $m_{\rm e}$. The constant and power law models have been compared against
the MCMC results from CosmoMC constrained with _Planck_ in Hart & Chluba
(2018). The values from the MCMC are the best fit values along with the
marginalised errors (since the projection module finds the best fit point in
$\mathcal{A}$).
### 5.4 Problems with the $m_{\rm e}$ projection and new hints about the
origin of the Hubble tension
As we have shown in Sect. 5.3, the direct projection method works quite well
for simple models of fundamental constant variations except for the constant
variations in $m_{\rm e}$, which seem to be giving much smaller errors than
the direct constraints (e.g., Hart & Chluba, 2018). What is going on here?
As already mentioned in passing, this may be related to how the modes are constructed in our VFC PCA. In contrast to the direct constraints, our modes, operating at $300<z<2000$, do not capture any changes to the Thomson visibility caused by VFC at $z<300$ and during reionisation.
Figure 19: The visibility function $g(z)=\dot{\tau}e^{-\tau(z)}$ made from the
opacity $\dot{\tau}$ discussed in Sect. 3. The changes from increasing $m_{\rm
e}$ by $10\%$ (_orange_) are shown against the $\Lambda$CDM scenario
(_purple_). This includes both the Thomson visibility function at
recombination (_solid_) and the residual bump of opacity coming from the
reionisation epoch (_dashed_). Reionisation visibility has been multiplied by
a factor of $1000$.
In Fig. 19, we present the visibility function variations when we include a constant variation of $m_{\rm e}/m_{\rm e,0}=1.1$. While the right panel focuses on the effect during recombination, the left panel looks at the variations arising in the reionisation era. The latter arise purely from the rescaling of the Thomson cross section, which modifies the electron opacity during the reionisation epoch and is not covered by our VFC modes. Note that the visibility from reionisation had to be amplified by a factor of $1000$ due to the smaller opacity during this era.
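The construction of $g(z)=\dot{\tau}e^{-\tau}$ in Fig. 19 can be sketched directly from its definition. The Gaussian opacity rate below is a stand-in for the real recombination history of Sect. 3; only the build-up of $g$ from the opacity and the $\sigma_{\rm T}\propto 1/m_{\rm e}^{2}$ rescaling are the point.

```python
import numpy as np

# Toy visibility function g(z) = tau_dot * exp(-tau), as in Fig. 19.
z = np.linspace(0.0, 3000.0, 3001)

def visibility(dtau_dz, z):
    """g(z) from an opacity rate dtau/dz, integrating tau outward from z=0."""
    tau = np.concatenate(
        ([0.0], np.cumsum(0.5 * (dtau_dz[1:] + dtau_dz[:-1]) * np.diff(z)))
    )
    return dtau_dz * np.exp(-tau), tau

# Stand-in opacity rate peaking near last scattering (z ~ 1100):
dtau_dz = 0.02 * np.exp(-0.5 * ((z - 1100.0) / 80.0) ** 2)
g, tau = visibility(dtau_dz, z)

# A constant m_e increase enters through sigma_T ~ 1/m_e^2, i.e. a uniform
# rescaling of the opacity rate (here m_e/m_e0 = 1.1):
g_rescaled, _ = visibility(dtau_dz / 1.1 ** 2, z)
```

Because $e^{-\tau}$ falls with increasing $z$, the peak of $g$ sits slightly below the peak of $\dot{\tau}$; the rescaled case illustrates how a uniform $\sigma_{\rm T}$ change reshapes the visibility without any change to the recombination dynamics themselves.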
From our previous study, we also know that the geometric degeneracy between $m_{\rm e}$ and $H_{0}$ lies in the additional $\sigma_{\rm T}$ rescaling. Without this rescaling, the errors on $m_{\rm e}$ shrink by a factor of $\simeq 5$, providing much less freedom along the geometric degeneracy line (Hart & Chluba, 2018). In VFC20, we further tested the dependence on various likelihood configurations and found no clear data source (e.g., high-$\ell$ likelihood, lensing) that causes the large degeneracies with $H_{0}$. The exception was a $\sim 30\%$ reduction in the tension between $m_{\rm e}$ and $H_{0}$, which could be accounted for by the changes to the $\tau$ value from the new polarisation $EE$ likelihood (testing for this was done with CosmoMC using the _Planck_ 2015 optical depth prior: $\tau=0.079\pm 0.017$). The potency of the polarisation likelihood and its proximity to the Hubble tension have also been alluded to in Addison (2021).
It therefore seems crucial to account for the full time dependence of the
electron mass variability as a function of redshift, including later eras such
as the dark ages, reionisation and the 21cm regime. This is also corroborated
when adding BAO and SN data (using Riess et al., 2019) in the MCMC analysis,
where one finds a small drift in the parameter values consistent with the
likelihood combinations in $\Lambda$CDM, but negligible changes in the error
bars ($\lesssim 0.01\sigma$). If we recreate the results shown in Table 7 using the _Planck_ + BAO MCMC results instead of the _Planck_ likelihood alone, we find the projection result $m_{\rm e}/m_{\rm e,0}=1.0013\pm 0.0060$. This departs slightly from the direct MCMC result when we added BAO in VFC20 ($m_{\rm e}/m_{\rm e,0}=1.0078\pm 0.0067$); however, there are still traces of the geometric degeneracy here, albeit with much smaller variations. Nevertheless, the change in the projection result when the BAO likelihood is included goes in the right direction and is far closer to the direct result ($\simeq 1\sigma$ deviation).
Our discussion shows that a coordination between the dark ages, reionisation
and recombination could be vital for modelling the ionisation history in the
future. The link between these epochs and the consequences of a universal
ionisation history solution in the atomic physics regime may aid other
theories. For example, one of the compelling solutions to the Hubble tension
involves a baryon clumping effect that arises from primordial magnetic fields
(Jedamzik & Pogosian, 2020). However, another study has suggested that small-
scale CMB data may contradict this with current Atacama Cosmology Telescope
(ACT) data (Thiele et al., 2021). If the baryonic clumping model were refined over a wider range of epochs, small changes during the dark ages and reionisation epoch might resolve the consistency problems with the small-scale CMB data. Additional baryon clumping causes an acceleration of recombination
at last scattering. Conversely, star formation may be enhanced in denser
regions during reionisation, causing an earlier onset and longer duration of
reionisation. To leading order, this is consistent with the modifications that constant variations of $m_{\rm e}$ introduce, suggesting that a similarly orchestrated change in the ionisation history may be at work behind the scenes.
A complementary study using a joint PCA for recombination and reionisation seems highly motivated by these findings. Specifically for the VFC model and the time-dependent eigenmodes, the inclusion of reionisation effects is beyond the scope of this paper. Due to the logarithmic relationship between
conformal time and redshift ($\delta\ln\eta\simeq-\delta\ln z$), the
implementation of basis functions into the reionisation era is more
complicated for perturbations at redshifts $15\leq z_{i}\leq 300$. Variable
basis functions across the same grid could help but have been shown to create
significant correlations between eigenmodes when recasting the Fisher elements
back into the $X_{\rm e}$-basis (see PCA20 for more details). Returning to
$X_{\rm e}$-modes for both recombination and reionisation may be beneficial,
combining the methods of PCA20 and Mortonson & Hu (2008), for instance. These
explorations are left for a future study but most likely are at the core of
the issues seen here.
## 6 Forecasting eigenmodes with Simons Observatory noise curves
To conclude our study, we turn to one of the interesting future projects that
will involve CMB observables: The Simons Observatory (SO) (Ade et al., 2019).
For this analysis, we make use of the publicly available so_noise_models code
to generate an added noise term in the Fisher matrices for our analytic model.
In this section, we add in the Simons noise curves with a $40\%$ sky coverage
according to their preliminary forecasts as well as an adjusted $\ell$ range
for their Large Aperture Telescope (LAT) where $40\leq\ell\leq 8000$. For
simplicity, we will be considering the standard-ILC noise that emerges from
the SO forecasts and not any of the deprojection effects from foregrounds
(e.g., dust and synchrotron constrained-ILCs). The noise curves for the LAT
agree with the forecasting paper for SO (Ade et al., 2019). The machinery is
modified such that $C_{\ell}^{X}\rightarrow C_{\ell}^{X}+N_{\ell}^{X}$ within
the covariance matrix which changes the effective signal-to-noise of certain
responses in the Fisher matrix (see Tegmark et al., 1997, for more details).
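A minimal single-spectrum (TT-only) version of this modification can be written down directly. The power-law $C_{\ell}$, the white $N_{\ell}$ and the log-amplitude response below are placeholders, not the real _Planck_/SO spectra; only the $C_{\ell}\rightarrow C_{\ell}+N_{\ell}$ substitution in the Tegmark et al. (1997) covariance is the point.

```python
import numpy as np

# Toy single-spectrum Fisher element with the noise entering through the
# covariance: Cov_l = 2 (C_l + N_l)^2 / ((2l+1) f_sky).
def fisher_diag(ell, C, dC_dp, N, f_sky=0.4):
    """Diagonal Fisher element for one parameter from a single spectrum."""
    cov = 2.0 * (C + N) ** 2 / ((2.0 * ell + 1.0) * f_sky)
    return np.sum(dC_dp ** 2 / cov)

ell = np.arange(40, 8001)              # SO LAT multipole range used here
C = 1.0e3 / ell.astype(float) ** 2     # placeholder signal spectrum
dC_dp = C                              # response to a log-amplitude parameter
N_white = 1.0e-4 * np.ones_like(C)     # placeholder white-noise spectrum

F_cvl = fisher_diag(ell, C, dC_dp, np.zeros_like(C))  # N_l = 0 limit
F_so = fisher_diag(ell, C, dC_dp, N_white)            # noise-suppressed case
```

Adding $N_{\ell}$ can only inflate the covariance, so every Fisher element (and hence the constraining power on each mode) is reduced relative to the noiseless limit, most strongly at the multipoles where $N_{\ell}\gtrsim C_{\ell}$.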
In Fig. 20, the SO modes are shown together with the CVL modes from Sect. 3. For both $\alpha_{\rm EM}$ (_top_, Fig. 20) and $m_{\rm e}$ (_bottom_, Fig. 20), the biggest impact lies in the second and third eigenmodes. The kink that was present in $E_{2}$ for the CVL case (Sect. 3) at $z\sim 1300$ has been removed for the SO modes. Specifically, whatever relic in the $\mathcal{D}_{\ell}$ power spectra caused the modes to quickly truncate to 0 around $z\sim 1500$ has been removed in favour of a smoother exponential-like decline. The rapid drop in the first mode, $E_{1}$, has also been subtly changed by the introduction of the SO noise. The third mode in both cases also exhibits an amplitude reduction for the peaks at $z<1200$, but a larger amount of mode information (larger area) for the final peak at $z\sim 1300$. This trading of feature information between the modes could also explain the removal of the kink in $E_{2}$.
The forecasted errors for the SO modes are shown in Table 1 alongside the
previously discussed CVL and _Planck_ results. The predicted errors for a PCA
with SO parameters sit nicely between the marginalised _Planck_ components and
the idealised CVL setup. However, the reduction of these errors with data from
SO heavily relies on careful treatment of the likelihood, covariances and the
data in general. Furthermore, the wider implications of the SO forecasted
modes are that recombination eigenmodes (such as the fundamental constant
eigenmodes) are approaching a critical constraining limit. Applications of the
PCA method to _Planck_ data have made huge strides in constraining these
eigenmodes shown in previous studies where these were forecasted (Farhang et
al., 2012). However, since $\sigma(E_{i}^{\rm P18})\simeq 4\sigma(E_{i}^{\rm
SO})$, we are very close to the CVL floor of constraining power available for
this kind of analysis.
Feeding the estimated errors for SO into the projection machinery discussed in Sect. 5.3, the predicted error for a constant variation of the fine-structure constant is $\sigma_{\rm SO}\left(\alpha_{\rm EM}\right)\simeq 0.0001$. Similarly, the predicted error for the electron mass is $\sigma_{\rm SO}\left(m_{\rm e}\right)\simeq 0.0003$. Although in both cases this is
$\simeq 20$ times smaller than the _Planck_ projection result, this neglects
the marginalisation over standard parameters. The _CORE_ collaboration forecasted the detectability of $\alpha_{\rm EM}$ for several experimental configurations, and their baseline, CORE-M5, was similar to SO for high-$\ell$
noise (see Di Valentino et al., 2016, for more details). The constrained error
they found for this setup was $\simeq 0.0007$ which is $\sim 5$ times larger
than our expected error; however, they anticipate the degeneracy between
$\alpha_{\rm EM}$ and $H_{0}$ to start being a limiting factor.
The projection error for SO can be refined with more detailed forecasting
models in the future (including foregrounds and other experimental effects).
However, our estimates are already promising and cement the idea that SO could
be approaching the limit of exceptional constraints for $\alpha_{\rm EM}$ and
$m_{\rm e}$ in upcoming analyses. In particular, sensitivity to time-dependent
variations may be possible, and a VFC PCA provides a robust framework for
mapping various VFC model parameters to direct observables, separating the
model-dependent interpretation step from the data analysis.
### 6.1 Responses in the CMB power spectra with added noise suppression
As an additional illustration, the differences in the responses for these eigenmodes are shown in Fig. 21 with and without SO noise weighting. For this example, we focus only on the $\alpha_{\rm EM}$ principal components; there is a truncation at $\ell\gtrsim 4000$ due to the forecasted noise from SO. This suppression sets in much lower, at $\ell\simeq 3000$, for the polarisation spectra, suggesting the noise has a sharper cutoff than in the
temperature spectra. Interestingly, the third eigenmode $E_{3}$ exhibits an exponentially large response in both panels at high-$\ell$ (_orange, dashed_); however, the noise helps to damp these residuals away.
For the other two eigenmodes, the noise changes a non-zero floor in
$\Delta\mathcal{D}_{\ell}^{\rm TT}$ into an exponential decay at
higher-$\ell$, showing the influence of the SO noise. This is particularly
noticeable for the $\mathcal{D}_{\ell}^{\rm EE}$ residual of $E_{2}$, where the out-of-phase responses in the CMB $EE$ peaks are much smaller in the SO
case. The discrepancies in the modes between CVL and SO arise from the added
$\ell$ modes before the noise kicks in at $\ell\lesssim 3000$. Even a few
hundred extra modes compared to the CVL case can make the difference seen.
Furthermore, the interplay of the different suppression levels between
temperature and polarisation will influence the Fisher matrix as well. Adding
the SAT noise curves may sharply change the eigenmode shapes at large scales as well; however, these noise curves thus far target the large-scale features associated with $BB$ constraints (i.e., the tensor-to-scalar ratio $r$), which are not affected by the ionisation history (Ade et al., 2019).
Figure 20: Principal components predicted using the analytic method with the Simons Observatory noise curves. For this particular experiment, the setup has been configured for the Large Aperture Telescope (LAT) with $f_{\rm sky}=0.4$ and $\ell_{\rm max}=8000$.
Figure 21: Responses from the eigenmodes generated with the $N_{\ell}$ noise curves from the SO forecasts, compared against the simplest CVL alternative ($N_{\ell}=0$). The differences in the CMB power spectra, $\Delta\mathcal{D}_{\ell}$, are shown as a ratio for the noiseless case (_dashed_) and the SO noise case with $f_{\rm sky}=0.4$ and $\ell_{\rm max}=8000$ (_solid_). Grey bands indicate the peaks of the fiducial $\Lambda$CDM CMB spectra according to _Planck_ 2018.
## 7 Conclusion
In this work, we have performed the first PCA for fundamental constant
variations across recombination. Fundamental constant variations modelled in
Hart & Chluba (2018) can now be broken into a set of basis functions which
provoke responses in the CMB spectra through the changes to the ionisation
history and the opacity scaling (via the Thomson cross section) using
FEARec++. Generalising the methodology of PCA20, we have constrained the
principal components for a number of experimental setups using an analytical
method (i.e., CVL experiments and Simons Observatory) and a direct likelihood
method (i.e., _Planck_). The obtained principal components all cut off smoothly above $z\simeq 1500$, as the deviations become negligible in the ionisation history and the CMB spectra in the tails. The modes given here have
been constructed with the same rigour as PCA20 including stability analysis,
minimisation of non-orthogonalities and parameter marginalisation.
We have shown that the majority of our component analysis does not point to
deviations from $\Lambda$CDM; however, the marginalised _Planck_ 2018 VFC
modes for both constants hint at a $\simeq 1-1.5\sigma$ deviation in the
$\mu_{3}$ amplitudes. This mode is not strongly excited by time-independent
$\alpha_{\rm EM}$ and $m_{\rm e}$ variations (see Table 6), which have been
thoroughly studied in the past, hence suggesting that the story could be more
complicated.
Given the principal components for $\alpha_{\rm EM}$ and $m_{\rm e}$ have now
been constrained, these can be easily applied to complex models of these
variations using rudimentary linear algebra. For constant variations and our
phenomenological power law first discussed in Hart & Chluba (2018), the
results from our projections method are consistent with the direct MCMC runs
(Table 7). For example, constant $\alpha_{\rm EM}$ variations agree with the
MCMC result to the level of $\simeq 0.08\sigma$ whilst the power law is
consistent to $\simeq 0.25\sigma$ for both $\alpha_{\rm EM}$ and $m_{\rm e}$.
Similar studies could be used to constrain physically-motivated models such as
the runaway dilaton and BSBM model variations during recombination (see
Sandvik et al., 2002, for BSBM model).
An inconsistency appears for our $m_{\rm e}$ modes; however, this also
delivers one of the interesting contentions of this work. Specifically, the
constant $m_{\rm e}$ projection results differ radically from the MCMC
results, which strongly exploit a $H_{0}$ geometric degeneracy (see VFC20).
Since here the basis functions are created at $z>300$, the non-negligible variations in the reionisation visibility function arising from $m_{\rm e}$ variations are missed (see Fig. 19); we therefore suggest that a more detailed analysis with reionisation modes explicitly included could recreate the degeneracy. More importantly, since the interplay of reionisation physics with recombination is important to these components, we posit that certain aspects of this interplay are integral to full solutions of the Hubble tension
(see Sect. 5.4 for discussion). This could help rectify current issues with
other promising solutions for the Hubble tension such as those induced by
primordial magnetic fields (Jedamzik & Pogosian, 2020), as we speculate here.
We also forecast how well VFC may be constrained with the Simons Observatory
(see Sect. 6). Here, we see a distinct improvement in the detectability over
the idealised _Planck_ eigenmodes, with some of the structural features from a
CVL experiment. Differences arise due to the carefully computed noise curves
lifted from the SO forecast data release (Ade et al., 2019). While more detailed forecasts including instrumental and foreground effects will be needed, our estimates highlight the immense potential of future ground-based observations in this respect.
We close by remarking that variations in $\alpha_{\rm EM}$ and $m_{\rm e}$ are
often motivated by scalar fields (e.g., the BSBM model) which could
conceivably stem from the same variations that give rise to _early dark
energy_ effects (e.g., Poulin et al., 2019). Developing more realistic models of $\alpha_{\rm EM}$ and $m_{\rm e}$ could have an impact on future constraints for early-dark-energy mechanisms such as the ultra-light axion model. The extended variations of these constants have not been forecasted here; however, there is ample room to continue this in the future with the Simons Observatory. Since the forecasted eigenmode errors for SO are
significantly improved when compared to _Planck_ , this could put important
constraints on these models and add more information to the current picture
surrounding these more complex formulations for changes to $\alpha_{\rm EM}$
and $m_{\rm e}$.
## Acknowledgements
This work was supported by the ERC Consolidator Grant CMBSPEC (No. 725456) as
part of the European Union’s Horizon 2020 research and innovation program. JC
was also supported as a Royal Society URF at the University of Manchester.
## Data Availability
The current FEARec++ software package will be available at
https://github.com/cosmologyluke/FEARec for solving PCAs during recombination.
The _Planck_ 2018 likelihood files are available at
http://pla.esac.esa.int/pla/. Forecasted data for the different Simons
Observatory specifications are given at
https://github.com/simonsobs/so_noise_models.
## Appendix A Stability analysis of the _Planck_ 2018 likelihood
In this section, we corroborate the analysis from PCA20 by following the same
_direct likelihood_ methodology mentioned in Sect. 2. Numerical derivatives require an appropriate step size and a stable minimum position for the _Planck_ 2018 likelihood with respect to all the standard cosmological parameters $\left(\left\{\omega_{\rm b},\omega_{\rm c},\theta_{\rm MC},\tau,n_{\rm s},A_{\rm s}\right\}\right)$. In our analysis we have also included derivatives about the minima for nuisance parameters from the _Planck_ 2018 likelihood.
Figure 22: Log-likelihood functions for the 6 typical standard cosmological
parameters: $[\omega_{\rm b},\omega_{\rm c},\theta_{\rm MC},\tau,n_{\rm
s},A_{\rm s}]$. Here the steps, $\Delta s/\sigma$ are in units of their
respective _Planck_ 2018 marginalised errors and the likelihood used is
_Planck_ 2018 TTTEEE + low-$\ell$ + low-E data. This is the recommended
likelihood for the standard _Planck_ 2018 analysis (Planck Collaboration et
al., 2018a). Blue dashed lines refer to the zero minima of the likelihood
residuals, $\Delta\ln\mathcal{L}$, whereas the red lines show the position of
$\Delta\ln\mathcal{L}=1$ for each parameter respectively.
### A.1 Optimisation of 1D likelihood curves
The curves were optimised using the likelihood minimisation routines found in
CosmoMC. The minimisation mode was run for several starting points to check
that the location of the maximum likelihood value (from hereon, MLV) was
attained correctly with respect to all the parameters. The chosen likelihood
configuration was the _Planck_ 2018 TTTEEE + low-$\ell$ + low-E dataset, given
that this was the baseline for the 2018 papers (Planck Collaboration et al.,
2018a, 2019). The likelihood function was then perturbed from the MLV, $L_{\rm
m}\,\equiv\,L(\vec{p}_{\rm m})$ and the residual was calculated: $\Delta
L_{i}=L\left(\vec{p}_{i}\right)-L_{\rm m}$. For the purposes of this paper, we
use the log-likelihood $L\equiv\ln\mathcal{L}$, where $\mathcal{L}$ is the
actual likelihood function and $\vec{p}_{\rm m}$ refers to the fiducial set of
parameters that define the location of the MLV. The variations in the
likelihood parametrised by a fractional standard deviation (according to
_Planck_ 2018 fiducial results), $\Delta s/\sigma$, are shown in Fig. 22. The
blue dashed lines show that the likelihood minima have been offset to $\Delta
s=0$, whilst the red dashed lines show the $\Delta\ln\mathcal{L}=1$ limits
above the minima.
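For a Gaussian likelihood, the curvature of these residuals directly yields the 1D Fisher element. The toy below uses an exact Gaussian in place of the real _Planck_ 2018 curves, with the $\tau$ error from Table 8 as the assumed width, and shows how $\sigma$ can be read off a parabolic fit to the residuals.

```python
import numpy as np

# Toy version of the Fig. 22 residuals: here the residual is taken as the
# minus-log-likelihood offset, Delta s^2 / (2 sigma^2) for a Gaussian, so
# it is zero at the minimum and grows quadratically, as plotted.
def curvature_error(ds, dlnL):
    """1-sigma error from the leading coefficient of a parabolic fit."""
    a = np.polyfit(ds, dlnL, 2)[0]           # dlnL ~ a * ds^2 near the minimum
    return 1.0 / np.sqrt(2.0 * a)

sigma_true = 0.0074                           # e.g. Planck 2018 tau error (Table 8)
ds = np.linspace(-2.0, 2.0, 21) * sigma_true  # perturbations about the MLV
dlnL = ds ** 2 / (2.0 * sigma_true ** 2)      # exact Gaussian residual
sigma_est = curvature_error(ds, dlnL)
```

In this convention, $\Delta\ln\mathcal{L}=1$ is reached at $\Delta s=\sqrt{2}\,\sigma$; on real likelihood evaluations the fit additionally averages over the small-scale noise visible around the minima.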
The noisy structure of the likelihood around the minima for $\omega_{\rm b}$, $\omega_{\rm c}$ and $\theta_{\rm MC}$ has not disappeared relative to the previous analysis in PCA20, where the _Planck_ 2015 data were used (Planck Collaboration et al., 2016). However, $n_{\rm s}$, $\tau$ and $A_{\rm s}$ all look relatively smooth for very small changes in the parameters $\Delta s/\sigma$ (with respect to their standard deviations). Furthermore, the likelihood variations
in the Fisher matrix are insensitive to small changes in the nuisance
parameters. Both these details are consistent with the conclusions from our
previous paper, where even the functional form of the noisiness around $\Delta
s\sim 0$ has a similar structure. Given the clear parabolic shapes of the log-likelihood curves in Fig. 22 around $\Delta s=0$, the minimised likelihood here is an ideal configuration for the Fisher method outlined in Sect. 2.
### A.2 Stability of step sizes for cosmological parameters
Once the minimum value of the N-D log-likelihood distribution is
_approximately_ found, one can start to optimise the Fisher for the direct
likelihood method. In this case, we repeat the methodology from Appendix C of
the previous paper, by finding the parameter step size that allows for a
stable evaluation of the Fisher elements $F_{ij}$. Here we focused on the most
sensitive, non-component parameters which were again the standard six
cosmological parameters analysed by _Planck_. The results from evaluating the
diagonal Fisher elements are shown in Fig. 23. In this figure, the lines correspond to the diagonal Fisher element as a function of step size for a given parameter. These are weighted by the value of the given diagonal Fisher element at the x-value of the dotted lines; for example, for $A_{\rm s}$ the weighting point is $\Delta s_{0}=0.6\sigma$. The step sizes that represent an adequate level of numerical stability are indicated by dashed lines for each parameter in Fig. 23. For $\omega_{\rm b},\omega_{\rm c},\theta_{\rm MC}$ and $n_{\rm s}$, these lines are complemented by the curves in Fig. 22. The pivot values shown in Fig. 23 were chosen as they tend to a constant value, implying the derivatives are stable. On the LHS of Fig. 23, the Fisher elements
are affected by propagating noise within the Boltzmann code and the likelihood
function; whereas on the RHS these responses become non-linear and parabolic
(since the Taylor expansion of the likelihood no longer works in this regime).
For comparison, see Fig. C2 of PCA20. In a similar vein to the previous paper, we are using a _five-point stencil_ for the derivatives ('five-point' is misleading for the 2D finite-difference method: for two distinct parameters, the scheme in fact requires evaluating 16 points; the full calculation is given in PCA20), for the same reasons as before: the higher-order scheme allows derivatives at step sizes that do not incur noisy likelihood responses.
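In one dimension, the stencil referred to above takes the standard form; the trigonometric and cubic test functions below are purely illustrative, and the 16-point generalisation for the mixed second derivatives is spelled out in PCA20.

```python
import numpy as np

# Standard O(h^4) five-point stencil for a first derivative, the
# higher-order scheme referred to in the text. Mixed 2D derivatives for
# the off-diagonal Fisher elements combine two such stencils (16 points).
def five_point_derivative(f, x, h):
    """Central first derivative of f at x, accurate to O(h^4)."""
    return (-f(x + 2.0 * h) + 8.0 * f(x + h)
            - 8.0 * f(x - h) + f(x - 2.0 * h)) / (12.0 * h)
```

Because the truncation error scales as $h^{4}$, the step can be kept large enough to stay out of the noisy small-$\Delta s$ regime of Fig. 23 while still controlling the derivative bias.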
The increments used in the direct likelihood method for the modes shown in Sect. 4 are given in Table 8. For $\omega_{\rm b}$, $\omega_{\rm c}$ and $n_{\rm s}$, the step sizes given are roughly the same size as in PCA20; however, they are slightly modified according to the shifts in the best-fit values between the 2015 and 2018 _Planck_ likelihoods. The value of $\Delta s$ for $\tau$ is $\sim 70\%$ smaller for the 2018 likelihood; however, this is intrinsically linked to the improvements in the polarisation data and the much smaller value of $\tau$ in the _Planck_ 2018 parameters (Planck Collaboration et al., 2018b). The only anomaly is that the shift for $A_{\rm s}$ is much larger; however, as shown in Fig. 23, the amplitude $A_{\rm s}$ is much more forgiving for the Fisher method. The chosen step size is well outside the ranges of these curves where small-scale noise dominates the likelihood.
However, one can notice that the step sizes for $\tau$ and $A_{\rm s}$ are noticeably larger than their likelihood curves would imply. This is due to the strong degeneracy between these parameters in the analysis of the CMB power spectra. The primordial power spectrum amplitude $A_{\rm s}$ contributes an overall change in the normalisation of the CMB anisotropies, similar to the net damping effect caused by an increase in the reionisation optical depth $\tau$. As such, their stability is far weaker than their 1D curves would suggest. There are similar correlations across the standard 6 parameters, but the degeneracy between $\tau$ and $A_{\rm s}$ is by far the largest (see Planck Collaboration et al., 2018b, for more details).
Parameter ($p$) | $\bar{\mu}_{p}$ | $\sigma_{p}$ | $\Delta s_{p}/\sigma_{p}$
---|---|---|---
$\omega_{\rm b}$ | 0.02237 | 0.00015 | 6.0
$\omega_{\rm c}$ | 0.1201 | 0.0014 | $3.0$
$100\,\theta_{\rm MC}$ | 1.04085 | 0.00031 | $8.0$
$\tau$ | 0.0533 | 0.0074 | $1.0$
$n_{\rm s}$ | 0.9658 | 0.0044 | $2.5$
$\ln\left(10^{10}A_{\rm s}\right)$ | 3.043 | 0.016 | $0.6$
Table 8: The standard 6 cosmological parameters used in the direct method analysis with _Planck_ 2018 TTTEEE + low-$\ell$ + low-E data along with their best-fit values ($\bar{\mu}_{p}$), their standard deviations ($\sigma_{p}$) and the choice of $\Delta s/\sigma$ for the calculation of stable derivatives (shown in Fig. 23). The best-fit values come from iterated minimisation and the standard deviations come from MCMC, both obtained using the CosmoMC software package.

Figure 23: The diagonal Fisher matrix elements, $F_{ii}$, for the standard cosmological parameters, referenced in Appendix A and PCA20. These have been weighted by $F_{ii}\left(\Delta s_{0}\right)$, where $\Delta s_{0}$ is the location of the dashed lines, which have been chosen as the appropriate step sizes in the direct likelihood method.
## Appendix B Revisiting free electron fraction PCA with 2018 data
In this appendix, we show the results for the $X_{\rm e}$ eigenmodes, similar
to the method in PCA20, as a consistency check for the PCA methodology. The
ionisation history eigenmodes generated with the most recent _Planck_ data are
shown in Fig. 24. With the exception of the small changes in peak height at
$z\lesssim 1200$ for all modes, the shapes across all 3 eigenmodes are
congruous with the 2015 eigenmodes. Given that these were produced via the
same optimisation and stability protocols as defined in Appendix A, we are
confident that the PCA methodology is robust for the production of fundamental
constant eigenmodes using the likelihood as we have in our previous work. The
stability is also shown in the marginalised results in Table 9. Here we have
applied the marginalised 2015 and 2018 modes to _Planck_ likelihood data (2015
and 2018 respectively) with CMB lensing and BAO data included also. This gave
us another check against the previously held _Planck_ 2018 constraints for the
recombination eigenmodes. Compared to the _Planck_ 2018 paper, the errors are in good agreement, with a $\sim 70\%$ decrease in the error on $\mu_{3}$ and a distinct lack of residual degeneracies across the cosmological parameters. As shown in
PCA20, the data combination has little impact on the final MCMC values from the $X_{\rm e}$ eigenmodes, save a few small fluctuations due to the shifts in the best-fit values when lensing and galaxy clustering data are included in the analysis (see Alam et al., 2015, for BAO results). These conclusions are shown clearly
in the posterior contours in Fig. 25. The 2018 (_solid_) contour is slightly
compressed in the $\mu_{3}$ dimension compared to the previous work
(_dashed_). There is also a drift in the likelihood contours due to the
baseline changes in the _Planck_ 2018 parameters.
Figure 24: Eigenmodes from the $X_{\rm e}$ PCA with the _Planck_ 2018 likelihood. The reference lines on the plot (_dashed_) are the 2015 converged eigenmodes found in PCA20. The _Planck_ 2018 $X_{\rm e}$ results use the same stability analysis explained in Appendix A. The dotted line represents the most probable last scattering redshift, $z_{*}$, under $\Lambda$CDM.

Parameter | _Planck_ 2015 + lensing + BAO | _Planck_ 2018 + lensing + BAO
---|---|---
$\omega_{b}$ | $0.02241\pm 0.00019$ | $0.02246\pm 0.00019$
$\omega_{c}$ | $0.1183\pm 0.0011$ | $0.1189^{+0.0011}_{-0.00096}$
$100\theta_{MC}$ | $1.04079\pm 0.00038$ | $1.04104\pm 0.00036$
$\tau$ | $0.070\pm 0.013$ | $0.0572\pm 0.0075$
${\rm{ln}}(10^{10}A_{s})$ | $3.071\pm 0.024$ | $3.049\pm 0.015$
$n_{s}$ | $0.9686\pm 0.0055$ | $0.9681\pm 0.0053$
$\mu_{1}\;\left(X_{\rm e}\right)$ | $-0.06\pm 0.11$ | $-0.01\pm 0.11$
$\mu_{2}\;\left(X_{\rm e}\right)$ | $-0.16\pm 0.19$ | $0.06\pm 0.18$
$\mu_{3}\;\left(X_{\rm e}\right)$ | $-0.19\pm 0.35$ | $0.02^{+0.19}_{-0.24}$
$H_{0}$ | $67.93\pm 0.49$ | $67.85^{+0.46}_{-0.51}$
$\sigma_{8}$ | $0.8174\pm 0.0091$ | $0.8100\pm 0.0065$
Table 9: Marginalised $68\%$ limit results for the $X_{\rm e}$ eigenmodes generated with _Planck_ 2018 compared against the results generated with the _Planck_ 2015 likelihood. The MCMC has been carried out with _Planck_ + lensing + BAO to easily compare against the previous _Planck_ papers (Planck Collaboration et al., 2018b).

Figure 25: Posterior contours for the free electron fraction $X_{\rm e}$ for the largest correlations from PCA20: $\mu_{1}$ vs. $\omega_{\rm b}$ (_top_) and $\mu_{3}$ vs. $\theta_{\rm MC}$ (_bottom_). The two contours shown are the _Planck_ 2015 modes (_orange, dashed_) and the _Planck_ 2018 modes (_purple, solid_), with the fiducial $\Lambda$CDM bands included as well. Here we have included CMB lensing as well as BAO data from each respective data release.
## References
* Abazajian et al. (2016) Abazajian K. N. et al., 2016, ArXiv:1610.0274
* Abazajian et al. (2015) Abazajian K. N. et al., 2015, Astroparticle Physics, 63, 66
* Addison (2021) Addison G. E., 2021, ApJL, 912, L1
* Ade et al. (2019) Ade P. et al., 2019, JCAP, 2019, 056
|
arxiv-papers
| 2021-07-26T20:34:59 |
2024-09-04T03:07:20.017117
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"authors": "Luke Hart and Jens Chluba",
"submitter": "Luke Hart Mr.",
"url": "https://arxiv.org/abs/2107.12465"
}
|
2107.12468
|
# Fracture and Fatigue of Thin Crystalline SrTiO3 Membranes
Varun Harbola [email protected] Department of Physics, Stanford University,
Stanford, California 94305, USA Stanford Institute for Materials and Energy
Sciences, SLAC National Accelerator Laboratory, Menlo Park, California 94025,
USA Ruijuan Xu Stanford Institute for Materials and Energy Sciences, SLAC
National Accelerator Laboratory, Menlo Park, California 94025, USA Department
of Applied Physics, Stanford University, Stanford, California 94305, USA
Samuel Crossley Stanford Institute for Materials and Energy Sciences, SLAC
National Accelerator Laboratory, Menlo Park, California 94025, USA Department
of Applied Physics, Stanford University, Stanford, California 94305, USA
Prastuti Singh Stanford Institute for Materials and Energy Sciences, SLAC
National Accelerator Laboratory, Menlo Park, California 94025, USA Department
of Applied Physics, Stanford University, Stanford, California 94305, USA
Harold Y. Hwang Stanford Institute for Materials and Energy Sciences, SLAC
National Accelerator Laboratory, Menlo Park, California 94025, USA Department
of Applied Physics, Stanford University, Stanford, California 94305, USA
###### Abstract
The increasing availability of a variety of two-dimensional materials has
generated enormous growth in the field of nanoengineering and nanomechanics.
Recent developments in thin film synthesis have enabled the fabrication of
freestanding functional oxide membranes that can be readily incorporated in
nanomechanical devices. While many oxides are extremely brittle in bulk,
recent studies have shown that, in thin membrane form, they can be much more
robust to fracture as compared to their bulk counterparts. Here, we
investigate the ultimate tensile strength of SrTiO3 membranes by probing
freestanding SrTiO3 drumheads using an atomic force microscope. We demonstrate
that SrTiO3 membranes can withstand an elastic deformation with an average
strain of ~6% in the sub-20 nm thickness regime, which is more than an order
of magnitude beyond the bulk limit. We also show that these membranes are
highly resilient upon a high cycle fatigue test, surviving up to a billion
cycles of force modulation at 85% of their fracture strain, demonstrating
their high potential for use in nanomechanical applications.
Since the discovery of graphene, two-dimensional (2D) materials have attracted
a great deal of attention for not only their electronic properties, but also
their mechanical characteristics. These materials exhibit both a large elastic
modulus and high tensile strengthBertolazzi et al. (2011); Lee et al. (2008)
making them attractive for nano-electromechanical applications. Traditionally,
oxide materials can also be grown to be extremely thin as films and
heterostructures, and have an exciting array of physical properties, from
dielectrics to magnetism to superconductivity. However, thin films grown on a
rigid substrate are not ideal for nanomechanical applications due to the
clamping effect from the substrate. Recent technique developments allow for
these thin films to be grown epitaxially and then lifted off and transferred,
such that they form freestanding structuresGan et al. (1998); Paskiewicz et
al. (2016); Bakaul et al. (2016); Lu et al. (2016); Ji et al. (2019); Kum et
al. (2020); Pesquera et al. (2020). This has enabled studies characterizing
the mechanical properties and the effects of different deformations in
freestanding oxide membrane structuresHarbola et al. (2021); Davidovikj et al.
(2020). Furthermore, strain mapping using transmission electron microscopy
(TEM) has shown these membranes can withstand up to 10% strain locallyDong et
al. (2019); Peng et al. (2020), undergoing extreme deformations without
breakingElangovan et al. (2020). The proposed mechanisms for high strain
sustenance by these membranes were low numbers of flaws in smaller samples,
continuous dipole rotation in ferroelectrics during deformation that avoids
sharp domain-switching-driven failureDong et al. (2019), and proximity to a
strain-induced phase transitionPeng et al. (2020).
In this work, we consider the fracture properties of a canonical perovskite
membrane, SrTiO3. SrTiO3 is a high-K dielectric insulator at room temperature
with a relative permittivity of 300 in bulk. Moreover, it is a transparent
oxide with a 3.2 eV bandgap and at low temperatures exhibits dilute
superconductivity upon dopingSchooley et al. (1964). SrTiO3 at low temperature
shows a multitude of structural transitionsScott and Ledbetter (1997) and even
though its permittivity greatly increases at low temperatures, SrTiO3 never
achieves a ferroelectric state due to quantum fluctuationsMüller and Burkard
(1979). However, upon straining, both on substrateHaeni et al. (2004);
Biegalski et al. (2009) and in freestanding formXu et al. (2020),
ferroelectricity has been demonstrated in SrTiO3. Nano resonators made from
SrTiO3 membranes have also been shown to have high Q values and low mechanical
losses at low temperaturesDavidovikj et al. (2020). SrTiO3 cantilevers formed
using an under-etch method have also shown promise, demonstrating their
capability to be electromechanically actuated at the microscaleBiasotti et al.
(2013). All these properties make SrTiO3 a promising material for
nanomechanical applications, making it important to study the robustness of
this material under strain both in terms of fracture and fatigue.
Figure 1: (a) A schematic representation of the experiment where the SrTiO3
membrane is grown on an epitaxial strontium aluminate buffer layer and
transferred to a porous silicon nitride grid. An AFM probe of radius
$r_{\textrm{tip}}$ forces the membrane of thickness $t$ to its fracture with a
force $F$. A finite element strain map around the tip-membrane contact region
shows that the maximal strains are concentrated under the tip. (b) A force-
displacement ($F$-$d$) curve showing the force response and identification of
the force at which the membrane fractures. Panels (c) and (d) show AFM
topography images of a SrTiO3 drumhead before and after fracture,
respectively.
To quantitatively establish the fracture of a material, the essential
requirement is knowing the stress at which the material fails. To measure the
failure point of bulk materials, generally, a sample of a known cross-section
is stretched to its breaking point, and the force at which it breaks divided
by the cross-sectional area defines the fracture stress. Such a direct
measurement is not always feasible at the nanoscale, and a standard approach
is to use a freestanding geometry of the nanomaterial, which is then forced
with a calibrated nano-probe until its fractureBertolazzi et al. (2011); Lee
et al. (2008); Kaplan-Ashiri et al. (2006); Yu et al. (2000). In this work, we
study the fracture of SrTiO3 membranes via atomic force microscopy (AFM).
SrTiO3 membranes were grown using pulsed laser deposition (PLD) on a water-
soluble and epitaxial buffer layer of 8 nm thick Sr3Al2O6 on a SrTiO3
substrate. The membranes are spin coated with PMMA and then lifted off using
water and transferred onto a porous silicon nitride gridHarbola et al. (2021).
These membranes have been shown to retain high levels of crystallinity down to
thicknesses of 2 nmHong et al. (2017). This transfer forms freestanding
drumheads of SrTiO3 on the nitride membranes. These drumheads can then be
probed using an AFM tip until they rupture to study the fracture mechanics of
thin SrTiO3 membranes (Fig. 1(a)). Recently, a non-monotonic variation in
Young’s modulus with thickness was observed in SrTiO3 membranes, as a
consequence of strain gradient elasticity. Therefore, we study three different
characteristic thicknesses for fracture in the sub-15 nm regime, where a
softening of Young’s modulus was observed with increasing thicknessHarbola et
al. (2021).
To measure the force at which freestanding membranes break, the deflection of
the membrane is registered using a photodiode. The spring constant of the tip
cantilever is calibrated using the thermal methodCook et al. (2006). The force
at which the membrane fractures under tip loading can be obtained through a
force-$d$ curve (Fig. 1(b)), where $d$ is the distance traveled by the
z-direction piezo of the AFM. This can be used to quantify the average 2D
stress under the tip usingBhatia and Nachbar (1968)
$\sigma_{m}t=\left(\frac{FEt}{4\pi r_{\textrm{{tip}}}}\right)^{\frac{1}{2}}$
(1)
where $F$ is the maximum force sustained by the drumhead before breaking, $E$
is the Young’s modulus, $t$ is the thickness of the membrane,
$r_{\textrm{tip}}$ is the radius of curvature of the AFM probe tip, and
$\sigma_{m}$ is the average stress that is sustained by the membrane under the
tip (Fig. 1(a); see supplementary material for more discussion on strain
distribution). The tips are imaged using a scanning electron microscope (SEM)
to estimate the tip radius, found to be 14 $\pm$ 3 nm across different tips,
which is comparable to manufacturer specifications. First, we scan the
drumhead in tapping mode for topography and position the tip at the center of
the drumhead (Fig. 1(c)). Once the tip is in position, we use the $d$ position
of the piezo as the trigger to increase the force on the membrane until it
breaks. This measurement is quasi-static in nature and the tip is always in
contact with the membrane while forcing it, so the force applied by the tip is
the force felt by the membrane to the point of fracture. A topography image of
the broken drumhead clearly shows the rupture at the center of the drumhead.
During the forcing process, the repeatability of the force traces indicates
that there is no slippage or plastic deformation. The fracture is brittle,
showing no yielding of the material before the circular membrane ruptures.
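As a rough numerical sanity check of Eq. (1), the following sketch evaluates the average 2D stress and the corresponding linear-elastic strain. The force, modulus, and thickness values below are illustrative assumptions (only the ~14 nm tip radius is quoted in the text), not the paper's raw data:

```python
import math

def membrane_stress_2d(F, E, t, r_tip):
    """Average 2D stress (sigma_m * t) under the tip, Eq. (1):
    sigma_m * t = sqrt(F * E * t / (4 * pi * r_tip))."""
    return math.sqrt(F * E * t / (4.0 * math.pi * r_tip))

# Illustrative values (assumed for order-of-magnitude purposes only):
F = 2e-6       # breaking force, N
E = 300e9      # Young's modulus of thin SrTiO3, Pa (assumed)
t = 10e-9      # membrane thickness, m
r_tip = 14e-9  # AFM tip radius, m (SEM estimate quoted in the text)

sigma_t = membrane_stress_2d(F, E, t, r_tip)  # 2D stress, N/m
sigma_m = sigma_t / t                         # average 3D stress, Pa
strain = sigma_m / E                          # linear-elastic strain estimate
print(f"2D stress = {sigma_t:.1f} N/m, strain ~ {strain:.1%}")
```

With these assumed inputs the estimated strain lands in the few-percent range, consistent with the ~6% average strain reported in the abstract.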
Figure 2: (a) A histogram plot of fracture statistics with respect to the
maximum force sustained by the membrane before fracture for three different
thicknesses. (b) Same data as (a) but plotted as a cumulative probability of
fracture. The solid line is a 2-parameter Weibull fit to the experimental
data. (c) Plot of the statistical fracture strain and the Weibull parameter
$m$ as a function of thickness obtained through Weibull analysis of the data.
Error bars include errors from spring constant calibration of the AFM
cantilever and errors in the radius estimation of the tip. For the 13.6 nm
sample, a total of three different membranes were tested so that error
includes the standard error from 3 samples.
Fracture of a material is a statistical process which is governed by a variety
of factors such as the types of defects, their density, and the flaw size
distribution. Real materials will always have a distribution of stresses at
which various samples will fail, even when they are prepared identically. The
determining factor for sample fracture is the extremal size distribution of
flaws in the effective volume where the sample is experiencing stresses. A
two-parameter Weibull distribution appropriately describes the cumulative
probability of fracture for brittle materials as a function of stress
$(P(\sigma))$ Lee et al. (2008); Quinn and Quinn (2010). It is given as
$P(\sigma)=1-e^{-\left(\frac{\sigma}{\sigma_{0}}\right)^{m}}$ (2)
where $\sigma_{0}$ is the characteristic fracture stress and $m$ is the
Weibull shape parameter, which describes the sharpness of the distribution. A
low $m$ value is indicative of a wide distribution of failure stress, which
implies that a wide distribution of defects is responsible for failure. On the
other hand, a higher $m$ value indicates either an insensitivity to the
presence of defects, or a very narrow distribution of defects which are
responsible for material failureLee et al. (2008). The higher the $m$ value,
the more predictable the failure of a material becomes. Moreover, as Weibull
statistics are rooted in extremal flaw size distribution, Weibull analysis of
a sample failure also allows for scalability when the size of the sample is
changedQuinn and Quinn (2010). Note that the stress is normalized by
$\sigma_{0}$, such that Weibull distributions having the same $m$ values will
have a wider variance of breaking stress for higher $\sigma_{0}$.
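A two-parameter Weibull analysis of Eq. (2) can be sketched as follows. The synthetic failure data, the sample size, and the linearized least-squares fit are illustrative assumptions, not the paper's actual fitting procedure:

```python
import math
import random

def weibull_cdf(sigma, sigma0, m):
    """Two-parameter Weibull failure probability, Eq. (2)."""
    return 1.0 - math.exp(-((sigma / sigma0) ** m))

def fit_weibull(samples):
    """Estimate (sigma0, m) by linear regression on the Weibull plot:
    ln(-ln(1 - P_i)) = m * ln(sigma_i) - m * ln(sigma0)."""
    xs = sorted(samples)
    n = len(xs)
    # plotting positions P_i = (i + 0.5) / n
    pts = [(math.log(s), math.log(-math.log(1 - (i + 0.5) / n)))
           for i, s in enumerate(xs)]
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    slope = (sum((x - mx) * (y - my) for x, y in pts)
             / sum((x - mx) ** 2 for x, _ in pts))
    sigma0 = math.exp(mx - my / slope)
    return sigma0, slope

random.seed(0)
m_true, s0_true = 16.0, 20.0  # shape ~16, as found for SrTiO3; scale assumed
# inverse-CDF sampling: sigma = sigma0 * (-ln(1 - U))**(1/m)
data = [s0_true * (-math.log(1 - random.random())) ** (1 / m_true)
        for _ in range(200)]
s0_hat, m_hat = fit_weibull(data)
print(f"fitted sigma0 ~ {s0_hat:.1f}, m ~ {m_hat:.1f}")
```

The fit recovers the input shape and scale parameters, illustrating how a measured cumulative fracture distribution maps onto ($\sigma_0$, $m$).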
The statistical distribution of SrTiO3 drumhead fracture (Fig. 2(a)) shows a
clear peaked distribution as a function of force for all three different
thicknesses studied. These distributions can be changed into cumulative
fracture distributions and can be analyzed using the Weibull distribution
curve. Fig. 2(b) shows that the fracture of SrTiO3 drumheads is well described
by the two parameter Weibull fit. The analysis indicates that one type of flaw
is responsible for the failure of these drumheadsQuinn and Quinn (2010). Using
Eq. (1), the $m$ value obtained through the fit as a function of force can be
mapped to a corresponding $m$ value for stress by multiplying by a factor of
two, since the stress scales as the square root of the force. The stress can
also be converted to the maximum strain sustained by the
film using the Young’s modulus of SrTiO3 for stretching which has been
measured previouslyHarbola et al. (2021) for these thicknesses. We find that
the films can sustain an average strain of 4-6 % before fracture and the $m$
value is close to 16 for thin SrTiO3 (Fig. 2(c)). Using finite element
analysis calculations, we also observe that the maximum local strain sustained
by these SrTiO3 membranes is reasonably consistent with that observed via
TEMDong et al. (2019); Peng et al. (2020) for freestanding oxide membranes
(supplementary material).
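The factor-of-two mapping between the force and stress shape parameters follows from $\sigma\propto\sqrt{F}$ in Eq. (1), since $(F/F_0)^{m_F}=(\sigma/\sigma_0)^{2m_F}$. A minimal numerical check, with assumed scale values $F_0$ and $c$:

```python
import math

m_F, F0 = 8.0, 4e-6  # Weibull shape and scale in force terms (assumed)
c = 2.0e7            # sigma = c * sqrt(F), proportionality constant (assumed)
sigma0 = c * math.sqrt(F0)
m_sigma = 2 * m_F    # shape parameter doubles under sigma ~ sqrt(F)

# The failure probability is identical whether expressed in F or in sigma:
for F in (1e-6, 3e-6, 5e-6):
    P_force = 1 - math.exp(-((F / F0) ** m_F))
    sigma = c * math.sqrt(F)
    P_stress = 1 - math.exp(-((sigma / sigma0) ** m_sigma))
    assert abs(P_force - P_stress) < 1e-12
print("P(F) with shape m matches P(sigma) with shape 2m")
```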
Figure 3: (a) A scatter plot of different materials covering a range of both
nanomaterialsBertolazzi et al. (2011); Lee et al. (2008); Kaplan-Ashiri et al.
(2006); Yu et al. (2000); Roy et al. (2017); Zhang et al. (2016) and
bulkGumbsch et al. (2001); Shinno et al. (1988); Bunsell (1975); Rasmussen
(2003); dup (2010) representing the maximum tensile strain
($\varepsilon_{\textrm{max}}$) these materials can sustain before fracture,
plotted against their respective elastic moduli. The SrTiO3 membranes have
three different moduli for the three different thicknesses and are therefore
plotted separatelyHarbola et al. (2021).
Let us place these results within the context and understanding of other
materials. SrTiO3 in bulk form is extremely brittle and can only sustain a
fraction of a percent of strain at room temperature before breaking. However,
in the thin membrane form, it is able to sustain more than an order of
magnitude higher strain than in bulk. Fig. 3 shows the maximum tensile strain
before fracture for a variety of materials plotted against their elastic
modulus. Given the variation in the elastic modulus of SrTiO3 membranes as a
function of thicknessHarbola et al. (2021), each thickness of SrTiO3 membranes
measured in this study has been plotted separately in Fig. 3. The strain that
thin SrTiO3 membranes can withstand is similar to that displayed by carbon
nanotubesYu et al. (2000) and ZnO nanowiresRoy et al. (2017).
Material | Weibull Parameter $m$
---|---
WS2 nanotubesPugno and Ruoff (2007) | 2.9
Carbon nanofibers Pugno and Ruoff (2007) | 3.8
Carbon MWNT Pugno and Ruoff (2007) | 2.7
ZnO nanowires Roy et al. (2017) | 3.9
Graphene monolayer Lee et al. (2008) | 32
SrTiO3 membrane | 16
Polysilicon (MEMS) Jadaan et al. (2003) | 5-30
Single crystal Si Jadaan et al. (2003) | 3-60
Table 1: Comparison of fracture robustness among materials. This table lists a
variety of nanomaterials which have been tested for their robustness of
fracture with respect to the stress at which they fracture, quantified by
their Weibull shape parameter $m$.
Also notable is that these oxide membranes can bear up to about half the
strain of that in grapheneLee et al. (2008), which is the strongest material
known thus far and can sustain up to 13% strain before breaking. In terms of
the $m$ parameter (Table 1), SrTiO3 membranes compare extremely well with
other nanomaterials. The predictability of failure for SrTiO3 membranes is
much higher than a variety of nanofibersPugno and Ruoff (2007) and nanotubesYu
et al. (2000) and is comparable to single crystal and polycrystalline silicon
micro-electromechanical systems (MEMS)Jadaan et al. (2003). Furthermore, we
can use this $m$ value to predict failure of thin membranes in different
experiments by using volume scaling via:Quinn and Quinn (2010)
$\frac{\sigma_{m1}}{\sigma_{m2}}=\left(\frac{V_{2}}{V_{1}}\right)^{\frac{1}{m}}$
(3)
where $\sigma_{m1}$ and $\sigma_{m2}$ are the maximal stresses that can be
sustained in experiment 1 and experiment 2 and $V_{1}$ and $V_{2}$ are the
effective volumes over which the stresses are being imparted in those
experiments, respectively. We can estimate the effective volume for our
indentation experiment using finite element simulations (Fig. 1(a) and
supplementary material). This volume is approximately a 10 nm radius region
under the tip across the thickness of the membrane. Using this estimated
volume, we can predict the breaking strain for a separate experiment that was
performed for large area SrTiO3 strained membranes on Kapton, in which case
the whole volume of the membrane experienced the applied stress during the
experimentXu et al. (2020). Using Eq. (3), we can estimate the maximum strain
for the Kapton experiment. Most membranes failed at around 2% upon stretching
on Kapton, which is consistent with the average estimate of 1.8% from Weibull
scaling of our AFM measurement. The highest estimate of fracture strain from
the 14 nm thick membranes is 2.7%, which is close to the maximum strain of
2.5% observedXu et al. (2020).
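The Kapton extrapolation via Eq. (3) can be sketched numerically. The stressed membrane area below is an assumed value (the actual sample dimensions are not given here); the ~10 nm effective radius under the tip and $m=16$ come from the text:

```python
import math

def scaled_strain(eps1, V1, V2, m):
    """Weibull volume scaling, Eq. (3): sigma_m1 / sigma_m2 = (V2 / V1)**(1/m).
    For linear elasticity, the same ratio applies to the strains."""
    return eps1 * (V1 / V2) ** (1.0 / m)

m = 16.0                            # Weibull shape parameter from the fit
eps_afm = 0.06                      # ~6% average strain under the AFM tip
t = 14e-9                           # membrane thickness, m
V_afm = math.pi * (10e-9) ** 2 * t  # ~10 nm radius region under the tip
A_kapton = (0.3e-3) ** 2            # assumed stressed membrane area, m^2
V_kapton = A_kapton * t             # whole membrane volume under stress

eps_kapton = scaled_strain(eps_afm, V_afm, V_kapton, m)
print(f"predicted breaking strain on Kapton ~ {eps_kapton:.1%}")
```

With these assumed dimensions the scaling predicts a breaking strain near 2%, in line with the 1.8% average estimate quoted above; note the result is only weakly sensitive to the exact volumes because of the $1/m$ exponent.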
Figure 4: (a) Schematic representation of a high cycle fatigue test of a
freestanding SrTiO3 membrane. The $z$-direction piezo moves a distance $d$ to
force the membrane from its initial state to a high strain state and then the
modulation is provided using the shaker piezo of the tip usually used for
tapping mode scans. This tip modulation probes fatigue of the membrane. (b)
The amplitude and phase response of freestanding membranes (13.6 nm thick)
upon a high cycle fatigue test over a billion cycles. Faded color is the
response of the only membrane that fractured out of the fourteen tested. Inset
shows the experimental procedure of a high cycle fatigue test whereupon the
membrane is forced with a constant force $F_{\textrm{{dc}}}$, which
corresponded to 85% of the breaking stress on these membranes, and then cycled
with a modulation force $F_{\textrm{{mod}}}$ over a set number of cycles. (c)
The force response of a membrane before and after the fatigue test showing
almost identical response, demonstrating a lack of fatigue degradation after
one billion cycles.
Thus far we have demonstrated that SrTiO3 membranes show more than an order of
magnitude higher stress sustenance compared to the bulk, with a high
predictability of failure. We next studied their fatigue properties, which
have not yet been investigated for this class of complex oxide membranes.
Fatigue occurs due to bond reconfigurations and plastic deformations near a
defect upon cyclical force loading. Since ionic bonds are not easily
reconfigurable and they fail in a brittle manner, oxides are among the
materials least prone to fatigue. Only recently has attention started turning to
fatigue properties of nanomaterialsLi et al. (2014); Cui et al. (2020), and
with the growing prevalence of diverse nanomaterials for nano-
electromechanical purposes, such measurements become increasingly important to
predict the lifetime of nanocomponents. To study the fatigue properties of
SrTiO3 membranes, we used the AFM in dwell mode to conduct the experiment of
high cycle fatigue (Fig. 4(a)). We used the $d$ trigger on the AFM to force
the membrane to the required force, and then modulated the tip at that $d$
value with a frequency of 2 MHz for 500 seconds, to force the membrane through
a billion cycles (Fig. 4(b)) with a modulation force of 10 nN. Out of the 14
drumheads tested for fatigue at over 85% of the characteristic fracture
strain, only one showed fatigue failure at close to 200 million cycles.
Moreover, upon testing the repeatability of elastic response after cyclic
loading, no sign of fatigue yield was found (Fig. 4(c)). This result indicates
that SrTiO3 membranes are comparable to ZnOLi et al. (2014) and grapheneCui et
al. (2020) as far as their fatigue behavior is concerned, over a large number
of cycles under high static stress.
To conclude, we performed a detailed statistical analysis of fracture of
freestanding SrTiO3 membranes and demonstrated that their nanomechanical
fracture is robust and well explained through Weibull statistics. SrTiO3
membranes show more than one order of magnitude enhancement in their strain
sustenance as compared to the bulk upon local loading. Furthermore, through
two-parameter Weibull analysis we showed that the predictability of failure
for SrTiO3 is significantly higher than for many other nanomaterials, and on
par with silicon and graphene. We have also shown excellent fatigue resilience
of these membranes under high stress and over a billion cycles. These findings
add to the growing body of evidence that thin freestanding oxide membranes are
an extremely viable class of materials for nano-electromechanical
applications.
###### Acknowledgements.
We thank Wendy Gu for discussions. This work was supported by the U.S.
Department of Energy, Office of Basic Energy Sciences, Division of Materials
Sciences and Engineering, under contract no. DE-AC02-76SF00515 (synthesis and
membrane devices), and the Air Force Office of Scientific Research (AFOSR)
Hybrid Materials MURI under award no. FA9550-18-1-0480 (elasticity
measurements and analysis).
## Supplementary Materials
See supplementary materials for a more detailed discussion of strain
distribution across the membrane upon loading with a spherical tip and for
notes on Weibull statistics.
## Data availability statement
The data that support the findings of this study are available from the
corresponding author upon reasonable request.
## References
* Bertolazzi et al. (2011) S. Bertolazzi, J. Brivio, and A. Kis, ACS Nano 5, 9703 (2011).
* Lee et al. (2008) C. Lee, X. Wei, J. W. Kysar, and J. Hone, Science 321, 385 (2008).
* Gan et al. (1998) Q. Gan, R. A. Rao, C. B. Eom, J. L. Garrett, and M. Lee, Applied Physics Letters 72, 978 (1998).
* Paskiewicz et al. (2016) D. M. Paskiewicz, R. Sichel-Tissot, E. Karapetrova, L. Stan, and D. D. Fong, Nano Lett. 16, 534 (2016).
* Bakaul et al. (2016) S. R. Bakaul, C. R. Serrao, M. Lee, C. W. Yeung, A. Sarker, S. L. Hsu, A. K. Yadav, L. Dedon, L. You, A. I. Khan, et al., Nat. Commun. 7, 10547 (2016).
* Lu et al. (2016) D. Lu, D. J. Baek, S. S. Hong, L. F. Kourkoutis, Y. Hikita, and H. Y. Hwang, Nat. Mater. 15, 1255 (2016).
* Ji et al. (2019) D. Ji, S. Cai, T. R. Paudel, H. Sun, C. Zhang, L. Han, Y. Wei, Y. Zang, M. Gu, Y. Zhang, et al., Nature 570, 87 (2019).
* Kum et al. (2020) H. S. Kum, H. Lee, S. Kim, S. Lindemann, W. Kong, K. Qiao, P. Chen, J. Irwin, J. H. Lee, S. Xie, et al., Nature 578, 75 (2020).
* Pesquera et al. (2020) D. Pesquera, E. Parsonnet, A. Qualls, R. Xu, A. J. Gubser, J. Kim, Y. Jiang, G. Velarde, Y.-L. Huang, H. Y. Hwang, et al., Adv. Mater. 32, 2003780 (2020).
* Harbola et al. (2021) V. Harbola, S. Crossley, S. S. Hong, D. Lu, Y. A. Birkhölzer, Y. Hikita, and H. Y. Hwang, Nano Lett. 21, 2470 (2021).
* Davidovikj et al. (2020) D. Davidovikj, D. J. Groenendijk, A. M. R. V. L. Monteiro, A. Dijkhoff, D. Afanasiev, M. Šiškins, M. Lee, Y. Huang, E. van Heumen, H. S. J. van der Zant, et al., Commun. Phys. 3, 163 (2020).
arXiv:2107.12469
# SaRNet: A Dataset for Deep Learning Assisted Search and Rescue with
Satellite Imagery
Michael Thoreau
Department of Electrical and Computer Engineering, New York University
[email protected]
Frazer Wilson
No Affiliation
[email protected]
###### Abstract
Access to high-resolution satellite imagery has increased dramatically in
recent years as several new constellations have entered service. Higher revisit
frequencies as well as improved resolution have widened the use cases of
satellite imagery to areas such as humanitarian relief and even Search and
Rescue (SaR). We propose a novel remote-sensing object detection dataset for
deep learning assisted SaR. This dataset contains only small objects that have
been identified as potential targets as part of a live SaR response. We
evaluate the application of popular object detection models to this dataset as
a baseline to inform further research. We also propose a novel object
detection metric, specifically designed to be used in a deep learning assisted
SaR setting.
###### Index Terms:
Satellite, Remote, Search, Object Detection
## I Introduction
In-domain datasets are currently indispensable for applying machine learning
models to real-world problems. One such problem is the search for missing
persons in remote locations, where access can be difficult and timing is
critical. So far, datasets for visual search and rescue (SaR) have mostly
contained images taken by UAVs or light aircraft. Owing to the greater
diversity of viewpoints and the relatively large target sizes, such data cannot
simply be transferred to the satellite-imagery setting, for which, to our
knowledge, no SaR datasets currently exist. Modern high-resolution satellite
constellations, which can be tasked with imaging almost anywhere on the planet
within hours, might soon enable a powerful complement to aerial searches,
particularly in combination with recent advances in deep learning.
We propose a novel object detection dataset, collected in a live search
setting, that we use to demonstrate the concept of deep learning assisted SaR.
The dataset was created during the search for a missing paraglider pilot, lost
in a remote and mountainous area of the western United States. Over 500
volunteers labelled potential targets in high-resolution images using axis-
aligned bounding boxes. The true target, as seen inset in figure 1, was found
after a three week search and the labels generated for potential targets were
saved. These images and annotations were post-processed to form this dataset
of 2552 images.
Figure 1: Inset - The paraglider wing as it was found. Main - The wing as
detected by our prototype system.
Search and rescue via satellite imagery is a challenging application for off-
the-shelf deep learning methods. Metrics typically used to evaluate a model's
performance on datasets such as MS-COCO [6] are very informative when the
ground truth is fairly irrefutable and labelling is consistent, but they have
shortcomings when labels are noisy. Systems that use human verification as part
of target acquisition can also generally tolerate a lower precision. We propose
a new metric, better suited to deep-learning-assisted SaR, that provides an
intuitive way to choose a detection threshold for a given set of test images.
We evaluate a number of popular object detection models using
this new metric on our dataset. Our contributions are as follows:
* We present a novel dataset for satellite-imagery-based SaR;
* we propose a novel and especially informative metric for object detection in a SaR setting; and
* we perform a comparative study of popular object detection models trained and tested on this dataset.
Figure 2: Representative training samples, selected at random.
## II Dataset Details
This dataset contains 2552 images with a total of 4206 axis-aligned bounding
boxes of a single ‘target’ class. Volunteers were instructed to label anything
they thought could be the missing paraglider wing and were provided with
examples of similar objects visible in the source data. The wing is shown
inset in figure 1 as it was found after a three-week search. A total of
approximately 5000 annotations were originally generated; however, bounding
boxes with a height or width greater than 20 m were discarded, along with
the corresponding images. A mosaic of 20 labelled targets, selected at random,
can be seen in figure 2. Each 1000×1000-pixel image in the dataset is a JPEG
rendered at a high quality factor, corresponding to a 500 m × 500 m tile of
satellite data with a pixel pitch of 0.5 m. The images have had all geographic
and vendor information removed. The targets can be as small as 3-4 pixels
across in some cases, which is a particular challenge for machine learning
models, as we discuss in a later section. The bounding-box labels in this
dataset are generally slightly oversized relative to the target outlines. The
distribution of the length of the longest side of the boxes in the training
set can be seen in figure 3.
Figure 3: Distribution of the longest side of bounding boxes in the training
set.
In contrast to most object detection datasets, including those in the remote
sensing domain, the targets in this dataset cannot be considered a strong
ground truth; however, we will argue that they can still be used to train a
useful object detector. Which objects labellers considered potential targets
may have varied over time and between volunteers. This label noise can be seen
qualitatively in figure 2, where there is some variance in how obvious the
targets are. For a dataset with a weaker ground truth, metrics based directly
on false positives and false negatives are less informative, as we discuss in
section V.
Examples have been split into 70%, 20%, and 10% divisions for training,
validation, and testing respectively, with annotations following the MS-COCO
[6] convention. This dataset will be available online with example code:
https://github.com/michaelthoreau/SearchAndRescueNet.
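The box-size filter described above (discarding boxes with a side over 20 m, i.e. 40 px at the 0.5 m pixel pitch, together with their images) can be sketched on COCO-style annotation dicts. The function name and dict handling are ours, not part of the released code:

```python
# Sketch of the preprocessing filter: boxes taller or wider than 20 m
# (40 px at 0.5 m/px) are discarded, along with the corresponding images.
# Annotation fields follow the MS-COCO convention: bbox = [x, y, w, h] in px.
PIXEL_PITCH_M = 0.5
MAX_SIDE_PX = 20.0 / PIXEL_PITCH_M  # 40 px

def drop_oversized(coco):
    """Return a COCO-style dict without oversized boxes and their images."""
    bad_images = {
        ann["image_id"]
        for ann in coco["annotations"]
        if ann["bbox"][2] > MAX_SIDE_PX or ann["bbox"][3] > MAX_SIDE_PX
    }
    return {
        "images": [im for im in coco["images"] if im["id"] not in bad_images],
        "annotations": [a for a in coco["annotations"]
                        if a["image_id"] not in bad_images],
        "categories": coco.get("categories", []),
    }
```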
## III Deep Learning Assisted SaR
It is important to consider how the outputs of any model might be used in a
wider system. Let us consider the case of the search [1] that created this
dataset, where targets were identified in a two step process. First,
volunteers labeled potential targets in image tiles without contextual
information. These labels were then verified by other volunteers with
additional information such as historical satellite imagery for comparison. In
this environment, a high number of false positives from the first step could
be tolerated, as they would be filtered in the second step, with only a small
number of potential targets being forwarded on to ground/air search teams.
Anecdotally, we found that verification of the potential targets (second
stage) was less tiring for volunteers than actively searching for targets in
image tiles (first stage).
We propose that an object detection model could be used to replace or assist
humans in the first stage of this SaR pipeline. In this proposed system,
humans would initially label data in a new target domain, and a model would be
trained to provide detections as search areas expanded. These detections could
be used either as proposals in the first stage, highlighting potential targets
and reducing strain on volunteers, or directly as potential targets for
verification in the second stage. In both cases, search areas could be covered
faster while keeping a human in the loop and reducing strain on resources.
## IV Prior Work
A primary challenge for designing, evaluating, and deploying models to assist
in SaR missions is the lack of applicable datasets. One popular dataset [12]
for remote sensing object detection has bounding box annotations for a variety
of object categories, with the smallest being ‘small vehicle’. However this
dataset contains aerial imagery with a finer pixel pitch than the proposed
imagery. Fewer datasets exist for satellite based remote sensing data, and
none currently available contain objects as small as those in the proposed
dataset.
Detection of objects in remote imagery is a fairly well-studied field, with
state-of-the-art methods [8] using region-based Convolutional Neural Networks
(CNNs) to detect objects such as aircraft and oil tanks with a high degree of
accuracy. These objects, however, are on the order of hundreds of pixels
across, making the features learnt on common pre-training tasks such as
ImageNet [3] very applicable and making transfer learning possible. One of the
closest approaches [10] to our problem achieves robust detection of small
objects against highly variable backgrounds; however, that domain has much more
variable object scale as well as significant viewpoint changes.
The use of deep learning to assist in SaR operations has been explored a
number of times [13, 2, 4], but, as far as we can tell, not with satellite
imagery. The authors of [13] discuss how the human eye has incredible power to
use context to discern true from false targets, but is slow to scan images and
quickly becomes fatigued. The same group [13] also describes circumstances in
which small targets can be detected faster than by the human eye. We propose
that an object detector can assist in SaR as the first step in a machine-human
process, with humans verifying potential targets.
## V Object Detection Metrics
A key metric used in object detection is mean Average Precision (mAP),
computed as the area under the precision-recall curve, averaged over all
classes. Informed by the precision-recall curve, practitioners applying object
detection models choose a threshold corresponding to the point on the curve
that they deem most reasonable. Because our dataset does not have a strong
ground truth, precision is not directly informative of the model's performance
on the task. In the appendix, figure 5 shows some ‘false positives’ that
degrade the apparent performance of the model under standard metrics but
appear to be reasonable detections. We also found that, when applying object
detection in an SaR setting, we were setting the detection threshold based on
the perceived density of detections. We therefore propose a metric that can be
directly related to the time cost of verifying candidate detections. We define
the detection density as the number of detections per square km. For each
image $x_{i}$ in a set $\mathcal{X}=\{x_{1},x_{2},...,x_{N}\}$, let
$\mathcal{Y}_{i}=\{y_{1},y_{2},...,y_{M_{i}}\}$ be the confidence values of
its detections. The detection density $D(\tau)$ is then:
$D(\tau)=\sum_{i=1}^{N}\frac{|\mathcal{Y}_{i}>\tau|}{\textit{Area(}x_{i}\textit{)}}$ (1)
Where $|\mathcal{Y}_{i}>\tau|$ denotes the number of detections in image
$x_{i}$ with a confidence greater than $\tau$.
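Read literally, Eq. (1) sums per-image counts of above-threshold detections, each normalised by that image's area. A minimal sketch (function and argument names are ours):

```python
# Eq. (1): per-image count of detections above threshold tau, divided by that
# image's area in km^2, summed over all images in the set.
def detection_density(confidences_per_image, areas_km2, tau):
    return sum(
        sum(1 for y in ys if y > tau) / area
        for ys, area in zip(confidences_per_image, areas_km2)
    )
```

For this dataset, each tile covers 500 m × 500 m, i.e. an area of 0.25 km² per image.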
Similar to mAP, we propose a metric that considers the average performance
over a range of thresholds. In this case we will consider the recall of the
detector at various detection densities between 0 and some reasonable number
per square km, which can be chosen based on the dataset. We found that 20
detections per square km was a reasonable maximum density in this case. Recall
was included in the metric as we found it remained fairly relevant in the
presence of label noise, likely due to the labels being noisy but generally
conservative. We call this metric the Average Recall-Density to 20, or AR-d20.
The metric can be calculated as:
$\text{AR-d20}=\frac{1}{20}\sum_{k=0}^{20}\textit{Recall(}\tau_{k}\textit{)}$
(2)
where $\tau_{k}$ is the lowest threshold that yields a density below $k$
detections per square km, and $\textit{Recall(}\tau_{k}\textit{)}$ is the
recall at that threshold. Crucially, this metric can inform the choice of a
particular model based on the human resources available for reviewing
detections. In practice, the recall and detection densities can be estimated
on a small amount of labelled test data in the target domain.
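The metric can be sketched over a pooled set of test detections, under the assumption that each true positive matches a distinct ground-truth target and that the IoU matching has been applied beforehand (names and this matching convention are ours):

```python
import numpy as np

# Sketch of AR-d20 (Eq. 2): recall at detection densities k = 0..20 per km^2,
# summed and divided by 20, following the formula as printed.
def ar_d20(confidences, is_tp, n_targets, total_area_km2, d_max=20):
    conf = np.asarray(confidences, dtype=float)
    tp = np.asarray(is_tp, dtype=bool)
    order = np.argsort(-conf)   # sort detections by descending confidence
    tp = tp[order]
    recalls = []
    for k in range(d_max + 1):
        # keeping the top-n detections approximates the lowest threshold whose
        # density stays below k detections per square km
        n_kept = int(k * total_area_km2)
        recalls.append(tp[:n_kept].sum() / n_targets)
    return sum(recalls) / d_max
```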
## VI Baseline Results
To establish a baseline, we fine-tune a number of popular object detection
models on the training data and evaluate each on the test set using our new
metric. The detection framework is Faster R-CNN [9], for which we consider
three different feature extractors available in the Detectron2 [11] ‘model
zoo’. As the targets are very small and the bounding-box labels are relatively
oversized, we relax the Intersection-over-Union (IoU) threshold for detections
to $0.1$ to avoid eliminating fair detections. We trained all three models for
5000 iterations with a learning rate of 0.0001, using the Adam optimiser and
minibatches of 4 images. All models were pre-trained on MS-COCO. We evaluate
the baseline solution on the AR-d20 metric and present the results in table I.
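The training setup can be sketched with Detectron2's config API. This is an illustrative sketch, not the authors' script: the dataset names are placeholders, and Detectron2's `DefaultTrainer` uses momentum SGD by default, so the Adam optimiser mentioned above would require overriding `build_optimizer`.

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

# One of the three baselines; the other two swap in the C4 / DC5 configs.
CONFIG = "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(CONFIG))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(CONFIG)  # MS-COCO pre-training
cfg.DATASETS.TRAIN = ("sarnet_train",)  # assumes the splits were registered
cfg.DATASETS.TEST = ("sarnet_val",)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1     # single 'target' class
cfg.SOLVER.BASE_LR = 0.0001
cfg.SOLVER.MAX_ITER = 5000
cfg.SOLVER.IMS_PER_BATCH = 4

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```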
Figure 4: Density of detections and recall of 3 different Faster R-CNN models
fine-tuned on this dataset.
The highest-scoring model, ‘R_50_FPN’, uses a 50-layer ResNet [5] feature
extractor. This model, which includes a Feature Pyramid Network [7], had an
average recall of 41.8% for detection densities between 0 and 20 detections
per square km. Figure 4 shows detection density and recall plotted for all
three models over a range of confidence thresholds. The FPN model has a
significantly higher recall than the other models but a similar detection
density across most thresholds, making it a compelling choice. Qualitative
detection results from applying this model to the validation set can be seen
in figure 6 in the appendix.
model | AR-d20
---|---
faster_rcnn_R_50_FPN_3x | 41.82
faster_rcnn_R_50_C4_3x | 28.88
faster_rcnn_R_50_DC5_3x | 35.46
TABLE I: Average Recall-density scores for three popular models.
## VII Conclusion
We presented a novel dataset that we hope will inform future research into
satellite-imagery-based SaR. We introduced a new metric that may assist in
applying object detectors to SaR problems, and presented a baseline model
trained on this dataset to demonstrate the deep-learning-assisted SaR concept.
We believe that satellite-based SaR is an emerging field that may someday be
used to save lives and bring closure to families of missing persons. We firmly
believe in the applicability of this technology to the search for missing
aircraft, watercraft, and a variety of other targets we cannot yet imagine.
## Acknowledgment
The authors would like to thank Planet Labs, Airbus Defence and Space, and
Maxar Technologies for providing satellite imagery. The authors would also
like to thank the 500+ volunteers who labelled the data, as well as all those
who contributed to the physical air and land search.
## Appendix A Supplementary figures
Figure 5: A selection of ‘False Positives’ on the test set. These are objects
that were detected by the model but missed by volunteers.
## References
* [1] L. Binding. Body of well-known paraglider found in us mountains - weeks after he went missing. Sky News, Sep 2020.
* [2] G. Castellano, C. Castiello, C. Mencar, and G. Vessio. Preliminary evaluation of tinyyolo on a new dataset for search-and-rescue with drones. In 2020 7th International Conference on Soft Computing Machine Intelligence (ISCMI), pages 163–166, 2020.
* [3] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. IEEE, 2009.
* [4] S. Gotovac, D. Zelenika, Z. Marusic, and D. Božić-Štulić. Visual-based person detection for search-and-rescue with uas: Humans vs. machine learning algorithm. Remote Sensing, 12, 10 2020.
* [5] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition, 2015.
* [6] T. Lin, M. Maire, S. J. Belongie, L. D. Bourdev, R. B. Girshick, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: common objects in context. CoRR, abs/1405.0312, 2014.
* [7] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection, 2017.
* [8] Y. Long, Y. Gong, Z. Xiao, and Q. Liu. Accurate object localization in remote sensing images based on convolutional neural networks. IEEE Transactions on Geoscience and Remote Sensing, 55(5):2486–2498, 2017.
* [9] S. Ren, K. He, R. B. Girshick, and J. Sun. Faster R-CNN: towards real-time object detection with region proposal networks. CoRR, abs/1506.01497, 2015.
* [10] M. Schembri and D. Seychell. Small object detection in highly variable backgrounds. In 2019 11th International Symposium on Image and Signal Processing and Analysis (ISPA), pages 32–37, 2019.
* [11] Y. Wu, A. Kirillov, F. Massa, W.-Y. Lo, and R. Girshick. Detectron2. https://github.com/facebookresearch/detectron2, 2019.
* [12] G. Xia, X. Bai, J. Ding, Z. Zhu, S. J. Belongie, J. Luo, M. Datcu, M. Pelillo, and L. Zhang. DOTA: A large-scale dataset for object detection in aerial images. CoRR, abs/1711.10398, 2017.
* [13] K. Yun, L. Nguyen, T. Nguyen, D. Kim, S. Eldin, A. Huyen, T. Lu, and E. Chow. Small target detection for search and rescue operations using distributed deep learning and synthetic data generation. CoRR, abs/1904.11619, 2019.
Figure 6: Example detections from the best performing model on the validation
set. Each tile in the mosaic is a fixed size and is centred on the detection.
Some examples are shown multiple times due to overlapping bounding boxes of
very different sizes.